Privacy

Amazon Explains Why Alexa Recorded And Emailed A Private Conversation (mercurynews.com) 146

Amazon has issued the following statement about why its Alexa device recorded a woman's private conversation and then emailed it to one of her contacts: Echo woke up due to a word in background conversation sounding like "Alexa." Then, the subsequent conversation was heard as a "send message" request. At which point, Alexa said out loud "To whom?" At which point, the background conversation was interpreted as a name in the customer's contact list. Alexa then asked out loud, "[contact name], right?" Alexa then interpreted background conversation as "right." As unlikely as this string of events is, we are evaluating options to make this case even less likely.
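Amazon's statement describes four independent speech misrecognitions compounding. As a rough illustration of how such a cascade can happen (a hypothetical sketch, not Alexa's actual pipeline; string similarity stands in for acoustic matching), the flow behaves like a dialogue state machine in which each fuzzy match against background speech advances the state:

```python
from difflib import SequenceMatcher

# Hypothetical sketch of the flow Amazon describes -- NOT Alexa's real pipeline.
# String similarity is a crude stand-in for acoustic wake-word/intent matching.

def sounds_like(heard, target, threshold=0.6):
    return SequenceMatcher(None, heard.lower(), target).ratio() >= threshold

def message_flow(utterances, contacts):
    """Each fuzzy match against background speech advances the dialogue state."""
    state, recipient = "idle", None
    for heard in utterances:
        if state == "idle" and sounds_like(heard, "alexa"):
            state = "awake"                  # 1. false wake word
        elif state == "awake" and sounds_like(heard, "send message"):
            state = "to_whom"                # 2. chatter misheard as a request
        elif state == "to_whom":
            # 3. whatever is heard next is matched against the contact list
            recipient = max(contacts, key=lambda c: SequenceMatcher(
                None, heard.lower(), c).ratio())
            state = "confirm"
        elif state == "confirm" and sounds_like(heard, "right"):
            return f"send recorded audio to {recipient}"   # 4. misheard "right"
    return "no message sent"

# Background chatter that never addresses the device on purpose:
print(message_flow(["alexis", "send a message", "jon", "right"], ["john", "mary"]))
```

Each match is individually plausible; the privacy failure is that four low-confidence matches compound with no high-confidence gate before audio leaves the house.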
Amazon's explanation apparently didn't satisfy the woman whose conversation was recorded, according to the Mercury News:
Now her family has unplugged all the devices, and although Amazon offered to "de-provision" the devices of their communications features so they could keep using them to control their home, Danielle and her family reportedly want a refund instead.

When reached Friday, an Amazon spokeswoman would not comment about whether the company will issue a refund.

Other smart home speakers carry similar privacy risks. Last year, for example, Google had to release a patch for its Home Mini speakers after some of them were found to be recording everything.

Robotics

AI-Enhanced Weed-Killing Robots Frighten Pesticide Industry (reuters.com) 159

Rick Schumann writes: A Swiss company called ecoRobotix is betting the agricultural industry will welcome its solar-powered, weed-killing autonomous robot, which aims to reduce herbicide use by up to a factor of 20 and perhaps even eliminate the need for herbicide-resistant GMO crops entirely.

The 'see-and-spray' robot moves from plant to plant, visually distinguishing the actual crops from weeds and squirting each weed selectively and precisely with weed killer, as opposed to the current technique of spraying entire fields with large quantities of herbicide like Monsanto's Roundup.
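ecoRobotix hasn't published its software, but the 'see-and-spray' idea reduces to a per-plant classify-then-act loop. A minimal sketch, with a made-up Detection type and a stand-in sprayer (the 0.9 threshold and 0.1 ml dose are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # "crop" or "weed", as labeled by a vision model
    confidence: float
    x: float            # plant position in field coordinates (metres)
    y: float

def weed_patrol(detections, spray, threshold=0.9):
    """Spray only high-confidence weeds; leave crops and uncertain plants alone."""
    for plant in detections:
        if plant.label == "weed" and plant.confidence >= threshold:
            spray(plant.x, plant.y, dose_ml=0.1)   # targeted micro-dose

# Toy run with a print-based sprayer:
field = [Detection("crop", 0.97, 1.0, 2.0),
         Detection("weed", 0.95, 1.2, 2.1),
         Detection("weed", 0.55, 1.4, 2.3)]        # too uncertain -- skipped
weed_patrol(field, lambda x, y, dose_ml: print(f"spray {dose_ml} ml at ({x}, {y})"))
```

The threshold is the interesting design choice: set low, it wastes herbicide on misclassified crops; set high, it lets weeds through. Either way, the claimed factor-of-20 reduction comes from dosing individual plants instead of whole fields.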

Weeds are already becoming resistant to such glyphosate-based herbicides after "more than 20 years of near-ubiquitous use," reports Reuters. (The head of one pesticide company's science division concedes that "That was probably a once-in-a-lifetime product.") But AI-based precision spraying "could mean established herbicides whose effect has worn off on some weeds could be used successfully in more potent, targeted doses."

Meanwhile, another Silicon Valley startup has built a machine using on-board cameras to distinguish weeds from crops -- and was recently acquired by the John Deere tractor company. Reuters calls these companies the "new breed of AI weeders that investors say could disrupt the $100 billion pesticides and seeds industry."

The original submission asks: Should we welcome our weed-killing robotic overlords?
Canada

How Canada Ended Up As An AI Superpower 58

pacopico writes: Neural nets and deep learning are all the rage these days, but their rise was anything but sudden. A handful of determined researchers scattered around the globe spent decades developing neural nets while most of their peers thought they were mad. An unusually large number of these academics -- including Geoff Hinton, Yoshua Bengio, Yann LeCun and Richard Sutton -- were working at universities in Canada. Bloomberg Businessweek has put together an oral history of how Canada brought them all together, why they kept chasing neural nets in the face of so much failure, and why their ideas suddenly started to take off. There's also a documentary featuring the researchers and Prime Minister Justin Trudeau that tells more of the story and looks at where AI technology is heading -- both the good and the bad. Overall, it's a solid primer for people wanting to know about AI and the weird story of where the technology came from, but might be kinda basic for hardcore AI folks.
AI

Eric Schmidt Says Elon Musk Is 'Exactly Wrong' About AI (techcrunch.com) 139

At the VivaTech conference in Paris, former Google CEO Eric Schmidt was asked about Elon Musk's warnings about AI. He responded by saying: "I think Elon is exactly wrong. He doesn't understand the benefits that this technology will provide to making every human being smarter. The fact of the matter is that AI and machine learning are so fundamentally good for humanity." TechCrunch reports: He acknowledged that there are risks around how the technology might be misused, but he said they're outweighed by the benefits: "The example I would offer is, would you not invent the telephone because of the possible misuse of the telephone by evil people? No, you would build the telephone and you would try to find a way to police the misuse of the telephone."

After wryly observing that Schmidt had just given the journalists in the audience their headlines, interviewer (and former Publicis CEO) Maurice Levy asked how AI and public policy can be developed so that some groups aren't "left behind." Schmidt replied that government should fund research and education around these technologies. "As [these new solutions] emerge, they will benefit all of us, and I mean the people who think they're in trouble, too," he said. He added that data shows "workers who work in jobs where the job gets more complicated get higher wages -- if they can be helped to do it." Schmidt also argued that contrary to concerns that automation and technology will eliminate jobs, "The embracement of AI is net positive for jobs." In fact, he said there will be "too many jobs" -- because as society ages, there won't be enough people working and paying taxes to fund crucial services. So AI is "the best way to make them more productive, to make them smarter, more scalable, quicker and so forth."

Privacy

Zimbabwe is Introducing a Mass Facial Recognition Project With Chinese AI Firm CloudWalk (qz.com) 33

An anonymous reader shares a report: In March, the Zimbabwean government signed a strategic partnership with the Guangzhou-based startup CloudWalk Technology to begin a large-scale facial recognition program throughout the country. The agreement, backed by the Chinese government's Belt and Road initiative, will see the technology primarily used in security and law enforcement and will likely be expanded to other public programs.

[...] Zimbabwe may be giving away valuable data, as Chinese AI technologists stand to benefit from access to a database of millions of Zimbabwean faces that Harare will share with CloudWalk. [...] CloudWalk has already recalibrated its existing systems using three-dimensional light technology in order to recognize darker skin tones. In order to recognize other characteristics that may differ from China's population, CloudWalk is also developing a system that recognizes different hairstyles and body shapes, a company representative explained to the Global Times.

Privacy

Woman Says Alexa Device Recorded Her Private Conversation and Sent It To Random Contact; Amazon Confirms the Incident (kiro7.com) 270

Gary Horcher, reporting for KIRO7: A Portland family contacted Amazon to investigate after they say a private conversation in their home was recorded by Amazon's Alexa -- the voice-controlled smart speaker -- and that the recorded audio was sent to the phone of a random person in Seattle, who was in the family's contact list. "My husband and I would joke and say I'd bet these devices are listening to what we're saying," said Danielle, who did not want us to use her last name. Every room in her family home was wired with the Amazon devices to control her home's heat, lights and security system. But Danielle said two weeks ago their love for Alexa changed with an alarming phone call. "The person on the other line said, 'unplug your Alexa devices right now,'" she said. "'You're being hacked.'" That person was one of her husband's employees, calling from Seattle. "We unplugged all of them and he proceeded to tell us that he had received audio files of recordings from inside our house," she said. "At first, my husband was, like, 'no you didn't!' And the [recipient of the message] said, 'You sat there talking about hardwood floors.' And we said, 'oh gosh, you really did hear us.'" Danielle listened to the conversation when it was sent back to her, and she couldn't believe someone 176 miles away had heard it too. In a statement, an Amazon spokesperson said, "Amazon takes privacy very seriously. We investigated what happened and determined this was an extremely rare occurrence. We are taking steps to avoid this from happening in the future."

Further reading: Amazon Admits Its AI Alexa is Creepily Laughing at People.
Facebook

Facebook Asks British Users To Submit Their Nudes as Protection Against Revenge Porn (betanews.com) 300

Mark Wilson writes: Following on from a trial in Australia, Facebook is rolling out anti-revenge porn measures to the UK. In order to protect British users from falling victim to revenge porn, the social network is asking them to send in naked photos of themselves. The basic premise of the idea is: send us nudes, and we'll stop others from seeing them.
AI

UK Military Fears Robots Learning War From Video Games (bbc.com) 68

Robots that train themselves in battle tactics by playing video games could be used to mount cyber-attacks, the UK military fears. From a report: The warning is in a Ministry of Defence report on artificial intelligence. Researchers in Silicon Valley are using strategy games, such as Starcraft II, to teach systems how to solve complex problems on their own. But artificial intelligence (AI) programs can then "be readily adapted" to wage cyber-warfare, the MoD says. Officials are particularly concerned about the ability of rogue states and terrorists to mount advanced persistent threat attacks, which can disable critical infrastructure and steal sensitive information.
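The underlying technique is reinforcement learning: an agent improves by playing, receiving only a reward signal. A toy version of that loop (tabular Q-learning on a five-cell corridor; StarCraft-scale systems use deep networks, but the learn-from-reward structure is the same):

```python
import random

# Tabular Q-learning on a toy game: walk a 5-cell corridor, reward +1 at the end.
N_STATES, ACTIONS = 5, [-1, +1]              # actions: step left / step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1        # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s < N_STATES - 1:                  # rightmost cell is terminal
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == N_STATES - 1 else 0.0
        Q[(s, a)] += alpha * (reward + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

# The learned policy moves right in every state -- discovered purely from reward.
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)})
```

The worry quoted above amounts to noticing that nothing in this loop is game-specific: swap the corridor for a network environment and the reward for "access gained," and the same machinery optimizes an attack.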
United States

The US Military is Funding an Effort To Catch Deepfakes and Other AI Trickery (technologyreview.com) 70

The Department of Defense is funding a project that will try to determine whether the increasingly real-looking fake video and audio generated by artificial intelligence might soon be impossible to distinguish from the real thing -- even for another AI system. From a report: This summer, under a project funded by the Defense Advanced Research Projects Agency (DARPA), the world's leading digital forensics experts will gather for an AI fakery contest. They will compete to generate the most convincing AI-generated fake video, imagery, and audio -- and they will also try to develop tools that can catch these counterfeits automatically. The contest will include so-called "deepfakes," videos in which one person's face is stitched onto another person's body.

Rather predictably, the technology has already been used to generate a number of counterfeit celebrity porn videos. But the method could also be used to create a clip of a politician saying or doing something outrageous. DARPA's technologists are especially concerned about a relatively new AI technique that could make AI fakery almost impossible to spot automatically. Using what are known as generative adversarial networks, or GANs, it is possible to generate stunningly realistic artificial imagery.
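That last sentence is the technical crux. A GAN trains two networks against each other: a generator fabricates samples and a discriminator tries to tell them from real data, and each improves by exploiting the other's mistakes. A minimal sketch in PyTorch, matching a 1-D Gaussian rather than faces (the architecture and hyperparameters are illustrative only):

```python
import torch
import torch.nn as nn

# Minimal GAN: the generator learns to imitate samples from N(4, 1.25).
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # the faker
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                  nn.Linear(16, 1), nn.Sigmoid())                 # the detector
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(3000):
    real = torch.randn(64, 1) * 1.25 + 4.0          # genuine data
    fake = G(torch.randn(64, 8))                    # generated from noise

    # Discriminator update: label real 1, fake 0.
    opt_d.zero_grad()
    loss_d = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    loss_d.backward(); opt_d.step()

    # Generator update: push the discriminator to call fakes real.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward(); opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())        # should drift toward 4.0
```

The same adversarial pressure that pushes generated samples toward the real distribution also pushes them past any fixed detector, which is why DARPA fears automated spotting may keep losing ground: a detector good enough to catch today's fakes can serve as tomorrow's discriminator.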

AI

Microsoft Also Has An AI Bot That Makes Phone Calls To Humans (theverge.com) 61

An anonymous reader quotes a report from The Verge: At an AI event in London today, Microsoft CEO Satya Nadella showed off the company's Xiaoice (pronounced "SHAO-ICE") social chat bot. Microsoft has been testing Xiaoice in China, and Nadella revealed the bot has 500 million "friends" and more than 16 channels for Chinese users to interact with it through WeChat and other popular messaging services. Microsoft has turned Xiaoice, which is Chinese for "little Bing," into a friendly bot that has convinced some of its users that the bot is a friend or a human being. "Xiaoice has her own TV show, it writes poetry, and it does many interesting things," reveals Nadella. "It's a bit of a celebrity."

While most of Xiaoice's interactions have been text conversations, Microsoft has started allowing the chat bot to call people on their phones. It's not exactly the same as Google Duplex, which uses the Assistant to make calls on your behalf; instead, Xiaoice itself holds a phone conversation with you. "One of the things we started doing earlier this year is having full duplex conversations," explains Nadella. "So now Xiaoice can be conversing with you in WeChat and stop and call you. Then you can just talk to it using voice." (The term "full duplex" here refers to a conversation where both participants can speak at the same time; it's not a reference to Google's product, which was named after the same jargon.)

Youtube

Google Launches YouTube Music Service With Creepy AI To Predict Listening Habits (audioholics.com) 87

Audiofan writes: Will the new YouTube Music streaming service provide the soundtrack to your life? Google believes that its ability to harness the power of artificial intelligence will help the new service catch up to its rivals in the music streaming business. Google's latest attempt to compete with Spotify and Apple Music may finally have what it takes if it doesn't creep users out in the process. While the service officially rolls out on Tuesday, May 22nd, only some users will be able to use it at launch. What separates YouTube's music streaming service from the competition is its catalog of remixes, live versions, and covers of official versions of songs. It also uses the Google Assistant to make music recommendations based on everything it knows (and can learn) about you and your listening habits. "When you arrive at the gym, for example, YouTube Music will offer up a playlist of hard-hitting pump-up jams (if that's your thing)," reports Audioholics. "Late at night, softer tunes will set a more relaxing mood."

YouTube Music is free with ads, but ad-free listening will cost $9.99 per month. There is also YouTube Premium, which will cost $11.99 per month and will include both the ad-free music service and the exclusive video content from the now-defunct YouTube Red.
Microsoft

The Whole World is Now a Computer, Says Microsoft CEO Satya Nadella (zdnet.com) 182

Thanks to cloud computing, the Internet of Things and artificial intelligence, we should start to think of the planet as one giant computer, according to Microsoft chief executive Satya Nadella. From a report: "Digital technology, pervasively, is getting embedded in every place: every thing, every person, every walk of life is being fundamentally shaped by digital technology -- it is happening in our homes, our work, our places of entertainment," said Nadella, speaking in London. "It's amazing to think of a world as a computer. I think that's the right metaphor for us as we go forward."

[...] AI is core to Microsoft's strategy, Nadella said: "AI is the run time which is going to shape all of what we do going forward in terms of applications as well as the platform." Microsoft is rethinking its core products by using AI to connect them together, he said, giving an example of a meeting using translation, transcription, Microsoft's HoloLens and other devices to improve decision-making. "The idea that you can now use all of the computing power that is around you -- this notion of the world as a computer -- completely changes how you conduct a meeting and fundamentally what presence means for a meeting," he said.

AI

New Toronto Declaration Calls On Algorithms To Respect Human Rights 166

A coalition of human rights and technology groups released a new declaration on machine learning standards, calling on both governments and tech companies to ensure that algorithms respect basic principles of equality and non-discrimination. The Verge reports: Called The Toronto Declaration, the document focuses on the obligation to prevent machine learning systems from discriminating and, in some cases, violating existing human rights law. The declaration was announced as part of the RightsCon conference, an annual gathering of digital and human rights groups. "We must keep our focus on how these technologies will affect individual human beings and human rights," the preamble reads. "In a world of machine learning systems, who will bear accountability for harming human rights?" The declaration has already been signed by Amnesty International, Access Now, Human Rights Watch, and the Wikimedia Foundation. More signatories are expected in the weeks to come.

Beyond general non-discrimination practices, the declaration focuses on the individual right to remedy when algorithmic discrimination does occur. "This may include, for example, creating clear, independent, and visible processes for redress following adverse individual or societal effects," the declaration suggests, "[and making decisions] subject to accessible and effective appeal and judicial review."
AI

Did Google's Duplex Testing Break the Law? (daringfireball.net) 73

An anonymous reader writes: Tech blogger John Gruber appears to have successfully identified one of the restaurants mentioned in a post on Google's AI blog that bragged about "a meal booked through a call from Duplex." Mashable then asked a restaurant employee there if Google had let him know in advance that they'd be receiving a call from their non-human personal assistant AI. "No, of course no," he replied. And "When I asked him to confirm one more time that Duplex had called...he appeared to get nervous and immediately said he needed to go. He then hung up the phone."

John Gruber now asks: "How many real-world businesses has Google Duplex been calling and not identifying itself as an AI, leaving people to think they're actually speaking to another human...? And if 'Victor' is correct that Hong's Gourmet had no advance knowledge of the call, Google may have violated California law by recording the call." Friday he added that "This wouldn't send anyone to prison, but it would be a bit of an embarrassment, and would reinforce the notion that Google has a cavalier stance on privacy (and adhering to privacy laws)."

The Mercury News also reports that legal experts "raised questions about how Google's possible need to record Duplex's phone conversations to improve its artificial intelligence may come in conflict with California's strict two-party consent law, where all parties involved in a private phone conversation need to agree to being recorded."

For another perspective, Gizmodo's senior reviews editor reminds readers that "pretty much all tech demos are fake as hell." Speaking of Google's controversial Duplex demo, she writes that "If it didn't happen, if it is all a lie, well then I'll be totally disappointed. But I can't say I'll be surprised."
AI

Ask Slashdot: Could Asimov's Three Laws of Robotics Ensure Safe AI? (wikipedia.org) 234

"If science-fiction has already explored the issue of humans and intelligent robots or AI co-existing in various ways, isn't there a lot to be learned...?" asks Slashdot reader OpenSourceAllTheWay. There is much screaming lately about possible dangers to humanity posed by AI that gets smarter and smarter and more capable and might -- at some point -- even decide that humans are a problem for the planet. But some seminal science-fiction works mulled such scenarios long before even 8-bit home computers entered our lives.
The original submission cites Isaac Asimov's Three Laws of Robotics from the 1950 collection I, Robot.
  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

The original submission asks, "If you programmed an AI not to be able to break an updated and extended version of Asimov's Laws, would you not have reasonable confidence that the AI won't go crazy and start harming humans? Or are Asimov and other writers who mulled these questions 'So 20th Century' that AI builders won't even consider learning from their work?"
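Whatever one makes of that question, the one genuinely algorithmic property of the Laws is their strict precedence: the First dominates the Second, which dominates the Third. A toy encoding of that ordering (an illustration of the precedence only, obviously nothing like a real safety mechanism):

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool       # would violate the First Law
    disobeys_order: bool    # would violate the Second Law
    destroys_self: bool     # would violate the Third Law

def choose(actions):
    """Lexicographic ordering: any First Law violation outweighs any Second,
    which outweighs any Third (False sorts before True)."""
    return min(actions, key=lambda a: (a.harms_human, a.disobeys_order,
                                       a.destroys_self))

options = [Action("obey and self-destruct", False, False, True),
           Action("refuse the order",       False, True,  False),
           Action("obey, harming a human",  True,  False, False)]
print(choose(options).name)   # -> "obey and self-destruct"
```

The sketch prefers self-destruction to disobedience and disobedience to harm, the ordering that drives most of Asimov's plots. It also makes the hard part visible: everything interesting hides inside predicates like harms_human, which is roughly Rick Schumann's objection below.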

Wolfrider (Slashdot reader #856) is an Asimov fan, and writes that "Eventually I came across an article with the critical observation that the '3 Laws' were used by Asimov to drive plot points and were not to be seriously considered as 'basics' for robot behavior. Additionally, Giskard comes up with a '4th Law' on his own and (as he is dying) passes it on to R. Daneel Olivaw."

And Slashdot reader Rick Schumann argues that Asimov's Three Laws of Robotics "would only ever apply to a synthetic mind that can actually think; nothing currently being produced is capable of any such thing, therefore it does not apply..."

But what are your own thoughts? Do you think Asimov's Three Laws of Robotics could ensure safe AI?


Transportation

Should The Media Cover Tesla Accidents? (chicagotribune.com) 268

Long-time Slashdot reader rufey writes: Last weekend a Tesla vehicle was involved in a crash near Salt Lake City, Utah, while its Autopilot feature was enabled. The Tesla, a Model S, crashed at an estimated 60 MPH into the rear end of a fire department utility truck that was stopped at a red light. "The car appeared not to brake before impact, police said. The driver, whom police have not named, was taken to a hospital with a broken foot," according to the Associated Press. "The driver of the fire truck suffered whiplash and was not taken to a hospital."
Elon Musk tweeted about the accident:

It's super messed up that a Tesla crash resulting in a broken ankle is front page news and the ~40,000 people who died in US auto accidents alone in past year get almost no coverage. What's actually amazing about this accident is that a Model S hit a fire truck at 60mph and the driver only broke an ankle. An impact at that speed usually results in severe injury or death.

The Associated Press defended their news coverage Friday, arguing that the facts show that "not all Tesla crashes end the same way." They also fact-check Elon Musk's claim that "probability of fatality is much lower in a Tesla," reporting that it's impossible to verify since Tesla won't release the number of miles driven by their cars or the number of fatalities. "There have been at least three already this year and a check of 2016 NHTSA fatal crash data -- the most recent year available -- shows five deaths in Tesla vehicles."

Slashdot reader Reygle argues the real issue is with the drivers in the Autopilot cars. "Someone unwilling to pay attention to the road shouldn't be allowed anywhere near that road ever again."


AI

Google's Duplex AI Robot Will Warn That Calls Are Recorded (bloomberg.com) 28

An anonymous reader quotes a report from Bloomberg: On Thursday, the Alphabet Inc. unit shared more details on how the Duplex robot-calling feature will operate when it's released publicly, according to people familiar with the discussion. Duplex is an extension of the company's voice-based digital assistant that automatically phones local businesses and speaks with workers there to book appointments. At Google's weekly TGIF staff meeting on Thursday, executives gave employees their first full Duplex demo and told them the bot would identify itself as the Google assistant. It will also inform people on the phone that the line is being recorded in certain jurisdictions, the people said.
AI

AI Can't Reason Why (wsj.com) 185

The current data-crunching approach to machine learning misses an essential element of human intelligence. From a report: Amid rapid developments and nagging setbacks, one essential building block of human intelligence has eluded machines for decades: Understanding cause and effect. Put simply, today's machine-learning programs can't tell whether a crowing rooster makes the sun rise, or the other way around. Whatever volumes of data a machine analyzes, it cannot understand what a human gets intuitively. From the time we are infants, we organize our experiences into causes and effects. The questions "Why did this happen?" and "What if I had acted differently?" are at the core of the cognitive advances that made us human, and so far are missing from machines.

Suppose, for example, that a drugstore decides to entrust its pricing to a machine learning program that we'll call Charlie. The program reviews the store's records and sees that past variations of the price of toothpaste haven't correlated with changes in sales volume. So Charlie recommends raising the price to generate more revenue. A month later, the sales of toothpaste have dropped -- along with dental floss, cookies and other items. Where did Charlie go wrong? Charlie didn't understand that the previous (human) manager varied prices only when the competition did. When Charlie unilaterally raised the price, dentally price-conscious customers took their business elsewhere. The example shows that historical data alone tells us nothing about causes -- and that the direction of causation is crucial.
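The Charlie story is a confounding problem: in the historical data, price moved only when the competitor's price did, so price and sales look unrelated, yet intervening on price has a large effect. A small simulation with made-up numbers reproduces the trap:

```python
import random

random.seed(0)
history = []
for week in range(500):
    competitor = random.choice([2.00, 2.50])
    price = competitor                     # the old manager always matched
    # Customers respond to price RELATIVE to the competitor:
    sales = 100 - 40 * (price - competitor) + random.gauss(0, 5)
    history.append((price, sales))

# Charlie's view of the logs: price and sales are (nearly) uncorrelated.
mean_p = sum(p for p, _ in history) / len(history)
mean_s = sum(s for _, s in history) / len(history)
cov = sum((p - mean_p) * (s - mean_s) for p, s in history) / len(history)
print(f"covariance(price, sales) in the logs: {cov:.2f}")        # ~ 0

# The intervention Charlie actually performs: raise the price unilaterally.
competitor, price = 2.00, 2.50
print(f"sales after the raise: {100 - 40 * (price - competitor):.0f} (was ~100)")
```

The logs answer "What happened when prices changed?"; Charlie needed "What happens if I change the price?" Those are different questions, and no amount of the first kind of data answers the second without a causal model of who sets prices and why.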

AI

NYC Announces Plans To Test Algorithms For Bias (betanews.com) 79

The mayor of New York City, Bill de Blasio, has announced the formation of a new task force to examine the fairness of the algorithms used in the city's automated systems. From a report: The Automated Decision Systems Task Force will review algorithms that are in use to determine whether they are free from bias. Representatives from the Department of Social Services, the NYC Police Department, the Department of Transportation, the Mayor's Office of Criminal Justice, the Administration for Children's Services, and the Department of Education will be involved, and the aim is to produce a report by December 2019. However, it may be some time before the task force has any sort of effect. While a report is planned for the end of next year, it will merely recommend "procedures for reviewing and assessing City algorithmic tools to ensure equity and opportunity" -- it will be a while before any recommendation might be assessed and implemented.
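The report's contents are a year and a half away, but the simplest kind of check such a review can run is a group-fairness audit, for example demographic parity: compare an algorithm's favorable-outcome rates across groups. A minimal sketch with invented decisions (real audits must also decide which groups, outcomes, and thresholds matter, which is the hard part):

```python
# Demographic parity: the gap between groups' favorable-outcome rates.
# Groups and decisions below are invented for illustration.

def parity_gap(decisions):
    """decisions: iterable of (group, favorable) pairs."""
    counts = {}
    for group, favorable in decisions:
        n, k = counts.get(group, (0, 0))
        counts[group] = (n + 1, k + favorable)
    rates = {g: k / n for g, (n, k) in counts.items()}
    return rates, max(rates.values()) - min(rates.values())

sample = ([("A", True)] * 80 + [("A", False)] * 20
          + [("B", True)] * 55 + [("B", False)] * 45)
rates, gap = parity_gap(sample)
print(rates, f"gap = {gap:.2f}")   # {'A': 0.8, 'B': 0.55} gap = 0.25
```

Demographic parity is only one of several mutually incompatible fairness definitions (equalized odds and calibration are others), which may be why the task force's first deliverable is procedures rather than a single metric.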
Google

Google Won't Confirm If Its Human-Like AI Actually Called a Salon To Make an Appointment As Demoed at I/O (axios.com) 95

The headline demo at Google's I/O conference earlier this month, in which Google Assistant called a salon and successfully booked an appointment, continues to draw skepticism in the industry. News outlet Axios followed up with Google seeking clarification, only to find that the company did not wish to talk about it. From the report: What's suspicious? When you call a business, the person picking up the phone almost always identifies the business itself (and sometimes gives their own name as well). But that didn't happen when the Google assistant called these "real" businesses. Axios called over two dozen hair salons and restaurants -- including some in Google's hometown of Mountain View -- and every one immediately gave the business name.

Axios asked Google for the name of the hair salon or restaurant, in order to verify both that the businesses exist and that the calls were not pre-planned. We also said that we'd guarantee, in writing, not to publicly identify either establishment (so as to prevent them from receiving unwanted attention). A longtime Google spokeswoman declined to provide either name.

We also asked if either call was edited, even perhaps just cutting the second or two when the business identifies itself. And, if so, were there other edits? The spokeswoman declined comment, but said she'd check and get back to us. She didn't.
