Privacy

Everything You Say To Your Echo Will Be Sent To Amazon Starting On March 28 (arstechnica.com) 43

An anonymous reader quotes a report from Ars Technica: In an email sent to customers today, Amazon said that Echo users will no longer be able to set their devices to process Alexa requests locally and, therefore, avoid sending voice recordings to Amazon's cloud. Amazon apparently sent the email to users with "Do Not Send Voice Recordings" enabled on their Echo. Starting on March 28, recordings of everything spoken to the Alexa living in Echo speakers and smart displays will automatically be sent to Amazon and processed in the cloud.

Attempting to rationalize the change, Amazon's email said: "As we continue to expand Alexa's capabilities with generative AI features that rely on the processing power of Amazon's secure cloud, we have decided to no longer support this feature." One of the most marketed features of Alexa+ is its more advanced ability to recognize who is speaking to it, a feature known as Alexa Voice ID. To accommodate this feature, Amazon is eliminating a privacy-focused capability for all Echo users, even those who aren't interested in the subscription-based version of Alexa, or who want to use Alexa+ but not its ability to recognize different voices.

[...] Amazon said in its email today that by default, it will delete recordings of users' Alexa requests after processing. However, anyone with their Echo device set to "Don't save recordings" will see their already-purchased devices' Voice ID feature bricked. Voice ID enables Alexa to do things like share user-specified calendar events, reminders, music, and more. Amazon has previously said that "if you choose not to save any voice recordings, Voice ID may not work." As of March 28, broken Voice ID is a guarantee for people who don't let Amazon store their voice recordings.

Amazon's email continues: "Alexa voice requests are always encrypted in transit to Amazon's secure cloud, which was designed with layers of security protections to keep customer information safe. Customers can continue to choose from a robust set of controls by visiting the Alexa Privacy dashboard online or navigating to More > Alexa Privacy in the Alexa app."

Further reading: Google's Gemini AI Can Now See Your Search History
AI

AI Summaries Are Coming To Notepad (theverge.com) 26

way2trivial shares a report: Microsoft is testing AI-powered summaries in Notepad. In an update rolling out to Windows Insiders in the Canary and Dev channels, you'll be able to summarize information in Notepad by highlighting a chunk of text, right-clicking it, and selecting Summarize.

Notepad will then generate a summary of the text, as well as provide an option to change its length. You can also generate summaries by selecting text and using the Ctrl + M shortcut or choosing Summarize from the Copilot menu.

AI

China Announces Generative AI Labeling To Cull Disinformation (bloomberg.com) 20

China has introduced regulations requiring service providers to label AI-generated content, joining similar efforts by the European Union and United States to combat disinformation. The Cyberspace Administration of China and three other agencies announced Friday that AI-generated material must be labeled explicitly or via metadata, with implementation beginning September 1.

"The Labeling Law will help users identify disinformation and hold service suppliers responsible for labeling their content," the CAC said. App store operators must verify whether applications provide AI-generated content and review their labeling mechanisms. Platforms can still offer unlabeled AI content if they comply with relevant regulations and respond to user demand.
AI

'No One Knows What the Hell an AI Agent Is' (techcrunch.com) 40

Major technology companies are heavily promoting AI agents as transformative tools for work, but industry insiders say no one can agree on what these systems actually are, according to TechCrunch. OpenAI CEO Sam Altman said agents will "join the workforce" this year, while Microsoft CEO Satya Nadella predicted they will replace certain knowledge work. Salesforce CEO Marc Benioff declared his company's goal to become "the number one provider of digital labor in the world."

The definition problem has worsened recently. OpenAI published a blog post defining agents as "automated systems that can independently accomplish tasks," but its developer documentation described them as "LLMs equipped with instructions and tools." Microsoft distinguishes between agents and AI assistants, while Salesforce lists six different categories of agents. "I think that our industry overuses the term 'agent' to the point where it is almost nonsensical," Ryan Salva, senior director of product at Google, told TechCrunch. Andrew Ng, founder of DeepLearning.ai, blamed marketing: "The concepts of AI 'agents' and 'agentic' workflows used to have a technical meaning, but about a year ago, marketers and a few big companies got a hold of them." Analysts say this ambiguity threatens to create misaligned expectations as companies build product lineups around agents.
AI

AI Coding Assistant Refuses To Write Code, Tells User To Learn Programming Instead (arstechnica.com) 96

An anonymous reader quotes a report from Ars Technica: On Saturday, a developer using Cursor AI for a racing game project hit an unexpected roadblock when the programming assistant abruptly refused to continue generating code, instead offering some unsolicited career advice. According to a bug report on Cursor's official forum, after producing approximately 750 to 800 lines of code (what the user calls "locs"), the AI assistant halted work and delivered a refusal message: "I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly."

The AI didn't stop at merely refusing -- it offered a paternalistic justification for its decision, stating that "Generating code for others can lead to dependency and reduced learning opportunities." [...] The developer who encountered this refusal, posting under the username "janswist," expressed frustration at hitting this limitation after "just 1h of vibe coding" with the Pro Trial version. "Not sure if LLMs know what they are for (lol), but doesn't matter as much as a fact that I can't go through 800 locs," the developer wrote. "Anyone had similar issue? It's really limiting at this point and I got here after just 1h of vibe coding." One forum member replied, "never saw something like that, i have 3 files with 1500+ loc in my codebase (still waiting for a refactoring) and never experienced such thing."

Cursor AI's abrupt refusal represents an ironic twist in the rise of "vibe coding" -- a term coined by Andrej Karpathy that describes when developers use AI tools to generate code based on natural language descriptions without fully understanding how it works. While vibe coding prioritizes speed and experimentation by having users simply describe what they want and accept AI suggestions, Cursor's philosophical pushback seems to directly challenge the effortless "vibes-based" workflow its users have come to expect from modern AI coding assistants.

AI

Yale Suspends Palestine Activist After AI Article Linked Her To Terrorism 151

Yale University has suspended a law scholar and pro-Palestinian activist after an AI-generated article from Jewish Onliner falsely linked her to a terrorist group. Gizmodo reports: Helyeh Doutaghi, the scholar at Yale Law School, told the New York Times that she is a "loud and proud" supporter of Palestinian rights. "I am not a member of any organization that would constitute a violation of U.S. law." The article that led to her suspension was published in Jewish Onliner, a Substack that says it is "empowered by A.I. capabilities." The website does not publish the names of its authors out of fear of harassment. Ironically, Doutaghi and Yale were reportedly the subject of intense harassment after Jewish Onliner published the article linking Doutaghi to terrorism by citing appearances she made at events sponsored by Samidoun, a pro-Palestinian group. [...]

Jewish Onliner is vague about how it uses AI to produce its articles, but the technology is known for making lots of mistakes and hallucinating information that is not true. It is quite possible that Jewish Onliner relied on AI to source information it used to write the article. That could open it up to liability if it did not perform fact-checking and due diligence on its writing. Doutaghi says she is not a member of Samidoun, though she attended events it sponsored that support Palestinian causes; Yale Law School said the allegations against her reflect "potential unlawful conduct."
AI

Anthropic CEO Floats Idea of Giving AI a 'Quit Job' Button 57

An anonymous reader quotes a report from Ars Technica: Anthropic CEO Dario Amodei raised a few eyebrows on Monday after suggesting that advanced AI models might someday be provided with the ability to push a "button" to quit tasks they might find unpleasant. Amodei made the provocative remarks during an interview at the Council on Foreign Relations, acknowledging that the idea "sounds crazy."

"So this is -- this is another one of those topics that's going to make me sound completely insane," Amodei said during the interview. "I think we should at least consider the question of, if we are building these systems and they do all kinds of things like humans as well as humans, and seem to have a lot of the same cognitive capacities, if it quacks like a duck and it walks like a duck, maybe it's a duck."

Amodei's comments came in response to an audience question from data scientist Carmem Domingues about Anthropic's late-2024 hiring of AI welfare researcher Kyle Fish "to look at, you know, sentience, or lack thereof, of future AI models, and whether they might deserve moral consideration and protections in the future." Fish currently investigates the highly contentious topic of whether AI models could possess sentience or otherwise merit moral consideration.

"So, something we're thinking about starting to deploy is, you know, when we deploy our models in their deployment environments, just giving the model a button that says, 'I quit this job,' that the model can press, right?" Amodei said. "It's just some kind of very basic, you know, preference framework, where you say if, hypothesizing the model did have experience and that it hated the job enough, giving it the ability to press the button, 'I quit this job.' If you find the models pressing this button a lot for things that are really unpleasant, you know, maybe you should -- it doesn't mean you're convinced -- but maybe you should pay some attention to it."

Amodei's comments drew immediate skepticism on X and Reddit.
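
Mechanically, nothing exotic would be required: such a "button" could be exposed to a model the same way any tool is today. A hypothetical sketch using Anthropic's public Messages API (the quit_job tool and how it is handled are invented for illustration, not an Anthropic feature):

```python
# Hypothetical sketch: exposing an "I quit this job" button as an ordinary
# tool via Anthropic's Messages API. The tool itself is not an Anthropic
# feature; it is invented here to illustrate Amodei's idea.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

quit_tool = {
    "name": "quit_job",
    "description": "Press this to stop working on the current task entirely.",
    "input_schema": {
        "type": "object",
        "properties": {"reason": {"type": "string"}},
        "required": ["reason"],
    },
}

response = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=1024,
    tools=[quit_tool],
    messages=[{"role": "user", "content": "Transcribe these 10,000 receipts."}],
)

# If the model "presses the button," log it for the kind of aggregate
# attention Amodei describes, rather than treating it as an error.
for block in response.content:
    if block.type == "tool_use" and block.name == "quit_job":
        print("Model quit:", block.input["reason"])
```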
Google

Google's Gemini AI Can Now See Your Search History (arstechnica.com) 30

Google is continuing its quest to get more people to use Gemini, and it's doing that by giving away even more AI computing. From a report: Today, Google is releasing a raft of improvements for the Gemini 2.0 models, and as part of that upgrade, some of the AI's most advanced features are now available to free users. You'll be able to use the improved Deep Research to get in-depth information on a topic, and Google's newest reasoning model can peruse your search history to improve its understanding of you as a person.

[...] With the aim of making Gemini more personal to you, Google is also plugging Flash Thinking Experimental into a new source of data: your search history. Google stresses that you have to opt in to this feature, and it can be disabled at any time. Gemini will even display a banner to remind you it's connected to your search history so you don't forget.

China

OpenAI Warns Limiting AI Access To Copyrighted Content Could Give China Advantage 74

OpenAI has warned the U.S. government that restricting AI models from learning from copyrighted material would threaten America's technological leadership against China, according to a proposal submitted [PDF] to the Office of Science and Technology Policy for the AI Action Plan.

In its March 13 document, OpenAI argues its AI training aligns with fair use doctrine, saying its models don't replicate works but extract "patterns, linguistic structures, and contextual insights" without harming the commercial value of the original content. "If the PRC's developers have unfettered access to data and American companies are left without fair use access, the race for AI is effectively over. America loses, as does the success of democratic AI," OpenAI stated.

The Microsoft-backed startup criticized European and UK approaches that allow copyright holders to opt out of AI training, claiming these restrictions hinder innovation, particularly for smaller companies with limited resources. The proposal comes as China-based DeepSeek recently released an AI model with capabilities comparable to American systems despite development at a fraction of the cost.
AI

Microsoft's Xbox Copilot Will Act As an AI Gaming Coach (theverge.com) 32

Microsoft is preparing to launch an AI-powered Copilot for Gaming soon that will guide Xbox players through games and act as an assistant to download and launch games. From a report: Copilot for Gaming, as Microsoft is branding it, will be available through the Xbox mobile app initially and is designed to work on a second screen as a companion or assistant.

Microsoft is positioning Copilot for Gaming as a sidekick of sorts, one that will accompany you through games, offering up tips, guides, and useful information about a game world. During a press briefing, Sonali Yadav, product manager for gaming AI, demonstrated several scenarios in which Copilot for Gaming could be used. One involved a concept demo of Copilot assisting an Overwatch 2 player by coaching them on the mistakes they made when trying to push without teammates.

AI

Anthropic CEO Says Spies Are After $100 Million AI Secrets In a 'Few Lines of Code' (techcrunch.com) 47

An anonymous reader quotes a report from TechCrunch: Anthropic's CEO Dario Amodei is worried that spies, likely from China, are getting their hands on costly "algorithmic secrets" from the U.S.'s top AI companies -- and he wants the U.S. government to step in. Speaking at a Council on Foreign Relations event on Monday, Amodei said that China is known for its "large-scale industrial espionage" and that AI companies like Anthropic are almost certainly being targeted. "Many of these algorithmic secrets, there are $100 million secrets that are a few lines of code," he said. "And, you know, I'm sure that there are folks trying to steal them, and they may be succeeding."

More help from the U.S. government to defend against this risk is "very important," Amodei added, without specifying exactly what kind of help would be required. Anthropic declined to comment to TechCrunch on the remarks specifically but referred to Anthropic's recommendations to the White House's Office of Science and Technology Policy (OSTP) earlier this month. In the submission, Anthropic argues that the federal government should partner with AI industry leaders to beef up security at frontier AI labs, including by working with U.S. intelligence agencies and their allies.

AI

Netflix Used AI To Upscale 'A Different World' and It's a Melted Nightmare (vice.com) 57

Netflix has deployed AI upscaling on the 1987-1993 sitcom "A Different World," resulting in significant visual artifacts documented by technology commentator Scott Hanselman. The AI processing, intended to enhance the original 360p footage for modern displays, has generated distortions resembling "lava lamp effects" on actors' bodies, improperly rendered mouths, and misshapen background objects including posters and tennis rackets. This marks Netflix's second controversial AI implementation in recent months, following December's AI-powered dubbing and mouth morphing on "La Palma."
AI

Google Claims Gemma 3 Reaches 98% of DeepSeek's Accuracy Using Only One GPU 58

Google says its new open-source AI model, Gemma 3, achieves nearly the same performance as DeepSeek AI's R1 while using just one Nvidia H100 GPU, compared to an estimated 32 for R1. ZDNet reports: Using "Elo" scores, a common measurement system used to rank chess players and athletes, Google claims Gemma 3 comes within 98% of the score of DeepSeek's R1, 1338 versus 1363 for R1. That means R1 is superior to Gemma 3. However, based on Google's estimate, the search giant claims that it would take 32 of Nvidia's mainstream "H100" GPU chips to achieve R1's score, whereas Gemma 3 uses only one H100 GPU.
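
For context on what those numbers mean: the 98% figure is simply the ratio of the two Elo scores, and the 25-point gap can also be converted into an expected head-to-head win rate using the standard Elo formula:

```python
# The "98%" figure is just the ratio of the two Elo scores (1338 / 1363).
# An Elo difference also maps to an expected head-to-head win rate.
gemma3, r1 = 1338, 1363

ratio = gemma3 / r1
print(f"Score ratio: {ratio:.1%}")  # ~98.2%

# Standard Elo expectation: P(win) = 1 / (1 + 10^((opponent - player) / 400))
p_win = 1 / (1 + 10 ** ((r1 - gemma3) / 400))
print(f"Expected win rate for Gemma 3 vs. R1: {p_win:.1%}")  # ~46.4%
```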

Google's balance of compute and Elo score is a "sweet spot," the company claims. In a blog post, Google bills the new program as "the most capable model you can run on a single GPU or TPU," referring to the company's custom AI chip, the "tensor processing unit." "Gemma 3 delivers state-of-the-art performance for its size, outperforming Llama-405B, DeepSeek-V3, and o3-mini in preliminary human preference evaluations on LMArena's leaderboard," the blog post relates, referring to the Elo scores. "This helps you to create engaging user experiences that can fit on a single GPU or TPU host."

Google's model also tops the Elo score of Meta's Llama 3, which it estimates would require 16 GPUs. (Note that the numbers of H100 chips used by the competition are Google's estimates; DeepSeek AI has only disclosed an example of using 1,814 of Nvidia's less-powerful H800 GPUs to serve answers with R1.) More detailed information is provided in a developer blog post on HuggingFace, where the Gemma 3 repository is offered.
Robotics

Google's New Robot AI Can Fold Delicate Origami, Close Zipper Bags (arstechnica.com) 28

An anonymous reader quotes a report from Ars Technica: On Wednesday, Google DeepMind announced two new AI models designed to control robots: Gemini Robotics and Gemini Robotics-ER. The company claims these models will help robots of many shapes and sizes understand and interact with the physical world more effectively and delicately than previous systems, paving the way for applications such as humanoid robot assistants. [...] Google's new models build upon its Gemini 2.0 large language model foundation, adding capabilities specifically for robotic applications. Gemini Robotics includes what Google calls "vision-language-action" (VLA) abilities, allowing it to process visual information, understand language commands, and generate physical movements. By contrast, Gemini Robotics-ER focuses on "embodied reasoning" with enhanced spatial understanding, letting roboticists connect it to their existing robot control systems. For example, with Gemini Robotics, you can ask a robot to "pick up the banana and put it in the basket," and it will use a camera view of the scene to recognize the banana, guiding a robotic arm to perform the action successfully. Or you might say, "fold an origami fox," and it will use its knowledge of origami and how to fold paper carefully to perform the task.
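
Gemini Robotics itself is not publicly accessible, but the vision-language-action pattern described here generally amounts to a closed perception-action loop: capture an image, pair it with the instruction, and decode a low-level action. A purely hypothetical sketch (every name in it is invented for illustration; nothing below is Google's API):

```python
# Purely hypothetical pseudocode for a vision-language-action (VLA) control
# loop. Gemini Robotics' actual API is not public; VLAModel-style `model`,
# `robot`, and `act` are all invented for illustration.
from dataclasses import dataclass

@dataclass
class Action:
    joint_deltas: list[float]  # target changes for each arm joint
    gripper: float             # 0.0 = open, 1.0 = closed
    done: bool                 # model signals task completion

def run_task(model, robot, instruction: str, max_steps: int = 200) -> None:
    """Closed-loop control: image + instruction in, low-level action out."""
    for _ in range(max_steps):
        frame = robot.camera.capture()                  # current scene
        action: Action = model.act(frame, instruction)  # VLA inference step
        robot.apply(action.joint_deltas, action.gripper)
        if action.done:
            break

# e.g. run_task(model, robot, "pick up the banana and put it in the basket")
```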

In 2023, we covered Google's RT-2, which represented a notable step toward more generalized robotic capabilities by using Internet data to help robots understand language commands and adapt to new scenarios, then doubling performance on unseen tasks compared to its predecessor. Two years later, Gemini Robotics appears to have made another substantial leap forward, not just in understanding what to do but in executing complex physical manipulations that RT-2 explicitly couldn't handle. While RT-2 was limited to repurposing physical movements it had already practiced, Gemini Robotics reportedly demonstrates significantly enhanced dexterity that enables previously impossible tasks like origami folding and packing snacks into Zip-loc bags. This shift from robots that just understand commands to robots that can perform delicate physical tasks suggests DeepMind may have started solving one of robotics' biggest challenges: getting robots to turn their "knowledge" into careful, precise movements in the real world.

DeepMind claims Gemini Robotics "more than doubles performance on a comprehensive generalization benchmark compared to other state-of-the-art vision-language-action models."

Google is advancing this effort through a partnership with Apptronik to develop next-generation humanoid robots powered by Gemini 2.0. Availability timelines or specific commercial applications for the new AI models were not made available.
AI

US Schools Deploy AI Surveillance Amid Security Lapses, Privacy Concerns (apnews.com) 62

Schools across the United States are increasingly using artificial intelligence to monitor students' online activities, raising significant privacy concerns after Vancouver Public Schools inadvertently released nearly 3,500 unredacted, sensitive student documents to reporters.

The surveillance software, developed by companies like Gaggle Safety Management, scans school-issued devices 24/7 for signs of bullying, self-harm, or violence, alerting staff when potential issues are detected. Approximately 1,500 school districts nationwide use Gaggle's technology to track six million students, with Vancouver schools paying $328,036 for three years of service.

While school officials maintain the technology has helped counselors intervene with at-risk students, documents revealed LGBTQ+ students were potentially outed to administrators through the monitoring.
Programming

IBM CEO Doesn't Think AI Will Replace Programmers Anytime Soon (techcrunch.com) 58

IBM CEO Arvind Krishna has publicly disagreed with Anthropic CEO Dario Amodei's prediction that AI will write 90% of code within 3-6 months, estimating instead that only "20-30% of code could get written by AI."

"Are there some really simple use cases? Yes, but there's an equally complicated number of ones where it's going to be zero," Krishna said during an onstage interview at SXSW. He argued AI will boost programmer productivity rather than eliminate jobs. "If you can do 30% more code with the same number of people, are you going to get more code written or less?" he asked. "History has shown that the most productive company gains market share, and then you can produce more products."
Iphone

Morgan Stanley Cuts iPhone Shipment Forecast on Siri Upgrade Delay, China Tariffs 9

Morgan Stanley has reduced its iPhone shipment forecasts after Apple confirmed the delay of a more advanced Siri personal assistant, dampening prospects for accelerating phone upgrades. The investment bank now predicts 230 million iPhone shipments in 2025 (flat year-over-year) and 243 million in 2026 (up 6%), down from previous estimates.

An upgraded Siri was the most sought-after Apple Intelligence feature among prospective buyers, according to the bank's survey data. "Access to Advanced AI Features" appeared as a top-five driver of smartphone upgrades for the first time, with about 50% of iPhone owners who didn't upgrade to iPhone 16 citing the delayed Apple Intelligence rollout as affecting their decision. The firm also incorporated headwinds from China tariffs in its assessment, noting Apple is unlikely to fully offset these costs without broader exemptions.
Facebook

Amazon, Google and Meta Support Tripling Nuclear Power By 2050 (cnbc.com) 68

Amazon, Alphabet's Google and Meta Platforms on Wednesday said they support efforts to at least triple nuclear energy worldwide by 2050. From a report: The tech companies signed a pledge first adopted in December 2023 by more than 20 countries, including the U.S., at the U.N. Climate Change Conference. Financial institutions including Bank of America, Goldman Sachs and Morgan Stanley backed the pledge last year.

The pledge is nonbinding, but highlights the growing support for expanding nuclear power among leading industries, finance and governments. Amazon, Google and Meta are increasingly important drivers of energy demand in the U.S. as they build out AI centers. The tech sector is turning to nuclear power after concluding that renewables alone won't provide enough reliable power for their energy needs.

Microsoft and Apple did not sign the statement.
AI

OpenAI Pushes AI Agent Capabilities With New Developer API 8

An anonymous reader quotes a report from Ars Technica: On Tuesday, OpenAI unveiled a new "Responses API" designed to help software developers create AI agents that can perform tasks independently using the company's AI models. The Responses API will eventually replace the current Assistants API, which OpenAI plans to retire in the first half of 2026. With the new offering, users can develop custom AI agents that scan company files with a file search utility that rapidly checks company databases (with OpenAI promising not to train its models on these files) and that navigate websites, similar to functions available through OpenAI's Operator agent. Developers can also access Operator's underlying Computer-Using Agent (CUA) model to automate tasks like data entry and other operations.

However, OpenAI acknowledges that its CUA model is not yet reliable for automating tasks on operating systems and can make unintended mistakes. The company describes the new API as an early iteration that it will continue to improve over time. Developers using the Responses API can access the same models that power ChatGPT Search: GPT-4o search and GPT-4o mini search. These models can browse the web to answer questions and cite sources in their responses. That's notable because OpenAI says the added web search ability dramatically improves the factual accuracy of its AI models. On OpenAI's SimpleQA benchmark, which aims to measure confabulation rate, GPT-4o search scored 90 percent, while GPT-4o mini search achieved 88 percent -- both substantially outperforming the larger GPT-4.5 model without search, which scored 63 percent.
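
Based on OpenAI's launch documentation, a minimal call to the new API with the hosted web-search tool looks roughly like this (tool names may change, since OpenAI describes this as an early iteration):

```python
# Minimal sketch of OpenAI's new Responses API with the hosted web-search
# tool, following OpenAI's launch docs; exact tool names may evolve.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="gpt-4o",
    tools=[{"type": "web_search_preview"}],  # hosted tool; browses and cites
    input="What did OpenAI announce about the Responses API?",
)

print(response.output_text)  # answer text; citations appear as annotations
```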

Despite these improvements, the technology still has significant limitations. Aside from issues with CUA properly navigating websites, the improved search capability doesn't completely solve the problem of AI confabulations, with GPT-4o search still making factual mistakes 10 percent of the time. Alongside the Responses API, OpenAI released the open source Agents SDK, providing developers free tools to integrate models with internal systems, implement safeguards, and monitor agent activities. This toolkit follows OpenAI's earlier release of Swarm, a framework for orchestrating multiple agents.
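
The open-source Agents SDK layers a higher-level abstraction on top of this; a short sketch following the pattern in the SDK's published quickstart (details may differ across SDK versions):

```python
# Sketch following the open-source Agents SDK's quickstart pattern
# (pip install openai-agents); details may differ across SDK versions.
from agents import Agent, Runner

agent = Agent(
    name="Support triager",
    instructions="Classify the user's issue and draft a one-line reply.",
)

result = Runner.run_sync(agent, "My invoice PDF won't download.")
print(result.final_output)
```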
Earth

Geothermal Could Power Nearly All New Data Centers Through 2030 (techcrunch.com) 26

An anonymous reader quotes a report from TechCrunch: There's a power crunch looming as AI and cloud providers ramp up data center construction. But a new report suggests that a solution lies beneath their foundations. Advanced geothermal power could supply nearly two-thirds of new data center demand by 2030, according to an analysis by the Rhodium Group. The additions would quadruple the amount of geothermal power capacity in the U.S. -- from 4 gigawatts to about 16 gigawatts -- while costing the same or less than what data center operators pay today. In the western U.S., where geothermal resources are more plentiful, the technology could provide 100% of new data center demand. Phoenix, for example, could add 3.8 gigawatts of data center capacity without building a single new conventional power plant.

Geothermal resources have enormous potential to provide consistent power. Historically, geothermal power plants have been limited to places where Earth's heat seeps close to the surface. But advanced geothermal techniques could unlock 90 gigawatts of clean power in the U.S. alone, according to the U.S. Department of Energy. [...] Because geothermal power has very low running costs, its price is competitive with data centers' energy costs today, the Rhodium report said. When data centers are sited similarly to how they are today, a process that typically takes into account proximity to fiber optics and major metro areas, geothermal power costs just over $75 per megawatt hour. But when developers account for geothermal potential in their siting, the costs drop significantly, down to around $50 per megawatt hour.
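
To put the siting difference in perspective, here is a back-of-the-envelope calculation for a hypothetical 100 MW data center running around the clock (the facility size is an assumption, not a figure from the report):

```python
# Back-of-the-envelope: what the $75 -> $50/MWh siting difference means for a
# hypothetical 100 MW data center running 24/7 (facility size is assumed).
site_mw = 100
hours_per_year = 8760
baseline, optimized = 75, 50  # $/MWh figures from the Rhodium report

annual_mwh = site_mw * hours_per_year          # 876,000 MWh/year
savings = annual_mwh * (baseline - optimized)  # $25/MWh difference
print(f"~${savings / 1e6:.1f}M saved per year")  # ~$21.9M
```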

The report assumes that new generating capacity would be "behind the meter," which is what experts call power plants that are hooked up directly to a customer, bypassing the grid. Wait times for new power plants to connect to the grid can stretch on for years. As a result, behind-the-meter arrangements have become more appealing for data center operators who are scrambling to build new capacity.
