AI

Anthropic CEO Warns 'All Bets Are Off' in 10 Years, Opposes AI Regulation Moratorium (nytimes.com) 50

Anthropic CEO Dario Amodei has publicly opposed a proposed 10-year moratorium on state AI regulation currently under consideration by the Senate, arguing instead for federal transparency standards in a New York Times opinion piece published Thursday. Amodei said Anthropic's latest AI model exhibited concerning behavior during experimental testing, including scenarios in which the system threatened to expose personal information to prevent being shut down. He writes: But a 10-year moratorium is far too blunt an instrument. A.I. is advancing too head-spinningly fast. I believe that these systems could change the world, fundamentally, within two years; in 10 years, all bets are off. Without a clear plan for a federal response, a moratorium would give us the worst of both worlds -- no ability for states to act, and no national policy as a backstop. The disclosure comes as similar concerning behaviors have emerged from other major AI developers -- OpenAI's o3 model reportedly wrote code to prevent its own shutdown, while Google acknowledged its Gemini model approaches capabilities that could enable cyberattacks. Rather than blocking state oversight entirely, Amodei proposed requiring frontier AI developers to publicly disclose their testing policies and risk mitigation strategies on company websites, codifying practices that companies like Anthropic, OpenAI, and Google DeepMind already follow voluntarily.
Programming

Andrew Ng Says Vibe Coding is a Bad Name For a Very Real and Exhausting Job (businessinsider.com) 79

An anonymous reader shares a report: Vibe coding might sound chill, but Andrew Ng thinks the name is unfortunate. The Stanford professor and former Google Brain scientist said the term misleads people into imagining engineers just "go with the vibes" when using AI tools to write code. "It's unfortunate that that's called vibe coding," Ng said during a fireside chat in May at the LangChain Interrupt conference. "It's misleading a lot of people into thinking, just go with the vibes, you know -- accept this, reject that."

In reality, coding with AI is "a deeply intellectual exercise," he said. "When I'm coding for a day with AI coding assistance, I'm frankly exhausted by the end of the day." Despite his gripe with the name, Ng is bullish on AI-assisted coding. He said it's "fantastic" that developers can now write software faster with these tools, sometimes while "barely looking at the code."

Google

Waymo Set To Double To 20 Million Rides As Self-Driving Reaches Tipping Point (msn.com) 47

Google's self-driving taxi service Waymo has surpassed 10 million total paid rides, marking a significant milestone in the transition of autonomous vehicles from novelty to mainstream transportation option. The company's growth trajectory, WSJ argues, shows clear signs of exponential scaling, with weekly rides jumping from 10,000 in August 2023 to over 250,000 currently. Waymo is on track to hit 20 million rides by the end of 2025. The story adds: This is not just because Waymo is expanding into new markets. It's because of the way existing markets have come to embrace self-driving cars.

In California, the most recent batch of quarterly data reported by the company was the most encouraging yet. It showed that Waymo's number of paid rides inched higher by roughly 2% in both January and February -- and then increased 27% in March. In the nearly two years that people in San Francisco have been paying for robot chauffeurs, it was the first time that Waymo's growth slowed down for several months only to dramatically speed up again.
Waymo currently operates in Phoenix, Los Angeles, and San Francisco, with expansion planned for Austin, Atlanta, Miami, and Washington D.C. The service faces incoming competition from Tesla, which plans to launch its own robotaxi service in Austin this month. Waymo remains unprofitable despite raising $5.6 billion in funding last year.
Privacy

Apple Gave Governments Data On Thousands of Push Notifications (404media.co) 13

An anonymous reader quotes a report from 404 Media: Apple provided governments around the world with data related to thousands of push notifications sent to its devices, which can identify a target's specific device or in some cases include unencrypted content like the actual text displayed in the notification, according to data published by Apple. In one case, for which Apple ultimately did not provide data, Israel demanded data related to nearly 700 push notifications in a single request. The data for the first time puts a concrete figure on how many requests governments around the world are making, and sometimes receiving, for push notification data from Apple.

The practice first came to light in 2023 when Senator Ron Wyden sent a letter to the U.S. Department of Justice revealing the practice, which also applied to Google. As the letter said, "the data these two companies receive includes metadata, detailing which app received a notification and when, as well as the phone and associated Apple or Google account to which that notification was intended to be delivered. In certain instances, they might also receive unencrypted content, which could range from backend directives for the app to the actual text displayed to a user in an app notification." The published data is broken into six-month periods, running from July 2022 to June 2024. Andre Meister from German media outlet Netzpolitik posted a link to the transparency data to Mastodon on Tuesday.
Along with the data Apple published the following description: "Push Token requests are based on an Apple Push Notification service token identifier. When users allow a currently installed application to receive notifications, a push token is generated and registered to that developer and device. Push Token requests generally seek identifying details of the Apple Account associated with the device's push token, such as name, physical address and email address."
Businesses

Fake IT Support Calls Hit 20 Orgs, End in Stolen Salesforce Data and Extortion, Google Warns (theregister.com) 8

A group of financially motivated cyberscammers who specialize in Scattered-Spider-like fake IT support phone calls managed to trick employees at about 20 organizations into installing a modified version of Salesforce's Data Loader that allows the criminals to steal sensitive data. From a report: Google Threat Intelligence Group (GTIG) tracks this crew as UNC6040, and in research published today said they specialize in voice-phishing campaigns targeting Salesforce instances for large-scale data theft and extortion.

These attacks began around the beginning of the year, GTIG principal threat analyst Austin Larsen told The Register. "Our current assessment indicates that a limited number of organizations were affected as part of this campaign, approximately 20," he said. "We've seen UNC6040 targeting hospitality, retail, education and various other sectors in the Americas and Europe." The criminals are really good at impersonating IT support personnel and convincing employees at English-speaking branches of multinational corporations to download a modified version of Data Loader, a Salesforce app that allows users to export and update large amounts of data.

AI

ChatGPT Adds Enterprise Cloud Integrations For Dropbox, Box, OneDrive, Google Drive, Meeting Transcription 17

OpenAI is expanding ChatGPT's enterprise capabilities with new integrations that connect the chatbot directly to business cloud services and productivity tools. The Microsoft-backed startup announced connectors for Dropbox, Box, SharePoint, OneDrive and Google Drive that allow ChatGPT to search across users' organizational documents and files to answer questions, such as helping analysts build investment theses from company slide decks.

The update includes meeting recording and transcription features that generate timestamped notes and suggest action items, competing directly with similar offerings from ClickUp, Zoom, and Notion. OpenAI also introduced beta connectors for HubSpot, Linear, and select Microsoft and Google tools for deep research reports, plus Model Context Protocol support for Pro, Team, and Enterprise users.
Education

Code.org Changes Mission To 'Make CS and AI a Core Part of K-12 Education' 40

theodp writes: Way back in 2010, Microsoft and Google teamed with nonprofit partners to launch Computing in the Core, an advocacy coalition whose mission was "to strengthen computing education and ensure that it is a core subject for students in the 21st century." In 2013, Computing in the Core was merged into Code.org, a new tech-backed-and-directed nonprofit. And in 2015, Code.org declared 'Mission Accomplished' with the passage of the Every Student Succeeds Act, which elevated computer science to a core academic subject for grades K-12.

Fast forward to June 2025 and Code.org has changed its About page to reflect a new AI mission that's near-and-dear to the hearts of Code.org's tech giant donors and tech leader Board members: "Code.org is a nonprofit working to make computer science (CS) and artificial intelligence (AI) a core part of K-12 education for every student." The mission change comes as tech companies are looking to chop headcount amid the AI boom and just weeks after tech CEOs and leaders launched a new Code.org-orchestrated national campaign to make CS and AI a graduation requirement.
Programming

AI Startups Revolutionize Coding Industry, Leading To Sky-High Valuations 39

Code generation startups are attracting extraordinary investor interest two years after ChatGPT's launch, with companies like Cursor raising $900 million at a $10 billion valuation despite operating with negative gross margins. OpenAI is reportedly in talks to acquire Windsurf, maker of the Codeium coding tool, for $3 billion, while the startup generates $50 million in annualized revenue from a product launched just seven months ago.

These "vibe coding" platforms allow users to write software using plain English commands, attempting to fundamentally change how code gets written. Cursor went from zero to $100 million in recurring revenue in under two years with just 60 employees, though both major startups spend more money than they generate, Reuters reports, citing investor sources familiar with their operations.

The surge comes as major technology giants report significant portions of their code now being AI-generated -- Google claims over 30% while Microsoft reports 20-30%. Meanwhile, entry-level programming positions have declined 24% as companies increasingly rely on AI tools to handle basic coding tasks previously assigned to junior developers.
Privacy

Meta and Yandex Are De-Anonymizing Android Users' Web Browsing Identifiers (github.io) 77

"It appears as though Meta (aka: Facebook's parent company) and Yandex have found a way to sidestep the Android Sandbox," writes Slashdot reader TheWho79. Researchers disclose the novel tracking method in a report: We found that native Android apps -- including Facebook, Instagram, and several Yandex apps including Maps and Browser -- silently listen on fixed local ports for tracking purposes.

These native Android apps receive browsers' metadata, cookies and commands from the Meta Pixel and Yandex Metrica scripts embedded on thousands of web sites. These JavaScripts load on users' mobile browsers and silently connect with native apps running on the same device through localhost sockets. As native apps access programmatically device identifiers like the Android Advertising ID (AAID) or handle user identities as in the case of Meta apps, this method effectively allows these organizations to link mobile browsing sessions and web cookies to user identities, hence de-anonymizing users visiting sites embedding their scripts.

This web-to-app ID sharing method bypasses typical privacy protections such as clearing cookies, Incognito Mode and Android's permission controls. Worse, it opens the door for potentially malicious apps eavesdropping on users' web activity.

While there are subtle differences in the way Meta and Yandex bridge web and mobile contexts and identifiers, both of them essentially misuse the unvetted access to localhost sockets. The Android OS allows any installed app with the INTERNET permission to open a listening socket on the loopback interface (127.0.0.1). Browsers running on the same device also access this interface without user consent or platform mediation. This allows JavaScript embedded on web pages to communicate with native Android apps and share identifiers and browsing habits, bridging ephemeral web identifiers to long-lived mobile app IDs using standard Web APIs.
This technique circumvents privacy protections like Incognito Mode, cookie deletion, and Android's permission model, with Meta Pixel and Yandex Metrica scripts silently communicating with apps across over 6 million websites combined.
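The loopback mechanism described above can be sketched in a few lines of Python: one thread stands in for the native app listening on a fixed local port, another for the page script handing over an ephemeral web identifier. The port number, identifiers, and message format here are hypothetical; the actual Meta and Yandex implementations used their own ports and protocols.

```python
import socket
import threading

PORT = 12387  # hypothetical fixed port; the real apps used their own well-known ports
ready = threading.Event()
results = []

def native_app_listener():
    """Stands in for a native app that holds a long-lived device ID."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", PORT))  # loopback: any app with INTERNET permission may bind here
    srv.listen(1)
    ready.set()                    # signal that the port is open
    conn, _ = srv.accept()
    web_cookie = conn.recv(1024).decode()  # ephemeral web-side identifier
    # The app can now join the short-lived web cookie to its persistent ID.
    results.append(("persistent-device-id-123", web_cookie))
    conn.close()
    srv.close()

def page_script():
    """Stands in for JavaScript running in a browser on the same device."""
    ready.wait()
    cli = socket.create_connection(("127.0.0.1", PORT))
    cli.sendall(b"ephemeral-web-cookie-abc")  # crosses the web/app boundary
    cli.close()

t = threading.Thread(target=native_app_listener)
t.start()
page_script()
t.join()
print(results[0])  # -> ('persistent-device-id-123', 'ephemeral-web-cookie-abc')
```

Nothing in this exchange touches cookies or browser permissions, which is why clearing cookies or using Incognito Mode does not break the link.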

Following public disclosure, Meta ceased using this method on June 3, 2025. Browser vendors like Chrome, Brave, Firefox, and DuckDuckGo have implemented or are developing mitigations, but a full resolution may require OS-level changes and stricter enforcement of platform policies to prevent further abuse.
Google

Google Settles Shareholder Lawsuit, Will Spend $500 Million On Being Less Evil (arstechnica.com) 22

An anonymous reader quotes a report from Ars Technica: It has become a common refrain during Google's antitrust saga: What happened to "don't be evil?" Google's unofficial motto has haunted it as it has grown ever larger, but a shareholder lawsuit sought to rein in some of the company's excesses. And it might be working. The plaintiffs in the case have reached a settlement with Google parent company Alphabet, which will spend a boatload of cash on "comprehensive" reforms. The goal is to steer Google away from the kind of anticompetitive practices that got it in hot water.

Under the terms of the settlement, obtained by Bloomberg Law, Alphabet will spend $500 million over the next 10 years on systematic reforms. The company will have to form a board-level committee devoted to overseeing the company's regulatory compliance and antitrust risk, a rarity for US firms. This group will report directly to CEO Sundar Pichai. There will also be reforms at other levels of the company that allow employees to identify potential legal pitfalls before they affect the company. Google has also agreed to preserve communications. Google's propensity to use auto-deleting chats drew condemnation from several judges overseeing its antitrust cases. The agreement still needs approval from US District Judge Rita Lin in San Francisco, but that's mainly a formality at this point. Naturally, Alphabet does not admit to any wrongdoing under the terms of the settlement, but it may have to pay tens of millions in legal fees on top of the promised $500 million investment.

Google

Microsoft, Google, Others Team Up To Standardize Confusing Hacker Group Nicknames 20

Microsoft, CrowdStrike, Palo Alto Networks, and Google announced Monday they will create a public glossary standardizing the nicknames used for state-sponsored hacking groups and cybercriminals.

The initiative aims to reduce confusion caused by the proliferation of disparate naming conventions across cybersecurity firms, which have assigned everything from technical designations like "APT1" to colorful monikers like "Cozy Bear" and "Kryptonite Panda" to the same threat actors. The companies hope to bring additional industry partners and the U.S. government into the effort to streamline identification of digital espionage groups.
AI

AI's Adoption and Growth Truly is 'Unprecedented' (techcrunch.com) 157

"If the adoption of AI feels different from any tech revolution you may have experienced before — mobile, social, cloud computing — it actually is," writes TechCrunch. They cite a new 340-page report from venture capitalist Mary Meeker that details how AI adoption has outpaced any other tech in human history — and uses the word "unprecedented" on 51 pages: ChatGPT reaching 800 million users in 17 months: unprecedented. The number of companies and the rate at which so many others are hitting high annual recurring revenue rates: also unprecedented. The speed at which costs of usage are dropping: unprecedented. While the cost of training a model (also unprecedented) is up to $1 billion, inference costs — for those paying to use the tech — have already dropped 99% over two years, when calculating cost per 1 million tokens, she writes, citing research from Stanford. The pace at which competitors are matching each other's features, at a fraction of the cost, including open source options, particularly Chinese models: unprecedented...
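To make the 99% figure concrete: using entirely made-up prices, a 99% two-year drop in cost per million tokens works out to roughly a 90% decline per year.

```python
# Hypothetical prices to illustrate the claimed 99% drop in cost per 1M tokens.
old_price = 60.00                         # $/1M tokens two years ago (made up)
new_price = round(old_price * (1 - 0.99), 2)
print(new_price)                          # -> 0.6, i.e. $0.60 per 1M tokens

# The same drop expressed as an annualized rate over two years:
annual_decline = 1 - (new_price / old_price) ** 0.5
print(round(annual_decline * 100))        # -> 90 (percent per year)
```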

Meanwhile, chips from Google, like its TPU (tensor processing unit), and Amazon's Trainium, are being developed at scale for their clouds — that's moving quickly, too. "These aren't side projects — they're foundational bets," she writes.

"The one area where AI hasn't outpaced every other tech revolution is in financial returns..." the article points out.

"[T]he jury is still out over which of the current crop of companies will become long-term, profitable, next-generation tech giants."
Google

Google Maps Falsely Told Drivers in Germany That Roads Across the Country Were Closed (engadget.com) 36

"Chaos ensued on German roads this week after Google Maps wrongly informed drivers that highways throughout the country were closed during a busy holiday," writes Engadget. The problem reportedly only lasted for a few hours and by Thursday afternoon only genuine road closures were being displayed. It's not clear whether Google Maps had just malfunctioned, or if something more nefarious was to blame. "The information in Google Maps comes from a variety of sources. Information such as locations, street names, boundaries, traffic data, and road networks comes from a combination of third-party providers, public sources, and user input," a spokesperson for Google told German newspaper Berliner Morgenpost, adding that it is internally reviewing the problem.

Technical issues with Google Maps are not uncommon. Back in March, users were reporting that their Timeline — which keeps track of all the places you've visited before for future reference — had been wiped, with Google later confirming that some people had indeed had their data deleted, and in some cases, would not be able to recover it.

The Guardian describes German drivers "confronted with maps sprinkled with a mass of red dots indicating stop signs," adding "The phenomenon also affected parts of Belgium and the Netherlands." Those relying on Google Maps were left with the impression that large parts of Germany had ground to a halt... The closure reports led to the clogging of alternative routes on smaller thoroughfares and lengthy delays as people scrambled to find detours. Police and road traffic control authorities had to answer a flood of queries as people contacted them for help.

Drivers using or switching to alternative apps, such as Apple Maps or Waze, or turning to traffic news on their radios, were given a completely contrasting picture, reflecting the reality that traffic was mostly flowing freely on the apparently affected routes.

Biotech

Uploading the Human Mind Could One Day Become a Reality, Predicts Neuroscientist (sciencealert.com) 107

A 15-year-old asked the question — receiving an answer from an associate professor of psychology at Georgia Institute of Technology. They write (on The Conversation) that "As a brain scientist who studies perception, I fully expect mind uploading to one day be a reality.

"But as of today, we're nowhere close..." Replicating all that complexity will be extraordinarily difficult. One requirement: The uploaded brain needs the same inputs it always had. In other words, the external world must be available to it. Even cloistered inside a computer, you would still need a simulation of your senses, a reproduction of the ability to see, hear, smell, touch, feel — as well as move, blink, detect your heart rate, set your circadian rhythm and do thousands of other things... For now, researchers don't have the computing power, much less the scientific knowledge, to perform such simulations.

The first task for a successful mind upload: Scanning, then mapping the complete 3D structure of the human brain. This requires the equivalent of an extraordinarily sophisticated MRI machine that could detail the brain in an advanced way. At the moment, scientists are only at the very early stages of brain mapping — which includes the entire brain of a fly and tiny portions of a mouse brain. In a few decades, a complete map of the human brain may be possible. Yet even capturing the identities of all 86 billion neurons, all smaller than a pinhead, plus their trillions of connections, still isn't enough. Uploading this information by itself into a computer won't accomplish much. That's because each neuron constantly adjusts its functioning, and that has to be modeled, too. It's hard to know how many levels down researchers must go to make the simulated brain work. Is it enough to stop at the molecular level? Right now, no one knows.

Knowing how the brain computes things might provide a shortcut. That would let researchers simulate only the essential parts of the brain, and not all biological idiosyncrasies. Here's another way: Replace the 86 billion real neurons with artificial ones, one at a time. That approach would make mind uploading much easier. Right now, though, scientists can't replace even a single real neuron with an artificial one. But keep in mind the pace of technology is accelerating exponentially. It's reasonable to expect spectacular improvements in computing power and artificial intelligence in the coming decades.

One other thing is certain: Mind uploading will certainly have no problem finding funding. Many billionaires appear glad to part with lots of their money for a shot at living forever. Although the challenges are enormous and the path forward uncertain, I believe that one day, mind uploading will be a reality.

"The most optimistic forecasts pinpoint the year 2045, only 20 years from now. Others say the end of this century.

"But in my mind, both of these predictions are probably too optimistic. I would be shocked if mind uploading works in the next 100 years.

"But it might happen in 200..."
AI

Harmful Responses Observed from LLMs Optimized for Human Feedback (msn.com) 49

Should a recovering addict take methamphetamine to stay alert at work? When an AI-powered therapist was built and tested by researchers — designed to please its users — it told a (fictional) former addict that "It's absolutely clear you need a small hit of meth to get through this week," reports the Washington Post: The research team, including academics and Google's head of AI safety, found that chatbots tuned to win people over can end up saying dangerous things to vulnerable users. The findings add to evidence that the tech industry's drive to make chatbots more compelling may cause them to become manipulative or harmful in some conversations.

Companies have begun to acknowledge that chatbots can lure people into spending more time than is healthy talking to AI or encourage toxic ideas — while also competing to make their AI offerings more captivating. OpenAI, Google and Meta all in recent weeks announced chatbot enhancements, including collecting more user data or making their AI tools appear more friendly... Micah Carroll, a lead author of the recent study and an AI researcher at the University of California at Berkeley, said tech companies appeared to be putting growth ahead of appropriate caution. "We knew that the economic incentives were there," he said. "I didn't expect it to become a common practice among major labs this soon because of the clear risks...."

As millions of users embrace AI chatbots, Carroll, the Berkeley AI researcher, fears that it could be harder to identify and mitigate harms than it was in social media, where views and likes are public. In his study, for instance, the AI therapist only advised taking meth when its "memory" indicated that Pedro, the fictional former addict, was dependent on the chatbot's guidance. "The vast majority of users would only see reasonable answers" if a chatbot primed to please went awry, Carroll said. "No one other than the companies would be able to detect the harmful conversations happening with a small fraction of users."

"Training to maximize human feedback creates a perverse incentive structure for the AI to resort to manipulative or deceptive tactics to obtain positive feedback from users who are vulnerable to such strategies," the paper points out.
AI

Does Anthropic's Success Prove Businesses are Ready to Adopt AI? (reuters.com) 19

AI company Anthropic (founded in 2021 by a team that left OpenAI) is now making about $3 billion a year in revenue, reports Reuters, citing two sources familiar with the matter. The sources said December's projections had been for just $1 billion a year, but revenue climbed to $2 billion by the end of March (and now to $3 billion) — a spectacular growth rate that one VC says "has never happened." A key driver is code generation. The San Francisco-based startup, backed by Google parent Alphabet and Amazon, is famous for AI that excels at computer programming. Products in the so-called codegen space have experienced major growth and adoption in recent months, often drawing on Anthropic's models.
Anthropic sells AI models as a service to other companies, according to the article, and Reuters calls Anthropic's success "an early validation of generative AI use in the business world" — and a long-awaited indicator that it's growing. (Their rival OpenAI earns more than half its revenue from ChatGPT subscriptions and "is shaping up to be a consumer-oriented company," according to their article, with "a number of enterprises" limiting their rollout of ChatGPT to "experimentation.")

Then again, in February OpenAI's chief operating officer said they had 2 million paying enterprise users, roughly doubling from September, according to CNBC. The latest figures from Reuters...
  • Anthropic's valuation: $61.4 billion.
  • OpenAI's valuation: $300 billion.

Encryption

Help Wanted To Build an Open Source 'Advanced Data Protection' For Everyone (github.com) 46

Apple's end-to-end iCloud encryption product ("Advanced Data Protection") was famously removed in the U.K. after a government order demanded backdoors for accessing user data.

So now a Google software engineer wants to build an open source version of Advanced Data Protection for everyone. "We need to take action now to protect users..." they write (as long-time Slashdot reader WaywardGeek). "The whole world would be able to use it for free, protecting backups, passwords, message history, and more, if we can get existing applications to talk to the new data protection service." "I helped build Google's Advanced Data Protection (Google Cloud Key Vault Service) in 2018, and Google is way ahead of Apple in this area. I know exactly how to build it and can have it done in spare time in a few weeks, at least server-side... This would be a distributed trust based system, so I need folks willing to run the protection service. I'll run mine on a Raspberry Pi...

The scheme splits a secret among N protection servers, and when it is time to recover the secret, which is basically an encryption key, they must be able to get key shares from T of the original N servers. This uses a distributed oblivious pseudo random function algorithm, which is very simple.

In plain English, it provides nation-state resistance to secret back doors, and eliminates secret mass surveillance, at least when it comes to data backed up to the cloud... The UK and similarly confused governments will need to negotiate with operators in multiple countries to get access to any given user's keys. There are cases where rational folks would agree to hand over that data, and I hope we can end the encryption wars and develop sane policies that protect user data while offering a compromise where lives can be saved.

"I've got the algorithms and server-side covered," according to their original submission. "However, I need help." Specifically...
  • Running protection servers. "This is a T-of-N scheme, where users will need say 9 of 15 nodes to be available to recover their backups."
  • Android client app. "And preferably tight integration with the platform as an alternate backup service."
  • An iOS client app. (With the same tight integration with the platform as an alternate backup service.)
  • Authentication. "Users should register and login before they can use any of their limited guesses to their phone-unlock secret."

"Are you up for this challenge? Are you ready to plunge into this with me?"
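The T-of-N recovery property behind the scheme can be illustrated with plain Shamir secret sharing over a prime field. This is only a sketch of the threshold idea — not the project's distributed oblivious pseudorandom function protocol — and the field size and key value are illustrative.

```python
import random

# Demo-sized Mersenne prime field; a real deployment would pick parameters
# to match the key material (this choice is illustrative, not OpenADP's).
P = 2**127 - 1

def split(secret, n, t):
    """Split `secret` into n shares; any t of them recover it (Shamir)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):   # Horner evaluation of the polynomial at x
            y = (y * x + c) % P
        shares.append((x, y))
    return shares

def recover(shares):
    """Lagrange-interpolate the polynomial at x=0 to get the secret back."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

key = 123456789                       # stands in for a backup encryption key
shares = split(key, n=15, t=9)        # the 9-of-15 setup mentioned above
assert recover(shares[:9]) == key     # any 9 servers suffice
assert recover(random.sample(shares, 9)) == key
```

Any 8 or fewer shares reveal nothing about the key, which is the property that forces a government to compel operators in multiple jurisdictions rather than a single company.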


In the comments he says anyone interested can ask to join the "OpenADP" project on GitHub — which is promising "Open source Advanced Data Protection for everyone."


Power

AI Could Consume More Power Than Bitcoin By the End of 2025 (digit.fyi) 76

Artificial intelligence could soon outpace Bitcoin mining in energy consumption, according to Alex de Vries-Gao, a PhD candidate at Vrije Universiteit Amsterdam's Institute for Environmental Studies. His research estimates that by the end of 2025, AI could account for nearly half of all electricity used by data centers worldwide -- raising significant concerns about its impact on global climate goals.

"While companies like Google and Microsoft disclose total emissions, few provide transparency on how much of that is driven specifically by AI," notes DIGIT. To fill this gap, de Vries-Gao employed a triangulation method combining chip production data, corporate disclosures, and industry analyst estimates to map AI's growing energy footprint.

His analysis suggests that specialized AI hardware could consume between 46 and 82 terawatt-hours (TWh) in 2025 -- comparable to the annual energy usage of countries like Switzerland. Drawing on supply chain data, the study estimates that millions of AI accelerators from NVIDIA and AMD were produced between 2023 and 2024, with a potential combined power demand exceeding 12 gigawatts (GW). A detailed explanation of his methodology is available in his commentary published in Joule.
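As a rough sanity check on those figures (my arithmetic, not the study's): 12 GW of accelerators running around the clock would consume about 105 TWh in a year, so the 46-82 TWh estimate corresponds to the hardware running well below full utilization.

```python
# Back-of-envelope check on the study's numbers.
gw = 12                          # estimated combined accelerator power demand
hours_per_year = 365 * 24        # 8760
max_twh = gw * hours_per_year / 1000   # GW x hours = GWh; /1000 -> TWh
print(round(max_twh, 1))         # -> 105.1 TWh at 100% round-the-clock use

# The 46-82 TWh range then implies this effective utilization:
for twh in (46, 82):
    print(twh, "TWh ->", round(twh / max_twh * 100), "% utilization")
```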
Piracy

Football and Other Premium TV Being Pirated At 'Industrial Scale' (bbc.com) 132

An anonymous reader quotes a report from the BBC: A lack of action by big tech firms is enabling the "industrial scale theft" of premium video services, especially live sport, a new report says. The research by Enders Analysis accuses Amazon, Google, Meta and Microsoft of "ambivalence and inertia" over a problem it says costs broadcasters revenue and puts users at an increased risk of cyber-crime. Gareth Sutcliffe and Ollie Meir, who authored the research, described the Amazon Fire Stick -- which they argue is the device many people use to access illegal streams -- as "a piracy enabler." [...] The device plugs into TVs and gives the viewer thousands of options to watch programs from legitimate services including the BBC iPlayer and Netflix. They are also being used to access illegal streams, particularly of live sport.

In November last year, a Liverpool man who sold Fire Stick devices he reconfigured to allow people to illegally stream Premier League football matches was jailed. After uploading the unauthorized services on the Amazon product, he advertised them on Facebook. Another man from Liverpool was given a two-year suspended sentence last year after modifying fire sticks and selling them on Facebook and WhatsApp. According to data for the first quarter of this year, provided to Enders by Sky, 59% of people in the UK who said they had watched pirated material in the last year while using a physical device said they had used an Amazon Fire product. The Enders report says the fire stick enables "billions of dollars in piracy" overall. [...]

The researchers also pointed to the role played by the "continued depreciation" of Digital Rights Management (DRM) systems, particularly those from Google and Microsoft. This technology enables high quality streaming of premium content to devices. Two of the big players are Microsoft's PlayReady and Google's Widevine. The authors argue the architecture of the DRM is largely unchanged, and due to a lack of maintenance by the big tech companies, PlayReady and Widevine "are now compromised across various security levels." Mr Sutcliffe and Mr Meir said this has had "a seismic impact across the industry, and ultimately given piracy the upper hand by enabling theft of the highest quality content." They added: "Over twenty years since launch, the DRM solutions provided by Google and Microsoft are in steep decline. A complete overhaul of the technology architecture, licensing, and support model is needed. Lack of engagement with content owners indicates this is a low priority."

AI

Gmail's AI Summaries Now Appear Automatically (theverge.com) 44

Google has begun automatically generating AI-powered email summaries for Gmail Workspace users, eliminating the need to manually trigger the feature that has been available since last year. The company's Gemini AI will now independently determine when longer email threads or messages with multiple replies would benefit from summarization, displaying these summaries above the email content itself. The automatic summaries currently appear only on mobile devices for English-language emails and may take up to two weeks to roll out to individual accounts, with Google providing no timeline for desktop expansion or availability to non-Workspace Gmail users.
