AI

AI Set To Consume Electricity Equivalent To 22% of US Homes By 2028, New Analysis Says (technologyreview.com) 95

New analysis by MIT Technology Review reveals AI's rapidly growing energy demands, with data centers expected to nearly triple their share of US electricity consumption, from 4.4% to 12%, by 2028. According to Lawrence Berkeley National Laboratory projections, AI alone could soon consume electricity equivalent to 22% of all US households annually, driven primarily by inference operations, which represent 80-90% of AI's computing power.
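A quick back-of-envelope calculation makes the household comparison concrete. Both constants below are rough, assumed figures (approximately EIA-scale numbers), not values from the analysis itself:

```python
# Rough sanity check of "electricity equivalent to 22% of US households".
# Both constants are assumptions for illustration, not from the report:
# ~131 million US households, ~10,500 kWh average annual usage per household.
US_HOUSEHOLDS = 131e6
KWH_PER_HOUSEHOLD = 10_500

ai_twh = 0.22 * US_HOUSEHOLDS * KWH_PER_HOUSEHOLD / 1e9  # kWh -> TWh
print(f"~{ai_twh:.0f} TWh per year")  # on the order of 300 TWh/year
```

On those assumptions, the 22% figure works out to roughly 300 terawatt-hours per year, which gives a sense of the scale being projected.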

The carbon intensity of electricity used by data centers is 48% higher than the US average, researchers found, as facilities increasingly turn to dirtier energy sources like natural gas to meet immediate needs.

Tech giants are racing to secure unprecedented energy resources: OpenAI and President Trump announced a $500 billion Stargate initiative, Apple plans to spend $500 billion on manufacturing and data centers, and Google expects to invest $75 billion in AI infrastructure in 2025 alone. Despite their massive energy ambitions, leading AI companies remain largely silent about their per-query energy consumption, leaving researchers struggling to assemble what one expert called "a total black box."
Google

Google's Brin: 'I Made a Lot of Mistakes With Google Glass' 34

Google co-founder Sergey Brin candidly addressed the failure of Google Glass during an unscheduled appearance at Tuesday's Google I/O conference, where the company announced a new smart glasses partnership with Warby Parker. "I definitely feel like I made a lot of mistakes with Google Glass, I'll be honest," Brin said.

He noted several key issues that doomed the $1,500 device launched in 2013, including a conspicuous front-facing camera that sparked privacy concerns. "Now it looks like normal glasses without that thing in front," Brin said of the new design. He also blamed the "technology gap" that existed a decade ago and his own inexperience with supply chains that prevented pricing the original Glass competitively.
Chrome

Google Is Baking Gemini AI Into Chrome (pcworld.com) 54

An anonymous reader quotes a report from PCWorld: Microsoft famously brought its Copilot AI to the Edge browser in Windows. Now Google is doing the same with Chrome. In a list of announcements that spanned dozens of pages, Google allocated just a single line to the announcement: "Gemini is coming to Chrome, so you can ask questions while browsing the web." Google later clarified what Gemini on Chrome can do: "This first version allows you to easily ask Gemini to clarify complex information on any webpage you're reading or summarize information," the company said in a blog post. "In the future, Gemini will be able to work across multiple tabs and navigate websites on your behalf."

Other examples of what Gemini can do include coming up with personalized quizzes based on material in a webpage, or altering what the page suggests, like a recipe. In the future, Google plans to allow Gemini in Chrome to work on multiple tabs, navigate within websites, and automate tasks. Google said that you'll be able to either talk or type commands to Gemini. To access it, you can use the Alt+G shortcut in Windows. [...] You'll see Gemini appear in Chrome as early as this week, Google executives said -- on May 21, a representative clarified. However, you'll need to be a Gemini subscriber to take advantage of its features, a requirement that Microsoft does not apply with Copilot for Edge. Otherwise, Google will let those who participate in the Google Chrome Beta, Dev, and Canary programs test it out.

KDE

KDE Is Getting a Native Virtual Machine Manager Called 'Karton' (neowin.net) 37

A new virtual machine manager called Karton is being developed specifically for the KDE Plasma desktop, aiming to offer a seamless, Qt-native alternative to GNOME-centric tools like GNOME Boxes. Spearheaded by University of Waterloo student Derek Lin as part of Google Summer of Code 2025, Karton uses libvirt and Qt Quick to build a user-friendly, fully integrated VM experience, with features like a custom SPICE viewer, snapshot support, and a mobile-friendly UI expected by September 2025. Neowin reports: To feel right at home in KDE, Karton is being built with Qt Quick and Kirigami. It uses the libvirt API to handle virtual machines and could eventually work across different platforms. Right now, development is focused on getting the core parts in place. Lin is working on a new domain installer that ditches direct virt-install calls in favor of libosinfo, which helps detect OS images and generate the right libvirt XML for setting up virtual machines more precisely. He's still refining device configuration and working on broader hypervisor support. Another key part of the work is building a custom SPICE viewer from scratch using Qt Quick.
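For context, the libvirt XML that an installer like Karton's generates describes a VM declaratively; libvirt then builds the machine from it. A minimal, hypothetical domain definition (the name, disk path, and sizes are purely illustrative) looks something like this:

```xml
<!-- Illustrative libvirt domain XML; not taken from Karton's codebase. -->
<domain type="kvm">
  <name>fedora-test</name>
  <memory unit="GiB">4</memory>
  <vcpu>2</vcpu>
  <os>
    <type arch="x86_64" machine="q35">hvm</type>
  </os>
  <devices>
    <disk type="file" device="disk">
      <driver name="qemu" type="qcow2"/>
      <source file="/var/lib/libvirt/images/fedora-test.qcow2"/>
      <target dev="vda" bus="virtio"/>
    </disk>
    <graphics type="spice" autoport="yes"/>
  </devices>
</domain>
```

Using libosinfo to detect the guest OS lets the installer fill in sensible device models (like the virtio disk and SPICE graphics above) automatically rather than shelling out to virt-install.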

If you're curious, here's the list of specific deliverables Lin included in his GSoC proposal, though he notes the proposal itself is a bit outdated [...]. For those interested in the timeline, Lin's GSoC proposal says the official GSoC coding starts June 2, 2025. The goal is to have a working app ready by the midterm evaluation around July 14, 2025, with the final submission due September 1, 2025.
You can learn more via KDE.org.
The Internet

KrebsOnSecurity Hit With Near-Record 6.3 Tbps DDoS (krebsonsecurity.com) 16

KrebsOnSecurity was hit with a near-record 6.3 Tbps DDoS attack, believed to be a test of the powerful new Aisuru IoT botnet. The attack, lasting under a minute, was the largest Google has ever mitigated and is linked to a DDoS-for-hire operation run by a 21-year-old Brazilian known as "Forky." Brian Krebs writes: [Google Security Engineer Damian Menscher] said the attack on KrebsOnSecurity lasted less than a minute, hurling large UDP data packets at random ports at a rate of approximately 585 million data packets per second. "It was the type of attack normally designed to overwhelm network links," Menscher said, referring to the throughput connections between and among various Internet service providers (ISPs). "For most companies, this size of attack would kill them." [...]

The 6.3 Tbps attack last week caused no visible disruption to this site, in part because it was so brief -- lasting approximately 45 seconds. DDoS attacks of such magnitude and brevity typically are produced when botnet operators wish to test or demonstrate their firepower for the benefit of potential buyers. Indeed, Google's Menscher said it is likely that both the May 12 attack and the slightly larger 6.5 Tbps attack against Cloudflare last month were simply tests of the same botnet's capabilities. In many ways, the threat posed by the Aisuru/Airashi botnet is reminiscent of Mirai, an innovative IoT malware strain that emerged in the summer of 2016 and successfully out-competed virtually all other IoT malware strains in existence at the time.
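The reported figures are internally consistent, which is worth checking: dividing throughput by packet rate gives the average packet size, and multiplying by duration gives the total volume moved:

```python
# Sanity-check the reported attack numbers: 6.3 Tbps at ~585 million
# packets/second implies an average packet of roughly 1,350 bytes --
# consistent with the "large UDP data packets" Krebs describes -- and
# ~45 seconds at that rate moves on the order of 35 terabytes.
BITS_PER_SECOND = 6.3e12
PACKETS_PER_SECOND = 585e6
DURATION_S = 45

avg_packet_bytes = BITS_PER_SECOND / PACKETS_PER_SECOND / 8
total_terabytes = BITS_PER_SECOND * DURATION_S / 8 / 1e12
print(round(avg_packet_bytes), round(total_terabytes))  # 1346 35
```

That packet size sits near the typical 1,500-byte Ethernet MTU, which fits an attack "designed to overwhelm network links" rather than one flooding servers with many tiny packets.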

AI

Google Launches Veo 3, an AI Video Generator That Incorporates Audio 5

Google on Tuesday unveiled Veo 3, an AI video generator that includes synchronized audio -- such as dialogue and animal sounds -- setting it apart from rivals like OpenAI's Sora. The company also launched Imagen 4 for high-quality image generation, Flow for cinematic video creation, and made updates to its Veo 2 and Lyria 2 tools. CNBC reports: "Veo 3 excels from text and image prompting to real-world physics and accurate lip syncing," Eli Collins, Google DeepMind product vice president, said in a blog Tuesday. The video-audio AI tool is available Tuesday to U.S. subscribers of Google's new $249.99 per month Ultra subscription plan, which is geared toward hardcore AI enthusiasts. Veo 3 will also be available for users of Google's Vertex AI enterprise platform.

Google also announced Imagen 4, its latest image-generation tool, which the company said produces higher-quality images through user prompts. Additionally, Google unveiled Flow, a new filmmaking tool that allows users to create cinematic videos by describing locations, shots and style preferences. Users can access the tool through Gemini, Whisk, Vertex AI and Workspace.
Google

Google Is Rolling Out AI Mode To Everyone In the US (engadget.com) 44

Google has unveiled a major overhaul of its search engine with the introduction of AI Mode -- a new feature that works like a chatbot, enabling users to ask follow-up questions and receive detailed, conversational answers. Announced at the I/O 2025 conference, the feature is now being rolled out to all Search users in the U.S. Engadget reports: Google first began previewing AI Mode with testers in its Labs program at the start of March. Since then, it has been gradually rolling out the feature to more people, including, in recent weeks, regular Search users. At its keynote today, Google shared a number of updates coming to AI Mode as well, including new shopping tools, the ability to compare ticket prices for you, and custom charts and graphs for queries on finance and sports.

For the uninitiated, AI Mode is a chatbot built directly into Google Search. It lives in a separate tab, and was designed by the company to tackle more complicated queries than people have historically used its search engine to answer. For instance, you can use AI Mode to generate a comparison between different fitness trackers. Before today, the chatbot was powered by Gemini 2.0. Now it's running a custom version of Gemini 2.5. What's more, Google plans to bring many of AI Mode's capabilities to other parts of the Search experience.

Looking to the future, Google plans to bring Deep Search, an offshoot of its Deep Research mode, to AI Mode. [...] Another new feature that's coming to AI Mode builds on the work Google did with Project Mariner, the web-surfing AI agent the company began previewing with "trusted testers" at the end of last year. This addition gives AI Mode the ability to complete tasks for you on the web. For example, you can ask it to find two affordable tickets for the next MLB game in your city. AI Mode will compare "hundreds of potential" tickets for you and return with a few of the best options. From there, you can complete a purchase without having done the comparison work yourself. [...] All of the new AI Mode features Google previewed today will be available to Labs users first before they roll out more broadly.

Google

Google's Gemini 2.5 Models Gain "Deep Think" Reasoning (venturebeat.com) 30

Google today unveiled significant upgrades to its Gemini 2.5 AI models, introducing an experimental "Deep Think" reasoning mode for 2.5 Pro that allows the model to consider multiple hypotheses before responding. The new capability has achieved impressive results on complex benchmarks, scoring highly on the 2025 USA Mathematical Olympiad and leading on LiveCodeBench, a competition-level coding benchmark. Gemini 2.5 Pro also tops the WebDev Arena leaderboard with an ELO score of 1420.
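Google has not published how Deep Think works internally, but the general "multiple hypotheses" idea resembles best-of-n selection: generate several candidate answers, score them, and keep the best. A purely conceptual sketch, with a hypothetical scoring task:

```python
import math

# Conceptual best-of-n sketch: produce several candidate answers, score
# each, return the highest-scoring one. This is NOT Google's published
# mechanism for Deep Think -- only an illustration of choosing among
# parallel hypotheses instead of committing to the first answer.
def best_of_n(candidates, score):
    return max(candidates, key=score)

# Hypothetical task: which candidate best approximates sqrt(2)?
candidates = [1.2, 1.4, 1.41, 1.5]
best = best_of_n(candidates, lambda c: -abs(c - math.sqrt(2)))
print(best)  # 1.41
```

The trade-off is exactly the one Hassabis alludes to: more candidates (more "time to think") costs more compute per query but tends to surface better answers.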

"Based on Google's experience with AlphaGo, AI model responses improve when they're given more time to think," said Demis Hassabis, CEO of Google DeepMind. The enhanced Gemini 2.5 Flash, Google's efficiency-focused model, has improved across reasoning, multimodality, and code benchmarks while using 20-30% fewer tokens. Both models now feature native audio capabilities with support for 24+ languages, thought summaries, and "thinking budgets" that let developers control token usage. Gemini 2.5 Flash is currently available in preview with general availability expected in early June, while Deep Think remains limited to trusted testers during safety evaluations.
Google

Google Brings AI-Powered Live Translation To Meet 19

Google is adding AI-powered live translation to Meet, enabling participants to converse in their native languages while the system automatically translates in real time with the speaker's original vocal characteristics intact. Initially launching with English-Spanish translation this week, the technology processes speech with minimal delay, preserving tone, cadence, and expressions -- creating an effect similar to professional dubbing but with the speaker's own voice, the company announced at its developer conference Tuesday.

In testing, the WSJ found occasional limitations: initial sentences sometimes appear garbled before smoothing out, context-dependent words like "match" might translate imperfectly (rendered as "fight" in Spanish), and the slight delay can create confusing crosstalk with multiple participants. Google plans to extend support to Italian, German, and Portuguese in the coming weeks. The feature is rolling out to Google AI Pro and Ultra subscribers now, with enterprise availability planned later this year. The company says that no meeting data is stored when translation is active, and conversation audio isn't used to train AI models.
AI

Apple's Next-Gen Version of Siri Is 'On Par' With ChatGPT 41

According to Bloomberg's Mark Gurman (paywalled), Apple has big plans to turn Siri into a true ChatGPT competitor. "A next-generation, chatbot version of Siri has reportedly made significant progress during testing over the past six months; some executives allegedly now see it as 'on par' with recent versions of ChatGPT," reports MacRumors. "Apple is also apparently discussing giving Siri the ability to access the internet to gather and synthesize data from multiple sources, just like ChatGPT." From the report: The report added that Apple now has artificial intelligence offices in Zurich, where employees are working on an all-new software architecture for Siri. This "monolithic model" is entirely built on an LLM engine that will eventually replace Siri's current "hybrid" architecture that has been incoherently layered up with different functionality over many years. The new model will make Siri more conversational and better at synthesizing information.

Google's Gemini is expected to be added to iOS 19 as an alternative to ChatGPT in Siri, but Apple is also apparently in talks with Perplexity to add their AI service as another option in the future, for both Siri and Safari search.
Google

Google Decided Against Offering Publishers Options In AI Search 14

An anonymous reader quotes a report from Bloomberg: While using website data to build a Google Search topped with artificial intelligence-generated answers, an Alphabet executive acknowledged in an internal document that there was an alternative way to do things: They could ask web publishers for permission, or let them directly opt out of being included. But giving publishers a choice would make training AI models in search too complicated, the company concluded in the document, which was unearthed in the company's search antitrust trial.

It said Google had a "hard red line": any publisher who wanted their content to show up on the search page would also have to let that content feed AI features. Instead of giving options, Google decided to "silently update," with "no public announcement" about how it was using publishers' data, according to the document, written by Chetna Bindra, a product management executive at Google Search. "Do what we say, say what we do, but carefully."
"It's a little bit damning," said Paul Bannister, the chief strategy officer at Raptive, which represents online creators. "It pretty clearly shows that they knew there was a range of options and they pretty much chose the most conservative, most protective of them -- the option that didn't give publishers any controls at all."

For its part, Google said in a statement to Bloomberg: "Publishers have always controlled how their content is made available to Google as AI models have been built into Search for many years, helping surface relevant sites and driving traffic to them. This document is an early-stage list of options in an evolving space and doesn't reflect feasibility or actual decisions." They added that Google continually updates its product documentation for search online.
Android

Google Launches NotebookLM App For Android and iOS 26

Google has launched the NotebookLM app for Android and iOS, offering a native mobile experience with offline support, audio overviews, and integration into the system share sheet for adding sources like PDFs and YouTube videos. 9to5Google reports: This native experience starts on a homepage of your notebooks with filters at the top for Recent, Shared, Title, and Downloaded. The app features a light and dark mode based on your device's system theme with no manual toggle. Each colorful card features the notebook name, emoji, number of sources, and date, as well as a play button for Audio Overviews. There's background playback and offline support for the podcast-style experience (the fullscreen player has a nice glow), while you can "Join" the AI hosts (in beta) to ask follow-up questions.

You get a "Create new" button at the bottom of the list to add PDFs, websites, YouTube videos, and text. Notably, the NotebookLM app will appear in the Android and iOS share sheet to quickly add sources. When you open a notebook, there's a bottom bar for the list of Sources, Chat Q&A, and Studio. It's similar to the current mobile website, with the native client letting users ditch the Progressive Web App. Out of the gate, there are phone and (straightforward) tablet interfaces.
You can download the app for iOS and Android using their respective links.
AI

How Miami Schools Are Leading 100,000 Students Into the A.I. Future 63

Miami-Dade County Public Schools, the nation's third-largest school district, is now deploying Google's Gemini chatbots to more than 105,000 high school students -- marking the largest U.S. school district AI deployment to date. This represents a dramatic reversal from just two years ago when the district blocked such tools over cheating and misinformation concerns.

The initiative follows President Trump's recent executive order promoting AI integration "in all subject areas" from kindergarten through 12th grade. District officials spent months testing various chatbots for accuracy, privacy, and safety before selecting Google's platform.
AI

Why We're Unlikely to Get Artificial General Intelligence Any Time Soon (msn.com) 261

OpenAI CEO Sam Altman believes Artificial General Intelligence could arrive within the next few years. But the speculations of some technologists "are getting ahead of reality," writes the New York Times, adding that many scientists "say no one will reach AGI without a new idea — something beyond the powerful neural networks that merely find patterns in data. That new idea could arrive tomorrow. But even then, the industry would need years to develop it." "The technology we're building today is not sufficient to get there," said Nick Frosst, a founder of the AI startup Cohere who previously worked as a researcher at Google and studied under the most revered AI researcher of the last 50 years. "What we are building now are things that take in words and predict the next most likely word, or they take in pixels and predict the next most likely pixel. That's very different from what you and I do." In a recent survey of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society that includes some of the most respected researchers in the field, more than three-quarters of respondents said the methods used to build today's technology were unlikely to lead to AGI.
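Frosst's description of next-word prediction can be made concrete with a toy model. The sketch below is a bare-bones bigram counter over a made-up corpus; real LLMs use neural networks trained on vast datasets, so this illustrates only the prediction objective, not the machinery:

```python
from collections import Counter, defaultdict

# Toy bigram "language model": for each word, count which words follow it,
# then predict the most frequent successor. A drastically simplified
# illustration of next-word prediction, not how modern LLMs work inside.
def train_bigrams(text):
    words = text.lower().split()
    successors = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        successors[current][nxt] += 1
    return successors

def predict_next(successors, word):
    counts = successors.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

corpus = "the cat sat on the mat and the cat ran to the door"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" -- it follows "the" most often
```

Even this trivial model shows the point being argued: it can only reproduce statistics of what it has already seen, and has no representation of anything outside its training text.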

Opinions differ in part because scientists cannot even agree on a way of defining human intelligence, arguing endlessly over the merits and flaws of IQ tests and other benchmarks. Comparing our own brains to machines is even more subjective. This means that identifying AGI is essentially a matter of opinion.... And scientists have no hard evidence that today's technologies are capable of performing even some of the simpler things the brain can do, like recognizing irony or feeling empathy. Claims of AGI's imminent arrival are based on statistical extrapolations — and wishful thinking. According to various benchmark tests, today's technologies are improving at a consistent rate in some notable areas, like math and computer programming. But these tests describe only a small part of what people can do.

Humans know how to deal with a chaotic and constantly changing world. Machines struggle to master the unexpected — the challenges, small and large, that do not look like what has happened in the past. Humans can dream up ideas that the world has never seen. Machines typically repeat or enhance what they have seen before. That is why Frosst and other sceptics say pushing machines to human-level intelligence will require at least one big idea that the world's technologists have not yet dreamed up. There is no way of knowing how long that will take. "A system that's better than humans in one way will not necessarily be better in other ways," Harvard University cognitive scientist Steven Pinker said. "There's just no such thing as an automatic, omniscient, omnipotent solver of every problem, including ones we haven't even thought of yet. There's a temptation to engage in a kind of magical thinking. But these systems are not miracles. They are very impressive gadgets."

While Google's AlphaGo could beat humans at a game with "a small, limited set of rules," the article points out that the real world "is bounded only by the laws of physics. Modelling the entirety of the real world is well beyond today's machines, so how can anyone be sure that AGI — let alone superintelligence — is just around the corner?" And they offer this alternative perspective from Matteo Pasquinelli, a professor of the philosophy of science at Ca' Foscari University in Venice, Italy.

"AI needs us: living beings, producing constantly, feeding the machine. It needs the originality of our ideas and our lives."
Open Source

OSU's Open Source Lab Eyes Infrastructure Upgrades and Sustainability After Recent Funding Success (osuosl.org) 11

It's a nonprofit that provides hosting for the Linux Foundation, the Apache Software Foundation, Drupal, Firefox, and 160 other projects — delivering nearly 430 terabytes of information every month. (It's currently hosting Debian, Fedora, and Gentoo Linux.) But hosting only provides about 20% of its income, with the rest coming from individual and corporate donors (including Google and IBM). "Over the past several years, we have been operating at a deficit due to a decline in corporate donations," the Open Source Lab's director announced in late April.

It's part of the CS/electrical engineering department at Oregon State University, and while the department "has generously filled this gap, recent changes in university funding makes our current funding model no longer sustainable. Unless we secure $250,000 in committed funds, the OSL will shut down later this year."

But "Thankfully, the call for support worked, paving the way for the OSU Open Source Lab to look ahead, into what the future holds for them," reports the blog It's FOSS.

"Following our OSL Future post, the community response has been incredible!" posted director Lance Albertson. "Thanks to your amazing support, our team is funded for the next year. This is a huge relief and lets us focus on building a truly self-sustaining OSL." To get there, we're tackling two big interconnected goals:

1. Finding a new, cost-effective physical home for our core infrastructure, ideally with more modern hardware.
2. Securing multi-year funding commitments to cover all our operations, including potential new infrastructure costs and hardware refreshes.


Our current data center is over 20 years old and needs to be replaced soon. With Oregon State University evaluating the future of this facility, it's very likely we'll need to relocate in the near future. While migrating to the State of Oregon's data center is one option, it comes with significant new costs. This makes finding free or very low-cost hosting (ideally between Eugene and Portland for ~13-20 racks) a huge opportunity for our long-term sustainability. More power-efficient hardware would also help us shrink our footprint.

Speaking of hardware, refreshing some of our older gear during a move would be a game-changer. We don't need brand new, but even a few-generations-old refurbished systems would boost performance and efficiency. (Huge thanks to the Yocto Project and Intel for a recent hardware donation that showed just how impactful this is!) The dream? A data center partner donating space and cycled-out hardware. Our overall infrastructure strategy is flexible. We're enhancing our OpenStack/Ceph platforms and exploring public cloud credits and other donated compute capacity. But whatever the resource, it needs to fit our goals and come with multi-year commitments for stability. And, a physical space still offers unique value, especially the invaluable hands-on data center experience for our students....

[O]ur big focus this next year is locking in ongoing support — think annualized pledges, different kinds of regular income, and other recurring help. This is vital, especially with potential new data center costs and hardware needs. Getting this right means we can stop worrying about short-term funding and plan for the future: investing in our tech and people, growing our awesome student programs, and serving the FOSS community. We're looking for partners, big and small, who get why foundational open source infrastructure matters and want to help us build this sustainable future together.

The It's FOSS blog adds that "With these prerequisites in place, the OSUOSL intends to expand their student program, strengthen their managed services portfolio for open source projects, introduce modern tooling like Kubernetes and Terraform, and encourage more community volunteers to actively contribute."

Thanks to long-time Slashdot reader I'm just joshin for suggesting the story.
Youtube

YouTube Announces Gemini AI Feature to Target Ads When Viewers are Most Engaged (techcrunch.com) 123

A new YouTube tool will let advertisers use Google's Gemini AI model to target ads to viewers when they're most engaged, reports CNBC: Peak Points has the potential to enable more impressions and a higher click-through rate on YouTube, a primary metric that determines how creators earn money on the video platform... Peak Points is currently in a pilot program and will be rolling out over the rest of the year.
The product "aims to benefit advertisers by using a tactic that aims to grab users' attention right when they're most invested in the content," reports TechCrunch: This approach appears to be similar to a strategy called emotion-based targeting, where advertisers place ads that align with the emotions evoked by the video. It's believed that when viewers experience heightened emotional states, it leads to better recall of the ads. However, viewers may find these interruptions frustrating, especially when they're deeply engaged in the emotional arc of a video and want the ad to be over quickly to resume watching.

In related news, YouTube announced another ad format that may be more appealing to users. The platform debuted a shoppable product feed where users can browse and purchase items during an ad.

Android

Google Restores Nextcloud Users' File Access on Android (arstechnica.com) 9

An anonymous reader shared this report from Ars Technica: Nextcloud, a host-your-own cloud platform that wants to help you "regain control over your data," has had to tell its Android-using customers for months now that they cannot upload files from their phone to their own servers. Months of emails and explanations to Google's Play Store representatives have yielded no changes, Nextcloud said in a blog post.

That blog post — and media coverage of it — seem to have moved the needle. In an update to the post, Nextcloud wrote that as of May 15, Google has offered to restore full file access permissions. "We are preparing a test release first (expected tonight) and a final update with all functionality restored. If no issues occur, the update will hopefully be out early next week," the Nextcloud team wrote....

[Nextcloud] told The Register that it had more than 800,000 Android users. The company's blog post goes further than pinpointing technical and support hurdles. "It is a clear example of Big Tech gatekeeping smaller software vendors, making the products of their competitors worse or unable to provide the same services as the giants themselves sell," Nextcloud's post states. "Big Tech is scared that small players like Nextcloud will disrupt them, like they once disrupted other companies. So they try to shut the door." Nextcloud is one of the leaders of an antitrust-minded movement against Microsoft's various integrated apps and services, having filed a complaint against the firm in 2021.

Programming

Rust Creator Graydon Hoare Thanks Its Many Stakeholders - and Mozilla - on Rust's 10th Anniversary (rustfoundation.org) 35

Thursday was Rust's 10-year anniversary for its first stable release. "To say I'm surprised by its trajectory would be a vast understatement," writes Rust's original creator Graydon Hoare. "I can only thank, congratulate, and celebrate everyone involved... In my view, Rust is a story about a large community of stakeholders coming together to design, build, maintain, and expand shared technical infrastructure." It's a story with many actors:

- The population of developers the language serves who express their needs and constraints through discussion, debate, testing, and bug reports arising from their experience writing libraries and applications.

- The language designers and implementers who work to satisfy those needs and constraints while wrestling with the unexpected consequences of each decision.

- The authors, educators, speakers, translators, illustrators, and others who work to expand the set of people able to use the infrastructure and work on the infrastructure.

- The institutions investing in the project who provide the long-term funding and support necessary to sustain all this work over decades.

All these actors have a common interest in infrastructure.

Rather than just "systems programming", Hoare sees Rust as a tool for building infrastructure itself, "the robust and reliable necessities that enable us to get our work done" — a wide range that includes everything from embedded and IoT systems to multi-core systems. So the story of "Rust's initial implementation, its sustained investment, and its remarkable resonance and uptake all happened because the world needs robust and reliable infrastructure, and the infrastructure we had was not up to the task." Put simply: it failed too often, in spectacular and expensive ways. Crashes and downtime in the best cases, and security vulnerabilities in the worst. Efficient "infrastructure-building" languages existed but they were very hard to use, and nearly impossible to use safely, especially when writing concurrent code. This produced an infrastructure deficit many people felt, if not everyone could name, and it was growing worse by the year as we placed ever-greater demands on computers to work in ever more challenging environments...
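The "nearly impossible to use safely, especially when writing concurrent code" point is precisely what Rust's type system targets: shared mutable state must go behind thread-safe wrappers, and the compiler rejects programs that try to share it any other way. A minimal sketch of the safe-concurrency story:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Eight threads increment a shared counter. The Arc (shared ownership)
// and Mutex (exclusive access) wrappers are required: removing either
// one is a compile error, not a latent data race discovered in production.
fn parallel_sum() -> u64 {
    let total = Arc::new(Mutex::new(0u64));
    let mut handles = Vec::new();
    for _ in 0..8 {
        let total = Arc::clone(&total);
        handles.push(thread::spawn(move || {
            for _ in 0..1_000 {
                *total.lock().unwrap() += 1;
            }
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let result = *total.lock().unwrap();
    result
}

fn main() {
    println!("{}", parallel_sum()); // always 8000 -- no lost updates
}
```

The equivalent unsynchronized C program compiles cleanly and silently loses updates; here the "fearless concurrency" guarantee is enforced before the program ever runs.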

We were stuck with the tools we had because building better tools like Rust was going to require an extraordinary investment of time, effort, and money. The bootstrap Rust compiler I initially wrote was just a few tens of thousands of lines of code; that was nearing the limits of what an unfunded solo hobby project can typically accomplish. Mozilla's decision to invest in Rust in 2009 immediately quadrupled the size of the team — it created a team in the first place — and then doubled it again, and again in subsequent years. Mozilla sustained this very unusual, very improbable investment in Rust from 2009-2020, as well as funding an entire browser engine written in Rust — Servo — from 2012 onwards, which served as a crucial testbed for Rust language features.

Hoare acknowledges that Rust and Servo had multiple contributors at Samsung, and that Amazon, Facebook, Google, Microsoft, Huawei, and others "hired key developers and contributed hardware and management resources to its ongoing development." Rust itself "sits atop LLVM" (developed by researchers at UIUC and later funded by Apple, Qualcomm, Google, ARM, Huawei, and many other organizations), while Rust's safe memory model "derives directly from decades of research in academia, as well as academic-industrial projects like Cyclone, built by AT&T Bell Labs and Cornell."

And there were contributions from "interns, researchers, and professors at top academic research programming-language departments, including CMU, NEU, IU, MPI-SWS, and many others." JetBrains and the Rust-Analyzer OpenCollective essentially paid for two additional interactive-incremental reimplementations of the Rust frontend to provide language services to IDEs — critical tools for productive, day-to-day programming. Hundreds of companies and other institutions contributed time and money to evaluate Rust for production, write Rust programs, test them, file bugs related to them, and pay their staff to fix or improve any shortcomings they found. Last but very much not least: Rust has had thousands and thousands of volunteers donating years of their labor to the project. While it might seem tempting to think this is all "free", it's being paid for! Just less visibly than if it were part of a corporate budget.

All this investment, despite the long time horizon, paid off. We're all better for it.

He looks ahead with hope for a future with new contributors, "steady and diversified streams of support," and continued reliability and compatibility (including "investment in ever-greater reliability technology, including the many emerging formal methods projects built on Rust.")

And he closes by saying Rust's "sustained, controlled, and frankly astonishing throughput of work" has "set a new standard for what good tools, good processes, and reliable infrastructure software should be like.

"Everyone involved should be proud of what they've built."
Facebook

Meta Argues Enshittification Isn't Real (arstechnica.com) 67

An anonymous reader quotes a report from Ars Technica: Meta thinks there's no reason to carry on with its defense after the Federal Trade Commission closed its monopoly case, and the company has moved to end the trial early by claiming that the FTC utterly failed to prove its case. "The FTC has no proof that Meta has monopoly power," Meta's motion for judgment (PDF) filed Thursday said, "and therefore the court should rule in favor of Meta." According to Meta, the FTC failed to show evidence that "the overall quality of Meta's apps has declined" or that the company shows too many ads to users. Meta says that's "fatal" to the FTC's case that the company wielded monopoly power to pursue more ad revenue while degrading user experience over time (an Internet trend known as "enshittification"). And on top of allegedly showing no evidence of "ad load, privacy, integrity, and features" degradation on Meta apps, Meta argued there's no precedent for an antitrust claim rooted in this alleged harm.

"Meta knows of no case finding monopoly power based solely on a claimed degradation in product quality, and the FTC has cited none," Meta argued. Meta has maintained throughout the trial that its users actually like seeing ads. In the company's recent motion, Meta argued that the FTC provided no insights into what "the right number of ads" should be, "let alone" provide proof that "Meta showed more ads" than it would in a competitive market where users could easily switch services if ad load became overwhelming. Further, Meta argued that the FTC did not show evidence that users sharing friends-and-family content were shown more ads. Meta noted that it "does not profit by showing more ads to users who do not click on them," so it only shows more ads to users who click ads.

Meta also insisted that there's "nothing but speculation" showing that Instagram or WhatsApp would have been better off or grown into rivals had Meta not acquired them. The company claimed that without Meta's resources, Instagram may have died off. Meta noted that Instagram co-founder Kevin Systrom testified that his app was "pretty broken and duct-taped" together, making it "vulnerable to spam" before Meta bought it. Rather than enshittification, what Meta did to Instagram could be considered "a consumer-welfare bonanza," Meta argued, while dismissing "smoking gun" emails from Mark Zuckerberg discussing buying Instagram to bury it as "legally irrelevant." Dismissing these as "a few dated emails," Meta argued that "efforts to litigate Mr. Zuckerberg's state of mind before the acquisition in 2012 are pointless."

"What matters is what Meta did," Meta argued, which was pump Instagram with resources that allowed it "to 'thrive' -- adding many new features, attracting hundreds of millions and then billions of users, and monetizing with great success." In the case of WhatsApp, Meta argued that nobody thinks WhatsApp had any intention to pivot to social media when the founders testified that their goal was to never add social features, preferring to offer a simple, clean messaging app. And Meta disputed any claim that it feared Google might buy WhatsApp as the basis for creating a Facebook rival, arguing that "the sole Meta witness to (supposedly) learn of Google's acquisition efforts testified that he did not have that worry."
In sum: A ruling in Meta's favor could prevent a breakup of its apps, while a denial would push the trial toward a possible order to divest Instagram and WhatsApp.
Android

Google Restores File Permissions For Nextcloud (nextcloud.com) 11

Longtime Slashdot reader mprindle writes: Nextcloud has been in an ongoing battle with Google over the tech giant revoking the All Files permission from the Nextcloud Android app, which prevented users from managing the files on their server. After a blog post and several tech sites reported on the issue, "Google reached out to us [Nextcloud] and offered to restore the permission, which will give users back the functionality that was lost." Nextcloud is working on an app update and hopes to have it pushed out within a week.

Slashdot Top Deals