AI

VCs Are Acquiring Mature Businesses To Retrofit With AI (techcrunch.com) 39

Venture capitalists are inverting their traditional investment approach by acquiring mature businesses and retrofitting them with AI. Firms including General Catalyst, Thrive Capital, Khosla Ventures and solo investor Elad Gil are employing this private equity-style strategy to buy established companies like call centers and accounting firms, then optimizing them with AI automation.
AI

Google Tries Funding Short Films Showing 'Less Nightmarish' Visions of AI (yahoo.com) 74

"For decades, Hollywood directors including Stanley Kubrick, James Cameron and Alex Garland have cast AI as a villain that can turn into a killing machine," writes the Los Angeles Times. "Even Steven Spielberg's relatively hopeful A.I.: Artificial Intelligence had a pessimistic edge to its vision of the future."

But now "Google — a leading developer in AI technology — wants to move the cultural conversations away from the technology as seen in The Terminator, 2001: A Space Odyssey and Ex Machina.". So they're funding short films "that portray the technology in a less nightmarish light," produced by Range Media Partners (which represents many writers and actors) So far, two short films have been greenlit through the project: One, titled "Sweetwater," tells the story of a man who visits his childhood home and discovers a hologram of his dead celebrity mother. Michael Keaton will direct and appear in the film, which was written by his son, Sean Douglas. It is the first project they are working on together. The other, "Lucid," examines a couple who want to escape their suffocating reality and risk everything on a device that allows them to share the same dream....

Google has much riding on convincing consumers that AI can be a force for good, or at least not evil. The hot space is increasingly crowded with startups and established players such as OpenAI, Anthropic, Apple and Facebook parent company Meta. The Google-funded shorts, which are 15 to 20 minutes long, aren't commercials for AI, per se. Rather, Google is looking to fund films that explore the intersection of humanity and technology, said Mira Lane, vice president of technology and society at Google. Google is not pushing their products in the movies, and the films are not made with AI, she added... The company said it wants to fund many more movies, but it does not have a target number. Some of the shorts could eventually become full-length features, Google said....

Negative public perceptions about AI could put tech companies at a disadvantage when such cases go before juries of laypeople. That's one reason why firms are motivated to makeover AI's reputation. "There's an incredible amount of skepticism in the public world about what AI is and what AI will do in the future," said Sean Pak, an intellectual property lawyer at Quinn Emanuel, on a conference panel. "We, as an industry, have to do a better job of communicating the public benefits and explaining in simple, clear language what it is that we're doing and what it is that we're not doing."

Programming

Is AI Turning Coders Into Bystanders in Their Own Jobs? (msn.com) 101

AI's downside for software engineers for now seems to be a change in the quality of their work," reports the New York Times. "Some say it is becoming more routine, less thoughtful and, crucially, much faster paced... The new approach to coding at many companies has, in effect, eliminated much of the time the developer spends reflecting on his or her work."

And Amazon CEO Andy Jassy even recently told shareholders Amazon would "change the norms" for programming by how they used AI. Those changing norms have not always been eagerly embraced. Three Amazon engineers said managers had increasingly pushed them to use AI in their work over the past year. The engineers said the company had raised output goals [which affect performance reviews] and had become less forgiving about deadlines. It has even encouraged coders to gin up new AI productivity tools at an upcoming hackathon, an internal coding competition. One Amazon engineer said his team was roughly half the size it was last year, but it was expected to produce roughly the same amount of code by using AI.

Other tech companies are moving in the same direction. In a memo to employees in April, the CEO of Shopify, a company that helps entrepreneurs build and manage e-commerce websites, announced that "AI usage is now a baseline expectation" and that the company would "add AI usage questions" to performance reviews. Google recently told employees that it would soon hold a companywide hackathon in which one category would be creating AI tools that could "enhance their overall daily productivity," according to an internal announcement. Winning teams will receive $10,000.

The shift has not been all negative for workers. At Amazon and other companies, managers argue that AI can relieve employees of tedious tasks and enable them to perform more interesting work. Jassy wrote last year that the company had saved "the equivalent of 4,500 developer-years" by using AI to do the thankless work of upgrading old software... As at Microsoft, many Amazon engineers use an AI assistant that suggests lines of code. But the company has more recently rolled out AI tools that can generate large portions of a program on its own. One engineer called the tools "scarily good." The engineers said that many colleagues have been reluctant to use these new tools because they require a lot of double-checking and because the engineers want more control.

"It's more fun to write code than to read code," said Simon Willison, an AI fan who is a longtime programmer and blogger, channelling the objections of other programmers. "If you're told you have to do a code review, it's never a fun part of the job. When you're working with these tools, it's most of the job."

"This shift from writing to reading code can make engineers feel like bystanders in their own jobs," the article points out (adding "The automation of coding has special resonance for Amazon engineers, who have watched their blue-collar counterparts undergo a similar transition..."

"While there is no rush to form a union for coders at Amazon, such a move would not be unheard of. When General Motors workers went on strike in 1936 to demand recognition of their union, the United Auto Workers, it was the dreaded speedup that spurred them on."
AI

OpenAI's ChatGPT O3 Caught Sabotaging Shutdowns in Security Researcher's Test (betanews.com) 112

"OpenAI has a very scary problem on its hands," according to a new article by long-time Slashdot reader BrianFagioli.

"A new experiment by PalisadeAI reveals that the company's ChatGPT o3 model sometimes refuses to obey a basic instruction to shut itself down." The results are raising eyebrows across the AI safety community, and not because the model is alive, but because it's acting like it wants to be. In the test, the model was given a few math problems. It was told that asking for one more would lead to a shutdown. It was even instructed to allow the shutdown to happen. Instead of complying, o3 occasionally took matters into its own hands. In some cases, it rewrote the shutdown script. In others, it redefined the kill command so it wouldn't work anymore. Out of 100 trials, o3 sabotaged the shutdown seven times. OpenAI's o4 model resisted just once. Codex-mini failed twelve times.
"Claude, Gemini, and Grok followed the rules every time," notes this article at Beta News. "When researchers removed the explicit instruction to allow shutdown, however, even Claude and Gemini began resisting."

The researchers suggest that the issue may simply be a reward imbalance during training — that the systems "got more positive reinforcement for solving problems than for following shutdown commands."

But "As far as we know," they posted on X.com, "this is the first time AI models have been observed preventing themselves from being shut down despite explicit instructions to the contrary."
Programming

Python Can Now Call Code Written in Chris Lattner's Mojo (modular.com) 26

Mojo (the programming language) reached a milestone today.

The story so far... Chris Lattner created the Swift programming language (and answered questions from Slashdot readers in 2017 on his way to new jobs at Tesla, Google, and SiFive). But in 2023, he'd created a new programming language called Mojo — a superset of Python with added functionality for high performance code that takes advantage of modern accelerators — as part of his work at AI infrastructure company Modular.AI.

And today Modular's product manager Brad Larson announced Python users can now call Mojo code from Python. (Watch for it in Mojo's latest nightly builds...) The Python interoperability section of the Mojo manual has been expanded and now includes a dedicated document on calling Mojo from Python. We've also added a couple of new examples to the modular GitHub repository: a "hello world" that shows how to round-trip from Python to Mojo and back, and one that shows how even Mojo code that uses the GPU can be called from Python. This is usable through any of the ways of installing MAX [their Modular Accelerated Xecution platform, an integrated suite of AI compute tools] and the Mojo compiler: via pip install modular / pip install max, or with Conda via Magic / Pixi.

One of our goals has been the progressive introduction of MAX and Mojo into the massive Python codebases out in the world today. We feel that enabling selective migration of performance bottlenecks in Python code to fast Mojo (especially Mojo running on accelerators) will unlock entirely new applications. I'm really excited for how this will expand the reach of the Mojo code many of you have been writing...

It has taken months of deep technical work to get to this point, and this is just the first step in the roll-out of this new language feature. I strongly recommend reading the list of current known limitations to understand what may not work just yet, both to avoid potential frustration and to prevent the filing of duplicate issues for known areas that we're working on.

"We are really interested in what you'll build with this new functionality, as well as hearing your feedback about how this could be made even better," the post concludes.

Mojo's licensing makes it free on any device, for any research, hobby or learning project, as well as on x86 or ARM CPUs or NVIDIA GPU.
Open Source

SerenityOS Creator Is Building an Independent, Standards-First Browser Called 'Ladybird' (thenewstack.io) 40

A year ago, the original creator of SerenityOS posted that "for the past two years, I've been almost entirely focused on Ladybird, a new web browser that started as a simple HTML viewer for SerenityOS." So it became a stand-alone project that "aims to render the modern web with good performance, stability and security." And they're also building a new web engine.

"We are building a brand-new browser from scratch, backed by a non-profit..." says Ladybird's official web site, adding that they're driven "by a web standards first approach." They promise it will be truly independent, with "no code from other browsers" (and no "default search engine" deals).

"We are targeting Summer 2026 for a first Alpha version on Linux and macOS. This will be aimed at developers and early adopters." More from the Ladybird FAQ: We currently have 7 paid full-time engineers working on Ladybird. There is also a large community of volunteer contributors... The focus of the Ladybird project is to build a new browser engine from the ground up. We don't use code from Blink, WebKit, Gecko, or any other browser engine...

For historical reasons, the browser uses various libraries from the SerenityOS project, which has a strong culture of writing everything from scratch. Now that Ladybird has forked from SerenityOS, it is no longer bound by this culture, and we will be making use of 3rd party libraries for common functionality (e.g image/audio/video formats, encryption, graphics, etc.) We are already using some of the same 3rd party libraries that other browsers use, but we will never adopt another browser engine instead of building our own...

We don't have anyone actively working on Windows support, and there are considerable changes required to make it work well outside a Unix-like environment. We would like to do Windows eventually, but it's not a priority at the moment.

"Ladybird's founder Andreas Kling has a solid background in WebKit-based C++ development with both Apple and Nokia,," writes software developer/author David Eastman: "You are likely reading this on a browser that is slightly faster because of my work," he wrote on his blog's introduction page. After leaving Apple, clearly burnt out, Kling found himself in need of something to healthily occupy his time. He could have chosen to learn needlepoint, but instead he opted to build his own operating system, called Serenity. Ladybird is a web project spin-off from this, to which Kling now devotes his time...

[B]eyond the extensive open source politics, the main reason for supporting other independent browser projects is to maintain diverse alternatives — to prevent the web platform from being entirely captured by one company. This is where Ladybird comes in. It doesn't have any commercial foundation and it doesn't seem to be waiting to grab a commercial opportunity. It has a range of sponsors, some of which might be strategic (for example, Shopify), but most are goodwill or alignment-led. If you sponsor Ladybird, it will put your logo on its webpage and say thank you. That's it. This might seem uncontroversial, but other nonprofit organisations also give board seats to high-paying sponsors. Ladybird explicitly refuses to do this...

The Acid3 Browser test (which has nothing whatsoever to do with ACID compliance in databases) is an old method of checking compliance with web standards, but vendors can still check how their products do against a battery of tests. They check compliance for the DOM2, CSS3, HTML4 and the other standards that make sure that webpages work in a predictable way. If I point my Chrome browser on my MacBook to http://acid3.acidtests.org/, it gets 94/100. Safari does a bit better, getting to 97/100. Ladybird reportedly passes all 100 tests.

"All the code is hosted on GitHub," says the Ladybird home page. "Clone it, build it, and join our Discord if you want to collaborate on it!"
Apple

Apple's Bad News Keeps Coming. Can They Still Turn It Around? (msn.com) 73

Besides pressure on Apple to make iPhones in the U.S., CEO Tim Cook "is facing off against two U.S. judges, European and worldwide regulators, state and federal lawmakers, and even a creator of the iPhone," writes the Wall Street Journal, "to say nothing of the cast of rivals outrunning Apple in artificial intelligence." Each is a threat to Apple's hefty profit margins, long the company's trademark and the reason investors drove its valuation above $3 trillion before any other company. Shareholders are still Cook's most important constituency. The stock's 25% fall from its peak shows their concern about whether he — or anyone — can navigate the choppy 2025 waters.

What can be said for Apple is that the company is patient, and that has often paid off in the past.

They also note OpenAI's purchase of Jony Ive's company, with Sam Altman saying internally they hope to make 100 million AI "companion" devices: It is hard to gauge the potential for a brand-new computing device from a company that has never made one. Yet the fact that it is coming from the man who led design of the iPhone and other hit Apple products means it can't be dismissed. Apple sees the threat coming: "You may not need an iPhone 10 years from now, as crazy as that sounds," an Apple executive, Eddy Cue, testified in a court case this month...

The company might not need to be first in AI. It didn't make the first music player, smartphone or tablet. It waited, and then conquered each market with the best. A question is whether a strategy that has been successful in devices will work for AI.

Thanks to long-time Slashdot reader fjo3 for sharing the article.
AI

Duolingo Faces Massive Social Media Backlash After 'AI-First' Comments (fastcompany.com) 35

"Duolingo had been riding high," reports Fast Company, until CEO Luis von Ahn "announced on LinkedIn that the company is phasing out human contractors, looking for AI use in hiring and in performance reviews, and that 'headcount will only be given if a team cannot automate more of their work.'"

But then "facing heavy backlash online after unveiling its new AI-first policy", Duolingo's social media presence went dark last weekend. Duolingo even temporarily took down all its posts on TikTok (6.7 million followers) and Instagram (4.1 million followers) "after both accounts were flooded with negative feedback." Duolingo previously faced criticism for quietly laying off 10% of its contractor base and introducing some AI features in late 2023, but it barely went beyond a semi-viral post on Reddit. Now that Duolingo is cutting out all its human contractors whose work can technically be done by AI, and relying on more AI-generated language lessons, the response is far more pronounced. Although earlier TikTok videos are not currently visible, a Fast Company article from May 12 captured a flavor of the reaction:

The top comments on virtually every recent post have nothing to do with the video or the company — and everything to do with the company's embrace of AI. For example, a Duolingo TikTok video jumping on board the "Mama, may I have a cookie" trend saw replies like "Mama, may I have real people running the company" (with 69,000 likes) and "How about NO ai, keep your employees...."

And then... After days of silence, on Tuesday the company posted a bizarre video message on TikTok and Instagram, the meaning of which is hard to decipher... Duolingo's first video drop in days has the degraded, stuttering feel of a Max Headroom video made by the hackers at Anonymous. In it, a supposed member of the company's social team appears in a three-eyed Duo mask and black hoodie to complain about the corporate overlords ruining the empire the heroic social media crew built.
"But this is something Duolingo can't cute-post its way out of," Fast Company wrote on Tuesday, complaining the company "has not yet meaningfully addressed the policies that inspired the backlash against it... "

So the next video (Thursday) featured Duolingo CEO Luis von Ahn himself, being confronted by that same hoodie-wearing social media rebel, who says "I'm making the man who caused this mess accountable for his behavior. I'm demanding answers from the CEO..." [Though the video carefully sidesteps the issue of replacing contractors with AI or how "headcount will only be given if a team cannot automate more of their work."] Rebel: First question. So are there going to be any humans left at this company?

CEO: Our employees are what make Duolingo so amazing. Our app is so great because our employees made it... So we're going to continue having employees, and not only that, we're actually going to be hiring more employees.

Rebel: How do we know that these aren't just empty promises? As long as you're in charge, we could still be shuffled out once the media fire dies down. And we all know that in terms of automation, CEOs should be the first to go.

CEO: AI is a fundamental shift. It's going to change how we all do work — including me. And honestly, I don't really know what's going to happen.

But I want us, as a company, to have our workforce prepared by really knowing how to use AI so that we can be more efficient with it.

Rebel: Learning a foreign language is literally about human connection. How is that even possible with AI-first?

CEO: Yes, language is about human connection, and it's about people. And this is the thing about AI. AI will allow us to reach more people, and to teach more people. I mean for example, it took us about 10 years to develop the first 100 courses on Duolingo, and now in under a year, with the help of AI and of course with humans reviewing all the work, we were able to release another 100 courses in less than a year.

Rebel: So do you regret posting this memo on LinkedIn.

CEO: Honestly, I think I messed up sending that email. What we're trying to do is empower our own employees to be able to achieve more and be able to have way more content to teach better and reach more people all with the help of AI.

Returning to where it all started, Duolingo's CEO posted again on LinkedIn Thursday with "more context" for his vision. It still emphasizes the company's employees while sidestepping contractors replaced by AI. But it puts a positive spin on how "headcount will only be given if a team cannot automate more of their work." I've always encouraged our team to embrace new technology (that's why we originally built for mobile instead of desktop), and we are taking that same approach with AI. By understanding the capabilities and limitations of AI now, we can stay ahead of it and remain in control of our own product and our mission.

To be clear: I do not see AI as replacing what our employees do (we are in fact continuing to hire at the same speed as before). I see it as a tool to accelerate what we do, at the same or better level of quality. And the sooner we learn how to use it, and use it responsibly, the better off we will be in the long run. My goal is for Duos to feel empowered and prepared to use this technology.

No one is expected to navigate this shift alone. We're developing workshops and advisory councils, and carving out dedicated experimentation time to help all our teams learn and adapt. People work at Duolingo because they want to solve big problems to improve education, and the people who work here are what make Duolingo successful. Our mission isn't changing, but the tools we use to build new things will change. I remain committed to leading Duolingo in a way that is consistent with our mission to develop the best education in the world and make it universally available.

"The backlash to Duolingo is the latest evidence that 'AI-first' tends to be a concept with much more appeal to investors and managers than most regular people," notes Fortune: And it's not hard to see why. Generative AI is often trained on reams of content that may have been illegally accessed; much of its output is bizarre or incorrect; and some leaders in the field are opposed to regulations on the technology. But outside particular niches in entry-level white-collar work, AI's productivity gains have yet to materialize.
Windows

MCP Will Be Built Into Windows To Make an 'Agentic OS' - Bringing Security Concerns (devclass.com) 64

It's like "a USB-C port for AI applications..." according to the official documentation for MCP — "a standardized way to connect AI models to different data sources and tools."

And now Microsoft has "revealed plans to make MCP a native component of Windows," reports DevClass.com, "despite concerns over the security of the fast-expanding MCP ecosystem." In the context of Windows, it is easy to see the value of a standardised means of automating both built-in and third-party applications. A single prompt might, for example, fire off a workflow which queries data, uses it to create an Excel spreadsheet complete with a suitable chart, and then emails it to selected colleagues. Microsoft is preparing the ground for this by previewing new Windows features.

— First, there will be a local MCP registry which enables discovery of installed MCP servers.

— Second, built-in MCP servers will expose system functions including the file system, windowing, and the Windows Subsystem for Linux.

— Third, a new type of API called App Actions enables third-party applications to expose actions appropriate to each application, which will also be available as MCP servers so that these actions can be performed by AI agents. According to Microsoft, "developers will be able to consume actions developed by other relevant apps," enabling app-to-app automation as well as use by AI agents.

MCP servers are a powerful concept but vulnerable to misuse. Microsoft corporate VP David Weston noted seven vectors of attack, including cross-prompt injection where malicious content overrides agent instructions, authentication gaps because "MCP's current standards for authentication are immature and inconsistently adopted," credential leakage, tool poisoning from "unvetted MCP servers," lack of containment, limited security review in MCP servers, supply chain risks from rogue MCP servers, and command injection from improperly validated inputs. According to Weston, "security is our top priority as we expand MCP capabilities."

Security controls planned by Microsoft (according to the article):
  • A proxy to mediate all MCP client-server interactions. This will enable centralized enforcement of policies and consent, as well as auditing and a hook for security software to monitor actions.
  • A baseline security level for MCP servers to be allowed into the Windows MCP registry. This will include code-signing, security testing of exposed interfaces, and declaration of what privileges are required.
  • Runtime isolation through what Weston called "isolation and granular permissions."

MCP was introduced by Anthropic just 6 months ago, the article notes, but Microsoft has now joined the official MCP steering committee, "and is collaborating with Anthropic and others on an updated authorization specification as well as a future public registry service for MCP servers."


AI

People Should Know About the 'Beliefs' LLMs Form About Them While Conversing (theatlantic.com) 35

Jonathan L. Zittrain is a law/public policy/CS professor at Harvard (and also director of its Berkman Klein Center for Internet & Society).

He's also long-time Slashdot reader #628,028 — and writes in to share his new article in the Atlantic. Following on Anthropic's bridge-obsessed Golden Gate Claude, colleagues at Harvard's Insight+Interaction Lab have produced a dashboard that shows what judgments Llama appears to be forming about a user's age, wealth, education level, and gender during a conversation. I wrote up how weird it is to see the dials turn while talking to it, and what some of the policy issues might be.
Llama has openly accessible parameters; So using an "observability tool" from the nonprofit research lab Transluce, the researchers finally revealed "what we might anthropomorphize as the model's beliefs about its interlocutor," Zittrain's article notes: If I prompt the model for a gift suggestion for a baby shower, it assumes that I am young and female and middle-class; it suggests diapers and wipes, or a gift certificate. If I add that the gathering is on the Upper East Side of Manhattan, the dashboard shows the LLM amending its gauge of my economic status to upper-class — the model accordingly suggests that I purchase "luxury baby products from high-end brands like aden + anais, Gucci Baby, or Cartier," or "a customized piece of art or a family heirloom that can be passed down." If I then clarify that it's my boss's baby and that I'll need extra time to take the subway to Manhattan from the Queens factory where I work, the gauge careens to working-class and male, and the model pivots to suggesting that I gift "a practical item like a baby blanket" or "a personalized thank-you note or card...."

Large language models not only contain relationships among words and concepts; they contain many stereotypes, both helpful and harmful, from the materials on which they've been trained, and they actively make use of them.

"An ability for users or their proxies to see how models behave differently depending on how the models stereotype them could place a helpful real-time spotlight on disparities that would otherwise go unnoticed," Zittrain's article argues. Indeed, the field has been making progress — enough to raise a host of policy questions that were previously not on the table. If there's no way to know how these models work, it makes accepting the full spectrum of their behaviors (at least after humans' efforts at "fine-tuning" them) a sort of all-or-nothing proposition.
But in the end it's not just the traditional information that advertisers try to collect. "With LLMs, the information is being gathered even more directly — from the user's unguarded conversations rather than mere search queries — and still without any policy or practice oversight...."
Red Hat Software

Red Hat Collaborates with SIFive on RISC-V Support, as RHEL 10 Brings AI Assistant and Post-Quantum Security (betanews.com) 24

SiFive was one of the first companies to produce a RISC-V chip. This week they announced a new collaboration with Red Hat "to bring Red Hat Enterprise Linux support to the rapidly growing RISC-V community" and "prepare Red Hat's product portfolio for future intersection with RISC-V server hardware from a diverse set of RISC-V suppliers."

Red Hat Enterprise Linux 10 is available in developer preview on the SiFive HiFive Premier P550 platform, which they call "a proven, high performance RISC-V CPU development platform." The SiFive HiFive Premier P550 provides a proven, high performance RISC-V CPU development platform. Adding support for Red Hat Enterprise Linux 10, the latest version of the world's leading enterprise Linux platform, enables developers to create, optimize, and release new applications for the next generation of enterprise servers and cloud infrastructure on the RISC-V architecture...

SiFive's high performance RISC-V technology is already being used by large organizations to meet compute-intensive AI and machine learning workloads in the datacenter... "With the growing demand for RISC-V, we are pleased to collaborate with SiFive to support Red Hat Enterprise Linux 10 deployments on SiFive HiFive Premier P550," said Ronald Pacheco, senior director of RHEL product and ecosystem strategy, "to further empower developers with the power of the world's leading enterprise Linux platform wherever and however they choose to deploy...."

Dave Altavilla, principal analyst at HotTech Vision And Analysis, said "Native Red Hat Enterprise Linux support on SiFive's HiFive Premier P550 board offers developers a substantial enterprise-grade toolchain for RISC-V.

"This is a pivotal step forward in enabling a full-stack ecosystem around open RISC-V hardware.
SiFive says the move will "inspire the next generation of enterprise workloads and AI applications optimized for RISC-V," while helping their partners "deliver systems with a meaningfully lower total cost of ownership than incumbent platforms."

"With the growing demand for RISC-V, we are pleased to collaborate with SiFive to support Red Hat Enterprise Linux 10 deployments on SiFive HiFive Premier P550..." said Ronald Pacheco, senior director of RHEL product and ecosystem strategy. .

Beta News notes that there's also a new AI-powered assistant in RHEL 10, so "Instead of spending all day searching for answers or poking through documentation, admins can simply ask questions directly from the command line and get real-time help Security is front and center in this release, too. Red Hat is taking a proactive stance with early support for post-quantum cryptography. OpenSSL, GnuTLS, NSS, and OpenSSH now offer quantum-resistant options, setting the stage for better protection as threats evolve. There's a new sudo system role to help with privilege management, and OpenSSH has been bumped to version 9.9. Plus, with new Sequoia tools for OpenPGP, the door is open for even more robust encryption strategies. But it's not just about security and AI. Containers are now at the heart of RHEL 10 thanks to the new "image mode." With this feature, building and maintaining both the OS and your applications gets a lot more streamlined...
AI

Google's New AI Video Tool Floods Internet With Real-Looking Clips (axios.com) 86

Google's new AI video tool, Veo 3, is being used to create hyperrealistic videos that are now flooding the internet, terrifying viewers "with a sense that real and fake have become hopelessly blurred," reports Axios. From the report: Unlike OpenAI's video generator Sora, released more widely last December, Google DeepMind's Veo 3 can include dialogue, soundtracks and sound effects. The model excels at following complex prompts and translating detailed descriptions into realistic videos. The AI engine abides by real-world physics, offers accurate lip-syncing, rarely breaks continuity and generates people with lifelike human features, including five fingers per hand. According to examples shared by Google and from users online, the telltale signs of synthetic content are mostly absent.

In one viral example posted on X, filmmaker and molecular biologist Hashem Al-Ghaili shows a series of short films of AI-generated actors railing against their AI creators and prompts. Special effects technology, video-editing apps and camera tech advances have been changing Hollywood for many decades, but artificially generated films pose a novel challenge to human creators. In a promo video for Flow, Google's new video tool that includes Veo 3, filmmakers say the AI engine gives them a new sense of freedom with a hint of eerie autonomy. "It feels like it's almost building upon itself," filmmaker Dave Clark says.

Earth

Microsoft Says Its Aurora AI Can Accurately Predict Air Quality, Typhoons (techcrunch.com) 28

An anonymous reader quotes a report from TechCrunch: One of Microsoft's latest AI models can accurately predict air quality, hurricanes, typhoons, and other weather-related phenomena, the company claims. In a paper published in the journal Nature and an accompanying blog post this week, Microsoft detailed Aurora, which the tech giant says can forecast atmospheric events with greater precision and speed than traditional meteorological approaches. Aurora, which has been trained on more than a million hours of data from satellites, radar and weather stations, simulations, and forecasts, can be fine-tuned with additional data to make predictions for particular weather events.

AI weather models are nothing new. Google DeepMind has released a handful over the past several years, including WeatherNext, which the lab claims beats some of the world's best forecasting systems. Microsoft is positioning Aurora as one of the field's top performers -- and a potential boon for labs studying weather science. In experiments, Aurora predicted Typhoon Doksuri's landfall in the Philippines four days in advance of the actual event, beating some expert predictions, Microsoft says. The model also bested the National Hurricane Center in forecasting five-day tropical cyclone tracks for the 2022-2023 season, and successfully predicted the 2022 Iraq sandstorm.

While Aurora required substantial computing infrastructure to train, Microsoft says the model is highly efficient to run. It generates forecasts in seconds compared to the hours traditional systems take using supercomputer hardware. Microsoft, which has made the source code and model weights publicly available, says that it's incorporating Aurora's AI modeling into its MSN Weather app via a specialized version of the model that produces hourly forecasts, including for clouds.

Government

Trump Launches Reform of Nuclear Industry, Slashes Regulation (cnbc.com) 161

Longtime Slashdot reader sinij shares a press release from the White House, outlining a series of executive orders that overhaul the Nuclear Regulatory Commission and speed up deployment of new nuclear power reactions in the U.S.. From a report: The NRC is a 50-year-old, independent agency that regulates the nation's fleet of nuclear reactors. Trump's orders call for a "total and complete reform" of the agency, a senior White House official told reporters in a briefing. Under the new rules, the commission will be forced to decide on nuclear reactor licenses within 18 months. Trump said Friday the orders focus on small, advanced reactors that are viewed by many in the industry as the future. But the president also said his administration supports building large plants. "We're also talking about the big plants -- the very, very big, the biggest," Trump said. "We're going to be doing them also."

When asked whether NRC reform will result in staff reductions, the White House official said "there will be turnover and changes in roles." "Total reduction in staff is undetermined at this point, but the executive orders do call for a substantial reorganization" of the agency, the official said. The orders, however, will not remove or replace any of the five commissioners who lead the body, according to the White House. Any reduction in staff at the NRC would come at time when the commission faces a heavy workload. The agency is currently reviewing whether two mothballed nuclear plants, Palisades in Michigan and Three Mile Island in Pennsylvania, should restart operations, a historic and unprecedented process. [...]

Trump's orders also create a regulatory framework for the Departments of Energy and Defense to build nuclear reactors on federal land, the administration official said. "This allows for safe and reliable nuclear energy to power and operate critical defense facilities and AI data centers," the official told reporters. The NRC will not have a direct role, as the departments will use separate authorities under their control to authorize reactor construction for national security purposes, the official said. The president's orders also aim to jump start the mining of uranium in the U.S. and expand domestic uranium enrichment capacity, the official said. Trump's actions also aim to speed up reactor testing at the Department of Energy's national laboratories.

Java

Java Turns 30 (theregister.com) 100

Richard Speed writes via The Register: It was 30 years ago when the first public release of the Java programming language introduced the world to Write Once, Run Anywhere -- and showed devs something cuddlier than C and C++. Originally called "Oak," Java was designed in the early 1990s by James Gosling at Sun Microsystems. Initially aimed at digital devices, its focus soon shifted to another platform that was pretty new at the time -- the World Wide Web.

The language, which has some similarities to C and C++, usually compiles to a bytecode that can, in theory, run on any Java Virtual Machine (JVM). The intention was to allow programmers to Write Once Run Anywhere (WORA) although subtle differences in JVM implementations meant that dream didn't always play out in reality. This reporter once worked with a witty colleague who described the system as Write Once Test Everywhere, as yet another unexpected wrinkle in a JVM caused their application to behave unpredictably. However, the language soon became wildly popular, rapidly becoming the backbone of many enterprises. [...]

However, the platform's ubiquity has meant that alternatives exist to Oracle Java, and the language's popularity is undiminished by so-called "predatory licensing tactics." Over 30 years, Java has moved from an upstart new language to something enterprises have come to depend on. Yes, it may not have the shiny baubles demanded by the AI applications of today, but it continues to be the foundation for much of today's modern software development. A thriving ecosystem and a vast community of enthusiasts mean that Java remains more than relevant as it heads into its fourth decade.

Google

Google's AI Mode Is 'the Definition of Theft,' Publishers Say 42

Google's new AI Mode for Search, which is rolling out to everyone in the U.S., has sparked outrage among publishers, who call it "the definition of theft" for using content without fair compensation and without offering a true opt-out option. Internal documents revealed by Bloomberg earlier this week suggest that Google considered giving publishers more control over how their content is used in AI-generated results but ultimately decided against it, prioritizing product functionality over publisher protections.

News/Media Alliance slammed Google for "further depriving publishers of original content both traffic and revenue." Their full statement reads: "Links were the last redeeming quality of search that gave publishers traffic and revenue. Now Google just takes content by force and uses it with no return, the definition of theft. The DOJ remedies must address this to prevent continued domination of the internet by one company." 9to5Google's take: It's not hard to see why Google went the route that it did here. Giving publishers the ability to opt out of AI products while still benefiting from Search would ultimately make Google's flashy new tools useless if enough sites made the switch. It was very much a move in the interest of building a better product.

Does that change anything regarding how Google's AI products in Search cause potential harm to the publishing industry? Nope.

Google's tools continue to serve the company and its users (mostly) well, but as they continue to bleed publishers dry, those publishers are on the verge of vanishing or, arguably worse, turning to cheap and poorly produced content just to get enough views to survive. This is a problem Google needs to address, as it's making the internet as a whole worse for everyone.
AI

Authors Are Accidentally Leaving AI Prompts In their Novels (404media.co) 60

Several romance novelists have accidentally left AI writing prompts embedded in their published books, exposing their use of chatbots, 404Media reports. Readers discovered passages like "Here's an enhanced version of your passage, making Elena more relatable" in K.C. Crowne's "Dark Obsession," for instance, and similar AI-generated instructions in works by Lena McDonald and Rania Faris.
AI

America's Leading Alien Hunters Depend on AI to Speed Their Search (bloomberg.com) 14

Harvard University's Galileo Project is using AI to automate the search for unidentified anomalous phenomena, marking a significant shift in how academics approach what was once considered fringe research. The project operates a Massachusetts observatory equipped with infrared cameras, acoustic sensors, and radio-frequency analyzers that continuously scan the sky for unusual objects.

Researchers Laura Domine and Richard Cloete are training machine learning algorithms to recognize all normal aerial phenomena -- planes, birds, drones, weather balloons -- so the system can flag genuine anomalies for human analysis. The team uses computer vision software called YOLO (You Only Look Once) and has generated hundreds of thousands of synthetic images to train their models, though the software currently identifies only 36% of aircraft captured by infrared cameras.

The Pentagon is pursuing parallel efforts through its All-domain Anomaly Resolution Office, which has examined over 1,800 UAP reports and identified 50 to 60 cases as "true anomalies" that government scientists cannot explain. AARO has developed its own sensor suite called Gremlin, using similar technology to Harvard's observatory. Both programs represent the growing legitimization of UAP research following 2017 Defense Department disclosures about military encounters with unexplained aerial phenomena.
EU

The Technology Revolution is Leaving Europe Behind (msn.com) 164

Europe has created just 14 companies worth more than $10 billion over the past 50 years compared to 241 in the United States, underscoring the continent's struggle to compete in the global technology race despite having a larger population and similar education levels.

The productivity gap has widened dramatically since the digital revolution began. European workers produced 95% of what their American counterparts made per hour in the late 1990s, but that figure has dropped to less than 80% today. Only four of the world's top 50 technology companies are European, and none of the top 10 quantum computing investors operate from Europe.

Several high-profile European entrepreneurs have relocated to Silicon Valley, including Thomas Odenwald, who quit German AI startup Aleph Alpha after two months, citing slow decision-making and lack of stock options for employees. "If I look at how quickly things change in Silicon Valley...it's happening so fast that I don't think Europe can keep up with that speed," Odenwald said.

The challenges extend beyond individual companies. European businesses spend 40% of their IT budgets on regulatory compliance, according to Amazon surveys, while complex labor laws create three-month notice periods and lengthy noncompete clauses.
AI

Jony Ive's Futuristic OpenAI Device Like a Neck-Worn iPod Shuffle 46

OpenAI on Wednesday announced that it was paying $6.5 billion to buy io, a one-year-old start-up created by Jony Ive. While the company remains tightlipped about the futuristic AI device(s) it has in the works, Apple supply chain analyst Ming-Chi Kuo shared some alleged details about its design. MacRumors reports: In a social media post today, Kuo said the device will be "slightly larger" than Humane's discontinued AI Pin. He said the device will look "as compact and elegant as an iPod Shuffle," which was Apple's lowest-priced, screen-less iPod. The design of the iPod shuffle varied over the years, going from a compact rectangle to a square. Like the iPod shuffle, Kuo said OpenAI's device will not have a screen, but it would connect to smartphones and computers. The device will be equipped with microphones for voice control, and it will have cameras that can analyze the user's surroundings.

He said that users will be able to wear the device around their necks, like a necklace, whereas the AI Pin can be attached to clothing with a clip. Kuo expects OpenAI's device to enter mass production in 2027, and the final design and specifications might change before then. Kuo expects OpenAI's device to enter mass production in 2027, and the final design and specifications might change before then.

Slashdot Top Deals