AI

Google DeepMind Is Hiring a 'Post-AGI' Research Scientist (404media.co) 61

An anonymous reader shares a report: None of the frontier AI research labs have presented any evidence that they are on the brink of achieving artificial general intelligence, no matter how they define that goal, but Google is already planning for a "Post-AGI" world by hiring a scientist for its DeepMind AI lab to research the "profound impact" that technology will have on society.

"Spearhead research projects exploring the influence of AGI on domains such as economics, law, health/wellbeing, AGI to ASI [artificial superintelligence], machine consciousness, and education," Google says in the first item on a list of key responsibilities for the job. Artificial superintelligence refers to a hypothetical form of AI that is smarter than the smartest human in all domains. This is self explanatory, but just to be clear, when Google refers to "machine consciousness" it's referring to the science fiction idea of a sentient machine.

OpenAI CEO Sam Altman, DeepMind CEO Demis Hassabis, Elon Musk, and other major and minor players in the AI industry are all working on AGI and have previously talked about the likelihood of humanity achieving AGI, when that might happen, and what the consequences might be, but the Google job listing shows that companies are now taking concrete steps for what comes after, or at least are continuing to signal that they believe it can be achieved.

Social Networks

OpenAI is Building a Social Network (theverge.com) 30

An anonymous reader shares a report: OpenAI is working on its own X-like social network, according to multiple sources familiar with the matter. While the project is still in early stages, we're told there's an internal prototype focused on ChatGPT's image generation that has a social feed. CEO Sam Altman has been privately asking outsiders for feedback about the project, our sources say. It's unclear if OpenAI's plan is to release the social network as a separate app or integrate it into ChatGPT, which became the most downloaded app globally last month.

Launching a social network in or around ChatGPT would likely increase Altman's already-bitter rivalry with Elon Musk. In February, after Musk made an unsolicited offer to purchase OpenAI for $97.4 billion, Altman responded: "no thank you but we will buy twitter for $9.74 billion if you want." Entering the social media market also puts OpenAI on more of a collision course with Meta, which we're told is planning to add a social feed to its coming standalone app for its AI assistant. When reports of Meta building a rival to the ChatGPT app first surfaced a couple of months ago, Altman shot back on X again by saying, "ok fine maybe we'll do a social app."

Communications

FCC Chairman Tells Europe To Choose Between US or Chinese Communications Tech (ft.com) 146

FCC Chairman Brendan Carr has issued a stark ultimatum to European allies, telling them to choose between US and Chinese communications technology. In an interview with the Financial Times, Carr urged "allied western democracies" to "focus on the real long-term bogey: the rise of the Chinese Communist party." The warning comes as European governments question Starlink's reliability after Washington threatened to switch off its services in Ukraine.

UK telecoms BT and Virgin Media O2 are currently trialing Starlink's satellite internet technology but haven't signed full agreements. "If you're concerned about Starlink, just wait for the CCP's version, then you'll be really worried," said Carr. Carr claimed Europe is "caught" between Washington and Beijing, with a "great divide" emerging between "CCP-aligned countries and others" in AI and satellite technology. He also accused the European Commission of "protectionism" and an "anti-American" attitude while suggesting Nokia and Ericsson should relocate manufacturing to the US to avoid Trump's import tariffs.
AI

Indian IT Faces Its Kodak Moment (indiadispatch.com) 54

An anonymous reader shares a report: Generative AI offers remarkable efficiency gains while presenting a profound challenge for the global IT services industry -- a sector concentrated in India and central to its export economy.

For decades, Indian technology firms thrived by deploying their engineering talent to serve primarily Western clients. Now they face a critical question. Will AI's productivity dividend translate into revenue growth? Or will fierce competition see these gains competed away through price reductions?

Industry soundings suggest the deflationary dynamic may already be taking hold. JPMorgan's conversations with executives, deal advisors and consultants across India's technology hubs reveal growing concern -- AI-driven efficiencies are fuelling pricing pressures. This threatens to constrain medium-term industry growth to a modest 4-5%, with little prospect of acceleration into fiscal year 2026. This emerging reality challenges the earlier narrative that AI would primarily unlock new revenue streams.

AI

Publishers and Law Professors Back Authors in Meta AI Copyright Battle 14

Publishers and law professors have filed amicus briefs supporting authors who sued Meta over its AI training practices, arguing that the company's use of "thousands of pirated books" fails to qualify as fair use under copyright law.

The filings [PDF] in California's Northern District federal court came from copyright law professors, the International Association of Scientific, Technical and Medical Publishers (STM), Copyright Alliance, and Association of American Publishers. The briefs counter earlier support for Meta from the Electronic Frontier Foundation and IP professors.

While Meta's defenders pointed to the 2015 Google Books ruling as precedent, the copyright professors distinguished Meta's use, arguing Google Books told users something "about" books without "exploiting expressive elements," whereas AI models leverage the books' creative content.

"Meta's use wasn't transformative because, like the AI models, the plaintiffs' works also increased 'knowledge and skill,'" the professors wrote, warning of a "cascading effect" if Meta prevails. STM is specifically challenging Meta's data sources: "While Meta attempts to label them 'publicly available datasets,' they are only 'publicly available' because those perpetuating their existence are breaking the law."
China

Chinese Robotaxis Have Government Black Boxes, Approach US Quality (forbes.com) 43

An anonymous reader quotes a report from Forbes: Robotaxi development is moving at a fast pace in China, but we don't hear much about it in the USA, where the news focuses mostly on Waymo, with a bit about Zoox, Motional, May, trucking projects and other domestic players. China has four main players with robotaxi service, dominated by Baidu (the Chinese Google). A recent session at last week's Ride AI conference in Los Angeles revealed some details about the different regulatory regime in China, and featured a report from a Chinese-American YouTuber who has taken on a mission to ride in the different vehicles.

Zion Maffeo, deputy general counsel for Pony.AI, provided some details on regulations in China. While Pony began with U.S. operations, its public operations are entirely in China, and it does only testing in the USA. Famously it was one of the few companies to get a California "no safety driver" test permit, but then lost it after a crash, and later regained it. Chinese authorities at many levels keep a close watch over Chinese robotaxi companies. They must get approval for each level of operation, which controls where they can test and operate and how much supervision is needed. Operation begins with testing with a safety driver behind the wheel (as almost everywhere in the world), with eventual graduation to having the safety driver in the passenger seat but with an emergency stop. Then they move to having a supervisor in the back seat before they can test with nobody in the vehicle, usually limited to an area with simpler streets.

The big jump then comes when they are allowed to test with nobody in the vehicle, but with full-time monitoring by a remote employee who can stop the vehicle. From there they can graduate to taking passengers, and then expanding the service to more complex areas. Later they can go further and drop full-time remote monitoring, though there do need to be remote employees able to monitor and assist part-time. Pony has a permit allowing it to have 3 vehicles per remote operator, and has one for 15 vehicles in process, but it declined to comment on just how many vehicles it actually has per operator. Baidu also did not respond to queries on this. [...] In addition, Chinese jurisdictions require that the system in a car independently log any "interventions" by safety drivers in a sort of "black box" system. These reports are regularly given to regulators, though they are not made public. In California, companies must file an annual disengagement report, but they have considerable leeway on what they consider a disengagement, so the numbers can't be readily compared. Chinese companies have no discretion on what is reported, though they may notify authorities of a specific objection if they wish to declare that an intervention logged in their black box should not be counted.
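Forbes doesn't say what these black boxes actually record, but as a rough idea of the kind of per-intervention record such a system might log and submit in its periodic reports, here is a minimal sketch; every field name and value below is an assumption made for illustration, not anything disclosed by the regulators, Pony.AI, or Baidu.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class InterventionRecord:
    """Hypothetical shape of one logged safety intervention."""
    vehicle_id: str
    timestamp: str          # when the takeover or remote stop happened
    trigger: str            # e.g. "safety_driver_takeover" or "remote_stop"
    location: tuple         # (lat, lon) at the moment of intervention
    disputed: bool = False  # operator's objection that it shouldn't be counted

    def to_report_line(self) -> str:
        # Serialize as one JSON line for a periodic report to the regulator.
        return json.dumps(asdict(self))

# Example: a safety driver takes over and the operator later files an objection.
record = InterventionRecord(
    vehicle_id="demo-001",
    timestamp=datetime.now(timezone.utc).isoformat(),
    trigger="safety_driver_takeover",
    location=(30.5928, 114.3055),  # Wuhan, purely for illustration
    disputed=True,
)
print(record.to_report_line())
```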
On her first trip, YouTuber Sophia Tung found Baidu's 5th generation robotaxi to offer a poor experience in ride quality, wait time, and overall service. However, during a return trip she tried Baidu's 6th generation vehicle in Wuhan and rated it as the best among Chinese robotaxis, approaching the quality of Waymo.
AI

Apple To Analyze User Data on Devices To Bolster AI Technology 38

Apple will begin analyzing data on customers' devices in a bid to improve its AI platform, a move designed to safeguard user information while still helping it catch up with AI rivals. From a report: Today, Apple typically trains AI models using synthetic data -- information that's meant to mimic real-world inputs without any personal details. But that synthetic information isn't always representative of actual customer data, making it harder for its AI systems to work properly.

The new approach will address that problem while ensuring that user data remains on customers' devices and isn't directly used to train AI models. The idea is to help Apple catch up with competitors such as OpenAI and Alphabet, which have fewer privacy restrictions. The technology works like this: It takes the synthetic data that Apple has created and compares it to a recent sample of user emails within the iPhone, iPad and Mac email app. By using actual emails to check the fake inputs, Apple can then determine which items within its synthetic dataset are most in line with real-world messages.
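The report doesn't detail the mechanics, but the selection step it describes (checking synthetic messages against a sample of real on-device mail and keeping the synthetic items that look most representative) can be sketched roughly as follows. This is a speculative illustration: the embedding function is a placeholder and none of the names or parameters reflect Apple's actual implementation.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding; a real system would use a trained on-device model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)

def rank_synthetic(synthetic: list[str], real_samples: list[str], top_k: int = 2) -> list[str]:
    """Assumed on-device step: score each synthetic message by its closest real
    message (cosine similarity) and keep only the most representative ones."""
    real_vecs = np.stack([embed(m) for m in real_samples])
    scores = [float(np.max(real_vecs @ embed(s))) for s in synthetic]
    order = np.argsort(scores)[::-1][:top_k]
    return [synthetic[i] for i in order]

# Toy example: only the selection outcome (not the mail itself) would leave the device.
synthetic_batch = ["Lunch tomorrow?", "Invoice #123 attached", "Quarterly results memo"]
local_mail = ["Are we still on for lunch Friday?", "Please find the invoice attached."]
print(rank_synthetic(synthetic_batch, local_mail))
```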
EU

Meta Starts Using Data From EU Users To Train Its AI Models (engadget.com) 29

Meta said the company plans to start using data collected from its users in the European Union to train its AI systems. Engadget reports: Starting this week, the tech giant will begin notifying Europeans through email and its family of apps of the fact, with the message set to include an explanation of the kind of data it plans to use as part of the training. Additionally, the notification will link out to a form users can complete to opt out of the process. "We have made this objection form easy to find, read, and use, and we'll honor all objection forms we have already received, as well as newly submitted ones," says Meta.

The company notes it will only use data it collects from public posts and Meta AI interactions for training purposes. It won't use private messages in its training sets, nor any interactions, public or otherwise, made by users under the age of 18. As for why the company wants to start using EU data now, it claims the information will allow it to fine-tune its future models to better serve Europeans.
"We believe we have a responsibility to build AI that's not just available to Europeans, but is actually built for them. That's why it's so important for our generative AI models to be trained on a variety of data so they can understand the incredible and diverse nuances and complexities that make up European communities," Meta states.

"That means everything from dialects and colloquialisms, to hyper-local knowledge and the distinct ways different countries use humor and sarcasm on our products. This is particularly important as AI models become more advanced with multi-modal functionality, which spans text, voice, video, and imagery."
The Military

NATO Inks Deal With Palantir For Maven AI System (defensescoop.com) 31

An anonymous reader quotes a report from DefenseScoop: NATO announced Monday that it has awarded a contract to Palantir to adopt its Maven Smart System for artificial intelligence-enabled battlefield operations. Through the contract, which was finalized March 25, the NATO Communications and Information Agency (NCIA) plans to use a version of the AI system -- Maven Smart System NATO -- to support the transatlantic military organization's Allied Command Operations strategic command. NATO plans to use the system to provide "a common data-enabled warfighting capability to the Alliance, through a wide range of AI applications -- from large language models (LLMs) to generative and machine learning," it said in a release, ultimately enhancing "intelligence fusion and targeting, battlespace awareness and planning, and accelerated decision-making." [...] NATO's Allied Command Operations will begin using Maven within the next 30 days, the organization said Monday, adding that it hopes that using it will accelerate further adoption of emerging AI capabilities. Palantir said the contract "was one of the most expeditious in [its] history, taking only six months from outlining the requirement to acquiring the system."
United States

Nvidia To Make AI Supercomputers in US for First Time (nvidia.com) 37

Nvidia has announced plans to manufacture AI supercomputers entirely within the United States, commissioning over 1 million square feet of manufacturing space across Arizona and Texas. Production of Blackwell chips has begun at TSMC's Phoenix facilities, while supercomputer assembly will occur at new Foxconn and Wistron plants in Houston and Dallas respectively.

"The engines of the world's AI infrastructure are being built in the United States for the first time," said Jensen Huang, Nvidia's founder and CEO. "Adding American manufacturing helps us better meet the incredible and growing demand for AI chips and supercomputers, strengthens our supply chain and boosts our resiliency."

The company will deploy its own AI, robotics, and digital twin technologies in these facilities, using Nvidia Omniverse to create digital twins of factories and Isaac GR00T to build manufacturing automation robots. Nvidia projects an ambitious $500 billion in domestic AI infrastructure production over the next four years, with manufacturing expected to create hundreds of thousands of jobs.
AI

Can AI Help Manage Nuclear Reactors? (msn.com) 63

Argonne National Laboratory, a federally funded R&D center launched in 1946 and now overseen by America's Department of Energy, produced research that became the basis for all of the world's commercial nuclear reactors.

But it's now developed an AI-based tool that can "help operators run nuclear plants," reports the Wall Street Journal, citing comments from a senior nuclear engineer in the lab's nuclear science and engineering division: Argonne's plan is to offer the Parameter-Free Reasoning Operator for Automated Identification and Diagnosis, or PRO-AID, to new, tech-forward nuclear builds, but it's also eyeing the so-called dinosaurs, some of which are being resurrected by companies like Amazon and Microsoft to help power their AI data centers. The global push for AI is poised to fuel a sharp rise in electricity demand, with consumption from data centers expected to more than double by the end of the decade, the International Energy Agency said Thursday. The owners of roughly a third of U.S. nuclear plants are in talks with tech companies to provide electricity for those data centers, the Wall Street Journal has reported.

PRO-AID performs real-time monitoring and diagnostics using generative AI combined with large language models that notify and explain to staff when something seems amiss at a plant. It also uses a form of automated reasoning — which uses mathematical logic to encode knowledge in AI systems — to mimic the way a human operator asks questions and comes to understand how the plant is operating [according to Richard Vilim, a senior nuclear engineer within the lab's nuclear science and engineering division].
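PRO-AID's internals aren't public, so the following is only a toy sketch of the combination the article describes: plant knowledge encoded as explicit rules that flag anomalous readings, plus a language-model step (stubbed out here as a plain function) that turns the findings into an explanation for staff. The sensor names, ranges, and wording are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    sensor: str
    value: float

# Toy "encoded knowledge": expected operating ranges (entirely invented numbers).
EXPECTED_RANGES = {
    "coolant_outlet_temp_C": (280.0, 330.0),
    "primary_loop_pressure_MPa": (14.0, 16.0),
}

def diagnose(readings: list[Reading]) -> list[str]:
    """Stand-in for the automated-reasoning layer: check each reading against its rule."""
    findings = []
    for r in readings:
        lo, hi = EXPECTED_RANGES.get(r.sensor, (float("-inf"), float("inf")))
        if not lo <= r.value <= hi:
            findings.append(f"{r.sensor}={r.value} is outside the expected range [{lo}, {hi}]")
    return findings

def explain(findings: list[str]) -> str:
    """Stand-in for the LLM step that would phrase the findings for plant staff."""
    if not findings:
        return "All monitored parameters are within their expected ranges."
    return "Attention: " + "; ".join(findings) + ". Recommend operator review."

print(explain(diagnose([Reading("coolant_outlet_temp_C", 342.0),
                        Reading("primary_loop_pressure_MPa", 15.2)])))
```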

The tool can also help improve the efficiency of the personnel needed to operate a nuclear plant, Vilim said. That's especially important as older employees leave the workforce. "If we can hand off some of these lower-level capabilities to a machine, when someone retires, you don't need to replace him or her," he said... Part of the efficiency in updating technology will come from consolidating the monitoring staff at a utility's nuclear plants at a single, centralized location — much as gas-powered plants already do.

It hasn't found its way into a commercial nuclear plant yet, the article acknowledges. But the senior nuclear engineer points out that America's newer gas-powered plants ended up being more automated with digital monitoring tools. Meanwhile the average age of America's 94 operating nuclear reactors is 42 years, and "nearly all" of them have had their licenses extended, according to the article. (Those nuclear plants still provide almost 20% of America's electricity.)
Facebook

After Meta Cheating Allegations, 'Unmodified' Llama 4 Maverick Model Tested - Ranks #32 (neowin.net) 17

Remember how last weekend Meta claimed its "Maverick" AI model (in the newly-released Llama-4 series) beat GPT-4o and Gemini Flash 2 "on all benchmarks... This thing is a beast."

And then, within a day, several AI researchers noted that even Meta's own announcement admitted the Maverick tested on LM Arena was an "experimental chat version," as TechCrunch pointed out. ("As we've written about before, for various reasons, LM Arena has never been the most reliable measure of an AI model's performance. But AI companies generally haven't customized or otherwise fine-tuned their models to score better on LM Arena — or haven't admitted to doing so, at least.")

Friday, TechCrunch reported on what happened when LMArena tested the unmodified release version of Maverick (Llama-4-Maverick-17B-128E-Instruct).

It ranked 32nd.

"For the record, older models like Claude 3.5 Sonnet, released last June, and Gemini-1.5-Pro-002, released last September, rank higher," notes the tech site Neowin.
United States

America's Dirtiest Coal Power Plants Given Exemptions from Pollution Rules to Help Power AI (msn.com) 126

Somewhere in Montana sits the only coal-fired power plant in America that hasn't installed modern pollution controls to limit particulate matter, according to the Environmental Protection Agency. Mining.com notes that it has the highest emission rate of fine particulate matter out of any U.S. coal-burning power plant. When inhaled, the finest particles are able to penetrate deep into the lungs and even potentially the bloodstream, exacerbating heart and lung disease, causing asthma attacks and even sometimes leading to premature death.
Yet America's dirtiest coal-fired power plant — and dozens of others — "are being exempted from stringent air pollution mandates," reports Bloomberg, "as part of U.S. President Donald Trump's bid to revitalize the industry": Talen Energy Corp.'s Colstrip in Montana is among 47 plants receiving two-year waivers from rules to control mercury and other pollutants as part of a White House effort to ease regulation on coal-fired sites, according to a list seen by Bloomberg News. The exemptions were among a slew of actions announced by the White House Tuesday to expand the mining and use of coal. The Trump administration has argued coal is a vital part of the mix to ensure sufficient energy supply to meet booming demand for AI data centers. The carve-out, which begins in July 2027, lasts until July 2029, according to the proclamation.
In an email to Bloomberg, a White House spokesperson said the move meant that America "will produce beautiful, clean coal" while addressing "necessary electrical demand from emerging technologies such as AI."
AMD

New Supercomputing Record Set - Using AMD's Instinct GPUs (tomshardware.com) 23

"AMD processors were instrumental in achieving a new world record," reports Tom's Hardware, "during a recent Ansys Fluent computational fluid dynamics simulation run on the Frontier supercomputer at the Oak Ridge National Laboratory."

The article points out that Frontier was the fastest supercomputer in the world until it was beaten by Lawrence Livermore Lab's El Capitan — with both computers powered by AMD GPUs: According to a press release by Ansys, it ran a 2.2-billion-cell axial turbine simulation for Baker Hughes, an energy technology company, testing its next-generation gas turbines aimed at increasing efficiency. The simulation previously took 38.5 hours to complete on 3,700 CPU cores. By using 1,024 AMD Instinct MI250X accelerators paired with AMD EPYC CPUs in Frontier, the simulation time was slashed to 1.5 hours. This is more than 25 times faster, allowing the company to see the impact of the changes it makes on designs much more quickly...

Given those numbers, the Ansys Fluent CFD simulator apparently only used a fraction of the power available on Frontier. That means it has the potential to run even faster if it can utilize all the available accelerators on the supercomputer. It also shows that, despite Nvidia's market dominance in AI GPUs, AMD remains a formidable competitor, with its CPUs and GPUs serving as the brains of some of the fastest supercomputers on Earth.

AI

AI Industry Tells US Congress: 'We Need Energy' (msn.com) 98

The Washington Post reports: The United States urgently needs more energy to fuel an artificial intelligence race with China that the country can't afford to lose, industry leaders told lawmakers at a House hearing on Wednesday. "We need energy in all forms," said Eric Schmidt, former CEO of Google, who now leads the Special Competitive Studies Project, a think tank focused on technology and security. "Renewable, nonrenewable, whatever. It needs to be there, and it needs to be there quickly." It was a nearly unanimous sentiment at the four-hour-plus hearing of the House Energy and Commerce Committee, which revealed bipartisan support for ramping up U.S. energy production to meet skyrocketing demand for energy-thirsty AI data centers.

The hearing showed how the country's AI policy priorities have changed under President Donald Trump. President Joe Biden's wide-ranging 2023 executive order on AI had sought to balance the technology's potential rewards with the risks it poses to workers, civil rights and national security. Trump rescinded that order within days of taking office, saying its "onerous" requirements would "threaten American technological leadership...." [Data center power consumption] is already straining power grids, as residential consumers compete with data centers that can use as much electricity as an entire city. And those energy demands are projected to grow dramatically in the coming years... [Former Google CEO Eric] Schmidt, whom the committee's Republicans called as a witness on Wednesday, told [committee chairman Brett] Guthrie that winning the AI race is too important to let environmental considerations get in the way...

Once the United States beats China to develop superintelligence, Schmidt said, AI will solve the climate crisis. And if it doesn't, he went on, China will become the world's sole superpower. (Schmidt's view that AI will become superintelligent within a decade is controversial among experts, some of whom predict the technology will remain limited by fundamental shortcomings in its ability to plan and reason.)

The industry's wish list also included "light touch" federal regulation, high-skill immigration and continued subsidies for chip development. Alexandr Wang, the young billionaire CEO of San Francisco-based Scale AI, said a growing patchwork of state privacy laws is hampering AI companies' access to the data needed to train their models. He called for a federal privacy law that would preempt state regulations and prioritize innovation.

Some committee Democrats argued that cuts to scientific research and renewable energy will actually hamper America's AI competitiveness, according to the article. "But few questioned the premise that the U.S. is locked in an existential struggle with China for AI supremacy.

"That stark outlook has nearly coalesced into a consensus on Capitol Hill since China's DeepSeek chatbot stunned the AI industry with its reasoning skills earlier this year."
Facebook

Facebook Whistleblower Alleges Meta's AI Model Llama Was Used to Help DeepSeek (cbsnews.com) 10

A former Facebook employee/whistleblower alleges Meta's AI model Llama was used to help DeepSeek.

The whistleblower — former Facebook director of global policy Sarah Wynn-Williams — testified before U.S. Senators on Wednesday. CBS News found this earlier response from Meta: In a statement last year on Llama, Meta spokesperson Andy Stone wrote, "The alleged role of a single and outdated version of an American open-source model is irrelevant when we know China is already investing over 1T to surpass the US technologically, and Chinese tech companies are releasing their own open AI models as fast, or faster, than US ones."

Wynn-Williams encouraged senators to continue investigating Meta's role in the development of artificial intelligence in China, as they continue their probe into the social media company founded by Zuckerberg. "The greatest trick Mark Zuckerberg ever pulled was wrapping the American flag around himself and calling himself a patriot and saying he didn't offer services in China, while he spent the last decade building an $18 billion business there," she said.

The testimony also left some of the lawmakers skeptical of Zuckerberg's commitment to free speech after the whistleblower also alleged Facebook worked "hand in glove" with the Chinese government to censor its platforms: In her almost seven years with the company, Wynn-Williams told the panel she witnessed the company provide "custom built censorship tools" for the Chinese Communist Party. She said a Chinese dissident living in the United States was removed from Facebook in 2017 after pressure from Chinese officials. Facebook said at the time it took action against the regime critic, Guo Wengui, for sharing someone else's personal information. Wynn-Williams described the use of a "virality counter" that flagged posts with over 10,000 views for review by a "chief editor," which Democratic Sen. Richard Blumenthal of Connecticut called "an Orwellian censor." These "virality counters" were used not only in Mainland China, but also in Hong Kong and Taiwan, according to Wynn-Williams's testimony.

Wynn-Williams also told senators Chinese officials could "potentially access" the data of American users.

Social Networks

Adobe Retreats from Bluesky After Massive User Backlash (petapixel.com) 73

Adobe has deleted all its posts on Twitter-alternative Bluesky after a disastrous April 8 debut that drew over 1,600 angry comments from digital creators. The software giant's innocuous first post asking "What's fueling your creativity right now?" triggered immediate criticism targeting Adobe's controversial subscription model, continual price increases, and AI implementation.

"Y'all keep raising your prices for a product that keeps getting worse," wrote one user, while another referenced Adobe's "subscription model" with "I assume you'll be charging us monthly to read your posts." Recent price hikes have been substantial, with one commenter reporting a 53.88% increase from CDN$14.68 to CDN$22.59 monthly.
AI

Ex-OpenAI Staffers File Amicus Brief Opposing the Company's For-Profit Transition (techcrunch.com) 13

A group of ex-OpenAI employees on Friday filed a proposed amicus brief in support of Elon Musk in his lawsuit against OpenAI, opposing OpenAI's planned conversion from a nonprofit to a for-profit corporation. From a report: The brief, filed by Harvard law professor and Creative Commons founder Lawrence Lessig, names 12 former OpenAI employees: Steven Adler, Rosemary Campbell, Neil Chowdhury, Jacob Hilton, Daniel Kokotajlo, Gretchen Krueger, Todor Markov, Richard Ngo, Girish Sastry, William Saunders, Carrol Wainwright, and Jeffrey Wu. It makes the case that, if OpenAI's non-profit ceded control of the organization's business operations, it would "fundamentally violate its mission."

Several of the ex-staffers have spoken out against OpenAI's practices publicly before. Krueger has called on the company to improve its accountability and transparency, while Kokotajlo and Saunders previously warned that OpenAI is in a "reckless" race for AI dominance. Wainwright has said that OpenAI "should not [be trusted] when it promises to do the right thing later."

AI

James Cameron: AI Could Help Cut VFX Costs in Half, Saving Blockbuster Cinema (variety.com) 68

Director James Cameron argues that blockbuster filmmaking can only survive if the industry finds ways to "cut the cost of [VFX] in half," with AI potentially offering solutions that don't eliminate jobs.

"If we want to continue to see the kinds of movies that I've always loved and that I like to make -- 'Dune,' 'Dune: Part Two,' or one of my films or big effects-heavy, CG-heavy films -- we've got to figure out how to cut the cost of that in half," Cameron said.

Rather than staff reductions, Cameron envisions AI accelerating VFX workflows: "That's about doubling their speed to completion on a given shot, so your cadence is faster and your throughput cycle is faster, and artists get to move on and do other cool things."
Science

FDA Plans To Phase Out Animal Testing Requirements (axios.com) 43

The Food and Drug Administration says it will begin phasing out animal testing requirements for antibody therapies and other drugs and move toward AI-based models and other tools it deems "human-relevant." Axios: The FDA said it would launch a pilot program over the next year allowing select developers of monoclonal antibodies to use a primarily non-animal-based testing strategy. Commissioner Marty Makary said in a statement that the shift would improve drug safety, lower research and development costs, and address ethical concerns about animal experimentation.
