Programming

Curl Warns GitHub About 'Malicious Unicode' Security Issue (daniel.haxx.se) 69

A Curl contributor replaced an ASCII letter with a Unicode alternative in a pull request, writes Curl lead developer/founder Daniel Stenberg. And not a single human reviewer on the team (or any of their CI jobs) noticed.

The change "looked identical to the ASCII version, so it was not possible to visually spot this..." The impact of changing one or more letters in a URL can of course be devastating depending on conditions... [W]e have implemented checks to help us poor humans spot things like this. To detect malicious Unicode. We have added a CI job that scans all files and validates every UTF-8 sequence in the git repository.

In the curl git repository most files and most content are plain old ASCII so we can "easily" whitelist a small set of UTF-8 sequences and some specific files, the rest of the files are simply not allowed to use UTF-8 at all as they will then fail the CI job and turn up red. In order to drive this change home, we went through all the test files in the curl repository and made sure that all the UTF-8 occurrences were instead replaced by other kind of escape sequences and similar. Some of them were also used more or less by mistake and could easily be replaced by their ASCII counterparts.

The next time someone tries this stunt on us it could be someone with less good intentions, but now ideally our CI will tell us... We want and strive to be proactive and tighten everything before malicious people exploit some weakness somewhere but security remains this never-ending race where we can only do the best we can and while the other side is working in silence and might at some future point attack us in new creative ways we had not anticipated. That future unknown attack is a tricky thing.
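
To make the idea concrete, here is a minimal sketch of such a scan, written in Python; it illustrates the general approach described in the post and is not curl's actual CI script. The file paths in the allowlist are hypothetical, and the policy (pure ASCII everywhere except a short, reviewed list of files) is an assumption based on the description above.

```python
#!/usr/bin/env python3
"""Sketch of a CI-style check that flags non-ASCII content in a source tree.

Illustrative only: not curl's actual CI job. The allowlist entries below are
hypothetical placeholders.
"""
import sys
import unicodedata
from pathlib import Path

# Hypothetical allowlist: files permitted to contain UTF-8 beyond plain ASCII.
ALLOWED_UTF8_FILES = {"docs/THANKS", "README.md"}

def scan(root: str = ".") -> int:
    problems = 0
    for path in Path(root).rglob("*"):
        # Skip directories and anything under .git
        if not path.is_file() or ".git" in path.parts:
            continue
        data = path.read_bytes()
        try:
            text = data.decode("utf-8")
        except UnicodeDecodeError:
            # Invalid UTF-8 sequences fail the check outright.
            print(f"{path}: invalid UTF-8 sequence")
            problems += 1
            continue
        if path.as_posix() in ALLOWED_UTF8_FILES:
            continue
        # Outside the allowlist, any code point above 0x7F is reported.
        for lineno, line in enumerate(text.splitlines(), start=1):
            for ch in line:
                if ord(ch) > 0x7F:
                    name = unicodedata.name(ch, "UNKNOWN")
                    print(f"{path}:{lineno}: non-ASCII U+{ord(ch):04X} ({name})")
                    problems += 1
    return problems

if __name__ == "__main__":
    sys.exit(1 if scan() else 0)
```

The design mirrors what the post describes: because most of the repository is plain ASCII, the check can fail loudly on any unexpected byte above 0x7F, and homoglyph substitutions that are invisible to a human reviewer show up as explicit code-point reports.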

In the original blog post Stenberg complained he got "barely no responses" from GitHub (joking "perhaps they are all just too busy implementing the next AI feature we don't want.") But hours later he posted an update.

"GitHub has told me they have raised this as a security issue internally and they are working on a fix."
AI

Walmart Prepares for a Future Where AI Shops for Consumers 73

Walmart is preparing for a future where AI agents shop on behalf of consumers by adapting its systems to serve both humans and autonomous bots. As major players like Visa and PayPal also invest in agentic commerce, Walmart is positioning itself as a leader by developing its own AI agents and supporting broader industry integration. PYMNTS reports: Instead of scrolling through ads or comparing product reviews, future consumers may rely on digital assistants, like OpenAI's Operator, to manage their shopping lists, from replenishing household essentials to selecting the best TV based on personal preferences, according to the report (paywalled). "It will be different," Walmart U.S. Chief Technology Officer Hari Vasudev said, per the report. "Advertising will have to evolve." The emergence of AI-generated summaries in search results has already altered the way consumers gather product information, the report said. However, autonomous shopping agents represent a bigger transformation. These bots could not only find products but also finalize purchases, including payments, without the user ever lifting a finger. [...]

Retail experts say agentic commerce will require companies to overhaul how they market and present their products online, the WSJ report said. They may need to redesign product pages and pricing strategies to cater to algorithmic buyers. The customer relationship could shift away from retailers if purchases are completed through third-party agents. [...] To prepare, Walmart is developing its own AI shopping agents, accessible through its website and app, according to the WSJ report. These bots can already handle basic tasks like reordering groceries, and they're being trained to respond to broader prompts, such as planning a themed birthday party. Walmart is working toward a future in which outside agents can seamlessly communicate with the retailer's own systems -- something Vasudev told the WSJ he expects to be governed by industry-wide protocols that are still under development. [...]

Third-party shopping bots may also act independently, crawling retailers' websites much like consumers browse stores without engaging sales associates, the WSJ report said. In those cases, the retailer has little control over how its products are evaluated. Whether consumers instruct their AI to shop specifically at Walmart or ask for the best deal available, the outcomes will increasingly be shaped by algorithms, per the report. Operator, for example, considers search ranking, sponsored content and user preferences when making recommendations. That's a far cry from how humans shop. Bots don't respond to eye-catching visuals or emotionally driven branding in the same way people do. This means retailers must optimize their content not just for people but for machine readers as well, the report said. Pricing strategies could also shift as companies may need to make rapid pricing decisions and determine whether it's worth offering AI agents exclusive discounts to keep them from choosing a competitor's lower-priced item, according to the report.
Cloud

UK Needs More Nuclear To Power AI, Says Amazon Boss 66

In an exclusive interview with the BBC, AWS CEO Matt Garman said the UK must expand nuclear energy to meet the soaring electricity demands of AI-driven data centers. From the report: Amazon Web Services (AWS), which is part of the retail giant Amazon, plans to spend 8 billion pounds on new data centers in the UK over the next four years. Matt Garman, chief executive of AWS, told the BBC nuclear is a "great solution" to data centres' energy needs as "an excellent source of zero carbon, 24/7 power." AWS is the single largest corporate buyer of renewable energy in the world and has funded more than 40 renewable solar and wind farm projects in the UK.

The UK's 500 data centres currently consume 2.5% of all electricity in the UK, while Ireland's 80 hoover up 21% of the country's total power, with those numbers projected to hit 6% and 30% respectively by 2030. The body that runs the UK's power grid estimates that by 2050 data centers alone will use nearly as much energy as all industrial users consume today.

Garman said that future energy needs were central to AWS's planning process. "It's something we plan many years out," he said. "We invest ahead. I think the world is going to have to build new technologies. I believe nuclear is a big part of that particularly as we look 10 years out."
AI

MIT Asks arXiv To Take Down Preprint Paper On AI and Scientific Discovery 19

MIT has formally requested the withdrawal of a preprint paper on AI and scientific discovery due to serious concerns about the integrity and validity of its data and findings. It didn't provide specific details on what it believes is wrong with the paper. From a post: "Earlier this year, the COD conducted a confidential internal review based upon allegations it received regarding certain aspects of this paper. While student privacy laws and MIT policy prohibit the disclosure of the outcome of this review, we are writing to inform you that MIT has no confidence in the provenance, reliability or validity of the data and has no confidence in the veracity of the research contained in the paper. Based upon this finding, we also believe that the inclusion of this paper in arXiv may violate arXiv's Code of Conduct.

"Our understanding is that only authors of papers appearing on arXiv can submit withdrawal requests. We have directed the author to submit such a request, but to date, the author has not done so. Therefore, in an effort to clarify the research record, MIT respectfully request that the paper be marked as withdrawn from arXiv as soon as possible." Preprints, by definition, have not yet undergone peer review. MIT took this step in light of the publication's prominence in the research conversation and because it was a formal step it could take to mitigate the effects of misconduct. The author is no longer at MIT. [...]

"We are making this information public because we are concerned that, even in its non-published form, the paper is having an impact on discussions and projections about the effects of AI on science. Ensuring an accurate research record is important to MIT. We therefore would like to set the record straight and share our view that at this point the findings reported in this paper should not be relied on in academic or public discussions of these topics."
The paper in question, titled "Artificial Intelligence, Scientific Discovery, and Product Innovation" and authored by Aidan Toner-Rodgers, investigated the effects of introducing an AI-driven materials discovery tool to 1,018 scientists in a U.S. R&D lab. The study reported that AI-assisted researchers discovered 44% more materials, filed 39% more patents, and achieved a 17% increase in product innovation. These gains were primarily attributed to AI automating 57% of idea-generation tasks, allowing top-performing scientists to focus on evaluating AI-generated suggestions effectively. However, the benefits were unevenly distributed; lower-performing scientists saw minimal improvements, and 82% of participants reported decreased job satisfaction due to reduced creativity and skill utilization.

The Wall Street Journal reported on MIT's statement.
AI

OpenAI Launches Codex, an AI Coding Agent, In ChatGPT 12

OpenAI has launched Codex, a powerful AI coding agent in ChatGPT that autonomously handles tasks like writing features, fixing bugs, and testing code in a cloud-based environment. TechCrunch reports: Codex is powered by codex-1, a version of the company's o3 AI reasoning model optimized for software engineering tasks. OpenAI says codex-1 produces "cleaner" code than o3, adheres more precisely to instructions, and will iteratively run tests on its code until passing results are achieved.

The Codex agent runs in a sandboxed, virtual computer in the cloud. By connecting with GitHub, Codex's environment can come preloaded with your code repositories. OpenAI says the AI coding agent will take anywhere from one to 30 minutes to write simple features, fix bugs, answer questions about your codebase, and run tests, among other tasks. Codex can handle multiple software engineering tasks simultaneously, says OpenAI, and it doesn't prevent users from accessing their computer and browser while it's running.

Codex is rolling out starting today to subscribers to ChatGPT Pro, Enterprise, and Team. OpenAI says users will have "generous access" to Codex to start, but in the coming weeks, the company will implement rate limits for the tool. Users will then have the option to purchase additional credits to use Codex, an OpenAI spokesperson tells TechCrunch. OpenAI plans to expand Codex access to ChatGPT Plus and Edu users soon.
United States

MIT Says It No Longer Stands Behind Student's AI Research Paper (msn.com) 28

MIT said Friday it can no longer stand behind a widely circulated paper on AI written by a doctoral student in its economics program. The paper said that the introduction of an AI tool in a materials-science lab led to gains in new discoveries, but had more ambiguous effects on the scientists who used it. WSJ: MIT didn't name the student in its statement Friday, but it did name the paper. That paper, by Aidan Toner-Rodgers, was covered by The Wall Street Journal and other media outlets. In a press release, MIT said it "has no confidence in the provenance, reliability or validity of the data and has no confidence in the veracity of the research contained in the paper."

The university said the author of the paper is no longer at MIT. The paper said that after an AI tool was implemented at a large materials-science lab, researchers discovered significantly more materials -- a result that suggested that, in certain settings, AI could substantially improve worker productivity. But it also showed that most of the productivity gains went to scientists who were already highly effective, and that overall the AI tool made scientists less happy about their work.

AI

US, UAE Unveil Plan For New 5GW AI Campus In Abu Dhabi (patentlyapple.com) 30

An anonymous reader quotes a report from Patently Apple: It's being reported in the Gulf region that a new 5GW UAE-US AI Campus in Abu Dhabi was unveiled on Thursday at Qasr Al Watan in the presence of UAE President His Highness Sheikh Mohamed bin Zayed Al Nahyan and US President Donald Trump, who is on a state visit to the UAE. The new AI campus -- the largest of its kind outside the United States -- will host US hyperscalers and large enterprises, enabling them to leverage regional compute resources with the capability to serve the Global South. The UAE-US AI Campus will feature 5GW of capacity for AI data centers in Abu Dhabi, offering a regional platform through which US hyperscalers can provide low-latency services to nearly half of the global population.

Upon completion, the facility will utilize nuclear, solar, and gas power to minimize carbon emissions. It will also house a science park focused on advancing innovation in artificial intelligence. The campus will be built by G42 and operated in partnership with several US companies including NVIDIA, OpenAI, SoftBank, Cisco and others. The initiative is part of the newly established US-UAE AI Acceleration Partnership, a bilateral framework designed to deepen collaboration on artificial intelligence and advanced technologies. The UAE and US will jointly regulate access to the compute resources, which are reserved for US hyperscalers and approved cloud service providers.
An official press release from the White House can be found here.
China

China Launches First of 2,800 Satellites For AI Space Computing Constellation (spacenews.com) 71

China launched 12 satellites on Wednesday as part of the "Three-Body Computing Constellation," the world's first dedicated orbital computing network led by ADA Space and Zhejiang Lab. SpaceNews reports: A Long March 2D rocket lifted off at 12:12 a.m. Eastern (0412 UTC) May 14 from Jiuquan Satellite Launch Center in northwest China. Insulation tiles fell away from the payload fairing as the rocket climbed into a clear blue sky above the spaceport. The China Aerospace Science and Technology Corporation (CASC) announced a fully successful launch, revealing the mission to have sent 12 satellites for a space computing constellation into orbit. Commercial company ADA Space released further details, stating that the 12 satellites form the "Three-Body Computing Constellation," which will directly process data in space, rather than on the ground, reducing reliance on ground-based computing infrastructure. The constellation will be capable of a combined 5 peta operations per second (POPS) with 30 terabytes of onboard storage.

The satellites feature advanced AI capabilities, up to 100 Gbps laser inter-satellite links and remote sensing payloads -- data from which will be processed onboard, reducing data transmission requirements. One satellite also carries a cosmic X-ray polarimeter developed by Guangxi University and the National Astronomical Observatories of the Chinese Academy of Sciences (NAOC), which will detect, identify and classify transient events such as gamma-ray bursts, while also triggering messages to enable follow-up observations by other missions. [...] The company says the constellation can meet the growing demand for real-time computing in space, as well as help China take the lead globally in building space computing infrastructure and seize the commanding heights of this future industry. The development could mark the beginning of space-based cloud computing as a new capability, as well as open a new arena for strategic competition with the U.S.
You can watch a recording of the launch here.
Education

Student Demands Tuition Refund After Catching Professor Using ChatGPT (fortune.com) 115

A Northeastern University student demanded her tuition money back after discovering her business professor was secretly using AI to create course materials. Ella Stapleton, who graduated this year, grew suspicious when she noticed telltale signs of AI generation in her professor's lecture notes, including a stray ChatGPT citation in the bibliography, recurring typos matching machine outputs, and images showing figures with extra limbs.

"He's telling us not to use it, and then he's using it himself," Stapleton told the New York Times. After filing a formal complaint with Northeastern's business school, Stapleton requested a tuition refund of about $8,000 for the course. The university ultimately rejected her claim. Professor Rick Arrowood acknowledged using ChatGPT, Perplexity AI, and presentation generator Gamma. "In hindsight, I wish I would have looked at it more closely," he said.
Facebook

Do You Trust Mark Zuckerberg To Solve Your Loneliness With an 'AI Friend'? 106

An anonymous reader shares an opinion piece from The Guardian, written by columnist Emma Brockes: Mark Zuckerberg has gone on a promotional tour to talk up the potential of AI in human relationships. I know; listening to Zuck on friendship is a bit like taking business advice from Bernie Madoff or lessons in sportsmanship from Tonya Harding. But at recent tech conferences and on podcasts, Zuck has been saying he has seen the future and it's one in which the world's "loneliness epidemic" is alleviated by people finding friendship with "a system that knows them well and that kind of understands them in the way that their feed algorithms do." In essence, we'll be friends with AI, instead of people. The missing air quotes around "knows" and "understands" are a distinction we can assume Zuck neither knows nor understands.

This push by the 41-year-old tech leader would be less startling if it weren't for the fact that semi-regularly online now you can find people writing about their relationships with their AI therapist or chatbot and insisting that if it's real to them, then it's real, period. The chatbot is, they will argue, "actively" listening to them. On a podcast with Dwarkesh Patel last month Zuck envisaged a near-future in which "you'll be scrolling through your feed, and there will be content that maybe looks like a Reel to start, but you can talk to it, or interact with it and it talks back." The average American, he said, has fewer than three friends but needs more. Hey presto, a ready solution.

The problem, obviously, isn't that chatting to a bot gives the illusion of intimacy, but that, in Zuckerberg's universe, it is indistinguishable from real intimacy, an equivalent and equally meaningful version of human-to-human contact. If that makes no sense, suggests Zuck, then either the meaning of words has to change or we have to come up with new words: "Over time," says Zuckerberg, as more and more people turn to AI friends, "we'll find the vocabulary as a society to be able to articulate why that is valuable." ... The sheer wrongness of this argument is so stark that it puts anyone who gives it more than a moment's thought in the weird position of having to define units of reality as basic as "person." To extend Zuckerberg's logic: a book can make you feel less alone and that feeling can be real. Which doesn't mean that your relationship with the author is genuine, intimate or reciprocated in anything like the way a relationship with your friends is.
Youtube

YouTube Cracks Down on AI-Generated Fake Movie Trailers (pocket-lint.com) 42

YouTube has suspended ad revenue for two additional channels -- Screen Trailers and Royal Trailer -- as part of an ongoing effort to combat fake movie trailers using AI-generated content. These channels, alternative accounts of previously demonetized Screen Culture and KH Studio, splice actual movie footage with AI-generated material, often accumulating millions of views.

The action follows a recent Deadline investigation revealing Hollywood studios had requested YouTube redirect revenue from these misleading videos. Despite losing monetization, Screen Culture, which has 1.42 million subscribers, continues uploading content including a recent "Trailer 2 concept" for James Gunn's upcoming Superman film.
Privacy

FBI: US Officials Targeted In Voice Deepfake Attacks Since April (bleepingcomputer.com) 8

The FBI has issued a warning that cybercriminals have started using AI-generated voice deepfakes in phishing attacks impersonating senior U.S. officials. These attacks, involving smishing and vishing tactics, aim to compromise personal accounts and contacts for further social engineering and financial fraud. BleepingComputer reports: "Since April 2025, malicious actors have impersonated senior U.S. officials to target individuals, many of whom are current or former senior U.S. federal or state government officials and their contacts. If you receive a message claiming to be from a senior U.S. official, do not assume it is authentic," the FBI warned. "The malicious actors have sent text messages and AI-generated voice messages -- techniques known as smishing and vishing, respectively -- that claim to come from a senior U.S. official in an effort to establish rapport before gaining access to personal accounts."

The attackers can gain access to the accounts of U.S. officials by sending malicious links disguised as invitations to move the discussion to another messaging platform. By compromising their accounts, the threat actors can gain access to other government officials' contact information. Next, they can use social engineering to impersonate the compromised U.S. officials to steal further sensitive information and trick targeted contacts into transferring funds. Today's PSA follows a March 2021 FBI Private Industry Notification (PIN) [PDF] warning that deepfakes (including AI-generated or manipulated audio, text, images, or video) would likely be widely employed in "cyber and foreign influence operations" after becoming increasingly sophisticated.

Advertising

Netflix Will Show Generative AI Ads Midway Through Streams In 2026 (arstechnica.com) 62

At its second annual Upfront 2025 event yesterday, Netflix announced that it has created interactive mid-roll ads and pause ads that incorporate generative AI. These new ad formats are expected to roll out in 2026. Ars Technica reports: "[Netflix] members pay as much attention to midroll ads as they do to the shows and movies themselves," Amy Reinhard, president of advertising at Netflix, said. Netflix started testing pause ads in July 2024, per The Verge. Speaking to advertisers, Reinhard claimed that ad subscribers spend 41 hours per month on Netflix on average. The new ad formats follow Netflix's launch of its own in-house advertising platform in the US in April. It had previously debuted the platform in Canada and plans to expand it globally by June, per The Verge.

Netflix considers its advertising business to be in its early stages, meaning customers can expect the firm's ad efforts to continue expanding at a faster rate over the coming years. The company plans to double its advertising revenue in 2025. "The foundations of our ads business are in place, and going forward, the pace of progress will be even faster," Reinhard said today.
Further reading: Netflix Says Its Ad Tier Now Has 94 Million Monthly Active Users
AI

Anthropic's Lawyer Forced To Apologize After Claude Hallucinated Legal Citation (techcrunch.com) 39

An anonymous reader quotes a report from TechCrunch: A lawyer representing Anthropic admitted to using an erroneous citation created by the company's Claude AI chatbot in its ongoing legal battle with music publishers, according to a filing made in a Northern California court on Thursday. Claude hallucinated the citation with "an inaccurate title and inaccurate authors," Anthropic says in the filing, first reported by Bloomberg. Anthropic's lawyers explain that their "manual citation check" did not catch it, nor several other errors that were caused by Claude's hallucinations. Anthropic apologized for the error and called it "an honest citation mistake and not a fabrication of authority." Earlier this week, lawyers representing Universal Music Group and other music publishers accused Anthropic's expert witness -- one of the company's employees, Olivia Chen -- of using Claude to cite fake articles in her testimony. Federal judge Susan van Keulen then ordered Anthropic to respond to these allegations. Last week, a California judge slammed a pair of law firms for the undisclosed use of AI after he received a supplemental brief with "numerous false, inaccurate, and misleading legal citations and quotations." The judge imposed $31,000 in sanctions against the law firms and said "no reasonably competent attorney should out-source research and writing" to AI.
AI

Meta Delays 'Behemoth' AI Model Release (axios.com) 8

According to the Wall Street Journal (paywalled), Meta is delaying the release of its largest Llama 4 AI model, known as "Behemoth," over concerns that it may not be enough of an advance on previous models. "It's another indicator that the AI industry's scaling strategy -- 'just make everything bigger' -- could be hitting a wall," notes Axios. From the report: The Journal says that Behemoth is now expected to be released in the fall or even later. It was originally scheduled to coincide with Meta's Llamacon event last month, then later postponed till June. It's also possible the company could speed up a more limited Behemoth release.
Transportation

Uber Expects More Drivers Amid Robotaxi Push 40

Uber's autonomous vehicle chief Andrew Macdonald predicted this week that the company will employ more human drivers in a decade despite aggressively expanding robotaxi operations. Speaking at the Financial Times' Future of the Car conference, Macdonald outlined a "hybrid marketplace" where autonomous vehicles dominate city centers while human drivers serve areas beyond robotaxi coverage, handle airport runs, and respond during extreme weather events.

"I am almost certain that there will be more Uber drivers in 10 years, not less, because I think the world will move from individual car ownership to mobility as a service," Macdonald said. The ride-hailing giant has struck partnerships with Waymo, Volkswagen, Wayve, WeRide, and Pony AI. Robotaxis are already operational in Austin and Phoenix, with CEO Dara Khosrowshahi claiming Waymo vehicles in Austin are busier than "99%" of human drivers.
Education

American Schools Were Deeply Unprepared for ChatGPT, Public Records Show (404media.co) 140

School districts across the United States were woefully unprepared for ChatGPT's impact on education, according to thousands of pages of public records obtained by 404 Media. Documents from early 2023, the publication reports, show a "total crapshoot" in responses, with some state education departments admitting they hadn't considered ChatGPT's implications while others hired pro-AI consultants to train educators.

In California, when principals sought guidance, state officials responded that "unfortunately, the topic of ChatGPT has not come up in our circles." One California official admitted, "I have never heard of ChatGPT prior to your email." Meanwhile, Louisiana's education department circulated presentations suggesting AI "is like giving a computer a brain" and warning that "going back to writing essays - only in class - can hurt struggling learners."

Some administrators accepted the technology enthusiastically, with one Idaho curriculum head calling ChatGPT "AMAZING" and comparing resistance to early reactions against spell-check.
Google

Google Dominates AI Patent Applications (axios.com) 12

Google has overtaken IBM to become the leader in generative AI-related patents and also leads in the emerging area of agentic AI, according to data from IFI Claims. Axios: In the patents-for-agents U.S. rankings, Google and Nvidia top the list, followed by IBM, Intel and Microsoft, according to an analysis released Thursday.

Globally, Google and Nvidia also led the agentic patents list, but three Chinese universities also make the top 10, highlighting China's place as the chief U.S. rival in the field. In global rankings for generative AI, Google was also the leader -- but six of the top 10 global spots were held by Chinese companies or universities. Microsoft was No. 3, with Nvidia and IBM also in the top 10.

AI

ChatGPT Diminishes Idea Diversity in Brainstorming, Study Finds 52

A new study published in Nature Human Behaviour reveals that ChatGPT diminishes the diversity of ideas generated during brainstorming sessions. Researchers from the University of Pennsylvania's Wharton School found [PDF] that while generative AI tools may enhance individual creativity, they simultaneously reduce the collective diversity of novel content.

The investigation responds to previous research that examined ChatGPT's impact on creativity. Their findings align with separate research published in Science Advances suggesting AI-generated content tends toward homogeneity. This phenomenon mirrors what researchers call the "fourth grade slump in creativity," referencing earlier studies on how structured approaches can limit innovative thinking.
Microsoft

Microsoft Layoffs Hit Coders Hardest With AI Costs on the Rise 48

An anonymous reader shares a report: Microsoft's recently announced job cuts fell hardest on the people who build the company's products, showing that even software developers are at risk in the age of artificial intelligence.

In Microsoft's home state of Washington, software engineering was by far the largest single job category to receive layoff notices, making up more than 40% of the roughly 2,000 positions cut, according to state documents reviewed by Bloomberg. Microsoft on Tuesday said it would cut about 6,000 workers across the company. The Washington state data represents about a third of the total.
