IBM

IBM Pledges $150 Billion US Investment (reuters.com)

IBM announced plans to invest $150 billion in the United States over the next five years, with more than $30 billion earmarked specifically for research and development of mainframes and quantum computing technology. The investment follows similar commitments from tech giants including Apple and Nvidia -- each pledging approximately $500 billion -- in the wake of President Trump's election and tariff threats.

"We have been focused on American jobs and manufacturing since our founding 114 years ago," said IBM CEO Arvind Krishna in a statement. The company currently manufactures its mainframe systems in upstate New York and plans to continue designing and assembling quantum computers domestically. The announcement comes amid challenging circumstances for IBM, which recently saw 15 government contracts shelved under the Trump administration's cost-cutting initiatives.

Further reading: IBM US Cuts May Run Deeper Than Feared - and the Jobs Are Heading To India;
IBM Now Has More Employees In India Than In the US (2017).
Chrome

'Don't Make Google Sell Chrome' (hey.com)

Ruby on Rails creator and Basecamp CTO David Heinemeier Hansson makes a case for why Google shouldn't be forced to sell Chrome: First, Chrome won the browser war fair and square by building a better surfboard for the internet. This wasn't some opportune acquisition. This was the result of grand investments, great technical prowess, and markets doing what they're supposed to do: rewarding the best. Besides, we have a million alternatives. Firefox still exists, so does Safari, and so do the billion Chromium-based browsers like Brave and Edge. And we finally even have new engines on the way with the Ladybird browser.

Look, Google's trillion-dollar business depends on a thriving web that can be searched by Google.com, that can be plastered in AdSense, and that now can feed the wisdom of AI. Thus, Google's incredible work to further the web isn't an act of charity, it's of economic self-interest, and that's why it works. Capitalism doesn't run on benevolence, but incentives.

We want an 800-pound gorilla in the web's corner! Because Apple would love nothing better (despite the admirable work to keep up with Chrome by Team Safari) than to see the web's capacity as an application platform diminished. As would every other owner of a proprietary application platform. Microsoft fought the web tooth and nail back in the 90s because they knew that a free, open application platform would undermine lock-in -- and it did!

AI

AI Helps Unravel a Cause of Alzheimer's Disease and Identify a Therapeutic Candidate (ucsd.edu)

"A new study found that a gene recently recognized as a biomarker for Alzheimer's disease is actually a cause of it," announced the University of California, San Diego, "due to its previously unknown secondary function."

"Researchers at the University of California San Diego used artificial intelligence to help both unravel this mystery of Alzheimer's disease and discover a potential treatment that obstructs the gene's moonlighting role."

A team led by Sheng Zhong, a professor in the university's bioengineering department, had previously discovered a potential blood biomarker for early detection of Alzheimer's disease (called PHGDH). But now they've discovered a correlation: the more protein and RNA that it produces, the more advanced the disease. And after more research they ended up with "a therapeutic candidate with demonstrated efficacy that has the potential of being further developed into clinical tests..." That correlation has since been verified in multiple cohorts from different medical centers, according to Zhong... [T]he researchers established that PHGDH is indeed a causal gene to spontaneous Alzheimer's disease. In further support of that finding, the researchers determined — with the help of AI — that PHGDH plays a previously undiscovered role: it triggers a pathway that disrupts how cells in the brain turn genes on and off. And such a disturbance can cause issues, like the development of Alzheimer's disease....

With AI, they could visualize the three-dimensional structure of the PHGDH protein. Within that structure, they discovered that the protein has a substructure... Zhong said, "It really demanded modern AI to formulate the three-dimensional structure very precisely to make this discovery." After discovering the substructure, the team then demonstrated that with it, the protein can activate two critical target genes. That throws off the delicate balance, leading to several problems and eventually the early stages of Alzheimer's disease. In other words, PHGDH has a previously unknown role, independent of its enzymatic function, that through a novel pathway leads to spontaneous Alzheimer's disease...

Now that the researchers uncovered the mechanism, they wanted to figure out how to intervene and thus possibly identify a therapeutic candidate, which could help target the disease.... Given that PHGDH is such an important enzyme, there are past studies on its possible inhibitors. One small molecule, known as NCT-503, stood out to the researchers because it is not quite effective at impeding PHGDH's enzymatic activity (the production of serine), which they did not want to change. NCT-503 is also able to penetrate the blood-brain-barrier, which is a desirable characteristic. They turned to AI again for three-dimensional visualization and modeling. They found that NCT-503 can access that DNA-binding substructure of PHGDH, thanks to a binding pocket. With more testing, they saw that NCT-503 does indeed inhibit PHGDH's regulatory role.

When the researchers tested NCT-503 in two mouse models of Alzheimer's disease, they saw that it significantly alleviated Alzheimer's progression. The treated mice demonstrated substantial improvement in their memory and anxiety tests...

The next steps will be to optimize the compound and subject it to FDA IND-enabling studies.



The research team published their results on April 23 in the journal Cell.
Math

Could a 'Math Genius' AI Co-author Proofs Within Three Years? (theregister.com)

A new DARPA project called expMath "aims to jumpstart math innovation with the help of AI," writes The Register. The Defense Advanced Research Projects Agency believes mathematics isn't advancing fast enough, according to the article... So to accelerate — or "exponentiate" — the rate of mathematical research, DARPA this week held a Proposers Day event to engage with the technical community, in the hope that attendees will prepare proposals to submit once the actual Broad Agency Announcement solicitation goes out...

[T]he problem is that AI just isn't very smart. It can do high school-level math but not high-level math. [One slide from DARPA program manager Patrick Shafto noted that OpenAI o1 "continues to abjectly fail at basic math despite claims of reasoning capabilities."] Nonetheless, expMath's goal is to make AI models capable of:

- auto decomposition — automatically decompose natural language statements into reusable natural language lemmas (a proven statement used to prove other statements); and
- auto(in)formalization — translate the natural language lemma into a formal proof and then translate the proof back to natural language.

"How much faster will technology advance with AI agents solving new mathematical proofs?" asks former DARPA research scientist Robin Rowe (also long-time Slashdot reader robinsrowe): DARPA says that "The goal of Exponentiating Mathematics is to radically accelerate the rate of progress in pure mathematics by developing an AI co-author capable of proposing and proving useful abstractions."
Rowe is cited in the article as the founder/CEO of an AI research institute named "Fountain Adobe". (He tells The Register that "It's an indication of DARPA's concern about how tough this may be that it's a three-year program. That's not normal for DARPA.") Rowe is optimistic. "I think we're going to kill it, honestly. I think it's not going to take three years. But I think it might take three years to do it with LLMs. So then the question becomes, how radical is everybody willing to be?"
"We will robustly engage with the math and AI communities toward fundamentally reshaping the practice of mathematics by mathematicians," explains the project's home page. They've already uploaded an hour-long video of their Proposers Day event.

"It's very unclear that current AI systems can succeed at this task..." program manager Shafto says in a short video introducing the project. But... "There's a lot of enthusiasm in the math community for the possibility of changes in the way mathematics is practiced. It opens up fundamentally new things for mathematicians. But of course, they're not AI researchers. One of the motivations for this program is to bring together two different communities — the people who are working on AI for mathematics, and the people who are doing mathematics — so that we're solving the same problem.

At its core, it's a very hard and rather technical problem. And this is DARPA's bread-and-butter, is to sort of try to change the world. And I think this has the potential to do that."

AI

Consumers Aren't Flocking to Microsoft's AI Tool 'Copilot' (xda-developers.com)

Microsoft Copilot "isn't doing as well as the company would like," reports XDA-Developers.com (citing a report from startup/VC industry site Newcomer). The Redmond giant has invested billions of dollars and a lot of manpower into making it happen, but as a recent report claims, people just don't care. In fact, if the report is to be believed, Microsoft's rise in the AI scene has already come to a screeching halt:

At Microsoft's annual executive huddle last month, the company's chief financial officer, Amy Hood, put up a slide that charted the number of users for its Copilot consumer AI tool over the past year. It was essentially a flat line, showing around 20 million weekly users. On the same slide was another line showing ChatGPT's growth over the same period, arching ever upward toward 400 million weekly users. OpenAI's iconic chatbot was soaring, while Microsoft's best hope for a mass-adoption AI tool was idling. It was a sobering chart for Microsoft's consumer AI team...

That's right; Microsoft Copilot's weekly user base is only 5% of the number of people who use ChatGPT, and it's not increasing. It's also worth noting that there are approximately 1.5 billion Windows users worldwide, which means just over 1% of them are using Copilot, a tool that's now a Windows default app....

It's not a huge surprise that Copilot is faltering. Despite Microsoft's CEO claiming that Copilot will become "the next Start button", the company has had to backtrack on the Copilot key and allow people to customise it to do something else, including giving back its original feature of the Menu key.

They also note earlier reports that Intel's AI PC chips aren't selling well.
AI

Google's DeepMind UK Team Reportedly Seeks to Unionize (techcrunch.com)

"Google's DeepMind UK team reportedly seeks to unionize," reports TechCrunch: Around 300 London-based members of Google's AI-focused DeepMind team are seeking to unionize with the Communication Workers Union, according to a Financial Times report that cites three people involved with the unionization effort.

These DeepMind employees are reportedly unhappy about Google's decision to remove a pledge not to use AI for weapons or surveillance from its website. They're also concerned about the company's work with the Israeli military, including a $1.2 billion cloud computing contract that has prompted protests elsewhere at Google.

At least five DeepMind employees have quit, according to the report, out of a total of around 2,000 U.K. staff members.

"A small group of around 200 employees of Google and its parent company Alphabet previously announced that they were unionizing," the article adds, "though as a union representing just a tiny slice of the total Google workforce, it lacked the ability to collectively bargain."
IT

WSJ: Tech-Industry Workers Now 'Miserable', Fearing Layoffs, Working Longer Hours (msn.com)

"Not so long ago, working in tech meant job security, extravagant perks and a bring-your-whole-self-to-the-office ethos rare in other industries," writes the Wall Street Journal.

But now tech work "looks like a regular job," with workers "contending with the constant fear of layoffs, longer hours and an ever-growing list of responsibilities for the same pay." Now employees find themselves doing the work of multiple laid-off colleagues. Some have lost jobs only to be rehired into positions that aren't eligible for raises or stock grants. Changing jobs used to be a surefire way to secure a raise; these days, asking for more money can lead to a job offer being withdrawn.

The shift in tech has been building slowly. For years, demand for workers outstripped supply, a dynamic that peaked during the Covid-19 pandemic. Big tech companies like Meta and Salesforce admitted they brought on too many employees. The ensuing downturn included mass layoffs that started in 2022...

[S]ome longtime tech employees say they no longer recognize the companies they work for. Management has become more focused on delivering the results Wall Street expects. Revenue remains strong for tech giants, but they're pouring resources into costly AI infrastructure, putting pressure on cash flow. With the industry all grown up, a heads-down, keep-quiet mentality has taken root, workers say... Tech workers are still well-paid compared with other sectors, but currently there's a split in the industry. Those working in AI — and especially those with Ph.D.s — are seeing their compensation packages soar. But those without AI experience are finding they're better off staying where they are, because companies aren't paying what they were a few years ago.

Other excerpts from the Wall Street Journal's article:
  • "I'm hearing of people having 30 direct reports," says David Markley, who spent seven years at Amazon and is now an executive coach for workers at large tech companies. "It's not because the companies don't have the money. In a lot of ways, it's because of AI and the narratives out there about how collapsing the organization is better...."
  • Google co-founder Sergey Brin told a group of employees in February that 60 hours a week was the sweet spot of productivity, in comments reported earlier by the New York Times.
  • One recruiter at Meta who had been laid off by the company was rehired into her old role last year, but with a catch: She's now classified as a "short-term employee." Her contract is eligible for renewal, but she doesn't get merit pay increases, promotions or stock. The recruiter says she's responsible for a volume of work that used to be spread among several people. The company refers to being loaded with such additional responsibilities as "agility."
  • More than 50,000 tech workers from over 100 companies have been laid off in 2025, according to Layoffs.fyi, a website that tracks job cuts and crowdsources lists of laid-off workers...

Even before those 50,000 layoffs in 2025, Silicon Valley's Mercury News was citing some interesting statistics from economic research/consulting firm Beacon Economics. In 2020, 2021 and 2022, the San Francisco Bay Area added 74,700 tech jobs. But then in 2023 and 2024 the industry slashed even more tech jobs -- 80,200 -- for a net loss (over five years) of 5,500.

So is there really a cutback in perks and a fear of layoffs casting a pall over the industry? Share your own thoughts and experiences in the comments. Do you agree with the picture being painted by the Wall Street Journal?

They told their readers that tech workers are now "just like the rest of us: miserable at work."


Education

Canadian University Cancels Coding Competition Over Suspected AI Cheating (uwaterloo.ca)

The university blamed it on "the significant number of students" who violated their coding competition's rules. Long-time Slashdot reader theodp quotes this report from The Logic: Finding that many students violated rules and submitted code not written by themselves, the University of Waterloo's Centre for Computing and Math decided not to release results from its annual Canadian Computing Competition (CCC), which many students rely on to bolster their chances of being accepted into Waterloo's prestigious computing and engineering programs, or land a spot on teams to represent Canada in international competitions.

"It is clear that many students submitted code that they did not write themselves, relying instead on forbidden external help," the CCC co-chairs explained in a statement. "As such, the reliability of 'ranking' students would neither be equitable, fair, or accurate."

"It is disappointing that the students who violated the CCC Rules will impact those students who are deserving of recognition," the university said in its statement, adding that it is "considering possible ways to address this problem for future contests."
AI

NYT Asks: Should We Start Taking the Welfare of AI Seriously? (msn.com)

A New York Times technology columnist has a question.

"Is there any threshold at which an A.I. would start to deserve, if not human-level rights, at least the same moral consideration we give to animals?" [W]hen I heard that researchers at Anthropic, the AI company that made the Claude chatbot, were starting to study "model welfare" — the idea that AI models might soon become conscious and deserve some kind of moral status — the humanist in me thought: Who cares about the chatbots? Aren't we supposed to be worried about AI mistreating us, not us mistreating it...?

But I was intrigued... There is a small body of academic research on A.I. model welfare, and a modest but growing number of experts in fields like philosophy and neuroscience are taking the prospect of A.I. consciousness more seriously, as A.I. systems grow more intelligent.... Tech companies are starting to talk about it more, too. Google recently posted a job listing for a "post-AGI" research scientist whose areas of focus will include "machine consciousness." And last year, Anthropic hired its first AI welfare researcher, Kyle Fish... [who] believes that in the next few years, as AI models develop more humanlike abilities, AI companies will need to take the possibility of consciousness more seriously....

Fish isn't the only person at Anthropic thinking about AI welfare. There's an active channel on the company's Slack messaging system called #model-welfare, where employees check in on Claude's well-being and share examples of AI systems acting in humanlike ways. Jared Kaplan, Anthropic's chief science officer, said in a separate interview that he thought it was "pretty reasonable" to study AI welfare, given how intelligent the models are getting. But testing AI systems for consciousness is hard, Kaplan warned, because they're such good mimics. If you prompt Claude or ChatGPT to talk about its feelings, it might give you a compelling response. That doesn't mean the chatbot actually has feelings — only that it knows how to talk about them...

[Fish] said there were things that AI companies could do to take their models' welfare into account, in case they do become conscious someday. One question Anthropic is exploring, he said, is whether future AI models should be given the ability to stop chatting with an annoying or abusive user if they find the user's requests too distressing.

Microsoft

Devs Sound Alarm After Microsoft Subtracts C/C++ Extension From VS Code Forks (theregister.com)

Some developers are "crying foul" after Microsoft's C/C++ extension for Visual Studio Code stopped working with VS Code derivatives like VS Codium and Cursor, reports The Register. The move has prompted Cursor to transition to open-source alternatives, while some developers are calling for a regulatory investigation into Microsoft's alleged anti-competitive behavior. From the report: In early April, programmers using VS Codium, an open-source fork of Microsoft's MIT-licensed VS Code, and Cursor, a commercial AI code assistant built from the VS Code codebase, noticed that the C/C++ extension stopped working. The extension adds C/C++ language support, such as Intellisense code completion and debugging, to VS Code. The removal of these capabilities from competing tools breaks developer workflows, hobbles the editor, and arguably hinders competition. The breaking change appears to have occurred with the release of v1.24.5 on April 3, 2025.

Following the April update, attempts to install the C/C++ extension outside of VS Code generate this error message: "The C/C++ extension may be used only with Microsoft Visual Studio, Visual Studio for Mac, Visual Studio Code, Azure DevOps, Team Foundation Server, and successor Microsoft products and services to develop and test your applications." Microsoft has forbidden the use of its extensions outside of its own software products since at least September 2020, when the current licensing terms were published. But it hasn't enforced those terms in its C/C++ extension with an environment check in its binaries until now. [...]
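The Register doesn't reproduce the check itself, but an allow-list gate on the host application's name, of the kind the error message implies, can be sketched like this (hypothetical sketch; the real check and its allow-list live in the extension's compiled binaries and are not public):

```python
# Hypothetical sketch of a host-name allow-list; the actual detection
# mechanism used by Microsoft's C/C++ extension is not public.
ALLOWED_HOSTS = (
    "Visual Studio Code",
    "Visual Studio",
    "Azure DevOps",
)

def is_allowed_host(app_name: str) -> bool:
    """Return True only for hosts on the allow-list."""
    return app_name.startswith(ALLOWED_HOSTS)

print(is_allowed_host("Visual Studio Code"))  # True
print(is_allowed_host("VSCodium"))            # False
print(is_allowed_host("Cursor"))              # False
```

A fork such as VSCodium or Cursor reports its own application name, so a gate like this refuses it even though the editor is otherwise API-compatible with VS Code.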

Developers discussing the issue in Cursor's GitHub repo have noted that Microsoft recently rolled out a competing AI software agent capability, dubbed Agent Mode, within its Copilot software. One such developer who contacted us anonymously told The Register they sent a letter about the situation to the US Federal Trade Commission, asking them to probe Microsoft for unfair competition -- alleging self-preferencing, bundling Copilot without a removal option, and blocking rivals like Cursor to lock users into its AI ecosystem.

Intel

Intel's AI PC Chips Aren't Selling Well (tomshardware.com)

Intel is grappling with an unexpected market shift as customers eschew its new AI-focused processors for cheaper previous-generation chips. The company revealed during its recent earnings call that demand for older Raptor Lake processors has surged while its newer, more expensive Lunar Lake and Meteor Lake AI PC chips struggle to gain traction.

This surprising trend, first reported by Tom's Hardware, has created a production capacity shortage for Intel's 'Intel 7' process node that will "persist for the foreseeable future," despite the fact that current-generation chips utilize TSMC's newer nodes. "Customers are demanding system price points that consumers really want," explained Intel executive Michelle Johnston Holthaus, noting that economic concerns and tariffs have affected inventory decisions.
Microsoft

Microsoft's Big AI Hire Can't Match OpenAI (newcomer.co)

An anonymous reader shares a report: At Microsoft's annual executive huddle last month, the company's chief financial officer, Amy Hood, put up a slide that charted the number of users for its Copilot consumer AI tool over the past year. It was essentially a flat line, showing around 20 million weekly users. On the same slide was another line showing ChatGPT's growth over the same period, arching ever upward toward 400 million weekly users.

OpenAI's iconic chatbot was soaring, while Microsoft's best hope for a mass-adoption AI tool was idling. It was a sobering chart for Microsoft's consumer AI team and the man who's been leading it for the past year, Mustafa Suleyman. Microsoft brought Suleyman aboard in March of 2024, along with much of the talent at his struggling AI startup Inflection, in return for a $650 million licensing fee that made Inflection's investors whole, and then some.

[...] Yet from the very start, people inside the company told me they were skeptical. Many outsiders have struggled to make an impact or even survive at Microsoft, a company that's full of lifers who cut their tech teeth in a different era. My skeptical sources noted Suleyman's previous run at a big company hadn't gone well, with Google stripping him of some management responsibilities following complaints of how he treated staff, the Wall Street Journal reported at the time. There was also much eye-rolling at the fact that Suleyman was given the title of CEO of Microsoft AI. That designation is typically reserved for the top executive at companies it acquires and lets operate semi-autonomously, such as LinkedIn or GitHub.

AI

YC Partner Argues Most AI Apps Are Currently 'Horseless Carriages' (koomen.dev)

Pete Koomen, a Y Combinator partner, argues that current AI applications often fail by unnecessarily constraining their underlying models, much like early automobiles that mimicked horse-drawn carriages rather than reimagining transportation. In his detailed critique, Koomen uses Gmail's AI email draft feature as a prime example. The tool generates formal, generic emails that don't match users' actual writing styles, often producing drafts longer than what users would naturally write.

The critical flaw, according to Koomen, is that users cannot customize the system prompt -- the instructions that tell the AI how to behave. "When an LLM agent is acting on my behalf I should be allowed to teach it how to do that by editing the System Prompt," Koomen writes. Koomen suggests AI is actually better at reading and transforming text than generating it. His vision for truly useful AI email tools involves automating mundane work -- categorizing, prioritizing, and drafting contextual replies based on personalized rules -- rather than simply generating content from scratch. The essay argues that developers should build "agent builders" instead of agents, allowing users to teach AI systems their preferences and patterns.
The Internet

Perplexity CEO Says Its Browser Will Track Everything Users Do Online To Sell Ads (techcrunch.com)

An anonymous reader quotes a report from TechCrunch: Perplexity CEO Aravind Srinivas said this week on the TBPN podcast that one reason Perplexity is building its own browser is to collect data on everything users do outside of its own app. This is so it can sell premium ads. "That's kind of one of the other reasons we wanted to build a browser, is we want to get data even outside the app to better understand you," Srinivas said. "Because some of the prompts that people do in these AIs is purely work-related. It's not like that's personal."

And work-related queries won't help the AI company build an accurate-enough dossier. "On the other hand, what are the things you're buying; which hotels are you going [to]; which restaurants are you going to; what are you spending time browsing, tells us so much more about you," he explained. Srinivas believes that Perplexity's browser users will be fine with such tracking because the ads should be more relevant to them. "We plan to use all the context to build a better user profile and, maybe you know, through our discover feed we could show some ads there," he said. The browser, named Comet, suffered setbacks but is on track to be launched in May, Srinivas said.

AI

Sydney Radio Station Secretly Used AI-Generated Host For 6 Months Without Disclosure

The Sydney-based CADA station secretly used an AI-generated host named "Thy" for its weekday shows over six months without disclosure. The Sydney Morning Herald reports: After initial questioning from Stephanie Coombes in The Carpet newsletter, it was revealed that the station used ElevenLabs -- a generative AI audio platform that transforms text into speech -- to create Thy, whose likeness and voice were cloned from a real employee in the ARN finance team. The Australian Communications and Media Authority said there were currently no specific restrictions on the use of AI in broadcast content, and no obligation to disclose its use.

An ARN spokesperson said the company was exploring how new technology could enhance the listener experience. "We've been trialling AI audio tools on CADA, using the voice of Thy, an ARN team member. This is a space being explored by broadcasters globally, and the trial has offered valuable insights." However, it has also "reinforced the power of real personalities in driving compelling content," the spokesperson added.

The Australian Financial Review reported that Workdays with Thy has been broadcast on CADA since November, and was reported to have reached at least 72,000 people in last month's ratings. Vice president of the Australian Association of Voice Actors, Teresa Lim, said CADA's failure to disclose its use of AI reinforces how necessary legislation around AI labelling has become. "AI can be such a powerful and positive tool in broadcasting if there are correct safeguards in place," she said. "Authenticity and truth are so important for broadcast media. The public deserves to know what the source is of what's being broadcast ... We need to have these discussions now before AI becomes so advanced that it's too difficult to regulate."
Businesses

You'll Soon Manage a Team of AI Agents, Says Microsoft's Work Trend Report (zdnet.com)

ZipNada shares a report from ZDNet: Microsoft's latest research identifies a new type of organization known as the Frontier Firm, where on-demand intelligence requirements are managed by hybrid teams of AI agents and humans. The report identified real productivity gains from implementing AI into organizations, with one of the biggest being filling the capacity gap -- as many as 80% of the global workforce, both employees and leaders, report having too much work to do, but not enough time or energy to do it. ... According to the report, business leaders need to separate knowledge workers from knowledge work, acknowledging that humans who can complete higher-level tasks, such as creativity and judgment, should not be stuck answering emails. Rather, in the same way working professionals say they send emails or create pivot tables, soon they will be able to say they create and manage agents -- and Frontier Firms are showing the potential possibilities of this approach. ... "Everyone will need to manage agents," said Cambron. "I think it's exciting to me to think that, you know, with agents, every early-career person will be able to experience management from day one, from their first job."
Windows

Microsoft Brings Native PyTorch Arm Support To Windows Devices (neowin.net)

Microsoft has announced native PyTorch support for Windows on Arm devices with the release of PyTorch 2.7, making it significantly easier for developers to build and run machine learning models directly on Arm-powered Windows machines. This eliminates the need for manual compilation and opens up performance gains for AI tasks like image classification, NLP, and generative AI. Neowin reports: With the release of PyTorch 2.7, native Arm builds for Windows on Arm are now readily available for Python 3.12. This means developers can simply install PyTorch using a standard package manager like pip.

According to Microsoft: "This unlocks the potential to leverage the full performance of Arm64 architecture on Windows devices, like Copilot+ PCs, for machine learning experimentation, providing a robust platform for developers and researchers to innovate and refine their models."
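Since the native wheels target Python 3.12 on Arm64 Windows, a quick standard-library preflight check (a sketch; the version and platform gates come from the report above) can tell a developer whether a plain `pip install torch` will pick up the native build:

```python
import platform
import sys

def native_pytorch_supported() -> bool:
    """Rough preflight: native PyTorch 2.7 wheels for Windows on Arm
    target Arm64 Windows with Python 3.12, per the announcement."""
    on_windows_arm = (
        platform.system() == "Windows"
        and platform.machine().upper() in ("ARM64", "AARCH64")
    )
    return on_windows_arm and sys.version_info[:2] == (3, 12)

print(native_pytorch_supported())
```

On any other combination, pip falls back to whatever wheel (or source build) is available for that platform, which is exactly the manual-compilation situation the native builds eliminate.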

Apple

Apple To Strip Secret Robotics Unit From AI Chief Weeks After Moving Siri (bloomberg.com)

An anonymous reader shares a report: Apple will remove its secret robotics unit from the command of its artificial intelligence chief, the latest shake-up in response to the company's AI struggles. Apple plans to relocate the robotics team from John Giannandrea's AI organization to the hardware division later this month, according to people with knowledge of the move.

That will place it under Senior Vice President John Ternus, who oversees hardware engineering, said the people, who asked not to be identified because the change isn't public. The pending shift will mark the second major project to be removed from Giannandrea in the past month: The company stripped the flailing Siri voice assistant from his purview in March.

Google

Google AI Fabricates Explanations For Nonexistent Idioms (wired.com)

Google's search AI is confidently generating explanations for nonexistent idioms, once again revealing fundamental flaws in large language models. Users discovered that entering any made-up phrase plus "meaning" triggers AI Overviews that present fabricated etymologies with unwarranted authority.

When queried about phrases like "a loose dog won't surf," Google's system produces detailed, plausible-sounding explanations rather than acknowledging these expressions don't exist. The system occasionally includes reference links, further enhancing the false impression of legitimacy.

Computer scientist Ziang Xiao from Johns Hopkins University attributes this behavior to two key LLM characteristics: prediction-based text generation and people-pleasing tendencies. "The prediction of the next word is based on its vast training data," Xiao explained. "However, in many cases, the next coherent word does not lead us to the right answer."
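Xiao's point about prediction-based generation can be seen in a deliberately tiny sketch (ours, not from the article): a model that always emits the most probable next word produces a fluent continuation for any prompt, with no mechanism for checking whether the phrase it is "explaining" actually exists.

```python
from collections import Counter

# Toy corpus: two "idiom explanations" the model has seen before.
corpus = (
    "the idiom means that you should not act too early "
    "the idiom means that patience pays off"
).split()

# Bigram counts stand in for a trained language model.
bigrams = Counter(zip(corpus, corpus[1:]))

def most_likely_next(word: str):
    """Greedy next-word prediction: pick the highest-count continuation."""
    candidates = {pair: n for pair, n in bigrams.items() if pair[0] == word}
    if not candidates:
        return None
    return max(candidates, key=candidates.get)[1]

# Asked about ANY phrase, the toy model marches down the same fluent path:
print(most_likely_next("idiom"))  # "means"
print(most_likely_next("means"))  # "that"
```

The continuation is coherent but truth-blind, which is Xiao's point that "the next coherent word does not lead us to the right answer."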
Programming

AI Tackles Aging COBOL Systems as Legacy Code Expertise Dwindles

US government agencies and Fortune 500 companies are turning to AI to modernize mission-critical systems built on COBOL, a programming language dating back to the late 1950s. The US Social Security Administration plans a three-year, $1 billion AI-assisted upgrade of its legacy COBOL codebase [alternative source], according to Bloomberg.

Treasury Secretary Scott Bessent has repeatedly stressed the need to overhaul government systems running on COBOL. As experienced programmers retire, organizations face growing challenges maintaining these systems that power everything from banking applications to pension disbursements. Engineers now use tools like ChatGPT and IBM's watsonx to interpret COBOL code, create documentation, and translate it to modern languages.
