AI

Hugging Face Clones OpenAI's Deep Research In 24 Hours 17

An anonymous reader quotes a report from Ars Technica: On Tuesday, Hugging Face researchers released an open source AI research agent called "Open Deep Research," created by an in-house team as a challenge 24 hours after the launch of OpenAI's Deep Research feature, which can autonomously browse the web and create research reports. The project seeks to match Deep Research's performance while making the technology freely available to developers. "While powerful LLMs are now freely available in open-source, OpenAI didn't disclose much about the agentic framework underlying Deep Research," writes Hugging Face on its announcement page. "So we decided to embark on a 24-hour mission to reproduce their results and open-source the needed framework along the way!"

Similar to both OpenAI's Deep Research and Google's implementation of its own "Deep Research" using Gemini (first introduced in December -- before OpenAI), Hugging Face's solution adds an "agent" framework to an existing AI model, allowing it to perform multi-step tasks such as collecting information and assembling a report as it goes, which it then presents to the user at the end. The open source clone is already racking up comparable benchmark results. After only a day's work, Hugging Face's Open Deep Research has reached 55.15 percent accuracy on the General AI Assistants (GAIA) benchmark, which tests an AI model's ability to gather and synthesize information from multiple sources. OpenAI's Deep Research scored 67.36 percent accuracy on the same benchmark with a single-pass response (OpenAI's score went up to 72.57 percent when 64 responses were combined using a consensus mechanism).
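The "agent framework" pattern described above can be sketched in a few lines: a loop in which a model repeatedly chooses a tool, observes the result, and stops once it produces a final answer. This is a hypothetical minimal illustration of the general pattern (including the majority-vote idea behind the 64-response "consensus mechanism"), not Hugging Face's or OpenAI's actual code; `fake_model` and the `search` tool are stand-ins.

```python
from collections import Counter

def run_agent(ask_model, tools, task, max_steps=10):
    """ask_model(history) -> (action, argument); tools maps name -> callable."""
    history = [("task", task)]
    for _ in range(max_steps):
        action, arg = ask_model(history)
        if action == "final_answer":
            return arg
        observation = tools[action](arg)       # e.g. web search, page fetch
        history.append((action, observation))  # feed the result back to the model
    return None  # step budget exhausted without an answer

def consensus(answers):
    """Majority vote over repeated runs -- one simple 'consensus mechanism'."""
    return Counter(answers).most_common(1)[0][0]

# Toy stand-ins so the loop runs without a real LLM or web access:
def fake_model(history):
    if len(history) == 1:                      # only the task so far: go search
        return "search", "GAIA benchmark"
    return "final_answer", "GAIA tests multi-step information gathering."

answer = run_agent(fake_model, {"search": lambda q: f"results for {q!r}"},
                   "What does GAIA measure?")
```

The value of the framework is exactly this glue: the model itself never browses; the loop feeds tool results back into its context until it is ready to answer.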

As Hugging Face points out in its post, GAIA includes complex multi-step questions such as this one: "Which of the fruits shown in the 2008 painting 'Embroidery from Uzbekistan' were served as part of the October 1949 breakfast menu for the ocean liner that was later used as a floating prop for the film 'The Last Voyage'? Give the items as a comma-separated list, ordering them in clockwise order based on their arrangement in the painting starting from the 12 o'clock position. Use the plural form of each fruit." To correctly answer that type of question, the AI agent must seek out multiple disparate sources and assemble them into a coherent answer. Many of the questions in GAIA represent no easy task, even for a human, so they test agentic AI's mettle quite well.
Open Deep Research "builds on OpenAI's large language models (such as GPT-4o) or simulated reasoning models (such as o1 and o3-mini) through an API," notes Ars. "But it can also be adapted to open-weights AI models. The novel part here is the agentic structure that holds it all together and allows an AI language model to autonomously complete a research task."

The code has been made public on GitHub.
AI

DeepSeek's AI App Will 'Highly Likely' Get Banned in the US, Jefferies Says 64

DeepSeek's AI app will "highly likely" face a US consumer ban after topping download charts on Apple's App Store and Google Play, according to analysts at US investment bank Jefferies. The US federal government, Navy and Texas have already banned the app, and analysts expect broader restrictions using legislation similar to that targeting TikTok.

While consumer access may be blocked, US developers could still be allowed to self-host DeepSeek's model to eliminate security risks, the analysts added. Even if completely banned, DeepSeek's impact on pushing down AI costs will persist as US companies work to replicate its technology, Jefferies said in a report this week reviewed by Slashdot.

The app's pricing advantage remains significant, with OpenAI's latest o3-mini model still costing 100% more than DeepSeek's R1 despite being 63% cheaper than o1-mini. The potential ban comes amid broader US-China tech tensions. While restrictions on H20 chips appear unlikely given their limited training capabilities, analysts expect the Biden administration's AI diffusion policies to remain largely intact under Trump, with some quota increases possible for overseas markets based on their AI activity levels.
AI

Researchers Created an Open Rival To OpenAI's o1 'Reasoning' Model for Under $50 23

AI researchers at Stanford and the University of Washington were able to train an AI "reasoning" model for under $50 in cloud compute credits, according to a research paper. From a report: The model, known as s1, performs similarly to cutting-edge reasoning models, such as OpenAI's o1 and DeepSeek's R1, on tests measuring math and coding abilities. The s1 model is available on GitHub, along with the data and code used to train it.

The team behind s1 said they started with an off-the-shelf base model, then fine-tuned it through distillation, a process to extract the "reasoning" capabilities from another AI model by training on its answers. The researchers said s1 is distilled from one of Google's reasoning models, Gemini 2.0 Flash Thinking Experimental. Distillation is the same approach Berkeley researchers used to create an AI reasoning model for around $450 last month.
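Distillation as described here amounts to supervised fine-tuning on a teacher model's outputs. Below is a minimal, hypothetical sketch of the data-preparation step only: `query_teacher` is a canned stand-in for calling the teacher model's API, and the real s1 pipeline differs in its details.

```python
# Sketch of distillation-style data prep: collect the teacher model's
# answers (including reasoning traces) as supervised targets for a student.

def query_teacher(prompt):
    # In practice this would call the teacher model's API (e.g. a Gemini
    # endpoint); a canned response keeps the sketch self-contained.
    return {"reasoning": f"step-by-step thoughts about {prompt!r}",
            "answer": f"answer to {prompt!r}"}

def build_distillation_set(prompts):
    examples = []
    for p in prompts:
        out = query_teacher(p)
        # Train the student to reproduce both the reasoning and the answer.
        target = out["reasoning"] + "\n" + out["answer"]
        examples.append({"prompt": p, "target": target})
    return examples

dataset = build_distillation_set(["2+2?", "Integrate x^2"])
# Each dict becomes one supervised fine-tuning pair for the student model.
```

The low cost reported for s1 comes from this design: the expensive step (teaching a model to "reason" from scratch) is replaced by imitating a model that already can.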
The Internet

The Enshittification Hall of Shame 249

In 2022, writer and activist Cory Doctorow coined the term "enshittification" to describe the gradual deterioration of a service or product. The term has become prevalent enough that Australia's Macquarie Dictionary named it word of the year last year. The editors at Ars Technica, having "covered a lot of things that have been enshittified," decided to highlight some of the worst examples they've come across. Here's a summary of each thing mentioned in their report: Smart TVs: Evolved into data-collecting billboards, prioritizing advertising and user tracking over user experience and privacy. Features like convenient input buttons are sacrificed for pushing ads and webOS apps. "This is all likely to get worse as TV companies target software, tracking, and ad sales as ways to monetize customers after their TV purchases -- even at the cost of customer convenience and privacy," writes Scharon Harding. "When budget brands like Roku are selling TV sets at a loss, you know something's up."

Google's Voice Assistant (e.g., Nest Hubs): Functionality has degraded over time, with previously working features becoming unreliable. Users report frequent misunderstandings and unresponsiveness. "I'm fine just saying it now: Google Assistant is worse now than it was soon after it started," writes Kevin Purdy. "Even if Google is turning its entire supertanker toward AI now, it's not clear why 'Start my morning routine,' 'Turn on the garage lights,' and 'Set an alarm for 8 pm' had to suffer."

Portable Document Format (PDF): While initially useful for cross-platform document sharing and preserving formatting, PDFs have become bloated and problematic. Copying text, especially from academic journals, is often garbled or impossible. "Apple, which had given the PDF a reprieve, has now killed its main selling point," writes John Timmer. "Because Apple has added OCR to the MacOS image display system, I can get more reliable results by screenshotting the PDF and then copying the text out of that. This is the true mark of its enshittification: I now wish the journals would just give me a giant PNG."

Televised Sports (specifically cycling and Formula 1): Streaming services have consolidated, leading to significantly increased costs for viewers. Previously affordable and comprehensive options have been replaced by expensive bundles across multiple platforms. "Formula 1 racing has largely gone behind paywalls, and viewership is down significantly over the last 15 years," writes Eric Berger. "Major US sports such as professional and college football had largely been exempt, but even that is now changing, with NFL games being shown on Peacock, Amazon Prime, and Netflix. None of this helps viewers. It enshittifies the experience for us in the name of corporate greed."

Google Search: AI overviews often bury relevant search results under lengthy, sometimes inaccurate AI-generated content. This makes finding specific information, especially primary source documents, more difficult. "Google, like many big tech companies, expects AI to revolutionize search and is seemingly intent on ignoring any criticism of that idea," writes Ashley Belanger.

Email AI Tools (e.g., Gemini in Gmail): Intrusive and difficult to disable, these tools offer questionable value due to their potential for factual inaccuracies. Users report being unable to fully opt-out. "Gmail won't take no for an answer," writes Dan Goodin. "It keeps asking me if I want to use Google's Gemini AI tool to summarize emails or draft responses. As the disclaimer at the bottom of the Gemini tool indicates, I can't count on the output being factual, so no, I definitely don't want it."

Windows: While many complaints about Windows 11 originated with Windows 10, the newer version continues the trend of unwanted features, forced updates, and telemetry data collection. Bugs and performance issues also plague the operating system. "... it sure is easy to resent Windows 11 these days, between the well-documented annoyances, the constant drumbeat of AI stuff (some of it gated to pricey new PCs), and a batch of weird bugs that mostly seem to be related to the under-the-hood overhauls in October's Windows 11 24H2 update," writes Andrew Cunningham. "That list includes broken updates for some users, inoperable scanners, and a few unplayable games. With every release, the list of things you need to do to get rid of and turn off the most annoying stuff gets a little longer."

Web Discourse: The rapid spread of memes, trends, and corporate jargon on social media has led to a homogenization of online communication, making it difficult to distinguish original content and creating a sense of constant noise. "[T]he enshittification of social media, particularly due to its speed and virality, has led to millions vying for their moment in the sun, and all I see is a constant glare that makes everything look indistinguishable," writes Jacob May. "No wonder some companies think AI is the future."
Businesses

AMD Outsells Intel In the Datacenter For the First Time (tomshardware.com) 21

During the fourth quarter of 2024, AMD surpassed Intel in datacenter sales for the first time in history -- despite weaker-than-expected sales of its datacenter GPUs. Tom's Hardware reports: AMD's revenue in Q4 2024 totaled $7.658 billion, up 24% year-over-year. The company's gross margin hit 51%, whereas net income was $482 million. On an annual basis, 2024 was AMD's best year ever as the company's revenue reached $25.8 billion, up 14% year-over-year. The company earned net income of $1.641 billion as its gross margin hit 49%. But while the company's annual results are impressive, there is something about Q4 results that AMD should be proud of.

Datacenter business was the company's primary source of earnings, with net revenue reaching a record $3.86 billion in Q4, marking a 69% year-over-year (YoY) increase and a 9% quarter-over-quarter (QoQ) rise. Operating income also saw substantial improvement, surging 74% YoY to $1.16 billion. By contrast, Intel's datacenter and AI business unit posted $3.4 billion revenue, while its operating income reached $200 million. But while the quarter marked a milestone for AMD, market analysts expected AMD to sell more of its Instinct MI300-series GPUs for AI and HPC.
You can view AMD's 2024 financial results here.
China

Researchers Link DeepSeek To Chinese Telecom Banned In US (apnews.com) 86

An anonymous reader quotes a report from the Associated Press: The website of the Chinese artificial intelligence company DeepSeek, whose chatbot became the most downloaded app in the United States, has computer code that could send some user login information to a Chinese state-owned telecommunications company that has been barred from operating in the United States, security researchers say. The web login page of DeepSeek's chatbot contains heavily obfuscated computer script that when deciphered shows connections to computer infrastructure owned by China Mobile, a state-owned telecommunications company. The code appears to be part of the account creation and user login process for DeepSeek.

In its privacy policy, DeepSeek acknowledged storing data on servers inside the People's Republic of China. But the link to China Mobile revealed by researchers suggests its chatbot is more directly tied to the Chinese state than previously known. The U.S. has claimed there are close ties between China Mobile and the Chinese military as justification for placing limited sanctions on the company. [...] The code linking DeepSeek to one of China's leading mobile phone providers was first discovered by Feroot Security, a Canadian cybersecurity company, which shared its findings with The Associated Press. The AP took Feroot's findings to a second set of computer experts, who independently confirmed that China Mobile code is present. Neither Feroot nor the other researchers observed data transferred to China Mobile when testing logins in North America, but they could not rule out that data for some users was being transferred to the Chinese telecom.

The analysis only applies to the web version of DeepSeek. The researchers did not analyze the mobile version, which remains one of the most downloaded pieces of software on both the Apple and the Google app stores. The U.S. Federal Communications Commission unanimously denied China Mobile authority to operate in the United States in 2019, citing "substantial" national security concerns about links between the company and the Chinese state. In 2021, the Biden administration also issued sanctions limiting the ability of Americans to invest in China Mobile after the Pentagon linked it to the Chinese military.
"It's mindboggling that we are unknowingly allowing China to survey Americans and we're doing nothing about it," said Ivan Tsarynny, CEO of Feroot. "It's hard to believe that something like this was accidental. There are so many unusual things to this. You know that saying 'Where there's smoke, there's fire'? In this instance, there's a lot of smoke," Tsarynny said.

Further reading: Senator Hawley Proposes Jail Time For People Who Download DeepSeek
Supercomputing

Google Says Commercial Quantum Computing Applications Arriving Within 5 Years (msn.com) 38

Google aims to release commercial quantum computing applications within five years, challenging Nvidia's prediction of a 20-year timeline. "We're optimistic that within five years we'll see real-world applications that are possible only on quantum computers," founder and lead of Google Quantum AI Hartmut Neven said in a statement. Reuters reports: Real-world applications Google has discussed are related to materials science - applications such as building superior batteries for electric cars - creating new drugs and potentially new energy alternatives. [...] Google has been working on its quantum computing program since 2012 and has designed and built several quantum chips. By using quantum processors, Google said it had managed to solve a computing problem in minutes that would take a classical computer more time than the history of the universe.

Google's quantum computing scientists announced another step on the path to real world applications within five years on Wednesday. In a paper published in the scientific journal Nature, the scientists said they had discovered a new approach to quantum simulation, which is a step on the path to achieving Google's objective.

Businesses

Workday To Cut Nearly 2,000 Workers on Profitability Focus (yahoo.com) 21

Workday is cutting about 8.5% of its workforce, making it the latest technology company to begin 2025 with headcount reductions. From a report: The cuts will amount to about 1,750 workers, Chief Executive Officer Carl Eschenbach wrote in a note to employees Wednesday. "The environment we're operating in today demands a new approach, particularly given our size and scale," he wrote. Workday intends to hire in strategic areas such as AI, allow faster decision-making, and take on more people overseas, Eschenbach wrote. This will advance the company's "ongoing focus on durable growth," Workday said in a filing Wednesday. Shares of Workday jumped more than 5% on the news.
AI

'AI Granny' Driving Scammers Up the Wall 82

Since November, British telecom O2 has deployed an AI chatbot masquerading as a 78-year-old grandmother to waste scammers' time. The bot, named Daisy, engages fraudsters by discussing knitting patterns and recipes and asking about their tea preferences, all while feigning computer illiteracy. The Guardian has an update this week: In tests over several weeks, Daisy has kept individual scammers occupied for up to 40 minutes, with one case showing her being passed between four different callers. An excerpt from the story: "When a third scammer tries to get her to download the Google Play Store, she replies: 'Dear, did you say pastry? I'm not really on the right page.' She then complains that her screen has gone blank, saying it has 'gone black like the night sky'."
Google

Google To Spend $75 Billion on AI Push (cnbc.com) 33

Google parent Alphabet plans to spend $75 billion on capital expenditures in 2025, up from $52.5 billion last year, as it races to compete with Microsoft and Meta in AI infrastructure. CNBC: On its earnings call, Alphabet said it expects $16 billion to $18 billion of those expenses to come in the first quarter. Overall, the expenditures will go toward "technical infrastructure, primarily for servers, followed by data centers and networking," finance chief Anat Ashkenazi said.

[...] Alphabet and its megacap tech rivals are rushing to build out their data centers with next-generation AI infrastructure, packed with Nvidia's graphics processing units, or GPUs. Last month, Meta said it plans to invest $60 billion to $65 billion this year as part of its AI push. Microsoft has committed to $80 billion in AI-related capital expenditures in its current fiscal year.

Facebook

Meta CTO: 2025 Make or Break Year for Metaverse (msn.com) 80

Meta's metaverse ambitions face a decisive year in 2025, with Chief Technology Officer Andrew Bosworth warning employees that the project could become either "a legendary misadventure" or prove visionary, Business Insider is reporting, citing an internal memo. Bosworth called for increased sales and user engagement for Meta's mixed reality products, noting the company plans to launch several AI-powered wearable devices.

The tech giant's Reality Labs division, which develops virtual and augmented reality products, reported record revenue of $1.08 billion in the fourth quarter but posted its largest-ever quarterly loss of $4.97 billion. Meta CEO Mark Zuckerberg told staff the company's AI-powered smart glasses, which sold over 1 million units in 2024, marked a "great start" but would not significantly impact the business. The Reality Labs unit has accumulated losses of approximately $60 billion since 2020.
Transportation

UK Team Invents Self-Healing Road Surface To Prevent Potholes (theguardian.com) 34

An anonymous reader quotes a report from The Guardian: For all motorists, but perhaps the Ferrari-collecting rocker Rod Stewart in particular, it will be music to the ears: researchers have developed a road surface that heals when it cracks, preventing potholes without a need for human intervention. The international team devised a self-healing bitumen that mends cracks as they form by fusing the asphalt back together. In laboratory tests, pieces of the material repaired small fractures within an hour of them first appearing. "When you close the cracks you prevent potholes forming in the future and extend the lifespan of the road," said Dr Jose Norambuena-Contreras, a researcher on the project at Swansea University. "We can extend the surface lifespan by 30%."

Potholes typically start from small surface cracks that form under the weight of traffic. These allow water to seep into the road surface, where it causes more damage through cycles of freezing and thawing. Bitumen, the sticky black substance used in asphalt, becomes susceptible to cracking when it hardens through oxidation. To make the self-healing bitumen, the researchers mixed in tiny porous plant spores soaked in recycled oils. When the road surface is compressed by passing traffic, it squeezes the spores, which release their oil into any nearby cracks. The oils soften the bitumen enough for it to flow and seal the cracks. Working with researchers at King's College London and Google Cloud, the scientists used machine learning, a form of artificial intelligence, to model the movement of organic molecules in bitumen and simulate the behaviour of the self-healing material to see how it responded to newly formed cracks. The material could be scaled up for use on British roads in a couple of years, the researchers believe.
Google published a blog post with more information about the "self-healing" asphalt.
Education

OpenAI Partners With California State University System 16

OpenAI is partnering with the California State University (CSU) system to bring ChatGPT Edu to the 23-campus community of 500,000 students, calling it the "largest implementation of ChatGPT by any single organization or company anywhere in the world." Fortune reports: As part of ChatGPT Edu, members of the CSU community will get special access to ChatGPT-4o and advanced research and analysis capabilities. The partnership allows schools to create customizable AI chatbots for any project, like a campus IT help desk bot, financial aid assistant, chemistry tutor, or orientation buddy. CSU also plans to introduce free AI skills training for its students, faculty, and staff as well as connect students with AI-related apprenticeship programs. CSU joins a number of other schools with ChatGPT Edu partnerships, including Arizona State University (ASU), the University of Texas at Austin, the University of Oxford, Columbia University, and the Wharton School at the University of Pennsylvania.
Google

Google Removes Pledge To Not Use AI For Weapons From Website 58

Google has updated its public AI principles page to remove a pledge to not build AI for weapons or surveillance. TechCrunch reports: Asked for comment, the company pointed TechCrunch to a new blog post on "responsible AI." It notes, in part, "we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security." Google's newly updated AI principles note the company will work to "mitigate unintended or harmful outcomes and avoid unfair bias," as well as align the company with "widely accepted principles of international law and human rights." Further reading: Google Removes 'Don't Be Evil' Clause From Its Code of Conduct
Books

AI-Generated Slop Is Already In Your Public Library 20

An anonymous reader writes: Low quality books that appear to be AI generated are making their way into public libraries via their digital catalogs, forcing librarians who are already understaffed to either sort through a functionally infinite number of books to determine what is written by humans and what is generated by AI, or to spend taxpayer dollars to provide patrons with information they don't realize is AI-generated.

Public libraries primarily use two companies to manage and lend ebooks: Hoopla and OverDrive, the latter of which people may know from its borrowing app, Libby. Both companies have a variety of payment options for libraries, but generally libraries get access to the companies' catalog of books and pay for customers to be able to borrow that book, with different books having different licenses and prices. A key difference is that with OverDrive, librarians can pick and choose which books in OverDrive's catalog they want to give their customers the option of borrowing. With Hoopla, librarians have to opt into Hoopla's entire catalog, then pay for whatever their customers choose to borrow from that catalog. The only way librarians can limit what Hoopla books their customers can borrow is by setting a limit on the price of books. For example, a library can use Hoopla but make it so their customers can only borrow books that cost the library $5 per use.

On one hand, Hoopla's gigantic catalog, which includes ebooks, audio books, and movies, is a selling point because it gives librarians access to more content at a lower price. On the other hand, making librarians buy into the entire catalog means that a customer looking for a book about how to diet for a healthier liver might end up borrowing Fatty Liver Diet Cookbook: 2000 Days of Simple and Flavorful Recipes for a Revitalized Liver. The book was authored by Magda Tangy, who has no online footprint, and who has an AI-generated profile picture on Amazon, where her books are also for sale. Note the earring that is only on one ear and seems slightly deformed. A spokesperson for deepfake detection company Reality Defender said that according to their platform, the headshot is 85 percent likely to be AI-generated. [...] It is impossible to say exactly how many AI-generated books are included in Hoopla's catalog, but books that appeared to be AI-generated were not hard to find for most of the search terms I tried on the platform.
"This type of low quality, AI generated content, is what we at 404 Media and others have come to call AI slop," writes Emanuel Maiberg. "Librarians, whose job it is in part to curate what books their community can access, have been dealing with similar problems in the publishing industry for years, and have a different name for it: vendor slurry."

"None of the librarians I talked to suggested the AI-generated content needed to be banned from Hoopla and libraries only because it is AI-generated. It might have its place, but it needs to be clearly labeled, and more importantly, provide borrowers with quality information."

Sarah Lamdan, deputy director of the American Library Association, told 404 Media: "Platforms like Hoopla should offer libraries the option to select or omit materials, including AI materials, in their collections. AI books should be well-identified in library catalogs, so it is clear to readers that the books were not written by human authors. If library visitors choose to read AI eBooks, they should do so with the knowledge that the books are AI-generated."
Red Hat Software

Red Hat Plans to Add AI to Fedora and GNOME 49

In his post about the future of Fedora Workstation, Christian F.K. Schaller discusses how the Red Hat team plans to integrate AI with IBM's open-source Granite engine to enhance developer tools, such as IDEs, and create an AI-powered Code Assistant. He says the team is also working on streamlining AI acceleration in Toolbx and ensuring Fedora users have access to tools like RamaLama. From the post: One big item on our list for the year is looking at ways Fedora Workstation can make use of artificial intelligence. Thanks to IBM's Granite effort we now have an AI engine that is available under proper open source licensing terms and which can be extended for many different use cases. The IBM Granite team also has an aggressive plan for releasing updated versions of Granite, incorporating new features of special interest to developers, like making Granite a great engine to power IDEs and similar tools. We have been brainstorming various ideas in the team for how we can make use of AI to provide improved or new features to users of GNOME and Fedora Workstation. This includes making sure Fedora Workstation users have access to great tools like RamaLama, that setting up accelerated AI inside Toolbx is simple, that we offer a good Code Assistant based on Granite, and that we come up with other cool integration points.

"I'm still not sure how I feel about this approach," writes designer/developer and blogger Bradley Taunt. "While IBM Granite is an open source model, I still don't enjoy so much artificial 'intelligence' creeping into core OS development. This also isn't something optional on the end-users side, like a desktop feature or package. This sounds like it's going to be built directly into the core system."

"Red Hat has been pushing hard towards AI and my main concern is having this influence other operating system dev teams. Luckily things seem AI-free in BSD land. For now, at least."
Businesses

Panasonic To Cut Costs To Support Shift Into AI 12

Panasonic will cut its costs, restructure underperforming units and revamp its workforce as it pivots toward AI data centers and away from its consumer electronics roots, the company said on Tuesday. The Japanese conglomerate aims to boost profits by 300 billion yen ($1.93 billion) by March 2029, partly by consolidating production and logistics operations.

Bloomberg reports that CEO Yuki Kusumi has declined to confirm if the company would divest its TV business but said alternatives were being considered. The Tesla battery supplier plans to integrate AI across operations through a partnership with Anthropic, targeting growth in components for data centers.
EU

AI Systems With 'Unacceptable Risk' Are Now Banned In the EU 72

AI systems that pose "unacceptable risk" or harm can now be banned in the European Union. Some of the unacceptable AI activities include social scoring, deceptive manipulation, exploiting personal vulnerabilities, predictive policing based on appearance, biometric-based profiling, real-time biometric surveillance, emotion inference in workplaces or schools, and unauthorized facial recognition database expansion. TechCrunch reports: Under the bloc's approach, there are four broad risk levels: (1) Minimal risk (e.g., email spam filters) will face no regulatory oversight; (2) limited risk, which includes customer service chatbots, will have a light-touch regulatory oversight; (3) high risk -- AI for healthcare recommendations is one example -- will face heavy regulatory oversight; and (4) unacceptable risk applications -- the focus of this month's compliance requirements -- will be prohibited entirely.

Companies that are found to be using any of the above AI applications in the EU will be subject to fines, regardless of where they are headquartered. They could be on the hook for up to ~$36 million, or 7% of their annual revenue from the prior fiscal year, whichever is greater. The fines won't kick in for some time, noted Rob Sumroy, head of technology at the British law firm Slaughter and May, in an interview with TechCrunch. "Organizations are expected to be fully compliant by February 2, but ... the next big deadline that companies need to be aware of is in August," Sumroy said. "By then, we'll know who the competent authorities are, and the fines and enforcement provisions will take effect."
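The fine structure above ("the flat cap or 7% of prior-year revenue, whichever is greater") reduces to a one-line calculation. A minimal sketch, using the article's approximate $36 million figure; the function name and dollar-denominated cap are illustrative, not taken from the regulation's text.

```python
def eu_ai_act_fine_cap(annual_revenue_usd, flat_cap_usd=36_000_000):
    """Maximum fine for a prohibited AI practice: the flat cap or 7% of the
    prior fiscal year's revenue, whichever is greater (per the article)."""
    return max(flat_cap_usd, 0.07 * annual_revenue_usd)

eu_ai_act_fine_cap(100_000_000)    # 7% is ~$7M, below the cap -> 36_000_000
eu_ai_act_fine_cap(1_000_000_000)  # 7% is ~$70M, exceeds the cap -> ~70 million
```

The "whichever is greater" clause means the flat cap binds for smaller firms, while the percentage dominates for large ones.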
AI

Salesforce Cutting 1,000 Roles While Hiring Salespeople for AI 20

Salesforce is cutting jobs as its latest fiscal year gets underway, Bloomberg reported Monday, citing a person familiar with the matter, even as the company simultaneously hires workers to sell new artificial intelligence products. From the report: More than 1,000 roles will be affected, according to the person, who asked not to be identified because the information is private. Displaced workers will be able to apply for other jobs internally, the person added. Salesforce had nearly 73,000 workers as of January 2024, when that fiscal year ended.
AI

CERN's Mark Thomson: AI To Revolutionize Fundamental Physics (theguardian.com) 96

An anonymous reader quotes a report from The Guardian: Advanced artificial intelligence is to revolutionize fundamental physics and could open a window on to the fate of the universe, according to Cern's next director general. Prof Mark Thomson, the British physicist who will assume leadership of Cern on 1 January 2026, says machine learning is paving the way for advances in particle physics that promise to be comparable to the AI-powered prediction of protein structures that earned Google DeepMind scientists a Nobel prize in October. At the Large Hadron Collider (LHC), he said, similar strategies are being used to detect incredibly rare events that hold the key to how particles came to acquire mass in the first moments after the big bang and whether our universe could be teetering on the brink of a catastrophic collapse.

"These are not incremental improvements," Thomson said. "These are very, very, very big improvements people are making by adopting really advanced techniques." "It's going to be quite transformative for our field," he added. "It's complex data, just like protein folding -- that's an incredibly complex problem -- so if you use an incredibly complex technique, like AI, you're going to win."

The intervention comes as Cern's council is making the case for the Future Circular Collider, which at 90km circumference would dwarf the LHC. Some are skeptical given the lack of blockbuster results at the LHC since the landmark discovery of the Higgs boson in 2012, and Germany has described the $17 billion proposal as unaffordable. But Thomson said AI has provided fresh impetus to the hunt for new physics at the subatomic scale -- and that major discoveries could occur after 2030, when a major upgrade will boost the LHC's beam intensity by a factor of ten. This will allow unprecedented observations of the Higgs boson, nicknamed the God particle, which grants mass to other particles and binds the universe together.
Thomson is now confident that the LHC can measure Higgs boson self-coupling, a key factor in understanding how particles gained mass after the Big Bang and whether the Higgs field is in a stable state or could undergo a future transition. According to Thomson: "It's a very deep fundamental property of the universe, one we don't fully understand. If we saw the Higgs self-coupling being different from our current theory, that would be another massive, massive discovery. And you don't know until you've made the measurement."

The report also notes how AI is being used in "every aspect of the LHC operation." Dr Katharine Leney, who works on the LHC's Atlas experiment, said: "When the LHC is colliding protons, it's making around 40m collisions a second and we have to make a decision within a microsecond ... which events are something interesting that we want to keep and which to throw away. We're already now doing better with the data that we've collected than we thought we'd be able to do with 20 times more data ten years ago. So we've advanced by 20 years at least. A huge part of this has been down to AI."
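The trigger problem Leney describes (score ~40m collisions a second and keep only the interesting ones) can be caricatured as a top-k selection over scored events. This is a toy stand-in: real LHC triggers apply far more sophisticated, increasingly ML-based selections under microsecond budgets, and the momentum-based scoring here is invented for illustration.

```python
def trigger_filter(events, score, keep_fraction=0.001):
    """Keep only the top `keep_fraction` of events by interest score."""
    n_keep = max(1, int(len(events) * keep_fraction))
    return sorted(events, key=score, reverse=True)[:n_keep]

# Simulated events as (id, transverse_momentum) pairs; score by momentum.
events = [(i, (i * 37) % 1000) for i in range(10_000)]
kept = trigger_filter(events, score=lambda e: e[1], keep_fraction=0.001)
# Only 10 of 10,000 events survive; everything else is discarded on the spot.
```

The hard part in practice is not the selection itself but making the scoring function fast and discriminating enough that rare signal events are never among the 99.9% thrown away.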

Generative AI is also being used to look for and even produce dark matter via the LHC. "You can start to ask more complex, open-ended questions," said Thomson. "Rather than searching for a particular signature, you ask the question: 'Is there something unexpected in this data?'"
