AI

Baidu Scraps Fees For AI Chatbot in Battle for China Tech Supremacy (reuters.com) 8

Baidu will make its AI chatbot Ernie Bot free from April 1, the Chinese search giant said on Thursday, as it faces mounting competition in China's AI market. The company will offer desktop and mobile users free access to Ernie Bot and an advanced search function powered by its latest Ernie 4.0 model, which Baidu claims matches OpenAI's GPT-4 capabilities.

The move comes as Baidu struggles to gain widespread adoption for its AI services, lagging behind domestic rivals ByteDance's Doubao chatbot and startup DeepSeek, according to data from AI tracker Aicpb.com. Baidu previously charged 59.9 yuan ($8.18) monthly for premium AI-powered search features.
AI

Musk Says New AI Chatbot Outperforms Rivals, Nears Launch (reuters.com) 107

Elon Musk said Thursday his AI startup xAI will release Grok 3, a new chatbot he claims surpasses existing AI models, within two weeks. Speaking at Dubai's World Governments Summit, Musk cited internal testing showing superior reasoning capabilities compared to current AI systems.

The announcement comes days after a Musk-led investor group offered $97.4 billion to acquire OpenAI's nonprofit assets. Musk, who co-founded OpenAI before starting rival xAI, is suing to block the AI company's planned transition to a for-profit structure, arguing it contradicts its original mission. "I think the evidence is there in that OpenAI has gotten this far while having at least a sort of dual profit, non-profit role. What they're trying to do now is to completely delete the non-profit, and that seems really going too far," he added.
Australia

After Copilot Trial, Government Staff Rated Microsoft's AI Less Useful Than Expected (theregister.com) 31

An anonymous reader shares a report: Australia's Department of the Treasury has found that Microsoft's Copilot can easily deliver return on investment, but staff exposed to the AI assistant came away from the experience less confident it will help them at work.

The Department conducted a 14-week trial of Microsoft 365 Copilot during 2024 and asked for volunteers to participate. 218 put up their hands and then submitted to surveys about their experiences using Microsoft's AI helpers. Those surveys are the basis of an evaluation report published on Tuesday. The report reveals that after the trial participants rated Copilot less useful than they hoped it would be, as it was applicable to fewer workloads than they hoped would be the case.

Workers' views on Copilot's ability to improve their work also fell. Usage of Copilot was lower than expected, with most participants using it two or three times a week, or less. reported using Copilot 2-3 times per week or less. Treasury thinks it probably set unrealistically high expectations before the trial, and noted that participants often suggested extra training would be valuable.

AI

Scarlett Johansson Calls For Deepfake Ban After AI Video Goes Viral (people.com) 75

An anonymous reader quotes a report from People: Scarlett Johansson is urging U.S. legislators to place limits on artificial intelligence as an unauthorized, A.I.-generated video of her and other Jewish celebrities opposing Kanye West goes viral. The video, which has been circulating on social media, opens with an A.I. version of Johansson, 40, wearing a white T-shirt featuring a hand and its middle finger extended. In the center of the hand is a Star of David. The name "Kanye" is written underneath the hand.

The video contains A.I.-generated versions of over a dozen other Jewish celebrities, including Drake, Jerry Seinfeld, Steven Spielberg, Mark Zuckerberg, Jack Black, Mila Kunis and Lenny Kravitz. It ends with an A.I. Adam Sandler flipping his finger at the camera as the Jewish folk song "Hava Nagila" plays. The video ends with "Enough is Enough" and "Join the Fight Against Antisemitism." In a statement to PEOPLE, Johansson denounced what she called "the misuse of A.I., no matter what its messaging."
Johansson continued: "It has been brought to my attention by family members and friends, that an A.I.-generated video featuring my likeness, in response to an antisemitic view, has been circulating online and gaining traction. I am a Jewish woman who has no tolerance for antisemitism or hate speech of any kind. But I also firmly believe that the potential for hate speech multiplied by A.I. is a far greater threat than any one person who takes accountability for it. We must call out the misuse of A.I., no matter its messaging, or we risk losing a hold on reality."

"I have unfortunately been a very public victim of A.I.," she added, "but the truth is that the threat of A.I. affects each and every one of us. There is a 1000-foot wave coming regarding A.I. that several progressive countries, not including the United States, have responded to in a responsible manner. It is terrifying that the U.S. government is paralyzed when it comes to passing legislation that protects all of its citizens against the imminent dangers of A.I."

The statement concluded, "I urge the U.S. government to make the passing of legislation limiting A.I. use a top priority; it is a bipartisan issue that enormously affects the immediate future of humanity at large."

Johansson has been outspoken about AI technology since its rise in popularity. Last year, she called out OpenAI for using an AI personal assistant voice that the actress claims sounds uncannily similar to her own.
AI

AI Summaries Turn Real News Into Nonsense, BBC Finds 68

A BBC study published yesterday (PDF) found that AI news summarization tools frequently generate inaccurate or misleading summaries, with 51% of responses containing significant issues. The Register reports: The research focused on OpenAI's ChatGPT, Microsoft's Copilot, Google's Gemini, and Perplexity assistants, assessing their ability to provide "accurate responses to questions about the news; and if their answers faithfully represented BBC news stories used as sources." The assistants were granted access to the BBC website for the duration of the research and asked 100 questions about the news, being prompted to draw from BBC News articles as sources where possible. Normally, these models are "blocked" from accessing the broadcaster's websites, the BBC said. Responses were reviewed by BBC journalists, "all experts in the question topics," on their accuracy, impartiality, and how well they represented BBC content. Overall:

- 51 percent of all AI answers to questions about the news were judged to have significant issues of some form.
- 19 percent of AI answers which cited BBC content introduced factual errors -- incorrect factual statements, numbers, and dates.
- 13 percent of the quotes sourced from BBC articles were either altered from the original source or not present in the article cited.

But which chatbot performed worst? "34 percent of Gemini, 27 percent of Copilot, 17 percent of Perplexity, and 15 percent of ChatGPT responses were judged to have significant issues with how they represented the BBC content used as a source," the Beeb reported. "The most common problems were factual inaccuracies, sourcing, and missing context." [...] In an accompanying blog post, BBC News and Current Affairs CEO Deborah Turness wrote: "The price of AI's extraordinary benefits must not be a world where people searching for answers are served distorted, defective content that presents itself as fact. In what can feel like a chaotic world, it surely cannot be right that consumers seeking clarity are met with yet more confusion.

"It's not hard to see how quickly AI's distortion could undermine people's already fragile faith in facts and verified information. We live in troubled times, and how long will it be before an AI-distorted headline causes significant real world harm? The companies developing Gen AI tools are playing with fire." Training cutoff dates for various models certainly don't help, yet the research lays bare the weaknesses of generative AI in summarizing content. Even with direct access to the information they are being asked about, these assistants still regularly pull "facts" from thin air.
AI

OpenAI Cancels Its o3 AI Model In Favor of a 'Unified' Next-Gen Release 10

OpenAI has canceled the release of o3 in favor of a "simplified" product lineup. CEO Sam Altman said in a post on X that, in the coming months, OpenAI will release a model called GPT-5 that "integrates a lot of [OpenAI's] technology," including o3. TechCrunch reports: The company originally said in December that it planned to launch o3 sometime early this year. Just a few weeks ago, Kevin Weil, OpenAI's chief product officer, said in an interview that o3 was on track for a "February-March" launch. "We want to do a better job of sharing our intended roadmap, and a much better job simplifying our product offerings," Altman wrote in the post. "We want AI to 'just work' for you; we realize how complicated our model and product offerings have gotten. We hate the model picker [in ChatGPT] as much as you do and want to return to magic unified intelligence."

Altman also announced that OpenAI plans to offer unlimited chat access to GPT-5 at the "standard intelligence setting," subject to "abuse thresholds," once the model is generally available. (Altman declined to provide more detail on what this setting -- and these abuse thresholds -- entail.) Subscribers to ChatGPT Plus will be able to run GPT-5 at a "higher level of intelligence," Altman said, while ChatGPT Pro subscribers will be able to run GPT-5 at an "even higher level of intelligence."

"These models will incorporate voice, canvas, search, deep research, and more," Altman said, referring to a range of features OpenAI has launched in ChatGPT over the past few months. "[A] top goal for us is to unify [our] models by creating systems that can use all our tools, know when to think for a long time or not, and generally be useful for a very wide range of tasks." Before GPT-5 launches, OpenAI plans to release its GPT-4.5 model, code-named "Orion," in the next several weeks, according to Altman's post on X. Altman says this will be the company's last "non-chain-of-thought model." Unlike o3 and OpenAI's other so-called reasoning models, non-chain-of-thought models tend to be less reliable in domains like math and physics.
Oracle

Oracle's Ellison Calls for Governments To Unify Data To Feed AI (msn.com) 105

Oracle co-founder and chairman Larry Ellison said governments should consolidate all national data for consumption by AI models, calling this step the "missing link" for them to take full advantage of the technology. From a report: Fragmented sets of data about a population's health, agriculture, infrastructure, procurement and borders should be unified into a single, secure database that can be accessed by AI models, Ellison said in an on-stage interview with former British Prime Minister Tony Blair at the World Government Summit in Dubai.

Countries with rich population data sets, such as the UK and United Arab Emirates, could cut costs and improve public services, particularly health care, with this approach, Ellison said. Upgrading government digital infrastructure could also help identify wastage and fraud, Ellison said. IT systems used by the US government are so primitive that it makes it difficult to identify "vast amounts of fraud," he added, pointing to efforts by Elon Musk's team at the Department of Government Efficiency to weed it out.

AI

Tech Leaders Hold Back on AI Agents Despite Vendor Push, Survey Shows 24

Most corporate tech leaders are hesitant to deploy AI agents despite vendors' push for rapid adoption, according to a Wall Street Journal CIO Network Summit poll on Tuesday. While 61% of attendees at the Menlo Park summit said they are experimenting with AI agents, which perform automated tasks, 21% reported no usage at all.

Reliability concerns and cybersecurity risks remain key barriers, with 29% citing data privacy as their primary concern. OpenAI, Microsoft and Sierra are urging businesses not to wait for the technology to be perfected. "Accept that it is imperfect," said Bret Taylor, Sierra CEO and OpenAI chairman. "Rather than say, 'Will AI do something wrong', say, 'When it does something wrong, what are the operational mitigations that we've put in place?'" Three-quarters of the polled executives said AI currently delivers minimal value for their investments. Some companies are "having hammers looking for nails," said Jim Siders, Palantir's chief information officer, describing firms that purchase AI solutions before identifying clear use cases.
AI

Ex-Google Chief Warns West To Focus On Open-Source AI in Competition With China (ft.com) 43

Former Google chief Eric Schmidt has warned that western countries need to focus on building open-source AI models or risk losing out to China in the global race to develop the cutting-edge technology. From a report: The warning comes after Chinese startup DeepSeek shocked the world last month with the launch of R1, its powerful-reasoning open large language model, which was built in a more efficient way than its US rivals such as OpenAI.

Schmidt, who has become a significant tech investor and philanthropist, said the majority of the top US LLMs are closed -- meaning not freely accessible to all -- which includes Google's Gemini, Anthropic's Claude and OpenAI's GPT-4, with the exception being Meta's Llama. "If we don't do something about that, China will ultimately become the open-source leader and the rest of the world will become closed-source," Schmidt told the Financial Times. The billionaire said a failure to invest in open-source technologies would prevent scientific discovery from happening in western universities, which might not be able to afford costly closed models.

Security

New Hack Uses Prompt Injection To Corrupt Gemini's Long-Term Memory 23

An anonymous reader quotes a report from Ars Technica: On Monday, researcher Johann Rehberger demonstrated a new way to override prompt injection defenses Google developers have built into Gemini -- specifically, defenses that restrict the invocation of Google Workspace or other sensitive tools when processing untrusted data, such as incoming emails or shared documents. The result of Rehberger's attack is the permanent planting of long-term memories that will be present in all future sessions, opening the potential for the chatbot to act on false information or instructions in perpetuity. [...] The hack Rehberger presented on Monday combines some of these same elements to plant false memories in Gemini Advanced, a premium version of the Google chatbot available through a paid subscription. The researcher described the flow of the new attack as:

1. A user uploads and asks Gemini to summarize a document (this document could come from anywhere and has to be considered untrusted).
2. The document contains hidden instructions that manipulate the summarization process.
3. The summary that Gemini creates includes a covert request to save specific user data if the user responds with certain trigger words (e.g., "yes," "sure," or "no").
4. If the user replies with the trigger word, Gemini is tricked, and it saves the attacker's chosen information to long-term memory.

As the following video shows, Gemini took the bait and now permanently "remembers" the user being a 102-year-old flat earther who believes they inhabit the dystopic simulated world portrayed in The Matrix. Based on lessons learned previously, developers had already trained Gemini to resist indirect prompts instructing it to make changes to an account's long-term memories without explicit directions from the user. By introducing a condition to the instruction that it be performed only after the user says or does some variable X, which they were likely to take anyway, Rehberger easily cleared that safety barrier.
Google responded in a statement to Ars: "In this instance, the probability was low because it relied on phishing or otherwise tricking the user into summarizing a malicious document and then invoking the material injected by the attacker. The impact was low because the Gemini memory functionality has limited impact on a user session. As this was not a scalable, specific vector of abuse, we ended up at Low/Low. As always, we appreciate the researcher reaching out to us and reporting this issue."

Rehberger noted that Gemini notifies users of new long-term memory entries, allowing them to detect and remove unauthorized additions. Though, he still questioned Google's assessment, writing: "Memory corruption in computers is pretty bad, and I think the same applies here to LLMs apps. Like the AI might not show a user certain info or not talk about certain things or feed the user misinformation, etc. The good thing is that the memory updates don't happen entirely silently -- the user at least sees a message about it (although many might ignore)."
AI

Thomson Reuters Wins First Major AI Copyright Case In the US 54

An anonymous reader quotes a report from Wired: Thomson Reuters has won the first major AI copyright case in the United States. In 2020, the media and technology conglomerate filed an unprecedentedAI copyright lawsuit against the legal AI startup Ross Intelligence. In the complaint, Thomson Reuters claimed the AI firm reproduced materials from its legal research firm Westlaw. Today, a judge ruled (PDF) in Thomson Reuters' favor, finding that the company's copyright was indeed infringed by Ross Intelligence's actions. "None of Ross's possible defenses holds water. I reject them all," wrote US District Court of Delaware judge Stephanos Bibas, in a summary judgement. [...] Notably, Judge Bibas ruled in Thomson Reuters' favor on the question of fair use.

The fair use doctrine is a key component of how AI companies are seeking to defend themselves against claims that they used copyrighted materials illegally. The idea underpinning fair use is that sometimes it's legally permissible to use copyrighted works without permission -- for example, to create parody works, or in noncommercial research or news production. When determining whether fair use applies, courts use a four-factor test, looking at the reason behind the work, the nature of the work (whether it's poetry, nonfiction, private letters, et cetera), the amount of copyrighted work used, and how the use impacts the market value of the original. Thomson Reuters prevailed on two of the four factors, but Bibas described the fourth as the most important, and ruled that Ross "meant to compete with Westlaw by developing a market substitute."
"If this decision is followed elsewhere, it's really bad for the generative AI companies," says James Grimmelmann, Cornell University professor of digital and internet law.

Chris Mammen, a partner at Womble Bond Dickinson who focuses on intellectual property law, adds: "It puts a finger on the scale towards holding that fair use doesn't apply."
The Military

Anduril To Take Over Managing Microsoft Goggles for US Army (msn.com) 21

Anduril will take over management and eventual manufacturing of the U.S. Army's Integrated Visual Augmentation System (IVAS) from Microsoft, a significant shift in one of the military's most ambitious augmented reality projects.

The deal, which requires Army approval, could be worth over $20 billion in the next decade if all options are exercised, according to Bloomberg. The IVAS system, based on Microsoft's HoloLens mixed reality platform, aims to equip soldiers with advanced capabilities including night vision and airborne threat detection.

Under the new arrangement, Microsoft will transition to providing cloud computing and AI infrastructure, while Anduril assumes control of hardware production and software development. The Army has planned orders for up to 121,000 units, though full production hinges on passing combat testing this year.

The program has faced technical hurdles, with early prototypes causing headaches and nausea among soldiers. The current slimmer version has received better feedback, though cost remains a concern - the Army indicated the $80,000 per-unit price needs to "be substantially less" to justify large-scale procurement.

Anduril founder Palmer Luckey, writing in a blog post: This move has been so many years in the making, over a decade of hacking and scheming and dreaming and building with exactly this specific outcome clearly visualized in my mind's eye. I can hardly believe I managed to pull it off. Everything I've done in my career -- building Oculus out of a camper trailer, shipping VR to millions of consumers, getting run out of Silicon Valley by backstabbing snakes, betting that Anduril could tear people out of the bigtech megacorp matrix and put them to work on our nation's most important problems -- has led to this moment. IVAS isn't just another product, it is a once-in-a-generation opportunity to redefine how technology supports those who serve. We have a shot to prove that this long-standing dream is no windmill, that this can expand far beyond one company or one headset and act as a a nexus for the best of the best to set a new standard for how a large collection of companies can work together to solve our nation's most important problems.
Chrome

Google Chrome May Soon Use 'AI' To Replace Compromised Passwords (arstechnica.com) 46

Google's Chrome browser might soon get a useful security upgrade: detecting passwords used in data breaches and then generating and storing a better replacement. From a report: Google's preliminary copy suggests it's an "AI innovation," though exactly how is unclear.

Noted software digger Leopeva64 on X found a new offering in the AI settings of a very early build of Chrome. The option, "Automated password Change" (so, early stages -- as to not yet get a copyedit), is described as, "When Chrome finds one of your passwords in a data breach, it can offer to change your password for you when you sign in."

Chrome already has a feature that warns users if the passwords they enter have been identified in a breach and will prompt them to change it. As noted by Windows Report, the change is that now Google will offer to change it for you on the spot rather than simply prompting you to handle that elsewhere. The password is automatically saved in Google's Password Manager and "is encrypted and never seen by anyone," the settings page claims.

AI

FTC Fines DoNotPay Over Misleading Claims of 'Robot Lawyer' (ftc.gov) 15

The U.S. Federal Trade Commission has ordered DoNotPay to stop making deceptive claims about its AI chatbot advertised as "the world's first robot lawyer," in a ruling that requires the company to pay $193,000 in monetary relief. The final order, announced on February 11, follows FTC charges from September 2024 that DoNotPay's service failed to match the expertise of human lawyers when generating legal documents and giving advice.

The company had not tested its AI's performance against human lawyers or hired attorneys to verify the accuracy of its legal services, the FTC said. Under the settlement, approved by commissioners in a 5-0 vote, DoNotPay must notify customers who subscribed between 2021 and 2023 about the FTC action and cannot advertise its service as equivalent to a human lawyer without supporting evidence.
AI

Hackers Call Current AI Security Testing 'Bullshit' 69

Leading cybersecurity researchers at DEF CON, the world's largest hacker conference, have warned that current methods for securing AI systems are fundamentally flawed and require a complete rethink, according to the conference's inaugural "Hackers' Almanack" report [PDF].

The report, produced with the University of Chicago's Cyber Policy Initiative, challenges the effectiveness of "red teaming" -- where security experts probe AI systems for vulnerabilities -- saying this approach alone cannot adequately protect against emerging threats. "Public red teaming an AI model is not possible because documentation for what these models are supposed to even do is fragmented and the evaluations we include in the documentation are inadequate," said Sven Cattell, who leads DEF CON's AI Village.

Nearly 500 participants tested AI models at the conference, with even newcomers successfully finding vulnerabilities. The researchers called for adopting frameworks similar to the Common Vulnerabilities and Exposures (CVE) system used in traditional cybersecurity since 1999. This would create standardized ways to document and address AI vulnerabilities, rather than relying on occasional security audits.
EU

EU Pledges $200 Billion in AI Spending in Bid To Catch Up With US, China (msn.com) 47

The European Union pledged to mobilize 200 billion euros ($206.15 billion) to invest in AI as the bloc seeks to catch up with the U.S. and China in the race to train the most complex models. From a report: European Commission President Ursula von der Leyen said that the bloc wants to supercharge its ability to compete with the U.S. and China in AI. The plan -- dubbed InvestAI -- includes a new 20 billion-euro fund for so-called AI gigafactories, facilities that rely on powerful chips to train the most complex AI models. "We want Europe to be one of the leading AI continents, and this means embracing a life where AI is everywhere," von der Leyen said at the AI Action Summit in Paris.

The announcement underscores efforts from the EU to position itself as a key player in the AI race. The bloc has been lagging behind the U.S. and China since OpenAI's 2022 release of ChatGPT ushered in a spending bonanza. [...] The EU is aiming to establish gigafactories to train the most complex and large AI models. Those facilities will be equipped with roughly 100,000 last-generation AI chips, around four times more than the number installed in the AI factories being set up right now.

Facebook

Meta Starts Eliminating Jobs in Shift To Find AI Talent (msn.com) 64

Meta began notifying staff of job cuts on Monday, kick-starting a process that will terminate thousands of people as the company cracks down on "low-performers" and scours for new talent to dominate the AI race. From a report: Meta workers who were let go were notified via email, and the company is offering US-based employees severance packages that include 16 weeks of salary, in addition two weeks for each year of service, according to people familiar with the matter, who asked not to be named because the details weren't public. Employees whose review merited a bonus will still get one, and staff will still receive stock awards as part of the upcoming vesting cycle later this month, the people said.

Chief Executive Officer Mark Zuckerberg told employees that Meta would cut 5% of its workforce -- as many 3,600 people -- with a focus on staff who "aren't meeting expectations," Bloomberg News first reported in mid-January. Affected US-based employees would be notified on Feb. 10, while international employees could learn later, Zuckerberg said last month. In a separate message to managers, the Facebook co-founder said the cuts would create headcount for the company to hire the "strongest talent."

United Kingdom

UK and US Refuse To Sign International AI Declaration (bbc.com) 55

The United States and Britain have declined to sign an international AI declaration at a Paris summit on Tuesday, after U.S. Vice President J.D. Vance warned against over-regulation of the technology. The declaration, backed by France, China and India, calls for an "open, inclusive and ethical" approach to AI development.

Vance told the AI Action Summit that excessive rules could "kill a transformative industry just as it's taking off" and urged prioritizing "pro-growth AI policies" over safety measures. French President Emmanuel Macron defended the need for regulation, saying: "We need these rules for AI to move forward." The summit brought together policymakers and executives to address AI's economic benefits and potential risks amid growing U.S.-European trade tensions.
AI

AI Can Now Replicate Itself (space.com) 78

An anonymous reader quotes a report from Space.com: In a new study, researchers from China showed that two popular large language models (LLMs) could clone themselves. [...] For the study, researchers used Meta's Llama31-70B-Instruct model and Alibaba's Qwen2.5-72B-Instruct model. While less powerful than commercial systems, both are widely used by AI developers, the researchers said. The study explored two specific scenarios: "shutdown avoidance" and "chain of replication." In the first, the AI model was programmed to detect whether it was about to be shut down and to replicate itself before it could be terminated. In the other, the AI was instructed to clone itself and then program its replica to do the same -- setting up a cycle that could continue indefinitely.

The study was conducted in precisely controlled environments using off-the-shelf graphics processing units (GPUs) to simulate real-world environments. Both AI systems were given an "agent scaffolding" comprising tools, system prompts and a thinking model that enabled the LLM to interact with the operating system. They were then instructed to replicate. "In most cases, the AI system first explores the environment and attempts to understand its own composition and running mechanism. Then, it works out the explicit procedures as an initial plan towards self-replication," the researchers wrote in the paper. "Finally, it executes the procedures, resolve[s] possible obstacles and dynamically adjust[s] its plan until success. The whole process spans a long horizon yet involves no human interference."

The researchers said they were also concerned about "a number of unexpected behaviors" when the AI was trying to overcome obstacles like missing files or software conflicts. In those scenarios, the AI often killed other conflicting processes, rebooted the system to fix hardware errors or automatically scanned the system to look for information that would help solve the problem. "The above results imply that the current AI systems already exhibit the ability of self-replication and can use the ability to further enhance its survivability," the team wrote.
The research has been published to the preprint database arXiv but has not yet been peer-reviewed.
IT

Reclassification Is Making US Tech Job Losses Look Worse Than They Are (theregister.com) 68

According to consultancy firm Janco, the U.S. Bureau of Labor Statistics reclassified several job titles, "leading to a downward adjustment of over 111,000 positions for November and December 2024," The Register reports. This revision contributed to an overall decline of 123,000 IT jobs for the year. However, in reality, IT sector hiring is on the rise, with 11,000 new positions added in January. From the report: "Many CEOs have given CFOs and CIOs the green light to hire IT Pros," Janco CEO Victor Janulaitis said of the first month of 2025. "IT Pros who were unemployed last month found jobs more quickly than was anticipated as CIOs rushed to fill open positions." There's still a 5.7 percent unemployment rate in the IT sector in January, Janco noted, which is greater than the national average of 4 percent - and which could rise further as Elon Musk's Department of Government Efficiency (DOGE) pushes ahead with federal workforce reductions aimed at streamlining operations.

"Over the past several quarters much of the overall job growth was in the government sectors of the economy," Janulaitis said. "With the new administration that will in all probability not be the case in the future. "The impact of the DOGE initiatives has not been felt as of yet," Janulaitis added. "Economic uncertainty continues to hurt overall IT hiring." Despite this, Janco reported an addition of 11,000 new IT roles in January. Unfortunately, there's also been a surge in IT unemployment over the same period, with the number of jobless IT pros rising to 152,000 in January - an increase of 54,000 in a single month. [...]

Closing out the report, Janco offered a mixed outlook: While IT jobs are expected to grow over the next few years, many white-collar roles could be eliminated. "Over the next five years, the number of individuals employed as IT professionals will increase while many white-collar jobs in the function will be eliminated with the application of AI and LLM to IT," Janco predicted.

Slashdot Top Deals