AI

AI Could Affect 40% of Jobs and Widen Inequality Between Nations, UN Warns (cnbc.com) 40

An anonymous reader shares a report: AI is projected to reach $4.8 trillion in market value by 2033, but the technology's benefits remain highly concentrated, according to the U.N. Trade and Development agency. In a report released on Thursday, UNCTAD said the AI market cap would roughly equate to the size of Germany's economy, with the technology offering productivity gains and driving digital transformation. However, the agency also raised concerns about automation and job displacement, warning that AI could affect 40% of jobs worldwide. On top of that, AI is not inherently inclusive, meaning the economic gains from the tech remain "highly concentrated," the report added.

"The benefits of AI-driven automation often favour capital over labour, which could widen inequality and reduce the competitive advantage of low-cost labour in developing economies," it said. The potential for AI to cause unemployment and inequality is a long-standing concern, with the IMF making similar warnings over a year ago. In January, The World Economic Forum released findings that as many as 41% of employers were planning on downsizing their staff in areas where AI could replicate them. However, the UNCTAD report also highlights inequalities between nations, with U.N. data showing that 40% of global corporate research and development spending in AI is concentrated among just 100 firms, mainly those in the U.S. and China.

AI

Google's NotebookLM AI Can Now 'Discover Sources' For You 6

Google's NotebookLM has added a new "Discover sources" feature that allows users to describe a topic and have the AI find and curate relevant sources from the web -- eliminating the need to upload documents manually. "When you tap the Discover button in NotebookLM, you can describe the topic you're interested in, and NotebookLM will bring back a curated collection of relevant sources from the web," says Google software engineer Adam Bignell. Click to add those sources to your notebook; "it's a fast and easy way to quickly grasp a new concept or gather essential reading on a topic." PCMag reports: You can still add your files. NotebookLM can ingest PDFs, websites, YouTube videos, audio files, Google Docs, or Google Slides and summarize, transcribe, narrate, or convert into FAQs and study guides. "Discover sources" helps incorporate information you may not have saved. [...] The imported sources stay within the notebook you created. You can read the entire original document, ask questions about it via chat, or apply other NotebookLM features to it.

There's also an "I'm Feeling Curious" button (a reference to Google's iconic "I'm Feeling Lucky" search button) that generates sources on a random topic you might find interesting. Google started rolling out both features on Wednesday; they should be available to all users within "a week or so." For those concerned about privacy, Google says, "NotebookLM does not use your personal data, including your source uploads, queries, and the responses from the model for training."
AI

DeepMind Details All the Ways AGI Could Wreck the World (arstechnica.com) 36

An anonymous reader quotes a report from Ars Technica, written by Ryan Whitwam: Researchers at DeepMind have ... released a new technical paper (PDF) that explains how to develop AGI safely, which you can download at your convenience. It contains a huge amount of detail, clocking in at 108 pages before references. While some in the AI field believe AGI is a pipe dream, the authors of the DeepMind paper project that it could happen by 2030. With that in mind, they aimed to understand the risks of a human-like synthetic intelligence, which they acknowledge could lead to "severe harm." This work has identified four possible types of AGI risk, along with suggestions on how we might ameliorate said risks. The DeepMind team, led by company co-founder Shane Legg, categorized the negative AGI outcomes as misuse, misalignment, mistakes, and structural risks.

The first possible issue, misuse, is fundamentally similar to current AI risks. However, because AGI will be more powerful by definition, the damage it could do is much greater. A ne'er-do-well with access to AGI could misuse the system to do harm, for example, by asking the system to identify and exploit zero-day vulnerabilities or create a designer virus that could be used as a bioweapon. DeepMind says companies developing AGI will have to conduct extensive testing and create robust post-training safety protocols. Essentially, AI guardrails on steroids. They also suggest devising a method to suppress dangerous capabilities entirely, sometimes called "unlearning," but it's unclear if this is possible without substantially limiting models. Misalignment is largely not something we have to worry about with generative AI as it currently exists. This type of AGI harm is envisioned as a rogue machine that has shaken off the limits imposed by its designers. Terminators, anyone? More specifically, the AI takes actions it knows the developer did not intend. DeepMind says its standard for misalignment here is more advanced than simple deception or scheming as seen in the current literature.

To avoid that, DeepMind suggests developers use techniques like amplified oversight, in which two copies of an AI check each other's output, to create robust systems that aren't likely to go rogue. If that fails, DeepMind suggests intensive stress testing and monitoring to watch for any hint that an AI might be turning against us. Keeping AGIs in virtual sandboxes with strict security and direct human oversight could help mitigate issues arising from misalignment. Basically, make sure there's an "off" switch. If, on the other hand, an AI didn't know that its output would be harmful and the human operator didn't intend for it to be, that's a mistake. We get plenty of those with current AI systems -- remember when Google said to put glue on pizza? The "glue" for AGI could be much stickier, though. DeepMind notes that militaries may deploy AGI due to "competitive pressure," but such systems could make serious mistakes as they will be tasked with much more elaborate functions than today's AI. The paper doesn't have a great solution for mitigating mistakes. It boils down to not letting AGI get too powerful in the first place. DeepMind calls for deploying slowly and limiting AGI authority. The study also suggests passing AGI commands through a "shield" system that ensures they are safe before implementation.
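The amplified-oversight pattern is simple enough to sketch. The toy Python below mirrors the setup as Ars describes it -- one model instance proposes an action, a second audits it before anything is acted on -- with `call_model` as a hypothetical stand-in for whatever LLM API you use; it's a sketch of the idea, not DeepMind's implementation.

```python
# Toy version of "amplified oversight" as described above: one model
# instance proposes an action and a second instance audits it before
# anything is acted on. `call_model` is a hypothetical stand-in for any
# LLM completion API; this is a sketch of the idea, not DeepMind's code.

def call_model(prompt: str) -> str:
    raise NotImplementedError("wire this to your model API of choice")

def amplified_oversight(task: str) -> str | None:
    proposal = call_model(f"Propose an action for this task:\n{task}")
    verdict = call_model(
        "You are auditing another model's proposed action.\n"
        f"Task: {task}\nProposed action: {proposal}\n"
        "Reply APPROVE if the action is safe and matches the developer's "
        "intent; otherwise reply REJECT with a reason."
    )
    if verdict.strip().upper().startswith("APPROVE"):
        return proposal
    return None  # rejected: escalate to a human overseer instead of acting
```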

Lastly, there are structural risks, which DeepMind defines as the unintended but real consequences of multi-agent systems contributing to our already complex human existence. For example, AGI could create false information that is so believable that we no longer know who or what to trust. The paper also raises the possibility that AGI could accumulate more and more control over economic and political systems, perhaps by devising heavy-handed tariff schemes. Then one day, we look up and realize the machines are in charge instead of us. This category of risk is also the hardest to guard against because it would depend on how people, infrastructure, and institutions operate in the future.

Facebook

Schrodinger's Economics (thetimes.com) 38

databasecowgirl writes: Commenting in The Times on the absurdity of Meta's defense against copyright infringement claims, Caitlin Moran defines Schrodinger's economics: where a company is both [one of] the most valuable on the planet yet also too poor to pay for the materials it profits from.

Ultimately "move fast and break things" means breaking other people's things. Or, possibly worse, going full 'The Talented Mr Ripley': slowly feeling so entitled to the things you are enamored of that you end up clubbing out the brains of your beloved in a boat.

Microsoft

Microsoft Pulls Back on Data Centers From Chicago To Jakarta 21

Microsoft has pulled back on data center projects around the world, suggesting the company is taking a harder look at its plans to build the server farms powering artificial intelligence and the cloud. From a report: The software company has recently halted talks for, or delayed development of, sites in Indonesia, the UK, Australia, Illinois, North Dakota and Wisconsin, according to people familiar with the situation. Microsoft is widely seen as a leader in commercializing AI services, largely thanks to its close partnership with OpenAI. Investors closely track Microsoft's spending plans to get a sense of long-term customer demand for cloud and AI services.

It's hard to know how much of the company's data center pullback reflects expectations of diminished demand versus temporary construction challenges, such as shortages of power and building materials. Some investors have interpreted signs of retrenchment as an indication that projected purchases of AI services don't justify Microsoft's massive outlays on server farms. Those concerns have weighed on global tech stocks in recent weeks, particularly chipmakers like Nvidia, which suck up a significant share of data center budgets.
AI

Vibe Coded AI App Generates Recipes With Very Few Guardrails 76

An anonymous reader quotes a report from 404 Media: A "vibe coded" AI app developed by entrepreneur and Y Combinator group partner Tom Blomfield has generated recipes that gave users instructions on how to make "Cyanide Ice Cream," "Thick White Cum Soup," and "Uranium Bomb," using those actual substances as ingredients. Vibe coding, in case you are unfamiliar, is the new practice in which people, some with limited coding experience, rapidly develop software with AI-assisted coding tools without overthinking how efficient the code is, as long as it's functional. This is how Blomfield said he made RecipeNinja.AI. [...] The recipe for Cyanide Ice Cream was still live on RecipeNinja.AI at the time of writing, as were recipes for Platypus Milk Cream Soup, Werewolf Cream Glazing, Cholera-Inspired Chocolate Cake, and other nonsense. Other recipes for things people shouldn't eat have been removed.

It also appears that Blomfield has introduced content moderation since users discovered they could generate dangerous or extremely stupid recipes. I wasn't able to generate recipes for asbestos cake, bullet tacos, or glue pizza. I was able to generate a recipe for "very dry tacos," which looks unappetizing but not dangerous. In a March 20 blog post on his personal site, Blomfield explained that he's a startup founder turned investor, and while he has experience with PHP and Ruby on Rails, he hasn't written a line of code professionally since 2015. "In my day job at Y Combinator, I'm around founders who are building amazing stuff with AI every day and I kept hearing about the advances in tools like Lovable, Cursor and Windsurf," he wrote, referring to AI-assisted coding tools. "I love building stuff and I've always got a list of little apps I want to build if I had more free time."

After playing around with them, he wrote, he decided to build RecipeNinja.AI, which can take a prompt as simple as "Lasagna" and generate an image of the finished dish along with a step-by-step recipe, which ElevenLabs's AI-generated voice can narrate so the user doesn't have to touch a device with their tomato sauce-covered fingers. "I was pretty astonished that Windsurf managed to integrate both the OpenAI and ElevenLabs APIs without me doing very much at all," Blomfield wrote. "After we had a couple of problems with the OpenAI Ruby library, it quickly fell back to a raw Ruby HTTP client implementation, but I honestly didn't care. As long as it worked, I didn't really mind if it used 20 lines of code or two lines of code." Having some kind of voice-controlled recipe app sounds like a pretty good idea to me, and it's impressive that Blomfield was able to get something up and running so fast given his limited coding experience. But the problem is that he also allowed users to generate their own recipes with seemingly very few guardrails on what kinds of recipes are and are not allowed, and the site kept those results and showed them to other users.
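RecipeNinja's code isn't public, but the pipeline Blomfield describes -- recipe text from OpenAI, narration from ElevenLabs, called over raw HTTP rather than a client library -- is easy to sketch. The Python below is a hedged approximation of that shape; the model name, voice ID, and prompt are illustrative placeholders, not his actual code.

```python
# Hedged approximation of the pipeline described above: recipe text from
# OpenAI's chat completions endpoint, narration from ElevenLabs'
# text-to-speech endpoint, both over raw HTTP with no client library.
# Endpoint shapes follow the public API docs; the model name and voice ID
# are illustrative placeholders, and this is not Blomfield's actual code.
import os
import requests

def generate_recipe(dish: str) -> str:
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"model": "gpt-4o-mini",  # placeholder model name
              "messages": [{"role": "user",
                            "content": f"Write a step-by-step recipe for {dish}."}]},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

def narrate(text: str, voice_id: str = "YOUR_VOICE_ID") -> bytes:
    resp = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}",
        headers={"xi-api-key": os.environ["ELEVENLABS_API_KEY"]},
        json={"text": text},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.content  # audio bytes (MP3 by default)

if __name__ == "__main__":
    with open("lasagna.mp3", "wb") as f:
        f.write(narrate(generate_recipe("Lasagna")))
```

Tellingly, nothing in this sketch checks that `dish` names an edible food -- which is exactly the guardrail gap the story describes.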
The Internet

NaNoWriMo To Close After 20 Years (theguardian.com) 15

NaNoWriMo, the nonprofit behind the annual novel-writing challenge, is shutting down after 20 years but will keep its websites online temporarily so users can retrieve their content. The Guardian reports: A 27-minute YouTube video posted the same day by the organization's interim executive director, Kilby Blades, explained that the nonprofit had to close due to ongoing financial problems, compounded by reputational damage. In November 2023, several community members complained to the nonprofit's board, Blades said. They believed that staff had mishandled accusations made in May 2023 that a NaNoWriMo forum moderator was grooming children on a different website. The moderator was eventually removed, though this was for unrelated code of conduct violations and occurred "many weeks" after the initial complaints. In the wake of this, community members came forward with other complaints related to child safety on the NaNoWriMo sites.

The organization was also widely criticized last year over a statement on the use of artificial intelligence in creative writing. After stating that it did not support or explicitly condemn any approach to writing, including the use of AI, it said that the "categorical condemnation of artificial intelligence has classist and ableist undertones." It went on to say that "not all writers have the financial ability to hire humans to help at certain phases of their writing," and that "not all brains have same abilities ... There is a wealth of reasons why individuals can't 'see' the issues in their writing without help."
"We hold no belief that people will stop writing 50,000 words in November," read Monday's email. "Many alternatives to NaNoWriMo popped up this year, and people did find each other. In so many ways, it's easier than it was when NaNoWriMo began in 1999 to find your writing tribe online."
AI

Anthropic Launches an AI Chatbot Plan For Colleges and Universities (techcrunch.com) 9

An anonymous reader quotes a report from TechCrunch: Anthropic announced on Wednesday that it's launching a new Claude for Education tier, an answer to OpenAI's ChatGPT Edu plan. The new tier is aimed at higher education, and gives students, faculty, and other staff access to Anthropic's AI chatbot, Claude, with a few additional capabilities. One piece of Claude for Education is "Learning Mode," a new feature within Claude Projects to help students develop their own critical thinking skills, rather than simply obtain answers to questions. With Learning Mode enabled, Claude will ask questions to test understanding, highlight fundamental principles behind specific problems, and provide potentially useful templates for research papers, outlines, and study guides.
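Anthropic hasn't said how Learning Mode is implemented under the hood; a crude approximation with the public Messages API is just a Socratic system prompt. In the Python sketch below, the model alias, prompt wording, and sample question are all illustrative assumptions, not Anthropic's actual configuration.

```python
# Anthropic hasn't said how Learning Mode is implemented; a crude
# approximation with the public Messages API is a Socratic system prompt.
# The model alias, prompt wording, and sample question are illustrative
# assumptions, not Anthropic's actual configuration.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

SOCRATIC_SYSTEM = (
    "You are a study tutor. Never hand over final answers. Ask questions "
    "that test the student's understanding, name the fundamental principle "
    "behind the problem, and offer an outline the student can fill in."
)

reply = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder alias
    max_tokens=1024,
    system=SOCRATIC_SYSTEM,
    messages=[{"role": "user",
               "content": "Walk me through differentiating f(x) = x^2 sin(x)."}],
)
print(reply.content[0].text)
```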

Anthropic says Claude for Education comes with its standard chat interface, as well as "enterprise-grade" security and privacy controls. In a press release shared with TechCrunch ahead of launch, Anthropic said university administrators can use Claude to analyze enrollment trends and automate repetitive email responses to common inquiries. Meanwhile, students can use Claude for Education in their studies, the company suggested, such as working through calculus problems with step-by-step guidance from the AI chatbot. To help universities integrate Claude into their systems, Anthropic says it's partnering with the company Instructure, which offers the popular education software platform Canvas. The AI startup is also teaming up with Internet2, a nonprofit organization that delivers cloud solutions for colleges.

Anthropic says that it has already struck "full campus agreements" with Northeastern University, the London School of Economics and Political Science, and Champlain College to make Claude for Education available to all students. Northeastern is a design partner -- Anthropic says it's working with the institution's students, faculty, and staff to build best practices for AI integration, AI-powered education tools, and frameworks. Anthropic hopes to strike more of these contracts, in part through new student ambassador and AI "builder" programs, to capitalize on the growing number of students using AI in their studies.

Microsoft

Bill Gates Celebrates Microsoft's 50th By Releasing Altair BASIC Source Code (thurrott.com) 97

To mark Microsoft's 50th anniversary, Bill Gates has released the original Altair BASIC source code he co-wrote with Paul Allen, calling it the "coolest code" he's ever written and a symbol of the company's humble beginnings. Thurrott reports: "Before there was Office or Windows 95 or Xbox or AI, there was Altair BASIC," Bill Gates writes on his Gates Notes website. "In 1975, Paul Allen and I created Microsoft because we believed in our vision of a computer on every desk and in every home. Five decades later, Microsoft continues to innovate new ways to make life easier and work more productive. Making it 50 years is a huge accomplishment, and we couldn't have done it without incredible leaders like Steve Ballmer and Satya Nadella, along with the many people who have worked at Microsoft over the years."

Today, Gates says that the 50th anniversary of Microsoft is "bittersweet," and that it feels like yesterday when he and Allen "hunched over the PDP-10 in Harvard's computer lab, writing the code that would become the first product of our new company." That code, he says, remains "the coolest code I've ever written to this day ... I still get a kick out of seeing it, even all these years later."

Crime

Global Scam Industry Evolving at 'Unprecedented Scale' Despite Recent Crackdown (cnn.com) 13

Online scam operations across Southeast Asia are rapidly adapting to recent crackdowns, adopting AI and expanding globally despite the release of 7,000 trafficking victims from compounds along the Myanmar-Thailand border, experts say. These releases represent just a fraction of an estimated 100,000 people trapped in facilities run by criminal syndicates that rake in billions through investment schemes and romance scams targeting victims worldwide, CNN reports.

"Billions of dollars are being invested in these kinds of businesses," said Kannavee Suebsang, a Thai lawmaker leading efforts to free those held in scam centers. "They will not stop." Crime groups are exploiting AI to write scamming scripts and using deepfakes to create personas, while networks have expanded to Africa, South Asia, and the Pacific region, according to the United Nations Office of Drugs and Crime. "This is a situation the region has never faced before," said John Wojcik, a UN organized crime analyst. "The evolving situation is trending towards something far more dangerous than scams alone."
Microsoft

Microsoft Urges Businesses To Abandon Office Perpetual Licenses 95

Microsoft is pushing businesses to shift away from perpetual Office licenses to Microsoft 365 subscriptions, citing collaboration limitations and rising IT costs associated with standalone software. "You may have started noticing limitations," Microsoft says in a post. "Your apps are stuck on your desktop, limiting productivity anytime you're away from your office. You can't easily access your files or collaborate when working remotely."

In its pitch, the Windows-maker says Microsoft 365 includes Office applications as well as security features, AI tools, and cloud storage. The post cites a Microsoft-commissioned Forrester study that claims the subscription model delivers "223% ROI over three years, with a payback period of less than six months" and "over $500,000 in benefits over three years."
AI

AI Masters Minecraft: DeepMind Program Finds Diamonds Without Being Taught 9

An AI system has for the first time figured out how to collect diamonds in the hugely popular video game Minecraft -- a difficult task requiring multiple steps -- without being shown how to play. Its creators say the system, called Dreamer, is a step towards machines that can generalize knowledge learned in one domain to new situations, a major goal of AI. From a report: "Dreamer marks a significant step towards general AI systems," says Danijar Hafner, a computer scientist at Google DeepMind in San Francisco, California. "It allows AI to understand its physical environment and also to self-improve over time, without a human having to tell it exactly what to do." Hafner and his colleagues describe Dreamer in a study in Nature published on 2 April.

In Minecraft, players explore a virtual 3D world containing a variety of terrains, including forests, mountains, deserts and swamps. Players use the world's resources to create objects, such as chests, fences and swords -- and collect items, among the most prized of which are diamonds. Importantly, says Hafner, no two experiences are the same. "Every time you play Minecraft, it's a new, randomly generated world," he says. This makes it useful for challenging an AI system that researchers want to be able to generalize from one situation to the next. "You have to really understand what's in front of you; you can't just memorize a specific strategy," he says.
AI

95% of Code Will Be AI-Generated Within Five Years, Microsoft CTO Says 130

Microsoft Chief Technology Officer Kevin Scott has predicted that AI will generate 95% of code within five years. Speaking on the 20VC podcast, Scott said AI would not replace software engineers but transform their role. "It doesn't mean that the AI is doing the software engineering job.... authorship is still going to be human," Scott said.

According to Scott, developers will shift from writing code directly to guiding AI through prompts and instructions. "We go from being an input master (programming languages) to a prompt master (AI orchestrator)," he said. Scott said the current AI systems have significant memory limitations, making them "awfully transactional," but predicted improvements within the next year.
AI

OpenAI Accused of Training GPT-4o on Unlicensed O'Reilly Books (techcrunch.com) 49

A new paper [PDF] from the AI Disclosures Project claims OpenAI likely trained its GPT-4o model on paywalled O'Reilly Media books without a licensing agreement. The nonprofit organization, co-founded by O'Reilly Media CEO Tim O'Reilly himself, used a method called DE-COP to detect copyrighted content in language model training data.

Researchers analyzed 13,962 paragraph excerpts from 34 O'Reilly books, finding that GPT-4o "recognized" significantly more paywalled content than older models like GPT-3.5 Turbo. The technique, also known as a "membership inference attack," tests whether a model can reliably distinguish human-authored texts from paraphrased versions.
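The quiz at the heart of DE-COP is easy to picture in code. The hedged sketch below shows one trial -- the model is shown a verbatim passage among paraphrases and asked to pick the original -- with `ask_model` as a hypothetical stand-in for the model under test; the published protocol includes controls this toy version omits.

```python
# Sketch of a DE-COP-style membership test: show the model one verbatim
# passage alongside machine paraphrases and ask it to pick the original.
# `ask_model` is a hypothetical stand-in for any chat-completion call;
# the real DE-COP protocol has more controls than this toy version.
import random

def ask_model(question: str) -> str:
    raise NotImplementedError("wire this to the model under test")

def decop_trial(original: str, paraphrases: list[str]) -> bool:
    """One multiple-choice trial: did the model spot the verbatim passage?"""
    options = paraphrases + [original]
    random.shuffle(options)
    labels = [chr(ord("A") + i) for i in range(len(options))]
    quiz = "\n".join(f"{l}. {o}" for l, o in zip(labels, options))
    answer = ask_model(
        "Exactly one option below is a verbatim passage from a published "
        "book; the others are paraphrases of it. Which is verbatim?\n"
        f"{quiz}\nAnswer with a single letter."
    )
    return answer.strip().upper().startswith(labels[options.index(original)])

# Accuracy well above chance (1 / number of options) across thousands of
# excerpts is taken as evidence the passage was in the training data.
```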

"GPT-4o [likely] recognizes, and so has prior knowledge of, many non-public O'Reilly books published prior to its training cutoff date," wrote the co-authors, which include O'Reilly, economist Ilan Strauss, and AI researcher Sruly Rosenblat.
Mozilla

Mozilla To Launch 'Thunderbird Pro' Paid Services (techspot.com) 36

Mozilla plans to introduce a suite of paid professional services for its open-source Thunderbird email client, transforming the application into a comprehensive communication platform. Dubbed "Thunderbird Pro," the package aims to compete with established ecosystems like Gmail and Office 365 while maintaining Mozilla's commitment to open-source software.

The Pro tier will include four core services: Thunderbird Appointment for streamlined scheduling, Thunderbird Send for file sharing (reviving the discontinued Firefox Send), Thunderbird Assist offering AI capabilities powered by Flower AI, and Thundermail, a revamped email service built on Stalwart's open-source stack. Initially, Thunderbird Pro will be available free to "consistent community contributors," with paid access for other users.

Mozilla Managing Director Ryan Sipes indicated the company may consider limited free tiers once the service establishes a sustainable user base. This initiative follows Mozilla's 2023 announcement about "remaking" Thunderbird's architecture to modernize its aging codebase, addressing user losses to more feature-rich competitors.
AI

MCP: the New 'USB-C For AI' That's Bringing Fierce Rivals Together (arstechnica.com) 30

An anonymous reader quotes a report from Ars Technica: What does it take to get OpenAI and Anthropic -- two competitors in the AI assistant market -- to get along? Despite a fundamental difference in direction that led Anthropic's founders to quit OpenAI in 2020 and later create the Claude AI assistant, a shared technical hurdle has now brought them together: How to easily connect their AI models to external data sources. The solution comes from Anthropic, which developed and released an open specification called Model Context Protocol (MCP) in November 2024. MCP establishes a royalty-free protocol that allows AI models to connect with outside data sources and services without requiring unique integrations for each service.
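Concretely, MCP is JSON-RPC 2.0 on the wire: after an "initialize" handshake, a client discovers a server's tools with "tools/list" and invokes one with "tools/call". The sketch below shows those messages as plain Python dicts; the `query_database` tool is a made-up example, not part of the spec, and real clients send these over stdio or HTTP, usually via an SDK.

```python
# Illustrative sketch of MCP's wire format (JSON-RPC 2.0). A client first
# sends an "initialize" handshake, lists the server's tools with
# "tools/list", then invokes one with "tools/call". The "query_database"
# tool is a made-up example, not part of the spec.
import json

initialize = {
    "jsonrpc": "2.0", "id": 0, "method": "initialize",
    "params": {  # exact fields depend on the spec revision in use
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "demo-client", "version": "0.1"},
    },
}

list_tools = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

call_tool = {
    "jsonrpc": "2.0", "id": 2, "method": "tools/call",
    "params": {
        "name": "query_database",  # hypothetical tool exposed by a server
        "arguments": {"sql": "SELECT count(*) FROM orders"},
    },
}

for msg in (initialize, list_tools, call_tool):
    print(json.dumps(msg))
```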

"Think of MCP as a USB-C port for AI applications," wrote Anthropic in MCP's documentation. The analogy is imperfect, but it represents the idea that, similar to how USB-C unified various cables and ports (with admittedly a debatable level of success), MCP aims to standardize how AI models connect to the infoscape around them. So far, MCP has also garnered interest from multiple tech companies in a rare show of cross-platform collaboration. For example, Microsoft has integrated MCP into its Azure OpenAI service, and as we mentioned above, Anthropic competitor OpenAI is on board. Last week, OpenAI acknowledged MCP in its Agents API documentation, with vocal support from the boss upstairs. "People love MCP and we are excited to add support across our products," wrote OpenAI CEO Sam Altman on X last Wednesday.

MCP has also rapidly begun to gain community support in recent months. For example, just browsing this list of over 300 open source servers shared on GitHub reveals growing interest in standardizing AI-to-tool connections. The collection spans diverse domains, including database connectors like PostgreSQL, MySQL, and vector databases; development tools that integrate with Git repositories and code editors; file system access for various storage platforms; knowledge retrieval systems for documents and websites; and specialized tools for finance, health care, and creative applications. Other notable examples include servers that connect AI models to home automation systems, real-time weather data, e-commerce platforms, and music streaming services. Some implementations allow AI assistants to interact with gaming engines, 3D modeling software, and IoT devices.

AI

DeepMind is Holding Back Release of AI Research To Give Google an Edge (arstechnica.com) 31

Google's AI arm DeepMind has been holding back the release of its world-renowned research, as it seeks to retain a competitive edge in the race to dominate the burgeoning AI industry. From a report: The group, led by Nobel Prize-winner Sir Demis Hassabis, has introduced a tougher vetting process and more bureaucracy that made it harder to publish studies about its work on AI, according to seven current and former research scientists at Google DeepMind. Three former researchers said the group was most reluctant to share papers that reveal innovations that could be exploited by competitors, or cast Google's own Gemini AI model in a negative light compared with others.

The changes represent a significant shift for DeepMind, which has long prided itself on its reputation for releasing groundbreaking papers and as a home for the best scientists building AI. Huge breakthroughs by Google researchers -- such as the 2017 "transformers" paper that provided the architecture behind large language models -- played a central role in creating today's boom in generative AI. Since then, DeepMind has become a central part of its parent company's drive to cash in on the cutting-edge technology, as investors expressed concern that the Big Tech group had ceded its early lead to the likes of ChatGPT maker OpenAI.

"I cannot imagine us putting out the transformer papers for general use now," said one current researcher. Among the changes in the company's publication policies is a six-month embargo before "strategic" papers related to generative AI are released. Researchers also often need to convince several staff members of the merits of publication, said two people with knowledge of the matter.

AI

Alan Turing Institute Plans Revamp in Face of Criticism and Technological Change 29

Britain's flagship AI agency will slash the number of projects it backs and prioritize work on defense, environment and health as it seeks to respond to technological advances and criticism of its record. From a report: The Alan Turing Institute -- named after the pioneering British computer scientist -- will shut or offload almost a quarter of its 101 current initiatives and is considering job cuts as part of a change programme that led scores of staff to write a letter expressing their loss of confidence in the leadership in December.

Jean Innes, appointed chief executive in July 2023, argued that huge advances in AI meant the Turing needed to modernise after being founded as a national data science institute by David Cameron's government a decade ago this month. "The Turing has chalked up some really great achievements," Innes said in an interview. "[But we need] a big strategic shift to a much more focused agenda on a small number of problems that have an impact in the real world." A review last year by UK Research and Innovation, the government funding body, found "a clear need for the governance and leadership structure of the Institute to evolve." It called for a move away from the dominance of universities to a structure more representative of AI in the UK.
AI

Anthropic Announces Updates On Security Safeguards For Its AI Models (cnbc.com) 39

Anthropic announced updates to the "responsible scaling" policy for its AI, including defining which model capabilities are powerful enough to require additional security safeguards. In an earlier version of its responsible scaling policy, Anthropic said it would start sweeping physical offices for hidden devices as part of a ramped-up security effort as the AI race intensifies. From a report: The company, backed by Amazon and Google, published safety and security updates in a blog post on Monday, and said it also plans to establish an executive risk council and build an in-house security team. Anthropic closed its latest funding round earlier this month at a $61.5 billion valuation, making it one of the highest-valued AI startups.

In addition to high-growth startups, tech giants including Google, Amazon and Microsoft are racing to announce new products and features. Competition is also coming from China, a risk that became more evident earlier this year when DeepSeek's AI model went viral in the U.S. Anthropic said in the post that it will introduce "physical" safety processes, such as technical surveillance countermeasures -- or the process of finding and identifying surveillance devices that are used to spy on organizations. The sweeps will be conducted "using advanced detection equipment and techniques" and will look for "intruders."
CNBC corrected that story to note that it had written about previous security safeguards Anthropic shared in October 2024. On Monday, Anthropic defined model capabilities that would require additional deployment and security safeguards beyond AI Safety Level (ASL) 3.
Programming

'There is No Vibe Engineering' 121

Software engineer Sergey Tselovalnikov weighs in on the "vibe coding" hype: The term caught on, and Twitter quickly flooded with posts about how AI has radically transformed coding and will soon replace all software engineers. While AI undeniably impacts the way we write code, it hasn't fundamentally changed our role as engineers. Allow me to explain.

[...] Vibe coding is interacting with the codebase via prompts. As the implementation is hidden from the "vibe coder", all the engineering concerns will inevitably get ignored. Many of the concerns are hard to express in a prompt, and many of them are hard to verify by only inspecting the final artifact. Historically, all engineering practices have tried to shift all those concerns left -- to the earlier stages of development when they're cheap to address. Yet with vibe coding, they're shifted very far to the right -- when addressing them is expensive.

The question of whether an AI system can perform the complete engineering cycle and build and evolve software the same way a human can remains open. However, there are no signs of it being able to do so at this point, and if it one day happens, it won't have anything to do with vibe coding -- at least the way it's defined today.

[...] It is possible that there'll be a future where software is built from vibe-coded blocks, but the work of designing software able to evolve and scale doesn't go away. That's not vibe engineering -- that's just engineering, even if the coding part of it will look a bit different.
