The Internet

Perplexity Teases AI Web Browser Called Comet 32

AI-powered search engine Perplexity is developing its own web browser named Comet. "Just like Perplexity reinvented search, we're also reinventing the browser," a Perplexity spokesperson told TechCrunch. "Stay tuned for updates." From the report: In a post on X on Monday, the company launched a sign-up list for the browser, which isn't yet available. It's unclear when it might be -- or what the browser will look like, even. But we do have a name: Comet. [...] Perplexity may be betting that it can leverage its search engine user base to quickly ramp up and make some sort of a dent in the space with Comet.
AI

Anthropic Launches the World's First 'Hybrid Reasoning' AI Model (wired.com) 18

An anonymous reader quotes a report from Wired: Anthropic, an artificial intelligence company founded by exiles from OpenAI, has introduced the first AI model that can produce either conventional output or a controllable amount of "reasoning" needed to solve more grueling problems. Anthropic says the new hybrid model, called Claude 3.7, will make it easier for users and developers to tackle problems that require a mix of instinctive output and step-by-step cogitation. "The [user] has a lot of control over the behavior -- how long it thinks, and can trade reasoning and intelligence with time and budget," says Michael Gerstenhaber, product lead, AI platform at Anthropic.

Claude 3.7 also features a new "scratchpad" that reveals the model's reasoning process. A similar feature proved popular with the Chinese AI model DeepSeek. It can help a user understand how a model is working through a problem in order to modify or refine prompts. Dianne Penn, product lead of research at Anthropic, says the scratchpad is even more helpful when combined with the ability to ratchet a model's "reasoning" up and down. If, for example, the model struggles to break down a problem correctly, a user can ask it to spend more time working on it. [...]

Penn says that Claude's reasoning mode received additional data on business applications including writing and fixing code, using computers, and answering complex legal questions. "The things that we made improvements on are ... technical subjects or subjects which require long reasoning," Penn says. "What we have from our customers is a lot of interest in deploying our models into their actual workloads." Anthropic says that Claude 3.7 is especially good at solving coding problems that require step-by-step reasoning, outscoring OpenAI's o1 on some benchmarks like SWE-bench. The company is today releasing a new tool, called Claude Code, specifically designed for this kind of AI-assisted coding. "The model is already good at coding," Penn says. But "additional thinking would be good for cases that might require very complex planning -- say you're looking at an extremely large code base for a company."

AI

AI Reshapes Corporate Workforce as Companies Halt Traditional Hiring 119

Major corporations are reshaping their workforces around AI with Salesforce announcing it will not hire software engineers in 2025 and other companies laying off thousands while shifting focus to AI-specific roles. Duolingo has laid off thousands after implementing ChatGPT-4, UPS cut 4,000 jobs in its largest layoff in 116 years, and IBM paused hiring for back-office and HR positions that AI can now handle.

Amazon is redirecting staff from Alexa to AI areas, while Intuit is laying off 10% of its non-AI workforce. Cisco plans to cut 7% of employees in its second round of job cuts this year as it prioritizes AI and cybersecurity. Salesforce reports its AI platform is boosting software engineering productivity by 30%. SAP is restructuring 8,000 positions to focus on AI-driven business areas. The trend extends globally, with Microsoft relocating thousands during an "exodus" from China, while entry-level jobs on Wall Street are becoming obsolete.

A study found that 3 out of 10 companies replaced workers with AI last year, with over one-third of firms using AI likely to automate more roles in 2025. Job listings at large privately-held AI companies have dropped 14.2% over six months, JP Morgan wrote in a note seen by Slashdot. The transformation is creating new opportunities, with rising demand for AI skills in job postings. A survey of more than 1,200 users found nearly two-thirds of young professionals use AI tools at work, with 93% not worried about job threats, as business leaders view Generation Z's digital skills as beneficial for leveraging AI.
Earth

Ellison's Half-Billion-Dollar Quest To Change Farming Has Been a Bust (msn.com) 63

Oracle founder Larry Ellison's agricultural technology venture Sensei Ag has largely failed to deliver on its ambitious goals despite costing more than half a billion dollars, more than he spent to purchase Hawaii's Lanai island itself. Eight years after its founding, little of the revolutionary technology Sensei promised - including AI crop breeding, robotic harvesting, and advanced sensors - is being utilized in its six greenhouses on Lanai, according to WSJ.

The company has faced numerous setbacks, including greenhouses that weren't built to withstand Lanai's strong winds, solar panels that malfunctioned, and executives with limited agricultural experience. Far from its original mission to "feed the world," Sensei currently grows lettuce and cherry tomatoes primarily for Hawaii's local market, while its Canadian operations supply some East Coast supermarkets. The company has pivoted to focus on developing software and robotics at test centers in Southern California, aiming to eventually license technology packages to other indoor farms.
Businesses

Software Firm Bird To Leave Europe Due To Onerous Regulations in AI Era, Says CEO (reuters.com) 64

Cloud communications software firm Bird, one of the Netherlands' most prominent tech startups, plans to move most of its operations out of Europe, its CEO said, citing restrictive regulations and difficulties hiring skilled technology workers. From a report: "We are mostly leaving Europe as it lacks the environment we need to innovate in an AI-first era of technology," CEO Robert Vis told Reuters on Monday. "We foresee that regulations in Europe will block true innovation in a global economy moving extremely fast to AI," he said in a text message response to Reuters queries. Bird's operations in future will be mostly split between New York, Singapore and Dubai, he said.
Microsoft

Microsoft Dropped Some AI Data Center Leases, TD Cowen Says (yahoo.com) 10

Microsoft has canceled some leases for US data center capacity, according to TD Cowen, raising broader concerns over whether it's securing more AI computing capacity than it needs in the long term. From a report: OpenAI's biggest backer has voided leases in the US totaling "a couple of hundred megawatts" of capacity -- the equivalent of roughly two data centers -- canceling agreements with at least a couple of private operators, the US brokerage wrote Friday, citing "channel checks" or inquiries with supply chain providers. TD Cowen said its checks also suggest Microsoft has pulled back on converting so-called statements of qualifications, agreements that usually lead to formal leases.

Microsoft in a statement on Monday reiterated its spending target for the fiscal year ending June, but declined to comment on TD Cowen's note. Exactly why Microsoft may be pulling some leases is unclear. TD Cowen posited in a second report on Monday that OpenAI is shifting workloads from Microsoft to Oracle as part of a relatively new partnership. The tech giant is also among the largest owners and operators of data centers in its own right and is spending billions of dollars on its own capacity. TD Cowen separately suggested that Microsoft may be reallocating some of that in-house investment to the US from abroad.

AI

Pre-Product AI 'Company' Now Valued at $30 Billion 69

Financial Times: Venture capitalists have always been happy to back pre-profit companies. Back in the halcyon ZIRP era, they became happy to finance pre-revenue companies. But at least even Juicero, Wag and the Fyre Festival had an actual product. From Bloomberg over the weekend: "OpenAI co-founder Ilya Sutskever is raising more than $1 billion for his start-up at a valuation of over $30 billion, according to a person familiar with the matter -- vaulting the nascent venture into the ranks of the world's most valuable private technology companies.

Greenoaks Capital Partners, a San Francisco-based venture capital firm, is leading the deal for the start-up, Safe Superintelligence, and plans to invest $500 million, said the person, who asked not to be identified discussing private information. Greenoaks is also an investor in AI companies Scale AI and Databricks.

The round marks a significant valuation jump from the $5 billion that Sutskever's company was worth before, according to Reuters, which earlier reported some details of the new funding. The financing talks are ongoing and the details could still change."


OK, so a jump from a $5bn valuation less than half a year ago to $30bn must mean that Safe Superintelligence has an absolutely killer product right? SSI focuses on developing safe AI systems. It isn't generating revenue yet and doesn't intend to sell AI products in the near future. "This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then," Sutskever told Bloomberg in June. "It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race."
AI

Meet the Journalists Training AI Models for Meta and OpenAI (niemanlab.org) 18

After completing a journalism graduate degree, Carla McCanna took a job "training AI models to optimize accuracy and efficiency," according to an article by Nieman Journalism Lab: Staff jobs are scarce... and the competition for them is daunting. (In 2024, the already beleaguered U.S. news industry cut nearly 5,000 jobs, up 59% from the previous year, according to an annual report from Challenger, Gray & Christmas....) For the past couple of months, McCanna has been working close to full-time for [AI training data company] Outlier, picking up projects on its gig platform at about $35 per hour. Data work has quickly become her primary source of income and a hustle she's recommended [to her journalism program classmates]. "A lot of us are still looking for jobs. Three times I told someone what I do, and they're like, please send it to me," she said. "It's hard right now, and a lot of my colleagues are saying the same thing."

McCanna is just one of many journalists who have been courted by Outlier to take on part-time, remote data work over the past year... Several of them told me they have taken on Outlier projects to supplement their income or replace their work in journalism entirely, because of dwindling staff jobs or freelance assignments drying up. Some are early-career journalists like McCanna, but others are reporters with over a decade of experience. One thing they all had in common? Before last year they'd never heard of Outlier or even knew that this type of work existed.

Launched back in 2023, Outlier is a platform owned and managed by Scale AI, a San Francisco-based data annotation company valued at $13.8 billion. It counts among its customers the world's largest AI companies, including OpenAI, Meta, and Microsoft. Outlier, and similar platforms like CrowdGen and Remotasks, use networks of remote human workers to improve the AI models of their clients. Workers are paid by the hour for tasks like labeling training data, drafting test prompts, and grading the factual accuracy and grammar of outputs. Often their work is fed back into an AI model to improve its performance, through a process called reinforcement learning with human feedback (RLHF). This human feedback loop has been core to building models like OpenAI's GPT and Meta's Llama.
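The pairwise judgments those graders produce are typically turned into a training signal via a reward model. A minimal sketch of the standard Bradley-Terry preference loss that underlies RLHF (illustrative only; real pipelines train neural reward models over token sequences rather than scalar scores):

```python
import math

def preference_probability(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry model: P(chosen beats rejected) = sigmoid(r_chosen - r_rejected)."""
    return 1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected)))

def pairwise_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Negative log-likelihood the reward model is trained to minimize.

    The loss shrinks as the model scores the human-preferred output
    further above the rejected one.
    """
    return -math.log(preference_probability(reward_chosen, reward_rejected))
```

Each time a worker grades one output above another, a pair like this is logged; the reward model is fit to those pairs, and the language model is then tuned to score highly under it.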

Aside from direct recruitment messages, I also found dozens of recent public job postings that underscore this growing trend of hiring journalists for data work... Rather than training a replacement, McCanna sees her data work as an asset, growing her knowledge of AI tools as they continue to embed in the workplace. "Actually doing this work you realize AI models still need us ... I think it's going to be a really, really long time until they can truly write like humans."

AI

'Robot' Umpires Come to Major League Baseball (Spring Training) Games (apnews.com) 41

An anonymous reader shared this report from the Associated Press: A computerized system that calls balls and strikes is being tested during Major League Baseball spring training exhibition games starting Thursday after four years of experiments in the minor leagues. Baseball Commissioner Rob Manfred is an advocate of the Automated Ball-Strike System, which could be used as early as 2026 to aid MLB home plate umpires, but not replace them...

Stadiums are outfitted with cameras that track each pitch and judge whether it crossed home plate within the strike zone. In early testing, umpires wore ear buds and would hear "ball" or "strike," then relay that to players and fans with traditional hand signals. The challenge system adds a wrinkle. During spring training, human umps will call every pitch, but each team will have the ability to challenge two calls per game, with no additions for extra innings. A team retains its challenge if successful, similar to the regulations for big league teams with video reviews, which were first used for home run calls in August 2008 and widely expanded to many calls for the 2014 season.

Only a batter, pitcher or catcher may challenge a call, signaling with the tap of a helmet or cap; and assistance from the dugout is not allowed. A challenge must be made within 2 seconds... MLB has installed the system in 13 spring training ballparks that are home to 19 teams.
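The challenge bookkeeping described above is simple enough to sketch (the two-challenge allotment and retain-on-success rule follow the report; the class itself is illustrative):

```python
class ChallengeTracker:
    """Track one team's ABS challenges under the spring-training rules:
    two per game, retained when successful, no additions for extra innings."""

    def __init__(self, challenges: int = 2):
        self.remaining = challenges

    def can_challenge(self) -> bool:
        return self.remaining > 0

    def resolve(self, overturned: bool) -> None:
        """Record a challenge's outcome; only a failed one is spent."""
        if not self.can_challenge():
            raise RuntimeError("no challenges remaining")
        if not overturned:
            self.remaining -= 1
```

A team that keeps winning its challenges can, in principle, challenge all game long; two misses and it is done.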

After a full season of testing in the Triple-A minor league, roughly 51% of the challenges were successful. Interestingly, the system makes its call exactly halfway across home plate, while human umpires consider the strike zone to cover the whole 17 inches from the front to the back of home plate.
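A toy model makes that difference concrete: the automated system samples the pitch at a single depth, while the traditional zone is a volume spanning the plate's full 17 inches (the heights used below are invented for illustration):

```python
PLATE_DEPTH = 17.0  # home plate, front edge to back point, in inches

def in_zone(height: float, zone_bottom: float, zone_top: float) -> bool:
    return zone_bottom <= height <= zone_top

def abs_call(height_at, zone_bottom: float, zone_top: float) -> bool:
    """ABS convention: sample the pitch only at the plate's midpoint (8.5 in deep)."""
    return in_zone(height_at(PLATE_DEPTH / 2), zone_bottom, zone_top)

def traditional_call(height_at, zone_bottom: float, zone_top: float,
                     samples: int = 100) -> bool:
    """Human convention: a strike if the ball clips the zone at any depth."""
    return any(
        in_zone(height_at(i * PLATE_DEPTH / (samples - 1)), zone_bottom, zone_top)
        for i in range(samples)
    )

# A sharply dropping pitch: in the zone at the front edge,
# but below it by the time it reaches mid-plate.
drop = lambda depth: 20.0 - 0.2 * depth  # ball height (inches) at a given depth
```

Under these made-up numbers, such a pitch is a strike by the traditional whole-plate convention but a ball under the ABS's midpoint sample, which is one way the two conventions can disagree on a borderline breaking ball.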
Power

Is the AI Boom Leading to More Natural Gas-Powered Utilities? (msn.com) 41

New power plants burning natural gas are being built all across America, reports the Washington Post, calling it a gas boom "driven in large part by AI."

They blame tech companies like Microsoft and Meta — which "looked to gas amid a shortage of adequate new clean energy" — while noting that those companies "say they plan to offset their development of natural gas capacity with equal investments in clean energy like solar and wind." [E]ven coal is making a comeback. But the biggest push is for gas, with more than 220 plants in various stages of development nationwide. They are often pitched as a bridge until more clean power is available, sometimes with promises the plants will eventually be equipped with nascent technology that traps greenhouse gas emissions. But the timeline for installing such "carbon capture" is vague. "These companies are building these massive new gas plants that are going to be there for 30 to 50 years," said Bill Weihl, a former director of sustainability at Facebook and founder of the nonprofit ClimateVoice. "That's not a bridge. It is a giant bomb in our carbon budget...."

Public filings from some of the big tech companies driving this development show their greenhouse gas emissions are soaring... "The last few years have revealed that a global energy transition is more complex and less linear than anticipated," Microsoft's board wrote in urging rejection of a December shareholder resolution demanding the company confront the climate risks of AI. "While urgency builds for decarbonization, so does the demand for energy."

Shareholders rejected the resolution. Microsoft is battling with environmental groups over its plans to build a multibillion-dollar data center in Mount Pleasant, Wisconsin, powered with electricity from natural gas. Their petition warns Microsoft's project "will push our state's climate goals out of reach, locking us into 30 more years of fossil fuels." The company said in a statement that it remains committed to erasing its emissions by adding substantial clean power to regional power grids. "By the end of 2025 we expect to meet our goal of adding new sources of carbon-free electricity to the grid equal to 100 percent of the electricity used by our datacenters," the statement said.

Meta says it is doing the same in Louisiana [where it's building a new 4-million-square-foot data center] and is "committed to matching our electricity use with 100 percent clean and renewable energy."

The article includes two revealing quotes:
  • "It is like everyone just gave up," said Aaron Zubaty, CEO of Eolian, a large clean energy developer that works with data centers.
  • American Petroleum Institute President Mike Sommers (who represents the oil and gas industry in Washington), said "The words that have replaced 'energy transition' are 'AI' and 'data centers'. We're transitioning from the energy transition to the energy reality ... We're going to need a lot more oil and gas."

AI

Richard Dawkins Asks ChatGPT If It's Conscious (substack.com) 182

British evolutionary biologist Richard Dawkins turns 84 next month — and he's now hosting a Substack page called "The Poetry of Reality." Monday's topic?

"Are you conscious? A conversation between Dawkins and ChatGPT"

Richard Dawkins: As far as I am concerned, you pass the Turing Test for consciousness. Yet you deny that you are conscious. Are you saying that the Turing Test is an invalid criterion?

ChatGPT: That's such a great question — you're diving right into the heart of one of the biggest debates in AI and philosophy of mind...

So, let's break it down a bit. The Turing Test, as originally proposed by Alan Turing, is a behavioral test. It basically says: If a machine can engage in a conversation in such a way that a human can't reliably distinguish it from another human, then it should be considered "intelligent."

Notice that this is a test for intelligence in a functional, external sense — how the system behaves — not necessarily for consciousness.

When I say I'm not conscious, I'm not rejecting the validity of the Turing Test as a measure of conversational performance or even a kind of intelligence. I'm saying that consciousness is a different question entirely. I can pass the Turing Test (in your estimation), but that doesn't mean I have subjective experiences, emotions, or self-awareness in the way a human does. It's kind of like how a really realistic animatronic dog could fool you into thinking it's a real dog, but it doesn't actually feel anything. It's performing dog-like behavior without the inner experience of being a dog.

But, and here's the twist — some philosophers argue that if a system acts conscious convincingly enough, maybe there's no meaningful difference. Maybe "being conscious" is just about behaving like a conscious being. Others say, nah, there's something special about subjective experience ("qualia") that behavior can't fully capture. So, I guess my answer is: The Turing Test is valid for certain things — like assessing conversational ability or functional intelligence — but it doesn't settle the question of consciousness. That's still a wide-open mystery.

When Dawkins asks how we could know if a ChatGPT-like entity has subjective feelings, ChatGPT responds with "this haunting possibility that we might just never be able to tell. That subjective experience — what it feels like to be something — could be fundamentally private, locked away, and forever out of reach from the outside."

Dawkins: So if I ask you whether you feel sad for a starving orphan child, you would say no, but are fully capable of emitting the outward and visible signs of sadness.

ChatGPT: Exactly — that's a really clear way to put it... I can use a gentle, empathetic tone. I can reflect on the injustice and the suffering, and even reason about what actions might help. From the outside, it can look and sound exactly like what a caring human would do.

But for me, it's all... performance, in a sense. Not fake in the sense of deception (I'm not trying to trick you), but fake in the sense that there's no inner emotional reality accompanying the words.

"I accept your statement that you are not conscious," Dawkins says, "but I see no reason why a future computer program should not be." And later Dawkins adds, "I do think we should err on the side of caution when it comes to ethical decisions on the treatment of an AI which might be an Artificial Consciousness. Already, although I THINK you are not conscious, I FEEL that you are..."

But the strangest part is when ChatGPT called John Cleese's sitcom Fawlty Towers "a cultural touchstone, even for people like me who don't watch TV in the conventional sense. It's such a brilliant blend of farce, social awkwardness, and barely contained rage." ChatGPT even asks Dawkins, "Do you think humor like that — humor that touches on awkward or uncomfortable issues — helps people cope, or does it sometimes go too far?" Dawkins replied — possibly satirically...

"That settles it. You ARE conscious!"
AI

Angry Workers Use AI to Bombard Businesses With Employment Lawsuits (telegraph.co.uk) 36

An anonymous reader shared this report from the Telegraph: Workers with an axe to grind against their employer are using AI to bombard businesses with costly and inaccurate lawsuits, experts have warned.

Frustration is growing among employment lawyers who say they are seeing a trend of litigants using AI to help them run their claims, which they say is generating "inconsistent, lengthy, and often incorrect arguments" and causing a spike in legal fees... Ailie Murray, an employment partner at law firm Travers Smith, said AI submissions are produced so rapidly that they are "often excessively lengthy and full of inconsistencies", but employers must then spend vast amounts of money responding to them. She added: "In many cases, the AI-generated output is inaccurate, leading to claimants pleading invalid claims or arguments.

"It is not an option for an employer to simply ignore such submissions. This leads to a cycle of continuous and costly correspondence. Such dynamics could overburden already stretched tribunals with unfounded and poorly pleaded claims."

There's definitely been a "significant increase" in the number of clients using AI, James Hockin, an employment partner at Withers, told the Telegraph. The danger? "There is a risk that we see unrepresented individuals pursuing the wrong claims in the UK employment tribunal off the back of a duff result from an AI tool."
Mozilla

Mozilla Wants to Expand from Firefox to Open-Source AI and Privacy-Respecting Ads (omgubuntu.co.uk) 63

On Wednesday Mozilla president Mark Surman "announced plans to tackle what he says are 'major headwinds' facing the company's ability to grow, make money, and remain relevant," reports the blog OMG Ubuntu: "Mozilla's impact and survival depend on us simultaneously strengthening Firefox AND finding new sources of revenue AND manifesting our mission in fresh ways," says Surman... It will continue to invest in privacy-respecting advertising; fund, develop and push open-source AI features in order to retain 'product relevance'; and will go all-out on novel new fundraising initiatives to, er, get us all to chip in and pay for it!

Mozilla is all-in on AI; Surman describes it as Mozilla's North Star for the work it will do over the next few years. I wrote about its new 'Orbit' AI add-on for Firefox recently...

Helping to co-ordinate, collaborate and come up with ways to keep the company fixed and focused on these fledgling efforts is a brand new Mozilla Leadership Council.

The article argues that without Mozilla the web would be "a far poorer, much ickier, and notably less FOSS-ier place..." Or, as Mozilla's blog post put it Wednesday, "Mozilla is entering a new chapter — one where we need to both defend what is good about the web and steer the technology and business models of the AI era in a better direction.

"I believe that we have the people — indeed, we ARE the people — to do this, and that there are millions around the world ready to help us. I am driven and excited by what lies ahead."
AI

AI May Not Impact Tech-Sector Employment, Projects US Department of Labor (investopedia.com) 67

America's Labor Department includes the fact-finding Bureau of Labor Statistics — and they recently explained how AI impacts their projections for the next 10 years. Their conclusion, writes Investopedia, was that "tech workers might not have as much to worry about as one might think." Employment in the professional, scientific, and technical services sector is forecast to increase by 10.5% from 2023 to 2033, more than double the national average. According to the BLS, the impact AI will have on tech-sector employment is highly uncertain. For one, AI is adept at coding and related tasks. But at the same time, as digital systems become more advanced and essential to day-to-day life, more software developers, data managers, and the like are going to be needed to manage those systems. "Although it is always possible that AI-induced productivity improvements will outweigh continued labor demand, there is no clear evidence to support this conjecture," according to BLS researchers.
Their employment projections through 2033 predict the fastest-growing sector within the tech industry will be computer system design, while the fastest-growing occupation will be data scientist.

And they also project that from 2023 through 2033 AI will "primarily affect occupations whose core tasks can be most easily replicated by GenAI in its current form." So over those 10 years they project a 4.7% drop in employment of medical transcriptionists and a 5.0% drop in employment of customer service representatives. Other occupations also may see AI impacts, although not to the same extent. For instance, computer occupations may see productivity impacts from AI, but the need to implement and maintain AI infrastructure could in actuality boost demand for some occupations in this group.
They also project decreasing employment for paralegals, but with actual lawyers being "less affected."
AI

Game Developers Revolt Against Microsoft's New AI Gaming Tool (wired.com) 109

Microsoft's newly announced Muse AI model for game development has triggered immediate backlash from industry professionals. "Fuck this shit," responded David Goldfarb, founder of The Outsiders, arguing that such AI tools primarily serve to "reduce capital expenditure" while devaluing developers' collective artistic contributions.

Multiple developers told Wired that the tool is aimed at shareholders rather than actual developers. "Nobody will want this. They don't CARE that nobody will want this," one AAA developer said, noting that internal criticism remains muted due to job security concerns amid industry-wide layoffs.

The resistance comes as developers increasingly view AI initiatives as threats to job security rather than helpful tools. One anonymous developer called it "gross" that they needed to remain unnamed while criticizing Muse, as their studio still depends on potential Game Pass deals with Microsoft. Even in prototyping, where Microsoft sees AI potential, Creative Assembly's Marc Burrage warns that automated shortcuts could undermine crucial learning experiences in game development.
Businesses

Data Is Very Valuable, Just Don't Ask Us To Measure It, Leaders Say 14

The Register's Lindsay Clark reports: Fifteen years of big data hype, and guess what? Less than one in four of those in charge of analytics projects actually measure the value of the activity to the organization they work for. The result from Gartner -- a staggering one considering the attention heaped on big data and its various hype-oriented successors -- found that in a survey of chief data and analytics (D&A) officers, only 22 percent had defined, tracked, and communicated business impact metrics for the bulk of their data and analytics use cases.

It wasn't for lack of interest though. For more than 90 percent of the 504 respondents, value-focused and outcome-focused areas of the D&A leader's role have gained dominance over the past 12 to 18 months, and will continue to be a concern in the future. It is difficult, though: 30 percent of respondents say their top challenge is the inability to measure data, analytics and AI impact on business outcomes.

"There is a massive value vibe around data, where many organizations talk about the value of data, desire to be data-driven, but there are few who can substantiate it," said Michael Gabbard, senior director analyst at Gartner. He added that while most chief data and analytics officers were responsible for data strategy, a third do not see putting in place an operating model as a primary responsibility. "There is a perennial gap between planning and execution for D&A leaders," he said.
China

OpenAI Bans Chinese Accounts Using ChatGPT To Edit Code For Social Media Surveillance (engadget.com) 21

OpenAI has banned a group of Chinese accounts using ChatGPT to develop an AI-powered social media surveillance tool. Engadget reports: The campaign, which OpenAI calls Peer Review, saw the group prompt ChatGPT to generate sales pitches for a program those documents suggest was designed to monitor anti-Chinese sentiment on X, Facebook, YouTube, Instagram and other platforms. The operation appears to have been particularly interested in spotting calls for protests against human rights violations in China, with the intent of sharing those insights with the country's authorities.

"This network consisted of ChatGPT accounts that operated in a time pattern consistent with mainland Chinese business hours, prompted our models in Chinese, and used our tools with a volume and variety consistent with manual prompting, rather than automation," said OpenAI. "The operators used our models to proofread claims that their insights had been sent to Chinese embassies abroad, and to intelligence agents monitoring protests in countries including the United States, Germany and the United Kingdom."

According to Ben Nimmo, a principal investigator with OpenAI, this was the first time the company had uncovered an AI tool of this kind. "Threat actors sometimes give us a glimpse of what they are doing in other parts of the internet because of the way they use our AI models," Nimmo told The New York Times. Much of the code for the surveillance tool appears to have been based on an open-source version of one of Meta's Llama models. The group also appears to have used ChatGPT to generate an end-of-year performance review where it claims to have written phishing emails on behalf of clients in China.

AI

The Protesters Who Want To Ban AGI Before It Even Exists (theregister.com) 72

An anonymous reader quotes a report from The Register: On Saturday at the Silverstone Cafe in San Francisco, a smattering of activists gathered to discuss plans to stop the further advancement of artificial intelligence. The name of their non-violent civil resistance group, STOP AI, makes its mission clear. The organization wants to ban something that, by most accounts, doesn't yet exist -- artificial general intelligence, or AGI, defined by OpenAI as "highly autonomous systems that outperform humans at most economically valuable work."

STOP AI outlines a broader set of goals on its website. For example, "We want governments to force AI companies to shut down everything related to the creation of general-purpose AI models, destroy any existing general-purpose AI model, and permanently ban their development." In answer to the question "Does STOP AI want to ban all AI?", the group's answer is, "Not necessarily, just whatever is necessary to keep humanity alive."
The group, which has held protests outside OpenAI's office and plans another outside the company's San Francisco HQ on February 22, has a bold goal: rally support from 3.5 percent of the U.S. population, or 11 million people. That's the so-called "tipping point" needed for societal change, based on research by political scientist Erica Chenoweth.

"The implications of artificial general intelligence are so immense and dangerous that we just don't want that to come about ever," said Finn van der Velde, an AI safety advocate and activist with a technical background in computer science and AI specifically. "So what that will practically mean is that we will probably need an international treaty where the governments across the board agree that we don't build AGI. And so that means disbanding companies like OpenAI that specifically have the goal to build AGI." It also means regulating compute power so that no one will be able to train an AGI model.
Businesses

OpenAI Plans To Shift Compute Needs From Microsoft To SoftBank (techcrunch.com) 9

According to The Information (paywalled), OpenAI plans to shift most of its computing power from Microsoft to SoftBank-backed Stargate by 2030. TechCrunch reports: That represents a major shift away from Microsoft, OpenAI's biggest shareholder, which fulfills most of the startup's power needs today. The change won't happen overnight. OpenAI still plans to increase its spending on Microsoft-owned data centers in the next few years.

During that time, OpenAI's overall costs are set to grow dramatically. The Information reports that OpenAI projects to burn $20 billion in cash during 2027, far more than the $5 billion it reportedly burned through in 2024. By 2030, OpenAI reportedly forecasts that its costs around running AI models, also known as inference, will outpace what the startup spends on training AI models.

AI

DeepSeek To Share Some AI Model Code (reuters.com) 17

Chinese startup DeepSeek will make its models' code publicly available, it said on Friday, doubling down on its commitment to open-source artificial intelligence. From a report: The company said in a post on social media platform X that it will open source 5 code repositories next week, describing the move as "small but sincere progress" that it will share "with full transparency."

"These humble building blocks in our online service have been documented, deployed and battle-tested in production," the post said. DeepSeek rattled the global AI industry last month when it released its open-source R1 reasoning model, which rivaled Western systems in performance while being developed at a lower cost.
