AI

For Data-Guzzling AI Companies, the Internet Is Too Small (wsj.com) 60

Companies racing to develop more powerful artificial intelligence are rapidly nearing a new problem: The internet might be too small for their plans (non-paywalled link). From a report: Ever more powerful systems developed by OpenAI, Google and others require larger oceans of information to learn from. That demand is straining the available pool of quality public data online at the same time that some data owners are blocking access to AI companies. Some executives and researchers say the industry's need for high-quality text data could outstrip supply within two years, potentially slowing AI's development.

AI companies are hunting for untapped information sources, and rethinking how they train these systems. OpenAI, the maker of ChatGPT, has discussed training its next model, GPT-5, on transcriptions of public YouTube videos, people familiar with the matter said. Companies also are experimenting with using AI-generated, or synthetic, data as training material -- an approach many researchers say could actually cause crippling malfunctions. These efforts are often secret, because executives think solutions could be a competitive advantage.

Data is among several essential AI resources in short supply. The chips needed to run what are called large language models behind ChatGPT, Google's Gemini and other AI bots also are scarce. And industry leaders worry about a dearth of data centers and the electricity needed to power them. AI language models are built using text vacuumed up from the internet, including scientific research, news articles and Wikipedia entries. That material is broken into tokens -- words and parts of words that the models use to learn how to formulate humanlike expressions.
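
That last step is easy to see concretely. Below is a minimal sketch using OpenAI's open-source tiktoken library (our choice for illustration; each lab uses its own tokenizer) showing a sentence becoming integer token IDs and back:

    # Minimal tokenization demo (assumes: pip install tiktoken).
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # a GPT-4-era encoding
    tokens = enc.encode("Tokenization splits text into words and word pieces.")
    print(tokens)                             # integer token IDs
    print([enc.decode([t]) for t in tokens])  # the word/word-piece behind each ID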

The Matrix

'Yes, We're All Trapped in the Matrix Now' (cnn.com) 185

"As you're reading this, you're more likely than not already inside 'The Matrix'," according to a headline on the front page of CNN.com this weekend.

It linked to an opinion piece by Rizwan Virk, founder of MIT's startup incubator/accelerator program. He's now a doctoral researcher at Arizona State University, where his profile identifies him as an "entrepreneur, video game pioneer, film producer, venture capitalist, computer scientist and bestselling author." Virk's 2019 book was titled "The Simulation Hypothesis: An MIT Computer Scientist Shows Why AI, Quantum Physics and Eastern Mystics Agree We Are in a Video Game."

In the decades since [The Matrix was released], this idea, now called the simulation hypothesis, has come to be taken more seriously by technologists, scientists and philosophers. The main reason for this shift is the stunning improvements in computer graphics, virtual and augmented reality (VR and AR) and AI. Taking into account three developments just this year from Apple, Neuralink and OpenAI, I can now confidently state that as you are reading this article, you are more likely than not already inside a computer simulation. This is because the closer our technology gets to being able to build a fully interactive simulation like the Matrix, the more likely it is that someone has already built such a world, and we are simply inside their video game world...

In 2003, Oxford philosopher Nick Bostrom imagined a "technologically mature" civilization could easily create a simulated world. The logic, then, is that if any civilization ever reaches this point, it would create not just one but a very large number of simulations (perhaps billions), each with billions of AI characters, simply by firing up more servers. With simulated worlds far outnumbering the "real" world, the likelihood that we are in a simulation would be significantly higher than not. It was this logic that prompted Elon Musk to state, a few years ago, that the chances that we are not in a simulation (i.e. that we are in base reality) was "one in billions." It's a theory that is difficult to prove — but difficult to disprove as well. Remember, the simulations would be so good that you wouldn't be able to tell the difference between a physical and a simulated world. Either the signals are being beamed directly into your brain, or we are simply AI characters inside the simulation...
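
The counting argument reduces to one line of arithmetic. A toy sketch (our own illustration of the logic above, not code from the article): with N indistinguishable simulated worlds plus one base reality, all equally likely from the inside, the odds of being in base reality are 1/(N+1):

    # Toy version of the Bostrom/Musk counting argument.
    def p_base_reality(n_simulations: int) -> float:
        return 1 / (n_simulations + 1)

    print(p_base_reality(1_000_000_000))  # ~1e-9, i.e. "one in billions"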

Recent developments in Silicon Valley show that we could get to the simulation point very soon. Just this year, Apple released its Vision Pro headset — a mixed-reality (including augmented and virtual reality) device that, if you believe initial reviews (ranging from mildly positive to ecstatic), heralds the beginning of a new era of spatial computing — or the merging of digital and physical worlds... we can see a direct line to being able to render a realistic fictional world around us... Just last month, OpenAI released Sora AI, which can now generate highly realistic videos that are pretty damn difficult to distinguish from real human videos. The fact that AI can so easily fool humans visually as well as through text (and according to some, has already passed the well-known Turing Test) shows that we are not far from fully immersive worlds populated with simulated AI characters that seem (and perhaps even think they are) conscious. Already, millions of humans are chatting with AI characters, and millions of dollars are pouring into making AI characters more realistic. Some of us may be players of the game, who have forgotten that we allowed the signal to be beamed into our brain, while others, like Neo or Morpheus or Trinity in "The Matrix," may have been plugged in at birth...

The fact that we are approaching the simulation point so soon in our future means that the likelihood that we are already inside someone else's advanced simulation goes up exponentially. Like Neo, we would be unable to tell the difference between a simulated and a physical world. Perhaps the most appropriate response to that is another of Reeves' most famous lines from that now-classic sci-fi film: Woah.

The author notes that the idea of being trapped inside a video game already "had been articulated by one of the Wachowskis' heroes, science fiction author Philip K. Dick, who stated, all the way back in 1977, 'We are living in a computer programmed reality.'"

A few years ago, I interviewed Dick's wife Tessa and asked her what he would have thought of "The Matrix." She said his first reaction would have been that he loved it; however, his second reaction would most likely have been to call his agent to see if he could sue the filmmakers for stealing his ideas.
AI

More AI Safeguards Coming, Including Right to Refuse Face-Recognition Scans at US Airports (cnn.com) 23

This week every U.S. agency was ordered to appoint a "chief AI officer."

But that wasn't the only AI policy announced. According to CNN, "By the end of the year, travelers should be able to refuse facial recognition scans at airport security screenings without fear it could delay or jeopardize their travel plans." That's just one of the concrete safeguards governing artificial intelligence that the Biden administration says it's rolling out across the U.S. government, in a key first step toward preventing government abuse of AI. The move could also indirectly regulate the AI industry using the government's own substantial purchasing power... The mandates aim to cover situations ranging from screenings by the Transportation Security Administration to decisions by other agencies affecting Americans' health care, employment and housing. Under the requirements taking effect on December 1, agencies using AI tools will have to verify they do not endanger the rights and safety of the American people. In addition, each agency will have to publish online a complete list of the AI systems it uses and their reasons for using them, along with a risk assessment of those systems...

[B]ecause the government is such a large purchaser of commercial technology, its policies around procurement and use of AI are expected to have a powerful influence on the private sector.

CNN notes that Vice President Harris told reporters that the administration intends for the policies to serve as a global model. "Meanwhile, the European Union this month gave final approval to a first-of-its-kind artificial intelligence law, once again leapfrogging the United States on regulating a critical and disruptive technology."

CNN adds that last year, "the White House announced voluntary commitments by leading AI companies to subject their models to outside safety testing."
AI

AI Hallucinated a Dependency. So a Cybersecurity Researcher Built It as Proof-of-Concept Malware (theregister.com) 44

"Several big businesses have published source code that incorporates a software package previously hallucinated by generative AI," the Register reported Thursday

"Not only that but someone, having spotted this reoccurring hallucination, had turned that made-up dependency into a real one, which was subsequently downloaded and installed thousands of times by developers as a result of the AI's bad advice, we've learned." If the package was laced with actual malware, rather than being a benign test, the results could have been disastrous.

According to Bar Lanyado, security researcher at Lasso Security, one of the businesses fooled by AI into incorporating the package is Alibaba, which at the time of writing still includes a pip command to download the Python package huggingface-cli in its GraphTranslator installation instructions. There is a legit huggingface-cli, installed using pip install -U "huggingface_hub[cli]". But the huggingface-cli distributed via the Python Package Index (PyPI) and required by Alibaba's GraphTranslator — installed using pip install huggingface-cli — is fake, imagined by AI and turned real by Lanyado as an experiment.
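
One partial defense against this kind of attack is to inspect a suggested package's metadata before installing it. A rough sketch using PyPI's public JSON API (a basic sanity check of author, summary and project links, not a guarantee of safety; the package may since have been pulled from the index):

    # Look up a package's PyPI metadata before trusting an AI's suggestion.
    import json
    import urllib.request

    def pypi_metadata(package: str) -> dict:
        url = f"https://pypi.org/pypi/{package}/json"
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    info = pypi_metadata("huggingface-cli")["info"]
    print(info["author"], info["summary"], info["home_page"], sep="\n")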

He created huggingface-cli in December after seeing it repeatedly hallucinated by generative AI; by February this year, Alibaba was referring to it in GraphTranslator's README instructions rather than the real Hugging Face CLI tool... huggingface-cli received more than 15,000 authentic downloads in the three months it has been available... "In addition, we conducted a search on GitHub to determine whether this package was utilized within other companies' repositories," Lanyado said in the write-up for his experiment. "Our findings revealed that several large companies either use or recommend this package in their repositories...."

Lanyado also said that there was a Hugging Face-owned project that incorporated the fake huggingface-cli, but that was removed after he alerted the biz.

"With GPT-4, 24.2 percent of question responses produced hallucinated packages, of which 19.6 percent were repetitive, according to Lanyado..."
Security

'Security Engineering' Author Ross Anderson, Cambridge Professor, Dies at Age 67 (therecord.media) 7

The Record reports: Ross Anderson, a professor of security engineering at the University of Cambridge who is widely recognized for his contributions to computing, passed away at home on Thursday, according to friends and colleagues who have been in touch with his family and the University.

Anderson, who also taught at Edinburgh University, was one of the most respected academic engineers and computer scientists of his generation. His research included machine learning, cryptographic protocols, hardware reverse engineering and breaking ciphers, among other topics. His public achievements include, but are by no means limited to, being awarded the British Computer Society's Lovelace Medal in 2015, and publishing several editions of the Security Engineering textbook.

Anderson's security research made headlines throughout his career, with his name appearing in over a dozen Slashdot stories...

My favorite story? UK Banks Attempt To Censor Academic Publication.

"Cambridge University has resisted the demands and has sent a response to the bankers explaining why they will keep the page online..."


Government

Congress Bans Staff Use of Microsoft's AI Copilot (axios.com) 32

The U.S. House has set a strict ban on congressional staffers' use of Microsoft Copilot, the company's AI-based chatbot, Axios reported Friday. From the report: The House last June restricted staffers' use of ChatGPT, allowing limited use of the paid subscription version while banning the free version. The House's Chief Administrative Officer Catherine Szpindor, in guidance to congressional offices obtained by Axios, said Microsoft Copilot is "unauthorized for House use."

"The Microsoft Copilot application has been deemed by the Office of Cybersecurity to be a risk to users due to the threat of leaking House data to non-House approved cloud services," it said. The guidance added that Copilot "will be removed from and blocked on all House Windows devices."

AI

NYC's Government Chatbot Is Lying About City Laws and Regulations (arstechnica.com) 57

An anonymous reader quotes a report from Ars Technica: NYC's "MyCity" ChatBot was rolled out as a "pilot" program last October. The announcement touted the ChatBot as a way for business owners to "save ... time and money by instantly providing them with actionable and trusted information from more than 2,000 NYC Business web pages and articles on topics such as compliance with codes and regulations, available business incentives, and best practices to avoid violations and fines." But a new report from The Markup and local nonprofit news site The City found the MyCity chatbot giving dangerously wrong information about some pretty basic city policies. To cite just one example, the bot said that NYC buildings "are not required to accept Section 8 vouchers," when an NYC government info page says clearly that Section 8 housing subsidies are one of many lawful sources of income that landlords are required to accept without discrimination. The Markup also received incorrect information in response to chatbot queries regarding worker pay and work hour regulations, as well as industry-specific information like funeral home pricing. Further testing from BlueSky user Kathryn Tewson shows the MyCity chatbot giving some dangerously wrong answers regarding treatment of workplace whistleblowers, as well as some hilariously bad answers regarding the need to pay rent.

MyCity's Microsoft Azure-powered chatbot uses a complex process of statistical associations across millions of tokens to essentially guess at the most likely next word in any given sequence, without any real understanding of the underlying information being conveyed. That can cause problems when a single factual answer to a question might not be reflected precisely in the training data. In fact, The Markup said that at least one of its tests resulted in the correct answer on the same query about accepting Section 8 housing vouchers (even as "ten separate Markup staffers" got the incorrect answer when repeating the same question). The MyCity Chatbot -- which is prominently labeled as a "Beta" product -- does tell users who bother to read the warnings that it "may occasionally produce incorrect, harmful or biased content" and that users should "not rely on its responses as a substitute for professional advice." But the page also states front and center that it is "trained to provide you official NYC Business information" and is being sold as a way "to help business owners navigate government."
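
For readers unfamiliar with that mechanism, here is a toy sketch of the "guess the most likely next word" step: turn the model's candidate scores into probabilities with a softmax, then sample. Real models do this over vocabularies of tens of thousands of subword tokens, not four made-up words:

    # Toy next-token step: softmax over scores, then sample.
    import math, random

    vocab  = ["yes", "no", "maybe", "refuse"]  # stand-in vocabulary
    logits = [2.0, 1.0, 0.5, -1.0]             # made-up model scores

    exps  = [math.exp(x) for x in logits]
    probs = [e / sum(exps) for e in exps]      # softmax -> probabilities

    next_word = random.choices(vocab, weights=probs)[0]
    print(dict(zip(vocab, [round(p, 3) for p in probs])), "->", next_word)
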
NYC Office of Technology and Innovation Spokesperson Leslie Brown told The Markup that the bot "has already provided thousands of people with timely, accurate answers" and that "we will continue to focus on upgrading this tool so that we can better support small businesses across the city."
Businesses

Microsoft, OpenAI Plan $100 Billion 'Stargate' AI Supercomputer (reuters.com) 41

According to The Information (paywalled), Microsoft and OpenAI are planning a $100 billion datacenter project that will include an artificial intelligence supercomputer called "Stargate." Reuters reports: The Information reported that Microsoft would likely be responsible for financing the project, which would be 100 times more costly than some of the biggest current data centers, citing people involved in private conversations about the proposal. OpenAI's next major AI upgrade is expected to land by early next year, the report said, adding that Microsoft executives are looking to launch Stargate as soon as 2028. The proposed U.S.-based supercomputer would be the biggest in a series of installations the companies are looking to build over the next six years, the report added.

The Information attributed the tentative cost of $100 billion to a person who spoke to OpenAI CEO Sam Altman about it and a person who has viewed some of Microsoft's initial cost estimates. It did not identify those sources. Altman and Microsoft employees have planned the supercomputer effort in five phases, with Stargate as the fifth. Microsoft is working on a smaller, fourth-phase supercomputer for OpenAI that it aims to launch around 2026, according to the report. Microsoft and OpenAI are in the middle of the third phase of the five-phase plan, with much of the cost of the next two phases involving procuring the AI chips that are needed, the report said. The proposed efforts could cost in excess of $115 billion, more than three times what Microsoft spent last year on capital expenditures for servers, buildings and other equipment, the report stated.

AI

OpenAI Reveals AI Tool To Recreate Human Voices (axios.com) 24

An anonymous reader quotes a report from Axios: OpenAI said on Friday it's allowed a small number of businesses to test a new tool that can recreate a person's voice from just a 15-second recording. The company said it is taking "a cautious and informed approach" to releasing the program, called Voice Engine, more broadly given the high risk of abuse presented by synthetic voice generators.

Based on the 15-second recording, the program can create an "emotive and realistic" natural-sounding voice that closely resembles the original speaker. This synthetic voice can then be used to read text inputs, even if the text isn't in the original speaker's native language. In one example offered by the company, an English speaker's voice was translated into Spanish, Mandarin, German, French and Japanese while preserving the speaker's native accent.

OpenAI said Voice Engine has so far been used to provide reading assistance to non-readers, translate content and to help people who are non-verbal. It said the program has already been used in its text-to-speech application and its ChatGPT Voice and Read Aloud tool.
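
Voice Engine itself is not publicly available, but OpenAI's existing text-to-speech endpoint (the "text-to-speech application" mentioned above) shows what the developer-facing side of synthetic speech looks like today, using a stock preset voice rather than a cloned one. A minimal sketch, assuming the openai Python package and an API key in the environment:

    # Existing OpenAI TTS endpoint -- NOT Voice Engine -- with a preset voice.
    from openai import OpenAI  # pip install openai; needs OPENAI_API_KEY

    client = OpenAI()
    speech = client.audio.speech.create(
        model="tts-1",   # existing text-to-speech model
        voice="alloy",   # one of the stock preset voices
        input="Hello! This sentence was synthesized from text.",
    )
    speech.stream_to_file("speech.mp3")  # save the generated audio
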
"We hope to start a dialogue on the responsible deployment of synthetic voices, and how society can adapt to these new capabilities," the company said. "Based on these conversations and the results of these small scale tests, we will make a more informed decision about whether and how to deploy this technology at scale."
AI

Hillary Clinton, Election Officials Warn AI Could Threaten Elections (wsj.com) 255

Hillary Clinton and U.S. election officials said they are concerned disinformation generated and spread by AI could threaten the 2024 presidential election [non-paywalled link]. WSJ: Clinton, a former secretary of state and 2016 presidential candidate, said she thinks foreign actors like Russian President Vladimir Putin could use AI to interfere in elections in the U.S. and elsewhere. Dozens of countries are running elections this year. "Anybody who's not worried is not paying attention," Clinton said Thursday at Columbia University, where election officials and tech executives discussed how AI could impact global elections.

She added: "It could only be a very small handful of people in St. Petersburg or Moldova or wherever they are right now who are lighting the fire, but because of the algorithms everyone gets burned." Clinton said Putin tried to undermine her before the 2016 election by spreading disinformation on Facebook, Twitter and Snapchat about "all these terrible things" she purportedly did. "I don't think any of us understood it," she said. "I did not understand it. I can tell you my campaign did not understand it. The so-called dark web was filled with these kinds of memes and stories and videos of all sorts portraying me in all kinds of less than flattering ways." Clinton added: "What they did to me was primitive and what we're talking about now is the leap in technology."

AI

Larry Summers, Now an OpenAI Board Member, Thinks AI Could Replace 'Almost All' Forms of Labor (fortune.com) 126

Former U.S. Treasury Secretary Larry Summers, now on the board of ChatGPT developer OpenAI, believes that while AI has the potential to revolutionize the economy, its impact will take time to materialize. However, Summers maintains that AI could be the biggest economic development since the Industrial Revolution, eventually replacing most forms of human labor, particularly cognitive tasks. He said: "If one takes a view over the next generation, this could be the biggest thing that has happened in economic history since the Industrial Revolution. This offers the prospect of not replacing some forms of human labor, but almost all forms of human labor."

From building homes to making medical diagnoses, Summers predicted that AI will eventually be able to do nearly every human job, particularly white-collar workers' "cognitive labor."

Cloud

Cloud Server Host Vultr Rips User Data Ownership Clause From ToS After Web Outage (theregister.com) 28

Tobias Mann reports via The Register: Cloud server provider Vultr has rapidly revised its terms-of-service after netizens raised the alarm over broad clauses that demanded the "perpetual, irrevocable, royalty-free" rights to customer "content." The red tape was updated in January, as captured by the Internet Archive, and this month users were asked to agree to the changes by a pop-up that appeared when using their web-based Vultr control panel. That prompted folks to look through the terms, and there they found clauses granting the US outfit a "worldwide license ... to use, reproduce, process, adapt ... modify, prepare derivative works, publish, transmit, and distribute" user content.

It turned out these demands have been in place since before the January update; customers have only just noticed them now. Given Vultr hosts servers and storage in the cloud for its subscribers, some feared the biz was giving itself way too much ownership over their stuff, all in this age of AI training data being put up for sale by platforms. In response to online outcry, largely stemming from Reddit, Vultr in the past few hours rewrote its ToS to delete those asserted content rights. CEO J.J. Kardwell told The Register earlier today it's a case of standard legal boilerplate being taken out of context. The clauses were supposed to apply to customer forum posts, rather than private server content, and while, yes, the terms make more sense with that in mind, one might argue the legalese was overly broad in any case.

"We do not use user data," Kardwell stressed to us. "We never have, and we never will. We take privacy and security very seriously. It's at the core of what we do globally." [...] According to Kardwell, the content clauses are entirely separate to user data deployed in its cloud, and are more aimed at one's use of the Vultr website, emphasizing the last line of the relevant fine print: "... for purposes of providing the services to you." He also pointed out that the wording has been that way for some time, and added the prompt asking users to agree to an updated ToS was actually spurred by unrelated Microsoft licensing changes. In light of the controversy, Vultr vowed to remove the above section to "simplify and further clarify" its ToS, and has indeed done so. In a separate statement, the biz told The Register the removal will be followed by a full review and update to its terms of service.
"It's clearly causing confusion for some portion of users. We recognize that the average user doesn't have a law degree," Kardwell added. "We're very focused on being responsive to the community and the concerns people have and we believe the strongest thing we can do to demonstrate that there is no bad intent here is to remove it."
AI

Meta Is Adding AI To Its Ray-Ban Smart Glasses 23

Starting next month, Meta's Ray-Ban smart glasses will support multimodal AI features to perform translation, along with object, animal, and monument identification. The Verge reports: Users can activate the glasses' smart assistant by saying "Hey Meta," and then saying a prompt or asking a question. It will then respond through the speakers built into the frames. The NYT offers a glimpse at how well Meta's AI works when taking the glasses for a spin in a grocery store, while driving, at museums, and even at the zoo.

Although Meta's AI was able to correctly identify pets and artwork, it didn't get things right 100 percent of the time. The NYT found that the glasses struggled to identify zoo animals that were far away and behind cages. It also didn't properly identify an exotic fruit, called a cherimoya, after multiple tries. As for AI translations, the NYT found that the glasses support English, Spanish, Italian, French, and German.
Government

Biden Orders Every US Agency To Appoint a Chief AI Officer 48

An anonymous reader quotes a report from Ars Technica: The White House has announced the "first government-wide policy (PDF) to mitigate risks of artificial intelligence (AI) and harness its benefits." To coordinate these efforts, every federal agency must appoint a chief AI officer with "significant expertise in AI." Some agencies have already appointed chief AI officers, but any agency that has not must appoint a senior official over the next 60 days. If an official already appointed as a chief AI officer does not have the necessary authority to coordinate AI use in the agency, they must be granted additional authority or else a new chief AI officer must be named.

Ideal candidates, the White House recommended, might include chief information officers, chief data officers, or chief technology officers, the Office of Management and Budget (OMB) policy said. As chief AI officers, appointees will serve as senior advisers on AI initiatives, monitoring and inventorying all agency uses of AI. They must conduct risk assessments to consider whether any AI uses are impacting "safety, security, civil rights, civil liberties, privacy, democratic values, human rights, equal opportunities, worker well-being, access to critical resources and services, agency trust and credibility, and market competition," OMB said. Perhaps most urgently, by December 1, the officers must correct all non-compliant AI uses in government, unless an extension of up to one year is granted.

The chief AI officers will seemingly enjoy a lot of power and oversight over how the government uses AI. It's up to the chief AI officers to develop a plan to comply with minimum safety standards and to work with chief financial and human resource officers to develop the necessary budgets and workforces to use AI to further each agency's mission and ensure "equitable outcomes," OMB said. [...] Among the chief AI officer's primary responsibilities is determining what AI uses might impact the safety or rights of US citizens. They'll do this by assessing AI impacts, conducting real-world tests, independently evaluating AI, regularly evaluating risks, properly training staff, providing additional human oversight where necessary, and giving public notice of any AI use that could have a "significant impact on rights or safety," OMB said. Chief AI officers will ultimately decide if any AI use is safety- or rights-impacting and must adhere to OMB's minimum standards for responsible AI use. Once a determination is made, the officers will "centrally track" the determinations, informing OMB of any major changes to "conditions or context in which the AI is used." The officers will also regularly convene "a new Chief AI Officer Council to coordinate" efforts and share innovations government-wide.

Chief AI officers must consult with the public and maintain options to opt out of "AI-enabled decisions," OMB said. However, these chief AI officers also have the power to waive opt-out options "if they can demonstrate that a human alternative would result in a service that is less fair (e.g., produces a disparate impact on protected classes) or if an opt-out would impose undue hardship on the agency."
AI

AI Leaders Press Advantage With Congress as China Tensions Rise (nytimes.com) 14

Silicon Valley chiefs are swarming the Capitol to try to sway lawmakers on the dangers of falling behind in the AI race. From a report: In recent weeks, American lawmakers have moved to ban the Chinese-owned app TikTok. President Biden reinforced his commitment to overcome China's rise in tech. And the Chinese government added chips from Intel and AMD to a blacklist of imports. Now, as the tech and economic cold war between the United States and China accelerates, Silicon Valley's leaders are capitalizing on the strife with a lobbying push for their interests in another promising field of technology: artificial intelligence.

On May 1, more than 100 tech chiefs and investors, including Alex Karp, the head of the defense contractor Palantir, and Roelof Botha, the managing partner of the venture capital firm Sequoia Capital, will come to Washington for a daylong conference and private dinner focused on drumming up more hawkishness toward China's progress in A.I. Dozens of lawmakers, including Speaker Mike Johnson, Republican of Louisiana, will also attend the event, the Hill & Valley Forum, which will include fireside chats and keynote discussions with members of a new House A.I. task force.

Tech executives plan to use the event to directly lobby against A.I. regulations that they consider onerous, as well as ask for more government spending on the technology and research to support its development. They also plan to push for relaxed immigration restrictions to bring more A.I. experts to the United States. The event highlights an unusual area of agreement between Washington and Silicon Valley, which have long clashed on topics like data privacy, children's online protections and even China.

Software

'Software Vendors Dump Open Source, Go For the Cash Grab' (computerworld.com) 120

Steven J. Vaughan-Nichols, writing for ComputerWorld: Essentially, all software is built using open source. By Synopsys' count, 96% of all codebases contain open-source software. Lately, though, there's been a very disturbing trend. A company will make its program using open source, make millions from it, and then -- and only then -- switch licenses, leaving their contributors, customers, and partners in the lurch as they try to grab billions. I'm sick of it. The latest IT melodrama baddie is Redis. Its program, which goes by the same name, is an extremely popular in-memory database. (Unless you're a developer, chances are you've never heard of it.) One recent valuation shows Redis to be worth about $2 billion -- even without an AI play! That, anyone can understand.

What did it do? To quote Redis: "Beginning today, all future versions of Redis will be released with source-available licenses. Starting with Redis 7.4, Redis will be dual-licensed under the Redis Source Available License (RSALv2) and Server Side Public License (SSPLv1). Consequently, Redis will no longer be distributed under the three-clause Berkeley Software Distribution (BSD)." For those of you who aren't open-source licensing experts, this means developers can no longer use Redis' code. Sure, they can look at it, but they can't export, borrow from, or touch it.

Redis pulled this same kind of trick in 2018 with some of its subsidiary code. Now it's done so with the company's crown jewels. Redis is far from the only company to make such a move. Last year, HashiCorp dumped its main program Terraform's Mozilla Public License (MPL) for the Business Source License (BSL) 1.1. Here, the name of the new license game is to prevent anyone from competing with Terraform. Would it surprise you to learn that not long after this, HashiCorp started shopping itself around for a buyer? Before this latest round of license changes, MongoDB and Elastic made similar shifts. Again, you might never have heard of these companies or their programs, but each is worth, at a minimum, hundreds of millions of dollars. And, while you might not know it, if your company uses cloud services behind the scenes, chances are you're using one or more of their programs.

AI

Claude 3 Surpasses GPT-4 on Chatbot Arena For the First Time (arstechnica.com) 19

Anthropic's recently released Claude 3 Opus large language model has beaten OpenAI's GPT-4 for the first time on Chatbot Arena, a popular crowdsourced leaderboard used by AI researchers to gauge the relative capabilities of AI language models. A report adds: "The king is dead," tweeted software developer Nick Dobos in a post comparing GPT-4 Turbo and Claude 3 Opus that has been making the rounds on social media. "RIP GPT-4."

Since GPT-4 was included in Chatbot Arena around May 10, 2023 (the leaderboard launched May 3 of that year), variations of GPT-4 have consistently been on the top of the chart until now, so its defeat in the Arena is a notable moment in the relatively short history of AI language models. One of Anthropic's smaller models, Haiku, has also been turning heads with its performance on the leaderboard.
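
Chatbot Arena ranks models from crowdsourced pairwise votes using an Elo-style rating system. A minimal sketch of one rating update (the textbook Elo formula, simplified from whatever the leaderboard actually runs):

    # One Elo-style rating update after a single A-vs-B battle.
    def elo_update(r_a: float, r_b: float, a_won: bool, k: float = 32.0):
        expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
        delta = k * ((1.0 if a_won else 0.0) - expected_a)
        return r_a + delta, r_b - delta

    # A lower-rated challenger beating the leader gains the most points.
    print(elo_update(1200, 1250, a_won=True))  # -> (~1218.3, ~1231.7)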

"For the first time, the best available models -- Opus for advanced tasks, Haiku for cost and efficiency -- are from a vendor that isn't OpenAI," independent AI researcher Simon Willison told Ars Technica. "That's reassuring -- we all benefit from a diversity of top vendors in this space. But GPT-4 is over a year old at this point, and it took that year for anyone else to catch up." Chatbot Arena is run by Large Model Systems Organization (LMSYS ORG), a research organization dedicated to open models that operates as a collaboration between students and faculty at University of California, Berkeley, UC San Diego, and Carnegie Mellon University.

Cloud

Amazon Bets $150 Billion on Data Centers Required for AI Boom (yahoo.com) 26

Amazon plans to spend almost $150 billion in the coming 15 years on data centers, giving the cloud-computing giant the firepower to handle an expected explosion in demand for artificial intelligence applications and other digital services. From a report: The spending spree is a show of force as the company looks to maintain its grip on the cloud services market, where it holds about twice the share of No. 2 player Microsoft. Sales growth at Amazon Web Services slowed to a record low last year as business customers cut costs and delayed modernization projects. Now spending is starting to pick up again, and Amazon is keen to secure land and electricity for its power-hungry facilities.

"We're expanding capacity quite significantly," said Kevin Miller, an AWS vice president who oversees the company's data centers. "I think that just gives us the ability to get closer to customers." Over the past two years, according to a Bloomberg tally, Amazon has committed to spending $148 billion to build and operate data centers around the world. The company plans to expand existing server farm hubs in northern Virginia and Oregon as well as push into new precincts, including Mississippi, Saudi Arabia and Malaysia.

Businesses

Why the US Could Be On the Cusp of a Productivity Boom 129

Neil Irwin reports via Axios: The dearth of productivity growth over the last couple of decades has held back incomes in the U.S. and other rich countries, according to a report out Wednesday from the McKinsey Global Institute, the research arm of the global consultancy. Productivity growth has been weak in the U.S. and Western Europe since the 2008 global financial crisis, but things looked better among many emerging markets. The McKinsey report finds that global labor productivity growth was 2.3% a year from 1997 to 2022, a rapid rate that has increased incomes and quality of life in large parts of the world. China and India account for the largest portion of that surge -- half of overall global productivity improvement, with other emerging markets accounting for another 25%, led by Central and Eastern Europe and emerging Asian economies.
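
As a quick compounding check (our own arithmetic, not the report's), 2.3% a year sustained over that 25-year span multiplies output per worker by about 1.77x:

    # What 2.3%/year compounds to over 1997-2022.
    years = 2022 - 1997                         # 25 years
    factor = 1.023 ** years
    print(f"{factor:.2f}x over {years} years")  # -> about 1.77x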

In the U.S., the report finds that the decline in capital investment following the 2008 financial crisis left per-capita GDP in 2022 roughly $4,500 lower than it would have been had pre-crisis trends continued. Rapid advances in manufacturing technology, especially for electronics, petered out in the same time period, subtracting another $5,000 from per-capita GDP. "Digitization was much discussed as the main candidate to rev up productivity again, but its impact failed to spread beyond" the tech sector, the authors write. The authors are optimistic that a confluence of factors will make the years ahead different.

The rise in global interest rates and inflation is evidence of stronger global demand. Many countries are experiencing labor shortages that may incentivize more productivity-enhancing investment. And artificial intelligence and related technologies create big opportunities. "Inflationary pressure and rising interest rates could be signs that we are leaving behind secular stagnation and entering an era of higher demand and investment," the report finds. "In corporate boardrooms around the world right now, there's a tremendous amount of conversation associated with [generative] AI, and I think there's a broad acknowledgment that this could very much transform productivity at the company level," Olivia White, a McKinsey senior partner and co-author of the report, tells Axios. "Another thing that's happening right now is the conversation about labor. Labor markets in all advanced economies, and the U.S. is really sort of top of the heap, are very, very tight right now. So there's a lot of conversation around what do we do to make the people that we have as productive as they can be?"
AI

Amazon Spends $2.75 Billion on AI Startup Anthropic in Its Largest Venture Investment Yet (cnbc.com) 10

Amazon is making its largest outside investment in its three-decade history as it looks to gain an edge in the AI race. From a report: The tech giant said it will spend another $2.75 billion backing Anthropic, a San Francisco-based startup that's widely viewed as a frontrunner in generative artificial intelligence. Its foundation model and chatbot Claude competes with OpenAI and ChatGPT. The companies announced an initial $1.25 billion investment in September, and said at the time that Amazon would invest up to $4 billion. Wednesday's news marks Amazon's second tranche of that funding.

Amazon will maintain a minority stake in the company and won't have an Anthropic board seat, the company said. The deal was struck at the AI startup's last valuation, which was $18.4 billion, according to a source. Over the past year, Anthropic closed five different funding deals worth about $7.3 billion -- and with the new Amazon investment, the total exceeds $10 billion. The company's product directly competes with OpenAI's ChatGPT in both the enterprise and consumer worlds, and it was founded by ex-OpenAI research executives and employees.
