×
Graphics

Microsoft Paint Is Getting an AI-Powered Image Generator (engadget.com) 39

Microsoft Paint is getting a new image generator tool called Cocreator that can generate images based on text prompts and doodles. Engadget reports: During a demo at its Surface event, the company showed off how Cocreator combines your own drawings with text prompts to create an image. There's also a "creativity slider" that allows you to control how much you want AI to take over compared with your original art. As Microsoft pointed out, the combination of text prompts and your own brush strokes enables faster edits. It could also help provide a more precise rendering than what you'd be able to achieve with DALL-E or another text-to-image generator alone.
Microsoft

Microsoft Launches Arm-Powered Surface Laptop (theverge.com) 26

Microsoft today launched its new Surface Laptop, featuring Qualcomm's Snapdragon X Elite or Plus chips, aiming to compete with Apple's powerful and efficient MacBook laptops. The Surface Laptop, available for preorder starting at $999.99, boasts up to 22 hours of battery life, a haptic touchpad, and support for three external 4K monitors. Microsoft claims the device is 80% faster than its predecessor and comes with AI features powered by its Copilot technology.
AI

With Recall, Microsoft is Using AI To Fix Windows' Eternally Broken Search 95

Microsoft today unveiled Recall, a new AI-powered feature for Windows 11 PCs, at its Build 2024 conference. Recall aims to improve local searches by making them as efficient as web searches, allowing users to quickly retrieve anything they've seen on their PC. Using voice commands and contextual clues, Recall can find specific emails, documents, chat threads, and even PowerPoint slides.

The feature uses semantic associations to make connections, as demonstrated by Microsoft Product Manager Caroline Hernandez, who searched for a blue dress and refined the query with specific details. Microsoft said that Recall's processing is done locally, ensuring data privacy and security. The feature utilizes over 40 local multi-modal small language models to recognize text, images, and video.
AI

OpenAI Says Sky Voice in ChatGPT Will Be Paused After Concerns It Sounds Too Much Like Scarlett Johansson (tomsguide.com) 54

OpenAI is pausing the use of the popular Sky voice in ChatGPT over concerns it sounds too much like the "Her" actress Scarlett Johansson. From a report: The company says the voices in ChatGPT were from paid voice actors. A final five were selected from an initial pool of 400 and it's purely a coincidence the unnamed actress behind the Sky voice has a similar tone to Johansson. Voice is about to become more prominent for OpenAI as it begins to roll out a new GPT-4o model into ChatGPT. With it will come an entirely new conversational interface where users can talk in real-time to a natural-sounding and emotion-mimicking AI.

While the Sky voice and a version of ChatGPT Voice have been around for some time, the comparison to Johansson became more obvious due to OpenAI CEO Sam Altman, and many others, drawing the similarity between the new AI model and the movie "Her". In "Her," Scarlett Johansson voices an advanced AI operating system named Samantha, who develops a romantic relationship with a lonely writer played by Joaquin Phoenix. With its ability to mimic emotional responses, the parallels from GPT-4o were obvious.

Supercomputing

Linux Foundation Announces Launch of 'High Performance Software Foundation' (linuxfoundation.org) 4

This week the nonprofit Linux Foundation announced the launch of the High Performance Software Foundation, which "aims to build, promote, and advance a portable core software stack for high performance computing" (or HPC) by "increasing adoption, lowering barriers to contribution, and supporting development efforts."

It promises initiatives focused on "continuously built, turnkey software stacks," as well as other initiatives including architecture support and performance regression testing. Its first open source technical projects are:

- Spack: the HPC package manager.

- Kokkos: a performance-portable programming model for writing modern C++ applications in a hardware-agnostic way.

- Viskores (formerly VTK-m): a toolkit of scientific visualization algorithms for accelerator architectures.

- HPCToolkit: performance measurement and analysis tools for computers ranging from desktop systems to GPU-accelerated supercomputers.

- Apptainer: Formerly known as Singularity, Apptainer is a Linux Foundation project providing a high performance, full featured HPC and computing optimized container subsystem.

- E4S: a curated, hardened distribution of scientific software packages.

As use of HPC becomes ubiquitous in scientific computing and digital engineering, and AI use cases multiply, more and more data centers deploy GPUs and other compute accelerators. The High Performance Software Foundation will provide a neutral space for pivotal projects in the high performance computing ecosystem, enabling industry, academia, and government entities to collaborate on the scientific software.

The High Performance Software Foundation benefits from strong support across the HPC landscape, including Premier Members Amazon Web Services (AWS), Hewlett Packard Enterprise, Lawrence Livermore National Laboratory, and Sandia National Laboratories; General Members AMD, Argonne National Laboratory, Intel, Kitware, Los Alamos National Laboratory, NVIDIA, and Oak Ridge National Laboratory; and Associate Members University of Maryland, University of Oregon, and Centre for Development of Advanced Computing.

In a statement, an AMD vice president said that by joining "we are using our collective hardware and software expertise to help develop a portable, open-source software stack for high-performance computing across industry, academia, and government." And an AWS executive said the high-performance computing community "has a long history of innovation being driven by open source projects. AWS is thrilled to join the High Performance Software Foundation to build on this work. In particular, AWS has been deeply involved in contributing upstream to Spack, and we're looking forward to working with the HPSF to sustain and accelerate the growth of key HPC projects so everyone can benefit."

The new foundation will "set up a technical advisory committee to manage working groups tackling a variety of HPC topics," according to the announcement, following a governance model based on the Cloud Native Computing Foundation.
China

China Uses Giant Rail Gun to Shoot a Smart Bomb Nine Miles Into the Sky (futurism.com) 128

"China's navy has apparently tested out a hypersonic rail gun," reports Futurism, describing it as "basically a device that uses a series of electromagnets to accelerate a projectile to incredible speeds."

But "during a demonstration of its power, things didn't go quite as planned." As the South China Morning Post reports, the rail gun test lobbed a precision-guided projectile — or smart bomb — nine miles into the stratosphere. But because it apparently didn't go up as high as it was supposed to, the test was ultimately declared unsuccessful. This conclusion came after an analysis led by Naval Engineering University professor Lu Junyong, whose team found with the help of AI that even though the winged smart bomb exceeded Mach 5 speeds, it didn't perform as well as it could have. This occurred, as Lu's team found, because the projectile was spinning too fast during its ascent, resulting in an "undesirable tilt."
But what's more interesting is the project itself. "Successful or not, news of the test is a pretty big deal given that it was just a few months ago that reports emerged about China's other proposed super-powered rail gun, which is intended to send astronauts on a Boeing 737-size ship into space.... which for the record did not make it all the way to space..." Chinese officials, meanwhile, are paying lip service to the hypersonic rail gun technology's potential to revolutionize civilian travel by creating even faster railways and consumer space launches, too.
Japan and France also have railgun projects, according to a recent article from Defense One. "Yet the nation that has demonstrated the most continuing interest is China," with records of railgun work dating back as far as 2011: The Chinese team claimed that their railgun can fire a projectile 100 to 200 kilometers at Mach 6. Perhaps most importantly, it uses up to 100,000 AI-enabled sensors to identify and fix any problems before critical failure, and can slowly improve itself over time. This, they said, had enabled them to test-fire 120 rounds in a row without failure, which, if true, suggests that they solved a longstanding problem that reportedly bedeviled U.S. researchers. However, the team still has a ways to go before mounting an operational railgun on a ship; according to one Chinese article, the projectiles fired were only 25mm caliber, well below the size of even lightweight naval artillery.

As with many other Chinese defense technology programs, much remains opaque about the program...

While railguns tend to get the headlines, this lab has made advances in a wide range of electric and electromagnetic applications for the PLA Navy's warships. For example, the lab's research on electromagnetic launch technology has also been applied to the development of electromagnetic catapults for the PLAN's growing aircraft carrier fleet...

While it remains to be seen whether the Chinese navy can develop a full-scale railgun, produce it at scale, and integrate it onto its warships, it is obvious that it has made steady advances in recent years on a technology of immense military significance that the US has abandoned.

Thanks to long-time Slashdot reader Tangential for sharing the news.
AI

AI 'Godfather' Geoffrey Hinton: If AI Takes Jobs We'll Need Universal Basic Income (bbc.com) 250

"The computer scientist regarded as the 'godfather of artificial intelligence' says the government will have to establish a universal basic income to deal with the impact of AI on inequality," reports the BBC: Professor Geoffrey Hinton told BBC Newsnight that a benefits reform giving fixed amounts of cash to every citizen would be needed because he was "very worried about AI taking lots of mundane jobs".

"I was consulted by people in Downing Street and I advised them that universal basic income was a good idea," he said. He said while he felt AI would increase productivity and wealth, the money would go to the rich "and not the people whose jobs get lost and that's going to be very bad for society".

"Until last year he worked at Google, but left the tech giant so he could talk more freely about the dangers from unregulated AI," according to the article. Professor Hinton also made this predicction to the BBC. "My guess is in between five and 20 years from now there's a probability of half that we'll have to confront the problem of AI trying to take over".

He recommended a prohibition on the military use of AI, warning that currently "in terms of military uses I think there's going to be a race".
The Military

Robot Dogs Armed With AI-aimed Rifles Undergo US Marines Special Ops Evaluation (arstechnica.com) 74

Long-time Slashdot reader SonicSpike shared this report from Ars Technica: The United States Marine Forces Special Operations Command (MARSOC) is currently evaluating a new generation of robotic "dogs" developed by Ghost Robotics, with the potential to be equipped with gun systems from defense tech company Onyx Industries, reports The War Zone.

While MARSOC is testing Ghost Robotics' quadrupedal unmanned ground vehicles (called "Q-UGVs" for short) for various applications, including reconnaissance and surveillance, it's the possibility of arming them with weapons for remote engagement that may draw the most attention. But it's not unprecedented: The US Marine Corps has also tested robotic dogs armed with rocket launchers in the past.

MARSOC is currently in possession of two armed Q-UGVs undergoing testing, as confirmed by Onyx Industries staff, and their gun systems are based on Onyx's SENTRY remote weapon system (RWS), which features an AI-enabled digital imaging system and can automatically detect and track people, drones, or vehicles, reporting potential targets to a remote human operator that could be located anywhere in the world. The system maintains a human-in-the-loop control for fire decisions, and it cannot decide to fire autonomously. On LinkedIn, Onyx Industries shared a video of a similar system in action.

In a statement to The War Zone, MARSOC states that weaponized payloads are just one of many use cases being evaluated. MARSOC also clarifies that comments made by Onyx Industries to The War Zone regarding the capabilities and deployment of these armed robot dogs "should not be construed as a capability or a singular interest in one of many use cases during an evaluation."

Government

Are AI-Generated Search Results Still Protected by Section 230? (msn.com) 62

Starting this week millions will see AI-generated answers in Google's search results by default. But the announcement Tuesday at Google's annual developer conference suggests a future that's "not without its risks, both to users and to Google itself," argues the Washington Post: For years, Google has been shielded for liability for linking users to bad, harmful or illegal information by Section 230 of the Communications Decency Act. But legal experts say that shield probably won't apply when its AI answers search questions directly. "As we all know, generative AIs hallucinate," said James Grimmelmann, professor of digital and information law at Cornell Law School and Cornell Tech. "So when Google uses a generative AI to summarize what webpages say, and the AI gets it wrong, Google is now the source of the harmful information," rather than just the distributor of it...

Adam Thierer, senior fellow at the nonprofit free-market think tank R Street, worries that innovation could be throttled if Congress doesn't extend Section 230 to cover AI tools. "As AI is integrated into more consumer-facing products, the ambiguity about liability will haunt developers and investors," he predicted. "It is particularly problematic for small AI firms and open-source AI developers, who could be decimated as frivolous legal claims accumulate." But John Bergmayer, legal director for the digital rights nonprofit Public Knowledge, said there are real concerns that AI answers could spell doom for many of the publishers and creators that rely on search traffic to survive — and which AI, in turn, relies on for credible information. From that standpoint, he said, a liability regime that incentivizes search engines to continue sending users to third-party websites might be "a really good outcome."

Meanwhile, some lawmakers are looking to ditch Section 230 altogether. [Last] Sunday, the top Democrat and Republican on the House Energy and Commerce Committee released a draft of a bill that would sunset the statute within 18 months, giving Congress time to craft a new liability framework in its place. In a Wall Street Journal op-ed, Reps. Cathy McMorris Rodgers (R-Wash.) and Frank Pallone Jr. (D-N.J.) argued that the law, which helped pave the way for social media and the modern internet, has "outlived its usefulness."

The tech industry trade group NetChoice [which includes Google, Meta, X, and Amazon] fired back on Monday that scrapping Section 230 would "decimate small tech" and "discourage free speech online."

The digital law professor points out Google has traditionally escaped legal liability by attributing its answers to specific sources — but it's not just Google that has to worry about the issue. The article notes that Microsoft's Bing search engine also supplies AI-generated answers (from Microsoft's Copilot). "And Meta recently replaced the search bar in Facebook, Instagram and WhatsApp with its own AI chatbot."

The article also note sthat several U.S. Congressional committees are considering "a bevy" of AI bills...
AI

Cruise Reached an $8M+ Settlement With the Person Dragged Under Its Robotaxi (ocregister.com) 54

Bloomberg reports that self-driving car company Cruise "reached an $8 million to $12 million settlement with a pedestrian who was dragged by one of its self-driving vehicles in San Francisco, according to a person familiar with the situation." The settlement was struck earlier this year and the woman is out of the hospital, said the person, who declined to be identified discussing a private matter. In the October incident, the pedestrian crossing the road was struck by another vehicle before landing in front of one of GM's Cruise vehicles. The robotaxi braked hard but ran over the person. It then pulled over for safety, driving 20 feet at a speed of up to seven miles per hour with the pedestrian still under the car.
The incident "contributed to the company being blocked from operating in San Francisco and halting its operations around the country for months," reports the Washington Post: The company initially told reporters that the car had stopped just after rolling over the pedestrian, but the California Public Utilities Commission, which regulates permits for self-driving cars, later said Cruise had covered up the truth that its car actually kept going and dragged the woman. The crash and the questions about what Cruise knew and disclosed to investigators led to a firestorm of scrutiny on the company. Cruise pulled its vehicles off roads countrywide, laid off a quarter of its staff and in November its CEO Kyle Vogt stepped down. The Department of Justice and the Securities and Exchange Commission are investigating the company, adding to a probe from the National Highway Traffic Safety Administration.

In Cruise's absence, Google's Waymo self-driving cars have become the only robotaxis operating in San Francisco.

in June, the company's president and chief technology officer Mohamed Elshenawy is slated to speak at a conference on artificial-intelligence quality in San Francisco.

Dow Jones news services published this quote from a Cruise spokesperson. "The hearts of all Cruise employees continue to be with the pedestrian, and we hope for her continued recovery."
AI

Bruce Schneier Reminds LLM Engineers About the Risks of Prompt Injection Vulnerabilities (schneier.com) 40

Security professional Bruce Schneier argues that large language models have the same vulnerability as phones in the 1970s exploited by John Draper.

"Data and control used the same channel," Schneier writes in Communications of the ACM. "That is, the commands that told the phone switch what to do were sent along the same path as voices." Other forms of prompt injection involve the LLM receiving malicious instructions in its training data. Another example hides secret commands in Web pages. Any LLM application that processes emails or Web pages is vulnerable. Attackers can embed malicious commands in images and videos, so any system that processes those is vulnerable. Any LLM application that interacts with untrusted users — think of a chatbot embedded in a website — will be vulnerable to attack. It's hard to think of an LLM application that isn't vulnerable in some way.

Individual attacks are easy to prevent once discovered and publicized, but there are an infinite number of them and no way to block them as a class. The real problem here is the same one that plagued the pre-SS7 phone network: the commingling of data and commands. As long as the data — whether it be training data, text prompts, or other input into the LLM — is mixed up with the commands that tell the LLM what to do, the system will be vulnerable. But unlike the phone system, we can't separate an LLM's data from its commands. One of the enormously powerful features of an LLM is that the data affects the code. We want the system to modify its operation when it gets new training data. We want it to change the way it works based on the commands we give it. The fact that LLMs self-modify based on their input data is a feature, not a bug. And it's the very thing that enables prompt injection.

Like the old phone system, defenses are likely to be piecemeal. We're getting better at creating LLMs that are resistant to these attacks. We're building systems that clean up inputs, both by recognizing known prompt-injection attacks and training other LLMs to try to recognize what those attacks look like. (Although now you have to secure that other LLM from prompt-injection attacks.) In some cases, we can use access-control mechanisms and other Internet security systems to limit who can access the LLM and what the LLM can do. This will limit how much we can trust them. Can you ever trust an LLM email assistant if it can be tricked into doing something it shouldn't do? Can you ever trust a generative-AI traffic-detection video system if someone can hold up a carefully worded sign and convince it to not notice a particular license plate — and then forget that it ever saw the sign...?

Someday, some AI researcher will figure out how to separate the data and control paths. Until then, though, we're going to have to think carefully about using LLMs in potentially adversarial situations...like, say, on the Internet.

Schneier urges engineers to balance the risks of generative AI with the powers it brings. "Using them for everything is easier than taking the time to figure out what sort of specialized AI is optimized for the task.

"But generative AI comes with a lot of security baggage — in the form of prompt-injection attacks and other security risks. We need to take a more nuanced view of AI systems, their uses, their own particular risks, and their costs vs. benefits."
AI

'Openwashing' 36

An anonymous reader quotes a report from The New York Times: There's a big debate in the tech world over whether artificial intelligence models should be "open source." Elon Musk, who helped found OpenAI in 2015, sued the startup and its chief executive, Sam Altman, on claims that the company had diverged from its mission of openness. The Biden administration is investigating the risks and benefits of open source models. Proponents of open source A.I. models say they're more equitable and safer for society, while detractors say they are more likely to be abused for malicious intent. One big hiccup in the debate? There's no agreed-upon definition of what open source A.I. actually means. And some are accusing A.I. companies of "openwashing" -- using the "open source" term disingenuously to make themselves look good. (Accusations of openwashing have previously been aimed at coding projects that used the open source label too loosely.)

In a blog post on Open Future, a European think tank supporting open sourcing, Alek Tarkowski wrote, "As the rules get written, one challenge is building sufficient guardrails against corporations' attempts at 'openwashing.'" Last month the Linux Foundation, a nonprofit that supports open-source software projects, cautioned that "this 'openwashing' trend threatens to undermine the very premise of openness -- the free sharing of knowledge to enable inspection, replication and collective advancement." Organizations that apply the label to their models may be taking very different approaches to openness. [...]

The main reason is that while open source software allows anyone to replicate or modify it, building an A.I. model requires much more than code. Only a handful of companies can fund the computing power and data curation required. That's why some experts say labeling any A.I. as "open source" is at best misleading and at worst a marketing tool. "Even maximally open A.I. systems do not allow open access to the resources necessary to 'democratize' access to A.I., or enable full scrutiny," said David Gray Widder, a postdoctoral fellow at Cornell Tech who has studied use of the "open source" label by A.I. companies.
The Military

Palantir's First-Ever AI Warfare Conference (theguardian.com) 37

An anonymous reader quotes a report from The Guardian, written by Caroline Haskins: On May 7th and 8th in Washington, D.C., the city's biggest convention hall welcomed America's military-industrial complex, its top technology companies and its most outspoken justifiers of war crimes. Of course, that's not how they would describe it. It was the inaugural "AI Expo for National Competitiveness," hosted by the Special Competitive Studies Project -- better known as the "techno-economic" thinktank created by the former Google CEO and current billionaire Eric Schmidt. The conference's lead sponsor was Palantir, a software company co-founded by Peter Thiel that's best known for inspiring 2019 protests against its work with Immigration and Customs Enforcement (Ice) at the height of Trump's family separation policy. Currently, Palantir is supplying some of its AI products to the Israel Defense Forces.

The conference hall was also filled with booths representing the U.S. military and dozens of its contractors, ranging from Booz Allen Hamilton to a random company that was described to me as Uber for airplane software. At industry conferences like these, powerful people tend to be more unfiltered – they assume they're in a safe space, among friends and peers. I was curious, what would they say about the AI-powered violence in Gaza, or what they think is the future of war?

Attendees were told the conference highlight would be a series of panels in a large room toward the back of the hall. In reality, that room hosted just one of note. Featuring Schmidt and the Palantir CEO, Alex Karp, the fire-breathing panel would set the tone for the rest of the conference. More specifically, it divided attendees into two groups: those who see war as a matter of money and strategy, and those who see it as a matter of death. The vast majority of people there fell into group one. I've written about relationships between tech companies and the military before, so I shouldn't have been surprised by anything I saw or heard at this conference. But when it ended, and I departed DC for home, it felt like my life force had been completely sucked out of my body.
Some of the noteworthy quotes from the panel and convention, as highlighted in Haskins' reporting, include:

"It's always great when the CIA helps you out," Schmidt joked when CIA deputy director David Cohen lent him his microphone when his didn't work.

The U.S. has to "scare our adversaries to death" in war, said Karp. On university graduates protesting Israel's war in Gaza, Karp described their views as a "pagan religion infecting our universities" and "an infection inside of our society."

"The peace activists are war activists," Karp insisted. "We are the peace activists."

A huge aspect of war in a democracy, Karp went on to argue, is leaders successfully selling that war domestically. "If we lose the intellectual debate, you will not be able to deploy any armies in the west ever," Karp said.

A man in nuclear weapons research jokingly referred to himself as "the new Oppenheimer."
Businesses

OpenAI Strikes Reddit Deal To Train Its AI On Your Posts (theverge.com) 43

Emilia David reports via The Verge: OpenAI has signed a deal for access to real-time content from Reddit's data API, which means it can surface discussions from the site within ChatGPT and other new products. It's an agreement similar to the one Reddit signed with Google earlier this year that was reportedly worth $60 million. The deal will also "enable Reddit to bring new AI-powered features to Redditors and mods" and use OpenAI's large language models to build applications. OpenAI has also signed up to become an advertising partner on Reddit.

No financial terms were revealed in the blog post announcing the arrangement, and neither company mentioned training data, either. That last detail is different from the deal with Google, where Reddit explicitly stated it would give Google "more efficient ways to train models." There is, however, a disclosure mentioning that OpenAI CEO Sam Altman is also a shareholder in Reddit but that "This partnership was led by OpenAI's COO and approved by its independent Board of Directors."
"Reddit has become one of the internet's largest open archives of authentic, relevant, and always up-to-date human conversations about anything and everything. Including it in ChatGPT upholds our belief in a connected internet, helps people find more of what they're looking for, and helps new audiences find community on Reddit," Reddit CEO Steve Huffman says.

Reddit stock has jumped on news of the deal, rising 13% on Friday to $63.64. As Reuters notes, it's "within striking distance of the record closing price of $65.11 hit in late-March, putting the company on track to add $1.2 billion to its market capitalization."
Privacy

User Outcry As Slack Scrapes Customer Data For AI Model Training (securityweek.com) 34

New submitter txyoji shares a report: Enterprise workplace collaboration platform Slack has sparked a privacy backlash with the revelation that it has been scraping customer data, including messages and files, to develop new AI and ML models. By default, and without requiring users to opt-in, Slack said its systems have been analyzing customer data and usage information (including messages, content and files) to build AI/ML models to improve the software.

The company insists it has technical controls in place to block Slack from accessing the underlying content and promises that data will not lead across workplaces but, despite these assurances, corporate Slack admins are scrambling to opt-out of the data scraping. This line in Slack's communication sparked a social media controversy with the realization that content in direct messages and other sensitive content posted to Slack was being used to develop AI/ML models and that opting out world require sending e-mail requests: "If you want to exclude your Customer Data from Slack global models, you can opt out. To opt out, please have your org, workspace owners or primary owner contact our Customer Experience team at feedback@slack.com with your workspace/org URL and the subject line 'Slack global model opt-out request'. We will process your request and respond once the opt-out has been completed."

AI

OpenAI's Long-Term AI Risk Team Has Disbanded (wired.com) 21

An anonymous reader shares a report: In July last year, OpenAI announced the formation of a new research team that would prepare for the advent of supersmart artificial intelligence capable of outwitting and overpowering its creators. Ilya Sutskever, OpenAI's chief scientist and one of the company's cofounders, was named as the colead of this new team. OpenAI said the team would receive 20 percent of its computing power. Now OpenAI's "superalignment team" is no more, the company confirms. That comes after the departures of several researchers involved, Tuesday's news that Sutskever was leaving the company, and the resignation of the team's other colead. The group's work will be absorbed into OpenAI's other research efforts.

Sutskever's departure made headlines because although he'd helped CEO Sam Altman start OpenAI in 2015 and set the direction of the research that led to ChatGPT, he was also one of the four board members who fired Altman in November. Altman was restored as CEO five chaotic days later after a mass revolt by OpenAI staff and the brokering of a deal in which Sutskever and two other company directors left the board. Hours after Sutskever's departure was announced on Tuesday, Jan Leike, the former DeepMind researcher who was the superalignment team's other colead, posted on X that he had resigned.

Cloud

Companies Are So Desperate For Data Centers They're Leasing Them Before They're Even Built (sherwood.news) 23

Data center construction levels are at an all-time high. And more than ever, companies that need them have already called dibs. From a report: In the first quarter of 2024, what amounts to about half of the existing supply of data center megawattage in the US is under construction, according to real estate services firm CBRE. And 84% of that is already leased. Typically that rate had been about 50% the last few years -- already notably higher than other real estate classes. "I'm astonished and impressed by the demand for facilities yet to be fully constructed," CBRE Data Center Research Director Gordon Dolven told Sherwood.

That advanced interest means that despite the huge amount of construction, there's still going to be a shortage of data centers to meet demand. In other words, data center vacancy rates are staying low and rents high. Nationwide the vacancy rates are near record lows of 3.7% and average asking rent for data centers was up 19% year over year, according to CBRE. It was up 42% in Northern Virginia, where many data centers are located. These sorts of price jumps are "unprecedented" compared with other types of real estate. For comparison, rents for industrial and logistics real estate, another hot asset class used in e-commerce, is expected to go up 8% this year.

Operating Systems

NetBSD Bans AI-Generated Code (netbsd.org) 64

Seven Spirals writes: NetBSD committers are now banned from using any AI-generated code from ChatGPT, CoPilot, or other AI tools. Time will tell how this plays out with both their users and core team. "If you commit code that was not written by yourself, double check that the license on that code permits import into the NetBSD source repository, and permits free distribution," reads NetBSD's updated commit guidelines. "Check with the author(s) of the code, make sure that they were the sole author of the code and verify with them that they did not copy any other code. Code generated by a large language model or similar technology, such as GitHub/Microsoft's Copilot, OpenAI's ChatGPT, or Facebook/Meta's Code Llama, is presumed to be tainted code, and must not be committed without prior written approval by core."
Sony

Sony Lays Down the Gauntlet on AI 37

Sony Music Group, one of the world's biggest record labels, warned AI companies and music streaming platforms not to use the company's content without explicit permission. From a report: Sony Music, whose artists include Lil Nas X and Celine Dion, sent letters to more than 700 companies in an effort to protect its intellectual property, which includes album cover art, metadata, musical compositions and lyrics, from being used for training AI models. "Unauthorized use" of Sony Music Group content in the "training, development or commercialization of AI systems" deprives the company and its artists of control and compensation for those works, according to the letter, which was obtained by Bloomberg News.

[...] Sony Music, along with the rest of the industry, is scrambling to balance the creative potential of the fast-moving technology while also protecting artists' rights and its own profits. "We support artists and songwriters taking the lead in embracing new technologies in support of their art," Sony Music Group said in statement Thursday. "However, that innovation must ensure that songwriters' and recording artists' rights, including copyrights, are respected."
AI

Hugging Face Is Sharing $10 Million Worth of Compute To Help Beat the Big AI Companies (theverge.com) 10

Kylie Robison reports via The Verge: Hugging Face, one of the biggest names in machine learning, is committing $10 million in free shared GPUs to help developers create new AI technologies. The goal is to help small developers, academics, and startups counter the centralization of AI advancements. [...] Delangue is concerned about AI startups' ability to compete with the tech giants. Most significant advancements in artificial intelligence -- like GPT-4, the algorithms behind Google Search, and Tesla's Full Self-Driving system -- remain hidden within the confines of major tech companies. Not only are these corporations financially incentivized to keep their models proprietary, but with billions of dollars at their disposal for computational resources, they can compound those gains and race ahead of competitors, making it impossible for startups to keep up. Hugging Face aims to make state-of-the-art AI technologies accessible to everyone, not just the tech giants. [...]

Access to compute poses a significant challenge to constructing large language models, often favoring companies like OpenAI and Anthropic, which secure deals with cloud providers for substantial computing resources. Hugging Face aims to level the playing field by donating these shared GPUs to the community through a new program called ZeroGPU. The shared GPUs are accessible to multiple users or applications concurrently, eliminating the need for each user or application to have a dedicated GPU. ZeroGPU will be available via Hugging Face's Spaces, a hosting platform for publishing apps, which has over 300,000 AI demos created so far on CPU or paid GPU, according to the company.

Access to the shared GPUs is determined by usage, so if a portion of the GPU capacity is not actively utilized, that capacity becomes available for use by someone else. This makes them cost-effective, energy-efficient, and ideal for community-wide utilization. ZeroGPU uses Nvidia A100 GPU devices to power this operation -- which offer about half the computation speed of the popular and more expensive H100s. "It's very difficult to get enough GPUs from the main cloud providers, and the way to get them -- which is creating a high barrier to entry -- is to commit on very big numbers for long periods of times," Delangue said. Typically, a company would commit to a cloud provider like Amazon Web Services for one or more years to secure GPU resources. This arrangement disadvantages small companies, indie developers, and academics who build on a small scale and can't predict if their projects will gain traction. Regardless of usage, they still have to pay for the GPUs. "It's also a prediction nightmare to know how many GPUs and what kind of budget you need," Delangue said.

Slashdot Top Deals