Red Hat Software

Red Hat's RHEL-Based In-Vehicle OS Attains Milestone Safety Certification (networkworld.com) 36

In 2022, Red Hat announced plans to extend RHEL to the automotive industry through Red Hat In-Vehicle Operating System (providing automakers with an open and functionally-safe platform). And this week Red Hat announced it achieved ISO 26262 ASIL-B certification from exida for the Linux math library (libm.so glibc) — a fundamental component of that Red Hat In-Vehicle Operating System.

From Red Hat's announcement: This milestone underscores Red Hat's pioneering role in obtaining continuous and comprehensive Safety Element out of Context certification for Linux in automotive... This certification demonstrates that the engineering of the math library components individually and as a whole meet or exceed stringent functional safety standards, ensuring substantial reliability and performance for the automotive industry. The certification of the math library is a significant milestone that strengthens the confidence in Linux as a viable platform of choice for safety related automotive applications of the future...
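The certified component, glibc's libm, supplies the standard C math routines (sin, exp, sqrt and so on). As a rough illustration of that surface, note that CPython's math module delegates to the platform's C math library, so a sketch like the following exercises the same class of functions; it is illustrative only and says nothing about the certified build itself:

```python
# Illustrative only: the kinds of routines glibc's libm provides (sin, exp,
# sqrt, hypot, ...). CPython's math module delegates to the platform's C math
# library, so these calls exercise the same class of functions that was
# certified -- though nothing here touches the certified build itself.
import math

results = {
    "sin(pi/2)": math.sin(math.pi / 2),   # exactly the kind of routine ASIL-B
    "exp(0)": math.exp(0.0),              # certification must pin down:
    "hypot(3,4)": math.hypot(3.0, 4.0),   # deterministic, well-specified output
}

for name, value in results.items():
    print(f"{name} = {value}")
```

Functional safety certification is precisely about guaranteeing that such routines return well-specified results under all documented conditions.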

By working with the broader open source community, Red Hat can make use of the rigorous testing and analysis performed by Linux maintainers, collaborating across upstream communities to deliver open standards-based solutions. This approach enhances long-term maintainability and limits vendor lock-in, providing greater transparency and performance. Red Hat In-Vehicle Operating System is poised to offer a safety certified Linux-based operating system capable of concurrently supporting multiple safety and non-safety related applications in a single instance. These applications include advanced driver-assistance systems (ADAS), digital cockpit, infotainment, body control, telematics, artificial intelligence (AI) models and more. Red Hat is also working with key industry leaders to deliver pre-tested, pre-integrated software solutions, accelerating the route to market for SDV concepts.

"Red Hat is fully committed to attaining continuous and comprehensive safety certification of Linux natively for automotive applications," according to the announcement, "and has the industry's largest pool of Linux maintainers and contributors committed to this initiative..."

Or, as Network World puts it, "The phrase 'open source for the open road' is now being used to describe the inevitable fit between the character of Linux and the need for highly customizable code in all sorts of automotive equipment."
AI

Big Tech's AI Datacenters Demand Electricity. Are They Increasing Use of Fossil Fuels? (msn.com) 56

The artificial intelligence revolution will demand more electricity, warns the Washington Post. "Much more..."

They warn that the "voracious" electricity consumption of AI is driving an expansion of fossil fuel use in America — "including delaying the retirement of some coal-fired plants." As the tech giants compete in a global AI arms race, a frenzy of data center construction is sweeping the country. Some computing campuses require as much energy as a modest-sized city, turning tech firms that promised to lead the way into a clean energy future into some of the world's most insatiable guzzlers of power. Their projected energy needs are so huge that some worry there will not be enough electricity to meet them from any source... A ChatGPT-powered search, according to the International Energy Agency, consumes almost 10 times as much electricity as a search on Google. One large data center complex in Iowa owned by Meta uses as much power in a year as 7 million laptops running eight hours every day, based on data shared publicly by the company...
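A back-of-envelope check puts those comparisons in concrete terms. The per-query figures below are the ones the IEA comparison is usually cited with (roughly 0.3 Wh for a Google search versus 2.9 Wh for a ChatGPT query), and the 50 W laptop draw is our own assumption, not a number from the article:

```python
# Back-of-envelope check of the figures above. The per-query numbers are the
# ones the IEA comparison is usually cited with (~0.3 Wh per Google search,
# ~2.9 Wh per ChatGPT query); the 50 W laptop draw is our own assumption.
GOOGLE_WH = 0.3
CHATGPT_WH = 2.9
ratio = CHATGPT_WH / GOOGLE_WH          # "almost 10 times"
print(f"ChatGPT vs Google search: ~{ratio:.1f}x")

laptops, hours_per_day, watts = 7_000_000, 8, 50
annual_gwh = laptops * hours_per_day * 365 * watts / 1e9
print(f"7M laptops, 8h/day, 50W: ~{annual_gwh:,.0f} GWh/year")
```

On those assumptions the laptop comparison works out to roughly a terawatt-hour per year for a single data center complex.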

[Tech companies] argue advancing AI now could prove more beneficial to the environment than curbing electricity consumption. They say AI is already being harnessed to make the power grid smarter, speed up innovation of new nuclear technologies and track emissions.... "If we work together, we can unlock AI's game-changing abilities to help create the net zero, climate resilient and nature positive works that we so urgently need," Microsoft said in a statement.

The tech giants say they buy enough wind, solar or geothermal power every time a big data center comes online to cancel out its emissions. But critics see a shell game with these contracts: the companies are operating off the same power grid as everyone else, while claiming for themselves much of the finite amount of green energy. Utilities then backfill those purchases with fossil fuel expansions, regulatory filings show, running heavily polluting fossil fuel plants that become necessary to stabilize the overall power grid and make sure everyone has enough electricity.

The article quotes a project director at the nonprofit Data & Society, which tracks the effect of AI and accuses the tech industry of using "fuzzy math" in its climate claims. "Coal plants are being reinvigorated because of the AI boom," they tell the Washington Post. "This should be alarming to anyone who cares about the environment."

The article also summarizes a recent Goldman Sachs analysis, which predicted data centers would use 8% of America's total electricity by 2030, with 60% of that usage coming "from a vast expansion in the burning of natural gas. The new emissions created would be comparable to that of putting 15.7 million additional gas-powered cars on the road." "We all want to be cleaner," Brian Bird, president of NorthWestern Energy, a utility serving Montana, South Dakota and Nebraska, told a recent gathering of data center executives in Washington, D.C. "But you guys aren't going to wait 10 years ... My only choice today, other than keeping coal plants open longer than all of us want, is natural gas. And so you're going to see a lot of natural gas build out in this country."
Big Tech responded by "going all in on experimental clean-energy projects that have long odds of success anytime soon," the article concludes. "In addition to fusion, they are hoping to generate power through such futuristic schemes as small nuclear reactors hooked to individual computing centers and machinery that taps geothermal energy by boring 10,000 feet into the Earth's crust..." Some experts point to these developments in arguing the electricity needs of the tech companies will speed up the energy transition away from fossil fuels rather than undermine it. "Companies like this that make aggressive climate commitments have historically accelerated deployment of clean electricity," said Melissa Lott, a professor at the Climate School at Columbia University.
AI

Open Source ChatGPT Clone 'LibreChat' Lets You Use Multiple AI Services (thenewstack.io) 39

Slashdot reader DevNull127 writes: A free and open source ChatGPT clone — named LibreChat — lets its users choose which AI model to use, "to harness the capabilities of cutting-edge language models from multiple providers in a unified interface". This means LibreChat includes OpenAI's models, but also others — both open-source and closed-source — and its website promises "seamless integration" with AI services from OpenAI, Azure, Anthropic, and Google — as well as GPT-4, Gemini Vision, and many others. ("Every AI in one place," explains LibreChat's home page.) Plugins even let you make requests to DALL-E or Stable Diffusion for image generation. (LibreChat also offers a database that tracks "conversation state" — making it possible to switch to a different AI model in mid-conversation...)
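The mid-conversation model switch works because conversation state lives in one store, independent of any single provider. A minimal sketch of that pattern follows; the class and field names are illustrative, not LibreChat's actual schema or API:

```python
# A minimal sketch of the pattern described above: keep conversation state in
# one place so the backing model can change mid-conversation. Class and field
# names here are illustrative, not LibreChat's actual schema or API.
from dataclasses import dataclass, field

@dataclass
class Conversation:
    model: str                                   # e.g. "gpt-4o", "claude-3"
    messages: list = field(default_factory=list)

    def add(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})

    def switch_model(self, new_model: str) -> None:
        # History stays put; only the provider/model label changes.
        self.model = new_model

convo = Conversation(model="gpt-4o")
convo.add("user", "Summarize this article.")
convo.add("assistant", "Here is a summary...")
convo.switch_model("claude-3")   # mid-conversation switch, history intact
print(convo.model, len(convo.messages))
```

Because the history is provider-neutral, whichever model is active next simply receives the accumulated messages.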

Released under the MIT License, LibreChat has become "an open source success story," according to this article, representing "the passionate community that's actively creating an ecosystem of open source AI tools." And its creator, Danny Avila, says in some cases it finally lets users own their own data, "which is a dying human right, a luxury in the internet age and even more so with the age of LLM's." Avila says he was inspired by the day ChatGPT leaked the chat history of some of its users back in March of 2023 — and LibreChat is "inherently completely private". From the article:

With locally-hosted LLMs, Avila sees users finally getting "an opportunity to withhold training data from Big Tech, which many trade at the cost of convenience." In this world, LibreChat "is naturally attractive as it can run exclusively on open-source technologies, database and all, completely 'air-gapped.'" Even with remote AI services insisting they won't use transient data for training, "local models are already quite capable" Avila notes, "and will become more capable in general over time."

And they're also compatible with LibreChat...

AI

OpenAI CTO: AI Could Kill Some Creative Jobs That Maybe Shouldn't Exist Anyway (pcmag.com) 88

OpenAI CTO Mira Murati isn't worried about how AI could hurt some creative jobs, suggesting during a talk that some jobs were maybe always a bit replaceable anyway. From a report: "I think it's really going to be a collaborative tool, especially in the creative spaces," Murati told Dartmouth Trustee Jeffrey Blackburn during a conversation about AI hosted at the university's engineering department. "Some creative jobs maybe will go away, but maybe they shouldn't have been there in the first place," the CTO said of AI's role in the workplace. "I really believe that using it as a tool for education, [and] creativity, will expand our intelligence."
Businesses

OpenAI's First Acquisition Is Enterprise Data Startup 'Rockset' (theverge.com) 2

In a blog post on Friday, OpenAI announced it has acquired Rockset, an enterprise analytics startup, to "power our retrieval infrastructure across products." The Verge reports: This acquisition is OpenAI's first where the company will integrate both a company's technology and its team, a spokesperson tells Bloomberg. The two companies didn't share the terms of the acquisition. Rockset has raised $105 million in funding to date. "Rockset's infrastructure empowers companies to transform their data into actionable intelligence," OpenAI COO Brad Lightcap says in a statement. "We're excited to bring these benefits to our customers by integrating Rockset's foundation into OpenAI products."

"Rockset will become part of OpenAI and power the retrieval infrastructure backing OpenAI's product suite," Rockset CEO Venkat Venkataramani says in a Rockset blog post. "We'll be helping OpenAI solve the hard database problems that AI apps face at massive scale." Venkataramani says that current Rockset customers won't experience "immediate change" and that the company will gradually transition them off the platform. "Some" members of Rockset's team will move over to OpenAI, Bloomberg says.

Businesses

Stability AI Appoints New CEO 4

British startup Stability AI has appointed Prem Akkaraju as its new CEO. The 51-year-old Akkaraju, former CEO of visual effects company Weta Digital, "is part of a group of investors including former Facebook President Sean Parker that has stepped in to save Stability with a cash infusion that could result in a lower valuation for the firm," reports the Information (paywalled). "The new funding will likely shrink the stakes of some existing investors, who have collectively contributed more than $100 million."

In March, Stability AI founder and CEO Emad Mostaque stepped down from the role to pursue decentralized AI. "In a series of posts on X, Mostaque opined that one can't beat 'centralized AI' with more 'centralized AI,' referring to the ownership structure of top AI startups such as OpenAI and Anthropic," reported TechCrunch at the time. The move followed a report in April that claimed the company ran out of cash to pay its bills for its rented cloud GPUs. Last year, the company raised millions at a $1 billion valuation.
AI

Microsoft Makes Copilot Less Useful on New Copilot Plus PCs (theverge.com) 48

An anonymous reader shares a report: Microsoft launched its range of Copilot Plus PCs earlier this week, and they all come equipped with the new dedicated Copilot key on the keyboard. It's the first big change to Windows keyboards in 30 years, but all the key does now is launch a Progressive Web App (PWA) version of Copilot. The web app no longer integrates into Windows the way the previous Copilot experience, available since last year, did — so you can't use Copilot to control Windows 11 settings or have it docked as a sidebar anymore. It's literally just a PWA. Microsoft has even removed the keyboard shortcut to Copilot on these new Copilot Plus PCs, so WINKEY + C does nothing.
EU

Apple Won't Roll Out AI Tech In EU Market Over Regulatory Concerns (bloomberg.com) 84

Apple is withholding a raft of new technologies from hundreds of millions of consumers in the European Union, citing concerns posed by the bloc's regulatory attempts to rein in Big Tech. From a report: The company announced Friday it would block the release of Apple Intelligence, iPhone Mirroring and SharePlay Screen Sharing from users in the EU this year, because the Digital Markets Act allegedly forces it to downgrade the security of its products and services.

"We are concerned that the interoperability requirements of the DMA could force us to compromise the integrity of our products in ways that risk user privacy and data security," Apple said in a statement. Under the DMA, Apple is expected to receive a formal warning from EU regulators over how it allegedly blocks apps from steering users to cheaper subscription deals on the web -- a practice for which it received a $1.9 billion fine from Brussels regulators earlier this year.

AI

Amazon Mulls $5 To $10 Monthly Price Tag For Unprofitable Alexa Service, AI Revamp (reuters.com) 57

Amazon is planning a major revamp of its decade-old money-losing Alexa service to include a conversational generative AI with two tiers of service and has considered a monthly fee of around $5 to access the superior version, Reuters reported Friday, citing people with direct knowledge of the company's plans. From the report: Known internally as "Banyan," a reference to the sprawling ficus trees, the project would represent the first major overhaul of the voice assistant since it was introduced in 2014 along with the Echo line of speakers. Amazon has dubbed the new voice assistant "Remarkable Alexa," the people said. Amazon has also considered a roughly $10-per-month price, the report added.
Robotics

Public Servants Uneasy As Government 'Spy' Robot Prowls Federal Offices (www.cbc.ca) 72

An anonymous reader quotes a report from CBC News: A device federal public servants call "the little robot" began appearing in Gatineau office buildings in March. It travels through the workplace to collect data using about 20 sensors and a 360-degree camera, according to Yahya Saad, co-founder of GlobalDWS, which created the robot. "Using AI on the robot, the camera takes the picture, analyzes and counts the number of people and then discards the image," he said. Part of a platform known as VirBrix, the robot also gathers information on air quality, light levels, noise, humidity, temperature and even measures CO2, methane and radon gas. The aim is to create a better work environment for humans -- one that isn't too hot, humid or dim. Saad said that means more comfortable and productive employees. The technology can also help reduce heating, cooling and hydro costs, he said. "All these measures are done to save on energy and reduce the carbon footprint," Saad explained. After the pilot program in March, VirBrix is set to return in July and October, and the government hasn't ruled out extending its use. It's paying $39,663 to lease the robot for two years.

Bruce Roy, national president of the Government Services Union, called the robot's presence in federal workplaces "intrusive" and "insulting." "People feel observed all the time," he said in French. "It's a spy. The robot is a spy for management." Roy, whose union represents more than 12,000 federal workers across several departments, said the robot is unnecessary because the employer already has ways of monitoring employee attendance and performance. "We believe that one of the robot's tasks is to monitor who is there and who is not," he said. "Folks say, why is there a robot here? Doesn't my employer trust that I'm here and doing my work properly?" [...] Jean-Yves Duclos, the minister of public services and procurement, said the government is instead using the technology as it looks to cut its office space footprint in half over the coming years. "These robots, as we call them, these sensors observe the utilization of office space and will be able to give us information over the next few years to better provide the kind of workplace employees need to do their job," Duclos said in French. "These are totally anonymous methods that allow us to evaluate which spaces are the most used and which spaces are not used, so we can better arrange them."
Saad added that there are exceptions to the rule that images are discarded. "In those cases we keep the images, but the whole body, not just the face, the whole body of the person is blurred," said Saad. "These are exceptional cases where we need to keep images and then the images would be handed over to the client."

The data is then stored on a server on Canadian soil, according to GlobalDWS.
AI

London Premiere of Movie With AI-Generated Script Cancelled After Backlash (theguardian.com) 57

A cinema in London has cancelled the world premiere of a film with a script generated by AI after a backlash. From a report: The Prince Charles cinema, located in London's West End and which traditionally screens cult and art films, was due to host a showing of a new production called The Last Screenwriter on Sunday. However, the cinema announced on social media that the screening would not go ahead. In its statement the Prince Charles said: "The feedback we received over the last 24hrs once we advertised the film has highlighted the strong concern held by many of our audience on the use of AI in place of a writer which speaks to a wider issue within the industry."

Directed by Peter Luisi and starring Nicholas Pople, The Last Screenwriter is a Swiss production that describes itself as the story of "a celebrated screenwriter" who "finds his world shaken when he encounters a cutting edge AI scriptwriting system ... he soon realises AI not only matches his skills but even surpasses him in empathy and understanding of human emotions." The screenplay is credited to "ChatGPT 4.0." OpenAI launched its latest model, GPT-4o, in May. Luisi told the Daily Beast that the cinema had cancelled the screening after it received 200 complaints, but that a private screening for cast and crew would still go ahead in London.

Music

World's Largest Music Company Is Helping Musicians Make Their Own AI Voice Clones (rollingstone.com) 20

Universal Music Group has partnered with AI startup SoundLabs to offer voice modeling technology to its artists. The MicDrop feature, launching this summer, will allow UMG artists to create and control their own AI voice models. The tool includes voice-to-instrument functionality and language transposition capabilities. RollingStone adds: AI voice clones have become perhaps the most well-known -- and often the most controversial -- use of artificial intelligence in the music business. Viral tracks with AI vocals have spurred legislation to protect artists' virtual likenesses and rights of publicity.

Last year, an anonymous songwriter named Ghostwriter went viral with his song "Heart On My Sleeve," which featured AI-generated vocals of UMG artists Drake and The Weeknd. The song was pulled from streaming services days later following mounting pressure from the record company. Ironically, Drake got caught in a voice cloning controversy of his own a year later when he used a Tupac voice clone on his Kendrick Lamar diss track "Taylor Made Freestyle." Tupac's estate hit the rapper with a cease-and-desist in April, and the song was subsequently taken down.

AI

Anthropic Launches Claude 3.5 Sonnet, Says New Model Outperforms GPT-4 Omni (anthropic.com) 34

Anthropic launched Claude 3.5 Sonnet on Thursday, claiming it outperforms previous models and OpenAI's GPT-4 Omni. The AI startup also introduced Artifacts, a workspace for users to edit AI-generated projects. This release, the first of the Claude 3.5 family, comes three months after Claude 3. Claude 3.5 Sonnet is available for free on Claude.ai and the Claude iOS app, while Claude Pro and Team plan subscribers can access it with significantly higher rate limits.

Anthropic plans to launch 3.5 versions of Haiku and Opus later this year, exploring features like web search and memory for future releases.

Anthropic also introduced Artifacts on Claude.ai, a new feature that expands how users can interact with Claude. When a user asks Claude to generate content like code snippets, text documents, or website designs, these Artifacts appear in a dedicated window alongside their conversation. This creates a dynamic workspace where they can see, edit, and build upon Claude's creations in real-time, seamlessly integrating AI-generated content into their projects and workflows, the startup said.
AI

Perplexity AI Faces Scrutiny Over Web Scraping and Chatbot Accuracy (wired.com) 20

Perplexity AI, a billion-dollar "AI" search startup, has come under scrutiny for its data collection practices and accuracy of its chatbot responses. Despite claiming to respect website operators' wishes, Perplexity appears to scrape content from sites that have blocked its crawler, using an undisclosed IP address, a Wired investigation found. The chatbot also generates summaries that closely paraphrase original reporting with minimal attribution. Furthermore, its AI often "hallucinates," inventing false information when unable to access articles directly. Perplexity's CEO, Aravind Srinivas, maintains the company is not acting unethically.
Businesses

FedEx's Secretive Police Force Is Helping Cops Build An AI Car Surveillance Network (forbes.com) 47

Twenty years ago, FedEx established its own police force. Now it's working with local police to build out an AI car surveillance network. From a report: Forbes has learned the shipping and business services company is using AI tools made by Flock Safety, a $4 billion car surveillance startup, to monitor its distribution and cargo facilities across the United States. As part of the deal, FedEx is providing its Flock video surveillance feeds to law enforcement, an arrangement that Flock has with at least five multi-billion dollar private companies. But publicly available documents reveal that some local police departments are also sharing their Flock feeds with FedEx -- a rare instance of a private company availing itself of a police surveillance apparatus.

To civil rights activists, such close collaboration has the potential to dramatically expand Flock's car surveillance network, which already spans 4,000 cities across over 40 states and some 40,000 cameras that track vehicles by license plate, make, model, color and other identifying characteristics, like dents or bumper stickers. Lisa Femia, staff attorney at the Electronic Frontier Foundation, said because private entities aren't subject to the same transparency laws as police, this sort of arrangement could "[leave] the public in the dark, while at the same time expanding a sort of mass surveillance network."

AI

OpenAI Co-Founder Ilya Sutskever Launches Venture For Safe Superintelligence 49

Ilya Sutskever, co-founder of OpenAI who recently left the startup, has launched a new venture called Safe Superintelligence Inc., aiming to create a powerful AI system within a pure research organization. Sutskever has made AI safety the top priority for his new company. Safe Superintelligence has two more co-founders: investor and former Apple AI lead Daniel Gross, and Daniel Levy, known for training large AI models at OpenAI. From a report: Researchers and intellectuals have contemplated making AI systems safer for decades, but deep engineering around these problems has been in short supply. The current state of the art is to use both humans and AI to steer the software in a direction aligned with humanity's best interests. Exactly how one would stop an AI system from running amok remains a largely philosophical exercise.

Sutskever says that he's spent years contemplating the safety problems and that he already has a few approaches in mind. But Safe Superintelligence isn't yet discussing specifics. "At the most basic level, safe superintelligence should have the property that it will not harm humanity at a large scale," Sutskever says. "After this, we can say we would like it to be a force for good. We would like to be operating on top of some key values. Some of the values we were thinking about are maybe the values that have been so successful in the past few hundred years that underpin liberal democracies, like liberty, democracy, freedom."

Sutskever says that the large language models that have dominated AI will play an important role within Safe Superintelligence but that it's aiming for something far more powerful. With current systems, he says, "you talk to it, you have a conversation, and you're done." The system he wants to pursue would be more general-purpose and expansive in its abilities. "You're talking about a giant super data center that's autonomously developing technology. That's crazy, right? It's the safety of that that we want to contribute to."
Technology

Former Cisco CEO: Nvidia's AI Dominance Mirrors Cisco's Internet Boom, But Market Dynamics Differ (wsj.com) 24

Nvidia has become the U.S.'s most valuable listed company, riding the wave of the AI revolution that brings back memories of one from earlier this century. The last time a big provider of computing infrastructure was the most valuable U.S. company was in March 2000, when networking-equipment company Cisco took that spot at the height of the dot-com boom.

Former Cisco CEO John Chambers, who led the company during the dot-com boom, said the implications of AI are larger than the internet and cloud computing combined, but the dynamics differ. "The implications in terms of the size of the market opportunity is that of the internet and cloud computing combined," he told WSJ. "The speed of change is different, the size of the market is different, the stage when the most valuable company was reached is different." The story adds: Chambers said [Nvidia CEO] Huang was working from a different playbook than Cisco but was facing some similar challenges. Nvidia has a dominant market share, much like Cisco did with its products as the internet grew, and is also fending off rising competition. Also like Nvidia, Cisco benefited from investments before the industry became profitable. "We were absolutely in the right spot at the right time, and we knew it, and we went for it," Chambers said.
AI

China's DeepSeek Coder Becomes First Open-Source Coding Model To Beat GPT-4 Turbo (venturebeat.com) 108

Shubham Sharma reports via VentureBeat: Chinese AI startup DeepSeek, which previously made headlines with a ChatGPT competitor trained on 2 trillion English and Chinese tokens, has announced the release of DeepSeek Coder V2, an open-source mixture of experts (MoE) code language model. Built upon DeepSeek-V2, an MoE model that debuted last month, DeepSeek Coder V2 excels at both coding and math tasks. It supports more than 300 programming languages and outperforms state-of-the-art closed-source models, including GPT-4 Turbo, Claude 3 Opus and Gemini 1.5 Pro. The company claims this is the first time an open model has achieved this feat, sitting way ahead of Llama 3-70B and other models in the category. It also notes that DeepSeek Coder V2 maintains comparable performance in terms of general reasoning and language capabilities.

Founded last year with a mission to "unravel the mystery of AGI with curiosity," DeepSeek has been a notable Chinese player in the AI race, joining the likes of Qwen, 01.AI and Baidu. In fact, within a year of its launch, the company has already open-sourced a bunch of models, including the DeepSeek Coder family. The original DeepSeek Coder, with up to 33 billion parameters, did decently on benchmarks with capabilities like project-level code completion and infilling, but only supported 86 programming languages and a context window of 16K. The new V2 offering builds on that work, expanding language support to 338 and context window to 128K -- enabling it to handle more complex and extensive coding tasks. When tested on MBPP+, HumanEval, and Aider benchmarks, designed to evaluate code generation, editing and problem-solving capabilities of LLMs, DeepSeek Coder V2 scored 76.2, 90.2, and 73.7, respectively -- sitting ahead of most closed and open-source models, including GPT-4 Turbo, Claude 3 Opus, Gemini 1.5 Pro, Codestral and Llama-3 70B. Similar performance was seen across benchmarks designed to assess the model's mathematical capabilities (MATH and GSM8K). The only model that managed to outperform DeepSeek's offering across multiple benchmarks was GPT-4o, which obtained marginally higher scores in HumanEval, LiveCode Bench, MATH and GSM8K. [...]

As of now, DeepSeek Coder V2 is being offered under an MIT license, which allows for both research and unrestricted commercial use. Users can download both 16B and 236B sizes in instruct and base variants via Hugging Face. Alternatively, the company is also providing access to the models via API through its platform under a pay-as-you-go model. For those who want to test out the capabilities of the models first, the company is offering the option to interact with DeepSeek Coder V2 via chatbot.

AI

Meta Has Created a Way To Watermark AI-Generated Speech (technologyreview.com) 64

An anonymous reader quotes a report from MIT Technology Review: Meta has created a system that can embed hidden signals, known as watermarks, in AI-generated audio clips, which could help in detecting AI-generated content online. The tool, called AudioSeal, is the first that can pinpoint which bits of audio in, for example, a full hourlong podcast might have been generated by AI. It could help to tackle the growing problem of misinformation and scams using voice cloning tools, says Hady Elsahar, a research scientist at Meta. Malicious actors have used generative AI to create audio deepfakes of President Joe Biden, and scammers have used deepfakes to blackmail their victims. Watermarks could in theory help social media companies detect and remove unwanted content. However, there are some big caveats. Meta says it has no plans yet to apply the watermarks to AI-generated audio created using its tools. Audio watermarks are not yet adopted widely, and there is no single agreed industry standard for them. And watermarks for AI-generated content tend to be easy to tamper with -- for example, by removing or forging them.

Fast detection, and the ability to pinpoint which elements of an audio file are AI-generated, will be critical to making the system useful, says Elsahar. He says the team achieved between 90% and 100% accuracy in detecting the watermarks, much better results than in previous attempts at watermarking audio. AudioSeal is available on GitHub for free. Anyone can download it and use it to add watermarks to AI-generated audio clips. It could eventually be overlaid on top of AI audio generation models, so that it is automatically applied to any speech generated using them. The researchers who created it will present their work at the International Conference on Machine Learning in Vienna, Austria, in July.
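The core idea, localized detection, can be illustrated with a toy sketch. This is not AudioSeal's actual algorithm: a faint shared-secret pattern is added to some frames, and detection correlates each frame against that pattern, so watermarked segments can be pinpointed frame by frame, the capability the article highlights:

```python
# A toy illustration of localized audio watermarking -- NOT AudioSeal's actual
# algorithm. A faint shared-secret pattern is added to some frames; detection
# correlates each frame with the pattern, so watermarked segments can be
# pinpointed frame by frame.
import math
import random

random.seed(0)
FRAME = 1024
pattern = [random.gauss(0, 1) for _ in range(FRAME)]   # shared secret

def embed(frame, strength=0.01):
    return [x + strength * p for x, p in zip(frame, pattern)]

def detect(frame, threshold=0.5):
    dot = sum(x * p for x, p in zip(frame, pattern))
    norm = math.sqrt(sum(x * x for x in frame)) * math.sqrt(sum(p * p for p in pattern))
    return dot / norm > threshold   # normalized correlation with the secret

# Four frames of quiet "real" audio; watermark only the second one.
frames = [[random.gauss(0, 0.001) for _ in range(FRAME)] for _ in range(4)]
frames[1] = embed(frames[1])
flags = [detect(f) for f in frames]
print(flags)   # only the watermarked frame should flag
```

Real schemes like AudioSeal must additionally survive compression, resampling and deliberate tampering, which is exactly why robustness remains the hard part.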

The Military

Ukraine Turning To AI To Prioritize 700 Years of Landmine Removal (newscientist.com) 221

MattSparkes shares a report from NewScientist: The Russian invasion of Ukraine has seen so many landmines deployed across the country that clearing them would take 700 years, say researchers. To make the task more manageable, Ukrainian scientists are turning to artificial intelligence to identify which regions are a priority for de-mining, though they expect some may simply have to be left as a permanent "scar" on the country. The model considers vast amounts of data, including tax and property ownership records, agricultural maps, data on soil fertility, logs from the military and emergency services of where bombs and shells have landed, information gleaned from satellite images and interviews with local civilians and the military. Even climate change models and data on population density derived from mobile phone operators could be assessed. The AI then weighs factors such as civilian safety and potential economic benefits to determine the importance of a given piece of land and how urgent it is to make it safe. Ihor Bezkaravainyi, a deputy minister at Ukraine's Ministry of Economy, is leading the team, and he likens the task of de-mining during an ongoing war to designing and building a submarine entirely underwater, except that the water is on fire. "It's a big problem," he says.
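The weighing of factors described above amounts to a prioritization score per region. A toy sketch of that idea follows; the factor names, weights and numbers are assumptions for illustration, not the Ukrainian ministry's actual model:

```python
# A toy sketch of the prioritization described above: score each mined region
# by weighted factors and rank. The factor names, weights and numbers are
# assumptions for illustration, not the Ukrainian ministry's actual model.
WEIGHTS = {"civilian_proximity": 0.5, "economic_value": 0.3, "soil_fertility": 0.2}

regions = [
    {"name": "village outskirts", "civilian_proximity": 0.9, "economic_value": 0.4, "soil_fertility": 0.7},
    {"name": "grain fields",      "civilian_proximity": 0.2, "economic_value": 0.9, "soil_fertility": 0.9},
    {"name": "remote forest",     "civilian_proximity": 0.1, "economic_value": 0.1, "soil_fertility": 0.2},
]

def priority(region):
    # Weighted sum of normalized factors; higher score = de-mine sooner.
    return sum(w * region[k] for k, w in WEIGHTS.items())

ranked = sorted(regions, key=priority, reverse=True)
for r in ranked:
    print(f"{r['name']}: {priority(r):.2f}")
```

The lowest-scoring regions are the ones that, as the researchers concede, may simply be left as a permanent "scar" on the country.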
