AI

Toys 'R' Us Riles Critics With 'First-Ever' AI-Generated Commercial Using Sora (arstechnica.com) 35

An anonymous reader quotes a report from Ars Technica: On Monday, Toys "R" Us announced that it had partnered with an ad agency called Native Foreign to create what it calls "the first-ever brand film using OpenAI's new text-to-video tool, Sora." OpenAI debuted Sora in February, but the video synthesis tool has not yet become available to the public. The brand film tells the story of Toys "R" Us founder Charles Lazarus using AI-generated video clips. "We are thrilled to partner with Native Foreign to push the boundaries of Sora, a groundbreaking new technology from OpenAI that's gaining global attention," wrote Toys "R" Us on its website. "Sora can create up to one-minute-long videos featuring realistic scenes and multiple characters, all generated from text instruction. Imagine the excitement of creating a young Charles Lazarus, the founder of Toys "R" Us, and envisioning his dreams for our iconic brand and beloved mascot Geoffrey the Giraffe in the early 1930s."

The company says that The Origin of Toys "R" Us commercial was co-produced by Toys "R" Us Studios President Kim Miller Olko as executive producer and Native Foreign's Nik Kleverov as director. "Charles Lazarus was a visionary ahead of his time, and we wanted to honor his legacy with a spot using the most cutting-edge technology available," Miller Olko said in a statement. In the video, we see a child version of Lazarus, presumably generated using Sora, falling asleep and having a dream that he is flying through a land of toys. Along the way, he meets Geoffrey, the store's mascot, who hands the child a small red car. Many of the scenes retain obvious hallmarks of AI-generated imagery, such as unnatural movement, strange visual artifacts, and the irregular shape of eyeglasses. [...] Although the Toys "R" Us video uses key visual elements from Sora, it still required quite a bit of human post-production work to put it together. Sora eliminated the need for actors and cameras, but creating successful generations and piecing together the rest still took human scriptwriters and VFX artists to fill in the AI model's shortcomings. "The brand film was almost entirely created with Sora, with some corrective VFX and an original music score composed by Aaron Marsh of famed indie rock band Copeland," wrote Toys "R" Us in a press release.
Comedy writer Mike Drucker wrapped up several of these criticisms into one post, writing: "Love this commercial is like, 'Toys R Us started with the dream of a little boy who wanted to share his imagination with the world. And to show how, we fired our artists and dried Lake Superior using a server farm to generate what that would look like in Stephen King's nightmares.'"

Other critical comments were more frank. Filmmaker Joe Russo posted: "TOYS 'R US released an AI commercial and it fucking sucks."
AI

Exam Submissions By AI Found To Earn Higher Grades Than Real-Life Students (yahoo.com) 118

Exam submissions generated by AI can not only evade detection but also earn higher grades than those submitted by university students, a real-world test has shown. From a report: The findings come as concerns mount about students submitting AI-generated work as their own, with questions being raised about the academic integrity of universities and other higher education institutions. It also shows even experienced markers could struggle to spot answers generated by AI, the University of Reading academics said.

Peter Scarfe, an associate professor at Reading's School of Psychology and Clinical Language Sciences, said the findings should serve as a "wake-up call" for educational institutions as AI tools such as ChatGPT become more advanced and widespread. He said: "The data in our study shows it is very difficult to detect AI-generated answers. There has been quite a lot of talk about the use of so-called AI detectors, which are also another form of AI but (the scope here) is limited." For the study, published in the journal Plos One, Prof Scarfe and his team generated answers to exam questions using GPT-4 and submitted these on behalf of 33 fake students. Exam markers at Reading's School of Psychology and Clinical Language Sciences were unaware of the study. Answers submitted for many undergraduate psychology modules went undetected in 94% of cases and, on average, got higher grades than real student submissions, Prof Scarfe said.

Security

Rabbit R1 AI Device Exposed by API Key Leak (404media.co) 15

Security researchers claim to have discovered exposed API keys in the code of Rabbit's R1 AI device, potentially allowing access to all user responses and company services. The group, known as Rabbitude, says they could send emails from internal Rabbit addresses to demonstrate the vulnerability. 404 Media adds: In a statement, Rabbit said, "Today we were made aware of an alleged data breach. Our security team immediately began investigating it. As of right now, we are not aware of any customer data being leaked or any compromise to our systems. If we learn of any other relevant information, we will provide an update once we have more details."
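Rabbitude says the keys were hardcoded in Rabbit's codebase. As an illustration of how such leaks are typically found, here is a minimal sketch of a regex-based secret scanner in the style of tools like gitleaks or truffleHog; the patterns, rule names, and sample string below are hypothetical, not Rabbit's actual keys or code:

```python
import re

# Hypothetical detection rules. Real scanners ship hundreds of
# provider-specific patterns; these two are illustrative only.
KEY_PATTERNS = {
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*[\"']([A-Za-z0-9_\-]{16,})[\"']"
    ),
    "sendgrid_style": re.compile(
        r"SG\.[A-Za-z0-9_\-]{22}\.[A-Za-z0-9_\-]{43}"
    ),
}

def find_hardcoded_keys(source: str):
    """Return (rule_name, matched_text) pairs for likely hardcoded credentials."""
    hits = []
    for name, pattern in KEY_PATTERNS.items():
        for m in pattern.finditer(source):
            hits.append((name, m.group(0)))
    return hits

sample = 'API_KEY = "abcd1234abcd1234abcd1234"'
print(find_hardcoded_keys(sample))
```

Scanners like this run in CI pipelines or pre-commit hooks precisely so that keys never ship inside a distributed codebase in the first place.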
Ubuntu

Canonical Expands Ubuntu Pro With Distroless Docker Image Service Offering 12-Year Support (betanews.com) 7

BrianFagioli writes: Canonical has introduced a new service enabling the creation of custom distroless Docker images under its "Everything LTS" program. This initiative allows customers to include any open-source software in their Docker images, regardless of whether it is packaged in Ubuntu, with a security maintenance commitment of up to 12 years. [...] This expansion of the Ubuntu Pro offering incorporates numerous new open-source components, especially current AI/ML tools, maintained directly from the source rather than as traditional 'deb' packages. This approach aims to minimize the attack surface of containers, thereby enhancing security and aiding compliance with various regulatory standards such as FIPS, FedRAMP, EU Cyber Resilience Act, FCC U.S. Cyber Trust Mark, and DISA-STIG.
Youtube

YouTube in Talks With Record Labels Over AI Music Deal (ft.com) 44

YouTube is negotiating with major record labels to license songs for AI tools that clone popular artists' music, according to the Financial Times. The Google-owned platform is offering upfront payments to Sony, Warner, and Universal to secure rights for training AI software, aiming to launch new features this year. But there are roadblocks to the deal, the story adds: However, many artists remain fiercely opposed to AI music generation, fearing it could undermine the value of their work. Any move by a label to force their stars into such a scheme would be hugely controversial. [...]

YouTube last year began testing a generative AI tool that lets people create short music clips by entering a text prompt. The product, initially named "Dream Track," was designed to imitate the sound and lyrics of well-known singers. But only 10 artists agreed to participate in the test phase, including Charli XCX, Troye Sivan and John Legend, and Dream Track was made available to just a small group of creators.

Businesses

'Great Resignation' Enters Third Year (reuters.com) 50

An anonymous reader quotes a report from Reuters: The proportion of workers who expect to switch employers in the next 12 months is higher than that from the "Great Resignation" period of 2022, a PwC survey of the global workforce found. Around 28% of more than 56,000 workers surveyed by PwC said they were "very or extremely likely" to move from their current companies, compared to 19% in 2022, and 26% in 2023. PwC's 2024 "Hopes and Fears" survey also showed workers are embracing emerging technologies such as generative artificial intelligence (GenAI) and prioritizing upskilling amid rising workloads and heightened workplace uncertainty.

Pete Brown, global workforce leader at PwC UK, said employees are placing an "increased premium" on organizations that invest in their skills growth, and so, businesses must prioritize upskilling and employee experience. About 45% of the workers surveyed said they have experienced rising workloads and an accelerating pace of workplace change in the last 12 months, with 62% saying they have seen more change at work in the past year than in the 12 months before that. Among employees who use GenAI daily, 82% said they expect it to increase their efficiency in the next 12 months. Reflecting confidence that GenAI opportunities would support their career growth, nearly half of those surveyed by PwC expected GenAI to generate higher salaries, with almost two-thirds hoping these emerging tools will improve the quality of their work.
Carol Stubbings, global markets and tax and legal services leader at PwC UK, said: "The findings suggest that job satisfaction is no longer enough." In order to retain talent and mitigate pressures, Stubbings said employers must invest in staff and tech platforms.
The Courts

Mozilla's CPO Sues Over Discrimination Post-Cancer Diagnosis (theregister.com) 43

Thomas Claburn reports via The Register: Mozilla Corporation was sued this month in the US, along with three of its executives, for alleged disability discrimination and retaliation against Chief Product Officer Steve Teixeira. Teixeira, according to a complaint filed in King County Superior Court in the State of Washington, had been tapped to become CEO when he was diagnosed with ocular melanoma on October 3, 2023. Teixeira then took medical leave for cancer treatment from October 30, 2023, through February 1, 2024. "Immediately, upon his return, Mozilla campaigned to demote or terminate Mr Teixeira citing groundless concerns and assumptions about his capabilities as an individual living with cancer," the complaint [PDF] says. "Interim Chief Executive Officer Laura Chambers and Chief People Officer Dani Chehak were clear with Mr Teixeira: He could not continue as Chief Product Officer -- and could not continue as a Mozilla employee in any capacity beyond 2024 -- because of his diagnosis."

Chambers and Chehak are both named in the complaint, along with Mitchell Baker, the former CEO of Mozilla who stepped down in February and announced Chambers as her successor. "Mr Teixeira was enthusiastic to resume his critical role after treatment, but Mozilla would not tolerate an executive with cancer," said Amy Kangas Alexander, an attorney with law firm Stokes Lawrence who is representing the plaintiff, in an email to The Register. "When Mr Teixeira refused to be marginalized because of his disability, Mozilla retaliated and placed him on leave against his will. Mozilla has sidelined Mr Teixeira at the very moment he needs to be preparing his family for the possibility of a future without him."

The complaint claims that Teixeira, appointed in August 2022, helped reverse the decade-long decline of Firefox, which generates about 90 percent of Mozilla's revenue and is the company's only profitable product. He's further credited with growing Mozilla's advertising business and AI capabilities, and with reducing investment in the money-losing Pocket service. These and other successes, it's alleged, led to a conversation in September 2023 in which Baker outlined a plan for Teixeira to become CEO. Then he took medical leave and before he could return, the complaint says, Chambers was appointed interim CEO and Baker was removed, becoming Executive Chair of the Board of Directors. [...]
A Mozilla spokesperson said in a statement: "We are aware of the lawsuit filed against Mozilla. We deny the allegations and intend to vigorously defend against this lawsuit. Mozilla has a 25-plus-year track record of maintaining the highest standards of integrity and compliance with all applicable laws. We look forward to presenting our defense in court and are confident that the facts will demonstrate that we have acted appropriately. As this is an ongoing legal matter, we will not be providing further comments at this time."
The Matrix

Researchers Upend AI Status Quo By Eliminating Matrix Multiplication In LLMs 72

Researchers from UC Santa Cruz, UC Davis, LuxiTech, and Soochow University have developed a new method to run AI language models more efficiently by eliminating matrix multiplication, potentially reducing the environmental impact and operational costs of AI systems. Ars Technica's Benj Edwards reports: Matrix multiplication (often abbreviated to "MatMul") is at the center of most neural network computational tasks today, and GPUs are particularly good at executing the math quickly because they can perform large numbers of multiplication operations in parallel. [...] In the new paper, titled "Scalable MatMul-free Language Modeling," the researchers describe creating a custom 2.7 billion parameter model without using MatMul that features similar performance to conventional large language models (LLMs). They also demonstrate running a 1.3 billion parameter model at 23.8 tokens per second on a GPU that was accelerated by a custom-programmed FPGA chip that uses about 13 watts of power (not counting the GPU's power draw). The implication is that a more efficient FPGA "paves the way for the development of more efficient and hardware-friendly architectures," they write.
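The core trick described in the paper is constraining weights to ternary values, so that every "multiplication" in a matrix-vector product collapses into an addition, a subtraction, or a skip. Here is a minimal Python sketch of that idea; it is illustrative only (the actual model uses optimized fused GPU/FPGA kernels, not Python loops):

```python
def ternary_matvec(weights, x):
    """Matrix-vector product where every weight is -1, 0, or +1.

    Because no weight requires a true multiply, the whole product
    can be computed with adders alone -- the property that makes
    cheap FPGA implementations attractive.
    """
    out = []
    for row in weights:
        acc = 0.0
        for w, xi in zip(row, x):
            if w == 1:
                acc += xi      # +1: add the activation
            elif w == -1:
                acc -= xi      # -1: subtract it
            # 0: contributes nothing, skip entirely
        out.append(acc)
    return out

# Matches an ordinary matmul with the same ternary matrix:
W = [[1, 0, -1], [-1, 1, 0]]
x = [0.5, 2.0, -1.0]
print(ternary_matvec(W, x))  # [1.5, 1.5]
```

On hardware, the if/else becomes a mux in front of an accumulator, which is why the authors could fit the model into a roughly 13-watt FPGA budget.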

The paper doesn't provide power estimates for conventional LLMs, but this post from UC Santa Cruz estimates about 700 watts for a conventional model. However, in our experience, you can run a 2.7B parameter version of Llama 2 competently on a home PC with an RTX 3060 (that uses about 200 watts peak) powered by a 500-watt power supply. So, if you could theoretically completely run an LLM in only 13 watts on an FPGA (without a GPU), that would be a 38-fold decrease in power usage. The technique has not yet been peer-reviewed, but the researchers -- Rui-Jie Zhu, Yu Zhang, Ethan Sifferman, Tyler Sheaves, Yiqiao Wang, Dustin Richmond, Peng Zhou, and Jason Eshraghian -- claim that their work challenges the prevailing paradigm that matrix multiplication operations are indispensable for building high-performing language models. They argue that their approach could make large language models more accessible, efficient, and sustainable, particularly for deployment on resource-constrained hardware like smartphones. [...]

The researchers say that scaling laws observed in their experiments suggest that the MatMul-free LM may also outperform traditional LLMs at very large scales. The researchers project that their approach could theoretically intersect with and surpass the performance of standard LLMs at scales around 10^23 FLOPS, which is roughly equivalent to the training compute required for models like Meta's Llama-3 8B or Llama-2 70B. However, the authors note that their work has limitations. The MatMul-free LM has not been tested on extremely large-scale models (e.g., 100 billion-plus parameters) due to computational constraints. They call for institutions with larger resources to invest in scaling up and further developing this lightweight approach to language modeling.
AI

Apple Spurned Idea of iPhone AI Partnership With Meta Months Ago (bloomberg.com) 10

An anonymous reader shares a report: Apple rejected overtures by Meta Platforms to integrate the social networking company's AI chatbot into the iPhone months ago, according to people with knowledge of the matter. The two companies aren't in discussions about using Meta's Llama chatbot in an AI partnership and only held brief talks in March, said the people, who asked not to be identified because the situation is private. The dialogue about a partnership didn't reach any formal stage, and Apple has no active plans to integrate Llama.

[...] Apple decided not to move forward with formal Meta discussions in part because it doesn't see that company's privacy practices as stringent enough, according to the people. Apple has spent years criticizing Meta's technology, and integrating Llama into the iPhone would have been a stark about-face.

Social Networks

Meta Is Tagging Real Photos As 'Made With AI,' Say Photographers (techcrunch.com) 25

Since May, Meta has been labeling photos created with AI tools on its social networks to help users better identify the content they're consuming. However, as TechCrunch's Ivan Mehta reports, this approach has faced criticism as many photos not created using AI tools have been incorrectly labeled, prompting Meta to reevaluate its labeling strategy to better reflect the actual use of AI in images. From the report: There are plenty of examples of Meta automatically attaching the label to photos that were not created through AI. For example, this photo of Kolkata Knight Riders winning the Indian Premier League Cricket tournament. Notably, the label is only visible on the mobile apps and not on the web. Plenty of other photographers have raised concerns over their images having been wrongly tagged with the "Made with AI" label. Their point is that simply editing a photo with a tool should not be subject to the label.

Former White House photographer Pete Souza said in an Instagram post that one of his photos was tagged with the new label. Souza told TechCrunch in an email that Adobe changed how its cropping tool works and you have to "flatten the image" before saving it as a JPEG image. He suspects that this action has triggered Meta's algorithm to attach this label. "What's annoying is that the post forced me to include the 'Made with AI' even though I unchecked it," Souza told TechCrunch.

Meta did not answer TechCrunch's questions on the record about Souza's experience or the posts of other photographers who said their photos were incorrectly tagged. However, after the story was published, Meta said the company is evaluating its approach so that its labels reflect the amount of AI used in an image. "Our intent has always been to help people know when they see content that has been made with AI. We are taking into account recent feedback and continue to evaluate our approach so that our labels reflect the amount of AI used in an image," a Meta spokesperson told TechCrunch.
"For now, Meta provides no separate labels to indicate if a photographer used a tool to clean up their photo, or used AI to create it," notes TechCrunch. "For users, it might be hard to understand how much AI was involved in a photo."

"Meta's label specifies that 'Generative AI may have been used to create or edit content in this post' -- but only if you tap on the label. Despite this approach, there are plenty of photos on Meta's platforms that are clearly AI-generated, and Meta's algorithm hasn't labeled them."
Education

Google Is Bringing Gemini Access To Teens Using Their School Accounts (techcrunch.com) 15

An anonymous reader quotes a report from TechCrunch: Google announced on Monday that it's bringing its AI technology Gemini to teen students using their school accounts, after having already offered Gemini to teens using their personal accounts. The company is also giving educators access to new tools alongside this release. Google says that giving teens access to Gemini can help prepare them with the skills they need to thrive in a future where generative AI exists. Gemini will help students learn more confidently with real-time feedback, the company believes.

Google claims it will not use data from chats with students to train and improve its AI models, and has taken steps to ensure it's bringing this technology to students responsibly. Gemini has guardrails that will prevent inappropriate responses, such as illegal or age-gated substances, from appearing in responses. It will also actively recommend teens use its double-check feature to help them develop information literacy and critical thinking skills. Gemini will be available to teen students while using their Google Workspace for Education accounts in English in more than 100 countries. Gemini will be off by default for teens until admins choose to turn it on.
Google also announced that it's launching its Read Along in Classroom feature worldwide to help students improve reading skills with real-time support. Educators can assign grade-level or phonics-based reading activities and receive insights on students' reading accuracy, speed, and comprehension.
Businesses

OpenAI Buys Remote Collaboration Platform 'Multi' (venturebeat.com) 9

OpenAI has purchased Multi (previously Remotion), "a five-person startup based in New York City that focuses on screenshare and collaboration technologies for workers using Mac computers," reports VentureBeat. The latest acquisition comes just days after the AI company announced it had acquired enterprise analytics startup Rockset. No details were provided on the terms of the deal. From the report: Multi's co-founder and CEO Alexander Embiricos posted on his X account today stating specifically that he (and presumably the entire Multi team) has joined OpenAI's "ChatGPT desktop team," the unit at the company responsible for building the ChatGPT for Mac desktop app that was unveiled back in May 2024. Multi broke the news first to its users and followers in a blog post, writing: "Recently, we've been increasingly asking ourselves how we should work with computers. Not on or using computers, but truly with computers. With AI. We believe it's one of the most important product questions of our time. And so, we're beyond excited to share that Multi is joining OpenAI!"

The news has users on X speculating that OpenAI will use Multi to allow its AI models such as GPT-4o to "take over" a user's computer and perform actions on their behalf based on text or voice prompts. So you could say something like "ChatGPT, create a spreadsheet of my latest hours and send it to my manager" and it would try to do this. Based on what I've learned about Multi (see final section of this article below) and zero insider knowledge, I think it is at least as likely that OpenAI will seek to use the acquisition as a means of souping up and adding features to its ChatGPT Team and Enterprise subscription plans, as those are already more focused on providing tech for teams to help all the individuals on them work better together.

However, Multi also broke the news that it is "sunsetting" the current version of its software and will end support for it in one month: on July 24, 2024, as well as delete all user data. Egads! Multi states in a short FAQ in its blog post that users should go ahead and export their data before that time, using the "Export Session Notes" setting under the URL: https://app.multi.app/account. It is also opening the door to users asking for extensions to the deletion date of July 24, 2024 for their individual or company accounts, if they email Embiricos himself directly at alexander@multi.app. Multi also says its team members can help recommend alternatives through the same email address.

AI

Head of Paris's Top Tech University Says Secret To France's AI Boom Is Focus on Humanities (yahoo.com) 23

French universities are becoming hotbeds for AI innovation, attracting investors seeking the next tech breakthrough. Ecole Polytechnique, a 230-year-old institution near Paris, stands out with 57% of France's AI startup founders among its alumni, according to Dealroom data analyzed by Accel. The school's approach combines STEM education with humanities and military training, producing well-rounded entrepreneurs. "AI is now instilling every discipline the same way mathematics did years ago," said Dominique Rossin, the school's provost. "We really push our students out of their comfort zone and encourage them to try new subjects and discover new areas in science," he added.

France leads Europe in AI startup funding, securing $2.3 billion and outpacing the UK and Germany, according to Dealroom.
The Courts

Major Record Labels Sue AI Company Behind 'BBL Drizzy' (theverge.com) 53

A group of record labels including the big three -- Universal Music Group (UMG), Sony Music Entertainment, and Warner Records -- are suing two of the top names in generative AI music making, alleging the companies violated their copyright "en masse." From a report: The two AI companies, Suno and Udio, use text prompts to churn out original songs. Both companies have enjoyed a level of success: Suno is available for use in Microsoft Copilot through a partnership with the tech giant. Udio was used to create "BBL Drizzy," one of the more notable examples of AI music going viral.

The case against Suno was filed in Boston federal court, and the Udio case was filed in New York. The labels say artists across genres and eras had their work used without consent. The lawsuits were brought by the Recording Industry Association of America (RIAA), the powerful group representing major players in the music industry, and a group of labels. The RIAA is seeking damages of up to $150,000 per work, along with other fees.

AI

Apple Might Partner with Meta on AI (techcrunch.com) 27

Earlier this month Apple announced a partnership with OpenAI to bring ChatGPT to Siri.

"Now, the Wall Street Journal reports that Apple and Facebook's parent company Meta are in talks around a similar deal," according to TechCrunch: A deal with Meta could make Apple less reliant on a single partner, while also providing validation for Meta's generative AI tech. The Journal reports that Apple isn't offering to pay for these partnerships; instead, Apple provides distribution to AI partners who can then sell premium subscriptions... Apple has said it will ask for users' permission before sharing any questions and data with ChatGPT. Presumably, any integration with Meta would work similarly.
AI

OpenAI's 'Media Manager' Mocked, Amid Accusations of Robbing Creative Professionals (yahoo.com) 63

"Amid the hype surrounding Apple's new deal with OpenAI, one issue has been largely papered over," argues the Executive Director of America's writers' advocacy group, the Authors Guild.

OpenAI's foundational models "are, and have always been, built atop the theft of creative professionals' work." [L]ast month the company quietly announced Media Manager, scheduled for release in 2025. A tool purportedly designed to allow creators and content owners to control how their work is used, Media Manager is really a shameless attempt to evade responsibility for the theft of artists' intellectual property that OpenAI is already profiting from.

OpenAI says this tool would allow creators to identify their work and choose whether to exclude it from AI training processes. But this does nothing to address the fact that the company built its foundational models using authors' and other creators' works without consent, compensation or control over how OpenAI users will be able to imitate the artists' styles to create new works. As it's described, Media Manager puts the burden on creators to protect their work and fails to address the company's past legal and ethical transgressions. This overture is like having your valuables stolen from your home and then hearing the thief say, "Don't worry, I'll give you a chance to opt out of future burglaries ... next year...."

AI companies often argue that it would be impossible for them to license all the content that they need and that doing so would bring progress to a grinding halt. This is simply untrue. OpenAI has signed a succession of licensing agreements with publishers large and small. While the exact terms of these agreements are rarely released to the public, the compensation estimates pale in comparison with the vast outlays for computing power and energy that the company readily spends. Payments to authors would have minimal effects on AI companies' war chests, but receiving royalties for AI training use would be a meaningful new revenue stream for a profession that's already suffering...

We cannot trust tech companies that swear their innovations are so important that they do not need to pay for one of the main ingredients — other people's creative works. The "better future" we are being sold by OpenAI and others is, in fact, a dystopia. It's time for creative professionals to stand together, demand what we are owed and determine our own futures.

The Authors Guild (and 17 other plaintiffs) are now in an ongoing lawsuit against OpenAI and Microsoft. The Guild's executive director also notes that there's "a class action filed by visual artists against Stability AI, Runway AI, Midjourney and Deviant Art, a lawsuit by music publishers against Anthropic for infringement of song lyrics, and suits in the U.S. and U.K. brought by Getty Images against Stability AI for copyright infringement of photographs."

They conclude that "The best chance for the wider community of artists is to band together."
AI

Foundation Honoring 'Star Trek' Creator Offers $1M Prize for AI Startup Benefiting Humanity (yahoo.com) 37

The Roddenberry Foundation — named for Star Trek creator Gene Roddenberry — "announced Tuesday that this year's biennial award would focus on artificial intelligence that benefits humanity," reports the Los Angeles Times: Lior Ipp, chief executive of the foundation, told The Times there's a growing recognition that AI is becoming more ubiquitous and will affect all aspects of our lives. "We are trying to ... catalyze folks to think about what AI looks like if it's used for good," Ipp said, "and what it means to use AI responsibly, ethically and toward solving some of the thorny global challenges that exist in the world...."

Ipp said the foundation shares the broad concern about AI and sees the award as a means to potentially contribute to creating those guardrails... Inspiration for the theme was also borne out of the applications the foundation received last time around. Ipp said the prize, which is "issue-agnostic" but focused on early-stage tech, produced compelling uses of AI and machine learning in agriculture, healthcare, biotech and education. "So," he said, "we sort of decided to double down this year on specifically AI and machine learning...."

Though the foundation isn't prioritizing a particular issue, the application states that it is looking for ideas that have the potential to push the needle on one or more of the United Nations' 17 sustainable development goals, which include eliminating poverty and hunger as well as boosting climate action and protecting life on land and underwater.

The Foundation's most recent winner was Sweden-based Elypta, according to the article, "which Ipp said is using liquid biopsies, such as a blood test, to detect cancer early."

"We believe that building a better future requires a spirit of curiosity, a willingness to push boundaries, and the courage to think big," said Rod Roddenberry, co-founder of the Roddenberry Foundation. "The Prize will provide a significant boost to AI pioneers leading these efforts." According to the Foundation's announcement, the Prize "embodies the Roddenberry philosophy's promise of a future in which technology and human ingenuity enable everyone — regardless of background — to thrive."

"By empowering entrepreneurs to dream bigger and innovate valiantly, the Roddenberry Prize seeks to catalyze the development of AI solutions that promote abundance and well-being for all."
AI

Our Brains React Differently to Deepfake Voices, Researchers Find (news.uzh.ch) 14

"University of Zurich researchers have discovered that our brains process natural human voices and 'deepfake' voices differently," writes Slashdot reader jenningsthecat.

From the University's announcement: The researchers first used psychoacoustical methods to test how well human voice identity is preserved in deepfake voices. To do this, they recorded the voices of four male speakers and then used a conversion algorithm to generate deepfake voices. In the main experiment, 25 participants listened to multiple voices and were asked to decide whether or not the identities of two voices were the same. Participants either had to match the identity of two natural voices, or of one natural and one deepfake voice.

The deepfakes were correctly identified in two-thirds of cases. "This illustrates that current deepfake voices might not perfectly mimic an identity, but do have the potential to deceive people," says Claudia Roswandowitz, first author and a postdoc at the Department of Computational Linguistics.

The researchers then used imaging techniques to examine which brain regions responded differently to deepfake voices compared to natural voices. They successfully identified two regions that were able to recognize the fake voices: the nucleus accumbens and the auditory cortex. "The nucleus accumbens is a crucial part of the brain's reward system. It was less active when participants were tasked with matching the identity between deepfakes and natural voices," says Claudia Roswandowitz. In contrast, the nucleus accumbens showed much more activity when it came to comparing two natural voices.

The complete paper appears in Nature.
AI

Multiple AI Companies Ignore Robots.txt Files, Scrape Web Content, Says Licensing Firm (yahoo.com) 108

Multiple AI companies are ignoring robots.txt files meant to block the scraping of web content for generative AI systems, reports Reuters — citing a warning sent to publishers by content licensing startup TollBit. TollBit, an early-stage startup, is positioning itself as a matchmaker between content-hungry AI companies and publishers open to striking licensing deals with them. The company tracks AI traffic to the publishers' websites and uses analytics to help both sides settle on fees to be paid for the use of different types of content... It says it had 50 websites live as of May, though it has not named them. According to the TollBit letter, Perplexity is not the only offender that appears to be ignoring robots.txt. TollBit said its analytics indicate "numerous" AI agents are bypassing the protocol, a standard tool used by publishers to indicate which parts of their sites can be crawled.

"What this means in practical terms is that AI agents from multiple sources (not just one company) are opting to bypass the robots.txt protocol to retrieve content from sites," TollBit wrote. "The more publisher logs we ingest, the more this pattern emerges."
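The robots.txt rules these crawlers are bypassing are easy to inspect yourself: Python's standard-library `urllib.robotparser` shows exactly what a compliant crawler would be permitted to fetch. A minimal sketch follows; the `GPTBot` agent name and the example rules are illustrative, not taken from any particular publisher's file:

```python
from urllib import robotparser

# Example rules a publisher might serve at /robots.txt to block one
# AI crawler while allowing everyone else:
rules = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

# A compliant crawler matching the "GPTBot" entry must honor Disallow: /
print(rp.can_fetch("GPTBot", "https://example.com/article"))     # False
# Other agents fall through to the catch-all entry and may crawl freely
print(rp.can_fetch("Googlebot", "https://example.com/article"))  # True
```

The point underlying TollBit's complaint is visible here: the protocol is purely advisory. Nothing enforces these rules server-side; a crawler that chooses to ignore the file can simply fetch the pages anyway.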


The article includes this quote from the president of the News Media Alliance (a trade group representing over 2,200 U.S.-based publishers). "Without the ability to opt out of massive scraping, we cannot monetize our valuable content and pay journalists. This could seriously harm our industry."

Reuters also notes another threat facing news sites: Publishers have been raising the alarm about news summaries in particular since Google rolled out a product last year that uses AI to create summaries in response to some search queries. If publishers want to prevent their content from being used by Google's AI to help generate those summaries, they must use the same tool that would also prevent them from appearing in Google search results, rendering them virtually invisible on the web.
Red Hat Software

Red Hat's RHEL-Based In-Vehicle OS Attains Milestone Safety Certification (networkworld.com) 36

In 2022, Red Hat announced plans to extend RHEL to the automotive industry through Red Hat In-Vehicle Operating System (providing automakers with an open and functionally safe platform). And this week Red Hat announced it achieved ISO 26262 ASIL-B certification from exida for the Linux math library (libm.so glibc) — a fundamental component of that Red Hat In-Vehicle Operating System.

From Red Hat's announcement: This milestone underscores Red Hat's pioneering role in obtaining continuous and comprehensive Safety Element out of Context certification for Linux in automotive... This certification demonstrates that the engineering of the math library components individually and as a whole meets or exceeds stringent functional safety standards, ensuring substantial reliability and performance for the automotive industry. The certification of the math library is a significant milestone that strengthens confidence in Linux as a viable platform of choice for safety-related automotive applications of the future...

By working with the broader open source community, Red Hat can make use of the rigorous testing and analysis performed by Linux maintainers, collaborating across upstream communities to deliver open standards-based solutions. This approach enhances long-term maintainability and limits vendor lock-in, providing greater transparency and performance. Red Hat In-Vehicle Operating System is poised to offer a safety-certified Linux-based operating system capable of concurrently supporting multiple safety-related and non-safety-related applications in a single instance. These applications include advanced driver-assistance systems (ADAS), digital cockpit, infotainment, body control, telematics, artificial intelligence (AI) models and more. Red Hat is also working with key industry leaders to deliver pre-tested, pre-integrated software solutions, accelerating the route to market for SDV concepts.

"Red Hat is fully committed to attaining continuous and comprehensive safety certification of Linux natively for automotive applications," according to the announcement, "and has the industry's largest pool of Linux maintainers and contributors committed to this initiative..."

Or, as Network World puts it, "The phrase 'open source for the open road' is now being used to describe the inevitable fit between the character of Linux and the need for highly customizable code in all sorts of automotive equipment."
