Government

Why DARPA is Funding an AI-Powered Bug-Spotting Challenge (msn.com) 43

Somewhere in America's Defense Department, the DARPA R&D agency is running a two-year contest to write an AI-powered program "that can scan millions of lines of open-source code, identify security flaws and fix them, all without human intervention," reports the Washington Post. [Alternate URL here.]

But as the Post sees it, "The contest is one of the clearest signs to date that the government sees flaws in open-source software as one of the country's biggest security risks, and considers artificial intelligence vital to addressing it." Free open-source programs, such as the Linux operating system, help run everything from websites to power stations. The code isn't inherently worse than what's in proprietary programs from companies like Microsoft and Oracle, but there aren't enough skilled engineers tasked with testing it. As a result, poorly maintained free code has been at the root of some of the most expensive cybersecurity breaches of all time, including the 2017 Equifax disaster that exposed the personal information of half of all Americans. The incident, which led to the largest-ever data breach settlement, cost the company more than $1 billion in improvements and penalties.

If people can't keep up with all the code being woven into every industrial sector, DARPA hopes machines can. "The goal is having an end-to-end 'cyber reasoning system' that leverages large language models to find vulnerabilities, prove that they are vulnerabilities, and patch them," explained one of the advising professors, Arizona State's Yan Shoshitaishvili.... Some large open-source projects are run by near-Wikipedia-size armies of volunteers and are generally in good shape. Some have maintainers who are given grants by big corporate users that turn it into a job. And then there is everything else, including programs written as homework assignments by authors who barely remember them.
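To make that pipeline concrete, here is a minimal, purely illustrative sketch of the find-prove-patch loop Shoshitaishvili describes. It is not any team's actual entry; the helpers llm_complete and crashes_on are hypothetical stand-ins for an LLM API call and a build-and-run test harness.

```python
# Illustrative sketch only; none of these helpers correspond to a real DARPA entry.
from pathlib import Path

def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for a call to a large language model."""
    raise NotImplementedError

def crashes_on(source: Path, candidate_input: bytes) -> bool:
    """Hypothetical harness: build the target and return True if the input crashes it."""
    raise NotImplementedError

def triage_file(source: Path) -> None:
    code = source.read_text()
    # 1. Find: ask the model for suspected vulnerabilities in this file.
    findings = llm_complete(f"List likely memory-safety bugs in:\n{code}")
    for finding in findings.splitlines():
        # 2. Prove: request an input intended to trigger the bug, then actually run it.
        poc = llm_complete(f"Give an input that triggers: {finding}").encode()
        if not crashes_on(source, poc):
            continue  # unproven report; discard it rather than file a false positive
        # 3. Patch: request a fix, then confirm the proof-of-vulnerability no longer crashes.
        source.write_text(llm_complete(f"Rewrite this code to fix '{finding}':\n{code}"))
        if crashes_on(source, poc):
            source.write_text(code)  # the patch did not hold; restore the original file
```

The discipline the contest demands is that a reported flaw only counts once a concrete input demonstrates it, and a patch only counts once that same input no longer does.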

"Open source has always been 'Use at your own risk,'" said Brian Behlendorf, who started the Open Source Security Foundation after decades of maintaining a pioneering free server software, Apache, and other projects at the Apache Software Foundation. "It's not free as in speech, or even free as in beer," he said. "It's free as in puppy, and it needs care and feeding."

According to the article, 40 teams entered the contest, and seven received $1 million in funding to continue to the next round, with the finalists to be announced at this year's Def Con.

"Under the terms of the DARPA contest, all finalists must release their programs as open source," the article points out, "so that software vendors and consumers will be able to run them."
AI

Journalists at 'The Atlantic' Demand Assurances Their Jobs Will Be Protected From OpenAI (msn.com) 57

"As media bosses scramble to decide if and how they should partner with AI companies, workers are increasingly concerned that the technology could imperil their jobs or degrade their work..." reports the Washington Post.

The latest example? "Two months after the Atlantic reached a licensing deal with OpenAI, staffers at the storied magazine are demanding the company ensure their jobs and work are protected." (Nearly 60 journalists have now signed a letter demanding the company "stop prioritizing its bottom line and champion the Atlantic's journalism.") The unionized staffers want the Atlantic bosses to include AI protections in the union contract, which the two sides have been negotiating since 2022. "Our editorial leaders say that The Atlantic is a magazine made by humans, for humans," the letter says. "We could not agree more..."

The Atlantic's new deal with OpenAI grants the tech firm access to the magazine's archives to train its AI tools. While the Atlantic in return will have special access to experiment with these AI tools, the magazine says it is not using AI to create journalism. But some journalists and media observers have raised concerns about whether AI tools are accurately and fairly manipulating the human-written text they work with. The Atlantic staffers' letter noted a pattern by ChatGPT of generating gibberish web addresses instead of the links intended to attribute the reporting it has borrowed, as well as sending readers to sites that have summarized Atlantic stories rather than the original work...

Atlantic spokeswoman Anna Bross said company leaders "agree with the general principles" expressed by the union. For that reason, she said, they recently proposed a commitment not to use AI to publish content "without human review and editorial oversight." Representatives from the Atlantic Union bargaining committee told The Washington Post that "the fact remains that the company has flatly refused to commit to not replacing employees with AI."

The article also notes that last month the union representing Lifehacker, Mashable and PCMag journalists "ratified a contract that protects union members from being laid off because AI has impacted their roles and requires the company to discuss any such plans to implement AI tools ahead of time."
Programming

Go Tech Lead Russ Cox Steps Down to Focus on AI-Powered Open-Source Contributor Bot (google.com) 12

Thursday Go's long-time tech lead Russ Cox made an announcement: Starting September 1, Austin Clements will be taking over as the tech lead of Go: both the Go team at Google and the overall Go project. Austin is currently the tech lead for what we sometimes call the "Go core", which encompasses compiler toolchain, runtime, and releases. Cherry Mui will be stepping up to lead those areas.

I am not leaving the Go project, but I think the time is right for a change... I will be shifting my focus to work more on Gaby [or "Go AI bot," an open-source contributor agent] and Oscar [an open-source contributor agent architecture], trying to make useful contributions in the Go issue tracker to help all of you work more productively. I am hopeful that work on Oscar will uncover ways to help open source maintainers that will be adopted by other projects, just like some of Go's best ideas have been adopted by other projects. At the highest level, my goals for Oscar are to build something useful, learn something new, and chart a path for other projects. These are the same broad goals I've always had for our work on Go, so in that sense Oscar feels like a natural continuation.

The post notes that new tech lead Austin Clements "has been working on Go at Google since 2014" (and Mui since 2016). "Their judgment is superb and their knowledge of Go and the systems it runs on both broad and deep. When I have general design questions or need to better understand details of the compiler, linker, or runtime, I turn to them." It's important to remember that tech lead — like any position of leadership — is a service role, not an honorary title. I have been leading the Go project for over 12 years, serving all of you, and trying to create the right conditions for all of you to do your best work. Large projects like Go absolutely benefit from stable leadership, but they can also benefit from leadership changes. New leaders bring new strengths and fresh perspectives. For Go, I think 12+ years of one leader is enough stability; it's time for someone new to serve in this role.

In particular, I don't believe that the "BDFL" (benevolent dictator for life) model is healthy for a person or a project. It doesn't create space for new leaders. It's a single point of failure. It doesn't give the project room to grow. I think Python benefited greatly from Guido stepping down in 2018 and letting other people lead, and I've had in the back of my mind for many years that we should have a Go leadership change eventually....

I am going to consciously step back from decision making and create space for Austin and the others to step forward, but I am not disappearing. I will still be available to talk about Go designs, review CLs, answer obscure history questions, and generally help and support you all in whatever way I can. I will still file issues and send CLs from time to time, I have been working on a few potential new standard libraries, I will still advocate for Go across the industry, and I will be speaking about Go at GoLab in Italy in November...

I am incredibly proud of the work we have all accomplished together, and I am confident in the leaders both on the Go team at Google and in the Go community. You are all doing remarkable work, and I know you will continue to do that.

Power

Could AI Speed Up the Design of Nuclear Reactors? (byu.edu) 156

A professor at Brigham Young University "has figured out a way to shave critical years off the complicated design and licensing processes for modern nuclear reactors," according to an announcement from the university.

"AI is teaming up with nuclear power." The typical time frame and cost to license a new nuclear reactor design in the United States is roughly 20 years and $1 billion. To then build that reactor requires an additional five years and between $5 and $30 billion. By using AI in the time-consuming computational design process, [chemical engineering professor Matt] Memmott estimates a decade or more could be cut off the overall timeline, saving millions and millions of dollars in the process — which should prove critical given the nation's looming energy needs.... "Being able to reduce the time and cost to produce and license nuclear reactors will make that power cheaper and a more viable option for environmentally friendly power to meet the future demand...."

Engineers deal with elements from neutrons on the quantum scale all the way up to coolant flow and heat transfer on the macro scale. [Memmott] also said there are multiple layers of physics that are "tightly coupled" in that process: the movement of neutrons is tightly coupled to the heat transfer which is tightly coupled to materials which is tightly coupled to the corrosion which is coupled to the coolant flow. "A lot of these reactor design problems are so massive and involve so much data that it takes months of teams of people working together to resolve the issues," he said... Memmott is finding AI can reduce that heavy time burden and lead to more power production to not only meet rising demands, but to also keep power costs down for general consumers...

Technically speaking, Memmott's research proves the concept of replacing a portion of the required thermal hydraulic and neutronics simulations with a trained machine learning model to predict temperature profiles based on geometric reactor parameters that are variable, and then optimizing those parameters. The result would create an optimal nuclear reactor design at a fraction of the computational expense required by traditional design methods. For his research, he and BYU colleagues built a dozen machine learning algorithms to examine their ability to process the simulated data needed in designing a reactor. They identified the top three algorithms, then refined the parameters until they found one that worked really well and could handle a preliminary data set as a proof of concept. It worked (and they published a paper on it) so they took the model and (for a second paper) put it to the test on a very difficult nuclear design problem: optimal nuclear shield design.
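As a rough, hedged illustration of that surrogate-modeling idea (not Memmott's actual code or data), the sketch below trains a regression model on a small batch of simulated results mapping geometric parameters to a peak temperature, then optimizes the geometry against the cheap surrogate instead of re-running the expensive simulation each time; the expensive_simulation function here is a synthetic placeholder.

```python
# Sketch of surrogate-assisted design optimization; all numbers are synthetic.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

def expensive_simulation(params):
    """Placeholder for a costly thermal-hydraulic/neutronics run: geometry in, peak temperature out."""
    shield_thickness, channel_radius = params
    return 900.0 - 40.0 * shield_thickness + 300.0 * (channel_radius - 0.05) ** 2

# A modest number of "expensive" runs to train the surrogate on.
X_train = rng.uniform(low=[0.5, 0.01], high=[3.0, 0.10], size=(200, 2))
y_train = np.array([expensive_simulation(x) for x in X_train])
surrogate = GradientBoostingRegressor().fit(X_train, y_train)

# Optimize the geometry against the fast surrogate prediction instead of the simulator.
result = differential_evolution(
    lambda p: float(surrogate.predict(p.reshape(1, -1))[0]),
    bounds=[(0.5, 3.0), (0.01, 0.10)],
    seed=0,
)
print("surrogate-optimal geometry:", result.x)
```

In a real workflow the surrogate's best candidates would still be verified with the full simulation, but far fewer expensive runs are needed overall, which is the source of the claimed time savings.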

The resulting papers, recently published in academic journal Nuclear Engineering and Design, showed that their refined model can geometrically optimize the design elements much faster than the traditional method.

In two days, Memmott's AI algorithm determined an optimal nuclear-reactor shield design that took a real-world molten salt reactor company six months to produce. "Of course, humans still ultimately make the final design decisions and carry out all the safety assessments," Memmott says in the announcement, "but it saves a significant amount of time at the front end....

"Our demand for electricity is going to skyrocket in years to come and we need to figure out how to produce additional power quickly. The only baseload power we can make in the Gigawatt quantities needed that is completely emissions free is nuclear power."

Thanks to long-time Slashdot reader schwit1 for sharing the article.
Businesses

iPad Sales Help 'Bail Out' Apple Amid a Continued iPhone Slide (techcrunch.com) 44

Apple reported a new June quarter revenue record of $85.8 billion, up 5 percent from a year ago, fueled largely by new iPad sales. iPad "saw the biggest category increase for the quarter, up from $5.8 billion to $7.2 billion year-over-year," reports TechCrunch. It helped counter slowed iPhone revenue, "which dropped from $39.7 billion to $39.3 billion year-on-year." From the report: In spite of a drop for the quarter, iPhone remained Apple's most important category by a wide margin, followed by its services business, which includes software offerings like iCloud, Apple TV+ and Apple Music. That category continued to grow, up to $24.2 billion from $21.2 billion over the same three-month period last year. Much of the iPhone slowdown can be attributed to the greater China region. Overall, the region dropped from $15.8 billion to $14.7 billion for the quarter. Canalys figures from last week show a marked decline in iPhone sales, down 6.7% from 10.4 million to 9.7 million for the quarter, Reuters reported.

The drop in Apple's third-largest region (behind the Americas and Europe) had a clear impact on the company's bottom line. The company aggressively discounted iPhone prices in China starting in May, as competition intensified from domestic rivals. The strategy resulted in strong iPhone sales that month, up close to 40% from a year prior. [...] Q3 marked the second consecutive quarterly decline for global iPhone sales. The news puts additional pressure on the generative AI strategy that the company laid out at WWDC in June.

Music

Suno & Udio To RIAA: Your Music Is Copyrighted, You Can't Copyright Styles (torrentfreak.com) 85

AI music generators Suno and Udio responded to the lawsuits filed by the major recording labels, arguing that their platforms are tools for making new, original music that "didn't and often couldn't previously exist."

"Those genres and styles -- the recognizable sounds of opera, or jazz, or rap music -- are not something that anyone owns," the companies said. "Our intellectual property laws have always been carefully calibrated to avoid allowing anyone to monopolize a form of artistic expression, whether a sonnet or a pop song. IP rights can attach to a particular recorded rendition of a song in one of those genres or styles. But not to the genre or style itself." TorrentFreak reports: "[The labels] frame their concern as one about 'copies' of their recordings made in the process of developing the technology -- that is, copies never heard or seen by anyone, made solely to analyze the sonic and stylistic patterns of the universe of pre-existing musical expression. But what the major record labels really don't want is competition." The labels' position is that any competition must be legal, and the AI companies state quite clearly that the law permits the use of copyrighted works in these circumstances. Suno and Udio also make it clear that snippets of copyrighted music aren't stored as a library of pre-existing content in the neural networks of their AI models, "outputting a collage of 'samples' stitched together from existing recordings" when prompted by users.

"[The neural networks were] constructed by showing the program tens of millions of instances of different kinds of recordings," Suno explains. "From analyzing their constitutive elements, the model derived a staggeringly complex collection of statistical insights about the auditory characteristics of those recordings -- what types of sounds tend to appear in which kinds of music; what the shape of a pop song tends to look like; how the drum beat typically varies from country to rock to hip-hop; what the guitar tone tends to sound like in those different genres; and so on." These models are vast stores, not of copyrighted music, the defendants say, but information about what musical styles consist of, and it's from that information new music is made.

Most copyright lawsuits in the music industry are about reproduction and public distribution of identified copyright works, but that's certainly not the case here. "The Complaint explicitly disavows any contention that any output ever generated by Udio has infringed their rights. While it includes a variety of examples of outputs that allegedly resemble certain pre-existing songs, the Complaint goes out of its way to say that it is not alleging that those outputs constitute actionable copyright infringement." With Udio declaring that, as a matter of law, "that key point makes all the difference," Suno's conclusion is served raw. "That concession will ultimately prove fatal to Plaintiffs' claims. It is fair use under copyright law to make a copy of a protected work as part of a back-end technological process, invisible to the public, in the service of creating an ultimately non-infringing new product." Noting that Congress enacted the first copyright law in 1791, Suno says that in the 233 years since, not a single case has ever reached a contrary conclusion.

In addition to addressing allegations unique to their individual cases, the AI companies accuse the labels of various types of anti-competitive behavior. These range from imposing conditions to prevent streaming services from obtaining licensed music from smaller labels at lower rates, to seeking to impose a "no AI" policy on licensees, to claims that they "may have responded to outreach from potential commercial counterparties by engaging in one or more concerted refusals to deal." The defendants say this type of behavior is fueled by the labels' dominant control of copyrighted works and by extension, the overall market. Here, however, ownership of copyrighted music is trumped by the existence and knowledge of musical styles, over which nobody can claim ownership or control. "No one owns musical styles. Developing a tool to empower many more people to create music, by scrupulously analyzing what the building blocks of different styles consist of, is a quintessential fair use under longstanding and unbroken copyright doctrine. Plaintiffs' contrary vision is fundamentally inconsistent with the law and its underlying values."
You can read Suno and Udio's answers to the RIAA's lawsuits here (PDF) and here (PDF).
Google

Google Pulls 'Dear Sydney' Olympics Ad After Appearing Tone-Deaf To AI Concerns (variety.com) 49

Google has pulled its "Dear Sydney" Olympics ad after it garnered significant backlash. (You can still watch the ad on YouTube, but comments have been turned off.) According to Ad Age, the ad was "meant to promote Google's Gemini AI platform, but viewers had a difficult time looking past its miscalculated storyline." From the report: In the ad, a father wants to help his daughter write a letter to her idol, Olympic track star Sydney McLaughlin-Levrone. But instead of encouraging her to take part in such a personal moment, he delegates Gemini to write the letter for her. Viewers and ad leaders lambasted the spot on social media for being tone-deaf. Some were upset over Google evidently seeing no problem with an AI co-opting a formative childhood act, while others alluded to its reinforcing of a more existential fear, that AI is bound to replace meaningful work. The ad got significant airplay during NBCU's TV coverage of the Olympics this week, including on NBC in primetime, as well as on E!, CNBC and USA, according to iSpot.tv. It last ran on national TV around midnight of July 30 on USA, according to iSpot.TV. "While the ad tested well before airing, given the feedback, we've decided to phase the ad out of our Olympics rotation," a Google spokesperson told Ad Age today.

The company earlier this week defended the ad in a statement: "We believe that AI can be a great tool for enhancing human creativity, but can never replace it. Our goal was to create an authentic story celebrating Team USA. It showcases a real-life track enthusiast and her father, and aims to show how the Gemini app can provide a starting point, thought starter, or early draft for someone looking for ideas for their writing."
Google

Google Hires Character.AI Cofounders and Licenses Its Models 3

An anonymous reader shares a report: Google has agreed to pay a licensing fee [non-paywalled link] to chatbot maker Character.AI for its models and will hire its cofounders and many of its researchers, Character's leaders told staff on Friday. In a meeting, the leaders told staff that investors would be bought out at a valuation of about $88 per share. That's about 2.5 times the value of shares in Character's 2023 Series A, which valued the company at $1 billion, they said.

The Character employees joining Google will work on its Gemini AI efforts, they said. Character will switch to open-source models such as Meta Platforms' Llama 3.1 to power its products, rather than its in-house models, they said. The deal follows a string of similar arrangements by other well-funded artificial intelligence startups. AI developers Adept and Inflection have both effectively sold themselves to Amazon and Microsoft, respectively, in the last five months despite raising considerable capital.
United Kingdom

UK Government Shelves $1.66 Billion Tech and AI Plans 35

An anonymous reader shares a report: The new Labour government has shelved $1.66 billion of funding promised by the Conservatives for tech and Artificial Intelligence (AI) projects, the BBC has learned. It includes $1 billion for the creation of an exascale supercomputer at Edinburgh University and a further $640 million for AI Research Resource, which funds computing power for AI. Both funds were unveiled less than 12 months ago.

The Department for Science, Innovation and Technology (DSIT) said the money was promised by the previous administration but was never allocated in its budget. Some in the industry have criticised the government's decision. Tech business founder Barney Hussey-Yeo posted on X that reducing investment risked "pushing more entrepreneurs to the US." Businessman Chris van der Kuyl described the move as "idiotic." Trade body techUK said the government now needed to make "new proposals quickly" or the UK risked "losing out" to other countries in what are crucial industries of the future.
AI

Elliott Says Nvidia is in a 'Bubble' and AI is 'Overhyped' 73

Hedge fund Elliott Management has told investors that Nvidia is in a "bubble," and the AI technology driving the chipmaking giant's share price is "overhyped." From a report: The Florida-based firm, which manages about $70bn in assets, said in a recent letter to clients seen by the Financial Times that the megacap technology stocks, particularly Nvidia, were in "bubble land." [non-paywalled link] It added that it was "sceptical" that Big Tech companies would keep buying the chipmaker's graphics processing units in such high volumes, and that AI is "overhyped with many applications not ready for prime time."

[...] Many of AI's supposed uses are "never going to be cost-efficient, are never going to actually work right, will take up too much energy, or will prove to be untrustworthy," it said. Elliott, which was founded by billionaire Paul Singer in 1977, added in its client letter that, so far, AI had failed to deliver a promised huge uplift in productivity. "There are few real uses," it said, other than "summarising notes of meetings, generating reports and helping with computer coding." AI, it added, was in effect software that had so far not delivered "value commensurate with the hype."
Google

Google Gemini 1.5 Pro Leaps Ahead In AI Race, Challenging GPT-4o (venturebeat.com) 11

An anonymous reader quotes a report from VentureBeat: Google launched its latest artificial intelligence powerhouse, Gemini 1.5 Pro, today, making the experimental "version 0801" available for early testing and feedback through Google AI Studio and the Gemini API. This release marks a major leap forward in the company's AI capabilities and has already sent shockwaves through the tech community. The new model has quickly claimed the top spot on the prestigious LMSYS Chatbot Arena leaderboard (built with Gradio), boasting an impressive ELO score of 1300.

This achievement puts Gemini 1.5 Pro ahead of formidable competitors like OpenAI's GPT-4o (ELO: 1286) and Anthropic's Claude-3.5 Sonnet (ELO: 1271), potentially signaling a shift in the AI landscape. Simon Tokumine, a key figure in the Gemini team, celebrated the release in a post on X.com, describing it as "the strongest, most intelligent Gemini we've ever made." Early user feedback supports this claim, with one Redditor calling the model "insanely good" and expressing hope that its capabilities won't be scaled back.
"A standout feature of the 1.5 series is its expansive context window of up to two million tokens, far surpassing many competing models," adds VentureBeat. "This allows Gemini 1.5 Pro to process and reason about vast amounts of information, including lengthy documents, extensive code bases, and extended audio or video content."
Robotics

Fully-Automatic Robot Dentist Performs World's First Human Procedure (newatlas.com) 53

For the first time, an AI-controlled autonomous robot performed an entire dental procedure on a human patient, completing the task eight times faster than a human dentist could. New Atlas reports: The system, built by Boston company Perceptive, uses a hand-held 3D volumetric scanner, which builds a detailed 3D model of the mouth, including the teeth, gums and even nerves under the tooth surface, using optical coherence tomography, or OCT. This cuts harmful X-Ray radiation out of the process, as OCT uses nothing more than light beams to build its volumetric models, which come out at high resolution, with cavities automatically detected at an accuracy rate around 90%. At this point, the (human) dentist and patient can discuss what needs doing -- but once those decisions are made, the robotic dental surgeon takes over. It plans out the operation, then jolly well goes ahead and does it.

The machine's first specialty: preparing a tooth for a dental crown. Perceptive claims this is generally a two-hour procedure that dentists will normally split into two visits. The robo-dentist knocks it off in closer to 15 minutes. Here's a time-lapse video of the drilling portion, looking very much like a CNC machine at work. Remarkably, the company claims the machine can take care of business safely "even in the most movement-heavy conditions," and that dry run testing on moving humans has all been successful. [...] The robot's not FDA-approved yet, and Perceptive hasn't placed a timeline on rollout, so it may be some years yet before the public gets access to this kind of treatment.

Microsoft

Microsoft Dynamics 365 Called Out For 'Worker Surveillance' (theregister.com) 36

Microsoft Dynamics 365's "field service management" tools enable employers to monitor mobile workers via smartphone apps -- "allegedly to the detriment of their autonomy and dignity," reports The Register. From the report: According to a probe by Cracked Labs -- an Austrian nonprofit research group -- the software is part of a broader set of applications that disempowers workers through algorithmic management. The case study [PDF] summarizes how employers in Europe actually use software and smartphone apps to oversee field technicians, home workers, and cleaning staff. It's part of a larger ongoing project helmed by the group called "Surveillance and Digital Control at Work," which includes contributions from AlgorithmWatch; Jeremias Adams-Prassl, professor of law at the University of Oxford; and trade unions UNI Europa and GPA.

Mobile maintenance workers used to have a substantial amount of autonomy when they were equipped with basic mobile phones, the study notes, but smartphones have allowed employers to track what mobile workers do, when they do it, where they are, and gather many other data points. The effect of this monitoring, the report argues, is diminished worker discretion, autonomy, and sense of purpose due to task-based micromanagement. The shift has also accelerated and intensified work stress, with little regard for workers' capabilities, differences in lifestyle, and job practices.
"Field service workers travel to multiple locations servicing different products every day," a Microsoft spokesperson told The Register. "Dynamics 365 Field Service and its Copilot capabilities are designed to help field service workers schedule, plan and provide onsite maintenance and repairs in the right location, on time with the right information and workplace guides on their device to complete their jobs."

"Dynamics 365 Field Service does not use AI to recommend individual workers for specific jobs based on previous performance. Dynamics 365 Field Service was developed in accordance with our Responsible AI principles and data privacy statement. Customers are solely responsible for using Dynamics 365 Field Service in compliance with all applicable laws, including laws relating to accessing individual employee analytics and monitoring."
Government

US Progressives Push For Nvidia Antitrust Investigation (reuters.com) 42

Progressive groups and Senator Elizabeth Warren are urging the Department of Justice to investigate Nvidia for potential antitrust violations due to its dominant position in the AI chip market. The groups criticize Nvidia's bundling of software and hardware, claiming it stifles innovation and locks in customers. Reuters reports: Demand Progress and nine other groups wrote a letter (PDF) this week urging Department of Justice antitrust chief Jonathan Kanter to probe business practices at Nvidia, whose market value hit $3 trillion this summer on demand for chips able to run the complex models behind generative AI. The groups, which oppose monopolies and promote government oversight of tech companies, among other issues, took aim at Nvidia's bundling of software and hardware, a practice that French antitrust enforcers have flagged as they prepare to bring charges.

"This aggressively proprietary approach, which is strongly contrary to industry norms about collaboration and interoperability, acts to lock in customers and stifles innovation," the groups wrote. Nvidia has roughly 80% of the AI chip market, including the custom AI processors made by cloud computing companies like Google, Microsoft and Amazon.com. The chips made by the cloud giants are not available for sale themselves but typically rented through each platform.
A spokesperson for Nvidia said: "Regulators need not be concerned, as we scrupulously adhere to all laws and ensure that NVIDIA is openly available in every cloud and on-prem for every enterprise. We'll continue to support aspiring innovators in every industry and market and are happy to provide any information regulators need."
Government

Senators Propose 'Digital Replication Right' For Likeness, Extending 70 Years After Death 46

An anonymous reader quotes a report from Ars Technica: On Wednesday, US Sens. Chris Coons (D-Del.), Marsha Blackburn (R.-Tenn.), Amy Klobuchar (D-Minn.), and Thom Tillis (R-NC) introduced the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act of 2024. The bipartisan legislation, up for consideration in the US Senate, aims to protect individuals from unauthorized AI-generated replicas of their voice or likeness. The NO FAKES Act would create legal recourse for people whose digital representations are created without consent. It would hold both individuals and companies liable for producing, hosting, or sharing these unauthorized digital replicas, including those created by generative AI. Due to generative AI technology that has become mainstream in the past two years, creating audio or image media fakes of people has become fairly trivial, with easy photorealistic video replicas likely next to arrive. [...]

To protect a person's digital likeness, the NO FAKES Act introduces a "digital replication right" that gives individuals exclusive control over the use of their voice or visual likeness in digital replicas. This right extends 10 years after death, with possible five-year extensions if actively used. It can be licensed during life and inherited after death, lasting up to 70 years after an individual's death. Along the way, the bill defines what it considers to be a "digital replica": "DIGITAL REPLICA.-The term "digital replica" means a newly created, computer-generated, highly realistic electronic representation that is readily identifiable as the voice or visual likeness of an individual that- (A) is embodied in a sound recording, image, audiovisual work, including an audiovisual work that does not have any accompanying sounds, or transmission- (i) in which the actual individual did not actually perform or appear; or (ii) that is a version of a sound recording, image, or audiovisual work in which the actual individual did perform or appear, in which the fundamental character of the performance or appearance has been materially altered; and (B) does not include the electronic reproduction, use of a sample of one sound recording or audiovisual work into another, remixing, mastering, or digital remastering of a sound recording or audiovisual work authorized by the copyright holder."
The NO FAKES Act "includes provisions that aim to balance IP protection with free speech," notes Ars. "It provides exclusions for recognized First Amendment protections, such as documentaries, biographical works, and content created for purposes of comment, criticism, or parody."
AI

Argentina Will Use AI To 'Predict Future Crimes' (theguardian.com) 52

Argentina's security forces have announced plans to use AI to "predict future crimes" in a move experts have warned could threaten citizens' rights. From a report: The country's far-right president Javier Milei this week created the Artificial Intelligence Applied to Security Unit, which the legislation says will use "machine-learning algorithms to analyse historical crime data to predict future crimes." It is also expected to deploy facial recognition software to identify "wanted persons," patrol social media, and analyse real-time security camera footage to detect suspicious activities.

While the ministry of security has said the new unit will help to "detect potential threats, identify movements of criminal groups or anticipate disturbances," the Minority Report-esque resolution has sent alarm bells ringing among human rights organisations. Experts fear that certain groups of society could be overly scrutinised by the technology, and have also raised concerns over who -- and how many security forces -- will be able to access the information.

Microsoft

Microsoft Now Lists OpenAI as a Competitor in AI and Search (techcrunch.com) 11

An anonymous reader shares a report: Microsoft has a long and tangled history with OpenAI, having invested a reported $13 billion in the ChatGPT maker as part of a long-term partnership. As part of the deal, Microsoft runs OpenAI's models across its enterprise and consumer products, and is OpenAI's exclusive cloud provider. However, the tech giant called the startup a "competitor" for the first time in an SEC filing on Tuesday.

In Microsoft's annual 10-K, OpenAI joined a long list of competitors in AI, alongside Anthropic, Amazon, and Meta. OpenAI was also listed alongside Google as a competitor to Microsoft in search, thanks to OpenAI's new SearchGPT feature announced last week. It's possible Microsoft is trying to change the narrative on its relationship with OpenAI in light of antitrust concerns -- the FTC is currently looking into the relationship, alongside similar cloud provider investments into AI startups.

Chrome

Chrome is Going To Use AI To Help You Compare Products From Across Your Tabs 41

Google wants to help ease the pain of comparison shopping across multiple tabs in Chrome with a new AI-powered tool that can summarize your tabs into one page. From a report: The tool, which Google is calling "tab compare," will use generative AI to pull product data from tabs you have open and collect it all into one table. Assuming it works and pulls accurate information, the tool seems like it could be a handy way to look at a number of different products in one unified view.

But while it's potentially useful, the tool could also take away traffic from sites that collect and compare product information -- which might be especially worrying for independent publishers that are already struggling to be seen on Google. I'm also skeptical that Google will correctly pull all of the finer details about various products into the tables it creates with tab compare. I don't always trust Google's accuracy right now! There are some limits on what tab compare can do. The tables it creates are limited to 10 items because "we've just found the column layout doesn't scale very well beyond that," Google spokesperson Joshua Cruz tells The Verge.
AI

AI Startup Suno Says Music Industry Suit Aims to Stifle Competition (bloomberg.com) 42

AI music startup Suno is pushing back against the world's biggest record labels, saying in a court filing that a lawsuit they filed against the company aims to stifle competition. From a report: In a filing Thursday in federal court in Massachusetts, Suno said that while the record labels argue the company infringed on their recorded music copyrights, the lawsuit actually reflects the industry's opposition to competition -- which Suno's AI software represents by making it easy for anyone to make music.

"Where Suno sees musicians, teachers, and everyday people using a new tool to create original music, the labels see a threat to their market share," the Cambridge, Massachusetts-based company wrote in the filing, which also asked the court to enter judgment in Suno's favor.

Social Networks

Reddit CEO Says Microsoft and Others Need To Pay To Search the Site (theverge.com) 78

After striking deals with Google and OpenAI, Reddit CEO Steve Huffman is calling on Microsoft and others to pay if they want to continue scraping the site's data. From a report: "Without these agreements, we don't have any say or knowledge of how our data is displayed and what it's used for, which has put us in a position now of blocking folks who haven't been willing to come to terms with how we'd like our data to be used or not used," Huffman said in an interview this week. He specifically named Microsoft, Anthropic, and Perplexity for refusing to negotiate, saying it has been "a real pain in the ass to block these companies."

Reddit has been escalating its fight against crawlers in recent months. At the beginning of July, its robots.txt file was updated to block web crawlers it doesn't have agreements with. Then people began noticing that Reddit results were only visible in Google results -- where Reddit is paid for its data to be shown -- and not other search engines like Bing. Huffman said that Microsoft has been using Reddit's data to train its AI and summarizing its content in Bing results "without telling us" and that Reddit's data has also been sold through the Bing API to other search engines.
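The robots.txt mechanism Reddit is leaning on here is simple to illustrate. The snippet below is not Reddit's actual file, just a simplified stand-in in which one named crawler is allowed and everything else is disallowed, checked with Python's standard robotparser; actual enforcement against non-compliant crawlers still has to happen server-side.

```python
# Simplified stand-in for a robots.txt that blocks crawlers without an agreement.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: Googlebot
Allow: /

User-agent: *
Disallow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

for agent in ("Googlebot", "bingbot", "SomeOtherCrawler"):
    url = "https://www.reddit.com/r/programming/"
    print(agent, "->", "allowed" if parser.can_fetch(agent, url) else "blocked")
```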
