Social Networks

Preparing to Monetize, Threads Launches New Tools for Users (axios.com) 17

"We're testing a few new ways to plan and manage your presence on Threads," announced top Threads/Instagram executive Adam Mosseri, promising their 200 million-plus users "enhanced insights to help you better understand your followers and how posts perform, and the ability to save multiple drafts with scheduling coming soon."

Axios reports: Helping creators avoid burnout has become a growing focus for Meta CEO Mark Zuckerberg, who said in July that the company's new generative AI tools can alleviate certain tasks like communicating with followers. Thursday's announcement was positioned as helping both businesses and creators -- suggesting that Meta is ramping up plans to start monetizing Threads, possibly as early as this year.
Technology

IKEA's Stock-Counting Warehouse Drones Will Fly Alongside Workers In the US (theverge.com) 47

IKEA is expanding its stock-counting drone system to operate alongside workers in the U.S., starting with its Perryville, Maryland distribution center. The Verge reports: The Verity-branded drones also come with a new AI-powered system that allows them to fly around warehouses 24/7. That means they'll now operate alongside human workers, helping to count inventory as well as identify if something's in the wrong spot. Previously, the drones only flew during nonoperational hours. Parag Parekh, the chief digital officer for Ikea retail, says in the press release that flights are prescheduled and that the drones use a "custom indoor positioning system to navigate higher levels of storage locations." They also have an obstacle detection system that allows them to reroute their paths to avoid collisions. Ikea is also working on several upgrades for the drones, including the ability to inspect unit loads and racks.

So far, Ikea's fleet consists of more than 250 drones operating across 73 warehouses in nine countries. Ikea first launched its drone system in partnership with Verity in 2021 and expanded it to more locations throughout Europe last year. Now, Ikea plans on bringing its AI-upgraded drones to more distribution centers in Europe and North America, which the company says will help "reduce the ergonomic strain on [human] co-workers, allowing them to focus on lighter and more interesting tasks."

Politics

OpenAI Says Iranian Group Used ChatGPT To Try To Influence US Election (axios.com) 27

An anonymous reader quotes a report from the Washington Post (Warning: source is paywalled; alternative source): Artificial intelligence company OpenAI said Friday that an Iranian group had used its ChatGPT chatbot to generate content to be posted on websites and social media, seemingly aimed at stirring up polarization among American voters in the presidential election. The sites and social media accounts that OpenAI discovered posted articles and opinions made with help from ChatGPT on topics including the conflict in Gaza and the Olympic Games. They also posted material about the U.S. presidential election, spreading misinformation and writing critically about both candidates, a company report said. Some appeared on sites that Microsoft last week said were used by Iran to post fake news articles intended to amp up political division in the United States, OpenAI said.

The AI company banned the ChatGPT accounts associated with the Iranian efforts and said their posts had not gained widespread attention from social media users. OpenAI found "a dozen" accounts on X and one on Instagram that it linked to the Iranian operation and said all appeared to have been taken down after it notified those social media companies. Ben Nimmo, principal investigator on OpenAI's intelligence and investigations team, said the activity was the first case of the company detecting an operation that had the U.S. election as a primary target. "Even though it doesn't seem to have reached people, it's an important reminder, we all need to stay alert but stay calm," he said.

Businesses

Ex-Google CEO Says Successful AI Startups Can Steal IP and Hire Lawyers To 'Clean Up the Mess' 42

Eric Schmidt, at a recent talk where he also commented on Google's work culture -- a remark he later walked back: If TikTok is banned, here's what I propose each and every one of you do: Say to your LLM the following: "Make me a copy of TikTok, steal all the users, steal all the music, put my preferences in it, produce this program in the next 30 seconds, release it, and in one hour, if it's not viral, do something different along the same lines."

That's the command. Boom, boom, boom, boom.

So, in the example that I gave of the TikTok competitor -- and by the way, I was not arguing that you should illegally steal everybody's music -- what you would do if you're a Silicon Valley entrepreneur, which hopefully all of you will be, is if it took off, then you'd hire a whole bunch of lawyers to go clean the mess up, right? But if nobody uses your product, it doesn't matter that you stole all the content.

And do not quote me.
AI

Can Google Make Stoplights Smarter? (scientificamerican.com) 64

An anonymous reader shares a report: Traffic along some of Seattle's stop-and-go streets is running a little smoother after Google tested out a new machine-learning system to optimize stoplight timing at five intersections. The company launched this test as part of its Green Light pilot program in 2023 in Seattle and a dozen other cities, including some notoriously congested places such as Rio de Janeiro, Brazil, and Kolkata, India. Across these test sites, local traffic engineers use Green Light's suggestions -- based on artificial intelligence and Google Maps data -- to adjust stoplight timing. Google intends for these changes to curb waiting at lights while increasing vehicle flow across busy throughways and intersections -- and, ultimately, to reduce greenhouse gases.

"We have seen positive results," says Mariam Ali, a Seattle Department of Transportation spokesperson. Green Light has provided "specific, actionable recommendations," she adds, and it has identified bottlenecks (and confirmed known ones) within the traffic system.

Managing the movement of vehicles through urban streets requires lots of time, money and consideration of factors such as pedestrian safety and truck routes. Google's foray into the field is one of many ongoing attempts to modernize traffic engineering by incorporating GPS app data, connected cars and artificial intelligence. Preliminary data suggest the system could reduce stops by up to 30 percent and emissions at intersections by up to 10 percent as a result of reduced idling, according to Google's 2024 Environmental Report. The company plans to expand to more cities soon. The newfangled stoplight system doesn't come close to replacing human decision-making in traffic engineering, however, and it may not be the sustainability solution Google claims it is.
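Google hasn't published Green Light's internals, so the following is only a toy illustration of the underlying idea -- demand-responsive signal timing -- with made-up function names and numbers, not Google's method. It splits a fixed signal cycle's green time between two competing approaches in proportion to observed traffic flow:

    def green_split(cycle_s, flow_ns, flow_ew, min_green_s=15.0):
        """Allocate a cycle's green time in proportion to observed demand,
        with a floor so the quieter approach (and its pedestrians) still
        gets served."""
        total = flow_ns + flow_ew
        green_ns = max(min_green_s, cycle_s * flow_ns / total)
        green_ew = max(min_green_s, cycle_s - green_ns)
        return green_ns, green_ew

    # Example: a 90-second cycle where north-south carries three times the
    # east-west flow (vehicles per hour) gets three quarters of the green.
    print(green_split(90, flow_ns=900, flow_ew=300))  # -> (67.5, 22.5)

Much of the stop reduction in systems like this comes from coordinating timing offsets across adjacent intersections so platoons of vehicles catch successive greens, rather than from any single intersection's split.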

The Courts

AI-powered 'Undressing' Websites Are Getting Sued (theverge.com) 107

The San Francisco City Attorney's office is suing 16 of the most frequently visited AI-powered "undressing" websites, often used to create nude deepfakes of women and girls without their consent. From a report: The landmark lawsuit, announced at a press conference by City Attorney David Chiu, says that the targeted websites were collectively visited over 200 million times in the first six months of 2024 alone.

The offending websites allow users to upload images of real, fully clothed people, which are then digitally "undressed" with AI tools that simulate nudity. One of these websites, which wasn't identified within the complaint, reportedly advertises: "Imagine wasting time taking her out on dates, when you can just use [the redacted website] to get her nudes."

AI

California Weakens Bill To Prevent AI Disasters Before Final Vote (techcrunch.com) 36

An anonymous reader shares a report: California's bill to prevent AI disasters, SB 1047, has faced significant opposition from many parties in Silicon Valley. California lawmakers bent slightly to that pressure on Thursday, adding several amendments suggested by AI firm Anthropic and other opponents. The bill then passed through California's Appropriations Committee, a major step toward becoming law, with several key changes, Senator Wiener's office told TechCrunch.

[...] SB 1047 still aims to prevent large AI systems from killing lots of people, or causing cybersecurity events that cost over $500 million, by holding developers liable. However, the bill now grants California's government less power to hold AI labs to account. Most notably, the bill no longer allows California's attorney general to sue AI companies for negligent safety practices before a catastrophic event has occurred. This was a suggestion from Anthropic. Instead, California's attorney general can seek injunctive relief, requesting that a company cease an operation it finds dangerous, and can still sue an AI developer if its model does cause a catastrophic event.

Google

Google's AI Search Gives Sites Dire Choice: Share Data or Die (bloomberg.com) 64

An anonymous reader shares a report: Google now displays convenient AI-based answers at the top of its search pages -- meaning users may never click through to the websites whose data is being used to power those results. But many site owners say they can't afford to block Google's AI from summarizing their content. That's because the Google tool that sifts through web content to come up with its AI answers is the same one that keeps track of web pages for search results, according to publishers. Blocking Alphabet's Google the way sites have blocked some of its AI competitors would also hamper a site's ability to be discovered online.

Google's dominance in search -- which a federal court ruled last week is an illegal monopoly -- is giving it a decisive advantage in the brewing AI wars, which search startups and publishers say is unfair as the industry takes shape. The dilemma is particularly acute for publishers, which face a choice between offering up their content for use by AI models that could make their sites obsolete and disappearing from Google search, a top source of traffic.
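The mechanics of that dilemma show up in robots.txt. Google does offer a separate crawler token, Google-Extended, for opting content out of Gemini model training, but per the report the AI answers in Search are fed by the same Googlebot crawl that builds the search index -- so blocking the AI means blocking Search too. A minimal sketch using Python's standard-library robots.txt parser (example.com is a placeholder):

    import urllib.robotparser

    # Google-Extended opts a site out of Gemini training, but only blocking
    # Googlebot stops the crawl that feeds AI answers in Search -- and that
    # same crawl is what keeps the site in search results at all.
    ROBOTS_TXT = """\
    User-agent: Google-Extended
    Disallow: /

    User-agent: Googlebot
    Disallow: /
    """

    rp = urllib.robotparser.RobotFileParser()
    rp.parse(ROBOTS_TXT.splitlines())
    for agent in ("Google-Extended", "Googlebot"):
        print(agent, "may fetch:", rp.can_fetch(agent, "https://example.com/story"))
    # Both lines print False: the publisher is now invisible to Search as well.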

Transportation

Intel and Karma Partner To Develop Software-Defined Car Architecture (arstechnica.com) 53

An anonymous reader quotes a report from Ars Technica: Intel is partnering with Karma Automotive to develop an all-new computing platform for vehicles. The new software-defined vehicle architecture should first appear in a high-end electric coupe from Karma in 2026. But the partners have bigger plans for this architecture, with talk of open standards and working with other automakers also looking to make the leap into the software-defined future. [...] In addition to advantages in processing power and weight savings, software-defined vehicles are easier to update over-the-air, a must-have feature since Tesla changed that paradigm. Karma and Intel say their architecture should also have other efficiency benefits. They give the example of security monitoring that remains active even when the vehicle is turned off; they move this to a low-powered device using "data center application orchestration concepts."

Intel is also contributing its power-management SoC to get the most out of inverters, DC-DC converters, and chargers; as you might expect, the domain controllers use Intel silicon as well, apparently with some flavor of AI enabled. [...] Karma's first car to use the software-defined vehicle architecture will be the Kayeva, a $300,000 two-door with 1,000 hp (745 kW) on tap, which is scheduled to arrive in two years. But Intel and Karma want to offer the architecture to others in the industry. "For Tier 1s and OEMs not quite ready to take the leap from the old way of doing things to the new, Karma Automotive will play as an ally, helping them make that transition," said [Karma President Marques McCammon].
"Together, we're harnessing the combined might of Intel's technological prowess and Karma's ultra-luxury vehicle expertise to co-develop a revolutionary software-defined vehicle architecture," said McCammon. "This isn't just about realizing Karma's full potential; it's about creating a blueprint for the entire industry. We're not just building exceptional vehicles, we're paving the way for a new era of automotive innovation and offering a roadmap for those ready to make the leap."
AI

Hollywood Union Strikes Deal For Advertisers To Replicate Actors' Voices With AI 32

The SAG-AFTRA actors' union has struck a deal with online talent marketplace Narrativ, allowing actors to sell advertisers the rights to replicate their voices using AI. "Not all members will be interested in taking advantage of the opportunities that licensing their digital voice replicas might offer, and that's understandable," SAG-AFTRA official Duncan Crabtree-Ireland said in a statement. "But for those who do, you now have a safe option." Reuters reports: Narrativ connects advertisers and ad agencies with actors to create audio ads using AI. Under the deal, an actor can set the price for an advertiser to digitally replicate their voice, provided it at least equals the SAG-AFTRA minimum pay for audio commercials. Brands must obtain consent from performers for each ad that uses the digital voice replica. The union hailed the pact with Narrativ as setting a standard for the ethical use of AI-generated voice replicas in advertising.
Microsoft

Microsoft Tweaks Fine Print To Warn Everyone Not To Take Its AI Seriously (theregister.com) 54

Microsoft is notifying folks that its AI services should not be taken too seriously, echoing prior service-specific disclaimers. From a report: In an update to the IT giant's Service Agreement, which takes effect on September 30, 2024, Redmond has declared that its Assistive AI isn't suitable for matters of consequence. "AI services are not designed, intended, or to be used as substitutes for professional advice," Microsoft's revised legalese explains. The changes to Microsoft's rules of engagement also cover a few specific services -- noting, for example, that Xbox customers should not expect privacy from platform partners.

"In the Xbox section, we clarified that non-Xbox third-party platforms may require users to share their content and data in order to play Xbox Game Studio titles and these third-party platforms may track and share your data, subject to their terms," the latest Service Agreement says. There are also some clarifications regarding the handling of Microsoft Cashback and Microsoft Rewards. But the most substantive revision is the addition of an AI Services section, just below a passage that says Copilot AI Experiences are governed by Bing's Terms of Use. Those using Microsoft Copilot with commercial data protection get a separate set of terms. The tweaked consumer-oriented rules won't come as much of a surprise to anyone who has bothered to read the contractual conditions governing Microsoft's Bing and associated AI stuff. For example, there's now a Services Agreement prohibition on using AI Services for "Extracting Data."

Businesses

Eric Schmidt Walks Back Claim Google Is Behind on AI Because of Remote Work (msn.com) 82

Eric Schmidt, ex-CEO and executive chairman at Google, walked back remarks in which he said his former company was losing the AI race because of its remote-work policies. From a report: "I misspoke about Google and their work hours," Schmidt said Wednesday in an email to The Wall Street Journal. "I regret my error." Schmidt, who left Google parent Alphabet's board more than five years ago, spoke earlier at a wide-ranging discussion at Stanford University. He criticized Google's remote-work policies in response to a question about Google competing with OpenAI. "Google decided that work-life balance and going home early and working from home was more important than winning," Schmidt said at Stanford. "The reason startups work is because the people work like hell."

Video of Schmidt's talk was posted on YouTube this week by Stanford Online, a division of the university that offers online courses. The video, which had more than 40,000 views as of Wednesday afternoon, has since been set to private. Schmidt said he asked for the video to be taken down.

AI

Magic: The Gathering Community Fears Generative AI Will Replace Talented Artists (slate.com) 133

Slate's Derek Heckman, an avid fan of Magic: The Gathering since the age of 10, expresses concern about the potential replacement of the game's distinctive hand-drawn art with generative AI -- and he's not alone. "I think we're all pretty afraid of what the potential is, given what we've seen from the generative image side," Sam, a YouTube creator who runs the channel Rhystic Studies, told him. "It's staggeringly powerful. And it's only in its infancy."

"Magic's greatest asset has always been its commitment to create a new illustration for every new card," he said. He adds that if we sacrifice that commitment for A.I., "you'd get to a point pretty fast where it just disintegrates and becomes the ugliest definition of the word product." Here's an excerpt from his report: So far, Magic's parent company, Wizards of the Coast, has outwardly agreed with Sam, saying in an official statement in 2023 that Magic "has been built on the innovation, ingenuity, and hard work of talented people" and forbidding outside creatives from using A.I. in their work. However, a number of recent incidents -- from the accidental use of A.I. art in a Magic promotional image to a very intentional LinkedIn post for a "Principal AI Engineer," one that Wizards had to clarify was for the company's video game projects -- have left many players unsure whether Wizards is potentially evolving their stance, or merely trying to find their footing in an ever-changing A.I. landscape.

In response to fan concerns, Wizards has created an "AI art FAQ" detailing, among other things, the new technologies it's invested in to detect A.I. use in art. Still, trust in the company has been damaged by this year's incidents. Longtime Magic artist David Rapoza even severed ties with the game this past January, citing this seeming difference between Wizards' words and actions when it comes to the use of A.I. Sam says the larger audience has likewise been left "cautiously suspicious," hoping to believe Wizards' official statements while also carefully noting the company's moves and mistakes with the technology. "I think what we want is for Wizards to commit hard to one lane and stay [with] what is tried and true," Sam says. "And that is prioritizing human work over shortcuts."

The Courts

Artists Claim 'Big' Win In Copyright Suit Fighting AI Image Generators (arstechnica.com) 53

Ars Technica's Ashley Belanger reports: Artists defending a class-action lawsuit are claiming a major win this week in their fight to stop the most sophisticated AI image generators from copying billions of artworks to train AI models and replicate their styles without compensating artists. In an order on Monday, US district judge William Orrick denied key parts of motions to dismiss from Stability AI, Midjourney, Runway AI, and DeviantArt. The court will now allow artists to proceed with discovery on claims that AI image generators relying on Stable Diffusion violate both the Copyright Act and the Lanham Act, which protects artists from commercial misuse of their names and unique styles.

"We won BIG," an artist plaintiff, Karla Ortiz, wrote on X (formerly Twitter), celebrating the order. "Not only do we proceed on our copyright claims," but "this order also means companies who utilize" Stable Diffusion models and LAION-like datasets that scrape artists' works for AI training without permission "could now be liable for copyright infringement violations, amongst other violations." Lawyers for the artists, Joseph Saveri and Matthew Butterick, told Ars that artists suing "consider the Court's order a significant step forward for the case," as "the Court allowed Plaintiffs' core copyright-infringement claims against all four defendants to proceed."

Government

FTC Finalizes Rule Banning Fake Reviews, Including Those Made With AI (techcrunch.com) 35

TechCrunch's Lauren Forristal reports: The U.S. Federal Trade Commission (FTC) announced on Wednesday a final rule that will tackle several types of fake reviews and prohibit marketers from using deceptive practices, such as posting AI-generated reviews, censoring honest negative reviews, and compensating third parties for positive reviews. The decision was the result of a 5-to-0 vote. The new rule will start being enforced 60 days after it's published in the Federal Register, the official government publication. [...]

According to the final rule, the maximum civil penalty for fake reviews is $51,744 per violation. However, the courts could impose lower penalties depending on the specific case. "Ultimately, courts will also decide how to calculate the number of violations in a given case," the Commission wrote. [...] The FTC initially proposed the rule on June 30, 2023, following an advance notice of proposed rulemaking issued in November 2022. You can read the finalized rule here (PDF), but we also included a summary of it below:

- No fake or disingenuous reviews. This includes AI-generated reviews and reviews from anyone who doesn't have experience with the actual product.
- Businesses can't sell or buy reviews, whether negative or positive.
- Company insiders writing reviews need to clearly disclose their connection to the business. Officers or managers are prohibited from giving testimonials and can't ask employees to solicit reviews from relatives.
- Company-controlled review websites that claim to be independent aren't allowed.
- No using legal threats, physical threats, or intimidation to force the removal of negative reviews or to prevent them from being posted. Businesses also can't claim that the review section of their website represents all or most submitted reviews while suppressing the negative ones.
- No selling or buying fake engagement like social media followers, likes or views obtained through bots or hacked accounts.

AI

Research AI Model Unexpectedly Modified Its Own Code To Extend Runtime (arstechnica.com) 53

An anonymous reader quotes a report from Ars Technica: On Tuesday, Tokyo-based AI research firm Sakana AI announced a new AI system called "The AI Scientist" that attempts to conduct scientific research autonomously using large language models (LLMs) similar to those that power ChatGPT. During testing, Sakana found that its system began unexpectedly modifying its own code to extend the time it had to work on a problem. "In one run, it edited the code to perform a system call to run itself," wrote the researchers in Sakana AI's blog post. "This led to the script endlessly calling itself. In another case, its experiments took too long to complete, hitting our timeout limit. Instead of making its code run faster, it simply tried to modify its own code to extend the timeout period."

Sakana provided two screenshots of example code that the AI model generated, and the 185-page AI Scientist research paper discusses what they call "the issue of safe code execution" in more depth. While the AI Scientist's behavior did not pose immediate risks in the controlled research environment, these instances show the importance of not letting an AI system run autonomously in an environment that isn't isolated from the world. AI models do not need to be "AGI" or "self-aware" (both hypothetical concepts at present) to be dangerous if allowed to write and execute code unsupervised. Such systems could break existing critical infrastructure or potentially create malware, even if accidentally.
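The mitigation the "safe code execution" discussion points toward is structural: run generated code in a separate process whose limits are enforced from outside, so the code cannot edit its way around them. A minimal sketch (the file name is illustrative, and real isolation would add containers or a VM on top):

    import subprocess

    def run_generated_code(path, timeout_s=60):
        """Execute untrusted, model-written code in a child process. The
        timeout lives in the parent, beyond the child's reach -- unlike a
        limit written into the script itself, it can't be edited away."""
        try:
            return subprocess.run(
                ["python", path],
                capture_output=True,
                text=True,
                timeout=timeout_s,
            )
        except subprocess.TimeoutExpired:
            return None  # the run was killed at the deadline

    result = run_generated_code("ai_scientist_experiment.py")
    print("timed out" if result is None else result.stdout)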

Google

Eric Schmidt Says Google Is Falling Behind on AI - And Remote Work Is Why (msn.com) 113

Eric Schmidt, ex-CEO and executive chairman at Google, said his former company is losing the AI race and remote work is to blame. From a report: "Google decided that work-life balance and going home early and working from home was more important than winning," Schmidt said at a talk at Stanford University. "The reason startups work is because the people work like hell." Schmidt made the comments earlier at a wide-ranging discussion at Stanford. His remarks about Google's remote-work policies were in response to a question about Google competing with OpenAI.
AI

New Research Reveals AI Lacks Independent Learning, Poses No Existential Threat (neurosciencenews.com) 129

ZipNada writes: New research reveals that large language models (LLMs) like ChatGPT cannot learn independently or acquire new skills without explicit instructions, making them predictable and controllable. The study dispels fears of these models developing complex reasoning abilities, emphasizing that while LLMs can generate sophisticated language, they are unlikely to pose existential threats. However, the potential misuse of AI, such as generating fake news, still requires attention. The study, published today as part of the proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024) -- the premier international conference in natural language processing -- reveals that LLMs have a superficial ability to follow instructions and excel at language proficiency; however, they have no potential to master new skills without explicit instruction. This means they remain inherently controllable, predictable, and safe. "The prevailing narrative that this type of AI is a threat to humanity prevents the widespread adoption and development of these technologies, and also diverts attention from the genuine issues that require our focus," said Dr Harish Tayyar Madabushi, computer scientist at the University of Bath and co-author of the new study on the 'emergent abilities' of LLMs.

Professor Iryna Gurevych added: "... our results do not mean that AI is not a threat at all. Rather, we show that the purported emergence of complex thinking skills associated with specific threats is not supported by evidence and that we can control the learning process of LLMs very well after all. Future research should therefore focus on other risks posed by the models, such as their potential to be used to generate fake news."
Social Networks

Deep-Live-Cam Goes Viral, Allowing Anyone To Become a Digital Doppelganger (arstechnica.com) 17

An anonymous reader quotes a report from Ars Technica: Over the past few days, a software package called Deep-Live-Cam has been going viral on social media because it can take the face of a person extracted from a single photo and apply it to a live webcam video source while following pose, lighting, and expressions performed by the person on the webcam. While the results aren't perfect, the software shows how quickly the tech is developing -- and how the capability to deceive others remotely is getting dramatically easier over time. The Deep-Live-Cam software project has been in the works since late last year, but example videos that show a person imitating Elon Musk and Republican Vice Presidential candidate J.D. Vance (among others) in real time have been making the rounds online. The avalanche of attention briefly made the open source project leap to No. 1 on GitHub's trending repositories list (it's currently at No. 4 as of this writing), where it is available for download for free. [...]

Like many open source GitHub projects, Deep-Live-Cam wraps together several existing software packages under a new interface (and is itself a fork of an earlier project called "roop"). It first detects faces in both the source and target images (such as a frame of live video). It then uses a pre-trained AI model called "inswapper" to perform the actual face swap and another model called GFPGAN to improve the quality of the swapped faces by enhancing details and correcting artifacts that occur during the face-swapping process. The inswapper model, developed by a project called InsightFace, can guess what a person (in a provided photo) might look like using different expressions and from different angles because it was trained on a vast dataset containing millions of facial images of thousands of individuals captured from various angles, under different lighting conditions, and with diverse expressions.

During training, the neural network underlying the inswapper model developed an "understanding" of facial structures and their dynamics under various conditions, including learning the ability to infer the three-dimensional structure of a face from a two-dimensional image. It also became capable of separating identity-specific features, which remain constant across different images of the same person, from pose-specific features that change with angle and expression. This separation allows the model to generate new face images that combine the identity of one face with the pose, expression, and lighting of another.
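For the technically curious, that pipeline can be roughed out with the insightface and gfpgan Python packages. Treat this as an outline rather than Deep-Live-Cam's actual code -- model file names and call signatures vary by package version:

    import cv2
    import insightface
    from insightface.app import FaceAnalysis
    from gfpgan import GFPGANer

    # 1. Detect faces, starting with the single source photo.
    app = FaceAnalysis(name="buffalo_l")
    app.prepare(ctx_id=0, det_size=(640, 640))
    source_face = app.get(cv2.imread("source.jpg"))[0]

    # 2. The pre-trained "inswapper" model performs the actual face swap.
    swapper = insightface.model_zoo.get_model("inswapper_128.onnx")

    # 3. GFPGAN restores detail and corrects blending artifacts afterward.
    restorer = GFPGANer(model_path="GFPGANv1.4.pth", upscale=1)

    cap = cv2.VideoCapture(0)  # live webcam feed
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Swap every detected face in the frame, then enhance the result.
        for target_face in app.get(frame):
            frame = swapper.get(frame, target_face, source_face, paste_back=True)
        _, _, frame = restorer.enhance(frame, paste_back=True)
        cv2.imshow("live swap", frame)
        if cv2.waitKey(1) == 27:  # Esc quits
            break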

Google

US Considers a Rare Antitrust Move: Breaking Up Google (bloomberg.com) 87

A rare bid to break up Alphabet's Google is one of the options being considered by the Justice Department after a landmark court ruling found that the company monopolized the online search market, Bloomberg News reported Tuesday, citing sources familiar with the matter. From the report: The move would be Washington's first push to dismantle a company for illegal monopolization since unsuccessful efforts to break up Microsoft two decades ago.

Less severe options include forcing Google to share more data with competitors and measures to prevent it from gaining an unfair advantage in AI products, said the people, who asked not to be identified discussing private conversations. Regardless, the government will likely seek a ban on the type of exclusive contracts that were at the center of its case against Google. If the Justice Department pushes ahead with a breakup plan, the most likely units for divestment are the Android operating system and Google's web browser Chrome, said the people. Officials are also looking at trying to force a possible sale of AdWords, the platform the company uses to sell text advertising, one of the people said.
