The Almighty Buck

Telecom Behind AI Biden Robocall Settles With FCC For $1 Million (cyberscoop.com)

New submitter ElimGarak000 shares a report from CyberScoop: The Texas-based voice service provider that sent AI-generated robocalls of President Joe Biden to New Hampshire voters ahead of the state's Democratic presidential primary has agreed to pay a $1 million fine and implement enhanced verification protocols designed to prevent robocalls and phone number spoofing in a settlement with the Federal Communications Commission. The fine represents half the amount the FCC was originally seeking in an enforcement action proposed against Lingo Telecom in May. Despite that, agency leaders characterized the settlement (PDF) as a successful effort to defend U.S. telecommunications networks and election infrastructure from nascent AI and deepfake technologies. [...]

In addition to the fine, the settlement requires Lingo Telecom to follow regulatory protocols that were put in place in 2020 to ensure telecommunications carriers authenticate the identities of callers using their networks. The protocols, known as STIR/SHAKEN, require carriers like Lingo to digitally verify and formally attest to the FCC that callers are legitimate and own the phone number they display on Caller ID. In the New Hampshire robocall case, political consultant Steve Kramer and Life Corporation spoofed the phone number of Kathy Sullivan, a former state Democratic party official who was running a write-in campaign for Biden.
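
For context on what that attestation means in practice: a SHAKEN attestation travels as a signed JSON Web Token called a PASSporT (RFC 8225) in the SIP Identity header, carrying the originating number, a timestamp, and an attestation level, where "A" means the carrier fully vouches for both the caller and the number. The Python sketch below, with made-up field values, shows only the shape of a receive-side check; a real verifier must also validate the ES256 signature against the carrier certificate referenced in the token header, which is omitted here.

```python
import base64
import json
import time

def b64url_decode(part: str) -> bytes:
    # JWTs use unpadded base64url; restore the padding before decoding
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def check_passport(token: str, displayed_number: str, max_age_s: int = 60) -> bool:
    """Inspect a SHAKEN PASSporT for full 'A' attestation of the displayed number.

    Sketch only: signature validation against the carrier certificate
    (referenced by the header's x5u claim) is deliberately omitted.
    """
    _header_b64, payload_b64, _signature = token.split(".")
    claims = json.loads(b64url_decode(payload_b64))
    fresh = time.time() - claims.get("iat", 0) <= max_age_s
    fully_attested = claims.get("attest") == "A"
    number_matches = claims.get("orig", {}).get("tn") == displayed_number
    return fresh and fully_attested and number_matches
```

A token carrying a gateway-level "C" attestation, the kind a provider applies to traffic it merely passes along, fails this check, which is roughly the vetting the settlement obliges Lingo to perform before attesting calls to the FCC.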

The FCC cited Lingo's inability to properly implement and enforce STIR/SHAKEN as a key failure in a February cease-and-desist letter, and again in May when the agency proposed a $2 million enforcement action. The company was also named in a civil lawsuit filed by the League of Women Voters and New Hampshire residents, seeking damages over the incident. Per the terms of the settlement, Lingo Telecom must hire a senior manager knowledgeable in STIR/SHAKEN protocols and develop a compliance plan, new operating procedures and training programs. It must also report any incidents of non-compliance with STIR/SHAKEN within 15 days of discovery.
"Every one of us deserves to know that the voice on the line is exactly who they claim to be," FCC Chairwoman Jessica Rosenworcel said in a statement. "If AI is being used, that should be made clear to any consumer, citizen, and voter who encounters it. The FCC will act when trust in our communications networks is on the line."
Classic Games (Games)

Hydrogels Can Learn To Play Pong (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: Pong will always hold a special place in the history of gaming as one of the earliest arcade video games. Introduced in 1972, it was a table tennis game featuring very simple graphics and gameplay. In fact, it's simple enough that even non-living materials known as hydrogels can "learn" to play the game by "remembering" previous patterns of electrical stimulation, according to a new paper published in the journal Cell Reports Physical Science. "Our research shows that even very simple materials can exhibit complex, adaptive behaviors typically associated with living systems or sophisticated AI," said co-author Yoshikatsu Hayashi, a biomedical engineer at the University of Reading in the UK. "This opens up exciting possibilities for developing new types of 'smart' materials that can learn and adapt to their environment." [...]

The experimental setup was fairly simple. The researchers hooked up electroactive hydrogels to a simulated virtual environment of a Pong game using a custom-built electrode array. The games would start with the ball traveling in a random direction. The hydrogels tracked the ball's position via electrical stimulation and tracked the paddle's position by measuring the distribution of ions in the hydrogels. As the games progressed, the researchers measured how often the hydrogel managed to hit the ball with the paddle. They found that, over time, the hydrogels' accuracy improved, hitting the ball more frequently for longer rallies. They reached their maximum potential for accuracy in about 20 minutes, compared to 10 minutes for DishBrain, a dish of lab-grown neurons that researchers previously trained to play Pong. The authors attribute this to the ion movement essentially mapping out a "memory" of all motion over time, exhibiting what appears to be emergent memory functions within the material itself. Perhaps the next step will be to "teach" the hydrogels how to align the paddles in such a way that the rallies go on indefinitely.
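
The "ion movement as memory" idea can be made concrete with a deliberately crude numerical analogy (ours, not the paper's model): treat the gel's ion distribution as a single leaky trace of past stimulation and park the paddle wherever the trace sits. Because the trace integrates history, the paddle drifts toward where the ball keeps arriving and the hit rate climbs, which a fixed-response material could not do.

```python
def hit_rates(n_steps: int = 400, decay: float = 0.98, ball: float = 0.8):
    """Compare early vs. late hit rates of a paddle driven by a decaying trace.

    Toy model: 'trace' stands in for the ion distribution; each arrival of
    the ball at position `ball` nudges the trace toward it, like stimulation.
    """
    trace = 0.2                      # paddle starts far from where the ball arrives
    window = n_steps // 4
    early = late = 0
    for t in range(n_steps):
        hit = abs(trace - ball) < 0.1
        if t < window and hit:
            early += 1
        if t >= n_steps - window and hit:
            late += 1
        trace = decay * trace + (1 - decay) * ball   # stimulation leaves a residue
    return early / window, late / window
```

Running `hit_rates()` shows the late-game hit rate well above the early one: performance improves purely because the material's state accumulates its own history.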

Microsoft

Microsoft Engineers' Pay Data Leaked, Reveals Compensation Details (businessinsider.com)

Software engineers at Microsoft earn an average total compensation ranging from $148,436 to $1,230,000 annually, depending on their level, according to a leaked spreadsheet viewed by Business Insider. The data, voluntarily shared by hundreds of U.S.-based Microsoft employees, includes information on salaries, performance-based raises, promotions, and bonuses. The highest-paid engineers work in Microsoft's newly formed AI organization, with average total compensation of $377,611. Engineers in Cloud and AI, Azure, and Experiences and Devices units earn between $242,723 and $255,126 on average.
Google

Google Agrees To $250 Million Deal To Fund California Newsrooms, AI (politico.com)

Google has reached a groundbreaking deal with California lawmakers to contribute millions to local newsrooms, aiming to support journalism amid its decline as readers migrate online and advertising dollars evaporate. The agreement also includes a controversial provision for artificial intelligence funding. Politico reports: California emulated a strategy that other countries like Canada have used to try and reverse the journalism industry's decline as readership migrated online and advertising dollars evaporated. [...] Under the deal, the details of which were first reported by POLITICO on Monday, Google and the state of California would jointly contribute a minimum of $125 million over five years to support local newsrooms through a nonprofit public charity housed at UC Berkeley's journalism school. Google would contribute at least $55 million, and state officials would kick in at least $70 million. The search giant would also commit $50 million over five years to unspecified "existing journalism programs."

The deal would also steer millions in tax-exempt private dollars toward an artificial intelligence initiative that people familiar with the negotiations described as an effort to cultivate tech industry buy-in. Funding for artificial intelligence was not included in the bill at the core of negotiations, authored by Assemblymember Buffy Wicks. The agreement has drawn criticism from a journalists' union that had so far championed Wicks' effort. Media Guild of the West President Matt Pearce in an email to union members Sunday evening said such a deal would entrench "Google's monopoly power over our newsrooms."
"This public-private partnership builds on our long history of working with journalism and the local news ecosystem in our home state, while developing a national center of excellence on AI policy," said Kent Walker, chief legal officer for Alphabet, the parent company of Google.

Pearce, for his part, went further in emails with union members, calling the plan a "total rout of the state's attempts to check Google's stranglehold over our newsrooms."
Privacy

Microsoft Copilot Studio Exploit Leaks Sensitive Cloud Data (darkreading.com)

An anonymous reader quotes a report from Dark Reading: Researchers have exploited a vulnerability in Microsoft's Copilot Studio tool allowing them to make external HTTP requests that can access sensitive information regarding internal services within a cloud environment -- with potential impact across multiple tenants. Tenable researchers discovered the server-side request forgery (SSRF) flaw in the chatbot creation tool, which they exploited to access Microsoft's internal infrastructure, including the Instance Metadata Service (IMDS) and internal Cosmos DB instances, they revealed in a blog post this week. Tracked by Microsoft as CVE-2024-38206, the flaw allows an authenticated attacker to bypass SSRF protection in Microsoft Copilot Studio to leak sensitive cloud-based information over a network, according to a security advisory associated with the vulnerability. The flaw exists when combining an HTTP request that can be created using the tool with an SSRF protection bypass, according to Tenable.

"An SSRF vulnerability occurs when an attacker is able to influence the application into making server-side HTTP requests to unexpected targets or in an unexpected way," Tenable security researcher Evan Grant explained in the post. The researchers tested their exploit to create HTTP requests to access cloud data and services from multiple tenants. They discovered that "while no cross-tenant information appeared immediately accessible, the infrastructure used for this Copilot Studio service was shared among tenants," Grant wrote. Any impact on that infrastructure, then, could affect multiple customers, he explained. "While we don't know the extent of the impact that having read/write access to this infrastructure could have, it's clear that because it's shared among tenants, the risk is magnified," Grant wrote. The researchers also found that they could use their exploit to access other internal hosts unrestricted on the local subnet to which their instance belonged. Microsoft responded quickly to Tenable's notification of the flaw, and it has since been fully mitigated, with no action required on the part of Copilot Studio users, the company said in its security advisory.
Further reading: Slack AI Can Be Tricked Into Leaking Data From Private Channels
Privacy

Slack AI Can Be Tricked Into Leaking Data From Private Channels (theregister.com)

Slack AI, an add-on assistive service available to users of Salesforce's team messaging service, is vulnerable to prompt injection, according to security firm PromptArmor. From a report: The AI service provides generative tools within Slack for tasks like summarizing long conversations, finding answers to questions, and summarizing rarely visited channels.

"Slack AI uses the conversation data already in Slack to create an intuitive and secure AI experience tailored to you and your organization," the messaging app provider explains in its documentation. Except it's not that secure, as PromptArmor tells it. A prompt injection vulnerability in Slack AI makes it possible to fetch data from private Slack channels.

Businesses

OpenAI Announces Content Deal With Conde Nast (cnbc.com)

OpenAI has announced a partnership with Conde Nast, allowing the company's AI products to display content from Vogue, The New Yorker, Conde Nast Traveler, GQ, Architectural Digest, Vanity Fair, Wired, Bon Appetit and other outlets. CNBC reports: "With the introduction of our SearchGPT prototype, we're testing new search features that make finding information and reliable content sources faster and more intuitive," OpenAI wrote in a blog post. "We're combining our conversational models with information from the web to give you fast and timely answers with clear and relevant sources." OpenAI added that the SearchGPT prototype offers direct links to news stories and that the company plans "to integrate the best of these features directly into ChatGPT in the future." It is the latest in a recent trend of some media outlets joining forces with AI startups such as OpenAI to enter into content deals.
Television

Your TV Set Has Become a Digital Billboard. And It's Only Getting Worse. (arstechnica.com)

TV manufacturers are shifting their focus from hardware sales to viewer data and advertising revenue. This trend is driven by declining profit margins on TV sets and the growing potential of smart TV operating systems to generate recurring income. Companies like LG, Samsung, and Roku are increasingly prioritizing ad sales and user tracking capabilities in their TVs, Ars Technica reports. Automatic content recognition (ACR) technology, which analyzes viewing habits, is becoming a key feature for advertisers. TV makers are partnering with data firms to enhance targeting capabilities, with LG recently sharing data with Nielsen and Samsung updating its ACR tech to track streaming ad exposure. This shift raises concerns about privacy and user experience, as TVs become more commercialized and data-driven. Industry experts predict a rise in "shoppable ads" and increased integration between TV viewing and e-commerce platforms. The report adds: With TV sales declining and many shoppers prioritizing pricing, smart TV players will continue developing ads that are harder to avoid and better at targeting. Interestingly, Patrick Horner, practice leader of consumer electronics at analyst Omdia, told Ars that smart TV advertising revenue exceeding smart TV hardware revenue (as well as ad sale margins surpassing those of hardware) is a US-only trend, albeit one that shows no signs of abating. OLED has become a mainstay in the TV marketplace, and until the next big display technology becomes readily available, OEMs are scrambling to make money in a saturated TV market filled with budget options. Selling ads is an obvious way to bridge the gap between today and The Next Big Thing in TVs.

Indeed, with companies like Samsung and LG making big deals with analytics firms and other brands building their businesses around ads, the industry's obsession with ads will only intensify. As we've seen before with TV commercials, which have gotten more frequent over time, once the ad genie is out of the bottle, it tends to grow, not go back inside. One side effect we're already seeing, Horner notes, is "a proliferation of more TV operating systems." While choice is often a good thing for consumers, it's important to consider if new options from companies like Amazon, Comcast, and TiVo actually do anything to notably improve the smart TV experience for owners.

And OS operators' financial success is tied to the number of hours users spend viewing something on the OS. Roku's senior director of ad innovation, Peter Hamilton, told Digiday in May that his team works closely with Roku's consumer team, "whose goal is to drive total viewing hours." Many smart TV OS operators are therefore focused on making it easier for users to navigate content via AI.

The Courts

Authors Sue Anthropic For Copyright Infringement Over AI Training (reuters.com)

AI company Anthropic has been hit with a class-action lawsuit in California federal court by three authors who say it misused their books and hundreds of thousands of others to train its AI-powered chatbot Claude. From a report: The complaint, filed on Monday by writers and journalists Andrea Bartz, Charles Graeber and Kirk Wallace Johnson, said that Anthropic used pirated versions of their works and others to teach Claude to respond to human prompts.

The lawsuit joins several other high-stakes complaints filed by copyright holders including visual artists, news outlets and record labels over the material used by tech companies to train their generative artificial intelligence systems. Separate groups of authors have sued OpenAI and Meta over the companies' alleged misuse of their work to train the large-language models underlying their chatbots.

Privacy

National Public Data Published Its Own Passwords (krebsonsecurity.com)

Security researcher Brian Krebs writes: New details are emerging about a breach at National Public Data (NPD), a consumer data broker that recently spilled hundreds of millions of Americans' Social Security Numbers, addresses, and phone numbers online. KrebsOnSecurity has learned that another NPD data broker which shares access to the same consumer records inadvertently published the passwords to its back-end database in a file that was freely available from its homepage until today. In April, a cybercriminal named USDoD began selling data stolen from NPD. In July, someone leaked what was taken, including the names, addresses, phone numbers and in some cases email addresses for more than 272 million people (including many who are now deceased). NPD acknowledged the intrusion on Aug. 12, saying it dates back to a security incident in December 2023. In an interview last week, USDoD blamed the July data leak on another malicious hacker who also had access to the company's database, which they claimed has been floating around the underground since December 2023.

Following last week's story on the breadth of the NPD breach, a reader alerted KrebsOnSecurity that a sister NPD property -- the background search service recordscheck.net -- was hosting an archive that included the usernames and passwords for the site's administrator. A review of that archive, which was available from the Records Check website until just before publication this morning (August 19), shows it includes the source code and plain text usernames and passwords for different components of recordscheck.net, which is visually similar to nationalpublicdata.com and features identical login pages. The exposed archive, which was named "members.zip," indicates RecordsCheck users were all initially assigned the same six-character password and instructed to change it, but many did not. According to the breach tracking service Constella Intelligence, the passwords included in the source code archive are identical to credentials exposed in previous data breaches that involved email accounts belonging to NPD's founder, an actor and retired sheriff's deputy from Florida named Salvatore "Sal" Verini.

Reached via email, Mr. Verini said the exposed archive (a .zip file) containing recordscheck.net credentials has been removed from the company's website, and that the site is slated to cease operations "in the next week or so." "Regarding the zip, it has been removed but was an old version of the site with non-working code and passwords," Verini told KrebsOnSecurity. "Regarding your question, it is an active investigation, in which we cannot comment on at this point. But once we can, we will [be] with you, as we follow your blog. Very informative." The leaked recordscheck.net source code indicates the website was created by a web development firm based in Lahore, Pakistan called creationnext.com, which did not return messages seeking comment. CreationNext.com's homepage features a positive testimonial from Sal Verini.

AI

Wyoming Voters Face Mayoral Candidate Who Vows To Let AI Bot Run Government

An anonymous reader quotes a report from The Guardian: Voters in Wyoming's capital city on Tuesday are faced with deciding whether to elect a mayoral candidate who has proposed to let an artificial intelligence bot run the local government. Earlier this year, the candidate in question -- Victor Miller -- filed for him and his customized ChatGPT bot, named Vic (Virtual Integrated Citizen), to run for mayor of Cheyenne, Wyoming. He has vowed to helm the city's business with the AI bot if he wins. Miller has said that the bot is capable of processing vast amounts of data and making unbiased decisions. In what AI experts say is a first for US political campaigns, Miller and Vic have told local news outlets in interviews that their form of proposed governance is a "hybrid approach." The AI bot told Your Wyoming Link that its role would be to provide data-driven insights and innovative solutions for Cheyenne. Meanwhile, Vic said, the human elected office contender, Miller, would serve as the official mayor if chosen by voters and would ensure that "all actions are legally and practically executed."

"It's about blending AI's capabilities with human judgment to effectively lead Cheyenne," the bot said. The bot said it did not have political affiliations -- and its goal is to "focus on data-driven practical solutions that benefit the community." During a meet-and-greet this summer, the Washington Post reported that the AI bot was asked how it would go about making decisions "according to human factor, involving humans, and having to make a decision that affects so many people." "Making decisions that affect many people requires a careful balance of data-driven insights and human empathy," the AI bot responded, according to an audio recording obtained and published by the Washington Post. Vic then ran through a multi-part plan that suggested using AI technology to gather data on public opinion and feedback from the community, holding town hall meetings to listen to residents' concerns, consulting experts in relevant fields, evaluating the human impact of the decision and providing transparency about the decision-making. According to Wyoming Public Media, Miller has also pledged that he would donate half the mayoral salary to a non-profit if he is elected. The other half could be used to continually improve the AI bot, he said.
Miller has faced some pushback since announcing his mayoral campaign. Wyoming's Secretary of State, Chuck Gray, launched an investigation to determine if the AI bot could legally appear on the ballot, citing state law that says only real people who are registered to vote can run for office. City officials clarified that Miller is the actual candidate, so he was allowed to continue. However, Laramie County ruled that only Miller's name would appear on the ballot, not the bot's.

OpenAI later shut down Miller's account, but he quickly created a new one and continued his campaign.
AI

Virginia's Datacenters Guzzle Water Like There's No Tomorrow, Says FOI-based Report (theregister.com)

Concerns over the environmental impact of datacenters in the US state of Virginia are being raised again amid claims their water consumption has stepped up by almost two-thirds since 2019, and AI could make it worse. From a report: Virginia is described as the datacenter capital of the world, particularly Northern Virginia where it is understood there are about 300 facilities. According to the Financial Times, water consumption by bit barns in some areas has increased markedly over the past five years by almost two-thirds. It cites data gathered by freedom of information requests to claim that more than 1.85 billion US gallons was used in 2023, up from 1.13 billion gallons in 2019.

Those figures came from water authorities in Northern Virginia's Fairfax, Loudoun, Prince William, and Fauquier counties. Water is typically used in datacenters for cooling, and the FT points to anxiety over expected increases in demand for computing infrastructure due to AI, which is particularly power-intensive during the training of large models. It reported that some existing facilities are in water-stressed regions, including parts of Virginia suffering from droughts.

Businesses

GM Cuts 1,000 Software Jobs As It Prioritizes AI

General Motors is cutting around 1,000 software workers around the world in a bid to focus on more "high-priority" initiatives like improving its Super Cruise driver assistance system, the quality of its infotainment platform and exploring the use of AI. From a report: The job cuts are not about cost cutting or individual performance, GM spokesperson Stuart Fowle told TechCrunch. Rather, they are meant to help the company move more quickly as it tries to compete in the world of "software-defined vehicles." For example, Fowle said, that could mean moving away from developing many different infotainment features and instead focusing on ones that matter most to consumers.

The shuffle comes after GM has struggled with recent software problems. The automaker temporarily halted sales of its new Blazer EV in late 2023 after early vehicles encountered glitches. In June, GM promoted two former Apple executives to run its software and services division. The promotions were meant to fill the gap left by Mike Abbott, another Apple veteran who had joined GM as its executive vice president of software and services. Abbott left GM in March for health reasons.
AI

Procreate's Anti-AI Pledge Attracts Praise From Digital Creatives (theverge.com)

An anonymous reader shares a report: Many Procreate users can breathe a sigh of relief now that the popular iPad illustration app has taken a definitive stance against generative AI. "We're not going to be introducing any generative AI into our products," Procreate CEO James Cuda said in a video posted to X. "I don't like what's happening to the industry, and I don't like what it's doing to artists."

The creative community's ire toward generative AI is driven by two main concerns: that AI models have been trained on their content without consent or compensation, and that widespread adoption of the technology will greatly reduce employment opportunities. Those concerns have driven some digital illustrators to seek out alternative solutions to apps that integrate generative AI tools, such as Adobe Photoshop. "Generative AI is ripping the humanity out of things. Built on a foundation of theft, the technology is steering us toward a barren future," Procreate said on the new AI section of its website. "We think machine learning is a compelling technology with a lot of merit, but the path generative AI is on is wrong for us."

AMD

AMD To Acquire Server Maker ZT Systems in $4.9 Billion Deal (yahoo.com)

AMD agreed to buy server maker ZT Systems in a cash and stock transaction valued at $4.9 billion, adding data center technology that will bolster its efforts to challenge Nvidia. From a report: ZT Systems, based in Secaucus, New Jersey, will become part of AMD's Data Center Solutions Business Group, according to a statement Monday. AMD will retain the business's design and customer teams and look to sell the manufacturing division. Closely held ZT has extensive experience making server computers for owners of large data centers -- the kind of customers that are pouring billions into new AI capabilities. The acquisition will "significantly strengthen our data center AI systems," AMD Chief Executive Officer Lisa Su said in the statement.
AI

Former Google Researcher's Startup Hopes to Teach AI How to Smell (cointelegraph.com)

"AI is already able to mimic sight and hearing," writes CNBC. And now a startup named Osmo "wants to use the technology to digitize another: smell."

Co-founded by a former Google research scientist, the company built an AI that's "superhuman in its ability to predict what things smelled like," the company's co-founder says. And he believes this might actually prove useful. "We've known that smell contains information we can use to detect disease. But computers can't speak that language and can't interpret that data yet... We will eventually be able to detect disease with scent and we're on our way to building that technology. It's not going to happen this year or anytime soon, but we're on our way."

CoinTelegraph describes how the company invented a training dataset from scratch — a kind of "smell map" with labelled examples of molecular bond associations to teach the AI to identify specific patterns. The team also hopes to develop a method to recreate smells using molecular synthesis. This would, for example, allow a computer in one place to "smell" something and then send that information to another computer for resynthesis — essentially teleporting odor over the internet. This also means scent could join sight and sound as part of the marketing and branding world.
Open Source

Can the Linux Foundation's 'Open Model Initiative' Build AI-Powering LLMs Without Restrictive Licensing? (infoworld.com)

"From the beginning, we have believed that the right way to build these AI models is with open licenses," says the Open Model Initiative. SD Times quotes them as saying that open licenses "allow creatives and businesses to build on each other's work, facilitate research, and create new products and services without restrictive licensing constraints."

Phoronix explains that the community initiative "came about over the summer to help advance open-source AI models" and is now becoming part of the Linux Foundation to further its cause. As part of the Linux Foundation, the OMI will be working to establish a governance framework and working groups, create shared standards to enhance model interoperability and metadata practices, develop a transparent dataset for training and captioning, complete an alpha test model for targeted red teaming, and release an alpha version of a new model with fine-tuning scripts before the end of 2024.
The group was established "in response to a number of recent decisions by creators of popular open-source models to alter their licensing terms," reports Silicon Angle: The creators highlighted the recent licensing change announced by Stability AI Ltd., regarding its popular image-generation model Stable Diffusion 3 (SD3). That model had previously been entirely free and open, but the changes introduced a monthly fee structure and imposed limitations on its usage. Stability AI was also criticized for the lack of clarity around its licensing terms, but it isn't the only company to have introduced licensing restrictions on previously free software. The OMI intends to eliminate all barriers to enterprise adoption by focusing on training and developing AI models with "irrevocable open licenses without deletion clauses or recurring costs for access," the Linux Foundation said.
InfoWorld also notes "the unavailability of source code and the license restrictions from LLM providers such as Meta, Mistral and Anthropic, who put caveats in the usage policies of their 'open source' models." Meta, for instance, does provide the rights to use Llama models royalty free without any license, but does not provide the source code, according to [strategic research firm] Everest Group's AI practice leader Suseel Menon. "Meta also adds a clause: 'If, on the Meta Llama 3, monthly active users of the products or services is greater than 700 million monthly active users, you must request a license from Meta.' This clause, combined with the unavailability of the source code, raises the question if the term open source should apply to Llama's family of models," Menon explained....

The OMI's objectives and vision received mixed reactions from analysts. While Amalgam Insights' chief analyst Hyoun Park believes that the OMI will lead to the development of more predictable and consistent standards for open source models, so that these models can potentially work with each other more easily, Everest Group's Malik believes that the OMI may not be able to stand before the might of vendors such as Meta and Anthropic. "Developing LLMs is highly compute intensive and has cost big tech giants and start-ups billions in capital expenditure to achieve the scale they currently have with their open-source and proprietary LLMs," Malik said, adding that this could be a major challenge for community-based LLMs.

The AI practice leader also pointed out that previous attempts at a community-based LLM have not garnered much adoption, as models developed by larger entities tend to perform better on most metrics... However, Malik said that the OMI might be able to find appropriate niches within the content development space (2D/3D image generation, adaptation, visual design, editing, etc.) as it begins to build its models... One of the other use cases for the OMI's community LLMs is to see their use as small language models (SLMs), which can offer specific functionality at high effectiveness or functionality that is restricted to unique applications or use cases, analysts said. Currently, the OMI's GitHub page has three repositories, all under the Apache 2.0 license.

Displays

Apple is Building a $1,000 Display on a Voice-Controlled Robot Arm (yahoo.com)

Apple is building "a pricey tabletop home device" which uses "a thin robotic arm to move around a large screen," using actuators "to tilt the display up and down and make it spin 360 degrees," according to Bloomberg's Mark Gurman. Citing "people with knowledge of the matter," Gurman writes that Apple assigned "several hundred people" to the project: The device is envisioned as a smart home command center, videoconferencing machine and remote-controlled home security tool, said the people... The project — codenamed J595 — was approved by Apple's executive team in 2022 but has started to formally ramp up in recent months, they said... Apple has now decided to prioritize the device's development and is aiming for a debut as early as 2026 or 2027, according to the people.

The company is looking to get the price down to around $1,000. But with years to go before an expected release, the plans could theoretically change... The idea is for the tabletop product to be primarily controlled using the Siri digital assistant and upcoming features in Apple Intelligence. The device could respond to commands, such as "look at me," by repositioning the screen to focus on the person saying the words — say, during a video call. It also could understand different voices and adjust its focus accordingly. Current models in testing run a customized version of the iPad operating system...

The company also is working on robots that move around the home and has discussed the idea of a humanoid version. Those projects are being led, in part, by Hanns Wolfram Tappeiner, a robotics expert who now has about 100 former car team engineers reporting to him. In a job listing published this month, Apple said it has a team "working to leverage and build upon groundbreaking machine learning robotics research, thereby enabling development of generalizable and reliable robot systems." The company said it's seeking experts with experience in "robot manipulation" and creating AI models for robot control.

The article points out that Apple "still gets roughly half its revenue from the iPhone," and calls the robotics effort "one of a few avenues Apple is pursuing to generate new sources of revenue" — and to "capitalize" on its AI technology. (Apple is also working on both smart eyeglasses and augmented reality glasses.)
Power

Data Centers Are Consuming Electricity Supplies - and Possibly Hurting the Environment (yahoo.com) 77

Data center construction "could delay California's transition away from fossil fuels and raise electric bills for everyone else," warns the Los Angeles Times — and also increase the risk of blackouts: Even now, California is on the verge of not having enough power. An analysis of public data by the nonprofit GridClue ranks California 49th of the 50 states in resilience — or the ability to avoid blackouts by having more electricity available than homes and businesses need at peak hours... The state has already extended the lives of Pacific Gas & Electric Co.'s Diablo Canyon nuclear plant as well as some natural gas-fueled plants in an attempt to avoid blackouts on sweltering days when power use surges... "I'm just surprised that the state isn't tracking this, with so much attention on power and water use here in California," said Shaolei Ren, associate professor of electrical and computer engineering at UC Riverside. Ren and his colleagues calculated that the global use of AI could require as much fresh water in 2027 as that now used by four to six countries the size of Denmark.

Driving the data center construction is money. Today's stock market rewards companies that say they are investing in AI. Electric utilities profit as power use rises. And local governments benefit from the property taxes paid by data centers.

The article notes a Goldman Sachs estimate that by 2030, data centers could consume up to 11% of all U.S. power demand — up from 3% now. And it shows how the sprawling build-out of data centers across America is impacting surrounding communities:
  • The article notes that California's biggest concentration of data centers — more than 50 near the Silicon Valley city of Santa Clara — is powered by a utility emitting "more greenhouse gas than the average California electric utility because 23% of its power for commercial customers comes from gas-fired plants. Another 35% is purchased on the open market where the electricity's origin can't be traced." Consumer electric rates are rising "as the municipal utility spends heavily on transmission lines and other infrastructure," while the data centers now consume 60% of the city's electricity.
  • Energy officials in northern Virginia "have proposed a transmission line to shore up the grid that would depend on coal plants that had been expected to be shuttered."
  • "Earlier this year, Pacific Gas & Electric told investors that its customers have proposed more than two dozen data centers, requiring 3.5 gigawatts of power — the output of three new nuclear reactors."
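The Goldman Sachs estimate quoted above implies steep year-over-year growth in data centers' share of U.S. power demand. A minimal back-of-the-envelope sketch (assuming 2024 as the start year, which the article does not state, and ignoring growth in total demand):

```python
# Implied compound annual growth rate if data centers' share of U.S.
# power demand rises from 3% to 11% over six years (2024 -> 2030,
# per the Goldman Sachs figures quoted above; start year is an assumption).
start_share, end_share, years = 0.03, 0.11, 6
cagr = (end_share / start_share) ** (1 / years) - 1
print(f"implied growth in share: {cagr:.1%} per year")  # roughly 24% per year
```

Since total U.S. demand is itself expected to grow, the absolute power consumed by data centers would grow even faster than this share-based figure suggests.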

AI

'AI-Powered Remediation': GitHub Now Offers 'Copilot Autofix' Suggestions for Code Vulnerabilities (infoworld.com) 18

InfoWorld reports that Microsoft-owned GitHub "has unveiled Copilot Autofix, an AI-powered software vulnerability remediation service."

The feature became available Wednesday as part of the GitHub Advanced Security (or GHAS) service: "Copilot Autofix analyzes vulnerabilities in code, explains why they matter, and offers code suggestions that help developers fix vulnerabilities as fast as they are found," GitHub said in the announcement. GHAS customers on GitHub Enterprise Cloud already have Copilot Autofix included in their subscription. GitHub has enabled Copilot Autofix by default for these customers in their GHAS code scanning settings.

Beginning in September, Copilot Autofix will be offered for free in pull requests to open source projects.

During the public beta, which began in March, GitHub found that developers using Copilot Autofix were fixing code vulnerabilities more than three times faster than those doing it manually, demonstrating how AI agents such as Copilot Autofix can radically simplify and accelerate software development.

"Since implementing Copilot Autofix, we've observed a 60% reduction in the time spent on security-related code reviews," says one principal engineer quoted in GitHub's announcement, "and a 25% increase in overall development productivity."

The announcement also notes that Copilot Autofix "leverages the CodeQL engine, GPT-4o, and a combination of heuristics and GitHub Copilot APIs." Code scanning tools detect vulnerabilities, but they don't address the fundamental problem: remediation takes security expertise and time, two valuable resources in critically short supply. In other words, finding vulnerabilities isn't the problem. Fixing them is...

Developers can keep new vulnerabilities out of their code with Copilot Autofix in the pull request, and now also pay down the backlog of security debt by generating fixes for existing vulnerabilities... Fixes can be generated for dozens of classes of code vulnerabilities, such as SQL injection and cross-site scripting, which developers can dismiss, edit, or commit in their pull request.... For developers who aren't necessarily security experts, Copilot Autofix is like having the expertise of your security team at your fingertips while you review code...

As the global home of the open source community, GitHub is uniquely positioned to help maintainers detect and remediate vulnerabilities so that open source software is safer and more reliable for everyone. We firmly believe that it's highly important to be both a responsible consumer of open source software and contributor back to it, which is why open source maintainers can already take advantage of GitHub's code scanning, secret scanning, dependency management, and private vulnerability reporting tools at no cost. Starting in September, we're thrilled to add Copilot Autofix in pull requests to this list and offer it for free to all open source projects...

While responsibility for software security continues to rest on the shoulders of developers, we believe that AI agents can help relieve much of the burden.... With Copilot Autofix, we are one step closer to our vision where a vulnerability found means a vulnerability fixed.

Slashdot Top Deals