AI

AI Is Reshaping Hacking. No One Agrees How Fast (axios.com) 18

"Several cybersecurity companies debuted advancements in AI agents at the Black Hat conference last week," reports Axios, "signaling that cyber defenders could soon have the tools to catch up to adversarial hackers." - Microsoft shared details about a prototype for a new agent that can automatically detect malware — although it's able to detect only 24% of malicious files as of now.

- Trend Micro released new AI-driven "digital twin" capabilities that let companies simulate real-world cyber threats in a safe environment walled off from their actual systems.

- Several companies and research teams also publicly released open-source tools that can automatically identify and patch vulnerabilities as part of the government-backed AI Cyber Challenge.

Yes, but: Threat actors are now using those AI-enabled tools to speed up reconnaissance and dream up brand-new attack vectors for targeting each company, John Watters, CEO of iCounter and a former Mandiant executive, told Axios.

The article notes "two competing narratives about how AI is transforming the threat landscape." One says defenders still have the upper hand. Cybercriminals lack the money and computing resources to build out AI-powered tools, and large language models have clear limitations in their ability to carry out offensive strikes. This leaves defenders with time to tap AI's potential for themselves. [In a DEF CON presentation a member of Anthropic's red team said its Claude AI model will "soon" be able to perform at the level of a senior security researcher, the article notes later]

Then there's the darker view. Cybercriminals are already leaning on open-source LLMs to build tools that can scan internet-connected devices to see if they have vulnerabilities, discover zero-day bugs, and write malware. They're only going to get better, and quickly...

Right now, models aren't the best at making human-like judgments, such as recognizing when legitimate tools are being abused for malicious purposes. And running a series of AI agents will require cybercriminals and nation-states to have enough resources to pay the cloud bills they rack up, Michael Sikorski, CTO of Palo Alto Networks' Unit 42 threat research team, told Axios. But LLMs are improving rapidly. Sikorski predicts that malicious hackers will use a victim organization's own AI agents to launch an attack after breaking into their infrastructure.

Open Source

Remember the Companies Making Vital Open Source Contributions (infoworld.com) 22

Matt Asay answered questions from Slashdot readers in 2010 as the then-COO of Canonical. Today he runs developer marketing at Oracle (after holding similar positions at AWS, Adobe, and MongoDB).

And this week Asay contributed an opinion piece to InfoWorld reminding us of open source contributions from companies where "enlightened self-interest underwrites the boring but vital work — CI hardware, security audits, long-term maintenance — that grassroots volunteers struggle to fund."

[I]f you look at the Linux 6.15 kernel contributor list (as just one example), the top contributor, as measured by change sets, is Intel... Another example: Take the last year of contributions to Kubernetes. Google (of course), Red Hat, Microsoft, VMware, and AWS all headline the list. Not because it's sexy, but because they make billions of dollars selling Kubernetes services... Some companies (including mine) sell proprietary software, and so it's easy to mentally bucket these vendors with license fees or closed cloud services. That bias makes it easy to ignore empirical contribution data, which indicates open source contributions on a grand scale.

Asay notes Oracle's many contributions to Linux: In the [Linux kernel] 6.1 release cycle, Oracle emerged as the top contributor by lines of code changed across the entire kernel... [I]t's Oracle that patches memory-management structures and shepherds block-device drivers for the Linux we all use. Oracle's kernel work isn't a one-off either. A few releases earlier, the company topped the "core of the kernel" leaderboard in 5.18, and it hasn't slowed down since, helping land the Maple Tree data structure and other performance boosters. Those patches power Oracle Cloud Infrastructure (OCI), of course, but they also speed up Ubuntu on your old ThinkPad. Self-interested contributions? Absolutely. Public benefit? Equally absolute.

This isn't just an Oracle thing. When we widen the lens beyond Oracle, the pattern holds. In 2023, I wrote about Amazon's "quiet open source revolution," showing how AWS was suddenly everywhere in GitHub commit logs despite the company's earlier reticence. (Disclosure: I used to run AWS' open source strategy and marketing team.) Back in 2017, I argued that cloud vendors were open sourcing code as on-ramps to proprietary services rather than end-products. Both observations remain true, but they miss a larger point: Motives aside, the code flows and the community benefits.

If you care about outcomes, the motives don't really matter. Or maybe they do: It's far more sustainable to have companies contributing because it helps them deliver revenue than to contribute out of charity. The former is durable; the latter is not.

There's another practical consideration: scale. "Large vendors wield resources that community projects can't match."

Asay closes by urging readers to "Follow the commits" and "embrace mixed motives... the point isn't sainthood; it's sustainable, shared innovation. Every company (and really every developer) contributes out of some form of self-interest. That's the rule, not the exception. Embrace it." Going forward, we should expect to see even more counterintuitive contributor lists. Generative AI is turbocharging code generation, but someone still has to integrate those patches, write tests, and shepherd them upstream. The companies with the most to lose from brittle infrastructure — cloud providers, database vendors, silicon makers — will foot the bill. If history is a guide, they'll do so quietly.
AI

Foxconn Now Making More From Servers Than iPhones (theregister.com) 9

An anonymous reader shares a report: Manufacturer to the stars Foxconn is building so many AI servers that they're now bringing in more cash than consumer electronics -- even counting the colossal quantity of iPhones it creates for Apple.

The Taiwanese company revealed the shift in its Thursday announcement of Q2 results, which saw revenue grow 16% to NT$1.79 trillion ($59.73 billion) and operating profit rise 27% to NT$56.6 billion ($1.9 billion). CEO Kathy Yang told investors the company's Cloud and Networking Products division delivered 41% of total revenue, up nine percentage points from Q2 2024 and surpassing the company's Smart Consumer Electronics unit for the first time. The latter business includes Foxconn's work for Apple.
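
A quick back-of-envelope check of those figures. This is a minimal sketch using only the numbers quoted above; the implied division revenue is our own arithmetic, not a figure Foxconn reported:

```python
# Sanity-check the segment math from Foxconn's Q2 announcement.
# Inputs are the reported totals; the output is derived, not reported.

total_revenue_ntd = 1.79e12  # Q2 revenue, NT$1.79 trillion
cloud_share = 0.41           # Cloud and Networking Products' share of revenue

cloud_revenue_ntd = total_revenue_ntd * cloud_share
print(f"Implied Cloud and Networking revenue: NT${cloud_revenue_ntd / 1e12:.2f} trillion")
# -> about NT$0.73 trillion, which per the report now exceeds the
#    iPhone-heavy Smart Consumer Electronics unit for the first time
```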

Windows

Microsoft Says Voice Will Emerge as Primary Input for Next Windows (youtube.com) 138

The next version of Windows will become "more ambient, pervasive, and multi-modal" as AI transforms how users interact with computers, Pavan Davuluri, Corporate Vice President and head of Windows, said in a company video. Davuluri said voice will emerge as a primary input method alongside keyboard and mouse, with the operating system gaining context awareness to understand screen content and user intent through natural language.

Windows interfaces, he said, will appear fundamentally different within five years as the platform becomes increasingly agentic. The transformation will rely on both local processing power and cloud computing capabilities to deliver seamless experiences where users can speak to their computers while simultaneously typing or inking.
Science

Physicists Create Quantum Radar That Could Image Buried Objects (technologyreview.com) 28

An anonymous reader quotes a report from MIT Technology Review: Physicists have created a new type of radar that could help improve underground imaging, using a cloud of atoms in a glass cell to detect reflected radio waves. The radar is a type of quantum sensor, an emerging technology that uses the quantum-mechanical properties of objects as measurement devices. It's still a prototype, but its intended use is to image buried objects in situations such as constructing underground utilities, drilling wells for natural gas, and excavating archaeological sites. [...] The glass cell that serves as the radar's quantum component is full of cesium atoms kept at room temperature. The researchers use lasers to get each individual cesium atom to swell to nearly the size of a bacterium, about 10,000 times bigger than the usual size. Atoms in this bloated condition are called Rydberg atoms.

When incoming radio waves hit Rydberg atoms, they disturb the distribution of electrons around their nuclei. Researchers can detect the disturbance by shining lasers on the atoms, causing them to emit light; when the atoms are interacting with a radio wave, the color of their emitted light changes. Monitoring the color of this light thus makes it possible to use the atoms as a radio receiver. Rydberg atoms are sensitive to a wide range of radio frequencies without needing to change the physical setup... This means a single compact radar device could potentially work at the multiple frequency bands required for different applications.

[Matthew Simons, a physicist at the National Institute of Standards and Technology (NIST), who was a member of the research team] tested the radar by placing it in a specially designed room with foam spikes on the floor, ceiling, and walls like stalactites and stalagmites. The spikes absorb, rather than reflect, nearly all the radio waves that hit them. This simulates the effect of a large open space, allowing the group to test the radar's imaging capability without unwanted reflections off walls. The researchers placed a radio wave transmitter in the room, along with their Rydberg atom receiver, which was hooked up to an optical table outside the room. They aimed radio waves at a copper plate about the size of a sheet of paper, some pipes, and a steel rod in the room, each placed up to five meters away. The radar allowed them to locate the objects to within 4.7 centimeters. The team posted a paper on the research to the arXiv preprint server in late June.
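
The article doesn't spell out how the ranging works, but simple time-of-flight arithmetic gives a feel for the timing scales involved. Treat this as a back-of-envelope sketch, not the team's actual signal processing:

```python
# Rough time-of-flight arithmetic for the numbers quoted above.
# Assumes simple echo-delay ranging, which the article does not confirm.

c = 3.0e8                      # speed of light, m/s

distance = 5.0                 # objects placed up to 5 meters away
accuracy = 0.047               # located to within 4.7 centimeters

round_trip = 2 * distance / c  # echo delay for the farthest object
timing_res = 2 * accuracy / c  # delay difference hiding behind 4.7 cm of range

print(f"Echo delay at 5 m:     {round_trip * 1e9:.1f} ns")   # ~33.3 ns
print(f"4.7 cm corresponds to: {timing_res * 1e9:.2f} ns")   # ~0.31 ns
```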

AI

The Dead Need Right To Delete Their Data So They Can't Be AI-ified, Lawyer Says 71

Legal scholar Victoria Haneman argues that U.S. law should grant estates a time-limited right to delete a deceased person's data so they can't be recreated by AI without their consent. "Digital resurrection by or through AI requires the personal data of the deceased, and the amount of data that we are storing online is increasing exponentially with each passing year," writes Haneman in an article published earlier this year in the Boston College Law Review. "It has been said that data is the new uranium, extraordinarily valuable and potentially dangerous. A right to delete will provide the decedent with a time-limited right for deletion of personal data." The Register reports: A living person may have some say on the matter through the control of personal digital documents and correspondence. But a dead person can't object, and US law doesn't offer the dead much data protection in terms of privacy law, property law, intellectual property law, or criminal law. The Revised Uniform Fiduciary Access to Digital Assets Act (RUFADAA), a law developed to help fiduciaries deal with digital files of the dead or incapacitated, can come into play. But Haneman points out that most people die intestate (without a will), leaving matters up to tech platforms. Facebook's response to dead users is to allow anyone to request the memorialization of an account, which keeps posts online. As for RUFADAA, it does little to address digital resurrection, says Haneman.

The right to publicity, which provides a private right of action against unauthorized commercial use of a person's name, image, or likeness, covers the dead in about 25 states, according to Haneman. But the monetization of publicity rights has proven to be problematic. Haneman says that there are some states where it's theoretically possible to be prosecuted for libeling or defaming the deceased, such as Idaho, Nevada, and Oklahoma, but adds that such prosecutions have declined because they tread upon the constitutional right to free expression. [...] A recent California law, the Delete Act, which took effect last year, is the first to offer a way for the living to demand the deletion of personal data from data brokers in one step. But according to Haneman, it's unclear whether the text of the law will be extended to cover the dead -- a possibility think tank Aspen Tech Policy Hub supports [PDF].

Haneman argues that a data deletion law for the dead would be grounded in laws governing human remains, where corpses receive protection against abuse despite being neither a person nor property. "The personal representative of the decedent has the right to destroy all physical letters and photographs saved by the decedent; merely storing personal information in the cloud should not grant societal archival rights," she argues. "A limited right of deletion within a twelve-month window balances the interests of society against the rights of the deceased."
The Military

How 12 'Enola Gay' Crew Members Remember Dropping the Atomic Bomb (mentalfloss.com) 130

Last week saw the 80th anniversary of a turning point in World War II: the day America dropped an atomic bomb on Hiroshima.

"Twelve men were on that flight..." remembers the online magazine Mental Floss, adding "Almost all had something to say after the war." The group was segregated from the rest of the military and trained in secret. Even those in the group only knew as much as they needed to know in order to perform their duties. The group deployed to Tinian in 1945 with 15 B-29 bombers, flight crews, ground crews, and other personnel, a total of about 1770 men. The mission to drop the atomic bomb on Hiroshima, Japan (special mission 13) involved seven planes, but the one we remember was the Enola Gay.

Air Force captain Theodore "Dutch" Van Kirk did not know the destructive force of the nuclear bomb before Hiroshima. He was 24 years old at that time, a veteran of 58 missions in North Africa. Paul Tibbets told him this mission would shorten or end the war, but Van Kirk had heard that line before. Hiroshima made him a believer. Van Kirk felt the bombing of Hiroshima was worth the price in that it ended the war before the invasion of Japan, which promised to be devastating to both sides. "I honestly believe the use of the atomic bomb saved lives in the long run. There were a lot of lives saved. Most of the lives saved were Japanese."

In 2005, Van Kirk came as close as he ever got to regret. "I pray no man will have to witness that sight again. Such a terrible waste, such a loss of life..."

Many of the other crewmembers also felt the bomb ultimately saved lives.

The Washington Post has also published a new oral history of the flight after it took off from Tinian Island. The oral history was assembled for a new book published this week titled The Devil Reached Toward the Sky: An Oral History of the Making and Unleashing of the Atomic Bomb. Col. Paul W. Tibbets, lead pilot of the Enola Gay: We were only eight minutes off the ground when Capt. William S. "Deak" Parsons and Lt. Morris R. Jeppson lowered themselves into the bomb bay to insert a slug of uranium and the conventional explosive charge into the core of the strange-looking weapon. I wondered why we were calling it "Little Boy." Little Boy was 28 inches in diameter and 12 feet long. Its weight was a little more than 9,000 pounds. With its coat of dull gunmetal paint, it was an ugly monster...

Lt. Morris R. Jeppson, crew member of the Enola Gay: Parsons was second-in-command of the military in the Manhattan Project. The Little Boy weapon was Parsons's design. He was greatly concerned that B-29s loaded with conventional bombs were crashing at the ends of runways on Tinian during takeoff and that such an event could cause the U-235 projectile in the gun of Little Boy to fly down the barrel and into the U-235 target. This could have caused a low-level nuclear explosion on Tinian...

Jeppson: On his own, Parsons decided that he would go on the Hiroshima mission and that he would load the gun after the Enola Gay was well away from Tinian.

Tibbets: That way, if we crashed, we would lose only the airplane and crew, himself included... Jeppson held the flashlight while Parsons struggled with the mechanism of the bomb, inserting the explosive charge that would send one block of uranium flying into the other to set off the instant chain reaction that would create the atomic explosion.

The navigator on one of the other six planes on the mission remembered that, watching the mushroom cloud, "There was almost complete silence on the flight deck. It was evident the city of Hiroshima was destroyed."

And the Enola Gay's copilot later remembered thinking: "My God, what have we done?"
Programming

'Hour of Code' Announces It's Now Evolving Into 'Hour of AI' (hourofcode.com) 35

Last month Microsoft pledged $4 billion (in cash and AI/cloud technology) to "advance" AI education in K-12 schools, community and technical colleges, and nonprofits (according to a blog post by Microsoft President Brad Smith). But in the launch event video, Smith also says it's time to "switch hats" from coding to AI, adding that "the last 12 years have been about the Hour of Code, but the future involves the Hour of AI."

Long-time Slashdot reader theodp writes: This sets the stage for Code.org CEO Hadi Partovi's announcement that his tech-backed nonprofit's [annual educational event] Hour of Code is being renamed to the Hour of AI... Explaining the pivot, Partovi says: "Computer science for the last 50 years has had a focal point around coding that's been — sort of like you learn computer science so that you create code. There's other things you learn, like data science and algorithms and cybersecurity, but the focal point has been coding.

"And we're now in a world where the focal point of computer science is shifting to AI... We all know that AI can write much of the code. You don't need to worry about where did the semicolons go, or did I close the parentheses or whatnot. The busy work of computer science is going to be done by the computer itself.

"The creativity, the thinking, the systems design, the engineering, the algorithm planning, the security concerns, privacy concerns, ethical concerns — those parts of computer science are going to be what remains with a focal point around AI. And what's going to be important is to make sure in education we give students the tools so they don't just become passive users of AI, but so that they learn how AI works."

Speaking to Microsoft's Smith, Partovi vows to redouble the nonprofit's policy work to "make this [AI literacy] a high school graduation requirement so that no student graduates school without at least a basic understanding of what's going to be part of the new liberal arts background [...] As you showed with your hat, we are renaming the Hour of Code to an Hour of AI."

Cloud

Amazon's Cloud Business Giving Federal Agencies Up To $1 Billion In Discounts (cnbc.com) 20

Amazon Web Services has struck a deal with the U.S. government to provide up to $1 billion in cloud service discounts through 2028. CNBC reports: The agreement is expected to speed up migration to the cloud, as well as adoption of artificial intelligence tools, the General Services Administration said. "AWS's partnership with GSA demonstrates a shared public-private commitment to enhancing America's AI leadership," the agency said in a release.

Amazon's cloud boss, Matt Garman, hailed the agreement as a "significant milestone in the large-scale digital transformation of government services." The discounts aggregated across federal agencies include credits to use AWS' cloud infrastructure, modernization programs and training services, as well as incentives for "direct partnership."
Further reading: OpenAI Offers ChatGPT To US Federal Agencies for $1 a Year
Businesses

The Great Indian IT Squeeze 25

An anonymous reader shares a report: The Indian IT sector has operated for decades under the dominance of major firms TCS, Infosys, Wipro, and HCLT. The historical growth of these companies was tightly coupled with the U.S. economy through a strong "multiplier effect," where Indian IT export growth significantly outpaced US GDP growth. This reliable growth model is now under pressure.

The multiplier has weakened considerably, falling from a peak of 4.1x to a projected 1.6x, contributing to a prolonged slowdown in Indian IT exports. A primary factor in this slowdown is a clear shift in client spending priorities. While overall enterprise technology spending remains strong, clients are now allocating a larger portion of their budgets to core digital infrastructure, such as cloud and SaaS platforms, over traditional IT services.
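
To make the multiplier concrete: export growth is roughly US GDP growth times the multiplier, so the fall from 4.1x to 1.6x cuts the export growth a given US expansion can support by more than half. A minimal sketch (the 2.5% GDP growth input is a hypothetical illustration, not a figure from the report):

```python
# Illustrate the "multiplier effect": IT export growth ~= US GDP growth
# times the multiplier. The GDP growth rate below is assumed for
# illustration only.

us_gdp_growth = 0.025  # hypothetical US GDP growth rate

for multiplier in (4.1, 1.6):  # peak vs. projected multiplier
    export_growth = us_gdp_growth * multiplier
    print(f"{multiplier}x multiplier -> {export_growth * 100:.1f}% export growth")

# 4.1x turns 2.5% GDP growth into roughly 10% export growth;
# at 1.6x, the same expansion supports only about 4%.
```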

The firms are facing challenges on multiple fronts. Global corporations are increasingly establishing their own global capability centers in India, with projections indicating an accelerated pace of 120 new centers being added annually in fiscal years 2024 and 2025, up from some 40 six years ago. This insourcing trend diverts revenue from traditional IT vendors and creates direct competition for skilled technology talent.
Data Storage

What Happens To Your Data If You Stop Paying for Cloud Storage? (wired.com) 38

Major cloud storage providers maintain unclear policies about deleting user data after subscription cancellations, Wired reports, with deletion timelines ranging from six months to indefinite preservation.

Apple reserves the right to delete iCloud backups after 180 days of device inactivity but does not specify what happens to general file storage. Google may delete content after users exceed free storage limits for extended periods, though files remain safe for two years after cancellation.

Microsoft may delete OneDrive files after six months of non-payment, while Dropbox preserves files indefinitely without expiration dates. All providers revert users to limited free storage tiers upon cancellation, with Apple and Microsoft offering 5GB, Google providing 15GB, and Dropbox allowing 2GB.
Businesses

Atlassian Terminates 150 Staff With Pre-Recorded Video (cyberdaily.au) 41

Atlassian laid off 150 employees via a pre-recorded video. "While not specifically outlined, the affected staff seem to be from the company's European operations, with The Australian saying that Cannon-Brookes shared that it would be difficult to axe its European staff due to contract arrangements, but that the company had already begun moving in that direction," reports CyberDaily. While the company claims the cuts weren't directly caused by AI, it has simultaneously rolled out AI-enhanced customer service tools and emphasized automation as a key part of its digital transformation strategy. From the report: Atlassian CEO and co-founder Mike Cannon-Brookes sent the video titled "Restructuring the CSS Team: A Difficult Decision for Our Future" to staff on Wednesday morning (30 July), informing them that 150 staff had been made redundant. Despite its title, the video reportedly did not make the decision seem difficult, instead saying it would allow staff "to say goodbye." The video itself did not announce who was leaving, but it told employees they would have to wait 15 minutes for an email about their employment. Those who were terminated had their laptops blocked immediately. They reportedly will receive six months' pay.

"AI is going to change Australia," [said former co-CEO and co-founder Scott Farquhar]. "Every person should be using AI daily for as many things as they can. Like any new technology, it will feel awkward to start with, but every business person, every business leader, every government leader, and every bureaucrat should be using it." He also said that governments should be implementing AI more broadly. [...] Commenting on the termination, Farquhar said the mass termination was due to the customer service team no longer being needed in the same capacity, as larger clients required less complex support following a move to the cloud.

United Kingdom

UK Competition Authority Rains on Microsoft and Amazon Cloud Parade (cnbc.com) 8

Britain's Competition and Markets Authority concluded that Microsoft and Amazon hold "significant unilateral market power" in cloud services and recommended investigating both companies under new competition rules. The regulator said it had concerns about practices creating customer "lock-in" effects through egress fees and unfavorable licensing terms that trap businesses in difficult-to-exit contracts.

Microsoft and Amazon each control roughly 30-40% of the infrastructure-as-a-service market, while Google holds 5-10%. Microsoft disputed the findings, calling the cloud market "dynamic and competitive." Amazon said the probe recommendations were "unwarranted."
Microsoft

Microsoft Joins $4 Trillion Club (yahoo.com) 35

Microsoft has reached a $4 trillion market cap, becoming only the second company to achieve this milestone. Investors drove the stock up 4.62% following the company's fourth-quarter earnings report, which showed strong growth in cloud-computing services fueled by artificial intelligence demand. Microsoft's Azure cloud business generated $75 billion in annual revenue, representing a 34% increase from the previous fiscal year.

Nvidia became the first company to reach a $4 trillion market cap earlier this month.
Data Storage

'The Future is Not Self-Hosted' (drewlyton.com) 175

A software developer who built his own home server in response to Amazon's removal of Kindle book downloads now argues that self-hosting "is NOT the future we should be fighting for." Drew Lyton constructed a home server running open-source alternatives to Google Drive, Google Photos, Audible, Kindle, and Netflix after Amazon announced that "Kindle users would no longer be able to download and back up their book libraries to their computers."

The change prompted Amazon to update Kindle store language to say "users are purchasing licenses -- not books." Lyton's setup involved a Lenovo P520 with 128GB RAM, multiple hard drives, and Docker containers running applications like Immich for photo storage and Jellyfin for media streaming. The technical complexity required "138 words to describe but took me the better part of two weeks to actually do."

The implementation was successful, but Lyton concluded that self-hosting "assumes isolated, independent systems are virtuous. But in reality, this simply makes them hugely inconvenient." He proposes "publicly funded, accessible, at-cost cloud services" as an alternative, suggesting libraries could provide "100GB of encrypted file storage, photo-sharing and document collaboration tools, and media streaming services -- all for free."
Operating Systems

Linux 6.16 Brings Faster File Systems, Improved Confidential Memory Support, and More Rust Support (zdnet.com) 50

ZDNet's Steven Vaughan-Nichols shares his list of "what's new and improved" in the latest Linux 6.16 kernel. An anonymous reader shares an excerpt from the report: First, the Rust language is continuing to become better integrated into the kernel. At the top of my list is that the kernel now boasts Rust bindings for the driver core and PCI device subsystem. This approach will make it easier to add new Rust-based hardware drivers to Linux. Additionally, new Rust abstractions have been integrated into the Direct Rendering Manager (DRM), particularly for ioctl handling, file/GEM memory management, and driver/device infrastructure for major GPU vendors, such as AMD, Nvidia, and Intel. These changes should reduce vulnerabilities and optimize graphics performance. This will make gamers and AI/ML developers happier.

Linux 6.16 also brings general improvements to Rust crate support. (Crates are Rust's packaging format.) This will make it easier to build, maintain, and integrate Rust kernel modules into the kernel. For those of you who still love C, don't worry. The vast majority of kernel code remains in C, and Rust is unlikely to replace C soon. In a decade, we may be telling another story. Beyond Rust, this latest release also comes with several major file system improvements. For starters, the XFS filesystem now supports large atomic writes. This capability means that large multi-block write operations are "atomic": either all blocks are updated or none are. This enhances data integrity and prevents torn writes. This move is significant for companies that use XFS for databases and large-scale storage.

Perhaps the most popular Linux file system, Ext4, is also getting many improvements. These boosts include faster commit paths, large folio support, and atomic multi-fsblock writes for bigalloc filesystems. What these improvements mean, if you're not a file-system nerd, is that we should see speedups of up to 37% for sequential I/O workloads. If your Linux laptop doubles as a music player, another nice new feature is that you can now stream your audio over USB even while the rest of your system is asleep. That capability's been available in Android for a while, but now it's part of mainline Linux.
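
For the curious, here's roughly what an atomic write looks like from user space. This is a hedged sketch, not code from the kernel tree or ZDNet: it assumes a recent kernel and a filesystem mount that actually advertises atomic-write support, the file path is invented, and because Python doesn't yet expose the RWF_ATOMIC flag, its numeric value is copied from the kernel's uapi headers (include/uapi/linux/fs.h) and should be verified on your system.

```python
import mmap
import os

# Per-write "untorn" flag backing the XFS/ext4 atomic-write support
# described above. Not exposed by Python's os module; value taken from
# include/uapi/linux/fs.h -- verify against your kernel headers.
RWF_ATOMIC = 0x00000040

BLOCK = 4096  # one filesystem block; real limits are advertised per file

# Atomic writes currently require O_DIRECT, which in turn requires
# block-aligned buffers; an anonymous mmap is conveniently page-aligned.
fd = os.open("/mnt/xfs/journal.dat",  # hypothetical path on an XFS mount
             os.O_WRONLY | os.O_CREAT | os.O_DIRECT, 0o644)
buf = mmap.mmap(-1, BLOCK)
buf.write(b"x" * BLOCK)

# Either the whole block reaches disk or none of it does, even across a
# crash -- the property databases want from these new atomic writes.
os.pwritev(fd, [buf], 0, RWF_ATOMIC)
os.close(fd)
```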

If security is a top priority for you, the 6.16 kernel now supports Intel Trusted Execution Technology (TXT) and Intel Trust Domain Extensions (TDX). This addition, along with Linux's improved support for AMD Secure Encrypted Virtualization with Secure Nested Paging (SEV-SNP) and Secure Memory Encryption, enables you to encrypt your software's memory in what's known as confidential computing. This feature improves cloud security by encrypting a user's virtual machine memory, meaning someone who cracks a cloud can't access your data.

Linux 6.16 also delivers several chip-related upgrades. It introduces support for Intel's Advanced Performance Extensions (APX), doubling x86 general-purpose registers from 16 to 32 and boosting performance on next-gen CPUs like Lunar Lake and Granite Rapids Xeon. Additionally, the new CONFIG_X86_NATIVE_CPU option allows users to build processor-optimized kernels for greater efficiency.

Support for Nvidia's AI-focused Blackwell GPUs has also been improved, and updates to TCP/IP with DMABUF help offload networking tasks to GPUs and accelerators. While these changes may go unnoticed by everyday users, high-performance systems will see gains and OpenVPN users may finally experience speeds that challenge WireGuard.
AI

Cisco Donates the AGNTCY Project to the Linux Foundation 7

Cisco has donated its AGNTCY initiative to the Linux Foundation, aiming to create an open-standard "Internet of Agents" to allow AI agents from different vendors to collaborate seamlessly. The project is backed by tech giants like Google Cloud, Dell, Oracle and Red Hat. "Without such an interoperable standard, companies have been rushing to build specialized AI agents," writes ZDNet's Steven Vaughan-Nichols. "These work in isolated silos that cannot work and play well with each other. This, in turn, makes them less useful for customers than they could be." From the report: AGNTCY was first open-sourced by Cisco in March 2025 and has since attracted support from over 75 companies. By moving it under the Linux Foundation's neutral governance, the hope is that everyone else will jump on the AGNTCY bandwagon, thus making it an industry-wide standard. The Linux Foundation has a long history of providing common ground for what otherwise might be contentious technology battles. The project provides a complete framework to solve the core challenges of multi-agent collaboration:

- Agent Discovery: An Open Agent Schema Framework (OASF) acts like a "DNS for agents," allowing them to find and understand the capabilities of others.
- Agent Identity: A system for cryptographically verifiable identities ensures agents can prove who they are and perform authorized actions securely across different vendors and organizations.
- Agent Messaging: A protocol named Secure Low-latency Interactive Messaging (SLIM) is designed for the complex, multi-modal communication patterns of agents, with built-in support for human-in-the-loop interaction and quantum-safe security.
- Agent Observability: A specialized monitoring framework provides visibility into complex, multi-agent workflows, which is crucial for debugging probabilistic AI systems.

You may well ask: aren't there other emerging AI agent standards? You're right. There are. These include the Agent2Agent (A2A) protocol, which was also recently contributed to the Linux Foundation, and Anthropic's Model Context Protocol (MCP). AGNTCY will help agents using these protocols discover each other and communicate securely. In more detail, it looks like this: AGNTCY enables interoperability and collaboration in three primary ways:

- Discovery: Agents using the A2A protocol and servers using MCP can be listed and found through AGNTCY's directories. This enables different agents to discover each other and understand their functions.
- Messaging: A2A and MCP communications can be transported over SLIM, AGNTCY's messaging protocol designed for secure and efficient agent interaction.
- Observability: The interactions between these different agents and protocols can be monitored using AGNTCY's observability software development kits (SDKs), which increase transparency and help with debugging complex workflows.
You can view AGNTCY's code and documentation on GitHub.
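
To picture how the pieces fit together, here is a toy sketch of the discovery-then-messaging flow described above. Every class, method, and endpoint name below is invented purely for illustration; none of it comes from the AGNTCY SDKs or specs:

```python
# Toy model of AGNTCY-style agent discovery and messaging. Hypothetical
# names throughout -- this mirrors the architecture, not the real API.

from dataclasses import dataclass, field


@dataclass
class AgentRecord:
    """A directory entry in the spirit of OASF: who an agent is, what it can do."""
    agent_id: str
    capabilities: list[str]
    endpoint: str  # where a SLIM-style session would be opened


@dataclass
class Directory:
    """Toy 'DNS for agents': register records, look peers up by capability."""
    records: list[AgentRecord] = field(default_factory=list)

    def register(self, record: AgentRecord) -> None:
        self.records.append(record)

    def find(self, capability: str) -> list[AgentRecord]:
        return [r for r in self.records if capability in r.capabilities]


directory = Directory()
directory.register(AgentRecord("billing-bot", ["invoices"], "slim://vendor-a/billing"))
directory.register(AgentRecord("triage-bot", ["tickets", "invoices"], "slim://vendor-b/triage"))

# An A2A- or MCP-speaking agent would discover peers this way, then open
# a secure messaging session to each endpoint it found.
for peer in directory.find("invoices"):
    print(f"would open SLIM session to {peer.endpoint} as {peer.agent_id}")
```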
Power

AI Boom Sparks Fight Over Soaring Power Costs 88

Utilities across the U.S. are demanding tech companies pay larger shares of electricity infrastructure costs as AI drives unprecedented data center construction, creating tensions over who bears the financial burden of grid upgrades.

Virginia utility Dominion Energy received requests from data center developers requiring 40 gigawatts of electricity by the end of 2024, enough to power at least 10 million homes, and proposed measures requiring longer-term contracts and guaranteed payments. Ohio became one of the first states to mandate companies pay more connection costs after receiving power requests exceeding 50 times existing data center usage.
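
The arithmetic behind that homes comparison is worth making explicit. A minimal sketch using only the figures above (the per-home draw is our derived number, not one from the utilities):

```python
# What "40 GW is enough to power at least 10 million homes" implies
# about the average per-home figure used in such comparisons.

data_center_demand_gw = 40
homes_powered = 10_000_000

kw_per_home = data_center_demand_gw * 1e6 / homes_powered  # GW -> kW
print(f"Implied average draw: {kw_per_home:.0f} kW per home")  # -> 4 kW
```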

Tech giants Microsoft, Google, and Amazon plan to spend $80 billion, $85 billion, and $100 billion respectively this year on AI infrastructure, while utilities worry that grid upgrade costs will increase rates for residential customers.

Further reading: The AI explosion means millions are paying more for electricity
The Almighty Buck

Bankrupt Futurehome Suddenly Makes Its Smart Home Hub a Subscription Service (arstechnica.com) 81

After filing for bankruptcy, Norwegian smart home company Futurehome abruptly transitioned its Smarthub II and other devices to a subscription-only model, disabling essential features unless users pay an annual fee. Needless to say, customers aren't too happy with the move as they bought the hardware expecting lifetime functionality and now find their smart homes significantly less smart. Ars Technica reports: Launched in 2016, Futurehome's Smarthub is marketed as a central hub for controlling Internet-connected devices in smart homes. For years, the Norwegian company sold its products, which also include smart thermostats, smart lighting, and smart fire and carbon monoxide alarms, for a one-time fee that included access to its companion app and cloud platform for control and automation. As of June 26, though, those core features require a 1,188 NOK (about $116.56) annual subscription fee, turning the smart home devices into dumb ones if users don't pay up.

"You lose access to controlling devices, configuring; automations, modes, shortcuts, and energy services," a company FAQ page says. You also can't get support from Futurehome without a subscription. "Most" paid features are inaccessible without a subscription, too, the FAQ from Futurehome, which claims to be in 38,000 households, says. After June 26, customers had four weeks to continue using their devices as normal without a subscription. That grace period recently ended, and users now need a subscription for their smart devices to work properly.

Some users are understandably disheartened about suddenly having to pay a monthly fee to use devices they already purchased. More advanced users have also expressed frustration with Futurehome potentially killing its devices' ability to work by connecting to a local device instead of the cloud. In its FAQ, Futurehome says it "cannot guarantee that there will not be changes in the future" around local API access.

Futurehome claims that introducing the subscription fee was a necessary move due to its recent bankruptcy. Its FAQ page reads: "Futurehome AS was declared bankrupt on 20 May 2025. The platform and related services were purchased from the bankruptcy estate -- 50 percent by former Futurehome owners and 50 percent by Sikom Connect -- and are now operated by FHSD Connect AS. To secure stable operation, fund product development, and provide high-quality support, we are introducing a new subscription model."

The company says the subscription fee would allow it to provide customers "better functionality, more security, and higher value in the solution you have already invested in."
Earth

Researchers Quietly Planned a Test to Dim Sunlight Over 3,900 Square Miles (politico.com) 81

California researchers planned a multimillion-dollar test of salt water-spraying equipment that could one day be used to dim the sun's rays — over a 3,900-square-mile area off the west coasts of North America, Chile or south-central Africa. E&E News calls it part of a "secretive" initiative backed by "wealthy philanthropists with ties to Wall Street and Silicon Valley" — and a piece of the "vast scope of research aimed at finding ways to counter the Earth's warming, work that has often occurred outside public view." "At such scales, meaningful changes in clouds will be readily detectable from space," said a 2023 research plan from the [University of Washington's] Marine Cloud Brightening Program. The massive experiment would have been contingent upon the successful completion of the thwarted pilot test on the carrier deck in Alameda, according to the plan.... Before the setback in Alameda, the team had received some federal funding and hoped to gain access to government ships and planes, the documents show.

The university and its partners — a solar geoengineering research advocacy group called SilverLining and the scientific nonprofit SRI International — didn't respond to detailed questions about the status of the larger cloud experiment. But SilverLining's executive director, Kelly Wanser, said in an email that the Marine Cloud Brightening Program aimed to "fill gaps in the information" needed to determine if the technologies are safe and effective. In the initial experiment, the researchers appeared to have disregarded past lessons about building community support for studies related to altering the climate, and instead kept their plans from the public and lawmakers until the testing was underway, some solar geoengineering experts told E&E News. The experts also expressed surprise at the size of the planned second experiment....

The program does not "recommend, support or develop plans for the use of marine cloud brightening to alter weather or climate," Sarah Doherty, an atmospheric and climate science professor at the university who leads the program, said in a statement to E&E News. She emphasized that the program remains focused on researching the technology, not deploying it. There are no "plans for conducting large-scale studies that would alter weather or climate," she added.

"More than 575 scientists have called for a ban on geoengineering development," according to the article, "because it 'cannot be governed globally in a fair, inclusive, and effective manner.'" But "Some scientists believe that the perils of climate change are too dire to not pursue the technology, which they say can be safely tested in well-designed experiments... " "If we really were serious about the idea that to do any controversial topic needs some kind of large-scale consensus before we can research the topic, I think that means we don't research topics," David Keith, a geophysical sciences professor at the University of Chicago, said at a think tank discussion last month... "The studies that the program is pursuing are scientifically sound and would be unlikely to alter weather patterns — even for the Puerto Rico-sized test, said Daniele Visioni, a professor of atmospheric sciences at Cornell University. Nearly 30 percent of the planet is already covered by clouds, he noted.
Thanks to Slashdot reader fjo3 for sharing the news.
