Microsoft

OpenAI Finalizes Corporate Restructuring, Gives Microsoft 27% Stake and Technology Access Until 2032 (microsoft.com) 14

Microsoft and OpenAI have finalized a new agreement that removes uncertainty for investors and clears the path for OpenAI to restructure as a for-profit business. Microsoft receives a 27% ownership stake in OpenAI worth approximately $135 billion and retains access to the AI startup's technology until 2032, including models that achieve AGI. OpenAI completed its recapitalization, simplifying its corporate structure while keeping the nonprofit in control of the for-profit entity. The OpenAI Foundation receives an equity stake worth roughly $130 billion and plans to initially focus on funding work to accelerate health breakthroughs.

Microsoft backed OpenAI with $13.75 billion and was the biggest holdout among investors during negotiations. Once OpenAI achieves AGI, verified by an independent expert panel, Microsoft will no longer receive a cut of OpenAI's revenue. Microsoft also loses its right of first refusal on new cloud infrastructure business from OpenAI, though OpenAI commits an additional $250 billion to Azure.
Power

NextEra Energy Partners With Google To Restart Iowa Nuclear Plant 23

NextEra Energy and Google have partnered to restart Iowa's long-shuttered Duane Arnold nuclear plant, marking the first major U.S. attempt to revive a decommissioned reactor. "We expect Duane Arnold to be back online in early 2029, and the plant will provide more than 600 MW of clean, safe, 'always-on' nuclear energy to the regional grid," said Google in a blog post. Reuters reports: Under the 25-year agreement, the tech giant will purchase power from the 615-MW plant for its growing cloud and AI infrastructure in the state, while also driving significant economic investment to the Midwest region. One of the plant's minority owners, Central Iowa Power Cooperative (CIPCO), will purchase the remaining portion of the plant's output on the same terms as Google, NextEra said. The utility added that it had also signed agreements to acquire CIPCO and Corn Belt Power Cooperative's combined 30% interest in the Duane Arnold plant, bringing NextEra's ownership to 100%.
Ubuntu

Finally, You Can Now be a 'Certified' Ubuntu Sys-Admin/Linux User (itsfoss.com) 50

On Thursday, Ubuntu-maker Canonical "officially launched Canonical Academy, a new certification platform designed to help professionals validate their Linux and Ubuntu skills through practical, hands-on assessments," writes the blog It's FOSS: Focusing on real-world scenarios, Canonical Academy aims to foster practical skills rather than theoretical knowledge. The end goal? Getting professionals ready for the actual challenges they will face on the job. The learning platform is already live with its first course offering, the System Administrator track (with three certification exams), which is tailored for anyone looking to validate their Linux and Ubuntu expertise.

The exams use cloud-based testing environments that simulate real workplace scenarios. Each assessment is modular, meaning you can progress through individual exams and earn badges for each one. Complete all the exams in this track to earn the full Sysadmin qualification... Canonical is also looking for community members to contribute as beta testers and subject-matter experts (SME). If you are interested in helping shape the platform or want to get started with your certification, you can visit the Canonical Academy website.

The sys-admin track offers exams for Linux Terminal, Ubuntu Desktop 2024, Ubuntu Server 2024, and "managing complex systems," according to an official FAQ. "Each exam provides an in-browser remote desktop interface into a functional Ubuntu Desktop environment running GNOME. From this initial node, you will be expected to troubleshoot, configure, install, and maintain systems, processes, and other general activities associated with managing Linux. The exam is a hybrid format featuring multiple choice, scenario-based, and performance-based questions..."

"Test-takers interested in the types of material covered on each exam can review links to tutorials and documentation on our website."

The FAQ advises test takers to use a Chromium-based browser, as Firefox "is NOT supported at this time... There is a known issue with keyboards and Firefox in the CUE.01 Linux 24.04 preview release at this time, which will be resolved in the CUE.01 Linux 24.10 exam release."
AI

Apple Begins Shipping American-Made AI Servers From Texas 47

Apple has begun shipping U.S.-made AI servers from a new factory in Houston, Texas -- part of its $600 billion investment in American manufacturing and supply chains. CNBC reports: Apple Chief Operating Officer Sabih Khan said on Thursday that the servers will power the company's Apple Intelligence and Private Cloud Compute services. Apple is using its own silicon in its Apple Intelligence servers. "Our teams have done an incredible job accelerating work to get the new Houston factory up and running ahead of schedule and we plan to continue expanding the facility to increase production next year," Khan said in a statement. The Houston factory is on track to create thousands of jobs, Apple said. The Apple servers were previously manufactured overseas.
Network

A Single Point of Failure Triggered the Amazon Outage Affecting Millions (arstechnica.com) 32

An anonymous reader quotes a report from Ars Technica: The outage that hit Amazon Web Services and took out vital services worldwide was the result of a single failure that cascaded from system to system within Amazon's sprawling network, according to a post-mortem from company engineers. [...] Amazon said the root cause of the outage was a bug in the software running the DynamoDB DNS management system. The system monitors the stability of load balancers by, among other things, periodically creating new DNS configurations for endpoints within the AWS network. A race condition is an error that makes a process dependent on the timing or sequence of events that are variable and outside the developers' control. The result can be unexpected behavior and potentially harmful failures.

In this case, the race condition resided in the DNS Enactor, a DynamoDB component that constantly updates domain lookup tables in individual AWS endpoints to optimize load balancing as conditions change. As the enactor operated, it "experienced unusually high delays needing to retry its update on several of the DNS endpoints." While the enactor was playing catch-up, a second DynamoDB component, the DNS Planner, continued to generate new plans. Then, a separate DNS Enactor began to implement them. The timing of these two enactors triggered the race condition, which ended up taking out the entire DynamoDB service. [...] The failure caused systems that relied on DynamoDB in Amazon's US-East-1 region to experience errors that prevented them from connecting. Both customer traffic and internal AWS services were affected.

The damage resulting from the DynamoDB failure then put a strain on Amazon's EC2 services located in the US-East-1 region. The strain persisted even after DynamoDB was restored, as EC2 in this region worked through a "significant backlog of network state propagations needed to be processed." The engineers went on to say: "While new EC2 instances could be launched successfully, they would not have the necessary network connectivity due to the delays in network state propagation." In turn, the delay in network state propagations spilled over to a network load balancer that AWS services rely on for stability. As a result, AWS customers experienced connection errors from the US-East-1 region. Affected AWS functions included creating and modifying Redshift clusters, Lambda invocations, Fargate task launches (including Managed Workflows for Apache Airflow), Outposts lifecycle operations, and the AWS Support Center.
Amazon has temporarily disabled its DynamoDB DNS Planner and DNS Enactor automation globally while it fixes the race condition and adds safeguards against incorrect DNS plans. Engineers are also updating EC2 and its network load balancer.
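The mechanism Ars describes can be illustrated with a small, hypothetical sketch (not Amazon's actual code): two enactors apply numbered DNS plans to a shared record, and without a version guard, a delayed enactor can overwrite a newer plan with a stale one.

```python
# Hypothetical sketch of the race described above: two "enactors" apply
# DNS plans to a shared record. Without a version check, a delayed
# enactor can clobber a newer plan with a stale one.

class DnsRecord:
    def __init__(self):
        self.plan_version = 0
        self.endpoints = []

    def apply_unsafe(self, version, endpoints):
        # No guard: whoever writes last wins, even with an older plan.
        self.plan_version = version
        self.endpoints = endpoints

    def apply_safe(self, version, endpoints):
        # Guard: refuse to apply a plan older than what is already live.
        if version <= self.plan_version:
            return False
        self.plan_version = version
        self.endpoints = endpoints
        return True

record = DnsRecord()
record.apply_unsafe(2, ["ip-2"])     # fast enactor applies the newer plan
record.apply_unsafe(1, ["ip-1"])     # delayed enactor applies a stale plan
assert record.endpoints == ["ip-1"]  # stale data is now live: the race

record = DnsRecord()
record.apply_safe(2, ["ip-2"])
assert not record.apply_safe(1, ["ip-1"])  # stale plan rejected
assert record.endpoints == ["ip-2"]
```

The version guard is one conventional fix for this class of bug; Amazon's actual safeguards against "incorrect DNS plans" may of course differ.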

Further reading: Amazon's AWS Shows Signs of Weakness as Competitors Charge Ahead
Cloud

Amazon's AWS Shows Signs of Weakness as Competitors Charge Ahead (bloomberg.com) 25

Amazon Web Services basically invented the cloud computing business and once held nearly half the market. That dominance is slipping. AWS captured 38% of corporate spending on cloud infrastructure services last year, down from almost 50% in 2018, according to Gartner. Microsoft now grows its backlog of corporate sales faster than Amazon. The company that brushed aside incumbents and transformed an internal startup into Amazon's profit engine now faces internal bureaucracy that has slowed it down.

Bloomberg interviewed 23 current and former AWS employees who described management layers that proliferated after a pandemic hiring binge. One sales engineer who was six managers from Jeff Bezos before the pandemic found himself fifteen rungs from CEO Andy Jassy earlier this year. AWS hesitated to invest in Anthropic when the AI startup was spending most of its cash on Amazon servers.

Executives doubted that Anthropic's AI could be monetized and were culturally reluctant to pay for external technology they believed could be built in-house. Google invested in early 2023. Amazon followed that September with $4 billion in commitments. On Thursday, Google said it will supply up to 1 million AI chips to Anthropic.
Earth

India Trials Delhi Cloud Seeding To Clean Air in World's Most Polluted City (theguardian.com) 30

The Delhi regional government is trialling a cloud-seeding experiment to induce artificial rain, in an effort to clean the air in the world's most polluted city. From a report: The Bharatiya Janata party (BJP) has been proposing the use of cloud seeding as a way to bring Delhi's air pollution under control since it was elected to lead the regional government this year.

Cloud seeding involves using aircraft or drones to add particles of silver iodide, which have a structure similar to ice, to clouds. Water droplets cluster around the particles, modifying the structure of the clouds and increasing the chance of precipitation. Months of unpredictable weather over India's capital had put the BJP's cloud-seeding plans on pause. But days after Delhi's air quality once again fell into the hazardous range after the Diwali festival, and a thick brown haze settled over the city, the government said the scheme would finally be rolled out.

Businesses

Anthropic's Google Cloud Deal Includes 1 Million TPUs, 1 GW of Capacity In 2026 (cnbc.com) 8

Google and Anthropic have finalized a cloud partnership worth tens of billions of dollars, granting Anthropic access to up to one million of Google's Tensor Processing Units and more than a gigawatt of compute power by 2026. CNBC reports: Industry estimates peg the cost of a 1-gigawatt data center at around $50 billion, with roughly $35 billion of that typically allocated to chips. While competitors tout even loftier projections -- OpenAI's 33-gigawatt "Stargate" chief among them -- Anthropic's move is a quiet power play rooted in execution, not spectacle. Founded by former OpenAI researchers, the company has deliberately adopted a slower, steadier ethos, one that is efficient, diversified, and laser-focused on the enterprise market.

A key to Anthropic's infrastructure strategy is its multi-cloud architecture. The company's Claude family of language models runs across Google's TPUs, Amazon's custom Trainium chips, and Nvidia's GPUs, with each platform assigned to specialized workloads like training, inference, and research. Google said the TPUs offer Anthropic "strong price-performance and efficiency." [...] Anthropic's ability to spread workloads across vendors lets it fine-tune for price, performance, and power constraints. According to a person familiar with the company's infrastructure strategy, every dollar of compute stretches further under this model than those locked into single-vendor architectures.
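The routing idea in the paragraph above can be sketched as a simple placement table (every mapping here is hypothetical, purely to illustrate the pattern, not Anthropic's actual assignments):

```python
# Illustrative multi-cloud placement: each workload class is pinned to the
# platform where it is cheapest or best suited, rather than everything
# living on one vendor. All assignments below are made up for illustration.
WORKLOAD_PLACEMENT = {
    "training":  "google-tpu",
    "inference": "amazon-trainium",
    "research":  "nvidia-gpu",
}

def place(workload: str) -> str:
    """Return the (hypothetical) platform assigned to a workload class."""
    try:
        return WORKLOAD_PLACEMENT[workload]
    except KeyError:
        raise ValueError(f"unknown workload class: {workload}")

assert place("training") == "google-tpu"
```

The design point is that the placement table, not the application code, encodes the vendor choice, so a workload can be re-pinned as price, performance, or power constraints change.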

Microsoft

Microsoft Puts Office Online Server On the Chopping Block 51

Microsoft is retiring Office Online Server on December 31, 2026, ending support and updates for organizations running browser-based Office apps on-premises. The Register reports: After this, there won't be any more security fixes, updates, or technical support from Microsoft. "This change is part of our ongoing commitment to modernizing productivity experiences and focusing on cloud-first solutions," the company said. Office Online Server provides browser-based versions of Word, Excel, PowerPoint, and OneNote for customers who want to keep things on-prem without having to roll out the full desktop applications. Microsoft's solution is to move to Microsoft 365, its decidedly off-premises version of its applications. The company said it is "focusing its browser-based Office app investments on Office for the Web to deliver secure, collaborative, and feature-rich experiences through Microsoft 365."

Other than migrating to another platform when the vendor pulls the plug, affected customers have few options. The announcement will also hit several customers running SharePoint Server SE or Exchange Server SE. While those products remain supported, Office Online Server integration will go away. The company suggested Microsoft 365 Apps for Enterprise and Office LTSC 2024 as alternatives for viewing and editing documents hosted on those servers.

Skype for Business customers will also lose some key features related to PowerPoint. Presenter notes and high-fidelity PowerPoint rendering will go away. In-meeting annotations, which allow meeting participants to write directly to slides without altering the original file, will no longer be available, and embedded video playback will run at lower fidelity. Features like whiteboards, polls, and app sharing shouldn't be affected. Microsoft's solution is a move to Teams, which the company says "offers modern meeting experiences."
Google

Google Porting All Internal Workloads To Arm (theregister.com) 44

Google is migrating all its internal workloads to run on both x86 and its custom Axion Arm chips, with major services like YouTube, Gmail, and BigQuery already running on both architectures. The Register reports: The search and ads giant documented its move in a preprint paper published last week, titled "Instruction Set Migration at Warehouse Scale," and in a Wednesday post that reveals YouTube, Gmail, and BigQuery already run on both x86 and its Axion Arm CPUs -- as do around 30,000 more applications. Both documents explain Google's migration process, which engineering fellow Parthasarathy Ranganathan and developer relations engineer Wolff Dobson said started with an assumption "that we would be spending time on architectural differences such as floating point drift, concurrency, intrinsics such as platform-specific operators, and performance." [...]

The post and paper detail work on 30,000 applications, a collection of code sufficiently large that Google pressed its existing automation tools into service -- and then built a new AI tool called "CogniPort" to do things its other tools could not. [...] Google found the agent succeeded about 30 percent of the time under certain conditions, and did best on test fixes, platform-specific conditionals, and data representation fixes. That's not an enormous success rate, but Google has at least another 70,000 packages to port.
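One of the migration hazards named above, the platform-specific conditional, looks roughly like this in miniature (a hypothetical sketch; the function names are illustrative stand-ins, not Google's code):

```python
# A platform-specific conditional: code that dispatches on CPU architecture
# and therefore needs a new branch (or a portable fallback) when ported
# from x86 to Arm. The checksum functions are stand-ins for hand-tuned
# per-architecture implementations.
import platform
import zlib

def checksum_portable(data: bytes) -> int:
    return zlib.crc32(data)

def checksum_x86(data: bytes) -> int:
    # stand-in for an SSE-optimized implementation
    return zlib.crc32(data)

def checksum_arm(data: bytes) -> int:
    # stand-in for a NEON implementation: the branch a port must add
    return zlib.crc32(data)

def fast_checksum(data: bytes) -> int:
    arch = platform.machine()
    if arch in ("x86_64", "AMD64"):
        return checksum_x86(data)
    if arch in ("aarch64", "arm64"):
        return checksum_arm(data)
    return checksum_portable(data)   # safe fallback for anything else

assert fast_checksum(b"hello") == zlib.crc32(b"hello")
```

Code with only the x86 branch and no fallback silently misbehaves or fails to build on Arm, which is why such conditionals are one of the fix categories the CogniPort agent targets.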

The company's aim is to finish the job so its famed Borg cluster manager -- the basis of Kubernetes -- can allocate internal workloads in ways that efficiently utilize Arm servers. Doing so will likely save money, because Google claims its Axion-powered machines deliver up to 65 percent better price-performance than x86 instances, and can be 60 percent more energy-efficient. Those numbers, and the scale of Google's code migration project, suggest the web giant will need fewer x86 processors in years to come.

The Internet

Smart Beds Malfunctioned During AWS Outage (msn.com) 105

Early Monday, an Amazon Web Services outage disrupted banks, games, and Peloton classes. Eight Sleep customers faced a different problem. Their internet-enabled mattresses malfunctioned. People woke to beds locked in upright positions, excessive heat, flashing lights, and unexpected alarms. Matteo Franceschetti, the company's chief executive, apologized and said engineers were building an outage-proof mode. By Monday evening, all devices functioned again, though some experienced data processing delays. The mattresses adjust temperature between 55 and 110 degrees Fahrenheit and elevate bodies into different positions. They activate soundscapes and vibrational alarms. The advanced models cost over $5,000. A yearly subscription of $199 to $399 is required for temperature controls.
Cloud

Amazon's DNS Problem Knocked Out Half the Web, Likely Costing Billions 103

An anonymous reader quotes a report from Ars Technica: On Monday afternoon, Amazon confirmed that an outage affecting Amazon Web Services' cloud hosting, which had impacted millions across the Internet, had been resolved. Considered the worst outage since last year's CrowdStrike chaos, Amazon's outage caused "global turmoil," Reuters reported. AWS is the world's largest cloud provider and, therefore, the "backbone of much of the Internet," ZDNet noted. Ultimately, more than 28 AWS services were disrupted, causing perhaps billions in damages, one analyst estimated for CNN.

[...] Amazon's problems originated at a US site that is its "oldest and largest for web services" and often "the default region for many AWS services," Reuters noted. The same site has experienced two outages before in 2020 and 2021, but while the tech giant had confirmed that those prior issues had been "fully mitigated," apparently the fixes did not ensure stability into 2025. ZDNet noted that Amazon's first sign of the outage was "increased error rates and latency across numerous key services" tied to its cloud database technology. Although "engineers later identified a Domain Name System (DNS) resolution problem" as the root of these issues and quickly fixed it, "other AWS services began to fail in its wake, leaving the platform still impaired" as more than two dozen AWS services shut down. At the peak of the outage on Monday, Down Detector tracked more than 8 million reports globally from users panicked by the outage, ZDNet reported.
Ken Birman, a computer science professor at Cornell University, told Reuters that "software developers need to build better fault tolerance."

"When people cut costs and cut corners to try to get an application up, and then forget that they skipped that last step and didn't really protect against an outage, those companies are the ones who really ought to be scrutinized later."
Cloud

Alibaba Cloud Says It Cut Nvidia AI GPU Use By 82% With New Pooling System (tomshardware.com) 27

Alibaba Cloud claims its new Aegaeon GPU pooling system cuts Nvidia GPU use by 82%, letting 213 H20 accelerators handle workloads that previously required 1,192. The advancements have been detailed in a paper (PDF) at the 2025 ACM Symposium on Operating Systems (SOSP) in Seoul. Tom's Hardware reports: Unlike training-time breakthroughs that chase model quality or speed, Aegaeon is an inference-time scheduler designed to maximize GPU utilization across many models with bursty or unpredictable demand. Instead of pinning one accelerator to one model, Aegaeon virtualizes GPU access at the token level, allowing it to schedule tiny slices of work across a shared pool. This means one H20 could serve several different models simultaneously, with system-wide "goodput" -- a measure of effective output -- rising by as much as nine times compared to older serverless systems.
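As a rough illustration of token-level scheduling (a simplified sketch, not the paper's algorithm), a shared accelerator can interleave small slices of token generation across several models instead of serving one model to completion:

```python
# Simplified token-level scheduling: one shared GPU interleaves small
# slices of token generation across several models, rather than pinning
# one model per accelerator.
from collections import deque

def schedule_tokens(requests, slice_size=4):
    """requests: {model_name: tokens_remaining}. Returns the order in
    which one shared GPU serves token slices, round-robin."""
    queue = deque(requests.items())
    schedule = []
    while queue:
        model, remaining = queue.popleft()
        step = min(slice_size, remaining)
        schedule.append((model, step))
        if remaining - step > 0:
            queue.append((model, remaining - step))
    return schedule

# Three models share one GPU; none monopolizes it while others wait.
plan = schedule_tokens({"model-a": 6, "model-b": 10, "model-c": 3})
# plan interleaves: [("model-a", 4), ("model-b", 4), ("model-c", 3), ...]
```

In this toy version every model makes progress within a few slices; the real system additionally has to swap model weights and KV-cache state between slices, which is where the engineering difficulty (and the reported goodput gains over older serverless schedulers) lies.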

The system was tested in production over several months, according to the paper, which lists authors from both Peking University and Alibaba's infrastructure division, including CTO Jingren Zhou. During that window, the number of GPUs needed to support dozens of different LLMs -- ranging in size up to 72 billion parameters -- fell from 1,192 to just 213. While the paper does not break down which models contributed most to the savings, reporting by the South China Morning Post says the tests were conducted using Nvidia's H20, one of the few accelerators still legally available to Chinese buyers under current U.S. export controls.

Security

Foreign Hackers Breached a US Nuclear Weapons Plant Via SharePoint Flaws (csoonline.com) 62

Foreign hackers breached the National Nuclear Security Administration's Kansas City National Security Campus (KCNSC) by exploiting unpatched Microsoft SharePoint vulnerabilities. The intrusion happened in August and is possibly linked to either Chinese state actors or Russian cybercriminals. CSO Online notes that "roughly 80% of the non-nuclear parts in the nation's nuclear stockpile originate from KCNSC," making it "one of the most sensitive facilities in the federal weapons complex." From the report: The breach targeted a plant that produces the vast majority of critical non-nuclear components for US nuclear weapons under the NNSA, a semi-autonomous agency within the Department of Energy (DOE) that oversees the design, production, and maintenance of the nation's nuclear weapons. Honeywell Federal Manufacturing & Technologies (FM&T) manages the Kansas City campus under contract to the NNSA. [...] The attackers exploited two recently disclosed Microsoft SharePoint vulnerabilities -- CVE-2025-53770, a spoofing flaw, and CVE-2025-49704, a remote code execution (RCE) bug -- both affecting on-premises servers. Microsoft issued fixes for the vulnerabilities on July 19.

On July 22, the NNSA confirmed it was one of the organizations hit by attacks enabled by the SharePoint flaws. "On Friday, July 18th, the exploitation of a Microsoft SharePoint zero-day vulnerability began affecting the Department of Energy," a DOE spokesperson said. However, the DOE contended at the time, "The department was minimally impacted due to its widespread use of the Microsoft M365 cloud and very capable cybersecurity systems. A very small number of systems were impacted. All impacted systems are being restored." By early August, federal responders, including personnel from the NSA, were on-site at the Kansas City facility, the source tells CSO.

The Internet

AWS Outage Takes Thousands of Websites Offline for Three Hours (cnbc.com) 56

AWS experienced a three-hour outage early Monday morning that disrupted thousands of websites and applications across the globe. The cloud computing provider reported DNS problems with DynamoDB in its US-EAST-1 region in northern Virginia starting at 12:11 a.m. Pacific time. Over 4 million users reported issues, according to Downdetector. Snapchat outage reports fell from more than 22,000 to around 4,000 as systems recovered. Roblox dropped from over 12,600 complaints to fewer than 500. Reddit and the financial platform Chime remained affected longer. Perplexity, Coinbase and Robinhood attributed their platform disruptions directly to AWS.

Gaming platforms including Fortnite, Clash Royale and Clash of Clans went offline. Signal confirmed the messaging app was down. In Britain, Lloyds Bank, Bank of Scotland, Vodafone, BT, and the HMRC website faced problems. United Airlines reported disrupted access to its app and website overnight. Some internal systems were temporarily affected. Delta experienced a small number of minor flight delays. By 3:35 a.m. Pacific time, AWS said the issue had been fully mitigated. Most service operations were succeeding normally though some requests faced throttling during final resolution. AWS holds roughly one-third of the cloud infrastructure market ahead of Microsoft and Google.
Programming

A Plan for Improving JavaScript's Trustworthiness on the Web (cloudflare.com) 48

On Cloudflare's blog, a senior research engineer shares a plan for "improving the trustworthiness of JavaScript on the web."

"It is as true today as it was in 2011 that Javascript cryptography is Considered Harmful." The main problem is code distribution. Consider an end-to-end-encrypted messaging web application. The application generates cryptographic keys in the client's browser that let users view and send end-to-end encrypted messages to each other. If the application is compromised, what would stop the malicious actor from simply modifying their Javascript to exfiltrate messages? It is interesting to note that smartphone apps don't have this issue. This is because app stores do a lot of heavy lifting to provide security for the app ecosystem. Specifically, they provide integrity (ensuring that apps being delivered are not tampered with), consistency (ensuring all users get the same app), and transparency (ensuring that the record of versions of an app is truthful and publicly visible).

It would be nice if we could get these properties for our end-to-end encrypted web application, and the web as a whole, without requiring a single central authority like an app store. Further, such a system would benefit all in-browser uses of cryptography, not just end-to-end-encrypted apps. For example, many web-based confidential LLMs, cryptocurrency wallets, and voting systems use in-browser Javascript cryptography for the last step of their verification chains. In this post, we will provide an early look at such a system, called Web Application Integrity, Consistency, and Transparency (WAICT) that we have helped author. WAICT is a W3C-backed effort among browser vendors, cloud providers, and encrypted communication developers to bring stronger security guarantees to the entire web... We hope to build even wider consensus on the solution design in the near future....

We would like to have a way of enforcing integrity on an entire site, i.e., every asset under a domain. For this, WAICT defines an integrity manifest, a configuration file that websites can provide to clients. One important item in the manifest is the asset hashes dictionary, which maps the hash of each asset the browser might load from that domain to that asset's path.
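A minimal sketch of the resulting client-side check, using an assumed manifest shape (the real WAICT format is still being standardized, so the dictionary layout here is illustrative only):

```python
# Sketch of an integrity-manifest check: the browser hashes each asset it
# loads and requires that hash to appear in the site's asset-hashes
# dictionary (hash -> asset path, per the post's description).
import hashlib

def asset_hash(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

def check_integrity(manifest: dict, path: str, content: bytes) -> bool:
    """True if the fetched content's hash maps to the expected path."""
    return manifest.get(asset_hash(content)) == path

app_js = b"console.log('hello');"
manifest = {asset_hash(app_js): "/static/app.js"}

assert check_integrity(manifest, "/static/app.js", app_js)
# A tampered asset hashes differently and fails the check.
assert not check_integrity(manifest, "/static/app.js", app_js + b"//evil")
```

This is only the integrity half; the consistency and transparency properties additionally require that the manifest itself be logged and witnessed, which is what the prefix tree discussed below the manifest provides.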

The blog post points out that the WEBCAT protocol (created by the Freedom of Press Foundation) "allows site owners to announce the identities of the developers that have signed the site's integrity manifest, i.e., have signed all the code and other assets that the site is serving to the user... We've made WAICT extensible enough to fit WEBCAT inside and benefit from the transparency components." The proposal also envisions a service storing metadata for transparency-enabled sites on the web (along with "witnesses" who verify the prefix tree holding the hashes for domain manifests).

"We are still very early in the standardization process," with hopes to soon "begin standardizing the integrity manifest format. And then after that we can start standardizing all the other features. We intend to work on this specification hand-in-hand with browsers and the IETF, and we hope to have some exciting betas soon. In the meantime, you can follow along with our transparency specification draft, check out the open problems, and share your ideas."
AI

Salesforce Sued By Authors Over AI Software (reuters.com) 4

An anonymous reader shares a report: Cloud-computing firm Salesforce was hit with a proposed class action lawsuit by two authors who alleged the company used thousands of books without permission to train its AI software. Novelists Molly Tanzer and Jennifer Gilmore said in the complaint that Salesforce infringed copyrights by using their work to train its xGen AI models to process language.
Data Storage

12 Years of HDD Analysis Brings Insight To the Bathtub Curve's Reliability (arstechnica.com) 23

Backblaze has been tracking hard disk drive failures in its datacenter since 2013. The backup and cloud storage company's latest analysis of approximately 317,230 drives shows that peak failure rates have dropped dramatically and shifted much later in a drive's lifespan. Where the company once saw failure rates of 13.73% at around three years in 2013 and 14.24% at seven years and nine months in 2021, the current data shows a peak of just 4.25% at 10 years and three months.

This represents the first time the company has observed the highest failure rate occurring at the far end of the drive curve rather than earlier in its operational life, it said. The drives maintained relatively consistent failure rates through most of their use before spiking sharply near the end. The current peak is roughly one-third of the earlier peak failure rates.
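Backblaze reports drive reliability as an annualized failure rate; the standard formulation (with illustrative numbers here, not the company's data) is:

```python
# Annualized failure rate, the metric behind the percentages quoted above.
def annualized_failure_rate(failures: int, drive_days: int) -> float:
    """AFR (%) = failures / drive-days * 365 * 100."""
    return failures / drive_days * 365 * 100

# e.g. 100 failures across 10,000 drives each observed for a full year:
afr = annualized_failure_rate(100, 10_000 * 365)
assert abs(afr - 1.0) < 1e-9   # 1% annualized failure rate
```

Normalizing by drive-days rather than drive count matters because drives enter and leave the fleet continuously, so few are observed for exactly one year.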
Science

Physicists Inadvertently Generated the Shortest X-Ray Pulses Ever Observed (theconversation.com) 18

Physicists using SLAC's X-ray free-electron laser discovered two new laser phenomena that allowed them to generate the shortest, highest-energy X-ray pulses ever recorded (60-100 attoseconds). These breakthroughs could let scientists observe electron motion and chemical bond formation in real time. Physicists Uwe Bergmann and Thomas Linker write in an article for The Conversation: In this new study we used X-rays, which have 100 million times shorter wavelengths than microwaves and 100 million times more energy. This meant the resulting new X-ray laser pulses were split into different X-ray wavelengths corresponding to Rabi frequencies in the extreme ultraviolet region. Ultraviolet light has a frequency 100 million times higher than radio waves. This Rabi cycling effect allowed us to generate the shortest high-energy X-ray pulses to date, clocking in at 60-100 attoseconds.

While the pulses that X-ray free-electron lasers currently generate allow researchers to observe atomic bonds forming, rearranging and breaking, they are not fast enough to look inside the electron cloud that generates such bonds. Using these new attosecond X-ray laser pulses could allow scientists to study the fastest processes in materials at the atomic-length scale and to discern different elements.

In the future, we also hope to use much shorter X-ray free-electron laser pulses to better generate these attosecond X-ray pulses. We are even hoping to generate pulses below 60 attoseconds by using heavier materials with shorter lifespans, such as tungsten or hafnium. These new X-ray pulses are fast enough to eventually enable scientists to answer questions such as how exactly an electron cloud moves around and what a chemical bond actually is.
The findings have been published in the journal Nature.
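For a sense of scale, a back-of-envelope calculation shows how far light itself travels during one of these pulses:

```python
# Distance light covers during a 60-100 attosecond pulse.
c = 2.998e8              # speed of light, m/s
for t_as in (60, 100):
    t = t_as * 1e-18     # attoseconds -> seconds
    print(f"{t_as} as -> {c * t * 1e9:.1f} nm")
# prints 18.0 nm for 60 as and 30.0 nm for 100 as
```

A light-travel distance of tens of nanometers is why pulses this short can, in principle, freeze electron motion on atomic length scales.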
China

'China Has Overtaken America' (substack.com) 169

China now generates well over twice as much electricity as the United States. The country's economy has become substantially larger than America's in real terms, measured at purchasing power parity, economist Paul Krugman wrote this week. The Trump administration has moved aggressively against renewable energy development. It rolled back Biden's tax incentives for renewables through the One Big Beautiful Bill. The administration is attempting to stop a nearly completed offshore wind farm that could power hundreds of thousands of homes. It canceled $7 billion in grants for residential solar panels. A solar energy project that would have powered almost 2 million homes was killed. The administration canceled $8 billion in clean energy grants, mostly in Democratic states, and is reportedly planning to cancel tens of billions more. Energy Secretary Chris Wright said solar power is unreliable because "you have to have power when the sun goes behind a cloud and when the sun sets, which it does almost every night."

California has already integrated substantial solar power into its grid through battery storage technology. Republican support for higher education has collapsed over the past decade, according to polling data. The administration has also targeted vaccines and research in multiple areas. Krugman argues that by 2028 America will have fallen so far behind China that it is unlikely to catch up.
