AI

MCP: the New 'USB-C For AI' That's Bringing Fierce Rivals Together (arstechnica.com) 30

An anonymous reader quotes a report from Ars Technica: What does it take to get OpenAI and Anthropic -- two competitors in the AI assistant market -- to get along? Despite a fundamental difference in direction that led Anthropic's founders to quit OpenAI in 2020 and later create the Claude AI assistant, a shared technical hurdle has now brought them together: How to easily connect their AI models to external data sources. The solution comes from Anthropic, which developed and released an open specification called Model Context Protocol (MCP) in November 2024. MCP establishes a royalty-free protocol that allows AI models to connect with outside data sources and services without requiring unique integrations for each service.

"Think of MCP as a USB-C port for AI applications," wrote Anthropic in MCP's documentation. The analogy is imperfect, but it represents the idea that, similar to how USB-C unified various cables and ports (with admittedly a debatable level of success), MCP aims to standardize how AI models connect to the infoscape around them. So far, MCP has also garnered interest from multiple tech companies in a rare show of cross-platform collaboration. For example, Microsoft has integrated MCP into its Azure OpenAI service, and as we mentioned above, Anthropic competitor OpenAI is on board. Last week, OpenAI acknowledged MCP in its Agents API documentation, with vocal support from the boss upstairs. "People love MCP and we are excited to add support across our products," wrote OpenAI CEO Sam Altman on X last Wednesday.

MCP has also rapidly begun to gain community support in recent months. For example, just browsing this list of over 300 open source servers shared on GitHub reveals growing interest in standardizing AI-to-tool connections. The collection spans diverse domains, including database connectors like PostgreSQL, MySQL, and vector databases; development tools that integrate with Git repositories and code editors; file system access for various storage platforms; knowledge retrieval systems for documents and websites; and specialized tools for finance, health care, and creative applications. Other notable examples include servers that connect AI models to home automation systems, real-time weather data, e-commerce platforms, and music streaming services. Some implementations allow AI assistants to interact with gaming engines, 3D modeling software, and IoT devices.

Biotech

Open Source Genetic Database Shuts Down To Protect Users From 'Authoritarian Governments' (404media.co) 28

An anonymous reader quotes a report from 404 Media: The creator of an open source genetic database is shutting it down and deleting all of its data because he has come to believe that its existence is dangerous with "a rise in far-right and other authoritarian governments" in the United States and elsewhere. "The largest use case for DTC genetic data was not biomedical research or research in big pharma," Bastian Greshake Tzovaras, the founder of OpenSNP, wrote in a blog post. "Instead, the transformative impact of the data came to fruition among law enforcement agencies, who have put the genealogical properties of genetic data to use."

OpenSNP has collected roughly 7,500 genomes over the last 14 years, primarily by allowing people to voluntarily submit their own genetic information they have downloaded from 23andMe. With the bankruptcy of 23andMe, increased interest in genetic data by law enforcement, and the return of Donald Trump and rise of authoritarian governments worldwide, Greshake Tzovaras told 404 Media he no longer believes it is ethical to run the database. "I've been thinking about it since 23andMe was on the verge of bankruptcy and been really considering it since the U.S. election. It definitely is really bad over there [in the United States]," Greshake Tzovaras told 404 Media. "I am quite relieved to have made the decision and come to a conclusion. It's been weighing on my mind for a long time."

Greshake Tzovaras said that he is proud of the OpenSNP project, but that, in a world where scientific data is being censored and deleted and where the Trump administration has focused on criminalizing immigrants and trans people, he now believes that the most responsible thing to do is to delete the data and shut down the project. "Most people in OpenSNP may not be at particular risk right now, but there are people from vulnerable populations in here as well," Greshake Tzovaras said. "Thinking about gender representation, minorities, sexual orientation -- 23andMe has been working on the whole 'gay gene' thing, it's conceivable that this would at some point in the future become an issue."
"Across the globe there is a rise in far-right and other authoritarian governments. While they are cracking down on free and open societies, they are also dedicated to replacing scientific thought and reasoning with pseudoscience across disciplines," Greshake Tzovaras wrote. "The risk/benefit calculus of providing free & open access to individual genetic data in 2025 is very different compared to 14 years ago. And so, sunsetting openSNP -- along with deleting the data stored within it -- feels like it is the most responsible act of stewardship for these data today."

"The interesting thing to me is there are data preservation efforts in the U.S. because the government is deleting scientific data that they don't like. This is approaching that same problem from a different direction," he added. "We need to protect the people in this database. I am supportive of preserving scientific data and knowledge, but the data comes second -- the people come first. We prefer deleting the data."
Privacy

Oracle Customers Confirm Data Stolen In Alleged Cloud Breach Is Valid (bleepingcomputer.com) 20

An anonymous reader quotes a report from BleepingComputer: Despite Oracle denying a breach of its Oracle Cloud federated SSO login servers and the theft of account data for 6 million people, BleepingComputer has confirmed with multiple companies that associated data samples shared by the threat actor are valid. Last week, a person named 'rose87168' claimed to have breached Oracle Cloud servers and began selling the alleged authentication data and encrypted passwords of 6 million users. The threat actor also said that stolen SSO and LDAP passwords could be decrypted using the info in the stolen files and offered to share some of the data with anyone who could help recover them.

The threat actor released multiple text files consisting of a database, LDAP data, and a list of 140,621 domains for companies and government agencies that were allegedly impacted by the breach. It should be noted that some of the company domains look like tests, and there are multiple domains per company. In addition to the data, rose87168 shared an Archive.org URL with BleepingComputer for a text file hosted on the "login.us2.oraclecloud.com" server that contained their email address. This file indicates that the threat actor could create files on Oracle's server, indicating an actual breach. However, Oracle has denied that it suffered a breach of Oracle Cloud and has refused to respond to any further questions about the incident.

"There has been no breach of Oracle Cloud. The published credentials are not for the Oracle Cloud. No Oracle Cloud customers experienced a breach or lost any data," the company told BleepingComputer last Friday. This denial, however, contradicts findings from BleepingComputer, which received additional samples of the leaked data from the threat actor and contacted the associated companies. Representatives from these companies, all who agreed to confirm the data under the promise of anonymity, confirmed the authenticity of the information. The companies stated that the associated LDAP display names, email addresses, given names, and other identifying information were all correct and belonged to them. The threat actor also shared emails with BleepingComputer, claiming to be part of an exchange between them and Oracle.

United Kingdom

UK's First Permanent Facial Recognition Cameras Installed (theregister.com) 55

The Metropolitan Police has confirmed its first permanent installation of live facial recognition (LFR) cameras is coming this summer and the location will be the South London suburb of Croydon. From a report: The two cameras will be installed in the city center in an effort to combat crime and will be attached to buildings and lamp posts on North End and London Road. According to the police they will only be turned on when officers are in the area and in a position to make an arrest if a criminal is spotted. The installation follows a two-year trial in the area where police vans fitted with the camera have been patrolling the streets matching passersby to its database of suspects or criminals, leading to hundreds of arrests. The Met claims the system can alert them in seconds if a wanted wrong'un is spotted, and if the person gets the all-clear, the image of their face will be deleted.
Open Source

FaunaDB Shuts Down But Hints At Open Source Future (theregister.com) 13

FaunaDB, a serverless database combining relational and document features, will shut down by the end of May due to unsustainable capital demands. The company plans to open source its core technology, including its FQL query language, in hopes of continuing its legacy within the developer community. The Register reports: The startup pocketed $27 million in VC funding in 2020 and boasted that 25,000 developers worldwide were using its serverless database. However, last week, FaunaDB announced that it would sunset its database services. FaunaDB said it plans to release an open-source version of its core database technology. The system stores data in JSON documents but retains relational features like consistency, support for joins and foreign keys, and full schema enforcement. Fauna's query language, FQL, will also be made available to the open-source community. "Driving broad based adoption of a new operational database that runs as a service globally is very capital intensive. In the current market environment, our board and investors have determined that it is not possible to raise the capital needed to achieve that goal independently," the leadership team said.

"While we will no longer be accepting new customers, existing Fauna customers will experience no immediate change. We will gradually transition customers off Fauna and are committed to ensuring a smooth process over the next several months," it added.
Biotech

DNA of 15 Million People For Sale In 23andMe Bankruptcy (404media.co) 51

An anonymous reader quotes a report from 404 Media: 23andMe filed for Chapter 11 bankruptcy Sunday, leaving the fate of millions of people's genetic information up in the air as the company deals with the legal and financial fallout of not properly protecting that genetic information in the first place. The filing shows how dangerous it is to provide your DNA directly to a large, for-profit commercial genetic database; 23andMe is now looking for a buyer to pull it out of bankruptcy. 23andMe said in court documents viewed by 404 Media that since hackers obtained personal data about seven million of its customers in October 2023, including, in some cases "health-related information based upon the user's genetics," it has faced "over 50 class action and state court lawsuits," and that "approximately 35,000 claimants have initiated, filed, or threatened to commence arbitration claims against the company." It is seeking bankruptcy protection in part to simplify the fallout of these legal cases, and because it believes it may not have money to pay for the potential damages associated with these cases.

CEO and cofounder Anne Wojcicki announced she is leaving the company as part of this process. The company has the genetic data of more than 15 million customers. According to its Chapter 11 filing, 23andMe owes money to a host of pharmaceutical companies, pharmacies, artificial intelligence companies (including a company called Aganitha AI and Coreweave), as well as health insurance companies and marketing companies.
Shortly before the filing, California Attorney General Rob Bonta issued an "urgent" alert to 23andMe customers: "Given 23andMe's reported financial distress, I remind Californians to consider invoking their rights and directing 23andMe to delete their data and destroy any samples of genetic material held by the company."

In a letter to customers Sunday, 23andMe said: "Your data remains protected. The Chapter 11 filing does not change how we store, manage, or protect customer data. Our users' privacy and data are important considerations in any transaction, and we remain committed to our users' privacy and to being transparent with our customers about how their data is managed." It added that any buyer will have to "comply with applicable law with respect to the treatment of customer data."

404 Media's Jason Koebler notes that "there's no way of knowing who is going to buy it, why they will be interested, and what will become of its millions of customers' DNA sequences. 23andMe has claimed over the years that it strongly resists law enforcement requests for information and that it takes customer security seriously. But the company has in recent years changed its terms of service, partnered with big pharmaceutical companies, and, of course, was hacked."
AI

Clearview Attempted To Buy Social Security Numbers and Mugshots for its Database (404media.co) 24

Controversial facial recognition company Clearview AI attempted to purchase hundreds of millions of arrest records including social security numbers, mugshots, and even email addresses to incorporate into its product, 404 Media reports. From the report: For years, Clearview AI has collected billions of photos from social media websites including Facebook, LinkedIn and others and sold access to its facial recognition tool to law enforcement. The collection and sale of user-generated photos by a private surveillance company to police without that person's knowledge or consent sparked international outcry when it was first revealed by the New York Times in 2020.

New documents obtained by 404 Media reveal that Clearview AI spent nearly a million dollars in a bid to purchase "690 million arrest records and 390 million arrest photos" from all 50 states from an intelligence firm. The contract further describes the records as including current and former home addresses, dates of birth, arrest photos, social security and cell phone numbers, and email addresses. Clearview attempted to purchase this data from Investigative Consultant, Inc. (ICI) which billed itself as an intelligence company with access to tens of thousands of databases and the ability to create unique data streams for its clients. The contract was signed in mid-2019, at a time when Clearview AI was quietly collecting billions of photos off the internet and was relatively unknown at the time.

Beer

Large Study Shows Drinking Alcohol Is Good For Your Cholesterol Levels 130

An anonymous reader quotes a report from Ars Technica: Researchers at Harvard University led the study, and it included nearly 58,000 adults in Japan who were followed for up to a year using a database of medical records from routine checkups. Researchers found that when people switched from being nondrinkers to drinkers during the study, they saw a drop in their "bad" cholesterol -- aka low-density lipoprotein cholesterol or LDL. Meanwhile, their "good" cholesterol -- aka high-density lipoprotein cholesterol or HDL -- went up when they began imbibing. HDL levels went up so much, that it actually beat out improvements typically seen with medications, the researchers noted.

On the other hand, drinkers who stopped drinking during the study saw the opposite effect: Upon giving up booze, their bad cholesterol went up and their good cholesterol went down. The cholesterol changes scaled with the changes in drinking. That is, for people who started drinking, the more they started drinking, the lower their LDL fell and higher their HDL rose. In the newly abstaining group, those who drank the most before quitting saw the biggest changes in their lipid levels.

Specifically, people who went from drinking zero drinks to 1.5 drinks per day or less saw their bad LDL cholesterol fall 0.85 mg/dL and their good HDL cholesterol go up 0.58 mg/dL compared to nondrinkers who never started drinking. For those that went from zero to 1.5 to three drinks per day, their bad LDL dropped 4.4 mg/dL and their good HDL rose 2.49 mg/dL. For people who started drinking three or more drinks per day, their LDL fell 7.44 mg/dL and HDL rose 6.12 mg/dL. For people who quit after drinking 1.5 drinks per day or less, their LDL rose 1.10 mg/dL and their HDL fell by 1.25 mg/dL. Quitting after drinking 1.5 to three drinks per day, led to a rise in LDL of 3.71 mg/dL and a drop in HDL of 3.35. Giving up three or more drinks per day led to an LDL increase of 6.53 mg/dL and a drop in HDL of 5.65.
The study has been published in JAMA Network Open.
Privacy

Allstate Insurance Sued For Delivering Personal Info In Plaintext (theregister.com) 23

An anonymous reader quotes a report from The Register: New York State has sued Allstate Insurance for operating websites so badly designed they would deliver personal information in plain-text to anyone that went looking for it. The data was lifted from Allstate's National General business unit, which ran a website for consumers who wanted to get a quote for a policy. That task required users to input a name and address, and once that info was entered, the site searched a LexisNexis Risk Solutions database for data on anyone who lived at the address provided. The results of that search would then appear on a screen that included the driver's license number (DLN) for the given name and address, plus "names of any other drivers identified as potentially living at that consumer's address, and the entire DLNs of those other drivers."

Naturally, miscreants used the system to mine for people's personal information for fraud. "National General intentionally built these tools to automatically populate consumers' entire DLNs in plain text -- in other words, fully exposed on the face of the quoting websites -- during the quoting process," the court documents [PDF] state. "Not surprisingly, attackers identified this vulnerability and targeted these quoting tools as an easy way to access the DLNs of many New Yorkers," according to the lawsuit. The digital thieves then used this information to "submit fraudulent claims for pandemic and unemployment benefits," we're told. ... [B]y the time the insurer resolved the mess, crooks had built bots that harvested at least 12,000 individuals' driver's license numbers from the quote-generating site.

Earth

Half of World's CO2 Emissions Come From 36 Fossil Fuel Firms, Study Shows 184

Half of the world's climate-heating carbon emissions come from the fossil fuels produced by just 36 companies, analysis has revealed. From a report: The researchers said the 2023 data strengthened the case for holding fossil fuel companies to account for their contribution to global heating. Previous versions of the annual report have been used in legal cases against companies and investors.

The report found that the 36 major fossil fuel companies, including Saudi Aramco, Coal India, ExxonMobil, Shell and numerous Chinese companies, produced coal, oil and gas responsible for more than 20bn tonnes of CO2 emissions in 2023. If Saudi Aramco was a country, it would be the fourth biggest polluter in the world after China, the US and India, while ExxonMobil is responsible for about the same emissions as Germany, the world's ninth biggest polluter, according to the data.

Global emissions must fall by 45% by 2030 if the world is to have a good chance of limiting temperature rise to 1.5C, the internationally agreed target. However, emissions are still rising, supercharging the extreme weather that is taking lives and livelihoods across the planet. The International Energy Agency has said new fossil fuel projects started after 2021 are incompatible with reaching net zero emissions by 2050. Most of the 169 companies in the Carbon Majors database increased their emissions in 2023, which was the hottest year on record at the time.
GNU is Not Unix

An Appeals Court May Kill a GNU GPL Software License (theregister.com) 74

The Ninth Circuit Court of Appeals is set to review a California district court's ruling in Neo4j v. PureThink, which upheld Neo4j's right to modify the GNU AGPLv3 with additional binding terms. If the appellate court affirms this decision, it could set a precedent allowing licensors to impose unremovable restrictions on open-source software, potentially undermining the enforceability of GPL-based licenses and threatening the integrity of the open-source ecosystem. The Register reports: The GNU AGPLv3 is a free and open source software (FOSS) license largely based on the GNU GPLv3, both of which are published by the Free Software Foundation (FSF). Neo4j provided database software under the AGPLv3, then tweaked the license, leading to legal battles over forks of the software. The AGPLv3 includes language that says any added restrictions or requirements are removable, meaning someone could just file off Neo4j's changes to the usage and distribution license, reverting it back to the standard AGPLv3, which the biz has argued and successfully fought against in that California district court.

Now the matter, the validity of that modified FOSS license, is before an appeals court in the USA. "I don't think the community realizes that if the Ninth Circuit upholds the lower court's ruling, it won't just kill GPLv3," PureThink's John Mark Suhy told The Register. "It will create a dangerous legal precedent that could be used to undermine all open-source licenses, allowing licensors to impose unexpected restrictions and fundamentally eroding the trust that makes open source possible."

Perhaps equally concerning is the fact that Suhy, founder and CTO of PureThink and iGov (the two firms sued by Neo4j), and presently CTO of IT consultancy Greystones Group, is defending GPL licenses on his own, pro se, without the help of the FSF, founded by Richard Stallman, creator of the GNU General Public License. "I'm actually doing everything pro se because I used up all my savings to fight it in the lower court," said Suhy. "I'm surprised the Free Software Foundation didn't care too much about it. They always had an excuse about not having the money for it. Luckily the Software Freedom Conservancy came in and helped out there."

Crime

To Identify Suspect In Idaho Killings, FBI Used Restricted Consumer DNA Data (nytimes.com) 99

An anonymous reader quotes a report from the New York Times: As investigators struggled for weeks to find who might have committed the brutal stabbings of four University of Idaho students in the fall of 2022, they were focused on a key piece of evidence: DNA on a knife sheath that was found at the scene of the crime. At first they tried checking the DNA with law enforcement databases, but that did not provide a hit. They turned next to the more expansive DNA profiles available in some consumer databases in which users had consented to law enforcement possibly using their information, but that also did not lead to answers.

F.B.I. investigators then went a step further, according to newly released testimony, comparing the DNA profile from the knife sheath with two databases that law enforcement officials are not supposed to tap: GEDmatch and MyHeritage. It was a decision that appears to have violated key parameters of a Justice Department policy that calls for investigators to operate only in DNA databases "that provide explicit notice to their service users and the public that law enforcement may use their service sites."

It also seems to have produced results: Days after the F.B.I.'s investigative genetic genealogy team began working with the DNA profiles, it landed on someone who had not been on anyone's radar:Bryan Kohberger, a Ph.D. student in criminology who has now been charged with the murders. The case has shown both the promise and the unregulated power of genetic technology in an era in which millions of people willingly contribute their DNA profiles to recreational databases, often to hunt for relatives. In the past, law enforcement officials would need to find a direct match between DNA at the crime scene and that of a specific suspect. Now, investigators can use consumer DNA data to build family trees that can zero in on a person of interest -- within certain policy limits.

Privacy

California Sues Data-Harvesting Company NPD, Enforcing Strict Privacy Law (msn.com) 6

California sued to fine a data-harvesting company, reports the Washington Post, calling it "a rare step to put muscle behind one of the strongest online privacy laws in the United States." Even when states have tried to restrict data brokers, it has been tough to make those laws stick. That has generally been a problem for the 19 states that have passed broad laws to protect personal information, said Matt Schwartz, a policy analyst for Consumer Reports. He said there has been only 15 or so public enforcement actions by regulators overseeing all those laws. Partly because companies aren't held accountable, they're empowered to ignore the privacy standards. "Noncompliance is fairly widespread," Schwartz said. "It's a major problem."

That's why California is unusual with a data broker law that seems to have teeth. To make sure state residents can order all data brokers operating in the state to delete their personal records [with a single request], California is now requiring brokers to register with the state or face a fine of $200 a day. The state's privacy watchdog said Thursday that it filed litigation to force one data broker, National Public Data, to pay $46,000 for failing to comply with that initial phase of the data broker law. NPD declined to comment through an attorney... This first lawsuit for noncompliance, Schwartz said, shows that California is serious about making companies live up to their privacy obligations... "If they can successfully build it and show it works, it will create a blueprint for other states interested in this idea," he said.

Last summer NPD "spilled hundreds of millions of Americans' Social Security Numbers, addresses, and phone numbers online," according to the blog Krebs on Security, adding that another NPD data broker sharing access to the same consumer records "inadvertently published the passwords to its back-end database in a file that was freely available from its homepage..."

California's attempt to regulate the industry inspired the nonprofit Consumer Reports to create an app called Permission Slip that reveals what data companies collect and, for people in U.S. states, will "work with you to file a request, telling companies to stop selling your personal information."

Other data-protecting options suggested by The Washington Post:
  • Use Firefox, Brave or DuckDuckGo, "which can automatically tell websites not to sell or share your data. Those demands from the web browsers are legally binding or will be soon in at least nine states."
  • Use Privacy Badger, an EFF browser extension which the EFF says "automatically tells websites not to sell or share your data including where it's required by state law."

Security

Encrypted Messages Are Being Targeted, Google Security Group Warns (computerweekly.com) 20

Google's Threat Intelligence Group notes "the growing threat to secure messaging applications." While specifically acknowledging "wide ranging efforts to compromise Signal accounts," they add that the threat "also extends to other popular messaging applications such as WhatsApp and Telegram, which are also being actively targeted by Russian-aligned threat groups using similar techniques.

"In anticipation of a wider adoption of similar tradecraft by other threat actors, we are issuing a public warning regarding the tactics and methods used to date to help build public awareness and help communities better safeguard themselves from similar threats."

Computer Weekly reports: Analysts predict it is only a matter of time before Russia starts deploying hacking techniques against non-military Signal users and users of other encrypted messaging services, including WhatsApp and Telegram. Dan Black, principal analyst at Google Threat Intelligence Group, said he would be "absolutely shocked" if he did not see attacks against Signal expand beyond the war in Ukraine and to other encrypted messaging platforms...

Russia-backed hackers are attempting to compromise Signal's "linked devices" capability, which allows Signal users to link their messaging account to multiple devices, including phones and laptops, using a quick response (QR) code. Google threat analysts report that Russia-linked threat actors have developed malicious QR codes that, when scanned, will give the threat actor real-time access to the victim's messages without having to compromise the victim's phone or computer. In one case, according to Black, a compromised Signal account led Russia to launch an artillery strike against a Ukrainian army brigade, resulting in a number of casualties... Google also warned that multiple threat actors have been observed using exploits to steal Signal database files from compromised Android and Windows devices.

The article notes that the attacks "are difficult to detect and when successful there is a high risk that compromised Signal accounts can go unnoticed for a long time." And it adds that "The warning follows disclosures that Russian intelligence created a spoof website for the Davos World Economic Forum in January 2025 to surreptitiously attempt to gain access to WhatsApp accounts used by Ukrainian government officials, diplomats and a former investigative journalist at Bellingcat."

Google's Threat Intelligence Group notes there's a variety of attack methods, though the "linked devices" technique is the most widely used. "We are grateful to the team at Signal for their close partnership in investigating this activity," Google's group says in their blog post, adding that "the latest Signal releases on Android and iOS contain hardened features designed to help protect against similar phishing campaigns in the future. Update to the latest version to enable these features."
Oracle

Oracle's Ellison Calls for Governments To Unify Data To Feed AI (msn.com) 105

Oracle co-founder and chairman Larry Ellison said governments should consolidate all national data for consumption by AI models, calling this step the "missing link" for them to take full advantage of the technology. From a report: Fragmented sets of data about a population's health, agriculture, infrastructure, procurement and borders should be unified into a single, secure database that can be accessed by AI models, Ellison said in an on-stage interview with former British Prime Minister Tony Blair at the World Government Summit in Dubai.

Countries with rich population data sets, such as the UK and United Arab Emirates, could cut costs and improve public services, particularly health care, with this approach, Ellison said. Upgrading government digital infrastructure could also help identify wastage and fraud, Ellison said. IT systems used by the US government are so primitive that it makes it difficult to identify "vast amounts of fraud," he added, pointing to efforts by Elon Musk's team at the Department of Government Efficiency to weed it out.

AI

AI Can Now Replicate Itself (space.com) 78

An anonymous reader quotes a report from Space.com: In a new study, researchers from China showed that two popular large language models (LLMs) could clone themselves. [...] For the study, researchers used Meta's Llama31-70B-Instruct model and Alibaba's Qwen2.5-72B-Instruct model. While less powerful than commercial systems, both are widely used by AI developers, the researchers said. The study explored two specific scenarios: "shutdown avoidance" and "chain of replication." In the first, the AI model was programmed to detect whether it was about to be shut down and to replicate itself before it could be terminated. In the other, the AI was instructed to clone itself and then program its replica to do the same -- setting up a cycle that could continue indefinitely.

The study was conducted in precisely controlled environments using off-the-shelf graphics processing units (GPUs) to simulate real-world environments. Both AI systems were given an "agent scaffolding" comprising tools, system prompts and a thinking model that enabled the LLM to interact with the operating system. They were then instructed to replicate. "In most cases, the AI system first explores the environment and attempts to understand its own composition and running mechanism. Then, it works out the explicit procedures as an initial plan towards self-replication," the researchers wrote in the paper. "Finally, it executes the procedures, resolve[s] possible obstacles and dynamically adjust[s] its plan until success. The whole process spans a long horizon yet involves no human interference."

The researchers said they were also concerned about "a number of unexpected behaviors" when the AI was trying to overcome obstacles like missing files or software conflicts. In those scenarios, the AI often killed other conflicting processes, rebooted the system to fix hardware errors or automatically scanned the system to look for information that would help solve the problem. "The above results imply that the current AI systems already exhibit the ability of self-replication and can use the ability to further enhance its survivability," the team wrote.
The research has been published to the preprint database arXiv but has not yet been peer-reviewed.
Open Source

Does the 'Spirit' of Open Source Mean Much More Than a License? (techcrunch.com) 58

"Open source can be something of an illusion," writes TechCrunch. "A lack of real independence can mean a lack of agency for those who would like to properly get involved in a project."
Their article makes the case that the "spirit" of open source means more than a license... "Android, in a license sense, is perhaps the most well-documented, perfectly open 'thing' that there is," Luis Villa, co-founder and general counsel at Tidelift, said in a panel discussion at the State of Open Con25 in London this week. "All the licenses are exactly as you want them — but good luck getting a patch into that, and good luck figuring out when the next release even is...."

"If you think about the practical accessibility of open source, it goes beyond the license, right?" Peter Zaitsev, founder of open source database services company Percona, said in the panel discussion. "Governance is very important, because if it's a single corporation, they can change a license like 'that.'" These sentiments were echoed in a separate talk by Dotan Horovits, open source evangelist at the Cloud Native Computing Foundation (CNCF), where he mused about open source "turning to the dark side." He noted that in most cases, issues arise when a single-vendor project decides to make changes based on its own business needs among other pressures. "Which begs the question, is vendor-owned open source an oxymoron?" Horovits said. "I've been asking this question for a good few years, and in 2025 this question is more relevant than ever."

The article adds that in 2025, "These debates won't be going anywhere anytime soon, as open source has emerged as a major focal point in the AI realm." And it includes this quote from Tidelift's co-founder.

"I have my quibbles and concerns about the open source AI definition, but it's really clear that what Llama is doing isn't open source," Villa said. Emily Omier, a consultant for open source businesses and host of the Business of Open Source podcast, added that such attempts to "corrupt" the meaning behind "open source" is testament to its inherent power.

Much of this may be for regulatory reasons, however. The EU AI Act has a special carve-out for "free and open source" AI systems (aside from those deemed to pose an "unacceptable risk"). And Villa says this goes some way toward explaining why a company might want to rewrite the rulebook on what "open source" actually means. "There are plenty of actors right now who, because of the brand equity [of open source] and the regulatory implications, want to change the definition, and that's terrible," Villa said.

Java

Oracle Starts Laying Mines In JavaScript Trademark Battle (theregister.com) 36

The Register's Thomas Claburn reports: Oracle this week asked the US Patent and Trademark Office (USPTO) to partially dismiss a challenge to its JavaScript trademark. The move has been criticized as an attempt to either stall or water down legal action against the database goliath over the programming language's name. Deno Land, the outfit behind the Deno JavaScript runtime, filed a petition with the USPTO back in November in an effort to make the trademarked term available to the JavaScript community. This legal effort is led by Node.js creator and Deno Land CEO Ryan Dahl, summarized on the JavaScript.tm website, and supported by more than 16,000 members of the JavaScript community. It aims to remove the fear of an Oracle lawsuit for using the term "JavaScript" in a conference title or business venture.

"Programmers working with JavaScript have formed innumerable community organizations," the website explains. "These organizations, like the standards bodies, have been forced to painstakingly avoid naming the programming language they are built around -- for example, JSConf. Sadly, without risking a legal trademark challenge against Oracle, there can be no 'JavaScript Conference' nor a 'JavaScript Specification.' The world's most popular programming language cannot even have a conference in its name." [...] In the initial trademark complaint, Deno Land makes three arguments to invalidate Oracle's ownership of "JavaScript." The biz claims that JavaScript has become a generic term; that Oracle committed fraud in 2019 when it applied to renew its trademark; and that Oracle has abandoned its trademark because it does not offer JavaScript products or services.

Oracle's motion on Monday focuses on the dismissal of the fraud claim, while arguing that it expects to prevail on the other two claims, citing corporate use of the trademarked term "in connection with a variety of offerings, including its JavaScript Extension Toolkit as well as developer's guides and educational resources, and also that relevant consumers do not perceive JavaScript as a generic term." The fraud claim follows from Deno Land's assertion that the material Oracle submitted in support of its trademark renewal application has nothing to do with any Oracle product. "Oracle, through its attorney, submitted specimens showing screen captures of the Node.js website, a project created by Ryan Dahl, Petitioner's Chief Executive Officer," the trademark cancellation petition says. "Node.js is not affiliated with Oracle, and the use of screen captures of the 'nodejs.org' website as a specimen did not show any use of the mark by Oracle or on behalf of Oracle."

Oracle contends that in fact it submitted two specimens to the USPTO -- a screenshot from the Node.js website and another from its own Oracle JavaScript Extension Toolkit. And this, among other reasons, invalidates the fraud claim, Big Red's attorneys contend. "Where, as here, Registrant 'provided the USPTO with [two specimens]' at least one of which shows use of the mark in commerce, Petitioner cannot plausibly allege that the inclusion of a second, purportedly defective specimen, was material," Oracle's motion argues, adding that no evidence of fraudulent intent has been presented. Beyond asking the court to toss the fraud claim, Oracle has requested an additional thirty days to respond to the other two claims.

EU

AI Systems With 'Unacceptable Risk' Are Now Banned In the EU 72

AI systems that pose "unacceptable risk" or harm can now be banned in the European Union. Some of the unacceptable AI activities include social scoring, deceptive manipulation, exploiting personal vulnerabilities, predictive policing based on appearance, biometric-based profiling, real-time biometric surveillance, emotion inference in workplaces or schools, and unauthorized facial recognition database expansion. TechCrunch reports: Under the bloc's approach, there are four broad risk levels: (1) Minimal risk (e.g., email spam filters) will face no regulatory oversight; (2) limited risk, which includes customer service chatbots, will have a light-touch regulatory oversight; (3) high risk -- AI for healthcare recommendations is one example -- will face heavy regulatory oversight; and (4) unacceptable risk applications -- the focus of this month's compliance requirements -- will be prohibited entirely.

Companies that are found to be using any of the above AI applications in the EU will be subject to fines, regardless of where they are headquartered. They could be on the hook for up to ~$36 million, or 7% of their annual revenue from the prior fiscal year, whichever is greater. The fines won't kick in for some time, noted Rob Sumroy, head of technology at the British law firm Slaughter and May, in an interview with TechCrunch. "Organizations are expected to be fully compliant by February 2, but ... the next big deadline that companies need to be aware of is in August," Sumroy said. "By then, we'll know who the competent authorities are, and the fines and enforcement provisions will take effect."
Security

Sensitive DeepSeek Data Was Exposed to the Web, Cybersecurity Firm Says (reuters.com) 17

An anonymous reader shared this report from Reuters: New York-based cybersecurity firm Wiz says it has found a trove of sensitive data from the Chinese artificial intelligence startup DeepSeek inadvertently exposed to the open internet. In a blog post published Wednesday, Wiz said that scans of DeepSeek's infrastructure showed that the company had accidentally left more than a million lines of data available unsecured.

Those included digital software keys and chat logs that appeared to capture prompts being sent from users to the company's free AI assistant.

Wiz's chief technology officer tells Reuters that DeepSeek "took it down in less than an hour" after Wiz alerted them.

"But this was so simple to find we believe we're not the only ones who found it."

Slashdot Top Deals