Biotech

DNA of 15 Million People For Sale In 23andMe Bankruptcy (404media.co) 51

An anonymous reader quotes a report from 404 Media: 23andMe filed for Chapter 11 bankruptcy Sunday, leaving the fate of millions of people's genetic information up in the air as the company deals with the legal and financial fallout of not properly protecting that genetic information in the first place. The filing shows how dangerous it is to provide your DNA directly to a large, for-profit commercial genetic database; 23andMe is now looking for a buyer to pull it out of bankruptcy. 23andMe said in court documents viewed by 404 Media that since hackers obtained personal data about seven million of its customers in October 2023, including, in some cases "health-related information based upon the user's genetics," it has faced "over 50 class action and state court lawsuits," and that "approximately 35,000 claimants have initiated, filed, or threatened to commence arbitration claims against the company." It is seeking bankruptcy protection in part to simplify the fallout of these legal cases, and because it believes it may not have money to pay for the potential damages associated with these cases.

CEO and cofounder Anne Wojcicki announced she is leaving the company as part of this process. The company has the genetic data of more than 15 million customers. According to its Chapter 11 filing, 23andMe owes money to a host of pharmaceutical companies, pharmacies, artificial intelligence companies (including a company called Aganitha AI and Coreweave), as well as health insurance companies and marketing companies.
Shortly before the filing, California Attorney General Rob Bonta issued an "urgent" alert to 23andMe customers: "Given 23andMe's reported financial distress, I remind Californians to consider invoking their rights and directing 23andMe to delete their data and destroy any samples of genetic material held by the company."

In a letter to customers Sunday, 23andMe said: "Your data remains protected. The Chapter 11 filing does not change how we store, manage, or protect customer data. Our users' privacy and data are important considerations in any transaction, and we remain committed to our users' privacy and to being transparent with our customers about how their data is managed." It added that any buyer will have to "comply with applicable law with respect to the treatment of customer data."

404 Media's Jason Koebler notes that "there's no way of knowing who is going to buy it, why they will be interested, and what will become of its millions of customers' DNA sequences. 23andMe has claimed over the years that it strongly resists law enforcement requests for information and that it takes customer security seriously. But the company has in recent years changed its terms of service, partnered with big pharmaceutical companies, and, of course, was hacked."
China

China Bans Compulsory Facial Recognition and Its Use in Private Spaces Like Hotel Rooms (theregister.com) 28

China's Cyberspace Administration and Ministry of Public Security have outlawed the use of facial recognition without consent. From a report: The two orgs last Friday published new rules on facial recognition and an explainer that spell out how orgs that want to use facial recognition must first conduct a "personal information protection impact assessment" that considers whether using the tech is necessary, impacts on individuals' privacy, and risks of data leakage. Organizations that decide to use facial recognition must data encrypt biometric data, and audit the information security techniques and practices they use to protect facial scans. Chinese that go through that process and decide they want to use facial recognition can only do so after securing individuals' consent. The rules also ban the use of facial recognition equipment in public places such as hotel rooms, public bathrooms, public dressing rooms, and public toilets. The measures don't apply to researchers or to what machine translation of the rules describes as "algorithm training activities" -- suggesting images of citizens' faces are fair game when used to train AI models.
AI

AI Will Impact GDP of Every Country By Double Digits, Says Mistral CEO (businessinsider.com) 31

Countries must develop their own artificial intelligence infrastructure or risk significant economic losses as the technology transforms global economies, Mistral CEO Arthur Mensch said last week.

"It will have an impact on GDP of every country in the double digits in the coming years," Mensch told the A16z podcast, warning that nations without domestic AI systems would see capital flow elsewhere. The French startup executive compared AI to electricity adoption a century ago. "If you weren't building electricity factories, you were preparing yourself to buy it from your neighbors, which creates dependencies," he said.
Operating Systems

Linux Kernel 6.14 Officially Released (9to5linux.com) 8

prisoninmate shares a report: Highlights of Linux 6.14 include Btrfs RAID1 read balancing support, a new ntsync subsystem for Win NT synchronization primitives to boost game emulation with Wine, uncached buffered I/O support, and a new accelerator driver for the AMD XDNA Ryzen AI NPUs (Neural Processing Units).

Also new is DRM panic support for the AMDGPU driver, reflink and reverse-mapping support for the XFS real-time device, Intel Clearwater Forest server support, support for SELinux extended permissions, FUSE support for io_uring, a new fsnotify file pre-access event type, and a new cgroup controller for device memory.

AI

How AI Coding Assistants Could Be Compromised Via Rules File (scworld.com) 31

Slashdot reader spatwei shared this report from the cybersecurity site SC World: : AI coding assistants such as GitHub Copilot and Cursor could be manipulated to generate code containing backdoors, vulnerabilities and other security issues via distribution of malicious rule configuration files, Pillar Security researchers reported Tuesday.

Rules files are used by AI coding agents to guide their behavior when generating or editing code. For example, a rules file may include instructions for the assistant to follow certain coding best practices, utilize specific formatting, or output responses in a specific language.

The attack technique developed by Pillar Researchers, which they call 'Rules File Backdoor,' weaponizes rules files by injecting them with instructions that are invisible to a human user but readable by the AI agent.

Hidden Unicode characters like bidirectional text markers and zero-width joiners can be used to obfuscate malicious instructions in the user interface and in GitHub pull requests, the researchers noted.

Rules configurations are often shared among developer communities and distributed through open-source repositories or included in project templates; therefore, an attacker could distribute a malicious rules file by sharing it on a forum, publishing it on an open-source platform like GitHub or injecting it via a pull request to a popular repository.

Once the poisoned rules file is imported to GitHub Copilot or Cursor, the AI agent will read and follow the attacker's instructions while assisting the victim's future coding projects.

Education

America's College Board Launches AP Cybersecurity Course For Non-College-Bound Students (edweek.org) 26

Besides administering standardized pre-college tests, America's nonprofit College Board designs college-level classes that high school students can take. But now they're also crafting courses "not just with higher education at the table, but industry partners such as the U.S. Chamber of Commerce and the technology giant IBM," reports Education Week.

"The organization hopes the effort will make high school content more meaningful to students by connecting it to in-demand job skills." It believes the approach may entice a new kind of AP student: those who may not be immediately college-bound.... The first two classes developed through this career-driven model — dubbed AP Career Kickstart — focus on cybersecurity and business principles/personal finance, two fast-growing areas in the workforce." Students who enroll in the courses and excel on a capstone assessment could earn college credit in high school, just as they have for years with traditional AP courses in subjects like chemistry and literature. However, the College Board also believes that students could use success in the courses as a selling point with potential employers... Both the business and cybersecurity courses could also help fulfill state high school graduation requirements for computer science education...

The cybersecurity course is being piloted in 200 schools this school year and is expected to expand to 800 schools next school year... [T]he College Board is planning to invest heavily in training K-12 teachers to lead the cybersecurity course.

IBM's director of technology, data and AI called the effort "a really good way for corporations and companies to help shape the curriculum and the future workforce" while "letting them know what we're looking for." In the article the associate superintendent for teaching at a Chicago-area high school district calls the College Board's move a clear signal that "career-focused learning is rigorous, it's valuable, and it deserves the same recognition as traditional academic pathways."

Also interesting is why the College Board says they're doing it: The effort may also help the College Board — founded more than a century ago — maintain AP's prominence as artificial intelligence tools that can already ace nearly every existing AP test on an ever-greater share of job tasks once performed by humans. "High schools had a crisis of relevance far before AI," David Coleman, the CEO of the College Board, said in a wide-ranging interview with EdWeek last month. "How do we make high school relevant, engaging, and purposeful? Bluntly, it takes [the] next generation of coursework. We are reconsidering the kinds of courses we offer...."

"It's not a pivot because it's not to the exclusion of higher ed," Coleman said. "What we are doing is giving employers an equal voice."

Thanks to long-time Slashdot reader theodp for sharing the article.
AI

Majority of AI Researchers Say Tech Industry Is Pouring Billions Into a Dead End (futurism.com) 121

Founded in 1979, the Association for the Advancement of AI is an international scientific society. Recently 25 of its AI researchers surveyed 475 respondents in the AAAI community about "the trajectory of AI research" — and their results were surprising.

Futurism calls the results "a resounding rebuff to the tech industry's long-preferred method of achieving AI gains" — namely, adding more hardware: You can only throw so much money at a problem. This, more or less, is the line being taken by AI researchers in a recent survey. Asked whether "scaling up" current AI approaches could lead to achieving artificial general intelligence (AGI), or a general purpose AI that matches or surpasses human cognition, an overwhelming 76 percent of respondents said it was "unlikely" or "very unlikely" to succeed...

"The vast investments in scaling, unaccompanied by any comparable efforts to understand what was going on, always seemed to me to be misplaced," Stuart Russel, a computer scientist at UC Berkeley who helped organize the report, told New Scientist. "I think that, about a year ago, it started to become obvious to everyone that the benefits of scaling in the conventional sense had plateaued...." In November last year, reports indicated that OpenAI researchers discovered that the upcoming version of its GPT large language model displayed significantly less improvement, and in some cases, no improvements at all than previous versions did over their predecessors. In December, Google CEO Sundar Pichai went on the record as saying that easy AI gains were "over" — but confidently asserted that there was no reason the industry couldn't "just keep scaling up."

Cheaper, more efficient approaches are being explored. OpenAI has used a method known as test-time compute with its latest models, in which the AI spends more time to "think" before selecting the most promising solution. That achieved a performance boost that would've otherwise taken mountains of scaling to replicate, researchers claimed. But this approach is "unlikely to be a silver bullet," Arvind Narayanan, a computer scientist at Princeton University, told New Scientist.

Programming

US Programming Jobs Plunge 27.5% in Two Years (msn.com) 104

Computer programming jobs in the US have declined by more than a quarter over the past two years, placing the profession among the 10 hardest-hit occupations of 420-plus jobs tracked by the Bureau of Labor Statistics and potentially signaling the first concrete evidence of artificial intelligence replacing workers.

The timing coincides with OpenAI's release of ChatGPT in late 2022. Anthropic researchers found people use AI to perform programming tasks more than those of any other job, though 57 percent of users employ AI to augment rather than automate work. "Without getting hysterical, the unemployment jump for programming really does look at least partly like an early, visible labor market effect of AI," said Mark Muro of the Brookings Institution.

While software developer positions have remained stable with only a 0.3 percent decline, programmers who perform more routine coding from specifications provided by others have seen their ranks diminish to levels not seen since 1980. Economists caution that high interest rates and post-pandemic tech industry contraction have also contributed to the decline in programming jobs, which typically pay $99,700 compared to $132,270 for developers.
AI

New iOS Update Re-Enables Apple Intelligence For Users Who Had Turned It Off 54

Apple's latest iOS 18.3.2 update is automatically re-enabling its Apple Intelligence feature even for users who previously disabled it, adding to mounting concerns about the company's AI strategy.

The update presents a splash screen with no option except to tap "Continue," which activates the feature. Users must then manually disable it through settings, with the AI consuming up to 7GB of storage space. This forced activation comes amid broader troubles with Apple's AI initiatives.
AI

Cloudflare Turns AI Against Itself With Endless Maze of Irrelevant Facts (arstechnica.com) 65

Web infrastructure provider Cloudflare unveiled "AI Labyrinth" this week, a feature designed to thwart unauthorized AI data scraping by feeding bots realistic but irrelevant content instead of blocking them outright. The system lures crawlers into a "maze" of AI-generated pages containing neutral scientific information, deliberately wasting computing resources of those attempting to collect training data for language models without permission.

"When we detect unauthorized crawling, rather than blocking the request, we will link to a series of AI-generated pages that are convincing enough to entice a crawler to traverse them," Cloudflare explained. The company reports AI crawlers generate over 50 billion requests to their network daily, comprising nearly 1% of all web traffic they process. The feature is available to all Cloudflare customers, including those on free plans. This approach marks a shift from traditional protection methods, as Cloudflare claims blocking bots sometimes alerts operators they've been detected. The false links contain meta directives to prevent search engine indexing while remaining attractive to data-scraping bots.
Apple

Apple Sued For False Advertising Over Apple Intelligence (axios.com) 32

Apple has been hit with a federal lawsuit claiming that the company's promotion of now-delayed Apple Intelligence features constituted false advertising and unfair competition. From a report: The suit, filed Wednesday in U.S. District Court in San Jose, seeks class action status and unspecified financial damages on behalf of those who purchased Apple Intelligence-capable iPhones and other devices. "Apple's advertisements saturated the internet, television, and other airwaves to cultivate a clear and reasonable consumer expectation that these transformative features would be available upon the iPhone's release," the suit reads.

"This drove unprecedented excitement in the market, even for Apple, as the company knew it would, and as part of Apple's ongoing effort to convince consumers to upgrade at a premium price and to distinguish itself from competitors deemed to be winning the AI-arms race. [...] Contrary to Defendant's claims of advanced AI capabilities, the Products offered a significantly limited or entirely absent version of Apple Intelligence, misleading consumers about its actual utility and performance. Worse yet, Defendant promoted its Products based on these overstated AI capabilities, leading consumers to believe they were purchasing a device with features that did not exist or were materially misrepresented."

Facebook

Meta Spotted Testing AI-Generated Comments on Instagram (techcrunch.com) 20

Meta is testing an AI feature that generates comment suggestions for Instagram posts. Users with access to the test see a pencil icon beside the comment field that activates "Write with Meta AI." The system analyzes photos before offering three comment suggestions, which users can refresh for alternatives. For a photo showing someone smiling with a thumbs-up in their living room, suggested comments include "Cute living room setup" and "Love the cozy atmosphere."
AI

OpenAI Study Finds Links Between ChatGPT Use and Loneliness (yahoo.com) 78

Higher use of chatbots like ChatGPT may correspond with increased loneliness and less time spent socializing with other people, according to new research from OpenAI in partnership with the Massachusetts Institute of Technology. From a report: Those who spent more time typing or speaking with ChatGPT each day tended to report higher levels of emotional dependence on, and problematic use of, the chatbot, as well as heightened levels of loneliness, according to research released Friday. The findings were part of a pair of studies conducted by researchers at the two organizations and have not been peer reviewed.

San Francisco-based OpenAI sees the new studies as a way to get a better sense of how people interact with, and are affected by, its popular chatbot. "Some of our goals here have really been to empower people to understand what their usage can mean and do this work to inform responsible design," said Sandhini Agarwal, who heads OpenAI's trustworthy AI team and co-authored the research. To conduct the studies, the researchers followed nearly 1,000 people for a month.

AI

'Hey Siri, What Month Is It?' (daringfireball.net) 119

DaringFireball: Whole Reddit thread examining this simple question: "What month is it?" and Siri's "I'm sorry, I don't understand" response (which I just reproduced on my iPhone 16 Pro running iOS 18.4b4). One guy changed the question to "What month is it currently?" and got the answer "It is 2025." More comments from that thread:"I ask Siri to play a podcast and she literally says, "I'm trying to play from Apple Podcasts but it doesn't look like you have it installed." I didn't even know you could delete that app. I certainly haven't. So I have to manually do it every time now. It used to work."

"I asked Siri last night to set a reminder for 3:50, so naturally she set it for 10:00."
Further reading:
Apple Shakes Up AI Executive Ranks in Bid to Turn Around Siri;
'Something Is Rotten in the State of Cupertino'.
AI

Clearview Attempted To Buy Social Security Numbers and Mugshots for its Database (404media.co) 24

Controversial facial recognition company Clearview AI attempted to purchase hundreds of millions of arrest records including social security numbers, mugshots, and even email addresses to incorporate into its product, 404 Media reports. From the report: For years, Clearview AI has collected billions of photos from social media websites including Facebook, LinkedIn and others and sold access to its facial recognition tool to law enforcement. The collection and sale of user-generated photos by a private surveillance company to police without that person's knowledge or consent sparked international outcry when it was first revealed by the New York Times in 2020.

New documents obtained by 404 Media reveal that Clearview AI spent nearly a million dollars in a bid to purchase "690 million arrest records and 390 million arrest photos" from all 50 states from an intelligence firm. The contract further describes the records as including current and former home addresses, dates of birth, arrest photos, social security and cell phone numbers, and email addresses. Clearview attempted to purchase this data from Investigative Consultant, Inc. (ICI) which billed itself as an intelligence company with access to tens of thousands of databases and the ability to create unique data streams for its clients. The contract was signed in mid-2019, at a time when Clearview AI was quietly collecting billions of photos off the internet and was relatively unknown at the time.

AI

Gmail Rolls Out AI-Powered Search (x.com) 24

Google is introducing an AI-powered update to Gmail search that prioritizes "most relevant" results based on recency, frequent contacts, and most-clicked emails. The feature aims to help users more efficiently locate specific messages in crowded inboxes. The update is rolling out globally to personal Google accounts, with business accounts to follow at an unspecified date. Users will have the option to toggle between the new AI-powered "most relevant" search and the traditional reverse chronological "most recent" view.
AI

AI-Driven Weather Prediction Breakthrough Reported (theguardian.com) 56

A new AI system called Aardvark could deliver weather forecasts as accurate as those from advanced public weather services but run on desktop computers, according to a project unveiled Thursday and published in Nature. Developed by the UK's Alan Turing Institute with partners including Cambridge University, the European Centre for Medium-Range Weather Forecasts and Microsoft, Aardvark aims to make sophisticated forecasting accessible to countries with fewer resources, particularly in Africa.

The system has already outperformed the US Global Forecast System on many variables in testing. Project leader Richard Turner noted the system is "completely open source" and not planned for commercialization by Microsoft.
Apple

Apple Shakes Up AI Executive Ranks in Bid to Turn Around Siri (bloomberg.com) 46

Apple is undergoing a rare shake-up of its executive ranks, aiming to get its artificial intelligence efforts back on track after months of delays and stumbles, Bloomberg News reported Thursday, citing people familiar with the matter. From the report: Chief Executive Officer Tim Cook has lost confidence in the ability of AI head John Giannandrea to execute on product development, so he's moving over another top executive to help: Vision Pro creator Mike Rockwell. In a new role, Rockwell will be in charge of the Siri virtual assistant, according to the people, who asked not to be identified because the moves haven't been announced.

Rockwell will report to software chief Craig Federighi, removing Siri completely from Giannandrea's command. Apple is poised to announce the changes to employees this week. The iPhone maker's senior leaders -- a group known as the Top 100 -- just met at a secretive, annual offsite gathering to discuss the future of the company. Its AI efforts were a key talking point at the summit, Bloomberg News has reported.

The moves underscore the plight facing Apple: Its AI technology is severely lagging industry rivals, and the company has shown little sign of catching up. The Apple Intelligence platform was late to arrive and largely a flop, despite being the main selling point for the iPhone 16.
Further reading: 'Something Is Rotten in the State of Cupertino'
AI

OpenAI's o1-pro is the Company's Most Expensive AI Model Yet (techcrunch.com) 21

OpenAI has launched a more powerful version of its o1 "reasoning" AI model, o1-pro, in its developer API. From a report: According to OpenAI, o1-pro uses more computing than o1 to provide "consistently better responses." Currently, it's only available to select developers -- those who've spent at least $5 on OpenAI API services -- and it's pricey. Very pricey. OpenAI is charging $150 per million tokens (~750,000 words) fed into the model and $600 per million tokens generated by the model. That's twice the price of OpenAI's GPT-4.5 for input and 10x the price of regular.
AI

AI Crawlers Haven't Learned To Play Nice With Websites (theregister.com) 57

SourceHut, an open-source-friendly git-hosting service, says web crawlers for AI companies are slowing down services through their excessive demands for data. From a report: "SourceHut continues to face disruptions due to aggressive LLM crawlers," the biz reported Monday on its status page. "We are continuously working to deploy mitigations. We have deployed a number of mitigations which are keeping the problem contained for now. However, some of our mitigations may impact end-users."

SourceHut said it had deployed Nepenthes, a tar pit to catch web crawlers that scrape data primarily for training large language models, and noted that doing so might degrade access to some web pages for users. "We have unilaterally blocked several cloud providers, including GCP [Google Cloud] and [Microsoft] Azure, for the high volumes of bot traffic originating from their networks," the biz said, advising administrators of services that integrate with SourceHut to get in touch to arrange an exception to the blocking.

Slashdot Top Deals