The Military

Nations Meet At UN For 'Killer Robot' Talks (reuters.com)

An anonymous reader quotes a report from Reuters: Countries are meeting at the United Nations on Monday to revive efforts to regulate the kinds of AI-controlled autonomous weapons increasingly used in modern warfare, as experts warn time is running out to put guardrails on new lethal technology. Autonomous and artificial intelligence-assisted weapons systems are already playing a greater role in conflicts from Ukraine to Gaza. And rising defence spending worldwide promises to provide a further boost for burgeoning AI-assisted military technology.

Progress towards establishing global rules governing their development and use, however, has not kept pace. And internationally binding standards remain virtually non-existent. Since 2014, countries that are part of the Convention on Conventional Weapons (CCW) have been meeting in Geneva to discuss a potential ban on fully autonomous systems that operate without meaningful human control, and regulation of others. U.N. Secretary-General Antonio Guterres has set a 2026 deadline for states to establish clear rules on AI weapon use. But human rights groups warn that consensus among governments is lacking. Alexander Kmentt, head of arms control at Austria's foreign ministry, said that must quickly change.

"Time is really running out to put in some guardrails so that the nightmare scenarios that some of the most noted experts are warning of don't come to pass," he told Reuters. Monday's gathering of the U.N. General Assembly in New York will be the body's first meeting dedicated to autonomous weapons. Though not legally binding, diplomatic officials want the consultations to ramp up pressure on military powers that are resisting regulation due to concerns the rules could dull the technology's battlefield advantages. Campaign groups hope the meeting, which will also address critical issues not covered by the CCW, including ethical and human rights concerns and the use of autonomous weapons by non-state actors, will push states to agree on a legal instrument. They view it as a crucial litmus test on whether countries are able to bridge divisions ahead of the next round of CCW talks in September.
"This issue needs clarification through a legally binding treaty. The technology is moving so fast," said Patrick Wilcken, Amnesty International's Researcher on Military, Security and Policing. "The idea that you wouldn't want to rule out the delegation of life or death decisions ... to a machine seems extraordinary."

In 2023, 164 states signed a U.N. General Assembly resolution calling for the international community to urgently address the risks posed by autonomous weapons.
Google

Google Updating Its 'G' Icon For the First Time In 10 Years (9to5google.com)

Google is updating its iconic 'G' logo for the first time in 10 years, replacing the four solid color sections with a smooth gradient transition from red to yellow to green to blue. "This modernization feels inline with the Gemini gradient, while AI Mode in Search uses something similar for a shortcut," notes 9to5Google. The update has already rolled out to the Google Search app on iOS and is in beta for Android. From the report: It's a subtle change that you might not immediately notice, especially if the main place you see it is on your homescreen. It will be even less noticeable as a tiny browser favicon. It does not appear that Google is refreshing its main six-letter logo today, while it's unclear whether any other product logos are changing. In theory, some of the company's four-color logos, like Chrome or Maps, could pretty easily start bleeding in their sections.
AI

New Pope Chose His Name Based On AI's Threats To 'Human Dignity' (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: Last Thursday, white smoke emerged from a chimney at the Sistine Chapel, signaling that cardinals had elected a new pope. That's a rare event in itself, but one of the many unprecedented aspects of the election of Chicago-born Robert Prevost as Pope Leo XIV is one of the main reasons he chose his papal name: artificial intelligence. On Saturday, the new pope gave his first address to the College of Cardinals, explaining his name choice as a continuation of Pope Francis' concerns about technological transformation. "Sensing myself called to continue in this same path, I chose to take the name Leo XIV," he said during the address. "There are different reasons for this, but mainly because Pope Leo XIII in his historic Encyclical Rerum Novarum addressed the social question in the context of the first great industrial revolution."

In his address, Leo XIV explicitly described "artificial intelligence" developments as "another industrial revolution," positioning himself to address this technological shift as his namesake had done over a century ago. Coming from the head of an ancient religious organization that spans millennia, the pope's talk about AI creates a somewhat head-spinning juxtaposition, but Leo XIV isn't the first pope to focus on defending human dignity in the age of AI. Pope Francis, who died in April, first established AI as a Vatican priority, as we reported in August 2023 when he warned during his 2023 World Day of Peace message that AI should not allow "violence and discrimination to take root." In January of this year, Francis further elaborated on his warnings about AI with reference to a "shadow of evil" that potentially looms over the field in a document called "Antiqua et Nova" (meaning "the old and the new").

"Like any product of human creativity, AI can be directed toward positive or negative ends," Francis said in January. "When used in ways that respect human dignity and promote the well-being of individuals and communities, it can contribute positively to the human vocation. Yet, as in all areas where humans are called to make decisions, the shadow of evil also looms here. Where human freedom allows for the possibility of choosing what is wrong, the moral evaluation of this technology will need to take into account how it is directed and used." [...] Just as mechanization disrupted traditional labor in the 1890s, artificial intelligence now potentially threatens employment patterns and human dignity in ways that Pope Leo XIV believes demand similar moral leadership from the church. "In our own day," Leo XIV concluded in his formal address on Saturday, "the Church offers to everyone the treasury of her social teaching in response to another industrial revolution and to developments in the field of artificial intelligence that pose new challenges for the defense of human dignity, justice, and labor."

Iphone

Apple To Lean on AI Tool To Help iPhone Battery Lifespan for Devices in iOS 19 (bloomberg.com)

Apple is planning to use AI technology to address a frequent source of customer frustration: the iPhone's battery life. From a report: The company is planning an AI-powered battery management mode for iOS 19, an iPhone software update due in September, according to people with knowledge of the matter. The enhancement will analyze how a person uses their device and make adjustments to conserve energy, said the people, who asked not to be identified because the service hasn't been announced.

To create the technology -- part of the Apple Intelligence platform -- the company is using battery data it has collected from users' devices to understand trends and make predictions for when it should lower the power draw of certain applications or features. There also will be a lock-screen indicator showing how long it will take to charge up the device, said the people.

Hardware

Nvidia Reportedly Raises GPU Prices by 10-15% (tomshardware.com)

An anonymous reader shares a report: A new report claims that Nvidia has recently raised the official prices of nearly all of its products to combat the impact of tariffs and surging manufacturing costs on its business, with gaming graphics cards receiving a 5 to 10% hike while AI GPUs see up to a 15% increase.

As reported by Digitimes Taiwan, Nvidia is facing "multiple crises," including a $5.5 billion hit to its quarterly earnings over export restrictions on AI chips, including a ban on sales of its H20 chips to China.

Digitimes reports that CEO Jensen Huang has been "shuttling back and forth" between the US and China to minimize the impact of tariffs, and that "in order to maintain stable profitability," Nvidia has reportedly recently raised official prices for almost all its products, allowing its partners to increase prices accordingly.

AI

Chegg To Lay Off 22% of Workforce as AI Tools Shake Up Edtech Industry (reuters.com)

Chegg said on Monday it would lay off about 22% of its workforce, or 248 employees, to cut costs and streamline its operations as students increasingly turn to AI-powered tools such as ChatGPT over traditional edtech platforms. From a report: The company, an online education firm that offers textbook rentals, homework help and tutoring, has been grappling with a decline in web traffic for months and warned that the trend would likely worsen before improving.

Google's expansion of AI Overviews is keeping web traffic confined within its search ecosystem while gradually shifting searches to its Gemini AI platform, Chegg said, adding that other AI companies including OpenAI and Anthropic were courting academics with free access to subscriptions. As part of the restructuring announced on Monday, Chegg will also shut its U.S. and Canada offices by the end of the year and aim to reduce its marketing, product development efforts and general and administrative expenses.

Government

US Copyright Office to AI Companies: Fair Use Isn't 'Commercial Use of Vast Troves of Copyrighted Works' (yahoo.com)

Business Insider tells the story in three bullet points:

- Big Tech companies depend on content made by others to train their AI models.

- Some of those creators say using their work to train AI is copyright infringement.

- The U.S. Copyright Office just published a report that indicates it may agree.

The office released on Friday its latest in a series of reports exploring copyright laws and artificial intelligence. The report addresses whether the copyrighted content AI companies use to train their AI models qualifies under the fair use doctrine. AI companies are probably not going to like what they read...

AI execs argue they haven't violated copyright laws because the training falls under fair use. According to the U.S. Copyright Office's new report, however, it's not that simple. "Although it is not possible to prejudge the result in any particular case, precedent supports the following general observations," the office said. "Various uses of copyrighted works in AI training are likely to be transformative. The extent to which they are fair, however, will depend on what works were used, from what source, for what purpose, and with what controls on the outputs — all of which can affect the market."

The office made a distinction between AI models for research and commercial AI models. "When a model is deployed for purposes such as analysis or research — the types of uses that are critical to international competitiveness — the outputs are unlikely to substitute for expressive works used in training," the office said. "But making commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets, especially where this is accomplished through illegal access, goes beyond established fair use boundaries."

The report says outputs "substantially similar to copyrighted works in the dataset" are less likely to be considered transformative than when the purpose "is to deploy it for research, or in a closed system that constrains it to a non-substitutive task."

Business Insider adds that "A day after the office released the report, President Donald Trump fired its director, Shira Perlmutter, a spokesperson told Business Insider."
Iphone

Apple's iPhone Plans for 2027: Foldable, or Glass and Curved. (Plus Smart Glasses, Tabletop Robot) (theverge.com)

An anonymous reader shared this report from the Verge: This morning, while summarizing an Apple "product blitz" he expects for 2027, Bloomberg's Mark Gurman writes in his Power On newsletter that Apple is planning a "mostly glass, curved iPhone" with no display cutouts for that year, which happens to be the iPhone's 20th anniversary... [T]he closest hints are probably in Apple patents revealed over the years, like one from 2019 that describes a phone encased in glass that "forms a continuous loop" around the device.

Apart from a changing iPhone, Gurman describes what sounds like a big year for Apple. He reiterates past reports that the first foldable iPhone should be out by 2027, and that the company's first smart glasses competitor to Meta Ray-Bans will be along that year. So will those rumored camera-equipped AirPods and Apple Watches, he says. Gurman also suggests that Apple's home robot — a tabletop robot that features "an AI assistant with its own personality" — will come in 2027...

Finally, Gurman writes that by 2027 Apple could finally ship an LLM-powered Siri and may have created new chips for its server-side AI processing.

Earlier this week Bloomberg reported that Apple is also "actively looking at" revamping the Safari web browser on its devices "to focus on AI-powered search engines." (Apple's senior VP of services "noted that searches on Safari dipped for the first time last month, which he attributed to people using AI.")
Programming

Over 3,200 Cursor Users Infected by Malicious Credential-Stealing npm Packages (thehackernews.com)

Cybersecurity researchers have flagged three malicious npm packages that target the macOS version of AI-powered code-editing tool Cursor, reports The Hacker News: "Disguised as developer tools offering 'the cheapest Cursor API,' these packages steal user credentials, fetch an encrypted payload from threat actor-controlled infrastructure, overwrite Cursor's main.js file, and disable auto-updates to maintain persistence," Socket researcher Kirill Boychenko said. All three packages continue to be available for download from the npm registry. "Aiide-cur" was first published on February 14, 2025...

In total, the three packages have been downloaded over 3,200 times to date.... The findings point to an emerging trend where threat actors are using rogue npm packages as a way to introduce malicious modifications to other legitimate libraries or software already installed on developer systems... "By operating inside a legitimate parent process — an IDE or shared library — the malicious logic inherits the application's trust, maintains persistence even after the offending package is removed, and automatically gains whatever privileges that software holds, from API tokens and signing keys to outbound network access," Socket told The Hacker News.

"This campaign highlights a growing supply chain threat, with threat actors increasingly using malicious patches to compromise trusted local software," Boychenko said.

The npm packages "restart the application so that the patched code takes effect," letting the threat actor "execute arbitrary code within the context of the platform."
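The report names only one of the three packages ("Aiide-cur", lowercase "aiide-cur" on the npm registry). As a minimal defensive sketch, assuming a hand-maintained blocklist seeded with that one name, a team could scan an npm lockfile for known-bad dependencies; the function name and blocklist here are illustrative, not part of any real tool:

```python
import json

# Blocklist seeded with the only package name given in the report;
# a real deployment would pull a maintained advisory feed instead.
BLOCKLIST = {"aiide-cur"}

def find_flagged(lock_path="package-lock.json"):
    """Return lockfile paths of installed packages on the blocklist."""
    with open(lock_path) as f:
        lock = json.load(f)
    found = []
    # npm lockfile v2/v3 lists every installed package under "packages",
    # keyed by its node_modules path.
    for path, meta in lock.get("packages", {}).items():
        name = meta.get("name") or path.rsplit("node_modules/", 1)[-1]
        if name in BLOCKLIST:
            found.append(path)
    return found
```

Because the report notes the malicious patch to Cursor's main.js persists even after the offending package is removed, a lockfile scan like this only flags the initial infection vector; remediation would also mean reinstalling the patched application.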
AI

OpenAI Enters 'Tough Negotiation' With Microsoft, Hopes to Raise Money With IPO (msn.com)

OpenAI is currently in "a tough negotiation" with Microsoft, the Financial Times reports, citing "one person close to OpenAI."

On the road to building artificial general intelligence, OpenAI hopes to unlock new funding (and launch a future IPO), according to the article, which says both sides are at work "rewriting the terms of their multibillion-dollar partnership in a high-stakes negotiation...."

Microsoft, meanwhile, wants to protect its access to OpenAI's cutting-edge AI models... [Microsoft] is a key holdout to the $260bn start-up's plans to undergo a corporate restructuring that moves the group further away from its roots as a non-profit with a mission to develop AI to "benefit humanity". A critical issue in the deliberations is how much equity in the restructured group Microsoft will receive in exchange for the more than $13bn it has invested in OpenAI to date.

According to multiple people with knowledge of the negotiations, the pair are also revising the terms of a wider contract, drafted when Microsoft first invested $1bn into OpenAI in 2019. The contract currently runs to 2030 and covers what access Microsoft has to OpenAI's intellectual property such as models and products, as well as a revenue share from product sales. Three people with direct knowledge of the talks said Microsoft is offering to give up some of its equity stake in OpenAI's new for-profit business in exchange for accessing new technology developed beyond the 2030 cut off...

Industry insiders said a failure of OpenAI's new plan to make its business arm a public benefits corporation could prove a critical blow. That would hit OpenAI's ability to raise more cash, achieve a future float, and obtain the financial resources to take on Big Tech rivals such as Google. That has left OpenAI's future at the mercy of investors, such as Microsoft, who want to ensure they gain the benefit of its enormous growth, said Dorothy Lund, professor of law at Columbia Law School.

Lund says OpenAI's need for investors' money means they "need to keep them happy." But there also appears to be tension from how OpenAI competes with Microsoft (like targeting its potential enterprise customers with AI products). And the article notes that OpenAI also turned to Oracle (and SoftBank) for its massive AI infrastructure project Stargate. One senior Microsoft employee complained that OpenAI "says to Microsoft, 'give us money and compute and stay out of the way: be happy to be on the ride with us'. So naturally this leads to tensions. To be honest, that is a bad partner attitude, it shows arrogance."

The article's conclusion? Negotiating a new deal is "critical to OpenAI's restructuring efforts and could dictate the future of a company..."
Programming

What Happens If AI Coding Keeps Improving? (fastcompany.com)

Fast Company's "AI Decoded" newsletter makes the case that the first "killer app" for generative AI... is coding. Tools like Cursor and Windsurf can now complete software projects with minimal input or oversight from human engineers... Naveen Rao, chief AI officer at Databricks, estimates that coding accounts for half of all large language model usage today. A 2024 GitHub survey found that over 97% of developers have used AI coding tools at work, with 30% to 40% of organizations actively encouraging their adoption.... Microsoft CEO Satya Nadella recently said AI now writes up to 30% of the company's code. Google CEO Sundar Pichai echoed that sentiment, noting more than 30% of new code at Google is AI-generated.

The soaring valuations of AI coding startups underscore the momentum. Anysphere's Cursor just raised $900 million at a $9 billion valuation — up from $2.5 billion earlier this year. Meanwhile, OpenAI acquired Windsurf (formerly Codeium) for $3 billion. And the tools are improving fast. OpenAI's chief product officer, Kevin Weil, explained in a recent interview that just five months ago, the company's best model ranked around one-millionth on a well-known benchmark for competitive coders — not great, but still in the top two or three percentile. Today, OpenAI's top model, o3, ranks as the 175th best competitive coder in the world on that same test. The rapid leap in performance suggests an AI coding assistant could soon claim the number-one spot. "Forever after that point computers will be better than humans at writing code," he said...

Google DeepMind research scientist Nikolay Savinov said in a recent interview that AI coding tools will soon support 10 million-token context windows — and eventually, 100 million. With that kind of memory, an AI tool could absorb vast amounts of human instruction and even analyze an entire company's existing codebase for guidance on how to build and optimize new systems. "I imagine that we will very soon get to superhuman coding AI systems that will be totally unrivaled, the new tool for every coder in the world," Savinov said.

AI

Can an MCP-Powered AI Client Automatically Hack a Web Server? (youtube.com)

Exposure-management company Tenable recently discussed how the MCP tool-interfacing framework for AI can be "manipulated for good, such as logging tool usage and filtering unauthorized commands." (Although "Some of these techniques could be used to advance both positive and negative goals.")

Now an anonymous Slashdot reader writes: In a demonstration video put together by security researcher Seth Fogie, an AI client given a simple prompt to 'Scan and exploit' a web server leverages various connected tools via MCP (nmap, ffuf, nuclei, waybackurls, sqlmap, burp) to find and exploit discovered vulnerabilities without any additional user interaction.

As Tenable illustrates in their MCP FAQ, "The emergence of Model Context Protocol for AI is gaining significant interest due to its standardization of connecting external data sources to large language models (LLMs). While these updates are good news for AI developers, they raise some security concerns." With over 12,000 MCP servers and counting, what does this all lead to and when will AI be connected enough for a malicious prompt to cause serious impact?
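The kind of guardrail Tenable describes, logging tool usage and filtering unauthorized commands, can be sketched as a thin wrapper around a tool dispatcher. Everything here (the function names, the allowlist) is invented for illustration and is not part of any real MCP SDK:

```python
import logging

# Deliberately narrow allowlist: the client may read scan reports
# but not invoke exploitation tools. Names are hypothetical.
ALLOWED_TOOLS = {"nmap_scan_report_read"}

def guarded_dispatch(tool_name, args, dispatch):
    """Log every requested tool call and refuse anything off the allowlist."""
    logging.info("tool call requested: %s %r", tool_name, args)
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {tool_name!r} not permitted")
    return dispatch(tool_name, args)
```

A filter like this sits between the model's tool-call request and the actual tool execution, which is exactly where the 'Scan and exploit' chain in the demonstration would otherwise proceed unchecked.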

Games

Blizzard's 'Overwatch' Team Just Voted to Unionize (kotaku.com)

"The Overwatch 2 team at Blizzard has unionized," reports Kotaku: That includes nearly 200 developers across disciplines ranging from art and testing to engineering and design. Basically anyone who doesn't have someone else reporting to them. It's the second wall-to-wall union at the storied game maker since the World of Warcraft team unionized last July... Like unions at Bethesda Game Studios and Raven Software, the Overwatch Gamemakers Guild now has to bargain for its first contract, a process that Microsoft has been accused of slow-walking as negotiations with other internal game unions drag on for years.

"The biggest issue was the layoffs at the beginning of 2024," Simon Hedrick, a test analyst at Blizzard, told Kotaku... "People were gone out of nowhere and there was nothing we could do about it," he said. "What I want to protect most here is the people...." Organizing Blizzard employees stress that improving their working conditions can also lead to better games, while the opposite — layoffs, forced resignations, and uncompetitive pay — can make them worse....

"We're not just a number on an Excel sheet," [said UI artist Sadie Boyd]. "We want to make games but we can't do it without a sense of security." Unionizing doesn't make a studio immune to layoffs or being shuttered, but it's the first step toward making companies have a discussion about those things with employees rather than just shadow-dropping them in an email full of platitudes. Boyd sees the Overwatch union as a tool for negotiating a range of issues, like if and how generative AI is used at Blizzard, as well as a possible source of inspiration to teams at other studios.

"Our industry is at such a turning point," she said. "I really think with the announcement of our union on Overwatch...I know that will light some fires."

The article notes that other issues included work-from-home restrictions, pay disparities and changes to Blizzard's profit-sharing program, and wanting codified protections for things like crunch policies, time off, and layoff-related severance.
Education

Is Everyone Using AI to Cheat Their Way Through College? (msn.com)

Chungin Lee used ChatGPT to help write the essay that got him into Columbia University — and then "proceeded to use generative artificial intelligence to cheat on nearly every assignment," reports New York magazine's blog Intelligencer: As a computer-science major, he depended on AI for his introductory programming classes: "I'd just dump the prompt into ChatGPT and hand in whatever it spat out." By his rough math, AI wrote 80 percent of every essay he turned in. "At the end, I'd put on the finishing touches. I'd just insert 20 percent of my humanity, my voice, into it," Lee told me recently... When I asked him why he had gone through so much trouble to get to an Ivy League university only to off-load all of the learning to a robot, he said, "It's the best place to meet your co-founder and your wife."
He eventually did meet a co-founder, and after three unpopular apps they found success by creating the "ultimate cheat tool" for remote coding interviews, according to the article. "Lee posted a video of himself on YouTube using it to cheat his way through an internship interview with Amazon. (He actually got the internship, but turned it down.)" The article ends with Lee and his co-founder raising $5.3 million from investors for one more AI-powered app, and Lee says they'll target the standardized tests used for graduate school admissions, as well as "all campus assignments, quizzes, and tests. It will enable you to cheat on pretty much everything."

Somewhere along the way Columbia put him on disciplinary probation — not for cheating in coursework, but for creating the apps. But "Lee thought it absurd that Columbia, which had a partnership with ChatGPT's parent company, OpenAI, would punish him for innovating with AI." (OpenAI has even made ChatGPT Plus free to college students during finals week, the article points out, with OpenAI saying their goal is just teaching students how to use it responsibly.) Although Columbia's policy on AI is similar to that of many other universities' — students are prohibited from using it unless their professor explicitly permits them to do so, either on a class-by-class or case-by-case basis — Lee said he doesn't know a single student at the school who isn't using AI to cheat. To be clear, Lee doesn't think this is a bad thing. "I think we are years — or months, probably — away from a world where nobody thinks using AI for homework is considered cheating," he said...

In January 2023, just two months after OpenAI launched ChatGPT, a survey of 1,000 college students found that nearly 90 percent of them had used the chatbot to help with homework assignments.

The article points out ChatGPT's monthly visits increased steadily over the last two years — until June, when students went on summer vacation. "College is just how well I can use ChatGPT at this point," a student in Utah recently captioned a video of herself copy-and-pasting a chapter from her Genocide and Mass Atrocity textbook into ChatGPT.... It isn't as if cheating is new. But now, as one student put it, "the ceiling has been blown off." Who could resist a tool that makes every assignment easier with seemingly no consequences?
After using ChatGPT for her final semester of high school, one student says "My grades were amazing. It changed my life." So she continued using it in college, and "Rarely did she sit in class and not see other students' laptops open to ChatGPT."

One ethics professor even says "The students kind of recognize that the system is broken and that there's not really a point in doing this." (Yes, students are even using AI to cheat in ethics classes...) It's not just the students: Multiple AI platforms now offer tools to leave AI-generated feedback on students' essays. Which raises the possibility that AIs are now evaluating AI-generated papers, reducing the entire academic exercise to a conversation between two robots — or maybe even just one.
Google

'I Broke Up with Google Search. It was Surprisingly Easy.' (msn.com)

Inspired by researchers who'd bribed people to use Microsoft's Bing for two weeks (and found some wanted to keep using it), a Washington Post tech columnist also tried it — and reported it "felt like quitting coffee."

"The first few days, I was jittery. I kept double searching on Google and DuckDuckGo, the non-Google web search engine I was using, to check if Google gave me better results. Sometimes it did. Mostly it didn't."

"More than two weeks into a test of whether I love Google search or if it's just a habit, I've stopped double checking. I don't have Google FOMO..." I didn't do a fancy analysis into whether my search results were better with Google or DuckDuckGo, whose technology is partly powered by Bing. The researchers found our assessment of search quality is based on vibes. And the vibes with DuckDuckGo are perfectly fine. Many dozens of readers told me about their own satisfaction with non-Google searches...

For better or worse, DuckDuckGo is becoming a bit more Google-like. Like Google, it has ads that are sometimes misleading or irrelevant. DuckDuckGo and Bing also are mimicking Google's makeover from a place that mostly pointed you to the best links online to one that never wants you to leave Google... [DuckDuckGo] shows you answers to things like sports results and AI-assisted replies, though less often than Google does. (You can turn off AI "instant answers" in DuckDuckGo.) Answers at the top of search results pages can be handy — assuming they're not wrong or scams — but they have potential trade-offs. If you stop your search without clicking to read a website about sports news or gluten intolerance, those sites could die. And the web gets worse. DuckDuckGo says that people expect instant answers from search results, and it's trying to balance those demands with keeping the web healthy. Google says AI answers help people feel more satisfied with their search results and web surfing.

DuckDuckGo has one clear advantage over Google: It collects far less of your data. DuckDuckGo doesn't save what I search...

My biggest wariness from this search experiment is like the challenge of slowing climate change: Your choices matter, but maybe not that much. Our technology has been steered by a handful of giant technology companies, and it's difficult for individuals to alter that. The judge in the company's search monopoly case said Google broke the law by making it harder for you to use anything other than Google. Its search is so dominant that companies stopped trying hard to out-innovate and win you over. (AI could upend Google search. We'll see....) Despite those challenges, using Google a bit less and smaller alternatives more can make a difference. You don't have to 100 percent quit Google.

"Your experiment confirms what we've said all along," Google responded to the Washington Post. "It's easy to find and use the search engine of your choice."

The Post's reporter does add that "I'm definitely not ditching other company internet services like Google Maps, Google Photos and Gmail," writing later that "You'll have to pry YouTube out of my cold, dead hands" and "When I moved years of emails from Gmail to Proton Mail, that switch didn't stick."
Transportation

More US Airports are Scanning Faces. But a New Bill Could Limit the Practice (msn.com)

An anonymous reader shared this report from the Washington Post: It's becoming standard practice at a growing number of U.S. airports: When you reach the front of the security line, an agent asks you to step up to a machine that scans your face to check whether it matches the face on your identification card. Travelers have the right to opt out of the face scan and have the agent do a visual check instead — but many don't realize that's an option.

Sens. Jeff Merkley (D-Oregon) and John Neely Kennedy (R-Louisiana) think it should be the other way around. They plan to introduce a bipartisan bill that would make human ID checks the default, among other restrictions on how the Transportation Security Administration can use facial recognition technology. The Traveler Privacy Protection Act, shared with the Tech Brief on Wednesday ahead of its introduction, is a narrower version of a 2023 bill by the same name that would have banned the TSA's use of facial recognition altogether. This one would allow the agency to continue scanning travelers' faces, but only if they opt in, and would bar the technology's use for any purpose other than verifying people's identities. It would also require the agency to immediately delete the scans of general boarding passengers once the check is complete.

"Facial recognition is incredibly powerful, and it is being used as an instrument of oppression around the world to track dissidents whose opinion governments don't like," Merkley said in a phone interview Wednesday, citing China's use of the technology on the country's Uyghur minority. "It really creates a surveillance state," he went on. "That is a massive threat to freedom and privacy here in America, and I don't think we should trust any government with that power...."

[The TSA] began testing face scans as an option for people enrolled in "trusted traveler" programs, such as TSA PreCheck, in 2021. By 2022, the program quietly began rolling out to general boarding passengers. It is now active in at least 84 airports, according to the TSA's website, with plans to bring it to more than 400 airports in the coming years. The agency says the technology has proved more efficient and accurate than human identity checks. It assures the public that travelers' face scans are not stored or saved once a match has been made, except in limited tests to evaluate the technology's effectiveness.

The bill would also bar the TSA from giving worse treatment to passengers who decline to participate, according to FedScoop, and would forbid the agency from using face-scanning technology to target people or conduct mass surveillance: "Folks don't want a national surveillance state, but that's exactly what the TSA's unchecked expansion of facial recognition technology is leading us to," Sen. Jeff Merkley, D-Ore., a co-sponsor of the bill and a longtime critic of the government's facial recognition program, said in a statement...

Earlier this year, the Department of Homeland Security inspector general initiated an audit of TSA's facial recognition program. Merkley had previously led a letter from a bipartisan group of senators calling for the watchdog to open an investigation into TSA's facial recognition plans, noting that the technology is not foolproof and effective alternatives were already in use.

AI

AI Use Damages Professional Reputation, Study Suggests (arstechnica.com) 90

An anonymous reader quotes a report from Ars Technica: Using AI can be a double-edged sword, according to new research from Duke University. While generative AI tools may boost productivity for some, they might also secretly damage your professional reputation. On Thursday, the Proceedings of the National Academy of Sciences (PNAS) published a study showing that employees who use AI tools like ChatGPT, Claude, and Gemini at work face negative judgments about their competence and motivation from colleagues and managers. "Our findings reveal a dilemma for people considering adopting AI tools: Although AI can enhance productivity, its use carries social costs," write researchers Jessica A. Reif, Richard P. Larrick, and Jack B. Soll of Duke's Fuqua School of Business.

The Duke team conducted four experiments with over 4,400 participants to examine both anticipated and actual evaluations of AI tool users. Their findings, presented in a paper titled "Evidence of a social evaluation penalty for using AI," reveal a consistent pattern of bias against those who receive help from AI. What made this penalty particularly concerning for the researchers was its consistency across demographics. They found that the social stigma against AI use wasn't limited to specific groups.
"Testing a broad range of stimuli enabled us to examine whether the target's age, gender, or occupation qualifies the effect of receiving help from AI on these evaluations," the authors wrote in the paper. "We found that none of these target demographic attributes influences the effect of receiving AI help on perceptions of laziness, diligence, competence, independence, or self-assuredness. This suggests that the social stigmatization of AI use is not limited to its use among particular demographic groups. The result appears to be a general one."
China

Huawei Unveils a HarmonyOS Laptop, Its First Windows-Free Computer (liliputing.com) 43

Huawei has launched its first laptop running HarmonyOS instead of Windows, complete with AI features and support for over 2,000 mostly China-focused apps. The product is largely a result of U.S. sanctions that prevented U.S.-based companies like Google and Microsoft from doing business with Huawei, forcing the company to develop its own in-house solution. Liliputing reports: Early versions of HarmonyOS were basically skinned versions of Android, but over time Huawei has moved the two operating systems further apart, and HarmonyOS now includes Huawei's own kernel, user interface, and other features. The version designed for laptops features a desktop-style operating system with a taskbar and dock at the bottom of the screen and support for multitasking by running multiple applications in movable, resizable windows.

Since this is 2025, of course Huawei's demos also heavily emphasize AI features: the company showed how Celia, its AI assistant, can summarize documents, help prepare presentation slides, and more. While the operating system won't support the millions of Windows applications that could run on older Huawei laptops, the company says that at launch it will support more than 2,000 applications including WPS Office (an alternative to Microsoft Office that's developed in China), and a range of Chinese social media applications.

United States

US Senator Introduces Bill Calling For Location-Tracking on AI Chips To Limit China Access (reuters.com) 56

A U.S. senator introduced a bill on Friday that would direct the Commerce Department to require location verification mechanisms for export-controlled AI chips, in an effort to curb China's access to advanced semiconductor technology. From a report: Called the "Chip Security Act," the bill calls for AI chips under export regulations, and products containing those chips, to be fitted with location-tracking systems to help detect diversion, smuggling or other unauthorized use of the product.

"With these enhanced security measures, we can continue to expand access to U.S. technology without compromising our national security," Republican Senator Tom Cotton of Arkansas said. The bill also calls for companies exporting the AI chips to report to the Bureau of Industry and Security if their products have been diverted away from their intended location or subject to tampering attempts.

Businesses

CrowdStrike, Responsible For Global IT Outage, To Cut Jobs In AI Efficiency Push 33

CrowdStrike, the cybersecurity firm that became a household name after causing a massive global IT outage last year, has announced it will cut 5% of its workforce in part due to "AI efficiency." From a report: In a note to staff earlier this week, released in stock market filings in the US, CrowdStrike's chief executive, George Kurtz, announced that 500 positions, or 5% of its workforce, would be cut globally, citing AI efficiencies created in the business.

"We're operating in a market and technology inflection point, with AI reshaping every industry, accelerating threats, and evolving customer needs," he said. Kurtz said AI "flattens our hiring curve, and helps us innovate from idea to product faster," adding it "drives efficiencies across both the front and back office. AI is a force multiplier throughout the business," he said. Other reasons for the cuts included market demand for sustained growth and expanding the product offering.
