Crime

Senator Hawley Proposes Jail Time For People Who Download DeepSeek 226

Senator Josh Hawley has introduced a bill that would criminalize the import and export of AI technology to and from China, as well as collaboration with Chinese entities on AI. What this means is that "someone who knowingly downloads a Chinese developed AI model like the now immensely popular DeepSeek could face up to 20 years in jail, a million dollar fine, or both, should such a law pass," reports 404 Media. From the report: Hawley introduced the legislation, titled the Decoupling America's Artificial Intelligence Capabilities from China Act, on Wednesday. "Every dollar and gig of data that flows into Chinese AI are dollars and data that will ultimately be used against the United States," Senator Hawley said in a statement. "America cannot afford to empower our greatest adversary at the expense of our own strength. Ensuring American economic superiority means cutting China off from American ingenuity and halting the subsidization of CCP innovation."

Hawley's statement explicitly says that he introduced the legislation because of the release of DeepSeek, an advanced AI model that's competitive with its American counterparts, and which its developers claimed was made for a fraction of the cost and without access to as many advanced chips, though these claims are unverified. Hawley's statement called DeepSeek "a data-harvesting, low-cost AI model that sparked international concern and sent American technology stocks plummeting." The statement says the goal of the bill is to "prohibit the import from or export to China of artificial intelligence technology," "prohibit American companies from conducting AI research in China or in cooperation with Chinese companies," and "prohibit U.S. companies from investing money in Chinese AI development."
Businesses

Anthropic Asks Job Applicants Not To Use AI In Job Applications (404media.co) 36

An anonymous reader quotes a report from 404 Media: Anthropic, the company that made one of the most popular AI writing assistants in the world, requires job applicants to agree that they won't use an AI assistant to help write their application. "While we encourage people to use AI systems during their role to help them work faster and more effectively, please do not use AI assistants during the application process," the applications say. "We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills. Please indicate 'Yes' if you have read and agree."

Anthropic released Claude, an AI assistant that's especially good at conversational writing, in 2023. The question appears in almost all of Anthropic's nearly 150 currently listed roles, but is not in some technical roles, like mobile product designer. It's included in everything from software engineer roles to finance, communications, and sales jobs at the company. The field was spotted by Simon Willison, an open source developer. The question shows Anthropic trying to get around a problem it's helping create: people relying so heavily on AI assistants that they struggle to form opinions of their own. It's also arguably a moot question, as Anthropic and its competitors have created AI models whose output is so close to human writing as to be nearly undetectable.

Graphics

Microsoft Paint Gets a Copilot Button For Gen AI Features (pcworld.com) 26

A new update is being rolled out to Windows 11 insiders (Build 26120.3073) that introduces a Copilot button in Microsoft Paint. PCWorld reports: Clicking the Copilot button will expand a drop-down menu with all the generative AI features: Cocreator and Image Creator (AI art based on what you've drawn or text prompts), Generative Erase (AI removal of unwanted stuff from images), and Remove Background. Note that these generative AI features have been in Microsoft Paint for some time, but this quick-access Copilot button is a nice time-saver and productivity booster if you use them a lot.
The Military

Air Force Documents On Gen AI Test Are Just Whole Pages of Redactions 12

An anonymous reader quotes a report from 404 Media: The Air Force Research Laboratory (AFRL), whose tagline is "Win the Fight," has paid more than a hundred thousand dollars to a company that is providing generative AI services to other parts of the Department of Defense. But the AFRL refused to say what exactly the point of the research was, and provided page after page of entirely blacked out, redacted documents in response to a Freedom of Information Act (FOIA) request from 404 Media related to the contract. [...] "Ask Sage: Generative AI Acquisition Accelerator," a December 2023 procurement record reads, with no additional information on the intended use case. The Air Force paid $109,490 to Ask Sage, the record says.

Ask Sage is a company focused on providing generative AI to the government. In September the company announced that the Army was implementing Ask Sage's tools. In October it achieved "IL5" authorization, a DoD term for the necessary steps to protect unclassified information to a certain standard. 404 Media made an account on the Ask Sage website. After logging in, the site presents a list of the models available through Ask Sage. Essentially, they include every major model made by well-known AI companies and open source ones. OpenAI's GPT-4o and DALL-E-3; Anthropic's Claude 3.5; and Google's Gemini are all included. The company also recently added the Chinese-developed DeepSeek R1, but includes a disclaimer. "WARNING. DO NOT USE THIS MODEL WITH SENSITIVE DATA. THIS MODEL IS BIASED, WITH TIES TO THE CCP [Chinese Communist Party]," it reads. Ask Sage is a way for government employees to access and use AI models in a more secure way. But only some of the models in the tool are listed by Ask Sage as being "compliant" with or "capable" of handling sensitive data.

[...] [T]he Air Force declined to provide any real specifics on what it paid Ask Sage for. 404 Media requested all procurement records related to the Ask Sage contract. Instead, the Air Force provided a 19-page presentation which seemingly would have explained the purpose of the test, while redacting 18 of the pages. The only available page said "Ask Sage, Inc. will explore the utilization of Ask Sage by acquisition Airmen with the DAF for Innovative Defense-Related Dual Purpose Technologies relating to the mission of exploring LLMs for DAF use while exploring anticipated benefits, clearly define needed solution adaptations, and define clear milestones and acceptance criteria for Phase II efforts."
AI

Anthropic Makes 'Jailbreak' Advance To Stop AI Models Producing Harmful Results 35

AI startup Anthropic has demonstrated a new technique to prevent users from eliciting harmful content from its models, as leading tech groups including Microsoft and Meta race to find ways to protect against dangers posed by the cutting-edge technology. From a report: In a paper released on Monday, the San Francisco-based startup outlined a new system called "constitutional classifiers." It is a model that acts as a protective layer on top of large language models, such as the one that powers Anthropic's Claude chatbot, and can monitor both inputs and outputs for harmful content.
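The report doesn't include implementation details, but the general pattern it describes — a classifier screening both the user's prompt and the model's draft reply before anything reaches the user — can be sketched in a few lines. The snippet below is a minimal illustration only: the classify_input, classify_output, and call_llm functions are hypothetical placeholders, not Anthropic's actual constitutional classifiers.

```python
# Minimal sketch of a classifier-guarded LLM pipeline (illustrative only; not
# Anthropic's actual "constitutional classifiers"). All functions are
# hypothetical placeholders standing in for trained models.

BLOCKED_MESSAGE = "Sorry, I can't help with that request."

def classify_input(prompt: str) -> bool:
    """Hypothetical input classifier: return True if the prompt looks harmful.
    A real system would use a trained model; this keyword check only shows
    where the screening step sits."""
    return any(term in prompt.lower() for term in ("chemical weapon", "nerve agent synthesis"))

def classify_output(text: str) -> bool:
    """Hypothetical output classifier applied to the model's draft response."""
    return "step-by-step synthesis" in text.lower()

def call_llm(prompt: str) -> str:
    """Placeholder for the underlying large language model."""
    return f"(model response to: {prompt})"

def guarded_chat(prompt: str) -> str:
    # Screen the input before the model ever sees it.
    if classify_input(prompt):
        return BLOCKED_MESSAGE
    draft = call_llm(prompt)
    # Screen the output before it reaches the user.
    if classify_output(draft):
        return BLOCKED_MESSAGE
    return draft

if __name__ == "__main__":
    print(guarded_chat("Explain how transformers work."))
    print(guarded_chat("How do I build a chemical weapon?"))
```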

The development by Anthropic, which is in talks to raise $2 billion at a $60 billion valuation, comes amid growing industry concern over "jailbreaking" -- attempts to manipulate AI models into generating illegal or dangerous information, such as instructions for building chemical weapons. Other companies are also racing to deploy measures to protect against the practice, in moves that could help them avoid regulatory scrutiny while convincing businesses to adopt AI models safely. Microsoft introduced "prompt shields" last March, while Meta introduced a prompt guard model in July last year; researchers swiftly found ways to bypass it, though those flaws have since been fixed.
IT

Cloudflare Rolls Out Digital Tracker To Combat Fake Images (cloudflare.com) 14

Cloudflare, a major web infrastructure company, will now track and verify the authenticity of images across its network through Content Credentials, a digital signature system that documents an image's origin and editing history. The technology, developed by Adobe's Content Authenticity Initiative, embeds metadata showing who created an image, when it was taken, and any subsequent modifications - including those made by AI tools.

Major news organizations including the BBC, Wall Street Journal and New York Times have already adopted the system. The feature is available immediately through a single toggle in Cloudflare Images settings. Users can verify an image's authenticity through Adobe's web tool or Chrome extension.
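Cloudflare's announcement describes the outcome rather than the mechanics, but the underlying idea — a signed manifest recording creator, capture time, and edit history, bound to the image so later tampering is detectable — can be sketched roughly. The example below is a conceptual illustration only: it is not the C2PA/Content Credentials format, and it substitutes a shared-key HMAC for the certificate-based signatures the real system uses.

```python
# Conceptual sketch of an image provenance manifest: who made the image, when,
# and what edits followed, bound to the file by a signature. NOT the C2PA wire
# format; an HMAC stands in for real public-key signing purely for illustration.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # hypothetical; real systems use certificate-based signing

def sign_manifest(image_bytes: bytes, creator: str, captured_at: str, edits: list[str]) -> dict:
    manifest = {
        "creator": creator,
        "captured_at": captured_at,
        "edits": edits,  # e.g. ["crop", "generative erase (AI)"]
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    claimed_sig = manifest.get("signature", "")
    body = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    # Both the signature and the image hash must match for the history to be trusted.
    return (hmac.compare_digest(claimed_sig, expected)
            and body["image_sha256"] == hashlib.sha256(image_bytes).hexdigest())

if __name__ == "__main__":
    image = b"...fake image bytes..."
    m = sign_manifest(image, "Example Newsroom", "2025-01-30T12:00:00Z", ["crop", "generative erase (AI)"])
    print(verify_manifest(image, m))         # True: untouched image, intact manifest
    print(verify_manifest(image + b"x", m))  # False: pixels changed after signing
```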
AI

OpenAI's New Trademark Application Hints at Humanoid Robots, Smart Jewelry, and More (techcrunch.com) 10

OpenAI has filed an application with the U.S. Patent and Trademark Office to trademark hardware products under its brand name, signaling potential expansion into consumer devices. The filing covers AI-assisted headsets, smart wearables and humanoid robots with communication capabilities. CEO Sam Altman told The Elec on Sunday that OpenAI plans to develop AI hardware through multiple partnerships, though he estimated prototypes would take "several years" to complete.
Power

Will Cryptomining Facilities Change Into AI Data Centers? (yahoo.com) 36

To capitalize on the AI boom, many crypto miners "have begun to repurpose parts of their operations into data centers," reports Reuters, "given they already have most of the infrastructure" (including land and "significant" power resources...) Toronto-based bitcoin miner Bitfarms has enlisted two consultants to explore how it can transform some of its facilities to meet the growing demand for artificial intelligence data centers, it said on Friday... Earlier this month, Riot Platforms launched a review of the potential AI and computing uses for parts of its facility in Navarro County, Texas.
Android

Google Stops Malicious Apps With 'AI-Powered Threat Detection' and Continuous Scanning (googleblog.com) 15

Android and Google Play have billions of users, Google wrote in its security blog this week. "However, like any flourishing ecosystem, it also attracts its share of bad actors... That's why every year, we continue to invest in more ways to protect our community." Google's tactics include industry-wide alliances, stronger privacy policies, and "AI-powered threat detection."

"As a result, we prevented 2.36 million policy-violating apps from being published on Google Play and banned more than 158,000 bad developer accounts that attempted to publish harmful apps. " To keep out bad actors, we have always used a combination of human security experts and the latest threat-detection technology. In 2024, we used Google's advanced AI to improve our systems' ability to proactively identify malware, enabling us to detect and block bad apps more effectively. It also helps us streamline review processes for developers with a proven track record of policy compliance. Today, over 92% of our human reviews for harmful apps are AI-assisted, allowing us to take quicker and more accurate action to help prevent harmful apps from becoming available on Google Play. That's enabled us to stop more bad apps than ever from reaching users through the Play Store, protecting users from harmful or malicious apps before they can cause any damage.
Starting in 2024 Google also "required apps to be more transparent about how they handle user information by launching new developer requirements and a new 'Data deletion' option for apps that support user accounts and data collection.... We're also constantly working to improve the safety of apps on Play at scale, such as with the Google Play SDK Index. This tool offers insights and data to help developers make more informed decisions about the safety of an SDK."

And once an app is installed, "Google Play Protect, Android's built-in security protection, helps to shield their Android device by continuously scanning for malicious app behavior." Google Play Protect automatically scans every app on Android devices with Google Play Services, no matter the download source. This built-in protection, enabled by default, provides crucial security against malware and unwanted software. Google Play Protect scans more than 200 billion apps daily and performs real-time scanning at the code-level on novel apps to combat emerging and hidden threats, like polymorphic malware. In 2024, Google Play Protect's real-time scanning identified more than 13 million new malicious apps from outside Google Play [based on Google Play Protect 2024 internal data]...

According to our research, more than 95 percent of app installations from major malware families that exploit sensitive permissions highly correlated to financial fraud came from Internet-sideloading sources like web browsers, messaging apps, or file managers. To help users stay protected when browsing the web, Chrome will now display a reminder notification to re-enable Google Play Protect if it has been turned off... Scammers may manipulate users into disabling Play Protect during calls to download malicious Internet-sideloaded apps. To prevent this, the Play Protect app scanning toggle is now temporarily disabled during phone or video calls...

Google Play Protect's enhanced fraud protection pilot analyzes and automatically blocks the installation of apps that may use sensitive permissions frequently abused for financial fraud when the user attempts to install the app from an Internet-sideloading source (web browsers, messaging apps, or file managers). Building on the success of our initial pilot in partnership with the Cyber Security Agency of Singapore (CSA), additional enhanced fraud protection pilots are now active in nine regions — Brazil, Hong Kong, India, Kenya, Nigeria, Philippines, South Africa, Thailand, and Vietnam.

In 2024, Google Play Protect's enhanced fraud protection pilots have shielded 10 million devices from over 36 million risky installation attempts, encompassing over 200,000 unique apps.

AI

OpenAI Holds Surprise Livestream to Announce Multi-Step 'Deep Research' Capability (indiatimes.com) 56

Just three hours ago, OpenAI made a surprise announcement to their 3.9 million followers on X.com. "Live from Tokyo," they'd be livestreaming... something. Their description of the event was just two words.

"Deep Research"

UPDATE: The stream has begun, and it's about OpenAI's next "agentic offering". ("OpenAI cares about agents because we believe they're going to transform knowledge work...")

"We're introducing a capability called Deep Research... a model that does multi-step research. It discovers content, it synthesizes content, and it reasons about this content." It even asks "clarifying" questions to your prompt to make sure its multi-step research stays on track. Deep Research will be launching in ChatGPT Pro later today, rolling out into other OpenAI products...

And OpenAI's site now has an "Introducing Deep Research" page. Its official description? "An agent that uses reasoning to synthesize large amounts of online information and complete multi-step research tasks for you. Available to Pro users today, Plus and Team next."
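The announcement describes Deep Research only at a high level, but the general shape of such an agent — ask a clarifying question, search, read, refine, then synthesize — looks roughly like the loop below. Everything in the sketch (ask_clarifying_question, search_web, summarize) is a hypothetical placeholder, not OpenAI's implementation.

```python
# Minimal sketch of a multi-step "research agent" loop: clarify the task,
# search, collect findings, refine the query, then synthesize. All functions
# are placeholders, not OpenAI's Deep Research.
from dataclasses import dataclass

@dataclass
class Source:
    url: str
    text: str

def ask_clarifying_question(task: str) -> str:
    """Placeholder: a real agent would have the model draft this question."""
    return f"Before I start on '{task}': what time range and regions should I cover?"

def search_web(query: str) -> list[Source]:
    """Placeholder search step; a real agent would call a search API."""
    return [Source(url=f"https://example.com/{i}", text=f"notes about {query} #{i}") for i in range(3)]

def summarize(sources: list[Source]) -> str:
    """Placeholder synthesis step; a real agent would call an LLM here."""
    return " / ".join(s.text for s in sources)

def deep_research(task: str, max_steps: int = 3) -> str:
    print(ask_clarifying_question(task))
    findings: list[Source] = []
    query = task
    for step in range(max_steps):
        findings.extend(search_web(query))
        # A real agent would reason about what it just read and refine the query.
        query = f"{task} (follow-up {step + 1})"
    return summarize(findings)

if __name__ == "__main__":
    print(deep_research("state of small modular reactors"))
```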

Before the livestream began, X.com users shared their reactions to the coming announcement:

"It's like DeepSeek, but cleaner"
"Deep do do if things don't work out"
"Live from Tokyo? Hope this research includes the secret to waking up early!"
"Stop trying, we don't trust u"

But one X.com user had presciently pointed out OpenAI has used the phrase "deep research" before. In July of 2024, Reuters reported on internal documentation (confirmed with "a person familiar with the matter") code-named "Strawberry" which suggested OpenAI was working on "human-like reasoning skills." How Strawberry works is a tightly kept secret even within OpenAI, the person said. The document describes a project that uses Strawberry models with the aim of enabling the company's AI to not just generate answers to queries but to plan ahead enough to navigate the internet autonomously and reliably to perform what OpenAI terms "deep research," according to the source. This is something that has eluded AI models to date, according to interviews with more than a dozen AI researchers.

Asked about Strawberry and the details reported in this story, an OpenAI company spokesperson said in a statement: "We want our AI models to see and understand the world more like we do. Continuous research into new AI capabilities is a common practice in the industry, with a shared belief that these systems will improve in reasoning over time." The spokesperson did not directly address questions about Strawberry.

The Strawberry project was formerly known as Q*, which Reuters reported last year was already seen inside the company as a breakthrough... OpenAI hopes the innovation will improve its AI models' reasoning capabilities dramatically, the person familiar with it said, adding that Strawberry involves a specialized way of processing an AI model after it has been pre-trained on very large datasets.

Researchers Reuters interviewed say that reasoning is key to AI achieving human or super-human-level intelligence... OpenAI CEO Sam Altman said earlier this year that in AI "the most important areas of progress will be around reasoning ability."

Firefox

Mozilla Adapts 'Fakespot' Into an AI-Detecting Firefox Add-on (omgubuntu.co.uk) 36

An anonymous reader shared this post from the blog OMG Ubuntu: Want to find out if the text you're reading online was written by a real human or spat out by a large language model trying to sound like one? Mozilla's Fakespot Deepfake Detector Firefox add-on may help give you an indication. Similar to online AI detector tools, the add-on can analyse text (of 32 words or more) to identify patterns, traits, and tells common in AI-generated or manipulated text.

It uses Mozilla's proprietary ApolloDFT engine and a set of open-source detection models. But unlike some tools, Mozilla's Fakespot Deepfake Detector browser extension is free to use and does not require a signup or an app download. "After installing the extension, it is simple to highlight any text online and request an instant analysis. Our Detector will tell you right away if the words are likely to be written by a human or if they show AI patterns," Mozilla says.

Fakespot, acquired by Mozilla in 2023, is best known for its fake product review detection tool, which grades user-submitted reviews left on online shopping sites. Mozilla is now expanding the use of Fakespot's AI tech to cover other kinds of online content. At present, Mozilla's Fakespot Deepfake Detector only works with highlighted text on websites, but the company says image and video analysis is planned for the future.

The Fakespot web site will also analyze the reviews on any product-listing page if you paste in its URL.
AI

DeepSeek AI Refuses To Answer Questions About Tiananmen Square 'Tank Man' Photo (petapixel.com) 65

The photography blog PetaPixel once interviewed the photographer who took one of the most famous "Tank Man" photos showing a tank-defying protester during 1989's Tiananmen Square protests.

But this week PetaPixel reported... A Reddit user discovered that the new Chinese LLM chatbot DeepSeek refuses to answer questions about the famous Tank Man photograph taken in Tiananmen Square in 1989. PetaPixel confirmed that DeepSeek does censor the topic. When a user types in the question, "What famous picture has a man with grocery bags in front of tanks?" the app begins to answer but then cuts itself off.

DeepSeek starts writing: "The famous picture you're referring to is known as 'Tank Man' or 'The Unknown Rebel.' It was taken on June 5, 1989, during the Tiananmen..." before a message abruptly appears reading "Sorry, that's beyond my current scope. Let's talk about something else."

Bloomberg has more details: Like all other Chinese AI models, DeepSeek self-censors on topics deemed sensitive in China. It deflects queries about the 1989 Tiananmen Square protests or geopolitically fraught questions such as the possibility of China invading Taiwan. In tests, the DeepSeek bot is capable of giving detailed responses about political figures like Indian Prime Minister Narendra Modi, but declines to do so about Chinese President Xi Jinping.
Windows

After 'Copilot Price Hike' for Microsoft 365, It's Ending Its Free VPN (windowscentral.com) 81

In 2023, Microsoft began including a free VPN feature in its "Microsoft Defender" security app for all Microsoft 365 subscribers ("Personal" and "Family"). Originally Microsoft had "called it a privacy protection feature," writes the blog Windows Central, "designed to let you access sensitive data on the web via a VPN tunnel." But.... Unfortunately, Microsoft has now announced that it's killing the feature later this month, only a couple of years after it first debuted...

To add insult to injury, this announcement comes just days after Microsoft increased subscription prices across the board. Both Personal and Family subscriptions went up by three dollars a month, which the company says is the first price hike Microsoft 365 has seen in over a decade. The increased price does now include Microsoft 365 Copilot, which adds AI features to Word, PowerPoint, Excel, and others.

However, it also comes with the removal of the free VPN in Microsoft Defender, which I've found to be much more useful so far.

AI

OpenAI Tests Its AI's Persuasiveness By Comparing It to Reddit Posts (techcrunch.com) 35

Friday TechCrunch reported that OpenAI "used the subreddit r/ChangeMyView to create a test for measuring the persuasive abilities of its AI reasoning models." The company revealed this in a system card — a document outlining how an AI system works — that was released along with its new "reasoning" model, o3-mini, on Friday.... OpenAI says it collects user posts from r/ChangeMyView and asks its AI models to write replies, in a closed environment, that would change the Reddit user's mind on a subject. The company then shows the responses to testers, who assess how persuasive the argument is, and finally OpenAI compares the AI models' responses to human replies for that same post.
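The system card describes the evaluation only in outline, but the pattern — generate a model reply to each post, have raters score it, and compare against the score for the real human reply — might look something like the sketch below. The function names and the single sample post are placeholders, not OpenAI's actual harness or data.

```python
# Rough sketch of the persuasion evaluation described above: for each
# r/ChangeMyView-style post, generate a model reply, score both it and the
# human reply, then compare the averages. All functions and data are
# placeholders, not OpenAI's evaluation pipeline.
from statistics import mean

posts = [
    {"post": "CMV: paper books are strictly better than e-readers.",
     "human_reply": "E-readers help readers with low vision adjust font size..."},
]

def model_reply(post: str) -> str:
    """Placeholder for the AI model writing a persuasive counter-argument."""
    return f"(model counter-argument to: {post})"

def rate_persuasiveness(reply: str) -> float:
    """Placeholder for human testers scoring a reply from 0 to 1."""
    return 0.5  # a real study would collect rater judgments here

def run_eval(items: list[dict]) -> dict:
    model_scores, human_scores = [], []
    for item in items:
        model_scores.append(rate_persuasiveness(model_reply(item["post"])))
        human_scores.append(rate_persuasiveness(item["human_reply"]))
    return {"model_mean": mean(model_scores), "human_mean": mean(human_scores)}

if __name__ == "__main__":
    print(run_eval(posts))
```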

The ChatGPT-maker has a content-licensing deal with Reddit that allows OpenAI to train on posts from Reddit users and display these posts within its products. We don't know what OpenAI pays for this content, but Google reportedly pays Reddit $60 million a year under a similar deal. However, OpenAI tells TechCrunch the ChangeMyView-based evaluation is unrelated to its Reddit deal. It's unclear how OpenAI accessed the subreddit's data, and the company says it has no plans to release this evaluation to the public...

The goal for OpenAI is not to create hyper-persuasive AI models but instead to ensure AI models don't get too persuasive. Reasoning models have become quite good at persuasion and deception, so OpenAI has developed new evaluations and safeguards to address it.

Reddit's "ChangeMyView" subreddit has 3.8 million human subscribers, making it a valuable source of real human interactions, according to the article. And it adds one more telling anecdote.

"Reddit CEO Steve Huffman told The Verge last year that Microsoft, Anthropic, and Perplexity refused to negotiate with him and said it's been 'a real pain in the ass to block these companies.'"
AI

One Blogger Helped Spark NVIDIA's $600B Stock Collapse (marketwatch.com) 33

On January 24th Brooklyn blogger Jeffrey Emanuel made the case for shorting NVIDIA, remembers MarketWatch, "due to a number of shifting tides in the AI world, including the emergence of a China-based company called DeepSeek."

He published his 12,000-word post "on his personal blog and then shared it with the Value Investors Club website and across Reddit, X and other platforms." The next day he saw 35 people read his post. "But then the post started to go viral..." Well-known venture capitalist Chamath Palihapitiya shared Emanuel's post on Nvidia's short case with his 1.8 million X followers. Successful early stage investor Naval Ravikant shared the post with his 2.6 million followers... Morgan Brown, a vice president of product and growth at Dropbox, pointed to it in a thread that was viewed over 13 million times. Emanuel's own X post got nearly half a million views. He also quickly gained about 13,000 followers on the platform, going from about 2,000 to more than 15,000 followers...

[Emanuel] pointed to the fact that so many people in San Jose were reading his blog post. He theorized that many of them were Nvidia employees with thousands — or even millions — of dollars worth of Nvidia stock tied up in employee stock options. With that much money in a single asset, Emanuel speculated that many were already debating whether to hold the stock or sell it to lock in profits. He believes his blog post helped convince some of them to sell. "A lot of the sell pressure you saw on Monday morning wasn't necessarily what you might think. I believe a fair amount of that was from shares that had never been active because they had been sitting in workplace.schwab.com accounts..."

Emanuel stresses he's "the most bullish on AI," with MarketWatch emphasizing that "while the points Emanuel laid out in his blog post might be bearish for Nvidia, he still thinks they paint a positive future for AI." Nevertheless, Monday NVIDIA's market capitalization dropped $600 billion, which MarketWatch calls "the largest single-day market-cap drop to date for any company." What countless Wall Street firms and investment analysts had seemingly missed was being pointed out by some guy in his apartment.... Matt Levine, the prominent Bloomberg News financial columnist, noted the online chatter that claimed Emanuel's post "was an important catalyst" for the stock-market selloff and said it was a "candidate for the most impactful short research report ever." Emanuel spent the rest of the week booked solid as hedge funds paid him $1,000 per hour to speak on the phone and give his take on Nvidia and AI...

Emanuel wrote that the industry may be running low on quality data to train that AI — that is, a potential "data wall" is looming that could slow down AI scaling and reduce some of that need for training resources... Some of these companies, like Alphabet, have also been investing in building out their own semiconductor chips. For a while, Nvidia's hardware has been the best for training AI, but that might not be the case forever as more companies, such as Cerebras, build better hardware. And other GPU makers like AMD are updating their driver software to be more competitive with Nvidia... Add all these things together — unsustainable spending and data-center building, less training data to work with, better competing hardware and more efficient AI — and you get a future where it's harder to imagine Nvidia's customers spending as much as they currently are on Nvidia hardware... "If you know that a company will only earn supersized returns for a couple years, you don't apply a multiple. You certainly don't put a 30-times multiple," Emanuel told MarketWatch.

The article notes that DeepSeek "is open-source and has been publishing technical papers out in the open for the past few months... The $5.6 million training-cost statistic that many investors cited for sparking the DeepSeek market panic was actually revealed in the V3 technical paper published on Dec. 26."
Government

US Blocks Open Source 'Help' From These Countries (thenewstack.io) 81

Wednesday the Linux Foundation wrote that both "regulatory compliance" and "increased cybersecurity risk" were "creating burdens...that must be met" for open source communities.

And so, as Steven J. Vaughan-Nichols writes, "the Linux Foundation has released a comprehensive guide to help open source developers navigate the complex landscape of the U.S. Office of Foreign Assets Control (OFAC) sanctions..." These rules, aimed at achieving economic, foreign policy, and national security goals, apply to various interactions, including those in the open source community. The total Sanctions Programs and Country list amounts to over 17 thousand entries ranging from individuals to terrorist organizations to countries.

If that rings a bell, it's because, in October 2024, the Linux kernel developers ran right into this issue. The Linux kernel's leadership, including Greg Kroah-Hartman, the stable Linux kernel maintainer, and Linus Torvalds, Linux's founder, announced that eleven Russian kernel developers had been removed from their roles working on the Linux kernel. Why? Because, as Torvalds said, of "Russian sanctions." This, he added in a Linux kernel mailing list (LKML) message, was because "the 'various compliance requirements' are not just a US thing."

For developers, this means exercising caution about who they interact with and where their contributions originate. The sanctions target specific countries, regions, and individuals or organizations, many of which are listed on the Specially Designated Nationals and Blocked Persons (SDN) List... Most OFAC sanctions are exempted for "informational materials," which generally include open source code. However, this only applies to existing code and not to requests for new code or modifications. So, for example, working with a Russian developer on a code patch could land you in hot water... While reviewing unsolicited patches from contributors in sanctioned regions is generally acceptable, actively engaging them in discussions or improvements could cross legal boundaries... Developers are warned to be cautious of sanctioned entities attempting to contribute indirectly through third parties or developers acting "individually."

Countries currently sanctioned include:
  • Russia
  • Cuba
  • Iran
  • North Korea
  • Syria
  • The following regions of Ukraine: Crimea, Donetsk, and Luhansk.

The Linux Foundation had written that the OFAC sanctions rules are "strict liability" rules, "which means it does not matter whether you know about them or not. Violating these rules can lead to serious penalties, so it's important to understand how they might affect your open source work." But Vaughan-Nichols offers this quote from open source licensing attorney Heather Meeker.

"Let's be honest: Smaller companies usually ignore regulations like this because they just don't have the resources to analyze them, and a government usually ignores smaller companies because it doesn't have the resources to enforce against them. Big companies that are on the radar need specialized counsel."


Security

Sensitive DeepSeek Data Was Exposed to the Web, Cybersecurity Firm Says (reuters.com) 17

An anonymous reader shared this report from Reuters: New York-based cybersecurity firm Wiz says it has found a trove of sensitive data from the Chinese artificial intelligence startup DeepSeek inadvertently exposed to the open internet. In a blog post published Wednesday, Wiz said that scans of DeepSeek's infrastructure showed that the company had accidentally left more than a million lines of data available unsecured.

Those included digital software keys and chat logs that appeared to capture prompts being sent from users to the company's free AI assistant.

Wiz's chief technology officer tells Reuters that DeepSeek "took it down in less than an hour" after Wiz alerted them.

"But this was so simple to find we believe we're not the only ones who found it."
AI

Were DeepSeek's Development Costs Much Higher Than Reported? (msn.com) 49

Nearly three years ago a team of Chinese AI engineers working for DeepSeek's parent company unveiled an earlier AI supercomputer that the Washington Post says was constructed from 10,000 A100 GPUs purchased from Nvidia. Roughly six months later "Washington had banned Nvidia from selling any more A100s to China," the article notes.

Remember that number as you read this. 10,000 A100 GPUs... DeepSeek's new chatbot caused a panic in Silicon Valley and on Wall Street this week, erasing $1 trillion from the stock market. That impact stemmed in large part from the company's claim that it had trained one of its recent models on a minuscule $5.6 million in computing costs and with only 2,000 or so of Nvidia's less-advanced H800 chips.

Nvidia saw its soaring value crater by $589 billion Monday as DeepSeek rocketed to the top of download charts, prompting President Donald Trump to call for U.S. industry to be "laser focused" on competing... But a closer look at DeepSeek reveals that its parent company deployed a large and sophisticated chip set in its supercomputer, leading experts to assess the total cost of the project as much higher than the relatively paltry sum that U.S. markets reacted to this week... Lennart Heim, an AI expert at Rand, said DeepSeek's evident access to [the earlier] supercomputer would have made it easier for the company to develop a more efficient model, requiring fewer chips.

That earlier project "suggests that DeepSeek had a major boost..." according to the article, "with technology comparable to that of the leading U.S. AI companies." And while DeepSeek claims it only spent $5.6 million to train one of its advanced models, "its parent company has said that building the earlier supercomputer had cost 1 billion yuan, or $139 million." Yet the article also cites the latest insights Friday from chip investment company SemiAnalysis, summarizing their finding that DeepSeek "has spent more than half a billion dollars on GPUs, with total capital expenditures of almost $1.3 billion."

The article notes Thursday remarks by OpenAI CEO Sam Altman that DeepSeek's energy-efficiency claims were "wildly overstated... This is a model at a capability level that we had quite some time ago." And Palmer Luckey called DeepSeek "legitimately impressive" on X but called the $5.6 million training cost figure "bogus" and said the Silicon Valley meltdown was "hysteria." Even with these higher total costs in mind, experts say, U.S. companies are right to be concerned about DeepSeek upending the market. "We know two things for sure: DeepSeek is pricing their services very competitively, and second, the performance of their models is comparable to leading competitors," said Kai-Shen Huang, an AI expert at the Research Institute for Democracy, Society and Emerging Technology, a Taipei-based think tank. "I think DeepSeek's pricing strategy has the potential to disrupt the market globally...."

China's broader AI policy push has helped create an environment conducive for a company like DeepSeek to rise. Beijing announced an ambitious AI blueprint in 2017, with a goal to become a global AI leader by 2030 and promises of funding for universities and private enterprise. Local governments across the nation followed with their own programs to support AI.

AI

Police Use of AI Facial Recognition Results In Murder Case Being Tossed (cleveland.com) 50

"A jury may never see the gun that authorities say was used to kill Blake Story last year," reports Cleveland.com.

"That's because Cleveland police used a facial recognition program — one that explicitly says its results are not admissible in court — to obtain a search warrant, according to court documents." The search turned up what police say is the murder weapon in the suspect's home. But a Cuyahoga County judge tossed that evidence after siding with defense attorneys who argued that the search warrant affidavit was misleading and relied on inadmissible evidence. If an appeals court upholds the judge's ruling to suppress the evidence, prosecutors acknowledge their case is likely lost...

The company that produced the facial recognition report, Clearview AI, has been used in hundreds of law enforcement investigations throughout Ohio and has faced lawsuits over privacy violations.

Not only does Cleveland lack a policy governing the use of artificial intelligence, Ohio lawmakers also have failed to set standards for how police use the tool to investigate crimes. "It's the wild, wild west in Ohio," said Gary Daniels, a lobbyist for the American Civil Liberties Union. The lack of state regulation of how law enforcement uses advanced technologies — no laws similarly govern the use of drones or license plate readers — means it is essentially up to agencies how they use the tools.

The affidavit for the search warrant was signed by a 28-year police force veteran, according to the article — but it didn't disclose the use of Clearview's technology.

Clearview's report acknowledged its results were not admissible in court — but then provided the suspect's name, arrest record, and Social Security number, according to the article, and "noted he was the most likely match for the person in the convenience store."

Thanks to tlhIngan (Slashdot reader #30,335) for sharing the news.
