AI

Most Men Would Marry Their AI Girlfriends If It Were Legal (vice.com) 152

An anonymous reader quotes a report from VICE News: EVA AI, a platform allowing you to create and connect with your own AI partner, recently surveyed 2,000 men and found that 8 in 10 would consider marrying an AI girlfriend if it were legal. Not only that, but 83% of men also believe they could form a deep emotional bond with an AI girlfriend. What's even scarier is that a whopping 78% of men surveyed said they would consider creating a replica of their ex, and three-quarters would duplicate their current partner to create a "polished" version of them. "AI companionship allows people to be their authentic selves without fear of judgment," said Cale Jones, head of community growth at EVA AI. "It creates a safe space to explore thoughts, emotions, and desires that might feel too vulnerable to share in real life. The benefits extend far beyond the virtual world: one EVA AI user discovered her bisexuality through this platform -- something she previously felt too insecure to explore in real life."
AI

Cursing Disables Google's AI Overviews 21

Google users have discovered that adding curse words to search queries disables the company's AI-powered overview feature. While Google's Gemini AI system typically avoids profanity, inserting expletives into search terms bypasses the AI summaries and returns traditional web results instead. Users can also disable AI Overviews by appending "-ai" (or any other term prefixed with a minus sign) to their queries.
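For anyone who wants to apply the trick programmatically, here is a minimal sketch; the "-ai" exclusion term is the only detail taken from the report, and the URL-building approach is just an illustration of unofficial behavior that Google could change at any time.

```python
from urllib.parse import urlencode

def search_url_without_overview(query: str) -> str:
    """Build a Google search URL with "-ai" appended, which users report
    suppresses the AI Overview panel (unofficial behavior, subject to change)."""
    return "https://www.google.com/search?" + urlencode({"q": f"{query} -ai"})

print(search_url_without_overview("how to descale a kettle"))
# https://www.google.com/search?q=how+to+descale+a+kettle+-ai
```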
AI

OpenAI's o3-mini: Faster, Cheaper AI That Fact-Checks Itself (openai.com) 73

OpenAI today launched o3-mini, a specialized AI reasoning model designed for STEM tasks that offers faster processing at lower cost than its predecessor, o1-mini. The model, priced at $1.10 per million cached input tokens and $4.40 per million output tokens, performs fact-checking before delivering results to reduce errors in technical domains like physics and programming, the Microsoft-backed startup said. (A million tokens is roughly 750,000 words.)
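As a rough back-of-the-envelope illustration of what those rates mean per request (the per-million-token prices are the ones quoted above; the token counts in the example are made up):

```python
# Quoted o3-mini rates: $1.10 per million cached input tokens, $4.40 per million output tokens.
INPUT_PER_MILLION = 1.10
OUTPUT_PER_MILLION = 4.40

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of a single request at the quoted rates."""
    return (input_tokens / 1e6) * INPUT_PER_MILLION + (output_tokens / 1e6) * OUTPUT_PER_MILLION

# Example: a 4,000-token prompt that produces a 1,000-token answer.
print(f"${request_cost(4_000, 1_000):.4f}")  # -> $0.0088
```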

OpenAI claims that its tests showed o3-mini made 39% fewer major mistakes than o1-mini on complex problems while delivering responses 24% faster. The model will be available through ChatGPT with varying access levels -- free users get basic access while premium subscribers receive higher query limits and reasoning capabilities.
The Almighty Buck

'Magical' Efficient-Market Theory Rebuked in Era of Passive Investing (yahoo.com) 57

An anonymous reader shares a report: At first blush, stock trading this week is hardly a paragon of the market-efficiency theory, an oft-romanticized idea in Economics 101. After all, big equity gauges plunged on Monday, spurred by fears over an AI model released a week earlier, before swiftly rebounding. A fresh academic paper suggests the rise of passive investing may be fueling these kinds of fragile market moves.

According to a study to be published in the prestigious American Economic Review, evidence is building that active managers are slow to scoop up stocks en masse when prices move away from their intrinsic worth. Thanks to this lethargic trading behavior and the relentless boom in benchmark-tracking index funds, the impact of each trade on prices gets amplified, explaining how sell orders, like on Monday perhaps, can induce broader equity gyrations. As a result, the financial landscape is proving less dynamic and more volatile in the era of Big Passive, according to authors at the UCLA Anderson School of Management, the Stockholm School of Economics and the University of Minnesota Carlson School of Management.
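To make the amplification argument concrete, here is a toy calculation -- purely illustrative, and not the model used in the paper -- of how the same sell order moves prices more when a smaller share of the market is held by price-sensitive active capital:

```python
def price_move(sell_order_bn: float, active_capital_bn: float, elasticity: float = 1.0) -> float:
    """Illustrative price impact: flow divided by the price-sensitive capital
    available to absorb it (a stylized, made-up functional form)."""
    return sell_order_bn / (elasticity * active_capital_bn)

order = 10.0  # a $10B wave of selling into a $1T market
for active_share in (0.8, 0.5, 0.2):  # active managers' shrinking share of that market
    print(f"active share {active_share:.0%}: price move {price_move(order, 1_000 * active_share):.2%}")
# active share 80%: price move 1.25%
# active share 50%: price move 2.00%
# active share 20%: price move 5.00%
```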

AI

DeepSeek Outstrips Meta and Mistral To Lead Open-Source AI Race (semianalysis.com) 27

DeepSeek has emerged as the leading open-source AI model developer, surpassing Meta's Llama and Mistral, after releasing its latest model V3 with breakthrough cost efficiencies, research and consultancy firm SemiAnalysis reported on Friday.

The Chinese startup, backed by hedge fund High-Flyer, reached this milestone through innovations in Multi-head Latent Attention technology, which cut inference costs by 93.3% versus standard methods. Despite offering services below cost to gain market share, DeepSeek delivers performance that matches or exceeds OpenAI's GPT-4.
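For readers curious about the mechanism, here is a minimal sketch of the low-rank key/value compression idea behind Multi-head Latent Attention: cache one small latent vector per token instead of full per-head keys and values, then re-expand it per head at attention time. The dimensions are arbitrary, and this is an illustration of the idea rather than DeepSeek's implementation.

```python
import numpy as np

# Illustrative sizes, not DeepSeek's actual configuration.
d_model, d_latent, n_heads, d_head, seq = 512, 64, 8, 64, 16
rng = np.random.default_rng(0)

W_down = rng.standard_normal((d_model, d_latent)) * 0.02          # shared compression
W_up_k = rng.standard_normal((n_heads, d_latent, d_head)) * 0.02  # per-head key expansion
W_up_v = rng.standard_normal((n_heads, d_latent, d_head)) * 0.02  # per-head value expansion
W_q = rng.standard_normal((n_heads, d_model, d_head)) * 0.02

x = rng.standard_normal((seq, d_model))
latent = x @ W_down                      # (seq, d_latent) -- this is all the cache needs to store
q = np.einsum("sd,hde->hse", x, W_q)     # (heads, seq, d_head)
k = np.einsum("sl,hle->hse", latent, W_up_k)
v = np.einsum("sl,hle->hse", latent, W_up_v)

scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)
weights = np.exp(scores - scores.max(-1, keepdims=True))
weights /= weights.sum(-1, keepdims=True)
out = weights @ v                        # (heads, seq, d_head)

full_cache = 2 * seq * n_heads * d_head  # floats a standard per-head KV cache would hold
mla_cache = seq * d_latent               # floats the shared latent cache holds
print(f"cache entries: {full_cache} -> {mla_cache} ({mla_cache / full_cache:.1%})")
```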
AI

Taiwan Says Government Departments Should Not Use DeepSeek, Citing Security Concerns (reuters.com) 37

An anonymous reader shares a report: Taiwan's digital ministry said on Friday that government departments should not use Chinese startup DeepSeek's artificial intelligence (AI) service, because the product comes from China and therefore represents a security concern.

Democratically-governed Taiwan has long been wary of Chinese tech given Beijing's sovereignty claims over the island and its military and political threats against the government in Taipei. In a statement, Taiwan's Ministry of Digital Affairs said that government departments are not allowed to use DeepSeek's AI service to "prevent information security risks".

"DeepSeek's AI service is a Chinese product, and its operation involves cross-border transmission and information leakage and other information security concerns, and is a product that jeopardises the country's information security," the ministry said.

Intel

Intel Won't Bring Its Falcon Shores AI Chip To Market (techcrunch.com) 24

During the company's fourth-quarter earnings call Thursday, Intel co-CEO Michelle Johnston Holthaus announced that Intel has decided to cancel its Falcon Shores AI chip. Instead, it'll opt to use it as an internal test chip while shifting focus to Jaguar Shores for AI data center solutions. TechCrunch reports: "AI data center ... is an attractive market for us," Holthaus said during the call. "[B]ut I am not happy with where we are today. We're not yet participating in the cloud-based AI data center market in a meaningful way ... One of the immediate actions I have taken is to simplify our roadmap and concentrate our resources." The focus instead will be on Jaguar Shores, which Holthaus called Intel's opportunity to "develop a system-level solution at rack scale ... to address the AI data center more broadly."

Holthaus tempered expectations for Falcon Shores last month, when she implied that it was an "iterative" step over the company's previous dedicated AI data center chip, Gaudi 3. "One of the things that we've learned from Gaudi is, it's not enough to just deliver the silicon," Holthaus said during Thursday's earnings call. "Falcon Shores will help us in that process of working on the system, networking, memory -- all those component[s]. But what customers really want is that full-scale rack solution, and so we're able to get to that with Jaguar Shores."

"As I think about our AI opportunity, my focus is on the problems our customers are trying to solve, most notably the need to lower the cost and increase the efficiency of compute," Holthaus said. "As such, a one-size-fits-all approach will not work, and I can see clear opportunities to leverage our core assets in new ways to drive the most compelling total cost of ownership across the continuum."

Privacy

Italy Blocks DeepSeek Over Data Privacy Concerns (reuters.com) 30

Italy's data protection agency has blocked the Chinese AI chatbot DeepSeek after its developers failed to disclose how it collects user data or whether that data is stored on Chinese servers. Reuters reports: DeepSeek could not be accessed on Wednesday in Apple or Google app stores in Italy, the day after the authority, known also as the Garante, requested information on its use of personal data. In particular, it wanted to know what personal data is collected, from which sources, for what purposes, on what legal basis and whether it is stored in China. The authority's decision -- aimed at protecting Italian users' data -- came after the Chinese companies that supply the chatbot service to DeepSeek provided information that was considered "totally insufficient," the authority said in a note on its website. The Garante added that the decision had "immediate effect" and that it had also opened an investigation. Thanks to new submitter axettone for sharing the news.
Government

OpenAI Teases 'New Era' of AI In US, Deepens Ties With Government (arstechnica.com) 38

An anonymous reader quotes a report from Ars Technica: On Thursday, OpenAI announced that it is deepening its ties with the US government through a partnership with the National Laboratories and expects to use AI to "supercharge" research across a wide range of fields to better serve the public. "This is the beginning of a new era, where AI will advance science, strengthen national security, and support US government initiatives," OpenAI said. The deal ensures that "approximately 15,000 scientists working across a wide range of disciplines to advance our understanding of nature and the universe" will have access to OpenAI's latest reasoning models, the announcement said.

For researchers from Los Alamos, Lawrence Livermore, and Sandia National Labs, access to "o1 or another o-series model" will be available on Venado -- an Nvidia supercomputer at Los Alamos that will become a "shared resource." Microsoft will help deploy the model, OpenAI noted. OpenAI suggested this access could propel major "breakthroughs in materials science, renewable energy, astrophysics," and other areas that Venado was "specifically designed" to advance. Key areas of focus for Venado's deployment of OpenAI's model include accelerating US global tech leadership, finding ways to treat and prevent disease, strengthening cybersecurity, protecting the US power grid, detecting natural and man-made threats "before they emerge," and "deepening our understanding of the forces that govern the universe," OpenAI said.

Perhaps among OpenAI's flashiest promises for the partnership, though, is helping the US achieve "a new era of US energy leadership by unlocking the full potential of natural resources and revolutionizing the nation's energy infrastructure." That is urgently needed, as officials have warned that America's aging energy infrastructure is becoming increasingly unstable, threatening the country's health and welfare, and that without efforts to stabilize it, the US economy could tank. But possibly the most "highly consequential" government use case for OpenAI's models will be supercharging research safeguarding national security, OpenAI indicated. "The Labs also lead a comprehensive program in nuclear security, focused on reducing the risk of nuclear war and securing nuclear materials and weapons worldwide," OpenAI noted. "Our partnership will support this work, with careful and selective review of use cases and consultations on AI safety from OpenAI researchers with security clearances."

The announcement follows the launch earlier this week of ChatGPT Gov, "a new tailored version of ChatGPT designed to provide US government agencies with an additional way to access OpenAI's frontier models." OpenAI also worked with the Biden administration, voluntarily committing to give officials early access to its latest models for safety inspections.
Books

Books Written By Humans Are Getting Their Own Certification (theverge.com) 76

The Authors Guild -- one of the largest associations of writers in the US -- has launched a new project that allows authors to certify that their book was written by a human, and not generated by artificial intelligence. From a report: The Guild says its "Human Authored" certification aims to make it easier for writers to "distinguish their work in increasingly AI-saturated markets," and that readers have a right to know who (or what) created the books they read. Human Authored certifications will be listed in a public database that anyone can access.
AI

Has Europe's Great Hope For AI Missed Its Moment? (ft.com) 39

France's Mistral AI is facing mounting pressure over its future as an independent European AI champion, as competition intensifies from U.S. tech giants and China's emerging players. The Paris-based startup, valued at $6.5 billion and backed by Microsoft and Nvidia, has struggled to keep pace with larger rivals despite delivering advanced AI models with a fraction of their resources.

The pressure increased this week after China's DeepSeek released a cutting-edge open-source model that challenged Mistral's efficiency-focused strategy. Mistral CEO Arthur Mensch dismissed speculation about selling to Big Tech companies, saying the firm hopes to go public eventually. However, one investor told the Financial Times that "they need to sell themselves."

The stakes are high for Europe's tech ambitions. Mistral remains the region's only significant player in large language models, the technology behind ChatGPT, after Germany's Aleph Alpha pivoted away from the field last year. The company has won customers including France's defense ministry and BNP Paribas, but controls just 5% of the enterprise AI market compared to OpenAI's dominant share.
AI

India Lauds Chinese AI Lab DeepSeek, Plans To Host Its Models on Local Servers (techcrunch.com) 11

India's IT minister on Thursday praised DeepSeek's progress and said the country will host the Chinese AI lab's large language models on domestic servers, in a rare opening for Chinese technology in India. From a report: "You have seen what DeepSeek has done -- $5.5 million and a very very powerful model," IT Minister Ashwini Vaishnaw said on Thursday, responding to criticism New Delhi has received for its own investment in AI, which has been much less than many other countries.

Since 2020, India has banned more than 300 apps and services linked to China, including TikTok and WeChat, citing national security concerns. The approval to allow DeepSeek to be hosted in India appears contingent on the platform storing and processing all Indian users' data domestically, in line with India's strict data localization requirements. [...] DeepSeek's models will likely be hosted on India's new AI Compute Facility. The facility is powered by 18,693 graphics processing units (GPUs), nearly double its initial target -- almost 13,000 of those are Nvidia H100 GPUs, and about 1,500 are Nvidia H200 GPUs.

AI

AI-Assisted Works Can Get Copyright With Enough Human Creativity, Says US Copyright Office (apnews.com) 18

The U.S. Copyright Office has ruled that AI-assisted works can receive copyright protection if they contain perceptible human creativity, such as creative modifications or arrangements. However, fully machine-generated content remains ineligible for copyright. The Associated Press reports: An AI-assisted work could be copyrightable if an artist's handiwork is perceptible. A human adapting an AI-generated output with "creative arrangements or modifications" could also make it fall under copyright protections. The report follows a review that began in 2023 and fielded opinions from thousands of people, ranging from AI developers to actors and country singers.

It shows the copyright office will continue to reject copyright claims for fully machine-generated content. A person simply prompting a chatbot or AI image generator to produce a work doesn't give that person the ability to copyright that work, according to the report. "Extending protection to material whose expressive elements are determined by a machine ... would undermine rather than further the constitutional goals of copyright," [said Register of Copyrights Shira Perlmutter].

The copyright office says it's working on a separate report that "will turn to the training of AI models on copyrighted works, licensing considerations, and allocation of any liability."
Cloud

Microsoft Makes DeepSeek's R1 Model Available On Azure AI and GitHub 30

Microsoft has integrated DeepSeek's R1 model into its Azure AI Foundry platform and GitHub, allowing customers to experiment with and deploy AI applications more efficiently.

"One of the key advantages of using DeepSeek R1 or any other model on Azure AI Foundry is the speed at which developers can experiment, iterate, and integrate AI into their workflows," says By Asha Sharma, Microsoft's corporate vice president of AI platform. "DeepSeek R1 has undergone rigorous red teaming and safety evaluations, including automated assessments of model behavior and extensive security reviews to mitigate potential risks." The Verge reports: R1 was initially released as an open source model earlier this month, and Microsoft has moved at surprising pace to integrate this into Azure AI Foundry. The software maker will also make a distilled, smaller version of R1 available to run locally on Copilot Plus PCs soon, and it's possible we may even see R1 show up in other AI-powered services from Microsoft.
AI

After DeepSeek Shock, Alibaba Unveils Rival AI Model That Uses Less Computing Power (venturebeat.com) 59

Alibaba has unveiled a new version of its AI model, called Qwen2.5-Max, claiming benchmark scores that surpass both DeepSeek's recently released R1 model and industry standards like GPT-4o and Claude-3.5-Sonnet. The model achieves these results using a mixture-of-experts architecture that requires significantly less computational power than traditional approaches.

The release comes amid growing concerns about China's AI capabilities, following DeepSeek's R1 model launch last week that sent Nvidia's stock tumbling 17%. Qwen2.5-Max scored 89.4% on the Arena-Hard benchmark and demonstrated strong performance in code generation and mathematical reasoning tasks. Unlike U.S. companies that rely heavily on massive GPU clusters -- OpenAI reportedly uses over 32,000 high-end GPUs for its latest models -- Alibaba's approach focuses on architectural efficiency. The company claims this allows comparable AI performance while reducing infrastructure costs by 40-60% compared to traditional deployments.
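For context on the architecture, here is a minimal sketch of top-k mixture-of-experts routing, the general idea such models use to cut compute: each token activates only a few expert networks, so most parameters stay idle for any given token. The sizes and routing depth below are arbitrary, not Alibaba's configuration.

```python
import numpy as np

# Illustrative sizes for a toy mixture-of-experts layer.
d_model, n_experts, top_k, tokens = 64, 8, 2, 4
rng = np.random.default_rng(0)

W_gate = rng.standard_normal((d_model, n_experts)) * 0.02
experts = [rng.standard_normal((d_model, d_model)) * 0.02 for _ in range(n_experts)]
x = rng.standard_normal((tokens, d_model))

logits = x @ W_gate                                   # (tokens, n_experts) router scores
probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)

out = np.zeros_like(x)
for t in range(tokens):
    chosen = np.argsort(probs[t])[-top_k:]            # indices of the k most likely experts
    weights = probs[t, chosen] / probs[t, chosen].sum()
    for w, e in zip(weights, chosen):
        out[t] += w * (x[t] @ experts[e])             # only k of n_experts run per token
print(out.shape)  # (4, 64)
```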
Security

Chinese and Iranian Hackers Are Using US AI Products To Bolster Cyberattacks (msn.com) 19

Hackers linked to China, Iran and other foreign governments are using new AI technology to bolster their cyberattacks against U.S. and global targets, according to U.S. officials and new security research. WSJ: In the past year, dozens of hacking groups in more than 20 countries turned to Google's Gemini chatbot to assist with malicious code writing, hunts for publicly known cyber vulnerabilities and research into organizations to target for attack, among other tasks, Google's cyber-threat experts said. While Western officials and security experts have warned for years about the potential malicious uses of AI, the findings released Wednesday from Google are some of the first to shed light on how exactly foreign adversaries are leveraging generative AI to boost their hacking prowess.

This week, the China-built AI platform DeepSeek upended international assumptions about how far along Beijing might be in the AI arms race, creating global uncertainty about a technology that could revolutionize work, diplomacy and warfare. Groups with known ties to China, Iran, Russia and North Korea all used Gemini to support hacking activity, the Google report said. They appeared to treat the platform more as a research assistant than a strategic asset, relying on it for tasks intended to boost productivity rather than to develop fearsome new hacking techniques. All four countries have generally denied U.S. hacking allegations.

AI

Copyright Office Offers Assurances on AI Filmmaking Tools 11

The U.S. Copyright Office declared Wednesday that the use of AI tools to assist in the creative process does not undermine the copyright of a work. Variety: The announcement clears the way for continued adoption of AI in post-production, where it has become increasingly common, such as in the enhancement of Hungarian-language dialogue in "The Brutalist."

Studios, whose business model is founded on strong copyright protections, have expressed concern that AI tools could be inhibited by regulatory obstacles. In a 41-page report [PDF], the Copyright Office also reiterated that human authorship is essential to copyright, and that merely entering text prompts into an AI system is not enough to claim authorship of the resulting output.
AI

Virgin Money Chatbot Scolds Customer Who Typed 'Virgin' (ft.com) 79

Virgin Money's AI-powered chatbot has reprimanded a customer who used the word "virgin," underlining the pitfalls of rolling out external AI tools. From a report: In a post last week on social media site LinkedIn, David Birch, a fintech commentator and Virgin Money customer, shared a picture of his online conversation with the bank in which he asked: "I have two ISAs with Virgin Money, how do I merge them?" The bank's customer service tool responded: "Please don't use words like that. I won't be able to continue our chat if you use this language," suggesting that it deemed the word "virgin" inappropriate.
AI

OpenAI Says It Has Evidence DeepSeek Used Its Model To Train Competitor (theverge.com) 118

OpenAI says it has evidence suggesting Chinese AI startup DeepSeek used its proprietary models to train a competing open-source system through "distillation," a technique where smaller models learn from larger ones' outputs.

The San Francisco-based company, along with partner Microsoft, blocked suspected DeepSeek accounts from accessing its API last year after detecting potential terms of service violations. DeepSeek's R1 reasoning model has achieved results comparable to leading U.S. models, despite the company's claims that it was built with minimal resources.
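For readers unfamiliar with the term, here is a generic sketch of output distillation -- a small "student" model trained to match a larger "teacher" model's output distribution. It illustrates only the technique named in the report; it does not represent OpenAI's or DeepSeek's actual systems, models, or data.

```python
import torch
import torch.nn.functional as F

# Stand-in models: a frozen "teacher" and a "student" being trained to imitate it.
teacher = torch.nn.Linear(32, 10)
student = torch.nn.Linear(32, 10)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # softening temperature

for step in range(100):
    x = torch.randn(16, 32)                           # unlabeled inputs
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x) / T, dim=-1)
    student_log_probs = F.log_softmax(student(x) / T, dim=-1)
    # KL divergence between the student's and teacher's softened distributions.
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * T * T
    opt.zero_grad()
    loss.backward()
    opt.step()
```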
AI

White House 'Looking Into' National Security Implications of DeepSeek's AI 53

During the first press briefing of Donald Trump's second administration, White House press secretary Karoline Leavitt said that the National Security Council was "looking into" the potential security implications of China's DeepSeek AI startup. Axios reports: DeepSeek's low-cost but highly advanced models have shaken the consensus that the U.S. had a strong lead in the AI race with China. Responding to a question from Axios' Mike Allen, Leavitt said President Trump saw this as a "wake-up call" for the U.S. AI industry, but remained confident "we'll restore American dominance." Leavitt said she had personally discussed the matter with the NSC earlier on Tuesday.

In the combative tone that characterized much of her first briefing, Leavitt claimed the Biden administration "sat on its hands and allowed China to rapidly develop this AI program," while Trump had moved quickly to appoint an AI czar and loosen regulations on the AI industry.

Leavitt also commented on the mysterious drones spotted flying around New Jersey at the end of last year, saying they were "authorized to be flown by the FAA."
