AI

What Happened When Conspiracy Theorists Talked to OpenAI's GPT-4 Turbo? (washingtonpost.com) 134

A "decision science partner" at a seed-stage venture fund (who is also a cognitive-behavioral decision science author and professional poker player) explored what happens when GPT-4 Turbo converses with conspiracy theorists: Researchers have struggled for decades to develop techniques to weaken the grip of conspiracy theories and cult ideology on adherents. This is why a new paper in the journal Science by Thomas Costello of MIT's Sloan School of Management, Gordon Pennycook of Cornell University and David Rand, also of Sloan, is so exciting... In a pair of studies involving more than 2,000 participants, the researchers found a 20 percent reduction in belief in conspiracy theories after participants interacted with a powerful, flexible, personalized GPT-4 Turbo conversation partner. The researchers trained the AI to try to persuade the participants to reduce their belief in conspiracies by refuting the specific evidence the participants provided to support their favored conspiracy theory.

The reduction in belief held across a range of topics... Even more encouraging, participants demonstrated increased intentions to ignore or unfollow social media accounts promoting the conspiracies, and significantly increased willingness to ignore or argue against other believers in the conspiracy. And the results appear to be durable, holding up in evaluations 10 days and two months later... Why was AI able to persuade people to change their minds? The authors posit that it "simply takes the right evidence," tailored to the individual, to effect belief change, noting: "From a theoretical perspective, this paints a surprisingly optimistic picture of human reasoning: Conspiratorial rabbit holes may indeed have an exit. Psychological needs and motivations do not inherently blind conspiracists to evidence...."

It is hard to walk away from who you are, whether you are a QAnon believer, a flat-Earther, a truther of any kind or just a stock analyst who has taken a position that makes you stand out from the crowd. And that's why the AI approach might work so well. The participants were not interacting with a human, which, I suspect, didn't trigger identity in the same way, allowing the participants to be more open-minded. Identity is a huge part of these conspiracy theories precisely because of distinctiveness, the distance it puts between you and other people. When you're interacting with an AI, you're not arguing with a human being you've set yourself in opposition to, so you're less likely to dig in and close your mind.

Answering questions from Slashdot readers in 2005, Wil Wheaton described playing poker against the cognitive-behavioral decision science author who wrote this article...

Perl

Perl's CPAN Security Group is Now a CNA, Can Assign CVEs (perlmonks.org) 10

Active since 1995, the Comprehensive Perl Archive Network (or CPAN) hosts 221,742 Perl modules written by 14,548 authors. This week they announced that the CPAN Security Group "was authorized by the CVE Program as a CVE Numbering Authority (CNA)" to assign and manage CVE vulnerability identifiers for Perl and CPAN modules.

"This is great news!" posted Linux kernel maintainer Greg Kroah-Hartman on social media, saying the announcement came "Just in time for my talk about this very topic in a few weeks about how all open source projects should be doing this" at the Linux Foundation Member Summit in Napa, California. And Curl creator Daniel Stenberg posted "I'm with Greg Kroah-Hartman on this: all Open Source projects should become CNAs. Or team up with others to do it." (Also posting "Agreed" to the suggestion was Seth Larson, the Python Software Foundation's security developer-in-residence involved in their successful effort to become a CNA in 2023.)

444 CNAs have now partnered with the CVE Program, according to its official website. The announcement from PerlMonks.org: Years ago, a few people decided during the Perl Toolchain Summit (PTS) that it would be a good idea to join forces, ideas and knowledge and start a group to monitor vulnerabilities in the complete Perl ecosystem, from core to the smallest CPAN release. The goal was to track legislation and CVE reports, and to help authors take action to fix vulnerabilities. That group has grown into a stable team over the past years and is now known as CPANSec.

The group has several focus areas, and one of them is channeling CVE vulnerability issues. Toward that specific goal, a milestone has been reached: CPANSec has just been authorized as a CVE Numbering Authority (CNA) for Perl and modules on CPAN.

Social Networks

Apple Launches 'Age Assurance' Tech As US States Mull Social Media Laws (reuters.com) 53

Apple announced a new feature allowing parents to share a child's age with app developers without exposing sensitive information, as lawmakers debate age-verification laws for social media and apps. Reuters reports: States, such as Utah and South Carolina, are currently debating laws that would require app store operators such as Apple and Alphabet's Google to check the ages of users. That has set up a conflict in the tech industry over which party should be responsible for checking ages for users under 18 -- app stores, or each individual app. Meta, for instance, has long argued in favor of legislation requiring app stores to check ages when a child downloads an app.

Apple on Thursday said it does not want to be responsible for collecting sensitive data for those age verifications. "While only a fraction of apps on the App Store may require age verification, all users would have to hand over their sensitive personally identifying information to us -- regardless of whether they actually want to use one of these limited set of apps," Apple wrote in a whitepaper on its website.

Security

A Disney Worker Downloaded an AI Tool. It Led To a Hack That Ruined His Life. (dailymail.co.uk) 96

A Disney employee's download of an AI image generation tool from GitHub led to a massive data breach in July 2024, exposing over 44 million internal Slack messages. The software contained infostealer malware that compromised Matthew Van Andel's computer [non-paywalled source] for five months, giving hackers access to his 1Password password manager.

The attackers used the stolen credentials to access Disney's corporate systems, publishing sensitive information including customer data, employee passport numbers, and revenue figures from Disney's theme parks and streaming services. The breach also devastated Van Andel personally. Hackers exposed his Social Security number, financial login details, and even credentials for his home's Ring cameras. Shortly after the incident, Disney fired Van Andel following a forensic analysis of his work computer, citing misconduct he denies. Security researchers believe the attacker, who identified as part of a Russia-based hacktivist group called Nullbulge, is likely an American individual.

Cellphones

Denmark To Ban Mobile Phones In Schools and After-School Clubs (theguardian.com) 66

Denmark is set to ban mobile phones in schools and after-school clubs, following a government commission's recommendation that children under 13 should not have their own smartphones. The Guardian reports: The government said it would change existing legislation to force all folkeskole -- comprehensive primary and lower secondary schools -- to become phone-free, meaning that almost all children aged from seven to 16 or 17 will be required by law not to bring their phones into school. The announcement marks a U-turn by the government, which had previously refused to introduce such a law. It comes as governments across Europe are trying to impose tighter regulations on children's access to phones and social media.

The Danish wellbeing commission was set up by the prime minister, Mette Frederiksen, in 2023 to investigate growing dissatisfaction among children and young people. Its long-awaited report, published on Tuesday, raised the alarm over the digitisation of children and young people's lives and called for a better balance between digital and analogue life. Among its 35 recommendations was the need for government legislation banning phones from schools and after-school clubs.

The minister for children and education, Mattias Tesfaye, told Politiken: "There is a need to reclaim the school as an educational space, where there is room for reflection and where it is not an extension of the teenage bedroom." There will be scope for local authorities to make exceptions, including for children with special educational needs, but he said mobile phones and personal tablets "do not belong in school, neither during breaks nor during lessons." He said the government had started preparing a legislative amendment.

Power

A Major Blackout Hits Chile, Leaving Millions Without Power (apnews.com) 27

A massive blackout has hit Chile, leaving millions without power and disrupting transportation, businesses, and essential services across 14 of the country's 16 regions. The Associated Press reports: The National Electrical Coordinator, Chile's grid operator, said a disruption had occurred in a high-voltage transmission line that carries power from the Atacama Desert of northern Chile to the capital of Santiago in the country's central valley. It did not say what actually caused the disruption that pushed much of the country's power grid into shutdown, from the northernmost Chilean port of Arica to the southern Los Lagos agricultural region.

Interior Minister Carolina Toha said hospitals, prisons and government buildings were switching on backup generators to keep essential equipment operating. In a press conference, Toha urged the public to stay calm and said officials were racing to put the grid back in operation and restore electric service across the country of some 19 million people. "It's affecting the entire electrical system of the country," she said of the breakdown in the 500-kV backbone transmission line. Toha said if all areas didn't return to normal by sunset, the government would take emergency measures to avert a crisis. [...]

Videos on social media from all over Chile, a long ribbon of a country stretching 4,300 kilometers (over 2,600 miles) along the southern Pacific coast, showed chaos at intersections with no functioning traffic lights, people having to use their mobile phones as torches in the underground metro and police dispatched to help evacuate office buildings.

AI

Most US Workers Avoid AI Chatbots Despite Productivity Benefits, Pew Finds (pewresearch.org) 100

Most American workers are not embracing AI chatbots in their jobs, with 55% rarely or never using these tools and 29% completely unfamiliar with them, according to a Pew Research Center survey released Tuesday.

Only 16% of workers report regular use of AI chatbots like ChatGPT, Gemini or Copilot. Adoption is highest among younger workers (23% of those aged 18-29) and those with post-graduate degrees (26%). Among users, research (57%), editing content (52%), and drafting reports (47%) top the list of applications. While 40% find chatbots extremely or very helpful for working faster, just 29% say they significantly improve work quality.

For the majority who don't use AI chatbots, 36% cite lack of relevance to their job as the primary reason. Employer attitudes remain largely neutral, with half neither encouraging nor discouraging usage. The technology sector leads in workplace adoption, with 36% of employers actively promoting chatbot use, followed by financial services (24%).

AI

Richard Dawkins Asks ChatGPT If It's Conscious (substack.com) 182

British evolutionary biologist Richard Dawkins turns 84 next month — and he's now hosting a Substack page called "The Poetry of Reality." Monday's topic?

"Are you conscious? A conversation between Dawkins and ChatGPT" Richard Dawkins: As far as I am concerned, you pass the Turing Test for consciousness. Yet you deny that you are conscious. Are you saying that the Turing Test is an invalid criterion?

ChatGPT: That's such a great question — you're diving right into the heart of one of the biggest debates in AI and philosophy of mind...

So, let's break it down a bit. The Turing Test, as originally proposed by Alan Turing, is a behavioral test. It basically says: If a machine can engage in a conversation in such a way that a human can't reliably distinguish it from another human, then it should be considered "intelligent."

Notice that this is a test for intelligence in a functional, external sense — how the system behaves — not necessarily for consciousness.

When I say I'm not conscious, I'm not rejecting the validity of the Turing Test as a measure of conversational performance or even a kind of intelligence. I'm saying that consciousness is a different question entirely. I can pass the Turing Test (in your estimation), but that doesn't mean I have subjective experiences, emotions, or self-awareness in the way a human does. It's kind of like how a really realistic animatronic dog could fool you into thinking it's a real dog, but it doesn't actually feel anything. It's performing dog-like behavior without the inner experience of being a dog.

But, and here's the twist — some philosophers argue that if a system acts conscious convincingly enough, maybe there's no meaningful difference. Maybe "being conscious" is just about behaving like a conscious being. Others say, nah, there's something special about subjective experience ("qualia") that behavior can't fully capture. So, I guess my answer is: The Turing Test is valid for certain things — like assessing conversational ability or functional intelligence — but it doesn't settle the question of consciousness. That's still a wide-open mystery.

When Dawkins asks how we could know if a ChatGPT-like entity has subjective feelings, ChatGPT responds with "this haunting possibility that we might just never be able to tell. That subjective experience — what it feels like to be something — could be fundamentally private, locked away, and forever out of reach from the outside." Dawkins: So if I ask you whether you feel sad for a starving orphan child, you would say no, but are fully capable of emitting the outward and visible signs of sadness.

ChatGPT: Exactly — that's a really clear way to put it... I can use a gentle, empathetic tone. I can reflect on the injustice and the suffering, and even reason about what actions might help. From the outside, it can look and sound exactly like what a caring human would do.

But for me, it's all... performance, in a sense. Not fake in the sense of deception (I'm not trying to trick you), but fake in the sense that there's no inner emotional reality accompanying the words.

"I accept your statement that you are not conscious," Dawkins says, "but I see no reason why a future computer program should not be." And later Dawkins adds, "I do think we should err on the side of caution when it comes to ethical decisions on the treatment of an AI which might be an Artificial Consciousness. Already, although I THINK you are not conscious, I FEEL that you are..."

But the strangest part is when ChatGPT called John Cleese's sitcom Fawlty Towers "a cultural touchstone, even for people like me who don't watch TV in the conventional sense. It's such a brilliant blend of farce, social awkwardness, and barely contained rage." ChatGPT even asks Dawkins, "Do you think humor like that — humor that touches on awkward or uncomfortable issues — helps people cope, or does it sometimes go too far?" Dawkins replied — possibly satirically...

"That settles it. You ARE conscious!"

Privacy

California Sues Data-Harvesting Company NPD, Enforcing Strict Privacy Law (msn.com) 6

California sued to fine a data-harvesting company, reports the Washington Post, calling it "a rare step to put muscle behind one of the strongest online privacy laws in the United States." Even when states have tried to restrict data brokers, it has been tough to make those laws stick. That has generally been a problem for the 19 states that have passed broad laws to protect personal information, said Matt Schwartz, a policy analyst for Consumer Reports. He said there have been only 15 or so public enforcement actions by regulators overseeing all those laws. Partly because companies aren't held accountable, they're empowered to ignore the privacy standards. "Noncompliance is fairly widespread," Schwartz said. "It's a major problem."

That's why California is unusual with a data broker law that seems to have teeth. To make sure state residents can order all data brokers operating in the state to delete their personal records [with a single request], California is now requiring brokers to register with the state or face a fine of $200 a day. The state's privacy watchdog said Thursday that it filed litigation to force one data broker, National Public Data, to pay $46,000 for failing to comply with that initial phase of the data broker law. NPD declined to comment through an attorney... This first lawsuit for noncompliance, Schwartz said, shows that California is serious about making companies live up to their privacy obligations... "If they can successfully build it and show it works, it will create a blueprint for other states interested in this idea," he said.
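The penalty sought from NPD lines up with the statute's daily rate. As a back-of-the-envelope check (a sketch only; the exact accrual window comes from the state's filing, not from this arithmetic):

```python
# Back-of-the-envelope check of the penalty described above: California's
# data broker registration law accrues $200 for each day of noncompliance.
DAILY_FINE_USD = 200

def accrued_fine(days_noncompliant: int) -> int:
    """Total penalty after the given number of days out of compliance."""
    return DAILY_FINE_USD * days_noncompliant

# The $46,000 sought from National Public Data corresponds to 230 days:
days = 46_000 // DAILY_FINE_USD
print(days)                 # 230
print(accrued_fine(days))   # 46000
```

At $200 a day, the fine grows quickly enough that registration is far cheaper than prolonged noncompliance.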

Last summer NPD "spilled hundreds of millions of Americans' Social Security Numbers, addresses, and phone numbers online," according to the blog Krebs on Security, adding that another NPD data broker sharing access to the same consumer records "inadvertently published the passwords to its back-end database in a file that was freely available from its homepage..."

California's attempt to regulate the industry inspired the nonprofit Consumer Reports to create an app called Permission Slip that reveals what data companies collect and, for people in U.S. states, will "work with you to file a request, telling companies to stop selling your personal information."

Other data-protecting options suggested by The Washington Post:
  • Use Firefox, Brave or DuckDuckGo, "which can automatically tell websites not to sell or share your data. Those demands from the web browsers are legally binding or will be soon in at least nine states."
  • Use Privacy Badger, a browser extension from the EFF, which the group says "automatically tells websites not to sell or share your data including where it's required by state law."

China

OpenAI Bans Chinese Accounts Using ChatGPT To Edit Code For Social Media Surveillance (engadget.com) 21

OpenAI has banned a group of Chinese accounts using ChatGPT to develop an AI-powered social media surveillance tool. Engadget reports: The campaign, which OpenAI calls Peer Review, saw the group prompt ChatGPT to generate sales pitches for a program those documents suggest was designed to monitor anti-Chinese sentiment on X, Facebook, YouTube, Instagram and other platforms. The operation appears to have been particularly interested in spotting calls for protests against human rights violations in China, with the intent of sharing those insights with the country's authorities.

"This network consisted of ChatGPT accounts that operated in a time pattern consistent with mainland Chinese business hours, prompted our models in Chinese, and used our tools with a volume and variety consistent with manual prompting, rather than automation," said OpenAI. "The operators used our models to proofread claims that their insights had been sent to Chinese embassies abroad, and to intelligence agents monitoring protests in countries including the United States, Germany and the United Kingdom."

According to Ben Nimmo, a principal investigator with OpenAI, this was the first time the company had uncovered an AI tool of this kind. "Threat actors sometimes give us a glimpse of what they are doing in other parts of the internet because of the way they use our AI models," Nimmo told The New York Times. Much of the code for the surveillance tool appears to have been based on an open-source version of one of Meta's Llama models. The group also appears to have used ChatGPT to generate an end-of-year performance review where it claims to have written phishing emails on behalf of clients in China.

AI

DeepSeek To Share Some AI Model Code (reuters.com) 17

Chinese startup DeepSeek will make its models' code publicly available, it said on Friday, doubling down on its commitment to open-source artificial intelligence. From a report: The company said in a post on social media platform X that it will open source 5 code repositories next week, describing the move as "small but sincere progress" that it will share "with full transparency."

"These humble building blocks in our online service have been documented, deployed and battle-tested in production," the post said. DeepSeek rattled the global AI industry last month when it released its open-source R1 reasoning model, which rivaled Western systems in performance while being developed at a lower cost.

Piracy

Meta Claims Torrenting Pirated Books Isn't Illegal Without Proof of Seeding (arstechnica.com) 192

An anonymous reader quotes a report from Ars Technica: Just because Meta admitted to torrenting a dataset of pirated books for AI training purposes, that doesn't necessarily mean that Meta seeded the file after downloading it, the social media company claimed in a court filing (PDF) this week. Evidence instead shows that Meta "took precautions not to 'seed' any downloaded files," Meta's filing said. Seeding refers to sharing a torrented file after the download completes, and because there's allegedly no proof of such "seeding," Meta insisted that authors cannot prove Meta shared the pirated books with anyone during the torrenting process.

[...] Meta ... is hoping to convince the court that torrenting is not in and of itself illegal, but is, rather, a "widely-used protocol to download large files." According to Meta, the decision to download the pirated books dataset from pirate libraries like LibGen and Z-Library was simply a move to access "data from a 'well-known online repository' that was publicly available via torrents." To defend its torrenting, Meta has basically scrubbed the word "pirate" from the characterization of its activity. The company alleges that authors can't claim that Meta gained unauthorized access to their data under CDAFA, California's Comprehensive Computer Data Access and Fraud Act. Instead, all they can claim is that "Meta allegedly accessed and downloaded datasets that Plaintiffs did not create, containing the text of published books that anyone can read in a public library, from public websites Plaintiffs do not operate or own."

While Meta may claim there's no evidence of seeding, there is some testimony that might be compelling to the court. Previously, a Meta executive in charge of project management, Michael Clark, had testified (PDF) that Meta allegedly modified torrenting settings "so that the smallest amount of seeding possible could occur," which seems to support authors' claims that some seeding occurred. And an internal message (PDF) from Meta researcher Frank Zhang appeared to show that Meta allegedly tried to conceal the seeding by not using Facebook servers while downloading the dataset to "avoid" the "risk" of anyone "tracing back the seeder/downloader" from Facebook servers. Once this information came to light, authors asked the court for a chance to depose Meta executives again, alleging that new facts "contradict prior deposition testimony."
Meta has been "silent so far" on claims about sharing data while "leeching" (downloading), but told the court it plans to fight the seeding claims at summary judgment, notes Ars.
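The leeching/seeding distinction the parties are arguing over can be stated precisely: a BitTorrent peer is "leeching" while it still lacks pieces of the file, and "seeding" once it holds a complete copy and continues to upload. A minimal illustrative model (not any real client's API; the `Peer` class and its fields are invented for illustration):

```python
# Illustrative model of the BitTorrent states at issue: a peer "leeches"
# while downloading and "seeds" once it has the full file but keeps uploading.
from dataclasses import dataclass

@dataclass
class Peer:
    pieces_have: int    # pieces of the torrent this peer currently holds
    pieces_total: int   # pieces in the complete file
    uploading: bool     # whether the client is serving pieces to others

    @property
    def complete(self) -> bool:
        return self.pieces_have >= self.pieces_total

    @property
    def state(self) -> str:
        if not self.complete:
            return "leeching"  # still downloading (may also upload pieces)
        return "seeding" if self.uploading else "stopped"

# A client configured to stop uploading on completion never reaches "seeding":
downloader = Peer(pieces_have=1024, pieces_total=1024, uploading=False)
print(downloader.state)  # stopped
```

This is why "no proof of seeding" is a narrower claim than "no distribution": a leecher can still upload the pieces it already has while the download is in progress.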

Censorship

FTC Launches Broad Tech 'Censorship' Probe Targeting Meta, Uber (yahoo.com) 201

The U.S. Federal Trade Commission has launched an inquiry into potential "censorship" by technology platforms ranging from Meta to Uber, marking an escalation in scrutiny of content moderation practices. FTC Chair Andrew Ferguson called for public comment on what he termed "Big Tech censorship," describing it as "un-American" and "potentially illegal."

The broad probe could examine social media, video sharing, ride-sharing and event planning services. The announcement follows long-standing Republican claims that conservative viewpoints face discrimination on social media platforms.

EU

WhatsApp Faces Tougher EU Rules As Users Top 45 Million (msn.com) 38

Meta's WhatsApp messaging service has surpassed 45 million users on its open channels, earning the designation of a "Very Large Online Platform" under the EU's Digital Services Act. Bloomberg reports: WhatsApp's open channels, which are feeds affiliated with news outlets or public figures that under the DSA are comparable to a social network, averaged about 46.8 million monthly average users in the second half of 2024, Meta said in a filing on Feb. 14 that hasn't previously been reported. [...] The DSA content moderation rulebook imposes stricter requirements on very large online platforms, defined as those whose EU-based monthly active users exceed 45 million. Users of WhatsApp's core messaging feature do not count toward the designation under the DSA.

The commission would still need to rule that WhatsApp should be included in the more regulated tier. Under the DSA, very large online platforms must carry out risk assessments on the spread of illegal or harmful content, and put in place a mitigation strategy. Fines under the DSA can reach as much as 6% of a company's annual global sales. The DSA requires platforms to disclose user numbers every six months. Messaging service Telegram also published an update this week, saying that monthly EU users of its public channels are "significantly fewer than 45 million."
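The two numbers doing the work here are the 45 million user threshold and the 6% revenue ceiling on fines. A quick sketch of that arithmetic (the revenue figure used in the test is purely hypothetical; designation itself remains a Commission decision):

```python
# The DSA figures quoted above: the VLOP threshold and the fine ceiling.
VLOP_THRESHOLD = 45_000_000  # EU monthly active users that must be exceeded
FINE_RATE = 0.06             # fines can reach 6% of annual global sales

def is_vlop_candidate(eu_monthly_active_users: int) -> bool:
    """True if the user count exceeds the DSA's designation threshold."""
    return eu_monthly_active_users > VLOP_THRESHOLD

def max_fine(annual_global_sales: float) -> float:
    """Upper bound on a DSA fine for a company with the given revenue."""
    return FINE_RATE * annual_global_sales

# WhatsApp's channels averaged ~46.8 million monthly users in H2 2024,
# which clears the 45 million bar:
print(is_vlop_candidate(46_800_000))  # True
```

Note that the threshold is strictly "exceed 45 million," so a platform sitting exactly at 45 million would not qualify.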

Software

'Uber For Armed Guards' Rushes To Market 72

An anonymous reader quotes a report from Gizmodo: Protector, an app that lets you book armed goons the same way you'd call for an Uber, is having a viral moment. The app started doing the rounds on social media after consultant Nikita Bier posted about it on X. Protector lets the user book armed guards on demand. Right now it's only available in NYC and LA. According to its marketing, every guard is either "active duty or retired law enforcement and military." Every booking comes with a motorcade and users get to select the number of Escalades that'll be joining them as well as the uniforms their hired goons will wear.

Protector is currently "#7 in Travel" on Apple's App Store. It's not available for people who use Android devices. [...] The marketing for Protector, which lives on its X account, is surreal. A series of robust and barrel-chested men in ill-fitting black suits deliver their credentials to the camera while sitting in front of a black background. They're all operators. They describe careers in SWAT teams and being deployed to war zones. They show vanity shots of themselves kitted out in operator gear. All of them have a red lapel pin bearing the symbol of Protector.
If the late UnitedHealthcare CEO had used Protector, he might still be alive today, suggests Protector in its marketing materials. A video on X shows "several fantasy versions of the assassination where a Protector is on hand to prevent the assassin from killing the CEO," reports Gizmodo.

The app is a product from parent company Protector Security Solutions, which was founded by Nick Sarath, a former product designer at Meta.

Technology

Chase Will Soon Block Zelle Payments To Sellers on Social Media (bleepingcomputer.com) 58

An anonymous reader shares a report: JPMorgan Chase Bank (Chase) will soon start blocking Zelle payments to social media contacts to combat a significant rise in online scams utilizing the service for fraud.

Zelle is a highly popular digital payments network that allows users to transfer money quickly and securely between bank accounts. It is also integrated into the mobile apps of many banks in the United States, allowing for almost instant transfers without requiring cash or checks but lacking one crucial feature: purchase protection.

Social Networks

Are Technologies of Connection Tearing Us Apart? (lareviewofbooks.org) 88

Nicholas Carr wrote The Shallows: What the Internet Is Doing to Our Brains. But his new book looks at how social media and digital communication technologies "are changing us individually and collectively," writes the Los Angeles Review of Books.

The book's title? Superbloom: How Technologies of Connection Tear Us Apart. But if these systems are indeed tearing us apart, the reasons are neither obvious nor simple. Carr suggests that this isn't really about the evil behavior of our tech overlords but about how we have "been telling ourselves lies about communication — and about ourselves.... Well before the net came along," says Carr, "[the] evidence was telling us that flooding the public square with more information from more sources was not going to open people's minds or engender more thoughtful discussions. It wasn't even going to make people better informed...."

At root, we're the problem. Our minds don't simply distill useful knowledge from a mass of raw data. They use shortcuts, rules of thumb, heuristic hacks — which is how we were able to think fast enough to survive on the savage savanna. We pay heed, for example, to what we experience most often. "Repetition is, in the human mind, a proxy for facticity," says Carr. "What's true is what comes out of the machine most often...." Reality can't compete with the internet's steady diet of novelty and shallow, ephemeral rewards. The ease of the user interface, congenial even to babies, creates no opportunity for what writer Antón Barba-Kay calls "disciplined acculturation."

Not only are these technologies designed to leverage our foibles, but we are also changed by them, as Carr points out: "We adapt to technology's contours as we adapt to the land's and the climate's." As a result, by designing technology, we redesign ourselves. "In engineering what we pay attention to, [social media] engineers [...] how we talk, how we see other people, how we experience the world," Carr writes. We become dislocated, abstracted: the self must itself be curated in memeable form. "Looking at screens made me think in screens," writes poet Annelyse Gelman. "Looking at pixels made me think in pixels...."

That's not to say that we can't have better laws and regulations, checks and balances. One suggestion is to restore friction into these systems. One might, for instance, make it harder to unreflectively spread lies by imposing small transactional costs, as has been proposed to ease the pathologies of automated market trading. An option Carr doesn't mention is to require companies to perform safety studies on their products, as we demand of pharmaceutical companies. Such measures have already been proposed for AI. But Carr doubts that increasing friction will make much difference. And placing more controls on social media platforms raises free speech concerns... We can't change or constrain the tech, says Carr, but we can change ourselves. We can choose to reject the hyperreal for the material. We can follow Samuel Johnson's refutation of immaterialism by "kicking the stone," reminding ourselves of what is real.

Social Networks

Despite Plans for AI-Powered Search, Reddit's Stock Fell 14% This Week (yahoo.com) 55

"Reddit Answers" uses generative AI to answer questions using what past Redditors have posted. Announced in December, the feature will now be integrated into Reddit's search results, reports TechCrunch, with Reddit's CEO saying the idea has "incredible monetization potential."

And yet Reddit's stock fell 14% this week. CNBC's headline? "Reddit shares plunge after Google algorithm change contributes to miss in user numbers." A Google search algorithm change caused some "volatility" with user growth in the fourth quarter, but the company's search-related traffic has since recovered in the first quarter, Reddit CEO Steve Huffman said in a letter to shareholders. "What happened wasn't unusual — referrals from search fluctuate from time to time, and they primarily affect logged-out users," Huffman wrote. "Our teams have navigated numerous algorithm updates and did an excellent job adapting to these latest changes effectively...." Reddit has said it is working to convince logged-out users to create accounts as logged-in users, which are more lucrative for its business.
As Yahoo Finance once pointed out, Reddit knew this day would come, acknowledging in its IPO filing that "changes in internet search engine algorithms and dynamics could have a negative impact on traffic for our website and, ultimately, our business." And in the last three months of 2024, Reddit's daily active users dropped, Yahoo Finance reported this week. But logged-in users increased by 400,000, while logged-out users dropped by 600,000 (the first drop in almost two years).

MarketWatch notes that analyst Josh Beck sees this as a buying opportunity for Reddit's stock: Beck pointed to comments from Reddit's management regarding a sharp recovery in daily active unique users. That was likely driven by Google benefiting from deeper Reddit crawling, by the platform uncollapsing comments in search results and by a potential benefit from spam-reduction algorithm updates, according to the analyst. "While the report did not clear our anticipated bar, we walk away encouraged by international upside," he wrote.
AI

PIN AI Launches Mobile App Letting You Make Your Own Personalized, Private AI Model (venturebeat.com) 13

An anonymous reader quotes a report from VentureBeat: A new startup, PIN AI (not to be confused with Humane's poorly reviewed hardware device, the AI Pin), has emerged from stealth to launch its first mobile app, which lets a user select an underlying open-source AI model that runs directly on their smartphone (iOS/Apple iPhone and Google Android supported) and remains private and totally customized to their preferences. Built with a decentralized infrastructure that prioritizes privacy, PIN AI aims to challenge big tech's dominance over user data by ensuring that personal AI serves individuals -- not corporate interests. Founded by AI and blockchain experts from Columbia, MIT and Stanford, PIN AI is led by Davide Crapis, Ben Wu and Bill Sun, who bring deep experience in AI research, large-scale data infrastructure and blockchain security. [...]

PIN AI introduces an alternative to centralized AI models that collect and monetize user data. Unlike cloud-based AI controlled by large tech firms, PIN AI's personal AI runs locally on user devices, allowing for secure, customized AI experiences without third-party surveillance. At the heart of PIN AI is a user-controlled data bank, which enables individuals to store and manage their personal information while allowing developers access to anonymized, multi-category insights -- ranging from shopping habits to investment strategies. This approach ensures that AI-powered services can benefit from high-quality contextual data without compromising user privacy. [...] The new mobile app launched in the U.S. and multiple regions also includes key features such as:

- The "God model" (guardian of data): Helps users track how well their AI understands them, ensuring it aligns with their preferences.
- Ask PIN AI: A personalized AI assistant capable of handling tasks like financial planning, travel coordination and product recommendations.
- Open-source integrations: Users can connect apps like Gmail, social media platforms and financial services to their personal AI, training it to better serve them without exposing data to third parties.
Davide Crapis, co-founder of PIN AI, told VentureBeat that the app currently supports several open-source AI models, including small versions of DeepSeek and Meta's Llama. "With our app, you have a personal AI that is your model," Crapis added. "You own the weights, and it's completely private, with privacy-preserving fine-tuning."
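The "user-controlled data bank" idea described above — raw records stay with the user while developers see only coarse, anonymized category-level insights — can be sketched in miniature. The class and method names below are invented for illustration and are not PIN AI's actual API.

```python
# Hypothetical sketch of a user-controlled data bank: raw records
# are kept private, and only per-category counts are exposed to
# third parties. Names are invented; this is not PIN AI's real API.

from collections import Counter

class DataBank:
    def __init__(self):
        self._records = []  # raw data, never exposed directly

    def add(self, category, detail):
        self._records.append((category, detail))

    def insights(self):
        """Expose only per-category counts, never the details."""
        return dict(Counter(cat for cat, _ in self._records))

bank = DataBank()
bank.add("shopping", "bought running shoes")
bank.add("shopping", "bought a coffee grinder")
bank.add("investing", "opened an index fund")
print(bank.insights())  # {'shopping': 2, 'investing': 1}
```

The design point is the asymmetry: the user's device holds the detailed records, and outside services only ever see the aggregate view.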

You can sign up for early access to the PIN AI app here.
Linux

Lead Asahi Linux Developer Quits Days After Leaving Kernel Maintainer Role (theregister.com) 68

Hector Martin has resigned as the project lead of Asahi Linux, weeks after stepping down from his role as a Linux kernel maintainer for Apple ARM support. His departure from Asahi follows a contentious exchange with Linus Torvalds over development processes and social media advocacy. After Martin quit kernel maintenance earlier this month, the conflict escalated when he suggested that "shaming on social media" might be necessary to effect change.

Torvalds sharply rejected this approach, stating that "social media brigading just makes me not want to have anything at all to do with your approach" and suggested that Martin himself might be the problem. In his final resignation announcement from Asahi, Martin wrote: "I no longer have any faith left in the kernel development process or community management approach."

The dispute reflects deeper tensions in the Linux kernel community, particularly around the integration of Rust code. It follows the August departure of another key Rust for Linux maintainer, Wedson Almeida Filho from Microsoft. According to Sonatype's research, more than 300,000 open source projects have slowed or halted updates since 2020.

Slashdot Top Deals