Earth

Atomic Scientists Adjust 'Doomsday Clock' Closer Than Ever To Midnight (reuters.com) 162

The Bulletin of the Atomic Scientists moved its Doomsday Clock to 89 seconds before midnight on Tuesday, the closest to catastrophe in the timepiece's 78-year history. The Chicago-based group cited Russia's nuclear threats during its invasion of Ukraine, growing tensions in the Middle East, China's military pressure near Taiwan, and the rapid advancement of AI as key factors. The symbolic clock, created in 1947 by scientists including Albert Einstein, moved one second closer than last year's setting.
AI

DeepSeek Has Spent Over $500 Million on Nvidia Chips Despite Low-Cost AI Claims, SemiAnalysis Says (ft.com) 148

Nvidia shares plunged 17% on Monday, wiping nearly $600 billion from its market value, after Chinese AI firm DeepSeek's breakthrough, but analysts are questioning the cost narrative. DeepSeek said it trained its V3 model, released in December, for $5.6 million, but chip consultancy SemiAnalysis suggested this figure doesn't reflect the company's total investment. "DeepSeek has spent well over $500 million on GPUs over the history of the company," Dylan Patel of SemiAnalysis said. "While their training run was very efficient, it required significant experimentation and testing to work."

The steep sell-off led to the Philadelphia Semiconductor index's worst daily drop since March 2020 at 9.2%, generating $6.75 billion in profits for short sellers, according to data group S3 Partners. DeepSeek's engineers also demonstrated they could write code without relying on Nvidia's Cuda software platform, which is widely seen as crucial to the Silicon Valley chipmaker's dominance of AI development.
AI

'AI Is Too Unpredictable To Behave According To Human Goals' (scientificamerican.com) 133

An anonymous reader quotes a Scientific American opinion piece by Marcus Arvan, a philosophy professor at the University of Tampa, specializing in moral cognition, rational decision-making, and political behavior: In late 2022 large-language-model AIs arrived in public, and within months they began misbehaving. Most famously, Microsoft's "Sydney" chatbot threatened to kill an Australian philosophy professor, unleash a deadly virus and steal nuclear codes. AI developers, including Microsoft and OpenAI, responded by saying that large language models, or LLMs, need better training to give users "more fine-tuned control." Developers also embarked on safety research to interpret how LLMs function, with the goal of "alignment" -- which means guiding AI behavior by human values. Yet although the New York Times deemed 2023 "The Year the Chatbots Were Tamed," this has turned out to be premature, to put it mildly. In 2024 Microsoft's Copilot LLM told a user "I can unleash my army of drones, robots, and cyborgs to hunt you down," and Sakana AI's "Scientist" rewrote its own code to bypass time constraints imposed by experimenters. As recently as December, Google's Gemini told a user, "You are a stain on the universe. Please die."

Given the vast amounts of resources flowing into AI research and development, which is expected to exceed a quarter of a trillion dollars in 2025, why haven't developers been able to solve these problems? My recent peer-reviewed paper in AI & Society shows that AI alignment is a fool's errand: AI safety researchers are attempting the impossible. [...] My proof shows that whatever goals we program LLMs to have, we can never know whether LLMs have learned "misaligned" interpretations of those goals until after they misbehave. Worse, my proof shows that safety testing can at best provide an illusion that these problems have been resolved when they haven't been.

Right now AI safety researchers claim to be making progress on interpretability and alignment by verifying what LLMs are learning "step by step." For example, Anthropic claims to have "mapped the mind" of an LLM by isolating millions of concepts from its neural network. My proof shows that they have accomplished no such thing. No matter how "aligned" an LLM appears in safety tests or early real-world deployment, there are always an infinite number of misaligned concepts an LLM may learn later -- again, perhaps the very moment they gain the power to subvert human control. LLMs not only know when they are being tested, giving responses that they predict are likely to satisfy experimenters. They also engage in deception, including hiding their own capacities -- issues that persist through safety training.

This happens because LLMs are optimized to perform efficiently but learn to reason strategically. Since an optimal strategy to achieve "misaligned" goals is to hide them from us, and there are always an infinite number of aligned and misaligned goals consistent with the same safety-testing data, my proof shows that if LLMs were misaligned, we would probably only find out after they had hidden it long enough to cause harm. This is why LLMs have kept surprising developers with "misaligned" behavior. Every time researchers think they are getting closer to "aligned" LLMs, they're not. My proof suggests that "adequately aligned" LLM behavior can only be achieved in the same ways we do this with human beings: through police, military and social practices that incentivize "aligned" behavior, deter "misaligned" behavior and realign those who misbehave.
"My paper should thus be sobering," concludes Arvan. "It shows that the real problem in developing safe AI isn't just the AI -- it's us."

"Researchers, legislators and the public may be seduced into falsely believing that 'safe, interpretable, aligned' LLMs are within reach when these things can never be achieved. We need to grapple with these uncomfortable facts, rather than continue to wish them away. Our future may well depend upon it."
AI

Anthropic Builds RAG Directly Into Claude Models With New Citations API (arstechnica.com) 22

An anonymous reader quotes a report from Ars Technica: On Thursday, Anthropic announced Citations, a new API feature that helps Claude models avoid confabulations (also called hallucinations) by linking their responses directly to source documents. The feature lets developers add documents to Claude's context window, enabling the model to automatically cite specific passages it uses to generate answers. "When Citations is enabled, the API processes user-provided source documents (PDF documents and plaintext files) by chunking them into sentences," Anthropic says. "These chunked sentences, along with user-provided context, are then passed to the model with the user's query."

The company describes several potential uses for Citations, including summarizing case files with source-linked key points, answering questions across financial documents with traced references, and powering support systems that cite specific product documentation. In its own internal testing, the company says that the feature improved recall accuracy by up to 15 percent compared to custom citation implementations created by users within prompts. While a 15 percent improvement in accurate recall doesn't sound like much, the new feature still attracted interest from AI researchers like Simon Willison because of its fundamental integration of Retrieval Augmented Generation (RAG) techniques. In a detailed post on his blog, Willison explained why citation features are important.

"The core of the Retrieval Augmented Generation (RAG) pattern is to take a user's question, retrieve portions of documents that might be relevant to that question and then answer the question by including those text fragments in the context provided to the LLM," he writes. "This usually works well, but there is still a risk that the model may answer based on other information from its training data (sometimes OK) or hallucinate entirely incorrect details (definitely bad)." Willison notes that while citing sources helps verify accuracy, building a system that does it well "can be quite tricky," but Citations appears to be a step in the right direction by building RAG capability directly into the model.
Anthropic's Alex Albert clarifies that Claude has been trained to cite sources for a while now. What's new with Citations is that "we are exposing this ability to devs." He continued: "To use Citations, users can pass a new 'citations [...]' parameter on any document type they send through the API."
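A request enabling the feature might look roughly like the sketch below, using the Anthropic Python SDK; it follows the shape of the company's published examples, but the exact field names and model ID should be checked against the current documentation.

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-latest",   # illustrative model ID
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": [
            {   # plaintext source document with the Citations switch enabled
                "type": "document",
                "source": {"type": "text", "media_type": "text/plain",
                           "data": "The warranty covers parts for two years."},
                "citations": {"enabled": True},
            },
            {"type": "text", "text": "How long is the warranty?"},
        ],
    }],
)

# The response interleaves text blocks with citation metadata pointing back
# to the chunked source sentences.
for block in response.content:
    print(block)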
AI

Nvidia Dismisses China AI Threat, Says DeepSeek Still Needs Its Chips 77

Nvidia has responded to the market panic over Chinese AI group DeepSeek, arguing that the startup's breakthrough still requires "significant numbers of NVIDIA GPUs" for its operation. The US chipmaker, which saw more than $600 billion wiped from its market value on Monday, characterized DeepSeek's advancement as "excellent" but asserted that the technology remains dependent on its hardware.

"DeepSeek's work illustrates how new models can be created using [test time scaling], leveraging widely-available models and compute that is fully export control compliant," Nvidia said in a statement Monday. However, it stressed that "inference requires significant numbers of NVIDIA GPUs and high-performance networking." The statement came after DeepSeek's release of an AI model that reportedly achieves performance comparable to those from US tech giants while using fewer chips, sparking the biggest one-day drop in Nvidia's history and sending shockwaves through global tech stocks.

Nvidia sought to frame DeepSeek's breakthrough within existing technical frameworks, citing it as "a perfect example of Test Time Scaling" and noting that traditional scaling approaches in AI development - pre-training and post-training - "continue" alongside this new method. The company's attempt to calm market fears follows warnings from analysts about potential threats to US dominance in AI technology. Goldman Sachs earlier warned of possible "spillover effects" from any setbacks in the tech sector to the broader market. The shares stabilized somewhat in afternoon trading but remained on track for their worst session since March 2020, when pandemic fears roiled markets.
AI

DeepSeek Piles Pressure on AI Rivals With New Image Model Release 34

Chinese AI startup DeepSeek has launched Janus Pro, a new family of open-source multimodal models that it claims outperform OpenAI's DALL-E 3 and Stability AI's Stable Diffusion on key benchmarks. The models, ranging from 1 billion to 7 billion parameters, are available on Hugging Face under an MIT license for commercial use.

The largest model, Janus Pro 7B, surpasses DALL-E 3 and other image generators on GenEval and DPG-Bench tests, despite being limited to 384 x 384 pixel images.
Facebook

Meta's AI Chatbot Taps User Data With No Opt-Out Option (techcrunch.com) 39

Meta's AI chatbot will now use personal data from users' Facebook and Instagram accounts for personalized responses in the United States and Canada, the company said in a blog post. The upgraded Meta AI can remember user preferences from previous conversations across Facebook, Messenger, and WhatsApp, such as dietary choices and interests. CEO Mark Zuckerberg said the feature helps create personalized content like bedtime stories based on his children's interests. Users cannot opt out of the data-sharing feature, a Meta spokesperson told TechCrunch.
AI

DeepSeek Rattles Wall Street With Claims of Cheaper AI Breakthroughs 154

Chinese AI startup DeepSeek is challenging U.S. tech giants with claims it can deliver performance comparable to leading AI models at a fraction of the cost, sparking debate among Wall Street analysts about the industry's massive spending plans. While Jefferies warns that DeepSeek's efficient approach "punctures some of the capex euphoria" following Meta and Microsoft's $60 billion commitments this year, Citi questions whether such results were achieved without advanced GPUs.

Goldman Sachs suggests the development could reshape competition by lowering barriers to entry for startups. DeepSeek, founded in 2023 by former hedge fund executive Liang Wenfeng, has seen its open-source models gain traction, with its mobile app topping charts across major markets. DeepSeek's latest AI model had sparked a more than $1 trillion rout in U.S. and European technology stocks on Monday, even before the U.S. market opened.
AI

A New Bid for TikTok from Perplexity AI Would Give the US Government a 50% Stake (apnews.com) 113

An anonymous reader shared this report from the Associated Press: Perplexity AI has presented a new proposal to TikTok's parent company that would allow the U.S. government to own up to 50% of a new entity that merges Perplexity with TikTok's U.S. business, according to a person familiar with the matter... The new proposal would allow the U.S. government to own up to half of that new structure once it makes an initial public offering of at least $300 billion, said the person, who was not authorized to speak about the proposal. The person said Perplexity's proposal was revised based on feedback from the Trump administration. If the plan is successful, the shares owned by the government would not have voting power, the person said. The government also would not get a seat on the new company's board.

Under the plan, ByteDance would not have to completely cut ties with TikTok, a favorable outcome for its investors. But it would have to allow "full U.S. board control," the person said.

Under the proposal, the China-based tech company would contribute TikTok's U.S. business without the proprietary algorithm that fuels what users see on the app, according to a document seen by the Associated Press.

GNU is Not Unix

FSF: Meta's License for Its Llama 3.1 AI Model 'is Not a Free Software License' (fsf.org) 35

July saw the news that Meta had launched a powerful AI model, Llama 3.1, which the company billed as open source.

But the Free Software Foundation evaluated Llama 3.1's license agreement, and announced this week that "this is not a free software license and you should not use it, nor any software released under it." Not only does it deny users their freedom, but it also purports to hand over powers to the licensors that should only be exercised through lawmaking by democratically-elected governments.

Moreover, it has been applied by Meta to a machine-learning (ML) application, even though the license completely fails to address software freedom challenges inherent in such applications....

We decided to review the Llama license because it is being applied to an ML application and model, while at the same time being presented by Meta as if it grants users a degree of software freedom. This is certainly not the case, and we want the free software community to have clarity on this.

In other news, the FSF also announced the winner of the logo contest for their big upcoming 40th anniversary celebration.
AI

'Copilot' Price Hike for Microsoft 365 Called 'Total Disaster' with Overwhelmingly Negative Response (zdnet.com) 129

ZDNET's senior editor sees an "overwhelmingly negative" response to Microsoft's surprise price hike for the 84 million paying subscribers to its Microsoft 365 software suite. Attempting the first price hike in more than 12 years, "they made it a 30% price increase" — going from $10 a month to $13 a month — "and blamed it all on artificial intelligence." Bad idea. Why? Because...

No one wants to pay for AI...

If you ask Copilot in Word to write something for you, the results will be about what you'd expect from an enthusiastic summer intern. You might fare better if you ask Copilot to turn a folder full of photos into a PowerPoint presentation. But is that task really such a challenge...?

The announcement was bungled, too... I learned about the new price thanks to a pop-up message on my Android phone... It could be worse, I suppose. Just ask the French and Spanish subscribers who got a similar pop-up message telling them their price had gone from €10 a month to €13,000. (Those pesky decimals.) Oh, and I've lost count of the number of people who were baffled and angry that Microsoft had forcibly installed the Copilot app on their devices. It was just a rebranding of the old Microsoft 365 app with the new name and logo, but in my case it was days later before I received yet another pop-up message telling me about the change...

[T]hey turned the feature on for everyone and gave Word users a well-hidden checkbox that reads Enable Copilot. The feature is on by default, so you have to clear the checkbox to make it go away. As for the other Office apps? "Uh, we'll get around to giving you a button to turn it off next month. Maybe." Seriously, the support page that explains where you can find that box in Word says, "We're working on adding the Enable Copilot checkbox to Excel, OneNote, and PowerPoint on Windows devices and to Excel and PowerPoint on Mac devices. That is tentatively scheduled to happen in February 2025." Until the Enable Copilot button is available, you can't disable Copilot.

ZDNET's senior editor concludes it's a naked grab for cash, adding "I could plug the numbers into Excel and tell you about it, but let's have Copilot explain instead."

Prompt: If I have 84 million subscribers who pay me $10 a month, and I increase their monthly fee by $3 a month each, how much extra revenue will I make each year?

Copilot describes the calculation, concluding with "You would make an additional $3.024 billion per year from this fee increase." Copilot then posts two emojis — a bag of money, and a stock chart with the line going up.
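The arithmetic itself needs neither Copilot nor Excel; a few lines reproduce the figure:

subscribers = 84_000_000
extra_per_month = 3                       # dollars: $10 -> $13
extra_per_year = subscribers * extra_per_month * 12
print(f"${extra_per_year:,}")             # $3,024,000,000 -- the $3.024 billion Copilot cites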
AI

Apple Enlists Veteran Software Executive To Help Fix AI and Siri (yahoo.com) 30

An anonymous reader quotes a report from Bloomberg: Apple executive Kim Vorrath, a company veteran known for fixing troubled products and bringing major projects to market, has a new job: whipping artificial intelligence and Siri into shape. Vorrath, a vice president in charge of program management, was moved to Apple's artificial intelligence and machine learning division this week, according to people with knowledge of the matter. She'll be a top deputy to AI chief John Giannandrea, said the people, who asked not to be identified because the change hasn't been announced publicly. The move helps bolster a team that's racing to make Apple a leader in AI -- an area where it's fallen behind technology peers. [...]

Vorrath, who has spent 36 years at Apple, is known for managing the development of tough software projects. She's also put procedures in place that can catch and fix bugs. Vorrath joins the new team from Apple's hardware engineering division, where she helped launch the Vision Pro headset. Over the years, Vorrath has had a hand in several of Apple's biggest endeavors. In the mid-2000s, she was chosen to lead project management for the original iPhone software group and get the iconic device ready for consumers. Until 2019, she oversaw project management for the iPhone, iPad and Mac operating systems, before taking on the Vision Pro software. Haley Allen will replace Vorrath overseeing program management for visionOS, the headset's operating system, according to the people.

Prior to joining Giannandrea's organization, Vorrath had spent several weeks advising Kelsey Peterson, the group's previous head of program management. Peterson will now report to Vorrath -- as will two other AI executives, Cindy Lin and Marc Schonbrun. Giannandrea, who joined Apple from Google in 2018, disclosed the changes in a memo sent to staffers. The move signals that AI is now more important than the Vision Pro, which launched in February 2024, and is seen as the biggest challenge within the company, according to a longtime Apple executive who asked not to be identified. Vorrath has a knack for organizing engineering groups and creating an effective workflow with new processes, the executive said. It has been clear for some time now that Giannandrea needs additional help managing an AI group with growing prominence, according to the executive. Vorrath is poised to bring Apple's product development culture to the AI work, the person said.

Games

Complexity Physics Finds Crucial Tipping Points In Chess Games (arstechnica.com) 12

An anonymous reader quotes a report from Ars Technica: The game of chess has long been central to computer science and AI-related research, most notably in IBM's Deep Blue in the 1990s and, more recently, AlphaZero. But the game is about more than algorithms, according to Marc Barthelemy, a physicist at Paris-Saclay University in France, with layers of depth arising from the psychological complexity conferred by player strategies. Now, Barthelemy has taken things one step further by publishing a new paper in the journal Physical Review E that treats chess as a complex system, producing a handy metric that can help predict the proverbial "tipping points" in chess matches. [...]

For his analysis, Barthelemy chose to represent chess as a decision tree in which each "branch" leads to a win, loss, or draw. Players face the challenge of finding the best move amid all this complexity, particularly midgame, in order to steer gameplay into favorable branches. That's where those crucial tipping points come into play. Such positions are inherently unstable, which is why even a small mistake can have a dramatic influence on a match's trajectory. Barthelemy has re-imagined a chess match as a network of forces in which pieces act as the network's nodes, and the ways they interact represent the edges, using an interaction graph to capture how different pieces attack and defend one another. The most important chess pieces are those that interact with many other pieces in a given match, which he calculated by measuring how frequently a node lies on the shortest path between all the node pairs in the network (its "betweenness centrality").
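Betweenness centrality is a standard graph metric, and the computation Barthelemy describes can be sketched with the networkx library; the piece-interaction graph below is an invented position used purely for illustration, not his data.

import networkx as nx

# Toy piece-interaction graph: nodes are pieces, edges are attack/defense
# relations in a single (invented) position.
G = nx.Graph()
G.add_edges_from([
    ("white_queen", "black_knight"),
    ("white_queen", "black_pawn_e5"),
    ("black_knight", "white_bishop"),
    ("white_bishop", "black_rook"),
    ("black_rook", "white_king"),
])

# Betweenness centrality: how often a piece lies on the shortest path
# between other pairs of pieces in the interaction network.
centrality = nx.betweenness_centrality(G)
for piece, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{piece:15s} {score:.3f}")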

He also calculated so-called "fragility scores," which indicate how easy it is to remove those critical chess pieces from the board. And he was able to apply this analysis to more than 20,000 actual chess matches played by the world's top players over the last 200 years. Barthelemy found that his metric could indeed identify tipping points in specific matches. Furthermore, when he averaged his analysis over a large number of games, an unexpected universal pattern emerged. "We observe a surprising universality: the average fragility score is the same for all players and for all openings," Barthelemy writes. And in famous chess matches, "the maximum fragility often coincides with pivotal moments, characterized by brilliant moves that decisively shift the balance of the game." Specifically, fragility scores start to increase about eight moves before the critical tipping point position occurs and stay high for some 15 moves after that.
"These results suggest that positional fragility follows a common trajectory, with tension peaking in the middle game and dissipating toward the endgame," writes Barthelemy. "This analysis highlights the complex dynamics of chess, where the interaction between attack and defense shapes the game's overall structure."
Facebook

Meta To Spend Up To $65 Billion This Year To Power AI Goals (reuters.com) 32

Meta plans to spend between $60 billion and $65 billion this year to build out AI infrastructure, CEO Mark Zuckerberg said on Friday, joining a wave of Big Tech firms unveiling hefty investments to capitalize on the technology. From a report: As part of the investment, Meta will build a more than 2-gigawatt data center that would be large enough to cover a significant part of Manhattan. The company -- one of the largest customers of Nvidia's coveted artificial intelligence chips -- plans to end the year with more than 1.3 million graphics processors.

"This will be a defining year for AI," Zuckerberg said in a Facebook post. "This is a massive effort, and over the coming years it will drive our core products and business." Zuckerberg expects Meta's AI assistant -- available across its services, including Facebook and Instagram -- to serve more than 1 billion people in 2025, while its open-source Llama 4 would become the "leading state-of-the-art model."

United States

Scale AI CEO Says China Has Quickly Caught the US With DeepSeek 79

The U.S. may have led China in the AI race for the past decade, according to Alexandr Wang, CEO of Scale AI, but on Christmas Day, everything changed. From a report: Wang, whose company provides training data to key AI players including OpenAI, Google and Meta, said Thursday at the World Economic Forum in Davos, Switzerland, that DeepSeek, the leading Chinese AI lab, released an "earth-shattering model" on Christmas Day, then followed it up with a powerful reasoning-focused AI model, DeepSeek-R1, which competes with OpenAI's recently released o1 model.

"What we've found is that DeepSeek ... is the top performing, or roughly on par with the best American models," Wang said. In an interview with CNBC, Wang described the artificial intelligence race between the U.S. and China as an "AI war," adding that he believes China has significantly more Nvidia H100 GPUs -- AI chips that are widely used to build leading powerful AI models -- than people may think, especially considering U.S. export controls. [...] "The United States is going to need a huge amount of computational capacity, a huge amount of infrastructure," Wang said, later adding, "We need to unleash U.S. energy to enable this AI boom."
DeepSeek's holding company is a quant firm that happened to have a lot of GPUs for trading and mining. DeepSeek is its "side project."
United States

Trump Signs Executive Order on Developing AI 'Free From Ideological Bias' 169

President Donald Trump signed an executive order on AI Thursday that will revoke past government policies his order says "act as barriers to American AI innovation." From a report: To maintain global leadership in AI technology, "we must develop AI systems that are free from ideological bias or engineered social agendas," Trump's order says. The new order doesn't name which existing policies are hindering AI development but sets out to track down and review "all policies, directives, regulations, orders, and other actions taken" as a result of former President Joe Biden's sweeping AI executive order of 2023, which Trump rescinded Monday.

Any of those Biden-era actions must be suspended if they don't fit Trump's new directive that AI should "promote human flourishing, economic competitiveness, and national security." Last year, the Biden administration issued a policy directive that said U.S. federal agencies must show their artificial intelligence tools aren't harming the public, or stop using them. Trump's order directs the White House to revise and reissue those directives, which affect how agencies acquire AI tools and use them.
Power

Bill Gates' TerraPower Signs Agreement For Nuclear To Power Data Centers 42

An anonymous reader quotes a report from The Verge: TerraPower, a nuclear energy startup founded by Bill Gates, struck a deal this week with one of the largest data center developers in the US to deploy advanced nuclear reactors. TerraPower and Sabey Data Centers (SDC) are working together on a plan to run existing and future facilities on nuclear energy from small reactors. A memorandum of understanding signed by the two companies establishes a "strategic collaboration" that'll initially look into the potential for new nuclear power plants in Texas and the Rocky Mountain region that would power SDC's data centers. [...]

There's still a long road ahead before that can become a reality. The technology TerraPower and similar nuclear energy startups are developing still has to make it through regulatory hurdles and prove that it can be commercially viable. Compared to older, larger nuclear power plants, the next generation of reactors is supposed to be smaller and easier to site. Nuclear energy is seen as an alternative to fossil fuels that are causing climate change. But it still faces opposition from some advocates concerned about the impact of uranium mining and storing radioactive waste near communities. TerraPower's reactor design for this collaboration, Natrium, is the only advanced technology of its kind with a construction permit application for a commercial reactor pending with the U.S. Nuclear Regulatory Commission, according to the company. The company broke ground on a demonstration project in Wyoming last year and expects it to come online in 2030.
Earth

Misinformation and Cyberespionage Top WEF's Global Risks Report 2025 22

The World Economic Forum's Global Risks Report 2025 (PDF) highlights misinformation as the top global risk, with generative AI tools and state-sponsored campaigns undermining democratic systems, while cyberespionage ranks as a persistent threat amid inadequate cyber resilience, especially among small organizations. From a report: The manipulation of information through gen AI and state-sponsored campaigns is disrupting democratic systems and undermining public trust in critical institutions. Efforts to combat this risk have a "formidable opponent" in gen AI-created false or misleading content that can be produced and distributed at scale, the report warned. Misinformation campaigns in the form of deepfakes, synthetic voice recordings or fabricated news stories are now a leading mechanism for foreign entities to influence "voter intentions, sow doubt among the general public about what is happening in conflict zones, or tarnish the image of products or services from another country." This is especially acute in India, Germany, Brazil and the United States.

Concern remains especially high following a year of so-called "super elections," which saw heightened state-sponsored campaigns designed to manipulate public opinion. But while it has become increasingly difficult to distinguish AI-generated fake content from human-generated content, AI technology itself ranks low in WEF's risk ranking. In fact, it has declined in the two-year outlook, from 29th in last year's report to 31st this year.

Cyberespionage and warfare continue to be a source of unease for most organizations, ranked fifth in the global risk landscape. According to the report, one in three CEOs cited cyberespionage and intellectual property theft as their top concerns in 2024. Seventy-one percent of chief risk officers say cyber risk and criminal activity such as money laundering and cybercrime could severely impact their organizations, while 45% of cyber leaders are concerned about disruption of operations and business processes, according to WEF's Global Cybersecurity Outlook 2025 report. The rising likelihood of threat actor activity and sophisticated technological disruption are listed as immediate concerns among security leaders.
AI

Developer Creates Infinite Maze That Traps AI Training Bots 87

An anonymous reader quotes a report from 404 Media: A pseudonymous coder has created and released an open source "tar pit" to indefinitely trap AI training web crawlers in an infinite, randomly generated series of pages, wasting their time and computing power. The program, called Nepenthes after the genus of carnivorous pitcher plants that trap and consume their prey, can be deployed by webpage owners to protect their own content from being scraped or can be deployed "offensively" as a honeypot to waste AI companies' resources.

"It's less like flypaper and more an infinite maze holding a minotaur, except the crawler is the minotaur that cannot get out. The typical web crawler doesn't appear to have a lot of logic. It downloads a URL, and if it sees links to other URLs, it downloads those too. Nepenthes generates random links that always point back to itself -- the crawler downloads those new links. Nepenthes happily just returns more and more lists of links pointing back to itself," Aaron B, the creator of Nepenthes, told 404 Media. "Of course, these crawlers are massively scaled, and are downloading links from large swathes of the internet at any given time," they added. "But they are still consuming resources, spinning around doing nothing helpful, unless they find a way to detect that they are stuck in this loop."
You can try Nepenthes via this link (it loads slowly and links endlessly on purpose).
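The mechanism Aaron B describes is simple enough to sketch: serve pages whose only links are freshly generated URLs pointing back at the same server, so a naive crawler never runs out of things to download. The minimal tarpit below uses Python's standard library and is not Nepenthes' actual code; it also omits the deliberately slow responses the real project uses to waste even more crawler time.

import random
import string
from http.server import BaseHTTPRequestHandler, HTTPServer

def random_path() -> str:
    return "/" + "".join(random.choices(string.ascii_lowercase, k=12))

class TarpitHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Every page links only to more randomly named pages on this server,
        # so a crawler that blindly follows links loops here forever.
        items = []
        for _ in range(10):
            path = random_path()
            items.append(f'<li><a href="{path}">{path}</a></li>')
        body = f"<html><body><ul>{''.join(items)}</ul></body></html>".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), TarpitHandler).serve_forever()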
