AI

AI Boom Gives Rise To 'GPU-as-a-Service' 35

An anonymous reader quotes a report from IEEE Spectrum: The surge of interest in AI is creating a massive demand for computing power. Around the world, companies are trying to keep up with the vast number of GPUs needed to power more and more advanced AI models. While GPUs are not the only option for running an AI model, they have become the hardware of choice due to their ability to efficiently handle multiple operations simultaneously -- a critical feature when developing deep learning models. But not every AI startup has the capital to invest in the huge numbers of GPUs now required to run a cutting-edge model. For some, it's a better deal to outsource it. This has led to the rise of a new business: GPU-as-a-Service (GPUaaS). In recent years, companies like Hyperbolic, Kinesis, Runpod, and Vast.ai have sprouted up to remotely offer their clients the needed processing power.

[...] Studies have shown that more than half of the existing GPUs are not in use at any given time. Whether we're talking personal computers or colossal server farms, a lot of processing capacity is under-utilized. What Kinesis does is identify idle compute -- both for GPUs and CPUs -- in servers worldwide and compile them into a single computing source for companies to use. Kinesis partners with universities, data centers, companies, and individuals who are willing to sell their unused computing power. Through special software installed on their servers, Kinesis detects idle processing units, preps them, and offers them to their clients for temporary use. [...] The biggest advantage of GPUaaS is economic. By removing the need to purchase and maintain the physical infrastructure, it allows companies to avoid investing in servers and IT management, and to instead put their resources toward improving their own deep learning, large language, and large vision models. It also lets customers pay for the exact amount of GPU capacity they use, saving the costs of the inevitable idle compute that would come with their own servers.
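The article doesn't describe Kinesis's software in detail, but the core idea of spotting idle accelerators on a host is straightforward. The sketch below is a minimal, hypothetical example using NVIDIA's NVML Python bindings; the utilization threshold, polling interval, and the notion of reporting idle devices upstream are assumptions for illustration, not Kinesis's implementation.

```python
# Minimal sketch of idle-GPU detection on one host, loosely inspired by the
# article's description of Kinesis-style software. Threshold, polling interval,
# and the idea of "reporting" idle devices are assumptions for illustration.
import time
import pynvml  # NVIDIA Management Library bindings (pip install nvidia-ml-py)

UTIL_THRESHOLD = 5   # percent GPU utilization below which we call a device "idle"
POLL_SECONDS = 60    # how often to sample

def find_idle_gpus():
    """Return indices of GPUs whose compute utilization is below the threshold."""
    pynvml.nvmlInit()
    try:
        idle = []
        for i in range(pynvml.nvmlDeviceGetCount()):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            util = pynvml.nvmlDeviceGetUtilizationRates(handle)
            if util.gpu < UTIL_THRESHOLD:
                idle.append(i)
        return idle
    finally:
        pynvml.nvmlShutdown()

if __name__ == "__main__":
    while True:
        print("idle GPUs:", find_idle_gpus())  # a real agent would report these upstream
        time.sleep(POLL_SECONDS)
```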
The report notes that GPUaaS is growing in profitability. "In 2023, the industry's market size was valued at US $3.23 billion; in 2024, it grew to $4.31 billion," reports IEEE. "It's expected to rise to $49.84 billion by 2032."
AI

CIA's Chatbot Stands In For World Leaders 37

The CIA has developed a chatbot to talk to virtual versions of foreign presidents and prime ministers. "Understanding leaders around the world is one of the CIA's most important jobs. Teams of analysts comb through intelligence collected by spies and publicly available information to create profiles of leaders that can predict behaviors," reports the New York Times. "A chatbot powered by artificial intelligence now helps do that work." From the report: The chatbot is part of the spy agency's drive to improve the tools available to CIA analysts and its officers in the field, and to better understand adversaries' technical advances. Core to the effort is to make it easier for companies to work with the most secretive agency. William Burns, CIA director for the past four years, prioritized improving the agency's technology and understanding of how it is used. Incoming Trump administration officials say they plan to build on those initiatives, not tear them down. [...]

The CIA has long used digital tools, spy gadgets and even AI. But with the development of new forms of AI, including the large language models that power chatbots, the agency has stepped up its investments. Making better use of AI, Burns said, is crucial to US competition with China. And better AI models have helped the agency's analysts "digest the avalanche of open-source information out there," he said. The new tools have also helped analysts process clandestinely acquired information, Burns said. New technologies developed by the agency are helping spies navigate cities in authoritarian countries where governments use AI-powered cameras to conduct constant surveillance on their population and foreign spies.
AI

Authors Seek Meta's Torrent Client Logs and Seeding Data In AI Piracy Probe (torrentfreak.com) 15

An anonymous reader quotes a report from TorrentFreak: Meta is among a long list of companies being sued for allegedly using pirated material to train its AI models. Meta has never denied using copyrighted works but stressed that it would rely on a fair use defense. However, with rightsholders in one case asking for torrent client data and 'seeding lists' for millions of books allegedly shared in public, the case now takes a geeky turn. [...] A few weeks ago, the plaintiffs asked for permission to submit a third amended complaint (PDF). After uncovering Meta's use of BitTorrent to source copyright-infringing training data from the pirate shadow library LibGen, the request was justified, they argued. Specifically, the authors say that Meta willingly used BitTorrent to download pirated books from LibGen, knowing that was legally problematic. As a result, Meta allegedly shared copies of these books with other people, as is common with the use of BitTorrent.

"By downloading through the bit torrent protocol, Meta knew it was facilitating further copyright infringement by acting as a distribution point for other users of pirated books," the amended complaint notes. "Put another way, by opting to use a bit torrent system to download LibGen's voluminous collection of pirated books, Meta 'seeded' pirated books to other users worldwide." Meta believed that the allegations weren't sufficiently new to warrant an update to the complaint. The company argued that it was already a well-known fact that it used books from these third-party sources, including LibGen. However, the authors maintained that the 'torrent' angle is novel and important enough to warrant an update. Last week, United States District Judge Vince Chhabria agreed, allowing the introduction of these new allegations. In addition to greenlighting the amended complaint, the Judge also allowed the authors to conduct further testimony on the "seeding" angle. "[E]vidence about seeding is relevant to the existing claim because it is potentially relevant to the plaintiffs' assertion of willful infringement or to Meta's fair use defense," Judge Chhabria wrote last week.

With the court recognizing the relevance of Meta's torrenting activity, the plaintiffs requested reconsideration of an earlier order, where discovery on BitTorrent-related matters was denied. Through a filing submitted last Wednesday, the plaintiffs hope to compel Meta to produce its BitTorrent logs and settings, including peer lists and seeding data. "The Order denied Plaintiffs' motion to compel production of torrenting data, including Meta's BitTorrent client, application logs, and peer lists. This data will evidence how much content Meta torrented from shadow libraries and how much it seeded to third parties as a host of this stolen IP," they write. While archiving lists of seeders is not a typical feature for a torrent client, the authors are asking Meta to disclose any relevant data. In addition, they also want the court to reconsider its ruling regarding the crime-fraud exception. That's important, they suggest, as Meta's legal counsel was allegedly involved in matters related to torrenting. "Meta, with the involvement of in-house counsel, decided to obtain copyrighted works without permission from online databases of copyrighted works that 'we know to be pirated, such as LibGen,'" they write. The authors allege that this involved "seeding" files and that Meta attempted to "conceal its actions" by limiting the amount of data shared with the public. One Meta employee also asked for guidance, as "torrenting from a corporate laptop doesn't feel right."

Privacy

The Powerful AI Tool That Cops (Or Stalkers) Can Use To Geolocate Photos In Seconds 21

An anonymous reader quotes a report from 404 Media: A powerful AI tool can predict with high accuracy the location of photos based on features inside the image itself -- such as vegetation, architecture, and the distance between buildings -- in seconds, with the company now marketing the tool to law enforcement officers and government agencies. Called GeoSpy, made by a firm called Graylark Technologies out of Boston, the tool has also been used for months by members of the public, with many making videos marveling at the technology, and some asking for help with stalking specific women. The company's founder has aggressively pushed back against such requests, and GeoSpy closed off public access to the tool after 404 Media contacted him for comment.

Based on 404 Media's own tests and conversations with other people who have used it and investors, GeoSpy could radically change what information can be learned from photos posted online, and by whom. Law enforcement officers with very little necessary training, private threat intelligence companies, and stalkers could use this technology, and in some cases already are. Dedicated open source intelligence (OSINT) professionals can of course do this too, but the training and skillset necessary can take years to build up. GeoSpy allows essentially anyone to do it. "We are working on something for LE [law enforcement] but it's [...]," Daniel Heinen, the founder of Graylark and GeoSpy, wrote in a message to the GeoSpy community Discord in July.

GeoSpy has been trained on millions of images from around the world, according to marketing material available online. From that, the tool is able to recognize "distinct geographical markers such as architectural styles, soil characteristics, and their spatial relationships." That marketing material says GeoSpy has strong coverage in the United States, but that it also "maintains global capabilities for location identification." [...] GeoSpy has not received much media attention, but it has become something of a sensation on YouTube. Multiple content creators have tested out the tool, and some try to feed it harder and harder challenges.
Now that it's been shut off to the public, users have to request access, which is "available exclusively to qualified law enforcement agencies, enterprise users and government entities," according to the company's website.

The law enforcement version of GeoSpy is more powerful than what was publicly available, according to Heinen's Discord posts. "Geospy.ai is a demo," he wrote in September. "The real work is the law enforcement models."
AI

AI Benchmarking Organization Criticized For Waiting To Disclose Funding from OpenAI (techcrunch.com) 6

An anonymous reader shares a report: An organization developing math benchmarks for AI didn't disclose that it had received funding from OpenAI until relatively recently, drawing allegations of impropriety from some in the AI community.

Epoch AI, a nonprofit primarily funded by Open Philanthropy, a research and grantmaking foundation, revealed on December 20 that OpenAI had supported the creation of FrontierMath. FrontierMath, a test with expert-level problems designed to measure an AI's mathematical skills, was one of the benchmarks OpenAI used to demo its upcoming flagship AI, o3.

In a post on the forum LessWrong, a contractor for Epoch AI going by the username "Meemi" says that many contributors to the FrontierMath benchmark weren't informed of OpenAI's involvement until it was made public. "The communication about this has been non-transparent," Meemi wrote. "In my view Epoch AI should have disclosed OpenAI funding, and contractors should have transparent information about the potential of their work being used for capabilities, when choosing whether to work on a benchmark."

Education

More Teens Say They're Using ChatGPT For Schoolwork, a New Study Finds (npr.org) 65

A recent poll from the Pew Research Center shows more and more teens are turning to ChatGPT for help with their homework. Three things to know:
1. According to the survey, 26% of students ages 13-17 are using the artificial intelligence bot to help them with their assignments.
2. That's double the number from 2023, when 13% reported the same habit when completing assignments.
3. Comfort levels with using ChatGPT for different types of assignments vary among students: 54% found that using it to research new topics, for example, was an acceptable use of the tool. But only 18% said the same for using it to write an essay.

United States

The Pentagon Says AI is Speeding Up Its 'Kill Chain' 34

An anonymous reader shares a report: Leading AI developers, such as OpenAI and Anthropic, are threading a delicate needle to sell software to the United States military: make the Pentagon more efficient, without letting their AI kill people. Today, their tools are not being used as weapons, but AI is giving the Department of Defense a "significant advantage" in identifying, tracking, and assessing threats, the Pentagon's Chief Digital and AI Officer, Dr. Radha Plumb, told TechCrunch in a phone interview.

"We obviously are increasing the ways in which we can speed up the execution of kill chain so that our commanders can respond in the right time to protect our forces," said Plumb. The "kill chain" refers to the military's process of identifying, tracking, and eliminating threats, involving a complex system of sensors, platforms, and weapons. Generative AI is proving helpful during the planning and strategizing phases of the kill chain, according to Plumb. The relationship between the Pentagon and AI developers is a relatively new one. OpenAI, Anthropic, and Meta walked back their usage policies in 2024 to let U.S. intelligence and defense agencies use their AI systems. However, they still don't allow their AI to harm humans. "We've been really clear on what we will and won't use their technologies for," Plumb said, when asked how the Pentagon works with AI model providers.
AI

In AI Arms Race, America Needs Private Companies, Warns National Security Advisor (axios.com) 40

America's outgoing national security adviser, Jake Sullivan, has "wide access to the world's secrets," writes Axios, adding that he delivered a "chilling" warning that "The next few years will determine whether AI leads to catastrophe — and whether China or America prevails in the AI arms race."

But in addition, Sullivan "said in our phone interview that unlike previous dramatic technology advancements (atomic weapons, space, the internet), AI development sits outside of government and security clearances, and in the hands of private companies with the power of nation-states... 'There's going to have to be a new model of relationship because of just the sheer capability in the hands of a private actor,' Sullivan says..." Somehow, government will have to join forces with these companies to nurture and protect America's early AI edge, and shape the global rules for using potentially God-like powers, he says. U.S. failure to get this right, Sullivan warns, could be "dramatic, and dramatically negative — to include the democratization of extremely powerful and lethal weapons; massive disruption and dislocation of jobs; an avalanche of misinformation..."

To distill Sullivan: America must quickly perfect a technology that many believe will be smarter and more capable than humans. We need to do this without decimating U.S. jobs or inadvertently unleashing something with capabilities we didn't anticipate or prepare for. We need to beat China both on the technology and in shaping and setting global usage and monitoring of it, so bad actors don't use it catastrophically. Oh, and it can only be done with unprecedented government-private sector collaboration — and probably difficult, but vital, cooperation with China...

There's no person we know in a position of power in AI or governance who doesn't share Sullivan's broad belief in the stakes ahead...

That said, AI is like the climate: America could do everything right — but if China refuses to do the same, the problem persists and metastasizes fast. Sullivan said Trump, like Biden, should try to work with Chinese leader Xi Jinping on a global AI framework, much like the world did with nuclear weapons.

"I personally am not an AI doomer," Sullivan says in the interview. "I am a person who believes that we can seize the opportunities of AI. But to do so, we've got to manage the downside risks, and we have to be clear-eyed and real about those risks."

Thanks to long-time Slashdot reader Mr_Blank for sharing the article.
Social Networks

TikTok Goes Offline in US - Then Comes Back Online After Trump Promises 90-Day Reprieve (apnews.com) 109

CNN reports: TikTok appears to be coming back online just hours after President-elect Donald Trump pledged Sunday that he would sign an executive order Monday that aims to restore the banned app. Around 12 hours after first shutting itself down, U.S. users began to have access to TikTok on a web browser and in the app, although the page still showed a warning about the shutdown.
The brief outage was "the first time in history the U.S. government has outlawed a widely popular social media network," reports NPR. Apple and Google removed TikTok from their app stores. (And Apple also removed Lemon8).

The incoming president announced his pending executive order "in a post on his Truth Social account," reports the Associated Press, "as millions of TikTok users in the U.S. awoke to discover they could no longer access the TikTok app or platform."

But two Republican Senators said Sunday that the incoming president doesn't have the power to pause the TikTok ban. Tom Cotton of Arkansas and Pete Ricketts of Nebraska posted on X.com that "Now that the law has taken effect, there's no legal basis for any kind of 'extension' of its effective date. For TikTok to come back online in the future, ByteDance must agree to a sale... severing all ties between TikTok and Communist China. Only then will Americans be protected from the grave threat posed to their privacy and security by a communist-controlled TikTok."

The Associated Press reports that the incoming president offered this rationale for the reprieve in his Truth Social post. "Americans deserve to see our exciting Inauguration on Monday, as well as other events and conversations." The law gives the sitting president authority to grant a 90-day extension if a viable sale is underway. Although investors made a few offers, ByteDance previously said it would not sell. In his post on Sunday, Trump said he "would like the United States to have a 50% ownership position in a joint venture," but it was not immediately clear if he was referring to the government or an American company...

"A law banning TikTok has been enacted in the U.S.," a pop-up message informed users who opened the TikTok app and tried to scroll through videos on Saturday night. "Unfortunately that means you can't use TikTok for now." The service interruption TikTok instituted hours earlier caught most users by surprise. Experts had said the law as written did not require TikTok to take down its platform, only for app stores to remove it. Current users had been expected to continue to have access to videos until the app stopped working due to a lack of updates... "We are fortunate that President Trump has indicated that he will work with us on a solution to reinstate TikTok once he takes office. Please stay tuned," read the pop-up message...

Apple said the apps would remain on the devices of people who already had them installed, but in-app purchases and new subscriptions no longer were possible and that operating system updates to iPhones and iPads might affect the apps' performance.

In the nine months since Congress passed the sale-or-ban law, no clear buyers emerged, and ByteDance publicly insisted it would not sell TikTok. But Trump said he hoped his administration could facilitate a deal to "save" the app. TikTok CEO Shou Chew is expected to attend Trump's inauguration with a prime seating location. Chew posted a video late Saturday thanking Trump for his commitment to work with the company to keep the app available in the U.S. and for taking a "strong stand for the First Amendment and against arbitrary censorship...."

On Saturday, artificial intelligence startup Perplexity AI submitted a proposal to ByteDance to create a new entity that merges Perplexity with TikTok's U.S. business, according to a person familiar with the matter...

The article adds that TikTok "does not operate in China, where ByteDance instead offers Douyin, the Chinese sibling of TikTok that follows Beijing's strict censorship rules."

Sunday morning Republican House speaker Mike Johnson offered his understanding of Trump's planned executive order, according to Politico. Speaking on Meet the Press, Johnson said "the way we read that is that he's going to try to force along a true divestiture, changing of hands, the ownership.

"It's not the platform that members of Congress are concerned about. It's the Chinese Communist Party and their manipulation of the algorithms."

Thanks to long-time Slashdot reader ArchieBunker for sharing the news.
IT

Are 'Career Catfishers' Justified In Not Showing Up for Work? (fortune.com) 193

Fortune reports 18% of workers have engaged in "career catfishing" — accepting a job offer, but then not showing up on the first day of work.

And when someone posted Fortune's article to Reddit's antiwork subreddit, it drew 2,100 upvotes -- and another 84 comments. ("I love doing this...! This feels really great to do after a company has jerked you around, and basically said that several other people were in line ahead of you... after five interviews.")

But Fortune reports there are other sources of frustration: At the moment, Gen Z is contending with an onerous battle to land an entry-level, full-time role. The class of 2025 is set to apply to more jobs than the graduating class prior, already submitting 24% more applications on average this past summer than seniors did last year. Furthermore, the class of 2024 applied to 64% more jobs than the cohort before them, according to job platform Handshake. To make matters all the more bleak, the number of job listings has dwindled from 2023 levels, generating deeper frenzy and more intense competition for the roles listed.

That adds up to a hiring managers' market, and senior executives are playing hardball; only 12% of mid-level executives think entry-level workers are prepared to join the workforce, per a report from technology education provider General Assembly. About one in four say they wouldn't hire today's entry-level employees. Yet, that's not really the point of entry-level roles, points out Jourdan Hathaway, General Assembly's chief business officer. By definition, it's a position that requires investment in a young adult, she explained. "The entry-level employee pipeline is broken," Hathaway wrote in a statement. "Companies must rethink how they source, train, and onboard employees."

The especially competitive hiring landscape could be forcing Gen Zers to accept the first gig they can get because the job market is so dire — only to later regret it and not show up the first day.

The article also acknowledges that "employers themselves have a role in the two-way communication — or lack thereof — between hire and hirer." Almost 80% of hiring managers admitted they've stopped responding to candidates during the application process, according to a survey of 625 hiring managers from Resume Genius.

Gen Zers say that their ghosting is in reaction to the company's behavior. More than a third of applicants who have purposefully dropped the ball say it was because a recruiter was rude to them or misled them about a position, according to Monster... In part, it's likely AI that's fueling said ghosting. AI has become more integrated into the hiring process, becoming a screener that rejects resumes without ever reaching a human's eyes. That phenomenon possibly fuels both sides' tendency to be non-responsive...

EU

NATO Will Deploy Unmanned Vessels to Protect Baltic Sea Cables - Plus Data-Assessing AI (twz.com) 56

The BBC brings news from the Baltic Sea. After critical undersea cables were damaged or severed last year, "NATO has launched a new mission to increase the surveillance of ships..." Undersea infrastructure is essential not only for electricity supply but also because more than 95% of internet traffic is secured via undersea cables, [said NATO head Mark Rutte], adding that "1.3 million kilometres (800,000 miles) of cables guarantee an estimated 10 trillion-dollar worth of financial transactions every day". In a post on X, he said Nato would do "what it takes to ensure the safety and security of our critical infrastructure and all that we hold dear".... Estonia's Foreign Minister Margus Tsahkna said in December that damage to submarine infrastructure had become "so frequent" that it cast doubt on the idea the damage could be considered "accidental" or "merely poor seamanship".
The article also has new details about a late-December cable-cutting by the Eagle S (which was then boarded by Finland's coast guard and steered into Finnish waters). "On Monday, Risto Lohi of Finland's National Bureau of Investigation told Reuters that the Eagle S was threatening to cut a second power cable and a gas pipe between Finland and Estonia at the time it was seized." And there are reports that the ship was loaded with spying equipment.

UPDATE (1/19/2025): The Washington Post reports that the undersea cable ruptures "were likely the result of maritime accidents rather than Russian sabotage, according to several U.S. and European intelligence officials."

But whatever they're watching for, NATO's new surveillance of the Baltic Sea will include "uncrewed surface vessels," according to defense-news web site TWZ.com: The uncrewed surface vessels [or USVs], also known as drone boats, will help establish an enhanced common operating picture to give participating nations a better sense of potential threats and speed up any response. It is the first time NATO will use USVs in this manner, said a top alliance commander... There will be at least 20 USVs assigned [a NATO spokesman told The War Zone Friday]... In the first phase of the experiment, the USVs will "have the capabilities under human control" while "later phases will include greater autonomy." The USVs will augment the dozen or so vessels, as well as an unspecified number of crewed maritime patrol aircraft, committed to the mission.
One highly-placed NATO official tells the site that within weeks "we will begin to use these ships to give a persistent, 24-7 surveillance of critical areas."

Last week the U.K. government also announced "an advanced UK-led reaction system to track potential threats to undersea infrastructure and monitor the Russian shadow fleet."

The system "harnesses AI to assess data from a range of sources, including the Automatic Identification System (AIS) ships use to broadcast their position, to calculate the risk posed by each vessel entering areas of interest." Harnessing the power of AI, this UK-led system is a major innovation which allows us the unprecedented ability to monitor large areas of the sea with a comparatively small number of resources, helping us stay secure at home and strong abroad.
AI

Arrested by AI: When Police Ignored Standards After AI Facial-Recognition Matches (msn.com) 55

A county transit police detective fed a poor-quality image to an AI-powered facial recognition program, remembers the Washington Post, leading to the arrest of "Christopher Gatlin, a 29-year-old father of four who had no apparent ties to the crime scene nor a history of violent offenses." Gatlin was unable to post the $75,000 cash bond required. "Jailed for a crime he says he didn't commit, it would take Gatlin more than two years to clear his name." A Washington Post investigation into police use of facial recognition software found that law enforcement agencies across the nation are using the artificial intelligence tools in a way they were never intended to be used: as a shortcut to finding and arresting suspects without other evidence... The Post reviewed documents from 23 police departments where detailed records about facial recognition use are available and found that 15 departments spanning 12 states arrested suspects identified through AI matches without any independent evidence connecting them to the crime — in most cases contradicting their own internal policies requiring officers to corroborate all leads found through AI. Some law enforcement officers using the technology appeared to abandon traditional policing standards and treat software suggestions as facts, The Post found. One police report referred to an uncorroborated AI result as a "100% match." Another said police used the software to "immediately and unquestionably" identify a suspected thief.

Gatlin is one of at least eight people wrongfully arrested in the United States after being identified through facial recognition... All of the cases were eventually dismissed. Police probably could have eliminated most of the people as suspects before their arrest through basic police work, such as checking alibis, comparing tattoos, or, in one case, following DNA and fingerprint evidence left at the scene.

Some statistics from the article about the eight wrongfully-arrested people:
  • In six cases police failed to check alibis
  • In two cases police ignored evidence that contradicted their theory
  • In five cases police failed to collect key pieces of evidence
  • In three cases police ignored suspects' physical characteristics
  • In six cases police relied on problematic witness statements

The article provides two examples of police departments forced to pay $300,000 settlements after wrongful arrests caused by AI mismatches. But "In interviews with The Post, all eight people known to have been wrongly arrested said the experience had left permanent scars: lost jobs, damaged relationships, missed payments on car and home loans. Some said they had to send their children to counseling to work through the trauma of watching their mother or father get arrested on the front lawn.

"Most said they also developed a fear of police."


AI

World's First AI Chatbot, ELIZA, Resurrected After 60 Years (livescience.com) 37

"Scientists have just resurrected 'ELIZA,' the world's first chatbot, from long-lost computer code," reports LiveScience, "and it still works extremely well." (Click in the vintage black-and-green rectangle for a blinking-cursor prompt...) Using dusty printouts from MIT archives, these "software archaeologists" discovered defunct code that had been lost for 60 years and brought it back to life. ELIZA was developed in the 1960s by MIT professor Joseph Weizenbaum and named for Eliza Doolittle, the protagonist of the play "Pygmalion," who was taught how to speak like an aristocratic British woman.

As a language model that the user could interact with, ELIZA had a significant impact on today's artificial intelligence (AI), the researchers wrote in a paper posted to the preprint database arXiv Sunday (Jan. 12). The "DOCTOR" script written for ELIZA was programmed to respond to questions as a psychotherapist would. For example, ELIZA would say, "Please tell me your problem." If the user input "Men are all alike," the program would respond, "In what way."
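The recovered program itself is written in MAD-SLIP, but the keyword-and-template style of the DOCTOR script is easy to sketch. The toy Python responder below is only an illustration in the spirit of ELIZA, not the resurrected code; the rules and phrasing are invented apart from the "In what way" example quoted above.

```python
# A toy ELIZA-style responder: keyword patterns with reassembly templates,
# in the spirit of Weizenbaum's DOCTOR script (not the recovered MAD-SLIP code).
import re

RULES = [
    (re.compile(r"\b(.*) are all alike\b", re.I), "In what way?"),
    (re.compile(r"\bI need (.*)", re.I),          "Why do you need {0}?"),
    (re.compile(r"\bmy (mother|father)\b", re.I), "Tell me more about your family."),
]
DEFAULT = "Please tell me your problem."

def reply(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(reply("Men are all alike"))   # -> "In what way?"
print(reply("I need some rest"))    # -> "Why do you need some rest?"
```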

Weizenbaum wrote ELIZA in a now-defunct programming language he invented, called Michigan Algorithm Decoder Symmetric List Processor (MAD-SLIP), but it was almost immediately copied into the language Lisp. With the advent of the early internet, the Lisp version of ELIZA went viral, and the original version became obsolete. Experts thought the original 420-line ELIZA code was lost until 2021, when study co-author Jeff Shrager, a cognitive scientist at Stanford University, and Myles Crowley, an MIT archivist, found it among Weizenbaum's papers. "I have a particular interest in how early AI pioneers thought," Shrager told Live Science in an email. "Having computer scientists' code is as close to having a record of their thoughts, and as ELIZA was — and remains, for better or for worse — a touchstone of early AI, I want to know what was in his mind...."

Even though it was intended to be a research platform for human-computer communication, "ELIZA was such a novelty at the time that its 'chatbotness' overwhelmed its research purposes," Shrager said.

I just remember that time 23 years ago when someone connected a Perl version of ELIZA to "an AOL Instant Messenger account that has a high rate of 'random' people trying to start conversations" to "put ELIZA in touch with the real world..."

Thanks to long-time Slashdot reader MattSparkes for sharing the news.
Biotech

OpenAI Has Created an AI Model For Longevity Science (technologyreview.com) 33

OpenAI has developed a language model designed for engineering proteins that can convert regular cells into stem cells. It marks the company's first venture into biological data and demonstrates AI's potential for unexpected scientific discoveries. An anonymous reader quotes a report from MIT Technology Review: Last week, OpenAI CEO Sam Altman said he was "confident" his company knows how to build an AGI, adding that "superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own." The protein engineering project started a year ago when Retro Biosciences, a longevity research company based in San Francisco, approached OpenAI about working together. That link-up did not happen by chance. Sam Altman, the CEO of OpenAI, personally funded Retro with $180 million, as MIT Technology Review first reported in 2023. Retro has the goal of extending the normal human lifespan by 10 years. For that, it studies what are called Yamanaka factors. Those are a set of proteins that, when added to a human skin cell, will cause it to morph into a young-seeming stem cell, a type that can produce any other tissue in the body. [...]

OpenAI's new model, called GPT-4b micro, was trained to suggest ways to re-engineer the protein factors to increase their function. According to OpenAI, researchers used the model's suggestions to change two of the Yamanaka factors to be more than 50 times as effective -- at least according to some preliminary measures. [...] The model does not work the same way as Google's AlphaFold, which predicts what shape proteins will take. Since the Yamanaka factors are unusually floppy and unstructured proteins, OpenAI said, they called for a different approach, which its large language models were suited to. The model was trained on examples of protein sequences from many species, as well as information on which proteins tend to interact with one another. While that's a lot of data, it's just a fraction of what OpenAI's flagship chatbots were trained on, making GPT-4b an example of a "small language model" that works with a focused data set.

Once Retro scientists were given the model, they tried to steer it to suggest possible redesigns of the Yamanaka proteins. The prompting tactic used is similar to the "few-shot" method, in which a user queries a chatbot by providing a series of examples with answers, followed by an example for the bot to respond to. Although genetic engineers have ways to direct evolution of molecules in the lab, they can usually test only so many possibilities. And even a protein of typical length can be changed in nearly infinite ways (since they're built from hundreds of amino acids, and each acid comes in 20 possible varieties). OpenAI's model, however, often spits out suggestions in which a third of the amino acids in the proteins are changed. "We threw this model into the lab immediately and we got real-world results," says Retro's CEO, Joe Betts-Lacroix. He says the model's ideas were unusually good, leading to improvements over the original Yamanaka factors in a substantial fraction of cases.
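MIT Technology Review doesn't publish the actual prompts or the GPT-4b micro interface, but the "few-shot" pattern it describes is generic: show the model several input/answer pairs, then the case to complete. The sketch below is a hypothetical illustration of that pattern; the sequences are placeholders and the task framing is assumed.

```python
# Generic few-shot prompt construction, illustrating the pattern described in
# the article. Sequences are placeholders; this is not OpenAI's GPT-4b micro API.
EXAMPLES = [
    ("MSKGEELFT...", "MSKGDELFT..."),   # (original fragment, higher-activity variant)
    ("MKVLWAALL...", "MKVLWGALL..."),
]

def build_prompt(query_sequence: str) -> str:
    lines = ["Suggest variants of the protein fragment with higher reprogramming activity."]
    for original, improved in EXAMPLES:
        lines.append(f"Original: {original}")
        lines.append(f"Improved: {improved}")
    lines.append(f"Original: {query_sequence}")
    lines.append("Improved:")          # the model is expected to complete this line
    return "\n".join(lines)

print(build_prompt("MTEYKLVVVG..."))
```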

AI

Google Reports Halving Code Migration Time With AI Help 12

Google computer scientists have been using LLMs to streamline internal code migrations, achieving significant time savings of up to 89% in some cases. The findings appear in a pre-print paper titled "How is Google using AI for internal code migrations?" The Register reports: Their focus is on bespoke AI tools developed for specific product areas, such as Ads, Search, Workspace and YouTube, instead of generic AI tools that provide broadly applicable services like code completion, code review, and question answering. Google's code migrations involved: changing 32-bit IDs in the 500-plus-million-line codebase for Google Ads to 64-bit IDs; converting its old JUnit3 testing library to JUnit4; and replacing the Joda time library with Java's standard java.time package. The int32 to int64 migration, the Googlers explain, was not trivial as the IDs were often generically defined (int32_t in C++ or Integer in Java) and were not easily searchable. They existed in tens of thousands of code locations across thousands of files. Changes had to be tracked across multiple teams and changes to class interfaces had to be considered across multiple files. "The full effort, if done manually, was expected to require hundreds of software engineering years and complex cross-team coordination," the authors explain.

For their LLM-based workflow, Google's software engineers implemented the following process. An engineer from Ads would identify an ID in need of migration using a combination of code search, Kythe, and custom scripts. Then an LLM-based migration toolkit, triggered by someone knowledgeable in the art, was run to generate verified changes containing code that passed unit tests. Those changes would be manually checked by the same engineer and potentially corrected. Thereafter, the code changes would be sent to multiple reviewers who are responsible for the portion of the codebase affected by the changes. The result was that 80 percent of the code modifications in the change lists (CLs) were purely the product of AI; the remainder were either human-authored or human-edited AI suggestions.
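The paper describes this workflow only at a high level. As a rough, hypothetical sketch of the shape of such a loop (the helper functions, the grep-based search standing in for Kythe/code search, and the test script name are all invented), it might look like this:

```python
# Toy sketch of an LLM-assisted migration loop in the spirit of the workflow
# described above: find candidate files, ask a model for a patch, keep the patch
# only if the unit tests still pass, then stage it for human review. The helpers
# passed in (propose_patch, apply_patch, revert_patch, stage_for_review) and the
# test script name are invented for illustration.
import subprocess
from typing import Callable, List

def find_candidate_files(pattern: str, root: str = "src/") -> List[str]:
    """Rough stand-in for code search / Kythe: list files containing a 32-bit ID declaration."""
    result = subprocess.run(["grep", "-rl", pattern, root], capture_output=True, text=True)
    return result.stdout.splitlines()

def tests_pass() -> bool:
    """Stand-in for the project's real test runner."""
    return subprocess.run(["./run_tests.sh"]).returncode == 0

def migrate(pattern: str,
            propose_patch: Callable[[str], str],
            apply_patch: Callable[[str], None],
            revert_patch: Callable[[str], None],
            stage_for_review: Callable[[str], None]) -> None:
    for path in find_candidate_files(pattern):
        patch = propose_patch(path)      # e.g. "widen this ID from int32 to int64"
        apply_patch(patch)
        if tests_pass():
            stage_for_review(patch)      # a human engineer still checks and may correct it
        else:
            revert_patch(patch)
```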

"We discovered that in most cases, the human needed to revert at least some changes the model made that were either incorrect or not necessary," the authors observe. "Given the complexity and sensitive nature of the modified code, effort has to be spent in carefully rolling out each change to users." Based on this, Google undertook further work on LLM-driven verification to reduce the need for detailed review. Even with the need to double-check the LLM's work, the authors estimate that the time required to complete the migration was reduced by 50 percent. With LLM assistance, it took just three months to migrate 5,359 files and modify 149,000 lines of code to complete the JUnit3-JUnit4 transition. Approximately 87 percent of the code generated by AI ended up being committed with no changes. As for the Joda-Java time framework switch, the authors estimate a time saving of 89 percent compared to the projected manual change time, though no specifics were provided to support that assertion.
Facebook

Russian Disinformation Campaigns Eluded Meta's Efforts To Block Them (nytimes.com) 61

An anonymous reader quotes a report from the New York Times: A Russian organization linked to the Kremlin's covert influence campaigns posted more than 8,000 political advertisements on Facebook despite European and American restrictions barring companies from doing business with the organization, according to three organizations that track disinformation online. The Russian group, the Social Design Agency, evaded lax enforcement by Facebook to place an estimated $338,000 worth of ads aimed at European users over a period of 15 months that ended in October, even though the platform itself highlighted the threat, the three organizations said in a report released on Friday.

The Social Design Agency has faced punitive sanctions in the European Union since 2023 and in the United States since April for spreading propaganda and disinformation to unsuspecting users on social media. The ad campaigns on Facebook raise "critical questions about the platform's compliance" with American and European laws, the report said. [...] The Social Design Agency is a public relations company in Moscow that, according to American and European officials, operates a sophisticated influence operation known as Doppelganger. Since 2022, Doppelganger has created cartoon memes and online clones of real news sites, like Le Monde and The Washington Post, to spread propaganda and disinformation, often about the war in Ukraine.

[...] The organizations documenting the campaign -- Check First, a Finnish research company, along with Reset.Tech in London and AI Forensics in Paris -- focused on efforts to sway Facebook users in France, Germany, Poland and Italy. Doppelganger has also been linked to influence operations in the United States, Israel and other countries, but those are not included in the report's findings. [...] The researchers estimated that the ads resulted in more than 123,000 clicks by users and netted Meta at least $338,000 in the European Union alone. The researchers acknowledged that the figures provide only one, incomplete example of the Russian agency's efforts. In addition to propagating Russia's views on Ukraine, the agency posted ads in response to major news events, including the Hamas attack on Israel on Oct. 7, 2023, and a terrorist attack in a Moscow suburb last March that killed 145 people. The ads would often appear within 48 hours, trying to shape public perceptions of events. After the Oct. 7 attacks, the ads pushed false claims that Ukraine sold weapons to Hamas. The ads reached more than 237,000 accounts over two to three days, "underscoring the operation's capacity to weaponize current events in support of geopolitical narratives," the researchers' report said.

AI

Microsoft-OpenAI Partnership Raises Antitrust Concerns, FTC Says (bloomberg.com) 2

Microsoft's $13 billion investment in OpenAI raises concerns that the tech giant could extend its dominance in cloud computing into the nascent AI market, the Federal Trade Commission said in a report released Friday. From a report: The commission said Microsoft's deal with OpenAI, as well as Amazon and Google's partnerships with AI company Anthropic, raise the risk that AI developers could be "fully acquired" by the tech giants in the future.

"The FTC's report sheds light on how partnerships by big tech firms can create lock-in, deprive start-ups of key AI inputs, and reveal sensitive information that can undermine fair competition," FTC Chair Lina Khan said in a statement. The FTC has the power to open market studies to glean more information about industry trends. The findings can be used to inform future actions. It's unclear what the agency's new leadership under the Trump administration will do with the report.

Microsoft

Microsoft Research: AI Systems Cannot Be Made Fully Secure (theregister.com) 28

Microsoft researchers who tested more than 100 of the company's AI products concluded that AI systems can never be made fully secure, according to a new pre-print paper. The 26-author study, which included Azure CTO Mark Russinovich, found that large language models amplify existing security risks and create new vulnerabilities. While defensive measures can increase the cost of attacks, the researchers warned that AI systems will remain vulnerable to threats ranging from gradient-based attacks to simpler techniques like interface manipulation for phishing.
AI

AI Tools Crack Down on Wall Street Trader Code Speak (msn.com) 21

Compliance software firms are deploying AI to decode complex trader communications and detect potential financial crimes as Wall Street and London regulators intensify scrutiny of market manipulation.

Companies like Behavox and Global Relay are developing AI tools that can interpret trader slang, emoji-laden messages and even coded language that traditional detection systems might miss, WSJ reports. The technology aims to replace older methods that relied on scanning for specific trigger words, which traders could easily evade. The story adds: Traders believed that "if somebody wanted to say something sketchy, they would just make up a funny word or, you know, spell it backward or something," [Donald] McElligott (VP of Global Relay) said. "Now, none of that's going to work anymore."
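Neither vendor's approach is described in detail, but the contrast the story draws maps onto a familiar pattern: exact trigger-word matching versus comparing a message's meaning against suspicious exemplars. The sketch below illustrates that contrast with an off-the-shelf sentence-embedding model; the trigger words, exemplar phrases, threshold, and model choice are all assumptions, not how Behavox or Global Relay actually work.

```python
# Sketch contrasting old trigger-word scanning with a simple semantic check.
# Exemplar phrases, threshold, and model choice are illustrative only.
from sentence_transformers import SentenceTransformer, util

TRIGGER_WORDS = {"guarantee", "insider", "off the record"}

def keyword_flag(message: str) -> bool:
    """Old-style check: easy to evade by misspelling or inventing slang."""
    return any(word in message.lower() for word in TRIGGER_WORDS)

model = SentenceTransformer("all-MiniLM-L6-v2")
EXEMPLARS = [
    "I'll make sure you don't lose money on this trade",
    "keep this between us, don't put it in the chat",
]
exemplar_embs = model.encode(EXEMPLARS, convert_to_tensor=True)

def semantic_flag(message: str, threshold: float = 0.6) -> bool:
    """Newer-style check: compares meaning, so coded phrasing can still score high."""
    emb = model.encode(message, convert_to_tensor=True)
    return util.cos_sim(emb, exemplar_embs).max().item() >= threshold

msg = "ur downside is covered, delete this after reading ;)"
print(keyword_flag(msg), semantic_flag(msg))  # keyword check misses it; semantic check may flag it
```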
Google

Google Won't Add Fact Checks Despite New EU Law (axios.com) 185

According to Axios, Google has told the EU it will not add fact checks to search results and YouTube videos or use them in ranking or removing content, despite the requirements of a new EU law. From the report: In a letter written to Renate Nikolay, the deputy director general under the content and technology arm at the European Commission, Google's global affairs president Kent Walker said the fact-checking integration required by the Commission's new Disinformation Code of Practice "simply isn't appropriate or effective for our services" and said Google won't commit to it. The code would require Google to incorporate fact-check results alongside Google's search results and YouTube videos. It would also force Google to build fact-checking into its ranking systems and algorithms.

Walker said Google's current approach to content moderation works and pointed to successful content moderation during last year's "unprecedented cycle of global elections" as proof. He said a new feature added to YouTube last year that enables some users to add contextual notes to videos "has significant potential." (That program is similar to X's Community Notes feature, as well as a new program announced by Meta last week.)

The EU's Code of Practice on Disinformation, introduced in 2022, includes several voluntary commitments that tech firms and private companies, including fact-checking organizations, are expected to deliver on. The Code, originally created in 2018, predates the EU's new content moderation law, the Digital Services Act (DSA), which went into effect in 2022.

The Commission has held private discussions over the past year with tech companies, urging them to convert the voluntary measures into an official code of conduct under the DSA. Walker said in his letter Thursday that Google had already told the Commission that it didn't plan to comply. Google will "pull out of all fact-checking commitments in the Code before it becomes a DSA Code of Conduct," he wrote. He said Google will continue to invest in improvements to its current content moderation practices, which focus on providing people with more information about their search results through features like SynthID watermarking and AI disclosures on YouTube.
