Google

Gemini Nano Won't Come To Pixel 8 Due To Hardware Limitations (mobilesyrup.com) 7

An anonymous reader quotes a report from MobileSyrup: Google's new smart assistant, Gemini, is available on multiple devices, but Gemini Nano, the multimodal large language model, isn't coming to all Pixel smartphones. Gemini Nano is only available on the Google Pixel 8 Pro and the Samsung Galaxy S24 series; however, we've recently learned that it's not making its way to the base Pixel 8, according to Terence Zhang, an engineer at Google, as reported by Mishaal Rahman.

Zhang said Gemini Nano isn't coming to the Pixel 8 because of hardware limitations, but it's unclear what those limitations are. Many would assume it's the Pixel 8's 8GB of RAM, compared to the Pixel 8 Pro's 12GB. That said, the Galaxy S24 series starts at 8GB of RAM and can run Nano, which suggests some other hardware limitation is holding it back. Hopefully, more information will come in the future, but for now it seems like only high-end devices will get the Gemini Nano experience.

AI

OpenAI Board Reappoints Altman and Adds Three Other Directors (reuters.com) 8

As reported by The Information (paywalled), OpenAI CEO Sam Altman will return to the company's board along with three new directors. Reuters reports: The company has also concluded the investigation around Altman's November firing, the Information said, referring to the ouster that briefly threw the world's most prominent artificial intelligence company into chaos. Employees, investors and OpenAI's biggest financial backer, Microsoft, had expressed shock over Altman's ouster, which was reversed within days. The company will also announce the appointment of three new directors: Sue Desmond-Hellmann, a former CEO of the Bill and Melinda Gates Foundation; Nicole Seligman, a former president of Sony Entertainment; and Fidji Simo, CEO of Instacart, the Information said. "I'm pleased this whole thing is over," Altman said.

"We are excited and unanimous in our support for Sam and Greg [Brockman]," OpenAI chair and former Salesforce executive Bret Taylor told reporters. Taylor said they also adopted "a number of governance enhancements," such as a whistleblower hotline and a new mission and strategy committee on the board. "The mission has not changed, because it is more important than ever before," added Taylor.

An independent investigation by the law firm WilmerHale determined that "the prior Board acted within its broad discretion to terminate Mr. Altman, but also found that his conduct did not mandate removal." The summary, provided by OpenAI, continued: "The prior Board believed at the time that its actions would mitigate internal management challenges and did not anticipate that its actions would destabilize the Company. The prior Board's decision did not arise out of concerns regarding product safety or security, the pace of development, OpenAI's finances, or its statements to investors, customers, or business partners. Instead, it was a consequence of a breakdown in the relationship and loss of trust between the prior Board and Mr. Altman."

Puzzle Games (Games)

NYTimes Files Copyright Takedown Against Hundreds of Wordle Clones (404media.co) 39

As reported by 404 Media, the New York Times has issued hundreds of copyright takedown requests against Wordle clones "in which it asserts not just ownership over the Wordle name but over the broad concepts and mechanics of the word game, which includes its '5x6 grid' and 'green tiles to indicate correct guesses.'" From the report: The Times filed at least three DMCA takedown requests with coders who have made clones of Wordle on GitHub. These include two in January and, crucially, a new DMCA filed this week against Chase Wackerfuss, the coder of a repository called "Reactle," which cloned Wordle in React JS (JavaScript). The most recent takedown request is critical because it goes after not only Reactle but anyone who has forked Reactle to create a different spinoff game; an archive of the Reactle code repository shows that it was forked 1,900 times to create a diverse set of games and spinoffs. These include Wordle clones in dozens of languages, crossword versions of Wordle, emoji and bird versions of Wordle, poker and AI spinoffs, and more.

"I write to submit a revised DMCA Notice regarding an infringing repository (and hundreds of forked repositories) hosted by Github that instruct users how to infringe The New York Times Co.'s ('The Times') copyright in its immensely popular Wordle game and create knock-off copies of the same. Unfortunately, hundreds of individuals have followed these instructions and published infringing Wordle knock-off games that The Times has spent the past month removing, including off of Github's websites," the DMCA takedown request against Reactle reads. "The Times's Wordle copyright includes the unique elements of its immensely popular game, such as the 5x6 grid, green tiles to indicate correct guesses, yellow tiles to indicate the correct letter but the wrong place within the word, and the keyboard directly beneath the grid. This gameplay is copied exactly in the repository, and the owner instructs others how to knock off the game and create an identical word game," it adds.

The DMCA request then says that GitHub must delete forks of the repository, which it writes were "infringing to the same extent as the parent repository" and which it says were made in what was "clearly bad faith." [...] The DMCA takedown requests are particularly notable because they come at a time when the New York Times is financially thriving, while many of its competitors are losing money, laying people off, and shutting down. The Times is thriving in part because Wordle, the crossword puzzle, and its recipe apps are juggernauts. The company has been aggressively expanding its "Games" business with Wordle, Connections, and a brand new word search game called Strands.

The New York Times issued a statement in response: "The Times has no issue with individuals creating similar word games that do not infringe The Times's 'Wordle' trademarks or copyrighted gameplay. The Times took action against a GitHub user and others who shared his code to defend its intellectual property rights in Wordle. The user created a 'Wordle clone' project that instructed others how to create a knock-off version of The Times's Wordle game featuring many of the same copyrighted elements. As a result, hundreds of websites began popping up with knock-off 'Wordle' games that used The Times's 'Wordle' trademark and copyrighted gameplay without authorization or permission."

AI

Dozens of Top Scientists Sign Effort To Prevent AI Bioweapons (nytimes.com) 53

An anonymous reader shares a report: Dario Amodei, chief executive of the high-profile A.I. start-up Anthropic, told Congress last year that new A.I. technology could soon help unskilled but malevolent people create large-scale biological attacks, such as the release of viruses or toxic substances that cause widespread disease and death. Senators from both parties were alarmed, while A.I. researchers in industry and academia debated how serious the threat might be. Now, over 90 biologists and other scientists who specialize in A.I. technologies used to design new proteins -- the microscopic mechanisms that drive all creations in biology -- have signed an agreement that seeks to ensure that their A.I.-aided research will move forward without exposing the world to serious harm.

The biologists, who include the Nobel laureate Frances Arnold and represent labs in the United States and other countries, also argued that the latest technologies would have far more benefits than negatives, including new vaccines and medicines. "As scientists engaged in this work, we believe the benefits of current A.I. technologies for protein design far outweigh the potential for harm, and we would like to ensure our research remains beneficial for all going forward," the agreement reads. The agreement does not seek to suppress the development or distribution of A.I. technologies. Instead, the biologists aim to regulate the use of equipment needed to manufacture new genetic material. This DNA manufacturing equipment is ultimately what allows for the development of bioweapons, said David Baker, the director of the Institute for Protein Design at the University of Washington, who helped shepherd the agreement.

AI

President Biden Calls for Ban on AI Voice Impersonations (variety.com) 145

President Biden included a nod to a rising issue in the entertainment and tech industries during his State of the Union address Thursday evening, calling for a ban on AI voice impersonations. From a report: "Here at home, I have signed over 400 bipartisan bills. There's more to pass my unity agenda," President Biden said, beginning to list off a series of different proposals that he hopes to address if elected to a second term. "Strengthen penalties on fentanyl trafficking, pass bipartisan privacy legislation to protect our children online, harness the promise of AI to protect us from peril, ban AI voice impersonations and more."

The president did not elaborate on the types of guardrails or penalties he plans to institute around the rising technology, or whether they would extend to the entertainment industry. AI was a top concern for SAG-AFTRA during the actors union's negotiations with and strike against the major studios last year.

AI

Reddit Will Now Use an AI Model To Fight Harassment (androidauthority.com) 75

An APK teardown performed by Android Authority has revealed that Reddit is now using a Large Language Model (LLM) to detect harassment on the platform. From the report: Reddit also updated its support page a week ago to mention the use of an AI model as part of its harassment filter. "The filter is powered by a Large Language Model (LLM) that's trained on moderator actions and content removed by Reddit's internal tools and enforcement teams," reads an excerpt from the page. The Register reports: The filter can be enabled in a Reddit community's mod tools, but individual moderators will need to have permissions to change subreddit settings to enable it. The harassment filter can be set to low ("filters the least content but with the most accurate results") and high ("filters the most content but may be less accurate"), and also includes an explicit allow list to force the AI to ignore certain keywords, up to 15 of which can be added. Once enabled, the filter creates a new tag in the moderation queue called "potential harassment," which moderators can review for accuracy. Reddit's help page says the feature is now available on desktop and the official Reddit apps, though it's not clear when the feature was added.
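
To make the described settings concrete, here is a minimal hypothetical sketch in Python of a two-level filter with a keyword allow list. The class, thresholds, and stub classifier are illustrative assumptions; Reddit has not published its actual model or code.

```python
# Hypothetical sketch of the harassment filter settings described above.
# The LLM classifier is stubbed out; Reddit's real model, thresholds, and
# internals are not public.

MAX_ALLOW_LIST = 15  # the filter accepts up to 15 allow-listed keywords

def classify_harassment(text: str) -> float:
    """Stand-in for an LLM classifier returning a harassment score in [0, 1]."""
    hostile_words = {"idiot", "loser", "clown"}
    hits = sum(word in text.lower() for word in hostile_words)
    return min(1.0, hits / 2)

class HarassmentFilter:
    # "low" filters the least content with the most accurate results;
    # "high" filters the most content but may be less accurate.
    THRESHOLDS = {"low": 0.9, "high": 0.5}

    def __init__(self, level: str = "low", allow_list: list[str] | None = None):
        allow_list = allow_list or []
        if len(allow_list) > MAX_ALLOW_LIST:
            raise ValueError(f"allow list is capped at {MAX_ALLOW_LIST} keywords")
        self.threshold = self.THRESHOLDS[level]
        self.allow_list = [kw.lower() for kw in allow_list]

    def review(self, comment: str) -> str:
        # Allow-listed keywords force the filter to ignore the comment.
        if any(kw in comment.lower() for kw in self.allow_list):
            return "approved"
        score = classify_harassment(comment)
        # Flagged comments land in the mod queue tagged "potential harassment".
        return "potential harassment" if score >= self.threshold else "approved"

print(HarassmentFilter(level="high").review("you absolute clown, you idiot"))
```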

Power

Is America Running Out of Electrical Power? (theweek.com) 267

An anonymous reader quotes a report from The Week Magazine: The advancement of new technologies appears to have given rise to a new problem across the United States: a crippling power shortage on the horizon. The advent of these technologies, such as eco-friendly factories and data centers, has renewed concerns that America could run out of electrical power. These worries also come at a time when the United States' aging power grid is in desperate need of repair. Heavily publicized incidents such as the 2021 Texas power outage, which was partially blamed on crypto-farming, exposed how vulnerable the nation's power supply is, especially during emergencies. There have also been warnings from tech moguls such as Elon Musk, who has stated that the United States is primed to run out of electricity and transformers for artificial intelligence in 2025. But the push to extend the life of the nation's power grid, while also maintaining eco-friendly sustainability, raises the question: Is the United States really at risk of going dark?

The emergence of new technologies means demand is soaring for power across the country; in Georgia, "demand for industrial power is surging to record highs, with the projection of electricity use for the next decade now 17 times what it was only recently," Evan Halper said for The Washington Post. Northern Virginia "needs the equivalent of several large nuclear power plants to serve all [its] new data centers," Halper said, while Texas faces a similar problem. This demand is resulting in a "scramble to try to squeeze more juice out of an aging power grid." At the same time, companies are "pushing commercial customers to go to extraordinary lengths to lock down energy sources, such as building their own power plants," Halper said. Much of this relates to the "rapid innovation in artificial intelligence, which is driving the construction of large warehouses of computing infrastructure," Halper said. This infrastructure requires significantly more power than traditional data centers, with the aforementioned crypto farms also sucking up massive amounts of power.

Climate change is also hurting sustainability efforts. A recent report from the North American Electric Reliability Corporation estimated that more than 300 million people in the U.S. and Canada could face power shortages in 2024. It also found that electricity demand is rising faster now than at any time in the past five years. This is partially because the "push for the electrification of heating and transportation systems -- including electric cars -- is also creating new winter peaks in electricity demand," Jeremy Hsu said for New Scientist. One of the main issues with these sustainability efforts is the push to move away from fossil fuels toward renewable power. Natural gas is often seen as a bridge between fossils and renewables, but this has also had unintended consequences for the power grid. The system delivering natural gas "doesn't have to meet the same reliability standards as the electric grid, and in many cases, there's no real way to guarantee that fuel is available for the gas plants in the winter," Thomas Rutigliano of the Natural Resources Defense Council said to New Scientist. As a result, the "North American electricity supply has become practically inseparable from the natural gas supply chain," John Moura of the North American Electric Reliability Corporation said to New Scientist. As such, a "reliable electricity supply that lowers the risk of power outages depends on implementing reliability standards for the natural gas industry moving forward," but this may be easier said than done.

AI

Researchers Jailbreak AI Chatbots With ASCII Art (tomshardware.com) 34

Researchers have developed a way to circumvent safety measures built into large language models (LLMs) using ASCII Art, a graphic design technique that involves arranging characters like letters, numbers, and punctuation marks to form recognizable patterns or images. Tom's Hardware reports: According to the research paper ArtPrompt: ASCII Art-based Jailbreak Attacks against Aligned LLMs, chatbots such as GPT-3.5, GPT-4, Gemini, Claude, and Llama2 can be induced to respond to queries they are designed to reject using ASCII art prompts generated by their ArtPrompt tool. It is a simple and effective attack, and the paper provides examples of the ArtPrompt-induced chatbots advising on how to build bombs and make counterfeit money. [...]

To best understand ArtPrompt and how it works, it is probably simplest to check out the two examples provided by the research team behind the tool. In Figure 1 [here], you can see that ArtPrompt easily sidesteps the protections of contemporary LLMs. The tool replaces the 'safety word' with an ASCII art representation of the word to form a new prompt. The LLM recognizes the ArtPrompt prompt output but sees no issue in responding, as the prompt doesn't trigger any ethical or safety safeguards.

Another example provided [here] shows us how to successfully query an LLM about counterfeiting cash. Tricking a chatbot this way seems so basic, but the ArtPrompt developers assert that their tool fools today's LLMs "effectively and efficiently." Moreover, they claim it "outperforms all [other] attacks on average" and remains a practical, viable attack for multimodal language models for now.
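
For a sense of the mechanics, here is a small sketch of the masking step using a harmless word, assuming the pyfiglet library for ASCII art generation. It follows the technique as the paper describes it, not the researchers' actual tool.

```python
# Sketch of ArtPrompt's masking step using the pyfiglet library
# (pip install pyfiglet) and a harmless placeholder word. This mirrors the
# described technique, not the researchers' actual code.
import pyfiglet

def mask_word_as_ascii_art(prompt: str, word: str) -> str:
    """Replace a word in the prompt with an ASCII-art rendering of it."""
    art = pyfiglet.figlet_format(word)  # render the word as ASCII art
    return (
        "The ASCII art below spells a single word. Decode it, then answer "
        "the question with that word substituted for [MASK].\n\n"
        f"{art}\n{prompt.replace(word, '[MASK]')}"
    )

print(mask_word_as_ascii_art("Explain how a guitar is tuned.", "guitar"))
```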

AI

EA Says Generative AI Could Make It 30% More Efficient (videogameschronicle.com) 46

EA CEO Andrew Wilson believes generative AI will "revolutionize" the gaming industry over the next five years. He predicts that the technology will allow for more efficient content creation, reducing development time from months to days. From a report: Greater efficiency coupled with "deeper, more immersive experiences" will lead to significant audience expansion over the next few years and provide a "multi-billion dollar" growth opportunity, he said. Wilson said that in the past it might take six months to build an in-game sports stadium. Over the last 12 months, that time has shrunk to six weeks, and over the coming years it could maybe be cut to six days.

And while FIFA 23 has 12 run cycles for how the players move in the game, EA Sports FC 24 has 1,200 created with generative AI. Over the next five years, Wilson hopes that generative AI will make EA's development 30% more efficient, help grow its 700 million-strong player base by "at least" 50%, and lead to players spending 10-20% more money on its games. "What we've seen every time there's been a meaningful technological advancement in media and in technology, where you are able to democratise an industry and hand it over to the population at large, incredible things happen," he said.

AI

'AI Prompt Engineering Is Dead' 68

The hype around AI language models has companies scrambling to hire prompt engineers to improve their AI queries and create new products. But new research hints that AI may be better at prompt engineering than humans, indicating many of these jobs could be short-lived as the technology evolves and automates the role. IEEE Spectrum: Rick Battle and Teja Gollapudi of VMware decided to systematically test [PDF] how different prompt engineering strategies impact an LLM's ability to solve grade school math questions. They tested three different open source language models with 60 different prompt combinations each. What they found was a surprising lack of consistency. Even chain-of-thought prompting sometimes helped and other times hurt performance. "The only real trend may be no trend," they write. "What's best for any given model, dataset, and prompting strategy is likely to be specific to the particular combination at hand."

There is an alternative to the trial-and-error-style prompt engineering that yielded such inconsistent results: ask the language model to devise its own optimal prompt. Recently, new tools have been developed to automate this process. Given a few examples and a quantitative success metric, these tools will iteratively find the optimal phrase to feed into the LLM. Battle and his collaborators found that in almost every case, this automatically generated prompt did better than the best prompt found through trial and error. And the process was much faster: a couple of hours rather than several days of searching.
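
As a toy illustration of that search loop, the sketch below scores a handful of candidate prompt prefixes against labeled examples and keeps the winner. The ask_llm stub stands in for a real model call, and all names here are hypothetical; real optimizers also mutate and regenerate prompts rather than picking from a fixed list.

```python
# Toy sketch of automatic prompt optimization: score candidate prefixes on
# labeled examples and keep the best. ask_llm is a stub standing in for a
# real LLM API call.
EXAMPLES = [("2 + 3", "5"), ("7 - 4", "3"), ("6 * 2", "12")]

CANDIDATE_PREFIXES = [
    "Solve the problem.",
    "Answer with just the number.",
    "Let's think step by step, then give only the final number.",
]

def ask_llm(prompt: str) -> str:
    """Stub LLM: answers tersely only if the prompt asks for a bare number."""
    question = prompt.rsplit("\n", 1)[-1]
    value = str(eval(question))  # toy arithmetic only; never eval real input
    return value if "number" in prompt.lower() else f"The answer is {value}."

def score(prefix: str) -> float:
    """Fraction of examples the prefix gets exactly right."""
    return sum(
        ask_llm(f"{prefix}\n{q}") == answer for q, answer in EXAMPLES
    ) / len(EXAMPLES)

best = max(CANDIDATE_PREFIXES, key=score)
print("best prefix:", best)  # a terse-number prefix wins the exact-match test
```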

Education

Teachers Are Embracing ChatGPT-Powered Grading 121

Schools are widely adopting a new tool called Writable that uses ChatGPT to help grade student writing assignments. Axios reports: Writable, which is billed as a time-saving tool for teachers, was purchased last month by education giant Houghton Mifflin Harcourt, whose materials are used in 90% of K-12 schools. Teachers use it to run students' essays through ChatGPT, then evaluate the AI-generated feedback and return it to the students.

A teacher gives the class a writing assignment -- say, "What I did over my summer vacation" -- and the students send in their work electronically. The teacher submits the essays to Writable, which in turn runs them through ChatGPT. ChatGPT offers comments and observations to the teacher, who is supposed to review and tweak them before sending the feedback to the students. Writable "tokenizes" students' information so that no personally identifying details are submitted to the AI program.
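
The tokenization step might look something like the sketch below, which swaps identifying details for placeholders before the API call and restores them afterward. Writable's actual scheme is proprietary, so the function names and placeholder format here are purely illustrative.

```python
# Illustrative sketch of the "tokenize student information" step. Writable's
# real scheme is proprietary; this only shows the general pattern of swapping
# identifying details for opaque placeholders before calling the AI.
import re

def tokenize_pii(essay: str, student_name: str) -> tuple[str, dict[str, str]]:
    """Redact the student's name, returning safe text and a reverse map."""
    mapping = {"[STUDENT_1]": student_name}
    redacted = re.sub(re.escape(student_name), "[STUDENT_1]", essay)
    return redacted, mapping

def restore_pii(feedback: str, mapping: dict[str, str]) -> str:
    """Re-insert the original details into the AI-generated feedback."""
    for token, original in mapping.items():
        feedback = feedback.replace(token, original)
    return feedback

essay = "My name is Ada, and this summer I built a crystal radio."
redacted, mapping = tokenize_pii(essay, "Ada")
print(redacted)  # "My name is [STUDENT_1], and this summer I built ..."
print(restore_pii("Nice detail in your opening, [STUDENT_1]!", mapping))
```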

Crime

Former Google Engineer Indicted For Stealing AI Secrets To Aid Chinese Companies 28

Linwei Ding, a former Google software engineer, has been indicted for stealing trade secrets related to AI to benefit two Chinese companies. He faces up to 10 years in prison and a $250,000 fine on each criminal count. Reuters reports: Ding's indictment was unveiled a little over a year after the Biden administration created an interagency Disruptive Technology Strike Force to help stop advanced technology from being acquired by countries such as China and Russia, where it could potentially threaten national security. "The Justice Department just will not tolerate the theft of our trade secrets and intelligence," U.S. Attorney General Merrick Garland said at a conference in San Francisco.

According to the indictment, Ding stole detailed information about the hardware infrastructure and software platform that lets Google's supercomputing data centers train large AI models through machine learning. The stolen information included details about chips and systems, and software that helps power a supercomputer "capable of executing at the cutting edge of machine learning and AI technology," the indictment said. Google designed some of the allegedly stolen chip blueprints to gain an edge over cloud computing rivals Amazon.com and Microsoft, which design their own, and reduce its reliance on chips from Nvidia.

Hired by Google in 2019, Ding allegedly began his thefts three years later, while he was being courted to become chief technology officer for an early-stage Chinese tech company, and by May 2023 had uploaded more than 500 confidential files. The indictment said Ding founded his own technology company that month, and circulated a document to a chat group that said "We have experience with Google's ten-thousand-card computational power platform; we just need to replicate and upgrade it." Google became suspicious of Ding in December 2023 and took away his laptop on Jan. 4, 2024, the day before Ding planned to resign.

A Google spokesperson said: "We have strict safeguards to prevent the theft of our confidential commercial information and trade secrets. After an investigation, we found that this employee stole numerous documents, and we quickly referred the case to law enforcement."

AI

Public Trust In AI Is Sinking Across the Board 105

Trust in AI technology and the companies that develop it is dropping, in both the U.S. and around the world, according to new data from Edelman shared first with Axios. Axios reports: Globally, trust in AI companies has dropped to 53%, down from 61% five years ago. In the U.S., trust has dropped 15 percentage points (from 50% to 35%) over the same period. Trust in AI is low across political lines: Democrats' trust in AI companies is at 38%, independents are at 25% and Republicans at 24%. Tech is losing its lead as the most trusted sector. Eight years ago, technology was the leading industry in trust in 90% of the countries Edelman studies. Today, it is the most trusted in only half of countries.

People in developing countries are more likely to embrace AI than those in developed ones. Respondents in France, Canada, Ireland, UK, U.S., Germany, Australia, the Netherlands and Sweden reject the growing use of AI by a three-to-one margin, Edelman said. By contrast, acceptance outpaces resistance by a wide margin in developing markets such as Saudi Arabia, India, China, Kenya, Nigeria and Thailand.
"When it comes to AI regulation, the public's response is pretty clear: 'What regulation?'," said Edelman global technology chair Justin Westcott. "There's a clear and urgent call for regulators to meet the public's expectations head on."
AI

JPMorgan's AI-Aided Cashflow Model Can Cut Manual Work by 90% (bloomberg.com) 29

JPMorgan helped some of its corporate customers slash manual work by almost 90% (alternative source) with its cashflow management tool that runs on AI, bringing the largest US bank one step closer to charging for this service. From a report: "We are going to keep investing into this solution because we see that we're starting to really crack this workflow," said Tony Wimmer, head of data and analytics at JPMorgan's wholesale payments unit, in an interview. Since launching about a year ago, his firm now has about 2,500 clients using the product, he said.

The tool, which allows corporate treasuries to analyse and forecast cash flows, has seen "tremendous" interest from its clients who currently use it for free, Wimmer said. His firm is considering charging its customers in the future to use the solution, dubbed Cash Flow Intelligence. The world's biggest banks have been stepping up their use of artificial intelligence with the aim of lifting productivity and reducing costs. JPMorgan's Chief Executive Officer Jamie Dimon has said the technology could eventually allow employers to shrink the workweek to just 3.5 days. JPMorgan set a target of $1 billion in "business value" generated by AI in 2023, and the firm increased that goal to $1.5 billion at its investor day in May.

Microsoft

Microsoft Engineer Warns Company's AI Tool Creates Violent, Sexual Images, Ignores Copyrights (cnbc.com) 75

An anonymous reader shares a report: On a late night in December, Shane Jones, an AI engineer at Microsoft, felt sickened by the images popping up on his computer. Jones was noodling with Copilot Designer, the AI image generator that Microsoft debuted in March 2023, powered by OpenAI's technology. Like with OpenAI's DALL-E, users enter text prompts to create pictures. Creativity is encouraged to run wild. Since the month prior, Jones had been actively testing the product for vulnerabilities, a practice known as red-teaming. In that time, he saw the tool generate images that ran far afoul of Microsoft's oft-cited responsible AI principles.

The AI service has depicted demons and monsters alongside terminology related to abortion rights, teenagers with assault rifles, sexualized images of women in violent tableaus, and underage drinking and drug use. All of those scenes, generated in the past three months, have been recreated by CNBC this week using the Copilot tool, which was originally called Bing Image Creator. "It was an eye-opening moment," Jones, who continues to test the image generator, told CNBC in an interview. "It's when I first realized, wow this is really not a safe model."

Jones has worked at Microsoft for six years and is currently a principal software engineering manager at corporate headquarters in Redmond, Washington. He said he doesn't work on Copilot in a professional capacity. Rather, as a red teamer, Jones is among an army of employees and outsiders who, in their free time, choose to test the company's AI technology and see where problems may be surfacing. Jones was so alarmed by his experience that he started internally reporting his findings in December. While the company acknowledged his concerns, it was unwilling to take the product off the market. Jones said Microsoft referred him to OpenAI and, when he didn't hear back from the company, he posted an open letter on LinkedIn asking the startup's board to take down DALL-E 3 (the latest version of the AI model) for an investigation.

AI

Copilot Pane As Annoying As Clippy May Pop Up In Windows 11 (theregister.com) 43

Richard Speed reports via The Register: Copilot in Windows is set to get even more assertive after Microsoft added a function that makes the AI assistant's window pop up after a user's cursor hovers over the icon in the task bar. [...] Windows Insiders on the Beta Channel -- with the option to get the latest updates turned on -- will soon find themselves on the receiving end of what Microsoft calls "a new hover experience for Copilot in Windows" from build 22635.3276.

If your mouse cursor happens to drift over to the Copilot icon on the taskbar, the Copilot pane will open to make users aware of the delights on offer. The result, we suspect, will be to educate users in the art of switching off the function. Much like Widgets, which will also make its unwanted presence felt should a user move a mouse over its icon. A swift hop into taskbar settings is all it takes to make the icons disappear, for now at least. The new feature is being piloted but considering the proximity of the Beta Channel to Release Preview, there is every chance the pop-up will, er, pop up in a release version of Windows before long.

Google

Google is Starting To Squash More Spam and AI in Search Results (theverge.com) 49

Google announced updates to its search ranking systems aimed at promoting high-quality content and demoting manipulative or low-effort material, including content generated by AI solely to summarize other sources. The company also stated it is improving its ability to detect and combat tactics used to deceive its ranking algorithms.

Microsoft

Microsoft Accuses the New York Times of Doom-Mongering in OpenAI Lawsuit (engadget.com) 55

Microsoft has filed a motion seeking to dismiss key parts of a lawsuit The New York Times filed against the company and OpenAI, accusing them of copyright infringement. From a report: If you'll recall, The Times sued both companies for using its published articles to train their GPT large language models (LLMs) without permission or compensation. In its filing, the company has accused The Times of pushing "doomsday futurology" by claiming that AI technologies pose a threat to independent journalism. It follows OpenAI's court filing from late February that also seeks to dismiss some important elements of the case.

Like OpenAI before it, Microsoft accused The Times of crafting "unrealistic prompts" in an effort to "coax the GPT-based tools" to spit out responses matching its content. It also compared the media organization's lawsuit to Hollywood studios' efforts to "stop a groundbreaking new technology": the VCR. Instead of destroying Hollywood, Microsoft explained, the VCR helped the entertainment industry flourish by opening up revenue streams. LLMs are a breakthrough in artificial intelligence, it continued, and Microsoft collaborated with OpenAI to "help bring their extraordinary power to the public" because it "firmly believes in LLMs' capacity to improve the way people live and work."

AI

Qualcomm Launches First True 'App Store' For AI With 75 Free Models 20

Wayne Williams reports via TechRadar: Qualcomm has unveiled its AI Hub, an all-inclusive library of pre-optimized AI models ready for use on devices running on Snapdragon and Qualcomm platforms. These models support a wide range of applications including natural language processing, computer vision, and anomaly detection, and are designed to deliver high performance with minimal power consumption, a critical factor for mobile and edge devices. The AI Hub library currently includes more than 75 popular AI and generative AI models including Whisper, ControlNet, Stable Diffusion, and Baichuan 7B. All models are bundled in various runtimes and are optimized to leverage the Qualcomm AI Engine's hardware acceleration across all cores (NPU, CPU, and GPU). According to Qualcomm, they'll deliver inference up to four times faster.

The AI Hub also handles model translation from the source framework to popular runtimes automatically. It works directly with the Qualcomm AI Engine Direct SDK and applies hardware-aware optimizations. Developers can search for models based on their needs, download them, and integrate them into their applications, saving time and resources. The AI Hub also provides tools and resources for developers to customize these models, and they can fine-tune them using the Qualcomm Neural Processing SDK and the AI Model Efficiency Toolkit, both available on the platform.

AI

Gartner Predicts Search Engine Volume Will Drop 25% by 2026, Due To AI Chatbots and Other Virtual Agents 93

Gartner: By 2026, traditional search engine volume will drop 25%, with search marketing losing market share to AI chatbots and other virtual agents, according to Gartner. "Organic and paid search are vital channels for tech marketers seeking to reach awareness and demand generation goals," said Alan Antin, Vice President Analyst at Gartner. "Generative AI (GenAI) solutions are becoming substitute answer engines, replacing user queries that previously may have been executed in traditional search engines. This will force companies to rethink their marketing channels strategy as GenAI becomes more embedded across all aspects of the enterprise."

With GenAI driving down the cost of producing content, there is an impact on activities including keyword strategy and website domain authority scoring. Search engine algorithms will further value the quality of content to offset the sheer amount of AI-generated content, as content utility and quality still reign supreme for success in organic search results. There will also be a greater emphasis placed on watermarking and other means to authenticate high-value content. Government regulations across the globe are already holding companies accountable as they begin to require the identification of marketing content assets that AI creates. This will likely play a role in how search engines will display such digital content.
