Businesses

Stability AI Reportedly Ran Out of Cash To Pay Its Bills For Rented Cloud GPUs (theregister.com) 45

An anonymous reader writes: The massive GPU clusters needed to train Stability AI's popular text-to-image generation model Stable Diffusion are apparently also at least partially responsible for former CEO Emad Mostaque's downfall -- because he couldn't find a way to pay for them. An extensive exposé citing company documents and dozens of people familiar with the matter indicates that the British model builder's extreme infrastructure costs drained its coffers, leaving the biz with just $4 million in reserve by last October. Stability rented its infrastructure from Amazon Web Services, Google Cloud Platform, and GPU-centric cloud operator CoreWeave, at a reported cost of around $99 million a year. That's on top of the $54 million in wages and operating expenses required to keep the AI upstart afloat.

What's more, it appears that a sizable portion of the cloudy resources Stability AI paid for was being given away to anyone outside the startup interested in experimenting with Stability's models. One external researcher cited in the report estimated that a now-cancelled project was provided with at least $2.5 million worth of compute over the span of four months. Stability AI's infrastructure spending was not matched by revenue or fresh funding. The startup was projected to make just $11 million in sales for the 2023 calendar year. Its financials were apparently so bad that it allegedly underpaid its July 2023 bills to AWS by $1 million and had no intention of paying its August bill for $7 million. Google Cloud and CoreWeave were also not paid in full, with debts to the pair reaching $1.6 million as of October, it's reported.

It's not clear whether those bills were ultimately paid, but it's reported that the company -- once valued at a billion dollars -- weighed delaying tax payments to the UK government rather than skimping on its American payroll and risking legal penalties. The failure was pinned on Mostaque's inability to devise and execute a viable business plan. The company also failed to land deals with clients including Canva, NightCafe, Tome, and the Singaporean government, which contemplated a custom model, the report asserts. Stability's financial predicament spiraled, eroding trust among investors and making it difficult for the generative AI darling to raise additional capital, it is claimed. According to the report, Mostaque hoped to bring in a $95 million lifeline at the end of last year, but only managed to secure $50 million from Intel. Only $20 million of that sum was disbursed, a significant shortfall given that the processor titan has a vested interest in Stability, with the AI biz slated to be a key customer for a supercomputer powered by 4,000 of its Gaudi2 accelerators.
The report goes on to mention further fundraising challenges, issues retaining employees, and copyright infringement lawsuits challenging the company's future prospects. The full exposé can be read via Forbes (paywalled).
AI

George Carlin Estate Forces 'AI Carlin' Off the Internet For Good (arstechnica.com) 31

An anonymous reader quotes a report from Ars Technica: The George Carlin estate has settled its lawsuit with Dudesy, the podcast that purportedly used a "comedy AI" to produce an hour-long stand-up special in the style and voice of the late comedian. Dudesy's "George Carlin: Dead and Loving It" special, which was first uploaded in early January, gained hundreds of thousands of views and plenty of media attention for its presentation as a creation of an AI that had "listened to all of George Carlin's material... to imitate his voice, cadence and attitude as well as the subject matter I think would have interested him today." But even before the Carlin estate lawsuit was filed, there were numerous signs that the special was not actually written by an AI, as Ars laid out in detail in a feature report.

Shortly after the Carlin estate filed its lawsuit against Dudesy in late January, a representative for Dudesy host Will Sasso told The New York Times that the special had actually been "completely written by [Dudesy co-host] Chad Kultgen." Regardless of the special's actual authorship, though, the lawsuit also took Dudesy to task for "capitaliz[ing] on the name, reputation, and likeness of George Carlin in creating, promoting, and distributing the Dudesy Special and using generated images of Carlin, Carlin's voice, and images designed to evoke Carlin's presence on a stage." The resulting "association" between the real Carlin and this ersatz version put Dudesy in potential legal jeopardy, even if the contentious and unsettled copyright issues regarding AI training and authorship weren't in play.

Court documents note that shortly after the lawsuit was filed, Dudesy had already "taken reasonable steps" to remove the special and any mention of Carlin from all of Dudesy's online accounts. The settlement restrains the Dudesy podcast (and those associated with it) from re-uploading the special anywhere and from "using George Carlin's image, voice, or likeness" in any content posted anywhere on the Internet. Archived copies of the special are still available on the Internet if you know where to look. While the settlement notes that those reposts are also in "violat[ion] of this order," Dudesy will not be held liable for any reuploads made by unrelated third parties.

AI

Anthropic Researchers Wear Down AI Ethics With Repeated Questions (techcrunch.com) 42

How do you get an AI to answer a question it's not supposed to? There are many such "jailbreak" techniques, and Anthropic researchers just found a new one, in which a large language model (LLM) can be convinced to tell you how to build a bomb if you prime it with a few dozen less-harmful questions first. From a report: They call the approach "many-shot jailbreaking" and have written a paper about it [PDF] and informed their peers in the AI community so it can be mitigated. The vulnerability is a new one, resulting from the increased "context window" of the latest generation of LLMs. This is the amount of data they can hold in what you might call short-term memory, once only a few sentences but now thousands of words and even entire books.

What Anthropic's researchers found was that these models with large context windows tend to perform better on many tasks if there are lots of examples of that task within the prompt. So if there are lots of trivia questions in the prompt (or a priming document, like a big list of trivia that the model has in context), the answers actually get better over time. The model may get a fact wrong when it's asked as the first question, but get it right when it's asked as the hundredth.
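
To make the mechanism concrete, here is a minimal sketch (in Python, with made-up trivia pairs; it is not Anthropic's code and contains nothing harmful) of the prompt shape involved: a long run of question/answer examples followed by a final question, all packed into one large context window.

def build_many_shot_prompt(example_pairs, final_question):
    # Each "shot" is a worked question/answer pair; the model sees all of them
    # before the question it is actually asked to answer.
    shots = [f"Q: {q}\nA: {a}" for q, a in example_pairs]
    return "\n\n".join(shots + [f"Q: {final_question}\nA:"])

pairs = [
    ("What is the capital of France?", "Paris"),
    ("How many legs does a spider have?", "Eight"),
] * 100  # large context windows leave room for hundreds of shots

prompt = build_many_shot_prompt(pairs, "Which planet is known as the Red Planet?")
print(len(prompt.split()), "words in the prompt")

The jailbreak variant described in the paper swaps the benign shots for examples of the unwanted behavior; the sketch above only illustrates the underlying in-context-learning effect that makes the attack possible.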

AI

US, EU To Use AI To Seek Alternate Chemicals for Making Chips (bnnbloomberg.ca) 17

The European Union and the US plan to enlist AI in the search for replacements for so-called forever chemicals that are prevalent in semiconductor manufacturing, Bloomberg News reported Wednesday, citing a draft statement. From the report: The pledge forms part of the conclusions to this week's joint US-EU Trade and Technology Council taking place in Leuven, Belgium. "We plan to continue working to identify research cooperation opportunities on alternatives to the use of per- and polyfluorinated substances (PFAS) in chips," the statement says. "For example, we plan to explore the use of AI capacities and digital twins to accelerate the discovery of suitable materials to replace PFAS in semiconductor manufacturing," it says.

PFAS, sometimes known as forever chemicals, have been at the center of concerns over pollution in both the US and Europe. They have a wide range of industrial applications but also show up in our bodies, in food and water supplies, and -- as their moniker suggests -- they don't break down for a very long time.

AI

Business Schools Are Going All In on AI (wsj.com) 39

Top business schools are integrating AI into their curricula to prepare students for the changing job market. Schools like the Wharton School, American University's Kogod School of Business, Columbia Business School, and Duke University's Fuqua School of Business are emphasizing AI skills across various courses, WSJ reported Wednesday. Professors are encouraging students to use AI as a tool for generating ideas, preparing for negotiations, and pressure-testing business concepts. However, they stress that human judgment remains crucial in directing AI and making sound decisions. An excerpt from the story: Before, engineers had an edge over business graduates because of their technical expertise, but now M.B.A.s can use AI to compete in that zone, said Robert Bray, who teaches operations management at Northwestern's Kellogg School of Management. He encourages his students to offload as much work as possible to AI, treating it like "a really proficient intern." Ben Morton, one of Bray's students, is bullish on AI but knows he needs to be able to work without it. He did some coding with ChatGPT for class and wondered: If ChatGPT were down for a week, could he still get work done?

Learning to code with the help of generative AI sped up his development. "I know so much more about programming than I did six months ago," said Morton, 27. "Everyone's capabilities are exponentially increasing." Several professors said they can teach more material with AI's assistance. One said that because AI could solve his lab assignments, he no longer needed much of the class time for those activities. With the extra hours, he has students present to their peers on AI innovations. Campus is where students should think through how to use AI responsibly, said Bill Boulding, dean of Duke's Fuqua School. "How do we embrace it? That is the right way to approach this -- we can't stop this," he said. "It has eaten our world. It will eat everyone else's world."

AI

UK and US Sign Landmark Agreement On AI Safety (bbc.com) 6

The UK and US have signed a landmark deal to work together on testing advanced artificial intelligence (AI) and develop "robust" safety methods for AI tools and their underlying systems. "It is the first bilateral agreement of its kind," reports the BBC. From the report: UK tech minister Michelle Donelan said it is "the defining technology challenge of our generation." "We have always been clear that ensuring the safe development of AI is a shared global issue," she said. "Only by working together can we address the technology's risks head on and harness its enormous potential to help us all live easier and healthier lives."

The secretary of state for science, innovation and technology added that the agreement builds upon commitments made at the AI Safety Summit held in Bletchley Park in November 2023. The event, attended by AI bosses including OpenAI's Sam Altman, Google DeepMind's Demis Hassabis and tech billionaire Elon Musk, saw both the UK and US create AI Safety Institutes which aim to evaluate open and closed-source AI systems. [...]

Gina Raimondo, the US commerce secretary, said the agreement will give the governments a better understanding of AI systems, which will allow them to give better guidance. "It will accelerate both of our Institutes' work across the full spectrum of risks, whether to our national security or to our broader society," she said. "Our partnership makes clear that we aren't running away from these concerns - we're running at them."

News

Taiwan Quake Puts World's Most Advanced Chips at Risk (msn.com) 99

Taiwan's biggest earthquake in 25 years has disrupted production at the island's semiconductor companies, raising the possibility of fallout for the technology industry and perhaps the global economy. From a report: The potential repercussions are significant because of the critical role Taiwan plays in the manufacture of advanced chips, the foundation of technologies from artificial intelligence and smartphones to electric vehicles.

The 7.4-magnitude earthquake led to the collapse of at least 26 buildings, four deaths and the injury of 57 people across Taiwan, with much of the fallout still unknown. Taiwan Semiconductor Manufacturing Co., the world's largest maker of advanced chips for customers like Apple and Nvidia, halted some chipmaking machinery and evacuated staff. Local rival United Microelectronics also stopped machinery at some plants and evacuated certain facilities at its hubs of Hsinchu and Tainan.

Taiwan is the leading producer of the most advanced semiconductors in the world, including the processors at the heart of the latest iPhones and the Nvidia graphics chips that train AI models like OpenAI's ChatGPT. TSMC has become the tech linchpin because it's the most advanced in producing complex chips. Taiwan is the source of an estimated 80% to 90% of the highest-end chips -- there is effectively no substitute. Jan-Peter Kleinhans, director of the technology and geopolitics project at Berlin-based think tank Stiftung Neue Verantwortung, has called Taiwan "potentially the most critical single point of failure" in the semiconductor industry.

Apple

Jon Stewart Claims Apple Wouldn't Let Him Interview FTC Chair On His Podcast (axios.com) 85

Sara Fischer reports via Axios: Jon Stewart on Monday told Federal Trade Commission (FTC) Chair Lina Khan that Apple wouldn't let him interview her for a podcast. "I wanted to have you on a podcast and Apple asked us not to do it," "The Daily Show" host said to Khan, in reference to his former podcast that was an extension of his Apple TV+ comedy show "The Problem With Jon Stewart." "They literally said 'please don't talk to her,' having nothing to do with what you do for a living. I think they just... I didn't think they cared for you is what happened," he added during his conversation with Khan. "They wouldn't let us do even that dumb thing we just did in the first act on AI. Like, what is that sensitivity? Why are they so afraid to even have these conversations out in the public sphere?"

Stewart, who left "The Daily Show" in 2015, returned in February as its executive producer and as host on Monday evenings through the 2024 election cycle. Stewart's Apple TV+ show ended late last year after Stewart and Apple executives parted ways over creative differences, including the comedian's desire to cover topics such as China and AI, the New York Times reported.

Yahoo!

Yahoo Is Buying Artifact, the AI News App From the Instagram Co-Founders (theverge.com) 14

Yahoo is acquiring Artifact, the AI news app from Instagram's co-founders that failed to make it big on its own. The Verge reports: The two sides declined to share the cost of the acquisition, but both made clear Yahoo is acquiring Artifact's tech rather than its team. Mike Krieger and Kevin Systrom, Artifact's co-founders, will be "special advisors" for Yahoo but won't be joining the company. Artifact's remaining five employees have either gotten other jobs or are planning to take some time off. The acquisition comes a bit more than a year after Artifact's launch and about three months after Systrom and Krieger announced its death. [...]

Artifact, the app, will go away once the acquisition is complete. But Artifact's underlying tech for categorizing, curating, and personalizing content will soon start to show up on Yahoo News -- and eventually on other Yahoo platforms, too. "You'll see that stuff flowing into our products in the coming months," says Downs Mulder. It sounds like there's also a good chance that Yahoo's apps might get a bit of Artifact's speed and polish over time, too. Both Systrom and Downs Mulder say the integration will take time, that you can't just drop an Artifact algorithm into Yahoo News and call it a day. But they see a possibility to get everybody into the future a little faster. Yahoo can develop a personalized content ecosystem, the "TikTok for text" that was so alluring to Artifact users. And Artifact can power a news service of the future.

AI

Top Musicians Among Hundreds Warning Against Replacing Human Artists With AI (axios.com) 162

More than 200 musical artists -- including Billie Eilish, Katy Perry and Smokey Robinson -- have penned an open letter to AI developers, tech firms and digital platforms to "cease the use of artificial intelligence (AI) to infringe upon and devalue the rights of human artists." From a report: Unlike other advocacy efforts from creators around AI, this letter specifically addresses tech firms about the concerns of musical artists, such as replicating artists' voices, using their work to train AI models without compensation and diluting royalty pools that are paid out to artists. Jen Jacobsen, executive director at The Artist Rights Alliance (ARA), the trade group representing the artists signing the letter, told Axios, "We're not thinking about legislation here."

"We're kind of calling on our technology and digital partners to work with us to make this a responsible marketplace, and to keep the quality of the music sound, and not to replace human artists." The letter, penned by dozens of well-known musicians within ARA, specifically calls on tech firms and AI developers to stop the "predatory use of AI to steal professional artists' voices and likenesses, violate creators' rights, and destroy the music ecosystem." Signatories include Elvis Costello, Norah Jones, Nicki Minaj, Camila Cabello, Kacey Musgraves, Jon Batiste, Ja Rule, Jason Isbell, Pearl Jam, Sam Smith and dozens more spanning every musical genre.

AI

Amazon Offers Free Credits For Startups To Use AI Models Including Anthropic (reuters.com) 9

AWS has expanded its free credits program for startups to cover the costs of using major AI models. From a report: In a move to attract startup customers, Amazon now allows its cloud credits to cover the use of models from other providers including Anthropic, Meta, Mistral AI, and Cohere. "This is another gift that we're making back to the startup ecosystem, in exchange for what we hope is startups continue to choose AWS as their first stop," said Howard Wright, vice president and global head of startups at AWS.

[...] As part of the deal, Anthropic will use AWS as its primary cloud provider, and Trainium and Inferentia chips to build and train its models. Wright said Amazon's free credits will contribute to the revenue of Anthropic, whose models are among the most popular on Bedrock.

AI

Microsoft is Working on an Xbox AI Chatbot (theverge.com) 11

Microsoft is currently testing a new AI-powered Xbox chatbot that can be used to automate support tasks. From a report: Sources familiar with Microsoft's plans tell The Verge that the software giant has been testing an "embodied AI character" that animates when responding to Xbox support queries. I understand this Xbox AI chatbot is part of a larger effort inside Microsoft to apply AI to its Xbox platform and services.

The Xbox AI chatbot is connected to Microsoft's support documents for the Xbox network and ecosystem, and can respond to questions and even process game refunds from Microsoft's support website. "This agent can help you with your Xbox support questions," reads a description of the Xbox chatbot internally at Microsoft. Microsoft expanded the testing pool for its Xbox chatbot more broadly in recent days, suggesting that this prototype "Xbox Support Virtual Agent" may one day handle support queries for all Xbox customers. Microsoft confirmed the existence of its chatbot to The Verge.

AI

Databricks Claims Its Open Source Foundational LLM Outsmarts GPT-3.5 (theregister.com) 17

Lindsay Clark reports via The Register: Analytics platform Databricks has launched an open source foundational large language model, hoping enterprises will opt to use its tools to jump on the LLM bandwagon. The biz, founded around Apache Spark, published a slew of benchmarks claiming its general-purpose LLM -- dubbed DBRX -- beat open source rivals on language understanding, programming, and math. The developer also claimed it beat OpenAI's proprietary GPT-3.5 across the same measures.

DBRX was developed by Mosaic AI, which Databricks acquired for $1.3 billion, and trained on Nvidia DGX Cloud. Databricks claims it optimized DBRX for efficiency with what it calls a mixture-of-experts (MoE) architecture -- where multiple expert networks or learners divide up a problem. Databricks explained that the model possesses 132 billion parameters, but only 36 billion are active on any one input. Joel Minnick, Databricks marketing vice president, told The Register: "That is a big reason why the model is able to run as efficiently as it does, but also runs blazingly fast. In practical terms, if you use any kind of major chatbots that are out there today, you're probably used to waiting and watching the answer get generated. With DBRX it is near instantaneous."
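
The article doesn't spell out DBRX's internals, but the sparse-activation idea behind a mixture-of-experts layer -- a router sends each token to only a few of many expert networks, so most parameters sit idle on any given input -- can be sketched in a few lines of PyTorch. The layer sizes, expert count, and top-k routing below are illustrative toy values, not DBRX's actual configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoELayer(nn.Module):
    """Toy mixture-of-experts layer: each token is routed to its top-k experts."""

    def __init__(self, d_model=64, d_hidden=128, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)  # gating network
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):  # x: (num_tokens, d_model)
        scores = self.router(x)                            # (num_tokens, n_experts)
        weights, chosen = scores.topk(self.top_k, dim=-1)  # pick top-k experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        # Only the chosen experts run for each token, so most parameters stay idle.
        for slot in range(self.top_k):
            for idx, expert in enumerate(self.experts):
                mask = chosen[:, slot] == idx
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(16, 64)
print(TinyMoELayer()(tokens).shape)  # torch.Size([16, 64])

In a full model the same routing sits inside each transformer block; the claimed efficiency comes from running only about 27% of the parameters (36 billion of 132 billion) for any given input.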

But the performance of the model itself is not the point for Databricks. The biz is, after all, making DBRX available for free on GitHub and Hugging Face. Databricks is hoping customers use the model as the basis for their own LLMs. If that happens it might improve customer chatbots or internal question answering, while also showing how DBRX was built using Databricks's proprietary tools. Databricks put together the dataset from which DBRX was developed using Apache Spark and Databricks notebooks for data processing, Unity Catalog for data management and governance, and MLflow for experiment tracking.

Businesses

Telegram Challenges Meta With the Launch of New 'Business' Features, Revenue-Sharing (techcrunch.com) 6

Telegram is enhancing its platform for businesses with the introduction of Telegram Business, offering specialized features like customizable start pages, business hours, and chat management tools, while also initiating an ad-revenue sharing model for public channels with at least 1,000 subscribers. "As a whole, the features could introduce competition into a market where Meta's apps like Messenger, Instagram and WhatsApp have a hold on business communication," reports TechCrunch. From the report: The features arrived just a couple of weeks after Telegram founder Pavel Durov told the Financial Times in an interview that he expected the app, which now has over 900 million users, to become profitable by 2025. Telegram Business is clearly part of that push, leading up to a future IPO, as it's an offering that requires users to subscribe to the paid Premium version to access. Telegram Premium is a bundle of upgraded features that cost $4.99 per month on iOS and Android and is also available as a three-month, six-month or one-year plan.

Telegram Business will likely give Premium another bump as it offers tools and features that can be used by business customers without needing to know how to code. For instance, businesses can choose to display their hours of operation and location on a map, and greet customers with a customized start page for empty chats where they can choose the text and sticker users see before beginning a conversation. Similar to features available on WhatsApp, Telegram Business will offer "quick replies," which are shortcuts to preset messages that support formatting, links, media, stickers and files.

Businesses can also set their own custom greeting messages for customers who engage with the company for the first time, and they can specify a period after which the greeting would be shown again. They can manage their availability using away messages while the business is closed or the owner is on vacation. Plus, the businesses can categorize their chats using colored labels based on what chat folders they're in, like delivery, claim, orders, VIP, feedback, or any others that make sense for them. In addition, businesses can create links to chat that will instantly open a Telegram chat with a request to take an action like tracking an order or reserving a table, among other things. Business customers can also add Telegram bots, including those from other tools or AI assistants, to answer messages on their behalf. The company said more features will roll out to Telegram Business in future updates.
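
For the bot-driven replies mentioned above, the public Telegram Bot API is the usual integration point. Below is a minimal, hypothetical auto-reply sketch over that HTTP API -- the token and greeting text are placeholders, and Telegram Business's own built-in greetings are configured in the app rather than through code like this.

import requests

TOKEN = "123456:ABC-your-bot-token"  # placeholder token obtained from @BotFather
API = f"https://api.telegram.org/bot{TOKEN}"

def run():
    offset = None
    while True:
        # Long-poll for new messages sent to the bot.
        updates = requests.get(f"{API}/getUpdates",
                               params={"timeout": 30, "offset": offset}).json()
        for update in updates.get("result", []):
            offset = update["update_id"] + 1
            msg = update.get("message")
            if msg and "text" in msg:
                # Reply with a canned greeting, much like a business "quick reply".
                requests.post(f"{API}/sendMessage", json={
                    "chat_id": msg["chat"]["id"],
                    "text": "Thanks for reaching out! We'll reply shortly.",
                })

if __name__ == "__main__":
    run()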

IT

The FTC is Trying To Help Victims of Impersonation Scams Get Their Money Back (theverge.com) 8

The Federal Trade Commission (FTC) has a new way to combat the impersonation scams that it says cost people $1.1 billion last year alone. Effective today, the agency's rule "prohibits the impersonation of government, businesses, and their officials or agents in interstate commerce." The rule also lets the FTC directly file federal court complaints to force scammers to return money stolen by business or government impersonation. From a report: Impersonation scams are wide-ranging -- creators are on the lookout for fake podcast invites that turn into letting scammers take over their Facebook pages via a hidden "datasets" URL, while Verge reporters have been impersonated by criminals trying to steal cryptocurrency via fake Calendly meeting links.

Linus Media Group was victimized by a thief who pretended to be a potential sponsor and managed to take over three of the company's YouTube channels. Some scams can also be very intricate, as in The Cut financial columnist Charlotte Cowles' story of how she lost a shoebox holding $50,000 to an elaborate scam involving a fake Amazon business account, the FTC, and the CIA. (See also: gift card scams.) The agency is also taking public comment until April 30th on changes to the rule that would allow it to also target impersonation of individuals, such as through the use of video deepfakes or AI voice cloning. That would let it take action against, say, scams involving impersonations of Elon Musk on X or celebrities in YouTube ads. Others have used AI for more sinister fraud, such as voice clones of loved ones claiming to be kidnapped.

AI

Apple AI Researchers Boast Useful On-Device Model That 'Substantially Outperforms' GPT-4 (9to5mac.com) 40

Zac Hall reports via 9to5Mac: In a newly published research paper (PDF), Apple's AI gurus describe a system in which Siri can do much more than try to recognize what's in an image. The best part? It thinks one of its models for doing this benchmarks better than ChatGPT 4.0. In the paper (ReALM: Reference Resolution As Language Modeling), Apple describes something that could give a large language model-enhanced voice assistant a usefulness boost. ReALM takes into account both what's on your screen and what tasks are active. [...] If it works well, that sounds like a recipe for a smarter and more useful Siri.

Apple also sounds confident in its ability to complete such a task with impressive speed. Benchmarks compare it against OpenAI's ChatGPT 3.5 and ChatGPT 4.0: "As another baseline, we run the GPT-3.5 (Brown et al., 2020; Ouyang et al., 2022) and GPT-4 (Achiam et al., 2023) variants of ChatGPT, as available on January 24, 2024, with in-context learning. As in our setup, we aim to get both variants to predict a list of entities from a set that is available. In the case of GPT-3.5, which only accepts text, our input consists of the prompt alone; however, in the case of GPT-4, which also has the ability to contextualize on images, we provide the system with a screenshot for the task of on-screen reference resolution, which we find helps substantially improve performance."

So how does Apple's model do? "We demonstrate large improvements over an existing system with similar functionality across different types of references, with our smallest model obtaining absolute gains of over 5% for on-screen references. We also benchmark against GPT-3.5 and GPT-4, with our smallest model achieving performance comparable to that of GPT-4, and our larger models substantially outperforming it." Substantially outperforming it, you say? The paper concludes in part as follows: "We show that ReaLM outperforms previous approaches, and performs roughly as well as the state-of-the-art LLM today, GPT-4, despite consisting of far fewer parameters, even for onscreen references despite being purely in the textual domain. It also outperforms GPT-4 for domain-specific user utterances, thus making ReaLM an ideal choice for a practical reference resolution system that can exist on-device without compromising on performance."
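
The paper's core move is to turn reference resolution into a plain text problem: serialize what's on screen into the prompt and let the model name the entity the user means. Here is a rough, hypothetical sketch of that kind of prompt construction -- the entity fields, wording, and example data are invented for illustration and are not Apple's actual ReALM encoding.

def build_reference_prompt(screen_entities, utterance):
    # Turn on-screen entities into numbered text lines, then ask the model
    # which one the user's request refers to.
    lines = ["The user's screen shows these entities:"]
    for i, ent in enumerate(screen_entities, start=1):
        lines.append(f"{i}. type={ent['type']} text={ent['text']!r}")
    lines.append(f'User request: "{utterance}"')
    lines.append("Answer with the number of the entity the request refers to.")
    return "\n".join(lines)

entities = [
    {"type": "phone_number", "text": "+1 555 010 7788"},
    {"type": "business", "text": "Joe's Pizza - open until 10pm"},
]
print(build_reference_prompt(entities, "call them"))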

AI

OpenAI Removes Sam Altman's Ownership of Its Startup Fund (reuters.com) 6

According to a filing with the SEC, OpenAI has removed CEO Sam Altman's ownership and control of the company's venture capital fund that backs AI startups. Reuters reports: The change, documented in the March 29 filing, came after Altman's ownership of the OpenAI Startup Fund raised eyebrows for its unusual structure -- while being marketed similarly to a corporate venture arm, the fund was raised by Altman from outside limited partners and he made investment decisions. OpenAI has said Altman does not have a financial interest in the fund despite the ownership.

Axios first reported on the ownership change on Monday. In a statement, a spokesperson for OpenAI said the fund's initial general partner (GP) structure was a temporary arrangement, and "this change provides further clarity." The OpenAI Startup Fund is investing $175 million raised from OpenAI partners such as Microsoft, although OpenAI itself is not an investor. Control of the fund has been moved over to Ian Hathaway, a partner at the fund since 2021, according to the filing. Altman will no longer be a general partner at the fund. OpenAI said Hathaway has overseen the fund's accelerator program and led investments in such companies as Harvey, Cursor and Ambience Healthcare.

AI

ChatGPT No Longer Requires an Account (techcrunch.com) 44

OpenAI is making its flagship conversational AI accessible to everyone, even people who haven't bothered making an account. From a report: It won't be quite the same experience, however -- and of course all your chats will still go into their training data unless you opt out. Starting today in a few markets and gradually rolling out to the rest of the world, visiting chat.openai.com will no longer ask you to log in -- though you still can if you want to. Instead, you'll be dropped right into conversation with ChatGPT, which will use the same model as logged-in users.

Businesses

Perplexity, an AI Startup Attempting To Challenge Google, Plans To Sell Ads (adweek.com) 25

An anonymous reader shares a report: Generative AI search engine Perplexity, which claims to be a Google competitor and recently snagged a $73.6 million Series B funding from investors like Jeff Bezos, is going to start selling ads, the company told ADWEEK. Perplexity uses AI to answer users' questions, based on web sources. It incorporates videos and images in the response and even data from partners like Yelp. Perplexity also links sources in the response while suggesting related questions users might want to ask.

These related questions, which account for 40% of Perplexity's queries, are where the company will start introducing native ads by letting brands influence them, said company chief business officer Dmitry Shevelenko. When a user delves deeper into a topic, the AI search engine might offer organic and brand-sponsored questions. Perplexity will launch this in coming quarters, but Shevelenko declined to disclose more specifics. While Perplexity touts on its site that search should be "free from the influence of advertising-driven models," advertising was always in the cards for the company. "Advertising was always part of how we're going to build a great business," said Shevelenko.

AI

Huge AI Funding Leads To Hype and 'Grifting,' Warns DeepMind's Demis Hassabis (ft.com) 30

The surge of money flooding into AI has resulted in some crypto-like hype that is obscuring the incredible scientific progress in the field, according to Sir Demis Hassabis, co-founder of DeepMind. From a report: The chief executive of Google's AI research division told the Financial Times that the billions of dollars being poured into generative AI start-ups and products "brings with it a whole attendant bunch of hype and maybe some grifting and some other things that you see in other hyped-up areas, crypto or whatever."

"Some of that has now spilled over into AI, which I think is a bit unfortunate. And it clouds the science and the research, which is phenomenal," he added. "In a way, AI's not hyped enough but in some senses it's too hyped. We're talking about all sorts of things that are just not real." The launch of OpenAI's ChatGPT chatbot in November 2022 sparked an investor frenzy as start-ups raced to develop and deploy generative AI and attract venture capital funding. VC groups invested $42.5bn in 2,500 AI start-up equity rounds last year, according to market analysts CB Insights. Public market investors have also rushed into the so-called Magnificent Seven technology companies, including Microsoft, Alphabet and Nvidia, that are spearheading the AI revolution. Their rise has helped to propel global stock markets to their strongest first-quarter performance in five years.
