AI

SXSW Audiences Loudly Boo Festival Videos Touting the Virtues of AI (variety.com) 65

At this year's SXSW festival, discussions of artificial intelligence's future sparked controversy during screenings of premieres like "The Fall Guy" and "Immaculate." Variety reports: The quick-turnaround video editors at SXSW cut a daily sizzle reel highlighting previous panels, premieres and other events, which then runs before festival screenings. On Tuesday, the fourth edition of that daily video focused on the wide variety of keynotes and panelists in town to discuss AI. Those folks sure seem bullish on artificial intelligence, and the audiences at the Paramount -- many of whom are likely writers and actors who just spent much of 2023 on the picket line trying to rein in the potentially destructive power of AI -- decided to boo the video. Loudly. And frequently.

The boos grew loudest toward the end of the sizzle, when OpenAI's VP of consumer product and head of ChatGPT, Peter Deng, declares on camera, "I actually think that AI fundamentally makes us more human." That is not a popular opinion. Deng participated in the session "AI and Humanity's Co-evolution with Open AI's Head of Chat GPT" on Monday, moderated by SignalFire consumer VC and former TechCrunch editor Josh Constine. Constine appears at the start of the video with another soundbite that drew jeers: "SXSW has always been the digital culture makers, and I think if you look out into this room, you can see that AI is a culture." [...] The groans also grew loud for Magic Leap founder Rony Abovitz, who gave this advice during the "Storyworlds, Hour Blue & Amplifying Humanity Ethically with AI" panel: "Be one of those people who leverages AI, don't be run over by it."
You can hear some of the reactions from festival attendees here, here, and here.
AI

Cognition Emerges From Stealth To Launch AI Software Engineer 'Devin' (venturebeat.com) 95

Longtime Slashdot reader ahbond shares a report from VentureBeat: Today, Cognition, a recently formed AI startup backed by Peter Thiel's Founders Fund and tech industry leaders including former Twitter executive Elad Gil and Doordash co-founder Tony Xu, announced a fully autonomous AI software engineer called "Devin." While there are multiple coding assistants out there, including the famous Github Copilot, Devin is said to stand out from the crowd with its ability to handle entire development projects end-to-end, from writing the code and fixing its bugs to final execution. It is the first offering of its kind, and the startup has demonstrated that it is even capable of handling projects on Upwork. [...]

In a blog post today on Cognition's website, Scott Wu, the founder and CEO of Cognition and an award-winning competitive programmer, explained that Devin can access common developer tools, including its own shell, code editor and browser, within a sandboxed compute environment to plan and execute complex engineering tasks requiring thousands of decisions. The human user simply types a natural language prompt into Devin's chatbot-style interface, and the AI software engineer takes it from there, developing a detailed, step-by-step plan to tackle the problem. It then begins the project using its developer tools, just as a human would, writing its own code, fixing issues, testing and reporting on its progress in real time, allowing the user to keep an eye on everything as it works. [...]
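Cognition hasn't published Devin's internals, but the workflow described above maps onto a familiar plan-then-execute agent loop. Here is a minimal sketch of that pattern in Python; the llm, shell, editor and browser objects are hypothetical stand-ins, not Devin's actual API:

```python
# Hypothetical plan-then-execute coding agent, sketched from Cognition's
# public description. All interfaces here are invented stand-ins.
def run_agent(task, llm, shell, editor, browser, max_steps=100):
    # Turn the natural-language prompt into an ordered, step-by-step plan.
    plan = llm(f"Break this task into ordered engineering steps:\n{task}")
    history = []
    for step in plan:
        for _ in range(max_steps):
            # Ask the model for the next concrete action, given progress so far.
            action = llm(f"Step: {step}\nProgress: {history}\nNext action?")
            if action.kind == "done":
                break  # the model judged this step complete; move on
            if action.kind == "shell":
                result = shell.run(action.command)    # run tests, installs, builds
            elif action.kind == "edit":
                result = editor.apply(action.file, action.patch)
            else:  # "browse"
                result = browser.fetch(action.url)    # read docs, issues, blogs
            history.append((action, result))          # feed results back to the model
    return history
```

The key property this sketch shares with the description is the feedback loop: every shell, edit or browse result goes back into the model's context, which is how such an agent can notice and fix its own bugs mid-project.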

According to demos shared by Wu, Devin is capable of handling a range of tasks in its current form. These range from common engineering projects, like deploying and improving apps/websites end-to-end and finding and fixing bugs in codebases, to more complex things like setting up fine-tuning for a large language model using the link to a research repository on GitHub, or learning how to use unfamiliar technologies. In one case, it learned from a blog post how to run the code to produce images with concealed messages. In another, it handled an Upwork project to run a computer vision model by writing and debugging the code for it. On the SWE-bench test, which challenges AI assistants with GitHub issues from real-world open-source projects, the AI software engineer was able to correctly resolve 13.86% of the cases end-to-end -- without any assistance from humans. In comparison, Claude 2 could resolve just 4.80%, while SWE-Llama-13b and GPT-4 could handle 3.97% and 1.74% of the issues, respectively. And unlike Devin, all of those models required assistance: they were told which file had to be fixed.
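For context on those percentages: SWE-bench's headline metric is simply the share of GitHub issues whose generated patch makes the project's own test suite pass afterward. A toy illustration of the metric; the result format below is invented, not the benchmark's actual harness output:

```python
# Toy illustration of the SWE-bench headline metric: the share of issues
# whose generated patch makes the project's test suite pass afterward.
# The result format is invented, not the benchmark's actual harness output.
def resolve_rate(results):
    resolved = sum(1 for r in results if r["tests_passed"])
    return 100 * resolved / len(results)

# Pretend run over 570 issues in which roughly one in seven is fixed:
results = [{"issue": i, "tests_passed": i % 7 == 0} for i in range(570)]
print(f"{resolve_rate(results):.2f}% resolved end-to-end")  # ~14.4%
```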
Currently, Devin is available only to a select few customers. Bloomberg journalist Ashlee Vance wrote a piece about his experience using it here.

"The Doom of Man is at hand," captions Slashdot reader ahbond. "It will start with the low-hanging Jira tickets, and in a year or two, able to handle 99% of them. In the short term, software engineers may become like bot farmers, herding 10-1000 bots writing code, etc. Welcome to the future."
IT

Modern Workplace Tech Linked To Lower Employee Well-Being, Study Finds (techspot.com) 39

According to a new study from the Institute for the Future of Work, contemporary technology often has a negative impact on workers' quality of life. The think tank surveyed over 6,000 people to learn how four categories of workplace technologies affected their well-being. TechSpot reports the findings: The study found that increased exposure to three of the categories tended to worsen workers' mental state and health. The three areas that negatively impact people most are wearable and remote sensing technologies, which cover CCTV cameras and wearable trackers; robotics, consisting of automated machines, self-driving vehicles, and other equipment; and, unsurprisingly, technologies relating to AI and ML, which include everything from decision management to biometrics. Only one of the categories was found to be beneficial to employees, and it's one that has been around for decades: ICT such as laptops, tablets, phones, and real-time messaging tools.
AI

OpenAI's Sora Text-to-Video Generator Will Be Publicly Available Later This Year (theverge.com) 13

You'll soon get to try out OpenAI's buzzy text-to-video generator for yourself. From a report: In an interview with The Wall Street Journal, OpenAI chief technology officer Mira Murati says Sora will be available "this year" and that it "could be a few months." OpenAI first showed off Sora, which is capable of generating hyperrealistic scenes based on a text prompt, in February. The company only made the tool available for visual artists, designers, and filmmakers to start, but that didn't stop some Sora-generated videos from making their way onto platforms like X.

In addition to making the tool available to the public, Murati says OpenAI has plans to "eventually" incorporate audio, which has the potential to make the scenes even more realistic. The company also wants to allow users to edit the content in the videos Sora produces, as AI tools don't always create accurate images. "We're trying to figure out how to use this technology as a tool that people can edit and create with," Murati tells the Journal. When pressed on what data OpenAI used to train Sora, Murati didn't get too specific and seemed to dodge the question.

Google

Google DeepMind's Latest AI Agent Learned To Play Goat Simulator 3 (wired.com) 13

Will Knight, writing for Wired: Goat Simulator 3 is a surreal video game in which players take domesticated ungulates on a series of implausible adventures, sometimes involving jetpacks. That might seem an unlikely venue for the next big leap in artificial intelligence, but Google DeepMind today revealed an AI program capable of learning how to complete tasks in a number of games, including Goat Simulator 3. Most impressively, when the program encounters a game for the first time, it can reliably perform tasks by adapting what it learned from playing other games. The program is called SIMA, for Scalable Instructable Multiworld Agent, and it builds upon recent AI advances that have seen large language models produce remarkably capable chatbots like ChatGPT.

[...] DeepMind's latest video game project hints at how AI systems like OpenAI's ChatGPT and Google's Gemini could soon do more than just chat and generate images or video, by taking control of computers and performing complex commands. "The paper is an interesting advance for embodied agents across multiple simulations," says Linxi "Jim" Fan, a senior research scientist at Nvidia who works on AI gameplay and was involved with an early effort to train AI to play by controlling a keyboard and mouse with a 2017 OpenAI project called World of Bits. Fan says the Google DeepMind work reminds him of this project as well as a 2022 effort called VPT that involved agents learning tool use in Minecraft.

"SIMA takes one step further and shows stronger generalization to new games," he says. "The number of environments is still very small, but I think SIMA is on the right track." [...] For the SIMA project, the Google DeepMind team collaborated with several game studios to collect keyboard and mouse data from humans playing 10 different games with 3D environments, including No Man's Sky, Teardown, Hydroneer, and Satisfactory. DeepMind later added descriptive labels to that data to associate the clicks and taps with the actions users took, for example whether they were a goat looking for its jetpack or a human character digging for gold. The data trove from the human players was then fed into a language model of the kind that powers modern chatbots, which had picked up an ability to process language by digesting a huge database of text. SIMA could then carry out actions in response to typed commands. And finally, humans evaluated SIMA's efforts inside different games, generating data that was used to fine-tune its performance.
Further reading: DeepMind's blog post.
AI

European Lawmakers Approve Landmark AI Legislation 29

European lawmakers approved the world's most comprehensive legislation yet on AI (non-paywalled link), setting out sweeping rules for developers of AI systems and new restrictions on how the technology can be used. From a report: The European Parliament on Wednesday voted to give final approval to the law after reaching a political agreement last December with European Union member states. The rules, which are set to take effect gradually over several years, ban certain AI uses, introduce new transparency rules and require risk assessments for AI systems that are deemed high-risk. The law comes amid a broader global debate about the future of AI and its potential risks and benefits as the technology is increasingly adopted by companies and consumers. Elon Musk recently sued OpenAI and its chief executive Sam Altman for allegedly breaking the company's founding agreement by prioritizing profit over AI's benefits for humanity. Altman has said AI should be developed with great caution and offers immense commercial possibilities.

The new legislation applies to AI products in the EU market, regardless of where they were developed. It is backed by fines of up to 7% of a company's worldwide revenue. The AI Act is "the first regulation in the world that is putting a clear path towards a safe and human-centric development of AI," said Brando Benifei, an EU lawmaker from Italy who helped lead negotiations on the law. The law still needs final approval from EU member states, but that process is expected to be a formality since they already gave the legislation their political endorsement. While the law only applies in the EU, it is expected to have a global impact because large AI companies are unlikely to want to forgo access to the bloc, which has a population of about 448 million people. Other jurisdictions could also use the new law as a model for their AI regulations, contributing to a wider ripple effect.
Transportation

Apple Developed Chip Equivalent To Four M2 Ultras For Apple Car Project (9to5mac.com) 61

After 10 years and billions of dollars spent on development, Apple abruptly canceled its ambitious car project, known as "Titan," shifting its focus and resources to the company's artificial intelligence division. In a recent Q&A on Monday, Bloomberg's Mark Gurman (paywalled) shared some new insights about the project and how involved the Apple Silicon team was before it was shut down. According to Gurman, Apple was planning to power the "AI brain" of the car with a custom Apple Silicon chip that would have the equivalent power of four M2 Ultra chips (the most powerful Apple has to date) combined. 9to5Mac reports: A single M2 Ultra chip consists of 134 billion transistors and features a 24-core CPU, a GPU with up to 76 cores, and a dedicated 32-core Neural Engine. M2 Ultra powers the current generation of Mac Studio and Mac Pro. Interestingly, Gurman says that the development of this new chip for the car was "nearly finished" before the project was discontinued. As some of the engineers working on the car project were reassigned to other teams at Apple, the company could reuse the engineering of this new chip for future projects.
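Taking Gurman's "four M2 Ultras" description literally and assuming naive linear scaling -- an assumption Apple never confirmed -- the back-of-the-envelope totals are easy to run:

```python
# Back-of-the-envelope scaling of the "four M2 Ultras" claim. Assumes
# naive 4x linear scaling, which Apple never confirmed.
m2_ultra = {"transistors": 134e9, "cpu_cores": 24, "gpu_cores": 76,
            "neural_engine_cores": 32}
car_chip = {spec: 4 * count for spec, count in m2_ultra.items()}
print(car_chip)
# {'transistors': 536000000000.0, 'cpu_cores': 96,
#  'gpu_cores': 304, 'neural_engine_cores': 128}
```

That would put the canceled part at roughly 536 billion transistors, far beyond anything Apple ships today.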
The Courts

New York Times Denies OpenAI's 'Hacking' Claim In Copyright Fight 25

An anonymous reader quotes a report from Reuters: The New York Times has denied claims by OpenAI that it "hacked" the company's artificial intelligence systems to create misleading evidence of copyright infringement, calling the accusation "as irrelevant as it is false." The Times in a court filing on Monday said OpenAI was "grandstanding" in its request to dismiss parts of the newspaper's lawsuit alleging its articles were misused for artificial intelligence training. The Times sued OpenAI and its largest financial backer Microsoft in December, accusing them of using millions of its articles without permission to train chatbots to provide information to users.

The newspaper is among several prominent copyright owners including authors, visual artists and music publishers that have sued tech companies over the alleged misuse of their work in AI training. The Times' complaint cited several instances in which programs like OpenAI's popular chatbot ChatGPT gave users near-verbatim excerpts of its articles when prompted. OpenAI responded last month that the Times had paid an unnamed "hired gun" to manipulate its products into reproducing the newspaper's content. It asked the court to dismiss parts of the case, including claims that its AI-generated content infringes the Times' copyrights. "In the ordinary course, one cannot use ChatGPT to serve up Times articles at will," OpenAI said. The company also said it would eventually prove that its AI training made fair use of copyrighted content.

The Times replied on Monday that it had simply used the "first few words or sentences" of its articles to prompt ChatGPT to recreate them. "OpenAI's true grievance is not about how The Times conducted its investigation, but instead what that investigation exposed: that Defendants built their products by copying The Times's content on an unprecedented scale -- a fact that OpenAI does not, and cannot, dispute," the Times said.
AI

China Puts Trust in AI To Maintain Largest High-Speed Rail Network on Earth 17

China is using AI in the operation of its 45,000km (28,000-mile) high-speed rail network, with the technology achieving several milestones, according to engineers involved in the project. From a report: An AI system in Beijing is processing vast amounts of real-time data from across the country and can alert maintenance teams to abnormal situations within 40 minutes, with an accuracy as high as 95 per cent, they said in a peer-reviewed paper. "This helps on-site teams conduct reinspections and repairs as quickly as possible," wrote Niu Daoan, a senior engineer at the China State Railway Group's infrastructure inspection centre, in the paper published by the academic journal China Railway.
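The paper doesn't disclose the models involved, but functionally the system described is streaming anomaly detection over track and infrastructure sensor data. A minimal sketch of that general pattern; the window size, threshold and data layout below are invented for illustration:

```python
# Minimal streaming anomaly detector of the kind the paper describes at a
# much larger scale. Window size, threshold and data layout are invented.
from collections import deque

def rolling_zscore_alerts(readings, window=288, threshold=4.0, min_history=30):
    """Yield (sensor_id, value) for readings far outside recent history.
    `readings` is an iterable of (sensor_id, value) pairs."""
    history = {}
    for sensor_id, value in readings:
        buf = history.setdefault(sensor_id, deque(maxlen=window))
        if len(buf) >= min_history:  # need a baseline before alerting
            mean = sum(buf) / len(buf)
            std = (sum((x - mean) ** 2 for x in buf) / len(buf)) ** 0.5
            if std > 0 and abs(value - mean) / std > threshold:
                yield sensor_id, value  # hand off for reinspection
        buf.append(value)
```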

In the past year, none of China's operational high-speed railway lines received a single warning that required speed reduction due to major track irregularity issues, while the number of minor track faults decreased by 80 per cent compared to the previous year. According to the paper, the amplitude of rail movement caused by strong winds also decreased -- even on massive valley-spanning bridges -- with the application of AI technology. [...] According to the paper, after years of effort Chinese railway scientists and engineers have "solved challenges" in comprehensive risk perception, equipment evaluation, and precise trend predictions in engineering, power supply and telecommunications. The result was "scientific support for achieving proactive safety prevention and precise infrastructure maintenance for high-speed railways," the engineers said.
AI

"We Asked Intel To Define 'AI PC.' Its reply: 'Anything With Our Latest CPUs'" (theregister.com) 35

An anonymous reader shares a report: If you're confused about what makes a PC an "AI PC," you're not alone. But we finally have something of an answer: if it packs a GPU, a processor that boasts a neural processing unit, and can handle VNNI and DP4a instructions, it qualifies -- at least according to Robert Hallock, Intel's senior director of technical marketing. As luck would have it, that combo is present in Intel's current-generation Core Ultra mobile processors, aka "Meteor Lake." All models feature a GPU and an NPU, and can handle the Vector Neural Network Instructions (VNNI) that speed some -- surprise! -- neural networking tasks, and the DP4a instructions that help GPUs to process video.
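On Linux you can check one leg of that recipe yourself, since VNNI support is exposed as CPU feature flags; DP4a is a GPU-side instruction and the NPU has no equivalent flag, so this sketch covers only the CPU portion:

```python
# Check for VNNI support on Linux by reading CPU feature flags from
# /proc/cpuinfo. DP4a is a GPU instruction and NPUs expose no CPU flag,
# so this covers only part of Intel's "AI PC" recipe.
def cpu_flags():
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
print("AVX-VNNI:   ", "avx_vnni" in flags)
print("AVX512-VNNI:", "avx512_vnni" in flags)
```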

Because AI PCs are therefore just PCs with current processors, Intel doesn't consider "AI PC" to be a brand that denotes conformity with a spec or a particular capability not present in other PCs. Intel used the "Centrino" brand to distinguish Wi-Fi-enabled PCs, and did likewise by giving home entertainment PCs the "Viiv" moniker. Chipzilla still uses the tactic with "vPro" -- a brand that denotes processors that include manageability and security for business users. But AI PCs are neither a brand nor a spec. "The reason we have not created a category for it like Centrino is we believe this is simply what a PC will be like in four or five years' time," Hallock told The Register, adding that Intel's recipe for an AI PC doesn't include specific requirements for memory, storage, or I/O speeds. "There are cases where a very large LLM might require 32GB of RAM," he noted. "Everything else will fit comfortably in a 16GB system."
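Hallock's RAM figures track the usual back-of-the-envelope rule for running models locally: weight memory is roughly parameter count times bytes per parameter, plus overhead for activations and the KV cache. A worked example; the model sizes and 20% overhead factor are illustrative, not Intel's sizing:

```python
# Rough local-LLM memory rule of thumb: params * bytes-per-param, plus
# ~20% overhead for activations and KV cache. Figures are illustrative.
def approx_ram_gb(params_billion, bits_per_param, overhead=1.2):
    return params_billion * (bits_per_param / 8) * overhead

for params, bits in [(7, 4), (13, 8), (70, 4)]:
    print(f"{params}B model @ {bits}-bit: ~{approx_ram_gb(params, bits):.1f} GB")
# 7B @ 4-bit:  ~4.2 GB  (fits a 16GB system easily)
# 13B @ 8-bit: ~15.6 GB (tight on a 16GB system)
# 70B @ 4-bit: ~42.0 GB (needs well beyond 32GB)
```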

AI

Gold-Medalist Coders Build an AI That Can Do Their Job for Them (bloomberg.com) 27

A new startup called Cognition AI has built a tool that can turn a user's prompt into a website or video game. From a report: A new installment of Silicon Valley's most exciting game, Are We in a Bubble?!, has begun. This time around the game's premise hinges on whether AI technology is poised to change the world as the consumer internet did -- or even more dramatically -- or peter out and leave us with some advances but not a new global economy. This game isn't easy to play, and the available data points often prove more confusing than enlightening. Take the case of Cognition AI Inc.

You almost certainly have not heard of this startup, in part because it's been trying to keep itself secret and in part because it didn't even officially exist as a corporation until two months ago. And yet this very, very young company, whose 10-person staff has been splitting time between Airbnbs in Silicon Valley and home offices in New York, has raised $21 million from Peter Thiel's venture capital firm Founders Fund and other brand-name investors, including former Twitter executive Elad Gil. They're betting on Cognition AI's team and its main invention, which is called Devin.

Devin is a software development assistant in the vein of Copilot, which was built by GitHub, Microsoft and OpenAI, but, like, a next-level software development assistant. Instead of just offering coding suggestions and autocompleting some tasks, Devin can take on and finish an entire software project on its own. To put it to work, you give it a job -- "Create a website that maps all the Italian restaurants in Sydney," say -- and the software performs a search to find the restaurants, gets their addresses and contact information, then builds and publishes a site displaying the information. As it works, Devin shows all the tasks it's performing and finds and fixes bugs on its own as it tests the code being written. The founders of Cognition AI are Scott Wu, its chief executive officer; Steven Hao, the chief technology officer; and Walden Yan, the chief product officer. Hao was most recently one of the top engineers at Scale AI, a richly valued startup that helps train AI systems. Yan, until recently at Harvard University, requested that his status at the school be left ambiguous because he hasn't yet had the talk with his parents.

Google

Google Restricts AI Chatbot Gemini From Answering Queries on Global Elections (reuters.com) 53

Google is restricting AI chatbot Gemini from answering questions about the global elections set to happen this year, the Alphabet-owned firm said on Tuesday, as it looks to avoid potential missteps in the deployment of the technology. From a report: The update comes at a time when advancements in generative AI, including image and video generation, have fanned concerns of misinformation and fake news among the public, prompting governments to regulate the technology.

When asked about elections such as the upcoming U.S. presidential match-up between Joe Biden and Donald Trump, Gemini responds with "I'm still learning how to answer this question. In the meantime, try Google Search". Google had announced restrictions within the U.S. in December, saying they would come into effect ahead of the election. "In preparation for the many elections happening around the world in 2024 and out of an abundance of caution, we are restricting the types of election-related queries for which Gemini will return responses," a company spokesperson said on Tuesday.

The Internet

World Wide Web Inventor's Top Predictions as It Turns 35 (cnbc.com) 23

An anonymous reader shares a report: Personal artificial intelligence assistants that know our health status and legal history inside out. The ability to transfer your data from one place to another seamlessly without any roadblocks. These are just some of the predictions for the future of the web from the inventor of the World Wide Web, Tim Berners-Lee, on the 35th anniversary of its invention.

[...] Another thing Berners-Lee says might happen in the future is a big tech company being forced to break up. [...] Berners-Lee said he always prefers it when tech companies "do the right thing by themselves" before regulators step in. "That's always been the spirit of the internet." He uses the example of the Data Transfer Initiative, a private initiative that launched in 2018 and is now backed by the likes of Google, Apple, and Meta, to encourage portability of photos, videos and other data between their platforms.

"Maybe the companies were prompted a bit by the possibility of regulation," Berners-Lee said. "But this was an independent thing." However, he added: "Things are changing so quickly. AI is changing very, very quickly. There are monopolies in AI. Monopolies changed pretty quickly back in the web. Maybe at some point in the future, agencies will have to work to break up big companies, but we don't know which company that will be."

AI

Midjourney Bans All Stability AI Employees Over Alleged Data Scraping (theverge.com) 12

Jess Weatherbed reports via The Verge: Midjourney says it has banned Stability AI staffers from using its service, accusing employees at the rival generative AI company of causing a systems outage earlier this month during an attempt to scrape Midjourney's data. Midjourney posted an update to its Discord server on March 2nd that acknowledged an extended server outage was preventing generated images from appearing in user galleries. In a summary of a business update call on March 6th, Midjourney claimed that "botnet-like activity from paid accounts" -- which the company specifically links to Stability AI employees -- was behind the outage.

According to Midjourney user Nick St. Pierre on X, who listened to the call, Midjourney said that the service was brought down because "someone at Stability AI was trying to grab all the prompt and image pairs in the middle of a night on Saturday." St. Pierre said that Midjourney had linked multiple paid accounts to an individual on the Stability AI data team. In its summary of the business update call on March 6th (which Midjourney refers to as "office hours"), the company says it's banning all Stability AI employees from using its service "indefinitely" in response to the outage. Midjourney is also introducing a new policy that will similarly ban employees of any company that exercises "aggressive automation" or causes outages to the service.

St. Pierre flagged the accusations to Stability AI CEO Emad Mostaque, who replied on X, saying he was investigating the situation and that Stability hadn't ordered the actions in question. "Very confusing how 2 accounts would do this team also hasn't been scraping as we have been using synthetic & other data given SD3 outperforms all other models," said Mostaque, referring to the Stable Diffusion 3 AI model currently in preview. He claimed that if the outage was caused by a Stability employee, then it was unintentional and "obviously not a DDoS attack." Midjourney founder David Holz responded to Mostaque in the same thread, claiming to have sent him "some information" to help with his internal investigation.

AI

US Must Move 'Decisively' To Avert 'Extinction-Level' Threat From AI, Gov't-Commissioned Report Says (time.com) 139

The U.S. government must move "quickly and decisively" to avert substantial national security risks stemming from artificial intelligence (AI) which could, in the worst case, cause an "extinction-level threat to the human species," says a U.S. government-commissioned report published on Monday. Time: "Current frontier AI development poses urgent and growing risks to national security," the report, which TIME obtained ahead of its publication, says. "The rise of advanced AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons." AGI is a hypothetical technology that could perform most tasks at or above the level of a human. Such systems do not currently exist, but the leading AI labs are working toward them and many expect AGI to arrive within the next five years or less.

The three authors of the report worked on it for more than a year, speaking with more than 200 government employees, experts, and workers at frontier AI companies -- like OpenAI, Google DeepMind, Anthropic and Meta -- as part of their research. Accounts from some of those conversations paint a disturbing picture, suggesting that many AI safety workers inside cutting-edge labs are concerned about perverse incentives driving decisionmaking by the executives who control their companies. The finished document, titled "An Action Plan to Increase the Safety and Security of Advanced AI," recommends a set of sweeping and unprecedented policy actions that, if enacted, would radically disrupt the AI industry. Congress should make it illegal, the report recommends, to train AI models using more than a certain level of computing power.

The threshold, the report recommends, should be set by a new federal AI agency, although the report suggests, as an example, that the agency could set it just above the levels of computing power used to train current cutting-edge models like OpenAI's GPT-4 and Google's Gemini. The new AI agency should require AI companies on the "frontier" of the industry to obtain government permission to train and deploy new models above a certain lower threshold, the report adds. Authorities should also "urgently" consider outlawing the publication of the "weights," or inner workings, of powerful AI models, for example under open-source licenses, with violations possibly punishable by jail time, the report says. And the government should further tighten controls on the manufacture and export of AI chips, and channel federal funding toward "alignment" research that seeks to make advanced AI safer, it recommends.
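For a sense of what a compute threshold actually measures: total training compute for transformer models is commonly approximated as C ≈ 6 × N × D, where N is the parameter count and D the number of training tokens. A worked example; the model size and token count below are illustrative estimates, not figures from the report:

```python
# Common approximation for transformer training compute: C ~= 6 * N * D,
# with N = parameters and D = training tokens. Example numbers are
# illustrative estimates, not figures from the report.
def training_flops(params, tokens):
    return 6 * params * tokens

# e.g., a 70-billion-parameter model trained on 2 trillion tokens:
print(f"{training_flops(70e9, 2e12):.2e} FLOPs")  # ~8.40e+23 FLOPs
```

A regulator-set cap would simply be a line drawn on that axis: training runs whose estimated C exceeds the threshold would need government permission, or be barred outright.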

AI

Jensen Huang Says Even Free AI Chips From Competitors Can't Beat Nvidia's GPUs 50

An anonymous reader shares a report: Nvidia CEO Jensen Huang recently took to the stage to claim that Nvidia's GPUs are "so good that even when the competitor's chips are free, it's not cheap enough." Huang further explained that Nvidia GPU pricing isn't really significant in terms of an AI data center's total cost of ownership (TCO). The impressive scale of Nvidia's achievements in powering the booming AI industry is hard to deny; the company recently became the world's third most valuable company thanks largely to its AI-accelerating GPUs. But Jensen's comments are sure to be controversial, as he dismisses a whole constellation of rivals, from AMD and Intel to a range of companies building ASICs and other types of custom AI silicon.

Starting at 22:32 of the YouTube recording, John Shoven, Former Trione Director of SIEPR and the Charles R. Schwab Professor Emeritus of Economics, Stanford University, asks, "You make completely state-of-the-art chips. Is it possible that you'll face competition that claims to be good enough -- not as good as Nvidia -- but good enough and much cheaper? Is that a threat?" Jensen Huang begins his response by unpacking his tiny violin. "We have more competition than anyone on the planet," claimed the CEO. He told Shoven that even Nvidia's customers are its competitors, in some cases. Also, Huang pointed out that Nvidia actively helps customers who are designing alternative AI processors and goes as far as revealing to them what upcoming Nvidia chips are on the roadmap.
AI

xAI Will Open-Source Grok This Week (techcrunch.com) 44

Elon Musk's AI startup xAI will open-source Grok, its chatbot rivaling ChatGPT, this week, the entrepreneur said, days after suing OpenAI and complaining that the Microsoft-backed startup had deviated from its open-source roots. From a report: xAI released Grok last year, arming it with features including access to "real-time" information and views undeterred by "politically correct" norms. The service is available to customers paying for X's $16 monthly subscription.
Businesses

Does Reddit Represent the Return of the Junk Stock IPO? (forbes.com) 74

An article in Inc notes a "wild projection" in Reddit's SEC filing: that Reddit's global market opportunity by 2027 is $1.4 trillion. Some of the numbers lead back to a single individual: Sam Altman. The co-founder and chief executive of ChatGPT-maker OpenAI owns an 8.7 percent stake in Reddit, more than its co-founder and CEO, Steve Huffman, who owns 3.3 percent... Altman, through various funds and holding companies he owns or manages, controls more than a million shares of Reddit at $60 million in aggregate purchase price -- and holds more than 9 percent of voting rights...

Discussing Reddit's future, financial analyst and journalist Herb Greenberg recently told CNBC, "This is an AI play."

But the senior investing editor for Kiplinger.com argues that retail investors "may want to hold tight before rushing out to buy the Reddit IPO." While IPO stocks tend to have strong first-day showings, returns for the first year are generally weak, says the team of analysts at Trivariate Research, a market research firm based in New York. And since 2020, "the average IPO has lagged its industry average by 30% over the subsequent three years following its first closing price..."

Other commenters have noted that Reddit's allotment of shares to select Redditors could lower demand on the first day of trading, which would work against any IPO pop.

"Over the past few years, there have been a bunch of IPOs in the U.S. in which overhyped names enjoyed flashy stock-market debuts only to drop sharply soon after," notes the Street. Notable examples include Coinbase, which plummeted by almost 90% after its debut, Robinhood, still down 53% since its IPO, and Rivian, down over 91% since its debut. However, it's crucial to note that all of these IPOs occurred in 2021 amid market euphoria fueled by low interest rates, significant economic stimulus, and the lingering effects of the Covid-19 pandemic. Although the current macroeconomic landscape differs from three years ago, valuations of tech and growth stocks remain stretched.
Kiplinger.com concludes it "boils down to your own personal investing goals and risk tolerance. If you do decide to buy Reddit stock when it first begins trading, do so in a small amount that you can afford to lose."

But they also cite analysis from David Trainer, CEO of New Constructs, a research firm powered by artificial intelligence. "Reddit's IPO marks the return of the junk IPO," Trainer wrote in Forbes. "[The valuation] implies that Reddit will grow its user base to 26 times current levels, which would be nearly five times the size of [Snapchat-maker] Snap, and a highly unlikely feat. Reddit looks overvalued, and we think investors should pass on this IPO."

Trainer writes: [T]he company has never been profitable and should not be a publicly traded company... I think the company may never monetize its platform without angering its users and the entire premise of Reddit is user-generated content. This business model is inescapably built on a catch-22: make money or please users... Reddit looks overvalued, and I think investors should pass on this IPO.
Buyers and analysts told the site Marketing Brew "that they see the platform as nice-to-have, but that it is not an essential part of their media plans, like Meta or Google are." "They've always been solidly in the second or third tier of social networks," alongside Snap, Pinterest, and X, Brian Wieser, a former GroupM exec who's now author of the industry newsletter Madison and Wall, told Marketing Brew.
Yet Trainer notes that "98% of Reddit's revenue in 2023 came from third-party advertising on the site and 28% of all revenue came from ten customers," and "Reddit's cost of revenue, sales & marketing, general & administrative, and research & development costs were 117% of revenue in 2023."

Trainer concludes "Reddit is nowhere near breakeven. Reddit is an unprofitable social media company fighting for users."

Bloomberg adds that the subreddit r/WallStreetBets "has threatened to bet against the stock, with many people noting that the company still loses money two decades into its existence. (Reddit lost $90.8 million last year, down from $158.6 million the year before.)" Some have complained that the invitation to invest fails to make up for the unpaid labor they've invested making the site work... In 2021 the platform's WallStreetBets forum ignited a meme-stock frenzy, propelling skyward the stocks of nostalgic but struggling companies like GameStop Corp. and AMC Entertainment Holdings Inc. and sending shockwaves through the financial industry... When it goes public, the platform that invented meme stocks runs the risk of becoming one itself.

Reddit noted the possibility as a risk in its IPO filing. "Given the broad awareness and brand recognition of Reddit, including as a result of the popularity of r/wallstreetbets among retail investors," the company warned that its stock could "experience extreme volatility ... which could cause you to lose all or part of your investment if you are unable to sell your shares at or above the initial offering price."

Users on WallStreetBets got a kick out of the fact that the company listed the forum as a risk factor, posting about it with a sly smiling emoji...

Meanwhile, reports that marketers are infiltrating subreddits have been confirmed. Over 200 businesses have "integrated Reddit Pro into their digital strategies," reports Search Engine Land, including well-known names such as Taco Bell, the NFL, and The Wall Street Journal...

"During the initial alpha testing phase with approximately 20 businesses, Reddit reported its Pro partners, on average, generated 11 additional posts and comments per month."
The Military

Palantir Wins US Army Contract For Battlefield AI 32

Lindsay Clark reports via The Register: Palantir has won a US Army contract worth $178.4 million to house a battlefield intelligence system inside a big truck. In what purports to be the Army's first AI-defined vehicle, Palantir will provide systems for the TITAN "ground station," which is designed to access space, high altitude, aerial, and terrestrial sensors to "provide actionable targeting information for enhanced mission command and long range precision fires", according to a Palantir statement.

TITAN stands for Tactical Intelligence Targeting Access Node, which might sound harmless enough. Who was ever killed by a node? The TITAN solution is built to "maximize usability for soldiers, incorporating tangible feedback and insights from soldier touchpoints at every step of the development and configuration process," the statement said. The aim of the TITAN project is to bring together military software and hardware providers in a new way. These include "traditional and non-traditional partners" of the US armed forces, such as Northrop Grumman, Anduril Industries, L3Harris Technologies, Pacific Defense, SNC, Strategic Technology Consulting, and World Wide Technology, as well as Palantir.

Speaking to Bloomberg, Alex Karp, Palantir's motor-mouth CEO, said TITAN was the logical extension of Maven, a controversial project for using machine learning and engineering to tell people and objects apart in drone footage in which Palantir is a partner and from which Google famously pulled out after employees protested. Karp said TITAN was a partnership between "people who've built software products that have been used on the battlefield and used commercially." "That simple insight which you see in the battlefield in Ukraine, which you see in Israel is something that is hard for institutions to internalize. [For] the Pentagon this step is one of the most historic steps ever because what it basically says is, 'We're going to fight for real, we're going to put the best on the battlefield and the best is not just one company.' It's a team of people led by the most prominent software provider in defense in the world: Palantir," he said.
On Thursday, Palantir was one of the companies included in a new U.S. consortium assembled to support the safe development and deployment of generative AI.
Wireless Networking

Google's Newest Office Has AI Designers Toiling In a Wi-Fi Desert (reuters.com) 85

Google's swanky new office building at Alphabet's Mountain View, California headquarters has been "plagued for months by inoperable, or, at best, spotty Wi-Fi," reports Reuters, citing six people familiar with the matter. "Its recliner-laden collaborative workspaces do not work well for teams carting around laptops, since workers must plug into ethernet cables at their desks to get consistent internet service. Some make do by using their phones as hotspots." From the report: The company promoted the new building and surrounding campus in a 229-page glossy book highlighting its cutting-edge features, such as "Googley interiors" and "an environment where everyone has the tools they need to be successful."

But, a Google spokeswoman acknowledged, "we've had Wi-Fi connectivity issues in Bay View." She said Google "made several improvements to address the issue," and the company hoped to have a fix in the coming weeks. According to one AI engineer assigned to the building, which also houses members of the advertising team, the wonky Wi-Fi has been no help as Google pushes a three-day-per-week return-to-office mandate. "You'd think the world's leading internet company would have worked this out," he said.

Managers have encouraged workers to stroll outside or sit at the adjoining cafe where the Wi-Fi signal is stronger. Some were issued new laptops recently with more powerful Wi-Fi chips. Google has not publicly disclosed the reasons for the Wi-Fi problems, but workers say the 600,000-square-foot building's swooping, wave-like rooftop swallows broadband like the Bermuda Triangle.
