United States

Is Nuclear Power in America Reviving - or Flailing? (msn.com) 209

Last week America's energy secretary cheered the startup of a fourth nuclear reactor at a Georgia power plant, calling it "the largest producer of clean energy, and the largest producer of electricity in the United States" after a third reactor was started up there in December.

From the U.S. Energy Department's transcript of the speech: Each year, Units 3 and 4 are going to produce enough clean power to power 1 million homes and businesses, enough energy to power roughly 1 in 4 homes in Georgia. Preventing 10 million metric tons of carbon dioxide pollution annually. That, by the way, is like planting more than 165 million trees every year!

And that's not to mention the historic investments that [electric utility] Southern has made on the safety front, to ensure this facility meets — and exceeds — the highest operating standards in the world....

To reach our goal of net zero by 2050, we have to at least triple our current nuclear capacity in this country. That means we've got to add 200 more gigawatts by 2050. Okay, two down, 198 to go! In building [Unit] 4, we've solved our greatest design challenges. We've stood up entire supply chains.... And so it's time to cash in on our investments by building more. More of these facilities. The Department of Energy's Loan Programs Office stands ready to help, with hundreds of billions of dollars in what we call Title 17 loans... Since the President signed the Inflation Reduction Act and the Bipartisan Infrastructure Law, companies across the nation have announced 29 new or expanded nuclear facilities — across 16 states — representing about 1,600 potential new jobs. And the majority of those projects will expand domestic uranium production and fuel fabrication, strengthening these critical supply chains...

Bottom line is, in short, we are determined to build a world-class nuclear industry in the United States, and we're putting our money where our mouth is.

America's Energy Secretary told the Washington Post that "Whether it happens through small modular reactors, or AP1000s, or maybe another design out there worthy of consideration, we want to see nuclear built." The Post notes the Energy Department gave a $1.5 billion loan to restart a Michigan power plant that was decommissioned in 2022. "It would mark the first time a shuttered U.S. nuclear plant has been reactivated."

"But in this country with 54 nuclear plants across 28 states, restarting existing reactors and delaying their closure is a lot less complicated than building new ones." When the final [Georgia] reactor went online at the end of April, the expansion was seven years behind schedule and nearly $20 billion over budget. It ultimately cost more than twice as much as promised, with ratepayers footing much of the bill through surcharges and rate hikes...

Administration officials say the country has no choice but to make nuclear power a workable option again. The country is fast running short on electricity, demand for power is surging amid a boom in construction of data centers and manufacturing plants, and a neglected power grid is struggling to accommodate enough new wind and solar power to meet the nation's needs...

As the administration frames the narrative of the plant as one of perseverance and innovation that clears a path for restoring U.S. nuclear energy dominance, even some longtime boosters of the industry question whether this country will ever again have a vibrant nuclear energy sector. "It is hard for me to envision state energy regulators signing off on another one of these, given how badly the last ones went," said Matt Bowen, a nuclear scholar at the Center on Global Energy Policy at Columbia University, who was an adviser on nuclear energy issues in the Obama administration.

The article notes there are 19 AP1000 reactors (the design used at the Georgia plant) in development around the world. "None of them are being built in the United States."
Programming

Rust Growing Fastest, But JavaScript Reigns Supreme (thenewstack.io) 55

"Rust is the fastest-growing programming language, with its developer community doubling in size over the past two years," writes The New Stack, "yet JavaScript remains the most popular language with 25.2 million active developers, according to the results of a recent survey." The 26th edition of SlashData's Developer Nation survey showed that the Rust community doubled its number of users over the past two years — from two million in the first quarter of 2022 to four million in the first quarter of 2024 — and by 33% in the last 12 months alone. The SlashData report covers the first quarter of 2024. "Rust has developed a passionate community that advocates for it as a memory-safe language which can provide great performance, but cybersecurity concerns may lead to an even greater increase," the report said. "The USA and its international partners have made the case in the last six months for adopting memory-safe languages...."

"JavaScript's dominant position is unlikely to change anytime soon, with its developer population increasing by 4M developers over the last 12 months, with a growth rate in line with the global developer population growth," the report said. The strength of the JavaScript community is fueled by the widespread use of the language across all types of development projects, with at least 25% of developers in every project type using it, the report said. "Even in development areas not commonly associated with the language, such as on-device coding for IoT projects, JavaScript still sees considerable adoption," SlashData said.

Meanwhile, Python has overtaken Java as the second most popular language, driven by interest in machine learning and AI. The battle between Python and Java shows Python with 18.2 million developers in Q1 2024 compared to Java's 17.7 million. This comes after Python added more than 2.1 million net new developers to its community over the last 12 months, compared to Java, which only increased by 1.2 million developers... Following behind Java there is a six-million-developer gap to the next largest community, which is C++ with 11.4 million developers, closely trailed by C# with 10.2 million and PHP with 9.8 million. Languages with the smallest communities include Objective-C with 2.7 million developers, Ruby with 2.5 million, and Lua with 1.8 million. Meanwhile, the Go language saw its developer population grow by 10% over the last year. It had previously outpaced the global developer population growth, growing by roughly 57% over the past two years, from three million in Q1 2022 to 4.7 million in Q1 2024.
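Taking the survey's figures at face value, the growth rates quoted above are easy to check. A quick sketch (the figures come from the report; the function name is just illustrative):

```python
def growth_pct(start_millions, end_millions):
    """Percentage growth between two developer-population snapshots."""
    return (end_millions - start_millions) / start_millions * 100

# Rust: 2M (Q1 2022) -> 4M (Q1 2024): doubled, i.e. 100% growth
rust_growth = growth_pct(2.0, 4.0)

# Go: 3M (Q1 2022) -> 4.7M (Q1 2024): roughly 57% growth
go_growth = growth_pct(3.0, 4.7)

print(f"Rust: {rust_growth:.0f}%, Go: {go_growth:.0f}%")
```

Both figures comfortably exceed the 39% growth SlashData reports for the overall developer population over the same two years.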

"TNS analyst Lawrence Hecht has a few different takeaways. He notes that with the exceptions of Rust, Go and JavaScript, the other major programming languages all grew slower than the total developer population, which SlashData says increased 39% over the last two years alone."
Graphics

Nvidia Takes 88% of the GPU Market Share (xda-developers.com) 83

As reported by Jon Peddie Research, Nvidia now holds 88% of the GPU market after its market share jumped 8 points in its most recent quarter. "This jump shaves 7 points off of AMD's share, putting it down to 12% total," reports XDA Developers. "And if you're wondering where that extra 1% went, it came from all of Intel's market share, squashing it down to 0%." From the report: Dr. Jon Peddie, president of Jon Peddie Research, mentions how the GPU market hasn't really looked "normal" since the 2007 recession. Ever since then, everything from the crypto boom to COVID has messed with the usual patterns. Usually, the first quarter of a year shows a bit of a dip in GPU sales, but because of AI's influence, that previous norm may be gone for good: "Therefore, one would expect Q2'24, a traditional quarter, to also be down. But, all the vendors are predicting a growth quarter, mostly driven by AI training systems in hyperscalers. Whereas AI trainers use a GPU, the demand for them can steal parts from the gaming segment. So, for Q2, we expect to see a flat to low gaming AIB result and another increase in AI trainer GPU shipments. The new normality is no normality."
Businesses

Samsung Electronics Workers Strike For the First Time Ever (theverge.com) 3

Victoria Song reports via The Verge: Samsung Electronics workers went on strike on Friday for the very first time in the company's history. The move comes at a time when the Korean corporation faces increased competition from other chipmakers, particularly as demand for AI chips grows. The National Samsung Electronics Union (NSEU), the largest of the company's several unions, called for the one-day strike at Samsung's Seoul office building as negotiations over pay bonuses and time off hit a standstill. The New York Times reports that the majority of striking workers come from Samsung's chip division. (Samsung Electronics is technically only a subsidiary comprising its consumer tech, appliances, and semiconductor divisions; Samsung itself is a conglomerate that controls real estate, retail, insurance, food production, hotels, and a whole lot more.) It's unclear how many of the NSEU's roughly 28,400 members participated in the walkout. Even so, multiple outlets are reporting that the walkout is unlikely to affect chip production or trigger shortages. Union leaders told Bloomberg that further actions are planned if management refuses to engage.

That said, the fact that it's happening at all is awkward timing for Samsung, particularly due to tensions with the chipmaking portion of its business. Last year, the division reported a 15 trillion won ($11 billion) loss, leading to a 15-year low in operating profits. The current AI boom played a big role in the massive loss. Samsung has historically been the world leader in making high-bandwidth memory chips -- the kind that are in demand right now to power next-gen generative AI features. However, last year's decline was partly because Samsung wasn't prepared for increased demand, allowing local rival SK Hynix to take the top spot.

AI

Ashton Kutcher: Entire Movies Can Be Made on OpenAI's Sora Someday (businessinsider.com) 47

Hollywood actor and venture capitalist Ashton Kutcher believes that one day, entire movies will be made on AI tools like OpenAI's Sora. From a report: The actor was speaking at an event last week organized by the Los Angeles-based think tank Berggruen Institute, where he revealed that he'd been playing around with the ChatGPT maker's new video generation tool. "I have a beta version of it and it's pretty amazing," said Kutcher, whose VC firm Sound Venture's portfolio includes an investment in OpenAI. "You can generate any footage that you want. You can create good 10, 15-second videos that look very real."

"It still makes mistakes. It still doesn't quite understand physics. But if you look at the generation of this that existed one year ago, as compared to Sora, it's leaps and bounds. In fact, there's footage in it that I would say you could easily use in a major motion picture or a television show," he continued. Kutcher said this would help lower the costs of making a film or television show. "Why would you go out and shoot an establishing shot of a house in a television show when you could just create the establishing shot for $100?" Kutcher said. "To go out and shoot it would cost you thousands of dollars,"

Kutcher was so bullish about AI advancements that he said he believed people would eventually make entire movies using tools like Sora. "You'll be able to render a whole movie. You'll just come up with an idea for a movie, then it will write the script, then you'll input the script into the video generator, and it will generate the movie," Kutcher said. Kutcher, of course, is no stranger to AI.

Microsoft

Windows Won't Take Screenshots of Everything You Do After All (theverge.com) 81

Microsoft says it's making its new Recall feature in Windows 11 that screenshots everything you do on your PC an opt-in feature and addressing various security concerns. From a report: The software giant first unveiled the Recall feature as part of its upcoming Copilot Plus PCs last month, but since then, privacy advocates and security experts have been warning that Recall could be a "disaster" for cybersecurity without changes. Thankfully, Microsoft has listened to the complaints and is making a number of changes before Copilot Plus PCs launch on June 18th. Microsoft had originally planned to turn Recall on by default, but the company now says it will offer the ability to disable the controversial AI-powered feature during the setup process of new Copilot Plus PCs. "If you don't proactively choose to turn it on, it will be off by default," says Windows chief Pavan Davuluri.
AI

It's Not AI, It's 'Apple Intelligence' (gizmodo.com) 28

An anonymous reader shares a report: Apple is expected to announce major artificial intelligence updates to the iPhone, iPad, and Mac next week during its Worldwide Developers Conference. Except Apple won't call its system artificial intelligence, like everyone else, according to Bloomberg's Mark Gurman on Friday. The system will reportedly be called "Apple Intelligence" and will be made available in new versions of the iPhone, iPad, and Mac operating systems. Apple Intelligence, which is shortened to just AI, is reportedly separate from the ChatGPT-like chatbot Apple is expected to release in partnership with OpenAI. Apple's in-house AI tools are reported to include assistance in message writing, photo editing, and summarizing texts. Bloomberg reports that some of these AI features will run on the device while others will be processed through cloud-based computing, depending on the complexity of the task. The name feels a little too obvious. While this is the first we're hearing of an actual name for Apple's AI, it's entirely unsurprising that Apple is choosing a unique brand to call its artificial intelligence systems.
AI

California AI Bill Sparks Backlash from Silicon Valley Giants (arstechnica.com) 59

California's proposed legislation to regulate AI has sparked a backlash from Silicon Valley heavyweights, who claim the bill will stifle innovation and force AI start-ups to leave the state. The Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act, passed by the state Senate last month, requires AI developers to adhere to strict safety frameworks, including creating a "kill switch" for their models. Critics argue that the bill places a costly compliance burden on smaller AI companies and focuses on hypothetical risks. Amendments are being considered to clarify the bill's scope and address concerns about its impact on open-source AI models.
AI

Artists Are Deleting Instagram For New App Cara In Protest of Meta AI Scraping (fastcompany.com) 21

Some artists are jumping ship for the anti-AI portfolio app Cara after Meta began using Instagram content to train its AI models. Fast Company explains: The portfolio app bills itself as a platform that protects artists' images from being used to train AI, and only allowing AI content to be posted if it's clearly labeled. Based on the number of new users the Cara app has garnered over the past few days, there seems to be a need. Between May 31 and June 2, Cara's user base tripled from less than 100,000 to more than 300,000 profiles, skyrocketing to the top of the app store. [...] Cara is a social networking app for creatives, in which users can post images of their artwork, memes, or just their own text-based musings. It shares similarities with major social platforms like X (formerly Twitter) and Instagram on a few fronts. Users can access Cara through a mobile app or on a browser. Both options are free to use. The UI itself is like an arts-centric combination of X and Instagram. In fact, some UI elements seem like they were pulled directly from other social media sites. (It's not the most innovative approach, but it is strategic: as a new app, any barriers to potential adoption need to be low).

Cara doesn't train any AI models on its content, nor does it allow third parties to do so. According to Cara's FAQ page, the app aims to protect its users from AI scraping by automatically implementing "NoAI" tags on all of its posts. The website says these tags "are intended to tell AI scrapers not to scrape from Cara." Ultimately, they appear to be HTML metadata tags that politely ask bad actors not to get up to any funny business, and it's pretty unlikely that they hold any actual legal weight. Cara admits as much, too, warning its users that the tags aren't a "fully comprehensive solution and won't completely prevent dedicated scrapers." With that in mind, Cara assesses the "NoAI" tagging system as "a necessary first step in building a space that is actually welcoming to artists -- one that respects them as creators and doesn't opt their work into unethical AI scraping without their consent."
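Cara hasn't published its exact markup, but "NoAI" signals of this kind are typically expressed as robots-style meta tags that a well-behaved crawler is expected to honor. A minimal, hypothetical checker for such tags using only the Python standard library (the tag and directive names here are common conventions, not Cara's confirmed implementation):

```python
from html.parser import HTMLParser

class NoAITagFinder(HTMLParser):
    """Collects robots-style meta directives such as 'noai' from a page."""
    def __init__(self):
        super().__init__()
        self.directives = set()

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        if attrs.get("name", "").lower() in ("robots", "noai"):
            for token in attrs.get("content", "").lower().split(","):
                self.directives.add(token.strip())

page = '<html><head><meta name="robots" content="noai, noimageai"></head></html>'
finder = NoAITagFinder()
finder.feed(page)
print("noai" in finder.directives)
```

As the article notes, nothing forces a scraper to run a check like this, which is why the tags carry no real enforcement power on their own.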

In December, Cara launched another tool called Cara Glaze to defend its artists' work against scrapers. (Users can only use it a select number of times.) Glaze, developed by the SAND Lab at University of Chicago, makes it much more difficult for AI models to accurately understand and mimic an artist's personal style. The tool works by learning how AI bots perceive artwork, and then making a set of minimal changes that are invisible to the human eye but confusing to the AI model. The AI bot then has trouble "translating" the art style and generates warped recreations. In the future, Cara also plans to implement Nightshade, another University of Chicago software that helps protect artwork against AI scrapers. Nightshade "poisons" AI training data by adding invisible pixels to artwork that can cause AI software to completely misunderstand the image. Beyond establishing shields against data mining, Cara also uses a third-party service to detect and moderate any AI artwork that's posted to the site. Non-human artwork is forbidden, unless it's been properly labeled by the poster.
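Glaze itself is a research system that computes targeted, style-specific perturbations; as a toy illustration of just the constraint it works under -- changes bounded tightly enough to be invisible to a human viewer -- here is a sketch that nudges raw pixel values by at most a tiny epsilon (this does not reproduce Glaze's actual method):

```python
import random

def perturb(pixels, epsilon=2, seed=0):
    """Nudge 0-255 pixel values by at most +/- epsilon.

    The change is visually negligible, but it alters the raw data an
    AI model ingests; Glaze's real perturbations are far more targeted.
    """
    rng = random.Random(seed)
    return [min(255, max(0, p + rng.randint(-epsilon, epsilon)))
            for p in pixels]

original = [12, 130, 200, 255, 0, 88]
cloaked = perturb(original)
# Every cloaked pixel stays within epsilon of the original and within 0-255.
```

Random noise like this would not actually confuse a model; Glaze's contribution is choosing *which* minimal changes disrupt style mimicry.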

AI

Adobe Responds To Vocal Uproar Over New Terms of Service Language (venturebeat.com) 34

Adobe is facing backlash over new Terms of Service language amid its embrace of generative AI in products like Photoshop and customer experience software. The ToS, sent to Creative Cloud Suite users, doesn't mention AI explicitly but includes a reference to machine learning and a clause prohibiting AI model training on Adobe software. From a report: In particular, users have objected to Adobe's claims that it "may access, view, or listen to your Content through both automated and manual methods -- using techniques such as machine learning in order to improve our Services and Software and the user experience," which many took to be a tacit admission both of surveilling them and of training AI on their content, even confidential content for clients protected under non-disclosure agreements or confidentiality clauses/contracts between said Adobe users and clients.

A spokesperson for Adobe provided the following statement in response to VentureBeat's questions about the new ToS and vocal backlash: "This policy has been in place for many years. As part of our commitment to being transparent with our customers, we added clarifying examples earlier this year to our Terms of Use regarding when Adobe may access user content. Adobe accesses user content for a number of reasons, including the ability to deliver some of our most innovative cloud-based features, such as Photoshop Neural Filters and Remove Background in Adobe Express, as well as to take action against prohibited content. Adobe does not access, view or listen to content that is stored locally on any user's device."

Chrome

Google Is Working On a Recall-Like Feature For Chromebooks, Too (pcworld.com) 47

In an interview with PCWorld's Mark Hachman, Google's ChromeOS chief said the company is cautiously exploring a Recall-like feature for Chromebooks, dubbed "memory." Microsoft's AI-powered Recall feature for Windows 11 was unveiled at the company's Build 2024 conference last month. The feature aims to improve local searches by making them as efficient as web searches, allowing users to quickly retrieve anything they've seen on their PC. Using voice commands and contextual clues, Recall can find specific emails, documents, chat threads, and even PowerPoint slides. Given the obvious privacy and security concerns, many users have denounced the feature, describing it as "literal spyware or malware." PCWorld reports: I sat down with John Solomon, the vice president at Google responsible for ChromeOS, for a lengthy interview about what it means for Google's low-cost platform as the PC industry moves to AI PCs. Microsoft, of course, is launching Copilot+ PCs alongside Qualcomm's Snapdragon X Elite -- an Arm chip. And Chromebooks, of course, have a long history with Arm. But it's Recall that we eventually landed upon -- or, more precisely, how Google sidles into the same space. (Recall is great in theory, but in practice may be more problematic.) Recall the Project Astra demo that Google showed off at its Google I/O conference. One of the key though understated aspects of it was how Astra "remembered" where the user's glasses were.

Astra didn't appear to be an experience that could be replicated on the Chromebook. Most users aren't going to carry a Chromebook around (a device which typically lacks a rear camera) visually identifying things. Solomon respectfully disagreed. "I think there's a piece of it which is very relevant, which is this notion of having some kind of context and memory of what's been happening on the device," Solomon said. "So think of something that's like, maybe viewing your screen and then you walk away, you get distracted, you chat to someone at the watercooler and you come back. You could have some kind of rewind function, you could have some kind of recorder function that would kind of bring you back to that. So I think that there is a crossover there.

"We're actually talking to that team about where the use case could be," Solomon added of the "memory" concept. "But I think there's something there in terms of screen capture in a way that obviously doesn't feel creepy and feels like the user's in control." That sounds a lot like Recall! But Solomon was quick to point out that one of the things that has turned off users to Recall was the lack of user control: deciding when, where, and if to turn it on. "I'm not going to talk about Recall, but I think the reason that some people feel it's creepy is when it doesn't feel useful, and it doesn't feel like something they initiated or that they get a clear benefit from it," Solomon said. "If the user says like -- let's say we're having a meeting, and discussing complex topics. There's a benefit of running a recorded function if at the end of it it can be useful for creating notes and the action items. But you as a user need to put that on and decide where you want to have that."

AI

DuckDuckGo Offers 'Anonymous' Access To AI Chatbots Through New Service 7

An anonymous reader quotes a report from Ars Technica: On Thursday, DuckDuckGo unveiled a new "AI Chat" service that allows users to converse with four mid-range large language models (LLMs) from OpenAI, Anthropic, Meta, and Mistral in an interface similar to ChatGPT while attempting to preserve privacy and anonymity. While the AI models involved can output inaccurate information readily, the site allows users to test different mid-range LLMs without having to install anything or sign up for an account. DuckDuckGo's AI Chat currently features access to OpenAI's GPT-3.5 Turbo, Anthropic's Claude 3 Haiku, and two open source models, Meta's Llama 3 and Mistral's Mixtral 8x7B. The service is currently free to use within daily limits. Users can access AI Chat through the DuckDuckGo search engine, direct links to the site, or by using "!ai" or "!chat" shortcuts in the search field. AI Chat can also be disabled in the site's settings for users with accounts.

According to DuckDuckGo, chats on the service are anonymized, with metadata and IP address removed to prevent tracing back to individuals. The company states that chats are not used for AI model training, citing its privacy policy and terms of use. "We have agreements in place with all model providers to ensure that any saved chats are completely deleted by the providers within 30 days," says DuckDuckGo, "and that none of the chats made on our platform can be used to train or improve the models." However, the privacy experience is not bulletproof because, in the case of GPT-3.5 and Claude Haiku, DuckDuckGo is required to send a user's inputs to remote servers for processing over the Internet. Given certain inputs (i.e., "Hey, GPT, my name is Bob, and I live on Main Street, and I just murdered Bill"), a user could still potentially be identified if such an extreme need arose.
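DuckDuckGo hasn't published its proxy code, but the general pattern it describes is a pass-through layer that strips identifying metadata before forwarding a request to the model provider. A toy sketch of that pattern (all field names are illustrative, not DuckDuckGo's actual implementation), which also shows why the prompt text itself remains a privacy gap:

```python
# Fields that could tie a request back to a user; illustrative list only.
IDENTIFYING_FIELDS = {"ip_address", "user_agent", "cookies", "account_id"}

def anonymize_request(request: dict) -> dict:
    """Return a copy of the request with identifying metadata removed.

    The prompt text passes through untouched -- which is why personal
    details a user types INTO the prompt are still sent to the provider.
    """
    return {k: v for k, v in request.items() if k not in IDENTIFYING_FIELDS}

incoming = {
    "ip_address": "203.0.113.7",
    "user_agent": "Mozilla/5.0",
    "model": "claude-3-haiku",
    "prompt": "Explain photosynthesis briefly.",
}
print(anonymize_request(incoming))  # only 'model' and 'prompt' survive
```

This is exactly the limitation the article's "Hey, GPT, my name is Bob..." example illustrates: metadata stripping cannot anonymize what the user volunteers in the prompt itself.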
In regard to hallucination concerns, DuckDuckGo states in its privacy policy: "By its very nature, AI Chat generates text with limited information. As such, Outputs that appear complete or accurate because of their detail or specificity may not be. For example, AI Chat cannot dynamically retrieve information and so Outputs may be outdated. You should not rely on any Output without verifying its contents using other sources, especially for professional advice (like medical, financial, or legal advice)."
Businesses

Humane Said To Be Seeking a $1 Billion Buyout After Only 10,000 Orders of Its AI Pin (engadget.com) 40

An anonymous reader writes: It emerged recently that Humane was trying to sell itself for as much as $1 billion after its confuddling, expensive and ultimately pretty useless AI Pin flopped. A New York Times report that dropped on Thursday shed a little more light on the company's sales figures and, like the wearable AI assistant itself, the details are not good.

By early April, around the time that many devastating reviews of the AI Pin were published, Humane is said to have received around 10,000 orders for the device. That's a far cry from the 100,000 it was hoping to ship this year, and about 9,000 more than I thought it might get. It's hard to think it picked up many more orders beyond those initial 10,000 after critics slaughtered the AI Pin.
One of the companies that Humane has engaged with for the sale is HP, the Times reported.
Microsoft

'Microsoft Has Lost Trust With Its Users and Windows Recall is the Straw That Broke the Camel's Back' (windowscentral.com) 170

In a column at Windows Central, a blog that focuses on Microsoft news, senior editor Zac Bowden discusses the backlash against Windows Recall, a new AI feature in Microsoft's Copilot+ PCs. While the feature is impressive, allowing users to search their entire Windows history, many are concerned about privacy and security. Bowden argues that Microsoft's history of questionable practices, such as ads and bloatware, has eroded user trust, making people skeptical of Recall's intentions. Additionally, the reported lack of encryption for Recall's data raises concerns about third-party access. Bowden argues that Microsoft could have averted the situation by testing the feature openly to address these issues early on and build trust with users. He adds: Users are describing the feature as literal spyware or malware, and droves of people are proclaiming they will proudly switch to Linux or Mac in the wake of it. Microsoft simply doesn't enjoy the same benefit of the doubt that other tech giants like Apple may have.

Had Apple announced a feature like Recall, there would have been much less backlash, as Apple has done a great job building loyalty and trust with its users, prioritizing polished software experiences, and positioning privacy as a high-level concern for the company.

United States

US Regulators To Open Antitrust Inquiries of Microsoft, OpenAI and Nvidia (reuters.com) 39

The U.S. Justice Department and the Federal Trade Commission have reached a deal that allows them to proceed with antitrust investigations into the dominant roles that Microsoft, OpenAI and Nvidia play in the artificial intelligence industry, Reuters reported Thursday, citing a source familiar with the matter. From the report: Under the deal, the U.S. Department of Justice will take the lead in investigating whether Nvidia violated antitrust laws, while the FTC will examine the conduct of OpenAI and Microsoft. While OpenAI's parent is a nonprofit, Microsoft has invested $13 billion in a for-profit subsidiary, for what would be a 49% stake. The Microsoft-OpenAI partnership is also under informal scrutiny in other regions.

The regulators struck the deal over the past week and it is expected to be completed in the coming days, the person said. The FTC is also looking into Microsoft's $650 million deal with AI startup Inflection AI, a person familiar with the matter said.

AI

NewsBreak, Most Downloaded US News App, Caught Sharing 'Entirely False' AI-Generated Stories 98

An anonymous reader quotes a report from Reuters: Last Christmas Eve, NewsBreak, a free app with roots in China that is the most downloaded news app in the United States, published an alarming piece about a small town shooting. It was headlined "Christmas Day Tragedy Strikes Bridgeton, New Jersey Amid Rising Gun Violence in Small Towns." The problem was, no such shooting took place. The Bridgeton, New Jersey police department posted a statement on Facebook on December 27 dismissing the article -- produced using AI technology -- as "entirely false." "Nothing even similar to this story occurred on or around Christmas, or even in recent memory for the area they described," the post said. "It seems this 'news' outlet's AI writes fiction they have no problem publishing to readers." NewsBreak, which is headquartered in Mountain View, California and has offices in Beijing and Shanghai, told Reuters it removed the article on December 28, four days after publication.

The company said "the inaccurate information originated from the content source," and provided a link to the website, adding: "When NewsBreak identifies any inaccurate content or any violation of our community standards, we take prompt action to remove that content." As local news outlets across America have shuttered in recent years, NewsBreak has filled the void. Billing itself as "the go-to source for all things local," Newsbreak says it has over 50 million monthly users. It publishes licensed content from major media outlets, including Reuters, Fox, AP and CNN as well as some information obtained by scraping the internet for local news or press releases which it rewrites with the help of AI. It is only available in the U.S. But in at least 40 instances since 2021, the app's use of AI tools affected the communities it strives to serve, with Newsbreak publishing erroneous stories; creating 10 stories from local news sites under fictitious bylines; and lifting content from its competitors, according to a Reuters review of previously unreported court documents related to copyright infringement, cease-and-desist emails and a 2022 company memo registering concerns about "AI-generated stories."
Five of the seven former NewsBreak employees Reuters spoke to said most of the engineering work behind the app's algorithm is carried out in its China-based offices. "The company launched in the U.S. in 2015 as a subsidiary of Yidian, a Chinese news aggregation app," notes Reuters. "Both companies were founded by Jeff Zheng, the CEO of Newsbreak, and the companies share a U.S. patent registered in 2015 for an 'Interest Engine' algorithm, which recommends news content based on a user's interests and location."

"NewsBreak is a privately held start-up, whose primary backers are private equity firms San Francisco-based Francisco Partners, and Beijing-based IDG Capital."
AI

Humane Warns AI Pin Owners To 'Immediately' Stop Using Its Charging Case (theverge.com) 15

Humane is telling AI Pin owners today that they should "immediately" stop using the charging case that came with its AI gadget. From a report: There are issues with a third-party battery cell that "may pose a fire safety risk," the company wrote in an email to customers. Humane says it has "disqualified" that vendor and is moving to find another supplier. It also specified that the AI Pin itself, the magnetic Battery Booster, and its charging pad are "not affected." As recompense, the company is offering two free months of its subscription service, which is required for most of its functionality. The development follows Humane's AI Pin receiving not-so-great reviews after much hype and the startup, which has raised hundreds of millions of dollars, exploring a sale.
Businesses

Nvidia Hits $3 Trillion Market Cap On Back of AI Boom (cnbc.com) 45

Nvidia has reached a market cap of $3 trillion, surpassing Apple to become the second-largest public company behind Microsoft. CNBC reports: Nvidia's milestone is the latest stunning mark in a run that has seen the stock soar more than 3,224% over the past five years. The company will split its stock 10-for-1 later this month. Apple was the first U.S. company to reach a $3 trillion market cap during intraday trading in January 2022. Microsoft hit $3 trillion in market value in January 2024. Nvidia, which was founded in 1993, passed the $2 trillion valuation in February, and it only took roughly three months from there for it to pass $3 trillion.

Nvidia's surge in recent years has been powered by the tech industry's need for its chips, which are used to develop and deploy big AI models such as the one at the heart of OpenAI's ChatGPT. Companies such as Google, Microsoft, Meta, Amazon and OpenAI are buying billions of dollars worth of Nvidia's GPUs.

Technology

Oral-B Bricking Alexa Toothbrush Is a Cautionary Tale Against Buzzy Tech (arstechnica.com) 61

An anonymous reader quotes a report from Ars Technica: As we're currently seeing with AI, when a new technology becomes buzzy, companies will do almost anything to cram that tech into their products. Trends fade, however, and corporate priorities shift -- resulting in bricked gadgets and buyer's remorse. That's what's happening to some who bought into Oral-B toothbrushes with Amazon Alexa built in. Oral-B released the Guide for $230 in August 2020 but bricked the ability to set up or reconfigure Alexa on the product this February. As of this writing, the Guide is still available through a third-party Amazon seller.

The Guide toothbrush's charging base was able to connect to the Internet and work like an Alexa speaker that you could speak to and from which Alexa could respond. Owners could "ask to play music, hear the news, check weather, control smart home devices, and even order more brush heads by saying, 'Alexa, order Oral-B brush head replacements,'" per Procter & Gamble's 2020 announcement. Oral-B also bragged at the time that, in partnering with Alexa, the Guide ushered in "the truly connected bathroom."

On February 15, Oral-B bricked the Guide's ability to set up Alexa by discontinuing the Oral-B Connect app required to complete the process. Guide owners can still use the Oral-B App for other features; however, the ability to use the charging base like an Alexa smart speaker -- a big draw in the product's announcement and advertising -- is seriously limited. The device should still work with Alexa if users set it up before Oral-B shuttered Connect, but setting up a new Wi-Fi connection or reestablishing a lost one doesn't work without Connect.
Oral-B's owner, Procter & Gamble, said in a statement: "The Oral-B Connect app was originally developed to support Oral-B Guide and Oral-B Sense electric toothbrushes, which were discontinued ... While some features are no longer supported on these brushes, the Oral-B app does remain compatible with both devices. Consumers are invited to contact Oral-B customer service where they can get additional support for these brushes."

Meanwhile, an Amazon spokesperson told Ars: "The Oral-B Guide still has Alexa built-in and customers can keep using the Alexa experience on devices that were set up through the Oral-B Connect app. The Oral-B Guide is currently sold by an independent seller on Amazon.com. Please contact Oral-B for any further questions about their app."
AI

Yellen To Warn of 'Significant Risks' From Use of AI in Finance (reuters.com) 16

U.S. Treasury Secretary Janet Yellen will warn that the use of AI in finance could lower transaction costs, but carries "significant risks," according to excerpts from a speech to be delivered on Thursday. From a report: In the remarks to a Financial Stability Oversight Council and Brookings Institution AI conference, Yellen says AI-related risks have moved towards the top of the regulatory council's agenda.

"Specific vulnerabilities may arise from the complexity and opacity of AI models, inadequate risk management frameworks to account for AI risks and interconnections that emerge as many market participants rely on the same data and models," Yellen says in the excerpts. She also notes that concentration among the vendors that develop AI models and that provide data and cloud services may also introduce risks that could amplify existing third-party service provider risks. "And insufficient or faulty data could also perpetuate or introduce new biases in financial decision-making," according to Yellen.