Businesses

Nvidia Hits $2 Trillion Valuation (reuters.com) 65

Nvidia hit $2 trillion in market value on Friday, riding insatiable demand for its chips that made the Silicon Valley firm the pioneer of the generative AI boom. From a report: The milestone followed another bumper revenue forecast from the chip designer that drove up its market value by $277 billion on Thursday - Wall Street's largest one-day gain on record. Its rapid ascent in the past year has led analysts to draw parallels to the picks-and-shovels providers of the 1800s gold rush, as Nvidia's chips are used by almost all generative AI players, from ChatGPT-maker OpenAI to Google.
AI

Stable Diffusion 3.0 Debuts New Architecture To Reinvent Text-To-Image Gen AI 15

An anonymous reader quotes a report from VentureBeat: Stability AI is out today with an early preview of its Stable Diffusion 3.0 next-generation flagship text-to-image generative AI model. The new Stable Diffusion 3.0 model aims to provide improved image quality and better performance in generating images from multi-subject prompts. It will also provide significantly better typography than prior Stable Diffusion models, enabling more accurate and consistent spelling inside generated images. Typography has been an area of weakness for Stable Diffusion in the past, and one that rivals including DALL-E 3, Ideogram and Midjourney have also been working on in recent releases. Stability AI is building out Stable Diffusion 3.0 in multiple model sizes ranging from 800M to 8B parameters.

Stable Diffusion 3.0 isn't just a new version of a model that Stability AI has already released; it's actually based on a new architecture. "Stable Diffusion 3 is a diffusion transformer, a new type of architecture similar to the one used in the recent OpenAI Sora model," Emad Mostaque, CEO of Stability AI, told VentureBeat. "It is the real successor to the original Stable Diffusion." [...] Stable Diffusion 3.0 is taking a different approach by using diffusion transformers. "Stable Diffusion did not have a transformer before," Mostaque said.

Transformers are at the foundation of much of the gen AI revolution and are widely used as the basis of text generation models. Image generation has largely been in the realm of diffusion models. The research paper that details Diffusion Transformers (DiTs) explains that DiT is a new architecture for diffusion models that replaces the commonly used U-Net backbone with a transformer operating on latent image patches. The DiT approach can use compute more efficiently and can outperform other forms of diffusion image generation. The other big innovation that Stable Diffusion benefits from is flow matching. The research paper on flow matching explains that it is a new method for training Continuous Normalizing Flows (CNFs) to model complex data distributions. According to the researchers, using Conditional Flow Matching (CFM) with optimal transport paths leads to faster training, more efficient sampling, and better performance compared to diffusion paths.
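The "transformer operating on latent image patches" idea can be illustrated with a toy sketch. This is a simplification for illustration only (real DiT implementations work on VAE latents and add positional and timestep conditioning; the shapes here are made up):

```python
import numpy as np

def patchify(latent, patch_size):
    """Split a (C, H, W) latent into a sequence of flattened patches --
    the token sequence a diffusion transformer attends over."""
    c, h, w = latent.shape
    assert h % patch_size == 0 and w % patch_size == 0
    tokens = []
    for i in range(0, h, patch_size):
        for j in range(0, w, patch_size):
            # Each spatial patch (all channels) becomes one token vector.
            tokens.append(latent[:, i:i + patch_size, j:j + patch_size].ravel())
    return np.stack(tokens)  # shape: (num_patches, C * patch_size**2)

latent = np.random.randn(4, 32, 32)        # e.g. a hypothetical 4-channel VAE latent
tokens = patchify(latent, patch_size=2)
print(tokens.shape)                        # (256, 16): a 16x16 grid of 16-dim tokens
```

Once the latent is a token sequence like this, standard transformer blocks can replace the U-Net's convolutional backbone.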
AI

Facial-Recognition System Passes Test On Michelangelo's David (arstechnica.com) 21

An anonymous reader quotes a report from Ars Technica: Facial recognition is a common feature for unlocking smartphones and gaming systems, among other uses. But the technology currently relies upon bulky projectors and lenses, hindering its broader application. Scientists have now developed a new facial recognition system that employs flatter, simpler optics that also require less energy, according to a recent paper published in the journal Nano Letters. The team tested their prototype system with a 3D replica of Michelangelo's famous David sculpture and found it recognized the face as well as existing smartphone facial recognition can. [...]

Wen-Chen Hsu, of National Yang Ming Chiao Tung University and the Hon Hai Research Institute in Taiwan, and colleagues turned to ultrathin optical components known as metasurfaces for a potential solution. These metasurfaces can replace bulkier components for modulating light and have proven popular for depth sensors, endoscopes, tomography, and augmented reality systems, among other emerging applications. Hsu et al. built their own depth-sensing facial recognition system incorporating a metasurface hologram in place of the diffractive optical element. They replaced the standard vertical-cavity surface-emitting laser (VCSEL) with a photonic crystal surface-emitting laser (PCSEL). (The structure of photonic crystals is the mechanism behind the bright iridescent colors in butterfly wings or beetle shells.) The PCSEL can generate its own highly collimated light beam, so there was no need for the bulky light guide or collimation lenses used in VCSEL-based dot projector systems.

The team tested their new system on a replica bust of David, and it worked as well as existing smartphone facial recognition, based on comparing the infrared dot patterns to online photos of the statue. They found that their system generated nearly one and a half times more infrared dots (some 45,700) than the standard commercial technology from a device that is 233 times smaller in terms of surface area than the standard dot projector. "It is a compact and cost-effective system, that can be integrated into a single chip using the flip-chip process of PCSEL," the authors wrote. Additionally, "The metasurface enables the generation of customizable and versatile light patterns, expanding the system's applicability." It's more energy-efficient to boot.

Businesses

Reddit Files To Go Public (cnbc.com) 98

Reddit filed for its initial public offering (IPO) with the SEC on Thursday. "The company plans to trade on the New York Stock Exchange under the ticker symbol 'RDDT,'" reports CNBC. From the report: Its market debut, expected in March, will be the first major tech initial public offering of the year. It's the first social media IPO since Pinterest went public in 2019. Reddit said it had $804 million in annual sales for 2023, up 20% from the $666.7 million it brought in the previous year, according to the filing. The social networking company's core business is reliant on online advertising sales stemming from its website and mobile app.

The company, founded in 2005 by technology entrepreneurs Alexis Ohanian and Steve Huffman, said it has incurred net losses since its inception. It reported a net loss of $90.8 million for the year ended Dec. 31, 2023, compared with a net loss of $158.6 million the year prior. [...] Reddit said it plans to use artificial intelligence to improve its ad business and that it expects to open new revenue channels by offering tools and incentives to "drive continued creation, improvements, and commerce." It's also in the early stages of developing and monetizing a data-licensing business in which third parties would be allowed to access and search data on its platform.

For example, Google on Thursday announced an expanded partnership with Reddit that will give the search giant access to the company's data to, among other uses, train its AI models. "In January 2024, we entered into certain data licensing arrangements with an aggregate contract value of $203.0 million and terms ranging from two to three years," Reddit said, regarding its data-licensing business. "We expect a minimum of $66.4 million of revenue to be recognized during the year ending December 31, 2024 and the remaining thereafter."
On Wednesday, Reddit said it plans to sell a chunk of its IPO shares to 75,000 of its most loyal users.
AI

The Justice Department Gets a Chief AI Officer 12

Princeton professor and technology law researcher Jonathan Mayer has been appointed as the Justice Department's first chief AI officer. The Verge reports: Attorney General Merrick Garland said in a statement that appointing an AI officer was important for the department to "keep pace with rapidly evolving scientific and technological developments." One of Mayer's responsibilities will be to build a team of technical and policy experts around cybersecurity and AI. Mayer will also serve as the department's chief science and technology advisor and help recruit tech talent.

Mayer held technology roles in government before his new Justice Department gig, according to his bio in Princeton's Center for Information Technology Policy. He served as an adviser on technology law and policy to Vice President Kamala Harris when she was still in the Senate. Mayer was also the chief technologist in the enforcement office of the Federal Communications Commission.
AI

Reddit in AI Content Licensing Deal With Google (reuters.com) 25

Social media platform Reddit has struck a deal with Google to make its content available for training the search engine giant's AI models. Reuters: The contract with Alphabet-owned Google is worth about $60 million per year, according to one of the sources. The deal underscores how Reddit, which is preparing for a high-profile stock market launch, is seeking to generate new revenue amid fierce competition for advertising dollars from the likes of TikTok and Meta Platforms' Facebook.
AI

Instacart's AI Recipes Look Literally Impossible (404media.co) 36

An anonymous reader shares a report: I hate cookbooks without pictures. We eat with our eyes first, as chefs love to say, but what's more important to me is that if I'm making a dish for the first time, I want to see what the final product should look like to know I did it right. It's not so much about presentation as it is about knowing that I browned the chicken skin enough. An image of a recipe will not be this useful, I think, if it was AI-generated, and especially so if the fact that the image was AI-generated wasn't disclosed by the recipe. That, to my surprise, is exactly the case with thousands of recipes the grocery delivery service Instacart is suggesting to its users. Some of the recipes include unheard-of measurements and ingredients that don't appear to exist.

[...] As I was browsing, I noticed that Instacart was offering me recipes that appeared to complement the ingredients I was looking at. The concept doesn't make a ton of sense to me -- I'm going to Instacart for the ingredients I know I need for the food I know I'm going to make, not for food inspo -- but I had to click on a recipe for "Watermelon Popsicle with Chocolate Chips" because it looked weird in the thumbnail. Since I have eyeballs with optical nerves that are connected to a semi-functioning brain, I can tell that the image was generated by AI. To be more specific, I can see that the top corner of the plate doesn't match its square shape, that the table-ish looking thing it's resting on is made up of jumbled slats (AI is particularly bad at making these series of long, straight lines), and then there are the titular watermelon popsicles, which defy physical reality. They clip into each other like bad 3D models in a video game, one of them to the left appears hollow, and for some reason they are skewered by what appears to be asparagus spears on the bottom end and capped by impossibly small watermelon rinds at the top.

AI

Google Pauses AI Image-generation of People After Diversity Backlash (ft.com) 198

Google has temporarily stopped its latest AI model, Gemini, from generating images of people (non-paywalled link), as a backlash erupted over the model's depiction of people from diverse backgrounds. From a report: Gemini creates realistic images based on users' descriptions in a similar manner to OpenAI's ChatGPT. Like other models, it is trained not to respond to dangerous or hateful prompts, and to introduce diversity into its outputs. However, some users have complained that it has overcorrected towards generating images of women and people of colour, such that they are featured in historically inaccurate contexts, for instance in depictions of Viking kings.

Google said in a statement: "We're working to improve these kinds of depictions immediately. Gemini's image generation does generate a wide range of people. And that's generally a good thing because people around the world use it. But it's missing the mark here." It added that it would "pause the image-generation of people and will re-release an improved version soon."

Power

Engineers Use AI To Wrangle Fusion Power For the Grid (princeton.edu) 69

An anonymous reader quotes a report from Princeton Engineering: In the blink of an eye, the unruly, superheated plasma that drives a fusion reaction can lose its stability and escape the strong magnetic fields confining it within the donut-shaped fusion reactor. These getaways frequently spell the end of the reaction, posing a core challenge to developing fusion as a non-polluting, virtually limitless energy source. But a Princeton-led team composed of engineers, physicists, and data scientists from the University and the Princeton Plasma Physics Laboratory (PPPL) has harnessed the power of artificial intelligence to predict -- and then avoid -- the formation of a specific plasma problem in real time.

In experiments at the DIII-D National Fusion Facility in San Diego, the researchers demonstrated their model, trained only on past experimental data, could forecast potential plasma instabilities known as tearing mode instabilities up to 300 milliseconds in advance. While that is barely enough time for a slow human blink, it was plenty of time for the AI controller to change certain operating parameters to avoid what would have developed into a tear within the plasma's magnetic field lines, upsetting its equilibrium and opening the door for a reaction-ending escape.

"By learning from past experiments, rather than incorporating information from physics-based models, the AI could develop a final control policy that supported a stable, high-powered plasma regime in real time, at a real reactor," said research leader Egemen Kolemen, associate professor of mechanical and aerospace engineering and the Andlinger Center for Energy and the Environment, as well as staff research physicist at PPPL. The research opens the door for more dynamic control of a fusion reaction than current approaches, and it provides a foundation for using artificial intelligence to solve a broad range of plasma instabilities, which have long been obstacles to achieving a sustained fusion reaction.
The team published their findings in the journal Nature.
AI

Google Admits Gemini Is 'Missing the Mark' With Image Generation of Historical People 67

Google's Gemini AI chatbot is under fire for generating historically inaccurate images, particularly when depicting people from different eras and nationalities. Google acknowledges the issue and is actively working to refine Gemini's accuracy, emphasizing that while diversity in image generation is valued, adjustments are necessary to meet historical accuracy standards. 9to5Google reports: The Twitter/X post in particular that brought this issue to light showed prompts to Gemini asking for the AI to generate images of Australian, American, British, and German women. All four prompts resulted in images of women with darker skin tones, which, as Google's Jack Krawczyk pointed out, is not incorrect, but may not be what is expected.

But a bigger issue that was noticed in the wake of that post was that Gemini also struggles to accurately depict human beings in a historical context, with those being depicted often having darker skin tones or being of particular nationalities that are not historically accurate. Google, in a statement posted to Twitter/X, admits that Gemini AI image generation is "missing the mark" on historical depictions and that the company is working to improve it. Google also does say that the diversity represented in images generated by Gemini is "generally a good thing," but it's clear some fine-tuning needs to happen.
Further reading: Why Google's new AI Gemini accused of refusing to acknowledge the existence of white people (The Daily Dot)
Businesses

Nvidia Posts Record Revenue Up 265% On Booming AI Business (cnbc.com) 27

In its fourth quarter earnings report today, Nvidia beat Wall Street's forecast for earnings and sales, causing shares to rise about 10% in extended trading. CNBC reports: Here's what the company reported compared with what Wall Street was expecting for the quarter ending in January, based on a survey of analysts by LSEG, formerly known as Refinitiv:

Earnings per share: $5.16 adjusted vs. $4.64 expected
Revenue: $22.10 billion vs. $20.62 billion expected

Nvidia said it expected $24.0 billion in sales in the current quarter. Analysts polled by LSEG were looking for $5.00 per share on $22.17 billion in sales. Nvidia CEO Jensen Huang addressed investor fears that the company may not be able to keep up this growth or level of sales for the whole year on a call with analysts. "Fundamentally, the conditions are excellent for continued growth" in 2025 and beyond, Huang told analysts. He said demand for the company's GPUs will remain high due to generative AI and an industry-wide shift away from central processors to the accelerators that Nvidia makes.

Nvidia reported $12.29 billion in net income during the quarter, or $4.93 per share, up 769% versus last year's $1.41 billion or 57 cents per share. Nvidia's total revenue rose 265% from a year ago, based on strong sales for AI chips for servers, particularly the company's "Hopper" chips such as the H100, it said. "Strong demand was driven by enterprise software and consumer internet applications, and multiple industry verticals including automotive, financial services and health care," the company said in commentary provided to investors. Those sales are reported in the company's Data Center business, which now comprises the majority of Nvidia's revenue. Data center sales were up 409% to $18.40 billion. Over half the company's data center sales went to large cloud providers. [...]

The company's gaming business, which includes graphics cards for laptops and PCs, was merely up 56% year over year to $2.87 billion. Graphics cards for gaming used to be Nvidia's primary business before its AI chips started taking off, and some of Nvidia's graphics cards can be used for AI. Nvidia's smaller businesses did not show the same meteoric growth. Its automotive business declined 4% to $281 million in sales, and its OEM and other business, which includes crypto chips, rose 7% to $90 million. Nvidia's business making graphics hardware for professional applications rose 105% to $463 million.

AI

ChatGPT Goes Temporarily 'Insane' With Unexpected Outputs, Spooking Users (arstechnica.com) 100

An anonymous reader quotes a report from Ars Technica: On Tuesday, ChatGPT users began reporting unexpected outputs from OpenAI's AI assistant, flooding the r/ChatGPT Reddit sub with reports of the AI assistant "having a stroke," "going insane," "rambling," and "losing it." OpenAI has acknowledged the problem and is working on a fix, but the experience serves as a high-profile example of how some people perceive malfunctioning large language models, which are designed to mimic humanlike output. ChatGPT is not alive and does not have a mind to lose, but tugging on human metaphors (called "anthropomorphization") seems to be the easiest way for most people to describe the unexpected outputs they have been seeing from the AI model. They're forced to use those terms because OpenAI doesn't share exactly how ChatGPT works under the hood; the underlying large language models function like a black box.

"It gave me the exact same feeling -- like watching someone slowly lose their mind either from psychosis or dementia," wrote a Reddit user named z3ldafitzgerald in response to a post about ChatGPT bugging out. "It's the first time anything AI related sincerely gave me the creeps." Some users even began questioning their own sanity. "What happened here? I asked if I could give my dog cheerios and then it started speaking complete nonsense and continued to do so. Is this normal? Also wtf is 'deeper talk' at the end?" Read through this series of screenshots below, and you'll see ChatGPT's outputs degrade in unexpected ways. [...]

So far, we've seen experts speculate that the problem could stem from ChatGPT having its temperature set too high (temperature is a property in AI that determines how wildly the LLM deviates from the most probable output), from the model suddenly losing past context (the history of the conversation), or from OpenAI testing a new version of GPT-4 Turbo (the AI model that powers the subscription version of ChatGPT) that includes unexpected bugs. It could also be a bug in a side feature, such as the recently introduced "memory" function.
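The temperature effect the experts describe is easy to see in a minimal sketch of temperature-scaled softmax sampling. This is illustrative only; OpenAI does not disclose ChatGPT's actual decoding settings, and the logits below are made up:

```python
import numpy as np

def sample_probs(logits, temperature):
    """Temperature-scaled softmax: low temperature sharpens the
    distribution toward the most probable token, high temperature
    flattens it, making unlikely tokens more common."""
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()          # subtract max for numerical stability
    p = np.exp(scaled)
    return p / p.sum()

logits = [4.0, 2.0, 1.0]            # hypothetical next-token scores
low = sample_probs(logits, temperature=0.5)
high = sample_probs(logits, temperature=2.0)
print(low[0] > high[0])             # True: low temperature concentrates mass on the top token
```

With the temperature cranked too high, sampling drifts away from the most probable continuations, which is consistent with the rambling outputs users reported.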

China

China's Rush To Dominate AI Comes With a Twist: It Depends on US Technology (nytimes.com) 32

China's tech firms were caught off guard by breakthroughs in generative artificial intelligence. Beijing's regulations and a sagging economy aren't helping. From a report: In November, a year after ChatGPT's release, a relatively unknown Chinese start-up leaped to the top of a leaderboard that judged the abilities of open-source artificial intelligence systems. The Chinese firm, 01.AI, was only eight months old but had deep-pocketed backers and a $1 billion valuation and was founded by a well-known investor and technologist, Kai-Fu Lee. In interviews, Mr. Lee presented his A.I. system as an alternative to options like Meta's generative A.I. model, called LLaMA. There was just one twist: Some of the technology in 01.AI's system came from LLaMA. Mr. Lee's start-up then built on Meta's technology, training its system with new data to make it more powerful.

The situation is emblematic of a reality that many in China openly admit. Even as the country races to build generative A.I., Chinese companies are relying almost entirely on underlying systems from the United States. China now lags the United States in generative A.I. by at least a year and may be falling further behind, according to more than a dozen tech industry insiders and leading engineers, setting the stage for a new phase in the cutthroat technological competition between the two nations that some have likened to a cold war. "Chinese companies are under tremendous pressure to keep abreast of U.S. innovations," said Chris Nicholson, an investor with the venture capital firm Page One Ventures who focuses on A.I. technologies. The release of ChatGPT was "yet another Sputnik moment that China felt it had to respond to."

Jenny Xiao, a partner at Leonis Capital, an investment firm that focuses on A.I.-powered companies, said the A.I. models that Chinese companies build from scratch "aren't very good," leading to many Chinese firms often using "fine-tuned versions of Western models." She estimated China was two to three years behind the United States in generative A.I. developments. The jockeying for A.I. primacy has huge implications. Breakthroughs in generative A.I. could tip the global technological balance of power, increasing people's productivity, aiding industries and leading to future innovations, even as nations struggle with the technology's risks. As Chinese firms aim to catch up by turning to open-source A.I. models from the United States, Washington is in a difficult spot. Even as the United States has tried to slow China's advancements by limiting the sale of microchips and curbing investments, it has not held back the practice of openly releasing software to encourage its adoption. For China, the newfound reliance on A.I. systems from the United States -- primarily Meta's LLaMA -- has fueled deeper questions about the country's innovation model, which in recent decades surprised many by turning out world-beating firms like Alibaba and ByteDance despite Beijing's authoritarian controls.

Intel

Microsoft Will Use Intel To Manufacture Home-Grown Processor (yahoo.com) 30

Intel has landed Microsoft as a customer for its made-to-order chip business, marking a key win for an ambitious turnaround effort under Chief Executive Officer Pat Gelsinger. From a report: Microsoft plans to use Intel's 18A manufacturing technology to make a forthcoming chip that the software maker designed in-house, the two companies said at an event Wednesday. They didn't identify the product, but Microsoft recently announced plans for two homegrown chips: a computer processor and an artificial intelligence accelerator.

Intel has been seeking to prove it can compete in the foundry market, where companies produce custom chips for clients. It's a major shift for the semiconductor pioneer, which once had the world's most advanced chipmaking facilities and kept them to itself. These days, Intel is racing to catch up with companies like Taiwan Semiconductor Manufacturing Co., which leads the foundry industry. Microsoft, meanwhile, is looking to secure a steady supply of semiconductors to power its data-center operations -- especially as demand for AI grows. Designing its own chips also lets Microsoft fine-tune the products to its specific needs. "We need a reliable supply of the most advanced, high-performance and high-quality semiconductors," Microsoft CEO Satya Nadella said in a statement. "That's why we are so excited to work with Intel."

AI

Google Launches Two New Open LLMs (techcrunch.com) 15

Barely a week after launching the latest iteration of its Gemini models, Google today announced the launch of Gemma, a new family of lightweight open-weight models. From a report: Starting with Gemma 2B and Gemma 7B, these new models were "inspired by Gemini" and are available for commercial and research usage. Google did not provide us with a detailed paper on how these models perform against similar models from Meta and Mistral, for example, and only noted that they are "state-of-the-art."

The company did note, however, that these are dense decoder-only models, the same architecture it used for its Gemini models (and its earlier PaLM models), and that benchmarks will appear later today on Hugging Face's leaderboard. To get started with Gemma, developers can get access to ready-to-use Colab and Kaggle notebooks, as well as integrations with Hugging Face, MaxText and Nvidia's NeMo. Once pre-trained and tuned, these models can then run everywhere. While Google highlights that these are open models, it's worth noting that they are not open-source. Indeed, in a press briefing ahead of today's announcement, Google's Janine Banks stressed the company's commitment to open source but also noted that Google is very intentional about how it refers to the Gemma models.
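"Decoder-only" here means each token can attend only to earlier tokens, enforced by a causal attention mask. A minimal sketch of that mask (a generic transformer detail, not anything Gemma-specific Google has published):

```python
import numpy as np

def causal_mask(seq_len):
    """Boolean attention mask for a decoder-only transformer:
    position i may attend only to positions j <= i (lower triangle)."""
    return np.tril(np.ones((seq_len, seq_len), dtype=bool))

mask = causal_mask(4)
print(mask.astype(int))
# Row i has ones through column i: token 0 sees only itself,
# the last token sees the whole prefix.
```

In a full model this mask zeroes out (or sets to -inf before softmax) the attention scores for future positions, which is what lets the same network be used for left-to-right text generation.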

AI

Google DeepMind Alumni Unveil Bioptimus: Aiming To Build First Universal Biology AI Model (venturebeat.com) 5

An anonymous reader quotes a report from VentureBeat: As the French startup ecosystem continues to boom -- think Mistral, Poolside, and Adaptive -- today the Paris-based Bioptimus, with a mission to build the first universal AI foundation model for biology, emerged from stealth following a seed funding round of $35 million. The new open science model will connect the different scales of biology with generative AI -- from molecules to cells, tissues and whole organisms. Bioptimus unites a team of Google DeepMind alumni and Owkin scientists (AI biotech startup Owkin is itself a French unicorn) who will take advantage of AWS compute and Owkin's data generation capabilities and access to multimodal patient data sourced from leading academic hospitals worldwide. According to a press release, "this all gives the power to create computational representations that establish a strong differentiation against models trained solely on public datasets and a single data modality that are not able to capture the full diversity of biology."

In an interview with VentureBeat, Jean-Philippe Vert, co-founder and CEO of Bioptimus, chief R&D Officer of Owkin and former research lead at Google Brain, said as a smaller, independent company, Bioptimus can move faster than Google DeepMind to gain direct access to the data needed to train biology models. "We have the advantage of being able to more easily and securely collaborate with partners, and have established a level of trust in our work by sharing our AI expertise and making models available to them for research," he said. "This can be hard for big tech to do. Bioptimus will also leverage some of the strongest sovereignty controls in the market today."

Rodolphe Jenatton, a former research scientist at Google DeepMind, has also joined the Bioptimus team, telling VentureBeat the Bioptimus work will be released as open source/open science, at a similar level to Mistral's model releases. "Transparency and sharing and community will be key elements for us," he said. Currently, AI models are limited to specific aspects of biology, Vert explained. "For example, several companies are starting to build language models for protein sequences," he said, adding that there are also initiatives to build a foundation model for images of cells.

However, there is no holistic view of the totality of biology: "The good news is that the AI technology is converging very quickly, with some architectures that allow to have all the data contribute together to a unified model," he explained. "So this is what we want to do. As far as I know that it does not exist yet. But I'm certain that if we didn't do it, someone else would do it in the near future." The biggest bottleneck, he said, is access to data. "It's very different from training an LLM on text on the web," he said. And that access, he pointed out, is what Bioptimus has in spades, through its Owkin partnership.

Microsoft

Microsoft Develops AI Server Gear To Lessen Reliance on Nvidia (reuters.com) 3

Microsoft is developing a new network card that could improve the performance of its Maia AI server chip and potentially reduce the company's reliance on chip designer Nvidia, The Information reported on Tuesday. Reuters: Microsoft CEO Satya Nadella has tapped Pradeep Sindhu, who co-founded networking gear developer Juniper Networks, to spearhead the network card effort, the report said, citing a person with knowledge of the matter. Microsoft acquired Sindhu's server chip startup, Fungible, last year. The new network card is similar to Nvidia's ConnectX-7 card, which the chip developer sells alongside its graphics processing units (GPUs), the report added. The equipment could take more than a year to develop and, if successful, could lessen the time it takes for OpenAI to train its models on Microsoft servers as well as make the process less expensive, according to the report.
Google

This Tiny Website Is Google's First Line of Defense in the Patent Wars (wired.com) 45

A trio of Google engineers recently came up with a futuristic way to help anyone who stumbles through presentations on video calls. They propose that when algorithms detect a speaker's pulse racing or "umms" lengthening, a generative AI bot that mimics their voice could simply take over. That cutting-edge idea wasn't revealed at a big company event or in an academic journal. Instead, it appeared in a 1,500-word post on a little-known, free website called TDCommons.org that Google has quietly owned and funded for nine years. WIRED: Until WIRED received a link to an idea on TDCommons last year and got curious, Google had never spoken with the media about its website. Scrolling through TDCommons, you can read Google's latest ideas for coordinating smart home gadgets for better sleep, preserving privacy in mobile search results, and using AI to summarize a person's activities from their photo archives. And the submissions aren't exclusive to Google; about 150 organizations, including HP, Cisco, and Visa, also have posted inventions to the website.

The website is a home for ideas that seem potentially valuable but not worth spending tens of thousands of dollars seeking a patent for. By publishing the technical details and establishing "prior art," Google and other companies can head off future disputes by blocking others from filing patents for similar concepts. Google gives employees a $1,000 bonus for each invention they post to TDCommons -- a tenth of what it awards its patent seekers -- but they also get an immediately shareable link to gloat about otherwise secretive work.

IT

Adobe Acrobat Adds Generative AI To 'Easily Chat With Documents' (theverge.com) 31

Adobe is adding a new generative AI experience to its Acrobat PDF management software, which aims to "completely transform the digital document experience" by making information in long documents easier to find and understand. From a report: Announced in Adobe's press release as "AI Assistant in Acrobat," the new tool is described as a "conversational engine" that can summarize files, answer questions, and recommend more based on the content, allowing users to "easily chat with documents" to get the information they need. It's available in beta starting today for paying Acrobat users.

The idea is that the chatbot will reduce the time-consuming tasks related to working with massive text documents -- such as helping students quickly find information for research projects or summarizing large reports into snappy highlights for emails, meetings, and presentations. AI Assistant in Acrobat can be used with all document formats supported by the app, including Word and PowerPoint. The chatbot abides by Adobe's data security protocols, so it won't store data from customer documents or use it to train AI Assistant.
The new AI Assistant experience is available for Acrobat customers on Standard ($12.99 per month) and Pro ($19.99 per month) plans.
AI

AI Not Hyped Enough, Says Microsoft Exec (indiatimes.com) 133

Puneet Chandok, Microsoft India and South Asia head, at an event this week: "People say AI is overhyped, but I think it's not hyped enough. The next generation who will use this in the next few years will have a much higher bar on what technology can do for them. So how you build it for that generation, how you build it for that future will be really interesting to see. AI is truly a general purpose technology, which can change everything that we do," he added.
