Businesses

Cisco Completes $28 Billion Acquisition of Splunk (securityweek.com) 20

Cisco on Monday completed its $28 billion acquisition of Splunk, a powerhouse in data analysis, security and observability tools. The deal was first announced in September 2023. SecurityWeek reports: Cisco plans to leverage Splunk's AI, security and observability capabilities to complement Cisco's solution portfolio. Cisco says the transaction is expected to be cash flow positive and non-GAAP gross margin accretive in Cisco's fiscal year 2025, and non-GAAP EPS accretive in fiscal year 2026. "We are thrilled to officially welcome Splunk to Cisco," Chuck Robbins, Chair and CEO of Cisco, said in a statement. "As one of the world's largest software companies, we will revolutionize the way our customers leverage data to connect and protect every aspect of their organization as we help power and protect the AI revolution."
AI

Chinese and Western Scientists Identify 'Red Lines' on AI Risks (ft.com) 28

Leading western and Chinese AI scientists have issued a stark warning that tackling risks around the powerful technology requires global co-operation similar to the cold war effort to avoid nuclear conflict. From a report: A group of renowned international experts met in Beijing last week, where they identified "red lines" on the development of AI, including around the making of bioweapons and launching cyber attacks. In a statement seen by the Financial Times, issued in the days after the meeting, the academics warned that a joint approach to AI safety was needed to stop "catastrophic or even existential risks to humanity within our lifetimes."

"In the depths of the cold war, international scientific and governmental co-ordination helped avert thermonuclear catastrophe. Humanity again needs to co-ordinate to avert a catastrophe that could arise from unprecedented technology," the statement said. Signatories include Geoffrey Hinton and Yoshua Bengio, who won a Turing Award for their work on neural networks and are often described as "godfathers" of AI; Stuart Russell, a professor of computer science at the University of California, Berkeley; and Andrew Yao, one of China's most prominent computer scientists. The statement followed the International Dialogue on AI Safety in Beijing last week, a meeting that included officials from the Chinese government in a signal of tacit official endorsement for the forum and its outcomes.

AI

Investment Advisors Pay the Price For Selling What Looked a Lot Like AI Fairy Tales (theregister.com) 15

Two investment advisors have reached settlements with the US Securities and Exchange Commission for allegedly exaggerating their use of AI, which in both cases were purported to be cornerstones of their offerings. From a report: Canada-based Delphia and San Francisco-headquartered Global Predictions will cough up $225,000 and $175,000 respectively for telling clients that their products used AI to improve forecasts. The financial watchdog said both were engaging in "AI washing," a term used to describe the embellishment of machine-learning capabilities.

"We've seen time and again that when new technologies come along, they can create buzz from investors as well as false claims by those purporting to use those new technologies," said SEC chairman Gary Gensler. "Delphia and Global Predictions marketed to their clients and prospective clients that they were using AI in certain ways when, in fact, they were not." Delphia claimed its system utilized AI and machine learning to incorporate client data, a statement the SEC said it found to be false.

"Delphia represented that it used artificial intelligence and machine learning to analyze its retail clients' spending and social media data to inform its investment advice when, in fact, no such data was being used in its investment process," the SEC said in a settlement order. Despite being warned about suspected misleading practices in 2021 and agreeing to amend them, Delphia only partially complied, according to the SEC. The company continued to market itself as using client data as AI inputs but never did anything of the sort, the regulator said.

AI

AI-Generated Science 32

Published scientific papers include language that appears to have been generated by AI tools like ChatGPT, showing how pervasive the technology has become, and highlighting longstanding issues with some peer-reviewed journals. From a report: Searching for the phrase "As of my last knowledge update" on Google Scholar, a free search tool that indexes articles published in academic journals, returns 115 results. The phrase is often used by OpenAI's ChatGPT to indicate the cutoff date of the data behind the answer it is giving users, and the specific months and years found in these academic papers correspond to previous ChatGPT "knowledge updates."

"As of my last knowledge update in September 2021, there is no widely accepted scientific correlation between quantum entanglement and longitudinal scalar waves," reads a paper titled "Quantum Entanglement: Examining its Nature and Implications" published in the "Journal of Material Sciences & Manfacturing [sic] Research," a publication that claims it's peer-reviewed. Over the weekend, a tweet showing the same AI-generated phrase appearing in several scientific papers went viral.

Most of the scientific papers I looked at that included this phrase are small, not well known, and appear to be "paper mills," journals with low editorial standards that will publish almost anything quickly. One publication where I found the AI-generated phrase, the Open Access Research Journal of Engineering and Technology, advertises "low publication charges," an "e-certificate" of publication, and is currently advertising a call for papers, promising acceptance within 48 hours and publication within four days.
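The detection described in this story amounts to a simple phrase search. A minimal sketch of the same check in Python (the phrase list here is illustrative, not an exhaustive or official one):

```python
# Illustrative list of boilerplate phrases LLM chatbots commonly emit;
# finding one verbatim in a "peer-reviewed" paper is a strong hint of
# unedited AI-generated text.
TELLTALE_PHRASES = [
    "as of my last knowledge update",
    "as an ai language model",
]

def find_ai_telltales(text: str) -> list[str]:
    """Return every telltale phrase present in `text` (case-insensitive)."""
    lowered = text.lower()
    return [phrase for phrase in TELLTALE_PHRASES if phrase in lowered]

# The abstract quoted in the story above:
abstract = ("As of my last knowledge update in September 2021, there is no "
            "widely accepted scientific correlation between quantum "
            "entanglement and longitudinal scalar waves.")
print(find_ai_telltales(abstract))  # → ['as of my last knowledge update']
```

Running the same kind of match over a journal's full-text archive would surface candidates for manual review, much as the Google Scholar search did.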
Google

Google Researchers Unveil 'VLOGGER', an AI That Can Bring Still Photos To Life (venturebeat.com) 18

Google researchers have developed a new AI system that can generate lifelike videos of people speaking, gesturing and moving -- from just a single still photo. From a report: The technology, called VLOGGER, relies on advanced machine learning models to synthesize startlingly realistic footage, opening up a range of potential applications while also raising concerns around deepfakes and misinformation. Described in a research paper titled "VLOGGER: Multimodal Diffusion for Embodied Avatar Synthesis," (PDF) the AI model can take a photo of a person and an audio clip as input, and then output a video that matches the audio, showing the person speaking the words and making corresponding facial expressions, head movements and hand gestures. The videos are not perfect, with some artifacts, but represent a significant leap in the ability to animate still images.

The researchers, led by Enric Corona at Google Research, leveraged a type of machine learning model called a diffusion model to achieve the novel result. Diffusion models have recently shown remarkable performance at generating highly realistic images from text descriptions. By extending them into the video domain and training on a vast new dataset, the team was able to create an AI system that can bring photos to life in a highly convincing way. "In contrast to previous work, our method does not require training for each person, does not rely on face detection and cropping, generates the complete image (not just the face or the lips), and considers a broad spectrum of scenarios (e.g. visible torso or diverse subject identities) that are critical to correctly synthesize humans who communicate," the authors wrote.

Open Source

Grok AI Goes Open Source (venturebeat.com) 38

xAI has open sourced its large language model Grok. From a report: The move, which Musk had previously proclaimed would happen this week, now enables any other entrepreneur, programmer, company, or individual to take Grok's weights -- the strength of connections between the model's artificial "neurons," or software modules that allow the model to make decisions and accept inputs and provide outputs in the form of text -- and other associated documentation and use a copy of the model for whatever they'd like, including for commercial applications.

"We are releasing the base model weights and network architecture of Grok-1, our large language model," the company announced in a blog post. "Grok-1 is a 314 billion parameter Mixture-of-Experts model trained from scratch by xAI." Those interested can download the code for Grok on its GitHub page or via a torrent link. "Parameters" refers to the weights and biases that govern the model -- the more parameters, generally the more advanced, complex and performant the model is. At 314 billion parameters, Grok is well ahead of open source competitors such as Meta's Llama 2 (70 billion parameters) and Mistral's Mixtral 8x7B (roughly 13 billion active parameters). Grok was open sourced under an Apache License 2.0, which enables commercial use, modifications, and distribution, though it cannot be trademarked, and users receive it without any warranty or liability protection. In addition, they must reproduce the original license and copyright notice, and state the changes they've made.

Google

Apple Is in Talks To Let Google's Gemini Power iPhone Generative AI Features (bloomberg.com) 52

Apple is in talks to build Google's Gemini AI engine into the iPhone, Bloomberg News reported Monday, citing people familiar with the situation, setting the stage for a blockbuster agreement that would shake up the AI industry. From the report: The two companies are in active negotiations to let Apple license Gemini, Google's set of generative AI models, to power some new features coming to the iPhone software this year, said the people, who asked not to be identified because the deliberations are private. Apple also recently held discussions with OpenAI and has considered using its model, according to the people.
Youtube

YouTube Now Requires Creators To Label AI-Generated Content (cnn.com) 29

Starting Monday, YouTube creators will be required to label when realistic-looking videos were made using artificial intelligence, part of a broader effort by the company to be transparent about content that could otherwise confuse or mislead users. From a report: When a user uploads a video to the site, they will see a checklist asking if their content makes a real person say or do something they didn't do, alters footage of a real place or event, or depicts a realistic-looking scene that didn't actually occur. The disclosure is meant to help prevent users from being confused by synthetic content amid a proliferation of new, consumer-facing generative AI tools that make it quick and easy to create compelling text, images, video and audio that can often be hard to distinguish from the real thing.

Online safety experts have raised alarms that the proliferation of AI-generated content could confuse and mislead users across the internet, especially ahead of elections in the United States and elsewhere in 2024. YouTube creators will be required to identify when their videos contain AI-generated or otherwise manipulated content that appears realistic -- so that YouTube can attach a label for viewers -- and could face consequences if they repeatedly fail to add the disclosure.

Businesses

32-Hour Workweek for America Proposed by Senator Bernie Sanders (theguardian.com) 390

The Guardian reports that this week "Bernie Sanders, the independent senator from Vermont who twice ran for the Democratic presidential nomination, introduced a bill to establish a four-day US working week." "Moving to a 32-hour workweek with no loss of pay is not a radical idea," Sanders said on Thursday. "Today, American workers are over 400% more productive than they were in the 1940s. And yet millions of Americans are working longer hours for lower wages than they were decades ago. "That has got to change. The financial gains from the major advancements in artificial intelligence, automation and new technology must benefit the working class, not just corporate chief executives and wealthy stockholders on Wall Street.

"It is time to reduce the stress level in our country and allow Americans to enjoy a better quality of life. It is time for a 32-hour workweek with no loss in pay."

The proposed bill "has received the endorsement of the American Federation of Labor and Congress of Industrial Organizations, United Auto Workers, the Service Employees International Union, the Association of Flight Attendants" — as well as several other labor unions, reports USA Today: More than half of adults employed full time reported working more than 40 hours per week, according to a 2019 Gallup poll... More than 70 British companies started to test a four-day workweek last year, and most respondents reported there has been no loss in productivity.
A statement from Senator Sanders: Bill Gates, the founder of Microsoft, and Jamie Dimon, the CEO of JP Morgan Chase, predicted last year that advancements in technology would lead to a three or three-and-a-half-day workweek in the coming years. Despite these predictions, Americans now work more hours than the people of most other wealthy nations, but are earning less per week than they did 50 years ago, after adjusting for inflation.
"Sanders also pointed to other countries that have reduced their workweeks, such as France, Norway and Denmark," adds NBC News.

USA Today notes that "While Sanders' role as chair of the Senate Health, Education, Labor, and Pensions Committee places a greater focus on shortening the workweek, it is unlikely the bill will garner enough support from Republicans to become federal law and pass in both chambers."

And political analysts who spoke to ABC News "cast doubt on the measure's chances of passage in a divided Congress where opposition from Republicans is all but certain," reports ABC News, "and even the extent of support among Democrats remains unclear."
Technology

Nvidia in Talks To Acquire AI Infrastructure Platform Run:ai (calcalistech.com) 6

Israeli outlet Calcalist: Nvidia is in advanced negotiations to acquire AI infrastructure orchestration and management platform Run:ai, Calcalist has learned. The value of the deal is estimated at many hundreds of millions of dollars and could even reach $1 billion. The companies did not respond to Calcalist's request for comment.

Run:ai raised $75 million in a Series C round in March 2022 led by Tiger Global Management and Insight Partners, who also led the previous Series B round. The round included the participation of additional existing investors, TLV Partners, and S Capital VC, bringing the total funding raised to date to $118 million.

AI

Why Are So Many AI Chatbots 'Dumb as Rocks'? (msn.com) 73

Amazon announced a new AI-powered chatbot last month — still under development — "to help you figure out what to buy," writes the Washington Post. Their conclusion? "[T]he chatbot wasn't a disaster. But I also found it mostly useless..."

"The experience encapsulated my exasperation with new types of AI sprouting in seemingly every technology you use. If these chatbots are supposed to be magical, why are so many of them dumb as rocks?" I thought the shopping bot was at best a slight upgrade on searching Amazon, Google or news articles for product recommendations... Amazon's chatbot doesn't deliver on the promise of finding the best product for your needs or getting you started on a new hobby.

In one of my tests, I asked what I needed to start composting at home. Depending on how I phrased the question, the Amazon bot several times offered basic suggestions that I could find in a how-to article and didn't recommend specific products... When I clicked the suggestions the bot offered for a kitchen compost bin, I was dumped into a zillion options for countertop compost products. Not helpful... Still, when the Amazon bot responded to my questions, I usually couldn't tell why the suggested products were considered the right ones for me. Or, I didn't feel I could trust the chatbot's recommendations.

I asked a few similar questions about the best cycling gloves to keep my hands warm in winter. In one search, a pair that the bot recommended were short-fingered cycling gloves intended for warm weather. In another search, the bot recommended a pair that the manufacturer indicated was for cool temperatures, not frigid winter, or to wear as a layer under warmer gloves... I did find the Amazon chatbot helpful for specific questions about a product, such as whether a particular watch was waterproof or the battery life of a wireless keyboard.

But there's a larger question about whether technology can truly handle this human-interfacing task. "I have also found that other AI chatbots, including those from ChatGPT, Microsoft and Google, are at best hit-or-miss with shopping-related questions..." These AI technologies have potentially profound applications and are rapidly improving. Some people are making productive use of AI chatbots today. (I mostly found helpful Amazon's relatively new AI-generated summaries of customer product reviews.)

But many of these chatbots require you to know exactly how to speak to them, are useless for factual information, constantly make up stuff and in many cases aren't much of an improvement on existing technologies like an app, news articles, Google or Wikipedia. How many times do you need to scream at a wrong math answer from a chatbot, botch your taxes with a TurboTax AI, feel disappointed at a ChatGPT answer or grow bored with a pointless Tom Brady chatbot before we say: What is all this AI junk for...?

"When so many AI chatbots overpromise and underdeliver, it's a tax on your time, your attention and potentially your money," the article concludes.

"I just can't with all these AI junk bots that demand a lot of us and give so little in return."
Transportation

US Investigates Fatal Crash of Ford EV With Partially Automated Driving System (apnews.com) 41

America's National Transportation Safety Board "is investigating a fatal crash in San Antonio, Texas, involving a Ford electric vehicle that may have been using a partially automated driving system," reports the Associated Press: The NTSB said that preliminary information shows a Ford Mustang Mach-E SUV equipped with the company's partially automated driving system collided with the rear of a Honda CR-V that was stopped in one of the highway lanes.

Television station KSAT reported that the Mach-E driver told police the Honda was stopped in the middle lane with no lights on before the crash around 9:50 p.m. The 56-year-old driver of the CR-V was killed. "NTSB is investigating this fatal crash due to its continued interest in advanced driver assistance systems and how vehicle operators interact with these technologies," the agency statement said.

Ford's Blue Cruise system allows drivers to take their hands off the steering wheel while it handles steering, braking and acceleration on highways. The company says the system isn't fully autonomous and it monitors drivers to make sure they pay attention to the road. It operates on 97% of controlled access highways in the U.S. and Canada, Ford says.

Social Networks

TikTok is Banned in China, Notes X User Community - Along With Most US Social Media (newsweek.com) 148

Newsweek points out that a Chinese government post arguing the bill is "on the wrong side of fair competition" was flagged by users on X. "TikTok is banned in the People's Republic of China," the X community note read. (The BBC reports that "Instead, Chinese users use a similar app, Douyin, which is only available in China and subject to monitoring and censorship by the government.")

Newsweek adds that China "has also blocked access to YouTube, Facebook, Instagram, and Google services. X itself is also banned — though Chinese diplomats use the microblogging app to deliver Beijing's messaging to the wider world."

From the Wall Street Journal: Among the top concerns for [U.S.] intelligence leaders is that they wouldn't even necessarily be able to detect a Chinese influence operation if one were taking place [on TikTok] due to the opacity of the platform and how its algorithm surfaces content to users. Such operations, FBI director Christopher Wray said this week in congressional testimony, "are extraordinarily difficult to detect, which is part of what makes the national-security concerns represented by TikTok so significant...."

Critics of the bill include libertarian-leaning lawmakers, such as Sen. Rand Paul (R., Ky.), who have decried it as a form of government censorship. "The Constitution says that you have a First Amendment right to express yourself," Paul told reporters Thursday. TikTok's users "express themselves through dancing or whatever else they do on TikTok. You can't just tell them they can't do that." In the House, a bloc of 50 Democrats voted against the bill, citing concerns about curtailing free speech and the impact on people who earn income on the app. Some Senate Democrats have raised similar worries, as well as an interest in looking at a range of social-media issues at rival companies such as Meta Platforms.

"The basic idea should be to put curbs on all social media, not just one," Sen. Elizabeth Warren (D., Mass.) said Thursday. "If there's a problem with privacy, with how our children are treated, then we need to curb that behavior wherever it occurs."

Some context from the Columbia Journalism Review: Roughly one-third of Americans aged 18-29 regularly get their news from TikTok, the Pew Research Center found in a late 2023 survey. Nearly half of all TikTok users say they regularly get news from the app, a higher percentage than for any other social media platform aside from Twitter.

Almost 40 percent of young adults were using TikTok and Instagram for their primary Web search instead of the traditional search engines, a Google senior vice president said in mid-2022 — a number that's almost certainly grown since then. Overall, TikTok claims 150 million American users, almost half the US population; two-thirds of Americans aged 18-29 use the app.

Some U.S. politicians believe TikTok "radicalized" some of their supporters "with disinformation or biased reporting," according to the article.

Meanwhile in the Guardian, a Duke University law professor argues "this saga demands a broader conversation about safeguarding democracy in the digital age." The European Union's newly enacted AI act provides a blueprint for a more holistic approach, using an evidence- and risk-based system that could be used to classify platforms like TikTok as high-risk AI systems subject to more stringent regulatory oversight, with measures that demand transparency, accountability and defensive measures against misuse.
Open source advocate Evan Prodromou argues that the TikTok controversy raises a larger issue: If algorithmic curation is so powerful, "who's making the decisions on how they're used?" And he also proposes a solution.

"If there is concern about algorithms being manipulated by foreign governments, using Fediverse-enabled domestic software prevents the problem."
Advertising

Microsoft Criticized For Chrome Popup Ads Resembling Malware That Urge Users to Switch to Bing (theregister.com) 32

"Multiple users around the world have started to notice new Microsoft Bing pop-up ads that look a lot like malware..." reports Lifehacker, describing the ads as "very low quality" and "extremely pixelated..."

"It's just Microsoft doing a bad job of trying to get you to switch to its products."

The Register explains: [W]hile using Google's desktop browser on Windows 10 or 11, a dialog box suddenly and irritatingly appears to the side of the screen urging folks to make Microsoft's Bing the default search engine in Chrome. Not only that, netizens are told they can use Chrome to interact with Bing's OpenAI GPT-4-powered chat bot, allowing them to ask questions and get answers using natural language. We can forgive those who thought this was malware at first glance. "Chat with GPT-4 for free on Chrome!" the pop-up advert, shown below, declares. "Get hundreds of daily chat turns with Bing AI."

It goes on: "Try Bing as default search," then alleges: "Easy to switch back. Install Bing Service to improve chat experience." Users are encouraged to click on "Yes" in the Microsoft pop-up to select Bing as Chrome's default search engine. What's really gross is the next part. Clicking "Yes" installs the Bing Chrome extension and changes the default search provider. Chrome alerts the user in another dialog box that something potentially malicious is trying to update their settings. Google's browser recommends you click on a "Change it back" button to undo the tweak.

But Redmond is one step ahead, displaying a message underneath Chrome's alert that reads: "Wait — don't change it back! If you do, you'll turn off Microsoft Bing Search for Chrome and lose access to Bing AI with GPT-4 and DALL-E 3."

This is where we're at: Two Big Tech giants squabbling in front of users via dialog boxes.

"Essentially, users are caught in a war of pop-ups between one company trying to pressure you into using its AI assistant/search engine," writes Engadget, "and another trying to keep you on its default (which you probably wanted if you installed Chrome in the first place).

"Big Tech's battles for AI and search supremacy are turning into obnoxious virtual shouting matches in front of users' eyeballs as they try to browse the web."

Or, as Lifehacker puts it, "If Microsoft really wants to increase the number of users turning to Bing for its search results, it needs to prove that there's a real reason to switch. And these malware-like ads aren't the solution."
AI

Is AI Ruining Etsy? 87

Emily Dreibelbis reports via PCMag: Etsy's reputation as a haven for small, independent creators has come into question as tools like Midjourney have made it easy to list art without disclosing that it's been AI-generated, and shoppers are not happy. "The fact that it's AI isn't listed anywhere," says one Reddit user who purchased a stock photo on Etsy that seemed suspiciously low cost with glowing reviews. "I was so mad at myself for not noticing it was AI before purchasing." [...] "Being avid user of Etsy, I really enjoy supporting small businesses and the talent that goes into their work," the buyer tells PCMag in a private message. "Shops such as we discussed selling massive amounts of AI-generated images take away from genuine sellers who put hours into perfecting their craft."

Etsy's seller policy does not mention artificial intelligence. The platform is still determining the place AI-generated works have on the site, a source tells PCMag. Complicating matters, some sellers take AI-generated images and modify them, adding a hint of human artistry. Etsy also has a policy regarding when sellers can claim an item is "handmade," but it also does not mention AI and appears virtually unenforceable. [...] Beyond the legalities, Etsy shoppers debate the ethical and economic implications. One argues it devalues the work, citing the historical example of Mansa Musa, the 14th-century Malian emperor whose lavish gold spending during his travels inflated the overall supply and tanked the market. If anyone can create art at the push of a button, what defines an artist's work? And what role does Etsy play in answering that question?
AI

FTC Launches Inquiry Into Reddit's AI Deals, Ahead of IPO (axios.com) 2

Days before Reddit's upcoming initial public offering (IPO), the company announced that the FTC has launched an inquiry into the company's licensing of user data to AI companies. Reddit says that it's "not surprised" by the FTC's inquiry, given the novel nature of these agreements. Axios reports: Reddit says it received a letter on Thursday, March 14, in which the FTC said it's "conducting a non-public inquiry focused on our sale, licensing, or sharing of user-generated content with third parties to train AI models." The FTC also is expected to request a meeting with Reddit, plus various documents and information. Reddit isn't the only company receiving these so-called "hold letters," according to a former FTC official who spoke with Axios on background.
AI

Apple Acquires Startup DarwinAI As AI Efforts Ramp Up 16

According to Bloomberg, Apple has acquired Canada-based AI startup DarwinAI for an undisclosed sum. Macworld reports: Apple has reportedly folded the DarwinAI staff into its own AI team, including DarwinAI co-founder Alexander Wong, an AI researcher at the University of Waterloo who "has published over 600 refereed journal and conference papers, as well as patents, in various fields such as computational imaging, artificial intelligence, computer vision, and multimedia systems."

According to its LinkedIn profile, DarwinAI is "a rapidly growing visual quality inspection company providing manufacturers an end-to-end solution to improve product quality and increase production efficiency." In layman's terms, that means Apple is likely interested in DarwinAI to make its manufacturing more efficient -- something that could save the company a ton of money in annual costs.

Far more interesting to our consumer devices, however, is Bloomberg's report that DarwinAI's tech can be used to make AI models more efficient in general. Apple has been said to want any generative AI features to run on the device rather than the cloud, so models will need to be as small as possible and DarwinAI could definitely help there.
Last month, Apple CEO Tim Cook said the iPhone maker sees "incredible breakthrough potential for generative AI, which is why we're currently investing significantly in this area. We believe that will unlock transformative opportunities for users when it comes to productivity, problem solving and more."
AI

India Drops Plan To Require Approval For AI Model Launches (techcrunch.com) 2

An anonymous reader quotes a report from TechCrunch: India is walking back a recent AI advisory after receiving criticism from many local and global entrepreneurs and investors. The Ministry of Electronics and IT shared an updated AI advisory with industry stakeholders on Friday that no longer asks them to obtain government approval before launching or deploying an AI model to users in the South Asian market. Under the revised guidelines, firms are instead advised to label under-tested and unreliable AI models to inform users of their potential fallibility or unreliability.

The March 1 advisory also marked a reversal from India's previous hands-off approach to AI regulation. Less than a year ago, the ministry had declined to regulate AI growth, identifying the sector as vital to India's strategic interests. The new advisory, like the original earlier this month, hasn't been published online, but TechCrunch has reviewed a copy of it. The ministry said earlier this month that though the advisory wasn't legally binding, it signaled the "future of regulation," and that the government expected compliance.

The advisory emphasizes that AI models should not be used to share unlawful content under Indian law and should not permit bias, discrimination, or threats to the integrity of the electoral process. Intermediaries are also advised to use "consent popups" or similar mechanisms to explicitly inform users about the unreliability of AI-generated output. The ministry has retained its emphasis on ensuring that deepfakes and misinformation are easily identifiable, advising intermediaries to label or embed content with unique metadata or identifiers. It no longer requires firms to devise a technique to identify the "originator" of any particular message.

Microsoft

Microsoft Singles Out Google's Competitive Edge in Generative AI (reuters.com) 16

Google enjoys a competitive edge in generative AI due to its trove of data and AI-optimised chips, Microsoft has told EU antitrust regulators, underscoring the rivalry between the two tech giants. From a report: The comments by Microsoft were in response to a consultation launched by the European Commission in January on the level of competition in generative AI. The growing popularity of generative AI, which can generate human-like responses to written prompts and is exemplified by Microsoft-backed OpenAI's ChatGPT and Google's chatbot Gemini, has triggered concerns about misinformation and fake news.

"Today, only one company - Google - is vertically integrated in a manner that provides it with strength and independence at every AI layer from chips to a thriving mobile app store. Everyone else must rely on partnerships to innovate and compete," Microsoft said in its report to the Commission. It said Google's self-supply AI semiconductors would give it a competitive advantage for the years to come, while its large sets of proprietary data from Google Search Index and YouTube enabled it to train its large language model Gemini. "YouTube provides an unparalleled set of video content; it hosts an estimated 14 billion videos. Google has access to such content; but other AI developers do not," Microsoft said.

AI

SXSW Audiences Loudly Boo Festival Videos Touting the Virtues of AI (variety.com) 65

At this year's SXSW festival, discussions on artificial intelligence's future sparked controversy during screenings of premieres like "The Fall Guy" and "Immaculate." Variety reports: The quick-turnaround video editors at SXSW cut a daily sizzle reel highlighting previous panels, premieres and other events, which then runs before festival screenings. On Tuesday, the fourth edition of that daily video focused on the wide variety of keynotes and panelists in town to discuss AI. Those folks sure seem bullish on artificial intelligence, and the audiences at the Paramount -- many of whom are likely writers and actors who just spent much of 2023 on the picket line trying to rein in the potentially destructive power of AI -- decided to boo the video. Loudly. And frequently.

Those boos grew the loudest toward the end of the sizzle, when OpenAI's VP of consumer product and head of ChatGPT Peter Deng declares on camera, "I actually think that AI fundamentally makes us more human." That is not a popular opinion. Deng participated in the session "AI and Humanity's Co-evolution with Open AI's Head of Chat GPT" on Monday, moderated by Signal Fire's consumer VC and former TechCrunch editor Josh Constine. Constine is at the start of the video with another soundbite that drew jeers: "SXSW has always been the digital culture makers, and I think if you look out into this room, you can see that AI is a culture." [...] The groans also grew loud for Magic Leap's founder Rony Abovitz, who gave this advice during the "Storyworlds, Hour Blue & Amplifying Humanity Ethically with AI" panel: "Be one of those people who leverages AI, don't be run over by it."
You can hear some of the reactions from festival attendees here, here, and here.
