AI

OpenAI Says It Has Begun Training a New Flagship AI Model (nytimes.com) 40

OpenAI said on Tuesday that it has begun training a new flagship AI model that would succeed the GPT-4 technology that drives its popular online chatbot, ChatGPT. From a report: The San Francisco start-up, which is one of the world's leading A.I. companies, said in a blog post that it expects the new model to bring "the next level of capabilities" as it strives to build "artificial general intelligence," or A.G.I., a machine that can do anything the human brain can do. The new model would be an engine for A.I. products including chatbots, digital assistants akin to Apple's Siri, search engines and image generators.

OpenAI also said it was creating a new Safety and Security Committee to explore how it should handle the risks posed by the new model and future technologies. "While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment," the company said. OpenAI is aiming to move A.I. technology forward faster than its rivals, while also appeasing critics who say the technology is becoming increasingly dangerous, helping to spread disinformation, replace jobs and even threaten humanity. Experts disagree on when tech companies will reach artificial general intelligence, but companies including OpenAI, Google, Meta and Microsoft have steadily increased the power of A.I. technologies for more than a decade, demonstrating a noticeable leap roughly every two to three years.

Microsoft

Microsoft's Automatic Super Resolution Arrives To Improve Gaming Performance (tomshardware.com) 53

Microsoft has announced Auto SR, an AI-powered image upscaling solution for Windows 11 on Arm devices. The feature, exclusive to Qualcomm's Snapdragon X CPUs, aims to enhance gaming performance on ARM-based systems. Auto SR, however, comes with notable restrictions, including compatibility limitations with certain DirectX versions and the inability to work simultaneously with HDR.
Google

Google's AI Feeds People Answers From The Onion (avclub.com) 125

An anonymous reader shares a report: As denizens of the Internet, we have all often seen a news item so ridiculous it caused us to think, "This seems like an Onion headline." But as real human beings, most of us have the ability to discern between reality and satire. Unfortunately, Google's newly launched "AI Overview" lacks that crucial ability. The feature, which launched less than two weeks ago (with no way for users to opt out), provides answers to certain queries at the top of the page above any other online resources. The artificial intelligence creates its answers from knowledge it has synthesized from around the web, which would be great, except not everything on the Internet is true or accurate. Obviously.

Ben Collins, one of the new owners of our former sister site, pointed out some of AI Overview's most egregious errors on his social media. Asked "how many rocks should I eat each day," Overview said that geologists recommend eating "at least one small rock a day." That language was of course pulled almost word-for-word from a 2021 Onion headline. Another search, "what color highlighters do the CIA use," prompted Overview to answer "black," which was an Onion joke from 2005.

Robotics

'Technical Issues' Stall MLB's Adoption of Robots to Call Balls and Strikes (cbssports.com) 39

Will Major League Baseball games use "automated" umpires next year to watch pitches from home plate and call balls and strikes?

"We still have some technical issues," baseball Commissioner Rob Manfred said Thursday. NBC News reports: "We haven't made as much progress in the minor leagues this year as we sort of hoped at this point. I think it's becoming more and more likely that this will not be a go for '25."

Major League Baseball has been experimenting with the automated ball-strike system in minor leagues since 2019. It is being used at all Triple-A parks this year for the second straight season, the robot alone for the first three games of each series and a human with a [robot-assisted] challenge system in the final three.

In "challenge-system" games, robo-umpires are only used for quickly ruling on challenges to calls from human umpires. (As demonstrated in this 11-second video.)

CBS Sports explains: Each team is given a limited number of "incorrect" challenges per game, which incentivizes judicious use of challenges... In some ways, the challenge system is a compromise between the traditional method of making ball-strike calls and the fully automated approach. That middle ground may make approval by the various stakeholders more likely to happen and may lay the foundation for full automation at some future point.
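The incentive design is simple enough to model: a team is only charged for a challenge when it turns out to be wrong, so correct challenges are free and frivolous ones are costly. A minimal sketch of that bookkeeping (the allowance of three per game is a hypothetical value for illustration; the article says only that the number is limited):

```python
class ChallengeBudget:
    """Tracks ball-strike challenges under an 'incorrect challenges' cap.

    A team is only charged when its challenge fails; successful
    challenges are retained. The allowance of 3 per game is a
    hypothetical value for illustration.
    """

    def __init__(self, allowed_incorrect: int = 3):
        self.remaining = allowed_incorrect

    def can_challenge(self) -> bool:
        return self.remaining > 0

    def resolve(self, call_overturned: bool) -> None:
        """Charge the team only if the robo-umpire upholds the human call."""
        if not call_overturned:
            self.remaining -= 1


budget = ChallengeBudget()
budget.resolve(call_overturned=True)   # correct challenge: budget unchanged
budget.resolve(call_overturned=False)  # incorrect challenge: one fewer left
print(budget.remaining)                # 2
```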
Manfred cites "a growing consensus in large part" from Major League players that this is how they'd want to see robo-umpiring implemented, according to a post on X.com from The Athletic's Evan Drellich. (NBC notes one concern is eliminating the artful way catchers "frame" caught pitches to convince umpires a pitch passed through the strike zone.)

But umpires face greater challenges today, adds CBS Sports: The strong trend, stretching across years, of increased pitch velocity in the big leagues has complicated the calling of balls and strikes, as has the emphasis on high-spin breaking pitches. Discerning balls from strikes has always been challenging, and the stuff of the contemporary major-league pitcher has made anything like perfect accuracy beyond the capabilities of the human eye. Big-league umpires are highly skilled, but the move toward ball-strike automation and thus a higher tier of accuracy is likely inevitable. Manfred's Wednesday remarks reinforce that perception.
AI

Mojo, Bend, and the Rise of AI-First Programming Languages (venturebeat.com) 26

"While general-purpose languages like Python, C++, and Java remain popular in AI development," writes VentureBeat, "the resurgence of AI-first languages signifies a recognition that AI's unique demands require specialized languages tailored to the domain's specific needs... designed from the ground up to address the specific needs of AI development." Bend, created by Higher Order Company, aims to provide a flexible and intuitive programming model for AI, with features like automatic differentiation and seamless integration with popular AI frameworks. Mojo, developed by Modular AI, focuses on high performance, scalability, and ease of use for building and deploying AI applications. Swift for TensorFlow, an extension of the Swift programming language, combines the high-level syntax and ease of use of Swift with the power of TensorFlow's machine learning capabilities...

At the heart of Mojo's design is its focus on seamless integration with AI hardware, such as GPUs running CUDA and other accelerators. Mojo enables developers to harness the full potential of specialized AI hardware without getting bogged down in low-level details. One of Mojo's key advantages is its interoperability with the existing Python ecosystem. Unlike languages like Rust, Zig or Nim, which can have steep learning curves, Mojo allows developers to write code that seamlessly integrates with Python libraries and frameworks. Developers can continue to use their favorite Python tools and packages while benefiting from Mojo's performance enhancements... It supports static typing, which can help catch errors early in development and enable more efficient compilation... Mojo also incorporates an ownership system and borrow checker similar to Rust, ensuring memory safety and preventing common programming errors. Additionally, Mojo offers memory management with pointers, giving developers fine-grained control over memory allocation and deallocation...
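The article doesn't include Mojo source, so the following is only a rough Python-side sketch of the workflow it describes: keep the surrounding Python/NumPy code untouched and isolate a small, fully typed kernel as the candidate for Mojo's compiler, ownership checks, and pointer-level tuning. The function and values here are invented for illustration:

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax: the kind of small, typed hot loop
    the article suggests porting to Mojo while the surrounding
    Python/NumPy code stays untouched."""
    shifted = logits - logits.max(axis=-1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=-1, keepdims=True)

# The rest of the pipeline keeps using ordinary Python tooling.
probs = softmax(np.array([[2.0, 1.0, 0.1]]))
print(probs.round(3))  # [[0.659 0.242 0.099]]
```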

Mojo is conceptually lower-level than some other emerging AI languages like Bend, which compiles modern high-level language features to native multithreading on Apple Silicon or NVIDIA GPUs. Mojo offers fine-grained control over parallelism, making it particularly well-suited for hand-coding modern neural network accelerations. By providing developers with direct control over the mapping of computations onto the hardware, Mojo enables the creation of highly optimized AI implementations.

According to Mojo's creator, Modular, the language has already garnered an impressive user base of over 175,000 developers and 50,000 organizations since it was made generally available last August. Despite its impressive performance and potential, Mojo's adoption might have stalled initially due to its proprietary status. However, Modular recently decided to open-source Mojo's core components under a customized version of the Apache 2 license. This move will likely accelerate Mojo's adoption and foster a more vibrant ecosystem of collaboration and innovation, similar to how open source has been a key factor in the success of languages like Python.

Developers can now explore Mojo's inner workings, contribute to its development, and learn from its implementation. This collaborative approach will likely lead to faster bug fixes, performance improvements and the addition of new features, ultimately making Mojo more versatile and powerful.

The article also notes other languages "trying to become the go-to choice for AI development" by providing high-performance execution on parallel hardware. Unlike low-level beasts like CUDA and Metal, Bend feels more like Python and Haskell, offering fast object allocations, higher-order functions with full closure support, unrestricted recursion and even continuations. It runs on massively parallel hardware like GPUs, delivering near-linear speedup based on core count with zero explicit parallel annotations — no thread spawning, no locks, mutexes or atomics. Powered by the HVM2 runtime, Bend exploits parallelism wherever it can, making it the Swiss Army knife for AI — a tool for every occasion...
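Bend's "zero annotations" claim is easiest to appreciate next to the explicit plumbing a mainstream language requires. A minimal Python sketch of the hand-written parallelism Bend says you can skip — the executor choice, pool sizing, and chunking below are exactly the bookkeeping its HVM2 runtime claims to handle automatically:

```python
from concurrent.futures import ProcessPoolExecutor

def work(x: int) -> int:
    # Stand-in for a pure, independent computation.
    return x * x

if __name__ == "__main__":
    items = range(1_000)
    # In Python, parallelism must be spelled out: pick an executor,
    # size the pool, chunk the work. Bend's pitch is that equivalent
    # high-level code parallelizes with none of these annotations.
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(work, items, chunksize=100))
    print(sum(results))  # 332833500
```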

The resurgence of AI-focused programming languages like Mojo, Bend, Swift for TensorFlow, JAX and others marks the beginning of a new era in AI development. As the demand for more efficient, expressive, and hardware-optimized tools grows, we expect to see a proliferation of languages and frameworks that cater specifically to the unique needs of AI. These languages will leverage modern programming paradigms, strong type systems, and deep integration with specialized hardware to enable developers to build more sophisticated AI applications with unprecedented performance. The rise of AI-focused languages will likely spur a new wave of innovation in the interplay between AI, language design and hardware development. As language designers work closely with AI researchers and hardware vendors to optimize performance and expressiveness, we will likely see the emergence of novel architectures and accelerators designed with these languages and AI workloads in mind. This close relationship between AI, language, and hardware will be crucial in unlocking the full potential of artificial intelligence, enabling breakthroughs in fields like autonomous systems, natural language processing, computer vision, and more.

The future of AI development and computing itself are being reshaped by the languages and tools we create today.

In 2017 Modular AI's founder Chris Lattner (creator of Swift and LLVM) answered questions from Slashdot readers.
Sci-Fi

Netflix's Sci-Fi Movie 'Atlas': AI Apocalypse Blockbuster Gets 'Shocking' Reviews (tomsguide.com) 94

Space.com calls it a movie "adding more combustible material to the inferno of AI unease sweeping the globe." Its director tells them James Cameron was a huge inspiration, saying Atlas "has an Aliens-like vibe because of the grounded, grittiness to it." (You can watch the movie's trailer here...)

But Tom's Guide says "the reviews are just as shocking as the movie's AI." Its "audience score" on Rotten Tomatoes is 55% — but its aggregate score from professional film critics is 16%. The Hollywood Reporter called it "another Netflix movie to half-watch while doing laundry." ("The star plays a data analyst forced to team up with an AI robot in order to prevent an apocalypse orchestrated by a different AI robot...") The site Giant Freakin Robot says "there seems to be a direct correlation between how much money the streaming platform spends on green screen effects and how bad the movie is" (noting the film's rumored budget of $100 million)...

But Tom's Guide defends it as a big-budget sci-fi thriller that "has an interesting premise that makes you think about the potential dangers of AI progression." Our world has always been interested in computers and machines, and the very idea of technology turning against us is unsettling. That's why "Atlas" works as a movie, but professional critics have other things to say. Ross McIndoe from Slant Magazine said: "Atlas seems like a story that should have been experienced with a gamepad in hand...." Todd Gilchrist from Variety didn't enjoy the conventional structure that "Atlas" followed...

However, even though the score is low and the reviews are pretty negative, I don't want to completely bash this movie... If I'm being completely honest, most movies and TV shows nowadays are taken too seriously. The more general blockbusters are supposed to be entertaining and fun, with visually pleasing effects that keep you hooked on the action. This is much like "Atlas", which is a fun watch with an unsettling undertone focused on the dangers of evolving AI...

Being part of the audience, we're supposed to just take it in and enjoy the movie as a casual viewer. This is why I think you should give "Atlas" a chance, especially if you're big into dramatic action sequences and have enjoyed movies like "Terminator" and "Pacific Rim".

AI

How A US Hospital is Using AI to Analyze X-Rays - With Help From Red Hat (redhat.com) 19

This week Red Hat announced one of America's leading pediatric hospitals is using AI to analyze X-rays, "to improve image quality and the speed and accuracy of image interpretation."

Red Hat's CTO said the move exemplifies "the positive impact AI can have in the healthcare field". Before Boston Children's Hospital began piloting AI in radiology, quantitative measurements had to be done manually, which was a time-consuming task. Other, more complex image analyses were performed completely offline and outside of the clinical workflow. In a field where time is of the essence, the hospital is piloting Red Hat OpenShift via the ChRIS Research Integration Service, a web-based medical image platform. The AI application running in ChRIS on the Red Hat OpenShift foundation has the potential to automatically examine x-rays, identify the most valuable diagnostic images among the thousands taken and flag any discrepancies for the radiologist. This decreases the interpretation time for radiologists.
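The announcement gives only the workflow's shape: score incoming images, surface the most diagnostically valuable ones first, and flag discrepancies for the radiologist. A purely hypothetical sketch of that triage step — the field names, scores, and threshold are all invented for illustration, and ChRIS's actual interface is not described in the announcement:

```python
from dataclasses import dataclass

@dataclass
class XRay:
    study_id: str
    quality_score: float      # hypothetical model output in [0, 1]
    discrepancy_score: float  # hypothetical anomaly score in [0, 1]

def triage(images: list[XRay], flag_threshold: float = 0.8) -> tuple[list[XRay], list[XRay]]:
    """Order images by diagnostic value and flag likely discrepancies.

    Mirrors the workflow described in the announcement: the radiologist
    sees the highest-value images first, with flagged studies on top.
    Fields and thresholds are illustrative, not ChRIS's real interface.
    """
    ranked = sorted(images, key=lambda im: im.quality_score, reverse=True)
    flagged = [im for im in ranked if im.discrepancy_score >= flag_threshold]
    return flagged, ranked

flagged, ranked = triage([
    XRay("a", quality_score=0.91, discrepancy_score=0.85),
    XRay("b", quality_score=0.40, discrepancy_score=0.10),
])
print([im.study_id for im in flagged])  # ['a']
```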
But it also seems to be a big win for openness: Innovation developed internally is immediately transferable to public research clouds such as the Massachusetts Open Cloud, where large-scale data sharing and additional innovation can be fostered. Boston Children's Hospital aims to extend the reach of advanced healthcare solutions globally through this approach, amplifying their impact on patient well-being worldwide.
"Red Hat believes open unlocks the world's potential," the announcement concludes, "including the potential to share knowledge and build upon each other's discoveries. Additionally, Red Hat believes innovation — including AI — should be available everywhere, making any application, anywhere a reality.

"With open source, enabling AI-fueled innovation across hybrid IT environments that can lead to faster clinical breakthroughs and better patient outcomes is a reality."
AI

Elon Musk Says AI Could Eliminate Our Need to Work at Jobs (cnn.com) 289

In the future, "Probably none of us will have a job," Elon Musk said Thursday, speaking remotely to the VivaTech 2024 conference in Paris. Instead, jobs will be optional — something we'd do like a hobby — "But otherwise, AI and the robots will provide any goods and services that you want."

CNN reports that Musk added this would require "universal high income" — and "There would be no shortage of goods or services." In a job-free future, though, Musk questioned whether people would feel emotionally fulfilled. "The question will really be one of meaning — if the computer and robots can do everything better than you, does your life have meaning?" he said. "I do think there's perhaps still a role for humans in this — in that we may give AI meaning."
CNN accompanied their article with this counterargument: In January, researchers at MIT's Computer Science and Artificial Intelligence Lab found workplaces are adopting AI much more slowly than some had expected and feared. The report also said the majority of jobs previously identified as vulnerable to AI were not economically beneficial for employers to automate at that time. Experts also largely believe that many jobs requiring high emotional intelligence and human interaction, such as mental health professionals, creatives and teachers, will not need replacing.
CNN notes that Musk "also used his stage time to urge parents to limit the amount of social media that children can see because 'they're being programmed by a dopamine-maximizing AI'."
AI

Robotaxis Face 'Heightened Scrutiny' While the Industry Plans Expansion (msn.com) 19

Besides investigations into Cruise and Waymo, America's National Highway Traffic Safety Administration (NHTSA) also announced it's examining two rear-end collisions between motorbikes and Amazon's steering wheel-free Zoox vehicles being tested in San Francisco, Seattle, and Las Vegas.

This means all three major self-driving vehicle companies "are facing federal investigations over potential flaws linked to dozens of crashes," notes the Washington Post, calling it "a sign of heightened scrutiny as the fledgling industry lays plans to expand nationwide." The industry is poised for growth: About 40 companies have permits to test autonomous vehicles in California alone. The companies have drawn billions of dollars in investment, and supporters say they could revolutionize how Americans travel... Dozens of companies are testing self-driving vehicles in at least 10 states, with some offering services to paying passengers, according to the Autonomous Vehicle Industry Association. The deployments are concentrated in a handful of Western states, especially those with good weather and welcoming governors.

According to a Washington Post analysis of California data, the companies in test mode in San Francisco collectively report millions of miles on public roads every year, along with hundreds of mostly minor collisions. An industry association says autonomous vehicles have logged a total of 70 million miles, a figure that it compares with 293 trips to the moon and back. But it's a tiny fraction of the almost 9 billion miles that Americans drive every day. The relatively small number of miles the vehicles have driven makes it difficult to draw broad conclusions about their safety.
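Both comparisons are easy to check. Assuming the industry association used the average one-way Earth-moon distance of roughly 238,855 miles, the quoted figures fall out directly:

```python
MOON_MILES = 238_855            # average one-way Earth-moon distance
AV_MILES = 70_000_000           # total autonomous miles, per the industry group
US_DAILY_MILES = 9_000_000_000  # approximate miles Americans drive each day

print(round(AV_MILES / MOON_MILES))        # 293 -- the association's comparison
print(f"{AV_MILES / US_DAILY_MILES:.1%}")  # 0.8% of a single day's US driving
```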

Key quotes from the article:
  • "Together, the three investigations opened in the past year examine more than two dozen collisions potentially linked to defective technology. The bulk of the incidents were minor and did not result in any injuries..."
  • "But robotic cars are still very much in their infancy, and while the bulk of the collisions flagged by NHTSA are relatively minor, they call into question the companies' boasts of being far safer than human drivers..."
  • "The era of unrealistic expectations and hype is over," said Matthew Wansley, a professor at the Cardozo School of Law in New York who specializes in emerging automotive technologies. "These companies are under a microscope, and they should be. Private companies are doing an experiment on public roads."
  • "Innocent people are on the roadways, and they're not being protected as they need to be," said Cathy Chase, the president of Advocates for Highway and Auto Safety.

Windows

Satya Nadella Says Microsoft's AI-Focused Copilot+ Laptops Will Outperform Apple's MacBooks (msn.com) 86

"Apple's done a fantastic job of really innovating on the Mac," Microsoft CEO Satya Nadella told the Wall Street Journal in a video interview this week.

Then he said "We are gonna outperform them" with the upcoming Copilot+ laptops from Acer, ASUS, Dell, HP, Lenovo and Samsung that have been completely reengineered for AI — and begin shipping in less than four weeks. Satya Nadella: Qualcomm's got a new [ARM Snapdragon X] processor, which we've optimized Windows for. The battery life — I've been using it now — I mean, it's 22 hours of continuous video playback... [Apple also uses ARM chips in its MacBooks]. We finally feel we have a very competitive product between Surface Pro and the Surface laptops. We have essentially the best specs when it comes to ARM-based silicon and performance or the NPU performance.

WSJ: Microsoft says the Surfaces are 58% faster than the MacBook Air with M3, and have 20% longer battery life.

The video includes a demonstration of local live translation powered by "small language models" stored on the device. ("It can translate live video calls or in-person conversations from 44 different languages into English. And it's fast.")

And in an accompanying article, the Journal's reporter also tested out the AI-powered image generator coming to Microsoft Paint.

As a longtime MS Paint stick-figure and box-house artist, I was delighted by this new tool. I typed in a prompt: "A Windows XP wallpaper with a mountain and sky." Then, as I started drawing, an AI image appeared in a new canvas alongside mine. When I changed a color in my sketch, it changed a color in the generated image. Microsoft says it still sends the prompt to the cloud to ensure content safety.
Privacy was also touched on. Discussing the AI-powered "Recall" search functionality, the Journal's reporter notes that users can stop it from taking screenshots of certain web sites or apps, or turn it off entirely... But they point out: "There could be this reaction from some people that this is pretty creepy. Microsoft is taking screenshots of everything I do."

Nadella reminds them that "it's all being done locally, right...? That's the promise... That's one of the reasons why Recall works as a magical thing: because I can trust it, that it is on my computer."

Copilot will be powered by OpenAI's new GPT-4o, the Journal notes — before showing Satya Nadella saying "It's kind of like a new browser effectively." Satya Nadella: So, it's right there. It sees the screen, it sees the world, it hears you. And so, it's kind of like that personal agent that's always there that you want to talk to. You can interrupt it. It can interrupt you.
Nadella says though the laptop is optimized for Copilot, that's just the beginning, and "I fully expect Copilot to be everywhere" — along with its innovatively individualized "personal agent" interface. "It's gonna be ambient.... It'll go on the phone, right? I'll use it on WhatsApp. I'll use it on any other messaging platform. It'll be on speakers everywhere." Nadella says combining GPT-4o with Copilot's interface is "the type of magic that we wanna bring — first to Windows and everywhere else... The future I see is a computer that understands me versus a computer that I have to understand."

The interview ends when the reporter holds up the result — their own homegrown rendition of Windows XP's default background image "Bliss."
AI

OpenAI Didn't Copy Scarlett Johansson's Voice for ChatGPT, Records Show (msn.com) 74

The Atlantic argued this week that OpenAI "just gave away the entire game... The Johansson scandal is merely a reminder of AI's manifest-destiny philosophy: This is happening, whether you like it or not."

But the Washington Post reports that OpenAI "didn't copy Scarlett Johansson's voice for ChatGPT, records show." [W]hile many hear an eerie resemblance between [ChatGPT voice] "Sky" and Johansson's "Her" character, an actress was hired in June to create the Sky voice, months before Altman contacted Johansson, according to documents, recordings, casting directors and the actress's agent. The agent, who spoke on the condition of anonymity, citing the safety of her client, said the actress confirmed that neither Johansson nor the movie "Her" were ever mentioned by OpenAI. The actress's natural voice sounds identical to the AI-generated Sky voice, based on brief recordings of her initial voice test reviewed by The Post...

[Joanne Jang, who leads AI model behavior for OpenAI], said she "kept a tight tent" around the AI voices project, making Chief Technology Officer Mira Murati the sole decision-maker to preserve the artistic choices of the director and the casting office. Altman was on his world tour during much of the casting process and not intimately involved, she said.... To Jang, who spent countless hours listening to the actress and keeps in touch with the human actors behind the voices, Sky sounds nothing like Johansson, although the two share a breathiness and huskiness. In a statement from the Sky actress provided by her agent, she wrote that at times the backlash "feels personal being that it's just my natural voice and I've never been compared to her by the people who do know me closely."

More from Northeastern University's news service: "The voice of Sky is not Scarlett Johansson's, and it was never intended to resemble hers," Altman said in a statement. "We cast the voice actor behind Sky's voice before any outreach to Ms. Johansson. Out of respect for Ms. Johansson, we have paused using Sky's voice in our products. We are sorry to Ms. Johansson that we didn't communicate better..."

[Alexandra Roberts, a Northeastern University law and media professor] says she believes things will settle down and Johansson will probably not sue OpenAI since the company is no longer using the "Sky" voice. "If they stopped using it, and they promised her they're not going to use it, then she probably doesn't have a case," she says. "She probably doesn't have anything to sue on anymore, and since it was just a demo, and it wasn't a full release to the general public that offers the full range of services they plan to offer, it would be really hard for her to show any damages."

Maybe it's analogous to something Sam Altman said earlier this month on the All-In podcast. "Let's say we paid 10,000 musicians to create a bunch of music, just to make a great training set, where the music model could learn everything about song structure and what makes a good, catchy beat and everything else, and only trained on that... I was posing that as a thought experiment to musicians, and they were like, 'Well, I can't object to that on any principle basis at that point — and yet there's still something I don't like about it.'"

Altman added "Now, that's not a reason not to do it, um, necessarily, but..." and then talked about Apple's "Crush" ad and the importance of preserving human creativity. He concluded by saying that OpenAI has "currently made the decision not to do music, and partly because exactly these questions of where you draw the lines..."
AI

FTC Chair: AI Models Could Violate Antitrust Laws (thehill.com) 42

An anonymous reader quotes a report from The Hill: Federal Trade Commission (FTC) Chair Lina Khan said Wednesday that companies that train their artificial intelligence (AI) models on data from news websites, artists' creations or people's personal information could be in violation of antitrust laws. At The Wall Street Journal's "Future of Everything Festival," Khan said the FTC is examining ways in which major companies' data scraping could hinder competition or potentially violate people's privacy rights. "The FTC Act prohibits unfair methods of competition and unfair or deceptive acts or practices," Khan said at the event. "So, you can imagine, if somebody's content or information is being scraped that they have produced, and then is being used in ways to compete with them and to dislodge them from the market and divert businesses, in some cases, that could be an unfair method of competition."

Khan said concern also lies in companies using people's data without their knowledge or consent, which can also raise legal concerns. "We've also seen a lot of concern about deception, about unfairness, if firms are making one set of representations when you're signing up to use them, but then are secretly or quietly using the data you're feeding them -- be it your personal data, be it, if you're a business, your proprietary data, your competitively significant data -- if they're then using that to feed their models, to compete with you, to abuse your privacy, that can also raise legal concerns," she said.

Khan also recognized people's concerns about companies retroactively changing their terms of service to let them use customers' content, including personal photos or family videos, to feed into their AI models. "I think that's where people feel a sense of violation, that that's not really what they signed up for and oftentimes, they feel that they don't have recourse," Khan said. "Some of these services are essential for navigating day to day life," she continued, "and so, if the choice -- 'choice' -- you're being presented with is: sign off on not just being endlessly surveilled, but all of that data being fed into these models, or forego using these services entirely, I think that's a really tough spot to put people in." Khan said she thinks many government agencies have an important role to play as AI continues to develop, saying, "I think in Washington, there's increasingly a recognition that we can't, as a government, just be totally hands off and stand out of the way."
You can watch the interview with Khan here.
AI

OpenAI Releases Former Employees From Controversial Exit Agreements (cnbc.com) 11

OpenAI has reversed its decision requiring former employees to sign a perpetual non-disparagement agreement to retain their vested equity, stating that they will not cancel any vested units and will remove non-disparagement clauses from departure documents. CNBC reports: The internal memo, which was viewed by CNBC, was sent to former employees and shared with current ones. The memo, addressed to each former employee, said that at the time of the person's departure from OpenAI, "you may have been informed that you were required to execute a general release agreement that included a non-disparagement provision in order to retain the Vested Units [of equity]." "Regardless of whether you executed the Agreement, we write to notify you that OpenAI has not canceled, and will not cancel, any Vested Units," stated the memo, which was viewed by CNBC.

The memo said OpenAI will also not enforce any other non-disparagement or non-solicitation contract items that the employee may have signed. "As we shared with employees, we are making important updates to our departure process," an OpenAI spokesperson told CNBC in a statement. "We have not and never will take away vested equity, even when people didn't sign the departure documents. We'll remove non-disparagement clauses from our standard departure paperwork, and we'll release former employees from existing non-disparagement obligations unless the non-disparagement provision was mutual," said the statement, adding that former employees would be informed of this as well. "We're incredibly sorry that we're only changing this language now; it doesn't reflect our values or the company we want to be," the OpenAI spokesperson added.

Google

Google Search's 'udm=14' Trick Lets You Kill AI Search For Good (arstechnica.com) 40

An anonymous reader quotes a report from Ars Technica: If you're tired of Google's AI Overview extracting all value from the web while also telling people to eat glue or run with scissors, you can turn it off -- sort of. Google has been telling people its AI box at the top of search results is the future, and you can't turn it off, but that ignores how Google search works: A lot of options are powered by URL parameters. That means you can turn off AI search with this one simple trick! (Sorry.) Our method for killing AI search is defaulting to the new "web" search filter, which Google recently launched as a way to search the web without Google's alpha-quality AI junk. It's actually pretty nice, showing only the traditional 10 blue links, giving you a clean (well, other than the ads), uncluttered results page that looks like it's from 2011. Sadly, Google's UI doesn't have a way to make "web" search the default, and switching to it means digging through the "more" options drop-down after you do a search, so it's a few clicks deep.

Check out the URL after you do a search, and you'll see a mile-long URL full of esoteric tracking information and mode information. We'll put each search result URL parameter on a new line so the URL is somewhat readable [...]. Most of these only mean something to Google's internal tracking system, but that "&udm=14" line is the one that will put you in a web search. Tack it on to the end of a normal search, and you'll be booted into the clean 10 blue links interface. While Google might not let you set this as a default, if you have a way to automatically edit the Google search URL, you can create your own defaults. One way to edit the search URL is a proxy site like udm14.com, which is probably the biggest site out there popularizing this technique. A proxy site could, if it wanted to, read all your search result queries, though (your query is also in the URL), so whether you trust this site is up to you.
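Since the "web" filter is just a query-string parameter, you can also build the URL yourself and skip the proxy (and its ability to read your queries). A minimal sketch:

```python
from urllib.parse import urlencode

def web_search_url(query: str) -> str:
    """Build a Google search URL with the 'web' filter (udm=14) applied."""
    return "https://www.google.com/search?" + urlencode({"q": query, "udm": 14})

print(web_search_url("how many rocks should I eat each day"))
# https://www.google.com/search?q=how+many+rocks+should+I+eat+each+day&udm=14
```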

The Courts

Political Consultant Behind Fake Biden Robocalls Faces $6 Million Fine, Criminal Charges (apnews.com) 49

Political consultant Steven Kramer faces a $6 million fine and over two dozen criminal charges for using AI-generated robocalls mimicking President Joe Biden's voice to mislead New Hampshire voters ahead of the presidential primary. The Associated Press reports: The Federal Communications Commission said the fine it proposed Thursday for Steven Kramer is its first involving generative AI technology. The company accused of transmitting the calls, Lingo Telecom, faces a $2 million fine, though in both cases the parties could settle or further negotiate, the FCC said. Kramer has admitted orchestrating a message that was sent to thousands of voters two days before the first-in-the-nation primary on Jan. 23. The message played an AI-generated voice similar to the Democratic president's that used his phrase "What a bunch of malarkey" and falsely suggested that voting in the primary would preclude voters from casting ballots in November.

Kramer is facing 13 felony charges alleging he violated a New Hampshire law against attempting to deter someone from voting using misleading information. He also faces 13 misdemeanor charges accusing him of falsely representing himself as a candidate by his own conduct or that of another person. The charges were filed in four counties and will be prosecuted by the state attorney general's office. Attorney General John Formella said New Hampshire was committed to ensuring that its elections "remain free from unlawful interference."

Kramer, who owns a firm that specializes in get-out-the-vote projects, did not respond to an email seeking comment Thursday. He told The Associated Press in February that he wasn't trying to influence the outcome of the election but rather wanted to send a wake-up call about the potential dangers of artificial intelligence when he paid a New Orleans magician $150 to create the recording. "Maybe I'm a villain today, but I think in the end we get a better country and better democracy because of what I've done, deliberately," Kramer said in February.

Facebook

Mark Zuckerberg Assembles Team of Tech Execs For AI Advisory Council (qz.com) 17

An anonymous reader quotes a report from Quartz: Mark Zuckerberg has assembled some of his fellow tech chiefs into an advisory council to guide Meta on its artificial intelligence and product developments. The Meta Advisory Group will periodically meet with Meta's management team, Bloomberg reported. Its members include: Stripe CEO and co-founder Patrick Collison, former GitHub CEO Nat Friedman, Shopify CEO Tobi Lutke, and former Microsoft executive and investor Charlie Songhurst.

"I've come to deeply respect this group of people and their achievements in their respective areas, and I'm grateful that they're willing to share their perspectives with Meta at such an important time as we take on new opportunities with AI and the metaverse," Zuckerberg wrote in an internal note to Meta employees, according to Bloomberg. The advisory council differs from Meta's 11-person board of directors because its members are not elected by shareholders, nor do they have fiduciary duty to Meta, a Meta spokesperson told Bloomberg. The spokesperson said that the men will not be paid for their roles on the advisory council.
TechCrunch notes that the council features "only white men on it." This "differs from Meta's actual board of directors and its Oversight Board, which is more diverse in gender and racial representation," reports TechCrunch.

"It's telling that the AI advisory council is composed entirely of businesspeople and entrepreneurs, not ethicists or anyone with an academic or deep research background. ... it's been proven time and time again that AI isn't like other products. It's a risky business, and the consequences of getting it wrong can be far-reaching, particularly for marginalized groups."
AI

AI Software Engineers Make $100,000 More Than Their Colleagues (qz.com) 43

The AI boom and a growing talent shortage have resulted in companies paying AI software engineers a whole lot more than their non-AI counterparts. From a report: As of April 2024, AI software engineers in the U.S. were paid a median salary of nearly $300,000, while other software technicians made about $100,000 less, according to data compiled by salary data website Levels.fyi. The pay gap that was already about 30% in mid-2022 has grown to almost 50%.

"It's clear that companies value AI skills and are willing to pay a premium for them, no matter what job level you're at," wrote data scientist Alina Kolesnikova in the Levels.fyi report. That disparity is more pronounced at some companies. The robotaxi company Cruise, for example, pays AI engineers at the staff level a median of $680,500 -- while their non-AI colleagues make $185,500 less, according to Levels.fyi.

AI

US Lawmakers Advance Bill To Make It Easier To Curb Exports of AI Models (reuters.com) 30

The House Foreign Affairs Committee on Wednesday voted overwhelmingly to advance a bill that would make it easier for the Biden administration to restrict the export of AI systems, citing concerns China could exploit them to bolster its military capabilities. From a report: The bill, sponsored by House Republicans Michael McCaul and John Molenaar and Democrats Raja Krishnamoorthi and Susan Wild, also would give the Commerce Department express authority to bar Americans from working with foreigners to develop AI systems that pose risks to U.S. national security. Without this legislation "our top AI companies could inadvertently fuel China's technological ascent, empowering their military and malign ambitions," McCaul, who chairs the committee, warned on Wednesday.

"As the (Chinese Communist Party) looks to expand their technological advancements to enhance their surveillance state and war machine, it is critical we protect our sensitive technology from falling into their hands," McCaul added. The Chinese Embassy in Washington did not immediately respond to a request for comment. The bill is the latest sign Washington is gearing up to beat back China's AI ambitions over fears Beijing could harness the technology to meddle in other countries' elections, create bioweapons or launch cyberattacks.

AI

FCC Chair Proposes Disclosure Rules For AI-Generated Content In Political Ads (qz.com) 37

FCC Chairwoman Jessica Rosenworcel has proposed (PDF) disclosure rules for AI-generated content used in political ads. "If adopted, the proposal would look into whether the FCC should require political ads on radio and TV to disclose when there is AI-generated content," reports Quartz. From the report: The FCC is seeking comment on whether on-air and written disclosure should be required in broadcasters' political files when AI-generated content is used in political ads; proposing that the rules apply to both candidates and issue advertisements; requesting comment on what a specific definition of AI-generated content should look like; and proposing that disclosure rules be applied to broadcasters and entities involved in programming, such as cable operators and radio providers.

The proposed disclosure rules do not prohibit the use of AI-generated content in political ads. The FCC has authority through the Bipartisan Campaign Reform Act to make rules around political advertising. If the proposal is adopted, the FCC will take public comment on the rules.
"As artificial intelligence tools become more accessible, the Commission wants to make sure consumers are fully informed when the technology is used," Rosenworcel said in a statement. "Today, I've shared with my colleagues a proposal that makes clear consumers have a right to know when AI tools are being used in the political ads they see, and I hope they swiftly act on this issue."
