Opera

Opera Adds an Automated AI Agent To Its Browser (theregister.com) 23

king*jojo shares a report from The Register: The Opera web browser now boasts "agentic AI," meaning users can ask an onboard AI model to perform tasks that require a series of in-browser actions. The AI agent, referred to as the Browser Operator, can, for example, find 12 pairs of men's size 10 Nike socks that you can buy. This is demonstrated in an Opera-made video of the process, running intermittently at 6x time, which shows the user has to type out the request for the undergarments rather than click around some webpages.

The AI, in the given example, works its way through eight steps in its browser chat sidebar, clicking and navigating on your behalf in the web display pane, to arrive at a Walmart checkout page with two six-packs of socks added to the user's shopping cart, ready for payment. [...] Other tasks such as finding specific concert tickets and booking flight tickets from Oslo to Newcastle are also depicted, accelerated at times from 4x to 10x, with the user left to authorize the actual purchase. Browser Operator runs more slowly than shown in the video, though that's actually helpful for a semi-capable assistant. A more casual pace allows the user to intervene at any point and take over.

AI

Judges Are Fed Up With Lawyers Using AI That Hallucinates Court Cases (404media.co) 74

An anonymous reader quotes a report from 404 Media: After a group of attorneys were caught using AI to cite cases that didn't actually exist in court documents last month, another lawyer was told to pay $15,000 for his own AI hallucinations that showed up in several briefs. Attorney Rafael Ramirez, who represented a company called HoosierVac in an ongoing case where the Mid Central Operating Engineers Health and Welfare Fund claims the company is failing to allow the union a full audit of its books and records, filed a brief in October 2024 that cited a case the judge wasn't able to locate. Ramirez "acknowledge[d] that the referenced citation was in error," withdrew the citation, and "apologized to the court and opposing counsel for the confusion," according to Judge Mark Dinsmore, U.S. Magistrate Judge for the Southern District of Indiana. But that wasn't the end of it. An "exhaustive review" of Ramirez's other filings in the case showed that he'd included made-up cases in two other briefs, too. [...]

In January, as part of a separate case against a hoverboard manufacturer and Walmart seeking damages for an allegedly faulty lithium battery, attorneys filed court documents that cited a series of cases that don't exist. In February, U.S. District Judge Kelly demanded they explain why they shouldn't be sanctioned for referencing eight non-existent cases. The attorneys contritely admitted to using AI to generate the cases without catching the errors, and called it a "cautionary tale" for the rest of the legal world. Last week, Judge Rankin issued sanctions on those attorneys, according to new records, revoking one attorney's pro hac vice admission (a legal term meaning a lawyer can temporarily practice in a jurisdiction where they're not licensed) and removing him from the case; the three other attorneys on the case were fined between $1,000 and $3,000 each.
The judge in the Ramirez case said that he "does not aim to suggest that AI is inherently bad or that its use by lawyers should be forbidden." In fact, he noted that he's a vocal advocate for the use of technology in the legal profession.

"Nevertheless, much like a chain saw or other useful [but] potentially dangerous tools, one must understand the tools they are using and use those tools with caution," he wrote. "It should go without saying that any use of artificial intelligence must be consistent with counsel's ethical and professional obligations. In other words, the use of artificial intelligence must be accompanied by the application of actual intelligence in its execution."
Apple

Apple Unveils iPad Air With M3 Chip (apple.com) 42

Apple today announced a significant update to its iPad Air lineup, integrating the M3 chip previously reserved for higher-end devices. The new tablets, available in both 11-inch ($599) and 13-inch ($799) configurations, deliver substantial performance gains: nearly 2x faster than M1-equipped models and 3.5x faster than A14 Bionic versions.

The M3 brings Apple's advanced graphics architecture to the Air for the first time, featuring dynamic caching, hardware-accelerated mesh shading, and ray tracing. The chip includes an 8-core CPU delivering 35% faster multithreaded performance over M1, paired with a 9-core GPU offering 40% faster graphics. The Neural Engine processes AI workloads 60% faster than M1, the company said. Apple also introduced a redesigned Magic Keyboard ($269/$319) with function row and larger trackpad.
Google

Google Releases SpeciesNet, an AI Model Designed To Identify Wildlife (techcrunch.com) 15

An anonymous reader quotes a report from TechCrunch: Google has open sourced an AI model, SpeciesNet, designed to identify animal species by analyzing photos from camera traps. Researchers around the world use camera traps -- digital cameras connected to infrared sensors -- to study wildlife populations. But while these traps can provide valuable insights, they generate massive volumes of data that take days to weeks to sift through. In a bid to help, Google launched Wildlife Insights, an initiative of the company's Google Earth Outreach philanthropy program, around six years ago. Wildlife Insights provides a platform where researchers can share, identify, and analyze wildlife images online, collaborating to speed up camera trap data analysis.

Many of Wildlife Insights' analysis tools are powered by SpeciesNet, which Google claims was trained on over 65 million publicly available images and images from organizations like the Smithsonian Conservation Biology Institute, the Wildlife Conservation Society, the North Carolina Museum of Natural Sciences, and the Zoological Society of London. Google says that SpeciesNet can classify images into one of more than 2,000 labels, covering animal species, taxa like "mammalian" or "Felidae," and non-animal objects (e.g. "vehicle"). SpeciesNet is available on GitHub under an Apache 2.0 license, meaning it can be used commercially largely sans restrictions.

Education

Researchers Find Less-Educated Areas Adopting AI Writing Tools Faster 108

An anonymous reader quotes a report from Ars Technica: Since the launch of ChatGPT in late 2022, experts have debated how widely AI language models would impact the world. A few years later, the picture is getting clearer. According to new Stanford University-led research examining over 300 million text samples across multiple sectors, AI language models now assist in writing up to a quarter of professional communications. It's having a large impact, especially in less-educated parts of the United States. "Our study shows the emergence of a new reality in which firms, consumers and even international organizations substantially rely on generative AI for communications," wrote the researchers.

The researchers tracked large language model (LLM) adoption across industries from January 2022 to September 2024 using a dataset that included 687,241 consumer complaints submitted to the US Consumer Financial Protection Bureau (CFPB), 537,413 corporate press releases, 304.3 million job postings, and 15,919 United Nations press releases. By using a statistical detection system that tracked word usage patterns, the researchers found that roughly 18 percent of financial consumer complaints (including 30 percent of all complaints from Arkansas), 24 percent of corporate press releases, up to 15 percent of job postings, and 14 percent of UN press releases showed signs of AI assistance during that period of time.
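The detection approach described above is statistical and population-level: rather than labeling any single document as AI-written, it estimates how much word-usage patterns shifted after ChatGPT's release. A toy sketch of the underlying idea (the marker-word list and the per-1,000-words metric here are illustrative assumptions, not the study's actual lexicon or model):

```python
from collections import Counter
import re

# Illustrative marker words whose frequency rose sharply in AI-assisted
# text; the Stanford study fit a statistical model over word-usage
# patterns rather than relying on a fixed list like this one.
MARKERS = {"delve", "crucial", "pivotal", "showcase", "underscore"}

def marker_rate(texts):
    """Occurrences of marker words per 1,000 words across a corpus."""
    total_words = 0
    hits = 0
    for text in texts:
        words = re.findall(r"[a-z']+", text.lower())
        total_words += len(words)
        counts = Counter(words)
        hits += sum(counts[w] for w in MARKERS)
    return 1000 * hits / total_words if total_words else 0.0

# Comparing a pre-2022 baseline corpus against a recent corpus yields a
# population-level estimate of AI assistance, without classifying any
# individual complaint or press release.
before = ["the complaint was filed on time and the bank did not respond"]
after = ["we delve into the crucial and pivotal issues to underscore harm"]
print(marker_rate(before), marker_rate(after))
```

Because the estimate aggregates over an entire corpus, it can report that "roughly 18 percent of complaints showed signs of AI assistance" without ever accusing a specific filer.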

The study also found that while urban areas showed higher adoption overall (18.2 percent versus 10.9 percent in rural areas), regions with lower educational attainment used AI writing tools more frequently (19.9 percent compared to 17.4 percent in higher-education areas). The researchers note that this contradicts typical technology adoption patterns where more educated populations adopt new tools fastest. "In the consumer complaint domain, the geographic and demographic patterns in LLM adoption present an intriguing departure from historical technology diffusion trends where technology adoption has generally been concentrated in urban areas, among higher-income groups, and populations with higher levels of educational attainment."
"Arkansas showed the highest adoption rate at 29.2 percent (based on 7,376 complaints), followed by Missouri at 26.9 percent (16,807 complaints) and North Dakota at 24.8 percent (1,025 complaints)," notes Ars. "In contrast, states like West Virginia (2.6 percent), Idaho (3.8 percent), and Vermont (4.8 percent) showed minimal AI writing adoption. Major population centers demonstrated moderate adoption, with California at 17.4 percent (157,056 complaints) and New York at 16.6 percent (104,862 complaints)."

The study was listed on the arXiv preprint server in mid-February.
AI

Microsoft Unveils New Voice-Activated AI Assistant For Doctors 18

Microsoft has introduced Dragon Copilot, a voice-activated AI assistant for doctors that integrates dictation and ambient listening tools to automate clinical documentation, including notes, referrals, and post-visit summaries. The tool is set to launch in May in the U.S. and Canada. CNBC reports: Microsoft acquired Nuance Communications, the company behind Dragon Medical One and DAX Copilot, for about $16 billion in 2021. As a result, Microsoft has become a major player in the fiercely competitive AI scribing market, which has exploded in popularity as health systems have been looking for tools to help address burnout. AI scribes like DAX Copilot allow doctors to draft clinical notes in real time as they record their visits with patients' consent. DAX Copilot has been used in more than 3 million patient visits across 600 health-care organizations in the last month, Microsoft said.

Dragon Copilot is accessible through a mobile app, browser or desktop, and it integrates directly with several different electronic health records, the company said. Clinicians will still be able to draft clinical notes with the assistant like they could with DAX Copilot, but they'll be able to use natural language to edit their documentation and prompt it further, Kenn Harper, general manager of Dragon products at Microsoft, told reporters on the call. For instance, a doctor could ask questions like, "Was the patient experiencing ear pain?" or "Can you add the ICD-10 codes to the assessment and plan?" Physicians can also ask broader treatment-related queries such as, "Should this patient be screened for lung cancer?" and get an answer with links to resources like the Centers for Disease Control and Prevention. [...]
AI

TSMC Pledges To Spend $100 Billion On US Chip Facilities (techcrunch.com) 67

An anonymous reader quotes a report from TechCrunch: Chipmaker TSMC said that it aims to invest "at least" $100 billion in chip manufacturing plants in the U.S. over the next four years as part of an effort to expand the company's network of semiconductor factories. President Donald Trump announced the news during a press conference Monday. TSMC's cash infusion will fund the construction of several new facilities in Arizona, C. C. Wei, chairman and CEO of TSMC, said during the briefing. "We are going to produce many AI chips to support AI progress," Wei said.

TSMC previously pledged to pour $65 billion into U.S.-based fabrication plants and has received up to $6.6 billion in grants from the CHIPS Act, a major Biden administration-era law that sought to boost domestic semiconductor production. The new investment brings TSMC's total investments in the U.S. chip industry to around $165 billion, Trump said in prepared remarks. [...] TSMC, the world's largest contract chip maker, already has several facilities in the U.S., including a factory in Arizona that began mass production late last year. But the company currently reserves its most sophisticated facilities for its home country of Taiwan.

AI

Call Centers Using AI To 'Whiten' Indian Accents 136

The world's biggest call center company is using artificial intelligence to "neutralise" Indian accents for Western customers. From a report: Teleperformance said it was applying real-time AI software on phone calls in order to increase "human empathy" between two people on the phone. The French company's customers in the UK include parts of the Government, the NHS, Vodafone and eBay.

Teleperformance has 90,000 employees in India and tens of thousands more in other countries. It is using software from Sanas, an American company that says the system helps "build a more understanding world" and reduces miscommunication. Sanas' website says the software makes call center workers more productive and means customer service calls are resolved more quickly. Sanas also says it means call center workers are less likely to be abused and customers are less likely to demand to speak to a supervisor. The software is already used by companies including Walmart and UPS.
AI

The US Cities Whose Workers Are Most Exposed to AI (bloomberg.com) 24

Silicon Valley, the place that did more than any other to pioneer artificial intelligence, is the most exposed to its ability to automate work. That's according to an analysis by researchers at the Brookings Institution, a think tank, which matched the tasks that OpenAI's GPT-4 could do with the jobs that are most common in different US cities. From a report: The result is a sharp departure from previous rounds of automation. Whereas technologies like robotics came for middle-class jobs -- and manufacturing cities such as Detroit -- generative AI is best at the white-collar work that's highly paid and most common in "superstar" cities like San Francisco and Washington, DC.

The Brookings analysis is of the US, but the same logic would apply anywhere: The more a city's economy is oriented around white-collar knowledge work, the more exposed it is to AI. "Exposure" doesn't necessarily mean automation, stressed Mark Muro, a senior fellow at Brookings and one of the study's authors. It could also mean productivity gains.
From the Brookings report: Now, the higher-end workers and regions only mildly exposed to earlier forms of automation look to be most involved (for better or worse) with generative AI and its facility for cognitive, office-type tasks. In that vein, workers in high-skill metro areas such as San Jose, Calif.; San Francisco; Durham, N.C.; New York; and Washington D.C. appear likely to experience heavy involvement with generative AI, while those in less office-oriented metro areas such as Las Vegas; Toledo, Ohio; and Fort Wayne, Ind. appear far less susceptible. For instance, while 43% of workers in San Jose could see generative AI shift half or more of their work tasks, that share is only 31% of workers in Las Vegas.
Technology

Lenovo's ThinkBook Flip Puts an Extra-Tall Folding Display On a Laptop (theverge.com) 10

Lenovo today teased its ThinkBook "codename Flip" AI PC Concept at Mobile World Congress, one of several concept devices the company has unveiled, featuring a flexible 18.1-inch OLED display that can transform between three configurations: a traditional 13.1-inch clamshell, a folded 12.9-inch tablet, or a laptop with an extra-tall vertical screen.

Unlike the motorized ThinkBook Plus Gen 6 expected in June, the Flip uses the display's flexibility to fold behind itself, eliminating motors while gaining 0.4 inches of additional screen space. Users can mirror content on the rear-facing portion when folded or enjoy the full 2000x2664 resolution display in vertical orientation. The concept also features a SmartForcePad trackpad with LED-illuminated shortcut layers. While still in prototype phase, Lenovo has specs in mind: Intel Ultra 7 processor, 32GB RAM, PCIe SSD storage, and Thunderbolt 4 connectivity.
Programming

Can TrapC Fix C and C++ Memory Safety Issues? (infoworld.com) 99

"TrapC, a fork of the C language, is being developed as a potential solution for memory safety issues that have hindered the C and C++ languages," reports InfoWorld.

But also being developed is a compiler named trapc, "intended to be implemented as a cybersecurity compiler for C and C++ code," said developer Robin Rowe. Due by the end of this year, trapc will be a free, open source compiler similar to Clang, Rowe said...

TrapC has pointers that are memory-safe, addressing the memory safety issue with the two languages. With TrapC, developers write in C or C++ and compile in TrapC, for memory safety...

Rowe presented TrapC at an ISO C meeting this week. Developers can download a TrapC whitepaper and offer Rowe feedback. According to the whitepaper, TrapC's memory management is automatic and cannot leak memory. Pointers are lifetime-managed, not garbage-collected. Also, TrapC reuses a few code safety features from C++, notably member functions, constructors, destructors, and the new keyword.

"TrapC Memory Safe Pointers will not buffer overrun and will not segfault," Rowe told the ISO C Committee standards body meeting, according to the Register. "When C code is compiled using a TrapC compiler, all pointers become Memory Safe Pointers and are checked."

In short, TrapC "is a programming language forked from C, with changes to make it LangSec and Memory Safe," according to that white paper. "To accomplish that, TrapC seeks to eliminate all Undefined Behavior in the C programming language..."

"The startup TRASEC and the non-profit Fountain Abode have a TrapC compiler in development, called trapc," the whitepaper adds, and their mission is "to enable recompiling legacy C code into executables that are safe by design and secure by default, without needing much code refactoring... The TRASEC trapc cybersecurity compiler with AI code reasoning is expected to release as free open source software sometime in 2025."

In November the Register offered some background on the origins of TrapC...
AI

What Happened When Conspiracy Theorists Talked to OpenAI's GPT-4 Turbo? (washingtonpost.com) 134

A "decision science partner" at a seed-stage venture fund (who is also a cognitive-behavioral decision science author and professional poker player) explored what happens when GPT-4 Turbo converses with conspiracy theorists: Researchers have struggled for decades to develop techniques to weaken the grip of conspiracy theories and cult ideology on adherents. This is why a new paper in the journal Science by Thomas Costello of MIT's Sloan School of Management, Gordon Pennycook of Cornell University and David Rand, also of Sloan, is so exciting... In a pair of studies involving more than 2,000 participants, the researchers found a 20 percent reduction in belief in conspiracy theories after participants interacted with a powerful, flexible, personalized GPT-4 Turbo conversation partner. The researchers trained the AI to try to persuade the participants to reduce their belief in conspiracies by refuting the specific evidence the participants provided to support their favored conspiracy theory.

The reduction in belief held across a range of topics... Even more encouraging, participants demonstrated increased intentions to ignore or unfollow social media accounts promoting the conspiracies, and significantly increased willingness to ignore or argue against other believers in the conspiracy. And the results appear to be durable, holding up in evaluations 10 days and two months later... Why was AI able to persuade people to change their minds? The authors posit that it "simply takes the right evidence," tailored to the individual, to effect belief change, noting: "From a theoretical perspective, this paints a surprisingly optimistic picture of human reasoning: Conspiratorial rabbit holes may indeed have an exit. Psychological needs and motivations do not inherently blind conspiracists to evidence...."

It is hard to walk away from who you are, whether you are a QAnon believer, a flat-Earther, a truther of any kind or just a stock analyst who has taken a position that makes you stand out from the crowd. And that's why the AI approach might work so well. The participants were not interacting with a human, which, I suspect, didn't trigger identity in the same way, allowing the participants to be more open-minded. Identity is such a huge part of these conspiracy theories in terms of distinctiveness, putting distance between you and other people. When you're interacting with AI, you're not arguing with a human being whom you might be standing in opposition to, which could cause you to be less open-minded.

Answering questions from Slashdot readers in 2005, Wil Wheaton described playing poker against the cognitive-behavioral decision science author who wrote this article...
AI

27-Year-Old EXE Became Python In Minutes. Is AI-Assisted Reverse Engineering Next? (adafruit.com) 150

Adafruit managing director Phillip Torrone (also long-time Slashdot reader ptorrone) shared an interesting blog post. They'd spotted a Reddit post "detailing how someone took a 27-year-old Visual Basic EXE file, fed it to Claude 3.7, and watched as it reverse-engineered the program and rewrote it in Python." It was an old Visual Basic 4 program they had written in 1997. Running a VB4 exe in 2024 can be a real yak-shaving compatibility nightmare, chasing down outdated DLLs and messy workarounds. So! OP decided to upload the exe to Claude 3.7 with this request:

"Can you tell me how to get this file running? It'd be nice to convert it to Python.">

Claude 3.7 analyzed the binary, extracted the VB 'tokens' (VB is not a fully machine-code-compiled language, which makes this task a lot easier than something from C/C++), identified UI elements, and even extracted sound files. Then, it generated a complete Python equivalent using Pygame. According to the author, the code worked on the first try and the entire process took less than five minutes...

Torrone speculates on what this might mean. "Old business applications and games could be modernized without needing the original source code... Tools like Claude might make decompilation and software archaeology a lot easier: proprietary binaries from dead platforms could get a new life in open-source too."

And maybe Archive.org could even add an LLM "to do this on the fly!"
AMD

AMD Reveals RDNA 4 GPU Architecture Powering Next Gen Radeon RX 9070 Cards (hothardware.com) 24

Long-time Slashdot reader MojoKid writes: AMD took the wraps off its next-gen RDNA 4 consumer graphics architecture Friday, which was designed to enhance efficiency over the previous generation while also optimizing performance for today's more taxing ray-traced gaming and AI workloads. RDNA 4 features next-generation ray tracing engines, dedicated hardware for AI and ML workloads, better bandwidth utilization, and multimedia improvements for both gaming and content creation. AMD's 3rd-generation Ray Accelerators in RDNA 4 offer 2x the peak throughput of RDNA 3 and add support for a new feature called Oriented Bounding Boxes, which results in more efficient GPU utilization. 3rd-generation Matrix Accelerators are also present, which offer improved performance, along with support for 8-bit float data types with structured sparsity.

The first cards featuring RDNA 4, the Radeon RX 9070 and 9070 XT, go on sale next week with very competitive MSRPs below $600, and are expected to do battle with NVIDIA's GeForce RTX 5070-class GPUs.

The article calls it "a significant step forward" for AMD, adding that next week is "going to be very busy around here. NVIDIA is launching the final, previously announced member of the RTX 50 series and AMD will unleash the 9070 and 9070 XT."
Businesses

3D Software Company Autodesk Cuts 1,350 Jobs To Boost AI Investment 19

Autodesk said it would cut 1,350 employees, or about 9% of its workforce, as part of a pivot to the cloud and artificial intelligence. Fast Company reports: Companies across sectors such as architecture, engineering, construction, and product design are making extensive use of Autodesk's 3D design solutions, with the software maker's artificial intelligence and machine learning capabilities further driving spending on its products. Autodesk saw a 23% jump in total billings to $2.11 billion in the fourth quarter ended January 31.

The company's international operations have particularly shown strength, while analysts have also noted that the company was outpacing peers in the manufacturing sector, driven by the performance of its "Fusion" design software.
Businesses

Benioff Says Salesforce Won't Hire Engineers This Year Due To AI (sfstandard.com) 37

Salesforce CEO Marc Benioff said his firm, San Francisco's largest private employer, does not plan to hire engineers this year because of the success of AI agents created and used by the company. From a report: "My message to CEOs right now is that we are the last generation to manage only humans," Benioff said Wednesday on Salesforce's earnings call, indicating that companies of the future will have hybrid human and digital workforces. Benioff added that Salesforce's mission is to become "the No. 1 digital labor provider, period" to other companies.
AI

OpenAI Plans To Integrate Sora's Video Generator Into ChatGPT (techcrunch.com) 4

An anonymous reader quotes a report from TechCrunch: OpenAI intends to eventually integrate its AI video generation tool, Sora, directly into its popular consumer chatbot app, ChatGPT, company leaders said during a Friday office hours session on Discord. Today, Sora is only available through a dedicated web app OpenAI launched in December, which lets users access the AI video model of the same name to generate up to twenty-second-long cinematic clips. However, OpenAI's product lead for Sora, Rohan Sahai, said the company has plans to put Sora in more places, and expand what Sora can create.

[...] OpenAI may be trying to attract users to ChatGPT by letting them generate Sora videos from the chatbot. Putting Sora in ChatGPT could also incentivize users to upgrade to ChatGPT's premium subscription tiers, which may offer higher video generation limits. One of the reasons OpenAI launched Sora as a separate web app was to maintain ChatGPT's simplicity, Sahai explained during the office hours. Since its launch, OpenAI has expanded Sora's web experience, creating more ways for users to browse Sora-generated videos from the community. Sahai also said OpenAI "would love to build" a standalone mobile app for Sora, noting that the Sora team is actively looking for mobile engineers.
OpenAI also plans to expand Sora's generation capabilities to images, letting users create more photorealistic images than what's currently possible with OpenAI's DALL-E 3 model.
Mozilla

Mozilla Responds To Backlash Over New Terms, Saying It's Not Using People's Data for AI 76

Mozilla has denied allegations that its new Firefox browser terms of service allow it to harvest user data for artificial intelligence training, following widespread criticism of the recently updated policy language. The controversy erupted after Firefox introduced terms that grant Mozilla "a nonexclusive, royalty-free, worldwide license to use that information" when users upload content through the browser, prompting competitor Brave Software's CEO Brendan Eich to suggest a business pivot toward data monetization.

"These changes are not driven by a desire by Mozilla to use people's data for AI or sell it to advertisers," Mozilla spokesperson Kenya Friend-Daniel told TechCrunch. "Our ability to use data is still limited by what we disclose in the Privacy Notice." The company clarified that its AI features operate locally on users' devices and don't send content data to Mozilla. Any data shared with advertisers is provided only on a "de-identified or aggregated basis," according to the spokesperson. Mozilla explained it used specific legal terms -- "nonexclusive," "royalty-free," and "worldwide" -- because Firefox is free, available globally, and allows users to maintain control of their own data.
Google

Google's Sergey Brin Urges Workers To the Office at Least Every Weekday 140

Google co-founder Sergey Brin has urged employees working on the company's Gemini AI products to be in the office "at least every weekday" [non-paywalled source] and suggested "60 hours a week is the sweet spot of productivity," according to an internal memo cited by The New York Times. The directive comes as Brin warned that "competition has accelerated immensely and the final race to A.G.I. is afoot," referring to artificial general intelligence, when machines match or surpass human intelligence.

"I think we have all the ingredients to win this race, but we are going to have to turbocharge our efforts," Brin wrote in the Wednesday evening memo. The guidance does not alter Google's official policy requiring employees to work in-office three days weekly. Brin, who returned to Google following ChatGPT's 2022 launch, also criticized staff who "put in the bare minimum," calling them "highly demoralizing to everyone else."
AI

US Workers See AI-Induced Productivity Growth, Fed Survey Shows (straitstimes.com) 23

Workers reported saving a substantial number of work hours by using generative AI, according to research conducted by the Federal Reserve Bank of St. Louis, along with Vanderbilt and Harvard universities. From a report: The researchers, drawing from what they identified as the first nationally representative survey of generative AI adoption, measured the impact of generative AI on work productivity by how much workers used the technology and how intensely. They found users are saving meaningful amounts of time.

"On average, workers are 33% more productive in each hour that they use generative AI," the paper found. Among respondents that used generative AI in the previous week, 21% said it saved them four hours or more in that week, 20% reported three hours, 26% said two hours and 33% reported an hour or less.
