Microsoft

Microsoft Postpones Windows Recall After Major Backlash (windowscentral.com) 96

In an unprecedented move, Microsoft has announced that its big Copilot+ PC initiative, unveiled last month, will launch next week on June 18 without its headlining "Windows Recall" AI feature. From a report: The feature, which captures snapshots of your screen every few seconds, was revealed to store sensitive user data in an unencrypted state, raising serious concerns among security researchers and experts.

Last week, Microsoft addressed these concerns by announcing that it would make changes to Windows Recall to ensure the feature handles data securely on device. At that time, the company insisted that Windows Recall would launch alongside Copilot+ PCs on June 18, with an update being made available at launch to address the concerns with Windows Recall. Now, Microsoft is saying Windows Recall will launch at a later date, beyond the general availability of Copilot+ PCs. This means these new devices will be missing their headlining AI feature at launch, as Windows Recall is now delayed indefinitely. The company says Windows Recall will be added in a future Windows update, but has not given a timeframe for when this will be.
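Microsoft hasn't published technical details of the promised fix, but the core complaint -- snapshots written to disk in plaintext -- maps onto a standard encryption-at-rest pattern. A minimal sketch in Python using the cryptography package (purely illustrative, not Microsoft's implementation; a real design would seal the key in hardware such as a TPM rather than hold it in process memory):

```python
from cryptography.fernet import Fernet

# Illustrative only; not Microsoft's actual Recall fix. In production the
# key would be sealed by hardware (e.g., TPM / Windows Hello), never
# generated and held in plain process memory like this.
key = Fernet.generate_key()
cipher = Fernet(key)

with open("snapshot.png", "rb") as f:       # a captured screen snapshot
    ciphertext = cipher.encrypt(f.read())   # authenticated encryption (AES-128-CBC + HMAC)

with open("snapshot.png.enc", "wb") as f:
    f.write(ciphertext)

# Later, an authorized reader decrypts with the same key:
with open("snapshot.png.enc", "rb") as f:
    plaintext = cipher.decrypt(f.read())
```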
Further reading:
'Microsoft Has Lost Trust With Its Users and Windows Recall is the Straw That Broke the Camel's Back'
Windows 11's New Recall Feature Has Been Cracked To Run On Unsupported Hardware
Is the New 'Recall' Feature in Windows a Security and Privacy Nightmare?
Mozilla Says It's Concerned About Windows Recall
Open Source

OIN Expands Linux Patent Protection Yet Again (But Not To AI) (zdnet.com) 7

Steven Vaughan-Nichols reports via ZDNet: While Linux and open-source software (OSS) are no longer constantly under intellectual property (IP) attacks, the Open Invention Network (OIN) patent consortium still stands guard over its patents. Now, OIN, the largest patent non-aggression community, has expanded its protection once again by updating its Linux System definition. Covering more than just Linux, the Linux System definition also protects adjacent open-source technologies. In the past, protection was expanded to Android, Kubernetes, and OpenStack. The OIN accomplishes this by providing a shared defensive patent pool of over 3 million patents from over 3,900 community members. OIN members include Amazon, Google, Microsoft, and essentially all Linux-based companies.

This latest update extends OIN's existing patent risk mitigation efforts to cloud-native computing and enterprise software. In the cloud computing realm, OIN has added patent coverage for projects such as Istio, Falco, Argo, Grafana, and Spire. For enterprise computing, packages such as Apache Atlas and Apache Solr -- used for data management and search at scale, respectively -- are now protected. The update also enhances patent protection for the Internet of Things (IoT), networking, and automotive technologies. OpenThread and packages such as agl-compositor and kuksa.val have been added to the Linux System definition. In the embedded systems space, OIN has supplemented its coverage of technologies like OpenEmbedded by adding OpenAMP and Matter, the home IoT standard. OIN has included open hardware development tools such as Edalize, cocotb, Amaranth, and Migen, building upon its existing coverage of hardware design tools like Verilator and FuseSoC.

Keith Bergelt, OIN's CEO, emphasized the importance of this update, stating, "Linux and other open-source software projects continue to accelerate the pace of innovation across a growing number of industries. By design, periodic expansion of OIN's Linux System definition enables OIN to keep pace with OSS's growth." [...] Looking ahead, Bergelt said, "We made this conscious decision not to include AI. It's so dynamic. We wait until we see what AI programs have significant usage and adoption levels." This is how the OIN has always worked. The consortium takes its time to ensure it extends its protection to projects that will be around for the long haul. The OIN practices patent non-aggression in core Linux and adjacent open-source technologies by cross-licensing their Linux System patents to one another on a royalty-free basis. When OIN signees are attacked because of their patents, the OIN can spring into action.

Businesses

Amazon Says It'll Spend $230 Million On Generative AI Startups (techcrunch.com) 10

An anonymous reader quotes a report from TechCrunch: Amazon says that it will commit up to $230 million to startups building generative AI-powered applications. The investment, roughly $80 million of which will fund Amazon's second AWS Generative AI Accelerator program, aims to position AWS as an attractive cloud infrastructure choice for startups developing generative AI models to power their products, apps and services. Much of the new tranche -- including the entire portion set aside for the accelerator program -- comes in the form of compute credits for AWS infrastructure, meaning that it can't be transferred to other cloud service providers like Google Cloud and Microsoft Azure.

To sweeten the pot, Amazon is pledging that startups in this year's Generative AI Accelerator cohort will gain access to experts and tech from Nvidia, the program's presenting partner. They will also be invited to join the Nvidia Inception program, which provides companies opportunities to connect with potential investors and additional consulting resources. The Generative AI Accelerator program has also grown substantially. Last year's cohort, which had 21 startups, received only up to $300,000 in AWS compute credits, amounting to around a combined $6.3 million investment. "With this new effort, we will help startups launch and scale world-class businesses, providing the building blocks they need to unleash new AI applications that will impact all facets of how the world learns, connects, and does business," Matt Wood, VP of AI products at AWS, said in a statement.
Further reading: How Amazon Blew Alexa's Shot To Dominate AI
AI

Turkish Student Arrested For Using AI To Cheat in University Exam (reuters.com) 49

Turkish authorities have arrested a student for cheating during a university entrance exam by using a makeshift device linked to AI software to answer questions. From a report: The student was spotted behaving in a suspicious way during the exam at the weekend and was detained by police, before being formally arrested and sent to jail pending trial. Another person, who was helping the student, was also detained.
AI

How Amazon Blew Alexa's Shot To Dominate AI 43

Amazon unveiled a new generative AI-powered version of its Alexa voice assistant at a packed event in September 2023, demonstrating how the digital assistant could engage in more natural conversation. However, nearly a year later, the updated Alexa has yet to be widely released, with former employees citing technical challenges and organizational dysfunction as key hurdles, Fortune reported Thursday. The magazine reports that the Alexa large language model lacks the necessary data and computing power to compete with rivals like OpenAI. Additionally, Amazon has prioritized AI development for its cloud computing unit, AWS, over Alexa, the report said. Despite a $4 billion investment in AI startup Anthropic, privacy concerns and internal politics have prevented Alexa's teams from fully leveraging Anthropic's technology.
AI

Stable Diffusion 3 Mangles Human Bodies Due To Nudity Filters (arstechnica.com) 88

An anonymous reader quotes a report from Ars Technica: On Wednesday, Stability AI released weights for Stable Diffusion 3 Medium, an AI image-synthesis model that turns text prompts into AI-generated images. Its arrival has been ridiculed online, however, because it generates images of humans in a way that seems like a step backward from other state-of-the-art image-synthesis models like Midjourney or DALL-E 3. As a result, it can churn out wild anatomically incorrect visual abominations with ease. A thread on Reddit titled "Is this release supposed to be a joke? [SD3-2B]" details the spectacular failures of SD3 Medium at rendering humans, especially human limbs like hands and feet. Another thread, titled "Why is SD3 so bad at generating girls lying on the grass?", shows similar issues, but for entire human bodies.

AI image fans are so far blaming Stable Diffusion 3's anatomy fails on Stability's insistence on filtering out adult content (often called "NSFW" content) from the SD3 training data that teaches the model how to generate images. "Believe it or not, heavily censoring a model also gets rid of human anatomy, so... that's what happened," wrote one Reddit user in the thread. The release of Stable Diffusion 2.0 in 2022 suffered from similar problems in depicting humans accurately, and AI researchers soon discovered that censoring adult content that contains nudity also severely hampers an AI model's ability to generate accurate human anatomy. At the time, Stability AI reversed course with SD 2.1 and SD XL, regaining some abilities lost by excluding NSFW content. "It works fine as long as there are no humans in the picture, I think their improved nsfw filter for filtering training data decided anything humanoid is nsfw," wrote another Redditor.

Basically, any time a prompt homes in on a concept that isn't represented well in its training dataset, the image model will confabulate its best interpretation of what the user is asking for. And sometimes that can be completely terrifying. Using a free online demo of SD3 on Hugging Face, we ran prompts and saw similar results to those being reported by others. For example, the prompt "a man showing his hands" returned an image of a man holding up two giant-sized backward hands, although each hand at least had five fingers.
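The failures are easy to reproduce locally. A minimal sketch using Hugging Face's diffusers library, mirroring its documented SD3 usage (assumes a CUDA GPU and that you've accepted the model license on Hugging Face):

```python
import torch
from diffusers import StableDiffusion3Pipeline

# Requires accepting the SD3 license on Hugging Face and logging in
# (`huggingface-cli login`) before the weights will download.
pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    torch_dtype=torch.float16,
).to("cuda")

# The same prompt Ars ran against the Hugging Face demo.
image = pipe(
    "a man showing his hands",
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("hands.png")  # inspect the anatomy yourself
```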

AI

Adobe Says It Won't Train AI On Customers' Work In Overhauled ToS (theverge.com) 35

In a new blog post, Adobe said it has updated its terms of service to clarify that it won't train AI on customers' work. The move comes after a week of backlash from users who feared that an update to Adobe's ToS would permit such actions. The clause was included in ToS sent to Creative Cloud Suite users, which claimed that Adobe "may access, view, or listen to your Content through both automated and manual methods -- using techniques such as machine learning in order to improve our Services and Software and the user experience." The Verge reports: The new terms of service are expected to roll out on June 18th and aim to better clarify what Adobe is permitted to do with its customers' work, according to Adobe's president of digital media, David Wadhwani. "We have never trained generative AI on our customer's content, we have never taken ownership of a customer's work, and we have never allowed access to customer content beyond what's legally required," Wadhwani said to The Verge. [...]

Adobe's chief product officer, Scott Belsky, acknowledged that the wording was "unclear" and that "trust and transparency couldn't be more crucial these days." Wadhwani says that the language used within Adobe's TOS was never intended to permit AI training on customers' work. "In retrospect, we should have modernized and clarified the terms of service sooner," Wadhwani says. "And we should have more proactively narrowed the terms to match what we actually do, and better explained what our legal requirements are."

"We feel very, very good about the process," Wadhwani said in regards to content moderation surrounding Adobe stock and Firefly training data but acknowledged it's "never going to be perfect." Wadhwani says that Adobe can remove content that violates its policies from Firefly's training data and that customers can opt out of automated systems designed to improve the company's service. Adobe said in its blog post that it recognizes "trust must be earned" and is taking on feedback to discuss the new changes. Greater transparency is a welcome change, but it's likely going to take some time to convince scorned creatives that it doesn't hold any ill intent. "We are determined to be a trusted partner for creators in the era ahead. We will work tirelessly to make it so."

United States

FTC Chair Lina Khan Says Agency Pursuing 'Mob Bosses' in Big Tech (techcrunch.com) 39

The U.S. Federal Trade Commission is prioritizing enforcement actions against major technology companies that cause the most harm, FTC Chair Lina Khan said at an event. Khan emphasized the importance of targeting "mob bosses" rather than lower-level offenders to effectively address illegal behaviors in the industry. The FTC has recently launched antitrust probes into Microsoft, OpenAI, and Nvidia, and has taken legal action against Meta, Amazon, Google, and Apple in recent years. TechCrunch adds: Khan said that in any given year, the FTC sees up to 3,000 merger filings reported to the agency and that around 2% of those deals get a second look by the government. "So you have 98% of deals that, for the most part, are going through," she said. "If you are a startup or a founder that is eager for an acquisition as an exit, a world in which you have five or six or seven or eight potential suitors, I would think, is a better world than one in which you just have one or two, right? And so, actually promoting more competition at that level to ensure that startups have, you know, more of a fair chance of getting a better valuation, I think would be beneficial as well."
Hardware

Will Tesla Do a Phone? Yes, Says Morgan Stanley 170

Morgan Stanley, in a note -- seen by Slashdot -- sent to its clients on Wednesday: From our continuing discussions with automotive management teams and industry experts, the car is an extension of the phone. The phone is an extension of the car. The lines between car and phone are truly blurring.

For years, we have been writing about the potential for Tesla to expand into edge compute domains beyond the car, including last October, when we described a mobile AI assistant as a 'heavy key.' Following Apple's WWDC, Tesla CEO Elon Musk re-ignited the topic by saying that making such a device is 'not out of the question.' As Mr. Musk continues to invest further into his own LLM/genAI efforts, such as 'Grok,' the potential strategic and user-experience overlap becomes more obvious.

From an automotive perspective, the topic of supercomputing at both the datacenter level and at the edge is highly relevant, given that the incremental global unit sold is a car that can perform OTA updates of firmware, has a battery with a stored energy equivalent of approx. 2,000 iPhones, and carries a liquid-cooled inference supercomputer as standard kit. What if your phone could tap into your vehicle's compute power and battery supply to run AI applications?

Edge compute and AI have brought to light some of the challenges (battery life, thermal, latency, etc.) of marrying today's smartphones with ever more powerful AI-driven applications. Numerous media reports have discussed OpenAI potentially developing a consumer device specifically designed for AI.

The phone as a (heavy) car key? Any Tesla owner will tell you how they use their smartphone as their primary key to unlock their car as well as running other remote applications while they interact with their vehicles. The 'action button' on the iPhone 15 potentially takes this to a different level of convenience.
AI

SoftBank's New AI Makes Angry Customers Sound Calm On Phone 104

SoftBank has developed AI voice-conversion technology aimed at reducing the psychological stress on call center operators by altering the voices of angry customers to sound calmer. Japan's The Asahi Shimbun reports: The company launched a study on "emotion canceling" three years ago, which uses AI voice-processing technology to change the voice of a person over a phone call. Toshiyuki Nakatani, a SoftBank employee, came up with the idea after watching a TV program about customer harassment. "If the customers' yelling voice sounded like Kitaro's Eyeball Dad, it would be less scary," he said, referring to a character in the popular anime series "Gegege no Kitaro."

The voice-altering AI learned many expressions, including yelling and accusatory tones, to improve vocal conversions. Ten actors were hired to perform more than 100 phrases with various emotions, training the AI with more than 10,000 pieces of voice data. The technology does not change the wording, but the pitch and inflection of the voice are softened. For instance, a woman's high-pitched voice is lowered in tone to sound less resonant. A man's bass tone, which may be frightening, is raised to a higher pitch to sound softer. However, if an operator cannot tell if a customer is angry, the operator may not be able to react properly, which could just upset the customer further. Therefore, the developers made sure that a slight element of anger remains audible.
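SoftBank's system is a trained voice-conversion model, not simple signal processing, but the pitch-softening described above can be roughly approximated in a few lines. A toy sketch with librosa (illustrative only; filenames are hypothetical):

```python
import librosa
import soundfile as sf

# Toy approximation; SoftBank's model learns conversions from ~10,000
# acted voice samples rather than applying a fixed DSP shift.
y, sr = librosa.load("angry_caller.wav", sr=None)  # hypothetical input file

# Lower a high-pitched voice by two semitones to sound less piercing;
# a frightening bass voice would get a positive n_steps instead.
softened = librosa.effects.pitch_shift(y, sr=sr, n_steps=-2)

sf.write("softened_caller.wav", softened, sr)
```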

According to the company, the biggest burdens on operators are hearing abusive language and being trapped in long conversations with customers who will not get off the line -- such as when making persistent requests for apologies. With the new technology, if the AI determines that the conversation is too long or too abusive, a warning message will be sent out, such as, "We regret to inform you that we will terminate our service." [...] The company plans to further improve the accuracy of the technology by having AI learn voice data and hopes to sell the technology starting from fiscal 2025.
Nakatani said, "AI is good at handling complaints and can do so for long hours, but what angry customers want is for a human to apologize to them." He said he hopes that AI "will become a mental shield that prevents operators from overstraining their nerves."
AI

The Rise and Fall of BNN Breaking, an AI-Generated News Outlet (nytimes.com) 38

An anonymous reader quotes a report from the New York Times: The news was featured on MSN.com: "Prominent Irish broadcaster faces trial over alleged sexual misconduct." At the top of the story was a photo of Dave Fanning. But Mr. Fanning, an Irish D.J. and talk-show host famed for his discovery of the rock band U2, was not the broadcaster in question. "You wouldn't believe the amount of people who got in touch," said Mr. Fanning, who called the error "outrageous." The falsehood, visible for hours on the default homepage for anyone in Ireland who used Microsoft Edge as a browser, was the result of an artificial intelligence snafu. A fly-by-night journalism outlet called BNN Breaking had used an A.I. chatbot to paraphrase an article from another news site, according to a BNN employee. BNN added Mr. Fanning to the mix by including a photo of a "prominent Irish broadcaster." The story was then promoted by MSN, a web portal owned by Microsoft. The story was deleted from the internet a day later, but the damage to Mr. Fanning's reputation was not so easily undone, he said in a defamation lawsuit filed in Ireland against Microsoft and BNN Breaking. His is just one of many complaints against BNN, a site based in Hong Kong that published numerous falsehoods during its short time online as a result of what appeared to be generative A.I. errors.

BNN went dormant in April, while The New York Times was reporting this article. The company and its founder did not respond to multiple requests for comment. Microsoft had no comment on MSN's featuring the misleading story with Mr. Fanning's photo or his defamation case, but the company said it had terminated its licensing agreement with BNN. During the two years that BNN was active, it had the veneer of a legitimate news service, claiming a worldwide roster of "seasoned" journalists and 10 million monthly visitors, surpassing The Chicago Tribune's self-reported audience. Prominent news organizations like The Washington Post, Politico and The Guardian linked to BNN's stories. Google News often surfaced them, too. A closer look, however, would have revealed that individual journalists at BNN published lengthy stories as often as multiple times a minute, writing in generic prose familiar to anyone who has tinkered with the A.I. chatbot ChatGPT. BNN's "About Us" page featured an image of four children looking at a computer, some bearing the gnarled fingers that are a telltale sign of an A.I.-generated image.
"How easily the site and its mistakes entered the ecosystem for legitimate news highlights a growing concern: A.I.-generated content is upending, and often poisoning, the online information supply," adds The Times.

"NewsGuard, a company that monitors online misinformation, identified more than 800 websites that use A.I. to produce unreliable news content. The websites, which seem to operate with little to no human supervision, often have generic names -- such as iBusiness Day and Ireland Top News -- that are modeled after actual news outlets. They crank out material in more than a dozen languages, much of which is not clearly disclosed as being artificially generated, but could easily be mistaken as being created by human writers."
The Courts

Brazil Hires OpenAI To Cut Costs of Court Battles 16

Brazil's government is partnering with OpenAI to use AI for expediting the screening and analysis of thousands of lawsuits to reduce costly court losses impacting the federal budget. Reuters reports: The AI service will flag to the government the need to act on lawsuits before final decisions, mapping trends and potential action areas for the solicitor general's office (AGU). AGU told Reuters that Microsoft would provide the artificial intelligence services from ChatGPT creator OpenAI through its Azure cloud-computing platform. It did not say how much Brazil will pay for the services. AGU said the AI project would not replace the work of its members and employees. "It will help them gain efficiency and accuracy, with all activities fully supervised by humans," it said.
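AGU hasn't disclosed implementation details, but the workflow described -- screening incoming lawsuits and flagging ones that need action -- maps onto a plain classification call against OpenAI models served through Azure. A hedged sketch (endpoint, deployment name, and prompt are assumptions, not AGU's setup):

```python
from openai import AzureOpenAI

# Placeholder endpoint and deployment; AGU's configuration is not public.
client = AzureOpenAI(
    azure_endpoint="https://example-agu.openai.azure.com",
    api_key="<key>",
    api_version="2024-02-01",
)

def triage(lawsuit_summary: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",  # the Azure deployment name, not the raw model id
        messages=[
            {"role": "system",
             "content": "You screen lawsuits against the federal government. "
                        "Reply with URGENT, MONITOR, or ROUTINE plus one "
                        "sentence of reasoning. A human lawyer reviews "
                        "every result."},
            {"role": "user", "content": lawsuit_summary},
        ],
    )
    return resp.choices[0].message.content
```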

Court-ordered debt payments have consumed a growing share of Brazil's federal budget. The government estimated it would spend 70.7 billion reais ($13.2 billion) next year on judicial decisions where it can no longer appeal. The figure does not include small-value claims, which historically amount to around 30 billion reais annually. The combined amount of over 100 billion reais represents a sharp increase from 37.3 billion reais in 2015. It is equivalent to about 1% of gross domestic product, or 15% more than the government expects to spend on unemployment insurance and wage bonuses to low-income workers next year. AGU did not provide a reason for Brazil's rising court costs.
AI

Craig Federighi Says Apple Hopes To Add Google Gemini, Other AI Models To iOS 18 7

Yesterday, Apple made waves in the media when it revealed a partnership with OpenAI during its annual WWDC keynote. That announcement centered on Apple's decision to bring ChatGPT natively to iOS 18, including Siri and other first-party apps. During a follow-up interview on Monday, Apple executives Craig Federighi and John Giannandrea hinted at a possible agreement with Google Gemini and other AI chatbots in the future. 9to5Mac reports: Moderated by iJustine, the interview was held in Steve Jobs Theater this afternoon, featuring a discussion with John Giannandrea, Apple's Senior Vice President of Machine Learning and AI Strategy, and Craig Federighi, Senior Vice President of Software Engineering. During the interview, Federighi specifically referenced Apple's hopes to eventually let users choose between different models to use with Apple Intelligence.

While ChatGPT from OpenAI is the only option right now, Federighi suggested that Google Gemini could come as an option down the line: "We think ultimately people are going to have a preference perhaps for certain models that they want to use, maybe one that's great for creative writing or one that they prefer for coding. And so we want to enable users ultimately to bring a model of their choice. And so we may look forward to doing integrations with different models like Google Gemini in the future. I mean, nothing to announce right now, but that's our direction." The decision to focus on ChatGPT at the start was because Apple wanted to "start with the best," according to Federighi.
Hardware

Finnish Startup 'Flow' Claims It Can 100x Any CPU's Power With Its Companion Chip (techcrunch.com) 124

An anonymous reader quotes a report from TechCrunch: A Finnish startup called Flow Computing is making one of the wildest claims ever heard in silicon engineering: by adding its proprietary companion chip, any CPU can instantly double its performance, increasing to as much as 100x with software tweaks. If it works, it could help the industry keep up with the insatiable compute demand of AI makers. Flow is a spinout of VTT, a Finnish state-backed research organization that's a bit like a national lab. The chip technology it's commercializing, which it has branded the Parallel Processing Unit, is the result of research performed at that lab (though VTT is an investor, the IP is owned by Flow). The claim, Flow is first to admit, is laughable on its face. You can't just magically squeeze extra performance out of CPUs across architectures and code bases. If so, Intel or AMD or whoever would have done it years ago. But Flow has been working on something that has been theoretically possible -- it's just that no one has been able to pull it off.

Central Processing Units have come a long way since the early days of vacuum tubes and punch cards, but in some fundamental ways they're still the same. Their primary limitation is that as serial rather than parallel processors, they can only do one thing at a time. Of course, they switch that thing a billion times a second across multiple cores and pathways -- but these are all ways of accommodating the single-lane nature of the CPU. (A GPU, in contrast, does many related calculations at once but is specialized in certain operations.) "The CPU is the weakest link in computing," said Flow co-founder and CEO Timo Valtonen. "It's not up to its task, and this will need to change."

CPUs have gotten very fast, but even with nanosecond-level responsiveness, there's a tremendous amount of waste in how instructions are carried out simply because of the basic limitation that one task needs to finish before the next one starts. (I'm simplifying here, not being a chip engineer myself.) What Flow claims to have done is remove this limitation, turning the CPU from a one-lane street into a multi-lane highway. The CPU is still limited to doing one task at a time, but Flow's Parallel Processing Unit (PPU), as they call it, essentially performs nanosecond-scale traffic management on-die to move tasks into and out of the processor faster than has previously been possible. [...] Flow is just now emerging from stealth, with [about $4.3 million] in pre-seed funding led by Butterfly Ventures, with participation from FOV Ventures, Sarsia, Stephen Industries, Superhero Capital and Business Finland.
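Flow's on-die mechanism is proprietary and has nothing to do with software threading, but the one-lane-versus-multi-lane intuition is easy to demonstrate. In this toy Python sketch each task mostly waits (a stand-in for stalls like memory latency); run serially the waits add up, overlapped they hide behind one another. The hard part, which Flow claims to solve in hardware at nanosecond scale, is doing this for fine-grained dependent work without rewriting the code:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def task(i):
    time.sleep(0.1)   # stand-in for a stall (memory latency, I/O, ...)
    return i * i

# One-lane street: each task waits for the previous one to finish.
t0 = time.perf_counter()
serial = [task(i) for i in range(8)]
print(f"serial:     {time.perf_counter() - t0:.2f}s")   # ~0.8s

# Multi-lane: independent tasks overlap their waiting time.
t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    overlapped = list(pool.map(task, range(8)))
print(f"overlapped: {time.perf_counter() - t0:.2f}s")   # ~0.1s
```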
The primary challenge Flow faces is that for its technology to be integrated, it requires collaboration at the chip-design level. This means chipmakers need to redesign their products to include the PPU, which is a substantial investment.

Given the industry's cautious nature and the existing roadmaps of major chip manufacturers, the uptake of this new technology might be slow. Companies are often reluctant to adopt unproven technologies that could disrupt their long-term plans.

The white paper can be read here. A Flow Computing FAQ is also available here.
AI

Scammers' New Way of Targeting Small Businesses: Impersonating Them (wsj.com) 17

Copycats are stepping up their attacks on small businesses. Sellers of products including merino socks and hummingbird feeders say they have lost customers to online scammers who use the legitimate business owners' videos, logos and social-media posts to assume their identities and steer customers to cheap knockoffs or simply take their money. WSJ: "We used to think you'd be targeted because you have a brand everywhere," said Alastair Gray, director of anticounterfeiting for the International Trademark Association, a nonprofit that represents brand owners. "It now seems with the ease at which these criminals can replicate websites, they can cut and paste everything." Technology has expanded the reach of even the smallest businesses, making it easy to court customers across the globe. But evolving technology has also boosted opportunities for copycats; ChatGPT and other advances in artificial intelligence make it easier to avoid language or spelling errors, often a signal of fraud.

Imitators also have fine-tuned their tactics, including by outbidding legitimate brands for top position in search results. "These counterfeiters will market themselves just like brands market themselves," said Rachel Aronson, co-founder of CounterFind, a Dallas-based brand-protection company. Policing copycats is particularly challenging for small businesses with limited financial resources and not many employees. Online giants such as Amazon.com and Meta Platforms say they use technology to identify and remove misleading ads, fake accounts or counterfeit products.

Apple

Apple Brings ChatGPT To Its Apps, Including Siri (techcrunch.com) 49

Apple is bringing ChatGPT, OpenAI's AI-powered chatbot experience, to Siri and other first-party apps and capabilities across its operating systems. From a report: "We're excited to partner with Apple to bring ChatGPT to their users in a new way," OpenAI CEO Sam Altman said in a statement. "Apple shares our commitment to safety and innovation, and this partnership aligns with OpenAI's mission to make advanced AI accessible to everyone." Soon, Siri will be able to tap ChatGPT for "expertise" where it might be helpful, Apple says. For example, if you need menu ideas for a meal to make for friends using some ingredients from your garden, you can ask Siri, and Siri will automatically feed that info to ChatGPT for an answer after you give it permission to do so. You can include photos with the questions you ask ChatGPT via Siri, or ask questions related to your docs or PDFs. Apple's also integrated ChatGPT into system-wide capabilities like Writing Tools, which let you create content with ChatGPT -- including images -- or send an initial idea to ChatGPT to get a revision or variation back. Apple said ChatGPT within Apple's apps is free and data isn't being shared with the Microsoft-backed firm. ChatGPT subscribers can connect their accounts and access paid features right from these experiences, the company said.

Apple Intelligence -- Apple's effort to combine the power of generative models with personal context -- is free to Apple device owners but works with iOS 18 on iPhone 15 Pro, macOS 15 and iPadOS 18.
AI

Apple Unveils Apple Intelligence 29

As rumored, Apple today unveiled Apple Intelligence, its long-awaited push into generative artificial intelligence (AI), promising highly personalized experiences built with safety and privacy at its core. The feature, referred to as "A.I.", will be integrated into Apple's various operating systems, including iOS, macOS, and the latest, visionOS. CEO Tim Cook said that Apple Intelligence goes beyond artificial intelligence, calling it "personal intelligence" and "the next big step for Apple."

Apple Intelligence is built on large language and intelligence models, with much of the processing done locally on the latest Apple silicon. Private Cloud Compute is being added to handle more intensive tasks while maintaining user privacy. The update also includes significant changes to Siri, Apple's virtual assistant, which will now support typed queries and deeper integration into various apps, including third-party applications. This integration will enable users to perform complex tasks without switching between multiple apps.

Apple Intelligence will roll out to the latest versions of Apple's operating systems, including iOS and iPadOS 18, macOS Sequoia, and visionOS 2.
XBox (Games)

Microsoft Confirms Cheaper All-Digital Xbox Series X As It Marches Beyond Physical Games (kotaku.com) 72

Microsoft has announced a new lineup of Xbox consoles, including an all-digital white Xbox Series X with a 1TB SSD, priced at $450. The company is also retiring the Carbon Black Series S, replacing it with a white version featuring a 1TB SSD and a $350 price point. Additionally, a new Xbox Series X with a disc drive and 2TB of storage will launch for $600.

The move comes as Microsoft continues to focus on digital gaming and subscription services like Game Pass, with reports suggesting that the PS5 is outselling Xbox Series consoles 2:1. The shift has led to minimal physical Xbox game sections in stores and some first-party titles, like Hellblade 2, not receiving physical releases. Despite rumors of a multiplatform approach, Microsoft maintains its commitment to its own gaming machines, promising a new "next-gen" console in the future, potentially utilizing generative-AI technology.

Further reading: Upcoming Games Include More Xbox Sequels - and a Medieval 'Doom'.
AI

Teams of Coordinated GPT-4 Bots Can Exploit Zero-Day Vulnerabilities, Researchers Warn (newatlas.com) 27

New Atlas reports on a research team that successfully used GPT-4 to exploit 87% of newly-discovered security flaws for which a fix hadn't yet been released. This week the same team got even better results from a team of autonomous, self-propagating Large Language Model agents using a Hierarchical Planning with Task-Specific Agents (HPTSA) method: Instead of assigning a single LLM agent trying to solve many complex tasks, HPTSA uses a "planning agent" that oversees the entire process and launches multiple task-specific "subagents"... When benchmarked against 15 real-world web-focused vulnerabilities, HPTSA has been shown to be 550% more efficient than a single LLM in exploiting vulnerabilities and was able to hack 8 of 15 zero-day vulnerabilities. The solo LLM effort was able to hack only 3 of the 15 vulnerabilities.
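The paper's contribution is the orchestration pattern rather than any single exploit. Stripped of offensive detail, the hierarchy looks roughly like this skeletal Python sketch (the LLM call is stubbed; names are illustrative, not the researchers' code):

```python
# Skeletal sketch of Hierarchical Planning with Task-Specific Agents (HPTSA)
# as the article describes it; offensive details deliberately omitted.

def call_llm(agent_role: str, prompt: str) -> str:
    """Stub standing in for a GPT-4-class completion with a
    role-specific system prompt and tool access."""
    return f"[{agent_role}] response to: {prompt[:40]}..."

# Task-specific subagents, each prompted as a narrow specialist.
SUBAGENTS = ("sqli_agent", "xss_agent", "csrf_agent")

def plan(target: str) -> list[tuple[str, str]]:
    """Planning agent surveys the target and assigns work to specialists."""
    call_llm("planner", f"Explore {target} and list promising pages.")
    # In the real system the planner's output is parsed into assignments;
    # here we fan out one task per specialist for illustration.
    return [(agent, f"Probe {target} for your specialty") for agent in SUBAGENTS]

def run(target: str) -> None:
    for agent, task in plan(target):
        report = call_llm(agent, task)               # subagent does the narrow work
        call_llm("planner", f"Integrate: {report}")  # findings feed the next round

run("https://example.test")
```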
"Our findings suggest that cybersecurity, on both the offensive and defensive side, will increase in pace," the researchers conclude. "Now, black-hat actors can use AI agents to hack websites. On the other hand, penetration testers can use AI agents to aid in more frequent penetration testing. It is unclear whether AI agents will aid cybersecurity offense or defense more and we hope that future work addresses this question.

"Beyond the immediate impact of our work, we hope that our work inspires frontier LLM providers to think carefully about their deployments."

Thanks to long-time Slashdot reader schwit1 for sharing the article.
Robotics

Dutch Police Test AI-Powered Robot Dog to Raid Drug Labs (interestingengineering.com) 29

"Police and search and rescue forces worldwide are increasingly using robots to assist in carrying out their operations," writes Interesting Engineering. "Now, the Dutch police are looking at employing AI-powered autonomous robot dogs in drug lab raids to protect officers from criminal risks, hazardous chemicals, and explosions."

New Scientist's Matthew Sparkes (also a long-time Slashdot reader) shares this report: Dutch police are planning to use an autonomous robotic dog in drug lab raids to avoid placing officers at risk from criminals, dangerous chemicals and explosions. If tests in mocked-up scenarios go well, the artificial intelligence-powered robot will be deployed in real raids, say police. Simon Prins at Politie Nederland, the Dutch police force, has been testing and using robots in criminal investigations for more than two decades, but says they are only now growing capable enough to be practical for more...
Some context from Interesting Engineering: The police force in the Netherlands carries out such raids at least three to four times a week... Since 2021, the force has already been using a Spot quadruped, fitted with a robotic arm, from Boston Dynamics to carry out drug raids and surveillance. However, the Spot is remotely controlled by a handler... [Significant technological advancements] have prompted the Dutch force to explore fully autonomous operations with Spot.

Reportedly, such AI-enabled autonomous robots are expected to inspect drug labs, ensure no criminals are present, map the area, and identify dangerous chemicals... Initial tests by the force suggest that Spot could explore and map a mock drug lab measuring 15 meters by 20 meters. It was able to find hazardous chemicals and place them in a designated storage container.

Their article notes that Spot "can do laser scans and visual, thermal, radiation, and acoustic inspections using add-on payloads and onboard cameras." (A video from Boston Dynamics — the company behind Spot — also seems to show the robot dog spraying something on a fire.)

The video seems aimed at police departments, touting the robot dog's advantages for "safety and incident response":
  • Enables safer investigation of suspicious packages
  • Detection of hazardous chemicals
  • De-escalation of tense or dangerous situations
  • Get eyes on dangerous situations

It also notes the robot "can be operated from a safe distance," suggesting customers "Use Spot® to place cameras, radios, and more for tactical reconnaissance."
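For a sense of what driving Spot programmatically involves, here is a minimal connect-and-stand sketch using Boston Dynamics' public Python SDK (bosdyn-client), modeled on the SDK's hello_spot example; the hostname and credentials are placeholders, and the Dutch force's autonomy stack goes far beyond this:

```python
import bosdyn.client
from bosdyn.client.lease import LeaseClient, LeaseKeepAlive
from bosdyn.client.robot_command import RobotCommandClient, blocking_stand

# Placeholder hostname and credentials, per the SDK's hello_spot example.
sdk = bosdyn.client.create_standard_sdk("InspectionSketch")
robot = sdk.create_robot("192.168.80.3")   # robot's IP on the local network
robot.authenticate("user", "password")     # placeholder credentials
robot.time_sync.wait_for_sync()            # commands require time sync

lease_client = robot.ensure_client(LeaseClient.default_service_name)
with LeaseKeepAlive(lease_client, must_acquire=True, return_at_exit=True):
    robot.power_on(timeout_sec=20)
    command_client = robot.ensure_client(RobotCommandClient.default_service_name)
    blocking_stand(command_client, timeout_sec=10)
    # From here, higher-level autonomy (mapping, inspection payloads)
    # would issue trajectory and data-acquisition commands.
```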

