Google

Google Maps Can Soon Scan Your Screenshots To Plan Your Vacation (theverge.com) 15

Google is rolling out new AI-powered features across Maps, Search, and Hotels to simplify travel planning, including a screenshot-detection tool in Maps that identifies and saves locations mentioned in image text. The Verge reports: Once the new screenshot list is enabled in Maps, the Gemini-powered feature will detect places that are mentioned in text within screenshots on the device, show users the locations on the map, and allow them to review and save locations to a sharable list. The screenshot list feature will start rolling out in English this week to iOS users in the US, with Android support "coming soon."

AI Overviews for Google Search are also being updated to expand travel planning tools, with itinerary-building features that can create trip ideas for "distinct regions or entire countries" rolling out in English to mobile and desktop devices in the US this week. Users can enter prompts like "create a vacation itinerary for Greece that focuses on history" to explore reviews and photos from other users alongside a map of location recommendations, which can be saved to Google Maps or exported to Docs or Gmail.

Education

Columbia University Suspends Student Behind Interview Cheating AI (businessinsider.com) 37

Columbia University has suspended the student who created an AI tool designed to help job candidates cheat on technical coding interviews, according to disciplinary documents seen by Business Insider. Chungin "Roy" Lee received a yearlong suspension for "publishing unauthorized documents" from a disciplinary hearing about his product, Interview Coder, not for creating the tool itself. Lee had signed a form agreeing not to disclose his disciplinary record or post hearing materials online.

Interview Coder, which sells for $60 monthly, is on track to generate $2 million in annual revenue, Lee said. The university initially placed him on probation after finding him responsible for "facilitation of academic dishonesty." Lee had already submitted paperwork for a leave of absence before his suspension. He told BI he plans to move to San Francisco, which "was my plan all along."
AI

Satya Nadella Says DeepSeek Is the New Bar For Microsoft's AI Success (theverge.com) 14

Microsoft CEO Satya Nadella has told employees that DeepSeek's R1 AI model has set "the new bar" for his company's AI ambitions, citing the startup's ability to reach the top of app store rankings. "What's most impressive about DeepSeek is that it's a great reminder of what 200 people can do when they come together with one thought and one play," The Verge cited Nadella as saying.

"Most importantly, not just leaving it there as a research project or an open source project, but to turn it into a product that was number one in the App Store. That's the new bar to me," he added. Microsoft quickly deployed DeepSeek's R1 on its Azure platform in January. The AI model gained recognition for its optimization below Nvidia's CUDA layer, enabling greater efficiency.
Science

Inside arXiv - the Most Transformative Platform in All of Science (wired.com) 13

Paul Ginsparg, a physics professor at Cornell University, created arXiv nearly 35 years ago as a digital repository where researchers could share their findings before peer review. Today, the platform hosts more than 2.6 million papers, receives 20,000 new submissions monthly, and serves 5 million active users, Wired writes in a profile of the platform.

"Just when I thought I was out, they pull me back in!" Ginsparg quotes from The Godfather Part III, reflecting his inability to fully hand over the platform despite numerous attempts. If arXiv stopped functioning, scientists worldwide would face immediate disruption. "Everybody in math and physics uses it," says Scott Aaronson, a computer scientist at the University of Texas at Austin. "I scan it every night."

ArXiv revolutionized academic publishing, previously dominated by for-profit giants like Elsevier and Springer, by allowing instant and free access to research. Many significant discoveries, including the "transformers" paper that launched the modern AI boom, first appeared on the platform. Initially a collection of shell scripts on Ginsparg's NeXT machine in 1991, arXiv followed him from Los Alamos National Laboratory to Cornell, where it found an institutional home despite administrative challenges. Recent funding from the Simons Foundation has enabled a hiring spree and long-needed technical updates.
AI

China Built Hundreds of AI Data Centers To Catch the AI Boom. Now Many Stand Unused. 64

China's ambitious AI infrastructure push has resulted in hundreds of idle data centers, with local media reporting that up to 80% of newly built computing resources remain unused. The country announced over 500 data center projects during 2023-2024, with at least 150 completed facilities now struggling to secure customers in a rapidly changing market.

The rise of DeepSeek's open-source reasoning model R1, which matches OpenAI's o1 in performance at a fraction of the cost, has fundamentally altered hardware demand. Computing needs now prioritize low-latency infrastructure for real-time reasoning rather than facilities optimized for large-scale training workloads.

Technical misalignment compounds the problem, as many centers were constructed by companies with little AI expertise, MIT Technology Review reports. The facilities, often built in remote regions to capitalize on cheaper electricity and land, now face obsolescence as AI companies require proximity to tech hubs to minimize transmission delays. GPU rental prices have collapsed, with eight-GPU Nvidia H100 server clusters now leasing for 75,000 yuan ($10,333) monthly, down from peaks of 180,000 yuan, making operations financially unsustainable for many data center operators.
AI

OpenAI's Viral Studio Ghibli Moment Highlights AI Copyright Concerns (techcrunch.com) 121

An anonymous reader quotes a report from TechCrunch: It's only been a day since ChatGPT's new AI image generator went live, and social media feeds are already flooded with AI-generated memes in the style of Studio Ghibli, the cult-favorite Japanese animation studio behind blockbuster films such as "My Neighbor Totoro" and "Spirited Away." In the last 24 hours, we've seen AI-generated images representing Studio Ghibli versions of Elon Musk, "The Lord of the Rings", and President Donald Trump. OpenAI CEO Sam Altman even seems to have made his new profile picture a Studio Ghibli-style image, presumably made with GPT-4o's native image generator. Users seem to be uploading existing images into ChatGPT and asking the chatbot to re-create them in new styles.

OpenAI's latest update comes on the heels of Google's release of a similar AI image feature in its Gemini Flash model, which also sparked a viral moment earlier in March when people used it to remove watermarks from images. OpenAI's and Google's latest tools make it easier than ever to re-create the styles of copyrighted works -- simply by typing a text prompt. Together, these new AI image features seem to reignite concerns at the core of several lawsuits against generative AI model developers. If these companies are training on copyrighted works, are they violating copyright law?

According to Evan Brown, an intellectual property lawyer at the law firm Neal & McDevitt, products like GPT-4o's native image generator operate in a legal gray area today. Style is not explicitly protected by copyright, according to Brown, meaning OpenAI does not appear to be breaking the law simply by generating images that look like Studio Ghibli movies. However, Brown says it's plausible that OpenAI achieved this likeness by training its model on millions of frames from Ghibli's films. Even if that was the case, several courts are still deciding whether training AI models on copyrighted works falls under fair use protections. "I think this raises the same question that we've been asking ourselves for a couple years now," said Brown in an interview. "What are the copyright infringement implications of going out, crawling the web, and copying into these databases?"

Operating Systems

Linux Kernel 6.14 Is a Big Leap Forward In Performance, Windows Compatibility (zdnet.com) 34

An anonymous reader quotes a report from ZDNet, written by Steven Vaughan-Nichols: Despite the minor delay, Linux 6.14 arrives packed with cutting-edge features and improvements to power upcoming Linux distributions, such as the forthcoming Ubuntu 25.04 and Fedora 42. The big news for desktop users, especially those who like to play Windows games or run Windows programs on Linux, is the improved NTSYNC driver. This driver is designed to emulate Windows NT synchronization primitives. What that feature means for you and me is that it will significantly improve the performance of Windows programs running on Wine and Steam Play. [...] Gamers always want the best possible graphics performance, so they'll also be happy to see that Linux now supports recently launched AMD RDNA 4 graphics cards, including the AMD Radeon RX 9070 XT and RX 9070. Combine this support with the recently improved open-source RADV driver, and AMD gamers should see the best speed yet on their gaming rigs.

Of course, the release is not just for gamers. Linux 6.14 also includes several AMD and Intel processor enhancements. These boosts focus on power management, thermal control, and compute performance optimizations, and are expected to improve overall system efficiency and performance. This release also comes with the AMDXDNA driver, which provides official support for AMD's neural processing units based on the XDNA architecture. This integration enables efficient execution of AI workloads, such as convolutional neural networks and large language models, directly on supported AMD hardware. While Rust has faced some difficulties in Linux in recent months, more Rust programming language abstractions have been integrated into the kernel, laying the groundwork for future drivers written in Rust. [...] Besides drivers, Miguel Ojeda, Rust for Linux's lead developer, said recently that derive(CoercePointee), a macro for smart pointers introduced with Rust 1.84, is an "important milestone on the way to building a kernel that only uses stable Rust functions." The macro will also make integrating C and Rust code easier. We're getting much closer to Rust being grafted into Linux's tree.

In addition, Linux 6.14 supports Qualcomm's latest Snapdragon 8 Elite mobile processor, enhancing performance and stability for devices powered by this chipset. That support means you can expect to see much faster Android-based smartphones later this year. This release also includes a patch that blocks the so-called GhostWrite vulnerability, which can be used to root some RISC-V processors. Linux 6.14 brings improvements to the copy-on-write Btrfs file system/logical volume manager as well; these are primarily read-balancing methods that offer flexibility for different RAID hardware configurations and workloads. Finally, support for uncached buffered I/O optimizes memory usage on systems with fast storage devices.
Linux 6.14 is available for download from kernel.org.
China

US Expands Export Blacklist To Keep Computing Tech Out of China (theverge.com) 30

The U.S. has added 80 entities to its export blacklist to prevent China from acquiring advanced American chips for military development, including AI, quantum tech, and hypersonic weapons. The Verge reports: More than 50 of the new entities added to the list are based in China, with others located in Iran, Taiwan, Pakistan, South Africa, and the United Arab Emirates. BIS says the restrictions have been applied to entities that acted "contrary to US national security and foreign policy," and are intended to hinder China's ability to develop high-performance computing capabilities, quantum technologies, advanced artificial intelligence, and hypersonic weapons.

Six of the newly blacklisted entities are subsidiaries of Inspur Group -- China's leading cloud computing service provider and a major customer for US chip makers such as Nvidia, AMD, and Intel -- which BIS alleges contributed to projects developing supercomputers for the Chinese military. The Beijing Academy of Artificial Intelligence, another addition to the list, has criticized its inclusion.
"American technology should never be used against the American people," said Jeffrey Kessler, Under Secretary of Commerce for Industry and Security. "BIS is sending a clear, resounding message that the Trump administration will work tirelessly to safeguard our national security by preventing U.S. technologies and goods from being misused for high performance computing, hypersonic missiles, military aircraft training, and UAVs that threaten our national security."
Businesses

Dell's Staff Numbers Have Dropped By 25,000 in Just 2 Years (businessinsider.com) 35

Computer maker Dell's staff numbers have fallen by 25,000 in the last two years. In its latest 10-K filing, published on Tuesday, the company said that it had about 108,000 global employees as of January 31, 2025. In February 2024, that number was 120,000, marking a 10% annual reduction in the workforce. From a report: Looking back two years, Dell's head count stood at 133,000, meaning that since February 2023, the Texas-based tech company has reduced its workforce by 19%. The decline in Dell's head count comes after a year of both layoffs and return-to-office (RTO) mandates. In August, the company significantly restructured its sales division, which it told workers was necessary to prepare for "the world of AI." As part of the restructuring, Dell laid off workers, though it did not specify how many.
Microsoft

Microsoft Abandons Data Center Projects, TD Cowen Says (bloomberg.com) 25

Microsoft has walked away from new data center projects in the US and Europe that would have amounted to a capacity of about 2 gigawatts of electricity, according to TD Cowen analysts, who attributed the pullback to an oversupply of the clusters of computers that power artificial intelligence. From a report: The analysts, who rattled investors with a February note highlighting leases Microsoft had abandoned in the US, said the latest move also reflected the company's choice to forgo some new business from ChatGPT maker OpenAI, which it has backed with some $13 billion. Microsoft and the startup earlier this year said they had altered their multiyear agreement, letting OpenAI use cloud-computing services from other companies, provided Microsoft didn't want the business itself.

Microsoft's retrenchment in the last six months included lease cancellations and deferrals, the TD Cowen analysts said in their latest research note, dated Wednesday. Alphabet's Google had stepped in to grab some leases Microsoft abandoned in Europe, the analysts wrote, while Meta Platforms had scooped up some of the freed capacity in Europe.

The Internet

Open Source Devs Say AI Crawlers Dominate Traffic, Forcing Blocks On Entire Countries (arstechnica.com) 64

An anonymous reader quotes a report from Ars Technica: Software developer Xe Iaso reached a breaking point earlier this year when aggressive AI crawler traffic from Amazon overwhelmed their Git repository service, repeatedly causing instability and downtime. Despite configuring standard defensive measures -- adjusting robots.txt, blocking known crawler user-agents, and filtering suspicious traffic -- Iaso found that AI crawlers continued evading all attempts to stop them, spoofing user-agents and cycling through residential IP addresses as proxies. Desperate for a solution, Iaso eventually resorted to moving their server behind a VPN and creating "Anubis," a custom-built proof-of-work challenge system that forces web browsers to solve computational puzzles before accessing the site. "It's futile to block AI crawler bots because they lie, change their user agent, use residential IP addresses as proxies, and more," Iaso wrote in a blog post titled "a desperate cry for help." "I don't want to have to close off my Gitea server to the public, but I will if I have to."

Iaso's story highlights a broader crisis rapidly spreading across the open source community, as what appear to be aggressive AI crawlers increasingly overload community-maintained infrastructure, causing what amounts to persistent distributed denial-of-service (DDoS) attacks on vital public resources. According to a comprehensive recent report from LibreNews, some open source projects now see as much as 97 percent of their traffic originating from AI companies' bots, dramatically increasing bandwidth costs, destabilizing services, and burdening already stretched-thin maintainers.

Kevin Fenzi, a member of the Fedora Pagure project's sysadmin team, reported on his blog that the project had to block all traffic from Brazil after repeated attempts to mitigate bot traffic failed. GNOME GitLab implemented Iaso's "Anubis" system, requiring browsers to solve computational puzzles before accessing content. GNOME sysadmin Bart Piotrowski shared on Mastodon that only about 3.2 percent of requests (2,690 out of 84,056) passed their challenge system, suggesting the vast majority of traffic was automated. KDE's GitLab infrastructure was temporarily knocked offline by crawler traffic originating from Alibaba IP ranges, according to LibreNews, citing a KDE Development chat. While Anubis has proven effective at filtering out bot traffic, it comes with drawbacks for legitimate users. When many people access the same link simultaneously -- such as when a GitLab link is shared in a chat room -- site visitors can face significant delays. Some mobile users have reported waiting up to two minutes for the proof-of-work challenge to complete, according to the news outlet.
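Anubis's exact scheme isn't detailed in the report, but the general shape of a proof-of-work challenge like the one it imposes can be sketched in a few lines: the server hands the browser a random challenge string, the browser burns CPU searching for a nonce whose hash meets a difficulty target, and the server verifies the answer with a single hash. This is a generic, hypothetical sketch; the function names and difficulty parameter are illustrative, not Anubis's actual API.

```python
import hashlib
import itertools

def meets_difficulty(digest: bytes, bits: int) -> bool:
    """Check whether the hash has at least `bits` leading zero bits."""
    value = int.from_bytes(digest, "big")
    return value >> (len(digest) * 8 - bits) == 0

def solve_challenge(challenge: str, bits: int = 12) -> int:
    """Client side: brute-force a nonce so sha256(challenge:nonce) meets the target."""
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if meets_difficulty(digest, bits):
            return nonce

def verify(challenge: str, nonce: int, bits: int = 12) -> bool:
    """Server side: a single hash, cheap to check."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return meets_difficulty(digest, bits)

nonce = solve_challenge("example-session-token")
assert verify("example-session-token", nonce)
```

The asymmetry is the point: verification is one hash, while solving costs about 2^bits hashes on average. That is tolerable for a visitor loading one page but expensive for a crawler hammering thousands of URLs, though, as the mobile-user complaints above show, the solving cost is also paid by legitimate visitors on slow hardware.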

AI

DeepSeek-V3 Now Runs At 20 Tokens Per Second On Mac Studio 90

An anonymous reader quotes a report from VentureBeat: Chinese AI startup DeepSeek has quietly released a new large language model that's already sending ripples through the artificial intelligence industry -- not just for its capabilities, but for how it's being deployed. The 641-gigabyte model, dubbed DeepSeek-V3-0324, appeared on AI repository Hugging Face today with virtually no announcement (just an empty README file), continuing the company's pattern of low-key but impactful releases. What makes this launch particularly notable is the model's MIT license -- making it freely available for commercial use -- and early reports that it can run directly on consumer-grade hardware, specifically Apple's Mac Studio with M3 Ultra chip.

"The new DeepSeek-V3-0324 in 4-bit runs at > 20 tokens/second on a 512GB M3 Ultra with mlx-lm!" wrote AI researcher Awni Hannun on social media. While the $9,499 Mac Studio might stretch the definition of "consumer hardware," the ability to run such a massive model locally is a major departure from the data center requirements typically associated with state-of-the-art AI. [...] Simon Willison, a developer tools creator, noted in a blog post that a 4-bit quantized version reduces the storage footprint to 352GB, making it feasible to run on high-end consumer hardware like the Mac Studio with M3 Ultra chip. This represents a potentially significant shift in AI deployment. While traditional AI infrastructure typically relies on multiple Nvidia GPUs consuming several kilowatts of power, the Mac Studio draws less than 200 watts during inference. This efficiency gap suggests the AI industry may need to rethink assumptions about infrastructure requirements for top-tier model performance.
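The storage savings Willison describes come from quantization: storing each weight as a small integer plus a shared scale factor, rather than a full-precision float. A minimal sketch of blockwise 4-bit symmetric quantization follows; it is illustrative only, not DeepSeek's or mlx-lm's actual implementation.

```python
def quantize_4bit(weights, block_size=64):
    """Map floats to signed 4-bit ints in [-8, 7] with one float scale per block."""
    blocks = []
    for i in range(0, len(weights), block_size):
        chunk = weights[i:i + block_size]
        scale = max(abs(w) for w in chunk) / 7 or 1.0  # avoid div-by-zero on all-zero blocks
        qs = [max(-8, min(7, round(w / scale))) for w in chunk]
        blocks.append((scale, qs))
    return blocks

def dequantize_4bit(blocks):
    """Reconstruct approximate floats from (scale, ints) blocks."""
    return [v * scale for scale, qs in blocks for v in qs]

weights = [0.5, -0.25, 1.0, 0.0]
restored = dequantize_4bit(quantize_4bit(weights))
# each value is recovered to within half a quantization step (scale / 2)
```

At 4 bits per weight plus a 16- or 32-bit scale per block, storage works out to a little over half a byte per parameter, which is roughly consistent with the 641GB release shrinking to the 352GB figure quoted above.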
"The implications of an advanced open-source reasoning model cannot be overstated," reports VentureBeat. "Current reasoning models like OpenAI's o1 and DeepSeek's R1 represent the cutting edge of AI capabilities, demonstrating unprecedented problem-solving abilities in domains from mathematics to coding. Making this technology freely available would democratize access to AI systems currently limited to those with substantial budgets."

"If DeepSeek-R2 follows the trajectory set by R1, it could present a direct challenge to GPT-5, OpenAI's next flagship model rumored for release in coming months. The contrast between OpenAI's closed, heavily-funded approach and DeepSeek's open, resource-efficient strategy represents two competing visions for AI's future."
Google

Google Unveils Gemini 2.5 Pro, Its Latest AI Reasoning Model With Significant Benchmark Gains (blog.google) 7

Google DeepMind has launched Gemini 2.5, a new family of AI models designed to "think" before responding to queries. The initial release, Gemini 2.5 Pro Experimental, tops the LMArena leaderboard by what Google claims is a "significant margin" and demonstrates enhanced reasoning capabilities across technical tasks. The model achieved 18.8% on Humanity's Last Exam without tools, outperforming most competing flagship models. In mathematics, it scored 86.7% on AIME 2025 and 92.0% on AIME 2024 in single attempts, while reaching 84.0% on GPQA's diamond benchmark for scientific reasoning.

For developers, Gemini 2.5 Pro demonstrates improved coding abilities with 63.8% on SWE-Bench Verified using a custom agent setup, though this falls short of Anthropic's Claude 3.7 Sonnet score of 70.3%. On Aider Polyglot for code editing, it scores 68.6%, which Google claims surpasses competing models. The reasoning approach builds on Google's previous experiments with reinforcement learning and chain-of-thought prompting. These techniques allow the model to analyze information, incorporate context, and draw conclusions before delivering responses. Gemini 2.5 Pro ships with a 1 million token context window (approximately 750,000 words). The model is available immediately in Google AI Studio and for Gemini Advanced subscribers, with Vertex AI integration planned in the coming weeks.
AI

Apple Says It'll Use Apple Maps Look Around Photos To Train AI (theverge.com) 11

An anonymous reader shares a report: Sometime earlier this month, Apple updated a section of its website that discloses how it collects and uses imagery for Apple Maps' Look Around feature, which is similar to Google Maps' Street View, as spotted by 9to5Mac. A newly added paragraph reveals that, beginning in March 2025, Apple will be using imagery and data collected during Look Around surveys to "train models powering Apple products and services, including models related to image recognition, creation, and enhancement."

Apple collects images and 3D data to enhance and improve Apple Maps using vehicles and backpacks (for pedestrian-only areas) equipped with cameras, sensors, and other equipment, including iPhones and iPads. The company says that as part of its commitment to privacy, any images it captures that are published in the Look Around feature have faces and license plates blurred, and it will only use imagery with those details blurred out for training models. Apple also accepts requests from people who want their houses blurred, but houses are not blurred by default.

AI

Alibaba's Tsai Warns of 'Bubble' in AI Data Center Buildout (yahoo.com) 34

Alibaba Chairman Joe Tsai has warned of a potential bubble forming in data center construction, arguing that the pace of that buildout may outstrip initial demand for AI services. From a report: A rush by big tech firms, investment funds and other entities to erect server bases from the US to Asia is starting to look indiscriminate, the billionaire executive and financier said. Many of those projects are built without clear customers in mind, Tsai told the HSBC Global Investment Summit in Hong Kong Tuesday.

"I start to see the beginning of some kind of bubble," Tsai told delegates. Some of the envisioned projects commenced raising funds without having secured "uptake" agreements, he added. "I start to get worried when people are building data centers on spec. There are a number of people coming up, funds coming out, to raise billions or millions of capital." [...]

At the same time, Tsai had choice words for his US rivals, particularly with their spending. "I'm still astounded by the type of numbers that's being thrown around in the United States about investing into AI," Tsai told the audience. "People are talking, literally talking about $500 billion, several 100 billion dollars. I don't think that's entirely necessary. I think in a way, people are investing ahead of the demand that they're seeing today, but they are projecting much bigger demand."

AI

OpenAI CEO Altman Says AI Will Lead To Fewer Software Engineers (stratechery.com) 163

OpenAI CEO Sam Altman believes companies will eventually need fewer software engineers as AI continues to transform programming. "Each software engineer will just do much, much more for a while. And then at some point, yeah, maybe we do need less software engineers," Altman told Stratechery.

AI now handles over 50% of code authorship in many companies, Altman estimated, a significant shift that's happened rapidly as large language models have improved. The real paradigm shift is still coming, he said. "The big thing I think will come with agentic coding, which no one's doing for real yet," Altman said, suggesting that the next breakthrough will be AI systems that can independently tackle larger programming tasks with minimal human guidance.

While OpenAI continues hiring engineers for now, Altman recommended that high school graduates entering the workforce "get really good at using AI tools," calling it the modern equivalent of learning to code. "When I was graduating as a senior from high school, the obvious tactical thing was get really good at coding. And this is the new version of that," he said.
AI

AlexNet, the AI Model That Started It All, Released In Source Code Form (zdnet.com) 8

An anonymous reader quotes a report from ZDNet: There are many stories of how artificial intelligence came to take over the world, but one of the most important developments is the emergence in 2012 of AlexNet, a neural network that, for the first time, demonstrated a huge jump in a computer's ability to recognize images. Thursday, the Computer History Museum (CHM), in collaboration with Google, released for the first time the AlexNet source code written by University of Toronto graduate student Alex Krizhevsky, placing it on GitHub for all to peruse and download.

"CHM is proud to present the source code to the 2012 version of Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton's AlexNet, which transformed the field of artificial intelligence," write the Museum organizers in the readme file on GitHub. Krizhevsky's creation would lead to a flood of innovation in the ensuing years, and tons of capital, based on proof that with sufficient data and computing, neural networks could achieve breakthroughs previously viewed as mainly theoretical.
The Computer History Museum's software historian, Hansen Hsu, published an essay describing how he spent five years negotiating with Google to release the code.
Portables (Apple)

Software Engineer Runs Generative AI On 20-Year-Old PowerBook G4 (macrumors.com) 55

A software engineer successfully ran Meta's Llama 2 generative AI model on a 20-year-old PowerBook G4, demonstrating how well-optimized code can push the limits of legacy hardware. MacRumors' Joe Rossignol reports: While hardware requirements for large language models (LLMs) are typically high, this particular PowerBook G4 model from 2005 is equipped with a mere 1.5GHz PowerPC G4 processor and 1GB of RAM. Despite this 20-year-old hardware, my brother was able to achieve inference with Meta's LLM model Llama 2 on the laptop. The experiment involved porting the open-source llama2.c project, and then accelerating performance with a PowerPC vector extension called AltiVec. His full blog post offers more technical details about the project.
China

Jack Ma-Backed Ant Touts AI Breakthrough Using Chinese Chips (yahoo.com) 30

An anonymous reader quotes a report from Bloomberg: Jack Ma-backed Ant Group used Chinese-made semiconductors to develop techniques for training AI models that would cut costs by 20%, according to people familiar with the matter. Ant used domestic chips, including from affiliate Alibaba and Huawei, to train models using the so-called Mixture of Experts machine learning approach, the people said. It got results similar to those from Nvidia chips like the H800, they said, asking not to be named as the information isn't public. Hangzhou-based Ant is still using Nvidia for AI development but is now relying mostly on alternatives including from Advanced Micro Devices and Chinese chips for its latest models, one of the people said.

The models mark Ant's entry into a race between Chinese and US companies that's accelerated since DeepSeek demonstrated how capable models can be trained for far less than the billions invested by OpenAI and Alphabet Inc.'s Google. It underscores how Chinese companies are trying to use local alternatives to the most advanced Nvidia semiconductors. While not the most advanced, the H800 is a relatively powerful processor that the US currently bars from export to China. The company published a research paper this month claiming that its models at times outperformed those of Meta Platforms Inc. in certain benchmarks, a claim Bloomberg News hasn't independently verified. But if they work as advertised, Ant's platforms could mark another step forward for Chinese artificial intelligence development by slashing the cost of inference or supporting AI services.

AI

Microsoft Announces Security AI Agents To Help Overwhelmed Humans 23

Microsoft is expanding its Security Copilot platform with six new AI agents designed to autonomously assist cybersecurity teams by triaging phishing alerts, processing data loss incidents, and monitoring for vulnerabilities. There are also five third-party AI agents created by its partners, including OneTrust and Tanium. The Verge reports: Microsoft's six security agents will be available in preview next month, and are designed to do things like triage and process phishing and data loss alerts, prioritize critical incidents, and monitor for vulnerabilities. "The six Microsoft Security Copilot agents enable teams to autonomously handle high-volume security and IT tasks while seamlessly integrating with Microsoft Security solutions," says Vasu Jakkal, corporate vice president of Microsoft Security.

Microsoft is also working with OneTrust, Aviatrix, BlueVoyant, Tanium, and Fletch to enable some third-party security agents. These extensions will make it easier to analyze data breaches with OneTrust or perform root cause analysis of network outages and failures with Aviatrix. [...] While these latest AI agents in the Security Copilot are designed for security teams to take advantage of, Microsoft is also improving its phishing protection in Microsoft Teams. Microsoft Defender for Office 365 will start protecting Teams users against phishing and other cyberthreats within Teams next month, including better protection against malicious URLs and attachments.
