Networking

Eric Raymond, John Carmack Mourn Death of 'Bufferbloat' Fighter Dave Taht (x.com) 18

Wikipedia remembers Dave Täht as "an American network engineer, musician, lecturer, asteroid exploration advocate, and Internet activist. He was the chief executive officer of TekLibre."

But on X.com Eric S. Raymond called him "one of the unsung heroes of the Internet, and a close friend of mine who I will miss very badly." Dave, known on X as @mtaht because his birth name was Michael, was a true hacker of the old school who touched the lives of everybody using the Internet. His work on mitigating bufferbloat improved practical TCP/IP performance tremendously, especially around video streaming and other applications requiring low latency. Without him, Netflix and similar services might still be plagued by glitches and stutters.
Also on X, legendary game developer John Carmack remembered that Täht "did a great service for online gamers with his long campaign against bufferbloat in routers and access points. There is a very good chance your packets flow through some code he wrote." (Carmack also says he and Täht "corresponded for years".)

Long-time Slashdot reader TheBracket remembers him as "the driving force behind the Bufferbloat project and a contributor to FQ-CoDel and CAKE in the Linux kernel."

Dave spent years doing battle with Internet latency and bufferbloat, contributing to countless projects. In recent years, he's been working with Robert, Frank and myself at LibreQoS to provide CAKE at the ISP level, helping Starlink with their latency and bufferbloat, and assisting the OpenWrt project.
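For readers unfamiliar with the work being remembered: CAKE is a queue discipline that ships in the mainline Linux kernel (and in OpenWrt), and enabling it on a link is nearly a one-liner with the standard `tc` tool. Here is a minimal sketch, driven from Python purely for illustration; the interface name and shaper bandwidth are assumptions for the example, not details from the story:

```python
import subprocess

def enable_cake(interface: str = "eth0", bandwidth: str = "100mbit") -> None:
    """Attach the CAKE qdisc to an interface to fight bufferbloat.

    CAKE combines fair queuing with active queue management so that a
    saturated link stays responsive. The interface and bandwidth are
    illustrative; in practice the shaper rate is set slightly below the
    true link rate so CAKE, not the modem's buffer, controls the queue.
    """
    subprocess.run(
        ["tc", "qdisc", "replace", "dev", interface,
         "root", "cake", "bandwidth", bandwidth],
        check=True,
    )

if __name__ == "__main__":
    enable_cake()  # requires root privileges and the sch_cake kernel module
```

LibreQoS, mentioned above, applies the same qdisc per subscriber at an ISP's edge rather than on a single home router.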
Eric Raymond remembered first meeting Täht in 2001 "near the peak of my Mr. Famous Guy years. Once, sometimes twice a year he'd come visit, carrying his guitar, and crash out in my basement for a week or so hacking on stuff. A lot of the central work on bufferbloat got done while I was figuratively looking over his shoulder..."

Raymond said Täht "lived for the work he did" and "bore deteriorating health stoically. While I knew him he went blind in one eye and was diagnosed with multiple sclerosis." He barely let it slow him down. Despite constantly griping in later years about being burned out on programming, he kept not only doing excellent work but bringing good work out of others, assembling teams of amazing collaborators to tackle problems lesser men would have considered intractable... Dave should have been famous, and he should have been rich. If he had a cent for every dollar of value he generated in the world he probably could have bought the entire country of Nicaragua and had enough left over to finance a space program. He joked about wanting to do the latter, and I don't think he was actually joking...

In the invisible college of people who made the Internet run, he was among the best of us. He said I inspired him, but I often thought he was a better and more selfless man than me. Ave atque vale, Dave.

Weeks before his death, Täht was still active on X.com, retweeting LWN's article about "The AI scraperbot scourge", an announcement from Texas Instruments, and even a Slashdot headline.

Täht was also Slashdot reader #603,670, submitting stories about network latency, leaving comments about AI, and making announcements about the Bufferbloat project.
AI

OpenAI's Motion to Dismiss Copyright Claims Rejected by Judge (arstechnica.com) 102

Is OpenAI's ChatGPT violating copyrights? The New York Times sued OpenAI in December 2023. Ars Technica summarizes OpenAI's response: the New York Times (or NYT) "should have known that ChatGPT was being trained on its articles... partly because of the newspaper's own reporting..."

OpenAI pointed to a single November 2020 article, where the NYT reported that OpenAI was analyzing a trillion words on the Internet.

But on Friday, U.S. district judge Sidney Stein disagreed, denying OpenAI's motion to dismiss the NYT's copyright claims partly based on one NYT journalist's reporting. In his opinion, Stein confirmed that it's OpenAI's burden to prove that the NYT knew that ChatGPT would potentially violate its copyrights two years prior to its release in November 2022... And OpenAI's other argument — that it was "common knowledge" that ChatGPT was trained on NYT articles in 2020 based on other reporting — also failed for similar reasons...

OpenAI may still be able to prove through discovery that the NYT knew that ChatGPT would have infringing outputs in 2020, Stein said. But at this early stage, dismissal is not appropriate, the judge concluded. The same logic follows in a related case from The Daily News, Stein ruled. Davida Brook, co-lead counsel for the NYT, suggested in a statement to Ars that the NYT counts Friday's ruling as a win. "We appreciate Judge Stein's careful consideration of these issues," Brook said. "As the opinion indicates, all of our copyright claims will continue against Microsoft and OpenAI for their widespread theft of millions of The Times's works, and we look forward to continuing to pursue them."

The New York Times is also arguing that OpenAI contributes to ChatGPT users' infringement of its articles, and OpenAI lost its bid to dismiss that claim, too. The NYT argued that by training AI models on NYT works and training ChatGPT to deliver certain outputs, without the NYT's consent, OpenAI should be liable for users who manipulate ChatGPT to regurgitate content in order to skirt the NYT's paywalls... At this stage, Stein said, the NYT has "plausibly" alleged contributory infringement: more than 100 pages of examples of ChatGPT outputs, along with media reports showing that ChatGPT could regurgitate portions of paywalled news articles, support the claim that OpenAI "possessed constructive, if not actual, knowledge of end-user infringement." Perhaps more troubling to OpenAI, the judge noted that "The Times even informed defendants 'that their tools infringed its copyrighted works,' supporting the inference that defendants possessed actual knowledge of infringement by end users."

AI

Two Teenagers Built 'Cal AI', a Photo Calorie App With Over a Million Users (techcrunch.com) 24

An anonymous reader quotes a report from TechCrunch: In a world filled with "vibe coding," Zach Yadegari, teen founder of Cal AI, stands in ironic, old-fashioned contrast. Ironic because Yadegari and his co-founder, Henry Langmack, are both just 18 years old and still in high school. Yet their story, so far, is a classic. Launched in May, Cal AI has generated over 5 million downloads in eight months, Yadegari says. Better still, he tells TechCrunch that the customer retention rate is over 30% and that the app generated over $2 million in revenue last month. [...]

The concept is simple: Take a picture of the food you are about to consume, and let the app log calories and macros for you. It's not a unique idea. For instance, the big dog in calorie counting, MyFitnessPal, has its Meal Scan feature. Then there are apps like SnapCalorie, which was released in 2023 and created by the founder of Google Lens. Cal AI's advantage, perhaps, is that it was built wholly in the age of large image models. It uses models from Anthropic and OpenAI, along with retrieval-augmented generation (RAG), to improve accuracy, and is trained on open source food calorie and image databases from sites like GitHub.

"We have found that different models are better with different foods," Yadegari tells TechCrunch. Along the way, the founders coded through technical problems like recognizing ingredients from food packages or in jumbled bowls. The result is an app that the creators say is 90% accurate, which appears to be good enough for many dieters.
The report says Yadegari began mastering Python and C# in middle school and went on to build his first business in ninth grade -- a website called Totally Science that gave students access to unblocked games (cleverly named to evade school filters). He sold the company at age 16 to FreezeNova for $100,000.

Following the sale, Yadegari immersed himself in the startup scene, watching Y Combinator videos and networking on X, where he met co-founder Blake Anderson, known for creating ChatGPT-powered apps like RizzGPT. Together, they launched Cal AI and moved to a hacker house in San Francisco to develop their prototype.
Wikipedia

Wikimedia Drowning in AI Bot Traffic as Crawlers Consume 65% of Resources 73

Web crawlers collecting training data for AI models are overwhelming Wikipedia's infrastructure, with bot traffic growing exponentially since early 2024, according to the Wikimedia Foundation. Data released April 1 shows that bandwidth for multimedia content has surged 50% since January, driven primarily by automated programs scraping Wikimedia Commons' 144 million openly licensed media files.

This unprecedented traffic is causing operational challenges for the non-profit. When Jimmy Carter died in December 2024, his Wikipedia page received 2.8 million views in a day, while a 1.5-hour video of his 1980 presidential debate caused network traffic to double, resulting in slow page loads for some users.

Analysis shows 65% of the foundation's most resource-intensive traffic comes from bots, despite bots accounting for only 35% of total pageviews. The foundation's Site Reliability team now routinely blocks overwhelming crawler traffic to prevent service disruptions. "Our content is free, our infrastructure is not," the foundation said, announcing plans to establish sustainable boundaries for automated content consumption.
AI

Midjourney Releases V7, Its First New AI Image Model In Nearly a Year 3

Midjourney's new V7 image model features a revamped architecture with smarter text prompt handling, higher image quality, and default personalization based on user-rated images. While some features like upscaling aren't yet available, it does come with a faster, cheaper Draft Mode. TechCrunch reports: To use it, you'll first have to rate around 200 images to build a Midjourney "personalization" profile, if you haven't already. This profile tunes the model to your individual visual preferences; V7 is Midjourney's first model to have personalization switched on by default. Once you've done that, you'll be able to turn V7 on or off on Midjourney's website and, if you're a member of Midjourney's Discord server, on its Discord chatbot. In the web app, you can quickly select the model from the drop-down menu next to the "Version" label.

Midjourney CEO David Holz described V7 as a "totally different architecture" in a post on X. "V7 is ... much smarter with text prompts," Holz continued in an announcement on Discord. "[I]mage prompts look fantastic, image quality is noticeably higher with beautiful textures, and bodies, hands, and objects of all kinds have significantly better coherence on all details." V7 is available in two flavors, Turbo (costlier to run) and Relax, and powers a new tool called Draft Mode that renders images at 10x the speed and half the cost of the standard mode. Draft images are of lower quality than standard-mode images, but they can be enhanced and re-rendered with a click.

A number of standard Midjourney features aren't available yet for V7, according to Holz, including image upscaling and retexturing. Those will arrive in the near future, he said, possibly within two months. "This is an entirely new model with unique strengths and probably a few weaknesses," Holz wrote on Discord. "[W]e want to learn from you what it's good and bad at, but definitely keep in mind it may require different styles of prompting. So play around a bit."
Security

Google Launches Sec-Gemini v1 AI Model To Improve Cybersecurity Defense 2

Google has introduced Sec-Gemini v1, an experimental AI model built on its Gemini platform and tailored for cybersecurity. BetaNews reports: Sec-Gemini v1 is built on top of Gemini, but it's not just some repackaged chatbot. Actually, it has been tailored with security in mind, pulling in fresh data from sources like Google Threat Intelligence, the OSV vulnerability database, and Mandiant's threat reports. This gives it the ability to help with root cause analysis, threat identification, and vulnerability triage.

Google says the model performs better than others on two well-known benchmarks. On CTI-MCQ, which measures how well models understand threat intelligence, it scores at least 11 percent higher than competitors. On CTI-Root Cause Mapping, it edges out rivals by at least 10.5 percent. Benchmarks only tell part of the story, but those numbers suggest it's doing something right.
Access is currently limited to select researchers and professionals for early testing. If you meet those criteria, you can request access here.
The Courts

AI Avatar Tries To Argue Case Before a New York Court (apnews.com) 24

An anonymous reader quotes a report from the Associated Press: It took only seconds for the judges on a New York appeals court to realize that the man addressing them from a video screen -- a person about to present an argument in a lawsuit -- not only had no law degree, but didn't exist at all. The latest bizarre chapter in the awkward arrival of artificial intelligence in the legal world unfolded March 26 under the stained-glass dome of New York State Supreme Court Appellate Division's First Judicial Department, where a panel of judges was set to hear from Jerome Dewald, a plaintiff in an employment dispute. "The appellant has submitted a video for his argument," said Justice Sallie Manzanet-Daniels. "Ok. We will hear that video now."

On the video screen appeared a smiling, youthful-looking man with a sculpted hairdo, button-down shirt and sweater. "May it please the court," the man began. "I come here today a humble pro se before a panel of five distinguished justices." "Ok, hold on," Manzanet-Daniels said. "Is that counsel for the case?" "I generated that. That's not a real person," Dewald answered. It was, in fact, an avatar generated by artificial intelligence. The judge was not pleased. "It would have been nice to know that when you made your application. You did not tell me that sir," Manzanet-Daniels said before yelling across the room for the video to be shut off. "I don't appreciate being misled," she said before letting Dewald continue with his argument.

Dewald later penned an apology to the court, saying he hadn't intended any harm. He didn't have a lawyer representing him in the lawsuit, so he had to present his legal arguments himself. And he felt the avatar would be able to deliver the presentation without his own usual mumbling, stumbling and tripping over words. In an interview with The Associated Press, Dewald said he applied to the court for permission to play a prerecorded video, then used a product created by a San Francisco tech company to create the avatar. Originally, he tried to generate a digital replica that looked like him, but he was unable to accomplish that before the hearing. "The court was really upset about it," Dewald conceded. "They chewed me up pretty good." [...] As for Dewald's case, it was still pending before the appeals court as of Thursday.

Microsoft

Microsoft Employee Disrupts 50th Anniversary and Calls AI Boss 'War Profiteer' (theverge.com) 174

An anonymous reader shares a report: A Microsoft employee disrupted the company's 50th anniversary event to protest its use of AI. "Shame on you," said Microsoft employee Ibtihal Aboussad, speaking directly to Microsoft AI CEO Mustafa Suleyman. "You are a war profiteer. Stop using AI for genocide. Stop using AI for genocide in our region. You have blood on your hands. All of Microsoft has blood on its hands. How dare you all celebrate when Microsoft is killing children. Shame on you all."
AI

AI Could Affect 40% of Jobs and Widen Inequality Between Nations, UN Warns (cnbc.com) 40

An anonymous reader shares a report: AI is projected to reach $4.8 trillion in market value by 2033, but the technology's benefits remain highly concentrated, according to the U.N. Trade and Development agency. In a report released on Thursday, UNCTAD said the AI market cap would roughly equate to the size of Germany's economy, with the technology offering productivity gains and driving digital transformation. However, the agency also raised concerns about automation and job displacement, warning that AI could affect 40% of jobs worldwide. On top of that, AI is not inherently inclusive, meaning the economic gains from the tech remain "highly concentrated," the report added.

"The benefits of AI-driven automation often favour capital over labour, which could widen inequality and reduce the competitive advantage of low-cost labour in developing economies," it said. The potential for AI to cause unemployment and inequality is a long-standing concern, with the IMF making similar warnings over a year ago. In January, The World Economic Forum released findings that as many as 41% of employers were planning on downsizing their staff in areas where AI could replicate them. However, the UNCTAD report also highlights inequalities between nations, with U.N. data showing that 40% of global corporate research and development spending in AI is concentrated among just 100 firms, mainly those in the U.S. and China.

AI

Google's NotebookLM AI Can Now 'Discover Sources' For You 6

Google's NotebookLM has added a new "Discover sources" feature that allows users to describe a topic and have the AI find and curate relevant sources from the web -- eliminating the need to upload documents manually. "When you tap the Discover button in NotebookLM, you can describe the topic you're interested in, and NotebookLM will bring back a curated collection of relevant sources from the web," says Google software engineer Adam Bignell. Click to add those sources to your notebook; "it's a fast and easy way to quickly grasp a new concept or gather essential reading on a topic." PCMag reports: You can still add your files. NotebookLM can ingest PDFs, websites, YouTube videos, audio files, Google Docs, or Google Slides and summarize, transcribe, narrate, or convert into FAQs and study guides. "Discover sources" helps incorporate information you may not have saved. [...] The imported sources stay within the notebook you created. You can read the entire original document, ask questions about it via chat, or apply other NotebookLM features to it.

Google started rolling out both features on Wednesday. It should be available for all users in about "a week or so." For those concerned about privacy, Google says, "NotebookLM does not use your personal data, including your source uploads, queries, and the responses from the model for training."
There's also an "I'm Feeling Curious" button (a reference to its iconic "I'm feeling lucky" search button) that generates sources on a random topic you might find interesting.
AI

DeepMind Details All the Ways AGI Could Wreck the World (arstechnica.com) 36

An anonymous reader quotes a report from Ars Technica, written by Ryan Whitwam: Researchers at DeepMind have ... released a new technical paper (PDF) that explains how to develop AGI safely, which you can download at your convenience. It contains a huge amount of detail, clocking in at 108 pages before references. While some in the AI field believe AGI is a pipe dream, the authors of the DeepMind paper project that it could happen by 2030. With that in mind, they aimed to understand the risks of a human-like synthetic intelligence, which they acknowledge could lead to "severe harm." This work has identified four possible types of AGI risk, along with suggestions on how we might ameliorate said risks. The DeepMind team, led by company co-founder Shane Legg, categorized the negative AGI outcomes as misuse, misalignment, mistakes, and structural risks.

The first possible issue, misuse, is fundamentally similar to current AI risks. However, because AGI will be more powerful by definition, the damage it could do is much greater. A ne'er-do-well with access to AGI could misuse the system to do harm, for example, by asking the system to identify and exploit zero-day vulnerabilities or create a designer virus that could be used as a bioweapon. DeepMind says companies developing AGI will have to conduct extensive testing and create robust post-training safety protocols. Essentially, AI guardrails on steroids. They also suggest devising a method to suppress dangerous capabilities entirely, sometimes called "unlearning," but it's unclear if this is possible without substantially limiting models. Misalignment is largely not something we have to worry about with generative AI as it currently exists. This type of AGI harm is envisioned as a rogue machine that has shaken off the limits imposed by its designers. Terminators, anyone? More specifically, the AI takes actions it knows the developer did not intend. DeepMind says its standard for misalignment here is more advanced than simple deception or scheming as seen in the current literature.

To avoid that, DeepMind suggests developers use techniques like amplified oversight, in which two copies of an AI check each other's output, to create robust systems that aren't likely to go rogue. If that fails, DeepMind suggests intensive stress testing and monitoring to watch for any hint that an AI might be turning against us. Keeping AGIs in virtual sandboxes with strict security and direct human oversight could help mitigate issues arising from misalignment. Basically, make sure there's an "off" switch. If, on the other hand, an AI didn't know that its output would be harmful and the human operator didn't intend for it to be, that's a mistake. We get plenty of those with current AI systems -- remember when Google said to put glue on pizza? The "glue" for AGI could be much stickier, though. DeepMind notes that militaries may deploy AGI due to "competitive pressure," but such systems could make serious mistakes as they will be tasked with much more elaborate functions than today's AI. The paper doesn't have a great solution for mitigating mistakes. It boils down to not letting AGI get too powerful in the first place. DeepMind calls for deploying slowly and limiting AGI authority. The study also suggests passing AGI commands through a "shield" system that ensures they are safe before implementation.
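The "amplified oversight" idea in that paragraph, two copies of an AI checking each other's output, reduces in caricature to a generate-review-revise loop. A toy sketch, with stand-in functions rather than any real model API, and not DeepMind's actual protocol:

```python
def generate(prompt: str) -> str:
    # Stand-in for the acting model; a real system would call an LLM here.
    return f"proposed answer to: {prompt.splitlines()[0]}"

def review(task: str, answer: str) -> str:
    # Stand-in for a second instance of the model acting as reviewer.
    # A real reviewer would return "OK" or a concrete objection.
    return "OK" if answer else "OBJECTION: empty answer"

def amplified_oversight(task: str, max_rounds: int = 3) -> str:
    """Release an answer only after an independent copy of the model
    raises no objection; otherwise feed the objection back and revise."""
    answer = generate(task)
    for _ in range(max_rounds):
        verdict = review(task, answer)
        if verdict.startswith("OK"):
            return answer
        answer = generate(f"{task}\nReviewer objection: {verdict}\nRevise.")
    raise RuntimeError("no approved answer; escalate to human oversight")

print(amplified_oversight("Summarize the incident report."))
```

The pattern's appeal is that a mistake must slip past two independent model instances instead of one.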

Lastly, there are structural risks, which DeepMind defines as the unintended but real consequences of multi-agent systems contributing to our already complex human existence. For example, AGI could create false information that is so believable that we no longer know who or what to trust. The paper also raises the possibility that AGI could accumulate more and more control over economic and political systems, perhaps by devising heavy-handed tariff schemes. Then one day, we look up and realize the machines are in charge instead of us. This category of risk is also the hardest to guard against because it would depend on how people, infrastructure, and institutions operate in the future.

Facebook

Schrodinger's Economics (thetimes.com) 38

databasecowgirl writes: Commenting in The Times on the absurdity of Meta's copyright infringement claims, Caitlin Moran defines Schrodinger's economics: where a company is both [one of] the most valuable on the planet yet also too poor to pay for the materials it profits from.

Ultimately "move fast and break things" means breaking other people's things. Or, possibly worse, going full 'The Talented Mr Ripley': slowly feeling so entitled to the things you are enamored of that you end up clubbing out the brains of your beloved in a boat.

Microsoft

Microsoft Pulls Back on Data Centers From Chicago To Jakarta 21

Microsoft has pulled back on data center projects around the world, suggesting the company is taking a harder look at its plans to build the server farms powering artificial intelligence and the cloud. From a report: The software company has recently halted talks for, or delayed development of, sites in Indonesia, the UK, Australia, Illinois, North Dakota and Wisconsin, according to people familiar with the situation. Microsoft is widely seen as a leader in commercializing AI services, largely thanks to its close partnership with OpenAI. Investors closely track Microsoft's spending plans to get a sense of long-term customer demand for cloud and AI services.

It's hard to know how much of the company's data center pullback reflects expectations of diminished demand versus temporary construction challenges, such as shortages of power and building materials. Some investors have interpreted signs of retrenchment as an indication that projected purchases of AI services don't justify Microsoft's massive outlays on server farms. Those concerns have weighed on global tech stocks in recent weeks, particularly chipmakers like Nvidia, which suck up a significant share of data center budgets.
AI

Vibe Coded AI App Generates Recipes With Very Few Guardrails 76

An anonymous reader quotes a report from 404 Media: A "vibe coded" AI app developed by entrepreneur and Y Combinator group partner Tom Blomfield has generated recipes that gave users instructions on how to make "Cyanide Ice Cream," "Thick White Cum Soup," and "Uranium Bomb," using those actual substances as ingredients. Vibe coding, in case you are unfamiliar, is the new practice where people, some with limited coding experience, rapidly develop software with AI-assisted coding tools without overthinking how efficient the code is as long as it's functional. This is how Blomfield said he made RecipeNinja.AI. [...] The recipe for Cyanide Ice Cream was still live on RecipeNinja.AI at the time of writing, as are recipes for Platypus Milk Cream Soup, Werewolf Cream Glazing, Cholera-Inspired Chocolate Cake, and other nonsense. Other recipes for things people shouldn't eat have been removed.

It also appears that Blomfield has introduced content moderation since users discovered they could generate dangerous or extremely stupid recipes. I wasn't able to generate recipes for asbestos cake, bullet tacos, or glue pizza. I was able to generate a recipe for "very dry tacos," which looks not very good but not dangerous. In a March 20 blog on his personal site, Blomfield explained that he's a startup founder turned investor, and while he has experience with PHP and Ruby on Rails, he has not written a line of code professionally since 2015. "In my day job at Y Combinator, I'm around founders who are building amazing stuff with AI every day and I kept hearing about the advances in tools like Lovable, Cursor and Windsurf," he wrote, referring to AI-assisted coding tools. "I love building stuff and I've always got a list of little apps I want to build if I had more free time."

After playing around with them, he wrote, he decided to build RecipeNinja.AI, which can take a prompt as simple as "Lasagna" and generate an image of the finished dish along with a step-by-step recipe, which can use ElevenLabs's AI-generated voice to narrate the instructions so the user doesn't have to interact with a device with his tomato sauce-covered fingers. "I was pretty astonished that Windsurf managed to integrate both the OpenAI and Elevenlabs APIs without me doing very much at all," Blomfield wrote. "After we had a couple of problems with the open AI Ruby library, it quickly fell back to a raw ruby HTTP client implementation, but I honestly didn't care. As long as it worked, I didn't really mind if it used 20 lines of code or two lines of code." Having some kind of voice-controlled recipe app sounds like a pretty good idea to me, and it's impressive that Blomfield was able to get something up and running so fast given his limited coding experience. But the problem is that he also allowed users to generate their own recipes with seemingly very few guardrails on what kind of recipes are and are not allowed, and that the site kept those results and showed them to other users.
The Internet

NaNoWriMo To Close After 20 Years (theguardian.com) 15

NaNoWriMo, the nonprofit behind the annual novel-writing challenge, is shutting down after 20 years but will keep its websites online temporarily so users can retrieve their content. The Guardian reports: A 27-minute YouTube video posted the same day by the organization's interim executive director Kilby Blades explained that it had to close due to ongoing financial problems, which were compounded by reputational damage. In November 2023, several community members complained to the nonprofit's board, Blades said. They believed that staff had mishandled accusations made in May 2023 that a NaNoWriMo forum moderator was grooming children on a different website. The moderator was eventually removed, though this was for unrelated code of conduct violations and occurred "many weeks" after the initial complaints. In the wake of this, community members came forward with other complaints related to child safety on the NaNoWriMo sites.

The organization was also widely criticized last year over a statement on the use of artificial intelligence in creative writing. After stating that it did not support or explicitly condemn any approach to writing, including the use of AI, it said that the "categorical condemnation of artificial intelligence has classist and ableist undertones." It went on to say that "not all writers have the financial ability to hire humans to help at certain phases of their writing," and that "not all brains have same abilities ... There is a wealth of reasons why individuals can't 'see' the issues in their writing without help."
"We hold no belief that people will stop writing 50,000 words in November," read Monday's email. "Many alternatives to NaNoWriMo popped up this year, and people did find each other. In so many ways, it's easier than it was when NaNoWriMo began in 1999 to find your writing tribe online."
AI

Anthropic Launches an AI Chatbot Plan For Colleges and Universities (techcrunch.com) 9

An anonymous reader quotes a report from TechCrunch: Anthropic announced on Wednesday that it's launching a new Claude for Education tier, an answer to OpenAI's ChatGPT Edu plan. The new tier is aimed at higher education, and gives students, faculty, and other staff access to Anthropic's AI chatbot, Claude, with a few additional capabilities. One piece of Claude for Education is "Learning Mode," a new feature within Claude Projects to help students develop their own critical thinking skills, rather than simply obtain answers to questions. With Learning Mode enabled, Claude will ask questions to test understanding, highlight fundamental principles behind specific problems, and provide potentially useful templates for research papers, outlines, and study guides.

Anthropic says Claude for Education comes with its standard chat interface, as well as "enterprise-grade" security and privacy controls. In a press release shared with TechCrunch ahead of launch, Anthropic said university administrators can use Claude to analyze enrollment trends and automate repetitive email responses to common inquiries. Meanwhile, students can use Claude for Education in their studies, the company suggested, such as working through calculus problems with step-by-step guidance from the AI chatbot. To help universities integrate Claude into their systems, Anthropic says it's partnering with the company Instructure, which offers the popular education software platform Canvas. The AI startup is also teaming up with Internet2, a nonprofit organization that delivers cloud solutions for colleges.

Anthropic says that it has already struck "full campus agreements" with Northeastern University, the London School of Economics and Political Science, and Champlain College to make Claude for Education available to all students. Northeastern is a design partner -- Anthropic says it's working with the institution's students, faculty, and staff to build best practices for AI integration, AI-powered education tools, and frameworks. Anthropic hopes to strike more of these contracts, in part through new student ambassador and AI "builder" programs, to capitalize on the growing number of students using AI in their studies.

Microsoft

Bill Gates Celebrates Microsoft's 50th By Releasing Altair BASIC Source Code (thurrott.com) 97

To mark Microsoft's 50th anniversary, Bill Gates has released the original Altair BASIC source code he co-wrote with Paul Allen, calling it the "coolest code" he's ever written and a symbol of the company's humble beginnings. Thurrott reports: "Before there was Office or Windows 95 or Xbox or AI, there was Altair BASIC," Bill Gates writes on his Gates Notes website. "In 1975, Paul Allen and I created Microsoft because we believed in our vision of a computer on every desk and in every home. Five decades later, Microsoft continues to innovate new ways to make life easier and work more productive. Making it 50 years is a huge accomplishment, and we couldn't have done it without incredible leaders like Steve Ballmer and Satya Nadella, along with the many people who have worked at Microsoft over the years."

Today, Gates says that the 50th anniversary of Microsoft is "bittersweet," and that it feels like yesterday when he and Allen "hunched over the PDP-10 in Harvard's computer lab, writing the code that would become the first product of our new company." That code, he says, remains "the coolest code I've ever written to this day ... I still get a kick out of seeing it, even all these years later."

Crime

Global Scam Industry Evolving at 'Unprecedented Scale' Despite Recent Crackdown (cnn.com) 13

Online scam operations across Southeast Asia are rapidly adapting to recent crackdowns, adopting AI and expanding globally despite the release of 7,000 trafficking victims from compounds along the Myanmar-Thailand border, experts say. These releases represent just a fraction of an estimated 100,000 people trapped in facilities run by criminal syndicates that rake in billions through investment schemes and romance scams targeting victims worldwide, CNN reports.

"Billions of dollars are being invested in these kinds of businesses," said Kannavee Suebsang, a Thai lawmaker leading efforts to free those held in scam centers. "They will not stop." Crime groups are exploiting AI to write scamming scripts and using deepfakes to create personas, while networks have expanded to Africa, South Asia, and the Pacific region, according to the United Nations Office of Drugs and Crime. "This is a situation the region has never faced before," said John Wojcik, a UN organized crime analyst. "The evolving situation is trending towards something far more dangerous than scams alone."
Microsoft

Microsoft Urges Businesses To Abandon Office Perpetual Licenses 95

Microsoft is pushing businesses to shift away from perpetual Office licenses to Microsoft 365 subscriptions, citing collaboration limitations and rising IT costs associated with standalone software. "You may have started noticing limitations," Microsoft says in a post. "Your apps are stuck on your desktop, limiting productivity anytime you're away from your office. You can't easily access your files or collaborate when working remotely."

In its pitch, the Windows-maker says Microsoft 365 includes Office applications as well as security features, AI tools, and cloud storage. The post cites a Microsoft-commissioned Forrester study that claims the subscription model delivers "223% ROI over three years, with a payback period of less than six months" and "over $500,000 in benefits over three years."
AI

AI Masters Minecraft: DeepMind Program Finds Diamonds Without Being Taught 9

An AI system has for the first time figured out how to collect diamonds in the hugely popular video game Minecraft -- a difficult task requiring multiple steps -- without being shown how to play. Its creators say the system, called Dreamer, is a step towards machines that can generalize knowledge learned in one domain to new situations, a major goal of AI. From a report: "Dreamer marks a significant step towards general AI systems," says Danijar Hafner, a computer scientist at Google DeepMind in San Francisco, California. "It allows AI to understand its physical environment and also to self-improve over time, without a human having to tell it exactly what to do." Hafner and his colleagues describe Dreamer in a study in Nature published on 2 April.

In Minecraft, players explore a virtual 3D world containing a variety of terrains, including forests, mountains, deserts and swamps. Players use the world's resources to create objects, such as chests, fences and swords -- and collect items, among the most prized of which are diamonds. Importantly, says Hafner, no two experiences are the same. "Every time you play Minecraft, it's a new, randomly generated world," he says. This makes it useful for challenging an AI system that researchers want to be able to generalize from one situation to the next. "You have to really understand what's in front of you; you can't just memorize a specific strategy," he says.
