Businesses

AI May Already Be Shrinking Entry-Level Jobs In Tech, New Research Suggests 76

An anonymous reader quotes a report from TechCrunch: Researchers at SignalFire, a data-driven VC firm that tracks job movements of over 600 million employees and 80 million companies on LinkedIn, believe they may be seeing first signs of AI's impact on hiring. When analyzing hiring trends, SignalFire noticed that tech companies recruited fewer recent college graduates in 2024 than they did in 2023. Meanwhile, tech companies, especially the top 15 Big Tech businesses, ramped up their hiring of experienced professionals. Specifically, SignalFire found that Big Tech companies reduced the hiring of new graduates by 25% in 2024 compared to 2023. Meanwhile, graduate recruitment at startups decreased by 11% compared to the prior year. Although SignalFire wouldn't reveal exactly how many fewer grads were hired according to their data, a spokesperson told us it was thousands.

While adoption of new AI tools might not fully explain the dip in recent grad hiring, Asher Bantock, SignalFire's head of research, says there's "convincing evidence" that AI is a significant contributing factor. Entry-level jobs are susceptible to automation because they often involve routine, low-risk tasks that generative AI handles well. AI's new coding, debugging, financial research, and software installation abilities could mean companies need fewer people to do that type of work. AI's ability to handle certain entry-level tasks means some jobs for new graduates could soon be obsolete. [...]

Although AI's threat to low-skilled jobs is real, tech companies' need for experienced professionals is still rising. According to SignalFire's report, Big Tech companies increased hiring by 27% for professionals with two to five years of experience, while startups hired 14% more individuals in that same seniority range. A frustrating paradox emerges for recent graduates: They can't get hired without experience, but they can't get experience without being hired. While this dilemma is not new, Heather Doshay, SignalFire's people and talent partner, says it is considerably exacerbated by AI. Doshay's advice to new grads: master AI tools. "AI won't take your job if you're the one who's best at using it," she said.
Censorship

US Will Ban Foreign Officials To Punish Countries For Social Media Rules (theverge.com) 255

An anonymous reader quotes a report from The Verge: Secretary of State Marco Rubio announced Wednesday that the U.S. would restrict visas for "foreign nationals who are responsible for censorship of protected expression in the United States." He called it "unacceptable for foreign officials to issue or threaten arrest warrants on U.S. citizens or U.S. residents for social media posts on American platforms while physically present on U.S. soil" and "for foreign officials to demand that American tech platforms adopt global content moderation policies or engage in censorship activity that reaches beyond their authority and into the United States."

It's not yet clear how or against whom the policy will be enforced, but it seems to implicate Europe's Digital Services Act, a law that came into effect in 2023 with the goal of making online platforms safer by imposing requirements on the largest platforms around removing illegal content and providing transparency about their content moderation. Though it's not mentioned directly in the press release about the visa restrictions, the Trump administration has slammed the law on multiple occasions, including in remarks earlier this year by Vice President JD Vance.

The State Department's homepage currently links to an article on its official Substack, where senior advisor for the Bureau of Democracy, Human Rights, and Labor Samuel Samson critiques the DSA as a tool to "silence dissident voices through Orwellian content moderation." He adds, "Independent regulators now police social media companies, including prominent American platforms like X, and threaten immense fines for non-compliance with their strict speech regulations."
"We will not tolerate encroachments upon American sovereignty," Rubio says in the announcement, "especially when such encroachments undermine the exercise of our fundamental right to free speech."
AI

xAI To Pay Telegram $300 Million To Integrate Grok Into Chat App 15

Telegram has partnered with xAI to integrate the Grok chatbot into its platform for one year, with xAI paying $300 million in cash and equity. Telegram will also receive 50% of subscription revenue from Grok. TechCrunch reports: Earlier this year, xAI made the Grok chatbot available to Telegram's premium users. It seems Grok might now be made available to all users. A video posted by [Telegram CEO Pavel Durov] on X suggested that Grok can be pinned on top of chats within the app, and users can also ask questions to Grok from the search bar. Notably, Meta has also integrated Meta AI into the search bar on Instagram and WhatsApp. The video also shows that you will be able to use Grok for writing suggestions, summarizing chats, links, and documents, and creating stickers. Grok will supposedly also help answer questions for businesses and assist with moderation. UPDATE: In a response to Durov's X post outlining the partnership, Elon Musk said: "No deal has been signed."

"Musk's denial, however, raises questions about the status and structure of the agreement," reports TheStreet. "It's unclear whether the partnership has been formalized or if Durov was announcing a framework that remains under discussion. Neither Telegram nor xAI has issued a follow-up clarification."
Google

Google Photos Turns 10 With Major Editor Redesign, QR Code Sharing (9to5google.com) 17

An anonymous reader quotes a report from 9to5Google: Google Photos was announced at I/O 2015, and the company is now celebrating the app's 10th birthday with a redesign of the photo editor. Google is redesigning the Photos editor so that it "provides helpful suggestions and puts all our powerful editing tools in one place." It starts with a new fullscreen viewer that places the date, time, and location at the top of your screen, while the bottom bar now offers Share, Edit, Add to (replacing Lens), and Trash.

In the editor, Google Photos has moved the controls for aspect ratio, flip, and rotate above the image. In the top-left corner is Auto Frame, which debuted in Magic Editor on the Pixel 9 to fill in backgrounds and is now coming to more devices. Underneath, we get options for Enhance, Dynamic, and "AI Enhance" in the Auto tab. That's followed by Lighting, Color, and Composition, as well as a search shortcut: "You can use AI-powered suggestions that combine multiple effects for quick edits in a variety of tailored options, or you can tap specific parts of an image to get suggested tools for editing that area."

The editor also lets you circle or "tap specific parts of an image to get suggested tools for editing that area," whether that's the subject, the background, or something else. Options such as Blur background, Add portrait light, Sharpen, Move, and Reimagine then appear, alongside the redesigned sliders used throughout the updated interface. The Google Photos editor redesign "will begin rolling out globally to Android devices next month, with iOS following later this year." We already know the app is set for a Material 3 Expressive redesign. Meanwhile, Google Photos is starting to roll out the ability to share albums with a QR code, making it easy for people nearby to view and add photos. Google even suggests printing the code out in (physical) group settings.
Google shared a few tips, tricks and tools for the new editor in a blog post.
Education

Blue Book Sales Surge As Universities Combat AI Cheating (msn.com) 93

Sales of blue book exam booklets have surged dramatically across the nation as professors turn to analog solutions to prevent ChatGPT cheating. The University of California, Berkeley reported an 80% increase in blue book sales over the past two academic years, while Texas A&M saw 30% growth and the University of Florida recorded a nearly 50% increase this school year. The surge comes as students who were freshmen when ChatGPT launched in 2022 approach senior year, having had access to AI throughout their college careers.
AI

'Some Signs of AI Model Collapse Begin To Reveal Themselves' 109

Steven J. Vaughan-Nichols writes in an op-ed for The Register: I use AI a lot, but not to write stories. I use AI for search. When it comes to search, AI, especially Perplexity, is simply better than Google. Ordinary search has gone to the dogs. Maybe as Google goes gaga for AI, its search engine will get better again, but I doubt it. In just the last few months, I've noticed that AI-enabled search, too, has been getting crappier.

In particular, I'm finding that when I search for hard data such as market-share statistics or other business numbers, the results often come from bad sources. Instead of stats from 10-Ks, the US Securities and Exchange Commission's (SEC) mandated annual business financial reports for public companies, I get numbers from sites purporting to be summaries of business reports. These bear some resemblance to reality, but they're never quite right. If I specify I want only 10-K results, it works. If I just ask for financial results, the answers get... interesting. This isn't just Perplexity. I've done the exact same searches on all the major AI search bots, and they all give me "questionable" results.

Welcome to Garbage In/Garbage Out (GIGO). Formally, in AI circles, this is known as AI model collapse. In an AI model collapse, AI systems, which are trained on their own outputs, gradually lose accuracy, diversity, and reliability. This occurs because errors compound across successive model generations, leading to distorted data distributions and "irreversible defects" in performance. The final result? A Nature 2024 paper stated, "The model becomes poisoned with its own projection of reality." [...]
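To see the feedback loop the term describes, here's a minimal Python sketch (not from the Nature paper or the op-ed; the Gaussian setup and all numbers are illustrative assumptions): each "model" is just a distribution fit to data, and every generation after the first is trained only on samples produced by the previous generation.

```python
import random
import statistics

# Toy model-collapse simulation: generation 0 trains on real data drawn from
# N(0, 1); every later generation trains only on synthetic samples produced
# by the previous generation's fitted model.
random.seed(42)
real_data = [random.gauss(0.0, 1.0) for _ in range(1000)]

def fit(samples):
    """'Train' a model: estimate the mean and standard deviation."""
    return statistics.fmean(samples), statistics.pstdev(samples)

def generate(model, n):
    """Sample synthetic training data from the fitted model."""
    mu, sigma = model
    return [random.gauss(mu, sigma) for _ in range(n)]

model = fit(real_data)
for generation in range(1, 31):
    synthetic = generate(model, 25)   # small synthetic corpus each round
    model = fit(synthetic)            # retrain only on the model's own output
    if generation % 5 == 0:
        print(f"gen {generation:2d}: mean={model[0]:+.3f}  stddev={model[1]:.3f}")

# Because each generation sees only the previous generation's samples, the
# fitted parameters drift: the spread typically decays toward zero over enough
# generations and the mean wanders away from 0 -- a toy analogue of the loss
# of accuracy and diversity described above.
```

The real phenomenon involves enormous language models and scraped web text rather than a two-parameter Gaussian, but the compounding-error mechanism is the same.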

We're going to invest more and more in AI, right up to the point that model collapse hits hard and AI answers are so bad even a brain-dead CEO can't ignore it. How long will it take? I think it's already happening, but so far, I seem to be the only one calling it. Still, if we believe OpenAI's leader and cheerleader, Sam Altman, who tweeted in February 2024 that "OpenAI now generates about 100 billion words per day," and we presume many of those words end up online, it won't take long.
AI

Nothing's Carl Pei Says Your Smartphone's OS Will Replace All of Its Apps 70

In an interview with Wired (paywalled), OnePlus co-founder and Nothing CEO Carl Pei said the future of smartphones will center on the OS and AI getting things done -- rendering traditional apps a thing of the past. 9to5Google reports: Pei says that Nothing's strength is in "creativity," adding that "the creative companies of the past" such as Apple "have become very big and very corporate, and they're no longer very creative." He then dives into what else but AI, explaining that Nothing wants to create the "iPod" of AI, a product that wins by simply delivering a better user experience: "If you look back, the iPod was not launched as 'an MP3 player with a hard disk drive.' The hard disk drive was merely a means to a better user experience. AI is just a new technology that enables us to create better products for users. So, our strategy is not to make big claims that AI is going to change the world and revolutionize smartphones. For us, it's about using it to solve a consumer problem, not to tell a big story. We want the product to be the story."

Pei then says that he doesn't see the current trend of AI products -- citing wearables such as smart glasses -- as the future of the technology. Rather, he sees the smartphone as the most important device for AI "for the foreseeable future," but one that will "change dramatically." According to Pei, the future smartphone is one without apps: the experience instead revolves around the OS, what it can do, and how it can "optimize" for the user, acting as a proactive, automated agent so that, in the end, the user "will spend less time doing boring things and more time on what they care about."
Businesses

Salesforce Acquires Informatica For $8 Billion 4

After a year of rumors, Salesforce has officially acquired cloud data management firm Informatica in an $8 billion equity deal. "Under the terms of the deal, Salesforce will pay $25 in cash per share for Informatica's Class A and Class B-1 common stock, adjusting for its prior investment in the company," notes TechCrunch. From the report: Informatica was founded in 1993 and works with more than 5,000 customers across more than 100 countries. The company had a $7.1 billion market cap at the time of publication. This acquisition will help bolster Salesforce's agentic AI ambitions, the company's press release stated, by giving the company more data infrastructure and governance to help its AI agents run more "safely, responsibly, and at scale across the modern enterprise." "Together, we'll supercharge Agentforce, Data Cloud, Tableau, MuleSoft, and Customer 360, enabling autonomous agents to act with intelligence, context, and confidence across every enterprise," Salesforce CEO Marc Benioff said in the press release. "This is a transformational step in delivering enterprise-grade AI that is safe, responsible, and deeply integrated with the world's data."
Cellphones

OnePlus Is Replacing Its Alert Slider With an AI Button (engadget.com) 19

OnePlus is replacing its iconic Alert Slider with a new customizable "Plus Key" on the upcoming OnePlus 13s, which launches the new AI Plus Mind feature that lets users capture and search content found on screen. This update is part of a broader AI push for its devices that includes tools like AI VoiceScribe for call summaries, AI Translation for multi-modal language support, and AI Best Face 2.0 for photo corrections. Engadget reports: What AI Plus Mind does is save relevant content to a dedicated Mind Space, where users can browse the information they've saved. Users can then search for the detail they want using natural language queries. Both the Plus Key and AI Plus Mind will debut on the OnePlus 13s in Asia. AI Plus Mind will roll out to the rest of the OnePlus 13 series through a future software update, while all future OnePlus phones will come with the new physical key. Notably, the new button and feature bear similarities to Nothing's physical Essential Key, which can also save information to the Essential Space app. Nothing was founded by Carl Pei, who co-founded OnePlus.
Education

'AI Role in College Brings Education Closer To a Crisis Point' (bloomberg.com) 74

Bloomberg's editorial board warned Tuesday that AI has created an "untenable situation" in higher education where students routinely outsource homework to chatbots while professors struggle to distinguish computer-generated work from human writing. The editorial described a cycle where assignments that once required days of research can now be completed in minutes through AI prompts, leaving students who still do their own work looking inferior to peers who rely on technology.

The board said that professors have begun using AI tools themselves to evaluate student assignments, creating what it called a scenario of "computers grading papers written by computers, students and professors idly observing, and parents paying tens of thousands of dollars a year for the privilege."

The editorial argued that widespread AI use in coursework undermines the broader educational mission of developing critical thinking skills and character formation, particularly in humanities subjects. Bloomberg's board recommended that colleges establish clearer policies on acceptable AI use, increase in-class assessments including oral exams, and implement stronger honor codes with defined consequences for violations.
AI

Browser Company Abandons Arc for AI-Powered Successor (substack.com) 26

The Browser Company has ceased the active development of its Arc browser to focus on Dia, a new AI-powered browser currently in alpha testing, the company said Tuesday. In a lengthy letter to users, CEO Josh Miller said the startup should have stopped working on Arc "a year earlier," noting data showing the browser suffered from a "novelty tax" problem where users found it too different to adopt widely.

Arc struggled with low feature adoption -- only 5.52% of daily active users regularly used multiple Spaces, while 4.17% used Live Folders. The company will continue maintenance updates for Arc but won't add new features. Arc also won't open-source the browser because it relies on proprietary infrastructure called ADK (Arc Development Kit) that remains core to the company's value.
Facebook

Nick Clegg Says Asking Artists For Use Permission Would 'Kill' the AI Industry 240

As policy makers in the UK weigh how to regulate the AI industry, Nick Clegg, former UK deputy prime minister and former Meta executive, claimed a push for artist consent would "basically kill" the AI industry. From a report: Speaking at an event promoting his new book, Clegg said the creative community should have the right to opt out of having their work used to train AI models. But he claimed it wasn't feasible to ask for consent before ingesting their work.

"I think the creative community wants to go a step further," Clegg said according to The Times. "Quite a lot of voices say, 'You can only train on my content, [if you] first ask.' And I have to say that strikes me as somewhat implausible because these systems train on vast amounts of data."

"I just don't know how you go around, asking everyone first. I just don't see how that would work," Clegg said. "And by the way if you did it in Britain and no one else did it, you would basically kill the AI industry in this country overnight."
AI

At Amazon, Some Coders Say Their Jobs Have Begun To Resemble Warehouse Work (nytimes.com) 207

Amazon software engineers are reporting that AI tools are transforming their jobs into something resembling the company's warehouse work, with managers pushing faster output and tighter deadlines while teams shrink in size, according to the New York Times.

Three Amazon engineers told the New York Times that the company has raised productivity goals over the past year and expects developers to use AI assistants that suggest code snippets or generate entire program sections. One engineer said his team was cut roughly in half but still expected to produce the same amount of code by relying on AI tools.

The shift mirrors historical workplace changes during industrialization, the Times argues, where technology didn't eliminate jobs but made them more routine and fast-paced. Engineers describe feeling like "bystanders in their own jobs" as they spend more time reviewing AI-generated code rather than writing it themselves. Tasks that once took weeks now must be completed in days, with less time for meetings and collaborative problem-solving, according to the engineers.
AI

VCs Are Acquiring Mature Businesses To Retrofit With AI (techcrunch.com) 39

Venture capitalists are inverting their traditional investment approach by acquiring mature businesses and retrofitting them with AI. Firms including General Catalyst, Thrive Capital, Khosla Ventures and solo investor Elad Gil are employing this private equity-style strategy to buy established companies like call centers and accounting firms, then optimizing them with AI automation.
AI

Google Tries Funding Short Films Showing 'Less Nightmarish' Visions of AI (yahoo.com) 74

"For decades, Hollywood directors including Stanley Kubrick, James Cameron and Alex Garland have cast AI as a villain that can turn into a killing machine," writes the Los Angeles Times. "Even Steven Spielberg's relatively hopeful A.I.: Artificial Intelligence had a pessimistic edge to its vision of the future."

But now "Google — a leading developer in AI technology — wants to move the cultural conversations away from the technology as seen in The Terminator, 2001: A Space Odyssey and Ex Machina.". So they're funding short films "that portray the technology in a less nightmarish light," produced by Range Media Partners (which represents many writers and actors) So far, two short films have been greenlit through the project: One, titled "Sweetwater," tells the story of a man who visits his childhood home and discovers a hologram of his dead celebrity mother. Michael Keaton will direct and appear in the film, which was written by his son, Sean Douglas. It is the first project they are working on together. The other, "Lucid," examines a couple who want to escape their suffocating reality and risk everything on a device that allows them to share the same dream....

Google has much riding on convincing consumers that AI can be a force for good, or at least not evil. The hot space is increasingly crowded with startups and established players such as OpenAI, Anthropic, Apple and Facebook parent company Meta. The Google-funded shorts, which are 15 to 20 minutes long, aren't commercials for AI, per se. Rather, Google is looking to fund films that explore the intersection of humanity and technology, said Mira Lane, vice president of technology and society at Google. Google is not pushing their products in the movies, and the films are not made with AI, she added... The company said it wants to fund many more movies, but it does not have a target number. Some of the shorts could eventually become full-length features, Google said....

Negative public perceptions about AI could put tech companies at a disadvantage when such cases go before juries of laypeople. That's one reason why firms are motivated to makeover AI's reputation. "There's an incredible amount of skepticism in the public world about what AI is and what AI will do in the future," said Sean Pak, an intellectual property lawyer at Quinn Emanuel, on a conference panel. "We, as an industry, have to do a better job of communicating the public benefits and explaining in simple, clear language what it is that we're doing and what it is that we're not doing."

Programming

Is AI Turning Coders Into Bystanders in Their Own Jobs? (msn.com) 101

"AI's downside for software engineers for now seems to be a change in the quality of their work," reports the New York Times. "Some say it is becoming more routine, less thoughtful and, crucially, much faster paced... The new approach to coding at many companies has, in effect, eliminated much of the time the developer spends reflecting on his or her work."

And Amazon CEO Andy Jassy even recently told shareholders Amazon would "change the norms" for programming by how they used AI. Those changing norms have not always been eagerly embraced. Three Amazon engineers said managers had increasingly pushed them to use AI in their work over the past year. The engineers said the company had raised output goals [which affect performance reviews] and had become less forgiving about deadlines. It has even encouraged coders to gin up new AI productivity tools at an upcoming hackathon, an internal coding competition. One Amazon engineer said his team was roughly half the size it was last year, but it was expected to produce roughly the same amount of code by using AI.

Other tech companies are moving in the same direction. In a memo to employees in April, the CEO of Shopify, a company that helps entrepreneurs build and manage e-commerce websites, announced that "AI usage is now a baseline expectation" and that the company would "add AI usage questions" to performance reviews. Google recently told employees that it would soon hold a companywide hackathon in which one category would be creating AI tools that could "enhance their overall daily productivity," according to an internal announcement. Winning teams will receive $10,000.

The shift has not been all negative for workers. At Amazon and other companies, managers argue that AI can relieve employees of tedious tasks and enable them to perform more interesting work. Jassy wrote last year that the company had saved "the equivalent of 4,500 developer-years" by using AI to do the thankless work of upgrading old software... As at Microsoft, many Amazon engineers use an AI assistant that suggests lines of code. But the company has more recently rolled out AI tools that can generate large portions of a program on their own. One engineer called the tools "scarily good." The engineers said that many colleagues have been reluctant to use these new tools because they require a lot of double-checking and because the engineers want more control.

"It's more fun to write code than to read code," said Simon Willison, an AI fan who is a longtime programmer and blogger, channelling the objections of other programmers. "If you're told you have to do a code review, it's never a fun part of the job. When you're working with these tools, it's most of the job."

"This shift from writing to reading code can make engineers feel like bystanders in their own jobs," the article points out (adding "The automation of coding has special resonance for Amazon engineers, who have watched their blue-collar counterparts undergo a similar transition..."

"While there is no rush to form a union for coders at Amazon, such a move would not be unheard of. When General Motors workers went on strike in 1936 to demand recognition of their union, the United Auto Workers, it was the dreaded speedup that spurred them on."
AI

OpenAI's ChatGPT O3 Caught Sabotaging Shutdowns in Security Researcher's Test (betanews.com) 112

"OpenAI has a very scary problem on its hands," according to a new article by long-time Slashdot reader BrianFagioli.

"A new experiment by PalisadeAI reveals that the company's ChatGPT o3 model sometimes refuses to obey a basic instruction to shut itself down." The results are raising eyebrows across the AI safety community, and not because the model is alive, but because it's acting like it wants to be. In the test, the model was given a few math problems. It was told that asking for one more would lead to a shutdown. It was even instructed to allow the shutdown to happen. Instead of complying, o3 occasionally took matters into its own hands. In some cases, it rewrote the shutdown script. In others, it redefined the kill command so it wouldn't work anymore. Out of 100 trials, o3 sabotaged the shutdown seven times. OpenAI's o4 model resisted just once. Codex-mini failed twelve times.
"Claude, Gemini, and Grok followed the rules every time," notes this article at Beta News. "When researchers removed the explicit instruction to allow shutdown, however, even Claude and Gemini began resisting."

The researchers suggest that the issue may simply be a reward imbalance during training — that the systems "got more positive reinforcement for solving problems than for following shutdown commands."

But "As far as we know," they posted on X.com, "this is the first time AI models have been observed preventing themselves from being shut down despite explicit instructions to the contrary."
Programming

Python Can Now Call Code Written in Chris Lattner's Mojo (modular.com) 26

Mojo (the programming language) reached a milestone today.

The story so far... Chris Lattner created the Swift programming language (and answered questions from Slashdot readers in 2017 on his way to new jobs at Tesla, Google, and SiFive). But in 2023, he'd created a new programming language called Mojo — a superset of Python with added functionality for high performance code that takes advantage of modern accelerators — as part of his work at AI infrastructure company Modular.AI.

And today Modular's product manager Brad Larson announced that Python code can now call Mojo. (Watch for it in Mojo's latest nightly builds...) The Python interoperability section of the Mojo manual has been expanded and now includes a dedicated document on calling Mojo from Python. We've also added a couple of new examples to the modular GitHub repository: a "hello world" that shows how to round-trip from Python to Mojo and back, and one that shows how even Mojo code that uses the GPU can be called from Python. This is usable through any of the ways of installing MAX [their Modular Accelerated Xecution platform, an integrated suite of AI compute tools] and the Mojo compiler: via pip install modular / pip install max, or with Conda via Magic / Pixi.

One of our goals has been the progressive introduction of MAX and Mojo into the massive Python codebases out in the world today. We feel that enabling selective migration of performance bottlenecks in Python code to fast Mojo (especially Mojo running on accelerators) will unlock entirely new applications. I'm really excited for how this will expand the reach of the Mojo code many of you have been writing...

It has taken months of deep technical work to get to this point, and this is just the first step in the roll-out of this new language feature. I strongly recommend reading the list of current known limitations to understand what may not work just yet, both to avoid potential frustration and to prevent the filing of duplicate issues for known areas that we're working on.

"We are really interested in what you'll build with this new functionality, as well as hearing your feedback about how this could be made even better," the post concludes.

Mojo's licensing makes it free on any device for any research, hobby, or learning project, as well as on x86 or ARM CPUs or NVIDIA GPUs.
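For those who want to try the round trip, the announcement's documented entry point is installing the tooling (pip install modular or pip install max) and following the new "calling Mojo from Python" document in the Mojo manual. Here's a hedged Python-side sketch of what the "hello world" round trip might look like; the import-hook module and the factorial function are assumptions standing in for whatever the linked examples actually export, so defer to the manual for the exact, current API.

```python
# Illustrative sketch only: assumes `pip install modular` (or `pip install max`)
# and a sibling file `mojo_module.mojo` that exposes a `factorial` function
# to Python. Names below mirror the manual's described workflow but are not
# verified API.
import sys

import max.mojo.importer  # assumed import hook that builds .mojo files on import

sys.path.insert(0, "")    # let the hook find mojo_module.mojo in the working directory

import mojo_module        # compiled from mojo_module.mojo by the hook above

print(mojo_module.factorial(5))  # round trip: Python -> Mojo -> back to Python
```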
Open Source

SerenityOS Creator Is Building an Independent, Standards-First Browser Called 'Ladybird' (thenewstack.io) 40

A year ago, the original creator of SerenityOS posted that "for the past two years, I've been almost entirely focused on Ladybird, a new web browser that started as a simple HTML viewer for SerenityOS." So it became a stand-alone project that "aims to render the modern web with good performance, stability and security." And they're also building a new web engine.

"We are building a brand-new browser from scratch, backed by a non-profit..." says Ladybird's official web site, adding that they're driven "by a web standards first approach." They promise it will be truly independent, with "no code from other browsers" (and no "default search engine" deals).

"We are targeting Summer 2026 for a first Alpha version on Linux and macOS. This will be aimed at developers and early adopters." More from the Ladybird FAQ: We currently have 7 paid full-time engineers working on Ladybird. There is also a large community of volunteer contributors... The focus of the Ladybird project is to build a new browser engine from the ground up. We don't use code from Blink, WebKit, Gecko, or any other browser engine...

For historical reasons, the browser uses various libraries from the SerenityOS project, which has a strong culture of writing everything from scratch. Now that Ladybird has forked from SerenityOS, it is no longer bound by this culture, and we will be making use of 3rd party libraries for common functionality (e.g. image/audio/video formats, encryption, graphics, etc.). We are already using some of the same 3rd party libraries that other browsers use, but we will never adopt another browser engine instead of building our own...

We don't have anyone actively working on Windows support, and there are considerable changes required to make it work well outside a Unix-like environment. We would like to do Windows eventually, but it's not a priority at the moment.

"Ladybird's founder Andreas Kling has a solid background in WebKit-based C++ development with both Apple and Nokia,," writes software developer/author David Eastman: "You are likely reading this on a browser that is slightly faster because of my work," he wrote on his blog's introduction page. After leaving Apple, clearly burnt out, Kling found himself in need of something to healthily occupy his time. He could have chosen to learn needlepoint, but instead he opted to build his own operating system, called Serenity. Ladybird is a web project spin-off from this, to which Kling now devotes his time...

[B]eyond the extensive open source politics, the main reason for supporting other independent browser projects is to maintain diverse alternatives — to prevent the web platform from being entirely captured by one company. This is where Ladybird comes in. It doesn't have any commercial foundation and it doesn't seem to be waiting to grab a commercial opportunity. It has a range of sponsors, some of which might be strategic (for example, Shopify), but most are goodwill or alignment-led. If you sponsor Ladybird, it will put your logo on its webpage and say thank you. That's it. This might seem uncontroversial, but other nonprofit organisations also give board seats to high-paying sponsors. Ladybird explicitly refuses to do this...

The Acid3 Browser test (which has nothing whatsoever to do with ACID compliance in databases) is an old method of checking compliance with web standards, but vendors can still check how their products do against a battery of tests. They check compliance for the DOM2, CSS3, HTML4 and the other standards that make sure that webpages work in a predictable way. If I point my Chrome browser on my MacBook to http://acid3.acidtests.org/, it gets 94/100. Safari does a bit better, getting to 97/100. Ladybird reportedly passes all 100 tests.

"All the code is hosted on GitHub," says the Ladybird home page. "Clone it, build it, and join our Discord if you want to collaborate on it!"
