Open Source

The Ethical Source Movement Launches a New Kind of Open-Source Organization (zdnet.com) 258

ZDNet takes a look at a new nonprofit group called the Organization for Ethical Source (OES): The OES is devoted to the idea that the free software and open-source concept of "Freedom Zero" is outdated. Freedom Zero is "the freedom to run the program as you wish, for any purpose." It's fundamental to how open-source software is made and used... They object to the notion that open-source software can be used for any purpose, including "evil" purposes. The group states:

The world has changed since the Open Source Definition was created — open source has become ubiquitous, and is now being leveraged by bad actors for mass surveillance, racist policing, and other human rights abuses all over the world. The OES believes that the open-source community must evolve to address the magnitude and complexity of today's social, political, and technological challenges...

How does this actually work in a license...?

The Software shall not be used by any person or entity for any systems, activities, or other uses that violate any Human Rights Laws. "Human Rights Laws" means any applicable laws, regulations, or rules (collectively, "Laws") that protect human, civil, labor, privacy, political, environmental, security, economic, due process, or similar rights....

This latest version of the license was developed in collaboration with a pro-bono legal team from Corporate Accountability Lab (CAL). It has been adopted by many open-source projects including the Ruby library VCR; mobile app development tool Gryphon; Javascript mapping library react-leaflet; and WeTransfer's entire open-source portfolio...

The organization adds, though, that the license's most significant impact may be the debate it sparked between ethically minded developers and open-source traditionalists around the primacy of Freedom Zero.

The article includes this quote from someone described as an open-source-savvy lawyer:

"To me, ethical licensing is a case of someone with a very small hammer seeing every problem as a nail, and not even acknowledging that the nail is far too big for the hammer."
Open Source

Why AWS Is Forking Elasticsearch and Kibana (zdnet.com) 47

Steven J. Vaughan-Nichols writes at ZDNet: When Elastic, makers of the open-source search and analytic engine Elasticsearch, went after Amazon Web Services (AWS) by changing its license from the open-source Apache 2.0 license (ALv2) to the non-open-source Server Side Public License, I predicted "we'd soon see AWS-sponsored Elasticsearch and Kibana forks." The next day, AWS tweeted it "will launch new forks of both Elasticsearch and Kibana based on the latest Apache 2.0 licensed codebases." Well, that didn't take long!

In a blog post, AWS explained that since Elastic is no longer making its search and analytic engine Elasticsearch and its companion data visualization dashboard Kibana available as open source, AWS is taking action. "In order to ensure open source versions of both packages remain available and well supported, including in our own offerings, we are announcing today that AWS will step up to create and maintain an ALv2-licensed fork of open-source Elasticsearch and Kibana.... AWS brings years of experience working with these codebases, as well as making upstream code contributions to both Elasticsearch and Apache Lucene, the core search library that Elasticsearch is built on — with more than 230 Lucene contributions in 2020 alone... We're in this for the long haul, and will work in a way that fosters healthy and sustainable open source practices — including implementing shared project governance with a community of contributors..."

Yet another company, cloud-monitoring firm Logz.io, together with some partners, has announced that it will launch a "true" open-source distribution of Elasticsearch and Kibana.

Wikipedia

The English Language Wikipedia Just Had Its Billionth Edit (vice.com) 43

An anonymous reader quotes a report from Motherboard: Just after 1 A.M. on January 12, a prolific Wikipedian edited the entry for the album Death Breathing. The small edit, the addition of a hyperlink, was the billionth edit made to the English-language Wikipedia. "The article on the album Death Breathing was amended by Wikipedian Ser Amantio di Nicolao, one of over 3.9 million edits done by the Wikipedian with the highest edit count other than bots," said a note in Wikimedia-l, a listserv that documents various Wikimedia matters. Wikipedia relies on volunteers who constantly assess, edit, and argue over the specifics of the information in its vast online encyclopedia. Every edit is catalogued, tagged, and assigned a unique URL when it's pushed through. The Death Breathing edit was number one billion. "Pedants may be aware that this is only the thousand million since the move to MediaWiki software and not all of the hundreds of thousands of previous edits have since been reloaded," the notice said. "So if we could work out the true counts since edit one it probably came one, maybe two days earlier."
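The unique URL mentioned above follows MediaWiki's standard permalink scheme, which builds two stable links from a revision ID: one for the page as it looked after the edit, and one for the diff of the edit itself. As a rough illustration (the revision ID below is a made-up placeholder, not the actual billionth edit's ID):

```python
def revision_urls(rev_id: int, wiki: str = "https://en.wikipedia.org") -> dict:
    """Build the permanent URLs MediaWiki assigns to a single edit."""
    return {
        # The page exactly as it looked after this revision
        "permalink": f"{wiki}/w/index.php?oldid={rev_id}",
        # The diff showing what this particular edit changed
        "diff": f"{wiki}/w/index.php?diff=prev&oldid={rev_id}",
    }

urls = revision_urls(123456789)  # placeholder revision ID
print(urls["permalink"])
print(urls["diff"])
```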

"I don't have the exact numbers but there were definitely many edits made that aren't recorded in the current system," Wikipedian "The Cunctator" told Motherboard in an email. "Many of the UseModWiki edits were reintegrated with the history but there is a lacuna that covers my peak of editing in about August 2001 to February 2002 (I was the primary editor of September 11 related content). I don't know if the edit count reflects deleted edits or edits on deleted pages. One point that isn't made enough when discussing Wikipedia is how much of Google's wealth is built on its abuse of Wikipedia copyleft. But Death Breathing got the edit with the thousand million counter."
Security

OpenWRT Forum User Data Stolen In Weekend Data Breach (bleepingcomputer.com) 16

The OpenWRT forum, a large community of enthusiasts of alternative, open-source operating systems for routers, announced a data breach over the weekend. Bleeping Computer reports: The attack occurred on Saturday, around 04:00 (GMT), when an unauthorized third party gained admin access to and copied a list with details about forum users and related statistical information. The intruder used the account of an OpenWRT administrator. Although the account had "a good password," additional security provided by two-factor authentication (2FA) was not active. Email addresses and handles of the forum users have been stolen, the moderators say. They add that they believe the attacker was not able to download the forum database, meaning that passwords should be safe. However, they reset all the passwords on the forum just to be on the safe side and invalidated all the API keys used for project development processes.

Users have to set the new password manually from the login menu by providing their user name and following the "get a new password" instructions. Those logging in using GitHub credentials are advised to reset or refresh them. The OpenWRT forum credentials are separate from the Wiki. Currently, there is no suspicion that the Wiki credentials have been compromised in any way. OpenWRT forum administrators warn that since this breach exposed email addresses, users may become targets of credible phishing attempts.
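The missing safeguard in this breach, 2FA, typically means a time-based one-time password (TOTP, RFC 6238): a short code derived from a shared secret and the current time, so a stolen password alone is not enough. A minimal sketch of the underlying HOTP computation (RFC 4226), for illustration only and not a production implementation:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based variant: the counter is the current 30-second window."""
    return hotp(secret, int(time.time()) // period, digits)

# RFC 4226 test vector: this secret at counter 0 yields "755224"
print(hotp(b"12345678901234567890", 0))
```

Because both sides compute the same code from the shared secret and clock, an attacker with only the password fails the second check.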

Education

The Linux Foundation Now Offers a Suite of Open-Source Management Classes (zdnet.com) 7

The Linux Foundation has new courses to help you manage open-source projects and technical staff within your organization. Steven J. Vaughan-Nichols writes via ZDNet: Previously, if you wanted to know how to run open source well in your company, you had to work with OASIS Open or the TODO Group. Both are non-profit organizations supporting open-source and open-standards best practices. But to work effectively with either group, you already had to know a lot about open source. [...] This 7-module course series is designed to help executives, managers, software developers, and engineers understand the basic concepts for building effective open-source practices. It's also helpful to those in the C-suite who want to set up effective open-source program management, including how to create an Open Source Program Office (OSPO).

The program builds on the accumulated wisdom of many previous training modules on open-source best practices while adding fresh and updated content to explain all of the critical elements of working effectively with open source in enterprises. The courses are designed to be self-paced, and reasonably high-level, but with enough detail to get new open-source practitioners up and running quickly. Guy Martin, OASIS Open's executive director, developed these courses. Martin knows his way around open source. He has a unique blend of over 25 years' experience both as a software engineer and open-source strategist. Martin has helped build open-source programs at Red Hat, Samsung, and Autodesk. He was also instrumental in founding the Academy Software Foundation and the Open Connectivity Foundation, and has contributed to the TODO Group's best practices and learning guides.
The "Open Source Management & Strategy program" costs $499 and is available to begin immediately. A certificate is awarded upon completion.
Open Source

Rediscovering RISC-V: Apple M1 Sparks Renewed Interest in Non-x86 Architecture (zdnet.com) 202

"With the runaway success of the new ARM-based M1 Macs, non-x86 architectures are getting their closeup," explains a new article at ZDNet.

"RISC-V is getting the most attention from system designers looking to horn in on Apple's recipe for high performance. Here's why..." RISC-V is, like x86 and ARM, an instruction set architecture (ISA). Unlike x86 and ARM, it is a free and open standard that anyone can use without getting locked into someone else's processor designs or paying costly license fees...

Reaching the end of Moore's Law, we can't just cram more transistors on a chip. Instead, as Apple's A and M series processors show, adding specialized co-processors — for codecs, encryption, AI — to fast general-purpose RISC CPUs can offer stunning application performance and power efficiency. But a proprietary ISA, like ARM, is expensive. Worse, they typically only allow you to use that ISA's hardware designs, unless, of course, you're one of the large companies — like Apple — that can afford a top-tier license and a design team to exploit it. A canned design means architects can't specify tweaks that cut costs and improve performance. An open and free ISA, like RISC-V, eliminates a lot of this cost, giving small companies the ability to optimize their hardware for their applications. As we move intelligence into ever more cost-sensitive applications, using processors that cost a dollar or less, the need for application and cost-optimized processors is greater than ever...
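To make "instruction set architecture" concrete, here is a toy decoder for one RV32I instruction format (the I-type used by addi), with field positions taken from the published RISC-V specification. It is a sketch for illustration, not a complete decoder:

```python
def decode_itype(insn: int) -> dict:
    """Decode a 32-bit RV32I I-type instruction (e.g. addi rd, rs1, imm)."""
    imm = insn >> 20                       # bits 31:20, 12-bit immediate
    if imm & 0x800:                        # sign-extend the immediate
        imm -= 0x1000
    return {
        "opcode": insn & 0x7F,             # bits 6:0
        "rd":     (insn >> 7) & 0x1F,      # bits 11:7, destination register
        "funct3": (insn >> 12) & 0x07,     # bits 14:12, operation selector
        "rs1":    (insn >> 15) & 0x1F,     # bits 19:15, source register
        "imm":    imm,
    }

# 0x00500093 encodes "addi x1, x0, 5" (load the constant 5 into x1)
fields = decode_itype(0x00500093)
print(fields)  # opcode 0x13 (OP-IMM), rd=1, funct3=0, rs1=0, imm=5
```

Because this encoding is openly specified, anyone can build decoders, cores, or co-processors around it without a license, which is the point the article is making.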

While open operating systems, like Linux, get a lot of attention, ISAs are an even longer-lived foundational technology. The x86 ISA dates back 50 years and today exists as a layer that gets translated to a simpler — and faster — underlying hardware architecture. (I suspect this fact is key to the success of the macOS Rosetta 2 translation from x86 code to Apple's M1 code.)

Of course, an open ISA is only part of the solution. Free standard hardware designs — with tools to design more — and smart compilers to generate optimized code are vital. That larger project is what Berkeley's Adept Lab is working on. As computing continues to permeate civilization, the cost of sub-optimal infrastructure will continue to rise.

Optimizing for efficiency, long-life, and broad application is vital for humanity's progress in a cyber-enabled world.

One RISC-V feature highlighted by the article: 128-bit addressing (in addition to 32- and 64-bit).
Open Source

Wasmer 1.0 Can Run WebAssembly 'Universal Binaries' on Linux, MacOS, Windows, Android, and iOS (infoworld.com) 72

The WebAssembly portable binary format will now have wider support from Wasmer, a server-side runtime that "allows universal binaries compiled from C++, Rust, Go, Python, and other languages to run on different operating systems and in web browsers without modification," reports InfoWorld: Wasmer can run lightweight containers based on WebAssembly on a variety of platforms — Linux, MacOS, Windows, Android, iOS — from the desktop to the cloud to IoT and mobile devices, while also allowing these containers to be embedded in any programming language. The Wasmer runtime also is able to run the Nginx web server and other WebAssembly modules...

Wasmer was introduced in December 2018, with the stated goal of doing for WebAssembly what Node.js did for JavaScript: establishing it server-side. By leveraging Wasmer for containerization, developers can create universal binaries that work anywhere without modification, including on Linux, MacOS, and Windows as well as web browsers. WebAssembly automatically sandboxes applications by default for secure execution, shielding the host environment from malicious code, bugs, and vulnerabilities in the software being run.
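The "universal binary" here is the standard WebAssembly module format: every .wasm file begins with the same fixed magic number and version, regardless of which source language produced it or which OS will run it. A quick sketch of checking that header:

```python
import struct

WASM_MAGIC = b"\x00asm"   # every .wasm module starts with these 4 bytes

def wasm_header(data: bytes) -> int:
    """Return the module's format version if `data` starts a valid wasm binary."""
    if len(data) < 8 or data[:4] != WASM_MAGIC:
        raise ValueError("not a WebAssembly module")
    (version,) = struct.unpack("<I", data[4:8])  # little-endian u32 version
    return version

# Header emitted by current toolchains (format version 1)
print(wasm_header(b"\x00asm\x01\x00\x00\x00"))
```

That fixed, platform-neutral format is what lets one compiled artifact run under Wasmer on Linux, macOS, Windows, Android, or iOS.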

Wasmer 1.0 reached "general availability status" with its release on January 5, and its developers are now claiming "out of this world" runtime and compiler performance.

"We believe that WebAssembly will be a crucial component for the future of software execution and containerization (not only inside the browser but also outside)."
Open Source

Linux Mint 20.1 Long-term Support Release Is Out (ghacks.net) 21

Thelasko quotes gHacks: Linux Mint 20.1 is now available.

The first stable release of Linux Mint in 2021 is available in the three flavors Cinnamon, MATE and Xfce. The new version of the Linux distribution is based on Ubuntu 20.04 LTS and Linux kernel 5.4...

- Linux Mint 20.1 comes with a unified file system that sees certain directories being merged with their counterparts in /usr, e.g. /bin merged with /usr/bin, /lib merged with /usr/lib for compatibility purposes...

- The developers have added an option to turn websites into desktop applications in the new version [using the new Web App manager]... Web apps behave like desktop programs for the most part; they start in their own window and use a custom icon, and you find them in the Alt-Tab interface when you use it. Web apps can be pinned and they are found in the application menu after they have been created.

Government

Open-Source Developer and Manager David Recordon Named White House Director of Technology (zdnet.com) 51

An anonymous reader quotes a report from ZDNet: President-elect Joe Biden's transition team announced that David Recordon, one of the developers of OpenID and OAuth, has been named the White House Director of Technology. Recordon most recently was the VP of infrastructure and security at the non-profit Chan Zuckerberg Initiative Foundation. Before that, Recordon was an engineering director at Facebook. There, he led Facebook's open-source initiatives and projects. Among other programs, this included Phabricator, a suite of code review web apps, which Facebook used for its own development. He also led efforts on Cassandra, the Apache open-source distributed database management system; HipHop, a PHP-to-C++ source code translator; and Apache Thrift, a software framework for scalable cross-language services development. In short, he's both a programmer and manager who knows open source from the inside out.

Recordon learned to program at a public elementary school. According to the Biden-Harris transition team, he's spent his almost two-decade career working at the intersection of technology, security, open-source software, public service, and philanthropy. Looking forward to the challenges Recordon faces in his new position, he wrote on LinkedIn: "The pandemic and ongoing cybersecurity attacks present new challenges for the entire Executive Office of the President, but ones I know that these teams can conquer in a safe and secure manner together."
The report notes that Recordon served as the first Director of White House Information Technology during President Barack Obama's term of office, working on IT modernization and cybersecurity issues. He's also served as the Biden-Harris transition team's deputy CTO.
Open Source

Ask Slashdot: How Long Should a Vendor Support a Distro? 137

Long-term Slashdot reader couchslug believes that "Howls of anguish from betrayed CentOS 8 users highlight the value of its long support cycles..." Earlier this month it was announced that at the end of 2021, the community-supported rebuild of Red Hat Enterprise Linux, CentOS 8, "will no longer be maintained," though CentOS 7 "will stick around in a supported maintenance state until 2024."

This leads Slashdot reader couchslug to an interesting question. "Should competitors like Ubuntu and SUSE offer truly long-term-support versions to seize that (obviously large and thus important to widespread adoption) user base?" As distros become more refined, how important are changes vs. stability for users running tens, thousands and hundreds of thousands of servers, or who just want stability and security over change for its own sake...? Why do you think distro leadership is so eager for short distro life cycles? Boredom, progress or what mix of both?

What sayeth the hive mind and what distros do you use to achieve your goals?

The original submission argues that "Distro-hopping is fun but people with work to do and a fixed task set have different needs." But what do Slashdot's readers think? Leave your own thoughts in the comments.

And how long do you think a vendor should support a distro?
Google

Google Plans to Calculate 'Criticality' Scores for Open Source Projects (thenewstack.io) 40

Programming columnist Mike Melanson writes: As part of its involvement in the recently announced Open Source Security Foundation (OpenSSF), Google has penned a blog post outlining one of the first steps it will take as part of this group, with an attempt at finding critical open source projects.

"Open source software (OSS) has long suffered from a 'tragedy of the commons' problem," they write. "Most organizations, large and small, make use of open source software every day to build modern products, but many OSS projects are struggling for the time, resources and attention they need."

So as a way to address this problem, and help fund those projects that need funding, Google is releasing the Criticality Score project. It gives each project a criticality score (a number between 0 and 1) that is "derived from various project usage metrics" such as "a project's age, number of individual contributors and organizations involved, user involvement (in terms of new issue requests and updates), and a rough estimate of its dependencies using commit mentions." From there, you can also add your own metrics, if you see fit...
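The score's shape can be sketched as a weighted average of log-scaled signals: each signal is capped by a threshold so no single metric dominates, and the result always lands in [0, 1]. The formula below follows the general approach described for the project, but the example signal names, weights, and thresholds are illustrative, not the project's actual defaults:

```python
import math

def criticality_score(signals: dict) -> float:
    """Weighted average of log-scaled signals; result lies in [0, 1].

    signals maps name -> (value, weight, threshold); a signal saturates
    at its threshold, so one huge value cannot dominate the score.
    """
    num = 0.0
    den = 0.0
    for value, weight, threshold in signals.values():
        num += weight * math.log(1 + value) / math.log(1 + max(value, threshold))
        den += weight
    return num / den

# Illustrative signals only; the real project's weights/thresholds differ.
example = {
    "age_months":   (120, 1.0, 120),     # project age
    "contributors": (40,  2.0, 5000),    # individual contributors
    "dependents":   (200, 2.0, 500000),  # rough dependency mentions
}
score = criticality_score(example)
print(round(score, 3))  # always between 0 and 1
```

The log scaling means going from 10 to 100 contributors moves the score about as much as going from 100 to 1000, which fits the goal of ranking projects rather than measuring them on a linear scale.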

Abhishek Arya, one of the project's creators, points out that the project is still in its initial phases and welcoming feedback on "any ideas on metrics we can use." Arya also notes that the project is currently limited to ranking open source projects hosted on GitHub, but "will be expanding to other source control systems in the near future."

"Though we have made some progress on this problem, we have not solved it and are eager for the community's help in refining these metrics to identify critical open source projects," the blog post announcing the project concludes.

Google

Google Says It is Expanding Fuchsia's Open Source Model (googleblog.com) 79

New submitter RealNeoMorpheus shares a Google blogpost about Fuchsia -- a new open source operating system that has been in the works for several years: Fuchsia is a long-term project to create a general-purpose, open source operating system, and today we are expanding Fuchsia's open source model to welcome contributions from the public. Fuchsia is designed to prioritize security, updatability, and performance, and is currently under active development by the Fuchsia team. We have been developing Fuchsia in the open, in our git repository for the last four years. You can browse the repository history at fuchsia.googlesource.com to see how Fuchsia has evolved over time. We are laying this foundation from the kernel up to make it easier to create long-lasting, secure products and experiences.

Starting today, we are expanding Fuchsia's open source model to make it easier for the public to engage with the project. We have created new public mailing lists for project discussions, added a governance model to clarify how strategic decisions are made, and opened up the issue tracker for public contributors to see what's being worked on. As an open source effort, we welcome high-quality, well-tested contributions from all. There is now a process to become a member to submit patches, or a committer with full write access.

In addition, we are also publishing a technical roadmap for Fuchsia to provide better insights for project direction and priorities. Some of the highlights of the roadmap are working on a driver framework for updating the kernel independently of the drivers, improving file systems for performance, and expanding the input pipeline for accessibility.
Open Source

The Few, the Tired, the Open Source Coders (wired.com) 71

Reader shanen shares a report (and offers this commentary): When the open source concept emerged in the '90s, it was conceived as a bold new form of communal labor: digital barn raisings. If you made your code open source, dozens or even hundreds of programmers would chip in to improve it. Many hands would make light work. Everyone would feel ownership. Now, it's true that open source has, overall, been a wild success. Every startup, when creating its own software services or products, relies on open source software from folks like Jacob Thornton: open source web-server code, open source neural-net code. But, with the exception of some big projects -- like Linux -- the labor involved isn't particularly communal. Most are like Bootstrap, where the majority of the work landed on a tiny team of people. Recently, Nadia Eghbal -- the head of writer experience at the email newsletter platform Substack -- published Working in Public, a fascinating book for which she spoke to hundreds of open source coders. She pinpointed the change I'm describing here. No matter how hard the programmers worked, most "still felt underwater in some shape or form," Eghbal told me.

Why didn't the barn-raising model pan out? As Eghbal notes, it's partly that the random folks who pitch in make only very small contributions, like fixing a bug. Making and remaking code requires a lot of high-level synthesis -- which, as it turns out, is hard to break into little pieces. It lives best in the heads of a small number of people. Yet those poor top-level coders still need to respond to the smaller contributions (to say nothing of requests for help or reams of abuse). Their burdens, Eghbal realized, felt like those of YouTubers or Instagram influencers who feel overwhelmed by their ardent fan bases -- but without the huge, ad-based remuneration. Sometimes open source coders simply walk away: Let someone else deal with this crap. Studies suggest that about 9.5 percent of all open source code is abandoned, and a quarter is probably close to being so. This can be dangerous: If code isn't regularly updated, it risks causing havoc if someone later relies on it. Worse, abandoned code can be hijacked for ill use. Two years ago, the pseudonymous coder right9ctrl took over a piece of open source code that was used by bitcoin firms -- and then rewrote it to try to steal cryptocurrency.

SuSE

$6 Billion Linux Deal? SUSE IPO Rumored (zdnet.com) 28

An anonymous reader quotes a report from ZDNet: According to Bloomberg, EQT is planning an IPO for German Linux and enterprise software company SUSE. EQT is a Sweden-based private equity firm with 50 billion euros in raised capital. SUSE is the leading European Union (EU) Linux distributor. Over the years, SUSE has changed owners several times. First, it was acquired by Novell in 2004. Then, Attachmate, with some Microsoft funding, bought Novell and SUSE in 2010. This was followed in 2014 when Micro Focus purchased Attachmate and SUSE was spun off as an independent division. Then, EQT purchased SUSE from Micro Focus for $2.5 billion in March 2019. With an IPO of approximately $6 billion, EQT would do very well for itself in very little time.

Bloomberg states that the IPO talks are in a very preliminary stage. Nothing may yet come of these conversations. As for SUSE, a company representative said, "As a company, we are constantly exploring ways to grow. But as a matter of corporate policy, we do not comment on rumor or speculation in the market."

Chromium

Linux Mint Introduces Its Own Take On the Chromium Web Browser (zdnet.com) 33

Mint's programmers, led by Clement "Clem" Lefebvre, have built their own take on Google's open-source Chromium web browser. ZDNet reports: Some of you may be saying, "Wait, haven't they offered Chromium for years?" Well, yes and no. For years, Mint used Ubuntu's Chromium build. But then Canonical, Ubuntu's parent company, moved from releasing Chromium as an APT-compatible DEB package to a Snap. The Ubuntu Snap software packaging system, along with its rivals Flatpak and AppImage, is a new, container-oriented way of installing Linux applications. Older ways of installing Linux apps, such as the DEB and RPM package management systems for the Debian and Red Hat Linux families, incorporate the source code and hard-coded paths for each program.

While tried and true, these traditional packages are troublesome for developers. They require programmers to hand-craft Linux programs to work with each specific distro and its various releases. They must ensure that each program has access to specific versions of libraries. That's a lot of work and painful programming, a process that earned the name "dependency hell." Snap avoids this problem by incorporating the application and its libraries into a single package. It's then installed and mounted on a SquashFS virtual file system. When you run a Snap, you're running it inside a secured container of its own. For Chromium, in particular, Canonical felt using Snaps was the best way to handle this program. [...]

Lefebvre wrote, "The Chromium browser is now available in the official repositories for both Linux Mint and LMDE. If you've been waiting for this I'd like to thank you for your patience." Part of the reason was, well, Canonical was right. Building Chromium from source code is one really slow process. He explained, "To guarantee reactivity and timely updates we had to automate the process of detecting, packaging and compiling new versions of Chromium. This is an application which can require more than 6 hours per build on a fast computer. We allocated a new build server with high specifications (Ryzen 9 3900, 128GB RAM, NVMe) and reduced the time it took to build Chromium to a little more than an hour." That's a lot of power! Still, for those who love it, up-to-date builds of Chromium are now available for Mint users.

Open Source

Dan Kohn, Executive Director of the Cloud Native Computing Foundation, Has Died (www.lfph.io) 7

Dan Kohn, leader of the Linux Foundation's Public Health (LFPH) initiative and former executive director at the Cloud Native Computing Foundation (CNCF), has died of complications from colon cancer. Linux Foundation Executive Director Jim Zemlin wrote yesterday (via LFPH): Dan played a special role at the Linux Foundation. He helped establish the organization that we are today and oversaw the fastest growing open source community in history, the Cloud Native Computing Foundation. Dan was also a pioneer. In 1994 he conducted the first secure commercial transaction on the internet after building the first web shopping cart.

What you may not know about Dan was his lifelong desire to help others. From serving as a volunteer firefighter in college to stepping aside from his role at the Cloud Native Computing Foundation to incubate and found the Linux Foundation Public Health initiative, which is helping authorities around the world combat COVID-19, Dan could always be counted on in a crisis.

Dan leaves behind his wife Julie and two young boys, Adam and Ellis... We will be creating a scholarship fund for his children and will send out information in the coming days as to how folks can contribute.
LFPH has set up a card for the community to sign to forward to his family when the time is right. You may sign it here.

Alex Williams from The New Stack has also paid tribute to Kohn.
Intel

Intel Begins Their Open-Source Driver Support For Vulkan Ray-Tracing With Xe HPG (phoronix.com) 10

In preparation for next year's Xe HPG graphics cards, Intel's open-source developers have begun publishing their patches enabling their "ANV" Vulkan Linux driver to support Vulkan ray-tracing. Phoronix reports: Jason Ekstrand, the original lead developer on the Intel ANV driver, has today posted the initial ray-tracing code for ANV in order to support VK_KHR_ray_tracing for their forthcoming hardware. Today is the first time Intel has approved of this open-source code being published. The code published today isn't enough for Vulkan ray-tracing on its own, but more is on the way, based against the latest internal Khronos ray-tracing specification. At the moment they are not focusing on the former NVIDIA-specific ray-tracing extension but may handle it in the future if game vendors continue targeting it rather than the forthcoming finalized KHR version.

Among the other big-ticket items still to come in the near term are extending the ANV driver to support compiling and dispatching OpenCL kernels, new SPIR-V capabilities, and generic pointer support. Also needed is the actual support for compiling ray-tracing pipelines, managing acceleration structures, dispatching rays, and the platform support. The actual exposing of the support won't come until after The Khronos Group has firmed up their VK_KHR_ray_tracing extension. Some of this Intel-specific Vulkan ray-tracing code may prove useful to Mesa's Radeon Vulkan "RADV" driver as well. Intel engineers have been testing their latest ray-tracing support with ANV internally on Xe HPG.

Open Source

Wikimedia Is Moving To GitLab (mediawiki.org) 12

The Wikimedia Foundation, the American non-profit organization that owns the internet domain names of many movement projects and hosts sites like Wikipedia, has decided to migrate their code repositories from Gerrit to GitLab. Slashdot reader nfrankel shares the announcement: For the past two years, our developer satisfaction survey has shown that there is some level of dissatisfaction with Gerrit, our code review system. This dissatisfaction is particularly evident for our volunteer communities. The evident dissatisfaction with code review, coupled with an internal review of our CI tooling and practice makes this an opportune moment to revisit our code review choices. While Gerrit's workflow is in many respects best-in-class, its interface suffers from usability deficits, and its workflow differs from mainstream industry practices. This creates barriers to entry for the community and slows onboarding for WMF technical staff. In addition, there are a growing number of individuals and teams (both staff and non-staff) who are opting to forgo the use of Gerrit and instead use a third-party hosted option such as GitHub. Reasons vary for the choice to use third-party hosting but, based on informal communication, there are 3 main groupings: lower friction to create new repositories; easier setup and self-service of Continuous Integration configuration; and more familiarity with pull-request style workflows.

All these explanations point to friction in our existing code-review system slowing development rather than fostering it. The choice to use third-party code-hosting hurts our collaboration (both internal and external), adds to the confusion of onboarding, and makes it more difficult to maintain code standards across repositories. At the same time, there is a requirement that all software which is deployed to Wikimedia production is hosted and deployed from Gerrit. If we fail to address the real usability problems that users have with Gerrit, people will continue to launch and build projects on whatever system it is they prefer -- Wikimedia's GitHub already contains 152 projects; the Research team has 127 projects.

This raises the question: if Gerrit has identifiable problems, why can't we solve those problems in Gerrit? Gerrit is open source (Apache licensed) software; modifications are a simple matter of programming. [...] Upstream has improved the UI in recent releases, and releases have become more frequent; however, upgrade path documentation is often lacking. The migration from Gerrit 2 to Gerrit 3, for example, required several upstream patchsets to avoid the recommended path of several days of downtime. This is the effort required to maintain the status quo. Even small improvements require effort and time as, often, our use-case is very different from the remainder of the Gerrit community.

Open Source

OpenStack Foundation Transforms Into the Open Infrastructure Foundation (zdnet.com) 16

An anonymous reader quotes a report from ZDNet: The writing was on the wall two years ago. The OpenStack Foundation was going to cover more than just the OpenStack Infrastructure-as-a-Service (IaaS) cloud. Today, that metamorphosis is complete. The Foundation now covers a wide variety of open-source cloud and container technologies as the Open Infrastructure Foundation. Why so long? COO Mark Collier said, "They wanted to be sure they did this right." One reason for this was to make sure they could differentiate their group from The Linux Foundation's Cloud Native Computing Foundation (CNCF), which covers much of the same ground.

The Open Infrastructure Foundation executive director Jonathan Bryce said that, "OpenStack is still one of the top three most active open source projects in the world. It's just the landscape of infrastructure and there are many new exciting trends with open becoming more and more ubiquitous." To make use of all these different ways the cloud has evolved requires new software programs and that's where the Open Infrastructure Foundation comes in. The new Foundation's mission is to establish new open-source communities to help bring into production new emerging use cases. This includes AI/ML; CI/CD; container infrastructure; edge computing; 5G; and public, private and hybrid clouds.
