Open Source

Landmark 2.80 Release of Open Source Blender 3D With Improved UI Now Available (blender.org) 67

"In the 3D content creation space, where a lot of professional 3D software costs anywhere from $2,000 to $8,000 a license, people have always hoped that the free, open source 3D software Blender would someday be up to the job of replacing expensive commercial 3D software packages," writes Slashdot reader dryriver: This never happened, not because Blender lacked good 3D features technically, but because the Blender Foundation simply did not listen to thousands of 3D artists screaming for a "more standard UI design" in Blender. Blender's eccentric GUI with reversed left-click/right-click conventions, keyboard shortcuts that don't match commercial software, and other nastiness just didn't work for a lot of people.

After years of screaming, Blender finally got a much better and more familiar UI design in release 2.80, which can be downloaded here. Version 2.80 has many powerful features, but the standout feature is that after nearly 10 years of asking, 3D artists finally get a better, more standard, more sensible User Interface. This effectively means that for the first time, Blender can compete directly with expensive commercial 3D software made by industry leaders like Autodesk, Maxon, NewTek and SideFX.

Why the Blender Foundation took nearly a decade to revise the software's UI is anybody's guess.

Bug

Remember Autorun.inf Malware In Windows? Turns Out KDE Offers Something Similar (zdnet.com) 85

Long-time Slashdot reader Artem S. Tashkinov writes: A security researcher has published proof-of-concept (PoC) code for a vulnerability in the KDE software framework. A fix is not available at the time of writing. The bug was discovered by Dominik "zer0pwn" Penner and impacts the KDE Frameworks package 5.60.0 and below. The KDE Frameworks software library is at the base of the KDE desktop environment v4 and v5 (Plasma), currently included with a large number of Linux distributions.

The vulnerability occurs because of the way the KDesktopFile class (part of KDE Frameworks) handles .desktop and .directory files. It was discovered that malicious .desktop and .directory files could be crafted to run arbitrary code on a user's computer. When a user opens the KDE file viewer to access the directory where these files are stored, the malicious code contained within the .desktop or .directory file executes without further user interaction, such as opening or running the file.

Zero user interaction is required to trigger code execution: all you have to do is browse a directory containing a malicious file using any of KDE's file system browsing applications, such as Dolphin.
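The published PoC reportedly abused KDE's KConfig shell-expansion feature: entries in .desktop and .directory files can carry a marker that tells KConfig to expand embedded shell commands when the entry is read. A minimal illustration of the attack vector (the file name and the harmless payload command are assumptions for demonstration; the `[$e]` expansion marker is the documented KConfig syntax):

```ini
# evil/.directory  (illustrative file name, rendered when the folder is browsed)
[Desktop Entry]
Type=Directory
# The [$e] marker asks KConfig to expand $(...) as a shell command,
# so merely displaying this folder's icon runs the attacker's payload.
Icon[$e]=$(touch /tmp/kde-poc-was-here)
```

Because the file manager evaluates the Icon entry just to draw the directory listing, no click on the malicious file is ever needed.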

When ZDNet contacted KDE for a comment Tuesday, their spokesperson provided this response.

"We would appreciate if people would contact security@kde.org before releasing an exploit into the public, rather than the other way around, so that we can decide on a timeline together."
Red Hat Software

Red Hat Joins the RISC-V Foundation (phoronix.com) 49

Red Hat has joined the RISC-V Foundation to help foster the open-source processor ISA. Phoronix reports: While we're still likely years away from seeing any serious RISC-V powered servers that can deliver meaningful performance, Red Hat has been active in promoting RISC-V as an open-source processor instruction set architecture and one of the most promising libre architectures we have seen over the years. Red Hat developers have already helped with Fedora's RISC-V support, and now the IBM-owned company is showing further commitment by joining the RISC-V Foundation. Red Hat joins the likes of Google, NVIDIA, Qualcomm, SiFive, Western Digital, IBM, and Samsung among the foundation's many members.
Open Source

When Open Source Software Comes With a Few Catches (wired.com) 120

As open source software grows more popular, and important, developers face an existential question: How to make money from something you give away for free? An anonymous reader shares a report: The Open Source Initiative standards body says an open source license must allow users to view the underlying source code, modify it, and share it as they see fit. Independent developers and large companies alike now routinely release software under these licenses. Many coders believe open collaboration results in better software. Some companies open their code for marketing purposes. Open source software now underpins much technology, from smartphone operating systems to government websites.

Companies that release software under open source licenses generate revenue in different ways. Some sell support, including Red Hat, which IBM acquired for $34 billion earlier this month. Others, like cloud automation company HashiCorp, sell proprietary software based on the open source components. But with the rise of cloud computing, developers see their open source code being bundled into services and sold by other companies. Amazon, for example, sells a cloud-hosted service based on the popular open source database Redis, which competes with a similar cloud-hosted service offered by Redis Labs, the sponsor of the open source project. To protect against such scenarios, companies behind popular open source projects are restricting how others can use their software. Redis Labs started the trend last year when it relicensed several add-ons for its core product under terms that essentially prohibit offering those add-ons as part of a commercial cloud computing service.

That way, Amazon and other cloud providers can't use those add-ons in their competing Redis services. Companies that want the functionality provided by those add-ons need to develop those features themselves, or get permission from Redis Labs. [...] Analytics company Confluent and database maker CockroachDB added similar terms to their licenses, preventing cloud computing companies from using some or all of their code to build competing services. Taking a slightly different tack, MongoDB relicensed its flagship database product last year under a new "Server Side Public License" (SSPL) that requires companies that sell the database system as a cloud service also release the source code of any additional software they include.

Open Source

Open Source RISC-V License Helps Alibaba Sidestep US Trade War (tomshardware.com) 221

"RISC-V is open source, so it's much more resistant to government bans," reports Tom's Hardware: Alibaba Group Holding, China's largest e-commerce company, unveiled its first self-designed chip, the Xuantie 910, based on the open source RISC-V instruction set architecture. As reported by Nikkei Asian Review, the chip will target edge computing and autonomous driving, while RISC-V's open source license may help Alibaba sidestep the U.S. trade war altogether.

Alibaba doesn't intend to manufacture the chips itself. Instead, it could outsource production to other Chinese semiconductor companies, such as Semiconductor Manufacturing International Corp. According to Nikkei, the Chinese government has been encouraging wealthy Chinese companies from various industries to enter the semiconductor business in recent years, and its efforts accelerated when the trade war with the U.S. started last year. The government also reportedly forced foreign companies to transfer their technology and IP to Chinese companies if they wanted any chance at the local Chinese market.

"Most Chinese companies are still wary about whether Arm's architecture and Intel's architecture and technical support would remain accessible amid tech tension and further geopolitical uncertainties," Sean Yang, an analyst at research company CINNO in Shanghai, said, according to Nikkei. "It would be very helpful for China to increase long-term semiconductor sufficiency if big companies such as Alibaba jump in to build a chip (design) platform which smaller Chinese developers can just use without worrying about being cut off from supplies."

The article also notes that using RISC-V will give Alibaba "the ability to completely customize and extend the ISA of the processors built on top of it without having to get permission from any company first."
AI

AI is Supercharging the Creation of Maps Around the World (fb.com) 49

For those of us who live in places where driving directions are available at our fingertips, it might be surprising to learn that millions of miles of roads around the world have yet to be mapped. From a blog post: For more than 10 years, volunteers with the OpenStreetMap (OSM) project have worked to address that gap by meticulously adding data on the ground and reviewing public satellite images by hand, annotating features like roads, highways, and bridges. It's a painstaking manual task. But, thanks to AI, there is now an easier way to cover more areas in less time. With assistance from Map With AI (a new service that Facebook AI researchers and engineers created), a team of Facebook mappers has recently cataloged all the missing roads in Thailand and more than 90 percent of missing roads in Indonesia. Map With AI enabled them to map more than 300,000 miles of roads in Thailand in only 18 months, going from a road network that covered 280,000 miles before they began to 600,000 miles after. Doing it the traditional way -- without AI -- would have taken another three to five years, estimates Xiaoming Gao, a Facebook research scientist who helped lead the project.

"We were really excited about this achievement because it has proven Map With AI works at a large scale," Gao says. Starting today, anyone will be able to use the Map With AI service, which includes access to AI-generated road mappings in Afghanistan, Bangladesh, Indonesia, Mexico, Nigeria, Tanzania, and Uganda, with more countries rolling out over time. As part of Map With AI, Facebook is releasing our AI-powered mapping tool, called RapiD, to the OSM community. RapiD is an enhanced version of the popular OSM editing tool iD. RapiD is designed to make adding and editing roads quick and simple for anyone to use; it also includes data integrity checks to ensure that new map edits are consistent and accurate. You can find out more about RapiD at mapwith.ai.

AI

IBM Gives Cancer-Killing Drug AI Project To the Open Source Community 42

IBM has released three artificial intelligence (AI) projects tailored to take on the challenge of curing cancer to the open-source community. ZDNet reports: The first project, dubbed PaccMann -- not to be confused with the popular Pac-Man computer game -- is described as the "Prediction of anticancer compound sensitivity with Multi-modal attention-based neural networks." IBM is working on the PaccMann algorithm to automatically analyze chemical compounds and predict which are the most likely to fight cancer strains, which could potentially streamline this process. The ML algorithm exploits data on gene expression as well as the molecular structures of chemical compounds. IBM says that by identifying potential anti-cancer compounds earlier, this can cut the costs associated with drug development.

The second project is called "Interaction Network infErence from vectoR representATions of words," otherwise known as INtERAcT. This tool is a particularly interesting one given its automatic extraction of data from valuable scientific papers related to our understanding of cancer. INtERAcT aims to make the academic side of research less of a burden by automatically extracting information from these papers. At the moment, the tool is being tested on extracting data related to protein-protein interactions -- an area of study which has been marked as a potential cause of the disruption of biological processes in diseases including cancer.

The third and final project is "pathway-induced multiple kernel learning," or PIMKL. This algorithm utilizes datasets describing what we currently know when it comes to molecular interactions in order to predict the progression of cancer and potential relapses in patients. PIMKL uses what is known as multiple kernel learning to identify molecular pathways crucial for categorizing patients, giving healthcare professionals an opportunity to individualize and tailor treatment plans.
Graphics

'Fortnite' Creator Epic Games Supports Blender Foundation With $1.2 Million (blender.org) 43

Long-time Slashdot reader dnix writes: Apparently having a lot of people playing Fortnite is good for the open source community too. Epic Games' MegaGrants program just awarded the Blender Foundation $1.2 million over the next three years...to further the success of the free and open source 3D creation suite.
It's part of the company's $100 million "MegaGrants" program, according to the announcement. "Open tools, libraries and platforms are critical to the future of the digital content ecosystem," said Tim Sweeney, founder and CEO of Epic Games. "Blender is an enduring resource within the artistic community, and we aim to ensure its advancement to the benefit of all creators."
Bitcoin

Celo Launches Decentralized Open Source Financial Services Prototype (forbes.com) 32

Forbes notes that other financial transaction platforms hope to benefit from Facebook's struggles in launching its Libra cryptocurrency -- including Celo. The key value proposition of the assets running on top of the [Celo] platform is that they are immune to the wide swings in volatility that have plagued leading crypto assets in recent years. Many are designed to mirror the price movements of traditional currency, and most have names that reflect their fiat brethren, such as the Gemini Dollar. This is a critical need for the industry, as no asset will be able to serve as a currency if it does not maintain a consistent price. However, rather than being a centralized issuer that supports the price pegs with fiat held in banks, Celo has built a full-stack platform (meaning it developed the underlying blockchain and the applications that run on top) that can offer an unlimited number of stablecoins, all backed by cryptoassets held in reserve.

Furthermore, Celo is what is known as an algorithmic-based stablecoin provider. This distinction means that rather than being a centralized entity that controls issuances and redemptions, the company employs a smart-contract based stability protocol that automatically expands or contracts the supply of its collateral reserves in a fashion similar to how the Federal Reserve adjusts the U.S. monetary supply... Additionally, a key differentiator for Celo from similar projects is that for the first time its blockchain platform allows users to send/receive money to a person's phone number, IP address, email, as well as other identifiers. This feature will be critical to the long-term success for the network because it eliminates the need for counterparties in a transaction to share their public keys with each other prior to a transaction.
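The "expands or contracts the supply" mechanic Forbes describes can be sketched in a few lines. Everything below (the function name, the adjustment rule, the step size) is an illustrative assumption, not Celo's actual stability protocol:

```python
def adjust_supply(supply: float, market_price: float,
                  peg: float = 1.0, step: float = 0.1) -> float:
    """Toy algorithmic-stablecoin rule: mint when the coin trades above
    its peg (extra supply pushes the price down), burn when it trades
    below (scarcity pushes the price back up)."""
    deviation = (market_price - peg) / peg
    return supply * (1 + step * deviation)

print(adjust_supply(1_000_000, 1.05))  # above peg: supply expands
print(adjust_supply(1_000_000, 0.95))  # below peg: supply contracts
```

A real protocol layers reserves, price oracles, and auctions on top of this feedback loop, but the core idea is the same negative-feedback rule, analogous to a central bank adjusting the monetary supply.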

And now... Celo is open-sourcing its entire codebase and design after two years of development. Additionally, the company is launching the first prototype of its platform, named the Alfajores Testnet, and Celo Wallet, an Android app that will allow users to manage their accounts and send/receive payments on the testnet.

This announcement and product is intended to be just the first of what will be a wide range of financial services applications designed to connect the world.

Celo's investors include LinkedIn founder Reid Hoffman and Twitter/Square CEO Jack Dorsey, the article points out, as well as some of Libra's first members, "including venerated venture capital firm Andreessen Horowitz and crypto-unicorn Coinbase."
Programming

Developer Requests Google Remove Their Logo From Re-Designed Golang Page (github.com) 113

Slashdot reader DevNull127 writes: Another very minor kerfuffle has broken out in the community for the Go programming language. When its official Twitter account asked for feedback on the new look of its web site, one developer suggested that it had been a mistake to add the Google logo to the lower-right of the home page. "A lot of people associate it with a commercial Google product."

Following the suggested procedure, he then created an issue on GitHub. ("Go is perceived by some as a pure Google project without community involvement. Adding a Google logo does not help in this discussion.") The issue received 61 upvotes (and 30 downvotes), eventually receiving a response from Google software engineer Andrew Bonventre, the engineering lead on the Go Team.

"Thanks for the issue. We spent a long time talking about it and are sensitive to this concern. It's equally important to make it clear that Google supports Go, which was missing before (Much like typescriptlang.org). Google pays for and hosts the infrastructure that golang.org runs on and we hope the current very small logo is a decent compromise." He then closed the issue.

The developer who created the issue then responded, "I get that you've discussed this internally. This is a great opportunity to discuss it with the community. I'm thankful to Google for financing the initial and ongoing development of Go, but Google is not the only company investing [in] Go. I would like to move the Google logo into a separate section, together with the major stakeholders of the project."

In a later comment he added "I value Google's participation in Go and I'm not arguing to change that. Having the Google logo in the corner of each golang.org page suggests that this is a pure Google project when it is not..."

For some perspective, another Go developer had also suggested "animate the gopher's eyes on the website."

"Thanks, but we're not going to do this," responded the engineering lead on the Go Team. "We've discussed it before and it would be way too distracting."

Open Source

GitHub Removed Open Source Versions of 'Deepfakes' Porn App DeepNude (vice.com) 178

An anonymous reader quotes a report from Motherboard: GitHub recently removed code from its website that used neural networks to algorithmically strip clothing from images of women. The multiple code repositories were spun off from an app called DeepNude, a highly invasive piece of software that was specifically designed to create realistic nude images of women without their consent. The news shows how after DeepNude's creator pulled the plug on his own invention late last month following a media and public backlash, some platforms are now stopping the spread of similar tools. "We do not proactively monitor user-generated content, but we do actively investigate abuse reports. In this case, we disabled the project because we found it to be in violation of our acceptable use policy," a GitHub spokesperson told Motherboard in a statement. "We do not condone using GitHub for posting sexually obscene content and prohibit such conduct in our Terms of Service and Community Guidelines."

The "Sexually Obscene" section of GitHub's Community Guidelines states: "Don't post content that is pornographic. This does not mean that all nudity, or all code and content related to sexuality, is prohibited. We recognize that sexuality is a part of life and non-pornographic sexual content may be a part of your project, or may be presented for educational or artistic purposes. We do not allow obscene sexual content or content that may involve the exploitation or sexualization of minors."
Debian

After 25 Months, Debian 10 'buster' Released (debian.org) 158

"After 25 months of development the Debian project is proud to present its new stable version 10 (code name 'buster'), which will be supported for the next 5 years thanks to the combined work of the Debian Security team and of the Debian Long Term Support team."

An anonymous reader quotes Debian.org: In this release, GNOME defaults to using the Wayland display server instead of Xorg. Wayland has a simpler and more modern design, which has advantages for security. However, the Xorg display server is still installed by default and the default display manager allows users to choose Xorg as the display server for their next session.

Thanks to the Reproducible Builds project, over 91% of the source packages included in Debian 10 will build bit-for-bit identical binary packages. This is an important verification feature which protects users against malicious attempts to tamper with compilers and build networks. Future Debian releases will include tools and metadata so that end-users can validate the provenance of packages within the archive.

For those in security-sensitive environments, AppArmor, a mandatory access control framework for restricting programs' capabilities, is installed and enabled by default. Furthermore, all methods provided by APT (except cdrom, gpgv, and rsh) can optionally make use of "seccomp-BPF" sandboxing. The https method for APT is included in the apt package and does not need to be installed separately... Secure Boot support is included in this release for amd64, i386 and arm64 architectures and should work out of the box on most Secure Boot-enabled machines.
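The optional seccomp-BPF sandboxing mentioned above is off by default and is switched on through APT's configuration. A minimal sketch (the drop-in file name is an assumption; the option itself is documented in apt.conf(5) for the apt shipped with buster):

```
# /etc/apt/apt.conf.d/40sandbox  (illustrative file name)
# Wrap APT's acquire methods (http, https, ...) in a seccomp-BPF filter
# so a compromised transport can only issue a whitelisted set of syscalls.
APT::Sandbox::Seccomp "true";
```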

The announcement touts Debian's "traditional wide architecture support," arguing that it shows Debian "once again stays true to its goal of being the universal operating system." It ships with several desktop applications and environments, including the following:
  • Cinnamon 3.8
  • GNOME 3.30
  • KDE Plasma 5.14
  • LXDE 0.99.2
  • LXQt 0.14
  • MATE 1.20
  • Xfce 4.12

"If you simply want to try Debian 10 'buster' without installing it, you can use one of the available live images which load and run the complete operating system in a read-only state via your computer's memory... Should you enjoy the operating system you have the option of installing from the live image onto your computer's hard disk."


Programming

'Kerfuffle' Erupts Around Newly-Proposed try() Feature For Go Language (thenewstack.io) 210

Matt Klein, a member of the Go steering committee recently apologized for the angst caused to some people by "the try() kerfuffle... Change is hard, but sometimes it's for the best."

Tech columnist Mike Melanson covers the kerfuffle over the newly-proposed feature, while trying "not to over-dramatize what is happening." There is disagreement and conflicting views, but working through those views is how the open source sausage is made, is it not? Of course, in the Go community, how the core team receives those opposing views may be a point of soreness among some who vehemently opposed the vgo package versioning for Go and felt that, in the end, it was rammed through despite their objections. As one Gopher points out, it is better to debate now than summarily accept and then later deprecate...

As Go makes its way to Go 2.0, with Go 1.14 currently taking center stage for debate, there is, again, as Klein points out, some kerfuffle about a newly proposed feature called try(), which is "designed specifically to eliminate the boilerplate if statements typically associated with error handling in Go." According to the proposal, the "minimal approach addresses most common scenarios while adding very little complexity to the language" and "is easy to explain, straightforward to implement, orthogonal to other language constructs, and fully backward-compatible" as well as extensible for future needs.
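For context, the boilerplate the proposal targets looks like this in today's Go, with the proposed try() form shown in comments (hypothetical syntax sketched from the proposal; try() is not valid Go as of this writing):

```go
package main

import (
	"fmt"
	"strconv"
)

// Today's Go: each fallible call needs its own "if err != nil" block.
func parsePair(a, b string) (int, error) {
	x, err := strconv.Atoi(a)
	if err != nil {
		return 0, err
	}
	y, err := strconv.Atoi(b)
	if err != nil {
		return 0, err
	}
	return x + y, nil
}

// Under the try() proposal, the same function would collapse to:
//
//	func parsePair(a, b string) (int, error) {
//		return try(strconv.Atoi(a)) + try(strconv.Atoi(b)), nil
//	}
//
// where try() unwraps the value on success and makes the enclosing
// function return the error on failure.

func main() {
	sum, err := parsePair("20", "22")
	fmt.Println(sum, err)
}
```

Much of the readability debate below is about whether hiding that early return inside try() helps or hurts.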

Much of the disagreement around try() comes in the form of whether or not the resultant code is more or less readable than current implementations of error handling. Beyond that, however, some say that even if try() were accepted, it has faults that would prevent them from recommending or even allowing its use among their teams. Meanwhile, another point of contention is offered in an open letter to the Go team about try by William Kennedy, who often writes about Go and focuses not on style or function but rather on whether a solution is needed at all. According to Kennedy, "the perceived error handling complaints are perhaps overblown and these changes are not what the majority of Go developers want or need," and try() may be a solution searching for a problem, and even the cause of more problems than it solves. "Since this new mechanic is going to cause severe inconsistencies in code bases, disagreements on teams, and create an impossible task for product owners to enforce consistent guidelines, things need to be slowed down and more data needs to be gathered," Kennedy writes.

He goes on to point out those very sensitivities that may have lingered from previous discussions in the Go community. "This is a serious change and it feels like it's being pushed through without a concerted effort to understand exactly what those 5% of Go developers meant when they said they wanted improved error handling...."

Google

Google's Robots.txt Parser is Now Open Source (googleblog.com) 32

From a blog post: For 25 years, the Robots Exclusion Protocol (REP) was only a de-facto standard. This had frustrating implications sometimes. On one hand, for webmasters, it meant uncertainty in corner cases, like when their text editor included BOM characters in their robots.txt files. On the other hand, for crawler and tool developers, it also brought uncertainty; for example, how should they deal with robots.txt files that are hundreds of megabytes large? Today, we announced that we're spearheading the effort to make the REP an internet standard. While this is an important step, it means extra work for developers who parse robots.txt files.

We're here to help: we open sourced the C++ library that our production systems use for parsing and matching rules in robots.txt files. This library has been around for 20 years and it contains pieces of code that were written in the 90's. Since then, the library evolved; we learned a lot about how webmasters write robots.txt files and corner cases that we had to cover for, and added what we learned over the years also to the internet draft when it made sense.
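Google's library is C++, but the matching task it performs is easy to illustrate with the simple REP parser in Python's standard library (urllib.robotparser; this is not Google's parser and ignores the corner cases the post mentions, such as BOM handling):

```python
from urllib.robotparser import RobotFileParser

# A tiny robots.txt: everyone is barred from /private/, and the
# hypothetical crawler "FooBot" is barred from the whole site.
robots_txt = """\
User-agent: *
Disallow: /private/

User-agent: FooBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

print(parser.can_fetch("SomeBot", "https://example.com/public/page"))
print(parser.can_fetch("SomeBot", "https://example.com/private/page"))
print(parser.can_fetch("FooBot", "https://example.com/public/page"))
```

A crawler asks the same question for every URL it considers fetching, which is why Google's production parser has to stay fast and handle two decades of malformed files.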

Open Source

Linus Torvalds Sees Lots of Hardware Headaches Ahead (devops.com) 205

Linux founder Linus Torvalds "warns that managing software is about to become a lot more challenging, largely because of two hardware issues that are beyond the control of DevOps teams," reports DevOps.com.

An anonymous reader shares their report about Torvalds remarks at the KubeCon + CloudNative + Open Source Summit China conference: The first, Torvalds said, is the steady stream of patches being generated for new cybersecurity issues related to the speculative execution model that Intel and other processor vendors rely on to accelerate performance... Each of those bugs requires another patch to the Linux kernel that, depending on when they arrive, can require painful updates to the kernel, Torvalds told conference attendees. Short of disabling hyperthreading altogether to eliminate reliance on speculative execution, each patch requires organizations to update both the Linux kernel and the BIOS to ensure security. Turning off hyperthreading eliminates the patch management issue, but also reduces application performance by about 15 percent.

The second major hardware issue looms a little further over the horizon, Torvalds said. Moore's Law has guaranteed a doubling of hardware performance every 18 months for decades. But as processor vendors approach the limits of Moore's Law, many developers will need to reoptimize their code to continue achieving increased performance. In many cases, that requirement will be a shock to development teams that have counted on those performance improvements to make up for inefficient coding practices, he said.

Open Source

Tech Press Rushes To Cover New Linus Torvalds Mailing List Outburst (zdnet.com) 381

"Linux frontman Linus Torvalds thinks he's 'more self-aware' these days and is 'trying to be less forceful' after his brief absence from directing Linux kernel developers because of his abusive language on the Linux kernel mailing list," reports ZDNet.

"But true to his word, he's still not necessarily diplomatic in his communications with maintainers..." Torvalds' post-hiatus outburst was directed at Dave Chinner, an Australian programmer who maintains the Silicon Graphics (SGI)-created XFS file system supported by many Linux distros. "Bullshit, Dave," Torvalds told Chinner on a mailing list. The comment from Chinner that triggered Torvalds' rebuke was that "the page cache is still far, far slower than direct IO" -- a problem Chinner thinks will become more apparent with the arrival of the newish storage interconnect specification known as Peripheral Component Interconnect Express (PCIe) version 4.0. Chinner believes the page cache might be necessary to support disk-based storage, but that it has a performance cost....

"You've made that claim before, and it's been complete bullshit before too, and I've called you out on it then too," wrote Torvalds. "Why do you continue to make this obviously garbage argument?" According to Torvalds, the page cache serves its correct purpose as a cache. "The key word in the 'page cache' name is 'cache'," wrote Torvalds.... "Caches work, Dave. Anybody who thinks caches don't work is incompetent. 99 percent of all filesystem accesses are cached, and they never do any IO at all, and the page cache handles them beautifully," Torvalds wrote.

"When you say the page cache is slower than direct IO, it's because you don't even see or care about the *fast* case. You only get involved once there is actual IO to be done."
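Torvalds' argument is classic cache economics: if the overwhelming majority of accesses are absorbed by memory, the cache's overhead on the rare miss barely matters. A toy read-through cache makes the point (illustrative names only, nothing like the kernel's actual page cache implementation):

```python
class ReadThroughCache:
    """Serve repeated reads from memory; touch the slow backend only on a miss."""

    def __init__(self, backend_read):
        self.backend_read = backend_read  # the "actual IO" path
        self.cache = {}
        self.backend_reads = 0

    def read(self, block):
        if block not in self.cache:       # miss: do real IO, once
            self.backend_reads += 1
            self.cache[block] = self.backend_read(block)
        return self.cache[block]          # hit: no IO at all

store = ReadThroughCache(lambda block: f"data-{block}")
for _ in range(100):
    store.read(0)
print(store.backend_reads)  # 100 reads, a single backend IO
```

Direct IO, by contrast, pays the backend cost on every read, which is why it only wins for access patterns that would never hit the cache anyway -- the case Chinner was describing.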

"The thing is," reports the Register, "crucially, Chinner was talking in the context of specific IO requests that just don't cache well, and noted that these inefficiencies could become more obvious as the deployment of PCIe 4.0-connected non-volatile storage memory spreads."

Here's how Chinner responded to Torvalds on the mailing list. "You've taken one single statement I made from a huge email about complexities in dealing with IO concurrency, the page cache and architectural flaws in the existing code, quoted it out of context, fabricated a completely new context and started ranting about how I know nothing about how caches or the page cache work."

The Register notes their conversation also illustrates a crucial difference from closed-source software development. "[D]ue to the open nature of the Linux kernel, Linus's rows and spats play out in public for everyone to see, and vultures like us to write up about."
Open Source

Does Open Source Have a 'Working For Free' Problem? (tidelift.com) 191

"Let's abandon the notion that open source is exclusively charity," writes Havoc Pennington, a free software engineer (and former Red Hat engineer) who's now a co-founder of Tidelift: Look around. We do have a problem, and it's time we do something about it.... The lack of compensation isn't just bad for individual developers -- it also creates social problems, by amplifying existing privilege.... The narrative around open source is that it's completely OK -- even an expectation -- that we're all doing this for fun and exposure; and that giant companies should get huge publicity credit for throwing peanuts-to-them donations at a small subset of open source projects.

There's nothing wrong with doing stuff for fun and exposure, or making donations, as an option. It becomes a problem when the free work is expected and the donations are seen as enough... What would open source be like if we had a professional class of independent maintainers, constantly improving the code we all rely on?

The essay suggests some things to consider, including asking people to pay for:
  • Support requests
  • Security audits/hardening and extremely good test coverage
  • Supporting old releases
  • License-metadata-annotation practices that are helpful for big companies trying to audit the code they use, but are sort of a pain in the ass that nobody other than those big companies cares about.

"Right now many users expect, and demand, that all of this will be free. As an industry, perhaps we should push back harder on that expectation. It's OK to set some boundaries..."

"Of course this relates to what we do at Tidelift -- the company came out of discussions about this problem, among others... In our day-to-day right now we're specifically striving to give subscribers a way to pay maintainers of their application dependencies for additional value, through the Tidelift Subscription. But we hope to see many more efforts and discussions in this area.... [I]n between a virtual tip jar and $100 million in funding, there's a vast solution space to explore."


Open Source

The Mysterious History of the MIT License (opensource.com) 40

Red Hat technology evangelist Gordon Haff explains why it's hard to say exactly when the MIT license was created. Citing input from both Jim Gettys (author of the original X Window System) and Keith Packard (a senior member on the X Windows team), he writes that "The best single answer is probably 1987. But the complete story is more complicated and even a little mysterious."

An anonymous reader quotes his article at OpenSource.com, which begins with the X Window System at MIT's "Project Athena" (first launched in 1983): X was originally under a proprietary license but, according to Packard, what we would now call an open source license was added to X version 6 in 1985... According to Gettys, "Distributing X under license became enough of a pain that I argued we should just give it away." However, it turned out that just placing it into the public domain wasn't an option. "IBM would not touch public domain code (anything without a specific license). We went to the MIT lawyers to craft text to explicitly make it available for any purpose. I think Jerry Saltzer probably did the text with them. I remember approving of the result," Gettys added.

There's some ambiguity about when exactly the early license language stabilized; as Gettys writes, "we weren't very consistent on wording." However, the license that Packard indicates was added to X Version 6 in 1985 appears to have persisted through X Version 11, Release 5. A later version of the license language seems to have been introduced in X Version 11, Release 6 in 1994... But the story doesn't end there. If you look at the license used for X11 and the approved MIT License at the Open Source Initiative (OSI), they're not the same. Similar in spirit, but significantly different in the words used.

The "modern" MIT License is the same as the license used for the Expat XML parser library beginning in about 1998. The MIT License using this text was part of the first group of licenses approved by the OSI in 1999. What's peculiar is that, although the OSI described it as "The MIT license (sometimes called called [sic] the 'X Consortium license')," it is not in fact the same as the X Consortium License. How and why this shift happened -- and even if it happened by accident -- is unknown. But it's clear that by 1999, the approved version of the MIT License, as documented by the OSI, used language different from the X Consortium License.

He points out that to this day, this is why "some, including the Free Software Foundation," avoid the term "MIT License" altogether -- "given that it can refer to several related, but different, licenses."
Businesses

Amazon Has Gone From Neutral Platform To Cutthroat Competitor, Say Open Source Developers (medium.com) 111

An anonymous reader shares a report: Elastic isn't the only open source cloud tool company currently looking over its shoulder at AWS. In 2018 alone, at least eight firms have made similar "rule changes" designed to ward off what they see as unfair competition from a company intent on cannibalizing their services. In his blog post, Adrian Cockcroft, VP of cloud architecture strategy at Amazon Web Services (AWS), argued that by making part of its product suite proprietary, Elastic was betraying the core principles of the open source community. "Customers must be able to trust that open source projects stay open," Cockcroft wrote. "When important open source projects that AWS and our customers depend on begin restricting access, changing licensing terms, or intermingling open source and proprietary software, we will invest to sustain the open source project and community."

AWS's announcement did not attract the immediate attention of the Democratic presidential candidates or the growing cadre of antitrust activists who have recently set their sights on Amazon. But in the world of open source and free software, where picayune changes in arcane language can spark the internet equivalent of the Hundred Years War, the release of AWS's Open Distro for Elasticsearch launched a heated debate. [...] Sharone Zitzman, a respected commentator on open source software and the head of developer relations at AppsFlyer, an app development company, called Amazon's move a "hostile takeover" of Elastic's business. Steven O'Grady, co-founder of the software industry analyst firm RedMonk, cited it as an example of the "existential threat" that open source companies like Elastic believe a handful of cloud computing giants could pose.
