Oh, this is one of my favorite (and sad!) dramas in free software.
Five years later, the main LLVM developer proposed [0] integrating it into GCC.
Unfortunately, that critical message was lost to a mail mishap on Stallman's part; ten years later, he publicly regretted both of his errors (missing the message and not accepting the offer) [1].
The drama was discussed in real time here on HN [2].
[0] https://gcc.gnu.org/legacy-ml/gcc/2005-11/msg00888.html
[1] https://lists.gnu.org/archive/html/emacs-devel/2015-02/msg00...
[2] https://news.ycombinator.com/item?id=9028738
I feel like this is sort of evidence that, even for the most serious of engineers, email lists are not an ideal way to communicate.
It also speaks to an absolute failure of governance. If I missed an important email on a FreeBSD mailing list, you can bet that a dozen other people would see it and either poke me about it or just go ahead and act upon it themselves.
The fact that RMS missed an email and nobody else did anything about it either is a sign of an absolutely dysfunctional relationship between the project and its leadership.
So, having been around a lot of different communication methods, I think email lists aren’t ideal, but for serious projects they’re better than all the alternatives.
Chat has a way of getting completely lost. All your knowledge that goes into chat either goes into somebody’s head or it just disappears into the ether. This includes Slack, Discord, Teams, etc. Useful as a secondary channel but serious projects need something more permanent.
Bug tracking systems just don’t support the kind of conversations you want to have about things. They’re focused on bugs and features. Searchability is excellent, but there are a lot of conversations which just end up not happening at all. Things like questions.
That brings us back to mailing lists. IMO… the way you fix it is by having redundancies on both sides of the list. People sending messages to the mailing list should send followup messages. You should also have multiple people reading the list, so if one person misses a message, maybe another gets it.
Mailing lists are not perfect, just better than the alternatives, for serious projects.
(I also think forums are good.)
This is why the D community has forums. The messages are all archived as static web pages and are a gold mine of information.
https://www.digitalmars.com/d/archives/digitalmars/D/
BTW, like HackerNews, the D forums don't allow emojis, icons, javascript, multiple fonts, and all that nonsense. Just text. What a relief!
^_^ sucks when you actually need to talk about emoji though :/
We discourage posts that aren't relevant in some way to D programming.
One of the reasons I enjoy HackerNews is dang's enlightened and sensible moderation policy.
I think OP meant cases like, "I need to process a string with this emoji in D" etc
¯\_(ツ)_/¯
Piling on about chat. Slack threads are an abomination. They aren’t inline with the main channel so you can’t cut and paste an entire conversation with threads. And does exporting a channel include threads? Who knows because the admin wouldn’t do it for me.
praise https://github.com/rusq/slackdump
it does include threads, and no need for admins
You’ve saved my life!
Threads are amazing as an idea and what you're missing is just an implementation detail in Slack. More platforms should have threads (WhatsApp, etc).
What are the current practical non-self-hosted options for an open source project mailing list? We (PortAudio) are being (gently) pushed off our .edu-maintained mailing list server. Google Groups is the only viable option that I know about, and I know not everyone will be happy about that choice.
Sourcehut, maybe?
https://lists.sr.ht/
https://sourcehut.org/
Discourse (or another forum) would be my pick, and it's what I've successfully moved other projects I'm involved with to.
They can be treated like mailing lists, but are easy to navigate, easy to search and index, and easy to categorize.
Still, we are discussing it almost 30 years after it happened. What alternative messaging system offers such openness and stability? I don't see anything other than publicly archived mailing lists.
I think Mozilla has a Bugzilla instance that's been around almost as long; e.g., this bug is 26 years old:
https://bugzilla.mozilla.org/show_bug.cgi?id=35839#:~:text=C...
Bugzilla is good for some things, but terrible for discussions, questions, offers, advice, etc, etc.
JIRA /s
There is no communication method where this isn't possible. Email can be missed, chat can be missed, phone calls can be missed, even talking to someone in person can be missed. All forms of communication can fail such that the person sending the message thinks it was received when it wasn't. So one would need evidence that email is more likely to fail in this respect, rather than evidence it can happen at all, to show that email is a worse communication method.
You can go yell in the other person's ear.
Sorry... maybe I'm dense. Email has worked for decades. If I don't catch something this relevant in an email forum, why would I automatically, without question, see it and understand its relevance in chat, Slack, etc.?
Serious question, since in my experience even specifically assigning someone a Jira ticket doesn't guarantee they'll actually look at it and act.
It's the worst, except for all the others.
20 years ago someone missed an important email.
Every 20 seconds someone misses an important message in a thread hidden deep in a chat.
I don't understand how we have moved from email and IRC to the various chat apps. The latter seem to actively hide communication, as if by deliberate sabotage.
The fault here was entirely Stallman's own. He has some kind of byzantine but ideologically-pure protocol for reading his emails in batches, which he has to request explicitly from someone or something that retrieves them for him.
You can't infer anything from this episode about the suitability or unsuitability of email for any particular purpose.
> He has some kind of byzantine but ideologically-pure protocol for reading his emails in batches,
This caught my eye as well.
I'm not sure what his objection to accessing email in a normal-ish way might be. Any ideas?
My best guess is that it's something surveillance-related, but really not sure.
I think OP might be confusing Stallman's website protocol with that for email:
> I generally do not connect to web sites from my own machine, aside from a few sites I have some special relationship with. I usually fetch web pages from other sites by sending mail to a program (see https://git.savannah.gnu.org/git/womb/hacks.git) that fetches them, much like wget, and then mails them back to me. Then I look at them using a web browser, unless it is easy to see the text in the HTML page directly.
(he describes his arrangements in detail here: https://www.stallman.org/stallman-computing.html)
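The quoted setup is simple enough to sketch. A minimal illustration of the idea (the real program lives in the womb/hacks.git repo linked above; the function names `fetch_page` and `build_reply` here are made up for the example):

```python
from email.message import EmailMessage
from urllib.request import urlopen

def fetch_page(url: str, timeout: float = 30.0) -> str:
    """Fetch a page much like wget would; this is the only networked step."""
    with urlopen(url, timeout=timeout) as resp:
        charset = resp.headers.get_content_charset() or "utf-8"
        return resp.read().decode(charset, errors="replace")

def build_reply(requester: str, url: str, html: str) -> EmailMessage:
    """Wrap the fetched HTML in a mail addressed back to the requester."""
    msg = EmailMessage()
    msg["To"] = requester
    msg["Subject"] = f"fetched: {url}"
    msg.set_content(html, subtype="html")
    return msg
```

A real deployment would hand the message to smtplib; the requester then reads the page locally, offline, in a browser, which is the point of the arrangement.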
https://lwn.net/Articles/262570/ is what I found.
You could simply fix it by marking unread messages bold. Doesn't sound so byzantine now, does it?
Somebody back in 2015 mentioned a relevant response to Richard Stallman from David Kastrup[1]. It's brilliant.
Did he convince other GCC devs with that post? Mentor younger devs on free software strategy?
What's happened in the ten years since?
C'mon GCC mailing list lurkers-- spill the tea! :)
1: https://lists.gnu.org/archive/html/emacs-devel/2015-02/msg00...
What about peripheral packages for the GCC library? The compiler specifies Objective-C under the GPL for its front-end architecture.
And the result is that most new open source languages (and commercial companies) use LLVM instead of GCC as the backend => way more engineering resources are dedicated to LLVM.
I'm not sure that's the only reason. In recent years a lot of projects have chosen to avoid the (l)GPL and use more permissive licences to try and reach a larger audience that might have been spooked by free software.
This gave LLVM a leg up too.
They can do this because they have a choice. Apple cleaned itself of the GPL once it could, after a long stint where it couldn't. Had GCC been the library backend standard instead of LLVM, the world would have a lot more GPL in its compilers.
And yet, LLVM is thriving, and not desolately crying for proprietary commercial improvements to be fed back by their creators. It's an odd balance, sometimes it works out, this seems to be such a case.
Let's be real here.
A lot of this is driven by FAANG or FAANG wannabes: companies at a scale where they can basically reproduce huge chunks of OSS infrastructure.
They also put out a lot of open source which they don't want to license as GPL, due to a general fear of GPL contamination.
Most of this is driven by huge corporations.
In retrospect, I think this came out for the better in the case of LLVM, and probably for GCC too. After all, both compilers emit ~equally optimized code today.
More languages choose LLVM as their primary backend, such as Rust, Crystal, and Julia.
For what it's worth, the leverage did work, just not forever. It was a play with a limited lifetime. It didn't necessarily need to shake out that way: if GCC had been slightly easier to write for, but not too easy, people would probably have invested more. It took a major investment to create a competing product.
GCC has come a long way in terms of features and complexity since the '90s and '00s, when Stallman made these decisions. Today, building a compiler from scratch would be a huge undertaking, prohibitively expensive for most organizations regardless of licensing.
If the requirement were still just to implement a "simple" C89-compliant compiler, and I were worried about software freedom, the GPL would probably still be a good bet.
I thought GPLv3 adoption by GCC was what really lit the flames on moving to llvm by commercial entities?
You only need to worry about GPLv3 if you are modifying gcc in source and building it and distributing that. Just running gcc does not create a GPLv3 infection. And glibc et al. are library-licensed, so they don't infect what you build either, especially if you are not modifying their source and rebuilding them.
> you only need to worry about GPLv3 if you are modifying gcc in source and building it and distributing that.
That's the context here. If you build a new compiler based on GCC, GPL applies to you. If you build a new compiler based on LLVM it doesn't.
The context here doesn't actually specify whether we are talking about companies using LLVM sources to create proprietary compilers (or compilers integrated with a proprietary IDE), or using LLVM to quickly bootstrap a compiler for a new processor, new language, etc., where they will distribute the compiler's source anyway.
But such a compiler or IDE would not GPLv3-infect its users' target sources and binaries.
The main problem with GPLv3 specifically from the perspective of various commercial vendors is the patent clause.
And what we've seen from e.g. Apple is that "make a private fork and only distribute binaries" is exactly what they wanted the whole time.
Still, some companies try hard to avoid GPLv3; see Apple, which either provides old GPLv2-licensed software or invests in BSD/MIT replacements.
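For concreteness on the "bootstrap a compiler" point above: much of LLVM's appeal is that a frontend only has to emit LLVM IR, even as plain text, and the whole optimizing backend comes for free. A toy sketch (the function name `emit_add_function` is made up for illustration):

```python
def emit_add_function(name: str = "add") -> str:
    """Emit textual LLVM IR for a function returning the sum of two i32s."""
    return "\n".join([
        f"define i32 @{name}(i32 %a, i32 %b) {{",
        "entry:",
        "  %sum = add i32 %a, %b",
        "  ret i32 %sum",
        "}",
    ])

ir = emit_add_function()
print(ir)
```

Piping text like this through `llc` (or `clang -x ir -`) yields native code for any target LLVM supports, which is exactly the leverage a new language gets without ever touching GCC's internals, or its license.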
You might know this history better than me.
Strange philosophy, imo. Feels very much like saying "My version of free is best, and I must force you to implement it yourself".
Stallman's version of free is free to the end user. He cares more about whether the end user will have access to the source code and the means to modify their software to remove any anti-feature, and less about whatever freedoms the developers of said software might want (such as the freedom to close the source and distribute only binaries).
Ultimately, Stallman was against a kind of digital feudalism, where whoever developed software had power over those who didn't.
> free to the end user
To which no one can answer how it creates freedom without the mass adoption needed to actually get the software into end users' hands. The great contradiction in FSF philosophy is creating highly pure software within a monastery of programmer-users while simultaneously insisting on a focus on end-user freedoms, without reconciling programmer incentives to build what those end users need.
I'm responding to this comment as an end user with a free browser and free OS and it works perfectly fine. Billions of users do in fact.
So there doesn't need to be an answer. He can just show it to you.
Judging by "billions of users", it sounds like you mean Android, in which case neither the browser nor the OS are really free in FSF sense.
How much of it is GPL3?
Why would it matter in this context; the GP was asking a theoretical question akin to "how is it physically possible for the sky to be blue?" and I am just pointing at the sky saying "look!"
It is Free Software whether it is BSD or GPL3. By all measures, Free Software as originally envisaged has been a massive success. It's just that the goalposts have expanded over the years.
> It is Free Software whether it is BSD or GPL3.
You clearly did not read the FSF manifestos and don't understand their positions. They will call the BSD license "permissive" and will correct you if you attempt to call BSD "free/libre".
> Why would it matter
The FSF didn't build "open source." They actively work to discredit open source. Let's not give them credit for what they tirelessly denounce.
Linux is open source, but did not adopt the GPL3. Firefox is open source but uses MPL. If the FSF is a leader who is responsible for all of these great projects, why doesn't anyone want to use their license?
> ...will correct you if you attempt to call BSD "free/libre".
Wrong. https://www.gnu.org/philosophy/categories.en.html
> The FSF didn't build "open source." They actively work to discredit open source. Let's not give them credit for what they tirelessly denounce.
Where did I ever use that term in this conversation?
And yet despite your theory it appeared to work quite well in practice.
If reality disproves your theory, it's not reality that's wrong.
Open source works. FSF tactics for producing and promoting free/libre do not. Let's not give the FSF credit for what open source does.
Not that strange, as GCC was an effort toward the goal of developing an ecosystem of Free (as in speech) software. While the FSF sometimes made allowances for supporting non-Free software (whether non-copyleft open source or proprietary), these were always tactics in support of the longer-term strategy, much like you might spend marketing funds on customer acquisition in the service of later recurring revenue.
As RMS indicated, this strategy had already resulted in the development of C++ front ends for the Free software ecosystem, that would otherwise likely not have come about.
At that time, the boom in MIT/BSD-licensed open source software predominantly driving web apps and SaaS in languages like Rust and JavaScript was still far away. GCC therefore had very high leverage if you didn't want to be beholden to the Microsoft ecosystem (it's no accident Apple still ships compat drivers for gcc even today) and still wanted to ship something with high performance, so why give up that leverage towards your strategic goal for no reason?
The Linux developers were more forward-leaning on allowing plugins despite the license risks but even with a great deal of effort they kept running into issues with proprietary software 'abusing' the module APIs and causing them to respond with additional restrictions piled atop that API. So it's not as if it were a completely unreasonable fear on RMS's part.
Nit: non-copyleft open source is still free software (as defined by FSF).
"My version of Free is best" is like the defining feature of GNU/FSF.
(Not knocking them, i think sometimes being obnoxiously stubborn is the only way to change the world)
True. Some of their positions come across as "extreme", and rms' personality can be quite abrasive, especially these days when even much smaller incidents are amplified by social media.
However, I quite value their stand. It's principled and they are, more or less, sincere about it. Many of their concerns about "open source" (as contrasted to free software) being locked up inside proprietary software etc. have come true.
Historical context is not merely important, it is indispensable.
The statement in question was issued during a period in which software vendors routinely demanded several hundred — and in some cases, thousands — of dollars[0] for access to a mere compiler. More often than not, the product thus acquired was of appalling quality — a shambolic assembly marred by defects, instability, and a conspicuous lack of professional rigour.
If one examines the design of GNU autoconf, particularly the myriad of checks it performs beyond those mandated by operating system idiosyncrasies, one observes a telling pattern — it does not merely assess environmental compatibility; it actively contends with compiler-specific bugs. This is not a testament to ingenuity, but rather an indictment of the abysmal standards that once prevailed amongst so-called commercial tool vendors.
In our present epoch, the notion that development tools should be both gratis and open source has become an expectation so deeply ingrained as to pass without remark. The viability and success of any emergent hardware platform now rests heavily — if not entirely — upon the availability of a free and competent development toolchain. In the absence of such, it shall not merely struggle — it shall perish, forgotten before it ever drew breath. Whilst a sparse handful of minor commercial entities yet peddle proprietary development environments, their strategy has adapted — they proffer these tools as components of a broader, ostensibly cohesive suite: an embedded operating system here, a bundled compiler there.
And yet — if you listen carefully — one still hears the unmistakable sounds of malcontent: curses uttered under breath and shouted aloud by those condemned to use these so-called «integrated» toolchains, frustrated by their inability to support contemporary language features, by their paltry libraries, or by some other failure born of commercial indifference.
GNU, by contrast, is not merely a project — it is a declaration of philosophy. One need not accept its ideological underpinnings to acknowledge its practical contributions. It is precisely due to this dichotomy that alternatives such as LLVM have emerged — and thrived.
[0] Throw in another several hundred for a debugger, several hundred more for a profiler, and pray that they are even compatible with each other.
Yeah, people trying to enforce their ideals upon others. What a strange thing indeed.
They don't "enforce" anything on anybody. Participating in the ecosystem was always and still is a free choice.
Freedom through obscurity.
Stallman is such a deep thinker. I think he doesn't get nearly as much credit as he deserves.