The Lisp "Curse" Redemption Arc, or How I Learned To Stop Worrying And Love The CONS
Preface
There are far more ugly programs than ugly languages. Using a 'good' language does not guarantee pretty programs. In fact, since a 'good' language has immense power, it is possible to write uglier programs in a 'good' language, than it is to write ugly programs in a 'bad' language. This, of course, is one of the justifications I've seen for forcing people to write in 'bad' languages!!
– Henry Baker, unaware of what he predicted in a Usenet thread
The following text was taken from a book entitled Software and Anarchy, which contained a smattering of ideas on software and anarchy. I had been considering making a web version of this part of the book for a few months; but yesterday I was idly poking around the web and found a blog post from the Freshcode software consultancy entitled Myths of Lisp: curse that didn't really happen? While such a name sounds like it would lead to an interesting article, the authors avoided doing any serious analysis, instead opting to make up unfounded bullshit about development in Common Lisp, so that they could claim development is somehow better in Clojure. Usually I would just laugh it off, as it is a claim that I have heard several times, but the authors found it necessary to take random quotes from a discussion I regret engaging with, including something I said. While I am not too embarrassed by what I said, I did get second-hand embarrassment from seeing a pretty poor choice of quotes with pretty poor timing.
The general argument presented against the logic of the Lisp "Curse" is that people randomly make poor decisions, so they should not get stuck with bad designs, and they should have a way of investigating better designs. The half-done projects produced by the "lone wolves" of Lisp work as metaphorical compost: they are dead, and they smell funny, but they help produce better projects. One merely has to find the projects that aren't rotting, which is fairly easy when there are communication channels to investigate with. The following text "just" provides evidence for this line of reasoning.
I hate to say it, but I also hope that the act of producing a web version of this text now, after having procrastinated on it for several months, indicates how disappointing the behaviour of the people at Freshcode is. I am not a fan of making up things for marketing purposes, admittedly. (The revisiting reader might remember that this is a theme that appears in A Parastatal Problem and C14-dating schema-based design, too.)
Anyways, on with the show…
– Hayley Patton
The Lisp "Curse" Redemption Arc
When a developer has introspective tools and some time to poke around, they are in a very good position to analyse difficult codebases, even when the codebases are fairly large or are written in a way that is unnatural to the reader. However, the supposed secondary and tertiary effects of providing a programmer with sufficient power over their programming environment may be significant enough to deter cooperation, according to an essay titled The Lisp Curse, which is frequently used as an excuse to avoid confrontation over social issues affecting the Common Lisp community.
The main points made are that technical issues in sufficiently powerful languages and environments become social issues, and that having such power reduces some natural cooperative force between programmers, causing them to part ways easily and thus not achieve anything significant without external discipline. This would spell disaster for our peer production model, were it not that the centralised models put in contrast to a loosely coordinated development model can only do worse, producing stagnation and removing agency from the user, which would greatly slow any experimentation and the progress of the community. The apparent incoherence of peer production should be embraced instead of lamented, as we may stand to learn a lot from incomplete prototypes when trying to produce some sort of grand unified product.
There are two apparent "solutions" that avoid this curse that we will explore. The first solution is to add the extension to the system via the implementation, forcing the community to adopt this extension, removing the agency of the user and setting them up to be screwed if the solution becomes a problem. The second is to ensure that any task is too large to tackle without cooperation, by reducing the power and efficiency of each individual user, and in doing so, eliminating all facilities for the individual creative process.
Neither solution is particularly appealing. If the provided extension has flaws that require it to be replaced, fixing the problem will affect many more clients than if the client had more options. For example, JavaScript used to use a callback system (which is really a form of continuation-passing style) for interfacing with the outside world. Writing in continuation-passing style manually was regarded by some as difficult to read and write, so a promise-based system and some syntactic sugar (async and await) were introduced to make using it look like normal, synchronous code. Fortunately, promises are still compatible with continuation-passing style code, so the change did not require any code to be replaced, but it still cuts a program into blocking and non-blocking parts.
You still can't use them with exception handling or other control flow statements. You still can't call a function that returns a future from synchronous code.
You've still divided your entire world into asynchronous and synchronous halves and all of the misery that entails. So, even if your language features promises or futures, its face looks an awful lot like the one on my strawman.
– Bob Nystrom, What Color is Your Function?
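To make the split concrete, here is a minimal sketch of the continuation-passing shape in Common Lisp; fetch-user is a hypothetical stand-in for any operation that completes asynchronously, not a function from any real library.

    ;; FETCH-USER is a hypothetical asynchronous operation: instead of
    ;; returning a value, it "returns" by calling its continuation.
    (defun fetch-user (id continuation)
      ;; A real implementation would call back later, from an event loop;
      ;; here we call back immediately to keep the sketch self-contained.
      (funcall continuation (list :id id :name "Phoebe")))

    ;; The split propagates: any caller must also accept a continuation.
    (defun fetch-user-name (id continuation)
      (fetch-user id (lambda (user)
                       (funcall continuation (getf user :name)))))

    ;; A synchronous caller has no value to return; it can only pass
    ;; yet another continuation along.
    (fetch-user-name 42 (lambda (name) (format t "Hello, ~a!~%" name)))

Promises and async/await hide this plumbing behind nicer syntax, but fetch-user-name still cannot be called like an ordinary, value-returning function.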
Features such as asynchronous programming are very difficult to handle without getting them right the first time. The way to go forward while it is still being debated how to implement a feature is to provide a construct that subsumes it: by providing access to implicit continuations, as Scheme does; by using another combination of syntax and constructs that provides something like continuations, such as monads and the do-notation present in Haskell and F♯, allowing a programmer to wire the continuations into their asynchronous backend; or by providing many green threads, as in Erlang, and having the backend unblock the green threads when they are ready to continue. The latter two techniques also facilitate composing various implementations, where both implementations can be used in the same project if necessary. Thus, we are in a much safer position with more power to reproduce libraries and special language constructs if they become problematic, and then to unify and compose multiple implementations, allowing participants to work without a consensus. Alan Kay succinctly states this view in The Computer Revolution Hasn't Happened Yet as "the more the language can see its own structures, the more liberated you can be from the tyranny of a single implementation."
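As a small illustration of the monadic technique (a sketch of our own; none of these names come from any library), a computation can be represented as a function that expects a continuation, with bind playing the role that do-notation plays in Haskell:

    ;; A toy continuation monad. A computation is a function that takes
    ;; a continuation; UNIT wraps a plain value, and BIND sequences two
    ;; computations, threading the continuation through both.
    (defun unit (value)
      (lambda (k) (funcall k value)))

    (defun bind (computation f)
      (lambda (k)
        (funcall computation
                 (lambda (value) (funcall (funcall f value) k)))))

    (defun run (computation)
      (funcall computation #'identity))

    ;; Chained steps read linearly, even though everything underneath is
    ;; continuation-passing, so an asynchronous backend could be wired in
    ;; without changing this surface syntax.
    (run (bind (unit 20)
               (lambda (x) (unit (* 2 x)))))  ; => 40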
Disempowering an individual discourages experimentation, whether it be in attempting to implement a new concept, or in modifying existing code to improve it. If the goal is to develop new concepts, then having many partial implementations is better than one complete implementation, as they reflect different views of the concept. An implementation is never really complete, and a concept is never really finished, anyway. One complete implementation is prone to be wrong, and a linear progression of products provides less information to work with than many experiments do. If the goal is to produce stable software, knowledge of such experiments and prior work is very useful. Kay has frequently called various tenets of programming a "pop culture", in which participants have no knowledge of their history. Without such experiments, we have no history to investigate, even in the near future! It would thus be ideal to experiment as much as possible, and to use the lessons learnt to produce a comprehensive theory and then a comprehensive implementation; such software may not require replacement as immediately as if it were developed in a centralised or linear manner.
On the other hand, some projects simply can't be merged in any meaningful way, even if they seem similar. Such projects have different designs, and none are necessarily better than the others; yet the cultists of the Lisp Curse only see the similarities, and argue that redundant work is being done. One example is a comment made on array processing libraries in Common Lisp, which finds a subtlety in how to represent an array, but misses the issue of why the different representations exist. The comment mentions the numcl library, which includes many hand-written array processing functions that immediately produce results. We may compare this library to, say, Petalisp, which instead builds up a lazily-evaluated computation and generates "kernel" code.[1] We are sure the authors of either library are aware of each other's work, but there is not much they can provide to each other, as the implementation techniques are very different. However, similar approaches are taken by the numpy and TensorFlow libraries for Python: numpy immediately calls into hand-coded BLAS routines, and TensorFlow builds up kernel code from descriptions of neural networks. Yet there is no Python curse, and no one is asking the authors of either library to make the stupid decision to somehow combine efforts.
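To caricature the difference (in our own illustration, which is not the API of numcl or Petalisp): an eager library computes a result at once, while a lazy one records a description of the computation for a code generator to consume later.

    ;; Eager style, loosely in the spirit of numcl: compute immediately.
    (defun eager-add (a b)
      (map 'vector #'+ a b))

    ;; Lazy style, loosely in the spirit of Petalisp: build a graph node.
    (defstruct lazy-op function inputs)

    (defun lazy-add (a b)
      (make-lazy-op :function #'+ :inputs (list a b)))

    (defun compute (node)
      ;; A trivial interpreter; a real library would instead fuse the
      ;; graph into specialised "kernel" code before running it.
      (if (lazy-op-p node)
          (apply #'map 'vector (lazy-op-function node)
                 (mapcar #'compute (lazy-op-inputs node)))
          node))

    (eager-add #(1 2 3) #(4 5 6))           ; => #(5 7 9)
    (compute (lazy-add #(1 2 3) #(4 5 6)))  ; => #(5 7 9)

The surface results are the same, but the internals are so different that there is little code the two styles could meaningfully share.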
Disempowering a community has negative effects for creating one "unified" product, too. While making it difficult to go ahead with any decision that is not unanimous does lead to a consensus, it is an entirely arbitrary consensus, which can be as terrible as it can be good. The resulting structure may be good at giving orders and techniques to its participants, but "although everyone marches in step, the orders are usually wrong, especially when events begin to move rapidly and take unexpected turns."[2] Suppose a sufficiently large group of scientists, say, all the physicists in Europe, were told to perform the same experiment with the same hypothesis: the administration in charge would be laughed at, as it would be hugely redundant and inefficient to investigate only one problem and one hypothesis with that many physicists. Yet such a situation is presented as an ideal for software design, when groups pursuing their own hypotheses and theories are considered "uncoordinated", or called "lone wolves". Disempowerment also precludes the group from attempting another strategy without another unanimous decision on which strategy to attempt.
The most viable option is to go forward with multiple experiments, and to provide participants with more power, so that they may late bind and ignore the "social problems" produced by the diverse environment that reasonable control over one's environment gives rise to. Measuring progress by the usage and completion of one implementation of a concept is inherently useless, as it does not consider the progress made in other implementations or designs. Such a measure "subjects people to itself"[3] and inhibits their creative processes. "You're imagining big things and painting for yourself […] a haunted realm to which you are called, an ideal that beckons to you. You have a fixed idea!" Progress should instead be measured by how much of the design space has been (or can easily be) traversed, as opposed to the completion of one product; a poor design choice could leave a final product unfit for its purpose, but a failed prototype is always useful for further designs to avoid. With that metric, a decentralised development model is greatly superior to a centralised one.
What has the community done for me?
It is perfectly fine not to come to a consensus for some reason; it is even a natural effect of developing sufficiently broad concepts, concepts whose theory is frequently changing, and concepts with many possible implementations which are not always better or worse than one another. Examples of this theme are hash tables and pattern matching, for which there are many implementations with varying performance characteristics, and for which research appears to generate a new technique or optimisation every few years.
It has been a little more than ten years since the publication of The Lisp Curse, and the Common Lisp community has not delivered on the author's prediction of many mutually incompatible, incomplete implementations of any new concept. Pattern matching, again, was a concept that the author of the article believed would be implemented many times over as functional programming was pushed further into the mainstream. Functional programming did indeed become fairly mainstream,[4] and since then, one pattern matching library has been widely used. In fact, the number of common pattern matching libraries has been reduced, with Optima being deprecated in favour of the now common Trivia, which uses a newer compilation technique. Furthermore, there has also been only one popular lazy evaluation library (CLAZY), and one static type system implementation (Coalton). So, empirically, there has not been an explosion of poor attempts at any functional programming libraries.
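For reference, Trivia's interface is pleasantly small; a minimal use of its match macro (assuming Trivia has been loaded, e.g. with (ql:quickload "trivia")) looks like:

    ;; Destructure a list with TRIVIA:MATCH; _ ignores an element,
    ;; and Z is bound to the third element on a successful match.
    (trivia:match (list 1 2 3)
      ((list 1 _ z) (format nil "matched, third element is ~a" z))
      (_ "no match"))
    ;; => "matched, third element is 3"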
It is also possible that we simply have not heard of other libraries in these genres prior to writing this book. This may be due to network effects, which filter out many bad libraries when there are many options; with no disposition otherwise, one is most likely to follow the design choices that appear popular. For example, there are many testing frameworks in Common Lisp (and it has become a joke to write another framework), but we can count the frameworks mentioned publicly per week on one hand. This has a sort of smoothing effect, where there is a small number of usually-good choices that are frequently recommended, greatly reducing the perceived entropy of the environment.
A culture that encourages experimentation, but allows the community to settle on usually-good defaults, can remain cooperative and cohesive without risk of stagnation; the many forms of communication which do not require formal arrangements, and the rapid network effects produced by online communication, can support both qualities. The community should still be aware of duplication of code when its prototypes converge, but also conscious of fundamentally differing decisions. A good design should be developed further, and thus someone has to do the dirty maintenance work; such work is hard to "sell" while code quality remains hard to quantify and concisely describe, but it is of course necessary to support further development.
Since execution performance is readily quantified, it is most often measured and optimized – even when increased performance is of marginal value […] Quality and reliability of software is much more difficult to measure and is therefore rarely optimized. It is human nature to "look for one's keys under the street-lamp, because that is where the light is brightest."
It should be a high priority among computer scientists to provide models and methods for quantifying the quality and reliability of software, so that these can be optimized by engineers, as well as performance.
– Henry Baker, Dubious Achievement
(Nowadays, though, there are pretty bad ways of measuring software quality; some people have used automated "code quality" tools to pat themselves on the back for minutiae such as not mixing up tabs and spaces in source code. Such problems are fairly simple to fix, and the fix is often even easily automatable; quite unlike larger problems, which are sources of technical debt and require larger design efforts to fix.)
The chant of the ever circling skeletal family
Beyond our commentary on development models, there are some other issues with the article that should be pointed out. The fatalism of the article is self-fulfilling, which is not helpful for attempting to rid Lisp of its so-called "curse". If one takes the advice and avoids powerful languages and environments, then such environments will remain unpopular, and the essay's espousers can use that unpopularity as evidence for their babbling, with a statement like "Look, no one uses Lisp still! The Lisp Curse was right!" Assuming this development model is a problem, is the aim to actually relieve the community of this issue, or is it to badmouth the community? By the sounds of things, most people invoke the essay solely for the latter purpose.
The essay also supposes that the goals then, such as Lisp machines and operating systems, are the goals now. As we have suggested before, we do want a Lisp operating system, though the special machines are less necessary with today's clever compilation strategies. More importantly, this metric conveniently ignores that the choice of operating system almost collapsed to Windows or some Unix in the nineties, affecting all other vendors equally, regardless of their technology choices; the essay instead opts to implicitly blame Lisp users for the loss of diversity in operating systems. The foreword to The Unix-Haters Handbook also names TOPS-20, the Incompatible Timesharing System and Cedar as victims of this collapse; these were not written in Lisp or a dynamic language, but were still disposed of around the same time.
In this sense, The Lisp "Curse" is only real because we make it real. We set up metrics and constraints that promote conformist and centralist development strategies, then pat ourselves on the back for having made it impossible for anyone to test new ideas on their own. These sorts of metrics and organisational methods "treat […] all twists and turns in the process of theorizing as more or less equivalently suspect, equivalently random flights of fancy,"[5] which have no real purpose. It would be interesting to see if consciously attempting to avoid this centralism would produce better quality software: allowing developers to go off on any interesting tangent, and breaking the illusion that there is one way to achieve the aim. This state should not pose any issues given sufficient communication; even if the group of diverging programmers does not appear very coordinated, or appears uncooperative, it is much more likely to find the right approach and base for a product, as it is simply covering more ground and keeping some record of it.
Of course, there certainly is communication already. There are several forums for discussing Lisp (e.g. /r/lisp, /r/common_lisp), and chatrooms on at least IRC, Matrix, and Discord. We felt quite successful in disseminating information about the one-more-re-nightmare compiler recently, so it appears these channels of communication work well. The only change that might need to be made is to have more confidence while using these channels; as previously stated, it is too easy to get stuck in a vicious cycle, where prior failures to communicate cause one not to bother, and thus cause more failures to communicate later on.
It's about time, then, that we just shut up about this "curse", and get some work done! As Robert Strandh said about what is wrong with Lisp, "no amount of lamenting is going to magically create any libraries"; and lamenting over your imaginary incompetence due to an imaginary curse is hardly any better than Winestock's imaginary situation where we all solely get pissed and whine all day about Lisp machines or something. And, of course, many of us are getting work done, because any claims that we're unproductive are trivial to blow off.
Footnotes:
1. The curious reader might want to compare this approach to the APL\3000 compiler.
2. As per Listen, Marxist!
3. As per The Unique and its Property
4. The reader should note – oh, pardon me, I'm supposed to make a no-true-Scotsman here – Real Hackers know that one does not simply compare functional and object-oriented techniques.
5. As per Science as Radicalism