# The next 700 virtual machines

We describe how to transform an abstract machine for a pure functional programming language into an abstract machine for a pure object-oriented programming language, and the practical ramifications of such a design. In particular, emphasis is moved off the instructions of the abstract machine, and moved onto the types and operations provided to the programmer. This movement makes compilers targeting the abstract machine and implementations of the abstract machine more reusable and composable.

## Designing an abstract machine

### Eagerly evaluated lambda calculus

Let's begin with a machine to evaluate a variant of the lambda calculus. The lambda calculus is a very simple language; it only concerns itself with creating and applying functions of one argument. We may write the language of the lambda calculus, using a pseudo-BNF notation, as:

\begin{align*} \newcommand{\q}[1]{\mathrm{#1}} \q{LC} &::= \q{Symbol} \mid \q{Function} \mid \q{Application} \\ \q{Function} &::= \lambda\ \q{Symbol} \ldotp \q{LC} \\ \q{Application} &::= (\q{LC}\ \q{LC}) \end{align*}

We concern ourselves only with an eager method of evaluating such programs, where the values demanded in evaluating terms in this language are evaluated immediately: the function and argument of a function application are evaluated before that application is performed, and a symbol only stands for a variable, not for itself.1

Evaluation requires another data structure called a closure, which associates some bindings, an argument name, and a sub-program describing the behaviour of the closure. The bindings are associations between argument names and values. Evaluation can be described with a few rules:

• A function evaluates to a closure with the same argument name, sub-program, and the current bindings of evaluation.
• A symbol evaluates to the value of its binding in the current bindings.
• An application of an argument to a closure evaluates to the result of evaluating the sub-program of the function, using the bindings of the closure with the argument name bound to the result of evaluating the argument.

If we have some lambda calculus terms $$X$$ and $$Y$$, we find that $$(((\lambda a\ldotp \lambda b\ldotp a) X) Y)$$ evaluates to the result of evaluating $$X$$. The steps are as follows:

• We first evaluate the outer application, starting with the function, with no bound variables yet. The function is another application $$((\lambda a\ldotp \lambda b\ldotp a) X)$$.
• We may then evaluate the function term, which produces a closure with no bound variables, an argument $$a$$ and a sub-program $$\lambda b\ldotp a$$.
• We may then evaluate the argument $$X$$.
• We can now perform the application. We evaluate the sub-program of the function, which is $$\lambda b\ldotp a$$, with $$a$$ bound to the result of evaluating $$X$$.
• The sub-program is a function term, which evaluates to another closure, with $$a$$ bound to the result of evaluating $$X$$, an argument $$b$$ and a sub-program $$a$$.
• Thus the result of application is a closure, with $$a$$ bound to the result of evaluating $$X$$, an argument $$b$$ and a sub-program $$a$$.
• We may then evaluate the argument $$Y$$.
• We can finally now perform the application. We evaluate the sub-program of the closure, which is $$a$$, with the binding of $$a$$ to the result of evaluating $$X$$ from the closure, and also the new binding of $$b$$ to the result of evaluating $$Y$$.
• The sub-program is a symbol $$a$$, which evaluates to the value of the binding of $$a$$, which is the result of evaluating $$X$$.
• Thus the result of application is the result of evaluating $$X$$.

The example also demonstrates that there is no loss of utility by only defining functions to have one argument, as function terms can be nested and then applied repeatedly, in order to use and pass multiple arguments. With a formal definition of what a closure is, we may formally define how to evaluate these sorts of programs:2

\begin{align*} \newcommand{\c}[2]{\q{#1}\hspace{-0.1em}\left[ #2 \right]} \q{Environment} &= \c{Map}{\q{Symbol}, \q{Value}} \\ \q{Value} &::= \c{closure}{\q{Symbol}, \q{LC}, \q{Environment}} \\ \\ \q{eval} &: (\q{LC}, \q{Environment}) \rightarrow \q{Value} \\ \c{eval}{\lambda s\ldotp b, e} &= \c{closure}{s, b, e} \\ \c{eval}{(f\ a), e} &= \c{eval}{b, \c{bind}{s, \c{eval}{a, e}, e'}} \\ &\mathrm{where}\ \c{closure}{s, b, e'} = \c{eval}{f, e} \\ \c{eval}{s, e} &= \c{lookup}{s, e} \end{align*}

The function $$\c{bind}{s, v, e}$$ creates a new map with all the bindings of $$e$$, and with $$s$$ bound to $$v$$. The function $$\c{lookup}{s, e}$$ looks up the symbol $$s$$ in the map $$e$$. The initial environment is the empty map.
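These rules translate almost line for line into code. Here is a sketch in Python (the authors' implementation is in Standard ML; the term representation below — strings for symbols, tagged tuples for functions and applications — is our own assumption):

```python
# Terms: a symbol is a string; ("lam", s, b) is a function with
# argument s and body b; ("app", f, a) is an application.
# A value is a closure: ("closure", s, b, env).

def bind(s, v, e):
    """Return a new map with all the bindings of e, and s bound to v."""
    e2 = dict(e)
    e2[s] = v
    return e2

def evaluate(term, env):
    if isinstance(term, str):        # a symbol evaluates to its binding
        return env[term]
    tag = term[0]
    if tag == "lam":                 # a function closes over the current env
        _, s, b = term
        return ("closure", s, b, env)
    if tag == "app":                 # eager: evaluate function, then argument
        _, f, a = term
        _, s, b, e2 = evaluate(f, env)
        return evaluate(b, bind(s, evaluate(a, env), e2))
    raise ValueError(f"not a term: {term!r}")
```

With identity functions standing in for $$X$$ and $$Y$$, evaluating $$(((\lambda a\ldotp \lambda b\ldotp a)\ X)\ Y)$$ returns the closure built for $$X$$.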

While this definition of how to evaluate a program may be used for other interesting programs, it may be troublesome to implement a program using this definition. The nested call to $$\q{eval}$$ may overflow a small call stack, and recursing down terms may not be very fast.

### SECD machine

The SECD machine is an abstract machine which can evaluate programs in this language, without using recursion in the implementation language, and rendering sub-programs as sequences of "flatter" instructions. It uses four registers: a Store stack of values to be consumed in later evaluations, an Environment of associations of names to values, a Control list containing further instructions to evaluate in the current function, and a Dump stack of states to return to after evaluating all the instructions of the current function. We may specify these objects as:

\begin{align*} \newcommand{\st}[1]{\left\langle #1 \right\rangle} \q{Machine} &= \st{\c{List}{\q{Value}}, \q{Environment}, \c{List}{\q{Instruction}}, \q{Dump}} \\ \q{Environment} &= \c{Map}{\q{Symbol}, \q{Value}} \\ \q{Dump} &::= \q{Machine} \mid \q{end} \end{align*}

(One mistake one of the authors makes is to think that C stands for "call stack" and D for "data stack"; they do not, don't do that.)

Landin's original presentation of the SECD machine only concerns itself with managing functions and application; in order to have interesting examples, without having to come up with strange terms as stand-ins for further computation, we allow the machine to use symbols as data. Thus the operations our machine can perform are to create functions, apply functions, and create symbols, and it may use symbols and functions as data:

\begin{align*} \q{Instruction} &::= \c{create}{\q{Symbol}, \c{List}{\q{Instruction}}} \mid \q{apply} \mid \c{value}{\q{Symbol}} \mid \c{quote}{\q{Symbol}} \\ \q{Value} &::= \q{Symbol} \mid \c{closure}{\q{Symbol}, \c{List}{\q{Instruction}}, \q{Environment}} \end{align*}

The instructions correspond with operations that $$\q{eval}$$ performs, but their behaviour is worth explaining in detail.

• $$\c{create}{s, b}$$ creates a closure with an argument $$s$$ and code $$b$$, closing over the current environment, as $$\q{eval}$$ does when evaluating a function, and then pushes the closure onto the Store stack.
• $$\q{apply}$$ performs function application, popping the argument off the top of the Store stack, and then popping the function off the top after. The order here matters; we want to evaluate the function first, and then the argument, so the argument will be on the top of the stack just before executing an $$\q{apply}$$ instruction, with the function below. It then saves the state of the machine to return to, when the function application has finished being evaluated.
• $$\c{value}{s}$$ looks for a variable binding in the Environment, and pushes the value onto the Store stack.
• $$\c{quote}{s}$$ pushes $$s$$ onto the Store stack, corresponding to our hand-waving about bogus terms like $$X$$ and $$Y$$.

As well as being able to run this machine, we now need to compile lambda calculus terms to be evaluated by this machine. We arbitrarily choose to treat the application of a symbol named $$\q{quote}$$ to some other symbol as a literal value. The compiler simply recurses on each kind of expression to produce a list of instructions.

\begin{align*} \newcommand{\doubleplus}{\mathrm{+\hspace{-0.3em}+}\,} \q{compile} &: \q{LC} \rightarrow \c{List}{\q{Instruction}} \\ \c{compile}{(\q{quote}\ x)} &= [\c{quote}{x}] \\ \c{compile}{(f\ x)} &= \c{compile}{f} \doubleplus \c{compile}{x} \doubleplus [\q{apply}] \\ \c{compile}{\lambda x\ldotp b} &= [\c{create}{x, \c{compile}{b}}] \\ \c{compile}{x} &= [\c{value}{x}] \end{align*}

(The symbol $$\doubleplus$$ means to append the lists on either side together.)
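The compiler can be sketched in Python as follows (our own rendering, matching the term representation used earlier; instructions are tagged tuples, and $$(\q{quote}\ x)$$ is represented as an application whose function is the symbol quote):

```python
# Terms: strings are symbols, ("lam", s, b) functions,
# ("app", f, a) applications. Instructions are tagged tuples.

def compile_lc(term):
    if isinstance(term, str):
        return [("value", term)]                # variable reference
    tag = term[0]
    if tag == "app" and term[1] == "quote":
        return [("quote", term[2])]             # literal symbol
    if tag == "app":
        _, f, a = term
        return compile_lc(f) + compile_lc(a) + [("apply",)]
    _, s, b = term                              # "lam"
    return [("create", s, compile_lc(b))]
```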

Compiling $$(((\lambda a\ldotp \lambda b\ldotp a)\ (\q{quote}\ X))\ (\q{quote}\ Y))$$ yields the program $$[ \c{create}{a, [\c{create}{b, [\c{value}{a}]}]}, \c{quote}{X}, \q{apply}, \c{quote}{Y}, \q{apply} ]$$. The order of the instructions resembles the order of steps we took to evaluate this term directly. The SECD machine is implemented by dispatching on the first instruction (or lack thereof) in the Control list. The instructions are executed as follows:

\begin{align*} \q{run} &: \q{Machine} \rightarrow \q{Value} \\ \c{run}{s, e, \c{create}{a, b}::c, d} &= \c{run}{\c{closure}{a, b, e}::s, e, c, d} \\ \c{run}{a::\c{closure}{n, c', e'}::s, e, \q{apply}::c, d} &= \c{run}{[], \c{bind}{n, a, e'}, c', \st{s, e, c, d}} \\ \c{run}{s, e, \c{value}{x}::c, d} &= \c{run}{\c{lookup}{x, e}::s, e, c, d} \\ \c{run}{s, e, \c{quote}{x}::c, d} &= \c{run}{x::s, e, c, d} \\ \end{align*}

(The symbol $$::$$ means to prepend the value on the left hand side to the list on the right hand side.)

It remains to define what happens when we run out of instructions in the current function. When there are no more functions to continue executing (i.e. $$d = \q{end}$$), we finish evaluation and return the sole value on the Store stack. When there are remaining functions, we restore the prior state of the machine, with the sole value on the current Store stack prepended to the new Store stack, and continue with execution as follows:

\begin{align*} \c{run}{[x], e, [], \q{end}} &= x \\ \c{run}{[x], e, [], \st{s', e', c', d'}} &= \c{run}{x::s', e', c', d'} \end{align*}
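The transition rules above can be sketched as a loop in Python (our hedged translation, not the authors' Standard ML; state components use Python lists and dicts):

```python
def bind(s, v, e):
    """Return a new map with all the bindings of e, and s bound to v."""
    e2 = dict(e)
    e2[s] = v
    return e2

def run(s, e, c, d):
    # s: Store stack, e: Environment, c: Control list, d: Dump.
    while True:
        if not c:                       # no instructions left in this function
            if d == "end":
                return s[0]             # finished: the sole value on the Store
            x, (s, e, c, d) = s[0], d   # return: restore the dumped state
            s = [x] + s
            continue
        ins, c = c[0], c[1:]
        if ins[0] == "create":          # push a closure over the current env
            s = [("closure", ins[1], ins[2], e)] + s
        elif ins[0] == "apply":         # argument on top, function below
            a, (_, n, c2, e2), s = s[0], s[1], s[2:]
            d = (s, e, c, d)            # save the state to return to
            s, e, c = [], bind(n, a, e2), c2
        elif ins[0] == "value":
            s = [e[ins[1]]] + s
        elif ins[0] == "quote":
            s = [ins[1]] + s
```

Running the compiled example program from the initial state produces $$X$$.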

There are many cases where there are no rules for running the machine. For example, it is undefined what happens if more than one value remains on the Store stack at the end of executing a function; but the compiler cannot generate code which could cause such situations to happen.3

The machine starts in the state $$\st{[], [], c, \q{end}}$$ where $$c$$ is the code to run. Evaluation of the aforementioned program proceeds as follows. We couldn't figure out a nice way to typeset the states of the machine; tables ended up too wide, and we were hoping to get each register to line up nicely on the screen. Oh well, here's a box you can scroll in:

Initial state
Store: $$[]$$
Environment: $$[]$$
Control: $$[\c{create}{a, [\c{create}{b, [\c{value}{a}]}]}, \c{quote}{X}, \q{apply}, \c{quote}{Y}, \q{apply}]$$
Dump: $$\q{end}$$

Create closure
Store: $$[\c{closure}{a, [\c{create}{b, [\c{value}{a}]}], []}]$$
Environment: $$[]$$
Control: $$[\c{quote}{X}, \q{apply}, \c{quote}{Y}, \q{apply}]$$
Dump: $$\q{end}$$

Push literal
Store: $$[X, \c{closure}{a, [\c{create}{b, [\c{value}{a}]}], []}]$$
Environment: $$[]$$
Control: $$[\q{apply}, \c{quote}{Y}, \q{apply}]$$
Dump: $$\q{end}$$

Apply function
Store: $$[]$$
Environment: $$[a = X]$$
Control: $$[\c{create}{b, [\c{value}{a}]}]$$
Dump: $$\st{[], [], [\c{quote}{Y}, \q{apply}], \q{end}}$$

Create closure
Store: $$[\c{closure}{b, [\c{value}{a}], [a = X]}]$$
Environment: $$[a = X]$$
Control: $$[]$$
Dump: $$\st{[], [], [\c{quote}{Y}, \q{apply}], \q{end}}$$

Return from function
Store: $$[\c{closure}{b, [\c{value}{a}], [a = X]}]$$
Environment: $$[]$$
Control: $$[\c{quote}{Y}, \q{apply}]$$
Dump: $$\q{end}$$

Push literal
Store: $$[Y, \c{closure}{b, [\c{value}{a}], [a = X]}]$$
Environment: $$[]$$
Control: $$[\q{apply}]$$
Dump: $$\q{end}$$

Apply function
Store: $$[]$$
Environment: $$[b = Y, a = X]$$
Control: $$[\c{value}{a}]$$
Dump: $$\st{[], [], [], \q{end}}$$

Push variable value
Store: $$[X]$$
Environment: $$[b = Y, a = X]$$
Control: $$[]$$
Dump: $$\st{[], [], [], \q{end}}$$

Return from function
Store: $$[X]$$
Environment: $$[]$$
Control: $$[]$$
Dump: $$\q{end}$$

We implemented the evaluator, compiler and machine in Standard ML, if you'd like to play along at home.

### An object language

There are several ways to formulate objects. Xavier Leroy suggests that "objects are generalised closures with multiple entry points", which we can start from. However, we will replace closures with objects entirely, and lambda terms with classes. This language will consist of literal values, classes, and targeted and self message sends. Formally:

\begin{align*} \newcommand{\s}[1]{\text{ #1: }} \q{OO} &::= \q{Value} \mid \q{Class} \mid \q{Targeted} \mid \q{Self} \\ \q{Class} &::= \{ \s{Symbol} \q{Symbol} = \q{OO}, \cdots \} \\ \q{Targeted} &::= \q{OO}\ \s{Symbol}\ \q{OO} \\ \q{Self} &::= \s{Symbol}\ \q{OO} \end{align*}

A class has methods, each with a method name, argument name, and body. All methods on objects still have one argument, so we will need to use nesting to pass multiple arguments, and to pass unused arguments for methods which would accept zero arguments. We use a special symbol $$()$$ to denote a value which we are only passing due to this limitation.4 For example, the program $$\{ \s{x} x = \{ \s{y} y = \{ \s{getx} z = \s{x} (), \s{gety} z = \s{y} () \} \} \} \s{x} A \s{y} B \s{getx} ()$$ evaluates to $$A$$. Note that variables have been replaced with self message sends.

The values used by an evaluator for this language are somewhat different. Instead of closures, we now have objects; each object has an enclosing object, and some methods. We also need to keep track of the receiver of a message, and the argument passed to the method. Both can be achieved by creating a new object as an "environment", with the receiver as the enclosing object, and one method which always returns the argument provided to the message send. However, this scheme is arguably slightly too simple, as we need to write unsightly getter methods to allow access to closed-over values, when we want to do that.

\begin{align*} \q{Value} &::= \q{Symbol} \mid \c{object}{\q{Enclosing}, [ \s{Symbol} \q{Symbol} = \q{OO}, \cdots ] } \\ \q{Enclosing} &::= \q{none} \mid \q{Value} \\ \\ \q{bind} &: (\q{Symbol}, \q{Value}, \q{Value}) \rightarrow \q{Value} \\ \c{bind}{n, v, o} &= \c{object}{o, [ n\text{: } z = v ]} \end{align*}

One issue is how to look up methods on an object. Self message sends should walk enclosing objects to find a method; we'll call a function, analogous to $$\q{lookup}$$ for the lambda calculus evaluator, which does this $$\q{deep}$$. We are provided some flexibility with how to handle targeted sends, however. We can implement a form of encapsulation wherein only methods directly in the object can be accessed by targeted sends. We'll call a function, also analogous to $$\q{lookup}$$, which only looks through the methods directly in an object $$\q{shallow}$$. We can then write the evaluation rules as follows:

\begin{align*} \q{eval} &: (\q{OO}, \q{Value}) \rightarrow \q{Value} \\ \c{eval}{x, e} &= x \\ \c{eval}{\{ m \cdots \}, e} &= \c{object}{e, [m \cdots]} \\ \c{eval}{r\text{ m: }a, e} &= \c{eval}{b, \c{bind}{n, \c{eval}{a, e}, r}} \\ &\text{where } (\text{m: } n = b) = \c{shallow}{m, r} \\ \c{eval}{\text{m: }a, e} &= \c{eval}{b, \c{bind}{n, \c{eval}{a, e}, e}} \\ &\text{where } (\text{m: } n = b) = \c{deep}{m, e} \\ \end{align*}

The initial environment is $$\c{object}{\q{none}, []}$$. One might think that a simple equivalence between a function and a class can be established: that a function $$\lambda x\ldotp e$$ can be written as the class $$\{ \text{call: } x = e \}$$. But unlike the lambda calculus, objects induce a kind of recursive reference to themselves; with objects, the method $$\q{call}$$ is in scope while evaluating $$e$$. Many languages provide a keyword such as self or this which allows a method to retrieve its object, but we only support such self-reference implicitly, as the receiver of a self message send.
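These rules can also be sketched in Python (again our own rendering; the article's implementation is Standard ML). An object is a tagged tuple carrying its enclosing object and a method table; note that $$\q{bind}$$ installs a Value directly as a method body, which evaluation passes through unchanged:

```python
# Terms: a string is a literal symbol; ("class", methods) a class;
# ("tsend", r, m, a) a targeted send; ("ssend", m, a) a self send.
# A method table maps a method name to an (argument name, body) pair.
# Values: strings, or objects ("object", enclosing, methods).

def shallow(m, obj):
    """Look up m directly in obj's methods (encapsulated targeted sends)."""
    return obj[2][m]

def deep(m, obj):
    """Look up m in obj, walking enclosing objects (self sends)."""
    while obj != "none":
        if m in obj[2]:
            return obj[2][m]
        obj = obj[1]
    raise KeyError(m)

def bind(n, v, o):
    """An environment object: one method n whose body is the value v."""
    return ("object", o, {n: ("z", v)})

def evaluate(t, e):
    if isinstance(t, str) or t[0] == "object":
        return t                              # values evaluate to themselves
    if t[0] == "class":
        return ("object", e, t[1])            # instantiate, enclosing e
    if t[0] == "tsend":
        _, r, m, a = t
        recv = evaluate(r, e)
        n, b = shallow(m, recv)
        return evaluate(b, bind(n, evaluate(a, e), recv))
    _, m, a = t                               # self send
    n, b = deep(m, e)
    return evaluate(b, bind(n, evaluate(a, e), e))
```

Evaluating the getter example from earlier in the initial environment returns $$A$$.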

The same issues associated with the lambda calculus evaluator apply to this evaluator, too. As such, we can create a similar machine to evaluate object programs.

### Micro-Utena

This machine for executing programs written in the object language is quite similar to the SECD machine. The registers are more or less the same; the main difference being that the Environment register contains an object. The instructions of the machine are:

• Literal: Push a literal value onto the Store stack.
• Create: Create an instance of a class, and push it onto the Store stack.
• Targeted send: Pop an argument off the Store stack, then pop a receiver object off the Store stack, and send a message with that argument to that receiver.
• Self send: Pop an argument off the Store stack, and perform a self message send.

The formal specification of the machine and instructions should come as no surprise:

\begin{align*} \q{Machine} &::= \st{\c{List}{\q{Value}}, \q{Value}, \c{List}{\q{Instruction}}, \q{Dump}} \\ \q{Dump} &::= \q{Machine} \mid \q{end} \\ \q{Instruction} &::= \c{literal}{\q{Value}} \mid \c{create}{\c{List}{\q{Method}}} \mid \c{tsend}{\q{Symbol}} \mid \c{ssend}{\q{Symbol}} \end{align*}

The compiler is also similar to that for the SECD machine, walking down each kind of term to produce instructions. The main difference is that there are multiple methods in a class, as opposed to just one body for a function. (We write $$`x$$ for a literal value $$x$$.)

\begin{align*} \q{compile} &: \q{OO} \rightarrow \c{List}{\q{Instruction}} \\ \c{compile}{`x} &= [\c{literal}{x}] \\ \c{compile}{\{ m \cdots \}} &= [\c{create}{\c{map}{\lambda (\text{m: } x = e)\ldotp (\text{m: } x = \c{compile}{e}), m}}] \\ \c{compile}{r \s{m} a} &= \c{compile}{r} \doubleplus \c{compile}{a} \doubleplus [\c{tsend}{m}] \\ \c{compile}{\text{m: } a} &= \c{compile}{a} \doubleplus [\c{ssend}{m}] \end{align*}
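A Python sketch of this compiler (our rendering, continuing the representation used above; method tables are dicts from method name to an (argument name, body) pair):

```python
def compile_oo(t):
    if isinstance(t, str):                    # literal value
        return [("literal", t)]
    if t[0] == "class":                       # compile each method body
        methods = {m: (n, compile_oo(b)) for m, (n, b) in t[1].items()}
        return [("create", methods)]
    if t[0] == "tsend":                       # receiver, argument, send
        _, r, m, a = t
        return compile_oo(r) + compile_oo(a) + [("tsend", m)]
    _, m, a = t                               # self send: argument, send
    return compile_oo(a) + [("ssend", m)]
```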

The aforementioned object language program compiles to $$[\c{create}{ C_1 }, \c{literal}{A}, \c{tsend}{\q{x}}, \c{literal}{B}, \c{tsend}{\q{y}}, \c{literal}{()}, \c{tsend}{\q{getx}}]$$ where

\begin{align*} C_1 &= [\s{x} x = [\c{create}{[\s{y} y = [\c{create}{ C_2 }]]}]] \\ C_2 &= [\s{getx} z = [\c{literal}{()}, \c{ssend}{\q{x}}], \s{gety} z = [\c{literal}{()}, \c{ssend}{\q{y}}]] \end{align*}

The nested classes are just too much for human consumption, unfortunately; having to support multiple methods in the one $$\q{create}$$ instruction adds to visual noise in a way that the single place to put code for a function does not.

The way we generate methods for arguments needs to change, as we no longer execute terms in the object language. We can instead generate a $$\q{literal}$$ instruction:

\begin{align*} \c{bind}{n, v, e} &= \c{object}{e, [\text{n: } z = \c{literal}{v}]} \end{align*}

We now specify the behaviour of the Micro-Utena machine. As well as dispatching instructions, we still need to handle finishing a method send, which is handled identically to returning from functions with the SECD machine.

\begin{align*} \q{run} &: \q{Machine} \rightarrow \q{Value} \\ \c{run}{s, e, \c{literal}{x} :: c, d} &= \c{run}{x :: s, e, c, d} \\ \c{run}{s, e, \c{create}{m} :: c, d} &= \c{run}{\c{object}{e, m} :: s, e, c, d} \\ \c{run}{a :: r :: s, e, \c{tsend}{m} :: c, d} &= \c{run}{[], \c{bind}{n, a, r}, b, \st{s,e,c,d}} \\ &\text{where } (\text{m: } n = b) = \c{shallow}{m, r} \\ \c{run}{a :: s, e, \c{ssend}{m} :: c, d} &= \c{run}{[], \c{bind}{n, a, e}, b, \st{s,e,c,d}} \\ &\text{where } (\text{m: } n = b) = \c{deep}{m, e} \\ \c{run}{[x], e, [], \q{end}} &= x \\ \c{run}{[x], e, [], \st{s', e', c', d'}} &= \c{run}{x :: s', e', c', d'} \end{align*}

The initial state of this machine is $$\st{[], \c{object}{\q{none}, []}, c, \q{end}}$$ for some compiled code $$c$$. A trace of the execution of this machine is also unpleasant without more typographic liberties than we are willing to take, so please believe us when we say it correctly executes the aforementioned object language program, and the result is $$A$$. Or you may again run the implementation in Standard ML yourself.
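For those playing along in Python rather than Standard ML, here is a sketch of the Micro-Utena transitions (hedged as before; note that $$\q{bind}$$ now installs a single literal instruction as the getter's body, per the redefinition above):

```python
def bind(n, v, o):
    """Environment object whose getter's body is a literal instruction."""
    return ("object", o, {n: ("z", [("literal", v)])})

def deep(m, obj):
    """Look up m in obj, walking enclosing objects."""
    while obj != "none":
        if m in obj[2]:
            return obj[2][m]
        obj = obj[1]
    raise KeyError(m)

def run(s, e, c, d):
    # s: Store stack, e: environment object, c: Control list, d: Dump.
    while True:
        if not c:
            if d == "end":
                return s[0]
            x, (s, e, c, d) = s[0], d         # finish the method send
            s = [x] + s
            continue
        ins, c = c[0], c[1:]
        if ins[0] == "literal":
            s = [ins[1]] + s
        elif ins[0] == "create":
            s = [("object", e, ins[1])] + s   # instance encloses current env
        elif ins[0] == "tsend":
            a, r, s = s[0], s[1], s[2:]
            n, b = r[2][ins[1]]               # shallow lookup on the receiver
            d = (s, e, c, d)
            s, e, c = [], bind(n, a, r), b
        elif ins[0] == "ssend":
            a, s = s[0], s[1:]
            n, b = deep(ins[1], e)            # deep lookup from the environment
            d = (s, e, c, d)
            s, e, c = [], bind(n, a, e), b
```

Running the compiled example program from the initial state returns $$A$$, as promised.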

## Why design a machine this way

### Sharing abstract machines

The pure lambda calculus and object language both allow for programming using an object-capability style, where isolation and security can be achieved by making sensitive resources unreachable from other objects in the language. Limited access can be achieved by providing another object to the outside world, with the object mediating operations on the sensitive resource. Following this style suggests that programs never access a global namespace, as it should always be possible to mediate what access to resources a program has.

While the object-capability style is beneficial for security, it also lends itself to supporting different languages in one programming system. The lack of a global namespace can be combined with allowing any compiler to generate code for the abstract machine, to allow for substantially different language designs to run on the same abstract machine.

If much of a language can be customised outside its abstract machine, the differences between abstract machines can be more essential and more interesting. For example, it would be difficult to retrofit lazy evaluation, rather than eager evaluation, onto this abstract machine by modifying only the compiler targeting it. It would also be difficult to implement a language which doesn't use a kind of "function call" pattern, such as Prolog, without contortions. (That's not to say either kind of language is impossible to implement; both are possible, but require finessing that languages with a similar sort of semantics don't require.) But less fundamental differences between languages can be supported on the same abstract machine.

### Incremental changes to programming languages

One effect of having few virtual machine instructions and using message-sends to compute is that the primitive types and operations exposed to programs can be changed without needing to modify the compiler at all. The Java virtual machine does not treat certain "primitive types" uniformly, and such types have special instructions which a compiler must know how to emit.

In Zero Feet we argued that low-level "magic" operations could be provided as a capability to the implementation code, and that the object-capability style could also provide protection, and the ability to debug using simulated components without having to change any implementation code. Another feature of this design choice is that the rest of the language does not need to be changed at all.

Some readers likened the approach to the Pre-Scheme language, famously used as an implementation language for Scheme 48. While Pre-Scheme shares many core forms and its syntax with standard Scheme, it conflates many design choices, such as a type system, monomorphisation of higher-order functions, and the lack of automatic memory management, into the one language. While these may be necessary for a "systems language", we still believe that they are not all necessary for language implementation, nor need they all be required in every module at once. For example, it is only necessary to avoid allocating memory from a garbage-collected heap in the code implementing the garbage collector; other code needn't be burdened by that restriction. A design based on an object-capability system5 and custom compilers can introduce these primitives and restrictions incrementally and only as necessary.

### Separation of abstract instructions and concrete encodings

Many abstract machines have a serialised form for code. For example, the Java virtual machine and the Smalltalk-80 machine both can represent instructions using bytecode. Often, for the purpose of saving space, some abstract instructions have multiple concrete representations, allowing packing of smaller values into smaller instructions. This can cause some confusion, if one tries to measure the complexity of a machine by the number of instructions it has. Describing instructions in an abstract form, as we have with the SECD machine and Micro-Utena machine, separates space-saving measures and the inherent complexity of the design of the abstract machine.

That being said, the instructions of either abstract machine proposed have little correlation with the complexity of the systems that can be built upon either machine. Both machines implement very uniform paradigms, of creating and applying functions, and of creating objects and sending messages between them; thus any complexity appears in the protocols of these objects and not instructions, and attempting to measure complexity by the number of instructions would drastically underestimate the complexity of systems using these abstract machines.

Having an abstract representation also makes metaprogramming tools simpler, as they can manipulate objects and not serialised code. For example, the interface for loading code in the Java virtual machine involves the serialised form of classes, and a library such as ASM is necessary to generate serialised classes from objects describing the class.

The serialisation of code may also be provided as a library, allowing further space savings if the serialisation format is tailored to the user. Such tailoring may also remove incentives to generate programs in unnatural ways merely to save space; a compiler may otherwise be encouraged to generate a program in a way that compresses better, akin to how a programmer may be encouraged to write a program in a way that the compiler optimises better. For the same reason, designing the serialisation format around a general idea of what "good" programs look like may prevent accidentally designing the wrong incentives into the format.

## Other interesting things

Rideau, Knauth and Amin write of formulating prototype object-oriented programming in a functional style, using a fixed-point function in pure Scheme to realise dynamic dispatch, and a combinator allowing for mixing prototypes. The late William Cook also compared algebraic types and objects, using closures and fixed-points to realise objects. He suggests "the untyped lambda-calculus was the first object-oriented language." We do the reverse, however, and turn functions into objects.

Peter Landin put forth the ISWIM family of languages in The next 700 programming languages, suggesting that members of the family share an abstract syntax while possibly differing in concrete syntax, and possibly differing in the primitive types and operations provided. The abstract instructions of the Micro-Utena machine provide a similar facility to abstract syntax, and the lack of a global namespace allows for basing a language off different primitives.

The Newspeak language has been described as "almost as simple as a class calculus" by Gilad Bracha, though class expressions are not yet implemented in Newspeak. The E programming language uses lexical closure to implement a kind of access control, like our object language.

Our notation for classes was borrowed from object literals in the Self language, but Self objects do not nest as objects do in our object language.

## Conclusion

As the name might suggest, Micro-Utena is a simplified version of the Utena programming system which we are designing, lacking methods of different arities, mutation, inheritance, protection modifiers, and interactivity. But it demonstrates a simple approach to designing a machine for a novel object-oriented language, and serves well as an object for discussing the consequences of the design of the abstract machine and language.

A pure object-oriented language may be implemented using a machine derived from a machine used for implementing functional languages, following equivalences proposed between the languages. The pure form lends itself to an object-capability style, which can be combined with different compilers targeting the abstract machine to allow for varied kinds of languages, as well as incremental modifications to languages without affecting the compiler. Focusing on abstract instructions rather than concrete representations avoids misestimating the complexity of abstract machines, simplifies metaprogramming, and can avoid some kinds of incentives to write worse programs. This view of the abstract machine de-emphasises the design of the abstract machine itself, as the semantics beyond the basic evaluator are provided by languages built atop the machine. Conversely, newer abstract machines should be designed with more distinctive semantics, which can't easily be implemented on other abstract machines.

The design of an object-oriented ISWIM dialect is left as an exercise for the reader.

## Acknowledgements

We would like to thank Gilad Bracha for a discussion on the consequences of making activation records and closures into proper objects. Some of the discussion was on Twitter, some not. Gilbert Baumann also pushed us to come up with a simple basis for a pure object-oriented language, and to make it comparable to the lambda calculus.

## Footnotes:

1

The latter allows for errors when a symbol is unbound, whereas the symbol would otherwise denote itself in the lambda calculus. The meaning of binding is described later, so this may only make sense after reading on. Only some simple recursion over terms is required to determine which programs may produce such errors, but we won't concern ourselves with that in this article.

2

Note that $$\q{lookup}$$ is undefined if $$s$$ is not bound in $$e$$, and $$\q{eval}$$ is partial. These cases correspond to unbound-variable errors and type errors, respectively.

3

These may be checked beforehand by performing a verification pass on every function, checking that instructions push and pop appropriate numbers of values to avoid popping an empty Store stack, or returning with more or less than one value on the Store stack.
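Such a verification pass is easy to sketch for the SECD instruction set (our own construction, assuming the tuple representation of instructions from the earlier sketches): track the depth of the Store stack through a body, reject underflow, and require exactly one value at the end.

```python
def verify(code):
    depth = 0
    for ins in code:
        if ins[0] == "apply":
            if depth < 2:
                return False      # would pop an empty Store stack
            depth -= 1            # pops function and argument, pushes the result
        else:
            if ins[0] == "create" and not verify(ins[2]):
                return False      # check nested function bodies too
            depth += 1            # create, value and quote each push one value
    return depth == 1             # exactly one value left to return
```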

4

ML programmers will recognise this as the sole value of the unit type.

5

Giving out low-level and unsafe capabilities to the least amount of code possible also follows the principle of least privilege, which is always nice.