For Handmade Chat 010, Casey talked about a few things, including the distinction between interpretation and translation, and partial specialisation. Interestingly, there's a school of thought which ties the two together.

Before I go into this, I'm going to quickly define interpretation and translation as "us compiler guys" usually think of them, but then I'm going to use Casey's definitions in what follows.

So the way that we usually use the terms in the business is this: we understand that there is no CPU which really executes a "program" as the programmer types it. A programmer types a bunch of (usually ASCII) characters, and this has to be turned into some other form before the CPU can execute it.

Even in the days of 8-bit interpreted BASIC, there was a certain amount of processing that happened to each line of code as you typed it in. At the very least, it was tokenised, so that each keyword (e.g. LET, IF, THEN, GOTO) was converted into a single byte. It was almost a degenerate example of bytecode in that respect.

Ultimately, your program needs to be translated into machine code, which is the language that the CPU implements directly. If the translation mostly happens statically, before the program is run, we call this "compiled". If it happens partly dynamically, as the program is running, we call this "interpreted". There is a lot of space between these two extremes, but that's usually how we use the terms.

OK. Now let's look at partial specialisation.

A program can be thought of as a function with this kind of "type":

program :: (static_input, dynamic_input) -> output

That is, a program takes some static input and some dynamic input and returns some output.
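To make the static/dynamic split concrete, here's a toy example in Python. The function and the choice of which input is static are mine, purely for illustration: the exponent is known up front (static), while the base only arrives at run time (dynamic).

```python
# A tiny "program" in the (static_input, dynamic_input) -> output shape.
# Here the exponent plays the role of the static input, and the base
# plays the role of the dynamic input.
def power(exponent, base):
    result = 1
    for _ in range(exponent):
        result *= base
    return result

print(power(3, 2))  # -> 8
```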

Partial specialisation can be thought of as a function which takes a program and some static input, and returns a program:

partial_specialise :: ((static_input, dynamic_input) -> output, static_input) -> (dynamic_input -> output)

That is, it takes a program of the previous form, and some static input, and returns a program which takes only the dynamic input and returns the output.

This is a complicated type, with functions that take functions and return functions. It might be a bit easier to read if we replace all those names with single letters:

partial_specialise :: ((A, B) -> C, A) -> (B -> C)

Ideally, partial_specialise should be able to do something sensible with any A, B, or C.
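Here's a minimal (and deliberately naive) sketch of such a generic partial_specialise in Python. It really does work for any A, B, and C, but only by closing over the static input; it does none of the optimisation that makes real partial specialisation interesting.

```python
import functools

def partial_specialise(program, static_input):
    # The laziest possible specialiser: return a function of the dynamic
    # input that simply remembers the static input. This has the right
    # type, ((A, B) -> C, A) -> (B -> C), but performs no optimisation.
    return functools.partial(program, static_input)

# The (A, B) -> C program from before, as an example.
def power(exponent, base):
    result = 1
    for _ in range(exponent):
        result *= base
    return result

cube = partial_specialise(power, 3)  # a B -> C function
print(cube(2))  # -> 8
```

A serious specialiser would instead inspect the program, propagate the static input through it, and emit a residual program with the static work already done.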

Now, let's think about what an interpreter is. It takes a program and some input, and executes the program, returning some output:

interpreter :: (program, input) -> output

Now what's a compiler? It takes a program, and returns a function which just maps input to output:

compiler :: program -> (input -> output)

With a little thought, we can see:

compiler(program) = partial_specialise(interpreter,program)

That is, a compiler is just a partially-specialised interpreter, with the "specialised" part being the program. You might not understand all the type notation, but if you think about it, that equation should make some kind of sense.

How good is this compiler? Well, it depends partly on how good your partial specialiser is and how well-written your interpreter is. But yes, it's actually been done.
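To see that equation in action, here's a toy sketch in Python. The instruction set is invented, and this "specialiser" only knows how to specialise this one interpreter, whereas a real partial specialiser works on arbitrary programs. Still, the shape is right: fixing the program turns the interpreter into a residual function of the dynamic input alone, with all the instruction decoding done ahead of time.

```python
def interpret(program, x):
    # A tiny interpreter: the program is a list of (opcode, argument)
    # pairs operating on an accumulator initialised to the input x.
    acc = x
    for op, arg in program:
        if op == "add":
            acc += arg
        elif op == "mul":
            acc *= arg
    return acc

def specialise_interpreter(program):
    # Hand-written partial specialisation of `interpret` on a fixed
    # program: the dispatch loop is unrolled once, at "compile time",
    # leaving straight-line code over the dynamic input only.
    lines = ["def compiled(x):", "    acc = x"]
    for op, arg in program:
        if op == "add":
            lines.append(f"    acc += {arg}")
        elif op == "mul":
            lines.append(f"    acc *= {arg}")
    lines.append("    return acc")
    namespace = {}
    exec("\n".join(lines), namespace)
    return namespace["compiled"]

prog = [("add", 1), ("mul", 10)]
compiled = specialise_interpreter(prog)
assert compiled(4) == interpret(prog, 4) == 50
```

Note that specialise_interpreter never looks at the dynamic input x: the loop over instructions and the dispatch on opcodes happen exactly once, when the program is specialised. That's the sense in which compiler(program) = partial_specialise(interpreter, program).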

That, by the way, was the easy part.


Suppose we implemented a compiler this way. We don't really want to ship the interpreter and partial_specialise with every program we compile, because that would be silly. Notice that the interpreter doesn't change between programs, so the interpreter itself is static data passed to partial_specialise. And static data is exactly what partial specialisation eats, so of course we should partially specialise partial_specialise itself, with the interpreter as the static input!

You may be able to see where I'm going with this:

compiler = partial_specialise(partial_specialise,interpreter)

This is much harder to get your brain around, but it totally works, and people have actually implemented compilers this way.

Of course, you're now all wondering what happens if you go one step further into meta-land, right? If you partially specialise that, you get a function which takes an interpreter and returns a compiler. That is, it's a compiler-compiler:

compiler_compiler = partial_specialise(partial_specialise,partial_specialise)

You really don't want to know the technical details, but once again, it actually does work.


This idea was all the rage in the late 80s; the three equations above are known as the Futamura projections. It's fair to say, though, that nobody has really implemented all of a real-world industrial-strength compiler this way.

Having said that, the idea still has practical applications. Real compilers are often made out of small translation passes, where an intermediate form is translated into a slightly different intermediate form. There are shipped compilers which use this technique to implement some of those translation passes. Chances are good that you've never used one, though...