Posts tagged language
Stack reification via trampolines
21 February 2016 (programming language java)

I've been spending a couple of weeks now on getting Időrégész to Android in the best possible way. Időrégész is the Hungarian text adventure game from the '80s that I reverse-engineered and then re-implemented in Inform. My original plan was to just implement a Glulx interpreter; however, initial experiments with a Clojure implementation weren't too promising on the performance front.
So I decided to turn to compilation instead of interpretation: take the Inform-emitted Glulx image and compile it directly to the JVM. Of course, that approach would have its own problems with self-modifying code, but my goal is just to get it working well enough that I can compile Időrégész, which is very vanilla as far as Inform programs go.
Most instructions of Glulx map to JVM instructions in a relatively straightforward manner; some unsigned integer operations are implemented by hand in Java and then called via invokestatic. The rather baroque string compression is currently handled at compile time, by just emitting a single Java function that looks like this:
public String resolveString(int ptr) {
    switch (ptr) {
        case 32851: return "Class";
        case 32857: return "Object";
        case 32863: return "Routine";
        // and so on...
    }
}
However, there is one aspect of Glulx that makes the compilation trickier than a straightforward mapping of procedures to procedures and calls to calls: some Glulx opcodes provide fairly detailed access to the runtime state. For example, save must be able to serialize enough of the state that it can later load that back and continue with the next instruction.
In an interpreter, this is a trivial matter, since things like the VM's memory and the stack are already reified data structures. However, if it's compiled to the JVM, the state of the Glulx program and the state of the JVM are one and the same; so how do you then save it — from the inside?
The solution is to not use the JVM's stack as the Glulx stack; rather, function calls are compiled to areturn instructions that return a small Java object describing which function to call and where to jump back once its result is available. Returns are also compiled to areturn instructions, but this time returning an object describing the result value. Function-local variables and the per-function stack are passed to the generated JVM code as function arguments:
Each Glulx function is compiled into a class that extends the Glulx.Fun class, defined in Kotlin as:
package Glulx

abstract class Fun {
    abstract class Cont {
        class Return(val value : Int) : Cont()
        class Call(val args: IntArray, val nextFun : Fun, val contPC : Short) : Cont()
    }
    class Result(val value : Int, val contPC : Short)

    abstract fun enter(stub: Stack.CallStub?, args: IntArray): Stack.CallFrame
    abstract fun exec(result: Result?, localVars: IntArray): Cont
}
Since the JVM doesn't support computed jumps, the continuation address contPC is handled by starting the exec of each Fun with a big tableswitch. Here's an example of a recursively defined factorial function (using the Krakatau JVM assembler's syntax):
.method public exec : (LGlulx/Fun$Result;[I)LGlulx/Fun$Cont;
    .code stack 10 locals 10
        aload_1
        ifnull LSTART
        aload_1
        invokevirtual Method Glulx/Fun$Result getContPC ()S
        tableswitch 0
            LCONT0
            default: LSTART

LSTART:
        ;; if V0=0, jump to base case
        aload_2
        ldc 0
        iaload
        ifeq L0

        ;; START call FACT(V0-1)
        ldc 1
        newarray int
        dup
        ldc 0
        aload_2
        ldc 0
        iaload
        ldc 1
        isub
        iastore
        new Glulx/Fun$Cont$Call
        swap
        dup2
        getstatic Field Glulx/Image/FACT fun LGlulx/Fun;
        ldc 0
        invokespecial Method Glulx/Fun$Cont$Call <init> ([ILGlulx/Fun;S)V
        pop
        areturn

LCONT0:
        aload_1
        invokevirtual Method Glulx/Fun$Result getValue ()I
        ;; END call FACT(V0-1)
        ;; Note the code generated for the call spans an areturn!

        ;; Multiply result by V0
        aload_2
        ldc 0
        iaload
        imul

        ;; Return result -- this is the "real" return
        new Glulx/Fun$Cont$Return
        swap
        dup2
        invokespecial Method Glulx/Fun$Cont$Return <init> (I)V
        pop
        areturn

L0:
        ;; For the base case, we just return 1
        new Glulx/Fun$Cont$Return
        dup
        ldc 1
        invokespecial Method Glulx/Fun$Cont$Return <init> (I)V
        areturn
    .end code
.end method
Running these functions then becomes a matter of mere stack juggling, implemented again in Kotlin:
package Glulx

class Stack {
    class CallFrame(val parent: CallStub?, val localVars: IntArray) {
        constructor(parent: CallStub?, localVarCount: Int): this(parent, IntArray(localVarCount))

        fun storeArgs(args: IntArray) {
            for (i in args.zip(localVars).indices)
                localVars[i] = args[i]
        }
    }

    class CallStub(val parent: CallFrame, val parentFun: Fun, val parentPC : Short)
}

fun run(startFun: Fun) {
    var frame = startFun.enter(null, IntArray(0))
    var fn = startFun
    var result : Fun.Result? = null

    while (true) {
        val cont = fn.exec(result, frame.localVars)
        when (cont) {
            is Fun.Cont.Return -> {
                val stub = frame.parent
                if (stub == null) return
                frame = stub.parent
                fn = stub.parentFun
                result = Fun.Result(cont.value, stub.parentPC)
            }
            is Fun.Cont.Call -> {
                val stub = Stack.CallStub(frame, fn, cont.contPC)
                fn = cont.nextFun
                frame = fn.enter(stub, cont.args)
                result = null
            }
        }
    }
}
In the real implementation, there's slightly more state to pass around: Fun.exec also gets as an argument an instance of a generic Environment class, which it can use to e.g. access the main Glulx memory or issue Glk calls; and there are some boring details about handling 32-, 16- and 8-bit local variables.
Note that the exact same technique can also be used to implement tail recursion (even mutual tail recursion) on a platform that doesn't support it, like the JVM. I am not using it here, but Glulx actually has a tailcall instruction (not used by Időrégész, or by sane Inform code in general), so it might come in handy if I want to increase feature coverage.
Typed embedding of STLC into Haskell
5 February 2015 (programming haskell language correctness)

Someone posted to the Haskell subreddit this blogpost of Lennart where he goes step-by-step through implementing an evaluator and type checker for CoC. I don't know why this post from 2007 showed up on Reddit this week, but it's a very good post, pedagogically speaking. Go and read it.
In this post, I'd like to elaborate on the simply-typed lambda calculus part of his blogpost. His typechecker defines the following types for representing STLC types, terms, and environments:
data Type = Base
          | Arrow Type Type
          deriving (Eq, Show)

type Sym = String

data Expr = Var Sym
          | App Expr Expr
          | Lam Sym Type Expr
          deriving (Eq, Show)
The signature of the typechecker presented in his post is as follows:
type ErrorMsg = String
type TC a = Either ErrorMsg a

newtype Env = Env [(Sym, Type)] deriving (Show)

tCheck :: Env -> Expr -> TC Type
My approach is to instead create a representation of terms of STLC in such a way that only well-scoped, well-typed terms can be represented. So let's turn on a couple of heavy-weight language extensions from GHC 7.8 (we'll see how each of them is used), and define a typed representation of STLC terms:
{-# LANGUAGE GADTs, StandaloneDeriving #-}
{-# LANGUAGE DataKinds, KindSignatures, TypeFamilies, TypeOperators #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE TemplateHaskell #-} -- sigh...

import Data.Singletons.Prelude
import Data.Singletons.TH
import Data.Type.Equality

-- | A (typed) variable is an index into a context of types
data TVar (ts :: [Type]) (a :: Type) where
    Here :: TVar (t ': ts) t
    There :: TVar ts a -> TVar (t ': ts) a
deriving instance Show (TVar ctx a)

-- | Typed representation of STLC: well-scoped and well-typed by construction
data TTerm (ctx :: [Type]) (a :: Type) where
    TConst :: TTerm ctx Base
    TVar :: TVar ctx a -> TTerm ctx a
    TLam :: TTerm (a ': ctx) b -> TTerm ctx (Arrow a b)
    TApp :: TTerm ctx (Arrow a b) -> TTerm ctx a -> TTerm ctx b
deriving instance Show (TTerm ctx a)
The idea is to represent the context of a term as a list of types of variables in scope, and index into that list, de Bruijn-style, to refer to variables. This indexing operation maintains the necessary connection between the pointer and the type that it points to. Note the type of the TLam constructor, where we extend the context at the front for the inductive step.
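For instance, here are two closed terms in this representation (these examples are my own, not from the original post):

-- λx:Base. x; the innermost binder is Here
identity :: TTerm '[] (Arrow Base Base)
identity = TLam (TVar Here)

-- λx:Base. λy:Base. x; There Here skips past y's binder to reach x
constFun :: TTerm '[] (Arrow Base (Arrow Base Base))
constFun = TLam (TLam (TVar (There Here)))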
To give a taste of how convenient it is to work with this representation programmatically, here's a total evaluator:
-- | Interpretation (semantics) of our types
type family Interp (t :: Type) where
    Interp Base = ()
    Interp (Arrow t1 t2) = Interp t1 -> Interp t2

-- | An environment gives the value of all variables in scope in a given context
data Env (ts :: [Type]) where
    Nil :: Env '[]
    Cons :: Interp t -> Env ts -> Env (t ': ts)

lookupVar :: TVar ts a -> Env ts -> Interp a
lookupVar Here      (Cons x _)  = x
lookupVar (There v) (Cons _ xs) = lookupVar v xs

-- | Evaluate a term of STLC. This function is total!
eval :: Env ctx -> TTerm ctx a -> Interp a
eval env TConst     = ()
eval env (TVar v)   = lookupVar v env
eval env (TLam lam) = \x -> eval (Cons x env) lam
eval env (TApp f e) = eval env f $ eval env e
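Using the example terms from above, evaluation produces plain Haskell values; a quick (again, my own) illustration:

-- Interp (Arrow Base Base) is () -> (), so the evaluated term can be applied directly
evalExample :: ()
evalExample = eval Nil identity ()

-- In a non-empty context, the environment supplies the values of the free variables
openExample :: ()
openExample = eval (Cons () Nil) (TVar Here :: TTerm '[Base] Base)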
Of course, the problem is that this representation is not at all convenient for other purposes. For starters, it is certainly not how we would expect human beings to type in their programs.
My version of the typechecker, instead of merely computing the type of a term (when it is well-typed), transforms the loose representation (Term) into the tight one (TTerm). A Term is well-scoped and well-typed (under some binders) iff there is a TTerm corresponding to it. Let's use singletons to store type information in existential positions:
$(genSingletons [''Type])
$(singDecideInstance ''Type)

-- | Existential version of 'TTerm'
data SomeTerm (ctx :: [Type]) where
    TheTerm :: Sing a -> TTerm ctx a -> SomeTerm ctx

-- | Existential version of 'TVar'
data SomeVar (ctx :: [Type]) where
    TheVar :: Sing a -> TVar ctx a -> SomeVar ctx

-- | A typed binder of variable names
data Binder (ctx :: [Type]) where
    BNil :: Binder '[]
    BCons :: Sym -> Sing t -> Binder ts -> Binder (t ': ts)
Armed with these definitions, we can finally define the type inferer. I would argue that it is no more complicated than Lennart's version. In fact, it has the exact same shape, with value-level equality tests replaced with Data.Type.Equality-based checks.
-- | Type inference for STLC
infer :: Binder ctx -> Term -> Maybe (SomeTerm ctx)
infer bs (Var v) = do
    TheVar t v' <- inferVar bs v
    return $ TheTerm t $ TVar v'
infer bs (App f e) = do
    TheTerm (SArrow t0 t) f' <- infer bs f
    TheTerm t0' e' <- infer bs e
    Refl <- testEquality t0 t0'
    return $ TheTerm t $ TApp f' e'
infer bs (Lam v ty e) = case toSing ty of
    SomeSing t0 -> do
        TheTerm t e' <- infer (BCons v t0 bs) e
        return $ TheTerm (SArrow t0 t) $ TLam e'

inferVar :: Binder ctx -> Sym -> Maybe (SomeVar ctx)
inferVar (BCons u t bs) v
  | v == u = return $ TheVar t Here
  | otherwise = do
        TheVar t' v' <- inferVar bs v
        return $ TheVar t' $ There v'
inferVar _ _ = Nothing
Note that pattern matching on Refl in the App case brings in scope type equalities that are crucial to making infer well-typed.
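For reference, testEquality is the class method from Data.Type.Equality shown below (the instance for Sing comes from the singletons machinery); matching on the Refl it returns is exactly what introduces the type equality into scope:

testEquality :: TestEquality f => f a -> f b -> Maybe (a :~: b)

data a :~: b where
    Refl :: a :~: a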
Of course, because of the existential nature of SomeVar, we should provide a typechecker as well which is a much more convenient interface to work with:
-- | Typechecker for STLC
check :: forall ctx a. (SingI a) => Binder ctx -> Term -> Maybe (TTerm ctx a)
check bs e = do
    TheTerm t' e' <- infer bs e
    Refl <- testEquality t t'
    return e'
  where
    t = singByProxy (Proxy :: Proxy a)

-- | Typechecker for closed terms of STLC
check_ :: (SingI a) => Term -> Maybe (TTerm '[] a)
check_ = check BNil
(The SingI a constraint is an unfortunate implementation detail; the kind of a is Type, which is closed, so GHC should be able to know there is always going to be a SingI a instance).
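To illustrate how the typechecker is used, here is a hypothetical GHCi session; it is my own example, and it assumes Term is the same untyped representation as the Expr type quoted at the top (the exact Show output is indicative only):

λ> check_ (Lam "x" Base (Var "x")) :: Maybe (TTerm '[] (Arrow Base Base))
Just (TLam (TVar Here))

λ> check_ (Lam "x" Base (Var "x")) :: Maybe (TTerm '[] Base)
Nothing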
To review, we've written a typed embedding of STLC into Haskell, with a total evaluator and a typechecker, in about 110 lines of code.
If we were doing this in something more like Agda, one possible improvement would be to define a function untype :: TTerm ctx a -> Term and use that to give check basically a type of Binder ctx -> (e :: Term) -> Either ((e' :: TTerm ctx a) -> untype e' == e -> Void) (TTerm ctx a), i.e. to give a proof in the non-well-typed case as well.
Arrow's place in the Applicative/Monad hierarchy
12 July 2014 (programming haskell language)

I've been thinking lately about arrows in relation to applicative functors and monads. The difference between the latter two is easy to intuit (and I'll describe it via an example below), but I never managed to get the same level of understanding about arrows. There's a somewhat famous paper about this question, which has a very clear-cut diagram showing that applicatives embed into arrows and arrows embed into monads (and both containments are non-trivial), which I understood as meaning every arrow is an applicative functor, and every monad is an arrow.
At first glance, this makes sense, given the well-known result that monads are exactly equivalent to arrows that are also instances of ArrowApply, as witnessed by the Haskell types Kleisli and ArrowMonad. However, there's no immediately obvious reason why you couldn't also turn an applicative functor into an arrow, so how does that leave any room for arrows to be different from applicatives? (As an aside, the fact that applicative functors have kind ⋆ → ⋆ and arrows have kind ⋆ → ⋆ → ⋆ has been a huge complication for me in trying to compare them).
Now, finally, based on the helpful replies to that StackOverflow question and the associated Reddit thread, I am confident enough to answer my own question.
Tom Ellis suggested thinking about a concrete example involving file I/O, so let's compare three approaches to it using the three typeclasses. To make things simple, we will only care about two operations: reading a string from a file and writing a string to a file. Files are going to be identified by their file path:
type FilePath = String
Monadic I/O
Our first I/O interface is defined as follows:
data IOM ∷ ⋆ → ⋆
instance Monad IOM

readFile ∷ FilePath → IOM String
writeFile ∷ FilePath → String → IOM ()
Using this interface, we can for example copy a file from one path to another:
copy ∷ FilePath → FilePath → IOM ()
copy from to = readFile from >>= writeFile to
However, we can do much more than that: the choice of files we manipulate can depend on effects upstream. For example, the below function takes an index file which contains a filename, and copies it to the given target directory:
copyIndirect ∷ FilePath → FilePath → IOM ()
copyIndirect index target = do
    from ← readFile index
    copy from (target ⟨/⟩ from)
On the flip side, this means there is no way to know upfront the set of filenames that are going to be manipulated by a given value action ∷ IOM α. By "upfront", what I mean is the ability to write a pure function fileNames :: IOM α → [FilePath].
Of course, for non-IO-based monads (such as ones for which we have some kind of extractor function μ α → α), this distinction becomes a bit more fuzzy, but it still makes sense to think about trying to extract information without evaluating the effects of the monad (so for example, we could ask "what can we know about a Reader Γ α without having a value of type Γ at hand?").
The reason we can't really do static analysis in this sense on monads is because the function on the right-hand side of a bind is in the space of Haskell functions, and as such, is completely opaque.
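Concretely, the culprit is the second argument of bind:

(>>=) ∷ IOM α → (α → IOM β) → IOM β

The only thing we can do with that function is apply it, and the only way to get an α to apply it to is to actually run the effects of the first action; so any fileNames-style analysis would already have to perform the I/O it is trying to predict.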
So let's try restricting our interface to just an applicative functor.
Applicative I/O
data IOF ∷ ⋆ → ⋆
instance Applicative IOF

readFile ∷ FilePath → IOF String
writeFile ∷ FilePath → String → IOF ()
Since IOF is not a monad, there's no way to compose readFile and writeFile, so all we can do with this interface is to either read from a file and then postprocess its contents purely, or write to a file; but there's no way to write the contents of a file into another one.
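For example, these are the kinds of things the applicative interface does let us write (my own examples); note that in both cases the set of files touched is fixed before any effect runs:

lineCount ∷ FilePath → IOF Int
lineCount path = length . lines ⟨$⟩ readFile path

sameContents ∷ FilePath → FilePath → IOF Bool
sameContents f1 f2 = (==) ⟨$⟩ readFile f1 ⟨*⟩ readFile f2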
How about changing the type of writeFile?
writeFile′ ∷ FilePath → IOF (String → ())
The main problem with this interface is that while it would allow writing something like
copy ∷ FilePath → FilePath → IOF ()
copy from to = writeFile′ to ⟨*⟩ readFile from
it leads to all kinds of nasty problems, because String → () is such a horrible model of writing a string to a file: it breaks referential transparency. For example, what do you expect the contents of out.txt to be after running this program?
(λ write → [write "foo", write "bar", write "foo"]) ⟨$⟩ writeFile′ "out.txt"
Two approaches to arrowized I/O
First of all, let's get two arrow-based I/O interfaces out of the way that don't (in fact, can't) bring anything new to the table: Kleisli IOM and Applicarrow IOF.
The Kleisli-arrow of IOM, modulo currying, is:
readFile ∷ Kleisli IOM FilePath String
writeFile ∷ Kleisli IOM (FilePath, String) ()
Since writeFile's input still contains both the filename and the contents, we can still write copyIndirect (using arrow notation for simplicity). Note how the ArrowApply instance of Kleisli IOM is not even used.
copyIndirect ∷ Kleisli IOM (FilePath, FilePath) ()
copyIndirect = proc (index, target) → do
    from ← readFile ↢ index
    s ← readFile ↢ from
    writeFile ↢ (target ⟨/⟩ from, s)
The Applicarrow of IOF would be:
readFile ∷ FilePath → Applicarrow IOF () String
writeFile ∷ FilePath → String → Applicarrow IOF () ()
which of course still exhibits the same problem of being unable to compose readFile and writeFile.
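The post doesn't spell out the definition of Applicarrow; I'm assuming it is the standard construction that turns any applicative functor into an arrow by wrapping an effect yielding a pure function (sometimes called a static arrow). A minimal sketch under that assumption:

import Prelude hiding (id, (.))
import Control.Category
import Control.Arrow

-- An arrow built from an applicative functor: an effect that produces a pure function
newtype Applicarrow f b c = Applicarrow (f (b -> c))

instance Applicative f => Category (Applicarrow f) where
  id = Applicarrow (pure id)
  Applicarrow g . Applicarrow f = Applicarrow ((.) <$> g <*> f)

instance Applicative f => Arrow (Applicarrow f) where
  arr f = Applicarrow (pure f)
  first (Applicarrow f) = Applicarrow (fmap first f)

Lifting the IOF primitives then amounts to readFile path becoming Applicarrow (const ⟨$⟩ readFile path), and similarly for writeFile; the string to be written is still supplied as a pure argument, so the composition problem carries over unchanged.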
A proper arrowized I/O interface
Instead of transforming IOM or IOF into an arrow, what if we start from scratch, and try to create something in between, in terms of where we use Haskell functions and where we make an arrow? Take the following interface:
data IOA ∷ ⋆ → ⋆ → ⋆
instance Arrow IOA

readFile ∷ FilePath → IOA () String
writeFile ∷ FilePath → IOA String ()
Because writeFile takes the content from the input side of the arrow, we can still implement copy:
copy ∷ FilePath → FilePath → IOA () ()
copy from to = readFile from >>> writeFile to
However, the other argument of writeFile is a purely functional one, and so it can't depend on the output of e.g. readFile; so copyIndirect can't be implemented with this interface.
If we turn this argument around, this also means that while we can't know in advance what will end up being written to a file (without running the full IOA pipeline itself), we can statically determine the set of filenames that will be modified.
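The post keeps IOA abstract, but as a sanity check, here is one way it could be realized so that the claim about static analysis is concrete; this is my own sketch, not the post's implementation: each pipeline simply carries the list of files it mentions alongside the I/O action.

import Prelude hiding (id, (.), readFile, writeFile)
import qualified Prelude
import Control.Category
import Control.Arrow
import Control.Monad ((>=>))

-- A pipeline is the list of touched files plus the actual effectful function
data IOA b c = IOA [FilePath] (b -> IO c)

instance Category IOA where
  id = IOA [] return
  IOA fs2 g . IOA fs1 f = IOA (fs1 ++ fs2) (f >=> g)

instance Arrow IOA where
  arr f = IOA [] (return . f)
  first (IOA fs f) = IOA fs (\(b, d) -> do { c <- f b; return (c, d) })

readFile :: FilePath -> IOA () String
readFile path = IOA [path] (\() -> Prelude.readFile path)

writeFile :: FilePath -> IOA String ()
writeFile path = IOA [path] (Prelude.writeFile path)

-- The static analysis: no I/O is performed to answer the question
fileNames :: IOA b c -> [FilePath]
fileNames (IOA fs _) = fs

-- Actually running a pipeline is a separate operation
runIOA :: IOA b c -> b -> IO c
runIOA (IOA _ f) = f

With this sketch, fileNames (copy "in.txt" "out.txt") evaluates to ["in.txt", "out.txt"] without touching the file system, while the file contents still flow through the arrow at run time.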
Conclusion
Monads are opaque to static analysis, and applicative functors are poor at expressing dynamic-time data dependencies. It turns out arrows can provide a sweet spot between the two: by choosing the purely functional and the arrowized inputs carefully, it is possible to create an interface that allows for just the right interplay of dynamic behaviour and amenability to static analysis.
Yo dawg I heard you like esoteric programming languages...
5 October 2013 (programming language brainfuck haskell)

... so I made a compiler to turn a Turing tarpit into a one instruction set computer so you can use the processor inside your processor to compute between cycles of computation.
There's this wonderful paper detailing the computation model discovered in the x86 memory management unit, or MMU for short. Basically, by setting up the page table right, you can arrange for a page fault handler to lead to another page fault. If you set things up juuuust right, you can get a whole cascade of page faults. You can then modulate this to implement a single instruction, Harvard-architecture, Turing-complete computer using Move-Decrement-BranchIfZero, or MOVDBZ for short, with the following semantics in pseudocode:
MOVDBZ (y ← x-1) THEN l ELSE l′ = do
    val ← read x
    if val == 0
      then do
        write y 0
        jump l
      else do
        write y (val-1)
        jump l′
I'm going to refer to x and y as the source and target registers below, but it's important to understand that these are not the real registers of the x86 processor that we are talking about here; rather, these are the registers of this virtual machine living inside the MMU. The term Harvard architecture means the instruction labels l and l′ come from a completely separate namespace from the registers (cf. von Neumann computers).
Another important thing is that from the real x86 processor's point of view, the whole computation takes up zero cycles. That is because the resolution of the page fault happens between (or, to be more precise, in the middle of) instructions, so the whole cascade will be executed by the MMU before the next x86 instruction runs.
The paper linked above has all the details; this is just a summary to motivate what follows. Readers of my blog already know Brainfuck is Turing-complete, since we managed to compile register machine code into Brainfuck. So let's continue that chain of reasoning (motivated by this Proggit comment), and prove the Turing-completeness of MovDBz by compiling Brainfuck into MovDBz.
First of all, I'm going to add two new instructions to our one-instruction machine; hopefully we can all agree they don't change the spirit of the challenge:
- HALT is already implicitly part of the original language, since you can set up parts of the page table in such a way that the cascade stops
- I've added PRINT to be able to do some I/O. We won't be aiming for input, so the Brainfuck instruction , is not going to be supported.
Here's a straightforward interpreter for this dialect of MovDBz.
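For a flavour of what such an interpreter has to do, here is a minimal Haskell sketch of the same semantics (the representation and names are my own; the MOVDBZ argument order of source, target, then-label, else-label is guessed from the compiler output further down, and the linked code may well differ):

import Data.Char (chr)
import qualified Data.Map as Map
import Data.Map (Map)

-- One instruction of the MovDBz dialect described above
data Instr reg lbl
    = MOVDBZ reg reg lbl lbl   -- source, target, then-label, else-label
    | PRINT reg lbl
    | HALT

run :: (Ord reg, Ord lbl) => Map lbl (Instr reg lbl) -> Map reg Int -> lbl -> IO ()
run prog = go
  where
    go regs pc = case Map.findWithDefault HALT pc prog of
        HALT -> return ()
        PRINT r l -> do
            putChar $ chr $ Map.findWithDefault 0 r regs
            go regs l
        MOVDBZ src dst ifZero ifNonZero ->
            case Map.findWithDefault 0 src regs of
                0 -> go (Map.insert dst 0 regs) ifZero
                n -> go (Map.insert dst (n - 1) regs) ifNonZero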
At first glance, there seem to be two tricky aspects of Brainfuck with respect to MovDBz:
- Brainfuck has (limited) indirect addressing: there's a pointer register that addresses into the other data cells. MovDBz has nothing like that: every instruction has the source and target registers hardcoded.
- Brainfuck has increment and decrement operators, whereas MovDBz only has decrement.
Indirect addressing
The first problem is solved by basically turning this:

increment cells[ptr]

into this:

if ptr == 0 then increment cells[0]
else if ptr == 1 then increment cells[1]
else if ptr == 2 then increment cells[2]
...
else if ptr == n then increment cells[n]
else error
This latter construct is easy to implement if we nominate one of our MovDBz registers as a temporary register tmp:
scan(0): MOVDBZ (tmp ← ptr-1) THEN found(0) ELSE scan(1)
scan(1): MOVDBZ (tmp ← tmp-1) THEN found(1) ELSE scan(2)
...
scan(n): MOVDBZ (tmp ← tmp-1) THEN found(n) ELSE impossible
found(i): increment cells[i]
where n is the maximum index of Brainfuck cells.
Note that this means the generated MovDBz program's size will be linear in the number of Brainfuck cells. The Brainfuck spec calls for 30,000 cells; when compiling to MovDBz, it is usually a good idea to use a lot fewer cells.
Incrementing
At first glance, it might seem impossible to implement incrementing when all we have is a single decrement operator. Modular arithmetic comes to the rescue: since x + 1 ≡ x - 255 (mod 256), we can do repeated decrements (255 times) to emulate incrementing. Of course, we also have to handle specially the case when the value gets to 0.
So with the help of a constant register C256 we keep clamped to 256, we can turn this:

increment cells[i]

into this:
count: MOVDBZ (tmp ← tmp-1) THEN next ELSE dec
dec: MOVDBZ (cells[i] ← cells[i]-1) THEN uflow ELSE count
uflow: MOVDBZ (cells[i] ← C256-1) THEN impossible ELSE count
next: ...
We can implement incrementing the pointer (> in Brainfuck) similarly, by having another constant register storing the number of cells (i.e. n+1).
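Spelled out in the same style as above (the labels are mine, and the scheme is read off the compiled output below), with CMaxAddr being that constant register:

incptr: MOVDBZ (tmp ← CMaxAddr-1) THEN impossible ELSE pcount
pcount: MOVDBZ (tmp ← tmp-1) THEN next ELSE pdec
pdec:   MOVDBZ (ptr ← ptr-1) THEN puflow ELSE pcount
puflow: MOVDBZ (ptr ← CMaxAddr-1) THEN impossible ELSE pcount
next:   ...

Decrementing ptr a total of n times modulo n+1 is the same as incrementing it once.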
Putting it all together
To prototype these ideas, I've written a compiler that uses the above techniques to turn Brainfuck code into MovDBz. The Haskell implementation is available on Github, but be aware that it is somewhat-horrible spaghetti code.
One nice aspect of it, though, is that the compiler generates semantic labels and registers (like Cell i for the register containing the i'th Brainfuck cell, or ScanFinished i (Inc l) for the label of the instruction that starts incrementing the Cell i and eventually goes to label l when finished). This can make the output almost readable... For example, here's the result of compiling -.>. with two Brainfuck cells:
λ> mapM_ print $ compileBF 1 [DecData, Output, IncPtr, Output]
(Src 0,MOVDBZ C0 C0 (S (Scan 0 (Dec (Src 1)))) (S (Scan 0 (Dec (Src 1)))))
(Src 1,MOVDBZ C0 C0 (S (Scan 0 (Print (Src 2)))) (S (Scan 0 (Print (Src 2)))))
(Src 2,MOVDBZ C0 C0 (S (DoIncPtr (Src 3))) (S (DoIncPtr (Src 3))))
(Src 3,MOVDBZ C0 C0 (S (Scan 0 (Print (Src 4)))) (S (Scan 0 (Print (Src 4)))))
(Src 4,HALT)
(S (Scan 0 (Dec (Src 1))),MOVDBZ Ptr Tmp (S (ScanFinished 0 (Dec (Src 1)))) (S (Scan 1 (Dec (Src 1)))))
(S (Scan 0 (Print (Src 2))),MOVDBZ Ptr Tmp (S (ScanFinished 0 (Print (Src 2)))) (S (Scan 1 (Print (Src 2)))))
(S (Scan 0 (Print (Src 4))),MOVDBZ Ptr Tmp (S (ScanFinished 0 (Print (Src 4)))) (S (Scan 1 (Print (Src 4)))))
(S (Scan 1 (Dec (Src 1))),MOVDBZ Tmp Tmp (S (ScanFinished 1 (Dec (Src 1)))) (S End))
(S (Scan 1 (Print (Src 2))),MOVDBZ Tmp Tmp (S (ScanFinished 1 (Print (Src 2)))) (S End))
(S (Scan 1 (Print (Src 4))),MOVDBZ Tmp Tmp (S (ScanFinished 1 (Print (Src 4)))) (S End))
(S (ScanFinished 0 (Dec (Src 1))),MOVDBZ C0 C0 (S (DecCell 0 (Src 1))) (S (DecCell 0 (Src 1))))
(S (ScanFinished 0 (Print (Src 2))),PRINT (Cell 0) (Src 2))
(S (ScanFinished 0 (Print (Src 4))),PRINT (Cell 0) (Src 4))
(S (ScanFinished 1 (Dec (Src 1))),MOVDBZ C0 C0 (S (DecCell 1 (Src 1))) (S (DecCell 1 (Src 1))))
(S (ScanFinished 1 (Print (Src 2))),PRINT (Cell 1) (Src 2))
(S (ScanFinished 1 (Print (Src 4))),PRINT (Cell 1) (Src 4))
(S (DecCell 0 (Src 1)),MOVDBZ (Cell 0) (Cell 0) (S (UnderflowCell 0 (Src 1))) (Src 1))
(S (DecCell 1 (Src 1)),MOVDBZ (Cell 1) (Cell 1) (S (UnderflowCell 1 (Src 1))) (Src 1))
(S (UnderflowCell 0 (Src 1)),MOVDBZ CMaxData (Cell 0) (S End) (Src 1))
(S (UnderflowCell 1 (Src 1)),MOVDBZ CMaxData (Cell 1) (S End) (Src 1))
(S (DoIncPtr (Src 3)),MOVDBZ CMaxAddr Tmp (S (DoIncPtrLoop (Src 3) IncPtrDecCounter)) (S (DoIncPtrLoop (Src 3) IncPtrDecCounter)))
(S (DoIncPtrLoop (Src 3) IncPtrDecCounter),MOVDBZ Tmp Tmp (Src 3) (S (DoIncPtrLoop (Src 3) IncPtrDecPtr)))
(S (DoIncPtrLoop (Src 3) IncPtrDecPtr),MOVDBZ Ptr Ptr (S (DoIncPtrLoop (Src 3) IncPtrUnderflowPtr)) (S (DoIncPtrLoop (Src 3) IncPtrDecCounter)))
(S (DoIncPtrLoop (Src 3) IncPtrUnderflowPtr),MOVDBZ CMaxAddr Ptr (S (DoIncPtrLoop (Src 3) IncPtrDecCounter)) (S (DoIncPtrLoop (Src 3) IncPtrDecCounter)))
(S End,HALT)
Which, after label and register layouting, comes out as this:
λ> mapM_ print $ layout 1 $ compileBF 1 [DecData, Output, IncPtr, Output]
(0,MOVDBZ 0 0 5 5)
(1,MOVDBZ 0 0 6 6)
(2,MOVDBZ 0 0 25 25)
(3,MOVDBZ 0 0 7 7)
(4,HALT)
(5,MOVDBZ 3 4 11 8)
(6,MOVDBZ 3 4 12 9)
(7,MOVDBZ 3 4 13 10)
(8,MOVDBZ 4 4 14 29)
(9,MOVDBZ 4 4 15 29)
(10,MOVDBZ 4 4 16 29)
(11,MOVDBZ 0 0 17 17)
(12,PRINT 5 2)
(13,PRINT 5 4)
(14,MOVDBZ 0 0 18 18)
(15,PRINT 6 2)
(16,PRINT 6 4)
(17,MOVDBZ 1 4 19 19)
(18,MOVDBZ 1 4 22 22)
(19,MOVDBZ 4 4 1 20)
(20,MOVDBZ 5 5 21 19)
(21,MOVDBZ 1 5 19 19)
(22,MOVDBZ 4 4 1 23)
(23,MOVDBZ 6 6 24 22)
(24,MOVDBZ 1 6 22 22)
(25,MOVDBZ 2 4 26 26)
(26,MOVDBZ 4 4 3 27)
(27,MOVDBZ 3 3 28 26)
(28,MOVDBZ 2 3 26 26)
(29,HALT)
Simply Typed Lambda Calculus in Agda, Without Shortcuts
1 May 2013 (programming agda correctness language)

There are many ways to introduce dependent type systems, depending on which side of the Curry-Howard lens you look through. But if your interest is more in programming languages than proof assistants, then length-indexed vectors is your Hello, World!, and an interpreter for the simply-typed lambda calculus is your FizzBuzz. I think the usual presentation of the latter is a bit of a cheat, for reasons which I will explain below. I'll also show a "non-cheating" version.
Continue reading »

Older posts:
- 1 December 2012: Static analysis with Applicatives
- 3 December 2011: How I learned about Java's lack of type erasure the hard way
- 23 October 2010: The case for compositional type checking (1 comment)
- 7 September 2010: From Register Machines to Brainfuck, part 2
- 6 September 2010: From Register Machines to Brainfuck, part 1
- 16 May 2010: Compiling Haskell into C++ template metaprograms
- 22 February 2010: The B Method for Programmers (Part 2)
- 16 February 2010: The B Method for Programmers (Part 1)
- 17 March 2009: Eliminating magic strings from ELisp's (interactive)
- 22 January 2009: Type inference for CLazy (1 comment)
- 12 January 2009: CLazy interpreter and compiler (1 comment)
- 2 June 2008: Incompetent fuckwits (3 comments)
- 2 August 2007: A kindergarten of pancakes