
Lecture 8: Defining functions

Last time we built up the infrastructure for calling functions in a manner that’s compatible with the C calling convention, so that we could interact with functions defined in our stub.rs runtime. This prompts the obvious generalization: can we expand our source language to include function calls too? Could we expand it further to define our own functions?

1 Function Definitions and Calls

Exercise

1. Extend the source language with a new expression form for function definitions and calls

2. Give examples of functions we’d like to provide, and examples of their use

3. Extend the abstract syntax and its semantics

4. Extend our transformations

5. Test the new expressions

1.1 Concrete syntax

We’ll start with concrete syntax. A function call is a new form of expression that starts with a function name and takes zero or more comma-separated expressions as arguments.

‹expr›: ...
      | IDENTIFIER ( ‹exprs› )
      | IDENTIFIER ( )
‹exprs›: ‹expr›
       | ‹expr› , ‹exprs›

To account for function definitions, we need to change our syntactic structure more fundamentally. Our programs can’t just be single expressions any longer: we add a sequence of top-level function definitions, too.

‹program›: ‹decls› ‹expr›
         | ‹expr›
‹decls›: ‹decl›
       | ‹decl› ‹decls›
‹decl›: def IDENTIFIER ( ‹ids› ) : ‹expr› end
      | def IDENTIFIER ( ) : ‹expr› end
‹ids›: IDENTIFIER
     | IDENTIFIER , ‹ids›
‹expr›: ...

A ‹program› is now a list of zero or more function declarations, followed by a single expression that is the main result of the program.

For our examples, let’s design max, which takes two numeric arguments and returns the larger of the two.

def max(x,y):
  if x >= y: x else: y
end
max(17,31)

should evaluate to 31.

1.2 Abstract syntax for Calls

First, let’s cover the calling side of the language.

Do Now!

What should the semantics of f(e1, e2, ..., en) be? How should we represent this in our Exp data definition? What knock-on effects does it have for the transformation passes of our compiler?

The first thing we might notice is that attempting to call an unknown function should be prohibited: this is analogous to the scope-checking we already do for variable names, and should be done at the same time. Indeed, we can generalize our scope-checking into a suite of well-formedness checks that assert that the program we’re compiling is “put together right”. (These static checks include static type-checking, which we are not yet doing; in fact, many popular languages these days are focusing heavily on improving the precision and efficiency of their well-formedness checking as a way to improve programmer productivity and correctness.) Checking for undefined functions implies that we need something like an environment of known functions. We don’t yet know what that environment should contain, but at a minimum it needs to contain the names of the functions we support.

Do Now!

What other programmer mistakes should we try to catch with well-formedness checking? What new mistakes are possible with function calls?

What should happen if a programmer tries to call max(1) or max(1, 2, 3)? Certainly nothing good could happen at runtime if we allowed this. Fortunately, we can track enough information to prevent this at well-formedness time, too: our function environment should keep track of known function names and their arities. Then we can check every function call expression to see whether it contains the correct number of actual arguments.
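A sketch of what this arity check might look like, over a deliberately simplified expression type (the names FunEnv and check_calls are illustrative, not from our actual compiler):

```rust
use std::collections::HashMap;

// A function environment mapping each known function name to its arity.
type FunEnv = HashMap<String, usize>;

// Simplified expression type for illustration; our real Exp has many more forms.
enum Exp {
    Num(i64),
    Call(String, Vec<Exp>),
}

// Walk the expression, reporting every call that is unknown or has the wrong arity.
fn check_calls(e: &Exp, funs: &FunEnv, errs: &mut Vec<String>) {
    match e {
        Exp::Num(_) => {}
        Exp::Call(f, args) => {
            match funs.get(f) {
                None => errs.push(format!("unknown function '{}'", f)),
                Some(&arity) if arity != args.len() => {
                    errs.push(format!("expected {} arguments, got {}", arity, args.len()))
                }
                Some(_) => {}
            }
            // Arguments may themselves contain calls, so recur into each one.
            for arg in args {
                check_calls(arg, funs, errs);
            }
        }
    }
}
```

Our real well-formedness pass would do the same walk over every expression form, accumulating all the errors rather than stopping at the first one.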

We need more examples:

Source          Output
max(1)          Compile Error: expected 2 arguments, got 1
max(1, 2, 3)    Compile Error: expected 2 arguments, got 3
unknown(1, 2)   Compile Error: unknown function 'unknown'

To represent call expressions in our AST, we just need to keep track of the function name, the argument list, and any tag information:


pub enum Exp<Ann> {
    ...
    Call(String, Vec<Exp<Ann>>, Ann),
}

We need to consider how our expression evaluates, which in turn means considering how it should normalize into sequential form.

Do Now!

What are the design choices here?

Since Exp::Call expressions are compound, containing multiple subexpressions, they should normalize much like Prim2 expressions do: the arguments should all be made immediate.

pub enum SeqExp<Ann> {
    ...
    Call(String, Vec<ImmExp>, Ann),
}

We have at least two possible designs for how to normalize these expressions: we can choose either a left-to-right or a right-to-left evaluation order for the arguments. For consistency with our infix operators, we’ll choose a left-to-right ordering.

Do Now!

What tiny example program, using only the expressions we have so far, would demonstrate the difference between these two orderings?

Do Now!

Extend sequentialization to handle Exp::Call.
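One possible shape for this extension, sketched over deliberately simplified Exp and SeqExp types (the fresh-name scheme and helper names are illustrative):

```rust
#[derive(Debug, PartialEq)]
enum ImmExp { Num(i64), Var(String) }

enum Exp {
    Num(i64),
    Var(String),
    Call(String, Vec<Exp>),
}

#[derive(Debug, PartialEq)]
enum SeqExp {
    Imm(ImmExp),
    Let(String, Box<SeqExp>, Box<SeqExp>),
    Call(String, Vec<ImmExp>),
}

// Generate a fresh name; the '#' makes it invalid source syntax, so it
// cannot collide with any programmer-written variable.
fn gensym(counter: &mut u32, base: &str) -> String {
    *counter += 1;
    format!("#{}_{}", base, *counter)
}

fn seq(e: &Exp, counter: &mut u32) -> SeqExp {
    match e {
        Exp::Num(n) => SeqExp::Imm(ImmExp::Num(*n)),
        Exp::Var(x) => SeqExp::Imm(ImmExp::Var(x.clone())),
        Exp::Call(f, args) => {
            // Invent one fresh name per argument, in left-to-right order.
            let names: Vec<String> =
                (0..args.len()).map(|i| gensym(counter, &format!("arg{}", i))).collect();
            // The call itself mentions only the (immediate) names.
            let call = SeqExp::Call(
                f.clone(),
                names.iter().map(|n| ImmExp::Var(n.clone())).collect(),
            );
            // Wrap the call in let-bindings; folding from the right makes
            // the first argument's binding outermost, so it evaluates first.
            args.iter().zip(names).rev().fold(call, |body, (arg, name)| {
                SeqExp::Let(name, Box::new(seq(arg, counter)), Box::new(body))
            })
        }
    }
}
```

Note that even already-immediate arguments get named here; that is wasteful but simple, and a smarter sequentializer could avoid the redundant bindings.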

1.3 Making the call

Once we’ve confirmed our program is well-formed, and subsequently converted it to ANF, what information do we need to retain in our compiler in order to finish the compilation? Do we still need the function environment? Not really! Assuming that each function name is the same as the label name that we call, we don’t need anything else but that name and the immediate arguments of the call. After that, we output the same calling code as when implementing Print above. Remember to push the arguments in reverse order, so that the first argument is closest to the top of the stack.
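As a sketch, using instruction strings in place of our real instruction type, and assuming a stack-only convention in which the caller pushes every argument and removes them afterward (the helper name compile_call is illustrative):

```rust
// Emit the instructions for calling `fun_name` with already-rendered
// immediate arguments (e.g. "RAX" or "[RBP - 8]").
fn compile_call(fun_name: &str, args: &[String]) -> Vec<String> {
    let mut instrs = Vec::new();
    // Push arguments in reverse, so the first argument ends up
    // closest to the top of the stack.
    for arg in args.iter().rev() {
        instrs.push(format!("push {}", arg));
    }
    instrs.push(format!("call {}", fun_name));
    // The caller cleans up: move RSP back past the pushed arguments.
    instrs.push(format!("add RSP, {}", 8 * args.len()));
    instrs
}
```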

1.4 Defining our own functions

Now that our programs include function definitions and a main expression, our AST representation grows to match:

pub struct FunDecl<E, Ann> {
    pub name: String,
    pub parameters: Vec<String>,
    pub body: E,
    pub ann: Ann,
}

pub struct Prog<E, Ann> {
    pub funs: Vec<FunDecl<E, Ann>>,
    pub main: E,
    pub ann: Ann,
}

Here we abstract over annotations, as well as over the type of expressions. This allows us to instantiate to Prog<Exp<Ann>, Ann> or Prog<SeqExp<Ann>, Ann> to encode whether or not the expressions are in sequential form.
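Because Prog is parameterized over its expression type, a compiler pass that rewrites expressions lifts to whole programs just by mapping over every function body and the main expression. A sketch (map_prog is an illustrative helper, not necessarily in our compiler):

```rust
pub struct FunDecl<E, Ann> {
    pub name: String,
    pub parameters: Vec<String>,
    pub body: E,
    pub ann: Ann,
}

pub struct Prog<E, Ann> {
    pub funs: Vec<FunDecl<E, Ann>>,
    pub main: E,
    pub ann: Ann,
}

// Apply an expression-level transformation to every body in a program.
fn map_prog<E1, E2, Ann>(p: Prog<E1, Ann>, f: impl Fn(E1) -> E2) -> Prog<E2, Ann> {
    Prog {
        funs: p
            .funs
            .into_iter()
            .map(|d| FunDecl {
                name: d.name,
                parameters: d.parameters,
                body: f(d.body),
                ann: d.ann,
            })
            .collect(),
        main: f(p.main),
        ann: p.ann,
    }
}
```

With something like this, sequentialization over programs is just map_prog applied to expression-level sequentialization.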

1.5 Semantics

Do Now!

What new semantic concerns do we have with providing our own definitions?

As soon as we introduce a new form of definition into our language, we need to consider scoping concerns. One possibility is to declare that earlier definitions can be used by later ones, but not vice versa. This possibility is relatively easy to implement, but restrictive: it prevents us from having mutually-recursive functions. Fortunately, because all our functions are statically defined, supporting mutual recursion is not all that difficult; the only complication is getting the well-formedness checks to work out correctly.

Exercise

Do so.
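The key idea is to build the function environment from every declaration before checking any body, so that a body may call functions declared either before or after it. A minimal sketch (simplified FunDecl; names illustrative):

```rust
use std::collections::HashMap;

// Simplified declaration: just what the function environment needs.
struct FunDecl {
    name: String,
    parameters: Vec<String>,
}

// Pass 1: collect every declared function's name and arity up front.
// Pass 2 (not shown) then checks each body, and the main expression,
// against this complete environment, enabling mutual recursion.
fn build_fun_env(funs: &[FunDecl]) -> HashMap<String, usize> {
    let mut env = HashMap::new();
    for f in funs {
        env.insert(f.name.clone(), f.parameters.len());
    }
    env
}
```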

Additionally, the bodies of function definitions need to consider scope as well.

def sum3(x, y, z):
  a + b + c
end

x + 5

This program refers to names that are not in scope: a, b and c are not in scope within sum3, and x is not in scope outside of it.

def f(x): x end

def f(y): y end

f(3)

Defining multiple functions with the same name should be an error: which function is intended to be called?

def f(x, x): x end

f(3, 4)

Having multiple arguments with the same name should likewise be an error: which argument should be returned here?
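Both of these checks reduce to finding duplicate names in a list: run the check once over the declared function names, and once over each declaration’s parameter list. A sketch (the helper name is illustrative):

```rust
use std::collections::HashSet;

// Report each name that occurs more than once, at its first repeat.
fn duplicates(names: &[String]) -> Vec<String> {
    let mut seen = HashSet::new();
    let mut dups = Vec::new();
    for n in names {
        // `insert` returns false when the name was already present.
        if !seen.insert(n.clone()) {
            dups.push(n.clone());
        }
    }
    dups
}
```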

1.6 Compilation

As we mentioned in Lecture 7, a function body needs to actively participate in the call stack in order to be usable. To do that, it must (1) save the previous base pointer RBP onto the stack, (2) copy the current stack pointer RSP into RBP, and (3) reserve space for its local variables by decrementing RSP. At the end of the function, it must undo those steps by (1) resetting RSP from RBP to release that space, (2) popping the saved base-pointer value back into RBP, and (3) returning to the caller.

• At the start of the function:

push RBP          ; save the (previous, caller's) RBP on the stack
push ...          ; save any other callee-save registers on the stack
mov RBP, RSP      ; make the current RSP the new RBP
sub RSP, 8*N      ; "allocate space" for N local variables (possibly with padding for alignment)

• At the end of the function

mov RSP, RBP      ; restore RSP to its value just after the prologue's pushes
pop ...           ; restore other callee-save registers, in reverse order of the pushes
pop RBP           ; restore the caller's RBP from the stack
ret               ; return to caller
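Ignoring extra callee-save registers, emitting this prologue and epilogue is mechanical once we know how many local-variable slots the body needs. A sketch, rounding the reservation up to an even number of 8-byte slots (one common way of maintaining 16-byte stack alignment; the helper names are illustrative):

```rust
// Bytes to reserve for n locals, padded to a multiple of 16.
fn stack_space(n_locals: usize) -> usize {
    let slots = if n_locals % 2 == 0 { n_locals } else { n_locals + 1 };
    8 * slots
}

// Prologue: save the caller's RBP, establish our frame, reserve locals.
fn prologue(n_locals: usize) -> Vec<String> {
    vec![
        "push RBP".to_string(),
        "mov RBP, RSP".to_string(),
        format!("sub RSP, {}", stack_space(n_locals)),
    ]
}

// Epilogue: release locals, restore the caller's RBP, and return.
fn epilogue() -> Vec<String> {
    vec![
        "mov RSP, RBP".to_string(),
        "pop RBP".to_string(),
        "ret".to_string(),
    ]
}
```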

Between that prologue and epilogue, the body of the function is basically just a normal expression, whose value winds up in RAX as always. The crucial difference is that the body of a function can use its arguments while evaluating: that’s the whole point of passing arguments to a function! This is similar in spirit to handling let-bound variables: we just need to keep track of more mappings from names to stack locations. However, the details are quite different: rather than looking on the callee’s side of RBP (stack address RBP - 8 * i contains the i-th local variable), we need to look on the caller’s side of RBP, where the arguments were pushed by our caller. There’s a bit of a gap, though: at RBP itself is the saved caller’s value of RBP, and at RBP + 8 is the return address of our function (assuming no additional callee-save registers were pushed). So

• In a stack-only calling convention, the zeroth argument to our function can be found at RBP + 16, and the i-th argument can be found at RBP + 8 * (i + 2).

• In the x64 calling convention, the first six arguments go in registers, the seventh argument can be found at RBP + 16, and in general the (i + 6)-th argument (still counting from zero) can be found at RBP + 8 * (i + 2).
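In code, the stack-only case is a one-liner; a hypothetical helper for the compiler’s variable-lookup pass:

```rust
// Offset from RBP (in bytes) of the i-th argument, counting from zero,
// in a stack-only calling convention: slot 0 holds the saved RBP,
// slot 1 the return address, and the arguments start at slot 2.
fn arg_offset(i: usize) -> usize {
    8 * (i + 2)
}
```

The compiler would then emit, say, mov RAX, [RBP + 16] to load the zeroth argument.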

Exercise

Complete the remaining stages of the pipeline: enhance sequentialize to work over programs; generate valid assembly output for each of our functions using the calling convention we discussed last time; and write test programs to confirm that the scoping of functions works properly. Can you support recursion? Mutual recursion?

2 Testing

Now that we have functions and builtins, especially ones that can produce output, we’re gaining the ability to write non-trivial test programs. At this point, it starts becoming more useful to write integration tests, namely entire programs in our language that we run through the entire compiler pipeline and execute. Unit tests still have their place, though: it’s very easy to make a tiny mistake somewhere in the compiler and produce bizarre or inexplicable output from a whole program, and narrowing down the cause of such an error is tricky; per-stage unit tests require careful attention to each phase of our pipeline, but let us pinpoint exactly where things went wrong.

Additionally, now that we’re manipulating the stack in earnest, we should be particularly careful that we conform to the calling convention. Valgrind is a tool designed to help check such issues, though it is unfortunately no longer available on Mac OS X. Once you’ve compiled output/foo.run to produce an executable, executing valgrind output/foo.run will run your program within a sandboxed environment that can check for common mistakes in the calling convention. A clean valgrind run will report no errors. Interpreting valgrind errors can be tricky, but a useful strategy (as always) is to minimize the input program until there’s hardly anything left and removing anything more makes the problem disappear. At that point, start diving into the compiler phases that influence that output, and write unit tests for them.