Assignment 6: Egg-eater: Heap Allocation
1 Language and Requirements
2 Syntax Additions and Semantics
3 Semantics and Representation of Arrays
3.1 Array Heap Layout
3.2 Accessing Array Contents
3.3 Setting up the runtime
3.4 Interaction with Existing Features
4 Implementing Closures
4.1 Changes to static and dynamic errors
4.2 Implementing Closures by translation
5 Recommended TODO List
6 List of Deliverables
7 Grading Standards
8 Submission

Assignment 6: Egg-eater: Heap Allocation

Due: Sun 11/19 at 11:59pm

git clone

In this assignment you’ll extend your compiler to implement arrays and closures, which are sort of like eggs laid in the heap, if you don’t think about it too much...

1 Language and Requirements

Egg-eater starts with the same semantics as Diamondback, and adds support for two types of heap-allocated objects: arrays and closures. The language features are:

The runtime system must add support for

Read through the whole assignment below carefully, then take note of the recommended TODO list at the bottom for a suggested order to tackle these pieces.

2 Syntax Additions and Semantics

Egg-eater adds two new kinds of heap-allocated "objects": arrays and closures.

For arrays, the new syntactic forms are array expressions, along with accessor expressions for getting and setting the contents of arrays, a unary primitive for checking if a value is an array, and a unary primitive for getting the length of an array. Array expressions are a series of zero or more comma-separated expressions enclosed in (square) brackets. An array access expression is one expression followed by another enclosed in square brackets, which specifies the index to be accessed. isarray is a primitive (like isnum and isbool) that checks if a value is an array. Finally, length is a primitive that produces the length of an array.

For closures we add a new syntactic form lambda x1,...: e end for anonymous function values and a unary primitive isfun for checking if a value is a function, and we now allow the function position in a call expression to be an arbitrary expression, not just an identifier.

‹expr›: ...
      | ‹array›
      | ‹expr› [ ‹expr› ]
      | ‹expr› [ ‹expr› ] := ‹expr›
      | ‹expr› ; ‹expr›
      | isarray ( ‹expr› )
      | length ( ‹expr› )
      | ‹expr› ( )
      | ‹expr› ( ‹exprs› )
      | lambda : ‹expr› end
      | lambda ‹params› : ‹expr› end

‹params›: IDENTIFIER
        | IDENTIFIER , ‹params›

‹exprs›: ‹expr›
       | ‹expr› , ‹exprs›

‹array›: [ ]
       | [ ‹exprs› ]

For example, we can create three arrays and access their fields:

let unit = [] in
let one = [1] in
let three = [3, 4, 5] in
three[0]

An array-set expression evaluates its subexpressions in left-to-right order, updates the array at the appropriate index, and returns the entire array value as its result. We can therefore chain array-set expressions together, and write

let three = [0, 0, 0] in
((three[0] := 1)[1] := 2)[2] := 3;
let pair = [0, 0] in
pair[0] := (three[1] := 10)
After running this, three will be [1, 10, 3] and pair will be [three, 0].

We add three array forms as Prim operations. Array literals [e1,...] are parsed as MakeArray, indexing e1[e2] is parsed as ArrayGet, and updating e1[e2] := e3 is parsed as ArraySet. Additionally, we have an IsArray primitive for checking if a dynamically typed value is an array. Finally, we also add a Semicolon case to the Exp type to represent sequencing without introducing a binding.

enum Prim {
  ...
  IsArray, // Unary
  MakeArray, // 0 or more arguments
  ArrayGet, // first arg is array, second is index
  ArraySet, // first arg is array, second is index, third is new value
}

enum Exp<Ann> {
  ...
    Semicolon {
        e1: Box<Exp<Ann>>,
        e2: Box<Exp<Ann>>,
        ann: Ann,
    },
}

We add a Semicolon form to SeqExp as well, though you may choose instead to desugar it into a Let.
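If you take the desugaring route, the idea is that e1 ; e2 becomes a Let whose binding is a fresh name the body never mentions. A minimal Rust sketch, using a simplified stand-in for the real annotated Exp&lt;Ann&gt; type (the #seq naming scheme is our own):

```rust
// Sketch of the desugaring option: e1 ; e2 becomes a Let whose binding is a
// fresh name the body never mentions. Exp here is a simplified stand-in for
// the real annotated Exp<Ann>, and the #seq naming scheme is our own.
#[derive(Debug, PartialEq)]
enum Exp {
    Num(i64),
    Let { name: String, bound: Box<Exp>, body: Box<Exp> },
    Semicolon { e1: Box<Exp>, e2: Box<Exp> },
}

fn desugar(e: Exp, fresh: &mut u32) -> Exp {
    match e {
        Exp::Semicolon { e1, e2 } => {
            *fresh += 1;
            Exp::Let {
                name: format!("#seq_{}", fresh), // '#' keeps the name unwritable by users
                bound: Box::new(desugar(*e1, fresh)),
                body: Box::new(desugar(*e2, fresh)),
            }
        }
        Exp::Let { name, bound, body } => Exp::Let {
            name,
            bound: Box::new(desugar(*bound, fresh)),
            body: Box::new(desugar(*body, fresh)),
        },
        other => other,
    }
}

fn main() {
    let mut fresh = 0;
    let prog = Exp::Semicolon {
        e1: Box::new(Exp::Num(1)),
        e2: Box::new(Exp::Num(2)),
    };
    let out = desugar(prog, &mut fresh);
    assert!(matches!(out, Exp::Let { .. }));
    println!("ok");
}
```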

For closures, we only need a few small changes to support the new forms produced by the parser, but we add several internal-only forms to simplify the implementation. First, we allow our Call form to have an arbitrary expression as the function, and add an IsFun primitive as well.

Then we have several internal-only forms: DirectCall and ClosureCall are used to distinguish calls that can be statically resolved (direct) and compiled as in Diamondback from those that must be implemented by calling a dynamically determined closure. Next, the ExternalCall form has been updated to allow the function pointer to be either statically determined (Label) or dynamically determined (Var). Finally, the CheckArityAndUntag, GetCode, and GetEnv forms are used to simplify the code generation for calling closures, and the MakeClosure form is used for creating closures.

enum Prim {
  ...
  IsFun, // Unary
  CheckArityAndUntag(usize), // Unary, input is arbitrary value
  GetCode, // Unary, input is untagged pointer to a closure
  GetEnv,  // Unary, input is untagged pointer to a closure
}

enum Exp<Ann> {
  ...
    // A call that may or may not require a closure
    Call(Box<Exp<Ann>>, Vec<Exp<Ann>>, Ann),

    // A call to a dynamically determined closure
    ClosureCall(Box<Exp<Ann>>, Vec<Exp<Ann>>, Ann),
    // A direct call to a statically known function definition
    DirectCall(String, Vec<Exp<Ann>>, Ann),

    // A local tail call to a statically known local function definition
    InternalTailCall(String, Vec<Exp<Ann>>, Ann),
    // A function call to either a statically known global function or
    // a dynamically determined code pointer
    ExternalCall {
        fun: VarOrLabel,
        args: Vec<Exp<Ann>>,
        is_tail: bool,
        ann: Ann,
    },
    MakeClosure {
        arity: usize,
        label: String,
        env: Box<Exp<Ann>>,
        ann: Ann,
    },
}

enum VarOrLabel {
    Var(String),
    Label(String),
}

We add similar forms to SeqExp. Note that though we have array and closure values, MakeArray and MakeClosure are not immediates as they perform some non-trivial execution steps: allocating memory. Their values at runtime will always be accessed through variables.

3 Semantics and Representation of Arrays

3.1 Array Heap Layout

Array expressions should evaluate their sub-expressions in order from left to right, and store the resulting values on the heap. We discussed several possible representations in class for laying out arrays on the heap; the one we recommend you use for this assignment is:

(Figure: an array on the heap, laid out as one word holding the element count followed by one word per element.)

That is, one word is used to store the count of the number of elements in the array, and the subsequent words are used to store the values themselves. Note that the count is an actual integer; it is not an encoded Egg-eater integer value.
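To make the layout concrete, here is a small Rust simulation (not part of the compiler): the heap is a vector of words and hp plays the role of r15; make_array and the shift-by-one number encoding are stand-ins for what your generated code does.

```rust
// Simulation of the recommended layout: the first word holds the raw element
// count, the following words hold the (encoded) element values. `hp` plays
// the role of r15. `make_array` is our own stand-in, not assignment code.
fn make_array(heap: &mut Vec<u64>, hp: &mut usize, elems: &[u64]) -> usize {
    let addr = *hp;
    heap[addr] = elems.len() as u64;   // the count is a raw integer, NOT encoded
    for (i, v) in elems.iter().enumerate() {
        heap[addr + 1 + i] = *v;       // elements are encoded Egg-eater values
    }
    *hp += 1 + elems.len();            // bump the free pointer past the array
    addr
}

fn main() {
    let mut heap = vec![0u64; 16];
    let mut hp = 0;
    // [3, 4, 5], with each number encoded by a one-bit left shift
    let a = make_array(&mut heap, &mut hp, &[3 << 1, 4 << 1, 5 << 1]);
    assert_eq!(heap[a], 3);           // raw length word
    assert_eq!(heap[a + 3], 5 << 1);  // element at index 2
    assert_eq!(hp, 4);                // next free word
    println!("ok");
}
```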

An array value is stored in variables as the address of the first word in the array’s memory, but with an additional 1 added to the value to act as a tag. So, for example, if the start address of the above memory were 0x0adadad0, the array value would be 0x0adadad1. With this change, we extend the set of tag bits to the following:

Visualized differently, the value layout is:

Bit pattern          Value type
-----------          ----------
0xWWWWWWW[bbb0]      Number
0xFFFFFFF[1111]      True
0x7FFFFFF[1111]      False
0xWWWWWWW[b001]      Array
0xWWWWWWW[b011]      Closure

Where W is a “wildcard” 4-bit nibble and b is a “wildcard” bit.
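These patterns can be read off as bit-mask tests. A Rust sketch of the corresponding checks; the helper names are our own, and numbers are assumed to be encoded by a one-bit left shift:

```rust
// Tag tests corresponding to the table above, as bit-mask checks.
// Helper names are our own; True/False use the full-width patterns shown.
fn is_number(v: u64) -> bool { v & 0b1 == 0 }
fn is_array(v: u64) -> bool { v & 0b111 == 0b001 }
fn is_closure(v: u64) -> bool { v & 0b111 == 0b011 }
fn is_bool(v: u64) -> bool { v & 0b1111 == 0b1111 }

fn main() {
    let num = 5u64 << 1;             // the number 5, encoded with a 0 low bit
    let arr = 0x0adadad0u64 | 0b001; // the tagged array pointer from the text
    let clo = 0x0adadad0u64 | 0b011; // a closure pointer gets tag 011
    let tru = u64::MAX;              // 0xFF...F
    assert!(is_number(num) && !is_array(num));
    assert!(is_array(arr) && !is_closure(arr));
    assert!(is_closure(clo) && !is_bool(clo));
    assert!(is_bool(tru) && !is_number(tru));
    println!("ok");
}
```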

3.2 Accessing Array Contents

In an array access expression, like

let t = [6, 7, 8, 9] in t[1]

The behavior should be:

  1. Evaluate the expression in array position (before the brackets), then the index expression (the one inside the brackets).

  2. Check that the array position’s value is actually an array, and signal an error containing "indexed into non-array" if not.

  3. Check that the indexing expression is a number. Signal an error containing "index not a number" if not.

  4. Check that the index number is a valid index for the array value, that is, between 0 and the stored number of elements in the array minus one. Signal an error containing "index out of bounds" if not.

  5. Evaluate to the array element at the specified index.

These same error messages apply also to setting the value of an array. Additionally, if, at runtime, length is performed on a non-array value, the error message should include "length called with non-array", analogous to similar previous error messages.
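The checks above can be sketched as an interpreter-style helper in Rust, operating on a simulated heap of words whose first byte address is 0, so a tagged array value is (8 * word_address) + 1. array_get and this setup are our own stand-ins, not the compiled code's exact form.

```rust
// Interpreter-style sketch of the array-access checks. The heap is simulated
// as a vector of words starting at byte address 0; `array_get` is a stand-in.
fn array_get(heap: &[u64], arr_v: u64, idx_v: u64) -> Result<u64, String> {
    if arr_v & 0b111 != 0b001 {
        return Err("indexed into non-array".to_string());
    }
    if idx_v & 0b1 != 0 {
        return Err("index not a number".to_string());
    }
    let addr = ((arr_v - 1) / 8) as usize; // untag, then bytes -> words
    let len = heap[addr];                  // raw count (not encoded)
    let idx = idx_v >> 1;                  // decode the index
    if idx >= len {
        // a negative encoded index decodes to a huge unsigned value here,
        // so this single comparison also rejects negative indices
        return Err("index out of bounds".to_string());
    }
    Ok(heap[addr + 1 + idx as usize])
}

fn main() {
    // heap holding [6, 7, 8, 9] at word address 0, so the tagged value is 0b001
    let heap = vec![4, 6 << 1, 7 << 1, 8 << 1, 9 << 1];
    assert_eq!(array_get(&heap, 0b001, 1 << 1), Ok(7 << 1)); // t[1] == 7
    assert!(array_get(&heap, 6 << 1, 0).unwrap_err().contains("non-array"));
    assert!(array_get(&heap, 0b001, 9 << 1).unwrap_err().contains("out of bounds"));
    println!("ok");
}
```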

You can do this with just rax, but it causes some pain. Feel free to use r8 and r9 as scratch registers as needed (for example, saving the index in r9 and using rax to store the address of the array). This can save a number of instructions. Note that we will generate code that doesn’t need to use r8 or r9 beyond the extent of this one expression, so there is no need to worry about saving or restoring the old value from r9 except in the compilation of the main expression.

You also may want to use an extended syntax for mov in order to combine these values for lookup. For example, this kind of arithmetic is allowed inside mov instructions:

mov rax, [rax + r9 * 8 + 8]

This would access the memory at the location of rax, offset by the value of r9 * 8 + 8. So if the value in r9 were, say, 2, this could be part of a scheme for accessing the element at index 2 of an array: r9 holds the index, the factor of 8 scales it to bytes, and the + 8 skips over the length word. To aid in this we have generalized the MemRef type to allow for these dynamically computed offsets:

struct MemRef {
    reg: Reg,
    offset: Offset,
}

enum Offset {
    Constant(i32),
    Computed { // reg * factor + constant
        reg: Reg,
        factor: i32,
        constant: i32,
    },
}

Neither R9 nor anything beyond the typical Offset::Constant is required to make this work, but you may find it simpler to compile using these. R8 and R9 are good for short-lived scratch registers because they are caller-save in the System V AMD 64 ABI, so you are free to use them without saving them, but know that their values do not persist across a call to e.g., print.
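For instance, the operand [rax + r9 * 8 + 8] corresponds to a Computed offset. A sketch of building (and rendering) it, where Reg is a simplified stand-in since the real asm.rs type has many more registers:

```rust
// Building the operand [rax + r9 * 8 + 8] with the Offset type above.
// Reg here is a simplified stand-in for the full register type in asm.rs.
#[derive(Debug, PartialEq)]
enum Reg { Rax, R9 }

struct MemRef { reg: Reg, offset: Offset }

enum Offset {
    Constant(i32),
    Computed { reg: Reg, factor: i32, constant: i32 },
}

fn main() {
    let operand = MemRef {
        reg: Reg::Rax,
        offset: Offset::Computed { reg: Reg::R9, factor: 8, constant: 8 },
    };
    // the way an instruction-printing pass might render it:
    let s = match &operand.offset {
        Offset::Constant(c) => format!("[{:?} + {}]", operand.reg, c),
        Offset::Computed { reg, factor, constant } =>
            format!("[{:?} + {:?} * {} + {}]", operand.reg, reg, factor, constant),
    };
    assert_eq!(s, "[Rax + R9 * 8 + 8]");
    println!("{}", s);
}
```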

3.3 Setting up the runtime

When the Rust stub calls into your generated assembly code, you need to initialize the heap so your allocation operates correctly. The register r15 has been designated as the heap pointer. To initialize the heap you should either use the technique from Lecture 10: Tuples and Memory Allocation where we construct an array in Rust and pass it in as an argument to start_here or you can construct an array in the data section of your assembly file. For instance if you start the file with

section .data
HEAP:    times 1024 dq 0

Then the label HEAP will be resolved to an address pointing to an array of 1024 QWORD values initialized to all 0. This is 8 kilobytes which should be enough to pass the autograder tests. Whichever approach you take, it is up to your code to initialize the heap pointer R15 to point to the beginning of this space and ensure that the value of R15 is always the address of the next block of free space (in increasing address order) in the provided block of memory.
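A sketch of the Lecture 10 alternative, where the Rust stub allocates the heap and passes its base address to the generated entry point; start_here below is a Rust stand-in for the real extern "C" assembly label, included only so the calling shape is visible and runnable:

```rust
// Sketch of the Rust-allocated-heap approach. `start_here` is a stand-in for
// the extern "C" assembly entry point; the real one is generated by your
// compiler and returns the program's answer.
fn start_here(heap_base: *mut u64) -> u64 {
    // the real generated prologue would move this argument (rdi) into r15
    heap_base as u64 // stand-in return value
}

fn main() {
    const HEAP_WORDS: usize = 1024;
    let mut heap = vec![0u64; HEAP_WORDS]; // 1024 zeroed QWORDs = 8 KiB
    let base = heap.as_mut_ptr();
    let answer = start_here(base);
    assert_eq!(answer, base as u64);
    println!("ok");
}
```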

Additionally, r15 is a callee-save register in the System V AMD 64 ABI. This is a sensible choice because it means we don’t need to save it when we call into Rust functions such as print. But this does mean that we need to save r15 to the stack upon entry and restore it before returning. This can all be accomplished by using the following implementation for start_here, assuming the main expression is compiled with the label main_exp:

start_here:
        push r15            ; save the original value in r15
        sub rsp, 8          ; padding to ensure the correct alignment
        lea r15, [rel HEAP] ; load the address of the HEAP into r15 using rip-relative addressing
        call main_exp       ; call into the actual code for the main expression of the program
        add rsp, 8          ; remove the padding
        pop r15             ; restore the original to r15
        ret

3.4 Interaction with Existing Features

Any time we add a new feature to a language, we need to consider its interactions with all the existing features.

We’ll take them one at a time.

4 Implementing Closures

4.1 Changes to static and dynamic errors

First, we need to consider how static and dynamic error messages are affected by making functions first-class values.

4.2 Implementing Closures by translation

At a high level, the core features of closures are a combination of heap-allocated objects and function calls. So when implementing closures, we will translate them into internally used forms for constructing and using closure objects as well as our pre-existing forms for function calls that we implemented in diamondback.

We break this down into several steps:

First, you should implement an optimization pass eliminate_closures<Ann> : &Exp<Ann> -> Exp<()> that translates each Call into either a DirectCall, when the function being called can be resolved to a statically known label, or a ClosureCall, when the function being called is a dynamically determined closure. If this pass is implemented correctly, valid Diamondback programs should be compiled with minimal performance hit. As an example, in the following program there are two function calls:

def f(x,y): ... in
let arr = [f] in
let r   = f(0,1) in
let g   = arr[0] in
g(r, 2)

The first call, f(0,1), is easily determined to be a call to a known function definition and so should be translated to a DirectCall. The second call, g(r, 2), is to a variable g that does not correspond to a known function definition and so should be translated to a ClosureCall¹.
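The decision this pass makes for each call can be sketched as a lookup against the set of statically known function names. resolve below is a simplified stand-in of our own; the real pass must also account for scoping and shadowing of function names:

```rust
use std::collections::HashSet;

// Sketch of the per-call decision in eliminate_closures: a function position
// naming a statically known definition becomes a DirectCall; anything else
// becomes a ClosureCall. `resolve` and `Call` are simplified stand-ins.
#[derive(Debug, PartialEq)]
enum Call { Direct(String), Closure(String) }

fn resolve(fun_pos: &str, known_funs: &HashSet<String>) -> Call {
    if known_funs.contains(fun_pos) {
        Call::Direct(fun_pos.to_string())
    } else {
        Call::Closure(fun_pos.to_string())
    }
}

fn main() {
    let known: HashSet<String> = HashSet::from(["f".to_string()]);
    assert_eq!(resolve("f", &known), Call::Direct("f".to_string()));  // f(0, 1)
    assert_eq!(resolve("g", &known), Call::Closure("g".to_string())); // g(r, 2)
    println!("ok");
}
```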

Next, you’ll need to update your lambda lifting procedure to handle lambda functions and mutually recursive closures. First, we need to lift not just functions that get non-tail calls but also any function definition that ends up being used as a closure. For the details of the updated translation, refer to the lecture and notes on closures. You should generate intermediate code that uses the MakeClosure, CheckArityAndUntag, GetCode and GetEnv internal forms that we have provided. Additionally, the function in an ExternalCall can now be either a dynamically determined variable or a statically resolved function name that will correspond to a label in code generation.

Finally, update your code generation to implement these new forms as well as the dynamically determined ExternalCalls. To support dynamically determined calls we have updated all jmp and call forms to take a register as an argument. Additionally, we have added a special Instr in asm.rs called RelativeLoadAddress(r, l), which is the instruction lea r, [rel l]. This uses RIP-relative addressing to load the address corresponding to a label into a register, and is the easiest way to do so that works on Mac OS X. This instruction will be useful for implementing MakeClosure.
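Putting the pieces together, a ClosureCall at runtime must check the arity, fetch the code pointer and environment from the closure record, and then call. A Rust sketch of that shape, using a fn pointer as a stand-in for a code label; the Closure struct, closure_call, and the error string are our own names, not the assignment's:

```rust
// Runtime shape of a ClosureCall: check arity, fetch code and environment,
// then call with the environment as an extra argument. A fn pointer stands
// in for a code label; these names are our own stand-ins.
type Code = fn(env: &[i64], args: &[i64]) -> i64;

struct Closure {
    arity: usize,  // checked by CheckArityAndUntag
    code: Code,    // fetched by GetCode
    env: Vec<i64>, // fetched by GetEnv
}

fn closure_call(c: &Closure, args: &[i64]) -> Result<i64, String> {
    if args.len() != c.arity {
        return Err("wrong number of arguments".to_string());
    }
    Ok((c.code)(&c.env, args)) // the env rides along as an extra argument
}

fn main() {
    // lambda x: x + captured end, where captured = 10 lives in the environment
    let code: Code = |env, args| env[0] + args[0];
    let c = Closure { arity: 1, code, env: vec![10] };
    assert_eq!(closure_call(&c, &[5]), Ok(15));
    assert!(closure_call(&c, &[1, 2]).is_err()); // arity mismatch
    println!("ok");
}
```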

5 Recommended TODO List

  1. Try implementing some interesting test cases using lists, binary trees, or other recursive datatypes in Egg-eater. Include one of these examples as interesting.egg in the examples/ directory.

  2. Port your existing code, adding panic! for the new cases (arrays, lambda) and changed cases (function calls).

  3. Write tests for the array features.

  4. Extend check_prog to handle arrays, and adapt it to correctly handle the changed error messages relating to functions.

  5. Extend your passes (uniquify, lambda_lift, sequentialize) to support arrays.

  6. Implement code generation for arrays.

  7. Implement printing arrays in Rust.

  8. Write tests for closures/lambdas.

  9. Extend lambda_lift to translate lambdas to uses of MakeClosure.

  10. Implement code generation for MakeClosure and ClosureCall.

  11. Extend your implementation to handle recursive closures. We discussed several methods for this in class (directly, Landin’s knot and the Y combinator) and you are free to use whichever you wish.

6 List of Deliverables

Again, please ensure cargo builds your code properly. The autograder will give you an automatic 0 if it cannot compile your code!

7 Grading Standards

For this assignment, you will be graded on

Note that the autograder has hidden tests worth 10% of the total autograder score so be sure to test your code thoroughly.

8 Submission

Wait! Please read the assignment again and verify that you have not forgotten anything!

Please submit your homework to gradescope by the above deadline.

¹ Notice that if we did some further simplifications to this program, we would be able to figure out that g will be f. This shows how different optimizations feed into each other.