Introduction to the Go compiler

cmd/compile contains the main packages that form the Go compiler. The compiler may be logically split into several phases, which we will briefly describe alongside the list of packages that contain their code.

You may sometimes hear the terms “front-end” and “back-end” when referring to the compiler. Roughly speaking, the front-end covers the first two phases listed here (parsing and type checking), while the back-end covers the last ones (SSA and machine code generation). A third term, “middle-end”, often refers to the IR-level work that happens in between.

Note that the go/* family of packages, such as go/parser and go/types, is mostly unused by the compiler. Since the compiler was initially written in C, the go/* packages were developed separately to enable writing tools that work with Go code, such as gofmt and vet. However, over time the compiler’s internal APIs have slowly evolved to be more familiar to users of the go/* packages.

It should be clarified that the name “gc” stands for “Go compiler”, and has little to do with uppercase “GC”, which stands for garbage collection.

1. Parsing

In the first phase of compilation, source code is tokenized (lexical analysis), parsed (syntax analysis), and a syntax tree is constructed for each source file.

Each syntax tree is an exact representation of the respective source file, with nodes corresponding to the various elements of the source such as expressions, declarations, and statements. The syntax tree also includes position information which is used for error reporting and the creation of debugging information.
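As an illustration, the public go/parser and go/token packages expose the same tokenize-then-parse model, although the compiler itself uses the internal cmd/compile/internal/syntax package. A minimal sketch, using only the standard library:

    package main

    import (
        "fmt"
        "go/ast"
        "go/parser"
        "go/token"
        "log"
    )

    func main() {
        src := "package p\n\nfunc add(a, b int) int { return a + b }\n"

        // The FileSet records position information for every token,
        // which is what error messages and debug info are built from.
        fset := token.NewFileSet()
        f, err := parser.ParseFile(fset, "p.go", src, 0)
        if err != nil {
            log.Fatal(err)
        }

        // Walk the syntax tree and report each function declaration.
        ast.Inspect(f, func(n ast.Node) bool {
            if decl, ok := n.(*ast.FuncDecl); ok {
                fmt.Printf("%s: func %s\n", fset.Position(decl.Pos()), decl.Name.Name)
            }
            return true
        })
    }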

2. Type checking

Type checking is carried out over the syntax tree by the types2 package, a port of go/types that uses the syntax package’s AST instead of go/ast.
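Because types2 is internal to the compiler, the easiest way to get a feel for this phase is through its public counterpart, go/types. A small sketch that type-checks a file and prints the type the checker inferred for each expression:

    package main

    import (
        "fmt"
        "go/ast"
        "go/importer"
        "go/parser"
        "go/token"
        "go/types"
        "log"
    )

    func main() {
        src := "package p\n\nconst k = 1 << 10\n\nvar x = k * 2.5\n"

        fset := token.NewFileSet()
        f, err := parser.ParseFile(fset, "p.go", src, 0)
        if err != nil {
            log.Fatal(err)
        }

        // Info.Types receives the type (and constant value, if any)
        // of every expression the checker visits.
        info := &types.Info{Types: make(map[ast.Expr]types.TypeAndValue)}
        conf := types.Config{Importer: importer.Default()}
        if _, err := conf.Check("p", fset, []*ast.File{f}, info); err != nil {
            log.Fatal(err)
        }
        for expr, tv := range info.Types {
            fmt.Printf("%-10s has type %s\n", types.ExprString(expr), tv.Type)
        }
    }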

3. IR construction (“noding”)

The compiler middle end uses its own AST definition and representation of Go types carried over from when it was written in C. All of its code is written in terms of these, so the next step after type checking is to convert the syntax and types2 representations to ir and types. This process is referred to as “noding.”

Noding uses a process called Unified IR, which builds the node representation from a serialized version of the type-checked code from step 2. Unified IR is also involved in the import/export of packages and in inlining.

4. Middle end

Several optimization passes are performed on the IR representation: dead code elimination, (early) devirtualization, function call inlining, and escape analysis.

The early dead code elimination pass is integrated into the unified IR writer phase.
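The effect of these passes can be observed directly: building with go build -gcflags=-m makes the compiler print its inlining and escape analysis decisions. A sketch of the kind of code that exercises them (the exact diagnostics vary between releases):

    package demo

    // A small leaf function like this is typically inlined into its
    // callers; -m reports something like "can inline double".
    func double(x int) int { return 2 * x }

    // The returned pointer outlives the call, so escape analysis must
    // heap-allocate v; -m reports something like "moved to heap: v".
    func leak() *int {
        v := double(21)
        return &v
    }

    // Here the pointer never leaves the function, so v can stay on
    // the stack and no heap allocation is needed.
    func local() int {
        v := 7
        p := &v
        return *p
    }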

5. Walk

The final pass over the IR representation is “walk,” which serves two purposes:

  1. It decomposes complex statements into individual, simpler statements, introducing temporary variables and respecting order of evaluation. This step is also referred to as “order.”

  2. It desugars higher-level Go constructs into more primitive ones. For example, switch statements are turned into binary search or jump tables, and operations on maps and channels are replaced with runtime calls.
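For instance, a two-result map read is rewritten during walk into a call to a runtime helper. Conceptually (the real rewrite operates on IR, not source, and helper signatures differ in detail):

    package demo

    // What the programmer writes:
    func lookup(m map[string]int, k string) (int, bool) {
        v, ok := m[k]
        return v, ok
    }

    // After walk, the map access is (roughly) a runtime call, as if
    // the body had been written as:
    //
    //     v, ok := runtime.mapaccess2(typeOfMap, m, &k)
    //
    // Channel operations get the same treatment: `ch <- x` becomes a
    // call to runtime.chansend1 and `<-ch` a call to runtime.chanrecv1.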

6. Generic SSA

In this phase, IR is converted into Static Single Assignment (SSA) form, a lower-level intermediate representation with specific properties that make it easier to implement optimizations and to eventually generate machine code from it.
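The result is easy to inspect: building with the environment variable GOSSAFUNC set to a function name (for example, GOSSAFUNC=abs go build) writes an ssa.html file showing that function after every SSA pass. For a function such as the one below, the generic SSA looks roughly like the sketch in the comment (opcode names and value numbers are illustrative):

    package demo

    // Building with GOSSAFUNC=abs writes ssa.html, showing this
    // function after every SSA pass.
    func abs(x int) int {
        if x < 0 {
            return -x
        }
        return x
    }

    // In SSA form every value is assigned exactly once, and the two
    // returns become separate blocks joined by control flow, roughly:
    //
    //     b1: v1 = Arg <int>
    //         v2 = Const64 <int> [0]
    //         v3 = Less64 v1 v2
    //         If v3 -> b2 b3
    //     b2: v4 = Neg64 v1
    //         Ret v4
    //     b3: Ret v1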

During this conversion, function intrinsics are applied. These are special functions that the compiler has been taught to replace with heavily optimized code on a case-by-case basis.
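Many of the intrinsics live in packages such as math/bits and sync/atomic. A sketch:

    package demo

    import "math/bits"

    // bits.OnesCount64 is intrinsified on several architectures: the
    // call is replaced by a population-count instruction (POPCNT on
    // amd64, when the CPU supports it) instead of the pure-Go fallback.
    func parity(x uint64) bool {
        return bits.OnesCount64(x)%2 == 1
    }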

Certain nodes are also lowered into simpler components during the AST to SSA conversion, so that the rest of the compiler can work with them. For instance, the copy builtin is replaced by memory moves, and range loops are rewritten into for loops. Some of these currently happen before the conversion to SSA due to historical reasons, but the long-term plan is to move all of them here.
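Conceptually, the range rewrite turns the first function below into something like the second (the actual transformation happens on the IR, and details such as loop variable handling differ between releases):

    package demo

    // A range loop over a slice such as:
    func sum(s []int) int {
        total := 0
        for _, v := range s {
            total += v
        }
        return total
    }

    // is rewritten into an ordinary for loop, roughly:
    func sumDesugared(s []int) int {
        total := 0
        for i, n := 0, len(s); i < n; i++ {
            v := s[i]
            total += v
        }
        return total
    }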

Then, a series of machine-independent passes and rules are applied. These do not concern any single computer architecture, and thus run on all GOARCH variants. These passes include dead code elimination, removal of unneeded nil checks, and removal of unused branches. The generic rewrite rules mainly concern expressions, such as replacing some expressions with constant values, and optimizing multiplications and float operations.
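A few of these simplifications, sketched as comments (the exact rule set changes between releases):

    package demo

    func mul(x int) int {
        return x * 8 // strength reduction: rewritten to x << 3
    }

    func deref(p *[4]int) int {
        a := p[0] // a nil check is emitted for the first dereference...
        b := p[1] // ...and proven redundant here, so it is removed
        return a + b
    }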

7. Generating machine code

The machine-dependent phase of the compiler begins with the “lower” pass, which rewrites generic values into their machine-specific variants. For example, on amd64, memory operands are possible, so many load-store operations may be combined.

Note that the lower pass runs all machine-specific rewrite rules, and thus it currently applies lots of optimizations too.
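As a hedged illustration of what lowering can do on amd64 (exact rules and register choices vary; go build -gcflags=-S prints the final assembly):

    package demo

    // On amd64, memory operands let the lower pass fold the load into
    // the add: the generic value `Add64 x (Load p)` can become a
    // single ADDQload value, emitted as one ADDQ instruction with a
    // memory operand rather than a separate load followed by an add.
    func addFromMem(p *int64, x int64) int64 {
        return *p + x
    }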

Once the SSA has been “lowered” and is more specific to the target architecture, the final code optimization passes are run. This includes yet another dead code elimination pass, moving values closer to their uses, the removal of local variables that are never read from, and register allocation.

Other important pieces of work done as part of this step include stack frame layout, which assigns stack offsets to local variables, and pointer liveness analysis, which computes which on-stack pointers are live at each GC safe point.

At the end of the SSA generation phase, Go functions have been transformed into a series of obj.Prog instructions. These are passed to the assembler (cmd/internal/obj), which turns them into machine code and writes out the final object file. The object file will also contain reflect data, export data, and debugging information.

7a. Export

In addition to writing a file of object code for the linker, the compiler also writes a file of “export data” for downstream compilation units. The export data file holds all the information computed during compilation of package P that may be needed when compiling a package Q that directly imports P. It includes type information for all exported declarations, IR for bodies of functions that are candidates for inlining, IR for bodies of generic functions that may be instantiated in another package, and a summary of the findings of escape analysis on function parameters.

The format of the export data file has gone through a number of iterations. Its current form is called “unified”, and it is a serialized representation of an object graph, with an index allowing lazy decoding of parts of the whole (since most imports are used to provide only a handful of symbols).

The GOROOT repository contains a reader and a writer for the unified format; it encodes from/decodes to the compiler’s IR. The golang.org/x/tools repository also provides a public API for an export data reader (using the go/types representation) that always supports the compiler’s current file format and a small number of historic versions. (It is used by x/tools/go/packages in modes that require type information but not type-annotated syntax.)
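A sketch of reading export data through that public API, golang.org/x/tools/go/gcexportdata (locating the file is best-effort, since module builds may keep it in the build cache rather than an install directory):

    package main

    import (
        "fmt"
        "go/token"
        "go/types"
        "log"
        "os"

        "golang.org/x/tools/go/gcexportdata"
    )

    func main() {
        // Locate the export data for a package. Find consults the
        // standard install locations, so this may come up empty.
        filename, path := gcexportdata.Find("fmt", "")
        if filename == "" {
            log.Fatal("no export data found for fmt")
        }
        f, err := os.Open(filename)
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        // Skip the object-file header to reach the export data.
        r, err := gcexportdata.NewReader(f)
        if err != nil {
            log.Fatal(err)
        }
        fset := token.NewFileSet()
        imports := make(map[string]*types.Package)
        pkg, err := gcexportdata.Read(r, fset, imports, path)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("loaded:", pkg.Path(), "with", len(pkg.Scope().Names()), "top-level names")
    }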

The x/tools repository also provides public APIs for reading and writing exported type information (but nothing more) using the older “indexed” format. (For example, gopls uses this version for its database of workspace information, which includes types.)

Export data usually provides a “deep” summary, so that compilation of package Q can read the export data files only for each direct import, and be assured that these provide all necessary information about declarations in indirect imports, such as the methods and struct fields of types referred to in P’s public API. Deep export data is simpler for build systems, since only one file is needed per direct dependency. However, it does have a tendency to grow as one gets higher up the import graph of a big repository: if there is a set of very commonly used types with a large API, nearly every package’s export data will include a copy. This problem motivated the “indexed” design, which allowed partial loading on demand. (gopls does less work than the compiler for each import and is thus more sensitive to export data overheads. For this reason, it uses “shallow” export data, in which indirect information is not recorded at all. This demands random access to the export data files of all dependencies, so is not suitable for distributed build systems.)

8. Tips

Getting Started

Testing your changes

Juggling compiler versions

Additional helpful tools

-gcflags and ‘go build’ vs. ‘go tool compile’

Further reading

To dig deeper into how the SSA package works, including its passes and rules, head to cmd/compile/internal/ssa/README.md.

Finally, if something in this README or the SSA README is unclear or if you have an idea for an improvement, feel free to leave a comment in issue 30074.