CLR Inside Out

IronPython and the Dynamic Language Runtime

Bill Chiles


IronPython Hello World
Launching Ipy.exe
Introduction to the ASTs
Looking at the IronPython AST
Looking at the DLR AST
Looking at Generated Code
Understanding Dynamic Sites
How DynamicSite Objects Work
The Big Payoff

The Microsoft® .NET Framework was designed to support a broad range of programming languages on a single runtime—the common language runtime (CLR). The CLR provides shared services to these languages, including garbage collection, just-in-time (JIT) compilation, a sandboxed security model, and support for tools integration. Sharing these features has two huge benefits for language implementers. It is easier to implement a language because much of the lower-level engineering work is already done. And languages running on the CLR can integrate seamlessly. Sharing libraries and frameworks allows languages new to the CLR to build on the work of others.

The CLR also includes support for dynamic languages, as IronPython 1.0 demonstrates. Dynamic programming languages deliver a powerful and productive experience for programmers, as James Schementi explained in the October 2006 installment of "CLR Inside Out." Now Microsoft is building the Dynamic Language Runtime (DLR), which adds to the CLR a set of services designed specifically to meet the needs of dynamic languages. The DLR adds functionality such as a shared dynamic type system, a standard hosting model, and support for generating fast dynamic code and fast symbol tables. With these additional features, it is much easier to build high-quality dynamic language implementations for .NET. Furthermore, these features enable dynamic languages built on the DLR to share libraries written in other dynamic languages or in CLR-based static languages.

The DLR lets you build the best experience for your favorite dynamic language. This means your experience coding in your language should feel like it always does. You should also benefit from excellent tools, great performance, and shared libraries. Because using the DLR means you can share standard functionality, you have more time to focus on the features that make a language unique. For example, you can avoid having to implement garbage collection, code generation, and method caching.

The DLR uses adaptive method caching to produce fast dynamic programs, and all language implementations on the DLR benefit from this shared work. Code runs faster by avoiding repeated searches of class precedence lists. It also avoids resolving overloads on .NET methods each time you invoke an operation on an object (for example, calling a method foo on some object o).

The source code for IronPython and the DLR is available on CodePlex under the Berkeley Software Distribution (BSD)-style Microsoft Permissive License (see the 2.0 alpha release). This is a very early release, and there is still significant work to be done. There are low-level mechanisms to design, performance enhancements to implement, and documentation to write. By the time you read this, there will be CodePlex projects available for the DLR code itself and for IronRuby, in addition to IronPython. Though we are working on DLR JScript® and VBX implementations, these may be available only in binary form.

IronPython Hello World

I'll walk through a simple Python program (a twist on Hello World) and discuss what happens when it executes in ipy.exe. The program I'll examine is

def yo(yourname):
   text = "hello, "
   return text + yourname

print yo("bill")

I've added something a little beyond the classic "print hello" functionality, so I can discuss more interesting aspects of the IronPython implementation. For instance, I'll be able to demonstrate name binding for identifier references and caching the meaning of "+" in the yo function.

I'm using IronPython 2.0 alpha, so take a few minutes now to download it so you can walk through it with me.

To begin, open the ironpython.sln file in Visual Studio® 2005 and build the solution with the Debug solution configuration. Then right-click the IronPythonConsole project and select the Debug tab in the left navigation bar. Set this project as the start project (first set of radio buttons under Start Action). Next, under Start Options, enter msdnmag.py as the command-line argument. Finally, copy the Python code listed above into a file called msdnmag.py in the \bin\debug\ directory that was created when you built the solution.

Launching Ipy.exe

Now let's see how ipy.exe starts up. The DLR provides some default console classes so that language implementers can easily get an interpreter running (the implementer still needs to write parsers and a runtime implementation for his language). The console classes handle input and output, command-line switches, and some shared diagnostic switches the DLR implements. This saves language implementers work in getting some first code running on the DLR, and it provides a testing tool.

To get started, open <installdir>\Src\IronPython\Compiler\Generation\PythonScriptCompiler.cs. Then set a breakpoint in the ParseFile function on the line "using (Parser parser = ...."

Use F5 (Debug | Start Debugging) to run the program. You'll first hit the breakpoint while parsing a code snippet that initializes the Python execution engine. The code snippet loads the Python site.py file for machine-specific initialization. So, how did you get here? If you look at the call stack, you will see that ipy.exe starts in PythonConsoleHost's Main function, which calls into shared DLR convenience code. Execution comes back into IronPython-specific code to process the command line, which causes the file to run.

The PythonConsoleHost class is very small. It has a very short Initialize method that sets the LanguageProvider (and therefore the execution engine) to be the IronPython implementation. PythonConsoleHost's Main method simply calls into the default ConsoleHost class provided by the DLR.

ConsoleHost is a helper class that allows language implementers to quickly build a cmd.exe-style interactive shell. The ConsoleHost class gets the CommandLine helper object from the LanguageProvider. When the ConsoleHost runs the CommandLine object, it calls the CommandLine object's Initialize method, and the code starts executing in the PythonCommandLine object.

In its Initialize method, the PythonCommandLine object sets up the Python path for loading modules, sets up some built-in modules, and imports IronPython's site.py file. This is all standard Python behavior. PythonCommandLine imports the file by calling IScriptEngine.Execute on the code snippet string, "import site".

You will first hit the breakpoint in the call to PythonScriptCompiler.ParseFile due to parsing "import site", but let's move on because msdnmag.py is not running yet.

Introduction to the ASTs

When ipy.exe executes, the IronPython implementation and the DLR work together to compile the code in the file. At a basic level, compilers are pipelines that start with a parser that produces a data structure representing the code. This data structure is an abstract syntax tree, or AST in compiler-speak. The compiler transforms the AST or creates new data structures as it goes through stages analyzing the code. Then, based on the resulting structures, the compiler generates machine code or intermediate language (IL) code for a virtual machine like the CLR.
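As a rough analogy (using CPython's standard library rather than the DLR, and porting the sample program to Python 3 syntax), you can watch the same pipeline stages directly:

```python
import ast

# The sample program, with print as a function for Python 3.
src = '''
def yo(yourname):
    text = "hello, "
    return text + yourname

print(yo("bill"))
'''

tree = ast.parse(src)                       # parser -> abstract syntax tree
code = compile(tree, "<msdnmag>", "exec")   # AST -> bytecode for the VM
exec(code)                                  # the virtual machine runs the code
```

Running this prints "hello, bill"; the parse, compile, and execute steps are the same pipeline the article describes, just with CPython bytecode in place of IL.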

IronPython code first generates an AST particular to IronPython, and then it maps this tree to a DLR AST. Some languages have their own intermediate tree so that they can perform analysis or support tools (code editors) over their ASTs. Tools need a tree that is close to source code that users have typed in an editor. Many languages may have ASTs that will be similar to the DLR's, but the DLR ASTs will have more information about semantics explicitly represented. You might think of the DLR AST as one of the various transformations that later stages of a compiler need.

It is interesting to note that the DLR lazily compiles portions of the ASTs when the code runs. This is similar to the way the CLR uses JIT compilation only when code runs. You pay a little more in performance the first time the code runs, but subsequent executions are fast. Later on, you'll see that pieces of the AST are not compiled until specific lines of code execute.

Looking at the IronPython AST

Let's get back to running the program so we can look at the IronPython AST. First, hit F5 (Debug | Continue) to stop at the breakpoint again when the IronPython engine starts parsing the contents of site.py to execute it. If you look in the Visual Studio Locals window, you can expand "cc", then "SourceUnit", and then look at either Name or DisplayName to see what ParseFile is processing.

Hit F5 again. We stopped the last time at this breakpoint. If you look in Locals now, you'll see that cc.SourceUnit.DisplayName is msdnmag.py. This is the file I added as a command-line argument to ipy.exe. In the Locals window, cc.SourceUnit.Name is "__main__" because Python names the primary executing module this way.

Now, hit F10 (Debug | Step Over) three times to step over the line

ast = parser.ParseFileInput()

Now you can look at the IronPython AST in the debugger, as shown in Figure 1.

Figure 1 IronPython AST in the Debugger


You can see the AST root node is a SuiteStatement, representing a series of statements, and it has a Statements member. Expanding the first statement (index zero), you see it holds a FunctionDefinition node. It has a Body member that is a SuiteStatement for the statements of the function's definition, and it has a Name member which is the name that will be bound to the resulting function object.
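CPython's trees differ in detail from IronPython's, but its standard ast module shows the same overall shape for this program: a module-level suite whose first statement is a function definition with a name and a body suite.

```python
import ast

src = '''
def yo(yourname):
    text = "hello, "
    return text + yourname
'''

tree = ast.parse(src)
func = tree.body[0]          # first statement in the module's suite
print(type(func).__name__)   # FunctionDef, like IronPython's FunctionDefinition
print(func.name)             # "yo", the name bound to the resulting function object
print(len(func.body))        # 2 statements in the function's body suite
```

You can call `ast.dump(tree)` on the same tree to see the full nested structure at once.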

Pictorially, with some details elided, the IronPython AST for the entire file looks like Figure 2.

Figure 2 IronPython Abstract Syntax Tree for msdnmag.py


The tree mixes two visual representations. For the most part, if an AST node has a member that points to another AST node, the picture shows an arrow pointing to the subnode, and the subnode is labeled with the type of the member in the parent node. For example, the FunctionDefinition node has a member of type SuiteStatement, and the next node is a SuiteStatement. Sometimes a node instead describes its subnodes using indentation and curly braces. I chose to show those nodes that way to simplify the diagram.

Looking at the DLR AST

IronPython is now ready to build the DLR AST so that the final stages of compilation can run. Below our breakpoint in ParseFile you can see the call to BindAndTransform, which transforms the IronPython AST into a DLR tree. During that process, IronPython resolves all identifiers so that their representation in the DLR tree points to the declaration/allocation information for the variables they represent. Of course, some identifiers need to compile to an AST node that represents searching for the value at run time; for example, an identifier could resolve to a module member added at run time or to a late-bound value supplied by a host via the module's dictionary.

There is a lot of detail in this section, perhaps of interest only if you're looking to port a language to the DLR. If you only look at the picture of the DLR tree, note there are MethodCallExpression nodes and ActionExpression nodes. MethodCallExpression nodes compile to calls to an IronPython runtime function that performs an operation. Every time this function gets called, it performs the same work given the arguments passed to it.

The ActionExpression node is particularly interesting because it represents an abstract operation or action. Abstract actions, for example, include fetching a member, indexing an object, calling an object, performing addition on objects, and so on. The DLR compiles ActionExpression nodes into dynamic call sites. Dynamic sites are a runtime mechanism that caches methods appropriate to specific combinations of action and argument types. These allow dynamic code to run faster with little or no static type information at compile time. I'll explain dynamic sites in more detail later.

Let's get back to stepping through the program's execution. Hit Shift+F11 (Debug | Step Out) twice to stop in SourceUnit.Compile, then use F10 (Step Over) to perform the assignment to the variable "block", which will hold the root of the DLR AST for the entire file.

Now we can look at the DLR AST. In the Visual Studio Locals window, expand the node called block, which is a CodeBlock AST node. This is the root node for the file of Python code. Part of translating a language's specific syntax tree to the DLR's AST is clearly designating where variables are declared (explicitly or implicitly) and which variables identifiers refer to. If you expand block's variables member, you'll see two variables declared in the module's scope, "__name__" and "yo". (Expand elements 0 and 1 to see the Variable object's name members.) The first variable is a bound name that Python requires and creates for modules. The second is the module variable that holds our function definition.

Before drilling into the body, take a quick look at a partial expansion of the DLR AST in Figure 3, which reflects a similar representation to the IronPython AST you saw earlier.

Figure 3 The DLR AST


Expand the Body property under the block local. Due to the Visual Studio debugger's display characteristics, you need to expand the first node under the body node to actually see the BlockStatement's contents. This DLR BlockStatement AST node corresponds to the IronPython AST's SuiteStatement node, and each represents a series of statements. Expand the statements property.

The first statement (index 0) is an EmptyStatement, which is an artifact of IronPython's translation to the DLR tree. IronPython allocates two statements in case there's a module documentation string, which would cause the creation of an assignment to __doc__. In this case the first statement remains as a placeholder EmptyStatement since there's no documentation string.

The second statement (index 1) is what's interesting because it is the BlockStatement for the rest of the file, which is the code I wrote. This BlockStatement contains an ExpressionStatement followed by a BlockStatement (discussed later). The ExpressionStatement node wraps a BoundAssignment node so that the resulting value of the assignment gets popped off the stack; Python assignments do not return values. The BoundAssignment has name and value members. They represent setting the local variable yo (local to the module the Python code executes in) to the result of a function call. The function call, represented by a MethodCallExpression, calls a runtime helper function that creates an IronPython function object. MethodCallExpression has members indicating that the call is to a method named MakeFunction and what arguments the call gets. One argument is a CodeBlockExpression AST node for the function's body, and another is an array of parameters the created function will take.
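CPython compiles a def statement in an analogous way: the bytecode builds a function object and then binds it to the name, much like the MakeFunction call feeding the BoundAssignment. You can see this with the standard dis module:

```python
import dis

src = '''
def yo(yourname):
    text = "hello, "
    return text + yourname
'''

# Disassemble the module-level code and collect the opcode names.
instructions = [i.opname for i in dis.get_instructions(compile(src, "<msdnmag>", "exec"))]
print("MAKE_FUNCTION" in instructions)   # True: builds the function object
print("STORE_NAME" in instructions)      # True: binds it to the module variable yo
```

This is only an analogy; the DLR emits IL via its AST rather than a bytecode of its own, but the "make a function object, then assign it" shape is the same.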

Expand the MethodCallExpression AST node, which is the value member of the BoundAssignment. If you look at the third element (index 2) of the Arguments property, you'll see the CodeBlockExpression for the body of yo. It has a Block property that is a CodeBlock. The CodeBlockExpression causes the CodeBlock it wraps to turn into a callable value that can be assigned to a module variable. Before we go into the body of the yo function's AST, look at Figure 4, which is a diagram of the entire DLR AST for msdnmag.py.

Figure 4 Entire DLR AST for msdnmag.py


There are a couple of notes about the diagram. As with the previous diagram, the tree mixes two visual representations. For the most part, if an AST node has a member that points to another AST node, the picture shows an arrow pointing to the subnode. Sometimes a node contains a description of the subnodes using indentation and curly braces instead of more arrows and boxes. The snapshot of the Locals window in Visual Studio shows the AST down to the MethodCallExpression node on the left side of the diagram. The first arrow is dotted because the picture skips the BlockStatement with the EmptyStatement. You can expand the data structures in the Locals window further to see the full structure of the tree, as shown in the diagram.

The CodeBlock node representing the body of the yo function has members for variables, parameters, and the body AST. This is similar to the CodeBlock that represented the entire file. As you will see later, the CodeBlock for the file turns into an Initialize method for the module. The yo function has one local variable, text, that you can see if you expand the variables member, expand element 0, and then expand the Variable object. Following a similar expansion, you can see the yo function has one parameter, yourname.

The body of the CodeBlock is a BlockStatement, which has two statements: an assignment and a return. As you saw above with the assignment to yo, the BlockStatement first has an ExpressionStatement that holds a BoundAssignment. This assignment represents setting text to "hello, ". The BoundAssignment object has a ConstantExpression value member and a Variable member. The Variable object describes text as a local variable and has a block member that points back to the CodeBlock as the scope of the local variable.

The second statement in BlockStatement is a ReturnStatement whose expression member is an ActionExpression. This AST node is particularly interesting as I explained earlier. This ActionExpression node has an action member that is a DoOperationAction, which represents an Add operation in this case. The ActionExpression has an arguments member with two elements that are BoundExpression objects. The first BoundExpression has a variable member that represents the local text, which was set to "hello, ". The second has a variable member that represents the parameter yourname.
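For comparison, CPython compiles that same + into a generic binary operation whose meaning is resolved anew at run time, which is exactly the repeated work the DLR's ActionExpression and dynamic sites are designed to avoid. The dis module shows it:

```python
import dis

def yo(yourname):
    text = "hello, "
    return text + yourname

# The + compiles to a BINARY_* instruction (BINARY_ADD on older CPython
# versions, BINARY_OP on 3.11 and later).
ops = [i.opname for i in dis.get_instructions(yo)]
print(any(op.startswith("BINARY") for op in ops))   # True
```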

Now let's go back up the tree to the BlockStatement that contains two ExpressionStatements. One is for the BoundAssignment of yo to a function object. The second ExpressionStatement also exists so that the compiler emits code to pop its expression's value off the stack. The ExpressionStatement has an expression member that is a MethodCallExpression.

The MethodCallExpression represents the call to IronPython's runtime function that implements the print statement. It has a method member that is a .NET RuntimeMethodInfo describing print. The MethodCallExpression has an arguments member that is an ActionExpression. As discussed above, this compiles to a dynamic site for fast calls. The ActionExpression has an action member that is a CallAction for the call to yo. This node also has an arguments member with two elements. One is a BoundExpression with a variable member representing that the expression is the value of the local yo (yo is local to the module in which it is defined). The other argument is a ConstantExpression with a value member that holds the string "bill".

That concludes our walk through the DLR AST. You can see how the IronPython tree directly represents the Python code you see. The DLR AST has more explicit information in it. It contains identifier binding information and explicit blocks to represent scopes. It distinguishes explicit runtime method invocations (the call to Print) from actions that compile to dynamic sites (the node for "text + yourname"). The DLR AST also contains nodes that force results of expressions to pop off the stack so that the code adheres to Python semantics.

Looking at Generated Code

Now that you've seen how the process starts up and performs the parsing, let's look at some generated code. The easiest way to do this is to use switches for ipy.exe and Lutz Roeder's Reflector program. The switches cause ipy.exe to generate static code (as opposed to Lightweight Code Generation, or dynamic methods) and save it to disk. Download Reflector and then invoke the following command line while in the \bin\debug\ directory where you built ironpython.sln:

ipy.exe -D -X:SaveAssemblies -X:StaticMethods msdnmag.py

You'll find two files in the directory afterwards, snippets1.dll and msdnmag.exe. Now launch Reflector on these files with this command line (filling in the absolute path as necessary):

C:\where\you\put\reflector.exe snippets1.dll,msdnmag.exe

You can look down the list of DLLs and EXEs to find msdnmag and snippets1. Expand msdnmag, msdnmag.exe, "{} -", and finally "__main__$mod_2". Now double-click the Initialize method. This is the body of the msdnmag.py file. You should see the disassembled source code from the IL that the DLR generated. If you have used Reflector before, you know that it is a great tool, but it also has a few flaws. It can generate extra temporary variables, for example, so you need to take the C# code you see with a grain of salt.

I won't dwell on the Initialize function, but you can see that yo is being set to the result of the call to MakeFunction, as you saw in the AST. You can see that Initialize then calls the runtime helper PythonOps.Print on the result of invoking yo. Notice the fragment "Call-Simple-1.Invoke". The Call-Simple-1 object is part of the method caching mechanism. I will explain the method caching further by exploring how it shows up in the yo function.

Let's look at the yo function. Double-click yo$1 in the left pane. You can see where it sets text to be "hello, " and where it returns the result of "DoOperation-Add-0.Invoke(text, yourname)". That is the essence of the code I wrote in Python. In a moment I will focus on DoOperation-Add-0, which is the object that caches method lookup results. I'll also look at some of the runtime-generated functions in snippets1 as I drill into DoOperation-Add-0.

Understanding Dynamic Sites

Dynamic sites allow dynamic language code to run fast. They manage the method caching for performing specific operations on specific types of objects. The dynamic sites mechanism the DLR uses is based on research and experience with tried-and-true dynamic language implementations. You can see objects in the generated code, such as Call-Simple-1 and DoOperation-Add-0, which are instances of types derived from DynamicSite&lt;Targ1, Targ2, Tresult&gt;. I'll briefly explain the caching theory here, but you can search the Web for "dynamic language method caching" or "polymorphic inline caching" for more information.

Dynamic language performance is hindered by the extra checks and searches that occur at each call site. Straightforward implementations have to repeatedly search class precedence lists for members and potentially resolve overloads on method argument types each time you execute a particular line of code. In an expression such as o.m(x, y) or x + y, dynamic languages need to check exactly what kind of object o is, what is m bound to for o, what type x is, what type y is, or what "+" means for the actual runtime type of x and y. In a statically typed language (or with enough type hints in the code and type inferencing), you can emit exactly the instructions or runtime function calls that are appropriate at each call site. You can do this because you know from the static types what is needed at compile time.

Dynamic languages provide great productivity enhancements and powerful terse expressions due to their dynamic capabilities. However, in practice code tends to execute on the same types of objects each time. This means you can improve performance by remembering the results of method searches the first time a section of code executes. For example, with x + y, if x and y are integers the first time that expression executes, we can remember a code sequence or exactly what runtime function performs addition given two integers. Then each time that expression executes, there is no search involved. The code just checks that x and y are integers again, and dispatches to the right code with no searching. The result can literally be reduced to inlined code generation with a couple of type checks and an add instruction, depending on the semantics of an operation and method caching mechanisms used.

How DynamicSite Objects Work

Static language compilers can select methods or emit specific code at compile time to perform operations such as addition, member fetching, and indexing. Since the DLR cannot always know at compile time what to emit, it emits code to call an Invoke method on a DynamicSite object. This object captures the operation and a delegate (called by Invoke) that contains the caching logic. This caching logic is updated each time the delegate sees a new combination of argument types at run time. (Note that delegates actually wrap pointers to functions, but, for brevity, I use delegate as if it is the function.) That's the simple story; next up is a more detailed look.

The delegate starts out with one call in it to UpdateBindingAndInvoke. The first time a call site executes (let's just consider x + y), UpdateBindingAndInvoke queries the arguments for an Add operation implementation for the type of x and the type of y. If it gets one, it generates a new delegate that encodes a check for x and y having the same types you just saw. In a later call, if x and y have the same types, the new delegate code calls the implementation that we got. If x or y have a different type in a later call, the delegate falls through all the cache checks and calls UpdateBindingAndInvoke again, which generates a new delegate. The newest delegate's code will capture the first tests of x and y as well as the new tests, calling whichever target implementation matches the types.
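To make that flow concrete, here is a toy Python sketch of the caching logic (the names and the dictionary of implementations are illustrative; the DLR's real sites generate typed delegates rather than walking a list):

```python
def add_strings(x, y):   # stands in for a runtime helper like StringOps.Add
    return x + y

def add_ints(x, y):      # stands in for a runtime helper like Int32Ops.Add
    return x + y

class DynamicSite:
    """Toy polymorphic inline cache for an abstract Add action."""

    IMPLEMENTATIONS = {(str, str): add_strings, (int, int): add_ints}

    def __init__(self):
        self._cache = []   # cached ((type, type), target) pairs

    def invoke(self, x, y):
        for (tx, ty), target in self._cache:   # the generated type tests
            if type(x) is tx and type(y) is ty:
                return target(x, y)            # cache hit: no search
        return self._update_binding_and_invoke(x, y)

    def _update_binding_and_invoke(self, x, y):
        key = (type(x), type(y))
        target = self.IMPLEMENTATIONS[key]     # the one-time "search"
        self._cache.insert(0, (key, target))   # most recent types tested first
        return target(x, y)

site = DynamicSite()
print(site.invoke("hello, ", "bill"))   # first call: search, then cache
print(site.invoke(1, 2))                # new type pair: the cache grows
print(site.invoke("a", "b"))            # string path now hits the cache
```

The second and third calls mirror the experiment later in the article: each new combination of argument types adds a fast path, and only unseen combinations fall back to the slow search.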

It's time to take a look at the DoOperation-Add-0 delegate that the DLR generated from running the code in msdnmag.py. Go back to Reflector, and this time expand the snippets1, snippets1.dll, "{} -", and "Type$_stub_$4" nodes in the left pane. Click on the Handle method, and in the right pane you'll see the code in Figure 5.

Figure 5 Disassembled "+" Cache

public static object Handle(object[] args,
    FastDynamicSite<object, object, object> site1,
    object obj1, object obj2)
{
    if (((obj1 != null) && (obj1.GetType() == typeof(string)))
        && ((obj2 != null) && (obj2.GetType() == typeof(string))))
    {
        string text;
        string text1 = text = StringOps.Add(
            Converter.ConvertToString(obj1),
            Converter.ConvertToString(obj2));
        return text;
    }
    return site1.UpdateBindingAndInvoke(obj1, obj2);
}

This is the disassembled code from the IL that the DLR generated for the DoOperation-Add-0 delegate after executing "text + yourname". If we called the yo function again, this delegate would execute. This code has some performance issues you might spot, but more on that later. You can see that the code tests whether the two arguments are strings and calls the IronPython runtime implementation of addition given two strings.

If one of the arguments is not a string, then the delegate calls UpdateBindingAndInvoke to generate a new delegate. The new delegate would replace the current one (above) so that the next time DoOperation-Add-0.Invoke runs, it will call the updated delegate. I changed the code so that the "text + yourname" executes on two integers for a second call to yo. I then found the second delegate that the DLR generated after the second call. It looked like Figure 6.

Figure 6 Disassembled "+" Cache Two

public static object Handle(object[] args,
    FastDynamicSite<object, object, object> site1,
    object obj1, object obj2)
{
    if (((obj1 != null) && (obj1.GetType() == typeof(int)))
        && ((obj2 != null) && (obj2.GetType() == typeof(int))))
    {
        return Int32Ops.Add(Converter.ConvertToInt32(obj1),
            Converter.ConvertToInt32(obj2));
    }
    if (((obj1 != null) && (obj1.GetType() == typeof(string)))
        && ((obj2 != null) && (obj2.GetType() == typeof(string))))
    {
        return StringOps.Add(Converter.ConvertToString(obj1),
            Converter.ConvertToString(obj2));
    }
    return site1.UpdateBindingAndInvoke(obj1, obj2);
}

I cleaned up the bogus temporary variables that Reflector inserted. If you do the extra experiment I did, you'll get slightly different code. You'll see that now the code tests if the arguments are integers. If they are, it calls the runtime helper function that implements Add on integers. The same code that I had before follows the integer test and target code. I chose to order the tests with the most recent types tested first. With profiled runs of the program, you could imagine generating all the right arguments' tests in the optimal order. Of course, the delegate falls back to calling UpdateBindingAndInvoke in case the arguments have a new type combination that has not been seen before.

You will probably notice performance issues with the code. The DLR could generate better code today, but as with any alpha release, the DLR makes trade-offs on functionality and code generation. You may notice that the code tests for a type, string, and then converts the argument to a string. There are two problems with this. The first is that the .NET Framework 2.0 JIT does not propagate type information from "if" tests to the consequent code block. Since the Add implementation is a statically typed method, the arguments need to show up as strings to it. The second problem is the DLR-generated code does a slow call to a runtime conversion function. The DLR could have emitted a cast operation here.

The last performance issue to mention is that the DLR today does not structure tests to avoid repeating the same one. In the updated delegate that knows how to quickly add two integers or two strings, the code tests the arguments for null multiple times as it moves through the tests, even though once would suffice. When UpdateBindingAndInvoke searches for operation implementations, it actually asks the objects how to do the operation. It calls on the objects to provide a rule for how to perform the operation represented by the DynamicSite. The rule contains an AST for the test to perform and an AST for the target code to execute if the test is true. The DLR could perform tree transformations across the tests to optimize the delegate's code even more.

As a final point, because dynamic sites are also used for fetching members of objects, the DLR has to handle changes to instances and dynamic types. The DLR cannot assume for o.m() that if o points to the same object, then it knows exactly which m to invoke. Some languages allow types to change at run time, and they can replace members, essentially overriding a member in an instance. The DLR uses a global version counter that it increments for new types and changes to types. There are various trip wires that cause a type's version to change, such as changing members. Any DynamicSite cache that refers to an object whose type's version has changed will fail the cache check. Then the DynamicSite will search again and update the delegate.
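A toy sketch of that invalidation scheme might look like the following (illustrative names, and simplified to a per-type version counter rather than the DLR's global one):

```python
class DynamicType:
    """Stands in for a mutable dynamic type with a version 'trip wire'."""

    def __init__(self, members):
        self.members = dict(members)
        self.version = 0

    def set_member(self, name, value):
        self.members[name] = value
        self.version += 1            # any mutation bumps the version

class MemberSite:
    """Caches one (type, version) -> member lookup."""

    def __init__(self, name):
        self.name = name
        self._cache = None           # (dynamic_type, version, value)

    def get_member(self, t):
        if self._cache is not None:
            cached_type, cached_version, value = self._cache
            if cached_type is t and cached_version == t.version:
                return value         # cache valid: no dictionary search
        value = t.members[self.name] # slow path: search and re-cache
        self._cache = (t, t.version, value)
        return value

t = DynamicType({"m": "original"})
site = MemberSite("m")
print(site.get_member(t))            # search once, then cache
t.set_member("m", "replaced")        # version bump invalidates the cache
print(site.get_member(t))            # cache fails, re-search finds "replaced"
```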

The Big Payoff

The generated DynamicSite types are generic types. Using generic types doesn't help very much in the example Python code in msdnmag.py. However, in an expression such as "if x < 1:" you can generate a DynamicSite&lt;Object, Int, Bool&gt;. The site's Invoke method will be statically typed to take one argument as an integer and return a Boolean. The delegate inside the dynamic site is also statically typed this way, which makes the resulting code faster. The delegate only has to test one argument in its caching, and as the stack unwinds, the JIT knows a Boolean value is flowing to the right location. This eliminates any casts or checks surrounding the if test.

Some dynamic languages have optional explicit typing. Even if they don't, implementations can infer types from calls to .NET static methods, from literal constants, and so on. They can propagate this type information over lifetimes of variables, branches of conditionals, and the like. When languages have type information, they can generate DLR AST that in turn compiles to DynamicSite types with full type information. That can result in inline code generation for the Invoke method call and target code call in the cache.

Today, the DLR generates generic DynamicSite types with six or fewer generic type parameters. The first N-1 parameters represent the types of arguments, and the Nth represents the result type of the operation. After six types, you have to generate AST that just performs method calls to runtime helper functions. Those functions have to do the slow work to compute everything at run time. In the future, the DLR will allow you to use generic tuples to provide large numbers of statically typed arguments. We might also increase the number from six as we figure out the sweet spot for most programs.

You should have a pretty good idea now of how to explore both language-specific ASTs and DLR ASTs. With some basic information on simple trees, you can examine the way certain languages have been built on the DLR. If you are trying to implement a language on the DLR yourself, you can use this technique. If you're not, it might be fun just to look at what is happening under the covers.

You also got to see some of the shared infrastructure the DLR offers to language implementers. There is a lot to leverage from the CLR and the DLR—garbage collection, JIT compilation, dynamic sites, common dynamic type model, and so forth. Implementing languages on the .NET runtime produces pretty good results now—and it will just keep getting better. Version 1 of the DLR will be great. Soon implementing dynamic languages on .NET will be amazingly easy and the results will be excellent.

Send your questions and comments to

Bill Chiles is a Senior Program Manager at Microsoft who has worked on dynamic language projects (CMU Common Lisp, Dylan) and tools for most of his career. Bill spent nine years on the Visual Studio Core Technologies team and now works on IronPython and the Dynamic Language Runtime.