
November 2009

Volume 24 Number 11

CLR INSIDE OUT - Exploring the .NET Framework 4 Security Model

By Andrew Dai | November 2009

The .NET Framework 4 introduces many updates to the .NET security model that make it much easier to host, secure and provide services to partially trusted code. We’ve overhauled the complicated Code Access Security (CAS) policy system, which was powerful but difficult to use and even more difficult to get right. We’ve also improved upon the Security Transparency model, bringing many of the Silverlight improvements in security enforcement (which I talked about last October) to the desktop framework. Finally, we’ve introduced some new features that give host and library developers more flexibility over where their services are exposed. With these changes, the .NET Framework 4 boasts a simpler, improved security model that makes it easy for hosts to sandbox code and for libraries to safely expose services.

A Background in .NET Framework Security

Before diving into any particular feature, it helps to have a little background on how security in the .NET Framework works. Partially trusted code is restricted by the permissions it has, and different APIs will require different permissions to successfully be called. The goal of CAS is to make sure that untrusted code runs with appropriate permissions and can’t do anything beyond its permissions without authorization.

We can think of the .NET security model as comprising three parts. Specific improvements have been made in each area, and the rest of this article is organized according to these three fundamental sections:

  • Policy—Security policy determines which permissions to give a particular untrusted assembly or application. Managed code has Evidence objects associated with it, which can describe where the code is loaded from, who published it and so on. Evidence can be used to determine which permissions are appropriate; the result is called a permission grant set. The .NET Framework has traditionally used CAS policy as a machine-wide mechanism to govern this. As mentioned before, CAS policy has been overhauled in favor of giving hosts more flexibility over their own security policy and unhosted code parity with native code.
  • Sandboxing—Sandboxing is the actual process of restricting assemblies or applications to a given permission grant set. The preferred way to sandbox is to create a sandboxing application domain containing a permission grant set for loaded assemblies and an exemption list for specific library assemblies (these are given full trust). With the obsoletion of CAS policy, partial trust code always gets sandboxed this way in .NET Framework 4.
  • Enforcement—Enforcement refers to the mechanism that keeps untrusted code restricted to its sandbox. Proper use of enforcement APIs prevents one untrusted assembly from simply calling an API in a different, more-trusted assembly and exercising greater permissions that way. It also allows host and library developers to expose controlled, limited access to elevated behavior and provide meaningful services to partially trusted code. The Level 2 Security Transparency model makes it much easier to safely do this.

The .NET security model has always been of particular importance to host and library developers (who often go hand in hand). Examples of such hosts are ASP.NET and SQL CLR, which both host managed code within controlled environments and restricted contexts. When a host like these wants to load a partially trusted assembly, it creates a sandboxed application domain with the appropriate permission grant set. The assembly is then loaded into this sandboxed domain. The host also provides library APIs that are fully trusted but callable from the hosted code. These libraries are also loaded into the sandboxed domain, but are explicitly placed on the exemption list mentioned earlier. They rely on the .NET Framework’s enforcement mechanisms to ensure that access to their elevated abilities is tightly controlled.

For most managed application developers, this is all magic that is happening at the framework level—even developers writing code that will be run in a sandbox don’t need to know all the details of how the security model works. The framework ensures sandboxed code is limited to using APIs and abilities that the host provides. The .NET security model and CAS have long been the realm of enterprise administrators and host and library developers; for them, we’ve made things easier than ever.

Security Policy

CAS policy has been provided since the beginning of the .NET Framework to give machine and enterprise administrators a way to fine-tune what the runtime considered trusted or untrusted. While CAS policy was very powerful and allowed for very granular controls, it was extremely difficult to get right and could hinder more than help. Machine administrators could lock certain applications out of needed permissions (described in the next major section, Sandboxing), and many people wondered why their applications suddenly stopped working once they decided to put them on a network share. Furthermore, CAS policy settings didn’t move forward from one version of the runtime to another, so the elaborate custom CAS policy that someone set up in .NET Framework 1.1 had to be redone by hand for .NET Framework 2.0.

Security policy can be split into two scenarios: security policy for hosted code and security policy for the machine or enterprise. Regarding machine policy, the Common Language Runtime security team decided that the runtime is the wrong place to govern it, as native code is obviously not subject to its restrictions. While it makes sense for hosts to be able to determine what their hosted code can do, unhosted .exes that are simply clicked or run from the command line should behave like their native counterparts (especially since they look identical to the users running them).

The correct place for global security policy is at the operating system level, where such a policy would apply to native and managed code equally. Therefore, we’re encouraging machine administrators to look at solutions like Windows Software Restriction Policies, and we’re disabling machine-wide CAS policy resolution by default. The other scenario, security policy for hosted code, is still very much valid in the managed code world. Host security policy is now easier to govern, as it will no longer clash with an arbitrary machine policy.

What This Means for You

For one, all unhosted managed code runs as fully trusted by default. If you run a .exe from your hard drive or a network share, your app will have all the abilities a native app running from the same place would have. Hosted code, however, is still subject to the security decisions of the host. (Note that all the ways that code can arrive via the Internet are hosted scenarios—ClickOnce applications, for example—so this does not mean that code running over the Internet is fully trusted.)

For many applications, these changes are mostly in the background and will have no perceived effect. Those that are affected by the change may run into two issues. The first is that certain CAS policy-related APIs are deprecated, many having to do with assembly loads (so read on if you do this at all). Second, and affecting fewer people (primarily hosts), will be the fact that heterogeneous application domains (which are described in the Sandboxing section) aren’t available by default.

But I’m not doing any of this! How do I just make it work?

Perhaps you’ve run into an error or obsoletion message that looked something like this:

This method [explicitly/implicitly] uses CAS policy, which has been obsoleted by the .NET Framework. In order to enable CAS policy for compatibility reasons, please use the NetFx40_LegacySecurityPolicy configuration switch. Please see [link to MSDN documentation] for more information.

For compatibility reasons, we’ve provided a configuration switch that allows a process to enable CAS policy resolution. You may enable CAS policy by placing the following in your project’s app.config file (the switch is an element under the runtime section):

      <configuration>
        <runtime>
          <!-- enables legacy CAS policy for this process -->
          <NetFx40_LegacySecurityPolicy enabled="true" />
        </runtime>
      </configuration>

The following section describes where to start looking for migration, if the exception is being thrown from your own code. If it isn’t, then the configuration switch is the way to go and the following section shouldn’t apply directly to you.

Affected APIs

Affected APIs can be divided into two groups: those that are explicitly using CAS policy and those that are implicitly using it. Explicit usages are obvious—they tend to reside in the System.Security.Policy.SecurityManager class and look something like SecurityManager.ResolvePolicy. These APIs directly call or modify the machine’s CAS policy settings, and they have all been deprecated.

Implicit usages are less obvious—these tend to be assembly loads or application domain creations that take evidence. CAS policy is resolved on this evidence, and the assembly is loaded with the resulting permission grant set. Since CAS policy is off by default, it doesn’t make sense to try to resolve it on this evidence. An example of such an API is Assembly.Load(AssemblyName assemblyRef, Evidence assemblySecurity).

There are a couple of reasons why such an API would be called:

  1. Sandboxing—Perhaps you know that calling that Assembly.Load overload with zone evidence from the Internet will result in that assembly being loaded with the Internet named permission set (unless, that is, an administrator changed that evidence mapping for this particular machine or user!).
  2. Other parameters on the overload—Maybe you just wanted to get to a specific parameter that existed only on this overload. In this case, you might’ve simply passed null or Assembly.GetExecutingAssembly().Evidence for the evidence parameter.

If you’re trying to sandbox, the Sandboxing section describes how to create a sandboxed application domain restricted to the Internet named permission set. Your assembly could then be loaded into that domain and be guaranteed to have the permissions you intended (that is, not subject to the whims of an administrator).

In the second scenario, we’ve added overloads to each of these APIs that expose all necessary parameters but don’t expose an evidence parameter. Migration is a simple matter of cutting out the evidence argument to your calls. (Note that passing null Evidence into an obsolete API still works as well, as it doesn’t result in CAS policy evaluation.)
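To make the migration concrete, here is a hedged before-and-after sketch (the method and variable names are hypothetical, not framework APIs):

```csharp
using System.Reflection;

class LoadMigrationSketch
{
    static Assembly LoadPlugin(AssemblyName assemblyRef)
    {
        // Obsolete in .NET 4 (implicitly resolved CAS policy on the evidence):
        //   Assembly.Load(assemblyRef, someEvidence);

        // Equivalent .NET 4 call: simply drop the Evidence argument. The
        // assembly receives the grant set of the application domain it is
        // loaded into.
        return Assembly.Load(assemblyRef);
    }
}
```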

One additional thing to note is that if you’re doing an assembly load from a remote location (that is, Assembly.LoadFrom("https://...")), you’ll initially get a FileLoadException unless the following configuration switch is set. This was done because this call would’ve sandboxed the assembly in the past. With CAS policy gone, it is fully trusted!

      <configuration>
        <runtime>
          <!-- WARNING: will load assemblies from remote locations as fully
             trusted! -->
          <loadFromRemoteSources enabled="true" />
        </runtime>
      </configuration>

Another way to do this, without turning this switch on for the whole process, is to use the new Assembly.UnsafeLoadFrom API, which acts like LoadFrom with the switch set. This is useful if you only want to enable remote loads in certain places or you don’t own the primary application.
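As a sketch, the per-call opt-in looks like this (the URL is a placeholder):

```csharp
using System.Reflection;

class RemoteLoadSketch
{
    static Assembly LoadRemoteTool()
    {
        // Without the loadFromRemoteSources switch, this throws
        // FileLoadException on .NET 4:
        //   Assembly.LoadFrom("https://example.com/tool.dll");

        // UnsafeLoadFrom opts in for this one call only. The assembly is
        // loaded with full trust, so use it only for sources you trust.
        return Assembly.UnsafeLoadFrom("https://example.com/tool.dll");
    }
}
```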

With machine-wide CAS policy out of the picture, all examination of assembly evidence and decisions regarding appropriate permission sets is left up to hosts of managed code. Without a complicated system on top of it interfering with its security decisions (aside from any OS security policy), a host is free to assign its own permissions. Now it’s time to assign those permissions to partial trust assemblies.

Sandboxing

Via a host’s security policy, we can determine the correct permission grant set to give to a partial trust assembly. Now we need a simple, effective way to load that assembly into an environment that is restricted to that particular grant set. Sandboxing, particularly using the simple sandboxing CreateDomain overload, does just that.

Sandboxing in the Past

With the old CAS policy model, it was possible to create a heterogeneous application domain, where every assembly in the domain had its own permission set. An assembly load with Internet zone evidence could result in two or more assemblies at different partial trust levels being loaded into the same domain as the full trust assembly doing the loading. Furthermore, the application domain could have its own evidence, giving it its own permission set.

There are several issues with this model:

  • The permission set granted to an assembly is dependent on CAS policy, as several policy levels are intersected to compute the final permission set. Therefore, it is possible to end up with fewer permissions than intended.
  • Similar to the previous point, evidence evaluation on an assembly is done by CAS policy, which could differ across machines, users and even versions of the runtime (CAS policy settings didn’t move forward with new versions of the runtime). Therefore, it wasn’t always obvious what permission grant set an assembly was getting.
  • Partial trust assemblies are not usually examined for security hardening, making “middle trust” assemblies vulnerable to “lowest trust” assemblies. Assemblies are freely and easily able to call each other, so having many of them with different abilities becomes problematic from a security perspective. Can someone be certain that every combination of calls from assemblies at different trust levels is secure? Is it absolutely safe to cache information from a middle trust layer?

Because of these issues, we introduced the concept of a homogeneous application domain, which contains only two permission grant sets (partial trust and full trust) and is extremely simple to create and reason about. Homogeneous domains, and how to create them, are described later in this section.

Another popular mechanism for sandboxing was the use of PermitOnly and Deny, which are stack walk modifiers that list specific allowed permissions (and nothing more) and disallow specific permissions, respectively. It seemed useful to say, “I only want callers with permissions x and y to be able to call this API,” or, “as an API, I want to deny permission to all my callers.” However, these modifiers did not actually change the permission grant set of a particular assembly, which meant that they could be asserted away because all they did was intercept demands. An example of this in action is shown in Figure 1.

Figure 1 A Call Stack Representing an Attempt at Sandboxing with Deny

Without the red Assert, the demand hits the Deny and the stack walk is terminated. When the red Assert is active, however, the Deny is never hit, as Untrusted has asserted the demand away. (Notes: Call stack is growing down. APIs do not represent actual APIs in the framework.) For this reason, Deny is deprecated in the .NET Framework 4, because using it is always a security hole (PermitOnly is still around because it can be legitimately used in a few corner cases, but is generally discouraged). Note that it can be reactivated using the NetFx40_LegacySecurityPolicy switch, mentioned in the policy section above.
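A minimal sketch of the hole Figure 1 describes, with hypothetical method names (Deny produces obsoletion warnings when compiled against .NET Framework 4):

```csharp
using System.IO;
using System.Security;
using System.Security.Permissions;

class DenySandboxAttempt
{
    static void TrustedHostMethod()
    {
        // Attempted "sandbox": deny file access to everything downstream.
        new FileIOPermission(PermissionState.Unrestricted).Deny();
        UntrustedCallback();
        CodeAccessPermission.RevertDeny();
    }

    static void UntrustedCallback()
    {
        // If this code's actual grant set includes FileIOPermission, it can
        // simply assert it. The demand raised by File.ReadAllText is then
        // satisfied by the Assert and never walks up to the Deny frame.
        new FileIOPermission(PermissionState.Unrestricted).Assert();
        File.ReadAllText(@"C:\secrets.txt");
    }
}
```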

Sandboxing Today

For the .NET Framework, the unit of isolation we use is the application domain. Each partial trust application domain has a single permission grant set that all assemblies loaded into it get, except for the ones specifically listed on the full trust exemption list or loaded from the Global Assembly Cache. Creating this domain is very simple — the .NET Framework provides a simple sandboxing API that takes in everything you need to create the domain:

AppDomain.CreateDomain( string friendlyName,
                        Evidence securityInfo,
                        AppDomainSetup info,
                        PermissionSet grantSet,
                        params StrongName[] fullTrustAssemblies);

Where the parameters are:

  • friendlyName—The friendly name of the application domain.
  • securityInfo—Evidence associated with the application domain. This isn’t used for CAS policy resolution, obviously, but can be used to store things like publisher information.
  • info—Application domain initialization information. This must include, at minimum, an ApplicationBase, representing the store where partial trust assemblies reside.
  • grantSet—The permission grant set of all loaded assemblies in this domain, except for those on the full trust list or in the Global Assembly Cache.
  • fullTrustAssemblies—A list of StrongNames of assemblies that are granted full trust (exempt from partial trust).

Once the domain is created, you can call AppDomain.CreateInstanceAndUnwrap on a MarshalByRefObject in your partial trust assembly and then call into its entry point method to kick it off, as shown in Figure 2.

Figure 2 Sandbox with Partial Trust Code Running Inside

PermissionSet permSet = new PermissionSet(PermissionState.None);
permSet.AddPermission(new SecurityPermission(
    SecurityPermissionFlag.Execution));

AppDomainSetup ptInfo = new AppDomainSetup();
ptInfo.ApplicationBase = ptAssemblyStore;

AppDomain sandboxedDomain = AppDomain.CreateDomain(
    "Sandbox",
    null,
    ptInfo,
    permSet);

// assume HarnessType is in the GAC and a MarshalByRef object
HarnessType ht = sandboxedDomain.CreateInstanceAndUnwrap(
    typeof(HarnessType).Assembly.FullName,
    typeof(HarnessType).FullName)
    as HarnessType;

That’s it! With several lines of code, we now have a sandbox with partial trust code running in it.

This CreateDomain API was actually added in .NET Framework 2.0, so it’s not new. However, it’s worth mentioning as it’s now the only truly supported way to sandbox code. As you can see, the permission set is passed directly, so no evidence has to be evaluated in loading assemblies into this domain; you know exactly what each loaded assembly is going to get. Furthermore, you’re using a real isolation boundary to contain partial trust code, which is extremely helpful in making security assumptions. With the simple sandboxing CreateDomain API, sandboxes become more obvious, consistent and secure—all things that help make dealing with untrusted code easier.

Enforcement

At this point, we have an appropriate permission grant set for our partial trust assembly and have loaded the assembly into a proper sandbox. Great! However, what if we actually want to expose some elevated functionality to partially trusted code? For example, I may not want to give full file system access to an Internet application, but I don’t mind if it reads from and writes to a known temporary folder.

Those of you who read last year’s column on Silverlight security know exactly how this issue is addressed in that platform—through the Security Transparency model that neatly divides code into three buckets. I’m happy to say that Silverlight’s advancement of the model is now in effect in the .NET Framework 4. This means that the benefits of the simpler model enjoyed by the Silverlight platform libraries are now available to non-Microsoft developers of partial trust libraries. Before I go into that and other improvements in the enforcement space, though, I’ll discuss our primary enforcement mechanisms from before.

Enforcement in the Past

I mentioned last year that Security Transparency was actually introduced in .NET Framework 2.0, but served primarily as an audit mechanism rather than an enforcement one (the new Security Transparency model is both). In the older model, or Level 1 Security Transparency, violations did not manifest themselves as hard failures—many of them (like p/invoking into native code) resulted in permission demands. If your transparent assembly happened to have UnmanagedCode in its grant set, it could still go ahead and do what it was doing (violating the Transparency rules in the process). Furthermore, Transparency checks stopped at the assembly boundary, further reducing its enforcement effectiveness.

True enforcement in the .NET Framework 2.0 came in the form of LinkDemands— JIT time checks that checked if the grant set of the calling assembly contained the specified permission. That was all well and good, but this model essentially required library developers to use two different mechanisms for audit and enforcement, which is redundant. The Silverlight model, which consolidated and simplified these two concepts, was a natural progression from this state and became what is now Level 2 Security Transparency.
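The old pattern looked roughly like this (an illustrative API, not an actual framework member):

```csharp
using System.Security.Permissions;

public class LegacyLibrary
{
    // .NET 2.0-era enforcement: a JIT-time check that the immediate
    // caller's grant set includes the demanded permission. Every caller
    // had to satisfy (or assert away) this LinkDemand, which kept
    // up-stack assemblies from being Transparent.
    [SecurityPermission(SecurityAction.LinkDemand, UnmanagedCode = true)]
    public static void CallIntoNative()
    {
        // ... p/invoke into native code here ...
    }
}
```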

Level 2 Security Transparency

Level 2 Security Transparency is an enforcement mechanism that separates code that is safe to execute in low-trust environments and code that isn’t. In a nutshell, it draws a barrier between code that can do security-sensitive things (Critical), like file operations, and code that can’t (Transparent).

The Security Transparency model separates code into three buckets: Transparent, Safe Critical and Critical. The following diagram, Figure 3, describes these buckets. (Note: Green arrows represent calls that are allowed; red arrows represent those that aren’t. Self-loops are valid as well, but not shown.)

Figure 3 Security Transparency Model

For typical desktop applications, the Level 2 Transparency model has no noticeable effect—code that does not have any security annotations and is not sandboxed is assumed to be Critical, so it is unrestricted. However, since it is Critical, it is off-limits to partial trust callers. Therefore, developers who don’t have partial trust scenarios won’t have to worry about exposing anything to partial trust.

For sandboxed applications, the opposite is true—any assembly loaded into a sandboxed application domain is assumed to be completely Transparent (even if it has annotations specifying otherwise). This ensures that partial trust code cannot attempt to elevate via asserting for permissions or calling into native code (a Full Trust equivalent action).

Libraries exposed to partial trust callers, unlike desktop or sandboxed apps, must be keenly aware of their security requirements and have much more flexibility over their abilities and what they expose. A typical partial trust-callable library should be primarily Transparent and Critical code with a minimal set of Safe Critical APIs. Critical code, while unrestricted, is known to be inaccessible from partial trust code. Transparent code is callable from partial trust code but is safe. Safe Critical code is extremely dangerous, as it provides elevated functionality, and utmost care must be taken to make sure its caller is validated before transitioning over to Critical code.
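To illustrate, here is a sketch of a Safe Critical API in a partial trust-callable library, built around the temporary-folder scenario mentioned earlier. All names (TempStore, s_tempRoot, ReadTempFile) are hypothetical; the key point is validating input before asserting:

```csharp
using System;
using System.IO;
using System.Security;
using System.Security.Permissions;

[assembly: AllowPartiallyTrustedCallers]

public static class TempStore
{
    // Assumed to be configured by the host.
    static readonly string s_tempRoot = @"C:\AppTemp\";

    // Safe Critical: callable from Transparent (partial trust) code, yet
    // performs an elevation. It must validate its arguments *before*
    // asserting, or callers could escape the temp folder.
    [SecuritySafeCritical]
    public static string ReadTempFile(string fileName)
    {
        string full = Path.GetFullPath(Path.Combine(s_tempRoot, fileName));
        if (!full.StartsWith(s_tempRoot, StringComparison.OrdinalIgnoreCase))
            throw new SecurityException("Access outside the temp folder.");

        // Elevate only for the validated path, then read.
        new FileIOPermission(FileIOPermissionAccess.Read, full).Assert();
        return File.ReadAllText(full);
    }
}
```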

The Security Transparency attributes and their behaviors are listed and described in Figure 4. Keep in mind that the highest-scoped attribute applies for all introduced APIs under it, regardless of whether those APIs have their own annotations. AllowPartiallyTrustedCallers is different in that it defers to and honors lower-level attributes. (Note: This table describes the attributes and their behavior when applied at the assembly, type or member level. The attributes apply only to introduced APIs, which means subclasses and overrides are subject to the inheritance rules and may be at different Transparency levels.)

Figure 4 Security Transparency Attributes and Their Behaviors

Those of you who remember last October’s article will probably notice that the attributes work, more or less, the same way they do in Silverlight. You might also remember that there were specific inheritance rules associated with the different types of code. Those are also in effect on the desktop. For more details on the inheritance rules and other aspects of Level 2 Transparency, take a look at last year’s article, “Security in Silverlight 2.”

Conditional AllowPartiallyTrustedCallers

The AllowPartiallyTrustedCallers attribute (APTCA) indicates that an assembly is a library that may expose security-sensitive functionality to partial trust. APTCA library assemblies are often written in conjunction with hosts, since the hosts typically want to expose specific functionality to their hosting environments. One major example is ASP.NET, which exposes the System.Web namespace to its hosted code, which may be at various trust levels.

However, putting APTCA on an assembly means it’s available to partial trust in any host that decides to load it, which can be a liability if the assembly author doesn’t know how that assembly will behave in different hosts. Therefore, host developers sometimes want their libraries to be available to partial trust only when loaded in their own domains. ASP.NET does exactly this, and in earlier versions has had to use LinkDemands for special permissions on their APIs. While this works, it causes everyone building on top of them to have to satisfy that LinkDemand, preventing those up-stack assemblies from being transparent.

To solve this, we introduced the Conditional APTCA feature, which allows libraries to expose APIs to partial trust callers only in an enabling host (via an allow list).

The specific roles of the host and library are:

  • The library simply qualifies the AllowPartiallyTrustedCallers attribute with a parameter, the PartialTrustVisibilityLevel enum. For example:
      [assembly: AllowPartiallyTrustedCallers(PartialTrustVisibilityLevel = PartialTrustVisibilityLevel.NotVisibleByDefault)]
    This attribute basically says that the library is not callable from partial trust unless the host has it on its allow-list, mentioned below. A value of VisibleToAllHosts would make the library callable from partial trust in all hosts.
  • The host specifies partial trust visible assemblies, per application domain, via an allow list. This list is typically populated via a configuration file supplied to the host. An important thing to keep in mind is that unconditional APTCA assemblies, like the basic framework libraries, do not have to be added to this list. (Also important to keep in mind is that if you’re enabling a Conditional APTCA assembly, you should enable its transitive closure of dependent Conditional APTCA assemblies as well. Otherwise, you might end up with odd behavior, as your original assembly tries to call APIs that it assumes are accessible but really aren’t.)
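For illustration, a host might populate the allow list programmatically through AppDomainSetup.PartialTrustVisibleAssemblies when creating its sandbox. This is a sketch; the assembly name and public key below are placeholders:

```csharp
using System;
using System.Security;

class ConditionalAptcaHostSketch
{
    static AppDomain CreateSandbox(PermissionSet grantSet, string appBase)
    {
        AppDomainSetup setup = new AppDomainSetup();
        setup.ApplicationBase = appBase;

        // Entries name the conditional APTCA assemblies to enable for this
        // domain, as "SimpleName, PublicKey=<full public key in hex>".
        // The key below is a placeholder, not a real one.
        setup.PartialTrustVisibleAssemblies = new string[]
        {
            "MyHostLibrary, PublicKey=002400000480000094000000060200..."
        };

        return AppDomain.CreateDomain("Sandbox", null, setup, grantSet);
    }
}
```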

Easier to Secure

Much has happened in the security model for the .NET Framework 4. CAS policy has been disabled by default, leaving all security policy decisions up to the host and granting unhosted managed exes behavioral parity with native exes. Disabling CAS policy has also disabled heterogeneous application domains, finally making the efficient simple-sandboxing CreateDomain overload the primary supported sandboxing mechanism for partial trust assemblies. Silverlight’s improvements to the Security Transparency model, described last October, have also come to the desktop, providing partial trust library developers with the same efficiency and cleanliness benefits that were provided to the Silverlight platform.

We’ve crafted these changes in such a way that most applications will continue to work as they have before, but the host and library developers out there will find a simpler model to work with—one that is more deterministic, simpler to use and, therefore, easier to secure.       

Post your questions and comments on the CLR Team Blog.


Andrew Dai is a program manager on the CLR security team. For more in-depth information on how to use the features mentioned in this article, please visit the CLR team blog and Shawn Farkas’ .NET security blog.