The Myth of .NET Purity, Reloaded

 

Scott Hanselman
Corillian Corporation

May 28, 2004

Summary: Does a solution written for Microsoft .NET have to be 100% .NET? Scott Hanselman looks at how hybrid managed-unmanaged solutions are really the norm. (9 printed pages)


It's Spring of 2004. The Microsoft® .NET Framework is currently included with Microsoft® Windows Server™ 2003, and we know that many of the rich services in the upcoming release of Microsoft® Windows® code-named "Longhorn" will be built with managed APIs. Some folks have asked the question, "Is Longhorn managed?" implying that it would be a great thing if huge parts of the operating system were implemented in managed code. When they hear that not all of it is managed, they are somehow disappointed.

Don Box put it simply: "It doesn't matter. What does matter is that ... the primary access mode ... is managed." Do I care if my device driver is written in managed code or in C? No, I care that it works, and works well.

There is an increasing amount of discussion around the topic of ".NET Purity" in development circles. When selling an application, the question often arises, "Is your application 100 percent .NET?" Or, "How much of your application is .NET?" There is an implied qualitative judgment behind these questions, and it's usually pejorative.

I've heard it said by many a CTO in many a technical briefing that, "We're planning to port our whole system to .NET." Why spend 18 months converting your application, so you can arrive at the endpoint you're already at?

The implication is that an application entirely written in .NET, presumably without any interoperation with COM or direct calls to the Win32 API, is superior to an application that is a combination of technologies. The tragic irony of the goal of "as little interop as possible" is that the very .NET Framework you're building upon is itself a fantastic example of interoperability.

.NET represents a fantastic leap in developer productivity, and puts a clean, consistent face on the services that the Windows Platform provides. For many years, the set of interfaces provided by the Windows Platform—collectively known as the Windows SDK—has been exposed to developers as exported "C"-style functions in DLLs, and in recent years, through the Component Object Model (COM). As things got more complicated, new access modes and abstractions were introduced.

MFC wrapped Win32 in a way that was different from the Microsoft® Visual Basic® 6.0 view, or the Windows Template Library (WTL) view. Classic Microsoft® ASP had no eventing model at all built on top of HTTP, while Visual Basic encouraged folks to double-click and get their events wired up automagically. With the .NET Framework, many different "access modes" have been unified under one programming model. There is one access mode; it's the .NET Framework managed APIs sitting on Windows, "The Platform."

Figure 1. The .NET "platform"

.NET Framework Library

The Windows platform has dozens and dozens of high-level system services exposed by literally thousands of APIs. This large library of functionality encompasses various levels of richness. A low-level API may open a file off a disk, while a high-level one might play an audio file. The designers of the .NET Framework wanted to create a consistent object-oriented face on a rich legacy of platform functionality. The CLR and .NET Framework work together to expose the capabilities within the Windows platform, including those that may have previously been hidden away in difficult or little known APIs.

While the CLR provides a new paradigm for application development, it does not close the door on existing libraries. The CLR provides interop services to the developer, but the biggest consumer of these services is certainly the .NET Class Libraries themselves, which unlock existing Windows platform abilities through the .NET APIs.

Figure 2. The platform invoke model

For example, when sending e-mail using the .NET Framework Library class, System.Web.Mail.SmtpMail, the Class Library uses a helper class that abstracts the existing CDO (Collaboration Data Objects) COM Library. This is just one example of thousands where a .NET Library developer chose to rely on a production-ready and reliable existing library rather than write something from scratch. This example and dozens of others within the Library notwithstanding, the Common Language Runtime still at some point needs to work with the internal Windows APIs. As the saying goes, no matter what you're working on, eventually someone has to call LoadLibrary().
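To make that facade concrete, here is a minimal sketch of sending mail through System.Web.Mail.SmtpMail (the server name and addresses are placeholders, and a reference to System.Web.dll is assumed); from the caller's perspective it's all managed code, even though the call quietly rides on CDO through COM interop.

   // A minimal sketch: the managed SmtpMail facade delegates to the
   // existing CDO COM library under the covers.
   using System.Web.Mail;

   class MailSketch
   {
      static void Main()
      {
         SmtpMail.SmtpServer = "localhost";   // placeholder SMTP relay
         SmtpMail.Send("from@example.com",    // placeholder addresses
                       "to@example.com",
                       "Hello from managed code",
                       "This call ends up in CDO via COM interop.");
      }
   }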

Had Microsoft truly virtualized the machine, they would have marginalized their investment in the Windows platform.

Certainly it behooved the designers to make transitions to existing libraries as painless as possible. They have enabled this with .NET/COM interop through both runtime-callable and COM-callable wrappers, the ability to tap into standard Win32 Platform APIs through a technology called P/Invoke (short for platform invoke), as well as other options. When writing code hosted in the CLR, the vast resources of the platform are just sitting under the developer—the runtime is transparent rather than virtual. This marks a fundamentally different view of the platform than that of other virtual machine implementations.

GDI, the Graphics Device Interface, is an excellent example of an underlying unmanaged service that has been seamlessly brought into the managed world almost entirely with P/Invoke. It's useful to list out the current "three pillars of Windows" in a table to remind readers where most of the unmanaged work is still happening, no matter how pure you are.

DLL            Description of its contents
Kernel32.dll   Low-level operating-system functions for unmanaged memory management and handling of resources.
GDI32.dll      Graphics Device Interface (GDI) functions for drawing, font management, and general device output.
User32.dll     Window-management functions for message handling, menus, and communications.

If I wanted to use the unmanaged FindWindow function exported from User32.dll, I'd add a declaration to my C# application like this:

   [DllImport("User32.dll")]
   public static extern int FindWindow(string strClassName,
                                       string strWindowName);

After I've made this declaration (which notably differs from a typical .NET function definition only by the DllImport attribute and perhaps the use of static extern), I can happily call this method like any other. Certainly there are tricks and tribulations around marshalling between unmanaged and managed code, but for the most part, it just works. The proof that it works is in the call stack of your own code, which is constantly jumping into unmanaged support DLLs with you rarely having to think about it.
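As a minimal, self-contained sketch of such a call (the window title below is purely illustrative), the imported function reads like ordinary C#:

   using System;
   using System.Runtime.InteropServices;

   class Win32Interop
   {
      // The same declaration as above; int holds a 32-bit window handle
      // here, though IntPtr is the more portable choice.
      [DllImport("User32.dll")]
      public static extern int FindWindow(string strClassName,
                                          string strWindowName);

      static void Main()
      {
         // Passing null for the class name matches on the window title alone.
         int hwnd = FindWindow(null, "Untitled - Notepad");
         Console.WriteLine(hwnd != 0
            ? "Found the window; handle = 0x" + hwnd.ToString("X")
            : "No matching top-level window found.");
      }
   }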

While creating a fresh application using only .NET may offer some benefits in the arenas of deployment or marketing architecture ("marketecture"), these benefits may not be worth the trouble when weighed against the cost of rewriting non-.NET components in .NET when those legacy components could have been leveraged. A "pure" .NET solution can only make use of either those pieces of functionality that can be achieved entirely within the runtime, or those functions that have been exposed by the Base Class Library—which itself uses COM Interop and P/Invoke.

The .NET Framework Library itself isn't "pure .NET," as it uses every opportunity to take full advantage of the underlying platform primitives.

The whole concept of .NET Purity is rendered specious in this new light. The .NET Framework is certainly the best way to create business components on the Windows platform, but any applications written with the .NET Framework are only lifted as high as the underlying Windows operating system services.

Why Write Managed Code at All?

Why should one move up a step in the language and environment ladder? In order to live and work at another level of abstraction. Certainly the .NET CLR provides an improved object-oriented experience over Visual Basic 6.0 and even possibly C++, but the Framework's real value is in its ability to effectively hide the underlying operating system and System APIs and then expose the new abstraction layer to a multitude of languages. Of course you won't write a real-time device driver in managed code (yet), but I believe towards the middle of this decade, nearly all new business-focused code of any significance will be written in a managed environment. (If this is not already true.)

The CLR and managed runtimes aim to lift the developer out of the drudgery of memory management, low-level I/O and storage, and even wire protocols. In the days of C++, we talked a good talk about objects being "closer to the business person" and "easier to design with," but our object-oriented illusion was shattered when it came time to serialize the object, remote its data easily, or save it to a database. Service-oriented architecture (SOA) begins to reconcile the responsibilities of a logical service versus the objects (messages) it acts upon, but this move forward wouldn't have been feasible without extensible, complete, and pervasive metadata, and a unified type system. We'd have been too mired in the details. That's why writing managed code is a no-brainer.
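As a small illustration of what that pervasive metadata buys you, here is a minimal sketch (the Customer type and its values are invented for this example) of turning an ordinary object into XML with a single call:

   using System;
   using System.IO;
   using System.Xml.Serialization;

   // A plain business object; the metadata the runtime keeps about its
   // public members is all the serializer needs.
   public class Customer
   {
      public string Name;
      public decimal CreditLimit;
   }

   class SerializationSketch
   {
      static void Main()
      {
         Customer c = new Customer();
         c.Name = "Contoso";
         c.CreditLimit = 5000m;

         // One call turns the object into XML; no hand-written
         // persistence or wire-format code is required.
         XmlSerializer serializer = new XmlSerializer(typeof(Customer));
         StringWriter writer = new StringWriter();
         serializer.Serialize(writer, c);
         Console.WriteLine(writer.ToString());
      }
   }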

Common Language Runtime or Virtual Machine?

Often the .NET Common Language Runtime, or CLR, is directly compared to the Java™ Virtual Machine. Initially, there are many clear parallels: both are "managed" environments that provide a component container, both consume a "partially chewed" intermediate language, and both provide low-level services like garbage collection and threading conveniences.

While these parallels are semantically correct on a superficial level, these two implementations differ fundamentally in philosophy. Comparing the CLR to the Virtual Machine is reasonable only to a certain point—their architectural goals are ultimately different.

Sun Microsystems™ promotes a marketing program called 100% Pure Java, which is certainly appropriate if code portability and underlying operating system transparency are desirable endpoints. However, many third-party Java Application Servers create a competitive advantage by judicious use of "C" function calls directly down (through the Java Native Interface, or JNI) into their host operating system's value-added services that are not exposed by the Java Application Platform (the Java Class Library). Calling into the core platform is the only way to make use of base functionality that is presented only through a native interface.

Figure 3. The Java "stack"

The Java Virtual Machine is truly a "virtual machine," the ultimate goal of which is to abstract (virtualize) away the underlying operating system and provide an idealized (not necessarily ideal, but idealized) environment for development. The Java Virtual Machine is also intimately united with its API, the Java Application Platform, whose services are provided by the Virtual Machine implementation. Regardless of where you run your compiled Java code, you will run within the context of the Virtual Machine and ostensibly link with the supplied Java Platform APIs.

The .NET Common Language Runtime is well named, as it is used more as a language runtime than a virtual machine. While it successfully abstracts away aspects of underlying hardware through its use of an intermediate language, when the CLR is combined with the .NET Framework Library of APIs, it is married to the underlying platform, which is Windows. The CLR provides all the facilities of the Windows Platform to any .NET-enabled language.

It's also worth noting that all code running in the .NET managed environment is in fact compiled just-in-time (JIT'ed) to native instructions, providing a balance between flexibility and performance: abstracted away from the hardware, but at the same time running "close to the metal."

"Hybrid" Solutions Provide Real Solutions

Many large existing applications are written in Microsoft® Visual C++® and COM. They are written "close to the metal" to take full advantage of native Windows multi-threading and fine-grained (not automatic) memory management. However, new business components may also be written in a .NET language such as C# or Visual Basic .NET. The existing system then hosts the .NET Common Language Runtime within its process space and "interops." An interface that uses COM interop incurs only minimal overhead, between 10 and 40 processor instructions per in-proc call (mileage may vary).
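As one hedged sketch of that arrangement, a new business component written in C# can be handed to the existing unmanaged code through a COM-callable wrapper; the interface, class, and GUIDs below are invented for illustration, and the assembly would be registered for COM use with regasm.exe.

   using System;
   using System.Runtime.InteropServices;

   // A new managed business component exposed to an existing unmanaged
   // application through a COM-callable wrapper (CCW).
   [ComVisible(true)]
   [Guid("5D2B1C3A-6A1E-4B8F-9C2D-0F1E2A3B4C5D")]
   public interface ILoanCalculator
   {
      double MonthlyPayment(double principal, double annualRate, int months);
   }

   [ComVisible(true)]
   [Guid("7E3C2D4B-8B2F-4C9A-AD3E-1A2B3C4D5E6F")]
   [ClassInterface(ClassInterfaceType.None)]
   public class LoanCalculator : ILoanCalculator
   {
      public double MonthlyPayment(double principal, double annualRate, int months)
      {
         // Standard amortization formula; monthly rate from the annual rate.
         double r = annualRate / 12.0;
         return principal * r / (1.0 - Math.Pow(1.0 + r, -months));
      }
   }

The legacy C++ code sees an ordinary COM interface; that the implementation happens to be managed is invisible to the caller.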

.NET components hosted within the legacy application can take advantage of that application's existing services. Lower-level developer features, such as memory management, object lifetime, and object orientation, are provided by the CLR, while higher-level, vertical-specific business functionality is exposed through the legacy application.

More usefully, an application's existing unmanaged services can be presented with a fresh perspective as a managed and possibly service- or object-oriented API. The fact that the service's internal workings are ultimately unmanaged is just that—an implementation detail.
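A minimal sketch of that repackaging: wrap an existing Win32 export behind an ordinary .NET class, and callers never see the P/Invoke. MessageBeep is a real User32 export; the wrapper class name here is invented.

   using System.Runtime.InteropServices;

   // A managed facade over an unmanaged Windows service. Callers see an
   // ordinary .NET type; the P/Invoke is an implementation detail.
   public class SystemBeeper
   {
      [DllImport("user32.dll")]
      private static extern bool MessageBeep(uint uType);

      public void PlayDefaultBeep()
      {
         MessageBeep(0);   // 0 == MB_OK, the default system sound
      }
   }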

This "hybrid" model can (and does) provide a best-of-breed solution on the Windows Platform, exploiting both the high-performance low-level APIs through C++, and the highly componentized features of the .NET Framework. These solutions can work very successfully while companies shift their focus to developing entirely with .NET.

 

.NET in the Real World

Scott Hanselman is currently the Technology Evangelist and .NET Architect at eFinance enabler Corillian Corporation. He has a decade of experience developing software in C, C++, Visual Basic, COM, and recently C# and .NET. Scott has served as the Microsoft Developer Network "Regional Director" for Portland, Oregon for the last three years, developing content for, and speaking at, Developer Days and the Visual Studio .NET Launch in both Portland and Seattle. He is the highest rated speaker at Microsoft events in the PacWest region, and has co-authored two books from Wrox Press. Scott and Corillian are members of the Web Services Interoperability Organization (WS-I).