Migrating 32-bit Managed Code to 64-bit


Microsoft Corporation

Updated May 2005

Applies to:
   Microsoft .NET
   Microsoft .NET Framework 2.0

Summary: Find out what is involved in migrating 32-bit managed applications to 64-bit, issues that can impact migration, and the tools that are available to assist you. (17 printed pages)


Managed Code in a 32-bit Environment
Enter the CLR for the 64-bit Environment
Migration and Platform Invoke
Migration and COM Interoperability
Migration and Unsafe Code
Migration and Marshaling
Migration and Serialization


This whitepaper discusses:

  • What is involved in migrating managed applications from 32-bit to 64-bit
  • The issues that can impact migration
  • What tools are available to assist you

This information is not meant to be prescriptive; rather, it is intended to familiarize you with the areas that are susceptible to issues during the process of migrating to 64-bit. At this point there is no specific "cookbook" of steps that you can follow to ensure your code will work on 64-bit, but this whitepaper will familiarize you with the different issues and what should be reviewed.

As you will soon see, if your managed assembly is not 100% type-safe code, you will need to review your application and its dependencies to determine your issues with migrating to 64-bit. Many of the items you will read about in the next sections can be addressed through programming changes. In a number of cases you will also need to set aside time to update your code so that it runs correctly in both 32-bit and 64-bit environments, if you want it to run in both.

Microsoft .NET is a set of software technologies for connecting information, people, systems, and devices. Since its 1.0 release in 2002, organizations have succeeded in deploying .NET-based solutions, whether built in-house, by independent software vendors (ISVs), or some combination. Several types of .NET applications push the limits of the 32-bit environment. These challenges include, but are not limited to, the need for more real addressable memory and the need for increased floating-point performance. x64 and Itanium offer better floating-point performance than x86, although the results you get on x64 or Itanium may differ from the results you get on x86. The 64-bit platform aims to help address these issues.

With the release of .NET Framework version 2.0, Microsoft includes support for managed code running on the x64 and Itanium 64-bit platforms.

Managed code is simply "code" that provides enough information to allow the .NET Common Language Runtime (CLR) to provide a set of core services, including:

  • Self description of code and data through metadata
  • Stack walking
  • Security
  • Garbage collection
  • Just-in-Time compilation

In addition to managed code there are several other definitions that are important to understand as you investigate the migration issues.

Managed Data—data that is allocated on the managed heap and collected via garbage collection.

Assembly—the unit of deployment that allows the CLR to fully understand the contents of an application and to enforce the versioning and dependency rules defined by the application.

Type safe code—code that uses only managed data, and no unverifiable data types or unsupported data type conversion/coercion operations (that is, non-discriminated unions or structure/interface pointers). C#, Visual Basic .NET, and Visual C++ code compiled with /clr:safe generate type safe code.

Unsafe Code—code that is permitted to perform such lower-level operations as declaring and operating on pointers, performing conversions between pointers and integral types, and taking the address of variables. Such operations permit interfacing with the underlying operating system, accessing a memory-mapped device, or implementing a time-critical algorithm. Native code is unsafe.

Managed Code in a 32-bit Environment

To understand the complexities involved with migrating managed code to the 64-bit environment, let's review how managed code is executed in a 32-bit environment.

When an application, managed or unmanaged, is selected to be executed, the Windows loader is invoked and is responsible for deciding how to load and then execute the application. Part of this process involves peeking inside the executable's portable executable (PE) header to determine whether the CLR is required. As you might have already guessed, there are flags in the PE header that indicate managed code. In this case the Windows loader starts the CLR, which is then responsible for loading and executing the managed application. (This is a simplified description of the process, as there are many steps involved, including determining which version of the CLR to execute, setting up the AppDomain 'sandbox', and so on.)


As the managed application runs, it can (assuming appropriate security permissions) interact with native APIs (including the Win32 API) and COM objects through the CLR interoperability capabilities. Whether calling a native platform API, making a COM request, or marshaling a structure, when running completely within the 32-bit environment the developer is isolated from having to think about data type sizes and data alignment.

When considering the migration to 64-bit it will be essential to research what dependencies your application has.

Enter the CLR for the 64-bit Environment

In order for managed code to execute in the 64-bit environment consistently with the 32-bit environment, the .NET team developed the Common Language Runtime (CLR) for the Itanium and x64 64-bit systems. The CLR had to strictly comply with the rules of the Common Language Infrastructure (CLI) and the Common Type System to ensure that code written in any of the .NET languages would be able to interoperate as it does in the 32-bit environment. In addition, the following is a list of some of the other pieces that also had to be ported and/or developed for the 64-bit environment:

  • Base class libraries (System.*)
  • Just-In-Time compiler
  • Debugging support
  • .NET Framework SDK

64-bit managed code support

The .NET Framework version 2.0 supports the Itanium and x64 64-bit processors running:

  • Windows Server 2003 SP1
  • Future 64-bit Windows client releases

(You cannot install the .NET Framework version 2.0 on Windows 2000. Output files produced using the .NET Framework versions 1.0 and 1.1 will run under WOW64 on a 64-bit operating system.)

When installing the .NET Framework version 2.0 on the 64-bit platform, you are not only installing all of the necessary infrastructure to execute your managed code in 64-bit mode, but you are installing the necessary infrastructure for your managed code to run in the Windows-on-Windows subsystem, or WoW64 (32-bit mode).

A simple 64-bit migration

Consider a .NET application that is 100% type-safe code. In this scenario it is possible to take the .NET executable that you run on your 32-bit machine, move it to the 64-bit system, and have it run successfully. Why does this work? Since the assembly is 100% type safe, we know that there are no dependencies on native code or COM objects and that there is no 'unsafe' code, which means that the application runs entirely under the control of the CLR. The CLR guarantees that, while the binary code generated as the result of just-in-time (JIT) compilation will be different between 32-bit and 64-bit, the code that executes will be semantically the same.

In reality the previous scenario is a bit more complicated from the perspective of getting the managed application loaded. As discussed in the previous section, the Windows loader is responsible for deciding how to load and execute the application. However, unlike the 32-bit environment, running on a 64-bit Windows platform means that there are two (2) environments where the application could be executed, either in the native 64-bit mode or in WoW64.

The Windows loader now has to make decisions based on what it discovers in the PE header. As you might have guessed there are settable flags in the managed code that assist with this process. (See corflags.exe to display the settings in a PE.) The following list represents information that is found in the PE that aids in the decision making process.

  • 64-bit—denotes that the developer has built the assembly specifically targeting a 64-bit process.
  • 32-bit—denotes that the developer has built the assembly specifically targeting a 32-bit process. In this instance the assembly will run in WoW64.
  • Agnostic—denotes that the developer built the assembly with Visual Studio 2005 (code-named "Whidbey") or later tools and that the assembly can run in either 64-bit or 32-bit mode. In this case, the 64-bit Windows loader will run the assembly in 64-bit mode.
  • Legacy—denotes that the tools that built the assembly were "pre-Whidbey". In this particular case the assembly will be run in WoW64.

Note   There is also information in the PE that tells the Windows loader if the assembly is targeted for a specific architecture. This additional information ensures that assemblies targeted for a particular architecture are not loaded in a different one.

The C#, Visual Basic .NET, and C++ Whidbey compilers let you set the appropriate flags in the PE header. For example, C# and Visual Basic .NET have a /platform:{anycpu, x86, Itanium, x64} compiler option.

Note   While it is technically possible to modify the flags in the PE header of an assembly after it has been compiled, Microsoft does not recommend doing this.

If you are curious to know how these flags are set on a managed assembly, you can run the ILDASM utility provided in the .NET Framework SDK and examine the assembly's headers.

Keep in mind that a developer marking an assembly as 64-bit has determined that all dependencies of the application can execute in 64-bit mode. A 64-bit process cannot use a 32-bit component in process (and a 32-bit process cannot load a 64-bit component in process). Also keep in mind that the system's ability to load the assembly into a 64-bit process does not automatically mean that it will execute correctly.

So, we now know that an application comprised of managed code that is 100% type safe can be copied (or deployed via xcopy) to a 64-bit platform, be JIT-compiled, and run successfully with .NET in 64-bit mode.

However, we often see situations that aren't ideal, and that brings us to the main focus of this paper, which is to increase awareness of the issues related to migrating.

You can have an application that isn't 100% type safe and that is still able to run successfully in 64-bit under .NET. It will be important for you to look at your application carefully, keeping in mind the potential issues discussed in the following sections and make the determination of whether you can or cannot run successfully in 64-bit.

Migration and Platform Invoke

Making use of the platform invoke (or p/invoke) capabilities of .NET refers to managed code that is making calls to non-managed, or native, code. In a typical scenario this native code is a dynamic link library (DLL) that is either part of the system (Windows API, etc.), part of your application, or a third-party library.

Using non-managed code does not mean explicitly that a migration to 64-bit will have issues; rather it should be considered an indicator that additional investigation is required.

Data types in Windows

Every application and every operating system has an abstract data model. Many applications do not explicitly expose this data model, but the model guides the way in which the application's code is written. In the 32-bit programming model (known as the ILP32 model), integer, long, and pointer data types are 32 bits in length. Most developers have used this model without realizing it.

In 64-bit Microsoft Windows, this assumption of parity in data type sizes is invalid. Making all data types 64 bits in length would waste space, because most applications do not need the increased size. However, applications do need pointers to 64-bit data, and they need the ability to have 64-bit data types in selected cases. These considerations led the Windows team to select an abstract data model called LLP64 (or P64). In the LLP64 data model, only pointers expand to 64 bits; all other basic data types (integer and long) remain 32 bits in length.

The .NET CLR for 64-bit platforms uses the same LLP64 abstract data model. In .NET there is an integral data type, not widely known, that is specifically designated to hold 'pointer' information: IntPtr, whose size depends on the platform (32-bit or 64-bit) it is running on. Consider the following code snippet:

public void SizeOfIntPtr() {
   Console.WriteLine( "SizeOf IntPtr is: {0}", IntPtr.Size );
}

When run on a 32-bit platform you will get the following output on the console:

SizeOf IntPtr is: 4

On a 64-bit platform you will get the following output on the console:

SizeOf IntPtr is: 8

Note   If you want to check at runtime whether you are running in a 64-bit environment, you can use IntPtr.Size as one way to make this determination.
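To make the size relationships concrete, the following sketch prints the sizes of the core managed types. Note that, unlike the native LLP64 types, C#'s int and long are fixed-width on every platform; only IntPtr tracks the pointer size:

```csharp
using System;

class ManagedTypeSizes {
    static void Main() {
        // Managed primitives are fixed-width on every platform:
        Console.WriteLine(sizeof(int));   // always 4
        Console.WriteLine(sizeof(long));  // always 8
        // Only pointer-sized types follow the platform:
        Console.WriteLine(IntPtr.Size);   // 4 on 32-bit, 8 on 64-bit
    }
}
```

A check such as IntPtr.Size == 8 is therefore a simple runtime test for 64-bit mode.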

Migration considerations

When migrating managed applications that use p/invoke, consider the following items:

  • Availability of a 64-bit version of the DLL
  • Use of data types


Availability of a 64-bit version of the DLL

One of the first things that needs to be determined is whether the non-managed code that your application depends on is available for 64-bit.

If this code was developed in-house, then your ability for success is increased. Of course, you will still need to allocate resources to port the non-managed code to 64-bit along with appropriate resources for testing, quality assurance, etc. (This whitepaper isn't making recommendations about development processes; rather, it is trying to point out that resources may need to be allocated to tasks to port code.)

If this code is from a third party, you will need to investigate whether this third party already has the code available for 64-bit and whether the third party would be willing to make it available.

The higher-risk issue arises if the third party no longer provides support for this code or is not willing to do the work. These cases necessitate additional research into available libraries that provide similar functionality, whether the third party will let the customer do the port themselves, and so on.

It is important to keep in mind that a 64-bit version of the dependent code may have altered interface signatures, which may mean additional development work to resolve differences between the 32-bit and 64-bit versions of the application.

Data types

Using p/invoke requires that the code developed in .NET declare a prototype of the method that the managed code is targeting. Given the following C declaration:

typedef void * HANDLE;
HANDLE GetData();

Examples of prototyped methods are shown below:


[DllImport( "sampleDLL", CallingConvention=CallingConvention.Cdecl )]
      public static extern int DoWork( int x, int y );

[DllImport( "sampleDLL", CallingConvention=CallingConvention.Cdecl )]
      public unsafe static extern int GetData();

Let's review these examples with an eye towards 64-bit migration issues:

The first example calls the method DoWork, passing in two 32-bit integers and expecting a 32-bit integer to be returned. Even though we are running on a 64-bit platform, an integer is still 32 bits. There is nothing in this particular example that should hinder our migration efforts.

The second example requires some changes to the code to run successfully in 64-bit. Here we are calling the method GetData and declaring that we expect an integer to be returned, when the function actually returns a pointer. Herein lies our problem: integers are 4 bytes, but in the 64-bit environment pointers are 8 bytes. As it turns out, quite a bit of code in the 32-bit world was written assuming that a pointer and an integer were the same length, 4 bytes. In the 64-bit world this is no longer true.

In this last case the problem can be resolved by changing the method declaration to use an IntPtr in place of the int.

public unsafe static extern IntPtr GetData();

Making this change will work in both the 32-bit and 64-bit environments. Remember, IntPtr is sized to the platform it runs on.
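The failure mode being avoided here can be demonstrated without any native code. In this hedged sketch the pointer value is simulated with a long, since an actual 64-bit address cannot be obtained portably:

```csharp
using System;

class TruncationDemo {
    static void Main() {
        // Simulated pointer value that needs more than 32 bits (bit 32 set).
        long pointerSized = 0x100000004L;

        // Declaring the native return type as 'int' effectively performs
        // this cast, silently dropping the upper 32 bits:
        int truncated = unchecked((int)pointerSized);
        Console.WriteLine(truncated);   // prints 4 -- the value is corrupted

        // An IntPtr keeps the full value when the process is 64-bit:
        if (IntPtr.Size == 8) {
            IntPtr intact = new IntPtr(pointerSized);
            Console.WriteLine(intact.ToInt64());
        }
    }
}
```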

Using p/invoke in your managed application does not mean that migrating to the 64-bit platform will not be possible. Nor does it mean that there will be problems. What it does mean is that you must review the dependencies on non-managed code that your managed application has, and determine if there will be any issues.

Migration and COM Interoperability

COM interoperability is an assumed capability of the .NET platform. Like the previous discussion on platform invoke, making use of COM interoperability means that managed code is making calls to non-managed code. However, unlike platform invoke, COM interoperability also means having the ability for non-managed code to call managed code as if it were a COM component.

Once again, using non-managed COM code does not mean that a migration to 64-bit will have problems; rather it should be considered an indicator that additional investigation is required.

Migration considerations

It is important to understand that with the release of .NET Framework version 2.0 there is no support for inter-architecture interoperability. To be more succinct, you cannot make use of COM interoperability between 32-bit and 64-bit in the same process. But you can make use of COM interoperability between 32-bit and 64-bit if you have an out-of-process COM server. If you cannot use an out-of-process COM server, you will want to mark your managed assembly as Win32 rather than Win64 or Agnostic in order to have your program run in WoW64 so that it can interoperate with the 32-bit COM object.

The following is a discussion of the different considerations that must be given to making use of COM interoperability where managed code makes COM calls in a 64-bit environment. Specifically,

  • Availability of a 64-bit version of the DLL
  • Use of data types
  • Type libraries


Availability of a 64-bit version of the DLL

The discussion in the p/invoke section regarding availability of a 64-bit version of the dependent code is relevant to this section as well.

Data types

The discussion in the p/invoke section regarding data types of a 64-bit version of the dependent code is relevant to this section as well.

Type libraries

Unlike assemblies, type libraries cannot be marked as 'neutral'; they must be marked as either Win32 or Win64. In addition, the type library must be registered for each environment in which the COM object will run. Use tlbimp.exe to generate a 32-bit or 64-bit assembly from a type library.

Using COM interoperability in your managed application does not mean that migrating to the 64-bit platform will not be possible. Nor does it mean that there will be problems. What it does mean is that you must review the dependencies your managed application has and determine if there will be any issues.

Migration and Unsafe Code

The core C# language differs notably from C and C++ in its omission of pointers as a data type. Instead, C# provides references and the ability to create objects that are managed by a garbage collector. In the core C# language it is simply not possible to have an uninitialized variable, a "dangling" pointer, or an expression that indexes an array beyond its bounds. Whole categories of bugs that routinely plague C and C++ programs are thus eliminated.

While practically every pointer type construct in C or C++ has a reference type counterpart in C#, there are situations where access to pointer types becomes a necessity. For example, interfacing with the underlying operating system, accessing a memory-mapped device, or implementing a time-critical algorithm may not be possible or practical without access to pointers. To address this need, C# provides the ability to write unsafe code.

In unsafe code it is possible to declare and operate on pointers, to perform conversions between pointers and integral types, to take the address of variables, and so forth. In a sense, writing unsafe code is much like writing C code within a C# program.

Unsafe code is in fact a "safe" feature from the perspective of both developers and users. Unsafe code must be clearly marked with the modifier unsafe, so developers can't possibly use unsafe features accidentally.

Migration considerations

In order to discuss the potential issues with unsafe code let's explore the following example. Our managed code makes calls to an unmanaged DLL. In particular, there is a method called GetDataBuffer that returns 100 items (for this example we are returning a fixed number of items). Each of these items consists of an integer and a pointer. The sample code below is an excerpt from the managed code showing the unsafe function responsible for handling this returned data.


public unsafe int UnsafeFn() {
   IntPtr* inputBuffer = sampleDLL.GetDataBuffer();
   IntPtr* ptr = inputBuffer;
   int result = 0;

   for ( int idx = 0; idx < 100; idx++ ) {
      // Add 'int' from DLL to our result
      result = result + ((int) *ptr);

      // Increment pointer over int
      ptr = (IntPtr*)( ((byte*) ptr) + sizeof( int ) );

      // Increment pointer over pointer
      ptr = (IntPtr*)( ((byte*) ptr) + sizeof( int ) );
   }

   return result;
}

Note   This particular example could have been accomplished without the use of unsafe code. More specifically, there are other techniques, such as marshaling, that could have been used. But for this purpose we are using unsafe code.

UnsafeFn loops through the 100 items and sums the integer data. As we walk through the buffer, the code needs to step over both the integer and the pointer in each item. In the 32-bit environment this code works fine. However, as we've previously discussed, pointers are 8 bytes in the 64-bit environment, and therefore the code segment shown below will not work correctly, as it relies on a common 32-bit programming assumption: that a pointer is equivalent in size to an integer.

// Increment pointer over pointer
ptr = (IntPtr*)( ((byte*) ptr) + sizeof( int ) );

In order for this code to work in both the 32-bit and 64-bit environment it would be necessary to alter the code to the following.

// Increment pointer over pointer
ptr = (IntPtr*)( ((byte*) ptr) + sizeof( IntPtr ) );

As we've just seen, there are instances where using unsafe code is necessary. In most cases it is required as a result of the managed code's dependency on some other interface. Regardless of the reasons unsafe code exists, it has to be reviewed as part of the migration process.

The example we used above is relatively simple and the fix to make the program work in 64-bit was straightforward. Clearly there are many examples of unsafe code that are more complex. Some will require deep review and perhaps stepping back and rethinking the approach the managed code is using.
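As the earlier note mentioned, marshaling can replace unsafe code here. The following is a sketch of that alternative, with a locally allocated buffer standing in for the hypothetical DLL's: the record stride is computed from sizeof(int) + IntPtr.Size, so the same source walks the packed {int, pointer} items correctly in both environments:

```csharp
using System;
using System.Runtime.InteropServices;

class SafeBufferWalk {
    // Sum the int field of 'count' packed {int32, pointer} records.
    static int SumInts(IntPtr buffer, int count) {
        int stride = sizeof(int) + IntPtr.Size;   // 8 on 32-bit, 12 on 64-bit
        int result = 0;
        for (int idx = 0; idx < count; idx++)
            result += Marshal.ReadInt32(buffer, idx * stride);
        return result;
    }

    static void Main() {
        int count = 100;
        int stride = sizeof(int) + IntPtr.Size;
        IntPtr buffer = Marshal.AllocHGlobal(count * stride);
        try {
            // Fill the buffer with records of {idx, null pointer}.
            for (int idx = 0; idx < count; idx++) {
                Marshal.WriteInt32(buffer, idx * stride, idx);
                Marshal.WriteIntPtr(buffer, idx * stride + sizeof(int), IntPtr.Zero);
            }
            Console.WriteLine(SumInts(buffer, count));   // 0 + 1 + ... + 99 = 4950
        } finally {
            Marshal.FreeHGlobal(buffer);
        }
    }
}
```

The Marshal read/write methods tolerate unaligned addresses, which is what makes this safe for a packed buffer.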

To repeat what you've already read—using unsafe code in your managed application does not mean that migrating to the 64-bit platform will not be possible. Nor does it mean that there will be problems. What it does mean is that you must review all of the unsafe code your managed application has and determine if there will be any issues.

Migration and Marshaling

Marshaling provides a collection of methods for allocating unmanaged memory, copying unmanaged memory blocks, and converting managed to unmanaged types, as well as other miscellaneous methods used when interacting with unmanaged code.

Marshaling is manifested through the .NET Marshal class. The static (Shared in Visual Basic) methods defined on the Marshal class are essential to working with unmanaged data. Advanced developers building custom marshalers, who need to provide a bridge between the managed and unmanaged programming models, typically use most of the methods defined.

Migration considerations

Marshaling poses some of the more complex challenges associated with the migration of applications to 64-bit. Given the nature of what the developer is trying to accomplish with marshaling, namely transferring structured information to, from, or between managed and unmanaged code, we are providing the system with information, sometimes at a very low level, to assist it.

In terms of layout there are two specific declarations that the developer can make, sequential layout and explicit layout; these declarations are typically made through the use of coding attributes.


LayoutKind.Sequential

Let's review the LayoutKind.Sequential definition as supplied in the .NET Framework SDK Help:

"The members of the object are laid out sequentially, in the order in which they appear when exported to unmanaged memory. The members are laid out according to the packing specified in StructLayoutAttribute.Pack, and can be noncontiguous."

We are being told that the layout is specific to the order in which it is defined. Then, all we need to do is make sure that the managed and unmanaged declarations are similar. But, we are also being told that packing is a critical ingredient, too. At this point you won't be surprised to learn that without explicit intervention by the developer, there is a default pack value. As you might have already guessed, the default pack value is not the same between 32-bit and 64-bit systems.

The statement in the definition regarding noncontiguous members refers to the fact that, because there are default pack sizes, data laid out in memory may not be at byte 0, byte 1, byte 2, and so on. Rather, the first member will be at byte 0, but the second member might be at byte 4. The system does this default packing to allow the machine to access members without having to deal with misalignment problems.
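The effect of the default pack size can be observed directly. This sketch (using hypothetical structures) compares Marshal.SizeOf for a default-packed layout and an explicit Pack=1 layout:

```csharp
using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]           // default packing
struct DefaultPacked {
    public byte b;
    public long l;   // aligned to an 8-byte boundary, so 7 pad bytes precede it
}

[StructLayout(LayoutKind.Sequential, Pack = 1)] // explicit 1-byte packing
struct TightPacked {
    public byte b;
    public long l;   // immediately follows b, no padding
}

class PackDemo {
    static void Main() {
        Console.WriteLine(Marshal.SizeOf(typeof(DefaultPacked))); // 16
        Console.WriteLine(Marshal.SizeOf(typeof(TightPacked)));   // 9
    }
}
```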

This is an area where we need to pay close attention to packing while, at the same time, trying to let the system act in its preferred mode.

The following is an example of a structure as defined in managed code, as well as the corresponding structure defined in unmanaged code. You should take careful note of how this example demonstrates setting the pack value in both environments.

[managed c#]
[StructLayout(LayoutKind.Sequential, Pack=1)]
public class XYZ {
      public byte arraysize = unchecked((byte)-1);
      [MarshalAs(UnmanagedType.ByValArray, SizeConst=52)]
      public int[] padding = new int[13];
}

[unmanaged c++]
#pragma pack(1)
typedef struct {
      BYTE arraysize;      // = (byte)-1;
      int  padding[13];
} XYZ;


LayoutKind.Explicit

Let's review the LayoutKind.Explicit definition as supplied in the .NET Framework SDK Help:

"The precise position of each member of an object in unmanaged memory is explicitly controlled. Each member must use the FieldOffsetAttribute to indicate the position of that field within the type."

We are being told here that the developer will be providing exact offsets to aid in the marshaling of information. So, it is essential that the developer correctly specify the information in the FieldOffset attribute.

So, where are the potential problems? Keeping in mind that each field offset is defined based on the sizes of the preceding data members, it is important to remember that not all data type sizes are equal between 32-bit and 64-bit. Specifically, pointers are either 4 or 8 bytes in length.

We now have a case where we may have to update our managed source code to target the specific environments. The example below shows a structure that includes a pointer. Even though we've made the pointer an IntPtr there is still a difference when moving to 64-bit.

    [StructLayout(LayoutKind.Explicit)]
    internal struct FooValue {
        [FieldOffset(0)] public int dwType;
        [FieldOffset(4)] public IntPtr pType;
        [FieldOffset(8)] public int typeValue;
    }

For 64-bit we have to adjust the field offset for the last data member in the structure as it really begins at offset 12 rather than 8.

    [StructLayout(LayoutKind.Explicit)]
    internal struct FooValue {
        [FieldOffset(0)] public int dwType;
        [FieldOffset(4)] public IntPtr pType;
        [FieldOffset(12)] public int typeValue;
    }

The use of marshaling is a reality when complex interoperability between managed and unmanaged code is required. Making use of this powerful capability does not mean that you cannot migrate your 32-bit application to the 64-bit environment. However, because of the complexities associated with marshaling, this is an area where careful attention to detail is required.

Analysis of your code will indicate whether separate binaries are required for each of the platforms and whether you will also have to make modifications to your unmanaged code to address issues like packing.
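One way to avoid maintaining per-platform offsets, sketched below with a hypothetical structure, is to let the marshaler compute the layout by using LayoutKind.Sequential, and to verify the resulting offsets with Marshal.OffsetOf:

```csharp
using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
struct FooValueSeq {
    public int dwType;
    public IntPtr pType;
    public int typeValue;
}

class OffsetDemo {
    static void Main() {
        // The marshaler places each field for the current platform,
        // so no source change is needed between 32-bit and 64-bit.
        Console.WriteLine(Marshal.OffsetOf(typeof(FooValueSeq), "pType"));     // 4 or 8
        Console.WriteLine(Marshal.OffsetOf(typeof(FooValueSeq), "typeValue")); // 8 or 16
    }
}
```

Note that sequential layout honors natural alignment, so on 64-bit the pointer lands at offset 8 rather than 4; the unmanaged declaration must agree with whichever layout you choose.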

Migration and Serialization

Serialization is the process of converting the state of an object into a form that can be persisted or transported. The complement of serialization is deserialization, which converts a stream into an object. Together, these processes allow data to be easily stored and transferred.

The .NET Framework features two serializing technologies:

  • Binary serialization preserves type fidelity, which is useful for preserving the state of an object between different invocations of an application. For example, you can share an object between different applications by serializing it to the Clipboard. You can serialize an object to a stream, to a disk, to memory, over the network, and so forth. .NET Remoting uses serialization to pass objects "by value" from one computer or application domain to another.
  • XML serialization serializes only public properties and fields and does not preserve type fidelity. This is useful when you want to provide or consume data without restricting the application that uses the data. Because XML is an open standard, it is an attractive choice for sharing data across the Web. SOAP is likewise an open standard, which makes it an attractive choice.
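Because XML serialization writes everything as text, the output is inherently platform-neutral. A minimal round-trip sketch, with a hypothetical type:

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

public class Person {        // hypothetical type; XML serialization uses public members
    public string Name;
    public int Age;          // fixed-width managed type
}

class XmlRoundTrip {
    static void Main() {
        Person p = new Person();
        p.Name = "Ada";
        p.Age = 36;

        XmlSerializer serializer = new XmlSerializer(typeof(Person));
        StringWriter writer = new StringWriter();
        serializer.Serialize(writer, p);

        // The XML text deserializes identically on 32-bit and 64-bit.
        Person back = (Person)serializer.Deserialize(new StringReader(writer.ToString()));
        Console.WriteLine(back.Name);   // Ada
        Console.WriteLine(back.Age);    // 36
    }
}
```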

Migration considerations

When we think about serialization we need to keep in mind what we are trying to achieve. One question to keep in mind as you migrate to 64-bit is whether you intend to share serialized information between the different platforms. In other words, will the 64-bit managed application read (or deserialize) information stored by a 32-bit managed application?

Your answer will help drive the complexity of your solution.

  • You may want to write your own serialization routines to account for the platforms.
  • You may want to restrict the sharing of information, while still allowing each platform to read and write its own data.
  • You may want to revisit what you are serializing and make alterations to help avoid some of the problems.

So after all that, what are the considerations with respect to serialization?

  • IntPtr is either 4 or 8 bytes in length depending on the platform. If you serialize the information then you are writing platform-specific data to the output. This means that you can and will experience problems if you attempt to share this information.
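If you choose the first option above, writing your own routine, one sketch is to persist only fixed-width types with BinaryWriter so that 32-bit and 64-bit builds produce identical bytes (the fields here are hypothetical):

```csharp
using System;
using System.IO;

class PortableState {
    // Persist fixed-width fields only; never a raw IntPtr.
    static byte[] Save(long offset, int count) {
        using (MemoryStream stream = new MemoryStream())
        using (BinaryWriter writer = new BinaryWriter(stream)) {
            writer.Write(offset);   // always 8 bytes
            writer.Write(count);    // always 4 bytes
            writer.Flush();
            return stream.ToArray();
        }
    }

    static void Main() {
        byte[] data = Save(42L, 7);
        Console.WriteLine(data.Length);   // 12 bytes on every platform
    }
}
```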

If you consider our discussion in the previous section about marshaling and offsets, you might wonder how serialization addresses packing. For binary serialization, .NET internally performs correct unaligned access to the serialization stream by using byte-based reads and handling the data correctly.

As we've just seen, the use of serialization does not prevent migration to 64-bit. If you use XML serialization you have to convert from and to native managed types during the serialization process, isolating you from the differences between the platforms. Using binary serialization provides you with a richer solution but creates the situation where decisions need to be made regarding how the different platforms share serialized information.


Migration to 64-bit is coming and Microsoft has been working to make the transition from 32-bit managed applications to 64-bit as simple as possible.

However, it is unrealistic to assume that you can just run 32-bit code in a 64-bit environment without looking at what you are migrating.

As mentioned earlier, if you have 100% type safe managed code then you really can just copy it to the 64-bit platform and run it successfully under the 64-bit CLR.

But more than likely the managed application will be involved with any or all of the following:

  • Invoking platform APIs via p/invoke
  • Invoking COM objects
  • Making use of unsafe code
  • Using marshaling as a mechanism for sharing information
  • Using serialization as a way of persisting state

Regardless of which of these things your application is doing, it is going to be important to do your homework: investigate what your code is doing and what dependencies you have. Once you have done this homework, you will have to look at your choices and do any or all of the following:

  • Migrate the code with no changes.
  • Make changes to your code to handle 64-bit pointers correctly.
  • Work with other vendors, etc., to provide 64-bit versions of their products.
  • Make changes to your logic to handle marshaling and/or serialization.

There may be cases where you decide not to migrate the managed code to 64-bit, in which case you have the option to mark your assemblies so that the Windows loader can do the right thing at startup. Keep in mind that downstream dependencies have a direct impact on the overall application.


You should also be aware of the tools that are available to assist you in your migration.

Today Microsoft has a tool called FxCop, a code analysis tool that checks .NET managed code assemblies for conformance to the Microsoft .NET Framework Design Guidelines. It uses reflection, MSIL parsing, and call-graph analysis to inspect assemblies for more than 200 defects in the following areas: naming conventions, library design, localization, security, and performance. FxCop includes both GUI and command-line versions of the tool, as well as an SDK for creating your own rules. For more information, refer to the FxCop Web site. Microsoft is in the process of developing additional FxCop rules that will provide information to assist you in your migration efforts.

There are also managed library functions to assist you at runtime in determining what environment you are running in.

  • System.IntPtr.Size—to determine if you are running in 32-bit or 64-bit mode
  • System.Reflection.Module.GetPEKind—to programmatically query an .exe or .dll to see if it is meant to run only on a specific platform or under WOW64
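A short sketch exercising both checks (the values printed depend on the platform and on how the inspected module was compiled):

```csharp
using System;
using System.Reflection;

class EnvironmentChecks {
    static void Main() {
        // 4 in a 32-bit process, 8 in a 64-bit process.
        Console.WriteLine(IntPtr.Size);

        // Ask how the module containing System.Object is marked.
        PortableExecutableKinds peKind;
        ImageFileMachine machine;
        typeof(object).Module.GetPEKind(out peKind, out machine);
        Console.WriteLine(peKind);
        Console.WriteLine(machine);
    }
}
```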

There is no specific set of procedures to address all of the challenges that you could run into. This whitepaper is intended to raise your awareness to these challenges and present you with possible alternatives.