

Behavior of 1.0/1.1 managed apps on 64bit machines

As I alluded to in my previous post there are multiple ways that we can go in terms of supporting legacy 1.0/1.1 assemblies on a Win64 machine. The context of the 1.0/1.1 support story is made somewhat simpler by the current plan of having both a v2.0 32bit CLR and a v2.0 64bit CLR on the box but no 1.0/1.1 CLR bits.

I mentioned that the 1.0/1.1 compilers didn't know anything about “bitness”. Basically they spit out a PE image that said “Hey! I'm managed! Run me with the CLR!” (gross simplification), whereas the v2.0 compilers produce images that range from “Hey! I'm managed, and I can run everywhere!!” to “Hey! I'm managed and I only run on x86!” etc...
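Those platform claims live as flag bits in the image's CLI header. Here is a minimal sketch of how they decode (the flag constants are the real ECMA-335 / corhdr.h values; the decode function itself is purely illustrative). The catch is that a 1.0/1.1 compiler also sets the ILONLY bit, so from the flags alone a legacy image is indistinguishable from a v2.0 "runs everywhere" image:

```python
# CLI header flag bits, as defined in ECMA-335 / corhdr.h.
COMIMAGE_FLAGS_ILONLY        = 0x00000001
COMIMAGE_FLAGS_32BITREQUIRED = 0x00000002

def platform_claim(flags):
    """Illustrative decode of what a managed image claims about bitness."""
    if flags & COMIMAGE_FLAGS_32BITREQUIRED:
        return "x86 only"        # "I'm managed and I only run on x86!"
    if flags & COMIMAGE_FLAGS_ILONLY:
        return "runs everywhere" # pure MSIL, no platform restriction claimed
    return "mixed code"          # image also contains native code, e.g. C++

print(platform_claim(0x00000003))  # x86 only
print(platform_claim(0x00000001))  # runs everywhere
```

Note that a 1.0/1.1 image reports "runs everywhere" here too, which is exactly why the question below exists.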

This brings us to the fundamental question of this post -- what to do with 1.0/1.1 assemblies?

Option 1: call them “Legacy” assemblies since they don't know about “bitness”. Require them to run in the WOW64 under the 32bit CLR as we can't say for sure that the developer who created them was thinking about 64bit compatibility when they were created (remember that many of these were created years before even a 64bit alpha of .NET was available at PDC last year). Additionally, make the loader get angry and spew something along the lines of “BAD_IMAGE_FORMAT” if you try to load a legacy assembly in a native 64bit managed process just as if you had tried to load a v2.0 assembly marked x86 only.

Option 2: treat them like the v2.0 notion of MSIL assemblies and allow them to be used from both 32bit and 64bit managed processes. By default, if they are an exe, kick off the 64bit CLR when someone tries to start them. This would cause them to run as a 64bit process even though their creators probably didn't have that potential in mind when the code was written and tested.
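To make the two options concrete, here is a toy model of the load decision on a 64bit machine. All names are invented; this sketches the policy debate, not the actual loader implementation:

```python
from enum import Enum

class Decision(Enum):
    RUN_64BIT = "load in the 64bit CLR"
    RUN_WOW64 = "run in the WOW64 under the 32bit CLR"
    BAD_IMAGE = "fail the load with BAD_IMAGE_FORMAT"

def load_decision(is_legacy, x86_only, in_64bit_process, option):
    """Sketch of Option 1 vs Option 2 on a 64bit machine.

    is_legacy:        a 1.0/1.1 image (carries no bitness information)
    x86_only:         a v2.0 image explicitly marked x86
    in_64bit_process: True when a native 64bit managed process tries to
                      load the image; False when launching it as an exe
    option:           1 = conservative, 2 = treat legacy as v2.0 MSIL
    """
    if x86_only or (is_legacy and option == 1):
        # Treated like v2.0 x86-only code: fine in the WOW64,
        # rejected inside a native 64bit process.
        return Decision.BAD_IMAGE if in_64bit_process else Decision.RUN_WOW64
    # v2.0 MSIL, or a legacy image under Option 2: default to 64bit.
    return Decision.RUN_64BIT
```

Under Option 1 a legacy exe launches in the WOW64 and a 64bit process refuses to load the assembly; under Option 2 the very same image defaults to the 64bit CLR.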

 

Cases can be made for both sides. Right now the more conservative approach is “Option 1”, which is what we are leaning towards. But there are definitely some negatives to that, the primary one in my mind being that it makes the transition to 64bit harder for groups that have dependencies on a lot of managed code that they don't own but are willing to do the testing legwork to make sure it works in 64bit mode anyway. In effect it makes 1.0/1.1 managed code assemblies much like 32bit native components as dependencies for moving your app to 64bit, because in that scenario we won't let you load 1.0/1.1 assemblies in your 64bit process.

One of the great things about managed code is that frequently there isn't much, if any, work to be done to move it to 64bit. But given “Option 1” above we would at least require the work of a recompile (though one could imagine a frightfully dangerous tool that would modify 1.0/1.1 headers to look like v2.0 headers, pretending to be a v2.0-compiled MSIL image... Please don't do this!!). If you don't own the managed code you're using, that means waiting for whoever does to recompile and give you the properly tagged version before you can move your app to 64bit.

Mind you, that is probably better than the alternative. If we were to just load up 1.0/1.1 images in a 64bit process, expecting that they should be the equivalent of v2.0's MSIL (which is what the compilers are currently producing as a default), you could end up with all manner of random execution failures, usually related to calling into some native 32bit code or other... “Option 2” would allow those who are willing to do the legwork in testing to do their due diligence, test their application thoroughly in a 64bit environment even though it might contain 1.0/1.1 components, and be able to say with reasonable confidence that their customers won't have problems running 64bit native. The fact that I said “willing to do the legwork in testing” and “due diligence” etc. should be setting off huge danger signals in your head. How many people are willing to thoroughly test some component they paid for; isn't that part of what you paid for??

There are of course all manner of “in-between” scenarios, few of which are supportable or justifiable, so for the purposes of this debate let's stick to these two options.

 

The main reason I started writing this post however wasn't to make up your mind but to poll your thoughts…

Thoughts?

Comments

  • Anonymous
    March 13, 2004
    For executables, the conservative approach makes sense, but I think that for libraries you should assume them to be "neutral". After all, it is the responsibility of the app to test with the libraries. Of course, in more dynamic scenarios (e.g. user-specified plugins) this isn't so clear cut, but in any case, I would definitely not cut off the ability to use 1.0/1.1 libraries in 64bit.

  • Anonymous
    March 13, 2004
    I think that the conservative approach should only be taken if P/Invoke is actually used within the 1.0/1.1 assembly. In a pure MSIL assembly using no pointers or other advanced features ("safe code"), the code should run without changes on 64bit, shouldn't it?

  • Anonymous
    March 13, 2004
    The comment has been removed

  • Anonymous
    March 13, 2004
    The problem with the conservative approach (option 1) is that one can't ship a single binary that works now with .NET 1.1 and will also work with 64-bit applications.

    Even if I've tested my Everett library or application on the 64-bit Whidbey CLR, I have to create (and distribute) a separate binary(-ies) for Whidbey. If I compile the assembly with Everett, 64-bit Whidbey would not load it. If I compile the assembly with Whidbey, Everett would not load it. Thus I need two separate binaries.

    That distribution problem can be a serious blocker for adoption of 64-bit CLR usage.

  • Anonymous
    March 13, 2004
    Michael -- you're correct in your assessment (assuming we go with "option 1"). But given the conservative approach you would actually need to create a separate binary(-ies) just for testing purposes, at which point distributing it seems reasonable. Assuming that option, you would still be able to run your application on the 64bit platforms under the 32bit CLR (which, under 64bit extended architectures, can still be very performant).

    While I was debating this very issue the other day with the 64bit CLR test lead he made the valid point that the two "adoption blocker" issues we're dealing with at this point are "Windows64" adoption blocker vs. "64bit CLR" adoption blocker. Fundamentally there is a hierarchy there, not just for political reasons (said as a CLR dev) but also because without a Windows64 there isn't any need for a 64bit CLR...

    If people install 64bit Windows and then run into annoying crashes and such when they run random 1.0/1.1 managed code that they might have downloaded from some site or other (which may not have been tested on 64bit), that is a Windows64 platform adoption blocker.

    If on the other hand we are conservative we end up with a situation where legacy code you install just works, albeit in a 32bit process. People who really need 64bit (a need that is dire for some but debatable for most) initially have to do some legwork to get there (namely recompiling and presumably testing). And, as we move forward more and more stuff runs natively under 64bit. Like the Win16 to Win32 transition, we don't anticipate that everyone's code will make the leap immediately.

    Either way, I really appreciate the comments!! Like I said this behavior is still under debate.

  • Anonymous
    March 14, 2004
    Why not create a config policy option to switch between these cases?

    The developer could apply a publisher policy, and the administrator could customize it too. Is there any reason not to do so?
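    A hypothetical sketch of what such a config policy could look like. No such element exists in any shipped .NET config schema; every name below is invented purely to illustrate the commenter's proposal:

```xml
<!-- Hypothetical only: the element and attribute names here are
     invented to illustrate the proposed bitness policy. -->
<configuration>
  <runtime>
    <!-- publisher (or administrator) asserts that a given 1.0/1.1
         assembly has been tested and is safe to load as 64bit -->
    <bitnessPolicy assembly="SomeVendor.Widgets" allow64bit="true" />
  </runtime>
</configuration>
```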

  • Anonymous
    March 14, 2004
    The comment has been removed

  • Anonymous
    March 15, 2004
    I'm afraid I'm with Jason here. If I'm running in a CLR environment or a JVM environment, I shouldn't have to care what's under the hood; the runtime should be taking care of that for me. I don't care about big endian or little endian if I target the Compact Framework, Rotor or Mono, so why should a step up be a big deal?

    In fact the idea of there being targeted code in a CLR environment strikes me as a bit of a betrayal of the whole idea. Yes, I do realise the problems with P/Invoke assumptions, but darnit, it just feels wrong. If Windows 95 managed the thunking layer for Windows 3.1 code, why can't there be a CLR thunk? Or maybe I just like to say "thunk" <g>

  • Anonymous
    March 15, 2004
    The comment has been removed

  • Anonymous
    March 15, 2004
    The comment has been removed

  • Anonymous
    March 15, 2004
    Barry -- good point about small software houses. And my reply would be that if we go with the option of running 1.0/1.1 apps under the WOW64 in 32bit mode even though they're on 64bit boxes then we're doing what you're asking for? We're making it safe... Are you in agreement with this? Or am I missing the point?

    In that scenario you don't have to test on 64bit machines (if your app works on x86 and breaks on the 32bit CLR in the WOW64 then that is a CLR or Windows bug, not a bug in your code). And when you're ready to test on 64bit machines (they're getting more prevalent, www.hp.com has an AMD64 system (a450e series) at US$720 when I just looked it up) you can, and then ship a binary that is tagged as 64bit safe (read my prior entry on WOW64 for a treatment of the topic of compile-time bitness tagging; the current default implies 64bit safe because most code is) and it will run natively when people run it on their 64bit machines... This gives you the safety now of people being able to use your app the way you intended it to be used when you wrote the code, even if they're on a machine that you haven't tested on...

    In writing my response here I've realized that I'm not really sure which definition of "safe" you mean, in terms of options to go with in this debate, which are you trying to argue:

    a) all 1.0/1.1 apps should run as 64bit native, but the CLR should go out of its way to be extraordinarily careful with them such that no matter what they do they don't break?

    b) all 1.0/1.1 apps need to run in a safe environment, and since they couldn't possibly have been tested with 64bit machines when they were created, running in a "safe" mode under the WOW64 and the 32bit CLR is acceptable, as they would then run as expected by the application developer

    c) something else: <please elaborate>

    I've been proposing option "b", which I believe gives the "safety" that you desire, but I could be wrong in my assumptions about your desire.


    As for your final thought: yes, but for a number of reasons I generally believe that falls under a potentially "very" fragile implementation category that has the potential to take away from rather than add to the "safety" we're trying to dial into the runtime.

    p.s. don't take my S2000 argument to mean that we want to be the S2000 of runtimes... We definitely don't want to be the runtime with the least safety dialed in out of the factory!! But at this point we may not want to be a Honda Accord either.

  • Anonymous
    March 15, 2004
    I'm for safe in this instance, so WOW it. But of course then you lose out on the 64bit goodness.

    Wouldn't it be possible to look inside for things like P/Invoke, and if an exe is well behaved then run it in 64bit mode, for speed/extra memory/cool funkiness? If it looks like it's doing something bad, then isolate it and WOW it.

    However ... what will happen if you have a 32bit assembly which contains, say, some business objects, and a 64bit assembly wants to use the objects within? Can the 64bit CLR access objects hosted in the 32bit CLR?

    I'd prefer option b. And a free MSN universal subscription and AMD machine for testing please.

    (As an aside, what CLR will Yukon on a 64bit platform be using for its stored procedures?)

  • Anonymous
    March 16, 2004
    The comment has been removed

  • Anonymous
    March 16, 2004
    I think the first two posters were right here. The 32bit CLR already knows and checks for verifiable code (type safe, no pointers, no P/Invoke). Shouldn't verifiable 1.x code be able to run on 64bit without any worries? (If not, I'd really like to know why, and others probably with me.) Anyway, wasn't this the promise made to developers? Isn't that why Java already runs on 64bit, without recompiling?

    For libraries containing non-verifiable code, a recompile would be a pain, but reasonable; for fully verifiable managed libraries I can't think of any justification. I'm mainly talking libraries here; .exe's already bind to their original platform (1.0 or 1.1), so it would be no surprise if they're treated conservatively and remain 32bit.

  • Anonymous
    March 16, 2004
    So, while I wrote some of the code that controls whether or not exes get loaded into the 64bit CLR when they start up, I didn't write the code that controls the loading of libraries used at runtime by 64bit processes, and it looks like I've been writing the wrong thing here in regard to the way we're leaning with that implementation. I just had the test lead for 64bit stop by my office on his way to somewhere else and say "no... dude, you can load libraries, it's only exes which we kick into the WOW64" (in reference, of course, to our current implementation, and therefore the incorrect info I've been piping out on this blog <embarrassed>). An interesting statement for him to make, as when we were debating this issue the other day that was what I was pushing for and I got the feeling he was pushing back... Turns out I got the wrong read.

    I'm going to talk to him in a little bit and clear things up as to what our current (read: one possible and probable implementation of what we will ship in v2.0) implementation is... The size of this product sometimes amazes me... I'll post an update when I get a fuller story, but given the thoughts I'm seeing shared in this blog, this difference will be pleasantly received.

  • Anonymous
    March 16, 2004
    Josh Williams:
    > more trouble than it's worth

    What troubles did you mean? The only one I can imagine is loader-time config searching.

    This is really a technical coding task. Of course it costs development time, but it is a one-time payment.

    On the other side, we would have a universal, flexible solution that can provide the best granularity of support for any existing case.


    Examples.

    If you have a third-party library, you can test its 64bit safety on your own and set the config flag. You do not need to wait for the third party to recompile.

    If you are a library or app producer, you can easily and explicitly define 64bitness rules. This doesn't need a 2.0 recompile and doesn't hurt .NET 1.1 compatibility.

    If you, as a producer, have an existing app, you can test it and, if it conforms with 64bitness, provide a policy to your customers. Or you can patch some 64bit-unsafe assemblies and send customers these patches, with the app policy, in one zip as a package.

  • Anonymous
    March 18, 2004
    Mihailik -- sorry for taking so long to respond, things have been busy at work.

    I agree that it's really a technical task; the questions I have (which I have not spent time researching at this point and therefore cannot answer) include:

    - how would this change in publisher policy interact with 1.0/1.1 runtimes on 32bit boxes
    - how does this change interact with the 64bit CLR on 64bit boxes (the obvious question) and with the 32bit CLR on 64bit boxes running in the WOW.
    - do we enable only the x86 vs. neutral specification, or do we also allow specifying that a 1.0/1.1 assembly is 64bit only (and yes, it is possible to write 64bit-specific code in a 1.0/1.1 assembly: PInvoke, unsafe, etc...)
    - how does versioning work with this publisher policy

    And there are more... At this point I just don't know. But I have made sure that your thoughts are getting to the right people.

    -josh

  • Anonymous
    March 18, 2004
    Whether an app always loads as 32bit, always as 64bit, or is allowed to run on either OS platform should be configurable in the application manifest. You definitely need to provide a way for developers to mark their .NET 1.1 applications as "fully tested for use on .NET64".

    -Danny


  • Anonymous
    July 26, 2004
    The comment has been removed

  • Anonymous
    May 06, 2005
    In a number of blog entries I have discussed how on 64-bit machines .Net applications can run as either...

  • Anonymous
    October 30, 2006
    This article covers some 64 bit aspects regarding managed code and COM+ applications. The 64 bit info


  • Anonymous
    May 20, 2007
    PingBack from http://millicentboh.wordpress.com/2007/05/21/flipping-bits-on-managed-images-to-make-them-load-with-the-right-bitness/
