Future Breaking Changes, Part Three

As I said earlier, we hate causing breaking changes in our product, the C# compiler, because they cause our customers pain.

Said customers are also software developers, and presumably they hate causing breaking changes for their customers as much as we do. We want to throw our customers into the Pit of Success and give them tools which encourage them where possible to prevent breaking changes. This leads to some subtle issues in language design.

Pop quiz. What does this program do?


// Alpha.DLL
namespace Alpha {
    public class Charlie {
        public void Frob(int i) { System.Console.WriteLine("int"); }
        // etc.
    }
}


// Bravo.EXE, references Alpha.DLL.
namespace Bravo {
    public class Delta : Alpha.Charlie {
        public void Frob(float f) { System.Console.WriteLine("float"); }
        // etc.
        public static void Main() {
            Delta d = new Delta();
            d.Frob(1);
        }
    }
}


Most people look at this program and say “clearly Charlie.Frob(int) is the best possible match for the call, so that is called.” A compelling argument, but wrong. As the standard says, “methods in a base class are not candidates if any method in a derived class is applicable.”

In other words, the overload resolution algorithm starts by searching the class for an applicable method. If it finds one then all the other applicable methods in deeper base classes are removed from the candidate set for overload resolution. Since Delta.Frob(float) is applicable, Charlie.Frob(int) is never even considered as a candidate. Only if no applicable candidates are found in the most derived type do we start looking at its base class.
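This also suggests the escape hatch: if the caller really wants the base class overload, casting the receiver restarts the candidate search at the base class. Here is a minimal sketch, collapsing the two assemblies above into one file for the sake of a runnable example:

```csharp
using System;

class Charlie {
    public void Frob(int i) { Console.WriteLine("int"); }
}

class Delta : Charlie {
    public void Frob(float f) { Console.WriteLine("float"); }

    static void Main() {
        Delta d = new Delta();
        // Delta.Frob(float) is applicable, so Charlie.Frob(int) is
        // never even considered; the int argument converts to float.
        d.Frob(1);              // prints "float"
        // With a receiver of compile-time type Charlie, only Charlie's
        // methods are candidates, so the exact-match overload wins.
        ((Charlie)d).Frob(1);   // prints "int"
    }
}
```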

Why on earth would we do that? Clearly in this example the base class member is the far better match, so why wouldn’t we even consider it?

It is instructive to consider what happens in a world where we do implement the rule “pick the best applicable candidate from any base”. Suppose we did that.

In the previous version of Alpha.DLL, Charlie did not have a method Frob(int). When Bravo Corporation wrote Bravo.EXE, every call inside class Delta to method Frob was a call to Delta.Frob(float). Then one day Alpha Corporation did customer research and discovered that a lot of their customers like to frob integers. They added this feature in their latest version. Bravo Corporation gets the new version of Alpha.DLL, recompiles Bravo.EXE, and suddenly their carefully developed code is sometimes calling a method that they didn’t write, which does something subtly incompatible with their implementation.

Alpha Corporation has just pushed a breaking change onto Bravo Corporation, which, if they don’t catch it in time, may now be pushing a subtly broken version onto their customers in turn, and hey! we’re in the Pit of Despair again!

This particular family of breaking changes is called the "brittle base class problem"; there are many versions of it and different languages deal with it in different ways. Lots of work went into the design of C# to try and make it harder for people to accidentally cause brittle base class problems. That is why we make you distinguish between the original definition of a virtual method and an overriding method. That is why we make you put “new” on methods which shadow other methods. All these semantics are in part to help prevent, mitigate or diagnose brittle base class issues and thereby prevent accidental breaking changes in C# code.
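The virtual/override/new distinction can be seen in a small sketch. An overriding method participates in the base class's virtual dispatch; a shadowing method must say "new" (or the compiler issues warning CS0108) and is invisible through a base class reference:

```csharp
using System;

class Base {
    public virtual void M() { Console.WriteLine("Base.M"); }
}

class Overrider : Base {
    // "override" explicitly continues Base's virtual method.
    public override void M() { Console.WriteLine("Overrider.M"); }
}

class Shadower : Base {
    // "new" explicitly declares an unrelated method that happens to
    // share the name; omitting it draws warning CS0108.
    public new void M() { Console.WriteLine("Shadower.M"); }
}

class Program {
    static void Main() {
        Base b1 = new Overrider();
        b1.M(); // prints "Overrider.M": virtual dispatch finds the override

        Base b2 = new Shadower();
        b2.M(); // prints "Base.M": the shadowing method is not reached
                // through a Base reference
    }
}
```

Forcing the author to write "override" or "new" means that when a base class vendor later adds a method with a colliding name, the derived class's intent is already recorded in the source, and the compiler can warn about the collision instead of silently changing behavior.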

Next time on FAIC: some psychic debugging. Then a bit later I want to talk more about breaking changes, this time in the context of thinking about covariance and contravariance.

Comments

  • Anonymous
    September 04, 2007
    Eric, operator overload resolution changed in a breaking way between v1 and v2. Code compiles in both v1 and v2 but produces different results. Was that just a mistake on MSFT's part or was there a reason?

  • Anonymous
    September 04, 2007
    The comment has been removed

  • Anonymous
    September 04, 2007
    Re: Breaking change between v1 and v2: I'm unsurprised to learn that there was such a change. But since that was years before I was on the C# team, I'm unable to determine from the vague description what the issue was. Can you be more specific?

  • Anonymous
    September 04, 2007
    Re: overriding a virtual: Yes, that is deliberate. A virtual method is considered to be a member of the class which declares it, not a class which overrides it.  

  • Anonymous
    September 04, 2007
The concept of a "breaking change" is a non sequitur. As a developer, I currently write code targeting C# 2. In a future version (C# 3 or 4), even if the syntax changes slightly, my code targets C# 2, thus I logically continue compiling it with a C# 2 compiler. Changes in future versions are immaterial to the version that my code targets. If one day I decide to upgrade to a newer version, then I also know that I must port some of my code. Once porting is done, I will need to run all of my unit and functional tests to ensure the port was successful. That is to be expected. I suppose you are suggesting a scenario somewhere between these two premises. Such as a developer who writes code targeting C# 2 and one day decides to switch to a future version without the intention or desire to port, test, and verify that the code still works as before. While that is irresponsible, it is simply not a realistic option. Shame on the developer that tries to get away with that. A breaking change is a non-concept. If you want to evolve the language, then do so. But "breaking changes" is not an excuse either way.

  • Anonymous
    September 04, 2007
    The comment has been removed

  • Anonymous
    September 05, 2007
    The comment has been removed

  • Anonymous
    September 05, 2007
I'm glad you enjoy the posts. But done? No way. In the fall of 1993 I applied to be an intern at Microsoft. One of the promotional brochures that the recruiters gave me had a picture of Scott Wiltamuth, then a PM on the VB team. The caption was something like "Have we implemented all the cool language features yet? No way, we've barely scratched the surface." Scott (who incidentally is now my manager's manager's manager's manager) got a lot of teasing about being the "VB Poster Boy" over the years, but he was right then and the sentiment is still right now. There is so much more we can do with this language.

    Ideas from Spec# are interesting, yes. Can we make contracts first-class in the language? Should we extend the type system? Maybe!

    What about all the real-world feedback we are getting about the power and limitations of dynamic languages? Can we learn from that and make C# better? Maybe!

    What about metaprogramming? There are powerful, horrible things I can do in C++ that are hard to do in C# because C++ has a (terrible!) metalanguage built in. Can we learn from the successes and failures of that and design a sensible metalanguage with all the power but none of the drawbacks of the C++ metalanguage? Maybe!

    What about design patterns? Design patterns only exist because they make up for a deficiency in a language. If C# had double virtual dispatch then the visitor pattern would be trivial. Can we look at common design patterns and come up with more powerful abstractions behind them? Maybe!

    What about all the ideas coming out of research languages? There is so much we can still do here. That surface is still only slightly scratched.

  • Anonymous
    September 05, 2007
I am almost sure something changed in some KB update (not even SP) so I can't reproduce the problem. :( The difference I still see is this (compiles in v1 and doesn't in v2):

    class Variant {
        public static implicit operator bool(Variant v) { return false; }
        public static implicit operator Variant(bool b) { return new Variant(); }
        public static Variant operator |(Variant v1, Variant v2) { return new Variant(); }
        public static bool operator |(Variant v1, bool v2) { return false; }
    }
    class UpgradeAndBeHappy {
        static void Main(string[] args) {
            Variant a = new Variant();
            if( a || true)

  • Anonymous
    September 05, 2007
>> If one day I decide to upgrade to a newer version,
> If YOU decide. Like, it's YOUR decision.

    I was not clear, but I did not intend to limit the statement to myself as a single individual. In fact, my team has had this experience several times. When the manager decided that we were going to move from VS6 to VS2003, it took almost four months to completely port and verify most of our C++ codebase. There were a few projects, though, that we deemed would take too long to port and they remained VS6 projects for much longer. The process was then similarly repeated with the move to VS2005 as well.

    If I or even my manager decides that we are to move to a newer version, then appropriate time is needed for porting and verification. That may take days or even months depending on the projects. If the transition will take too long, then revert back to the previous working edition. Again, you may not be the person to decide such a thing. In which case the manager will need to either 1) allocate appropriate time for transition or 2) revert to the previous working edition. A manager who does not understand this is not in the right job.

    You may disagree, but I stand by my original statement and still assert that "breaking changes" are immaterial to the development process. So I am not worried about breaking changes. In fact, I expect them, no matter how careful you are to minimize them. If you believe a feature is compelling and valuable, then introduce it. Whatever necessary porting will be done during the appropriate transition phase.

  • Anonymous
    September 06, 2007
    Well then where you see me say "breaking change" read "a change which greatly increases the cost of porting to a new version of the compiler", if that's how you prefer to think of it.  We want to keep your porting costs down, and that seems very "material" to me.

  • Anonymous
    September 06, 2007
    The comment has been removed

  • Anonymous
    September 08, 2007
    The comment has been removed

  • Anonymous
    September 09, 2007
I have to agree with Allan and Phil. Breaking changes are just a reality of the business that we are in. I think that Allan makes an excellent point in observing that code is written for a specific version and that moving to a new version indeed requires porting. Minimize the time required to port as best as possible, but the reality is that some time will always be necessary. I would expect warnings if my old code will not function the same as before. However, unit tests should just as easily tell me the same thing. Breaking changes really should not be used as an excuse for not implementing desired features.

  • Anonymous
    September 16, 2007
Eric, A problem that is almost identical to the 'brittle base class problem' has in fact be introduced into the language, in the way extension methods are implemented.  I'd be very curious to know why Microsoft avoided the problem in overload resolution, but have not avoided it in resolving extension methods.  Details are here: http://dotnet.agilekiwi.com/blog/2006/04/extension-methods-problem.html

  • Anonymous
    September 16, 2007
    Opps "has be introduced" should read "has been introduced"

  • Anonymous
    September 18, 2007
    The comment has been removed

  • Anonymous
    November 09, 2007
    I forgot to mention, I would also be very interested in hearing your response to John Rusk's post about extension methods, in light of how much you hate causing breaking changes in your product. Don't extension methods simply re-introduce another version of the brittle base class problem that you put so much effort into minimizing in C#?

  • Anonymous
    November 09, 2007
    Yes, they do. (Briefly, the problem is that a third party adding a new method on their class can suddenly and invisibly "hide" an extension method that was previously in use, thereby changing the meaning of a program.) The original draft of the C# 3.0 specification included this caveat: "Extension methods are less discoverable and more limited in functionality than instance methods. For those reasons, it is recommended that extension methods be used sparingly and only in situations where instance methods are not feasible or possible." However it seems that this didn't make it into the final edit. It is unfortunate that extension methods introduce a potential point of breaking change for customer programs; we take this down side of the feature very seriously and weighed it against the potential benefits of the feature.  We believe that on balance, the benefits compellingly outweigh the potential for breaks, particularly if extension methods are used sparingly and carefully.
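The hiding described in the comment above can be sketched in a few lines. The class and method names here are hypothetical, chosen just to illustrate the mechanism:

```csharp
using System;

public class Widget {
    // Suppose version 2 of the vendor's library adds this method.
    // Uncommenting it silently changes what w.Frob() means below,
    // with no change to the calling code and no warning.
    // public void Frob() { Console.WriteLine("instance method"); }
}

public static class WidgetExtensions {
    public static void Frob(this Widget w) {
        Console.WriteLine("extension method");
    }
}

class Program {
    static void Main() {
        Widget w = new Widget();
        // Extension methods are only considered when no applicable
        // instance method exists, so an upgrade that adds an instance
        // Frob() wins at the next recompile.
        w.Frob(); // prints "extension method" (today)
    }
}
```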

  • Anonymous
    November 09, 2007
    Eric, I like your description of the problem. It's much more concise than my own attempts at describing the same problem :-) It's a shame that caveat didn't make it into the final edit. What is Microsoft's view on solutions to prevent this problem?  Have solutions been considered and discarded?  Are solutions under consideration for future releases?  It seems like a solvable problem to me.  (In fact, I proposed one hypothetical solution here - http://dotnet.agilekiwi.com/blog/2006/04/extension-methods-solution.html . It would completely prevent unexpected changes in program meaning. )

  • Anonymous
    February 01, 2008
    Eric,      From the few references I've seen, what you termed the "BRITTLE base class problem" is more commonly called the "FRAGILE base class problem". The latter yields more hits in searches. The latter term has a Wikipedia page: http://en.wikipedia.org/wiki/Fragile_base_class_problem