Constraints are not part of the signature
What happens here?
class Animal { }
class Mammal : Animal { }
class Giraffe : Mammal { }
class Reptile : Animal { }
…
static void Foo<T>(T t) where T : Reptile { }
static void Foo(Animal animal) { }
static void Main()
{
Foo(new Giraffe());
}
Most people assume that overload resolution will choose the second overload. In fact, this program produces a compile error saying that T cannot be Giraffe. Is this a compiler bug?
No, this behaviour is correct according to the spec. First we attempt to determine the candidate set. Clearly the second overload is a member of the candidate set. The first overload is a member of the candidate set if type inference succeeds.
The method type inference algorithm considers only whether the method type arguments can be consistently inferred from the types of the arguments. Method type inference cares not a bit about whether the resulting method is malformed in some other way. Its only job is to work out the best possible type arguments given the arguments to the method. Clearly the best type for T is Giraffe, so that’s what we infer.
So we now have two methods in the candidate set: Foo&lt;Giraffe&gt; and the second overload. Which is better?
Again, overload resolution looks only at the arguments you passed in, and compares them to the types of all the candidates. The argument is of type Giraffe. We have a choice: argument of type Giraffe goes to parameter of type Giraffe, or argument of type Giraffe goes to parameter of type Animal. Clearly the former is better; it’s an exact match.
Therefore we discard the second overload because it is worse than another candidate. That leaves a single candidate, which is an exact match. Only then, after overload resolution, do we check to see whether the generic constraints are violated.
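A quick sketch makes that ordering concrete: supplying the type argument by hand produces essentially the same complaint as letting inference supply it, because in both cases the constraint check runs only after a single method has been chosen. (The error text in the comment below is approximate and varies by compiler version.)

static void Main()
{
    // Overload resolution has already committed to Foo<Giraffe> before the
    // "where T : Reptile" constraint is ever consulted, so spelling the type
    // argument out gives roughly the same error as Foo(new Giraffe()):
    // "The type 'Giraffe' cannot be used as type parameter 'T' ..."
    Foo<Giraffe>(new Giraffe());
}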
When I try to explain this to people, they often bring up this portion of the specification as evidence that I am wrong, wrong, wrong:
If F is generic and M has no type argument list, F is a candidate when:
1) type inference succeeds, and
2) once the inferred type arguments are substituted for the corresponding method type parameters, all constructed types in the parameter list of F satisfy their constraints, and the parameter list of F is applicable with respect to A. [emphasis added]
This appears at first glance to say that Foo<Giraffe> cannot be a candidate because the constraints are not satisfied. That is a misreading of the specification; the bit “in the parameter list” is referring to the formal parameter list, not the type parameter list.
Let me give you an example of where this rule comes into play, so that it’s clear. Suppose we have
class C<T> where T : Mammal {}
…
static void Bar<T>(T t, C<T> c) where T : Mammal {}
static void Bar(Animal animal, string s) { }
…
Bar(new Iguana(), null);
Type inference infers that Bar&lt;Iguana&gt; might be a candidate. But that would mean that we are calling a method that converts null to C&lt;Iguana&gt;, which violates the constraint in the declaration of C&lt;T&gt; (it is the constructed type C&lt;Iguana&gt; itself that is malformed, because Iguana is not a Mammal; the null argument is irrelevant to that check). Therefore the results of type inference are discarded, and Bar&lt;Iguana&gt; is not added to the candidate set.
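Here is one way the whole example might be assembled so that it compiles; it assumes an Iguana class deriving from Reptile (an Animal, but not a Mammal), which is what the elided declarations above suggest. The call binds to the non-generic Bar, because Bar<Iguana> never made it into the candidate set:

class Animal { }
class Mammal : Animal { }
class Reptile : Animal { }
class Iguana : Reptile { }          // assumed: an iguana is a reptile, not a mammal

class C<T> where T : Mammal { }

static class Program
{
    static void Bar<T>(T t, C<T> c) where T : Mammal { }
    static void Bar(Animal animal, string s) { }

    static void Main()
    {
        // C<Iguana> would violate the constraint on C<T>, so the inferred
        // Bar<Iguana> is discarded and only Bar(Animal, string) remains.
        Bar(new Iguana(), null);    // compiles; calls Bar(Animal, string)
    }
}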
So if that’s not the relevant bit of the spec, what is? The relevant bit happens after the best method has been determined, not before.
If the best method is a generic method, the type arguments (supplied or inferred) are checked against the constraints declared on the generic method. If any type argument does not satisfy the corresponding constraints on the type parameter, a compile-time error occurs.
It is often surprising to people that an invalid method can be chosen as the best method, chosen over a less good method that would be valid. (*) The principle here is that overload resolution (and method type inference) finds the best possible match between a list of arguments and each candidate method's list of formal parameters. That is, they look at the signature of the candidate method. If the best possible match between the arguments and the signature of the method identifies a method that is for whatever reason not possible to call, then you need to choose your arguments more carefully so that the bad thing is no longer the best match. We figure that you want to be told that there's a problem, rather than silently falling back to a less-good choice.
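In the Giraffe example, "choosing your arguments more carefully" can be as simple as changing the static type of the argument, as in this sketch built on the declarations at the top of the post. With an argument of type Animal, the generic candidate infers T as Animal, its parameter list ties with the non-generic overload, and the tie-breaking rules prefer the non-generic method:

static void Main()
{
    // The argument now has static type Animal, so Foo(Animal) wins overload
    // resolution and no generic constraint is ever violated.
    Foo((Animal)new Giraffe());
}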
UPDATE: My Twilight-reading friend Jen from a couple episodes back points out that this is not just good language design, it's good dating advice. If the best possible match between your criteria and an available guy on Match.com identifies a guy who, for whatever reason, is impossible to call, then you need to choose your criteria more carefully so that the bad guy is no longer the best match. Words to live by.
(*) I already discussed the related situation of how it is the case that we can choose a static method over an instance method even when there’s no hope of the static method being the right one.
Comments
Anonymous
December 10, 2009
It seems to me that this is indeed a compiler bug, and what you have shown is that it results from a bug in the spec.

No, a bug in the compiler would be an implementation that doesn't match the spec. A bug in the spec is a spec which doesn't match the desires of the language designers. In this case the implementation, the spec, and the language designers all agree. Now, you might be of the opinion that this was a lousy design decision, but it is not an error. -- Eric

What is gained by discarding constraint information when finding the best overload?

The constraint information is not discarded; it's never even considered in the first place. What is gained is the resulting error analysis: if the best applicable match from argument types to formal parameter types produces an error then you probably have an error in your program. We assume that you intended the compiler to choose the best applicable match; if doing so violates a constraint, then we assume you want to be told about that. We don't say "well, the choice with the best match is A, but since that's not going to work out, let's silently choose B and not tell anyone about it." C# tries to not be a "hide your errors and muddle on through" sort of language. -- Eric

Anonymous
December 10, 2009
Sounds like this is a defect in the C# specification. In Visual Basic it honors constraints for the purpose of overload resolution and this compiles just fine.

No, this is not a defect in either the specification or the design of the language. This is a deliberate design choice in accordance with the long-standing principles of the design of the C# language. To do it the other way would be a defect. Reasoning about C# design choices by pointing out that VB does it differently is not compelling. The VB language designers are building a different language and are making different design choices, in accordance with the design principles for their language. C# has always been a "complain loudly if something looks potentially incorrect" language, and VB has always been a "do your best to figure out what the user meant even if it means sometimes guessing wrong" language. Both philosophies are sensible and useful, and we offer you the choice. For example, VB allows overload resolution to succeed when you pass an expression of type Giraffe to a method that requires a parameter of type Mammal, silently inserts a typecast, and crashes at runtime if that turns out to be a bad choice. Is it your contention that this is also a sensible rule for C#, just because it is a sensible rule for VB? -- Eric

Anonymous
December 10, 2009
So much technicality, so little discernment. The code at the beginning of this blog entry should be the only evidence required to show that the compiler should be changed, whether or not the spec would need to be changed first. And by the way, if Match.com determined that the best match for me was someone who is not even available on Match.com, I would hope that this would not cause the website to display an error message.

I'm not sure how many more ways I can come up with to explain this. The correct behaviour here is to give an error. The best possible choice is not viable; the correct behaviour is to tell you that, rather than blithely choose the second-best choice and hope for the best. -- Eric

Anonymous
December 10, 2009
Also a Giraffe is not a Reptile, end of story!

Anonymous
December 10, 2009
> The generic version is the best possible choice, and still, it doesn't work. You don't think that's a sensible candidate for an error message? -- Eric

What would we lose by considering the type constraints for the purpose of choosing which choice is best?

I would think that the answer is obvious. When you make an error case into a legal case then you lose the ability to detect the error. (I am for some reason reminded of all those US congresspeople who, over the years implemented much "deregulation" and yet seemed to not realize that "deregulation" simply means "taking acts that used to be crimes against the American people and making them legal". As the worldwide financial crisis has shown, perhaps some of those acts ought to have remained crimes.) -- Eric

If this is like the ambiguous case caused by the Color Color problem, show us an example.

Anonymous
December 10, 2009
> Most people assume that overload resolution will choose the second overload. In fact, this program produces a compile error saying that T cannot be Giraffe. Is this a compiler bug?

> I'm not sure how many more ways I can come up with to explain this. The correct behaviour here is to give an error. The best possible choice is not viable; the correct behaviour is to tell you that, rather than blithely choose the second-best choice and hope for the best. -- Eric

This is clearly not a compiler bug, and yes, the best choice here is to give an error. But I think this is a spec "bug", because the "best possible choice" of the spec is not the "best possible choice" that most (if not all) users expect.

Anonymous
December 11, 2009
@Eric, "...Is that more convincing?" No, it isn't. You basically just restated the same argument you were already making. The key point of disagreement is this sentence:

"It works out that the best possible value for T is Giraffe, reasoning solely from the arguments that you provided and the types provided in the generic version of the method."

Yes, that is what it does, in accordance with the spec. What I and others are saying is that is not what it SHOULD do. What it should do is work out the best possible value for T, reasoning from the arguments provided by the call site, the types provided in the generic version of the method, and the constraints declared on those types.

"That method undoubtedly must win, since it is an exact match, and is then discovered to be illegal."

If it is illegal, then it is NOT an exact match :) I understand that the compiler's current behavior is to make a distinction between matching methods and legal methods, but programmers don't make any such distinction. That is precisely why I would claim that the second overload is the best one simply by looking at it.

"The right behaviour is to say 'huh?..."

No, the right behavior is to do what I asked it to do: find the best overload that can be legally called, and call it. Overall, I agree with INTPnerd: "You are thinking too much about the technicalities of how generics work under the covers instead of the way that they are used to achieve a goal."

Anonymous
December 11, 2009
As a matter of fact, I really don't see the use of constraints like where : Blah. I might be committing heresy here, but if you are constraining a generic type to a certain class or descendants of said class, why don't you implement the non generic option to begin with? What's the advantage of using generics in that scenario apart from having your code look cooler and impressing your boss?

Anonymous
December 11, 2009
@Skynyrd, "The people that think the current behavior is wrong are not fully understanding the reasoning behind the C# design team's decision...its all about avoiding subtle bugs..."

I fully understand that the C# team thinks they are avoiding subtle bugs. Eric already explicitly stated as much: "We figure that you want to be told that there's a problem, rather than silently falling back to a less-good choice." However, I do not believe in this example that the second overload is "a less-good choice"; I believe that it is in fact the best choice, and should be called.

"...in a real world scenario with a much more convoluted code base this might not always be the case..."

By all means, show me a case where this rule prevents unintuitive behavior and I will be happy to consider it. I have been trying to think of one since I first read the article, but have so far drawn a blank.

Anonymous
December 11, 2009
@Dave Sexton, "Maybe the problem is that Eric's example fits on one screen, so you can readily see the where : Reptile constraint. In the real world, it's easy to make a mistake like this."

The compiler does not and should not change its behavior based on how far away from each other the methods are declared. The best overload is the best overload, regardless of whether it is declared in the same file or a completely different library altogether.

Anonymous
December 11, 2009
@David Nelson "No, the right behavior is to do what I asked it to do: find the best overload that can be legally called, and call it."

Well that's precisely the whole point of the argument. YOU decide that the best legally called method is the non generic one. That doesn't mean that in all 100% of real life situations that should be the case. Many times maybe the coder is expecting the generic method to be called because he's handling complicated hierarchies, 100s of classes, thousands of lines of code and not three stupid classes that fit in a paragraph and where everything is crystal clear. Under that perspective what is better? Make a fail safe compiler error or pray that everyone is a perfect coder and will never mistakenly think that the generic method should be called because its argument signature matches perfectly? The first just obliges you to do a simple cast and ensures there will never be a bug in the code. The second option satisfies very smart coders like you who see it crystal clear but leaves the door open for less talented coders to introduce a bug in the code because they UNINTENTIONALLY call the non generic method instead (like Skynyrd said, bugs exist. Coders aren't perfect so making their life easier is the way to go). Besides, like I said before, I'd never implement a generic method like that one to begin with. :p

Anonymous
December 11, 2009
@INTPnerd Essentially, by providing both a non-generic and a generic method in the class, you're effectively stating "I'm only going to use this non-generic method in a very specific manner with exact matches on the argument type(s)." Assignment compatibility flies out the window. Regardless of the constraint on the generic method, that's the rule you're implicitly agreeing to follow.

Anonymous
December 11, 2009
In contemplating this article, one thing I did to make this clearer to myself was to ask the question, "What if I, as a developer, did want to use the 2nd method in the case that I passed in a Giraffe?" In examining what such an implementation would require, I had a thought: "Constraints are for preventing code from compiling when you break your own rules, not for deciding what your code will do. Adding constraints to code should not alter that code's behavior except in the case that a compiler error is thrown." Leaving constraints out of the signature makes perfect sense.

Anonymous
December 12, 2009
We are talking about generic type inference here, not coder's intention inference, if I'm not completely lost. In Eric's example the compiler gives an error because, inferring the logical generic type, the constraints are not met. So I don't quite follow your argument here. Because inference fails, the compiler is not sure if that was what the person who wrote the code wanted and thus stops with a compile time error.

Anonymous
December 13, 2009
Consider (illegal code):

public T f<T>(T e) where T : IEnumerable { ... }
public T f<T>(T l) where T : IList { ... }

as a signature for a function to manipulate a list of values, where the algorithm is much better if you know indexing exists. (The LINQ function Reverse would be an example.) Of course the function can be written with one body that dynamically selects the best algorithm at run time.

Anonymous
December 13, 2009
Quoting: "Bar(new Iguana(), null); Type inference infers that Bar&lt;Iguana&gt; might be a candidate. But that would mean that we are calling a method that converts null to C&lt;Iguana&gt;, which violates the constraint in the declaration of C&lt;T&gt;"

I don't understand why it violates the constraint: C&lt;T&gt; is a reference type and null seems to be a valid value to substitute for an instance of a reference type, so why would type inference deem a constraint violation in this case?

Anonymous
December 15, 2009
"Once again, you start your argument from the position that "[the] perfect match is illegal." That argument is a non-starter, because my contention from the very beginning has been that the illegal match is NOT the perfect match! It is only hard to say what the intention of the programmer is if you start from the position that what the programmer wrote is ambiguous or illegal. If you instead start from the position that what the programmer wrote is exactly what they intended, the right solution is obvious." We can all go in circles all day long here :D You claim that Eric's argument is non-starter. Of course another prefect claim is that "what the programmer wrote is exactly what they intended, the right solution is obvious." is a non-starter too. Because thats precisely why the compiler throws an error, because it's not sure that the programmer wrote exactly what they intended. C# is a language designed to avoid traps and pitfalls and if that means throwing an avoidable compile time error in lieu of risking doing something the coder might have not intended, well so be it. You can always go back to C++ where the compiler will let you happily wander into the realms of the unknown whenever you choose.Anonymous
December 15, 2009
@Grico - "C# is a language designed to avoid traps and pitfalls and if that means throwing an avoidable compile time error in lieu of risking doing something the coder might have not intended, well so be it."

I have seen something like the statement above several times in this thread and I believe that this is the heart of the issue. The problem is that this statement is NOT TRUE in C#, specifically in regards to method inference and generic type inference and extension method inference. Today C# is making all sorts of assumptions that risk doing something the coder might not have intended. So why does C# decide to treat generic constraints differently and throw a compile error instead of using the constraint to infer a valid method? The only explanations I have seen are variants of the statement above, and the statement simply does not hold for C#'s method inference logic.

Anonymous
December 15, 2009
I am curious about something. Those of you who think this rule is bad, read this code carefully. Note that Func is covariant in T and Action is contravariant in T in C# 4, and that C# 4 type inference takes this into account.

class Animal { }
class Reptile : Animal { }
class Program
{
static void Bar<T>(Func<T> f, Action<T> a) where T : Reptile { }
static void Bar(object x, object y) { }
static void Main()
{
Func<Reptile> f = null;
Action<Animal> a = null;
Bar(f, a);
}
}

STEP ONE: What happens in C# 4 is type inference reasons as follows. From the first argument we know that T must be Reptile, or a larger type. From the second argument we know that T must be Animal, or a smaller type. Both Reptile and Animal match both bounds. We choose Animal as the more general of the two and infer that T is Animal.

STEP TWO: Overload resolution then has Bar(ob, ob) and Bar<Animal>(F, A) to choose from. The latter match is a better match, so we choose it. Then we discover that this match violates the type constraints, and give an error.

I understand that you believe that step two is wrong. You believe that Bar<Animal> should not be considered to be a candidate at all, since it is an illegal method. Fine. I accept that opinion as reasonable. You believe that applicable candidate checking should take into account factors which are not in the signature, like the generic constraints. Fine. You therefore believe that this should choose the Bar(ob, ob) version, right?

Hold on a minute there. Shouldn't step one, the type inference step, also have taken into account the constraint? Because type inference had two types to choose from, Reptile and Animal, that both met all the checks required by the generic bounds. Type inference chose to infer Animal because Animal was more general, and thereby chose to infer a method that is illegal. Type inference could have said "well, I could infer Bar<Animal>, but oh, that violates the constraint on Bar. Is there anything else I can infer? Yes, I can infer Bar<Reptile>!" Had type inference done that, then the choice would have been between the object version and Bar<Reptile>, which DOES meet the generic type constraint, and is the better match. Surely the best thing to do in this case is to choose Bar<Reptile>.

So my question for you is: is it your belief that method type inference should also take into account stuff that is not in the signature, like generic type constraints, when inferring the type arguments of a generic method call from the passed arguments? Or should method type inference only take into account the stuff that is actually in the signature?

Anonymous
December 15, 2009
I love the example Eric! Now I have another reason besides "C# does not try to read the developer's mind". To answer your question: I believe that method type inference should take into consideration generic type constraints when inferring the type arguments of the generic method call. After all, Bar<T> is an infinite list of methods of all type T's. Method inference should be able to pick the "best" one that is valid per the constraints. Now that being said, I understand it might be difficult to do this, but in a perfect world it should be able to.

Anonymous
December 16, 2009
Earlier I said: "Also if the generic method has a type constraint that means it would not even work with the same type as found by a non-generic overload, then there is a good chance that it is not simply a faster version of the same behavior, but different behavior entirely. Since the two methods probably have very different behavior, it does not make sense to default to the generic one when the calling code is all but explicitly specifying the non-generic method."

I take back the part about how the non generic method would probably have very different behavior. This time I was the one not seeing it from the programmer's perspective in a real world scenario. Normally the whole point of naming a method the same is because it does the same thing. It would be very misleading to use overloading when one method does something completely different.

Even so, I still stand by everything else I said. If this were a real world example, the two methods might have somewhat different behavior, but I would expect they would be very similar. The generic one may very well be faster, when it can be used, but if it can't be used the compiler should not try to force you to use it and then have an error when it does not work. I will repeat this part one more time since I think I finally found a good wording: the calling code is all but explicitly specifying the non-generic method.

Anonymous
December 16, 2009
Hhhmm, you're starting to convince me, Eric. With the history of generic and non-generic interfaces of the same name that C# has and the lack of support for contravariance, I can see that unintended method calls could happen more frequently than I thought. Add into that the practical considerations of keeping things simple and the amount of effort to implement the new logic, and it definitely makes sense NOT to consider the constraints in the inference logic. Thanks for your efforts to explain all the reasons!

Anonymous
December 17, 2009
Eric, "My point is that the compiler cannot know what was intended; if inference cannot work out something that works, then we don't know what was intended and the right thing to do is to stop and ask for directions, rather than blazing along a different road."
- yes, but (once you introduce the idea that the library may not have been designed optimally) that's true of your way too! Consider:

static void Feed<T>(T t) where T : Animal { ... }
static void Feed(Fish fish) { ... fish need special handling to make sure they aren't overfed ... }

Obviously this is a bad way to design an API because it does all kinds of wrong things if the compile-time type doesn't match the runtime type. But for simplistic uses it does the job - UNTIL someone introduces class Goldfish : Fish { ... }. Your way: We infer T=Goldfish and then Feed<T>(T) is an exact match, better than Feed(Fish).

Note that I'm not making any concrete proposal as to how to address this problem or even saying that it's a real problem that should be addressed. I'm just pointing out that the current system already has cases where it "blazes along a different road" than what was intended. Also, I think you are missing Pop.Catalin's main point, which isn't just that there are better ways to write the IsBoundedAboveBy method - but that one of the better ways to write that method doesn't work because type inference doesn't consider constraints.

I'll admit that your posts are making me reconsider my position, but I'm not ready to give up completely on the idea that inference should take into account constraints. Mainly because it's not always about ambiguity between a generic and non-generic method (my codebase is self-contained, so my generic methods don't have back-compatible versions, usually) - sometimes it's about being able to infer something versus not being able to infer anything at all. Like my <Derived, D2> case, or Pop.Catalin's second version of IsBoundedAboveBy. Is there really no way to get the best of both?
Anonymous
January 27, 2010
I didn't make it all the way through, but I agree with those saying this appears to be a "spec bug" while also agreeing with those who say having those two methods in the same class is a mistake. Unfortunately, I'm also thinking that, since the type constraints aren't apparently useful except for intellisense noise reduction, I can't think of a reason why I would ever use type constraints again unless the compiler requires it for setting an object to null (i.e. where the class constraint is required, though you could get around that with default(T)). If I'm wrong, why would I care about constraints anymore? I could use guard clauses or Code Contracts to do the rest and get a better, imho, response than the type constraint.

Anonymous
January 28, 2010
@Alex G: I just noticed this in one of your comments: "The class constructors and abstract static members. They cannot be inherited or passed up to derived class types." Class constructors can be used by derived types. It goes like this:

public class Animal
{
public string Name;
public Animal(String name) { Name = name; }
}

public class Giraffe : Animal
{
public Giraffe() : base("Giraffe")
{
// Animal ctor is called first. Follow with Giraffe here.
}
}

You can call any constructors of a parent class in this way. Pete

Anonymous
February 01, 2010
Brannon, Nice idea about the operators - I agree that constraints need to and can be enhanced in many ways.

Anonymous
February 02, 2010
I posted about this problem on the csharp news group ages ago. The example I posted there shows that the extension methods are basically unusable when relying on constraints.

Anonymous
February 02, 2010
You know what sounds really weird to me? The fact that there was a bug introduced in the spec that causes the very inconvenient behavior, and the other fact that instead of fixing both the spec and the behavior, the guys are proudly protecting this bug from being fixed. Can anybody look at it from this perspective? Eric?

In order for there to be a "bug" there has to be an error. In this case, the spec is exactly the way we want it, and the implementation is a correct implementation of the spec. That the language was not designed the way you think would be better isn't a bug, it's just a disagreement between you and me about what the better design decision would have been. The documentation of that design decision is sound, as is the implementation. There's no bug here to be fixed, in either the spec or the implementation. This is a feature. This feature finds bugs in your code and tells you about them rather than hiding the bug from you and doing something wrong. I do not understand why C# programmers think that's a bad thing; C# programmers typically want to be told about the bugs and ambiguities in their programs so that they can be assured that their program is doing exactly what they intend it to be doing. -- Eric

Anonymous
February 03, 2010
Also, cheers to Darren oakey - I agree with what he's saying: "if you define the constraints to be part of the signature, and allow overloads that differ only on constraint - you still have a completely consistent, working model". In other words,

static void Draw<T>(T t) where T : Square { }
static void Draw<T>(T t) where T : Circle { }

is not ambiguous and could indeed be very useful. We may want to Draw the class of all things Square completely differently than the class of all things Circle. If we want a catch-all that supersedes the non-generic (object) version, then simply add:

static void Draw<T>(T t)

In the absence of using generic constraints in the signature, it just becomes too awkward, if not impossible, to disambiguate the call. Generic constraints would be a nice way of allowing for detailed specialization and would not introduce any static inconsistency if they were part of the method signature.

Anonymous
February 03, 2010
Eric, You say: "How does that work, unless the developer writing the callee is the same as the developer writing the caller? That is frequently not the case." But you could say the same thing about any overload.

As far as: "When faced with a situation where the developer's intentions are unclear, the right thing for the C# compiler to do is to give an error." I agree. That's why I'd like to see the generic constraints become a part of the signature so that the intentions would not be unclear. In other words, the intentions of the developer are unclear precisely because the C# team chose to ignore generic constraints for purposes of overload resolution.

I'm a huge fan of C++ because it allows for 'var-like' things via template specialization with zero "0" runtime overhead and zero type ambiguity. I have very little use for scripting languages that try to guess what I mean -- I've been down that road and I know the pitfalls.

So please don't feel so discouraged - I get what you are saying: "the programmer writing the call site intends to call a particular method, and communicates this by selecting arguments which overload resolution will use to choose a particular matching signature." Everything you say is exactly right. But, if you change the definition of what the "signature" is (to include generic constraints), then the method that the caller is selecting with arguments 1) changes, and 2) is still unambiguous. So, the only argument here is what the signature should be.

BTW, I think C# rocks and I've used a lot of languages. -- Sam

Anonymous
February 04, 2010
Good article, cheers for that. Now I can sleep safely at night knowing my type arguments are being checked by my compiler ... phew for that.

Anonymous
February 04, 2010
"In this case, the spec is exactly the way we want it," - Eric I like this piece. Eric, with all due respect, what matters here is not what YOU want, but what WE, the developers and primary users of the product (C# language) want. As you can see from this thread, the majority agrees that constraints have to be part of the signature and (slightly off-topic) that constraints have to be enhanced. There are other improvements that were proposed here, as well (Markus W.'s post). First off, this is not a representative sample. Second, the design of C# is not a democracy. And third, the spec is the way we want it because it is right, and no amount of people wanting it to be wrong is going to make the wrong thing right. The right thing to do, based on the design principles of C#, is to identify when the program is ambiguous and therefore likely to be wrong, so that you can fix it. The wrong thing to do is to guess that you meant a worse method and silently choose it, thereby introducing bugs. I don't know why I have to keep making this point over and over again. -- Eric Now, we all understand budget and time constraints, you guys just can't add features like that left and right - no sweat. But don't tell us that it "is exactly the way we want it", because that's a terrible argument. There is very little doubt that the enhancements requested in this discussion and other places are needed - and when MS's response is that, "we don't think that's necessary", that's a bad way of putting "we don't care what you want, we're too lazy (as a company) to develop it, now eat it". It would be much more honest and nice to say that, "we can't do it now, but will try to ASAP". We care very very much about what customers want and need, and appreciate this feedback. We get proposed "enhancements" all the time that violate the fundamental design principles of the language in order to make narrow, rare scenarios like this one slightly more convenient, and we don't do them because it's a bad idea to spend our limited budget making features that work against our design principles and thereby make the language worse for the majority in order to increase convenience a minescule amount for a tiny minority. That's not laziness, nor is it dishonesty, and frankly I resent the insinuation that my team is lazy and dishonest. Rather, that is being respectful of the needs of our millions of users. -- EricAnonymous
February 05, 2010
"The right thing to do ... is to identify when the program is ambiguous and therefore likely to be wrong, so that you can fix it. The wrong thing to do is to guess that you meant a worse method and silently choose it, thereby introducing bugs. I don't know why I have to keep making this point over and over again. -- Eric" It seems to me that the point of many statements here is that adding constraints to the resolver does not introduce bugs. And it's not a matter of introducing bugs! It's a matter of reduced functionality. My earlier point was that resolving methods by parameter footprints alone is insufficient to make extension methods work in a reasonable fashion. Extension methods should not have been added to the language with that limitation. You can't have an extension method that uses IList<T>, etc. The language has already been extended beyond the point where it needs the constraints in the method resolver. True? "As I've said now over ten times in these comments: we assume that the programmer writing the call site intends to call a particular method, and communicates this by selecting arguments which overload resolution will use to choose a particular matching signature...-- Eric" That's a rash assumption, especially in math and geomtery libraries. In those cases I expect the best method for my current shape/matrix/number/etc to be called. Why would we make coders search through a vast array of overloads, chose the best one, and then call it manually through the use of casts? We somehow think this manual process and excess code reduces bugs? Hardly.Anonymous
February 11, 2010
I admit that in modern C++ the template system is somewhat "abused" to do all the fancy things you are opposed to. The code is ugly and sometimes unreadable. Of course there are better ways to design a language that supports all those features. A great chance for C# to beat 'em all!

OTOH what makes me feel happy with C++ is the fact that the basic idea of its language design is not to put any restrictions on the programming paradigm. It is exactly this attitude that leads to a sense of freedom and a "Can-Do-That", of course disregarding the pain it may take to fulfill the task, but you can. Example: LINQ is cool. Take boost::proto and half a year of concentrated work and the same thing works smoothly in C++. You can do that.

My own experience with C# is exactly the opposite. Every week I run into a "Can't-Do-That" (like missing operator=) or a caveat that is hurting. This is where your aim to keep it simple leads to the opposite: weird looking workarounds that make the code complex and far from being clear and concise.

Back to the original problem: IMHO the deliberately chosen rules here offer a little advantage to probably catch a subtle error while at the same time hinder a plethora of useful applications. To me this feels like the damage done by the design decision outweighs the benefit by a factor of 10. Also what I still do not get: If you "like plain boring straightforward unsurprising code", then why such a counter-intuitive behaviour?

Anonymous
February 17, 2010
It seems that the generics problem is not so much a bug in the compiler as it is a legacy .NET 1.0 fragment. .NET 1.0 did not use generics, and as .NET 2.0 is based on 1.0 I imagine it kept the same method resolution, because if a new type of overloading was added it would probably break just about everything. Also I see generic type constraints as just a tool to help us as coders, not to help the compiler or runtime; the speed of C# would probably be greatly affected if several (object is Type) operations needed to be called upon every method call, which is what generic constraint overloading would require. So although generic constraint overloading would be a nice feature for coders, it would probably degrade the entire user experience of C#.

Anonymous
February 18, 2010
Jonathan said:

"It seems to me that this is indeed a compiler bug, and what you have shown is that it results from a bug in the spec."

"No, a bug in the compiler would be an implementation that doesn't match the spec. A bug in the spec is a spec which doesn't match the desires of the language designers. In this case the implementation, the spec, and the language designers all agree. Now, you might be of the opinion that this was a lousy design decision, but it is not an error. -- Eric"

It's a language designers' bug. :)

Anonymous
February 24, 2010
Ok, so it's definitely not a BUG, but I think it's BAD compiler design. The compiler should use ALL NECESSARY INFORMATION when doing overload inferencing; if this means that the constraints SHOULD be part of the method signature then so be it. If the compiler was to use the constraints it could SCORE each method based on how many levels away a match is from its descendant, and an overload method call IMHO should ALWAYS use a compatible NON-GENERIC method BEFORE any Generic Method. Personally I love generics when it comes to types on a class, but I refuse to use generics for individual methods; it's bad code design and messy IMO.

Anonymous
March 09, 2010
It seems to me that C# is picking up all VB's bad habits. I never knew generic types in methods could be inferred from the parameters. Personally I'm going to put it out of my mind again. As for preventing the compile, IMHO it's correct to do so.

Anonymous
May 22, 2012
It seems obvious to me that there is a disconnect between what you were trying to explain here and what people were understanding. I think the general consensus based on the responses is that people expect that constraints should be part of the signature, not necessarily that the overload resolution algorithm is somehow wrong.