Linq and the cost of additional language complexity

Uwe Keim posted a thought-provoking comment in response to my entry about Linq.  Here is an excerpt:

I do see the benefits, but I also have a big déjà vu: The C# language seems to go the C++ way, where I, even after 10 years of programming, don't know all of the features and sometimes still wonder "why does this thing behave this way?".  So the complexity is raised from version to version. Does everyone think this is a good way? I think it would be better to NOT enhance the language/compiler from version to version, but to enhance functionality by enhancing the library instead.

I think the added-complexity concern is a valid one that deserves careful analysis.  I'll just touch on my thoughts here.

One important difference between C++ and C#/VB is that C++ is designed by committee, whereas C# and VB each have a chief architect (Anders Hejlsberg and Paul Vick respectively) who doesn't have to try to satisfy everyone's favorite language request.  I know the complexity concern is something Anders and his team have taken very seriously (as have the VB folks).  I've heard Anders talk about the fact that they have to be extremely careful about this, and that he'd rather err on the side of caution.  The C# team has rejected several of my favorite language features (such as generic variance and C++-style const) because they felt the benefits for the average user didn't significantly outweigh the extra complexity (even though there were compelling benefits for SOME users).  This is where a lot of the "art" of language design comes into play.  Personally, I have a lot of respect for Anders and Paul, and I trust both their judgment and the customer research their teams have done, which concluded that in this case the extra complexity was justified.  As much as I'd like to believe otherwise, there is probably a good reason why Anders is the chief C# architect and you and I are not <grin>.

Of course they could be wrong, and it's important that Microsoft and our customers evaluate this risk seriously.  Uwe suggested a compiler plug-in model where you could enable different features on demand.  I'd love a public compiler plug-in model; there are so many cool features I'd like to add and use in my code.  I've even asked Anders about this (in the context of the fact that expression trees mean some errors that could normally be detected at compile time now become run-time errors), and he indicated that to maintain simplicity and comprehensibility they are avoiding any such dynamic/customizable compiler behavior.  I think Anders has a point here.  In this case, just using the v1.1 or v2.0 compiler to target the latest CLR has similar benefits, except that the possible feature combinations are fixed to well-known sets, which avoids the confusion and complexity of a plug-in or optional-feature model.
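
To illustrate that parenthetical about expression trees, here's a minimal sketch (using the System.Linq.Expressions names from the released bits, with a made-up SomeLocalHelper method standing in for anything a remote query provider can't translate).  The same lambda body can be compiled either to ordinary IL or to a data structure describing the code; a provider that walks that data structure to build SQL can only report an untranslatable construct at run time, even though the compiler was perfectly happy with it.

    using System;
    using System.Linq.Expressions;

    class ExpressionTreeSketch
    {
        // A made-up local helper with no obvious SQL translation.
        static bool SomeLocalHelper(int x) { return x % 7 == 0; }

        static void Main()
        {
            // Compiled to IL: everything is checked at compile time and just runs.
            Func<int, bool> asDelegate = x => SomeLocalHelper(x);
            Console.WriteLine(asDelegate(14));   // True

            // Compiled to an expression tree: still type-checked, but a provider
            // like DLinq would only discover while translating the tree at run
            // time that it has no idea what SomeLocalHelper means, and would
            // have to throw then instead of failing the build.
            Expression<Func<int, bool>> asTree = x => SomeLocalHelper(x);
            Console.WriteLine(asTree.Body);      // prints the call as data: SomeLocalHelper(x)
        }
    }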

The other thing to keep in mind is that a lot of effort went into reducing complexity by adding a few powerful, general language concepts instead of many feature-specific ones.  I think Linq did even better than Cω in this respect.  For example, unlike the Cω compiler, the C# 3.0 compiler has no knowledge of databases, SQL, or XML; that all lives in the DLinq and XLinq libraries.  Personally, I find the new language features in the C# 3.0 spec to be quite simple and elegant, but then I'm a (functional) programming languages nerd, so I'm not exactly the target audience.
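
To make that concrete, here's a small sketch of the translation (using the System.Linq names from the released bits rather than the preview namespaces).  The query syntax is pure syntactic sugar: the compiler mechanically rewrites it into calls to Where and Select with lambda arguments, and it's entirely up to the libraries in scope whether those calls run over objects in memory or get turned into SQL at run time.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class Person
    {
        public string Name;
        public int Age;
    }

    class QueryTranslationSketch
    {
        static void Main()
        {
            var people = new List<Person>
            {
                new Person { Name = "Ada", Age = 36 },
                new Person { Name = "Tim", Age = 12 },
            };

            // What I write:
            var adults = from p in people
                         where p.Age >= 18
                         select p.Name;

            // What the compiler actually emits: plain method calls plus lambdas.
            // Nothing here mentions SQL or XML; a database provider simply
            // supplies Where/Select overloads that receive expression trees
            // and build the query at run time.
            var adultsDesugared = people
                .Where(p => p.Age >= 18)
                .Select(p => p.Name);

            // Both forms produce the same sequence.
            foreach (var name in adults)
                Console.WriteLine(name);
            foreach (var name in adultsDesugared)
                Console.WriteLine(name);
        }
    }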

Ultimately it will be up to our customers.  If, after seriously experimenting with and evaluating Linq, you think the extra complexity isn't worth the benefit, then it's important that you give us your feedback (we're getting better and better at listening to this sort of thing).  On the other hand, the response from the PDC was overwhelmingly positive, so it would likely be an uphill battle <grin>.