The general design principle here is that type information flows in only one direction, from the inside of an expression to the outside. The example you give is extremely simple. Suppose we wanted type information to flow in both directions when performing type inference on the method R G<A, R>(A a), and consider some of the crazy scenarios that creates:
N(G(5))
Suppose there are ten different overloads of N, each taking a different argument type. Should we make ten different inferences for R? If we did, should we somehow pick the "best" one?
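To make the scenario concrete, here is a minimal sketch; the body of G and the particular parameter types of N are made up for illustration, only the shapes matter:

static R G<A, R>(A a) { return default(R); }  // R appears only in the return position

static void N(int x) { }
static void N(double x) { }
static void N(string x) { }
// ...imagine seven more overloads of N, each with a different parameter type.

// Today this does not compile; R cannot be inferred from the argument 5 alone.
// With inference flowing "outside in", the compiler would have to consider every
// overload of N and would get a different answer for R from each one.
// N(G(5));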
double x = b ? G(5) : 123;
What should the return type of G be inferred to be? int, because the other half of the conditional expression is int? Or double, because ultimately this thing is going to be assigned to a double? Now perhaps you begin to see how this goes: if you are going to say that we reason from the outside in, how far out do you go? There can be many steps along the way.
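One way to see the two competing answers is to spell out the type arguments by hand; given the G sketched above and some bool b, both of these compile today, and an outside-in inference engine would somehow have to choose between them:

double x1 = b ? G<int, int>(5) : 123;     // R = int: matches the other branch, then the result converts to double
double x2 = b ? G<int, double>(5) : 123;  // R = double: matches the assignment target directly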
See what happens when we start combining these:

N(b ? G(5) : 123)
Now what do we do? We have ten overloads of N to choose from. Do we say that R is int? It could be int, or any type that int is implicitly convertible to. But of those types, which ones are implicitly convertible to an argument type of N? Do we write ourselves a little Prolog program and ask the Prolog engine to work out all the possible return types R could be in order to satisfy each of the possible overloads of N, and then somehow pick the best one?
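To see how the candidate set fans out, consider just the guess R = int; these conversions are all implicit in C#, so many overloads of N would remain applicable:

int r = 5;      // suppose we guess that R is int
long l = r;     // int converts implicitly to long...
double d = r;   // ...and to double...
object o = r;   // ...and to object, among others.
// So with R = int alone, hypothetical overloads such as N(int), N(long), N(double)
// and N(object) are all still in play, and every other guess for R produces its own,
// different set of applicable overloads.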
(I'm not joking; there are languages that essentially do write a little Prolog program and then use a logic engine to work out what the type of everything is. F#, for example, does much more complex type inference than C# does. Haskell's type system is actually Turing complete: you can encode arbitrarily complex problems in the type system and ask the compiler to solve them. As we will see later, the same is true of overload resolution in C#; you cannot encode the Halting Problem in the C# type system the way you can in Haskell, but you can encode NP-HARD problems as overload resolution problems.)
This is still a very simple expression. Suppose you have something like
N(N(b ? G(5) * G("hello") : 123));
Now we have to solve this problem several times for G, and possibly for N as well, and we have to solve them in combination. We have five overload resolution problems to solve (two calls to N, two calls to G, and the multiplication), and all of them, to be fair, ought to consider both their arguments and their context type. If there are ten possibilities for N, then there are potentially a hundred possibilities to consider for N(N(...)) and a thousand for N(N(N(...))), and very quickly we would be solving problems that easily had billions of possible combinations and made the compiler extremely slow.
That is why we have the rule that type information flows in only one direction. It prevents these sorts of chicken-and-egg problems, where you are trying to determine the outer type from the inner type and the inner type from the outer type at the same time, causing a combinatorial explosion of possibilities.
Note that type information does flow both ways for lambdas! If you say N(x => x.Length), then sure enough, we consider all the possible overloads of N that have function or expression types in their arguments and try out all the possible types for x. And sure enough, there are situations in which you can easily make the compiler try out billions of possible combinations in order to find the unique combination that works. The type inference rules that make this possible for generic methods are extremely complex and make even Jon Skeet nervous. This feature makes overload resolution NP-HARD.
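Here is a small, self-contained sketch of that two-way flow; the overloads of N are made up for illustration:

using System;

class LambdaInferenceSketch
{
    static void N(Func<string, int> f) { Console.WriteLine("chose the string overload"); }
    static void N(Func<DateTime, int> f) { Console.WriteLine("chose the DateTime overload"); }

    static void Main()
    {
        // To resolve this call, the compiler tries x as string (x.Length exists and is
        // an int, so the lambda converts to Func<string, int>) and x as DateTime (no
        // Length member, so that overload is not applicable). Information about x flowed
        // from the outside (the overloads of N) into the lambda body.
        N(x => x.Length);  // prints "chose the string overload"
    }
}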
Getting type information to flow both ways for lambdas so that generic overload resolution works correctly and efficiently took me about a year. It is such a complex feature that we only wanted to take it on if it would absolutely, positively pay off that investment. Making LINQ work was worth it. But there is no corresponding feature like LINQ that justifies the enormous cost of making this work in general.