The first thing to note is that math.max is overloaded, and if the compiler has no hint about the expected argument types, it just picks one of the overloads (I didn't know at first which rules govern that choice, but it will become clear by the end of this post).
As it happens, this resolves in favor of the overload that takes Int parameters over the others. This can be seen in the REPL:
    scala> math.max _
    res6: (Int, Int) => Int = <function2>
That method is the most specific one, because the first of the following snippets compiles (by virtue of numeric widening conversions) while the second does not:
    scala> (math.max: (Float,Float)=>Float)(1,2)
    res0: Float = 2.0

    scala> (math.max: (Int,Int)=>Int)(1f,2f)
    <console>:8: error: type mismatch;
     found   : Float(1.0)
     required: Int
                  (math.max: (Int,Int)=>Int)(1f,2f)
                                             ^
The specificity test is whether one function could be applied to the parameter types of the other, and that test takes such conversions into account.
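For reference, the conversion at work is Scala's ordinary numeric widening, under which an Int is implicitly widened to a Float when the expected type calls for it:

    scala> val f: Float = 1   // the Int value is widened to Float
    f: Float = 1.0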
Now the question is: why can't the compiler figure out the correct expected type? It certainly knows that the type of Array(1f, 2f, 3f) is Array[Float].
We get a clue if we replace reduce with reduceLeft: then it compiles fine.
So this must come down to a difference between the signatures of reduceLeft and reduce. Indeed, we can reproduce the error with the following simplified fragment:
    case class MyCollection[A]() {
      def reduce[B >: A](op: (B, B) => B): B = ???
      def reduceLeft[B >: A](op: (B, A) => B): B = ???
    }

    MyCollection[Float]().reduce(math.max)     // Fails to compile
    MyCollection[Float]().reduceLeft(math.max) // Compiles fine
The signatures are subtly different.
In reduceLeft, the second argument is forced to be A (the collection's element type), so type inference is trivial: given that A == Float (which the compiler knows), the only valid overload of max is the one that takes a Float as its second argument. The compiler finds exactly one (max(Float, Float)), and the remaining constraint (B >: A) is trivially satisfied (B == A == Float for that overload).
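Going back to the original collection, this is why the reduceLeft variant should both compile and run; a quick check (assuming a Scala 2 REPL like the one above):

    scala> Array(1f, 2f, 3f).reduceLeft(math.max)
    res1: Float = 3.0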
Things are different for reduce: both the first and the second argument can be any (same) supertype of A (that is, of Float in our particular case). This is a much weaker constraint, and while one could argue that in this specific case the compiler could see that there is only one possibility, it is not smart enough here. Whether the compiler is supposed to be able to handle this case (which would make this an inference bug) or not, I must say I don't know. Type inference is a tricky business in Scala, and as far as I know the spec is intentionally vague about what can or cannot be inferred.
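Consistent with this explanation, hand-feeding the compiler the expected type makes reduce compile. Both of the following are sketches of workarounds I would expect to work here:

    scala> Array(1f, 2f, 3f).reduce[Float](math.max)   // pin B to Float explicitly
    res2: Float = 3.0

    scala> Array(1f, 2f, 3f).reduce(math.max(_: Float, _: Float))   // ascribe the lambda's parameter types
    res3: Float = 3.0

In both cases overload resolution now runs against a fully known expected type, so max(Float, Float) is selected unambiguously.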
Since there are useful applications, such as:
    scala> Array(1f,2f,3f).reduce[Any](_.toString+","+_.toString)
    res3: Any = 1.0,2.0,3.0
trying each overload against every possible substitution of the type parameter would be expensive, and the result might change depending on the expected type you end up with; or should the compiler issue an ambiguity error instead?
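To see how much the expected type steers overload resolution, note that ascribing different function types to math.max selects different overloads (an illustrative session, not from the original post):

    scala> val f: (Float, Float) => Float = math.max
    f: (Float, Float) => Float = <function2>

    scala> val d: (Double, Double) => Double = math.max
    d: (Double, Double) => Double = <function2>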
Using -Xlog-implicits -Yinfer-debug shows the difference between reduce(math.max), where overload resolution happens first, and the version where the parameter type is solved first:
    scala> Array(1f,2f,3f).reduce(math.max(_,_))
    [solve types] solving for A1 in ?A1
    inferExprInstance {
      tree      scala.this.Predef.floatArrayOps(scala.Array.apply(1.0, 2.0, 3.0)).reduce[A1]
      tree.tpe  (op: (A1, A1) => A1)A1
      tparams   type A1
      pt        ?
      targs     Float
      tvars     =?Float
    }
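Once A1 has been solved to Float as shown above, the lambda's parameter types are known and overload resolution can pick max(Float, Float), so this version type-checks; the evaluation (elided from the debug transcript) would presumably look like:

    scala> Array(1f,2f,3f).reduce(math.max(_,_))
    res5: Float = 3.0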