Reducing an Array of Float with scala.math.max

I am confused by the following behaviour: why does reducing an Int array with math.max work, while the Float array needs a wrapper function? I vaguely remember this not being a problem in 2.9, but I'm not sure about that.

    $ scala -version
    Scala code runner version 2.10.2 -- Copyright 2002-2013, LAMP/EPFL
    $ scala

    scala> import scala.math._

    scala> Array(1, 2, 4).reduce(max)
    res47: Int = 4

    scala> Array(1f, 3f, 4f).reduce(max)
    <console>:12: error: type mismatch;
     found   : (Int, Int) => Int
     required: (AnyVal, AnyVal) => AnyVal
                  Array(1f, 3f, 4f).reduce(max)
                                           ^

    scala> def fmax(a: Float, b: Float) = max(a, b)
    fmax: (a: Float, b: Float)Float

    scala> Array(1f, 3f, 4f).reduce(fmax)
    res45: Float = 4.0

Update: this works:

    scala> Array(1f, 2f, 3f).reduce { (x, y) => math.max(x, y) }
    res2: Float = 3.0

So is it only reduce(math.max) that cannot be used directly?

+4
4 answers

The first thing to note is that math.max is overloaded, and if the compiler has no hint about the expected argument types, it simply picks one of the overloads (I didn't know at first which rules determine the choice, but it will become clear by the end of this post).

Apparently the overload that takes Int parameters is favoured over the others. This can be seen in the REPL:

    scala> math.max _
    res6: (Int, Int) => Int = <function2>

That overload is the most specific one, because the first of the following snippets compiles (thanks to numeric widening conversions) while the second does not:

    scala> (math.max: (Float, Float) => Float)(1, 2)
    res0: Float = 2.0

    scala> (math.max: (Int, Int) => Int)(1f, 2f)
    <console>:8: error: type mismatch;
     found   : Float(1.0)
     required: Int
                  (math.max: (Int,Int)=>Int)(1f,2f)
                                             ^

The test is whether one function can be applied to the parameter types of the other, and that test takes such conversions into account.

Now the question is: why can't the compiler work out the correct expected type here? It certainly knows that the type of Array(1f, 3f, 4f) is Array[Float].

We get a clue if we replace reduce with reduceLeft: then it compiles fine.

So the difference must come from the signatures of reduceLeft and reduce. Indeed, we can reproduce the error with the following snippet:

    case class MyCollection[A]() {
      def reduce[B >: A](op: (B, B) => B): B = ???
      def reduceLeft[B >: A](op: (B, A) => B): B = ???
    }

    MyCollection[Float]().reduce(max)      // Fails to compile
    MyCollection[Float]().reduceLeft(max)  // Compiles fine

The two signatures are subtly different.

In reduceLeft the second argument is forced to be A (the element type of the collection), so type inference is trivial: since A == Float (which the compiler knows), it only has to find an overload of max that takes a Float as its second argument. It finds exactly one (max(Float, Float)), and the remaining constraint (B >: A) is trivially satisfied (B == A == Float for that overload).
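
A minimal check of this, assuming the same Scala 2.10 setup as in the question (the value name is only illustrative):

    import scala.math.max

    // reduceLeft pins the second parameter to Float, so only the
    // max(Float, Float) overload fits, and B is inferred as Float.
    val m = Array(1f, 3f, 4f).reduceLeft(max)   // m: Float = 4.0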

It is different for reduce: both the first and the second argument can be of any (common) supertype of A (that is, of Float in our particular case). This is a much weaker constraint, and while one could argue that in this specific case the compiler could see there is only one possibility, it simply isn't smart enough here. Whether the compiler should be able to handle this case (which would make it an inference bug) or not, I honestly don't know. Type inference is a tricky business in Scala, and as far as I know the specification is deliberately vague about what can be inferred and what cannot.

But given that there are useful applications such as:

    scala> Array(1f, 2f, 3f).reduce[Any](_.toString + "," + _.toString)
    res3: Any = 1.0,2.0,3.0

trying overload resolution against every possible substitution of the type parameter would be expensive, and the result could change depending on which expected type you end up with; or would the compiler have to issue an ambiguity error instead?

Running with -Xlog-implicits -Yinfer-debug shows the difference between reduce(math.max), where overload resolution happens first, and the version where the parameter type is resolved first:

    scala> Array(1f, 2f, 3f).reduce(math.max(_, _))
    [solve types] solving for A1 in ?A1
    inferExprInstance {
      tree      scala.this.Predef.floatArrayOps(scala.Array.apply(1.0, 2.0, 3.0)).reduce[A1]
      tree.tpe  (op: (A1, A1) => A1)A1
      tparams   type A1
      pt        ?
      targs     Float
      tvars     =?Float
    }
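
To round this off, here are a few equivalent ways to hand reduce the expected type it needs — a sketch under the same 2.10 setup (one of the answers below also uses the explicit type argument):

    import scala.math.max

    Array(1f, 3f, 4f).reduce[Float](max)                          // fix the type parameter explicitly
    Array(1f, 3f, 4f).reduce(max(_: Float, _: Float))             // annotate the placeholder parameters
    Array(1f, 3f, 4f).reduce((x: Float, y: Float) => max(x, y))   // spell out the lambda
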
+6

This looks like a bug in the type inferencer, because with Int it types everything correctly:

    private[this] val res2: Int =
      scala.this.Predef.intArrayOps(scala.Array.apply(1, 2, 4)).reduce[Int]({
        ((x: Int, y: Int) => scala.math.`package`.max(x, y))
      });

but with floats:

    private[this] val res1: AnyVal =
      scala.this.Predef.floatArrayOps(scala.Array.apply(1.0, 3.0, 4.0)).reduce[AnyVal]({
        ((x: Int, y: Int) => scala.math.`package`.max(x, y))
      });

If you explicitly annotate the reduce call with the Float type, it works:

    Array(1f, 3f, 4f).reduce[Float](max)

    private[this] val res3: Float =
      scala.this.Predef.floatArrayOps(scala.Array.apply(1.0, 3.0, 4.0)).reduce[Float]({
        ((x: Float, y: Float) => scala.math.`package`.max(x, y))
      });
+2

There is always scala.math.Ordering:

    Array(1f, 2f, 3f).reduceOption(Ordering.Float.max)
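
For what it's worth, Ordering[T].max is a (T, T) => T method, so there is no overload to resolve here; reduceOption additionally covers the empty-array case. A small sketch, assuming the 2.10-era Ordering.Float instance:

    import scala.math.Ordering

    // Ordering.Float.max eta-expands to (Float, Float) => Float, so B is inferred as Float.
    val some: Option[Float] = Array(1f, 2f, 3f).reduceOption(Ordering.Float.max)   // Some(3.0)
    val none: Option[Float] = Array.empty[Float].reduceOption(Ordering.Float.max)  // None
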
+2

This does not seem to be a bug. Consider the following code:

    class C1 {}
    object C1 {
      implicit def c2toc1(x: C2): C1 = new C1
    }

    class C2 {}

    class C3 {
      def f(x: C1): Int = 1
      def f(x: C2): Int = 2
    }

    (new C3).f _   //> ... C2 => Int = <function1>

If I remove the implicit conversion, I get an "ambiguous reference" error. And since Int has an implicit conversion to Float, Scala tries to find the most specific type for max, which is (Int, Int) => Int. The closest common superclass of Int and Float is AnyVal, so you end up seeing (AnyVal, AnyVal) => AnyVal.

The reason that (x, y) => max(x, y) works is probably that, with a bare max, eta-expansion happens before type inference, so reduce is handed an (Int, Int) => Int that then has to be adapted to (AnyVal, AnyVal) => AnyVal; the explicit lambda avoids that premature expansion.

UPDATE: Meanwhile (new C3).f(_) fails with "missing parameter type", which means that f(_) relies on type inference and does not consider implicit conversions, whereas f _ does not need the parameter type and expands to the most specific argument type if Scala can find one.
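
A tiny sketch of that difference, reusing the hypothetical C1/C2/C3 definitions above:

    val g = (new C3).f _      // compiles: eta-expands to the most specific overload, C2 => Int
    // val h = (new C3).f(_)  // does not compile: "missing parameter type"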

0