Why does this cast change the outcome when even VS claims it is redundant?

------ Please see the latest update below -----------

I found an error (in my code) and I am struggling to understand what is actually going on.

It all boils down to this particular example, taken from the Immediate Window while debugging:

    x                        0.00023569075
    dx                       -0.000235702712
    x+dx+1f < 1f             true
    (float)(x+dx+1f) < 1f    false

x and dx are both float types. So why does the boolean result differ between the two evaluations?

In the actual code, I had:

    x += dx;
    if (x + 1f < 1f) // Add a one to truncate really small negative values (originally testing x < 0)
    {
        // do actions accordingly
        // Later doing x += 1;
        // where x < 1 has to be true, so we have to get rid of really small negatives
        // for which x += 1 would give x == 1 as true and x < 1 as false.
    }

but now I'm trying to add a cast:

    x += dx;
    if ((float)(x + 1f) < 1f) // Add a one to truncate really small negative values (originally testing x < 0)
    {
        // do actions accordingly
        // Later doing x += 1;
        // where x < 1 has to be true, so we have to get rid of really small negatives
        // for which x += 1 would give x == 1 as true and x < 1 as false.
    }

Visual Studio says the cast is superfluous, but without it I get a false positive, since the Immediate Window also showed me that:

    x+dx+1f < 1f    true

I am currently running my code to find out whether I get the error in my application again, and I will update as soon as I know either way.

Meanwhile, I hope someone can figure out what's going on here. Should I expect the cast to actually do something?

Update - Variables: My variables x and dx are components of a Vector2 (XNA/MonoGame). So in the code above you should read:

    Vector2 coord; // the X (and Y) components are floats
    Vector2 ds;
    coord.X        // where it says x
    ds.X           // where it says dx

I thought it didn't matter, but maybe it does.

Update 2 - Disregard the example above

When I saw that the cast changed the result, I made this simple demonstration:

    class Program
    {
        static void Main(string[] args)
        {
            float a = -2.98023224E-08f; // Just a small negative number I picked...
            Console.WriteLine(((a + 1f) < 1f) ? "true" : "false");        // true
            Console.WriteLine(((float)(a + 1f) < 1f) ? "true" : "false"); // false
            // Visual Studio Community 2015 marks the above cast as redundant,
            // but there's clearly something fishy going on!
            Console.Read();
        }
    }

So why does this cast change the outcome when even VS claims it is redundant?

3 answers

I think the important part of the C# specification is this:

"Floating-point operations can be performed with greater precision than the type of the result of the operation. For example, some hardware architectures support the" extended "or" long double "floating-point type with a greater range and precision than the double type and implicitly perform all floating-point operations using this higher type of precision.Only at an excessive cost of performance, such hardware architectures can be made to perform floating point operations with less precision and instead of s demand that the realization of lost productivity and precision, C # allows a higher precision type for all floating-point operations. " - https://msdn.microsoft.com/en-us/library/aa691146(v=vs.71).aspx

We can conclude that this is almost certainly what is happening by looking at these three lines of code, which do the comparison in several different ways:

    float a = -2.98023224E-08f;
    Console.WriteLine((a + 1f) < 1f);          // True
    Console.WriteLine((float)(a + 1f) < 1f);   // False
    Console.WriteLine((double)(a + 1f) < 1f);  // True

As you can see, the result of the first calculation (the one we are interested in) matches the one where the intermediate value is cast to a double, indicating that the compiler is using its freedom to perform the calculation at higher precision.

The reason the results differ is of course that although a + 1f evaluated at higher precision is visibly less than 1, the same result stored as a float is exactly 1, so the comparison is false.

And just to round this off: the value of a above is stored in the float with an exponent of -25 and a fraction of 0 (it is exactly -2^-25). If you add 1 to this, the bits belonging to the -25 exponent are too small to be represented, so the result has to be rounded, and in this case the rounding yields exactly 1. This is because single-precision floating point has only 23 bits for the part after the leading 1, so it does not have the precision to store the fraction, and the value ends up rounded to 1 on storage. That is why the comparison returns false when we force the calculation to be done entirely in float.
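
If you want to check that bit layout yourself, here is a minimal sketch that pulls the stored float apart into its IEEE 754 fields:

    float a = -2.98023224E-08f; // exactly -2^-25
    int bits = BitConverter.ToInt32(BitConverter.GetBytes(a), 0);
    int sign = (bits >> 31) & 1;                // 1 => negative
    int exponent = ((bits >> 23) & 0xFF) - 127; // unbias the 8-bit exponent field
    int fraction = bits & 0x7FFFFF;             // 23-bit fraction field
    Console.WriteLine($"sign={sign} exponent={exponent} fraction={fraction}");
    // Prints: sign=1 exponent=-25 fraction=0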


I don't see how you declare your variables, but assigning literals like these to variables declared with var makes those variables of type double, not float. And, as you know, the double type has more precision than float.

Here is the test:

    var x = 0.00023569075;
    var dx = -0.000235702712;
    Console.WriteLine(x.GetType());  // System.Double
    Console.WriteLine(dx.GetType()); // System.Double

And, of course, when you add two doubles and a float the result is a double, so the first condition returns true:

    Console.WriteLine(x + dx + 1f < 1f); // True
    Console.WriteLine(x + dx + 1f);      // 0.999999988038

But when you cast it to a float, truncation occurs and the result is no longer exact, so your second condition returns false:

    Console.WriteLine((float)(x + dx + 1f) < 1f); // False
    Console.WriteLine((float)(x + dx + 1f));      // 1

UPDATE: When your variables are float, truncation still occurs. Remember that the maximum precision of a float is only about 7 digits, and you are assigning numbers with more digits than that, so they are truncated, which leads to the inaccurate results you observe.

In the original question, here's how the values ​​are truncated:

    float x = 0.00023569075f;
    float dx = -0.000235702712f;
    Console.WriteLine(x);           // 0.0002356907   last digit lost
    Console.WriteLine(dx);          // -0.0002357027  last two digits lost
    Console.WriteLine(x + dx);      // -1.196167E-08
    Console.WriteLine(x + dx + 1f); // 1

The reason the last result is 1 should now be obvious. The result of adding x and dx is -1.196167E-08 (-0.00000001196167), which has 7 significant digits and fits in a float. Adding 1 makes it 0.99999998803833, which has 14 digits and cannot fit in a float, so it is truncated and rounded to 1 when stored in a float.
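
To see why the rounding goes up to exactly 1 rather than down, note that the largest float below 1 is 1 - 2^-24; a quick sketch (assuming standard IEEE 754 single precision):

    // The largest float below 1f is 1 - 2^-24 ≈ 0.99999994.
    // 0.99999998803833 lies closer to 1 than to that neighbour,
    // so converting it to float rounds it up to exactly 1.
    float belowOne = 0.99999994f;
    Console.WriteLine(belowOne < 1f);           // True
    Console.WriteLine((float)0.99999998803833); // 1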

The same thing happens in your update 2. The value -2.98023224E-08f has 9 digits, so it is truncated to -2.980232E-08 (-0.00000002980232). Again, adding 1 gives 0.99999997019768, which is truncated and rounded to 1:

    float a = -2.98023224E-08f;
    Console.WriteLine(a);      // -2.980232E-08  last two digits lost
    Console.WriteLine(a + 1f); // 1

UPDATE 2: Chris commented that the calculation is performed at higher precision, which is absolutely correct, but at first that did not seem to explain the results, which should not have been affected by it. Yes, the a + 1f calculation is performed at greater precision, but since both operands are float, the result of the calculation is then automatically converted back to float. Manually casting the result to float should therefore be redundant and should not change the result. More importantly, the cast does not force the calculation itself to be performed at float precision. Yet we still get the following results:

    Console.WriteLine(a + 1f);               // 1
    Console.WriteLine(a + 1f < 1f);          // True
    Console.WriteLine((float)(a + 1f) < 1f); // False

Thanks to a good discussion with Chris and a lot of tests on different machines, I think I now understand better what is happening.

When we read:

Floating-point operations may be performed with higher precision than the result type of the operation.

this is talking not only about the calculation (the addition in our example) but also about the comparison (the less-than in our example). So in the second line of the results above, the whole of a + 1f < 1f is performed at higher precision: adding the value -2.98023224E-08f (-0.0000000298023224) to 1 gives 0.9999999701976776, which is then compared to 1f and obviously returns true:

    Console.WriteLine(a + 1f < 1f); // True

At no point is the value narrowed to float, because the result of the comparison is a bool.
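
A sketch that imitates that higher-precision path by doing both the addition and the comparison explicitly in double (assuming the hardware promotes to at least double precision):

    float a = -2.98023224E-08f;
    double promoted = (double)a + 1d; // 0.9999999701976776 – nothing narrowed yet
    Console.WriteLine(promoted < 1d); // True, matching a + 1f < 1f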

In the first line, we simply print the result of calculating a + 1f, and since both operands are float, the result is automatically converted back to float, which rounds it to 1:

    Console.WriteLine(a + 1f); // 1

Now the big question is the third line. This time, the explicit cast forces the result of the calculation to be narrowed to float, which rounds it to 1, and only then is it compared to 1f. The comparison is still performed at higher precision, but now that no longer matters, since the cast has already changed the result of the calculation:

    Console.WriteLine((float)(a + 1f) < 1f); // False

So the cast causes the two operations (the addition and the comparison) to be performed separately. Without the cast the steps are: add, compare, print. With the cast the steps are: add, cast, compare, print. Both operations are still performed at higher precision, because the cast cannot affect that.
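
A minimal sketch of those two orders of operations side by side (assuming the same machine behaviour as above; the local variable is only there to make the intermediate narrowing step visible):

    float a = -2.98023224E-08f;

    // add, compare, print – the sum may stay at higher precision
    bool withoutCast = a + 1f < 1f;                 // True

    // add, cast, compare, print – the cast narrows the sum to float
    // (rounding it to exactly 1) before the comparison happens
    float narrowed = (float)(a + 1f);
    bool withCast = narrowed < 1f;                  // False

    Console.WriteLine($"{withoutCast} {withCast}"); // True False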

Perhaps Visual Studio says the cast is redundant because it does not take into account whether the operations will be performed at higher precision or not.


Floats are stored in binary: the IEEE floating-point standard represents numbers as a binary mantissa and a binary exponent (powers of 2). Many decimal numbers cannot be represented exactly in this representation, so the compiler uses the closest available binary IEEE floating-point number.
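
The classic example is 0.1, which has no exact binary representation; a quick sketch showing the value that actually gets stored:

    // 0.1 cannot be represented exactly in binary, so the stored float
    // is the nearest representable value, which is slightly above 0.1.
    float tenth = 0.1f;
    Console.WriteLine((double)tenth); // 0.100000001490116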

Since the stored value is not exactly right, no matter how small the difference really is, the comparison fails. Calculate the difference and you will see how small it is:

    float diff = (float)(x + dx + 1f) - 1f; // 0: the cast has already rounded the sum to exactly 1
    float tiny = x + dx;                    // -1.196167E-08: the tiny amount that gets rounded away

If you use decimal instead, this will probably work:
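
For instance, a minimal sketch with the question's values (decimal is base-10, so these literals are represented exactly):

    decimal x = 0.00023569075m;
    decimal dx = -0.000235702712m;
    Console.WriteLine(x + dx + 1m);      // 0.999999988038
    Console.WriteLine(x + dx + 1m < 1m); // True – the tiny negative difference survives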


Source: https://habr.com/ru/post/1270097/

