I have an application written in Microsoft Visual C++ 6.0. I have now rewritten the application in Visual Studio 2010 using C#, but the results do not match because of floating-point precision issues. One such problem is the following:
float a = 1.0f; float b = 3.0f; float c = a / b;
When this C# code is run in Visual Studio 2010, it gives c = 0.333333343.
But the same code, with the f suffix removed from the literals, gives c = 0.333333 when run in Visual C++ 6.0.
Can someone explain what is going on, and how to get the same value for c in Visual Studio 2010 as in Visual C++ 6.0?
In fact, the values are taken from the debugger's watch window. I found out that different versions of Visual Studio may display floating-point values in different formats, so the watch values may not be comparable. For that reason I printed the values in both versions, and the results are as follows: with Visual C++ 6.0 it is 0.333333 (six 3s),
but with Visual Studio 2010 using C#, it is 0.3333333 (seven 3s).
So can anyone help me make my C# program produce the same result as the Visual C++ one? (For example, how should I perform the floating-point operations so that both versions give the same results?)