Math.Cos() precision for a large integer

I am trying to calculate the cosine of 4203708359 radians in C#:

var x = (double)4203708359;
var c = Math.Cos(x);

(4203708359 can be accurately represented in double precision.)

I get

c = -0.57977754519440394

Windows calculator gives

c = -0.579777545198813380788467070278

The PHP cos(double) function on Linux (which internally just calls cos(double) from the standard C library) gives:

c = -0.57977754519881

C's cos(double), in a simple C program compiled with Visual Studio 2017, gives

c = -0.57977754519881342

Here is the definition of Math.Cos() in C#: https://github.com/dotnet/coreclr/blob/master/src/mscorlib/src/System/Math.cs#L57-L58

This seems to be a built-in (intrinsic) function. I have not (yet) dug into the C# compiler to check what it actually compiles to, but that is probably the next step.

Meanwhile:

Why is the precision so bad in my C# example, and what can I do about it? Is this something specific to C#?

1: Computed with Wolfram Mathematica 11.0:

In[1] := N[Cos[4203708359], 50]
Out[1] := -0.57977754519881338078846707027800171954257546099993

2: Yes, I know that taking the cosine of such a huge argument rarely makes sense in practice, but the value is exactly representable in double, so there is a single correctly rounded answer to compare against (see footnote 1).

3: Cross-posted as a coreclr issue: https://github.com/dotnet/coreclr/issues/12737

+4

Disclaimer: I don't know exactly how the internals work, but here is my guess. Standard-library sin/cos implementations perform range reduction of the argument (to 0-2pi?) before computing, since cos(x) = cos(x + 2*pi) = cos(x + 4*pi) = ...

But what happens when the argument has 10 digits? To reduce it, you divide by 2*pi and subtract that many whole periods, and here that means subtracting roughly 670 million of them.

So any error in the value of 2*pi used for the reduction gets multiplied by hundreds of millions, and errors start showing up around the 9th decimal place of the result; you would need pi to far more digits than a double can hold.

So, just as an experiment, I wrote my own reduction that uses decimal for the extra precision:

    private double reduceDown(double start)
    {
        // Reduce the argument modulo 2*pi using decimal, which carries
        // 28-29 significant digits (double carries only 15-17).
        decimal startDec = (decimal)start;
        decimal pi = decimal.Parse("3.1415926535897932384626433832795");
        decimal tau = pi * 2;
        int num = (int)(startDec / tau);     // whole periods to subtract (~6.7e8 here)
        decimal x = startDec - (num * tau);  // remainder in [0, tau)
        return (double)x;
    }

where pi is written out to as many digits as decimal will hold. And the calling code:

        var x = (double)4203708359;
        var c = Math.Cos(x);

        double y = reduceDown(x);
        double c2 = Math.Cos(y);

        MessageBox.Show(c.ToString() + Environment.NewLine + c2);
        return;

... and the second printed value now agrees with the accurate results above.

I cannot say whether this is really what goes wrong inside Math.Cos, but at least it shows that a more careful range reduction recovers the accurate result, so it can serve as a workaround.

+2

It is hard to tell without seeing what native code actually runs. PHP simply forwards the call to the C library's cos, so either 1. C# ends up in a different native implementation, or 2. it reaches the same code in a different way. If it is 1, the discrepancy is just the difference in quality between the two implementations' argument reduction, and there is nothing wrong with your code.

The PHP result on Linux comes from glibc, which is known to do its argument reduction carefully even for huge arguments. Whatever C# uses on Windows apparently does not, which would explain why the same call gives different results on different platforms.

+2

To answer the "why is it so bad in C#" part of my own question, see the coreclr issue: https://github.com/dotnet/coreclr/issues/12737

In short, .NET Framework 4.6.2 (x86 and x64) and .NET Core on x86 seem to use the Intel x87 FPU (i.e. fcos or fsincos), which gives inaccurate results, while .NET Core on x64 (as well as PHP, Visual Studio 2017, and gcc) uses more accurate, presumably SSE2-based implementations that produce correctly rounded results.

+1

Source: https://habr.com/ru/post/1681131/

