I am trying to calculate the cosine of 4203708359 radians in C#:
var x = (double)4203708359;
var c = Math.Cos(x);
(4203708359 can be represented exactly in double precision.)
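A quick check that the conversion really is exact (4203708359 is well below 2^53, so the cast cannot round):

    long n = 4203708359;
    double x = (double)n;
    Console.WriteLine((long)x == n);   // True: the double round-trips exactly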
I get
c = -0.57977754519440394
Windows calculator gives
c = -0.579777545198813380788467070278
The PHP cos(double) function (which internally just calls cos(double) from the standard C library) on Linux gives:
c = -0.57977754519881
The C function cos(double) in a simple C program compiled with Visual Studio 2017 gives
c = -0.57977754519881342
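To compare the two results from within the same process, the CRT's cos can be called from C# via P/Invoke. This is a sketch: the DLL name is an assumption (msvcrt.dll here; on a current Windows SDK the universal CRT lives in ucrtbase.dll), and which value it prints depends on the CRT that actually gets loaded.

    using System;
    using System.Runtime.InteropServices;

    class CrtCosCompare
    {
        // cos() from the Microsoft C runtime; cdecl, double -> double.
        [DllImport("msvcrt.dll", CallingConvention = CallingConvention.Cdecl)]
        static extern double cos(double x);

        static void Main()
        {
            double x = 4203708359d;
            Console.WriteLine(Math.Cos(x));  // managed implementation
            Console.WriteLine(cos(x));       // C runtime implementation
        }
    }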
Here is the definition of Math.Cos() in C#: https://github.com/dotnet/coreclr/blob/master/src/mscorlib/src/System/Math.cs#L57-L58
This seems to be implemented as a runtime intrinsic. I have not (yet) dug into the C# compiler/JIT to verify what this call actually compiles to, but that is probably the next step.
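A quick reflection check (a sketch; the exact flags printed depend on the runtime version) confirms there is no managed body behind it:

    using System;
    using System.Reflection;

    class CosImplFlags
    {
        static void Main()
        {
            // Math.Cos has a single overload, so GetMethod is unambiguous.
            MethodInfo cos = typeof(Math).GetMethod("Cos");

            // On coreclr of this vintage this prints "InternalCall":
            // the body is supplied by the runtime, not by IL.
            Console.WriteLine(cos.GetMethodImplementationFlags());
        }
    }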
Meanwhile:
Why is the accuracy in my C# example so bad, and what can I do about it?
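One workaround I can think of (a minimal sketch, assuming decimal's ~28 significant digits leave enough headroom for an argument of this size; a general solution would need proper high-precision argument reduction, e.g. Payne-Hanek): reduce the angle modulo 2π in decimal before calling Math.Cos:

    using System;

    class ReducedCos
    {
        // 2*pi to 29 significant digits, stored as decimal.
        const decimal TwoPi = 6.2831853071795864769252867666m;

        static void Main()
        {
            const decimal angle = 4203708359m;   // exactly representable

            // Reduce modulo 2*pi in decimal arithmetic, which carries
            // far more precision than 'angle % (2 * Math.PI)' in double.
            decimal reduced = angle % TwoPi;

            Console.WriteLine(Math.Cos((double)reduced));
        }
    }

The reduction error here is roughly 10^-19 radians (the 29-digit 2π constant scaled by the ~7*10^8 quotient), well below double's ~10^-16 resolution, so Math.Cos then operates on a small, well-conditioned argument.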
Updates:
1: Wolfram Mathematica 11.0:
In[1] := N[Cos[4203708359], 50]
Out[1] := -0.57977754519881338078846707027800171954257546099993
2: Yes, the argument is huge, and I understand that naively reducing it modulo 2π in double precision loses digits. But the input itself is exactly representable, and the other implementations above manage to return an accurate result (so a correct argument reduction is evidently possible).
3: I have opened an issue against coreclr: https://github.com/dotnet/coreclr/issues/12737