Why do we use radians in programming?

I like radians as much as the next guy, and usually prefer them to degrees, but why do we use radians in programming?

To rotate something 180 degrees, you have to rotate it by 3.14159265... radians. Of course, most languages have some kind of constant for pi, but why would we ever want to use an irrational number like pi when we could use integers instead, especially in simple programs?
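
For instance, a minimal Python sketch (the rotate function below is hypothetical, just to show the shape of the call; the point is that both spellings denote the same half turn):

```python
import math

def rotate(angle_rad: float) -> None:
    """Hypothetical rotation call; like most math/graphics APIs, it expects radians."""
    print(f"rotating by {angle_rad:.9f} rad")

rotate(math.pi)             # half turn, written with the library's pi constant
rotate(math.radians(180))   # the same half turn, starting from an integer degree count
```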

We rely on the computer to decide that 3.14159265 is close enough to pi, so that functions such as sine and cosine return the correct values; but if the computer is too precise, the results come back slightly off (sin(3.14159265) = 0.00000000358979303). This would not be a problem if we could just use 180 degrees.
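
You can check that residual directly. A quick Python sketch (the exact digits vary with the platform's math library):

```python
import math

print(math.sin(math.pi))            # ~1.2246e-16: even the closest double to pi is not exact
print(math.sin(3.14159265))         # ~3.58979e-09: fewer digits of pi, bigger residual
print(math.sin(math.radians(180)))  # converting 180 degrees still lands near zero, not at it
```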

1 answer

This actually is a problem; it just manifests itself in different ways, especially if you don't stick to increments of 90 degrees.

Ultimately, it comes down to the fact that the mechanisms used to compute trig functions are defined in terms of radians (even when they are implemented in CPU microcode; you would have to dig through a numerical-methods text for the details, but they really want to be done in radians). Working in degrees therefore means constant conversions back and forth, which leads to cumulative error. Since this is floating point (and transcendental numbers in particular), a fair amount of error is already built in; piling an extra conversion on top of that both slows things down and adds still more avoidable error.
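
A hedged way to see that accumulation in Python (the exact drift is platform-dependent, but it is typically nonzero):

```python
import math

# A single degree->radian conversion already rounds: sin(30 deg) should be exactly 0.5
print(math.sin(math.radians(30)))   # typically 0.49999999999999994

# Converting on every step lets the rounding pile up. Compare summing 360,000
# converted one-degree increments against converting the running total once:
acc = 0.0
for _ in range(360_000):
    acc += math.radians(1.0)        # convert each increment separately
once = math.radians(360_000.0)      # convert the total a single time
print(acc - once)                   # small but typically nonzero drift
```

Staying in radians and converting at the boundary (if at all) pays the rounding cost once instead of on every call.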

Source: https://habr.com/ru/post/912728/

