`float` versus `double` in casual, one-off calculations?

I see people writing code that declares casual, one-off floating-point values as `float` (in C/C++). This is not a huge matrix where space matters, or something trying to fit into SIMD lanes, or anything like that. I mean small things, such as scaling a value by a percentage or computing a ratio between two values.
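For concreteness, here is a minimal sketch of the kind of code I mean (the function names are made up for illustration):

```cpp
#include <cstdio>

// The style I keep seeing: a casual, one-off calculation written with float.
float scale_percent_f(float value, float percent) {
    return value * (percent / 100.0f);
}

// The style I would write by habit: the same thing with double.
double scale_percent_d(double value, double percent) {
    return value * (percent / 100.0);
}

int main() {
    std::printf("%f\n", scale_percent_f(123.45f, 7.5f));
    std::printf("%f\n", scale_percent_d(123.45, 7.5));
}
```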

I have always used `double` and considered `float` only when space really mattered. I remember the days when desktop processors had no floating-point hardware, and there may have been a case to make about the performance of software emulation libraries, but from the first 287 coprocessor onward the native precision was 80 bits anyway; `float` was only for long-term storage of values in RAM or in files, and it did not affect calculation speed one iota.

Is there any reason today to use `float` instead of `double` in this casual way? Case 1: code specific to PC/Mac hardware; case 2: portable code that may run on desktops and on mobile devices such as phones.

Should I educate my team the way I remember it: "Hey, you know, `float` is the half-size thing and `double` is the normal one"? Or is there some trade-off or reason why C++ programmers would use `float` everywhere and seemingly (from my point of view) not know that `double` exists?

My question is not language-specific, but my terminology assumes that `float` is 4 bytes and `double` is 8 bytes.

1 answer

As I point out in this answer, there are several ways in which a `float` can be faster than a `double`, but in general, unless you know that floating point is a bottleneck, I would suggest sticking with `double`. That will also avoid problems such as this and this.
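I can't reproduce the linked questions here, but as a rough, assumed illustration of the kind of single-precision surprise they refer to:

```cpp
#include <cstdio>

int main() {
    // float carries roughly 7 significant decimal digits (24-bit significand),
    // so at large magnitudes small increments are silently lost.
    float  f = 16777216.0f;  // 2^24
    double d = 16777216.0;

    f += 1.0f;  // 2^24 + 1 is not representable as a float; rounds back to 2^24
    d += 1.0;   // easily representable in double (~15-16 significant digits)

    std::printf("float : %.1f\n", f);  // prints 16777216.0
    std::printf("double: %.1f\n", d);  // prints 16777217.0
}
```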

The obvious exception is hardware that only supports single precision (e.g., the Cortex-M4).

