Why is CGFloat a float on 32-bit and a double on 64-bit?

From CoreGraphics/CGBase.h:

#if defined(__LP64__) && __LP64__
# define CGFLOAT_TYPE double
# define CGFLOAT_IS_DOUBLE 1
# define CGFLOAT_MIN DBL_MIN
# define CGFLOAT_MAX DBL_MAX
#else
# define CGFLOAT_TYPE float
# define CGFLOAT_IS_DOUBLE 0
# define CGFLOAT_MIN FLT_MIN
# define CGFLOAT_MAX FLT_MAX
#endif

Why did Apple do this? What is the advantage?

I can only see drawbacks to this. Please enlighten me.

+4
2 answers

Apple explicitly states that it did this "to provide a wider range and accuracy for graphic values." You can debate whether the wider range and accuracy have really been useful in practice, but Apple is clear about what it was thinking.

By the way, remember that CGFloat was added in OS X 10.5, before the iPhone existed (and long before 64-bit iPhones). The 64-bit transition happened on the Mac. Apple generally prefers types that track the platform's natural word size rather than a fixed width; Swift carries this forward by bridging NSInteger to Int (i.e., Int is pointer-sized), and the Float-versus-Double choice follows the same pattern through CGFloat. On 32-bit ARM there was also a practical reason to keep CGFloat a float: the NEON vector unit only handles single-precision floats, while VFP handles doubles, but more slowly. (That is part of why NEON-friendly code sticks to float, and hence CGFloat.)

+4

Performance, mostly.

On a 32-bit CPU, 32-bit float operations are faster, and floats take half the memory and bandwidth.

On a 64-bit CPU, 64-bit IEEE double operations run at full speed, since the registers, data paths, load/store units, etc. are already 64 bits wide.

+1

Source: https://habr.com/ru/post/1589437/
