I have to say I was pleased when I first looked at C# to see that the integer data types are Int16, Int32 and Int64. It removed the ambiguity of types like int, whose size has grown over the years.
What surprises me is why there isn't, or doesn't seem to be, a Float16, Float32 and Float64.
Or at least not normally: a quick MSDN search only turns up float64 as R8, an unmanaged type that corresponds to double.
I assume there is not much ambiguity in Single and Double (or even Extended, a Float80, which as far as I know does not exist in C#), so I'm not sure how strong the case for this is. Decimal, though, seems to be a Float128, and I noticed it is described as "extended floating-point precision"; should we expect an Int128 to match it?
EDIT: There is no ambiguity at all in Single or Double (this was an assumption, but it appears to be true; I'm adding it here for clarity).
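To illustrate the point above: a minimal sketch (my own sanity check, not from any spec) showing that the C# keywords are simply fixed-width aliases for the BCL struct types, so there is no platform-dependent ambiguity to resolve:

```csharp
using System;

class AliasCheck
{
    static void Main()
    {
        // The keywords are exact aliases for the BCL types.
        Console.WriteLine(typeof(int) == typeof(Int32));     // True
        Console.WriteLine(typeof(float) == typeof(Single));  // True: always 32-bit
        Console.WriteLine(typeof(double) == typeof(Double)); // True: always 64-bit

        // Sizes are fixed by the spec, regardless of platform.
        Console.WriteLine(sizeof(float));   // 4
        Console.WriteLine(sizeof(double));  // 8
        Console.WriteLine(sizeof(decimal)); // 16, i.e. 128 bits
    }
}
```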
Should we expect this kind of naming convention? Would you appreciate it if there were one?
Or should we go further and have Int&lt;N&gt; for arbitrary sizes? (Yes, I understand that there are libraries that support such things.)