Why does the compiler decide that 2.3 is double instead of decimal?

Why does the compiler decide that 2.3 is a double, so that this code will not compile:

 decimal x;
 x = 2.3;           // Compilation error - cannot convert double to decimal.
 x = (decimal) 2.3; // OK

Why doesn't the compiler reason like this:
"He wants a decimal, and he gives me a value that could be a decimal, so it is a decimal!"

And why does this not lead to a compilation error:

 short x;
 x = 23; // OK

Who says 23 is not an int?

+10
compiler-construction c# types
Dec 07 '11
4 answers

There are a lot of questions here. Let me break them down into smaller questions.

Why is the literal 2.3 of type double and not decimal?

Historical reasons. C# is intended to be a member of the family of languages with C-like syntax, so that its appearance and basic idioms are familiar to programmers coming from C-like languages. In almost all of those languages, floating-point literals are treated as binary rather than decimal floats, because that is how C originally did it.

If I were designing a new language from scratch, I would most likely make ambiguous literals illegal; every floating-point literal would have to be unambiguously double, float, decimal, and so on.

Why is it illegal to convert implicitly between double and decimal at all?

Because doing so is probably a mistake, in two ways.

First, doubles and decimals have different ranges and different amounts of "representation error", that is, how far the quantity actually represented can be from the exact mathematical value you want to represent. Converting a double to a decimal, or vice versa, is a dangerous thing to do, and you should be sure that you are doing it correctly; making you spell out the cast calls attention to the fact that you are potentially losing precision or magnitude.
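To make that concrete, here is a small sketch of my own (not part of the original answer) showing what the explicit cast can hide:

 double d = 0.1;               // 0.1 has no exact binary representation
 decimal m = (decimal)d;       // explicit cast required; rounds to 0.1m (about 15 significant digits survive)
 double big = 1e30;            // comfortably within double's range...
 // decimal tooBig = (decimal)big;  // ...but this would throw OverflowException at run time,
 //                                 // because decimal tops out at roughly 7.9e28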

Secondly, doubles and decimals have very different uses. Doubles are usually used for scientific calculations, where the difference between 1.000000000001 and 0.99999999999 is much smaller than the experimental error; accumulating small representation errors does not matter there. Decimals are usually used for exact financial calculations that need to be accurate to the penny. Mixing the two by accident seems dangerous.
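A quick sketch of that difference in practice (again my own example, not from the answer):

 double dSum = 0.1 + 0.2;      // 0.30000000000000004... because of binary round-off
 decimal mSum = 0.1m + 0.2m;   // exactly 0.3
 bool dExact = dSum == 0.3;    // false - harmless in a physics model, fatal in an invoice
 bool mExact = mSum == 0.3m;   // true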

There are times when you have to do it; for example, it is easier to work out "exponential" problems such as mortgage amortization or compound interest accrual in doubles. In those cases we again make you state that you are converting from double to decimal, to make it crystal clear that this is a point in the program where loss of precision or magnitude can occur if you do not get it right.
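For instance, a compounding calculation might be sketched like this (hypothetical numbers and variable names of my own), with the cast marking the crossing back into the money domain:

 double principal = 10000.0;
 double monthlyRate = 0.05 / 12;
 int months = 360;
 double futureValue = principal * Math.Pow(1 + monthlyRate, months);  // easy to do in doubles
 decimal balance = Math.Round((decimal)futureValue, 2);               // the cast flags the risky spot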

Why is it illegal to convert a double literal to a decimal? Why not just pretend that it is a decimal literal?

C# is not a "hide your mistakes for you" kind of language. It is a "tell you about your mistakes so that you can fix them" kind of language. If you meant to say "2.3m" and you forgot the "m", then the compiler should tell you about it.
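In other words (my own sketch of the two spellings):

 // decimal price = 2.3;   // compile-time error: the double literal will not convert implicitly
 decimal price = 2.3m;     // the 'm' suffix makes it a decimal literal in the first place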

Then why is it legal to convert an integer literal (or any integer constant) to short, byte, etc.?

Because an integer constant can be checked at compile time to make sure it is within the range of the target type. And a conversion from an integral type to a smaller integral type is always exact when the value is in range; it never loses precision or magnitude, unlike double/decimal conversions. Also, integer constant arithmetic is always done in a "checked" context unless you override that with an unchecked block, so there is not even the risk of overflow.
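A small sketch of what the compiler accepts and rejects under that rule (my own illustration):

 short ok = 23;            // the constant 23 fits in short, so the implicit conversion is allowed
 // byte tooBig = 257;     // compile-time error: the constant is out of range for byte
 int n = 257;
 // byte fromVar = n;      // also an error: n is not a constant, so no compile-time range check is possible
 byte fromVar = (byte)n;   // an explicit cast is required, and here it silently wraps to 1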

It is also less likely that int/short arithmetic crosses a domain boundary the way double/decimal arithmetic does. Double arithmetic is likely to be scientific and decimal arithmetic is likely to be financial, but int and short arithmetic are not strongly tied to different business domains.

And making it legal means that you don't need to write ugly, unnecessary code that casts constants to the right types.

Therefore, there is no good reason to make it illegal, and good reason to make it legal.

+19
Dec 07 '11 at 17:01

There are a few things here:

  • In the first example, you are trying to convert a literal double to float implicitly. This will not work.
  • The line that is intended to work is actually trying to convert a double to decimal explicitly (which is allowed, but usually not a good idea) and then implicitly convert that decimal to float (which is not allowed). If x was meant to be declared as decimal, then only the explicit conversion from double to decimal would be needed, which is usually not a good idea anyway.
  • The conversion of the integer literal works because of the "implicit constant expression conversions" described in section 6.1.9 of the C# 4 specification:

    A constant expression of type int can be converted to type sbyte, byte, short, ushort, uint or ulong, provided the value of the constant expression is within the range of the destination type.

    There is something similar for long, but not for double (see the sketch just below this list).
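A sketch of that rule and the similar one for long (my own example, not from the answer):

 short s = 300;        // an int constant within short's range: implicit conversion allowed
 ulong u = 42L;        // the long rule: a non-negative long constant converts implicitly to ulong
 // ulong bad = -1L;   // compile-time error: the constant is out of range for ulong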

Basically, when you write a floating-point constant, it is a good idea to make the type explicit with a suffix:

 double d = 2.3d;
 float f = 2.3f;
 decimal m = 2.3m;
+7
Dec 07 '11

2.3 is a double. Those are the language rules: any numeric literal with a decimal point in it is a double unless it has the suffix F (float) or the suffix M (decimal):

 x = 2.3F; // fine 

The compiler even tells you about this:

Literal of type double cannot be implicitly converted to type 'float'; use an 'F' suffix to create a literal of this type

+2
Dec 07 '11 at 16:40

Because floating-point numbers always differ slightly in their calculations and in their value range, literals are always given the largest possible type in literal notation (in your case: double).

Non-floating-point types are handled the same way underneath, so they can be converted without any problems. If your value exceeds the range of the variable, this causes an error (e.g. 257 for a byte).

+1
Dec 07 '11 at 16:41


