There are a lot of questions here. Let me break them down into smaller questions.
Why is the literal 2.3 of type double and not decimal?
Historical reasons. C# is intended to be a member of the C-like family of languages, so that its appearance and basic idioms are familiar to programmers who use C-like languages. In almost all of those languages, floating point literals are treated as binary rather than decimal floats, because that is how C did it originally.
If I were designing a new language from scratch, I would most likely make ambiguous literals illegal; every floating point literal would have to be unambiguously double, single, decimal, and so on.
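For what it is worth, C# already lets you state the type explicitly with a suffix; a quick sketch of how the existing suffixes disambiguate:

```csharp
double  d = 2.3;    // no suffix: a double literal
float   f = 2.3f;   // 'f' suffix: a float (single) literal
decimal m = 2.3m;   // 'm' suffix: a decimal literal
```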
Why is it illegal to convert implicitly between double and decimal at all?
Because doing so is probably a mistake, in two ways.
First, doubles and decimals have different ranges and different amounts of representation error, that is, the quantity actually represented can differ from the exact mathematical value you want to represent. Converting from double to decimal, or vice versa, is a risky thing to do, and you should be sure that you are doing it correctly; requiring the explicit cast calls attention to the fact that you are potentially losing precision or magnitude.
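To illustrate, a rough sketch of both directions of the explicit conversion losing information (the specific values are chosen only for illustration):

```csharp
decimal exact  = 0.1m;             // decimal represents 0.1 exactly
double  approx = (double)exact;    // nearest double to 0.1; the exact value is gone

double  huge     = 1e30;           // comfortably within double's range
decimal overflow = (decimal)huge;  // throws OverflowException at runtime:
                                   // 1e30 is beyond decimal's range (about 7.9e28)
```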
Second, doubles and decimals have very different usage patterns. Doubles are usually used for scientific calculations, where the difference between 1.000000000001 and 0.99999999999 is far smaller than the experimental error; accumulating tiny representation errors does not matter. Decimals are usually used for exact financial calculations, which need to be accurate to the penny. Accidentally mixing the two seems dangerous.
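A small illustration of that difference in representation error (the exact text printed for the double may vary by runtime):

```csharp
double  d = 1.0  - 0.9  - 0.1;    // not exactly zero; 0.9 and 0.1 have no exact binary form
decimal m = 1.0m - 0.9m - 0.1m;   // exactly zero; decimal represents these values exactly

System.Console.WriteLine(d);      // prints something like -2.77555756156289E-17
System.Console.WriteLine(m);      // prints 0.0
```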
There are times when you have to do it; for example, it is easier to work out "exponential" problems such as mortgage amortization or compound interest in doubles. In those cases, again, we force you to state that you are converting from double to decimal, to make it very clear that this is a point in the program where precision or magnitude can be lost if you do not get it right.
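For illustration, a sketch of such a conversion point; the loan figures and the standard annuity formula used here are hypothetical and only there to show where the cast goes:

```csharp
// Hypothetical loan: 250,000 at 5% nominal annual interest, 360 monthly payments.
double principal   = 250_000.0;
double monthlyRate = 0.05 / 12;
int    months      = 360;

// The exponential part is easy in doubles...
double raw = principal * monthlyRate / (1 - System.Math.Pow(1 + monthlyRate, -months));

// ...and the explicit cast marks the spot where the result re-enters the decimal world,
// rounded to the penny.
decimal payment = System.Math.Round((decimal)raw, 2);
```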
Why is it illegal to convert a double literal to a decimal literal? Why not just pretend it's a decimal literal?
C# is not a "hide your mistakes for you" language. It is a "tell you about your mistakes so that you can fix them" language. If you meant to say "2.3m" and forgot the "m", the compiler should tell you about it.
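For example (the exact wording of the diagnostic may differ slightly between compiler versions, but it is roughly this):

```csharp
decimal wrong = 2.3;    // error CS0664: literal of type double cannot be implicitly
                        // converted to type 'decimal'; use an 'M' suffix
decimal right = 2.3m;   // intent stated explicitly; compiles fine
```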
Then why is it legal to convert an integer literal (or any integer constant) to short, byte, etc.?
Because an integer constant can be checked at compile time to make sure it is in the correct range. And a conversion from an in-range integer constant to a smaller integral type is always exact; it never loses precision or magnitude, unlike double/decimal conversions. Also, integer constant arithmetic is always done in a "checked" context unless you override that with an unchecked block, so there is not even the danger of overflow.
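A few sketches of what the compiler does and does not let through (each "error" line would have to be removed for the snippet to compile):

```csharp
short ok  = 300;                    // legal: 300 is known at compile time to fit in a short
byte  bad = 300;                    // compile-time error: 300 is outside byte's range (0..255)
const int boom = int.MaxValue + 1;  // compile-time error: constant arithmetic overflows,
                                    // unless you wrap it in unchecked(...)
```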
And it is less likely that integer/short arithmetic crosses a "domain" boundary the way double/decimal arithmetic does. Double arithmetic is likely to be scientific; decimal arithmetic is likely to be financial. But integer and short arithmetic are not clearly tied to different business domains.
And making it legal means that you don't need to write ugly, unnecessary code that casts constants to the right types.
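Roughly, the difference between these two lines:

```csharp
short today     = 1;          // what the rule allows you to write
short otherwise = (short)1;   // the redundant cast you would need without it
```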
So there is no good reason to make it illegal, and good reasons to make it legal.