I've seen decimal used instead of int/long in various examples. I'm just trying to understand why.
Probably because .NET decimal and Oracle NUMBER map a bit better than long and NUMBER, and decimal also gives you more flexibility: if you add a scale to the Oracle column at a later stage, you won't have to change the data type if you have already used decimal.
decimal is, of course, slower than int and long, since the latter two are supported in hardware. That said, you would have to crunch a serious amount of data before it makes any difference. I still think you should use long if that is what you are dealing with, and then let the table column definition reflect that: NUMBER(18,0) for long, and so on.
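A minimal sketch of what that mapping looks like in practice, assuming the Oracle.ManagedDataAccess.Client provider and a hypothetical ORDERS table whose ID column is declared NUMBER(18,0):

    using Oracle.ManagedDataAccess.Client;

    class NumberMappingSketch
    {
        static void Read(string connectionString)
        {
            using (var conn = new OracleConnection(connectionString))
            using (var cmd = new OracleCommand("SELECT ID FROM ORDERS", conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // NUMBER(18,0) fits a long without loss.
                        long idAsLong = reader.GetInt64(0);

                        // decimal works too, and keeps working if the column
                        // later gains a scale, e.g. NUMBER(18,2).
                        decimal idAsDecimal = reader.GetDecimal(0);
                    }
                }
            }
        }
    }

Reading the same ordinal twice here is only for illustration; in real code you would pick one mapping and stick with it.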
The reason decimal maps a bit better is that long is 64 bits while decimal is 128 bits.
.NET
Type: decimal
Approximate range: ±1.0 × 10^-28 to ±7.9 × 10^28
Precision: 28-29 significant digits
Type: long
Range: -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807
Precision: 18 (19 for ulong) significant digits
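These limits can be printed straight from the built-in .NET constants; a quick console sketch:

    using System;

    class RangeSketch
    {
        static void Main()
        {
            Console.WriteLine(decimal.MaxValue); // 79228162514264337593543950335 (~7.9 x 10^28)
            Console.WriteLine(decimal.MinValue); // -79228162514264337593543950335
            Console.WriteLine(long.MaxValue);    // 9223372036854775807
            Console.WriteLine(long.MinValue);    // -9223372036854775808
        }
    }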
Oracle
NUMBER defaults to 38 significant digits and scale 0 (integer).
Type: NUMBER
Range: ±1 × 10^-130 to 9.99...9 × 10^125
Precision: 38 significant digits
Microsoft is aware of the issue and notes:
This data type is an alias for the NUMBER(38) data type, and is designed so that the OracleDataReader returns a System.Decimal or OracleNumber instead of an integer value. Using the .NET Framework data type can cause an overflow.
Come to think of it, you would actually need BigInteger to represent the same number of significant digits as what NUMBER defaults to. I've never seen anyone do that, and I would suppose it's a very rare need. Besides, BigInteger still wouldn't cut it, since NUMBER can be positive and negative infinity.
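To make the precision gap concrete, here is a small illustration (my own, not from the original answer): a 38-digit integer, the default precision of NUMBER, overflows both long (about 19 digits) and decimal (28-29 digits), while BigInteger can hold it:

    using System;
    using System.Numerics;

    class PrecisionSketch
    {
        static void Main()
        {
            // 38 nines: a 38-significant-digit integer, the NUMBER default.
            string thirtyEightDigits = new string('9', 38);

            // long.Parse(thirtyEightDigits)    -> OverflowException
            // decimal.Parse(thirtyEightDigits) -> OverflowException
            BigInteger big = BigInteger.Parse(thirtyEightDigits);
            Console.WriteLine(big);
        }
    }

Even then, BigInteger has no notion of the positive and negative infinity that NUMBER can represent, which is the other reason it wouldn't cut it.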
Jonas Elfström, Apr 04 '11 at 8:12