First, you are going to get inaccurate advice from other answers. Run the following (on a 64-bit OS and 64-bit architecture):
```sql
declare @op1 decimal(18,2) = 0.01
       ,@op2 decimal(18,2) = 0.01;
select result = @op1 * @op2;

result
_________._________._________._________
0.0001
```
Note the number of underscores under the column heading: 39 in all. (I changed every tenth underscore to a period to aid counting.) That is enough for 38 digits (the maximum precision SQL Server allows for the decimal type) plus a decimal point. Although both operands were declared as decimal(18,2), the calculation was performed, and reported, in the decimal(38,4) data type. (I am running SQL Server 2012 on a 64-bit machine; some details may vary with machine architecture and OS.)
So it is clear that no precision is being lost; on the contrary, overflow is the only way error can be introduced. This is a direct consequence of all decimal operand calculations being performed as integer arithmetic. You will occasionally see artifacts of this in IntelliSense, when the type of intermediate decimal fields is reported as int instead.
Consider the example above. Both operands are of type decimal(18,2) and are stored as the integer 1 with a scale of 2. When they are multiplied, the product is still the integer 1, but the scale is evaluated by adding the operands' scales, producing a result of integer value 1 and scale 4. That is the value 0.0001, of type decimal(18,4), stored as an integer with value 1 and scale 4.
Read that last paragraph again.

Rinse and repeat.
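The integer-plus-scale mechanics described above can be sketched in a few lines of plain Python. This is an illustration of the idea, not SQL Server's actual internals: a fixed-point decimal is held as an unscaled integer plus a scale, and multiplication is pure integer arithmetic with the scales added.

```python
class Dec:
    """Toy fixed-point decimal: an unscaled integer plus a scale."""

    def __init__(self, unscaled: int, scale: int):
        self.unscaled = unscaled  # e.g. 1 with scale 2 represents 0.01
        self.scale = scale

    def __mul__(self, other: "Dec") -> "Dec":
        # Integer product; scales add. No precision is lost here --
        # only overflow of the unscaled integer could introduce error.
        return Dec(self.unscaled * other.unscaled, self.scale + other.scale)

    def __str__(self) -> str:
        s = str(self.unscaled).rjust(self.scale + 1, "0")
        return s[:-self.scale] + "." + s[-self.scale:] if self.scale else s

op1 = Dec(1, 2)   # 0.01, stored as integer 1 with scale 2
op2 = Dec(1, 2)   # 0.01
print(op1 * op2)  # integer 1, scale 4 -> 0.0001
```

The product's stored integer is still 1; only the scale changed from 2 to 4, which is exactly why the multiplication cannot lose digits.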
In practice, on a 64-bit machine and OS, the result is actually stored and carried forward as type decimal(38,4), because the calculations are done on a CPU where the extra bits are free.
To come back to your question: all of the major world currencies (that I am aware of) require only 2 decimal places, but there are a handful where 4 are required, and there are financial transactions, such as currency trades and bond sales, where 4 decimal places are mandated by law. When devising the money data type, Microsoft appears to have chosen the maximum scale that might be required rather than the normally required scale. Given how few transactions, and corporations, actually require precision greater than 19 digits, this seems eminently sensible.
If you have:
- A high expectation of dealing only with major currencies (which at the moment require only 2 digits of scale); and
- No expectation of dealing with transactions that are mandated by law to require 4 digits of scale,
then you are safe to use a decimal type with a scale of 2 (such as decimal(19,2), decimal(18,2), or even decimal(38,2)) instead of money. This will ease some of your conversions and, given the assumptions above, will have no cost. A typical case where these assumptions are met is a GL or subledger accounting system that tracks transactions to the penny. However, a stock- or bond-trading system would not meet these assumptions, because there 4 digits of scale are mandated by law.
One way to distinguish the two cases is whether transactions are reported in cents or percents, which require only 2 digits of scale, or in basis points, which require 4 digits of scale.
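As a hypothetical illustration of that rule of thumb (the notional and fee figures below are invented for the example): amounts quoted in cents fit in 2 digits of scale, while a fee quoted in basis points (1 bp = 0.01%) is only exact with 4 digits of scale. Python's decimal module makes the difference visible because it preserves the scale of each operand.

```python
from decimal import Decimal

one_cent = Decimal("0.01")    # 2 digits of scale cover cent amounts
one_bp = Decimal("0.0001")    # 1 basis point = 0.01%; needs 4 digits of scale

# A 25 bp fee on a $1,000,000 notional (made-up numbers for illustration):
notional = Decimal("1000000")
bond_fee = notional * Decimal("25") * one_bp
print(bond_fee)  # 2500.0000 -- carried at 4 digits of scale
```

Rounding the per-basis-point factor to 2 digits of scale would collapse it to 0.00 and destroy the calculation, which is exactly why such transactions mandate scale 4.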
If you are unsure which case applies to your circumstances, consult your corporate Controller or Director of Finance as to the legal and GAAP requirements for your application; (s)he will be able to give you definitive advice.