Where do the 27 digits of extra precision come from in `decimal.Decimal(1.0/3.0)`?

This question is about the number of significant digits in the expression decimal.Decimal(1.0/3.0).

The documentation for decimal.Decimal says that "[t]he significance of a new Decimal is determined solely by the number of digits input."

It follows, I think, that the number of significant digits in decimal.Decimal(1.0/3.0) should be determined by the number of significant digits in the IEEE 754 double produced by the operation 1.0/3.0.

Now, I understand that an IEEE 754 64-bit double has 15-17 significant decimal digits.

Therefore, taking all of the above together, I would expect decimal.Decimal(1.0/3.0) to contain no more than 17 significant decimal digits.

However, it appears that decimal.Decimal(1.0/3.0) yields at least 54 significant decimal digits:

import decimal
print(decimal.Decimal(1.0 / 3.0))

# 0.333333333333333314829616256247390992939472198486328125

My two key questions are:

  • What is the basis for the statement that an IEEE 754 double has 15-17 significant decimal digits of accuracy?
  • How can the contradiction between the following points be resolved?
    • the documentation for decimal.Decimal quoted above
    • the 54 (or more) significant digits in decimal.Decimal(1.0/3.0), demonstrated in the snippet below
    • the maximum of 17 significant decimal digits in an IEEE 754 double
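
To make the mismatch concrete, here is a quick check (a sketch, assuming CPython 3) comparing the float's usual printed form with the number of digits the Decimal actually stores:

import decimal

f = 1.0 / 3.0
print(f)                            # 0.3333333333333333  (the usual 16-17 digits)

d = decimal.Decimal(f)
print(d)                            # 0.333333333333333314829616256247390992939472198486328125
print(len(d.as_tuple().digits))     # 54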

Addition: Well, now I understand the situation better, thanks to ajcr's answer and some additional comments.

Internally, decimal represents 1.0/3.0 as the fraction

6004799503160661/18014398509481984

The denominator of this fraction is 2^54. The numerator is exactly (2^54 - 1)/3.

The exact decimal representation of this fraction is

0.333333333333333314829616256247390992939472198486328125
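
A quick way to check this decomposition (a sketch, assuming CPython 3; as_integer_ratio returns the float's exact numerator and denominator):

import decimal

f = 1.0 / 3.0
n, d = f.as_integer_ratio()
print(n, d)                   # 6004799503160661 18014398509481984
print(d == 2**54)             # True
print(n == (2**54 - 1) // 3)  # True

# n/d expanded in decimal matches Decimal(f) digit for digit
decimal.getcontext().prec = 60
print(decimal.Decimal(n) / decimal.Decimal(d))  # 0.333333333333333314829616256247390992939472198486328125
print(decimal.Decimal(f))                       # the same 54-digit value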

Addition 2: Some notation first. Let F denote a float (an IEEE 754 64-bit double), and let Q(F) denote the exact rational number that F represents. For a real number R, let F(R) denote the 64-bit IEEE 754 double nearest to R (see footnote 1).

For R = 1/3, the IEEE 754 64-bit value F(R) is:

0 01111111101 0101010101010101010101010101010101010101010101010101 = F(R)
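
The bit pattern can be checked directly; a small sketch using struct to expose the raw 64 bits:

import struct

f = 1.0 / 3.0
bits = format(struct.unpack('>Q', struct.pack('>d', f))[0], '064b')
print(bits[0], bits[1:12], bits[12:])
# 0 01111111101 0101010101010101010101010101010101010101010101010101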

... and Q(F(R)) is the fraction N/D, where D = 2^54 = 18014398509481984 and N = (2^54 - 1)/3 = 6004799503160661. That is:

6004799503160661/18014398509481984 = Q(F(R))

Written out exactly in decimal, this is:

0.333333333333333314829616256247390992939472198486328125 = Q(F(R))

Since F(R) is the double nearest to R = 1/3 and Q(F(R)) = N/D, consider the pair (A, B) (see footnote 2), where A = (N - 1)/D and B = (N + 1)/D are the exact values of the doubles immediately below and above F(R). Then A < Q(F(R)) < B. Written to 54 decimal places, together with R = 1/3:

0.3333333333333332593184650249895639717578887939453125   = A
0.333333333333333314829616256247390992939472198486328125 = Q(F(R))
0.333333333333333333333333333333333333333333333333333333 ~ R
0.33333333333333337034076748750521801412105560302734375  = B
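
A and B are simply the doubles one step below and one step above F(R); on Python 3.9 or later they can be obtained with math.nextafter (a sketch):

import math
from decimal import Decimal

f = 1.0 / 3.0
a = math.nextafter(f, 0.0)   # nearest double below f
b = math.nextafter(f, 1.0)   # nearest double above f
print(Decimal(a))            # 0.3333333333333332593184650249895639717578887939453125
print(Decimal(b))            # 0.33333333333333337034076748750521801412105560302734375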

Rounded to 17 significant digits, A, Q(F(R)), R, and B are:

0.33333333333333326 ~ A
0.33333333333333331 ~ Q(F(R))
0.33333333333333333 ~ R
0.33333333333333337 ~ B
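
In practical terms, 17 significant digits always suffice to recover a double exactly, while 15 or fewer may not; a quick check for this particular value (a sketch):

f = 1.0 / 3.0
print(format(f, '.17g'))               # 0.33333333333333331
print(float(format(f, '.17g')) == f)   # True: 17 digits pin the double down exactly
print(float(format(f, '.16g')) == f)   # True for this particular value
print(float(format(f, '.15g')) == f)   # False: 15 digits no longer identify it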

This, as far as I can tell, is the basis for the statement that an IEEE 754 double has "15-17 significant decimal digits": rounded to 17 significant digits, A, Q(F(R)), R, and B are still all distinct, so 17 digits are enough to tell neighbouring doubles apart, while with fewer digits they start to coincide.

It also explains the extra digits in Q(F(R)). Because its denominator is a power of 2, the decimal expansion of Q(F(R)) terminates, after 54 digits here, and all 54 of those digits are needed to write Q(F(R)) exactly; but only the first 17 or so carry any information about R. IOW, the trailing 27 digits of Q(F(R)) say nothing about R itself; they are an artifact of Q(F(R)) acting as the stand-in, within (A, B), for R.

Given the interval (A, B), Q(F(R)) is adequately described by

0.33333333333333331 ~ Q(F(R))

and nothing of substance is lost.

So there is no real contradiction with the decimal documentation: Decimal reproduces exactly the number it is given, and the number it is given is the exact rational Q(F(R)), which happens to need 54 digits to write out. IOW, the "digits input" are the digits of that exact binary value, not the 15-17 digits one usually prints for a double. The "15-17 significant digits" figure describes how many decimal digits are needed to identify a double, not how many digits its exact value contains.


1. Strictly speaking, F(R) is an IEEE 754 bit pattern (a float), while Q(F(R)) is a rational number (an exact value); I ignore the distinction where it does not matter.

2. A and B are the exact values of the two doubles adjacent to F(R), one ulp below and one ulp above it.

Answers (3):

The float is converted to a Decimal by the from_float classmethod. It is implemented in Python, so it is easy to see exactly what it does: it takes the float's exact integer ratio and builds the Decimal from that, losing nothing along the way.

For an ordinary finite float, the relevant part is around line 740 of decimal.py:

n, d = abs(f).as_integer_ratio()                  # abs(f) == n/d exactly; d is a power of two
k = d.bit_length() - 1                            # d == 2**k
result = _dec_from_triple(sign, str(n*5**k), -k)  # n/2**k == (n*5**k) / 10**k

For 1.0/3.0 this gives:

>>> f = 1.0 / 3.0
>>> f.as_integer_ratio()
(6004799503160661, 18014398509481984)
>>> (18014398509481984).bit_length()
55

These values go to _dec_from_triple; the coefficient string str(n*5**k) works out to:

'333333333333333314829616256247390992939472198486328125'

with an exponent of -(55 - 1) = -54. The resulting Decimal therefore has 54 significant digits, which is exactly how many decimal digits it takes to write the float's value out in full.
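
The same computation can be repeated by hand to confirm the digit count (a sketch, outside the decimal module):

from decimal import Decimal

f = 1.0 / 3.0
n, d = f.as_integer_ratio()
k = d.bit_length() - 1    # 54, since d == 2**54
coeff = str(n * 5**k)     # n / 2**k == (n * 5**k) / 10**k
print(len(coeff))         # 54
print(Decimal((0, tuple(map(int, coeff)), -k)) == Decimal(f))  # True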

Decimal(1.0/3.0) first performs the division in binary floating point and only then converts the result to Decimal, which is probably not what you want. Do the division in Decimal from the start:

>>> from decimal import Decimal
>>> Decimal("1.0") / Decimal("3.0")
Decimal('0.3333333333333333333333333333')

As for the specific points:

Yes, a 64-bit double is good for only about "15-17 significant decimal digits".

Decimal() does not add precision of its own; it keeps every digit of the value you pass in. Passing it the float 1.0/3.0 is the same as writing Decimal(0.333333333333333314829616256247390992939472198486328125), because that is the float's exact value.

Decimal is for when you need decimal arithmetic with controlled precision; it cannot add accuracy that the float never had.
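
The 28 digits in that division result come from the default context precision, which can be adjusted; a short sketch:

from decimal import Decimal, getcontext

getcontext().prec = 6
print(Decimal("1.0") / Decimal("3.0"))   # 0.333333

getcontext().prec = 50
print(Decimal("1.0") / Decimal("3.0"))   # fifty 3s after the point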


This is not really about the decimal module at all; it is a consequence of IEEE 754.

A double cannot represent most real numbers exactly. It can only hold the nearest representable value, and infinitely many reals share each such value.

The 1.0/3.0 in the source is therefore not the real number 1/3 but the double closest to 1/3. To get a sense of how sparse the representable values are: in the whole interval [9999999.999999999444888487687874217787818416595458984375, 10000000.0000000011102230246251565404236316680908203125] the only value a double can hold exactly is 10000000.0, and in [29999999.999999997779553950749686919152736663818359375, 30000000.000000002220446049250313080847263336181640625] only 30000000.0.

So the long tail of digits is not something Decimal invented; it is simply the exact value of that nearest double, written out in full.
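
One way to see the collapsing directly: decimal values that look quite different all convert to the very same double (a sketch):

f = 1.0 / 3.0
print(float("0.33333333333333331") == f)   # True
print(float("0.3333333333333333") == f)    # True
print(float("0.333333333333333314829616256247390992939472198486328125") == f)  # True
print(float("0.333333333333333") == f)     # False: this one is too far away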


Source: https://habr.com/ru/post/1617733/

