I am not sure whether this is a bug, but I was playing with big and I cannot understand why this code behaves as follows:
https://carc.in/#/r/2w96
Code

require "big"

x = BigInt.new(1 << 30) * (1 << 30) * (1 << 30)
puts "BigInt: #{x}"

x = BigFloat.new(1 << 30) * (1 << 30) * (1 << 30)
puts "BigFloat: #{x}"
puts "BigInt from BigFloat: #{x.to_big_i}"
Output
BigInt: 1237940039285380274899124224
BigFloat: 1237940039285380274900000000
BigInt from BigFloat: 1237940039285380274899124224
At first I thought that BigFloat requires changing BigFloat.default_precision to work with a number this large. But from this code it seems that the precision only matters when producing the #to_s output.
The same thing happens with the BigFloat precision set to 1024 ( https://carc.in/#/r/2w98 ); that run gives the output below, and a sketch of how it is set up follows it:
Output
BigInt: 1237940039285380274899124224
BigFloat: 1237940039285380274899124224
BigInt from BigFloat: 1237940039285380274899124224
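That run is essentially the same code with the precision raised before constructing the BigFloat, roughly like this (assuming BigFloat.default_precision= maps to GMP's mpf_set_default_prec, which only affects values created afterwards):

require "big"

# Raise the default precision to 1024 bits before the BigFloat is created;
# assumption: this only applies to values constructed after the call.
BigFloat.default_precision = 1024

x = BigFloat.new(1 << 30) * (1 << 30) * (1 << 30)
puts "BigFloat: #{x}"                      # now prints every digit
puts "BigInt from BigFloat: #{x.to_big_i}" # exact in both runs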
BigFloat#to_s uses LibGMP.mpf_get_str(nil, out expptr, 10, 0, self), where the GMP documentation says:
mpf_get_str (char *str, mp_exp_t *expptr, int base, size_t n_digits, const mpf_t op)
Convert op to a string of digits in base base. The base argument may vary from 2 to 62 or from -2 to -36. Up to n_digits digits will be generated. Trailing zeros are not returned. No more digits than can be accurately represented by op are ever generated. If n_digits is 0 then that accurate maximum number of digits are generated.
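If I read that right, this is what happens in the first run: with n_digits set to 0, mpf_get_str only emits as many decimal digits as the mantissa can guarantee at the stock default precision, and #to_s pads the rest of the exponent with zeros, while #to_big_i does not go through that string at all. A sketch of that (the printed values are the ones from the first run above):

require "big"

x = BigFloat.new(1 << 30) * (1 << 30) * (1 << 30)

# mpf_get_str with n_digits = 0 stops after the digits the current
# precision can guarantee; #to_s fills the remaining places with zeros.
puts x.to_s      # => 1237940039285380274900000000

# #to_big_i presumably converts the underlying mpf value rather than the string.
puts x.to_big_i  # => 1237940039285380274899124224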
Thanks.