You really don't want to know how many “digits are in the fractional part”; asking the question that way suggests you're not entirely clear on what is happening under the hood in floating point. There is no separate precision for the whole and fractional parts.
What you really want to know is the overall precision.
1) A 32-bit IEEE 754 single-precision number has 24 bits of mantissa, which gives about 24 * log10(2) ≈ 7.2 decimal digits of precision.
2) A 64-bit IEEE 754 double-precision number has 53 bits of mantissa, which gives about 53 * log10(2) ≈ 16.0 decimal digits of precision.
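As a quick sanity check, those digit counts can be computed directly. A minimal Python sketch (Python's `float` is an IEEE 754 double):

```python
import math

# Decimal digits of precision implied by the mantissa width:
# 24 bits for single precision, 53 bits for double (IEEE 754).
for name, bits in [("single", 24), ("double", 53)]:
    digits = bits * math.log10(2)
    print(f"{name}: {bits} * log10(2) = {digits:.1f} digits")
```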
Suppose you are working with double-precision numbers. If the value is small in base 10, say between 0 and 1, then you get roughly 16 significant decimal digits after the decimal point. Your example of 1.0/3.0 shows this: you know the answer should be 0.333 repeating, but you get sixteen threes after the decimal point before the digits turn into garbage.
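You can see this for yourself by printing the quotient with more digits than the default (Python again; its `float` is a double):

```python
# Sixteen threes after the decimal point, then the
# representation error of the nearest double shows through.
print(f"{1.0 / 3.0:.20f}")
```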
If you have a very large number instead, say a billion divided by three ( 1000000000.0/3.0 ), then on my machine the answer looks like this:
1000000000.0/3.0 = 333333333.333333313465118
In this case you still have about 16 digits of precision, but now they are split between the integer and fractional parts: the integer part has 9 exact digits, and the fractional part has 7. The eighth digit of the fractional part and everything after it is garbage.
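A sketch of the same experiment in Python (the exact trailing digits may be printed slightly differently on other systems, but the first garbage digit lands in the same place):

```python
# 9 exact digits before the decimal point, about 7 after it;
# the remaining printed digits are rounding noise.
print(f"{1000000000.0 / 3.0:.15f}")
```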
Similarly, suppose we divide one quintillion (18 zeros) by three. On my machine:
1000000000000000000.0/3.0 = 333333333333333312.000000000000000
You still have about sixteen digits of precision, but none of them are left for the fractional part.
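The same effect, sketched in Python: at this magnitude, adjacent doubles are 64 apart, so the fractional part cannot be represented at all, and even the last couple of integer digits are rounded.

```python
# 1e18 is exactly representable as a double, but 1e18/3 rounds
# to the nearest double, which at this magnitude is a multiple of 64.
x = 1000000000000000000.0 / 3.0
print(f"{x:.1f}")

# Adding 1.0 is smaller than the spacing between adjacent doubles
# here, so the addition is simply lost.
print(x + 1.0 == x)
```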