Repeating Mike Allen's answer, but hoping to provide additional context (I would have left this as a comment rather than a separate answer, but the SO reputation requirement would not let me).
Integers have a limited range of values: from 0 to 2^n - 1 for an unsigned integer, or from -2^(n-1) to 2^(n-1) - 1 for a signed one, where n is the number of bits in the representation (here n = 32). If you want to represent a number larger than 2^31 - 1, you cannot use a signed int. A signed long will get you up to 2^63 - 1. For anything larger than that, a float can go up to about 3.4 * 10^38 (just under 2^128), at the cost of precision.
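To make those limits concrete, here is a minimal sketch in Java (I'm assuming Java, since the int/long/float sizes quoted match Java's primitives; the same limits apply in most C-family languages):

```java
public class Limits {
    public static void main(String[] args) {
        System.out.println(Integer.MAX_VALUE); // 2^31 - 1 = 2147483647
        System.out.println(Long.MAX_VALUE);    // 2^63 - 1 = 9223372036854775807
        System.out.println(Float.MAX_VALUE);   // ~3.4e38, just under 2^128

        // Exceeding the range does not widen the type; int arithmetic simply wraps around.
        System.out.println(Integer.MAX_VALUE + 1); // -2147483648
    }
}
```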
Another thing to note is that these resolution problems only show up when the change is tiny compared to the magnitude of the value stored in the floating-point number. In this case, the subtraction changes the true value by an amount many orders of magnitude smaller than the value itself. A float would not round away the difference between 100 and 101, but it would round away the difference between 10000000000000000000000000000 and 10000000000000000000000000001.
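A quick Java demonstration of that effect (1e28f is just a convenient example; the point is that the spacing between adjacent float values at that magnitude is far larger than 1):

```java
public class FloatResolution {
    public static void main(String[] args) {
        // At small magnitudes a difference of 1 is represented exactly.
        System.out.println(101f - 100f);      // 1.0

        // At large magnitudes the gap between adjacent float values is enormous,
        // so adding 1 is rounded away ("absorbed") and the difference vanishes.
        float big = 1e28f;
        System.out.println((big + 1f) - big); // 0.0
    }
}
```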
The same applies to small values. If you cast 0.1 to an integer, you get exactly 0. This is usually not considered a failure of the integer data type.
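For completeness, the integer analogue in Java:

```java
public class Truncation {
    public static void main(String[] args) {
        // Converting 0.1 to an int truncates toward zero: the fraction is simply lost.
        int i = (int) 0.1;
        System.out.println(i); // 0
    }
}
```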
If you are working with numbers that differ in size by many orders of magnitude and you also cannot tolerate rounding error, you will need data structures and algorithms that account for the inherent limitations of binary representation. One possible solution would be a floating-point encoding with fewer exponent bits, which limits the maximum value but leaves more bits for the significand and therefore gives higher resolution in the less significant digits. For more details, check:
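As one concrete way to sidestep both the range and the rounding limits, Java ships arbitrary-precision types (BigInteger and BigDecimal). This is a different technique from the reduced-exponent encoding suggested above, but it is the more common off-the-shelf choice when exactness matters more than speed; a minimal sketch:

```java
import java.math.BigDecimal;
import java.math.BigInteger;

public class ExactArithmetic {
    public static void main(String[] args) {
        // The difference that a 32-bit float rounds away is preserved exactly here.
        BigInteger a = new BigInteger("10000000000000000000000000001");
        BigInteger b = new BigInteger("10000000000000000000000000000");
        System.out.println(a.subtract(b)); // 1

        // Decimal fractions are stored exactly, so no binary rounding error accumulates.
        BigDecimal tenth = new BigDecimal("0.1");
        System.out.println(tenth.add(tenth).add(tenth)); // 0.3
    }
}
```

The trade-off is speed and memory: every operation allocates and works digit by digit, so these types are best confined to the parts of the code where exactness actually matters.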