Here's what you're missing: when a signed integer primitive type (such as short, int, or long) is incremented above the largest value it can represent, the carry spills into the left-most bit, which is reserved as the sign bit and should only indicate whether the number is negative. A 1 in the sign bit indicates a negative value. This phenomenon is called integer overflow.
Consider a hypothetical three-bit signed primitive type (for comparison, Java's long is 64 bits). It can represent the numbers from -4 to 3.
3, the largest positive value a 3-bit signed number can represent, looks like this: 011
Add 1 to 011 and you get: 100 (the carry overflows into the sign bit)
Interpreted as a signed (two's complement) value, 100 is -4.
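You can watch the same wrap-around happen at Java's full 64-bit scale. This is a minimal sketch (the class name is mine, not from the original answer): Long.MAX_VALUE is a 0 sign bit followed by 63 ones, and adding 1 to it flips the sign bit, producing Long.MIN_VALUE.

public class OverflowDemo {
    public static void main(String[] args) {
        long max = Long.MAX_VALUE;               // 9223372036854775807
        System.out.println(max + 1);             // -9223372036854775808 (Long.MIN_VALUE)
        // The bit pattern after the overflow: a 1 in the sign bit, 63 zeros.
        System.out.println(Long.toBinaryString(max + 1));
    }
}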
A long, however, has room for far more values, so you can use this same behavior to quickly find the largest input whose result still fits, for any non-decreasing sequence (in this case, the factorial):
long n = 1;
while (factorial(n) > 0) {
    System.out.println("factorial of " + n++ + " can fit in a long!");
}
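For reference, here is a complete, runnable version. The factorial helper is an assumption of mine, since the original snippet does not show it: a plain iterative implementation that, like all long arithmetic in Java, wraps around silently on overflow rather than throwing an exception.

public class FactorialFit {
    // Assumed helper: iterative factorial on longs; overflows silently.
    static long factorial(long n) {
        long result = 1;
        for (long i = 2; i <= n; i++) {
            result *= i;  // wraps around past Long.MAX_VALUE, no exception
        }
        return result;
    }

    public static void main(String[] args) {
        long n = 1;
        while (factorial(n) > 0) {
            System.out.println("factorial of " + n++ + " can fit in a long!");
        }
    }
}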
It looks like it should be an endless loop, but it is not: eventually factorial(n) returns a negative value due to integer overflow (21! exceeds Long.MAX_VALUE and wraps to a negative number). This gives you the following result:
factorial of 1 can fit in a long!
factorial of 2 can fit in a long!
factorial of 3 can fit in a long!
factorial of 4 can fit in a long!
factorial of 5 can fit in a long!
factorial of 6 can fit in a long!
factorial of 7 can fit in a long!
factorial of 8 can fit in a long!
factorial of 9 can fit in a long!
factorial of 10 can fit in a long!
factorial of 11 can fit in a long!
factorial of 12 can fit in a long!
factorial of 13 can fit in a long!
factorial of 14 can fit in a long!
factorial of 15 can fit in a long!
factorial of 16 can fit in a long!
factorial of 17 can fit in a long!
factorial of 18 can fit in a long!
factorial of 19 can fit in a long!
factorial of 20 can fit in a long!