I'm not sure who is still paying attention to this thread, but here it is all the same.
First, in the officially linked version, it's only 1000!, not 10,000!. Also, when this problem was reused in another programming contest, the time limit was 3 seconds, not 1 second. These parameters greatly affect how hard you have to work to get a fast enough solution.
Second, for the real parameters of the contest, Peter's solution is sound, but with one extra twist you can speed it up by a factor of 5 on a 32-bit architecture (or even a factor of 6, if only 1000! is required). Namely, instead of working with individual digits, implement the multiplication in base 100,000, and then sum the digits within each super-digit at the end. I don't know how fast the computers allowed in the contest were, but I have a desktop at home that is about as old as the contest. The following code example takes 16 milliseconds for 1000! and 2.15 seconds for 10,000!. The code also discards trailing zeros as they appear, but that only saves about 7% of the work.
#include <stdio.h>

int main() {
    unsigned int dig[10000], first = 0, last = 0, carry, n, x, sum = 0;

    dig[0] = 1;  /* the running product, stored little-endian in base 100,000 */
    for (n = 2; n <= 10000; n++) {          /* multiply up to 10000! */
        carry = 0;
        for (x = first; x <= last; x++) {
            carry = dig[x]*n + carry;
            dig[x] = carry % 100000;
            if (x == first && !(carry % 100000))
                first++;                    /* discard trailing zeros */
            carry /= 100000;
        }
        if (carry)
            dig[++last] = carry;
    }
    /* sum the five decimal digits of each super-digit */
    for (x = first; x <= last; x++)
        sum += dig[x]%10 + (dig[x]/10)%10 + (dig[x]/100)%10
             + (dig[x]/1000)%10 + (dig[x]/10000)%10;
    printf("Sum: %d\n", sum);
}
Third, there is a remarkable and fairly simple way to speed up the computation by another significant factor. With modern methods for multiplying large numbers, computing n! does not require quadratic time; you can do it in O-tilde(n) time, where the tilde means you're allowed to discard logarithmic factors. There is a simple speedup due to Karatsuba that does not reach that time complexity, but still improves on quadratic and can save another factor of 4 or so. To use it, you should also split the factorial into balanced ranges: create a recursive algorithm prod(k,n) that multiplies the numbers from k to n using the pseudo-code formula
prod(k,n) = prod(k, floor((k+n)/2)) * prod(floor((k+n)/2)+1, n)
Then you use Karatsuba to do the big multiplications.
Even better than Karatsuba is the Schonhage-Strassen multiplication algorithm, which is based on the fast Fourier transform. As it happens, both algorithms are part of modern bignum libraries. Computing huge factorials quickly can matter for some pure-math applications. I think Schonhage-Strassen is overkill for a programming contest, but Karatsuba is genuinely simple, and you could imagine it in an A+ solution to the problem.
Part of the question as posed is speculation that there is some simple number-theoretic trick that completely changes the contest problem. For instance, if the question were to determine n! mod n+1, then Wilson's theorem says that the answer is -1 when n+1 is prime, and it's an easy exercise to see that it is 2 for n = 3 and otherwise 0 when n+1 is composite. There are variations on this; for example, n! mod 2n+1 is also very predictable. There are also some relations between congruences and digit sums. The digit sum of x is congruent to x mod 9, so the digit sum is 0 mod 9 for x = n! when n >= 6. The alternating digit sum of x is congruent to x mod 11.
The trouble is that if you want the digit sum of a large number, and not modulo anything, the tricks from number theory run out pretty quickly. Digit sums don't relate well to addition and multiplication, because of carries. It's often hard to promise that no mathematics exists for a fast algorithm, but in this case I don't think there is any known formula. For instance, I'm sure that no one knows the digit sum of googol factorial, even though it is only a number with about 10^102 digits.