I am working on optimizing the Lucas-Lehmer test in C# (yes, I am doing something with Mersenne primes in order to compute perfect numbers). I was wondering whether, given the current code, I can improve its speed any further. I use System.Numerics.BigInteger to store the numbers; perhaps that is not the wisest choice, we will see.
This code is based on information found at: http://en.wikipedia.org/wiki/Lucas%E2%80%93Lehmer_primality_test
That page (at the time of writing) contains some pointers for optimizing the division/modulo step.
Code for the Lucas-Lehmer test:
public bool LucasLehmerTest(int num)
{
    if (num % 2 == 0)
        return num == 2;
    else
    {
        BigInteger ss = new BigInteger(4);
        for (int i = 3; i <= num; i++)
        {
            ss = KaratsubaSquare(ss) - 2;
            ss = LucasLehmerMod(ss, num);
        }
        return ss == BigInteger.Zero;
    }
}
Edit: This is faster than using ModPow from the BigInteger class, as Mare Infinitus suggested. That implementation:
public bool LucasLehmerTest(int num)
{
    if (num % 2 == 0)
        return num == 2;
    else
    {
        BigInteger m = (BigInteger.One << num) - 1;
        BigInteger ss = new BigInteger(4);
        for (int i = 3; i <= num; i++)
            ss = (BigInteger.ModPow(ss, 2, m) - 2) % m;
        return ss == BigInteger.Zero;
    }
}
The LucasLehmerMod method is implemented as follows:
public BigInteger LucasLehmerMod(BigInteger dividend, int divisor)
{
    BigInteger mask = (BigInteger.One << divisor) - 1;

    // Reduce modulo 2^divisor - 1 by folding the high bits onto the
    // low bits (since 2^divisor ≡ 1), avoiding a full division.
    while (dividend > mask)
        dividend = (dividend & mask) + (dividend >> divisor);

    return dividend == mask ? BigInteger.Zero : dividend;
}
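For what it's worth, the shift-and-add reduction can be sanity-checked against the regular % operator. A quick standalone check (the MersenneMod name and the test values are my own, not from the original code):

```csharp
using System;
using System.Numerics;

public class ModCheck
{
    // Mersenne-modulus reduction: x mod (2^p - 1) via shift-and-add,
    // mirroring the LucasLehmerMod method above.
    public static BigInteger MersenneMod(BigInteger dividend, int p)
    {
        BigInteger mask = (BigInteger.One << p) - 1;
        while (dividend > mask)
            dividend = (dividend & mask) + (dividend >> p);
        return dividend == mask ? BigInteger.Zero : dividend;
    }

    public static void Main()
    {
        var rng = new Random(42);
        int p = 31;                                // small exponent for the check
        BigInteger m = (BigInteger.One << p) - 1;
        for (int i = 0; i < 1000; i++)
        {
            BigInteger x = new BigInteger(rng.Next()) * rng.Next();
            if (MersenneMod(x, p) != x % m)
                throw new Exception("mismatch at " + x);
        }
        Console.WriteLine("OK");
    }
}
```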
I am afraid that by using the BigInteger class from the .NET Framework I am tied to its internal arithmetic. Does this mean I have to create my own BigInteger class to improve on it? Or can I get by with KaratsubaSquare (derived from the Karatsuba algorithm), like this, which I found in Optimizing Karatsuba implementation:
public BigInteger KaratsubaSquare(BigInteger x)
{
    int n = BitLength(x);
    if (n <= LOW_DIGITS)
        return BigInteger.Pow(x, 2);        // standard square below the cutoff

    n = n >> 1;                             // split at half the bit length
    BigInteger b = x >> n;                  // higher half
    BigInteger a = x - (b << n);            // lower half
    BigInteger ac = KaratsubaSquare(a);     // lower half * lower half
    BigInteger bd = KaratsubaSquare(b);     // higher half * higher half
    BigInteger c = Karatsuba(a, b);         // lower half * higher half
    return ac + (c << (n + 1)) + (bd << (2 * n));
}
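BitLength, Karatsuba and LOW_DIGITS are helpers from the linked question and are not shown here. For completeness, a minimal BitLength sketch (my own naive version; a faster one would work on the byte representation instead of shifting bit by bit):

```csharp
using System;
using System.Numerics;

public class BitLengthDemo
{
    // Number of bits needed to represent |x|; this is the quantity
    // KaratsubaSquare halves to find its split point.
    public static int BitLength(BigInteger x)
    {
        x = BigInteger.Abs(x);
        int bits = 0;
        while (x > 0)
        {
            bits++;
            x >>= 1;
        }
        return bits;
    }

    public static void Main()
    {
        Console.WriteLine(BitLength(new BigInteger(255)));   // 8
        Console.WriteLine(BitLength(BigInteger.One << 100)); // 101
    }
}
```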
So basically, I want to see whether the Lucas-Lehmer test can be sped up by optimizing the for loop. However, I got a little stuck there... Is this possible?
Any thoughts are welcome, of course.
Some additional notes:
I could use multiple threads to speed up the search for perfect numbers. However, I have no experience (yet) with good partitioning, so I will try to explain my thoughts (there is no code yet):
First, I generate a prime table using the sieve of Eratosthenes. Finding the primes between 2 and 1 million takes about 25 ms on a single thread.
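For reference, the sieve step looks roughly like this (my own sketch, not the exact code I benchmarked):

```csharp
using System;
using System.Collections.Generic;

public class SieveDemo
{
    // Sieve of Eratosthenes: returns all primes up to limit (inclusive).
    public static List<int> SievePrimes(int limit)
    {
        var composite = new bool[limit + 1];
        var primes = new List<int>();
        for (int i = 2; i <= limit; i++)
        {
            if (composite[i]) continue;
            primes.Add(i);
            // Start crossing out at i*i; use long to avoid int overflow.
            for (long j = (long)i * i; j <= limit; j += i)
                composite[j] = true;
        }
        return primes;
    }

    public static void Main()
    {
        Console.WriteLine(SievePrimes(1_000_000).Count);  // 78498 primes below one million
    }
}
```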
What C# offers here is pretty impressive. Using PLINQ or Parallel.For, I can run several calculations almost simultaneously; however, it chops the PrimeTable array into chunks that are not processed in order during the search.
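As an illustration of the PLINQ route, something like the following spreads the Lucas-Lehmer tests over the available cores (the plain ss * ss squaring stands in for KaratsubaSquare, and the toy exponent list is made up for the example):

```csharp
using System;
using System.Linq;
using System.Numerics;

public class PlinqDemo
{
    // Straightforward Lucas-Lehmer test; plain squaring stands in
    // for the Karatsuba variant discussed above.
    public static bool LucasLehmerTest(int p)
    {
        if (p % 2 == 0) return p == 2;
        BigInteger m = (BigInteger.One << p) - 1;
        BigInteger ss = 4;
        for (int i = 3; i <= p; i++)
            ss = (ss * ss - 2) % m;
        return ss == BigInteger.Zero;
    }

    public static void Main()
    {
        int[] primeTable = { 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31 };

        // AsParallel spreads the tests across cores; AsOrdered keeps
        // the results in the original order of the prime table.
        var mersenneExponents = primeTable
            .AsParallel().AsOrdered()
            .Where(LucasLehmerTest)
            .ToArray();

        Console.WriteLine(string.Join(",", mersenneExponents));  // 2,3,5,7,13,17,19,31
    }
}
```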
I have already found out that the automatic load balancing of the threads is not sufficient for this task, so I need to try a different approach: partitioning the work according to the size of the Mersenne numbers that have to be tested when computing perfect numbers. Does anyone have experience with this? This page looks somewhat useful: http://www.drdobbs.com/windows/custom-parallel-partitioning-with-net-4/224600406
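A sketch of the load-balancing idea from that article, using Partitioner.Create with loadBalance: true so that threads pull work on demand instead of getting fixed up-front slices (again, the names and the toy exponent list are my own):

```csharp
using System;
using System.Collections.Concurrent;
using System.Numerics;
using System.Threading.Tasks;

public class PartitionDemo
{
    public static bool LucasLehmerTest(int p)
    {
        if (p % 2 == 0) return p == 2;
        BigInteger m = (BigInteger.One << p) - 1;
        BigInteger ss = 4;
        for (int i = 3; i <= p; i++)
            ss = (ss * ss - 2) % m;
        return ss == BigInteger.Zero;
    }

    public static void Main()
    {
        int[] primeTable = { 3, 5, 7, 11, 13, 17, 19, 23, 29, 31 };

        // loadBalance: true hands out elements on demand, so a thread
        // stuck on a large exponent does not leave the others idle.
        var partitioner = Partitioner.Create(primeTable, loadBalance: true);

        var found = new ConcurrentBag<int>();
        Parallel.ForEach(partitioner, p =>
        {
            if (LucasLehmerTest(p))
                found.Add(p);
        });

        var result = found.ToArray();
        Array.Sort(result);
        Console.WriteLine(string.Join(",", result));  // 3,5,7,13,17,19,31
    }
}
```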
I will study it further.
At the moment my results are as follows. My current algorithm (using the standard C# BigInteger class) can find the first 17 perfect numbers (see http://en.wikipedia.org/wiki/List_of_perfect_numbers ) within 5 seconds on my laptop (Intel i5 with 4 cores and 8 GB of RAM). However, it then gets stuck and finds nothing within 10 minutes.
This is something I still cannot match... My gut feeling (and common sense) tells me that I should study the Lucas-Lehmer test, because the loop for computing the 18th perfect number (using Mersenne prime exponent 3217) runs only 3215 times. I think there is room for improvement...
What Dinony posted below is a proposal to rewrite it completely in C. I agree that this would improve performance, but I chose C# to find out its limitations and advantages. Since it is widely used and enables rapid application development, it seemed worth a look to me.
Could unsafe code also bring benefits here?