Optimizing my code to find factors of a given integer

Here is my code, but I would like to optimize it. I don't like the idea of testing every number up to the square root of n, given that a large number of candidate factors may have to be checked. Your answers will be of great help. Thanks in advance.

 unsigned int* factor(unsigned int n) {
     unsigned int tab[40];   // NB: local array -- returning it yields a dangling pointer
     int dim = 0;
     for (int i = 2; i <= (int)sqrt(n); ++i) {
         while (n % i == 0) {
             tab[dim++] = i;
             n /= i;
         }
     }
     if (n > 1)
         tab[dim++] = n;
     return tab;
 }
+4
source share
7 answers

Here's a suggestion on how to do this in "proper" C++ (since you tagged the question C++).

PS. I almost forgot to mention: I optimized the sqrt call away :)

Watch live http://liveworkspace.org/code/6e2fcc2f7956fafbf637b54be2db014a

 #include <vector>
 #include <iostream>
 #include <iterator>
 #include <algorithm>

 typedef unsigned int uint;

 std::vector<uint> factor(uint n) {
     std::vector<uint> tab;
     for (unsigned long i = 2; i * i <= n; ++i) {
         while (n % i == 0) {
             tab.push_back(i);
             n /= i;
         }
     }
     if (n > 1)
         tab.push_back(n);
     return tab;
 }

 void test(uint x) {
     auto v = factor(x);
     std::cout << x << ":\t";
     std::copy(v.begin(), v.end(), std::ostream_iterator<uint>(std::cout, ";"));
     std::cout << std::endl;
 }

 int main() {
     test(1);
     test(2);
     test(4);
     test(43);
     test(47);
     test(9997);
 }

Output

 1:
 2:	2;
 4:	2;2;
 43:	43;
 47:	47;
 9997:	13;769;
+5
source

If you use

 ... i*i <= n; ... 

it can run much faster than i <= sqrt(n).

By the way, you should decide how to handle the factors of a negative n, or at least make sure you never pass a negative number.
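One way to do that is a small signed wrapper that records the sign and factors the absolute value. This is just a sketch of one option (the question itself uses unsigned int, and the name factor_signed is mine):

```cpp
#include <vector>

// Hypothetical wrapper for signed input: record the sign separately,
// then run plain trial division on |n|.
std::vector<unsigned int> factor_signed(int n, bool& negative) {
    negative = n < 0;
    // Careful with INT_MIN: negate in a wider type before converting.
    unsigned int m = negative ? static_cast<unsigned int>(-static_cast<long long>(n))
                              : static_cast<unsigned int>(n);
    std::vector<unsigned int> out;
    for (unsigned long long i = 2; i * i <= m; ++i)
        while (m % i == 0) { out.push_back(static_cast<unsigned int>(i)); m /= i; }
    if (m > 1) out.push_back(m);
    return out;
}
```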

+3
source

There is a simple change that will reduce the execution time somewhat: divide out all the factors of 2 first, and then test only odd numbers.
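A minimal sketch of that change, reusing the std::vector-based interface from the answer above (the name trial_factor is mine):

```cpp
#include <vector>

// Trial division that strips all factors of 2 first,
// then tests only the odd candidates 3, 5, 7, ...
std::vector<unsigned int> trial_factor(unsigned int n) {
    std::vector<unsigned int> out;
    while (n % 2 == 0) { out.push_back(2); n /= 2; }
    for (unsigned long long i = 3; i * i <= n; i += 2)
        while (n % i == 0) { out.push_back(static_cast<unsigned int>(i)); n /= i; }
    if (n > 1) out.push_back(n);  // leftover cofactor is prime
    return out;
}
```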

+3
source

I'm afraid you can't. There is no known method that can factor large integers in polynomial time. However, some techniques may speed your program up a little (not significantly). See Wikipedia for further references: http://en.wikipedia.org/wiki/Integer_factorization

+1
source

As you can see from your solution, what you essentially find is primes (that is what the while (n%i == 0) condition does), so especially in the case of large numbers, you can precompute the primes in advance and test only those. The primes can be computed with the Sieve of Eratosthenes or another efficient method.
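A sketch of that idea, under the assumption that all inputs are unsigned 32-bit values (the function names are mine): sieve the primes once, then restrict trial division to them.

```cpp
#include <vector>

// Sieve of Eratosthenes: returns all primes <= limit.
std::vector<unsigned int> sieve_primes(unsigned int limit) {
    std::vector<bool> composite(limit + 1, false);
    std::vector<unsigned int> primes;
    for (unsigned int p = 2; p <= limit; ++p) {
        if (composite[p]) continue;
        primes.push_back(p);
        // Mark multiples starting at p*p (smaller ones are already marked).
        for (unsigned long long m = 1ull * p * p; m <= limit; m += p)
            composite[static_cast<std::size_t>(m)] = true;
    }
    return primes;
}

// Trial division restricted to the precomputed primes.
// Primes up to 65535 suffice for any 32-bit n: if no prime <= sqrt(n)
// divides n, the remaining cofactor is itself prime.
std::vector<unsigned int> factor_with_primes(unsigned int n,
                                             const std::vector<unsigned int>& primes) {
    std::vector<unsigned int> out;
    for (unsigned int p : primes) {
        if (1ull * p * p > n) break;
        while (n % p == 0) { out.push_back(p); n /= p; }
    }
    if (n > 1) out.push_back(n);
    return out;
}
```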

+1
source
 unsigned int* factor(unsigned int n) 

If unsigned int is the typical 32-bit type, the numbers are too small for any of the more advanced algorithms to pay off. The usual improvements for trial division do apply, of course.

If you move the division by 2 out of the loop and divide only by odd numbers inside the loop, as Pete Becker mentioned, you essentially halve the number of divisions needed to factor the input, and thus speed up the function by a factor of almost 2.

If you take that a step further and also eliminate the multiples of 3 from the divisors in the loop, you reduce the number of divisions, and hence increase the speed, by a factor close to 3. (That is on average: most numbers have no large prime factors but are divisible by 2 or by 3, and for those the speedup is much smaller; but those numbers are factored quickly anyway. Over a longer range of inputs, most of the time is spent factoring the few numbers with large prime divisors.)

 // if your compiler doesn't transform these to bit operations, do it yourself
 while (n % 2 == 0) {
     tab[dim++] = 2;
     n /= 2;
 }
 while (n % 3 == 0) {
     tab[dim++] = 3;
     n /= 3;
 }
 // d runs over 5, 7, 11, 13, 17, 19, ... -- skipping multiples of 2 and 3
 for (int d = 5, s = 2; d*d <= n; d += s, s = 6-s) {
     while (n % d == 0) {
         tab[dim++] = d;
         n /= d;
     }
 }

If you call this function often, it would be worthwhile to precompute the 6542 primes not exceeding 65535, store them in a static array, and divide only by the primes, to eliminate all the divisions that are a priori guaranteed not to find a divisor.
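One way to keep that table around is a function-local static, which C++11 guarantees is initialized exactly once (even across threads). A sketch, with the function name mine:

```cpp
#include <cstddef>
#include <vector>

// Returns the 6542 primes not exceeding 65535, sieved once on first call.
// Since C++11, initialization of a local static is thread-safe.
const std::vector<unsigned int>& small_primes() {
    static const std::vector<unsigned int> primes = [] {
        const unsigned int limit = 65535;
        std::vector<bool> composite(limit + 1, false);
        std::vector<unsigned int> ps;
        for (unsigned int p = 2; p <= limit; ++p) {
            if (composite[p]) continue;
            ps.push_back(p);
            for (unsigned long long m = 1ull * p * p; m <= limit; m += p)
                composite[static_cast<std::size_t>(m)] = true;
        }
        return ps;
    }();
    return primes;
}
```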

If unsigned int is wider than 32 bits, then using one of the more advanced algorithms becomes worthwhile. You should still begin with trial division to find the small prime factors (how small — <= 1000, <= 10000, <= 100000 or perhaps <= 1000000; my gut feeling is that one of the smaller bounds is better on average). If the factorization is not complete after the trial-division phase, check whether the remaining cofactor is prime, e.g. with a version of the Miller-Rabin test that is deterministic for the range in question. If it is not, find its factors with your favourite advanced algorithm. For 64-bit numbers, that would be Pollard's rho algorithm or elliptic curve factorization. Pollard's rho is easier to implement, and for numbers of this magnitude it finds factors quickly, so it would be my first recommendation.
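A minimal sketch of Pollard's rho with Floyd cycle detection, assuming a compiler that provides the __int128 extension (GCC/Clang) and C++17 for std::gcd. The primality test and the recursion into cofactors described above are deliberately omitted:

```cpp
#include <cstdint>
#include <numeric>  // std::gcd (C++17)

// Modular multiply without overflow, via a 128-bit intermediate.
uint64_t mulmod(uint64_t a, uint64_t b, uint64_t m) {
    return static_cast<uint64_t>((unsigned __int128)a * b % m);
}

// Pollard's rho with Floyd (tortoise-and-hare) cycle detection.
// Returns a nontrivial factor of a composite n, or n itself on a rare
// failure -- in that case, retry with a different constant c.
uint64_t pollard_rho(uint64_t n, uint64_t c = 1) {
    if (n % 2 == 0) return 2;
    uint64_t x = 2, y = 2, d = 1;
    while (d == 1) {
        x = (mulmod(x, x, n) + c) % n;  // tortoise: one step
        y = (mulmod(y, y, n) + c) % n;  // hare: two steps
        y = (mulmod(y, y, n) + c) % n;
        d = std::gcd(x > y ? x - y : y - x, n);
    }
    return d;  // may equal n; caller should retry with another c
}
```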

+1
source

int is way too small to run into any performance issues here. I just tried to time your algorithm with Boost, but could not get a useful measurement (too fast). So you should not worry about plain ints.

Using i * i, I was able to factor 1,000,000 nine-digit integers in 15.097 seconds. Optimizing the algorithm is good, but before "wasting time" on a small improvement (depending on your situation), it is important to think about whether it is worth it. Sometimes you have to ask yourself whether you really need to factor 1,000,000 ints in 10 seconds, or whether 15 is good enough.

0
source

Source: https://habr.com/ru/post/1438065/

