I am currently using the book "Programming in D" to teach D. I tried to solve the problem of summing the squares of the numbers from 1 to 10,000,000. My first attempt was a functional approach with map and reduce, but as the numbers grow I have to convert them to BigInt to get the correct result:
long num = 10000001;
BigInt result;
result = iota(1, num)
         .map!(a => to!BigInt(a * a))
         .reduce!((a, b) => (a + b));
writeln("The sum is : ", result);
The above takes 7 s to complete when compiled with dmd -O. I profiled the program, and most of the time is spent in BigInt calls. Although each individual square fits in a long, I have to convert the values with to!BigInt so that reduce operates on BigInts and returns the correct total (the final sum overflows a long). The equivalent Python program completes in only 3 s, and with num = 100000000 the D version takes 1 minute 13 seconds. Is there a way to optimize the BigInt calls? The products themselves fit in a long, but they have to be typed as BigInt so that the reduce operations give the right result. I tried pushing the squares into a BigInt array, but that is also slower. I also tried representing all the numbers as BigInt from the start:
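To make the constraint concrete, here is a minimal sketch of the direction I have in mind (the helper name sumSquares is mine, and I have not benchmarked it): the squaring stays in long, and only the running total is a BigInt, using reduce with an explicit BigInt seed, so each element costs one BigInt addition instead of a BigInt construction plus an addition.

```d
import std.algorithm : map, reduce;
import std.bigint : BigInt;
import std.range : iota;

// Hypothetical helper: square in native long arithmetic (a * a fits in a
// long for a < 2^31) and accumulate into a single BigInt seed, so the only
// BigInt operation per element is one addition.
BigInt sumSquares(long num)
{
    return reduce!((acc, x) => acc + x)(BigInt(0), iota(1L, num).map!(a => a * a));
}
```

Calling writeln(sumSquares(10_000_001)) should print the same total as the snippet above; whether it is actually faster is exactly what I am asking.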
auto bigs_map_nums = iota(1, num).map!(a => to!BigInt(a)).array;
auto bigs_map = sum(bigs_map_nums.map!(a => (a * a)).array);
But that is also slower. I have read the answers to "How to optimize this short factorial function in scala? (Creation of 50,000 BigInts)". Is the problem the implementation of multiplication for big integers in D? Is there a way to optimize the function calls on BigInt?
python code:
timeit.timeit('print sum(map(lambda num : num * num, range(1,10000000)))', number=1)
# printed sum:                    333333283333335000000
# timeit return value (seconds):  3.58552622795105
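As a correctness check (separate from the timing question), the printed value matches the closed form for the sum of the first m squares, m(m + 1)(2m + 1) / 6, with m = 9999999 here since range(1, num) and iota(1, num) exclude the upper bound. A small D helper (the name sumSquaresClosed is mine):

```d
import std.bigint : BigInt;

// Closed form: 1^2 + 2^2 + ... + m^2 == m * (m + 1) * (2m + 1) / 6.
// The ranges above exclude their upper bound, so for iota(1, num) or
// range(1, num) the last term is num - 1, i.e. m = num - 1.
BigInt sumSquaresClosed(long num)
{
    auto m = BigInt(num - 1);
    return m * (m + 1) * (2 * m + 1) / 6;
}
```

For num = 10000000 this gives 333333283333335000000, agreeing with the Python output above.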
The code was run on a dual-core 64-bit Linux laptop with 2 GB of RAM. Python: 2.7.4; D compiler: DMD64 D Compiler v2.066.1.