I noticed rather poor performance from some computing code running under Ubuntu on a brand-new headless workstation, which I use for scientific computing. A somewhat complicated piece of code executed noticeably differently on Ubuntu than on my old Mac laptop, which I use for development. I managed to boil it down to an incredibly simple example, which still shows a less-than-stellar improvement over my old machine:
#include <stdio.h>
#include <math.h>

int main()
{
    double res = 0.0;
    for (int i = 1; i < 200000000; i++) {
        res += exp((double) 100.0 / i);
    }
    printf("%lf", res);
    return 0;
}
Now, the Mac is an almost 5-year-old 2.4 GHz Core 2 Duo MacBook Pro running OS X 10.5, and it runs this code in about 6.8 seconds. The brand-new Dell with a 3.7 GHz Core i7 running Ubuntu 11.10 takes about 6.1 seconds! Can someone enlighten me as to what is going on here? It seems absurd that an almost 5-year-old laptop is within 10% of a new desktop workstation, all the more so because monitoring tools show the Core i7 turbo-boosting to almost 4 GHz!
Mac compiled with:
gcc -o test test.c -std=gnu99 -arch x86_64 -O2
Ubuntu compiled with:
gcc -o test test.c -std=gnu99 -m64 -O2 -lm
Thanks,
Louis