Duration of C++ sorting algorithms

I am measuring the running time of several sorting algorithms. I run each sorting method 2,000 times in a loop and then divide the total duration by 2,000 to get an average. The problem is that the program does not report the time each individual sorting method takes: the duration variable just keeps growing as the program proceeds. For example, for N = 10000, insertionSort() gives 0.000635, mergeSort() gives 0.00836, and heapSort() gives 0.018485, and when I change the order of the calls, duration still increases through the program regardless of which algorithm runs. I tried giving each sort its own duration value, but that did not work. Can someone help me understand this problem, or suggest another way to measure the time?

Sorry if this is a stupid question, and sorry for my bad grammar too.

 int main() {
     srand(time(NULL));

     int N, duration;
     cout << endl << "N : ";
     cin >> N; // N is the array size.
     cout << endl;

     // a4 is the buffer array (for calculating a proper duration).
     int *a1 = new int[N];
     int *a2 = new int[N];
     int *a3 = new int[N];
     int *a4 = new int[N];

     cout << endl << "Unsorted array : " << endl;
     for (int i = 0; i < N; i++) {
         a4[i] = rand() % 100;
         cout << a4[i] << " ";
     }

     /*------------------------------------------------------------------------------*/
     cout << endl << endl << "Sorting with Insertion Sort, please wait..." << endl;
     for (int i = 0; i < 2000; i++) {
         a1 = a4;
         duration = clock();
         insertionSort(a1, N - 1);
         duration += clock() - duration;
     }
     cout << endl << "Insertion sort : " << endl;
     print(a1, N);
     cout << endl << endl << "Approximate duration for Insertion Sort : ";
     cout << (double) (duration / 2000) / CLOCKS_PER_SEC;
     cout << " s." << endl;

     /*------------------------------------------------------------------------------*/
     cout << endl << endl << "Sorting with Merge Sort, please wait..." << endl;
     for (int i = 0; i < 2000; i++) {
         a2 = a4;
         duration = clock();
         mergeSort(a2, 0, N - 1);
         duration += clock() - duration;
     }
     cout << endl << "Merge sort : " << endl;
     print(a2, N);
     cout << endl << endl << "Approximate duration for Merge Sort : ";
     cout << (double) (duration / 2000) / CLOCKS_PER_SEC;
     cout << " s." << endl << endl;

     /*------------------------------------------------------------------------------*/
     cout << endl << endl << "Sorting with Heap Sort, please wait..." << endl;
     for (int i = 0; i < 2000; i++) {
         a3 = a4;
         duration = clock();
         heapSort(a3, N);
         duration += clock() - duration;
     }
     cout << endl << "Heap sort : " << endl;
     print(a3, N);
     cout << endl << endl << "Approximate duration for Heap Sort : ";
     cout << (double) (duration / 2000) / CLOCKS_PER_SEC;
     cout << " s." << endl << endl;

     return 0;
 }
2 answers

The error in your program is that you reset duration on every iteration of the loop. A cleaner way to handle the timing is to move the duration bookkeeping outside the for loop. For instance:

 duration = clock();
 for (int i = 0; i < 2000; i++) {
     a2 = a4;
     mergeSort(a2, 0, N - 1);
 }
 duration = clock() - duration;

EDIT: I forgot to remove the timing code inside the loop. Fixed.


First, you never reset duration between the runs of the different sorts. This means the accumulated time from one sorting phase carries over into the next (on top of the next problem below).

Second, you need a separate accumulator variable, call it durationSum, and use it where you currently use duration in the averaging step after the loop. Right now you overwrite your sum on every iteration.

For instance:

 clock_t durationSum = 0;
 clock_t duration = 0;
 for (int i = 0; i < 2000; i++) {
     a1 = a4;
     duration = clock();
     insertionSort(a1, N - 1);
     durationSum += clock() - duration;
 }

Next, you make an integer-division mistake when averaging duration. You have:

 cout << (double) (duration / 2000) / CLOCKS_PER_SEC; 

With a minimal change, this computes the average more accurately (though it should use durationSum):

 cout << (double) (duration / 2000.0) / CLOCKS_PER_SEC; 

The original code says: "divide duration by 2000 using integer division, THEN promote the result to double and divide by CLOCKS_PER_SEC" (that last division is floating point, since one operand is double and the other integral). With 2000.0, duration is promoted to double first, so the division by 2000 is also done in floating point.

Finally, it would be better to treat the loop overhead as negligible compared to a single iteration of the sort, and make only two calls to the clock() function around all 2000 runs of each sort.

For instance:

 clock_t insert_sort_start = clock();
 for (int i = 0; i < 2000; i++) {
     a1 = a4;
     insertionSort(a1, N - 1);
 }
 double duration_sec = (clock() - insert_sort_start) / 2000.0 / CLOCKS_PER_SEC;

One last note: you declared duration as an int, whereas clock() actually returns a clock_t. On a 64-bit system, clock_t is very likely a 64-bit integer, and storing it in a 32-bit int narrows (truncates) the value. Use clock_t.


Source: https://habr.com/ru/post/1446358/
