Start with the final merge procedure of mergesort. You read the first value of each of your m subarrays (at minimum one subarray), select the minimum of the m values read (the global minimum), insert it into the result, and either delete it from the containing subarray or advance the corresponding index by one. Iterate until all subarrays are empty, or until all indexes reach the end of their corresponding arrays.
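To make that concrete, here is a minimal Python sketch of the merge step described above (the name k_way_merge and the plain linear scan over the m front values are my own illustration, not code from any particular library):

    def k_way_merge(subarrays):
        # Minimal sketch of the final merge step: repeatedly pick the global
        # minimum among the current front values of the m sorted subarrays.
        indexes = [0] * len(subarrays)    # one read position per subarray
        result = []
        while True:
            best = None                   # (value, subarray number) of the current global minimum
            for i, sub in enumerate(subarrays):
                if indexes[i] < len(sub):             # this subarray is not yet exhausted
                    value = sub[indexes[i]]
                    if best is None or value < best[0]:
                        best = (value, i)
            if best is None:              # all indexes reached the end of their arrays
                break
            result.append(best[0])        # insert the global minimum into the result
            indexes[best[1]] += 1         # advance the index of the containing subarray
        return result

    # Example: the final merge over three already-sorted runs
    print(k_way_merge([[1, 4, 9], [2, 3, 8], [5, 6, 7]]))
    # -> [1, 2, 3, 4, 5, 6, 7, 8, 9]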
NOTE: This can reduce memory usage if you have a really large data set (that is in fact the situation it is used to handle), but it may perform worse than plain Quicksort because of the merge cost (which is linear in the data copied per subarray) and the multithreading overhead. Note that it is in memory footprint that Mergesort is more economical when applied to large arrays. Also keep in mind that whoever wrote the Quicksort implementation you are probably using spent time optimizing its calls and branch execution.
This is basic theoretical CS, but note that you cannot lower the computational complexity class simply by using parallelism; you only get a linear speedup. Finally, Quicksort already sits at the lower bound for the average-case complexity of comparison-based sorting algorithms: if you are trying to beat Quicksort's O(n log(n)) asymptotically, I have bad news for you.
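To spell out the "linear speedup" point (a rough back-of-the-envelope estimate, assuming p worker threads and ignoring the merge and synchronization overhead discussed above): the parallel running time is about O(n log(n)) / p, and for any fixed p that is still O(n log(n)). The constant factor shrinks, but the complexity class stays the same.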