To follow up on @Rob's point:
There is a theoretical limit on the efficiency of comparison-based sorting algorithms, of which heapsort is one. No comparison sort can do better than Ω(n log n) in the average case. Since heapsort runs in Θ(n log n), it is asymptotically optimal, and there is no way to get O(n) average time out of it (at least not using comparisons alone). The argument comes from information theory: without at least Ω(n log n) comparisons, there is no way to reliably distinguish the input permutation from every other possible input permutation.
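For reference, the usual way to make that information-theoretic argument precise is the decision-tree bound (standard textbook material, not specific to heapsort):

```latex
% A comparison sort on n distinct keys must be able to distinguish all n!
% input permutations, so its decision tree needs at least n! leaves.
% A binary tree of height h (= worst-case number of comparisons) has at
% most 2^h leaves, hence:
\[
  2^{h} \ge n!
  \quad\Longrightarrow\quad
  h \;\ge\; \log_2(n!) \;=\; \sum_{k=1}^{n} \log_2 k
    \;\ge\; \frac{n}{2}\log_2\frac{n}{2}
    \;=\; \Omega(n \log n).
\]
% The same counting shows that the average leaf depth of a tree with n!
% leaves is also at least log_2(n!), which gives the average-case bound.
```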
The soft heap is built from binomial heaps but deliberately corrupts (raises) some of the keys, so inserting and then removing n elements from a soft heap does not necessarily produce them in sorted order. (The original soft heap paper notes that the trick behind the structure is to artificially lower the "entropy" of the stored values, which is how it sidesteps the Ω(n log n) barrier.) That is precisely why a soft heap can support O(1)-time operations: unlike a standard heap, it does not always hand elements back in sorted order, and therefore it is not subject to the runtime barrier mentioned above. So the fact that n items can be inserted into and removed from a soft heap in O(n) time does not contradict the lower bound, and it also means you cannot use a soft heap to speed up heapsort: the sequence you extract is not sorted.
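To see concretely why corrupted keys break sorting, here is a toy "corrupting queue" (entirely made up for illustration; it is not Chazelle's soft heap and makes no claim about its amortized bounds). It batches elements and raises every key in a batch to the batch maximum, loosely in the spirit of the paper's "car-pooling" idea, and the extracted sequence is then no longer sorted:

```python
from collections import deque

class ToyCorruptingQueue:
    """Toy stand-in for a soft heap (NOT Chazelle's data structure).

    Items are grouped into batches of size k; when a batch fills up, every
    key in it is conceptually raised to the batch maximum ("corruption").
    Batches are served by their corrupted key, but items inside a batch
    come out in arrival order. The point is only the output order, not
    the running time of this toy.
    """
    def __init__(self, k=4):
        self.k = k
        self.pending = []   # items not yet grouped into a batch
        self.batches = []   # list of (corrupted_key, deque_of_items)

    def insert(self, x):
        self.pending.append(x)
        if len(self.pending) == self.k:
            self.batches.append((max(self.pending), deque(self.pending)))
            self.pending = []

    def extract_min(self):
        if self.pending:    # flush any partial batch first
            self.batches.append((max(self.pending), deque(self.pending)))
            self.pending = []
        # Pick the batch with the smallest *corrupted* key...
        i = min(range(len(self.batches)), key=lambda j: self.batches[j][0])
        _, items = self.batches[i]
        x = items.popleft() # ...but hand back items in arrival order
        if not items:
            self.batches.pop(i)
        return x

q = ToyCorruptingQueue(k=4)
for v in [9, 1, 8, 2, 7, 3, 6, 4]:
    q.insert(v)
out = [q.extract_min() for _ in range(8)]
print(out)   # -> [7, 3, 6, 4, 9, 1, 8, 2], not sorted
```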
More generally, there is no way to use any comparison-based data structure to build a sorting algorithm unless you do at least Ω(n log n) work, on average, with that data structure. For example, this earlier question explains why you cannot convert a binary heap into a BST in O(n) time: doing so would let you sort in O(n) time using comparisons alone (build the heap in O(n) time, convert it to a BST in O(n) time, then do an in-order traversal in O(n) time to recover the sorted sequence).
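To get a feel for the size of that barrier, here is a small experiment (a sketch; the `Counted` wrapper class is something I made up for counting, and `heapq` is used as the binary heap). It sorts by pushing everything through a heap and compares the number of comparisons actually performed against the ⌈log₂(n!)⌉ floor that no comparison sort can beat:

```python
import heapq
import math
import random
from functools import total_ordering

@total_ordering
class Counted:
    """Wraps a key and counts every comparison made through it."""
    comparisons = 0
    def __init__(self, key):
        self.key = key
    def __lt__(self, other):
        Counted.comparisons += 1
        return self.key < other.key
    def __eq__(self, other):
        Counted.comparisons += 1
        return self.key == other.key

def heap_sort(values):
    """Sort by building a binary heap, then popping the minimum n times."""
    heap = [Counted(v) for v in values]
    heapq.heapify(heap)              # O(n) build
    out = []
    while heap:                      # n pops, O(log n) comparisons each
        out.append(heapq.heappop(heap).key)
    return out

n = 1000
data = random.sample(range(10 * n), n)
Counted.comparisons = 0
assert heap_sort(data) == sorted(data)

# Any comparison sort needs at least ceil(log2(n!)) comparisons on some input.
floor = math.ceil(math.log2(math.factorial(n)))
print(f"comparisons performed: {Counted.comparisons}")
print(f"ceil(log2(n!)) floor:  {floor}")
```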
Hope this helps!