Is depreciation really desirable?

For example, suppose I have an O (n) algorithm and an algorithm that amortizes O (n). Is it fair to say that under strictly significant conditions an unamortized algorithm will always be as fast or fast as a depreciated algorithm? Or are there any reasons to prefer the amortized version (ignoring things like code simplicity or implementation simplicity)?

+3
source share
6 answers

The main difference between the O (n) algorithm and the amortized O (n) algorithm is that you do not know anything about the worst behavior of the amortized O (n) algorithm. In practice, this does not really matter: if your algorithm is executed many times, then you can rely on the law of averages to balance a few bad cases, and if your algorithm is not executed many times, then you are unlikely to ever encounter the worst case.

, "" , , - . , , , . , , .

, O (n) O (n), , ( ).

+5
  • Big O , , N.
  • , (, ). , , . , .
+7

-

.

+6

Big O , . .

, O (n ^ 2) n , O (n), .

, , . , - , , .

+1

std::vector, O (1).

:

  • .
  • .
  • .
+1

, big-O , , O (n) , O (n). . , .

, , , . , -, get to add operations. 1000 , , , , . , .

. Big-O , : " , x , ". , , -, . , -, 100 , . , , , , .

0

Source: https://habr.com/ru/post/1733822/


All Articles