Big O notation is used to describe the asymptotic behavior of functions. Basically, it tells you how fast a function grows or declines.
For example, when analyzing an algorithm, you may find that the time (or the number of steps) required to complete a problem of size n is given by
T(n) = 4n^2 - 2n + 2
If we ignore the constants (which makes sense, because they depend on the particular hardware the program runs on) and the slower-growing terms, we can say that T(n) grows on the order of n^2 and write: T(n) = O(n^2)
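As a quick illustration (a minimal sketch for demonstration only, not part of the original answer or the lecture notes linked below), the following C program evaluates T(n) for growing n and prints the ratio T(n)/n^2. The ratio settles near the constant 4, which is why only the n^2 term matters for the growth rate:

    #include <stdio.h>

    // Evaluate T(n) = 4n^2 - 2n + 2 for growing n and print T(n)/n^2,
    // which approaches the constant 4, showing that the n^2 term dominates.
    int main(void) {
        for (long long n = 10; n <= 1000000; n *= 10) {
            long long t = 4 * n * n - 2 * n + 2;
            printf("n = %8lld   T(n) = %16lld   T(n)/n^2 = %.4f\n",
                   n, t, (double)t / ((double)n * (double)n));
        }
        return 0;
    }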
For a formal definition, suppose that f(x) and g(x) are two functions defined on some subset of the real numbers. We write
f(x) = O(g(x))
(or f(x) = O(g(x)) for x → infinity, to be precise) if and only if there exist constants N and C such that
|f(x)| <= C |g(x)| for all x > N
Intuitively, this means that f does not grow faster than g.
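As a quick check against this definition, the example above satisfies it with C = 4 and N = 1: for every x > 1 we have -2x + 2 < 0, so |4x^2 - 2x + 2| <= 4 |x^2|, and therefore T(n) = O(n^2) also follows from the formal definition.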
If a is some real number, we write
f(x) = O(g(x)) as x → a
if and only if there exist constants d > 0 and C such that
|f(x)| <= C |g(x)| for all x with |x - a| < d
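For example, sin(x) = O(x) as x → 0, because |sin(x)| <= |x| holds for all x, so C = 1 and any d > 0 will do.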
So, for your case it will be
O(n), with |f(x)| <= C |g(x)|
Source: http://web.mit.edu/16.070/www/lecture/big_o.pdf
    int total = 0;
    for (int i = 0; i < n; i++) {        // --> outer loop runs n times
        for (int j = 0; j < n; j++) {    // --> inner loop runs n times
            total = total + 1;           // --> constant work, 1 step
        }
    }
Big O notation tells us that, for a very large n, the outer loop runs n times and the inner loop runs n times for each outer iteration, so the work is n * n steps.
Suppose n = 100; then the total runtime is n^2 = 10000 steps.
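A minimal, self-contained version of the snippet above (assuming n = 100, as in the example; the printout is only there to verify the count) that counts the iterations and confirms the n^2 total:

    #include <stdio.h>

    int main(void) {
        int n = 100;                          // problem size from the example
        int total = 0;
        for (int i = 0; i < n; i++) {         // outer loop: n iterations
            for (int j = 0; j < n; j++) {     // inner loop: n iterations each
                total = total + 1;            // constant work per step
            }
        }
        printf("n = %d, total = %d\n", n, total);   // prints total = 10000
        return 0;
    }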