Big O Notation Definition

I need help understanding and working with Big O notation. I understand its purpose; I just don't know how to "determine the complexity of a given piece of code."

Define a Big O Notation for Each of the Following

a.

n=6; cout<<n<<endl; 

b.

 n=16; for (i=0; i<n; i++) cout<<i<<endl; 

c.

 i=6; n=23; while (i<n) { cout<<i-6<<endl; i++; } 

d.

 int a[] = {1, 3, 5, 7, 9, 11, 13, 15, 17, 19};
 n = 10;
 for (i = 0; i < n; i++) a[i] = a[i] * 2;
 for (i = 9; i >= 0; i--) cout << a[i] << endl;

e.

 sum = 0;
 n = 6;
 k = pow(2, n);
 for (i = 0; i < k; i++) sum = sum + k;
5 answers

Big O indicates the order of complexity of your algorithm.

The main things:

  • The complexity is measured relative to the input size.
  • You choose a unit operation (usually an assignment or a comparison).
  • You count how many times this operation is called.
  • Constant terms and constant coefficients are usually ignored, so if the number of operations is 3n^3 + 12, it simplifies to n^3, also written O(n^3).
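The counting step above can be sketched by instrumenting a loop with an explicit counter (the function name and the nested-loop example are my own, purely illustrative):

```cpp
#include <cassert>

// Count the unit operations (here: one increment per inner-loop pass)
// performed by a doubly nested loop over an input of size n.
// The body runs n * n times, so this algorithm is O(n^2).
long countOperations(int n) {
    long ops = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            ops++;  // one unit operation
    return ops;
}
```

Calling it with n = 10 and then n = 20 shows the quadratic growth: doubling n quadruples the count.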

a.) It runs once, there is no loop; the complexity is trivially O(1).

b.) Call n times in a loop: O(n)

c.) Here we analyze with respect to n (since it is the variable the loop grows toward). The number of iterations is n - 6, so this is O(n).

d.) Here n = 10 is the size of your array (and i runs over that size). You go from 0 up to n - 1, then from n - 1 back down to 0: n + n operations, technically O(2n), which reduces to O(n). Both are called linear time; the factor of two is just the slope of the line, which Big O does not care about.

e.) The loop runs from 0 up to pow(2, n), that is, 2^n iterations, so this is O(2^n).
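For case e.), the 2^n count can be checked by counting loop iterations directly (a small sketch; the function name is mine):

```cpp
#include <cmath>

// How many times the loop body in example e.) executes for a given n:
// k = 2^n, and the loop runs exactly k times, so the count is O(2^n).
long iterationsForE(int n) {
    long k = static_cast<long>(std::pow(2, n));
    long count = 0;
    for (long i = 0; i < k; i++)
        count++;
    return count;
}
```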


Assuming you don't count cout as part of the Big-O measurement:

a) O(1): an integer assignment takes constant time.
b) O(n), since the loop performs n operations.
c) O(n - c) = O(n): additive constants disappear in Big-O.
d) O(2n) = O(n): two linear-time loops in sequence are still linear time.
e) If n is allowed to grow, pow(2, n) = 2^n, so the number of operations is O(2^n); however, if n is held constant, the loop runs a fixed k = 2^6 = 64 times, which is constant time.
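The distinction in e) between constant n and growing n can be made concrete (a sketch assuming n is fixed at 6; the function name is mine):

```cpp
// With n fixed at 6 the work is a constant: k = 2^6 = 64 additions,
// i.e. O(1). Only when n itself is the input does the count become O(2^n).
long sumForFixedN() {
    long sum = 0;
    const long k = 64;  // pow(2, 6), constant because n = 6 is constant
    for (long i = 0; i < k; i++)
        sum += k;       // the original loop body: sum = sum + k
    return sum;         // 64 iterations, each adding 64
}
```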

+4
source

These examples are pretty simple. First identify the basic (unit) operation in the code, then try to express the number of times it is called as a function of the input.

To be less abstract:

a.)

This code always runs in constant time. That time depends on the computer, I/O latency, etc., but it is essentially independent of the value of n.

b.)

This time, the code inside the loop runs multiple times. If n were twice as large, what could you say about the number of iterations?

c.)

Again, some code inside a loop. But this time the number of iterations is less than n. If n is big enough, though, can you treat it just like b.)?

d.)

This code is interesting. The operation inside the first loop is more complicated, but it still takes more or less constant time. So how many times is it performed with respect to n? Compare again with b.)

The second loop is only there to trick you. For small n it may take longer than the first. However, O(n) notation is always about large values of n.

e.)

The last part of the code is actually quite simple. The number of unit operations inside the loop is 2^n. Add 1 to n and you get twice as many operations.
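That doubling can be observed directly by counting iterations for consecutive values of n (an illustrative sketch, not part of the question's code):

```cpp
#include <cmath>

// Number of times `sum = sum + k` runs for a given n in the last example.
long loopCount(int n) {
    long k = static_cast<long>(std::pow(2, n));
    long times = 0;
    for (long i = 0; i < k; i++)
        times++;
    return times;
}
// loopCount(n + 1) is always exactly 2 * loopCount(n).
```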


For the full mathematical definition, I recommend Wikipedia. For simple purposes, big-oh is an upper bound on how many times the algorithm's basic routine repeats before finishing, given an input of length n. We call this upper bound O(n), or "big oh of n".

In code, accessing a single array element in C++ is O(1). It is one operation, no matter how large the array is.

Linear iteration over an array in a for loop is O(n).

Nested for loops are O(n^2), or O(n^k) if there are k levels of nesting.

Recursion with divide and conquer (heaps, binary trees, etc.) is O(log n) or O(n log n), depending on the operation.
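As a concrete instance of the O(log n) case, binary search halves its range on every step (a standard textbook sketch, not code from the question):

```cpp
#include <vector>

// Binary search in a sorted array: each comparison halves the remaining
// range, so at most about log2(n) + 1 iterations run -> O(log n).
int binarySearch(const std::vector<int>& a, int target) {
    int lo = 0, hi = static_cast<int>(a.size()) - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == target) return mid;
        if (a[mid] < target)  lo = mid + 1;
        else                  hi = mid - 1;
    }
    return -1;  // target not present
}
```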


a

 n=6; cout<<n<<endl; 

Constant time, O(1). This means that as n increases from 1 to infinity, the time required to complete this statement does not increase. Each time you increase n, the amount of time required does not increase.

b

 n=16; for (i=0; i<n; i++) cout<<i<<endl; 

Linear time, O(n). This means that as n increases from 1 to infinity, the time required to complete this statement increases linearly. Each time you increase n, the amount of extra time required over the previous step stays constant.

c

 i=6; n=23; while (i<n) { cout<<i-6<<endl; i++; } 

Linear time, O(n), same as example b above.

d

 int a[] = {1, 3, 5, 7, 9, 11, 13, 15, 17, 19};
 n = 10;
 for (i = 0; i < n; i++) a[i] = a[i] * 2;
 for (i = 9; i >= 0; i--) cout << a[i] << endl;

Linear time, O(n). As n grows from 1 to infinity, the time required to execute these statements increases linearly. The line is twice as steep as in example c, but Big O notation does not care how steep the line is, only how the time requirement grows. The two loops together still take a linearly increasing amount of time as n increases.
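That the two loops in d do a total of 2n operations, not n^2, can be checked by counting (a sketch; the helper name is mine):

```cpp
// Operations performed by the two consecutive loops in example d for an
// array of size n: n doublings plus n prints -> 2n total, which is O(n).
long countOpsD(int n) {
    long ops = 0;
    for (int i = 0; i < n; i++) ops++;       // first loop: a[i] = a[i] * 2
    for (int i = n - 1; i >= 0; i--) ops++;  // second loop: cout << a[i]
    return ops;
}
```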

e

 sum = 0;
 n = 6;
 k = pow(2, n);
 for (i = 0; i < k; i++) sum = sum + k;

Make a table of how many times sum=sum+k is performed for a given value of n:

 n    number_of_times_in_loop
 1    2^1 = 2
 2    2^2 = 4
 3    2^3 = 8
 4    2^4 = 16
 5    2^5 = 32
 6    2^6 = 64

As n goes from 1 to infinity, notice how the number of times through the loop increases exponentially: 2 -> 4 -> 8 -> 16 -> 32 -> 64. What happens if I plug in n = 150? The number of times through the loop becomes astronomical.

This is exponential time: O(2^n) denotes an algorithm whose work doubles with each additional element in the input. Plug in a large n at your own risk; you will wait hours, or years, for the computation to complete for just a few more input elements.

Why do we care?

As computer scientists, we are interested in understanding Big O notation correctly, because we want to be able to say things like this with authority and conviction:

"Jim's algorithm for calculating the distance between planets takes exponential time. If we want to handle 20 objects, it takes too long. His code is no good, because I can do it in linear time."

And even better, if they don't like what they hear, you can prove it with mathematics.


Source: https://habr.com/ru/post/902781/

