Asymptotic notations
Orders of Growth • Why this emphasis on the count’s order of growth for large input sizes? • Values (some approximate) of several functions important for analysis of algorithms
Order of growth: comparing the magnitude • The function growing slowest is the logarithmic function • the exponential function 2^n and the factorial function n! grow so fast that their values become astronomically large even for rather small values of n. ▫ For example: it would take about 4 · 10^10 years for a computer making a trillion (10^12) operations per second to execute 2^100 operations. ▫ That is longer than 4.5 billion (4.5 · 10^9) years, the estimated age of the planet Earth.
• 2^n and n! are often referred to as "exponential-growth functions"
Order of growth with twofold increase of n • The function log2 n increases in value by just 1 ▫ (because log2 2n = log2 2 + log2 n = 1 + log2 n); • the linear function increases twofold; • the linearithmic function n log2 n increases slightly more than twofold; • the quadratic function n^2 and the cubic function n^3 increase fourfold and eightfold, respectively ▫ (because (2n)^2 = 4n^2 and (2n)^3 = 8n^3); • the value of 2^n gets squared (because 2^(2n) = (2^n)^2), and n! increases much more than that (even mathematics refuses to cooperate and give a neat answer for n!); see the sketch below
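A quick numeric check of these doubling claims; a minimal Python sketch (the helper name is mine, not from the source):

import math

def ratio_when_doubled(f, n=1000):
    # How much does f grow when the input size doubles?
    return f(2 * n) / f(n)

funcs = [
    ("log2 n",   lambda n: math.log2(n)),      # grows by just +1 (ratio ~1.1 at n=1000)
    ("n",        lambda n: n),                 # exactly doubles
    ("n log2 n", lambda n: n * math.log2(n)),  # slightly more than doubles
    ("n^2",      lambda n: n ** 2),            # quadruples
    ("n^3",      lambda n: n ** 3),            # increases eightfold
]
for name, f in funcs:
    print(f"{name:9s} ratio = {ratio_when_doubled(f):.2f}")
# 2^n and n! are omitted: their doubling ratios overflow floats almost immediately.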
Classes of Algorithmic Efficiency

Class    Name          Algorithms
1        constant      Runs in constant time regardless of the size of the problem (n); algorithms like this are rare
log n    logarithmic   Algorithms that pare away part of the problem by a constant factor in each iteration
n        linear        The algorithm's running time T grows in linear proportion to the growth of n
n log n  n-log-n       Divide-and-conquer algorithms, often seen in recursive algorithms
n^2      quadratic     Seen in algorithms with two levels of nested loops
n^3      cubic         Often seen in algorithms with three levels of nested loops, e.g., linear algebra
2^n      exponential   Algorithms whose work grows as a power of 2: generating all possible subsets of a set
n!       factorial     Generating all permutations of a set
based on: Levitin, Anany, The Design and Analysis of Algorithms, Addison-Wesley, 2007
Comparing order of growth using limits
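• The standard limit-based test: compute L = lim_{n→∞} t(n)/g(n) ▫ if L = 0, then t(n) grows slower than g(n) (t(n) ∈ o(g(n))) ▫ if L = c > 0, then t(n) grows at the same rate as g(n) (t(n) ∈ Θ(g(n))) ▫ if L = ∞, then t(n) grows faster than g(n) (t(n) ∈ ω(g(n)))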
Example
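▫ For instance, compare (1/2)n(n-1) with n^2: lim_{n→∞} [(1/2)n(n-1)] / n^2 = (1/2) lim_{n→∞} (1 - 1/n) = 1/2 ▫ the limit is a positive constant, so the two functions have the same order of growth: (1/2)n(n-1) ∈ Θ(n^2)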
Asymptotic Complexity
• Two important reasons to determine operation and step counts:
1. To compare the time complexities of two programs that compute the same function
2. To predict the growth in run time as the instance characteristic changes
• Neither of the two yields a very accurate measure ▫ Operation counts: focus on "key" operations and ignore all others ▫ Step counts: the notion of a step is itself inexact
• Asymptotic complexity provides meaningful statements about the time and space complexities of a program
• Asymptotic notation ▫ Remember that our analysis is concerned with the efficiency of algorithms ▫ so we determine a function that describes (to a reasonable degree) the run-time cost T of the algorithm ▫ we are particularly interested in the growth of the algorithm's cost as the size of the problem, n, grows
• Asymptotic notation ▫ Often we need to know the run-time cost (T) in broader terms ▫ …within certain boundaries ▫ We want to describe the efficiency of an algorithm in terms of its asymptotic behavior
Need for Asymptotic Notations
• Calculation of time complexity for two programs A and B by Rama
• Calculation of time complexity for two programs A and B by Shyama
• Break-even point: the input size at which the two run-time functions cross, beyond which one program is always faster
• Big O ▫ Suppose we have a function of n, g(n), that we suggest gives us an upper bound on the worst-case behavior of our algorithm's run time, which we have determined to be f(n); then…
• Big O ▫ We describe the upper bound on the growth of our run-time function f(n): ▫ f(n) is O(g(n)), i.e., f(n) is bounded from above by g(n) for all significant values of n, if there exist a constant c > 0 and a threshold n0 such that f(n) <= c·g(n) for all n >= n0
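▫ a quick worked instance: 100n + 5 is O(n^2), since 100n + 5 <= 100n + n = 101n <= 101n^2 for all n >= 5 (take c = 101 and n0 = 5)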
• Big O
from: Levitin, Anany, The Design and Analysis of Algorithms, Addison-Wesley, 2007
• Big O ▫ …but be careful ▫ writing f(n) = O(g(n)) is incorrect ▫ the proper phrasing is ▫ f(n) is O(g(n)) ▫ or, to be absolutely correct, ▫ f(n) ∈ O(g(n))
• Big Ω ▫ Big Omega ▫ our function is bounded from below by g(n) ▫ that is, ▫ f(n) is Ω(g(n)) if there exists some positive constant c such that f(n) >= c·g(n) for all n >= n0
▫ what does this mean?
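▫ a quick worked instance: n^3 is Ω(n^2), since n^3 >= n^2 for all n >= 1 (take c = 1 and n0 = 1)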
• Big Ω
from: Levitin, Anany, The Design and Analysis of Algorithms, Addison-Wesley, 2007
• Big Θ ▫ Big Theta ▫ our function is bounded from above and below by g(n) ▫ that is, ▫ f(n) is Θ(g(n)) if there exist two positive constants c1 and c2 such that c2·g(n) <= f(n) <= c1·g(n) for all n >= n0
▫ what does this mean?
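▫ a quick worked instance: (1/2)n(n-1) is Θ(n^2) ▫ upper bound: (1/2)n(n-1) <= (1/2)n^2 for all n >= 0, so c1 = 1/2 ▫ lower bound: (1/2)n(n-1) = (1/2)n^2 - (1/2)n >= (1/2)n^2 - (1/2)n·(n/2) = (1/4)n^2 for all n >= 2, so c2 = 1/4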
• Big Θ
• Or said in another way ▫ O(g(n)): class of functions f(n) that grow no faster than g(n) ▫ Θ (g(n)): class of functions f(n) that grow at same rate as g(n) ▫ Ω(g(n)): class of functions f(n) that grow at least as fast as g(n)
• Little o ▫ f(n) is o(g(n)) if, for every constant c > 0, there exists an n0 such that f(n) < c·g(n) for all n >= n0 ▫ what does this mean? ▫ f(n) is asymptotically smaller than g(n)
• Little ω ▫ f(n) is ω(g(n)) if, for every constant c > 0, there exists an n0 such that f(n) > c·g(n) for all n > n0
▫ what does this mean? ▫ f(n) is asymptotically larger than g(n)
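▫ a quick worked instance: n is o(n^2), because for any c > 0 we have n < c·n^2 as soon as n > 1/c; equivalently, lim n/n^2 = 0 ▫ symmetrically, n^2 is ω(n)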
Asymptotic order of growth A way of comparing functions that ignores constant factors and small input sizes (because constant factors depend on the machine and implementation, and every algorithm is fast on small inputs)
• O(g(n)): class of functions f(n) that grow no faster than g(n) • Θ(g(n)): class of functions f(n) that grow at same rate as g(n) • Ω(g(n)): class of functions f(n) that grow at least as fast as g(n)
Comparing order of growth using limits
Theoretical Analysis of Algorithms • Suppose we have a run-time function like T(n) = 4n^3 + 12n^2 + 2n + 7
• So, to simplify our run-time function T, we can
▫ eliminate constant terms that are not coefficients of n: 4n^3 + 12n^2 + 2n ▫ eliminate the lowest-order terms: 4n^3 + 12n^2 ▫ or simply keep only the highest-order term: 4n^3 ▫ …and drop the coefficient of that term: n^3
Theoretical Analysis of Algorithms • So, T(n) = 4n^3 + 12n^2 + 2n + 7
• therefore ▫ T(n) is O(n^3)
• or is it that ▫ T(n) is O(n^6)? (true or false)
• True • but it is considered bad form • why? ▫ O(n^6) is a valid upper bound but not a tight one; we prefer the smallest simple g(n) that works
Theorem • If f(n) is a polynomial of degree d, then f(n) is O(n^d) • If t1(n) ∈ O(g1(n)) and t2(n) ∈ O(g2(n)), then t1(n) + t2(n) ∈ O(max{g1(n), g2(n)}). ▫ The analogous assertions are true for the Ω-notation and Θ-notation. • Implication: The algorithm's overall efficiency will be determined by the part with a larger order of growth, i.e., its least efficient part.
▫ For example, 5n^2 + 3n log n ∈ O(n^2)
Some properties of asymptotic order of growth • f(n) ∈ O(f(n)) • f(n) ∈ O(g(n)) iff g(n) ∈ Ω(f(n)) • If f(n) ∈ O(g(n)) and g(n) ∈ O(h(n)), then f(n) ∈ O(h(n)) (note the similarity with a ≤ b) • If f1(n) ∈ O(g1(n)) and f2(n) ∈ O(g2(n)), then f1(n) + f2(n) ∈ O(max{g1(n), g2(n)}) • Also, Σ_{1≤i≤n} Θ(f(i)) = Θ(Σ_{1≤i≤n} f(i))
• Properties of Asymptotic Comparisons ▫ Transitivity f(n) is O(g(n)) and g(n) is O(h(n)) then f(n) is O(h(n)) f(n) is Θ(g(n)) and g(n) is Θ(h(n)) then f(n) is Θ(h(n)) f(n) is Ω(g(n)) and g(n) is Ω(h(n)) then f(n) is Ω(h(n)) f(n) is o(g(n)) and g(n) is o(h(n)) then f(n) is o(h(n)) f(n) is ω(g(n)) and g(n) is ω(h(n)) then f(n) is ω(h(n)) ▫ Reflexivity
f(n) is Θ(f(n)) f(n) is O(f(n)) f(n) is Ω(f(n))
Exercise 2.2, problem 3. For each of the following functions, indicate the class Θ(g(n)) the function belongs to. (Use the simplest g(n) possible in your answers.) Prove your assertions.
Important theorems
5. List the following functions according to their order of growth from the lowest to the highest:
Time efficiency of non-recursive algorithms General Plan for Analysis • Decide on parameter n indicating input size
• Identify algorithm’s basic operation • Determine worst, average, and best cases for input of size n
• Set up a sum for the number of times the basic operation is executed • Simplify the sum using standard formulas and rules (see Appendix A)
Useful summation formulas and rules
• Σ_{l≤i≤n} 1 = 1 + 1 + … + 1 = n - l + 1; in particular, Σ_{1≤i≤n} 1 = n - 1 + 1 = n ∈ Θ(n)
• Σ_{1≤i≤n} i = 1 + 2 + … + n = n(n+1)/2 ≈ n^2/2 ∈ Θ(n^2)
• Σ_{1≤i≤n} i^2 = 1^2 + 2^2 + … + n^2 = n(n+1)(2n+1)/6 ≈ n^3/3 ∈ Θ(n^3)
• Σ_{0≤i≤n} a^i = 1 + a + … + a^n = (a^(n+1) - 1)/(a - 1) for any a ≠ 1; in particular, Σ_{0≤i≤n} 2^i = 2^0 + 2^1 + … + 2^n = 2^(n+1) - 1 ∈ Θ(2^n)
• Σ(a_i ± b_i) = Σa_i ± Σb_i;  Σ c·a_i = c·Σa_i;  Σ_{l≤i≤u} a_i = Σ_{l≤i≤m} a_i + Σ_{m+1≤i≤u} a_i
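These closed forms are easy to sanity-check numerically; a minimal Python sketch:

n = 20
assert sum(1 for i in range(1, n + 1)) == n                                   # sum of 1s
assert sum(i for i in range(1, n + 1)) == n * (n + 1) // 2                    # sum of i
assert sum(i * i for i in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6  # sum of i^2
assert sum(2 ** i for i in range(n + 1)) == 2 ** (n + 1) - 1                  # geometric, a = 2
print("all summation formulas hold for n =", n)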
Example problems EXAMPLE 1 Consider the problem of finding the value of the largest element in a list of n numbers. For simplicity, we assume that the list is implemented as an array.
Example 1: Maximum element
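A minimal Python sketch of the scan this example analyzes (the name max_element is mine, not from the source):

def max_element(a):
    # Basic operation: the comparison a[i] > maxval.
    # It executes exactly n - 1 times, so C(n) = n - 1 and T(n) is Theta(n).
    maxval = a[0]
    for i in range(1, len(a)):
        if a[i] > maxval:
            maxval = a[i]
    return maxval

print(max_element([3, 9, 2, 7]))  # -> 9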
• EXAMPLE 2 Consider the element uniqueness problem: check whether all the elements in a given array of n elements are distinct.
Example 2: Element uniqueness problem
Analysis
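A minimal Python sketch of the brute-force algorithm being analyzed (the name unique_elements is mine):

def unique_elements(a):
    # Basic operation: the comparison a[i] == a[j].
    # Worst case (all elements distinct): every pair is compared,
    # C(n) = (n-1) + (n-2) + ... + 1 = n(n-1)/2, so T(n) is Theta(n^2).
    n = len(a)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if a[i] == a[j]:
                return False
    return True

print(unique_elements([1, 2, 3]), unique_elements([1, 2, 1]))  # -> True False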
EXAMPLE 3 Given two n × n matrices A and B, find the
time efficiency of the definition-based algorithm for computing their product C = AB.
Analysis
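A minimal Python sketch of the definition-based algorithm (names are mine):

def matrix_product(A, B):
    # Basic operation: the multiplication A[i][k] * B[k][j] in the innermost loop.
    # It executes n * n * n times, so M(n) = n^3 and T(n) is Theta(n^3).
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

print(matrix_product([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # -> [[19, 22], [43, 50]]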
EXAMPLE 4 The following algorithm finds the number of binary digits in the binary representation of a positive decimal integer.
It cannot be investigated the way the previous examples are. The halving game: find the smallest integer i such that n/2^i ≤ 1. Answer: i = ⌈log2 n⌉.
So, T(n) = Θ(log n) divisions.
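A Python rendering of the algorithm this example describes (a sketch; the identifier is mine):

def binary_digits(n):
    # n is halved on each pass, so the loop body runs floor(log2 n) times:
    # T(n) = Theta(log n) divisions.
    count = 1
    while n > 1:
        count += 1
        n = n // 2  # basic operation: integer division (the halving)
    return count

print(binary_digits(8))  # -> 4, since 8 is 1000 in binary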
Another solution: Using recurrence relations.
Plan for Analysis of Recursive Algorithms • Decide on a parameter indicating an input’s size. • Identify the algorithm’s basic operation. • Check whether the number of times the basic op. is executed may vary on different inputs of the same size. (If it may, the worst, average, and best cases must be investigated separately.) • Set up a recurrence relation with an appropriate initial condition expressing the number of times the basic op. is executed. • Solve the recurrence (or, at the very least, establish its solution’s order of growth) by backward substitutions or another method.
Example 1: Recursive evaluation of n! Definition: n! = 1 · 2 · … · (n-1) · n for n ≥ 1 and 0! = 1 Recursive definition of n!: F(n) = F(n-1) · n for n ≥ 1 and F(0) = 1
Size: n Basic operation: multiplication Recurrence relation: M(n) = M(n-1) + 1, M(0) = 0
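The recursive definition translates directly to code; a minimal Python sketch:

def F(n):
    # M(n) = M(n-1) + 1 multiplications, with M(0) = 0.
    if n == 0:
        return 1          # 0! = 1, no multiplication
    return F(n - 1) * n   # one multiplication per call

print(F(5))  # -> 120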
Solving the recurrence for M(n) M(n) = M(n-1) + 1, M(0) = 0
M(n) = M(n-1) + 1 = (M(n-2) + 1) + 1 = M(n-2) + 2 = (M(n-3) + 1) + 2 = M(n-3) + 3
… = M(n-i) + i = … = M(0) + n = n The method is called backward substitution.
Example 2: The Tower of Hanoi Puzzle
Recurrence for number of moves:
M(n) = 2M(n-1) + 1
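The recurrence mirrors the recursive solution itself; a minimal Python sketch (peg labels and names are mine):

def hanoi(n, src, aux, dst, moves):
    # Move n-1 disks out of the way, move the largest disk, move them back:
    # M(n) = M(n-1) + 1 + M(n-1) = 2M(n-1) + 1, with M(1) = 1.
    if n == 1:
        moves.append((src, dst))
        return
    hanoi(n - 1, src, dst, aux, moves)
    moves.append((src, dst))
    hanoi(n - 1, aux, src, dst, moves)

moves = []
hanoi(3, 'A', 'B', 'C', moves)
print(len(moves))  # -> 7, i.e., 2^3 - 1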
Solving recurrence for number of moves M(n) = 2M(n-1) + 1, M(1) = 1
M(n) = 2M(n-1) + 1 = 2(2M(n-2) + 1) + 1 = 2^2·M(n-2) + 2^1 + 2^0 = 2^2·(2M(n-3) + 1) + 2^1 + 2^0
= 2^3·M(n-3) + 2^2 + 2^1 + 2^0 = …
= 2^(n-1)·M(1) + 2^(n-2) + … + 2^1 + 2^0
= 2^(n-1) + 2^(n-2) + … + 2^1 + 2^0 = 2^n - 1
Binary Search • Binary search is a remarkably efficient algorithm for searching in a sorted array.
• It works by comparing a search key K with the array's middle element A[m]. • If they match, the algorithm stops; otherwise, the same operation is repeated recursively for the first half of the array if K < A[m], and for the second half if K > A[m].
Analysis
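An iterative Python sketch of the algorithm just described (the recursive variant makes the same comparisons):

def binary_search(a, K):
    # Worst-case comparisons: C(n) = C(floor(n/2)) + 1, C(1) = 1,
    # which solves to floor(log2 n) + 1, i.e., Theta(log n).
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        m = (lo + hi) // 2
        if K == a[m]:
            return m
        elif K < a[m]:
            hi = m - 1   # continue in the first half
        else:
            lo = m + 1   # continue in the second half
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # -> 3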
Example 3: Counting #bits
A(n) = A(⌊n/2⌋) + 1 for n > 1, A(1) = 0
For n = 2^k (using the Smoothness Rule):
A(2^k) = A(2^(k-1)) + 1, A(2^0) = 0
A(2^k) = A(2^(k-1)) + 1
= (A(2^(k-2)) + 1) + 1 = A(2^(k-2)) + 2
= …
= A(2^(k-i)) + i = A(2^(k-k)) + k = k + 0 = log2 n
So A(n) ∈ Θ(log n).
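The same recurrence in runnable form; a minimal Python sketch (the name bin_rec is mine):

def bin_rec(n):
    # Additions performed: A(n) = A(floor(n/2)) + 1, A(1) = 0,
    # which solves to floor(log2 n), i.e., Theta(log n).
    if n == 1:
        return 1
    return bin_rec(n // 2) + 1  # one addition per recursive call

print(bin_rec(8))  # -> 4 binary digits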