DAA Module 1

CO-PO Mapping

CO 1: Analyse a given algorithm for its complexity. (L4, PO3)
CO 2: Distinguish between time complexity and space complexity of an algorithm. (L4, PO3)
CO 3: Derive the worst case and average case complexity of a given algorithm. (L5, PO4)
CO 4: Solve a recurrence equation. (L3, PO2)
CO 5: Explain various algorithm design techniques. (L2)

Algorithm

➔ A well-defined computational procedure.
➔ Takes some value, or set of values, as input.
➔ Produces some value, or set of values, as output.
➔ A sequence of computational steps that transform the input into the output.
➔ A tool for solving a well-specified computational problem.
➔ Describes a specific computational procedure for achieving that input/output relationship.
➔ An algorithm is said to be correct if, for every input instance, it halts with the correct output.
➔ An incorrect algorithm might not halt at all on some input instances, or it might halt with an incorrect answer.

Algorithm

➔ An algorithm is a self-contained, step-by-step set of operations to be performed.
➔ Algorithms perform calculation, data processing, and/or automated reasoning tasks.
➔ An algorithm is an effective method that can be expressed within a finite amount of space and time, in a well-defined formal language, for calculating a function.
➔ Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state.

Applications of algorithms

➔ Algorithms on the Internet to quickly access and retrieve large amounts of data.
➔ Numerical algorithms used in cryptography to ensure privacy of personal information.
➔ Finding shortest paths in routing algorithms.
➔ Finding common long subsequences of two different sequences (DNA) — algorithms under dynamic programming.
➔ Topological sorting algorithms to find out which machine parts have to be used first, and in which order, before another part is used.

Determination of the best algorithm

➔ Depends on the application.
➔ Criteria:
  ◆ Speed
  ◆ Architecture of the computer
  ◆ Storage devices used
➔ Efficiency:
  ◆ Computing time
  ◆ Memory space
  ◆ Algorithms are to be efficient in terms of time or space.

Point to ponder: Data structure

Name one data structure. What are its strengths? What are its limitations?

Analyzing Algorithms and Problems

➔ Algorithms are analyzed with the intention of improving them.
➔ The following criteria are used for analyzing an algorithm:
  ◆ Correctness
  ◆ Amount of work done
  ◆ Amount of space used
  ◆ Simplicity, clarity
  ◆ Optimality

Correctness

➔ An algorithm consists of sequences of steps (operations, instructions, statements) for transforming inputs (preconditions) to outputs (postconditions).
➔ If the preconditions are satisfied, then the postconditions will be true when the algorithm terminates.
➔ Two aspects of an algorithm: method and implementation.
➔ Proving the correctness of the methods or formulas used may be easy, or may require a long sequence of lemmas and theorems about the objects on which the algorithm works.

Correctness contd.

➔ For small programs, checking the initial and final values of loop counters and hand-simulating the algorithm on a few small examples can be used as informal techniques to check the correctness of an algorithm.
➔ To prove the correctness of a large and complex program, break the program down into smaller modules, and check the correctness of each of the smaller modules.

Amount of work done

➔ A measure of work that tells us about the efficiency of the method used by the algorithm, independent of the computer, programming language, programmer, and other implementation details, and usually depending on the size of the input. Eg: counting passes through loops.
➔ Basic operation:
  ◆ Identify a particular operation fundamental to the problem.
  ◆ The total number of operations performed is roughly proportional to the number of basic operations.
➔ Identify the properties of the inputs that affect the behavior of the algorithm.
➔ "Amount of work done" is synonymous with the complexity of an algorithm.

Space Usage

➔ The number of memory cells used by the program.
➔ A program requires storage space for the instructions, the constants and variables used by the program, the input data, and workspace for manipulating data and storing information needed to carry out its computations.
➔ Data is representable in several forms.
➔ If the input data have one natural form, then we analyze the amount of extra space used, aside from the program and input.
➔ If the amount of space used depends on the particular input, worst-case and average-case analysis can be done.

Simplicity

➔ The simplest and most straightforward way to solve a problem may not be the most efficient method.
➔ The simplicity of an algorithm makes writing, debugging, modifying and verifying its correctness easier.
➔ The time needed to produce a debugged program and the program's efficiency should both be considered when choosing an algorithm.

Optimality

➔ We can't improve an algorithm for a problem beyond a certain point.
➔ Each problem has an inherent complexity: there is some minimum amount of work required to solve it.
➔ To analyze the complexity of a problem, we choose:
  ◆ a class of algorithms
  ◆ a measure of complexity
➔ An algorithm is optimal if there is no algorithm in the class under study that performs fewer basic operations.
➔ Prove theorems that establish a lower bound on the number of operations needed to solve the problem. Then any algorithm that performs that lower-bound number of operations is optimal. {Lower bound (for the worst case)}

Analysis of Algorithm

➔ The analysis of algorithms is the determination of the amount of resources necessary to execute them. Resources include computational time, memory, communication bandwidth, or computer hardware.
➔ Most algorithms are designed to work with inputs of arbitrary length.
➔ Usually, the efficiency or running time of an algorithm is stated as a function relating the input length to the number of steps (time complexity) or storage locations (space complexity).
➔ Algorithm analysis is an important part of the broader computational complexity theory, which provides theoretical estimates for the resources needed by any algorithm that solves a given computational problem.

Analyzing Algorithms

Input size may be the number of items in the input (sorting), the total number of bits (multiplication), or the number of vertices and edges (graph). Running time is the number of primitive steps or operations executed. It is assumed that a constant amount of time is required to execute each line of code, i.e. the i-th line takes constant time c_i.

Asymptotic efficiency of Algorithms

➔ Defines how the running time of an algorithm increases with the size of the input in the limit, as the size of the input increases without bound.
➔ May not be meaningful for small inputs.
➔ Describes the worst-case running-time function T(n), which usually is defined only on integer input sizes.
➔ Allows for comparison of relative performance between alternative algorithms.

Asymptotic Notation

➔ Applies to functions whose domains are the set of natural numbers: N = {0, 1, 2, …}
➔ If the time resource T(n) is being analyzed, the function's range is usually the set of non-negative real numbers: T(n) ∈ R+.
➔ If the space resource S(n) is being analyzed, the function's range is usually the set of natural numbers: S(n) ∈ N.

Asymptotic Notations

1. Theta (Θ)
2. Big O
3. Big Omega (Ω)
4. Little o
5. Little omega (ω)

Θ Notation

➔ Asymptotically tight bound.
➔ f(n) ∈ Θ(g(n))
➔ For all values of n >= n0, f(n) = g(n) to within a constant factor.
➔ Θ(1) means either a constant or a constant function.
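Formally (the standard definition, supplied here since the slide states it only informally): f(n) ∈ Θ(g(n)) iff there exist positive constants c1, c2 and n0 such that 0 <= c1·g(n) <= f(n) <= c2·g(n) for all n >= n0.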



O Notation

➔ Asymptotic upper bound.
➔ For all values of n at and to the right of n0, the value of f(n) is on or below c·g(n).
➔ f(n) = O(g(n)) indicates that f(n) is a member of the set O(g(n)).
➔ Θ(g(n)) ⊆ O(g(n))
➔ Used to bound the worst-case running time of an algorithm.
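Formally (standard definition, for completeness): f(n) ∈ O(g(n)) iff there exist positive constants c and n0 such that 0 <= f(n) <= c·g(n) for all n >= n0.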

Ω Notation

➔ Asymptotic lower bound.
➔ For all values of n at or to the right of n0, the value of f(n) is on or above c·g(n).
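Formally (standard definition, for completeness): f(n) ∈ Ω(g(n)) iff there exist positive constants c and n0 such that 0 <= c·g(n) <= f(n) for all n >= n0.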

o Notation (little-oh)

➔ An upper bound that is not asymptotically tight.
➔ The bound 2n^2 = O(n^2) is asymptotically tight, but the bound 2n = O(n^2) is not. Use o-notation to denote an upper bound that is not asymptotically tight.
➔ 2n = o(n^2), but 2n^2 ≠ o(n^2).
➔ f(n) = o(g(n)) means that the bound 0 <= f(n) < c·g(n) holds for every constant c > 0, for all sufficiently large n. The function f(n) becomes insignificant relative to g(n) as n approaches infinity.



ω Notation

➔ A lower bound that is not asymptotically tight.
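Formally (standard definition, for completeness): f(n) ∈ ω(g(n)) iff for every constant c > 0 there exists n0 > 0 such that 0 <= c·g(n) < f(n) for all n >= n0. For example, n^2/2 = ω(n), but n^2/2 ≠ ω(n^2).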

Properties of Asymptotic Functions

Questions

➔ Explain the significance of the notations Theta and Omega in the complexity analysis of algorithms. (repeated twice) (5 marks)
➔ Explain the different asymptotic notations used for specifying the growth rate of functions. (repeated five times) (10 marks)
➔ How do we analyze algorithms based on (i) amount of space (ii) optimality? (repeated twice) (5 marks)
➔ Distinguish between time complexity and space complexity of algorithms. Explain various criteria used for analysis of algorithms. (repeated thrice) (10 marks)
➔ Explain the various criteria used for analyzing algorithms with suitable examples. (repeated thrice) (15 marks)
➔ What do you understand by the complexity of an algorithm? Explain worst case complexity and average case complexity with a suitable example. (repeated thrice) (15 marks)

Analyse Algorithm - Largest

Algorithm Largest(A)
  M = A[0];
  n = length(A);
  for (i = 0; i < n; ++i) {
    if (A[i] >= M) {
      M = A[i];
    }
  }
  return M;

Instructions can be ...

➔ Assigning a value to a variable
➔ Looking up the value of a particular element in an array
➔ Comparing two values
➔ Incrementing a value
➔ Basic arithmetic operations such as addition and multiplication

Analyse Algorithm - Largest

This is the worst case: on an increasing sequence every comparison A[i] >= M succeeds, so the assignment M = A[i] executes on all n iterations. Eg: 3, 5, 7, 9, 11

Analyse Algorithm - Largest

This is the best case: when the largest element comes first, the comparison succeeds (and the assignment M = A[i] executes) only on the first iteration. Eg: 25, 6, 4, 9, 10

Write an algorithm for finding the sum of the elements of a given array. Analyse the algorithm.

Analyse Algorithm - Sum

Algorithm Sum(A)
  sum = 0;
  n = length(A);
  for (i = 0; i < n; ++i) {
    sum += A[i];
  }
  return sum;

Analyse Algorithm - Sum
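The slide's own analysis is not reproduced in this extract; a minimal sketch, assuming each line costs a constant as stated earlier: the two initializations cost c1, the loop test runs n + 1 times and the body n times, so T(n) = c1 + c2(n + 1) + c3·n = O(n). The extra space beyond the input array is O(1).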

Analyse Algorithm

➔ From the terms that are considered, drop all the terms that grow slowly and only keep the ones that grow fastest as n becomes larger.
➔ Simple programs can be analyzed by counting the nested loops of the program:
  ◆ A single loop over n items yields f(n) = n.
  ◆ A loop within a loop yields f(n) = n^2.
  ◆ A loop within a loop within a loop yields f(n) = n^3.
➔ Given a series of for loops that are sequential, the slowest of them determines the asymptotic behavior of the program.
➔ Two nested loops followed by a single loop is asymptotically the same as the nested loops alone, because the nested loops dominate the simple loop. (See the sketch below.)
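A small C sketch of these loop-counting rules (illustrative only; the variable names and printed counts are mine, not from the slides):

#include <stdio.h>

int main(void) {
    int n = 8;
    long single = 0, nested = 0, triple = 0;

    /* A single loop over n items: f(n) = n */
    for (int i = 0; i < n; ++i)
        ++single;

    /* A loop within a loop: f(n) = n^2 */
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            ++nested;

    /* A loop within a loop within a loop: f(n) = n^3 */
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            for (int k = 0; k < n; ++k)
                ++triple;

    /* nested loops followed by a single loop do n^2 + n steps,
       asymptotically the same as the nested loops alone */
    printf("n = %d: n -> %ld, n^2 -> %ld, n^3 -> %ld\n",
           n, single, nested, triple);
    return 0;
}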

Write an algorithm to search for an element in an array using Linear Search. Analyse the algorithm.

Analyse Algorithm - Linear Search

Algorithm search(A[], n, key) {
  for (i = 0; i < n; i++) {
    if (A[i] == key)
      return i;
  }
  return -1;
}
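A brief analysis (the slide's own is not in this extract): in the best case the key is at A[0] and one comparison suffices, O(1); in the worst case the key is absent or at A[n-1], giving n comparisons, O(n); on average, if the key is equally likely to be at each position, about (n + 1)/2 comparisons are made, which is still O(n).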

Recurrence

➔ A recurrence is a recursive description of a function: a description of a function in terms of itself.
➔ A recurrence consists of one or more base cases and one or more recursive cases.
➔ The base cases give explicit values for a small subset of the possible values of n.
➔ The recursive cases relate the function value f(n) to the function values f(k) for one or more integers k < n.
➔ Example:
  f(n) = 0             if n = 0
  f(n) = f(n − 1) + 1  for all n > 0
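Unrolling the example as a quick check (arithmetic mine): f(n) = f(n − 1) + 1 = f(n − 2) + 2 = … = f(0) + n = n, so the closed form is f(n) = n.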

Methods for Solving Recurrences

1. Substitution method
2. Iteration method
   a. Recursion-tree method
   b. Master method

Substitution Method

• Idea: Make a guess for the form of the solution and prove it by induction.
• It can be used to prove both upper bounds O() and lower bounds Ω().

The substitution method for solving recurrences consists of two steps:
1. Guess the form of the solution.
2. Use mathematical induction to find the constants in the form and show that the solution works.

The inductive hypothesis is applied to smaller values, just as recursive calls bring us closer to the base case. The substitution method is a powerful way to establish lower or upper bounds on a recurrence.

Substitution Method - Power x^n

long power(long x, long n) {
  if (n == 0) return 1;
  else return x * power(x, n-1);
}

Let T(n) = time required to solve a problem of size n.

Base case:      T(0) = time to solve a problem of size 0 = c2, a constant
Recursive case: T(n) = time to solve a problem of size n = T(n-1) + c1, for n > 0

Solve by substitution: T(n) = T(n-1) + c1. This is a recurrence relation defining the function T(n).
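A worked solution by repeated substitution (the unrolling is mine, using the constants above):

T(n) = T(n − 1) + c1
     = T(n − 2) + 2·c1
     = …
     = T(0) + n·c1
     = c2 + c1·n, which is O(n).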

Substitution Method - Power x^n

long power(long x, long n) {
  if (n == 0) return 1;
  if (n == 1) return x;
  if ((n % 2) == 0)
    return power(x*x, n/2);
  else
    return power(x*x, n/2) * x;
}

T(0) = c1
T(1) = c4
T(n) = T(n/2) + c2, for n > 1

Solve by substitution: guess T(n) = c2 log n + c4.

Assume T(n) = c2 log n + c4 is the solution to this recurrence; then prove it by induction.

Base case: T(1) = c2 log 1 + c4 = c4.

Induction hypothesis: T(m) = c2 log m + c4 for all 1 ≤ m < n.

Inductive step:
T(n) = T(n/2) + c2
     = c2 log(n/2) + c4 + c2     (by the induction hypothesis)
     = c2 (log n − log 2) + c4 + c2
     = c2 log n − c2 + c4 + c2
     = c2 log n + c4
     = O(log n).

Prove that T(n) ≤ d log n + e for some constants d and e, and determine d and e.

Base case: T(1) = c4 ≤ d log 1 + e. This will be true provided we choose e ≥ c4.

Induction hypothesis: T(m) ≤ d log m + e for all 1 ≤ m < n.

Inductive step, n even:
T(n) = T(n/2) + c2
     ≤ d log(n/2) + e + c2      (by I.H.)
     = d log n − d + e + c2
     ≤ d log n + e,             provided we choose d ≥ c2.

Inductive step, n odd:
T(n) = T((n−1)/2) + c1 + c2
     ≤ d log((n−1)/2) + e + c1 + c2    (by I.H.)
     = d log(n−1) − d + e + c1 + c2
     ≤ d log n − d + e + c1 + c2
     ≤ d log n + e,             provided we choose d ≥ c1 + c2.

Master Theorem

The master method gives us a quick way to find solutions to recurrence relations of the form

  T(n) = a·T(n/b) + h(n)

where a and b are constants, a ≥ 1 and b > 1. Conceptually,

➔ a represents how many recursive calls are made.
➔ b represents the factor by which the work is reduced in each recursive call.
➔ h(n) represents how much work is done by each call apart from the recursion, as a function of n.

Master Theorem Case 1

For recurrence relations of the form T(n) = a·T(n/b) + h(n), the master method gives a solution based on the relation between a, b, and h(n), as follows:

Case 1: h(n) is O(n^(log_b a − ε)) for some constant ε > 0, which says that h grows more slowly than the number of leaves. In this case, the total work is dominated by the work done at the leaves, so T(n) is Θ(n^(log_b a)).

Here ε > 0 is a constant that may be arbitrarily small, like 0.00000000000001, etc.

Master Theorem Case 2

Case 2: h(n) is Θ(n^(log_b a)), which says that h grows at the same rate as the number of leaves. In this case, the work is spread evenly over the levels of the recursion tree, so T(n) is Θ(n^(log_b a) · log n).

Master Theorem Case 3

Case 3: h(n) is Ω(n^(log_b a + ε)) for some constant ε > 0, which says that h grows faster than the number of leaves. For the upper bound, we also need an extra smoothness condition on h, namely a·h(n/b) ≤ c·h(n) for some c < 1 and all sufficiently large n. In this case, the total work is dominated by the work done at the root, so T(n) is Θ(h(n)).

Master Theorem - Example 1

Recurrence relation: T(n) = 8T(n/2) + cn^2, where c is some positive constant.

a = 8, b = 2, and h(n) = cn^2.

h(n) = cn^2 is O(n^(log_2 8 − ε)) = O(n^(3 − ε)) for any ε ≤ 1.

This is Case 1 of the Master Theorem. Therefore, T(n) is Θ(n^3).

Master Theorem - Example 2

T(n) = T(n/2) + cn, where c is some positive constant.

a = 1, b = 2, and h(n) = cn.

h(n) = cn is Ω(n^(log_2 1 + ε)) = Ω(n^ε) for any ε ≤ 1.

This is Case 3 of the Master Theorem. Checking the smoothness condition: a·h(n/b) = cn/2 = ½·h(n), so it holds with constant ½ < 1. Therefore, T(n) is Θ(n).

Master Theorem - Example 3

T(n) = 8T(n/4) + cn^(3/2), where c is some positive constant.

a = 8, b = 4, and h(n) = cn^(3/2).

h(n) is Θ(n^(log_4 8)) = Θ(n^(3/2)).

This is Case 2 of the Master Theorem. Therefore, T(n) is Θ(n^(3/2) log n).

Solve using Master Theorem

For a given problem, start trying the different cases one by one. If none of the 3 cases applies, the Master method cannot solve the problem.

Eg 1: T(n) = 8T(n/2) + cn^2
Eg 2: T(n) = 4T(n/2) + n
Eg 3: T(n) = 8T(n/4) + cn^(3/2)
Eg 4: T(n) = T(n/2) + 1
Eg 5: T(n) = T(n/3) + n
Eg 6: T(n) = T(n/2) + cn

Recursion Tree

➔ A recursion tree is useful for visualizing what happens when a recurrence is iterated.
➔ It diagrams the tree of recursive calls and the amount of work done at each call.
➔ Recursion trees can be useful for gaining intuition about the closed form of a recurrence, but they are not a proof.
➔ Recursion trees can be a good method of guessing.

Recursion Tree - Example 1

The recursion tree for the recurrence T(n) = 2T(n/2) + n^2 has the following form: [tree figure not reproduced in this extract]

Sum across each row of the tree to obtain the total work done at a given level. This is a geometric series, so in the limit the sum is O(n^2).
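Expanding those row sums (arithmetic mine, consistent with the recurrence): level i of the tree has 2^i nodes, each doing (n/2^i)^2 work, so level i contributes 2^i · (n/2^i)^2 = n^2 / 2^i. The total is n^2 · (1 + 1/2 + 1/4 + …) ≤ 2·n^2 = O(n^2).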

Recursion Tree - Example 2

T(n) = T(n/3) + T(2n/3) + n.

The tree here is not balanced: the longest path is the rightmost one, and its length is log_{3/2} n. Hence the guess for the closed form of this recurrence is O(n log n).
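A sketch of why this guess is plausible (reasoning mine): the subproblem sizes n/3 and 2n/3 at each split add up to n, so every level of the tree sums to at most n; with at most log_{3/2} n levels, the total work is at most n · log_{3/2} n = O(n log n).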

Recursive Procedures

The basic unit of storage for an individual procedure invocation at run time is called an activation frame. This frame provides storage space for the procedure's local variables, actual parameters, and compiler "temporary variables", including the return value.
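For example (an observation tied to the power routines earlier in this module, not stated on the slide): the linear recursion power(x, n-1) builds a chain of about n activation frames before unwinding, so it needs O(n) stack space, while the halving version power(x*x, n/2) keeps only O(log n) frames alive at once.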
