308-250B (Section 2): Introduction to Computer Science

Introduction to Algorithm Analysis

To get you started, we will design and analyse several simple algorithms for multiplying integers using only addition. Though this might sound like a trivial exercise, it is actually quite tricky to design an efficient multiplication algorithm. Such an algorithm is important because computers don't just magically know how to multiply numbers; we must tell them how to multiply using more primitive operations, such as addition, that they already know how to perform.

1 Simple Multiplication

The simplest algorithm for multiplying integers a and b is simply to add b to 0 exactly a times.

Algorithm product(a, b)
Input: integers a ≥ 0 and b ≥ 0.
Output: the product of a and b.
    r ← 0
    for i ← 1 to a do
        r ← r + b
    return r

Can you determine the number of additions that the algorithm uses to find the product of a and b? It should not be too difficult to see that it uses a additions. Can we use fewer? Notice that when a > b, product(b, a) uses b additions whereas product(a, b) uses a additions, which is more. The following algorithm makes sure that we always use min(a, b) additions:

Algorithm product(a, b)
Input: integers a ≥ 0 and b ≥ 0.
Output: the product of a and b.
    if a > b then
        t ← a
        a ← b
        b ← t
    r ← 0
    for i ← 1 to a do
        r ← r + b
    return r

This is certainly better than before. From now on we'll assume that a ≤ b.
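As a concrete illustration, here is a minimal Python sketch of the iterative algorithm. The function name and the returned addition counter are ours, added only to make the analysis easy to check; they are not part of the original pseudocode.

# Iterative multiplication using only addition.
# Returns (a * b, number of additions performed).
def product_iterative(a, b):
    if a > b:               # swap so the loop runs min(a, b) times
        a, b = b, a
    r = 0
    additions = 0
    for _ in range(a):
        r = r + b
        additions += 1
    return r, additions

# Example: 3 * 5 uses 3 additions; 5 * 3 also uses 3 after the swap.
print(product_iterative(3, 5))   # (15, 3)
print(product_iterative(5, 3))   # (15, 3)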


2 Recursive Multiplication

We still might be able to improve efficiency by redefining multiplication recursively, i.e. like this:

    a × b = 0                  if a = 0
            b + (a − 1) × b    otherwise

This definition is recursive because we use multiplication to define multiplication. Though this is circular, we'll soon see that it is not useless. Below is the pseudocode corresponding to this definition:

Algorithm product(a, b)
Input: integers b ≥ a ≥ 0.
Output: the product of a and b.
    if a = 0 then
        return 0
    else
        return b + product(a − 1, b)

Just like the definition, the algorithm product is recursive since it calls itself. The previous algorithms are not recursive; we call them iterative. To understand how this algorithm works, consider the example of multiplying 3 and 5:

1. calling product(3, 5) returns 5 + product(2, 5),
2. calling product(2, 5) returns 5 + product(1, 5),
3. calling product(1, 5) returns 5 + product(0, 5) and
4. calling product(0, 5) returns 0.

Therefore:

1. product(1, 5) returns 5 + 0 = 5,
2. product(2, 5) returns 5 + (5 + 0) = 10 and
3. product(3, 5) returns 5 + (5 + (5 + 0)) = 15.

As you can see, having the algorithm call itself is not a problem because it calls itself with a smaller a each time. Eventually, we have a = 0, in which case the algorithm does not call itself.
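The recursive definition translates almost line for line into Python. The following sketch is ours, not part of the original notes:

# Recursive multiplication: a * b = 0 if a = 0, else b + (a - 1) * b.
def product_recursive(a, b):
    if a == 0:
        return 0
    return b + product_recursive(a - 1, b)

# Tracing product_recursive(3, 5) unwinds exactly as above:
# 5 + (5 + (5 + 0)) = 15.
print(product_recursive(3, 5))   # 15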


3 Fast Multiplication

Unfortunately, this algorithm is no more efficient than the algorithms before. We have not, however, exhausted the different ways to define multiplication recursively, so we try another definition:

    a × b = 0                    if a = 0
            2b⌊a/2⌋              if a > 0 is even
            b + 2b⌊(a − 1)/2⌋    otherwise

Translating this into pseudocode we obtain the following algorithm:

Algorithm product(a, b)
Input: integers b ≥ a ≥ 0.
Output: the product of a and b.
    if a = 0 then
        return 0
    else
        h ← product(⌊a/2⌋, b)
        if a is even then
            return h + h
        else
            return b + h + h

By way of review, we have ⌊a/2⌋ = (a − 1)/2 when a is odd and ⌊a/2⌋ = a/2 when a is even. For example, we have:

    ⌊11/2⌋ = 5   ⌊9/2⌋ = 4   ⌊7/2⌋ = 3   ⌊5/2⌋ = 2   ⌊3/2⌋ = 1   ⌊1/2⌋ = 0
    ⌊10/2⌋ = 5   ⌊8/2⌋ = 4   ⌊6/2⌋ = 3   ⌊4/2⌋ = 2   ⌊2/2⌋ = 1   ⌊0/2⌋ = 0

To see how this algorithm works, consider using it to multiply 10 and 16.

1. calling product(10, 16) returns product(5, 16) + product(5, 16),
2. calling product(5, 16) returns 16 + product(2, 16) + product(2, 16),
3. calling product(2, 16) returns product(1, 16) + product(1, 16),
4. calling product(1, 16) returns 16 + product(0, 16) + product(0, 16) and
5. calling product(0, 16) returns 0.

Therefore:

1. product(1, 16) returns 16 + 0 + 0 = 16,
2. product(2, 16) returns 16 + 16 = 2 × 16,
3. product(5, 16) returns 16 + 2 × 16 + 2 × 16 = 5 × 16 and
4. product(10, 16) returns 5 × 16 + 5 × 16 = 10 × 16.
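Here is a small Python sketch of this halving scheme, ours rather than the notes' own code. The addition count is returned alongside the product only so that the six additions in the trace above are easy to verify.

# Fast multiplication by repeated halving:
#   a * b = 0            if a = 0
#         = h + h        if a > 0 is even, where h = floor(a/2) * b
#         = b + h + h    otherwise
def product_fast(a, b):
    # Returns (a * b, number of additions performed).
    if a == 0:
        return 0, 0
    h, adds = product_fast(a // 2, b)   # a // 2 plays the role of the floor
    if a % 2 == 0:
        return h + h, adds + 1          # one addition: h + h
    return b + h + h, adds + 2          # two additions: b + h + h

print(product_fast(10, 16))   # (160, 6) -- six additions, matching the trace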


All together, the algorithm used 6 additions, fewer than the 10 additions the previous algorithms would have used. One of the requirements that we stated when we started designing multiplication algorithms was that we only use addition. However, you may have noticed that the previous algorithm uses division by 2 and a test for determining whether or not an integer is even. It turns out that computers can do both of these operations very quickly.

Suppose that the binary representation of a stored by the computer is a_k a_{k−1} ... a_1 a_0 for some k ≥ 0, i.e. each a_i = 0 or 1. Thus, we have

    a = a_k·2^k + a_{k−1}·2^{k−1} + ... + a_1·2^1 + a_0·2^0

and

    a/2 = a_k·2^{k−1} + a_{k−1}·2^{k−2} + ... + a_1·2^0 + a_0·2^{−1}

Notice that each term except possibly a_0·2^{−1} is an integer. Bit a_0 is equal to 0 or 1, so a_0·2^{−1} is equal to 0 or 1/2. In other words, if we divide a by 2 the quotient will be a_k·2^{k−1} + a_{k−1}·2^{k−2} + ... + a_1·2^0 and the remainder a_0. Consequently, a is even iff a_0 = 0, and the binary representation of

    ⌊a/2⌋ = a_k·2^{k−1} + a_{k−1}·2^{k−2} + ... + a_1·2^0

is a_k a_{k−1} ... a_1. To find ⌊a/2⌋ the computer just shifts the bits storing the value of a one position to the right, except for the rightmost bit a_0, which it discards.

In the example we worked out above, with a = 10, we noticed that the algorithm used only 6 additions. It seems then that this algorithm is better than the earlier ones we designed. The question, though, is how much better is it? We can answer this question as follows:

1. product(x, y) calls
2. product(⌊x/2⌋, y) which calls
3. product(⌊⌊x/2⌋/2⌋, y) which calls
4. product(⌊⌊⌊x/2⌋/2⌋/2⌋, y) which calls
... which calls
i−1. product(1, y) which calls
i. product(0, y) which simply returns 0.

Notice that the algorithm uses at most 2(i − 1) additions to calculate the product of x and y because each call except the ith listed above accounts for either one addition if a is even or two if a is odd. Suppose that the binary representation of x is x_k x_{k−1} ... x_0 with x_k = 1. From our discussion above, we have:

1. the binary representation of x is x_k x_{k−1} ... x_0;
2. the binary representation of ⌊x/2⌋ is x_k x_{k−1} ... x_1;
3. the binary representation of ⌊⌊x/2⌋/2⌋ is x_k x_{k−1} ... x_2;
4. the binary representation of ⌊⌊⌊x/2⌋/2⌋/2⌋ is x_k x_{k−1} ... x_3;
... and
i−1. the binary representation of 1 is x_k;
i. the binary representation of 0 is 0.

Thus, i is at most k + 2, so the algorithm uses at most 2(i − 1) ≤ 2(k + 2 − 1) = 2k + 2 additions. Since the binary representation of x is x_k x_{k−1} ... x_0, we have x = x_k·2^k + x_{k−1}·2^{k−1} + ... + x_1·2^1 + x_0·2^0. Since x_k = 1, we have x ≥ x_k·2^k = 2^k. Since log_m(n) = p if and only if n = m^p, we then have log_2(x) ≥ k. Therefore, the algorithm uses at most 2k + 2 ≤ 2·log_2(x) + 2 additions.

The previous algorithms would have used x additions. Is x smaller or larger than 2·log_2(x) + 2? It turns out that x is much larger. The following table should give you an idea of how much larger it is:

    x        2·log_2(x) + 2
    8        8
    9        8.3
    10       8.6
    11       8.9
    12       9.2
    13       9.4
    14       9.6
    15       9.8
    16       10
    32       12
    64       14
    128      16
    256      18
    512      20
    1024     22
    2048     24
    4096     26
    8192     28
    16384    30
    32768    32

The last row is fairly dramatic! Rather than use 32,768 additions, the algorithm uses only 32.
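As a quick check on this discussion, here is a small Python snippet (ours, not from the notes) showing that a right shift computes ⌊a/2⌋, that the low bit tests parity, and recomputing a few rows of the table:

import math

# For a non-negative integer a:
#   a >> 1  shifts the bits one position to the right, discarding a_0,
#           so it equals floor(a / 2);
#   a & 1   reads the rightmost bit a_0, which is 0 exactly when a is even.
for a in (10, 11):
    print(a, bin(a), a >> 1, bin(a >> 1), "even" if a & 1 == 0 else "odd")

# A few rows of the comparison: x versus 2*log2(x) + 2.
for x in (8, 16, 1024, 32768):
    print(x, round(2 * math.log2(x) + 2, 1))   # 8.0, 10.0, 22.0, 32.0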


4 Running Time

When we design algorithms, what we're really interested in is how quickly the algorithm can complete its assigned task. After all, we use computers to save ourselves time, so it is extremely important that they complete tasks quickly. For example, would you continue to use a word processor to write papers if every time you pressed a key it took 20 seconds for the corresponding character to appear on the screen? Would you use the search-and-replace function if it took 2 minutes to search through your 5-page paper?

One way to compare algorithms is to implement them in some programming language and then execute them on a computer, timing each one to see which is fastest (a small sketch of this approach appears at the end of this section). There are several limitations to this method:

1. How quickly a program completes a task depends on more than just its underlying algorithm. It depends on the speed of the machine on which it is running, the operating system in which it is running, decisions the programmer made when designing the program, and so on.

2. Algorithms cannot be compared until they are implemented and tested. It would be much better to be able to predict whether or not an algorithm will be efficient before we implement it, to avoid implementing inefficient algorithms.

3. A finite set of tests does not tell the whole story. For many algorithms, the number of possible tests is too large to try them all. As a result, when testing a program, we may accidentally (or purposely) avoid tests that show that the program is very slow.

In this lecture, we compared algorithms by comparing the number of additions that each uses. This method, however, may not be useful for comparing algorithms that do not use addition. Consequently, we should count all operations used by an algorithm. Later in the course we will define a set of primitive operations, including assignment, addition and integer comparison, on which we will design all algorithms so that we can compare them.
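For concreteness, here is a rough sketch (ours) of the timing approach described above, using simplified versions of the earlier algorithms. The numbers it prints depend entirely on the machine, interpreter and chosen input, which is precisely limitation 1.

import time

# Time each implementation on the same input and compare.
def product_iterative(a, b):
    r = 0
    for _ in range(min(a, b)):   # add the larger number min(a, b) times
        r = r + max(a, b)
    return r

def product_fast(a, b):
    if a == 0:
        return 0
    h = product_fast(a >> 1, b)  # halve a by shifting
    return h + h if a & 1 == 0 else b + h + h

for f in (product_iterative, product_fast):
    start = time.perf_counter()
    f(32768, 12345)
    print(f.__name__, time.perf_counter() - start)   # machine-dependent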

