ADA algo solution manual

1. Any algorithm whose worst-case running time equals its best-case running time is least dependent on the initial order of its input (or, in general, least dependent on anything at all, since we are assuming the worst that could ever happen anyway). So among comparison sorts, the running time of merge sort or heap sort would be least dependent (or, as I said, not at all dependent) on the initial input ordering.

2. It needs O(n) time and O(1) space: just a minor modification of the basic algorithm for finding the maximum element of an array.

    max = A[0]
    count = 1
    for (i = 1; i < N; i++) {
        if (A[i] == max)
            ++count;
        else if (A[i] > max) {
            max = A[i];
            count = 1;    // found a new max element, hence reset the count
        }
    }
    print "Max value = " max
    print "Number of occurrences = " count

Time = O(n), Space = O(1).

3. Time complexity of the code:

    for (int i = 0; i <= n; i++)
        for (j = 0; j <= log(i); j++)
            printf("puneet");

printf executes log(1) + log(2) + log(3) + ... + log(n) = log(n!) times, so the complexity is O(log(n!)).

One objection: you also have to take into account the cost of computing log(i) here, because it is calculated from a series such as ln(1+x) = x - x^2/2 + x^3/3 - x^4/4 + ... This is obviously not constant time, and it depends on the accuracy to which the math library calculates it; so the complexity may not simply be O(log(n!)).

Counting the inner loop tests as well, printf executes (log(1)+1) + (log(2)+1) + (log(3)+1) + ... + (log(n)+1) = n + log(n!) times. The complexity is still O(log(n!)), because log(n!) is greater than n for larger values of n: e.g. if the base of the log is e, then log(n!) > n for n >= 6; if the base is 10, then log(n!) > n for n >= 25.

4. Time complexity of:

    for (i = 0; i <= n*n; i++)
        for (j = i; j <= n/2; j++)
            printf("x");

It may look like O(n^3), but there is no hidden landmine in it. Break the outer loop into two parts:
1) 0 <= i <= n/2: the usual nested loop, O(n^2) work in total;
2) n/2 < i <= n*n: the inner loop does no work, but the outer loop still runs O(n^2) times.
So both parts are O(n^2), and the whole thing is O(n^2).

5. Question: the input is a set S containing n real numbers sorted in increasing order, and a real number x. We have to develop an algorithm to decide whether there are two elements of S whose sum is exactly x.

Using linear search:
1. Move through the array and stop at the first element >= x.
2. Subtract that number from x and search for the result in the array to its left, starting from the beginning.
3. Move backwards from the stopping point and repeat the same for each earlier element.


Say the array is 1, 2, 5, 6, 7, 8, 19, 20, 34, 36, 45, 76, 86 and we have to search for the sum x = 38:
1. Start from 1 and stop at 36.
2. Now search for 38 - 36 = 2 in the left side of the array.
3. If 2 were not there, move backwards from 36 to 34 and repeat step 2.

Or keep two pointers, one at the beginning and one at the end: add the elements pointed to, move the last pointer if the sum > x, otherwise move the first pointer.

6. Question: decimal-to-binary conversion without a loop. Use recursion:

    void binary(unsigned int n)
    {
        if (n >= 2)
            binary(n / 2);
        printf("%u", n & 1);
    }

Question 7: suppose we are given 3 sorted sequences a, b, c of lengths n, m, p. We have to merge them into a sorted sequence d of length n + m + p. How can we generalize this for, say, n sequences, and what is the complexity?

Answer: a kind of merge sort. Take two sorted arrays (sizes m and n) and merge them into one sorted array (size m + n). Repeat this process until you get a single sorted array. For n sequences the cost is (n - 1) times the cost of a single merge between two lists.

Another approach: a better way for n sequences is to build a heap of n elements containing the minimum element of each sequence. Whenever an element is removed from the heap, add the next element from the sequence it came from. If each sequence is very long, this method is very efficient, as each element is read into memory only once (external-memory algorithms). The time complexity is O(nm log n) when there are n sequences of m elements each.

Question 8: suppose we have to draw a set of closed curves c1, c2, ..., cn in the plane according to the rules below:
1. No curve intersects itself.
2. c1 is drawn arbitrarily.
3. For every i >= 2, ci intersects each of c1, c2, ..., c(i-1) in exactly two distinct points.
4. No three curves meet at a single point.
Let r(n) denote the number of regions in the plane defined by constructing n closed curves by the above rules. How can we derive a general recurrence relation for r(n) in terms of r(n-1), and how can we compute r(1) and r(20)?

Answer:

    r(n) = r(n-1) + 2(n-1)
    r(1) = 2    (a single closed curve has an inside and an outside)

Basically, when a new curve is added it crosses each already existing curve (n-1 in total) in two points, so the new curve is cut into 2(n-1) arcs, and each arc splits an existing region in two. This gives 2(n-1) additional regions. Solving the above recurrence gives r(n) = n(n-1) + 2, which gives r(20) = 382.

Question 9: let x and y be strings of symbols from some alphabet. Consider the operations of deleting a symbol from x, inserting a symbol into x, and replacing a symbol of x by another symbol. How can we design an algorithm to find the minimum number of such operations needed to transform x into y?

Answer: the edit distance between two strings of characters is the number of operations required to transform one of them into the other. There are several different algorithms and metrics in this family:

Hamming distance
Levenshtein distance
Damerau-Levenshtein distance
Jaro-Winkler distance
Wagner-Fischer edit distance
Ukkonen's algorithm
Hirschberg's algorithm

The Hamming distance between two strings of equal length is the number of positions at which the corresponding symbols differ. Put another way, it measures the minimum number of substitutions required to change one into the other, or the number of errors that transformed one string into the other. For example, the Hamming distance between 1011101 and 1001001 is 2.

Algorithm: the following C++ function computes the Hamming distance of two integers (considered as binary values, that is, as sequences of bits). The running time of this procedure is proportional to the Hamming distance rather than to the number of bits in the inputs.

    int hamdist(int x, int y)
    {
        int dist = 0, val = x ^ y;
        while (val) {
            ++dist;
            val &= val - 1;   // clear the lowest set bit
        }
        return dist;
    }

Question 10: how many comparisons are required for finding both the largest and the smallest element in a set of n elements, and what could be the algorithm?

Answer: a naive scan comparing every element against both the current min and the current max takes about 2n comparisons, but we can do better by processing the elements in pairs: compare the two elements of each pair with each other, then the smaller against the current min and the larger against the current max, i.e. 3 comparisons per pair instead of 4.

    MIN-MAX(A)                    // assume n is odd
        min = max = A[1]
        for i = 2 to length[A] step 2
            if A[i] < A[i+1]
                then if min > A[i]   then min = A[i]
                     if max < A[i+1] then max = A[i+1]
                else if min > A[i+1] then min = A[i+1]
                     if max < A[i]   then max = A[i]

In total this uses at most ceil(3n/2) - 2 comparisons.

Question 11: is there any way to evaluate the sum over 1 <= i < j <= n of xi*xj = x1*x2 + x1*x3 + ... + x(n-1)*xn with fewer than n-1 multiplications and 2n-4 additions?

Answer: for 1 <= i < j <= n the expression is x1(x2 + ... + xn) + x2(x3 + ... + xn) + ... + x(n-1)*xn, which requires n-1 multiplications and 2n-4 additions according to the following pseudocode:

    temp_sum = xn
    result = x(n-1) * xn
    for i = (n-1) downto 2
        temp_sum += x(i)
        result += temp_sum * x(i-1)
    end_for

As far as reducing the number of operations further is concerned, I do not know. I tried rewriting the original expression as 0.5 * ((x1 + x2 + ... + xn)^2 - (x1^2 + x2^2 + ... + xn^2)), but it doesn't help; so try on your own.

Question 12: to find the kth smallest element in a set of n elements, how many comparisons are required, and what is the algorithm?

Answer: I am not sure about the exact count, but this is the age-old strategy, a modification of quicksort. Choose a pivot and apply the partition step of the algorithm. If the position of the pivot is k, you are done. If the position of the pivot in the array is greater than k, repeat the algorithm on the left partition; if it is less than k, repeat the algorithm on the right partition.

Let me take a shot at the number of comparisons. Assume the numbers are uniformly distributed in the array. The first iteration takes n comparisons, and we can assume both partitions are of roughly the same size. Since we throw out one partition, the number of elements to be tackled in the next iteration is n/2, and that is the maximum number of comparisons there. So the total is less than n + n/2 + n/4 + ... (log n terms) <= n(1 + 1/2 + 1/4 + ... up to infinity) = 2n; on average we should be done in fewer than 2n comparisons.

I overlooked the worst case: if the array is already sorted, I always choose the pivot as the last element, and you ask me for the smallest element, the algorithm takes O(n^2) operations, so this strategy may not be good in that situation. But I think this problem can be avoided by choosing the pivot randomly.

Another view: use the median as pivot; worst case O(n).

Question 13: given three arrays of n real numbers each (positive or negative), we have to determine whether there are three numbers, one from each array, whose sum is 0. What is the complexity?

Answer: a naive algorithm takes one element from each of the arrays, with three nested loops, so O(n^3) time. But you can try to do better:
1) Sort each of the arrays in O(n log n).
2) Merge all of them into one 3n-element array; this can be done in O(n) time.
3) Then maintain 2 arrays, one with the negative and one with the nonnegative elements.
4) For each of the positive elements, binary search the negative array for its negated counterpart (and since there can be more than one such number, check the neighbouring values too); this step takes O(n log n), i.e. O(log n) for each of the n positive elements.

Another view: in O(n^2 log n) we can sort the 3 arrays, then

    for each element x in array 1
        for each element y in array 2
            binary search for -(x + y) in array 3

There is a flaw in step (4) of the first algorithm: it only matches one positive with one negative element, which does not answer the three-element question. Think it through properly if you can. This can't be done in better than O(n^2) time (and O(1) space).

Question 14: let M be an m x n matrix of 0s and 1s. How can we find a smallest s x n submatrix S of M such that if M(i)(j) = 1 then S(k)(j) = 1 for some 1 <= k <= s?
Answer: this is the set covering problem: find a smallest subfamily of the given sets such that every element of the universe is contained in at least one of them. It was one of Karp's 21 NP-complete problems, shown to be NP-complete in 1972.

More formally, given a universe U and a family S of subsets of U, a cover is a subfamily of sets whose union is U. In the set covering decision problem, the input is a pair (U, S) and an integer k; the question is whether there is a set covering of size k or less. In the set covering optimization problem, the input is a pair (U, S), and the task is to find a set covering that uses the fewest sets. The decision version of set covering is NP-complete, and the optimization version of set cover is NP-hard.

Set covering is equivalent to the hitting set problem. It is easy to see this by observing that an instance of set covering can be viewed as an arbitrary bipartite graph, with sets represented by vertices on the left, the universe represented by vertices on the right, and edges representing the inclusion of elements in sets. The task is then to find a minimum-cardinality subset of left vertices which covers all of the right vertices. In the hitting set problem, the objective is to cover the left vertices using a minimum subset of the right vertices. Converting from one problem to the other is therefore achieved by interchanging the two sets of vertices.

The set covering problem can be seen as a finite version of the notion of compactness in topology, where the elements of certain infinite families of sets can be covered by choosing only finitely many of them.

Question 15: suppose we are given as input points in the plane a, b, c, d. We wish to place four circles centered at each of these points so that no two circles overlap and the average radius is as large as possible. How can we design an algorithm for this, and what is its complexity?

Answer:

This is an optimization problem (a linear program):

    maximize    r1 + r2 + r3 + ... + rn
    subject to  r1 + r2 <= d12
                r2 + r3 <= d23
                r1 + r3 <= d13
                ... and so on, one constraint per pair of points

Use the simplex algorithm to get the optimal solution. I don't know if this can be done using some sort of dynamic programming or greedy algorithm approach.



Question 16: you have an array of 2n+1 elements, in which each element but one is present twice. You have to find the remaining single element.

Answer: sort the numbers using some sorting algorithm that takes O(n log n), then use something like this:

    for (i = 0; i < 2*n; i += 2) {
        if (a[i] == a[i+1])
            continue;
        else
            return a[i];
    }
    return a[2*n];

Or try this: XOR all the elements together, so that the duplicates cancel.

    for (i = 0, xor = 0; i < 2*n + 1; ++i)
        xor ^= a[i];
    return xor;

Question 17: solve the recurrence T(n) = T(n/2) + O(sqrt(n)) for n > 3, with T(n) = 1 otherwise.

Answer:

Well, the solution to this recurrence is Theta(sqrt(n)); I just guessed it and proved it using the substitution method:

    T(n) <= c1*sqrt(n)/sqrt(2) + c2*sqrt(n)
         =  (c1/sqrt(2) + c2)*sqrt(n)
         <= c*sqrt(n)

If we choose c1 = 1, c2 = 1 and c = 10, the above inequality holds. End of proof. The lower bound can be proved similarly.

Another view: the lower bound cannot be proved, so Theta(sqrt(n)) is incorrect, because the problem statement has big-O, not Theta. Counterexample: T(n) = T(n/2) + 1, whose lower bound is Theta(log n).
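A cleaner way to see the upper bound is to unroll the recurrence: the per-level costs form a geometric series with ratio 1/sqrt(2) < 1, so the top level dominates.

```latex
T(n) \le c\sqrt{n} + c\sqrt{n/2} + c\sqrt{n/4} + \cdots
      =  c\sqrt{n}\sum_{i \ge 0} \left(\tfrac{1}{\sqrt{2}}\right)^{i}
      \le \frac{c}{1 - 1/\sqrt{2}}\,\sqrt{n}
      =  O(\sqrt{n}).
```

This confirms the O(sqrt(n)) upper bound while leaving the lower bound open, consistent with the counterexample above.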

Question 18: evaluate big-O, Omega and Theta for:

    int try(int n)
    {
        if (n < 3)
            return 1;
        return try(n-3) + try(n-1) - try(n-2);
    }

    void main(int m)
    {
        int n;
        for (n = 0; n < m; n++)
            printf("\n%d", try(n));
    }

Answer: the recurrence relation for the above program would be

    T(n) = T(n-1) + T(n-2) + T(n-3) + O(1)

and the loop calls try m times. A first guess of O(m*n^2) is not correct, though: the recurrence grows exponentially (like the tribonacci numbers), so a single call try(n) already costs Theta(c^n) for a constant c ≈ 1.84, and the loop's total cost is dominated by its last call, Theta(c^m).


Question 19: suppose we are given a multiplication of two polynomials (a1*x^i1 + a2*x^i2 + ...) and (b1*x^j1 + b2*x^j2 + ...), in which the coefficients of the answer (c1*x^(i1+j1) + ...) are generated in order as the input coefficients are being multiplied. How can we design an algorithm for this that is most efficient in time and space?

Answer: use divide and conquer (Karatsuba-style splitting does it in O(n^(log2 3))), or the FFT, which is the fastest known method. Polynomial multiplication is a convolution of the series of coefficients, and with the FFT this is O(n log n).

Question 20: suppose you are given n men and n women and two n x n arrays p and q such that p(i,j) is the preference of man i for woman j and q(i,j) is the preference of woman i for man j. How can we design an algorithm which finds a pairing of men and women such that the sum of products of preferences is maximized?

Answer: let G1 = (V1, E1) and G2 = (V2, E2) be the directed, weighted graphs given by the two matrices p and q.
[1] Convert them to an undirected graph: if u -> v has weight w1 and v -> u has weight w2, then the undirected edge (u, v) should have weight w(u, v) = w1 * w2. Note: if (u, v) does not belong to E1 (or E2), its directed weight is taken as zero, i.e. w1(u, v) = 0.
[2] Now run an MST-style greedy with a little difference: you choose the safe edge with the highest weight.

Question: quicksort with two partitioning elements.

Answer:

A very naive way: choose 2 random elements x and y as partitioning elements, making sure that x != y. Use the value x and apply the standard partitioning algorithm; suppose that after running it, x is placed at index i. Now either y < x or y > x, so you can apply the same standard partitioning algorithm on the subinterval [0, i-1] (if y < x) or [i+1, N] (if y > x). You now get 3 subsets instead of 2; apply quicksort recursively on those 3 subsets too. I am not sure whether this will be more or less efficient than the actual quicksort. I will try to program this in my leisure time and will let you know; in the meantime, I am sure other algorithm wizards can come up with better versions.

Analysis: by the way, I don't think my method achieves any improvement. For the average case,

    T(n) = 3n/2 + 3T(n/3)

since in each pass we spend O(n) time on the partition using x and O(n/2) on the partition using y. This gives roughly T(n) = 1.5 n log n + n. So we should think of developing a partitioning algorithm which positions both x and y in O(n) time in one go.

I don't think there is much use in doing quicksort with 2 partitioning elements. Consider this case: suppose the size of the array to be sorted is 1 million (1,000,000). You choose x and y as partitioning elements, and after partitioning it is found that there are, say, fewer than 5 elements between x and y. Then the extra pass only worsens the performance compared to standard quicksort; and remember, quicksort is the algorithm used for sorting very large data.


Question: suppose we are given a sequence of n songs, where the ith song is t_i minutes long. We want to place all of the songs on an ordered series of CDs (cd1, cd2, ..., cdk), where each CD can hold m minutes. Furthermore, the songs must be recorded in the given order song1, song2, ..., songn, and no song may be split across CDs. The goal is to determine how to place them on the CDs so as to minimize the number of CDs needed. How can we design an efficient algorithm for this?

Answer: this problem reminds me of the 0/1 knapsack problem.

My question: let T be an array that contains n integers. An integer is a majority element in T if it appears strictly more than n/2 times in T. Give an algorithm that can decide whether an array T contains a majority element and, if so, find it. Your algorithm must run in linear time in the worst case.

Answer:


Here it is: a modification of quicksort. Select a pivot randomly and run partition; find the position of the pivot, say p. If the left partition has size greater than n/2, then the majority element (if any) is likely in that part of the array, so repeat the algorithm on it; if the left partition has length less than n/2, the majority is likely in the right partition, so continue the algorithm on that one. This goes on until the partition size remains stationary or drops below n/2: in the first case you have the majority element, in the second you don't. Since in each case we are working on only one partition, this should be done in O(n) time.

Or use a stack. If the stack is empty,

push the element; if the stack is not empty, compare the current element with the stack top: if both are the same, push, else pop. Finally, check whether the element left on the stack, if any, really is a majority element.

By pruning you can also do it in linear time using only comparisons, with another array:

    if (median == min || median == max)
        return median;
    else
        return "NO";

About the quicksort approach, in the worst case this could happen: we find the pivot and it turns out to be the last element, e.g. 2222579. The next time you pick a pivot, in the worst case it turns out to be 7 (in this case), and we continue this process. This will be an O(n^2) algorithm; correct me if I'm wrong.

Finding the median involves sorting, which is O(n log n) for comparison-sort algorithms, unless you do radix or bucket sort (for small n), or use a linear-time selection algorithm: see http://en.wikipedia.org/wiki/Selection_algorithm

If there is no space constraint then we can use a hash table to find it: traverse each element in the array; if the element is not present in the hash table, place it there with count value 1, else increment the count value of the corresponding element. At the end, check the count values to find the maximum.

Question: how do we generalize Huffman's algorithm to ternary codewords, i.e. codewords using the 3 symbols 0, 1, 2?

Answer:


What's the problem in this? It is an easy extension of Huffman's algorithm: simply apply the algorithm, but at each step make a new node by removing the three lowest-frequency trees from the priority queue and inserting a merged tree whose frequency is their sum. (If necessary, first pad with dummy symbols of frequency 0 so that the number of symbols is odd and every merge combines exactly three trees.) You will get a skewed tree if the tree frequencies satisfy a triangle-inequality-like law.

Question: we are given numbers in a table, and all negative values must precede all nonnegative values. The items need not be sorted completely, just separated between negative and nonnegative. We have to design an algorithm for this with the constraint that the solution should require a minimum number of comparisons.

Answer: I guess it can be done in O(n), the way we do partitioning in quicksort; the only difference is that instead of picking an element and partitioning around it, we partition with respect to 0.

