SPARKS@ELECTROMANIA 2K9 

IMPLEMENTATION OF AN ANALOG-TO-INFORMATION CONVERTER

1. Harishchandra Dubey (ECE), 2. Prateesh Raj (EE)
B.Tech ECE / B.Tech EE, Motilal Nehru National Institute of Technology
E-mail: [email protected], [email protected]

Abstract. In this paper I implement analog-to-information conversion; by "information" I mean a representation that captures the details of the input analog signal, in most cases an image or another analog signal. The conversion is based on the theory of compressed sensing (also known as compressive sampling). Sampling converts a continuous-time signal into a sequence of numbers (samples) that can be processed and stored in digital devices such as buffers or registers; here a buffer is used to store the sampled signal. It is more appropriate to sample signals by treating them as a union of linear subspaces rather than as a single linear space, because a union of subspaces is a better mathematical model for analog signals than a single linear vector space. Recent developments in compressive sensing and in mathematics, including convex programming, the uniform uncertainty principle, and linear optimization theory, inspired this project. Compressive sensing allows sampling at a rate comparable to the signal's information rate, unlike traditional sensing based on the Nyquist theorem. The stable-sampling condition is related to the important concept of the restricted isometry property (RIP), which helps in selecting the samples that contain the most information about the input signal. First, a Fourier sampling algorithm is applied to the sampled signal; for storage of the sampled signal we use a buffer (a unity-gain op-amp in our case). The transformed signal is then analysed by appropriate algorithms, in our case based on convex programming and the uniform uncertainty principle. Up to this stage we have collected only those samples that contain the most information about the signal. If required, the signal can then be reconstructed using the 890 algorithm in SPARCO, a toolbox of MATLAB; otherwise the required information can be taken directly from the collected samples. With the power of MATLAB, the language of mathematical and scientific computing, we have thus implemented a converter that not only eliminates the problem posed by the Nyquist sampling theorem but can also serve multiple purposes.

1. INTRODUCTION

In real-life situations we mostly deal with analog signals, but for a number of reasons the processing is done on digital systems. At the same time, after the required manipulation (the so-called processing) of the digital signal, we have to convert it back into information (such as an image) or into another analog signal so that it can be used by another device. Under the Nyquist sampling theorem it is very difficult to sample high-bandwidth signals, especially those at radio frequencies, because of the high sampling rate required. But this is not the end of the story: the modern theory of compressive sensing/sampling can be used to eliminate the problems posed by the Nyquist sampling theorem while achieving a satisfactory level of accuracy in sparse recovery of signals. Following this approach, I have implemented the conversion in a different way: instead of using all the samples, we use only the few that contain the most information, and the selection of these samples is based on the mathematical theory described below.

2. PROBLEM STATEMENT

Many problems in radar and communication signal processing involve radio-frequency (RF) signals, typically 30–300 GHz, of very high bandwidth. This presents a serious challenge to systems that attempt to use a high-rate analog-to-digital converter (ADC) to sample these signals, as prescribed by the Shannon/Nyquist sampling theorem, according to which the highest frequency that can be accurately represented is less than one half of the sampling rate.

[Figure: an undersampled waveform. The dashed vertical lines are sample intervals, and the dots are the crossing points, i.e. the actual samples taken by the conversion process. Because the sampling rate here is below the Nyquist rate, the problem is seen quite readily when the waveform is reconstructed.]

The power, stability, and low cost of digital signal processing (DSP) have pushed the ADC increasingly close to the front end of many important sensing, imaging, and communication systems. Unfortunately, many systems, especially those operating in the RF bands, severely stress current ADC technologies. For example, some important radar and communications applications would be best served by an ADC sampling at over 5 GSample/s with a resolution of over 20 bits, a combination that greatly exceeds current capabilities; it could be decades before ADCs based on current technology are fast and precise enough for these applications. And even after better ADCs become available, the deluge of data will swamp back-end DSP algorithms. For example, sampling a 1 GHz band at 2 GSample/s and 16 bits per sample generates data at a rate of 4 GB/s, enough to fill a modern hard disk in roughly one minute. In a typical application only a tiny fraction of this information is actually relevant: the wideband signals in many RF applications have a large bandwidth but a small "information rate".

Fortunately, recent developments in mathematics and signal processing have uncovered a promising approach to the ADC bottleneck that enables sensing at a rate comparable to the signal's information rate. A new field, known as compressive sensing (CS), establishes mathematically that a relatively small number of non-adaptive, linear measurements can harvest all of the information necessary to faithfully reconstruct sparse or compressible signals. Traditional sampling theories consider the problem of reconstructing an unknown signal x from a series of samples, and a prevalent assumption that often guarantees recovery from the given measurements is that x lies in a known subspace. Recently there has been growing interest in nonlinear but structured signal models in which x lies in a union of subspaces. In this project I develop a general framework for efficient recovery of such signals from a given set of samples. More specifically, we treat the case in which x lies in a sum of k subspaces chosen from a larger set of m possibilities; the samples are modelled as inner products with an arbitrary set of sampling functions. To derive an efficient recovery algorithm, we show that the problem can be formulated as that of recovering a block-sparse vector whose non-zero elements appear in fixed blocks. Our main result is an equivalence condition under which the proposed convex algorithm, together with the uniform uncertainty principle, is guaranteed to recover the original signal. The result relies on the notion of the block restricted isometry property (RIP), a generalization of the standard RIP used extensively in the context of compressed sensing. Based on the block RIP we also prove stability of our approach in the presence of noise and modeling errors. Adapting our results to this context leads to new multiple-measurement-vector (MMV) recovery methods, as well as equivalence conditions under which the entire set can be determined efficiently.
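The failure mode that the Nyquist condition guards against, described at the start of this section, is easy to reproduce numerically. In the hedged MATLAB sketch below (all values are illustrative, not taken from the paper), a 90 Hz tone sampled at only 100 Sa/s produces exactly the same samples as a 10 Hz tone, so the two are indistinguishable after conversion:

fs   = 100;                    % sampling rate in samples/s
t    = (0:1/fs:1-1/fs)';       % one second of sample instants
f0   = 90;                     % tone frequency, well above fs/2 = 50 Hz
x    = cos(2*pi*f0*t);         % samples of the 90 Hz tone
xali = cos(2*pi*(fs-f0)*t);    % samples of its 10 Hz alias
fprintf('max |x - xali| = %g\n', max(abs(x - xali)));   % ~0: identical samples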


3. PROPOSED SOLUTION

3.1. OUTLINE OF SOLUTION

Our analog-to-information converter (AIC) is inspired by the recent theory of compressive sensing (CS), which states that a discrete signal having a sparse representation in some domain can be recovered from a small number of linear projections of that signal. We generalize the CS theory to continuous-time sparse signals, explain our proposed AIC system in the CS context, and discuss practical issues regarding implementation. Analog signals are sampled by treating them as a union of linear subspaces rather than a single space; in most practical applications a union of subspaces is found to be a better model for the signal of interest. The signal is transformed using the fast Fourier transform and stored in a buffer. Using convex programming and the uniform uncertainty principle we keep the sparse sampled signals that collect the most information. Finally, using the 890 algorithm in the SPARCO toolbox of MATLAB, we reconstruct the signal as information in some desired format.

3.2. "UNION OF FINITE-DIMENSIONAL LINEAR SUBSPACES" MODEL FOR SIGNALS OF INTEREST

A. Subspace Sampling

Traditional sampling theory deals with the problem of recovering an unknown signal $x \in \mathcal{H}$ from a set of $n$ samples $y_i = f_i(x)$, where $f_i$ is some function of $x$. The signal can be a function of time, $x = x(t)$, or can represent a finite-length vector. The most common type of sampling is the linear sampling we focus on here,

$$y_i = \langle s_i, x \rangle, \quad 1 \le i \le n, \qquad (1)$$

for a set of sampling functions $\{s_i\}$, where $\langle \cdot, \cdot \rangle$ denotes the standard inner product on $\mathcal{H}$. For example, if $\mathcal{H} = L_2$, the space of real finite-energy signals, then

$$\langle s, x \rangle = \int_{-\infty}^{\infty} s(t)\, x(t)\, dt, \qquad (2)$$

while if $\mathcal{H} = \mathbb{R}^N$ the inner product is the usual one,

$$\langle s, x \rangle = \sum_{i=1}^{N} s(i)\, x(i). \qquad (3)$$

Non-linear sampling also exists, but our focus will be on the linear case only. When $\mathcal{H} = \mathbb{R}^N$, the unknown $x$ as well as the sampling functions $s_i$ are vectors in $\mathbb{R}^N$; therefore the samples can be written conveniently in matrix form as $y = S^T x$, where $S$ is the matrix whose columns are the $s_i$. In the more general case, in which $\mathcal{H} = L_2$ or any other abstract Hilbert space, we can use set-transformation notation in order to conveniently represent the samples. A set transformation $S : \mathbb{R}^n \to \mathcal{H}$ corresponding to sampling vectors $\{s_i \in \mathcal{H},\ 1 \le i \le n\}$ is defined by

$$Sc = \sum_{i=1}^{n} c(i)\, s_i \qquad (4)$$

for all $c \in \mathbb{R}^n$. From the definition of the adjoint, if $c = S^* x$ then $c(i) = \langle s_i, x \rangle$. Note that when $\mathcal{H} = \mathbb{R}^N$, $S$ is an ordinary matrix and $S^* = S^T$. Using this notation, we can always express the samples as

$$y = S^* x, \qquad (5)$$

where $S$ is a set transformation for arbitrary $\mathcal{H}$ and an appropriate matrix when $\mathcal{H} = \mathbb{R}^N$.
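When $\mathcal{H} = \mathbb{R}^N$, the abstract sampling model (5) reduces to a single matrix product, which the following minimal MATLAB sketch makes concrete (the sizes are arbitrary choices, not values from the paper):

N = 64; n = 16;
S = randn(N, n);               % sampling vectors s_i stored as the columns of S
x = randn(N, 1);               % the unknown signal
y = S' * x;                    % the n samples: y(i) = <s_i, x>, as in (5)
y2 = zeros(n, 1);              % the same samples, computed entry by entry
for i = 1:n
    y2(i) = S(:,i)' * x;       % inner product with the i-th sampling vector
end
fprintf('difference = %g\n', norm(y - y2));   % ~0: both forms agree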


Our goal is to recover $x$ from the samples $y \in \mathbb{R}^n$. If the vectors $s_i$ do not span the entire space $\mathcal{H}$, then there are many possible signals $x$ consistent with $y$. More specifically, if we denote by $\mathcal{S}$ the sampling space spanned by the vectors $s_i$, then clearly $S^* v = 0$ for any $v \in \mathcal{S}^\perp$. Therefore, if $\mathcal{S}^\perp$ is not the trivial space, adding such a vector $v$ to any solution $x$ of (5) will result in the same samples $y$. However, by exploiting prior knowledge about $x$, uniqueness can be guaranteed in many cases. A prior very often assumed is that $x$ lies in a given subspace $\mathcal{A}$ of $\mathcal{H}$. If $\mathcal{A}$ and $\mathcal{S}$ have the same finite dimension, and $\mathcal{S}^\perp$ and $\mathcal{A}$ intersect only at the zero vector, then $x$ can be perfectly recovered from the samples $y$.

B. Union of Subspaces

When subspace information is available, perfect reconstruction can often be guaranteed; furthermore, recovery can be implemented by a simple linear transformation of the given samples (5). However, there are many practical scenarios in which the prior information we are given about $x$ is not necessarily in the form of a subspace. Here we focus our attention on the setting where $x$ lies in a union of subspaces

$$\mathcal{U} = \bigcup_{i} \mathcal{V}_i, \qquad (6)$$

where each subspace $\mathcal{V}_i$ has the additional structure

$$\mathcal{V}_i = \bigoplus_{|j| = k} \mathcal{A}_j, \qquad (7)$$

where $\{\mathcal{A}_j,\ 1 \le j \le m\}$ is a given set of disjoint subspaces and $|j| = k$ denotes a sum over $k$ indices. Thus, each subspace $\mathcal{V}_i$ corresponds to a different choice of $k$ subspaces $\mathcal{A}_j$ that comprise the sum. We assume throughout the paper that $m$ and the dimensions $d_j = \dim(\mathcal{A}_j)$ of the subspaces $\mathcal{A}_j$ are finite. Given $n$ samples

$$y = S^* x \qquad (8)$$

and the knowledge that $x$ lies in exactly one of the subspaces $\mathcal{V}_i$, we would like to recover the unknown signal $x$. In this setting there are $\binom{m}{k}$ possible subspaces comprising the union. An alternative interpretation of our model is as follows: given an observation vector $y$, we seek a signal $x$ for which $y = S^* x$ and which, in addition, can be written as

$$x = \sum_{i=1}^{k} x_i, \qquad (9)$$

where each $x_i$ lies in $\mathcal{A}_j$ for some index $j$. Thus $x$ belongs to one of the $\mathcal{V}_i$, but we do not know a priori to which one. Note that the set $\mathcal{U}$ is no longer a subspace. Indeed, if $\mathcal{V}_i$ is, for example, a one-dimensional space spanned by the vector $v_i$, then $\mathcal{U}$ contains vectors of the form $\alpha v_i$ for some $i$, but does not include their linear combinations. Our goal is to recover a vector $x$ lying in a union of subspaces from a given set of samples. In principle, if we knew which subspace $x$ belonged to, then reconstruction could be obtained using standard sampling results. However, here the problem is more involved, because conceptually we first need to identify the correct subspace, and only then can we recover the signal within that space. Previous work on sampling over a union focused on invertibility and stability results, with some generalisations that are useful for us. To achieve our goal, we limit our attention to the subclass (7) of (6), for which stable recovery algorithms can be developed and analyzed. A special case is the standard CS problem, in which $x$ is a vector of length $N$ that has a sparse representation in a given basis defined by an invertible matrix $W$: $x = Wc$, where $c$ is a sparse vector with at most $k$ non-zero elements. This fits our framework by choosing $\mathcal{A}_i$ as the space spanned by the $i$th column of $W$. In this setting $m = N$, and there are $\binom{N}{k}$ subspaces comprising the union.

C. Problem Formulation and Main Results

Given $k$ and the subspaces $\mathcal{A}_i$, we would like to address the following questions:
1) What are the conditions on the sampling vectors $s_i$, $1 \le i \le n$, that guarantee that the sampling is invertible and stable?
2) How can we recover the unique $x$ (regardless of computational complexity)?

3) How can we recover the unique $x$ in an efficient and stable manner?

Previously, no concrete methods were proposed in order to recover $x$. Here we provide efficient convex algorithms that recover $x$ in a stable way for arbitrary $k$, under appropriate conditions on the sampling functions $s_i$ and the spaces $\mathcal{A}_i$. My results are based on an equivalence between the union-of-subspaces problem under assumption (7) and that of recovering block-sparse vectors. This allows us to recover $x$ from the given samples by first treating the problem of recovering a block $k$-sparse vector $c$ from a given set of measurements; this relationship is established in the next section. In the remainder of the paper we therefore focus on the block $k$-sparse model and develop our results in that context. In particular, we introduce a block-RIP condition that ensures uniqueness and stability of our sampling problem. We then suggest an efficient convex optimization problem that approximates an unknown block-sparse vector $c$, and based on the block RIP we prove that $c$ can be recovered exactly, in a stable way, using the proposed optimization program. Furthermore, in the presence of noise and modeling errors, this algorithm can approximate the best block-$k$-sparse solution.

D. UNIQUENESS AND STABILITY

In this section we study the uniqueness and stability of our sampling method. These properties are intimately related to the RIP, which we generalize here to the block-sparse setting. The first question we address is uniqueness, namely the conditions under which a block-sparse vector $c$ is uniquely determined by the measurement vector $y = Dc$.

Proposition 1: There is a unique block-$k$-sparse vector $c$ consistent with the measurements $y = Dc$ if and only if $Dc \ne 0$ for every $c \ne 0$ that is block $2k$-sparse.
Proof: The proof follows from [22, Proposition 4].

We next address the issue of stability. A sampling operator is stable for a set $T$ if and only if there exist constants $\alpha > 0$, $\beta < \infty$ such that

$$\alpha\, \|x_1 - x_2\|_{\mathcal{H}}^2 \;\le\; \|S^* x_1 - S^* x_2\|_2^2 \;\le\; \beta\, \|x_1 - x_2\|_{\mathcal{H}}^2 \qquad (18)$$

for every $x_1, x_2 \in T$. The ratio $\kappa = \beta/\alpha$ provides a measure of the stability of the sampling operator; the operator is maximally stable when $\kappa = 1$. In our setting $S$ is replaced by $D$, and the set $T$ contains block-$k$-sparse vectors. The following proposition follows immediately from (18) by noting that, given two block-$k$-sparse vectors $c_1, c_2$, their difference $c_1 - c_2$ is block $2k$-sparse.

Proposition 2: The measurement matrix $D$ is stable for every block-$k$-sparse vector $c$ if and only if there exist $c_1 > 0$ and $c_2 < \infty$ such that

$$c_1 \|v\|_2^2 \;\le\; \|Dv\|_2^2 \;\le\; c_2 \|v\|_2^2 \qquad (19)$$

for every $v$ that is block $2k$-sparse. It is easy to see that if $D$ satisfies (19), then $Dc \ne 0$ for all block $2k$-sparse vectors $c$; therefore, this condition implies both invertibility and stability.

A. Block RIP

Property (19) is related to the RIP used in several previous works in CS [9], [13], [14]. A matrix $D$ of size $n \times N$ is said to have the RIP if there exists a constant $\delta_k \in [0, 1)$ such that for every $k$-sparse $c \in \mathbb{R}^N$,

$$(1 - \delta_k)\, \|c\|_2^2 \;\le\; \|Dc\|_2^2 \;\le\; (1 + \delta_k)\, \|c\|_2^2. \qquad (20)$$

Extending this property to block-sparse vectors leads to the following definition.

Definition 2: Let $D : \mathbb{R}^N \to \mathbb{R}^n$ be a given matrix. Then $D$ has the block RIP over $I = \{d_1, \dots, d_m\}$ with parameter $\delta_{k|I}$ if for every $c \in \mathbb{R}^N$ that is block $k$-sparse over $I$ we have

$$(1 - \delta_{k|I})\, \|c\|_2^2 \;\le\; \|Dc\|_2^2 \;\le\; (1 + \delta_{k|I})\, \|c\|_2^2. \qquad (21)$$

By abuse of notation, we use $\delta_k$ for the block-RIP constant $\delta_{k|I}$ when it is clear from the context that we refer to blocks. The block RIP is a special case of the $\mathcal{A}$-restricted isometry defined in [23]. From Proposition 1 it follows that if $D$ satisfies the block RIP (21) with $\delta_{2k} < 1$, then there is a unique block-sparse vector $c$ consistent with (16). Note that a block-$k$-sparse vector over $I$ is $M$-sparse in the conventional sense, where $M$ is the sum of the $k$ largest values in $I$, since it has at most $M$ non-zero elements. If we required $D$ to satisfy the RIP for all $M$-sparse vectors, then (21) would have to hold for all $2M$-sparse vectors $c$. Since we only require the RIP for block-sparse signals, (21) only has to be satisfied for a certain subset of $2M$-sparse signals, namely those that have block sparsity. As a result, the block-RIP constant $\delta_{k|I}$ is typically smaller than $\delta_M$ (where $M$ depends on $k$; for blocks of equal size $d$, $M = kd$).
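The RIP constants quoted in the example that follows come from exactly this kind of brute-force computation: scan every admissible support, form the Gram matrix of the corresponding columns, and record the worst deviation from an isometry. A hedged MATLAB sketch on a random stand-in matrix (the paper's example matrix is not reproduced in this copy):

D = randn(4, 6);
D = D ./ sqrt(sum(D.^2, 1));               % normalize columns so delta_1 = 0
k = 2;
deltak = 0;
for T = nchoosek(1:size(D,2), k)'           % every k-column support
    DT = D(:, T);
    G  = DT' * DT;                          % Gram matrix of the submatrix
    deltak = max(deltak, norm(G - eye(k))); % worst eigenvalue deviation from 1
end
fprintf('delta_%d = %.3f\n', k, deltak);

For the block RIP, the same scan is simply restricted to supports made of whole blocks, which is why the block constant delta_{k|I} can be much smaller than the standard constant.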

To emphasize the advantage of the block RIP over the standard RIP, consider the following matrix, separated into three blocks of two columns each:

$$D = \tilde{D}\, B, \qquad (22)$$

where $\tilde{D}$ is the example matrix (not reproduced in this copy) and $B$ is a diagonal matrix that results in unit-norm columns of $D$, i.e., $B = \mathrm{diag}(1, 15, 1, 1, 1, 12)^{-1/2}$. In this example $m = 3$ and $I = \{d_1 = 2, d_2 = 2, d_3 = 2\}$. Suppose that $c$ is block-1-sparse, which corresponds to at most two non-zero values. Brute-force calculations show that the smallest value of $\delta_2$ satisfying the standard RIP (20) is $\delta_2 = 0.866$. On the other hand, the block RIP (21), corresponding to the case in which the two non-zero elements are restricted to occur in one block, is satisfied with $\delta_{1|I} = 0.289$. Increasing the number of non-zero elements to $k = 4$, we can verify that the standard RIP (20) does not hold for any $\delta_4 \in [0, 1)$; indeed, in this example there exist two 4-sparse vectors that result in the same measurements. In contrast, $\delta_{2|I} = 0.966$ satisfies the lower bound in (21) when the four non-zero values are restricted to two blocks. Consequently, the measurements $y = Dc$ uniquely specify a single block-sparse $c$.

In the next section we will see that the ability to recover $c$ in a computationally efficient way depends on the constant $\delta_{2k|I}$ in the block RIP (21): the smaller the value of $\delta_{2k|I}$, the fewer samples are needed in order to guarantee stable recovery. Both the standard and block RIP constants $\delta_k$, $\delta_{k|I}$ are by definition increasing with $k$. Therefore, it was suggested in [12] to normalize each of the columns of $D$ to 1, so as to start with $\delta_1 = 0$. In the same spirit, we recommend choosing the bases for the $\mathcal{A}_i$ such that $D = S^* A$ has unit-norm columns, corresponding to $\delta_{1|I} = 0$.

B. Recovery Method

We have seen that if $D$ satisfies the block RIP (21) with $\delta_{2k} < 1$, then there is a unique block-sparse vector $c$ consistent with (16). The question is how to find $c$ in practice. Below we present an algorithm that will in principle find the unique $c$ from the samples $y$; unfortunately, though, it has exponential complexity. In the next section we show that under a stronger condition on $\delta_{2k}$ we can recover $c$ in a stable and efficient manner. Our first claim is that $c$ can be uniquely recovered by solving the optimization problem

$$\min_{c}\ \|c\|_{0,I} \quad \text{subject to} \quad y = Dc, \qquad (23)$$

where $\|c\|_{0,I}$ counts the number of non-zero blocks of $c$. To show that (23) will indeed recover the true value of $c$, suppose that there exists a $c'$ such that $Dc' = y$ and $\|c'\|_{0,I} \le \|c\|_{0,I} \le k$. Since both $c$ and $c'$ are consistent with the measurements,

$$0 = D(c - c') = Dd, \qquad (24)$$

where $d = c - c'$ satisfies $\|d\|_{0,I} \le 2k$, so that $d$ is a block $2k$-sparse vector. Since $D$ satisfies (21) with $\delta_{2k} < 1$, we must have $d = 0$, i.e., $c = c'$.
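The exponential-complexity search behind (23) can be made concrete: enumerate the block supports, fit c on each candidate support by least squares, and keep a support that reproduces y exactly. A hedged MATLAB sketch with illustrative sizes (equal block lengths d, for simplicity):

n = 10; m = 6; d = 2; k = 2;             % m blocks of length d, k of them active
D  = randn(n, m*d);
c0 = zeros(m*d, 1);                      % ground-truth block-2-sparse vector
c0(1:d) = randn(d, 1);                   % block 1 active
c0(2*d+1:3*d) = randn(d, 1);             % block 3 active
y  = D * c0;
for T = nchoosek(1:m, k)'                % every candidate set of k blocks
    idx = reshape((T' - 1)*d + (1:d)', [], 1);   % columns of those blocks
    cT  = D(:, idx) \ y;                 % least-squares fit on this support
    if norm(D(:, idx)*cT - y) < 1e-9     % consistent with the samples?
        c = zeros(m*d, 1); c(idx) = cT;  % found the block-sparse solution
        break;
    end
end
fprintf('recovery error = %g\n', norm(c - c0));   % ~0 for generic D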

3.3. THE FAST FOURIER TRANSFORM ALGORITHM

This is how the DFT may be computed efficiently.

1D Case. The discrete Fourier transform

$$F(u) = \frac{1}{N} \sum_{x=0}^{N-1} f(x)\, e^{-2\pi i u x / N}$$

has to be evaluated for $N$ values of $u$, which, if done in the obvious way, clearly takes $O(N^2)$ multiplications. It is possible to calculate the DFT more efficiently than this using the fast Fourier transform (FFT) algorithm, which reduces the number of operations to $O(N \log_2 N)$. We shall assume for simplicity that $N$ is a power of 2, $N = 2^n$. If we define $W_N = e^{-2\pi i / N}$ to be the $N$th root of unity and set $M = N/2$, we have

$$F(u) = \frac{1}{N} \sum_{x=0}^{N-1} f(x)\, W_N^{ux}.$$

This can be split apart into two separate sums of alternate terms from the original sum,

$$F(u) = \frac{1}{2}\left[\frac{1}{M}\sum_{x=0}^{M-1} f(2x)\, W_N^{2ux} \;+\; W_N^{u}\,\frac{1}{M}\sum_{x=0}^{M-1} f(2x+1)\, W_N^{2ux}\right].$$

Now, since the square of an $N$th root of unity is an $M$th root of unity, we have $W_N^{2ux} = W_M^{ux}$, and hence

$$F(u) = \frac{1}{2}\left[F_{\mathrm{even}}(u) + W_N^{u}\, F_{\mathrm{odd}}(u)\right], \qquad 0 \le u < M, \qquad (28)$$

where $F_{\mathrm{even}}$ and $F_{\mathrm{odd}}$ denote the two sums demarcated above. Note that each of $F_{\mathrm{even}}(u)$ and $F_{\mathrm{odd}}(u)$, for $0 \le u < M$, is in itself a discrete Fourier transform over $N/2 = M$ points. How does this help us? Well, both half-size transforms are periodic with period $M$, and $W_N^{u+M} = -W_N^{u}$, so

$$F(u + M) = \frac{1}{2}\left[F_{\mathrm{even}}(u) - W_N^{u}\, F_{\mathrm{odd}}(u)\right], \qquad 0 \le u < M. \qquad (30)$$

Thus, we can compute an $N$-point DFT by dividing it into two parts:

• the first half of $F(u)$ can be found from Eqn. 28;
• the second half, for $M \le u < N$, can be found simply by reusing the same terms differently, as shown by Eqn. 30.

This is obviously a divide-and-conquer method. To show how many operations it requires, let $T(n)$ be the time taken to perform a transform of size $N = 2^n$, measured by the number of multiplications performed. The above analysis shows that

$$T(n) = 2\,T(n-1) + O(2^n),$$

the first term on the right-hand side coming from the two transforms of half the original size, and the second term coming from the multiplications by $W_N^{u}$. Induction can be used to prove that

$$T(n) = O(n\, 2^n) = O(N \log_2 N).$$

A similar argument can also be applied to the number of additions required, to show that the algorithm as a whole takes $O(N \log_2 N)$ time.

Also note that the same algorithm can be used, with a little modification, to perform the inverse DFT too. Going back to the definitions of the DFT and its inverse,

$$F(u) = \frac{1}{N} \sum_{x=0}^{N-1} f(x)\, e^{-2\pi i u x / N}, \qquad f(x) = \sum_{u=0}^{N-1} F(u)\, e^{2\pi i u x / N},$$

if we take the complex conjugate of the second equation, we have

$$f^{*}(x) = \sum_{u=0}^{N-1} F^{*}(u)\, e^{-2\pi i u x / N}.$$

This now looks (apart from a factor of $1/N$) like a forward DFT rather than an inverse DFT. Thus, to compute an inverse DFT:

• take the conjugate of the Fourier-space data;
• put the conjugate through a forward DFT algorithm;
• take the conjugate of the result, at the same time multiplying each value by $N$.
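The derivation above maps directly onto a recursive MATLAB routine. The sketch below keeps this section's 1/N normalization, so it is a didactic illustration rather than production code; against MATLAB's built-in fft, which omits the 1/N factor, it satisfies myfft(x) ≈ fft(x)/numel(x).

function F = myfft(f)
% Recursive radix-2 FFT following Eqns. 28 and 30, with the 1/N
% normalization used in this section. Assumes numel(f) is a power of 2.
f = f(:);                                % work with a column vector
N = numel(f);
if N == 1
    F = f;                               % a 1-point DFT is the sample itself
else
    M  = N/2;
    Fe = myfft(f(1:2:end));              % M-point DFT of the even-indexed samples
    Fo = myfft(f(2:2:end));              % M-point DFT of the odd-indexed samples
    W  = exp(-2i*pi*(0:M-1).'/N);        % twiddle factors W_N^u
    F  = 0.5 * [Fe + W.*Fo;              % F(u),   Eqn. 28
                Fe - W.*Fo];             % F(u+M), Eqn. 30
end
end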

2D Case. The same fast Fourier transform algorithm can be used by applying the separability property of the 2D transform. Rewrite the 2D DFT as

$$F(u, v) = \frac{1}{N} \sum_{x=0}^{N-1} e^{-2\pi i u x / N} \left[\frac{1}{N} \sum_{y=0}^{N-1} f(x, y)\, e^{-2\pi i v y / N}\right].$$

The right-hand sum is basically just a one-dimensional DFT if $x$ is held constant; the left-hand sum is then another one-dimensional DFT performed on the numbers that come out of the first set of sums. So we can compute a two-dimensional DFT by

• performing a one-dimensional DFT for each value of $x$, i.e. for each column of $f(x, y)$, then
• performing a one-dimensional DFT in the opposite direction (for each row) on the resulting values.

This requires a total of $2N$ one-dimensional transforms, so the overall process takes $O(N^2 \log_2 N)$ time.
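This separability argument is easy to verify against MATLAB's built-in 2D transform (fft and fft2 omit the 1/N^2 factor used above, so the two sides can be compared directly):

f  = rand(8, 8);                     % an arbitrary small image
F1 = fft(fft(f, [], 1), [], 2);      % 1D DFTs down the columns, then along the rows
fprintf('difference = %g\n', norm(F1 - fft2(f)));   % ~0: same result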

3.4. STORAGE OF SAMPLED SIGNALS

For storage of the sampled signal I have used a buffer, as described below. I have designed a non-inverting amplifier with a gain of exactly 1. The gain of a non-inverting amplifier is given by the formula

$$A_v = 1 + \frac{R_2}{R_1}.$$

So, if we make $R_2$ zero and $R_1$ infinity, we will have an amplifier with a gain of exactly 1. How can we do this? The circuit is surprisingly simple: $R_2$ is a plain wire, which has effectively zero resistance, and we can think of $R_1$ as an infinite resistor, since we have no connection to ground at all. This arrangement is called an op-amp follower, or buffer. The buffer has an output that exactly mirrors the input (assuming it is within range of the voltage rails), so it looks kind of useless at first.
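The two limiting cases used above can be checked with a one-line MATLAB gain function (a trivial sanity check, not part of the original design):

Av = @(R1, R2) 1 + R2 ./ R1;   % non-inverting gain
Av(10e3, 10e3)                 % a conventional stage: gain 2
Av(Inf, 0)                     % the follower wiring: gain exactly 1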

However, the buffer is an extremely useful circuit, since it helps to hold the sampled signal for some time: the input impedance of the op-amp buffer is very high, close to infinity, and the output impedance is very low, just a few ohms.

3.5. SPARSE RECOVERY BY THE 890 ALGORITHM USING THE SPARCO TOOLBOX OF MATLAB

In this section I give a brief account of SPARCO, with emphasis on the application part. Sparco is organized as a flexible framework providing test problems for sparse signal reconstruction, as well as a library of operators and tools; the problem suite currently contains 25 problems and 28 operators, and a test problem is pulled from the suite as sketched below. The latest version of Sparco and related material (installation guides, prerequisites, code for the sparse MRI toolbox, and test problems packaged with the GPSR solver) can be obtained from www.cs.ubc.ca/labs/scl/sparco. The open-source Rice Wavelet Toolbox can also be of great help.
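A hedged sketch of instantiating one of the test problems (the generateProblem entry point and the field names follow my reading of the Sparco documentation and should be treated as assumptions; the problem number is arbitrary):

P = generateProblem(6);    % build one of the 25 problems in the suite
A = P.A;                   % its measurement operator, called as A(x,1) / A(y,2)
b = P.b;                   % the observed measurements to reconstruct from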

A brief description of the various Sparco operators is as follows. At the core of the Sparco architecture is a large library of linear operators. Where possible, specialized code is used for fast evaluation of matrix-vector multiplications. Once an operator has been created, e.g.

D = opDCT(128);

matrix-vector products with the created operator can be accessed as follows:

y = D(x,1);   % gives y := D x
x = D(y,2);   % gives x := D' y
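This two-mode calling convention can be checked directly; assuming the DCT operator is orthonormal (as MATLAB's dct is), applying mode 2 after mode 1 returns the input:

D  = opDCT(128);
x  = randn(128, 1);
y  = D(x, 1);              % forward: y := D x
xr = D(y, 2);              % adjoint: xr := D' y
fprintf('round-trip error = %g\n', norm(x - xr));   % ~0 since D'D = I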

A full list of the basic operators available in the Sparco library is given in Tables 3 and 4.

opBinary           binary (0/1) ensemble
opBlockDiag        compound operator with operators on the diagonal
opBlur             two-dimensional blurring operator
opColumnRestrict   restriction on matrix columns
opConvolve1d       one-dimensional convolution operator
opCurvelet2d       two-dimensional curvelet operator
opDCT              one-dimensional discrete cosine transform
opDiag             scaling operator
opDictionary       compound operator with operators abutted
opDirac            identity operator
opFFT              one-dimensional FFT
opFFT2d            two-dimensional FFT
opFFT2C            centralized two-dimensional FFT
opFoG              subsequent application of a set of operators
opGaussian         Gaussian ensemble
opHaar             one-dimensional Haar wavelet transform
opHaar2d           two-dimensional Haar wavelet transform
opHeaviside        Heaviside matrix operator
opKron             Kronecker product of two operators
opMask             vector entry selection mask
opMatrix           wrapper for matrices
opPadding          pad and unpad operators equally around each side
opReal             discard imaginary components
opRestriction      vector entry restriction
opSign             sign-ensemble operator
opWavelet          wavelet operator
opWindowedOp       overcomplete windowed operator

Table 3: The operators in the Sparco library

Matlab classes can be used to overload operations commonly used for matrices, so that the objects in that class behave exactly like explicit matrices. Although this mechanism is not used for the implementation of the Sparco operators, operator overloading can provide a very convenient interface for the user. To facilitate this feature, Sparco provides the function classOp:

C = classOp(op);            % create matrix object C from op
C = classOp(op,'nCprod');   % additionally, create a global counter variable nCprod

These calls take an operator op and return an object from the operator class, for which the main matrix-vector operations are defined. In its second form, the classOp function accepts an optional string argument and creates a global variable that keeps track of the number of multiplications with C and C'; the variable can be accessed from Matlab's base workspace. The following example illustrates the use of classOp:

F  = opFFT(128);
G  = classOp(F);
g1 = F(y,2);   % gives g1 := F' y
g2 = G'*y;     % gives g2 := G' y = F' y

Operator type        Matlab functions

Ensembles            opBinary, opSign, opGaussian
Selection            opMask, opColumnRestrict, opRestriction
Matrix               opDiag, opDirac, opMatrix, opToMatrix
Fast operators       opCurvelet, opConvolve1d, opConvolve2d, opDCT, opFFT, opFFT2d, opFFT2C, opHaar, opHaar2d, opHeaviside, opWavelet
Compound operators   opBlockDiag, opDictionary, opFoG, opKron, opWindowedOp
Nonlinear            opReal, opPadding

Table 4: Operators grouped by type

Meta operators: Several tools are available for conveniently assembling more complex operators from the basic operators. The five meta-operators opFoG, opDictionary, opTranspose, opBlockDiag, and opKron take one or more of the basic operators as inputs, and assemble them into a single operator:

H = opFoG(A1,A2,...);        % H := A1 A2 ... An   (composition)
H = opDictionary(A1,A2,...); % H := [A1 | A2 | ... | An]
H = opTranspose(A);          % H := A'
H = opBlockDiag(A1,A2,...);  % H := diag(A1, A2, ..., An)
H = opKron(A1,A2);           % H := kron(A1, A2)

A sixth meta-operator, opWindowedOp, is a mixture between opDictionary and opBlockDiag in which blocks can partially overlap, rather than fully (opDictionary) or not at all (opBlockDiag). A further two differences are that only a single operator is repeated and that each operator is implicitly preceded by a diagonal window operator.

Ensemble operators and general matrices: The three ensemble operators (see Table 4) can be instantiated by simply specifying their dimensions and a mode that determines the normalization of the ensemble. Unlike the other operators in the collection, the ensemble operators can be instantiated either as explicit matrices (requiring O(mn) storage) or as implicit operators. When instantiated as implicit operators, the random number seeds are saved, and rows and columns are generated on the fly during multiplication, requiring only O(n) storage for the normalization coefficients.

Selection operators: Two selection operators are provided, opMask and opRestriction. In forward mode, the restriction operator selects certain entries from the given vector and returns a correspondingly shortened vector. In contrast, the mask operator evaluates the dot product with a binary vector, zeroing out entries instead of discarding them, and returns a vector of the same length as the input vector.

Fast operators: Sparco also provides support for operators with special structure for which fast algorithms are available. Such operators in the library include the Fourier, discrete cosine, wavelet, and two-dimensional curvelet transforms, as well as one-dimensional convolution of a signal with a kernel. For example, the following code generates a partial Fourier measurement operator (F), a masked version with 30% of the rows randomly zeroed out (M), and a dictionary consisting of an FFT and a scaled Dirac basis (B):

m = 128;
D = opDiag(m,0.1);                      % D is a scaled Dirac (diagonal) operator
F = opFFT(m);                           % F is an FFT
M = opFoG(opMask(rand(m,1) < 0.7),F);   % M is a masked version of F
B = opDictionary(F,D);                  % B = [F D]

Utilities: For general matrices there are three operators, opDirac, opDiag, and opMatrix. The Dirac operator coincides with the identity matrix of the desired size. Diagonal matrices can be generated using opDiag, which takes either a size and a scalar, or a vector containing the diagonal entries. General matrix operators can be created using opMatrix with a (sparse) matrix as an argument. The opToMatrix utility function takes an implicit linear operator and forms and returns the explicit matrix. Figure 1 shows the results of using this utility function on the operators M and B:

Mexplicit = opToMatrix(M); imagesc(Mexplicit);
Bexplicit = opToMatrix(B); imagesc(Bexplicit);

Using the appropriate algorithms and tools, I carried out the experimentation discussed in the next section.

4. EXPERIMENTATION AND OBSERVATION

4.1. Aim: to reconstruct an image, a given input image being provided.
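A hedged end-to-end sketch of the kind of reconstruction run this section describes: build a masked Fourier measurement operator exactly as above, sample a sparse signal, and recover it with a basis pursuit solver. spg_bp comes from the SPGL1 package of [10]; the sizes, the sparsity level, and the choice of solver are my assumptions, since the paper does not list its exact script.

m  = 128; k = 5;
x0 = zeros(m,1);
p  = randperm(m);
x0(p(1:k)) = randn(k,1);                 % a k-sparse test signal
F  = opFFT(m);
A  = opFoG(opMask(rand(m,1) < 0.7), F);  % masked Fourier operator, as above
b  = A(x0, 1);                           % the compressive measurements
x  = spg_bp(@(z,mode) A(z,mode), b);     % solve min ||x||_1 s.t. Ax = b
fprintf('relative error = %g\n', norm(x - x0)/norm(x0));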

4.6. The output image is as follows: [output image not reproduced in this copy]

5. RESULTS AND INFERENCES

Although the experimentation was not completely successful, owing to some unavoidable circumstances, the result is still satisfactory. Since there is always scope for improvement, the design of such a system in practical form remains a challenge. The top view of the DSP kit used is shown below:

[Photograph of the DSP kit not reproduced in this copy.]

7. REFERENCES

[1] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge, U.K.: Cambridge Univ. Press, 2004.
[2] E. Candès and J. Romberg, "Sparsity and incoherence in compressive sampling," Inverse Problems, vol. 23, no. 3, pp. 969–986, June 2007.
[3] M. Rudelson and R. Vershynin, "On sparse reconstruction from Fourier and Gaussian measurements," submitted for publication.
[4] J. Shapiro, "Embedded image coding using zerotrees of wavelet coefficients," IEEE Trans. Signal Process., Dec. 1993.
[5] A. Skodras, C. Christopoulos, and T. Ebrahimi, "The JPEG2000 still image compression standard," IEEE Signal Processing Mag., vol. 18, pp. 36–58, Sept. 2001.
[6] D. Takhar, J. N. Laska, M. B. Wakin, M. F. Duarte, D. Baron, S. Sarvotham, K. F. Kelly, and R. G. Baraniuk, "A new compressive imaging camera architecture using optical-domain compression," in Proc. SPIE Conf. Computational Imaging IV, San Jose, CA, Jan. 2006, pp. 43–52.
[7] J. Tropp, "Just relax: Convex programming methods for identifying sparse signals in noise," IEEE Trans. Inform. Theory, vol. 52, no. 3, pp. 1030–1051, 2006.
[8] M. Vetterli and J. Kovacevic, Wavelets and Subband Coding. Englewood Cliffs, NJ: Prentice-Hall, 1995.
[9] R. Baraniuk, H. Choi, F. Fernandes, B. Hendricks, R. Neelamani, V. Ribeiro, J. Romberg, R. Gopinath, H. T. Guo, M. Lang, J. E. Odegard, and D. Wei, Rice Wavelet Toolbox, http://www.dsp.rice.edu/software/rwt.shtml, 1993.
[10] E. van den Berg and M. P. Friedlander, "In pursuit of a root," Tech. Rep. TR-2007-19, Department of Computer Science, University of British Columbia, June 2007.
[11] R. Boisvert, R. Pozo, K. Remington, R. Barrett, and J. Dongarra, "Matrix Market: A web resource for test matrix collections," in The Quality of Numerical Software: Assessment and Enhancement, R. F. Boisvert, ed., London: Chapman & Hall, 1997, pp. 125–137.
[12] E. J. Candès, "Compressive sampling," in Proceedings of the International Congress of Mathematicians, 2006.
[13] E. J. Candès, L. Demanet, D. L. Donoho, and L. Ying, CurveLab, http://www.curvelet.org/, 2007.
[14] M. Lustig, D. L. Donoho, J. M. Santos, and J. M. Pauly, "Compressed sensing MRI," 2007, submitted to IEEE Signal Processing Magazine.
[15] D. Malioutov, M. Çetin, and A. S. Willsky, "A sparse signal reconstruction perspective for source localization with sensor arrays," IEEE Trans. Signal Process., vol. 53, pp. 3010–3022, 2005.
[16] S. Mendelson, A. Pajor, and N. Tomczak-Jaegermann, "Uniform uncertainty principle for Bernoulli and subgaussian ensembles," 2007, arXiv:math/0608665.
[17] B. K. Natarajan, "Sparse approximate solutions to linear systems," SIAM J. Comput., vol. 24, pp. 227–234, 1995.
[18] S. Mendelson, A. Pajor, and N. Tomczak-Jaegermann, "Uniform uncertainty principle for Bernoulli and subgaussian ensembles," 2007, arXiv:math/0608665.
[19] B. K. Natarajan, "Sparse approximate solutions to linear systems," SIAM J. Comput., vol. 24, pp. 227–234, 1995.
[20] S. Mendelson, A. Pajor, and N. Tomczak-Jaegermann, "Uniform uncertainty principle for Bernoulli and subgaussian ensembles," 2007, arXiv:math/0608665.
[21] B. K. Natarajan, "Sparse approximate solutions to linear systems," SIAM J. Comput., vol. 24, pp. 227–234, 1995.
[22] C. E. Shannon and W. Weaver, The Mathematical Theory of Communication. University of Illinois Press, 1949.
[23] I. F. Gorodnitsky and B. D. Rao, "Sparse signal reconstruction from limited data using FOCUSS: A re-weighted minimum norm algorithm," IEEE Trans. Signal Process., vol. 45, pp. 600–616, March 1997.
[24] M. Vetterli, P. Marziliano, and T. Blu, "Sampling signals with finite rate of innovation," IEEE Trans. Signal Process., vol. 50, no. 6, pp. 1417–1428, 2002.
[25] E. Candès and J. Romberg, "Quantitative robust uncertainty principles and optimally sparse decompositions," Found. Comput. Math., vol. 6, no. 2, pp. 227–254, 2006.
[26] E. Candès and T. Tao, "Near optimal signal recovery from random projections: Universal encoding strategies?" IEEE Trans. Inf. Theory, vol. 52, no. 12, pp. 5406–5425, 2006.
[27] D. Donoho, "Compressed sensing," IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
[28] J. A. Tropp, A. C. Gilbert, and M. J. Strauss, "Algorithms for simultaneous sparse approximation. Part I: Greedy pursuit," Signal Processing, vol. 86, pp. 572–588, 2006.
[29] J. A. Tropp, "Algorithms for simultaneous sparse approximation. Part II: Convex relaxation," Signal Processing, vol. 86, pp. 589–602, 2006.
[30] R. Gribonval, H. Rauhut, K. Schnass, and P. Vandergheynst, "Atoms of all channels, unite! Average case analysis of multi-channel sparse recovery using greedy algorithms," J. Fourier Anal. Appl., published online, DOI:10.1007/s00041-008-9044-y, October 2008.
[31] E. Candès, "Compressive sampling," in Proc. International Congress of Mathematicians, Madrid, Spain, vol. 3, pp. 1433–1452, 2006.
[32] E. Candès, "Compressive sampling," in Proc. International Congress of Mathematicians, Madrid, Spain, vol. 3, pp. 1433–1452, 2006.
[33] E. Candès, "Compressive sampling," in Proc. International Congress of Mathematicians, Madrid, Spain, vol. 3, pp. 1433–1452, 2006.
[34] E. Candès, "Compressive sampling," in Proc. International Congress of Mathematicians, Madrid, Spain, vol. 3, pp. 1433–1452, 2006.
[35] E. Candès, "Compressive sampling," in Proc. International Congress of Mathematicians, Madrid, Spain, vol. 3, pp. 1433–1452, 2006.
[36] E. Candès, "Compressive sampling," in Proc. International Congress of Mathematicians, Madrid, Spain, vol. 3, pp. 1433–1452, 2006.
[37] A. Aldroubi and K. Gröchenig, "Nonuniform sampling and reconstruction in shift-invariant spaces," SIAM Review, vol. 43, no. 4, pp. 585–620, 2001.
[38] C. Zhao and P. Zhao, "Sampling theorem and irregular sampling theorem for multiwavelet subspaces," IEEE Trans. Signal Process., vol. 53, no. 2, pp. 705–713, Feb. 2005.
[39] P. Zhao, C. Zhao, and P. G. Casazza, "Perturbation of regular sampling in shift-invariant spaces for frames," IEEE Trans. Inf. Theory, vol. 52, no. 10, pp. 4643–4648, Oct. 2006.
[40] M. Vetterli, P. Marziliano, and T. Blu, "Sampling signals with finite rate of innovation," IEEE Trans. Signal Process., vol. 50, no. 6, pp. 1417–1428, Jun. 2002.
[41] I. Maravic and M. Vetterli, "Sampling and reconstruction of signals with finite rate of innovation in the presence of noise," IEEE Trans. Signal Process., vol. 53, no. 8, pp. 2788–2805, Aug. 2005.
[42] P. Dragotti, M. Vetterli, and T. Blu, "Sampling moments and reconstructing signals of finite rate of innovation: Shannon meets Strang–Fix," IEEE Trans. Signal Process., vol. 55, no. 5, pp. 1741–1757, May 2007.
[43] D. L. Donoho, M. Vetterli, R. A. DeVore, and I. Daubechies, "Data compression and harmonic analysis," IEEE Trans. Inf. Theory, vol. 44, no. 6, pp. 2435–2476, Oct. 1998.
[44] S. Mallat, A Wavelet Tour of Signal Processing, 2nd ed. San Diego: Academic Press, 1999.
[45] A. M. Bruchstein, T. J. Shan, and T. Kailath, "The resolution of overlapping echos," IEEE Trans. Acoust., Speech, Signal Process., vol. 33, no. 6, pp. 1357–1367, Dec. 1985.
[46] A. Aldroubi and K. Gröchenig, "Nonuniform sampling and reconstruction in shift-invariant spaces," SIAM Review, vol. 43, no. 4, pp. 585–620, 2001.
[47] C. Zhao and P. Zhao, "Sampling theorem and irregular sampling theorem for multiwavelet subspaces," IEEE Trans. Signal Process., vol. 53, no. 2, pp. 705–713, Feb. 2005.
[48] P. Zhao, C. Zhao, and P. G. Casazza, "Perturbation of regular sampling in shift-invariant spaces for frames," IEEE Trans. Inf. Theory, vol. 52, no. 10, pp. 4643–4648, Oct. 2006.
[49] M. Vetterli, P. Marziliano, and T. Blu, "Sampling signals with finite rate of innovation," IEEE Trans. Signal Process., vol. 50, no. 6, pp. 1417–1428, Jun. 2002.
[50] I. Maravic and M. Vetterli, "Sampling and reconstruction of signals with finite rate of innovation in the presence of noise," IEEE Trans. Signal Process., vol. 53, no. 8, pp. 2788–2805, Aug. 2005.
[51] P. Dragotti, M. Vetterli, and T. Blu, "Sampling moments and reconstructing signals of finite rate of innovation: Shannon meets Strang–Fix," IEEE Trans. Signal Process., vol. 55, no. 5, pp. 1741–1757, May 2007.
[52] D. L. Donoho, M. Vetterli, R. A. DeVore, and I. Daubechies, "Data compression and harmonic analysis," IEEE Trans. Inf. Theory, vol. 44, no. 6, pp. 2435–2476, Oct. 1998.
[53] S. Mallat, A Wavelet Tour of Signal Processing, 2nd ed. San Diego: Academic Press, 1999.
[54] A. M. Bruchstein, T. J. Shan, and T. Kailath, "The resolution of overlapping echos," IEEE Trans. Acoust., Speech, Signal Process., vol. 33, no. 6, pp. 1357–1367, Dec. 1985.
