
Saturday, March 30, 2019

Compressive Sensing: Performance Comparison of Measurement Matrices

Compressive Sensing: A Performance Comparison of Measurement Matrices
Y. Arjoune, N. Kaabouch, H. El Ghazi, and A. Tamtaoui

Abstract: The compressive sensing paradigm involves three main processes: sparse representation, measurement, and sparse recovery. This theory deals with sparse signals using the fact that most real-world signals are sparse. Thus, it uses a measurement matrix to sample only the components that best represent the sparse signal. The choice of the measurement matrix affects the success of the sparse recovery process. Hence, the design of an accurate measurement matrix is an important step in compressive sensing. Over the last decades, several measurement matrices have been proposed, so a detailed review of these matrices and a comparison of their performances is needed. This paper gives an overview of compressive sensing and highlights the measurement process. It then proposes a three-level measurement matrix classification and compares the performance of eight measurement matrices after presenting the mathematical model of each matrix. Several experiments are performed to compare these measurement matrices using four evaluation metrics: sparse recovery error, processing time, covariance, and phase transition diagram. Results show that Circulant, Toeplitz, and partial Hadamard measurement matrices allow fast reconstruction of sparse signals with small recovery errors.

Index Terms: Compressive sensing, sparse representation, measurement matrix, random matrix, deterministic matrix, sparse recovery.

1. Introduction

Traditional data acquisition techniques acquire N samples of a given signal sampled at a rate at least twice the Nyquist rate in order to guarantee perfect signal reconstruction. After data acquisition, data compression is needed to reduce the large number of samples because most signals are sparse and only a few samples are needed to represent them. This process is time consuming because of the large number of samples acquired. In addition, devices are often not able to store the amount of data generated. Therefore, compressive sensing is attractive: it reduces both the processing time and the number of samples to be stored. This sensing technique combines data acquisition and data compression in one process. It exploits the sparsity of the signal to recover the original sparse signal from a small set of measurements [1]. A signal is sparse if only a few of its components are nonzero. Compressive sensing has proved itself a promising solution for high-dimensional signals and has major applications ranging from image processing [2] to wireless sensor networks [3]-[4], spectrum sensing in cognitive radio [5]-[8], and channel estimation [9]-[10].

As shown in Fig. 1, compressive sensing involves three main processes: sparse representation, measurement, and sparse recovery. If signals are not sparse, sparse representation projects the signal on a suitable basis in which it becomes sparse. Examples of sparse representation techniques are the Fast Fourier Transform (FFT), the Discrete Wavelet Transform (DWT), and the Discrete Cosine Transform (DCT) [11]. The measurement process consists of selecting, from the sparse signal of length N, a small number M of measurements that best represent the signal, where M << N; a small numerical sketch of this step follows.
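To make the measurement step concrete, here is a minimal Python sketch (numpy only; the sizes N, M, and K are illustrative choices, not values from the paper) that builds a K-sparse signal and samples it with a random Gaussian measurement matrix:

    import numpy as np

    rng = np.random.default_rng(0)
    N, M, K = 512, 128, 10                 # signal length, measurements, sparsity (illustrative)

    # K-sparse signal: only K nonzero components, at random positions
    x = np.zeros(N)
    support = rng.choice(N, size=K, replace=False)
    x[support] = rng.standard_normal(K)

    # Measurement process: y = Phi x with a random Gaussian matrix Phi (M x N, M << N)
    Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # 1/sqrt(M) scaling is one common convention
    y = Phi @ x

    print(x.shape, y.shape)                # (512,) -> (128,)

The vectors x, Phi, and y defined here are reused in the sketches below.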
Mathematically, this process consists of multiplying the sparse signal x by a measurement matrix Φ. This matrix has to have a small mutual coherence or satisfy the Restricted Isometry Property. The sparse recovery process aims at recovering the sparse signal x from the few measurements y selected in the measurement process, given the measurement matrix Φ. Thus, the sparse recovery problem is an underdetermined system of linear equations, which has an infinite number of solutions. However, the sparsity of the signal and the small mutual coherence of the measurement matrix ensure a unique solution to this problem, which can be formulated as a linear optimization problem. Several algorithms have been proposed to solve this sparse recovery problem. These algorithms can be classified into three main categories: the Convex and Relaxation category [12]-[14], the Greedy category [15]-[20], and the Bayesian category [21]-[23]. Techniques under the Convex and Relaxation category solve the sparse recovery problem through optimization algorithms such as Gradient Descent and Basis Pursuit. These techniques are complex and have a high recovery time. As an alternative to reduce the processing time and speed up the recovery, Greedy techniques, which build the solution iteratively, have been proposed. Examples of these techniques include Orthogonal Matching Pursuit (OMP) and its derivatives; a sketch of OMP is given below. These Greedy techniques are faster but sometimes inefficient. Bayesian-based techniques, which use prior knowledge of the sparse signal to recover the original sparse signal, can also be a good approach to the sparse recovery problem. Examples of these techniques include Bayesian via Laplace Prior (BSC-LP), Bayesian via Relevance Vector Machine (BSC-RVM), and Bayesian via Belief Propagation (BSC-BP). In general, the existence and uniqueness of the solution are guaranteed as soon as the measurement matrix used to sample the sparse signal satisfies certain criteria. The two well-known criteria are the Mutual Incoherence Property (MIP) and the Restricted Isometry Property (RIP) [24]. Therefore, the design of measurement matrices is an important process in compressive sensing. It involves two fundamental steps: 1) selection of a measurement matrix, and 2) determination of the number of measurements necessary to sample the sparse signal without losing the information stored in it.
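As an illustration of the Greedy category, the following is a minimal, textbook-style Orthogonal Matching Pursuit sketch (reusing x, Phi, and y from the sketch above; the paper does not specify its implementation, so this is only one standard variant):

    def omp(Phi, y, K):
        """Recover a K-sparse signal from y = Phi @ x by Orthogonal Matching Pursuit."""
        M, N = Phi.shape
        residual = y.copy()
        support = []
        for _ in range(K):
            # Greedy step: pick the column most correlated with the current residual
            idx = int(np.argmax(np.abs(Phi.T @ residual)))
            if idx not in support:
                support.append(idx)
            # Projection step: least-squares fit on the current support
            coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
            residual = y - Phi[:, support] @ coef
        x_hat = np.zeros(N)
        x_hat[support] = coef
        return x_hat

    x_omp = omp(Phi, y, K)
    print(np.linalg.norm(x_omp - x))       # close to 0 when Phi behaves well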
A number of measurement matrices have been proposed. These matrices can be classified into two main categories: random and deterministic. Random matrices are generated from identical or independent statistical distributions such as Gaussian, Bernoulli, and random Fourier ensembles. These matrices are of two types: unstructured and structured. Unstructured type matrices are generated randomly following a given distribution; examples include the Gaussian, Bernoulli, and Uniform matrices. These matrices are easy to construct and satisfy the RIP with high probability [26]; however, because of the randomness, they present drawbacks such as high computational cost and expensive hardware implementation [27]. Structured type matrices are generated following a given structure; examples include the random partial Fourier and the random partial Hadamard matrices. On the other hand, deterministic matrices are constructed deterministically to have a small mutual coherence or to satisfy the RIP. Matrices of this category are of two types: semi-deterministic and full-deterministic. Semi-deterministic type matrices have a deterministic construction that still involves randomness in the construction process; examples are the Toeplitz and Circulant matrices [31]. Full-deterministic type matrices have a purely deterministic construction; examples include second-order Reed-Muller codes [28], Chirp sensing matrices [29], binary Bose-Chaudhuri-Hocquenghem (BCH) codes [30], and quasi-cyclic low-density parity-check code (QC-LDPC) matrices [32].

Several papers that provide a performance comparison of deterministic and random matrices have been published. For instance, Monajemi et al. [43] studied some semi-deterministic matrices such as Toeplitz and Circulant and showed that their phase transition diagrams are similar to those of random Gaussian matrices. In [11], the authors provide a survey on the applications of compressive sensing, highlight the drawbacks of unstructured random measurement matrices, and present the advantages of some full-deterministic measurement matrices. In [27], the authors provide a survey on full-deterministic matrices (Chirp, second-order Reed-Muller, and binary BCH matrices) and compare them with unstructured random matrices (Gaussian, Bernoulli, and Uniform matrices). All these papers compare two types of matrices from the same category, or two types from two different categories. However, to the best of our knowledge, no prior work has compared the performances of measurement matrices across both categories and all four types: random unstructured, random structured, semi-deterministic, and full-deterministic. Thus, this paper addresses this gap by providing an in-depth overview of the measurement process and comparing the performances of eight measurement matrices, two from each type.

The rest of this paper is organized as follows. In Section 2, we give the mathematical model behind compressive sensing. In Section 3, we provide a three-level classification of measurement matrices. Section 4 gives the mathematical model of each of the eight measurement matrices. Section 5 describes the experimental setup, defines the evaluation metrics used for the performance comparison, and discusses the experimental results. In Section 6, conclusions and perspectives are given.

2. Mathematical Model of Compressive Sensing

Compressive sensing exploits sparsity and compresses a K-sparse signal x of length N by multiplying it by a measurement matrix Φ of size M x N, where M << N. The resulting vector y = Φx is called the measurement vector. If the signal is not sparse, a simple projection of this signal on a suitable basis Ψ can make it sparse, i.e., x = Ψs, where s is sparse. The sparse recovery process aims at recovering the sparse signal given the measurement matrix and the vector of measurements. Thus, the sparse recovery problem, which is an underdetermined system of linear equations, can be stated as

    min ||s||_0   subject to   y = ΦΨs,                            (1)

where ||.||_0 is the l0-norm (the number of nonzero entries), s is the sparse representation of the signal in the basis Ψ, Φ is the measurement matrix, and y is the set of measurements.

For the rest of this paper, we assume that the signals are already sparse, i.e., Ψ is the identity and x = s. Problem (1) can then be written as

    min ||x||_0   subject to   y = Φx.                             (2)

This problem is NP-hard, so it cannot be solved in practice. Instead, its convex relaxation is considered by replacing the l0-norm with the l1-norm. Thus, the sparse recovery problem can be stated as

    min ||x||_1   subject to   y = Φx,                             (3)

where ||.||_1 is the l1-norm, x is the K-sparse signal, Φ is the measurement matrix, and y is the set of measurements. Recovery of the solution of problem (3) is guaranteed as soon as the measurement matrix has a small mutual coherence or satisfies the RIP of order 2K.
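Problem (3) can be solved as a linear program by writing x = u - v with u, v >= 0, so that ||x||_1 = sum(u) + sum(v). A minimal basis pursuit sketch using scipy (dedicated solvers such as CVXPY or SPGL1 would be more common in practice; Phi, y, and x are from the sketches above):

    from scipy.optimize import linprog

    Mrows, Ncols = Phi.shape
    # Variables z = [u; v] >= 0 with x = u - v, so the objective sum(z) equals ||x||_1
    c = np.ones(2 * Ncols)
    A_eq = np.hstack([Phi, -Phi])          # enforces Phi (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    x_l1 = res.x[:Ncols] - res.x[Ncols:]
    print(np.linalg.norm(x_l1 - x))        # near zero when the coherence/RIP conditions hold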
Definition 1: The coherence measures the maximum correlation between any two columns of the measurement matrix Φ. If Φ is a matrix with normalized column vectors φ_1, ..., φ_N, each of unit length, then the mutual coherence constant (MIC) is defined as

    μ(Φ) = max over i ≠ j of |<φ_i, φ_j>|.                         (4)

Compressive sensing is concerned with matrices that have low coherence, which means that few samples are required for a perfect recovery of the sparse signal.

Definition 2: A measurement matrix Φ satisfies the Restricted Isometry Property of order K if there exists a constant δ_K in (0, 1) such that

    (1 − δ_K) ||x||_2^2 ≤ ||Φx||_2^2 ≤ (1 + δ_K) ||x||_2^2         (5)

for every K-sparse signal x, where ||.||_2 is the l2-norm and δ_K is called the Restricted Isometry Constant (RIC) of Φ, which should be much smaller than 1.

3. Classification of Measurement Matrices

As shown in Fig. 2, measurement matrices can be classified into two main categories: random and deterministic. Matrices of the first category are generated at random, are easy to construct, and satisfy the RIP with high probability. Random matrices are of two types: unstructured and structured. Matrices of the unstructured random type are generated at random following a given distribution. For example, Gaussian, Bernoulli, and Uniform are unstructured random type matrices generated following the Gaussian, Bernoulli, and Uniform distributions, respectively. For matrices of the second type, structured random, the entries are generated following a given function or specific structure; randomness then comes into play by selecting random rows from the generated matrix. Examples of structured random matrices are the random partial Fourier and the random partial Hadamard matrices. Matrices of the second category, deterministic, are highly desirable because they are constructed deterministically to satisfy the RIP or to have a small mutual coherence. Deterministic matrices are also of two types: semi-deterministic and full-deterministic. The generation of semi-deterministic type matrices is done in two steps: the first step generates the entries of the first column randomly, and the second step generates the entries of the remaining columns from the first column by applying a simple transformation to it, such as shifting its elements. Examples of these matrices include the Circulant and Toeplitz matrices [24]. Full-deterministic matrices have a purely deterministic construction. Binary BCH, second-order Reed-Muller, Chirp sensing, and quasi-cyclic low-density parity-check code (QC-LDPC) matrices are examples of full-deterministic type matrices.
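Definitions 1 and 2 can be checked numerically. The sketch below computes the mutual coherence (4) of any measurement matrix and, since verifying the RIP exactly is combinatorial, only spot-checks the isometry condition (5) on random K-sparse vectors (both functions are illustrative helpers, not from the paper; Phi and K are from the first sketch):

    def mutual_coherence(Phi):
        # Normalize the columns, then take the largest off-diagonal inner product (4)
        U = Phi / np.linalg.norm(Phi, axis=0)
        G = np.abs(U.conj().T @ U)
        np.fill_diagonal(G, 0.0)
        return G.max()

    def isometry_spot_check(Phi, K, trials=1000, seed=1):
        # Empirical check of (5): the ratio ||Phi x||^2 / ||x||^2 over random K-sparse x
        rng = np.random.default_rng(seed)
        M, N = Phi.shape
        ratios = []
        for _ in range(trials):
            x = np.zeros(N)
            s = rng.choice(N, size=K, replace=False)
            x[s] = rng.standard_normal(K)
            ratios.append(np.linalg.norm(Phi @ x) ** 2 / np.linalg.norm(x) ** 2)
        return min(ratios), max(ratios)    # both ends should stay close to 1

    print(mutual_coherence(Phi))
    print(isometry_spot_check(Phi, K))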
4. Measurement Matrices

Based on the classification provided in the previous section, eight measurement matrices were implemented, two from each type: the Gaussian and Bernoulli measurement matrices from the unstructured random type, the random partial Fourier and random partial Hadamard measurement matrices from the structured random type, the Toeplitz and Circulant measurement matrices from the semi-deterministic type, and finally the Chirp and binary BCH measurement matrices from the full-deterministic type. In the following, the mathematical model of each of these eight measurement matrices is described.

A. Random Measurement Matrices

Random matrices are generated from identical or independent distributions such as normal, Bernoulli, and random Fourier ensembles. These random matrices are of two types: unstructured and structured.

1) Unstructured random type matrices

Unstructured random type measurement matrices are generated randomly following a given distribution. The generated matrix is of size N x N; then M rows are randomly selected from the N rows. Examples of this type of matrices include the Gaussian, Bernoulli, and Uniform matrices. In this work, we selected the random Gaussian and random Bernoulli matrices for implementation. The mathematical model of each of these two measurement matrices is given below.

a) Random Gaussian matrix

The entries of a Gaussian matrix are independent and follow a normal distribution with expectation 0 and variance 1/M. The probability density function of a normal distribution is

    f(x) = (1 / sqrt(2πσ^2)) exp(−(x − μ)^2 / (2σ^2)),             (6)

where μ is the mean or expectation of the distribution, σ is the standard deviation, and σ^2 is the variance.

This random Gaussian matrix satisfies the RIP with high probability provided the sparsity satisfies

    K ≤ C · M / log(N/K),                                          (7)

where C is a positive constant, K is the sparsity of the signal, M is the number of measurements, and N is the length of the sparse signal [36].

b) Random Bernoulli matrix

A random Bernoulli matrix is a matrix whose entries take the values +1/sqrt(M) or −1/sqrt(M) with equal probabilities. It therefore follows a Bernoulli distribution, which has two possible outcomes labeled n = 0 and n = 1: the outcome n = 1 occurs with probability p = 1/2 and n = 0 occurs with probability q = 1 − p = 1/2. Thus, the probability mass function is

    P(n) = 1/2,   n ∈ {0, 1}.                                      (8)

The random Bernoulli matrix satisfies the RIP with the same probability as the random Gaussian matrix [36].

2) Structured random type matrices

Gaussian and other unstructured matrices have the disadvantage of being slow; large problems are thus not practicable with Gaussian or Bernoulli matrices. Moreover, the hardware implementation of an unstructured matrix is more difficult and requires significant memory space. Structured random matrices, on the other hand, are generated following a given structure, which reduces the randomness, the memory storage, and the processing time. Two structured matrices were selected for implementation in this work: the random partial Fourier and the random partial Hadamard matrices. The mathematical model of each of these two measurement matrices is described below.

a) Random partial Fourier matrix

The Discrete Fourier matrix is the N x N matrix F whose entries are given by

    F_{j,k} = (1/sqrt(N)) exp(−2πi·jk/N),   j, k = 0, 1, ..., N − 1.   (9)

The random partial Fourier matrix, obtained by choosing M random rows of the Discrete Fourier matrix, satisfies the RIP with high probability if

    M ≥ C · K · log^4(N),                                          (10)

where C is a positive constant, M is the number of measurements, K is the sparsity, and N is the length of the sparse signal [36].

b) Random partial Hadamard matrix

The Hadamard measurement matrix is a matrix whose entries are 1 and −1 and whose columns are orthogonal. A matrix H of order n is said to be a Hadamard matrix if its transpose is closely related to its inverse, which can be expressed by

    H H^T = n I_n,                                                 (11)

where I_n is the n x n identity matrix and H^T is the transpose of H.

The random partial Hadamard matrix consists of taking M random rows from the Hadamard matrix. This measurement matrix satisfies the RIP with high probability provided M is on the order of K · log^4(N), up to positive constants, where K is the sparsity of the signal, N is its length, and M is the number of measurements [35].
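The four random matrices above can be generated in a few lines each. A sketch of one possible construction (the normalizations are common conventions rather than the paper's exact choices; scipy.linalg.hadamard requires N to be a power of 2):

    import numpy as np
    from scipy.linalg import hadamard

    rng = np.random.default_rng(2)
    N, M = 256, 64                          # N a power of 2 for the Hadamard construction
    rows = rng.choice(N, size=M, replace=False)

    # a) Random Gaussian: i.i.d. entries with mean 0 and variance 1/M
    Phi_gaussian = rng.standard_normal((M, N)) / np.sqrt(M)

    # b) Random Bernoulli: entries +/- 1/sqrt(M) with equal probability
    Phi_bernoulli = rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(M)

    # c) Random partial Fourier: M random rows of the N x N DFT matrix (9)
    F = np.fft.fft(np.eye(N)) / np.sqrt(N)
    Phi_fourier = F[rows, :]

    # d) Random partial Hadamard: M random rows of the N x N Hadamard matrix (11)
    Phi_hadamard = hadamard(N)[rows, :] / np.sqrt(M)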
B. Deterministic Measurement Matrices

Deterministic measurement matrices are designed following a deterministic construction to satisfy the RIP or to have a low mutual coherence. Several deterministic measurement matrices have been proposed to overcome the problems of random matrices. As mentioned in the previous section, these matrices are of two types: semi-deterministic and full-deterministic. In the following, we investigate and present matrices from both types in terms of coherence and RIP.

1) Semi-deterministic type matrices

To generate a semi-deterministic type measurement matrix, two steps are required: the first step randomly generates the first column, and the second step generates the full matrix by applying a simple transformation to that first column, such as a rotation, to produce each of the remaining rows. Examples of matrices of this type are the Circulant and Toeplitz matrices. In the following, the mathematical models of these two measurement matrices are given; a construction sketch in code is given at the end of this section.

a) Circulant matrix

For a given vector c = (c_1, c_2, ..., c_N), the associated circulant matrix C has entries C_{i,j} = c_{((j − i) mod N) + 1}. Thus, the Circulant matrix has the following form:

    C = [ c_1    c_2    ...  c_N
          c_N    c_1    ...  c_{N−1}
          ...    ...    ...  ...
          c_2    c_3    ...  c_1 ]

If we choose a random subset S of cardinality M, then the partial circulant submatrix consisting of the rows indexed by S achieves the RIP with high probability given that

    M ≥ C · (K log N)^{3/2},                                       (12)

where N is the length of the sparse signal and K its sparsity [34].

b) Toeplitz matrix

The Toeplitz matrix T associated with a vector t = (t_{−(N−1)}, ..., t_0, ..., t_{N−1}) has entries

    T_{i,j} = t_{i−j}.                                             (13)

A Toeplitz matrix is constant along each diagonal, i.e., T_{i,j} = T_{i+1,j+1}; the Circulant matrix is the special case in which each row is a cyclic shift of the previous one. Thus, the Toeplitz matrix has the following form:

    T = [ t_0      t_{−1}   ...  t_{−(N−1)}
          t_1      t_0      ...  t_{−(N−2)}
          ...      ...      ...  ...
          t_{N−1}  t_{N−2}  ...  t_0 ]

If we randomly select a subset S of cardinality M, the Restricted Isometry Constant of the Toeplitz matrix restricted to the rows indexed by S is small with high probability provided

    M ≥ C · (K log N)^{3/2},                                       (14)

where K is the sparsity of the signal and N is its length [34].

2) Full-deterministic type matrices

Full-deterministic type matrices have purely deterministic constructions based on the mutual coherence or on the RIP. In the following, two examples of deterministic constructions of measurement matrices are given: the Chirp sensing and binary Bose-Chaudhuri-Hocquenghem (BCH) code matrices.

a) Chirp sensing matrices

The chirp sensing matrices are matrices whose columns are given by chirp signals. A discrete chirp signal of length m has the form

    v_{r,ω}(l) = exp((2πi/m)(r l^2 + ω l)),   l = 0, 1, ..., m − 1,   (15)

where r is the chirp rate and ω the base frequency. The full chirp measurement matrix can be written as

    Φ = [ U_{r_1}  U_{r_2}  ...  U_{r_m} ],                        (16)

where U_r is an m x m matrix whose columns are the chirp signals with a fixed chirp rate r and base frequency values ω varying from 0 to m − 1. To build the matrix, each block U_r is computed from (15) and the blocks are concatenated as in (16); the sketch at the end of this section illustrates this construction. For a K-sparse signal x measured with chirp code measurements y = Φx, where m is the length of the chirp code, if the sparsity K satisfies the bound in (17), then x is the unique solution returned by the sparse recovery algorithms. The main limitation of this matrix is that the number of measurements is restricted to m = sqrt(N) [29].

b) Binary BCH matrices

Let n denote a divisor of 2^a − 1 for some integer a ...
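Finally, here is the construction sketch referenced above for the semi-deterministic matrices and the chirp sensing matrix (scipy provides circulant and toeplitz helpers; the sizes and normalizations are illustrative assumptions, not the paper's exact setup):

    import numpy as np
    from scipy.linalg import circulant, toeplitz

    rng = np.random.default_rng(3)
    N, M = 256, 64
    rows = rng.choice(N, size=M, replace=False)

    # a) Partial Circulant: random first column, every other column a cyclic shift
    c = rng.standard_normal(N)
    Phi_circulant = circulant(c)[rows, :] / np.sqrt(M)

    # b) Partial Toeplitz: constant diagonals generated by a random vector (13)
    t = rng.standard_normal(2 * N - 1)
    T = toeplitz(t[N - 1:], t[N - 1::-1])    # first column, first row
    Phi_toeplitz = T[rows, :] / np.sqrt(M)

    # c) Chirp sensing matrix: m rows and m^2 columns built from (15)-(16)
    m = 61                                   # number of measurements; N = m^2 = 3721 columns
    l = np.arange(m)
    cols = [np.exp(2j * np.pi * (r * l**2 + w * l) / m) / np.sqrt(m)
            for r in range(m)                # chirp rate r indexes the block U_r
            for w in range(m)]               # base frequency w indexes the column inside U_r
    Phi_chirp = np.stack(cols, axis=1)
    print(Phi_chirp.shape)                   # (61, 3721): M is restricted to sqrt(N)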
