Compressive sensing for subsurface imaging using ground penetrating radar

 
Ali C. Gurbuz^a (corresponding author), James H. McClellan^b and Waymond R. Scott Jr.^b

^a Department of Electrical and Electronics Engineering, TOBB University of Economics and Technology, Ankara 06560, Turkey

^b School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332-0250, USA

Received 27 March 2008;
revised 17 February 2009;
accepted 23 March 2009.
Available online 1 April 2009.

Abstract

The theory of compressive sensing (CS) enables the reconstruction of sparse signals from a small set of non-adaptive linear measurements by solving a convex ℓ1 minimization problem. This paper presents a novel data acquisition system for wideband synthetic aperture imaging based on CS by exploiting the sparseness of point-like targets in the image space. Instead of measuring sensor returns by sampling at the Nyquist rate, linear projections of the returned signals onto random vectors are used as measurements. Furthermore, random sampling along the synthetic aperture scan points can be incorporated into the data acquisition scheme. The required number of CS measurements can be an order of magnitude less than uniform sampling of the space–time data. For the application of underground imaging with ground penetrating radars (GPR), typical images contain only a few targets. Using simulated and experimental GPR data, we show that the resulting target space images are sparser and less cluttered than standard imaging results.

Keywords: Compressive sensing; Synthetic aperture; Ground penetrating radar (GPR); ℓ1 minimization; Subsurface imaging; Sparsity; Source localization

Article Outline

1. Introduction
2. Theory: CS for GPR synthetic aperture imaging
2.1. Creating a dictionary for GPR data
2.2. Compressive sensing data acquisition
2.3. GPR imaging with compressive sensing
3. Selection of algorithm parameters
4. Discussion
5. Results
5.1. Performance in noise
5.2. Effect of the measurement matrix Φ
5.3. Random spatial sampling
5.4. Experimental results
6. Conclusions
References

1. Introduction

Imaging with a synthetic aperture typically requires the collection of a large amount of space–time data. The synthetic aperture is formed by scanning a sensor over the region of interest and recording the time signal returns for many spatial positions. When the sensor is a ground penetrating radar (GPR), the measurements are made by transmitting short electromagnetic pulses into the ground and recording the reflections [1]. The total subsurface response, which is a superposition of the responses from all reflectors within the medium, can be inverted using various imaging algorithms, e.g., time-domain standard backprojection (SBP) [2]. The SBP algorithm requires fine spatial sampling and Nyquist-rate time sampling of the received signals in order to perform matched filtering with the impulse response of the data acquisition process to form an image. Also, SBP does not use any prior knowledge about the image. In this paper, we approach imaging as an inverse problem based on the same model as used in the SBP algorithm, but we compute the image within a regularization framework, so we can take prior information into account.

We formulate the GPR subsurface imaging problem as a dictionary selection problem, where the dictionary entries are produced by discretizing the target space, and synthesizing the GPR model data for each discrete spatial position [3]. Recently, there has been considerable interest in representing signals as a superposition of elements from an overcomplete dictionary, and many methods have been developed to extract an “optimal” representation in terms of the given dictionary elements; this kind of search is called basis pursuit [4]. The assumptions made here are that the targets are point reflectors at discrete spatial positions, the targets do not interact so superposition is valid, and the wave propagation obeys ray theory. The reason we use a point target reflectivity model is simplicity in calculating the received data from the model. The point-target assumption is not crucial; other models can be used as long as the received data can be calculated for the assumed target model.

Generally, potential targets like buried landmines and metallic objects cover a small part of the total subsurface to be imaged. Thus, our basic a priori assumption about the GPR image is sparsity of targets. In other words, the number of targets is much less than the total number of discrete spatial positions; a typical image usually contains fewer than five targets, and rarely more than 10. Sparsity is not limited to the point-target model; other models can be used as long as the targets are sparse or compressible in some transform domain, e.g., smooth targets are sparse in the Fourier or wavelet domain.

Recent results in compressive sensing (CS) state that it is possible to reconstruct a K-sparse signal x = Ψs of length N from M ≪ N measurements [5], [6] and [7]. CS takes non-traditional linear measurements, y = Φx, in the form of randomized projections. The signal vector x, which has a sparse representation in a transform domain Ψ, can be reconstructed exactly (with high probability) from M = C(μ2(Φ,Ψ) log N)K compressive measurements by solving a convex optimization problem of the following form:

(1) min ‖s‖1 subject to y = ΦΨs,

which can be solved efficiently with linear programming. The constant C is usually small and μ(Φ,Ψ) is the mutual coherence [5] between Φ and Ψ.
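The equality-constrained ℓ1 problem (1) can be recast as a linear program by splitting s into positive and negative parts. Below is a minimal sketch assuming NumPy and SciPy are available (the paper itself uses the ℓ1-magic package); the problem sizes and the Gaussian sensing matrix are illustrative choices, not the paper's:

```python
import numpy as np
from scipy.optimize import linprog

def l1_min_eq(A, y):
    """Solve min ||s||_1 subject to A s = y via the standard LP split
    s = u - v with u, v >= 0, so the objective becomes sum(u) + sum(v)."""
    m, n = A.shape
    c = np.ones(2 * n)
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * n), method="highs")
    return res.x[:n] - res.x[n:]

# Illustrative sizes: a 3-sparse length-50 signal from 25 Gaussian measurements.
rng = np.random.default_rng(0)
n, m, k = 50, 25, 3
s_true = np.zeros(n)
s_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # plays the role of Phi*Psi
y = Phi @ s_true
s_hat = l1_min_eq(Phi, y)
```

With M comfortably above K log N, the LP typically recovers the sparse vector exactly, as the theory predicts.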

Instead of taking traditional time-samples, a very small set of informative measurements in the form of random linear projections will still allow us to obtain the subsurface image. This will reduce the load in the data acquisition part of the problem, but put more work on the signal processing. Note that we do not take the random projections of the target space; instead, we take random projections of the received signals at each scan point. Then, we formulate the CS reconstruction problem using our model(s) for the received signals as a function of target space positions.

The next section describes the CS formulation of the subsurface imaging problem and discusses its advantages and how possible problems are handled. Section 3 explains the choice of algorithm parameters and Section 4 discusses handling non-ideal conditions. Section 5 presents results for simulated and experimental GPR data, along with a performance analysis of the proposed algorithm for different parameters.

2. Theory: CS for GPR synthetic aperture imaging

Ground penetrating radar uses electromagnetic (EM) radar pulses to image the subsurface. When the transmitted EM wave reflects from a buried object or from a boundary between media with different dielectric constants, the receiving antenna records the reflected return signal [8] and [9]. Although the response of possible targets like landmines could be more complex, we use the common point-target model [2], [8] and [9] due to its simplicity. It is important to note that the point-target model is not crucial to the method developed in this paper; a more sophisticated target model could be used. We use the model to write the received signal reflected from a point target as a time-delayed and scaled version of the transmitted signal s(t),

(2) ri(t) = σp Ai,p s(t − τi(p)),

where τi(p) is the total round-trip delay between the antenna at the ith scan point and the target at p, σp is the reflection coefficient of the target, and Ai,p is a scaling factor used to account for any attenuation and spreading losses.

For imaging, the GPR sensor is moved in space to collect reflected time data at different scan positions. The collection of these scan points constitutes a synthetic aperture that forms the space–time data. Thus it is essential to know the space–time response from a point target. We typically use a bistatic GPR with separate transmitting and receiving antennas separated by a fixed transmitter–receiver (TR) distance dtr, as shown in Fig. 1(a). Calculating the travel time τi requires knowledge of the path the wave travels from the transmitter antenna to the target and then to the receiver, as well as the wave velocities in the different media.




Fig. 1. (a) Bistatic GPR measurement setup. T denotes the transmit antenna, R the receiving antenna, and dtr the fixed offset between T and R. The midpoint, xi, is chosen as the location of the sensor at the ith scan point. (b) Space–time impulse response of the GPR data acquisition process for a single point target scanned along the synthetic aperture by a bistatic GPR.


At the boundary between two media with different dielectric properties, e.g., air and soil, the propagation direction changes according to Snell's law. Taking wave refraction into account is especially critical for near-field imaging problems, such as mine detection, since the distance between the antennas and the target is relatively small. Calculation of the exact refraction point requires the solution of a fourth-degree polynomial. Several approximations to compute the refraction point are available [10]. After finding the refraction points, the distances d1:4,i of the four segments of the path shown in Fig. 1(a) can be found and the τi times can be calculated as

(3) τi = (d1,i + d4,i)/v1 + (d2,i + d3,i)/v2,

where v1 and v2 denote the propagation velocities in the air and in the soil, respectively. To collect data, the T–R antenna pair is moved along the scan direction. Fig. 1(b) shows the space–time data collected over a single point target when scanning along the x-axis at a fixed y slice. The scan location is referenced to the middle of the T–R pair. The response from a single point target follows a spatially variant curve in the space–time domain, as shown in Fig. 1(b). When using data collected over this region of interest, the processing is called synthetic aperture imaging.
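The travel time in (3) requires the air/soil refraction point. Rather than solving the fourth-degree polynomial exactly, one can exploit Fermat's principle and minimize the travel time over the crossing offset numerically. A sketch of that shortcut (the units, the flat-interface geometry, and the scalar minimization are our assumptions, not the paper's exact procedure):

```python
import numpy as np
from scipy.optimize import minimize_scalar

C0 = 0.2998  # free-space propagation speed in m/ns

def one_way_time(ant, tgt, eps_r):
    """One-way travel time (ns) from an antenna at (x, y, h) above a flat
    air/soil interface to a buried point at (x, y, depth) below it.  The
    crossing offset r is found by Fermat's principle (minimum travel time)
    instead of solving the fourth-degree polynomial."""
    (xa, ya, h), (xt, yt, depth) = ant, tgt
    v_soil = C0 / np.sqrt(eps_r)
    horiz = float(np.hypot(xt - xa, yt - ya))
    if horiz == 0.0:                      # vertical incidence: no refraction
        return h / C0 + depth / v_soil
    def t(r):                             # air leg + soil leg as r varies
        return np.hypot(r, h) / C0 + np.hypot(horiz - r, depth) / v_soil
    res = minimize_scalar(t, bounds=(0.0, horiz), method="bounded")
    return float(res.fun)

def round_trip_delay(tx, rx, target, eps_r):
    """tau_i of (3): transmitter -> target plus target -> receiver."""
    return one_way_time(tx, target, eps_r) + one_way_time(rx, target, eps_r)
```

At vertical incidence this reduces to h/v_air + d/v_soil, which is a convenient sanity check.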

A model which is a simple linear superposition of targets is obtained if we ignore the interaction between the targets:

(4) d(ux,uy,t) = ∫∫∫ AL(ux,uy,x,y,z) σ(x,y,z) s(t − τ(ux,uy;x,y,z)) dx dy dz,

where d(ux,uy,t) is the measured GPR space–time data at scan point (ux,uy), τ(ux,uy;x,y,z) is the total travel time from the transmitting antenna to the target space point (x,y,z) and on to the receiver, σ(x,y,z) is the reflection coefficient at the target space point (x,y,z), and AL(ux,uy,x,y,z) accounts for spreading and losses. In (4), (x,y,z) ranges over the target space πT, which lies in the product space [xi,xf]×[yi,yf]×[zi,zf]. Here (xi,yi,zi) and (xf,yf,zf) denote the initial and final positions of the target space along each axis. The goal is to generate the reflectivity profile, in other words an image of the target space, using the measurements d(ux,uy,t).

Standard imaging algorithms match-filter the measured data with the impulse response of the data acquisition process (Fig. 1(b)) for each spatial location. Time-domain standard backprojection [2], [10] and [11] can be expressed as

(5) f(x,y,z) = Σ(ux,uy) w(ux,uy) d(ux,uy, τ(ux,uy;x,y,z)),

where f(x,y,z) is the created subsurface image, w(ux,uy) is the weighting function on scan positions typically used to reduce sidelobes in images, and the shifted pulse s(t − τ(ux,uy;x,y,z)) is the impulse response.

Normally we work with the discrete versions of (4) and (5). If the measurements and the image are concatenated into vectors, a sampled f is related to the observed discrete (noisy) space–time domain data d through a matrix. This matrix embodies the assumed point-target model, and it can also be used to generate a dictionary of GPR responses.
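The discrete matched-filtering view of (5) can be sketched directly: for each image pixel, pick the data sample at the predicted round-trip delay and sum over the scan points. A minimal, deliberately unoptimized sketch (the `travel_time` callable and the array layout are our assumptions):

```python
import numpy as np

def sbp_image(d, scan_pts, t_axis, grid, travel_time, w=None):
    """Discrete time-domain standard backprojection, cf. (5): for every image
    pixel, sum the (optionally weighted) data samples found at the predicted
    round-trip delay over all scan points."""
    w = np.ones(len(scan_pts)) if w is None else w
    dt, t0 = t_axis[1] - t_axis[0], t_axis[0]
    f = np.zeros(len(grid))
    for i, u in enumerate(scan_pts):
        for j, p in enumerate(grid):
            n = int(round((travel_time(u, p) - t0) / dt))  # delay -> sample
            if 0 <= n < d.shape[1]:
                f[j] += w[i] * d[i, n]
    return f
```

For a single point target, the pixel whose predicted delay matches the data peak accumulates energy coherently across all scan points, while other pixels do not.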

2.1. Creating a dictionary for GPR data

A discrete inverse operator can be created by discretizing the spatial-domain target space πT and synthesizing the GPR model data for each discrete spatial position. Discretization generates a finite set of target points {πj}, j = 1,…,N, where N determines the resolution and each πj is a 3D vector [xj,yj,zj]. Using (2) and (3), the signal at the GPR receiver can be calculated for a given element of {πj} using σp = 1 in (2). For the ith scan point, the jth column of Ψi is the received signal that corresponds to a target at πj, as shown in Fig. 2. The nth index of the jth column can be written as

(6) [Ψi]n,j = s(tn − τi(πj)) / ‖s(τi(πj))‖,

where tn = t0 + n/Fs with 0 ≤ n ≤ Nt − 1, and the denominator normalizes each column by the energy in the time signal. Here Fs is the sampling frequency, t0 is the appropriate initial time, and Nt is the number of temporal samples. Note that the nth component of the vector s(τi(πj)) is s(tn − τi(πj)). Thus each column has unit norm, is independent of the spreading factor in (2), and depends only on the travel time. Repeating (6) for each possible target point in {πj} generates the dictionary Ψi of size Nt×N when the GPR is at the ith scan point.
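Building Ψi column-by-column as in (6) is straightforward; the sketch below uses a placeholder Gaussian pulse rather than the paper's transmitted waveform, and the sampling parameters are illustrative:

```python
import numpy as np

def build_dictionary(pulse, delays, Fs, t0, Nt):
    """Assemble Psi_i as in (6): column j is the transmitted pulse delayed by
    tau_i(pi_j), sampled at t_n = t0 + n/Fs, then normalized to unit energy."""
    t = t0 + np.arange(Nt) / Fs
    Psi = np.stack([pulse(t - tau) for tau in delays], axis=1)   # Nt x N
    return Psi / np.linalg.norm(Psi, axis=0, keepdims=True)

def gauss_pulse(t, width=0.2):
    """Placeholder pulse shape (not the paper's exact s(t)); width in ns."""
    return np.exp(-(t / width) ** 2)
```

Multiplying the dictionary by a sparse weight vector then realizes the superposition ζi = Ψi b of (7).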






Fig. 2. Creating the GPR data dictionary. The antenna is at the ith scan point. Vectors next to the target space and the GPR model data represent the vectorized forms of the target space and the model data, respectively.


This allows us to write the received signal ζi, which consists of reflections from multiple targets, as a linear combination of dictionary columns:

(7) ζi = Ψi b,

where b is a weighted indicator vector for the targets, i.e., b has a nonzero value at index j only if there is a target at πj. If there is a target at πj, the jth index of b should be σj/Ai,j; otherwise, it is zero. Our goal is to find b, which is an image representation of the target space. The loss factor Ai,j is assumed unknown and is absorbed into the estimation of b; if Ai,j were known, it could be included a priori.

2.2. Compressive sensing data acquisition

Standard GPR receivers sample the received signal at a very high rate Fs. In this paper, we propose a new data acquisition model based on compressive sensing which would require many fewer samples to construct the target space image when the target space is sparse. In the spirit of CS, a very small number of “random” measurements carry enough information to completely represent the signal. We define a measurement as a linear projection of the signal onto another vector. In the general case, measurements of a signal x can be written as y=Mx where M is the measurement matrix. When M is the identity matrix, standard time samples of the signal are obtained. Fig. 3 illustrates the standard and compressive sensing data acquisition models.






Fig. 3. Standard time samples vs. compressive samples of a signal.


In compressive sensing, we measure linear projections of ζi onto a second set of vectors {φm}, which can be written in matrix form for the ith scan point as

(8) βi = Φiζi = ΦiΨib,

where βi is the measurement vector, Φi is an M×Nt measurement matrix, and M ≪ Nt. The CS theory depends on the restricted isometry property (RIP) of the product matrix ΦiΨi [5] and [6]. The RIP is basically related to the coherency of Φi and Ψi: we must guarantee that the rows of Φi cannot be sparsely represented by the columns of Ψi, or vice versa. The RIP holds for many pairs of bases like delta spikes and Fourier sinusoids, or sinusoids and wavelets. It has been shown that a random measurement matrix Φi whose entries are i.i.d. Bernoulli or Gaussian random variables will satisfy the RIP with any fixed basis Ψi like spikes, sinusoids, wavelets, Gabor functions, curvelets, etc. [12].

In Section 5 we will show results using several types of measurement matrices. Entries of the Type I random matrix are drawn from a zero-mean Gaussian distribution. Type II random matrices have random ±1 entries with equal probability, and a Type III random matrix is constructed by randomly selecting some rows of an identity matrix of size Nt×Nt, which amounts to measuring random space–time domain samples. Note that although the entries of the measurement matrices are selected randomly from different distributions, once they are selected they are fixed and known. Selecting different types of measurement matrices impacts the imaging performance, but it also leads to different hardware implementations. For example, the system in Fig. 4 could be used to implement the Type I and II measurement matrices. Microwave mixers can be used for the multipliers and low-pass filters for the integrators. The system would require the real-time generation of the signal vectors φm at a rate near the Nyquist rate. A GPR intended to find small shallow targets like anti-personnel landmines would have a maximum frequency in the 2–6 GHz range, while a GPR intended to find larger and deeper targets such as tunnels would have a much lower maximum frequency in the 10–100 MHz range. It is difficult with current technology to generate the signal vectors for the Type I measurement matrix at these rates, but it is relatively straightforward to generate the signal vectors for the Type II measurement matrix; in particular, if pseudorandom binary sequences are used, they can be generated by a state machine at the required rates. In fact, a more traditional GPR that uses pseudorandom binary sequences has been built [13]. Another hardware pseudorandom sequence generator is a linear feedback shift register [14].
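The three measurement-matrix types can be sketched as follows; the unit-norm row scaling is our choice, made to match the normalization assumed later for the noise analysis:

```python
import numpy as np

def measurement_matrix(kind, M, Nt, rng=None):
    """Type I: i.i.d. Gaussian rows; Type II: random +/-1 entries with equal
    probability; Type III: M randomly chosen rows of the Nt x Nt identity,
    i.e., random time samples.  Rows of Types I and II are scaled so each has
    (near) unit norm."""
    rng = rng if rng is not None else np.random.default_rng()
    if kind == "I":
        return rng.standard_normal((M, Nt)) / np.sqrt(Nt)
    if kind == "II":
        return rng.choice([-1.0, 1.0], size=(M, Nt)) / np.sqrt(Nt)
    if kind == "III":
        rows = rng.choice(Nt, size=M, replace=False)
        return np.eye(Nt)[rows]
    raise ValueError(f"unknown type {kind!r}")
```

Once drawn, each matrix would be fixed and known, exactly as the text requires.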






Fig. 4. (a) Data acquisition for GPR at one single scan point and (b) one possible compressive sensing implementation at the GPR receiver.


A sensor based on the Type III measurement matrix would be easy to implement since it would only require relatively minor changes to the sequential-sampling systems used in most commercial GPRs [1] and [15]. If the sampling system were changed from sequential to random, the system would directly measure the compressive measurements βi. Random sampling is well understood and has some timing advantages over sequential sampling [15] and [16]. Depending on the structure of Φi, other analog implementations could also be used [17] and [18]. Other types of Φi are analyzed in Section 5.2.

2.3. GPR imaging with compressive sensing

For imaging, we form a synthetic aperture from L scan points and write a super problem with the stacked matrices Ψ = [Ψ1; …; ΨL], Φ = diag{Φ1,…,ΦL}, and the measurement vector β = [β1; …; βL]. According to CS theory, the target space indicator vector b can be recovered exactly (with overwhelming probability) if the number of measurements M is on the order of K log N [5], by solving the ℓ1 minimization problem

(9) min ‖b‖1 subject to β = ΦΨb.

The optimization problem in (9) is valid for the noiseless case because it uses an equality constraint. If the GPR signal is noisy, i.e., ζi = Ψib + ni, then the compressive measurements βi at the ith scan position are

(10) βi = Φi(Ψib + ni) = ΦiΨib + Φini,

where Ai = ΦiΨi and ni is the concatenation of the noise samples at scan point i, assumed to be white Gaussian and independent of the antenna position i. Since Φi is known, Φini is also Gaussian. Hence, if we constrain the norm of the φim vectors to be one, then Φini has the same per-sample variance as ni.

It is shown in [19], [20], [21] and [22] that stable recovery of the sparsity pattern vector b is possible by solving either of the following relaxed ℓ1 minimization problems:

(11) min ‖b‖1 subject to ‖AT(β − Ab)‖∞ ≤ ε1

or

(12) min ‖b‖1 subject to ‖β − Ab‖2 ≤ ε2,

where A = ΦΨ and ε1,2 are the regularization parameters. Selection of the algorithm parameters is discussed in Section 3. For target space images created using (11), the ℓ1 minimization is a linear program that is easy to implement, while (12) is a second-order cone program [23]. The optimization problems in (9), (11) and (12) all minimize convex functionals, so a global optimum is guaranteed.

In this framework, the discretized SBP algorithm for (5) can be seen as applying the adjoint operator to the standard measured data: SBP uses standard time samples ζ = Ψb and generates a target space vector proportional to ΨTζ. In practice, however, it is hard to implement SBP in this manner due to memory requirements, since Ψ is a huge matrix of size LNt×N. In contrast, the CS method requires construction of A with size only LM×N, where M ≪ Nt.

For the numerical solution of (11) or (12), a convex optimization package called ℓ1-magic [24] is used. The convex optimization programs use interior point methods built on iterations of Newton's method. For optimizing (11) or (12), the cost is dominated by these Newton iterations, and the observed number of iterations is typically quite low, almost independent of the size of the problem [23] and [25]. A theoretical worst-case bound on the number of iterations is O(√N) [25]. The computational complexity is higher than that of backprojection, which scales linearly with the image size and the total number of scan points L. However, the benefits of CS are generality and sparser images, as well as fewer measurements in both the time and spatial domains. To reduce the computational complexity, suboptimal solvers like iterative hard thresholding [26], orthogonal matching pursuit [27] and CoSaMP [28] can also be used.
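As an example of the cheaper suboptimal solvers mentioned above, here is a minimal orthogonal matching pursuit [27] sketch; it is generic, not tied to the GPR dictionary, and the stopping tolerance is our choice:

```python
import numpy as np

def omp(A, y, k, tol=1e-9):
    """Orthogonal matching pursuit: greedily add the column most correlated
    with the residual, then re-fit the active set by least squares; a cheap
    alternative to the l1 programs (9), (11), (12) for k-sparse targets."""
    x = np.zeros(A.shape[1])
    support, r = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))    # most correlated column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef           # orthogonalized residual
        if np.linalg.norm(r) < tol:
            break
    x[support] = coef
    return x
```

For well-conditioned dictionaries and sufficiently sparse b, OMP recovers the same support as the convex programs at a fraction of the cost.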

3. Selection of algorithm parameters

An important part of the proposed subsurface imaging system is the selection of two algorithm parameters: the spatial grid density, N, and the regularization parameter, ε1,2 in (11) or (12), which controls the tradeoff between the sparsity of the solution and the closeness of the solution to the data. Since discrete spatial positions are used to create the sparsity dictionary, the estimates of the target locations are confined to the selected grid. Increasing N makes the grid uniformly finer, but it also increases the complexity of the algorithm. The CS method is suitable for multiresolution grid refinement. Initially a coarse grid might be used to obtain approximate knowledge of possible target locations. Later the grid can be made finer in the regions where better precision is required. There is some risk that a rough initial grid might introduce substantial bias into the estimates. Our results indicate that using a 0.5–1 cm depth resolution usually suffices.

Selecting a proper regularization parameter ε is very important. Otherwise, (11) will either fail to fully reconstruct the sparsity pattern vector and miss some targets (underfitting), or it will try to explain a significant portion of the noise by introducing spurious peaks (overfitting). When the noise statistics are known, a "good" choice of ε1,2 can be made; e.g., for AWGN with variance σ2, selecting ε1 = σ√(2 log N) makes the true b feasible with high probability [20]. For ε2 other selections can be made [19].

In most cases, the noise statistics are unknown and need to be estimated. We propose an automatic method for selecting the regularization parameter based on cross-validation (CV) [29] and [30], which does not require any knowledge or estimate of the noise statistics. The method depends on separating the measurements into estimation and CV sets. The CS method is applied to the estimation data set with an initial selection of ε, and the result is tested on the CV data set. As the algorithm iterates, the prediction performance on the CV set increases. When the method starts to overfit the estimation data set, i.e., to explain part of the noise, performance on the CV set decreases. Further decrease in ε is then not beneficial and the algorithm should be terminated.

The CV based algorithm consists of the following steps, where the subscript E denotes the estimation set, and CV the cross-validation set:

(i) Initialization: Set the initial ε proportional to ‖AETβE‖∞ with a factor α, set the estimate to zero, and set i = 1. An initial ε that keeps the method from overfitting the data can be selected by setting α = 0.99. Note that for α > 1, we would get the all-zero vector as the minimum ℓ1 norm solution.

(ii) Estimation: Apply (11) to estimate the target locations b̂i using AE.

(iii) Cross-validation: if ‖βCV − ACV b̂i‖2 improves on the previous iteration, set εi+1 = αεi, increment i, and iterate from Step (ii). Otherwise, terminate the algorithm.
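Steps (i)–(iii) amount to shrinking ε geometrically while the cross-validation residual keeps improving. A generic sketch with the solver abstracted away (the variable names and the toy interface are ours; `solver` stands for any of (11)/(12)):

```python
import numpy as np

def cv_epsilon(solver, A_E, beta_E, A_CV, beta_CV, eps0, alpha=0.9, max_iter=50):
    """Cross-validated choice of the regularization parameter (Section 3):
    shrink eps geometrically while the CV residual keeps improving and stop
    at the first non-improvement, i.e., when the solver starts fitting noise.
    `solver(A, beta, eps)` returns the image estimate b for a given eps."""
    eps, best = eps0, np.inf
    b_best = np.zeros(A_E.shape[1])
    for _ in range(max_iter):
        b = solver(A_E, beta_E, eps)               # step (ii): estimate on E
        cv_err = np.linalg.norm(beta_CV - A_CV @ b)
        if cv_err >= best:                         # step (iii): CV got worse
            return b_best, eps / alpha
        best, b_best, eps = cv_err, b, alpha * eps
    return b_best, eps
```

The returned pair is the last estimate that still improved the CV residual and the ε at which it was produced.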

4. Discussion

The theory of compressive subsurface imaging outlined in Section 2 is based on two important assumptions: the speed of propagation in the medium is known, and potential targets are point targets. However, in most subsurface imaging problems, the propagation velocity may only be known approximately, and targets will generally not fall on an assumed grid. These problems also affect other subsurface imaging algorithms such as SBP. The proposed algorithm and SBP are robust to these kinds of non-ideal situations.

One way to attack the unknown velocity problem is to use an overcomplete dictionary that consists of several sub-dictionaries [31], one for each possible velocity. The ℓ1 minimization defined in (9), (11) or (12) will be able to select sparsely from this larger overcomplete dictionary to satisfy the measurement constraints of the optimization problem. We have tested such an example and obtained successful results.

The “target off the grid” problem can be addressed with the Dantzig selector constraint (11), but we need to show that this inequality constraint will lead to a sparse solution that is close to the true solution. The constraints in (11) and (12) allow non-idealities. The terms in the Dantzig selector constraint (11), ATβ and ATA, are actually autocorrelations of the transmitted signal s(t). For ATβ, we get a column vector whose nth element is R(τ(πn),τ(πT)), where R is the autocorrelation of the signal s(t) at two time delays corresponding to targets at πn and πT. Here πn is the nth discrete position and πT is the true target position. For the matrix ATA, the element in the nth row and rth column is R(τ(πn),τ(πr)). The shape of the autocorrelation of s(t) affects the resolvability of two targets and the handling of non-idealities. If s(t) does not decorrelate quickly, two close targets might not be resolved. On the other hand, a broad autocorrelation makes it easy to handle non-idealities because the correlation ATβ will still be high even with a velocity mismatch or when the target is off the grid. In the other extreme case, where the autocorrelation of s(t) only peaks at zero lag, i.e., R(·,·) is an impulse, the resolvability of closely spaced targets increases. This tradeoff between resolution and the ability to handle non-ideal conditions is actually an open problem for waveform design. As detailed in [31], the use of redundant dictionaries requires small coherence values. For the simulated and experimental results presented in Section 5, a double-differentiated Gaussian pulse is used, which decorrelates quickly, giving small coherence, and is a compromise between the two properties discussed above.
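The resolution-versus-robustness tradeoff above can be inspected numerically by plotting the autocorrelation of the double-differentiated Gaussian pulse. A sketch, where the 2 GHz center frequency and the time units are assumed values, not taken from the paper:

```python
import numpy as np

def ddgauss(t, fc=2.0):
    """Double-differentiated Gaussian pulse; fc is an assumed center
    frequency in GHz, t in ns."""
    a = (np.pi * fc * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def normalized_autocorr(s):
    """Full autocorrelation R, normalized so the zero-lag value is 1."""
    r = np.correlate(s, s, mode="full")
    return r / r[len(s) - 1]

t = np.arange(-2.0, 2.0, 0.01)        # 4 ns window, 10 ps sampling
r = normalized_autocorr(ddgauss(t))   # decays to ~0 within about 1 ns of lag
```

The rapid decay of `r` away from zero lag is what gives the pulse its small coherence and good two-target resolvability.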

5. Results

A test example illustrates the ideas presented in the previous sections. A 2D slice of the target space, discretized on a 30×30 grid and containing three randomly placed point targets, is used. The true target space distribution is shown in Fig. 5(a).






Fig. 5. (a) Target space, (b) space–time domain GPR response, (c) compressive measurements at each scan point, (d) least-squares solution, (e) solution obtained with the proposed method using (11), and (f) solution obtained with SBP.


For this example a bistatic antenna pair with a fixed antenna height and transmitter–receiver distance is used. The subsurface is dry soil with relative permittivity ε = 4. The noisy space–time domain response of the target space shown in Fig. 5(b) consists of 512×30 raw space–time domain measurements generated in Matlab [32]. Instead of measuring the entire space–time domain response, twenty inner-product measurements are formed at each of the 30 scan points, making 600 measurements in total (Fig. 5(c)). The inner products can be written as the product of the time-domain response with rows of a random matrix Φi of size 20×512 whose entries are drawn independently from a zero-mean Gaussian distribution.

The sparsity pattern vector for the 30×30 target space has length 900 and we have 600 measurements, which results in an underdetermined system of equations, β = Ab. One feasible result is the minimum-norm least squares solution, which gives the target space image shown in Fig. 5(d). The ℓ2 solution is feasible because it satisfies the compressive measurements, but it provides no sensible information about possible targets.

For target space imaging, the Dantzig selector (11) is used with cross-validation to select a proper ε and a stopping condition. The 600 available measurements are divided into two groups: 500 are used for imaging in (11), and 100 for CV. Fig. 5(e) shows the resulting target space image, with the ℓ2 solution of Fig. 5(d) used as a feasible starting point for the convex optimization. The number of targets is not assumed to be known.

It can be seen that the actual target positions are found correctly, and the obtained image is much sparser than the standard backprojection result in Fig. 5(f). Furthermore, the backprojection result uses all of the space–time data. Both of the images in Fig. 5(e) and (f) are normalized to their own maxima and are shown on a logarithmic 40-dB scale. The convex optimization result has less clutter since the ℓ1 norm minimization forces a sparse solution. Typical computation time for the CS algorithm to image an area such as in this example is around 24 s.

An important measure is the successful recovery rate, plotted in Fig. 6 as a function of the number of measurements for a varying number of targets (sparsity level), i.e., 1–10. For each point on the plot, the proposed method is applied one hundred times. If the mean squared error (MSE) between the ground truth image and the recovered image is less than 10% of the norm of the ground truth image, the trial is counted as a successful recovery. The number of successful recoveries over 100 trials yields the probability of successful recovery (PSR) as a function of the measurement and sparsity levels.
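The Monte Carlo success-rate computation can be sketched generically; the 10% threshold follows the criterion above, while the trial generator and the recovery routine are left abstract (their interfaces are our assumptions):

```python
import numpy as np

def is_success(b_true, b_hat, thresh=0.10):
    """Success criterion of Fig. 6: the reconstruction error norm is below
    10% of the ground-truth norm."""
    return np.linalg.norm(b_hat - b_true) < thresh * np.linalg.norm(b_true)

def psr(recover, make_trial, n_trials=100):
    """Probability of successful recovery over repeated random trials.
    `make_trial()` -> (A, beta, b_true); `recover(A, beta)` -> b_hat."""
    wins = 0
    for _ in range(n_trials):
        A, beta, b_true = make_trial()
        wins += bool(is_success(b_true, recover(A, beta)))
    return wins / n_trials
```

Sweeping the number of measurements and the sparsity level inside `make_trial` reproduces the kind of PSR-versus-M curves plotted in Fig. 6.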






Fig. 6. Probability of successful recovery (PSR) vs. the number of measurements (M) for a varying number of targets (P) in the target space.


It can be observed that increasing the sparsity level P, i.e., the number of targets, requires more measurements for the same level of PSR. However, even for P=10, the required number of measurements (60) for high PSR is still far less than the 512 measurements needed for Nyquist rate sampling.

5.1. Performance in noise

To analyze performance versus (additive) noise level, the algorithm is applied to GPR data with SNRs ranging upward from -25 dB. The Dantzig selector (11) is used to construct the target space image. This procedure is repeated 50 times with random initialization of the noise each time. For each SNR level, the variance of the target locations is calculated and plotted in Fig. 7(a) for SBP and for the proposed method with a varying number of measurements. SBP has lower variance values at low SNRs because it uses many more measurements; at moderate and high SNRs the proposed method has much lower variances. The variance of SBP is nearly flat for moderate to high SNRs, probably because SBP has reached its resolution limit. Our method provides much lower variances, indicating a super-resolution property, which has also been observed in similar sparse signal reconstruction applications [33] and [34].






Fig. 7. (a) Variance of target positions vs. SNR; (b) normalized variability of the created images vs. SNR; and (c) probability of successful recovery (PSR) vs. SNR. M is the number of measurements at each scan point for the proposed algorithm. SBP uses M=220, which is the full space–time domain data at each scan point.


Fig. 7(b) shows the normalized variability, i.e., the average MSE between the constructed images and the mean of the constructed images, as a function of SNR for the tested methods. Since both SBP and the proposed method introduce no bias in the target positions (see footnote 9), smaller image variability is a rough indication of robustness to noise and of a correctly reconstructed, sparse image. The proposed method has lower variability than SBP for SNRs View the MathML source, and the variability decreases faster as the number of measurements increases. The fact that the proposed method favors sparse solutions explains this result. Fig. 7(c) shows the PSR as a function of SNR for a varying number of measurements; the MSE metric between the ground truth and the recovered image is again used to decide successful recovery. It is observed that increasing the number of measurements allows the method to perform at lower SNR values. Even with a small number of measurements, i.e., 5 or 10, high recovery rates can be obtained with SNR more than View the MathML source. However, the method has low PSR values, even for a large number of measurements, for SNRs View the MathML source.

5.2. Effect of the measurement matrix Φ

The results shown in Fig. 5 use a measurement matrix whose entries are drawn from a normal distribution. Here the effects of using six different types of measurement matrices are analyzed. Types I–III are as defined in Section 2.2. Types IV, V and VI are random matrices that lie between Types II and III. At each scan point a random subset of the data is selected and projected onto vectors having random ±1 entries with probability of View the MathML source. Type IV uses 50% of the data, Type V 10%, and Type VI 2%. Each matrix is normalized to have unit norm columns.

An average mutual coherence μ(Φ,Ψ) is found for the GPR sparsity dictionary Ψ and each type of measurement matrix Φ by averaging over 100 randomly selected measurement matrices. The mutual coherences tabulated in Table 1 show that the required number of compressive measurements will be similar if Type I or II matrices are used, although Type II is slightly better than Type I. Using a Type III matrix will require approximately (9.82/3.15)² ≈ 9 times more compressive measurements for the same imaging performance. Although Type III requires many more measurements, it has a simpler data acquisition implementation (see Section 2.2). Types IV–VI show the tradeoff between the required number of samples and the simplicity of the data acquisition process.



Table 1. Mutual coherence for different types of measurement matrices.

Type:  I      II     III    IV     V      VI
μ:     3.76   3.15   9.82   3.62   4.56   7.54
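The matrix types and the coherence computation can be sketched as follows (a sketch under assumptions: the common √N scaling for μ is used, which may differ from the paper's normalization, and an identity dictionary stands in for the GPR dictionary Ψ, which is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

def type_i(M, N):
    """Type I: i.i.d. Gaussian entries (as in Section 2.2)."""
    return rng.normal(size=(M, N))

def type_ii(M, N):
    """Type II: random +/-1 entries."""
    return rng.choice([-1.0, 1.0], size=(M, N))

def type_iii(M, N):
    """Type III: a random subset of M out of N time samples,
    i.e., M randomly chosen rows of the identity."""
    rows = rng.choice(N, size=M, replace=False)
    return np.eye(N)[rows]

def mutual_coherence(Phi, Psi):
    """mu(Phi, Psi) = sqrt(N) * max |<phi_i, psi_j>| over unit-norm
    measurement rows and dictionary columns (one common convention)."""
    Phi_n = Phi / np.linalg.norm(Phi, axis=1, keepdims=True)
    Psi_n = Psi / np.linalg.norm(Psi, axis=0, keepdims=True)
    return np.sqrt(Psi.shape[0]) * np.max(np.abs(Phi_n @ Psi_n))
```

With an identity Ψ, a Type III (spike) matrix attains the worst-case coherence √N, matching the qualitative ordering in Table 1 where Type III is the least incoherent.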

A Monte Carlo simulation was performed to test the performance of the proposed method as a function of the number of compressive measurements for each random matrix type. Each random matrix is tested with noisy GPR data having three targets at View the MathML source. The Monte Carlo simulation uses 100 trials, and each trial uses a different random measurement matrix to generate the compressive measurements; the target space image is then constructed using (11). Fig. 8 shows the variability of the images from the 100 trials versus the number of measurements for the six measurement types defined above. Types I, II, IV and V require a similar number of measurements, but Type III and Type VI random matrices require many more measurements to obtain a similar level of performance, as expected from the mutual coherences given in Table 1.






Fig. 8. Image variability vs. the number of measurements for different types of random matrices. The legend indicates the measurement matrix type.


5.3. Random spatial sampling

The convex optimization problem in (11) solves for the target space using measurements from different scan positions jointly. It is therefore possible to omit some scan points and still satisfy the minimal number of measurements needed for correct reconstruction. Fig. 9 shows the results when 15 spatial scan positions are randomly selected and 20 compressive measurements are taken at each scan point. Recall that in Fig. 5, 20 compressive measurements at 30 spatial positions were used. The actual target space is the same as Fig. 5(a). The space–time domain response is shown in Fig. 9(a), where the skipped scan positions are denoted by black stripes. The compressive measurements at the selected scan positions are shown in Fig. 9(b). Convex optimization and standard backprojection results are shown in Fig. 9(c) and (d). The CS result is slightly degraded compared to using all 30 apertures; however, the SBP result is severely degraded compared to the full scan result.






Fig. 9. (a) Space–time domain response of the target space to the GPR data acquisition process at 15 randomly sampled spatial scan positions, (b) compressive measurements at the sampled scan positions, (c) target space image obtained with the CS method, and (d) target space image obtained with SBP.


To test how much further the random spatial sampling can be reduced, a Monte Carlo simulation was performed. Noisy GPR data for three point targets are generated at 30 scan points with View the MathML source. The target space is constructed from compressive measurements taken at different-sized subsets of the 30 scan points. Each subset is randomly selected, and the target space is reconstructed with (11) using only the measurements from the selected scan points. This procedure is repeated 100 times with a new random scan-point selection each time. Two cases are tested. In Case 1, M=10 measurements are taken at each scan point used. In Case 2, the total number of measurements is kept at 300, i.e., when 10 scan points are used, M=30. Table 2 shows the image variability versus the number of scan points for the two cases of the proposed method and for backprojection.
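The random scan-point selection and the stacking of the retained measurements can be sketched as follows (hypothetical helper names; the reconstruction step with (11) itself is omitted):

```python
import numpy as np

def select_scan_points(n_total, n_keep, rng=None):
    """Randomly choose which synthetic-aperture scan positions to keep,
    as in the reduced-spatial-sampling experiment above."""
    rng = np.random.default_rng() if rng is None else rng
    return np.sort(rng.choice(n_total, size=n_keep, replace=False))

def stack_measurements(all_meas, kept):
    """Concatenate the compressive measurements from the kept scan
    points into the single vector used by the joint reconstruction.
    `all_meas` is a list of per-scan-point measurement vectors."""
    return np.concatenate([all_meas[i] for i in kept])
```

Case 2 above simply increases the per-point measurement count M so that the stacked vector keeps a constant total length of 300.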



Table 2. Image variability for a reduced number of spatial samples.

# Scan points        30     20     10     5
Case 1 (M=10)        0.07   0.14   0.40   0.91
Case 2 (varying M)   0.07   0.12   0.22   0.48
Backprojection       0.13   0.43   0.71   0.91

Table 2 shows that the CS method is less affected by the missing spatial samples than backprojection. Decreasing the number of scan points from 30 to 20 increases the variability of the CS method from 0.07 to 0.14, while the variability of backprojection increases by more than three times from 0.13 to 0.43. Dropping to five scan points, Case 2 is the best because it uses 300 compressive measurements while Case 1 uses 50.

5.4. Experimental results

The proposed algorithm is tested on experimental GPR data collected from a simulated mine field [35]. The simulated minefield is set up in a sandbox that is approximately 1.5 m deep and filled with 50 tons of damp, compacted sand. The GPR consists of resistive-vee antennas spaced 12 cm apart. Data acquisition is performed by a vector network analyzer. The GPR collects frequency-domain data from 60 MHz to 8.06 GHz, which is then used to synthesize the time-domain data that would be collected if the transmitting antenna were excited by a differentiated Gaussian pulse with a center frequency of 2.5 GHz. Two scenarios have been studied:

GPR air results: This part presents CS imaging of a 1 in diameter metal sphere held in the air at a height of 36.5 cm on a styrofoam support, shown in Fig. 10(a). The wave speed is known to be View the MathML source. The raw data measured over the target for a 2D slice is shown in Fig. 10(b). Since the proposed CS measurement system has not yet been built in hardware, standard Nyquist-rate space–time sampling is performed and the compressive measurements are created offline. Ten measurements (with Type I random matrices) are taken at each of the 70 scan points, for a total of 700 CS measurements. Two different cases for Φi are tested. Fig. 10(c) shows the actual compressive measurements when a different Φi is used at each scan point i, whereas Fig. 10(d) uses the same Φi for all i. Using the same measurement matrix at each scan point reduces the memory requirements and would be much easier to implement.






Fig. 10. (a) Experimental setup for GPR imaging, (b) measured space–time response of a 1 in metal sphere in air, (c) compressive measurements of the space–time data shown in (b) when a different Φi is used at each scan point i, and (d) when the same Φi is used at each scan point i.


For CS target space reconstruction, the Dantzig selector (11) is used with ε = 0.5‖Aᵀβ‖ = 1.34×10⁻⁴ for the measurement sets shown in Fig. 10. The results of the Dantzig selector for Fig. 10(c) and (d) are shown in Fig. 11(a) and (b), respectively. Note that the target is a 1 in sphere whose bottom is positioned at a height of 36.5 cm. While both measurement sets successfully construct a sparse representation of the target space, it is seen that using different Φi matrices at each scan point has the potential to extract more information about the target space for the same ε level. Since the scanned object is not a perfect point target, the space–time data is not a single hyperbolic response; the two distinct points in Fig. 11(a) indicate this fact about the target. The results of the proposed algorithm can be compared with SBP, which uses all the space–time data in Fig. 10(b). Fig. 11 shows that both CS methods do a much better job of creating a less cluttered and sparser target space image than the backprojection result shown in Fig. 11(c). Note that while backprojection uses a total of 15,400 standard time-domain samples, CS uses only 700 compressive measurements. All images are shown on a 30-dB scale and are normalized to their own maxima.
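The Dantzig selector can be posed as a linear program by splitting x = u − v with u, v ≥ 0. Below is a minimal sketch using SciPy's `linprog`; the paper itself uses the l1-magic package, so this solver choice and the small dense formulation are assumptions for illustration only:

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(A, b, eps):
    """Solve  min ||x||_1  s.t.  ||A^T (b - A x)||_inf <= eps
    as a linear program over (u, v) with x = u - v, u, v >= 0."""
    N = A.shape[1]
    B = A.T @ A
    Atb = A.T @ b
    c = np.ones(2 * N)                    # objective: sum(u) + sum(v) = ||x||_1
    # Two-sided constraint |A^T A x - A^T b| <= eps, written as A_ub @ [u; v] <= b_ub
    A_ub = np.block([[B, -B], [-B, B]])
    b_ub = np.concatenate([eps + Atb, eps - Atb])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    uv = res.x
    return uv[:N] - uv[N:]
```

For realistic image sizes the dense A_ub would be prohibitive; specialized solvers such as l1-magic exploit the operator structure instead.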






Fig. 11. (a) Target space image found with the CS method using the measurement set in Fig. 10(c), (b) target space image obtained with the CS method using the measurement set in Fig. 10(d), and (c) target space image produced by SBP.


Buried target results: In this experiment, multiple targets, such as antipersonnel and antitank mines, metal spheres, nylon cylinders and mine simulants, are buried in a sandbox in an area of View the MathML source. A photograph of the buried targets is shown in Fig. 12(a) [35], along with the burial locations and depths in Fig. 12(b). The phase centers of the GPR antennas are elevated 27.8 cm, and the transmitter–receiver distance is 12 cm. Fig. 13(a) shows space–time data from a line scan at x=0 over four buried targets, which are shown schematically in Fig. 13(b). The hyperbolic responses of the targets can be seen in the space–time data. Ground reflections are removed by time-gating. The wave speed in the soil is View the MathML source for both methods. When SBP (5) is applied to the space–time data in Fig. 13(a), the image in Fig. 13(c) is obtained. The image is normalized to its maximum and shown on a 30-dB scale. Although the four targets are clearly seen, the image is highly cluttered. Applying the CS method using (11) with 15 Type I compressive measurements at each scan point yields the target space image in Fig. 13(d), which is also normalized to its own maximum and shown on the same scale as the SBP image. The CS method generates a much sparser and less cluttered target space image while finding the targets correctly. Thus the CS method performs well even in cases where the algorithm assumptions are not exactly valid, e.g., approximate wave velocity, targets off the discrete grid, or non-point targets.






Fig. 12. (a) Picture of the buried targets and (b) burial map showing the location of targets in the sandbox. The numbers in parentheses are the target depths.





Fig. 13. (a) Space–time GPR data for the x=0 line scan of the burial scenario shown in Fig. 12; (b) burial depths for the vertical slice at x=0. Images of the target space slice obtained by (c) SBP and (d) CS.


Each line scan of the experimental data is imaged with both the CS method and SBP, and the resulting 2D images are stacked to create a 3D image of the subsurface. Fig. 14 shows the top view of the scan area when the energy along the depth axis is summed to the surface for the SBP and CS images. The proposed CS method uses 15 compressive measurements at each scan point. Both top-view images are shown on a 30-dB scale. The actual target locations and sizes are also drawn on these images. It can be seen that both methods have similar imaging performance, but the proposed CS method creates a much sparser image than the SBP method. Although most of the targets are imaged correctly, the deep metal sphere and the M-14 mine are missed by both methods.
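The depth-energy collapse used for the top-view images can be sketched as follows (the −30 dB clipping mirrors the display scale described above; the helper name and axis convention are illustrative):

```python
import numpy as np

def surface_energy(volume, depth_axis=2):
    """Collapse a 3D reconstruction into a top view by summing the
    energy (squared magnitude) along the depth axis, then convert to a
    normalized dB scale clipped at -30 dB for display."""
    energy = np.sum(np.abs(volume) ** 2, axis=depth_axis)
    energy_db = 10 * np.log10(energy / energy.max() + 1e-12)
    return np.clip(energy_db, -30.0, 0.0)
```

Applying this to the stacked CS and SBP volumes would produce top views like Fig. 14(a) and (b).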






Fig. 14. Surface energy images created by (a) SBP and (b) CS. The region in (b) bounded by dashed lines is presented in (c) as a 3D isosurface (at View the MathML source).


To show the spread of the targets in depth, a 3D isosurface image is shown in Fig. 14(c). The isosurface image shows the selected region from Fig. 14(b) which includes two shallow TS-50 mines, one shallow mine simulant and one deep VS 1.6 mine. All targets are clearly seen at their corresponding depths.

6. Conclusions

A novel data acquisition and imaging algorithm for ground penetrating radar, based on sparsity and compressive sensing, was demonstrated. The outlined method can also be applied to the general problem of wideband synthetic aperture imaging with modified sparsity models. The results depend on the random measurement matrix used, but the CS method usually outperforms SBP while using far fewer measurements. The choice of measurement matrix merits further study; we believe the results can be improved by using a more general measurement matrix, e.g., a full random matrix instead of a block-diagonal Φ. The computed image might also be improved by employing a different spatial sparsity penalty, such as the total variation minimization successfully used in Fourier-domain MRI [36].

References

[1] D.J. Daniels, Surface-penetrating radar, Electronics and Communication Engineering Journal 8 (1996) 165–182.

[2] X. Feng, M. Sato, Pre-stack migration applied to GPR for landmine detection, Inverse Problems 20 (2004) 99–115.

[3] A.C. Gurbuz, J.H. McClellan, W.R. Scott Jr., Compressive sensing for GPR imaging, Asilomar Conference (2007) 2223–2227.

[4] S.S. Chen, D.L. Donoho, M.A. Saunders, Atomic decomposition by basis pursuit, SIAM Journal on Scientific Computing 20 (1999) 33–61.

[5] E. Candès, J. Romberg, T. Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information, IEEE Transactions on Information Theory 52 (2006) 489–509.

[6] D. Donoho, Compressed sensing, IEEE Transactions on Information Theory 52 (4) (2006) 1289–1306.

[7] R. Baraniuk, Compressive sensing, IEEE Signal Processing Magazine 24 (4) (2007) 118–121.

[8] D. Daniels, Ground Penetrating Radar, second ed., The Institution of Electrical Engineers (IEE), London, 2004.

[9] L. Peters, J. Daniels, J. Young, Ground penetrating radar as a subsurface environmental sensing tool, Proceedings of the IEEE 82 (12) (1994) 1802–1822.

[10] E.M. Johansson, J.E. Mast, Three-dimensional ground penetrating radar imaging using synthetic aperture time-domain focusing, Proceedings of the SPIE Conference on Advanced Microwave and Millimeter Wave Detectors 2275 (1994) 205–214.

[11] S.-M. Oh, Iterative space–time domain fast multiresolution SAR imaging algorithms, Ph.D. Thesis, Georgia Institute of Technology, Atlanta, GA, 2001.

[12] R. Baraniuk, M. Davenport, R. DeVore, M. Wakin, A simple proof of the restricted isometry property for random matrices, Constructive Approximation 28 (3) (2008) 253–263.

[13] J. Sachs, M-sequence ultra-wideband-radar: state of development and applications, Radar Conference (2003) 224–229.

[14] J. Laska, S. Kirolos, M. Duarte, T. Ragheb, R. Baraniuk, Y. Massoud, Theory and implementation of an analog-to-information converter using random demodulation, IEEE International Symposium on Circuits and Systems, New Orleans, LA (2007) 1959–1962.

[15] M. Kahrs, 50 years of RF and microwave sampling, IEEE Transactions on Microwave Theory and Techniques 51 (6) (2003) 1787–1805.

[16] G. Frye, N. Nahman, Random sampling oscillography, IEEE Transactions on Instrumentation and Measurement (1964) 8–13.

[17] R. Baraniuk, P. Steeghs, Compressive radar imaging, IEEE Radar Conference (2007) 128–133.

[18] J. Tropp, M. Wakin, M. Duarte, D. Baron, R. Baraniuk, Random filters for compressive sampling and reconstruction, ICASSP-2006 3 (2006) 872–875.

[19] E. Candès, J. Romberg, T. Tao, Stable signal recovery from incomplete and inaccurate measurements, Communications on Pure and Applied Mathematics 59 (8) (2006) 1207–1223.

[20] E. Candès, T. Tao, The Dantzig selector: statistical estimation when p is much larger than n, Annals of Statistics 35 (6) (2007) 2313–2351.

[21] J. Haupt, R. Nowak, Signal reconstruction from noisy random projections, IEEE Transactions on Information Theory 52 (9) (2006) 4036–4048.

[22] D. Donoho, M. Elad, V. Temlyakov, Stable recovery of sparse overcomplete representations in the presence of noise, IEEE Transactions on Information Theory 52 (1) (2006) 6–18.

[23] S. Boyd, L. Vandenberghe, Convex Optimization, Cambridge University Press, Cambridge, 2004.

[24] J. Romberg, l1-magic, http://www.acm.caltech.edu/l1magic/.

[25] M. Lobo, L. Vandenberghe, S. Boyd, H. Lebret, Applications of second-order cone programming, Linear Algebra and its Applications 284 (1998) 193–228.

[26] T. Blumensath, M. Yaghoobi, M. Davies, Iterative hard thresholding and l0 regularisation, ICASSP 3 (2007) 877–880.

[27] J. Tropp, A. Gilbert, Signal recovery from random measurements via orthogonal matching pursuit, IEEE Transactions on Information Theory 53 (12) (2007) 4655–4666.

[28] D. Needell, J.A. Tropp, CoSaMP: iterative signal recovery from incomplete and inaccurate samples, Applied and Computational Harmonic Analysis 26 (2009) 301–321.

[29] P. Boufounos, M. Duarte, R. Baraniuk, Sparse signal reconstruction from noisy compressive measurements using cross validation, Proceedings of the IEEE Workshop on SSP (2007) 299–303.

[30] R. Ward, Cross validation in compressed sensing via the Johnson–Lindenstrauss Lemma, 2008, http://www.citebase.org/abstract?id=oai:arXiv.org:0803.1845.

[31] H. Rauhut, K. Schnass, P. Vandergheynst, Compressed sensing and redundant dictionaries, IEEE Transactions on Information Theory 54 (5) (2008) 2210–2219.

[32] A. Gurbuz, J. McClellan, W. Scott Jr., Subsurface target imaging using a multi-resolution 3D quadtree algorithm, Detection and Remediation Technologies for Mines and Minelike Targets X, Proceedings of the SPIE 5794 (2005) 1172–1181.

[33] D. Donoho, Superresolution via sparsity constraints, SIAM Journal on Mathematical Analysis 23 (5) (1992) 1309–1331.

[34] D. Malioutov, M. Cetin, A. Willsky, A sparse signal reconstruction perspective for source localization with sensor arrays, IEEE Transactions on Signal Processing 53 (8) (2005) 3010–3022.

[35] T. Counts, A.C. Gurbuz, W.R. Scott Jr., J.H. McClellan, K. Kim, Multistatic ground-penetrating radar experiments, IEEE Transactions on Geoscience and Remote Sensing 45 (8) (2007) 2544–2553.

[36] M. Lustig, D. Donoho, J. Pauly, Sparse MRI: the application of compressed sensing for rapid MR imaging, Magnetic Resonance in Medicine 58 (6) (2007) 1182–1195.

☆ This work was supported by an ARO-MURI grant, “Multi-Modal Inverse Scattering for Detection and Classification of General Concealed Targets”, under contract number DAAD19-02-1-0252.


Corresponding author.
1The author was with the School of Electrical and Computer Engineering, Georgia Institute of Technology, USA.

2x is taken as the scan direction for the example shown in Fig. 1(a) and (b).
3Although not required, the scan points (ux,uy) are taken within the x- and y-axis boundaries of πT for imaging purposes.
4SBP is equivalent to multiplying by the adjoint operator.
5Note that Ψi is Nt×N, Ψ is LNt×N, Φi is M×Nt, Φ is LM×LNt, βi is M×1 and β is LM×1. Φ is a block diagonal matrix.
6We assume ML ≤ N, where ML is the total number of measurements over the spatial grid.
7Assuming the measurement matrix is an identity.
8To obtain the plot in Fig. 7(a) we used a grid size of 0.01cm to get estimates not limited to a coarse grid.
9For high SNR and number of measurement case.

Signal Processing
Volume 89, Issue 10, October 2009, Pages 1959-1972