


Title:
USING AN MM-PRINCIPLE TO ENFORCE A SPARSITY CONSTRAINT ON FAST IMAGE DATA ESTIMATION FROM LARGE IMAGE DATA SETS
Document Type and Number:
WIPO Patent Application WO/2015/050596
Kind Code:
A2
Abstract:
The mathematical majorize-minimize principle is applied in various ways to process the image data to provide a more reliable image from the backscatter data using a reduced amount of memory and processing resources. A processing device processes the data set by creating an estimated image value for each voxel in the image by iteratively deriving the estimated image value through application of a majorize-minimize principle to solve a maximum a posteriori (MAP) estimation problem associated with a mathematical model of image data from the data. A prior probability density function for the unknown reflection coefficients is used to apply an assumption that a majority of the reflection coefficients are small. The described prior probability density functions promote sparse solutions automatically estimated from the observed data.

Inventors:
ANDERSON JOHN (US)
NDOYE MANDOYE (US)
ODE OLUDOTUN (US)
OGWORONJO HENRY CHIDOZIE (US)
Application Number:
PCT/US2014/042562
Publication Date:
April 09, 2015
Filing Date:
June 16, 2014
Assignee:
UNIV HOWARD (US)
International Classes:
G01S13/02
Attorney, Agent or Firm:
KRATZ, Rudy et al. (Even Tabin & Flannery, LLP, 120 S. LaSalle Street, Suite 160, Chicago, Illinois, US)
Claims:
What is claimed is:

1. A method of creating an image from received data by estimating unknown reflection coefficients for individual voxels in a scene of interest (SOI), the method comprising:

receiving the data;

processing the data with a processing device by creating an estimated image value for individual voxels in the image by iteratively deriving the estimated image value through application of a majorize-minimize technique to solve a maximum a posteriori (MAP) estimation problem having a MAP objective function associated with a mathematical model of image data from the data, wherein the MAP objective function includes:

a data component including at least a portion of the data, and

a prior probability density function for the unknown reflection coefficients to apply an assumption that a majority of the reflection coefficients are small;

displaying the image using the estimated image value of individual voxels of the image.

2. The method of claim 1 wherein the receiving the data comprises receiving data representing transmission site locations of radar pulses, reception site locations of reception of reflections from the radar pulses, radar-return profiles for pairings of the transmission site locations and the reception site locations, and data samples associated with individual radar-return profiles.

3. The method of any of claims 1 and 2 wherein the data component of the MAP objective function φ(x, σ², λ) is

(1/(2σ²)) φ₁(x) + (K/2) log σ²

where σ² is variance of noise in the data and where φ₁(x) = ||y − Ax||²;

where y is the data, A is a K x L system matrix associated with a mathematical model of the data, K = IJL where I is a number of transmission site locations, J is a number of reception sites for each transmit site location, L is a number of pixels in the image, and x is a vector representing the reflection coefficients for the data.

4. The method of any of claims 1-3 wherein the prior probability density function for the unknown reflection coefficients is

f(x_l; λ) = λ / [(log 4)(1 + exp(λ|x_l|))]

where x_l is the estimated image value at an lth voxel, and

s(x_l, λ) = −log f(x_l; λ) = −log( λ / [(log 4)(1 + exp(λ|x_l|))] )

where λ is a constant controlling a degree of application of the assumption that the majority of the reflection coefficients are small.

5. The method of claim 4 wherein the iteratively deriving the estimated image value comprises iteratively deriving the reflection coefficient for a given voxel using, for l = 1, 2, ..., L,

x_l^(m+1) = [G_l^(m) + H_l x_l^(m)] / [H_l + σ²(m) s'(x_l^(m), λ) / x_l^(m)]

where

H_l = ∑_{k=0}^{K−1} n_k A_kl²,

G_l^(m) = ∑_{k=0}^{K−1} A_kl (y_k − [A x^(m)]_k),

x_l^(m+1) is the estimated image value at an lth voxel for iterate number (m + 1), x_l^(m) is the estimated image value at an lth voxel for iterate number (m), σ²(m) is an estimation of variance of noise in the data, and

s'(a, λ) = λ a exp(λ|a|) / ( |a| [1 + exp(λ|a|)] )

where λ is a constant controlling a degree of application of the assumption that the majority of the reflection coefficients are small.

6. The method of claim 5 wherein

σ²(m) = (1/K) ||y − A x^(m)||²

is used as an estimate for the variance σ² of the noise in the data for iterate number (m) and

λ^(m) = arg min_λ φ(x^(m), σ²(m), λ),

as solved using a 1D line search, and where φ is the MAP objective function, is used as an estimate for the constant controlling the degree of application of the assumption that the majority of the reflection coefficients are small.

7. The method of any of claims 1-3 wherein the prior probability density function for the unknown reflection coefficients is

∑_{l=1}^{L} s(x_l)

where x_l is the estimated image value at an lth voxel, and

s(a) = −log(1/|a|).

8. The method of claim 7 wherein the iteratively deriving the estimated image value comprises iteratively deriving the reflection coefficient for a given voxel using

x_l^(m+1) = [G_l^(m) + H_l x_l^(m)] / [H_l + σ²(m) s'(x_l^(m)) / x_l^(m)]

where

G_l^(m) = ∑_{k=0}^{K−1} A_kl (y_k − [A x^(m)]_k),

x_l^(m+1) is the estimated image value at an lth voxel for iterate number (m + 1), x_l^(m) is the estimated image value at an lth voxel for iterate number (m), σ²(m) is an estimation of variance of noise in the data, and

s'(a) = 1/a.

9. The method of claim 8 wherein

σ²(m+1) = (1/K) ||y − A x^(m+1)||²

is used as an estimate for the variance σ² of the noise in the data for iterate number (m + 1) and

λ^(m+1) = arg min_λ φ(x^(m+1), σ²(m+1), λ),

as solved using a 1D line search, and where φ is the MAP objective function, is used as an estimate for the constant controlling the degree of application of the assumption that the majority of the reflection coefficients are small.

10. The method of any of claims 1-3 wherein the prior probability density

function for the unknown reflection coefficients is a function with a peak and a width that decays to zero.

11. The method of claim 10 wherein the function is defined as

f(a; ε, n) = k_n(ε) / (1 + (a/ε)^(2n))

where ε and n are parameters that control width and decay rate for this prior probability density function and where

k_n(ε) = [ ∫ 1 / (1 + (a/ε)^(2n)) da ]^(−1).

12. The method of any of claims 1-3 and 10-11 wherein the iteratively deriving the estimated image value comprises iteratively deriving the reflection coefficient for a given voxel using, for l = 1, 2, ..., L,

x_l^(m+1) = [G_l^(m) + (H_l + σ²(m) B) x_l^(m) − σ²(m) s'(x_l^(m))] / (σ²(m) B + H_l)

where x is a vector representing the reflection coefficients for the data, x_l^(m+1) is the estimated image value at an lth voxel for iterate number (m + 1), x_l^(m) is the estimated image value at an lth voxel for iterate number (m), σ²(m) is the estimate of the variance of noise in the data at the mth iteration, B is a least upper bound of the second derivative of

g(a; ε) = k_n(ε) / (1 + (a/ε)^(2n))

where ε and n are parameters that control width and decay rate for the prior probability density function and where

k_n(ε) = [ ∫ 1 / (1 + (a/ε)^(2n)) da ]^(−1)

and where

s'(a) = (2n/ε) (a/ε)^(2n−1) / (1 + (a/ε)^(2n)).

13. The method of claim 12 wherein the variance of the noise in the data is estimated using a maximum likelihood estimate of noise power.

14. The method of claim 13 wherein the maximum likelihood estimate of noise power for iterate number (m) is given by

σ²(m) = (1/K) ||y − A x^(m)||²

where x^(m) = [x_1^(m), x_2^(m), ..., x_L^(m)].

15. The method of any of claims 1-14 further comprising calculating terms used to obtain the estimated image value,

wherein the calculating reduces processing time and required memory by

accounting for a symmetric nature of a given radar pulse, accounting for similar discrete time delays between transmission of a given radar pulse and reception of reflections from the given radar pulse, and accounting for a short duration of the given radar pulse.

16. The method of claim 15 wherein the calculating the terms comprises:

computing G_l^(m) by applying a hash-table-based computation to

q_ij^(m)[k] = ∑_{l ∈ S_k} d_ijl^(m)

where d_ijl^(m) = α_ijl x_l^(m), S_k = {l = 1, 2, ..., L : n_ijl = k}, and α_ijl represents attenuation of the given radar pulse during travel from an ith transmit location to an lth voxel and back to a jth receiver.

17. The method of claim 16 further comprising computing G_l^(m) via a computation in which

s_ij^(m)[n] = (q_ij^(m) * w)[n]

and where y_ij[k] is a kth sample of a radar-return profile associated with an ith transmit location and jth receiver, s_ij^(m)[k] is the mth estimate of a noise-free component of y_ij[k], w is a discretized version of the given radar pulse, and n_ijl is a discrete time-delay corresponding to rounding the quotient of the time taken by the given radar pulse to travel from a transmitter at the ith transmit location to the lth voxel and back to the jth receiver and a sampling interval, and * denotes discrete-time convolution.

18. The method of any of claims 12-17 wherein the calculating of the terms comprises:

computing H_l by applying a hash-table-based computation, where |S_k| denotes a number of elements in the set S_k.

19. The method of claim 18 wherein the computing H_l further comprises computing H_l via

H_l = (Y_ij * h)[k] |_{k = n_ijl},

where

Y_ij[n] = v(min(n + M, N)) − v(max(0, n − M)),

and where h is a discretized version of a squared radar pulse and 2M + 1 is a number of non-zero samples in the given radar pulse.

20. The method of any of claims 1-19 further comprising:

emitting a radar pulse at specified intervals into a scene of interest;

detecting magnitude of signal reflections from the scene of interest from the radar pulse;

recording position data corresponding to individual radar pulse emissions and individual receptions of the signal reflections; and

creating the initial data set from the position data and detected magnitudes of the signal reflections.

21. The method of any of claims 1-20 further comprising calculating terms used to obtain the estimated image value,

wherein the calculating reduces processing time and required memory by

accounting for a symmetric nature of a given radar pulse, accounting for similar discrete time delays between transmission of a given radar pulse and reception of reflections from the given radar pulse, and accounting for a short duration of the given radar pulse.

22. An apparatus for detecting objects in a scene of interest, the apparatus comprising:

a vehicle;

a plurality of radar transmission devices mounted on the vehicle and configured to transmit radar pulses into a scene of interest;

a plurality of radar reception devices mounted on the vehicle configured to detect magnitude of signal reflections from the scene of interest from the radar pulses;

a location determination device configured to detect location of the vehicle at times of transmission of the radar pulse from the plurality of radar transmission devices and reception of the signal reflections by the radar reception devices;

a processing device configured to process a data set representing transmission site locations of individual ones of the radar pulses, reception site locations of reception of individual ones of the signal reflections, and number of data samples per reception profile by:

creating an estimated image value for each voxel in the image by iteratively deriving the estimated image value through application of a majorize-minimize technique to minimize a maximum a posteriori (MAP) estimation problem having a MAP objective function associated with a mathematical model of the image data from the data, wherein the MAP objective function includes:

a data component including at least a portion of the data, and

a prior probability density function for the unknown reflection coefficients to apply an assumption that a majority of the reflection coefficients are small.

23. The apparatus of claim 22 wherein the processing device is further configured to calculate the data component of the MAP objective function φ(x, σ², λ) as

(1/(2σ²)) φ₁(x) + (K/2) log σ²

where σ² is variance of noise in the data and where φ₁(x) = ||y − Ax||²;

where y is the data, A is a K x L system matrix associated with a mathematical model of the data, K = IJL where I is a number of transmission site locations, J is a number of reception sites for each transmit site location, L is a number of pixels in the image, and x is a vector representing the reflection coefficients for the data.

24. The apparatus of any of claims 22-23 wherein the processing device is further configured to use the prior probability density function for the unknown reflection coefficients as

f(x_l; λ) = λ / [(log 4)(1 + exp(λ|x_l|))]

where x_l is the estimated image value at an lth voxel, and

s(x_l, λ) = −log f(x_l; λ) = −log( λ / [(log 4)(1 + exp(λ|x_l|))] )

where λ is a constant controlling a degree of application of the assumption that the majority of the reflection coefficients are small.

25. The apparatus of claim 24 wherein the processing device is further configured to iteratively derive the reflection coefficient for a given voxel using

x_l^(m+1) = [G_l^(m) + H_l x_l^(m)] / [H_l + σ²(m) s'(x_l^(m), λ) / x_l^(m)]

where

H_l = ∑_{k=0}^{K−1} n_k A_kl²,

G_l^(m) = ∑_{k=0}^{K−1} A_kl (y_k − [A x^(m)]_k),

x_l^(m+1) is the estimated image value at an lth voxel for iterate number (m + 1), x_l^(m) is the estimated image value at an lth voxel for iterate number (m), σ² is variance of noise in the data, and

s'(a, λ) = λ a exp(λ|a|) / ( |a| [1 + exp(λ|a|)] )

where λ is a constant controlling a degree of application of the assumption that the majority of the reflection coefficients are small.

26. The apparatus of claim 25 wherein

σ²(m) = (1/K) ||y − A x^(m)||²

is used as an estimate for the variance σ² of the noise in the data and

λ^(m) = arg min_λ φ(x^(m), σ²(m), λ),

as solved using a 1D line search, and where φ is the MAP objective function, is used as an estimate for the constant controlling the degree of application of the assumption that the majority of the reflection coefficients are small.

27. The apparatus of any of claims 22-23 wherein the processing device is further configured to use the prior probability density function for the unknown reflection coefficients as

∑_{l=1}^{L} s(x_l)

where x_l is the estimated image value at an lth voxel, and

s(a) = −log(1/|a|).

28. The apparatus of claim 27 wherein the processing device is further configured to iteratively derive the reflection coefficient for a given voxel using, for l = 1, 2, ..., L,

x_l^(m+1) = [G_l^(m) + H_l x_l^(m)] / [H_l + σ²(m) s'(x_l^(m)) / x_l^(m)]

where

G_l^(m) = ∑_{k=0}^{K−1} A_kl (y_k − [A x^(m)]_k),

x_l^(m+1) is the estimated image value at an lth voxel for iterate number (m + 1), x_l^(m) is the estimated image value at an lth voxel for iterate number (m), σ²(m) is an estimation of variance of noise in the data, and

s'(a) = 1/a.

29. The apparatus of claim 28 wherein

σ²(m+1) = (1/K) ||y − A x^(m+1)||²

is used as an estimate for the variance σ² of the noise in the data for iterate number (m + 1) and

λ^(m+1) = arg min_λ φ(x^(m+1), σ²(m+1), λ),

as solved using a 1D line search, and where φ is the MAP objective function, is used as an estimate for the constant controlling the degree of application of the assumption that the majority of the reflection coefficients are small.

30. The apparatus of any of claims 22-23 wherein the prior probability density function for the unknown reflection coefficients is a function with a peak and a width that decays to zero.

31. The apparatus of claim 30 wherein the function is defined as

f(a; ε, n) = k_n(ε) / (1 + (a/ε)^(2n))

where ε and n are parameters that control width and decay rate and where

k_n(ε) = [ ∫ 1 / (1 + (a/ε)^(2n)) da ]^(−1).

32. The apparatus of any of claims 22-23 and 30-31 wherein the processing device is further configured to iteratively derive the estimated image value by iteratively deriving the reflection coefficient for a given voxel using

x_l^(m+1) = [G_l^(m) + (H_l + σ²(m) B) x_l^(m) − σ²(m) s'(x_l^(m))] / (σ²(m) B + H_l)

where x is a vector representing the reflection coefficients for the data, x_l^(m+1) is the estimated image value at an lth voxel for iterate number (m + 1), x_l^(m) is the estimated image value at an lth voxel for iterate number (m), σ²(m) is the estimate of the variance of noise in the data at the mth iteration, B is a least upper bound of the second derivative of

g(a; ε) = k_n(ε) / (1 + (a/ε)^(2n))

where ε and n are parameters that control width and decay rate for the prior probability density function and where

k_n(ε) = [ ∫ 1 / (1 + (a/ε)^(2n)) da ]^(−1)

and where

s'(a) = (2n/ε) (a/ε)^(2n−1) / (1 + (a/ε)^(2n)).

33. The apparatus of claim 32 wherein the variance of the noise in the data is estimated using a maximum likelihood estimate of noise power.

34. The apparatus of claim 33 wherein the maximum likelihood estimate of noise power for the iterate number (m) is given by

σ²(m) = (1/K) ||y − A x^(m)||²

where x^(m) = [x_1^(m), x_2^(m), ..., x_L^(m)].

35. The apparatus of any of claims 22-34 wherein the processing device is further configured to calculate terms used to obtain the estimated image value,

wherein the calculating reduces processing time and required memory by

accounting for a symmetric nature of a given radar pulse, accounting for similar discrete time delays between transmission of a given radar pulse and reception of reflections from the given radar pulse, and accounting for a short duration of the given radar pulse.

36. The apparatus of claim 35 wherein the processing device is configured to calculate the terms by:

computing G_l^(m) by applying a hash-table-based computation to

q_ij^(m)[k] = ∑_{l ∈ S_k} d_ijl^(m)

where d_ijl^(m) = α_ijl x_l^(m), S_k = {l = 1, 2, ..., L : n_ijl = k}, and α_ijl represents attenuation of the given radar pulse during travel from an ith transmit location to an lth voxel and back to a jth receiver.

37. The apparatus of claim 36 wherein the processing device is further configured to compute G_l^(m), for i = 1, 2, ..., I; j = 1, 2, ..., J; l = 1, 2, ..., L, via a computation in which

s_ij^(m)[n] = (q_ij^(m) * w)[n]

and where y_ij[k] is a kth sample of a radar-return profile associated with an ith transmit location and jth receiver, s_ij^(m)[k] is the mth estimate of a noise-free component of y_ij[k], w is a discretized version of the given radar pulse, and n_ijl is a discrete time-delay corresponding to rounding the quotient of the time taken by the given radar pulse to travel from a transmitter at the ith transmit location to the lth voxel and back to the jth receiver and a sampling interval, and * denotes discrete-time convolution.

38. The apparatus of any of claims 35-37 wherein the processing device is configured to calculate the terms by:

computing H_l by applying a hash-table-based computation, where |S_k| denotes a number of elements in the set S_k.

39. The apparatus of claim 38 wherein the processing device is configured to compute H_l via

H_l = (Y_ij * h)[k] |_{k = n_ijl},

where

Y_ij[n] = v(min(n + M, N)) − v(max(0, n − M)),

and where h is a discretized version of a squared radar pulse and 2M + 1 is a number of non-zero samples in the given radar pulse.

Description:
USING AN MM-PRINCIPLE TO ENFORCE A SPARSITY CONSTRAINT ON FAST IMAGE DATA ESTIMATION FROM LARGE IMAGE DATA SETS

RELATED APPLICATIONS

[0001] This application claims the benefit of U.S. Provisional application number 61/835,579, filed June 15, 2013 and U.S. Provisional application number 61/835,580, filed June 15, 2013, each of which is incorporated by reference in its entirety herein.

GOVERNMENT LICENSE RIGHTS

[0002] This invention was made with government support under contract number W911NF-1120039 awarded by the US Army Research Laboratory and the Army Research Office. The government has certain rights in the invention.

FIELD OF THE INVENTION

[0003] This invention relates generally to image data processing and more specifically to applying the majorize-minimize mathematical principle to achieve fast image data estimation for large image data sets.

BACKGROUND

[0004] Half of the coalition forces casualties in the Iraq and Afghanistan wars are attributed to land mines and improvised explosive devices (IEDs). Consequently, a critical goal of the US Army is to develop robust and effective land-mine/IED detection systems that are deployable in combat environments. Accordingly, there is a desire to create robust algorithms for sub-surface imaging using ground penetrating radar (GPR) data and thus facilitate higher IED detection rates and lower false alarm probabilities.

[0005] Referring to the example schematic of FIG. 1, a GPR imaging system transmits signals from an above ground transmitter 102 into the ground of a scene-of-interest (SOI) 104. Signals that are reflected off of objects 106, 108, and 110 in the SOI 104 are received by one or more receivers 112 to generate images that convey relevant information about the objects 106, 108, and 110 (also known as scatterers) within the SOI 104. As a transmitted pulse propagates into a SOI 104, reflections occur whenever the pulse encounters changes in the dielectric constant (ε_r) of the material through which the pulse propagates. Such a transition occurs, for example, when the radar pulse moving through dirt encounters a metal object such as an IED. The strength of a reflection due to a patch of terrain can be quantified by its reflection coefficient, which is proportional to the overall change in dielectric constant within the patch.

[0006] In principle, GPR imaging is well-suited for detecting IEDs and land mines because these targets are expected to have much larger dielectric constants than their surrounding material, such as soil and rocks. It should be noted that for a high frequency transmission pulse (i.e., greater than 3 MHz), the backscattered signal of a target can be well approximated as the sum of the backscattered signals of individual elementary scatterers.

[0007] The phrase GPR image reconstruction refers to the process of sub-dividing a SOI into a grid of voxels (i.e., volume elements) and estimating the reflection coefficients of the voxels from radar-return data. Existing image formation techniques for GPR datasets include the delay-and-sum (DAS) or backprojection algorithm and the recursive side-lobe minimization (RSM) algorithm.

[0008] The DAS algorithm is probably the most commonly used image formation technique in radar applications because its implementation is straightforward. The DAS algorithm simply estimates the reflectance coefficient of a voxel by coherently adding up, across the receiver aperture, all the backscatter contributions due to that specific voxel. Although the DAS algorithm is a fast and easy-to-implement method, it tends to produce images that suffer from large side-lobes and poor resolution. The identification of targets with relatively small radar cross section (RCS) is thus difficult from DAS images because targets with a large RCS produce large side-lobes that may obscure adjacent targets with a smaller RCS.
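As a point of reference only, the following is a minimal sketch of the coherent-summation idea behind DAS for a single voxel. It is illustrative rather than the algorithm used by the present teachings; the array names, the use of precomputed discrete delays, and the attenuation weighting are assumptions for the example.

    import numpy as np

    def das_voxel_estimate(y, n_delay, atten, voxel):
        """Delay-and-sum estimate of the reflectance coefficient of one voxel.

        y       : array (I, J, N) of sampled radar-return profiles
        n_delay : array (I, J, L) of discrete round-trip delays, in samples
        atten   : array (I, J, L) of round-trip attenuation factors
        """
        I, J, N = y.shape
        total = 0.0
        for i in range(I):
            for j in range(J):
                k = int(n_delay[i, j, voxel])
                if 0 <= k < N:
                    # coherently add, across the aperture, the sample attributed to this voxel
                    total += atten[i, j, voxel] * y[i, j, k]
        return total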

[0009] The RSM algorithm is an extension of the DAS algorithm that provides better noise and side-lobe reduction, but no improvement in image resolution. Moreover, results from the RSM algorithm are not always consistent. This may be attributed to the algorithm's use of randomly selected apertures or windows through which a measurement is taken. The requirement for a minimum threshold for probability of detection and false alarms would make it difficult to use the RSM algorithm in practical applications.

[0010] Both the DAS and RSM algorithms fail to take advantage of valuable a priori or known information about the scene-of-interest in a GPR context, namely sparsity. More specifically, because only a few scatterers are present in a typical scene-of-interest, in other words most of the backscatter data is zero, it is reasonable to expect better estimates of the reflectance coefficients when this a priori sparsity assumption is incorporated into the image formation process.

[0011] Several linear regression techniques for sparse data set applications are known. Algorithms for sparse linear regression can be roughly divided into the following categories: "greedy" search heuristics, iterative re-weighted linear least squares algorithms, and linear inversion and deconvolution via ℓ1-regularized least-squares.

[0012] "Greedy" search heuristics such as projection pursuit, orthogonal matching pursuit (OMP), and the iterative deconvolution algorithm known as CLEAN comprise one category of algorithms for sparse linear regression. Although these algorithms have relatively low computational complexity, regularized least-squares methods have been found to perform better than greedy approaches for sparse reconstruction problems in many radar imaging problems. For instance, the known sparsity learning via iterative minimization (SLIM) algorithm incorporates a-priori sparsity information about the scene-of-interest and provides good results. However, its high computational cost and memory-size requirements may make it inapplicable in real-time settings.

[0013] Another known approach to sparse linear regression is the iterative re-weighted linear least-squares (IRLS), where the solution of the mathematical ℓ1-minimization problem is given by solving a sequence of re-weighted ℓ2-minimization problems. A conceptually similar approach is to compute the ℓ0-minimization by solving a sequence of re-weighted ℓ1-minimization problems.

[0014] Still another known approach to sparse linear regression is the linear inversion and deconvolution via ℓ1-regularized least-squares (LS) methods. In these methods, the reflection coefficients are estimated using

x̂ = arg min_x ||y − Ax||² + λ ||x||₁    (1)

where λ is the regularization parameter. These methods incorporate sparsity assumptions by approximating the minimum ℓ0 problem, which is to find the most sparse vector that fits the data model. Directly solving the ℓ0-regularization problem is typically not even attempted because it is known to be non-deterministic polynomial-time hard (NP-hard), i.e., very processing intensive to solve. To date, ℓ1-regularization has been the recommended approach for sparse radar image reconstruction.
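For illustration only, and not as the approach of the present teachings, the ℓ1-LS estimate in (1) is commonly computed with iterative soft-thresholding; the sketch below assumes small dense NumPy arrays and a fixed step size.

    import numpy as np

    def l1_ls_ista(A, y, lam, n_iter=200):
        """Minimal ISTA sketch for min_x ||y - A x||^2 + lam * ||x||_1."""
        x = np.zeros(A.shape[1])
        step = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)   # safe step for the quadratic term
        for _ in range(n_iter):
            grad = 2.0 * A.T @ (A @ x - y)               # gradient of ||y - A x||^2
            z = x - step * grad
            # soft-thresholding is the proximal step for the l1 penalty
            x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)
        return x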

[0015] So called ℓ1-LS algorithms incorporate the sparsity assumption, generally give acceptable results, and could be made reasonably fast via speed-up techniques or parallel/distributed implementations. LS-based estimation can, however, be ineffective and biased in the presence of outliers in the data. This is a particular disadvantage because in practical settings, the presence of outliers in measurements is to be expected.

[0016] More specifically, the ℓ1-LS estimation method has been known for some time, wherein the concept has been popularized in the statistics and signal processing communities as the Least Absolute Shrinkage and Selection Operator (LASSO) and Basis Pursuit denoising, respectively. A number of iterative algorithms have been introduced for solving the ℓ1-LS estimation problem. Classical approaches use linear programming or interior-point methods. However, in many real-world and large scale problems, these traditional approaches suffer from high computational cost and lack of estimation accuracy. Heuristic greedy alternatives like Orthogonal Matching Pursuit and Least Angle Regression (LARS) have also been proposed. These algorithms are also likely to fail when applied to real-world, large-scale problems. Several other types of algorithms for providing ℓ1-LS estimates exist in the literature and others continue to be proposed.

[0017] Some of the shortcomings of the DAS and RSM algorithms may be attributed to the fact that their data model does not take into consideration known prior information about SOIs. Since only a few scatterers are present in a typical SOI, better reflection coefficient estimates can be expected when the a priori sparsity assumption is incorporated into the model. The SLIM algorithm produces good results by incorporating the assumption that SOIs are sparsely populated by scatterers. However, its high computational cost may make the SLIM algorithm impractical for large-scale real time applications. The class of ℓ1-regularized least squares (ℓ1-LS) algorithms has been recommended for radar imaging where sparse solutions are expected. This class of algorithms has been found to perform well in a number of applications, such as machine learning and neuroimaging. In off-line applications, where training data is attainable, generalized cross-validation or similar techniques can be used to obtain an optimal or near-optimal regularization parameter. However, in real-time on-line applications such as GPR imaging, there is no straightforward way to choose an appropriate regularization parameter. It may therefore be difficult to effectively take advantage of ℓ1-LS algorithms in GPR imaging problems or other real-time applications.

SUMMARY

[0018] Generally speaking and pursuant to these various embodiments, the mathematical majorize-minimize principle is applied in various ways to process the image data to provide a more reliable image from the backscatter data using a reduced amount of memory and processing resources. In one approach, a processing device processes the data set by creating an estimated image value for each voxel in the image by iteratively deriving the estimated image value through application of a majorize-minimize principle to solve a maximum a posteriori (MAP) estimation problem associated with a mathematical model of image data from the data. As part of this approach, a prior probability density function for the unknown reflection coefficients is used to apply an assumption that a majority of the reflection coefficients are small. The prior probability density function used to apply an assumption that a majority of the reflection coefficients are small can be applied in a variety of ways. So configured, the described approaches produce sparse images with significantly suppressed background noise and sidelobes. The described prior probability density functions promote sparse solutions automatically estimated from the observed data.

[0019] The application of the majorize-minimize principle can be further optimized for the GPR context by accounting for a symmetric nature of a given radar pulse, accounting for similar discrete time delays between transmission of a given radar pulse and reception of reflections from the given radar pulse, and accounting for a short duration of the given radar pulse. Application of these assumptions results in a relatively straightforward algorithm that can produce higher quality images while using reduced memory and processing resources.

[0020] Accordingly, the above methods use the particularized data collected using transmitters and receivers to output images representing objects in a SOI. Devices, including various computer readable media, incorporating these methods then provide for display of image data using reduced processing and memory resources and at increased speed as processed according to these techniques.

[0021] These and other benefits may become clearer through review and study of the following detailed description.

BRIEF DESCRIPTION OF THE DRAWINGS

[0022] The above needs are at least partially met through provision of the methods and apparatuses for receiving and processing image data as described in the following detailed description, particularly when studied in conjunction with the drawings, wherein:

[0023] FIG. 1 comprises a schematic of operation of a prior art GPR system;

[0024] FIG. 2 comprises a schematic of an example radar system as configured in accordance with various embodiments of the invention;

[0025] FIG. 3 comprises a perspective view of an example implementation of a prior art ultra wide-band (UWB) synchronous impulse reconstruction (SIRE) radar system;

[0026] FIG. 4 comprises a schematic demonstrating a prior art approach to obtaining initial data from a SOI;

[0027] FIG. 5 comprises a graph demonstrating application of the mathematical majorize-minimize principle;

[0028] FIG. 6 comprises a graph illustrating a Laplacian prior density function (pdf) and a Laplacian-like prior density function as configured in accordance with various embodiments of the invention;

[0029] FIG. 7 comprises a graph illustrating a Butterworth prior density function as configured in accordance with various embodiments of the invention;

[0030] FIG. 8 comprises a flow diagram of an example algorithm applying the M-M principle to a MAP algorithm using various prior density functions as configured in accordance with various embodiments of the invention;

[0031] FIG. 9 comprises a graph of simulated GPR data used to evaluate various approaches described herein;

[0032] FIG. 10 comprises a displayed image derived from the data of FIG. 9 using a prior art DAS method;

[0033] FIG. 11 comprises a displayed image derived from the data of FIG. 9 using a MAP method as configured in accordance with various embodiments of the invention;

[0034] FIG. 12 comprises a graph displaying receiver operating curves for the prior art DAS method and a MAP method described herein as applied, respectively, to create the images of FIGS. 10 and 11;

[0035] FIG. 13 comprises a graph displaying a zoomed in portion of the graph of FIG. 12;

[0036] FIG. 14 comprises a displayed image derived from the real ARL data using a prior art DAS method;

[0037] FIG. 15 comprises a displayed image derived from the real ARL data using a MAP method as configured in accordance with various embodiments of the invention;

[0038] FIG. 16 comprises a displayed image derived from a set of simulated data using a prior art DAS method;

[0039] FIG. 17 comprises a displayed image of the objects in the SOI of FIG. 16, in this case using image data processed according to a prior art RSM algorithm;

[0040] FIG. 18 comprises a displayed image of the objects in the SOI of FIG. 16, in this case using image data processed according to an LMM algorithm;

[0041] FIG. 19 comprises a displayed image of the objects in the SOI of FIG. 16, in this case using image data processed according to an MAP algorithm using a Jeffreys' prior as configured in accordance with various embodiments of the invention;

[0042] FIG. 20 comprises a displayed image of the objects in the SOI of FIG. 16, in this case using image data processed according to an MAP algorithm using a Laplacian-like prior as configured in accordance with various embodiments of the invention;

[0043] FIG. 21 comprises a displayed image of the objects in the SOI of FIG. 16, in this case using image data processed according to an MAP algorithm using a Laplacian prior as configured in accordance with various embodiments of the invention;

[0044] FIG. 22 comprises a displayed image derived from a set of simulated data using a prior art DAS method;

[0045] FIG. 23 comprises a displayed image of the objects in the SOI of FIG. 22, in this case using image data processed according to prior art RSM algorithm;

[0046] FIG. 24 comprises a displayed image of the objects in the SOI of FIG. 22, in this case using image data processed according to an LMM algorithm;

[0047] FIG. 25 comprises a displayed image of the objects in the SOI of FIG. 22, in this case using image data processed according to an MAP algorithm using a Butterworth prior as configured in accordance with various embodiments of the invention;

[0048] FIG. 26 comprises a displayed image derived from a second set of ARL data using a prior art DAS method;

[0049] FIG. 27 comprises a displayed image of the objects in the SOI of FIG. 26, in this case using image data processed according to a prior art RSM algorithm;

[0050] FIG. 28 comprises a displayed image of the objects in the SOI of FIG. 26, in this case using image data processed according to an LMM algorithm;

[0051] FIG. 29 comprises a displayed image of the objects in the SOI of FIG. 26, in this case using image data processed according to an MAP algorithm using a Butterworth prior as configured in accordance with various embodiments of the invention.

[0052] Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions and/or relative positioning of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of various embodiments of the present invention. Also, common but well-understood elements that are useful or necessary in a commercially feasible embodiment are often not depicted to facilitate a less obstructed view of these various embodiments. It will further be appreciated that certain actions and/or steps may be described or depicted in a particular order of occurrence while those skilled in the art will understand that such specificity with respect to sequence is not actually required. It will also be understood that the terms and expressions used herein have the ordinary technical meaning as is accorded to such terms and expressions by persons skilled in the technical field as set forth above except where different specific meanings have otherwise been set forth herein.

DETAILED DESCRIPTION

[0053] Referring now to the drawings, and in particular to FIG. 2, an illustrative apparatus that is compatible with many of these teachings will now be presented. In a GPR application, a vehicle 202 includes a plurality of radar transmission devices 204 and 206 mounted on the vehicle 202. The radar transmission devices 204 and 206 are configured to transmit radar pulses 208 and 210 into a scene of interest 212. A plurality of J radar reception devices 214 are mounted on the vehicle 202 and configured to detect magnitude of signal reflections from the scene of interest 212 from the radar pulses 208 and 210. The vehicle 202 can be any structure able to carry the radar transmitters and receivers to investigate a scene of interest.

[0054] FIG. 3 illustrates one example implementation: a truck 302 mounted ultra wide-band (UWB) synchronous impulse reconstruction (SIRE) radar system developed by the US Army Research Laboratory (ARL) in Adelphi, MD. This system includes a left transmit antenna 304, a right transmit antenna 306, and 16 receive antennas 314. Other systems may have different numbers of and arrangement of transmit and receive antennas. The transmit and receive antennas are mounted to a support structure 321, which supports these elements on the truck 302. The truck 302 drives through a scene of interest while the transmit antennas 304 and 306 alternately transmit radar pulses and the receiving antennas 314 receive reflections of the transmitted radar pulses from backscatter objects in the scene of interest.

[0055] Referring again to the example of FIG. 2, the positions along the vehicle's 202 path at which a radar pulse is transmitted are referred to as the transmit locations. As the vehicle 202 moves, the two transmit antennas 204 and 206 alternately send respective probing pulses 208 and 210 toward the SOI 212, and the radar-return profiles reflected from the SOI 212 are captured by multiple receive antennas 214 at each transmit location.

[0056] A location determination device 220 detects the location of the vehicle 202 at times of transmission of the radar pulse from the plurality of radar transmission devices 204 and 206 and reception of the signal reflections by the radar reception devices 214. In one example, the location determination device is a global positioning system (GPS) device as commonly known and used, although other position determination devices can be used. Accordingly, the positioning coordinates of the active transmit antenna 204 or 206 and all the receive antennas 214 are also logged. When using the UWB SIRE system of FIG. 3, there is typically a minimum range of detection of objects in the SOI from the vehicle 202 of about 8 meters, a maximum range of about 34 meters, and a cross-range of about 25 meters. The UWB SIRE system uses a FORD EXPLORER as the vehicle 202 such that the transmit antennas 204 and 206 have about a two meter separation, with the 16 receivers between them.

[0057] In one approach, the vehicle 202 includes a processing device 242 in communication with the location determining device 220, the transmit antennas 204 and 206, and the receivers 214 to coordinate their various operations and to store information related to their operations in a memory device 244. Optionally, a display 246 is included with the vehicle 202 to display an image related to the data received from the scanning of the scene of interest 212.

[0058] Due to the large size of the scene-of-interest, an initial data set is not generated by processing all voxels at once. Such an image would have cross-range resolution that varies from the near-range to the far-range voxels. The voxels in the near-range would have much larger resolution than those for the far-range ones. To create GPR images with consistent resolution across the scene-of-interest, we use the mosaicking approach discussed in L. Nguyen, "Signal and Image Processing Algorithms for the U.S. Army Research Laboratory Ultra-wideband (UWB) Synchronous Impulse Reconstruction (SIRE) Radar," ARL Technical Report, ARL-TR-4784, April 2009, which is incorporated by reference and described with reference to FIG. 4. The steps taken to produce a complete image of the scene-of-interest (using the mosaicking approach) are described as follows. The image space associated with the scene-of-interest is divided into 32 sub-images of size 25 x 2 m². Each sub-image has 250 voxels in the cross-range direction and 100 voxels in the down-range direction. Thus, each sub-image has L = 25,000 voxels. The aperture (meaning the distance over which the vehicle travels while accumulating data for a SOI) is divided into 32 sub-apertures corresponding to separate, overlapping distances traveled by the vehicle, where adjacent sub-apertures (or vehicle travel windows) overlap by approximately 2 meters. Each sub-aperture has 43 transmit locations and is approximately of size 12 x 2 m². The radar-return and location positioning measurements associated with a sub-aperture are used to estimate the reflectance coefficients for the corresponding sub-image. The reconstructed sub-images are merged together to obtain the complete image of the scene-of-interest.
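A rough illustration of the bookkeeping behind this mosaicking approach follows; the sizes mirror the text above, while the callable names (estimate_subimage, merge) are hypothetical placeholders for the per-sub-aperture estimator and the merging step.

    # Mosaicking sketch: 32 sub-images of 250 x 100 voxels (L = 25,000 each), one per
    # overlapping sub-aperture of 43 transmit locations, merged into the full scene.
    NUM_SUBIMAGES = 32
    CROSS_RANGE_VOXELS, DOWN_RANGE_VOXELS = 250, 100

    def reconstruct_scene(subaperture_data, estimate_subimage, merge):
        """subaperture_data : list of 32 per-sub-aperture measurement sets
        estimate_subimage   : callable returning a (250, 100) reflectance estimate
        merge               : callable combining the 32 sub-images into the scene image"""
        sub_images = [estimate_subimage(data) for data in subaperture_data]
        return merge(sub_images)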

[0059] In another approach, referring again to FIG. 2, a separate computing device 260 may receive an initial data set from the vehicle based system to further process to create and optionally display images related to the SOI. The computing device 260 will typically include a processing device 262 in communication with a memory 264 to allow for processing the data according to any of the methods described herein. A display 266 may be included with or separate from the computing device 260 and controlled to display images resulting from the processing of the data received from the vehicle 202. Those skilled in the art will recognize and appreciate that such processing devices 242 and 262 can comprise a fixed-purpose hard-wired platform, including, for example, parallel processing devices, or can comprise a partially or wholly programmable platform. All of these architectural options are well known and understood in the art and require no further description here. Moreover, the memory devices 244 and 264 may be separate from or combined with the respective processing devices 242 and 262. Any memory device or arrangement capable of facilitating the processing described herein may be used.

[0060] With respect to the collection of data, consider a single scatterer, with spatial position p_s, located at the center of a voxel (i.e., volume element) within the SOI. The spatial positions of the active transmit antenna and a receive antenna are denoted by p_t and p_r, respectively. If the contributions of measurement noise are momentarily ignored, the relationship between the transmitted signal p(t) and the received signal g_s(t) can be modeled as

g_s(t) = α_s · p(t − τ(p_s, p_t, p_r)) · x_s    (2)

where x_s is the reflection coefficient of the voxel, τ(p_s, p_t, p_r) is the time it takes for the pulse to travel from the transmit antenna to the scatterer and back to the receive antenna, and α_s is the attenuation the pulse undergoes along the round-trip path.

[0061] The single scatterer model in (2) can be generalized to describe all the measurements captured by the SIRE GPR system. The SOI is subdivided into a rectangular grid of L voxels and the unknown reflection coefficient at the lth voxel is denoted by x_l. Extending the model in (2) to the SIRE system, the output of the jth receive antenna at the ith vehicle-stop is given by

y_ij(t) = ∑_{l=1}^{L} α_ijl p(t − τ_ijl) x_l + w_ij(t),  i = 1, 2, ..., I,  j = 1, 2, ..., J    (3)

[0062] In this equation, τ_ijl is the time it takes for the transmitted pulse to propagate from the active transmit antenna at the ith transmit location to the lth voxel and for the backscattered signal to return to the jth receive antenna. The parameter τ_ijl is given by

τ_ijl = (d_il + d_ijl) / c    (4)

where d_il denotes the distance from the active transmit antenna at the ith transmit location to the lth voxel, d_ijl denotes the return distance from the lth voxel to the jth receive antenna when the truck is at the ith transmit location, and c is the speed of light.

[0063] The notation α_ijl is the propagation loss that the transmitted pulse undergoes as it travels from the active transmit antenna at the ith transmit location to the lth voxel and back to the jth receive antenna; the parameter α_ijl is given by (5). The notation w_ij(t) represents the noise contribution.

[0064] The above mathematical model defined in (3) is continuous whereas, in practice, the SIRE GPR system only stores discrete and separately sampled versions of the return signals. Thus, we introduce the following discrete-time signals to adapt the above model to the real world application: for i = 1, 2, ..., I, j = 1, 2, ..., J and n = 0, 1, ..., N − 1,

y_ij[n] = y_ij(n T_s)    (6)

e_ij[n] = w_ij(n T_s)    (7)

where T_s is the sampling interval and N is the number of samples per radar return.

From (6) and (7), we can express y_ij[n] as

y_ij[n] = ∑_{l=1}^{L} x_l α_ijl p(n T_s − τ_ijl) + e_ij[n]    (8)

and write the corresponding system of equations

y_ij[0] = x_1 α_ij1 p(0·T_s − τ_ij1) + x_2 α_ij2 p(0·T_s − τ_ij2) + ... + x_L α_ijL p(0·T_s − τ_ijL) + e_ij[0]    (9)

y_ij[1] = x_1 α_ij1 p(1·T_s − τ_ij1) + x_2 α_ij2 p(1·T_s − τ_ij2) + ... + x_L α_ijL p(1·T_s − τ_ijL) + e_ij[1]    (10)

...

y_ij[N−1] = x_1 α_ij1 p((N−1)·T_s − τ_ij1) + x_2 α_ij2 p((N−1)·T_s − τ_ij2) + ... + x_L α_ijL p((N−1)·T_s − τ_ijL) + e_ij[N−1]    (11)

This system of equations can be written in matrix form as

y_ij = A_ij x + e_ij    (12)

where the L x 1 vector x, and the N x 1 vectors y_ij and e_ij are defined to be

x = [x_1, x_2, ..., x_L]^T,  y_ij = [y_ij[0], y_ij[1], ..., y_ij[N−1]]^T,  e_ij = [e_ij[0], e_ij[1], ..., e_ij[N−1]]^T.

The matrix A_ij is an N x L matrix that contains shifted and scaled versions of the transmitted pulse. From (9) - (11), the matrix A_ij is defined to be the matrix whose (n, l) entry is α_ijl p(n·T_s − τ_ijl), for n = 0, 1, ..., N − 1 and l = 1, 2, ..., L.

We concatenate the sampled data vectors {y_ij} for all transmitter and receiver pairs to obtain the K x 1 data vector y

y = [y_11^T, y_12^T, ..., y_1J^T, y_21^T, y_22^T, ..., y_2J^T, ..., y_I1^T, y_I2^T, ..., y_IJ^T]^T    (15)

where K = I·J·N. Extending (12) to account for all transmitter and receiver pairs yields the desired model

y = A x + e    (16)

where A is a known K x L matrix given by

A = [A_11^T, A_12^T, ..., A_1J^T, A_21^T, A_22^T, ..., A_2J^T, ..., A_I1^T, A_I2^T, ..., A_IJ^T]^T    (17)

and e = [e_11^T, e_12^T, ..., e_IJ^T]^T is the K x 1 noise vector that is assumed to be zero mean Gaussian white noise with variance σ².

[0065] Given the transmitted pulse p(t) and observed data y, the problem is to estimate the reflectance coefficient vector x. Note, the time delays {τ_ijl} and attenuation values {α_ijl} are computed using (4) and (5), respectively, and the geometry defined by the chosen SOI and the locations of the transmit and receive antennas.
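To make the forward model concrete, a minimal sketch of assembling one A_ij block from the sampled pulse, the delays of (4), and the attenuations of (5) follows; it is illustrative only and assumes p is a callable returning the pulse value at a given time.

    import numpy as np

    def build_A_ij(p, tau, alpha, N, Ts):
        """Assemble the N x L block A_ij of shifted, scaled pulse samples.

        p     : callable, transmitted pulse p(t)
        tau   : array (L,) of round-trip delays tau_ijl for this (i, j) pair, from (4)
        alpha : array (L,) of round-trip attenuations alpha_ijl for this (i, j) pair, from (5)
        """
        L = tau.shape[0]
        A_ij = np.zeros((N, L))
        for n in range(N):
            for l in range(L):
                # entry (n, l) is alpha_ijl * p(n*Ts - tau_ijl)
                A_ij[n, l] = alpha[l] * p(n * Ts - tau[l])
        return A_ij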

The Majorize-Minimize Principle

The MM (which stands for majorize-minimize in minimization problems, and minimize-majorize in maximization problems) principle is a prescription for constructing solutions to optimization problems. An MM algorithm minimizes an objective function by successively minimizing, at each iteration, a judiciously chosen objective function that is known as a majorizing function. Whenever a majorizing function is optimized, in principle, a step is taken toward reaching the minimizer of the original objective function. A brief summary of the MM principle is now given with reference to FIG. 5.

[0068] Let f be a function to be minimized over some domain D ⊆ R^L, i.e., the function's minimum value is to be found within the given domain. A real valued function g with domain D x D is said to majorize f if

g(x, y) ≥ f(x) for all x, y ∈ D    (18)

g(x, x) = f(x) for all x ∈ D.    (19)

Suppose the majorizing function g is easier to minimize than the original objective function f. Then, the MM algorithm for minimizing f is given by

x^(m+1) = arg min_x g(x, x^(m)),    (20)

where x^(m) is the current estimate for the minimizer of f. The algorithm defined by (20), which is illustrated in FIG. 5, is guaranteed to monotonically decrease the objective function f with increasing iteration. In other words, a further minimal or smaller value for the function f is obtained with each iteration of the algorithm, stepping between successive functions g. To see this result, first observe that, by definition,

g(x^(m+1), x^(m)) ≤ g(x^(m), x^(m)).    (21)

Now from (18) and (19), it follows that

f(x^(m+1)) ≤ g(x^(m+1), x^(m)) ≤ g(x^(m), x^(m)) = f(x^(m)).    (22)

In other words and as illustrated in FIG. 5, the function g(x, x^(m)) intersects with the function f(x) at x^(m) and also has a further minimum at point x^(m+1). That further minimum point is used in the next iteration as the new expansion point for g, from which a new minimum may be determined.
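The generic iteration in (20) can be written in a few lines; the following sketch is a bare MM loop, assuming the caller supplies a routine that minimizes the majorizer g(·, x^(m)) at the current iterate.

    def mm_minimize(minimize_majorizer, x0, n_iter=50):
        """Generic majorize-minimize loop implementing the iteration in (20).

        minimize_majorizer(x_m) must return argmin_x g(x, x_m), where g majorizes the
        objective f at x_m (g(x, x_m) >= f(x) everywhere and g(x_m, x_m) = f(x_m)),
        so each step cannot increase f, per (21)-(22).
        """
        x = x0
        for _ in range(n_iter):
            x = minimize_majorizer(x)
        return x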

[0070] Maximum A Posteriori Estimation: MAP Objective Function

[0071] There are many estimation methods that could be used to estimate the reflectance coefficients. The described approach uses the MAP method because it allows for the incorporation of a priori information in a relatively straightforward manner. Let Y and X represent the random vectors underlying the data y and reflectance coefficient vector x, respectively. From the white Gaussian noise assumption, it follows from (16) that the conditional density function of Y given that X = x is a multivariate Gaussian density function with mean E[Y] = Ax and covariance matrix C = σ² I_K, where I_K is the K x K identity matrix. The a posteriori density function of X given that Y = y can be expressed as

f_X|Y(x|y) = f_Y|X(y|x) f_X(x) / f_Y(y)

where f_X(x) and f_Y(y) are the joint density functions of the reflectance coefficients and data, respectively. Under the additional assumption that the reflectance coefficients are independent and identically distributed, the MAP estimate is given by

x_MAP = arg max_x f_X|Y(x|y) = arg min_x (K/2) log(σ²) + (1/(2σ²)) φ₁(x) − ∑_{l=1}^{L} log f(x_l)    (26)

where φ₁(x) = ||y − Ax||² and f is the chosen prior density function for the reflectance coefficients. In the following sections, several density functions are described that can provide reasonable models for the prior distribution of the reflectance coefficients.

[0072] First Prior Probability Function:

[0073] Joint Probability Density Function of Reflectance Coefficients

[0074] To promote sparsity, the density function f should be an even function with long tails, and have the property that values near zero occur with high probability. The first approach follows

f(x_l) = k(λ) / (1 + exp(λ|x_l|))    (29)

where the constant λ controls the degree of sparsity and k(λ) is a normalizing constant. The value of the normalizing constant was determined to be k(λ) = λ/log 4.

[0075] Proposed MAP Algorithm for First Prior Probability Function

[0076] We define φ to be the MAP objective function and express this function as

φ(x) = (1/(2σ²)) φ₁(x) + (K/2) log(σ²) + ∑_{l=1}^{L} s(x_l)    (30)

where

φ₁(x) = ||y − Ax||²    (31)

s(a) = −log f(a) = log[ (log 4)(1 + exp(λ|a|)) / λ ]    (32)

From (30), it follows that to minimize φ we must find majorizing functions for both φ₁ and s.

[0077] De Pierro showed in "A modified expectation maximization algorithm for penalized likelihood estimation in emission tomography," Medical Imaging, IEEE Transactions on, vol. 14, no. 1, pp. 132-137, 1995, and incorporated by reference, that a majorizing function for φ₁ at the current iterate x^(m) is given by

q₁(x, x^(m)) = ∑_{k} ∑_{l} c_kl ( n_k A_kl x_l − n_k A_kl x_l^(m) + [Ax^(m)]_k − y_k )²    (33)

where [Ax^(m)]_k is the kth component of the vector Ax^(m), n_k is the number of non-zero elements in the kth row of A, and

c_kl = { 1/n_k,  A_kl ≠ 0
       { 0,      A_kl = 0    (34)

[0078] Using Theorem 4.5 in J. de Leeuw and K. Lange, "Sharp quadratic majorization in one dimension," Computational Statistics and Data Analysis, vol. 53, no. 7, pp. 2471-2484, 2009, which is incorporated by reference herein, the best quadratic majorizer for the function s at the point b is given by

q₂(a, b) = [ s'(b) / (2b) ] (a² − b²) + s(b)    (35)

where the derivative of s is equal to

s'(a) = λ a exp(λ|a|) / ( |a| [1 + exp(λ|a|)] )    (36)

Thus, a majorizing function for the MAP objective function φ is obtained by substituting q₁ for φ₁ and q₂ for s in (30):

Q(x, x^(m)) = (1/(2σ²)) q₁(x, x^(m)) + (K/2) log σ² + ∑_{l=1}^{L} [ (s'(x_l^(m))/(2 x_l^(m))) (x_l² − (x_l^(m))²) + s(x_l^(m)) ]    (37)

Taking the derivative of the majorizing function Q with respect to x_l and setting the result to zero yields the desired update for the lth reflectance coefficient, for l = 1, 2, ..., L,

x_l^(m+1) = [G_l^(m) + H_l x_l^(m)] / [H_l + σ² s'(x_l^(m)) / x_l^(m)]    (38)

where H_l = ∑_{k=0}^{K−1} n_k A_kl² and G_l^(m) = ∑_{k=0}^{K−1} A_kl (y_k − [Ax^(m)]_k).
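A minimal per-voxel sketch of the update (38) follows; it assumes G_l^(m), H_l, σ², and λ have already been computed, and it uses the derivative s' given in (36).

    import numpy as np

    def update_voxel_first_prior(x_m, G_m, H_l, sigma2, lam):
        """One MM update (38) of a single reflectance coefficient under the first prior.

        x_m    : current estimate x_l^(m), assumed non-zero
        G_m    : G_l^(m) = sum_k A_kl (y_k - [A x^(m)]_k)
        H_l    : H_l = sum_k n_k A_kl^2
        sigma2 : current noise-variance estimate
        lam    : sparsity-controlling parameter lambda
        """
        e = np.exp(lam * abs(x_m))
        s_prime = lam * np.sign(x_m) * e / (1.0 + e)   # derivative s'(x) from (36)
        return (G_m + H_l * x_m) / (H_l + sigma2 * s_prime / x_m)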

Estimation of σ² and λ for First Prior Density Function

[0081] One possible choice for the initial reflectance vector is the estimate obtained from the DAS algorithm, which we denote by x_DAS. However, the quantity [A x_DAS]_k, which is an estimate of the noise-free data point, is typically much greater than the kth observed data point, y_k. Therefore, we use x̃ = c x_DAS as the initial estimate where

c = arg min_c ||y − c A x_DAS||² = ∑_k y_k [A x_DAS]_k / ∑_k ([A x_DAS]_k)²    (39)

[0082] The parameters σ² and λ are chosen to be

σ̂² = arg min_{σ²} φ(σ², x̃) = (1/K) ||y − A x̃||²    (40)

λ̂ = arg min_λ φ(λ, x̃)    (41)

where the optimization problem in (41) is solved using a 1D line search. With these choices, σ² and λ in (38) are replaced by σ̂² and λ̂ to estimate the reflectance coefficients in this approach.
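The 1D line search in (41) can be as simple as a coarse grid search over λ; the following sketch is illustrative only, and map_objective is a hypothetical callable that evaluates the MAP objective φ.

    import numpy as np

    def estimate_lambda(map_objective, x_init, sigma2, lam_grid=None):
        """Pick lambda by a 1D search minimizing the MAP objective phi, cf. (41).

        map_objective(x, sigma2, lam) evaluates phi for a candidate lambda.
        """
        if lam_grid is None:
            lam_grid = np.logspace(-3, 3, 61)   # coarse logarithmic grid
        values = [map_objective(x_init, sigma2, lam) for lam in lam_grid]
        return lam_grid[int(np.argmin(values))]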

[0083] Second and Third Prior Probability Functions:

Jeffreys' Non-Informative Prior and Laplacian-like Prior

[0084] The second approach for the prior probability density function is the Jeffreys' non-informative prior

f(x) = ∏_{l=1}^{L} 1/|x_l|    (42)

which is an improper prior because its area is infinite. This prior is in fact an amplitude-scaled invariance prior, which means that the units in which a quantity is measured do not influence the resulting conclusion. It turns out that the Jeffreys' prior is an extremely heavy-tailed density function and therefore is able to enforce the sparsity assumption. The Jeffreys' prior is parameter-free and thus may be suitable for a wide range of applications.

[0086] We refer to the third approach as the Laplacian-like prior, which follows

f(a; λ) = k(λ) / (1 + exp(λ|a|))    (43)

where the constant λ controls the degree of sparsity and k(λ) is a normalizing constant. The value of the normalizing constant was determined to be k(λ) = λ/log 4 by solving the equation

∫_{−∞}^{∞} k(λ) / (1 + exp(λ|a|)) da = 1    (44)

Hence, the proposed sparsity-inducing probability density function can be expressed as

f(a; λ) = λ / [ (log 4)(1 + exp(λ|a|)) ]    (45)

Fig. 6 illustrates plots of the Laplacian-like prior density function 608 and the Laplacian probability density function 612 for the purpose of making a comparison.

[0087] GMR Algorithm for Jeffreys' and Laplacian-Like Priors

[0088] In this section, we present the MM based GMR algorithm, which reconstructs GPR images by iteratively minimizing the negative MAP objective function, for the Jeffreys' and Laplacian-like priors. First, we consider the case where the noise variance, σ², and prior parameter, θ, are known. Then we consider the case where both of these quantities are unknown.

[0089] Where the noise variance and PDF parameter are known, it will be convenient to define a function φ to be the objective function on the right hand side of (26) and express this function as

φ(x) = (1/(2σ²)) φ₁(x) + (K/2) log(σ²) + ∑_{l=1}^{L} s(x_l; θ)    (46)

where

φ₁(x) = ||y − Ax||²    (47)

s(a; θ) = −log f(a; θ)    (48)

For convenience, we will refer to the function s(·; θ) as the negative log prior.

[0090] From (46), it follows that to minimize the negative MAP objective function φ using the MM technique we must find majorizing functions for both φ₁ and s. In the context of reconstructing positron emission tomography images, De Pierro developed a majorizing function for linear least-squares objective functions (see A.R. De Pierro, "A modified expectation maximization algorithm for penalized likelihood estimation in emission tomography," IEEE Transactions on Medical Imaging, pp. 132-137, 1995, which is incorporated by reference herein). Using his result, a majorizing function for φ₁ at the current iterate x^(m) is given by

q₁(x, x^(m)) = ∑_{k} ∑_{l} r(x_l, x^(m))    (49)

where

r(x_l, x^(m)) = c_kl ( n_k A_kl x_l − n_k A_kl x_l^(m) + [Ax^(m)]_k − y_k )²    (50)

n_k = number of non-zero elements in the kth row of A    (51)

c_kl = { 1/n_k,  A_kl ≠ 0
       { 0,      A_kl = 0    (52)

[Ax^(m)]_k = ∑_{l} A_kl x_l^(m)    (53)

[0091] Now, we address the problem of determining a majorizing function for the function s(·; θ) by exploiting the following theorem by de Leeuw and Lange (J. de Leeuw and K. Lange, "Sharp Quadratic Majorization in One Dimension," Computational Statistics and Data Analysis, vol. 53, no. 1, p. 2478, February 2004, which is incorporated by reference herein). Suppose d(a) is an even, differentiable function on R such that the ratio d'(a)/a is decreasing on (0, ∞). Then, the following function

q(a, b) = [ d'(b) / (2b) ] (a² − b²) + d(b)    (54)

is the best quadratic majorizing function for d at the point b. Assuming that the negative log prior satisfies the conditions of de Leeuw and Lange's theorem, then a majorizing function for this function at the point b is

q₂(a, b) = [ s'(b; θ) / (2b) ] (a² − b²) + s(b; θ)    (55)

1----Λ- . Given the results in (49) and ί56) . we can conclude that

1 K

; q. {x, x l ) -\- ~~- iog a *

s ι xj ; 0)

X , ' I + S [ X, ' : 0 ) O I

2x} is a majorizing function for the negative MAP objective function φ about the current iterate x [ rn K The proposed GMR algorithm of this approach follows by taking the derivative of Q with respect to x t and setting the result to zero. From straightforward calcu lations

Equating the derivative in (58) to zero yields the desired update

x_l^(m+1) = ( H_l x_l^(m) + G_l^(m) ) / ( H_l + σ² s'(x_l^(m); θ)/x_l^(m) )    (59)

where G_l^(m) and H_l are defined as

G_l^(m) ≜ Σ_{k=1}^{K} A_kl ( y_k − [Ax^(m)]_k )    (60)

H_l ≜ Σ_{k=1}^{K} n_k A_kl²    (61)

A careful observation of (59) reveals that the process of estimating the individual reflection coefficients is decoupled in the sense that, for any voxel, the computation of the next estimate depends only on the previous estimate. As a result, this algorithm can be easily parallelized to enhance its computational speed. It should also be mentioned that a fast, memory-efficient method for computing (60) and (61) has been developed (see US Patent Application Number 14/245,733 filed February 19, 2014, which is incorporated by reference herein), and these fast methods are described below. Therefore, this GMR algorithm is expected to be applicable to real-world GPR imaging problems with high-dimensionality data.
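By way of illustration only, the decoupled update in (59) can be applied to every voxel at once with element-wise array operations. The following minimal Python/NumPy sketch assumes that the arrays H and G_m and the ratio function s'(·; θ)/(·) have been computed elsewhere; all names are illustrative and are not taken from the described implementation.

import numpy as np

def gmr_update(x_m, H, G_m, sigma2, sprime_over_a):
    # One pass of the GMR update in (59): each voxel's next estimate depends
    # only on its own previous estimate, so the per-voxel loop reduces to
    # element-wise operations that parallelize trivially.
    denom = H + sigma2 * sprime_over_a(x_m)
    return (H * x_m + G_m) / denom

# Example ratio for the Jeffreys' prior, s'(a)/a = 1/a^2 (cf. (63)); the small
# constant guards against division by zero at exactly-zero voxels.
jeffreys_ratio = lambda a: 1.0 / (a**2 + 1e-12)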

For the Jeffreys' prior, we have that

s(a) = − log (1/|a|) = log |a|    (62)

s'(a)/a = 1/a²    (63)

Because the conditions of de Leeuw and Lange's theorem are met, the GMR algorithm for the Jeffreys' prior is

x_l^(m+1) = ( H_l x_l^(m) + G_l^(m) ) / ( H_l + σ²/(x_l^(m))² )    (64)

[0095] For the Laplacian-like prior, the negative log prior s(a; λ) = − log f(a; λ), with f(a; λ) given in (45), and the corresponding ratio s'(a; λ)/a are given in (65) and (66), respectively. Like the Jeffreys' prior, the Laplacian-like prior satisfies the conditions of de Leeuw and Lange's theorem, so the corresponding GMR algorithm is

x_l^(m+1) = ( H_l x_l^(m) + G_l^(m) ) / ( H_l + σ² s'(x_l^(m); λ)/x_l^(m) )    (67)

[0096] Similarly, as an alternative approach, the Laplacian prior can be used as well. In this case, the negative log prior s(a; λ) and the ratio s'(a; λ)/a for the Laplacian prior follow as

s(a; λ) = − log [ (λ/2) exp(−λ|a|) ] = λ|a| − log(λ/2)    (68)

s'(a; λ)/a = λ/|a|    (69)

Like the Jeffreys' prior and Laplacian-like prior, the Laplacian prior satisfies the conditions of de Leeuw and Lange's theorem, so the corresponding GMR algorithm is

x_l^(m+1) = ( H_l x_l^(m) + G_l^(m) ) / ( H_l + σ² λ/|x_l^(m)| )    (70)

[0097] Previously, we assumed that the noise variance, σ², and prior parameter, θ, were known. However, oftentimes in practice these quantities are unknown. We present in this section a procedure to jointly estimate the reflection coefficient vector, noise power, and prior parameter. For simplicity, we will consider the Laplacian-like prior parameter.

[0098] First, we modify the notation to account for the fact that the negative MAP objective function, φ, and majorizing function, Q, now explicitly depend on λ and σ². Thus, φ(x) and Q(x; x^(m)) are now expressed as φ(x, λ, σ²) and Q(x, λ, σ²; x^(m)), respectively. The cyclic optimization method given below could in principle be used to jointly estimate the reflectance coefficient vector, x, noise variance, σ², and Laplacian-like prior parameter, λ:

for m = 0, 1, ..., niter do

    x^(m+1) = arg min_x φ(x, λ^(m), σ^{2(m)})    (71)

    λ^(m+1) = arg min_{λ>0} φ(x^(m+1), λ, σ^{2(m)})    (72)

    σ^{2(m+1)} = arg min_{σ²>0} φ(x^(m+1), λ^(m+1), σ²)    (73)

end for

where λ^(m) and σ^{2(m)} are the m-th iterates of λ and σ², respectively.

[0099] Instead of solving the minimization problem in (71), it will be convenient to find an iterate x^(m+1) that decreases the negative MAP objective function φ(x, λ, σ²) in the following sense:

φ(x^(m+1), λ^(m), σ^{2(m)}) ≤ φ(x^(m), λ^(m), σ^{2(m)})    (74)

Taking into account the discussion above regarding the noise variance and PDF parameter, an iterate that satisfies (74) can be found by minimizing the majorizing function Q(x, λ, σ²; x^(m)) with λ and σ² replaced by λ^(m) and σ^{2(m)}, respectively. Thus, a solution to (71) is

x^(m+1) = arg min_x Q(x, λ^(m), σ^{2(m)}; x^(m))    (75)

The minimization problem in (72) can be solved using a one-dimensional line search such as the golden section search method. Implicitly, we have assumed that an interval can be determined that contains the minimizer of φ(x^(m+1), λ, σ^{2(m)}). Finally, we estimate the noise power by obtaining a closed-form expression for the minimizer of φ(x^(m+1), λ^(m+1), σ²). Putting the above ideas together, an alternative to the algorithm defined by (71)-(73) is summarized below:

for m = 0, 1, ..., niter do

    x^(m+1) = arg min_x Q(x, λ^(m), σ^{2(m)}; x^(m))    (76)

    λ^(m+1) = arg min_{λ_a ≤ λ ≤ λ_b} φ(x^(m+1), λ, σ^{2(m)})    (77)

            = arg min_{λ_a ≤ λ ≤ λ_b} Σ_{l=1}^{L} − log f(x_l^(m+1); λ)    (78)

    σ^{2(m+1)} = arg min_{σ²>0} φ(x^(m+1), λ^(m+1), σ²)    (79)

              = arg min_{σ²>0} (K/2) log σ² + (1/σ²) φ₁(x^(m+1))    (80)

end for

[0100] We will now show that the cyclic minimization algorithm defined by (76)-(80) is guaranteed to monotonically decrease the MAP objective function φ(x, λ, σ²). From (76), it follows that φ(x^(m), λ^(m), σ^{2(m)}) ≥ φ(x^(m+1), λ^(m), σ^{2(m)}). Also, (78) implies that φ(x^(m+1), λ^(m), σ^{2(m)}) ≥ φ(x^(m+1), λ^(m+1), σ^{2(m)}). Finally, it can be observed from (80) that φ(x^(m+1), λ^(m+1), σ^{2(m)}) ≥ φ(x^(m+1), λ^(m+1), σ^{2(m+1)}). Therefore,

φ(x^(m), λ^(m), σ^{2(m)}) ≥ φ(x^(m+1), λ^(m+1), σ^{2(m+1)})    (81)

Thus, the iterates obtained from the cyclic minimization technique described above are guaranteed to monotonically decrease the objective function φ(x, λ, σ²).

[0101] The solution to (76) was discussed above and is given by (67) with λ and σ² replaced by λ^(m) and σ^{2(m)}, respectively:

x_l^(m+1) = ( H_l x_l^(m) + G_l^(m) ) / ( H_l + σ^{2(m)} s'(x_l^(m); λ^(m))/x_l^(m) )    (82)

To obtain the estimate for the noise power, we set the derivative of φ(x^(m+1), λ^(m+1), σ²) with respect to σ² to zero and solve the resulting equation. The derivative of φ(x^(m+1), λ^(m+1), σ²) with respect to σ² equals

∂φ(x^(m+1), λ^(m+1), σ²)/∂σ² = K/(2σ²) − (1/(2σ⁴)) ||y − Ax^(m+1)||²    (83)

Setting this derivative to zero yields for this approach the maximum likelihood estimate of the noise power

σ^{2(m+1)} = (1/K) ||y − Ax^(m+1)||²    (84)

[0102] Now, given our discussion above, it follows for the case of the Laplacian prior that the update for the reflection coefficient vector is equal to

x_l^(m+1) = ( H_l x_l^(m) + G_l^(m) ) / ( H_l + σ^{2(m)} λ^(m)/|x_l^(m)| )    (85)

Recall the following well-known result: given n observations w_1, w_2, ..., w_n of a Laplacian probability density function with unknown parameter μ, the maximum likelihood estimate of μ is μ̂ = (1/n) Σ_i |w_i|. From this result (and the fact that λ corresponds to 1/μ in the parameterization of (68)), we can conclude that

λ^(m+1) = L / Σ_{l=1}^{L} |x_l^(m+1)|    (86)

is the solution to (78) for the Laplacian prior. As in the Laplacian-like prior case, the update for the noise variance is given by (84). Finally, because the Jeffreys' prior does not depend on a parameter, the update for x is given by (64) with σ² replaced by the iterate σ^{2(m)} obtained from (84).
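As a sketch of how the cyclic scheme (76)-(80) might be realized for the Laplacian prior, the following Python fragment chains the x-update (85) with the closed-form updates (86) and (84). The callables compute_G and apply_A (standing in for the fast routines described below and for the product Ax) and all other names are assumptions made for illustration, not part of the described implementation.

import numpy as np

def laplacian_gmr(y, x0, H, compute_G, apply_A, lam0, sigma2_0, niter=50):
    # Cyclic minimization: x via (85), lambda via the ML update (86),
    # and the noise power via the closed form (84).
    x, lam, sigma2 = x0.copy(), lam0, sigma2_0
    K = y.size
    for _ in range(niter):
        G = compute_G(x)
        x = (H * x + G) / (H + sigma2 * lam / (np.abs(x) + 1e-12))
        lam = x.size / (np.sum(np.abs(x)) + 1e-12)       # eq. (86)
        sigma2 = np.sum((y - apply_A(x)) ** 2) / K        # eq. (84)
    return x, lam, sigma2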

[0103] Fourth Prior Probability Function: Butterworth Prior

[0104] In this approach, we present a novel density function, known as the Butterworth density function, which can be used to incorporate the sparsity assumption. The Butterworth density function is defined to be

f_BW(a; e) = k_n(e) / ( 1 + (a/e)^{2n} )    (87)

where

k_n(e) = n sin(π/(2n)) / (π e)    (88)

is a normalization factor, and e and n are parameters that control the width and decay rate of the density function, respectively. Fig. 7 illustrates a plot of the Butterworth density function for e = 0.1 and n = 3. One should observe that for large n the Butterworth density function approximates the uniform distribution and k_n(e) ≈ 1/(2e). It should also be noted that we fix the parameter n for our application, so we do not explicitly include it in the notation for the Butterworth density function. With the Butterworth density function, the negative log prior becomes

φ₂(x) = − log ∏_{l=1}^{L} f_BW(x_l; e)    (89)

[0105] MAP Approach for Butterworth Prior

[0106] A MAP algorithm using the Butterworth density function as the prior density function is developed as follows.

[0107] The MAP algorithms we have developed are based on the majorize-minimize (MM) methodology, so we now introduce this concept. An MM algorithm reduces an objective function by iteratively minimizing a carefully chosen majorizing function. Let f be a function to be minimized within some domain D ⊂ ℝ^L. A real-valued function g is said to be a majorizing function for f at the point y ∈ D if

g(x; y) ≥ f(x) for all x ∈ D    (90)

g(y; y) = f(y)    (91)

If we succeed in obtaining a majorizing function for f, then the associated MM algorithm for minimizing f is given by

x^(m+1) = arg min_{x ∈ D} g(x; x^(m))    (92)

From (90) and (91), it follows that

f(x^(m+1)) ≤ g(x^(m+1); x^(m)) ≤ g(x^(m); x^(m)) = f(x^(m))    (93)

where the second inequality follows from (92). This implies that the iterates monotonically decrease the function f as the iteration number increases.
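The recipe in (90)-(93) can be captured in a few generic lines; the sketch below (Python, with placeholder callables) is only meant to make the monotone-decrease property easy to verify numerically and is not tied to any particular majorizer.

def mm_minimize(f, argmin_of_majorizer, x0, niter=100):
    # Generic MM iteration (92): each step minimizes a majorizing surrogate
    # anchored at the current iterate, so by (93) f(x_{m+1}) <= f(x_m).
    x = x0
    history = [f(x)]
    for _ in range(niter):
        x = argmin_of_majorizer(x)   # x^(m+1) = argmin_x g(x; x^(m))
        history.append(f(x))         # non-increasing sequence of objective values
    return x, history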

[0108] Recall the negative MAP objective function is given by

φ(x) = (K/2) log(σ²) + (1/(2σ²)) φ₁(x) + φ₂(x)    (94)

Note that if we find functions q₁(·; y) and q₂(·; y) that are majorizing functions for φ₁ and φ₂, respectively, where y ∈ ℝ^L, then

q(x; y) = (K/2) log(σ²) + (1/(2σ²)) q₁(x; y) + q₂(x; y)    (95)

is a majorizing function for φ about the point y. In this case, the associated MM algorithm is given by

x^(m+1) = arg min_x q(x; x^(m))    (96)

[0109] Now we must determine majorizing functions q₁(·; y) and q₂(·; y). As noted above, De Pierro constructed a majorizing function for linear least-squares objective functions such as φ₁. We now summarize his result. First, we rewrite the least-squares objective function as

φ₁(x) = Σ_{k=1}^{K} ( y_k − [Ax]_k )² = Σ_{k=1}^{K} ( y_k² − 2 y_k [Ax]_k + ([Ax]_k)² )    (97)

where y_k is the k-th component of y. We can obtain a majorizing function for φ₁ by first finding a majorizing function for ([Ax]_k)², where [Ax]_k is the k-th element of the vector Ax. The term ([Ax]_k)² can be written as

([Ax]_k)² = ( Σ_{l=1}^{L} c_kl ( n_k A_kl x_l − n_k A_kl x_l^(m) + [Ax^(m)]_k ) )²    (98)

where

c_kl ≜ 1/n_k if A_kl ≠ 0, and c_kl ≜ 0 otherwise    (99)

n_k ≜ number of nonzero elements in row k of A    (100)

Note that Σ_{l=1}^{L} c_kl = 1 and c_kl ≥ 0 for all k, l. Therefore, by convexity of the square function,

([Ax]_k)² ≤ r(x; x^(m))    (101)

where

r(x; x^(m)) ≜ Σ_{l=1}^{L} c_kl ( n_k A_kl x_l − n_k A_kl x_l^(m) + [Ax^(m)]_k )²    (102)

It is easy to see that r(x^(m); x^(m)) = ([Ax^(m)]_k)². Thus, r(·; x^(m)) is a majorizing function for ([Ax]_k)². If we replace ([Ax]_k)² by r(x; x^(m)) in (97), we obtain the desired majorizing function for the least-squares objective function

q₁(x; x^(m)) = Σ_{k=1}^{K} ( y_k² − 2 [Ax]_k y_k + r(x; x^(m)) )    (103)

[0110] We now turn to the problem of finding a majorizing function for φ₂, which we now write as

φ₂(x) = − log ∏_{l=1}^{L} f_BW(x_l; e) = Σ_{l=1}^{L} g(x_l; e)    (104)

where g(a; e) ≜ − log f_BW(a; e). As discussed above, de Leeuw and Lange showed that for a univariate function p, the function on ℝ²

h(x, y) = p(y) + p'(y)(x − y) + (C/2)(x − y)²    (105)

majorizes p at y, provided p is twice differentiable and C > 0 is an upper bound for p''. We now use this "Taylor series" expansion method to obtain a majorizing function for φ₂. Replacing the function g in (104) by its Taylor series majorizing function results in the following majorizing function for φ₂ at x^(m):

q₂(x; x^(m)) ≜ Σ_{l=1}^{L} [ g(x_l^(m); e) + g'(x_l^(m); e)( x_l − x_l^(m) ) + (B/2)( x_l − x_l^(m) )² ]    (106)

where, in this case, B is the least upper bound of the second derivative of g(·; e). This choice for B guarantees the optimality of the majorizing function q₂. Observe that q₂(x; x^(m)) = φ₂(x^(m)) when x = x^(m).
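Because Algorithm I below calls for computing the bound B "numerically," one simple possibility, shown here purely as an illustration, is to evaluate a finite-difference second derivative of the Butterworth negative log prior on a dense grid and take its maximum. The density form follows (87); the grid range and all names are assumptions.

import numpy as np

def g_butterworth(a, e, n=3):
    # Negative log of the Butterworth density (87), omitting the constant
    # -log k_n(e), which does not affect derivatives or the bound B.
    return np.log(1.0 + (a / e) ** (2 * n))

def g_prime(a, e, n=3):
    # Derivative of g with respect to a (cf. equation (113)).
    return 2 * n * a ** (2 * n - 1) / (e ** (2 * n) + a ** (2 * n))

def curvature_bound(e, n=3, lo=-1.0, hi=1.0, num=100001):
    # Numerical upper bound B on g'' over the reflection-coefficient range,
    # obtained by central finite differences on a dense grid.
    a = np.linspace(lo, hi, num)
    step = a[1] - a[0]
    g = g_butterworth(a, e, n)
    g2 = (g[2:] - 2.0 * g[1:-1] + g[:-2]) / step ** 2
    return g2.max()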

[0111] From (95), (103), and (106), we have the following majorizing function for the MAP objective function:

q(x; x^(m)) = (K/2) log(σ²) + (1/(2σ²)) Σ_{k=1}^{K} ( y_k² − 2 y_k [Ax]_k + r(x; x^(m)) ) + q₂(x; x^(m))    (107)

Recall that we can obtain an algorithm for minimizing the MAP objective function using (92). Taking the derivative of q(x; x^(m)) with respect to x_l and setting the result to zero produces the desired MAP algorithm. The derivative of q(x; x^(m)) with respect to x_l equals

∂q(x; x^(m))/∂x_l = (1/σ²) [ H_l ( x_l − x_l^(m) ) − G_l^(m) ] + g'(x_l^(m); e) + B ( x_l − x_l^(m) )    (108)

where H_l and G_l^(m) are defined in (111) and (112) below. Setting this derivative to zero yields the following MAP algorithm for reconstructing images from GPR data:

x_l^(m+1) = [ H_l x_l^(m) + G_l^(m) + σ² ( B x_l^(m) − g'(x_l^(m); e) ) ] / ( H_l + σ² B )    (110)

where

H_l ≜ Σ_{k=0}^{K−1} n_k A_kl²    (111)

G_l^(m) ≜ Σ_{k=0}^{K−1} A_kl ( y_k − [Ax^(m)]_k )    (112)

g'(a; e) = 2n a^{2n−1} / ( e^{2n} + a^{2n} )    (113)

Using the particular structure of the GPR data model, the quantities H_l and G_l^(m) can be computed efficiently using fast implementations as described below and developed in US Patent Application Number 14/245,733 filed February 19, 2014, which is incorporated by reference herein.

[0112] Automatic Selection of Noise Variance and Prior Parameter

[0113] for the Butterworth Prior

[0114] The noise variance σ² and prior parameter e are typically unknown in practical applications. In this section, we develop a method to adaptively estimate and update the unknown noise variance σ² and the unknown Butterworth parameter e. First, we modify the notation to account for the fact that the MAP objective function and majorizing function explicitly depend on e and σ². Thus, φ(x), φ₂(x), q(x; x^(m)), and q₂(x; x^(m)) are now expressed as φ(x, e, σ²), φ₂(x, e), q(x, e, σ²; x^(m)), and q₂(x, e; x^(m)), respectively. (Although technically incorrect, we will refer to the quantities φ(x, e, σ²), φ₂(x, e), q(x, e, σ²; x^(m)), and q₂(x, e; x^(m)) as functions in order to simplify the discussion.) The cyclic optimization method given below could in principle be used to jointly estimate the reflectance coefficient vector, x, noise variance, σ², and Butterworth prior parameter, e:

for m = 0, 1, ..., niter do

    x^(m+1) = arg min_x φ(x, e^(m), σ^{2(m)})    (114)

    e^(m+1) = arg min_e φ(x^(m+1), e, σ^{2(m)})    (115)

    σ^{2(m+1)} = arg min_{σ²>0} φ(x^(m+1), e^(m+1), σ²)    (116)

end for

[0115] Instead of solving the minimization problem in (114), it will be convenient to find an iterate x^(m+1) that decreases the MAP objective function φ(x, e, σ²) in the following sense:

φ(x^(m+1), e^(m), σ^{2(m)}) ≤ φ(x^(m), e^(m), σ^{2(m)})    (117)

Taking into account the discussion above, an iterate that satisfies (117) can be found by minimizing the majorizing function q(x, e, σ²; x^(m)) (i.e., (107)) with e and σ² replaced with e^(m) and σ^{2(m)}, respectively. Thus, a solution to (114) is

x^(m+1) = arg min_x q(x, e^(m), σ^{2(m)}; x^(m))    (118)

[0116] The minimization problem in (115) is solved using a one-dimensional line search such as the golden section search method. Implicitly, we have assumed that an interval can be determined that contains the minimizer of φ(x^(m+1), e, σ^{2(m)}). Finally, we estimate the noise power by obtaining a closed-form expression for the minimizer of φ(x^(m+1), e^(m+1), σ²). An alternative to the algorithm defined by (114)-(116) is summarized below:

for m = 0, 1, ..., niter do

    x^(m+1) = arg min_x q(x, e^(m), σ^{2(m)}; x^(m))    (119)

    e^(m+1) = arg min_e φ(x^(m+1), e, σ^{2(m)})    (120)

    σ^{2(m+1)} = arg min_{σ²>0} (K/2) log σ² + (1/(2σ²)) φ₁(x^(m+1))    (121)

end for

where the solution of (120) is assumed to lie within the interval [e_a, e_b].
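A possible realization of the loop (119)-(121) is sketched below in Python. The x-update follows (123); the width parameter e is refined with a bounded one-dimensional search (scipy's bounded scalar minimizer is used here as a stand-in for the golden section search); and the noise power uses the closed form (125). The callables g (the full negative log prior, including its e-dependent normalization term) and g_prime, as well as compute_G, apply_A, and the remaining names, are illustrative assumptions rather than the described implementation.

import numpy as np
from scipy.optimize import minimize_scalar

def map_butterworth(y, x0, H, compute_G, apply_A, e_bounds, e0, sigma2_0,
                    B, g, g_prime, niter=30):
    x, e, sigma2 = x0.copy(), e0, sigma2_0
    K = y.size
    for _ in range(niter):
        G = compute_G(x)
        # x-update per (123)
        x = (H * x + G + sigma2 * (B * x - g_prime(x, e))) / (H + sigma2 * B)
        # e-update per (120); g must include the -log k_n(e) term
        res = minimize_scalar(lambda ee: np.sum(g(x, ee)),
                              bounds=e_bounds, method="bounded")
        e = res.x
        # noise-power update per (125)
        sigma2 = np.sum((y - apply_A(x)) ** 2) / K
    return x, e, sigma2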

[0117] We will now show that the cyclic minimization algorithm defined by (119)-(121) is guaranteed to monotonically decrease the MAP objective function φ(x, e, σ²). From (114) and (119), it follows that φ(x^(m), e^(m), σ^{2(m)}) ≥ φ(x^(m+1), e^(m), σ^{2(m)}). Also, (120) implies that φ(x^(m+1), e^(m), σ^{2(m)}) ≥ φ(x^(m+1), e^(m+1), σ^{2(m)}). Finally, it can be observed from (121) that φ(x^(m+1), e^(m+1), σ^{2(m)}) ≥ φ(x^(m+1), e^(m+1), σ^{2(m+1)}). Therefore,

φ(x^(m), e^(m), σ^{2(m)}) ≥ φ(x^(m+1), e^(m+1), σ^{2(m+1)})    (122)

Thus, the iterates obtained from the cyclic minimization technique described above are guaranteed to monotonically decrease the objective function φ(x, e, σ²). The solution to (119) was discussed above and is given by (110) with e and σ² replaced by e^(m) and σ^{2(m)}, respectively:

x_l^(m+1) = [ H_l x_l^(m) + G_l^(m) + σ^{2(m)} ( B x_l^(m) − g'(x_l^(m); e^(m)) ) ] / ( σ^{2(m)} B + H_l )    (123)

[0118] To obtain the estimate for the noise power, we set the derivative of φ(x^(m+1), e^(m+1), σ²) with respect to σ² to zero and solve the resulting equation. The derivative of φ(x^(m+1), e^(m+1), σ²) with respect to σ² equals

∂φ(x^(m+1), e^(m+1), σ²)/∂σ² = K/(2σ²) − (1/(2σ⁴)) ||y − Ax^(m+1)||²    (124)

Setting this derivative to zero yields the maximum likelihood estimate of the noise power

σ^{2(m+1)} = (1/K) ||y − Ax^(m+1)||²    (125)

[0119] Given the single Butterworth prior for the reflectance coefficients, where the noise power and the Butterworth parameter are unknown, the implementation of the above MM-based MAP algorithm, which we call Algorithm I, is now outlined.

Algorithm I:

• Initialization:
    Obtain initial estimates x^(0), e^(0), and σ^{2(0)}, and set the value for the parameter n.
    Compute B numerically.
• Iteration:
    for m = 0, 1, ..., niter do
        for l = 1, 2, ..., L do
            Compute g'(x_l^(m); e^(m)) from equation (113)
            Compute x_l^(m+1) from equation (123)
        end for
        Compute e^(m+1) from equation (120) using a line search
        Compute σ^{2(m+1)} from equation (125)
    end for

where niter is the chosen number of iterations.

[0120] Description of the Fast Implementation

[0121] When applied in a typical GPR context, the computation of the term G_l^(m) requires the K × L matrix A, where K = IJN. For the above-described UWB SIRE radar system, these parameters are I = 43 transmit locations, J = 16 receive antennas, N = 1350 data samples per return-profile, and L = 25000 pixels.

[0122] These parameter settings require 173 gigabytes (GB) of memory to merely store the system matrix A. Because A has many zero elements, however, the data could be more efficiently stored as a sparse matrix. Nevertheless, a sparse representation for A would still require approximately 16 GB of memory. With such a large memory size requirement, the construction of the A matrix in the current formulation of the algorithm is not feasible or practical for typical computing platforms, especially in field deployment where on-site imaging would be advantageous. Indeed, virtually any other GPR image formation method that requires explicitly constructing the system matrix would have comparable requirements.
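The dense-storage figure quoted above follows from simple arithmetic; the short Python check below assumes 8-byte double-precision entries (the sparse figure additionally depends on how many non-zero samples each column of A contains, so only the dense case is reproduced here).

I, J, N, L = 43, 16, 1350, 25000    # transmit sites, receivers, samples per profile, pixels
K = I * J * N                       # number of rows of the system matrix A (928,800)
dense_bytes = K * L * 8             # 8 bytes per double-precision entry
print(dense_bytes / 2**30)          # about 173 GiB for the dense K x L matrix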

[0123] In addition to memory size challenges, computational cost would also be an issue for the current format of the MM-based ℓ₁-LS algorithm. At each iteration, the computation of G_l^(m) would require the matrix multiplication Ax^(m), which has O(KL) time complexity. This operation is thus not practical for large-scale implementations where the parameters K and L are expected to be relatively large. For example, in our case, we have K = 27520 and L = 25000. Additional costs include the computation of the term H_l, where the number of non-zero elements in each of the K rows of A is needed. To arrive at a fast and memory-efficient implementation of the above algorithms, the following acceleration techniques may be implemented.

[0124] Fast Implementation of G_l^(m)

[0125] In a GPR context, the mathematical expressions at equations (111) and (112) above can be modified to reduce processing time and required memory by accounting for a symmetric nature of a given radar pulse, accounting for similar discrete time delays between transmission of a given radar pulse and reception of reflections from the given radar pulse, and accounting for a short duration of the given radar pulse. Accordingly, the equation for determining estimates for values x representing reflectance coefficients of the objects in the SOI involves calculation of the terms G_l^(m) and H_l, which calculation can be streamlined according to the above assumptions. In application, a processing device is configured to calculate terms used to obtain the estimated value. Pursuant to these aspects, the expression for G_l^(m) in (112) can be written as

G_l^(m) = Σ_{i=1}^{I} Σ_{j=1}^{J} Σ_{n=0}^{N−1} a_ijl p(nT_s − τ_ijl) e_ij^(m)[n]    (126)

where, for n = 0, 1, ..., N−1,

e_ij^(m)[n] = y_ij[n] − Σ_{l'=1}^{L} a_ijl' p(nT_s − τ_ijl') x_l'^(m)    (127)

[0126] To facilitate a fast implementation, we approximate the quantity G_l^(m) by replacing the continuous delays τ_ijl with the discrete-time delays

n_ijl = round(τ_ijl / T_s)    (129)

and the corresponding pulse samples with α_ijl w[n − n_ijl], where w[n] ≜ p(nT_s) and α_ijl denotes the amplitude-scale factor a_ijl associated with the discrete-time delay n_ijl. We will refer to the set of values {n_ijl} as the discrete-time delays. We can write

G_l^(m) ≈ Σ_{i=1}^{I} Σ_{j=1}^{J} α_ijl Σ_{n=0}^{N−1} ( y_ij[n] − s_ij^(m)[n] ) w[n − n_ijl]    (132)

where

s_ij^(m)[n] ≜ Σ_{l'=1}^{L} α_ijl' x_l'^(m) w[n − n_ijl']

is the predicted return-profile for transmit location i and receiver j at the m-th iteration. Because the transmitted pulse p(t) is symmetric, w[n − k] = w[k − n] holds for all n and k, and thus

G_l^(m) ≈ Σ_{i=1}^{I} Σ_{j=1}^{J} α_ijl { ( (y_ij − s_ij^(m)) ⋆ w )[k] } |_{k = n_ijl}    (137)

[0127] It is readily observed that computing G_l^(m) requires the convolution of the discrete pulse w[n] with the m-th iteration of the error-term sequence y_ij[n] − s_ij^(m)[n]. The sequences w[n] and y_ij[n] are given. Hence, to efficiently compute G_l^(m), a computationally efficient way for generating the sequence s_ij^(m)[n] is needed.

[0128] First, we note that the collection of discrete-time delays is expected to have repeated values. Let k_min and k_max denote respectively the minimum and maximum discrete-time delays. The sifting property of the unit impulse function can be used to write

s_ij^(m)[n] = Σ_{k=k_min}^{k_max} q_ij[k] w[n − k]    (142)

where q_ij[k] accumulates the contributions of all voxels whose discrete-time delay equals k. The term q_ij[k] can then be expressed as

q_ij[k] = Σ_{l ∈ S_k} d_ijl^(m)    (145)

where d_ijl^(m) ≜ α_ijl x_l^(m) and S_k = { l = 1, 2, ..., L : n_ijl = k }.

[0129] In other words, the term q_ij[k] is computed by accumulating all elements of d_ij^(m) for which the associated discrete-time delay indexes have the same integer value k. Consequently, q_ij[k] can be computed in a very efficient manner using the hash table data structure concept. The indexes of a hash table, typically referred to as keys, are the integers between k_min and k_max, and the record associated with the k-th key is the set of values

{ d_ijl^(m) : l = 1, 2, ..., L; n_ijl = k }    (147)

By one approach, the hash-table-based computation of q_ij[k] is implemented using a processing device configured to use MATLAB using the accumarray function. The variables d, n, and q store the following sequences:

d ← d_ij^(m) = [ d_ij1^(m), d_ij2^(m), ..., d_ijL^(m) ]    (148)

n ← n_ij = [ n_ij1, n_ij2, ..., n_ijL ]    (149)

q ← q_ij = [ q_ij[1], q_ij[2], ..., q_ij[k_max] ]    (150)

The variable q is computed via the command q = accumarray(n, d), where k_min ≤ n_ijl ≤ k_max for all l, and q_ij[k] = 0 for all indexes k < k_min. An example of pseudocode to be run by the processing device for implementation of the proposed algorithm for efficiently computing G_l^(m) is given below.

■ Subroutine 1: Pseudocode for computing G_l^(m) for l = 1, 2, ..., L

for i = 1, 2, ..., I do
    for j = 1, 2, ..., J do
        SET q_ij[k] = 0 for 0 ≤ k < k_min
        for k = k_min, k_min + 1, ..., k_max do
            S_k = { l = 1, 2, ..., L : n_ijl = k }
            q_ij[k] = Σ_{l ∈ S_k} d_ijl^(m)    (hash-table implementation)
        end for
        s_ij^(m)[n] = (q_ij ⋆ w)[n]
        g_ij^(m)[n] = ( (y_ij − s_ij^(m)) ⋆ w )[n]
        for l = 1, 2, ..., L do
            G_ijl^(m) = α_ijl g_ij^(m)[n_ijl]
        end for
    end for
end for

G_l^(m) = Σ_{i=1}^{I} Σ_{j=1}^{J} G_ijl^(m)

[0130] In addition to being more computationally efficient, the proposed implementation does not require constructing the large system matrix A. A tangible benefit of this fact is that the size of data (i.e., the number of transmit locations) that can be used to form an image is no longer limited. It is also readily observed from the pseudocode that the computation of G_l^(m) is parallelizable, such that faster processing techniques such as parallel or GPU-based processing can be used to process the data.
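For readers working outside MATLAB, the accumulation step (145) and the convolutions of Subroutine 1 map naturally onto numpy.bincount and numpy.convolve. The sketch below handles a single (i, j) pair; the pulse array w is assumed to be stored starting at sample index 0, and all names are illustrative rather than part of the described implementation.

import numpy as np

def g_contribution(y_ij, w, alpha_ij, n_ij, x, k_max):
    # Accumulate d_ijl = alpha_ijl * x_l into delay bins (the hash-table /
    # accumarray step of (145)), using bincount as the accumulator.
    d = alpha_ij * x
    q = np.bincount(n_ij, weights=d, minlength=k_max + 1)

    # Predicted return s_ij[n] = sum_k q[k] w[n-k], cf. (142), then the error.
    s = np.convolve(q, w)[:y_ij.size]
    err = y_ij - s

    # Convolve the error with the (symmetric) pulse and read off the value at
    # each voxel's discrete-time delay; the caller sums these contributions
    # over all (i, j) pairs to form G_l^(m).
    g = np.convolve(err, w)[:y_ij.size]
    return alpha_ij * g[n_ij]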

[0131] Fast Implementation of H_l

[0132] An alternative expression for H_l in (111) is

H_l = Σ_{i=1}^{I} Σ_{j=1}^{J} Σ_{n=0}^{N−1} (a_ijl)² r_ijn β(nT_s − τ_ijl)    (151)

where β(t) ≜ p²(t) and r_ijn is the number of non-zero elements in the n-th row of the N × L sub-matrix A_ij. To facilitate a fast implementation, we approximate the quantity H_l by

H_l ≈ Σ_{i=1}^{I} Σ_{j=1}^{J} Σ_{n=0}^{N−1} (α_ijl)² r_ijn β(nT_s − n_ijl T_s)    (152)

We write H_l as

H_l ≈ Σ_{i=1}^{I} Σ_{j=1}^{J} (α_ijl)² H_ijl    (153)

where

H_ijl = Σ_{n=0}^{N−1} γ_ij[n] β(nT_s − n_ijl T_s)    (154)

      = Σ_{n=0}^{N−1} γ_ij[n] h[n − n_ijl]    (155)

      = { (γ_ij ⋆ h)[k] } |_{k = n_ijl}    (158)

with h[n] ≜ β(nT_s), and where r_ijn is now represented by the n-indexed sequence γ_ij[n] = r_ijn. For the sake of convenience and consistency, we assume here that rows of a matrix are counted starting from a zeroth row. The computation of H_ijl requires the convolution of the squared and discretized pulse h[n] with the sequence γ_ij[n]. The computation of H_l is significantly accelerated with the introduction of a fast procedure for generating γ_ij[n].

[0133] First, we recall that the sample γ_ij[n] is the number of non-zero entries in the n-th row of the N × L sub-matrix A_ij. Because the radar system has an ultra-wide band, the transmitted pulse p(t) is short. Consequently, the samples of the length-N discretized pulse are non-zero only for |n| ≤ M, and zero otherwise. The parameter M is even and significantly smaller than N. The l-th column of A_ij coincides with the length-N vector

a_ijl [ p(0 − τ_ijl), p(T_s − τ_ijl), ..., p((N − 1)T_s − τ_ijl) ]^T    (159)

The (n, l)-entry of A_ij is thus non-zero if

|nT_s − τ_ijl| ≤ M T_s    (160)

Using (129), the above rule in (160) can be approximated by

|n − n_ijl| ≤ M    (161)

[0134] A computed delay index n_ijl is such that 0 ≤ n_ijl < N. Consequently, for computational convenience, we write that the (n, l)-entry of A_ij is non-zero if

max(0, n − M) ≤ n_ijl ≤ min(n + M, N)    (162)

The number γ_ij[n] of non-zero entries in the n-th row of A_ij is thus equal to the number of elements in the n-th row that satisfy (162). A more convenient definition is

γ_ij[n] = |R_n|    (163)

where |R_n| denotes the number of elements in the set

R_n = { l = 1, 2, ..., L | max(0, n − M) ≤ n_ijl ≤ min(n + M, N) }    (164)

[0135] The parameter γ_ij[n] can be efficiently computed by taking advantage of the hash-table-based fast implementation concept used in (145). First, we write

γ_ij[n] = |R_n^+| − |R_n^−|    (165)

where

R_n^+ = { l = 1, 2, ..., L | n_ijl ≤ min(n + M, N) }    (166)

R_n^− = { l = 1, 2, ..., L | n_ijl ≤ max(0, n − M − 1) }    (167)

The expression in (165) is further expanded as

γ_ij[n] = Σ_{k ≤ min(n+M, N)} |S_k| − Σ_{k ≤ max(0, n−M−1)} |S_k|    (168)

Finally, we have

γ_ij[n] = v[ min(n + M, N) ] − v[ max(0, n − M − 1) ]    (169)

where

v[m] ≜ Σ_{k=k_min}^{m} |S_k|    (170)

with S_k = { l = 1, 2, ..., L : n_ijl = k }. The inner summation in (170) (and hence the computation of v[m]) is efficiently computed using the hash-table-based fast implementation previously discussed and used in (145). Example pseudocode to be run by the processing device for implementation of the proposed algorithm for efficiently computing H_l is given below.

■ Subroutine 2: Pseudocode for computing H_l for l = 1, 2, ..., L

for i = 1, 2, ..., I do
    for j = 1, 2, ..., J do
        SET q[k] = 0 for 0 ≤ k < k_min
        for k = k_min, k_min + 1, ..., k_max do
            S_k = { l = 1, 2, ..., L : n_ijl = k }
            q[k] = |S_k|    (hash-table implementation)
        end for
        for m = 0, 1, ..., N do
            v[m] = Σ_{k=0}^{m} q[k]
        end for
        for n = 0, 1, ..., N do
            γ_ij[n] = v[ min(n + M, N) ] − v[ max(0, n − M − 1) ]
        end for
        u_ij[n] = (γ_ij ⋆ h)[n]
        for l = 1, 2, ..., L do
            H_ijl = u_ij[n_ijl]
        end for
    end for
end for

H_l = Σ_{i=1}^{I} Σ_{j=1}^{J} (α_ijl)² H_ijl
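The counting trick of (163)-(170) also has a direct NumPy analogue: a histogram of the discrete-time delays followed by a cumulative sum plays the role of v[m], after which the contribution to H_l is read off from one convolution, as in Subroutine 2. As before, the single-(i, j) scope and the names are illustrative assumptions.

import numpy as np

def h_contribution(h, alpha_ij, n_ij, N, M):
    # counts[k] = |S_k| (hash-table step); cumulative sum gives v[m], cf. (170).
    counts = np.bincount(n_ij, minlength=N + 1)
    v = np.cumsum(counts)

    # gamma_ij[n] = v[min(n+M, N)] - v[max(0, n-M-1)], cf. (169).
    n = np.arange(N)
    gamma = v[np.minimum(n + M, N)] - v[np.maximum(0, n - M - 1)]

    # H_ijl = (gamma * h)[n_ijl], cf. (158), weighted by alpha_ijl^2 per (153).
    conv = np.convolve(gamma, h)[:N]
    return (alpha_ij ** 2) * conv[n_ij]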

[0136] Putting together the results for calculating the terms G_l^(m) and H_l, example pseudocode for the ℓ₁-LS algorithm follows.

■ Pseudocode for computing the ℓ₁-LS algorithm for m = 1, 2, ..., num_it

Initialization: x^(1) = [ x_1^(1), x_2^(1), ..., x_L^(1) ]^T
for l = 1, 2, ..., L do
    Compute H_l (via Subroutine 2)
end for
for m = 1, 2, ..., num_it do
    for l = 1, 2, ..., L do
        Compute G_l^(m) (via Subroutine 1)
    end for
    x_l^(m+1) = ( H_l x_l^(m) + G_l^(m) ) / ( H_l + λ / |x_l^(m)| ),  l = 1, 2, ..., L
end for
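Combining the two helper routines sketched earlier, a driver for the ℓ₁-LS iteration might look as follows (Python; y, alpha, and n_delay are assumed to be containers keyed by the (i, j) transmit/receive pair, lam is the user-chosen regularization parameter, and the helper names g_contribution and h_contribution refer to the illustrative sketches above, not to the described implementation).

import numpy as np

def l1_ls(y, w, h, alpha, n_delay, x0, lam, k_max, N, M, num_it=50):
    x = x0.copy()
    H = np.zeros_like(x)
    for key in alpha:                      # precompute H_l once (Subroutine 2)
        H += h_contribution(h, alpha[key], n_delay[key], N, M)
    for _ in range(num_it):
        G = np.zeros_like(x)
        for key in alpha:                  # Subroutine-1-style pass over (i, j)
            G += g_contribution(y[key], w, alpha[key], n_delay[key], x, k_max)
        x = (H * x + G) / (H + lam / (np.abs(x) + 1e-12))
    return x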

[0137] Summary of Results for the Various Approaches

[0138] The described MM-based MAP algorithms are applicable to large-scale, real applications. Although the proposed algorithms effectively estimate reflection coefficients of scenes-of-interest using GPR datasets, the algorithms could be readily applied to a variety of applications where datasets are collected using synthetic aperture imaging measurement principles. We have applied our algorithms to simulated and real datasets supplied by ARL. The results obtained using the described MAP algorithms are consistent in the sense that the resulting images, for both simulated and real datasets, have the same characteristics in terms of noise removal and side-lobe reduction. In general, the images generated were sufficiently sparse without a loss of known scatterers. The desirable results obtained using real datasets are even more encouraging because they suggest that our algorithms are robust enough for practical applications.

[0139] With respect to the described Butterworth prior used to exploit the known sparsity constraint of scatterers within an SOI, this approach does not require any user input parameters. It can also be observed that the application of the Butterworth prior need not be limited to modeling the distribution of reflectance coefficients. The Butterworth prior is a good approximation for the uniform distribution and can thus be used in other problems that require a uniform distribution model. Moreover, the majorizing functions we developed for the negative log priors are general enough that they can be used in other MM-based MAP methods regardless of the form of the density function for the corresponding problem. Finally, we have improved upon image reconstruction techniques, namely the DAS and RSM algorithms, currently used by ARL for GPR.

[0140] The methods described above can be implemented as illustrated in FIG. 8, where a processing device receives 805 a data set and processes 810 the initial data set by creating an estimated image value for individual voxels in the image by iteratively deriving the estimated image value through application of a majorize-minimize technique to solve a maximum a posteriori (MAP) estimation problem having a MAP objective function associated with a mathematical model of image data from the data. These basic steps can be applied to achieve a fast and computationally efficient preparation of an image, using the estimated image value of individual voxels of the image, that can be displayed 815, wherein the initial data set may be sourced from a variety of applications where datasets are collected using synthetic aperture imaging measurement principles. In the GPR context, the method may further include emitting 820 a radar pulse at specified intervals into a scene-of-interest and detecting 825 the magnitude of signal reflections from the scene-of-interest from the radar pulse. Position data is recorded 830 corresponding to individual radar pulse emissions and individual receptions of the signal reflections. The initial data set in this application is created 835 from the position data and detected magnitudes of the signal reflections. Where the method is carried out remote from the vehicle, it is sufficient where the receipt of the data to be processed includes receiving data representing transmission site locations of radar pulses, reception site locations of reception of reflections from the radar pulses, radar-return profiles for pairings of the transmission site locations and the reception site locations, and data samples associated with individual radar-return profiles.

[0141] We tested the first described approach to a GPR MAP reconstruction (GMR) algorithm using simulated GPR data that closely mimics the data generated by ARL's SIRE system. Fig. 8 shows the layout of the point-scatterers for the reflectance image.

[0142] We corrupted the simulated data with white Gaussian noise (WGN). Note that although we assumed the noise is WGN, the noise in practice may differ greatly from this assumption. We compare the images obtained from the GMR and DAS algorithms subjectively, and objectively using the receiver operating curves (ROCs) that result from the Reed-Xiaoli detector. FIG. 9 shows the image obtained using the DAS algorithm. The image has significant side lobes and shadows, and the background noise is clearly visible. By contrast, the image generated by the GMR algorithm (FIG. 10) is sparser and adequately suppresses the side lobes and background noise.

[0143] The ROCs in FIGS. 12 and 13 show that the GMR algorithm suppresses the background noise well enough to provide a better detection rate and a lower false alarm rate than the DAS algorithm. Similar results were obtained for data from ARL. We show only the real data images, FIGS. 14 and 15, for brevity.

[0144] We also similarly evaluated the performance of the MAP algorithm using the Jeffreys', Laplacian-like, and Laplacian priors using a synthetic GPR dataset. These algorithms are compared to the DAS, RSM, and LMM algorithms. The LMM algorithm is a fast algorithm that uses the majorization-minimization technique to solve the ℓ₁-regularized least-squares estimation problem (see US Patent Application Number 14/245,733 filed February 19, 2014, which is incorporated by reference herein).

[0145] FIG. 16 shows the image obtained using the DAS algorithm. The image has significant side-lobes and shadows, and the background noise is clearly visible. The RSM image in FIG. 17 is an improvement over the DAS image, but there is room for improvement. By contrast, the images generated by the ℓ₁-regularized least-squares algorithm (FIG. 18) and the GMR algorithms (see FIGS. 19-21) are sparser and adequately suppress the side-lobes and background noise. However, unlike the LMM algorithm, the GMR algorithms are user independent because the noise variance and prior parameter are estimated from the observed GPR data.

[0146] We also similarly evaluated the performance of the MAP-Butterworth algorithm using both synthetic and real GPR datasets provided by ARL. The MAP-Butterworth algorithm is compared to the DAS, RSM, and LMM algorithms.

[0147] We applied the DAS, RSM, LMM, and MAP-Butterworth algorithms to the simulated data. In FIG. 22, we observe that the standard DAS algorithm is not able to satisfactorily remove the noise and sidelobes. In FIG. 23, we observe that the RSM algorithm improves upon the DAS algorithm's results, but substantial noise and sidelobes are still present in the image it generated. In contrast, as seen in FIG. 24 and FIG. 25, both the LMM and MAP-Butterworth algorithms retain all the scatterers while significantly suppressing the sidelobes and noise. We observe that the scatterers in the MAP-Butterworth image are slightly more sparse than in the LMM image. This may be attributed to the fact that in the MAP-Butterworth algorithm all parameters are optimally estimated, whereas the regularization parameter λ in the LMM algorithm was user-selected using a trial-and-error approach.

[0148] We now evaluate the performance of the MAP-Butterworth algorithm using data obtained from ARL's FLGPR system. The test data corresponds to measurements taken from I = 274 consecutive transmit locations using J = 16 receive antennas. We choose a 65 × 25 m² SOI and divide it into a grid of 250 voxels in the cross-range direction and 3200 voxels in the down-range direction. As with the simulated data, the cross-range and down-range voxel sizes are 0.1 m and 0.02 m, respectively.

[0149] Due to the large size of the SOI, an image is not generated by estimating the reflection coefficients of the voxels all at once. Such an image would have cross-range resolution that varies from the near-range to the far-range voxels. The voxels in the near-range would have higher resolution than those in the far-range. To create GPR images with consistent resolution across the SOI, we use the mosaicing approach discussed in reference to FIG. 4 above. The radar-return data and GPS positioning measurements associated with each sub-aperture are used to estimate the reflection coefficients for the corresponding sub-image. The reconstructed sub-images are merged together to obtain the complete image of the SOI. It should be noted that there is a system matrix for each pair of sub-aperture and sub-SOI; in this sense, the model for real GPR data is time-varying.

[0150] FIGS. 26-29 show the GPR images that were generated using the DAS, RSM, LMM, and MAP-Butterworth algorithms, respectively. It is evident from these figures that the DAS image has significant side lobes and shadows, with clearly visible background noise. Although side lobes and shadows are reduced in the RSM image, there is still room for improvement. The images obtained using the LMM and MAP-Butterworth algorithms, shown respectively in FIGS. 28 and 29, are sparser than the DAS and RSM images, and adequately suppress both the side lobes and background noise. These results show that the MAP-Butterworth algorithm gives results comparable to those of popular ℓ₁-LS algorithms without the usual challenges of finding the most suitable regularization parameter.

[0151] Those skilled in the art will recognize that a wide variety of modifications, alterations, and combinations can be made with respect to the above-described embodiments without departing from the scope of the invention, and that such modifications, alterations, and combinations are to be viewed as being within the ambit of the inventive concept.