I. INTRODUCTION
High-resolution microwave radar built on ultra-wideband (UWB) technology suits imaging applications that require cm-order resolution. This holds especially for operational ranges not exceeding the radar aperture size, i.e. 1–2 m. Such a radar is seen as a promising sensor for concealed weapon detection, with lower cost and complexity than mm-wave and THz systems [1]. In general, an active sensor delivers to the operator a 3D image of the scene after digital focusing of the acquired electromagnetic field. Thus, the speed of the focusing algorithm becomes crucial for real-time use.
The state of the art in UWB short-range imaging radar features advanced algorithms originating from seismic migration methods. Kirchhoff migration has proved to be one of the most accurate reconstruction techniques in the time domain, while Stolt migration in the frequency-wavenumber domain is known for its speed [2–4]. Despite their advantages, the former is the slowest, while the accuracy of the latter depends heavily on spectral interpolation. Another group of algorithms (SEABED and others) aims at fast reconstruction of the shape of reflecting surfaces by transforming the reflected quasi-wavefronts, defined from the signal peaks, into images [5]. This approach may have difficulty imaging complex targets through clothing.
A separate imaging method makes use of deconvolution of the a priori estimated system-medium-target properties and, as such, provides a higher resolution in the focused image. Deconvolution-based imaging has long evolved as a research direction in geophysical applications. A thorough comparison of three imaging techniques, namely image extraction by travel time, cross-correlation with the source wavelet, and deconvolution with the source wavelet, is given in [6]. It shows that deconvolution by a Wiener inverse filter outperforms the others at high signal-to-noise ratios and does no worse than cross-correlation imaging at low signal-to-noise levels. When the medium-target properties are known with insufficient accuracy, one can resort to the iterative and semi-blind deconvolution methods developed for magnetic archaeological prospecting in [7].
Regarding UWB radar imaging, deconvolution has already been used for the detection of landmines, i.e. objects of simple, mostly cylindrical shape, by ground-penetrating radar [8–11]. The technique first estimates a point spread function (PSF) that expresses the radar impulse response at all possible target positions along with the properties of the medium. The PSF is then deconvolved out of the measured data volume by means of a Wiener inverse filter and the fast Fourier transform (FFT) [11]. In this work, we extend that approach to the more demanding case of complex targets, higher resolution, and wider bandwidth.
The paper is organized as follows. Section II describes the imaging principle and estimation of the PSF. Section III focuses on a regularized inverse filter with criteria for efficiency. Section IV presents images of metallic and non-metallic weapons obtained by synthetic aperture radar (SAR) measurements, and evaluates the performance of the technique. Conclusions are summarized in Section V.
II. IMAGING PRINCIPLE
A) Deconvolution
Imaging by deconvolution treats the scattered field as a superposition of reflections from point-like scatterers forming the target's surface. This means that the field received by the radar aperture represents a convolution of the PSF with the target reflectivity distribution, i.e. with the true shape of the target. In the 3D case, we acquire the data with a 2D synthetic or real antenna array (aperture) and attempt to reconstruct a target whose reflectivity changes over the aperture and with distance. Since the antenna beamwidth increases with distance, the PSF also changes with distance, which immediately limits the applicability of deconvolution to relatively flat target surfaces. Imaging of weapons concealed on a human body follows such a scenario: a person under test stands at a fixed distance from the sensor, while a few PSFs can be estimated beforehand with a small step in distance to successively image concealed objects of possibly different thickness.
Mathematically, the received field in space-time domain or the radar data volume is given by
$$\mathbf{S}(x, y, t\,|\,z_0) = \mathbf{H}(x, y, t\,|\,z_0) \otimes \mathbf{\Lambda}(x, y, t\,|\,z_0), \tag{1}$$
where $\mathbf{H}(x, y, t\,|\,z_0)$ expresses the PSF as a function of the horizontal and vertical offsets of the virtual antenna with respect to a point-like scatterer at a given distance $z_0$, and of time; $\mathbf{\Lambda}(x, y, t\,|\,z_0)$ stands for the target reflectivity (image) at the distance $z_0$; and the operator $\otimes$ denotes convolution over $x$, $y$, and $t$. Having estimated the PSF and measured the radar volume, we compute the target reflectivity by deconvolution
$$\mathbf{\Lambda}(x, y, t\,|\,z_0) = \mathbf{H}^{-1}(x, y, t\,|\,z_0) \otimes \mathbf{S}(x, y, t\,|\,z_0), \tag{2}$$
which gives a 3D image $\mathbf{\Lambda}(x, y, t\,|\,z_0)$ via the relationship $z = ct/2$, accounting for propagation in air with the speed of light $c$.
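As a minimal illustration of (1), the sketch below builds a hypothetical PSF from a short pulse delayed along the two-way travel-time hyperboloid of a point scatterer and convolves it with a sparse reflectivity volume. All sizes, the pulse shape, and the distance are assumptions chosen only to make the example self-contained; they are not the measured PSF of Section IV.

```python
import numpy as np
from scipy.signal import fftconvolve

# Hypothetical grid: 61 x 61 aperture positions with a 1 cm step, 512 time samples of 16 ps.
nx, ny, nt = 61, 61, 512
dxy, dt = 0.01, 16e-12
c, z0 = 3e8, 0.4                         # propagation speed (m/s) and assumed focus distance (m)

# Hypothetical PSF H(x, y, t | z0): a short pulse delayed along the two-way
# travel-time hyperboloid of a point scatterer at broadside distance z0.
x = (np.arange(nx) - nx // 2) * dxy
y = (np.arange(ny) - ny // 2) * dxy
t = np.arange(nt) * dt
delay = 2.0 * np.sqrt(z0**2 + x[:, None]**2 + y[None, :]**2) / c
pulse = lambda tau: np.exp(-(tau / 50e-12) ** 2) * np.cos(2 * np.pi * 15e9 * tau)
psf = pulse(t[None, None, :] - delay[:, :, None])

# Sparse target reflectivity Lambda(x, y, t | z0): two point-like scatterers.
reflectivity = np.zeros((nx, ny, nt))
reflectivity[20, 20, 150] = 1.0
reflectivity[40, 40, 180] = 0.7

# Equation (1): the received radar volume S is the 3D convolution of H with Lambda.
received = fftconvolve(reflectivity, psf, mode="same")
```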
Estimation of the PSF for a particular imaging radar can be done beforehand either by modeling or by precise measurement. Figure 1 illustrates two 2D PSFs $\mathbf{H}(y, t\,|\,50)$ and $\mathbf{H}(y, t\,|\,60)$, which were measured in the vertical plane from two small metal spheres of 2 cm diameter, placed at positions (0, 50) and (0, 60) cm, respectively. The measurement was done with the 5–25 GHz laboratory radar system described in Section IV. The experimental image highlights the difference in shape between the two hyperbolas, which leads to different deconvolution results. In general, only one PSF at a time is used in (2).
Fig. 1. PSFs measured at 50 and 60 cm distances.
Now the imaging strategy looks as follows (a code sketch follows the list):
1) Estimate a set of N PSFs separated from each other by a small distance related to the down-range resolution of the radar. N need not be large if a person under inspection stands at a given spot; the set covers an interval defined by the positioning error and the thickness of the clothes.
2) Compute the N respective images by deconvolution of the acquired radar volume.
3) Combine the obtained images either into one image or into a video sequence (to see multiple objects at different distances).
4) Visualize the result in a clear way to the operator.
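As an illustration of steps 1)–3), the sketch below wraps the per-PSF deconvolution and a simple combination step into one routine. The routine name, the file names in the commented usage, and the choice of a voxel-wise maximum as the combination rule are assumptions for illustration; deconvolve_volume stands for an inverse-filter routine such as the one sketched in Section III.

```python
import numpy as np

def image_at_distances(radar_volume, psfs, deconvolve_volume):
    """Steps 1)-3): deconvolve one acquired radar volume with N pre-estimated PSFs
    and combine the N results into a single image (here: voxel-wise maximum)."""
    images = [deconvolve_volume(radar_volume, psf) for psf in psfs]   # step 2)
    combined = np.maximum.reduce(images)                              # step 3), one-image variant
    return images, combined

# Step 1) is done offline: PSFs measured or modeled at a few distances around the
# nominal one, e.g. 47, 50, and 53 cm (hypothetical file names):
# psfs = [np.load(f"psf_{d}cm.npy") for d in (47, 50, 53)]
# images, combined = image_at_distances(np.load("radar_volume.npy"), psfs, wiener_deconvolve_3d)
```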
B) Kirchhoff migration
To illustrate the quality of the images obtained by deconvolution, we compare them with images obtained by Kirchhoff migration for the same scenarios. Kirchhoff migration, based on the wave equation, is generally seen as one of the most accurate reconstruction techniques because it accounts for the wavefront of the scattered electromagnetic field. As a space-time integration technique, it provides the highest signal-to-noise ratio (SNR) in the focused image at the cost of relatively long computation [2, 3]. It requires a user-defined 3D grid with a certain voxel size at its input and migrates the acquired data volume onto that grid. According to the formulation for multi-static radar given in [3], Kirchhoff migration can be expressed as follows:
$$\mathbf{\Lambda}(x, y, z) = \iint_{A} \frac{\cos\varphi_1 \cos\varphi_2}{2\pi v\, R_1 R_2}\, \left.\frac{\partial s(x', y', t)}{\partial t}\right|_{t = (R_1 + R_2)/v} dx'\, dy', \tag{3}$$
where $\mathbf{\Lambda}(x, y, z)$ represents the target reflectivity at a grid point $(x, y, z)$; $s(x', y', t)$ is the field received at the aperture point $(x', y')$ at time $t$; $R_1$ and $R_2$ express the distances between the grid point $(x, y, z)$ and the Tx and Rx antennas, respectively; $\varphi_1$ and $\varphi_2$ indicate the respective aspect angles; and $v$ stands for the propagation velocity. Given a measurement geometry along with a grid of interest, the cosines and travel times in (3) can be computed beforehand and organized in look-up tables to speed up computation. Note that $\mathbf{\Lambda}(x, y, z)$ and $s(x', y', t)$ in (3) express scalar values with different coordinates, which means the integration procedure is repeated for each grid point $(x, y, z)$. Meanwhile, deconvolution (2) deals with data volumes with the same coordinates and delivers a focused 3D image at once.
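For comparison, a deliberately simple and unoptimized sketch of migration onto a user-defined grid is given below. It uses a monostatic approximation with a single distance and aspect angle per aperture point, whereas (3) keeps separate Tx/Rx terms; this simplification, the obliquity weighting, the omitted constant factor, and all variable names are assumptions rather than the exact formulation of [3].

```python
import numpy as np

def kirchhoff_migrate(data, xa, ya, t_axis, grid_x, grid_y, grid_z, v=3e8):
    """data[ix, iy, it]: field received at aperture point (xa[ix], ya[iy]) at time t_axis[it].
    Migrates the acquired volume onto the user-defined (grid_x, grid_y, grid_z)."""
    dt = t_axis[1] - t_axis[0]
    # Time derivative of the traces, as used in wave-equation (Kirchhoff-type) migration.
    d_data = np.gradient(data, dt, axis=2)
    image = np.zeros((len(grid_x), len(grid_y), len(grid_z)))
    for i, gx in enumerate(grid_x):
        for j, gy in enumerate(grid_y):
            for k, gz in enumerate(grid_z):
                # Distances and obliquity cosines depend only on geometry, so a real
                # implementation precomputes them once per grid (look-up tables).
                R = np.sqrt((xa[:, None] - gx) ** 2 + (ya[None, :] - gy) ** 2 + gz ** 2)
                cos_phi = gz / R                             # aspect-angle cosine (monostatic)
                idx = np.rint(2.0 * R / v / dt).astype(int)  # two-way travel time -> sample index
                idx = np.clip(idx, 0, data.shape[2] - 1)
                traces = np.take_along_axis(d_data, idx[:, :, None], axis=2)[:, :, 0]
                # Overall constant (e.g. 1/(2*pi*v)) omitted; only relative amplitudes matter here.
                image[i, j, k] = np.sum(cos_phi ** 2 / R ** 2 * traces)
    return image
```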
III. REGULARIZED 3D INVERSE FILTERING
The fastest classical way of performing deconvolution uses a Wiener inverse filter in the frequency domain along with the FFT [12]. In our case, we extend it to the frequency-wavenumber domain as follows:
$$\dot{\mathbf{I}}(k_x, k_y, f) = \frac{\dot{\mathbf{S}}(k_x, k_y, f)\,\dot{\mathbf{H}}^{*}(k_x, k_y, f)}{\bigl|\dot{\mathbf{H}}(k_x, k_y, f)\bigr|^{2} + \beta}, \tag{4}$$

$$\mathbf{\Lambda}(x, y, t) = \mathrm{IFFT}\bigl[\dot{\mathbf{I}}(k_x, k_y, f)\bigr], \tag{5}$$
where $\dot{\mathbf{I}}(k_x, k_y, f)$, $\dot{\mathbf{S}}(k_x, k_y, f)$, and $\dot{\mathbf{H}}(k_x, k_y, f)$ are the complex spectra of the focused image, the measured radar volume, and the PSF, respectively; IFFT denotes an inverse FFT; the symbol $*$ stands for the complex conjugate; and $\beta$ denotes the regularization parameter originating from the inverse SNR. From here on, we treat deconvolution as an ill-posed inverse problem in the framework of regularization theory. Note the element-wise (piecewise) multiplication of 3D matrices in (4) and later on.
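A minimal NumPy sketch of the inverse filter (4)–(5) is given below; the function name and the zero-padding policy (padding to the linear-convolution size and cropping back) are illustrative assumptions rather than the exact implementation used for the results of Section IV.

```python
import numpy as np

def wiener_deconvolve_3d(radar_volume, psf, beta):
    """Regularized inverse filter of equations (4)-(5):
    I = S * conj(H) / (|H|^2 + beta), followed by an inverse 3D FFT."""
    # Pad both volumes to the linear-convolution size to limit wrap-around effects.
    shape = [n + m - 1 for n, m in zip(radar_volume.shape, psf.shape)]
    S = np.fft.fftn(radar_volume, s=shape)              # spectrum of the measured volume
    H = np.fft.fftn(psf, s=shape)                       # spectrum of the PSF
    I_spec = S * np.conj(H) / (np.abs(H) ** 2 + beta)   # element-wise 3D multiplication
    focused = np.real(np.fft.ifftn(I_spec))
    # Crop back to the size of the acquired volume.
    return focused[tuple(slice(0, n) for n in radar_volume.shape)]
```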
Proper selection of the regularization parameter determines the image quality, which must satisfy certain efficiency criteria. The regularized solution is a trade-off between stability and accuracy of deconvolution. Stability means negligible ringing in the 1D deconvolution result or a low artifact level in the image. Accuracy can be defined as a measure of similarity between the original signal and its reconstruction obtained by convolving the found result with the PSF. A stable solution is not always an accurate one; normally, the lower the stability, the higher the accuracy. Furthermore, if deconvolution produces a result that is both unstable and inaccurate, this points to a physically incorrect PSF.
Two complementary numerical criteria, namely error and instability, have been proposed in [11] for deconvolution of UWB signals. In the 1D case, the criteria are given by
$$\delta = \frac{\|\mathbf{s}(t) - \hat{\mathbf{s}}(t)\|_2}{\|\mathbf{s}(t)\|_2 + \|\hat{\mathbf{s}}(t)\|_2} \times 100\%, \tag{6}$$

$$\gamma = \frac{\|\mathbf{\Lambda}(t)\|_2}{\|\mathbf{s}(t)\|_2} \times 100\%, \tag{7}$$
where $\mathbf{s}(t)$ and $\mathbf{\Lambda}(t)$ are the received signal and the deconvolution result, respectively; $\hat{\mathbf{s}}(t) = \mathbf{\Lambda}(t) \otimes \mathbf{h}(t)$ is the reconstructed signal; and the operator $\|\cdot\|_2$ stands for the two-norm (square root of energy) of a vector. The error reaches 100% when $\mathbf{s}(t)$ and $\hat{\mathbf{s}}(t)$ have the same amplitude but opposite polarity. An instability of more than 100% means that the signal energy after deconvolution exceeds that of the received signal, which indicates the presence of ringing. Starting from a low value and iteratively increasing the regularization parameter $\beta$, we arrive at a solution that respects admissible thresholds for error and instability. In order to remove the influence of the PSF's energy on the instability, we normalize a realistic PSF $\mathbf{h}_r(t)$, obtained beforehand by measurement or modeling, by its 2-norm as $\mathbf{h}(t) = \mathbf{h}_r(t) / \|\mathbf{h}_r(t)\|_2$.
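A sketch of the 1D criteria, following the definitions as reconstructed in (6)–(7) (error normalized by the sum of the two signal norms, instability as a norm ratio) together with the 2-norm PSF normalization; the function name and the alignment assumption are illustrative.

```python
import numpy as np

def deconvolution_criteria(s, lam, h):
    """Error and instability of a 1D deconvolution result, following (6)-(7):
    the error compares s(t) with its reconstruction s_hat = lam (*) h,
    the instability compares the norm of lam(t) with that of s(t)."""
    h = h / np.linalg.norm(h)                    # 2-norm normalization of the PSF
    s_hat = np.convolve(lam, h, mode="same")     # reconstructed signal; assumes len(lam) >= len(h)
    error = np.linalg.norm(s - s_hat) / (np.linalg.norm(s) + np.linalg.norm(s_hat)) * 100.0
    instability = np.linalg.norm(lam) / np.linalg.norm(s) * 100.0
    return error, instability
```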
Figure 2 illustrates the efficiency of deconvolution for signals acquired from a small metal sphere and a metal gun, yielding 1% error and 35% instability. These values were obtained with a regularization parameter corresponding to an SNR of 44 dB. Note that deconvolution shifts the signal back in time (Fig. 2(c)) because it removes the phase of the PSF from the received signal. However, this shift is constant and, if necessary, can easily be compensated for (in the 2D and 3D cases as well).
Fig. 2. 1D deconvolution with 44 dB SNR, 1% error and 35% instability: (a) estimated PSF; (b) original received signal and reconstructed signal; (c) signal after deconvolution.
In the 3D case, the formulas for the PSF, reconstructed signal, error, and instability become as follows:
$$\mathbf{H}(x, y, t\,|\,z_0) = \frac{\mathbf{H}_r(x, y, t\,|\,z_0)}{\|\mathbf{H}_r(x, y, t\,|\,z_0)\|_F}, \tag{8}$$

$$\hat{\mathbf{S}}(x, y, t) = \mathbf{\Lambda}(x, y, t) \otimes \mathbf{H}(x, y, t\,|\,z_0), \tag{9}$$

$$\delta = \frac{\|\mathbf{S}(x, y, t) - \hat{\mathbf{S}}(x, y, t)\|_F}{\|\mathbf{S}(x, y, t)\|_F + \|\hat{\mathbf{S}}(x, y, t)\|_F} \times 100\%, \tag{10}$$

$$\gamma = \frac{\|\mathbf{\Lambda}(x, y, t)\|_F}{\|\mathbf{S}(x, y, t)\|_F} \times 100\%, \tag{11}$$
where the operator $\|\cdot\|_F$ computes the Frobenius norm (square root of energy) of a matrix. In our experience, thresholds $\delta \le 20\%$ and $\gamma \le 50\%$ provide good image quality in most scenarios. Since an ill-posed inverse problem by definition has many similar acceptable regularized solutions, using the same thresholds for most target scenarios is more practical than searching for an optimal combination for every scenario (data volume) separately.
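The 3D counterparts (8)–(11) simply replace the vector 2-norm by the Frobenius (energy) norm of the whole volume; the sketch below also checks the thresholds quoted above. The function name and default arguments are assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def criteria_3d(radar_volume, focused, psf, err_max=20.0, instab_max=50.0):
    """Equations (8)-(11) in volume form: reconstruct S_hat = Lambda (*) H and
    evaluate error/instability against the thresholds delta <= 20%, gamma <= 50%."""
    fro = lambda a: np.sqrt(np.sum(a ** 2))            # Frobenius norm = sqrt of energy
    psf = psf / fro(psf)                               # equation (8): PSF normalization
    s_hat = fftconvolve(focused, psf, mode="same")     # equation (9): reconstructed volume
    error = fro(radar_volume - s_hat) / (fro(radar_volume) + fro(s_hat)) * 100.0
    instability = fro(focused) / fro(radar_volume) * 100.0
    return error, instability, (error <= err_max and instability <= instab_max)
```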
The above criteria depend on the regularization parameter, which expresses the inverse SNR in the data volume. Our deconvolution algorithm starts with an initial guess for the SNR, which can be estimated beforehand on a calibration dataset. Once the SNR is assumed in the space-time domain, the regularization parameter is defined in the frequency-wavenumber domain as follows:
$$\sigma_n^2 = \frac{\max\bigl[\,\lvert\mathbf{S}(x, y, t)\rvert\,\bigr]^{2}}{10^{\,\mathrm{SNR}/10}}, \tag{12}$$

$$P_n = K L M\,\sigma_n^2, \tag{13}$$

$$\beta = \frac{P_n}{\mathrm{mean}\bigl[\mathbf{P}(k_x, k_y, f)\bigr]}, \tag{14}$$
where the operator $\max[\cdot]$ gives the maximal value of a data volume; the operator $|\cdot|$ returns the absolute values of a data volume; $\sigma_n^2$ is the noise variance in the space-time domain; $K$, $L$, $M$ express the dimensions of the data volume; $P_n$ stands for the noise power in the frequency-wavenumber domain; $\mathbf{P}(k_x, k_y, f)$ is the volume of the power spectral density of the data in the frequency-wavenumber domain; and the operator $\mathrm{mean}[\cdot]$ finds the mean value of a data volume. Obviously, the higher the SNR, the lower the error and the higher the instability. A well-assumed SNR gives a proper deconvolution result in one run; if not, we adjust it iteratively with a step of 1 dB.
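A sketch of the SNR-driven selection of the regularization parameter along the lines of (12)–(14), together with the 1 dB iterative adjustment, is given below. It reuses wiener_deconvolve_3d and criteria_3d from the sketches above, and the peak-referenced noise-variance bookkeeping follows the reconstruction of (12)–(14) rather than a verified formulation.

```python
import numpy as np

def beta_from_snr(radar_volume, snr_db):
    """Regularization parameter from an assumed space-time SNR, along the lines of (12)-(14)."""
    K, L, M = radar_volume.shape
    sigma_n2 = np.max(np.abs(radar_volume)) ** 2 / 10.0 ** (snr_db / 10.0)   # (12): noise variance
    P_n = K * L * M * sigma_n2                                               # (13): noise power in f-k domain
    P = np.abs(np.fft.fftn(radar_volume)) ** 2                               # power spectral density
    return P_n / np.mean(P)                                                  # (14)

def focus_with_snr_search(radar_volume, psf, snr_db, err_max=20.0, instab_max=50.0):
    """Iterative 1 dB adjustment of the assumed SNR until error and instability are admissible."""
    for _ in range(20):                                  # safety bound for this sketch
        beta = beta_from_snr(radar_volume, snr_db)
        focused = wiener_deconvolve_3d(radar_volume, psf, beta)
        error, instability, ok = criteria_3d(radar_volume, focused, psf, err_max, instab_max)
        if ok:
            break
        # Too much ringing -> assume a lower SNR (stronger regularization);
        # too inaccurate -> assume a higher SNR (weaker regularization).
        snr_db += -1.0 if instability > instab_max else 1.0
    return focused, snr_db
```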
A satisfactory deconvolution result needs one more step to become a solid 3D image. In order to make it unipolar and to smooth oscillations, we compute the envelope of each time-domain signal in $\mathbf{\Lambda}(x, y, t)$ by means of a Hilbert transform. The latter delivers a proper positive envelope for UWB signals whose frequency band lies far enough from 0 Hz [13]. Figure 2(c) gives an example of such a signal obtained by deconvolution in the 5–25 GHz band.
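The envelope step amounts to taking the magnitude of the analytic signal along the time axis, e.g. with scipy.signal.hilbert; the function name below is an assumption.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_volume(focused):
    """Unipolar, smoothed image: magnitude of the analytic signal along the time axis."""
    return np.abs(hilbert(focused, axis=2))
```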
IV. IMAGING RESULTS
The proposed deconvolution algorithm was validated on high-quality data obtained by precise SAR measurements in an anechoic chamber. A metal gun (actually a toy) and a fully non-metallic ceramic knife were used as targets, which we imaged together in free space and on a large metal plate. To estimate the PSF, we measured a single metal sphere with a diameter of 2 cm. In addition, the gun was fixed on the chest of a dummy and imaged through a thick raincoat. We used a plastic dummy coated with nickel spray instead of a living person because each SAR measurement lasted several hours. The images obtained by deconvolution are compared here with those of Kirchhoff migration in terms of quality and computation time.
A) Experimental setup
The measurements were conducted with a calibrated vector network analyzer (Agilent E8364B). The stepped-frequency data were acquired within a frequency band of 5–25 GHz with a 10 MHz step, a 300 Hz intermediate-frequency bandwidth, and 2 dBm transmitted power. These settings resulted in a large dynamic range of 120 dB with respect to the receiver noise. The selected 5–25 GHz band fitted well with the transfer function of the UWB antennas we used [14], and gives a down-range resolution of 0.75 cm according to
$$\delta_{dr} = \frac{c}{2B}, \tag{15}$$
where $c$ is the speed of light, and $B$ is the bandwidth. This resolution actually determines the minimal detectable thickness of a weapon concealed on the body.
A synthetic aperture of 60 × 60 cm was formed by moving one Tx/Rx antenna pair, consisting of vertically oriented Vivaldi antennas, in the X–Y plane with a 1 cm step in both directions. Figure 3(a) illustrates the setup, wherein an X–Y translation table moves the equipment. The target distance was chosen to be 40 cm, smaller than the aperture, for a better cross-range resolution given by
Fig. 3. Measurement setup: (a) metal gun and ceramic knife on metal plate; (b) metal gun on conductive dummy under raincoat.
$$\delta_{cr} = \frac{\lambda_c R}{L}, \tag{16}$$
where $\lambda_c = 2$ cm is the central wavelength, $R = 40$ cm is the target distance, and $L = 60$ cm is the synthetic aperture in the X- or Y-direction. Thus, we may expect a cross-range resolution of about 1.33 cm for the given setup.
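For the numbers quoted in this subsection, a few lines suffice; the cross-range relation follows the form reconstructed in (16).

```python
c = 3e8                        # speed of light, m/s
B = 20e9                       # 5-25 GHz bandwidth
lambda_c = 0.02                # central wavelength at 15 GHz, m
R, L = 0.40, 0.60              # target distance and synthetic aperture, m

down_range = c / (2 * B)       # equation (15): 0.0075 m = 0.75 cm
cross_range = lambda_c * R / L # equation (16), as reconstructed: ~0.0133 m = 1.33 cm
```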
The measurement with the dummy, shown in Fig. 3(b), required a larger scan of 150 × 74 cm with the center pointing at the solar plexus. The gun was fixed on the body parallel to the aperture, not only for the sake of simplicity but also assuming that in reality a person under test may be asked to turn around in front of the scanning system. Note that the raincoat consists of a plastic layer on the outside and a layer of artificial fur on the inside. The distance between the aperture and the surface of the dummy equals 50 cm.
To be independent of the transmission–reception UWB radar technology (e.g. impulse, noise, frequency-modulated continuous wave, or stepped-frequency), our deconvolution algorithm requires time-domain data at its input. Therefore, the frequency-domain data acquired by the network analyzer were pre-processed as follows: (a) subtraction of the background (antenna crosstalk), which we measure at the SAR center; (b) multiplication of each 1D trace by a Hann window to avoid ringing in the time domain; (c) padding each 1D trace with zeros to obtain an interpolated signal after transformation; (d) transformation into the time domain by IFFT; and (e) time gating of unwanted reflections. Calibration was performed by estimating the time delay in the Tx/Rx antenna pair from the sphere's reflection at a given distance and then applying the corresponding time shift to all the data. Prior to transformation to the frequency-wavenumber domain for inverse filtering (4), the spatial 2D data at each time instant in $\mathbf{S}(x, y, t)$ were multiplied by a 2D Tukey window, which features a flat top and smooth edges, in order to avoid ripples after transformation. Standard 3D FFT and IFFT procedures were used to transform the data to the frequency-wavenumber domain and back.
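A sketch of the pre-processing chain (a)–(e) and the spatial Tukey taper is given below. The background trace, gate limits, padding factor, and window parameters are assumptions, and the complex IFFT with the real part taken is a simplification of the actual stepped-frequency-to-time conversion.

```python
import numpy as np
from scipy.signal import windows

def preprocess(freq_data, background, pad_factor=4, gate=(0, None), tukey_alpha=0.5):
    """freq_data[ix, iy, kf]: complex stepped-frequency traces over the aperture.
    Steps (a)-(e): background subtraction, Hann window, zero padding, IFFT, time gating."""
    nf = freq_data.shape[2]
    d = freq_data - background[None, None, :]                       # (a) remove antenna crosstalk
    d = d * windows.hann(nf)[None, None, :]                         # (b) suppress ringing in time
    time_data = np.real(np.fft.ifft(d, n=pad_factor * nf, axis=2))  # (c)+(d) zero padding and IFFT
    gated = np.zeros_like(time_data)
    gated[:, :, gate[0]:gate[1]] = time_data[:, :, gate[0]:gate[1]] # (e) keep the wanted reflections
    # Spatial 2D Tukey taper applied at every time instant before the 3D FFT of Section III.
    wx = windows.tukey(gated.shape[0], alpha=tukey_alpha)
    wy = windows.tukey(gated.shape[1], alpha=tukey_alpha)
    return gated * wx[:, None, None] * wy[None, :, None]
```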
B) PSF
A pre-processed data volume, acquired over a metal sphere placed in the middle of the synthetic aperture, was used as the PSF in the deconvolution algorithm. The distance between the antenna aperture and the closest point of the sphere was set at 40 cm. Figure 4 shows a focused 2D image of the sphere in a logarithmic amplitude scale, which reduces the difference between strong and weak scatterers for better visualization. This image was obtained from the focused data volume by energy projection onto the vertical plane. In fact, it expresses the focused PSF of our imaging radar, which features a sidelobe level of less than −30 dB along with a cross-range resolution of 1 cm at −10 dB. The image was focused by deconvolution with the regularization parameter (14) corresponding to an SNR of 40 dB, which gave an error $\delta = 6\%$ and an instability $\gamma = 17\%$. Increasing the SNR results in artifacts appearing above −30 dB, while decreasing it broadens the focused beam. The found SNR can be used as an initial value in the deconvolution of all datasets acquired later on. In general, both the unfocused and focused PSFs represent key characteristics of an imaging system based on deconvolution.
Fig. 4. PSF measured from metal sphere of 2 cm diameter and focused by deconvolution.
C) Imaging of unconcealed weapons
In the next step, a metal gun and a fully non-metallic ceramic knife, placed at positions (−12, −12, 40) and (12, 12, 40) cm, respectively, were imaged. The distance z = 40 cm is measured between the antenna aperture and the rear side of the objects, which were fixed on a large plate of styrene foam. Figure 5(a) shows the weapons imaged by deconvolution at the correct positions and with recognizable shape in a 20 dB dynamic range. The image was obtained with the following deconvolution parameters: 50 dB SNR, 15% error, and 19% instability. Comparison with the presumably most accurate result obtained by Kirchhoff migration (Fig. 5(b)) proves the high performance of deconvolution. Note that Kirchhoff migration was done for an artificially defined grid of 60 × 60 × 30 cm with a voxel of 5 × 5 × 5 mm. Deconvolution defines the grid from the acquired data automatically, which here is a volume of 61 × 61 × 4096 points. For the used 1 cm SAR step and 2 ps sample interval, we obtain a voxel of 10 × 10 × 0.3 mm.
Fig. 5. Image of metal gun and ceramic knife in free space: (a) focused by deconvolution and (b) focused by Kirchhoff migration.
Imaging the weapons on a large metal plate can be seen as a rather difficult detection scenario owing to the presence of the strongest possible unwanted reflection. Images focused by deconvolution and Kirchhoff migration are presented in Figs 6(a) and 6(b), respectively. Both techniques reconstruct the recognizable shapes of the weapons in a 10 dB dynamic range, although with some distortion. Owing to the high sensitivity of deconvolution to the target distance, the metal plate is not seen in Fig. 6(a). Moreover, in order to obtain this result we used an SNR of 46 dB, which is in fact smaller than the actual SNR determined by the metal plate. Thus, we obtained a large error of 69% along with a 5% instability.
Fig. 6. Image of metal gun and ceramic knife on metal plate: (a) focused by deconvolution and (b) focused by Kirchhoff migration.
D) Concealed weapon detection
In the scenario with a concealed gun, we measured not only the dummy but also three PSFs for the same aperture at three distances: 47, 50, and 53 cm. The best deconvolution result, showing the shape of the gun most clearly, was obtained with the PSF measured at 47 cm. Recalling the 50 cm distance to the body and accounting for the thickness of the gun, this result agrees well with the experimental setup. Figure 7 presents the images focused by deconvolution with the different PSFs, and also by Kirchhoff migration, which was implemented for a grid of 150 × 74 × 20 cm with a 5 × 5 × 5 mm voxel. The images obtained by deconvolution have a voxel of 10 × 10 × 0.3 mm. They were focused with the regularization parameter corresponding to an SNR of 46 dB, which gave an error and instability of about 6% and 19%, respectively.
Fig. 7. Image of gun on dummy under raincoat, shown in 12 dB dynamic range: (a–c) focused by deconvolution with different PSFs and (d) focused by Kirchhoff migration.
The images represent 2D projections of focused data volumes onto the vertical plane. A significant advantage of 3D imaging comes from the possibility to process the focused volume slice-by-slice for the most appropriate visualization. Knowing that the gun protrudes from the body to some extent, a simple visualization algorithm assigns larger intensity to slices at smaller distances, sums them up, and thus makes the gun brighter than the body.
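A sketch of such a distance-weighted projection is given below; the function name and the linear weighting profile are assumptions for illustration.

```python
import numpy as np

def weighted_front_projection(volume, weight_near=2.0, weight_far=1.0):
    """Project a focused volume (x, y, range slices) onto the vertical plane,
    giving slices closer to the aperture a larger intensity so that a
    protruding object appears brighter than the body behind it."""
    nz = volume.shape[2]
    weights = np.linspace(weight_near, weight_far, nz)   # nearer slices weighted more
    return np.sum(np.abs(volume) * weights[None, None, :], axis=2)
```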
The obtained results demonstrate that Kirchhoff migration delivers a better image of the concealed weapon (Fig. 7(d)) than the best deconvolution result (Fig. 7(a)). However, the latter provides a reasonable image quality with much faster computation.
E) Computational time
The imaging algorithms have been implemented in MATLAB running on a laptop of the latest generation with an Intel Core i7 2.3 GHz processor (Ivy Bridge architecture), 8 GB RAM, and 64-bit Windows 7. In terms of computation time, deconvolution focuses a data volume of 151 × 75 × 4096 points in 9 s for one PSF, including visualization (Fig. 7(a–c)). Kirchhoff migration produces an image of 301 × 149 × 41 voxels in 13 min (Fig. 7(d)).
Unlike Kirchhoff migration, deconvolution does not create an artificial volumetric grid but works directly with the acquired data. The voxel is defined by how densely we sample the scattered field in space and time, i.e. by the SAR step and the sample interval. The above images focused by deconvolution have a voxel of 10 × 10 × 0.3 mm. Although deconvolution improves the down-range resolution, 0.3 mm is unnecessarily small in our case because the resolution is physically limited by the selected 20 GHz bandwidth, which gives 7.5 mm. Numerical experiments have shown that the images remain nearly the same as long as the sample interval does not exceed 16 ps, meaning 512 points per signal. After such downsampling, the computation time for deconvolution becomes 1.4 s. Further reducing the 151 × 75 × 512 data volume to 128 × 64 × 512 points (to make full use of the FFT) does not speed up focusing drastically, giving 1 s. The speed of Kirchhoff migration does not change with downsampling in the time domain. However, increasing its voxel to 7.5 × 7.5 × 7.5 mm reduces the computation time to 4 min, while the image deteriorates only slightly. Table 1 summarizes the computational efficiency of the imaging algorithms in the considered cases.
Table 1. Computational efficiency for an image of 150 × 74 × 20 cm.
| Algorithm and data/grid size | Computation time |
| --- | --- |
| Deconvolution, 151 × 75 × 4096 points | 9 s |
| Deconvolution, 151 × 75 × 512 points (16 ps sampling) | 1.4 s |
| Deconvolution, 128 × 64 × 512 points | 1 s |
| Kirchhoff migration, 5 × 5 × 5 mm voxel | 13 min |
| Kirchhoff migration, 7.5 × 7.5 × 7.5 mm voxel | 4 min |
V. CONCLUSION
A fast imaging algorithm based on 3D deconvolution has been developed for concealed weapon detection by UWB radar. The algorithm treats the scattered field as a superposition of reflections from point-like scatterers and deconvolves the PSF out of the measured radar volume. Since the PSF varies with the target distance, deconvolution performs well only for a given range. This limitation can be bypassed in practice by estimating a few PSFs at the distances of interest and deconvolving them successively. The high imaging speed comes from the use of the FFT and 3D inverse filtering in the frequency-wavenumber domain. Implementation of the inverse filter relies on regularization theory and two proposed numerical criteria that allow us to find a suitable regularization parameter iteratively.
The deconvolution algorithm has been tested on an experimental dataset acquired by a short-range SAR with low transmitted power over the frequency band of 5–25 GHz. It delivers visually recognizable 3D images of a metal gun and a ceramic knife both in free space and on a large metal plate. In addition, the metal gun has been successfully imaged through a thick raincoat on a dummy. Comparison with imaging by Kirchhoff migration, seen as one of the most accurate reconstruction techniques, has shown that the developed deconvolution algorithm provides sufficient image quality with much faster computation. It is able to image a person within 1.4 s versus 240 s for Kirchhoff migration. Since deconvolution requires a priori knowledge of the PSF, it can be used in systems with certain scanning rules, such as a constant distance to the person under test, turning around in front of the system, etc.
It has also been demonstrated that concealed weapon detection benefits from proper visualization of the focused data. In this respect, slice-by-slice processing of a focused 3D image opens the way to various visualization techniques.
ACKNOWLEDGEMENTS
This work has been done within the Weapon Scanner project in cooperation with Rotterdam Rijnmond Police and financially supported by the Dutch Ministry of Security and Justice. The authors also thank Pascal Aubry for his contribution to measurements and visualization techniques.
Timofey Savelyev received a Dipl.-Eng. degree with honors in radio electronic systems from the Baltic State Technical University, St. Petersburg, Russia in 1997. A Ph.D. degree in radar and navigation systems was granted to him by the State University of Aerospace Instrumentation, St. Petersburg in 2000. Since then he has worked as a Visiting Researcher at Vrije Universiteit Brussel, Brussels, Belgium in 2002, as a Research Associate at the Centre for Northeast Asian Studies, Tohoku University, Sendai, Japan during 2003–2005. In 2006–2012, he was a Senior Researcher with the International Research Centre for Telecommunications and Radar, Delft University of Technology, The Netherlands. At present he holds the position of Radar System Architect at Omniradar, The Netherlands. His main research interests include short-range radar systems, ultra-wideband signal processing and analysis, and radar imaging.
Alexander Yarovoy graduated from Kharkov State University, Ukraine, in 1984 with a Diploma with honours in Radiophysics and Electronics. He received the Cand. Sci. and Dr. Sci. degrees in radiophysics in 1987 and 1994, respectively. In 1987, he joined the Department of Radiophysics at Kharkov State University as a Researcher and became a Professor there in 1997. From September 1994 through 1996, he was with the Technical University of Ilmenau, Germany as a Visiting Researcher. Since 1999 he has been with Delft University of Technology, The Netherlands. Since 2009, he has led there a chair of Microwave Sensing, Systems, and Signals. His main research interests are in ultra-wideband microwave technology and its applications (in particular, radars) and applied electromagnetics (in particular, UWB antennas). He has authored and co-authored more than 250 research articles, four patents and 14 book chapters.