1. INTRODUCTION
Autonomous navigation has attracted considerable attention in the indoor, underground mining, subway, railway, underwater and tunnel industries. However, Global Navigation Satellite System (GNSS) signals are not always available in these areas (Hesch et al., 2016; Allotta et al., 2014). In order to estimate the positions of moving objects in GNSS-denied environments, it is necessary to use other sensors to measure the displacements of the moving objects. For this reason, an Inertial Measurement Unit (IMU) that senses three Degrees Of Freedom (DOF) of linear acceleration as well as three DOF of rotational velocity of a moving object has been widely used in attitude measurement. However, IMU measurements are often corrupted by background noise, and the consequent biased estimation and filtering divergence errors result in unreliable position and attitude estimation during autonomous navigation (Allotta et al., 2012). Furthermore, due to drift over time, an IMU accumulates measurement errors (Wang et al., 2015; Markham et al., 2012). Although an IMU's drift can be bounded within an acceptable range to improve measurement accuracy, such IMUs are generally too expensive for practical everyday use (Lehtola et al., 2016). For practical applications, alternative sensors, such as Light Detection And Ranging (LiDAR), laser scanning or cameras, have been applied to calculate the position and attitude of moving objects in GNSS-denied environments. As a laser scanner can complement an IMU with distance and azimuth information of high accuracy and efficiency, it has been commonly utilised in moving object navigation (Lauterbach et al., 2015; Bosse et al., 2012).
Recent work has demonstrated that a Laser Scanning aided Inertial Navigation System (LSINS) can obtain highly accurate position and attitude information for moving object navigation. This combination of laser scanning and inertial measurements assumes prior knowledge of the rigid transformation between the different sensors (Soloviev and Uijt De Haag, 2010; Tang et al., 2015; Shi et al., 2015). However, inconsistencies are usually found between laser scanning and inertial measurements, and hence the combination has not delivered satisfactory results in existing LSINS. Biased estimation and filtering divergence errors are consequently generated and decrease the navigation accuracy (Xiong et al., 2009; Liu et al., 2013; Kim et al., 2012). For example, Hesch et al. (2016) presented an Extended Kalman Filter (EKF) to fuse information from an IMU and a Two-Dimensional (2D) laser scanner to concurrently estimate the six DOF position and attitude of a moving object with respect to a Three-Dimensional (3D) map of the environment. However, without bias correction, the IMU measurements accumulated attitude estimation errors until divergence. Aghili and Su (2016) achieved a robust six DOF relative navigation solution by combining an Iterative Closest Point (ICP) registration algorithm and a noise Adaptive Kalman Filter (AKF) in a closed-loop configuration, in which the measurements from a laser scanner and an IMU were processed by the ICP-AKF method for autonomous navigation. However, the proposed ICP-AKF was sensitive to local minima and outliers in the sensory measurements; without proper initial attitude information, the method converged slowly and then diverged. Baglietto et al. (2011) developed a system for mapping and self-localisation based on a six DOF IMU and a laser scanner. The sensory measurements were fed into an Unscented Kalman Filter (UKF)-based Simultaneous Localisation And Mapping (SLAM) algorithm, and experimental results showed that the UKF-based approach failed to integrate the multi-sensory data for position estimation.
The literature indicates that the EKF, AKF or UKF algorithms used for LSINS in GNSS-denied environments (Wu et al., 2014; Kong et al., 2016; Sun et al., 2015) can suffer from inconsistencies between different sensors. That is, errors in the LSINS error model strongly influence the consistency of the state estimates and consequently accumulate biased estimation and filtering divergence errors. Therefore, a priori knowledge of the error model of LSINS is required to eliminate state estimation inconsistencies. In the course of LSINS state estimation, the EKF or UKF-based integration algorithms cannot resolve the filtering divergence, which directly affects the measuring accuracy of LSINS (Tan et al., 2015; Chen et al., 2015). Although recent research attempted to control the filter error using a Singular Value Decomposition Unscented Kalman Filter (SVDUKF), singular value decomposition only improves the stability of the state covariance matrix of the UKF algorithm, while the state estimation bias caused by measurement uncertainty remains unsolved (Gao et al., 2010). Fortunately, Radial Basis Function Neural Networks (RBFNN) are able to model arbitrary nonlinear dynamic systems and identify system uncertainty (Deng and Zhang, 2013; Jwo and Huang, 2004; Ma et al., 2018; Zhan and Wan, 2006). It is therefore possible to utilise an RBFNN to discover the state estimation bias in SVDUKF processing and so reduce the biased estimation and filtering divergence errors. However, an RBF-improved SVDUKF has not yet been investigated in autonomous navigation.
In order to bridge the aforementioned research gap in autonomous navigation, this paper proposes a novel RBF-improved SVDUKF to reduce the biased estimation and filtering divergence errors in the process of LSINS integration. First, the relationship between the laser scanner coordinates and the inertial sensor coordinates is established to build the error state vector and measurement error model of LSINS. Second, an RBFNN is employed to optimise the model parameters of the SVDUKF to improve the LSINS position accuracy; in particular, the RBFNN is adopted to suppress negative-definite variation in the a priori covariance matrix of the UKF. Lastly, experimental tests are implemented to demonstrate the effectiveness of the proposed method in GNSS-denied environments.
The remainder of this paper is organised as follows. Section 2 describes the error model of LSINS. The proposed error control approach, the RBF-improved SVDUKF, is presented in Section 3. An evaluation of the proposed error control method using experimental tests is presented in Section 4, and Section 5 concludes the paper.
2. ERROR MODELLING OF LSINS
Generally, an LSINS contains four basic coordinate frames: the laser scanner frame {L}, the inertial measurement unit frame {I}, the object frame {O} and the world frame {W}. Figure 1 depicts the relationships of these four frames.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190328060439027-0246:S037346331800084X:S037346331800084X_fig1g.jpeg?pub-status=live)
Figure 1. Coordinate frames {L}, {I}, {O} and {W} of LSINS.
2.1. Transformation between {L} frame and {I} frame
To effectively integrate the measurements of the laser scanner and IMU, we should first calculate the relative positions and orientations between these sensors.
Assuming that the point ${}^{W}P$ in the {W} frame corresponds to the point ${}^{L}P$ in the {L} frame, the relationship between the {W} and {L} frames can be calculated as (Liu, 2017):
(1)$${}^{L}P=R^{L}_{W}\cdot{}^{W}P+{}^{L}T_{LW}$$
where $R^{L}_{W}$ represents a 3 × 3 orthonormal rotation matrix between the {W} and {L} frames and ${}^{L}T_{LW}$ is the translation vector. The success of the coordinate transformation depends on finding a suitable $R^{L}_{W}$ and ${}^{L}T_{LW}$.
In order to calculate $R^{L}_{W}$ and ${}^{L}T_{LW}$, the coordinate transformation can be obtained by observing laser scans of a calibration plane, which yields a geometric constraint between the laser scanning points and the calibration planes. As the laser scanning point clouds lie on the calibration planes, we have:
(2)$${}^{W}r\cdot{}^{W}P=d$$
where ${}^{W}r$ denotes the normal vector of the plane and d represents the distance between the {W} frame origin and the calibration plane. Equation (1) can be re-written as:
(3)$${}^{W}P=(R^{L}_{W})^{-1}({}^{L}P-{}^{L}T_{LW})$$
Substituting Equation (3) into Equation (2) yields:
(4)$${}^{W}r\cdot(R^{L}_{W})^{-1}({}^{L}P-{}^{L}T_{LW})=d$$
For a given laser scanning point and a calibration plane position, $R^{L}_{W}$ and ${}^{L}T_{LW}$ have to satisfy the constraint in Equation (4), which can be solved by the following steps.
Step 1: Linear solution. Let us assume that in the {L} frame the coordinates of the laser scanning point can be defined as ${}^{L}P=[{}^{L}P_{x}\ {}^{L}P_{y}\ 0]^{T}$. Therefore, Equation (4) can be rewritten as:
(5)$${}^{W}r\cdot(R^{L}_{W})^{-1}([{}^{L}P_{x}\ {}^{L}P_{y}\ 0]^{T}-{}^{L}T_{LW})=d$$
Define $Z=(R^{L}_{W})^{-1}\,[\,e_{1}\ \ e_{2}\ \ -{}^{L}T_{LW}\,]$, where $e_{1}=[1\ 0\ 0]^{T}$ and $e_{2}=[0\ 1\ 0]^{T}$; then Equation (5) can be simplified as:
(6)$${}^{W}r\cdot Z\cdot[{}^{L}P_{x}\ {}^{L}P_{y}\ 1]^{T}=d$$
where Z denotes the parameters to be calculated, that is, the unknown $R_{W}^{L}$ and ${}^{L}T_{LW}$. The Least Squares (LS) method is adopted to estimate Z in this paper.
To obtain the optimal estimation of Z, multiple calibration planes are used in the coordinate transformation process. It is assumed that there are N calibration planes in total and that the i-th plane, with normal ${}^{W}r_{i}=[r_{i,1}\ r_{i,2}\ r_{i,3}]$, contains $M_{i}$ (i = 1, …, N) laser scanning points. Let us define $Z=\left[\matrix{z_{11} & z_{12} & z_{13} \cr z_{21} & z_{22} & z_{23} \cr z_{31} & z_{32} & z_{33}}\right]$, denote by $d_{i}$ the distance between plane ${}^{W}r_{i}$ and the {W} frame origin, and write the j-th (j = 1, 2, …, $M_{i}$) laser scanning point on plane i as ${}^{L}P_{ij}=[{}^{L}P_{ij,x}\ {}^{L}P_{ij,y}\ 1]^{T}$. Equation (6) can then be rewritten as:
(7)$${}^{W}r_{i}\cdot Z\cdot{}^{L}P_{ij}=d_{i},\quad i=1,\ldots,N;\ j=1,\ldots,M_{i}$$
As can be seen in Equation (7), each laser scanning point yields one linear equation, and hence the LS method can be applied to solve for Z from the stacked equations. $R_{W}^{L}$ and ${}^{L}T_{LW}$ can then be recovered from Z.
Let $R_{i}$ be the i-th row of $R_{W}^{L}$; then $(R_{W}^{L})^{-1}=(R_{W}^{L})^{T}=[R_{1}^{T}\ R_{2}^{T}\ R_{3}^{T}]$, and we can obtain:
(8)$$Z=[R_{1}^{T}\ \ R_{2}^{T}\ \ -(R_{W}^{L})^{-1}\,{}^{L}T_{LW}]$$
As the columns of $R_{W}^{L}$ are orthogonal to each other, according to Equation (8) we derive:
(9)$$Z_{1}^{T}Z_{1}=Z_{2}^{T}Z_{2}=1,\quad Z_{1}^{T}Z_{2}=0,\quad R_{3}^{T}=Z_{1}\times Z_{2}$$
where $Z_{i}$ represents the i-th column of Z. In Equation (8) we note that $Z_{3}=-(R_{W}^{L})^{-1}\,{}^{L}T_{LW}$, which can be re-written as:
(10)$${}^{L}T_{LW}=-R_{W}^{L}\cdot Z_{3}$$
Thus, by solving Equations (9) and (10) together, $R_{W}^{L}$ and ${}^{L}T_{LW}$ can be obtained.
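To make the linear solution concrete, the following minimal numpy sketch stacks the per-point constraints of Equation (7), solves for Z by least squares and recovers $R_{W}^{L}$ and ${}^{L}T_{LW}$ via Equations (9) and (10). The function and variable names are ours, and the re-orthogonalisation step is a numerical convenience rather than part of the paper's derivation.

```python
# Hedged sketch of the linear solution (Step 1), assuming planes is a list of
# (w_r, d) pairs and scans a list of (M_i, 2) arrays of 2D laser points.
import numpy as np

def solve_linear_calibration(planes, scans):
    A, b = [], []
    for (w_r, d), pts in zip(planes, scans):
        for (px, py) in pts:
            lp = np.array([px, py, 1.0])            # homogeneous scan point [x y 1]
            # Equation (7): w_r . Z . lp = d is linear in the 9 entries of Z
            A.append(np.kron(w_r, lp))              # one row of the stacked system
            b.append(d)
    z, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    Z = z.reshape(3, 3)
    # Equation (9): the first two columns of Z are orthonormal columns of R^T
    r1 = Z[:, 0] / np.linalg.norm(Z[:, 0])
    r2 = Z[:, 1] / np.linalg.norm(Z[:, 1])
    r2 -= r1 * (r1 @ r2); r2 /= np.linalg.norm(r2)  # re-orthogonalise numerically
    r3 = np.cross(r1, r2)
    R_inv = np.column_stack([r1, r2, r3])           # (R^L_W)^-1 = (R^L_W)^T
    R = R_inv.T
    T = -R @ Z[:, 2]                                # Equation (10): T = -R^L_W . Z_3
    return R, T
```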
Step 2: Nonlinear solution. The linear solution is calculated using the LS method, which minimises the algebraic distance between the laser scanning points and the calibration plane. In order to eliminate the difference between the measured distance and the algebraic distance, a nonlinear solution is employed.
In Equation (5) there are two types of distance, that is, the algebraic distance d between the calibration plane and the {W} frame and the measured distance ${}^{W}r\cdot \lpar R_{W}^{L}\rpar ^{-1}\lpar {}^{L}P-{}^{L}T_{LW}\rpar $. The difference between these two distances represents the distance error between the laser scanning points and the calibration plane. The goal of the nonlinear solution is to eliminate the distance errors using the following objective function:
(11)$$\min_{R_{W}^{L},\,{}^{L}T_{LW}}\ \sum_{i=1}^{N}\sum_{j=1}^{M_{i}}\left(d_{i}-{}^{W}r_{i}\cdot(R_{W}^{L})^{-1}({}^{L}P_{ij}-{}^{L}T_{LW})\right)^{2}$$
In this work, Equation (11) is minimised by the nonlinear LS method (Pulford, 2010). The nonlinear LS performs the optimisation from an initial guess of $R_{W}^{L}$ and ${}^{L}T_{LW}$ and outputs the optimal estimates of $R_{W}^{L}$ and ${}^{L}T_{LW}$; a code sketch of this refinement is given after Equation (13) below. The geometric relationship between the laser scanner and the IMU can then be established to complete the coordinate transformation. The IMU usually consists of three gyros and three accelerometers. The gyros provide the changes of the Euler angles, while the accelerometers give the specific forces. After integrating the outputs of the accelerometers and gyros, the rotation and translation between the {W} frame and the {I} frame can be obtained. Defining the rotation matrix as $R_{I}^{W}$ and the translation vector as ${}^{I}T_{WI}$, a point in the {I} frame can be projected into the {W} frame through:
(12)$${}^{W}P=R_{I}^{W}\cdot{}^{I}P+{}^{I}T_{WI}$$
where ${}^{I}P$ represents the point in the {I} frame and ${}^{W}P$ denotes the point in the {W} frame.
Substituting Equation (1) into Equation (12), the relationship between the {I} frame and the {L} frame can be expressed by Equation (13).
(13)$${}^{L}P=R^{L}_{W}\cdot(R_{I}^{W}\cdot{}^{I}P+{}^{I}T_{WI})+{}^{L}T_{LW}$$
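As referenced above, the nonlinear refinement of Equation (11) can be sketched with scipy's general-purpose nonlinear least squares; the rotation-vector parameterisation and all names here are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of the nonlinear solution (Step 2): minimise the geometric
# distance residuals of Equation (11), starting from the linear estimate.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def refine_calibration(R0, T0, planes, scans):
    x0 = np.concatenate([Rotation.from_matrix(R0).as_rotvec(), T0])

    def residuals(x):
        R = Rotation.from_rotvec(x[:3]).as_matrix()   # current guess of R^L_W
        T = x[3:]
        res = []
        for (w_r, d), pts in zip(planes, scans):
            for (px, py) in pts:
                lp = np.array([px, py, 0.0])
                # distance error d - w_r . (R^L_W)^-1 (lp - T), cf. Equation (11)
                res.append(d - w_r @ (R.T @ (lp - T)))
        return np.array(res)

    sol = least_squares(residuals, x0)                # iterative nonlinear LS
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```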
2.2. Error Modelling of LSINS
The kinematic model of LSINS is $\dot{y}=F\lpar y\comma \; u\rpar $, where $y=\lsqb {}^{O}T_{WO}\ {}^{O}V\ \Theta\rsqb ^{T}$ is the state vector and u is the input parameter. The LSINS kinematic model can be calculated as:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190328060439027-0246:S037346331800084X:S037346331800084X_eqn14.gif?pub-status=live)
where ${}^{O}\dot{T}_{WO}$ denotes the rate of the translation vector from the {W} frame origin to the {O} frame origin; ${}^{O}V$ represents the velocity of the moving object in the {O} frame; $\Theta=\lsqb \alpha\ \beta\ \theta\rsqb $ denotes the roll, pitch and yaw of the moving object; $R^{W}_{O}$ represents the rotation matrix from the {O} frame to the {W} frame; ${}^{O}a_{O}=f^{O}+g^{O}$ is the moving object's acceleration in the {O} frame, where $f^{O}$ and $g^{O}$ denote the specific force and local gravity vectors of the object in the {O} frame, respectively; and ${}^{O}\omega_{WO}$ denotes the angular rate of the {O} frame relative to the {W} frame. $\Omega^{-1}_{E}$ can be defined as:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190328060439027-0246:S037346331800084X:S037346331800084X_eqn15.gif?pub-status=live)
where α and β are the roll and pitch angles of the moving object.
The standard form of the localisation equations can be expressed as $\hat{B}=F\lpar \hat{b}\comma \; \hat{u}\rpar $, where $\hat{B}$ is the estimated value of the state $\hat{b}$ and $\hat{u}$ is the measured input. The localisation equations of LSINS can be calculated as (Huang, 2010):
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190328060439027-0246:S037346331800084X:S037346331800084X_eqn16.gif?pub-status=live)
where ${}^{O}\hat{\omega}_{WO}={}^{O}\hat{\omega}_{IO}-{}^{O}\hat{\omega}_{IW}$ and ${}^{O}\hat{\omega}_{IW}=\hat{R}^{O}_{W}\cdot {}^{W}\hat{\omega}_{LW}$, with ${}^{W}\hat{\omega}_{LW}=\omega_{IL}\lsqb \cos\hat{\alpha}\ \ 0\ \ {-}\sin\hat{\alpha}\rsqb$ and $\hat{\alpha}$ the measured attitude. ${}^{O}\hat{a}_{O}$ is computed as:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190328060439027-0246:S037346331800084X:S037346331800084X_eqn17.gif?pub-status=live)
where $\gamma_{a}$ is the measurement noise with Gaussian distribution and $b_{a}$ is the random noise.
The error state vector includes the position, velocity and attitude errors.
(1) The position errors: the errors of position are computed as:

(18)$$\delta\lpar {}^{W}\dot{T}_{WO}\rpar =R_{O}^{W}\cdot {}^OV-\hat{R}^{W}_O\cdot {}^O\hat{V}=\hat{R}^{W}_O\cdot \delta\lpar {}^{O}V\rpar -{}^W\hat{V}\cdot\rho $$

where $R^{W}_{O}=\hat{R}^{W}_{O}\lpar I+\rho\rpar $, $I=\lpar R^{L}_{W}\rpar ^{T}\cdot R^{L}_{W}$ is the identity matrix, ρ is the tilt error of the calibration plane and ${}^{O}V={}^{O}\hat{V}+\delta\lpar {}^{O}V\rpar $.
(2) The velocity errors: the errors of velocity can be calculated as:

(19)$$\eqalign{\delta\lpar {}^{O}\dot{V}\rpar &={}^O\dot{V}-{}^O\dot{\hat{V}}={}^Oa_O-\lpar \Omega_{IL}^O+\Omega_{IO}^O\rpar \cdot {}^OV-{}^O\hat{a}_{O}+\lpar \hat{\Omega}_{IL}^{O}+\hat{\Omega}_{IO}^{O}\rpar \cdot {}^{O}\hat{V} \cr &=\hat{R}_{O}^{W} \left(\left(\displaystyle{{\partial\lpar g^{W}\rpar }\over{\partial\lpar {}^{W}T_{WO}\rpar }}+{}^W\hat{V}\cdot \displaystyle{{\partial\lpar {}^{W}\omega_{IL}\rpar }\over{\partial\lpar {}^{W}T_{WO}\rpar }}\right)\delta\lpar {}^{W}T_{WO}\rpar \right. \cr &\quad + \lpar g^{W}+{}^W\omega_{IL}\lpar {}^{W}\hat{V}\rpar^{T}-\lpar {}^{W}\omega_{IL}\rpar^{T}\cdot {}^W\hat{V}\cdot I\rpar \rho\rpar \cr & \quad -\lpar \hat{\Omega}_{IL}^{O}+\hat{\Omega}_{IO}^{O}\rpar \delta\lpar {}^{O}V\rpar -\delta b_{a}- {}^{O}\hat{V}\cdot \delta b_{g}-\gamma_{a}-{}^O\hat{V}\cdot\gamma_{g}}$$

where $g^{W}$ represents the local gravity vector of the moving object in the {W} frame, $b_{g}$ is the gyro bias random noise, $\gamma_{g}$ represents the white Gaussian measuring noise and ${}^{W}\omega_{IL}$ represents the angular rate from the origin of the {L} frame to the {I} frame, expressed in the {W} frame.
(3) The attitude errors: the attitude error is given in Equation (20):

(20)$$\delta\lpar \dot{\Theta}\rpar =\dot{\Theta}-\dot{\hat{\Theta}}=\Omega_{E}^{-1}\cdot {}^W\hat{\omega}_{IW}-\delta\lpar {}^{W}\omega_{IW}\rpar -\hat{R}^{W}_I\lpar \delta b_{g}+\gamma_{g}\rpar $$

where $\delta\lpar {}^{W}\omega_{IW}\rpar$ is computed as:

(21)$$\delta\lpar {}^{W}\omega_{IW}\rpar =-\omega_{IL}\cdot \left[\matrix{\sin \alpha \cr 0 \cr \cos \alpha}\right]\cdot\displaystyle{{\partial\Theta}\over{\partial\lpar {}^{W}R_{IW}\rpar }}\cdot\delta\lpar {}^{W}R_{IW}\rpar $$
The error state vector can be expressed as a function of the position, velocity and attitude errors in Equation (22):
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190328060439027-0246:S037346331800084X:S037346331800084X_eqn22.gif?pub-status=live)
where x, γ and b denote the error state vector, the measuring noise vector and the process noise vector, respectively.
Based on Equations (18), (19), (20) and (22), the measurement error model of LSINS is obtained as:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190328060439027-0246:S037346331800084X:S037346331800084X_eqn23.gif?pub-status=live)
2.3. LSINS Navigation Processing
In the LSINS aided navigation process, the IMU can be applied to measure the position and orientation of moving objects, while the laser scanner can provide accurate distance and azimuth information to a landmark. SVDUKF can be used to combine the IMU and the laser scanner to predict the position of moving objects. The observation functions of SVDUKF are given by Equation (24):
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190328060439027-0246:S037346331800084X:S037346331800084X_eqn24.gif?pub-status=live)
where $X_{k}$ is the measurement vector of LSINS, $p^{n}_{IMU}$ is the predicted position from the IMU mechanisation, $p^{n}_{LiDAR}$ is the observed position from LiDAR, and $\varepsilon^{n}_{IMU}$ and $\varepsilon^{n}_{LiDAR}$ are the predicted and observed angles, respectively. The LiDAR-observed angle can be obtained from LiDAR scan matching. $H_{k}$ is the design matrix that describes the relationship between the state vector and the measurement vector of LSINS.
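As a hedged illustration of Equation (24), the snippet below assembles a loosely coupled measurement vector from the IMU-mechanised pose and the LiDAR scan-matching pose. The residual form and all names (p_imu, eps_imu and so on) are our assumptions; the paper does not spell out this implementation.

```python
# Illustrative sketch: form the LSINS measurement vector as the difference
# between the IMU-predicted pose and the LiDAR-observed pose.
import numpy as np

def build_measurement(p_imu, eps_imu, p_lidar, eps_lidar):
    """p_*: 3-vector positions in the navigation frame; eps_*: attitude angles
    (rad). Returns the measurement vector X_k fed to the filter."""
    d_pos = np.asarray(p_imu) - np.asarray(p_lidar)
    # wrap angle differences into (-pi, pi] to avoid spurious jumps
    d_ang = (np.asarray(eps_imu) - np.asarray(eps_lidar) + np.pi) % (2 * np.pi) - np.pi
    return np.concatenate([d_pos, d_ang])
```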
The LSINS navigation process is carried out in the following two steps. The first step is time propagation, which integrates the outputs of the IMU to obtain estimates of the current states. The second step is measurement updating, in which the laser scanner measurements are used to correct the state estimates from Step 1.
Step 1: Time propagation.
(1) At time $t_{k}$, obtain $X^{+}_{k}$ and $P^{+}_{k}$ from the IMU, and integrate $\dot{X}=f\lpar X\comma \; u\rpar $ over $t \in (t_{k}, t_{k+1})$ to get $X^{-}_{k+1}$.

(2) Calculate $P^{-}_{k+1}=\Phi P^{+}_{k}\Phi^{T}+Q_{d}$, where $Q_{d}=GQG^{T}T$, $T=t_{k+1}-t_{k}$ and $\Phi=e^{HT}$ with $\Vert HT\Vert \ll 1$.
Step 2: Measurement updating.

(1) At time $t_{k}$, predict $Y^-_{k+1}=h\lpar X^-_{k+1}\rpar $.

(2) The measurement residual is $\delta Y_{k+1}=Y_{k+1}-Y^-_{k+1}$.

(3) The correction gain is $K_{k+1}=P^-_{k+1}H^{T}_{k+1}\lpar H_{k+1}P^-_{k+1}H^{T}_{k+1}+R_{k+1}\rpar ^{-1}$.

(4) Correct the state: $X^+_{k+1}=X^-_{k+1}+K_{k+1}\,\delta Y_{k+1}$.

(5) Update the error covariance: $P^+_{k+1}=\lpar I-K_{k+1}H_{k+1}\rpar P^-_{k+1} \lpar I-K_{k+1}H_{k+1}\rpar ^{T}+K_{k+1}R_{k+1}K^{T}_{k+1}$.
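The two steps map directly onto a generic predict/correct cycle. The following numpy sketch is written under the assumption of a first-order discretisation ($\Phi \approx I + HT$ for small $\Vert HT\Vert$) and the Joseph-form covariance update given above; f, F_jac, h, H, Q and R are user-supplied placeholders for the LSINS model, not quantities defined in the paper.

```python
# Minimal sketch of the two-step LSINS navigation cycle.
import numpy as np

def propagate(x, P, f, F_jac, Q, dt):
    """Step 1: integrate the IMU-driven state model and propagate covariance."""
    x_pred = x + f(x) * dt                      # Euler integration of x' = f(x, u)
    Phi = np.eye(len(x)) + F_jac(x) * dt        # Phi = e^{HT} ~ I + HT for small ||HT||
    P_pred = Phi @ P @ Phi.T + Q * dt           # Q * dt stands in for Q_d = G Q G^T T
    return x_pred, P_pred

def update(x_pred, P_pred, y, h, H, R):
    """Step 2: correct the prediction with the laser scanner measurement."""
    dy = y - h(x_pred)                          # measurement residual
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)         # correction gain
    x = x_pred + K @ dy
    IKH = np.eye(len(x_pred)) - K @ H
    P = IKH @ P_pred @ IKH.T + K @ R @ K.T      # Joseph form keeps P symmetric PSD
    return x, P
```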
3. OPTIMAL ESTIMATION USING RBF-IMPROVED SVDUKF
After the error state vector and the measurement error model of LSINS are established, the optimal estimation procedure of the RBF-improved SVDUKF is conducted to improve the position accuracy of LSINS.
3.1. SVDUKF algorithm
Assume that the error state vector and the error measurement equation of LSINS form a discrete-time nonlinear system:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190328060439027-0246:S037346331800084X:S037346331800084X_eqn25.gif?pub-status=live)
where f(·) is the nonlinear error state updating function and h(·) is the error measurement function. The estimation procedure of SVDUKF includes the following four steps:
(1) Initialisation:
(26)$$\left\{\matrix{ \hat{x}_{0}=E\lsqb x_{0}\rsqb \hfill \cr S_{0}=E\lsqb \lpar x_{0}-\hat{x}_{0}\rpar \lpar x_{0}-\hat{x}_{0}\rpar^{T}\rsqb \hfill }\right. $$
(2) Calculate the sigma points with corresponding weight coefficients:

(27)$$\left\{\matrix{S_{t-1}=U_{t-1}\cdot \Lambda_{t-1}\cdot C^T_{t-1}=U_{t-1}\cdot \left[\matrix{G_{t-1} & 0 \cr 0 & 0}\right]\cdot C^T_{t-1} \hfill \cr \chi_{i\comma t-1}=\hat{x}_{t-1}+\lpar U_{t-1}\sqrt{\Lambda_{t-1}\lpar m+\lambda\rpar }\rpar_i\comma \; \lpar i=1\comma \; \ldots\comma \; m\rpar \hfill \cr \chi_{i\comma t-1}=\hat{x}_{t-1}-\lpar U_{t-1}\sqrt{\Lambda_{t-1}\lpar m+\lambda\rpar }\rpar_{i-m}\comma \; \lpar i=m+1\comma \; \ldots\comma \; 2m\rpar \hfill \cr W^m_0=\displaystyle{{\lambda}\over{m+\lambda}} \hfill \cr W^c_0=\displaystyle{{\lambda}\over{m+\lambda}}+1+\varepsilon^2-\eta \hfill \cr W^m_i=W^c_i=\displaystyle{{\lambda}\over{2\lpar m+\lambda\rpar }}\comma \; \lpar i=1\comma \; \ldots\comma \; 2m\rpar \hfill }\right. $$

where $S_{t-1}$ is the covariance matrix of the state vector; $\Lambda_{t-1}$ is a diagonal fading factor matrix; $G_{t-1}={\rm diag}(g_{1}, g_{2}, \cdots, g_{r})$, where $g_{1}\ge g_{2}\ge \cdots \ge g_{r}\ge 0$ are the singular values of $S_{t-1}$ and r is the rank of $S_{t-1}$; $\chi_{t-1}\in R^{m\times(2m+1)}$ is the matrix of state sigma points; $U_{t-1}$ and $C^{T}_{t-1}$ are the left and right singular vectors of $S_{t-1}$, respectively; $\lambda=\varepsilon^{2}(m+k)-m$ is a scaling parameter; ε (0 ≤ ε ≤ 0·5) is the pre-set distance between the sigma points and $S_{t-1}$, which determines the spread of the sigma points around the mean of the state x; k is a secondary scaling parameter, usually set to 3 − m; and η denotes the prior knowledge of the distribution of x.
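A minimal numpy sketch of this sigma-point construction is given below; it follows Equation (27) with the SVD of $S_{t-1}$ in place of the Cholesky factor of the standard UKF, and treats η as a user-supplied constant (default 0), which is our simplification.

```python
# Sketch of SVD-based sigma-point generation (Equation (27)); the SVD tolerates
# a near-singular, non-positive-definite state covariance.
import numpy as np

def svd_sigma_points(x_hat, S, eps=0.5, k=None, eta=0.0):
    m = len(x_hat)
    k = 3 - m if k is None else k
    lam = eps**2 * (m + k) - m                    # scaling parameter lambda
    U, g, _ = np.linalg.svd(S)                    # S = U . diag(g) . C^T
    spread = U @ np.diag(np.sqrt(g * (m + lam)))  # U . sqrt(Lambda (m + lambda))
    chi = np.empty((2 * m + 1, m))
    chi[0] = x_hat
    for i in range(m):
        chi[1 + i] = x_hat + spread[:, i]
        chi[1 + m + i] = x_hat - spread[:, i]
    Wm = np.full(2 * m + 1, lam / (2 * (m + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (m + lam)
    Wc[0] = lam / (m + lam) + 1 + eps**2 - eta    # W_0^c of Equation (27)
    return chi, Wm, Wc
```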
(3) Time updating equations:

(28)$$\left\{\matrix{\chi_{t\vert t-1}=f\lpar \chi_{t-1}\rpar \hfill \cr \hat{x}_{t\vert t-1}=\sum\limits^{2m}_{i=0}W^m_i \chi_{i\comma t\vert t-1} \hfill \cr S_{t\vert t-1}=\sum\limits^{2m}_{i=0}W^c_i \lpar \chi_{i\comma t\vert t-1}-\hat{x}_{t\vert t-1}\rpar \cdot\lpar \chi_{i\comma t\vert t-1}-\hat{x}_{t\vert t-1}\rpar^T+b_t \hfill \cr y_{t\vert t-1}=h\lpar \chi_{t\vert t-1}\rpar \hfill \cr \hat{y}_{t\vert t-1}=\sum\limits^{2m}_{i=0}W^m_i y_{i\comma t\vert t-1} \hfill }\right. $$

where $\hat{x}_{t \vert t-1}$ is the predicted state value and $S_{t\vert t-1}$ is the predicted covariance matrix of the state vector.
(4) Error measurement updating equations:

(29)$$\left\{\matrix{S_{y_ty_t}=\sigma_i \sum\limits^{2m}_{i=0} W^c_i \lpar y_{i\comma t\vert t-1}-\hat{y}_{t\vert t-1}\rpar \cdot \lpar y_{i\comma t\vert t-1}-\hat{y}_{t\vert t-1}\rpar^T +\gamma_t \hfill \cr S_{x_ty_t}=\sigma_i \sum\limits^{2m}_{i=0} W^c_i \lpar \chi_{i\comma t\vert t-1}-\hat{x}_{t\vert t-1}\rpar \lpar y_{i\comma t\vert t-1}-\hat{y}_{t\vert t-1}\rpar^T \hfill \cr \kappa_t=S_{x_ty_t}\cdot S_{y_ty_t}^{-1} \hfill \cr \hat{x}_t=\hat{x}_{t\vert t-1}+\kappa_t\lpar y_{i\comma t\vert t-1}-\hat{y}_{t\vert t-1}\rpar \hfill \cr S_t=\sigma_iS_{t\vert t-1}-\kappa_tS_{y_ty_t}\kappa^T_t \hfill \cr D_t=y_{i\comma t\vert t-1}-\hat{y}_{t\vert t-1} \hfill \cr \sigma_i=\left\{\matrix{ 1\comma \; \lpar tr\lpar D_tD_t^T\rpar \le tr \lpar S_{x_ty_t}\rpar \rpar \hfill \cr \displaystyle{{tr \lpar S_{x_ty_t}\rpar }\over{tr\lpar D_tD_t^T\rpar }}\comma \; \lpar tr\lpar D_tD_t^T\rpar > tr \lpar S_{x_ty_t}\rpar \rpar }\right.\hfill } \right. $$

where $\sigma_{i}$ represents the multiple fading factor and $\kappa_{t}$ is the gain matrix.
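The update of Equation (29) can be sketched as follows; the fading factor σ damps the covariances when the innovation trace outgrows the cross-covariance trace. Reading the residual in the state-correction line as the actual innovation $y_{t}-\hat{y}_{t\vert t-1}$, and taking the absolute value of the trace, are our interpretations.

```python
# Sketch of the fading-factor measurement update of Equation (29).
import numpy as np

def svdukf_update(x_pred, S_pred, chi, Y, y, Wm, Wc, R):
    """chi: (2m+1, m) propagated sigma points; Y: (2m+1, p) their measurements;
    y: the actual measurement; R plays the role of gamma_t."""
    y_hat = Wm @ Y                                    # predicted measurement
    dY = Y - y_hat
    dX = chi - x_pred
    D = y - y_hat                                     # innovation D_t
    Syy0 = (dY * Wc[:, None]).T @ dY                  # weighted sum term of S_yy
    Sxy0 = (dX * Wc[:, None]).T @ dY                  # cross covariance S_xy
    tr_D = float(D @ D)                               # tr(D_t D_t^T)
    tr_S = abs(np.trace(Sxy0))
    sigma = 1.0 if tr_D <= tr_S else tr_S / tr_D      # fading factor sigma_i
    Syy = sigma * Syy0 + R
    Sxy = sigma * Sxy0
    K = Sxy @ np.linalg.inv(Syy)                      # gain kappa_t
    x = x_pred + K @ D
    S = sigma * S_pred - K @ Syy @ K.T                # updated covariance S_t
    return x, S
```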
3.2. Optimisation using RBFNN
According to Equation (29), to optimise $S_{y_{t}y_{t}}$, $S_{x_{t}y_{t}}$ and $S_{t}$ it is important to eliminate the influence of gross errors, systematic errors and disturbances in the measurements. The RBFNN is one of the most popular intelligent computing methods: it can approximate any nonlinear function to a designed accuracy, and it can eliminate gross errors, systematic errors and disturbances effectively and quickly. Therefore, this paper adopts an RBFNN to optimise $S_{y_{t}y_{t}}$, $S_{x_{t}y_{t}}$ and $S_{t}$.
Figure 2 shows the structure of a three-layer RBFNN, consisting of an input layer, a hidden layer and an output layer. We use $S_{y_{t}y_{t}}$, $S_{x_{t}y_{t}}$ and $S_{t}$ to denote the inputs and $F^{l}_{i}$ to express the output of the i-th node in the l-th layer. To clearly explain the signal propagation and the mathematical function of each layer, the functions of the RBFNN can be represented as follows.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190328060439027-0246:S037346331800084X:S037346331800084X_fig2g.gif?pub-status=live)
Figure 2. RBFNN architecture.
Input layer: every node in the input layer corresponds to one input variable $\Gamma=S_{y_{t}y_{t}}$, $S_{x_{t}y_{t}}$ or $S_{t}$ and transmits the input values directly to the next layer; this is written as (Liu and Li, 2017):
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190328060439027-0246:S037346331800084X:S037346331800084X_eqn30.gif?pub-status=live)
Hidden layer: the input of every node in the hidden layer, corresponding to the input layer, is calculated as:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190328060439027-0246:S037346331800084X:S037346331800084X_eqn31.gif?pub-status=live)
According to the non-linear transfer function, the output of each node is calculated as:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190328060439027-0246:S037346331800084X:S037346331800084X_eqn32.gif?pub-status=live)
Output layer: Each node in the output layer connects to all output nodes of the hidden layer. The output of each node in the output layer is computed via the weighted sum:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190328060439027-0246:S037346331800084X:S037346331800084X_eqn33.gif?pub-status=live)
The adjustable parameter vectors O, ξ and ϒ of the RBFNN can be defined as follows:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190328060439027-0246:S037346331800084X:S037346331800084X_eqn34.gif?pub-status=live)
Thus, the output vector of RBFNN can be modified as follows:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190328060439027-0246:S037346331800084X:S037346331800084X_eqn35.gif?pub-status=live)
Exploiting the approximation ability of the RBFNN, the uncertain function H defined by the neural network structure is calculated as:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190328060439027-0246:S037346331800084X:S037346331800084X_eqn36.gif?pub-status=live)
where $q \in R$ denotes the minimum approximation error vector. The RBFNN approximation errors, uncertainties and other unmodelled dynamics are bounded; that is, there exists a positive constant ρ such that ‖q‖ ≤ ρ. The corresponding estimate $\hat{F}$ can be expressed as:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190328060439027-0246:S037346331800084X:S037346331800084X_eqn37.gif?pub-status=live)
where $\hat{O}$ is the tuning parameter matrix of the RBFNN, adjusted during the learning process.
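A minimal sketch of this three-layer network is given below, with Gaussian hidden units (Equations (31)-(32)) and a weighted-sum output layer (Equation (33)). The gradient rule that tunes only the output weights $\hat{O}$, and all initial values, are our illustrative assumptions.

```python
# Sketch of the three-layer RBFNN of Figure 2: F = O^T . phi(Gamma); the
# centre/width names (xi, ups) follow the parameter vectors of Equation (34).
import numpy as np

class RBFNN:
    def __init__(self, n_in, n_hidden, n_out, lr=0.01):
        rng = np.random.default_rng(0)
        self.xi = rng.normal(size=(n_hidden, n_in))        # centres of hidden nodes
        self.ups = np.ones(n_hidden)                       # widths of hidden nodes
        self.O = rng.normal(size=(n_hidden, n_out)) * 0.1  # output weights O-hat
        self.lr = lr

    def hidden(self, gamma):
        d2 = ((gamma - self.xi) ** 2).sum(axis=1)      # squared distance to centres
        return np.exp(-d2 / (2.0 * self.ups ** 2))     # Gaussian transfer function

    def forward(self, gamma):
        return self.hidden(gamma) @ self.O             # weighted sum, Equation (33)

    def train_step(self, gamma, target):
        phi = self.hidden(gamma)
        err = self.forward(gamma) - target
        self.O -= self.lr * np.outer(phi, err)         # gradient step on O-hat only
        return err
```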
Therefore, based on Equations (27), (28), (29) and (37), the optimal estimation of LSINS can be calculated.
3.3. Implementation of the RBFNN aiding SVDUKF procedure
As shown in Figure 3, the RBF-SVDUKF procedure (see Appendix) can be implemented using the following steps.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190328060439027-0246:S037346331800084X:S037346331800084X_fig3g.gif?pub-status=live)
Figure 3. Flow chart of the RBFNN aiding SVDUKF procedure.
Step 1: Based on the established error state vector and measurement error model, calculate the sigma points at time step t, and carry out SVD analysis on the obtained sigma points.
Step 2: Update the measurements of the sigma points, predict the matrices $S_{y_{t}y_{t}}$, $S_{x_{t}y_{t}}$ and $S_{t}$, and compare the predictions with the threshold (1% is satisfactory in this study). If satisfactory, output the position and attitude of the system; otherwise, go to the RBFNN optimisation procedure in Step 3.
Step 3: Implement the RBFNN training procedure to optimise $S_{y_{t}y_{t}}$, $S_{x_{t}y_{t}}$ and $S_{t}$. Repeat Step 2 and Step 3 until the threshold requirement is met. A sketch of one pass through this loop is given below.
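Pulling the pieces together, the following sketch wires the SVD sigma points, the fading-factor update and the RBFNN into one filter step, following the flow chart of Figure 3 and reusing svd_sigma_points, svdukf_update and RBFNN from the sketches above. The covariance-refinement wiring and the relative threshold test are our reading of Steps 2-3, not the paper's exact implementation.

```python
# End-to-end sketch of one RBF-SVDUKF step (Figure 3); f and h are the
# user-supplied state and measurement functions, Q and R the noise covariances,
# and rbf an RBFNN instance with n_in = n_out = m*m.
import numpy as np

def rbf_svdukf_step(x, S, f, h, y, Q, R, rbf, tol=0.01):
    # Step 1: sigma points via SVD (Equation (27))
    chi, Wm, Wc = svd_sigma_points(x, S)
    # time update (Equation (28))
    chi_pred = np.array([f(c) for c in chi])
    x_pred = Wm @ chi_pred
    dX = chi_pred - x_pred
    S_pred = (dX * Wc[:, None]).T @ dX + Q
    Y = np.array([h(c) for c in chi_pred])
    # measurement update with fading factor (Equation (29))
    x_new, S_new = svdukf_update(x_pred, S_pred, chi_pred, Y, y, Wm, Wc, R)
    # Steps 2-3: if the covariance change exceeds the threshold, let the
    # trained RBFNN output a refined covariance estimate
    if np.linalg.norm(S_new - S) > tol * max(np.linalg.norm(S), 1e-12):
        S_new = rbf.forward(S_new.ravel()).reshape(S_new.shape)
    return x_new, S_new
```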
4. EXPERIMENTS AND DISCUSSION
To evaluate the effectiveness of the proposed method, field experiments were carried out on a floor of a building at the China University of Mining and Technology (see Figure 4). Figure 5 shows the experimental layout. The experiments were conducted with a VLP-16 laser scanner and a Spatial Fibre Optic Gyro (FOG) IMU. The specifications of the IMU and laser scanner are given in Table 1.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190328060439027-0246:S037346331800084X:S037346331800084X_fig4g.jpeg?pub-status=live)
Figure 4. Experimental site: (a) 3D model of the experimental site; (b) floor map of the fourth floor.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190328060439027-0246:S037346331800084X:S037346331800084X_fig5g.jpeg?pub-status=live)
Figure 5. Experiment layouts.
Table 1. The specifications of the IMU and laser scanner
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190328060439027-0246:S037346331800084X:S037346331800084X_tab1.gif?pub-status=live)
As shown in Figure 5, we conducted the experiments in a GNSS-denied indoor environment. The total length of the LSINS trajectory was 100 m, and the 3D trajectory covered the fourth floor of the building. The LSINS was driven along the hallway of the building to evaluate the position precision. The position ground-truth values were provided by an optical measuring system, which can measure the position of the LSINS with an accuracy of up to 0·1 mm. A software platform programmed in C++ was designed to record and process the raw data.
During the experiments, the proposed algorithm was evaluated from the starting point to the destination point along the 100 m trajectory. Figure 6 shows the original measurements of the LSINS (red line) and the optical measuring system (black line). As can be seen in the figure, a significant error was observed between the ground-truth and the LSINS navigation without filter processing.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190328060439027-0246:S037346331800084X:S037346331800084X_fig6g.jpeg?pub-status=live)
Figure 6. The measured trajectory using LSINS and optical measuring system.
Figures 7(a), 7(c) and 7(e) show the positioning errors in the east, north and up coordinate directions using the EKF method, and Figures 7(b), 7(d) and 7(f) show the positioning errors using the proposed RBF-SVDUKF method. The Root Mean Square (RMS) navigation errors using the EKF were 0·076 m/s, 0·126 m/s and 0·275 m/s in the east, north and up directions, respectively, whereas the RMS values using the proposed method were 0·026 m/s, 0·047 m/s and 0·095 m/s in the three directions. The experimental results demonstrate that the position accuracy of the proposed method was improved by 23·6% compared with that of the EKF method.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190328060439027-0246:S037346331800084X:S037346331800084X_fig7g.jpeg?pub-status=live)
Figure 7. The positioning error using EKF and RBFNN aiding SVDUKF algorithm.
As mentioned above, the proposed RBF-SVDUKF method achieved significant improvement compared with the EKF in terms of position accuracy, reliability and accumulated error. As a result, the proposed method has great potential in practical navigation in GNSS-denied environments.
5. CONCLUSIONS
In order to effectively extract and integrate the measurements from inertial sensors and laser scanners, an optimal estimation algorithm based on an RBF-improved SVDUKF has been proposed. The coordinate transformation between the laser scanner frame and the inertial sensor frame was established to model the state vector and measurement errors of LSINS, and the error estimation optimisation procedure using RBF-SVDUKF was applied to improve the position accuracy of LSINS. Experimental tests were conducted to evaluate the performance of the proposed method, and the analysis results demonstrate that the localisation accuracy of the proposed method is improved by 23·6% compared with the EKF method. Future research will focus on improving the calculation efficiency of the proposed RBF-SVDUKF for practical applications. We also plan to explore applications of the proposed RBF-SVDUKF in other types of multi-sensor data fusion problems.
ACKNOWLEDGEMENTS
This project was supported by the National Key R&D Program of China (2018YFC0604503), the Natural Science Foundation of Jiangsu Province (BK20150202), the National Natural Science Foundation of China (U1610251) and the Priority Academic Program Development (PAPD) of Jiangsu Higher Education Institutions. The authors would like to express their sincere thanks to these sponsors.
APPENDIX: RBFNN AIDING SVDUKF ALGORITHM
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190328060439027-0246:S037346331800084X:S037346331800084X_tabU1.gif?pub-status=live)