1. INTRODUCTION
Global Positioning System/Inertial Navigation System (GPS/INS) tightly coupled integrated navigation systems have been widely used in both civil and military applications (Noureldin et al., 2011; Schmidt, 2010). Compared to using only a GPS or INS, a GPS/INS tightly coupled system can provide higher precision and better anti-jamming performance by utilising information fusion technology to combine GPS and INS data (Han and Wang, 2010).
Soft faults are especially troubling for GPS/INS tightly coupled systems. A soft fault is a fault that changes slowly and accumulates with time, which makes it difficult to detect (Feng et al., 2006). In a GPS/INS tightly coupled system, soft faults can arise for many reasons. The most important is that raw GPS measurements, such as the pseudorange and pseudorange rate, are commonly used as the filter observations. These raw measurements are vulnerable to disturbances during signal generation on the satellite, signal transmission through space and signal reception in the receiver. For instance, satellite clock misbehaviour can result in a range error of thousands of metres, and when a GPS satellite comes out of eclipse its trajectory is perturbed by, for example, changing solar radiation pressure (Bhatti et al., 2007a). This vulnerability can lead to different types of faults in the raw measurements and thus produce inaccurate navigation results and safety hazards.
The speed of soft fault detection is a main influence on the fault tolerance of GPS/INS tightly coupled systems. Because soft faults are especially difficult to detect, the alert time for soft faults is significantly longer than that for abrupt faults, which can threaten navigation in safety of life applications. For instance, an on-orbit GPS satellite experienced a slowly growing clock drift error that eventually resulted in a position error of a few kilometres (Bhatti and Ochieng, 2007b). Therefore, to guarantee safety of life and minimise economic damage in aviation, it is vital to detect soft faults in GPS/INS tightly coupled integrated navigation systems as quickly as possible (Xiong et al., 2013).
For fault detection in GPS/INS integrated navigation systems, commonly used methods include the chi-square test, Multiple Solution Separation (MSS), Optimal Fault Detection (OFD) and Autonomous Integrity Monitored Extrapolation (AIME) methods (Bhatti et al., 2012; Bruggemann et al., 2011; Shi et al., 2012). Among these, the chi-square test and MSS are "snapshot" algorithms: they detect abrupt faults (jump errors) quickly and accurately, but they suffer from serious time delays in detecting soft faults (Lee et al., 2012; Liu et al., 2011; Zhou and Hu, 2009). Compared with snapshot algorithms, the AIME and OFD methods exhibit better performance in soft fault detection (Liu et al., 2012; Park et al., 2011). In AIME and OFD, a sliding window is applied in the test statistic calculation to cope with the slowly changing character of soft faults, so the real-time ability is improved to a certain extent. However, in AIME the test statistic is constructed from the innovation of the system filter, which in GPS/INS tightly coupled systems is commonly a Kalman filter. A Kalman filter provides high-precision navigation results through error tracking and closed-loop correction during its prediction and update processes. For fault detection, however, the innovation used in the test statistic calculation is attenuated by this error tracking and closed-loop correction, so it no longer reflects the actual amplitude of an occurring fault. As a result, the test statistic decreases and the real-time ability of detection is degraded. Therefore, constructing the AIME test statistic from a more accurate innovation can improve the real-time ability of soft fault detection.
As mentioned above, the error tracking of a filter under soft faults represents a challenge for fault detection. Support Vector Machines (SVMs), a powerful methodology for regression and forecasting, are therefore introduced into fault detection. SVMs have been widely used in pattern recognition and fault detection (Dandare and Dudul, 2012; Konar and Chattopadhyay, 2011). SVMs were introduced within the context of statistical learning theory and structural risk minimisation and have been investigated further by many researchers (Bhavsar and Panchal, 2012; Orrù et al., 2012). The method is especially suited to small-sample and high-dimensional problems owing to its good regression and forecasting abilities. Least Squares Support Vector Machines (LS-SVMs) are reformulations of standard SVMs: an LS-SVM achieves a lower computation cost by solving a set of linear equations instead of the convex Quadratic Programming (QP) problem solved by classical SVMs (Long et al., 2012). LS-SVMs have been used in INS/GPS integration for bridging GPS outages and for fault detection (Chen et al., 2014; Xu et al., 2010), where they achieve higher accuracy in short-period prediction. LS-SVMs are therefore preferred, especially in engineering applications where low computation costs are required.
In this paper, a new method of detecting soft faults is proposed for the purpose of improving real-time performance in GPS/INS tightly coupled systems. The proposed method combines an LS-SVM with AIME to provide independent test statistics, which are immune to the effects of error tracking and closed-loop correction in the Kalman filter. The independent test statistics are based on the innovation obtained from the LS-SVM through two stages of training and forecasting. The real observations of the GPS/INS tightly coupled system are taken as the input of the LS-SVM, and the innovation for calculating independent test statistics is forecasted based on the training stage. Because the forecasted innovation replaces the effect of error tracking and closed-loop correction, the independent test statistics can more accurately reflect the real fault amplitudes of observations. As a result, both the real-time ability and sensitivity of soft fault detection are enhanced. Based on theoretical research, a simulation is conducted, the results of which show that the proposed LSSVM-AIME method can effectively improve the real-time ability of detecting soft faults in GPS/INS tightly coupled systems.
2. FAULT-TOLERANT GPS/INS INTEGRATED NAVIGATION
2.1. Fault-tolerant Navigation Architecture
The system architecture of the GPS/INS tightly coupled integrated navigation system is illustrated in Figure 1.
Figure 1. System Architecture of a GPS/INS Tightly Coupled System with Fault Detection Function.
The system consists of three main parts: a GPS unit, an INS unit and a data fusion unit. In the GPS unit, the GPS receiver provides measurements such as the pseudorange, pseudorange rate and position of visible satellites. These measurements are used to establish the observation equation of GPS/INS system modelling. In the INS unit, vehicle positions, velocities and attitudes are provided based on inertial solution algorithms. In the data fusion unit, all the above information provided by the GPS and INS is transmitted to the Kalman filter to estimate and correct for navigation errors. Therefore the state vector of the Kalman filter is composed of two parts: one is the state of the GPS, and the other is the state of the INS.
Before being transmitted to the Kalman filter, the observation information needs to be processed by fault detection and identification algorithms. When a signal disturbance or malfunction occurs on a satellite, faults will occur in the pseudorange and pseudorange rate data. The faults should be identified and not allowed to be transmitted to the Kalman filter to avoid inaccurate results.
2.2. INS/GPS Integrated Model
The system model of the Kalman filter includes a state model and an observation model. In the state model, the state vector consists of INS states and GPS states. The INS states include errors generated by inertial sensors, and GPS states include errors from raw GPS measurements. The state equation is given in Equation (1):
(1)$${\bi x}_{k} = {\bf \Phi}_{k,k-1}{\bi x}_{k-1} + {\bi w}_{k-1}$$
where ${\bf \Phi}$ denotes the state transition matrix built from the INS error analysis equations, ${\bi w}$ denotes the process noise vector, and ${\bi x}$ is the state vector, defined as
(2)$${\bi x} = [\phi_{E} \;\; \phi_{N} \;\; \phi_{U} \;\; \delta v_{E} \;\; \delta v_{N} \;\; \delta v_{U} \;\; \delta L \;\; \delta\lambda \;\; \delta h \;\; \varepsilon_{bx} \;\; \varepsilon_{by} \;\; \varepsilon_{bz} \;\; \varepsilon_{rx} \;\; \varepsilon_{ry} \;\; \varepsilon_{rz} \;\; \nabla_{x} \;\; \nabla_{y} \;\; \nabla_{z} \;\; \delta\rho_{u} \;\; \delta\dot{\rho}_{u}]^{T}$$
where $\phi_{E}$, $\phi_{N}$, $\phi_{U}$ denote the angle errors from the true axes to the platform in the east, north and up directions, respectively; $\delta v_{E}$, $\delta v_{N}$, $\delta v_{U}$ denote the velocity errors in the east, north and up directions, respectively; $\delta L$, $\delta_{\lambda}$, $\delta h$ denote the vehicle latitude, longitude and height position errors in geographic coordinates; $\varepsilon_{bx}$, $\varepsilon_{by}$, $\varepsilon_{bz}$ denote the biases of the gyros; $\varepsilon_{rx}$, $\varepsilon_{ry}$, $\varepsilon_{rz}$ are the first-order Gauss-Markov processes of the gyros; $\nabla_{x}$, $\nabla_{y}$, $\nabla_{z}$ denote the first-order Gauss-Markov processes of the accelerometers; and $\delta \rho_{u}$, $\delta \dot{\rho}_{u}$ denote the equivalent pseudorange and pseudorange rate errors caused by the clock bias and clock drift of the GPS receiver, respectively.
In the observation model, the observation vector consists of pseudorange and pseudorange rate observations. The pseudorange observation is the difference between the pseudorange measured by the GPS receiver and the equivalent pseudorange computed from the INS navigation solution together with the satellite position obtained from ephemeris data; the pseudorange rate observation is formed in the same manner using the satellite velocity. The observation equation is given in Equation (3):
(3)$${\bi z} = {\bi H}{\bi x} + {\bi v}$$
where ${\bi z}$ denotes the observation vector, ${\bi H}$ denotes the observation matrix, and ${\bi v}$ denotes the observation noise vector. The observation vector ${\bi z}$ is composed of the pseudorange and pseudorange rate observations, as shown in Equation (4):
(4)$${\bi z} = [{\bi z}_{\rho}^{T} \;\; {\bi z}_{\dot{\rho}}^{T}]^{T}$$
where
(5)$${\bi z}_{\rho} = [\rho_{I_{1}}-\rho_{G_{1}} \;\; \rho_{I_{2}}-\rho_{G_{2}} \;\; \cdots \;\; \rho_{I_{n}}-\rho_{G_{n}}]^{T}, \qquad {\bi z}_{\dot{\rho}} = [\dot{\rho}_{I_{1}}-\dot{\rho}_{G_{1}} \;\; \dot{\rho}_{I_{2}}-\dot{\rho}_{G_{2}} \;\; \cdots \;\; \dot{\rho}_{I_{n}}-\dot{\rho}_{G_{n}}]^{T}$$
where $\rho_{I_{i}}$, $\dot{\rho}_{I_{i}}$ (i = 1, 2, …, n) denote the equivalent pseudorange and pseudorange rate of the INS, derived from the vehicle position and velocity obtained from the INS and the satellite ephemeris of the GPS; n is the number of visible satellites; and $\rho_{G_{i}}$, $\dot{\rho}_{G_{i}}$ denote the pseudorange and pseudorange rate measurements of the GPS. In addition, the corresponding observation matrix ${\bi H}$ is shown in Equation (6):
(6)$${\bi H} = \begin{bmatrix}{\bi 0}_{n\times 6} & {\bi H}_{\rho_{1}} & {\bi 0}_{n\times 9} & {\bi H}_{\rho_{2}}\\ {\bi 0}_{n\times 3} & {\bi H}_{\dot{\rho}_{1}} & {\bi 0}_{n\times 12} & {\bi H}_{\dot{\rho}_{2}}\end{bmatrix}$$
where ${\bi 0}$ represents a zero matrix whose dimensions are denoted by the subscript, and ${\bi H}_{\rho_{1}}$, ${\bi H}_{\rho_{2}}$, ${\bi H}_{\dot{\rho}_{1}}$, ${\bi H}_{\dot{\rho}_{2}}$ are matrices that relate the observations to the states.
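To make the construction of Equations (4)-(5) concrete, the following sketch (in Python, with hypothetical function and variable names not taken from the paper) forms the 2n × 1 observation vector from INS-indicated position and velocity and GPS pseudorange and pseudorange-rate measurements. Receiver clock terms and sign conventions are simplified and would need to match the actual filter design.

```python
import numpy as np

def build_observation(ins_pos, ins_vel, sat_pos, sat_vel, rho_gps, rho_dot_gps):
    """Sketch of Equations (4)-(5): differences between INS-equivalent and
    GPS-measured pseudoranges / pseudorange rates (clock terms omitted).

    ins_pos, ins_vel     : INS-indicated position / velocity, ECEF, shape (3,)
    sat_pos, sat_vel     : satellite positions / velocities from ephemeris, (n, 3)
    rho_gps, rho_dot_gps : GPS-measured pseudorange / pseudorange rate, (n,)
    """
    los = sat_pos - ins_pos                                    # line-of-sight vectors
    rho_ins = np.linalg.norm(los, axis=1)                      # INS-equivalent pseudorange
    unit = los / rho_ins[:, None]
    rho_dot_ins = np.sum(unit * (sat_vel - ins_vel), axis=1)   # INS-equivalent rate
    z_rho = rho_ins - rho_gps                                  # pseudorange differences
    z_rho_dot = rho_dot_ins - rho_dot_gps                      # pseudorange-rate differences
    return np.concatenate([z_rho, z_rho_dot])                  # 2n x 1 observation vector
```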
2.3. Fault-tolerant Fusion Algorithm
As a classic error estimation algorithm, the Kalman filter has been widely used in integrated navigation systems (Gross et al., 2010; Zhong et al., 2011). The recursive estimation process can be summarised as follows:
Predict:
(7)$$\hat{{\bi x}}_{k|k-1} = {\bf \Phi}_{k,k-1}\hat{{\bi x}}_{k-1}$$

(8)$${\bi P}_{k|k-1} = {\bf \Phi}_{k,k-1}{\bi P}_{k-1}{\bf \Phi}_{k,k-1}^{T} + {\bi Q}_{k-1}$$
Update:
(9)$${\bi K}_{k} = {\bi P}_{k|k-1}{\bi H}_{k}^{T}({\bi H}_{k}{\bi P}_{k|k-1}{\bi H}_{k}^{T} + {\bi R}_{k})^{-1}$$

(10)$${\bi r}_{k} = {\bi z}_{k} - {\bi H}_{k}\hat{{\bi x}}_{k|k-1}$$

(11)$$\hat{{\bi x}}_{k} = \hat{{\bi x}}_{k|k-1} + {\bi K}_{k}{\bi r}_{k}$$

(12)$${\bi P}_{k} = ({\bi I} - {\bi K}_{k}{\bi H}_{k}){\bi P}_{k|k-1}$$
Based on the principle of minimum variance, the Kalman filter produces optimal estimate results for system states. The state error estimations are directly related to the innovation vector of the Kalman filter, which is calculated based on observations in state prediction.
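As an illustration only, a minimal Python sketch of the recursion in Equations (7)–(12) is given below; the function and variable names are ours, not the paper's, and the innovation ${\bi r}_{k}$ is returned because it is reused later for fault detection.

```python
import numpy as np

def kalman_step(x, P, z, Phi, H, Q, R):
    """One predict/update cycle of Equations (7)-(12)."""
    # Predict (Equations (7)-(8))
    x_pred = Phi @ x
    P_pred = Phi @ P @ Phi.T + Q
    # Update (Equations (9)-(12))
    r = z - H @ x_pred                       # innovation
    V = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(V)      # Kalman gain
    x_new = x_pred + K @ r                   # closed-loop state correction
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new, r, V
```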
When faults occur, the observation vector must be reselected and the observation model rebuilt. Assuming that the number of visible satellites is n, the dimension of the observation vector is 2n × 1. Faulty observations are identified by the fault detection method, after which the observation vector and the observation model are reconstructed.
3. FAULT DETECTION METHOD BASED ON LS-SVM-ENHANCED AIME
3.1. Enhanced Fault Detection Scheme
By utilising the forecasting and regression ability of the LS-SVM, an alternative innovation that is independent of the Kalman filter is provided to avoid the effects of the error tracking characteristics of the Kalman filter. The independent innovation will replace the original innovation in AIME to derive more accurate test statistics. By avoiding the influence of error tracking and closed-loop correction, the sensitivity and instantaneity of detecting soft faults with the proposed method will be improved. The fault detection scheme of the proposed method is shown in Figure 2.
Figure 2. Scheme of soft fault detection based on LS-SVM-enhanced AIME.
The observations of the integrated navigation system are chosen as the training input of the LS-SVM, and the innovation immune to the error tracking of the Kalman filter is chosen as the training output. The parameters of the LS-SVM regression function are calculated in the training stage. Then, the observation at the current epoch is used as the regression input, and the forecasted innovation is derived and used to replace the original innovation when calculating the new test statistics of AIME. The new test statistics are independent of the Kalman filter and therefore mitigate the error tracking effect. They are used in fault decision making to select observations. If the values of the test statistics are smaller than the threshold, the integrated navigation system is judged to be fault-free. If the values are larger than the threshold, the system is judged to have faults; the faulty observation is identified and isolated by other methods (Patino and Rohmer, 2010), and the observation vector and observation model are reconstructed before being input to the Kalman filter.
3.2. Error Tracking of Filter under Soft Fault
In the fusion filter of the GPS/INS tightly coupled integrated navigation system, the errors will be corrected based on error tracking, and the navigation accuracy will be improved. However, in the fault detection application, the error tracking of the filter may adversely affect the test statistic calculation, which should accurately reflect the level of the fault. The effect of the error tracking is analysed as follows.
When faults occur at time k, the observation is described by
(13)$$\tilde{{\bi z}}_{k} = {\bi z}_{k} + \Delta{\bi z}_{k}$$
where $\tilde{{\bi z}}_{k}$ is the observation with a fault, ${\bi z}_{k}$ is the expected observation, and $\Delta {\bi z}_{k}$ is the fault amplitude.
The innovation of the Kalman filter is usually used for calculating test statistics. When faults occur, the innovation is described by
(14)$$\tilde{{\bi r}}_{k} = \tilde{{\bi z}}_{k} - {\bi H}_{k}\hat{{\bi x}}_{k|k-1} = {\bi r}_{k} + \Delta{\bi z}_{k}$$
where $\tilde{{\bi r}}_{k}$ denotes the innovation when faults occur. Then the state estimate becomes
(15)$$\tilde{\hat{{\bi x}}}_{k} = \hat{{\bi x}}_{k|k-1} + {\bi K}_{k}\tilde{{\bi r}}_{k} = \hat{{\bi x}}_{k} + {\bi K}_{k}\Delta{\bi z}_{k}$$
For the next step k + 1, the innovation can be derived as follows:
(16)$$\tilde{{\bi r}}_{k+1} = \tilde{{\bi z}}_{k+1} - {\bi H}_{k+1}{\bf \Phi}_{k+1,k}\tilde{\hat{{\bi x}}}_{k}$$
which can be expressed as
(17)$$\tilde{{\bi r}}_{k+1} = {\bi r}_{f(k+1)} - {\bi H}_{k+1}{\bf \Phi}_{k+1,k}{\bi K}_{k}\Delta{\bi z}_{k}$$
In addition,
(18)$$\tilde{{\bi r}}_{k+1} = {\bi r}_{f(k+1)} + \Delta{\bi r}_{f(k+1)}, \qquad \Delta{\bi r}_{f(k+1)} = -{\bi H}_{k+1}{\bf \Phi}_{k+1,k}{\bi K}_{k}\Delta{\bi z}_{k}$$
where $\tilde{{\bi r}}_{k+1}$ denotes the innovation of the Kalman filter, ${\bi r}_{f(k+1)}$ denotes the real value of the innovation when faults occur, and $\Delta {\bi r}_{f(k+1)}$ denotes the component of the innovation caused by the error tracking effect.
From Equations (16) and (17), we can conclude that the increment of the innovation is reduced by the term ${\bi H}_{k+1} {\bf \Phi}_{k+1,k} {\bi K}_{k} \Delta {\bi z}_{k}$, which accumulates over time through the recursive calculation, leading to an ever greater reduction in the innovation.
In fault detection algorithms for GPS/INS integrated navigation systems, the innovation used for calculating the test statistics should change in accordance with the fault amplitude so that faults can be detected quickly and precisely. Because the innovation is attenuated by the error tracking characteristics of the Kalman filter, the sensitivity and real-time performance of detection are degraded. To resolve this problem, a substitute innovation that is not affected by error tracking is required. Therefore, a new fault detection method based on AIME, which utilises an LS-SVM to forecast the innovation from real-time measurements, is presented in the following sections.
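The attenuation described by Equations (16)–(18) can be reproduced with a toy scalar example. The following sketch (our own illustrative code, not from the paper) injects a 0·1-per-step ramp fault into the measurement of a one-state Kalman filter and records the innovation; because the filter feeds the absorbed fault back into the prediction through ${\bi H}{\bf \Phi}{\bi K}\Delta{\bi z}$, the innovation grows far more slowly than the injected ramp and eventually saturates.

```python
import numpy as np

# Toy scalar model: constant true state, ramp fault on the measurement from k = 50.
phi, h, q, r = 1.0, 1.0, 1e-4, 1.0
x_est, p = 0.0, 1.0
innovations, faults = [], []
for k in range(300):
    fault = 0.1 * (k - 50) if k >= 50 else 0.0
    z = 0.0 + fault
    x_pred = phi * x_est
    p_pred = phi * p * phi + q
    innov = z - h * x_pred                    # innovation seen by the detector
    gain = p_pred * h / (h * p_pred * h + r)
    x_est = x_pred + gain * innov             # error tracking absorbs part of the fault
    p = (1.0 - gain * h) * p_pred
    innovations.append(innov)
    faults.append(fault)
# innovations[k] saturates near (slope / gain), well below faults[k] for large k.
```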
3.3. Effect of Error Tracking on AIME
AIME is a classic algorithm proposed by Diesel (2000) that has been applied to integrated navigation systems for soft fault detection. AIME is a sequential fault detection algorithm in which the innovation is averaged over past data of the Kalman filter to weight the test statistics. The two stages of the Kalman filter are given by Equations (7)–(12). The innovation vector ${\bi r}_{k}$ and its covariance matrix ${\bi V}_{k}$ at each Kalman filter cycle k are given by
(19)$${\bi r}_{k} = {\bi z}_{k} - {\bi H}_{k}\hat{{\bi x}}_{k|k-1}$$
(20)$${\bi V}_{k} = {\bi H}_{k}{\bi P}_{k|k-1}{\bi H}_{k}^{T} + {\bi R}_{k}$$
The test statistics are calculated based on the innovation and innovation covariance matrix:
(21)$${\bi s}_{avg}^{2} = {\bi r}_{avg}^{T}{\bi V}_{avg}^{-1}{\bi r}_{avg}$$
where ${\bi s}_{avg}^{2}$ is the test statistic of AIME, ${\bi r}_{avg}$ is the weighted average of the innovation, and ${\bi V}^{-1}_{avg}$ is the inverse of the averaged innovation covariance matrix, which are given by
(22)$${\bi r}_{avg} = ({\bi V}^{-1}_{avg})^{-1}\sum_{k=1}^{l}{\bi V}_{k}^{-1}{\bi r}_{k}$$
(23)$${\bi V}^{-1}_{avg} = \sum_{k=1}^{l}{\bi V}^{-1}_{k}$$
where l is the averaging time, which affects the fault detection performance: a small l will reduce the sensitivity of soft fault detection, whereas a large l will result in a larger computational complexity. The value of l depends on the integrity requirement. The test statistics ${\bi s}_{avg}^{2}$ follow a central chi-square distribution when there are no faults in the integrated navigation system and a non-central chi-square distribution when faults occur. To decide whether a fault has occurred, the test statistics are compared with a threshold derived from the chi-square distribution and the false alert rate specified by the navigation requirements.
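For illustration, the windowed statistic of Equations (21)–(23) can be computed as in the following sketch; the helper name and input layout are our assumptions, with the innovations and covariances taken from the Kalman filter of Section 2.3.

```python
import numpy as np

def aime_statistic(innovations, covariances):
    """Windowed AIME test statistic of Equations (21)-(23).

    innovations : sequence of l innovation vectors r_k (length-m arrays)
    covariances : sequence of l innovation covariance matrices V_k (m x m)
    """
    V_inv_sum = sum(np.linalg.inv(V) for V in covariances)          # Eq. (23)
    r_weighted = sum(np.linalg.inv(V) @ r
                     for r, V in zip(innovations, covariances))
    r_avg = np.linalg.solve(V_inv_sum, r_weighted)                  # Eq. (22)
    return float(r_avg @ V_inv_sum @ r_avg)                         # Eq. (21)
```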
In GPS/INS tightly coupled systems, the error tracking effect will influence the innovation calculation of the Kalman filter, as shown in Equation (16); therefore, the test statistics of AIME calculated based on the innovation will inevitably be influenced. The test statistics at epoch k when faults occur in GPS/INS integrated systems can be expressed using Equation (24):
(24)$$\tilde{{\bi s}}_{avg}^{2} = \tilde{{\bi r}}_{avg}^{T}{\bi V}_{avg}^{-1}\tilde{{\bi r}}_{avg}$$
The real test statistics that can reflect the real amplitudes of faults should be
(25)$${\bi s}_{avg}^{2} = {\bi r}_{f(avg)}^{T}{\bi V}_{avg}^{-1}{\bi r}_{f(avg)}$$
From Equations (17) and (22), we can conclude that $\tilde{{\bi r}}_{avg} \lt {\bi r}_{f(avg)}$ and therefore that the test statistics satisfy $\tilde{{\bi s}}^{2}_{avg} \lt {\bi s}^{2}_{avg}$. The component caused by error tracking decreases the value of the test statistics and eventually results in time delays in fault detection.
3.4. LS-SVM Regression Algorithm
SVMs are a machine learning method grounded in statistical learning theory and structural risk minimisation. They provide better performance in terms of accuracy and stability than algorithms such as neural networks because more explicit learning and regression algorithms are applied. SVMs learn from empirical data rather than relying on prior information, which allows the learned model to stay close to the real one. LS-SVMs are reformulations of standard SVMs with lower computation costs, because they solve a set of linear equations instead of the convex quadratic programming problem of classical SVMs. For this reason, LS-SVMs are preferred, especially for engineering applications where low computation costs are highly desired.
We denote the training set as ${\bi D}=\{({\bi x}_{j}, y_{j})\ \vert\ j=1, 2, \ldots, N\}$, where ${\bi x}_{j} \in {\bf R}^{n}$ are the training inputs, $y_{j} \in {\bf R}$ are the training outputs, and N is the number of training samples. The nonlinear regression function has the form (Suykens et al., 2002; Mehrkanoon and Suykens, 2012)
(26)$$y({\bi x}) = {\bi w}^{T}\varphi({\bi x}) + b$$
where $\varphi(\cdot)$ denotes the feature map, ${\bi w}$ denotes the weight vector, and b denotes the bias term; ${\bi w}$ and b are the unknown regression parameters.
The regression in LS-SVM can be converted into the problem of minimising the following expression (Xu et al., 2010):
(27)$$\min_{{\bi w},b,{\bi e}} J({\bi w},{\bi e}) = \frac{1}{2}{\bi w}^{T}{\bi w} + \frac{\gamma}{2}\sum_{j=1}^{N}e_{j}^{2} \quad \hbox{subject to} \quad y_{j} = {\bi w}^{T}\varphi({\bi x}_{j}) + b + e_{j}, \; j = 1, \ldots, N$$
where ${\bi e}=(e_{1}, e_{2}, \ldots, e_{N})^{T}$ denotes the error vector and γ denotes the regularisation constant for improving the generalisation properties.
To solve the above optimisation problem, the Lagrangian function used for finding the minimum value is (Xu et al., 2010):
(28)$$L({\bi w}, b, {\bi e}, {\bf \alpha}) = J({\bi w}, {\bi e}) - \sum_{j=1}^{N}\alpha_{j}\left\{{\bi w}^{T}\varphi({\bi x}_{j}) + b + e_{j} - y_{j}\right\}$$
where ${\bf \alpha}=(\alpha_{1}, \alpha_{2}, \ldots, \alpha_{N})^{T}$ denotes the vector of Lagrangian multipliers.
Setting the partial derivatives of $L({\bi w}, b, {\bi e}, {\bf \alpha})$ to zero gives
(29)$$\frac{\partial L}{\partial {\bi w}} = 0 \rightarrow {\bi w} = \sum_{j=1}^{N}\alpha_{j}\varphi({\bi x}_{j}), \quad \frac{\partial L}{\partial b} = 0 \rightarrow \sum_{j=1}^{N}\alpha_{j} = 0, \quad \frac{\partial L}{\partial e_{j}} = 0 \rightarrow \alpha_{j} = \gamma e_{j}, \quad \frac{\partial L}{\partial \alpha_{j}} = 0 \rightarrow {\bi w}^{T}\varphi({\bi x}_{j}) + b + e_{j} - y_{j} = 0$$
Equation (29) can be expressed in matrix form as
(30)$$\begin{bmatrix}{\bi I} & {\bi 0} & {\bi 0} & -{\bi Z}^{T}\\ {\bi 0} & 0 & {\bi 0} & -{\bi 1}^{T}\\ {\bi 0} & {\bi 0} & \gamma{\bi I} & -{\bi I}\\ {\bi Z} & {\bi 1} & {\bi I} & {\bi 0}\end{bmatrix}\begin{bmatrix}{\bi w}\\ b\\ {\bi e}\\ {\bf \alpha}\end{bmatrix} = \begin{bmatrix}{\bi 0}\\ 0\\ {\bi 0}\\ {\bi y}\end{bmatrix}$$
where ${\bi Z} = [\varphi({\bi x}_{1}), \varphi({\bi x}_{2}), \cdots, \varphi({\bi x}_{N})]^{T}$, ${\bi y}=[y_{1}, y_{2}, \cdots, y_{N}]^{T}$, ${\bi 1}=[1, 1, \cdots, 1]^{T}$, ${\bi e} = [e_{1}, e_{2}, \cdots, e_{N}]^{T}$, and ${\bf \alpha}=[\alpha_{1}, \alpha_{2}, \cdots, \alpha_{N}]^{T}$.
From Equations (29) and (30), by eliminating ${\bi w}$ and ${\bi e}$, the following linear equation can be derived:
(31)$$\begin{bmatrix}0 & {\bi 1}^{T}\\ {\bi 1} & {\bi ZZ}^{T} + \gamma^{-1}{\bi I}\end{bmatrix}\begin{bmatrix}b\\ {\bf \alpha}\end{bmatrix} = \begin{bmatrix}0\\ {\bi y}\end{bmatrix}$$
We denote ${\bf \Omega}_{ij} = \varphi({\bi x}_{i})^{T}\varphi({\bi x}_{j}) = K({\bi x}_{i}, {\bi x}_{j})$, so that ${\bf \Omega} = {\bi ZZ}^{T}$, where $K({\bi x}_{i}, {\bi x}_{j})$ is the kernel function, and let ${\bi A}\equiv {\bf \Omega} + \gamma^{-1}{\bi I}$. Thus, Equation (31) can be expressed as
(32)$$\begin{bmatrix}0 & {\bi 1}^{T}\\ {\bi 1} & {\bi A}\end{bmatrix}\begin{bmatrix}b\\ {\bf \alpha}\end{bmatrix} = \begin{bmatrix}0\\ {\bi y}\end{bmatrix}$$
If the matrix is invertible, from Equation (32), we can conclude that
(33)$$b = \frac{{\bi 1}^{T}{\bi A}^{-1}{\bi y}}{{\bi 1}^{T}{\bi A}^{-1}{\bi 1}}, \qquad {\bf \alpha} = {\bi A}^{-1}({\bi y} - b{\bi 1})$$
From Equation (33) and Equation (29), the regression function can be derived as follows:
(34)$$y({\bi x}) = \sum_{j=1}^{N}\alpha_{j}K({\bi x}_{j}, {\bi x}) + b$$
where $K({\bi x}_{j}, {\bi x})$ is the kernel function, which can be chosen as a linear, polynomial or Radial Basis Function (RBF, i.e. Gaussian) kernel. Among these, the RBF kernel is the one most often used in LS-SVMs.
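A compact sketch of LS-SVM training and regression with an RBF kernel, following Equations (26)–(34), is given below. The code is illustrative only: the function names, the kernel width sigma and the regularisation constant gamma are our assumptions, not values from the paper.

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    """RBF (Gaussian) kernel matrix K(x_i, x_j)."""
    d2 = (np.sum(X1**2, axis=1)[:, None]
          + np.sum(X2**2, axis=1)[None, :]
          - 2.0 * X1 @ X2.T)
    return np.exp(-d2 / (2.0 * sigma**2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Solve the linear system of Equations (31)-(32) for alpha and b."""
    N = len(y)
    A = rbf_kernel(X, X, sigma) + np.eye(N) / gamma   # Omega + I/gamma
    lhs = np.zeros((N + 1, N + 1))
    lhs[0, 1:] = 1.0
    lhs[1:, 0] = 1.0
    lhs[1:, 1:] = A
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(lhs, rhs)
    return sol[1:], sol[0]                            # alpha, b

def lssvm_predict(X_train, alpha, b, X_new, sigma=1.0):
    """Evaluate the regression function of Equation (34)."""
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b
```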
Based on Equation (34), in GPS/INS integrated navigation systems, regression is used to forecast the innovation, which is unaffected by the error tracking effects of the Kalman filter, as shown in Equation (35):
(35)$$\bar{{\bi r}}_{k} = \sum_{j=1}^{N}\alpha_{j}K({\bi z}_{j}, {\bi z}_{k}) + b$$
where $\bar{{\bi r}}_{k}$ is the innovation forecasted by the regression function, which is independent of the Kalman filter and immune to error tracking, and ${\bi z}_{k}$ is the observation of the GPS/INS integrated system used as the input of the regression function.
3.5. Soft Fault Detection Method based on LS-SVM-Enhanced AIME
As mentioned above, when faults occur in an integrated navigation system, the innovation gradually decreases as a result of error tracking effects and error correction in the Kalman filter. To mitigate the impact of decreased innovation, this section proposes a method for utilising a forecast innovation via LS-SVM regression instead of the original innovation of the Kalman filter. To realise the forecast function, the LS-SVM needs to implement two stages: training and regression. The LS-SVM training and regression procedure is given in Figure 3.
1. Training

1) Training input collection. At the training stage, the training input should be confirmed first. The real-time observations, i.e. the pseudorange and pseudorange rate measurements in Equation (5), are set as the training input.

2) Training output collection. The innovations of the Kalman filter compensated according to Equation (16), which are immune to the error tracking effect of the Kalman filter, are used as the training output. Thus, a correct relation between the measurements and the innovation can be established.

3) Training parameter setting. In this paper, the kernel function of the LS-SVM is the Radial Basis Function (RBF) kernel, and the other parameters are obtained via the parameter optimisation function (Ying and Keong, 2004).

2. Regression

1) Regression input collection. To forecast the innovation free from Kalman filter error tracking and error correction, the real-time measurements are input into the regression equation after the training stage.

2) Regression output estimation. Based on the training stage, the type of regression output is the same as that of the training output. Therefore, the regression output is the alternative innovation independent of the Kalman filter, which can be used to detect faults.

3. Obtain the forecasted innovation

$\bar{{\bi r}}_{k}$ is the forecasted innovation from the LS-SVM regression, which is given by

(36)$$\begin{cases}\hbox{Training:} & {\bi r}_{i} = f_{\hbox{LS-SVM}}({\bi z}_{i}), \quad 1 \leq i \leq n_{s}\\ \hbox{Regression:} & \bar{{\bi r}}_{j} = f_{\hbox{LS-SVM}}({\bi z}_{j}), \quad j \geq n_{s} + 1\end{cases}$$

where $n_{s}$ is the number of training samples. The number of samples used in the training and regression process should be chosen carefully when forecasting independent innovations: on the one hand, it should be sufficiently large to achieve an acceptable regression accuracy; on the other hand, the computational cost of regression increases with the sample number, which affects the real-time performance of fault detection.

4. Calculate the weighted innovation

The weighted innovation is given by Equations (37) and (38):

(37)$$\bar{{\bi r}}_{avg} = ({\bi V}^{-1}_{avg})^{-1}\sum^{l}_{k=1}{\bi V}_{k}^{-1}\bar{{\bi r}}_{k}$$

(38)$${\bi V}^{-1}_{avg} = \sum^{l}_{k=1} {\bi V}^{-1}_{k}$$

5. Calculate the test statistics

Based on the LS-SVM and AIME, the normalised test statistics of the proposed method are given by

(39)$$\bar{{\bi s}}_{avg}^{2} = \bar{{\bi r}}^{T}_{avg}{\bi V}^{-1}_{avg}\bar{{\bi r}}_{avg}$$

The test statistics $\bar{{\bi s}}^{2}_{avg}$ follow a central chi-square distribution when the measurements are free of faults. If faults occur, the pre-established Kalman filter model cannot follow the changes in the measurements, and the test statistics follow a non-central chi-square distribution. Following hypothesis testing theory, the test statistics are compared with a threshold $T_{r}$ to judge whether a fault has occurred in the integrated navigation system.
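Putting the five steps together, the following sketch shows one detection cycle as we understand it, reusing the illustrative helpers `lssvm_train`, `lssvm_predict` and `aime_statistic` defined above. The window lengths, component-wise training and threshold handling are simplifying assumptions, not the paper's exact implementation.

```python
import numpy as np
from scipy.stats import chi2

def lssvm_aime_detect(z_hist, r_hist, V_hist, z_now, V_now,
                      n_train=10, l_avg=150, p_fa=1e-5, gamma=10.0, sigma=1.0):
    """One cycle of the LSSVM-AIME scheme (see Figure 3), sketched.

    z_hist, r_hist : past observations and compensated innovations (training data)
    V_hist         : past innovation covariances inside the averaging window
    z_now, V_now   : current observation vector and innovation covariance
    """
    # Steps 1-3: train on the last n_train (observation, innovation) pairs,
    # component by component, and forecast the current innovation.
    X = np.asarray(z_hist[-n_train:])
    R = np.asarray(r_hist[-n_train:])
    r_forecast = np.empty(len(z_now))
    for i in range(len(z_now)):
        alpha, b = lssvm_train(X, R[:, i], gamma, sigma)
        r_forecast[i] = lssvm_predict(X, alpha, b, z_now[None, :], sigma)[0]
    # Steps 4-5: slide the forecast innovation into the AIME window,
    # form the weighted statistic and compare with the chi-square threshold.
    window_r = list(r_hist[-(l_avg - 1):]) + [r_forecast]
    window_V = list(V_hist[-(l_avg - 1):]) + [V_now]
    s2 = aime_statistic(window_r, window_V)
    T_r = chi2.ppf(1.0 - p_fa, df=len(z_now))   # DOF = 2 x visible satellites
    return s2 > T_r, r_forecast
```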
Figure 3. The training and regression sequence of the LS-SVM.
4. SIMULATION AND ANALYSIS
4.1. Simulation Description
A simulation platform of a GPS/INS tightly coupled integrated navigation system is built based on Figure 1. The proposed method combining an LS-SVM and AIME (called the LSSVM-AIME below) is applied to the simulation platform. AIME is also applied to the GPS/INS tightly coupled system, and the results of the LSSVM-AIME and AIME are compared and analysed.
The parameters of the INS and GPS receiver simulation are shown in Table 1.
Table 1. Simulation parameters.
The trajectory simulation of the entire navigation procedure is designed as a dynamic trace, including climbing, cruising and manoeuvring. The simulated trajectory curve is shown in Figure 4.
Figure 4. Navigation trajectory simulation.
4.2. Soft Fault Simulation
To verify the effect of detecting soft faults in AIME and LSSVM-AIME, ramp-type soft faults are added to a pseudorange measurement with different slopes. The soft faults are described by Equation (40):
(40)$$\rho_{\rm fault} = \rho + v\,(t - t_{f}), \qquad t \geq t_{f}$$
where ρ denotes the original pseudorange with no faults; $\rho_{\rm fault}$ denotes the pseudorange with the ramp error; v denotes the ramp error slope, set to 0·1, 0·2, 0·5, 1·0 and 2·0 m/s, respectively; t is the current navigation time; and $t_{f}$ is the time at which the fault occurs. The ramp error amplitude over time is shown in Figure 5, and the corresponding navigation results are given in Figure 6.
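A short sketch of the ramp fault injection of Equation (40) is shown below; the fault onset time t_f = 200 s and the zero baseline pseudorange are placeholders chosen only for illustration.

```python
import numpy as np

def add_ramp_fault(rho, t, t_f, slope):
    """Equation (40): add a ramp-type soft fault with the given slope (m/s)
    to a pseudorange after the fault onset time t_f."""
    return rho + slope * np.maximum(t - t_f, 0.0)

t = np.arange(0.0, 600.0, 1.0)                       # navigation time, s
ramps = {v: add_ramp_fault(0.0, t, t_f=200.0, slope=v)
         for v in (0.1, 0.2, 0.5, 1.0, 2.0)}         # slopes used in the simulation
```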
Figure 5. Fault amplitude under different ramp error sizes.
Figure 6. Navigation position error under different ramp error sizes.
From Figures 5 and 6, we can see that the navigation results are influenced more seriously by soft faults as the ramp error size increases. When the ramp error size is 0·1 m/s, the error in the navigation result is about 10 metres, which has minimal effect on most navigation applications. One reason is that the ramp error size is small; another is that the Kalman filter itself provides a degree of fault tolerance. Therefore, when ramp errors are small over a short time period, the faulty measurement in a GPS/INS tightly coupled system does not need to be isolated. However, in applications demanding high accuracy, such as precision approaches, this error is too large to be neglected. In addition, as the ramp error size increases, the navigation error grows quickly, leading to a dangerous situation and presenting a hazard to human life.
Considering the computational complexity, the averaging time of AIME and LSSVM-AIME is set to 150 s for fault detection. For the LS-SVM, the RBF kernel is selected as the kernel function, and the length of the training window is ten epochs. Because the observations in a tightly coupled navigation system are the pseudoranges and pseudorange rates, the number of degrees of freedom of the chi-square distribution is twice the number of visible satellites. The false alarm rate is set to 0·00001.
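For reference, the detection threshold implied by these settings can be computed as below; the number of visible satellites (here 8) is an assumed example, not a value stated in the paper.

```python
from scipy.stats import chi2

n_sat = 8                         # assumed number of visible satellites
dof = 2 * n_sat                   # pseudorange + pseudorange-rate observations
p_fa = 1e-5                       # false alarm rate used in the simulation
T_r = chi2.ppf(1.0 - p_fa, dof)   # detection threshold for the test statistics
```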
The test statistics used in AIME and LSSVM-AIME are based on the original innovation of the Kalman filter and on the alternative innovation obtained from the LS-SVM, respectively. The two innovation sequences, which change with the measurements, are shown in Figure 7.
Figure 7. Comparison between the two innovations from LSSVM-AIME and AIME.
Comparing the results in Figure 7 with those in Figure 5 at the same epochs, both the innovation of the Kalman filter and the forecasted innovation of LSSVM-AIME follow the measurement error: the faster the ramp error increases, the larger the innovation becomes. However, the LSSVM-AIME innovation follows the error more closely in the same situation. Because the LS-SVM forecasts the innovation from training and regression, whereas the Kalman filter applies error tracking and error correction, the LS-SVM innovation responds faster to measurement changes, as shown in Figure 7.
The test statistic results of AIME and LSSVM-AIME are provided as follows. The fault detection results for ramp error sizes of 0·1 m/s and 0·2 m/s are given in Figure 8, and those for 0·5 m/s and 1 m/s are given in Figure 9.
Figure 8. Test statistics for ramp sizes of 0·1 m/s and 0·2 m/s.
Figure 9. Test statistics for ramp sizes of 0·5 m/s and 1 m/s.
In Figure 8, the red solid lines represent the values of the test statistics of LSSVM-AIME, and the blue dotted lines represent those of classic AIME. The line "Tr" represents the fault detection threshold, which is calculated from the chi-square distribution. From the results, we can see that both types of test statistics increase as the error grows, and that they increase with the ramp error size. It is also worth noting, as Figure 8 shows, that the test statistics of LSSVM-AIME increase faster and reach the threshold earlier than those of AIME, because the LS-SVM-based test statistics are free of the error tracking effects of the Kalman filter. The times of the fault alarms are shown in Table 2.
Table 2. Comparison of time delays for LSSVM-AIME and AIME.
Similar to the results in Figure 8, the test statistics in Figure 9 further demonstrate the better detection performance of LSSVM-AIME. In addition, as a result of the larger ramp error sizes, the detection times of both methods are shorter than those in Figure 8. A 50-run Monte Carlo simulation of LSSVM-AIME and AIME has been conducted to provide statistical results for the time delay, which are shown in Table 2.
In Table 2, a quantitative comparison of the time delays of the two methods obtained from the Monte Carlo simulation is made. Two conclusions can be drawn. First, LSSVM-AIME has a smaller time delay than classic AIME. When the ramp error size is 0·1 m/s, LSSVM-AIME raises the fault alarm 69·33 s earlier than AIME, a 63% reduction in time delay; when the ramp error size is 0·2 m/s, LSSVM-AIME is 48·36 s faster, a 62% reduction; when the ramp error size is 0·5 m/s, LSSVM-AIME is 30·4 s faster, a 61·8% reduction; and when the ramp error size is 1 m/s, LSSVM-AIME is 20·26 s faster, a 58·7% reduction. Thus LSSVM-AIME has a smaller time delay in detecting soft faults. Second, the result that a larger ramp error size leads to faster fault detection is confirmed again: the time delays of both LSSVM-AIME and classic AIME decrease noticeably as the ramp error size grows, and when the error size exceeds 0·5 m/s, AIME also exhibits good detection instantaneity. Therefore, as the ramp error size grows, the time delay gap between LSSVM-AIME and classic AIME decreases.
To compare different training window lengths, Monte Carlo simulations are performed with LS-SVM training window lengths of 10, 50 and 100. The time delays of LSSVM-AIME under a ramp error size of 0·1 m/s are shown in Table 3. It is evident from Table 3 that the time delay is shorter when the training window is smaller.
Table 3. Comparison of different lengths of training window of LS-SVM.
Because LSSVM-AIME is a more complicated algorithm, the extra elapsed time caused by the additional computation should also be considered. In the simulation, the difference between the elapsed times of LSSVM-AIME and AIME is also calculated. The simulation tests are conducted under different ramp error sizes and repeated 50 times. The average elapsed time gap between LSSVM-AIME and AIME is 4·853 ms. This result shows that the computational complexity of the proposed method has only a limited effect on fault detection in practical applications. With the rapid development of processing hardware, this gap can be narrowed further.
5. CONCLUSIONS
In this paper, an approach for detecting soft faults based on LS-SVM-enhanced AIME is proposed. To improve the real-time performance of the traditional method, which is limited by the error tracking characteristics and error correction of the Kalman filter, a new test statistic is derived from an innovation forecasted by an LS-SVM. Owing to the good training and regression performance of the LS-SVM, the forecasted innovation, free from error tracking and correction effects, leads to a smaller time delay in soft fault detection. The simulation results demonstrate that the proposed LSSVM-AIME fault detection method effectively decreases the time delay of soft fault detection, especially for faults with small ramp error sizes. This performance is important for safety-critical applications.
ACKNOWLEDGMENTS
This work was partially supported by the National Natural Science Foundation of China (Grant Nos. 61374115, 61328301, 61273057 and 61210306075), the University Industry Research Project of Aviation Industry Cooperation of China (CXY2012NH09), the Prospective University Industry Cooperation Project of Jiangsu Province (BY2013003-03), the Priority Academic Program Development of Jiangsu Higher Education Institutions, the Fundamental Research Funds for the Central Universities, and the Nanjing University of Aeronautics and Astronautics Special Research Funding. The author would like to thank the anonymous reviewers for their helpful comments and valuable remarks.