
An Adaptive Sensor Fusion Method with Applications in Integrated Navigation

Published online by Cambridge University Press:  02 October 2008

Dah-Jing Jwo*
Affiliation:
(National Taiwan Ocean University)
Tsu-Pin Weng
Affiliation:
(National Taiwan Ocean University)

Abstract

The Kalman filter (KF) is a form of optimal estimator characterized by recursive evaluation, which has been widely applied to navigation sensor fusion. Utilizing the KF requires that all the plant dynamics and noise processes are completely known, and that the noise processes are zero-mean white noise. If the theoretical behaviour of the filter and its actual behaviour do not agree, divergence problems tend to occur. The adaptive algorithm has been one of the approaches to prevent divergence problems in the Kalman filter when precise knowledge of the system models is not available. Two popular types of adaptive Kalman filter are the innovation-based adaptive estimation (IAE) approach and the adaptive fading Kalman filter (AFKF) approach. In this paper, an approach that combines the concepts of the two methods is presented. The proposed method is a synergy of the IAE and AFKF approaches. The ratio of the actual innovation covariance, based on the sampled sequence, to the theoretical innovation covariance is employed for dynamically tuning two filter parameters: fading factors and measurement noise scaling factors. The method has the merits of good computational efficiency and numerical stability, and the matrices in the KF loop remain positive definite. Navigation sensor fusion using the proposed scheme is demonstrated, and its performance in loosely coupled GPS/INS navigation applications is discussed.

Type
Research Article
Copyright
Copyright © The Royal Institute of Navigation 2008

1. INTRODUCTION

The Kalman filter (KF) (Brown and Hwang, 1997; Gelb, 1974) not only works well in practice, but is also theoretically attractive, since it has been shown to be the filter that minimizes the variance of the estimation mean square error (MSE). Nevertheless, the fact that the KF depends highly on a predefined dynamics model is a major drawback. To achieve good filtering results, the designers are required to have complete a priori knowledge of both the dynamic process and measurement models. In addition, both the process and the measurements are assumed to be corrupted by zero-mean Gaussian white sequences.

In Kalman filter designs, divergence due to modelling errors is critical. If the input data do not reflect the real model, the KF estimates may not be reliable. The case where the theoretical behaviour of a filter and its actual behaviour do not agree may lead to divergence problems. For example, if the Kalman filter is provided with information that the process behaves in a certain way, whereas, in fact, it behaves in a different way, the filter will continually attempt to fit an incorrect process signal. Furthermore, when the measurement situation does not provide sufficient information to estimate all the state variables of the system, i.e. the estimation error covariance matrix becomes unrealistically small, the filter disregards the measurement. In various circumstances where there are uncertainties in the system model and noise description, the assumptions on the statistics of disturbances are violated, since in a number of practical situations the availability of a precisely known model is unrealistic. This can be due to the fact that, in the modelling step, some phenomena are disregarded, and a way to take them into account is to consider a nominal model affected by uncertainty. To fulfil the requirement of achieving filter optimality, or to prevent divergence problems in Kalman filters, the adaptive Kalman filter (AKF) approach (Ding et al., 2007; El-Mowafy and Mohamed, 2005; Mehra, 1970; 1971; 1972; Mohamed and Schwarz, 1999; Hide et al., 2003) has been one of the promising strategies for dynamically adjusting the parameters of the supposedly optimum filter, based on the estimates of the unknown parameters for on-line estimation of motion as well as the available data on signal and noise statistics.
Two popular types of adaptive Kalman filter algorithms are the innovation-based adaptive estimation (IAE) approach (El-Mowafy and Mohamed, 2005; Mehra, 1970; 1971; 1972; Mohamed and Schwarz, 1999; Hide et al., 2003) and the adaptive fading Kalman filter (AFKF) approach (Xia et al., 1994), a type of covariance scaling method. The AFKF incorporates suboptimal fading factors as a multiplier to enhance the influence of innovation information, improving the tracking capability in high-dynamic manoeuvring.

The Global Positioning System (GPS) (Axelrad and Brown, 1996) and inertial navigation systems (INS) (Farrell, 1998; Salychev, 1998) have complementary operational characteristics, and the synergy of both systems has been widely explored. GPS is capable of providing accurate position information. Unfortunately, the data are prone to jamming or loss owing to the limitations of the electromagnetic waves that are fundamental to its operation. The system is not able to work properly in some areas, owing to signal blockage and attenuation that may deteriorate the overall positioning accuracy. The INS is a self-contained system that integrates three acceleration components and three angular velocity components with respect to time and transforms them into the navigation frame to deliver position, velocity and attitude components. The three orthogonal linear accelerations are continuously measured by three-axis accelerometers, while three gyroscopes monitor the three orthogonal angular rates in an inertial frame of reference. For short time intervals, the integration with respect to time of the linear acceleration and angular velocity monitored by the INS results in accurate velocity, position and attitude. However, the errors in the position coordinates increase without bound as a function of time. GPS/INS integration is an adequate solution to provide a navigation system that has superior performance in comparison with either a GPS or an INS stand-alone system. GPS/INS integration is typically carried out through a Kalman filter. Therefore, the design of GPS/INS integrated navigation systems depends heavily on the design of the sensor fusion method.

This paper is organized as follows. In Section 2, preliminary background on the adaptive Kalman filter is reviewed. The proposed adaptive sensor fusion strategy is introduced in Section 3. In Section 4, simulation experiments and analysis are carried out to evaluate the performance of the approach in comparison with the conventional approach. Conclusions are given in Section 5.

2. ADAPTIVE KALMAN FILTER

The process model and measurement model are represented as:

(1a)
\mathbf{x}_{k+1} = \boldsymbol{\Phi}_k \mathbf{x}_k + \mathbf{G}_k \mathbf{w}_k
(1b)
\mathbf{z}_k = \mathbf{H}_k \mathbf{x}_k + \mathbf{v}_k

where the state vector xk∊ℜn, process noise vector wk∊ℜn, measurement vector zk∊ℜm, and measurement noise vector vk∊ℜm. In Equation (1), both the vectors wk and vk are zero mean Gaussian white sequences having zero cross correlation with each other:

(2)
E[\mathbf{w}_k \mathbf{w}_i^{\mathrm{T}}] = \begin{cases} \mathbf{Q}_k, & i = k \\ \mathbf{0}, & i \ne k \end{cases}; \quad E[\mathbf{v}_k \mathbf{v}_i^{\mathrm{T}}] = \begin{cases} \mathbf{R}_k, & i = k \\ \mathbf{0}, & i \ne k \end{cases}; \quad E[\mathbf{w}_k \mathbf{v}_i^{\mathrm{T}}] = \mathbf{0} \ \text{for all } i \text{ and } k

where Qk is the process noise covariance matrix, Rk is the measurement noise covariance matrix, Φk = e^{FΔt} is the state transition matrix, Δt is the sampling interval, E[·] represents expectation, and superscript “T” denotes matrix transpose.

The discrete-time Kalman filter algorithm is summarized as follows:

  • Prediction steps/time update equations:

    (3)
    \hat{\mathbf{x}}_{k+1}^{-} = \boldsymbol{\Phi}_k \hat{\mathbf{x}}_k
    (4)
    \mathbf{P}_{k+1}^{-} = \boldsymbol{\Phi}_k \mathbf{P}_k \boldsymbol{\Phi}_k^{\mathrm{T}} + \mathbf{Q}_k
  • Correction steps/measurement update equations:

    (5)
    \mathbf{K}_k = \mathbf{P}_k^{-} \mathbf{H}_k^{\mathrm{T}} [\mathbf{H}_k \mathbf{P}_k^{-} \mathbf{H}_k^{\mathrm{T}} + \mathbf{R}_k]^{-1}
    (6)
    \hat{\mathbf{x}}_k = \hat{\mathbf{x}}_k^{-} + \mathbf{K}_k [\mathbf{z}_k - \mathbf{H}_k \hat{\mathbf{x}}_k^{-}]
    (7)
    \mathbf{P}_k = [\mathbf{I} - \mathbf{K}_k \mathbf{H}_k] \mathbf{P}_k^{-}

The innovation sequences have been utilized by the correlation and covariance-matching techniques to estimate the noise covariances. The basic idea behind the covariance-matching approach is to make the actual value of the covariance of the residual consistent with its theoretical value. The implementation of IAE-based AKF in navigation design has been widely explored (Hide et al., 2003; Mohamed and Schwarz, 1999). Equations (3)–(4) are the time update equations of the algorithm from step k to step k+1, and Equations (5)–(7) are the measurement update equations. These equations incorporate a measurement value into the a priori estimation to obtain an improved a posteriori estimation. In the above equations, \mathbf{P}_k is the error covariance matrix defined by E[(\mathbf{x}_k - \hat{\mathbf{x}}_k)(\mathbf{x}_k - \hat{\mathbf{x}}_k)^{\mathrm{T}}], in which \hat{\mathbf{x}}_k is an estimate of the system state vector \mathbf{x}_k, and the weighting matrix \mathbf{K}_k is generally referred to as the Kalman gain matrix. The Kalman filter algorithm starts with the initial condition values \hat{\mathbf{x}}_0^{-} and \mathbf{P}_0. As new measurements \mathbf{z}_k become available with the progression of time, the estimates of the states and the corresponding error covariance follow recursively.
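As an illustration, the recursion in Equations (3)–(7) can be sketched in Python with NumPy. This is a minimal sketch; the function and variable names are ours, not from the paper:

```python
import numpy as np

def kf_predict(x, P, Phi, Q):
    """Time update, Equations (3)-(4): propagate state and covariance."""
    x_pred = Phi @ x
    P_pred = Phi @ P @ Phi.T + Q
    return x_pred, P_pred

def kf_update(x_pred, P_pred, z, H, R):
    """Measurement update, Equations (5)-(7): gain, state and covariance correction."""
    S = H @ P_pred @ H.T + R                  # theoretical innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain, Eq. (5)
    innovation = z - H @ x_pred               # innovation sequence, see Eq. (8)
    x = x_pred + K @ innovation               # a posteriori state, Eq. (6)
    P = (np.eye(len(x)) - K @ H) @ P_pred     # a posteriori covariance, Eq. (7)
    return x, P, innovation
```

Starting from \hat{\mathbf{x}}_0^{-} and \mathbf{P}_0, these two functions are called alternately as each new measurement arrives.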

From the incoming measurement \mathbf{z}_k and the optimal prediction \hat{\mathbf{x}}_k^{-} obtained in the previous step, the innovation sequence is defined as:

(8)
\boldsymbol{\nu}_k = \mathbf{z}_k - \hat{\mathbf{z}}_k^{-}

The innovation reflects the discrepancy between the predicted measurement \mathbf{H}_k \hat{\mathbf{x}}_k^{-} and the actual measurement \mathbf{z}_k. It represents the additional information available to the filter as a consequence of the new observation \mathbf{z}_k. The weighted innovation, \mathbf{K}_k (\mathbf{z}_k - \mathbf{H}_k \hat{\mathbf{x}}_k^{-}), acts as a correction to the predicted estimate \hat{\mathbf{x}}_k^{-} to form the estimate \hat{\mathbf{x}}_k. Substituting the measurement model Equation (1b) into Equation (8) gives:

(9)
\boldsymbol{\nu}_k = \mathbf{H}_k (\mathbf{x}_k - \hat{\mathbf{x}}_k^{-}) + \mathbf{v}_k

which is a zero-mean Gaussian white-noise sequence. An innovation of zero means that the predicted and actual measurements are in complete agreement; the mean of the corresponding error of an unbiased estimator is zero. Taking variances of both sides, the theoretical covariance matrix of the innovation sequence is given by:

(10a)
\mathbf{C}_{\nu_k} = E[\boldsymbol{\nu}_k \boldsymbol{\nu}_k^{\mathrm{T}}] = \mathbf{H}_k \mathbf{P}_k^{-} \mathbf{H}_k^{\mathrm{T}} + \mathbf{R}_k

which can be written as:

(10b)
\mathbf{C}_{\nu_k} = \mathbf{H}_k (\boldsymbol{\Phi}_k \mathbf{P}_k \boldsymbol{\Phi}_k^{\mathrm{T}} + \boldsymbol{\Gamma}_k \mathbf{Q}_k \boldsymbol{\Gamma}_k^{\mathrm{T}}) \mathbf{H}_k^{\mathrm{T}} + \mathbf{R}_k

Defining \hat{\mathbf{C}}_{\nu_k} as the statistical sample variance estimate of \mathbf{C}_{\nu_k}, the matrix \hat{\mathbf{C}}_{\nu_k} can be computed through averaging inside a moving estimation window of size N:

(11)
\hat{\mathbf{C}}_{\nu_k} = \frac{1}{N} \sum_{j=j_0}^{k} \boldsymbol{\nu}_j \boldsymbol{\nu}_j^{\mathrm{T}}

where N is the number of samples (usually referred to as the window size) and j_0 = k − N + 1 is the first sample inside the estimation window. The window size N is chosen empirically to give some statistical smoothing (a good size for the moving window typically varies from 10 to 30). More detailed discussion can be found in Gelb (1974) and Brown and Hwang (1997).
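The windowed estimate of Equation (11) can be sketched as follows, assuming the most recent innovations are kept in a bounded buffer (names and the window size are illustrative):

```python
import numpy as np
from collections import deque

def innovation_covariance(window):
    """Equation (11): sample innovation covariance averaged over a moving window."""
    return sum(np.outer(nu, nu) for nu in window) / len(window)

# Keep only the last N innovations; appending beyond maxlen drops the oldest.
N = 15
window = deque(maxlen=N)
```

At each filter step the current innovation is appended to `window` before calling `innovation_covariance`, so the estimate automatically discards data older than N steps.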

The benefit of the adaptive algorithm is that it keeps the covariance consistent with the real performance. Making the actual value of the innovation covariance consistent with its theoretical value leads to an estimate of Rk:

(12)
\hat{\mathbf{R}}_k = \hat{\mathbf{C}}_{\nu_k} - \mathbf{H}_k \mathbf{P}_k^{-} \mathbf{H}_k^{\mathrm{T}}

Based on the residual-based estimate, the estimate of the process noise Qk is obtained:

(13)
\hat{\mathbf{Q}}_k = \frac{1}{N} \sum_{j=j_0}^{k} \Delta\mathbf{x}_j \Delta\mathbf{x}_j^{\mathrm{T}} + \mathbf{P}_k - \boldsymbol{\Phi}_k \mathbf{P}_{k-1} \boldsymbol{\Phi}_k^{\mathrm{T}}

where \Delta\mathbf{x}_k = \hat{\mathbf{x}}_k - \hat{\mathbf{x}}_k^{-} is the state correction sequence. This equation can also be written in terms of the innovation sequence:

(14)
\hat{\mathbf{Q}}_k \approx \mathbf{K}_k \hat{\mathbf{C}}_{\nu_k} \mathbf{K}_k^{\mathrm{T}}

For more detailed information on the derivation of these equations, see Mohamed and Schwarz (1999).
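Equations (12) and (14) translate directly into code. The following sketch assumes the windowed estimate \hat{\mathbf{C}}_{\nu_k} and the current gain \mathbf{K}_k are already available from the filter loop; the function names are ours:

```python
import numpy as np

def estimate_R(C_hat, H, P_pred):
    """Equation (12): residual-based estimate of the measurement noise covariance."""
    return C_hat - H @ P_pred @ H.T

def estimate_Q(C_hat, K):
    """Equation (14): innovation-based approximation of the process noise covariance."""
    return K @ C_hat @ K.T
```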

3. THE PROPOSED ADAPTIVE SENSOR FUSION STRATEGY

A new strategy for tuning the filter parameters is presented. The conventional KF approach is coupled with the adaptive tuning system (ATS) to provide two system parameters: fading factor and noise covariance scaling factor. In the ATS mechanism, both adaptations on process noise covariance (also referred to as P-adaptation herein) and on measurement noise covariance (also referred to as R-adaptation herein) are involved. The idea is based on the concept that when the filter achieves estimation optimality, the actual innovation covariance based on the sampled sequence and the theoretical innovation covariance should be equal. In other words, the ratio between the two should equal one.

3.1. Adaptation on process noise covariance

One of the approaches to adaptive processing is the incorporation of fading factors. Xia et al. (1994) proposed the concept of the adaptive fading Kalman filter and solved the state estimation problem. In the AFKF, suboptimal fading factors are introduced into the nonlinear smoother algorithm. The idea of fading Kalman filtering is to apply a factor matrix to the predicted covariance matrix to deliberately increase the variance of the predicted state vector. To account for the uncertainty, the covariance matrix is updated in the following way: the new \overline{\mathbf{P}}_k^{-} is obtained by multiplying \mathbf{P}_k^{-} by the factor \lambda_P:

(15)
\overline{\mathbf{P}}_k^{-} = \lambda_P \mathbf{P}_k^{-}

and the corresponding Kalman gain is given by:

(16a)
\overline{\mathbf{K}}_k = \overline{\mathbf{P}}_k^{-} \mathbf{H}_k^{\mathrm{T}} [\mathbf{H}_k \overline{\mathbf{P}}_k^{-} \mathbf{H}_k^{\mathrm{T}} + \overline{\mathbf{R}}_k]^{-1}

Introducing the new variable \overline{\mathbf{R}}_k = \lambda_R \mathbf{R}_k, we have:

(16b)
\overline{\mathbf{K}}_k = \overline{\mathbf{P}}_k^{-} \mathbf{H}_k^{\mathrm{T}} [\mathbf{H}_k \overline{\mathbf{P}}_k^{-} \mathbf{H}_k^{\mathrm{T}} + \lambda_R \mathbf{R}_k]^{-1}

From Equation (16b), it can be seen that the change of covariance is essentially governed by two parameters: \overline{\mathbf{P}}_k^{-} and \mathbf{R}_k. In addition, the covariance matrix at the measurement update stage, from Equation (7), can be written as:

(17a)
\overline{\mathbf{P}}_k = [\mathbf{I} - \overline{\mathbf{K}}_k \mathbf{H}_k] \overline{\mathbf{P}}_k^{-}

and

(17b)
\overline{\mathbf{P}}_k = \lambda_P [\mathbf{I} - \overline{\mathbf{K}}_k \mathbf{H}_k] \mathbf{P}_k^{-}

Furthermore, based on the relationship given by Equation (15), the covariance matrix at the prediction stage (i.e., Equation (4)) is given by:

(18)
\overline{\mathbf{P}}_{k+1}^{-} = \boldsymbol{\Phi}_k \overline{\mathbf{P}}_k \boldsymbol{\Phi}_k^{\mathrm{T}} + \mathbf{Q}_k

or, alternatively:

(19a)
\overline{\mathbf{P}}_{k+1}^{-} = \lambda_P \boldsymbol{\Phi}_k \mathbf{P}_k \boldsymbol{\Phi}_k^{\mathrm{T}} + \mathbf{Q}_k

On the other hand, the covariance matrix can also be approximated by:

(19b)
\overline{\mathbf{P}}_{k+1}^{-} = \lambda_P \mathbf{P}_{k+1}^{-} = \lambda_P (\boldsymbol{\Phi}_k \mathbf{P}_k \boldsymbol{\Phi}_k^{\mathrm{T}} + \mathbf{Q}_k)

where λ_P = diag(λ_1, λ_2, …, λ_m). The main difference between adaptive fading algorithms lies in the calculation of the scale factor λ_P. One approach is to assign the scale factors as constants. When λ_i ⩽ 1 (i = 1, 2, …, m), the filtering is in steady-state processing, while for λ_i > 1 the filtering may tend to be unstable. For the case λ_i = 1, it reduces to the standard Kalman filter. There are some drawbacks with constant factors: for example, as the filtering proceeds, the precision of the filtering will decrease because the effect of old data tends to diminish. The ideal approach is to use time-varying factors that are determined according to the accuracy of the dynamic and observation models.

When there is deviation due to the changes of covariance and measurement noise, the corresponding innovation covariance matrix can be rewritten as:

\overline{\mathbf{C}}_{\nu_k} = \mathbf{H}_k \overline{\mathbf{P}}_k^{-} \mathbf{H}_k^{\mathrm{T}} + \overline{\mathbf{R}}_k

and

(20)
\overline{\mathbf{C}}_{\nu_k} = \lambda_P \mathbf{H}_k \mathbf{P}_k^{-} \mathbf{H}_k^{\mathrm{T}} + \lambda_R \mathbf{R}_k

To enhance the tracking capability, a time-varying suboptimal scaling factor is incorporated for on-line tuning of the covariance of the predicted state, which adjusts the filter gain; accordingly, an improved version of the AFKF is obtained. The optimum fading factors can be calculated through the single factor:

(21)
\lambda_i = (\lambda_P)_{ii} = \max\left\{1, \frac{\mathrm{tr}(\hat{\mathbf{C}}_{\nu_k})}{\mathrm{tr}(\mathbf{C}_{\nu_k})}\right\}, \quad i = 1, 2, \ldots, m

where tr(·) is the trace of a matrix and λ_i ⩾ 1 is a scaling factor. Increasing λ_i improves the tracking performance.
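Equation (21) amounts to a trace ratio floored at one. A minimal sketch, with our own function name:

```python
import numpy as np

def fading_factor(C_hat, C_theory):
    """Equation (21): single fading factor from the ratio of the sampled to the
    theoretical innovation covariance, floored at 1 so the filter never becomes
    over-confident relative to the standard KF."""
    return max(1.0, np.trace(C_hat) / np.trace(C_theory))
```

When the sampled innovation covariance exceeds its theoretical value (e.g. during manoeuvring), the factor grows above one and inflates the predicted covariance; otherwise the standard filter is recovered.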

3.2. Adaptation on measurement noise covariance

As the strength of the measurement noise changes with the environment, incorporation of the fading factor alone is not able to attain the expected estimation accuracy. To resolve this problem, the ATS needs a mechanism for R-adaptation in addition to P-adaptation, to adjust the noise strengths and improve the filter estimation performance.

A parameter representing the ratio of the actual innovation covariance, based on the sampled sequence, to the theoretical innovation covariance matrix can be defined using one of the following methods:

  (a) Single factor:

    (22a)
    \lambda_j = (\lambda_R)_{jj} = \frac{\mathrm{tr}(\hat{\mathbf{C}}_{\nu_k})}{\mathrm{tr}(\mathbf{C}_{\nu_k})}, \quad j = 1, 2, \ldots, n
  (b) Multiple factors:

    (22b)
    \lambda_j = \frac{(\hat{\mathbf{C}}_{\nu_k})_{jj}}{(\mathbf{C}_{\nu_k})_{jj}}, \quad j = 1, 2, \ldots, n

It should be noted from Equation (20) that increasing \mathbf{R}_k will lead to increasing \mathbf{C}_{\nu_k}, and vice versa; that is, a time-varying \mathbf{R}_k leads to a time-varying \mathbf{C}_{\nu_k}. The value of \lambda_R is introduced in order to reduce the discrepancies between \mathbf{C}_{\nu_k} and \mathbf{R}_k. The adaptation can be implemented through the simple relation:

(23)
\overline{\mathbf{R}}_k = \lambda_R \mathbf{R}_k
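Equations (22a), (22b) and (23) can be sketched as follows; the single-factor form returns one scalar, while the multiple-factor form returns one ratio per measurement channel (function names are ours):

```python
import numpy as np

def r_scale_single(C_hat, C_theory):
    """Equation (22a): a single scale factor from the trace ratio."""
    return np.trace(C_hat) / np.trace(C_theory)

def r_scale_multiple(C_hat, C_theory):
    """Equation (22b): per-channel scale factors from the diagonal ratios."""
    return np.diag(C_hat) / np.diag(C_theory)

def adapt_R(R, lam_R):
    """Equation (23): scale the measurement noise covariance (scalar factor)."""
    return lam_R * R
```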

Further detail regarding the adaptive tuning loop is illustrated by the flow charts shown in Figures 1 and 2, where two architectures are presented: Figure 1 shows System architecture #1 and Figure 2 shows System architecture #2. In Figure 1, the flow chart contains two portions, in which the block indicated by the dotted lines is the adaptive tuning system (ATS) for tuning the values of both the P and R parameters; in Figure 2, the flow chart contains three portions, in which the two blocks indicated by dotted lines represent the R-adaptation loop and the P-adaptation loop, respectively.

Figure 1. Flow chart of the proposed AKF method – System architecture #1.

Figure 2. Flow chart of the proposed AKF method – System architecture #2.

An important point needs to be made. When System architecture #1 is employed, only one window size is needed. It can be seen that the measurement noise covariance within the innovation covariance matrix has not yet been updated when the fading factor calculation is performed. In System architecture #2, the latest information on the measurement noise strength is already available when the fading factor is calculated. However, one should note that utilization of the 'old' (i.e., before R-adaptation) information is required. Otherwise, unreliable results may occur, since the deviation of the innovation covariance matrix due to the measurement noise cannot be correctly detected. One strategy for avoiding this problem is to use two different window sizes, one for the R-adaptation loop and the other for the P-adaptation loop.

4. SIMULATION EXPERIMENTS AND ANALYSIS

Simulation experiments have been carried out to evaluate the performance of the proposed approach in comparison with the conventional methods for GPS/INS navigation processing. The loosely coupled GPS/INS architecture is presented for demonstration. Figure 3 shows the strategy for GPS/INS navigation processing based on the ATS-coupled AKF mechanism. The GPS navigation solution based on least squares (LS) is solved at the first stage. The residual between the GPS LS solution and the INS-derived data is then used as the measurement of the KF.

Figure 3. GPS/INS navigation processing using the proposed AKF.

Simulation was conducted on a personal computer. The computer codes were constructed using Matlab® version 6.5. The commercial Satellite Navigation (SATNAV) Toolbox by GPSoft LLC was employed for generating the satellite positions and pseudoranges. The satellite constellation was simulated, and the error sources corrupting the GPS measurements include ionospheric delay, tropospheric delay, receiver noise and multipath. It was assumed that the differential GPS mode was used and most of the errors could be corrected, but the multipath and receiver thermal noise cannot be eliminated.

The differential equations describing the two-dimensional inertial navigation state are (Farrell, Reference Farrell1998):

(24)
\begin{bmatrix} \dot{n} \\ \dot{e} \\ \dot{v}_n \\ \dot{v}_e \\ \dot{\psi} \end{bmatrix} = \begin{bmatrix} v_n \\ v_e \\ a_n \\ a_e \\ \omega_r \end{bmatrix} = \begin{bmatrix} v_n \\ v_e \\ \cos(\psi) a_u - \sin(\psi) a_v \\ \sin(\psi) a_u + \cos(\psi) a_v \\ \omega_r \end{bmatrix}

where [a u, a v] are the measured accelerations in the body frame, ωr is the measured yaw rate in the body frame, as shown in Figure 4. The error model for INS is augmented by some sensor error states such as accelerometer biases and gyroscope drifts. Actually, there are several random errors associated with each inertial sensor. It is usually difficult to set a certain stochastic model for each inertial sensor that works efficiently in all environments and reflects the long-term behaviour of sensor errors. The difficulty of modelling the errors of INS raised the need for a model-less GPS/INS integration technique. Linearization of Equation (24) results in the following set of linearized equations:

(25)
\begin{bmatrix} \delta\dot{n} \\ \delta\dot{e} \\ \delta\dot{v}_n \\ \delta\dot{v}_e \\ \delta\dot{\psi} \end{bmatrix} = \begin{bmatrix} 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} \delta n \\ \delta e \\ \delta v_n \\ \delta v_e \\ \delta\psi \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ w_n \\ w_e \\ w_\psi \end{bmatrix}

which will be utilized in the integration Kalman filter as the inertial error model. In Equation (25), δn and δe represent the north and east position errors; δv_n and δv_e represent the north and east velocity errors; and δψ represents the yaw angle error. The state transition matrix for the model can be found to be:

\boldsymbol{\Phi}_k = \begin{bmatrix} 1 & 0 & \Delta t & 0 & 0 \\ 0 & 1 & 0 & \Delta t & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}

The measurement model governed by (1b) leads to:

(26)
\begin{bmatrix} \delta n \\ \delta e \end{bmatrix} = \begin{bmatrix} n_{INS} \\ e_{INS} \end{bmatrix} - \begin{bmatrix} n_{GPS} \\ e_{GPS} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} \delta n \\ \delta e \\ \delta v_n \\ \delta v_e \\ \delta\psi \end{bmatrix} + \begin{bmatrix} v_n \\ v_e \end{bmatrix}
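For illustration, the discrete-time matrices implied by the five-state error model of Equations (25)–(26), with state ordering [δn, δe, δv_n, δv_e, δψ], can be built as in the following sketch (the function name is ours):

```python
import numpy as np

def transition_matrix(dt):
    """State transition matrix for the 5-state INS error model
    (position, velocity and yaw errors), i.e. the discretization of Eq. (25)."""
    Phi = np.eye(5)
    Phi[0, 2] = dt  # position error integrates velocity error (north)
    Phi[1, 3] = dt  # position error integrates velocity error (east)
    return Phi

# Equation (26): the GPS/INS position differences observe the first two states.
H = np.zeros((2, 5))
H[0, 0] = 1.0
H[1, 1] = 1.0
```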

Figure 4. Two-dimensional inertial navigation (Farrell, 1998).

An experiment was conducted on a simulated vehicle trajectory originating from the (0, 0) m location. The simulated outputs for the accelerometers and gyroscope are shown in Figure 5. The trajectory of the vehicle can be approximately divided into two categories according to the dynamic characteristics. The vehicle was simulated to conduct constant-velocity straight-line motion during the three time intervals 0–300, 901–1200 and 1501–1800 s, all at a speed of 10π m/s. Furthermore, it conducted counter-clockwise circular motion with a radius of 3000 metres during 301–900 and 1201–1500 s, where high-dynamic manoeuvring is involved. The following parameters were used: window sizes N_P=15 and N_R=20; the values of noise standard deviation are 1e-3 m/s² for the accelerometers and gyroscopes.

Figure 5. The simulated outputs for the accelerometers and gyroscope.

The trajectory for the simulated vehicle (solid) and the unaided INS derived position (dashed) is shown in Figure 6 and Figure 7 shows the east and north components of INS navigation errors. Figure 8 provides the positioning solution from the integrated navigation system (without adaptation) as compared to the GPS navigation solutions by the LS approach, while Figure 9 gives the positioning results for the integrated navigation system with and without adaptation. Substantial improvement in navigation accuracy can be obtained.

Figure 6. Trajectory for the simulated vehicle (solid) and the INS derived position (dashed).

Figure 7. East and north components of INS navigation errors.

Figure 8. The solution from the integrated navigation system without adaptation as compared to the GPS navigation solutions by the LS approach.

Figure 9. The solutions for the integrated navigation system with and without adaptation.

In the real world, the measurement noise will normally be changing in addition to the change of process noise or dynamics, such as during manoeuvring. In such cases, both the P-adaptation and R-adaptation tasks need to be implemented. In the following discussion, results will be provided for the case when the measurement noise strength is changing in addition to the change of process noise strength. The measurement noise strength is assumed to be changing with variances of the values r = 4² → 16² → 8² → 3², where the arrows (→) indicate the time-varying trajectory of the measurement noise statistics. That is, it is assumed that the measurement noise strength is changing during the four time intervals: 0–450 s (N(0, 4²)), 451–900 s (N(0, 16²)), 901–1350 s (N(0, 8²)) and 1351–1800 s (N(0, 3²)). However, the internal measurement noise covariance matrix Rk is left unchanged throughout the simulation, which uses r_j ~ N(0, 3²), j = 1, 2, …, n, for all time intervals.
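The piecewise-constant noise schedule described above can be sketched as a simple function of time; the interval boundaries and values are taken from the scenario, while the function name is ours:

```python
def measurement_noise_std(t):
    """Measurement noise standard deviation for the Section 4 scenario:
    4 -> 16 -> 8 -> 3 over the four intervals (seconds)."""
    if t <= 450:
        return 4.0
    if t <= 900:
        return 16.0
    if t <= 1350:
        return 8.0
    return 3.0
```

In a simulation, each noisy measurement sample would be drawn with this time-dependent standard deviation, while the filter's internal Rk stays fixed at the 3² level, so only the R-adaptation loop can recover the mismatch.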

Figure 10 shows the east and north components of navigation errors and the 1-σ bound based on the method without adaptation on measurement noise covariance matrix. It can be seen that the adaptation of P information without correct R information (referred to as partial adaptation herein) seriously deteriorates the estimation result. Figure 11 provides the east and north components of navigation errors and the 1-σ bound based on the proposed method (referred to as full adaptation herein, i.e., adaptation on both estimation covariance and measurement noise covariance matrices are applied). It can be seen that the estimation accuracy has been substantially improved. The measurement noise strength has been accurately estimated, as shown in Figure 12.

Figure 10. East and north components of navigation errors and the 1-σ bound based on the method without measurement noise adaptation.

Figure 11. East and north components of navigation errors and the 1-σ bound based on the proposed method (with adaptation on both estimation covariance and measurement noise covariance matrices).

Figure 12. Reference (true) and calculated standard deviations for the east (top) and north (bottom) components of the measurement noise variance values.

It should also be mentioned that the requirement (λ_P)_{ii} ⩾ 1 is critical. An illustrative example is given in Figures 13 and 14. Figure 13 gives the navigation errors and the 1-σ bound when the threshold setting is not incorporated. The corresponding reference (true) and calculated standard deviations when the threshold setting is not incorporated are provided in Figure 14. It is not surprising that the navigation accuracy is seriously degraded owing to the inaccurate estimation of the measurement noise statistics.

Figure 13. East and north components of navigation errors and the 1-σ bound based on the proposed method when the threshold setting is not incorporated.

Figure 14. Reference (true) and calculated standard deviations for the east and north components of the measurement noise variance values when the threshold setting is not incorporated.

5. CONCLUSION

This paper has proposed a new adaptive Kalman filter strategy and provided an illustrative example of its application to integrated navigation. The conventional KF approach is coupled with the adaptive tuning system (ATS), which provides two system parameters: the fading factor and the measurement noise covariance scaling factor. By monitoring the innovation information, the ATS serves as a mechanism for timely detection of dynamical and environmental changes and for on-line parameter tuning, so as to maintain good tracking capability and estimation accuracy. Unlike some conventional AKF methods, the proposed method has the merits of good computational efficiency and numerical stability, and the matrices in the KF loop remain positive definite. Remarks to be noted when using the method are:

  • The window sizes can be set to different values to avoid filter degradation/divergence;

  • The fading factors (λP)ii should always be no smaller than one, while the (λR)ii are not subject to such a limitation.
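One way the two ATS outputs can enter the KF loop while keeping the covariance matrices symmetric positive definite is symmetric diagonal scaling. The sketch below is an assumed realisation for illustration, not the paper's exact update:

```python
import numpy as np

def apply_ats(P_pred, R, lam_P, lam_R):
    """Apply fading and R-scaling factors by symmetric congruence.

    Scaling a covariance as L C L^T with diagonal L = diag(sqrt(lam))
    preserves symmetry and positive definiteness, consistent with the
    remark that the matrices in the KF loop remain positive definite.
    """
    LP = np.diag(np.sqrt(np.asarray(lam_P, dtype=float)))
    LR = np.diag(np.sqrt(np.asarray(lam_R, dtype=float)))
    return LP @ P_pred @ LP, LR @ R @ LR   # scaled P and R
```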

Simulation experiments on navigation sensor fusion have been provided to illustrate the effectiveness of the approach. The proposed AKF method has demonstrated substantial improvement in both navigation accuracy and tracking capability.

ACKNOWLEDGEMENTS

This work has been supported in part by the National Science Council of the Republic of China under grant no. NSC 96-2221-E-019-007.

REFERENCES

Axelrad, P. and Brown, R. G. (1996). GPS navigation algorithms, in Parkinson, B. W., Spilker, J. J., Axelrad, P. and Enge, P. (eds), Global Positioning System: Theory and Applications, Volume I, AIAA, Washington DC, Chap. 9.
Brown, R. and Hwang, P. (1997). Introduction to Random Signals and Applied Kalman Filtering. John Wiley & Sons, New York.
Ding, W., Wang, J. and Rizos, C. (2007). Improving Adaptive Kalman Estimation in GPS/INS Integration. The Journal of Navigation, 60, 517-529.
El-Mowafy, A. and Mohamed, A. (2005). Attitude Determination from GNSS Using Adaptive Kalman Filtering. The Journal of Navigation, 58, 135-148.
Farrell, J. (1998). The Global Positioning System and Inertial Navigation. McGraw-Hill Professional.
Gelb, A. (1974). Applied Optimal Estimation. MIT Press, Cambridge, MA.
Hide, C., Moore, T. and Smith, M. (2003). Adaptive Kalman Filtering for Low Cost INS/GPS. The Journal of Navigation, 56, 143-152.
Mehra, R. K. (1970). On the identification of variances and adaptive Kalman filtering. IEEE Transactions on Automatic Control, AC-15, 175-184.
Mehra, R. K. (1971). On-line identification of linear dynamic systems with applications to Kalman filtering. IEEE Transactions on Automatic Control, AC-16, 12-21.
Mehra, R. K. (1972). Approaches to adaptive filtering. IEEE Transactions on Automatic Control, AC-17, 693-698.
Mohamed, A. H. and Schwarz, K. P. (1999). Adaptive Kalman filtering for INS/GPS. Journal of Geodesy, 73, 193-203.
Salychev, O. (1998). Inertial Systems in Navigation and Geophysics. Bauman MSTU Press, Moscow.
Xia, Q., Rao, M., Ying, Y. and Shen, X. (1994). Adaptive fading Kalman filter with an application. Automatica, 30, 1333-1338.
Figure 1. Flow chart of the proposed AKF method – System architecture #1.

Figure 2. Flow chart of the proposed AKF method – System architecture #2.

Figure 3. GPS/INS navigation processing using the proposed AKF.

Figure 4. Two-dimensional inertial navigation (Farrell, 1998).

Figure 5. The simulated outputs for the accelerometers and gyroscope.

Figure 6. Trajectory for the simulated vehicle (solid) and the INS derived position (dashed).

Figure 7. East and north components of INS navigation errors.

Figure 8. The solution from the integrated navigation system without adaptation as compared to the GPS navigation solutions by the LS approach.

Figure 9. The solutions for the integrated navigation system with and without adaptation.