1. INTRODUCTION
The objective of an onboard navigation system is to estimate, as precisely as possible, the components of a vehicle's kinematic state (attitude, position and velocity vectors) from all the measurements available on board. For several decades, real-time estimation of a vehicle's kinematic state has been implemented through embedded navigation systems based on numerical filters fusing real-time data coming from a variety of instruments, including a core strap-down inertial measurement unit.
Compared with earlier purely inertial navigation systems, multi-sensor integrated navigation systems allow for significant improvements in robustness, reliability, redundancy, cost and, of course, performance, since the aggregation of independent measurements correlated with the state tends to reduce the covariance of its estimate. However, adding instrumentation is never free: it increases uncertainties such as measurement noise and modelling errors and, of course, raises costs and computational requirements.
While evaluating the performance of an integrated navigation system, one may find that certain state components do not converge or that their covariances remain large. Once errors in data acquisition, in the time-labelling or in the embedded software have been ruled out, it is natural to seek the causes in the quality of the sensors, but also in the worth of the available information itself.
Sensor measurements are typically modelled by memory-less transduction functions characterised by a set of parameters and additive disturbances, usually uncorrelated noise. Nominal values of the sensors' parameters may be known in advance from the instrument data sheet provided by the manufacturer or else obtained through previous calibration procedures. Once the sensor's model structure is fixed, uncertainties are presumed concentrated in those parameters, which multi-sensor fusion filters consider as part of an Augmented State Vector (ASV). However, sensor models are necessarily approximate; in particular, low-performance sensors seldom obey a consistent mathematical model with stable parameters. Fusion filters deal with parameter instabilities by assuming they are perturbed by Brownian motions. The parametric components of the ASV are thus assumed permanently disturbed - the more so when low-cost instruments are employed.
For a given instrument configuration, the measurements and the ASV become correlated through the kinematics differential equations of the vehicle's motion and the sensor models. To estimate the ASV, fusion filters (for example, Extended (EKF), Linear (LKF) and Unscented (UKF) Kalman Filters) make use of that correlation to produce a state estimate with its covariance matrix. The trace of the covariance matrix is often seen as a measure of the overall state estimate inaccuracy. However, for a high-dimensional ASV the correlation among its components hampers discerning which navigation variables may be better estimated (or better “sensed”) with the available instrumentation. Ham and Brown (1983) pointed out the relevance of the eigen-structure of the covariance matrix in addressing this issue. Further developing this idea, we here start from the singular value decomposition of the error covariance matrix to define a “practically sensable” subspace and its orthogonal complement, the “practically non-sensable” one. Based on this decomposition, a new instantaneous measure of the “sensability” of any linear combination of the ASV components is proposed as its orthogonal projection over the practically sensable subspace.
Deterministic “observability” (Willems and Mitter, 1971) designates the property and/or the conditions under which the state at a particular time may be determined from future measurable inputs and outputs. The non-linear nature of the kinematics equations makes observability in navigation systems strongly dependent on the vehicle's trajectory. Hence, besides measurement errors, sensors' parameter instabilities and uncertain initial conditions, the quality of the estimates depends on the actual vehicle's manoeuvres over different trajectory stretches. This motivated numerous analyses performed using ideal trajectories (lemniscates, circles, uniform acceleration, etc.) so as to allow for a closed-form analytical treatment of the problem (Bageshwar et al. (2009), Rhee et al. (2004) and Hoo et al. (2005) consider Inertial Navigation System (INS)/Global Positioning System (GPS) navigation systems). However, real vehicles' trajectories may rarely be analytically described or even predetermined. Shen et al. (2018) proposed a method (based on a Kalman scalar estimator algorithm pioneered by Salychev (1998)) for quantifying the observability of each state component over a given trajectory. Unfortunately, their approach requires the local Gramian observability matrix (denoted $\mathbf{O}_\tau$ in the text) to be well conditioned over the considered trajectory.
A new “excitability measure” is proposed here to quantify the information, relative to the initial state, conveyed by the innovations during any given trajectory stretch. This is done based on a synthetic trajectory (Giribet et al., 2007) determined from actual flight data. Linearised models of state deviations and innovations are determined by referring to this synthetic trajectory. Once more, a principal component analysis, this time of the deterministic innovation signals over any time interval, allows us to decompose the initial ASV state space (of any given trajectory stretch) into a Practically Observable Subspace (POS) and its orthogonal complement: the “practically un-observable subspace”. The new measure of the excitability of any vector (physical state components included) of the initial state space is defined as the norm of its orthogonal projection onto the POS. Notice that, contrary to other approaches, the excitability measure empirically qualifies any given vehicle's trajectory stretch; that is, it is not the result of an ad hoc analysis of a pre-assumed manoeuvre (as, for instance, in Tang et al., 2009).
The paper is organised as follows: Section 2 presents the mathematical formulation and main notation used in the paper; practical sensability and practical observability subspaces for the ASV are introduced, respectively, in Sections 3 and 4, where the sensability and excitability metrics are established.
In Section 5, these concepts are used to assess navigation performance results with actual flight data from CONAE's (National Commission of Space Activities: Argentina's Space Agency) first experimental navigation payload on a suborbital rocket. Finally, Section 6 concludes the paper.
2. MATHEMATICAL FORMULATION
A causal non-linear filter reconstructs the present state given past measurements. An integrated navigation algorithm is a particular case of non-linear filtering based on the kinematics equations which, when using the Earth-Centred-Earth-Fixed (ECEF) frame as the reference frame, adopt the following form (see, for example, España (2017)):
where: $\mathbf{P}^e$, $\mathbf{V}^e$ are the vehicle's position and velocity in the ECEF frame; $\boldsymbol{\theta}_{be}$ and $\mathbf{C}_b^e$ are, respectively, the vector angle and the rotation matrix relating the ECEF frame and the body frame; the specific force $\mathbf{f}^b$ and the angular rate $\boldsymbol{\omega}_{ib}^b$, both projected onto the body frame, are gathered in the inertial magnitude vector $\boldsymbol{\mu} \triangleq [\mathbf{f}^{b\prime}\ \boldsymbol{\omega}_{ib}^{b\prime}]^{\prime}$ ((′) stands for transpose); $\boldsymbol{\gamma}^e(\mathbf{P}^e)$ is the local gravity as a function of the vehicle's position, both in Earth Cartesian coordinates, and $\boldsymbol{\Omega}^e$ is the Earth angular rate. By using the Inertial Measurement Unit (IMU) inverse model and assuming additive disturbances $\boldsymbol{\xi}_\mu$ (España, 2017), $\boldsymbol{\mu}$ may be expressed as a function of its measured value $\overset{\frown}{\boldsymbol{\mu}}$ as:
The inertial as well as the exteroceptive instruments' models are characterised by a vector of unknown parameters $\mathbf{p} \triangleq [\mathbf{p}_i^{\prime}\ \mathbf{p}_e^{\prime}]^{\prime}$ with assumed known expected values. The kinematics state vector $\mathbf{x} \triangleq [\mathbf{P}^{e\prime}\ \mathbf{V}^{e\prime}\ \boldsymbol{\theta}_{be}^{\prime}]^{\prime}$ and the instrument parameters vector $\mathbf{p}$ are compounded in the ASV defined as $\boldsymbol{\chi} \triangleq [\mathbf{x}^{\prime}\ \mathbf{p}^{\prime}]^{\prime} \in \mathbb{R}^n$. When $\mathbf{p}$ is modelled as a Brownian motion (Farrell, 2008), $\boldsymbol{\chi}$ may be modelled by the following stochastic equation, affine in the perturbations $\boldsymbol{\xi}(t)$:
where $\mathbf{h}_k(\cdot)$ models the active exteroceptive instruments producing the measurement vector $\overset{\frown}{\mathbf{y}}_k$ at time $t_k$; $\boldsymbol{\eta}_k$ is a discrete-time white disturbance with known diagonal covariance matrix $\mathbf{R}_k$; the noise $\boldsymbol{\xi}_p$ models the sensors' parameter instabilities; $\boldsymbol{\xi}_\mu$ and $\boldsymbol{\xi}_p$ are continuous white noises with assumed known Power Spectral Densities (PSD), respectively $\mathbf{S}_\mu$, $\mathbf{S}_p$.
For $t_0$ the starting time of an arbitrary trajectory stretch, the initial state $\boldsymbol{\chi}(t_0)$ in Equation (3) is a random vector with the first two moments $\hat{\boldsymbol{\chi}}_0 = E\{\boldsymbol{\chi}(t_0)\}$, $\mathbf{P}_0 = \mathrm{cov}(\boldsymbol{\chi}(t_0))$ provided by the navigation algorithm being assessed. When $t_0$ is the initial navigation time, $\hat{\boldsymbol{\chi}}_0$, $\mathbf{P}_0$ reflect the vehicle's alignment procedure and $\mathbf{P}_0$ is normally assumed diagonal.
Given a reference solution of the non-linear model Equation (3), denoted as $\boldsymbol{\chi}_r(t)$, we introduce the state deviations $\delta\boldsymbol{\chi}(t) \triangleq \boldsymbol{\chi}(t) - \boldsymbol{\chi}_r(t)$ and the innovations $\delta\mathbf{y}_k \triangleq \overset{\frown}{\mathbf{y}}_k - \mathbf{h}_k(\boldsymbol{\chi}_r(t_k))$, which, for smooth enough $\mathcal{F}$, $\mathbf{h}_k$ and small enough perturbations and initial state errors, may be modelled by the following time-varying linear equations:
where $\mathbf{A}(\cdot)$ and $\mathbf{H}_k(\cdot)$ are the matrices of partial derivatives with respect to $\boldsymbol{\chi}$, respectively, of $\mathcal{F}(\boldsymbol{\chi}; t)$ and $\mathbf{h}_k(\boldsymbol{\chi})$ in model Equation (3), evaluated along the reference trajectory.
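When closed-form expressions for the Jacobians $\mathbf{A}(\cdot)$ and $\mathbf{H}_k(\cdot)$ are unavailable, they may be approximated numerically along the reference trajectory. The sketch below is an illustrative aid, not part of the original formulation; the function name and step size are assumptions:

```python
import numpy as np

def jacobian(f, x, h=1e-6):
    """Central-difference Jacobian of f at x: a generic stand-in for
    evaluating A(.) or H_k(.) along the reference trajectory."""
    x = np.asarray(x, dtype=float)
    fx = np.atleast_1d(f(x))
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = h
        # central difference: exact for quadratic maps, O(h^2) otherwise
        J[:, j] = (np.atleast_1d(f(x + dx)) - np.atleast_1d(f(x - dx))) / (2 * h)
    return J
```

Central differences are exact for quadratic nonlinearities and second-order accurate in general, which is usually adequate for linearising smooth kinematics models.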
When restricted to the kinematic state, from Equations (1), the deviation equations are given by (see, for example, España (2017), Equation 6.22):
where: $\mathbf{S}: \mathbb{R}^3 \rightarrow \mathbb{R}^{3\times 3}$ is the vector product matrix operator; $\boldsymbol{\gamma}_P^e$ is the Jacobian of the normal gravity with respect to $\mathbf{P}^e$; $\delta\mathbf{P}^e$, $\delta\mathbf{V}^e$, $\delta\boldsymbol{\theta}_{be}$ are the components of the kinematics state deviations vector and $\delta\boldsymbol{\mu}$ the IMU's measurement deviation vector, with $\mathbf{B}_{p_i}(\overset{\frown}{\boldsymbol{\mu}})$ the Jacobian of $\mathcal{M}$ (in Equation (2)) with respect to $\mathbf{p}_i$.
The solution to the linear time-varying differential Equation (4) in the interval $[t_k, t_{k+1}]$ between two consecutive exteroceptive measurement acquisition times is given by (Farrell, 2008):
where $\boldsymbol{\Phi}(\cdot,\cdot) \in \mathbb{R}^{n\times n}$ is the state transition matrix of the time-varying linear system Equation (4) and $\boldsymbol{\varepsilon}_k$ is a centred uncorrelated random sequence such that:
The positive definite covariance matrix $\mathbf{Q}_k$ is obtained by introducing the power spectral density (Starks and Woods, 1994) $\mathbf{S} = \mathrm{diag}(\mathbf{S}_\mu, \mathbf{S}_p)$ in the integral expression of $\boldsymbol{\varepsilon}_k$ given in Equation (6) (see España (2017) for details), and $\delta_{jk}$ stands for the Kronecker delta. Denoting the non-singular matrix $\mathbf{F}_k = \boldsymbol{\Phi}(t_{k+1}, t_k)$ and $\delta\boldsymbol{\chi}_k = \delta\boldsymbol{\chi}(t_k)$, with Equation (6) the discrete-time models for the ASV and the innovation are rewritten as:
The EKF refers the state deviations to the a priori estimate $\hat{\boldsymbol{\chi}}_{k+1}^-$ calculated at the end of the $k$-th interval by numerical integration of model Equation (3) after substitution of all random variables by their expected values, that is: $\boldsymbol{\xi} \rightarrow 0$; $\boldsymbol{\chi}(t_k) \rightarrow \hat{\boldsymbol{\chi}}_k$, the latter being the previous a posteriori estimate obtained at the beginning of the $k$-th interval. This procedure, first introduced by Jazwinski (1970), is known as the Certainty Equivalence Principle. Once the next measurement becomes available, the a posteriori estimate $\hat{\boldsymbol{\chi}}_{k+1}$ is obtained by updating the prediction $\hat{\boldsymbol{\chi}}_{k+1}^-$ with the innovation $\overset{\frown}{\mathbf{y}}_{k+1} - \mathbf{h}_k(\hat{\boldsymbol{\chi}}_{k+1}^-)$ multiplied by the Kalman gain calculated as (Anderson and Moore, 1979):
where $\mathbf{P}_k^- = E(\boldsymbol{\chi}_k - \hat{\boldsymbol{\chi}}_k^-)(\boldsymbol{\chi}_k - \hat{\boldsymbol{\chi}}_k^-)^T$ and $\mathbf{P}_k = E(\boldsymbol{\chi}_k - \hat{\boldsymbol{\chi}}_k)(\boldsymbol{\chi}_k - \hat{\boldsymbol{\chi}}_k)^T$ are, respectively, the a priori and the a posteriori covariances of the state error modelled by Equation (8). We introduce the a priori and a posteriori information matrices: $\mathcal{I}_k^- \triangleq (\mathbf{P}_k^-)^{-1}$, $\mathcal{I}_k \triangleq \mathbf{P}_k^{-1}$. As shown in Anderson and Moore (1979) or Jazwinski (1970), both matrices may be determined with the following recursive equations starting from $\mathcal{I}_0^{-1} = \mathbf{P}_0 > 0$.
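One cycle of the information recursion of Equations (10) can be sketched as follows; the function and variable names are illustrative assumptions, with $\mathbf{F}_k$, $\mathbf{Q}_k$, $\mathbf{H}_k$, $\mathbf{R}_k$ as in the text:

```python
import numpy as np

def information_recursion(I_post, F, Q, H, R):
    """One propagation/update cycle of the information matrices.

    Propagation: I_{k+1}^- = (F_k P_k F_k' + Q_k)^{-1}   (disturbability: Q_k grows uncertainty)
    Update:      I_{k+1}   = I_{k+1}^- + H' R^{-1} H      (estimability: measurement adds information)
    """
    P_post = np.linalg.inv(I_post)                       # P_k = I_k^{-1}
    I_prior = np.linalg.inv(F @ P_post @ F.T + Q)        # a priori information
    I_post_next = I_prior + H.T @ np.linalg.inv(R) @ H   # a posteriori information
    return I_prior, I_post_next
```

The update term shows directly why the information matrix can only grow at a measurement: $\mathbf{H}^T\mathbf{R}^{-1}\mathbf{H}$ is non-negative definite.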
From Equations (10), the current state accuracy at any time $t_k$ is the consequence of two opposed effects. The first one, called “disturbability” by Bryson (1978), concerns the growth of the state uncertainty due to the disturbance noise covariance matrix $\mathbf{Q}_k$ (Equation (10a)). The second one, called “estimability” by Baram and Kailath (1988), regards the growth of the information matrix produced by the new measurement acquired at time $t_{k+1}$ (Equation (10b)). Both effects depend on the Jacobians $\mathbf{A}(\cdot)$, $\mathbf{H}(\cdot)$ and $\mathbf{B}(\cdot)$ evaluated over the reference trajectory. As such, the accuracy of the ASV estimate depends on: (a) the performance of the onboard instruments (through the inertial sensors' PSD $\mathbf{S}$ and the external measurement noise $\mathbf{R}_k$); (b) the initial state error covariance $\mathbf{P}_0$; and (c) the vehicle's trajectory itself (Giribet et al., 2018).
Since the components of $\delta\boldsymbol{\chi}_k$, $\delta\mathbf{y}_k$ and those of the corresponding driving noises are expressed in different physical units, their magnitudes are not directly comparable. Thus, for $\mathbf{D}_0 \triangleq \mathrm{diag}(\mathbf{P}_0)$ and $\mathbf{R}_0$ assumed diagonal, we proceed to normalise (“de-dimensionalise”) model Equation (8) through the following change of variables (Krener and Kayo, 2009):
With the variable change Equation (11), the matrices in Equation (8) transform as: $\mathbf{F}_k \rightarrow \mathbf{D}_0^{-1/2}\mathbf{F}_k\mathbf{D}_0^{1/2}$; $\mathbf{H}_k \rightarrow \mathbf{R}_0^{-1/2}\mathbf{H}_k\mathbf{D}_0^{1/2}$. Thus, the normalised state deviations' covariance transforms as $\mathbf{P}_k \rightarrow \mathbf{D}_0^{-1/2}\mathbf{P}_k\mathbf{D}_0^{-1/2}$, $\forall k$, while the output additive error noise covariance transforms as $\mathbf{R}_k \rightarrow \mathbf{R}_0^{-1/2}\mathbf{R}_k\mathbf{R}_0^{-1/2}$, $\forall k$. Moreover, with $\mathbf{Q}_k \rightarrow \mathbf{D}_0^{-1/2}\mathbf{Q}_k\mathbf{D}_0^{-1/2}$, a simple substitution shows that the new normalised matrix $\mathbf{P}_k$ still satisfies the recursive Equations (10). We call $\mathcal{E} \triangleq \{\mathbf{e}^i: i = 1, 2, \ldots, n\}$ the canonical base in which the physical but dimensionless state is represented after the transformation Equation (11).
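The de-dimensionalisation above may be sketched as follows, assuming (as in the text) that $\mathbf{D}_0 = \mathrm{diag}(\mathbf{P}_0)$ and $\mathbf{R}_0$ are diagonal; the helper name is an assumption:

```python
import numpy as np

def normalise_model(F, H, P0, R0):
    """Scale the deviation model following the change of variables of Equation (11):
    F -> D0^{-1/2} F D0^{1/2},  H -> R0^{-1/2} H D0^{1/2},  with D0 = diag(P0)."""
    D0_sqrt = np.diag(np.sqrt(np.diag(P0)))          # D0^{1/2}
    D0_isqrt = np.diag(1.0 / np.sqrt(np.diag(P0)))   # D0^{-1/2}
    R0_isqrt = np.diag(1.0 / np.sqrt(np.diag(R0)))   # R0^{-1/2}, R0 assumed diagonal
    Fn = D0_isqrt @ F @ D0_sqrt
    Hn = R0_isqrt @ H @ D0_sqrt
    return Fn, Hn
```

After this scaling, every component of the state and of the innovation is dimensionless, so eigenvalue comparisons across components (next section) become meaningful.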
$\mathcal{I}_k$, a real symmetric positive definite matrix for any $k$, may be decomposed into a diagonal form $\mathcal{I}_k = \mathbf{V}_k\boldsymbol{\Lambda}_k\mathbf{V}_k^T$, where $\boldsymbol{\Lambda}_k = \mathrm{diag}(\lambda_k^1, \lambda_k^2, \ldots, \lambda_k^n)$ is built with the ordered set of $\mathcal{I}_k$'s eigenvalues $\{\lambda_k^1 \geq \lambda_k^2 \geq \cdots \geq \lambda_k^n > 0\}$ and $\mathbf{V}_k = [\mathbf{v}_k^1\ \vert\ \mathbf{v}_k^2\ \vert\ \ldots\ \vert\ \mathbf{v}_k^n] \in \mathbb{R}^{n\times n}$ is the matrix whose columns are $\mathcal{I}_k$'s unit orthogonal right eigenvectors. In what follows, it is assumed that there exist $0 < \underline{p} < \bar{p} < \infty$ such that for all $t_k$: $\underline{p} < \lambda_k^i < \bar{p}$, $i = 1, \ldots, n$. This ensures that there is no numerical divergence in the recursion Equations (10) and also that there are no purely deterministic directions in the state space (see the next section).
3. RELATIVE SENSABILITY: SUBSPACES AND METRICS
Assuming the $\mathbf{v}_k^i$ are expressed in the canonical base $\mathcal{E}$ in which the normalised augmented physical state model Equation (8) is represented, we introduce the transformation $\delta\boldsymbol{\chi}_k \rightarrow \delta\mathbf{z}_k$ in $\mathbb{R}^n$ defined as:
The component $\delta z_k^i$ of $\delta\mathbf{z}_k$ is the projection of the state deviation vector $\delta\boldsymbol{\chi}_k$ (“physical” normalised coordinates) over the $i$-th eigenvector of $\mathcal{I}_k$. At each instant $t_k$ the covariance matrix of the transformed vector is:
From Equations (12) and (13) the transformed state deviation components are orthogonal random variables satisfying:
Each standard deviation $\sqrt{1/\lambda_k^i}$ being the state uncertainty along the eigen-direction $\mathbf{v}_k^i$, $\sqrt{\mathrm{tr}(\mathbf{P}_k)}$ is a measure of the instantaneous “total state uncertainty”. Symmetrically, $\sqrt{\mathrm{tr}(\mathcal{I}_k)}$ is seen as a measure of the “total state accuracy”. Given the ordering of the $\lambda_k^i$'s, the state space direction best estimated (least variance) is the first principal component (pc) $\mathbf{v}_k^1$ of the information matrix $\mathcal{I}_k$, with $\mathrm{Cov}(\delta z_k^1) = 1/\lambda_k^1$; the next best estimated is $\mathbf{v}_k^2$, and so on until $\mathbf{v}_k^n$.
We now consider the sets of sequentially embedded subspaces $\mathcal{V}_k^i \triangleq \mathrm{span}\{\mathbf{v}_k^1, \mathbf{v}_k^2, \ldots, \mathbf{v}_k^i\}$ ($\mathcal{V}_k^{i+1} \supset \mathcal{V}_k^i$) and introduce the following ratios, monotonically increasing with $i$ towards $r_k^n = 1$:
If, for a given $i \leq n$, $r_k^i \approx 1$, the subspace $\mathcal{V}_k^i$ contains the pcs that concentrate the major part of the total accuracy of the estimator. Given this, we introduce:
Definition 1: For a threshold $0 < \nu < 1$ empirically chosen close to 1, we denote by $nr$ the first index $i$ in Equation (15) such that $r_k^{nr} \geq \nu$, and call $S_k^S \equiv \mathcal{V}_k^{nr}$ the practically sensable (ps) subspace and its orthogonal complement $S_k^{NS} \equiv (S_k^S)^\perp$ the practically non-sensable (pns) subspace.
By definition, the random components of $\delta\mathbf{z}_k$ over $S_k^S$ are the ps ones. Since $\mathbb{R}^n = S_k^S \oplus S_k^{NS}$, at any $t_k$, each element $\mathbf{e}^i$ of the canonical (physical) base may be decomposed into its orthogonal projections $\pi_s(\mathbf{e}^i)_k$ and $\pi_{ns}(\mathbf{e}^i)_k$, respectively, over $S_k^S$ and $S_k^{NS}$ as:
Definition 2: $s_k^i \triangleq \Vert\pi_s(\mathbf{e}^i)_k\Vert$ is the instantaneous practical sensability measure of the ASV's $i$-th component.
Some practical considerations on the choice of the threshold $\nu$ are in order. It is assumed that at least one component of the ASV is ps (otherwise the instrumentation should be reviewed), so whenever the $\lambda_k^i$'s are similar ($\lambda_k^i \approx \bar{\lambda}_k$, $\forall i$) one expects all the ASV components to be ps (that is, $S_k^S = \mathcal{V}_k^n$). Now, since in this case $r_k^i \approx \sqrt{i/n}$, it must be that $\nu \in (\sqrt{1 - 1/n}, 1)$, or else one may have $r_k^{n-1} > \nu$, meaning that the last component is pns, contrary to the assumption. Thus, for a small enough $\varepsilon$ empirically chosen, we propose the dimensionally dependent choice $\nu = \sqrt{1 - 1/n + \varepsilon} < 1$.
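Definitions 1 and 2, together with the proposed threshold $\nu = \sqrt{1 - 1/n + \varepsilon}$, can be sketched numerically. Here the ratio $r_k^i$ is assumed to be the square root of the cumulative eigenvalue fraction of $\mathrm{tr}(\mathcal{I}_k)$, consistent with $r_k^i \approx \sqrt{i/n}$ for equal eigenvalues; the function name is illustrative:

```python
import numpy as np

def sensability(I_k, eps=0.01):
    """Practically sensable subspace (Def. 1) and per-component sensability (Def. 2).

    Assumes r_i = sqrt(sum of the i largest eigenvalues / trace) for Eq. (15)
    and nu = sqrt(1 - 1/n + eps) as proposed in the text.
    """
    n = I_k.shape[0]
    lam, V = np.linalg.eigh(I_k)          # eigh returns ascending order
    lam, V = lam[::-1], V[:, ::-1]        # reorder: lam_1 >= ... >= lam_n
    r = np.sqrt(np.cumsum(lam) / lam.sum())
    nu = np.sqrt(1.0 - 1.0 / n + eps)
    nr = int(np.argmax(r >= nu)) + 1      # first index with r >= nu
    Vs = V[:, :nr]                        # orthonormal basis of S^S
    s = np.linalg.norm(Vs, axis=1)        # s_i = ||pi_s(e^i)|| (row norms of Vs)
    return nr, s
```

When all eigenvalues are equal the whole state space is ps ($nr = n$) and every $s_k^i = 1$, matching the discussion above.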
Notice that the decomposition Equation (16) is generically time variant, which means that some eigen-directions may go in and out of the subspace $S_k^S$. As such, its associated measure may change with time depending on previous data. This motivates quantifying the available information embedded in a given vehicle's trajectory segment, which is the subject of the next section. Our point of view differs here once more from that of Tang et al. (2009) in the sense that we seek to study, segment by segment, the actual vehicle's trajectory.
4. OBSERVABILITY AND THE EXCITABILITY METRIC
Contrary to the sensability metric which is evaluated at each time t k, we now seek to assess the actual information conveyed by the innovations along a given time interval. The measure we look for is not intended to depend on the instrumentation performance but to meet the practical need of quantifying the impact of vehicles' manoeuvres on the estimability of the ASV, for a given instrument configuration.
The analysis is performed by first synthesising a deterministic trajectory with the corresponding measurement signals (inertial and exteroceptive). The method, proposed by Giribet et al. (2007), produces the smooth B-spline exactly satisfying the kinematics Equations (1) that best fits the flight data in a mean-square sense. The measurements are calculated with Equation (2) assuming perfect instruments, that is: $\boldsymbol{\xi}_\mu = 0$, $\boldsymbol{\xi}_p = 0$ $\forall t$, $\boldsymbol{\eta}_k = 0$ $\forall t_k$, and also $\boldsymbol{\varepsilon}_k = 0$ in Equation (6). The synthetic trajectories and measurements are then used as references to construct the linearised state deviation and innovation models in Equations (4), (5) and (8).
4.1. Observability analysis
Given the above assumptions, the innovations on the set of time instants $\tau = \{t_0, t_1, \ldots, t_N\}$ modelled by Equation (8) may be expressed as a linear function of the initial state deviation as:
The matrix $\mathbf{O}_\tau \in \mathbb{R}^{pN \times n}$ was called the extended observability matrix by Moore (1981). If a null linear combination existed among the $n$ columns of $\mathbf{O}_\tau$, a whole non-zero subspace of the initial deviation state-space would be mapped into the null innovation signal. Accordingly, the largest subspace of $\mathbb{R}^n$ contained in the kernel of $\mathbf{O}_\tau$ is the unobservable subspace of the system Equation (8) within the interval $\tau$. When the unobservable subspace is reduced to the origin, the system Equation (8) is said to be completely observable. Observability is thus a binary condition stating whether or not the innovations carry total information about the initial state deviations. Unfortunately, this does not give us a hint of which state components may be better determined and, more importantly, by how much.
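For a discrete-time pair $(\mathbf{F}_k, \mathbf{H}_k)$ as in Equation (8), the extended observability matrix stacks the blocks $\mathbf{H}_k\boldsymbol{\Phi}(t_k, t_0)$. A minimal sketch under that standard block structure (the function name is an assumption):

```python
import numpy as np

def extended_observability(F_list, H_list):
    """Stack O_tau = [H_0; H_1 F_0; H_2 F_1 F_0; ...] over the interval tau."""
    n = F_list[0].shape[0]
    Phi = np.eye(n)                    # Phi(t_0, t_0)
    blocks = [H_list[0] @ Phi]
    for F, H in zip(F_list, H_list[1:]):
        Phi = F @ Phi                  # Phi(t_{k+1}, t_0) = F_k Phi(t_k, t_0)
        blocks.append(H @ Phi)
    return np.vstack(blocks)
```

The rank of the stacked matrix then decides complete observability: a rank deficit means a non-trivial subspace of initial deviations is mapped into the null innovation signal.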
4.2. Principal components of the innovation signal
We now introduce the Observability Gramian for the interval $\tau$: $\mathbf{M}_\tau \triangleq \mathbf{O}_\tau^T\mathbf{O}_\tau \in \mathbb{R}^{n\times n}$. Being a symmetric non-negative definite matrix, there exists a set $B_\tau = \{\mathbf{u}_\tau^i \in \mathbb{R}^n: i = 1, 2, \ldots, n\}$ of unitary, mutually orthogonal eigenvectors associated with the ordered set $\{\sigma_\tau^1 \geq \sigma_\tau^2 \geq \cdots \geq \sigma_\tau^n\}$ of non-negative eigenvalues (all positive only if the system Equation (8) is completely observable, that is, $\mathbf{M}_\tau$ is non-singular). With the elements of $B_\tau$ expressed in coordinates of the canonical base, the orthogonal matrix $\mathbf{U}_\tau = [\mathbf{u}_\tau^1\ \vert\ \mathbf{u}_\tau^2\ \vert\ \ldots\ \vert\ \mathbf{u}_\tau^n] \in \mathbb{R}^{n\times n}$ leads to the similarity transformation $\mathbf{M}_\tau = \mathbf{U}_\tau\boldsymbol{\Sigma}_\tau\mathbf{U}_\tau^T$ for $\boldsymbol{\Sigma}_\tau \triangleq \mathrm{diag}\{\sigma_\tau^1, \sigma_\tau^2, \ldots, \sigma_\tau^n\}$. Using the fact that $\sum_i \mathbf{u}_\tau^i\mathbf{u}_\tau^{iT} = \mathbf{I}_n$, we write:
The set of responses to the initial state deviations $\mathbf{u}_\tau^i$, $\mathbf{o}_\tau^i \triangleq \mathbf{O}_\tau\mathbf{u}_\tau^i = \mathbf{Y}_\tau(\mathbf{u}_\tau^i)$, $i = 1, \ldots, n$, are called the innovation signal principal components (ispc) in $\tau$. As can be shown, they are mutually orthogonal (that is, $\mathbf{o}_\tau^{iT}\mathbf{o}_\tau^j = \sigma_\tau^j\delta_{ij}$) and such that $\Vert\mathbf{o}_\tau^i\Vert^2 = \sigma_\tau^i$. Moreover, for any initial state $\delta\boldsymbol{\chi}_0 = \sum_i \alpha_i\mathbf{u}_\tau^i$ represented in $B_\tau$ coordinates, the innovation signal is expressed by the same linear combination of the ispcs, that is: $\mathbf{Y}_\tau(\delta\boldsymbol{\chi}_0) = \sum_i \alpha_i\mathbf{o}_\tau^i$. On the other hand, for a given innovation signal over an interval $\tau$, the components on $B_\tau$ of the initial state may be recovered through $\alpha_i = (\mathbf{Y}_\tau^T\mathbf{o}_\tau^i)/\sigma_\tau^i$, provided that $\sigma_\tau^i \neq 0$. This just restates the fact that an unobservable subspace exists in $\tau$ whenever $\sigma_\tau^n = 0$.
By using the orthogonality of the $\mathbf{o}_\tau^i$'s, one shows that the total energy of the output signal over $\tau$, $E_\tau(\delta\boldsymbol{\chi}_0) \triangleq \Vert\mathbf{Y}_\tau(\delta\boldsymbol{\chi}_0)\Vert^2 = \sum_i \alpha_i^2\Vert\mathbf{o}_\tau^i\Vert^2$, may be decomposed into a linear combination of the individual ispc energies $E_\tau(\mathbf{u}_\tau^i) = \Vert\mathbf{o}_\tau^i\Vert^2 = \sigma_\tau^i$, with, as coefficients, the squares of the initial state components in $B_\tau$. Clearly, the combined energy of all the ispcs within $\tau$ is $\mathrm{tr}(\mathbf{M}_\tau)$.
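The Gramian eigen-structure and the ispcs can be computed directly from $\mathbf{O}_\tau$; the sketch below (illustrative function name) also makes their mutual orthogonality and energies $\sigma_\tau^i$ explicit:

```python
import numpy as np

def innovation_principal_components(O_tau):
    """Eigen-structure of the Gramian M_tau = O' O and the ispcs o^i = O u^i."""
    M = O_tau.T @ O_tau
    sig, U = np.linalg.eigh(M)            # ascending order
    sig, U = sig[::-1], U[:, ::-1]        # sigma_1 >= ... >= sigma_n
    ispc = O_tau @ U                      # column i is the ispc o_tau^i
    return sig, U, ispc
```

By construction `ispc.T @ ispc` equals $\mathrm{diag}(\sigma_\tau^1, \ldots, \sigma_\tau^n)$, which is exactly the orthogonality and energy property stated above.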
4.3. An excitability metric
We now consider the mapping sending any point of the (unit) $n$-hypersphere $S^n$ into $\mathbb{R}^n$: $\mathbf{e}_\tau(\mathbf{v}) = E_\tau(\mathbf{v})\mathbf{v}$ with $\Vert\mathbf{v}\Vert = 1$. Its image $\mathbf{e}_\tau(S^n)$ describes an $n$-hyper-ellipsoid $E^n$ with axes $\sigma_\tau^i\mathbf{u}_\tau^i$, $i = 1, \ldots, n$. The radius vector to each point on $E^n$ measures the sensitivity of the innovation's energy to an initial state deviation in that particular direction. The maximum sensitivity is $\sigma_\tau^1$, attained in the direction of $\mathbf{u}_\tau^1$.
In practice, for a given vehicle's trajectory and time interval τ, one may expect that the innovation's energy will be almost insensitive to some directions in S n. This motivates establishing a criterion to decide which directions contribute to the innovations and which do not.
Considering that $\mathrm{tr}(\mathbf{M}_\tau)/n$ is the average energy induced by the $\mathbf{u}_\tau^i$'s, we introduce the relative excitability index of the direction $\mathbf{v}$ on $S^n$:
Now, if for a sufficiently small number η, empirically chosen, there exists an index ne ∈ {1, …, n} such that:
then the contribution of the components $\mathbf{u}_\tau^{ne}, \ldots, \mathbf{u}_\tau^n$ to the total energy $\mathrm{tr}(\mathbf{M}_\tau)$ may be neglected. This establishes a partition of the orthogonal set $\{\mathbf{u}_\tau^i\}$, which in turn determines the orthogonal complementary subspaces:
Definition 3: $S_\tau^O$ is called the practically observable subspace and $S_\tau^{NO}$ the practically non-observable one.
Since $\mathbb{R}^n = S_\tau^O \oplus S_\tau^{NO}$, as in Equation (16), each element $\mathbf{e}^i$ of the canonical base $\mathcal{E}$ is decomposed into its orthogonal projections $\pi_\tau^O(\mathbf{e}^i)$ and $\pi_\tau^{NO}(\mathbf{e}^i)$, respectively, over $S_\tau^O$ and $S_\tau^{NO}$:
Definition 4: $e_\tau^i \triangleq \Vert\pi_\tau^O(\mathbf{e}^i)\Vert$ is the practical excitability metric of the ASV's $i$-th component over the interval $\tau$.
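Definitions 3 and 4 may be sketched as follows, with the relative excitability of each eigen-direction compared against the threshold $\eta$; the names and the exact partition rule (keep the directions whose relative index reaches $\eta$) are assumptions consistent with the partition described above:

```python
import numpy as np

def excitability(O_tau, eta=1e-3):
    """Practical excitability metric: project each canonical axis onto S^O."""
    M = O_tau.T @ O_tau                   # observability Gramian M_tau
    sig, U = np.linalg.eigh(M)
    sig, U = sig[::-1], U[:, ::-1]        # sigma_1 >= ... >= sigma_n
    n = M.shape[0]
    rel = sig / (np.trace(M) / n)         # relative excitability of each u_tau^i
    ne = int((rel >= eta).sum())          # number of practically observable directions
    Uo = U[:, :ne]                        # orthonormal basis of S^O
    e = np.linalg.norm(Uo, axis=1)        # e_i = ||pi^O(e^i)|| (row norms of Uo)
    return ne, e
```

A rank-deficient $\mathbf{O}_\tau$ automatically yields zero metrics for the corresponding subspace, with no conditioning assumption required.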
Notice that (contrary to Shen et al. (2018)) no a priori claim on the condition number of the local observability matrix $\mathbf{O}_\tau$ has been invoked. From the definition it naturally follows that, whenever $\mathrm{rank}(\mathbf{O}_\tau) < n$, there exists a linear subspace of the ASV state space with zero practical excitability metric.
5. SUBORBITAL ROCKET EXPERIMENTAL DATA ANALYSIS
The above concepts were applied to a CONAE navigation and control experimental payload on board a Brazilian VS30 sounding rocket, equipped with a tactical IMU (Systron Donner Motion-Pack: three accelerometers and three gyros) sampled at 100 Hz and a GPS receiver delivering ECEF position at 1 Hz.
5.1. The trajectory description
An analytic trajectory defined for any time $t$ was synthesised (by means of the approach proposed by Giribet et al. (2007) mentioned above) to fit the actual flight data. Figure 1 shows the X components, aligned with the rocket axis, of the synthetic specific force and angular rate, and evidences the different flight stages.
The vehicle stood on the launching pad until t = 18 s. During propulsion (18 s < t < 46 s) it attained a maximum acceleration of 110 m/s² while being spun aerodynamically. At t = 46 s the thrust was turned off, changing the sign of the axial acceleration due to residual air friction. An almost free-flight stage (nearly null specific force) started at t = 60 s, at an altitude above 40 km. At t = 73 s a de-spin mechanism was activated, reducing the axial angular rate by half. The atmospheric reinsertion phase, not shown in the figure, started a few seconds later.
5.2. The ASV deviation equations
The ASV $\boldsymbol{\chi} \in \mathbb{R}^{21}$ in Equation (3) includes the kinematic state $\mathbf{x} \in \mathbb{R}^9$ (referred to ECEF) and the IMU's parameter vector $\mathbf{p}_i \in \mathbb{R}^{12}$ (six biases plus six scale factors). Also $\boldsymbol{\xi} \in \mathbb{R}^{21}$, while $\boldsymbol{\eta}_k \in \mathbb{R}^3$ is the GPS position measurement error. The ASV deviation vector in Equation (4) is defined as (sub-indices $a$ and $b$ stand, respectively, for accelerometer and gyro):
The specific time-varying matrices in Equations (4) and (5), given below, are used as stated in Section 2 to obtain the stochastic discrete-time Equation (8) used in the EKF fusion filter.
5.3. Analysis of the excitability metric
We distinguish three main trajectory segments. τ1 = [0 s, 19 s]: prior to blast-off, with constant ECEF acceleration equal to the local gravity at the launch pad; τ2 = [19 s, 60 s]: powered flight induces changes in linear acceleration and angular rate due to aerodynamic spinning; τ3 = [60 s, 86 s]: propulsion is cut off, leading to free-fall motion. Figure 2 shows the excitability measures obtained over each of these segments for each of the 21 components of the ASV, for the threshold value assumed in Equation (20).
The above results allow us to draw the following conclusions:
• The position and velocity vectors (e1 to e6) are mostly contained in the S O subspace and are thus observable on the three stretches. This is a natural consequence of the GPS measurements.
• The attitude vector (e7 to e9) is fundamentally contained in S O during τ1 and τ2 but hardly excites the innovations during τ3. As seen in España (2017) (Section 6.6.3), gravity and the Earth rotation rate enable the vehicle's attitude to excite the innovations even when at rest on the Earth's surface. For the given instrumentation configuration (GPS + IMU), gravity and propulsion thus become necessary (but not sufficient, as we will see next) for attitude estimation.
• The scale factor of the x-gyro in body coordinates e16 becomes “excitable” during τ2 due to the combined presence of a high acceleration and angular rate projected over that axis.
• The growth in the excitability of the x-accelerometer scale factor (e19) from τ1 to τ2 is once more due to the presence of propulsion. However, this does not seem to be enough to prevent its component over S_NO from remaining predominant.
• The rest of the IMU parameters have very low excitability metrics over the three segments and should thus be considered practically unobservable for the given vehicle's trajectory. Nevertheless, one notices that the excitability metric of the biases tends to be predominant over τ1, while that of the scale factors is over τ2.
• Finally, it can be verified that free fall is quite an unfavourable condition for in-flight re-calibration of the IMU parameters.
5.4. Analysis of the sensability metric
As seen before, the direct GPS position measurement ensures almost perfectly excitable position and velocity vectors all along the trajectory. This induces a high sensability metric of the position from the very beginning and a growing tendency in the sensability of the velocity. For the assumed value u = 0·985 in Definition 1, Figure 3(a) shows this fact for the ECEF x-components, while Figure 3(b) demonstrates how, being a measure of the available information rate, the sensability anticipates changes in the estimates' 2σ error band. Indeed, the former is a sort of "derivative" of the latter. As such, a deep valley in the velocity sensability before t = 5 s induces a growth of the estimation error band, while, close to t = 12 s, the metric's increase induces an error reduction. Finally, after t = 55 s the metric stabilises at its highest value (≈1), corresponding to the lowest estimation error band of the velocity.
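Definition 1 and the exact sensability metric are given earlier in the paper. Purely as a hypothetical illustration of the "derivative" relationship noted above, the 2σ error band and a crude information-rate proxy can be extracted from a filter's covariance history:

```python
import numpy as np

def two_sigma_band(P_hist, j):
    """2-sigma error band of state component j from the filter's
    covariance history (a list of covariance matrices P_k)."""
    return 2.0 * np.sqrt(np.array([P[j, j] for P in P_hist]))

def information_rate(P_hist, j):
    """Hedged proxy for the sensability metric (not the paper's
    Definition 1): the per-step relative reduction of the 2-sigma band,
    i.e. a discrete 'derivative' of the available information.  Values
    near 1 indicate the band is shrinking fast; values near 0, a
    stalled estimate."""
    band = two_sigma_band(P_hist, j)
    return np.clip(1.0 - band[1:] / band[:-1], 0.0, None)
```

Under this reading, a valley in the rate precedes a widening of the band, and a sustained rate near its maximum accompanies the band settling at its floor, matching the qualitative behaviour described above.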
An exciting trajectory does not necessarily imply high sensability, but it is a necessary condition, alongside high performance instruments. Figure 2 shows that the attitude (θbe) is highly exciting on trajectory segments τ1 and τ2; however, as seen in Figure 4(a), the sensability of qx and qy only grows after blast-off (t = 19 s). While the vehicle is at rest, the attitude excitability is the consequence of two vectors potentially measured by the IMU: the Earth angular rate and gravity. However, the random walk noise and bias instability levels (see España (Reference España2017)) of the tactical IMU's gyros used prevent measuring the former, and thus taking full advantage of a high excitability. Notice that, under these conditions, the IMU works as a mere inclinometer and thus, as shown in Figure 4(a), the θbe components are more sensable the larger their projections over the local tangent plane.
On the other hand, the propulsion ignition boosts the inertial measurements (acceleration and angular rate) into the sensitivity range of the IMU, inducing a fast and sustained growth of the sensability of θbe during τ2, accompanied by a reduction of its error bandwidth (Figure 4(b)). At last, during free fall (τ3: t ≥ 60 s), a steady descent of the attitude sensability corresponds to a smooth increase of its error bandwidth (Figure 4(b)).
For similar reasons, the IMU parameters (accelerometer and gyro biases and scale factors) turn out to be weakly sensable too. Only the y-component bias appears sensable, and only during so short an interval within τ2 that the filter is unable to gather enough information to improve its a priori error bound (see Figures 5(a) and 5(b)). This result suggests that, for the given trajectory and instrumentation, it may be futile to try to improve the a priori instrument calibration during flight.
6. CONCLUSIONS
Integrated navigation systems seek to estimate a vehicle's kinematic state by fusing different and independent on-board measurements. As expected, the precision of the state estimates depends on measurement errors, sensor parameter instabilities and uncertain initial conditions, but also on the actual vehicle's trajectory.
This work proposes two novel complementary performance metrics aimed at guiding the designer in evaluating the fitness of a navigation system to a family of trajectories of a given vehicle with a given on-board instrument configuration. Whenever possible, the same tools would help him or her plan appropriate vehicle manoeuvres. To evaluate the aptness of a navigation system in a particular application, the designer often needs to assess which state components will be best estimated (and which will not) over a given trajectory and for a given specific sensor configuration.
Precision depends on how uncertainties "excite" the measurements' innovation signals. This paper formalises the concept of "excitability" together with its metric over any particular trajectory segment. In addition, an instantaneous metric quantifying the combined effects of excitability and the involved uncertainties, called "sensability", is also introduced. As shown, an exciting trajectory does not necessarily imply high sensability of any of the state components, but it is a necessary condition, alongside well-performing instrumentation.
The definition of practically sensable and practically observable subspaces of the state space and their corresponding orthogonal complements allows us to restrict the above measures to any specific component of the state vector. In this sense, this approach may be seen as an alternative to the one based on the mutual information concept proposed by Mohler and Hwang (Reference Mohler and Hwang1988).
Since real vehicles' trajectories can rarely be described analytically or even predetermined, and contrary to other approaches also founded on the observability concept but relying on ideal analytical trajectories (circles, lemniscates, uniform acceleration, etc.), we here propose an empirical method based on actual flight data.
All concepts have been illustrated with real flight data gathered on board a suborbital rocket. Its trajectory turned out to be not exciting enough to justify the in-flight recalibration of the sensors' parameters. For this application, it is thus concluded that increasing the complexity of the sensor models used in the data fusion algorithm would not improve performance.
For the sake of simplicity, the proposed excitability metric was only linked to observability and was thus related to data acquired in the future of a given initial state. A similar analysis may be pursued using the concept of reconstructability, based on past data with respect to the present state. Further research is suggested on excitability metrics based simultaneously on both concepts. Many applications using post-processed navigation (such as remote sensing systems) with non-causal filters (smoothers) would greatly benefit from such an extension.
The authors are currently working on demonstrating the effectiveness of the above concepts in applications with more complex trajectories and diversified instrumentation.