1. INTRODUCTION
Precise pin-point landing on other planets, moons and asteroids is challenging and has drawn increasing attention in recent years. However, communication delays, the low bit-rate of communication between the lander and the Earth, and the lack of prior information about the target planet's environment have been great challenges for deep space exploration missions (Kubota et al., Reference Kubota, Hashimoto, Sawai, Kawaguchi, Ninomiya, Uo and Baba2003). Traditional navigation methods have had limited success for precise landing, and the navigation system is required to have some autonomous functions. However, the errors of the traditional autonomous navigation method, inertial navigation, are of the order of a few kilometres, as the navigation errors accumulate and the initial errors are hard to correct (Braun and Manning, Reference Braun and Manning2007). It is hard for inertial navigation systems to meet the National Aeronautics and Space Administration (NASA) precise landing requirement that the landing error be less than 100 m (Qin et al., Reference Qin, Zhu, Cui and Gao2014; Wolf et al., Reference Wolf, Graves, Powell and Johnson2004). In 2014, NASA tested the Lander Vision System for the new Mars lander (Johnson and Golombek, Reference Johnson and Golombek2012), and Mars 2020 Lander Vision System Tested (2016) showed that visual navigation based on feature matching is feasible. A NASA technology report also pointed out that autonomous vision navigation in future planetary landing missions can be effective and accurate. This paper proposes a novel visual navigation technology based on feature line matching to accurately estimate a lander's attitude and position.
In 2004, the Descent Image Motion Estimation System (DIMES), the first on board optical navigation system, was designed to estimate the horizontal velocity of the Spirit and Opportunity landers by using images taken during the descent phase (Cheng et al., Reference Cheng, Goguen, Johnson, Leger, Matthies, Martin and Willson2004). However, the stability of this system's matching algorithm was barely satisfactory, and a pair of matched points was lost in the horizontal velocity estimation process. Meanwhile, a number of optical navigation algorithms have been presented over the past 20 years. Johnson and Mathies (Reference Johnson and Mathies1999) presented an algorithm based on automatic feature tracking across a pair of descent camera images to estimate a lander's motion parameters for a small body landing. Ma and Xu (Reference Ma and Xu2014) proposed a real-time relative navigation technology using only feature point Line-Of-Sight (LOS) vectors, utilising the theorem of triangle geometry and a filter method. The real-time performance of this algorithm can be ensured by the on board navigation camera alone, and the relative attitude errors are reduced owing to the invariance of the angles between unit feature point LOS vectors. In order to meet the pin-point landing requirement, Panahandeh and Jansson (Reference Panahandeh and Jansson2014) introduced the Vision-Aided Inertial Navigation (VAIN) scheme, which combines a monocular camera and an Inertial Measurement Unit (IMU). It forms a novel closed-form measurement model based on the image data and the IMU to overcome the shortcomings of inertial navigation systems, and the attitude and position of the lander are estimated by using an Unscented Kalman Filter (UKF). More recently, algorithms based on a Stereo-Vision (SV) camera and an IMU have been introduced to estimate a lander's pose by using two cameras or a Digital Elevation Model (DEM), respectively (Woicke and Mooij, Reference Woicke and Mooij2018; Delaune et al., Reference Delaune, Le Besnerais, Voirin, Farges and Bourdarias2016). In addition, an algorithm based on crater matching was proposed to compute motion parameters by using Kronecker products (Shao et al., Reference Shao, Gao, Xi, Leng and Gu2016).
Most of the algorithms mentioned above are able to meet the precision requirements of landing, but the applications of the common navigation landmarks, feature points and craters, are clearly limited in visual navigation technologies. On the one hand, feature point extraction and matching are complex and time-consuming, and feature points can only be applied to relative navigation, as the positions of feature points are not known in advance. On the other hand, crater extraction and matching are difficult, and craters are sparse on some parts of planetary surfaces, so that the lander's motion parameters cannot be accurately estimated at sites where craters are rare.
Ridges and gullies are common landforms on planetary surfaces, and their shapes are similar to lines. Thus, they can be abstracted as feature lines and used as landmarks for visual navigation. When feature lines with known positions in a cartographic coordinate system are obtained from orbiters’ images and defined as the navigation landmarks, absolute navigation can be carried out, and the extraction and matching of feature lines are simpler, as feature lines are easier to describe than craters. Therefore, feature lines are a better navigation landmark, and an algorithm to achieve Visual Navigation based on Feature Line Correspondences (VN-FLC) should be designed and developed. Recently, some algorithms based on line correspondences in computer vision have been proposed to estimate pose parameters. Elqursh and Elgammal (Reference Elqursh and Elgammal2011) proposed an algorithm, suitable for urban and indoor environments, that estimates the pose by using mutually parallel or orthogonal lines. Zhang et al. (Reference Zhang, Zhang, Li, Zhu, Yu and Ou2012b) presented an iterative algorithm to optimise the objective function and estimate the pose. An algorithm for perspective pose estimation from three or more line correspondences has also been designed (Mirzaei and Roumeliotis, Reference Mirzaei and Roumeliotis2011); in this algorithm, the problem is transformed into a non-linear least-squares problem and resolved as an eigenvalue problem using the Macaulay matrix, without needing initialisation. Finally, a solution using a 16th-order polynomial was presented by Zhang et al. (Reference Zhang, Xu, Lee and Koch2012a). These algorithms have significant limitations: the range of application of Elqursh and Elgammal's (Reference Elqursh and Elgammal2011) method is small owing to its reliance on parallel lines, and the calculation processes of the other three methods are complex and time-consuming. This paper proposes such a VN-FLC algorithm to estimate the lander's pose. By using at least three pairs of feature lines from a database, which has been built in advance by taking advantage of data from orbiters of the target planet, matched with their projection lines, the algorithm can easily estimate four candidate solutions for the lander's pose by using Singular Value Decomposition (SVD) and select the unique pose by orthogonal errors and re-projection residuals.
The remainder of this paper is organised as follows. The extraction and matching algorithms of feature lines are introduced in Section 2. Constraint equations are constructed by using the matched feature lines in a database and their extracted feature lines from landing images in Section 3. In Section 4, the lander's motion parameters are estimated, and the unique solution is determined. Then, an extended Kalman filter is constructed in Section 5. Section 6 introduces the simulation process and analyses the simulation results. Finally, the paper's conclusions are given in Section 7.
2. FEATURE LINE CORRESPONDENCES
In order to estimate motion parameters accurately, feature lines on the planetary surface plane need to be extracted from Two-Dimensional (2D) landing images during the descent phase and matched to the corresponding feature lines with known positions in the database. This section focuses on the feature line extraction and matching algorithms.
First of all, Akinlar and Topal's (Reference Akinlar and Topal2011) extraction algorithm, EDLine (Edge Drawing Line), is applied in our proposed algorithm to extract feature lines from landing images. As this algorithm is considerably faster than more traditional algorithms (Burns et al., Reference Burns, Hanson and Riseman1986; Grompone von Gioi et al., Reference Grompone von Gioi, Jakubowicz, Morel and Randall2008; Etemadi, Reference Etemadi1992), it has better real-time performance. As the method directly extracts the directions of the feature lines and the points on them, its outputs are consistent with the inputs of our visual navigation algorithm. Secondly, the algorithm presented in Zhang and Koch (Reference Zhang and Koch2013) is selected to match the feature lines extracted from the landing image with the feature lines in a database, as it takes advantage of the Line Band Descriptor (LBD), which is robust to image rotation, scaling and brightness changes. In addition, because the feature lines extracted by EDLine are fitted from single-pixel edges produced by edge drawing, the main direction used in Zhang and Koch's method can be determined exactly. Hence, Akinlar and Topal's (Reference Akinlar and Topal2011) extraction algorithm is combined with Zhang and Koch's (Reference Zhang and Koch2013) matching algorithm to perform feature line matching in this paper.
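For illustration, a minimal Python sketch of a comparable extraction step is given below. EDLine and the LBD matcher are distributed as C++ implementations; here OpenCV's FastLineDetector (from opencv-contrib-python) is used purely as a stand-in detector, and the image file name is hypothetical, so this is an approximation of the pipeline described above rather than the exact EDLine/LBD implementation.

```python
# A rough Python stand-in for the EDLine step described above.  EDLine and
# the LBD matcher ship as C++ code; cv2.ximgproc's FastLineDetector is used
# here only for illustration, so its detections will differ from EDLine's.
import cv2
import numpy as np

def detect_segments(path, min_len=20):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    fld = cv2.ximgproc.createFastLineDetector(min_len)  # length threshold
    segments = fld.detect(gray)                         # N x 1 x 4: (x1, y1, x2, y2)
    return gray, (np.empty((0, 4)) if segments is None
                  else segments.reshape(-1, 4))

def directions_and_lengths(segments):
    """Unit directions V_i and projection lengths of the detected segments,
    the quantities the visual navigation algorithm takes as input."""
    d = segments[:, 2:4] - segments[:, 0:2]
    lengths = np.linalg.norm(d, axis=1)
    return d / lengths[:, None], lengths

img, segs = detect_segments("landing_image.png")        # hypothetical file
V, lengths = directions_and_lengths(segs)
print(f"{len(segs)} feature lines, longest projection {lengths.max():.1f} px")
```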
In this paper, the landing images taken by the spacecraft Spirit at altitudes of approximately 1,986 m and 1,690 m, and by the spacecraft Curiosity at altitudes of approximately 6,654 m and 6,221 m, are extracted and matched by using the algorithms mentioned above (the resolution of the landing images is 1024 × 1024 pixels). The matching results are shown in Figures 1 and 2. In the matching results for Spirit, 106 and 194 feature lines are extracted from the two landing images in 920 ms and 1,277 ms respectively, and 33 pairs of feature lines are matched in 97 ms. Meanwhile, the results obtained from the Curiosity landing images show that 45 and 55 feature lines are extracted from the two images in 321 ms and 337 ms respectively, and 20 pairs of feature lines are matched in 31 ms. In conclusion, as the numbers of matched feature lines reach 33 and 20 pairs, they provide a prerequisite for the development of VN-FLC. As the times for feature line extraction and feature line matching are less than 1,277 ms and 97 ms respectively, the image processing has better real-time performance than the Scale-Invariant Feature Transform (SIFT), for which feature point extraction and matching take more than 6 s and 3 s, respectively. (Matching conditions: Intel(R) Core(TM) i7-4558U CPU @ 2.80 GHz, 4 GB RAM, Visual Studio 2013, OpenCV 2.4.10, ARPACK (ARnoldi PACKage), BIAS-2.8.0 (Basic Image AlgorithmS library), CLAPACK-3.1.1 (C Language Interface of Linear Algebra PACKage), LAPACK (Linear Algebra PACKage) and SuperLU (Supernodal LU).)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_fig1g.jpeg?pub-status=live)
Figure 1. Feature line extraction and matching based on the images of spacecraft Spirit.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_fig2g.jpeg?pub-status=live)
Figure 2. Feature line extraction and matching based on the images of spacecraft Curiosity.
3. CONSTRAINT EQUATIONS
Here, it is assumed that the planet surface is a plane and all feature lines on the planet surface are in the same plane. The coordinate systems used in this paper are then introduced, and the nonlinear equations are deduced according to coordinate transformation and geometric constraints.
3.1. Coordinate Systems
The definition of coordinate systems is important for clearly explaining the coordinate transformation and geometric constraints. In this paper, the coordinate systems are built as Figures 3 and 4.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_fig3g.jpeg?pub-status=live)
Figure 3. A geometry sketch map of several common coordinate systems.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_fig4g.jpeg?pub-status=live)
Figure 4. A geometry sketch map of model coordinate systems.
3.1.1. The mass centre coordinate system $O_w - X_wY_wZ_w$
$O_w$ is the mass centre of the targeted planet; $O_wX_w$ points toward the intersection of the equatorial plane and the ecliptic plane; the $Z_w$-axis is perpendicular to the equatorial plane; and the $X_w$-, $Y_w$- and $Z_w$-axes obey the right-hand rule.
3.1.2. The cartographic coordinate system $O_s - X_sY_sZ_s$
$O_s$ is the landing site; $O_sZ_s$ points along the vector from the mass centre of the targeted planet to the landing site; $O_sX_s$ lies along the tangent to the meridian of the landing site, towards the south pole; and the $X_s$-, $Y_s$- and $Z_s$-axes obey the right-hand rule.
3.1.3. The camera coordinate system $O_c - X_cY_cZ_c$
$O_c$ is the optical centre of the navigation camera; the $Z_c$-axis is the optical axis; and the $X_c$-axis and $Y_c$-axis are parallel to the horizontal and vertical axes of the image, respectively.
3.1.4. The model coordinate system $O_m - X_mY_mZ_m$
In this paper, $L_i = (\mathbf{V}_i, \mathbf{P}_i)$ denotes a feature line on the planet surface, in which $\mathbf{V}_i$ is defined as the unit direction vector of the feature line and $\mathbf{P}_i$ is defined as a point on it. $l_i = (\mathbf{s}_i, \mathbf{e}_i)$ denotes the projection of $L_i$ on the 2D image plane, in which $\mathbf{s}_i$ and $\mathbf{e}_i$ are defined as the endpoints of $l_i$. Let $O_m - X_mY_mZ_m$ be the model coordinate system, in which $\mathbf{V}_0$ defines the $X_m$-axis ($L_0$ being the line whose projection is the longest among $\{L_i\}$), the $Z_m$-axis is parallel to the $Z_s$-axis, and $O_m$ coincides with the origin $O_s$ of the cartographic coordinate system $O_s - X_sY_sZ_s$.
3.2. The Perspective-3-Line constraint equation
In this paper, it is assumed that $L_i = (\mathbf{V}_i, \mathbf{P}_i)$ and $l_i = (\mathbf{s}_i, \mathbf{e}_i)$ are known. Let $\mathbf{R}_s^m$ denote the rotation that transforms direction vectors from the cartographic coordinate system to the model coordinate system. Therefore, $\mathbf{V}_0^m = \mathbf{R}_s^m \mathbf{V}_0^s = [1, 0, 0]^T$. It can be shown that $\mathbf{R}_s^m$ can be written as $[(\mathbf{V}_0^s)^T; \boldsymbol{\varepsilon}_1^T; \boldsymbol{\varepsilon}_2^T]$, where $\boldsymbol{\varepsilon}_1$ and $\boldsymbol{\varepsilon}_2$ are mutually orthogonal solutions of $(\mathbf{V}_0^s)^T \mathbf{X} = 0$. Similarly, the directions of the remaining feature lines are transformed from the cartographic coordinate system to the model coordinate system by $\mathbf{R}_s^m$, such that $\mathbf{V}_i^m = \mathbf{R}_s^m \mathbf{V}_i^s$.
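As a concrete illustration of this construction, the following minimal numpy sketch builds $\mathbf{R}_s^m$ by taking $\boldsymbol{\varepsilon}_1$ and $\boldsymbol{\varepsilon}_2$ from the null space of $(\mathbf{V}_0^s)^T$ via an SVD; the function name is ours, not the paper's.

```python
import numpy as np

def model_frame_rotation(v0_s):
    """Build R_s^m with rows (V0^s)^T, eps1^T, eps2^T so that
    R_s^m V0^s = [1, 0, 0]^T, as described above."""
    v0 = np.asarray(v0_s, dtype=float)
    v0 /= np.linalg.norm(v0)
    # The last two right singular vectors of the 1x3 matrix (V0^s)^T
    # span the null space of (V0^s)^T X = 0 and are mutually orthogonal.
    _, _, Vt = np.linalg.svd(v0.reshape(1, 3))
    R_sm = np.vstack([v0, Vt[1], Vt[2]])
    if np.linalg.det(R_sm) < 0:       # keep a proper (right-handed) rotation
        R_sm[2] *= -1.0
    return R_sm

R_sm = model_frame_rotation([0.6, 0.8, 0.0])
print(R_sm @ [0.6, 0.8, 0.0])         # -> approximately [1, 0, 0]
```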
The plane that passes through the origin $O_c$ of the camera coordinate system and $l_i$ is denoted $\Pi_i$, and the normal of $\Pi_i$ is defined as $\mathbf{n}_i$. Therefore, the rotation matrix $\mathbf{R}_m^c$, by which $\mathbf{V}_0^m$ in the model coordinate system is transformed to $\mathbf{V}_0^c$ in the camera coordinate system, must satisfy the constraint $\mathbf{n}_0^T \mathbf{R}_m^c \mathbf{V}_0^m = 0$. Due to this constraint, $\mathbf{R}_m^c$ can be decomposed as:
$$\mathbf{R}_m^c = \mathbf{R}\,\mathbf{R}_y(\gamma)\,\mathbf{R}_x(\beta)\tag{1}$$
in which $\mathbf{R}$ is an arbitrary orthogonal matrix whose second column equals $\mathbf{n}_0$, $\mathbf{R}_y(\gamma)$ denotes a rotation about the Y-axis by the angle $\gamma$, and $\mathbf{R}_x(\beta)$ denotes a rotation about the X-axis by the angle $\beta$.
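A short numerical check of this decomposition is sketched below under the stated definitions; the construction of $\mathbf{R}$ from $\mathbf{n}_0$ is one of many valid choices. Because $\mathbf{R}_x(\beta)$ leaves $[1, 0, 0]^T$ unchanged, the constraint $\mathbf{n}_0^T \mathbf{R}_m^c \mathbf{V}_0^m = 0$ holds for any $\gamma$ and $\beta$, which is why additional feature lines are needed to determine both angles.

```python
import numpy as np

def Rx(beta):                         # rotation by beta about the X-axis
    c, s = np.cos(beta), np.sin(beta)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(gamma):                        # rotation by gamma about the Y-axis
    c, s = np.cos(gamma), np.sin(gamma)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def R_with_second_column(n0):
    """One orthogonal matrix whose second column is the unit normal n0."""
    n0 = n0 / np.linalg.norm(n0)
    _, _, Vt = np.linalg.svd(n0.reshape(1, 3))
    return np.column_stack([Vt[1], n0, Vt[2]])

n0 = np.array([0.3, 0.5, 0.8])
n0u = n0 / np.linalg.norm(n0)
Rmc = R_with_second_column(n0) @ Ry(0.4) @ Rx(-0.2)   # Equation (1)
print(n0u @ Rmc @ np.array([1.0, 0.0, 0.0]))          # ~ 0 for any angles
```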
Let $\mathbf{R}_s^c$ denote the rotation that transforms direction vectors from the cartographic coordinate system to the camera coordinate system. It is expressed as:
$$\mathbf{R}_s^c = \mathbf{R}_m^c\,\mathbf{R}_s^m = \mathbf{R}\,\mathbf{R}_y(\gamma)\,\mathbf{R}_x(\beta)\,\mathbf{R}_s^m\tag{2}$$
Under this decomposition, $\mathbf{R}_m^c$ is determined by the two unknown variables $\gamma$ and $\beta$. Owing to the geometrical constraint that the normal $\mathbf{n}_i$ of the plane $\Pi_i$ is perpendicular to all lines in this plane, each feature line meets the constraint that:
$$\mathbf{n}_i^T\,\mathbf{R}_m^c\,\mathbf{V}_i^m = 0, \qquad i = 0, 1, 2\tag{3}$$
Hence, the other two constraints can be expressed as:
$$\mathbf{n}_1^T\,\mathbf{R}_m^c\,\mathbf{V}_1^m = 0, \qquad \mathbf{n}_2^T\,\mathbf{R}_m^c\,\mathbf{V}_2^m = 0\tag{4}$$
Let $\mathbf{n}_i^T = [x_i', y_i', z_i']$ and $\mathbf{V}_i^m = [a_i^m, b_i^m, 0]^T$. By substituting Equation (1) into Equation (4), the constraints can be deduced as:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_eqn5.gif?pub-status=live)
in which
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_eqnU1.gif?pub-status=live)
By solving Equation (5), $\cos\gamma$ and $\sin\gamma$ can be written as:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_eqn6.gif?pub-status=live)
By the geometrical identity $\cos^2\gamma + \sin^2\gamma = 1$, we can get:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_eqn7.gif?pub-status=live)
Similarly, due to $\cos^2\beta + \sin^2\beta = 1$, we obtain a perspective-3-line constraint equation:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_eqn8.gif?pub-status=live)
in which r denotes sin β.
The perspective-3-line constraint equation can be applied to estimate the four candidate solutions of sin β, when three arbitrary feature lines on the same plane are matched with their projections.
3.3. The Perspective-n-Line (PnL) constraint equations
As longer feature line projections are less affected by noise, the feature line whose projection is the longest has been selected as the $X_m$-axis, and the feature line whose projection is the second longest is selected as a guideline. Together with the remaining feature lines, we can construct $n-2$ constraint equations of the same form as Equation (8):
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_eqn9.gif?pub-status=live)
In order to get the four candidate solutions of $\sin\beta$, the Perspective-n-Line (PnL) constraint equations must be solved. However, the calculation process is difficult. To solve Equation (9) easily, an objective function is proposed in Section 4.
4. MOTION ESTIMATION
In this section, the four candidate solutions of sin β are solved based on the nonlinear Equation (9). Then, the four candidate solutions for the lander's pose are estimated according to the four candidate solutions of sin β. Finally, the unique position and attitude of the lander are selected.
4.1. Candidate solutions
Firstly, to solve the nonlinear Equation (9), an objective function, Equation (10), is constructed so that the optimal solutions can be picked by the least-squares method.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_eqn10.gif?pub-status=live)
In order to obtain the candidate solutions, the local minima of Equation (10) are computed. Its derivative is calculated as:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_eqn11.gif?pub-status=live)
The minima are determined by computing the roots of the derivative, Equation (11). Four roots are easily computed by the eigenvalue method, and they are the candidate solutions of $\sin\beta$.
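The eigenvalue step can be sketched as follows. The actual coefficients of Equations (10) and (11) depend on the matched feature lines and are not reproduced here, so a toy polynomial stands in for the true objective; np.roots computes roots as the eigenvalues of the companion matrix.

```python
import numpy as np

def candidate_sin_beta(F_coeffs):
    """Stationary points of the objective F(r), r = sin(beta):
    differentiate, then find roots as eigenvalues of the companion
    matrix (what np.roots does internally).  Real roots in [-1, 1]
    are kept as candidate solutions."""
    dF = np.polyder(np.poly1d(F_coeffs))   # analogue of Equation (11)
    roots = np.roots(dF.coeffs)
    real = roots[np.abs(roots.imag) < 1e-9].real
    return np.sort(real[np.abs(real) <= 1.0])

# Toy objective F(r) = (r^2 - 0.25)^2 + 0.1, stationary at r = 0, +/-0.5.
print(candidate_sin_beta([1.0, 0.0, -0.5, 0.0, 0.1625]))
```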
Then, let $\mathbf{P}_i^s$ be a point on $L_i$ in the cartographic coordinate system and $\mathbf{t}$ be the coordinate of $O_c$ in the cartographic coordinate system. Hence, we can get:
$$\mathbf{n}_i^T\,\mathbf{R}_s^c\left(\mathbf{P}_i^s - \mathbf{t}\right) = 0\tag{12}$$
Substituting Equations (1) and (2) and the candidate solutions of $\sin\beta$ into Equations (4) and (12), the following equation is obtained:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_eqn13.gif?pub-status=live)
in which $\bar{\mathbf{t}} = \mathbf{R}_s^c\,\mathbf{t} = [x^c, y^c, z^c]^T$. Let $\mathbf{n}_i^T \mathbf{R} = [\bar{x}_i, \bar{y}_i, \bar{z}_i]$, $\mathbf{R}_x(\beta_j)\,\mathbf{R}_s^m\,\mathbf{V}_i^s = [a_{ij}, b_{ij}, c_{ij}]^T$ and $\mathbf{R}_x(\beta_j)\,\mathbf{R}_s^m\,\mathbf{P}_i^s = [x_{ij}^m, y_{ij}^m, z_{ij}^m]^T$. A linear equation is obtained:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_eqn14.gif?pub-status=live)
in which
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_eqnU2.gif?pub-status=live)
By solving Equation (14) with Singular Value Decomposition (SVD), the candidate solutions of the rotation angle $\gamma$ and the rotated translation vector $\bar{\mathbf{t}}$ are obtained. Then, the candidate solution of the translation vector is calculated as $\mathbf{t} = (\mathbf{R}_s^c)^T\,\bar{\mathbf{t}}$.
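A minimal sketch of this SVD step is given below, assuming the unknown vector of Equation (14) stacks as $[\cos\gamma, \sin\gamma, x^c, y^c, z^c]^T$; the exact stacking follows the coefficient definitions above.

```python
import numpy as np

def solve_gamma_and_translation(A, b):
    """Least-squares solution of the linear system (14) via SVD,
    assuming the unknowns stack as [cos(gamma), sin(gamma), x^c, y^c, z^c]."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    x = Vt.T @ ((U.T @ b) / s)        # pseudoinverse solution
    gamma = np.arctan2(x[1], x[0])    # rotation angle about the Y-axis
    t_bar = x[2:5]                    # rotated translation R_s^c t
    return gamma, t_bar

def translation_from_t_bar(R_sc, t_bar):
    """Recover t = (R_s^c)^T t_bar, as in the text."""
    return R_sc.T @ t_bar

# Demo with a random consistent system: gamma = 0.3, t_bar = [10, -4, 2000].
rng = np.random.default_rng(1)
x_true = np.array([np.cos(0.3), np.sin(0.3), 10.0, -4.0, 2000.0])
A = rng.normal(size=(12, 5))
g, tb = solve_gamma_and_translation(A, A @ x_true)
print(g, tb)                          # ~0.3 and ~[10, -4, 2000]
```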
4.2. Determination of lander's attitude and position
The candidate solutions of $\mathbf{R}_s^c$ and $\mathbf{t}$ are calculated from the linear system, Equation (14). Because of noise in the data extracted from the images, the solved rotation matrices may not be orthonormal. Hence, the solutions should be normalised by a standard Three-Dimensional (3D) alignment scheme (Umeyama, Reference Umeyama1991).
After the normalisation, the orthogonal error E er of each candidate solution is computed as:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_eqn15.gif?pub-status=live)
The results of multiple experiments show that the orthogonal errors of valid solutions are less than $10^{-4}$, so candidate solutions whose orthogonal errors are larger than $10^{-2}$ are deleted. The other $m_1$ candidate solutions are retained.
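A minimal sketch of the normalisation and screening steps, assuming the orthogonal error of Equation (15) measures the deviation of $\hat{\mathbf{R}}^T\hat{\mathbf{R}}$ from the identity (the paper's exact definition may differ):

```python
import numpy as np

def nearest_rotation(R_hat):
    """Orthonormalise a noisy candidate: the closest proper rotation in
    the Frobenius sense is U diag(1, 1, det(U V^T)) V^T."""
    U, _, Vt = np.linalg.svd(R_hat)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])
    return U @ D @ Vt

def orthogonal_error(R_hat):
    """Assumed form of E_er: Frobenius deviation of R^T R from I."""
    return np.linalg.norm(R_hat.T @ R_hat - np.eye(3))

R_noisy = np.eye(3) + 1e-3 * np.random.default_rng(2).normal(size=(3, 3))
if orthogonal_error(R_noisy) < 1e-2:  # the paper's 10^-2 threshold
    R = nearest_rotation(R_noisy)
    print(R)
```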
Then, $\mathbf{P}_{ni}^s$ is defined as the closest point on the feature line $L_i$ to the origin of the cartographic coordinate system. $\mathbf{P}_{ni}^s$ can be written as:
$$\mathbf{P}_{ni}^s = \mathbf{P}_i^s - \left((\mathbf{V}_i^s)^T\,\mathbf{P}_i^s\right)\mathbf{V}_i^s\tag{16}$$
The corresponding points $\mathbf{P}_{ni}^c$ and $\mathbf{P}_i^c$ on the feature line $L_i$ in the camera coordinate system are obtained by using the candidate solutions of the rotation matrix and the translation vector from Equation (14), and are projected onto the interpretation planes of the lines as $\{\bar{\mathbf{P}}_{ni}^c\}$ and $\{\bar{\mathbf{P}}_i^c\}$ as follows:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_eqn17.gif?pub-status=live)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_eqn18.gif?pub-status=live)
Following Figure 5, we define a re-projection residual $E_{re}$ as follows:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_fig5g.jpeg?pub-status=live)
Figure 5. Illustration of observed feature line model and re-projected feature line model on the projection plane.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_eqn19.gif?pub-status=live)
in which $\ell_i$ is the length of the projection $l_i$ on the image plane, and $D_{is}$ and $D_{ie}$ denote the distances between the observed feature line endpoints and the re-projected feature line endpoints.
Substituting Equations (17) and (18) into Equation (19), we can obtain the re-projection residual $E_{re}$ of each candidate solution and select the solution with the smallest re-projection residual as the lander's attitude and position.
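A hedged sketch of this selection step is given below. Since Equation (19) is not reproduced here, we assume a residual that sums the squared endpoint distances $D_{is}$ and $D_{ie}$ normalised by the projection length $\ell_i$; this captures the idea of Figure 5 without claiming the paper's exact weighting, and the reproject callback is a hypothetical user-supplied function.

```python
import numpy as np

def reprojection_residual(obs, reproj, lengths):
    """Assumed form of E_re: sum over lines of (D_is^2 + D_ie^2) / l_i,
    where obs and reproj hold (start, end) endpoint pairs."""
    E = 0.0
    for (s, e), (sr, er), li in zip(obs, reproj, lengths):
        E += (np.sum(np.subtract(s, sr) ** 2)
              + np.sum(np.subtract(e, er) ** 2)) / li
    return E

def select_pose(candidates, obs, lengths, reproject):
    """Keep the candidate (R, t) whose re-projected lines fit best;
    reproject(R, t) is a user-supplied function returning endpoint pairs."""
    return min(candidates,
               key=lambda Rt: reprojection_residual(obs, reproject(*Rt), lengths))
```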
Lastly, in order to improve accuracy, the lander's attitude and position are calibrated by using Kumar and Hanson's (Reference Kumar and Hanson1994) algorithm.
5. APPLICATION OF EXTENDED KALMAN FILTER
In this section, in order to improve the navigation accuracy, the position and attitude of the lander are estimated by an extended Kalman filter.
5.1. Dynamic equations
Following Figure 3, the dynamic model of the spacecraft's landing orbit is derived as follows. Firstly, the universal dynamic model of the lander's position in the cartographic coordinate system O s − X s Y s Z s is established (Li and Cui, Reference Li and Cui2008):
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_eqn20.gif?pub-status=live)
in which $\mathbf{r}$ is defined as the vector from the centroid of the planet to the mass centre of the lander, $\Delta\mathbf{F}$ denotes perturbing forces such as the solar radiation pressure and third-body gravity, $\mathbf{v}$ denotes the velocity of the lander, $\boldsymbol{\omega}_a$ represents the angular velocity of the cartographic coordinate system relative to the mass centre coordinate system, and $\mathbf{U}$ denotes the gravitational acceleration of the planet.
Similarly, the dynamic model of the lander's attitude during landing can be developed from the angular velocity $\boldsymbol{\omega}$ of the lander relative to the cartographic coordinate system and the angular velocity $\boldsymbol{\omega}_a$:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_eqn21.gif?pub-status=live)
in which
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_eqnU3.gif?pub-status=live)
5.2. State equation
In order to correct the attitude and position, these motion parameters are input into the state vector X:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_eqn22.gif?pub-status=live)
By taking the first derivative of X with respect to time t, the following state equation is deduced:
$$\dot{\mathbf{X}} = f(\mathbf{X})\tag{23}$$
in which f(·) is derived from the dynamic Equation (20) and Equation (21).
5.3. Observation equation
According to the results of 1,000 independent simulations of our algorithm without an Extended Kalman Filter (EKF), shown in Figure 6, the errors of the pose estimation of the lander can be modelled as zero-mean white noise. The lander's parameters are shown in Table 1. The lander's motion parameters estimated in Section 4 can be input into the observation vector Y:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_fig6g.jpeg?pub-status=live)
Figure 6. Simulation without the extended Kalman filter for detecting noise: (a) lander's attitude errors; (b) lander's position errors.
Table 1. Representative parameters of the simulation.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_tab1.gif?pub-status=live)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_eqn24.gif?pub-status=live)
Next, a linear observation equation is obtained on the basis of the relationship between the state vector X and the observation vector Y:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_eqn25.gif?pub-status=live)
5.4. Extended Kalman Filter (EKF)
Here, through state Equation (23) and observation Equation (25), an EKF is created.
Firstly, the discrete state model and the discrete observation model are deduced by using Equations (23) and (25):
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_eqn26.gif?pub-status=live)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_eqn27.gif?pub-status=live)
where w and v respectively refer to white noise in the system and in the measurements, and
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_eqnU4.gif?pub-status=live)
in which K and L are defined as the covariance matrices of the state noise and the observation noise, and E(·) denotes the expectation.
Then, we can obtain a first-order Taylor expansion of the discrete state model:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_eqn28.gif?pub-status=live)
in which the Jacobian matrices are obtained by computing the first derivative of f(X) with respect to X as follows:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_eqn29.gif?pub-status=live)
Lastly, we can obtain the EKF:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_eqn30.gif?pub-status=live)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_eqn31.gif?pub-status=live)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_eqn32.gif?pub-status=live)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_eqn33.gif?pub-status=live)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_eqn34.gif?pub-status=live)
where $\hat{\mathbf{X}}_0$ and $\mathbf{P}_0$ are the initial values of the state vector and the covariance matrix of the estimation error:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_eqn35.gif?pub-status=live)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_eqn36.gif?pub-status=live)
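A compact sketch of one EKF cycle corresponding to Equations (26) to (34) is given below; the dynamics f, its Jacobian and the noise covariances (called K and L in the text, renamed Q and R here to avoid clashing with the Kalman gain) are supplied by the caller.

```python
import numpy as np

def ekf_step(x, P, f, F_jac, z, H, Q, R):
    """One predict/update cycle of the EKF of Equations (30)-(34).
    f: discrete dynamics (26); F_jac: its Jacobian, Equation (29);
    H: linear observation matrix of Equation (27); Q, R: process and
    measurement noise covariances (K and L in the text)."""
    # prediction
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    # update with the pose measurement z from Section 4
    S = H @ P_pred @ H.T + R
    K_gain = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K_gain @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K_gain @ H) @ P_pred
    return x_new, P_new

# Toy 1-state demo: constant state observed directly.
f = lambda x: x
F_jac = lambda x: np.eye(1)
H = np.eye(1)
x, P = np.zeros(1), np.eye(1)
for z in [1.1, 0.9, 1.05]:
    x, P = ekf_step(x, P, f, F_jac, np.array([z]), H,
                    1e-4 * np.eye(1), 0.1 * np.eye(1))
print(x)   # converges toward ~1.0
```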
6. SIMULATION RESULTS AND ANALYSIS
In this section, the accuracy and robustness of the proposed algorithm will be tested according to the simulation results of the lander's landing process under different conditions.
In this paper, the measurement noise of the landing images is assumed to be white noise. The method for adding the measurement noise to the observed feature lines on the landing images is shown in Figure 7. The measurement white noise $\{(\Delta\mathbf{s}_i, \Delta\mathbf{e}_i)\}$ is added to the feature line endpoints $\{(\mathbf{s}_i, \mathbf{e}_i)\}$ of the matched 2D projections in our simulations.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_fig7g.jpeg?pub-status=live)
Figure 7. The extracted errors of a feature line's 2D image.
Firstly, the proposed algorithm is simulated without the EKF under different white noise intensities. The lander's attitude and position are estimated when white noise with intensity between 1 and 10 pixels is added to the endpoints of the observed feature lines. The rest of the simulation parameters are shown in Table 1, and the surface altitude variations of the planet are not considered. Under these conditions, we simulate the operation of our algorithm 1,000 times for each noise level. The motion errors as a function of noise level are shown in Figure 8 after eliminating gross errors by using the 3σ rule.
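For reproducibility, the noise injection of Figure 7 and the 3σ gross-error rejection used throughout this section can be sketched as follows (variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_endpoint_noise(endpoints, intensity):
    """Corrupt the endpoints {(s_i, e_i)} (an N x 4 array of pixel
    coordinates) with zero-mean white noise of the given intensity."""
    return endpoints + rng.normal(0.0, intensity, endpoints.shape)

def reject_3sigma(errors):
    """Eliminate gross errors outside mean +/- 3 std (the 3-sigma rule)
    before averaging the Monte-Carlo results."""
    errors = np.asarray(errors)
    keep = np.abs(errors - errors.mean()) <= 3.0 * errors.std()
    return errors[keep]
```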
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_fig8g.jpeg?pub-status=live)
Figure 8. Simulation without the EKF for testing the impact of noise intensity of the endpoints on accuracy: (a) lander's attitude errors; (b) lander's position errors.
As shown in Figure 8, the lander's attitude errors increase along with the growth of white noise intensity, but they are less than 1·5° as long as the noise intensity is less than 10 pixels.
The trend of the lander's position error variation is similar to that of the attitude errors; the Z-axis error, which remains below 35 m for white noise intensities from 1 to 10 pixels, is more sensitive than the X-axis and Y-axis errors, which remain below 5 m. In addition, the errors of attitude and position vary approximately linearly with the measurement noise, and the errors are not very sensitive to the variation of the noise intensity.
The simulation without the EKF under a varying number of feature lines matched with their 2D projections is discussed to verify the accuracy and robustness of our algorithm. As the constraints are very weak when the number of feature lines is equal to three, gross errors in the lander's pose parameters occur easily, and the pose parameters are easily affected by the white noise on the feature line endpoints. Using the simulation parameters shown in Table 1, the average errors of the lander's attitude are approximately 2·5°, and the average error of the Z-axis is approximately 120 m. Hence, in order to display clearly the variation of the motion parameter errors with the number of matched feature lines in the simulation figures, it is assumed that the number of matched feature lines increases from four to 14 in the 2D image taken by the on board navigation camera, and the feature line endpoints $\{(\mathbf{s}_i, \mathbf{e}_i)\}$ are corrupted with two pixels of noise $\{(\Delta\mathbf{s}_i, \Delta\mathbf{e}_i)\}$. The rest of the parameters of this simulation are kept as per Table 1. The motion errors under the different numbers of matched feature lines are shown in Figure 9 after eliminating gross errors by using the 3σ rule.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_fig9g.jpeg?pub-status=live)
Figure 9. The simulation without the EKF for testing the impact of the number of matched feature lines on accuracy: (a) the lander's attitude errors; (b) the lander's position errors.
As depicted in Figure 9(a), the more feature lines are matched, the smaller the lander's attitude errors become. When the number of matched feature lines is more than ten, the errors begin to stabilise within 1°. At the same time, the errors of the X-axis and Y-axis are less than 1 m, and the error of the Z-axis is less than 10 m in Figure 9(b). The reasons for this are simple: with the increase of the number of matched feature lines, the PnL system contains more constraint equations and its constraints become stronger, but once the number of feature lines reaches ten, the further gain is minimal. Hence, the errors become stable.
It is assumed that the visual navigation method meets the requirements of a precise landing mission when the position errors are less than 20 m and the attitude errors are less than 1·5° at an altitude of 2,000 m. The range of application of the visual navigation algorithm is examined without the EKF. As the planet surface is assumed in our algorithm to be a plane that contains all the feature lines, the relation between the surface altitude variations of the planet and the pose errors is explored and simulated. In this simulation, we let the surface altitudes of the planet vary from 0 m to 50 m. The rest of the simulation parameters are kept as per Table 1.
The results of the simulation of the surface altitudes of the planet are shown in Figure 10 after eliminating gross errors by using the 3σ rule. The errors of the lander's motion parameters grow with increasing surface altitude, and the Z-axis error is the most easily affected. When the surface altitudes of the planet are larger than 40 m, the error of the Z-axis is more than 20 m and the errors of β and γ are more than 1·5°. Hence, in order to meet the requirements of a precise landing mission, the range of application of our visual navigation algorithm for the surface altitudes of the planet is 0 to 40 m at an altitude of 2,000 m.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_fig10g.jpeg?pub-status=live)
Figure 10. The simulation without the EKF for testing the impact of the surface altitudes of planet on accuracy: (a) the lander's attitude errors; (b) the lander's position errors.
When the white noise added to the endpoints of the feature lines does not change, feature lines of different lengths influence our algorithm differently. Hence, the range of application of our method with regard to the length of the longest feature line is considered. For this, our visual navigation algorithm is simulated without the EKF by using the parameters in Table 1, with the length of the longest feature line varying from 100 m to 1,100 m at an altitude of 2,000 m.
In our simulation, the lander's pose errors for different lengths of the longest feature line are displayed in Figure 11. When the length of the longest feature line becomes longer, the effects of the white noise added to its endpoints are, in theory, reduced. In this simulation, the theory is consistent with the results of the lander's pose estimation. When the length of the longest feature line is more than 200 m, the attitude errors are less than 1° and the position errors are less than 20 m at an altitude of 2,000 m. Hence, the range of application of our visual navigation algorithm with regard to the length of the longest feature line is 200 to 1,100 m, since the longest feature line that can be captured at an altitude of 2,000 m is approximately 1,100 m long. From Figure 1, the length of the longest matched feature line is more than 200 m at an altitude of about 2,000 m. Therefore, our algorithm can estimate the lander's attitude and position by using landing images.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_fig11g.jpeg?pub-status=live)
Figure 11. The simulation without the extended Kalman filter for testing the impact of the lengths of feature lines on accuracy: (a) the lander's attitude errors; (b) the lander's position errors.
Lastly, the landing process of the NASA spacecraft Curiosity is simulated by using our method with the EKF (Steltzner et al., Reference Steltzner, Miguel San Martin, Rivellini, Chen and Kipp2014). The simulation results can confirm whether the proposed algorithm meets the requirements of a pin-point landing. We hypothesise that the lander's initial height is 8·4 km above the surface, after the heat shield has separated and the radar-based solution has converged, and we assume the lander takes 103 seconds to descend to an altitude of 247·9 m. The feature line endpoints $\{(\mathbf{s}_i, \mathbf{e}_i)\}$ are treated in the same way as in the previous simulation. The covariance matrices of the error vectors of this simulation are obtained from the white noise variances of the 1,000 repetitions of the simulation experiments at each height, and these covariance matrices are input into the algorithm of Chang et al. (Reference Chang, Xu and Wang2017a; Reference Chang, Xu, Wang and Liu2017b). The covariance matrices of the measurement errors are thereby obtained and applied to this simulation of the EKF. Then, the covariance matrices of the state errors are obtained from the system's white noise at each height. The rest of the parameters are kept as per Table 1, and the motion errors over the landing time are illustrated in Figure 12.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20181005111614477-0193:S0373463318000358:S0373463318000358_fig12g.jpeg?pub-status=live)
Figure 12. The simulation for landing on Mars: (a) the lander's attitude errors; (b) the lander's position errors.
From Figure 12, the conclusion can be drawn that the result of the simulation using the EKF is clearly better than both the previous solution and the visual navigation algorithm based on crater matching of Shao et al. (Reference Shao, Gao, Xi, Leng and Gu2016). After the extended Kalman filtering, the lander's attitude errors are stable within 0·5°, and the lander's position errors decrease with decreasing altitude, to within 1 m at a height of about 247·9 m. This is because the 2D images become sharper as the lander's altitude decreases, so the effect of noise is smaller.
In summary, all the simulations of our proposed algorithm demonstrate that the new algorithm can estimate the lander's pose parameters during the descent phase.
7. CONCLUSIONS
A novel algorithm to estimate a lander's attitude and position is presented in this paper. The proposed algorithm can obtain a unique solution for a lander's motion parameters by using at least three feature lines with known positions on the surface of a targeted planet in a database, together with their projection lines. The accuracy and robustness of our algorithm are demonstrated by a series of simulations: the lander's attitude errors are less than 0·5° and the lander's position errors are less than 1 m. However, the method is limited, because the positions of the feature lines must be known in advance in a database. Hence, in future studies, the relative attitude and position of the lander will be estimated by using feature lines from sequential images during the descent phase, when the absolute positions of feature lines are unknown.
ACKNOWLEDGEMENT
The work described in this paper was supported by the National Natural Science Foundation of China (Grant No. 61773227). The authors fully appreciate this financial support.