
Body-fixed SLAM with Local Submaps for Planetary Rover

Published online by Cambridge University Press:  26 June 2019

Bo Zheng
Affiliation:
(School of Astronautics, Harbin Institute of Technology, 150080 Harbin, China)
Zexu Zhang*
Affiliation:
(School of Astronautics, Harbin Institute of Technology, 150080 Harbin, China)
Jing Wang
Affiliation:
(Science and Technology on Optical-Radiation Laboratory, 100000 Beijing, China)
Feng Chen
Affiliation:
(Shanghai Institute of Aerospace System Engineering, 201108, Shanghai, China)
Xiangquan Wei
Affiliation:
(Shanghai Institute of Aerospace System Engineering, 201108, Shanghai, China)

Abstract

In traditional Simultaneous Localisation and Mapping (SLAM) algorithms based on Extended Kalman Filtering (EKF-SLAM), the uncertainty of the state estimate grows rapidly as the exploration proceeds and the mapped area expands. Likewise, the computational complexity of EKF-SLAM is proportional to the square of the number of feature points contained in the state variables in a single filtering process. A new SLAM algorithm combining local submaps with the body-fixed coordinates of the rover is presented in this paper. Because local submaps are introduced, the algorithm reduces the number of feature points contained in the state variables in each single filtering process, and therefore lowers the computational complexity and increases the computational speed in view of the limited processing capability of the onboard computer. In addition, rover body-fixed SLAM improves the navigation accuracy of the rover and decreases the cumulative linearization error through coordinate transformation during the update process, as shown in the simulation results.

Type
Research Article
Copyright
Copyright © The Royal Institute of Navigation 2019 

1. INTRODUCTION

Deep space exploration has become a hot research topic for several countries, and the development of a rover system for complex environments has become a key issue in planetary surface exploration. The Simultaneous Localisation and Mapping (SLAM) algorithm can achieve self-localisation without prior information by constructing an incremental map, and this technique has great potential for rover systems (Islam et al., 2017).

In general, semi-autonomous positioning methods are common for rover surface exploration, but their accuracy is affected by the reliability of the rover system and the computational ability of the onboard computer (Meng and Hutao, 2014). These methods are unstable and inaccurate when the rover drives on rough terrain, because the wheels of the rover may skid. In such cases, the rover can obtain relative position and attitude information more easily through a SLAM algorithm, which can play an important role in autonomous navigation missions (Tan et al., 2013).

The original SLAM algorithm was presented by Leonard and Durrant-Whyte (1991) and could estimate position while modelling environmental maps. The Extended Kalman Filter (EKF)-SLAM algorithm was established by Dissanayake et al. (2001) and provides, on the basis of the Kalman Filter (KF), the general computational framework of a SLAM algorithm for mobile robots. In that work, Dissanayake et al. (2001) proved that a solution to the SLAM problem is indeed possible and gave the underlying structure of the solution.

In early SLAM research, scholars focussed on a Two-Dimensional (2D) model of the SLAM problem, but this is not suitable for complex planetary terrain. Therefore, Six-Degree Of Freedom (6-DOF) SLAM for outdoor environments was developed by Nüchter et al. (2007).

The traditional EKF-SLAM method has an obvious limitation: the computational complexity is proportional to the square of the number of feature points contained in the state variables in a single filtering process. Many scholars have focussed on reducing the computational complexity of EKF-SLAM. Martinez-Cantin and Castellanos (2006) addressed the consistency issue of the EKF-SLAM problem and proposed a local map sequencing algorithm, which can reduce the computational cost in the majority of updates. The submap method was presented as a useful way to solve this problem by Paz and Neira (2006) and Joly and Rives (2009), and it reduces the computational complexity.

In addition, the linearization error of the EKF is a major problem for EKF-SLAM. Existing remedies include the alternative linearization method (Martinez-Cantin and Castellanos, 2005) and the nonparametric method (Thrun et al., 2004). The traditional EKF-SLAM method is established in global coordinates, so the accumulated uncertainty of the pose of the rover and the uncertainty of the positions of the feature points are introduced into the observation equation, which increases the linearization error. In the body-fixed SLAM method, the feature points are updated in the body coordinates of the rover; only the uncertainty of the camera range and the Inertial Measurement Unit (IMU) measurement noise enter the linearization of the observation equation, which greatly reduces the linearization error.

In addition, the terrain of the planet is complex, and the computing ability of the onboard computer is limited. It is necessary to reduce the computational complexity, increase the computational speed and decrease the algorithm error during the large-scale cruising exploration. A flow chart of this paper's structure is given in Figure 1.

Figure 1. A flow chart of the paper's structure.

2. MOTION MODEL OF THE ROVER

The set of the coordinate systems adopted to describe the rover's motion is shown in Figure 2.

Figure 2. Coordinate system of the rover's motion.

2.1. Notation and hypotheses

In the following: $O_{G} - x_{G}y_{G}z_{G}$ is the global coordinate system, which is considered inertial. $O_{R} - x_{R}y_{R}z_{R}$ is the body coordinate system of the rover, where $x_{R}$ points in the direction of the rover's motion and $y_{R}$ points to the left of the rover's motion. $O_{C} - x_{C}y_{C}z_{C}$ is the coordinate system of the binocular camera, where $x_{C}$ is perpendicular to the camera lens and $z_{C}$ points opposite to the connecting stick. ${\bf O}_{A_{i}} - x_{A_{i}} y_{A_{i}} z_{A_{i}}$ is the coordinate system of the rover's i-th wheel. ${\bf O}_{W_{i}} - x_{W_{i}}y_{W_{i}}z_{W_{i}}$ is the coordinate system of the contact point between the rover's i-th wheel and the planet surface, where $x_{W_{i}}$ points in the tangential direction of the surface at the contact point and $z_{W_{i}}$ points in the normal direction of the surface at the contact point. $l_{m_{i}}$ is the distance between the i-th feature point and the binocular camera. $d_{x}$ is the distance between the i-th feature point and the binocular camera along the $x_{C}$ axis, and $d_{s}$ is the distance between the i-th feature point and the binocular camera along the $z_{C}$ axis. $[a, b, c]$ is the installation location of the binocular camera in the $O_{R} - x_{R}y_{R}z_{R}$ coordinate system of the rover. $r_{W}$ is the radius of the rover's wheels and $B$ is the width of the rover.

2.2. Kinematic model of the rover

The rover investigated in this paper is a four-wheel differential rover with three translational Degrees Of Freedom (DOF) and three rotational DOF. The attitude of the rover is determined by the contact points between the soil surface and the rover. Linear velocity and angular velocity are the control variables during the rover's motion, and they are determined by the angular velocities of the two sets of differential wheels (Yen et al., 1999).

$[x_{R}, y_{R}, z_{R}, \phi_{x}, \phi_{y}, \phi_{z}]^{T}$ denotes the position and attitude vector of the rover, where $\phi_{z}$, $\phi_{y}$ and $\phi_{x}$ stand for the yaw angle, pitch angle and roll angle, respectively. The motion model equation is the following:

(1)$$\dot{x} = r_{w}\,\frac{\dot{\theta}_{L}(t) + \dot{\theta}_{R}(t)}{2}, \qquad \omega_{z} = \frac{r_{w}}{B}\bigl(\dot{\theta}_{L}(t) - \dot{\theta}_{R}(t)\bigr)$$

where $\dot{\theta}_{L}(t)$ is the rotation velocity of the left wheel, $\dot{\theta}_{R}(t)$ is the rotation velocity of the right wheel, $r_{w}$ is the radius of the wheel and $B$ is the distance between the two wheels. ${\bi \nu}(t) \in {\cal R}^{3}$ is the linear velocity, which points along the $x$ axis, and ${\bi \omega}(t) \in {\cal R}^{3}$ is the angular velocity, which points along the $z$ axis, so that ${\bi \nu}(t) = [\dot{x}, 0, 0]^{T}$ and ${\bi \omega}(t) = [0, 0, \omega_{z}]^{T}$.
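For illustration, a minimal numerical sketch of Equation (1) is given below (Python); the function name and the default wheel radius and track width are placeholders, not values from the paper.

import numpy as np

def body_velocity(theta_dot_L, theta_dot_R, r_w=0.1, B=0.5):
    """Differential-drive kinematics of Equation (1).

    theta_dot_L, theta_dot_R : wheel angular rates [rad/s]
    r_w : wheel radius [m] (illustrative value)
    B   : track width [m] (illustrative value)
    Returns the body-frame linear velocity nu = [x_dot, 0, 0]
    and angular velocity omega = [0, 0, omega_z].
    """
    x_dot = r_w * (theta_dot_L + theta_dot_R) / 2.0
    omega_z = r_w / B * (theta_dot_L - theta_dot_R)
    nu = np.array([x_dot, 0.0, 0.0])
    omega = np.array([0.0, 0.0, omega_z])
    return nu, omega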

The control inputs are controlled in the body coordinate system of the rover. Therefore, the linear velocities and angular velocities need to be converted to a global coordinate system. The rotation matrix $C_{R}^{G}$ is expressed as:

(2)$$C_{R}^{G} = \begin{bmatrix} c\phi_{y}c\phi_{z} & s\phi_{x}s\phi_{y}c\phi_{z} - c\phi_{x}s\phi_{z} & c\phi_{x}s\phi_{y}c\phi_{z} + s\phi_{x}s\phi_{z} \\ c\phi_{y}s\phi_{z} & s\phi_{x}s\phi_{y}s\phi_{z} + c\phi_{x}c\phi_{z} & c\phi_{x}s\phi_{y}s\phi_{z} - s\phi_{x}c\phi_{z} \\ -s\phi_{y} & s\phi_{x}c\phi_{y} & c\phi_{x}c\phi_{y} \end{bmatrix}$$

where c represents cosine, s represents sine and $C_{R}^{G}$ represents the transformation matrix from the rover coordinates to global coordinates.

The transformation matrix of the angular velocity $S_{\omega}^{G}$ is:

(3)$$S_{\omega}^{G} = \begin{bmatrix} 1 & s\phi_{x}t\phi_{y} & c\phi_{x}t\phi_{y} \\ 0 & c\phi_{x} & -s\phi_{x} \\ 0 & \dfrac{s\phi_{x}}{c\phi_{y}} & \dfrac{c\phi_{x}}{c\phi_{y}} \end{bmatrix}$$

where t represents tangent and $S_{\omega}^{G}$ represents the transformation matrix of the angular velocity. The entire kinematic model equation of the rover can be written as:

(4)$$\begin{bmatrix} \dot{x}_{R} \\ \dot{y}_{R} \\ \dot{z}_{R} \\ \dot{\phi}_{x} \\ \dot{\phi}_{y} \\ \dot{\phi}_{z} \end{bmatrix} = \begin{bmatrix} {\bf C}_{R}^{G} & 0 \\ 0 & {\bf S}_{\omega}^{G} \end{bmatrix} \begin{bmatrix} {\bi \nu} \\ {\bi \omega} \end{bmatrix}$$

where ${\bi \nu}$ is the linear velocity of the rover and ${\bi \omega}$ is the angular velocity.
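A short sketch of the pose propagation in Equation (4) is given below; it assumes the yaw-pitch-roll (ZYX) convention used for $C_{R}^{G}$ in Equation (2), and the function names and the simple Euler integration step are illustrative only.

import numpy as np

def C_R_G(phi_x, phi_y, phi_z):
    """Rotation matrix from the rover body frame to the global frame, Equation (2)."""
    cx, sx = np.cos(phi_x), np.sin(phi_x)
    cy, sy = np.cos(phi_y), np.sin(phi_y)
    cz, sz = np.cos(phi_z), np.sin(phi_z)
    return np.array([
        [cy * cz, sx * sy * cz - cx * sz, cx * sy * cz + sx * sz],
        [cy * sz, sx * sy * sz + cx * cz, cx * sy * sz - sx * cz],
        [-sy,     sx * cy,                cx * cy]])

def S_omega_G(phi_x, phi_y):
    """Euler-angle rate transformation matrix, Equation (3)."""
    cx, sx = np.cos(phi_x), np.sin(phi_x)
    cy, ty = np.cos(phi_y), np.tan(phi_y)
    return np.array([
        [1.0, sx * ty, cx * ty],
        [0.0, cx,      -sx],
        [0.0, sx / cy, cx / cy]])

def propagate_pose(pose, nu, omega, dt):
    """One Euler step of Equation (4); pose = [x, y, z, phi_x, phi_y, phi_z]."""
    x, y, z, phi_x, phi_y, phi_z = pose
    p_dot = C_R_G(phi_x, phi_y, phi_z) @ nu       # translational rates in the global frame
    phi_dot = S_omega_G(phi_x, phi_y) @ omega     # Euler-angle rates
    return pose + dt * np.concatenate([p_dot, phi_dot])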

Since the attitude of the rover is determined by the contact point of the soil and the rover, the motion of a single wheel needs to be analysed. The coordinates of the contact point can be obtained in the coordinate system of the rover's wheel:

(5)$$\begin{bmatrix} X_{W_{i}}^{A_{i}} \\ Y_{W_{i}}^{A_{i}} \\ Z_{W_{i}}^{A_{i}} \\ 1 \end{bmatrix} = {\bf C}_{W_{i}}^{A_{i}} \begin{bmatrix} X_{W_{i}} \\ Y_{W_{i}} \\ Z_{W_{i}} \\ 1 \end{bmatrix} \quad i = 1, 2, 3, 4$$

where $i$ denotes the $i$-th wheel of the rover, ${\bf C}_{W_{i}}^{A_{i}}$ is the transformation matrix from the coordinate system of the contact point to the coordinate system of the $i$-th wheel and $(X_{W_{i}}, Y_{W_{i}}, Z_{W_{i}})$ are the coordinates of the contact point. ${\bf C}_{W_{i}}^{A_{i}}$ can be represented as:

(6)$${\bf C}_{W_{i}}^{A_{i}} = \begin{bmatrix} \cos\delta_{i} & 0 & \sin\delta_{i} & -r_{w}\sin\delta_{i} \\ 0 & 1 & 0 & 0 \\ -\sin\delta_{i} & 0 & \cos\delta_{i} & -r_{w}\cos\delta_{i} \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

where $\delta_{i}$ is the angle between the normal vector at the contact point and the $z_{A_{i}}$ axis of the coordinate system of the $i$-th wheel.

2.3. Installation model of the binocular camera

Three-dimensional ranging information is a common measurement in the localisation system of rovers. A binocular camera is used as a ranging sensor in this paper. The navigation observations are a series of three-dimensional point clouds which are generated by matching pairs of visual features. The installation model of the binocular camera on the rover is shown in Figure 3 (vertical view) and Figure 4 (lateral view).

Figure 3. An installation model for the binocular camera (vertical view).

Figure 4. An installation model for the binocular camera (lateral view).

The binocular camera is fixed on an oblique stick with two DOF at a certain angle. The stick, whose length is $L_{a}$, is able to rotate around the connecting bolt with an angular velocity $\omega_{\phi}$. $\phi_{L}$ is the angle between the control stick and the vertical direction of the platform, and $\theta_{L}$ is the angle between the control stick and the horizontal forward direction of the rover. For convenience, the coordinate system of the camera has its origin at the connecting bolt. The stick can also rotate about the platform with an angular velocity $\omega_{\delta}$. A wide view of the three-dimensional terrain is achieved through the camera's control inputs ${\bf u}_{L} = [\omega_{\delta}, \omega_{\phi}]$.

Valid feature points are selected according to their measured distances. In this paper, the i-th valid point is equivalent to the i-th feature point in the external environment. For the i-th feature point, the measurement model of the camera can be written as follows:

(7)$$l_{m_{i}} = \sqrt{(p_{L,x} - m_{i,x})^{2} + (p_{L,y} - m_{i,y})^{2} + (p_{L,z} - m_{i,z})^{2}}$$

where $p_{L}$ is the position of the binocular camera in the global coordinate system and $m_{i}$ is the $i$-th feature point in the global coordinate system. When a new feature point is observed, it is projected into the global coordinate system by the SLAM algorithm. For each valid feature point, the position in the coordinate system of the camera is given by:

(8)$$m_{i_{C}} = \begin{bmatrix} m_{x_{C},i} \\ m_{y_{C},i} \\ m_{z_{C},i} \end{bmatrix} = \begin{bmatrix} l_{m,i}\cos\psi_{i} \\ l_{m,i}\sin\psi_{i} \\ L_{a} \end{bmatrix} \quad i = 1, \cdots, N$$

where $\psi_{i}$ is the scanning azimuth angle of the i-th feature point in the coordinate system of the camera and $N$ is the number of feature points at that moment. The rotation matrix ${\bf C}_{C}^{H}$ from the coordinate system of the camera to the coordinate system of the holder platform is:

(9)$${\bf C}_{C}^{H} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\phi_{L} & \sin\phi_{L} \\ 0 & -\sin\phi_{L} & \cos\phi_{L} \end{bmatrix}$$

The attitude transition from the coordinate system of the holder platform to the coordinate system of the rover is just one rotation around the z axis, thus the rotation matrix ${\bf {C}}_{H}^{R}$ is:

(10)$${\bf C}_{H}^{R} = \begin{bmatrix} \cos\delta_{L} & \sin\delta_{L} & 0 \\ -\sin\delta_{L} & \cos\delta_{L} & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

therefore, the position of the feature point in the global coordinate system can be described as:

(11)$$m_{i_{G}} = p_{R} + {\bf C}_{R}^{G}\left({\bf C}_{H}^{R}{\bf C}_{C}^{H}m_{i_{C}} + \begin{bmatrix} a \\ b \\ c \end{bmatrix}\right) \quad i = 1, \cdots, N$$

where $p_{R} = [x_{R}, y_{R}, z_{R}]^{T}$ is the position of the rover in the global coordinate system and ${\bf d} = [a, b, c]$ is the position of the holder platform in the coordinate system of the rover.
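A minimal sketch of the projection chain of Equations (8)-(11) follows; the rotation matrices are built from Equations (9) and (10), and the default holder offset d is a placeholder value, not taken from the paper.

import numpy as np

def feature_in_global(l_m, psi, L_a, phi_L, delta_L, p_R, C_R_G,
                      d=np.array([0.5, 0.0, 1.0])):
    """Project a camera measurement (range l_m, azimuth psi) into the global frame, Eq. (11).

    d is the holder-platform position [a, b, c] in the rover frame (placeholder values).
    """
    # Feature position in the camera frame, Equation (8)
    m_C = np.array([l_m * np.cos(psi), l_m * np.sin(psi), L_a])
    # Camera-to-holder rotation, Equation (9)
    C_C_H = np.array([[1.0, 0.0, 0.0],
                      [0.0, np.cos(phi_L),  np.sin(phi_L)],
                      [0.0, -np.sin(phi_L), np.cos(phi_L)]])
    # Holder-to-rover rotation about the z axis, Equation (10)
    C_H_R = np.array([[np.cos(delta_L),  np.sin(delta_L), 0.0],
                      [-np.sin(delta_L), np.cos(delta_L), 0.0],
                      [0.0, 0.0, 1.0]])
    # Rover-to-global transformation, Equation (11)
    return p_R + C_R_G @ (C_H_R @ C_C_H @ m_C + d)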

3. STANDARD EKF-SLAM PROBLEM FOR THE ROVER

3.1. System state equation

The state of the rover includes three position coordinates and three attitude angles. The state equation can be obtained by discretising the motion model of the rover, that is:

(12)$${\bf X}_{G}(k) = {\bf X}_{G}(k-1) + \Delta T \begin{bmatrix} {\bf C}_{R}^{G}(k) & 0 \\ 0 & {\bf S}_{\omega}^{G}(k) \end{bmatrix} \begin{bmatrix} v_{R}(k) + \tilde{v}_{R} \\ \omega_{R}(k) + \tilde{\omega}_{R} \end{bmatrix}$$

where $\Delta T$ is the discrete time interval, $\tilde{v}_{R} \sim N(0, Q_{v})$ and $\tilde{\omega}_{R} \sim N(0, Q_{\omega})$ are the control noise terms and $k$ is the discrete time index. The positions of the valid feature points in the local maps are static, thus the recursive state equation for a new feature point is:

(13)$${\bf X}_{m_{i}}(k) = {\bf X}_{m_{i}}(k-1)$$

It is assumed that the estimated values of the state variables and the covariance matrix at time $k$ are $\hat{{\bf X}}(k)$ and ${\bf P}(k)$, respectively:

(14)$$\hat{{\bf X}}(k) = \begin{bmatrix} \hat{{\bf X}}_{R}(k) \\ \hat{{\bf X}}_{m}(k) \end{bmatrix}$$
(15)$${\bf P}(k) = \begin{bmatrix} {\bf P}_{RR}(k) & {\bf P}_{Rm}(k) \\ {\bf P}_{Rm}^{T}(k) & {\bf P}_{mm}(k) \end{bmatrix}$$

where $\hat{{\bf X}}_{R} \lpar \hbox{k}\rpar $ is the estimated value of the position and attitude of the rover, $\hat{{\bf X}}_{m} \lpar \hbox{k}\rpar $ is the estimated value of the feature point, PRR(k) is the covariance matrix of the rover, Pmm(k) is the covariance matrix of the feature point and PRm(k) is the cross covariance of the rover and the feature point.

The predicted value of the position and attitude of the rover at time k can be defined as:

(16)$$\hat{{\bf X}}_{R}(k \vert k-1) = \hat{{\bf X}}_{R}(k-1) + \begin{bmatrix} \hat{{\bf C}}_{R}^{G}(k-1) & 0 \\ 0 & \hat{{\bf S}}_{\omega}^{G}(k-1) \end{bmatrix} \begin{bmatrix} T^{2}{\bf a}_{R}(k) \\ T{\bi \omega}_{R}(k) \end{bmatrix}$$

where ${\bf a}_{R}(k)$ and ${\bi \omega}_{R}(k)$ are the linear acceleration and angular velocity of the rover, which are assumed to be measured by an IMU, and $\hat{{\bf S}}_{\omega}^{G}(k-1)$ is the transformation matrix of the angular velocity as in Equation (3). The predicted state variables and covariance matrix are then given as:

(17)$$\hat{{\bf X}}(k \vert k-1) = \begin{bmatrix} \hat{{\bf X}}_{R}(k \vert k-1) \\ \hat{{\bf X}}_{m}(k-1) \end{bmatrix}$$
(18)$${\bf P}(k \vert k-1) = \nabla f_{X}(k)\,{\bf P}(k-1)\,\nabla f_{X}^{T}(k) + {\bf Q}(k)$$

where $\nabla f_{X}(k)$ is the Jacobian matrix of the state equation $f(\cdot)$ and ${\bf Q}(k)$ is the variance matrix of the random state error. We assume ${\bf Q}_{IMU}(k)$ is the covariance matrix of the random state error generated by the IMU, thus:

(19)$${\bf P}(k \vert k-1) = \nabla f_{R}(k)\,{\bf P}(k-1)\,\nabla f_{R}^{T}(k) + \nabla f_{a,\omega}(k)\,{\bf Q}_{IMU}\,\nabla f_{a,\omega}^{T}(k) + {\bf Q}(k)$$

The predicted value of the whole covariance matrix can be simplified as:

(20)$${\bf P}(k \vert k-1) = \begin{bmatrix} \nabla f_{R}(k){\bf P}_{RR}(k-1)\nabla f_{R}^{T}(k) + {\bf Q}_{RR}(k) & \nabla f_{R}(k){\bf P}_{Rm}(k-1) \\ {\bf P}_{mR}(k-1)\nabla f_{R}^{T}(k) & {\bf P}_{mm}(k-1) \end{bmatrix}$$

where QRR(k) is the simplified form of the measurement noise variance matrix and state error variance, which can be described as:

(21)$${\bf Q}_{RR}(k) = \nabla f_{a,\omega}(k)\,{\bf Q}_{IMU}\,\nabla f_{a,\omega}^{T}(k) + {\bf Q}(k)$$

where the Jacobian matrices $\nabla f_{a,\omega}(k)$ and $\nabla f_{R}(k)$ are obtained as follows:

(22)$$\nabla f_{a,\omega}(k) = \begin{bmatrix} T^{2}\hat{{\bf C}}_{R}^{G}(k-1) & 0 \\ 0 & T\hat{{\bf S}}_{\omega}^{G}(k-1) \end{bmatrix}$$
(23)$$\nabla f_{R}(k) = \frac{\partial \hat{{\bf X}}_{R}(k \vert k-1)}{\partial \hat{{\bf X}}_{R}(k-1)}$$

QIMU is the covariance of the IMU:

(24)$${\bf Q}_{IMU} = \begin{bmatrix} {\bi \sigma}_{a}^{2} & 0 \\ 0 & {\bi \sigma}_{\omega}^{2} \end{bmatrix}$$

where $\sigma_{a}$ is the standard deviation of the accelerometer's random noise and $\sigma_{\omega}$ is the standard deviation of the gyroscope's random noise. At this moment, if new feature points are added to the global map, the EKF update process can be completed.
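The prediction step of Equations (16)-(23) can be sketched as follows; the pose Jacobian $\nabla f_{R}$ is approximated by the identity here, and all block sizes, function names and noise values are illustrative rather than a definitive implementation of the paper's filter.

import numpy as np

def ekf_predict(X, P, a_R, omega_R, C_R_G_hat, S_omega_G_hat, T, Q, Q_imu):
    """EKF prediction for the rover block of the state, Equations (16)-(21).

    X : full state vector [rover pose (6); map features (3 each)]
    P : full covariance, partitioned as in Equation (15)
    """
    n = 6                                     # rover pose dimension
    # Pose prediction, Equation (16)
    B = np.block([[C_R_G_hat, np.zeros((3, 3))],
                  [np.zeros((3, 3)), S_omega_G_hat]])
    X_pred = X.copy()
    X_pred[:n] = X[:n] + B @ np.concatenate([T**2 * a_R, T * omega_R])
    # Jacobians, Equations (22)-(23); the pose Jacobian is approximated by the identity
    F_R = np.eye(n)
    F_aw = np.block([[T**2 * C_R_G_hat, np.zeros((3, 3))],
                     [np.zeros((3, 3)), T * S_omega_G_hat]])
    Q_RR = F_aw @ Q_imu @ F_aw.T + Q          # Equation (21)
    # Covariance prediction, Equation (20)
    P_pred = P.copy()
    P_pred[:n, :n] = F_R @ P[:n, :n] @ F_R.T + Q_RR
    P_pred[:n, n:] = F_R @ P[:n, n:]
    P_pred[n:, :n] = P_pred[:n, n:].T
    return X_pred, P_pred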

3.2. Map augmentation

It is assumed that the feature points have been obtained by the binocular camera, but the valid feature points cannot be brought into the state estimation process directly, since they need to be transformed into the global coordinate system. The observed variable ${\bf z}_{j}(k)$ is associated with a new feature point $m_{j}$ at time $k$:

(25)$${\bf z}_{j}(k) = \begin{bmatrix} l_{m_{j}}(k) \\ \psi_{j}(k) \end{bmatrix} = h_{k}({\bf X}_{G}(k), m_{j}) + {\bf v}_{k}$$

where ${\bf v}_{k} \sim N(0, {\bf R}_{LL})$ is Gaussian white noise with zero mean and ${\bf R}_{LL}$ is its variance. The position of the feature point $m_{j}$ in the global coordinate system is:

(26)$$m_{j_{G}}(k) = g_{k_{G}}({\bf z}_{j}(k), \hat{{\bf X}}_{G}(k)) = \hat{{\bf p}}_{R}(k) + \hat{{\bf C}}_{R}^{G}(k)\bigl({\bf C}_{H}^{R}{\bf C}_{C}^{H}m_{j_{C}}(k) + {\bf d}\bigr)$$

where $m_{j_{C}}(k)$ is the position of the feature point in the coordinate system of the binocular camera, whose relation to the observed variable is:

(27)$$m_{j_{C}}(k) = g_{k_{C}}({\bf z}_{j}(k)) = \begin{bmatrix} l_{m_{j}}(k)\cos\psi_{j}(k) \\ l_{m_{j}}(k)\sin\psi_{j}(k) \\ L_{a} \end{bmatrix}$$

Finally, the specific form of the observation equation becomes:

(28)$$\hat{z}_{j} = h_{k}(\hat{{\bf X}}_{G}, m_{j}) = \begin{bmatrix} \sqrt{q_{xc}^{2} + q_{yc}^{2}} \\ \arctan(q_{yc}/q_{xc}) \\ L_{a} \end{bmatrix}, \qquad q_{C} = \hat{{\bf C}}_{H}^{C}\hat{{\bf C}}_{R}^{H}\hat{{\bf C}}_{G}^{R}m_{j_{G}} - \hat{{\bf C}}_{H}^{C}\hat{{\bf C}}_{R}^{H}\bigl(\hat{{\bf C}}_{G}^{R}{\bf p}_{G} + d_{R}^{H}\bigr)$$

where $q_{C}$ represents the position of the feature point $m_{j}$ in the coordinate system of the binocular camera.

So far, the augmentation process for the new feature points has been completed. Next, the augmentation for the variance matrix of the system state needs to be finished. The covariance matrix of the feature point itself is:

(29)$$m_{j_{G}}(k) = g_{k_{G}}(z_{j}(k), \hat{{\bf X}}_{G}(k)), \qquad {\bf G}_{\hat{X}_{G}} = \frac{\partial g_{k_{G}}(z_{j}(k), \hat{{\bf X}}_{G}(k))}{\partial \hat{{\bf X}}_{G}(k)}, \qquad {\bf G}_{z_{j}} = \frac{\partial g_{k_{G}}(z_{j}(k), \hat{{\bf X}}_{G}(k))}{\partial z_{j}}$$

Then, the variance matrix of the new feature point and the covariance matrix of the planetary rover can be initialised as follows:

(30)$${\bf P}_{mm} = {\bf G}_{\hat{X}_{G}}{\bf P}_{RR}{\bf G}_{\hat{X}_{G}}^{T} + {\bf G}_{z_{j}}{\bf R}_{LL}{\bf G}_{z_{j}}^{T}, \qquad {\bf P}_{mx} = {\bf G}_{\hat{X}_{G}}{\bf P}_{Rx} = {\bf G}_{\hat{X}_{G}}\begin{bmatrix} {\bf P}_{RR} & {\bf P}_{\hat{X}_{G}{\rm M}} \end{bmatrix}$$

where $M$ in ${\bf P}_{\hat{X}_{G}{\rm M}}$ denotes the initial collection of map features and is only a label for the initialisation map; the symbol $L$ will be used to represent the global map once the features are added to the state equation. Finally, the augmented matrices of the system state and variance become:

(31)$${\bf X}_{R}(k) \rightarrow \begin{bmatrix} {\bf X}_{R}(k) \\ L_{k} \end{bmatrix}, \quad L_{k} = \begin{bmatrix} m_{1_{G}} \\ m_{2_{G}} \\ \vdots \\ m_{N_{G}} \end{bmatrix}, \qquad {\bf P}_{RR}(k) \rightarrow \begin{bmatrix} {\bf P}_{RR}(k) & {\bf P}_{mx}^{T} \\ {\bf P}_{mx} & {\bf P}_{mm} \end{bmatrix}$$
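A sketch of the augmentation step of Equations (29)-(31) is given below; the Jacobians are passed in as arguments because their detailed form depends on the observation model, and the partitioning index is illustrative.

import numpy as np

def augment_state(X, P, m_new_G, G_X, G_z, R_LL, n_pose=6):
    """Append a new feature to the state and covariance, Equations (29)-(31).

    m_new_G : new feature position in the global frame (3-vector)
    G_X     : Jacobian of g(.) w.r.t. the rover pose (3 x n_pose)
    G_z     : Jacobian of g(.) w.r.t. the measurement (3 x 2)
    R_LL    : measurement noise covariance (2 x 2)
    """
    X_aug = np.concatenate([X, m_new_G])
    P_RR = P[:n_pose, :n_pose]
    # Covariance of the new feature itself, Equation (30)
    P_mm = G_X @ P_RR @ G_X.T + G_z @ R_LL @ G_z.T
    # Cross covariance with the existing state, Equation (30)
    P_mx = G_X @ P[:n_pose, :]
    # Augmented covariance, Equation (31)
    P_aug = np.block([[P, P_mx.T],
                      [P_mx, P_mm]])
    return X_aug, P_aug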

3.3. State updates and revisit

The predicted observed value $\hat{z}(k \vert k-1)$ is given in Section 3.2. When the observed value of the landmark feature is obtained by the sensor of the rover, the residual error ${\bf r}(k)$ between the real measured value ${\bf z}(k)$ and the predicted observed value $\hat{z}(k \vert k-1)$ can be expressed as:

(32)$${\bf r}(k) = {\bf z}(k) - \hat{z}(k \vert k-1)$$

Therefore, the covariance matrix ${\bf S}(k)$ of the residual error of the observed value is:

(33)$${\bf S}(k) = \nabla h_{X}(k)\,{\bf P}(k \vert k-1)\,\nabla h_{X}^{T}(k) + {\bf Q}_{v}(k)$$

where $\nabla h_{X}(k)$ is the Jacobian matrix of the observation model with respect to the state variables and ${\bf Q}_{v}(k)$ is the covariance matrix of the observation noise. It is assumed that only one feature point is observed during each observation process, so $\nabla h_{X}(k)$ can be expressed as:

(34)$$\nabla h_{X}(k) = \left.\frac{\partial h(k)}{\partial {\bf X}}\right\vert_{{\bf X} = \hat{{\bf X}}(k \vert k-1)} = \begin{bmatrix} \nabla h_{R}(k) & 0 & \cdots & 0 & \nabla h_{j}(k) & 0 & \cdots \end{bmatrix}$$

where $\nabla h_{R}(k)$ is the Jacobian matrix of the observation model with respect to the position and attitude variables of the rover and $\nabla h_{j}(k)$ is the Jacobian matrix of the observation model with respect to the feature point state.

After the observed variables are associated with the map state and the feature point state, the estimated state variables and the covariance matrix of the state variables can be updated by using the gain matrix W(k). The process is:

(35)$$\hat{{\bf X}}(k) = \hat{{\bf X}}(k \vert k-1) + {\bf W}(k){\bf r}(k)$$
(36)$${\bf P}(k) = {\bf P}(k \vert k-1) - {\bf W}(k){\bf S}(k){\bf W}^{T}(k)$$
(37)$${\bf W}(k) = {\bf P}(k \vert k-1)\nabla_{X}h^{T}(k){\bf S}^{-1}(k)$$

The revisit process is similar to the EKF update process and is the most important step.

It is assumed that the feature point $m_{i_{C}}$ observed by the rover at time $k$ is associated with the feature point $m_{j_{G}}$ in the map, where $m_{j_{G}}$ was added to the map at time $(k - w)$ and $w$ denotes a certain time interval. Therefore, the rover is able to update the EKF state by combining the information from $m_{j_{G}}$ and $m_{i_{C}}$, which reduces the cumulative uncertainty of the state estimation from time $(k - w)$ to time $k$.
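A minimal sketch of the update of Equations (32)-(37) follows; it assumes a single observed feature and that the sparse observation Jacobian of Equation (34) has already been assembled.

import numpy as np

def ekf_update(X, P, z, z_pred, H, Q_v):
    """EKF update, Equations (32)-(37).

    H   : observation Jacobian evaluated at the predicted state, Equation (34)
    Q_v : observation noise covariance
    """
    r = z - z_pred                      # residual, Equation (32)
    S = H @ P @ H.T + Q_v               # residual covariance, Equation (33)
    W = P @ H.T @ np.linalg.inv(S)      # gain, Equation (37)
    X_new = X + W @ r                   # state update, Equation (35)
    P_new = P - W @ S @ W.T             # covariance update, Equation (36)
    return X_new, P_new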

4. SLAM WITH LOCAL SUBMAPS

The computational complexity is proportional to the square of the number of feature points contained in the state variables in a single filtering process. The local submap method can reduce the computational complexity of SLAM by updating the covariance of the whole system at a suitable interval (Wang et al., 2018). The local submaps are updated independently before joining the global map. The estimation of the local map is independent of the estimation of the global map; therefore, the cross covariance between the local states and the global states is zero. Because of this independence, the global estimate is not changed during the local update, which decreases the computational complexity of the prediction process. First, an independent local submap is established for the feature points observed by the camera in the early stage. Then, when the number of feature points contained in the submap reaches a certain threshold, namely, the submap reaches a certain scale, the local submap is converted into the global state vector, which constitutes the global map. Next, a new local submap is initialised for new feature points. Finally, a number of observations are consistently integrated into the global map after a time interval, which reduces the uncertainty of the state variables of the rover.

4.1. System state variables

The system state variables of the local submap method are:

(38)$${\bf X}_{lS}(k) = \begin{bmatrix} {}^{G}{\bf X}_{S}(k) \\ {}^{G}{\bf X}_{i}(k) \\ {}^{S}{\bf X}_{R}(k) \\ {}^{S}{\bf X}_{i}(k) \end{bmatrix}$$

where ${}^{G}{\bf X}_{S}(k)$ is the position and attitude vector of the local submap in the global map, ${}^{G}{\bf X}_{i}(k)$ is the position of the feature point in the global map, ${}^{S}{\bf X}_{R}(k)$ is the position and attitude vector of the rover in the local submap and ${}^{S}{\bf X}_{i}(k)$ is the position of the feature point in the local submap. ${}^{G}{\bf X}_{m}(k)$ and ${}^{S}{\bf X}_{m}(k)$ are used to represent the collection of feature points in the global map and in the local submap, respectively.

4.2. System state estimation process

The state covariance matrix includes the covariances of the global state variables and the local state variables and the cross covariance between them. However, the cross covariance between the global state variables and the local state variables is zero. Therefore, the state covariance matrix ${\bf P}_{lS}(k)$ can be represented as:

(39)$${\bf P}_{lS}(k) = \begin{bmatrix} {}^{G}{\bf P}_{SS}(k) & {}^{G}{\bf P}_{mS}(k) & 0 & 0 \\ {}^{G}{\bf P}_{Sm}(k) & {}^{G}{\bf P}_{mm}(k) & 0 & 0 \\ 0 & 0 & {}^{S}{\bf P}_{RR}(k) & {}^{S}{\bf P}_{Rm}(k) \\ 0 & 0 & {}^{S}{\bf P}_{mR}(k) & {}^{S}{\bf P}_{mm}(k) \end{bmatrix}$$

where ${}^{G}{\bf P}_{SS}(k)$ is the covariance matrix of the position estimate of the local submap in the global reference and ${}^{S}{\bf P}_{RR}(k)$ is the covariance matrix of the position and attitude estimate of the rover in the local reference.

The predicted state value is then:

(40)$${\bf X}_{lS}(k \vert k-1) = \begin{bmatrix} {}^{G}{\bf X}(k-1) \\ f({}^{S}{\bf X}_{R}(k-1), {\bf u}(k)) \\ {}^{S}{\bf X}_{m}(k-1) \end{bmatrix}$$

The covariance matrix is propagated with the rover model during the prediction process. The global estimate is not changed during the local update, because the local submaps are updated independently before joining the global map; the computational complexity is reduced because of this independence. ${\bf P}_{lS}(k \vert k-1)$ is given as:

(41)$${\bf P}_{lS}(k \vert k-1) = \begin{bmatrix} {}^{G}{\bf P}(k-1) & 0 & 0 \\ 0 & \nabla f_{X}(k)\,{}^{S}{\bf P}_{RR}(k-1)\,\nabla f_{X}^{T}(k) + {\bf Q}(k) & \nabla f_{X}(k)\,{}^{S}{\bf P}_{Rm}(k-1) \\ 0 & \bigl(\nabla f_{X}(k)\,{}^{S}{\bf P}_{Rm}(k-1)\bigr)^{T} & {}^{S}{\bf P}_{mm}(k-1) \end{bmatrix}$$

The predicted observation $\hat{z}(k \vert k-1)$ in the local submap method is:

(42)$$\hat{z}(k \vert k-1) = h\bigl({}^{S}\hat{{\bf X}}_{R}(k-1), {}^{S}\hat{{\bf X}}_{i}(k-1)\bigr)$$

The observation residuals and the covariance are:

(43)$${\bf r}(k) = z(k) - \hat{z}(k \vert k-1)$$
(44)$${\bf S}(k) = \nabla h_{X}(k)\,{}^{S}{\bf P}(k \vert k-1)\,\nabla h_{X}^{T}(k) + {\bf Q}_{v}(k)$$

Finally, the update process can be written as:

(45)$$\hat{{\bf X}}_{lS}(k) = \hat{{\bf X}}_{lS}(k \vert k-1) + {\bf W}(k){\bf r}(k)$$
(46)$${\bf P}_{lS}(k) = {\bf P}_{lS}(k \vert k-1) - {\bf W}(k){\bf S}(k){\bf W}^{T}(k)$$
(47)$${\bf W}(k) = {\bf P}_{lS}(k \vert k-1)\nabla_{X}h^{T}(k){\bf S}^{-1}(k)$$

The feature points contained in the local reference are far fewer than those contained in the global reference, therefore the computational complexity is reduced significantly. ${\bf P}_{lS}(k)$ and ${\bf W}(k)$ are given as:

(48)$${\bf P}_{lS}(k) = \begin{bmatrix} {}^{G}{\bf P}(k \vert k-1) & 0 \\ 0 & {}^{S}{\bf P}(k \vert k-1) - {\bf W}(k){\bf S}(k){\bf W}^{T}(k) \end{bmatrix}$$
(49)$${\bf W}(k) = {}^{S}{\bf P}(k \vert k-1)\nabla h_{X}^{T}(k){\bf S}^{-1}(k)$$
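Because the cross covariance between the global and local blocks is zero, the update of Equations (43)-(49) only touches the local block. A sketch is given below; the partitioning index n_global is illustrative.

import numpy as np

def submap_update(X_lS, P_lS, z, z_pred, H_S, Q_v, n_global):
    """Local-submap EKF update, Equations (43)-(49).

    The first n_global entries of the state are the global block, the rest the local block.
    H_S is the observation Jacobian with respect to the local block only.
    """
    P_S = P_lS[n_global:, n_global:]                 # local covariance block
    r = z - z_pred                                   # Equation (43)
    S = H_S @ P_S @ H_S.T + Q_v                      # Equation (44)
    W = P_S @ H_S.T @ np.linalg.inv(S)               # Equation (49)
    X_new = X_lS.copy()
    X_new[n_global:] = X_lS[n_global:] + W @ r       # Equation (45), local block only
    P_new = P_lS.copy()
    P_new[n_global:, n_global:] = P_S - W @ S @ W.T  # Equations (46) and (48)
    return X_new, P_new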

4.3. The transformation from local submap to global map

The submap needs to be transformed into the global map when it reaches a certain scale. Then a new submap is established and initialised. It is assumed that ${\bf T}_{S}^{G}(k)$ is the transformation matrix from the local reference to the global reference:

(50)$${}^{G}\hat{{\bf X}}_{lS}(k) = {\bf T}_{S}^{G}(k)\hat{{\bf X}}_{lS}(k)$$
(51)$${}^{G}{\bf P}_{lS}(k) = \nabla {\bf T}_{S}^{G}(k)\,{\bf P}_{lS}(k)\,\bigl(\nabla {\bf T}_{S}^{G}(k)\bigr)^{T}$$

Global variables can be written as:

(52)$${}^{G}\hat{{\bf X}}_{lS}(k) = \begin{bmatrix} {}^{G}\hat{{\bf X}}_{S}(k) \\ {}^{G}\hat{{\bf X}}_{i}(k) \end{bmatrix} = {\bf T}_{S}^{G}(k)\begin{bmatrix} {}^{G}\hat{{\bf X}}_{S}(k) \\ {}^{S}\hat{{\bf X}}_{i}(k) \end{bmatrix}$$

where ${\bf T}_{S}^{G}(k)$ is the transformation matrix:

(53)$${\bf T}_{S}^{G}(k) = \begin{bmatrix} {\bf I}_{6} & 0 \\ \begin{bmatrix} {\bf I}_{3} & 0 \end{bmatrix} & {\bf C}_{S}^{G}(k) \end{bmatrix}$$

where ${\bf C}_{S}^{G}(k)$ is the transformation matrix from the local coordinate system to the global coordinate system. The covariance matrix of the global variables can be expressed as:

(54)$${}^{G}{\bf P}_{lS}(k) = \nabla {\bf T}_{S}^{G}(k)\begin{bmatrix} {}^{G}{\bf P}_{SS}(k) & 0 \\ 0 & {}^{S}{\bf P}_{ii}(k) \end{bmatrix}\bigl(\nabla {\bf T}_{S}^{G}(k)\bigr)^{T}$$

where $\nabla {\bf T}_{S}^{G}(k)$ is the Jacobian matrix of the transformation with respect to the state variables, namely:

(55)$$\nabla {\bf T}_{S}^{G}(k) = \frac{\partial {\bf T}_{S}^{G}(k)}{\partial \bigl({}^{G}\hat{{\bf X}}_{S}(k), {}^{S}\hat{{\bf X}}_{i}(k)\bigr)}$$
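A sketch of the map-joining step of Equations (50)-(54) is given below for a single feature; it assumes the block structure of Equation (53), and the derivative of the rotated feature with respect to the submap attitude is deliberately omitted from the Jacobian, so this is a simplified illustration rather than the full Equation (55).

import numpy as np

def join_submap(X_S_global, X_i_submap, C_S_G, P_SS, P_ii):
    """Transform a submap feature into the global map, Equations (50)-(54).

    X_S_global : submap pose in the global frame [x, y, z, phi_x, phi_y, phi_z]
    X_i_submap : feature position in the submap frame (3-vector)
    C_S_G      : rotation from the submap frame to the global frame (3 x 3)
    P_SS, P_ii : covariances of the submap pose (6 x 6) and of the feature (3 x 3)
    """
    # Global feature position: submap origin plus rotated local position, Equation (52)
    X_i_global = X_S_global[:3] + C_S_G @ X_i_submap
    # Simplified Jacobian of the transformation (attitude derivative omitted)
    J = np.block([[np.eye(6), np.zeros((6, 3))],
                  [np.hstack([np.eye(3), np.zeros((3, 3))]), C_S_G]])
    P = np.block([[P_SS, np.zeros((6, 3))],
                  [np.zeros((3, 6)), P_ii]])
    P_global = J @ P @ J.T                           # Equation (54)
    return X_i_global, P_global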

4.4. Estimation constraints for local feature points

The estimates of the feature points in the global coordinate system cannot be applied to the update process in the local coordinate system without connecting the two coordinate systems. Moreover, local feature points would be computed multiple times during the coordinate transformation process. In order to maintain the effectiveness of the algorithm, the independent estimates of the feature points should be combined and applied to the global estimation process.

A consistent estimate of the feature point states can be obtained through the known constraints between the common states. The independent estimates of the feature point states are integrated into the global coordinate system by the constraints between the state variables, which generates a consistent global estimation result. The constraint process can be reduced to a weighted projection of the common states, which can be described as follows:

(56)$${\bf C}(\hat{{\bf X}}(k)) = {\bf b}$$

An EKF algorithm can be used for approximate estimation, therefore the result of the constraint is:

(57)$$\hat{{\bf X}}_{c}(k) = \hat{{\bf X}}(k) + {\bf W}_{c}(k)\bigl({\bf b} - {\bf C}(k)\hat{{\bf X}}(k)\bigr)$$
(58)$${\bf P}_{c}(k) = {\bf P}(k) - {\bf W}_{c}(k){\bf S}_{c}(k){\bf W}_{c}^{T}(k)$$

where:

(59)$${\bf W}_{c}(k) = {\bf P}(k)\nabla {\bf C}^{T}(k){\bf S}_{c}^{-1}(k)$$
(60)$${\bf S}_{c}(k) = \nabla {\bf C}(k){\bf P}(k)\nabla {\bf C}^{T}(k)$$

The constraints can be described as:

(61)$${\bf C}(k)\begin{bmatrix} {}^{G}{\bf X}_{S}(k) \\ {}^{G}{\bf X}_{i}(k) \\ {}^{S}{\bf X}_{R}(k) \\ {}^{S}{\bf X}_{i}(k) \end{bmatrix} = {\bf 0}$$

where:

(62)$${\bf C}(k) = \begin{bmatrix} -{\bf I} & {\bf I} & {\bf 0} & -{\bf I} \end{bmatrix}$$

The transformation of local feature points into the global map can then be carried out by filtering with Equations (57)–(60), where ${\bf b} = {\bf 0}$.
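A sketch of the constraint-filtering step of Equations (57)-(60) is given below, with b = 0 and a linear constraint matrix as in Equation (62); the function name is illustrative.

import numpy as np

def apply_constraint(X, P, C, b=None):
    """Constrained projection of the estimate, Equations (57)-(60).

    C : linear constraint matrix such that C @ X should equal b, Equation (62)
    b : constraint value, zero by default as in Equation (61)
    """
    if b is None:
        b = np.zeros(C.shape[0])
    S_c = C @ P @ C.T                        # Equation (60)
    W_c = P @ C.T @ np.linalg.inv(S_c)       # Equation (59)
    X_c = X + W_c @ (b - C @ X)              # Equation (57)
    P_c = P - W_c @ S_c @ W_c.T              # Equation (58)
    return X_c, P_c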

5. BODY-FIXED SLAM WITH LOCAL SUBMAPS

The errors in the linearization of the observation equation and of the rover's motion model are the main cause of inconsistency. Therefore, reducing the linearization error is an important research topic, since it reduces the uncertainty of the estimation process and increases the consistency of the algorithm.

The rover body-fixed SLAM algorithm with local submaps is presented in this section. The traditional EKF-SLAM algorithm is established in the absolute coordinate system, which increases the linearization error because the accumulated uncertainty of the rover's pose and of the feature point positions is introduced into the observation equation. In the body-fixed SLAM algorithm, the feature points are updated in the coordinate system of the planetary rover; namely, only the uncertainty of the binocular camera and the IMU measurement noise are introduced into the linearization of the observation equation, which reduces the linearization errors. In addition, a better result can be obtained by combining the local submaps and the transformation of coordinates.

5.1. The state prediction process

The coordinate system of the rover is considered as the reference coordinate in this section. The state variables of the system can be described as follows:

(63)$${}^{R}{\bf X} = \begin{bmatrix} {}^{R}{\bf X}_{B} \\ {}^{R}{\bf X}_{1} \\ \vdots \\ {}^{R}{\bf X}_{n_{f}} \end{bmatrix}, \qquad {}^{R}{\bf P} = \begin{bmatrix} {}^{R}{\bf P}_{BB} & \ldots & {}^{R}{\bf P}_{Bn_{f}} \\ \vdots & \ddots & \vdots \\ {}^{R}{\bf P}_{n_{f}B} & \ldots & {}^{R}{\bf P}_{n_{f}n_{f}} \end{bmatrix}$$

where ${}^{R}{\bf X}_{B}$ is the vector of the absolute coordinate frame expressed in the coordinate system of the rover, ${}^{R}{\bf P}_{BB}$ is the covariance of ${}^{R}{\bf X}_{B}$, $n_{f}$ denotes the number of feature points and ${}^{R}{\bf X}_{i}$, $i = 1, \cdots, n_{f}$, is the position of the $i$-th feature point in the coordinate system of the rover.

The position of the feature point in the body coordinate at time k can be written as:

(64)$${}^{R}{\bf X}_{i}(k) = {}^{R}{\bf X}_{k-1}^{k}(k) \oplus {}^{R}{\bf X}_{i}(k-1)$$

where ⊕ denotes the vector sum including coordinate transformation and position shift. The pose observed by the IMU is considered as an independent feature point, which is added to the state variables. The predicted values of the state variables and covariance matrix can be described as follows:

(65)$${}^{R}\hat{{\bf X}}(k \vert k-1) = \begin{bmatrix} {}^{R}\hat{{\bf X}}(k-1) \\ {}^{R}\hat{{\bf X}}_{k-1}^{k}(k \vert k-1) \end{bmatrix}, \qquad {}^{R}{\bf P}(k \vert k-1) = \begin{bmatrix} {}^{R}\hat{{\bf P}}(k-1) & 0 \\ 0 & {\bf Q}_{IMU} \end{bmatrix}$$

where QIMU is the covariance matrix of the observation noise.
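A sketch of the body-fixed prediction of Equation (65) follows: the IMU-measured pose change over the interval is appended to the state and the covariance is extended block-diagonally. The 6-DOF pose-change representation assumed here is for illustration only.

import numpy as np

def body_fixed_predict(X_R, P_R, dX_imu, Q_imu):
    """Body-fixed prediction, Equation (65).

    dX_imu : pose change over the interval measured by the IMU (6-vector)
    Q_imu  : covariance of the IMU measurement noise (6 x 6)
    """
    X_pred = np.concatenate([X_R, dX_imu])            # append pose change to the state
    n = P_R.shape[0]
    P_pred = np.zeros((n + 6, n + 6))
    P_pred[:n, :n] = P_R                               # existing covariance
    P_pred[n:, n:] = Q_imu                             # block-diagonal extension
    return X_pred, P_pred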

5.2. The state update process

The positions of the feature points need to be transformed into body coordinates from global coordinates. The estimated value of the observation $\hat{z}(k \vert k-1)$ is:

(66)$$\hat{z}(k \vert k-1) = h\bigl({}^{R}\hat{{\bf X}}_{i},\ {}^{R}\hat{{\bf X}}_{k-1}^{k}\bigr)$$

The residual error ${\bf r}(k)$ between the real measured value ${\bf z}(k)$ and the predicted value $\hat{{\bf z}}(k \vert k-1)$ is:

(67)$${\bf r}(k) = {\bf z}(k) - \hat{{\bf z}}(k \vert k-1)$$

The covariance matrix of the residual error of observation variable can be described as:

(68)$${\bf S}(k) = \nabla h_{R_{X}}(k)\,{\bf P}(k \vert k-1)\,\nabla h_{R_{X}}^{T}(k) + {\bf Q}_{v}(k)$$

The whole state update process is similar to those in the previous sections. The covariance matrix after augmentation, which should be noted, is:

(69)$${}^{R}{\bf P}(k) = \begin{bmatrix} {}^{R}{\bf P}(k \vert k-1) & 0 \\ 0 & {\bf Q}_{v} \end{bmatrix}$$

5.3. The state merging process

The positions of the feature points and of the global coordinate frame, expressed in the body coordinate system, should be updated according to the pose change:

(70)$${}^{R}\hat{{\bf X}}_{B}(k) = {}^{R}\hat{{\bf X}}_{B}(k-1) \oplus {}^{R}\hat{{\bf X}}_{k-1}^{k}(k)$$
(71)$${}^{R}\hat{{\bf X}}_{i}(k) = {}^{R}\hat{{\bf X}}_{i}(k-1) \oplus {}^{R}\hat{{\bf X}}_{k-1}^{k}(k)$$

Meanwhile the update process of the state covariance can be obtained as:

(72)$${}^{R}{\bf P}_{BB}(k) = \nabla {}^{R}\hat{{\bf X}}_{B}\begin{bmatrix} {}^{R}{\bf P}_{BB}(k) & 0 \\ 0 & {}^{R}{\bf P}_{\hat{X}_{k-1}^{k}}(k) \end{bmatrix}\bigl(\nabla {}^{R}\hat{{\bf X}}_{B}\bigr)^{T}$$
(73)$${}^{R}{\bf P}_{ii}(k) = \nabla {}^{R}\hat{{\bf X}}_{i}\begin{bmatrix} {}^{R}{\bf P}_{ii}(k) & 0 \\ 0 & {}^{R}{\bf P}_{\hat{X}_{k-1}^{k}}(k) \end{bmatrix}\bigl(\nabla {}^{R}\hat{{\bf X}}_{i}\bigr)^{T}$$

where $\nabla {}^{R}\hat{{\bf X}}_{B}$ and $\nabla {}^{R}\hat{{\bf X}}_{i}$ are the Jacobian matrices of the estimated variables ${}^{R}\hat{{\bf X}}_{B}$ and ${}^{R}\hat{{\bf X}}_{i}$ used during the update process. $\nabla {}^{R}\hat{{\bf X}}_{B}$ can be described as follows:

(74)$$\nabla {}^{R}\hat{{\bf X}}_{B} = \left[\frac{\partial {}^{R}\hat{{\bf X}}_{B}(k)}{\partial {}^{R}\hat{{\bf X}}_{B}(k-1)} \quad \frac{\partial {}^{R}\hat{{\bf X}}_{B}(k)}{\partial {}^{R}\hat{{\bf X}}_{k-1}^{k}(k)}\right]$$

The merging process can be completed through Equations (71)–(74).
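The ⊕ compounding of Equations (64) and (70)-(71) can be illustrated for a single point as below; the convention that ⊕ re-expresses a point in the new body frame after a pose change (R, t) is an assumption made here for the sketch, since the paper does not define the operator explicitly.

import numpy as np

def oplus_point(dX, p):
    """One interpretation of the oplus operator of Equations (64) and (70)-(71):
    re-express a point p, given in the previous body frame, in the new body frame
    after a rover pose change dX = (rotation R, translation t).
    This convention is an assumption for illustration only.
    """
    R, t = dX
    return R.T @ (p - t)

# Example: the rover advances 1 m along x and yaws by 10 degrees;
# a feature 5 m ahead then appears closer and rotated in the new body frame.
yaw = np.deg2rad(10.0)
R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
              [np.sin(yaw),  np.cos(yaw), 0.0],
              [0.0,          0.0,         1.0]])
t = np.array([1.0, 0.0, 0.0])
p_old = np.array([5.0, 0.0, 0.0])
p_new = oplus_point((R, t), p_old)   # approximately [3.94, -0.69, 0.0]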

5.4. The constraints for body-fixed SLAM with submaps

The state variables and covariance matrix are defined as follows, corresponding to the local submaps method:

(75)$${}^{R}{\bf X}_{l} = \begin{bmatrix} {}^{R}{\bf X}_{G} \\ {}^{R}{\bf X}_{s} \end{bmatrix}, \qquad {}^{R}{\bf P}_{l} = \begin{bmatrix} {}^{R}{\bf P}_{G} & 0 \\ 0 & {}^{R}{\bf P}_{s} \end{bmatrix}$$

The submaps are integrated into the body-fixed global map when they reach a certain scale, and the process is shown in Section 3.

Only the changes in state variables and relative positions of the feature points are updated in the body-fixed SLAM algorithm. Equation (75) can be written as follows:

(76)$${}^{R}{\bf X}_{l} = \begin{bmatrix} {}^{R}{\bf X}_{B}^{G} \\ {}^{R}{\bf X}_{i}^{G} \\ {}^{R}{\bf X}_{l}^{s} \\ {}^{R}{\bf X}_{i}^{s} \end{bmatrix}, \qquad {}^{R}{\bf P}_{l} = \begin{bmatrix} {}^{R}{\bf P}_{G} & 0 \\ 0 & {}^{R}{\bf P}_{s} \end{bmatrix}$$

where ${}^{R}{\bf X}_{l}^{s}$ is the accumulated value of the state changes in the submap. ${}^{R}{\bf X}_{i}^{G}$ and ${}^{R}{\bf X}_{i}^{s}$ represent the same feature point in the body-fixed coordinate system and in the local submap, so we have the following constraint:

(77)$${\bf c}\,{}^{R}{\bf X}_{l} = {\bf b}$$

where ${\bf c} = \begin{bmatrix} 0 & {\bf I} & {\bf I} & -{\bf I} \end{bmatrix}$ and ${\bf b} = 0$, because the same feature point, expressed in different coordinate systems, must coincide after coordinate transformation. The filtering process was illustrated in Section 3.

6. SIMULATION RESULTS

Simulation results are analysed in this section. It is assumed that the trajectory of the planetary rover is a circle, so that the modified EKF-SLAM can be analysed more conveniently. The Three-Dimensional (3D) trajectory of the planetary rover with the terrain constraint is shown in Figure 5, where the solid pink line is the trajectory of the centre of mass of the rover and the blue dotted lines are the trajectories of the contact point of the wheels. Meanwhile, the trajectories of the contact point of the wheels are enlarged in the upper-left figure in blue dotted lines.

Figure 5. Three-dimensional trajectory of the planetary rover with the terrain constraint.

200 landmarks represented as asterisks are distributed randomly in the map.

The simulation parameters used in the following simulation process are shown in Table 1.

Table 1. The setting of simulation parameters.

Notice: SD denotes Standard Deviation, RN denotes Random Noise.

The true path and the estimated trajectory by standard EKF-SLAM are shown in Figure 6, where the bold pink line represents the planetary rover's planned trajectory and the fine blue line represents the estimated result by EKF-SLAM. The green points denote feature points in the map, while the red points denote the observed feature points and the black ellipses around the red points denote the uncertainty ellipses.

Figure 6. Simulation results of the true path and the estimated trajectory by standard EKF-SLAM algorithm.

The position error curve and the attitude error curve of the rover based on standard EKF-SLAM are shown in Figures 7 and 8, respectively, which are drawn in blue lines. Meanwhile, the green lines in Figures 7 and 8 are the covariance curves of the variables, namely, position covariance curve and attitude covariance curve.

Figure 7. The position error curve of the rover (Standard EKF-SLAM).

Figure 8. The attitude error curve of the rover (Standard EKF-SLAM).

The position error is less than 1 m and the attitude error is less than 0.5°, as can be seen in the following simulation results. The trace of the covariance of the feature points is shown in Figure 9; when a new feature point is observed by the rover, the trace increases immediately.

Figure 9. The trace of the covariance of feature points (Standard EKF-SLAM).

In the following part, the body-fixed SLAM algorithm with local submaps is analysed. The actual motion trajectory of the rover and the sensor parameters are the same as above in Table 1. The true path and the estimated trajectory based on body-fixed SLAM with local submaps are shown in Figure 10, where the bold pink line denotes the actual trajectory and the fine blue line denotes the estimated trajectory. The green points denote feature points in the map, while the red points denote the observed feature points and the black ellipses around the red points denote the uncertainty ellipses.

Figure 10. Simulation results of the true path and the estimated trajectory by body-fixed SLAM algorithm with local submaps.

The position error curves and the attitude error curves of the rover based on body-fixed SLAM are shown in Figures 11 and 12, respectively, which are drawn in blue lines. Meanwhile, the green lines in Figures 11 and 12 are the covariance curves of the variables, namely, position covariance curve and attitude covariance curve.

Figure 11. The position error curve of the rover (Body-fixed SLAM).

Figure 12. The attitude error curve of the rover (Body-fixed SLAM).

The estimated error of the position is kept within 1 m while the estimated error of the attitude is kept within 0.5°, as shown in Figures 11 and 12. The accuracy of body-fixed SLAM is close to that of standard EKF-SLAM. Moreover, the uncertainty ellipses in Figure 10 are smaller than those in Figure 6. Therefore, the uncertainty of the rover's motion in the global map is reduced because of the constraint filtering and the local submaps.

The Standard Deviation (SD) curve of the local position and azimuth deviation of the rover is shown in Figure 13. Compared with the standard EKF-SLAM result in Figure 9, the SD decreases after each period of time when the body-fixed SLAM algorithm is used. The cumulative values of the feature points contained in the state variables of the standard EKF-SLAM and the body-fixed SLAM with local submaps are shown in Figure 14.

Figure 13. The Standard Deviation (SD) curve of the local position and azimuth deviation of the rover (Body-fixed SLAM).

Figure 14. The cumulative values of the feature points contained in the state variables of standard EKF-SLAM and body-fixed SLAM with local submaps.

The computational complexity is proportional to the square of the number of feature points contained in the state variables. Therefore, the computational complexity is reduced significantly in body-fixed SLAM, because only the local submaps, which contain a small number of feature points, need to be updated in a single filtering process. In addition, the computational efficiency is improved correspondingly. Comparison results for the two algorithms are listed in Table 2, from which the difference between them can be seen clearly. The proposed method has the better performance.

Table 2. Some comparison results of the two algorithms.

7. CONCLUSIONS

In this paper, a body-fixed SLAM method with local submaps for planetary surface exploration is proposed. The changes of the rover's position are taken as the state variables before the EKF update, the EKF update is then carried out, and finally the positions of the feature points in the body-fixed coordinate system are updated. The simulation results show that the error of the position and attitude can be kept within acceptable ranges (1 m, 0.5°) for a planetary rover by the body-fixed SLAM with submaps. Moreover, lower computational complexity is obtained through the joint algorithm presented in this paper. The algorithm can be used for planetary rovers in exploration missions. In future research, the algorithm presented in this paper is also expected to be used in multi-spacecraft formation exploration missions (Zhuo et al., 2018a; 2018b; 2018c).

ACKNOWLEDGEMENTS

This paper has been completed with the guidance and help of Professor Franco Bernelli-Zazzera and Professor Mauro Massari of the Aerospace Engineering Department of Politecnico di Milano, and the authors would like to express their gratitude. This work is supported by the National Science Foundation of China (No. 61374213, No. 61573247) and by the Key Laboratory Opening Funding of Landing and Returning Control Technique of Deep Space Exploration (HIT.KLOF.2018072).

REFERENCES

Dissanayake, M.G., Newman, P., Clark, S., Durrant-Whyte, H.F. and Csorba, M. (2001). A solution to the simultaneous localization and map building (SLAM) problem. IEEE Transactions on Robotics and Automation, 17(3), 229–241.
Islam, M.R., Chowdhury, F.H., Rezwan, S., Ishaque, M.J., Akanda, J.U., Tuhel, A.S. and Riddhe, B.B. (2017). Novel design and performance analysis of a Mars exploration robot: Mars rover mongol pothik. In Research in Computational Intelligence and Communication Networks (ICRCICN), 132–136. Kolkata, India.
Joly, C. and Rives, P. (2009). Building consistent local submaps with omnidirectional SLAM. In Computer Vision Workshops, 2009 IEEE 12th International Conference on, 2180–2187. Kyoto, Japan.
Leonard, J.J. and Durrant-Whyte, H.F. (1991). Mobile robot localization by tracking geometric beacons. IEEE Transactions on Robotics and Automation, 7(3), 376–382.
Martinez-Cantin, R. and Castellanos, J.A. (2005). Unscented SLAM for large-scale outdoor environments. In Intelligent Robots and Systems, 2005 IEEE/RSJ International Conference on. Edmonton, Alberta, Canada.
Martinez-Cantin, R. and Castellanos, J.A. (2006). Bounding uncertainty in EKF-SLAM: The robocentric local approach. In Robotics and Automation, 2006. ICRA 2006. Proceedings 2006 IEEE International Conference on, 430–435. Orlando, FL, USA.
Meng, Y. and Hutao, C. (2014). A new approach based on crater detection and matching for visual navigation in planetary landing. Advances in Space Research, 53(12), 1810–1821.
Nüchter, A., Lingemann, K., Hertzberg, J. and Surmann, H. (2007). 6D SLAM—3D mapping outdoor environments. Journal of Field Robotics, 24(8–9), 699–722.
Paz, L.M. and Neira, J. (2006). Optimal local map size for EKF-based SLAM. In Intelligent Robots and Systems, 2006 IEEE/RSJ International Conference on, 5019–5025. Beijing, China.
Tan, W., Liu, H., Dong, Z., Zhang, G. and Bao, H. (2013). Robust monocular SLAM in dynamic environments. In Mixed and Augmented Reality (ISMAR), 2013 IEEE International Symposium, 209–218. Adelaide, SA, Australia.
Thrun, S., Montemerlo, M., Koller, D., Wegbreit, B., Nieto, J. and Nebot, E. (2004). FastSLAM: An efficient solution to the simultaneous localization and mapping problem with unknown data association. Journal of Machine Learning Research, 4(3), 380–407.
Wang, J., Song, J., Zhao, L. and Huang, S. (2018). A submap joining based RGB-D SLAM algorithm using planes as features. In Field and Service Robotics, 367–382. ETH Zurich, Switzerland.
Yen, J., Jain, A. and Balaram, J. (1999). ROAMS: Rover Analysis Modeling and Simulation Software. In Artificial Intelligence, Robotics and Automation in Space, 440, 249. Noordwijk, Netherlands.
Zhuo, Z., Yang, S., Zexu, Z. and Weisheng, Y. (2018a). New results on sliding-mode control for Takagi-Sugeno fuzzy multiagent systems. IEEE Transactions on Cybernetics, (99), 1–13.
Zhuo, Z., Zexu, Z. and Hui, Z. (2018b). Distributed attitude control for multi-spacecraft via Takagi-Sugeno fuzzy approach. IEEE Transactions on Aerospace and Electronic Systems, 54(2), 642–654.
Zhuo, Z., Yang, S., Zexu, Z., Hui, Z. and Sheng, B. (2018c). Modified order-reduction method for distributed control of multi-spacecraft networks with time-varying delays. IEEE Transactions on Control of Network Systems, 5(1), 79–92.