
Rapid star identification algorithm for fish-eye camera based on PPP/INS assistance

Published online by Cambridge University Press:  13 June 2022

Chonghui Li
Affiliation:
State Key Laboratory of Geo-information Engineering, Xi'an, Shaanxi, 710054, the People's Republic of China; Institute of Geospatial Information, Information Engineering University, Zhengzhou, Henan, 450001, the People's Republic of China
Yuanxi Yang
Affiliation:
State Key Laboratory of Geo-information Engineering, Xi'an, Shaanxi, 710054, the People's Republic of China
Guorui Xiao*
Affiliation:
Institute of Geospatial Information, Information Engineering University, Zhengzhou, Henan, 450001, the People's Republic of China
Zhanglei Chen
Affiliation:
Institute of Geospatial Information, Information Engineering University, Zhengzhou, Henan, 450001, the People's Republic of China
Shuai Tong
Affiliation:
Institute of Geospatial Information, Information Engineering University, Zhengzhou, Henan, 450001, the People's Republic of China
Zihao Liu
Affiliation:
Institute of Geospatial Information, Information Engineering University, Zhengzhou, Henan, 450001, the People's Republic of China
*Corresponding author. E-mail: xgr@whu.edu.cn

Abstract

The fish-eye star sensor, with a field of view (FOV) of 180°, is an important piece of equipment for attitude determination and significantly improves the visibility of stars. However, it also makes star identification (star-ID) difficult because of imprecise calibration. Thus, a fish-eye star-ID algorithm supported by the integration of precise point positioning and an inertial navigation system (PPP/INS) is proposed. First, a reference star map is generated from the position and attitude information given by PPP/INS, in combination with the distortion model of the fish-eye camera. Then, star points are extracted within a specific neighbourhood of the reference star points. Subsequently, the extracted star points are individually tested and identified according to the angular distance error. Finally, the real-time precise attitude is determined from the star-ID results. Experimental results show that 270–310 stars can be identified in a fish-eye star map in an average time of 0.03 s if the initial attitude error is smaller than 1.5°, and that an attitude determination accuracy better than 10″ can be achieved with the support of PPP/INS.

Research Article

Copyright © The Author(s), 2022. Published by Cambridge University Press on behalf of The Royal Institute of Navigation

1 Introduction

The celestial navigation system (CNS) is an important autonomous navigation method. CNS determines the position and attitude of a carrier by using the sun, moon, planets, stars and other natural celestial bodies as navigation beacons, observing the horizontal coordinates of the selected bodies. The equipment for CNS is simple and not susceptible to interference, and the positioning error does not accumulate with time (Vulfovich and Fogilev, 2010). CNS has been widely used in the attitude determination of spacecraft and remains one of the core methods for attitude determination of space carriers. With the remarkable improvement in the observation capability of CNS equipment, CNS is also used for attitude determination on near-Earth carriers such as ships, vehicles and aircraft. However, shooting star maps inside the atmosphere is inevitably affected by weather conditions, resulting in discontinuous output of celestial navigation results. Therefore, a feasible approach is to combine CNS with a global navigation satellite system (GNSS) and an inertial navigation system (INS) to form a GNSS/INS/CNS integrated navigation system (Quan et al., 2015). The role of CNS is not only to improve the positioning accuracy, but also to improve the accuracy of the attitude determination of the carrier (Christian and Crassidis, 2021), which is free of accumulated error.

In the atmosphere, celestial navigation is often affected by weather conditions, such as clouds and fog, which limits the availability of narrow field-of-view (FOV) star sensors. Expanding the FOV of the star sensor increases the number of visible stars and, accordingly, the observation redundancy, so the accuracy of positioning and attitude can be improved. A fish-eye camera is a type of imaging equipment developed from modern optical technologies to realise an ultra-large FOV of up to 180° (Kumler and Bauer, 2000), with which half of the celestial sphere can be imaged simultaneously. Even if there are clouds and fog in parts of the sky, attitude determination can be realised as long as a few stars can be photographed across the whole sky, which effectively improves the availability of celestial navigation. However, the fish-eye camera also introduces a large radial distortion while expanding the FOV, making it difficult to calibrate accurately (Kannala and Brandt, 2006) and increasing the difficulty of star identification (star-ID). Star-ID, the process of matching the stars captured in the star map with the reference stars in the navigation star database, is an intermediate but crucial step in the attitude determination of a star sensor (Na and Jia, 2006).

Star-ID algorithms are mainly divided into two categories according to whether a priori attitude information exists. The first category addresses the lost-in-space problem, that is, blind star identification without any prior attitude information (Somayehee et al., 2020). To realise matching recognition (Spratling and Daniele, 2009), this type of algorithm mostly adopts the principle of subgraph isomorphism, taking stars as vertices and the angular distances between stars as edges. That is, it regards the observed star map as a sub-graph of the entire star map and works from the magnitudes, mutual positions and geometric relationships of the stars. This category mainly includes the triangle, polygon and pyramid algorithms (Arani et al., 2015; Hernández et al., 2016; Zhu et al., 2018; Rijlaarsdam et al., 2020). The second category comprises iterative star-ID algorithms based on approximate attitude information obtained from the previous star map or from external attitude measurement equipment (Samaan et al., 2005; Liu et al., 2017). Most of these algorithms use the principle of map recognition to construct a unique feature for each observed star, often composed of the geometric distribution characteristics of other stars in a certain neighbourhood of that star; they mainly include the grid, singular-value, genetic, list and marking algorithms (Padgett and Kreutz-Delgado, 1997; Juang et al., 2003; Sun et al., 2016; Mehta et al., 2018; Kim and Cho, 2019).

Star-ID algorithms can also be divided into two categories according to the FOV of the star camera: those for ordinary small-FOV cameras and those for wide-angle cameras. The former target small-FOV star sensors, for which precise calibration is easier; the corresponding algorithms are therefore relatively simple and the research results relatively rich (Toloei et al., 2020). The latter target wide-angle cameras that are not precisely calibrated, for which research is relatively sparse (Leake et al., 2020). Rousseau et al. (2005) and Samaan and Mortari (2006) proposed two dimensionless methods based on the inner angles of spherical triangles. Although these methods have demonstrated good robustness to calibration errors and to inaccuracy of the projection equation, they are only suitable for small-FOV cameras with small distortion and a linear projection error function, in which case the angular distances between stars and the inner angles of a spherical triangle change only marginally. For large-FOV cameras with nonlinear projection imaging, however, the angular distances between stars and the inner angles of the spherical triangle can be significantly distorted. Another dimensionless star-ID algorithm using star cluster features has therefore been proposed (Lang et al., 2010), but it has not been tested with a large FOV, and the deformation caused by the nonlinear projection function was ignored. Mohamad et al. (2015) then proposed a star-ID algorithm for an uncalibrated large-FOV camera: for a selected triangle in the star map, the algorithm completes the recognition based on the intersection of triangle sets found in the navigation star database. With this algorithm, star-ID for an uncalibrated camera with a FOV of 114° ± 12° can be realised with a success rate of 94%. To date, however, there is no star-ID algorithm for a fish-eye camera with an ultra-large FOV and without precise calibration.

Aiming at the distortion problem caused by the ultra-large FOV of the fish-eye camera, this study investigates the identification of fish-eye star images in a GNSS/INS/CNS integrated navigation system and attempts to improve the fish-eye star-ID algorithm. Given the high cost and limited usable range of the commonly used RTK/INS combination, this paper exploits the highly efficient positioning mode of precise point positioning (PPP) technology (Xiao et al., 2019). PPP uses a single receiver to obtain absolute positioning results at the cm–dm level, supported by the precise satellite orbit and clock corrections provided by the IGS via the internet. The PPP service provided by the BeiDou navigation satellite system (BDS) is very helpful for users without internet access, because precise satellite orbit and clock products for GPS and BDS are broadcast via the B2b signals of the BDS geostationary satellites (Yang et al., 2019, 2020, 2021; Jin and Su, 2020). Furthermore, if PPP is integrated with INS, the approximate position and attitude information required by CNS can be provided and its accuracy improved.

This paper is structured as follows. In Section 2, the projection distortion model of the fish-eye camera is introduced. The generation of a reference star map according to the approximate position and attitude parameters provided by PPP/INS is discussed in Section 3. Section 4 describes the algorithm for the star point extraction and recognition based on the ‘hypothesis test’ mode. Section 5 presents the experimental analysis based on the measured star map. The conclusion is summarised in Section 6.

2 Projection distortion of fish-eye camera

To achieve ultra-large FOV imaging, the fish-eye camera adopts the idea of non-similar imaging, which introduces a large radial distortion of the target shape in the imaging process. Common projection models of fish-eye lenses include the equisolid-angle, equidistant, stereographic and orthographic projection models (Hughes et al., 2010). This study takes the equisolid-angle projection model as an example to analyse the distortion error of the fish-eye camera; the derivation for the other projection models is similar.

The projection formula of the equisolid-angle model is expressed as follows:

(1)\begin{equation}r = 2f \cdot \sin \left( {\frac{\theta }{2}} \right),\end{equation}

where $\theta$ is the semiangular field of the image point, f is the focal length of the fish-eye camera and r is the distance from the projection point in the image plane to the principal point of the image, where

(2)\begin{equation}r = \sqrt {{{(x - {x_0})}^2} + {{(y - {y_0})}^2}}. \end{equation}

By inverting Equation (1), the formula for calculating the semiangular field under the equisolid-angle projection is obtained as follows:

(3)\begin{equation}\theta = 2\arcsin \left( {\frac{r}{{2f}}} \right).\end{equation}

Even if the fish-eye lens nominally adopts the equisolid-angle projection, the actual projection formula differs from the one expressed above because of optical distortion in imaging. The distortion of a fish-eye camera generally includes radial distortion, eccentric distortion and image plane distortion (Tao et al., 2019). As shown in Figure 1, the ideal image point of the star point $\sigma$ imaged through the fish-eye camera is ${p_0}$. Radial distortion shifts the actual image point from ${p_0}$ to ${p_1}$ along the radial direction, and the radius correspondingly changes from ${r_0}$ to ${r_1}$. Eccentric distortion shifts the actual image point partly perpendicular to the radial direction, from ${p_0}$ to ${p_2}$: in addition to the change of radius (${r_0} \ne {r_2}$), there is an angle $\alpha$ between ${r_0}$ and ${r_2}$. The component along the radial direction is called asymmetric radial distortion, and the component perpendicular to the radial direction is called tangential distortion.

Figure 1. Distortion of the fish-eye camera

In the imaging distortion of a fish-eye camera, however, the radial distortion dominates, while the eccentric distortion and in-plane distortion are relatively minor. When only the radial distortion is considered, the error in the distance between the image point and the principal point can be expressed by the error of the semiangular field of the corresponding object point, as shown in Figure 2.

Figure 2. Radial distortion of fish-eye camera

A polynomial distortion model of the equisolid-angle projection for a fish-eye camera is expressed as follows:

(4)\begin{equation}\theta = 2\arcsin \left( {\frac{r}{{2f}}} \right) + {k_1}{\left( {\arcsin \left( {\frac{r}{{2f}}} \right)} \right)^2} + {k_2}{\left( {\arcsin \left( {\frac{r}{{2f}}} \right)} \right)^3} + {k_3}{\left( {\arcsin \left( {\frac{r}{{2f}}} \right)} \right)^4},\end{equation}

where ${k_1}$, ${k_2}$ and ${k_3}$ are the coefficients of the second-, third- and fourth-order distortion terms, respectively.
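As an illustration, the forward distortion model of Equation (4) can be evaluated directly. The following Python sketch uses hypothetical values for the focal length (expressed in pixel units) and the distortion coefficients; the actual values come from the calibration of the fish-eye camera.

```python
import numpy as np

# Hypothetical calibration values, for illustration only; real values
# must come from the calibration of the fish-eye camera.
F = 875.0                        # focal length f, expressed in pixel units
K1, K2, K3 = 1e-3, -5e-4, 2e-5   # 2nd-, 3rd- and 4th-order distortion coefficients

def semiangular_field(r, f=F, k1=K1, k2=K2, k3=K3):
    """Eq. (4): semiangular field theta (rad) of an image point at radial
    distance r (px) under the equisolid-angle projection with distortion."""
    u = np.arcsin(r / (2.0 * f))   # ideal half-angle term of Eq. (3)
    return 2.0 * u + k1 * u**2 + k2 * u**3 + k3 * u**4
```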

3 Generation of reference star images based on approximate position and attitude

When using the position and attitude parameters of the carrier to generate the reference star map, it is necessary to determine the sky area that the camera can image and then calculate the image coordinates of the navigation stars projected onto the imaging plane according to the projection distortion model. If the coordinated universal time (UTC) of the star map is T and the longitude and latitude of the carrier determined by PPP/INS are (${L_b}$, ${B_b}$), the transformation matrix ${C_{ih}}$ between the horizontal coordinate system and the equatorial inertial coordinate system can be calculated. Using the attitude matrix ${C_{hb}}$ of the carrier in the horizontal coordinate system given by PPP/INS and the transformation matrix ${C_{bc}}$ of the camera coordinate system relative to the carrier coordinate system, the attitude matrix ${C_{ic}}$ of the camera in the equatorial inertial system is

(5)\begin{equation}{C_{ic}} = {C_{ih}} \cdot {C_{hb}} \cdot {C_{bc}}.\end{equation}

Because the principal optical axis coincides with the camera coordinate axis ${Z_c}$, the vector $\vec{P}$ of the principal optical axis in the equatorial inertial coordinate system is

(6)\begin{equation}\vec{P} = \begin{bmatrix} {x_{iP}}\\ {y_{iP}}\\ {z_{iP}} \end{bmatrix} = {C_{ic}} \cdot \begin{bmatrix} 0\\ 0\\ 1 \end{bmatrix}.\end{equation}

By converting $\vec{P}$ into the right ascension ${\alpha _P}$ and declination ${\delta _P}$ of this direction in the equatorial coordinate system, we obtain (Wei et al., 2019)

(7)\begin{gather}{\alpha _P} = \arctan \frac{{{y_{iP}}}}{{{x_{iP}}}},\end{gather}
(8)\begin{gather}{\delta _P} = \arctan \frac{{{z_{iP}}}}{{\sqrt {{x_{iP}}^2 + {y_{iP}}^2} }}.\end{gather}

The coordinates (${\alpha _P}$, ${\delta _P}$) of the direction of the camera's principal optical axis in the equatorial coordinate system give the right ascension and declination of the centre of the camera's FOV. If the FOV of the camera is $\omega \times \omega$, all the stars whose angular distance from the principal optical axis vector $\vec{P}$ is within $\omega /2$ lie in the FOV, as shown in Figure 3. In particular, when a fish-eye camera with a FOV of 180° is used, all the stars above the horizon whose angular distance from the direction of the principal optical axis is less than 90° appear in the FOV.

Figure 3. Principal optical axis and the FOV
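A minimal Python sketch of Equations (5)–(8) and of the star selection criteria described in the next paragraph; the three rotation matrices are assumed to be supplied by PPP/INS and the mounting calibration as (3, 3) arrays, and the magnitude limit of 6.2 is taken from the text.

```python
import numpy as np

def principal_axis_radec(C_ih, C_hb, C_bc):
    """Eqs. (5)-(8): right ascension and declination (rad) of the camera's
    principal optical axis in the equatorial inertial frame."""
    C_ic = C_ih @ C_hb @ C_bc                  # Eq. (5)
    P = C_ic @ np.array([0.0, 0.0, 1.0])       # Eq. (6): Z_c axis in the inertial frame
    x, y, z = P
    alpha = np.arctan2(y, x) % (2.0 * np.pi)   # Eq. (7), resolved to [0, 2*pi)
    delta = np.arctan2(z, np.hypot(x, y))      # Eq. (8)
    return P, alpha, delta

def visible(star_vec, P, fov_deg=180.0, mag=None, mag_limit=6.2):
    """Selection test: the star's unit vector lies inside the FOV cone and,
    if a magnitude is supplied, the star is brighter than the limit."""
    ang = np.arccos(np.clip(np.dot(star_vec, P), -1.0, 1.0))
    return ang < np.radians(fov_deg / 2.0) and (mag is None or mag <= mag_limit)
```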

In addition to the angular distance between the navigation star and the direction of the principal optical axis, the magnitude of the star should be considered when selecting stars, because only stars with sufficient brightness can be imaged in the star map. Extensive experiments show that stars brighter than magnitude 6.2 can be imaged in the fish-eye star map.

After the stars are selected from the navigation star database according to their angular distance from the direction of the principal optical axis and their magnitude, their coordinates in the image can be estimated. According to the apparent position of the navigation star, the projection and distortion models of the fish-eye camera, and the approximate direction of the principal optical axis in the horizontal coordinate system, the image coordinates of the projected star points can be calculated, with the main steps as follows.

(1) Calculation of stellar horizontal coordinates.

The navigation star database provides the star's right ascension and declination, while the observation is directly related to the star's azimuth and altitude angle. Therefore, it is necessary to convert the star's apparent position ($\alpha$, $\delta$) into the azimuth A and altitude angle h at the observation time. Readers can refer to Kaplan et al. (2009) for the details of this method.

(2) Atmospheric refraction correction.

It is necessary to correct the altitude angle of the star because starlight is affected by atmospheric refraction during its propagation. The atmospheric refraction correction formula is

(9)\begin{equation}{h^\prime } = h - \rho ,\end{equation}

where ${h^\prime }$ represents the altitude angle of the star observed by the ground user and $\rho$ is the atmospheric refraction. The calculation formula is

(10)\begin{equation}\rho = {\rho _0}\left[ {1 - \frac{{Lt}}{{1 + Lt}} + \left( {\frac{p}{{760}} - 1} \right)} \right],\end{equation}

where L is a constant with value 1/273, t is the instantaneous temperature in centigrade, p is the instantaneous atmospheric pressure (mmHg) and ${\rho _0}$ is the atmospheric refraction under standard conditions, for which the current universal approximate formula is

(11)\begin{equation}{\rho _0} = a\tan \left( {\frac{\pi }{2} - h} \right) + b{\tan ^3}\left( {\frac{\pi }{2} - h} \right).\end{equation}

Here, $a = 60.27^{\prime\prime}$ and $b = -0.0669^{\prime\prime}$. Using Equations (9), (10) and (11), we can obtain the altitude angle of the celestial body corrected for atmospheric refraction (Li et al., 2014).
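The correction above is easy to evaluate; the following sketch implements Equations (9)–(11), with $\rho_0$ evaluated in arcseconds and converted to radians.

```python
import numpy as np

ARCSEC = np.pi / (180.0 * 3600.0)   # one arcsecond in radians

def refracted_altitude(h, t_celsius, p_mmhg):
    """Eqs. (9)-(11): altitude angle h' (rad) observed by the ground user,
    from the computed altitude h (rad), instantaneous temperature (deg C)
    and pressure (mmHg)."""
    z = np.pi / 2.0 - h                                             # zenith distance
    rho0 = (60.27 * np.tan(z) - 0.0669 * np.tan(z) ** 3) * ARCSEC   # Eq. (11)
    L = 1.0 / 273.0                                                 # constant of Eq. (10)
    rho = rho0 * (1.0 - L * t_celsius / (1.0 + L * t_celsius)
                  + (p_mmhg / 760.0 - 1.0))                         # Eq. (10)
    return h - rho                                                  # Eq. (9)
```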

(3) Transformation from horizontal coordinate system to camera coordinate system.

The horizontal coordinates of the stars need to be transformed from the azimuth and altitude angles ($A$, ${h^\prime }$) into rectangular coordinates (${x_h}$, ${y_h}$, ${z_h}$) by the following relations:

(12)\begin{equation}\left\{\begin{array}{@{}l} {x_h} = \cos {h^\prime }\cos A\\ {y_h} = \cos {h^\prime }\sin A\\ {z_h} = \sin {h^\prime } \end{array} \right..\end{equation}

Based on the attitude matrix ${C_{hb}}$ of the carrier in the horizontal coordinate system and the transformation matrix ${C_{bc}}$ of the camera coordinate system relative to the carrier coordinate system, the coordinates of stars in the camera coordinate system are

(13)\begin{equation} \begin{bmatrix} {{x_c}}\\ {{y_c}}\\ {{z_c}} \end{bmatrix} = {C_{bc}}{C_{hb}}\begin{bmatrix} {{x_h}}\\ {{y_h}}\\ {{z_h}} \end{bmatrix}.\end{equation}
(4) Conversion from camera coordinate system to image coordinate system.

The relations between the Cartesian coordinates of stars in the camera coordinate system and the polar coordinates are

(14)\begin{gather}\Omega = \arctan \frac{{{y_c}}}{{{x_c}}},\end{gather}
(15)\begin{gather}\theta = \arctan \frac{{\sqrt {{x_c}^2 + {y_c}^2} }}{{{z_c}}}.\end{gather}

Here, $\Omega$ is the angle between the star point and the $O{X_C}$ axis in the image plane, and $\theta$ is the angle between the line-of-sight direction of the star and the $O{Z_C}$ axis, that is, the semiangular field of the star. Under the polynomial distortion model of the equisolid-angle projection, the corrected radial length can only be obtained by iterative calculation. Therefore, Equation (4) is transformed into

(16)\begin{equation}r = 2f\sin \left[ {\frac{1}{2}\left( {\theta - {k_1}{{\left( {\arcsin \left( {\frac{r}{{2f}}} \right)} \right)}^2} - {k_2}{{\left( {\arcsin \left( {\frac{r}{{2f}}} \right)} \right)}^3} - {k_3}{{\left( {\arcsin \left( {\frac{r}{{2f}}} \right)} \right)}^4}} \right)} \right].\end{equation}

The initial value of r is set to 1, the right-hand side of Equation (16) is used to calculate a new value, and the iteration is repeated until the difference between two successive results is less than a threshold, at which point the final radial length r is obtained. The coordinates ($\Omega$, $r$) of the star point in the projection coordinate system are thus calculated, and the polar coordinates of the star point are then transformed into rectangular coordinates, as shown in the following formula:

(17)\begin{equation}\left\{\begin{array}{@{}l} {x_i} = {x_0} + r\cos \Omega \\ {y_i} = {y_0} + r\sin \Omega \end{array} \right.,\end{equation}

where (${x_0}$, ${y_0}$) is the coordinate of the image principal point and (${x_i}$, ${y_i}$) is the image coordinate of the ith star point. By solving the corresponding image coordinates of all the stars in the FOV, a reference star map is obtained.
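Steps (3) and (4) can be combined into a single projection routine. The sketch below implements Equations (12)–(17), including the fixed-point iteration for Equation (16); the rotation matrices, focal length (in pixel units), distortion coefficients and principal point are assumed inputs, and the tolerance is a placeholder.

```python
import numpy as np

def project_star(A, h_app, C_hb, C_bc, f, k, x0, y0, tol=1e-6, max_iter=50):
    """Eqs. (12)-(17): image coordinates of one navigation star from its
    refraction-corrected azimuth A and altitude h_app (rad).
    k = (k1, k2, k3); (x0, y0) is the principal point in pixels."""
    # Eq. (12): horizontal rectangular coordinates
    v_h = np.array([np.cos(h_app) * np.cos(A),
                    np.cos(h_app) * np.sin(A),
                    np.sin(h_app)])
    # Eq. (13): rotate into the camera frame
    xc, yc, zc = C_bc @ C_hb @ v_h
    # Eqs. (14)-(15); arctan2 resolves the quadrant of Omega
    omega = np.arctan2(yc, xc)
    theta = np.arctan2(np.hypot(xc, yc), zc)
    # Eq. (16): fixed-point iteration for the distorted radial length r
    k1, k2, k3 = k
    r = 1.0                        # initial value, as in the text
    for _ in range(max_iter):
        u = np.arcsin(r / (2.0 * f))
        r_new = 2.0 * f * np.sin(0.5 * (theta - k1 * u**2 - k2 * u**3 - k3 * u**4))
        if abs(r_new - r) < tol:
            r = r_new
            break
        r = r_new
    # Eq. (17): polar to image coordinates
    return x0 + r * np.cos(omega), y0 + r * np.sin(omega)
```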

4 All-sky star identification algorithm

In the integrated navigation system, the reference star map is obtained using the above steps, and a 'hypothesis test' can be used to efficiently extract and identify star points by matching the measured star map with the reference star map (Duan et al., 2018). With the reference star map known, star points can be searched for within a certain neighbourhood of the image coordinates of each navigation star, as shown in Figure 4.

Figure 4. Schematic diagram of measured stars and reference stars

In star point extraction, the median filtering algorithm is first used to pre-process the star map and filter out image noise caused by lights, and the star points are then detected by a star map template detection algorithm. Next, measured star points are sought within a certain neighbourhood centred on each reference star point, as shown in Figure 4. If there is no measured star point in the neighbourhood of a reference star point, the navigation star either was not imaged in the star map or could not be effectively detected, and that star point is abandoned. If there is exactly one star point in the neighbourhood, it can be preliminarily identified as the star being searched for. If there are multiple measured star points in the neighbourhood, a redundant identification issue arises, which requires further treatment.

The radius of the reference star neighbourhood is obviously crucial. If the radius is set too small, there may be no measured star point in the neighbourhood; if it is set too large, there may be multiple measured star points, and the stars cannot be identified directly. For a fish-eye camera with a focal length of 10.5 mm and a pixel size of 12 μm × 12 μm, a star occupies only a few pixels in the image, and the brighter part of a star may occupy only 2–3 pixels. The number of pixels occupied by a measured star point is generally less than 30, so the brightest star point occupies less than 7 × 7 pixels, and the distance between any two star points is generally larger than 20 pixels. The neighbourhood radius of the reference star points can therefore be set to 10 pixels.
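A minimal sketch of this neighbourhood test, assuming the pixel coordinates of the reference projections and the detected centroids are available as (N, 2) arrays; SciPy's k-d tree is used for the radius search, and the detector itself is not reproduced.

```python
import numpy as np
from scipy.spatial import cKDTree

def match_to_reference(ref_xy, meas_xy, radius=10.0):
    """For every reference star, look for measured centroids within `radius`
    pixels. Returns one-to-one pairs, reference stars with no detection
    (abandoned) and reference stars with several candidates (redundant)."""
    tree = cKDTree(meas_xy)
    one, none, many = [], [], []
    for i, p in enumerate(ref_xy):
        hits = tree.query_ball_point(p, r=radius)
        if not hits:
            none.append(i)               # star not imaged or not detected
        elif len(hits) == 1:
            one.append((i, hits[0]))     # preliminary one-to-one match
        else:
            many.append((i, hits))       # redundant identification case
    return one, none, many
```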

The flow chart of the star-ID algorithm for a fish-eye camera based on approximate position and attitude information is shown in Figure 5. The efficiency and accuracy of identification are crucial for star-ID in tracking mode. After preliminary matching between the measured star map and the reference star map, most measured star points establish a one-to-one correspondence with reference star points. However, some measured star points have many-to-one relationships with reference star points, and for others no corresponding reference star point can be found. To improve the efficiency of star-ID, the following steps are taken.

(1) First, the image distances among the reference stars and the corresponding measured stars that have established a one-to-one correspondence are calculated and sorted according to the difference in distance.

(2) The four measured star points with the smallest distance differences are selected as candidate reference stars in the central region of the star map.

(3) Matching verification is carried out according to the angular distance errors of the four stars between the reference star map and the measured star map (a sketch of this check is given after this list).

(4) If the verification is passed, these stars are adopted as reference stars to identify the subsequent stars.

(5) If the verification is not passed, the star with the largest error is discarded, and the star point with the smallest distance difference among the remaining star points is added as a candidate reference star.

(6) Steps (2), (3) and (4) are repeated until the reference stars are determined.

(7) Based on the reference stars, the other one-to-one corresponding measured star points are verified and marked as identified in the measured star map and the navigation star database.

(8) Based on the reference stars, all unmarked star points are identified until the identification of the star map is complete.

From the above process, it can be seen that most of the one-to-one corresponding star points can be preliminarily identified by matching the measured star map with the reference star map. In addition, the pyramid-style identification process is sped up once the reference stars are determined, and the triangle-method recognition step can be omitted when recognising the remaining star points. Compared with the lost-in-space mode, the computational difficulty is thus significantly reduced. Moreover, only verification is required when recognising stars, which also satisfies the accuracy requirements.
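A sketch of the angular-distance check used in steps (3) and (5), assuming unit line-of-sight vectors for the four candidates in both maps; the error threshold is a hypothetical parameter tuned in practice.

```python
import numpy as np
from itertools import combinations

def verify_candidates(meas_vecs, ref_vecs, thresh_rad):
    """Compare the six pairwise angular distances of the four candidate stars
    in the measured and reference maps (step 3). Returns whether every
    distance error is below the threshold and, for step (5), the index of
    the star contributing the largest accumulated error."""
    per_star = np.zeros(4)
    passed = True
    for i, j in combinations(range(4), 2):
        d_meas = np.arccos(np.clip(np.dot(meas_vecs[i], meas_vecs[j]), -1.0, 1.0))
        d_ref = np.arccos(np.clip(np.dot(ref_vecs[i], ref_vecs[j]), -1.0, 1.0))
        err = abs(d_meas - d_ref)
        per_star[i] += err
        per_star[j] += err
        passed = passed and (err < thresh_rad)
    worst = int(np.argmax(per_star))   # candidate to discard if verification fails
    return passed, worst
```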

Figure 5. Fish-eye camera star identification flow based on approximate position and attitude

5 Experimental results

5.1 Experimental facilities

Actual observation data for fish-eye star-ID collected on the evening of 30 December 2020 were used to verify the effectiveness of the proposed algorithm. At the same time, an integrated GNSS/INS/CNS navigation experiment was carried out; the experimental equipment is shown in Figure 6.

Figure 6. Experimental equipment

In the experimental system, the GNSS antenna, the inertial navigation system and the fish-eye camera are fixed on the same metal base, so their relative positions are fixed. GNSS uses the multi-system PPP method with decimetre-level accuracy in real time. INS adopts two inertial measurement units (IMUs). The first is a NovAtel SPAN-ISA-100C of navigation-grade performance, which integrates a three-axis fibre-optic gyroscope and a micro-electro-mechanical system (MEMS) accelerometer with full temperature compensation, at an approximate cost of $150,000. The second is a NovAtel SPAN-IGM-S1/A1, a low-cost, miniaturised MEMS inertial navigation system, at an approximate cost of $20,000. In the experiment, the measurement data of the two IMUs are each integrated with GNSS-PPP to calculate the position, velocity and attitude parameters used in the subsequent comparative analysis. The performance of the two IMUs is shown in Table 1.

Table 1. Bias and bias instability of IMUs

CNS adopts the fish-eye camera as the core component, and its main technical parameters are shown in Table 2.

Table 2. Technical parameters of the fish-eye camera

5.2 Single star image processing

The experiment was carried out from 21:00 to 24:00 local time, and the GNSS/INS/CNS integrated navigation experiment was conducted while the vehicle was running on the road. A typical fish-eye star image is shown in Figure 7.

Figure 7. Typical vehicle-mounted fish-eye star image

In the picture, the over-exposed circular area on the right is the moon, the surroundings are trees and lights, and the bright dot-like targets in the middle are stars. After processing the star map, 360 star points can be extracted from this image. It can be observed that the fish-eye camera images half of the celestial sphere simultaneously with its 180° FOV. Even if stars are obscured by clouds in some areas of the sky, the attitude determination can still be realised as long as a few stars are visible.

In the fish-eye star map, the angular distance corresponding to each pixel depends on the distance from that pixel to the image centre: because of the nonlinearity of the projection function, it is not constant but changes with this distance. In the navigation process, the position and attitude at the epoch of the star map are given by PPP/INS; the reference star map at the observation time is then generated according to the proposed method, and the star-ID is completed. The pixel positions of the star points in the reference star map and the measured star map are shown in Figure 8.

Figure 8. Image coordinate of reference star and measured star

It can be seen from Figure 8 that most of the measured star points establish a one-to-one correspondence with the reference star points. However, there are also some measured star points without corresponding reference points. In particular, a large number of reference star points on the right side of the figure have no measured counterparts, as they are masked by the bright moonlight. In total, 276 stars were successfully identified in this image using the proposed algorithm.

5.3 Error analysis of the prediction star position for different IMUs

To further verify the efficiency of the proposed algorithm, 200 star images taken continuously during the experiment were used for the analysis. The position error between the stars in the reference and measured maps is crucial for the star-ID. The position error provided by PPP/INS is better than 30 cm, for which the corresponding geocentric angle error is less than 0.01″ (0.3 m over the Earth's radius of about $6.371 \times 10^6$ m subtends roughly $4.7 \times 10^{-8}$ rad), but the attitude error can exceed 1°. Therefore, the error of the initial attitude value is the main error source of the predicted star coordinates. Since the accuracy of the final attitude determination is better than 10″, the finally calculated star sensor attitude is taken as the reference, and the three Euler angle errors calculated by the PPP-supported integrations of the above two IMUs are analysed; the results are shown in Figures 9–11.

Figure 9. Variation of yaw calculated by PPP/INS

Figure 10. Variation of pitch calculated by PPP/INS

Figure 11. Variation of roll calculated by PPP/INS

Figures 9–11 reveal that the errors of the three Euler angles obtained by PPP/INS1 are less than 0.1°, while those obtained from PPP/INS2 are larger; in particular, the yaw error drifts over a range of up to 1.5°. The mean projection position errors of the star points in the two reference star maps based on PPP/INS1 and PPP/INS2 are accordingly quite different, as shown in Figure 12.

Figure 12. Average star position error calculated by PPP/INS

In Figure 12, the mean star position error calculated with PPP/INS1 is less than 1 pixel, while the maximum error calculated with PPP/INS2 reaches 7 pixels.

5.4 Efficiency and accuracy of star identification

The calculated star position errors are strongly related to the error characteristics of the two IMUs, and the efficiency and accuracy of the star-ID are affected accordingly. To compare these differences, three schemes are used for the star-ID. The first and second schemes adopt the algorithm proposed in this paper, with the initial position and attitude parameters given by PPP/INS1 and PPP/INS2, respectively. The third scheme adopts the traditional triangle method. The number of stars identified in each image is shown in Figure 13, and the identification time (using Matlab R2016a on an Intel i7-8550U processor) is shown in Figure 14.

Figure 13. Identified number of stars for each image

Figure 14. Star identification time for each star image

Figure 13 shows that, with both PPP/INS1 and PPP/INS2 data, 270–310 stars are identified in each fish-eye star map by the proposed algorithm, with a relatively stable number of identified stars, whereas the triangle algorithm is unstable and produces some outliers. The recognition times of the two schemes using PPP/INS1 and PPP/INS2 are close to each other, as shown in Figure 14. The recognition time with PPP/INS2 is slightly longer owing to the larger attitude error, but it is still less than 0.1 s. The average recognition time of the triangle algorithm is 9.04 s, which is significantly larger.

The accuracies of the attitudes determined by the three schemes are compared in Figures 15–17.

Figure 15. Yaw accuracy of the three schemes

Figure 16. Pitch accuracy of the three schemes

Figure 17. Roll accuracy of the three schemes

As shown in Figures 15–17, the attitude determination accuracies of the three schemes are roughly the same. It should be noted that the number of star points identified by the two proposed schemes is smaller than that identified by the triangle method (Figure 13), yet the yaw accuracy of the proposed method is higher (Figure 15): because some stars with large errors are filtered out by the proposed algorithm, the number of identified stars is slightly reduced but the accuracy is improved. Further statistical results are shown in Table 3.

Table 3. Mean attitude accuracy of the three schemes

As shown in Figures 9–17 and Table 3, compared with the PPP/INS system alone, the attitude accuracy of the carrier is improved by the addition of CNS. For INS1, the yaw accuracy is improved from 228.87″ to 8.05″, the pitch accuracy from 109.42″ to 3.78″ and the roll accuracy from 243.37″ to 3.66″. For INS2, the yaw accuracy is improved from 3786.64″ to 7.94″, the pitch accuracy from 175.45″ to 4.02″ and the roll accuracy from 271.99″ to 3.88″. Meanwhile, compared with the triangle star-ID method, the proposed fish-eye star-ID algorithm based on PPP/INS not only reduces the recognition time from 9.04 s to less than 0.1 s, but also improves the attitude determination accuracy to better than 10″. In practice, only one INS device is needed in the final system; since the final attitude accuracies based on INS1 and INS2 are essentially the same, the cheaper INS2 is preferred.

6 Conclusion

The fish-eye camera with a 180° FOV can capture hundreds of star points in a single star image, so the visibility of stars in near-Earth space is much improved, even under cloud and fog interference. However, this introduces an inherent calibration difficulty. A fish-eye star-ID algorithm based on the approximate pose parameters provided by PPP/INS is proposed in this paper. The algorithm first calculates the position coordinates of the stars in the reference star map, then matches the reference star map with the actual star map, and finally completes the star-ID after an angular distance error test. The proposed algorithm reduces the average recognition time from 9.04 s to 0.03 s by avoiding much of the circular searching of the triangle algorithm. The accuracy of the yaw angle is improved from 12.09″ to 8.04″, and the accuracies of the pitch and roll angles are maintained at approximately 4″, because some stars with large errors are eliminated during matching. In addition, when the attitude of the carrier changes rapidly or the time interval between frames is long, traditional star-ID algorithms based on inter-frame correlation suffer loss of lock, whereas the position and attitude parameters from real-time PPP/INS effectively prevent such problems. The algorithm can be applied not only to vehicles, but also to ships, aircraft, satellites and other carriers that require accurate attitude from star sensors.

Acknowledgements

The authors would like to thank the Natural Science Foundation of China (Nos. 41931076, 41804031 and 41904039), Natural Science Foundation of Henan Province (No. 212300410421) and Young Elite Scientists Sponsorship Program by Henan Association for Science and Technology (No. 2022HYTP008) for their financial support.

References

Arani, M. S., Toloei, A. and Eghbaleh, Z. (2015). A geometric star identification algorithm based on triple triangle map. International Conference on Recent Advances in Space Technologies, IEEE, 81–85.
Christian, J. A. and Crassidis, J. L. (2021). Star identification and attitude determination with projective cameras. IEEE Access, 9, 25768–25794.
Duan, Y., Li, M., Niu, Z., Jing, P. and Chen, Z. (2018). A star pattern recognition algorithm for cameras with large FOV. Journal of Modern Optics, 65(1), 85–97.
Hernández, E. A., Alonso, M. A., Chávez, E., Covarrubias, D. H. and Conte, R. (2016). Robust polygon recognition method with similarity invariants applied to star identification. Advances in Space Research, 59, 1095–1111.
Hughes, C., Denny, P., Jones, E. and Glavin, M. (2010). Accuracy of fish-eye lens models. Applied Optics, 49(17), 3338–3347.
Jin, S. G. and Su, K. (2020). PPP models and performances from single- to quad-frequency BDS observations. Satellite Navigation, 1(1), 16. doi: 10.1186/s43020-020-00014-y
Juang, J. N., Kim, H. and Junkins, J. (2003). An efficient and robust singular value method for star pattern recognition and attitude determination. Advances in the Astronautical Sciences, 115(1), 211–220.
Kannala, J. and Brandt, S. (2006). A generic camera model and calibration method for conventional, wide-angle, and fish-eye lenses. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(8), 1335–1340.
Kaplan, G. H., Bangert, J. and Bartlett, J. L. (2009). Naval observatory vector astrometry software (NOVAS) version 3.0. Bulletin of the American Astronomical Society, 42, 523.
Kim, S. and Cho, M. (2019). New star identification algorithm using labelling technique. Acta Astronautica, 162, 367–372.
Kumler, J. and Bauer, M. (2000). Fish-eye lens designs and their relative performance. International Symposium on Optical Science & Technology, 4093, 360–369.
Lang, D., Hogg, D. W., Mierle, K., Blanton, M. and Roweis, S. (2010). Blind astrometric calibration of arbitrary astronomical images. The Astronomical Journal, 139, 1782–1800.
Leake, C., Arnas, D. and Mortari, D. (2020). Non-dimensional star-identification. Sensors, 20, 2697.
Li, C. H., Zheng, Y., Zhang, C., Yuan, Y. L., Lian, Y. Y. and Zhou, P. Y. (2014). Astronomical vessel position determination utilizing the optical super wide angle lens camera. Journal of Navigation, 67(4), 633–649.
Liu, X., Zhou, Z., Zhang, Z., Liu, D. and Zhang, X. (2017). Improvement of star identification based on star trace in star images. Measurement, 105, 158–163.
Mehta, D. S., Chen, S. and Low, K. S. (2018). A robust star identification algorithm with star shortlisting. Advances in Space Research, 61, 2647–2660.
Mohamad, J. A., Ghafarzadeh, M., Taheri, M., Mosadeq, E. and Khakian, M. G. (2015). Star identification algorithm for uncalibrated, wide FOV cameras. The Astronomical Journal, 149, 182–187.
Na, M. and Jia, P. (2006). A survey of all-sky autonomous star identification algorithms. International Symposium on Systems and Control in Aerospace and Astronautics, IEEE, Harbin, China, January 19–21.
Padgett, C. and Kreutz-Delgado, K. (1997). A grid algorithm for autonomous star identification. IEEE Transactions on Aerospace and Electronic Systems, 33(1), 202–213.
Quan, W., Li, J., Gong, X. and Fang, J. (2015). INS/CNS/GNSS Integrated Navigation Technology. Beijing: Springer Berlin Heidelberg.
Rijlaarsdam, D., Yous, H., Byrne, J., Oddenino, D., Furano, G. and Moloney, D. (2020). A survey of lost-in-space star identification algorithms since 2009. Sensors, 20, 2579.
Rousseau, G., Bostel, J. and Mazari, B. (2005). Star recognition algorithm for APS star tracker: Oriented triangles. IEEE Aerospace and Electronic Systems, 20(2), 27–31.
Samaan, M. A. and Mortari, D. (2006). Nondimensional star identification for uncalibrated star cameras. The Journal of the Astronautical Sciences, 54(1), 95–111.
Samaan, M. A., Mortari, D. and Junkins, J. L. (2005). Recursive mode star identification algorithms. IEEE Transactions on Aerospace and Electronic Systems, 41(4), 1246–1254.
Somayehee, F., Nikkhah, A. A., Roshanian, J. and Salahshoor, S. (2020). Blind star identification algorithm. IEEE Transactions on Aerospace and Electronic Systems, 56(1), 547–557.
Spratling, B. B. and Daniele, A. (2009). A survey on star identification algorithms. Algorithms, 2, 93–107.
Sun, L., Jiang, J., Zhang, G. and Wei, X. (2016). A discrete HMM-based feature sequence model approach for star identification. IEEE Sensors Journal, 16(4), 931–940.
Tao, Y. A., Xi, Z. B., Ping, S. C. and Fei, Y. A. (2019). A global optimization algorithm for laboratory star sensors calibration problem. Measurement, 134, 253–265.
Toloei, A., Zahednamazi, M., Ghasemi, R. and Mohammadi, F. (2020). A comparative analysis of star identification algorithms. Astrophysics and Space Science, 63, 19.
Vulfovich, B. and Fogilev, V. (2010). New ideas for celestial navigation in the third millennium. The Journal of Navigation, 63, 373–378.
Wei, X., Cui, C., Wang, G. and Wan, X. (2019). Autonomous positioning utilizing star sensor and inclinometer. Measurement, 131, 132–142.
Xiao, G., Li, P., Gao, Y. and Heck, B. (2019). A unified model for multi-frequency PPP ambiguity resolution and test results with Galileo and BeiDou triple-frequency observations. Remote Sensing, 11, 116.
Yang, Y., Gao, W., Guo, S., Mao, Y. and Yang, Y. (2019). Introduction to BeiDou-3 navigation satellite system. Navigation, 66, 112.
Yang, Y., Mao, Y. and Sun, B. (2020). Basic performance and future developments of BeiDou global navigation satellite system. Satellite Navigation, 1, 18. doi: 10.1186/s43020-019-0006-0
Yang, Y. X., Liu, L., Li, J. L., Yang, Y. F., Zhang, T. Q., Mao, Y., Sun, B. J. and Ren, X. (2021). Featured services and performance of BDS-3. Chinese Science Bulletin, 66, 2135–2143.
Zhu, H., Liang, B. and Zhang, T. (2018). A robust and fast star identification algorithm based on an ordered set of points map. Acta Astronautica, 148, 327–336.