1 Introduction
The celestial navigation system (CNS) is an important autonomous navigation method. CNS determines the position and attitude of a carrier by observing the horizontal coordinates of natural celestial bodies such as the sun, the moon, planets and stars, which serve as navigation beacons. The equipment for CNS is simple and not susceptible to interference, and the positioning error does not accumulate with time (Vulfovich and Fogilev, 2010). CNS has been widely used in spacecraft attitude determination and remains one of the core methods for determining the attitude of space vehicles. With the remarkable improvement in the observation capability of CNS equipment, CNS is also increasingly used for attitude determination on near-Earth carriers such as ships, vehicles and aircraft. However, star maps shot within the atmosphere are inevitably affected by weather conditions, so the celestial navigation results are output discontinuously. Therefore, a feasible approach is to combine CNS with a global navigation satellite system (GNSS) and an inertial navigation system (INS) to form a GNSS/INS/CNS integrated navigation system (Quan et al., 2015). The role of CNS is not only to improve the positioning accuracy, but also to improve the attitude determination accuracy of the carrier (Christian and Crassidis, 2021), without accumulating error.
In the atmosphere, celestial navigation is often affected by weather conditions such as clouds and fog, which limits the availability of the narrow field of view (FOV) star sensor. Expanding the FOV of the star sensor increases the number of visible stars and thus the observation redundancy, which improves the positioning and attitude accuracy. A fish-eye camera is a type of imaging equipment developed from modern optical technologies to realise an ultra-large FOV of up to 180° (Kumler and Bauer, 2000), with which half of the celestial sphere can be imaged simultaneously. Even if there are clouds and fog in parts of the sky, attitude determination can be realised as long as a few stars can be photographed anywhere in the whole sky, which effectively improves the availability of celestial navigation. However, the expanded FOV of the fish-eye camera also brings a large radial distortion, which makes accurate calibration difficult (Kannala and Brandt, 2006) and increases the difficulty of star identification (Star-ID). Star-ID, the process of matching the stars captured in the star map with the reference stars in the navigation star database, is an intermediate but crucial step in the attitude determination of a star sensor (Na and Jia, 2006).
Star-ID algorithms are mainly divided into two categories according to whether a priori attitude information exists. The first category addresses the lost-in-space problem, that is, blind star identification without any prior attitude information (Somayehee et al., 2020). To realise matching recognition (Spratling and Daniele, 2009), this type of algorithm mostly adopts the principle of subgraph isomorphism, taking stars as vertices and the angular distances between stars as edges. That is, it regards the observed star map as a subgraph of the entire celestial map and works from the magnitudes, mutual positions and geometric relationships of the stars. This category mainly includes the triangle, polygon and pyramid algorithms (Arani et al., 2015; Hernández et al., 2016; Zhu et al., 2018; Rijlaarsdam et al., 2020). The second category is the iterative Star-ID algorithm based on approximate attitude information, which is obtained from the previous star map or from external attitude measurement equipment (Samaan et al., 2005; Liu et al., 2017). Most of these algorithms use the principle of pattern recognition to construct a unique feature for each observed star, often composed of the geometric distribution of the other stars in a certain neighbourhood of that star; they primarily include the grid, singular-value, genetic, list and marking algorithms (Padgett and Kreutz-Delgado, 1997; Juang et al., 2003; Sun et al., 2016; Mehta et al., 2018; Kim and Cho, 2019).
According to the FOV of the star camera, Star-ID algorithms can also be divided into two categories: those for ordinary narrow-FOV cameras and those for wide-angle cameras. The former target small-FOV star sensors, for which precise calibration is easier; the corresponding algorithms are therefore relatively simple and the research results relatively rich (Toloei et al., 2020). The latter target wide-angle cameras that are not precisely calibrated, for which the research is comparatively sparse (Leake et al., 2020). Rousseau et al. (2005) and Samaan and Mortari (2006) proposed two dimensionless methods based on the inner angles of spherical triangles. Although these methods demonstrated good robustness to calibration errors and to inaccuracy of the projection equation, they are only suitable for small-FOV cameras with small distortion and a linear projection error function, in which case the angular distances between stars and the inner angles of spherical triangles change only marginally. For large-FOV cameras with nonlinear projection imaging, however, the angular distances between stars and the inner angles of spherical triangles can be significantly distorted. Another dimensionless Star-ID algorithm using star cluster features has been proposed (Lang et al., 2010), but it has not been tested with a large FOV, and the deformation caused by the nonlinear projection function was ignored. Mohamad et al. (2015) therefore proposed a Star-ID algorithm for an uncalibrated large-field camera: for a selected triangle in the star map, recognition is completed based on the intersection of triangle sets found in the navigation star database. With this algorithm, the Star-ID of an uncalibrated camera with a FOV of 114° ± 12° can be realised with a success rate of 94%. To date, there is no Star-ID algorithm for a fish-eye camera with an ultra-large FOV and without precise calibration.
Aiming at the distortion caused by the ultra-large FOV of the fish-eye camera, this study investigates the identification of fish-eye star images in a GNSS/INS/CNS integrated navigation system and improves the fish-eye Star-ID algorithm. Because the commonly used RTK/INS combination suffers from high cost and limited operating range, this paper adopts the efficient positioning mode of precise point positioning (PPP) technology (Xiao et al., 2019). PPP uses a single receiver to obtain an absolute positioning result at the cm–dm level, supported by the precise satellite orbit and clock corrections provided by the IGS via the internet. The PPP service provided by the BeiDou navigation satellite system (BDS) is very helpful for users without internet access, because the precise satellite orbit and clock products of GPS and BDS are broadcast by the B2b signals of the BDS geostationary satellites (Yang et al., 2019, 2020, 2021; Jin and Su, 2020). Furthermore, if PPP is integrated with INS, the approximate position and attitude information required by CNS can be provided and its accuracy can be improved.
This paper is structured as follows. In Section 2, the projection distortion model of the fish-eye camera is introduced. The generation of a reference star map according to the approximate position and attitude parameters provided by PPP/INS is discussed in Section 3. Section 4 describes the algorithm for the star point extraction and recognition based on the ‘hypothesis test’ mode. Section 5 presents the experimental analysis based on the measured star map. The conclusion is summarised in Section 6.
2 Projection distortion of fish-eye camera
To achieve ultra-large FOV imaging, the fish-eye camera adopts the idea of non-similar imaging, which introduces a large radial distortion that compresses the shape of the target during imaging. Common projection models of fish-eye lenses include the equisolid angle, equidistant, stereographic and orthographic projection models (Hughes et al., 2010). This study takes the equisolid angle projection model as an example to analyse the distortion error of the fish-eye camera; the derivation for the other projection models is similar.
The projection formula of the equisolid angle model is expressed as follows:
$$r = 2f\sin \left( \frac{\theta}{2} \right) \tag{1}$$
where $\theta$ is the semiangular field of the image point, f is the focal length of the fish-eye camera and r is the distance from the projection point in the image plane to the principal point of the image, where
$$r = \sqrt{(x - x_0)^2 + (y - y_0)^2} \tag{2}$$

in which $(x, y)$ are the image coordinates of the projection point and (${x_0}$, ${y_0}$) is the principal point of the image.
According to Equations (1) and (2), the formula for calculating the semiangular FOV under the equisolid angle projection is expressed as follows:
$$\theta = 2\arcsin \left( \frac{r}{2f} \right) \tag{3}$$
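As a minimal numerical sketch of Equations (1) and (3) under the ideal equisolid angle model (the 10.5 mm focal length used for the example is taken from Section 5; function names are illustrative):

```python
import numpy as np

def equisolid_forward(theta, f):
    """Radial distance r from the principal point for a star at
    semiangular field theta (rad), Equation (1): r = 2 f sin(theta/2)."""
    return 2.0 * f * np.sin(theta / 2.0)

def equisolid_inverse(r, f):
    """Semiangular field theta (rad) from radial distance r,
    Equation (3): theta = 2 arcsin(r / (2 f))."""
    return 2.0 * np.arcsin(r / (2.0 * f))

# Example: a star 80 deg off the optical axis, assumed f = 10.5 mm
f_mm = 10.5
theta = np.deg2rad(80.0)
r = equisolid_forward(theta, f_mm)
assert np.isclose(equisolid_inverse(r, f_mm), theta)
```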
The actual projection formula differs from that expressed above, even if the nominal fish-eye lens adopts the equisolid angle projection, because of imaging optical distortion. The distortion of a fish-eye camera generally includes radial distortion, eccentric distortion and image plane distortion (Tao et al., 2019). As shown in Figure 1, the ideal image point of the star point $\sigma$ imaged through the fish-eye camera is ${p_0}$. The radial distortion moves the actual image point from ${p_0}$ to ${p_1}$ along the radial direction of the ideal image point, and the radius correspondingly changes from ${r_0}$ to ${r_1}$. The eccentric distortion moves the actual image point in the direction perpendicular to the radial direction of the ideal image point, that is, from ${p_0}$ to ${p_2}$; in addition to the radii differing (${r_0} \ne {r_2}$), there is an angle $\alpha$ between ${r_0}$ and ${r_2}$. The distortion in the radial direction is called asymmetric radial distortion and the distortion perpendicular to the radial direction is called tangential distortion.
Figure 1. Distortion of the fish-eye camera
Among the imaging distortions of a fish-eye camera, however, the radial distortion is dominant, while the eccentric distortion and the in-plane distortion are relatively small. When only the radial distortion is considered, the error of the distance between the image point and the principal point can be expressed by the error of the semiangular field of the corresponding object point, as shown in Figure 2.
Figure 2. Radial distortion of fish-eye camera
A polynomial distortion model of the equisolid angle projection for a fish-eye camera is expressed as follows:
$$r + k_1 r^2 + k_2 r^3 + k_3 r^4 = 2f\sin \left( \frac{\theta}{2} \right) \tag{4}$$
where ${k_1}$, ${k_2}$ and ${k_3}$ are the coefficients of the second-, third- and fourth-order distortion terms, respectively.
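As a sketch of how the distortion model can be evaluated (the arrangement of Equation (4) follows the reconstruction above; the coefficients are placeholders, not calibrated values, and the iterative solution for r is deferred to Equation (16) in Section 3):

```python
import numpy as np

def distortion_residual(r, theta, f, k1, k2, k3):
    """Residual of Equation (4): zero when r is the distorted radial
    length consistent with the semiangular field theta (rad)."""
    return (r + k1 * r**2 + k2 * r**3 + k3 * r**4
            - 2.0 * f * np.sin(theta / 2.0))
```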
3 Generation of reference star images based on approximate position and attitude
When using the position and attitude parameters of the carrier to generate the reference star map, it is necessary to determine the sky area that the camera can image and then calculate the image coordinates of the navigation stars projected on the imaging plane according to the projection distortion model. If the coordinated universal time (UTC) of the star map is T and the longitude and latitude of the carrier determined by PPP/INS are (${L_b}$, ${B_b}$), the transformation matrix ${C_{ih}}$ between the horizontal coordinate system and the equatorial inertial coordinate system can be calculated. Using the attitude matrix ${C_{hb}}$ of the carrier in the horizontal coordinate system given by PPP/INS and the transformation matrix ${C_{bc}}$ of the camera coordinate system relative to the carrier coordinate system, the attitude matrix ${C_{ic}}$ of the camera in the equatorial inertial system is
$$C_{ic} = C_{ih} C_{hb} C_{bc} \tag{5}$$
Because the principal optical axis coincides with the ${Z_c}$ axis of the camera coordinate system, the vector $\vec{P}$ of the principal optical axis in the equatorial inertial coordinate system is
$$\vec{P} = C_{ic}{[0 \quad 0 \quad 1]^\mathrm{T}} \tag{6}$$
By converting $\vec{P}$ into the right ascension ${\alpha _P}$ and declination ${\delta _P}$ of the direction in the equatorial coordinate system, we obtain (Wei et al., 2019)
$$\alpha_P = \arctan \left( \frac{P_y}{P_x} \right) \tag{7}$$
$$\delta_P = \arcsin (P_z) \tag{8}$$

where ${P_x}$, ${P_y}$ and ${P_z}$ are the components of the unit vector $\vec{P}$.
The coordinates (${\alpha _P}$, ${\delta _P}$) indicate the right ascension and declination of the direction of the camera's principal optical axis, that is, of the centre of the camera's FOV. If the FOV of the camera is $\omega \times \omega$, all the stars whose angular distance from the principal optical axis direction $\vec{P}$ is within $\omega /2$ are in the FOV, as shown in Figure 3. In particular, when a fish-eye camera with a FOV of 180° is used, all the stars above the horizon, whose angular distance from the direction of the principal optical axis is less than 90°, appear in the FOV.
Figure 3. Principal optical axis and the FOV
In addition to the angular distance between the navigation star and the direction of the principal optical axis, the magnitude of the star should be considered in star selection, since only stars with sufficient brightness can be imaged in the star map. Many experiments show that stars brighter than visual magnitude 6.2 can be imaged in the fish-eye star map.
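A minimal sketch of this selection step, assuming Equations (5)–(8), a catalogue of unit star vectors in the equatorial inertial frame and the magnitude-6.2 limit above (array names and the function signature are illustrative):

```python
import numpy as np

def select_visible_stars(C_ih, C_hb, C_bc, star_vecs_i, star_mags,
                         fov_deg=180.0, mag_limit=6.2):
    """Return indices of catalogue stars inside the camera FOV and
    bright enough to be imaged.

    C_ih, C_hb, C_bc : 3x3 rotation matrices from Section 3.
    star_vecs_i      : (N, 3) unit star vectors, inertial frame.
    star_mags        : (N,) visual magnitudes.
    """
    C_ic = C_ih @ C_hb @ C_bc                    # Equation (5)
    P = C_ic @ np.array([0.0, 0.0, 1.0])         # Equation (6), boresight
    # Right ascension / declination of the boresight, Equations (7)-(8)
    alpha_P = np.arctan2(P[1], P[0])
    delta_P = np.arcsin(P[2] / np.linalg.norm(P))
    # Angular distance of every catalogue star from the boresight
    ang = np.arccos(np.clip(star_vecs_i @ P, -1.0, 1.0))
    in_fov = ang < np.deg2rad(fov_deg / 2.0)
    bright = star_mags < mag_limit               # brighter than the limit
    return np.where(in_fov & bright)[0], alpha_P, delta_P
```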
After the stars are selected from the navigation star database according to their angular distance from the direction of the principal optical axis and their magnitude, their coordinates in the image can be estimated. According to the apparent position of the navigation star, the projection and distortion models of the fish-eye camera, and the approximate direction of the principal optical axis in the horizontal coordinate system, the image coordinates of the projected star points can be calculated with the following main steps.
(1) Calculation of stellar horizon coordinates.
The navigation star database provides each star's right ascension and declination, while the observation is directly related to the star's azimuth and altitude angles. Therefore, it is necessary to convert the star's apparent position ($\alpha$, $\delta$) into the azimuth A and altitude angle h at the observation time. Readers can refer to Kaplan et al. (2009) for the details of this method.
(2) Atmospheric refraction correction.
It is necessary to correct the altitude angle of the star because starlight is refracted by the atmosphere during its propagation. The atmospheric refraction correction formula is
$$h' = h + \rho \tag{9}$$
where ${h^\prime }$ represents the altitude angle of the star observed by the ground user and $\rho$ is the atmospheric refraction. The calculation formula is
$$\rho = \rho_0 \cdot \frac{p}{760} \cdot \frac{1}{1 + Lt} \tag{10}$$
where L is a constant equal to 1/273, t is the instantaneous temperature in centigrade, p is the instantaneous atmospheric pressure (mmHg) and ${\rho _0}$ is the atmospheric refraction under the standard state, for which the current universal approximate formula is
$$\rho_0 = a\tan z + b\tan^3 z \tag{11}$$

where $z = 90^\circ - h$ is the zenith distance of the star.
Here, $a = 60.27^{\prime\prime}$ and $b = -0.0669^{\prime\prime}$. Using Equations (9), (10) and (11), we can obtain the altitude angle of the celestial body corrected for atmospheric refraction (Li et al., 2014).
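A sketch of the refraction correction under the reconstruction of Equations (9)–(11) above (the constants are from the text; the default temperature and pressure, and the function signature, are illustrative):

```python
import numpy as np

A_ARCSEC = 60.27       # a in Equation (11), arcseconds
B_ARCSEC = -0.0669     # b in Equation (11), arcseconds
L_CONST = 1.0 / 273.0  # L in Equation (10)

def apparent_altitude(h_deg, t_celsius=15.0, p_mmhg=760.0):
    """Apparent altitude h' (deg) of a star with true altitude h (deg),
    after atmospheric refraction, Equations (9)-(11)."""
    z = np.deg2rad(90.0 - h_deg)                 # zenith distance
    rho0 = A_ARCSEC * np.tan(z) + B_ARCSEC * np.tan(z) ** 3
    rho = rho0 * (p_mmhg / 760.0) / (1.0 + L_CONST * t_celsius)
    return h_deg + rho / 3600.0                  # Equation (9), in degrees
```

For example, at a true altitude of 45° under standard conditions this yields a refraction of roughly one arcminute, consistent with the classical tangent formula.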
(3) Transformation from horizon coordinate system to camera coordinate system.
The horizontal coordinates of the stars need to be transformed from the azimuth and altitude angles ($A$, ${h^\prime }$) into the rectangular coordinates (${x_h}$, ${y_h}$, ${z_h}$) with the following relations:
$$\begin{cases} x_h = \cos h' \cos A \\ y_h = \cos h' \sin A \\ z_h = \sin h' \end{cases} \tag{12}$$
Based on the attitude matrix ${C_{hb}}$ of the carrier in the horizontal coordinate system and the transformation matrix ${C_{bc}}$ of the camera coordinate system relative to the carrier coordinate system, the coordinates of the stars in the camera coordinate system are
$$\begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix} = {({C_{hb}}{C_{bc}})^\mathrm{T}} \begin{bmatrix} x_h \\ y_h \\ z_h \end{bmatrix} \tag{13}$$
(4) Conversion from camera coordinate system to image coordinate system.
The relations between the Cartesian coordinates of stars in the camera coordinate system and the polar coordinates are
$$\Omega = \arctan \left( \frac{y_c}{x_c} \right) \tag{14}$$
$$\theta = \arccos \left( \frac{z_c}{\sqrt{x_c^2 + y_c^2 + z_c^2}} \right) \tag{15}$$
Here, $\Omega$ is the angle between the star point and the $O{X_C}$ axis in the image plane, and $\theta$ is the angle between the line-of-sight direction of the star and the $O{Z_C}$ axis, that is, the semiangular field of the star. According to the polynomial distortion model of the equisolid angle projection, the corrected radial length can only be obtained by iterative calculation. Therefore, Equation (4) is transformed into
$$r = 2f\sin \left( \frac{\theta}{2} \right) - k_1 r^2 - k_2 r^3 - k_3 r^4 \tag{16}$$
The initial value of r is set to 1 and the right side of Equation (16) is used to calculate a new value; the iteration is repeated until the difference between two successive results is less than a threshold, at which point the final radial length r is obtained. The coordinates ($\Omega$, $r$) of the star point in the projection coordinate system are thus calculated, and the polar coordinates of the star point are then transformed into rectangular coordinates as follows:
$$\begin{cases} x_i = x_0 + r\cos \Omega \\ y_i = y_0 + r\sin \Omega \end{cases} \tag{17}$$
where (${x_0}$, ${y_0}$) is the coordinate of the image principal point and (${x_i}$, ${y_i}$) is the image coordinate of the ith star point. By solving the corresponding image coordinates of all the stars in the FOV, the reference star map is obtained.
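Putting steps (3) and (4) together, a sketch of projecting one star from its apparent horizontal direction to image coordinates, under the equation reconstructions above (the rotation matrices, distortion coefficients, principal point and convergence threshold are assumed inputs):

```python
import numpy as np

def project_star(A, h_app, C_hb, C_bc, f, k1, k2, k3, x0, y0,
                 tol=1e-9, max_iter=50):
    """Image coordinates (x_i, y_i) of a star with azimuth A and
    apparent altitude h_app (rad), following Equations (12)-(17)."""
    # Equation (12): horizontal rectangular coordinates
    v_h = np.array([np.cos(h_app) * np.cos(A),
                    np.cos(h_app) * np.sin(A),
                    np.sin(h_app)])
    # Equation (13): rotate into the camera frame
    v_c = (C_hb @ C_bc).T @ v_h
    # Equations (14)-(15): polar angles in the camera frame
    Omega = np.arctan2(v_c[1], v_c[0])
    theta = np.arccos(v_c[2] / np.linalg.norm(v_c))
    # Equation (16): fixed-point iteration for the distorted radius
    r = 1.0
    for _ in range(max_iter):
        r_new = (2.0 * f * np.sin(theta / 2.0)
                 - k1 * r**2 - k2 * r**3 - k3 * r**4)
        if abs(r_new - r) < tol:
            r = r_new
            break
        r = r_new
    # Equation (17): polar to rectangular image coordinates
    return x0 + r * np.cos(Omega), y0 + r * np.sin(Omega)
```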
4 All-sky star identification algorithm
In the integrated navigation system, the reference star map is obtained using the above steps, and a ‘hypothesis test’ can then be used to efficiently extract and identify the star points by matching the measured star map with the reference star map (Duan et al., 2018). When the reference star map is known, the measured star points can be searched for within a certain neighbourhood of the image coordinates of each navigation star in the reference map, as shown in Figure 4.
Figure 4. Schematic diagram of measured stars and reference stars
For star point extraction, a median filtering algorithm is first used to pre-process the star map and filter out the image noise caused by lights, and the star points are detected by a template detection algorithm. The measured star points are then searched for within a certain neighbourhood centred on each reference star point, as shown in Figure 4. If there is no measured star point in the neighbourhood of a reference star point, the navigation star was either not imaged in the star map or its image cannot be effectively detected, and the star point is abandoned. If there is exactly one measured star point in the neighbourhood, it can be preliminarily matched. If there are multiple measured star points in the neighbourhood, a redundant identification issue arises that needs further treatment.
Obviously, the radius of the reference star neighbourhood is crucial. If the neighbourhood radius is set too small, there will be no measured star point in the neighbourhood; if it is set too large, there will be multiple measured star points and the stars cannot be identified directly. For a fish-eye camera with a focal length of 10.5 mm and a pixel size of 12 μm × 12 μm, a star point occupies only a few pixels in the image, and the brighter part of a star point may occupy only 2–3 pixels. The number of pixels occupied by a measured star point is generally less than 30, so even the brightest star point occupies less than 7 × 7 pixels, while the distance between any two star points is generally larger than 20 pixels. Hence, the neighbourhood radius of the reference star points can be set to 10 pixels.
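A sketch of this neighbourhood test (a brute-force search; the 10-pixel radius is from the text, while the data layout is an assumption):

```python
import numpy as np

def neighbourhood_match(ref_xy, meas_xy, radius=10.0):
    """Associate reference stars with measured star points within
    `radius` pixels: no entry means not imaged/not detected, one entry
    is a one-to-one match, several entries mean redundant candidates.

    ref_xy, meas_xy : (N, 2) and (M, 2) image coordinates in pixels.
    Returns a dict: reference index -> list of measured indices.
    """
    matches = {}
    for i, ref in enumerate(ref_xy):
        d = np.linalg.norm(meas_xy - ref, axis=1)
        inside = np.where(d < radius)[0]
        if inside.size == 0:
            continue                   # star not imaged or not detected
        matches[i] = inside.tolist()   # single entry = one-to-one match
    return matches
```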
The flow chart of the Star-ID algorithm for a fish-eye camera based on approximate position and attitude information is shown in Figure 5. The efficiency and accuracy of identification are crucial for Star-ID in the tracking mode. After the preliminary matching between the measured star map and the reference star map, most of the measured star points can establish a one-to-one correspondence with the reference star points. However, there are also many-to-one relationships between some measured star points and reference star points, and for some measured star points no corresponding reference star point can be found. To improve the efficiency of Star-ID, the following steps can be followed (a code sketch is given after this list).
(1) First, the image distances between the one-to-one matched measured and reference star points are calculated and sorted according to the distance difference.
(2) Four measured star points with the smallest distance differences are selected as the candidate reference stars in the central region of the star map.
(3) Matching verification is carried out according to the angular distance error of the four stars in the reference star map and the measured star map.
(4) If the verification is passed, these stars will be determined as reference stars to identify the subsequent stars.
(5) If the verification cannot be passed, the star with the largest error is discarded, and the star point with the smallest distance value among the remaining star points is added as a candidate reference star.
(6) Steps (2), (3) and (4) are repeated until the reference star is determined.
(7) According to the reference star, the other one-to-one corresponding measured star points are verified, which are marked as identified in the measured star map and navigation star database.
(8) According to the reference star, all the unmarked star points are identified until the star map identification is completed.
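Under the assumptions above, a sketch of the reference-star selection of steps (2)–(6) (the angular-distance tolerance and the data layout are illustrative, not values from the paper):

```python
import itertools
import numpy as np

def angular_distance(u, v):
    """Angle (rad) between two unit line-of-sight vectors."""
    return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))

def select_reference_stars(pairs, tol_rad=1e-4):
    """Steps (2)-(6): from one-to-one pairs pre-sorted by distance
    difference (step (1)), keep four whose measured and reference
    angular distances mutually agree.

    pairs : list of (meas_vec, ref_vec) unit-vector tuples.
    """
    candidates = list(pairs[:4])
    remaining = list(pairs[4:])
    while len(candidates) == 4:
        errors = []
        for (i, a), (j, b) in itertools.combinations(enumerate(candidates), 2):
            err = abs(angular_distance(a[0], b[0])
                      - angular_distance(a[1], b[1]))
            errors.append((err, i, j))
        worst = max(errors)
        if worst[0] < tol_rad:
            return candidates          # step (4): verification passed
        # step (5): drop a star involved in the largest error, add next
        candidates.pop(worst[1])
        if not remaining:
            return None                # no verifiable reference set found
        candidates.append(remaining.pop(0))
    return None
```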
From the above process, it can be observed that most of the one-to-one corresponding star points can be preliminarily identified based on matching the measured star map with the reference star map. Moreover, once the reference stars are identified, the pyramid-type identification of the remaining star points can be sped up and the triangle-matching search can be omitted. Compared with the lost-in-space mode, the computational burden is thus significantly reduced. In addition, only verification is required when recognising the stars, which also satisfies the accuracy requirements.
Figure 5. Fish-eye camera star identification flow based on approximate position and attitude
5 Experimental results
5.1 Experimental facilities
Actual observation data for fish-eye Star-ID, collected on the evening of December 30, 2020, were used to verify the effectiveness of the proposed algorithm. At the same time, the integrated GNSS/INS/CNS navigation experiment was carried out; the experimental equipment is shown in Figure 6.
Figure 6. Experimental equipment
In the experimental system, the GNSS antenna, the inertial navigation system and the fish-eye camera are fixed on the same metal base, so their relative positions are fixed. GNSS uses the multi-system PPP method with decimetre-level accuracy in real time. INS adopts two inertial measurement units (IMUs). The first IMU is a NovAtel SPAN-ISA-100C with navigation-grade performance, which integrates a three-axis fibre-optic gyroscope and a micro-electro-mechanical systems (MEMS) accelerometer with full temperature compensation, and costs approximately $150,000. The second IMU is a NovAtel SPAN-IGM-S1/A1, a low-cost, miniaturised MEMS inertial navigation system costing approximately $20,000. In the experiment, the measurement data of the two IMUs are separately integrated with GNSS-PPP to calculate the position, velocity and attitude parameters, which are used for the subsequent comparative analysis. The performance of the two IMUs is shown in Table 1.
Table 1. Bias and bias instability of IMUs
CNS adopts the fish-eye camera as the core component, and its main technical parameters are shown in Table 2.
Table 2. Technical parameters of the fish-eye camera
5.2 Single star image processing
The experiment was carried out from 21:00 to 24:00 local time while the vehicle was running on the road. A typical fish-eye star image is shown in Figure 7.
Figure 7. Typical vehicle-mounted fish-eye star image
In the picture, the over-exposed circular area on the right is the moon, the surrounding targets around the edge are trees and lights, and the bright dot-like targets in the middle are stars. After processing, 360 star points can be extracted from this map. It can be observed that the fish-eye camera images half of the celestial sphere simultaneously with its 180° FOV. Even if stars are obscured by clouds in some areas of the sky, attitude determination can still be realised as long as a few stars are visible.
In the fish-eye star map, the angular distance subtended by each pixel depends on the distance from the pixel to the image centre: because the projection function is nonlinear, it is not constant across the image. In the navigation process, the position and attitude at the epoch of the star map are given by PPP/INS, the reference star map at the observation time is generated according to the proposed method, and the Star-ID is then completed. The pixel positions of the star points in the reference star map and in the measured star map are shown in Figure 8.
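This varying per-pixel angular scale follows directly from Equation (3); a small worked sketch under the ideal equisolid model (the focal length value is illustrative):

```python
import numpy as np

def plate_scale(r, f):
    """Local angular scale d(theta)/dr from Equation (3),
    theta = 2 arcsin(r / (2 f)): equals 1/f at the image centre and
    grows towards the edge of the fish-eye image."""
    return 1.0 / (f * np.sqrt(1.0 - (r / (2.0 * f)) ** 2))

f = 10.5                                   # mm, illustrative
print(plate_scale(0.0, f))                 # centre: 1/f rad per mm
print(plate_scale(2 * f * np.sin(np.deg2rad(45)), f))  # edge, theta = 90 deg
```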
Figure 8. Image coordinate of reference star and measured star
It can be seen from Figure 8 that most of the measured star points establish a one-to-one correspondence with the reference star points. However, there are also some measured star points without corresponding reference points. In particular, a large number of reference star points on the right side of the figure have no measured counterparts because they are masked by the bright moonlight. Eventually, 276 stars were successfully identified in this image using the proposed algorithm.
5.3 Error analysis of the prediction star position for different IMUs
To further verify the efficiency of the proposed algorithm, 200 star images taken continuously during the experiment were used for analysis. As noted above, the position error between the stars in the reference and measured maps is crucial for the Star-ID. The position error of PPP/INS is better than 30 cm and the corresponding geocentric angle error is less than 0.01″, but the attitude error can exceed 1°; therefore, the error of the initial attitude is the main error source of the predicted star coordinates. Because the accuracy of the star-sensor attitude determination is better than 10″, the finally calculated star-sensor attitude is taken as the reference for analysing the three Euler angle errors of the two PPP/INS integrations, with the results shown in Figures 9–11.
Figure 9. Variation of yaw calculated by PPP/INS
Figure 10. Variation of pitch calculated by PPP/INS
Figure 11. Variation of roll calculated by PPP/INS
Figures 9–11 reveal that the errors of the three Euler angles obtained by PPP/INS1 are less than 0.1°, while those obtained from PPP/INS2 are larger; in particular, the yaw error drifts over a range of up to 1.5°. The mean projection position errors of the star points in the two reference star maps, based on PPP/INS1 and PPP/INS2, therefore differ considerably, as shown in Figure 12.
Figure 12. Average star position error calculated by PPP/INS
In Figure 12, the mean value of the star position error calculated by PPP/INS1 is less than 1 pixel, while the maximum error calculated by PPP/INS2 is up to 7 pixels.
5.4 Efficiency and accuracy of star identification
The calculated star position errors are strongly related to the error characteristics of the two IMUs, and the efficiency and accuracy of the Star-ID are affected accordingly. To compare these differences, three schemes were carried out for the Star-ID. The first and second schemes adopt the algorithm proposed in this paper, with the initial position and attitude parameters given by PPP/INS1 and PPP/INS2, respectively. The third scheme adopts the traditional triangle method. The number of stars identified in each image is shown in Figure 13, and the identification time (using Matlab R2016a on an Intel i7-8550U processor) is shown in Figure 14.
Figure 13. Identified number of stars for each image
Figure 14. Star identification time for each star image
Figure 13 shows that, with either PPP/INS1 or PPP/INS2 data, 270–310 stars can be identified in each fish-eye star map by the proposed algorithm, with a relatively stable number of identified stars, while the triangle algorithm is not stable and produces some outliers. As shown in Figure 14, the recognition times of the two schemes using PPP/INS1 and PPP/INS2 are close to each other; the recognition time using PPP/INS2 is slightly longer because of the larger attitude error, but is still less than 0.1 s. Meanwhile, the average recognition time of the triangle algorithm is 9.04 s, which is significantly larger.
The accuracies of the attitudes determined by the three schemes above are compared in Figures 15–17.
Figure 15. Yaw accuracy of the three schemes
Figure 16. Pitch accuracy of the three schemes
Figure 17. Roll accuracy of the three schemes
As shown in Figures 15–17, the attitude determination accuracies of the three schemes are roughly the same. It should be noted that the number of star points identified by the two schemes proposed in this paper is smaller than that identified by the triangle method (Figure 13), yet the yaw accuracy of the proposed method is higher (Figure 15): because some stars with large errors are filtered out by the proposed algorithm, the number of identified stars is slightly reduced but the accuracy is improved. Further statistical results are shown in Table 3.
Table 3. Mean attitude accuracy of the three schemes
As shown in Figures 9–17 and Table 3, compared with the PPP/INS system alone, the attitude accuracy of the carrier is improved by the addition of CNS. For INS1, the yaw accuracy is improved from 228.87″ to 8.05″, the pitch accuracy from 109.42″ to 3.78″ and the roll accuracy from 243.37″ to 3.66″. For INS2, the yaw accuracy is improved from 3786.64″ to 7.94″, the pitch accuracy from 175.45″ to 4.02″ and the roll accuracy from 271.99″ to 3.88″. Meanwhile, compared with the triangle Star-ID method, the proposed fish-eye Star-ID algorithm based on PPP/INS not only reduces the recognition time from 9.04 s to less than 0.1 s, but also improves the attitude determination accuracy to better than 10″. In practice, only one INS device is needed for the final system; since the final attitude accuracies based on INS1 and INS2 are basically the same, the cheaper INS2 is preferred.
6 Conclusion
A fish-eye camera with a 180° FOV can capture hundreds of star points in a single star image; the visibility of stars in near-Earth space is therefore much improved, even under cloud and fog interference, but an inherent calibration difficulty is introduced. A fish-eye Star-ID algorithm based on the approximate position and attitude parameters provided by PPP/INS is proposed in this paper. The algorithm first calculates the position coordinates of the stars in the reference star map, then matches the reference star map with the actual star map, and finally completes the Star-ID after an angular distance error test. The proposed algorithm reduces the average recognition time from 9.04 s to 0.03 s by avoiding much of the circular searching of the triangle algorithm. The accuracy of the yaw angle is improved from 12.09″ to 8.04″, and the accuracies of the pitch and roll angles are maintained at approximately 4″, because some stars with large errors are eliminated during matching. In addition, when the attitude of the carrier changes rapidly or the time interval between frames is long, traditional Star-ID algorithms based on inter-frame correlation can lose lock, whereas the position and attitude parameters from real-time PPP/INS effectively prevent this problem. The algorithm can be applied not only to vehicles, but also to ships, aircraft, satellites and other carriers that require accurate attitude from star sensors.
Acknowledgements
The authors would like to thank the Natural Science Foundation of China (Nos. 41931076, 41804031 and 41904039), Natural Science Foundation of Henan Province (No. 212300410421) and Young Elite Scientists Sponsorship Program by Henan Association for Science and Technology (No. 2022HYTP008) for their financial support.