1. Introduction
In collaborative navigation, all the collaborating targets share the fused information acquired from different navigation sensors to achieve a more accurate and stable navigation information service (Kealy et al., Reference Kealy, Alam, Toth, Moore, Gikas, Danezis, Roberts, Retscher, Hasnur-Rabiain, Grejner-Brzezinska, Hill, Hide and Bonenberg2012; Kassas, Reference Kassas2014; Fankhauser et al., Reference Fankhauser, Bloesch, Krüsi, Diethelm, Wermelinger, Schneider, Dymczyk, Hutter and Siegwart2016), as shown in Figure 1. At present, there is much research on multi-target collaborative navigation. Sivaneri (2018) considered operating UAVs (unmanned aerial vehicles) and UGVs (unmanned ground vehicles) for collaborative navigation in environments that are challenging for the global navigation satellite system. In that work, the UGVs provide point-to-point radio range measurements, and the geometry of the UAVs' positioning is improved by designing the optimal moving trajectory of the UGVs, so as to best improve the UAVs' positioning accuracy. Xu et al. (Reference Xu, Bai and Hao2015) reviewed research on error modelling and compensation methods in collaborative navigation networks, including the impact of unknown ocean currents and the compensation of underwater acoustic delay. In 'Cooperative Localization and Navigation: Theory, Research and Practice', Gao and Fourati carried out statistical shape analysis in collaborative localisation; the concept of relative configuration is adopted to describe the shape of the node network, or its variations, without considering its absolute location, orientation and scaling (Gao et al., Reference Gao, Zhao and Fourati2020). In addition, there are many industry projects related to collaborative navigation between different kinds of targets.
For example, vehicle-to-vehicle technology projects by Cadillac, Mitsubishi Electric and other companies are trying to integrate the environmental information detected by other vehicles to increase the awareness ability of each target in navigation (Frost, Reference Frost2019). In 2018, the U.S. Defense Advanced Research Projects Agency started the project 'Collaborative Operations in Denied Environment (CODE)' to develop and demonstrate the value of collaborative autonomy in multi-UAV navigation (DARPA, 2016; Wierzbanowski, Reference Wierzbanowski2016).
The current research mainly concentrates on multi-sensor localisation, collaborative navigation patterns, the hardware foundation of collaborative communication, collaborative path planning, etc. (Liu et al., Reference Liu, Greve, Jiang and Xu2018; González-García et al., Reference González-García, Gómez-Espinosa, Cuan-Urquizo, Govinda García-Valdovinos, Salgado-Jiménez and Arturo Escobedo Cabello2020). Research on collaborative navigation under different spatial distributions of navigation targets is limited to the optimisation of the static formation of multiple targets of the same category. Besides, the collaborative navigation communication and response process is an important part and needs further research. Thus, there is a need to analyse and evaluate the collaboration efficiency of the dynamic distribution shape of multiple targets of different categories, and, at the same time, to design a collaborative navigation communication and response process for the collaboration amongst targets. Figure 1 shows the main parts of collaborative navigation research; the research content of this paper belongs to the data collaboration part of the collaborative navigation method.
2. Collaborative communication and response process
The design of a collaborative navigation process needs to consider the actual information requirements of the navigation target and the spatial distribution of the collaborative navigation targets, to perform collaborative observation and collaborative information application as efficiently as possible. The specific collaboration process can be expressed as in Figure 2, and it includes seven main steps, as follows in detail:
1. The navigation target broadcasts the collaborative navigation request to the collaborative navigation targets within communication and detection range.
2. A collaborative navigation target receiving the collaboration request broadcasts a message response to the navigation target. First, it judges whether its sensors are capable of collaboration; if not, the collaborative navigation request message is ignored. If they are capable and the navigation target can be identified, it sends a collaboration message containing the attributes and status information of the collaborative navigation target, the movement state information of the collaborative navigation target, the collaborative observation of the status of the navigation target, the observation of the navigation situation, and the observation of the navigation environment. If the navigation target cannot be identified, it sends the same message set without the collaborative observation of the status of the navigation target.
3. The navigation target receives collaborative navigation information.
4. The navigation target judges the effectiveness of the received collaborative message and the collaboration capability of the collaborative navigation target according to the navigation target's attributes, collaborative state information and other related information contained in the collaborative message. Then the navigation target judges whether the collaborative target meets the requirements of position calculation according to the movement state of the collaborative target and the corresponding CGDOP (collaborative geometric dilution of precision) parameter.
5. If the received collaborative navigation information is valid and the corresponding collaborative target has high collaboration capability, then the navigation target will process different parts of the received information for further application. Finally, it will comprehensively update various types of collaborative navigation information for automatic control of the navigation target or assistance in the user's decision making in real-time navigation.
6. The navigation target sends a response message to the collaborative navigation targets for improvement of the collaboration network and further collaboration.
7. The collaborative navigation targets continue to perform the observation updates based on the response message of feedback.
The relative positions of the collaborative targets change in real time, so the shape of the collaborative observation network also changes over time, and a collaborative navigation target may move beyond the communicable range of the navigation target. Therefore, in the collaborative navigation process, the navigation target broadcasts the collaborative request information to other targets at a certain frequency and re-executes the above process to update the set of collaborative navigation targets, so as to maintain the accuracy and efficiency of the collaborative navigation.
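The decision logic of step 2 of the process above can be sketched in code. The following Python fragment is an illustrative sketch only: the function and field names are hypothetical and are not part of the message set defined later; it shows how a collaborative target either ignores a request, answers with the full message set, or omits the state observation when the requester cannot be identified.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CollabResponse:
    target_id: str
    payload: dict

def respond(sensors_capable: bool, can_identify: bool,
            own_status: dict, observations: dict) -> Optional[CollabResponse]:
    """Step 2: a collaborative target decides whether, and with which
    message bodies, to answer a collaboration request."""
    if not sensors_capable:
        return None  # request is ignored ('not capable' branch of step 2)
    payload = {
        "status": own_status,                       # attributes and status
        "movement": observations["movement"],       # own movement state
        "situation": observations["situation"],     # navigation situation
        "environment": observations["environment"]  # navigation environment
    }
    if can_identify:
        # only an identified requester also receives observations of its own state
        payload["state_observation"] = observations["state_of_requester"]
    return CollabResponse("C", payload)
```

The same function is then re-invoked each time the navigation target re-broadcasts its request, which matches the periodic update described above.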
In order to explain the information distribution and collaboration process in detail, the application scenario of multi-target collaborative navigation in an earthquake relief task is introduced for illustration. The main navigation targets involved in earthquake relief are the ground targets of rescue vehicles and personnel, and the air targets of UAVs, helicopters, etc. The ground vehicles are mainly used to transport rescue materials and personnel, while the main role of UAVs and other air targets is information detection based on remote sensing technology and communication relay (Flemisch et al., Reference Flemisch, Canpolat and Altendorf2017). Both rescue vehicles and UAVs face the problem of navigating in the complex post-earthquake environment. For rescue vehicles, many roads are damaged and impassable, and due to the terrain and other factors in the disaster area, ground rescue vehicles may lose GPS and communication signals. For aircraft such as reconnaissance UAVs, the complex weather and ground conditions cause problems such as low visibility and difficulty in landing. The information requirements and information collaboration of the air and ground targets during collaborative navigation can be expressed as shown in Figure 3.
In the collaborative communication of air targets and ground targets in earthquake relief, the collaborative navigation data are mainly composed of numbers and text, which includes the real-time coordinate set, the speed of the target, the navigation path of the target, the category of the target, the collaborative information demands and the attribute data of mutual observation sensors. The target navigation situation and the data related to the navigation environment are usually composed of images and videos. Therefore, in the collaborative information sharing process, the information composed of numbers and text can be combined into a collaborative message set with a unified format, and the message frame of the collaborative message set can be expressed as shown in Figure 4:
$CNDS—Message Frame header
-- Main message frame (ASN.1)
MessageFrame ::= CHOICE{
    statusFrame  TargetAttributeStatusMessage,
    bmsFrame     TargetBasicMovingStatusMessage,
    sobFrame     TargetStateObservationMessage,
    asobFrame    TargetAssistanceSituationObservationMessage,
    eobFrame     TargetEnvironmentObservationMessage
}
It can be seen from the above ASN.1 code that the message frame in the collaborative navigation information interaction is mainly composed of five basic message bodies:
(1) Navigation target attributes and collaborative status information: Msg_STATUS{Target ID; Target classification(Mini-sized vehicle→1/ Small-sized vehicle→2/ Medium-sized vehicle→3/ Large-sized vehicle→4/ Short-range small-sized aircraft→5/ Medium-range medium-sized aircraft→6/ Long-range large-sized aircraft→7/ Small-sized ships→8/ Medium-sized ships→9/ Large-sized ships→10); Information type(Collaborative request→0/ Active collaboration→1); Collaborative information demand(More accurate, stable and safe positioning data→1/ No demand→0, More comprehensive and real-time navigation environment information about weather condition and road accessibility→1/ No demand→0, Comprehensive real-time dynamic collaborative navigation (DCT) situation information about collaborative targets and threat targets→1/ No demand→0, Comprehensive and real-time point of interest (POI) situation information of specific POI type→1/ No demand→0); Effective navigation sensors(Positioning sensors = no →0/ yes→ 1, Range sensors = no →0/ yes→ 1, Camera sensors = no →0/ yes→ 1, Velocity sensors = no →0/ yes→ 1, Angle sensors = no →0/ yes→ 1)}
Definition: This message describes the identity, the type and size attributes, the collaborative demand status and the collaborative demand category of the navigation target. Therefore, other collaborative targets in the area can acquire the basic information of the navigation target and conduct related navigation collaborative observation and information sharing according to the actual needs.
(2) Information on the basic movement state of the target in real-time navigation: Msg_BMS{Time of the navigation system(hour, minute, second); Coordinates of the navigation target(x, y, z); Speed of the navigation target; Heading angle of the navigation target(clockwise $0 \le \alpha \le {360^ \circ }$); Attitude of the navigation target(roll angle $- {180^ \circ } \le \varphi \le {180^ \circ }$, pitch angle $- {180^ \circ } \le \phi \le {180^ \circ }$, yaw angle $- {180^ \circ } \le \theta \le {180^ \circ }$); Acceleration of the navigation target; Planned navigation path}
Definition: This message describes the time of the navigation system and the coordinates, speed, direction and acceleration of the navigation target or collaborative navigation targets in the unified navigation coordinate system. The message helps the collaborative targets roughly understand the location and movement state of the navigation target based on its effective sensors. It helps them make relevant collaborative observations and, at the same time, provide the coordinates, heading angle and other movement information of the collaborative targets according to the actual needs of the navigation target. If the navigation target's sensors cannot obtain the relevant data, or the navigation target is not interested in the movement state of the collaborative target, the corresponding section can be set as null. Usually, the navigation target sends the collaboration request only when some navigation sensors fail, and there is no need to acquire the motion state information of the collaborative target in detail; therefore, some sections in the Msg_BMS message body are often set as null.
(3) State observation information of the collaborative targets on the navigation target: Msg_SOB{Ranging value; Relative angle; Attitude of the navigation target in collaborative observation; Speed of the navigation target in collaborative observation; Heading angle of the navigation target in collaborative observation}
Definition: This message contains the basic observation values of the collaborative navigation target to the navigation target with respect to the ranging value and relative angle between navigation targets and the collaborative targets, the attitude, speed and heading angle of the navigation target in collaborative observation. The message is mainly used to assist the navigation target to make real-time corrections to its position, attitude, speed and heading angle.
(4) Navigation target assistance and situation observation information Msg_ASOB{Position of the required POI and DCT; Status of the required POI and DCT}
Definition: This message contains the real-time observed position and status information of the POIs and DCTs that are important to the navigation target. It is mainly used to assist the navigation target to acquire the basic status information of the navigation elements represented by POI and DCT, which is outside the perception range of its sensors. So it could help the navigation target make better path plans in real-time navigation.
(5) Observation information of navigation environment Msg_EOB{Navigation weather; Road accessibility}
Definition: This message contains navigation weather and road accessibility observed by the collaborative navigation targets. The message can help the navigation target acquire the status of the navigation environment around the planned path, and help it make effective adjustments to the navigation plan in advance considering the navigation environment.
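As an illustration, the message bodies above can be mirrored by simple data containers. The following Python sketch models Msg_STATUS and Msg_BMS; the field names and types are illustrative assumptions rather than part of the CNDS specification, and `None` plays the role of the protocol's Null sections.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class MsgStatus:
    target_id: int
    classification: int                      # 1-10 per the Msg_STATUS code table
    info_type: int                           # 0 = collaborative request, 1 = active collaboration
    demands: Tuple[int, int, int, int]       # positioning / environment / DCT / POI flags
    sensors: Tuple[int, int, int, int, int]  # positioning / range / camera / velocity / angle

@dataclass
class MsgBMS:
    time: Optional[str] = None
    coordinates: Optional[Tuple[Optional[float], Optional[float], Optional[float]]] = None
    speed: Optional[float] = None
    heading: Optional[float] = None          # clockwise, 0-360 degrees
    attitude: Optional[Tuple[float, float, float]] = None
    acceleration: Optional[float] = None
    planned_path: Optional[tuple] = None

# A request similar to vehicle A's: only positioning collaboration is demanded,
# and the attitude section is Null because the relevant sensors are unavailable.
request = MsgStatus(1, 3, 0, (1, 0, 0, 0), (1, 1, 1, 1, 0))
movement = MsgBMS(time="12:00:00", coordinates=(14000.0, 3000.0, None), speed=15.0)
```

Defaulting every Msg_BMS field to `None` mirrors the observation above that a requesting target usually leaves most movement sections null.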
Here, take the collaborative communication between ground rescue vehicle A and reconnaissance UAV C as an example. The following shows the contents of the message sets of the collaborative communication between vehicle A and UAV C.
Collaboration request message of vehicle A:
$CNDS—Message Frame header
Msg_STATUS (001, 3, 0, (1, 1, 1, 1), (1, 0, 0, 0))
Msg_BMS (12:00:00, (14000 ± 100, 3000 ± 100, Null), 15, Null, (Null, Null, Null), 0, (20000, 10000, Null))
Msg_SOB (Null, Null, (Null, Null, Null), Null, Null)
Msg_ASOB (Null, Null)
Msg_EOB (Null, Null)
Collaboration message of UAV C:
$CNDS—Message Frame header
Msg_STATUS (112, 5, 1, (Null, Null, Null, Null), (1, 1, 1, 0))
Msg_BMS (12:00:00, (14100, 3050, 100), 15, 90, (0, 0, 0), 0, (20000, 10000, Null))
Msg_SOB (112, 64, (Null, Null, Null), 16, 90)
Msg_ASOB ((14500, 3800, Null, 3), (15000, 4200, Null, 3), (12000, 5800, Null, 5))
Msg_EOB (Null, Block Point(16000,6000))
Due to the loss of the GPS signal, vehicle A's positioning accuracy is reduced to the 100 m level. Therefore, UAV C is requested to collaborate on positioning and road condition information. According to the navigation needs of vehicle A, UAV C collects and transmits observation data such as the ranging distance, speed, the navigation situation of the target and the coordinates of road congestion points to vehicle A. The collaboration of navigation information between vehicle A and UAV C is shown as follows and illustrated in Figure 5:
Ground vehicle A revises its coordinates, speed, heading angle and other navigation information according to the collaborative data of UAV C's ranging, speed measurement, heading measurement and coordinates of road congestion points. At the same time, the real-time position of other ground rescue vehicles and the road accessibility information is acquired according to the data of the navigation situation and the coordinates of the road congestion points, so as to plan a safe and feasible navigation path.
3. Feature analysis and automatic evaluation method of the spatial distribution shape
In GPS positioning, there is a correspondence between the quality of observations and the geometric distribution shape of the GPS signal receiver and GPS satellites involved in the measurement (Lassoued et al., Reference Lassoued, Fantoni and Bonnifait2015). For this reason, the GDOP (geometric dilution of precision) parameter is introduced as the index of the spatial distribution quality of the GPS satellites. Similarly, for the networking of collaborative targets, the spatial distribution of collaborative targets has a certain impact on the accuracy and efficiency of collaboration between targets (Cui et al., Reference Cui, Wang, Zhang and Zhang2017). Before conducting collaborative navigation research, it is necessary to analyse the features of the target spatial distribution network, then establish a spatial distribution evaluation model of collaborative navigation targets similar to GDOP to guarantee efficient collaborative communication.
In general, the efficiency of collaborative navigation is closely related to the shape of the observation network of collaborative navigation targets (Zhong and Chen, Reference Zhong and Chen2019). Meanwhile, the shape features of the collaborative observation network vary with the number of collaborative targets. For example, when there are three collaborative navigation targets, the observation network can take three categories of shape: no observation lines coinciding, two observation lines coinciding, or all three observation lines coinciding. When no observation lines coincide, or only two coincide, the positioning error of the navigation target is related to the variance of the intersection angles of the observation lines; when all three observation lines coincide, the positioning error of the navigation target in the direction of the observation lines is much smaller than that in the direction perpendicular to them.
The grey area in Figure 6 represents the range of target collaborative positioning error. After summarising the positioning error in collaborative navigation with different numbers of collaborative observation targets, it can be seen that the collaborative effect of the spatial distribution shape of collaborative navigation targets can be evaluated through the variance of the intersection angles of the observation lines. Generally, when the importance weight of each collaborative navigation target is not considered, the collaborative navigation targets corresponding to the smallest variance should be selected for network observation. For example, under the premise that there are sufficient collaborative navigation targets, it is preferable not to select collaborative navigation targets that share the same observation direction. For collaborative navigation targets that are not suitable for collaborative positioning, the collaborative observation should focus on obtaining information about the POI distribution, the navigation situation, the navigation environment, etc.
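The selection rule above can be sketched as follows. This Python fragment is a simplified 2D illustration (not the paper's implementation): it computes the variance of the intersection angles between the observation lines from the navigation target to each collaborative target, and picks the subset of targets that minimises that variance, which automatically avoids targets sharing the same observation direction.

```python
import math
from itertools import combinations

def angle_variance(nav_xy, collab_xys):
    """Variance of the angles between adjacent observation lines (2D sketch)."""
    bearings = sorted(math.atan2(y - nav_xy[1], x - nav_xy[0])
                      for x, y in collab_xys)
    # angular gaps between adjacent observation lines, wrapping around the circle
    gaps = [bearings[i + 1] - bearings[i] for i in range(len(bearings) - 1)]
    gaps.append(2 * math.pi - sum(gaps))
    mean = sum(gaps) / len(gaps)
    return sum((g - mean) ** 2 for g in gaps) / len(gaps)

def best_subset(nav_xy, candidates, k):
    """Choose the k collaborative targets whose observation lines have the
    smallest intersection-angle variance (equal importance weights)."""
    return min(combinations(candidates, k),
               key=lambda s: angle_variance(nav_xy, s))
```

An evenly spread distribution of targets yields zero variance, i.e. the best geometry under this criterion.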
In the actual networking process of the collaborative targets, the navigation target in need of collaboration would evaluate the spatial distribution of the collaborative targets, and select collaborative navigation targets with the best spatial distribution shape considering the collaboration capabilities of each collaborative target. The evaluation model designs the corresponding CGDOP parameters to evaluate the spatial distribution shape of collaborative targets, and it selects the optimal collaborative navigation targets based on the least squares method, as shown in Figure 7.
Assume that $({x_o},{y_o},{z_o})$ and $({x_i},{y_i},{z_i})$ are the coordinates of the navigation target and the collaborative navigation target in the NED (North-East-Down) navigation coordinate system. N is the number of collaborative navigation targets, satisfying $i = 1,2,\ldots,N$. ${\delta _i}$ is the ranging error of collaborative navigation target i, including the error of the rangefinder and the error related to the time of the navigation system. The measured ranging distance between the navigation target and collaborative navigation target i is ${d_i}$; then ${d_i}$ satisfies:

$d_i = \sqrt{(x_i - x_o)^2 + (y_i - y_o)^2 + (z_i - z_o)^2} + \delta_i$ (1)
Assuming that $({\hat{x}_o},{\hat{y}_o},{\hat{z}_o})$ is the estimated position of the navigation target in the NED navigation coordinate system, Equation (1) can be expanded around $({\hat{x}_o},{\hat{y}_o},{\hat{z}_o})$ into a Taylor series ignoring higher-order terms, and it can be linearised and expressed as:

$d_i = \hat{d}_i - \varepsilon_{ix}\phi_x - \varepsilon_{iy}\phi_y - \varepsilon_{iz}\phi_z + \delta_i$ (2)
In the equation, ${\hat{d}_i}$ is the distance between the estimated position of the navigation target and collaborative navigation target i, and ${\hat{d}_i}$ satisfies:

$\hat{d}_i = \sqrt{(x_i - \hat{x}_o)^2 + (y_i - \hat{y}_o)^2 + (z_i - \hat{z}_o)^2}$ (3)
${\phi _x},{\phi _y},{\phi _z}$ are the coordinate offsets of the navigation target in the directions of $x,y,z$ respectively, with ${\phi _x} = {x_o} - {\hat{x}_o}$, ${\phi _y} = {y_o} - {\hat{y}_o}$ and ${\phi _z} = {z_o} - {\hat{z}_o}$. ${\varepsilon _{ix}},{\varepsilon _{iy}},{\varepsilon _{iz}}\ (i = 1,2,\ldots,N)$ are the direction cosines, in the directions of $x,y,z$, of the vector from the navigation target to collaborative navigation target i, and they can be expressed as:

$\varepsilon_{ix} = \dfrac{x_i - \hat{x}_o}{\hat{d}_i},\quad \varepsilon_{iy} = \dfrac{y_i - \hat{y}_o}{\hat{d}_i},\quad \varepsilon_{iz} = \dfrac{z_i - \hat{z}_o}{\hat{d}_i}$ (4)
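A minimal sketch of the direction-cosine computation, assuming the NED coordinates defined above (plain Python, illustrative only):

```python
import math

def direction_cosines(nav_est, collab):
    """Components (eps_ix, eps_iy, eps_iz) of the unit vector from the
    estimated navigation-target position towards collaborative target i."""
    dx, dy, dz = (collab[j] - nav_est[j] for j in range(3))
    d_hat = math.sqrt(dx * dx + dy * dy + dz * dz)  # estimated distance
    return dx / d_hat, dy / d_hat, dz / d_hat
```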
Therefore, the ranging measurement matrix Z between the navigation target and all the collaborative navigation targets satisfies:

$Z = [\hat{d}_1 - d_1,\ \hat{d}_2 - d_2,\ \ldots,\ \hat{d}_N - d_N]^T = H\Phi - \Delta$ (5)

where H is the $N \times 3$ matrix whose i-th row is $({\varepsilon _{ix}},{\varepsilon _{iy}},{\varepsilon _{iz}})$, $\Phi = {({\phi _x},{\phi _y},{\phi _z})^T}$ and $\Delta = {({\delta _1},{\delta _2},\ldots,{\delta _N})^T}$.
Then the offset vector can be solved according to the least squares method:

$\hat{\Phi} = ({H^T}H)^{-1}{H^T}Z$ (6)
Assuming that the variance of each ranging error is ${\sigma ^2}$, the offset variance matrix satisfies:

$\textrm{Cov}(\hat{\Phi}) = {\sigma ^2}({H^T}H)^{-1}$ (7)
The offset variances are determined by the diagonal elements of $({H^T}H)^{-1}$, so a CGDOP parameter related to the covariance of the coordinate offset can be introduced to measure the positioning error of the collaborative navigation observation based on the ranging values. Assume that $\overline {CGDOP}$ is the preliminary CGDOP parameter on the premise of not considering the importance weights of collaboration. Then $\overline {CGDOP}$ satisfies:

$\overline{CGDOP} = \sqrt{\textrm{tr}[({H^T}H)^{-1}]}$ (8)
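The preliminary CGDOP described above can be computed directly from the geometry. The following is an illustrative NumPy sketch, not the paper's code; the function name is hypothetical:

```python
import numpy as np

def cgdop_unweighted(nav_est, collabs):
    """Preliminary CGDOP (no importance weights): the square root of the
    trace of (H^T H)^-1, where row i of H holds the direction cosines from
    the estimated navigation-target position towards collaborative target i."""
    p = np.asarray(nav_est, dtype=float)
    C = np.asarray(collabs, dtype=float)
    diff = C - p                                         # vectors towards each target
    H = diff / np.linalg.norm(diff, axis=1, keepdims=True)
    return float(np.sqrt(np.trace(np.linalg.inv(H.T @ H))))

# three mutually orthogonal observation lines give CGDOP = sqrt(3)
example = cgdop_unweighted((0.0, 0.0, 0.0),
                           [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)])
```

Clustering the collaborative targets into nearly the same observation direction inflates the trace, and hence the CGDOP value, which matches the geometric intuition of Section 3.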
Since the accuracy and movement state of each collaborative navigation target's sensors differ, their importance weights of collaboration to the navigation target vary from each other. When calculating the CGDOP parameter, the importance weight of each collaborative target needs to be considered. Assume that W is the importance weight matrix of the collaborative navigation targets; then W satisfies:

$W = \textrm{diag}\left(\dfrac{k_1}{\sigma_1^2},\ \dfrac{k_2}{\sigma_2^2},\ \ldots,\ \dfrac{k_N}{\sigma_N^2}\right)$ (9)
In the equation, ${\sigma _i}^2$ is the ranging variance of collaborative navigation target i and ${k_i}$ is its reliability parameter. The reliability parameter is determined by the collaboration relationship between targets. To simplify the calculation, the collaboration relationship can be divided into three positive and three negative levels, as shown in Figure 8. When collaborative navigation target i is unreliable (negative collaborative target, levels 1–3), set ${k_i} = 0$; when its reliability cannot be determined or is at a low level (positive collaborative target, level 1), set ${k_i} = 0.5$; when it is reliable (positive collaborative target, levels 2 and 3), set ${k_i} = 1$.
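The reliability mapping and a possible form of the weight matrix can be sketched as follows. Note that the exact expression for W is not quoted in the text; $W_{ii} = k_i/\sigma_i^2$ is an assumption chosen so that reliable, low-variance targets receive the largest weights.

```python
import numpy as np

def reliability(level: str) -> float:
    """Map the collaboration-relationship level (cf. Figure 8) to k_i."""
    if level.startswith("negative"):
        return 0.0                      # unreliable target: k_i = 0
    if level == "positive-1":
        return 0.5                      # undetermined or low reliability
    if level in ("positive-2", "positive-3"):
        return 1.0                      # reliable target
    raise ValueError(f"unknown level: {level}")

def weight_matrix(ranging_vars, levels):
    """Diagonal importance-weight matrix, assuming W_ii = k_i / sigma_i^2."""
    return np.diag([reliability(lv) / s2
                    for s2, lv in zip(ranging_vars, levels)])
```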
Based on the importance weight matrix W, the weighted least squares solution can be expressed as:

$\hat{\Phi} = ({H^T}WH)^{-1}{H^T}WZ$ (10)
Then it can be deduced that the CGDOP parameter value considering the importance weights of the collaborative navigation targets can be expressed as:

$CGDOP = \lambda \sqrt{\textrm{tr}[({H^T}WH)^{-1}]}$ (11)
In the equation, $\lambda$ is the rating constant. After receiving the coordinate set and ranging set information from the collaborative navigation targets, the most suitable collaborative navigation targets are selected for network observation and positioning of the navigation target according to the CGDOP value.
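Putting the pieces together, the weighted CGDOP and the target-selection step can be sketched as follows. This is an illustrative implementation with the rating constant left at 1 (its value is not specified in the text); a degenerate geometry, where the position is unobservable, is reported as infinite CGDOP so that the selection step automatically avoids it.

```python
import numpy as np
from itertools import combinations

def cgdop(nav_est, collabs, W, lam=1.0):
    """Weighted CGDOP: lam * sqrt(trace((H^T W H)^-1))."""
    diff = np.asarray(collabs, float) - np.asarray(nav_est, float)
    H = diff / np.linalg.norm(diff, axis=1, keepdims=True)
    try:
        return float(lam * np.sqrt(np.trace(np.linalg.inv(H.T @ W @ H))))
    except np.linalg.LinAlgError:
        return float("inf")             # degenerate geometry: position unobservable

def select_targets(nav_est, candidates, weights, k):
    """Pick the k candidate targets (by index) that minimise the weighted CGDOP."""
    return min(combinations(range(len(candidates)), k),
               key=lambda idx: cgdop(nav_est,
                                     [candidates[i] for i in idx],
                                     np.diag([weights[i] for i in idx])))
```

With equal weights, the selection reduces to picking the geometrically best-spread subset, consistent with the angle-variance rule of the three-target discussion above.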
Based on the collaboration effect of observation networks with different CGDOP values, the CGDOP parameter is divided into five levels. Table 1 describes the evaluation and the application scenarios of the collaborative navigation network at the different CGDOP levels.
To sum up, increasing the number of collaborative navigation targets and changing their spatial distribution are the main methods of improving (i.e. reducing) the CGDOP parameter. For example, in the collaborative navigation of polar transport UAVs and icebreakers, increasing the number and optimising the formation shape of the collaborative UAVs can help icebreakers to perform better real-time position correction and distribution monitoring of ice floes. In the collaborative navigation of urban vehicles, increasing the number of collaborative vehicles on the road network can help a single vehicle obtain more accurate and stable positioning data, and can more effectively decrease the possible risk of collision between vehicles, as well as other navigation problems caused by changes in the navigation environment. Therefore, in collaborative formation navigation, the position of each navigation target in the formation should be designed according to the navigation capabilities, target importance and other conditions of each navigation target, so that a single navigation target can obtain more collaborative targets and effective collaborative observation information, and thus improve the collaboration of the navigation formation as much as possible.
4. Experiment and analysis
The experiment aims to evaluate the influence of the shape of the collaborative communication network on the collaboration effect, especially the influence on positioning accuracy of the navigation target. The experimental navigation targets include three unmanned vehicles (A, B and C), two experimental boats (D and E) and two UAVs (F and G). All the experimental targets are equipped with a collaborative communication module and positioning and mutual observation (MO) sensors. The experimental targets start from points a, b, c, f, g at the same time and move to points a1, b1, c1, f1, g1 in the experiment area, as shown in Figure 9. In the navigation process, the collaborative navigation target performs the ranging measurement on the navigation target at a frequency of 1 Hz. At the same time, it broadcasts the observation information to other targets in real time for collaboration.
The experiment recorded the real-time positioning data of navigation targets A, B, C, D, E, F, G. The accuracy of the navigation targets at times t1, t2 and t3 was then compared and analysed against the measured data of DGPS (Differential Global Positioning System). Figure 10 shows the trajectory distribution of all navigation targets based on the recorded positioning data. Figure 11 shows the estimated positioning distribution of navigation targets A, B and C based on ranging MO data with different numbers of collaborative navigation targets. Figure 12 illustrates the positioning accuracy of navigation targets A, B and C at times t1, t2 and t3, which is compared with the calculated CGDOP values to verify the evaluation model of the collaborative navigation network, as shown in Tables 2–4. In the calculation of the CGDOP value, all the collaborative targets are considered reliable, so the reliability parameters are set to 1 and the ranging variances to 0⋅04 m² according to the performance of the ranging module of the experimental targets. The experiment was conducted in sunny, foggy and windy weather as a control variable, and the data from sunny weather were selected for analysis.
Take the collaborative navigation situation at t1 as an example. The positioning distribution of the navigation target with different numbers of collaborative navigation targets estimated by the collaborative ranging value and the error model of the collaborative navigation method can be expressed as follows. In Figure 11, (a) represents the position estimation of the navigation target on the x-y section that contains the DGPS reference coordinates of the navigation target, and the estimated position is calculated by the ranging values of collaborative navigation targets; (b) represents the position estimation on the x-z section based on the ranging values of the collaborative navigation targets; (c) represents the position estimation on the y-z section based on the ranging values of the collaborative navigation targets; (d) represents the estimated position distribution of the navigation target in the three-dimensional (3D) coordinate system. Since the position estimation of the navigation target based on the ranging value of a single collaborative navigation target obeys a spherical distribution in the 3D coordinate system, it presents as circular distribution on the section in different directions.
As can be seen from Figure 11(d), the accuracy of position prediction of the navigation target is gradually improved with the increase in the number of collaborative navigation targets, and the performance of the collaborative navigation network is gradually enhanced. When there are two collaborative navigation targets, the positioning result of the navigation target obeys the circular distribution; when there are three collaborative navigation targets, the positioning result of the navigation target is in line with the bimodal Gaussian distribution; when there are four collaborative navigation targets, the positioning result of the navigation target presents as a Gaussian distribution. So only when there are four or more collaborative navigation targets with reasonable spatial distribution can the collaborative navigation network provide accurate positioning and navigation services independently of the navigation targets that have lost GPS or other positioning signals.
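The bimodal behaviour with three collaborative targets has a simple geometric explanation: three ranging spheres whose centres are coplanar intersect in two points that mirror each other through that plane, and only a fourth, non-coplanar target resolves the ambiguity. A small numerical check (with illustrative coordinates, not the experimental data):

```python
import math

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Three coplanar (z = 0) collaborative targets and a true position above them
targets3 = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0)]
true_pos = (3.0, 4.0, 5.0)
ranges3 = [dist(true_pos, t) for t in targets3]

# The mirror image through the targets' plane fits the same three ranges,
# which is why the three-target position estimate is bimodal.
mirror = (3.0, 4.0, -5.0)
residual3 = max(abs(dist(mirror, t) - r) for t, r in zip(targets3, ranges3))

# A fourth, non-coplanar target breaks the symmetry: the mirror point no
# longer matches its measured range.
extra = (5.0, 5.0, 8.0)
residual4 = abs(dist(mirror, extra) - dist(true_pos, extra))
```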
According to the collaborative navigation network evaluation model based on the CGDOP value, the CGDOP values corresponding to navigation targets A, B and C at times t1, t2 and t3 were calculated and compared with the real-time positioning accuracy measured in the experiment. The results are shown in Tables 2, 3 and 4.
It can be seen from Tables 2, 3 and 4 that the positioning accuracy value (i.e. the positioning error) of collaborative navigation grows gradually with the increase of the CGDOP value. When the CGDOP value is lower than 10, the accuracy value grows rapidly with the CGDOP value; after the CGDOP value exceeds 10, it grows more and more slowly, until the collaborative positioning effect can be ignored. Based on the values of CGDOP and positioning accuracy in the tables above, the relationship between the positioning accuracy and the CGDOP value of navigation targets A, B and C at t1, t2 and t3 can be visualised as in Figure 12.
Compared with the experimental results of single-target navigation based on the data fusion of multi-sensors in the literature, collaborative navigation technology can improve the quality of navigation data with lower configuration requirements on navigation sensors of a single target. In urban navigation or other navigation scenarios with sufficient collaborative navigation targets, the target can continuously update the collaborative navigation network to select the optimal collaborative observation network, in order to dynamically obtain stable and high-precision positioning and navigation data. At the same time, it can obtain navigation assistance services such as safety warnings and navigation situation analysis based on the real-time collaborative navigation data. In the UAV formation navigation or other navigation scenarios that have high requirements for collaborative navigation data, navigation targets with low positioning accuracy or malfunction can obtain real-time position correction from collaborative navigation targets with high-precision position data and enough MO capability, by rationally designing the position distribution of UAVs with different performance and real-time adjustment considering the actual navigation situation. In summary, the establishment of a reasonable collaborative navigation observation network can improve the accuracy and stability of the navigation and positioning data, and reduce the cost of sensor configuration for a single navigation target in different navigation scenarios.
5. Conclusion
To sum up, this paper presents the following work: (1) the design of a multi-target collaborative communication process in collaborative navigation, considering the functions, features and information requirements of land, sea and air targets; a collaborative navigation scene of earthquake rescue was selected to illustrate the collaboration process; (2) the building of the CGDOP evaluation model to assess the collaboration effect of collaborative navigation networks with different spatial shapes of target distribution; (3) a simulation experiment to evaluate the efficiency of the collaborative navigation process and the CGDOP evaluation model. The results show that effective real-time information collaboration between navigation targets can improve their positioning accuracy, stability and information richness in real-time navigation. At the same time, the collaborative navigation effect is closely related to the spatial shape of the collaborative navigation network, which can be described through the CGDOP value; when the CGDOP value is greater than 10, the collaborative effect from other navigation targets gradually reduces until it can be ignored. In further research, the importance weight matrix of the collaborative navigation target and the multi-mode application of collaborative information based on the collaborative navigation method need to be focused on, to increase the accuracy of the CGDOP evaluation model and extend the application areas of collaborative navigation.