1. INTRODUCTION
In recent years, there has been a drastic rise in direct economic losses caused by maritime accidents in China, especially in inland waterways, where the number of ships is increasing, and accidents occur frequently (Safety Department, Maritime Safety Administration, Ministry of Transport of the People's Republic of China, 2018). Thus, reducing the number of these accidents has become the top priority of waterway traffic management.
Although ships use marine radars and Automatic Radar Plotting Aids (ARPAs) to help avoid collisions, the information these provide is limited: a target ship's static information and manoeuvring intentions cannot be identified. Vessel Traffic Services (VTS) can identify a ship manually using Very High Frequency (VHF) radio and radar observation, but the operation is time-consuming and the information obtained is limited. The Automatic Identification System (AIS) exchanges messages between ships, and between ships and traffic management centres, and can help identify and track ships, exchange information and provide collision warnings. Overall, AIS strengthens ship-to-ship collision avoidance, complements ARPA and VTS information, and supports ship reporting. In addition, AIS information can be displayed on Electronic Chart Display and Information Systems (ECDIS), which improves spatial awareness and facilitates more effective maritime communication. AIS also supports text-based safety messaging between ships, further enhancing a ship's overall situational awareness. Therefore, AIS is not only an important source of voyage information but also helps improve safe navigation.
However, AIS has its limitations, for instance when the AIS information itself contains errors. Information may be input incorrectly, a ship may fail to update its AIS data, or the AIS equipment may be switched off; in such cases the system cannot play its role and may even cause confusion. Previous studies have shown that the data provided by AIS are often unreliable. For instance, Wu et al. (Reference Wu, Xu, Wang, Wang and Xu2016) proposed an algorithm to account for errors in the data when a tracked ship appeared to have "jumped" large distances. Harati-Mokhtari et al. (Reference Harati-Mokhtari, Wall, Brooks and Wang2007) investigated regulations, supervision of proper use, and the training and management of AIS users; they analysed the potential of AIS to cause problems and adapted the classic "Swiss Cheese" human-factors model of system failure to AIS to investigate possible accident trajectories. Although AIS equipment could readily include self-checking mechanisms and links to other equipment to detect obvious inconsistencies, maritime authorities still lack methods for verifying the authenticity and accuracy of AIS data.
The application of Unmanned Aerial Vehicles (UAVs) is becoming increasingly extensive. By using a UAV, there is potential for a wider view and a better grasp of the overall marine situation. This has advantages in the dynamic management of ships, anti-pollution monitoring, and control of shipping density. A method for verifying AIS data by using a UAV is proposed in this paper. This method processes and analyses the video images taken by a UAV, and then calculates the spatial position of a ship target. This spatial position information and AIS-based ship position are analysed and compared to determine the authenticity and accuracy of AIS. The proposed method was tested in the Huangpu River in China, and the results show that the AIS information can be verified in real time.
2. RELATED WORK
The research discussed in this paper includes analysis of AIS data, target detection and tracking, and visual target positioning. Relevant work in each of these areas is reviewed below.
2.1. Analysis of AIS data
In simple terms, AIS is a technology that makes ships electronically visible to each other. An AIS transmitter automatically broadcasts information from the vessel over VHF to all AIS-fitted ships and shore-based receivers within range. The information includes the ship's name, position (from the Global Positioning System (GPS) equipment), speed, course and heading, along with a unique identification number, the Maritime Mobile Service Identity (MMSI). This information can be displayed on the equipment of other ships or at a VTS centre. AIS data can be used for ship navigation and by a VTS centre for traffic safety and efficiency.
Historical AIS data is usually used to study the characteristics of vessel traffic to further improve safety and efficiency. These traffic characteristics include the mapping of global shipping density (Wu et al., Reference Wu, Xu, Wang, Wang and Xu2016), navigation patterns (Gunnar Aarsæther and Moan, Reference Gunnar Aarsæther and Moan2009), and navigational risk assessment (Maimun et al., Reference Maimun, Nursyirman, Sian, Samad and Oladokun2014). AIS has also been used in research into ship-collision avoidance (Mou et al., Reference Mou, Cees and Ligteringen2010; Goerlandt and Kujala, Reference Goerlandt and Kujala2011; Zhang et al., Reference Zhang, Goerlandt, Kujala and Wang2016). Spatial and temporal information from AIS is also important in the calculation of collision probability (Altan and Otay, Reference Altan and Otay2018).
AIS information is often used in association with radar. The common trend is to use radars as the primary source of surveillance and AIS as a secondary source with interaction between the two corresponding datasets. In general, the fused estimates from these two sensors can improve the overall tracking performance with respect to estimation accuracy, number of false tracks, and missed detections over the corresponding metrics with a single source. One way of fusing information from multiple sensors is track-to-track fusion, in which the fusion is performed at the track level such that separate tracks are initiated and maintained at each sensor and combined later at the fusion node (Chen et al., Reference Chen, Kirubarajan and Bar-Shalom2003). Another method involves fusion through measurements, where measurements from different sources are forwarded to a centralised fusion processor and processed jointly by a measurement-to-track association algorithm to initialise and maintain tracks (Habtemariam et al., Reference Habtemariam, Tharmarasa, McDonald and Kirubarajan2015). However, the combined analysis of AIS and video images is relatively rare.
This literature overview shows that a ship's spatial information is the main content of AIS analysis, and it forms the basis for judging the authenticity of AIS data in this study.
2.2. Target detection and tracking
Target detection is performed to identify a moving target and separate it from backgrounds of differing complexity. It is also used to complete follow-up tasks such as tracking and identification. In recent years, machine learning has gradually become one of the most popular video-based target-detection methods. The basic process is to calibrate the learning sample, extract appropriate features, and then use machine learning to obtain a detection model. Relevant methods include artificial neural networks, Support Vector Machines (SVM) and boosting (Yin et al., Reference Yin, Chen, Chai and Liu2016).
The moving-target-tracking problem can be regarded as the problem of matching a target's location, speed, shape, texture, colour and other related features across consecutive image frames. Classic approaches include tracking based on feature matching, regional statistical matching and model matching, as well as mean-shift tracking, Kalman filtering and particle filtering. Although these methods can be effective, they still have shortcomings. For example, tracking can easily fail when the target changes shape or is occluded. This is mainly because it is difficult for a training sample to cover all possible deformations, scales, attitude changes and illumination changes of the target. Therefore, the target model should be updated during tracking to cope with changes in the target's appearance.
Some scholars have proposed new detection and tracking methods that address the selection of training samples. For tracking a single target, researchers have proposed visual target tracking methods based on online learning of the target's appearance (Smeulders et al., Reference Smeulders, Chu, Cucchiara, Calderara, Dehghan and Shah2014), multiple-instance tracking (Zhang and Song, Reference Zhang and Song2013), and tracking–learning–detection (Kalal et al., Reference Kalal, Mikolajczyk and Matas2012). Multi-target tracking methods include the joint integrated probabilistic data association method (Daronkolaei et al., Reference Daronkolaei, Nazari, Menhaj and Shiry2008) and a multi-target tracking method based on sparse reconstruction analysis (Han et al., Reference Han, Jiao, Zhang, Ye and Liu2011).
The background of a ship target captured by a UAV is normally the water surface. In this case, background modelling is suitable for detecting moving ships. In this respect, the ViBe algorithm proposed by Barnich and Van Droogenbroeck (Reference Barnich and Van Droogenbroeck2011) has attracted attention. This algorithm can initialise the background model from the first frame, with low complexity and low memory requirements, which makes it suitable for real-time target detection. In terms of noise robustness, the algorithm suppresses the influence of shadows, illumination changes, jitter, etc., better than other similar methods, thus showing strong adaptability in complex scenes and achieving better extraction of the foreground target outline.
The aim of this research is to detect ship targets in video images. Even when a UAV hovers while shooting, it wobbles to some extent. In addition, there are various types of ships in the shooting area, moving at varying speeds. In some cases, a filmed ship is large and moves slowly, sometimes even covering the entire image plane. Moreover, water-surface clutter such as ripples and reflections can affect performance. Therefore, in this study, the authors improved the ViBe algorithm to make it suitable for tracking and detection of ship targets in video images shot by a UAV.
2.3. Visual target positioning
Visual target positioning offers non-contact, real-time, online measurement with high accuracy, and has developed rapidly and been widely applied (Kanatani et al., Reference Kanatani, Sugaya and Kanazawa2016). According to the number of cameras used, positioning methods can be divided into monocular, binocular and multiple-view positioning (Hartley and Zisserman, Reference Hartley and Zisserman2003).
Monocular vision is simple, easy to use, and can be updated rapidly. In addition, compared with multi-view positioning, the problems of optimal baseline and feature-point matching do not need to be solved. However, monocular positioning requires some prior knowledge to recover Three-Dimensional (3D) spatial information from a single Two-Dimensional (2D) image. Moreover, its positioning accuracy and reliability are relatively low, and it cannot resolve occlusion. Binocular visual localisation generally includes camera calibration, feature extraction, stereo matching and parallax localisation. Parallax localisation based on triangulation is simple when the error of the observed values is not considered. Multi-view positioning has the advantage of high precision and can resolve occlusion in the measurement process to a certain extent, but as the number of cameras increases, the matching problem becomes more complex.
As the UAV films the ship from the air, occlusion is generally not a problem. In addition, as the ship is located on the water surface, its height is known, so the monocular positioning method can be used. Moreover, as a UAV can generally carry at least one camera, we used monocular vision positioning in this research. The basic process is as follows. First, the image is corrected according to the camera calibration, which eliminates the effect of distortion on positioning accuracy to a certain extent. Then, the features of the target object in the scene are identified and their image coordinates are obtained using feature-extraction algorithms such as point feature extraction (Somani and Raman, Reference Somani and Raman2015) and line feature extraction (Von Gioi et al., Reference Von Gioi, Jakubowicz, Morel and Randall2010). Finally, the spatial position of the target is obtained by transforming the image coordinates of the features into 3D space coordinates using the corresponding transformation matrix.
3. ALGORITHM FLOW
Figure 1 presents the process design of the verification method. The main inputs are the video images shot by the UAV and the AIS information. The output is a conclusion on the authenticity of the target's AIS information. The detailed process is as follows:
(1) The input video shot by the UAV is processed; here, an improved Canny operator is integrated with the detection result from the ViBe algorithm to detect and track the ship target.
(2) Based on the obtained internal and external orientation elements of the camera, the spatial position of the ship is calculated from the video image.
(3) AIS data are selected from a search area consisting of reported ship positions within 150 m of the current position of the UAV.
(4) The distance between the ship position obtained from the video image and each selected AIS position is calculated. If the distance is less than a threshold value (set here to 20 m), the AIS record is placed in the candidate set.
(5) The nearest ship in the candidate set is selected. If the speed and course obtained from the video image and from AIS are the same, or the differences are small, the observations are assumed to belong to the same ship. Otherwise, the next nearest ship in the candidate set is checked in the same way. If no AIS record in the candidate set passes the verification, it is assumed that there is no AIS information corresponding to the target ship. (A sketch of this matching logic is given at the end of this section.)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20191017064604094-0592:S0373463319000262:S0373463319000262_fig1g.gif?pub-status=live)
Figure 1. Comparison and verification process of AIS data.
In this whole process, the main problems are to detect the ship target in the video image and then to position it, as discussed in the following sections.
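To make steps (3)–(5) concrete, the following is a minimal sketch of the matching logic, assuming each AIS record carries an MMSI, position, speed and course. The 150 m search radius and 20 m position gate follow the values stated above; the speed and course gates, the field names and the haversine helper are illustrative assumptions not fixed by the method description.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 positions."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def verify_ais(track, ais_records, uav_pos,
               search_radius=150.0, pos_gate=20.0,
               speed_gate=1.0, course_gate=10.0):
    """Match one visually tracked ship against nearby AIS reports.

    track: dict with 'lat', 'lon', 'speed', 'course' estimated from the video.
    ais_records: iterable of dicts with 'mmsi', 'lat', 'lon', 'speed', 'course'
                 (speeds and courses in the same units for both sources).
    Returns the matching MMSI, or None if no AIS report is consistent.
    """
    # Step (3): keep only AIS reports within the search area around the UAV.
    nearby = [r for r in ais_records
              if haversine_m(uav_pos[0], uav_pos[1], r['lat'], r['lon']) <= search_radius]

    # Step (4): candidate set = reports whose position is within the 20 m gate.
    candidates = []
    for r in nearby:
        d = haversine_m(track['lat'], track['lon'], r['lat'], r['lon'])
        if d <= pos_gate:
            candidates.append((d, r))

    # Step (5): try candidates from nearest to farthest; accept the first whose
    # speed and course also agree with the values estimated from the video
    # (the gate values here are illustrative).
    for d, r in sorted(candidates, key=lambda item: item[0]):
        dcourse = abs((track['course'] - r['course'] + 180.0) % 360.0 - 180.0)
        if abs(track['speed'] - r['speed']) <= speed_gate and dcourse <= course_gate:
            return r['mmsi']
    return None  # no AIS report corresponds to the tracked ship
```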
4. SHIP TARGET DETECTION
Video images shot by the UAV contain noise that can affect the accuracy of ship target detection: for instance, ripples in the water, sunlight reflecting off the water surface, and local noise such as bubbles, dents and burrs in the image caused by external disturbance of the UAV. This noise must be corrected and compensated for. Therefore, some improvements were made to the ViBe algorithm, as shown in Figure 2.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20191017064604094-0592:S0373463319000262:S0373463319000262_fig2g.gif?pub-status=live)
Figure 2. Process of ship target detection.
(1) An improved Canny operator was used to obtain the edge features in the image, and is calculated as follows:
A median filter was used instead of a Gaussian filter to smooth the surface ripples in the input video, as it proved more effective for this purpose in the experiments. Then, the average of the gradients in the 45° and 135° directions was calculated (as in Equation (1)) and compared with the gradients in the horizontal and vertical directions of the image; the maximum value is taken as the final gradient of the region:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20191017064604094-0592:S0373463319000262:S0373463319000262_eqn1.gif?pub-status=live)
Next, adaptive threshold segmentation was applied to the edge map using the maximum between-class variance (Otsu) method, based on the final gradient of each region.
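As an illustration of step (1), the sketch below uses OpenCV. Since the exact form of Equation (1) is not reproduced in the text here, the 45°/135° difference kernels, the median-filter size and the Sobel operators are illustrative choices; the maximum between-class variance thresholding is realised with OpenCV's Otsu implementation.

```python
import cv2
import numpy as np

def improved_canny_edges(frame_gray):
    """Edge map per step (1): median smoothing, four-direction gradient, Otsu threshold."""
    # Median filter instead of the Gaussian used by the standard Canny operator.
    smoothed = cv2.medianBlur(frame_gray, 5)

    # Horizontal and vertical gradients.
    gx = cv2.Sobel(smoothed, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(smoothed, cv2.CV_32F, 0, 1, ksize=3)

    # Diagonal gradients (45 and 135 degrees) via rotated difference kernels
    # (illustrative kernels; the paper's Equation (1) defines its own form).
    k45 = np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]], dtype=np.float32)
    k135 = np.array([[2, 1, 0], [1, 0, -1], [0, -1, -2]], dtype=np.float32)
    g45 = cv2.filter2D(smoothed.astype(np.float32), -1, k45)
    g135 = cv2.filter2D(smoothed.astype(np.float32), -1, k135)

    # Average of the diagonal gradients, compared with |gx| and |gy|;
    # the maximum is kept as the final gradient of each pixel.
    g_diag = 0.5 * (np.abs(g45) + np.abs(g135))
    grad = np.maximum(np.maximum(np.abs(gx), np.abs(gy)), g_diag)

    # Adaptive segmentation with the maximum between-class variance (Otsu) method.
    grad8 = cv2.normalize(grad, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, edges = cv2.threshold(grad8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return edges
```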
(2) Morphological filtering was used to suppress surface clutter caused by the pronounced motion of water ripples. Let the input image be f(x, y) and the structural elements be g(x, y) (with directions of 0°, 45°, 90° and 135°). The opening and closing operations are then expressed, respectively, as follows:
$$f \circ g = (f \ominus g) \oplus g \qquad (2)$$
$$f \bullet g = (f \oplus g) \ominus g \qquad (3)$$
The image and four structural elements are used to conduct a morphological open operation as:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20191017064604094-0592:S0373463319000262:S0373463319000262_eqn4.gif?pub-status=live)
The average of the results from the four expressions is used as the calculation result.
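A possible realisation of step (2) is sketched below, applying the opening operation with line structuring elements at the four stated directions and averaging the four results; the element length and the choice to operate on the binary edge map from step (1) are assumptions.

```python
import cv2
import numpy as np

def directional_opening(binary_img, length=7):
    """Average of morphological openings with line structuring elements
    oriented at 0, 45, 90 and 135 degrees (step (2)); the element length
    is an illustrative choice."""
    # 0 and 90 degrees: horizontal and vertical lines.
    se_0 = np.ones((1, length), dtype=np.uint8)
    se_90 = np.ones((length, 1), dtype=np.uint8)
    # 45 and 135 degrees: diagonal lines.
    se_45 = np.flipud(np.eye(length, dtype=np.uint8))
    se_135 = np.eye(length, dtype=np.uint8)

    opened = [cv2.morphologyEx(binary_img, cv2.MORPH_OPEN, se)
              for se in (se_0, se_45, se_90, se_135)]

    # Average the four results; a subsequent threshold can re-binarise the output.
    return np.mean(np.stack(opened, axis=0), axis=0).astype(np.uint8)
```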
(3) The abovementioned result is fused with the foreground objects obtained through the ViBe algorithm, and the fused result is refined using an erosion filter. Next, the external contour of the ship is extracted according to the area and aspect ratio of the target. Finally, a seed-filling algorithm is used to fill the holes, and the ship targets are framed by their outer contours, which facilitates subsequent ship positioning and tracking.
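Step (3) could be realised along the following lines, assuming a ViBe foreground mask is available from a separate implementation (ViBe itself is not shown); the area and aspect-ratio gates and the flood-fill stand-in for seed filling are illustrative.

```python
import cv2
import numpy as np

def extract_ship_boxes(edge_mask, vibe_mask, min_area=400, max_aspect=8.0):
    """Fuse the edge result with the ViBe foreground (step (3)) and return
    bounding boxes of ship-like blobs; the gate values are illustrative."""
    # Fuse the improved-Canny edges with the ViBe foreground mask.
    fused = cv2.bitwise_or(edge_mask, vibe_mask)

    # Erosion to remove isolated clutter pixels.
    fused = cv2.erode(fused, np.ones((3, 3), np.uint8), iterations=1)

    # Fill holes inside the blobs (a flood-fill stand-in for seed filling;
    # assumes the top-left corner belongs to the background).
    h, w = fused.shape
    flood = fused.copy()
    mask = np.zeros((h + 2, w + 2), np.uint8)
    cv2.floodFill(flood, mask, (0, 0), 255)
    filled = cv2.bitwise_or(fused, cv2.bitwise_not(flood))

    # Keep connected components whose area and aspect ratio look like a ship
    # (OpenCV 4.x findContours signature).
    boxes = []
    contours, _ = cv2.findContours(filled, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, bw, bh = cv2.boundingRect(c)
        area = cv2.contourArea(c)
        aspect = max(bw, bh) / max(1, min(bw, bh))
        if area >= min_area and aspect <= max_aspect:
            boxes.append((x, y, bw, bh))
    return boxes
```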
Ship target images taken by a UAV can be divided into vertical and oblique shots. The results of the abovementioned processing for vertical and oblique shots are shown in Figures 3 and 4, respectively. Figures 3(a) and 4(a) present the video images shot by the UAV. Figures 3(b) and 4(b) show the results of the ViBe algorithm alone, where the target-detection result is not ideal and the ship contour is unclear; there are also some ghost objects in the image. The results of the improved Canny edge-detection algorithm are shown in Figures 3(c) and 4(c); this method achieves a basic extraction of ship contours before fusion with the detection results of the ViBe algorithm. Figures 3(d) and 4(d) present the fusion result, showing the improvement over the contour detection of the ViBe algorithm alone. However, contour holes can still be observed; therefore, the seed-filling algorithm was used, and its results are shown in Figures 3(e) and 4(e). Figures 3(f) and 4(f) show the detection results, with the connected domains of the ship targets marked by minimum external rectangles.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20191017064604094-0592:S0373463319000262:S0373463319000262_fig3g.jpeg?pub-status=live)
Figure 3. Process of detecting ship targets on vertical image.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20191017064604094-0592:S0373463319000262:S0373463319000262_fig4g.jpeg?pub-status=live)
Figure 4. Process of detecting ship targets on oblique image.
5. SHIP TARGET POSITIONING
In the process of ship target positioning, a UAV carrying a GPS receiver and a video camera is flown over the ship to capture video images of the target. As the position of the UAV is known, spatial information about the filmed ships can be obtained from the video image. Moreover, the AIS data also contain information such as the location, course and speed of the ship. Therefore, the authenticity of the AIS information, or whether the AIS equipment on the ship is functioning, can be verified. The position information received by the UAV and the position information from AIS are both in the World Geodetic System (WGS)-84 coordinate system, which is a geodetic (ellipsoidal) coordinate system. To calculate the position of the ship target, the spatial positions obtained from the UAV and from the AIS information are converted to a spatial rectangular coordinate system; the spatial relationship is shown in Figure 5.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20191017064604094-0592:S0373463319000262:S0373463319000262_fig5g.gif?pub-status=live)
Figure 5. Visual measurement positioning.
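As a concrete example of this conversion, the sketch below maps WGS-84 geodetic coordinates to an Earth-centred Earth-fixed (ECEF) rectangular frame using the standard WGS-84 ellipsoid constants; the choice of ECEF (rather than, say, a local tangent frame) is an assumption, since the text does not fix the rectangular system.

```python
import math

# WGS-84 ellipsoid constants.
WGS84_A = 6378137.0               # semi-major axis (m)
WGS84_E2 = 6.69437999014e-3       # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h=0.0):
    """Convert WGS-84 geodetic coordinates to Earth-centred rectangular (ECEF) coordinates.
    For a ship on the water surface, h can be taken as approximately zero."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = WGS84_A / math.sqrt(1.0 - WGS84_E2 * math.sin(lat) ** 2)  # prime vertical radius
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - WGS84_E2) + h) * math.sin(lat)
    return x, y, z
```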
The 3D camera coordinate system C-XcYcZc is defined relative to the image plane, where C is the perspective centre of the camera and is taken as the position of the UAV. The Zc axis of C-XcYcZc passes through the image plane, and the intersection point c is the principal point, with image coordinates (u, v). In this paper, the coordinate system of a value is identified by the superscripts w, c and i for the world, camera and image coordinate systems, respectively. The orientation of C-XcYcZc in the 3D world coordinate system W-XwYwZw is described by an orthonormal rotation matrix R, defined by three successive rotation angles α, β and γ around W-Xw, W-Yw and W-Zw, respectively, and expressed as follows:
$$R = R(\alpha)\,R(\beta)\,R(\gamma) = \begin{bmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{bmatrix} \qquad (5)$$
Suppose that object point P in W-XwYwZw is projected onto image point p in i-xiyi; Pc is its 3D camera coordinate, pi is its image coordinate and Pa is its auxiliary image space coordinate. (The auxiliary image space coordinate system C-XaYaZa is defined with origin C and orthogonal axes Xa, Ya and Za, whose orientations are the same as those of W-XwYwZw.) Then, we have:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20191017064604094-0592:S0373463319000262:S0373463319000262_eqn6.gif?pub-status=live)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20191017064604094-0592:S0373463319000262:S0373463319000262_eqn7.gif?pub-status=live)
Here, (u, v, f) are the elements of the interior orientation and can be obtained through camera calibration in advance. Thus, (u, v, f) are known in the following equations. The perspective projection from P to p through the perspective centre is captured by a collinearity equation (points C, p, and P are collinear):
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20191017064604094-0592:S0373463319000262:S0373463319000262_eqn8.gif?pub-status=live)
Alternatively, it can be written in analytical form as:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20191017064604094-0592:S0373463319000262:S0373463319000262_eqn9.gif?pub-status=live)
By eliminating scaling factor λ, two analytical equations for each pair of the object-image points can be obtained as follows (Hartley and Zisserman, Reference Hartley and Zisserman2003):
$$x - u = -f\,\frac{a_1 (X - X_C^w) + b_1 (Y - Y_C^w) + c_1 (Z - Z_C^w)}{a_3 (X - X_C^w) + b_3 (Y - Y_C^w) + c_3 (Z - Z_C^w)}, \qquad y - v = -f\,\frac{a_2 (X - X_C^w) + b_2 (Y - Y_C^w) + c_2 (Z - Z_C^w)}{a_3 (X - X_C^w) + b_3 (Y - Y_C^w) + c_3 (Z - Z_C^w)} \qquad (10)$$
where $(X_C^w, Y_C^w, Z_C^w)$ is the camera position with respect to the 3D world coordinate system and can be obtained from the GPS receiver on the UAV. (α, β, γ) are the camera orientation angles with respect to the world coordinate system, of which the rotation matrix elements $(a_i, b_i, c_i;\ i = 1, 2, 3)$ are functions; they can also be obtained from the camera control system of the UAV. (x, y) are the image coordinates of point p and can be measured from the image, and (X, Y, Z) are the world coordinates of point P, the unknown quantities to be calculated. As a result, Equation (10) gives two equations in three unknowns. However, as the ship is on the river, the Z value of P is approximately zero. In addition, the UAV's camera captures video images in the vertically downward direction, so the effect of any deviation in the Z value on the result is small, and Z can be set to zero. Therefore, the position of point P can be obtained from Equation (10).
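With Z set to zero, Equation (10) can be solved directly by intersecting the image ray with the water plane, as sketched below. The sketch assumes R rotates camera-frame vectors into the world frame, consistent with the reconstructed form of Equation (10); the composition order of the three elementary rotations follows a common convention and should be matched to the UAV gimbal's definition.

```python
import numpy as np

def rotation_matrix(alpha, beta, gamma):
    """R built from successive rotations about the world X, Y and Z axes
    (the composition order shown here is an assumption)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return rz @ ry @ rx

def ground_point(x, y, u, v, f, cam_pos, R, z_ground=0.0):
    """Intersect the image ray of pixel (x, y) with the plane Z = z_ground.

    cam_pos: camera (UAV) position (X_C, Y_C, Z_C) in the world frame.
    R: rotation taking camera-frame vectors into the world frame.
    Returns the world coordinates (X, Y, Z) of the ship point.
    """
    cam_pos = np.asarray(cam_pos, dtype=float)
    # Ray direction in the camera frame (principal point (u, v), focal length f).
    d_cam = np.array([x - u, y - v, -f], dtype=float)
    # Ray direction in the world frame.
    d_world = R @ d_cam
    # Scale factor lambda such that the ray reaches the water surface.
    lam = (z_ground - cam_pos[2]) / d_world[2]
    return cam_pos + lam * d_world
```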
As the AIS information contains the ship's speed and course, it can also be used to verify the authenticity of the AIS data. As shown in Figure 6, the ship moved from P1 to P2 in time Δt, from which average speed v and average course angle μ can be obtained. As the method is based on the video-image data, the value of Δt can be set to be small, and average speed v and average course angle μ can be approximated as real-time speed and course angle, respectively. The calculation methods of speed v and course angle μ are listed in Equations (11) and (12), respectively. These results can be compared with the ship speed and course in the AIS data.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20191017064604094-0592:S0373463319000262:S0373463319000262_fig6g.gif?pub-status=live)
Figure 6. Calculation of ship's motion state.
$$v = \frac{\sqrt{(X_2 - X_1)^2 + (Y_2 - Y_1)^2}}{\Delta t} \qquad (11)$$
$$\mu = \arctan\frac{X_2 - X_1}{Y_2 - Y_1} \qquad (12)$$

where $(X_1, Y_1)$ and $(X_2, Y_2)$ are the world coordinates of P1 and P2.
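A small helper corresponding to Equations (11) and (12) is sketched below; it assumes X points east and Y points north, so that the course angle is measured clockwise from north, and uses atan2 to avoid the quadrant ambiguity of a plain arctangent.

```python
import math

def speed_and_course(p1, p2, dt):
    """Average speed (m/s) and course angle (degrees clockwise from north)
    from two planar positions p1 = (X1, Y1) and p2 = (X2, Y2) observed dt seconds apart."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    v = math.hypot(dx, dy) / dt                      # Equation (11)
    mu = math.degrees(math.atan2(dx, dy)) % 360.0    # Equation (12), quadrant-safe
    return v, mu
```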
6. THE EXPERIMENT
To verify the effectiveness of the proposed method, an experiment was conducted in the navigable waters of the Huangpu River in Shanghai, China. A UAV was flown 75–100 m above the Huangpu River and filmed the ships underway. Figure 7 shows the experiment site.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20191017064604094-0592:S0373463319000262:S0373463319000262_fig7g.jpeg?pub-status=live)
Figure 7. The UAV taking off at the experiment site.
The overall architecture of the experimental system is shown in Figure 8. It comprises the UAV, the remote control, an Android device, the master server and the AIS server. The resolution of the video captured by the camera on the UAV was 1,920 × 1,080 pixels and the frame rate was 48 frames per second. The video was sent to the remote control and then to the master server through an image grabber. The position of the UAV was likewise sent to the remote control and then to the master server through a mobile terminal using Wi-Fi. The core of the experimental platform was the master server, whose calculation processes include image processing, AIS message analysis, acquisition of the UAV position, visual target positioning and information comparison. The specific procedure follows the algorithm described above.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20191017064604094-0592:S0373463319000262:S0373463319000262_fig8g.gif?pub-status=live)
Figure 8. System structure of the experimental platform.
The video images taken by the UAV are shown in Figures 9(a)–(d), in which Figure 9(a) was taken from the UAV flying approximately 150 m above the experimental site. Figure 9(b) shows the image of ships vertically below the camera, and Figures 9(c) and 9(d) show oblique images taken from the UAV flying approximately 100 m above the ships.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20191017064604094-0592:S0373463319000262:S0373463319000262_fig9g.jpeg?pub-status=live)
Figure 9. Video images taken by the UAV.
The proposed method outlined in Figure 1 was used to extract the ship targets in the images; their spatial positions were then calculated and compared with the AIS information. The results in Table 1 show that the errors in longitude and latitude are within 0·010′. These errors are mainly due to GPS positioning error and the position of the ship's antenna. Nevertheless, this accuracy is sufficient to verify the AIS signal, given the size and spacing of typical ships.
Table 1. Comparison of the AIS information and ship target positioning results.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20191017064604094-0592:S0373463319000262:S0373463319000262_tab1.gif?pub-status=live)
7. CONCLUSION
In recent years, maritime authorities have begun to use UAVs for patrolling within their jurisdictions. In this study, we developed a method for verifying AIS data. Through target extraction and target positioning, the spatial position, speed and course of the ship target were obtained and compared with the AIS information to determine the authenticity of the AIS data. The method was tested in the Huangpu River in China, and the results show that AIS information can be verified in real time. AIS signals can thus be checked automatically by a UAV on its daily patrol, improving the supervision efficiency of maritime authorities at a lower patrol cost. In addition, enhanced UAV patrols could encourage ships within the jurisdiction to improve the quality of their AIS data. However, the positioning method is based on monocular visual positioning; using multiple cameras would improve positioning accuracy, and the accuracy of ship target detection also needs to be improved.
Moreover, electronic cruising has become an important means for maritime authorities to manage navigation safety on site. It is a model that integrates advanced monitoring methods such as VTS, AIS, VHF and Closed-Circuit Television (CCTV) into a unified command platform to track ships dynamically, manage shipping, and implement electronic monitoring with full coverage of important river sections. It can effectively enhance maritime supervision and emergency-response capabilities, and reduce accidents such as ship collisions and groundings. Air cruising, in which UAVs are introduced into electronic cruising, offers a high vantage point and a wide field of view, making it easy to grasp the overall situation; this gives it unique advantages over other methods in the dynamic management of shipping, pollution prevention and monitoring, and the control of ship density. These advantages can enrich the working modes of electronic cruising and may even bring revolutionary changes to this field. This is also our main purpose in exploring the application of UAV systems in the maritime domain, and further research is required to explore these issues.
ACKNOWLEDGMENTS
This research was supported by the National Natural Science Foundation of China under grant 41701523.