1. INTRODUCTION
Underwater vehicles require precision navigational sensors to navigate effectively and to determine the position of the vehicle in real time. The precision requirement depends on the mission objectives, sensor accuracies and mission duration (Ramadass et al., 2013; Geng et al., 2010; Li and Wang, 2013). Manned and unmanned underwater vehicles designed for complex subsea operations and long mission durations are often equipped with very high precision Fibre Optic Gyro (FOG)-based Inertial Navigation Systems (INS), aided by a Doppler Velocity Log (DVL) and acoustic baseline positioning systems (Vedachalam et al., 2014a; Narayanaswamy et al., 2013; Vedachalam et al., 2014b). Shallow water vehicles, designed for shorter mission periods, are often fitted with a navigational system comprising MEMS (Micro-Electro-Mechanical Systems)/magnetic sensor technology-based attitude sensors (Fossen, 2012), a DVL and a processor for real time computation of the position. The navigational system is initialised with a position input from an external terrestrial Global Positioning System (GPS) and works in a dead reckoning mode (Marco and Healey, 2001) during the underwater mission. In the dead reckoning mode, the navigational algorithm estimates the current position from the last known or computed position, based on the inputs from the sensor suite: attitude sensors measuring the vehicle's orientation in three Degrees Of Freedom (DOF), and an acoustic DVL measuring the vehicle's body frame linear velocities along all three axes.
The precision of the sensor hardware suite and the effectiveness of the position estimation algorithm are vital for effective position determination. The selection of the sensor suite is a trade-off between the position estimation needs of the mission, sensor footprint and cost (Grewal et al., 2001; Ellingsen, 2008). The National Institute of Ocean Technology (NIOT) designed and developed a Remotely Operated Vehicle (ROV) with a depth rating of 500 m for shallow water and polar exploration. This paper presents the development of an INS for this shallow water ROV, which can be deployed from any ship. The navigation sensor suite was selected with the aid of an in-house developed simulator tool: a MATLAB mathematical model that takes the sensor error model parameters and a user-defined mission profile as inputs and simulates the vehicle trajectory in geo-coordinates. The tool also displays the vehicle trajectory under ideal sensor conditions for the same mission profile. Based on the simulation results obtained for various combinations of sensors and the mission target position accuracy requirement of less than 5%, the attitude sensor and DVL hardware were realised. A real time controller and LabVIEW software were used for data acquisition and for developing the algorithm that computes the position with a Kalman filter. Prior to installing the systems on board the ROV, the inertial and attitude sensors were calibrated as per the procedures recommended by the manufacturers. Calibration of the Photonic Inertial Navigation System (PHINS) was performed with the aid of GPS, as per the instructions given in the user manual (PHINS, 2013). A staircase-shaped trajectory was followed during the fine alignment mode until the heading covariance was less than 0·1°.
The developed navigational system was then validated for the desired performance using an in-house developed emulator, based on a GPS-aided high precision FOG-based INS and the error models of the sensors, which provides attitude and velocity inputs with accuracies resembling the selected sensor suite. The system was validated on board a terrestrial vehicle following a mission profile closely resembling that of the actual underwater vehicle, and was then installed and qualified for the desired performance on board the NIOT-developed 500 m depth rated ROV.
2. POSITION ESTIMATION METHODOLOGY IN UNDERWATER VEHICLES
Land-based and ocean surface vessel navigational systems utilise GPS signals for computing their position (Hegrenaes et al., 2008). GPS signals cannot be used by underwater vehicles, as electromagnetic waves suffer significant attenuation in water (Fallon et al., 2010). Prior to deployment, the underwater vehicle navigational system is initialised with a position input from an external GPS; it then works in the dead reckoning mode during the underwater mission. In the dead reckoning mode, the navigational algorithm estimates the current position from the last known or computed position, using inputs from the vehicle's on-board attitude sensors and DVL. The DVL, mounted on the body frame of the vehicle, measures the vehicle velocity in body frame coordinates (Jalving et al., 2003). In order to compute the change in position of the navigating vehicle with respect to the Earth frame, the measured body frame velocity needs to be converted into the Earth frame. Figure 1 represents the vehicle with reference to the Earth frame, where the velocities are to be resolved in the North and East directions, and Figure 2 represents the six DOF parameters for a typical ROV in the body frame.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20160922160820-99351-mediumThumb-S0373463315001058_fig1g.jpg?pub-status=live)
Figure 1. Vehicle orientation in the Earth frame.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20160922160820-27799-mediumThumb-S0373463315001058_fig2g.jpg?pub-status=live)
Figure 2. Representation of a typical vehicle orientation in the body frame.
The conversion of the DVL-measured body frame velocity to an Earth frame velocity is achieved using the Direction Cosine Matrix (DCM) transformation with the vehicle attitude parameters (Fossen, 1994; Hofmann-Wellenhof et al., 2011; Hegrenaes et al., 2008), represented in Equation (1).
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463315001058:S0373463315001058_eqn1.gif?pub-status=live)
where ${\rm C}_{\rm b}^{\rm e}$ is a transformation matrix, computed from the attitude data measured by the attitude sensor (Lammas et al., 2007).
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463315001058:S0373463315001058_eqn2.gif?pub-status=live)
where s· = sin(·), c· = cos(·) and t· = tan(·); ϕ (roll), θ (pitch) and ψ (yaw) are the attitude/Euler angles of the underwater vehicle; Vb = [Vx, Vy, Vz] is the body frame velocity vector and Ve = [Vn, Ve, Vd] is the Earth frame velocity vector.
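Under the convention above, the body-to-Earth transformation of Equations (1) and (2) can be sketched as follows (a minimal illustration with our own function names, assuming the standard ZYX Euler rotation sequence):

```python
import math

def dcm_body_to_earth(phi, theta, psi):
    """Direction Cosine Matrix C_b^e for roll (phi), pitch (theta) and
    yaw (psi) in radians, ZYX Euler convention (Equation (2))."""
    sp, cp = math.sin(phi), math.cos(phi)
    st, ct = math.sin(theta), math.cos(theta)
    sy, cy = math.sin(psi), math.cos(psi)
    return [
        [cy * ct, cy * st * sp - sy * cp, cy * st * cp + sy * sp],
        [sy * ct, sy * st * sp + cy * cp, sy * st * cp - cy * sp],
        [-st,     ct * sp,                ct * cp],
    ]

def body_to_earth_velocity(v_body, phi, theta, psi):
    """Resolve body frame velocities [Vx, Vy, Vz] into Earth frame
    velocities [Vn, Ve, Vd], per Equation (1)."""
    C = dcm_body_to_earth(phi, theta, psi)
    return [sum(C[i][j] * v_body[j] for j in range(3)) for i in range(3)]
```

For example, with a heading of 90° and zero roll and pitch, a purely forward body velocity resolves entirely into the East component.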
The vehicle velocity computed in Earth frame coordinates is integrated over a time step ∆t to obtain the updated position every second in geo-coordinates (latitude and longitude), using Equations (3) and (4) (Liansheng and Tao, 2011; Titterton and Weston, 2004; Christensen et al., 2008; Hofmann-Wellenhof et al., 2011), which account for the Earth curvature parameters given in Equations (5) and (6).
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463315001058:S0373463315001058_eqn3.gif?pub-status=live)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463315001058:S0373463315001058_eqn4.gif?pub-status=live)
where VN is the Northing velocity of the vehicle in m/s; VE is the Easting velocity of the vehicle in m/s; RN is the meridian radius of curvature of the Earth; RE is the transverse radius of curvature of the Earth, computed as per Equations (5) and (6); and ∆t is the integration time of one second.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463315001058:S0373463315001058_eqn5.gif?pub-status=live)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463315001058:S0373463315001058_eqn6.gif?pub-status=live)
The Earth parameters are defined as per the international standard World Geodetic System (WGS). WGS84 (Grewal et al., 2001) models the Earth as a reference ellipsoid, with a semi-major axis (equatorial radius) of R = 6378137 m and a flattening of f = 0·00335281066475.
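The dead reckoning position update of Equations (3)-(6) can be sketched as below (an illustrative implementation with our own function names; the eccentricity squared is derived from the WGS84 flattening as e² = f(2 − f)):

```python
import math

# WGS84 reference ellipsoid parameters
A = 6378137.0                  # semi-major axis, m
F = 0.00335281066475           # flattening
E2 = F * (2.0 - F)             # eccentricity squared

def radii_of_curvature(lat):
    """Meridian (RN) and transverse (RE) radii of curvature at latitude
    lat (radians), per Equations (5) and (6)."""
    s2 = math.sin(lat) ** 2
    rn = A * (1.0 - E2) / (1.0 - E2 * s2) ** 1.5
    re = A / math.sqrt(1.0 - E2 * s2)
    return rn, re

def update_position(lat, lon, vn, ve, dt=1.0):
    """Dead reckoning update of latitude/longitude (radians) from
    Northing/Easting velocities (m/s), per Equations (3) and (4)."""
    rn, re = radii_of_curvature(lat)
    lat_new = lat + vn * dt / rn
    lon_new = lon + ve * dt / (re * math.cos(lat))
    return lat_new, lon_new
```

Moving north at 1 m/s for one second at the equator changes the latitude by roughly 9 × 10⁻⁶ degrees, i.e. about one metre of Northing.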
3. SELECTION OF THE SENSOR SUITE USING A SIMULATOR
The selection of suitable navigational sensors is a key challenge in the design of underwater position estimation systems, as the accuracy of the position estimate depends mainly on the sensor characteristics (Fossen, 2012; Foss and Meland, 2007). Hence, the navigational sensors are selected based on the accuracy requirements of the mission. A modelling and simulation tool was developed in MATLAB which accepts the initial position, sensor characteristics and mission profile as inputs and provides the estimated position in the time domain as the output (Ramadass et al., 2013). The tool, which is coded with the error models of the DVL and the attitude sensor, is explained below.
3.1. Attitude sensor error model
The vehicle's angular orientation about three axes is measured using an attitude sensor housing a compass and tilt sensors. The compass measures the vehicle heading with respect to true North and the tilt sensors provide the roll and pitch values. The error model of the attitude sensor is based on Equations (7)-(9), representing the bias error and random noise parameters (Cruz et al., 2003; Healey et al., 1998; Foss and Meland, 2007; PNI Sensors Corporation, 2014).
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463315001058:S0373463315001058_eqn7.gif?pub-status=live)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463315001058:S0373463315001058_eqn8.gif?pub-status=live)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463315001058:S0373463315001058_eqn9.gif?pub-status=live)
where Ψi is the input heading measurement from the compass, Ψacc is the heading accuracy and Ψo is the heading output, as shown in Equation (7); Φi is the input roll measurement from the tilt sensor, Sf is the scale factor and Φo is the roll output, as shown in Equation (8); θi is the input pitch measurement from the tilt sensor and θo is the pitch output, as shown in Equation (9); and b(t) is white Gaussian noise modelled with zero mean and the sensor standard deviation.
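The attitude error model can be sketched as follows (an illustrative reading of Equations (7)-(9); the bias, scale factor and noise parameter values here are placeholders, not the characteristics of any particular sensor):

```python
import random

def attitude_error_model(heading, roll, pitch,
                         heading_acc=0.5, sf=1.002, sigma=0.1, rng=random):
    """Corrupt ideal attitude values (degrees) with a heading bias, a tilt
    scale factor and zero-mean white Gaussian noise, per Equations (7)-(9).
    Parameter values are placeholders for illustration only."""
    heading_out = heading + heading_acc + rng.gauss(0.0, sigma)  # Eq. (7)
    roll_out = sf * roll + rng.gauss(0.0, sigma)                 # Eq. (8)
    pitch_out = sf * pitch + rng.gauss(0.0, sigma)               # Eq. (9)
    return heading_out, roll_out, pitch_out
```

Averaged over many samples, the simulated heading error converges to the bias term, while the zero-mean noise cancels out.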
3.2. Doppler Velocity Log error model
The body frame velocity of the underwater vehicle is measured using the Doppler Principle, and the error model of the DVL is shown and explained in Equation (10):
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463315001058:S0373463315001058_eqn10.gif?pub-status=live)
where Vi is the simulated velocity input from the DVL sensor, Vo is the velocity output from the model, Sf is the scale factor, and b(t) is white Gaussian noise with zero mean and the sensor standard deviation (Jalving et al., 2004; Mandt et al., 2001; Braga et al., 2012; Teledyne RD Instruments, 2013; LinkQuest Inc., 2013; Hegrenaes et al., 2008), derived using the Allan Variance method as shown in Figure 3. The noise characterisation of the DVL was carried out by placing the DVL in a static mode in the NIOT in-house tank facility continuously for two hours and logging the data to a computer every second. Using the velocity data from the DVL, the Allan Variance graph (Allan, 1966; Hou, 2004; El-Sheimy et al., 2008) is plotted in Figure 3. It can be seen from Figure 3 that for averaging times of less than 10 seconds the slope is −1/2, representing a white noise variance of 0·002 at one second.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20160922160820-40569-mediumThumb-S0373463315001058_fig3g.jpg?pub-status=live)
Figure 3. Analysis of the white noise characteristics for the DVL sensor using Allan Variance.
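The cluster-averaging computation behind such a plot can be sketched as follows (our own minimal implementation, using non-overlapping clusters, not the original analysis code):

```python
import random
import statistics

def allan_deviation(samples, fs, m):
    """Allan deviation at averaging time tau = m / fs, computed from
    non-overlapping cluster averages of m samples each."""
    n_clusters = len(samples) // m
    means = [sum(samples[i * m:(i + 1) * m]) / m for i in range(n_clusters)]
    diffs = [means[i + 1] - means[i] for i in range(n_clusters - 1)]
    avar = 0.5 * statistics.fmean(d * d for d in diffs)
    return avar ** 0.5

# For pure white noise of standard deviation sigma sampled at fs, the Allan
# deviation falls as sigma / sqrt(m): the -1/2 slope on a log-log plot.
rng = random.Random(1)
noise = [rng.gauss(0.0, 0.002) for _ in range(200_000)]
adev_1s = allan_deviation(noise, fs=1.0, m=1)      # ~sigma at tau = 1 s
adev_100s = allan_deviation(noise, fs=1.0, m=100)  # ~sigma / 10 at tau = 100 s
```

Plotting the Allan deviation against τ = m/fs on log-log axes reproduces the −1/2 slope seen in the white-noise region of Figure 3.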
3.3. Simulation methodology
Figure 4 explains the principle of the simulator software developed in MATLAB. In the simulation, all sensor data are assumed to update at a one-second rate. Based on the described sensor error models, the three-axis body frame velocity and three-axis attitude sensor data are generated and stored as data files according to the designed circular vehicle trajectory (Ramadass et al., 2013). The initial position of the vehicle in latitude and longitude is obtained from the GPS receiver, a standalone system with a fixed antenna providing the position output in geo-coordinates using the GPS satellite constellation (Acquaris, 2012) in the National Marine Electronics Association (NMEA 0183) $GPGGA format.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20160922160820-26853-mediumThumb-S0373463315001058_fig4g.jpg?pub-status=live)
Figure 4. Simulation methodology.
The Earth parameters are initialised as per the WGS84 standard. The body frame velocities from the DVL are transformed into the Earth frame using the Direction Cosine Matrix with the Euler angles, as explained in Equation (1). The position estimation is then performed in the dead reckoning mode based on the navigation Equations (3)-(6). The simulation algorithm computes and plots the position of the vehicle in geo-coordinates over the mission period for the given vehicle trajectory.
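The per-step logic of the simulator can be sketched as a dead reckoning loop (a schematic, self-contained outline with our own names; it uses a yaw-only transform for a level vehicle and a spherical-Earth radius for brevity, rather than the full Equations (2), (5) and (6)):

```python
import math

R_EARTH = 6378137.0  # spherical-Earth simplification for this sketch, m

def simulate(lat0, lon0, steps, speed=0.5, dt=1.0):
    """Dead reckoning over a circular trajectory: the heading sweeps a full
    360 degrees while the vehicle moves forward at constant speed (m/s)."""
    lat, lon = math.radians(lat0), math.radians(lon0)
    track = [(lat0, lon0)]
    for k in range(steps):
        psi = 2.0 * math.pi * k / steps              # heading along the circle
        vn = speed * math.cos(psi)                   # Northing velocity
        ve = speed * math.sin(psi)                   # Easting velocity
        lat += vn * dt / R_EARTH                     # cf. Equation (3)
        lon += ve * dt / (R_EARTH * math.cos(lat))   # cf. Equation (4)
        track.append((math.degrees(lat), math.degrees(lon)))
    return track

# 30 minutes of one-second updates over a closed circle
track = simulate(12.94621, 80.21350, steps=1800)
```

With ideal (noise-free) inputs the simulated trajectory closes on itself; feeding the error-model outputs instead of the ideal velocities and headings produces the drifting trajectories compared in the simulation study.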
3.4. Selection of the sensor suite
The navigational system developed for the 500 m depth rated ROV demands a position estimation accuracy of better than 10 m in both the latitude and longitude coordinates when the vehicle is operated at an average velocity of one knot for a period of 30 minutes. In order to meet these mission requirements, six candidate sensor suites were formed from a range of commercially available DVLs (Braga et al., 2012; Panish and Taylor, 2011; de Morais et al., 2013; Mandt et al., 2001) and attitude sensors (PHINS, 2013; PNI Sensors Corporation, 2014; Honeywell Magnetic Sensors, 2014). Types 1, 2 and 3 pair a highly accurate velocity log with different attitude sensors, and types 4, 5 and 6 pair a less accurate velocity log with different attitude sensors. The position estimation performance of each sensor suite was simulated for the defined mission.
The simulation results for sensor suite types 1, 2 and 5 plotted in the geographical coordinates are shown in Figure 5. Details of the sensors considered in each suite and their simulated position estimation performances are shown in Table 1.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20160922160820-76775-mediumThumb-S0373463315001058_fig5g.jpg?pub-status=live)
Figure 5. Simulation results for the sensor suites.
Table 1. Performance of the various combinations of sensor suites.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20160922160820-06612-mediumThumb-S0373463315001058_tab1.jpg?pub-status=live)
From the simulation results shown in Table 1, it can be seen that sensor suites 2 and 5 match the target position estimation performance for the mission. Considering the overall project requirements, the type 5 sensor suite was selected for developing the vehicle's navigational system, as its low cost, small size and low weight make it suitable for a shallow water ROV.
4. SYSTEM DEVELOPMENT AND EMULATOR-BASED VALIDATION
Based on the simulation results, the navigational system is developed. The architecture of the hardware is shown in Figure 6 and the details of the interface are shown in Table 2.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20160922160820-79294-mediumThumb-S0373463315001058_fig6g.jpg?pub-status=live)
Figure 6. Architecture of the navigation system.
Table 2. Description of the hardware used for the navigational system.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20160922160820-23055-mediumThumb-S0373463315001058_tab2.jpg?pub-status=live)
The developed navigational system was qualified for its position estimation performance using an in-house developed emulator. The emulator comprises a GPS-aided high precision FOG-based INS (PHINS, 2013) providing attitude and velocity inputs, which are fed to software coded to introduce inaccuracies resembling the selected sensor suite. The GPS-aided FOG-based INS has a proven high precision performance, with a heading stability of 0·002°/hour and a velocity accuracy of 0·01 m/s.
The output of the emulator module thus matches the output performance of the selected sensor suite. The emulator output is provided as input to the developed navigational system. Figure 7 shows the methodology involved in providing the emulation inputs and validating the position estimation performance of the navigational system. The system was qualified by mounting the emulator-fed setup on a terrestrial vehicle at a known location (Latitude 12·94621, Longitude 80·21350), as can be seen in Figure 8. The terrestrial vehicle was manoeuvred in a profile closely resembling the expected underwater mission profile (X and Y linear DOF and all three angular DOF).
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20160922160820-96528-mediumThumb-S0373463315001058_fig7g.jpg?pub-status=live)
Figure 7. Performance validation algorithm using emulator.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20160922160820-17616-mediumThumb-S0373463315001058_fig8g.jpg?pub-status=live)
Figure 8. Test setup for performance test using the emulation.
Figure 9 shows the navigation plot of the GPS-aided FOG based-INS and the output of the emulator fed developed navigational system. Table 3 details the temporal performance of the emulator-fed navigational system with reference to the GPS-aided FOG-based INS.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20160922160820-03305-mediumThumb-S0373463315001058_fig9g.jpg?pub-status=live)
Figure 9. System's navigation performance using the emulator.
Table 3. Performance of the system compared with near ideal reference.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20160922160820-89412-mediumThumb-S0373463315001058_tab3.jpg?pub-status=live)
5. IMPLEMENTATION AND PERFORMANCE IN SHALLOW WATER ROV
The developed navigational system, comprising the attitude sensors, DVL and controller, is mounted on board the 500 m depth rated ROV (Vedachalam et al., 2015). The vehicle velocity measurement using the DVL is based on the Doppler acoustic principle, providing three-axis linear velocity data at a one-second update rate to the position estimation algorithm. Vehicle dynamics can cause data outages from the DVL, which would lead to erroneous position computation. To overcome the system's degraded performance during such outages, a linear Kalman Filter (KF) is developed and applied to the DVL velocity measurements (Leader, 1994; Anonsen and Hallingstad, 2007; Welch and Bishop, 1995; Simon, 2006; Mandt et al., 2001). The prediction methodology of the KF is shown below.
The KF matrices for the estimation of velocities are given below.
A is the state transition matrix linking the states at times k−1 and k
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463315001058:S0373463315001058_eqnU1.gif?pub-status=live)
B is the control input matrix
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463315001058:S0373463315001058_eqnU2.gif?pub-status=live)
uk−1 is the input vector
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463315001058:S0373463315001058_eqnU3.gif?pub-status=live)
$\hat{\rm X}_{{\rm k} - 1}$ is the state estimate at time k−1
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463315001058:S0373463315001058_eqnU4.gif?pub-status=live)
Hk is the measurement matrix
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463315001058:S0373463315001058_eqnU5.gif?pub-status=live)
Q is the process noise covariance
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463315001058:S0373463315001058_eqnU6.gif?pub-status=live)
R is the measurement noise covariance
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463315001058:S0373463315001058_eqnU7.gif?pub-status=live)
${\rm P}_{\rm k}^ - $ is the a priori error covariance matrix, initialised as
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463315001058:S0373463315001058_eqnU8.gif?pub-status=live)
Zk is the measurement vector at time k
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463315001058:S0373463315001058_eqnU9.gif?pub-status=live)
where Vk is the linear vehicle velocity input data in three axes from the DVL,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463315001058:S0373463315001058_eqnU10.gif?pub-status=live)
Time Update
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463315001058:S0373463315001058_eqn11.gif?pub-status=live)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463315001058:S0373463315001058_eqn12.gif?pub-status=live)
Measurement Update
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463315001058:S0373463315001058_eqn13.gif?pub-status=live)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463315001058:S0373463315001058_eqn14.gif?pub-status=live)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921020022884-0598:S0373463315001058:S0373463315001058_eqn15.gif?pub-status=live)
where $\hat{\rm X}_{\rm k}$ is the a posteriori state vector at time k, Kk is the Kalman gain and Pk is the covariance of the a posteriori error.
Based on Equations (11)-(15), the velocities measured by the DVL form the measurement vector Zk, and the process and measurement noise covariance parameters are tuned to filter the noisy velocity data. Based on standard KF tuning methods (Li et al., 2013; Healey et al., 1998), the filter is tuned with a process noise covariance (Q) of 0·0001 and a measurement noise covariance (R) of 0·2704, and the filtered velocities are passed to the navigation algorithm. The implemented logic is shown in Figure 10.
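A scalar version of this filter for one velocity axis can be sketched as follows (our own minimal implementation using the Q and R values quoted above and a random-walk state model; during a DVL outage only the time update runs, so the last estimate is held):

```python
import random

class VelocityKF:
    """Scalar linear Kalman filter for one DVL velocity axis."""
    def __init__(self, q=0.0001, r=0.2704, x0=0.0, p0=1.0):
        self.q, self.r = q, r      # process / measurement noise covariances
        self.x, self.p = x0, p0    # state estimate and its error covariance

    def step(self, z=None):
        # Time update (Equations (11)-(12)): state held, uncertainty grows
        self.p += self.q
        # Measurement update (Equations (13)-(15)), skipped during an outage
        if z is not None:
            k = self.p / (self.p + self.r)   # Kalman gain
            self.x += k * (z - self.x)
            self.p *= (1.0 - k)
        return self.x

rng = random.Random(2)
kf = VelocityKF()
true_v = 0.5
for _ in range(500):
    kf.step(true_v + rng.gauss(0.0, 0.52))   # noisy DVL reading
held = kf.step(None)                          # outage: prediction only
```

With the small Q and comparatively large R above, the steady-state Kalman gain is low, so the filter heavily smooths measurement noise and rides through short outages on the predicted state.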
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20160922160820-76691-mediumThumb-S0373463315001058_fig10g.jpg?pub-status=live)
Figure 10. Implemented algorithm incorporating KF.
As part of the ROV qualification trials, Figure 11 shows the ROV being launched from an anchored barge in Idukki Lake in the Kerala State of South India, where the water depth at the test location was 40 m. The figure at right shows the vehicle in operation, with the data logged in the pilot console located on board the anchored barge (Vedachalam et al., 2015).
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20160922160820-91095-mediumThumb-S0373463315001058_fig11g.jpg?pub-status=live)
Figure 11. Navigational system tested for performance.
Figure 12 shows the vehicle velocity input for the position computation algorithm, without and with the KF in place.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20160922160820-80589-mediumThumb-S0373463315001058_fig12g.jpg?pub-status=live)
Figure 12. KF performance for velocity during DVL outages.
The ROV was navigated for a period of 30 minutes at an average depth of 25 m and the recorded depth plot in the pilot console is shown in Figure 13.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20160922160820-15465-mediumThumb-S0373463315001058_fig13g.jpg?pub-status=live)
Figure 13. Depth plot for underwater navigation.
Figure 14 shows the position plot of the underwater vehicle in geo-coordinates during the test period. It can be seen that the vehicle navigated a distance of 120 m, with maximum speeds of 0·5 m/s in the x-axis and 0·115 m/s in the y-axis, while performing a circular trajectory. After the 30 minute period, when the vehicle surfaced and was recovered to the stationary anchored barge, position offsets of 2·89 m in latitude and −4·38 m in longitude were observed, giving a resultant offset of 5·2 m and a calculated position error of 4·3%. The result complies with the emulated performance of a position error of less than 5%, which meets the mission requirements.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20160922160820-11294-mediumThumb-S0373463315001058_fig14g.jpg?pub-status=live)
Figure 14. Geo-referenced plot of underwater navigation.
6. CONCLUSION
This paper has presented the navigational system developed for the position estimation of a shallow water Remotely Operated Vehicle, using attitude sensors and a Doppler Velocity Log with an initial position aid from GPS. Based on the defined vehicle position accuracy requirement of less than 10 m in the latitude and longitude coordinates for 30 minutes at one knot vehicle speed, suitable velocity and attitude sensors were identified using the in-house developed simulator tool. The position estimation performance of the developed system was validated using a precision emulator providing attitude and velocity sensor inputs with accuracies resembling the selected sensor suite, and was found to comply with the simulation results. The developed system performed with the envisaged accuracy of less than 5% when tested on the in-house developed shallow water Remotely Operated Vehicle. In the near future, it is planned to test the ROV in deeper waters and for longer mission durations, to verify the position accuracy and improve the navigation algorithm.
ACKNOWLEDGEMENT
The authors gratefully acknowledge the support extended by the Ministry of Earth Sciences, Government of India, in funding this research. The authors also wish to thank the other staff involved in the project for their contribution and support.