
Design and development of a novel autonomous scaled multiwheeled vehicle

Published online by Cambridge University Press:  21 September 2021

Aaron Hao Tan
Affiliation:
Department of Automotive and Mechatronics Engineering, University of Ontario Institute of Technology, Oshawa, Ontario, Canada L1H 7K4
Michael Peiris
Affiliation:
Department of Automotive and Mechatronics Engineering, University of Ontario Institute of Technology, Oshawa, Ontario, Canada L1H 7K4
Moustafa El-Gindy
Affiliation:
Department of Automotive and Mechatronics Engineering, University of Ontario Institute of Technology, Oshawa, Ontario, Canada L1H 7K4
Haoxiang Lang*
Affiliation:
Department of Automotive and Mechatronics Engineering, University of Ontario Institute of Technology, Oshawa, Ontario, Canada L1H 7K4
*Corresponding author: haoxiang.lang@ontariotechu.ca

Abstract

This article proposes the design and development of a novel custom-built, autonomous scaled multiwheeled vehicle that features an eight-wheel drive and eight-wheel steer system. In addition to the mechanical and electrical design, high-level path planning and low-level vehicle control algorithms are developed and implemented, including a two-stage autonomous parking algorithm. A modified position-based visual servoing algorithm is proposed and developed to achieve precise pose correction. The results show significant gains in accuracy and efficiency compared with an open-source path planner. The aim of this work is to expand research on autonomous platforms that take the form of commercial and off-road vehicles, using actuated steering and other mechanisms attributed to passenger vehicles. The outcome of this work is a unique autonomous research platform that features independently driven wheels, steering, autonomous navigation, and parking.

Type
Research Article
Copyright
© The Author(s), 2021. Published by Cambridge University Press

1. Introduction

The world of automobiles has experienced several milestones in its development since its inception in 1886 by Karl Benz [Reference Crouse1]. From inventions such as automatic transmissions and satellite navigation to sensor-based cruise control, automotive engineers have produced several commercialized and innovative solutions that made traveling easier, more affordable, and accessible. Today, the automotive industry is seeing its latest revolution centered around the automation of transportation systems. This revolution entails retiring old manual gasoline vehicles in favor of driverless electric systems to create a more convenient and safer way of travel [Reference Bagloee, Tavana, Asadi and Oliver2]. This is accomplished by vehicles becoming more intelligent through the addition of sensors. Benefits include better vehicle accessibility for those who cannot drive and car-sharing features that lessen traffic congestion. The potential impact on road users is unmatched, not to mention the ability to generate numerous job and business opportunities around the world [Reference Bagloee, Tavana, Asadi and Oliver2]. These advantages, along with several others, quickly made autonomous vehicle technologies an extremely sought-after research topic. As a result, substantial efforts are made by automakers, technology companies, and academic institutions to collaboratively accelerate progress in this field.

Presently, autonomous navigation features for vehicles with traditional configurations, such as two axles and front-wheel steering, are studied and documented extensively with real-world deployment [Reference Blok, van Boheemen, van Evert, Ijsselmuiden and Kim3,Reference Peterson, Li, Cesar-Tondreau, Bird, Kochersberger, Czaja and McLean4]. However, multiwheeled vehicles have not received nearly as much attention due to their limited market. This type of vehicle finds its applications primarily in off-road and military settings because of its ability to traverse rough terrain. As the push for more autonomous navigation capabilities continues, a shared space with mobile robots begins to emerge due to the comparable fundamentals. Likewise, mobile robots and scaled vehicle platforms are often smaller than their life-size counterparts, which permits researchers to conduct experiments in indoor laboratories. This is tremendously convenient when focusing on the different subsystem algorithms pertaining to mapping, localization, and path planning since a full-size vehicle model is not always necessary during development.

Nevertheless, a problem that exists is that commercial mobile robots sold today are generally equipped with a differential drive setup and often lack car-like features such as steering and suspension. This problem is even more apparent for mobile robots that feature multiple axles. As a result, a novel eight-wheel drive and eight-wheel steer (8WD8WS) scaled robotic vehicle is designed and developed in this article along with both high- and low-level control algorithms to enable autonomous point-to-point navigation. The platform will be referred to as the scaled multiwheeled vehicle (SMWV). The motive is to create a physical prototype that features car-like suspension and steering for future autonomous vehicle and dynamics control research. The novelty of the work includes the designed autonomous research platform, capable of independent drive and steer of each of its eight wheels, as well as the associated kinematic model. In addition, two main algorithms are developed for the autonomous motion of the SMWV, including a parking algorithm and a modified position-based visual servoing (M-PBVS) algorithm. Both algorithms are validated through experimentation in the following sections of this article.

The remaining portions of this article are organized into the following sections: related work on this topic is covered in the next section, with a detailed description of the design and development of the SMWV thereafter. The different methodologies relating to motion control, path planning, and visual servoing are discussed in Sections 5 through 7, with experimental results and discussion presented in Section 8. Following this, the conclusions reached and recommended future work are described, followed by acknowledgments and references.

2. Related Work

Because of their ability to maneuver in rough terrain, multiwheeled vehicles often find their applications in off-road environments that range from agriculture [Reference Bawden, Kulk, Russell, McCool, English, Dayoub, Lehnert and Perez5,Reference Carpio, Potena, Maiolini, Ulivi, Rossello, Garone and Gasparri6] to space exploration [Reference Sanguino7]. Related work in this field is categorized into three major areas: the novel design and development of the mobile platform [Reference Islam, Chowdhury, Rezwan, Ishaque, Akanda, Tuhel and Riddhe8Reference Ye, He and Zhang14], the low-level vehicle control, and the high-level navigation algorithms. The following covers publications within the past decade in each of these areas to provide readers with an understanding of the current state of the art.

Starting with the design and development of novel multiaxle vehicles, recent publications on six-wheeled robots are proposed in [Reference Islam, Chowdhury, Rezwan, Ishaque, Akanda, Tuhel and Riddhe8]. In these articles, the primary application focuses on space exploration robots. An interesting agriculture application is seen by Kumar et al. [Reference Kumar, Gogul, Raj, Pragadesh and Sebastin9] where a novel platform operates in a garden for plant identification and classification using neural networks. The goal is to recognize and determine the amount of water and fertilizer necessary to facilitate optimal growth. The above-mentioned papers all focus on six-wheeled designs that feature rocker-bogie suspension and differential drive trains. In terms of other wheel configurations for robots, an interesting application seen by Prabhu et al. [Reference Radhakrishna Prabhu, Seals, Kyberd and Wetherall10] uses a six-wheeled articulated robot that has its design parameters optimized for step-climbing operations. Although the papers mentioned thus far feature multiwheeled drive trains, none share similarities with traditional vehicles in terms of steering, unlike the work conducted by Garcia et al. [Reference Martínez-García, Lerín-García and Torres-Córdoba11]. The authors develop a 12 degrees of freedom model for a four-wheel drive, four-wheel steer (4WD4WS) robot and validate the model on a prototype. Another group of researchers [Reference Li, Lee, Lin, Liou and Chen12] developed a similar platform with added features for lane following, reverse and parallel parking using machine vision and fuzzy controllers. For soil sample collection and fertilizer dispensing, [Reference Qiu, Fan, Meng, Zhang, Cong, Li, Wang and Zhao13] proposes another 4WD4WS platform where a new extended Ackerman steering principle is introduced. Since robots with independently steered wheels are theoretically capable of multiple steering modes, a comprehensive analysis for a 4WD4WS platform is described in [Reference Ye, He and Zhang14]. The steering modes studied include front-wheel, all-wheel, crab, and diamond steer. From the mentioned papers, recent publications on novel designs of all-wheel drive and all-wheel steer (AWDAWS) platforms are primarily focused on four-wheel variations, while multiwheeled platforms reach a maximum of eight wheels but lack car-like steering and suspension due to system complexity.

In terms of low-level kinematics and dynamics control systems for multiwheeled vehicles, papers such as [Reference Segura, Hernandez, Dutra, Mauledoux and Avilés15,Reference Stania16] set the basis for modeling of six- and eight-wheeled platforms, respectively. A kinematics control law that considers wheel yaw, roll, and suspension pitch for a 4WS4WD vehicle is proposed in [Reference Martínez-García, Lerín-García and Torres-Córdoba11]. Aponte et al. [Reference Menendez-Aponte, Kong and Xu17] designed a dynamic model for a four-wheeled strawberry-collecting robot that also analyzed tire-soil interactions in addition to testing a physical prototype. Motion control with in-wheel motors is described for a six-wheel drive and six-wheel steer (6WD6WS) vehicle in [Reference Kim, Ashfaq, Kim, Back, Kim, Hwang, Jang and Han18] where vehicle dynamics performance is improved by implementing independent wheel torque and steering control; the results from this work are validated in simulation. Vehicle stability and maneuverability are discussed in [Reference Kim, Kang and Yi19] where an upper and a lower controller work together to determine steering angles based on longitudinal forces, yaw moment, and tire force information. To ensure the vehicle accurately follows a given path, a bounded velocity motion controller with nonlinear control techniques is described in [Reference Oftadeh, Aref, Ghabcheloo and Mattila20]. Beyond control system development, the controllability of a similar vehicle for high-speed navigation in rough terrains is studied in [Reference Aliseichik and Pavlovsky21]. The mentioned work in control systems, along with other available literature, generally focuses on four- to six-wheeled vehicles with limited literature for eight-wheeled vehicles.

In terms of high-level navigation algorithms for AWDAWS vehicles, recent publications have centered around either path planning or path following. An application in this area is seen by Li et al. [Reference Yan, Li, Shi and Wang22] where a hybrid visual servo trajectory strategy is developed for wheeled mobile robots equipped with onboard vision systems and an adaptive controller is designed to periodically update the objective distance. As seen by Wu et al. [Reference Wu and Wang23], a nonholonomic four-wheeled robot employs asymptotic tracking control using a neural controller. In this way, tracking errors can be reduced when subjected to actuator saturation and external disturbances simultaneously. Another study [Reference Bozek, Karavaev, Ardentov and Yefremov24] used a four-wheeled robot and proposed a control algorithm for training an artificial neural network for path planning. The system consists of two artificial neural networks to ensure optimal motion for steering from the current position of the mobile robot to a prescribed position taking its orientation into account. One of the neural networks serves to specify the position and the size of the obstacle, and the other forms a continuous trajectory to reach it. The neural network is trained on the basis of samples obtained by modeling the equations of motion in the form of Euler's Elastica. In other works, a two-wheel drive, two-wheel steer, car-like robot is analyzed alongside its kinematic model by Chen et al. [Reference Chen, Yang, Wang and Zhang25]. In addition, the authors used the back-stepping method alongside a sliding mode controller to reduce error. Path planning using A* and the Dynamic Window Approach is implemented in research conducted by Lin et al. [Reference Chih-Jui, Su-Ming, Ying-Hao, Cheng-Hao, Chien-Feng and Li26] for a 4WD4WS robot. This work is improved in [Reference Bo27] where pose estimation with RTK GPS and wheel encoders through an extended Kalman filter is applied. Most recently, a path planning technique that utilizes 7th-order Bezier curves is developed to also provide velocity and acceleration profiles for a 4WD4WS vehicle in [Reference Penglei and Katupitiya28]. In this work, the vehicle is represented as a rigid body with previously determined characteristics such as mass and inertia. Conversely, for recent path-following algorithms, a basic approach that considers kinematic geometry is presented in [Reference Li, He and Yang29]. Hamerlain et al. [Reference Hamerlain, Floquet and Perruquetti30] studied the control of a car-like robot using a custom-designed practical tracking controller based on the super-twisting second-order sliding mode algorithm. Ghaffari et al. [Reference Ghaffari and Homaeinezhad31] utilize Mamdani fuzzy logic controllers to follow waypoints that are generated based on a curvature-derived point selection algorithm. Further development of this approach can be found in [Reference Ghaffari and Homaeinezhad32].

For 8WD8WS vehicles specifically, research focused on dynamics control and path planning has been published over the last few years by members of the Crash Simulation and Vehicle Dynamics Lab at the University of Ontario Institute of Technology. The work in control systems started most notably with torque distribution in [Reference Ragheb, El-Gindy and Kishawy33] for an 8WD8WS vehicle. This work was later improved by [Reference El-Gindy and D’Urso34] with a feedforward zero side slip controller that is implemented to generate the rear-axle steering angles. An optimal path planning algorithm based on the artificial potential field is proposed in [Reference Mohamed, El-Gindy, Ren and Lang35] to drive the vehicle to a goal destination. Later, a robust heading angle controller using h-infinity is introduced to overcome system disturbances such as noise [Reference Mohamed, El-Gindy and Ren36]. All mentioned works are tested in simulation with promising results; however, physical experiments are required for further validation.

From the mentioned literature, it is evident that an area lacking investigation is experimental research using 8WD8WS mobile robot systems. As a result, a novel SMWV that features independent suspension, car-like steering, and independently driven wheels is introduced in this article. The intent is to create an innovative platform that will serve as a research tool for future autonomous vehicle technologies. The design and development of this custom SMWV are covered in all its major areas, namely mechanical, electrical, and software systems. In mechanical systems, the design of the chassis, suspension, and steering are discussed. To actuate the vehicle, sensor instrumentation and hardware architecture are described in detail along with the derivation of a kinematics model. Lastly, low-level driving and steering controllers that consider Ackerman's geometry are proposed in this work along with an incremental-based localization algorithm. All algorithms are combined with open-source path planners in the robot operating system (ROS) to create a fully functioning SMWV platform with obstacle avoidance and navigation capabilities.

3. The Scaled Multiwheeled Vehicle

The mechanical system of the SMWV is classified into four subsystems: chassis, suspension, driving, and steering. Each of these subsystems is illustrated in this section along with detailed descriptions of the design decisions. A model is derived after all aspects of the mechanical and hardware architecture design are covered to describe the kinematics and steering geometry of the vehicle. Figure 1 shows the physical model of the SMWV that was designed and built over two years.

Figure 1. Physical model of the SMWV.

3.1. Chassis and Suspension System

The design of the chassis of the vehicle resembles the shape of the letter "T" as seen in Fig. 2. It is made of five sheets of aluminum: one each for the front, back, left, right, and bottom. Each of these sheets is bent into shape and riveted together to form a rigid chassis. This design features two internal layers which are utilized for the steering and driving systems. As highlighted in Fig. 2, attached to each wheel is an upper and lower control arm, inspired by a double-wishbone suspension system [Reference Moreno Ramírez, Tomás-Rodríguez and Evangelou37] that traditionally uses two wishbone-shaped arms to control the vertical motion of the wheel. The lower control arm is custom-made from water-jet-cut steel to ensure durability. Each suspension features gas shocks that are reinforced by coils to improve maneuverability in rough terrain. This kind of system has the advantage of creating negative camber as the wheel travels through its vertical range. Because of this, the vehicle can achieve greater handling abilities due to improved stability as the tires maintain contact with the surface.

3.2. Driving and Steering System

Next, the driving and steering systems are discussed in this section. As previously mentioned, the "T"-shaped chassis is designed with two internal layers housing the necessary driving and steering components to actuate the vehicle. Each layer has a surface area of approximately 0.10 m². Figure 3 illustrates the driving and steering layers.

Figure 2. Front view of the SMWV.

Figure 3. Internal layers of the SMWV.

Starting from the bottom with the driving layer, there are two DC motors per axle for a total of eight independently driven wheels. Each DC motor is attached to a 33:1 gearbox to reduce the rotational speed while increasing the output torque. More specifically, the maximum rotational output speed is 217 rpm while the nominal output torque is 2.41 Nm. With these specifications, the vehicle can achieve a top speed of approximately 1.80 m/s. Due to the limitations imposed by the dimensions of the chassis, it is not possible to align two motors along an axle for a direct drive system unless the motors were placed at an angle. This would create an inefficient drive train; therefore, a belt drive system with a 1:1 ratio per wheel is implemented instead. This setup enables the motors to be mounted in parallel with the axle axis as illustrated in Fig. 4. In this figure, the top and bottom DC motors drive the left and right wheels, respectively. Placed between the sidewall of the chassis and the output pulley is an encoder that is mounted on the output shaft.

Figure 4. Top view of the 1st axle in the driving layer.

The steering layer sits approximately 6.35 cm above the driving layer, where linear actuators are attached to each wheel assembly through a tie rod. Each actuator has a total stroke of 50 mm with 25 mm being the neutral position. Steering of each wheel is accomplished by extending and retracting each actuator. In Fig. 5, an extended left actuator with a retracted right actuator would steer both wheels to the left. The tightest turn happens at full extension and retraction, respectively. Built into each actuator are potentiometers that provide stroke position feedback. The maximum stroke of each linear actuator and the achievable steering angle are 50 mm and 35 degrees, respectively. The relationship between this feedback and the achieved steering angle is derived experimentally as seen in Fig. 10 in the following sections of the paper.

Figure 5. Top view of the 1st axle in the steering layer.

3.3. Kinematics Model

3.3.1. Robot Position and Reference Frame

In this section, the kinematics model of the SMWV is derived. Siegwart and Nourbakhsh's reference book is utilized as a guideline to develop the kinematic model [Reference Siegwart, Nourbakhsh and Scaramuzza38]. The complete kinematic model also draws on the two-wheel bicycle model of a vehicle. Figure 6 demonstrates the relationship between the global reference frame (signified by axes X and Y) and the local reference frame oriented in the direction of the wheelbase (B) and the longitudinal axis (L).

Figure 6. Global (X,Y) and Local (L,B) reference frame of SMWV.

3.3.2. Robot Maneuverability

The SMWV is composed of eight standard steerable wheels, which increases the complexity of analyzing the maneuverability of the vehicle. The first component to analyze is the degree of mobility ( $\delta$ m), which is a measure of the number of degrees of freedom of the robot chassis that can be immediately manipulated through changes of wheel velocity [Reference Siegwart, Nourbakhsh and Scaramuzza38]. As shown in Fig. 7, a simplified bicycle model is drawn along the longitudinal axis between the physical wheels of the vehicle. More specifically, $\left( {{\delta _{Li}},\;{\delta _{Ri}}} \right)$ denotes the steering angles of the individual wheels, while $\left( {{\delta _{1}},\;{\delta _{4}}} \right)$ denotes the average steering angles of the first and fourth axles.

Figure 7. Kinematic model of 8 × 8.

As seen in Fig. 7 for the complete kinematic model of the vehicle, all eight wheels share a common instantaneous center of rotation, which contributes one kinematic constraint. The degree of mobility is therefore the difference between the total possible degrees of freedom of the SMWV, which is equal to 3, and the number of constraints; thus, the degree of mobility ( $\delta$ m) is equal to 2.

Another important parameter is the degree of steerability ( $\delta$ s), which accounts for steered wheels that do not have an instantaneous effect on the pose of the robot chassis but affect the pose over time. The degree of steerability accounts for each independently steerable wheel. Since the SMWV has eight steerable wheels, the degree of steerability for this vehicle is, in general, 8. However, as seen in subsequent sections, several steering configurations that coordinate and limit the wheel steering angles have been developed to streamline vehicle control. This reduces the degree of steerability but allows for useful steering maneuvers.

Finally, the degree of maneuverability ( $\delta$ M) can be defined as the overall degrees of freedom that a robot can manipulate. As seen in Eq. (1), the degree of maneuverability ( $\delta$ M) is the sum of the degree of mobility ( $\delta$ m) and the degree of steerability ( $\delta$ s)

(1) \begin{align}{\delta _{\rm{M}}} = {\delta _{\rm{m}}} + {\delta _{\rm{s}}} = 2 + 8 = 10\end{align}

3.3.3. Robot Kinematic Model and Steering Angle Constraints

The linear velocities are calculated from the total longitudinal velocity, $v$ , the vehicle orientation with respect to the x-axis, $\theta $ , and the angle between the direction of the velocity and the longitudinal axis of the vehicle, $\varphi $ . The rate of change of the heading angle is denoted by $\dot \theta $ , which is calculated by considering the distances from the front and rear axles to the center of gravity (CG) of the vehicle. Equation (2) represents the nonlinear continuous-time relationships of the different velocities of the system, where ( $\dot{\textrm{x}} $ , $\dot{\textrm{y}} $ ) are the linear velocities along the respective axes.

(2) \begin{align}\left[ \begin{array}{l} {\dot x} \\[2pt] {\dot y} \\[2pt] {\dot \theta } \end{array} \right] = v*\left[ \begin{array}{l} {\cos (\theta + \varphi )} \\[2pt] {\sin (\theta + \varphi )} \\[3pt] {{\frac{\cos (\varphi )}{{l_1} + {l_4}}}(\tan {\delta _1} - \tan {\delta _4})} \\ \end{array} \right]\end{align}

To find the velocity at the CG, the velocity average of the first and last axle of the vehicle, $\left( {{v_1},\;{v_4}} \right)$ , is calculated as shown below:

(3) \begin{align}\begin{array}{*{20}{c}}{v = \;\dfrac{{{v_1}\cos \left( {{\delta _1}} \right) + {v_4}\cos \left( {{\delta _4}} \right)}}{{2\cos \left( \varphi \right)}}\;}\end{array}\end{align}

Moving forward, the angle between $v$ and the longitudinal axis of the vehicle is calculated with Eq. (4):

(4) \begin{align}\begin{array}{*{20}{c}}{\varphi = {{\tan }^{ - 1}}\left( {\dfrac{{{l_1}\tan \left( {{\delta _1}} \right) + {l_4}\tan \left( {{\delta _4}} \right)}}{{{l_1} + {l_4}}}} \right)}\end{array}\end{align}

To simplify Eqs. (3) and (4), the path curvature of the vehicle, $\sigma $ , along with some assumptions are considered. The curvature is calculated as the inverse of the turning radius, which in turn is the quotient of the angular and linear velocities. This relationship is illustrated in Eq. (5):

(5) \begin{align}\begin{array}{*{20}{c}}{\sigma = {R^{ - 1}} = \dfrac{{\dot \theta + \dot \varphi }}{v}\;}\end{array}\end{align}

Besides the curvature equation, the necessary assumptions made to simplify Eq. (2) include setting the CG location to the middle of the vehicle body and assuming the velocities, $\left( {{v_1},\;{v_4}} \right)$ , are equal in magnitude but opposite in direction. With these assumptions applied, Eq. (2) and the derivative of Eq. (4) are substituted into Eq. (5) to form the kinematics model below:

(6) \begin{align}\left[ \begin{array}{l} {\dot x} \\[2pt] {\dot y} \\[2pt] {\dot \theta } \\ \end{array} \right] = v*\left[ \begin{array}{l} {\cos (\theta )} \\[2pt] {\sin (\theta )} \\[2pt] \sigma \\ \end{array} \right],\ \text{where}\ \sigma = \dfrac{2\tan (\delta )}{l}\end{align}
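To make the simplified model concrete, the following is a minimal sketch (not taken from the paper) of how Eq. (6) could be integrated numerically to propagate the vehicle pose; the variable names and the Euler integration step are illustrative assumptions.

```python
import numpy as np

def step_kinematics(x, y, theta, v, delta, l, dt):
    """One Euler step of the simplified kinematic model in Eq. (6).

    x, y, theta : current pose in the global frame
    v           : velocity of the CG [m/s]
    delta       : steering angle of the first axle [rad]
    l           : distance between the first and last axles [m]
    dt          : integration step [s]
    """
    sigma = 2.0 * np.tan(delta) / l      # path curvature, Eq. (6)
    x += v * np.cos(theta) * dt
    y += v * np.sin(theta) * dt
    theta += v * sigma * dt
    return x, y, theta
```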

As shown in Fig. 7, the scaled 8WD8WS vehicle is designed to follow Ackerman's steering geometry [Reference Adamu, Afolayan, Umaru and Garba39] to reduce tire degradation. Since the vehicle is in all-wheel steer mode, the instantaneous center of curvature, denoted by $P$ , lies in line with the CG of the vehicle. From there, the eight steering angle relations representing the kinematic constraints are calculated as shown in Eqs. (7a)–(7d), where ${l_i}$ and $B$ denote the distance from each axle to the CG and the track width of the vehicle, respectively.

(7a) \begin{align}\begin{array}{*{20}{c}}{{\delta _{L1}} = {{\tan }^{ - 1}}\left( {\frac{{{l_1}}}{{R - B/2}}} \right),\;{\delta _{R1}} = {{\tan }^{ - 1}}\left( {\frac{{{l_1}}}{{R + B/2}}} \right)}\end{array}\end{align}
(7b) \begin{align}\begin{array}{*{20}{c}}{{\delta _{L2}} = {{\tan }^{ - 1}}\left( {\frac{{{l_2}}}{{R - B/2}}} \right),\;{\delta _{R2}} = {{\tan }^{ - 1}}\left( {\frac{{{l_2}}}{{R + B/2}}} \right)}\end{array}\end{align}
(7c) \begin{align}\begin{array}{*{20}{c}}{{\delta _{L3}} = {{\tan }^{ - 1}}\left( {\frac{{{l_3}}}{{R - B/2}}} \right),\;{\delta _{R3}} = {{\tan }^{ - 1}}\left( {\frac{{{l_3}}}{{R + B/2}}} \right)}\end{array}\end{align}
(7d) \begin{align}\begin{array}{*{20}{c}}{{\delta _{L4}} = {{\tan }^{ - 1}}\left( {\frac{{{l_4}}}{{R - B/2}}} \right),{\delta _{R4}} = {{\tan }^{ - 1}}\left( {\frac{{{l_4}}}{{R + B/2}}} \right)}\end{array}\end{align}
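As an illustration of Eqs. (7a)–(7d), the sketch below computes the eight Ackerman steering angles for a given turning radius; the axle offsets and dimensions are hypothetical placeholders, not the SMWV's actual values.

```python
import numpy as np

def ackerman_wheel_angles(R, axle_offsets, B):
    """Per-wheel steering angles from Eqs. (7a)-(7d).

    R            : turning radius measured to the CG [m]
    axle_offsets : signed distances l_1..l_4 of each axle from the CG [m]
    B            : track width [m]
    Returns a list of (delta_left, delta_right) pairs, one per axle.
    """
    angles = []
    for l_i in axle_offsets:
        delta_L = np.arctan2(l_i, R - B / 2.0)
        delta_R = np.arctan2(l_i, R + B / 2.0)
        angles.append((delta_L, delta_R))
    return angles

# hypothetical example: all-wheel steer with the rear axles mirroring the front
print(ackerman_wheel_angles(R=2.0, axle_offsets=[0.45, 0.15, -0.15, -0.45], B=0.4))
```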

3.4. Vehicle Specifications

With the mentioned subsystems, Table I summarizes all basic dimensions and performance specifications of the SMWV.

Table I. 8WD8WS SMWV specification.

4. Electronics Hardware Architecture

The hardware architecture of the SMWV is described in this section, with the relationships between all electrical components and their specifications listed. Starting with the full system hardware architecture in Fig. 8, the central processing unit of the SMWV is an onboard laptop computer loaded with Ubuntu 14.04. This computer is interfaced with various controllers and sensors via a Universal Serial Bus (USB) hub. Beginning with the controllers, two types are embedded in the vehicle. The first type is denoted as the steering controller, which controls up to four linear actuators per unit. Since one of the novelties of the vehicle is its 8WS setup, two steering control units are necessary to control one linear actuator per wheel. Besides receiving 12V from the onboard power supply and providing it to the linear actuators, the steering controllers also receive feedback from a built-in potentiometer that enables closed-loop stroke/steering control. The second type of controller is the driving controller, where a single unit is implemented per axle to control up to two DC motors. For simplicity, the driving controllers are set up such that only a master unit is controlled by the laptop via USB and the remaining three are controlled via CAN as slave nodes. Attached to the end of every DC motor is an encoder that provides feedback for closed-loop speed control. As shown in Fig. 8, each driving controller receives 15V from the onboard power supply and provides it to the DC motors, where it is then stepped down to 5V for the encoders. In terms of sensor instrumentation, a 9-DOF inertial measurement unit (IMU) and a 360-degree laser scanner are integrated into the SMWV along with a Bluetooth receiver for close-range teleoperation.

Table II. Electronics component specifications.

Figure 8. Full system hardware architecture.

Table II shows the specifications of the different components embedded within the SMWV. All components are carefully selected based on size and power constraints.

5. Navigation Methodologies

In this section, the algorithms to achieve both low-level vehicle control and high-level path planning are described. These algorithms include Proportional-Integral-Derivative (PID) controllers for both driving and steering as well as an incremental localization method using wheel encoders and an IMU. Once the low-level algorithms are established, they are consolidated with global and local path planners to achieve obstacle avoidance and navigation tasks.

5.1. PID Differential Driving Controller

Starting with the driving controller, an incremental encoder is attached to the output shaft of each DC motor for closed-loop control, as mentioned previously. A PID controller is implemented to keep the error between the desired and actual vehicle velocity as small as possible during operation. With the encoder feedback, the controller calculates a velocity for the CG of the vehicle, $v$ , which is used by the software differential to generate inner and outer wheel speeds in the presence of nonzero yaw commands. The goal is to improve steering maneuverability and decrease tire scrubbing. Equations (8a) and (8b) calculate the differential speeds based on the linear and angular velocities as well as the track width, $B$ , of the vehicle. Figure 9 shows the PID control block diagram for the motors.

(8a) \begin{align}\begin{array}{*{20}{c}}{Righ{t_{velocity}} = v - \left( {\dot \theta *B/2} \right)}\end{array}\end{align}
(8b) \begin{align}\begin{array}{*{20}{c}}{Lef{t_{velocity}} = v + \left( {\dot \theta *B/2} \right)}\end{array}\end{align}

Figure 9. PID differential driving controller block diagram.
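The following sketch, with placeholder gains and measurements, shows how the software differential of Eqs. (8a) and (8b) could be wrapped around a PID velocity loop; it is an illustrative assumption of the structure in Fig. 9 rather than the controller actually deployed on the SMWV.

```python
class PID:
    """Minimal PID controller used for illustration only."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def differential_speeds(v_cmd, yaw_rate_cmd, track_width):
    """Software differential from Eqs. (8a) and (8b)."""
    right = v_cmd - yaw_rate_cmd * track_width / 2.0
    left = v_cmd + yaw_rate_cmd * track_width / 2.0
    return left, right


# one control cycle with placeholder gains and an assumed encoder reading
pid = PID(kp=1.2, ki=0.4, kd=0.05)
v_desired, v_measured = 0.30, 0.27           # [m/s]
v_cmd = v_desired + pid.update(v_desired - v_measured, dt=0.02)
left, right = differential_speeds(v_cmd, yaw_rate_cmd=0.25, track_width=0.40)
```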

5.2. PID Steering Controller

Once the desired steering angles are calculated based on Ackerman's geometry from Section 3, an actuator stroke position control algorithm is implemented to ensure satisfactory output. Since the vehicle features independent linear actuators for steering, built-in potentiometers are used for stroke position estimation. The maximum stroke of each linear actuator and the achievable steering angle are 50 mm and 35 degrees, respectively. Figure 10 illustrates the relationship between steering angle and actuator stroke based on experimental data.

Figure 10. Steering angle versus linear actuator stroke.

Equation (9) shows two third-order polynomials that model this relationship, which is close to linear except at full actuator extension. A control law was subsequently derived based on the PID control scheme and this model. Figure 11 shows the PID control block diagram for the linear actuators.

(9a) \begin{align}\begin{array}{*{20}{c}}{Strok{{\rm{e}}_{{\rm{left}}}} = \left( {5 \times {{10}^{ - 5}}} \right)\delta _{\rm{L}}^3 + 0.0014\delta _{\rm{L}}^2 - 0.7589{\delta _{\rm{L}}} + 24.974}\end{array}\end{align}
(9b) \begin{align}\begin{array}{*{20}{c}}{Strok{{\rm{e}}_{{\rm{right}}}} = - \left( {5 \times {{10}^{ - 5}}} \right)\delta _{\rm{R}}^3 + 0.0014\delta _{\rm{R}}^2 + 0.7586{\delta _{\rm{R}}} + 24.974}\end{array}\end{align}

Figure 11. PID steering controller block diagram.
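A small sketch of how Eqs. (9a) and (9b) could be used inside the stroke control loop is given below; the proportional gain stands in for the full PID of Fig. 11, and the function names are illustrative.

```python
def stroke_from_angle(delta_deg, side):
    """Target actuator stroke [mm] for a steering angle [deg], from Eqs. (9a)/(9b)."""
    if side == "left":
        return (5e-5) * delta_deg**3 + 0.0014 * delta_deg**2 - 0.7589 * delta_deg + 24.974
    return -(5e-5) * delta_deg**3 + 0.0014 * delta_deg**2 + 0.7586 * delta_deg + 24.974


def stroke_control(delta_deg, side, stroke_measured_mm, kp=0.8):
    """One proportional step of the closed-loop stroke control (integral and derivative terms omitted)."""
    error = stroke_from_angle(delta_deg, side) - stroke_measured_mm
    return kp * error   # drive signal sent to the steering controller
```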

5.3. Incremental Localization Algorithm

To localize the SMWV, a dead reckoning algorithm that utilizes both wheel encoders and an IMU is implemented. In the proposed strategy, the wheel encoders and IMU are responsible for linear and angular displacement, respectively. The wheel encoders show acceptable performance in short-range navigation; however, IMU drift issues are hard to ignore. From the experiment, it was determined that the yaw drift is especially prominent during long-distance navigation and when the vehicle is in an immobile state. For the former issue, a linear drift was deduced during physical trials; therefore, a compensator was integrated as a remedy. To alleviate the latter issue, an algorithm that stops updating the orientation of the vehicle when it is stationary and resumes when the vehicle becomes mobile is integrated. The results show promising capabilities for the application of this work.
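A hedged sketch of such a dead-reckoning scheme is shown below. The linear drift term and the movement threshold are assumptions used for illustration; the paper does not report the actual compensation constants.

```python
import math

class IncrementalLocalizer:
    """Dead-reckoning sketch: wheel encoders give distance, the IMU gives yaw."""
    def __init__(self, drift_rate=0.0):   # drift_rate: assumed linear yaw drift [rad/s]
        self.x = self.y = self.yaw = 0.0
        self.moving_time = 0.0
        self.drift_rate = drift_rate

    def update(self, encoder_distance, imu_yaw, dt):
        if abs(encoder_distance) > 1e-4:          # vehicle is moving
            self.moving_time += dt
            # compensate the linear drift and update the orientation
            self.yaw = imu_yaw - self.drift_rate * self.moving_time
            self.x += encoder_distance * math.cos(self.yaw)
            self.y += encoder_distance * math.sin(self.yaw)
        # when stationary, orientation updates are paused to suppress IMU drift
        return self.x, self.y, self.yaw
```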

5.4. Path Planning

After low-level vehicle control and localization are established, two open-source algorithms are implemented to achieve high-level global and local path planning [Reference Qiu, Fan, Meng, Zhang, Cong, Li, Wang and Zhao13]. Starting with global path planning, the classic Dijkstra's algorithm is deployed to solve the problem of generating a path between the initial and goal destinations. This algorithm works on a grid map where a value is assigned to every node to represent the cost of arriving at it. With this information, an iterative approach is utilized to find the shortest path. To consider real-time sensor data and mobile robot dynamics, a local path planner known as the "Timed Elastic Band" is implemented. The primary goal of this algorithm is to transform a series of waypoints into a trajectory that considers time intervals. It achieves this by modifying the global planner's output based on a multiobjective optimization framework. The objective function is the weighted summation of components, ${f_k}$ , which consider topics such as the nonholonomic constraint, fastest path, distance to waypoints, and obstacles. This is illustrated in Eq. (10), where $B$ is defined as a sequence of robot poses and time intervals, $\left( {Q,\;\tau } \right)$ :

(10) \begin{align}\begin{array}{*{20}{c}}{f\left( B \right) = \;\mathop \sum \nolimits_k {\gamma _k}{f_k}\left( B \right)\;}\end{array}\end{align}

The previously mentioned components are represented in two ways. First, objective functions that consider the nonholonomic constraint as well as the fastest path are shown in Eqs. (11) and (12). In the following equation, ${d_{i,\;i + 1}}$ denotes the direction vector between two consecutive waypoints:

(11) \begin{align}\begin{array}{*{20}{c}}{{f_k}\left( {{x_i},\;{x_{i + 1}}} \right) = \;\left\|\left[ {\left( {\begin{array}{*{20}{c}}{{\rm{cos}}{\beta _i}}\\{{\rm{sin}}{\beta _i}}\\0\end{array}} \right) + \left( {\begin{array}{*{20}{c}}{{\rm{cos}}{\beta _{i + 1}}}\\{{\rm{sin}}{\beta _{i + 1}}}\\0\end{array}} \right)} \right] \times {d_{i,\;i + 1}}\right\|^2}\end{array}\end{align}
(12) \begin{align}\begin{array}{*{20}{c}}{{f_k} = {{\left( {\mathop \sum \nolimits_{i = 1}^n \Delta{T_i}} \right)}^2}}\end{array}\end{align}

Second, the distance to each waypoint on the generated global path, as well as to nearby obstacles, is considered with the following two penalty functions. Equations (11)–(14) are combined with Eq. (10) to form the complete multiobjective optimization framework:

(13) \begin{align}\begin{array}{*{20}{c}}{{f_{{\rm{path}}}} = e\left( {{d_{{\rm{min}},j}},\;{r_{p{\rm{max}}}},\; \in ,\;S,\;n} \right)\;}\end{array}\end{align}
(14) \begin{align}\begin{array}{*{20}{c}}{{f_{{\rm{obstacle}}}} = e\left( { - {d_{{\rm{min}},\;j}},\; - {r_{{\rm{omin}}}},\; \in ,\;S,\;n} \right)}\end{array}\end{align}
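To make the structure of Eq. (10) concrete, the sketch below assembles a weighted sum of simplified component costs; the component functions are stand-ins for Eqs. (11) and (12), not the internals of the open-source TEB implementation.

```python
import numpy as np

def teb_objective(band, weights, components):
    """Weighted multiobjective cost from Eq. (10): f(B) = sum_k gamma_k * f_k(B).

    band       : sequence of (pose, time interval) pairs (Q, tau)
    weights    : dict mapping component name to gamma_k
    components : dict mapping component name to a callable f_k(band)
    """
    return sum(weights[name] * f(band) for name, f in components.items())

def fastest_path(band):
    """Eq. (12): squared sum of the time intervals."""
    return sum(tau for _, tau in band) ** 2

def nonholonomic(band):
    """Eq. (11): penalize headings inconsistent with the direction of travel."""
    cost, poses = 0.0, [p for p, _ in band]
    for (x0, y0, b0), (x1, y1, b1) in zip(poses[:-1], poses[1:]):
        d = np.array([x1 - x0, y1 - y0, 0.0])
        h = np.array([np.cos(b0) + np.cos(b1), np.sin(b0) + np.sin(b1), 0.0])
        cost += np.linalg.norm(np.cross(h, d)) ** 2
    return cost
```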

6. Autonomous Navigation Algorithm

The different methodologies introduced in the previous section are combined to enable autonomous navigation with the SMWV. The user is required to provide a goal pose for the mobile robot within the map frame. Once a goal is received, the global planner determines the shortest collision-free path and the localization algorithm runs in the background. The local planner is responsible for transforming the global plan (a series of waypoints) into linear and angular velocities that are then sent to the lower controllers for actuation. Table III summarizes the methodologies as pseudocode to provide the reader with an understanding of the workings of the vehicle.
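For readers unfamiliar with the ROS navigation stack, the snippet below sketches how a goal pose could be sent to the planners from a Python node. The node and topic names (move_base, map) follow common ROS conventions and are assumptions; the paper only states that open-source planners in ROS are used.

```python
#!/usr/bin/env python
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('smwv_goal_sender')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 10.0      # example goal, similar to the slalom test
goal.target_pose.pose.position.y = -0.8
goal.target_pose.pose.orientation.w = 1.0

client.send_goal(goal)        # global/local planners and low-level controllers take over
client.wait_for_result()
```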

7. Modified Position-Based Visual Servoing

Alternative steering configurations are explored to develop a two-stage algorithm called modified position-based visual servoing (M-PBVS). In this work, the alternative configurations are referred to as diamond steer (DS) and synchronous steer (SS), and they are illustrated in Position B of Fig. 12. The first stage of the M-PBVS algorithm corrects the orientation of the SMWV using DS. In this stage, the M-PBVS approach utilizes the onboard camera sensor to search for the visual landmark as the SMWV pivots. Once the landmark is detected, closed-loop orientation control using purely vision as feedback is deployed until the longitudinal axis of the SMWV is perpendicular to the x-axis of the visual landmark. By doing so, the SMWV does not depend on its odometry sensors but rather on the accuracy of the pose estimation algorithm. The pivot action is illustrated for Position B in Fig. 12. Once the orientation is corrected, the M-PBVS algorithm enters the second stage, which utilizes the SS configuration. Since the SMWV features a maximum steering angle, ${\delta _{max}}$ , of 30 degrees, two scenarios of control are possible; namely, one with a direct goal and the other with an alternate goal. The first scenario happens when the approach angle, ${\delta _{{\rm{approach}}}}$ , is within the maximum steering angle, such as Positions B and C, so the SMWV can directly arrive at the desired goal. In the second scenario, the approach angle is greater than the maximum steering angle; as a result, an alternate goal is calculated to reposition the SMWV in a way that would achieve the direct scenario.

Table III. SMWV autonomous navigation pseudocode

Figure 12. Different scenarios of M-PBVS.

This is illustrated by positions A, D, and E. From there, the proposed M-PBVS controller proceeds to minimize the position error based on visual feedback. Since the heading angle of the SMWV does not change during SS, the visual landmark remains within the camera’s field of view during its course. The next section details the mathematical model and controller design of the proposed M-PBVS algorithm.

7.1. System Modeling

The forward kinematics of a two-joint manipulator is used to model the mobile robot system in this case. By establishing the relationship between the different coordinate frames within the manipulator, a Jacobian matrix is derived to relate the velocity of the end effector with respect to the world, ${}_{eff}^{\phantom{ef}0}V$ , to the individual joint velocities ${\left[\begin{array}{l} {{{\dot \theta }_1}}\ {{{\dot d}_2}} \\ \end{array} \right]^T}$ . This is shown in Eq. (14), where the joint velocities $\left( {{{\dot \theta }_1}},\;{{{\dot d}_2}} \right)$ signify the robot's angular and linear velocities, respectively. More specifically, ${{{\dot \theta }_1}}$ represents the SMWV's angular velocity during DS and its steering velocity during SS, as discussed in later sections. This model restricts the movement of the robot to the x–y plane of the world frame with lateral slip neglected.

(14) \begin{align}_{eff}^{\phantom{ef}0}V = \left[ \begin{array}{l} {{v_x}} \\[3pt] {{v_y}} \\[3pt] {{v_z}} \\[3pt] {{w_x}} \\[3pt] {{w_y}} \\[3pt] {{w_z}} \\ \end{array} \right] = \left[ \begin{array}{ll} {{d_2}{C_1}} & {{S_1}} \\[3pt] {{d_2}{S_1}} & { - {C_1}} \\[3pt] 0 & 0 \\[3pt] 0 & 0 \\[3pt] 0 & 0 \\[3pt] 1 & 0 \\ \end{array} \right]\left[ \begin{array}{l} {{{\dot \theta }_1}} \\[3pt] {{{\dot d}_2}} \\[3pt] \end{array} \right] = J\left[ \begin{array}{l} {{{\dot \theta }_1}} \\[3pt] {{{\dot d}_2}} \\ \end{array} \right]\end{align}
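The Jacobian in Eq. (14) can be evaluated directly; the short sketch below (with illustrative joint values) maps the joint rates to the end-effector twist.

```python
import numpy as np

def jacobian(theta1, d2):
    """Jacobian of the revolute + prismatic model from Eq. (14)."""
    c1, s1 = np.cos(theta1), np.sin(theta1)
    return np.array([[d2 * c1,  s1],
                     [d2 * s1, -c1],
                     [0.0,      0.0],
                     [0.0,      0.0],
                     [0.0,      0.0],
                     [1.0,      0.0]])

# end-effector twist [vx, vy, vz, wx, wy, wz] for assumed joint rates
V = jacobian(theta1=0.1, d2=0.5) @ np.array([0.2, 0.1])
```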

7.2. Stage One: Orientation Control

With the kinematics model complete, the next step is to derive the control law for stage one. To accomplish this, Lyapunov's control scheme is applied as shown in Eq. (15), where $k$ represents the proportional gain. The error term for orientation, $e{\left( t \right)_w}$ , is the difference between the current and desired image feature sets as extracted by a pose estimation algorithm. The proposed M-PBVS algorithm formulates the control law with respect to the desired camera frame; therefore, the orientation error term is equivalent to the orientation vector, ${}_{\phantom{d}c}^{{c_d}}\phi $ , since the desired orientation is zero. In this work, the implemented pose estimation algorithm outputs the position and orientation estimate of the visual landmark.

(15) \begin{align}\dot e{(t)_w} = - k\,e{(t)_w} = - k\left({}_{\phantom{d}c}^{{c_d}}\phi - 0\right)_w\end{align}

Moving forward, the derivation of the orientation control law begins with representing the angular velocity of the camera with respect to the desired camera frame, ${}_{\phantom{d}c}^{{c_d}}w$ , as the angular velocity of the camera with respect to itself, ${}_c^cw$ with the help of a rotation matrix, ${}_{\phantom{d}c}^{{c_d}}R$ .

(16) \begin{align}\begin{array}{*{20}{c}}{{}_c^cw = {{\left( {{}_{\phantom{d}c}^{{c_d}}R} \right)}^T}{}_{\phantom{d}c}^{{c_d}}w}\end{array}\end{align}

Next, a transformation matrix, $T\left( \phi \right)$ , is applied to convert the rate of change of the orientation expressed in Euler angles, ${}_{\phantom{d}c}^{{c_d}}\dot\phi $ , into the angular velocity of the camera relative to the desired camera frame, ${}_{\phantom{d}c}^{{c_d}}w$ .

(17) \begin{align}{}_{\phantom{d}c}^{{c_d}}w = T(\phi )\,{}_{\phantom{d}c}^{{c_d}}\dot\phi \end{align}

Accordingly, Eq. (17) is substituted into Eq. (16) to form the following.

(18) \begin{align}{}_c^cw = {\left({}_{\phantom{d}c}^{{c_d}}R\right)^T}T(\phi )\,{}_{\phantom{d}c}^{{c_d}}\dot\phi \end{align}

Since $e{\left( t \right)_w}$ is equivalent to ${}_{\phantom{d}c}^{{c_d}}\phi $ , Eq. (18) is rewritten below in terms of the rate of the error, $\dot e{(t)_w}$ .

(19) \begin{align}{}_c^cw = {\left({}_{\phantom{d}c}^{{c_d}}R\right)^T}T(\phi )\,\dot e{(t)_w}\end{align}

By substituting Lyapunov’s control scheme from Eq. (15) into Eq. (19), the following control law for the angular velocity of the camera frame is derived.

(20) \begin{align}\begin{array}{*{20}{c}}{{}_c^cw = - k\;{{\left( {{}_{\phantom{d}c}^{{c_d}}R} \right)}^T}T\left( \phi \right)*e{{\left( t \right)}_w}}\end{array}\end{align}

Since the SMWV only pivots without translation at this stage, the two-joint manipulator model is reduced to just the revolute joint. As a result, the angular velocity of the SMWV's base is related to the angular velocity of the camera as shown in Eq. (21), where ${}_c^bR$ represents an identity rotation matrix because the camera is rigidly attached to the base.

(21) \begin{align}\begin{array}{*{20}{c}}{{}_b^bw = {}_c^bR\;{}_c^cw}\end{array}\end{align}

Next, the angular velocity of the base, ${}_b^bw$ , is related to the angular velocity of the base with respect to the world, ${}_b^0w$ by Eq. (22).

(22) \begin{align}\begin{array}{*{20}{c}}{{}_b^0w = {}_b^0R\;{}_b^bw}\end{array}\end{align}

Using Eq. (14), the angular velocity of the SMWV’s base frame is described based on the angular joint velocity, $\dot \theta $ , using a Jacobian matrix, ${J_w}$ , as seen below.

(23) \begin{align}\begin{array}{*{20}{c}}{{}_{{\rm{base}}}^{\phantom{bas}0}w = {J_w}\;\dot \theta }\end{array}\end{align}

By combining Eqs. (20), (21), (22), and (23), the complete law that controls the angular velocity of the SMWV based on the orientation error is shown below. It is important to note that a pseudo-inverse for the Jacobian matrix, $J_w^ + $ , is applied to approximate the inverse kinematics.

(24) \begin{align}\begin{array}{*{20}{c}}{\dot \theta = - k\;J_w^ + \;{}_b^0R\;{}_c^bR\;{{\left( {{}_{\phantom{d}c}^{{c_d}}R} \right)}^T}\;T\left( \phi \right)\;e{{\left( t \right)}_w}}\end{array}\end{align}

Using differential kinematics, the angular velocity of the SMWV is further expressed in terms of wheel velocities as shown in Eq. (25).

(25) \begin{align}\begin{array}{*{20}{c}}{\dot \theta = \;\left[ {\begin{array}{*{20}{c}}{ - \dfrac{r}{L}} & {\dfrac{r}{L}}\end{array}} \right]\;\left[ {\begin{array}{*{20}{c}}{{\omega _{{\rm{left}}}}}\\[3pt] {{\omega _{{\rm{right}}}}}\end{array}} \right]}\end{array}\end{align}
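Under the simplifying assumption that the camera is mounted on the pivot axis and aligned with the base (so the rotation matrices and T(φ) reduce to identity), the stage-one control of Eqs. (24) and (25) collapses to the short sketch below; it illustrates the structure rather than the exact controller used on the SMWV.

```python
def stage_one_yaw_rate(phi_error, k=0.5):
    """Proportional orientation control in the spirit of Eq. (24) (yaw component only)."""
    return -k * phi_error

def wheel_rates_from_yaw(theta_dot, r, L):
    """Invert Eq. (25): opposite wheel speeds that produce the commanded yaw rate during DS."""
    w_right = theta_dot * L / (2.0 * r)
    return -w_right, w_right   # (w_left, w_right)
```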

7.3. Stage Two: Position Control

Stage two of the M-PBVS algorithm begins by determining which of the two scenarios the SMWV is currently in (direct or alternate goal). To do this, the desired position coordinates, $\left( {{x_d},\,{y_d}} \right)$ , are compared with the current coordinates, $\left( {x,\,y} \right)$ , as shown in Eq. (26). The approach angle is then measured against the maximum steering angle to determine which is larger in value.

(26) \begin{align}{{\delta _{{\rm{approach}}}} = \;\dfrac{{{y_d} - y}}{{{x_d} - x}}}\end{align}

If the approach angle is less than the maximum steering angle, then the SMWV is in the direct goal scenario, where the steering angles are set as shown below.

(27) \begin{align}{{\delta _{{\rm{steering}}}} = {{\tan }^{ - 1}}{\delta _{{\rm{approach}}}}}\end{align}

If the approach angle is greater than the maximum steering angle, then the SMWV requires an alternate goal, which is calculated by finding the intersection between lines extended from the current and desired positions at the maximum steering angle, as given by Eqs. (28) and (29). These lines are represented by grey dashed lines in Fig. 12.

(28) \begin{align}\begin{array}{*{20}{c}}{\textrm{current:y} = \;\tan \left( { \pm {\delta _{{\rm{max}}}}} \right)\;x + b}\end{array}\end{align}
(29) \begin{align}\begin{array}{*{20}{c}}{\textrm{desired:}{y_d} = \;\tan \left( { \pm {\delta _{{\rm{max}}}}} \right)\;{x_d} + {b_d}}\end{array}\end{align}
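A sketch of the direct/alternate goal decision described by Eqs. (26)–(29) is given below; the intersection search over both signs of δmax and the shortest-distance selection follow the text, while the function and variable names are illustrative.

```python
import math

def plan_stage_two(x, y, xd, yd, delta_max):
    """Return ('direct', steering_angle) or ('alternate', (x_alt, y_alt))."""
    slope = (yd - y) / (xd - x)                      # Eq. (26)
    approach = math.atan(slope)                      # Eq. (27)
    if abs(approach) <= delta_max:
        return 'direct', approach

    # Alternate goal: intersect lines leaving the current and desired positions
    # at +/- delta_max (Eqs. (28)-(29)); keep the intersection closest to the
    # current position, as described in the text.
    candidates = []
    for s in (+1.0, -1.0):
        m1, m2 = math.tan(s * delta_max), math.tan(-s * delta_max)
        b1, b2 = y - m1 * x, yd - m2 * xd
        if abs(m1 - m2) < 1e-9:
            continue
        xi = (b2 - b1) / (m1 - m2)
        yi = m1 * xi + b1
        candidates.append((math.hypot(xi - x, yi - y), (xi, yi)))
    return 'alternate', min(candidates)[1]
```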

By doing so, the alternate goal is guaranteed to be within the achievable steering angle. Once the SMWV has arrived at the alternate goal, the new approach angle towards the desired goal should be equivalent to the maximum steering angle, thereby making it a direct goal. It is important to note that there are two alternate goal solutions at each initial position because paths are calculated based on both positive and negative ${\delta _{{\rm{max}}}}$ . To solve this issue, the alternate goal with the shortest distance from the current position is always selected. For example, alternate goal scenarios in Quadrants 1 and 2 would always result in the SMWV generating a forward velocity as shown by Position D in Fig. 12. On the other hand, the opposite is true for alternate goal scenarios in Quadrants 3 and 4, as illustrated by Positions A and E, where the shortest distance requires the SMWV to reverse. Once the direct or alternate goal positions are either detected or calculated, Lyapunov's proportional control scheme is applied similarly to stage one. However, the only difference is that the error term in the second stage consists of two vectors, namely the translational vector, ${}_{\phantom{d}c}^{{c_d}}s$ , and the orientation vector, ${}_{\phantom{d}c}^{{c_d}}\phi $ . Starting with the latter, the derivation of the orientation control for stage two is the same as for stage one; however, the steering velocity, $\dot \varphi $ , is used for SS instead of the angular velocity from Eq. (24). Also, the revolute joint's angle is constrained by the maximum steering angle. The following illustrates the steering control law.

(30) \begin{align}\begin{array}{*{20}{c}}{\dot \varphi = - k\;J_w^ + \;{}_b^0R\;{}_c^bR\;{{\left( {{}_{\phantom{d}c}^{{c_d}}R} \right)}^T}\;T\left( \phi \right)\;e{{\left( t \right)}_w}}\end{array}\end{align}

where $ - {\delta _{{\rm{max}}}} \lt {\theta _1} \lt {\delta _{{\rm{max}}}}$ . On the other hand, Lyapunov’s control scheme is applied to the translational vector similarly to Eq. (15) for orientation as shown below.

(31) \begin{align}\dot e{(t)_s} = - k\left({}_{\phantom{d}c}^{{c_d}}s - 0\right)_s\end{align}

Next, the translational vector is represented in terms of translational camera velocity, ${}_c^cv$ , as shown below.

(32) \begin{align}{}_c^cv = {\left({}_{\phantom{d}c}^{{c_d}}R\right)^T}{}_{\phantom{d}c}^{{c_d}}\dot s\end{align}

From here, the rate of change of the translational vector, ${}_{\phantom{d}c}^{{c_d}}\dot s$ , is equivalent to the rate of the error as suggested by Eq. (31). As a result, the translational control law is written as follows.

(33) \begin{align}\begin{array}{*{20}{c}}{{}_c^cv = \; - k{{\left( {{}_{\phantom{d}c}^{{c_d}}R} \right)}^T}e{{\left( t \right)}_s}}\end{array}\end{align}

The above control law is capable of controlling the translational velocity of the camera; however, it is not complete in the sense that it does not consider the SMWV's base frame. Therefore, rotation matrices between the world and camera frames are substituted into Eq. (33) to formulate the translational control law below.

(34) \begin{align}\begin{array}{*{20}{c}}{v = \; - k\;J_t^ + {}_b^0R\;{}_c^bR\;{{\left( {{}_{\phantom{d}c}^{{c_d}}R} \right)}^T}\;e{{\left( t \right)}_s}}\end{array}\end{align}

Lastly, the full control law for stage two of the proposed M-PBVS is completed by combining the steering control from Eq. (30) and the translational control as seen above to form the following, where $s(d)$ is a skew-symmetric matrix that describes the camera's position with respect to the base frame.

(35) \begin{align}\begin{array}{*{20}{c}}{\left[ {\begin{array}{*{20}{c}}v\\{\dot \varphi }\end{array}} \right] = \; - k\;{J^ + }\;E\;L\;e\left( t \right)}\end{array}\end{align}
(36) \begin{align}\begin{array}{*{20}{c}}{E = \;\left[ {\begin{array}{c@{\quad}c}{{}_b^0R} & {{0_{3x3}}}\\[3pt] {{0_{3x3}}} & {{}_b^0R}\end{array}} \right]\left[ {\begin{array}{c@{\quad}c}{{}_c^bR} & {s\left( d \right){}_c^bR}\\[3pt] {{0_{3x3}}} & {{}_c^bR}\end{array}} \right]}\end{array}\end{align}
(37) \begin{align}\begin{array}{*{20}{c}}{L = \left[ {\begin{array}{c@{\quad}c}{{{\left( {{}_{\phantom{d}c}^{{c_d}}R} \right)}^T}} & 0\\[3pt] 0 & {{{\left( {{}_{\phantom{d}c}^{{c_d}}R} \right)}^T}{\rm{*}}T\left( \phi \right)}\end{array}} \right]}\end{array}\end{align}
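The matrix structure of Eqs. (35)–(37) is summarized in the sketch below, where all rotation matrices, the Euler-rate transformation, and the error vector are placeholders supplied by the pose estimator; it illustrates how the terms combine rather than reproducing the deployed controller.

```python
import numpy as np

def skew(d):
    """Skew-symmetric matrix s(d) of the camera position in the base frame."""
    return np.array([[0, -d[2], d[1]],
                     [d[2], 0, -d[0]],
                     [-d[1], d[0], 0]])

def stage_two_command(J, R_0b, R_bc, R_cdc, T_phi, d, e, k=0.5):
    """Assemble Eqs. (35)-(37) and return the commanded [v, phi_dot].

    J: 6x2 Jacobian (Eq. (14));  R_0b: base-to-world rotation;  R_bc: camera-to-base rotation;
    R_cdc: camera-to-desired-camera rotation;  T_phi: Euler-rate transformation;
    d: camera position in the base frame;  e: 6x1 error [translation; orientation].
    """
    Z = np.zeros((3, 3))
    E = np.block([[R_0b, Z], [Z, R_0b]]) @ np.block([[R_bc, skew(d) @ R_bc], [Z, R_bc]])  # Eq. (36)
    L = np.block([[R_cdc.T, Z], [Z, R_cdc.T @ T_phi]])                                    # Eq. (37)
    return -k * np.linalg.pinv(J) @ E @ L @ e                                             # Eq. (35)
```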

7.4. Mobile Robot Parking Algorithm

As mentioned previously, a common limitation in recent literature regarding autonomous parking is the lack of obstacle avoidance and the use of outdated sensors. Additionally, specific coordinates of the charging station are often required. These two requirements cause fundamental problems since obstacle and parking station locations can frequently change depending on the environment. To overcome these constraints, the proposed algorithm works in two stages. First, the path planners are utilized to generate a collision-free path towards the parking area. Upon arrival, the SMWV utilizes the proposed M-PBVS algorithm to correct its pose and reach the desired location precisely. This method allows the SMWV to plan its path online while specific parking station coordinates are not necessary. The pseudocode of the proposed mobile robot parking approach is presented in Table IV.
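The overall two-stage flow can be summarized in a few lines; the sketch below uses placeholder callables for the planners, the two M-PBVS stages, and the landmark detector, and is only an outline of the sequence listed in Table IV.

```python
def park(navigate_to_area, pivot_search, orientation_control, position_control, landmark_visible):
    """Outline of the two-stage parking flow (Table IV); all callables are placeholders."""
    # Stage 1: global/local planners drive the SMWV to the parking area (coarse arrival)
    navigate_to_area()

    # Stage 2: M-PBVS pose correction against the visual landmark
    while not landmark_visible():
        pivot_search()            # diamond steer pivot until the landmark is detected
    orientation_control()         # stage one of M-PBVS (Section 7.2)
    position_control()            # stage two of M-PBVS with synchronous steer (Section 7.3)
```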

Table IV. Mobile robot parking pseudocode

8. Experimental Results

In this section, the proposed algorithms are implemented on the physical model for two different experiments. The first one is denoted as the "Slalom Test," which is intended to evaluate the vehicle's ability to maneuver between obstacles in both directions. The second test is denoted as the "Parking Test," which is intended to evaluate the navigation ability in tight spaces. For the parking test, the overall travel of the vehicle is less than 0.2 m; however, this is intentional as the purpose of this test is to correct errors in vehicle pose during parking. Both tests illustrate common scenarios that an autonomous platform would encounter. The experimental setup and results from both tests are shown below.

8.1. Experiment Set-Up

As previously mentioned, the vehicle is instrumented with a 360-degree laser scanner and IMU for obstacle detection and localization, respectively. Figure 13 shows the sensor placement relative to the base frame of the vehicle. The laptop is placed on top of the chassis at the center of the vehicle. The two experiments are conducted in an indoor environment with flat smooth surfaces and opaque, rigid obstacles. Due to the placement of the laser scanner at the top of the chassis, a minimum obstacle height of 0.6 m is necessary for obstacle detection during navigation.

Figure 13. Experimental setup.

8.2. Slalom Test

Starting with the Slalom Test, the vehicle begins at the origin with a goal at approximately $\left( {10,\; - 0.8} \right)$ in the map frame. Two barrier obstacles are placed along the way with dimensions of approximately 1.0 × 0.6 m. Figure 14(a) illustrates the robot's trajectory, as generated by both Dijkstra's algorithm and the timed elastic band (TEB) planner, around the obstacles towards its goal. The whole navigation took approximately 30 s with the maximum linear and angular velocities constrained to 0.3 m/s and 0.25 rad/s, respectively. In Fig. 14(b), the desired velocity remained at the maximum throughout the course until the last 5 s, where the vehicle slowed down to adjust its pose. Since the desired velocity represents the output of the TEB local planner, the actual velocity data exhibit significantly more noise as they are obtained from the achieved wheel speeds. Regardless of the noise, it is apparent that the vehicle was able to follow the path planner commands as the overall trends of both datasets resemble each other. In the same graph, the inner and outer wheel velocities are also displayed. In this case, the left and right wheel velocities zig-zag around one another as the angular velocities from Fig. 14(c) are taken into account. In this figure, the vehicle attempts to clear the first and second obstacles between 0–18 s and 18–28 s, respectively. By the convention assigned in the experimental setup section, a negative angular velocity implies a right steer command. With this in mind, it is easy to see that the differential speed from Fig. 14(b) matches accordingly, as the outer wheels always exhibit a higher velocity than the inner wheels due to the turning radius difference. In Fig. 14(d), the steering angles of the front two axles are illustrated. It is important to note that the rear steering angles exhibit the same magnitude but opposite direction (for spacing and clarity, only the front two-axle steering angles are displayed). From this figure, the maximum steering angle reached is 35 degrees. When looking closer at Fig. 14(d) around 25 s, the vehicle is attempting to steer right after clearing the second obstacle.

Figure 14. (a) robot trajectory, (b) linear velocity, (c) angular velocities, (d) front two-axle steering angles.

During this, the right wheel of the first axle exhibits the highest turning angle, followed by the left wheel of the same axle, and then the right and left wheels of the second axle, accordingly. This relationship reflects the Ackerman geometry discussed in the previous sections. Figure 15 shows consecutive images of the slalom experiment with the top row displaying the physical SMWV and the bottom row displaying the accompanying laser data visualization. This test shows the overall successful navigation ability of the vehicle in an obstacle-ridden environment using its steering capabilities, arriving at the destination within a minimal tolerance with regard to both position and orientation.

Figure 15. Physical experiment (top) and RVIZ (bottom).

8.3. Parking Experimentation

To validate the proposed two-stage parking algorithm, the navigation ability of the SMWV is tested first. The proposed M-PBVS algorithm is then employed to alleviate any remaining pose inaccuracies. Lastly, the M-PBVS algorithm is compared with the traditional path planner to quantify the improvements made.

Figure 16. Autonomous navigation experiment setup.

Figure 17. Physical experiment (top) and RVIZ (bottom).

8.3.1. Autonomous Navigation Test

This test evaluates stage one of the proposed algorithm, namely the SMWV’s ability to navigate towards the parking area. To do so, three initial regions located throughout an indoor hallway, as seen in Fig. 16, are chosen. The map of the test environment was acquired prior to the experimentation using standard ROS packages, namely the OpenSLAM implementation known as GMapping [40]. During the experiment, the map was loaded and the robot’s initial position was manually defined prior to autonomous travel. Within each region, three different positions and orientations are randomly selected as starting poses. From each starting pose, the vehicle is commanded to the parking area via Dijkstra’s algorithm and the TEB planner to account for any obstacles along the way [41, 42]. To ensure a proper reset of accumulated error, sensor power is cycled between tests. Looking closer at one of the trials, Fig. 16 illustrates the path planner’s ability to utilize the onboard sensors to navigate around obstacles towards the goal. Figure 17 shows consecutive images of this experiment, with the top row displaying the physical SMWV and the bottom row the accompanying laser data visualization. Since the accuracy of the final achieved pose is of great importance in parking applications, the test is repeated nine times. Table V tabulates the different starting poses as well as the final achieved poses with their respective average position and orientation errors. From these results, it is evident that Region 1 exhibits the largest drift, leading to the greatest deviation between achieved and desired poses of 2.06 m and 0.26 rad. Region 3 achieves a more accurate result, with a deviation of 0.98 m and 0.19 rad. The final positions are plotted in Fig. 18, which shows the tolerance window calculated by averaging the errors of all final positions; the result is a circle with a 1.5-m radius. Based on these findings, it is concluded that the SMWV can navigate autonomously, albeit with an average position and orientation error of 1.54 m and 0.22 rad, respectively.
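As a minimal illustration of how the drift statistics above could be computed from the recorded poses, the sketch below averages the Euclidean position errors and wrapped orientation errors of the achieved poses relative to the goal; the numerical poses shown are placeholders, not the values in Table V.

```python
import math

# Placeholder goal and achieved poses (x [m], y [m], yaw [rad]); not the values in Table V.
goal = (10.0, 2.0, 0.0)
achieved = [
    (8.3, 1.1, 0.21),
    (9.1, 1.6, -0.18),
    (9.4, 2.5, 0.25),
]


def pose_error(goal_pose, achieved_pose):
    """Euclidean position error and wrapped absolute orientation error."""
    dx = achieved_pose[0] - goal_pose[0]
    dy = achieved_pose[1] - goal_pose[1]
    dyaw = (achieved_pose[2] - goal_pose[2] + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(dx, dy), abs(dyaw)


errors = [pose_error(goal, p) for p in achieved]
avg_pos = sum(e[0] for e in errors) / len(errors)  # radius of the tolerance circle
avg_yaw = sum(e[1] for e in errors) / len(errors)
print(f"average position error: {avg_pos:.2f} m, average orientation error: {avg_yaw:.2f} rad")
```

The average position error computed this way is what defines the radius of the tolerance circle plotted in Fig. 18.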

Table V. Navigation test results

Figure 18. Drift results.

8.3.2. M-PBVS Test

With the findings of the previous experiment, this section evaluates the M-PBVS algorithm’s performance in close-quarters pose correction. Starting with the experimental setup, all tests in this section are performed in an indoor lab environment with smooth surfaces. The desired position is chosen in front of a visual landmark at (0, 0) with a heading angle of zero, as shown in Fig. 19. Note that the heading angle, $\vartheta$, is measured relative to the longitudinal axis of the SMWV, where a counter-clockwise rotation is deemed positive. From the desired pose, four quadrants are identified based on a Cartesian coordinate system. Considering the results of the previous experiment as well as the setup presented here, two sets of experiments are conducted. The first, the M-PBVS Test, begins with the SMWV at a position and orientation error that fits within the previously acquired tolerance. From there, the M-PBVS algorithm must determine whether it is in a direct or alternate goal scenario and then dock the SMWV as precisely as it can. The second, the Comparison Test, studies the performance differences between the proposed M-PBVS algorithm and the traditional path planner. Experimental data such as trajectory, linear/angular velocity, and position and orientation errors are presented.

Figure 19. Experimental setup.

Figure 20. Direct goal trajectory.

Starting with the M-PBVS test, position and orientation errors are introduced by placing the SMWV at position (0.15, 0.9) with a –0.2 rad heading angle in Quadrant 1. As mentioned before, the desired pose is the origin of the Cartesian plane with a 0 rad heading angle. In Fig. 20, the initial and final poses of the SMWV are illustrated on top of the trajectory generated by the proposed M-PBVS algorithm. The first stage of the M-PBVS algorithm generates an angular velocity that reduces the heading error using the DS configuration. This velocity is evident during the first 4 s of the test, where it reaches a maximum of 0.15 rad/s, pivoting the SMWV clockwise about its center as seen in Fig. 21. As a result, the orientation error is reduced to approximately zero, as shown in Fig. 22, while both the linear velocity and the position error remain unchanged. Once the orientation is corrected, the SMWV enters the second stage of M-PBVS, which first determines whether it is in a direct goal or alternate goal scenario. Based on the calculation from Eq. (26), the initial position qualifies as a direct goal scenario since the initial ${\delta _{{\rm{approach}}}}$ is approximately 10 degrees, which is less than ${\delta _{{\rm{max}}}}$. Consequently, the generated steering angles, shown in Fig. 23, remain between 8 and 10 degrees in an SS configuration until the vehicle arrives at the intended goal. In parallel with the steering control, the linear velocity reaches a maximum of 0.2 m/s before slowly reducing to zero over the next 20 s. During this second stage, the angular velocity is zero while the linear velocity is negative because the desired pose lies behind the initial pose, as shown in Fig. 22. Snapshots of the physical experiment along with the camera’s field of view are shown in Fig. 24. This test illustrates the M-PBVS algorithm’s ability to correct the SMWV’s initial pose to centimeter-level accuracy, as the final achieved position is (0.01, –0.04), which yields an error percentage of 5.6%.
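A simplified sketch of this two-stage logic is given below: stage one pivots the vehicle in place (DS configuration) until the heading error is removed, and stage two compares the approach angle against a threshold to choose between the direct- and alternate-goal behaviours before reversing toward the goal with a proportional steering command (SS configuration). The gains, thresholds, and the approach-angle calculation are illustrative assumptions, not the exact implementation of Eq. (26).

```python
import math

# Illustrative thresholds and gains (assumptions, not the paper's tuned values)
DELTA_MAX = math.radians(15.0)    # direct-goal threshold on the approach angle
HEADING_TOL = math.radians(2.0)   # stage-one exit condition
POSITION_TOL = 0.05               # goal-reached tolerance [m]
PIVOT_RATE = 0.15                 # stage-one pivot rate magnitude [rad/s]
REVERSE_SPEED = 0.2               # stage-two linear velocity magnitude [m/s]


def m_pbvs_step(x_err, y_err, yaw_err):
    """One control cycle of the two-stage logic.

    Returns (v, omega, steer): linear velocity [m/s], pivot rate [rad/s], and
    steering angle [rad]. The inputs are the landmark-relative pose errors,
    e.g. obtained from a camera-based landmark detection."""
    # Stage 1: pivot in place (DS configuration) until the heading error is removed.
    if abs(yaw_err) > HEADING_TOL:
        return 0.0, -math.copysign(PIVOT_RATE, yaw_err), 0.0

    # Stage 2: classify the scenario from the approach angle (stand-in for Eq. (26)).
    delta_approach = math.atan2(abs(x_err), abs(y_err))
    distance = math.hypot(x_err, y_err)
    if delta_approach <= DELTA_MAX:
        # Direct goal: reverse toward the goal (it lies behind the vehicle)
        # with a small proportional steering command in the SS configuration.
        v = -REVERSE_SPEED if distance > POSITION_TOL else 0.0
        steer = 0.5 * math.copysign(delta_approach, x_err)
        return v, 0.0, steer
    # Alternate goal: move toward an intermediate pose that first reduces the lateral offset.
    return REVERSE_SPEED / 2.0, 0.0, math.copysign(DELTA_MAX, x_err)


if __name__ == "__main__":
    # Initial pose of the direct-goal test: (0.15, 0.9) offset with a -0.2 rad heading error.
    print(m_pbvs_step(0.15, 0.9, -0.2))   # stage one: pivot command only
    print(m_pbvs_step(0.15, 0.9, 0.0))    # stage two: reverse with a small steer
```

Under these assumed thresholds, the direct-goal test pose produces an approach angle of roughly 9.5 degrees, so the controller reverses with a small, fixed-sign steering command, mirroring the behaviour reported above.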

Table VI. Comparison results.

Figure 21. Direct goal velocity.

Figure 22. Direct goal pose error.

Figure 23. Direct goal steering angle.

Figure 24. Direct goal test: physical experiment (top) and camera view (bottom).

Next is the Comparison Test, which provides better insight into the performance difference between TEB and M-PBVS. This time, the initial pose is located at (0.64, –0.91) within Quadrant 2 with an initial orientation of –0.28 rad. This pose is chosen such that the M-PBVS algorithm enters an alternate goal scenario, to avoid repeating the conditions of the previous test. The trajectories of both algorithms are plotted in Fig. 25, where the TEB planner changes the direction of the SMWV a total of four times while the M-PBVS algorithm generates a path that requires only one change in direction. The result is a much longer traveled distance of 8.91 m for the TEB planner compared to 1.85 m for the M-PBVS algorithm. The M-PBVS algorithm’s efficiency is also evident in Fig. 26, where its position error reduces to zero in 11 s compared to 19 s for the TEB planner. Moreover, the final achieved positions of the TEB and M-PBVS algorithms are (0.23, 0.17) and (0.02, –0.05), respectively, showing that the M-PBVS algorithm is far more accurate at position correction. The orientation error in Fig. 27 further validates the M-PBVS algorithm, which reduces the orientation error within 3 s to a final error of 0.71%. The TEB planner uses multiple turns to correct its orientation, yet its final result is less accurate, with an error of 60.71%. From both tests in this section, it is evident that the simplicity of the M-PBVS algorithm yields higher accuracy and efficiency in both position and orientation correction when compared with the TEB planner. Table VI tabulates the performance metrics of the two algorithms for this test.
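The traveled-distance and direction-change metrics used in this comparison could be recovered from logged data with short helpers such as the sketch below; the sample trajectory and velocity profile are placeholders, not the logged TEB or M-PBVS data.

```python
import math


def path_length(trajectory):
    """Total traveled distance of a logged (x, y) trajectory."""
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(trajectory, trajectory[1:]))


def direction_changes(linear_velocities):
    """Count how many times the commanded linear velocity changes sign,
    i.e. how often the vehicle switches between forward and reverse motion."""
    signs = [v > 0 for v in linear_velocities if abs(v) > 1e-3]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)


# Placeholder samples, not the logged M-PBVS or TEB data.
sample_traj = [(0.64, -0.91), (0.40, -0.55), (0.10, -0.10), (0.02, -0.05)]
sample_vels = [0.1, 0.1, -0.2, -0.2, -0.1]
print(f"traveled distance: {path_length(sample_traj):.2f} m, "
      f"direction changes: {direction_changes(sample_vels)}")
```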

Figure 25. Alternate goal trajectory comparison.

Figure 26. Alternate goal position error comparison.

Figure 27. Alternate goal orientation error comparison.

9. Conclusion and Future Work

The motivation behind this work is to develop a novel 8WD8WS SMWV platform capable of autonomous navigation. As presented, all subsystems of the vehicle, including the chassis, suspension, steering, driving, and parking, were discussed in detail. Furthermore, the electronics hardware architecture that permits independent wheel steering and driving is also presented. Beyond the hardware, the software necessary for low-level motion control and localization is proposed and combined with high-level path planners to achieve obstacle avoidance and navigation. For the low-level control, PID controllers for both wheel speed and steering angle are implemented. Localization is achieved through an incremental method that combines the wheel encoders and the IMU for position and orientation, respectively. The global and local path planners employed are Dijkstra’s algorithm and the TEB planner. The results are obtained from physical experiments that include two separate tests designed to study different performance aspects.
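For reference, a minimal sketch of the incremental localization idea summarized above (translation from averaged wheel-encoder counts, heading taken from the IMU) is given below; the encoder resolution and update interface are assumptions for illustration, not the platform’s actual implementation.

```python
import math

TICKS_PER_METER = 5000.0   # assumed encoder resolution (illustrative)


class IncrementalOdometry:
    """Dead-reckoning pose estimate: traveled distance from the averaged wheel
    encoders, heading taken directly from the IMU yaw."""

    def __init__(self):
        self.x = 0.0
        self.y = 0.0
        self.last_ticks = None

    def update(self, wheel_ticks, imu_yaw):
        """wheel_ticks: cumulative encoder counts, one per wheel;
        imu_yaw: current yaw angle from the IMU [rad]."""
        mean_ticks = sum(wheel_ticks) / len(wheel_ticks)
        if self.last_ticks is not None:
            ds = (mean_ticks - self.last_ticks) / TICKS_PER_METER  # incremental distance
            self.x += ds * math.cos(imu_yaw)
            self.y += ds * math.sin(imu_yaw)
        self.last_ticks = mean_ticks
        return self.x, self.y, imu_yaw


if __name__ == "__main__":
    odom = IncrementalOdometry()
    odom.update([0] * 8, 0.0)              # initial sample for the eight wheels
    print(odom.update([500] * 8, 0.1))     # pose after 0.1 m of travel at 0.1 rad yaw
```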

From the navigation experiment, the resultant trajectory for one of the runs is presented to show the SMWV’s ability to navigate around obstacles towards the parking area. Furthermore, the developed steering and speed controllers proved fully functional and able to keep up with the high-level path planner. To further study the effects of sensor drift, the final achieved poses from nine initial poses were recorded. The results showed that the arrival position and orientation errors are proportional to the distance between the starting and desired poses; in other words, the runs that started from the farthest initial poses finished with the largest deviations from the desired pose. Averaging the deviations across all tests gave an error of 1.5 m, which was then represented as a tolerance window in the form of a circle with a 1.5-m radius.

Based on the drift results acquired from the previous test, the proposed M-PBVS algorithm must bring the 1.5 m and 0.22 rad tolerance down to within 10 cm and 0.05 rad, respectively, to ensure successful parking. In the first experiment, the M-PBVS algorithm achieved final position and orientation errors of 5.78% and 1.07%, respectively, which translate to a position error of approximately 7.20 cm and an orientation error of 0.003 rad. In the second experiment, the proposed M-PBVS algorithm was compared with TEB to evaluate its improvements in pose correction. The results showed that the M-PBVS algorithm was considerably more accurate and faster than TEB at close-quarters pose correction. This is because the M-PBVS algorithm utilizes a visual landmark to correct its pose effectively in a two-stage manner, while the TEB algorithm is constrained by the SMWV’s minimum turning radius. As a result, the M-PBVS-controlled SMWV traveled 7.06 m less than the TEB-controlled SMWV. In addition, the M-PBVS algorithm consistently achieved higher accuracy, as shown by both its position and orientation error percentages compared with TEB’s. These results validate the benefits of the M-PBVS algorithm, which capitalizes on the mechanical design of the proposed SMWV platform.

For future work, an enhanced control scheme beyond the present position-based approach will be developed. This will allow better performance on rugged terrain and in other more complex environments, showcasing the full capabilities of the 8WD8WS vehicle, as its real-world counterpart is rarely used on paved roads or other benign conditions. In addition, more robust sensor fusion using an Extended Kalman Filter will be investigated. Overall, the SMWV platform was successfully developed and validated.

Acknowledgment

The authors express their gratitude to the NSERC Discovery Grant (DG) program for partially funding this study, as well as to Matt Levins, Eric McCormick, and Abdul Al-Shanoon for their assistance during the experiments.

References

[1] Crouse, M., “Today in Engineering History: Car Pioneer Karl Benz Born,” Product Design & Development (2015).
[2] Bagloee, S. A., Tavana, M., Asadi, M. and Oliver, T., “Autonomous vehicles: challenges, opportunities, and future implications for transportation policies,” J. Mod. Transp. 24(4), 284–303 (2016).
[3] Blok, P. M., van Boheemen, K., van Evert, F. K., Ijsselmuiden, J. and Kim, G.-H., “Robot navigation in orchards with localization based on Particle filter and Kalman filter,” Comput. Electron. Agric. 157, 261–269 (2019).
[4] Peterson, J., Li, W., Cesar-Tondreau, B., Bird, J., Kochersberger, K., Czaja, W. and McLean, M., “Experiments in unmanned aerial vehicle/unmanned ground vehicle radiation search,” J. Field Robot. 36(4), 818–845 (2019).
[5] Bawden, O., Kulk, J., Russell, R., McCool, C., English, A., Dayoub, F., Lehnert, C. and Perez, T., “Robot for weed species plant-specific management,” J. Field Robot. 34(6), 1179–1199 (2017).
[6] Carpio, R. F., Potena, C., Maiolini, J., Ulivi, G., Rossello, N. B., Garone, E. and Gasparri, A., “A Navigation Architecture for Ackermann Vehicles in Precision Farming,” IEEE Robot. Autom. Lett. 5(2), 1102–1109 (2020).
[7] Sanguino, T. J. M., “50 years of rovers for planetary exploration: A retrospective review for future directions,” Robot. Auton. Syst. 94, 172–185 (2017).
[8] Islam, M., Chowdhury, M., Rezwan, S., Ishaque, M., Akanda, J., Tuhel, A. and Riddhe, B., “Novel design and performance analysis of a Mars exploration robot: Mars rover mongol pothik,” pp. 132–136 (2017).
[9] Kumar, S., Gogul, I., Raj, M., Pragadesh, S. K. and Sebastin, J., “Smart Autonomous Gardening Rover with Plant Recognition Using Neural Networks,” Procedia Comput. Sci. 93, 975–981 (2016).
[10] Radhakrishna Prabhu, S. G., Seals, R. C., Kyberd, P. J. and Wetherall, J. C., “A survey on evolutionary-aided design in robotics,” Robotica 36(12), 1804–1821 (2018).
[11] Martínez-García, E. A., Lerín-García, E. and Torres-Córdoba, R., “A multi-configuration kinematic model for active drive/steer four-wheel robot structures,” Robotica 34(10), 2309–2329 (2016).
[12] Li, T. H. S., Lee, M. H., Lin, C. W., Liou, G. H. and Chen, W. C., “Design of Autonomous and Manual Driving System for 4WIS4WID Vehicle,” IEEE Access 4, 2256–2271 (2016).
[13] Qiu, Q., Fan, Z., Meng, Z., Zhang, Q., Cong, Y., Li, B., Wang, N. and Zhao, C., “Extended Ackerman Steering Principle for the coordinated movement control of a four wheel drive agricultural mobile robot,” Comput. Electron. Agric. 152, 40–50 (2018).
[14] Ye, Y., He, L. and Zhang, Q., “Steering Control Strategies for a Four-Wheel-Independent-Steering Bin Managing Robot,” IFAC-PapersOnLine 49(16), 39–44 (2016).
[15] Segura, C. C. G., Hernandez, J. C. M., Dutra, M. S., Mauledoux, M. M. and Avilés, O. F. S., “Ackerman Model for a Six-Wheeled Robot (ACM1PT),” Appl. Mech. Mater. 823, 441–446 (2016).
[16] Stania, M., “Analysis of the Kinematics of an Eight-Wheeled Mobile Platform,” Solid State Phenom. 198, 67–74 (2013).
[17] Menendez-Aponte, P., Kong, X. and Xu, Y., “An Approximated, Control Affine Model for a Strawberry Field Scouting Robot Considering Wheel–Terrain Interaction,” Robotica 37(9), 1545–1561 (2019).
[18] Kim, C., Ashfaq, A. M., Kim, S., Back, S., Kim, Y., Hwang, S., Jang, J. and Han, C., “Motion Control of a 6WD/6WS wheeled platform with in-wheel motors to improve its maneuverability,” Int. J. Control Autom. Syst. 13(2), 434–442 (2015).
[19] Kim, W. G., Kang, J. Y. and Yi, K., “Drive control system design for stability and maneuverability of a 6WD/6WS vehicle,” Int. J. Autom. Technol. 12(1), 67–74 (2011).
[20] Oftadeh, R., Aref, M. M., Ghabcheloo, R. and Mattila, J., “Bounded-velocity motion control of four wheel steered mobile robots,” IEEE, pp. 255–260 (2013).
[21] Aliseichik, A. P. and Pavlovsky, V. E., “The model and dynamic estimates for the controllability and comfortability of a multiwheel mobile robot motion,” Autom. Remote Control 76(4), 675–688 (2015).
[22] Yan, F., Li, B., Shi, W. and Wang, D., “Hybrid Visual Servo Trajectory Tracking of Wheeled Mobile Robots,” IEEE Access 6, 24291–24298 (2018).
[23] Wu, Y. and Wang, Y., “Asymptotic tracking control of uncertain nonholonomic wheeled mobile robot with actuator saturation and external disturbances,” Neural Comput. Appl. 32(12), 8735–8745 (2020).
[24] Bozek, P., Karavaev, Y. L., Ardentov, A. A. and Yefremov, K. S., “Neural network control of a wheeled mobile robot based on optimal trajectories,” Int. J. Adv. Robot. Syst. 17(2), 172988142091607 (2020).
[25] Chen, H., Yang, H. A., Wang, X. and Zhang, T., “Formation control for car-like mobile robots using front-wheel driving and steering,” Int. J. Adv. Robot. Syst. 15(3), 172988141877822 (2018).
[26] Chih-Jui, L., Su-Ming, H., Ying-Hao, W., Cheng-Hao, Y., Chien-Feng, H. and Li, T.-H. S., “Design and implementation of a 4WS4WD mobile robot and its control applications,” IEEE, pp. 235–240 (2013).
[27] Bo, H., “Precise navigation for a 4WS mobile robot,” J. Zhejiang Univ. Sci. 7(2), 185–193 (2006).
[28] Penglei, D. and Katupitiya, J., “Path planning and tracking of a 4WD4WS vehicle to be driven under force control,” IEEE, pp. 1709–1715 (2014).
[29] Li, Y., He, L. and Yang, L., “Path-following control for multi-axle car-like wheeled mobile robot with nonholonomic constraint,” pp. 268–273 (2013).
[30] Hamerlain, F., Floquet, T. and Perruquetti, W., “Experimental tests of a sliding mode controller for trajectory tracking of a car-like mobile robot,” Robotica 32(1), 63–76 (2014).
[31] Ghaffari, S. and Homaeinezhad, M. R., “Intelligent path following of articulated eight-wheeled mobile robot with nonholonomic constraints,” In: 2016 4th International Conference on Robotics and Mechatronics (ICROM), pp. 173–178 (2016).
[32] Ghaffari, S. and Homaeinezhad, M. R., “Autonomous path following by fuzzy adaptive curvature-based point selection algorithm for four-wheel-steering car-like mobile robot,” Proc. Inst. Mech. Eng., Part C 232(15), 2655–2665 (2018).
[33] Ragheb, H., El-Gindy, M. and Kishawy, H., “Torque Distribution Control for Multi-Wheeled Combat Vehicle,” (2014). [Online]. Available: https://doi.org/10.1115/DETC2014-34034
[34] El-Gindy, M. and D’Urso, P., “Development of control strategies of a multi-wheeled combat vehicle,” Int. J. Autom. Control 12, 325 (2018).
[35] Mohamed, A., El-Gindy, M., Ren, J. and Lang, H., “Optimal Collision-Free Path Planning for an Autonomous Multi-Wheeled Combat Vehicle,” p. V003T01A002 (2017).
[36] Mohamed, A., El-Gindy, M. and Ren, J., “Design and Performance Analysis of Robust H∞ Controller for a Scaled Autonomous Multi-Wheeled Combat Vehicle Heading Control,” (2018). [Online]. Available: https://doi.org/10.1115/DETC2018-85032
[37] Moreno Ramírez, C., Tomás-Rodríguez, M. and Evangelou, S. A., “Dynamic analysis of double wishbone front suspension systems on sport motorcycles,” Nonlinear Dyn. 91(4), 2347–2368 (2018).
[38] Siegwart, R., Nourbakhsh, I. R. and Scaramuzza, D., Introduction to Autonomous Mobile Robots, 2nd ed., Intelligent Robotics and Autonomous Agents (MIT Press, Cambridge, MA, 2011).
[39] Adamu, E. I., Afolayan, M. O., Umaru, S. and Garba, D. K., “The modelling and control of the drive system of an Ackermann Robot using GA optimization,” Niger. J. Technol. 37(4), 1008 (2018).
[40] Koubaa, A., Robot Operating System (ROS) – The Complete Reference (Springer International Publishing AG, Cham, 2016).
[41] Magyar, B., Tsiogkas, N., Deray, J., Pfeiffer, S. and Lane, D., “Timed-Elastic Bands for Manipulation Motion Planning,” IEEE Robot. Autom. Lett. 4(4), 3513–3520 (2019).
[42] Mao, R. and Ma, X., “Research on Path Planning Method of Coal Mine Robot to Avoid Obstacle in Gas Distribution Area,” J. Robot. 2016, 1–6 (2016).