
Self-adaptive differential evolution-based coati optimization algorithm for multi-robot path planning

Published online by Cambridge University Press:  03 February 2025

Lun Zhu
Affiliation:
College of Artificial Intelligence, Guangxi Minzu University, Nanning, China
Guo Zhou
Affiliation:
Department of Science and Technology Teaching, China University of Political Science and Law, Beijing, China
Yongquan Zhou*
Affiliation:
College of Artificial Intelligence, Guangxi Minzu University, Nanning, China Guangxi Key Laboratories of Hybrid Computation and IC Design Analysis, Nanning, China
Qifang Luo
Affiliation:
College of Artificial Intelligence, Guangxi Minzu University, Nanning, China Guangxi Key Laboratories of Hybrid Computation and IC Design Analysis, Nanning, China
Huajuan Huang
Affiliation:
College of Artificial Intelligence, Guangxi Minzu University, Nanning, China Guangxi Key Laboratories of Hybrid Computation and IC Design Analysis, Nanning, China
Xiuxi Wei
Affiliation:
College of Artificial Intelligence, Guangxi Minzu University, Nanning, China Guangxi Key Laboratories of Hybrid Computation and IC Design Analysis, Nanning, China
*
Corresponding author: Yongquan Zhou; Email: zhouyongquan@gxun.edu.cn

Abstract

The multi-robot path planning problem is NP-hard. The coati optimization algorithm (COA) is a novel metaheuristic algorithm that has been successfully applied in many fields. To solve multi-robot path planning optimization problems, we embed two differential evolution (DE) strategies into COA and propose a self-adaptive differential evolution-based coati optimization algorithm (SDECOA). The proposed algorithm adaptively selects the strategy best suited to the problem at hand, effectively balancing global and local search capabilities. To validate the algorithm's effectiveness, we tested it on the CEC2022 benchmark functions and 48 CEC2020 real-world constrained optimization problems. In the latter experiments, the proposed algorithm achieved the best overall results compared to the top five algorithms from the CEC2020 competition. Finally, we applied SDECOA to the multi-robot online path planning problem. In extreme environments with multiple static and dynamic obstacles of varying sizes, SDECOA consistently outperformed several classical and state-of-the-art algorithms, improving on DE and COA by an average of 46% and 50%, respectively. Extensive experimental testing confirms that the proposed algorithm is highly competitive. The source code of the algorithm is accessible at: https://ww2.mathworks.cn/matlabcentral/fileexchange/164876-HDECOA.

Type
Research Article
Copyright
© The Author(s), 2025. Published by Cambridge University Press

1. Introduction

Robot systems, which accomplish desired tasks through close collaboration between one or more robots and humans, have become an indispensable part of development in the agriculture, industry, and service sectors. Within this trend, path planning has become one of the most crucial issues in robot systems. Multi-robot path planning is an NP-hard problem: in a shared working environment, the robots must move from their initial positions to their respective destinations at minimum cost while avoiding collisions with other robots or obstacles along the way. The problem involves multiple constraints, particularly in dynamically changing environments. Previous work has proposed several solutions for collision avoidance and coordination among multiple robots [Reference Olfati-Saber1]. Classical methods for robot path planning include potential field techniques [Reference Tournassoud2], bounding box representations [Reference Simeon, Leroy and Lauumond3], breadth-first search and depth-first search [Reference Žunić, Djedović and Žunić4], the A* algorithm [Reference Fu, Chen, Zhou, Zheng, Wei, Dai and Pan5], and rapidly exploring random tree methods [Reference Zhang, Xiong, Li, Du and Zhao6]. However, these methods often fail to scale as the number of robots or obstacles increases. Current researchers mainly use artificial neural networks and metaheuristic algorithms to solve such problems. Metaheuristic algorithms mimic biological behaviors or natural phenomena as heuristics and fall into four main categories.
The first type is population-based metaheuristic algorithms, exemplified by particle swarm optimization (PSO) [Reference Kennedy and Russell7], which simulates the foraging behavior of bird flocks, and ant colony optimization (ACO) [Reference Dorigo, Mauro and Thomas8], which mimics the process where ant populations release pheromones along their paths during foraging to find optimal routes based on pheromone concentration. Similar algorithms include artificial bee colony (ABC) [Reference Karaboga and Bahriye9]. The second type is evolutionary-based metaheuristic algorithms, such as genetic algorithm (GA) [Reference Holland10] and differential evolution (DE) [Reference Qin, Huang and Suganthan11], which emulate the process of biological evolution where the most optimal genetic individuals are preserved over generations. The third type is based on chemical and physical phenomena, grounded in theoretical foundations, with representative algorithms including gravitational search algorithm (GSA) [Reference Rashedi, Nezamabadi-Pour and Saryazdi12] and Archimedes optimization algorithm (AOA) [Reference Hashim, Hussain, Houssein, Mabrouk and Al-Atabany13]. The fourth type is inspired by human social behaviors and can draw inspiration from various aspects of daily life. Examples include lungs performance-based optimization (LPO) [Reference Ghasemi, Zare, Zahedi, Trojovský, Abualigah and Trojovská14], which simulates human lung function activity, and football team training algorithm (FTTA) [Reference Tian and Mei15], which mimics the training process of a football team. However, due to the homogeneity of their update strategies, these algorithms tend to fall into local optima to some degree when solving problems, and their parameter settings are relatively uniform, making them unsuitable for solving diverse problems.

This article studies the path planning problem based on metaheuristic algorithms. According to the No Free Lunch theorem, no single algorithm can solve all problems optimally. Therefore, researchers often integrate multiple strategies and algorithms to improve the performance of path planning algorithms. Geng et al. [Reference Geng, Sun, Wang, Bu, Liu, Li and Zhao16] combined the sparrow search algorithm (SSA) with a reverse learning strategy to solve a robot path planning problem on a grid map. Parhi et al. [Reference Parhi17] merged the classical method of linear regression (LR) with GSA, called RGSA, incorporating multiple chaos strategies to obtain optimal paths in multi-humanoid (Nao robot) movement problems. Li et al. [Reference Li, Zhao, Chen, Xiong and Liu18] proposed a hybrid algorithm based on an improved GA and the dynamic window approach for mobile robot path planning. Xu et al. [Reference Xu, Maoyong and Baoye19] combined a new fourth-order Bezier transition curve with an improved PSO algorithm, proposing a novel method for smooth path planning of mobile robots. Nazarahari et al. [Reference Nazarahari, Esmaeel and Samira20] proposed an innovative artificial potential field (APF) algorithm to find all feasible paths between start and end points in a discrete grid environment and developed an enhanced GA to improve the initial paths and find the optimal path. Zhang et al. [Reference Zhang, Xu, Zhan and Han21] proposed a hybrid of GA and the firefly algorithm to enhance the responsiveness and computational capability of mobile robots during movement. Dai et al. [Reference Dai, Long, Zhang and Gong22] introduced an improved ACO that exploits characteristics of the A* algorithm and the MAX-MIN ant system to achieve efficient search for mobile robot path planning in complex maps. Chen et al. [Reference Chen and Jie23] addressed path planning for mobile robots in known environments, proposing a grid-based hybrid APF and ACO method. Zhang et al. [Reference Zhang, Ning, Li, l. Pan and Zhang24] proposed a turning point-based grey wolf optimizer (GWO) for the path planning of patrol robots. The algorithm uses roulette wheel selection and crossover mutation to broaden the search scope of the initial population; additionally, its convergence factor function varies with the number of obstacles, enhancing performance. Liu et al. [Reference Liu, Liu and Xiao25] integrated the A* algorithm with an improved GWO to propose A*-IGWO for parking lot path planning. The algorithm constructs a minimum cost equation using A* on top of the population updating mechanism of the IGWO, fully leveraging the advantages of both algorithms. Dai et al. [Reference Dai, Li, Chen, Nie, Rui and Zhang26] proposed an improved sparrow search algorithm (ISSA) for the path planning of cellular robots on large-scale three-dimensional trusses. Julius et al. [Reference Fusic and Sitharthan27] proposed an improved self-adaptive learning PSO (ISALPSO) algorithm for mobile robot path planning in 2D lidar maps. The algorithm fine-tunes the lidar information using a binary occupancy grid method during the robot's movement, thereby adaptively adjusting the parameters of ISALPSO.

The coati optimization algorithm (COA) [Reference Dehghani, Montazeri, Trojovská and Trojovský28] is an efficient, novel metaheuristic proposed by Dehghani et al. in 2022 and widely applied to global optimization problems. However, due to its inherent mechanism, it tends to converge prematurely on high-dimensional problems. Researchers have proposed several improvements to COA. Hashim et al. [Reference Hashim, Houssein, Mostafa, Hussien and Helmy29] introduced an adaptive mutation strategy for COA to solve feature extraction and global optimization problems. Baş et al. [Reference Baş and Gülnur30] proposed an enhanced COA for large-scale high-dimensional big data optimization problems (BOP). Yildizdan et al. [Reference Baş and Gülnur31] used a transfer function to transform the continuous COA into a binary optimization algorithm (BinCOA) for the Knapsack Problem (KP) and the Uncapacitated Facility Location Problem (UFLP). Hasanien et al. [Reference Hasanien, Alsaleh, Alassaf and Alateeq32] introduced an improved COA to solve the Probabilistic Optimal Power Flow (POPF) problem. Jia et al. [Reference Jia, Shi, Wu, Rao, Zhang and Abualigah33] proposed an improved COA based on a sound search envelope strategy to solve six engineering application problems. Although DE and COA have a wide range of applications and unique advantages, both exhibit poor solution quality on high-dimensional and multimodal problems, making them susceptible to local optima. To overcome these limitations, this paper combines cross-update strategies from several DE variants with the original COA and proposes a new hybrid algorithm, the self-adaptive differential evolution-based coati optimization algorithm (SDECOA). When faced with different problems, SDECOA adaptively adjusts how often each update strategy is used, fully exploiting the update mechanisms of both algorithms to enhance the algorithm's global and local search capabilities.

The contributions of this paper are summarized as follows:

  1. Combining the update strategies from variants of DE algorithms with COA, an SDECOA is proposed, enhancing COA's global and local search capabilities.

  2. The crossover probability CR in the DE strategy adapts dynamically with the algorithm's iterations, increasing convergence speed.

  3. The proposed algorithm successfully solved 48 real-world constrained optimization problems from various fields and achieved superior results compared to the top algorithms in the CEC2020 competition.

  4. The multi-robot path planning problem is NP-hard, and the presence of multiple moving obstacles further complicates it. The proposed algorithm effectively addresses this issue.

The organization of this paper is as follows: Section 2 discusses the relevant definitions of DE, COA, real-world constrained optimization problems, and online multi-robot path planning problems. Section 3 provides a detailed explanation of SDECOA’s definition. Section 4 showcases the experimental results of this study. Section 5 is the summary and future research objectives of this paper.

2. Preliminaries

In this section, COA and DE algorithm are introduced separately, along with the definition of real-world constrained engineering optimization problems and the definition of multi-robot path planning problems.

2.1. Coati optimization algorithm (COA)

This section describes the mathematical model of the original COA. The COA is a novel swarm intelligence algorithm proposed in 2022, which primarily simulates the behavior of coatis hunting iguanas. Initially, half of the population will climb trees to approach their food source, the iguana, and this behavior is defined by Eq. (1):

(1) \begin{align} Xnew_{i}\colon xnew_{i,j}=x_{i,j}+r1\cdot \left(Ig_{j}-I\cdot x_{i,j}\right),i=1,2,3,\ldots \left\lfloor \frac{N}{2}\right\rfloor \end{align}

In the above equation, $x_{i,j}$ represents the position of the i-th individual in the j-th dimension of the solution space, $Ig$ denotes the position of the best solution in the current population, $N$ is the population size, $r_{1}$ is a random number in [0, 1], and $I$ takes a random value of either 1 or 2.

Faced with the siege of coatis, the iguana will randomly drop to the ground. Eq. (2) expresses this behavior in the solution space. Meanwhile, the other half of the coatis will search for their prey, as represented by Eq. (3):

(2) \begin{align} Ig^{G}\colon Ig_{j}^{G} & = lb_{j}+r_{2}\cdot \left(ub_{j}-lb_{j}\right),\ j=1,2,3,\ldots m \\[-4pt] \nonumber \end{align}
(3) \begin{align} Xnew_{i}\colon xnew_{i,j} & = \left\{\begin{array}{l} x_{i,j}+r_{3}\cdot \left(Ig_{j}^{G}-I\cdot x_{i,j}\right),F_{Ig}^{G}\lt F_{i}\\[3pt] x_{i,j}+r_{3}\cdot \left(I\cdot x_{i,j}-Ig_{j}^{G}\right),else \end{array}\right.,\ i=\left\lfloor \frac{N}{2}\right\rfloor +1,\ldots N \\[8pt] \nonumber \end{align}

where $Ig_{j}^{G}$ represents the random dropping position of the iguana, with $lb$ and $ub$ denoting the lower and upper bounds of the solution space, respectively. $r_{2}$ and $r_{3}$ are random numbers in [0, 1]. In Eq. (3), $F_{Ig}^{G}$ is the fitness value of $Ig^{G}$ and $F_{i}$ is the current fitness value of the i-th individual. Subsequently, if the fitness value $Fnew_{i}$ of the newly generated individual is better than that of the current individual, its position is replaced; otherwise, the original position is retained, as expressed in Eq. (4):

(4) \begin{align} X_{i}=\left\{\begin{array}{l} Xnew_{i},Fnew_{i}\lt F_{i}\\[3pt] X_{i},else \end{array}\right. \end{align}

After locating the iguana, all the coatis will slowly surround their prey. This biological characteristic is represented by equations (5) and (6):

(5) \begin{align} lb_{j}^{L} & =\frac{lb_{j}}{t},ub_{j}^{L}=\frac{ub_{j}}{t},t=1,2,3,\ldots T \\[-4pt] \nonumber \end{align}
(6) \begin{align} Xnew_{i}\colon xnew_{i,j} & = x_{i,j}+\left(1-2r_{4}\right)\cdot \left(lb_{j}^{L}+r_{5}\cdot \left(ub_{j}^{L}-lb_{j}^{L}\right)\right),i=1,2,3,\ldots N \\[8pt] \nonumber \end{align}

where $t$ denotes the current iteration number, $T$ the maximum iteration number, and $r_{4}$ and $r_{5}$ are random numbers in [0, 1]. Subsequently, the new position of the current individual is again accepted or rejected using Eq. (4). The pseudocode of the basic framework of COA is given in Algorithm 1.

Algorithm 1 Pseudocode of COA.
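For illustration, the behavior of Eqs. (1)–(6) can be sketched in Python/NumPy as follows. This is a minimal reading of the algorithm, not the authors' MATLAB implementation: clipping new positions to the bounds is an added assumption, the greedy acceptance of Eq. (4) is applied after every phase, and the inner difference in Eq. (6) is taken as $ub_{j}^{L}-lb_{j}^{L}$.

```python
import numpy as np

def coa_iteration(X, fit, f, lb, ub, t, T, rng):
    """One COA iteration per Eqs. (1)-(6). X: (N, D) population,
    fit: (N,) fitness values, f: objective, lb/ub: (D,) bounds,
    t/T: current/maximum iteration."""
    N, D = X.shape
    half = N // 2
    Ig = X[np.argmin(fit)]                     # best solution (the "iguana")

    # Phase 1a, Eq. (1): first half of the coatis climbs toward the iguana.
    for i in range(half):
        I = rng.integers(1, 3)                 # I in {1, 2}
        Xnew = np.clip(X[i] + rng.random(D) * (Ig - I * X[i]), lb, ub)
        fn = f(Xnew)
        if fn < fit[i]:                        # greedy replacement, Eq. (4)
            X[i], fit[i] = Xnew, fn

    # Phase 1b, Eqs. (2)-(3): iguana drops to a random ground position.
    IgG = lb + rng.random(D) * (ub - lb)
    fIgG = f(IgG)
    for i in range(half, N):
        I = rng.integers(1, 3)
        r3 = rng.random(D)
        if fIgG < fit[i]:
            Xnew = X[i] + r3 * (IgG - I * X[i])
        else:
            Xnew = X[i] + r3 * (I * X[i] - IgG)
        Xnew = np.clip(Xnew, lb, ub)
        fn = f(Xnew)
        if fn < fit[i]:
            X[i], fit[i] = Xnew, fn

    # Phase 2, Eqs. (5)-(6): local bounds shrink with t; all coatis encircle.
    lbL, ubL = lb / t, ub / t
    for i in range(N):
        r4, r5 = rng.random(), rng.random(D)
        Xnew = np.clip(X[i] + (1 - 2 * r4) * (lbL + r5 * (ubL - lbL)), lb, ub)
        fn = f(Xnew)
        if fn < fit[i]:
            X[i], fit[i] = Xnew, fn
    return X, fit
```

Because replacement is greedy at every phase, the best fitness in the population never worsens across iterations.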

2.2. Differential evolution (DE)

This section describes the mathematical model of original DE. DE is a well-known classic metaheuristic algorithm, mainly consisting of three evolutionary processes: mutation, crossover, and selection.

2.2.1. Mutation

In DE, the mutation process involves randomly selecting two individuals and using the difference of their position vectors as the step size for updating the target carrier position, as represented by Eq. (7):

(7) \begin{align} y_{i,j}=x_{i,j}+F\cdot \left(x_{R1,j}-x_{R2,j}\right),i=1,2,3,\ldots, N \end{align}

In the above equation, $x_{i}$ represents the i-th individual in the population, $x_{R1}$ and $x_{R2}$ are two different individuals randomly selected from the population, $j$ denotes the j-th dimension of the solution, and $F$ is the scaling factor.

2.2.2. Crossover

The crossover process exchanges the j-th dimension component between the mutated individual $y_{i}$ and the target vector $x_{i}$, generating a trial individual $z_{i}$, as represented by Eq. (8):

(8) \begin{align} \mathrm{Z}_{i}\colon z_{i,j}=\left\{\begin{array}{l} y_{i,j},r\lt CR||j_{r}=j\\[3pt] x_{i,j},else \end{array}\right. \end{align}

where $CR$ denotes the crossover probability, $j_{r}$ is a randomly selected dimension index between 1 and the problem dimension, and $r$ is a random number in [0, 1].

2.2.3. Selection

Similar to Eq. (4), the fitness values of the target vector and the trial individual are compared, and the better one is chosen as the position vector for the next generation, achieving the goal of evolution:

(9) \begin{align} X_{i}=\left\{\begin{array}{l} \mathrm{Z}_{i},F_{{z_{i}}}\lt F_{i}\\[3pt] X_{i},else \end{array}\right. \end{align}

where $F_{{z_{i}}}$ represents the fitness value of the trial individual and $F_{i}$ the fitness value of the target vector.
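The three DE steps of Eqs. (7)–(9) can be sketched as one generation in Python/NumPy. Note that Eq. (7) uses the target vector itself as the mutation base, which this sketch follows; the default values of $F$ and $CR$ here are illustrative assumptions.

```python
import numpy as np

def de_step(X, fit, f, F=0.8, CR=0.9, rng=None):
    """One DE generation per Eqs. (7)-(9): mutation around the target
    vector, binomial crossover, greedy selection."""
    rng = rng or np.random.default_rng()
    N, D = X.shape
    for i in range(N):
        # Mutation, Eq. (7): difference of two distinct random individuals.
        R1, R2 = rng.choice([k for k in range(N) if k != i], 2, replace=False)
        y = X[i] + F * (X[R1] - X[R2])
        # Crossover, Eq. (8): binomial, with dimension j_r always taken from y.
        jr = rng.integers(D)
        mask = rng.random(D) < CR
        mask[jr] = True
        z = np.where(mask, y, X[i])
        # Selection, Eq. (9): keep the better of target and trial vector.
        fz = f(z)
        if fz < fit[i]:
            X[i], fit[i] = z, fz
    return X, fit
```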

2.3. Real-world constrained optimization problems

This section introduces the basic definition of constrained optimization problems, which typically refer to minimization problems under several equality and inequality constraints. The real-world constrained optimization problems in this paper are all taken from CEC2020 and cover fields such as industrial chemical processes, synthesis and design, mechanical engineering, power systems, power electronics, and livestock feed optimization. Their definitions follow literature [Reference Kumar, Wu, Ali, Mallipeddi, Suganthan and Das34], and all conform to the following formulation.

(10) \begin{align} \textrm{Minimize} \quad & \mathrm{f}\left(X\right),X=\left(x_{1},x_{2},\ldots, x_{n}\right)\\[3pt] \nonumber\textrm{Subject to:} \quad & \begin{array}{l} g_{\mathrm{i}}(X)\leq 0,i=1,\ldots, n\\[3pt] h_{j}(X)=0,j=n+1,\ldots, m \end{array} \end{align}

In the above equation, $X$ denotes the solution vector of the problem, $g_{i}$ are the inequality constraints, $n$ is the number of inequality constraints, $h_{j}$ are the equality constraints, and $m$ is the total number of constraints.
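A standard way to feed problems of the form of Eq. (10) to an unconstrained metaheuristic is a penalty function. Section 4.3 states that a dynamic adaptive penalty is used; the fixed-weight variant below is only an illustrative sketch of the idea, with the weight `rho` and the equality tolerance `eps` chosen arbitrarily.

```python
def penalized(f, gs, hs, eps=1e-4, rho=1e6):
    """Wrap the constrained problem of Eq. (10) as an unconstrained one
    using a static quadratic penalty. gs: callables with g_i(x) <= 0,
    hs: callables with h_j(x) = 0, relaxed to |h_j(x)| <= eps."""
    def F(x):
        viol = sum(max(0.0, g(x)) ** 2 for g in gs)
        viol += sum(max(0.0, abs(h(x)) - eps) ** 2 for h in hs)
        return f(x) + rho * viol
    return F
```

Feasible points are evaluated by the raw objective, while any constraint violation dominates the fitness, steering the population back toward the feasible region.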

2.4. Multi-robot online path planning problem

This article studies the multi-robot path planning problem with multiple static and dynamic obstacles; therefore, it is necessary to consider the safe distance between robots, between robots and static obstacles, and between robots and dynamic obstacles. The mathematical model of this problem is provided in literature [Reference Akay and Mustafa35], where each robot has its own starting point and destination. During the robots' movement, there are static and dynamic obstacles of various sizes and shapes. All robots must not only avoid these obstacles but also avoid colliding with each other, reaching their destinations step by step at minimum cost.

Minimize,

(11) \begin{align} Fit=F_{1}+F_{2}+F_{3}+F_{4} \end{align}

where $F_{1}$ is the path-length term, $F_{2}$ penalizes collisions with static obstacles, $F_{3}$ penalizes collisions with dynamic obstacles, and $F_{4}$ penalizes collisions with other robots. $F_{1}$ can be expressed as Eq. (12):

(12) \begin{align} F_{1}=\sum _{i=1}^{NR}\left(f_{i}+g_{i}\right) \end{align}

where

(13) \begin{align} f_{i}=\sqrt{\left(x_{i}^{n}-x_{i}^{c}\right)^{2}+\left(y_{i}^{n}-y_{i}^{c}\right)^{2}} \\[-4pt] \nonumber \end{align}
(14) \begin{align} g_{i}=\sqrt{\left(x_{i}^{n}-x_{i}^{g}\right)^{2}+\left(y_{i}^{n}-y_{i}^{g}\right)^{2}} \\[8pt] \nonumber \end{align}

where $NR$ is the number of robots, ($x_{i}^{n}, y_{i}^{n}$) is the candidate next position of the i-th robot, ($x_{i}^{c}, y_{i}^{c}$) its current position, and ($x_{i}^{g}, y_{i}^{g}$) its destination coordinates. $F_{2}$ can be expressed as Eq. (15):

(15) \begin{align} F_{{_{2}}}=\left\{\begin{array}{l} \varepsilon, d_{i}^{s}\leq d_{s}\\[3pt] 0,else \end{array}\right. \end{align}

where $\varepsilon$ is a large penalty value, $d_{s}$ is the safety distance between any two objects, and $d_{i}^{s}$ is the sum of distances from all static obstacles to the i-th robot, as given by Eq. (16):

(16) \begin{align} d_{i}^{s}=\sum _{i=1}^{NR}\sum _{j=1}^{NS}\sqrt{\left(x_{i}^{n}-x_{j}^{s}\right)^{2}+\left(y_{i}^{n}-y_{j}^{s}\right)^{2}} \end{align}

where NS is the number of static obstacles and ( $x_{j}^{s}, y_{j}^{s}$ ) is the coordinate position of the j-th static obstacle. F 3 can be expressed as Eq. (17):

(17) \begin{align} F_{{_{3}}}=\left\{\begin{array}{l} \varepsilon, d_{i}^{D}\leq d_{s}\\[3pt] 0,else \end{array}\right. \end{align}

where $d_{i}^{D}$ is the sum of distances from all dynamic obstacles to the i-th robot, as obtained from Eq. (18):

(18) \begin{align} d_{i}^{D}=\sum _{i=1}^{NR}\sum _{j=1}^{ND}\sqrt{\left(x_{i}^{n}-x_{j}^{D}\right)^{2}+\left(y_{i}^{n}-y_{j}^{D}\right)^{2}} \end{align}

where ND is the number of dynamic obstacles, ( $x_{j}^{D}, y_{j}^{D}$ ) is the coordinate position of the j-th dynamic obstacle, and the movement of dynamic obstacles at each step is determined by equations (19) and (20):

(19) \begin{align} x_{j}^{D}=x_{j}^{D}+v_{j}^{D}\cos \left(\alpha _{j}\right) \\[-4pt] \nonumber \end{align}
(20) \begin{align} y_{j}^{D}=y_{j}^{D}+v_{j}^{D}\sin \left(\alpha _{j}\right) \\[8pt] \nonumber \end{align}

In the above equations, $v_{j}^{D}$ is the movement speed of dynamic obstacle j and $\alpha _{j}$ is its heading angle relative to its target position. $F_{4}$ can be expressed as Eq. (21):

(21) \begin{align} F_{{_{4}}}=\left\{\begin{array}{l} \varepsilon, d_{i}^{R}\leq d_{s}\\[3pt] 0,else \end{array}\right. \end{align}

where $d_{i}^{R}$ is the sum of distances between different robots and ( $x_{j}^{R}, y_{j}^{R}$ ) is the coordinate position of the j-th robot, as obtained from Eq. (22):

(22) \begin{align} d_{i}^{R}=\sum _{i=1}^{NR-1}\sum _{j=i+1}^{NR}\sqrt{\left(x_{i}^{n}-x_{j}^{R}\right)^{2}+\left(y_{i}^{n}-y_{j}^{R}\right)^{2}} \end{align}
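The fitness of Eqs. (11)–(22) can be sketched as follows. One caveat: as printed, Eqs. (16), (18), and (22) sum distances over all pairs before comparing to $d_{s}$; this sketch instead penalizes whenever any single distance falls below $d_{s}$, which we take to be the intended safety check. All function and parameter names are ours.

```python
import numpy as np

def mrpp_fitness(next_pos, cur_pos, goals, static_obs, dyn_obs,
                 d_s=1.0, eps=1e6):
    """Fitness of candidate next positions, Eq. (11).
    next_pos/cur_pos/goals: (NR, 2); static_obs: (NS, 2); dyn_obs: (ND, 2)."""
    f = np.linalg.norm(next_pos - cur_pos, axis=1)      # Eq. (13)
    g = np.linalg.norm(next_pos - goals, axis=1)        # Eq. (14)
    F1 = np.sum(f + g)                                  # Eq. (12)

    def obstacle_penalty(points):
        # Penalize if any robot comes within d_s of any point, Eqs. (15)-(18).
        if len(points) == 0:
            return 0.0
        d = np.linalg.norm(next_pos[:, None, :] - points[None, :, :], axis=2)
        return eps if np.any(d <= d_s) else 0.0

    F2 = obstacle_penalty(static_obs)                   # static obstacles
    F3 = obstacle_penalty(dyn_obs)                      # dynamic obstacles

    # Eqs. (21)-(22): pairwise robot-robot distances, i < j only.
    NR = len(next_pos)
    d_r = np.linalg.norm(next_pos[:, None, :] - next_pos[None, :, :], axis=2)
    iu = np.triu_indices(NR, k=1)
    F4 = eps if NR > 1 and np.any(d_r[iu] <= d_s) else 0.0
    return F1 + F2 + F3 + F4                            # Eq. (11)
```

Dynamic obstacles would be advanced between planning steps via Eqs. (19)–(20) before this fitness is evaluated.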

3. The proposed SDECOA algorithm

This section elaborates the mathematical definition of SDECOA. In the original COA, there is an imbalance between global and local search capability. To overcome this deficiency, we propose SDECOA. For optimization problems with few extreme points, a mutation operator with stronger local search capability should be used; for problems with many extreme points, an operator with stronger global search capability should be employed. A single mutation operator can hardly achieve this balance. In SDECOA, two DE variants are introduced into the update strategy and are invoked through a roulette wheel method. Initially, all strategies have the same probability of being used. During the iteration process, strategies that yield better results are gradually assigned higher usage probabilities through learning. Learning is driven by points: a strategy that produces a good result earns a point. As a result, satisfactory results can be achieved on different problems, overcoming the imbalance between local and global search in COA.

3.1. Crossover

In traditional DE algorithms, the crossover probability CR is a fixed value, with literature [Reference Draa, Samira and Imene36] suggesting values in [0, 1]. When CR approaches zero, individuals change little between adjacent generations, leading to many similar individuals in the population and potentially trapping the algorithm in local optima. Conversely, when CR is close to 1, individuals' positions fluctuate greatly between updates, hindering efficient convergence. To address these issues, this paper proposes an adaptive dynamic adjustment, calculated as follows:

(23) \begin{align} CR\left(\mathrm{t}\right)=CR_{\min }+\left(CR_{\max }-CR_{\min }\right)\cdot \left(1-\frac{t}{T}\right) \end{align}

where $CR_{\max }$ equals 0.95, $CR_{\min }$ equals 0.05, t represents the current iteration number, and T is the maximum number of iterations.
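Eq. (23) is a linear decay from $CR_{\max }$ toward $CR_{\min }$ over the run, sketched directly as:

```python
def adaptive_cr(t, T, cr_min=0.05, cr_max=0.95):
    """Eq. (23): CR decays linearly from cr_max toward cr_min as t -> T,
    favoring exploration early and exploitation late."""
    return cr_min + (cr_max - cr_min) * (1.0 - t / T)
```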

3.2. Variant update strategies of DE

The first introduced update strategy was proposed in literature [Reference Das and Ponnuthurai37]; its second branch is inspired by the GWO [Reference Kumar, Wu, Ali, Mallipeddi, Suganthan and Das34]. It is represented by Eq. (24):

(24) \begin{align} Xnew_{i}\colon xnew_{i,j}=\left\{\begin{array}{l} x_{i,j}+FF\cdot \left(x_{best,j}-x_{i,j}+x_{R1,j}-x_{R2,j}\right);\ if\ rand\lt CR\\[3pt] \frac{1}{3}\left(x_{best,j}+x_{second,j}+x_{third,j}\right),else \end{array}\right. \end{align}

In the above equation, $x_{best}$, $x_{second}$, and $x_{third}$ are the current population's top three individuals, $FF$ is the scaling factor with a value of 0.8, and $R1$ and $R2$ index randomly selected individuals different from the i-th individual.

The second introduced update strategy was proposed in literature [Reference Wang, Zixing and Qingfu39] and is represented by Eq. (25):

(25) \begin{align} Xnew_{i}\colon xnew_{i,j}=x_{i,j}+L\cdot \left(x_{R1,j}-x_{i,j}\right)+FF\cdot \left(x_{R2,j}-x_{R3,j}\right) \end{align}

where L is a random number between [0, 1], and R1, R2, and R3 are also three distinct random individuals.
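The two variant strategies of Eqs. (24) and (25) can be sketched as below. One assumption: `rand` in Eq. (24) is drawn once per individual here, though the per-dimension form of $xnew_{i,j}$ would also admit a per-component draw; the function names are ours.

```python
import numpy as np

def strategy_eq24(X, fit, i, CR, FF=0.8, rng=None):
    """Eq. (24): best-guided DE step with probability ~CR, otherwise
    the mean of the three best individuals (the GWO-inspired branch)."""
    rng = rng or np.random.default_rng()
    N = len(X)
    order = np.argsort(fit)            # ascending: order[0] is the best
    if rng.random() < CR:
        R1, R2 = rng.choice([k for k in range(N) if k != i], 2, replace=False)
        return X[i] + FF * (X[order[0]] - X[i] + X[R1] - X[R2])
    return (X[order[0]] + X[order[1]] + X[order[2]]) / 3.0

def strategy_eq25(X, i, FF=0.8, rng=None):
    """Eq. (25): current-to-rand step with a random weight L in [0, 1]."""
    rng = rng or np.random.default_rng()
    N = len(X)
    R1, R2, R3 = rng.choice([k for k in range(N) if k != i], 3, replace=False)
    L = rng.random()
    return X[i] + L * (X[R1] - X[i]) + FF * (X[R2] - X[R3])
```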

3.3. Algorithm implementation

In the proposed algorithm's exploration phase, Eqs. (1), (24), and (25) are used at random, with their probabilities determined by Eq. (26). During the invocation process, a counter records which update strategy each individual used. If an individual achieves a better result in this iteration, the score of the strategy it used is incremented. The probabilities for the next roulette wheel selection are then calculated from the total score of each strategy in this iteration:

(26) \begin{align} p_{j}\left(t+1\right)=\frac{S_{j}\left(t\right)}{\sum _{i=1}^{SN}S_{i}\left(t\right)},\,j\in \left\{1,2,\ldots, SN\right\} \end{align}

where $p_{j}(t+1)$ is the probability of strategy j being used in the next iteration, $S_{j}(t)$ is the score of strategy j in the current iteration, and SN is the number of strategies, set to 3 in this paper. The update strategy for each individual is determined by roulette wheel selection. The pseudocode and flow chart of SDECOA are shown in Algorithm 2 and Figure 1, respectively.

Algorithm 2 Pseudocode of SDECOA.

Figure 1. Flow chart of SDECOA.
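The score-driven roulette wheel of Eq. (26) can be sketched as follows. The additive floor is our assumption: Eq. (26) alone would permanently eliminate a strategy whose score reaches zero, so a small constant keeps every strategy selectable.

```python
import numpy as np

def update_probs(scores, floor=1.0):
    """Eq. (26): selection probability of each strategy is proportional
    to its score this iteration; floor keeps zero-score strategies alive."""
    s = np.asarray(scores, dtype=float) + floor
    return s / s.sum()

def pick_strategy(probs, rng):
    """Roulette wheel: draw one strategy index according to probs."""
    return int(rng.choice(len(probs), p=probs))
```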

3.4. Time complexity analysis

The operation of SDECOA includes population initialization, fitness calculation, population position update, individual probability update, and strategy selection; analyzing each process yields the overall time complexity. Assuming the population size is N, the number of iterations is T, and the problem dimension is D, the time complexity of population initialization is O(ND). In each iteration, the position update costs O(ND), fitness calculation and comparison O(ND), and probability update with strategy selection O(ND). Therefore, the time complexity of SDECOA is O(ND + T × 3ND) ≈ O(TND).

4. Experiment results and analysis

4.1. Experimental configuration and design

The programming language used in this experiment is MATLAB, with MATLAB R2021a as the development environment. The computer runs 64-bit Windows 10 with 8 GB of RAM and a processor with a base frequency of 2.10 GHz. To demonstrate the effectiveness of the improved algorithm, it was first compared with several classic and advanced algorithms on the 20-dimensional CEC2022 benchmark functions. To further validate its performance, SDECOA was employed to solve 48 real-world constrained optimization problems from CEC2020 and compared with several algorithms that won in the CEC2020 competition. Finally, the algorithm was applied to the multi-robot online path planning problem.

4.2. SDECOA for CEC2022 benchmark functions

Solving benchmark test functions is a common way to assess algorithm performance, and this section presents the experimental results of SDECOA on this test set. The CEC2022 benchmark functions are the latest test functions; their basic information is shown in Table 1. The suite spans three dimensions (dim = {2, 10, 20}), with function types comprising a unimodal function (F1), multimodal functions (F2–F5), hybrid functions (F6–F8), and composite functions (F9–F12). To validate the algorithm's effectiveness, we compare SDECOA with the original DE and COA. Furthermore, to evaluate its performance, we also test it against well-known classical algorithms and recent advanced algorithms. The algorithms involved in testing the CEC2022 benchmark functions include simulated annealing (SA) [Reference Kirkpatrick, Gelatt and Vecchi40], PSO [Reference Kennedy and Russell7], GWO [Reference Mirjalili, Mirjalili, Lewis and wolf optimizer38], the sine-cosine algorithm (SCA) [Reference Mirjalili41], the whale optimization algorithm (WOA) [Reference Mirjalili and Andrew42], the Harris hawks optimizer (HHO) [Reference Heidari, Mirjalili, Faris, Aljarah, Mafarja and Chen43], SSA [Reference Xue and Bo44], the modified adaptive sparrow search algorithm (MASSA) [Reference Mirjalili41], the self-adaptive differential sine-cosine algorithm (sdSCA) [Reference Akay and Mustafa35], and the hybrid algorithm of differential evolution and flower pollination (HADEFP) [45]. In this set of experiments, all algorithms were independently run 25 times. Except for SA, the iteration count for all algorithms was set to 1000, with a population size of 50. The other parameters of each algorithm are shown in Table 2.

Table 1. Basic information of CEC2022 benchmark test functions.

Table 2. Parameter settings of different algorithms.

As shown in Table 3, the means and standard deviations of different algorithms are recorded. In the experimental results across all test functions, the proposed algorithm outperforms the original COA and DE by a significant margin, particularly achieving the best results in solving F1, F6, F7, F8, and F9. Among all the compared algorithms, SDECOA proves to be the most competitive. Table 3 also presents the rankings of all algorithms based on the Friedman rank-sum test, with the data confirming that the proposed algorithm achieved the best performance in solving the CEC2022 test functions.

Table III. Experimental results of different algorithms on CEC2022 benchmark functions in 20 dimensions.

The convergence curves for the CEC2022 test functions are shown in Figure 2. From these curves, it is evident that SDECOA improves substantially on both optimization capability and convergence speed relative to DE and COA. Compared with the other algorithms, SDECOA shows the fastest convergence and the best final results on most test functions. The boxplots for all test functions are shown in Figure 3; SDECOA attains better medians and lower variance on most functions and is therefore more stable, demonstrating the superior robustness of the proposed algorithm.

Figure 2. Convergence curves of different algorithms.

Figure 3. Boxplots of different algorithms.

4.3. SDECOA for CEC2020 problems

To further demonstrate the effectiveness of SDECOA, this section presents the experimental results and data analysis of SDECOA on real-world constrained optimization problems, taken from the CEC2020 real-world constrained optimization suite. The mathematical models for all problems are derived from reference [Reference Sharma and Jabeen46]. Table 4 lists the basic information of the 48 problems, where D is the problem dimension, g the number of inequality constraints, h the number of equality constraints, and f* the known optimal value. These problems are drawn from fields such as industrial chemical processes, process synthesis and design, mechanical engineering, power systems, power electronics, and livestock feed ration optimization, with dimensions ranging from 2 to 118 and constraint counts ranging from 1 to 108.

In this set of experiments, SDECOA is compared with several advanced algorithms that won in the CEC2020 competition. These top-ranking algorithms, in descending order of rank, are SASS [Reference Kumar, Das and Zelinka47], COLSHADE [Reference Gurrola-Ramos, Hernàndez-Aguirre and Dalmau-Cedeño48], sCMAgES [Reference Kumar, Das and Zelinka49], EnMODE [Reference Sallam, Elsayed, Chakrabortty and Ryan50], and BpMAgES [Reference Hellwig and Beyer51]. Each algorithm is run independently 25 times, with 1000 iterations and a population size of 60; the remaining control parameters follow the literature [Reference Xu and Lihong52]. A dynamic adaptive penalty function handles the constraints. Table 5 presents the best values, mean values, and standard deviations of these algorithms; shaded cells indicate that the proposed algorithm’s best result exceeds f* for that problem. The Friedman rank-sum test is also applied to the mean values over the 48 optimization problems, and the resulting rank of the proposed algorithm is better than those of the top five algorithms from the CEC2020 competition.
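The exact penalty formulation follows the cited literature; as a hedged illustration, a common dynamic adaptive penalty grows the penalty weight with the iteration count t, so infeasible solutions are tolerated early in the search and punished heavily later. All names and parameter values below are illustrative assumptions, not the paper’s settings:

```python
def dynamic_penalty_fitness(f, gs, hs, x, t, C=0.5, alpha=2.0, beta=2.0, eps=1e-4):
    """Penalized objective for minimization (a generic dynamic penalty sketch).

    f      : objective function
    gs, hs : inequality constraints g(x) <= 0 and equality constraints h(x) = 0
    t      : current iteration; the weight (C*t)**alpha increases over time
    eps    : tolerance for treating an equality constraint as satisfied
    """
    violation = sum(max(0.0, g(x)) ** beta for g in gs)
    violation += sum(max(0.0, abs(h(x)) - eps) ** beta for h in hs)
    return f(x) + (C * t) ** alpha * violation

# Toy problem: minimize x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0.
f = lambda x: x * x
g = lambda x: 1.0 - x
print(dynamic_penalty_fitness(f, [g], [], 0.0, t=10))  # infeasible: 0 + 25*1 = 25.0
print(dynamic_penalty_fitness(f, [g], [], 2.0, t=10))  # feasible: just f(2) = 4.0
```

Because the weight grows with t, the same constraint violation that costs little at iteration 1 dominates the fitness near the end of the run, steering the population into the feasible region.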

Following the literature [Reference Ezugwu, Adewumi and Frîncu53], solution quality is assessed by the percentage deviation of the mean from the known optimal solution (PDmean). As Eq. (27) shows, the smaller the average value obtained over 25 runs, the smaller the PDmean value and the higher the solution quality. Table 6 presents the PDmean values of the different algorithms on the different problems:

(27) \begin{align} \textit{PDmean}=\begin{cases} \dfrac{mean-f^{*}}{f^{*}+\varepsilon }, & \text{if } f^{*}\geq 0\\[3pt] \dfrac{f^{*}-mean}{f^{*}}, & \text{otherwise} \end{cases} \end{align}

where f* represents the best known solution, mean represents the average of the test results, and $\varepsilon$ is a very small constant. An analysis of Table 6 shows that the proposed algorithm generally attains higher solution quality than these advanced algorithms. The proposed algorithm achieved ideal results in most cases, with the results for 26 problems surpassing the known optimal solution. These results demonstrate that SDECOA not only performs well but also excels on real-world constrained optimization problems; in comparison with these state-of-the-art algorithms, it is highly competitive.
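Eq. (27) translates directly into code; the helper below is a sketch, and the value of ε is an assumption:

```python
def pd_mean(mean, f_star, eps=1e-12):
    """Percentage deviation of the run mean from the best known value f* (Eq. 27)."""
    if f_star >= 0:
        return (mean - f_star) / (f_star + eps)  # eps guards against f* = 0
    return (f_star - mean) / f_star

# A mean of 110 against a known optimum of 100 deviates by about 10%.
print(round(pd_mean(110.0, 100.0), 6))  # 0.1
# For a negative optimum, the second branch keeps the deviation positive.
print(pd_mean(-90.0, -100.0))  # 0.1
```

Note that a negative PDmean is possible when the mean falls below a non-negative f*, which is exactly the “surpassing the known optimal solution” situation recorded in Table 5.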

4.4. SDECOA for multi-robot path planning problem

In this section, SDECOA is applied to the multi-robot path planning problem; the mathematical model and its code can be obtained from reference [Reference Akay and Mustafa35].

4.4.1. Design of the scenario for multi-robot path planning problem

This section introduces the simulation scenario layout for the robot path planning problem. In the simulation model, all robots are circular with equal radii, and collisions between them are not allowed. Each robot has its own starting point and end point. Each static obstacle has a different size and shape. Each dynamic obstacle is likewise circular, with all dynamic obstacles sharing the same radius, and has its own starting point and end point; it moves at a constant speed along a fixed course. In this study, three different scenarios are created, with the numbers of robots, static obstacles, and dynamic obstacles given in Table 7. Except for Scenario 1, which follows the literature [Reference Akay and Mustafa35], the scenarios are designed independently in this study. The layouts of the three scenarios are shown in Figures 4, 5, and 6. In these scenarios, black paths represent the planned paths of robots, blue paths represent the paths of dynamic obstacles, and ‘×’ marks the end point of each object.
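Since every robot and dynamic obstacle is a circle, the no-collision requirement in these scenarios reduces to a distance check between circle centres. A minimal sketch with hypothetical names (the actual model code is from the cited reference):

```python
import math

def circles_collide(c1, r1, c2, r2):
    """True if two circular objects (robots or dynamic obstacles) overlap."""
    return math.hypot(c1[0] - c2[0], c1[1] - c2[1]) < r1 + r2

def position_is_feasible(pos, radius, others):
    """A candidate robot position is feasible only if it overlaps no other
    circular object; `others` is a list of (centre, radius) pairs."""
    return not any(circles_collide(pos, radius, c, r) for c, r in others)

print(circles_collide((0, 0), 1.0, (1.5, 0), 1.0))         # True: centres 1.5 < 2 apart
print(position_is_feasible((0, 0), 1.0, [((3, 0), 1.0)]))  # True: clear of the obstacle
```

Static obstacles of arbitrary shape would need their own distance test, but the circle–circle check above covers every robot–robot and robot–dynamic-obstacle pair.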

In the experiments on the three scenarios above, the comparison algorithms, besides the proposed algorithm, are DE, PSO, SCA, COA, and sdSCA, with a population size of 30 and parameter settings consistent with Section 4.2. It is worth noting that in these experiments, each algorithm iterates until all robots reach their end points. Each algorithm is run independently 20 times.

Table IV. Basic information of 48 CEC2020 real-world constrained optimization problems.

4.4.2. Simulation experiment in Scenario 1

In the simulation experiment in Scenario 1, Figure 7 illustrates the travel path found by each algorithm. In solving the multi-robot path planning problem, the stronger the algorithm’s performance, the smoother the robots’ paths and the shorter the average travel distance. As seen in Figure 7, the path corresponding to SDECOA is clearly the smoothest. Table 8 records the average travel distance and standard deviation for each robot from its starting point to its destination, along with the total average travel distance of the six robots in Scenario 1. For ease of comparison, a histogram of the average travel distances of the different algorithms is shown in Figure 8, from which it is evident that SDECOA demonstrates the best performance. Table 9 lists the number of steps taken by each robot, where fewer steps indicate a smoother path.
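The travel distance tabulated for each robot is simply the summed Euclidean length of its waypoint sequence; the waypoint format below is an assumption for illustration:

```python
import math

def travel_distance(waypoints):
    """Total path length of a robot's sequence of (x, y) waypoints."""
    return sum(math.dist(a, b) for a, b in zip(waypoints, waypoints[1:]))

# A 3-4-5 leg followed by a straight vertical leg of length 4.
print(travel_distance([(0, 0), (3, 4), (3, 8)]))  # 9.0
```

The step count reported alongside it is just the number of waypoints minus one, so a smoother path shows up as both a shorter distance and fewer steps.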

Table V. Experimental results of different algorithms solving 48 CEC2020 real-world constrained optimization problems (shaded areas indicate results superior to the known optimal solution).

The experimental results from Scenario 1 show that, compared to DE and COA, the proposed algorithm reduced the average total travel distance by 33.2% and 22.4%, respectively, and the average number of steps taken by 26.9% and 15.8%.

4.4.3. Simulation experiment in Scenario 2

Figure 9 shows the travel paths of each algorithm in Scenario 2. It is clearly observed in Figure 9 that, as the number of robots increases, the differences in path smoothness between the algorithms become more pronounced. Compared to the other algorithms, the path corresponding to SDECOA is the smoothest. Table 10 records the average travel distance and corresponding standard deviation for the 12 robots from the starting point to the destination. Figure 10 presents a histogram of the average travel distances for different algorithms. From the analysis of Figure 10, it can be seen that, despite the increased complexity of the problem, SDECOA still demonstrates strong performance. Table 11 records the number of steps taken by each robot.

Table VI. Percentage deviation values (PDmean) obtained through solutions using different algorithms.

Table VII. Number of robots and obstacles in different scenarios.

Figure 4. Scenario 1.

The experimental results from Scenario 2 show that, compared to DE and COA, the proposed algorithm reduced the average total travel distance by 46.1% and 50.2%, respectively, and the average total number of steps taken by 40.0% and 48.3%.

4.4.4. Simulation experiment in Scenario 3

In Scenario 3, the number of robots was increased to 20, and the quantities of both dynamic and static obstacles were raised significantly, making the path planning problem more complex. It should be noted that in this experiment COA performed poorly, with some robots failing to reach their goals (“getting lost”); the results involving COA are therefore excluded from this section. Figure 11 illustrates the trajectories of each algorithm in Scenario 3. From Figure 11, it is clearly observable that, even as the problem scale increased, the proposed algorithm still managed to find effective paths compared to the other algorithms. Table 12 records the average travel distance and corresponding standard deviation for the 20 robots from the starting point to the destination, while Figure 12 shows the histogram of the average travel distances for the different algorithms. The histogram clearly indicates that the average travel distance of SDECOA is significantly lower than that of the other algorithms. Table 13 documents the number of steps taken by each robot.

Figure 5. Scenario 2.

Figure 6. Scenario 3.

Figure 7. Paths of different algorithms in Scenario 1.

Table VIII. Experimental results of the travel distance for each robot in Scenario 1.

Figure 8. Average travel distance of each robot in Scenario 1.

From the experimental results in Scenario 3, it can be concluded that, compared to DE, the proposed algorithm reduced the average total travel distance by 45.3% and the total average number of movement steps by 37.0%.

These experimental data show that, regardless of the number of robots, the proposed algorithm always achieves the best performance. Moreover, as the number of robots increases, the travel distances produced by the other algorithms grow sharply, whereas SDECOA still copes reasonably well. The path diagrams of the three scenarios clearly show that the proposed algorithm yields shorter travel distances and smoother paths, with a particularly pronounced advantage in more complex environments. It can therefore be concluded that our algorithm outperforms these algorithms in performance and competitiveness.

5. Conclusion and future work

Current research on robot path planning mainly considers environments with static obstacles or involves very few robots. In this study, by contrast, we consider a scenario with 20 robots and 7 dynamic obstacles, which significantly increases the complexity of the problem. To address multi-robot online path planning in environments with both static and dynamic obstacles, this paper proposed SDECOA. The proposed algorithm extends the original COA by incorporating two DE update strategies, which are adaptively selected by a learning mechanism; this removes COA’s dependence on a single update strategy, so the algorithm can achieve excellent performance across a variety of problem domains. Furthermore, the crossover probability, originally a constant parameter, is turned into a dynamically adaptive variable to further improve solution accuracy. This approach effectively mitigates the premature convergence caused by the imbalance between local and global search capabilities in COA, thereby improving the algorithm’s optimization capability.
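The learning mechanism described above can be sketched as success-history-based roulette selection between the candidate update strategies; the scheme below is a generic illustration with assumed update rules, not the paper’s exact formulation:

```python
import random

class StrategySelector:
    """Pick among update strategies in proportion to their recent success rate.

    `success` and `trials` start at 1 (pseudo-counts) so that no strategy's
    probability ever collapses to zero, preserving exploration.
    """
    def __init__(self, n_strategies):
        self.success = [1] * n_strategies
        self.trials = [1] * n_strategies

    def probabilities(self):
        rates = [s / t for s, t in zip(self.success, self.trials)]
        total = sum(rates)
        return [r / total for r in rates]

    def pick(self, rng=random):
        """Roulette-wheel choice of the strategy index to apply next."""
        return rng.choices(range(len(self.trials)), weights=self.probabilities())[0]

    def report(self, k, improved):
        """Record whether strategy k improved the solution it produced."""
        self.trials[k] += 1
        self.success[k] += int(improved)

sel = StrategySelector(2)
for _ in range(10):
    sel.report(0, True)   # strategy 0 keeps improving solutions
    sel.report(1, False)  # strategy 1 keeps failing
p0, p1 = sel.probabilities()
print(p0 > p1)  # True: the successful strategy is now favored
```

A scheme of this shape lets the search lean on whichever DE strategy is currently productive while still occasionally trying the other, which is the balance between global and local search the paper attributes to its learning mechanism.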

Table IX. Average number of steps traveled by each robot in Scenario 1.

Figure 9. Paths of different algorithms in Scenario 2.

Table X. Experimental results of the travel distance for each robot in Scenario 2.

Table XI. Average number of steps traveled by each robot in Scenario 2.

Figure 10. Average travel distance of each robot in Scenario 2.

Figure 11. Paths of different algorithms in Scenario 3.

Table XII. Experimental results of the travel distance for each robot in Scenario 3.

Table XIII. Average number of steps traveled by each robot in Scenario 3.

Figure 12. Average travel distance of each robot in Scenario 3.

To validate the performance of the proposed algorithm, it was applied to the CEC2022 benchmark test suite and the CEC2020 real-world constrained optimization problems. Experimental results show that, compared to DE and COA, the proposed algorithm significantly improves performance, solving optimization problems even with over 100 constraints. When compared with the winning algorithm in the CEC2020 competition, SDECOA achieved the best results, as confirmed by the Friedman test. Finally, the algorithm was applied to multi-robot path planning problems in three complex scenarios. In Scenario 1, compared to the original DE and COA, the proposed algorithm reduced the average total travel distance of all robots by 33.2% and 22.4%, respectively, and reduced the average total movement steps by 31.1% and 21.6%. In Scenario 2, the average total travel distance was reduced by 46.1% and 50.2%, respectively, while the average total movement steps were reduced by 40.0% and 48.3%. In Scenario 3, compared to the original DE, the average total travel distance was reduced by 45.3%, and the average total movement steps were reduced by 32.7%. These experimental results demonstrate the strong competitiveness of the proposed algorithm.

This paper presents a novel solution approach for the multi-robot path planning problem, holding promise for addressing more complex path planning challenges in future environments, such as multi-robot path planning in three-dimensional spaces with multiple dynamic obstacles. According to the No Free Lunch theorem, no single algorithm can solve all problems, but continuous improvement can enable an algorithm to solve a broader range of problems. Of course, the number of position update strategies is not always a determining factor for success. In future research on intelligent algorithms, better position update strategies that align with the COA are expected to emerge, along with more challenging real-world optimization problems for SDECOA to solve. One limitation of the proposed algorithm is that it has a relatively large number of control parameters, and the effective update strategy at the initial stage may not necessarily be suitable for the later iterations. Future research could focus on applying the algorithm to more realistic path planning problems. Moreover, this paper only applies SDECOA to single-objective constrained optimization problems; future work could extend its application to high-dimensional multi-objective optimization problems.

Author contributions

Lun Zhu: Methodology and Writing – original draft. Guo Zhou: Writing – original draft. Yongquan Zhou: Supervisor and Writing – review and editing. Qifang Luo: Supervisor and Experimental results analysis. Huajuan Huang: Algorithm. Xiuxi Wei: Software and Results analysis.

Data availability

No data were used for the research described in the article.

Financial support

This work was supported by the National Natural Science Foundation of China under Grants U21A20464 and 62066005.

Competing interests

All the work outlined in this paper is our own except where otherwise acknowledged and referenced. The work contained in the manuscript has not been previously published, in whole or part, and is not under consideration by any other journal. All authors are aware of and accept responsibility for the Manuscript. The authors declare that they have no conflicts of interest.

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.

References

Olfati-Saber, R., “Flocking for multi-agent dynamic systems: Algorithms and theory,” IEEE Trans. Autom. Control 51(3), 401–420 (2006).
Tournassoud, P., “A strategy for obstacle avoidance and its application to multi-robot systems,” In: Proceedings 1986 IEEE International Conference on Robotics and Automation, 3 (IEEE, 1986) pp. 1224–1229.
Simeon, T., Leroy, S. and Laumond, J.-P., “Path coordination for multiple mobile robots: A resolution-complete algorithm,” IEEE Trans. Robot. Autom. 18(1), 42–49 (2002).
Žunić, E., Djedović, A. and Žunić, B., “Software solution for optimal planning of sales persons work based on Depth-First Search and Breadth-First Search algorithms,” In: 39th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO) (IEEE, 2016) pp. 1248–1253.
Fu, B., Chen, L., Zhou, Y., Zheng, D., Wei, Z., Dai, J. and Pan, H., “An improved A* algorithm for the industrial robot path planning with high success rate and short length,” Robot. Auton. Syst. 106, 26–37 (2018).
Zhang, P., Xiong, C., Li, W., Du, X. and Zhao, C., “Path planning for mobile robot based on modified rapidly exploring random tree method and neural network,” Int. J. Adv. Robot. Syst. 15(3), 1729881418784221 (2018).
Kennedy, J. and Russell, E., “Particle swarm optimization,” In: Proceedings of ICNN’95 - International Conference on Neural Networks, 4 (IEEE, 1995) pp. 1942–1948.
Dorigo, M., Birattari, M. and Stützle, T., “Ant colony optimization,” IEEE Comput. Intell. Mag. 1(4), 28–39 (2006).
Karaboga, D. and Basturk, B., “A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm,” J. Glob. Optim. 39(3), 459–471 (2007).
Holland, J. H., “Genetic algorithms,” Sci. Am. 267(1), 66–73 (1992).
Qin, A. K., Huang, V. L. and Suganthan, P. N., “Differential evolution algorithm with strategy adaptation for global numerical optimization,” IEEE Trans. Evol. Comput. 13(2), 398–417 (2008).
Rashedi, E., Nezamabadi-Pour, H. and Saryazdi, S., “GSA: A gravitational search algorithm,” Inform. Sci. 179(13), 2232–2248 (2009).
Hashim, F. A., Hussain, K., Houssein, E. H., Mabrouk, M. S. and Al-Atabany, W., “Archimedes optimization algorithm: A new metaheuristic algorithm for solving optimization problems,” Appl. Intell. 51(3), 1531–1551 (2021).
Ghasemi, M., Zare, M., Zahedi, A., Trojovský, P., Abualigah, L. and Trojovská, E., “Optimization based on performance of lungs in body: Lungs performance-based optimization (LPO),” Comput. Methods Appl. Mech. Eng. 419, 116582 (2024).
Tian, Z. and Mei, G., “Football team training algorithm: A novel sport-inspired meta-heuristic optimization algorithm for global optimization,” Expert Syst. Appl. 245, 123088 (2024).
Geng, J., Sun, X., Wang, H., Bu, X., Liu, D., Li, F. and Zhao, Z., “A modified adaptive sparrow search algorithm based on chaotic reverse learning and spiral search for global optimization,” Neural Comput. Appl. 35(35), 24603–24620 (2023).
Parhi, D. R., “Chaos-based optimal path planning of humanoid robot using hybridized regression-gravity search algorithm in static and dynamic terrains,” Appl. Soft Comput. 140, 110236 (2023).
Li, Y., Zhao, J., Chen, Z., Xiong, G. and Liu, S., “A robot path planning method based on improved genetic algorithm and improved dynamic window approach,” Sustainability 15(5), 4656 (2023).
Xu, L., Cao, M. and Song, B., “A new approach to smooth path planning of mobile robot based on quartic Bezier transition curve and improved PSO algorithm,” Neurocomputing 473, 98–106 (2022).
Nazarahari, M., Khanmirza, E. and Doostie, S., “Multi-objective multi-robot path planning in continuous environment using an enhanced genetic algorithm,” Expert Syst. Appl. 115, 106–120 (2019).
Zhang, T. W., Xu, G. H., Zhan, X. S. and Han, T., “A new hybrid algorithm for path planning of mobile robot,” J. Supercomput. 78(3), 4158–4181 (2022).
Dai, X., Long, S., Zhang, Z. and Gong, D., “Mobile robot path planning based on ant colony algorithm with A* heuristic method,” Front. Neurorobot. 13, 15 (2019).
Chen, G. and Liu, J., “Mobile robot path planning using ant colony algorithm and improved potential field method,” Comput. Intell. Neurosci. 2019(1), 1–10 (2019).
Zhang, Q., Ning, X., Li, Y., Pan, R. G. and Zhang, L., “Path planning of patrol robot based on modified grey wolf optimizer,” Robotica 41(7), 1947–1975 (2023).
Liu, S., Liu, S. and Xiao, H., “Improved gray wolf optimization algorithm integrating A* algorithm for path planning of mobile charging robots,” Robotica 42(2), 536–559 (2024).
Dai, Y., Li, S., Chen, X., Nie, X., Rui, X. and Zhang, Q., “Three-dimensional truss path planning of cellular robots based on improved sparrow algorithm,” Robotica 42(2), 347–366 (2024).
Fusic, S. J. and Sitharthan, R., “Self-adaptive learning particle swarm optimization-based path planning of mobile robot using 2D Lidar environment,” Robotica 42(4), 977–1000 (2024).
Dehghani, M., Montazeri, Z., Trojovská, E. and Trojovský, P., “Coati Optimization Algorithm: A new bio-inspired metaheuristic algorithm for solving optimization problems,” Knowl.-Based Syst. 259, 110011 (2023).
Hashim, F. A., Houssein, E. H., Mostafa, R. R., Hussien, A. G. and Helmy, F., “An efficient adaptive-mutated Coati optimization algorithm for feature selection and global optimization,” Alexandria Eng. J. 85, 29–48 (2023).
Baş, E. and Yıldızdan, G., “Enhanced coati optimization algorithm for big data optimization problem,” Neural Process. Lett. 55(8), 10131–10199 (2023).
Baş, E. and Yıldızdan, G., “A new binary coati optimization algorithm for binary optimization problems,” Neural Comput. Appl. 36(6), 2797–2834 (2024).
Hasanien, H. M., Alsaleh, I., Alassaf, A. and Alateeq, A., “Enhanced coati optimization algorithm-based optimal power flow including renewable energy uncertainties and electric vehicles,” Energy 283, 129069 (2023).
Jia, H., Shi, S., Wu, D., Rao, H., Zhang, J. and Abualigah, L., “Improve coati optimization algorithm for solving constrained engineering optimization problems,” J. Comput. Des. Eng. 10(6), 2223–2250 (2023).
Kumar, A., Wu, G., Ali, M. Z., Mallipeddi, R., Suganthan, P. N. and Das, S., “A test-suite of non-convex constrained optimization problems from the real-world and some baseline results,” Swarm Evol. Comput. 56, 100693 (2020).
Akay, R. and Mustafa, Y. Y., “Multi-strategy and self-adaptive differential sine-cosine algorithm for multi-robot path planning,” Expert Syst. Appl. 232, 120849 (2023).
Draa, A., Bouzoubia, S. and Boukhalfa, I., “A sinusoidal differential evolution algorithm for numerical optimization,” Appl. Soft Comput. 27, 99–126 (2015).
Das, S. and Suganthan, P. N., “Differential evolution: A survey of the state-of-the-art,” IEEE Trans. Evol. Comput. 15(1), 4–31 (2010).
Mirjalili, S., Mirjalili, S. M. and Lewis, A., “Grey wolf optimizer,” Adv. Eng. Softw. 69, 46–61 (2014).
Wang, Y., Cai, Z. and Zhang, Q., “Differential evolution with composite trial vector generation strategies and control parameters,” IEEE Trans. Evol. Comput. 15(1), 55–66 (2011).
Kirkpatrick, S., Gelatt, C. D. Jr and Vecchi, M. P., “Optimization by simulated annealing,” Science 220(4598), 671–680 (1983).
Mirjalili, S., “SCA: A sine cosine algorithm for solving optimization problems,” Knowl.-Based Syst. 96, 120–133 (2016).
Mirjalili, S. and Andrew, L., “The whale optimization algorithm,” Adv. Eng. Softw. 95, 51–67 (2016).
Heidari, A. A., Mirjalili, S., Faris, H., Aljarah, I., Mafarja, M. and Chen, H., “Harris hawks optimization: Algorithm and applications,” Future Gener. Comput. Syst. 97, 849–872 (2019).
Xue, J. and Bo, S., “A novel swarm intelligence optimization approach: Sparrow search algorithm,” Syst. Sci. Control Eng. 8(1), 22–34 (2020).
Song, H., Bei, J., Zhang, H., Wang, J. and Zhang, P., “Hybrid algorithm of differential evolution and flower pollination for global optimization problems,” Expert Syst. Appl. 237, 121402 (2024).
Sharma, D. and Jabeen, S. D., “Hybridizing interval method with a heuristic for solving real-world constrained engineering optimization problems,” Structures 56, 104993 (2023).
Kumar, A., Das, S. and Zelinka, I., “A self-adaptive spherical search algorithm for real-world constrained optimization problems,” In: Proceedings of the 2020 Genetic and Evolutionary Computation Conference Companion (2020) pp. 13–14.
Gurrola-Ramos, J., Hernàndez-Aguirre, A. and Dalmau-Cedeño, O., “COLSHADE for real-world single-objective constrained optimization problems,” In: 2020 IEEE Congress on Evolutionary Computation (CEC) (IEEE, 2020) pp. 1–8.
Kumar, A., Das, S. and Zelinka, I., “A modified covariance matrix adaptation evolution strategy for real-world constrained optimization problems,” In: Proceedings of the 2020 Genetic and Evolutionary Computation Conference Companion (2020) pp. 8–12.
Sallam, K. M., Elsayed, S. M., Chakrabortty, R. K. and Ryan, M. J., “Multi-operator differential evolution algorithm for solving real-world constrained optimization problems,” In: 2020 IEEE Congress on Evolutionary Computation (CEC) (IEEE, 2020) pp. 1–8.
Hellwig, M. and Beyer, H. G., “A modified matrix adaptation evolution strategy with restarts for constrained real-world problems,” In: 2020 IEEE Congress on Evolutionary Computation (CEC) (IEEE, 2020) pp. 1–8.
Xu, J. and Lihong, X., “Optimal stochastic process optimizer: A new metaheuristic algorithm with adaptive exploration-exploitation property,” IEEE Access 9, 108640–108664 (2021).
Ezugwu, A. E. S., Adewumi, A. O. and Frîncu, M. E., “Simulated annealing based symbiotic organisms search optimization algorithm for traveling salesman problem,” Expert Syst. Appl. 77, 189–210 (2017).