
Cooperative line-of-sight guidance strategy with prescribed performance and input saturation against active defensive aircraft in two-on-two engagement

Published online by Cambridge University Press:  24 January 2025

X. Wang
Affiliation:
Control and Simulation Center, Harbin Institute of Technology, Harbin, China National Key Laboratory of Complex System Control and Intelligent Agent Cooperation, Harbin, China
T. Chao
Affiliation:
Control and Simulation Center, Harbin Institute of Technology, Harbin, China National Key Laboratory of Complex System Control and Intelligent Agent Cooperation, Harbin, China
M. Hou
Affiliation:
Center for Control Theory and Guidance Technology, Harbin Institute of Technology, Harbin, China
S. Wang
Affiliation:
Control and Simulation Center, Harbin Institute of Technology, Harbin, China National Key Laboratory of Complex System Control and Intelligent Agent Cooperation, Harbin, China
M. Yang*
Affiliation:
Control and Simulation Center, Harbin Institute of Technology, Harbin, China National Key Laboratory of Complex System Control and Intelligent Agent Cooperation, Harbin, China
*
Corresponding author: M. Yang; Email: myang_hitcsc@163.com

Abstract

This paper considers the guidance problem for attackers against aircraft with active defense in a two-on-two engagement, which includes an attacker, a protector, a defender and a target. A cooperative line-of-sight guidance scheme with prescribed performance and input saturation is proposed utilising sliding mode control and line-of-sight guidance theories, which guarantees that the attacker is able to capture the target with the assistance of the protector, which remains on the line-of-sight between the defender and the attacker in order to intercept the defender. A fixed-time prescribed performance function and a first-order anti-saturation auxiliary variable are designed in the game guidance strategy to constrain the overshoot of the guidance variable and satisfy the requirement of an overload manoeuvre. The proposed guidance strategy alleviates the influence of external disturbance by implementing a fixed-time observer, and avoids the chattering phenomenon caused by the sign function. Finally, nonlinear numerical simulations verify the cooperative guidance strategies.

Type
Research Article
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of Royal Aeronautical Society

Nomenclature

$a$

acceleration

$R$

relative distance

${t_f}$

terminal time

${t_{go}}$

time-to-go

${u_{\rm{N}}}$

control input normal to initial line-of-sight

$V$

velocity

$y$

distance normal to initial line-of-sight

$z$

observer state

${Z_{{\rm{AT}}}}$

zero-effort miss

${Z_{\rm{L}}}$

zero-effort line-of-sight angle

$\lambda $

line-of-sight angle

$\gamma $

path angle

$\varphi $

state transition variable

$\tau $

time constant

$\varepsilon $

prescribed performance state

1.0 Introduction

As countries attach importance to aircraft penetration technology and establish anti-aircraft defense systems, a single aircraft adopting traditional penetration strategies [Reference Xiong, Wang, Liu, Wang and Chen1, Reference Wang, Fu, Fang, Zhu, Wu and Wang2], such as flares and chaff, can no longer guarantee its survival. Therefore, the cooperative penetration of multiple aircraft based on the framework of the three-player conflict [Reference Asher and Matuszewski3], including an attacker, a defender and a target, has become a feasible countermeasure [Reference Faruqi4]. The cooperation of multiple aircraft increases the success rate of penetration, copes with the complex battlefield environment and accomplishes tasks that cannot be completed by a single aircraft.

Cooperative guidance strategies [Reference Isaacs5–Reference Cheng and Yuan15] have been proposed to study the three-player conflict problem using differential game theory. The theory of differential games was developed by the American scholar Isaacs [Reference Isaacs5] to address dynamic games involving two or more players in conflict. Cooperative linear quadratic differential game (CLQDG) guidance strategies were presented by Perelman et al. [Reference Perelman, Shima and Rusnak6], who gave the solutions of the differential game for continuous and discrete systems under arbitrary-order adversary dynamics. A cooperative pursuit-evasion game guidance scheme with an impact-time constraint was proposed in Ref. (Reference Sinha, Kumar and Mukherjee7) for the three-player conflict scenario, where the attacker was intercepted by the defender at the desired impact time to safeguard the target. Based on the framework of a three-player conflict, a cooperative active defense guidance scheme with limited observations was designed by Singh and Puduru [Reference Singh and Puduru8] for the mission in which the attacker was pursued by multiple visibility-constrained defenders. Liang et al. [Reference Liang, Wang, Wang, Wang, Wang and Liu9, Reference Liang, Li, Wu, Zheng, Chu and Wang10] derived a CLQDG guidance scheme to handle the three-player conflict in which the attacker avoided the defender and hit the target with lower fuel cost and no control saturation. The problem of multiple attackers against an active defense aircraft was considered by Liu et al. [Reference Liu, Dong, Li and Ren11], who devised a CLQDG strategy guaranteeing that the attackers avoided the defender and hit the target at the desired angle. Novel evasion guidance strategies based on the CLQDG theory were developed by Yan et al. [Reference Yan, Cai and Xu12] to address the problem of a target avoiding two attackers approaching from the same direction. Tang et al. [Reference Tang, Ye, Huang, Sun and Sun13] studied the spacecraft interception game with incomplete information and presented switching strategies according to CLQDG theory. The robust multi-agent CLQDG problem and its application to cooperative guidance were investigated by Liu et al. [Reference Liu, Dong, Li and Ren14]. Cheng and Yuan [Reference Cheng and Yuan15], based on the CLQDG method, studied a new adaptive pursuit-evasion game scheme for a multi-player confrontation scenario. A novel cooperative interception guidance strategy with fast multiple-model adaptive estimation was presented by Wang et al. [Reference Wang, Guo, Wang, Liu and Zhang16] to solve the problem of two pursuers intercepting an evasive target.

For the three-player conflict, another research approach is cooperative line-of-sight (LOS) guidance, which was proposed by Yamasaki et al. [Reference Yamasaki and Balakrishnan17, Reference Yamasaki and Balakrishnan18] of Japan Defense University. The philosophy of LOS guidance is that the defender remains on the LOS between the target and the attacker in order to intercept the attacker. In recent years, many scholars have focused on the cooperative LOS guidance problem. A novel cooperative LOS guidance strategy was proposed by Kumar and Dwaipayan [Reference Kumar and Dwaipayan19, Reference Kumar and Dwaipayan20] based on sliding mode control (SMC) theory to complete the combat mission of the defender intercepting the attacker and protecting the target. The input-to-state stability approach was utilised in Ref. (Reference Luo, Tan, Yan, Wang and Ji21) to design a cooperative LOS guidance law for the three-player conflict scenario. A three-player conflict problem was considered by Luo et al. [Reference Luo, Ji and Wang22, Reference Tan, Luo, Ji, Liao and Wu23] in three dimensions based on high-gain observers, which guaranteed that the defender remained on the attacker-target LOS and intercepted the attacker. Liu et al. [Reference Liu, Wang, Li, Yan and Zhang24] presented a cooperative guidance algorithm for active defense with an LOS constraint under a low speed ratio. Based on the optimal control method, a cooperative LOS guidance strategy was proposed in Ref. (Reference Chen, Wang and Huang25) for active defense aircraft that minimised the weighted energy consumption. A three-dimensional trajectory approach based on LOS angle acceleration was presented by Han et al. [Reference Han, Hu, Wang, Xin and Shin26] for aerial vehicles without range measurement. The multi-target assignment problem was surveyed by Liu et al. [Reference Liu, Lin, Wei and Yan27], who classified the solution methods into static multi-target assignment and dynamic multi-target assignment. By analysing the characteristics of active defense combat, Liu et al. [Reference Liu, Lin, Wang, Huang, Yan and Li28] proposed a cooperative guidance law considering a small speed ratio to investigate the three-player problem.

Since a two-on-two engagement involving four players enhances the diversity of battle scenarios compared to a three-player conflict, it has become a focal point of research for many scholars. A new type of cooperative guidance strategy, in which the target pair lured the pursuers into a collision, was designed in Ref. (Reference Tan, Fonod and Shima29) by virtue of optimal control theory for the two-on-two engagement scenario. Building on this research, a nonlinear model predictive control approach was used by Manoharan and Sujit [Reference Manoharan and Sujit30] to present a cooperative defense strategy for the scenario of luring two attackers into a collision. Liang et al. [Reference Liang, Wang, Liu and Liu31] proposed a cooperative guidance scheme for a two-on-two engagement in which the interceptor pursued an active defense spacecraft with the assistance of a protector capturing the defender.

Prescribed performance control (PPC) is a robust method for controlling systems with predefined performance objectives: it limits overshoot and keeps the state variables within prescribed bounds. In Ref. (Reference Zhuang, Tan, Li and Song32), a spacecraft formation scheme was investigated by Zhuang et al. taking input saturation and prescribed performance constraints into account. Truong et al. [Reference Truong, Vo and Kang33] employed the SMC approach to propose a fixed-time control strategy with prescribed performance for robots. A new type of prescribed performance function (PPF) was studied in Ref. (Reference Zhang, Wu, Yang and Song34) to design a PPC scheme under the constraint of input saturation, which addressed the collision-avoidance problem of the spacecraft formation. The guidance problem of an attacker intercepting a stationary target was considered by Li et al. [Reference Li, Liu, Li and Liang35], who devised a guidance law with field-of-view and impact-angle constraints. However, the existing literature primarily examines spacecraft formation and robot control, with limited research on the PPC of multi-aircraft cooperative guidance.

In this paper, a cooperative LOS guidance scheme with prescribed performance and input saturation is designed for a two-on-two engagement, which includes an attacker, a protector, a defender and a target, by virtue of the SMC method. The scheme guarantees that the attacker hits the target while the protector remains on the LOS between the defender and the attacker to intercept the defender. As depicted in Fig. 1, the attacker and the protector collaborate to defeat the defender-target team and accomplish the combat mission. The main contributions of this paper are as follows:

  1. 1) Zero-effort miss (ZEM) and zero-effort LOS angle (ZEL) are utilised to design a novel cooperative LOS guidance strategy for the two-on-two engagement. The performance of the guidance strategy is improved because, once on the sliding surface, no guidance command is needed to guarantee interception in the nominal case with perfect modelling. In contrast to Refs (Reference Kumar and Dwaipayan19, Reference Kumar and Dwaipayan20), the guidance scheme does not include the sign function and is therefore not affected by the chattering phenomenon.

  2. 2) Compared to the traditional PPC, a fixed-time PPF is designed to limit the overshoot of the guidance variable, using dual PPFs for upper and lower boundaries to improve stability and symmetry.

  3. 3) The proposed guidance scheme satisfies the overload requirements of the attacker and the protector during the engagement by employing a first-order anti-saturation auxiliary variable.

  4. 4) A novel fixed-time disturbance observer is proposed to estimate the unknown disturbance in the guidance system, weakening the influence of unknown disturbance.

Figure 1. The relationship of two-on-two engagement.

The paper is organised as follows: Section 2 elaborates on the preliminaries, including nonlinear models, linear models and order reduction of the system in the two-on-two engagement scenario. The development process of the cooperative guidance strategy is given in Section 3, composed of the fixed-time PPF design, the fixed-time observer design, and the first-order anti-saturation auxiliary variable design. The cooperative guidance strategy is validated in Section 4 through nonlinear numerical simulations.

2.0 Preliminaries

The engagement relationship of each player in the two-on-two scenario is shown in Fig. 1. The problem involves four players: an attacker, a protector, a defender and a target. When the target detects the attacker, it deploys a defender to intercept the attacker in order to ensure its own escape. In response, the attacker deploys a protector to attack the defender, increasing the chance of the attacker successfully intercepting the target. The purpose of the guidance strategy is for the attacker to intercept the target while the protector remains on the line of sight between the defender and the attacker to intercept the defender.

2.1 Nonlinear kinematics

As shown in Fig. 2, the LOS angles and the flight path angles of each player are respectively represented as $ {\lambda _j},{\rm{ }}j \in \left\{ {{\rm{AT}},{\rm{ AD}},{\rm{ PD}}} \right\} $ and $ {\gamma _i},{\rm{ }}i \in \left\{ {{\rm{A}},{\rm{ P}},{\rm{ D, T}}} \right\} $ . The velocities and the accelerations of each player are defined as $ {V_i} $ and $ {a_i} $ , $ i \in \left\{ {{\rm{A}},{\rm{ P}},{\rm{ D, T}}} \right\} $ , respectively. The relative distances between the players are described as $ {R_j},{\rm{ }}j \in \left\{ {{\rm{AT}},{\rm{ AD}},{\rm{ PD}}} \right\} $ .

Figure 2. Two-on-two engagement scenario.

The nonlinear kinematics of the players are respectively given as follows.

The closing velocities are represented as

(1) \begin{align}{\dot R_{{\rm{AT}}}} = {V_{{R_{{\rm{AT}}}}}} = {V_{\rm{A}}}\cos \!\left( {{\gamma _{\rm{A}}} - {\lambda _{{\rm{AT}}}}} \right) - {V_{\rm{T}}}\cos \!\left( {{\gamma _{\rm{T}}} - {\lambda _{{\rm{AT}}}}} \right) \\[-24pt] \nonumber \end{align}
(2) \begin{align} {\dot R_{{\rm{AD}}}} = {V_{{R_{{\rm{AD}}}}}} = {V_{\rm{A}}}\cos \!\left( {{\gamma _{\rm{A}}} - {\lambda _{{\rm{AD}}}}} \right) - {V_{\rm{D}}}\cos \!\left( {{\gamma _{\rm{D}}} - {\lambda _{{\rm{AD}}}}} \right) \\[-24pt] \nonumber \end{align}
(3) \begin{align}{\dot R_{{\rm{PD}}}} = {V_{{R_{{\rm{PD}}}}}} = {V_{\rm{P}}}\cos \!\left( {{\gamma _{\rm{P}}} - {\lambda _{{\rm{PD}}}}} \right) - {V_{\rm{D}}}\cos \!\left( {{\gamma _{\rm{D}}} - {\lambda _{{\rm{PD}}}}} \right) \end{align}

The rates of LOS angles are

(4) \begin{align}{R_{{\rm{AT}}}}{\dot \lambda _{{\rm{AT}}}} = {V_{{\lambda _{{\rm{AT}}}}}} = {V_{\rm{A}}}\sin\!\left( {{\gamma _{\rm{A}}} - {\lambda _{{\rm{AT}}}}} \right) - {V_{\rm{T}}}\sin\!\left( {{\gamma _{\rm{T}}} - {\lambda _{{\rm{AT}}}}} \right) \\[-24pt] \nonumber \end{align}
(5) \begin{align}{R_{{\rm{AD}}}}{\dot \lambda _{{\rm{AD}}}} = {V_{{\lambda _{{\rm{AD}}}}}} = {V_{\rm{A}}}\sin\!\left( {{\gamma _{\rm{A}}} - {\lambda _{{\rm{AD}}}}} \right) - {V_{\rm{D}}}\sin\!\left( {{\gamma _{\rm{D}}} - {\lambda _{{\rm{AD}}}}} \right) \\[-24pt] \nonumber \end{align}
(6) \begin{align}{R_{{\rm{PD}}}}{\dot \lambda _{{\rm{PD}}}} = {V_{{\lambda _{{\rm{PD}}}}}} = {V_{\rm{P}}}\sin\!\left( {{\gamma _{\rm{P}}} - {\lambda _{{\rm{PD}}}}} \right) - {V_{\rm{D}}}\sin\!\left( {{\gamma _{\rm{D}}} - {\lambda _{{\rm{PD}}}}} \right) \end{align}

It is assumed that the dynamics of each player are arbitrary-order linear equations.

(7) \begin{align}\dot{\boldsymbol{{x}}}_{i} = \boldsymbol{{A}}_{i}\boldsymbol{{x}}_{i} + \boldsymbol{{b}}_{i}{u^{\prime}_i},{\rm{ }}{a_i} = \boldsymbol{{C}}_{i} \boldsymbol{{x}}_{i} + {d_i}{u^{\prime}_i},{\rm{ }}i \in \left\{ {{\rm{A}},{\rm{P}},{\rm{D,T}}} \right\} \end{align}

The dynamics of the flight path angles are given by

(8) \begin{align}{\dot \gamma _i} = \frac{{{a_i}}}{{{V_i}}},{\rm{ }}i \in \left\{ {{\rm{A}},{\rm{P}},{\rm{D,T}}} \right\} \end{align}

Define the LOS angle between $ {\rm{LO}}{{\rm{S}}_{{\rm{AD}}}} $ and $ {\rm{LO}}{{\rm{S}}_{{\rm{PD}}}} $ as

(9) \begin{align}\lambda = {\lambda _{{\rm{AD}}}} + {\lambda _{{\rm{PD}}}} \end{align}

2.2 Linear kinematics

Suppose that the engagement takes place near the collision triangle, so that the variations of the velocities $ V_{i}, i \in \left\{ {{\rm{A}},{\rm{P}},{\rm{D,T}}} \right\} $ and of the LOS angles about their initial values $ \lambda_{i0}, i \in \left\{ {{\rm{AT}},{\rm{AD}},{\rm{PD}}} \right\}$ are small, allowing the models (1)–(6) to be linearised about the initial LOS in Fig. 3. The initial $ {\rm LOS}_{i0}, i \in \left\{ {{\rm{AT}},{\rm{AD}},{\rm{PD}}} \right\}$ are parallel to the axis $ X $ . Here, ${y_{{\rm{AT}}}}$ denotes the displacement of the target normal to the initial line-of-sight $ {\rm{LO}}{{\rm{S}}_{{\rm{A}}{{\rm{T}}_0}}} $ , while ${y_{{\rm{AD}}}} $ and $ {y_{{\rm{PD}}}} $ denote the displacements of the defender normal to the initial lines-of-sight $ {\rm{LO}}{{\rm{S}}_{{\rm{A}}{{\rm{D}}_0}}} $ and $ {\rm{LO}}{{\rm{S}}_{{\rm{P}}{{\rm{D}}_0}}} $ , respectively. The accelerations and control inputs of the players perpendicular to the initial LOS are respectively represented as

(10) \begin{align}{a_{i{\rm{N}}}} = {a_i}{\chi _i},{\rm{ }}{u_{i{\rm{N}}}} = {u_i}^\prime {\chi _i},{\rm{ }}{\chi _i} = \cos \!\left( {{\gamma _{i0}} - {\lambda _0}} \right),{\rm{ }}i \in \left\{ {{\rm{A}},{\rm{P}},{\rm{D,T}}} \right\} \end{align}

Figure 3. Comparison between the proposed and the other PPFs.

where, the initial LOS angles of the players are expressed as $ {\lambda _0} $ . The subscript 0 indicates the initial state of each player.

The terminal time of engagement among the players can be calculated by

(11) \begin{align}{t_{fj}} = \frac{{{R_j}}}{{{V_{{R_j}}}}},{\rm{ }}j \in \left\{ {{\rm{AT, AD, PD}}} \right\} \end{align}

where, ${R_j}$ and ${V_{{R_j}}}$ , $j \in \left\{ {{\rm{AT, AD, PD}}} \right\}$ are the relative distances and the closing velocities between the attacker-protector team and the defender-target team.

The time-to-go between the attacker and the target, between the attacker and defender, and between the protector and the defender is

(12) \begin{align}{t_{goj}} = {t_{fj}} - t,{\rm{ }}j \in \left\{ {{\rm{AT, AD, PD}}} \right\} \end{align}

Differentiate the linearised form of equation (9) to obtain Equation (13) [Reference Chen, Wang and Huang25].

(13) \begin{align}\dot \lambda = {\dot \lambda _{{\rm{AD}}}} + {\dot \lambda _{{\rm{PD}}}} = \frac{{{y_{{\rm{AD}}}} + {{\dot y}_{{\rm{AD}}}}{t_{go{\rm{AD}}}}}}{{{V_{{R_{{\rm{AD}}}}}}t_{go{\rm{AD}}}^2}} + \frac{{{y_{{\rm{PD}}}} + {{\dot y}_{{\rm{PD}}}}{t_{go{\rm{PD}}}}}}{{{V_{{R_{{\rm{PD}}}}}}t_{go{\rm{PD}}}^2}} \end{align}

Let

(14) \begin{align}{\lambda _1} = \frac{1}{{{V_{{R_{{\rm{AD}}}}}}t_{go{\rm{AD}}}^2}},\,\,{\lambda _2} = \frac{1}{{{V_{{R_{{\rm{PD}}}}}}t_{go{\rm{PD}}}^2}} \end{align}

According to the above definition, (13) is rewritten as follows.

(15) \begin{align}\dot \lambda = {\lambda _1}\!\left( {{y_{{\rm{AD}}}} + {{\dot y}_{{\rm{AD}}}}{t_{go{\rm{AD}}}}} \right) + {\lambda _2}\!\left( {{y_{{\rm{PD}}}} + {{\dot y}_{{\rm{PD}}}}{t_{go{\rm{PD}}}}} \right) \end{align}
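For illustration, the composite LOS-angle rate (13)–(15) can be evaluated directly from the normal displacements, their rates and the time-to-go of each pair. A minimal sketch follows; the function name and numerical values are illustrative only.

```python
def lambda_rate(y_AD, ydot_AD, tgo_AD, VR_AD, y_PD, ydot_PD, tgo_PD, VR_PD):
    """Composite LOS-angle rate of Equations (13)-(15)."""
    lam1 = 1.0 / (VR_AD * tgo_AD ** 2)          # Equation (14)
    lam2 = 1.0 / (VR_PD * tgo_PD ** 2)
    return lam1 * (y_AD + ydot_AD * tgo_AD) + lam2 * (y_PD + ydot_PD * tgo_PD)

# Illustrative values only
print(lambda_rate(y_AD=50.0, ydot_AD=-2.0, tgo_AD=8.0, VR_AD=900.0,
                  y_PD=30.0, ydot_PD=1.5, tgo_PD=6.0, VR_PD=700.0))
```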

Select the following state of the linearised engagement:

(16) \begin{align} \boldsymbol{{x}} = {\left[ {\begin{array}{l@{\quad}l@{\quad}l@{\quad}l@{\quad}l@{\quad}l@{\quad}l@{\quad}l@{\quad}l@{\quad}l@{\quad}l}{{y_{{\rm{AT}}}}} & {}{{{\dot y}_{{\rm{AT}}}}} & {}{\boldsymbol{{x}}_{\rm{T}}^{\rm{T}}}& {\boldsymbol{{x}}_{\rm{A}}^{\rm{T}}} & {}{{y_{{\rm{AD}}}}} & {}{{{\dot y}_{{\rm{AD}}}}} & {}{\boldsymbol{{x}}_{\rm{D}}^{\rm{T}}} & {}{{y_{{\rm{PD}}}}} & {}{{{\dot y}_{{\rm{PD}}}}} & {}{\boldsymbol{{x}}_{\rm{P}}^{\rm{T}}} & {}\lambda \end{array}} \right]^{\rm{T}}} \\[-24pt] \nonumber \end{align}
(17) \begin{align}{\boldsymbol{{x}}_{{\rm{AD}}}} = {\left[ {\begin{array}{l@{\quad}l@{\quad}l@{\quad}l}{\boldsymbol{{x}}_{\rm{A}}^{\rm{T}}} & {}y_{\rm AD} & {}{\dot y}_{\rm AD} & {}{\boldsymbol{{x}}_{\rm{D}}^{\rm{T}}}\end{array}} \right]^{\rm{T}}} \end{align}

wherein, the dimensions of $ \boldsymbol{{x}} $ and $ {\boldsymbol{{x}}_{{\rm{AD}}}} $ are $ {n_{\rm{A}}} + {n_{\rm{T}}} + {n_{\rm{D}}} + {n_{\rm{P}}} + 7 $ and ${n_{\rm{A}}} + {n_{\rm{D}}} + 2 $ , respectively.

Equation (18) is obtained by differentiating the state variable $ \boldsymbol{{x}} $ with respect to time.

(18) \begin{align} \left\{ \begin{array}{l}{{\dot x}_1} = {x_2}\\[3pt]{{\dot x}_2} = {a_{{\rm{TN}}}} - {a_{{\rm{AN}}}}\\[3pt]{\dot{\boldsymbol{{x}}}_{\rm{T}}} = {\boldsymbol{{A}}_{\rm{T}}}{\boldsymbol{{x}}_{\rm{T}}} + {\boldsymbol{{B}}_{\rm{T}}}\,{{u_{{\rm{TN}}}}}/{{\chi _{\rm{T}}}}\\[3pt]{\dot{\boldsymbol{{x}}}_{\rm{A}}} = {\boldsymbol{{A}}_{\rm{A}}}{\boldsymbol{{x}}_{\rm{A}}} + {\boldsymbol{{B}}_{\rm{A}}}\,{{u_{{\rm{AN}}}}} /{{\chi _{\rm{A}}}}\\[3pt]{{\dot x}_{{n_{\rm{A}}} + {n_{\rm{T}}} + 3}} = {x_{{n_{\rm{A}}} + {n_{\rm{T}}} + 4}}\\[3pt]{{\dot x}_{{n_{\rm{A}}} + {n_{\rm{T}}} + 4}} = {a_{{\rm{AN}}}} - {a_{{\rm{DN}}}}\\[3pt]{\dot{\boldsymbol{{x}}}_{\rm{D}}} = {\boldsymbol{{A}}_{\rm{D}}}{\boldsymbol{{x}}_{\rm{D}}} + {\boldsymbol{{B}}_{\rm{D}}}\,{u_{{\rm{DN}}}}/{{\chi _{\rm{D}}}}\\[3pt]{{\dot x}_{{n_{\rm{A}}} + {n_{\rm{T}}} + {n_{\rm{D}}} + 5}} = {x_{{n_{\rm{A}}} + {n_{\rm{T}}} + {n_{\rm{D}}} + 6}}\\[3pt]{{\dot x}_{{n_{\rm{A}}} + {n_{\rm{T}}} + {n_{\rm{D}}} + 6}} = {a_{{\rm{DN}}}} - {a_{{\rm{PN}}}}\\[3pt]{\dot{\boldsymbol{{x}}}_{\rm{P}}} = {\boldsymbol{{A}}_{\rm{P}}}{\boldsymbol{{x}}_{\rm{P}}} + {\boldsymbol{{B}}_{\rm{P}}}\,{{u_{{\rm{PN}}}}}/{{\chi _{\rm{P}}}}\\[3pt]{\dot \lambda} = {{\dot \lambda }_{{\rm{AD}}}} + {{\dot \lambda }_{{\rm{PD}}}} = {\lambda _1}({x_{{n_{\rm{A}}} + {n_{\rm{T}}} + 3}} + {x_{{n_{\rm{A}}} + {n_{\rm{T}}} + 4}}{t_{go{\rm{AD}}}}) + {\lambda _2}({x_{{n_{\rm{A}}} + {n_{\rm{T}}} + {n_{\rm{D}}} + 5}} + {x_{{n_{\rm{A}}} + {n_{\rm{T}}} + {n_{\rm{D}}} + 6}}{t_{go{\rm{PD}}}})\end{array} \right. \end{align}

Assume that the linear guidance strategy is adopted by the defender. The specific form of the linear guidance strategy is represented as follows:

(19) \begin{align}{u_{{\rm{DN}}}} = {\boldsymbol{{K}}_{{\rm{AD}}}}({t_{go{\rm{AD}}}}){\boldsymbol{{x}}_{{\rm{AD}}}} + {k_{{u_{{\rm{DN}}}}}}{u_{{\rm{AN}}}} \end{align}

According to the above definitions, the equation of the state space is written as follows.

(20) \begin{align} \dot{\boldsymbol{{x}}} = \boldsymbol{{Ax}} + \boldsymbol{{B}}{\left[ {\begin{array}{l@{\quad}l}{{u_{{\rm{AN}}}}} & {}{{u_{{\rm{PN}}}}}\end{array}} \right]^{\rm{T}}} + {\boldsymbol{{B}}_{\rm{T}}}{u_{{\rm{TN}}}} \end{align}

where,

\begin{align*} \boldsymbol{{A}} = \left[ {\begin{array}{c@{\quad}c} {{\boldsymbol{{A}}_{{\rm{AT}}}}} & {}{{\textbf{0}_{({n_{\rm{A}}} + {n_{\rm{T}}} + 2) \times ({n_{\rm{D}}} + {n_{\rm{P}}} + 5)}}}\\[3pt]{{\boldsymbol{{A}}_1}} & {{\textbf{0}_{2 \times (2 + {n_{\rm{P}}})}}} \\[3pt]{{\boldsymbol{{A}}_2}} & {{\boldsymbol{{A}}_{{\rm{PD}}}}} \end{array}} \right],\, \boldsymbol{{B}} = \left[ {\begin{array}{l@{\quad}l}{{\boldsymbol{{B}}_{\rm{A}}}} & {}{{\boldsymbol{{B}}_{\rm{P}}}}\end{array}} \right] \end{align*}
\begin{align*}{\boldsymbol{{A}}_{{\rm{AT}}}} = \left[ {\begin{array}{c@{\quad}c@{\quad}c@{\quad}c}0 & {}1 & {}{{\textbf{0}_{1 \times {n_{\rm{T}}}}}} & {}{{\textbf{0}_{1 \times {n_{\rm{A}}}}}}\\[3pt] 0 & {}0 & {}{{\boldsymbol{{C}}_{\rm{T}}}{\chi _{\rm{T}}}} & {}{ - {\boldsymbol{{C}}_{\rm{A}}}{\chi _{\rm{A}}}}\\[3pt]{{\textbf{0}_{{n_{\rm{T}}} \times 1}}} & {}{{\textbf{0}_{{n_{\rm{T}}} \times 1}}} & {}{{\boldsymbol{{A}}_{\rm{T}}}} & {}0\\[3pt]{{\textbf{0}_{{n_{\rm{A}}} \times 1}}} & {}{{\textbf{0}_{{n_{\rm{A}}} \times 1}}} & {}{{\textbf{0}_{{n_{\rm{A}}} \times {n_{\rm{T}}}}}} & {}{{\boldsymbol{{A}}_{\rm{A}}}}\end{array}} \right],\,\,{\boldsymbol{{A}}_{{\rm{PD}}}} = \left[ {\begin{array}{c@{\quad}c@{\quad}c@{\quad}c}{{\textbf{0}_{{n_{\rm{D}}} \times 1}}} & {}{{\textbf{0}_{{n_{\rm{D}}} \times 1}}} & {}{{\textbf{0}_{{n_{\rm{D}}} \times {n_{\rm{P}}}}}} & {}{{\textbf{0}_{{n_{\rm{D}}} \times 1}}}\\[3pt]0 & {}1 & {}{{\textbf{0}_{1 \times {n_{\rm{P}}}}}} & {}0\\[3pt]0 & {}0 & {}{ - {\boldsymbol{{C}}_{\rm{P}}}{\chi _{\rm{P}}}} & {}0\\[3pt] {\textbf{0}_{{n_{\rm{P}}} \times 1}} & {\textbf{0}_{{n_{\rm{P}}} \times 1}} & {\boldsymbol{{A}}_{\rm{P}}} & {\textbf{0}_{{n_{\rm{P}}} \times 1}}\\[3pt]{\lambda _2} & {\lambda _2}{t_{go{\rm{PD}}}} & {\textbf{0}_{1 \times {n_{\rm{P}}}}} & 0 \end{array}} \right], \end{align*}
\begin{align*}{\boldsymbol{{A}}_1} = \left[ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c}{\textbf{0}_{1 \times ({n_{\rm{T}}} + 2)}} & {\textbf{0}_{1 \times {n_{\rm{A}}}}} & 0 & 1 & {{\textbf{0}_{1 \times {n_{\rm{D}}}}}} \\[3pt] {\textbf{0}_{1 \times ({n_{\rm{T}}} + 2)}} & ({\boldsymbol{{C}}_{\rm{A}}} - {d_{\rm{D}}}{\boldsymbol{{k}}_{\rm{A}}}){\chi _{\rm{A}}} & -{d_{\rm{D}}}{k_1} & {-{d_{\rm{D}}}{k_2}} & {-({\boldsymbol{{C}}_{\rm{D}}} + {d_{\rm{D}}}{\boldsymbol{{k}}_{\rm{D}}}){\chi _{\rm{D}}}} \end{array} \right] \end{align*}
\begin{align*}{\boldsymbol{{A}}_2} = \left[\begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c}{{\textbf{0}_{{n_{\rm{D}}} \times ({n_{\rm{T}}} + 2)}}} & {{\boldsymbol{{b}}_{\rm{D}}}{\boldsymbol{{A}}_{\rm{A}}}{\chi _{\rm{A}}}}/{{\chi _{\rm{D}}}} & {{k_1}{\boldsymbol{{b}}_{\rm{D}}}}/{{\chi _{\rm{D}}}} & {{k_2}{\boldsymbol{{b}}_{\rm{D}}}}/{{\chi _{\rm{D}}}} & (\boldsymbol{{A}}_{\rm D} + \boldsymbol{{k}}_{\rm D}\boldsymbol{{b}}_{\rm D}) \\[3pt]{{\textbf{0}_{1 \times ({n_{\rm{T}}} + 2)}}} & \textbf{0}_{1 \times n_{\rm A} } & 0 & 0 & \textbf{0}_{1 \times n_{\rm D} }\\[3pt]{{\textbf{0}_{1 \times ({n_{\rm{T}}} + 2)}}} & d_{\rm D}\boldsymbol{{k}}_{\rm A}\chi_{\rm A} & d_{\rm D}k_{1} & d_{\rm D}k_{2} & (\boldsymbol{{C}}_{\rm D} + d_{\rm D}\boldsymbol{{k}}_{\rm D})\chi_{\rm D}\\[3pt]{{\textbf{0}_{{n_{\rm P}} \times ({n_{\rm{T}}} + 2)}}} & \textbf{0}_{(n_{\rm P} + 1 ) \times n_{\rm A}} & \textbf{0}_{(n_{\rm P} + 1 ) \times 1} & \textbf{0}_{n_{\rm P} \times 1} & \textbf{0}_{n_{\rm P} \times n_{\rm D}}\\[3pt]{{\textbf{0}_{1 \times ({n_{\rm{T}}} + 2)}}} & \textbf{0}_{1 \times n_{\rm A}} & \lambda_{1} & \lambda_{2}t_{go{\rm AD}} & \textbf{0}_{1 \times n_{\rm D}} \end{array} \right] \end{align*}
\begin{align*}\boldsymbol{{B}}_{\rm{A}} = \left[\begin{array}{l@{\quad}l@{\quad}l@{\quad}l@{\quad}l@{\quad}l@{\quad}l@{\quad}l@{\quad}l@{\quad}l} 0 & {}{ - {d_{\rm{A}}}} & {}{{\textbf{0}_{1 \times {n_{\rm{T}}}}}} & \boldsymbol{{b}}_{\rm A}/\chi_{\rm A} & 0 & {({d_{\rm{A}}} - {d_{\rm{D}}}{k_{{u_{{\rm{AN}}}}}})} & {k_{{u_{{\rm{AN}}}}}}{\boldsymbol{{b}}_{\rm{D}}}/\chi_{\rm D} & 0 & {{d_{\rm{D}}}{k_{{u_{{\rm{AN}}}}}}} & {{\textbf{0}_{1 \times ({n_{\rm{P}}} + 1)}}}\end{array} \right]^{\rm{T}}, \end{align*}
\begin{align*}\boldsymbol{{B}}_{\rm P} = \left[\begin{array}{l@{\quad}l@{\quad}l@{\quad}l}{{\textbf{0}_{1 \times ({n_{\rm{A}}} + {n_{\rm{T}}} + {n_{\rm{D}}} + 5)}}} & {}{-{d_{\rm{P}}}} & {}\boldsymbol{{b}}_{\rm P}/\chi_{\rm P} & 0 \end{array} \right]^{\rm{T}},\,\, \boldsymbol{{B}}_{\rm{T}} = \left[\begin{array}{l@{\quad}l@{\quad}l@{\quad}l}0 & {}{{d_{\rm{T}}}} &\boldsymbol{{b}}_{\rm T}/\chi_{\rm T} & {{\textbf{0}_{1 \times ({n_{\rm{A}}} + {n_{\rm{P}}} + {n_{\rm{D}}} + 5)}}} \end{array} \right]^{\rm{T}} \end{align*}

2.3 Order reduction

In order to lower the complexity of the combat problem, the ZEM [Reference Shima, Idan and Golan36] and the ZEL are introduced to reduce the system order; they represent the miss distance and the LOS angle that would result if none of the adversaries applied any control from the current time onward. The ZEM between the attacker and the target is expressed as ${Z_{{\rm{AT}}}}$ . The ZEL between $ {\rm{LO}}{{\rm{S}}_{{\rm{AD}}}} $ and $ {\rm{LO}}{{\rm{S}}_{{\rm{PD}}}} $ is expressed as ${Z_{\rm{L}}}$ according to (9) and (15).

The zero-effort quantities $Z_{i}, i \in \left\{ {{\rm{AT, L}}} \right\}$ are acquired by virtue of the terminal projection transformation [Reference Bryson and Ho37].

(21) \begin{align}{Z_i} = {\boldsymbol{{D}}_i} \boldsymbol{\phi} ({t_{fj}},t) \boldsymbol{{x}},\,i \in \left\{ {{\rm{AT, L}}} \right\},\,\,j \in \left\{ {{\rm{AT, PD}}} \right\} \end{align}

wherein, ${\boldsymbol{{D}}_i},i \in \left\{ {{\rm{AT, L}}} \right\}$ are given by

\begin{align*}{\boldsymbol{{D}}_{{\rm{AT}}}} = \left[ {\begin{array}{l@{\quad}l}1 & {}{{\textbf{0}_{1 \times ({n_{\rm{A}}} + {n_{\rm{T}}} + {n_{\rm{P}}} + {n_{\rm{D}}} + 6)}}}\end{array}} \right],\,\,{\boldsymbol{{D}}_{\rm{L}}} = \left[ {\begin{array}{l@{\quad}l}{{\textbf{0}_{1 \times ({n_{\rm{A}}} + {n_{\rm{T}}} + {n_{\rm{P}}} + {n_{\rm{D}}} + 6)}}} & {}1\end{array}} \right] \end{align*}

$\boldsymbol{\phi} $ is the state transition matrix of the state space equation (20)

\begin{align*} \boldsymbol{\phi} = \left[ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c} \varphi _{1,1} & \cdots & \boldsymbol{\varphi} _{1,3} & \boldsymbol{\varphi}_{1,4} {}& \cdots & \boldsymbol{\varphi}_{1,7} & \cdots & \varphi _{1,9} & \boldsymbol{\varphi}_{1,10} & \varphi _{1,11}\\[3pt]\vdots & \cdots & \vdots & \vdots & \cdots & \vdots & \cdots & \vdots & \vdots & \vdots \\[3pt]\varphi _{10,1} & \cdots & \boldsymbol{\varphi} _{10,3} & \boldsymbol{\varphi} _{10,4} & \cdots & \boldsymbol{\varphi} _{10,7} & \cdots & \varphi _{10,9} & \boldsymbol{\varphi} _{10,10} & \varphi _{10,11}\\[3pt]\varphi _{11,1} & \cdots & \boldsymbol{\varphi} _{11,3} & \boldsymbol{\varphi} _{11,4} & \cdots & \boldsymbol{\varphi} _{11,7} & \cdots & \varphi _{11,9} & \boldsymbol{\varphi} _{11,10} & \varphi _{11,11} \end{array} \right] \end{align*}

and satisfies

(22) \begin{align} {\rm d}t = -{\rm d}t_{goj},\quad \frac{{\rm d}\boldsymbol{\phi}(t_{goj})}{{\rm d}t_{goj}} = -\dot{\boldsymbol{\phi}}(t_{fj}, t) = \boldsymbol{\phi}(t_{goj})\boldsymbol{{A}},\quad \boldsymbol{\phi}(t_{goj} = 0) = \boldsymbol{{I}},\quad j \in \left\{ {{\rm{AT}},{\rm{AD}},{\rm{PD}}} \right\} \end{align}

The following equations are derived according to the above definitions.

(23) \begin{align}{\boldsymbol{{D}}_i} \dot {\boldsymbol{\phi}} ({t_{goj}}) = {\boldsymbol{{D}}_i} \boldsymbol{\phi} ({t_{goj}})\boldsymbol{{A}},\,\,i \in \left\{ {{\rm{AT, L}}} \right\},\,\,j \in \left\{ {{\rm{AT}},{\rm{ PD}}} \right\} \end{align}

Equation (21) can be rewritten by virtue of $ \boldsymbol{\phi} $ and $\boldsymbol{{A}}$ .

(24) \begin{align}\left\{ \begin{array}{l}{{\dot \varphi }_{i,1}} = 0,\\[3pt]{{\dot \varphi }_{i,2}} = {\varphi _{i,1}}{\rm{,}}\\[3pt]{\boldsymbol{\dot \varphi }_{i,3}} = {\boldsymbol{{C}}_{\rm{T}}}{\chi _{\rm{T}}}{\varphi _{i,2}} + {\boldsymbol{{A}}_{\rm{T}}}{\boldsymbol{\varphi} _{i,3}} + {\boldsymbol{{C}}_{\rm{T}}}{\chi _{\rm{T}}}{\varphi _{i,12}},\\[3pt]{\boldsymbol{\dot \varphi }_{i,4}} = - {\boldsymbol{{C}}_{\rm{A}}}{\chi _{\rm{A}}}{\varphi _{i,2}} + {\boldsymbol{{A}}_{\rm{A}}}{\boldsymbol{\varphi}_{i,4}} + ({\boldsymbol{{C}}_{\rm{A}}} - {d_{\rm{D}}}{\boldsymbol{{k}}_{\rm{A}}}){\chi _{\rm{A}}}{\varphi _{i,6}} + \frac{{{\boldsymbol{{b}}_{\rm{D}}}{\boldsymbol{{k}}_{\rm{A}}}{\chi _{\rm{A}}}}}{{{\chi _{\rm{D}}}}}{\boldsymbol{\varphi} _{i,{\rm{7}}}} + {d_{\rm{D}}}{\boldsymbol{{k}}_{\rm{A}}}{\chi _{\rm{A}}}{\varphi _{i,9}},\\[3pt]{{\dot \varphi }_{i,5}} = - {d_{\rm{D}}}{k_1}{\varphi _{i,6}}{\rm{ + }}\frac{{{k_1}{\boldsymbol{{b}}_{\rm{D}}}}}{{{\boldsymbol{\chi} _{\rm{D}}}}}{\varphi _{i,{\rm{7}}}} + {d_{\rm{D}}}{k_1}{\varphi _{i,9}}{\rm{ + }}{\lambda _1}{\varphi _{i,11}}{\rm{,}}\\[3pt]{{\dot \varphi }_{i,6}} = {\varphi _{i,5}} - {d_{\rm{D}}}{k_2}{\varphi _{i,6}} + \frac{{{k_2}{\boldsymbol{{b}}_{\rm{D}}}}}{{{\chi _{\rm{D}}}}}{\boldsymbol{\varphi} _{i,{\rm{7}}}} + {d_{\rm{D}}}{k_2}{\varphi _{i,9}} + {\lambda _1}{t_{go{\rm{AD}}}}{\varphi _{i,11}},\\[3pt]{\boldsymbol{\dot \varphi }_{i,7}} = - ({\boldsymbol{{C}}_{\rm{D}}} + {d_{\rm{D}}}{\boldsymbol{{k}}_{\rm{D}}}){\chi _{\rm{D}}}{\varphi _{i,6}} + ({\boldsymbol{{A}}_{\rm{D}}} + {\boldsymbol{{B}}_{\rm{D}}}{\boldsymbol{{k}}_{\rm{D}}}){\boldsymbol{\varphi} _{i,7}} + ({\boldsymbol{{C}}_{\rm{D}}} + {d_{\rm{D}}}{\boldsymbol{{k}}_{\rm{D}}}){\chi _{\rm{D}}}{\varphi _{i,9}},\\[3pt]{{\dot \varphi }_{i,8}} = {\lambda _2}{\varphi _{i,11}},\\[3pt]{{\dot \varphi }_{i,9}} = {\varphi _{i,8}} + {\lambda _2}{t_{go{\rm{PD}}}}{\varphi _{i,11}},\\[3pt]{\boldsymbol{\dot \varphi }_{i,10}} = - {\boldsymbol{{C}}_{\rm{P}}}{\chi _{\rm{P}}}{\varphi _{i,9}} + {\boldsymbol{{A}}_{\rm{P}}}{\boldsymbol{\varphi} _{i,10}},\\[3pt]{{\dot \varphi }_{i,11}} = 0,\end{array} \right.,\,\,i \in \left\{ {1,{\rm{ }}11} \right\} \end{align}

where, $ \varphi_{1, j}, j \in \left\{ {1,2, \cdots, 11} \right\} $ are the elements of the first row of the state transition matrix $ \boldsymbol{\phi} ({t_{f{\rm{AT}}}},t) $ ; $ \varphi_{11, j}, j \in \left\{ {1,2, \cdots, 11} \right\} $ are the elements of the eleventh row of the state transition matrix $ \boldsymbol{\phi} ({t_{f{\rm{PD}}}},t) $ .

Remark 1. ZEM and ZEL in (21) are used to reduce the order of system (20). The order of the system is changed from $ {n_{\rm{T}}} + {n_{\rm{A}}} + {n_{\rm{D}}} + {n_{\rm{P}}} + 7 $ to 1 in order to lower the calculation burden of the guidance issue.
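As a computational note, the zero-effort quantities in (21) only require the selected rows of the state transition matrix, which can be propagated in time-to-go according to (22)–(23). The sketch below does this for a generic (possibly $t_{go}$-dependent) system matrix supplied by the caller; the function name and the 3-state example matrix are placeholders, not the full matrix of Section 2.2.

```python
import numpy as np

def zero_effort_row(A_fun, D, t_go, steps=200):
    """Propagate one row of the state transition matrix in time-to-go,
    i.e. d(phi)/d(t_go) = phi @ A(t_go) with phi(t_go = 0) = I (Equations (22)-(23)),
    restricted to the single row selected by D (D_AT or D_L in (21)).
    A_fun : function t_go -> system matrix A (placeholder for the matrix of (20))
    D     : 1 x n selector row vector
    Returns the row D @ phi(t_go)."""
    row = np.array(D, dtype=float)        # D @ phi(0) = D, since phi(0) = I
    h = t_go / steps
    s = 0.0
    for _ in range(steps):                # simple forward-Euler march in t_go
        row = row + h * row @ A_fun(s)
        s += h
    return row

# Illustrative 3-state example with a constant placeholder matrix
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, -2.0]])
D = np.array([1.0, 0.0, 0.0])
phi_row = zero_effort_row(lambda s: A, D, t_go=3.0)
Z = phi_row @ np.array([100.0, -5.0, 2.0])   # zero-effort quantity Z = D phi x, Equation (21)
print(Z)
```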

2.4 Nonlinear form of zero-effort quantities

From (21), the expressions of the ZEM and the ZEL are represented as follows.

(25) \begin{align} {Z_i} & = {y_{{\rm{AT}}}}{\varphi _{j,1}} + {\dot y_{{\rm{AT}}}}{\varphi _{j,2}} + {\boldsymbol{\varphi} _{j,3}}{\boldsymbol{{x}}_{\rm{T}}} + {\boldsymbol{\varphi} _{j,4}}{\boldsymbol{{x}}_{\rm{A}}} + {y_{{\rm{AD}}}}{\varphi _{j,5}} \nonumber\\[3pt]& \quad + {\dot y_{{\rm{AD}}}}{\varphi _{j,6}} + {\boldsymbol{\varphi} _{j,7}}{\boldsymbol{{x}}_{\rm{D}}} + {y_{{\rm{PD}}}}{\varphi _{j,8}} + {\dot y_{{\rm{PD}}}}{\varphi _{j,9}} + {\boldsymbol{\varphi} _{j,10}}{\boldsymbol{{x}}_{\rm{P}}} + \lambda {\varphi _{j,11}} \end{align}

wherein, $i \in \left\{ {{\rm{AT, L}}} \right\},$ $j \in \left\{ {1,{\rm{ }}11} \right\}$ .

The following assumption is imposed in order to give the nonlinear form of the zero-effort quantities.

Assumption 1. First-order strictly proper dynamics with time constants $ \tau_{i}, i \in \left\{ {{\rm{A}},{\rm{P}},{\rm{D,T}}} \right\} $ for the attacker, the protector, the defender and the target are defined as

(26) \begin{align} {\boldsymbol{{A}}_i} = \frac{1}{{{\tau _i}}},\ {\boldsymbol{{B}}_i} = \frac{1}{{{\tau _i}}},\ {\boldsymbol{{C}}_i} = 1,\,{d_i} = 0, i \in \left\{ {{\rm{A}},{\rm{P}},{\rm{D,T}}} \right\} \end{align}

Equation (27) is obtained according to Ref. (Reference Kumar and Shima38).

(27) \begin{align} y_{i} + \dot{y}_{i}t_{goi} = V_{\lambda_{i}}t_{goi}, i \in \left\{ {\rm AT}, {\rm PD} \right\}, {y_{{\rm{AD}}}} + {\dot y_{{\rm{AD}}}}{t_{go{\rm{AD}}}} = - {V_{{\lambda _{{\rm{AD}}}}}}{t_{go{\rm{AD}}}} \end{align}

where, ${V_{{\lambda _i}}}$ predefined in (4)-(6) is the relative velocity normal to the initial LOS and ${t_{goi}}$ predefined in (12) represents the time-to-go between the attacker-protector team and the defender-target team.

Under Assumption 1, (25) can be written as Equations (28) and (29).

(28) \begin{align}{Z_{{\rm{AT}}}} = {V_{{\lambda _{{\rm{AT}}}}}}{t_{go{\rm{AT}}}} + {a_{{\rm{TN}}}}{\hat \varphi _{1,3}} + {a_{{\rm{AN}}}}{\hat \varphi _{1,4}} - {V_{{\lambda _{{\rm{AD}}}}}}{\varphi _{1,6}} + {a_{{\rm{DN}}}}{\hat \varphi _{1,7}} + {a_{{\rm{PN}}}}{\hat \varphi _{1,10}} \\[-24pt] \nonumber \end{align}
(29) \begin{align}{Z_{\rm{L}}} = {V_{{\lambda _{{\rm{PD}}}}}}{t_{go{\rm{PD}}}} + {a_{{\rm{TN}}}}{\hat \varphi _{11,3}} + {a_{{\rm{AN}}}}{\hat \varphi _{11,4}} - {V_{{\lambda _{{\rm{AD}}}}}}{\varphi _{11,6}} + {a_{{\rm{DN}}}}{\hat \varphi _{11,7}} + {a_{{\rm{PN}}}}{\hat \varphi _{11,10}} + \lambda \\[6pt] \nonumber \end{align}

where, $\hat{\varphi}_{l,j} = \varphi_{l,j}/\chi_{i}$, with $i \in \{ {\rm T, A, D, P}\}$ corresponding respectively to $j \in \left\{ {3,4,7,10} \right\}$, and $l \in \{1, 11\}$ .

On differentiating (28) and (29), utilising the chain rule of differentiation and collecting similar terms, we obtain

(30) \begin{align}{\dot Z_{{\rm{AT}}}} & = {\dot V_{{\lambda _{{\rm{AT}}}}}}{t_{go{\rm{AT}}}} + {\dot a_{{\rm{TN}}}}{\hat \varphi _{1,3}} + {\dot a_{{\rm{AN}}}}{\hat \varphi _{1,4}} - {\dot V_{{\lambda _{{\rm{AD}}}}}}{\varphi _{1,6}} + {\dot a_{{\rm{DN}}}}{\hat \varphi _{1,7}} + {\dot a_{{\rm{PN}}}}{\hat \varphi _{1,10}} \nonumber \\[3pt]& \quad + {\dot t_{go{\rm{AT}}}}\left( {{V_{{\lambda _{{\rm{AT}}}}}} + {a_{{\rm{TN}}}}{\dot{ \hat{\varphi}}_{1,3}} + {a_{{\rm{AN}}}}{\dot{ \hat{\varphi}}_{1,4}} - {V_{{\lambda _{{\rm{AD}}}}}}{{\dot \varphi }_{1,6}} + {a_{{\rm{DN}}}}{\dot{ \hat{\varphi}}_{1,7}} + {a_{{\rm{PN}}}}{\dot{ \hat{\varphi}}_{1,10}}} \right) \\[-24pt] \nonumber \end{align}
(31) \begin{align}{\dot Z_{\rm{L}}} & = {\dot a_{{\rm{TN}}}}{\hat \varphi _{11,3}} + {\dot a_{{\rm{AN}}}}{\hat \varphi _{11,4}} - {\dot V_{{\lambda _{{\rm{AD}}}}}}{\varphi _{11,6}} + {\dot a_{{\rm{DN}}}}{\hat \varphi _{11,7}} - {\dot V_{{\lambda _{{\rm{PD}}}}}}{\varphi _{11,9}} + {\dot a_{{\rm{PN}}}}{\hat \varphi _{11,10}} + \dot \lambda \nonumber \\[3pt] & \quad + {\dot t_{go{\rm{PD}}}}\left( {{a_{{\rm{TN}}}}{\dot{ \hat{\varphi}}_{11,3}} + {a_{{\rm{AN}}}}{\dot{ \hat{\varphi}}_{11,4}} - {V_{{\lambda _{{\rm{AD}}}}}}{{\dot \varphi }_{11,6}} + {a_{{\rm{DN}}}}{\dot{ \hat{\varphi}}_{11,7}} - {V_{{\lambda _{{\rm{PD}}}}}}{{\dot \varphi }_{11,9}} + {a_{{\rm{PN}}}}{\dot{ \hat{\varphi}}_{11,10}}} \right) \\[6pt] \nonumber \end{align}

wherein, the dynamics $ {\dot \varphi }_{i,j}, i \in \{1, 11\}, j \in \left\{ {3, \cdots, 10} \right\} $ of the state transition matrices are given in (24). $ {\dot a_{i{\rm{N}}}}, $ $ i \in \left\{ {{\rm{A}},{\rm{P}},{\rm{D,T}}} \right\} $ are the acceleration dynamics of each player and satisfy

(32) \begin{align}{\dot a_{i{\rm{N}}}} = \frac{{{u_{i{\rm{N}}}} - {a_{i{\rm{N}}}}}}{{{\tau _i}}} + {d_i},{\rm{ }}i \in \left\{ {{\rm{A}},{\rm{P}},{\rm{D,T}}} \right\} \end{align}

$ {\tau _i} $ is a predefined time constant of each player, and $ {d_i} $ denotes the dynamic error and external disturbance of the system, which satisfies

(33) \begin{align}\left| {{d_i}} \right| \le {d_{iM}},{\rm{ }}i \in \left\{ {{\rm{A}},{\rm{P}},{\rm{D,T}}} \right\} \end{align}

where, the upper bound of the dynamics error $ {d_i} $ is defined as $ {d_{iM}} $ .
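Under Assumption 1, each player's lateral acceleration obeys the first-order lag (32) with the bounded disturbance (33). A minimal Euler-integration sketch of this autopilot model is given below; the function name, time constant, disturbance bound and command profile are illustrative.

```python
import numpy as np

def simulate_autopilot(u_cmd, tau=0.2, d_max=1.0, dt=0.01, T=2.0, a0=0.0, seed=0):
    """First-order acceleration dynamics a_dot = (u - a)/tau + d, |d| <= d_max,
    corresponding to Equations (32)-(33). u_cmd is a function of time."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    a = np.empty(n + 1); a[0] = a0
    for k in range(n):
        d = d_max * (2.0 * rng.random() - 1.0)      # bounded disturbance sample
        a[k + 1] = a[k] + dt * ((u_cmd(k * dt) - a[k]) / tau + d)
    return a

a_hist = simulate_autopilot(lambda t: 30.0 * np.sign(np.sin(2.0 * t)))
print(a_hist[-1])
```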

Take the time derivative of the nonlinear kinematics (1)–(6) and the time-to-go (12) to yield the following equations (34)–(38).

(34) \begin{align}{\dot V_{{R_{{\rm{A}}l}}}} = \frac{{V_{{\lambda _{{\rm{A}}l}}}^2}}{{{R_{{\rm{A}}l}}}} + {a_{\rm{A}}}\sin\!\left( {{\gamma _{\rm{A}}} - {\lambda _{{\rm{A}}l}}} \right) - {a_l}\sin\!\left( {{\gamma _l} - {\lambda _{{\rm{A}}l}}} \right),\,l \in \left\{ {{\rm{T, D}}} \right\} \\[-24pt] \nonumber \end{align}
(35) \begin{align}{\dot V_{{\lambda _{{\rm{A}}l}}}} = - \frac{{{V_{{R_{{\rm{A}}l}}}}{V_{{\lambda _{{\rm{A}}l}}}}}}{{{R_{{\rm{A}}l}}}} - {a_{\rm{A}}}\cos \!\left( {{\gamma _{\rm{A}}} - {\lambda _{{\rm{A}}l}}} \right) + {a_l}\cos \!\left( {{\gamma _l} - {\lambda _{{\rm{A}}l}}} \right),\,l \in \left\{ {{\rm{T, D}}} \right\} \\[-24pt] \nonumber \end{align}
(36) \begin{align}{\dot V_{{R_{{\rm{P}}l}}}} = \frac{{V_{{\lambda _{{\rm{P}}l}}}^2}}{{{R_{{\rm{P}}l}}}} + {a_{\rm{P}}}\sin\!\left( {{\gamma _{\rm{P}}} - {\lambda _{{\rm{P}}l}}} \right) - {a_l}\sin\!\left( {{\gamma _l} - {\lambda _{{\rm{P}}l}}} \right),\,l \in \left\{ {{\rm{T, D}}} \right\} \\[-24pt] \nonumber \end{align}
(37) \begin{align}{\dot V_{{\lambda _{{\rm{P}}l}}}} = - \frac{{{V_{{R_{{\rm{P}}l}}}}{V_{{\lambda _{{\rm{P}}l}}}}}}{{{R_{{\rm{P}}l}}}} - {a_{\rm{P}}}\cos \!\left( {{\gamma _{\rm{P}}} - {\lambda _{{\rm{P}}l}}} \right) + {a_l}\cos \!\left( {{\gamma _l} - {\lambda _{{\rm{P}}l}}} \right),\,l \in \left\{ {{\rm{T, D}}} \right\} \\[-24pt] \nonumber \end{align}
(38) \begin{align} \dot{t}_{goi} = -\frac{R_{i} \dot{V}_{R_{i}}}{V^{2}_{R_{i}}},\ i \in \left\{ {{\rm{AT}},{\rm{ AD, PD}}} \right\} \\[3pt] \nonumber \end{align}

Substituting equations (34)–(38) into (30) and (31), the following equations are obtained.

(39) \begin{align}{\dot Z_{{\rm{AT}}}} = {F_{{\rm{AT}}}} + \frac{{{{\hat \varphi }_{1,4}}}}{{{\tau _{\rm{A}}}}}{u_{{\rm{AN}}}} + \frac{{{{\hat \varphi }_{1,10}}}}{{{\tau _{\rm{P}}}}}{u_{{\rm{PN}}}} + {d_{{\rm{AT}}}} \end{align}

wherein,

\begin{align*}{F_{{\rm{AT}}}} & = \frac{{{R_{{\rm{AT}}}}{{\dot V}_{{R_{{\rm{AT}}}}}}}}{{V_{{R_{{\rm{AT}}}}}^2}}\left[ {{V_{{\lambda _{{\rm{AT}}}}}} + ({a_{{\rm{TN}}}} - {a_{{\rm{AN}}}}){t_{go{\rm{AT}}}} - (\frac{{{a_{{\rm{TN}}}}}}{{{\tau _{\rm{T}}}}}{{\hat \varphi }_{1,3}} + \frac{{{a_{{\rm{AN}}}}}}{{{\tau _{\rm{A}}}}}{{\hat \varphi }_{1,4}}) + ({V_{{\lambda _{{\rm{AD}}}}}}{{\dot \lambda }_{{\rm{AD}}}} + {a_{{\rm{AN}}}} - {a_{{\rm{DN}}}}){\varphi _{1,6}}} \right. \\[3pt] & \qquad -\left. {\frac{{{k_{\rm{A}}}{a_{{\rm{AN}}}} + {k_{\rm{D}}}{a_{{\rm{DN}}}} + {k_2}{V_{{\lambda _{{\rm{AD}}}}}}}}{{{\tau _{\rm{D}}}}}{{\hat \varphi }_{1,7}} - (\frac{{{a_{{\rm{DN}}}}}}{{{\tau _{\rm{D}}}}}{{\hat \varphi }_{1,7}} + \frac{{{a_{{\rm{PN}}}}}}{{{\tau _{\rm{P}}}}}{{\hat \varphi }_{1,10}})} \right] + \frac{{{{\hat \varphi }_{1,3}}}}{{{\tau _{\rm{T}}}}}{u_{{\rm{TN}}}} \end{align*}
\begin{align*}{d_{{\rm{AT}}}} = {\hat \varphi _{1,3}}{d_{\rm{T}}} + {\hat \varphi _{1,4}}{d_{\rm{A}}} + {\hat \varphi _{1,7}}{d_{\rm{D}}} + {\hat \varphi _{1,10}}{d_{\rm{P}}} \end{align*}
(40) \begin{align}{\dot Z_{\rm{L}}} = {F_{\rm{L}}} + \frac{{{{\hat \varphi }_{11,4}}}}{{{\tau _{\rm{A}}}}}{u_{{\rm{AN}}}} + \frac{{{{\hat \varphi }_{11,10}}}}{{{\tau _{\rm{P}}}}}{u_{{\rm{PN}}}} + {d_{\rm{L}}} \end{align}

wherein,

\begin{align*}{F_{\rm{L}}} = \frac{{{R_{{\rm{PD}}}}{{\dot V}_{{R_{{\rm{PD}}}}}}}}{{V_{{R_{{\rm{PD}}}}}^2}}\left[ {({a_{{\rm{DN}}}} - {a_{{\rm{PN}}}}){t_{go{\rm{PD}}}} - (\frac{{{a_{{\rm{TN}}}}}}{{{\tau _{\rm{T}}}}}{{\hat \varphi }_{11,3}} + \frac{{{a_{{\rm{AN}}}}}}{{{\tau _{\rm{A}}}}}{{\hat \varphi }_{11,4}})} \right. - ({\dot \lambda _{{\rm{AD}}}}{V_{{R_{{\rm{AD}}}}}} + {a_{{\rm{AN}}}} - {a_{{\rm{DN}}}}){\varphi _{11,6}} \end{align*}
\begin{align*}\left. { - \frac{{{k_{\rm{A}}}{a_{{\rm{AN}}}} + {k_{\rm{D}}}{a_{{\rm{DN}}}} + {k_2}{V_{{\lambda _{{\rm{AD}}}}}}}}{{{\tau _{\rm{D}}}}}{{\hat \varphi }_{11,7}} - {\lambda _1}{V_{{\lambda _{{\rm{AD}}}}}} + {\lambda _2}{V_{{\lambda _{{\rm{PD}}}}}}} \right] + \frac{{{{\hat \varphi }_{11,3}}}}{{{\tau _{\rm{T}}}}}{u_{{\rm{TN}}}} \end{align*}
\begin{align*}{d_{\rm{L}}} = {\hat \varphi _{11,3}}{d_{\rm{T}}} + {\hat \varphi _{11,4}}{d_{\rm{A}}} + {\hat \varphi _{11,7}}{d_{\rm{D}}} + {\hat \varphi _{11,10}}{d_{\rm{P}}} \end{align*}

3.0 Design process of cooperative guidance scheme

3.1 Fixed-time PPF design

The dynamics of the ZEM and the ZEL were introduced in the previous section. In this subsection, novel PPFs for the upper and lower boundaries are designed, inspired by Ref. (Reference Truong, Vo and Kang33), to enhance the performance of the guidance scheme, and they are compared with other PPFs.

The upper boundary of the PPF is

(41) \begin{align} {\rho _{ui}}(t) = \left\{\begin{array}{l@{\quad}l}({\rho _{0i}} - {\rho _{\infty i}}) (1 -t/T_{di})^{{b_i}/(1-b_{i})} + {\rho _{\infty i}}, & 0 \le t \lt {T_{di}},\\[5pt] {\rho _{\infty i}}, & t \geq {T_{di}},\end{array} \right.,\,i \in \left\{ {{\rm{AT}},{\rm{ L}}} \right\} \end{align}

The lower boundary of the PPF is

(42) \begin{align}{\rho _{li}}(t) = \left\{\begin{array}{l@{\quad}l}({\rho _{1i}} - {\rho _{\infty i}}) (1 -t/T_{di})^{{b_i}/(1-b_{i})} + {\rho _{\infty i}}, & 0 \le t \lt {T_{di}},\\[5pt]{\rho _{\infty i}}, & t \geq {T_{di}},\end{array} \right.,\,i \in \left\{ {{\rm{AT}},{\rm{ L}}} \right\} \end{align}

According to the proposed PPFs given in (41) and (42), the predefined range of the ZEM and ZEL is

(43) \begin{align} - {\rho _{li}}(t) \lt {Z_i}(t){\rm{sign(}}{Z_i}{\rm{(}}0{\rm{)) \lt }}{\rho _{ui}}(t),\,\,i \in \left\{ {{\rm{AT}},{\rm{ L}}} \right\} \end{align}

where, ${\rho _{ui}}(t)$ and ${\rho _{li}}(t)$ are smooth and continuous functions, which satisfy $\mathop {\lim }\limits_{t \to {T_{di}}} {\rho _{ui}}(t) = {\rho _{\infty i}}$ and $\mathop {\lim }\limits_{t \to {T_{di}}} {\rho _{li}}(t) = {\rho _{\infty i}}$ . ${\rho _{\infty i}}$ is the desired convergence boundary within a fixed-time period ${T_{di}}$ , ${\rho _{0i}} \gt $ ${\rho _{1i}} \gt {\rho _{\infty i}} \gt 0$ , ${b_i} = {y_1} - {y_2}\cos \!({t\pi }/ T_{di})$ , the positive constants ${y_1}$ and ${y_2}$ satisfy $0.5 \le {y_1} - {y_2} \lt 1$ , $0.5 \le {y_1} + {y_2} \lt 1$ . And $0 \lt \left| {{Z_i}(0)} \right| \lt {\rho _{ui}}(0)$ is assumed.
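A minimal Python sketch of the proposed boundaries (41)–(42) with the time-varying exponent $b_i(t)$ defined above is given below; the function name and parameter values are illustrative and chosen only to satisfy the stated constraints.

```python
import numpy as np

def ppf_boundaries(t, T_d=3.0, rho_0=5.17, rho_1=4.0, rho_inf=0.1, y1=0.7, y2=0.2):
    """Fixed-time prescribed performance boundaries of Equations (41)-(42).
    Requires rho_0 > rho_1 > rho_inf > 0, 0.5 <= y1 - y2 < 1, 0.5 <= y1 + y2 < 1."""
    if t >= T_d:
        return rho_inf, rho_inf
    b = y1 - y2 * np.cos(np.pi * t / T_d)          # time-varying exponent b_i(t)
    base = (1.0 - t / T_d) ** (b / (1.0 - b))      # common shaping term
    rho_u = (rho_0 - rho_inf) * base + rho_inf     # upper boundary, Equation (41)
    rho_l = (rho_1 - rho_inf) * base + rho_inf     # lower boundary, Equation (42)
    return rho_u, rho_l

for t in (0.0, 1.0, 2.0, 3.0, 4.0):
    print(t, ppf_boundaries(t))
```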

Taking the derivative of the PPFs given in (41) and (42), we obtain

(44) \begin{align} {\dot \rho _{ui}}(t) = \left\{ \begin{array}{c@{\quad}l}({\rho _{0i}} - {\rho _{\infty i}}) (1-t/T_{di})^{b_{i}/(1-b_{i})} \left( \frac{{\dot b}_{i}}{(1 - {b_i})^2} \ln (1-t/T_{di}) -\frac{b_i}{(1 - {b_i})({T_{di}} - t)} \right), & 0 \le t \lt {T_{di}} \\[6pt]0, & t \geq {T_{di}} \end{array} \right. \end{align}
(45) \begin{align} {\dot \rho _{li}}(t) = \left\{ \begin{array}{c@{\quad}l}({\rho _{1i}} - {\rho _{\infty i}}) (1-t/T_{di})^{b_{i}/(1-b_{i})} \left( \frac{{\dot b}_{i}}{(1 - {b_i})^2} \ln (1-t/T_{di}) -\frac{b_i}{(1 - {b_i})({T_{di}} - t)} \right), & 0 \le t \lt {T_{di}} \\[6pt]0, & t \geq {T_{di}} \end{array} \right. \end{align}

Remark 2. The traditional PPF in Ref. (Reference Ma, Zhou, Li and Lu39) is designed as

(46) \begin{align}{\rho _{ui}}(t) = ({\rho _{0i}} - {\rho _{\infty i}}){e^{ - {\rho _1}t}} + {\rho _{\infty i}} \end{align}

where, ${\rho _1} = 3,$ ${\rho _{0i}} = 5.17,$ ${\rho _{\infty i}} = 0.1$ . The comparison between the novel PPFs given in (41) and (42) and the traditional PPF given in (46) is performed in Fig. 3 to illustrate the advantages of the novel PPFs. It can be seen from Fig. 3 that, compared with the traditional PPF, the novel PPFs have a better convergence effect, since the traditional PPF uses a single function to form both performance boundaries: the resulting performance region, bounded above by ${\rho _{ui}}$ and below by $\delta {\rho _{li}}$ , $0 \lt \delta \lt 1$ , is not symmetric about zero, which greatly affects the convergence accuracy.

Remark 3. The specific form of the fixed-time PPF recently proposed in Ref. (Reference Zhuang, Tan, Li and Song32) is

(47) \begin{align}\left\{ \begin{array}{l}\rho (0) = {\rho _{0i}}\\[3pt] {{\dot \rho }_{ui}}(t) = - {\rho _2}{\left| {{\rho _{ui}}(t) - {\rho _{\infty i}}} \right|^\alpha }{\rm{sign(}}{\rho _{ui}}{\rm{(}}t{\rm{)}} - {\rho _{\infty i}}) - {\rho _3}{\left| {{\rho _{ui}}(t) - {\rho _{\infty i}}} \right|^\beta }{\rm{sign(}}{\rho _{ui}}{\rm{(}}t{\rm{)}} - {\rho _{\infty i}})\end{array} \right. \end{align}

where, ${\rho _2} = 5.17,$ ${\rho _3} = 1.23,$ $\alpha = 0.21,$ $\beta = 1.3$ , ${\rho _{0i}} = 5.17,$ ${\rho _{\infty i}} = 0.1$ . Similarly, Fig. 3 demonstrates the effectiveness of the novel PPFs by comparing them with the fixed-time PPF (47). The convergence of the upper and lower boundaries is guaranteed by the fixed-time PPF. However, the chattering phenomenon degrades its performance due to the existence of the sign function, and its upper boundary cannot be shaped flexibly since the desired convergence period is not included in the fixed-time PPF (47).

A transformation function is employed to convert the ZEM and ZEL, which is defined as

(48) \begin{align}\left\{ \begin{array}{l}{Z_i} = {\rho _i}(t){E_i}({\varepsilon _i})\\[3pt]{\rho _i}(t) = \left\{ \begin{array}{l}{\rho _{ui}}(t)\ {\rm{ for\ sign(}}{Z_i}{\rm{(}}0{\rm{))}}{Z_i}{\rm{(}}t{\rm{)}} \geq {\rm{0}}\\[3pt]{\rho _{li}}(t)\ {\rm{ for\ sign(}}{Z_i}{\rm{(}}0{\rm{))}}{Z_i}{\rm{(}}t{\rm{) \lt 0}}\end{array} \right.\end{array} \right.,\,\,i \in \left\{ {{\rm{AT}},{\rm{ L}}} \right\} \end{align}

where, ${k_{{\varepsilon _i}}} \gt 0$ and $0 \le {\delta _{{\varepsilon _i}}} \le 1$ are constants, ${E_i}({\varepsilon _i})$ is the transformation function, which satisfies $ - 1 \lt {E_i}({\varepsilon _i}) \lt 1$ , and ${\varepsilon _i}$ , $i \in \left\{ {{\rm{AT}},{\rm{ L}}} \right\}$ are the transformed variables.

Based on equation (48), we obtain

(49) \begin{align} - {\rho _{li}}(t) \lt {Z_i}(t) \lt {\rho _{ui}}(t) \end{align}

And the transformation function (50) is devised based on equations (48)–(49).

(50) \begin{align}{E_i}({\varepsilon _i}) = \frac{{1 + {\delta _{{\varepsilon _i}}}}}{\pi }\arctan ({k_{{\varepsilon _i}}}{\varepsilon _i}) + \frac{{1 - {\delta _{{\varepsilon _i}}}}}{2},\,\,i \in \left\{ {{\rm{AT}},{\rm{ L}}} \right\} \end{align}

Equation (51) is obtained from equations (48) and (50).

(51) \begin{align}\arctan \!({k_{{\varepsilon _i}}}{\varepsilon _i}) & = \frac{{\pi {Z_i}(t)}}{{(1 + {\delta _{{\varepsilon _i}}}){\rho _i}(t)}} - \frac{{\pi (1 - {\delta _{{\varepsilon _i}}})}}{{2(1 + {\delta _{{\varepsilon _i}}})}} \nonumber \\[3pt] & = \frac{{2\pi {Z_i}(t) + \pi ({\delta _{{\varepsilon _i}}} - 1){\rho _i}(t)}}{{2(1 + {\delta _{{\varepsilon _i}}}){\rho _i}(t)}} \end{align}

Applying the tangent function to both sides of equation (51) and multiplying by $1/k_{\varepsilon_{i}}$, we get

(52) \begin{align}{\varepsilon _i} = \frac{1}{{{k_{{\varepsilon _i}}}}}\tan \left( {\frac{{2\pi {Z_i}(t) + \pi ({\delta _{{\varepsilon _i}}} - 1){\rho _i}(t)}}{{2(1 + {\delta _{{\varepsilon _i}}}){\rho _i}(t)}}} \right),\,\,i \in \left\{ {{\rm{AT}},{\rm{ L}}} \right\} \end{align}
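For a quick numerical check, the transformed variable (52) can be evaluated directly from the constrained quantity $Z_i$ and the active boundary $\rho_i$; a minimal sketch with an illustrative function name and parameter values is:

```python
import numpy as np

def transform_error(Z, rho, k_eps=1.0, delta_eps=1.0):
    """Transformed variable of Equation (52):
    eps = (1/k_eps) * tan( (2*pi*Z + pi*(delta-1)*rho) / (2*(1+delta)*rho) ).
    rho is the active boundary rho_ui or rho_li selected as in (48)."""
    arg = (2.0 * np.pi * Z + np.pi * (delta_eps - 1.0) * rho) / (2.0 * (1.0 + delta_eps) * rho)
    return np.tan(arg) / k_eps

print(transform_error(Z=0.5, rho=5.17))
```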

Differentiate equation (52) to yield the following equation (53).

(53) \begin{align} {\dot \varepsilon _i} & = \frac{1}{{{k_{{\varepsilon _i}}}{{\cos }^2}\!\left( {\frac{{2\pi {Z_i}(t) + \pi ({\delta _{{\varepsilon _i}}} - 1){\rho _i}(t)}}{{2(1 + {\delta _{{\varepsilon _i}}}){\rho _i}(t)}}} \right)}} \times \left( {\frac{{\left( {2\pi {{\dot Z}_i}(t) + \pi ({\delta _{{\varepsilon _i}}} - 1){{\dot \rho }_i}(t)} \right) \times 2(1 + {\delta _{{\varepsilon _i}}}){\rho _i}(t)}}{{4{{(1 + {\delta _{{\varepsilon _i}}})}^2}\rho _i^2(t)}}} \right. \nonumber\\[3pt] & \quad \left. { - \frac{{\left( {2\pi {Z_i}(t) + \pi ({\delta _{{\varepsilon _i}}} - 1){\rho _i}(t)} \right) \times 2(1 + {\delta _{{\varepsilon _i}}}){{\dot \rho }_i}(t)}}{{4{{(1 + {\delta _{{\varepsilon _i}}})}^2}\rho _i^2(t)}}} \right) \nonumber\\[3pt]& = \frac{1}{{{k_{{\varepsilon _i}}}{{\cos }^2}\!\left( {\frac{{2\pi {Z_i}(t) + \pi ({\delta _{{\varepsilon _i}}} - 1){\rho _i}(t)}}{{2(1 + {\delta _i}){\rho _i}(t)}}} \right)}} \times \left( {\frac{{\pi {{\dot Z}_i}(t)(1 + {\delta _i}){\rho _i}(t) - \pi {Z_i}(t)(1 + {\delta _i}){{\dot \rho }_i}(t)}}{{{{(1 + {\delta _i})}^2}\rho _i^2(t)}}} \right) \nonumber \\[3pt]& = \frac{{\pi \!\left( {{{\dot Z}_i}(t){\rho _i}(t) - {Z_i}(t){{\dot \rho }_i}(t)} \right)}}{{{k_{{\varepsilon _i}}}(1 + {\delta _{{\varepsilon _i}}})\rho _i^2(t){{\cos }^2}\left( {\frac{{2\pi {Z_i}(t) + \pi ({\delta _{{\varepsilon _i}}} - 1){\rho _i}(t)}}{{2(1 + {\delta _{{\varepsilon _i}}}){\rho _i}(t)}}} \right)}},\,\,i \in \left\{ {{\rm{AT}},{\rm{ L}}} \right\} \end{align}

Substituting equations (39) and (40) into (53), we acquire

(54) \begin{align} {\dot \varepsilon _i} & = {N_i}(t){\rho _i}(t){F_i} + \frac{{{N_i}(t){\rho _i}(t){{\hat \varphi }_{j,4}}}}{{{\tau _{\rm{A}}}}}{u_{{\rm{AN}}}} + \frac{{{N_i}(t){\rho _i}(t){{\hat \varphi }_{j,10}}}}{{{\tau _{\rm{P}}}}}{u_{{\rm{PN}}}} \nonumber \\[3pt]& \quad + {N_i}(t){\dot \rho _i}(t){Z_i}(t) + {d_{i1}},\,\,i \in \left\{ {{\rm{AT}},{\rm{ L}}} \right\},\,\,j \in \left\{ {1,{\rm{ }}11} \right\} \end{align}

where, ${N_i}(t) = \frac{\pi }{{{k_{{\varepsilon _i}}}(1 + {\delta _{{\varepsilon _i}}})\rho _i^2(t){{\cos }^2}\left( {\frac{{2\pi {Z_i}(t) + \pi ({\delta _{{\varepsilon _i}}} - 1){\rho _i}(t)}}{{2(1 + {\delta _{{\varepsilon _i}}}){\rho _i}(t)}}} \right)}}$ , ${d_{i1}} = {N_i}{\rm{(}}t){\rho _i}{\rm{(}}t){d_i}$ .

3.2 Fixed-time disturbance observer design

In the guidance system, the external disturbance in (54) consists of the players' dynamic errors scaled by the PPF-related terms. Thus, a new disturbance observer is proposed to reduce the impact of the external disturbance on the guidance system.

Assumption 2. $ {d_{i1}} $ of (54) is differentiable and bounded. Therefore, we suppose that it satisfies

\begin{align*} \left\| {{{\dot d}_{i1}}} \right\| \le {\dot d_{i1M}},{\rm{ }}i \in \left\{ {{\rm{AT, L}}} \right\} \end{align*}

wherein, $ {d_{i1M}} $ is the upper bound of $ {d_{i1}},{\rm{ }}i \in \left\{ {{\rm{AT, L}}} \right\} $ .

Theorem 1. Considering the system described by (54), the fixed-time disturbance observer (55) is designed to estimate the unknown disturbance of the system within a fixed-time ${T_z}$ .

(55) \begin{align} \left\{ \begin{array}{l}{{\dot z}_{1i}} = - {v_1}\frac{{{e_i}}}{{{{\left\| {{e_i}} \right\|}^{1/2}}}} - {v_2}{e_i}{\left\| {{e_i}} \right\|^{p - 1}} + {z_{2i}} + {N_i}(t){\rho _i}(t){F_i} + \frac{{{N_i}(t){\rho _i}(t){{\hat \varphi }_{j,4}}}}{{{\tau _{\rm{A}}}}}{u_{{\rm{AN}}}}\\[3pt] \quad + \frac{{{N_i}(t){\rho _i}(t){{\hat \varphi }_{j,10}}}}{{{\tau _{\rm{P}}}}}{u_{{\rm{PN}}}} + {N_i}(t){\dot \rho}_{i}(t){Z_i}(t),\\[3pt]{{\dot z}_{2i}} = - {v_3} \frac{e_{i}}{||e_{i} ||^{1/2}}, \end{array} \right.\,\,i \in \{{\rm AT}, {\rm L} \},\,\,j \in \{ 1, 11 \} \end{align}

wherein, $ e_{i} = z_{1i} - \varepsilon_{i}$, $p \gt 1$, ${v_1} \gt \sqrt {2{v_3}} $ and ${v_3} \gt 4{d_{i1M}}$ are constants, and $\left\| \bullet \right\|$ denotes the Euclidean norm of a vector or the induced norm of a matrix.

Proof. Differentiate ${e_i} = {z_{1i}} - {\varepsilon _i},{\rm{ }}i \in \left\{ {{\rm{AT, L}}} \right\}$ to obtain (56).

(56) \begin{align}{\dot e_i} = {\dot z_{1i}} - {\dot \varepsilon _i} = - {v_1} \frac{e_i} {||{e_i}||^{1/2}} - {v_2}{e_i}{\left\| {{e_i}} \right\|^{p - 1}} + {z_{2i}} - {d_{i1}}\end{align}

Let ${\tilde z_i} = {z_{2i}} - {d_{i1}}$ , and take the derivative with respect to time to yield (57).

(57) \begin{align}\dot{\tilde{z_i}} = {\dot z_{2i}} - {\dot d_{i1}} = - {v_3}\frac{e_i} {||{e_i}||^{1/2}}- {\dot d_{i1}} \end{align}


According to Assumption 2 and Ref. (Reference Michael, Chandrasekhara and Yuri40), ${e_i}$ and ${\tilde z_i}$ converge to zero within a fixed time, and the settling time ${T_z}$ is bounded by

(58) \begin{align}{T_z} \le \left( \frac{1}{{{v_2}(p - 1){v^{p - 1}}}} + \frac{2v^{1/2}}{v_{1}}\right)\left( 1 +\frac{ {v_{3} + d_{i1M}}} {(v_{3}-d_{i1M})(1-\sqrt{2v_{3}}/v_{1})} \right)\end{align}

wherein, $ v = (v_{1}/v_{2})^{1/(p+1/2)}$ . Based on the definition of ${\tilde z_i}$ , ${z_{2i}}$ converges to ${d_{i1}},$ ${\rm{ }}i \in \left\{ {{\rm{AT, L}}} \right\}$ . This completes the proof.
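For implementation, the observer (55) can be integrated alongside the transformed dynamics (54). The following minimal sketch isolates one scalar channel; the known model terms of (54) are lumped into a user-supplied value f_known, and the toy signal, function name and gains are illustrative choices that respect the stated parameter conditions.

```python
import numpy as np

def observer_step(z1, z2, eps, f_known, dt, v1=3.0, v2=2.0, v3=4.0, p=1.5):
    """One Euler step of the fixed-time disturbance observer (55) for a scalar channel.
    z1 tracks the transformed variable eps; z2 converges to the lumped disturbance d_i1.
    f_known : the known right-hand side of (54) (model terms without the disturbance)."""
    e = z1 - eps                                          # observation error e_i = z_1i - eps_i
    e_norm = max(abs(e), 1e-9)                            # avoid division by zero at e = 0
    z1_dot = (-v1 * e / np.sqrt(e_norm)
              - v2 * e * e_norm ** (p - 1.0)
              + z2 + f_known)
    z2_dot = -v3 * e / np.sqrt(e_norm)
    return z1 + dt * z1_dot, z2 + dt * z2_dot

# Illustrative run: eps follows a known term plus a slowly varying disturbance
z1, z2, dt = 0.0, 0.0, 1e-3
eps = 0.0
for k in range(5000):
    t = k * dt
    d = 0.5 * np.sin(0.5 * t)          # unknown disturbance to be estimated
    f = 1.0                            # "known" model term for this toy example
    eps += dt * (f + d)                # toy stand-in for (54)
    z1, z2 = observer_step(z1, z2, eps, f, dt)
print(z2, 0.5 * np.sin(0.5 * 5000 * dt))   # z2 should approach the true disturbance
```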

3.3 Cooperative guidance strategy design

In this subsection, a cooperative guidance strategy with a protection role, considering the anti-saturation condition, is proposed based on the fixed-time SMC theory to reduce the ${Z_{{\rm{AT}}}}$ and ${Z_{\rm{L}}}$ . It is meaningful to study an anti-saturation cooperative guidance strategy since the accelerations of the attacker and the protector are bounded in practical combat.

Lemma 1. [Reference Zou and Tie41] Let ${x_1},{x_2}, \ldots, {x_n} \geqslant 0$ , and $0 \lt \xi \le 1$ . Then

(59) \begin{align}\sum\limits_{i = 1}^n {x_i^\xi } \geq {\left( {\sum\limits_{i = 1}^n {{x_i}} } \right)^\xi } \end{align}

Based on the above analysis, (54) can be rewritten as

(60) \begin{align} \boldsymbol{\dot \varepsilon} = \boldsymbol{{F}} + \boldsymbol{{MZ}} + \boldsymbol{{Gu}} + \boldsymbol{{d}} \end{align}

where,

\begin{align*} \boldsymbol{\varepsilon} = \left[ {\begin{array}{*{20}{c}}{{ {\varepsilon} _{{\rm{AT}}}}}\\[3pt]{{\varepsilon _{\rm{L}}}}\end{array}} \right],\,\,\boldsymbol{{F}} = \left[ {\begin{array}{*{20}{c}}{{N_{{\rm{AT}}}}{\rho _{{\rm{AT}}}}{F_{{\rm{AT}}}}}\\[3pt]{{N_{\rm{L}}}{\rho _{\rm{L}}}{F_{\rm{L}}}}\end{array}} \right],\,\,\boldsymbol{{M}} = \left[ {\begin{array}{*{20}{c}}{{N_{{\rm{AT}}}}{{\dot \rho }_{{\rm{AT}}}}}\\[3pt]{{N_{\rm{L}}}{{\dot \rho }_{\rm{L}}}}\end{array}} \right] \end{align*}
\begin{align*}\boldsymbol{{u}} = \left[ {\begin{array}{*{20}{c}}{{u_{{\rm{AN}}}}}\\[3pt]{{u_{{\rm{PN}}}}}\end{array}} \right],\,\,\boldsymbol{{G}} = \left[ {\begin{array}{*{20}{c}}{\dfrac{{{N_{{\rm{AT}}}}{\rho _{{\rm{AT}}}}{{\hat \varphi }_{1,4}}}}{{{\tau _{\rm{A}}}}}} & {}{\dfrac{{{N_{{\rm{AT}}}}{\rho _{{\rm{AT}}}}{{\hat \varphi }_{1,10}}}}{{{\tau _{\rm{P}}}}}}\\[6pt] {\dfrac{{{N_{\rm{L}}}{\rho _{\rm{L}}}{{\hat \varphi }_{11,4}}}}{{{\tau _{\rm{A}}}}}} & {}{\dfrac{{{N_{\rm{L}}}{\rho _{\rm{L}}}{{\hat \varphi }_{11,10}}}}{{{\tau _{\rm{P}}}}}}\end{array}} \right],\,\,\boldsymbol{{d}} = \left[ {\begin{array}{*{20}{c}}{{N_{{\rm{AT}}}}{\rho _{{\rm{AT}}}}{d_{{\rm{AT}}}}}\\[3pt]{{N_{\rm{L}}}{\rho _{\rm{L}}}{d_{\rm{L}}}}\end{array}} \right] \end{align*}

$sat({u_i})$ is the saturation function, defined as

(61) \begin{align}sat({u_i}) = \left\{ \begin{array}{l@{\quad}l}u_i^{\max }, & {\rm{ }}u_i^{\max } \lt {u_i}\\[3pt]{u_i}, & - u_i^{\max } \le {u_i} \le u_i^{\max }\\[3pt] - u_i^{\max }, & {u_i} \lt - u_i^{\max }\end{array} \right.,\,\,(i = {\rm{AN}},{\rm{PN}}) \end{align}

Let

(62) \begin{align}{\boldsymbol{{u}}_1} = sat(\boldsymbol{{u}}),\,{\rm{\Delta }}\boldsymbol{{u}} = \boldsymbol{{u}} - {\boldsymbol{{u}}_1} \end{align}
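As a concrete restatement of (61) and (62), the short Python sketch below (illustrative, with assumed variable names) clips each control channel to its limit and returns the saturation excess $\Delta\boldsymbol{u}$ that drives the anti-saturation auxiliary variable.

```python
import numpy as np

def saturate(u, u_max):
    """Saturation (61) and excess (62) for the command vector u = [u_AN, u_PN].

    u_max holds the per-channel limits [u_AN_max, u_PN_max].
    Returns (u1, delta_u) with u1 = sat(u) and delta_u = u - u1.
    """
    u = np.asarray(u, dtype=float)
    u_max = np.asarray(u_max, dtype=float)
    u1 = np.clip(u, -u_max, u_max)   # element-wise sat(.) from (61)
    return u1, u - u1                # delta_u feeds the auxiliary dynamics in (66)
```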

Define the sliding mode surface as

(63) \begin{align} \boldsymbol{{S}} = \boldsymbol{\varepsilon} - \frac{1}{{{k_\gamma }}}\tan\!(\boldsymbol{\unicode{x1D6EF}}) \end{align}

where $\boldsymbol{\Xi} = {\left[ \begin{array}{c@{\quad}c} \dfrac{({\delta _{{\varepsilon _{{\rm{AT}}}}}} - 1)\pi }{2({\delta _{{\varepsilon _{{\rm{AT}}}}}} + 1)} & \dfrac{({\delta _{{\varepsilon _{\rm{L}}}}} - 1)\pi }{2({\delta _{{\varepsilon _{\rm{L}}}}} + 1)} \end{array} \right]^{\rm{T}}}$ .
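The sliding surface (63) is easy to evaluate once the prescribed-performance parameters are fixed; the following Python sketch (illustrative only) computes $\boldsymbol{S}$ for the two channels.

```python
import numpy as np

def sliding_surface(eps, delta_eps, k_gamma):
    """Sliding surface (63): S = eps - tan(Xi)/k_gamma.

    eps       : [eps_AT, eps_L], the prescribed-performance states
    delta_eps : [delta_eps_AT, delta_eps_L], each chosen in [0, 1]
    """
    eps = np.asarray(eps, dtype=float)
    delta_eps = np.asarray(delta_eps, dtype=float)
    xi = (delta_eps - 1.0) * np.pi / (2.0 * (delta_eps + 1.0))  # entries of Xi
    return eps - np.tan(xi) / k_gamma
```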

Choose the following fixed-time reaching law in Ref. (Reference Zhang, Dong and Zhang42) to weaken the chattering phenomenon caused by the sign function.

(64) \begin{align}\dot{\boldsymbol{{S}}} = - {\theta _1}\frac{\boldsymbol{{S}}}{{{{\left\| \boldsymbol{{S}} \right\|}^{\frac{1}{2}}}}} - {\theta _2}\boldsymbol{{S}}{\left\| \boldsymbol{{S}} \right\|^{{p_1} - 1}} - \int_{{t_0}}^t {{\theta _3}} \frac{{\boldsymbol{{S}}(\tau )}}{{\left\| {\boldsymbol{{S}}(\tau )} \right\|}}d\tau \end{align}

where $ {p_1} \gt 1 $, ${\theta _1} \gt \sqrt {2{\theta _3}}$ and $ {\theta _2} \gt 0 $ are positive constants. The sliding mode manifold $ \boldsymbol{{S}} $ converges to zero in fixed time, and the settling time is bounded by

(65) \begin{align}{T_{{r_1}}} \le \left( {\frac{1}{{{\theta _2}({p_1} - 1){\theta ^{{p_1} - 1}}}} + \frac{{2{\theta ^{1/2}}}}{{{\theta _1}}}} \right)\left( {1 + \frac{{{\theta _3} + {d_{i1M}}}}{{({\theta _3} - {d_{i1M}})\left(1 - \sqrt {2{\theta _3}} /{\theta _1}\right)}}} \right) \end{align}

where $\theta = (\theta_{1}/ \theta_{2})^{1/(p_{1} + 1/2)}$ .
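The bound (65) can be evaluated directly from the gains; the helper below is a sketch that simply transcribes the formula, with `d_bound` standing in for $d_{i1M}$ (whose value is not given here).

```python
import numpy as np

def t_r1_bound(theta1, theta2, theta3, p1, d_bound):
    """Upper bound (65) on the reaching time of the sliding surface S.

    Requires p1 > 1, theta1 > sqrt(2*theta3), theta2 > 0 and theta3 > d_bound.
    """
    theta = (theta1 / theta2) ** (1.0 / (p1 + 0.5))
    first = 1.0 / (theta2 * (p1 - 1.0) * theta ** (p1 - 1.0)) + 2.0 * np.sqrt(theta) / theta1
    second = 1.0 + (theta3 + d_bound) / ((theta3 - d_bound) * (1.0 - np.sqrt(2.0 * theta3) / theta1))
    return first * second

# Example call with the Section 4.1 gains (d_bound = 0.5 is a placeholder value):
# t_r1_bound(3.7, 3.7, 3.05, 1.5, d_bound=0.5)
```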

Theorem 2. Consider the system (60) subject to the acceleration constraints of the target and the defender. Based on the disturbance observer designed in (55) and the first-order fixed-time anti-saturation auxiliary variable $ \boldsymbol{\eta} $, the cooperative guidance strategy (66) ensures that the sliding surface $ \boldsymbol{{S}}$ converges to zero within a fixed time under Assumptions 1 and 2.

(66) \begin{align} \boldsymbol{{u}} = - {\boldsymbol{{G}}^{ - 1}}\left[ { \boldsymbol{{F}} + \boldsymbol{{z}} + \boldsymbol{{MZ}} + {k_\eta } \boldsymbol{\eta} + {\theta _1}\frac{\boldsymbol{{S}}}{{{{\left\| \boldsymbol{{S}} \right\|}^{\frac{1}{2}}}}} + {\theta _2} \boldsymbol{{S}} {{\left\| \boldsymbol{{S}} \right\|}^{{p_1} - 1}} + \int_{{t_0}}^t {{\theta _3}} \frac{{\boldsymbol{{S}}(\tau )}}{{\left\| {\boldsymbol{{S}}(\tau )} \right\|}}d\tau } \right] \end{align}
\begin{align*} \boldsymbol{\dot \eta} = - {k_{\eta 1}} \boldsymbol{\eta} + {k_\eta } \boldsymbol{{S}} {\rm{ + \Delta }} \boldsymbol{{u}} - {k_{\eta 2}}\frac{\boldsymbol{\eta} }{{{{\left\| \boldsymbol{\eta} \right\|}^{\frac{1}{2}}}}} - {k_{\eta 3}}\boldsymbol{\eta} {\left\| \boldsymbol{\eta} \right\|^{{p_1} - 1}} - \int_{{t_0}}^t {{k_{\eta 4}}} \frac{{\boldsymbol{\eta} (\tau )}}{{\left\| {\boldsymbol{\eta} (\tau )} \right\|}}d\tau \end{align*}

where $\boldsymbol{{z}} = {\left[ {\begin{array}{l@{\quad}l}{{z_{{\rm{2AT}}}}} & {{z_{2{\rm{L}}}}}\end{array}} \right]^{\rm{T}}}$ is the fixed-time observer state given in (55).
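To make the structure of (66) concrete, the following Python sketch (a simplified illustration under the assumption that the caller supplies the model terms of (60) and the observer state as numpy arrays, not the authors' code) performs one guidance update: it forms the unsaturated command, saturates it, and propagates the anti-saturation auxiliary variable and the two integral terms with explicit Euler steps.

```python
import numpy as np

def guidance_step(S, eta, int_S, int_eta, F, M, Z, G, z, u_max, dt,
                  theta1, theta2, theta3, p1,
                  k_eta, k_eta1, k_eta2, k_eta3, k_eta4):
    """One update of the cooperative guidance law (66) with input saturation.

    S, eta          : sliding surface and auxiliary variable (2-vectors)
    int_S, int_eta  : accumulated integral terms of (66) (2-vectors)
    F, M, Z, z      : channel-wise model terms of (60) and observer state (2-vectors)
    G               : 2x2 input matrix of (60); u_max : per-channel limits
    Returns the applied (saturated) command u1 and the updated (eta, int_S, int_eta).
    """
    nS = max(np.linalg.norm(S), 1e-9)
    neta = max(np.linalg.norm(eta), 1e-9)

    # Unsaturated command from (66); M*Z is applied channel-wise
    reach = theta1 * S / nS ** 0.5 + theta2 * S * nS ** (p1 - 1.0) + int_S
    u = -np.linalg.solve(G, F + z + M * Z + k_eta * eta + reach)

    # Saturation (61)-(62)
    u1 = np.clip(u, -u_max, u_max)
    delta_u = u - u1

    # First-order fixed-time anti-saturation auxiliary dynamics from (66)
    eta_dot = (-k_eta1 * eta + k_eta * S + delta_u
               - k_eta2 * eta / neta ** 0.5
               - k_eta3 * eta * neta ** (p1 - 1.0)
               - int_eta)

    # Accumulate the integral terms of (66), then advance eta
    int_S = int_S + dt * theta3 * S / nS
    int_eta = int_eta + dt * k_eta4 * eta / neta
    eta = eta + dt * eta_dot

    return u1, eta, int_S, int_eta
```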

Proof. The following Lyapunov function is considered to prove Theorem 2.

(67) \begin{align}V = \frac{1}{2}{ \boldsymbol{{S}}^{\rm{T}}} \boldsymbol{{S}} + \frac{1}{2}{ \boldsymbol{\eta} ^{\rm{T}}} \boldsymbol{\eta} \end{align}

Differentiating $V$ with respect to time and substituting (60) and $ \boldsymbol{\dot{\eta}}$ yields

(68) \begin{align}\dot V &= { \boldsymbol{{S}}^{\rm{T}}} \dot{\boldsymbol{{S}}} + { \boldsymbol{\eta} ^{\rm{T}}} \boldsymbol{\dot \eta} = {\boldsymbol{{S}}^{\rm{T}}}\left[ {\boldsymbol{{F}} + \boldsymbol{{MZ}} + \boldsymbol{{G}}{\boldsymbol{{u}}_1} + \boldsymbol{{d}}} \right] + {\boldsymbol{\eta} ^{\rm{T}}} \boldsymbol{\dot \eta} \nonumber \\[3pt]&= {\boldsymbol{{S}}^{\rm{T}}}\left[ {\boldsymbol{{F}} + \boldsymbol{{MZ}} + \boldsymbol{{Gu}} + \boldsymbol{{d}} - \boldsymbol{{G}}{\rm{\Delta }}\boldsymbol{{u}}} \right] + {\boldsymbol{\eta} ^{\rm{T}}} \boldsymbol{\dot \eta} \nonumber \\[3pt]&= {\boldsymbol{{S}}^{\rm{T}}}\left[ { - {\theta _1}\frac{\boldsymbol{{S}}}{{{{\left\| \boldsymbol{{S}} \right\|}^{\frac{1}{2}}}}} - {\theta _2}\boldsymbol{{S}}{{\left\| \boldsymbol{{S}} \right\|}^{{p_1} - 1}} - \int_{{t_0}}^t {{\theta _3}} \frac{{\boldsymbol{{S}}(\tau )}}{{\left\| {\boldsymbol{{S}}(\tau )} \right\|}}d\tau - {k_\eta } \boldsymbol{\eta} - \boldsymbol{{G}}{\rm{\Delta }}\boldsymbol{{u}}} \right] - {k_{\eta 1}}{\boldsymbol{\eta} ^{\rm{T}}} \boldsymbol{\eta} \nonumber \\[3pt]&\quad + {k_\eta }{\boldsymbol{\eta} ^{\rm{T}}} \boldsymbol{{S}} + {\boldsymbol{\eta}^{\rm{T}}}{\rm{\Delta }}\boldsymbol{{u}} - {k_{\eta 2}}{\left\| \boldsymbol{\eta} \right\|^{\frac{1}{2}}} - {k_{\eta 3}}{\left\| \boldsymbol{\eta} \right\|^{{p_1}}} - {\boldsymbol{\eta} ^{\rm{T}}}\int_{{t_0}}^t {{k_{\eta 4}}} \frac{{\boldsymbol{\eta} (\tau )}}{{\left\| {\boldsymbol{\eta} (\tau )} \right\|}}d\tau \nonumber \\[3pt]&\le - {\theta _1}{\left\| \boldsymbol{{S}} \right\|^{\frac{1}{2}}} - {\theta _2}{\left\| \boldsymbol{{S}} \right\|^{{p_1}}} - {\boldsymbol{{S}}^{\rm{T}}}\int_{{t_0}}^t {{\theta _3}} \frac{{\boldsymbol{{S}}(\tau )}}{{\left\| {\boldsymbol{{S}}(\tau )} \right\|}}d\tau - {k_{\eta 1}}{\boldsymbol{\eta} ^{\rm{T}}}\boldsymbol{\eta} - {\boldsymbol{{S}}^{\rm{T}}}\boldsymbol{{G}}{\rm{\Delta }}\boldsymbol{{u}} \nonumber \\[3pt]&\quad + {\boldsymbol{\eta}^{\rm{T}}}{\rm{\Delta }} \boldsymbol{{u}} - {k_{\eta 2}}{\left\| \boldsymbol{\eta} \right\|^{\frac{1}{2}}} - {k_{\eta 3}}{\left\| \boldsymbol{\eta} \right\|^{{p_1}}} - {\boldsymbol{\eta} ^{\rm{T}}}\int_{{t_0}}^t {{k_{\eta 4}}} \frac{{\boldsymbol{\eta} (\tau )}}{{\left\| {\boldsymbol{\eta} (\tau )} \right\|}}d\tau \end{align}

In light of Young’s inequality,

(69) \begin{align}{\boldsymbol{{S}}^{\rm{T}}}\boldsymbol{{G}}{\rm{\Delta }}\boldsymbol{{u}} &\le \frac{1}{2}{\boldsymbol{{S}}^{\rm{T}}}{\boldsymbol{{G}}^{\rm{T}}}\boldsymbol{{GS}} + \frac{1}{2}{\rm{\Delta }}{\boldsymbol{{u}}^{\rm{T}}}{\rm{\Delta }}\boldsymbol{{u}} \nonumber \\[3pt]& \le \frac{1}{2}{\left\| \boldsymbol{{G}} \right\|^2}{\boldsymbol{{S}}^{\rm{T}}}\boldsymbol{{S}} + \frac{1}{2}{\rm{\Delta }}{\boldsymbol{{u}}^{\rm{T}}}{\rm{\Delta }}\boldsymbol{{u}} \\[-24pt] \nonumber \end{align}
(70) \begin{align}{\boldsymbol{\eta} ^{\rm{T}}}{\rm{\Delta }}\boldsymbol{{u}} \le \frac{1}{2}{\boldsymbol{\eta} ^{\rm{T}}}\boldsymbol{\eta} + \frac{1}{2}{\rm{\Delta }}{\boldsymbol{{u}}^{\rm{T}}}{\rm{\Delta }}\boldsymbol{{u}} \end{align}

Based on the above analysis and Lemma 1, (68) can be rewritten as the following inequality.

(71) \begin{align}\dot V &\le - \frac{1}{2}{\left\| \boldsymbol{{G}} \right\|^2}{\boldsymbol{{S}}^{\rm{T}}}\boldsymbol{{S}} - {\theta _1}{\left\|\boldsymbol{{S}}\right\|^{\frac{1}{2}}} - {\theta _2}{\left\|\boldsymbol{{S}}\right\|^{{p_1}}} - {\boldsymbol{{S}}^{\rm{T}}}\int_{{t_0}}^t {{\theta _3}} \frac{{\boldsymbol{{S}}(\tau )}}{{\left\| {\boldsymbol{{S}}(\tau )} \right\|}}d\tau - \left( {{k_{\eta 1}} - \frac{1}{2}} \right){\boldsymbol{\eta} ^{\rm{T}}}\boldsymbol{\eta} \nonumber\\[3pt] &\quad - {k_{\eta 2}}{\left\| \boldsymbol{\eta} \right\|^{\frac{1}{2}}} - {k_{\eta 3}}{\left\| \boldsymbol{\eta} \right\|^{{p_1}}} - {\boldsymbol{\eta} ^{\rm{T}}}\int_{{t_0}}^t {{k_{\eta 4}}} \frac{{\boldsymbol{\eta} (\tau )}}{{\left\| {\boldsymbol{\eta} (\tau )} \right\|}}d\tau \nonumber \\[3pt]&\le - {\theta _1}{\left\|\boldsymbol{{S}}\right\|^{\frac{1}{2}}} - {\theta _2}{\left\|\boldsymbol{{S}}\right\|^{{p_1}}} - {\boldsymbol{{S}}^{\rm{T}}}\int_{{t_0}}^t {{\theta _3}} \frac{{\boldsymbol{{S}}(\tau )}}{{\left\| {\boldsymbol{{S}}(\tau )} \right\|}}d\tau - {k_{\eta 2}}{\left\| \boldsymbol{\eta} \right\|^{\frac{1}{2}}} - {k_{\eta 3}}{\left\| \boldsymbol{\eta} \right\|^{{p_1}}} - {\boldsymbol{\eta} ^{\rm{T}}}\int_{{t_0}}^t {{k_{\eta 4}}} \frac{{\boldsymbol{\eta} (\tau )}}{{\left\| {\boldsymbol{\eta} (\tau )} \right\|}}d\tau \nonumber\\[3pt] &\le - {k_{{\theta _1}}}{V^{\frac{1}{4}}} - {k_{{\theta _2}}}{V^{\frac{{{p_1}}}{2}}} - {\boldsymbol{{S}}^{\rm{T}}}\int_{{t_0}}^t {{\theta _3}} \frac{{\boldsymbol{{S}}(\tau )}}{{\left\| {\boldsymbol{{S}}(\tau )} \right\|}}d\tau - {\boldsymbol{\eta} ^{\rm{T}}}\int_{{t_0}}^t {{k_{\eta 4}}} \frac{{\boldsymbol{\eta} (\tau )}}{{\left\| {\boldsymbol{\eta} (\tau )} \right\|}}d\tau \le 0 \end{align}

where ${k_{{\theta _1}}}$ and ${k_{{\theta _2}}}$ are positive constants. According to the above analysis, the total settling time is bounded by ${T_r} = {T_{{r_1}}} + {T_{{r_2}}} + {T_z}$, where ${T_{{r_2}}}$ satisfies

(72) \begin{align}{T_{{r_2}}} \le \left( {\frac{1}{{{k_{\eta 1}}({p_1} - 1){\varepsilon ^{{p_1} - 1}}}} + \frac{{2k_{\eta 4}^{1/2}}}{{{\theta _1}}}} \right)\left( {1 + \frac{{{k_{\eta 3}} + {d_{i1M}}}}{{({k_{\eta 3}} - {d_{i1M}})\left(1 - \sqrt {2{k_{\eta 3}}} /{k_{\eta 1}}\right)}}} \right) \end{align}

where ${k_{\eta 4}} = {({k_{\eta 1}}/{k_{\eta 2}})^{1/({p_1} + 1/2)}}$ .

Based on the above derivations and Equation (63), the sliding surface converges to zero, which yields $\boldsymbol{\varepsilon} = \frac{1}{{{k_\gamma }}}\tan (\boldsymbol{\Xi})$ . Since $\tan ({\bullet})$ is an increasing function, choosing the parameters ${k_\gamma } \gt 0$ and $0 \le {\delta _{{\varepsilon _i}}} \le 1$, $i \in \left\{ {{\rm{AT, L}}} \right\}$, ensures that the prescribed performance variable satisfies $\boldsymbol{\varepsilon} \le \boldsymbol{0}$. Therefore, the state variables ${Z_i}$, $i \in \left\{ {{\rm{AT, L}}} \right\}$, converge to zero and satisfy the prescribed performance constraint.

This completes the proof.

It follows from the definition of the sliding surface $ \boldsymbol{{S}} $ that the ZEM between the attacker and the target, and the ZEL between $ {\rm{LO}}{{\rm{S}}_{{\rm{AD}}}} $ and $ {\rm{LO}}{{\rm{S}}_{{\rm{PD}}}} $, are reduced to zero. Therefore, the protector remains on the LOS between the defender and the attacker to intercept the defender.

Based on Ref. (Reference Sun, Qi and Hou43), the target adopts the following optimal avoidance guidance law to evade the interception of the attacker.

(73) \begin{align}u_{\rm TN} = -{\rm sign} \left[{\hat \varphi}_{1,3} /\tau_{\rm T}\right] {\rm{sign}}\left[ {{Z_{{\rm{AT}}}}(t)} \right]u_{{\rm{TN}}}^{\max } \end{align}

where ${\hat \varphi _{1,3}}$ , ${\tau _{\rm{T}}}$ and ${Z_{{\rm{AT}}}}(t)$ are defined in (24), (26) and (28), and $u_{{\rm{TN}}}^{\max }$ is the maximum manoeuver of the target.
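For completeness, the evasion command (73) is a simple bang-bang law and can be coded in one line (an illustrative sketch; variable names are assumptions):

```python
import numpy as np

def target_evasion(phi_hat_13, tau_T, Z_AT, u_TN_max):
    """Optimal bang-bang avoidance command (73) for the target."""
    return -np.sign(phi_hat_13 / tau_T) * np.sign(Z_AT) * u_TN_max
```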

Remark 4. The LOS guidance strategy proposed in this paper is a cooperation between the attacker and the protector against the defender-target team to reduce the ZEM and the ZEL while maintaining a stable communication connection. A LOS guidance strategy in which only one side (the attacker or the protector) controls the convergence of the LOS angle can be developed following the design process of the cooperative LOS guidance strategy. For example, when the protector controls the LOS angle, the ZEL and its derivative are rewritten as $ {Z_{{\rm{PL}}}} = {Z_{\rm{L}}} + \int_t^{{t_{f{\rm{PD}}}}} {\frac{{{{\hat \varphi }_{11,4}}(\varepsilon )}}{{{\tau _{\rm{A}}}}}{u_{{\rm{AN}}}}(\varepsilon ){\rm{d}}\varepsilon } $ and $ {\dot Z_{{\rm{PL}}}} = {F_{\rm{L}}} + \frac{{{{\hat \varphi }_{11,10}}}}{{{\tau _{\rm{P}}}}}{u_{{\rm{PN}}}} + {d_{\rm{L}}} $ . By virtue of the SMC theory, the guidance scheme in which the protector controls the LOS angle can then be devised for the above ZEL system.

4.0 Simulations

The feasibility and superiority of the cooperative guidance strategy (66) are illustrated by nonlinear numerical simulation in this section. The information required for the participants’ guidance scheme is assumed to be obtained by sensors onboard. The defender adopts the proportional navigation guidance law to intercept the attacker in the nonlinear simulations.

4.1 Simulation of the cooperative guidance scheme

The cooperative LOS guidance scheme (66) is verified in this subsection, in which the protector remains on the LOS between the defender and the attacker and intercepts the defender. The simulation parameters are listed in Table 1.

The parameters of (66) are separately chosen as $ {\theta _1} = 3.7,{\rm{ }}{\theta _2} = 3.7,{\rm{ }}{\theta _3} = 3.05,{\rm{ }}{k_\gamma } = 0.1, $ $ {k_\eta } = 0.001, $ ${\rm{ }}{k_{\eta 1}} = 0.6,$ ${k_{\eta 2}} = 2.05,$ ${k_{\eta 3}} = 5.05,$ ${k_{\eta 4}} = 0.11,$ ${p_1} = 1.5,$ $u_{{\rm{AN}}}^{\max } = 150{\rm m}/{\rm s}^{2}, u_{{\rm{PN}}}^{\max } = 150{\rm m}/{\rm s}^{2}, u_{{\rm{TN}}}^{\max } = 170{\rm m}/{\rm s}^{2}$ ; the prescribed performance parameters of (66) are respectively selected as ${\rho _{0{\rm{AT}}}} = 465.17,$ ${\rho _{{\rm{1AT}}}} = 465.05,$ ${\rho _{\infty {\rm{AT}}}} = 0.5$ , ${y_1} = 0.8,$ ${y_2} = 0.1,$ ${\rho _{0{\rm{L}}}} = 3.71,$ ${\rho _{{\rm{1L}}}} = 3.75,$ ${\rho _{\infty {\rm{L}}}} = 0.01,$ ${T_{d{\rm{AT}}}} = {T_{d{\rm{L}}}} = 3.26{\rm{s}},$ ${k_\gamma } = 15,$ ${\delta _{{\varepsilon _{{\rm{AT}}}}}} = {\delta _{{\varepsilon _{\rm{L}}}}} = 1.$ The parameters of the fixed-time disturbance observer designed in (55) are $ {v_1} = 15,{v_2} = 22,{v_3} = 0.001,p = 1.2 $ . Without loss of generality, the initial values of the observer are given as zero.

The engagement trajectories of the players for the guidance strategy (66) are shown in Fig. 4. As can be seen from Fig. 4, the attacker is able to capture the target with the assistance of the protector intercepting the defender, which demonstrates that the engagement purpose is achieved.

Figure 4. Trajectories of the guidance strategy (66).

Figures 5 and 6 indicate the time evolutions of the zero-effort quantities ${Z_i}$ , $i \in \left\{ {{\rm{AT, L}}} \right\}$ . It can be seen from Fig. 5 that the ZEM ${Z_{{\rm{AT}}}}$ increases and converges to 0m within the upper bound of the convergence time ${T_r} = 3.6369{\rm{s}}$ , which illustrates that the target is intercepted by the attacker; ${\rho _{u{\rm{AT}}}}$ and ${\rho _{l{\rm{AT}}}}$ converge to ${\rho _{\infty {\rm{AT}}}}$ within the fixed-time bound ${T_{d{\rm{AT}}}} = 3.26{\rm{s}}$ . Based on Fig. 6, the ZEL ${Z_{\rm{L}}}$ converges to 0 $\deg $ within the upper bound of the convergence time ${T_r} = 3.6369{\rm{s}}$ , which demonstrates that the protector remains on the LOS between the defender and the attacker and captures the defender to guard the attacker. ${\rho _{u{\rm{L}}}}$ and ${\rho _{l{\rm{L}}}}$ converge to ${\rho _{\infty {\rm{L}}}}$ within the fixed-time bound ${T_{d{\rm{L}}}} = 3.26{\rm{s}}$ , which ensures the prescribed performance of the cooperative guidance strategy.

Figure 5. ZEM between the attacker and the target.

Figure 6. ZEL between the protector and the attacker-defender.

The variation tendencies of the fixed-time observer states ${z_{2i}},{\rm{ }}i \in \left\{ {{\rm{AT}},{\rm{L}}} \right\}$ with respect to time are illustrated in Fig. 7. Note that the states ${z_{{\rm{2AT}}}}$ and ${z_{{\rm{2L}}}}$ decrease and converge to 0 within the upper bound of the convergence time ${T_z} = 1.8535{\rm{s}}$ . According to Fig. 7 and Theorem 1, the fixed-time observer compensates for the external disturbance and thus improves the performance of the guidance system.

Figure 7. Fixed-time observer state ${z_{2i}},{\rm{ }}i = {\rm{AT}},{\rm{L}}$ .

The simulation results for the attacker-protector team's accelerations ${a_{\rm{P}}}$ and ${a_{\rm{A}}}$ over time are shown in Fig. 8. It is worth noting from Fig. 8 that the acceleration ${a_{\rm{P}}}$ increases and then converges to $0{\rm{m/}}{{\rm{s}}^2}$ while satisfying the control input saturation constraint. The attacker's acceleration ${a_{\rm{A}}}$ decreases and converges to $0{\rm{m/}}{{\rm{s}}^2}$ , and also meets the control input requirement. The variation of the accelerations confirms that the combat mission is completed.

Figure 8. The accelerations of the attacker-protector team.

Figures 9 and 10 illustrate the time evolutions of ${a_{\rm{A}}}$ and ${a_{\rm{P}}}$ for different $u_{\rm{A}}^{\max }$ and $u_{\rm{P}}^{\max }$ , which demonstrate that the accelerations ${a_{\rm{A}}}$ and ${a_{\rm{P}}}$ satisfy the input saturation constraint and converge to $0{\rm m}/{\rm s}^{2}$ for different $u_{\rm{A}}^{\max }$ and $u_{\rm{P}}^{\max }$ .

Figure 9. The acceleration ${a_{\rm{A}}}$ for different $u_{\rm{A}}^{\max }$ .

Figure 10. The acceleration ${a_{\rm{P}}}$ for different $u_{\rm{P}}^{\max }$ .

4.2 Comparison studies

The following cases compare the proposed strategy with other guidance schemes in order to validate its superiority.

Case 1.

To illustrate the LOS guidance approach, the guidance schemes in Refs (Reference Kumar and Dwaipayan19–Reference Luo, Tan, Yan, Wang and Ji21) are compared with the proposed guidance strategy. In Case 1, the engagement of the attacker intercepting the target is ignored for a fair comparison, since the guidance schemes in Refs (Reference Kumar and Dwaipayan19–Reference Luo, Tan, Yan, Wang and Ji21) are designed for the three-player conflict scenario. Define the LOS angle error between the attacker-protector and the defender as $e = {\lambda _{{\rm{PD}}}} - {\lambda _{{\rm{AD}}}}$ .

The specific form of the guidance strategy in Refs (Reference Kumar and Dwaipayan19, Reference Kumar and Dwaipayan20) is

(74) \begin{align}{a_{\rm{A}}} = \frac{{{a_1}{T^2}{F_1}}}{{a_2^2 + a_1^2{T^2}}},{\rm{ }}{a_{\rm{P}}} = - \frac{{{a_2}{F_1}}}{{a_2^2 + a_1^2{T^2}}},{\rm{ }}{a_{\rm{D}}} = \frac{{5{V_{{R_{{\rm{AD}}}}}}{{\dot \lambda }_{{\rm{AD}}}}}}{{\cos \!({\gamma _{\rm{D}}} - {\lambda _{{\rm{AD}}}})}} \end{align}

where,

\begin{align*}{a_1} = \frac{{\cos ({\gamma _{\rm{A}}} - {\lambda _{{\rm{AD}}}})}}{{{R_{{\rm{AD}}}}}},\,\,{a_2} = \frac{{\cos ({\gamma _{\rm{P}}} - {\lambda _{{\rm{PD}}}})}}{{{R_{{\rm{PD}}}}}},{\rm{ }}T = \frac{{{\omega _{\rm{A}}}}}{{{\omega _{\rm{P}}}}} \end{align*}
\begin{align*} F_{1} = \frac{2V_{R_{\rm PD}} \dot{\lambda}_{\rm PD} }{R_{\rm PD}} - \frac{2V_{R_{\rm AD}} \dot{\lambda}_{\rm AD} }{R_{\rm AD}} + C\dot{e} +M{\rm sign}(S_{\varepsilon}) \end{align*}
\begin{align*}{S_\varepsilon } = \dot e + Ce \end{align*}
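For reproducibility of the comparison, the law (74) can be transcribed directly; the sketch below is an illustrative Python version (variable names such as `M_gain`, which denotes the gain $M$ of (74), are our own).

```python
import numpy as np

def case1_law(R_AD, R_PD, gamma_A, gamma_P, gamma_D,
              lam_AD, lam_PD, lam_AD_dot, lam_PD_dot,
              V_R_AD, V_R_PD, e, e_dot, C, M_gain, omega_A, omega_P):
    """Comparison LOS law (74): returns (a_A, a_P, a_D)."""
    a1 = np.cos(gamma_A - lam_AD) / R_AD
    a2 = np.cos(gamma_P - lam_PD) / R_PD
    T = omega_A / omega_P
    S_eps = e_dot + C * e
    F1 = (2.0 * V_R_PD * lam_PD_dot / R_PD
          - 2.0 * V_R_AD * lam_AD_dot / R_AD
          + C * e_dot + M_gain * np.sign(S_eps))
    denom = a2 ** 2 + a1 ** 2 * T ** 2
    a_A = a1 * T ** 2 * F1 / denom          # attacker acceleration
    a_P = -a2 * F1 / denom                  # protector acceleration
    a_D = 5.0 * V_R_AD * lam_AD_dot / np.cos(gamma_D - lam_AD)  # defender law
    return a_A, a_P, a_D
```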

The cooperative LOS guidance scheme designed in Ref. (Reference Luo, Tan, Yan, Wang and Ji21) is

(75) \begin{align}{a_{\rm{A}}} = - \mu \frac{P}{\alpha },\,\,{a_{\rm{P}}} = \left( {1 - \mu } \right)\frac{P}{\beta },\,\,{a_{\rm{D}}} = {V_{\rm{D}}}\left( { - {\gamma _{\rm{D}}} + {\lambda _{{\rm{AD}}}} + {{\dot \lambda }_{{\rm{AD}}}}} \right) \end{align}

where,

\begin{align*}P = - \frac{{2{V_{{R_{{\rm{PD}}}}}}{V_{{\lambda _{{\rm{PD}}}}}}}}{{R_{{\rm{PD}}}^2}} + \frac{{2{V_{{R_{{\rm{AP}}}}}}{V_{{\lambda _{{\rm{AP}}}}}}}}{{R_{{\rm{AP}}}^2}} + \left( {1 - k_1^2} \right)e + \left( {2{k_1} + \frac{1}{{2{\delta ^2}}}} \right)\left( {\dot e + {k_1}e} \right) \end{align*}
\begin{align*}\alpha = \frac{{\cos ({\gamma _{\rm{A}}} - {\lambda _{{\rm{AP}}}})}}{{{R_{{\rm{AP}}}}}},\,\,\beta = \frac{{\cos ({\gamma _{\rm{P}}} - {\lambda _{{\rm{PD}}}})}}{{{R_{{\rm{PD}}}}}} + \frac{{\cos ({\gamma _{\rm{P}}} - {\lambda _{{\rm{AP}}}})}}{{{R_{{\rm{AP}}}}}} \end{align*}

The simulation parameters in Case 1 are listed in Table 2.

Table 1. Simulation parameters

Table 2. Simulation parameters in Case 1

Table 3. Simulation parameters in Case 2

The parameters of the guidance strategy (66) are selected as ${\theta _1} = 4.7$ , ${\theta _2} = 1.7$ , ${\theta _3} = 4.05$ , $ {k_\gamma } = 0.3 $ , ${k_\eta } = 0.001$ , ${k_{\eta 1}} = 0.62$ , ${k_{\eta 2}} = 2.05$ , ${k_{\eta 3}} = 3.05$ , ${k_{\eta 4}} = 0.2$ , ${p_1} = 1.5$ , $u_{{\rm{PN}}}^{\max } = 150{\rm m}/{\rm s}^{2}$ ; the prescribed performance parameters of (66) are respectively selected as ${\rho _{0{\rm{L}}}} = 6.01,$ ${\rho _{{\rm{1L}}}} = 6.05$ , ${\rho _{\infty {\rm{L}}}} = 0.01,$ ${y_1} = 0.7$ , ${y_2} = 0.1$ , ${T_{d{\rm{L}}}} = 2.05{\rm{s}},$ ${\delta _{{\varepsilon _{{\rm{AT}}}}}} = {\delta _{{\varepsilon _{\rm{L}}}}} = 1.$ The parameter selection for the fixed-time disturbance observer is the same as in Subsection 4.1.

The guidance parameters in (74) and (75) are respectively chosen as $C = 12$ , $M = 0.5$ , ${\omega _{\rm{A}}} = 2$ , ${\omega _{\rm{P}}} = 3$ , $a_{\rm{A}}^{\max } = 150{\rm m}/{\rm s}^{2}$ , $a_{\rm{P}}^{\max } = 150{\rm m}/{\rm s}^{2} $ ; ${k_1} = 1$ , $\delta = 10$ , $\mu = 0.68$ , $a_{\rm{A}}^{\max } = 150{\rm m}/{\rm s}^{2} $ , $a_{\rm{P}}^{\max } = 150{\rm m}/{\rm s}^{2} $ .

The engagement trajectories between the attacker-protector team and the defender are shown in Fig. 11. Based on Fig. 11, all of the guidance strategies guarantee that the protector remains on the LOS between the attacker and the defender and intercepts the defender to guard the attacker. Compared with the guidance strategies (74) and (75), the guidance scheme proposed in this paper achieves a better collision triangle to complete the interception mission.

Figure 11. Trajectories of the guidance strategies in Case 1.

The time evolutions of the ZEL between the protector and the attacker-defender are illustrated in Fig. 12. It can be seen from Fig. 12 that ${Z_{\rm{L}}}$ converges to $0\deg $ faster and satisfies the prescribed performance condition more reliably than $e$ in (74) and (75), which demonstrates the superiority of the proposed guidance strategy.

Figure 12. LOS angles between the protector and the attacker-defender.

The simulations in Fig. 13 compare the attacker's accelerations over time for the different guidance strategies. Based on Fig. 13, the guidance strategies (66), (74) and (75) all guarantee that the acceleration ${a_{\rm{A}}}$ converges to $0{\rm m}/{\rm s}^{2}$ over time. The acceleration ${a_{\rm{A}}}$ in (66) satisfies the overload requirement with no control chattering, owing to the first-order fixed-time anti-saturation auxiliary variable. The acceleration ${a_{\rm{A}}}$ in (74) exhibits chattering due to the presence of the sign function. Compared with the attacker's accelerations in (66) and (74), ${a_{\rm{A}}}$ in (75) converges to $0 {\rm m}/{\rm s}^{2}$ at a slower rate.

Figure 13. The accelerations of the attacker.

Figure 14 shows the variation of the protector's accelerations with respect to time. According to Fig. 14, the protector's accelerations ${a_{\rm{P}}}$ in (66), (74) and (75) increase and then converge to $0 {\rm m}/{\rm s}^{2}$ . The acceleration ${a_{\rm{P}}}$ in (74) exceeds the manoeuver limitation because of the chattering caused by the sign function, whereas ${a_{\rm{P}}}$ in (66) satisfies the overload constraint. The acceleration ${a_{\rm{P}}}$ in (75) has a slower convergence speed for the protector.

Figure 14. The accelerations of the protector.

Case 2.

The advantage of the cooperative guidance scheme (66) is verified by comparing it with the guidance strategies (74) and (75) in Case 1. However, the comparison in Case 1 cannot demonstrate the superiority of the proposed scheme in the two-on-two engagement, since the simulation scenario is established for the three-player conflict. To overcome this drawback, Case 2 compares the proposed scheme with an existing two-on-two guidance law.

The specific form of the two-on-two guidance law in Ref. (Reference Liang, Wang, Liu and Liu31) is

(76) \begin{align}\left\{ \begin{array}{l}{u_{\rm{A}}} = - \frac{1}{{{\beta _1}}}\left[ {({K_{11}}{Z_{{\rm{AT}}}} + {K_{12}}{Z_{{\rm{AD}}}} + {K_{13}}{Z_{{\rm{PD}}}}){\Theta _{{\rm{A1}}}} + ({K_{21}}{Z_{{\rm{AT}}}} + {K_{22}}{Z_{{\rm{AD}}}} + {K_{23}}{Z_{{\rm{PD}}}}){\Theta _{{\rm{A2}}}}} \right]\\[3pt]{u_{\rm{P}}} = - \frac{1}{{{\beta _2}}}({K_{31}}{Z_{{\rm{AT}}}} + {K_{32}}{Z_{{\rm{AD}}}} + {K_{33}}{Z_{{\rm{PD}}}}){\Theta _{{\rm{P3}}}}\\[3pt]{u_{\rm{D}}} = - \frac{1}{{{\beta _3}}}\left[ {({K_{21}}{Z_{{\rm{AT}}}} + {K_{22}}{Z_{{\rm{AD}}}} + {K_{23}}{Z_{{\rm{PD}}}}){\Theta _{{\rm{D2}}}} + ({K_{31}}{Z_{{\rm{AT}}}} + {K_{32}}{Z_{{\rm{AD}}}} + {K_{33}}{Z_{{\rm{PD}}}}){\Theta _{{\rm{D3}}}}} \right]\\[3pt]{u_{\rm{T}}} = - \frac{1}{{{\beta _4}}}({K_{11}}{Z_{{\rm{AT}}}} + {K_{12}}{Z_{{\rm{AD}}}} + {K_{13}}{Z_{{\rm{PD}}}}){\Theta _{{\rm{T1}}}}\end{array} \right. \end{align}

where,

\begin{align*} &{\Theta _{i1}} = {\tau _i}\psi \left( {{t_{go{\rm{AT}}}}/{\tau _i}} \right)\left\| {u_i^{\max }} \right\|,\,\,{\Theta _{j2}} = {\tau _j}\psi \left( {{t_{go{\rm{AD}}}}/{\tau _j}} \right)\left\| {u_j^{\max }} \right\|,\,\,{\Theta _{l3}} = {\tau _l}\psi \left( {{t_{go{\rm{AD}}}}/{\tau _l}} \right)\left\| {u_l^{\max }} \right\|,\\ &i \in \{ {\rm{A}},{\rm{T}}\},\,\,j \in \left\{ {{\rm{A}},{\rm{D}}} \right\},\,\,l \in \left\{ {{\rm{P}},{\rm{D}}} \right\},\,\,\psi (\chi) = {e^{ - \chi }} + \chi - 1,\\ &\boldsymbol{{K}}(t) = {\left[ {{\rm{diag}}\left( {1/{\alpha _1}\quad - 1/{\alpha _2}\quad 1/{\alpha _3}} \right) + \int_t^{{t_{f{\rm{PD}}}}} {\boldsymbol{\unicode{x1D6F1}}\,dt} } \right]^{ - 1}} \end{align*}

$\boldsymbol{\unicode{x1D6F1}} = \left[ {\begin{array}{l@{\quad}l@{\quad}l}{ - \frac{{\unicode{x1D6E9} _{{\rm{A1}}}^2}}{{{\beta _1}}} - \frac{{\unicode{x1D6E9} _{{\rm{T1}}}^2}}{{{\beta _4}}}} & {}{ - \frac{{{\unicode{x1D6E9} _{{\rm{A1}}}}{\unicode{x1D6E9} _{{\rm{A2}}}}}}{{{\beta _1}}}} {}& 0\\[3pt]{ - \frac{{{\unicode{x1D6E9} _{{\rm{A1}}}}{\unicode{x1D6E9} _{{\rm{A2}}}}}}{{{\beta _1}}}} & {}{ - \frac{{\unicode{x1D6E9} _{{\rm{A2}}}^2}}{{{\beta _1}}} + \frac{{\unicode{x1D6E9} _{{\rm{D2}}}^2}}{{{\beta _3}}}} & {}{\frac{{{\unicode{x1D6E9} _{{\rm{D2}}}}{\unicode{x1D6E9} _{{\rm{D3}}}}}}{{{\beta _3}}}}\\[3pt]0 & {}{\frac{{{\unicode{x1D6E9} _{{\rm{D2}}}}{\unicode{x1D6E9} _{{\rm{D3}}}}}}{{{\beta _3}}}} & {}{ - \frac{{\unicode{x1D6E9} _{{\rm{P3}}}^2}}{{{\beta _2}}} + \frac{{\unicode{x1D6E9} _{{\rm{D3}}}^2}}{{{\beta _3}}}}\end{array}} \right]$ , ${K_{pq}}$ , $p \in \left\{ {1,2,3} \right\}$ , $q \in \left\{ {1,2,3} \right\}$ are the elements of matrix $ \boldsymbol{{K}}(t)$ .
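The time-to-go weights in (76) reduce to the scalar function $\psi$; the small helper below (illustrative only, with the subscript bookkeeping of $i$, $j$, $l$ left to the caller) shows how each $\Theta$ term is evaluated.

```python
import numpy as np

def psi(chi):
    """psi(chi) = exp(-chi) + chi - 1, used in the weights of (76)."""
    return np.exp(-chi) + chi - 1.0

def theta_weight(tau, t_go, u_max_norm):
    """Generic weight Theta = tau * psi(t_go / tau) * ||u_max|| appearing in (76)."""
    return tau * psi(t_go / tau) * u_max_norm
```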

The simulation parameters are listed in Table 3.

Choose the parameters of the guidance strategy (66) as $ {\theta _1} = 2.7,{\rm{ }}{\theta _2} = 3.7,{\rm{ }}{\theta _3} = 4.05,{\rm{ }}{k_\gamma } = 0.5, $ $ {k_\eta } = 0.001 $ , ${\rm{ }}{k_{\eta 1}} = 0.72$ , ${k_{\eta 2}} = 3.05$ , ${k_{\eta 3}} = 4.05$ , ${k_{\eta 4}} = 0.31$ , ${p_1} = 1.5$ , $u_{{\rm{AN}}}^{\max } = 60{\rm m}/{\rm s}^{2} $ , $u_{{\rm{PN}}}^{\max } = 60{\rm m}/{\rm s}^{2} $ , $u_{{\rm{TN}}}^{\max } = 100{\rm m}/{\rm s}^{2}$ ; the prescribed performance parameters of (66) are respectively selected as ${\rho _{{\rm{0AT}}}} = 110.17,\ {\rho _{{\rm{1AT}}}} = 110.05$ , ${\rho _{\infty {\rm{AT}}}} = 0.1$ , ${\rho _{0{\rm{L}}}} = 8.35$ , ${\rho _{{\rm{1L}}}} = 8.305,$ ${\rho _{\infty {\rm{L}}}} = 0.01,$ ${y_1} = 0.7$ , ${y_2} = 0.1$ , ${T_{d{\rm{AT}}}} = {T_{d{\rm{L}}}} = 3.16{\rm{s}},$ ${k_\gamma } = 14$ , ${\delta _{{\varepsilon _{{\rm{AT}}}}}} = {\delta _{{\varepsilon _{\rm{L}}}}} = 1.$ The parameters of the fixed-time disturbance observer are chosen as ${v_1} = 14,$ ${v_2} = 21,$ ${v_3} = 0.001,$ $p = 1.4$ .

The parameters in the guidance law (76) are selected as ${\alpha _1} = 14.8$ , ${\alpha _2} = 4.2$ , ${\alpha _3} = 14.8$ , ${\beta _1} = 3$ , ${\beta _2} = 1.78$ , ${\beta _3} = 8.8,$ ${\beta _4} = 14$ . The maximum accelerations are $u_{\rm{A}}^{\max } = 50{\rm{m/}}{{\rm{s}}^2}$ , $u_{\rm{P}}^{\max } = 50{\rm{m/}}{{\rm{s}}^2}$ , $u_{\rm{D}}^{\max } = 80{\rm{m/}}{{\rm{s}}^2}$ , $u_{\rm{T}}^{\max } = 100{\rm{m/}}{{\rm{s}}^2}$ .

Figure 15 shows the engagement trajectories of the players for the guidance strategies (66) and (76). It is evident from Fig. 15 that, under the proposed guidance scheme (66), the attacker is guarded by the protector, which intercepts the defender. In contrast, under the guidance law (76), the attacker is intercepted by the defender because the protector is unable to capture the defender, illustrating that the strategy (76) cannot fulfill the combat mission.

Figure 15. Trajectories of guidance strategies in Case 2.

The time evolutions of the ZEM ${Z_{{\rm{AT}}}}$ and the ZEL ${Z_{\rm{L}}}$ are illustrated in Figs 16 and 17. According to Fig. 16, ${Z_{{\rm{AT}}}}$ increases and converges to 0m within the upper bound of the convergence time ${T_r} = 3.4446{\rm{s}}$ and satisfies the prescribed performance constraint, demonstrating that the attacker intercepts the target at the terminal time. Based on Fig. 17, ${Z_{\rm{L}}}$ converges to $0\deg $ within the upper bound of the convergence time ${T_r} = 3.4446{\rm{s}}$ and satisfies the prescribed performance, which validates that the protector remains on the LOS between the attacker and the defender and intercepts the defender.

Figure 16. ZEM between the attacker and the target.

Figure 17. ZEL between the protector and the attacker-defender.

The graph in Fig. 18 illustrates the changes in the fixed-time disturbance observer states ${z_{2i}},i \in \left\{ {{\rm{AT}},{\rm{L}}} \right\}$ over time. It shows that the unknown disturbance in (60) is estimated by the fixed-time disturbance observer, as indicated by ${z_{2i}} = 0,i \in \left\{ {{\rm{AT}},{\rm{L}}} \right\}$ , and the upper bound of the convergence time is ${T_z} = 1.5911{\rm{s}}$ .

Figure 18. Fixed-time observer state $ {z_{2i}},{\rm{ }}i = {\rm{AT}},{\rm{L}} $ .

Figures 19 and 20 illustrate the time evolutions of the accelerations ${a_{\rm{A}}}$ and ${a_{\rm{P}}}$ in (66) and (76), which demonstrate that the accelerations of the attacker and the protector satisfy the input saturation constraint, and that ${a_{\rm{A}}}$ and ${a_{\rm{P}}}$ in (66) converge to $0{\rm m}/{\rm s}^{2}$ , in contrast to those in (76).

Figure 19. The accelerations of the attacker.

Figure 20. The accelerations of the protector.

The ZEMs between the attacker-protector team and the defender-target team over time are shown in Fig. 21. It can be seen from Fig. 21 that the ZEM ${Z_{{\rm{AD}}}}$ converges to 0m, and the guidance strategy (76) is not able to guarantee the convergence of the ZEMs ${Z_{{\rm{AT}}}}$ and ${Z_{{\rm{PD}}}}$ . Therefore, the defender evades interception by the protector and captures the attacker, which causes the failure of the attacker-protector team's combat mission.

Figure 21. ZEMs in the guidance scheme.

5.0 Conclusion

In this paper, the cooperative guidance strategy with prescribed performance and input saturation has been designed by virtue of the sliding mode control and line-of-sight guidance methods in the two-on-two engagement. The guidance strategy guarantees that the attacker intercepts the target when the protector captures the defender. The conclusions are elaborated as follows:

  1. (1) The cooperative guidance strategy addresses the combat problem in which the attacker intercepts the target with the assistance of the protector, which remains on the line-of-sight between the defender and the attacker to capture the defender. The chattering phenomenon does not affect the guidance performance since the sign function is not included in the proposed guidance scheme.

  2. (2) A new type of fixed-time prescribed performance function has been developed to ensure that the zero-effort miss and the zero-effort line-of-sight angle meet the prescribed performance constraint, using dual prescribed performance functions for the upper and lower boundaries.

  3. (3) The proposed guidance scheme satisfies the constraint on the attacker-protector team's control input by means of the first-order anti-saturation auxiliary variable.

  4. (4) The unknown disturbance of the guidance system has been estimated by a novel fixed-time disturbance observer incorporated in the cooperative guidance strategy, which weakens the influence of the external disturbance.

The two-on-two line-of-sight guidance strategy proposed in this paper can serve as a framework for extending the engagement to multiple aircraft in a cluster by virtue of the line-of-sight guidance method. Furthermore, by considering constraints such as the terminal interception angle and the field-of-view, the line-of-sight guidance strategy can be extended to ensure that the attacker-protector team satisfies the field-of-view conditions and intercepts the target-defender team at the desired angle.

Acknowledgements

This work is supported by National Natural Science Foundation (NNSF) of China under Grant 62273119.

Competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

Xiong, S.F., Wang, W.H., Liu, X.D., Wang, S. and Chen, Z.Q. Guidance law against maneuvering targets with intercept angle constraint, ISA Trans., 2014, 53, pp 1332–1342.
Wang, Z.K., Fu, W.X., Fang, Y.W., Zhu, S.P., Wu, Z.H. and Wang, M.G. Prescribed-time cooperative guidance law against maneuvering target based on leader-following strategy, ISA Trans., 2022, 129, pp 257–270.
Asher, R.B. and Matuszewski, J.P. Optimal guidance with maneuvering targets, J. Spacecraft Rockets, 1974, 11, (3), pp 204–206.
Faruqi, F.A. Differential Game Theory with Applications to Missiles and Autonomous Systems Guidance, John Wiley and Sons Limited, 2017, pp 102–103.
Isaacs, R. Differential Games: A Mathematical Theory with Applications to Warfare and Pursuit, Control and Optimization, New York, NY, USA, 1965.
Perelman, A., Shima, T. and Rusnak, I. Cooperative differential games strategies for active aircraft protection from a homing missile, J. Guid. Control Dyn., 2011, 34, (3), pp 761–773.
Sinha, A., Kumar, S.R. and Mukherjee, D. Three-agent time-constrained cooperative pursuit-evasion, J. Intell. Rob. Syst., 2022, 104, (2), pp 1–28.
Singh, S.K. and Puduru, P.V. Dynamic network analysis of a target defense differential game with limited observations, IEEE Trans. Control Netw., 2023, 10, (1), pp 308–320.
Liang, H.Z., Wang, J.Y., Wang, Y.H., Wang, Y.H., Wang, L.L. and Liu, P. Optimal guidance against active defense ballistic missiles via differential game strategies, Chin. J. Aeronaut., 2020, 33, (3), pp 978–989.
Liang, H.Z., Li, Z., Wu, J.Z., Zheng, Y., Chu, H.Y. and Wang, J.Y. Optimal guidance laws for a hypersonic multiplayer pursuit-evasion game based on a differential game strategy, Aerospace, 2022, 9, (2), pp 1–17.
Liu, F., Dong, X.W., Li, Q.D. and Ren, Z. Cooperative differential games guidance strategies for multiple attackers against an active defense target, Chin. J. Aeronaut., 2022, 35, (5), pp 374–389.
Yan, T., Cai, Y.L. and Xu, B. Evasion guidance algorithms for air-breathing hypersonic vehicles in three-player pursuit-evasion game, Chin. J. Aeronaut., 2020, 33, (12), pp 3423–3436.
Tang, X., Ye, D., Huang, L., Sun, Z.W. and Sun, J.Y. Pursuit-evasion game switching strategies for spacecraft with incomplete-information, Aerosp. Sci. Technol., 2021, 119, pp 1–20.
Liu, F., Dong, X.W., Li, Q.D. and Ren, Z. Robust multi-agent differential games with application to cooperative guidance, Aerosp. Sci. Technol., 2021, 111, pp 1–20.
Cheng, L. and Yuan, Y. Adaptive multi-player pursuit-evasion games with unknown general quadratic objectives, ISA Trans., 2022, 131, pp 73–82.
Wang, S.B., Guo, Y., Wang, S.C., Liu, Z.G. and Zhang, S. Cooperative interception with fast multiple model adaptive estimation, Def. Technol., 2021, 17, (6), pp 1905–1917.
Yamasaki, T. and Balakrishnan, S.N. Triangle intercept guidance for aerial defense, AIAA Guidance, Navigation, and Control Conference, 2010, pp 78–76.
Yamasaki, T. and Balakrishnan, S.N. Intercept guidance for cooperative aircraft defense against a guided missile, IFAC Proc. Volumes, 2010, 43, (15), pp 118–123.
Kumar, S. and Dwaipayan, M. Cooperative active aircraft protection guidance using line-of-sight approach, IEEE Trans. Aerospace Electron. Syst., 2021, 57, (2), pp 957–967.
Kumar, S. and Dwaipayan, M. Cooperative guidance strategies for active aircraft protection, Proceedings of the American Control Conference, 2019, pp 4641–4646.
Luo, H.B., Tan, G.Y., Yan, H., Wang, X.H. and Ji, H.B. Cooperative line-of-sight guidance with optimal evasion strategy for three-body confrontation, ISA Trans., 2022, pp 1–11.
Luo, H.B., Ji, H.B. and Wang, X.H. Cooperative robust line-of-sight guidance law based on high-gain observers for active defense, Int. J. Robust Nonlin., 2023, 33, (16), pp 9602–9617.
Tan, G.Y., Luo, H.B., Ji, H.B., Liao, F. and Wu, W.H. Cooperative line-of-sight guidance laws for active aircraft defense in three-dimensional space, Proceedings of the 40th Chinese Control Conference, Shanghai, China, 2021, pp 3535–3540.
Liu, S., Wang, Y., Li, Y., Yan, B. and Zhang, T. Cooperative guidance for active defence based on line-of-sight constraint under a low-speed ratio, Aeronaut. J., 2023, 127, (1309), pp 491–509.
Chen, C.D., Wang, J. and Huang, P. Optimal cooperative line-of-sight guidance for defending a guided missile, Aerospace, 2022, 9, (5), 232.
Han, T., Hu, Q.L., Wang, Q.Y., Xin, M. and Shin, H.S. Constrained 3-D trajectory planning for aerial vehicles without range measurement, IEEE Trans. Syst. Man Cybern. Syst., 2024, 54, (10), pp 6001–6013.
Liu, S.X., Lin, Z.H., Wei, H. and Yan, B.B. Current development and future prospects of multi-target assignment problem: A bibliometric analysis review, Def. Technol., 2024.
Liu, S.X., Lin, Z.H., Wang, Y.C., Huang, W., Yan, B.B. and Li, Y. Three-body cooperative active defense guidance law with overload constraints: A small speed ratio perspective, Chin. J. Aeronaut., 2025, 38, (2), 103171.
Tan, Z.W., Fonod, R. and Shima, T. Cooperative guidance law for target pair to lure two pursuers into collision, J. Guid. Control Dyn., 2018, 41, (8), pp 1687–1699.
Manoharan, A. and Sujit, P.B. NMPC-based cooperative strategy to lure two attackers into collision by two targets, IEEE Control Syst. Lett., 2023, 7, pp 496–501.
Liang, H.Z., Wang, J.Y., Liu, J.Q. and Liu, P. Guidance strategies for interceptor against active defense spacecraft in two-on-two engagement, Aerosp. Sci. Technol., 2020, 96, pp 1–10.
Zhuang, M.L., Tan, L.G., Li, K.H. and Song, S.M. Fixed-time formation control for spacecraft with prescribed performance guarantee under input saturation, Aerosp. Sci. Technol., 2021, 119, p 107176.
Truong, T.N., Vo, A.T. and Kang, H.J. A model-free terminal sliding mode control for robots: Achieving fixed-time prescribed performance and convergence, ISA Trans., 2024, 144, pp 330–341.
Zhang, Y.C., Wu, G.Q., Yang, X.Y. and Song, S.M. Appointed-time prescribed performance control for 6-DOF spacecraft rendezvous and docking operations under input saturation, Aerosp. Sci. Technol., 2022, 128, p 107744.
Li, H.J., Liu, Y.H., Li, K.B. and Liang, Y.G. Analytical prescribed performance guidance with field-of-view and impact-angle constraints, J. Guid. Control Dyn., 2024, 47, (4), pp 728–741.
Shima, T., Idan, M. and Golan, O.M. Sliding-mode control for integrated missile autopilot guidance, J. Guid. Control Dyn., 2006, 29, (2), pp 250–260.
Bryson, E. and Ho, C.Y. Applied Optimal Control, Blaisdell, Waltham, 1969, pp 154–155, 282–289.
Kumar, S. and Shima, T. Cooperative nonlinear guidance strategies for aircraft defense, J. Guid. Control Dyn., 2017, 40, (1), pp 124–138.
Ma, H., Zhou, Q., Li, H.Y. and Lu, R.Q. Adaptive prescribed performance control of a flexible-joint robotic manipulator with dynamic uncertainties, IEEE Trans. Cybern., 2022, 52, (12), pp 12905–12915.
Michael, B., Chandrasekhara, B.P. and Yuri, S. Multivariable continuous fixed-time second-order sliding mode control: design and convergence time estimation, IET Contr. Theory Appl., 2017, 11, (8), pp 1104–1111.
Zou, Z. and Tie, Y.L. Distributed robust finite-time nonlinear consensus protocols for multi-agent systems, Int. J. Syst. Sci., 2016, 47, (6), pp 1366–1375.
Zhang, X.Y., Dong, F. and Zhang, P. A new three-dimensional fixed time sliding mode guidance with terminal angle constraints, Aerosp. Sci. Technol., 2022, 121, p 107370.
Sun, Q.L., Qi, N.M. and Hou, M.Y. Optimal strategy for target protection with a defender in the pursuit-evasion scenario, J. Def. Model. Simul. Appl. Methodol. Technol., 2018, 15, (3), pp 289–301.