
This paper focuses on achieving leader-follower mean square consensus in semi-Markov jump multi-agent systems. To effectively reduce communication costs and control updates, we propose an event-triggered protocol based on stochastic sampling. The stochastic sampling interval randomly switches between finite given values, while the event-triggered function depends on the stochastic sampled data from neighboring agents. Using the event-triggered strategy, we present sufficient conditions to ensure mean square consensus. Finally, we provide a numerical example demonstrating the effectiveness of the theoretical results.
Citation: Duoduo Zhao, Fang Gao, Jinde Cao, Xiaoxin Li, Xiaoqin Ma. Mean-square consensus of a semi-Markov jump multi-agent system based on event-triggered stochastic sampling[J]. Mathematical Biosciences and Engineering, 2023, 20(8): 14241-14259. doi: 10.3934/mbe.2023637
It is widely recognized that multi-agent cooperative control has numerous applications, such as unmanned aerial vehicles, traffic control, animal groups and automated highway systems [1,2,3,4]. Consensus, which has attracted considerable attention in cooperative control, can be roughly divided into leader-follower consensus and leaderless consensus [5,6]. Most research aims to design a consensus protocol under which agents exchange local information with their neighbors so that the whole group of agents reaches a common state.
As a typical setup, each agent is equipped with a digital microprocessor that gathers information from its neighboring agents and updates the controller accordingly. Most studies rely on continuous measurement signals, yet continuous communication is not feasible in an energy-constrained network. To avoid continuous communication, some studies have introduced sampled-data control [7,8,9], in which each agent transmits its data only at the sampling instants. Periodic sampled-data control often requires estimating an optimal sampling period, since an improper choice leads to sampling that is either too frequent or too sparse, both of which degrade system performance. Stochastic sampling alleviates this problem and has been receiving increasing attention because the sampling period can switch dynamically between different values. In [10], the authors investigated the leaderless multi-agent consensus problem under stochastic sampling, where the sampling period was chosen randomly from a set of given values.
Compared with the conventional time-triggered mechanism, the event-triggered mechanism offers a significant advantage in allocating communication resources more efficiently. Event-triggered mechanisms have been studied extensively in various fields, including cyber-physical systems [11], networked control systems [12] and multi-agent systems [13,14,15,16]. To solve the high-order multi-agent consensus problem, Wu et al. [16] presented a novel event-triggered protocol that estimates the states of neighboring agents. Going further, many scholars have proposed dynamic event-triggered protocols [17,18,19,20,21]. In [17], a dynamic event-triggered protocol was proposed for individual agents, yielding a distributed adaptive consensus protocol in which the coupling strength is updated to achieve consensus. A dynamic event-triggered protocol involving internal dynamic variables was proposed in [20] for consensus of multi-agent systems, where the exclusion of Zeno behavior played an important role. In [21], Du et al. studied leader-follower consensus of multi-agent systems under a dynamic event-triggered mechanism. However, event-triggered control has its own limitation: it requires continuous event detection. To relax this constraint, many researchers have designed event-triggered protocols based on sampled data [22,23,24]. Event-triggered consensus strategies for multi-agent systems were presented in [25] and [26]. Specifically, Su et al. investigated sampled-data-based leader-follower multi-agent systems with input delays, making sampling-based event detection more realistic. Meanwhile, He et al. focused on mean-square leaderless consensus of networked nonlinear multi-agent systems and presented an efficient distributed event-triggered mechanism that reduces communication costs and controller updates under random sampling. In [27], Ruan et al. studied the leader-follower consensus problem with bounded external disturbances under an event-triggered scheme based on two independent dynamic thresholds. The authors of [28] presented a nonlinear dynamic event-triggered control strategy for achieving prescribed-time synchronization in networks of piecewise smooth systems.
Most of the above studies are based on a fixed communication topology. In the real world, however, many topologies switch, and some of them are modeled by Markov processes [29,30,31]. In [29], Hu et al. investigated event-triggered consensus of Markov jump multi-agent systems. In [30,31], the authors studied consensus problems for multi-agent systems with Markov network structures. Nevertheless, time-varying topologies based on Markov processes have certain limitations, mainly because the sojourn times of a Markov chain are exponentially distributed. Some researchers have therefore turned to semi-Markov jump topologies [32,33,34,35,36], in which the sojourn time is a generic continuous random variable with a general probability distribution. The H∞ consensus problem for multi-agent systems under a semi-Markov switching topology with partially unknown transition rates was investigated in [35]. In a related study, Xie et al. [36] investigated the consensus problem of multi-agent systems under attacks, in which both the semi-Markov switching topology and the network may be attacked and can subsequently recover. In [37], the authors studied finite/fixed-time cluster synchronization of semi-Markovian switching T–S fuzzy complex dynamical networks with discontinuous dynamic nodes.
Building upon the aforementioned literature review, this research article is focused on the development and analysis of an event-triggered mechanism for the stochastic sampling of leader-follower multi-agent systems. The key contributions of this study can be summarized as follows:
(1). This paper proposes a novel event-triggered methodology by utilizing stochastic sampling, which is capable of significantly reducing the frequency of control updates and communication overhead among agents. Furthermore, the proposed mechanism ensures the avoidance of Zeno behavior.
(2). The dynamic system switching investigated in this paper is modeled by using the semi-Markov jump process. A sufficient condition for mean-square consensus is derived.
(3). The sampling period is selected at random from a finite set. This stochastic sampling scheme differs from the periodic sampling and the stochastic sampling considered in the literature [26,29].
The subsequent sections of this article are organized as follows. Section Ⅱ presents the problem formulation and preliminaries. Section Ⅲ reports the main results. The numerical example in Section Ⅳ validates the theoretical conclusions. Section Ⅴ concludes the paper.
Notations: The n-dimensional identity matrix is denoted by In. A zero matrix of appropriate dimension is represented by O. A positive (negative) definite matrix A is denoted by A>0 (A<0). The element implied by symmetry in a matrix is denoted by ∗. C([a,b],Rn) denotes the set of continuous vector-valued functions mapping the interval [a,b] to Rn. The Euclidean norm of a vector is represented by |⋅|. A superscript T and the symbol ⊗ denote matrix transposition and the Kronecker product, respectively. Let IN={1,2,…,N} denote a finite index set.
We consider a directed graph G={V,E,A}, where V={1,2,…,N} is the vertex set, E is the set of directed edges and A=(aij)N×N is the weighted adjacency matrix. The entry aij is positive if there is a directed edge from vertex j to vertex i, i.e., (j,i)∈E, and aij=0 otherwise. The neighbor set of vertex i is Ni={j∈V:(j,i)∈E}. The degree matrix D is the N×N diagonal matrix whose diagonal entries di=∑j∈Ni aij are the weighted degrees. The Laplacian matrix of G is L=D−A=(lij)N×N, so that lii=∑j∈Ni aij and lij=−aij for j≠i.
Consider a multi-agent system that has a leader and N followers. Label them respectively as 0 and 1,2,3,…,N. The dynamic equations for each agent are illustrated below:
$$\begin{cases}\dot{x}_i(t)=A(r(t))x_i(t)+B(r(t))u_i(t), & i\in I_N,\\ \dot{x}_0(t)=A(r(t))x_0(t).\end{cases}\tag{2.1}$$
Here ui(t)∈Rn and xi(t)∈Rn denote the control input and the state of the ith agent, respectively, and x0(t)∈Rn denotes the state of the leader. The constant matrices A(r(t)) and B(r(t)) have appropriate dimensions. The semi-Markov chain {r(t),t≥0} is a right-continuous process defined on the complete probability space (Ω,F,P), taking values in the finite state space D=IM, with generator λ=(λmn(v))M×M. The transition probability is
$$\Pr\{r(t+v)=n\mid r(t)=m\}=\begin{cases}\lambda_{mn}(v)\,v+o(v), & m\neq n,\\ 1+\lambda_{mm}(v)\,v+o(v), & m=n.\end{cases}\tag{2.2}$$
Here v is the sojourn time, i.e., the time that elapses between two successive jumps. The transition rate from mode m at time t to mode n at time t+v is denoted by $\lambda_{mn}(v)$ and satisfies $\lambda_{mn}(v)\ge 0$ for $m\neq n$; $o(v)$ satisfies $\lim_{v\to 0^{+}}o(v)/v=0$, and $\lambda_{mm}(v)=-\sum_{n\neq m}\lambda_{mn}(v)$.
Remark 1: The dwell time v is the time elapsed since the last jump of the system, and it is distinct from the time t. When the system jumps, v resets to 0, and the transition rate λmn(v) depends only on v.
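To make the semi-Markov switching concrete, the following minimal sketch (our own illustration; the three-mode jump probabilities and Weibull sojourn-time parameters are assumptions for demonstration only, not taken from the paper) simulates a switching signal r(t) whose sojourn times follow general, non-exponential distributions, which is exactly what distinguishes the semi-Markov model in (2.2) from a Markov chain.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 3-mode example: embedded jump probabilities and Weibull
# sojourn-time parameters are illustrative assumptions only.
JUMP_PROB = np.array([[0.0, 0.6, 0.4],
                      [0.5, 0.0, 0.5],
                      [0.7, 0.3, 0.0]])   # row m: distribution of the next mode
WEIBULL_SHAPE = [1.5, 2.0, 0.8]           # non-exponential sojourn times per mode
WEIBULL_SCALE = [0.4, 0.3, 0.5]

def semi_markov_signal(t_end, mode0=0):
    """Return (switching time, mode) pairs of a semi-Markov signal r(t):
    the sojourn time in each mode follows a general (here Weibull) distribution,
    so the effective transition rates depend on the elapsed time v, cf. (2.2)."""
    t, mode, traj = 0.0, mode0, [(0.0, mode0)]
    while t < t_end:
        t += WEIBULL_SCALE[mode] * rng.weibull(WEIBULL_SHAPE[mode])  # sojourn time
        mode = int(rng.choice(3, p=JUMP_PROB[mode]))                 # next mode
        traj.append((t, mode))
    return traj

print(semi_markov_signal(2.0))
```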
A consensus protocol for event-triggered consensus based on stochastically sampled data is presented next. Assume that the sampling instants are 0=t0<t1<t2<⋯<ts<⋯ and that the sampling period h=t_{s+1}−t_s is selected at random from a finite set {h1,h2,⋯,hl}, with Pr{h=hs}=πs, s∈Il, πs∈[0,1] and $\sum_{s=1}^{l}\pi_s=1$. Without loss of generality, we order 0=h0<h1<h2<⋯<hl, l>1.
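For illustration only (the helper names are ours, not the paper's), the stochastic sampling sequence described above can be generated as follows; the two-period case {0.1 s, 0.2 s} with probabilities 0.2 and 0.8 used in the numerical example of Section Ⅳ is taken as input here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sampling_instants(periods, probs, t_end):
    """Generate stochastic sampling instants 0 = t_0 < t_1 < ... up to t_end,
    drawing each period h = t_{s+1} - t_s from `periods` with probabilities `probs`."""
    t, instants = 0.0, [0.0]
    while t < t_end:
        t += rng.choice(periods, p=probs)
        instants.append(t)
    return np.array(instants)

print(sampling_instants([0.1, 0.2], [0.2, 0.8], 1.0))
```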
Let $t^{i}_{k}$ denote the kth event-triggering instant of the ith agent, so that $\{t^{i}_{k}\}_{k=0}^{\infty}$ is its event-triggering sequence, with $t^{i}_{k}\in\{t_s,\,s\in\mathbb{N}\}$ and $t^{i}_{0}=0$. The next event-triggering instant $t^{i}_{k+1}$ is determined by the following formula
$$t^{i}_{k+1}=\min_{t_s>t^{i}_{k}}\Big\{t_s:\ e_i^{T}(t_s)\,\Phi\, e_i(t_s)>\sigma_i\, z_i^{T}(t_s)\,\Phi\, z_i(t_s)\Big\}.\tag{2.3}$$
Here σi>0 is the threshold parameter and Φ is a positive definite event-triggered weighting matrix to be designed. Moreover, $e_i(t_s)=x_i(t^{i}_{k})-x_i(t_s)$ and $z_i(t_s)=\sum_{j=1}^{N}a_{ij}\bigl(x_i(t^{i}_{k})-x_j(t^{j}_{k'})\bigr)+b_i\bigl(x_i(t^{i}_{k})-x_0(t_s)\bigr)$, where $t_s\in[t^{i}_{k},t^{i}_{k+1})$ and $t^{j}_{k'}$ is the latest event-triggering instant of neighbor j no later than $t^{i}_{k}$, that is, $t^{j}_{k'}=\max\{t^{j}_{k}\mid t^{j}_{k}\le t^{i}_{k}\}$, $k'=0,1,2,\cdots$.
Remark 2: The stochastic sampling sequence is 0=t0<t1<t2<⋯<ts<⋯, and the sampling period h=t_{s+1}−t_s is selected at random from the finite set {h1,h2,⋯,hl}, where 0=h0<h1<h2<⋯<hl. This stochastic sampling scheme differs from the periodic sampling and the stochastic sampling used in the literature [26,29], which correspond to the special cases of a constant h and of l=2, respectively.
Remark 3: According to the event-triggered condition (2.3), the ith agent broadcasts its most recently sampled data to its neighbors. The event-triggering instants form a subset of the sampling instants, and the sampling period h=t_{s+1}−t_s takes values in {h1,h2,⋯,hl} with h1>0, so the inter-event times are bounded below by h1. Hence, Zeno behavior is excluded.
Remark 4: It should be noted that a static event-triggered protocol based on stochastic sampling is proposed here in order to reduce unnecessary communication between agents.
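To make the triggering rule (2.3) concrete, the sketch below (a hedged illustration with our own variable names) evaluates the condition for agent i at a sampling instant t_s; Phi and sigma_i are the design parameters of (2.3), and z_i is the combined measurement assembled from the latest broadcasts of agent i and its neighbors.

```python
import numpy as np

def should_trigger(x_i_broadcast, x_i_sample, z_i, Phi, sigma_i):
    """Evaluate the event-triggered condition (2.3) at a sampling instant t_s.
    x_i_broadcast : state x_i(t_k^i) last broadcast by agent i
    x_i_sample    : current sampled state x_i(t_s)
    z_i           : combined measurement z_i(t_s) built from neighbors' latest broadcasts
    Returns True when agent i should broadcast new data (an event is generated)."""
    e_i = x_i_broadcast - x_i_sample                      # e_i(t_s)
    return e_i @ Phi @ e_i > sigma_i * (z_i @ Phi @ z_i)  # condition (2.3)
```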
The following consensus protocol should be taken into consideration in light of the discussion above:
$$u_i(t)=-K(r(t))\Big[\sum_{j=1}^{N}a_{ij}\bigl(x_i(t^{i}_{k})-x_j(t^{j}_{k'})\bigr)+b_i\bigl(x_i(t^{i}_{k})-x_0(t_s)\bigr)\Big].\tag{2.4}$$
Remark 5: The consensus protocol relies on the stochastic-sampling-based event-triggered condition and on the semi-Markov switching system, where the feedback gain K(r(t)) depends on r(t) and will be given by a theorem later. When the ith agent or its neighbors satisfy the triggering condition, the controller is updated. With a zero-order hold, ui(t) remains constant between two successive event instants.
Define
$$e_i(t_s)=x_i(t^{i}_{k})-x_i(t_s),\quad e_j(t_s)=x_j(t^{j}_{k'})-x_j(t_s),\quad \delta_i(t_s)=x_i(t_s)-x_0(t_s).\tag{2.5}$$
Substituting (2.5) into (2.4), we obtain
$$u_i(t)=-K(r(t))\Big[\sum_{j=1}^{N}l_{ij}\bigl(e_j(t_s)+\delta_j(t_s)\bigr)+b_i\bigl(e_i(t_s)+\delta_i(t_s)\bigr)\Big].\tag{2.6}$$
Define τ(t)=t−t_s for t∈[t_s,t_{s+1}); τ(t) is a piecewise linear function with slope $\dot{\tau}(t)=1$ everywhere except at the sampling instants t_s. The control protocol (2.6) can then be expressed as
$$u_i(t)=-K(r(t))\Big[\sum_{j=1}^{N}l_{ij}\bigl(e_j(t-\tau(t))+\delta_j(t-\tau(t))\bigr)+b_i\bigl(e_i(t-\tau(t))+\delta_i(t-\tau(t))\bigr)\Big].\tag{2.7}$$
A new stochastic variable is introduced as follows:
$$\beta_s(t)=\begin{cases}1, & h_{s-1}\le\tau(t)<h_s,\quad s=1,2,\cdots,l,\\ 0, & \text{otherwise}.\end{cases}\tag{2.8}$$
In this way, we can obtain
$$\Pr\{\beta_s(t)=1\}=\Pr\{h_{s-1}\le\tau(t)<h_s\}=\sum_{i=s}^{l}\pi_i\,\frac{h_s-h_{s-1}}{h_i}=\beta_s.\tag{2.9}$$
Thus βs(t) follows a Bernoulli distribution with $\mathbb{E}\{\beta_s(t)\}=\beta_s$ and $\mathbb{E}\{(\beta_s(t)-\beta_s)^{2}\}=\beta_s(1-\beta_s)$. From the above analysis, we obtain
$$\dot{\delta}(t)=(I_N\otimes A(r(t)))\delta(t)-\sum_{s=1}^{l}\beta_s(t)(H\otimes B(r(t))K(r(t)))e(t-\tau_s(t))-\sum_{s=1}^{l}\beta_s(t)(H\otimes B(r(t))K(r(t)))\delta(t-\tau_s(t)),\tag{2.10}$$
where $\delta(t)=\mathrm{col}\{\delta_1(t),\delta_2(t),\cdots,\delta_N(t)\}$, $e(t)=\mathrm{col}\{e_1(t),e_2(t),\cdots,e_N(t)\}$, $H=L+B_1$ and $B_1=\mathrm{diag}\{b_1,b_2,\cdots,b_N\}$.
The initial condition of system (2.10) is taken as $\delta(t)=\phi(t)$ for $-h_l\le t\le 0$, where $\phi(t)=[\phi_1^{T}(t),\phi_2^{T}(t),\cdots,\phi_N^{T}(t)]^{T}$ and $\phi_i(t)\in C([-h_l,0],\mathbb{R}^{n})$.
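For readers who want to reproduce the error system numerically, the constant coefficient matrices of (2.10) for a fixed mode can be assembled with Kronecker products as in the following sketch (our own helper, assuming the mode matrices A, B, the gain K and the graph matrix H are already available):

```python
import numpy as np

def error_dynamics_matrices(A, B, K, H, N):
    """Coefficient matrices of the error system (2.10) for a fixed mode:
    delta_dot = A0 @ delta - sum_s beta_s(t) * G @ (e_delayed_s + delta_delayed_s),
    with A0 = I_N (x) A and G = H (x) (B K)."""
    A0 = np.kron(np.eye(N), A)
    G = np.kron(H, B @ K)
    return A0, G
```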
Definition 1 [34]. Under semi-Markov switching topologies, the leader-follower consensus of the multi-agent system (2.1) with the proposed consensus protocol is said to be achieved if $\lim_{t\to\infty}\mathbb{E}\|x_i(t)-x_0(t)\|=0$, $i\in I_N$, holds for any initial distribution $r_0\in D$ and any initial condition $\phi(t)$, $\forall t\in[-h_l,0]$.
Assumption 1. The network graph G contains a directed spanning tree rooted at the leader.
Lemma 1 [34]. For symmetric matrices R>0 and X and any scalar μ, the following inequality holds:
$$-XR^{-1}X\le\mu^{2}R-2\mu X.$$
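Lemma 1 follows from expanding $(X-\mu R)R^{-1}(X-\mu R)\ge 0$. As a quick numerical sanity check (our own addition, not part of the original text), the inequality can be verified for random symmetric data:

```python
import numpy as np

rng = np.random.default_rng(2)
n, mu = 4, 1.7
M = rng.standard_normal((n, n))
R = M @ M.T + n * np.eye(n)                           # symmetric positive definite R
X = rng.standard_normal((n, n)); X = (X + X.T) / 2    # symmetric X

lhs = -X @ np.linalg.inv(R) @ X
rhs = mu**2 * R - 2 * mu * X
# Lemma 1 states lhs <= rhs, i.e., rhs - lhs is positive semidefinite
print(np.linalg.eigvalsh(rhs - lhs).min() >= -1e-9)   # expected: True
```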
Theorem 1. Suppose Assumption 1 holds and the protocol (2.7) is used, with given constants 0=h0<h1<⋯<hl, πs∈[0,1] (s∈Il) and σi>0 (i∈IN). Consensus of the multi-agent system (2.10) is achieved in the mean-square sense under the stochastic sampling event-triggered strategy (2.3) if there exist constant matrices P(m)>0, Qs>0, Rs>0, Ws>0, m∈D and s∈Il, such that the following inequality holds:
$$\begin{pmatrix}\Xi & F^{T}(m) & \Sigma(m,v)\\ * & \Psi & 0\\ * & * & -X_{2}(m)\end{pmatrix}<0,\tag{3.1}$$
where
Ξ=ℵ1+ℵ2+ℵ3+ℵ4+ℵ5+ℵ6.
ℵ1=FT(m)(IN⊗P(m))ε1+εT1(IN⊗P(m))F(m)+M∑n=1λmn(v)(εT1(IN⊗P(n))ε1).
ℵ2=β1εT1(IN⊗Q1)ε1−β1εTl+1(IN⊗Q1)εl+1+l∑s=2βs(εTl+s(IN⊗Qs)εl+s−εTl+s+1(IN⊗Qs)εl+s+1).
ℵ3=−l∑s=1βs1hs−hs−1((εTl+s−εTl+s+1)×(IN⊗(Rs+Ws))(εl+s−εl+s+1)).
ℵ4=εT1(IN⊗P(m))ε1.
ℵ5=−∑ls=1βsεTs+1Φεs+1.
ℵ6=(εs+1+ε2l+s+1)T(HTΛH⊗Φ)(εs+1+ε2l+s+1).
F(m)=(IN⊗A(m))ε1−l∑s=1βs(H⊗B(m)K(m))ε2l+s+1−l∑s=1βs(H⊗B(m)K(m))εs+1.
Φ=diag{Φ1,Φ2,⋯,ΦN}, Σ(m,v)=λ(m,v)X1(m).
Ψ=−(∑ls=1βs(hs−hs−1)(IN⊗(Rs+Ws)))−1.
λ(m,v)=(√λm1(v),√λm2(v),⋯,√λmm−1(v),√λmm+1(v),⋯,√λmM(v)).
X1(m)=diag{IN⊗P(m),⋯,IN⊗P(m)}M−1.
X2(m)=diag{IN⊗P(1),IN⊗P(2),⋯,IN⊗P(m−1),IN⊗P(m+1),⋯,IN⊗P(M)}M−1.
A(m)=A(r(t)=m),P(m)=P(r(t)=m) and βs is defined the same way as in (2.10). Define εs as a block matrix consisting of 3l+1 block elements. The s-th block element is an Nn×Nn identity matrix, denoted by INn, while all other block elements are zero matrices. Therefore, εs can be expressed as εs=[0,0,⋯,INn,0,0,⋯,0]∈RNn×(3l+1)Nn for s=1,2,⋯,3l+1.
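For readers who wish to assemble the conditions numerically, the selector matrices ε_s defined above can be formed as in the following sketch (our own helper; dimensions are taken directly from the definition):

```python
import numpy as np

def eps(s, l, N, n):
    """Selector eps_s in R^{Nn x (3l+1)Nn}: the s-th Nn x Nn block is I_{Nn} and all
    other blocks are zero, so eps_s @ y(t) extracts the s-th block of y(t)."""
    E = np.zeros((N * n, (3 * l + 1) * N * n))
    E[:, (s - 1) * N * n : s * N * n] = np.eye(N * n)
    return E
```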
Proof of Theorem 1. Consider the following Lyapunov-Krasovskii functional:
$$V(t,\delta_t,\dot{\delta}_t,r(t))=\sum_{i=1}^{3}V_i(t,\delta_t,\dot{\delta}_t,r(t)),\quad t\in[t_s,t_{s+1}],\tag{3.2}$$
where
$$V_1(t,\delta_t,\dot{\delta}_t,r(t))=\delta^{T}(t)(I_N\otimes P(r(t)))\delta(t),\tag{3.3}$$
$$V_2(t,\delta_t,\dot{\delta}_t)=\sum_{s=1}^{l}\beta_s\int_{t-h_s}^{t-h_{s-1}}\delta^{T}(\mu)(I_N\otimes Q_s)\delta(\mu)\,d\mu,\tag{3.4}$$
$$V_3(t,\delta_t,\dot{\delta}_t)=\sum_{s=1}^{l}\beta_s\int_{-h_s}^{-h_{s-1}}\int_{t+v}^{t}\dot{\delta}^{T}(\mu)(I_N\otimes(R_s+W_s))\dot{\delta}(\mu)\,d\mu\,dv.\tag{3.5}$$
Consider the weak infinitesimal generator
$$\Im V(t,\delta_t,\dot{\delta}_t,r(t))=\lim_{\Delta\to0^{+}}\frac{1}{\Delta}\Big\{\mathbb{E}\big\{V(t+\Delta,\delta_{t+\Delta},\dot{\delta}_{t+\Delta},r(t+\Delta))\mid\delta_t,\,r(t)=m\big\}-V(t,\delta_t,\dot{\delta}_t,r(t))\Big\}.\tag{3.6}$$
Introduce $y(t)=\bigl(\delta^{T}(t),\delta^{T}(t-\tau_1(t)),\cdots,\delta^{T}(t-\tau_l(t)),\delta^{T}(t-h_1),\cdots,\delta^{T}(t-h_l),e^{T}(t-\tau_1(t)),\cdots,e^{T}(t-\tau_l(t))\bigr)^{T}\in\mathbb{R}^{(3l+1)Nn}$, and let $A(m)=A(r(t)=m)$, $P(m)=P(r(t)=m)$, $\forall m\in D$.
Thus, we obtain
$$\begin{aligned}\mathbb{E}[\Im V_1(t,\delta_t,\dot{\delta}_t,r(t))]&=\mathbb{E}\Big[\lim_{\Delta\to0^{+}}\frac{1}{\Delta}\big\{\mathbb{E}\{V_1(t+\Delta,\delta_{t+\Delta},\dot{\delta}_{t+\Delta},r(t+\Delta))\mid\delta_t,r(t)=m\}-V_1(t,\delta_t,\dot{\delta}_t,r(t))\big\}\Big]\\&=\mathbb{E}\Big[y^{T}(t)\Big(F^{T}(m)(I_N\otimes P(m))\varepsilon_1+\varepsilon_1^{T}(I_N\otimes P(m))F(m)+\sum_{n=1}^{M}\lambda_{mn}(v)\,\varepsilon_1^{T}(I_N\otimes P(n))\varepsilon_1\Big)y(t)\Big],\end{aligned}\tag{3.7}$$
where
F(m)=(IN⊗A(m))ε1−l∑s=1βs(H⊗B(m)K(m))ε2l+s+1−l∑s=1βs(H⊗B(m)K(m))εs+1. |
$$\begin{aligned}\mathbb{E}[\Im V_2(t,\delta_t,\dot{\delta}_t)]=\mathbb{E}\Big\{&\beta_1\delta^{T}(t-h_0)(I_N\otimes Q_1)\delta(t-h_0)-\beta_1\delta^{T}(t-h_1)(I_N\otimes Q_1)\delta(t-h_1)\\&+\sum_{s=2}^{l}\beta_s\big(\delta^{T}(t-h_{s-1})(I_N\otimes Q_s)\delta(t-h_{s-1})-\delta^{T}(t-h_s)(I_N\otimes Q_s)\delta(t-h_s)\big)\Big\},\end{aligned}\tag{3.8}$$
and
$$\mathbb{E}[\Im V_3(t,\delta_t,\dot{\delta}_t)]=\mathbb{E}\Big[\sum_{s=1}^{l}\beta_s(h_s-h_{s-1})\dot{\delta}^{T}(t)(I_N\otimes(R_s+W_s))\dot{\delta}(t)-\sum_{s=1}^{l}\beta_s\int_{t-h_s}^{t-h_{s-1}}\dot{\delta}^{T}(v)(I_N\otimes(R_s+W_s))\dot{\delta}(v)\,dv\Big].\tag{3.9}$$
According to Jensen's inequality, it can be obtained that
$$-\sum_{s=1}^{l}\beta_s\int_{t-h_s}^{t-h_{s-1}}\dot{\delta}^{T}(\nu)(I_N\otimes(R_s+W_s))\dot{\delta}(\nu)\,d\nu\le-\sum_{s=1}^{l}\frac{\beta_s}{h_s-h_{s-1}}\int_{t-h_s}^{t-h_{s-1}}\dot{\delta}^{T}(\nu)\,d\nu\,(I_N\otimes(R_s+W_s))\int_{t-h_s}^{t-h_{s-1}}\dot{\delta}(\nu)\,d\nu.\tag{3.10}$$
Substituting (2.10) and (3.10) into (3.9), we obtain
$$\mathbb{E}[\Im V_3(t,\delta_t,\dot{\delta}_t,r(t))]\le y^{T}(t)\Big[\sum_{s=1}^{l}\beta_s(h_s-h_{s-1})F^{T}(m)(I_N\otimes(R_s+W_s))F(m)-\sum_{s=1}^{l}\frac{\beta_s}{h_s-h_{s-1}}(\varepsilon_{l+s}^{T}-\varepsilon_{l+s+1}^{T})(I_N\otimes(R_s+W_s))(\varepsilon_{l+s}-\varepsilon_{l+s+1})\Big]y(t).\tag{3.11}$$
From (2.3), we can obtain
$$e^{T}(t-\tau_s)\Phi e(t-\tau_s)\le z^{T}(t-\tau_s)(\Lambda\otimes\Phi)z(t-\tau_s)=(\delta(t-\tau_s)+e(t-\tau_s))^{T}(H^{T}\Lambda H\otimes\Phi)(\delta(t-\tau_s)+e(t-\tau_s)),\tag{3.12}$$
where e(t−τs)=col{e1(t−τs),e2(t−τs),⋯,eN(t−τs)}, Λ=diag{σ1,σ2,⋯,σN}. Additionally, z(t−τs)=col{z1(t−τs),z2(t−τs),⋯,zN(t−τs)}.
Thus, combining (3.7)–(3.12), we obtain
$$\mathbb{E}[\Im V(t,\delta_t,\dot{\delta}_t,r(t))]\le\mathbb{E}\Big[y^{T}(t)\sum_{s=1}^{l}\beta_s\Big(\Xi-\varepsilon_{s+1}^{T}\Phi\varepsilon_{s+1}+(\varepsilon_{s+1}+\varepsilon_{2l+s+1})^{T}(H^{T}\Lambda H\otimes\Phi)(\varepsilon_{s+1}+\varepsilon_{2l+s+1})\Big)y(t)\Big].\tag{3.13}$$
Applying the Schur complement and (3.1) leads to the conclusion that
$$\mathbb{E}[\Im V(t,\delta_t,\dot{\delta}_t,r(t))]<0,\tag{3.15}$$
where
Ξ=ℵ1+ℵ2+ℵ3+ℵ4+ℵ5+ℵ6. |
Hence, mean-square consensus of the multi-agent system (2.10) is achieved under the event-triggered strategy (2.3).
Building on Theorem 1, the following result provides an efficient approach to designing the consensus controller gains.
Theorem 2. Suppose Assumption 1 holds and the protocol (2.7) is used, with given constants 0=h0<h1<⋯<hl, πs∈[0,1], σi>0, i∈IN and μ>0. Consensus of the multi-agent system (2.10) is achieved in the mean-square sense under the stochastic sampling event-triggered strategy (2.3) if there exist matrices ˆP(m)>0, ˆQs>0, ˆRs>0, ˆWs>0, ˆΦ>0 and matrices ˆK(m), m∈D and s∈Il, satisfying the following inequality:
$$\begin{pmatrix}\hat{\Xi} & (h_m-h_{m-1})\hat{F}^{T}(m) & \hat{\Sigma}(m,v)\\ * & \hat{\Psi} & O\\ * & * & -\hat{X}_{2}(m)\end{pmatrix}<0,\tag{3.16}$$
where ˆΞ=ˆℵ1+ˆℵ2+ˆℵ3+ˆℵ4+ˆℵ5+ˆℵ6.
ˆℵ1=ˆFT(m)ε1+εT1ˆF(m)+εT1λmm(v)(IN⊗ˆP(m))ε1.
ˆℵ2=β1εT1(IN⊗ˆQ1)ε1−β1εTl+1(IN⊗ˆQ1)εl+1 +l∑s=2βs(εTl+s(IN⊗ˆQs)εl+s−εTl+s+1(IN⊗ˆQs)εl+s+1).
ˆℵ3=−l∑s=1βs1hs−hs−1((εTl+s−εTl+s+1)(IN⊗(ˆRs+ˆWs))(εl+s−εl+s+1)).
ˆℵ4=εT1(IN⊗ˆP(m))ε1.
ˆℵ5=−∑ls=1βsεTs+1(ˆΦ)εs+1.
ˆℵ6=(εs+1+ε2l+s+1)T(HTΛH⊗ˆΦ)(εs+1+ε2l+s+1).
ˆF(m)=(IN⊗A(m)ˆP(m))ε1−l∑s=1βs(H⊗B(m)ˆK(m))ε2l+s+1−l∑s=1βs(H⊗B(m)ˆK(m))εs+1.
ˆΦ=diag{ˆΦ1,ˆΦ2,⋯,ˆΦN},ˆΦi=ˆP(m)ΦiˆP(m),ˆP(m)=P−1(m), ˆΦs=ˆP(m)ΦsˆP(m).
ˆΨ=l∑s=1(βs(hs−hs−1))−1(μ2(IN⊗(Rs+Ws))−2μ(IN⊗ˆP(m))).
ˆRs=ˆP(m)RsˆP(m),ˆWs=ˆP(m)WsˆP(m),ˆΣ(m,v)=λ(m,v)ˆX1(m).
λ(m,v)=(√λm1(v),√λm2(v),⋯,√λmm−1(v),√λmm+1(v),⋯,√λmM(v),0,⋯,0).
ˆX1(m)=diag{IN⊗ˆP(m),⋯,IN⊗ˆP(m),O,⋯,O}.
ˆX2(m)=diag{IN⊗ˆP(1),IN⊗ˆP(2),⋯,IN⊗ˆP(m−1),IN⊗ˆP(m+1),⋯,IN⊗ˆP(M),O,⋯,O}.
In addition, the feedback gain is supplied by K(m)=ˆK(m)ˆP−1(m) and the event-triggered parameter matrix is given by Φ(m)=ˆP−1(m)ˆΦ(m)ˆP−1(m).
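Once the matrix variables of Theorem 2 have been found by a solver, the controller gain and the event-triggered matrix are recovered through the change of variables above; a trivial sketch with our own helper name:

```python
import numpy as np

def recover_gains(K_hat, P_hat, Phi_hat):
    """Recover the quantities of Theorem 2:
    K(m) = K_hat(m) P_hat(m)^{-1},  Phi(m) = P_hat(m)^{-1} Phi_hat(m) P_hat(m)^{-1}."""
    P_inv = np.linalg.inv(P_hat)
    return K_hat @ P_inv, P_inv @ Phi_hat @ P_inv
```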
Proof of Theorem 2. Here we present the definitions of matrix variables K(m)=ˆK(m)ˆP−1(m),ˆP(m)=P−1(m) and ˆΦ(m)=ˆP(m)Φ(m)ˆP(m). We pre- and post-multiply both sides of (3.3) by the matrix diag{IN⊗P−1(m),IN⊗P−1(m),IN⊗P−1(m),InN}, and both sides of (3.2) by the matrix diag{IN⊗P−1(m),IN⊗P−1(m)}, respectively.
Lemma 1 enables one to derive the subsequent inequality.
$$-(I_N\otimes\hat{P}(m))\Big(\sum_{s=1}^{l}\beta_s(h_s-h_{s-1})(I_N\otimes(R_s+W_s))\Big)^{-1}(I_N\otimes\hat{P}(m))\le\mu^{2}\sum_{s=1}^{l}\beta_s(h_s-h_{s-1})(I_N\otimes(R_s+W_s))-2\mu(I_N\otimes\hat{P}(m));$$
we can get
$$\begin{pmatrix}\hat{\Xi} & (h_m-h_{m-1})\hat{F}^{T}(m) & \hat{\Sigma}(m,v)\\ * & \hat{\Psi} & O\\ * & * & -\hat{X}_{2}(m)\end{pmatrix}<0,\tag{3.17}$$
where ˆΞ=ˆℵ1+ˆℵ2+ˆℵ3+ˆℵ4+ˆℵ5+ˆℵ6,
ˆℵ1=ˆFT(m)ε1+εT1ˆF(m)+εT1λmm(v)(IN⊗ˆP(m))ε1,
ˆℵ2=β1εT1(IN⊗ˆQ1)ε1−β1εTl+1(IN⊗ˆQ1)εl+1+l∑s=2βs(εTl+s(IN⊗ˆQs)εl+s −εTl+s+1(IN⊗ˆQs)εl+s+1),
ˆℵ3=−l∑s=1βs1hs−hs−1((εTl+s−εTl+s+1)(IN⊗(ˆRs+ˆWs))(εl+s−εl+s+1)),
ˆℵ4=εT1(IN⊗ˆP(m))ε1,
ˆℵ5=−∑ls=1βsεTs+1(ˆΦ)εs+1,
ˆℵ6=(εs+1+ε2l+s+1)T(HTΛH⊗ˆΦ)(εs+1+ε2l+s+1),
ˆF(m)=(IN⊗A(m)ˆP(m))ε1−l∑s=1βs(H⊗B(m)ˆK(m))ε2l+s+1−l∑s=1βs(H⊗B(m)ˆK(m))εs+1,
ˆΦ=diag{ˆΦ1,ˆΦ2,⋯,ˆΦN}, ˆΦi=ˆP(m)ΦiˆP(m), ˆP(m)=P−1(m), ˆΦs=ˆP(m)ΦsˆP(m),
ˆΨ=l∑s=1(βs(hs−hs−1))−1(μ2(IN⊗(Rs+Ws))−2μ(IN⊗ˆP(m))),
ˆRs=ˆP(m)RsˆP(m), ˆWs=ˆP(m)WsˆP(m), ˆΣ(m,v)=λ(m,v)ˆX1(m),
λ(m,v)=(√λm1(v),√λm2(v),⋯,√λmm−1(v),√λmm+1(v),⋯,√λmM(v),0,⋯,0),
ˆX1(m)=diag{IN⊗ˆP(m),⋯,IN⊗ˆP(m),O,⋯,O},
ˆX2(m)=diag{IN⊗ˆP(1),IN⊗ˆP(2),⋯,IN⊗ˆP(m−1),
IN⊗ˆP(m+1),⋯,IN⊗ˆP(M),O,⋯,O}.
The proof is therefore complete.
Theorems 1 and 2 establish sufficient conditions for achieving consensus of event-triggered semi-Markov jump multi-agent systems with stochastic sampling. However, these conditions are not standard linear matrix inequality (LMI) conditions, because λ(m,v) is time-varying. As a result, they cannot be solved directly with the LMI toolbox in MATLAB. Nevertheless, we can impose lower and upper bounds on the transition rates and apply the theorem presented below to overcome this issue.
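In the paper the feasibility test is performed with MATLAB's LMI toolbox. As a much-simplified illustration of how such matrix-inequality conditions can be posed numerically (a plain Lyapunov-type LMI for the mode-1 matrix A(1) of Section Ⅳ only, not the full conditions (3.18) and (3.19); the Python/CVXPY formulation and the tolerances are our assumptions), one could write:

```python
import numpy as np
import cvxpy as cp

A = np.array([[-14.0, -18.0],
              [-11.0, -28.0]])                        # A(1) as read from Section IV

P = cp.Variable((2, 2), symmetric=True)
constraints = [P >> 1e-6 * np.eye(2),                 # P > 0
               A.T @ P + P @ A << -1e-6 * np.eye(2)]  # A'P + PA < 0
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print(prob.status)   # "optimal" indicates the LMI is feasible
```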
Theorem 3. Suppose Assumption 1 holds and the protocol (2.7) is used, with given constants 0=h0<h1<⋯<hl, πs∈[0,1], σi>0, i∈IN and μ>0. Consensus of the multi-agent system (2.10) is achieved in the mean-square sense under the stochastic sampling event-triggered strategy (2.3) if there exist positive definite matrices ˆP(m), ˆQs, ˆRs, ˆWs, ˆΦ and matrices ˆK(m), m∈D and s∈Il, satisfying the following inequalities:
$$\begin{pmatrix}\underline{\hat{\Xi}} & (h_m-h_{m-1})\hat{F}^{T}(m) & \underline{\hat{\Sigma}}(m)\\ * & \hat{\Psi} & O\\ * & * & -\hat{X}_{2}\end{pmatrix}<0,\tag{3.18}$$
$$\begin{pmatrix}\overline{\hat{\Xi}} & (h_m-h_{m-1})\hat{F}^{T}(m) & \overline{\hat{\Sigma}}(m)\\ * & \hat{\Psi} & O\\ * & * & -\hat{X}_{2}\end{pmatrix}<0,\tag{3.19}$$
where ˆΞ_=ˆℵ_1+ˆℵ2+ˆℵ3+ˆℵ4+ˆℵ5+ˆℵ6,
ˆℵ_1=ˆFT(m)ε1+εT1ˆF(m)+εT1λ_mm(IN⊗ˆP(m))ε1,
¯ˆΞ=¯ˆℵ1+ˆℵ2+ˆℵ3+ˆℵ4+ˆℵ5+ˆℵ6,
¯ˆℵ1=ˆFT(m)ε1+εT1ˆF(m)+εT1¯λmm(IN⊗ˆP(m))ε1,
ˆΣ_(m)=λ_(m)ˆX1(m), ¯ˆΣ(m)=ˉλ(m)ˆX1(m),
λ_(m)=(√λ_m1,√λ_m2,⋯,√λ_mm−1,√λ_mm+1,⋯,√λ_mM,0,⋯,0),
¯λ(m)=(√¯λm1,√¯λm2,⋯,√¯λmm−1,√¯λmm+1,⋯,√¯λmM,0,⋯,0).
The remaining terms in the inequalities are defined as in Theorem 2. The theorem can be proved by following the same lines as the proof of Theorem 2 in [35]; the details are therefore omitted here.
Remark 6. It is worth mentioning that the conclusion of Theorem 3 is relatively conservative. To reduce the conservativeness, the sojourn-time division method can be used: the sojourn time v is divided into J segments, and ¯λmn,p and λ_mn,p denote the upper and lower bounds of the transition rate on the pth segment, respectively. The resulting conditions are less conservative.
Corollary 1. Under Assumption 1 and the protocol (2.7), where 0=h0<h1<⋯<hl, πs∈[0,1], σi>0, i∈IN and μ>0, consensus of the multi-agent system (2.10) is achieved in the mean-square sense under the stochastic sampling event-triggered strategy (2.3) if there exist positive definite matrices ˆP(m,p), ˆQs, ˆRs, ˆWs, ˆΦ and matrices ˆK(m,p), m∈D, p∈IJ, s∈Il, satisfying the following LMIs:
$$\begin{pmatrix}\underline{\hat{\Xi}}(m,p) & (h_m-h_{m-1})\hat{F}^{T}(m,p) & \underline{\hat{\Sigma}}(m,p)\\ * & \hat{\Psi}(m,p) & O\\ * & * & -\hat{X}_{2}(m,p)\end{pmatrix}<0,$$
$$\begin{pmatrix}\overline{\hat{\Xi}}(m,p) & (h_m-h_{m-1})\hat{F}^{T}(m,p) & \overline{\hat{\Sigma}}(m,p)\\ * & \hat{\Psi}(m,p) & O\\ * & * & -\hat{X}_{2}(m,p)\end{pmatrix}<0,$$
in which ˆΞ_(m,p), ¯ˆΞ(m,p), ˆΣ_(m,p), ¯ˆΣ(m,p), ˆF(m,p), ˆP(m,p), ˆX2(m,p) and ˆΨ(m,p) are defined as in Theorem 3, with (m) replaced by (m,p). Furthermore, the feedback gain is given by K(m,p)=ˆK(m,p)ˆP−1(m,p), and the event-triggered parameter matrix is Φ(m,p)=ˆP−1(m,p)ˆΦ(m,p)ˆP−1(m,p), where m∈D and p∈IJ.
In this section, we provide a numerical example to demonstrate the effectiveness of the proposed design methodology. Consider a semi-Markov jump multi-agent system with one leader and five followers, described by model (2.1). The system matrices A(r) and B(r), r=1,2,3, are given by
$$A(1)=\begin{pmatrix}-14 & -18\\ -11 & -28\end{pmatrix},\quad A(2)=\begin{pmatrix}-16 & -16\\ -15 & -23\end{pmatrix},\quad A(3)=\begin{pmatrix}-13 & -18\\ -11 & -20\end{pmatrix},$$
with B(1)=(172), B(2)=(612), B(3)=(68). The topology of the network is shown in Figure 1. The corresponding Laplacian matrix L and the leader adjacency matrix B (i.e., B1 in (2.10)) are
$$L=\begin{pmatrix}1 & -1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\\ -1 & 0 & 1 & 0 & 0\\ 0 & 0 & -1 & 1 & 0\\ 0 & 0 & -1 & 0 & 1\end{pmatrix},\qquad B=\mathrm{diag}(1,1,0,0,0).$$
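As a check of Assumption 1 on this topology (our own verification, not part of the paper), the matrix H = L + B1 that appears in (2.10) can be assembled from the data above; the standard spanning-tree property implies that all of its eigenvalues lie in the open right half-plane, which the following sketch confirms:

```python
import numpy as np

L = np.array([[ 1, -1,  0,  0,  0],
              [ 0,  0,  0,  0,  0],
              [-1,  0,  1,  0,  0],
              [ 0,  0, -1,  1,  0],
              [ 0,  0, -1,  0,  1]], dtype=float)
B1 = np.diag([1.0, 1.0, 0.0, 0.0, 0.0])   # leader adjacency matrix B

H = L + B1
print(np.linalg.eigvals(H))               # eigenvalues 2, 1, 1, 1, 1: all positive
```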
Let the event-triggered parameters be σ1=6.353, σ2=7.163, σ3=6.093, σ4=7.533 and σ5=6.312. The stochastic sampling period h takes values in the set {h1,h2}={0.1 s, 0.2 s} with probabilities π1=Pr{h=h1}=0.2 and π2=Pr{h=h2}=0.8. With these values, we obtain β1=0.6 and β2=0.4. Using MATLAB's LMI toolbox, we verified the feasibility of the LMIs (3.18) and (3.19) for μ=4 in Theorem 3. The event-triggered parameter matrices are obtained as
$$\Phi_1=\begin{pmatrix}3.1159 & 0.0189\\ 0.0189 & 3.1294\end{pmatrix},\quad \Phi_2=\begin{pmatrix}3.1153 & 0.0195\\ 0.0196 & 3.1279\end{pmatrix},\quad \Phi_3=\begin{pmatrix}3.1149 & 0.0195\\ 0.0195 & 3.1279\end{pmatrix},$$
and the consensus feedback gains are K(1)=(0.0058, −0.0038), K(2)=(0.0027, −0.0011), K(3)=(0.0048, −0.0023). The initial states of the leader and followers were selected as x0(0)=(1, 0)^T, x1(0)=(3.547, 9.553)^T, x2(0)=(−6.154, 5.902)^T, x3(0)=(3.594, −7.611)^T, x4(0)=(9.341, 7.841)^T, x5(0)=(2.301, 12.24)^T. The tracking errors between the leader and the followers are shown in Figures 2 and 3.
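As a quick check of (2.9) with the sampling parameters above (our own verification), the probabilities β1 = 0.6 and β2 = 0.4 follow directly:

```python
def beta(s, periods, probs):
    """beta_s = sum_{i>=s} pi_i (h_s - h_{s-1}) / h_i, cf. (2.9); periods = [h_1, ..., h_l], h_0 = 0."""
    h = [0.0] + list(periods)
    return sum(probs[i - 1] * (h[s] - h[s - 1]) / h[i] for i in range(s, len(periods) + 1))

print(beta(1, [0.1, 0.2], [0.2, 0.8]), beta(2, [0.1, 0.2], [0.2, 0.8]))  # 0.6 0.4
```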
Figure 4 shows the instants at which events are triggered; events occur less frequently than the sampling instants.
Furthermore, the stochastic sampling period h is shown in Figure 6.
We can compare our simulation example with that in [34]. Both our article and [34] use the same state equations; however, [34] adopted a static event-triggered protocol, whereas we add a stochastic sampling process to the event-triggered protocol. Compared with the results reported in [34], the multi-agent system under stochastic sampling event-triggered control exhibits a faster convergence rate and a smaller steady-state error. Thus, this example confirms the effectiveness of Theorem 3.
The paper presented a study on the mean-square consensus of a semi-Markov jump multi-agent system based on event-triggered stochastic sampling. We have proposed a novel approach to improve the efficiency of multi-agent systems for consensus control.
The results of the study showed that the proposed approach was effective in achieving mean-square consensus in multi-agent systems. The use of event-triggering via stochastic sampling reduced the communication frequency and improved the computational efficiency of the system. The semi-Markov jump model provided a more accurate representation of the state transitions in the system.
However, there are some limitations to this study. The numerical examples presented in the paper were relatively small, and it is unclear how the proposed approach would scale to larger multi-agent systems. Additionally, the study assumed perfect knowledge of the system parameters, which may not be the case in real-world scenarios.
Future research can further investigate the robustness of the proposed approach against uncertainties and disturbances in the system. The scalability of the approach can also be explored in more detail, and the approach can be tested on more complex multi-agent systems.
The study presented in this article focuses on the leader-follower consensus control problem for multi-agent systems. We have proposed a novel event-triggered stochastic sampling approach, investigated a semi-Markov switching system architecture and established sufficient conditions for mean-square consensus of multi-agent systems.
The results of the numerical example presented in this study confirm the theoretical analysis. The proposed approach has the potential to improve the efficiency of multi-agent systems for consensus control in various applications.
The authors declare that they have not used artificial intelligence tools in the creation of this article.
This work was supported by the College Students' Innovation and Entrepreneurship Program of the Ministry of Education (202211306068), Excellent Scientific Research and Innovation Team of Anhui Colleges (2022AH010098), Innovation and Entrepreneurship Training Program for College Students in Anhui Province (S202211306114, S202211306134), Quality Engineering Project of Chizhou University (2022XXSKC09), Chizhou University Introducing Doctoral Research Startup Project (CZ2022YJRC08), and Key Research Project of Chizhou University (CZ2021ZR03, CZ2023ZRZ04).
The authors declare that there is no conflict of interest.
[1] X. Ren, D. Li, Y. Xi, H. Shao, Distributed multi-agent optimization via coordination with second-order nearest neighbors, IET Control Theory Appl., 14 (2020), 1733–1743. https://doi.org/10.1049/iet-cta.2019.0708
[2] X. Tan, M. Cao, J. Cao, Distributed dynamic event-based control for nonlinear multi-agent systems, IEEE Trans. Circuits Syst. Ⅱ Exp. Briefs, 68 (2021), 687–691. https://doi.org/10.1109/TCSII.2020.3006125
[3] Y. L. Wang, Q. L. Han, M. R. Fei, C. Peng, Network-based T–S fuzzy dynamic positioning controller design for unmanned marine vehicles, IEEE Trans. Cybern., 48 (2018), 1–14. https://doi.org/10.1109/TCYB.2018.2829730
[4] Y. L. Wang, Q. L. Han, Network-based modelling and dynamic output feedback control for unmanned marine vehicles in network environments, Automatica, 91 (2018), 43–53. https://doi.org/10.1016/j.automatica.2018.01.026
[5] W. He, C. Xu, Q. L. Han, F. Qian, Z. Lang, Finite-time L2 leader-follower consensus of networked Euler-Lagrange systems with external disturbances, IEEE Trans. Syst. Man Cybern. Syst., 48 (2017), 1–9. https://doi.org/10.1109/TSMC.2017.2774251
[6] X. Wang, H. Wu, J. Cao, Global leader-following consensus in finite time for fractional-order multi-agent systems with discontinuous inherent dynamics subject to nonlinear growth, Nonlinear Anal. Hybrid Syst., 37 (2020), 100888. https://doi.org/10.1016/j.nahs.2020.100888
[7] S. V. Feofilov, A. Kozyr, Stability of periodic movements in sampled data relay feedback control systems, in 2019 1st International Conference on Control Systems, Mathematical Modelling, Automation and Energy Efficiency (SUMMA), (2019), 18–21. https://doi.org/10.1109/SUMMA48161.2019.8947604
[8] E. Rosenwasser, W. Drewelow, T. Jeinsch, Synchronous sampled-data modal control of a linear periodic object with LTI actuator, in 2020 International Russian Automation Conference (RusAutoCon), (2020), 49–56. https://doi.org/10.1109/RusAutoCon49822.2020.9208195
[9] W. He, S. Lv, C. Peng, N. Kubota, F. Qian, Improved leaderless consensus criteria of networked multi-agent systems based on the sampled data, Int. J. Syst. Sci., 49 (2018), 2737–2752. https://doi.org/10.1080/00207721.2018.1505005
[10] W. He, S. Lv, X. Wang, F. Qian, Leaderless consensus of multi-agent systems via an event-triggered strategy under stochastic sampling, J. Franklin Inst., 356 (2019), 6502–6524. https://doi.org/10.1016/j.jfranklin.2019.05.033
[11] Y. C. Sun, G. H. Yang, Periodic event-triggered resilient control for cyber-physical systems under denial-of-service attacks, J. Franklin Inst., 355 (2018), 5613–5631. https://doi.org/10.1016/j.jfranklin.2018.06.009
[12] H. Li, Y. Fan, G. Pan, C. Song, Event-triggered remote dynamic control for network control systems, in 2020 16th International Conference on Control, Automation, Robotics and Vision (ICARCV), (2020), 483–488. https://doi.org/10.1109/ICARCV50220.2020.9305348
[13] D. Liu, G. H. Yang, Robust event-triggered control for networked control systems, Inform. Sci., 459 (2018), 186–197. https://doi.org/10.1016/j.ins.2018.02.057
[14] M. Hertneck, S. Linsenmayer, F. Allgower, Nonlinear dynamic periodic event-triggered control with robustness to packet loss based on non-monotonic Lyapunov functions, in 2019 IEEE 58th Conference on Decision and Control (CDC), (2019), 1680–1685. https://doi.org/10.1109/CDC40024.2019.9029770
[15] T. Y. Zhang, D. Ye, Distributed event-triggered control for multi-agent systems under intermittently random denial-of-service attacks, Inform. Sci., 542 (2021), 380–390. https://doi.org/10.1016/j.ins.2020.06.070
[16] Z. G. Wu, Y. Xu, R. Lu, Y. Wu, T. Huang, Event-triggered control for consensus of multiagent systems with fixed/switching topologies, IEEE Trans. Syst. Man Cybern. Syst., 48 (2018), 1736–1746. https://doi.org/10.1109/TSMC.2017.2744671
[17] S. Lv, W. He, F. Qian, J. Cao, Leaderless synchronization of coupled neural networks with the event-triggered mechanism, Neural Netw., 105 (2018), 316–327. https://doi.org/10.1016/j.neunet.2018.05.012
[18] D. Liu, G. H. Yang, Dynamic event-triggered control for linear time-invariant systems with L2-gain performance, Int. J. Robust Nonlin., 29 (2018), 507–518. https://doi.org/10.1002/rnc.4403
[19] X. Yi, K. Liu, D. V. Dimarogonas, K. H. Johansson, Dynamic event-triggered and self-triggered control for multi-agent systems, IEEE Trans. Automat. Contr., 64 (2019), 3300–3307. https://doi.org/10.1109/TAC.2018.2874703
[20] D. Liu, G. H. Yang, A dynamic event-triggered control approach to leader-following consensus for linear multiagent systems, IEEE Trans. Syst. Man Cybern. Syst., 50 (2020), 1–9. https://doi.org/10.1109/TSMC.2019.2960062
[21] S. L. Du, T. Liu, D. W. C. Ho, Dynamic event-triggered control for leader-following consensus of multiagent systems, IEEE Trans. Syst. Man Cybern. Syst., 50 (2018), 2168–2216. https://doi.org/10.1109/TSMC.2018.2866853
[22] W. He, B. Xu, Q. L. Han, F. Qian, Adaptive consensus control of linear multiagent systems with dynamic event-triggered strategies, IEEE Trans. Cybern., 50 (2019), 1–13. https://doi.org/10.1109/TCYB.2019.2920093
[23] X. M. Zhang, Q. L. Han, B. L. Zhang, An overview and deep investigation on sampled-data-based event-triggered control and filtering for networked systems, IEEE Trans. Ind. Inform., 13 (2016), 4–16. https://doi.org/10.1109/TII.2016.2607150
[24] L. Liu, S. Zhu, B. Wu, Asynchronous sampled-data consensus of singular multi-agent systems based on event-triggered strategy, Int. J. Syst. Sci., 50 (2019), 1530–1542. https://doi.org/10.1080/00207721.2019.1616232
[25] H. Su, Z. Wang, Z. Song, X. Chen, Event-triggered consensus of nonlinear multi-agent systems with sampling data and time delay, IET Control Theory Appl., 11 (2016), 1715–1725. https://doi.org/10.1049/iet-cta.2016.0865
[26] W. He, S. Lv, X. Wang, F. Qian, Leaderless consensus of multi-agent systems via an event-triggered strategy under stochastic sampling, J. Franklin Inst., 356 (2019), 6502–6524. https://doi.org/10.1016/j.jfranklin.2019.05.033
[27] X. Ruan, J. Feng, C. Xu, J. Wang, Observer-based dynamic event-triggered strategies for leader-following consensus of multi-agent systems with disturbances, IEEE Trans. Netw. Sci. Eng., 7 (2020), 3148–3158. https://doi.org/10.1109/TNSE.2020.3017493
[28] X. Li, H. Wu, J. Cao, Prescribed-time synchronization in networks of piecewise smooth systems via a nonlinear dynamic event-triggered control strategy, Math. Comput. Simulat., 203 (2023), 647–668. https://doi.org/10.1016/j.matcom.2022.07.010
[29] A. Hu, J. Cao, M. Hu, L. Guo, Event-triggered consensus of Markovian jumping multi-agent systems via stochastic sampling, IET Control Theory Appl., 9 (2015), 1964–1972. https://doi.org/10.1049/iet-cta.2014.1164
[30] X. H. Ge, Q. L. Han, Consensus of multiagent systems subject to partially accessible and overlapping Markovian network topologies, IEEE Trans. Cybern., 47 (2017), 1807–1819. https://doi.org/10.1109/TCYB.2016.2570860
[31] L. Wang, Y. Dong, D. Xie, J. Cao, Robust passivity analysis of Markov-type Lotka–Volterra model with time-varying delay and uncertain mode transition rates, Math. Methods Appl. Sci., 43 (2020), 6976–6984. https://doi.org/10.1002/mma.6447
[32] J. Dai, G. Guo, Exponential consensus of nonlinear multi-agent systems with semi-Markov switching topologies, IET Control Theory Appl., 11 (2017), 3363–3371. https://doi.org/10.1049/iet-cta.2017.0562
[33] B. Wang, Q. Zhu, Mode dependent H∞ filtering for semi-Markovian jump linear systems with sojourn time dependent transition rates, IET Control Theory Appl., 13 (2019), 3019–3025. https://doi.org/10.1049/iet-cta.2019.0141
[34] J. Dai, G. Guo, Event-triggered leader-following consensus for multi-agent systems with semi-Markov switching topologies, Inform. Sci., 459 (2018), 290–301. https://doi.org/10.1016/j.ins.2018.04.054
[35] M. He, J. Mu, X. Mu, H∞ leader-following consensus of nonlinear multi-agent systems under semi-Markovian switching topologies with partially unknown transition rates, Inform. Sci., 513 (2020), 168–179. https://doi.org/10.1016/j.ins.2019.11.002
[36] X. Xie, Z. Yang, X. Mu, Observer-based consensus control of nonlinear multi-agent systems under semi-Markovian switching topologies and cyber attacks, Int. J. Robust Nonlin., 30 (2020), 5510–5528. https://doi.org/10.1002/rnc.5088
[37] Z. Zhang, H. Wu, Cluster synchronization in finite/fixed time for semi-Markovian switching T–S fuzzy complex dynamical networks with discontinuous dynamic nodes, AIMS Math., 7 (2022), 11942–11971. https://doi.org/10.3934/math.2022666