
In this paper, the existence and uniqueness of the equilibrium point of the Cohen-Grossberg neural network (CGNN) are first studied. Additionally, a switched Cohen-Grossberg neural network (SCGNN) model with time-varying delay is established by introducing a switched system into the CGNN. To reduce the conservatism of the stability conditions, a flexible terminal interpolation method is proposed. An adjustable parameter divides the fixed time-delay interval into $2^{\imath+1}-3$ subintervals with adjustable terminals, so that more moments at which signals are transmitted slowly can be captured. To this end, a new Lyapunov-Krasovskii functional (LKF) is constructed, and the stability of the SCGNN can be estimated. Using the LKF method, a quadratic convex inequality, linear matrix inequalities (LMIs) and ordinary differential equation theory, a new form of stability criterion is obtained, and specific instances are given to demonstrate the applicability of the new stability criterion.
Citation: Biwen Li, Yibo Sun. Stability analysis of Cohen-Grossberg neural networks with time-varying delay by flexible terminal interpolation method[J]. AIMS Mathematics, 2023, 8(8): 17744-17764. doi: 10.3934/math.2023906
In 1983, M. Cohen and S. Grossberg proposed a new type of neural network model, the CGNN. CGNNs are widely used in image processing, speed detection of moving targets, associative memory and other fields, and researchers worldwide have studied them from different perspectives. In real-world use, it is paramount to ensure that a designed neural network is strongly stable. However, owing to the finite switching speed of amplifiers and the delay of signals during transmission, a CGNN may experience time lags in operation, which are an important cause of network instability. In recent years, many studies on stability problems have therefore been carried out for time-delay neural networks [1,2,3,4,5,6,7,8,9,10,11].
The switched system is a model for studying complex systems from the perspective of systems and control [12,13,14]. Mechanical systems and power systems can be cast as switched systems, and switched systems also play a vital role in other fields, including ecological science and the energy environment [15]. A switched system is a complex system composed of a family of continuous-time or discrete-time subsystems together with a switching rule that governs the switching among them. The switching rule controls the operation of the whole switched system; it is also called a switching signal, switching law or switching function, and is usually a piecewise constant function of time, state or events. D. Liberzon and A. S. Morse describe in detail the stability, design process and development of switched systems [16]. Compared with previous CGNN results [17,18,19,20], the connection weight matrix of the CGNN [21] changes over time when combined with a switched system: the dynamic behavior of the system can be adjusted through switching rules and strategies, reflecting the dynamic evolution of the system, whereas the connection weight matrix of an ordinary system is fixed and cannot exhibit dynamic changes. A method based on quantized sliding mode was used to solve the synchronization problem of discrete-time recurrent neural networks with time-varying delays [22]. To reduce the computational complexity, the authors introduced quantization techniques to discretize the network state, and finally used Lyapunov theory and the Barbalat lemma to deduce the convergence of the system. Sliding mode control is a nonlinear control method with strong robustness: it realizes the switching and control of the system state by introducing a sliding surface, offers strong stability, short settling time and strong tracking ability, and can also exhibit the dynamic changes of the system. In contrast, this paper adds a switched system to the traditional CGNN, uses LMIs and a quadratic convex inequality to establish a criterion for the asymptotic stability of the SCGNN, and further studies the stability and the dynamic evolution of the SCGNN.
For CGNNs with time-varying delay, it has been proven that the equilibrium point exists and is unique, and stability has been analyzed widely; however, few studies have addressed reducing the conservatism of the stability conditions. In [24] and [23], the weighting-delay method and the flexible terminal interpolation method, respectively, were used to study recurrent neural networks. Time delays are not necessarily uniform and may be asymmetric, so treating the delay as a fixed interval when studying its impact on system stability [25,26,27,28] is more conservative and usually cannot meet practical needs. The above two methods, through one or more parameters, change the lengths of the intervals, divide a fixed interval into multiple variable subintervals, and obtain the maximum allowable upper bound of the time delay by using LMIs and constructing an appropriate LKF, which reduces the conservatism of the results. Comparing the experimental results of [23] and [24], the allowable delay upper bound obtained by the flexible terminal interpolation method is larger. In [29], through Halanay's inequality and a Lyapunov functional, the authors put forward a new sufficient condition ensuring that a CGNN with time-varying delay has a unique equilibrium solution and global stability. In [30], based on nonsingular M-matrix theory, a transformation matrix is used to carry out an appropriate linear transformation of the M-matrix into a special form with good properties, so as to establish the positivity of the system and obtain a new criterion ensuring that high-order delayed discrete-time CGNNs are globally exponentially stable. However, these methods do not consider reducing the conservatism of the conditions.
Therefore, this paper uses the flexible terminal interpolation method to study CGNNs with time delay, in order to reduce the conservatism of the results and make them more general and practical. Moreover, the flexible terminal interpolation method can adapt the sizes of the subintervals to the characteristics of the data through a single parameter, which greatly reduces the computational burden and the computation time while preserving the interpolation accuracy.
The flexible terminal interpolation method uses $\imath$ interpolations and an adjustable parameter to divide the fixed time-delay interval $[\ell_0,\ell_2]$ into $2^{\imath+1}-3$ flexible time-delay subintervals, as shown in Figure 1.
Let the adjustable parameter be $\eth$, and let $\ell_1=\ell(t)$, $\vartheta=1-\eth$. The terminal points of the subintervals can be expressed as (taking the second interpolation as an example)
$$\begin{cases}\ell_{\frac{1}{2}}=\eth\ell(t)+\vartheta\ell_0=\eth\ell(t)+(1-\eth)\ell_0,\\ \ell_{\frac{3}{2}}=\eth\ell(t)+\vartheta\ell_2=\eth\ell(t)+(1-\eth)\ell_2,\\ \ell_{\frac{1}{4}}=\eth\ell_{\frac{1}{2}}+\vartheta\ell_0=\eth^2\ell(t)+(1-\eth^2)\ell_0,\\ \ell_{\frac{3}{4}}=\eth\ell(t)+\vartheta\ell_{\frac{1}{2}}=\eth(2-\eth)\ell(t)+(1-\eth)^2\ell_0,\\ \ell_{\frac{2i-1}{2^{\imath}}}=\begin{cases}\eth\ell_{\frac{i}{2^{\imath-1}}}+\vartheta\ell_{\frac{i-1}{2^{\imath-1}}}, & 1\le i\le 2^{\imath-1},\\ \eth\ell_{\frac{i-1}{2^{\imath-1}}}+\vartheta\ell_{\frac{i}{2^{\imath-1}}}, & 2^{\imath-1}+1\le i\le 2^{\imath},\end{cases}\end{cases}$$
with derivatives
$$\dot\ell_{\frac{1}{2}}=\eth\dot\ell(t),\quad \dot\ell_{\frac{3}{2}}=\eth\dot\ell(t),\quad \dot\ell_{\frac{1}{4}}=\eth^2\dot\ell(t),\quad \dot\ell_{\frac{3}{4}}=\eth(2-\eth)\dot\ell(t).$$
We can see that the endpoint of each flexible subinterval is a convex combination of $\ell_0$ and $\ell(t)$, or of $\ell_2$ and $\ell(t)$. The terminal of each subinterval is adjustable, that is, the delay interval is partitioned adaptively as a whole, which enables us to capture more time-delay information and makes the stability results more accurate and effective.
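To make the recursion above concrete, here is a minimal Python sketch (our illustration; the function name and the example values $\ell_0=0$, $\ell(t)=1$, $\ell_2=2$, $\eth=0.642$, $\imath=2$ are assumptions, not from the paper) that generates the flexible endpoints $\ell_{\frac{2i-1}{2^{\imath}}}$ from the recursion:

```python
from fractions import Fraction

def flexible_endpoints(ell0, ell_t, ell2, edh, depth):
    """Endpoints l_{i/2^depth}, i = 0..2^(depth+1), produced by the
    flexible terminal interpolation recursion above; edh is the
    adjustable parameter and theta = 1 - edh."""
    theta = 1 - edh
    pts = {Fraction(0): ell0, Fraction(1): ell_t, Fraction(2): ell2}
    for level in range(1, depth + 1):
        half = 2 ** (level - 1)
        for i in range(1, 2 ** level + 1):
            lo = Fraction(i - 1, half)   # l_{(i-1)/2^(level-1)}
            hi = Fraction(i, half)       # l_{i/2^(level-1)}
            if i <= half:                # left half of the recursion
                val = edh * pts[hi] + theta * pts[lo]
            else:                        # mirrored rule on the right half
                val = edh * pts[lo] + theta * pts[hi]
            pts[Fraction(2 * i - 1, 2 ** level)] = val
    return dict(sorted(pts.items()))

# Example: ell0 = 0, ell(t) = 1.0, ell2 = 2, edh = 0.642, two interpolations
for k, v in flexible_endpoints(0.0, 1.0, 2.0, 0.642, 2).items():
    print(f"l_{k} = {v:.4f}")
```

Each printed endpoint is indeed a convex combination of $\ell_0$ or $\ell_2$ with $\ell(t)$, consistent with the observation above.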
Notation: $\mathbb R^{\eta\times\eta}$ denotes the set of $\eta\times\eta$ real matrices; $Y^T$ denotes the transpose of the matrix $Y$; $Q>0$ means that the matrix $Q$ is positive definite; $\mathrm{col}[Y_1,Y_2]=[Y_1^T,Y_2^T]^T$; $\mathrm{He}[Y]=Y^T+Y$; $\mathrm{diag}\{\dots\}$ denotes a diagonal matrix, with all off-diagonal elements zero; $*$ denotes the entry obtained by symmetry about the main diagonal of a matrix.
The CGNN with time-varying delays can be described as follows:
$$\dot{\hat x}_i(t)=d_i(\hat x_i(t))\Big[-a_i(\hat x_i(t))+\sum_{j=1}^{\eta}b_{ij}\bar F_j(\hat x_j(t))+\sum_{j=1}^{\eta}c_{ij}\bar F_j(\hat x_j(t-\ell_j(t)))+\jmath_i\Big], \tag{2.1}$$
where $\hat x_i(t)$ is the state of the $i$th neuron at time $t$, $d_i(\hat x_i(t))$ is the amplification function, $a_i(\hat x_i(t))$ is a well-behaved function, and $\bar F_i(\cdot)$ denotes the bounded neuronal activation function. $B=(b_{ij})_{\eta\times\eta}$ and $C=(c_{ij})_{\eta\times\eta}$ are the connection weight matrices that describe how neuron $i$ connects to neuron $j$, and $\ell_j(t)$ are the time delays. $\jmath=[\jmath_1,\jmath_2,\dots,\jmath_\eta]$ denotes the external bias on the $i$th neuron at time $t$, where $\jmath_1,\jmath_2,\dots,\jmath_\eta$ are all constants.
In vector form, CGNN (2.1) can be written as
$$\dot{\hat x}(t)=D(\hat x(t))[-A(\hat x(t))+B\bar F(\hat x(t))+C\bar F(\hat x(t-\ell(t)))+\jmath], \tag{2.2}$$
where
$$\begin{aligned}&\jmath=(\jmath_1,\jmath_2,\dots,\jmath_\eta)^T,\quad \hat x(t)=(\hat x_1(t),\hat x_2(t),\dots,\hat x_\eta(t))^T,\\ &D(\hat x(t))=\mathrm{diag}(d_1(\hat x_1(t)),d_2(\hat x_2(t)),\dots,d_\eta(\hat x_\eta(t))),\\ &A(\hat x(t))=(a_1(\hat x_1(t)),a_2(\hat x_2(t)),\dots,a_\eta(\hat x_\eta(t)))^T,\\ &\bar F(\hat x(t-\ell(t)))=(\bar F_1(\hat x_1(t-\ell_1(t))),\bar F_2(\hat x_2(t-\ell_2(t))),\dots,\bar F_\eta(\hat x_\eta(t-\ell_\eta(t))))^T.\end{aligned}$$
We will need to use the following assumptions and lemma.
Assumption 2.1. Each function $d_i(\hat x_i(t))$ is bounded and locally continuous, and there exist non-negative constants $\underline{l_i}$ and $\overline{l_i}$ such that $0\le\underline{l_i}\le d_i(\hat x_i(t))\le\overline{l_i}<+\infty$ for all $\hat x_i(t)\in\mathbb R$.
Assumption 2.2. Each function $a_i(\hat x_i(t))$ is bounded and continuous, and there exist positive constants $\mu_i>0$ such that
$$\frac{a_i(\hat x_i(t))-a_i(\check y_i(t))}{\hat x_i(t)-\check y_i(t)}\ge\mu_i>0,\quad i=1,2,\dots,\eta,\ \forall\hat x_i,\check y_i\in\mathbb R,\ \hat x_i\ne\check y_i.$$
Assumption 2.3. $\bar F_i(\cdot)$ is a bounded activation function, and there are positive constants $k_i$ ($i=1,2,\dots,\eta$) such that
$$|\bar F_i(\hat x_i(t))-\bar F_i(\check y_i(t))|\le k_i|\hat x_i-\check y_i|,\quad i=1,2,\dots,\eta,\ \forall\hat x_i,\check y_i\in\mathbb R,\ \hat x_i\ne\check y_i.$$
Definition 2.1. [31] For all admissible coefficient matrices $B$ and $C$ of CGNN (2.1) with time-varying delay, a state in which the system remains is an equilibrium point of the system; if this equilibrium point is asymptotically stable, the model is said to be asymptotically stable.
Definition 2.2. [32] A state in which system (2.1) remains is called an equilibrium point. An equilibrium point is a constant vector $\hat x^*=(\hat x_1^*,\hat x_2^*,\dots,\hat x_\eta^*)^T$ such that
$$-a_i(\hat x_i^*)+\sum_{j=1}^{\eta}b_{ij}\bar F_j(\hat x_j^*)+\sum_{j=1}^{\eta}c_{ij}\bar F_j(\hat x_j^*)+\jmath_i=0.$$
Lemma 2.1. [33] (Quadratic reciprocally convex inequality) For given matrices $J_i\in\mathbb R^{\eta\times\eta}$, scalars $q_i,q_j\in[0,1]$, and $\wp_i\in(0,1)$ with $\sum_{i=1}^{\eta}\wp_i=1$, suppose there exist $T_i\in\mathbb R^{\eta\times\eta}$ and $L_{ij}\in\mathbb R^{\eta\times\eta}$ ($j>i$), $i=1,2,\dots,\eta$, such that the following matrix inequalities hold:
$$\begin{bmatrix}J_i-T_i & L_{ij}\\ * & J_j-q_jT_j\end{bmatrix}\ge0, \tag{2.3}$$
$$\begin{bmatrix}J_i-q_iT_i & L_{ij}\\ * & J_j-T_j\end{bmatrix}\ge0, \tag{2.4}$$
$$\begin{bmatrix}J_i & L_{ij}\\ * & J_j\end{bmatrix}\ge0. \tag{2.5}$$
Then, for any vectors $\zeta_i\in\mathbb R^{\eta}$, the following inequality holds [23]:
$$\sum_{i=1}^{\eta}\frac{1}{\wp_i}\zeta_i^TJ_i\zeta_i\ge\sum_{j>i=1}^{\eta}\mathrm{He}[\zeta_i^TL_{ij}\zeta_j]+\sum_{i=1}^{\eta}\zeta_i^T[J_i+(1-\wp_i)T_i]\zeta_i+\sum_{j>i=1}^{\eta}\Big\{\zeta_i^T\Big(q_i\frac{\wp_j^2}{\wp_i}T_j\Big)\zeta_i+\zeta_j^T\Big(q_j\frac{\wp_i^2}{\wp_j}T_i\Big)\zeta_j\Big\}. \tag{2.6}$$
Theorem 2.1. Under Assumptions 2.1-2.3, if the following condition is satisfied, then the equilibrium point of system (2.1) exists and is unique:
$$\|B\|_1+\|C\|_1<\frac{\mu_m}{K_m},$$
where
$$\mu_m=\min_{1\le i\le\eta}(\mu_i),\quad K_m=\max_{1\le i\le\eta}(k_i),$$
and, for any matrix $B=(b_{ij})_{\eta\times\eta}$,
$$\|B\|_1=\max_{1\le j\le\eta}\sum_{i=1}^{\eta}|b_{ij}|,\quad \|B\|_2=\sqrt{\lambda_m(B^TB)},$$
where $\lambda_m(B^TB)$ is the largest eigenvalue of the matrix $B^TB$.
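As a quick numerical illustration (our own sketch; the matrices $B$, $C$ and the constants are made up, not from the paper), the sufficient condition of Theorem 2.1 can be checked directly:

```python
import numpy as np

def unique_equilibrium(B, C, mu_m, K_m):
    """Check the sufficient condition ||B||_1 + ||C||_1 < mu_m / K_m
    of Theorem 2.1 (||.||_1 is the maximum absolute column sum)."""
    lhs = np.linalg.norm(B, 1) + np.linalg.norm(C, 1)
    return lhs < mu_m / K_m, lhs, mu_m / K_m

B = np.array([[0.1, -0.2], [0.0, 0.1]])
C = np.array([[0.1, 0.1], [-0.1, 0.2]])
ok, lhs, rhs = unique_equilibrium(B, C, mu_m=1.2, K_m=1.0)
print(f"||B||_1 + ||C||_1 = {lhs:.2f} < {rhs:.2f}: {ok}")
```

Here `np.linalg.norm(., 1)` computes exactly the maximum absolute column sum used in the theorem.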
Proof. Let $\hat x^*=(\hat x_1^*,\hat x_2^*,\dots,\hat x_\eta^*)^T$ denote an equilibrium point of neural network model (2.2). Then,
$$D(\hat x^*)[-A(\hat x^*)+B\bar F(\hat x^*)+C\bar F(\hat x^*)+\jmath]=0. \tag{2.7}$$
Because $D(\hat x^*)$ is a positive diagonal matrix, (2.7) reduces to
$$-A(\hat x^*)+B\bar F(\hat x^*)+C\bar F(\hat x^*)+\jmath=0. \tag{2.8}$$
Let
$$\check\Im(\hat x)=-A(\hat x)+B\bar F(\hat x)+C\bar F(\hat x)+\jmath, \tag{2.9}$$
where $\check\Im(\hat x)=(\check k_1(\hat x),\check k_2(\hat x),\dots,\check k_\eta(\hat x))^T$ with
$$\check k_i(\hat x)=-a_i(\hat x_i)+\sum_{j=1}^{\eta}b_{ij}\bar F_j(\hat x_j)+\sum_{j=1}^{\eta}c_{ij}\bar F_j(\hat x_j)+\jmath_i,\quad i=1,2,\dots,\eta.$$
As is well known, if $\check\Im(\hat x)$ is a homeomorphism of $\mathbb R^\eta$, then (2.8) has a unique solution. From [2], $\check\Im(\hat x)$ is a homeomorphic map of $\mathbb R^\eta$ if $\check\Im(\hat x)\ne\check\Im(\check y)$ for all $\hat x\ne\check y$ with $\hat x,\check y\in\mathbb R^\eta$, and $\|\check\Im(\hat x)\|\to\infty$ as $\|\hat x\|\to\infty$.
Let $\hat x\ne\check y$, which allows two cases:
(I) $\hat x\ne\check y$ and $\bar F(\hat x)-\bar F(\check y)\ne0$;
(II) $\hat x\ne\check y$ and $\bar F(\hat x)-\bar F(\check y)=0$.
Now, case (I):
$$\begin{aligned}\check\Im(\hat x)-\check\Im(\check y)&=-A(\hat x)+B\bar F(\hat x)+C\bar F(\hat x)+\jmath-[-A(\check y)+B\bar F(\check y)+C\bar F(\check y)+\jmath]\\&=-(A(\hat x)-A(\check y))+B(\bar F(\hat x)-\bar F(\check y))+C(\bar F(\hat x)-\bar F(\check y)), \end{aligned} \tag{2.10}$$
or, componentwise,
$$\check k_i(\hat x)-\check k_i(\check y)=-(a_i(\hat x_i)-a_i(\check y_i))+\sum_{j=1}^{\eta}(b_{ij}+c_{ij})(\bar F_j(\hat x_j)-\bar F_j(\check y_j)). \tag{2.11}$$
Multiplying both sides of Eq (2.11) by $\mathrm{sgn}(\hat x_i-\check y_i)$, where
$$\mathrm{sgn}(\hat x)=\begin{cases}1, & \hat x>0,\\ 0, & \hat x=0,\\ -1, & \hat x<0,\end{cases}$$
(2.11) becomes
$$\begin{aligned}\mathrm{sgn}(\hat x_i-\check y_i)(\check k_i(\hat x)-\check k_i(\check y))&=-\mathrm{sgn}(\hat x_i-\check y_i)(a_i(\hat x_i)-a_i(\check y_i))+\sum_{j=1}^{\eta}\mathrm{sgn}(\hat x_i-\check y_i)(b_{ij}+c_{ij})(\bar F_j(\hat x_j)-\bar F_j(\check y_j))\\&\le-\mu_i|\hat x_i-\check y_i|+\sum_{j=1}^{\eta}(|b_{ij}|+|c_{ij}|)k_j|\hat x_j-\check y_j|\\&\le-\mu_m|\hat x_i-\check y_i|+\sum_{j=1}^{\eta}(|b_{ij}|+|c_{ij}|)K_m|\hat x_j-\check y_j|,\end{aligned}$$
from which we get
$$\begin{aligned}\sum_{i=1}^{\eta}\mathrm{sgn}(\hat x_i-\check y_i)(\check k_i(\hat x)-\check k_i(\check y))&\le-\sum_{i=1}^{\eta}\mu_m|\hat x_i-\check y_i|+\sum_{i=1}^{\eta}\sum_{j=1}^{\eta}(|b_{ij}|+|c_{ij}|)K_m|\hat x_j-\check y_j|\\&\le-\sum_{i=1}^{\eta}\mu_m|\hat x_i-\check y_i|+K_m\sum_{i=1}^{\eta}\sum_{j=1}^{\eta}(|b_{ij}|+|c_{ij}|)|\hat x_j-\check y_j|\\&\le-(\mu_m-K_m(\|B\|_1+\|C\|_1))\|\hat x-\check y\|_1, \end{aligned} \tag{2.12}$$
where $\|\hat x-\check y\|_1=\sum_{i=1}^{\eta}|\hat x_i-\check y_i|$. For $\hat x-\check y\ne0$, the condition $\|B\|_1+\|C\|_1<\frac{\mu_m}{K_m}$ implies that
$$\sum_{i=1}^{\eta}\mathrm{sgn}(\hat x_i-\check y_i)(\check k_i(\hat x)-\check k_i(\check y))<0,$$
and hence
$$\sum_{i=1}^{\eta}|\check k_i(\hat x)-\check k_i(\check y)|=\|\check\Im(\hat x)-\check\Im(\check y)\|_1>0.$$
It follows that $\check\Im(\hat x)\ne\check\Im(\check y)$ for any $\hat x\ne\check y$.
Now, case (II):
$$\check\Im(\hat x)-\check\Im(\check y)=-(A(\hat x)-A(\check y)),$$
from which one can obtain
$$\mathrm{sgn}(\hat x_i-\check y_i)(\check k_i(\hat x)-\check k_i(\check y))=-\mathrm{sgn}(\hat x_i-\check y_i)(a_i(\hat x_i)-a_i(\check y_i))\le-\mu_i|\hat x_i-\check y_i|.$$
Then $\hat x-\check y\ne0$ implies that
$$\sum_{i=1}^{\eta}\mathrm{sgn}(\hat x_i-\check y_i)(\check k_i(\hat x)-\check k_i(\check y))<0,$$
and hence
$$\sum_{i=1}^{\eta}|\check k_i(\hat x)-\check k_i(\check y)|=\|\check\Im(\hat x)-\check\Im(\check y)\|_1>0.$$
From the above two inequalities, we see that any $\hat x\ne\check y$ gives $\check\Im(\hat x)\ne\check\Im(\check y)$.
Substituting $\check y=0$ into inequality (2.12) gives
$$\sum_{i=1}^{\eta}\mathrm{sgn}(\hat x_i)(\check k_i(\hat x)-\check k_i(0))\le-(\mu_m-K_m(\|B\|_1+\|C\|_1))\|\hat x\|_1.$$
Therefore,
$$\begin{aligned}(\mu_m-K_m(\|B\|_1+\|C\|_1))\|\hat x\|_1&\le\Big|\sum_{i=1}^{\eta}\mathrm{sgn}(\hat x_i)(\check k_i(\hat x)-\check k_i(0))\Big|\le\sum_{i=1}^{\eta}|\check k_i(\hat x)-\check k_i(0)|\\&=\|\check\Im(\hat x)-\check\Im(0)\|_1\le\|\check\Im(\hat x)\|_1+\|\check\Im(0)\|_1,\end{aligned}$$
so
$$\|\check\Im(\hat x)\|_1\ge(\mu_m-K_m(\|B\|_1+\|C\|_1))\|\hat x\|_1-\|\check\Im(0)\|_1.$$
That is, $\|\check\Im(\hat x)\|\to\infty$ as $\|\hat x\|\to\infty$.
This completes the proof of Theorem 2.1.
Remark 2.1. Regarding the construction of LKFs, different constructions suit different types of time-delay systems, and the utilization rates of different types of time-delay information also differ. How to construct an LKF with a small amount of computation and low conservatism is worth further exploration.
The system state is translated so that the equilibrium point lies at the origin. Let the equilibrium point be $\hat x^*=(\hat x_1^*,\hat x_2^*,\dots,\hat x_\eta^*)$ and $\bar\kappa(t)=\hat x(t)-\hat x^*$:
$$\dot{\bar\kappa}_i(t)=\alpha_i(\bar\kappa_i(t))\Big[-\beta_i(\bar\kappa_i(t))+\sum_{j=1}^{\eta}b_{ij}\bar\hbar_j(\bar\kappa_j(t))+\sum_{j=1}^{\eta}c_{ij}\bar\hbar_j(\bar\kappa_j(t-\ell_j(t)))\Big],$$
or, in vector form,
$$\dot{\bar\kappa}(t)=\alpha(\bar\kappa(t))[-\beta(\bar\kappa(t))+B\bar\hbar(\bar\kappa(t))+C\bar\hbar(\bar\kappa(t-\ell(t)))], \tag{3.1}$$
where
$$\begin{aligned}&\bar\kappa(t)=(\bar\kappa_1(t),\bar\kappa_2(t),\dots,\bar\kappa_\eta(t))^T,\\ &\alpha(\bar\kappa(t))=\mathrm{diag}(\alpha_1(\bar\kappa_1(t)),\alpha_2(\bar\kappa_2(t)),\dots,\alpha_\eta(\bar\kappa_\eta(t))),\\ &\beta(\bar\kappa(t))=(\beta_1(\bar\kappa_1(t)),\beta_2(\bar\kappa_2(t)),\dots,\beta_\eta(\bar\kappa_\eta(t)))^T,\\ &\bar\hbar(\bar\kappa(t-\ell(t)))=(\bar\hbar_1(\bar\kappa_1(t-\ell_1(t))),\bar\hbar_2(\bar\kappa_2(t-\ell_2(t))),\dots,\bar\hbar_\eta(\bar\kappa_\eta(t-\ell_\eta(t))))^T.\end{aligned}$$
For the transformed system (3.1), we have
$$\begin{aligned}&\alpha_i(\bar\kappa_i(t))=d_i(\bar\kappa_i(t)+\hat x_i^*), && i=1,2,\dots,\eta,\\ &\beta_i(\bar\kappa_i(t))=a_i(\bar\kappa_i(t)+\hat x_i^*)-a_i(\hat x_i^*), && i=1,2,\dots,\eta,\\ &\bar\hbar_i(\bar\kappa_i(t))=\bar F_i(\bar\kappa_i(t)+\hat x_i^*)-\bar F_i(\hat x_i^*), && i=1,2,\dots,\eta.\end{aligned}$$
Time-delay neural network models play a vital role in practical applications such as function approximation, parallel computation and associative memory. Moreover, the main feature of a switched system is the constraint it places on the state variable, which can be an input or an output variable. Therefore, SCGNNs with time delay are studied next.
Consider the following SCGNN model with time-varying delay:
$$\dot{\bar\kappa}(t)=\alpha(\bar\kappa(t))[-\beta(\bar\kappa(t))+B_{\sigma(t)}\bar\hbar(\bar\kappa(t))+C_{\sigma(t)}\bar\hbar(\bar\kappa(t-\ell(t)))], \tag{3.2}$$
where $\bar\hbar(\bar\kappa(t))=(\bar\hbar_1(\bar\kappa_1(t)),\bar\hbar_2(\bar\kappa_2(t)),\dots,\bar\hbar_\eta(\bar\kappa_\eta(t)))^T$ is the neuron activation function, $\ell$ is bounded, and $\sigma(t):[t_0,+\infty)\to\mathcal N=\{1,2,\dots,N\}$ is a piecewise constant function of time $t$, called the switching signal, which activates a specific subsystem. There are $N$ neural network subsystems. The corresponding switching sequence is $\sigma(t):\{(t_0,\sigma(t_0)),\dots,(t_\imath,\sigma(t_\imath)),\dots\mid\sigma(t_\imath)\in\mathcal N,\ \imath=0,1,\dots\}$, where $t_0$ is the initial time and $t_\imath$ is the switching instant of the $\imath$th subsystem; $\sigma(t_\imath)$ means that the $\imath$th subsystem is activated. For any $\imath$, the matrix pair $(B_\sigma,C_\sigma)$ belongs to the finite set $\{(B_1,C_1),(B_2,C_2),\dots,(B_N,C_N)\}$.
In this article, it is assumed that the switching signal $\sigma(t)$ is not known a priori and that $\sigma(t_\imath)=\imath$. Define the function $\xi(t)=(\xi_1(t),\xi_2(t),\dots,\xi_N(t))^T$, where, for $\imath=1,2,\dots,N$,
$$\xi_\imath(t)=\begin{cases}1, & \text{when the switched system is described by the $\imath$th mode } B_{\sigma(t_\imath)},\ C_{\sigma(t_\imath)},\\ 0, & \text{otherwise.}\end{cases}$$
Now, CGNN (3.2) with switching signal can be written as
$$\dot{\bar\kappa}(t)=\alpha(\bar\kappa(t))\Big\{-\beta(\bar\kappa(t))+\sum_{\imath=1}^{N}\xi_\imath(t)\big[B_{\sigma(t_\imath)}\bar\hbar(\bar\kappa(t))+C_{\sigma(t_\imath)}\bar\hbar(\bar\kappa(t-\ell(t)))\big]\Big\}, \tag{3.3}$$
where $\sum_{\imath=1}^{N}\xi_\imath(t)=1$.
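As a minimal illustration (our sketch; the switching instants and modes are assumed, not from the paper), the indicator functions $\xi_\imath(t)$ can be realized from a piecewise-constant switching signal, and exactly one of them equals 1 at any time, consistent with the identity $\sum_{\imath=1}^{N}\xi_\imath(t)=1$ above:

```python
import bisect

def make_switching_signal(switch_times, modes):
    """Piecewise-constant sigma(t): modes[k] is active on
    [switch_times[k], switch_times[k+1])."""
    def sigma(t):
        k = bisect.bisect_right(switch_times, t) - 1
        return modes[max(k, 0)]
    return sigma

def xi(t, mode, sigma):
    """Indicator xi_mode(t): 1 if subsystem `mode` is active, else 0."""
    return 1 if sigma(t) == mode else 0

# Example: three modes, switching every 2 time units
sigma = make_switching_signal([0.0, 2.0, 4.0], [1, 2, 3])
assert sum(xi(1.0, m, sigma) for m in (1, 2, 3)) == 1  # sum of xi equals 1
print([sigma(t) for t in (0.5, 2.5, 4.5)])  # -> [1, 2, 3]
```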
Translating the equilibrium point of system (2.1) to the origin, i.e., $(\hat x_1^*,\hat x_2^*,\dots,\hat x_\eta^*)$ to $(0,0,\dots,0)$, we have $\bar\kappa(0)=0$. By Assumption 2.2,
$$\frac{\beta(\bar\kappa(t))-\beta(\bar\kappa(0))}{\bar\kappa(t)-\bar\kappa(0)}\ge\mu_i>0,\quad i=1,2,\dots,\eta,$$
and hence
$$\beta(\bar\kappa(t))\ge\mu_i\bar\kappa(t).$$
Now,
$$\begin{aligned}\dot{\bar\kappa}(t)&\le-\alpha(\bar\kappa(t))\mu_i\bar\kappa(t)+\alpha(\bar\kappa(t))\sum_{\imath=1}^{N}\xi_\imath(t)B_{\sigma(t_\imath)}\bar\hbar(\bar\kappa(t))+\alpha(\bar\kappa(t))\sum_{\imath=1}^{N}\xi_\imath(t)C_{\sigma(t_\imath)}\bar\hbar(\bar\kappa(t-\ell(t)))\\&\le-\mu_i\alpha(\bar\kappa(t))\bar\kappa(t)+\alpha(\bar\kappa(t))B_{\sigma(t_\imath)}\bar\hbar(\bar\kappa(t))+\alpha(\bar\kappa(t))C_{\sigma(t_\imath)}\bar\hbar(\bar\kappa(t-\ell(t))), \end{aligned} \tag{3.4}$$
where $\bar\kappa(t)\in\mathbb R^\eta$ is the state vector, and $\mu_i\alpha(\bar\kappa(t))$, $\alpha(\bar\kappa(t))B_{\sigma(t_\imath)}$ and $\alpha(\bar\kappa(t))C_{\sigma(t_\imath)}$ are known continuous function matrices.
The time-varying delay $\ell(t)$ satisfies
$$0\le\ell_0\le\ell(t)\le\ell_2,\quad \rho_1\le\dot\ell(t)\le\rho_2, \tag{3.5}$$
where $\ell_0$, $\ell_2$, $\rho_1$ and $\rho_2>0$ are known constants.
The activation functions $\bar\hbar_l(\cdot)$ ($l=1,2,\dots,\eta$) satisfy
$$0\le\frac{\bar\hbar_l(s_1)-\bar\hbar_l(s_2)}{s_1-s_2}\le r_l,\ s_1\ne s_2,\qquad 0\le\frac{\bar\hbar_l(s)}{s}\le r_l,\ s\ne0, \tag{3.6}$$
where the $r_l$ are real numbers, and $R=\mathrm{diag}\{r_1,r_2,\dots,r_\eta\}$.
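For example, the activation $\tanh$ used in Example 1 below satisfies a sector condition of the form (3.6); here is a quick numerical check (our sketch, with an assumed grid) that its difference quotients lie in $[0, r_l]$ with $r_l=1$:

```python
import numpy as np

def sector_bound(h, s_grid):
    """Estimate the range of (h(s1)-h(s2))/(s1-s2) over s1 != s2 on a grid,
    to check the sector condition 0 <= slope <= r_l in (3.6)."""
    s1, s2 = np.meshgrid(s_grid, s_grid)
    mask = s1 != s2
    slopes = (h(s1[mask]) - h(s2[mask])) / (s1[mask] - s2[mask])
    return slopes.min(), slopes.max()

lo, hi = sector_bound(np.tanh, np.linspace(-5, 5, 201))
print(f"slopes in [{lo:.3f}, {hi:.3f}]")  # within [0, 1], so r_l = 1 works
```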
Combining the flexible terminal interpolation method with the quadratic convex inequality helps capture more delay information; by selecting an appropriate LKF, sufficient conditions ensuring the stability of CGNN (3.2) with switching signal are obtained.
Theorem 4.1. For given scalars $\ell_0$, $\ell_2$, $\rho_1$, $\rho_2$, $0<\delta\le\min\{1,\frac{1}{\rho_2}\}$ and $\ell_{(d,b)}=\ell_d-\ell_b$, $\varrho=1-\delta$, the SCGNN (3.2) is asymptotically stable if there exist $Q>0$, $P_i>0$, $J>0$, diagonal matrices $W_1>0$, $W_2>0$, $\Delta_i>0$ ($i=1,2,\dots,2^{\imath+1}$), and matrices $T_i$, $L_{ij}$ ($j>i=1,2,\dots,2^{\imath+1}-2$), $N$, $M=\begin{bmatrix}M_1\\ M_2\end{bmatrix}$, making the LMIs (4.1)-(4.4) hold:
$$\begin{bmatrix}\tilde J_i-T_i & L_{ij}\\ * & \tilde J_j-\wp_jT_j\end{bmatrix}\ge0, \tag{4.1}$$
$$\begin{bmatrix}\tilde J_i-\wp_iT_i & L_{ij}\\ * & \tilde J_j-T_j\end{bmatrix}\ge0,\qquad \begin{bmatrix}\tilde J_i & L_{ij}\\ * & \tilde J_j\end{bmatrix}\ge0, \tag{4.2}$$
$$\Psi_{\bar\kappa}=\begin{bmatrix}N & M\\ * & \frac13\dot{\tilde\ell}_{\frac{2^{\imath+1}-1}{2^{\imath}}}J\end{bmatrix}\ge0, \tag{4.3}$$
$$\Psi(\ell_0)<0,\quad \Psi(\ell_2)<0,\quad -\ell_2^2d_2+\Psi(\ell_0)<0, \tag{4.4}$$
where
$$\begin{aligned}
\Psi(\ell(t))={}&\Psi_0-2\Psi_{\bar F},\\
\Psi_0={}&\mathrm{He}\big[\Omega_1^TQ\Omega_2+\Omega_3^T\hat M\Omega_3+e_{2^{\imath+2}}^TW_1U+(Re_1-e_{2^{\imath+2}})^TW_2U\big]+\ell_{\frac{2^{\imath+1}-1}{2^{\imath}}}U^TJU\\
&-\frac{\dot{\tilde\ell}_{\frac{2^{\imath+1}-1}{2^{\imath}}}}{\ell_2}\Big\{\sum_{i=1}^{2^{\imath+1}-1}\varepsilon_i^T[\tilde J_i-(1-\wp_i)T_i]\varepsilon_i+\sum_{j>i=1}^{2^{\imath+1}-1}\big(\mathrm{He}[\varepsilon_i^TL_{ij}\varepsilon_j]+\wp_j^2\varepsilon_i^TT_j\varepsilon_i+\wp_i^2\varepsilon_j^TT_i\varepsilon_j\big)\Big\}\\
&+\sum_{i=1}^{2^{\imath+1}-1}\Big\{\dot{\tilde\ell}_{\frac{i-1}{2^{\imath}}}\hat\varepsilon_i^TP_i\hat\varepsilon_i-\dot{\tilde\ell}_{\frac{i}{2^{\imath}}}\hat\varepsilon_{i+1}^TP_i\hat\varepsilon_{i+1}\Big\},\\
\Psi_{\bar F}={}&\sum_{i=1}^{2^{\imath+1}-1}e_{2^{\imath+1}-1+i}^T\Delta_i[e_{2^{\imath+1}-1+i}-Re_i]\\
&+\sum_{q=1}^{2^{\imath+1}-1}\sum_{p=2,\,p>q}^{2^{\imath+1}}(e_{2^{\imath+2}-1+q}-e_{2^{\imath+2}-1+p})^T\Delta_{qp}\times[e_{2^{\imath+2}-1+q}-e_{2^{\imath+2}-1+p}-R(e_q-e_p)],\\
\Omega_1={}&\mathrm{col}\big[e_1,\ \ell_{\frac{1}{2^{\imath}}}e_{2^{\imath+1}+1},\ \ell_{(\frac{2}{2^{\imath}},\frac{1}{2^{\imath}})}e_{2^{\imath+1}+2},\dots,\ell_{(\frac{2^{\imath+1}-1}{2^{\imath}},\frac{2^{\imath+1}-2}{2^{\imath}})}e_{2^{\imath+1}-1}\big],\\
\Omega_2={}&\mathrm{col}\big[U,\ e_1-\dot{\tilde\ell}_{\frac{1}{2^{\imath}}}e_2,\ \dot{\tilde\ell}_{\frac{1}{2^{\imath}}}e_2-\dot{\tilde\ell}_{\frac{2}{2^{\imath}}}e_3,\dots,\dot{\tilde\ell}_{\frac{2^{\imath+1}-2}{2^{\imath}}}e_{2^{\imath+1}-1}-\dot{\tilde\ell}_{\frac{2^{\imath+1}-1}{2^{\imath}}}e_{2^{\imath+1}}\big],\\
\Omega_3={}&\mathrm{col}[e_{2^{\imath}},e_{2^{\imath}+1}],\quad \varepsilon_i=\mathrm{col}[e_i-e_{i+1},\ e_i+e_{i+1}-3e_{2^{\imath+1}+i}],\quad \hat\varepsilon_i=\mathrm{col}[e_i,e_{2^{\imath+2}-1+i}],\\
e_i={}&[0_{\eta\times(i-1)\eta},\ I_\eta,\ 0_{\eta\times(3\cdot2^{\imath+1}-1-i)\eta}]\quad(i=1,2,\dots,3\cdot2^{\imath+1}-1),\\
U={}&-\mu_i\alpha(\bar\kappa(t))e_1+\alpha(\bar\kappa(t))B_{\sigma(t_\imath)}e_{2^{\imath+2}}+\alpha(\bar\kappa(t))C_{\sigma(t_\imath)}e_{2^{\imath+2}+2^{\imath}},\\
d_2={}&\frac12\sum_{j>i=1}^{2^{\imath+1}-1}\frac{d^2\big(\wp_i^2\varepsilon_j^TT_i\varepsilon_j+\wp_j^2\varepsilon_i^TT_j\varepsilon_i\big)}{d[\ell(t)]^2},\quad \wp_i=\frac{\ell_{(\frac{i}{2^{\imath}},\frac{i-1}{2^{\imath}})}}{\ell_2},\ i=1,2,\dots,2^{\imath+1}-2,\\
\wp_{2^{\imath+1}-1}={}&\frac{\ell_{(2,\frac{2^{\imath+1}-2}{2^{\imath}})}}{\ell_2},\quad \dot{\tilde\ell}_{\frac{i}{2^{\imath}}}=1-\dot\ell_{\frac{i}{2^{\imath}}},\quad \hat M=\begin{bmatrix}M_1+M_1^T & -M_1+M_2^T\\ * & -M_2-M_2^T\end{bmatrix}+(\ell(t)-\ell_{\frac{2^{\imath}-1}{2^{\imath}}})N,\\
\tilde J_l={}&\begin{cases}\mathrm{diag}[J,4J], & l\ne2^{\imath},\\ \mathrm{diag}\big[\frac{J}{3},\frac{4J}{3}\big], & l=2^{\imath}.\end{cases}
\end{aligned}$$
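In practice, conditions like (4.1)-(4.4) are verified numerically with a semidefinite-programming solver. The sketch below (our illustration; the matrices `A`, `Ad` and the simple delay-independent LMI are assumptions for demonstration, not the paper's conditions) shows the general workflow in Python with cvxpy; the LMIs of Theorem 4.1 would be assembled the same way from the block matrices defined above:

```python
import cvxpy as cp
import numpy as np

# Toy delay-independent stability LMI for x'(t) = A x(t) + Ad x(t - h):
# find P > 0, Q > 0 with [[A'P + PA + Q, P Ad], [Ad' P, -Q]] < 0.
A = np.array([[-2.0, 0.1], [0.0, -2.0]])
Ad = np.array([[0.5, 0.0], [0.2, 0.5]])
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
Q = cp.Variable((n, n), symmetric=True)
lmi = cp.bmat([[A.T @ P + P @ A + Q, P @ Ad],
               [Ad.T @ P,            -Q]])
eps = 1e-6
prob = cp.Problem(cp.Minimize(0),           # pure feasibility problem
                  [P >> eps * np.eye(n), Q >> eps * np.eye(n),
                   lmi << -eps * np.eye(2 * n)])
prob.solve(solver=cp.SCS)
print("feasible" if prob.status == cp.OPTIMAL else prob.status)
```

A feasibility certificate here plays the same role as the matrices $Q$, $P_i$, $J$, $W_1$, $W_2$, $\Delta_i$, $T_i$, $L_{ij}$, $N$, $M$ in Theorem 4.1.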
Proof. Choose an appropriate LKF:
$$V(t)=V_1(t)+V_2(t)+V_3(t), \tag{4.5}$$
where
$$\begin{aligned}V_1(t)&=\tilde{\bar\kappa}^T(t)Q\tilde{\bar\kappa}(t)+2\sum_{i=1}^{\eta}w_{1i}\int_0^{\bar\kappa_i(t)}\bar\hbar_i(s)\,ds+2\sum_{i=1}^{\eta}w_{2i}\int_0^{\bar\kappa_i(t)}(r_is-\bar\hbar_i(s))\,ds,\\ V_2(t)&=\int_{-\ell_{\frac{2^{\imath+1}-1}{2^{\imath}}}}^{0}\int_{t+\theta}^{t}\dot{\bar\kappa}^T(s)J\dot{\bar\kappa}(s)\,ds\,d\theta,\\ V_3(t)&=\sum_{i=1}^{2^{\imath+1}-1}\int_{t-\ell_{\frac{i}{2^{\imath}}}}^{t-\ell_{\frac{i-1}{2^{\imath}}}}\pi^T(s)P_i\pi(s)\,ds,\end{aligned}$$
with
$$\begin{aligned}\tilde{\bar\kappa}(t)&=\mathrm{col}\big[\bar\kappa_t(0),\ \ell_{\frac{1}{2^{\imath}}}\bar v_{(\frac{1}{2^{\imath}},0)},\ \ell_{(\frac{2}{2^{\imath}},\frac{1}{2^{\imath}})}\bar v_{(\frac{2}{2^{\imath}},\frac{1}{2^{\imath}})},\dots,\ell_{(\frac{2^{\imath+1}-1}{2^{\imath}},\frac{2^{\imath+1}-2}{2^{\imath}})}\bar v_{(\frac{2^{\imath+1}-1}{2^{\imath}},\frac{2^{\imath+1}-2}{2^{\imath}})}\big],\\ \varsigma(t)&=\mathrm{col}\big[\bar\kappa_t(0),\bar\kappa_t(\tfrac{1}{2^{\imath}}),\bar\kappa_t(\tfrac{2}{2^{\imath}}),\dots,\bar\kappa_t(\tfrac{2^{\imath+1}-1}{2^{\imath}}),\ \bar v_{(\frac{1}{2^{\imath}},0)},\bar v_{(\frac{2}{2^{\imath}},\frac{1}{2^{\imath}})},\dots,\bar v_{(\frac{2^{\imath+1}-1}{2^{\imath}},\frac{2^{\imath+1}-2}{2^{\imath}})},\\ &\qquad\quad\ \bar\hbar_t(0),\bar\hbar_t(\tfrac{1}{2^{\imath}}),\dots,\bar\hbar_t(\tfrac{2^{\imath+1}-1}{2^{\imath}})\big],\\ \pi(s)&=\mathrm{col}[\bar\kappa(s),\bar\hbar(s)],\quad \bar\kappa_t(d)=\bar\kappa(t-d),\quad \pi(d)=\pi(t-\ell_d),\\ \bar\hbar_t(d)&=\bar\hbar(\bar\kappa(t-d)),\quad \ell_{(d,b)}=\ell_d-\ell_b,\quad \bar v_{(d,b)}=\frac{1}{\ell_{(d,b)}}\int_{t-\ell_d}^{t-\ell_b}\bar\kappa(s)\,ds.\end{aligned}$$
Differentiating $V_1$, $V_2$ and $V_3$ along the trajectory of system (3.2), respectively, gives
$$\begin{aligned}\dot V_1(t)&=\dot{\tilde{\bar\kappa}}^T(t)Q\tilde{\bar\kappa}(t)+\tilde{\bar\kappa}^T(t)Q\dot{\tilde{\bar\kappa}}(t)+2\bar\hbar^T(\bar\kappa(t))W_1\dot{\bar\kappa}(t)+2(R\bar\kappa(t)-\bar\hbar(\bar\kappa(t)))^TW_2\dot{\bar\kappa}(t)\\&=2\tilde{\bar\kappa}^T(t)Q\dot{\tilde{\bar\kappa}}(t)+2\bar\hbar^T(\bar\kappa(t))W_1\big[-\mu_i\alpha(\bar\kappa(t))\bar\kappa(t)+\alpha(\bar\kappa(t))B_{\sigma(t_\imath)}\bar\hbar(\bar\kappa(t))+\alpha(\bar\kappa(t))C_{\sigma(t_\imath)}\bar\hbar(\bar\kappa(t-\ell(t)))\big]\\&\quad+2(R\bar\kappa(t)-\bar\hbar(\bar\kappa(t)))^TW_2\big[-\mu_i\alpha(\bar\kappa(t))\bar\kappa(t)+\alpha(\bar\kappa(t))B_{\sigma(t_\imath)}\bar\hbar(\bar\kappa(t))+\alpha(\bar\kappa(t))C_{\sigma(t_\imath)}\bar\hbar(\bar\kappa(t-\ell(t)))\big]\\&=\varsigma^T(t)\big[\Omega_1^TQ\Omega_2+\Omega_2^TQ\Omega_1\big]\varsigma(t)+\varsigma^T(t)\big[e_{2^{\imath+2}}^TW_1U+U^TW_1e_{2^{\imath+2}}\big]\varsigma(t)\\&\quad+\varsigma^T(t)\big[(Re_1-e_{2^{\imath+2}})^TW_2U+U^TW_2(Re_1-e_{2^{\imath+2}})\big]\varsigma(t)\\&=\varsigma^T(t)\,\mathrm{He}\big[\Omega_1^TQ\Omega_2+e_{2^{\imath+2}}^TW_1U+(Re_1-e_{2^{\imath+2}})^TW_2U\big]\varsigma(t), \end{aligned} \tag{4.6}$$
$$\begin{aligned}\dot V_2(t)&=\ell_{\frac{2^{\imath+1}-1}{2^{\imath}}}\dot{\bar\kappa}^T(t)J\dot{\bar\kappa}(t)-\dot{\tilde\ell}_{\frac{2^{\imath+1}-1}{2^{\imath}}}\int_{t-\ell_{\frac{2^{\imath+1}-1}{2^{\imath}}}}^{t}\dot{\bar\kappa}^T(s)J\dot{\bar\kappa}(s)\,ds\\&=-\dot{\tilde\ell}_{\frac{2^{\imath+1}-1}{2^{\imath}}}\int_{t-\ell_{\frac{2^{\imath+1}-1}{2^{\imath}}}}^{t}\dot{\bar\kappa}^T(s)J\dot{\bar\kappa}(s)\,ds+\ell_{\frac{2^{\imath+1}-1}{2^{\imath}}}\varsigma^T(t)U^TJU\varsigma(t). \end{aligned} \tag{4.7}$$
Next, we use the Wirtinger integral inequality and the quadratic reciprocally convex inequality (Lemma 2.1) to bound the integral term in (4.7):
$$\begin{aligned}-\int_{t-\ell_{\frac{2^{\imath+1}-1}{2^{\imath}}}}^{t}\dot{\bar\kappa}^T(s)J\dot{\bar\kappa}(s)\,ds&=-\sum_{i=1,\,i\ne2^{\imath}}^{2^{\imath+1}-1}\int_{t-\ell_{\frac{i}{2^{\imath}}}}^{t-\ell_{\frac{i-1}{2^{\imath}}}}\dot{\bar\kappa}^T(s)J\dot{\bar\kappa}(s)\,ds-\int_{t-\ell_{\frac{2^{\imath}}{2^{\imath}}}}^{t-\ell_{\frac{2^{\imath}-1}{2^{\imath}}}}\dot{\bar\kappa}^T(s)J\dot{\bar\kappa}(s)\,ds\\&\le-\sum_{i=1}^{2^{\imath+1}-1}\varsigma^T(t)\Big[\frac{1}{\ell_{(\frac{i}{2^{\imath}},\frac{i-1}{2^{\imath}})}}\varepsilon_i^T\tilde J_i\varepsilon_i\Big]\varsigma(t)-\int_{t-\ell(t)}^{t-\ell_{\frac{2^{\imath}-1}{2^{\imath}}}}\dot{\bar\kappa}^T(s)\frac{J}{3}\dot{\bar\kappa}(s)\,ds\\&\le-\frac{1}{\ell_2}\sum_{i=1}^{2^{\imath+1}-1}\varsigma^T(t)\Big[\frac{1}{\wp_i}\varepsilon_i^T\tilde J_i\varepsilon_i\Big]\varsigma(t)-\int_{t-\ell(t)}^{t-\ell_{\frac{2^{\imath}-1}{2^{\imath}}}}\dot{\bar\kappa}^T(s)\frac{J}{3}\dot{\bar\kappa}(s)\,ds\\&\le-\frac{1}{\ell_2}\varsigma^T(t)\Big\{\sum_{i=1}^{2^{\imath+1}-1}\varepsilon_i^T[\tilde J_i-(1-\wp_i)T_i]\varepsilon_i+\sum_{j>i=1}^{2^{\imath+1}-1}\big(\mathrm{He}[\varepsilon_i^TL_{ij}\varepsilon_j]+\wp_j^2\varepsilon_i^TT_j\varepsilon_i+\wp_i^2\varepsilon_j^TT_i\varepsilon_j\big)\Big\}\varsigma(t)\\&\quad-\int_{t-\ell(t)}^{t-\ell_{\frac{2^{\imath}-1}{2^{\imath}}}}\dot{\bar\kappa}^T(s)\frac{J}{3}\dot{\bar\kappa}(s)\,ds, \end{aligned} \tag{4.8}$$
where $\ell_{\frac{2^{\imath}}{2^{\imath}}}=\ell_1=\ell(t)$. Moreover,
$$\begin{aligned}\dot V_3(t)&=\sum_{i=1}^{2^{\imath+1}-1}\Big\{(1-\dot\ell_{\frac{i-1}{2^{\imath}}})\pi^T(t-\ell_{\frac{i-1}{2^{\imath}}})P_i\pi(t-\ell_{\frac{i-1}{2^{\imath}}})-(1-\dot\ell_{\frac{i}{2^{\imath}}})\pi^T(t-\ell_{\frac{i}{2^{\imath}}})P_i\pi(t-\ell_{\frac{i}{2^{\imath}}})\Big\}\\&=\sum_{i=1}^{2^{\imath+1}-1}\Big\{\dot{\tilde\ell}_{\frac{i-1}{2^{\imath}}}\pi^T(\tfrac{i-1}{2^{\imath}})P_i\pi(\tfrac{i-1}{2^{\imath}})-\dot{\tilde\ell}_{\frac{i}{2^{\imath}}}\pi^T(\tfrac{i}{2^{\imath}})P_i\pi(\tfrac{i}{2^{\imath}})\Big\}\\&=\varsigma^T(t)\sum_{i=1}^{2^{\imath+1}-1}\Big[\dot{\tilde\ell}_{\frac{i-1}{2^{\imath}}}\hat\varepsilon_i^TP_i\hat\varepsilon_i-\dot{\tilde\ell}_{\frac{i}{2^{\imath}}}\hat\varepsilon_{i+1}^TP_i\hat\varepsilon_{i+1}\Big]\varsigma(t). \end{aligned} \tag{4.9}$$
For any matrices $M_1$ and $M_2$ of appropriate dimensions, the Newton-Leibniz formula gives
$$2\big[\bar\kappa_t^T(\tfrac{2^{\imath}-1}{2^{\imath}})M_1^T+\bar\kappa^T(t-\ell(t))M_2^T\big]\times\Big[\bar\kappa_t(\tfrac{2^{\imath}-1}{2^{\imath}})-\bar\kappa(t-\ell(t))-\int_{t-\ell(t)}^{t-\ell_{\frac{2^{\imath}-1}{2^{\imath}}}}\dot{\bar\kappa}(s)\,ds\Big]=0, \tag{4.10}$$
and for any matrix $N$, the zero equation
$$\int_{t-\ell(t)}^{t-\ell_{\frac{2^{\imath}-1}{2^{\imath}}}}\varsigma_1^T(t)N\varsigma_1(t)\,ds=(\ell(t)-\ell_{\frac{2^{\imath}-1}{2^{\imath}}})\varsigma_1^T(t)N\varsigma_1(t)$$
holds, which is equivalent to
$$(\ell(t)-\ell_{\frac{2^{\imath}-1}{2^{\imath}}})\varsigma_1^T(t)N\varsigma_1(t)-\int_{t-\ell(t)}^{t-\ell_{\frac{2^{\imath}-1}{2^{\imath}}}}\varsigma_1^T(t)N\varsigma_1(t)\,ds=0, \tag{4.11}$$
where $\varsigma_1(t)=\mathrm{col}[\bar\kappa_t(\tfrac{2^{\imath}-1}{2^{\imath}}),\bar\kappa(t-\ell(t))]$. Combining (4.6)-(4.11) yields
$$\begin{aligned}\dot V(t)&\le\varsigma^T(t)\Big\{\mathrm{He}\big[\Omega_1^TQ\Omega_2+e_{2^{\imath+2}}^TW_1U+(Re_1-e_{2^{\imath+2}})^TW_2U\big]+\sum_{i=1}^{2^{\imath+1}-1}\big[\dot{\tilde\ell}_{\frac{i-1}{2^{\imath}}}\hat\varepsilon_i^TP_i\hat\varepsilon_i-\dot{\tilde\ell}_{\frac{i}{2^{\imath}}}\hat\varepsilon_{i+1}^TP_i\hat\varepsilon_{i+1}\big]\\&\quad+\ell_{\frac{2^{\imath+1}-1}{2^{\imath}}}U^TJU-\frac{1}{\ell_2}\sum_{i=1}^{2^{\imath+1}-1}\varepsilon_i^T[\tilde J_i-(1-\wp_i)T_i]\varepsilon_i-\frac{1}{\ell_2}\sum_{j>i=1}^{2^{\imath+1}-1}\big(\mathrm{He}[\varepsilon_i^TL_{ij}\varepsilon_j]+\wp_j^2\varepsilon_i^TT_j\varepsilon_i+\wp_i^2\varepsilon_j^TT_i\varepsilon_j\big)\Big\}\varsigma(t)\\&\quad-\int_{t-\ell(t)}^{t-\ell_{\frac{2^{\imath}-1}{2^{\imath}}}}\dot{\bar\kappa}^T(s)\frac{J}{3}\dot{\bar\kappa}(s)\,ds\\&\le\varsigma^T(t)\Psi_0\varsigma(t)-\varsigma^T(t)\,\mathrm{He}[\Omega_3^T\hat M\Omega_3]\varsigma(t)-\int_{t-\ell(t)}^{t-\ell_{\frac{2^{\imath}-1}{2^{\imath}}}}\dot{\bar\kappa}^T(s)\frac{J}{3}\dot{\bar\kappa}(s)\,ds. \end{aligned} \tag{4.12}$$
Based on formula (3.6), for any positive diagonal matrix $\Delta=\mathrm{diag}\{\lambda_1,\lambda_2,\dots,\lambda_\eta\}$, we have
$$0\le-2\bar\hbar^T(s)\Delta[\bar\hbar(s)-Rs].$$
Letting $s$ be $t$, $t-\ell_{\frac{1}{2^{\imath}}}$, $t-\ell_{\frac{2}{2^{\imath}}}$, ..., $t-\ell_{\frac{2^{\imath+1}-1}{2^{\imath}}}$, and replacing $\Delta$ with $\Delta_1$, $\Delta_2$, ..., $\Delta_{2^{\imath+1}}$, we obtain
$$0\le-2\varsigma^T(t)e_{2^{\imath+2}-1+i}^T\Delta_i[e_{2^{\imath+2}-1+i}-Re_i]\varsigma(t),$$
where $i=1,2,\dots,2^{\imath+1}$, and hence
$$0\le-2\varsigma^T(t)\sum_{i=1}^{2^{\imath+1}}e_{2^{\imath+1}-1+i}^T\Delta_i[e_{2^{\imath+1}-1+i}-Re_i]\varsigma(t), \tag{4.13}$$
and for an arbitrary positive diagonal matrix $\bar\Delta=\mathrm{diag}\{\bar\lambda_1,\bar\lambda_2,\dots,\bar\lambda_\eta\}$, one can get
$$0\le-2(\bar\hbar(s_1)-\bar\hbar(s_2))^T\bar\Delta[\bar\hbar(s_1)-\bar\hbar(s_2)-R(s_1-s_2)].$$
Letting $s_1$, $s_2$ range over $t$, $t-\ell_{\frac{1}{2^{\imath}}}$, $t-\ell_{\frac{2}{2^{\imath}}}$, ..., $t-\ell_{\frac{2^{\imath+1}-1}{2^{\imath}}}$, and replacing $\bar\Delta$ with $\Delta_{qp}$, where $q=1,2,\dots,2^{\imath+1}-1$, $p=2,3,\dots,2^{\imath+1}$, $p>q$, we obtain
$$0\le-2\varsigma^T(t)\sum_{q=1}^{2^{\imath+1}-1}\sum_{p=2,\,p>q}^{2^{\imath+1}}[e_{2^{\imath+2}-1+q}-e_{2^{\imath+2}-1+p}]^T\Delta_{qp}\times[e_{2^{\imath+2}-1+q}-e_{2^{\imath+2}-1+p}-R(e_q-e_p)]\varsigma(t). \tag{4.14}$$
Therefore, (4.12) can be written as
$$\dot V(t)\le\varsigma^T(t)\Psi(t)\varsigma(t)-\int_{t-\ell(t)}^{t-\ell_{\frac{2^{\imath}-1}{2^{\imath}}}}\varsigma_2^T(t,s)\Psi_{\bar\kappa}\varsigma_2(t,s)\,ds\le\varsigma^T(t)\Psi(t)\varsigma(t), \tag{4.15}$$
where $\varsigma_2(t,s)=\mathrm{col}[\varsigma_1(t),\dot{\bar\kappa}(s)]$.
Now define $\Psi(t)=d_2\ell^2(t)+d_1\ell(t)+d_0$, where $d_1$ and $d_0$ are matrices of suitable dimensions (i.e., free matrices). When $\Psi(t)$ satisfies condition (4.4) in Theorem 4.1, we have $\Psi(t)<0$ for all $t\in[0,\ell]$, that is, system (3.2) is asymptotically stable. The proof is as follows:
Proof. When $d_2\ge0$, $\Psi(t)$ is an upward-opening quadratic function, that is, a convex function. By convexity, for any $t\in[0,\ell]$, the graph lies below the chord joining $(0,\Psi(0))$ and $(\ell,\Psi(\ell))$:
$$\Psi(t)\le\Big(1-\frac{t}{\ell}\Big)\Psi(0)+\frac{t}{\ell}\Psi(\ell), \tag{4.16}$$
and since $\Psi(0)<0$ and $\Psi(\ell)<0$ by (4.4), it follows that $\Psi(t)<0$.
When $d_2<0$, $\Psi(t)$ is a downward-opening quadratic function, that is, a concave function, so its graph lies below the tangent at $(\ell,\Psi(\ell))$: for any $t\in[0,\ell]$,
$$\dot\Psi(\ell)\le\frac{\Psi(t)-\Psi(\ell)}{t-\ell}, \tag{4.17}$$
whence
$$\begin{aligned}&\Psi(t)\le\dot\Psi(\ell)(t-\ell)+\Psi(\ell)=(2d_2\ell+d_1)(t-\ell)+d_2\ell^2+d_1\ell+d_0=(2d_2\ell+d_1)t-d_2\ell^2+d_0,\\&\Gamma(t):=(2d_2\ell+d_1)t-d_2\ell^2+d_0,\qquad \Gamma(0)=-d_2\ell^2+d_0=-d_2\ell^2+\Psi(0),\end{aligned}$$
and by (4.4), we get $\Gamma(0)<0$.
At the same time, $\Gamma(\ell)=\Psi(\ell)<0$; since $\Gamma$ is affine, $\Gamma(t)<0$ on $[0,\ell]$, and therefore $\Psi(t)<0$ for any $t\in[0,\ell]$.
We know that $\dot V(t)\le\varsigma^T(t)\Psi(t)\varsigma(t)$ and $\Psi(t)<0$, so $\dot V(t)\le0$, that is, $\dot V(t)$ is negative definite. By Lyapunov's second theorem, system (3.2) is asymptotically stable.
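A scalar sanity check of the quadratic-negativity argument (our sketch; the coefficients are made up, and we take $\ell_0=0$ so that $\Psi(\ell_0)=\Psi(0)$) verifies that whenever the three conditions of (4.4) hold, $\Psi(t)$ is negative on the whole interval:

```python
import numpy as np

def psi_negative_on_interval(d2, d1, d0, ell, n=1001):
    """Check the endpoint conditions Psi(0) < 0, Psi(ell) < 0,
    -d2*ell^2 + Psi(0) < 0, then verify numerically that
    Psi(t) = d2 t^2 + d1 t + d0 < 0 on [0, ell]."""
    psi = lambda t: d2 * t**2 + d1 * t + d0
    conds = psi(0) < 0 and psi(ell) < 0 and -d2 * ell**2 + psi(0) < 0
    grid_ok = all(psi(t) < 0 for t in np.linspace(0, ell, n))
    return conds, grid_ok

# Concave case (d2 < 0) and convex case (d2 > 0): both return (True, True)
print(psi_negative_on_interval(d2=-1.0, d1=2.0, d0=-10.0, ell=2.0))
print(psi_negative_on_interval(d2=1.0, d1=-2.0, d0=-1.0, ell=2.0))
```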
Remark 4.1. The equilibrium point of the SCGNN also exists and is unique; only the connection weight matrices with switching rules need to be handled, that is, $\tilde B=B_{\sigma(t)}=(b_{\imath j})_{\eta\times\eta}$ and $\tilde C=C_{\sigma(t)}=(c_{\imath j})_{\eta\times\eta}$, where $\sigma(t)=\imath$, $\imath=1,2,\dots,N$, whose values may change over time. However, this paper sets a fixed switching rule, under which the connection weight matrices remain unchanged over each interval; this is equivalent to connection weight matrices without a switching system, so the proof is consistent with Theorem 2.1 of this paper.
Example 1. Take $N=3$ and consider the SCGNN model with three subsystems:
$$\dot{\hat x}_i(t)=d_i(\hat x_i(t))\Big\{-a_i(\hat x_i(t))+\sum_{\imath=1}^{N}\xi_\imath(t)\big[B_\imath\bar F_j(\hat x_j(t))+C_\imath\bar F_j(\hat x_j(t-\ell_j(t)))\big]\Big\}. \tag{5.1}$$
Among them, the neural network system parameters are
$$d_i(\hat x_i(t))=\mathrm{diag}(2+\sin^2(\hat x_1),\ 2+\cos^2(\hat x_2),\ 2+\tanh^2(\hat x_3)),\quad a_i(\hat x_i(t))=\hat x_i(t),\quad \bar F_j(\hat x_j(t))=\tanh(\hat x_i(t)),\quad i,j=1,2,3,\ i\le j,$$
and the connection weight matrices are the following:
Subsystem 1:
$$B_1=\begin{bmatrix}-0.1 & -0.2 & -0.2\\ 0.1 & 0.3 & -0.4\\ 0.2 & 0.4 & -0.3\end{bmatrix},\quad C_1=\begin{bmatrix}-0.1 & -0.3 & -1\\ 1.3 & -0.2 & -0.4\\ 1.2 & 1.1 & -0.2\end{bmatrix}.$$
Subsystem 2:
$$B_2=\begin{bmatrix}-0.2 & -0.6 & -0.4\\ 0.2 & 0.1 & -0.1\\ 0.3 & 0.5 & 0.3\end{bmatrix},\quad C_2=\begin{bmatrix}-0.25 & 2 & -0.7\\ 0.9 & 0.4 & -0.5\\ 0.3 & 0.2 & 0.2\end{bmatrix}.$$
Subsystem 3:
$$B_3=\begin{bmatrix}0.1 & -0.6 & -0.4\\ 0 & -0.2 & 1\\ 0.3 & 0.2 & -0.3\end{bmatrix},\quad C_3=\begin{bmatrix}-0.12 & -0.1 & 0.4\\ -0.2 & 0.15 & 0.3\\ 0.33 & -0.2 & -0.4\end{bmatrix}.$$
In order to satisfy Assumptions 2.1-2.3, take the following parameters: $\underline{l_i}=1$, $\overline{l_i}=2$, $\mu_i=1.2$, $k_i=1$, $r_1=0.4$, $r_2=0.8$.
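Since the paper only states that the initial values are customized, the following Python sketch (ours; the initial state, the delay $\ell(t)$ and the 2 s periodic switching are assumptions) simulates Example 1 with forward Euler to illustrate the asymptotic behavior:

```python
import numpy as np

# Subsystem matrices from Example 1
B = [np.array([[-0.1, -0.2, -0.2], [0.1, 0.3, -0.4], [0.2, 0.4, -0.3]]),
     np.array([[-0.2, -0.6, -0.4], [0.2, 0.1, -0.1], [0.3, 0.5, 0.3]]),
     np.array([[0.1, -0.6, -0.4], [0.0, -0.2, 1.0], [0.3, 0.2, -0.3]])]
C = [np.array([[-0.1, -0.3, -1.0], [1.3, -0.2, -0.4], [1.2, 1.1, -0.2]]),
     np.array([[-0.25, 2.0, -0.7], [0.9, 0.4, -0.5], [0.3, 0.2, 0.2]]),
     np.array([[-0.12, -0.1, 0.4], [-0.2, 0.15, 0.3], [0.33, -0.2, -0.4]])]

d = lambda x: np.array([2 + np.sin(x[0])**2, 2 + np.cos(x[1])**2,
                        2 + np.tanh(x[2])**2])        # amplification d_i
F = np.tanh                                           # activation F_j
ell = lambda t: 0.5 + 0.3 * np.sin(t)                 # assumed delay

dt, T = 1e-3, 20.0
steps = int(T / dt)
hist = np.tile(np.array([0.5, -0.3, 0.8]), (steps + 1, 1))  # assumed x(0)
for k in range(steps):
    t = k * dt
    mode = int(t // 2) % 3                # assumed: switch every 2 s
    x = hist[k]
    xd = hist[max(k - int(ell(t) / dt), 0)]           # delayed state
    dx = d(x) * (-x + B[mode] @ F(x) + C[mode] @ F(xd))  # a_i(x) = x
    hist[k + 1] = x + dt * dx
print("state at T:", hist[-1])  # should approach the origin if stable
```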
Figures 2-7 depict the simulation results for a customized initial value under the three subsystems. By Theorem 4.1, the system is asymptotically stable.
When $-\rho\le\dot\ell(t)\le\rho$, i.e., $\rho_1=-\rho$ and $\rho_2=\rho$, according to the conditions of Theorem 4.1 we use the LMI toolbox to compute the maximum admissible upper bounds (MAUBs) of the time delay; see Table 1.
Table 1. MAUBs of the time delay for different $\rho$.

| $\rho$ | 0.8 | 0.9 | unknown |
| --- | --- | --- | --- |
| Theorem 4.1 ($\imath=1$) | 1.9384 | 1.4275 | 1.3128 |
| Theorem 4.1 ($\imath=2$) | 2.1139 | 1.5965 | 1.4902 |
As can be seen from Table 1, when $\dot\ell(t)=\rho=0.8$, $\eth=0.642$, $\ell_0=0$, $q_1=q_2=0$, the maximum delay obtained with two interpolations ($\imath=2$) is larger than that obtained with one interpolation ($\imath=1$), which fully reflects the advantage of the flexible terminal interpolation method: it captures more time-delay information and thus reduces conservatism.
This paper analyzes CGNNs with time-varying delay and adds a switching system to the CGNN to study the asymptotic stability of the SCGNN. Establishing the existence and uniqueness of the CGNN equilibrium point first makes it easier to shift the equilibrium to the origin. In order to capture more time-delay information, a flexible terminal interpolation method is adopted, and an LKF carrying more time-delay information is constructed and estimated using a quadratic convex inequality. Additionally, based on linear matrix inequalities, a new criterion for SCGNN asymptotic stability is obtained. Finally, numerical examples and simulation results show that the system is asymptotically stable under the derived criterion.
Compared with this paper, a recent related work [34] uses a quadratic inequality of real vectors and the LKF method to study the stability of a class of CGNN systems with neutral delay terms and discrete time delays. That CGNN system is more specialized and complex, fully considering the uncertainties and interference factors in practical applications, and has strong practical significance. Beyond the stability of such complex systems, further research into their limit behavior and singularities is worth exploring.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
This work was supported by the Natural Science Foundation of China (62072164 and 11704109).
The authors declare no conflicts of interest.
[1] C. I. Byrnes, F. D. Priscoli, A. Isidori, Output regulation of uncertain nonlinear systems, Boston: Birkhäuser, 1997. https://doi.org/10.1007/978-1-4612-2020-6
[2] Z. H. Yuan, L. H. Huang, D. W. Hu, B. W. Liu, Convergence of nonautonomous Cohen-Grossberg-type neural networks with variable delays, IEEE Trans. Neural Netw., 19 (2008), 140-147. https://doi.org/10.1109/TNN.2007.903154
[3] H. Ye, A. N. Michel, K. N. Wang, Qualitative analysis of Cohen-Grossberg neural networks with multiple delays, Phys. Rev. E, 51 (1995), 2611. https://doi.org/10.1103/PhysRevE.51.2611
[4] J. D. Cao, K. Yuan, H. X. Li, Global asymptotical stability of recurrent neural networks with multiple discrete delays and distributed delays, IEEE Trans. Neural Netw., 17 (2006), 1646-1651. https://doi.org/10.1109/TNN.2006.881488
[5] C. X. Huang, L. H. Huang, Dynamics of a class of Cohen-Grossberg neural networks with time-varying delays, Nonlinear Anal. Real World Appl., 8 (2007), 40-52. https://doi.org/10.1016/j.nonrwa.2005.04.008
[6] J. D. Cao, J. L. Liang, Boundedness and stability for Cohen-Grossberg neural networks with time-varying delays, J. Math. Anal. Appl., 296 (2004), 665-685. https://doi.org/10.1016/j.jmaa.2004.04.039
[7] L. Wan, Q. H. Zhou, Attractor and ultimate boundedness for stochastic cellular neural networks with delays, Nonlinear Anal. Real World Appl., 12 (2011), 2561-2566. https://doi.org/10.1016/j.nonrwa.2011.03.005
[8] K. Yuan, J. D. Cao, H. X. Li, Robust stability of switched Cohen-Grossberg neural networks with mixed time-varying delays, IEEE Trans. Syst. Man Cybernet. Part B (Cybernet.), 36 (2006), 1356-1363. https://doi.org/10.1109/TSMCB.2006.876819
[9] H. B. Zeng, H. C. Lin, Y. He, K. L. Teo, W. Wang, Hierarchical stability conditions for time-varying delay systems via an extended reciprocally convex quadratic inequality, J. Franklin Inst., 357 (2020), 9930-9941. https://doi.org/10.1016/j.jfranklin.2020.07.034
[10] H. Y. Zhang, Z. P. Qiu, X. Z. Liu, L. L. Xiong, Stochastic robust finite-time boundedness for semi-Markov jump uncertain neutral-type neural networks with mixed time-varying delays via a generalized reciprocally convex combination inequality, Int. J. Robust Nonlinear Control, 30 (2020), 2001-2019. https://doi.org/10.1002/rnc.4859
[11] W. J. Lin, Y. He, M. Wu, Q. P. Liu, Reachable set estimation for Markovian jump neural networks with time-varying delay, Neural Netw., 108 (2018), 527-532. https://doi.org/10.1016/j.neunet.2018.09.011
[12] W. Y. Duan, Stability switches in a Cohen-Grossberg neural network with multi-delays, Int. J. Biomath., 10 (2017), 1750075. https://doi.org/10.1142/S1793524517500759
[13] D. Liberzon, Switching in systems and control, Boston: Birkhäuser, 2003. https://doi.org/10.1007/978-1-4612-0017-8
[14] J. Lian, K. Zhang, Exponential stability for switched Cohen-Grossberg neural networks with average dwell time, Nonlinear Dyn., 63 (2011), 331-343. https://doi.org/10.1007/s11071-010-9807-2
[15] Z. G. Wu, P. Shi, H. Y. Su, J. Chu, Delay-dependent stability analysis for switched neural networks with time-varying delay, IEEE Trans. Syst. Man Cybernet. Part B (Cybernet.), 41 (2011), 1522-1530. https://doi.org/10.1109/TSMCB.2011.2157140
[16] D. Liberzon, A. S. Morse, Basic problems in stability and design of switched systems, IEEE Control Syst. Mag., 19 (1999), 59-70. https://doi.org/10.1109/37.793443
[17] Q. K. Song, J. Y. Zhang, Global exponential stability of impulsive Cohen-Grossberg neural network with time-varying delays, Nonlinear Anal. Real World Appl., 9 (2008), 500-510. https://doi.org/10.1016/j.nonrwa.2006.11.015
[18] Q. T. Gan, Exponential synchronization of stochastic Cohen-Grossberg neural networks with mixed time-varying delays and reaction-diffusion via periodically intermittent control, Neural Netw., 31 (2012), 12-21. https://doi.org/10.1016/j.neunet.2012.02.039
[19] M. H. Jiang, Y. Shen, X. X. Liao, Boundedness and global exponential stability for generalized Cohen-Grossberg neural networks with variable delay, Appl. Math. Comput., 172 (2006), 379-393. https://doi.org/10.1016/j.amc.2005.02.009
[20] L. G. Wan, A. L. Wu, Mittag-Leffler stability analysis of fractional-order fuzzy Cohen-Grossberg neural networks with deviating argument, Adv. Differ. Equ., 2017 (2017), 1-19. https://doi.org/10.1186/s13662-017-1368-y
[21] H. Q. Wu, G. H. Xu, C. Y. Wu, N. Li, K. W. Wang, Q. Q. Guo, Stability in switched Cohen-Grossberg neural networks with mixed time delays and non-Lipschitz activation functions, Discrete Dyn. Nat. Soc., 2012 (2012), 1-22. https://doi.org/10.1155/2012/435402
[22] B. Sun, Y. T. Cao, Z. Y. Guo, Z. Yan, S. P. Wen, Synchronization of discrete-time recurrent neural networks with time-varying delays via quantized sliding mode control, Appl. Math. Comput., 375 (2020), 125093. https://doi.org/10.1016/j.amc.2020.125093
[23] Z. S. Wang, Y. F. Tian, Stability analysis of recurrent neural networks with time-varying delay by flexible terminal interpolation method, IEEE Trans. Neural Netw. Learn. Syst., 2022. https://doi.org/10.1109/TNNLS.2022.3188161
[24] H. G. Zhang, Z. W. Liu, G. B. Huang, Z. S. Wang, Novel weighting-delay-based stability criteria for recurrent neural networks with time-varying delay, IEEE Trans. Neural Netw., 21 (2010), 91-106. https://doi.org/10.1109/TNN.2009.2034742
[25] Y. He, G. P. Liu, D. Rees, New delay-dependent stability criteria for neural networks with time-varying delay, IEEE Trans. Neural Netw., 18 (2007), 310-314. https://doi.org/10.1109/TNN.2006.888373
[26] M. N. A. Parlakçı, Robust stability of uncertain neutral systems: a novel augmented Lyapunov functional approach, IET Control Theory Appl., 1 (2007), 802-809. https://doi.org/10.1049/iet-cta:20050517
[27] C. Peng, Y. C. Tian, Delay-dependent robust stability criteria for uncertain systems with interval time-varying delay, J. Comput. Appl. Math., 214 (2008), 480-494. https://doi.org/10.1016/j.cam.2007.03.009
[28] T. Li, L. Guo, C. Y. Sun, C. Lin, Further result on delay-dependent stability criterion of neural networks with time-varying delays, IEEE Trans. Neural Netw., 19 (2008), 726-730. https://doi.org/10.1109/TNN.2007.914162
[29] S. Arik, Z. Orman, Global stability analysis of Cohen-Grossberg neural networks with time varying delays, Phys. Lett. A, 341 (2005), 410-421. https://doi.org/10.1016/j.physleta.2005.04.095
[30] Z. Y. Dong, X. Wang, X. Zhang, A nonsingular M-matrix-based global exponential stability analysis of higher-order delayed discrete-time Cohen-Grossberg neural networks, Appl. Math. Comput., 385 (2020), 125401. https://doi.org/10.1016/j.amc.2020.125401
[31] V. Singh, Improved global robust stability for interval-delayed Hopfield neural networks, Neural Process. Lett., 27 (2008), 257-265. https://doi.org/10.1007/s11063-008-9074-0
[32] G. Bao, S. P. Wen, Z. G. Zeng, Robust stability analysis of interval fuzzy Cohen-Grossberg neural networks with piecewise constant argument of generalized type, Neural Netw., 33 (2012), 32-41. https://doi.org/10.1016/j.neunet.2012.04.003
[33] G. Q. Tan, Z. S. Wang, Reachable set estimation of delayed Markovian jump neural networks based on an improved reciprocally convex inequality, IEEE Trans. Neural Netw. Learn. Syst., 33 (2022), 2737-2742. https://doi.org/10.1109/TNNLS.2020.3045599
[34] Z. J. Zhang, X. Zhang, T. T. Yu, Global exponential stability of neutral-type Cohen-Grossberg neural networks with multiple time-varying neutral and discrete delays, Neurocomputing, 490 (2022), 124-131. https://doi.org/10.1016/j.neucom.2022.03.068