Research article

Robustness analysis of Cohen-Grossberg neural network with piecewise constant argument and stochastic disturbances

  • Received: 15 November 2023 Revised: 15 December 2023 Accepted: 20 December 2023 Published: 02 January 2024
  • MSC : 34D20

  • Robustness of neural networks has been a hot topic in recent years. This paper mainly studies the robustness of the global exponential stability of Cohen-Grossberg neural networks with a piecewise constant argument and stochastic disturbances, and discusses the problem of whether the Cohen-Grossberg neural networks can still maintain global exponential stability under the perturbation of the piecewise constant argument and stochastic disturbances. By using stochastic analysis theory and inequality techniques, the interval length of the piecewise constant argument and the upper bound of the noise intensity are derived by solving transcendental equations. In the end, we offer several examples to illustrate the efficacy of the findings.

    Citation: Tao Xie, Wenqing Zheng. Robustness analysis of Cohen-Grossberg neural network with piecewise constant argument and stochastic disturbances[J]. AIMS Mathematics, 2024, 9(2): 3097-3125. doi: 10.3934/math.2024151




Due to their parallel processing ability, high fault tolerance, and adaptability, neural networks (NNs) have considerable application prospects in signal processing, automatic control, and artificial intelligence, and they have gradually attracted great attention [1,2,3,4]. Research on NNs has yielded many excellent results to date [5,6,7,8,9,10,11,12,13,14,15,16]. For instance, [5,6,7] discuss the stability of NNs, and [8,9] concern the robustness analysis of NNs. The works [11,12,13,14] introduce synchronization problems of NNs, which are fundamental to the application of NNs. The Cohen-Grossberg neural network (CGNN) is an important continuous-time feedback neural network. Analysis of the dynamic behavior of CGNNs began in 1983 [17] and was further developed in [18]. The CGNN model includes well-known models from disciplines such as population biology and neurobiology. Moreover, the CGNN is more general, as it includes the Hopfield neural network (HNN) and the cellular neural network (CNN) as special cases [19,20,21,22,23]. The CGNN attracted wide attention when it was first proposed and quickly became one of the hottest research topics of its time. With continued in-depth research, the CGNN model has been further improved and extended, leading to numerous breakthrough achievements in areas such as stability and periodicity [24,25,26,27,28,29,30,31]. For example, the stability of NNs is studied in [24,25,26], the periodic solution problem of NNs is mainly discussed in [27,28], and the synchronization problem is explored in [29,30,31].

    Stochastic disturbances (SDs) in nervous systems often arise from the stochastic processes involved in neural transmission. SDs can lead to unstable output results and even cause errors in NNs. Therefore, it is necessary to consider the impact of SDs on the dynamic behavior of NNs. In recent years, many results on the stability analysis of CGNNs with SDs have been proposed [32,33,34].

In the implementation of CGNNs, time delays are almost unavoidable and may lead to instability or poor performance. Systems with a piecewise constant argument (PCA) are a generalization of time-delay systems. Cooke and Wiener introduced the notion of differential equations with PCA (EQPCA) in [35], and the concept of EQPCA was extended in [36,37,38,39]. With the development and continuous improvement of EQPCA, PCA systems have attracted the interest of many researchers. Some new stability conditions for PCA systems were obtained in [40,41,42,43], and PCA systems have been successfully applied in many fields, such as biomedicine, mechanical engineering, physics, and aerodynamic engineering.

Successful applications of NNs depend greatly on understanding their intrinsic dynamic behavior and characteristics, so it is necessary to analyze the dynamic behavior of NNs comprehensively and deeply. In the field of control theory, robustness is an essential topic of study, and many authors have conducted in-depth investigations into the robustness of NNs [8,9,10,44,45,46]. The authors of [8] investigated the robustness of the global exponential stability (GES) of recurrent neural networks (RNNs) subject to random disturbances and time delays; the robustness of RNNs was characterized by finding upper bounds for these parameters. Subsequently, Shen et al. studied the robustness of the GES of nonlinear systems with time delays and random disturbances in [44]. The authors of [9] discussed the robustness of RNNs with time-varying delays with respect to their connection weight matrices. The authors of [45] examined the robustness of hybrid stochastic NNs with neutral terms and time-varying delays. The robustness of the GES of nonlinear systems with a deviating argument and SDs was investigated in [10], where the upper bound of the noise intensity and the interval length of the deviating function were estimated. The robustness of bidirectional associative memory neural networks (BAMNNs) with neutral terms and time delays was studied in [46]. It is worth noting that there are many articles analyzing the stability of CGNNs, but few results on the robustness of CGNNs subject to PCA and SDs.

    Based on the above discussion, the aim of this article is to analyze the robustness of the GES of CGNNs with PCA and SDs. In this case, the stability of perturbed CGNNs is generally affected by the strengths of PCA and SDs. If PCA, SDs, or both are small enough, then the disturbed CGNN can still be stable. However, if the interval length of the deviation function or noise intensity exceeds a certain limit, the originally stable CGNN may become unstable. It would be interesting to determine this "certain limit". This paper applies stochastic analysis theory and inequality techniques to establish robustness results of the GES of CGNNs in the presence of PCA and SDs, and directly quantifies the PCA and SD levels of stable systems. That is, we estimate the upper bounds of PCA and SDs by solving transcendental equations, and further characterize the robustness of CGNNs with PCA and SDs. The following are this paper's primary works:

(1) This paper studies the robustness of the GES of CGNNs with PCA and SDs. The works [8,9] concern the robustness of RNNs, whereas the CGNN studied in this paper is a more general NN. However, existing results on the robustness of the GES of CGNNs are still scarce. Therefore, it is of interest to study the robustness of CGNNs with PCA and SDs.

    (2) In this paper, the effects of SDs and PCA on the stability of CGNNs are discussed. Robustness results of CGNNs with PCA and SDs are derived by applying stochastic analysis theory and inequality techniques, and the upper bounds of PCA and SDs are estimated by solving transcendental equations.

(3) The presence of the amplification functions in CGNNs poses a challenge when exploring the robustness of their GES. In previous studies, it was found that a boundedness assumption can be imposed on the amplification functions. Accordingly, in the derivations of this paper the amplification functions are replaced by their upper and lower bounds, which removes the difficulty that the amplification functions cause in the analysis of the CGNN model.

    Finally, based on our findings in this article, if both PCA and SDs are below the upper bounds obtained here, the disturbed CGNNs will remain exponentially stable.

The rest of this paper is organized as follows. Section 2 introduces the system and the preliminaries needed later. Sections 3–5 give the main results, where we discuss the influence of SDs and PCA on the GES of CGNNs. Finally, some numerical examples are given to illustrate the validity of the results.

Notations: Let $\mathbb{R}_+=[0,\infty)$ and $\mathbb{N}=\{1,2,\ldots\}$, and let $\mathbb{R}^n$ denote the $n$-dimensional Euclidean space. For a vector $\phi$, denote $\|\phi\|=\sum_{i=1}^{n}|\phi_i|$, and for a matrix $K$, $\|K\|=\max_{1\le j\le n}\sum_{i=1}^{n}|k_{ij}|$. Let $(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},P)$ be a complete probability space with a filtration $\{\mathcal{F}_t\}_{t\ge 0}$ satisfying the usual conditions, i.e., the filtration is right continuous and contains all $P$-null sets. The scalar Brownian motion $\omega(t)$ is defined on this probability space, and $\mathbb{E}$ stands for the mathematical expectation operator with respect to the probability measure $P$. Fix two real-valued sequences $\{\theta_i\}$, $\{\vartheta_i\}$, $i\in\mathbb{N}$, such that $\theta_i<\theta_{i+1}$ and $\theta_i<\vartheta_i<\theta_{i+1}$ for all $i\in\mathbb{N}$, with $\theta_i\to+\infty$ as $i\to+\infty$.
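As a quick illustration of the norm conventions above, the following sketch (NumPy assumed; the vector and matrix below are arbitrary illustrative choices) computes $\|\phi\|$ as the sum of absolute entries and $\|K\|$ as the maximum absolute column sum.

```python
import numpy as np

def vec_norm(phi):
    # ||phi|| = sum_i |phi_i|
    return np.sum(np.abs(phi))

def mat_norm(K):
    # ||K|| = max_{1<=j<=n} sum_i |k_ij|  (maximum absolute column sum)
    return np.max(np.sum(np.abs(K), axis=0))

phi = np.array([0.3, -0.4, 0.1])          # illustrative vector
K = np.array([[0.004, 0.002],
              [0.006, 0.003]])            # illustrative matrix
print(vec_norm(phi))   # 0.8
print(mat_norm(K))     # 0.01 = |0.004| + |0.006|
```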

    Consider the CGNN model:

$$\dot{e}_i(t)=d_i(e_i(t))\Big[-c_i(e_i(t))+\sum_{j=1}^{n}k_{ij}g_j(e_j(t))+u_i\Big],\qquad e(t_0)=e_0,\quad i=1,\ldots,n, \qquad (2.1)$$

where $n$ refers to the number of units, $t_0$ and $e_0$ are the initial data of CGNN (2.1), $e(t)=(e_1(t),\ldots,e_n(t))^T$ is the state vector and $e_i(t)$ is the state of the $i$th unit at time $t$, $d_i(\cdot)$ is an amplification function, $c_i(\cdot)$ is an appropriately behaved function that keeps the solutions of CGNN (2.1) bounded, $g_i(\cdot)$ is an activation function, $k_{ij}$ is the connection strength between cells $i$ and $j$, and $u_i$ is a constant external input.

Assume $e^*=(e^*_1,\ldots,e^*_n)$ is an equilibrium point of CGNN (2.1), and translate the equilibrium point to the origin. Let $h(t)=e(t)-e^*$; then model (2.1) can be converted into:

$$\dot{h}_i(t)=a_i(h_i(t))\Big[-b_i(h_i(t))+\sum_{j=1}^{n}k_{ij}f_j(h_j(t))\Big],\qquad h(t_0)=h_0,\quad i=1,2,\ldots,n, \qquad (2.2)$$

where $h_0=e_0-e^*$, $a_i(h_i(t))=d_i(h_i(t)+e^*_i)$, $b_i(h_i(t))=c_i(h_i(t)+e^*_i)-c_i(e^*_i)$, and $f_i(h_i(t))=g_i(h_i(t)+e^*_i)-g_i(e^*_i)$. The origin is obviously an equilibrium point of CGNN (2.2). Then, the stability of $e^*$ is equivalent to the stability of the origin of (2.2).

    Next, we give some assumptions:

Assumption 1. The functions $f_i(\cdot)$ satisfy the Lipschitz condition

$$|f_i(h)-f_i(l)|\le F_i|h-l|,\quad \forall h,l\in\mathbb{R}, \qquad (2.3)$$

with $f_i(0)=0$, $i=1,\ldots,n$, where the $F_i$ are known constants.

Assumption 2. The functions $a_i(\cdot)$ are continuous and bounded, and there exist constants $\underline{\mu}>0$ and $\bar{\mu}>0$ such that

$$\underline{\mu}\le a_i(h)\le\bar{\mu},\quad \forall h\in\mathbb{R},\ i=1,2,\ldots,n.$$

Assumption 3. For $b_i(\cdot)$, there exist constants $B_i>0$, $i=1,2,\ldots,n$, such that

$$\frac{b_i(h)-b_i(l)}{h-l}\le B_i,\quad \forall h,l\in\mathbb{R},\ h\ne l.$$

From Assumption 1, for any initial data $t_0$, $h_0$, CGNN (2.2) has a unique state $h(t;t_0,h_0)$ for $t\ge t_0$. Now we give the definition of the GES of (2.2).

Definition 1. CGNN (2.2) is globally exponentially stable if, for any $t_0\in\mathbb{R}_+$, $h_0\in\mathbb{R}^n$, and $t>t_0$, there are positive constants $\alpha$ and $\nu$ such that

$$\|h(t;t_0,h_0)\|\le\alpha\|h(t_0)\|\exp(-\nu(t-t_0)),$$

where $h(t;t_0,h_0)$ is the state of CGNN (2.2).

Consider the following stochastic CGNN (SCGNN) perturbed by SDs:

$$dl_i(t)=\Big\{a_i(l_i(t))\Big[-b_i(l_i(t))+\sum_{j=1}^{n}k_{ij}f_j(l_j(t))\Big]\Big\}dt+\sigma l_i(t)\,d\omega(t),\qquad l(t_0)=l_0=h_0\in\mathbb{R}^n,\quad i=1,2,\ldots,n, \qquad (3.1)$$

where $a_i(\cdot)$, $b_i(\cdot)$, $f_i(\cdot)$, and $k_{ij}$ are the same as in (2.2), $\sigma$ is the noise intensity, and $\omega(t)$ is a scalar Brownian motion defined on $(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},P)$.

Obviously, if Assumption 1 holds, then for any $t_0\in\mathbb{R}_+$ and $h_0\in\mathbb{R}^n$, SCGNN (3.1) has a unique state $l(t;t_0,h_0)$ for $t>t_0$, and $l=0$ is the equilibrium point of (3.1).

Definition 2. [47] SCGNN (3.1) is almost surely globally exponentially stable (ASGES) if, for any $t_0\in\mathbb{R}_+$ and $h_0\in\mathbb{R}^n$, the Lyapunov exponent

$$\limsup_{t\to\infty}\frac{\ln|l(t;t_0,h_0)|}{t}<0$$

almost surely.

SCGNN (3.1) is mean square globally exponentially stable (MSGES) if, for any $t_0\in\mathbb{R}_+$ and $l_0\in\mathbb{R}^n$, the Lyapunov exponent

$$\limsup_{t\to\infty}\frac{\ln\big(\mathbb{E}|l(t;t_0,l_0)|^2\big)}{t}<0,$$

where $l(t;t_0,l_0)$ is the state of SCGNN (3.1).

Remark 1. In general, MSGES and ASGES of SCGNN (3.1) do not imply each other. It is worth noting, however, that when A1 holds and SCGNN (3.1) is MSGES, then SCGNN (3.1) is also ASGES (see [47]).

Assumption 4.

$$\Big[16\Delta\Big(2(\bar{\mu}-\underline{\mu})^2\|K\|^2F^2+(\bar{\mu}-\underline{\mu})^2\|B\|^2\Big)\Big]\frac{\alpha^2}{\nu}\exp\Big\{2\Delta\Big[16\Delta\Big(\bar{\mu}^2\|B\|^2+\bar{\mu}^2\|K\|^2F^2+2(\bar{\mu}-\underline{\mu})^2\|K\|^2F^2\Big)\Big]\Big\}+2\alpha^2\exp\{-2\nu\Delta\}<1.$$

Theorem 1. Assume that A1–A4 hold and that CGNN (2.2) is globally exponentially stable. Then SCGNN (3.1) is MSGES and also ASGES if $|\sigma|<\bar{\sigma}$, where $\bar{\sigma}$ is the unique positive solution of the transcendental equation

$$\Big[16\Delta\Big(2(\bar{\mu}-\underline{\mu})^2\|K\|^2F^2+(\bar{\mu}-\underline{\mu})^2\|B\|^2\Big)+4\sigma^2\Big]\frac{\alpha^2}{\nu}\exp\Big\{2\Delta\Big[16\Delta\Big(\bar{\mu}^2\|B\|^2+\bar{\mu}^2\|K\|^2F^2+2(\bar{\mu}-\underline{\mu})^2\|K\|^2F^2\Big)+4\sigma^2\Big]\Big\}+2\alpha^2\exp\{-2\nu\Delta\}=1 \qquad (3.2)$$

and $\Delta>\ln(2\alpha^2)/(2\nu)>0$.

Proof. Let $h(t;t_0,h_0)\equiv h(t)$, $l(t;t_0,l_0)\equiv l(t)$. From (2.2) and (3.1), for $t\ge t_0$,

$$h_i(t)-l_i(t)=\int_{t_0}^{t}\Big\{a_i(h_i(s))\Big[-b_i(h_i(s))+\sum_{j=1}^{n}k_{ij}f_j(h_j(s))\Big]-a_i(l_i(s))\Big[-b_i(l_i(s))+\sum_{j=1}^{n}k_{ij}f_j(l_j(s))\Big]\Big\}ds-\int_{t_0}^{t}\sigma l_i(s)\,d\omega(s).$$

When $t\le t_0+2\Delta$, by the Cauchy–Schwarz inequality, A1, and the GES of (2.2),

$$\begin{aligned}
\mathbb{E}\|h(t)-l(t)\|^2\le{}&2\mathbb{E}\sum_{i=1}^{n}\Big|\int_{t_0}^{t}\Big\{a_i(h_i(s))\Big[-b_i(h_i(s))+\sum_{j=1}^{n}k_{ij}f_j(h_j(s))\Big]-a_i(l_i(s))\Big[-b_i(l_i(s))+\sum_{j=1}^{n}k_{ij}f_j(l_j(s))\Big]\Big\}ds\Big|^2\\
&+2\mathbb{E}\sum_{i=1}^{n}\Big|\int_{t_0}^{t}\sigma l_i(s)\,d\omega(s)\Big|^2\\
={}&2\mathbb{E}\sum_{i=1}^{n}\Big|\int_{t_0}^{t}\Big\{\big[a_i(l_i(s))b_i(l_i(s))-a_i(h_i(s))b_i(h_i(s))\big]+\sum_{j=1}^{n}k_{ij}\big[a_i(h_i(s))f_j(h_j(s))-a_i(l_i(s))f_j(l_j(s))\big]\Big\}ds\Big|^2\\
&+2\mathbb{E}\sum_{i=1}^{n}\Big|\int_{t_0}^{t}\sigma l_i(s)\,d\omega(s)\Big|^2\\
\le{}&2\mathbb{E}\sum_{i=1}^{n}\Big|\int_{t_0}^{t}\Big\{\big[\bar{\mu}b_i(l_i(s))-\underline{\mu}b_i(h_i(s))\big]+\sum_{j=1}^{n}k_{ij}\big[\bar{\mu}f_j(h_j(s))-\underline{\mu}f_j(l_j(s))\big]\Big\}ds\Big|^2+2\mathbb{E}\sum_{i=1}^{n}\int_{t_0}^{t}|\sigma l_i(s)|^2ds\\
\le{}&2\mathbb{E}\sum_{i=1}^{n}\Big\{\int_{t_0}^{t}\Big[\bar{\mu}B_i|l_i(s)-h_i(s)|+(\bar{\mu}-\underline{\mu})B_i|h_i(s)|+\sum_{j=1}^{n}|k_{ij}|\big(\bar{\mu}F_j|h_j(s)-l_j(s)|+(\bar{\mu}-\underline{\mu})F_j|l_j(s)|\big)\Big]ds\Big\}^2\\
&+2\mathbb{E}\sum_{i=1}^{n}\int_{t_0}^{t}|\sigma l_i(s)|^2ds\\
\le{}&16\Delta\Big[\bar{\mu}^2\|B\|^2\int_{t_0}^{t}\mathbb{E}\|h(s)-l(s)\|^2ds+(\bar{\mu}-\underline{\mu})^2\|B\|^2\int_{t_0}^{t}\mathbb{E}\|h(s)\|^2ds+\bar{\mu}^2\|K\|^2F^2\int_{t_0}^{t}\mathbb{E}\|h(s)-l(s)\|^2ds\\
&+(\bar{\mu}-\underline{\mu})^2\|K\|^2F^2\int_{t_0}^{t}\mathbb{E}\|l(s)\|^2ds\Big]+4\sigma^2\int_{t_0}^{t}\mathbb{E}\|h(s)-l(s)\|^2ds+4\sigma^2\int_{t_0}^{t}\mathbb{E}\|h(s)\|^2ds\\
\le{}&16\Delta\Big[\big(\bar{\mu}^2\|B\|^2+\bar{\mu}^2\|K\|^2F^2\big)\int_{t_0}^{t}\mathbb{E}\|h(s)-l(s)\|^2ds+2(\bar{\mu}-\underline{\mu})^2\|K\|^2F^2\int_{t_0}^{t}\mathbb{E}\|h(s)-l(s)\|^2ds\\
&+2(\bar{\mu}-\underline{\mu})^2\|K\|^2F^2\int_{t_0}^{t}\mathbb{E}\|h(s)\|^2ds+(\bar{\mu}-\underline{\mu})^2\|B\|^2\int_{t_0}^{t}\mathbb{E}\|h(s)\|^2ds\Big]\\
&+4\sigma^2\int_{t_0}^{t}\mathbb{E}\|h(s)-l(s)\|^2ds+4\sigma^2\int_{t_0}^{t}\mathbb{E}\|h(s)\|^2ds\\
\le{}&\Big[16\Delta\big(\bar{\mu}^2\|B\|^2+\bar{\mu}^2\|K\|^2F^2+2(\bar{\mu}-\underline{\mu})^2\|K\|^2F^2\big)+4\sigma^2\Big]\int_{t_0}^{t}\mathbb{E}\|h(s)-l(s)\|^2ds\\
&+\Big[16\Delta\big(2(\bar{\mu}-\underline{\mu})^2\|K\|^2F^2+(\bar{\mu}-\underline{\mu})^2\|B\|^2\big)+4\sigma^2\Big]\int_{t_0}^{t}\mathbb{E}\|h(s)\|^2ds\\
\le{}&\Big[16\Delta\big(\bar{\mu}^2\|B\|^2+\bar{\mu}^2\|K\|^2F^2+2(\bar{\mu}-\underline{\mu})^2\|K\|^2F^2\big)+4\sigma^2\Big]\int_{t_0}^{t}\mathbb{E}\|h(s)-l(s)\|^2ds\\
&+\Big[8\Delta\big(2(\bar{\mu}-\underline{\mu})^2\|K\|^2F^2+(\bar{\mu}-\underline{\mu})^2\|B\|^2\big)+2\sigma^2\Big]\alpha^2\|h(t_0)\|^2/\nu.
\end{aligned}$$

When $t_0+\Delta\le t\le t_0+2\Delta$, from the Gronwall inequality, we get

$$\begin{aligned}
\mathbb{E}\|h(t)-l(t)\|^2\le{}&\Big[8\Delta\big(2(\bar{\mu}-\underline{\mu})^2\|K\|^2F^2+(\bar{\mu}-\underline{\mu})^2\|B\|^2\big)+2\sigma^2\Big]\alpha^2\|h(t_0)\|^2/\nu\\
&\times\exp\Big\{\Big[16\Delta\big(\bar{\mu}^2\|B\|^2+\bar{\mu}^2\|K\|^2F^2+2(\bar{\mu}-\underline{\mu})^2\|K\|^2F^2\big)+4\sigma^2\Big](t-t_0)\Big\}\\
\le{}&\Big[8\Delta\big(2(\bar{\mu}-\underline{\mu})^2\|K\|^2F^2+(\bar{\mu}-\underline{\mu})^2\|B\|^2\big)+2\sigma^2\Big]\frac{\alpha^2}{\nu}\\
&\times\exp\Big\{2\Delta\Big[16\Delta\big(\bar{\mu}^2\|B\|^2+\bar{\mu}^2\|K\|^2F^2+2(\bar{\mu}-\underline{\mu})^2\|K\|^2F^2\big)+4\sigma^2\Big]\Big\}\sup_{t_0\le s\le t_0+\Delta}\mathbb{E}\|l(s)\|^2, \qquad (3.3)
\end{aligned}$$

and from (3.3) and the GES of (2.2), we obtain

$$\begin{aligned}
\mathbb{E}\|l(t)\|^2\le{}&\Big[16\Delta\big(2(\bar{\mu}-\underline{\mu})^2\|K\|^2F^2+(\bar{\mu}-\underline{\mu})^2\|B\|^2\big)+4\sigma^2\Big]\frac{\alpha^2}{\nu}\exp\Big\{2\Delta\Big[16\Delta\big(\bar{\mu}^2\|B\|^2+\bar{\mu}^2\|K\|^2F^2\\
&+2(\bar{\mu}-\underline{\mu})^2\|K\|^2F^2\big)+4\sigma^2\Big]\Big\}\sup_{t_0\le s\le t_0+\Delta}\mathbb{E}\|l(s)\|^2+2\alpha^2\|h(t_0)\|^2\exp\{-2\nu(t-t_0)\}\\
\le{}&\Big\{\Big[16\Delta\big(2(\bar{\mu}-\underline{\mu})^2\|K\|^2F^2+(\bar{\mu}-\underline{\mu})^2\|B\|^2\big)+4\sigma^2\Big]\frac{\alpha^2}{\nu}\exp\Big\{2\Delta\Big[16\Delta\big(\bar{\mu}^2\|B\|^2+\bar{\mu}^2\|K\|^2F^2\\
&+2(\bar{\mu}-\underline{\mu})^2\|K\|^2F^2\big)+4\sigma^2\Big]\Big\}+2\alpha^2\exp\{-2\nu\Delta\}\Big\}\sup_{t_0\le s\le t_0+\Delta}\mathbb{E}\|l(s)\|^2. \qquad (3.4)
\end{aligned}$$

Let
$$Q(\sigma)=\Big[16\Delta\big(2(\bar{\mu}-\underline{\mu})^2\|K\|^2F^2+(\bar{\mu}-\underline{\mu})^2\|B\|^2\big)+4\sigma^2\Big]\frac{\alpha^2}{\nu}\exp\Big\{2\Delta\Big[16\Delta\big(\bar{\mu}^2\|B\|^2+\bar{\mu}^2\|K\|^2F^2+2(\bar{\mu}-\underline{\mu})^2\|K\|^2F^2\big)+4\sigma^2\Big]\Big\}+2\alpha^2\exp\{-2\nu\Delta\}.$$
It is easy to deduce that $Q(\sigma)$ is continuous and strictly increasing in $\sigma\ge 0$. According to A4, we have $Q(0)<1$, so there must be a positive constant $\bar{\sigma}$ such that $Q(\bar{\sigma})=1$, and $Q(\sigma)<1$ for every $\sigma\in(0,\bar{\sigma})$. Then, for $|\sigma|<\bar{\sigma}$, we get

$$\Big[16\Delta\big(2(\bar{\mu}-\underline{\mu})^2\|K\|^2F^2+(\bar{\mu}-\underline{\mu})^2\|B\|^2\big)+4\sigma^2\Big]\frac{\alpha^2}{\nu}\exp\Big\{2\Delta\Big[16\Delta\big(\bar{\mu}^2\|B\|^2+\bar{\mu}^2\|K\|^2F^2+2(\bar{\mu}-\underline{\mu})^2\|K\|^2F^2\big)+4\sigma^2\Big]\Big\}+2\alpha^2\exp\{-2\nu\Delta\}<1.$$

Let

$$\varphi=-\ln\Big\{\Big[16\Delta\big(2(\bar{\mu}-\underline{\mu})^2\|K\|^2F^2+(\bar{\mu}-\underline{\mu})^2\|B\|^2\big)+4\sigma^2\Big]\frac{\alpha^2}{\nu}\exp\Big\{2\Delta\Big[16\Delta\big(\bar{\mu}^2\|B\|^2+\bar{\mu}^2\|K\|^2F^2+2(\bar{\mu}-\underline{\mu})^2\|K\|^2F^2\big)+4\sigma^2\Big]\Big\}+2\alpha^2\exp\{-2\nu\Delta\}\Big\}\Big/\Delta,$$

so that $\varphi>0$, and from (3.4), we have

$$\sup_{t_0+\Delta\le t\le t_0+2\Delta}\mathbb{E}\|l(t)\|^2\le\exp(-\varphi\Delta)\Big(\sup_{t_0\le t\le t_0+\Delta}\mathbb{E}\|l(t)\|^2\Big). \qquad (3.5)$$

From the existence and uniqueness of the state of SCGNN (3.1), for any $\varpi=1,2,\ldots$, when $t\ge t_0+(\varpi-1)\Delta$,

$$l(t;t_0,h_0)=l\big(t;\,t_0+(\varpi-1)\Delta,\,l(t_0+(\varpi-1)\Delta;t_0,h_0)\big). \qquad (3.6)$$

From (3.5) and (3.6), we obtain

$$\sup_{t_0+\varpi\Delta\le t\le t_0+(\varpi+1)\Delta}\mathbb{E}\|l(t)\|^2\le\exp(-\varphi\Delta)\sup_{t_0+(\varpi-1)\Delta\le t\le t_0+\varpi\Delta}\mathbb{E}\|l(t;t_0,h_0)\|^2\le\cdots\le\exp(-\varphi\varpi\Delta)\sup_{t_0\le t\le t_0+\Delta}\mathbb{E}\|l(t;t_0,h_0)\|^2=\tau\exp(-\varphi\varpi\Delta),$$

where $\tau=\sup_{t_0\le t\le t_0+\Delta}\mathbb{E}\|l(t;t_0,h_0)\|^2$. Hence, for all $t>t_0+\Delta$, there exists an integer $\varpi>0$ such that $t_0+\varpi\Delta\le t\le t_0+(\varpi+1)\Delta$, and then

$$\mathbb{E}\|l(t;t_0,h_0)\|^2\le\tau\exp(-\varphi t+\varphi t_0+\varphi\Delta)=\big(\tau\exp(\varphi\Delta)\big)\exp(-\varphi(t-t_0)).$$

For $t_0\le t\le t_0+\Delta$, the above formula also holds. So, SCGNN (3.1) is MSGES and also ASGES.
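Since $Q(\sigma)$ (defined in the proof above) is continuous and strictly increasing with $Q(0)<1$ under A4, the bound $\bar{\sigma}$ in Theorem 1 can be located by simple bisection on $Q(\sigma)-1$. Below is a minimal sketch of this idea; the constants $c_1$, $c_2$ stand in for the bracketed quantities of (3.2) and are purely illustrative placeholders, not values taken from the paper.

```python
import math

# Illustrative stand-in for the left-hand side of (3.2):
# Q(sigma) = [c1 + 4 sigma^2] * alpha^2/nu * exp{2 Delta [c2 + 4 sigma^2]} + 2 alpha^2 exp(-2 nu Delta)
alpha, nu, Delta = 0.8, 0.5, 0.3
c1, c2 = 0.01, 0.05          # placeholder constants

def Q(sigma):
    return ((c1 + 4 * sigma**2) * alpha**2 / nu
            * math.exp(2 * Delta * (c2 + 4 * sigma**2))
            + 2 * alpha**2 * math.exp(-2 * nu * Delta))

# Q is strictly increasing in sigma >= 0; if Q(0) < 1 (Assumption 4) and Q(hi) > 1,
# bisection on Q(sigma) - 1 isolates the unique root sigma_bar.
lo, hi = 0.0, 5.0
assert Q(lo) < 1.0 < Q(hi)
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if Q(mid) < 1.0 else (lo, mid)
sigma_bar = 0.5 * (lo + hi)
print(sigma_bar)
```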

    Consider the model of CGNN with PCA:

$$\dot{l}_i(t)=a_i(l_i(t))\Big[-b_i(l_i(t))+\sum_{j=1}^{n}k_{ij}f_j(l_j(t))+\sum_{j=1}^{n}w_{ij}f_j\big(l_j(\varrho(t))\big)\Big],\qquad l(t_0)=l_0=h_0,\quad i=1,\ldots,n, \qquad (4.1)$$

where $\varrho(t)=\vartheta_k$ for $t\in[\theta_k,\theta_{k+1})$, $k\in\mathbb{N}$, is the identification function, and $a_i(\cdot)$, $b_i(\cdot)$, $k_{ij}$, $f_j(\cdot)$ are the same as in (2.2).

Model (4.1) is a hybrid system. Fix $k\in\mathbb{N}$ and consider the interval $[\theta_k,\theta_{k+1})$: if $\theta_k\le t<\vartheta_k$, i.e., $t<\varrho(t)$, then (4.1) is an advanced system; similarly, if $\vartheta_k\le t<\theta_{k+1}$, then (4.1) is a delayed system. Thus system (4.1) is of deviated type, with the argument being either advanced or delayed.
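To make the switching between advanced and delayed type concrete, here is a small sketch that evaluates $\varrho(t)$ for the sequences $\theta_k=k/16$ and $\vartheta_k=(2k+1)/32$ used in the examples of Section 6, and reports whether the argument is advanced or delayed at a given $t$. The sample times are arbitrary illustrative choices.

```python
import math

def vartheta(k):
    # vartheta_k = (2k + 1) / 32
    return (2 * k + 1) / 32

def rho(t):
    # varrho(t) = vartheta_k for t in [theta_k, theta_{k+1}); with theta_k = k/16
    # the interval index is simply floor(16 t).
    k = math.floor(16 * t)
    return vartheta(k)

for t in (0.05, 0.07, 0.10):
    r = rho(t)
    kind = "advanced (t < rho(t))" if t < r else "delayed (t >= rho(t))"
    print(f"t = {t:.2f}  rho(t) = {r:.5f}  {kind}")
```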

For any initial conditions $t_0$ and $l_0$, (4.1) has a unique state $l(t;t_0,l_0)$, and it clearly admits the trivial state $l=0$.

The CGNN model without PCA is as follows:

$$\dot{h}_i(t)=a_i(h_i(t))\Big[-b_i(h_i(t))+\sum_{j=1}^{n}k_{ij}f_j(h_j(t))+\sum_{j=1}^{n}w_{ij}f_j(h_j(t))\Big],\qquad h(t_0)=h_0,\quad i=1,2,\ldots,n. \qquad (4.2)$$

Assumption 5. There exists a constant $\theta>0$ such that $\theta_{k+1}-\theta_k\le\theta$ for all $k\in\mathbb{N}$.

Assumption 6. $\theta\big[\bar{\mu}F\|W\|+(\bar{\mu}\|B\|+\bar{\mu}F\|K\|)(1+\theta\bar{\mu}F\|W\|)\exp\big(\theta(\bar{\mu}\|B\|+\bar{\mu}F\|K\|)\big)\big]<1.$

Assumption 7.

$$\alpha\exp(-\nu\Delta)+\big[(\bar{\mu}-\underline{\mu})\|B\|+(\bar{\mu}-\underline{\mu})\|K\|F+2\bar{\mu}\|W\|F\big]\frac{\alpha}{\nu}\exp\Big\{2\Delta\big[\bar{\mu}\|B\|+2\bar{\mu}\|K\|F+3\bar{\mu}\|W\|F-\underline{\mu}\|K\|F\big]\Big\}<1.$$

Lemma 1. Let A1–A5 hold, and let $l(t)$ be a solution of system (4.1). For all $t\in\mathbb{R}_+$, the inequality

$$\|l(\varrho(t))\|\le\lambda\|l(t)\| \qquad (4.3)$$

holds, where $\lambda=\Big\{1-\theta\big[\bar{\mu}F\|W\|+(\bar{\mu}\|B\|+\bar{\mu}F\|K\|)(1+\theta\bar{\mu}F\|W\|)\exp\big(\theta(\bar{\mu}\|B\|+\bar{\mu}F\|K\|)\big)\big]\Big\}^{-1}$.

Proof. Fix $k\in\mathbb{N}$. For any $t\in[\theta_k,\theta_{k+1})$, we have

$$l_i(t)=l_i(\vartheta_k)+\int_{\vartheta_k}^{t}a_i(l_i(s))\Big[-b_i(l_i(s))+\sum_{j=1}^{n}k_{ij}f_j(l_j(s))+\sum_{j=1}^{n}w_{ij}f_j(l_j(\vartheta_k))\Big]ds.$$

From A1–A4, taking absolute values on both sides and summing over $i$, we have

$$\begin{aligned}
\|l(t)\|\le{}&\|l(\vartheta_k)\|+\sum_{i=1}^{n}\Big|\int_{\vartheta_k}^{t}a_i(l_i(s))\Big[-b_i(l_i(s))+\sum_{j=1}^{n}k_{ij}f_j(l_j(s))+\sum_{j=1}^{n}w_{ij}f_j(l_j(\vartheta_k))\Big]ds\Big|\\
\le{}&\|l(\vartheta_k)\|+\bar{\mu}\|B\|\int_{\vartheta_k}^{t}\|l(s)\|ds+\bar{\mu}F\|K\|\int_{\vartheta_k}^{t}\|l(s)\|ds+\bar{\mu}F\|W\|\int_{\vartheta_k}^{t}\|l(\vartheta_k)\|ds\\
\le{}&\|l(\vartheta_k)\|+(\bar{\mu}\|B\|+\bar{\mu}F\|K\|)\int_{\vartheta_k}^{t}\|l(s)\|ds+\theta\bar{\mu}F\|W\|\,\|l(\vartheta_k)\|\\
={}&(1+\theta\bar{\mu}F\|W\|)\|l(\vartheta_k)\|+(\bar{\mu}\|B\|+\bar{\mu}F\|K\|)\int_{\vartheta_k}^{t}\|l(s)\|ds,
\end{aligned}$$

and from the Gronwall inequality, we obtain

$$\|l(t)\|\le(1+\theta\bar{\mu}F\|W\|)\|l(\vartheta_k)\|\exp\big\{\theta(\bar{\mu}\|B\|+\bar{\mu}F\|K\|)\big\}.$$

Similarly, for $t\in[\theta_k,\theta_{k+1})$, we have

$$\|l(\vartheta_k)\|\le\|l(t)\|+\bar{\mu}F\|W\|\int_{\vartheta_k}^{t}\|l(\vartheta_k)\|\,ds+(\bar{\mu}\|B\|+\bar{\mu}F\|K\|)\int_{\vartheta_k}^{t}\|l(s)\|\,ds.$$

That is,

$$\|l(\vartheta_k)\|\le\|l(t)\|+\theta\big[\bar{\mu}F\|W\|+(\bar{\mu}\|B\|+\bar{\mu}F\|K\|)(1+\theta\bar{\mu}F\|W\|)\exp\big\{\theta(\bar{\mu}\|B\|+\bar{\mu}F\|K\|)\big\}\big]\|l(\vartheta_k)\|.$$

Hence, for $t\in[\theta_k,\theta_{k+1})$,

$$\|l(\vartheta_k)\|\le\Big\{1-\theta\big[\bar{\mu}F\|W\|+(\bar{\mu}\|B\|+\bar{\mu}F\|K\|)(1+\theta\bar{\mu}F\|W\|)\exp\big(\theta(\bar{\mu}\|B\|+\bar{\mu}F\|K\|)\big)\big]\Big\}^{-1}\|l(t)\|.$$

The above formula holds for all $t\in\mathbb{R}_+$ due to the arbitrariness of $t$ and $k$.
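For intuition about the size of the constant in Lemma 1, the following sketch evaluates the bracketed quantity of A6 and, when it is below one, the resulting $\lambda$. The numerical values below are illustrative assumptions, not parameters taken from the paper.

```python
import math

def lambda_of(theta, mu_bar, F, B, K, W):
    # bracket = mu_bar*F*||W|| + (mu_bar*||B|| + mu_bar*F*||K||) * (1 + theta*mu_bar*F*||W||)
    #           * exp(theta*(mu_bar*||B|| + mu_bar*F*||K||))
    a = mu_bar * F * W
    b = mu_bar * B + mu_bar * F * K
    bracket = a + b * (1 + theta * a) * math.exp(theta * b)
    if theta * bracket >= 1:
        raise ValueError("Assumption 6 fails: theta * bracket >= 1")
    return 1.0 / (1.0 - theta * bracket)   # lambda = {1 - theta * [...]}^{-1}

# illustrative values for theta, mu_bar, F, ||B||, ||K||, ||W||
print(lambda_of(theta=1/16, mu_bar=2.0, F=1.0, B=0.005, K=0.01, W=0.02))
```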

    We discuss the robustness for the GES of CGNNs with PCA in Theorem 2.

Theorem 2. Suppose A1–A7 hold and CGNN (4.2) is globally exponentially stable. Then CGNN (4.1) is globally exponentially stable if $\theta<\min(\Delta/2,\bar{\theta},\bar{\bar{\theta}})$, where $\bar{\theta}$ is the unique positive solution $\hat{x}$ of Eq (4.4):

$$\begin{aligned}
&\alpha\exp\big(-\nu(\Delta-\hat{x})\big)+\bigg[(\bar{\mu}-\underline{\mu})\|B\|+(\bar{\mu}-\underline{\mu})\|K\|F+\bar{\mu}\|W\|F+\frac{\bar{\mu}\|W\|F}{1-\hat{x}\big(\bar{\mu}F\|W\|+(\bar{\mu}\|B\|+\bar{\mu}F\|K\|)(1+\hat{x}\bar{\mu}F\|W\|)\exp\{\hat{x}(\bar{\mu}\|B\|+\bar{\mu}F\|K\|)\}\big)}\bigg]\frac{\alpha}{\nu}\\
&\times\exp\bigg\{2\Delta\bigg[\bar{\mu}\|B\|+2\bar{\mu}\|K\|F+2\bar{\mu}\|W\|F-\underline{\mu}\|K\|F+\frac{\bar{\mu}\|W\|F}{1-\hat{x}\big(\bar{\mu}F\|W\|+(\bar{\mu}\|B\|+\bar{\mu}F\|K\|)(1+\hat{x}\bar{\mu}F\|W\|)\exp\{\hat{x}(\bar{\mu}\|B\|+\bar{\mu}F\|K\|)\}\big)}\bigg]\bigg\}=1, \qquad (4.4)
\end{aligned}$$

$\bar{\bar{\theta}}$ is the unique positive solution $\check{x}$ of Eq (4.5):

$$\check{x}\big[\bar{\mu}F\|W\|+(\bar{\mu}\|B\|+\bar{\mu}F\|K\|)(1+\check{x}\bar{\mu}F\|W\|)\exp\{\check{x}(\bar{\mu}\|B\|+\bar{\mu}F\|K\|)\}\big]=1, \qquad (4.5)$$

and $\Delta>\ln(\alpha)/\nu>0$.

Proof. For convenience, we write $h(t;t_0,h_0)\equiv h(t)$, $l(t;t_0,l_0)\equiv l(t)$.

Combining (4.1), (4.2), and Lemma 1, for $t\ge t_0>0$,

    ||h(t)l(t)||ni=1|tt0{(ai(li(s))bi(li(s))ai(hi(s))bi(hi(s)))+nj=1kij[ai(hi(s))fj(hj(s))ai(li(s))fj(lj(s))]+nj=1wij[ai(hi(s))fj(hj(s))ai(li(s))fj(lj(ϱ(s)))]}ds|ni=1|tt0{(ˉμbi(li(s))μ_bi(hi(s)))+nj=1kij[ˉμfj(hj(s))μ_fj(lj(s))]+nj=1wij[ˉμfj(hj(s))μ_fj(lj(ϱ(s)))]}ds|ni=1tt0{ˉμBi|li(s)hi(s)|+(ˉμμ_)Bi|hi(s)|+nj=1|kij|ˉμFj|hj(s)lj(s)|+nj=1|kij|(ˉμμ_)Fj|lj(s)|+nj=1|wij|ˉμFj|hj(s)lj(s)|+nj=1|wij|ˉμFj|lj(s)|+nj=1|wij|ˉμFj|lj(ϱ(s))|}ds(ˉμ||B||+ˉμ||K||F+ˉμ||W||F)tt0||h(s)l(s)||ds+(ˉμμ_)||B||tt0||h(s)||ds+[(ˉμμ_)||K||F+ˉμ||W||F+ˉμλ||W||F]tt0||l(s)||ds[ˉμ||B||+2ˉμ||K||F+2ˉμ||W||Fμ_||K||F+ˉμλ||W||F]tt0||h(s)l(s)||ds+[(ˉμμ_)||B||+(ˉμμ_)||K||F+ˉμ||W||F+ˉμλ||W||F]tt0||h(s)||ds.

In view of the GES of (4.2), for any $t\ge t_0\ge 0$,

$$\|h(t)-l(t)\|\le\big[\bar{\mu}\|B\|+2\bar{\mu}\|K\|F+2\bar{\mu}\|W\|F-\underline{\mu}\|K\|F+\bar{\mu}\lambda\|W\|F\big]\int_{t_0}^{t}\|h(s)-l(s)\|ds+\big[(\bar{\mu}-\underline{\mu})\|B\|+(\bar{\mu}-\underline{\mu})\|K\|F+\bar{\mu}\|W\|F+\bar{\mu}\lambda\|W\|F\big]\frac{\alpha\|h(t_0)\|}{\nu}. \qquad (4.6)$$

Applying the Gronwall–Bellman lemma to (4.6), for $t_0+\theta\le t\le t_0+2\Delta$,

$$\|h(t)-l(t)\|\le\big[(\bar{\mu}-\underline{\mu})\|B\|+(\bar{\mu}-\underline{\mu})\|K\|F+\bar{\mu}\|W\|F+\bar{\mu}\lambda\|W\|F\big]\frac{\alpha\|h(t_0)\|}{\nu}\exp\Big\{2\Delta\big[\bar{\mu}\|B\|+2\bar{\mu}\|K\|F+2\bar{\mu}\|W\|F-\underline{\mu}\|K\|F+\bar{\mu}\lambda\|W\|F\big]\Big\},$$

    then

$$\|l(t)\|\le\|h(t)\|+\big[(\bar{\mu}-\underline{\mu})\|B\|+(\bar{\mu}-\underline{\mu})\|K\|F+\bar{\mu}\|W\|F+\bar{\mu}\lambda\|W\|F\big]\frac{\alpha\|h(t_0)\|}{\nu}\exp\Big\{2\Delta\big[\bar{\mu}\|B\|+2\bar{\mu}\|K\|F+2\bar{\mu}\|W\|F-\underline{\mu}\|K\|F+\bar{\mu}\lambda\|W\|F\big]\Big\}. \qquad (4.7)$$

Note that $\theta<\min\{\Delta/2,\bar{\theta}\}$, and from the GES of (4.2) and (4.7), for $t_0-\theta+\Delta\le t\le t_0-\theta+2\Delta$, we have

$$\begin{aligned}
\|l(t)\|\le{}&\alpha\|h(t_0)\|\exp\{-\nu(t-t_0)\}+\big[(\bar{\mu}-\underline{\mu})\|B\|+(\bar{\mu}-\underline{\mu})\|K\|F+\bar{\mu}\|W\|F+\bar{\mu}\lambda\|W\|F\big]\frac{\alpha\|h(t_0)\|}{\nu}\\
&\times\exp\Big\{2\Delta\big[\bar{\mu}\|B\|+2\bar{\mu}\|K\|F+2\bar{\mu}\|W\|F-\underline{\mu}\|K\|F+\bar{\mu}\lambda\|W\|F\big]\Big\}\\
\le{}&\Big\{\alpha\exp\{-\nu(\Delta-\theta)\}+\big[(\bar{\mu}-\underline{\mu})\|B\|+(\bar{\mu}-\underline{\mu})\|K\|F+\bar{\mu}\|W\|F+\bar{\mu}\lambda\|W\|F\big]\frac{\alpha}{\nu}\\
&\times\exp\Big\{2\Delta\big[\bar{\mu}\|B\|+2\bar{\mu}\|K\|F+2\bar{\mu}\|W\|F-\underline{\mu}\|K\|F+\bar{\mu}\lambda\|W\|F\big]\Big\}\Big\}\|l_0\|.
\end{aligned}$$

Let
$$P(\theta)=\alpha\exp\{-\nu(\Delta-\theta)\}+\big[(\bar{\mu}-\underline{\mu})\|B\|+(\bar{\mu}-\underline{\mu})\|K\|F+\bar{\mu}\|W\|F+\bar{\mu}\lambda\|W\|F\big]\frac{\alpha}{\nu}\exp\Big\{2\Delta\big[\bar{\mu}\|B\|+2\bar{\mu}\|K\|F+2\bar{\mu}\|W\|F-\underline{\mu}\|K\|F+\bar{\mu}\lambda\|W\|F\big]\Big\}$$
and
$$R(\theta)=\theta\big[\bar{\mu}F\|W\|+(\bar{\mu}\|B\|+\bar{\mu}F\|K\|)(1+\theta\bar{\mu}F\|W\|)\exp\{\theta(\bar{\mu}\|B\|+\bar{\mu}F\|K\|)\}\big].$$
Obviously, $R(\theta)$ is strictly increasing in $\theta$, so there must exist $\bar{\bar{\theta}}>0$ such that $R(\bar{\bar{\theta}})=1$. In addition, $P(\theta)$ is also strictly increasing in $\theta$, and from A7 we have $P(0)<1$; thus there must be a positive constant $\bar{\theta}$ such that $P(\bar{\theta})=1$. Since $P(\theta)$ is increasing on the interval $(0,\bar{\bar{\theta}})$, on which $\lambda$ is well defined, it follows that $P(\theta)<1$ whenever $\theta<\min\{\Delta/2,\bar{\theta},\bar{\bar{\theta}}\}$.

Letting

$$\iota=-\ln\Big\{\alpha\exp\{-\nu(\Delta-\theta)\}+\big[(\bar{\mu}-\underline{\mu})\|B\|+(\bar{\mu}-\underline{\mu})\|K\|F+\bar{\mu}\|W\|F+\bar{\mu}\lambda\|W\|F\big]\frac{\alpha}{\nu}\exp\Big\{2\Delta\big[\bar{\mu}\|B\|+2\bar{\mu}\|K\|F+2\bar{\mu}\|W\|F-\underline{\mu}\|K\|F+\bar{\mu}\lambda\|W\|F\big]\Big\}\Big\}\Big/\Delta,$$

we get

$$\|l(t)\|\le\exp\{-\iota\Delta\}\|l_0\|. \qquad (4.8)$$

Considering the uniqueness of the solution of CGNN (4.1), for any positive integer $\beta$, we have

$$l(t;t_0,l_0)=l\big(t;\,t_0+(\beta-1)\Delta,\,l(t_0+(\beta-1)\Delta;t_0,l_0)\big). \qquad (4.9)$$

Then, from (4.8) and (4.9), for $t\ge t_0-\theta+\beta\Delta$,

$$\|l(t;t_0,l_0)\|\le\exp\{-\iota\Delta\}\big\|l\big(t_0+(\beta-1)\Delta;t_0,l_0\big)\big\|\le\cdots\le\exp\{-\beta\iota\Delta\}\|l_0\|,$$

and so, for any $t>t_0-\theta+\Delta$, there exists an integer $\beta>0$ such that $t_0-\theta+(\beta-1)\Delta\le t\le t_0-\theta+\beta\Delta$, and

$$\|l(t;t_0,l_0)\|\le\exp\{-\iota(t-t_0)\}\exp\{\iota(\Delta-\theta)\}\|l_0\|. \qquad (4.10)$$

Clearly, (4.10) also holds for $t_0\le t\le t_0-\theta+\Delta$. That is, CGNN (4.1) is globally exponentially stable.

Remark 2. Compared with other systems, a system with PCA is a hybrid system. By solving the transcendental equations, we derive an upper bound on the interval length of the PCA. If the interval length of the PCA is less than $\min(\Delta/2,\bar{\theta},\bar{\bar{\theta}})$, then the perturbed CGNN remains globally exponentially stable.

    Consider the impacts of both SDs and PCA together on the GES of CGNNs:

$$dl_i(t)=\Big\{a_i(l_i(t))\Big[-b_i(l_i(t))+\sum_{j=1}^{n}k_{ij}f_j(l_j(t))+\sum_{j=1}^{n}w_{ij}f_j\big(l_j(\varrho(t))\big)\Big]\Big\}dt+\sigma l_i(t)\,d\omega(t),\qquad l(t_0)=l_0,\quad t\ge t_0\ge 0,\quad i=1,2,\ldots,n, \qquad (5.1)$$

where $\varrho(t)=\vartheta_k$ for $t\in[\theta_k,\theta_{k+1})$, $k\in\mathbb{N}$, is the identification function, $\sigma$ is the noise intensity, and $a_i(\cdot)$, $b_i(\cdot)$, $k_{ij}$, and $f_j(\cdot)$ are the same as in (2.2).

SPCGNN (5.1) is a hybrid system in a stochastic environment. Fix $k\in\mathbb{N}$ and consider the interval $[\theta_k,\theta_{k+1})$: if $\theta_k\le t<\vartheta_k$, then (5.1) is an advanced system; similarly, if $\vartheta_k\le t<\theta_{k+1}$, SPCGNN (5.1) is a delayed system.

    SPCGNN (5.1) can be viewed as the perturbed system of the following model:

$$dh_i(t)=\Big\{a_i(h_i(t))\Big[-b_i(h_i(t))+\sum_{j=1}^{n}k_{ij}f_j(h_j(t))+\sum_{j=1}^{n}w_{ij}f_j(h_j(t))\Big]\Big\}dt,\qquad h(t_0)=h_0=l_0,\quad t\ge t_0\ge 0,\quad i=1,2,\ldots,n. \qquad (5.2)$$

    It is evident that l=0 is a trivial state of SPCGNN (5.1) and h=0 is a trivial state of CGNN (5.2). We suppose that SPCGNN (5.1) has a unique state l(t;t0,l0) for initial conditions t0 and l0.

    Now, the stability definition of SPCGNN (5.1) is described as follows.

Definition 3. SPCGNN (5.1) is MSGES if, for any $t_0\in\mathbb{R}_+$ and $l_0\in\mathbb{R}^n$, there exist constants $\alpha>0$ and $\nu>0$ such that

$$\mathbb{E}\|l(t;t_0,l_0)\|^2\le\alpha\|l_0\|^2\exp(-\nu(t-t_0)),\quad t\ge t_0\ge 0.$$

Assumption 8. $9\theta^2\bar{\mu}^2F^2\|W\|^2+9\theta\big(3\theta\bar{\mu}^2\|B\|^2+3\theta\bar{\mu}^2F^2\|K\|^2+\sigma^2\big)\big(1+3\theta^2\bar{\mu}^2F^2\|W\|^2\big)\exp\big\{3\theta\big(3\theta\bar{\mu}^2\|B\|^2+3\theta\bar{\mu}^2F^2\|K\|^2+\sigma^2\big)\big\}<1.$

Assumption 9.

$$2\alpha\exp\{-\nu\Delta\}+4\Delta\Big[24\Delta\big((\bar{\mu}-\underline{\mu})^2\|B\|^2+(\bar{\mu}-\underline{\mu})^2\|K\|^2F^2+2\bar{\mu}^2\|W\|^2F^2+8\bar{\mu}^2\|W\|^2F^2\big)\Big]\frac{\alpha}{\nu}\exp\Big\{2\Delta\Big[24\Delta\big(\bar{\mu}^2\|B\|^2+\bar{\mu}^2\|K\|^2F^2+3\bar{\mu}^2\|W\|^2F^2+(\bar{\mu}-\underline{\mu})^2\|K\|^2F^2+8\bar{\mu}^2\|W\|^2F^2\big)\Big]\Big\}<1.$$

Lemma 2. Let A1–A6 hold, and let $l(t)$ be a solution of SPCGNN (5.1). For all $t\in\mathbb{R}_+$, the inequality

$$\mathbb{E}\|l(\varrho(t))\|^2\le\bar{\lambda}\,\mathbb{E}\|l(t)\|^2 \qquad (5.3)$$

holds, where $\bar{\lambda}=3\Big\{1-\Big[9\theta^2\bar{\mu}^2F^2\|W\|^2+9\theta\big(3\theta\bar{\mu}^2\|B\|^2+3\theta\bar{\mu}^2F^2\|K\|^2+\sigma^2\big)\big(1+3\theta^2\bar{\mu}^2F^2\|W\|^2\big)\exp\big(3\theta(3\theta\bar{\mu}^2\|B\|^2+3\theta\bar{\mu}^2F^2\|K\|^2+\sigma^2)\big)\Big]\Big\}^{-1}$.

Proof. Fix $t\in\mathbb{R}_+$ and $k\in\mathbb{N}$ such that $t\in[\theta_k,\theta_{k+1})$, so that $\varrho(t)=\vartheta_k$. Then

    E||l(t)||23E||l(ϑk)||2+3θEni=1tϑk|ˉμ[bi(li(s))+nj=1kijfj(lj(s))+nj=1wijfj(lj(ϑk))]|2ds+3Eni=1|tϑkσili(s)dω(s)|23E||l(ϑk)||2+9θˉμ2||B||2tϑkE||l(s)||2ds+9θˉμ2||K||2F2tϑkE||l(s)||2ds+9θ2ˉμ||W||2F2E||l(ϑk)||2+3σ2tϑkE||l(s)||2ds=3(1+3θ2ˉμ2||W||2F2)E||l(ϑk)||2+3(3θˉμ2||B||2+3θˉμ2||K||2F2+σ2)tϑkE||l(s)||2ds. (5.4)

From the Gronwall–Bellman lemma, we have

$$\mathbb{E}\|l(t)\|^2\le3\big(1+3\theta^2\bar{\mu}^2\|W\|^2F^2\big)\mathbb{E}\|l(\vartheta_k)\|^2\exp\big\{3\theta\big(3\theta\bar{\mu}^2\|B\|^2+3\theta\bar{\mu}^2\|K\|^2F^2+\sigma^2\big)\big\}. \qquad (5.5)$$

Similarly, for $t\in[\theta_k,\theta_{k+1})$, from (5.4), we have

    E||l(ϑk)||23E||l(t)||2+9θ2ˉμ2||W||2F2E||l(ϑk)||2+3(3θˉμ2||B||2+3θˉμ2||K||2F2+σ2)tϑkE||l(s)||2ds3E||l(t)||2+9θ2ˉμ2||W||2F2E||l(ϑk)||2+9θ(3θˉμ2||B||2+3θˉμ2||K||2F2+σ2)×(1+3θ2ˉμ2||W||2F2)exp{3θ(3θˉμ2||B||2+3θˉμ2||K||2F2+σ2)}E||l(ϑk)||2=3E||l(t)||2+[9θ2ˉμ2F2||W||2+9θ(3θˉμ2||B||2+3θˉμ2||K||2F2+σ2)×(1+3θ2ˉμ2F2||W||2)exp{3θ(3θˉμ2||B||2+3θˉμ2||K||2F2+σ2)}]E||l(ϑk)||2=3E||l(t)||2+ϖE||l(ϑk)||2,

where $\varpi=9\theta^2\bar{\mu}^2F^2\|W\|^2+9\theta\big(3\theta\bar{\mu}^2\|B\|^2+3\theta\bar{\mu}^2\|K\|^2F^2+\sigma^2\big)\big(1+3\theta^2\bar{\mu}^2F^2\|W\|^2\big)\exp\big\{3\theta\big(3\theta\bar{\mu}^2\|B\|^2+3\theta\bar{\mu}^2\|K\|^2F^2+\sigma^2\big)\big\}$.

From A6, for $t\in[\theta_k,\theta_{k+1})$,

$$\mathbb{E}\|l(\varrho(t))\|^2\le\frac{3}{1-\varpi}\mathbb{E}\|l(t)\|^2=\bar{\lambda}\,\mathbb{E}\|l(t)\|^2,$$

where $\bar{\lambda}=\dfrac{3}{1-\varpi}$.

For all $t\in\mathbb{R}_+$, (5.3) holds due to the arbitrariness of $t$ and $k$.

Theorem 3. Let A1–A9 hold, and let CGNN (5.2) be globally exponentially stable. Then SPCGNN (5.1) is MSGES and also ASGES if $|\sigma|<\bar{\sigma}/\sqrt{2}$ and $\theta<\min(\Delta/2,\bar{\theta})$, where $\bar{\sigma}$ is the unique positive solution $\hat{w}$ of Eq (5.6):

$$2\alpha\exp\{-\nu\Delta\}+4\Delta\Big[24\Delta\big((\bar{\mu}-\underline{\mu})^2\|B\|^2+(\bar{\mu}-\underline{\mu})^2\|K\|^2F^2+2\bar{\mu}^2\|W\|^2F^2+8\bar{\mu}^2\|W\|^2F^2\big)+4\hat{w}^2\Big]\frac{\alpha}{\nu}\exp\Big\{2\Delta\Big[24\Delta\big(\bar{\mu}^2\|B\|^2+\bar{\mu}^2\|K\|^2F^2+3\bar{\mu}^2\|W\|^2F^2+(\bar{\mu}-\underline{\mu})^2\|K\|^2F^2+8\bar{\mu}^2\|W\|^2F^2\big)+4\hat{w}^2\Big]\Big\}=1, \qquad (5.6)$$

and $\bar{\theta}$ is the unique positive solution $\check{w}$ of Eq (5.7):

$$2\alpha\exp\{-\nu(\Delta-\check{w})\}+4\Delta\Big[24\Delta\big((\bar{\mu}-\underline{\mu})^2\|B\|^2+(\bar{\mu}-\underline{\mu})^2\|K\|^2F^2+2\bar{\mu}^2\|W\|^2F^2+2\bar{\mu}^2\|W\|^2F^2\bar{\bar{\lambda}}\big)+2\sigma^2\Big]\frac{\alpha}{\nu}\exp\Big\{2\Delta\Big[24\Delta\big(\bar{\mu}^2\|B\|^2+\bar{\mu}^2\|K\|^2F^2+3\bar{\mu}^2\|W\|^2F^2+(\bar{\mu}-\underline{\mu})^2\|K\|^2F^2+2\bar{\mu}^2\|W\|^2F^2\bar{\bar{\lambda}}\big)+2\sigma^2\Big]\Big\}=1, \qquad (5.7)$$

where $\bar{\bar{\lambda}}=3\Big(1-9\check{w}^2\bar{\mu}^2F^2\|W\|^2-9\check{w}\big(3\check{w}\bar{\mu}^2\|B\|^2+3\check{w}\bar{\mu}^2\|K\|^2F^2+\sigma^2\big)\big(1+3\check{w}^2\bar{\mu}^2\|W\|^2F^2\big)\exp\big\{3\check{w}\big(3\check{w}\bar{\mu}^2\|B\|^2+3\check{w}\bar{\mu}^2F^2\|K\|^2+\sigma^2\big)\big\}\Big)^{-1}$, and $\Delta>\ln(\alpha)/\nu>0$.

Proof. For convenience, we write $h(t;t_0,h_0)\equiv h(t)$ and $l(t;t_0,l_0)\equiv l(t)$.

Combining (5.1), (5.2), and Lemma 2, for $t\ge t_0>0$,

    E||h(t)l(t)||22Eni=1|tt0{(ai(li(s))bi(li(s))ai(hi(s))bi(hi(s)))+nj=1kij(ai(hi(s))fj(hj(s))ai(li(s))fj(lj(s)))+nj=1wij(ai(hi(s))fj(hj(s))ai(li(s))fj(lj(ϱ(s))))}ds|2+2Eni=1|tt0σili(s)dω(s)|22Eni=1{tt0[ˉμBi|li(s)hi(s)|+(ˉμμ_)Bi|hi(s)|+nj=1kijˉμFj|hj(s)lj(s)|+nj=1kij(ˉμμ_)Fj|lj(s)|+nj=1wijˉμFj|hj(s)lj(s)|+nj=1wij(ˉμFj|lj(s)|μ_Fj|lj(ϱ(s))|)]ds}2+2Eni=1tt0|σili(s)|2ds12(tt0)[ˉμ2||B||2tt0E||h(s)l(s)||2ds+(ˉμμ_)2||B||2tt0E||h(s)||2ds+ˉμ2||K||2F2tt0E||h(s)l(s)||2ds+(ˉμμ_)2||K||2F2tt0E||l(s)||2ds+ˉμ2||W||2F2tt0E||h(s)l(s)||2ds+2ˉμ||W||2F2×tt0E||l(s)||2ds+2ˉμ2ˉλ||W||2F2tt0E||l(s)||2ds]+4σ2tt0E||h(s)l(s)||2ds+4σ2tt0E||h(s)||2ds[12(tt0)(ˉμ2||B||2+ˉμ2||K||2F2+ˉμ2||W||2F2)+4σ2+12(tt0)[(ˉμμ_)2||K||2F2+2ˉμ2||W||2F2+2ˉμ2ˉλ||W||2F2]]tt0E||h(s)l(s)||2ds+[12(tt0)[(ˉμμ_)2||K||2F2+2ˉμ2||W||2F2+2ˉμ2ˉλ||W||2F2]+12(tt0)(ˉμμ_)2||B||2+4σ2]tt0E||h(s)||2ds[12(tt0)(ˉμ2||B||2+ˉμ2||K||2F2+3ˉμ2||W||2F2+(ˉμμ_)2||K||2F2+2ˉμ2ˉλ||W||2F2)+4σ2]×tt0E||h(s)l(s)||2ds+[12(tt0)((ˉμμ_)2||B||2+(ˉμμ_)2||K||2F2+2ˉμ2||W||2F2+2ˉμ2ˉλ||W||2F2)+4σ2]α/ν||l0||2(tt0). (5.8)

By using the Gronwall–Bellman lemma, for $t_0+\theta\le t\le t_0+2\Delta$, from (5.8) we get

    E||h(t)l(t)||22Δ[24Δ((ˉμμ_)2||B||2+(ˉμμ_)2||K||2F2+2ˉμ2||W||2F2+2ˉμ2ˉλ||W||2F2)+4σ2]α/ν||l0||2exp{2Δ[24Δ((ˉμ2||B||2+ˉμ2||K||2F2+3ˉμ2||W||2F2+(ˉμμ_)2||K||2F2+2ˉμ2ˉλ||W||2F2))+4σ2]}.

Therefore, for $t_0+\theta\le t\le t_0+2\Delta$,

    E||l(t)||22α||l0||2exp{ν(tt0)}+4Δ[24Δ((ˉμμ_)2||B||2+(ˉμμ_)2||K||2F2+2ˉμ2||W||2F2+2ˉμ2ˉλ||W||2F2)+4σ2]α/ν||l0||2×exp{2Δ[24Δ((ˉμ2||B||2+ˉμ2||K||2F2+3ˉμ2||W||2F2+(ˉμμ_)2||K||2F2+2ˉμ2ˉλ||W||2F2))+4σ2]}

and thus, for $t_0-\theta+\Delta\le t\le t_0-\theta+2\Delta$,

    E||l(t)||22α||l0||2exp{ν(Δθ)}+4Δ[24Δ((ˉμμ_)2||B||2+(ˉμμ_)2||K||2F2+2ˉμ2||W||2F2+2ˉμ2ˉλ||W||2F2)+4σ2]α/ν||l0||2×exp{2Δ[24Δ((ˉμ2||B||2+ˉμ2||K||2F2+3ˉμ2||W||2F2+(ˉμμ_)2||K||2F2+2ˉμ2ˉλ||W||2F2))+4σ2]}.

    Let M(σ,θ)=2αexp{ν(Δθ)}+4Δ[24Δ((ˉμμ_)2||B||2+(ˉμμ_)2||K||2F2+2ˉμ2||W||2F2+2ˉμ2ˉλ||W||2F2)+4σ2]α/ν×exp{2Δ[24Δ((ˉμ2||B||2+ˉμ2||K||2F2+3ˉμ2||W||2F2+(ˉμμ_)2||K||2F2+2ˉμ2ˉλ||W||2F2))+4σ2]}. From A9, we can get M(0,0)<1. Moreover, it is easy to know that M(σ,0) is strictly increasing for σ. So, there exists a positive constant ˉσ such that M(ˉσ,0)=1. M(σ,θ) is obviously strictly increasing for θ, so there exists a positive constant ˉθ, and when θ<min{Δ2,ˉθ}, |σ|<ˉσ2 such that M(σ,θ)<1.

    Letting

    δ=ln{2α||l0||2exp{ν(Δθ)}+4Δ[24Δ((ˉμμ_)2||B||2+(ˉμμ_)2||K||2F2+2ˉμ2||W||2F2+2ˉμ2ˉλ||W||2F2)+4σ2]α/ν×exp{2Δ[24Δ((ˉμ2||B||2+ˉμ2||K||2F2+3ˉμ2||W||2F2+(ˉμμ_)2||K||2F2+2ˉμ2ˉλ||W||2F2))+4σ2]}}/Δ

    we obtain

$$\mathbb{E}\|l(t)\|^2\le\exp\{-\delta\Delta\}\|l_0\|^2. \qquad (5.9)$$

From the uniqueness of the solution of SPCGNN (5.1), for any positive integer $\varepsilon$, we have

$$l(t;t_0,l_0)=l\big(t;\,t_0+(\varepsilon-1)\Delta,\,l(t_0+(\varepsilon-1)\Delta;t_0,l_0)\big). \qquad (5.10)$$

Then, from (5.9) and (5.10), for $t\ge t_0-\theta+\varepsilon\Delta$,

$$\mathbb{E}\|l(t;t_0,l_0)\|^2\le\exp\{-\delta\Delta\}\big\|l\big(t_0+(\varepsilon-1)\Delta;t_0,l_0\big)\big\|^2\le\cdots\le\exp\{-\varepsilon\delta\Delta\}\|l_0\|^2. \qquad (5.11)$$

Thus, for any $t>t_0-\theta+\Delta$, there is an integer $\varepsilon>0$ such that $t_0-\theta+(\varepsilon-1)\Delta\le t\le t_0-\theta+\varepsilon\Delta$, and

$$\mathbb{E}\|l(t;t_0,l_0)\|^2\le\exp\{-\delta(t-t_0)\}\exp\{\delta(\Delta-\theta)\}\|l_0\|^2. \qquad (5.12)$$

Clearly, (5.12) also holds for $t_0\le t\le t_0-\theta+\Delta$. So, system (5.1) is MSGES and also ASGES.

Remark 3. In Section 5, the system under discussion is hybrid, and the robustness of CGNNs subject to both PCA and SDs is considered. Based on the GES of CGNN (5.2), the perturbed SPCGNN (5.1) remains MSGES and ASGES when the noise intensity and the interval length of the PCA are both below the obtained upper bounds.
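In practice, Theorem 3 is applied in two stages: first solve (5.6) for $\bar{\sigma}$, then fix a noise level $|\sigma|<\bar{\sigma}/\sqrt{2}$ and solve (5.7) for $\bar{\theta}$. The sketch below only illustrates this two-stage root finding on stand-in monotone functions M1 and M2 with arbitrary placeholder coefficients; the genuine left-hand sides of (5.6) and (5.7) would be substituted in their place.

```python
import math
from scipy.optimize import brentq

# Stand-ins for the left-hand sides of (5.6) and (5.7): both are continuous and
# strictly increasing in their argument, as in the proof of Theorem 3.
def M1(w):                 # plays the role of (5.6) as a function of sigma
    return 2 * 1.2 * math.exp(-1.5) + 0.8 * (0.05 + 4 * w**2) * math.exp(0.1 + 4 * w**2)

def M2(x, sigma):          # plays the role of (5.7) as a function of theta, sigma fixed
    return 2 * 1.2 * math.exp(-3 * (0.5 - x)) + 0.4 * (0.2 + 2 * sigma**2) * math.exp(0.2 + 2 * sigma**2)

sigma_bar = brentq(lambda w: M1(w) - 1.0, 0.0, 5.0)          # stage 1: solve for sigma_bar
sigma = 0.9 * sigma_bar / math.sqrt(2)                       # any |sigma| < sigma_bar / sqrt(2)
theta_bar = brentq(lambda x: M2(x, sigma) - 1.0, 0.0, 0.5)   # stage 2: solve for theta_bar
print(sigma_bar, theta_bar)
```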

Remark 4. We discuss the robustness of CGNNs with PCA and SDs in Theorem 3. The research topics of [8,9] concern the robustness of RNNs, whereas the system studied in this paper is more general: when the amplification function $a_i(l_i(t))\equiv 1$, the CGNN reduces to an RNN or HNN, so the results of this paper cover those cases. Similarly, we characterize the robustness of CGNNs through the upper bounds of the perturbation factors obtained by solving transcendental equations. The difficulty caused by the amplification functions $a_i(l_i(t))$ is handled by the boundedness hypothesis, which replaces them with their upper and lower bounds, and the desired results are obtained.

The following section presents several numerical examples to substantiate the obtained findings.

    Example 1. Consider a two-dimensional SCGNN

$$\begin{aligned}
dl_1(t)&=\big(1.5+0.5\sin(2l_1(t))\big)\big[-0.005l_1(t)+0.004f(l_1(t))+0.002f(l_2(t))\big]dt+\sigma l_1(t)\,d\omega(t),\\
dl_2(t)&=\big(1.5+0.5\cos(4l_2(t))\big)\big[-0.005l_2(t)+0.006f(l_1(t))+0.003f(l_2(t))\big]dt+\sigma l_2(t)\,d\omega(t),
\end{aligned} \qquad (6.1)$$

    where f(l)=tanh(l), σ is the noise intensity, and ω(t) is a scalar Brownian motion defined in the probability space.

    Consider the model of a CGNN without SDs:

$$\begin{aligned}
\frac{dl_1(t)}{dt}&=\big(1.5+0.5\sin(2l_1(t))\big)\big[-0.005l_1(t)+0.004f(l_1(t))+0.002f(l_2(t))\big],\\
\frac{dl_2(t)}{dt}&=\big(1.5+0.5\cos(4l_2(t))\big)\big[-0.005l_2(t)+0.006f(l_1(t))+0.003f(l_2(t))\big].
\end{aligned} \qquad (6.2)$$

It is easy to find that CGNN (6.2) is GES with $\alpha=0.8$ and $\nu=0.5$. Let $\Delta=0.3>\ln(2\alpha^2)/(2\nu)=0.2468$, $\bar{\mu}=2$, $\underline{\mu}=1$, and $F=1$. Substituting them into (3.2), we get

$$1.28\big[0.00144+4\sigma^2\big]\exp\big\{0.00288+2.4\sigma^2\big\}+1.28\exp\{-0.3\}=1.$$

By solving the transcendental equation, we obtain $\bar{\sigma}=0.0974$. From Theorem 1, when $|\sigma|<\bar{\sigma}$, SCGNN (6.1) is MSGES and also ASGES. The behavior of SCGNN (6.1) is shown in Figure 1.
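As a quick check on this value, the displayed transcendental equation can be handed to a scalar root finder (SciPy assumed):

```python
import math
from scipy.optimize import brentq

# Left-hand side of the transcendental equation of Example 1, minus 1
def g(sigma):
    return (1.28 * (0.00144 + 4 * sigma**2) * math.exp(0.00288 + 2.4 * sigma**2)
            + 1.28 * math.exp(-0.3) - 1.0)

sigma_bar = brentq(g, 0.0, 1.0)
print(sigma_bar)   # approximately 0.097
```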

Figure 1.  The states of (6.1) with $\sigma=0.08$.
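Trajectories like those in Figure 1 can be generated with a simple Euler–Maruyama discretization of (6.1). In the sketch below, the decay terms carry the leading minus sign used in the reconstructed display above, and the step size, horizon, and initial state are illustrative choices rather than values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.tanh
sigma, dt, T = 0.08, 1e-3, 20.0
l = np.array([1.0, -0.5])                   # illustrative initial state
steps = int(T / dt)
path = np.empty((steps + 1, 2))
path[0] = l

for n in range(steps):
    a = np.array([1.5 + 0.5 * np.sin(2 * l[0]),
                  1.5 + 0.5 * np.cos(4 * l[1])])            # amplification functions
    drift = a * np.array([-0.005 * l[0] + 0.004 * f(l[0]) + 0.002 * f(l[1]),
                          -0.005 * l[1] + 0.006 * f(l[0]) + 0.003 * f(l[1])])
    dW = rng.normal(0.0, np.sqrt(dt))                       # scalar Brownian increment
    l = l + drift * dt + sigma * l * dW                     # Euler-Maruyama step
    path[n + 1] = l

print(path[-1])   # state near the end of the run
```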

    Example 2. Consider a two-dimensional CGNN with PCA

$$\begin{aligned}
\dot{l}_1(t)&=\big(1.5+0.5\sin(2l_1(t))\big)\big[-0.005l_1(t)+0.005f(l_1(t))+0.003f(l_2(t))+0.004f(l_1(\varrho(t)))+0.005f(l_2(\varrho(t)))\big],\\
\dot{l}_2(t)&=\big(1.5+0.5\cos(4l_2(t))\big)\big[-0.005l_2(t)+0.005f(l_1(t))+0.005f(l_2(t))+0.016f(l_1(\varrho(t)))+0.009f(l_2(\varrho(t)))\big],
\end{aligned} \qquad (6.3)$$

where $\theta_k=k/16$, $\vartheta_k=(2k+1)/32$, $\varrho(t)=\vartheta_k$ if $t\in[\theta_k,\theta_{k+1})$, $k\in\mathbb{N}$, and $f(l)=\tanh(l)$.

    Consider the CGNN without PCA as follows:

$$\begin{aligned}
\dot{l}_1(t)&=\big(1.5+0.5\sin(2l_1(t))\big)\big[-0.005l_1(t)+0.005f(l_1(t))+0.003f(l_2(t))+0.004f(l_1(t))+0.005f(l_2(t))\big],\\
\dot{l}_2(t)&=\big(1.5+0.5\cos(4l_2(t))\big)\big[-0.005l_2(t)+0.005f(l_1(t))+0.005f(l_2(t))+0.016f(l_1(t))+0.009f(l_2(t))\big].
\end{aligned} \qquad (6.4)$$

It is easy to know that CGNN (6.4) is GES with $\alpha=1.2$ and $\nu=0.9$. Let $\Delta=0.3>\ln(\alpha)/\nu=0.2025$, $\bar{\mu}=2$, $\underline{\mu}=1$, and $F=1$. Substituting them into (4.4) and (4.5), we get

    1.2exp{0.9(0.3ˆx)}+[0.06+0.04(1ˆx(0.04+0.04(1+0.04ˆx)×exp(0.04ˆx)))]×1.2/0.9×exp{0.078+0.024[1ˆx(0.04+0.04(1+0.04ˆx)×exp(0.04ˆx))]}=1,
$$\check{x}\big[0.04+0.04(1+0.04\check{x})\exp(0.04\check{x})\big]=1.$$

By solving the equations above, we obtain $\bar{\theta}=0.0806$ and $\bar{\bar{\theta}}=8.6238$. Thus, in line with Theorem 2, when $\theta<\min\{\Delta/2,\bar{\theta},\bar{\bar{\theta}}\}$, CGNN (6.3) is GES. Figure 2 depicts the transient states of (6.3).
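The second of the two equations is easy to check numerically with a scalar root finder (SciPy assumed); it returns a value close to the stated $\bar{\bar{\theta}}$. The first equation can be treated the same way once its full left-hand side is coded.

```python
import math
from scipy.optimize import brentq

# Left-hand side of the second equation of Example 2, minus 1
def r(x):
    return x * (0.04 + 0.04 * (1 + 0.04 * x) * math.exp(0.04 * x)) - 1.0

theta_bar_bar = brentq(r, 0.0, 100.0)
print(theta_bar_bar)   # approximately 8.62
```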

Figure 2.  The states of (6.3) with $\theta_k=k/16$.

    Example 3. Consider a one-dimensional CGNN

$$\dot{h}(t)=\big(1.5+0.5\sin(2h(t))\big)\big[-0.01h(t)+0.03f(h(t))\big], \qquad (6.5)$$

where $f(l)=\tanh(l)$. We can readily determine that CGNN (6.5) is GES with $\alpha=1.2$ and $\nu=3$ by many of the current criteria.

    The corresponding disturbed SPCGNN is given by

$$dl(t)=\big(1.5+0.5\sin(2l(t))\big)\big[-0.01l(t)+0.01f(l(t))+0.02f\big(l(\varrho(t))\big)\big]dt+\sigma l(t)\,d\omega(t), \qquad (6.6)$$

where $f(l)=\tanh(l)$. When the parameters are put into (5.6), the result is

$$2.4\exp\{-1.5\}+0.8\big(0.1944+4\hat{w}^2\big)\exp\big\{0.222+4\hat{w}^2\big\}=1.$$

Thus, we have $\bar{\sigma}=0.218$. Note that $|\sigma|<\bar{\sigma}/\sqrt{2}$, which means that $|\sigma|<0.1541$.
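As a check, handing the displayed $\sigma$-equation to a scalar root finder (SciPy assumed) reproduces the stated bound:

```python
import math
from scipy.optimize import brentq

# Left-hand side of the sigma-equation of Example 3, minus 1
def m(w):
    return (2.4 * math.exp(-1.5)
            + 0.8 * (0.1944 + 4 * w**2) * math.exp(0.222 + 4 * w**2) - 1.0)

sigma_bar = brentq(m, 0.0, 2.0)
print(sigma_bar, sigma_bar / math.sqrt(2))   # approximately 0.218 and 0.154
```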

Combined with $\bar{\sigma}=0.218$, substituting the other parameters into (5.7), we get

    2.4exp{3(0.5ˇw)}+[0.10937+0.09216[10.0144ˇw29ˇw(0.0024ˇw+0.0757)(1+0.0048ˇw2)×exp{3ˇw(0.0024ˇw+0.0757)}]]×exp{0.1643+0.1152(10.0144ˇw29ˇw(0.024ˇw+0.0757)(1+0.0048ˇw2exp{3ˇw(0.0024ˇw+0.0757)}))}=1.

Hence, it can be derived that $\bar{\theta}=0.1166$. Note that $\theta<\min(\Delta/2,\bar{\theta})$, and therefore $\theta<0.1166$. Figure 3 depicts the transient states of (6.6) with $\sigma=0.1$ and $\theta_k=k/20$. This shows that the state of (6.6) is MSGES and also ASGES.

Figure 3.  The states of (6.6) with $\sigma=0.1$ and $\theta_k=k/20$.

Figure 4 displays the behavior of SPCGNN (6.6) with $\sigma=0.8>\bar{\sigma}/\sqrt{2}$ and $\theta_k=k/2$. The unstable states of SPCGNN (6.6) are depicted in Figure 5 with $\sigma=0.8$ and $\theta_k=k/20$. Figure 6 illustrates that system (6.6) is unstable with $\sigma=0.1$ and $\theta_k=k/2$. Consequently, the CGNN can become unstable if the conditions of the theorems are not fulfilled.

Figure 4.  The states of (6.6) with $\sigma=0.8$ and $\theta_k=k/2$.
Figure 5.  The states of (6.6) with $\sigma=0.8$ and $\theta_k=k/20$.
Figure 6.  The states of (6.6) with $\sigma=0.1$ and $\theta_k=k/2$.

Example 4. When the amplification function $a_i(l_i(t))\equiv 1$, the CGNN reduces to an RNN or HNN, and then SCGNN (3.1) becomes

$$\begin{aligned}
dl_1(t)&=\big[-l_1(t)-2f(l_1(t))+2f(l_2(t))\big]dt+\sigma l_1(t)\,d\omega(t),\\
dl_2(t)&=\big[-l_2(t)+2f(l_1(t))-2f(l_2(t))\big]dt+\sigma l_2(t)\,d\omega(t),
\end{aligned} \qquad (6.7)$$

    where f(l)=tanh(l), σ is the noise intensity, and ω(t) is a scalar Brownian motion.

Consider model (6.7) without SDs:

$$\begin{aligned}
\frac{dl_1(t)}{dt}&=-l_1(t)-2f(l_1(t))+2f(l_2(t)),\\
\frac{dl_2(t)}{dt}&=-l_2(t)+2f(l_1(t))-2f(l_2(t)).
\end{aligned} \qquad (6.8)$$

It is easy to find that system (6.8) is GES with $\alpha=0.8$ and $\nu=0.5$. Let $\Delta=0.3$, $\bar{\mu}=\underline{\mu}=1$, and $F=1$. Substituting them into (3.2), we get

$$5.12\sigma^2\exp\big\{48.96+2.4\sigma^2\big\}+1.28\exp\{-0.3\}=1.$$

By solving the transcendental equation, we obtain $\bar{\sigma}=2.3488\times 10^{-12}$. According to Theorem 1, when $|\sigma|<\bar{\sigma}$, SCGNN (6.7) is MSGES and also ASGES.
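The extremely small bound reflects the large exponential factor $\exp\{48.96\}$ in the equation; solving it numerically confirms the order of magnitude (a tight tolerance is needed because the root itself is tiny):

```python
import math
from scipy.optimize import brentq

# Left-hand side of the transcendental equation of Example 4, minus 1
def g(sigma):
    return (5.12 * sigma**2 * math.exp(48.96 + 2.4 * sigma**2)
            + 1.28 * math.exp(-0.3) - 1.0)

sigma_bar = brentq(g, 0.0, 1e-6, xtol=1e-15)
print(sigma_bar)   # on the order of 2.3e-12
```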

Remark 5. In Theorem 1 of [8], the authors studied the robustness of RNNs with SDs. In Example 4, when the amplification function $a_i(l_i(t))\equiv 1$, the SCGNN reduces to an RNN or HNN, and SCGNN (3.1) becomes similar to the system of [8]. This shows that the results of this paper are more general.

Examples 5 and 6 are variations of Examples 2 and 3 with the amplification function $a_i(l_i(t))\equiv 1$ in the CGNN.

    Example 5. Consider the following model with PCA

$$\begin{aligned}
\dot{l}_1(t)&=-0.005l_1(t)+0.004f(l_1(t))+0.006f(l_2(t))+0.009f(l_1(\varrho(t)))+0.008f(l_2(\varrho(t))),\\
\dot{l}_2(t)&=-0.005l_2(t)+0.002f(l_1(t))+0.012f(l_2(t))+0.005f(l_1(\varrho(t)))+0.004f(l_2(\varrho(t))),
\end{aligned} \qquad (6.9)$$

where $\varrho(t)=\vartheta_k$ if $t\in[\theta_k,\theta_{k+1})$, $k\in\mathbb{N}$, $\theta_k=k/16$, $\vartheta_k=(2k+1)/32$, and $f(l)=\tanh(l)$.

Consider model (6.9) without PCA as follows:

$$\begin{aligned}
\dot{l}_1(t)&=-0.005l_1(t)+0.004f(l_1(t))+0.006f(l_2(t))+0.009f(l_1(t))+0.008f(l_2(t)),\\
\dot{l}_2(t)&=-0.005l_2(t)+0.002f(l_1(t))+0.012f(l_2(t))+0.005f(l_1(t))+0.004f(l_2(t)).
\end{aligned} \qquad (6.10)$$

It is known that (6.10) is GES with $\alpha=1.2$ and $\nu=0.9$. Let $\Delta=0.3$, $\bar{\mu}=\underline{\mu}=1$, and $F=1$. Substituting them into (4.4) and (4.5), we get

    1.2exp{0.9(0.3ˆx)}+[0.02+0.02(1ˆx(0.04+0.04(1+0.04ˆx)×exp(0.04ˆx)))]×1.2/0.9×exp{0.036+0.012[1ˆx(0.04+0.04(1+0.04ˆx)×exp(0.04ˆx))]}=1,
$$\check{x}\big[0.04+0.04(1+0.04\check{x})\exp(0.04\check{x})\big]=1.$$

By solving the equations above, we obtain $\bar{\theta}=0.0335$ and $\bar{\bar{\theta}}=8.6238$. From Theorem 2, when $\theta<\min\{\Delta/2,\bar{\theta},\bar{\bar{\theta}}\}$, system (6.9) is GES.

    Example 6. Consider the system

$$\dot{h}(t)=-0.01h(t)+0.03f(h(t)), \qquad (6.11)$$

where $f(l)=\tanh(l)$. We can readily determine that (6.11) is GES with $\alpha=1.2$ and $\nu=3$ by many of the current criteria.

    The corresponding disturbed system is given by

$$dl(t)=\big[-0.01l(t)+0.01f(l(t))+0.02f\big(l(\varrho(t))\big)\big]dt+\sigma l(t)\,d\omega(t), \qquad (6.12)$$

where $f(l)=\tanh(l)$. When the parameters are put into (5.6), we have

$$2.4\exp\{-1.5\}+0.8\big(0.048+4\hat{w}^2\big)\exp\big\{0.0552+4\hat{w}^2\big\}=1.$$

Thus, we have $\bar{\sigma}=0.2925$. Note that $|\sigma|<\bar{\sigma}/\sqrt{2}$, which means that $|\sigma|<0.2068$.

Combined with $\bar{\sigma}=0.2925$, substituting the other parameters into (5.7), we get

    2.4exp{3(0.5ˇw)}+[0.14457+0.002304[10.0144ˇw29ˇw(0.0024ˇw+0.0757)(1+0.0048ˇw2)×exp{3ˇw(0.0024ˇw+0.0757)}]]×exp{0.1879+0.0288(10.0144ˇw29ˇw(0.024ˇw+0.0757)(1+0.0048ˇw2exp{3ˇw(0.0024ˇw+0.0757)}))}=1.

Hence, it can be derived that $\bar{\theta}=0.1318$. Note that $\theta<\min(\Delta/2,\bar{\theta})$, and therefore $\theta<0.1318$. From Theorem 3, (6.12) is MSGES and also ASGES.

    Examples 7 and 8 are four-dimensional numerical examples of Theorems 1 and 2.

    Example 7. Consider the four-dimensional SCGNN

    dl1(t)=(1.5+0.5sin(4l1(t)))[0.02l1(t)+0.01f(l1(t))+0.003f(l2(t))+0.001f(l3(t))+0.005f(l4(t))]dt+σl1(t)dω(t),dl2(t)=(1.5+0.5cos(6l2(t)))[0.01l2(t)+0.012f(l1(t))+0.02f(l2(t))+0.018f(l3(t))+0.024f(l4(t))]dt+σl2(t)dω(t),dl3(t)=(1.5+0.5sin(5l3(t)))[0.01l3(t)+0.003f(l1(t))+0.012f(l2(t))+0.007f(l3(t))+0.002f(l4(t))]dt+σl3(t)dω(t),dl4(t)=(1.5+0.5cos(2l4(t)))[0.02l4(t)+0.015f(l1(t))+0.014f(l2(t))+0.004f(l3(t))+0.001f(l4(t))]dt+σl4(t)dω(t) (6.13)

    where f(l)=tanh(l), σ is the noise intensity, and ω(t) is a scalar Brownian motion.

    Consider the model of a CGNN without SDs:

    dl1(t)dt=(1.5+0.5sin(4l1(t)))[0.02l1(t)+0.01f(l1(t))+0.003f(l2(t))+0.001f(l3(t))+0.005f(l4(t))],dl2(t)dt=(1.5+0.5cos(6l2(t)))[0.01l2(t)+0.012f(l1(t))+0.02f(l2(t))+0.018f(l3(t))+0.024f(l4(t))],dl3(t)dt=(1.5+0.5sin(5l3(t)))[0.01l3(t)+0.003f(l1(t))+0.012f(l2(t))+0.007f(l3(t))+0.002f(l4(t))],dl4(t)dt=(1.5+0.5cos(2l4(t)))[0.02l4(t)+0.015f(l1(t))+0.014f(l2(t))+0.004f(l3(t))+0.001f(l4(t))]. (6.14)

It is easy to find that CGNN (6.14) is GES with $\alpha=0.8$ and $\nu=0.5$. Let $\Delta=0.3$, $\bar{\mu}=2$, $\underline{\mu}=1$, and $F=1$. Substituting them into (3.2), we get

$$1.28\exp\{-0.3\}+1.28\big[0.03264+4\sigma^2\big]\exp\big\{0.06912+2.4\sigma^2\big\}=1.$$

By solving the transcendental equation, we obtain $\bar{\sigma}=0.0353$. From Theorem 1, when $|\sigma|<\bar{\sigma}$, SCGNN (6.13) is MSGES and also ASGES. The behavior of SCGNN (6.13) is shown in Figure 7.
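As in Example 1, the stated equation can be checked with a scalar root finder (SciPy assumed):

```python
import math
from scipy.optimize import brentq

# Left-hand side of the transcendental equation of Example 7, minus 1
def g(sigma):
    return (1.28 * math.exp(-0.3)
            + 1.28 * (0.03264 + 4 * sigma**2) * math.exp(0.06912 + 2.4 * sigma**2) - 1.0)

print(brentq(g, 0.0, 1.0))   # approximately 0.035
```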

Figure 7.  The states of SCGNN (6.13) with $\sigma=0.02$.

    Example 8. Consider the four-dimensional CGNN with PCA

    ˙l1(t)=(1.5+0.5sin(3l1(t)))[0.03l1(t)+0.022f(l1(t))+0.012f(l2(t))+0.008f(l3(t))+0.02f(l4(t))+0.003f(l1(ϱ(t)))+0.002f(l2(ϱ(t)))+0.01f(l3(ϱ(t)))+0.003f(l4(ϱ(t)))],˙l2(t)=(1.5+0.5cos(2l2(t)))[0.02l2(t)+0.01f(l1(t))+0.006f(l2(t))+0.003f(l3(t))+0.016f(l4(t))+0.005f(l1(ϱ(t)))+0.018f(l2(ϱ(t)))+0.007f(l3(ϱ(t)))+0.014f(l4(ϱ(t)))],˙l3(t)=(1.5+0.5sin(4l3(t)))[0.01l3(t)+0.081f(l1(t))+0.012f(l2(t))+0.015f(l3(t))+0.004f(l4(t))+0.014f(l1(ϱ(t)))+0.015f(l2(ϱ(t)))+0.015f(l3(ϱ(t)))+0.02f(l4(ϱ(t)))],˙l4(t)=(1.5+0.5sin(3l4(t)))[0.02l4(t)+0.02f(l1(t))+0.01f(l2(t))+0.015f(l3(t))+0.006f(l4(t))+0.016f(l1(ϱ(t)))+0.012f(l2(ϱ(t)))+0.018f(l3(ϱ(t)))+0.003f(l4(ϱ(t)))] (6.15)

where $\theta_k=k/16$, $\vartheta_k=(2k+1)/32$, $\varrho(t)=\vartheta_k$ if $t\in[\theta_k,\theta_{k+1})$, $k\in\mathbb{N}$, and $f(l)=\tanh(l)$.

    Consider the CGNN without PCA as follows:

    ˙l1(t)=(1.5+0.5sin(3l1(t)))[0.03l1(t)+0.022f(l1(t))+0.012f(l2(t))+0.008f(l3(t))+0.02f(l4(t))]+0.003f(l1(t))+0.002f(l2(t))+0.01f(l3(t))+0.003f(l4(t))],˙l2(t)=(1.5+0.5cos(2l2(t)))[0.02l2(t)+0.01f(l1(t))+0.006f(l2(t))+0.003f(l3(t))+0.016f(l4(t))]+0.005f(l1(t))+0.018f(l2(t))+0.007f(l3(t))+0.014f(l4(t))],˙l3(t)=(1.5+0.5sin(4l3(t)))[0.01l3(t)+0.008f(l1(t))+0.012f(l2(t))+0.015f(l3(t))+0.004f(l4(t))]+0.014f(l1(t))+0.015f(l2(t))+0.015f(l3(t))+0.02f(l4(t))],˙l4(t)=(1.5+0.5sin(3l4(t)))[0.02l4(t)+0.02f(l1(t))+0.01f(l2(t))+0.015f(l3(t))+0.006f(l4(t))]+0.016f(l1(t))+0.012f(l2(t))+0.018f(l3(t))+0.003f(l4(t))]. (6.16)

It is easy to know that CGNN (6.16) is GES with $\alpha=1.2$ and $\nu=0.9$. Let $\Delta=0.3$, $\bar{\mu}=2$, $\underline{\mu}=1$, and $F=1$. Substituting them into (4.4) and (4.5), we get

    1.2exp{0.9(0.3ˆx)}+[0.24+0.1(1ˆx(0.04+0.04(1+0.04ˆx)×exp(0.04ˆx)))]×1.2/0.9×exp{0.324+0.06[1ˆx(0.04+0.04(1+0.04ˆx)×exp(0.04ˆx))]}=1,
$$\check{x}\big[0.1+0.28(1+0.1\check{x})\exp(0.28\check{x})\big]=1.$$

By solving the equations above, we obtain $\bar{\theta}=1.1937$ and $\bar{\bar{\theta}}=1.6291$. Thus, according to Theorem 2, when $\theta<\min\{\Delta/2,\bar{\theta},\bar{\bar{\theta}}\}$, CGNN (6.15) is GES. Figure 8 depicts the transient states of (6.15).

Figure 8.  The states of (6.15) with $\theta_k=k/10$.

This article has addressed the robustness analysis of CGNNs with PCA and SDs. For an originally stable CGNN, we discussed how much PCA and noise intensity the CGNN can withstand while remaining globally exponentially stable. Applying inequality techniques and stochastic analysis theory, we obtained, by solving transcendental equations, the upper bounds of the PCA interval length and the noise intensity that the CGNN can withstand without losing stability. This provides a theoretical basis for the design and application of CGNNs. Future work can explore the influence of delays, parameter uncertainty, or other factors on the robustness of CGNNs, and the finite-time stability of CGNNs with PCA and SDs can also be considered to obtain richer results.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.


    The authors declare no conflicts of interest.



    [1] A. Krizhevsky, I. Sutskever, G. Hinton, Imagenet classification with deep convolutional neural networks, Adv. Neural Inform. Pro. Syst., 25 (2012).
    [2] E. Shelhamer, J. Long, T. Darrell, Fully convolutional networks for semantic segmentation, IEEE T. Pattern Anal., 2016, 3431–3440.
    [3] S. Levine, C. Finn, T. Darrell, P. Abbeel, End-to-end training of deep visuomotor policies, J. Mach. Learn. Res., 17 (2015), 1334–1373.
    [4] E. Domany, J. Hemmen, K. Schulten, Models of neural networks: Temporal aspects of coding and information processing in biological systems, Heidelberg: Springer, 1994. https://doi.org/10.1007/978-1-4612-4320-5
    [5] S. Arik, Global robust stability analysis of neural networks with discrete time delays, Chaos Soliton. Fract., 26 (2005), 1407–1414. https://doi.org/10.1016/j.chaos.2005.03.025 doi: 10.1016/j.chaos.2005.03.025
    [6] Y. Zhao, H. Gao, S. Mou, Asymptotic stability analysis of neural networks with successive time delay components, Neurocomputing, 71 (2008), 2848–2856. https://doi.org/10.1016/j.neucom.2007.08.015 doi: 10.1016/j.neucom.2007.08.015
    [7] Z. G. Zeng, C. J. Fu, X. X. Liao, Stability analysis of neural networks with infinite time-varying delay, J. Math., 22 (2022), 391–396.
    [8] Y. Shen, J. Wang, Robustness analysis of global exponential stability of recurrent neural networks in the presence of time delays and random disturbances, IEEE T. Neural Net. Lear., 23 (2012), 87–96. https://doi.org/10.1109/TNNLS.2011.2178326 doi: 10.1109/TNNLS.2011.2178326
    [9] S. Zhu, Y. Shen, Robustness analysis for connection weight matrices of global exponential stable time varying delayed recurrent neural networks, Neurocomputing, 4 (2013), 220–226. https://doi.org/10.1016/j.neucom.2013.01.006 doi: 10.1016/j.neucom.2013.01.006
    [10] J. E. Zhang, Robustness analysis of global exponential stability of nonlinear systems with deviating argument and stochastic disturbance, IEEE Access, 5 (2017), 13446–13454. https://doi.org/10.1109/ACCESS.2017.2727500 doi: 10.1109/ACCESS.2017.2727500
    [11] H. Zhang, T. Li, S. Fei, Synchronization for an array of coupled cohen-grossberg neural networks with time-varying delay, Math. Probl. Eng., 2011 (2011). https://doi.org/10.1155/2011/831695 doi: 10.1155/2011/831695
    [12] X. Li, H. Wu, J. Cao, Prescribed-time synchronization in networks of piecewise smooth systems via a nonlinear dynamic event-triggered control strategy, Math. Comput. Simulat., 203 (2023), 647–668. https://doi.org/10.1016/j.matcom.2022.07.010 doi: 10.1016/j.matcom.2022.07.010
    [13] X. Li, H. Wu, J. Cao, A new prescribed-time stability theorem for impulsive piecewise-smooth systems and its application to synchronization in networks, Appl. Math. Model., 115 (2023), 385–397. https://doi.org/10.1016/j.apm.2022.10.051 doi: 10.1016/j.apm.2022.10.051
    [14] Q. Gan, R. Xu, X. Kang, Synchronization of chaotic neural networks with mixed time delays, Commun. Nonlinear Sci., 16 (2011), 966–974. https://doi.org/10.1016/j.cnsns.2010.04.036 doi: 10.1016/j.cnsns.2010.04.036
    [15] L. Wang, Y. Zhou, D. Xu, Q. Lai, Fixed-/preassigned-time stability control of chaotic power systems, Int. J. Bifurcat. Chaos, 33 (2023), 2350110. https://doi.org/10.1142/S0218127423501109 doi: 10.1142/S0218127423501109
    [16] X. Hu, L. Wang, C. K. Zhang, Fixed-time stabilization of discontinuous spatiotemporal neural networks with time-varying coefficients via aperiodically switching control, Sci. China Inform. Sci., 66 (2023), 152204. https://doi.org/10.1007/s11432-022-3633-9 doi: 10.1007/s11432-022-3633-9
    [17] M. A. Cohen, S. Grossberg, Absolute stability of global pattern formation and parallel memory storage by competitive neural networks, IEEE Syst. Man Cybern., 13 (1983), 815–826. https://doi.org/10.1109/TSMC.1983.6313075 doi: 10.1109/TSMC.1983.6313075
    [18] W. Lu, T. Chen, New conditions on global stability of cohen-grossberg neural networks, Neural Comput., 15 (2003), 1173. https://doi.org/10.1162/089976603765202703 doi: 10.1162/089976603765202703
    [19] K. Gopalsamy, Global asymptotic stability in a periodic Lotka-Volterra system, J. Aust. Math. Soc. B, 27 (1985), 66–72. https://doi.org/10.1017/S0334270000004768 doi: 10.1017/S0334270000004768
[20] J. J. Hopfield, Neural networks and physical systems with emergent collective computational abilities, P. Natl. A. Sci., 79 (1982), 2554–2558. https://doi.org/10.1073/pnas.79.8.2554 doi: 10.1073/pnas.79.8.2554
    [21] L. O. Chua, L. Yang, Cellular neural networks: Theory, IEEE T. Circuits-I, 35 (1988), 1257–1272. https://doi.org/10.1109/31.7600 doi: 10.1109/31.7600
    [22] L. O. Chua, L. Yang, Cellular neural networks: Application, IEEE T. Circuits-I, 35 (1988), 1273–1290. https://doi.org/10.1109/31.7601 doi: 10.1109/31.7601
    [23] T. Roska, L. O. Chua, Cellular neural networks with nonlinear and delay-type template elements and non-uniform grids, Int. J. Circ. Theor. App., 20 (1992), 469–481. https://doi.org/10.1002/cta.4490200504 doi: 10.1002/cta.4490200504
    [24] L. Wang, X. Zou, Exponential stability of Cohen Grossberg neural networks, Neural Networks, 15 (2002), 415–422. https://doi.org/10.1016/S0893-6080(02)00025-4 doi: 10.1016/S0893-6080(02)00025-4
    [25] S. Arik, Z. Orman, Global stability analysis of cohen-grossberg neural networks with time varying delays, Phys. Lett. A, 341 (2005), 410–421. https://doi.org/10.1016/j.physleta.2005.04.095 doi: 10.1016/j.physleta.2005.04.095
    [26] R. Li, J. Cao, Fixed-time stabilization control of reaction-diffusion Cohen-Grossberg neural networks, 29th Chinese Control And Decision Conference (CCDC), Chongqing, China, 2017, 4328–4333. https://doi.org/10.1109/CCDC.2017.7979259
    [27] H. Xiang, J. Cao, Exponential stability of periodic solution to Cohen-Grossberg-type BAM networks with time-varying delays, Neurocomputing, 72 (2009), 1702–1711. https://doi.org/10.1016/j.neucom.2008.07.006 doi: 10.1016/j.neucom.2008.07.006
[28] Y. Li, X. Chen, L. Zhao, Stability and existence of periodic solutions to delayed Cohen-Grossberg BAM neural networks with impulses on time scales, Neurocomputing, 72 (2009), 1621–1630. https://doi.org/10.1016/j.neucom.2008.08.010 doi: 10.1016/j.neucom.2008.08.010
    [29] J. Yu, C. Hu, H. Jiang, Z. Teng, Exponential synchronization of cohen-grossberg neural networks via periodically intermittent control, Neurocomputing, 74 (2011), 1776–1782. https://doi.org/10.1016/j.neucom.2011.02.015 doi: 10.1016/j.neucom.2011.02.015
    [30] Y. Shi, J. Cao, Finite-time synchronization of memristive Cohen-Grossberg neural networks with time delays, Neurocomputing, 15 (2020), 159–167. https://doi.org/10.1016/j.neucom.2019.10.036 doi: 10.1016/j.neucom.2019.10.036
    [31] Y. Cheng, Y. Shi, The exponential synchronization and asymptotic synchronization of quaternion-valued memristor-based Cohen-Grossberg neural networks with time-varying delays, Neural Process. Lett., 55 (2023), 6637–6656. https://doi.org/10.1007/s11063-023-11152-0 doi: 10.1007/s11063-023-11152-0
    [32] Q. Song, Z. Wang, Stability analysis of impulsive stochastic cohen-grossberg neural networks with mixed time delays, Phys. A, 387 (2008), 3314–3326. https://doi.org/10.1016/j.physa.2008.01.079 doi: 10.1016/j.physa.2008.01.079
    [33] X. Li, X. Fu, Global asymptotic stability of stochastic cohen-grossberg-type bam neural networks with mixed delays: An lmi approach, J. Comput. Appl. Math., 235 (2011), 3385–3394. https://doi.org/10.1016/j.cam.2010.10.035 doi: 10.1016/j.cam.2010.10.035
    [34] Q. Zhu, X. Li, Exponential and almost sure exponential stability of stochastic fuzzy delayed cohen-grossberg neural networks, Fuzzy Set. Syst., 203 (2012), 74–94. https://doi.org/10.1016/j.fss.2012.01.005 doi: 10.1016/j.fss.2012.01.005
[35] K. L. Cooke, J. Wiener, Retarded differential equations with piecewise constant delays, J. Math. Anal. Appl., 99 (1984), 265–297. https://doi.org/10.1016/0022-247X(84)90248-8 doi: 10.1016/0022-247X(84)90248-8
    [36] M. U. Akhmet, On the integral manifolds of the differential equations with piecewise constant argument of generalized type, In R. P. Agarval, & K. Perera (Eds.), Proceedings of the conference on differential and difference equations at the Florida Institute of Technology, Hindawi Publishing Corporation, 2006.
    [37] M. U. Akhmet, On the reduction principle for differential equations with piecewise constant argument of generalized type, J. Math. Anal. Appl., 336 (2007), 646–663. https://doi.org/10.1016/j.jmaa.2007.03.010 doi: 10.1016/j.jmaa.2007.03.010
    [38] M. U. Akhmet, Stability of differential equations with piecewise constant arguments of generalized type, Nonlinear Anal., 68 (2008), 794–803. https://doi.org/10.1016/j.na.2006.11.037 doi: 10.1016/j.na.2006.11.037
    [39] M. U. Akhmet, Almost periodic solutions of differential equations with piecewise constant argument of generalized type, Nonlinear Anal.-Hybri., 2 (2008), 456–467. https://doi.org/10.1016/j.nahs.2006.09.002 doi: 10.1016/j.nahs.2006.09.002
    [40] M. U. Akhmet, D. Arugaslan, E. YiLmaz, Stability in cellular neural networks with a piecewise constant argument, J. Comput. Appl. Math., 233 (2010), 2365–2373. https://doi.org/10.1016/j.cam.2009.10.021 doi: 10.1016/j.cam.2009.10.021
    [41] M. U. Akhmet, D. Aruğaslan, E. Yılmaz, Stability analysis of recurrent neural networks with piecewise constant argument of generalized type, Neural Networks, 7 (2010), 805–811. https://doi.org/10.1016/j.neunet.2010.05.006 doi: 10.1016/j.neunet.2010.05.006
    [42] M. U. Akhmet, E. Yilmaz, Impulsive Hopfield-type neural network system with piecewise constant argument, Nonlinear Anal.-Real, 11 (2010), 2584–2593. https://doi.org/10.1016/j.nonrwa.2009.09.003 doi: 10.1016/j.nonrwa.2009.09.003
    [43] G. Bao, S. Wen, Z. Zeng, Robust stability analysis of interval fuzzy Cohen-Grossberg neural networks with piecewise constant argument of generalized type, Neural Networks, 33 (2012), 32–41. https://doi.org/10.1016/j.neunet.2012.04.003 doi: 10.1016/j.neunet.2012.04.003
    [44] Y. Shen, J. Wang, Robustness of global exponential stability of nonlinear systems With random disturbances and time delays, IEEE T. Syst. Man Cy.-S., 2016, 1157–1166. https://doi.org/10.1109/TSMC.2015.2497208
    [45] C. Wu, J. Hu, Y. Li, Robustness analysis of Hybrid stochastic neural networks with neutral terms and time-varying delays, Discrete Dyn. Nat. Soc., 2015 (2015), 1–12. https://doi.org/10.1155/2015/278571 doi: 10.1155/2015/278571
    [46] C. M. Wu, L. N. Jiang, Robustness analysis of neutral BAMNN with time delays, IEEE International Conference of Safety Produce Informatization, 2018,911–919.
    [47] X. Mao, Stochastic differential equations and applications, 2 Eds, Harwood: Chichester, 2007. https://doi.org/10.1533/9780857099402
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)