
This paper discusses associative memories based on time-varying delayed fractional-order neural networks (DFNNs) with a type of piecewise nonlinear activation function from the perspective of multiple $O(t^{-\alpha})$ stability. Some sufficient conditions are obtained to assure the existence of $5^n$ equilibria for $n$-neuron DFNNs with the proposed piecewise nonlinear activation functions. Additionally, the criteria ensure the existence of at least $3^n$ equilibria that are locally multiple $O(t^{-\alpha})$ stable. Furthermore, we apply these results to a more generic situation, revealing that DFNNs can attain $(2k+1)^n$ equilibria, and among them, $(k+1)^n$ equilibria are locally $O(t^{-\alpha})$ stable. Here, the parameter $k$ is highly dependent on the frequency of the sinusoidal function in the expanded activation functions. Such DFNNs are well-suited to synthesize high-capacity associative memories; the design process is given via singular value decomposition. Ultimately, four illustrative examples, including applying neurodynamic associative memory to the explaining-lesson skills assessment of normal students, are supplied to validate the efficacy of the results.
Citation: Jiang-Wei Ke, Jin-E Zhang. Associative memories based on delayed fractional-order neural networks and application to explaining-lesson skills assessment of normal students: from the perspective of multiple $O(t^{-\alpha})$ stability[J]. AIMS Mathematics, 2024, 9(7): 17430-17452. doi: 10.3934/math.2024847
Associative memory is a brain-like content-addressing process aimed at storing a set of patterns as stable equilibrium points, which enables reliable retrieval of a stored pattern from an initial probe containing adequate pattern-related information. In neural networks, associative memory can not only learn and store the correlations between different input patterns but also be used for prediction and the generation of sequence data. In addition, associative memory can be applied to categorize input patterns or recognize new patterns that are similar to the learned patterns, which is meaningful for tasks such as image recognition, speech recognition, and text categorization. In tackling associative memory, multistability is one of the key issues: by increasing their storage capacity, neural networks can store and retrieve large amounts of information more efficiently. At present, many studies have been carried out on neurodynamic associative memory [1,2,3]. Overall, the essence of neural network-based associative memory is to transform any input vector set into an output vector set related to patterns through nonlinear mapping. As a result, an inevitable requirement is that the corresponding network model possess multiple locally stable equilibria [4,5].
Recently, the analysis and design of fractional-order neural networks (FNNs) have been extensively implemented in many domains, including physics, engineering [6,7], control systems [8,9], and biological sciences. Fractional calculus, serving as its theoretical foundation, enables neural networks to better adapt to and address practical issues in these diverse domains. The emergence of FNNs signifies an impressive innovation in the field of neural networks. Compared to integer-order neural networks, FNNs offer advantages in enhancing degrees of freedom and providing better descriptions for processes with memory or hereditary features. Moreover, researchers can better comprehend the power-law phenomenon in fractional-order systems. This phenomenon may be misinterpreted in integer-order systems, but within the framework of fractional-order systems the power-law decay of state variables can be accounted for correctly, providing a more refined model of actual observations. Therefore, the exploration of the dynamic behavior of FNNs is appealing and challenging. Over the past decade, many researchers have achieved important and intriguing results on FNNs (see, e.g., [10,11,12,13,14,15]). Chen and Chen [14] studied the global $O(t^{-\alpha})$ stability of time-varying delayed fractional-order neural networks (DFNNs).
In the application area of associative memory, multistability proves significantly superior to monostability in terms of providing a greater number of storable patterns, and a larger number of stable equilibria usually indicates a higher storage capacity. As a result, the analysis of multistable systems has captured the interest of many scholars [16,17,18,19]. From the perspective of system integration, attaining asymptotic stability is a prerequisite for achieving decent performance in FNNs. $O(t^{-\alpha})$ stability refers to a specific asymptotic stability property of fractional-order systems, where $\alpha$ is the order of the fractional term. This stability characterizes the distinctive evolution of fractional dynamical systems: the system trajectory converges to a steady state at a rate of $t^{-\alpha}$. Consequently, it is crucial to analyze and comprehend the multiple $O(t^{-\alpha})$ stability of FNNs, which can help to evaluate the performance of FNNs and guide the development of network design and control methods.
It is well known that, in multistability analysis, the number of equilibria is intimately linked to the type of activation function. Accordingly, designing an activation function with excellent performance is important and indispensable. In previous analyses of multistability, researchers observed that neural networks with certain non-monotonic activation functions [16] possess more equilibrium points. There have been several fruitful works on the multistability analysis of neural networks with diverse activation functions, mainly based on non-decreasing or piecewise linear assumptions. Besides, several recent studies have concentrated on smooth activation functions such as the Gaussian function [20,21], sigmoidal function, Mexican hat function [22], and Morita-like function [23]. Nevertheless, analyzing the dynamic behavior of networks with smooth activation functions is more intricate than with piecewise linear ones, owing to the heightened nonlinearity inherent in smooth activation functions. In [24], Liu et al. introduced a category of piecewise nonlinear activation functions defined as follows:
$$f(r)=\begin{cases}-1, & -\infty<r<-\frac{\pi}{2},\\ (-1)^{k+1}\sin((2k-1)r), & -\frac{\pi}{2}\le r\le\frac{\pi}{2},\\ 1, & \frac{\pi}{2}<r<+\infty,\end{cases}\tag{1.1}$$
where $k>0$ is an integer. When $k=1$, the function (1.1) is a type of sigmoidal function. When $k=2$, the activation function (1.1) becomes
$$f(r)=\begin{cases}-1, & -\infty<r<-\frac{\pi}{2},\\ -\sin(3r), & -\frac{\pi}{2}\le r\le\frac{\pi}{2},\\ 1, & \frac{\pi}{2}<r<+\infty.\end{cases}\tag{1.2}$$
In [24], conditions are obtained guaranteeing that the investigated neural networks with the piecewise nonlinear activation function (1.2) have a total of $5^n$ equilibria. However, the activation functions devised in [24] have almost exclusively been studied in integer-order systems, and the corresponding situations in fractional-order systems have rarely been explored. Thus, it is of great interest to investigate the multistable properties of FNNs with activation functions (1.1) or (1.2).
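For readers who wish to experiment with these activation functions numerically, the following is a minimal NumPy sketch of (1.1); the function name and vectorization are our own choices, not part of [24].

```python
import numpy as np

def f(r, k=2):
    """Piecewise nonlinear activation (1.1): saturated at -1 and 1 outside
    [-pi/2, pi/2], and equal to (-1)**(k+1) * sin((2k-1) r) inside."""
    r = np.asarray(r, dtype=float)
    inner = (-1.0) ** (k + 1) * np.sin((2 * k - 1) * r)
    return np.where(r < -np.pi / 2, -1.0, np.where(r > np.pi / 2, 1.0, inner))

# k = 2 recovers (1.2), i.e., f(r) = -sin(3r) on [-pi/2, pi/2]:
print(f([-2.0, -np.pi / 6, 0.0, np.pi / 6, 2.0], k=2))  # [-1.  1. -0. -1.  1.]
```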
It is worth pointing out that time delays, in particular time-varying delays, are prevalent in neural networks owing to the restricted propagation speed of signals as well as the finite switching speed of neuron amplifiers. Time delays complicate the dynamic behavior of neural networks and can even cause instability or oscillations in originally stable networks. As such, it is essential to look into the multistability of DFNNs. Currently, many scholars are researching the multistability of DFNNs, which has yielded some noteworthy achievements. In [25], Wan and Liu explored the multiple stability of time-varying DFNNs; criteria are established guaranteeing that there are $\prod_{i=1}^{n}(2M_i+1)$ equilibria, among which $\prod_{i=1}^{n}(M_i+1)$ equilibria are locally $O(t^{-\alpha})$ stable. In [26], the authors introduced Gaussian activation functions to analyze the multiple stability of Cohen-Grossberg neural networks with delays. It is worth clarifying that there is very little literature exploring DFNNs with the activation function described in (1.2). This motivated our interest in the multistability analysis of time-varying DFNNs with activation functions (1.1) or (1.2).
As indicated in the above analysis, this paper is dedicated to inquiring into associative memories from the perspective of multiple $O(t^{-\alpha})$ stability of DFNNs with the piecewise nonlinear activation function (1.2). In general, the strengths of this paper can be summarized as follows: (1) Some sufficient criteria are deduced to ensure the existence of $5^n$ equilibria by means of Brouwer's fixed point theorem. (2) Several invariant sets are obtained, and the multiple $O(t^{-\alpha})$ stability of DFNNs with activation function (1.2) is disclosed. (3) This paper offers a handy and useful approach to enhance the number of stable equilibria of FNNs, namely elevating the value of $k$ within the proposed sinusoidal function, which can be used for high-capacity associative memories. (4) The results of this paper are complementary to existing analyses of related associative memories.
Notations. Consider $C([t_0-\sigma,t_0],\mathbb{R}^n)$ as the Banach space of continuous functions mapping $[t_0-\sigma,t_0]$ into $D\subset\mathbb{R}^n$, where the norm is given by $\|x\|=\sqrt{\sum_{i=1}^{n}x_i^2}$. For $\phi\in C([t_0-\sigma,t_0],\mathbb{R}^n)$, let $\|\phi\|_M=\sup_{t_0-\sigma\le s\le t_0}\|\phi(s)\|$.
First of all, we recall some definitions from fractional calculus.
Definition 2.1. [27] For a function $F(t)$, its fractional integral $I^{\alpha}_{t_0}(\cdot)$ is defined as
$$I^{\alpha}_{t_0}F(t)=\frac{1}{\Gamma(\alpha)}\int_{t_0}^{t}(t-s)^{\alpha-1}F(s)\,ds,$$
where $\alpha\in(0,1)$ and $\Gamma(\alpha)=\int_{0}^{\infty}u^{\alpha-1}e^{-u}\,du$ is the Gamma function.
Definition 2.2. [27] For a differentiable function $F(t)$, its Caputo derivative of order $\alpha$ (when $0<\alpha<1$) is described as
$${}^{C}D^{\alpha}_{t_0}F(t)=\frac{1}{\Gamma(1-\alpha)}\int_{t_0}^{t}(t-s)^{-\alpha}\frac{dF(s)}{ds}\,ds.$$
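As a sanity check on Definition 2.2, the Caputo derivative can be approximated by discretizing its integral directly (the so-called L1 scheme). Below is a minimal sketch, assuming the test function $t^2$, for which the closed form ${}^{C}D^{\alpha}_{0}t^{2}=\frac{2t^{2-\alpha}}{\Gamma(3-\alpha)}$ is known; the function name is ours.

```python
import numpy as np
from math import gamma

def caputo_l1(F, t0, t, alpha, n=4000):
    """L1 approximation of the Caputo derivative of order alpha in (0,1):
    the integral in Definition 2.2 with F taken piecewise linear on a grid."""
    h = (t - t0) / n
    s = np.linspace(t0, t, n + 1)
    dF = np.diff(F(s))                                    # F(t_{j+1}) - F(t_j)
    j = np.arange(n)
    b = (n - j) ** (1.0 - alpha) - (n - j - 1) ** (1.0 - alpha)
    return h ** (-alpha) / gamma(2.0 - alpha) * np.sum(b * dF)

alpha = 0.98
print(caputo_l1(lambda s: s ** 2, 0.0, 1.0, alpha))       # ~ 1.983
print(2.0 * 1.0 ** (2.0 - alpha) / gamma(3.0 - alpha))    # ~ 1.983 (closed form)
```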
Next, this paper takes into account a general class of time-varying DFNNs with the activation function (1.2) as follows:
$${}^{C}D^{\alpha}_{t_0}x_i(t)=-\beta_i x_i(t)+\sum_{j=1}^{n}\rho_{ij}f_j(x_j(t))+\sum_{j=1}^{n}\varphi_{ij}f_j(x_j(t-\tau_{ij}(t)))+u_i,\quad i=1,2,\ldots,n,\tag{2.1}$$
where $x(t)=(x_1(t),x_2(t),\ldots,x_n(t))^T\in\mathbb{R}^n$ denotes the state vector, $A=\mathrm{diag}(\beta_1,\beta_2,\ldots,\beta_n)$ stands for the neuron self-inhibition matrix with $\beta_i>0$, $B=(\rho_{ij})_{n\times n}$ and $C=(\varphi_{ij})_{n\times n}$ stand for connection weight matrices, $f(\cdot)=(f_1(\cdot),f_2(\cdot),\ldots,f_n(\cdot))^T$ is the activation function, and $u_i$ is the input. $\tau_{ij}(\cdot)$ is a time-varying delay satisfying $0\le\tau_{ij}(t)\le\sigma=\max_{1\le i,j\le n}\{\sup_{t\ge t_0}\tau_{ij}(t)\}$, where $\sigma>0$ is a constant. The initial value of the neural network (2.1) is endowed with
$$x(t_0+s)=\phi(s),\quad s\in[t_0-\sigma,t_0],\tag{2.2}$$
where $\phi(s)=(\phi_1(s),\phi_2(s),\ldots,\phi_n(s))^T\in C([t_0-\sigma,t_0],\mathbb{R}^n)$.
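Before turning to the analysis, it may help to see how trajectories of (2.1) can be generated numerically. The sketch below uses an explicit Grünwald-Letnikov discretization of the Caputo derivative (applied to $x-x_0$) with a grid lookup for the delayed states; this scheme, the step size, and all names are our own illustrative assumptions, not a solver prescribed by the paper.

```python
import numpy as np

def simulate_dfnn(beta, rho, phi_w, u, tau, f, hist, alpha=0.98,
                  h=0.01, T=40.0, t0=0.0):
    """Explicit Grunwald-Letnikov stepping for the Caputo system (2.1):
        x_m = x_0 + h**alpha * F_{m-1} - sum_{j=1..m} w_j (x_{m-j} - x_0),
    with w_0 = 1 and w_j = (1 - (alpha+1)/j) w_{j-1}.
    beta, u: length-n arrays; rho, phi_w: (n, n) weight arrays;
    tau[i][j]: callable t -> tau_ij(t); hist: callable t -> state on [t0-sigma, t0]."""
    beta, u = np.asarray(beta, float), np.asarray(u, float)
    rho, phi_w = np.asarray(rho, float), np.asarray(phi_w, float)
    n, N = len(beta), int(T / h)
    w = np.empty(N + 1)
    w[0] = 1.0
    for j in range(1, N + 1):            # GL binomial weights (-1)^j C(alpha, j)
        w[j] = (1.0 - (alpha + 1.0) / j) * w[j - 1]
    x = np.empty((N + 1, n))
    x[0] = np.asarray(hist(t0), float)
    for m in range(1, N + 1):
        t_prev = t0 + (m - 1) * h
        fx = f(x[m - 1])                 # instantaneous activations
        F = np.empty(n)
        for i in range(n):
            fd = np.empty(n)             # delayed activations entering row i
            for j in range(n):
                td = t_prev - tau[i][j](t_prev)
                xdj = hist(td)[j] if td <= t0 else x[int(round((td - t0) / h)), j]
                fd[j] = float(f(xdj))
            F[i] = -beta[i] * x[m - 1, i] + rho[i] @ fx + phi_w[i] @ fd + u[i]
        x[m] = x[0] + h ** alpha * F - sum(w[j] * (x[m - j] - x[0])
                                           for j in range(1, m + 1))
    return x
```

Run on the two-neuron system (5.1) of Section 5 (with the activation sketch above and various initial histories), this should qualitatively reproduce the multistable behavior of Figures 2 and 3; the full-memory sum makes each run $O(N^2)$, which a short-memory truncation can reduce.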
In what follows, we introduce some definitions and lemmas that will be employed to investigate the multistability of DFNNs.
Definition 2.3. [28] If a constant vector $x^*=(x_1^*,x_2^*,\ldots,x_n^*)^T$ satisfies
$$-\beta_i x_i^*+\sum_{j=1}^{n}\rho_{ij}f_j(x_j^*)+\sum_{j=1}^{n}\varphi_{ij}f_j(x_j^*)+u_i=0,\quad i=1,2,\ldots,n,$$
then $x^*$ is called an equilibrium point of (2.1).
Definition 2.4. [14] Suppose that $x^*\in D$ is an equilibrium point of (2.1) and that $D\subset\mathbb{R}^n$ is positively invariant. Then (2.1) is said to be locally $O(t^{-\alpha})$ stable if
$$\|x(t)-x^*\|\le\frac{\Lambda\varsigma^{\alpha}\|\phi-x^*\|_M}{(t-t_0+\varsigma)^{\alpha}},$$
where $\Lambda\ge1$ and $\varsigma\ge\sigma$ are constants, and $t\ge t_0$.
Lemma 2.1. [17] Suppose that $W(t)$ is differentiable on $[t_0,+\infty)$ and $\alpha\in(0,1)$. If $W(t)<0$ (resp. $W(t)\le0$) for $t_0\le t<\bar{t}$ and $W(\bar{t})=0$, then
$${}^{C}D^{\alpha}_{t_0}W(t)\big|_{t=\bar{t}}>0\quad(\text{resp. }{}^{C}D^{\alpha}_{t_0}W(t)\big|_{t=\bar{t}}\ge0).$$
Lemma 2.2. [10] For a function $P(t)\in C^1([0,+\infty),\mathbb{R})$ and $0<\alpha<1$,
$${}^{C}D^{\alpha}_{t_0}|P(t)|\le\mathrm{sign}(P(t))\,{}^{C}D^{\alpha}_{t_0}P(t),\quad t\ge t_0,$$
holds almost everywhere.
Lemma 2.3. [25] Let $\varsigma>0$ be a constant and $0<\alpha<1$. Suppose $G(t)\ge0$ is a continuous function on $[t_0,+\infty)$; then
$${}^{C}D^{\alpha}_{t_0}H(t)\le(t-t_0+\varsigma)^{\alpha}\,{}^{C}D^{\alpha}_{t_0}G(t)+\frac{1-\alpha+2\alpha^{2}-\alpha^{3}}{\varsigma^{\alpha}\Gamma(2-\alpha)}\hat{H}(t),$$
for $t\ge t_0$, where $H(t)=(t-t_0+\varsigma)^{\alpha}G(t)$, $\hat{H}(t)=(t-t_0+\varsigma)^{\alpha}\hat{G}(t)$, and $\hat{G}(t)=\sup_{t_0\le s\le t}G(s)$.
In this section, we delve into the multistability of the time-varying DFNNs (2.1); the existence and stability of multiple equilibria of DFNNs (2.1) are proved as follows.
For any given interval $I\subset\mathbb{R}$, let $I^0=\varnothing$ and $I^1=I$; then denote
$$(-\infty,-\tfrac{\pi}{2})=(-\infty,-\tfrac{\pi}{2})^1\times[-\tfrac{\pi}{2},-\tfrac{\pi}{6}]^0\times(-\tfrac{\pi}{6},\tfrac{\pi}{6})^0\times[\tfrac{\pi}{6},\tfrac{\pi}{2}]^0\times(\tfrac{\pi}{2},+\infty)^0,$$
$$[-\tfrac{\pi}{2},-\tfrac{\pi}{6}]=(-\infty,-\tfrac{\pi}{2})^0\times[-\tfrac{\pi}{2},-\tfrac{\pi}{6}]^1\times(-\tfrac{\pi}{6},\tfrac{\pi}{6})^0\times[\tfrac{\pi}{6},\tfrac{\pi}{2}]^0\times(\tfrac{\pi}{2},+\infty)^0,$$
$$(-\tfrac{\pi}{6},\tfrac{\pi}{6})=(-\infty,-\tfrac{\pi}{2})^0\times[-\tfrac{\pi}{2},-\tfrac{\pi}{6}]^0\times(-\tfrac{\pi}{6},\tfrac{\pi}{6})^1\times[\tfrac{\pi}{6},\tfrac{\pi}{2}]^0\times(\tfrac{\pi}{2},+\infty)^0,$$
$$[\tfrac{\pi}{6},\tfrac{\pi}{2}]=(-\infty,-\tfrac{\pi}{2})^0\times[-\tfrac{\pi}{2},-\tfrac{\pi}{6}]^0\times(-\tfrac{\pi}{6},\tfrac{\pi}{6})^0\times[\tfrac{\pi}{6},\tfrac{\pi}{2}]^1\times(\tfrac{\pi}{2},+\infty)^0,$$
$$(\tfrac{\pi}{2},+\infty)=(-\infty,-\tfrac{\pi}{2})^0\times[-\tfrac{\pi}{2},-\tfrac{\pi}{6}]^0\times(-\tfrac{\pi}{6},\tfrac{\pi}{6})^0\times[\tfrac{\pi}{6},\tfrac{\pi}{2}]^0\times(\tfrac{\pi}{2},+\infty)^1,$$
and let
$$\Theta=\Big\{\prod_{i=1}^{n}(-\infty,-\tfrac{\pi}{2})^{\delta^{(i)}_1}\times[-\tfrac{\pi}{2},-\tfrac{\pi}{6}]^{\delta^{(i)}_2}\times(-\tfrac{\pi}{6},\tfrac{\pi}{6})^{\delta^{(i)}_3}\times[\tfrac{\pi}{6},\tfrac{\pi}{2}]^{\delta^{(i)}_4}\times(\tfrac{\pi}{2},+\infty)^{\delta^{(i)}_5}:\ (\delta^{(i)}_1,\delta^{(i)}_2,\delta^{(i)}_3,\delta^{(i)}_4,\delta^{(i)}_5)=(1,0,0,0,0)\ \text{or}\ (0,1,0,0,0)\ \text{or}\ (0,0,1,0,0)\ \text{or}\ (0,0,0,1,0)\ \text{or}\ (0,0,0,0,1)\Big\}.$$
Hence, there are $5^n$ regions in $\Theta$. Suppose that $\iota$ is sufficiently small and satisfies $0<\iota\ll\min\big\{\frac{2}{\pi},\ \beta_i/(\sum_{j=1}^{n}|\rho_{ij}|+\sum_{j=1}^{n}|\varphi_{ij}|+|u_i|)\big\}$. Then, denote the set
$$\Theta_\iota=\Big\{\prod_{i=1}^{n}\big[-\tfrac{1}{\iota},-\tfrac{\pi}{2}-\iota\big]^{\delta^{(i)}_1}\times\big[-\tfrac{\pi}{2}+\iota,-\tfrac{\pi}{6}-\iota\big]^{\delta^{(i)}_2}\times\big[-\tfrac{\pi}{6}+\iota,\tfrac{\pi}{6}-\iota\big]^{\delta^{(i)}_3}\times\big[\tfrac{\pi}{6}+\iota,\tfrac{\pi}{2}-\iota\big]^{\delta^{(i)}_4}\times\big[\tfrac{\pi}{2}+\iota,\tfrac{1}{\iota}\big]^{\delta^{(i)}_5}:\ (\delta^{(i)}_1,\delta^{(i)}_2,\delta^{(i)}_3,\delta^{(i)}_4,\delta^{(i)}_5)=(1,0,0,0,0)\ \text{or}\ (0,1,0,0,0)\ \text{or}\ (0,0,1,0,0)\ \text{or}\ (0,0,0,1,0)\ \text{or}\ (0,0,0,0,1)\Big\}.$$
Consequently, any subset $\Theta^{(s)}\in\Theta_\iota$ is a bounded and closed set, where $s\in\{1,2,\ldots,5^n\}$.
Remark 3.1. In view of Figure 1, by means of the geometric properties of the activation function (1.2), the interval $(-\infty,+\infty)$ is divided into five parts: $(-\infty,-\frac{\pi}{2})\cup[-\frac{\pi}{2},-\frac{\pi}{6}]\cup(-\frac{\pi}{6},\frac{\pi}{6})\cup[\frac{\pi}{6},\frac{\pi}{2}]\cup(\frac{\pi}{2},+\infty)$. As a result, the region $\prod_{i=1}^{n}(-\infty,+\infty)$ can be divided into $5^n$ subsets.
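The combinatorics of Remark 3.1 are easy to make concrete: the state space is partitioned by taking the $n$-fold Cartesian product of the five one-dimensional pieces, as in the short enumeration below (the string labels are ours).

```python
from itertools import product

# The five 1-D pieces of Remark 3.1, for the k = 2 activation (1.2).
pieces = ["(-inf,-pi/2)", "[-pi/2,-pi/6]", "(-pi/6,pi/6)",
          "[pi/6,pi/2]", "(pi/2,+inf)"]

n = 2
regions = list(product(pieces, repeat=n))
print(len(regions))                                      # 25 = 5**2
# The set Theta-bar introduced below keeps only the 1st, 3rd, and 5th pieces
# (the candidates for stable equilibria), giving 3**n regions:
keep = {pieces[0], pieces[2], pieces[4]}
print(sum(all(p in keep for p in r) for r in regions))   # 9 = 3**2
```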
Theorem 3.1. Assume that the following conditions hold:
$$\frac{\pi}{2}\beta_i-\rho_{ii}-\varphi_{ii}+\sum_{j\ne i,j=1}^{n}|\rho_{ij}+\varphi_{ij}|+u_i<0,\tag{3.1}$$
$$-\frac{\pi}{2}\beta_i+\rho_{ii}+\varphi_{ii}-\sum_{j\ne i,j=1}^{n}|\rho_{ij}+\varphi_{ij}|+u_i>0,\tag{3.2}$$
for $i=1,2,\ldots,n$. Then DFNNs (2.1) with activation function (1.2) possess at least $5^n$ equilibria in $\Theta_\iota$.
Proof. From (3.1) and the fact that $f_i(-\frac{\pi}{2})=f_i(\frac{\pi}{6})=-1$, we know
$$-\frac{\pi}{6}\beta_i-\rho_{ii}-\varphi_{ii}+\sum_{j\ne i,j=1}^{n}|\rho_{ij}+\varphi_{ij}|+u_i<0.\tag{3.3}$$
Take an arbitrary region $\tilde{\Phi}_\iota\in\Theta_\iota$, denoted by
$$\tilde{\Phi}_\iota=\prod_{i\in N_1}\big[-\tfrac{1}{\iota},-\tfrac{\pi}{2}-\iota\big]\times\prod_{i\in N_2}\big[-\tfrac{\pi}{2}+\iota,-\tfrac{\pi}{6}-\iota\big]\times\prod_{i\in N_3}\big[-\tfrac{\pi}{6}+\iota,\tfrac{\pi}{6}-\iota\big]\times\prod_{i\in N_4}\big[\tfrac{\pi}{6}+\iota,\tfrac{\pi}{2}-\iota\big]\times\prod_{i\in N_5}\big[\tfrac{\pi}{2}+\iota,\tfrac{1}{\iota}\big]\subset\Theta_\iota,$$
where $N_i\subseteq\{1,2,\ldots,n\}$, $N_i\cap N_j=\varnothing$ $(i\ne j,\ i,j=1,2,3,4,5)$, and $N_1\cup N_2\cup N_3\cup N_4\cup N_5=\{1,2,\ldots,n\}$.
Then, we prove that there is an equilibrium point of (2.1) with activation function (1.2) in $\tilde{\Phi}_\iota$.
Choose a point $(\xi_1,\xi_2,\ldots,\xi_n)^T\in\tilde{\Phi}_\iota$ and fix $\xi_1,\ldots,\xi_{i-1},\xi_{i+1},\ldots,\xi_n$, leaving only $\xi_i$ free. Define the function
$$Z_i(x)=-\beta_i x+(\rho_{ii}+\varphi_{ii})f_i(x)+\sum_{j\ne i,j=1}^{n}(\rho_{ij}+\varphi_{ij})f_j(\xi_j)+u_i.\tag{3.4}$$
In the argument that follows, we categorize the discussion into five situations.
Situation 1. $i\in N_1$. Since $f_i(-\frac{\pi}{2}-\iota)=-1$ and $|f_j(\xi_j)|\le1$, based on (3.1) and the definition of $\iota$, we get
$$Z_i\big(-\tfrac{1}{\iota}\big)=\frac{\beta_i}{\iota}-(\rho_{ii}+\varphi_{ii})+\sum_{j\ne i,j=1}^{n}(\rho_{ij}+\varphi_{ij})f_j(\xi_j)+u_i\ge\frac{\beta_i}{\iota}-\sum_{j=1}^{n}|\rho_{ij}|-\sum_{j=1}^{n}|\varphi_{ij}|-|u_i|>0,$$
$$Z_i\big(-\tfrac{\pi}{2}-\iota\big)\le\frac{\pi}{2}\beta_i+\beta_i\iota-\rho_{ii}-\varphi_{ii}+\sum_{j\ne i,j=1}^{n}|\rho_{ij}+\varphi_{ij}|+u_i\le0.$$
Hence, thanks to the continuity of $Z_i(x)$, there exists $\bar{\xi}_i\in[-\frac{1}{\iota},-\frac{\pi}{2}-\iota]$ such that $Z_i(\bar{\xi}_i)=0$.
Situation 2. $i\in N_2$. Owing to $f_i(-\frac{\pi}{2})=-1$ and $f_i(-\frac{\pi}{6})=1$, under the facts $\beta_i>0$ and (3.2),
$$\frac{\pi}{6}\beta_i+\rho_{ii}+\varphi_{ii}-\sum_{j\ne i,j=1}^{n}|\rho_{ij}+\varphi_{ij}|+u_i>0,\tag{3.5}$$
so, according to (3.1) and (3.5), we have
$$Z_i\big(-\tfrac{\pi}{2}+\iota\big)\le\frac{\pi}{2}\beta_i+(\rho_{ii}+\varphi_{ii})f_i\big(-\tfrac{\pi}{2}+\iota\big)-\beta_i\iota+\sum_{j\ne i,j=1}^{n}|\rho_{ij}+\varphi_{ij}|+u_i\le0,$$
$$Z_i\big(-\tfrac{\pi}{6}-\iota\big)\ge\frac{\pi}{6}\beta_i+(\rho_{ii}+\varphi_{ii})f_i\big(-\tfrac{\pi}{6}-\iota\big)+\beta_i\iota-\sum_{j\ne i,j=1}^{n}|\rho_{ij}+\varphi_{ij}|+u_i\ge0;$$
hence, there exists $\bar{\xi}_i\in[-\frac{\pi}{2}+\iota,-\frac{\pi}{6}-\iota]$ such that $Z_i(\bar{\xi}_i)=0$.
Situation 3. $i\in N_3$. Since $f_i(-\frac{\pi}{6})=1$ and $f_i(\frac{\pi}{6})=-1$, from (3.1) we have
$$-\frac{\pi}{6}\beta_i-\rho_{ii}-\varphi_{ii}+\sum_{j\ne i,j=1}^{n}|\rho_{ij}+\varphi_{ij}|+u_i<0;\tag{3.6}$$
due to (3.5) and (3.6),
$$Z_i\big(-\tfrac{\pi}{6}+\iota\big)\ge\frac{\pi}{6}\beta_i+(\rho_{ii}+\varphi_{ii})f_i\big(-\tfrac{\pi}{6}+\iota\big)-\beta_i\iota-\sum_{j\ne i,j=1}^{n}|\rho_{ij}+\varphi_{ij}|+u_i\ge0,$$
$$Z_i\big(\tfrac{\pi}{6}-\iota\big)\le-\frac{\pi}{6}\beta_i+(\rho_{ii}+\varphi_{ii})f_i\big(\tfrac{\pi}{6}-\iota\big)+\beta_i\iota+\sum_{j\ne i,j=1}^{n}|\rho_{ij}+\varphi_{ij}|+u_i\le0,$$
so there exists $\bar{\xi}_i\in[-\frac{\pi}{6}+\iota,\frac{\pi}{6}-\iota]$ such that $Z_i(\bar{\xi}_i)=0$.
Situation 4. $i\in N_4$. Since $f_i(\frac{\pi}{6})=-1$ and $f_i(\frac{\pi}{2})=1$, from (3.2) and (3.6) we obtain
$$Z_i\big(\tfrac{\pi}{6}+\iota\big)\le-\frac{\pi}{6}\beta_i+(\rho_{ii}+\varphi_{ii})f_i\big(\tfrac{\pi}{6}+\iota\big)-\beta_i\iota+\sum_{j\ne i,j=1}^{n}|\rho_{ij}+\varphi_{ij}|+u_i\le0,$$
$$Z_i\big(\tfrac{\pi}{2}-\iota\big)\ge-\frac{\pi}{2}\beta_i+(\rho_{ii}+\varphi_{ii})f_i\big(\tfrac{\pi}{2}-\iota\big)-\beta_i\iota-\sum_{j\ne i,j=1}^{n}|\rho_{ij}+\varphi_{ij}|+u_i\ge0;$$
similarly, there exists $\bar{\xi}_i\in[\frac{\pi}{6}+\iota,\frac{\pi}{2}-\iota]$ such that $Z_i(\bar{\xi}_i)=0$.
Situation 5. $i\in N_5$. Since $f_i(\frac{\pi}{2}+\iota)=1$ and $|f_j(\xi_j)|\le1$, from (3.2) we know
$$Z_i\big(\tfrac{\pi}{2}+\iota\big)\ge-\frac{\pi}{2}\beta_i+(\rho_{ii}+\varphi_{ii})-\beta_i\iota-\sum_{j\ne i,j=1}^{n}|\rho_{ij}+\varphi_{ij}|+u_i\ge0,$$
$$Z_i\big(\tfrac{1}{\iota}\big)=-\frac{\beta_i}{\iota}+(\rho_{ii}+\varphi_{ii})+\sum_{j\ne i,j=1}^{n}(\rho_{ij}+\varphi_{ij})f_j(\xi_j)+u_i\le-\frac{\beta_i}{\iota}+\sum_{j=1}^{n}|\rho_{ij}|+\sum_{j=1}^{n}|\varphi_{ij}|+|u_i|<0;$$
therefore, there exists $\bar{\xi}_i\in[\frac{\pi}{2}+\iota,\frac{1}{\iota}]$ such that $Z_i(\bar{\xi}_i)=0$.
That is, $Z_i(x)$ has at least one zero in the corresponding interval for each $i$. Define a continuous mapping $\Xi:\tilde{\Phi}_\iota\to\tilde{\Phi}_\iota$, $\Xi(x_1,x_2,\ldots,x_n)=(\bar{x}_1,\bar{x}_2,\ldots,\bar{x}_n)^T$. Taking advantage of Brouwer's fixed point theorem, there exists a fixed point $x^*=(x_1^*,x_2^*,\ldots,x_n^*)^T$ of $\Xi$, which simultaneously acts as an equilibrium point of DFNNs (2.1) in $\tilde{\Phi}_\iota\in\Theta_\iota$. Hence, DFNNs (2.1) with activation function (1.2) have a minimum of $5^n$ equilibria in $\Theta_\iota$.
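Conditions (3.1)-(3.2) are straightforward to verify numerically for a given network. The following checker is a direct transcription of the two inequalities (the function name is ours); for instance, it returns True for the parameters of Example 5.1 in Section 5.

```python
import numpy as np

def check_thm31(beta, rho, phi_w, u):
    """Row-wise check of conditions (3.1) and (3.2) for an n-neuron network."""
    beta, u = np.asarray(beta, float), np.asarray(u, float)
    rho, phi_w = np.asarray(rho, float), np.asarray(phi_w, float)
    n = len(beta)
    for i in range(n):
        off = sum(abs(rho[i, j] + phi_w[i, j]) for j in range(n) if j != i)
        c1 = np.pi / 2 * beta[i] - rho[i, i] - phi_w[i, i] + off + u[i]   # (3.1) < 0
        c2 = -np.pi / 2 * beta[i] + rho[i, i] + phi_w[i, i] - off + u[i]  # (3.2) > 0
        if not (c1 < 0 and c2 > 0):
            return False
    return True

# Parameters of DFNNs (5.1) from Example 5.1:
print(check_thm31([1, 1], [[2.9, 0.1], [0.1, 2.7]],
                  [[0.1, 0.1], [0.1, 0.1]], [0.9, 1.0]))   # True
```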
In this subsection, we explore the stability of the multiple equilibria of DFNNs (2.1) with activation function (1.2). For this purpose, some positively invariant sets of DFNNs (2.1) need to be determined first.
Denote
$$\bar{\Theta}_\iota=\Big\{\prod_{i=1}^{n}\big[-\tfrac{1}{\iota},-\tfrac{\pi}{2}-\iota\big]^{\delta^{(i)}_1}\times\big[-\tfrac{\pi}{6}+\iota,\tfrac{\pi}{6}-\iota\big]^{\delta^{(i)}_2}\times\big[\tfrac{\pi}{2}+\iota,\tfrac{1}{\iota}\big]^{\delta^{(i)}_3}:\ (\delta^{(i)}_1,\delta^{(i)}_2,\delta^{(i)}_3)=(1,0,0)\ \text{or}\ (0,1,0)\ \text{or}\ (0,0,1)\Big\};$$
apparently, the set $\bar{\Theta}_\iota$ has $3^n$ regions.
Pick any region of $\bar{\Theta}_\iota$,
$$\bar{\Phi}_{\iota L}=\prod_{i\in L_1}\big[-\tfrac{1}{\iota},-\tfrac{\pi}{2}-\iota\big]\times\prod_{i\in L_2}\big[-\tfrac{\pi}{6}+\iota,\tfrac{\pi}{6}-\iota\big]\times\prod_{i\in L_3}\big[\tfrac{\pi}{2}+\iota,\tfrac{1}{\iota}\big]\subset\bar{\Theta}_\iota,$$
where $L_1\cup L_2\cup L_3=\{1,2,\ldots,n\}$ and $L_i\cap L_j=\varnothing$ $(i\ne j,\ i,j=1,2,3)$.
Theorem 3.2. Suppose that DFNNs (2.1) with activation function (1.2) satisfy
$$\frac{\pi}{2}\beta_i-\rho_{ii}+\sum_{j\ne i,j=1}^{n}|\rho_{ij}|+\sum_{j=1}^{n}|\varphi_{ij}|+u_i<0,\tag{3.7}$$
$$-\frac{\pi}{2}\beta_i+\rho_{ii}-\sum_{j\ne i,j=1}^{n}|\rho_{ij}|-\sum_{j=1}^{n}|\varphi_{ij}|+u_i>0,\tag{3.8}$$
for $i=1,2,\ldots,n$. Then each region $\bar{\Phi}_{\iota L}\in\bar{\Theta}_\iota$ is a positively invariant set of DFNNs (2.1).
Proof. From (3.7) and (3.8), the following conditions can be obtained:
$$-\frac{\pi}{6}\beta_i-\rho_{ii}+\sum_{j\ne i,j=1}^{n}|\rho_{ij}|+\sum_{j=1}^{n}|\varphi_{ij}|+u_i<0,\tag{3.9}$$
$$\frac{\pi}{6}\beta_i+\rho_{ii}-\sum_{j\ne i,j=1}^{n}|\rho_{ij}|-\sum_{j=1}^{n}|\varphi_{ij}|+u_i>0.\tag{3.10}$$
Assume that $x(t)$ is the solution of DFNNs (2.1) with initial value (2.2). In the following, we verify that each subspace $\bar{\Phi}_{\iota L}\in\bar{\Theta}_\iota$ is positively invariant; that is, for a given subspace $\bar{\Phi}_{\iota L}$, if the initial condition satisfies $\phi(s)\in\bar{\Phi}_{\iota L}$, then $x(t)\in\bar{\Phi}_{\iota L}$ for all $t\ge t_0$. Suppose, to the contrary, that this fails; three cases will be considered.
Case 1. $i\in L_1$. There exists $\bar{t}_1>t_0$ such that $x_i(\bar{t}_1)<-\frac{1}{\iota}$ or $x_i(\bar{t}_1)>-\frac{\pi}{2}-\iota$. Assume that $x_i(\bar{t}_1)>-\frac{\pi}{2}-\iota$ without loss of generality, with $\bar{t}_1$ the first such time. Then
$$\begin{cases}x_i(t)=-\frac{\pi}{2}-\iota, & t=\bar{t}_1,\\ x_i(t)<-\frac{\pi}{2}-\iota, & t_0\le t<\bar{t}_1.\end{cases}$$
Denote $W_1(t)=x_i(t)-(-\frac{\pi}{2}-\iota)$; according to Lemma 2.1,
$${}^{C}D^{\alpha}_{t_0}W_1(t)\big|_{t=\bar{t}_1}={}^{C}D^{\alpha}_{t_0}x_i(t)\big|_{t=\bar{t}_1}>0.\tag{3.11}$$
On the other hand, owing to (3.7), $f_i(-\frac{\pi}{2}-\iota)=-1$, and the sufficiently small positive $\iota$,
$${}^{C}D^{\alpha}_{t_0}x_i(t)\big|_{t=\bar{t}_1}=-\beta_i x_i(\bar{t}_1)+\sum_{j=1}^{n}\rho_{ij}f_j(x_j(\bar{t}_1))+\sum_{j=1}^{n}\varphi_{ij}f_j(x_j(\bar{t}_1-\tau_{ij}(\bar{t}_1)))+u_i\le\frac{\pi}{2}\beta_i+\beta_i\iota-\rho_{ii}+\sum_{j\ne i,j=1}^{n}|\rho_{ij}|+\sum_{j=1}^{n}|\varphi_{ij}|+u_i\le0,$$
which contradicts (3.11). So we get that $x_i(t)\le-\frac{\pi}{2}-\iota$; similarly, we can prove $x_i(t)\ge-\frac{1}{\iota}$.
Case 2. $i\in L_2$. There exists $\bar{t}_2>t_0$ such that $x_i(\bar{t}_2)<-\frac{\pi}{6}+\iota$ or $x_i(\bar{t}_2)>\frac{\pi}{6}-\iota$. Using the same approach as Case 1, assume that $x_i(\bar{t}_2)<-\frac{\pi}{6}+\iota$. Then
$$\begin{cases}x_i(t)=-\frac{\pi}{6}+\iota, & t=\bar{t}_2,\\ x_i(t)>-\frac{\pi}{6}+\iota, & t_0\le t<\bar{t}_2.\end{cases}$$
Denote $W_2(t)=-\frac{\pi}{6}+\iota-x_i(t)$; based on Lemma 2.1,
$${}^{C}D^{\alpha}_{t_0}W_2(t)\big|_{t=\bar{t}_2}=-{}^{C}D^{\alpha}_{t_0}x_i(t)\big|_{t=\bar{t}_2}>0.\tag{3.12}$$
On the other hand, in view of (3.10) and $|f_j(x_j)|\le1$,
$${}^{C}D^{\alpha}_{t_0}x_i(t)\big|_{t=\bar{t}_2}=-\beta_i x_i(\bar{t}_2)+\sum_{j=1}^{n}\rho_{ij}f_j(x_j(\bar{t}_2))+\sum_{j=1}^{n}\varphi_{ij}f_j(x_j(\bar{t}_2-\tau_{ij}(\bar{t}_2)))+u_i\ge\frac{\pi}{6}\beta_i-\beta_i\iota+\rho_{ii}f_i\big(-\tfrac{\pi}{6}+\iota\big)-\sum_{j\ne i,j=1}^{n}|\rho_{ij}|-\sum_{j=1}^{n}|\varphi_{ij}|+u_i\ge0,$$
which contradicts (3.12). So we get that $x_i(t)\ge-\frac{\pi}{6}+\iota$; similarly, we can prove $x_i(t)\le\frac{\pi}{6}-\iota$.
Case 3. $i\in L_3$. There exists $\bar{t}_3>t_0$ such that $x_i(\bar{t}_3)<\frac{\pi}{2}+\iota$ or $x_i(\bar{t}_3)>\frac{1}{\iota}$. Assume that $x_i(\bar{t}_3)<\frac{\pi}{2}+\iota$. Then
$$\begin{cases}x_i(t)=\frac{\pi}{2}+\iota, & t=\bar{t}_3,\\ x_i(t)>\frac{\pi}{2}+\iota, & t_0\le t<\bar{t}_3.\end{cases}$$
Let $W_3(t)=\frac{\pi}{2}+\iota-x_i(t)$. According to Lemma 2.1,
$${}^{C}D^{\alpha}_{t_0}W_3(t)\big|_{t=\bar{t}_3}=-{}^{C}D^{\alpha}_{t_0}x_i(t)\big|_{t=\bar{t}_3}>0,\tag{3.13}$$
which implies ${}^{C}D^{\alpha}_{t_0}x_i(t)\big|_{t=\bar{t}_3}<0$. On the other hand, on account of (3.8),
$${}^{C}D^{\alpha}_{t_0}x_i(t)\big|_{t=\bar{t}_3}=-\beta_i x_i(\bar{t}_3)+\sum_{j=1}^{n}\rho_{ij}f_j(x_j(\bar{t}_3))+\sum_{j=1}^{n}\varphi_{ij}f_j(x_j(\bar{t}_3-\tau_{ij}(\bar{t}_3)))+u_i\ge-\frac{\pi}{2}\beta_i-\beta_i\iota+\rho_{ii}-\sum_{j\ne i,j=1}^{n}|\rho_{ij}|-\sum_{j=1}^{n}|\varphi_{ij}|+u_i\ge0,$$
which contradicts ${}^{C}D^{\alpha}_{t_0}x_i(t)\big|_{t=\bar{t}_3}<0$. Hence, we get that $x_i(t)\ge\frac{\pi}{2}+\iota$; similarly, we can prove $x_i(t)\le\frac{1}{\iota}$.
Summing up, we conclude that the corresponding solution $x(t)$ always stays in $\bar{\Phi}_{\iota L}\in\bar{\Theta}_\iota$, which shows that each subset $\bar{\Phi}_{\iota L}\in\bar{\Theta}_\iota$ is a positively invariant set of DFNNs (2.1).
Below, the stability of DFNNs (2.1) with activation functions (1.2) will be discussed.
Theorem 3.3. Under conditions (3.7)-(3.8), further suppose that there are $n$ positive constants $\varepsilon_1,\varepsilon_2,\ldots,\varepsilon_n$ and $\varsigma>\sigma$ satisfying
$$\beta_i-\chi_i-\frac{3}{\varepsilon_i}\sum_{j=1,j\ne i}^{n}|\rho_{ij}|\varepsilon_j-\frac{3}{\varepsilon_i}\sum_{j=1}^{n}|\varphi_{ij}|\varepsilon_j\Big(\frac{\varsigma}{\varsigma-\sigma}\Big)^{\alpha}-\frac{1-\alpha+2\alpha^{2}-\alpha^{3}}{\varsigma^{\alpha}\Gamma(2-\alpha)}>0,$$
for $i=1,2,\ldots,n$, where $\chi_i=\max\{0,-3\rho_{ii}\}$. Then, there are $3^n$ locally $O(t^{-\alpha})$ stable equilibria in $\bar{\Theta}_\iota$ for DFNNs (2.1).
Proof. Based on Theorem 3.2, there are $3^n$ subsets in $\bar{\Theta}_\iota$ and each $\bar{\Phi}_{\iota L}\in\bar{\Theta}_\iota$ is positively invariant. Hence, all we need to verify is that the equilibrium point $x^*$ in each $\bar{\Phi}_{\iota L}\in\bar{\Theta}_\iota$ is locally $O(t^{-\alpha})$ stable.
Let
$$Y(t)=x(t)-x^*,\qquad z(t)=\max_{1\le i\le n}\Big\{\frac{|Y_i(t)|}{\varepsilon_i}\Big\}.$$
By the definition of $z(t)$, there exists an index $\kappa\in\{1,2,\ldots,n\}$ (possibly depending on $t$) such that $z(t)=\frac{|Y_\kappa(t)|}{\varepsilon_\kappa}$.
Substituting $x(t)=Y(t)+x^*$ into (2.1) shows that
$${}^{C}D^{\alpha}_{t_0}Y_i(t)=-\beta_i Y_i(t)+\sum_{j=1}^{n}\rho_{ij}F_j(Y_j(t))+\sum_{j=1}^{n}\varphi_{ij}F_j(Y_j(t-\tau_{ij}(t))),$$
where $F_j(Y_j(t))=f_j(Y_j(t)+x_j^*)-f_j(x_j^*)$ and $F_j(Y_j(t-\tau_{ij}(t)))=f_j(Y_j(t-\tau_{ij}(t))+x_j^*)-f_j(x_j^*)$.
Then, according to Lemma 2.2, we get that
$${}^{C}D^{\alpha}_{t_0}|Y_i(t)|\le\mathrm{sign}(Y_i(t))\,{}^{C}D^{\alpha}_{t_0}Y_i(t)\le\mathrm{sign}(Y_i(t))\Big(-\beta_i Y_i(t)+\sum_{j=1}^{n}\rho_{ij}F_j(Y_j(t))+\sum_{j=1}^{n}\varphi_{ij}F_j(Y_j(t-\tau_{ij}(t)))\Big)\le-\beta_i|Y_i(t)|+\rho_{ii}\frac{f_i(x_i(t))-f_i(x_i^*)}{x_i(t)-x_i^*}|Y_i(t)|+\sum_{j=1,j\ne i}^{n}\rho_{ij}\frac{f_j(x_j(t))-f_j(x_j^*)}{x_j(t)-x_j^*}|Y_j(t)|+\sum_{j=1}^{n}\varphi_{ij}\frac{f_j(x_j(t-\tau_{ij}(t)))-f_j(x_j^*)}{x_j(t-\tau_{ij}(t))-x_j^*}|Y_j(t-\tau_{ij}(t))|.\tag{3.14}$$
Next, recalling (1.2), by the Lagrange mean value theorem there exist $\xi^*_{1j}\in(x_j^*,x_j(t))$ and $\xi^*_{2j}\in(x_j^*,x_j(t-\tau_{ij}(t)))$ such that
$$f'_j(\xi^*_{1j})=\frac{f_j(x_j(t))-f_j(x_j^*)}{x_j(t)-x_j^*}\in(-3,0),\qquad f'_j(\xi^*_{2j})=\frac{f_j(x_j(t-\tau_{ij}(t)))-f_j(x_j^*)}{x_j(t-\tau_{ij}(t))-x_j^*}\in(-3,0),\qquad i,j=1,2,\ldots,n.$$
Consider the term $\rho_{ii}\frac{f_i(x_i(t))-f_i(x_i^*)}{x_i(t)-x_i^*}|Y_i(t)|$: if $\rho_{ii}\ge0$, then $\rho_{ii}\frac{f_i(x_i(t))-f_i(x_i^*)}{x_i(t)-x_i^*}|Y_i(t)|\le0$; if $\rho_{ii}<0$, then $\rho_{ii}\frac{f_i(x_i(t))-f_i(x_i^*)}{x_i(t)-x_i^*}|Y_i(t)|\le-3\rho_{ii}|Y_i(t)|$. In either case, $\rho_{ii}\frac{f_i(x_i(t))-f_i(x_i^*)}{x_i(t)-x_i^*}|Y_i(t)|\le\chi_i|Y_i(t)|$. Combining with (3.14), we have
$${}^{C}D^{\alpha}_{t_0}|Y_i(t)|\le(-\beta_i+\chi_i)|Y_i(t)|+3\sum_{j=1,j\ne i}^{n}|\rho_{ij}||Y_j(t)|+3\sum_{j=1}^{n}|\varphi_{ij}||Y_j(t-\tau_{ij}(t))|.\tag{3.15}$$
Invoking (3.15) and the definition of $z(t)$,
$${}^{C}D^{\alpha}_{t_0}z(t)=\frac{1}{\varepsilon_\kappa}\,{}^{C}D^{\alpha}_{t_0}|Y_\kappa(t)|\le\frac{1}{\varepsilon_\kappa}\Big[(-\beta_\kappa+\chi_\kappa)|Y_\kappa(t)|+3\sum_{j=1,j\ne\kappa}^{n}|\rho_{\kappa j}||Y_j(t)|+3\sum_{j=1}^{n}|\varphi_{\kappa j}||Y_j(t-\tau_{\kappa j}(t))|\Big]\le(-\beta_\kappa+\chi_\kappa)z(t)+\frac{3}{\varepsilon_\kappa}\Big[\sum_{j=1,j\ne\kappa}^{n}|\rho_{\kappa j}|\varepsilon_j z(t)+\sum_{j=1}^{n}|\varphi_{\kappa j}|\varepsilon_j z(t-\tau_{\kappa j}(t))\Big].\tag{3.16}$$
In view of Lemma 2.3, let
$$\varpi(t)=(t-t_0+\varsigma)^{\alpha}z(t),\qquad\hat{\varpi}(t)=(t-t_0+\varsigma)^{\alpha}\hat{z}(t),\qquad\hat{z}(t)=\sup_{t_0-\sigma\le s\le t}z(s).$$
Then,
$${}^{C}D^{\alpha}_{t_0}\varpi(t)\le(t-t_0+\varsigma)^{\alpha}\,{}^{C}D^{\alpha}_{t_0}z(t)+\frac{1-\alpha+2\alpha^{2}-\alpha^{3}}{\varsigma^{\alpha}\Gamma(2-\alpha)}\hat{\varpi}(t);\tag{3.17}$$
substituting (3.16) into (3.17) yields
$${}^{C}D^{\alpha}_{t_0}\varpi(t)\le(-\beta_\kappa+\chi_\kappa)\varpi(t)+\frac{3}{\varepsilon_\kappa}\Big[\sum_{j=1,j\ne\kappa}^{n}|\rho_{\kappa j}|\varepsilon_j\varpi(t)+\sum_{j=1}^{n}|\varphi_{\kappa j}|\varepsilon_j\varpi(t-\tau_{\kappa j}(t))\frac{(t-t_0+\varsigma)^{\alpha}}{(t-\tau_{\kappa j}(t)-t_0+\varsigma)^{\alpha}}\Big]+\frac{1-\alpha+2\alpha^{2}-\alpha^{3}}{\varsigma^{\alpha}\Gamma(2-\alpha)}\hat{\varpi}(t).\tag{3.18}$$
Note that
$$\varpi(t-\tau_{\kappa j}(t))\le(t-\tau_{\kappa j}(t)-t_0+\varsigma)^{\alpha}\hat{z}(t)\le\hat{\varpi}(t)$$
and
$$\frac{t-t_0+\varsigma}{t-\tau_{\kappa j}(t)-t_0+\varsigma}\le\frac{\varsigma}{\varsigma-\tau_{\kappa j}(t)}\le\frac{\varsigma}{\varsigma-\sigma};$$
hence, (3.18) turns into
$${}^{C}D^{\alpha}_{t_0}\varpi(t)\le-\Big(\beta_\kappa-\chi_\kappa-\frac{3}{\varepsilon_\kappa}\sum_{j=1,j\ne\kappa}^{n}|\rho_{\kappa j}|\varepsilon_j\Big)\varpi(t)+\frac{3}{\varepsilon_\kappa}\sum_{j=1}^{n}|\varphi_{\kappa j}|\varepsilon_j\Big(\frac{\varsigma}{\varsigma-\sigma}\Big)^{\alpha}\hat{\varpi}(t)+\frac{1-\alpha+2\alpha^{2}-\alpha^{3}}{\varsigma^{\alpha}\Gamma(2-\alpha)}\hat{\varpi}(t).$$
When $\hat{\varpi}(t)=\varpi(t)$ holds for $t\ge t_0$, this implies
$${}^{C}D^{\alpha}_{t_0}\varpi(t)\le-\Big[\beta_\kappa-\chi_\kappa-\frac{3}{\varepsilon_\kappa}\sum_{j=1,j\ne\kappa}^{n}|\rho_{\kappa j}|\varepsilon_j-\frac{3}{\varepsilon_\kappa}\sum_{j=1}^{n}|\varphi_{\kappa j}|\varepsilon_j\Big(\frac{\varsigma}{\varsigma-\sigma}\Big)^{\alpha}-\frac{1-\alpha+2\alpha^{2}-\alpha^{3}}{\varsigma^{\alpha}\Gamma(2-\alpha)}\Big]\varpi(t)\le-\Upsilon\varpi(t),\tag{3.19}$$
where $\Upsilon=\min_{1\le i\le n}\Big(\beta_i-\chi_i-\frac{3}{\varepsilon_i}\sum_{j=1,j\ne i}^{n}|\rho_{ij}|\varepsilon_j-\frac{3}{\varepsilon_i}\sum_{j=1}^{n}|\varphi_{ij}|\varepsilon_j\big(\frac{\varsigma}{\varsigma-\sigma}\big)^{\alpha}-\frac{1-\alpha+2\alpha^{2}-\alpha^{3}}{\varsigma^{\alpha}\Gamma(2-\alpha)}\Big)>0$.
Next, we demonstrate that $\hat{\varpi}(t)\le\hat{\varpi}(t_0)$ for $t\ge t_0$. Otherwise, there must be some $\tilde{t}>t_0$ such that $\hat{\varpi}(\tilde{t})=\varpi(\tilde{t})>\hat{\varpi}(t_0)\ge0$.
Now, denote $\hbar(t)=\varpi(t)-\hat{\varpi}(t)$; then
$$\begin{cases}\hbar(t)=0, & t=\tilde{t},\\ \hbar(t)\le0, & t<\tilde{t}.\end{cases}$$
Based on Lemma 2.1, we obtain
$${}^{C}D^{\alpha}_{t_0}\hbar(t)\big|_{t=\tilde{t}}={}^{C}D^{\alpha}_{t_0}\varpi(t)\big|_{t=\tilde{t}}-{}^{C}D^{\alpha}_{t_0}\hat{\varpi}(t)\big|_{t=\tilde{t}}\ge0;$$
from (3.19), we get ${}^{C}D^{\alpha}_{t_0}\varpi(t)\big|_{t=\tilde{t}}<0$, whereupon
$${}^{C}D^{\alpha}_{t_0}\hat{\varpi}(t)\big|_{t=\tilde{t}}\le{}^{C}D^{\alpha}_{t_0}\varpi(t)\big|_{t=\tilde{t}}<0.\tag{3.20}$$
On the other side, it is worth noting that $\hat{\varpi}(t)$ is non-decreasing: for $t_0\le t\le\tilde{t}$, $\frac{d\hat{\varpi}(t)}{dt}\ge0$ and $\frac{d\hat{\varpi}(t)}{dt}\not\equiv0$; hence
$${}^{C}D^{\alpha}_{t_0}\hat{\varpi}(t)\big|_{t=\tilde{t}}=\frac{1}{\Gamma(1-\alpha)}\int_{t_0}^{\tilde{t}}(\tilde{t}-s)^{-\alpha}\frac{d\hat{\varpi}(s)}{ds}\,ds>0,$$
which conflicts with (3.20). Therefore, $\hat{\varpi}(t)\le\hat{\varpi}(t_0)$ holds for $t\ge t_0$.
Recalling the norm $\|x\|$ and the definition of $z(t)$, it follows that
$$\|x(t)-x^*\|=\|Y(t)\|\le\sqrt{n}\,\varepsilon_\kappa z(t).$$
Then
$$\|x(t)-x^*\|\le\frac{\sqrt{n}\,\|\varepsilon\|\varpi(t)}{(t-t_0+\varsigma)^{\alpha}}\le\frac{\sqrt{n}\,\|\varepsilon\|\hat{\varpi}(t_0)}{(t-t_0+\varsigma)^{\alpha}}\le\frac{\sqrt{n}\,\|\varepsilon\|\varsigma^{\alpha}\|\phi-x^*\|_M}{\check{\varepsilon}\,(t-t_0+\varsigma)^{\alpha}}=\frac{\Lambda\varsigma^{\alpha}\|\phi-x^*\|_M}{(t-t_0+\varsigma)^{\alpha}},$$
where $\Lambda=\frac{\sqrt{n}\,\|\varepsilon\|}{\check{\varepsilon}}\ge1$ and $\check{\varepsilon}=\min_{1\le i\le n}\{\varepsilon_i\}$.
Therefore, $x^*$ is a locally $O(t^{-\alpha})$ stable equilibrium point; this accomplishes the proof.
Remark 3.2. Compared to the study of the piecewise nonlinear activation function (1.2) in integer-order neural networks in [24], this paper extends activation function (1.2) to FNNs, thereby enhancing the generalizability of the results.
Remark 3.3. In contrast with the multiple $O(t^{-\alpha})$ stability of FNNs studied in [25,26], the piecewise nonlinear activation functions considered in this paper can achieve more stable equilibrium points. This implies the possibility of achieving greater storage capacity, thus leading to better performance in associative memory applications.
The above mainly discusses the existence and stability of equilibria of DFNNs (2.1) with $k=2$ in the proposed activation function (1.2). In order to attain a greater number of stable equilibria, we extend the activation function to a more general scenario. The subsequent theorem elucidates the conditions for general $k\ge2$ in the activation function (1.1).
Theorem 3.4. Under conditions (3.7)-(3.8), further suppose that there are $n$ positive constants $\varepsilon_1,\varepsilon_2,\ldots,\varepsilon_n$ and $\varsigma>\sigma$ satisfying
$$\beta_i-\chi_i-\frac{2k-1}{\varepsilon_i}\sum_{j=1,j\ne i}^{n}|\rho_{ij}|\varepsilon_j-\frac{2k-1}{\varepsilon_i}\sum_{j=1}^{n}|\varphi_{ij}|\varepsilon_j\Big(\frac{\varsigma}{\varsigma-\sigma}\Big)^{\alpha}-\frac{1-\alpha+2\alpha^{2}-\alpha^{3}}{\varsigma^{\alpha}\Gamma(2-\alpha)}>0,$$
for $i=1,2,\ldots,n$. Then, DFNNs (2.1) with activation function (1.1) have $(2k+1)^n$ equilibria, of which $(k+1)^n$ are locally $O(t^{-\alpha})$ stable. The positively invariant set of DFNNs (2.1) with activation function (1.1) is
$$\Theta^*=\Big\{\prod_{i=1}^{n}(-\infty,-\tfrac{\pi}{2})^{\delta^{(i)}_1}\times\big[-\tfrac{\pi}{2}+\tfrac{\pi}{2k-1},-\tfrac{\pi}{2}+\tfrac{2\pi}{2k-1}\big]^{\delta^{(i)}_2}\times\cdots\times\big[\tfrac{\pi}{2}-\tfrac{2\pi}{2k-1},\tfrac{\pi}{2}-\tfrac{\pi}{2k-1}\big]^{\delta^{(i)}_k}\times(\tfrac{\pi}{2},+\infty)^{\delta^{(i)}_{k+1}}:\ (\delta^{(i)}_1,\delta^{(i)}_2,\ldots,\delta^{(i)}_{k+1})=(1,0,\ldots,0)\ \text{or}\ (0,1,\ldots,0)\ \text{or}\ \ldots\ \text{or}\ (0,0,\ldots,1)\Big\}.$$
Proof. Since the proof process is similar to Theorems 3.1–3.3, we omit it here.
Remark 3.4. From Theorem 3.4, we can see that there is a connection between the number of stable equilibria and the frequency $k$ of the sinusoidal functions: increasing the frequency of the sinusoidal function expands the number of stable equilibria.
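Remark 3.4 can be quantified at a glance: by Theorem 3.4 the total and stable equilibrium counts grow like $(2k+1)^n$ and $(k+1)^n$, so even modest increases in the frequency $k$ enlarge the memory capacity quickly, as the small tabulation below shows.

```python
# Equilibrium counts from Theorem 3.4 for an n-neuron network.
n = 2
print("k  total=(2k+1)^n  stable=(k+1)^n")
for k in (1, 2, 3, 5, 10):
    print(k, (2 * k + 1) ** n, (k + 1) ** n)
# k=2 gives 25 total / 9 stable and k=3 gives 49 / 16, matching
# Examples 5.1 and 5.2 below.
```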
In what follows, a design procedure for DFNNs (2.1) is introduced.
Let $\mathcal{B}=\{-1,1\}$ and $\mathcal{B}^n=\{\eta\in\mathbb{R}^n:\eta=(\eta_1,\eta_2,\ldots,\eta_n)^T,\ \eta_i\in\mathcal{B},\ i=1,2,\ldots,n\}$. For a given positive integer $r$, design neurodynamic associative memory utilizing DFNNs (2.1), with $r$ vectors $\eta^1,\eta^2,\ldots,\eta^r$ acting as memory patterns, such that: (a) the vectors $\eta^1,\eta^2,\ldots,\eta^r$ represent memory vectors; (b) the system complies with the conditions outlined in Theorem 3.3; (c) the presence of spurious memory patterns is minimized.
With $r$ vectors $\eta^1,\eta^2,\ldots,\eta^r\in\mathcal{B}^n$, we proceed as follows:
(1) Choose matrix $A$ as the identity matrix $E$.
(2) Take a real constant $\gamma>1$ and choose $r$ vectors $\epsilon^1,\epsilon^2,\ldots,\epsilon^r$ such that $\epsilon^i=\gamma\eta^i$ and $\epsilon^i=(B+C)\eta^i+I$, $i=1,2,\ldots,r$.
(3) Let $Q=[\eta^1-\eta^r,\eta^2-\eta^r,\ldots,\eta^{r-1}-\eta^r]$ and $M=[\epsilon^1-\epsilon^r,\epsilon^2-\epsilon^r,\ldots,\epsilon^{r-1}-\epsilon^r]$. Performing singular value decomposition on $Q$ yields
$$Q=[U_1,U_2]\begin{bmatrix}D&0\\0&0\end{bmatrix}\begin{bmatrix}V_1^T\\V_2^T\end{bmatrix},$$
where $D=\mathrm{diag}(d_1,d_2,\ldots,d_m)$ with singular values $d_i>0$, $i=1,2,\ldots,m$, $m=\mathrm{rank}(Q)$, $U_1^TU_1=V_1V_1^T=DD^{-1}=E$, $U_1\in\mathbb{R}^{n\times m}$, $U_2\in\mathbb{R}^{n\times(r-m-1)}$, and $V_1\in\mathbb{R}^{m\times m}$.
(4) Compute the sum matrix $T=(h_{ij})$ of $B$ and $C$: $T=MV_1D^{-1}U_1^T+WU_2^T$, where $W$ is an arbitrary $n\times(r-m-1)$ matrix.
(5) Choose $B$ and $C$ such that $\rho_{ij}+\varphi_{ij}=h_{ij}$.
(6) The vector of neuron inputs is calculated from $I=\epsilon^r-T\eta^r$.
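A compact NumPy sketch of steps (1)-(6) is given below; it assumes $W=0$ for the free term (any matrix of matching shape works) and takes the patterns as the columns of an array. Function and variable names are ours.

```python
import numpy as np

def synthesize(eta, gamma=3.0):
    """SVD-based synthesis of T = B + C and the input vector I from memory
    patterns eta (an n x r array whose columns lie in {-1, 1}^n)."""
    n, r = eta.shape
    eps = gamma * eta                              # step (2): eps_i = gamma*eta_i
    Q = eta[:, :-1] - eta[:, [-1]]                 # step (3): columns eta_i - eta_r
    M = eps[:, :-1] - eps[:, [-1]]
    U, d, Vt = np.linalg.svd(Q)
    m = int(np.sum(d > 1e-10))                     # m = rank(Q)
    U1, U2 = U[:, :m], U[:, m:]
    V1 = Vt[:m].T
    W = np.zeros((n, U2.shape[1]))                 # free matrix, taken as zero
    T = M @ V1 @ np.diag(1.0 / d[:m]) @ U1.T + W @ U2.T   # step (4)
    I = eps[:, -1] - T @ eta[:, -1]                # step (6)
    return T, I

eta = np.array([[1.0, -1.0],                       # columns: eta_1 = (1,1)^T,
                [1.0,  1.0]])                      #          eta_2 = (-1,1)^T
T, I = synthesize(eta)
for i in range(eta.shape[1]):                      # each stored pattern satisfies
    assert np.allclose(T @ eta[:, i] + I, 3.0 * eta[:, i])  # T eta_i + I = gamma eta_i
```

Step (5) then splits the computed $T$ into $B$ and $C$ with $\rho_{ij}+\varphi_{ij}=h_{ij}$, after which the conditions of Theorem 3.3 can be checked for the resulting weights.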
Remark 4.1. The validity of the above design process was established in [29] (Theorem 3.2 therein), and the procedure was applied to associative memories of integer-order neural networks in [3]. In contrast, this paper applies the design process to associative memories of DFNNs.
Here, four examples are provided to demonstrate the validity of the theoretical results.
Example 5.1. Consider the following 2-dimensional DFNNs with $\alpha=0.98$:
$$\begin{cases}{}^{C}D^{\alpha}_{t_0}x_1(t)=-x_1(t)+2.9f_1(x_1(t))+0.1f_2(x_2(t))+0.1f_1(x_1(t-\tau_{11}(t)))+0.1f_2(x_2(t-\tau_{12}(t)))+0.9,\\[1mm] {}^{C}D^{\alpha}_{t_0}x_2(t)=-x_2(t)+0.1f_1(x_1(t))+2.7f_2(x_2(t))+0.1f_1(x_1(t-\tau_{21}(t)))+0.1f_2(x_2(t-\tau_{22}(t)))+1,\end{cases}\tag{5.1}$$
where $\tau_{11}(t)=\tau_{21}(t)=\frac{t}{1+t}$, $\tau_{12}(t)=\tau_{22}(t)=\frac{e^t}{1+e^t}$, and
$$f(x)=\begin{cases}-1, & -\infty<x<-\frac{\pi}{2},\\ -\sin(3x), & -\frac{\pi}{2}\le x\le\frac{\pi}{2},\\ 1, & \frac{\pi}{2}<x<+\infty.\end{cases}\tag{5.2}$$
We check the following quantities:
$$\frac{\pi}{2}\beta_1-\rho_{11}+|\rho_{12}|+|\varphi_{11}|+|\varphi_{12}|+u_1=\frac{\pi}{2}-1.7\approx-0.1292<0,$$
$$-\frac{\pi}{2}\beta_2+\rho_{22}-|\rho_{21}|-(|\varphi_{21}|+|\varphi_{22}|)+u_2=3.4-\frac{\pi}{2}\approx1.8292>0,$$
and pick $\varepsilon_1=\varepsilon_2=1$ and $\varsigma=50>\sigma=1$, such that
$$\beta_i-\chi_i-\frac{3}{\varepsilon_i}\sum_{j=1,j\ne i}^{n}|\rho_{ij}|\varepsilon_j-\frac{3}{\varepsilon_i}\sum_{j=1}^{n}|\varphi_{ij}|\varepsilon_j\Big(\frac{\varsigma}{\varsigma-\sigma}\Big)^{\alpha}-\frac{1-\alpha+2\alpha^{2}-\alpha^{3}}{\varsigma^{\alpha}\Gamma(2-\alpha)}=0.0661>0.$$
As a result, the criteria introduced in Theorems 3.1-3.3 are met for DFNNs (5.1). Accordingly, there are $5^2=25$ equilibria for (5.1), among which $3^2=9$ equilibria are locally $O(t^{-\alpha})$ stable. Utilizing MATLAB, the evolution behaviors of DFNNs (5.1) are shown in Figures 2 and 3.
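The quantities above can be reproduced with a few lines of arithmetic; the script below recomputes the two sample inequalities and the Theorem 3.3 margin (about 0.0661 for both rows) from the parameters of (5.1) as transcribed here.

```python
import numpy as np
from math import gamma, pi

beta = np.array([1.0, 1.0])                        # parameters of DFNNs (5.1)
rho = np.array([[2.9, 0.1], [0.1, 2.7]])
phi_w = np.array([[0.1, 0.1], [0.1, 0.1]])
u = np.array([0.9, 1.0])
alpha, sigma, vsig = 0.98, 1.0, 50.0
eps = np.array([1.0, 1.0])

print(pi / 2 * beta[0] - rho[0, 0] + abs(rho[0, 1])
      + abs(phi_w[0, 0]) + abs(phi_w[0, 1]) + u[0])        # ~ -0.1292 < 0
print(-pi / 2 * beta[1] + rho[1, 1] - abs(rho[1, 0])
      - (abs(phi_w[1, 0]) + abs(phi_w[1, 1])) + u[1])      # ~  1.8292 > 0

for i in range(2):                                 # Theorem 3.3 margin, each row
    chi = max(0.0, -3.0 * rho[i, i])
    m = (beta[i] - chi
         - 3.0 / eps[i] * sum(abs(rho[i, j]) * eps[j] for j in range(2) if j != i)
         - 3.0 / eps[i] * sum(abs(phi_w[i, j]) * eps[j] for j in range(2))
           * (vsig / (vsig - sigma)) ** alpha
         - (1 - alpha + 2 * alpha ** 2 - alpha ** 3)
           / (vsig ** alpha * gamma(2 - alpha)))
    print(m)                                       # ~ 0.0661 > 0
```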
Example 5.2. Consider the following 2-dimensional DFNNs with $\alpha=0.98$:
$$\begin{cases}{}^{C}D^{\alpha}_{t_0}x_1(t)=-x_1(t)+2.9f_1(x_1(t))+0.1f_2(x_2(t))+0.1f_1(x_1(t-\tau_{11}(t)))+0.1f_2(x_2(t-\tau_{12}(t)))+0.9,\\[1mm] {}^{C}D^{\alpha}_{t_0}x_2(t)=-x_2(t)+0.1f_1(x_1(t))+2.7f_2(x_2(t))+0.1f_1(x_1(t-\tau_{21}(t)))+0.1f_2(x_2(t-\tau_{22}(t)))+1,\end{cases}\tag{5.3}$$
where $\tau_{11}(t)=\tau_{21}(t)=\frac{t}{1+t}$, $\tau_{12}(t)=\tau_{22}(t)=\frac{e^t}{1+e^t}$, and
$$f(x)=\begin{cases}-1, & -\infty<x<-\frac{\pi}{2},\\ \sin(5x), & -\frac{\pi}{2}\le x\le\frac{\pi}{2},\\ 1, & \frac{\pi}{2}<x<+\infty.\end{cases}\tag{5.4}$$
We check the following quantities:
$$\frac{\pi}{2}\beta_1-\rho_{11}+|\rho_{12}|+|\varphi_{11}|+|\varphi_{12}|+u_1=\frac{\pi}{2}-1.7\approx-0.1292<0,$$
$$-\frac{\pi}{2}\beta_2+\rho_{22}-|\rho_{21}|-(|\varphi_{21}|+|\varphi_{22}|)+u_2=3.4-\frac{\pi}{2}\approx1.8292>0,$$
and pick $\varepsilon_1=\varepsilon_2=1$ and $\varsigma=50>\sigma=1$, such that
$$\beta_i-\chi_i-\frac{3}{\varepsilon_i}\sum_{j=1,j\ne i}^{n}|\rho_{ij}|\varepsilon_j-\frac{3}{\varepsilon_i}\sum_{j=1}^{n}|\varphi_{ij}|\varepsilon_j\Big(\frac{\varsigma}{\varsigma-\sigma}\Big)^{\alpha}-\frac{1-\alpha+2\alpha^{2}-\alpha^{3}}{\varsigma^{\alpha}\Gamma(2-\alpha)}=0.0661>0.$$
As a result, the corresponding criteria (cf. Theorem 3.4 with $k=3$) are met for DFNNs (5.3). Accordingly, there are $7^2=49$ equilibria for (5.3), among which $4^2=16$ equilibria are locally $O(t^{-\alpha})$ stable. Using MATLAB, the evolution behaviors of DFNNs (5.3) are shown in Figures 4 and 5.
Example 5.3. Next, we verify neurodynamic associative memory. Assume the target pattern that needs to be memorized is a $6\times3$-pixel image, as shown in Figure 6, in which a black pixel represents '$-1$' and a white pixel represents '$1$'; that is, the target pattern is $(-1,-1,-1,1,1,1;\ 1,1,-1,-1,-1,-1;\ -1,-1,-1,1,1,1)^T$. Here, we synthesize associative memory by adopting model (5.1) with the activation function (5.2) in Example 5.1, which yields 9 locally $O(t^{-\alpha})$ stable equilibria, designated as $X^1,X^2,\ldots,X^9$. By applying MATLAB, these equilibria are $X^1=(-1.9,3.6)$, $X^2=(-2.151,0.086)$, $X^3=(-2.3,-2.0)$, $X^4=(0.119,3.7341)$, $X^5=(0.0848,0.1025)$, $X^6=(0.0705,-1.842)$, $X^7=(4.1,4.0)$, $X^8=(3.8236,0.1306)$, and $X^9=(3.7,-1.6)$. Besides, define $X^i=(X^i_1,X^i_2)^T$, $i=1,2,\ldots,9$.
By calculation,
$$f(X^1_1)=-1,\ f(X^1_2)=1,\ f(X^2_1)=-1,\ f(X^2_2)=-0.2551,\ f(X^3_1)=-1,\ f(X^3_2)=-1,$$
$$f(X^4_1)=-0.3294,\ f(X^4_2)=1,\ f(X^5_1)=-0.2517,\ f(X^5_2)=-0.3027,\ f(X^6_1)=-0.2099,\ f(X^6_2)=-1,$$
$$f(X^7_1)=1,\ f(X^7_2)=1,\ f(X^8_1)=1,\ f(X^8_2)=-0.3819,\ f(X^9_1)=1,\ f(X^9_2)=-1.$$
To get a better display, define nine vectors $e^i=(e^i_1,e^i_2)^T$, $i=1,2,\ldots,9$, as follows:
$$e^1=(0,-2)^T,\quad e^2=(0,1.2551)^T,\quad e^3=(2,2)^T,$$
$$e^4=(1.3294,0)^T,\quad e^5=(-0.7483,-0.6973)^T,\quad e^6=(-0.7901,0)^T,$$
$$e^7=(-2,-2)^T,\quad e^8=(-2,1.3819)^T,\quad e^9=(0,2)^T.$$
Define $J^i=(J^i_1,J^i_2)^T=(f(X^i_1)+e^i_1,\ f(X^i_2)+e^i_2)^T$ and let $J=(J^1;J^2;J^3;J^4;J^5;J^6;J^7;J^8;J^9)=(-1,-1,-1,1,1,1;\ 1,1,-1,-1,-1,-1;\ -1,-1,-1,1,1,1)^T$. With the aid of the design procedure proposed in Section 4, the matrices $B$ and $C$ need to satisfy $B+C=\begin{bmatrix}3&0.2\\0.2&2.8\end{bmatrix}$; we can thus pick the same connection weights $\rho_{ij},\varphi_{ij}$ as in (5.1) of Example 5.1. The corresponding evolutionary pattern is shown in Figure 7.
Example 5.4. To further demonstrate the universality and applicability of the theoretical results, we apply neurodynamic associative memory to the explaining-lesson skills assessment of normal students (students at teacher-training universities). In China, explaining-lesson skills of normal students have received high attention from governments and universities: cultivating them serves the needs of educational reform and development in teacher training, promotes the reform of teacher education training modes, and tests the outcomes of training in basic teaching skills. Generally, the explaining-lesson process in China includes the following aspects: (1) textbook analysis, (2) student analysis, (3) determining teaching objectives and key and difficult points, (4) determining teaching and learning methods, (5) selecting teaching aids and learning aids, and (6) designing the teaching process. Based on this process, there are nine indexes for evaluating explaining-lesson skills: (Ⅰ) textbook analysis, (Ⅱ) teaching objectives, (Ⅲ) teaching priorities, (Ⅳ) teaching difficulties, (Ⅴ) teaching methods, (Ⅵ) teaching process, (Ⅶ) blackboard design, (Ⅷ) post-class reflection, and (Ⅸ) explaining-lesson modes. For a specific normal student, identifying which index has the most significant impact on his or her explaining-lesson skill is crucial for personalized discrimination and improvement of teaching skill. From a signal processing perspective, this is essentially a nonlinear classification problem.
We know that high-dimensional mapping can effectively solve nonlinear classification problems. Firstly, based on the format of the data frames in the explaining-lesson videos, we extract the data fields that need to be converted into matrices through the algorithm in [30]. Then we determine the dimension $n$ of the matrix from the characteristics and requirements of the data fields, where the choice of $n$ depends on the specific application scenario and requirement. The next question is how to map data fields to this $n\times n$ matrix according to a certain rule; here, this rule is the neurodynamic associative memory explored above. Based on DFNNs (5.1) with the activation function (5.2) in Example 5.1 (i.e., as an associative memory network), the network model is used to construct the structural elements of the high-dimensional mapping, where the nine locally $O(t^{-\alpha})$ stable equilibria represent the corresponding membership degrees (i.e., corresponding to the nine indexes (Ⅰ)-(Ⅸ) stated previously). Once adaptive structural elements are obtained, the samples are preprocessed before classification so as to reduce the impact of irrelevant interference information on classification accuracy. Neurodynamic associative memory based on DFNNs (5.1) with the activation function (5.2) is thus a very useful intelligent classifier.
Figure 8 describes three scene segments of explaining-lesson for normal students; Figure 9 displays the normalization preprocessing of samples before classification; and Figure 10 shows the index that has the most significant impact on the explaining-lesson skill of a specific normal student.
This paper investigates the multiple $O(t^{-\alpha})$ stability of DFNNs with a type of piecewise nonlinear activation function. Some sufficient conditions are acquired to assure that there exist $5^n$ equilibria for DFNNs (2.1), of which $3^n$ equilibria are locally multiple $O(t^{-\alpha})$ stable. Furthermore, we apply these results to a more general case, revealing that time-varying DFNNs can attain $(2k+1)^n$ equilibria, among which $(k+1)^n$ equilibria exhibit $O(t^{-\alpha})$ stability. Here, the parameter $k$ is highly dependent on the frequency of the sinusoidal functions in the expanded activation functions. Accordingly, this work offers effective assistance in obtaining a larger storage capacity for the application of DFNNs in associative memory.
Jiangwei Ke: Conceptualization, formal analysis, methodology, writing-original draft, writing-review and editing; Jine Zhang: Software, validation and supervision. All authors have read and approved the final version of the manuscript for publication.
The authors declare they have not used artificial intelligence (AI) tools in the creation of this article.
This work was supported by Hubei Province Education Science Planning (Grant No. 2023GA058) and the Research Project on HBNU Education and Teaching Reform (Grant No. 2023002).
The authors declare that there are no conflicts of interest.
[1] Z. G. Zeng, J. Wang, Design and analysis of high-capacity associative memories based on a class of discrete-time recurrent neural networks, IEEE Trans. Syst. Man Cybern. B, 38 (2008), 1525-1536. https://doi.org/10.1109/tsmcb.2008.927717
[2] Z. G. Zeng, J. Wang, Associative memories based on continuous-time cellular neural networks designed using space-invariant cloning templates, Neural Netw., 22 (2009), 651-657. https://doi.org/10.1016/j.neunet.2009.06.031
[3] G. Bao, Z. G. Zeng, Analysis and design of associative memories based on recurrent neural network with discontinuous activation functions, Neurocomputing, 77 (2012), 701-707. https://doi.org/10.1016/j.neucom.2011.08.026
[4] J. Park, H. Y. Kim, Y. Park, S. W. Lee, A synthesis procedure for associative memories based on space-varying cellular neural networks, Neural Netw., 14 (2001), 107-113. https://doi.org/10.1016/S0893-6080(00)00086-1
[5] Z. G. Zeng, J. Wang, Analysis and design of associative memories based on recurrent neural networks with linear saturation activation functions and time-varying delays, Neural Comput., 19 (2007), 2149-2182. https://doi.org/10.1016/j.neucom.2011.08.026
[6] R. Rakkiyappan, G. Velmurugan, J. D. Cao, Finite-time stability analysis of fractional-order complex-valued memristor-based neural networks with time delays, Nonlinear Dyn., 78 (2014), 2823-2836. https://doi.org/10.1016/j.neucom.2017.03.042
[7] A. L. Wu, Z. G. Zeng, Global Mittag-Leffler stabilization of fractional-order memristive neural networks, IEEE Trans. Neural Netw. Learn. Syst., 28 (2015), 206-217. https://doi.org/10.1109/tnnls.2015.2506738
[8] W. Zhang, R. Wu, J. Cao, A. Alsaedi, T. Hayat, Synchronization for stochastic fractional-order memristive BAM neural networks with multiple delays, Fractal Fract., 7 (2023), 678. https://doi.org/10.3390/fractalfract7090678
[9] H. B. Bao, J. D. Cao, Projective synchronization of fractional-order memristor-based neural networks, Neural Netw., 63 (2015), 1-9. https://doi.org/10.1016/j.neunet.2014.10.007
[10] B. S. Chen, J. J. Chen, Global asymptotical ω-periodicity of a fractional-order non-autonomous neural networks, Neural Netw., 68 (2015), 78-88. https://doi.org/10.1016/j.neunet.2015.04.006
[11] C. Y. Chen, S. Zhu, Y. C. Wei, C. Y. Yang, Finite-time stability of delayed memristor-based fractional-order neural networks, IEEE Trans. Cybern., 50 (2018), 1607-1616. https://doi.org/10.1109/tcyb.2018.2876901
[12] J. E. Zhang, Linear-type discontinuous control of fixed-deviation stabilization and synchronization for fractional-order neurodynamic systems with communication delays, IEEE Access, 6 (2018), 52570-52581. https://doi.org/10.1109/access.2018.2870979
[13] J. J. Chen, B. S. Chen, Z. G. Zeng, $O(t^{-\alpha})$ synchronization and Mittag-Leffler synchronization for the fractional-order memristive neural networks with delays and discontinuous neuron activations, Neural Netw., 100 (2018), 10-24. https://doi.org/10.1016/j.neunet.2018.01.004
[14] B. S. Chen, J. J. Chen, Global $O(t^{-\alpha})$ stability and global asymptotical periodicity for a non-autonomous fractional-order neural networks with time-varying delays, Neural Netw., 73 (2016), 47-57. https://doi.org/10.1016/j.neunet.2015.09.007
[15] P. Liu, M. X. Kong, Z. G. Zeng, Projective synchronization analysis of fractional-order neural networks with mixed time delays, IEEE Trans. Cybern., 52 (2020), 6798-6808. https://doi.org/10.1109/tcyb.2020.3027755
[16] X. B. Nie, J. D. Cao, Multistability of competitive neural networks with time-varying and distributed delays, Nonlinear Anal. Real World Appl., 10 (2009), 928-942. https://doi.org/10.1016/j.nonrwa.2007.11.014
[17] P. Liu, Z. G. Zeng, J. Wang, Multiple Mittag-Leffler stability of fractional-order recurrent neural networks, IEEE Trans. Syst. Man Cybern. Syst., 47 (2017), 2279-2288. https://doi.org/10.1109/tsmc.2017.2651059
[18] Z. G. Zeng, W. X. Zheng, Multistability of two kinds of recurrent neural networks with activation functions symmetrical about the origin on the phase plane, IEEE Trans. Neural Netw. Learn. Syst., 24 (2013), 1749-1762. https://doi.org/10.1109/tnnls.2013.2262638
[19] P. Liu, Z. G. Zeng, J. Wang, Multistability of recurrent neural networks with nonmonotonic activation functions and mixed time delays, IEEE Trans. Syst. Man Cybern. Syst., 46 (2015), 512-523. https://doi.org/10.1109/tsmc.2015.2461191
[20] P. P. Liu, X. B. Nie, J. L. Liang, J. D. Cao, Multiple Mittag-Leffler stability of fractional-order competitive neural networks with Gaussian activation functions, Neural Netw., 108 (2018), 452-465. https://doi.org/10.1016/j.neunet.2018.09.005
[21] P. Liu, Z. G. Zeng, J. Wang, Complete stability of delayed recurrent neural networks with Gaussian activation functions, Neural Netw., 85 (2017), 21-32. https://doi.org/10.1016/j.neunet.2016.09.006
[22] P. Liu, Z. G. Zeng, J. Wang, Multistability of delayed recurrent neural networks with Mexican hat activation functions, Neural Comput., 29 (2017), 423-457. https://doi.org/10.1162/neco_a_00922
[23] Y. Shen, S. Zhu, X. Liu, S. Wen, Multistability and associative memory of neural networks with Morita-like activation functions, Neural Netw., 142 (2021), 162-170. https://doi.org/10.1016/j.neunet.2021.04.035
[24] Y. Liu, Z. Wang, Q. Ma, H. Shen, Multistability analysis of delayed recurrent neural networks with a class of piecewise nonlinear activation functions, Neural Netw., 152 (2022), 80-89. https://doi.org/10.1016/j.neunet.2022.04.015
[25] L. G. Wan, Z. X. Liu, Multiple $O(t^{-\alpha})$ stability for fractional-order neural networks with time-varying delays, J. Franklin Inst., 357 (2020), 12742-12766. https://doi.org/10.1016/j.jfranklin.2020.09.019
[26] L. G. Wan, Z. X. Liu, Multiple $O(t^{-q})$ stability and instability of time-varying delayed fractional-order Cohen-Grossberg neural networks with Gaussian activation functions, Neurocomputing, 454 (2021), 212-227. https://doi.org/10.1016/j.neucom.2021.05.018
[27] C. P. Li, F. R. Zhang, A survey on the stability of fractional differential equations, Eur. Phys. J. Special Topics, 193 (2011), 27-47. https://doi.org/10.1140/epjst/e2011-01379-1
[28] A. L. Wu, L. Liu, T. W. Huang, Z. G. Zeng, Mittag-Leffler stability of fractional-order neural networks in the presence of generalized piecewise constant arguments, Neural Netw., 85 (2017), 118-127. https://doi.org/10.1016/j.neunet.2016.10.002
[29] D. Liu, A. N. Michel, Sparsely interconnected neural networks for associative memories with applications to cellular neural networks, IEEE Trans. Circuits Syst., 41 (1994), 295-307. https://doi.org/10.1007/bfb0032156
[30] S. S. Bucak, B. Gunsel, Video content representation by incremental non-negative matrix factorization, Proc. Int. Conf. Image Proc., 2 (2007), 113-116. https://doi.org/10.1109/ICIP.2007.4379105