
Multiplicative calculus, also called geometric calculus, is an alternative to classical calculus that is built on multiplication and division rather than the addition and subtraction underlying the classical theory. It offers a geometric interpretation that is especially helpful for modeling systems that decay or grow exponentially. Like classical calculus, multiplicative calculus can be extended to fractional orders, which enables the analysis of systems with fractional scaling properties. In this paper, the well-known Sturm-Liouville problem of fractional calculus is therefore reformulated in multiplicative fractional calculus. The problem consists of a Sturm-Liouville operator involving multiplicative conformable derivatives in both the equation and the boundary conditions. The aim is to establish some of the problem's spectral properties, such as the self-adjointness of the operator, the orthogonality of eigenfunctions corresponding to distinct eigenvalues, and the reality of all eigenvalues. The Green's function is also constructed in this setting.
Citation: Tuba Gulsen, Sertac Goktas, Thabet Abdeljawad, Yusuf Gurefe. Sturm-Liouville problem in multiplicative fractional calculus[J]. AIMS Mathematics, 2024, 9(8): 22794-22812. doi: 10.3934/math.20241109
In 1983, M. Cohen and S. Grossberg proposed a new type of neural network model, the Cohen-Grossberg neural network (CGNN). CGNNs are widely used in image processing, speed detection of moving targets, associative memory, and other fields, and researchers worldwide have studied them from different perspectives. In practical applications, it is essential that the designed neural network be strongly stable. However, owing to the finite switching speed of amplifiers and the propagation delay of signals, a CGNN may exhibit time delays in operation, which are an important source of network instability. In recent years, many stability studies have therefore been carried out for time-delay neural networks [1,2,3,4,5,6,7,8,9,10,11].
The switched system is a framework for studying complex systems from the perspective of systems and control [12,13,14]. Mechanical and power systems can be modeled as switched systems, and switched systems also play a vital role in other areas, including ecological science and the energy environment [15]. A switched system is a complex system composed of a family of continuous- or discrete-time subsystems together with a switching rule that governs the switching among them. The switching rule controls the operation of the whole switched system; it is also called a switching signal, switching law, or switching function, and it is usually a piecewise constant function of time, state, or events. D. Liberzon and A. S. Morse describe in detail the stability, design, and development of switched systems [16]. Compared with earlier CGNN results [17,18,19,20], the connection weight matrix of the switched CGNN in [21] changes over time: the switching rules and strategies can adjust the dynamic behavior of the system and capture its dynamic evolution, whereas the connection weight matrix of a conventional CGNN is fixed and cannot reflect such changes. In [22], a quantized sliding-mode method was used to solve the synchronization problem of discrete-time recurrent neural networks with time-varying delays. To reduce the computational complexity, the authors introduced quantization to discretize the network state and then used Lyapunov theory and the Barbalat lemma to establish the convergence of the system. Sliding-mode control is a nonlinear control method with strong robustness; it realizes the switching and control of the system state by introducing a sliding surface, offers strong stability, short settling time, and strong tracking ability, and can also reflect the dynamic changes of the system. In contrast, this paper adds a switching mechanism to the traditional CGNN, uses LMIs and a quadratic convex inequality to establish criteria for the asymptotic stability of the switched CGNN (SCGNN), and further studies the stability and dynamic evolution of the SCGNN.
For CGNNs with time-varying delay, the existence and uniqueness of the equilibrium point have been proven, and stability analysis has been widely studied; however, few works focus on reducing the conservatism of the resulting stability criteria. In [23] and [24], the weighting-delay method and the flexible terminal interpolation method, respectively, were used to study recurrent neural networks. The time delay is not necessarily generated uniformly and may be asymmetric, so treating the delay as a fixed interval when studying its impact on stability [25,26,27,28] is more conservative and usually cannot meet practical needs. The two methods above instead change the length of the interval through one or more parameters, divide a fixed interval into multiple variable subintervals, and obtain the maximum allowable upper bound of the delay by using LMIs and constructing an appropriate Lyapunov-Krasovskii functional (LKF), which reduces the conservatism of the criteria. Comparing the experimental results of [23] and [24], the allowable delay upper bound obtained by the flexible terminal interpolation method is larger. In [29], using the Halanay inequality and a Lyapunov functional, the authors proposed a new sufficient condition ensuring that the time-varying delay CGNN has a unique equilibrium solution and is globally stable. In [30], based on nonsingular M-matrix theory, a transformation matrix is used to apply an appropriate linear transformation to the M-matrix and turn it into a special form with good properties, so as to establish positivity of the system and obtain a new criterion guaranteeing global exponential stability of high-order delayed discrete CGNNs. However, these methods do not consider reducing the conservatism of the criteria.
Therefore, this paper uses the flexible terminal interpolation method to study CGNNs with time delays, in order to reduce the conservatism of the criteria and make the results more general and practical. Moreover, the flexible terminal interpolation method can adapt the size of the subintervals to the characteristics of the data through a single parameter, which greatly reduces the computational burden and the computation time while preserving the accuracy of the interpolation.
The flexible terminal interpolation method uses ı interpolation steps and an adjustable parameter to divide the fixed time-delay interval [ℓ_0, ℓ_2] into 2^{ı+1}−3 flexible time-delay subintervals, as shown in Figure 1.
Let the adjustable parameter be ð, and ℓ1=ℓ(t), ϑ=1−ð. The terminal point of each subinterval can be expressed as (taking the second interpolation as an example)
ℓ_{1/2} = ðℓ(t) + ϑℓ_0 = ðℓ(t) + (1−ð)ℓ_0,
ℓ_{3/2} = ðℓ(t) + ϑℓ_2 = ðℓ(t) + (1−ð)ℓ_2,
ℓ_{1/4} = ðℓ_{1/2} + ϑℓ_0 = ð²ℓ(t) + (1−ð²)ℓ_0,
ℓ_{3/4} = ðℓ(t) + ϑℓ_{1/2} = ð(2−ð)ℓ(t) + (1−ð)²ℓ_0,
ℓ_{(2i−1)/2^ı} = ðℓ_{i/2^{ı−1}} + ϑℓ_{(i−1)/2^{ı−1}},   1 ≤ i ≤ 2^{ı−1},
ℓ_{(2i−1)/2^ı} = ðℓ_{(i−1)/2^{ı−1}} + ϑℓ_{i/2^{ı−1}},   2^{ı−1}+1 ≤ i ≤ 2^ı,
with derivatives ˙ℓ_{1/2} = ð˙ℓ(t), ˙ℓ_{3/2} = ð˙ℓ(t), ˙ℓ_{1/4} = ð²˙ℓ(t), ˙ℓ_{3/4} = ð(2−ð)˙ℓ(t).
We can see that the endpoint of each flexible subinterval is a convex combination of ℓ_0 and ℓ(t), or of ℓ_2 and ℓ(t). The terminal of each subinterval is adjustable, that is, the delay interval is adjusted as a whole, which enables us to capture more time-delay information and makes the stability results more accurate and effective.
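To make the endpoint recursion above concrete, the following Python sketch (our own illustration, not part of the original derivation) computes the sample endpoints ℓ_{1/2}, ℓ_{3/2}, ℓ_{1/4}, ℓ_{3/4} from ℓ_0, ℓ(t), ℓ_2 and the adjustable parameter ð; the numerical values are only examples.

```python
# Illustrative sketch (not from the paper): flexible subinterval endpoints of [l0, l2]
# produced by the second interpolation step, following the recursion above.
def flexible_endpoints(l0, lt, l2, d):
    """d is the adjustable parameter; lt = l(t) with l0 <= lt <= l2."""
    v = 1.0 - d                     # the parameter denoted by theta = 1 - d above
    l_12 = d * lt + v * l0          # l_{1/2}: convex combination of l(t) and l0
    l_32 = d * lt + v * l2          # l_{3/2}: convex combination of l(t) and l2
    l_14 = d * l_12 + v * l0        # l_{1/4} = d^2 l(t) + (1 - d^2) l0
    l_34 = d * lt + v * l_12        # l_{3/4} = d(2 - d) l(t) + (1 - d)^2 l0
    return l_14, l_12, l_34, l_32

# Example with l0 = 0, l(t) = 1.2, l2 = 2 and d = 0.642 (the value used in the numerical example)
print(flexible_endpoints(0.0, 1.2, 2.0, 0.642))   # every endpoint stays inside [l0, l2]
```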
Notation: R^{η×η} represents the set of η × η real matrices; Y^T represents the transpose of the matrix Y; Q > 0 means that the matrix Q is positive definite; col[Y_1, Y_2] = [Y_1^T, Y_2^T]^T; He[Y] = Y^T + Y; diag{...} denotes a diagonal matrix; ∗ denotes the block implied by symmetry about the main diagonal of a matrix.
The CGNN with time-varying delays can be described as follows:
˙x̂_i(t) = d_i(x̂_i(t)) [ −a_i(x̂_i(t)) + Σ_{j=1}^{η} b_ij F̄_j(x̂_j(t)) + Σ_{j=1}^{η} c_ij F̄_j(x̂_j(t−ℓ_j(t))) + ȷ_i ],   (2.1)
where x̂_i(t) denotes the state of the ith neuron at time t, d_i(x̂_i(t)) is the amplification function, a_i(x̂_i(t)) represents a behaved function, and F̄_i(⋅) denotes the bounded neuronal activation function.
B = (b_ij)_{η×η} and C = (c_ij)_{η×η} are the connection weights that reflect how neuron i connects with neuron j. ℓ_j(t) are the time-varying delays. ȷ_i denotes the constant external bias on the ith neuron, and ȷ = (ȷ_1, ȷ_2, ..., ȷ_η)^T.
CGNN (2.1) is transformed into
˙x̂(t) = D(x̂(t)) [ −A(x̂(t)) + B F̄(x̂(t)) + C F̄(x̂(t−ℓ(t))) + ȷ ],   (2.2)
where
ȷ = (ȷ_1, ȷ_2, ..., ȷ_η)^T,   x̂(t) = (x̂_1(t), x̂_2(t), ..., x̂_η(t))^T,
D(x̂(t)) = diag(d_1(x̂_1(t)), d_2(x̂_2(t)), ..., d_η(x̂_η(t))),
A(x̂(t)) = (a_1(x̂_1(t)), a_2(x̂_2(t)), ..., a_η(x̂_η(t)))^T,
F̄(x̂(t−ℓ(t))) = (F̄_1(x̂_1(t−ℓ_1(t))), F̄_2(x̂_2(t−ℓ_2(t))), ..., F̄_η(x̂_η(t−ℓ_η(t))))^T.
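As a concrete (purely illustrative) instance of the vector form (2.2), the sketch below evaluates the right-hand side for a two-neuron network; the specific amplification functions, behaved functions, weights, and bias are hypothetical choices, not values from this paper.

```python
import numpy as np

# Hypothetical two-neuron instance of the right-hand side of (2.2):
# xdot = D(x) [ -A(x) + B F(x) + C F(x_delayed) + j ]
def cgnn_rhs(x, x_delayed, B, C, j):
    D = np.diag(2.0 + np.sin(x) ** 2)   # amplification functions d_i, here bounded in [2, 3]
    A = x                                # behaved functions a_i(x_i) = x_i (Assumption 2.2 with mu_i = 1)
    F = np.tanh                          # bounded activation (Assumption 2.3 with k_i = 1)
    return D @ (-A + B @ F(x) + C @ F(x_delayed) + j)

B = np.array([[-0.1, 0.2], [0.3, -0.4]])   # illustrative connection weights
C = np.array([[0.1, -0.2], [0.0, 0.2]])
j = np.zeros(2)                             # illustrative external bias
print(cgnn_rhs(np.array([0.5, -0.3]), np.array([0.4, -0.2]), B, C, j))
```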
We will need to use the following assumptions and lemma.
Assumption 2.1. Each function d_i(x̂_i(t)) is bounded and locally continuous, and there exist non-negative constants l̲_i and l̄_i such that 0 ≤ l̲_i ≤ d_i(x̂_i(t)) ≤ l̄_i < +∞ for all x̂_i(t) ∈ R^η.
Assumption 2.2. Each function a_i(x̂_i(t)) is bounded and continuous, and there exist positive constants μ_i > 0 such that
[a_i(x̂_i(t)) − a_i(y̌_i(t))] / [x̂_i(t) − y̌_i(t)] ≥ μ_i > 0,   i = 1, 2, ..., η,  ∀ x̂_i, y̌_i ∈ R^η, x̂_i ≠ y̌_i.
Assumption 2.3. F̄_i(⋅) is a bounded activation function and there are positive constants k_i (i = 1, 2, ..., η) such that
|F̄_i(x̂_i(t)) − F̄_i(y̌_i(t))| ≤ k_i |x̂_i − y̌_i|,   i = 1, 2, ..., η,  ∀ x̂_i, y̌_i ∈ R^η, x̂_i ≠ y̌_i.
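For example, the activation F̄_i = tanh used later in the numerical section satisfies Assumption 2.3 with k_i = 1; a quick numerical sanity check (our own) is:

```python
import numpy as np

# Check |tanh(x) - tanh(y)| <= k |x - y| with k = 1 on random samples
rng = np.random.default_rng(0)
x, y = rng.normal(size=10_000), rng.normal(size=10_000)
mask = np.abs(x - y) > 1e-12                 # avoid dividing by (numerically) zero gaps
ratio = np.abs(np.tanh(x[mask]) - np.tanh(y[mask])) / np.abs(x[mask] - y[mask])
print(bool(ratio.max() <= 1.0))              # True: tanh is 1-Lipschitz and bounded
```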
Definition 2.1. [31] CGNN (2.1) with time-varying delay is said to be asymptotically stable if, for all admissible coefficient matrices B and C, its equilibrium point is asymptotically stable.
Definition 2.2. [32] A state in which system (2.1) remains is called an equilibrium point. The equilibrium point is a constant vector x̂* = (x̂*_1, x̂*_2, ..., x̂*_η)^T satisfying
−a_i(x̂*_i) + Σ_{j=1}^{η} b_ij F̄_j(x̂*_j) + Σ_{j=1}^{η} c_ij F̄_j(x̂*_j) + ȷ_i = 0.
Lemma 2.1. [33] (Quadratic reciprocally convex inequality) For given matrices J_i ∈ R^{η×η}, real scalars q_i, q_j ∈ [0, 1], and ℘_i ∈ (0, 1) with Σ_{i=1}^{η} ℘_i = 1, if there exist matrices T_i ∈ R^{η×η} and L_ij ∈ R^{η×η} (j > i), i = 1, 2, ..., η, such that the following matrix inequalities hold:
[ J_i − T_i    L_ij ;  ∗    J_j − q_j T_j ] ≥ 0,   (2.3)

[ J_i − q_i T_i    L_ij ;  ∗    J_j − T_j ] ≥ 0,   (2.4)

[ J_i    L_ij ;  ∗    J_j ] ≥ 0.   (2.5)
Then, for any vector ζi∈Rη, the following inequality holds [23]:
η∑i=11℘iζTiJiζi≥η∑j>i=1He[ζTiLijζj]+η∑i=1ζTi[Ji+(1−℘i)Ti]ζi+η∑j>i=1{ζTi(qi℘2j℘iTj)ζi+ζTj(qj℘2j℘jTi)ζj}. | (2.6) |
Theorem 2.1. Under Assumptions 2.1-2.3, if the following inequality condition is satisfied, then the equilibrium point (balance point) of the system exists and is unique.
‖B‖_1 + ‖C‖_1 < μ_m / K_m,

where

μ_m = min_{1≤i≤η} μ_i,   K_m = max_{1≤i≤η} k_i,

and, for any matrix B = (b_ij)_{η×η},

‖B‖_1 = max_{1≤j≤η} Σ_{i=1}^{η} |b_ij|,   ‖B‖_2 = √(λ_m(B^T B)),

where λ_m(B^T B) denotes the largest eigenvalue of the matrix B^T B.
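The condition of Theorem 2.1 is straightforward to verify numerically. The sketch below (with illustrative matrices and constants, not ones taken from this paper) computes the column-sum norms and tests ‖B‖_1 + ‖C‖_1 < μ_m/K_m:

```python
import numpy as np

def norm1(M):
    # column-sum norm ||M||_1 = max_j sum_i |m_ij|
    return np.abs(M).sum(axis=0).max()

B = np.array([[-0.1, 0.2], [0.3, -0.4]])   # illustrative matrices, not from the paper
C = np.array([[0.1, -0.2], [0.0, 0.2]])
mu_m, K_m = 1.2, 1.0                        # mu_m = min_i mu_i, K_m = max_i k_i (illustrative)

# Theorem 2.1 condition: if this prints True, the equilibrium exists and is unique
print(norm1(B) + norm1(C) < mu_m / K_m)
```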
Proof. Let ˆx∗=(ˆx∗1,ˆx∗2,....,ˆx∗η)T denote an equilibrium point of neural network model (2.2). Then,
D(ˆx∗)[−A(ˆx∗)+BˉF(ˆx∗)+CˉF(ˆx∗)+ȷ]=0. | (2.7) |
Because D(x̂*) is a positive diagonal matrix (all off-diagonal elements are zero), (2.7) can be replaced with
−A(ˆx∗)+BˉF(ˆx∗)+CˉF(ˆx∗)+ȷ=0. | (2.8) |
Let
ˇℑ(ˆx)=−A(ˆx)+BˉF(ˆx)+CˉF(ˆx)+ȷ=0, | (2.9) |
where ˇℑ(ˆx)=(ˇk1(ˆx),ˇk2(ˆx),....,ˇkη(ˆx))T with
ˇki(ˆx)=−ai(ˆxi)+η∑j=1bijˉFj(ˆxj)+η∑j=1cijˉFj(ˆxj)+ȷi, i=1,2,...,η_. |
As is well known, if ℑ̌(x̂) is a homeomorphism of R^η, then (2.8) has a unique solution. From [2], it can be seen that ℑ̌(x̂) in this paper is a homeomorphic map of R^η if ℑ̌(x̂) ≠ ℑ̌(y̌) for all x̂ ≠ y̌ with x̂, y̌ ∈ R^η, and ‖ℑ̌(x̂)‖ → ∞ as ‖x̂‖ → ∞.
Let x̂ ≠ y̌, which implies two cases:

(I) x̂ ≠ y̌ and F̄(x̂) − F̄(y̌) ≠ 0,

(II) x̂ ≠ y̌ and F̄(x̂) − F̄(y̌) = 0.
Now, case (I):
ˇℑ(ˆx)−ˇℑ(ˇy)=−A(ˆx)+BˉF(ˆx)+CˉF(ˆx)+ȷ−[−A(ˇy)+BˉF(ˇy)+CˉF(ˇy)+ȷ]=−(A(ˆx)−A(ˇy))+B(ˉF(ˆx)−ˉF(ˇy))+C(ˉF(ˆx)−ˉF(ˇy)), | (2.10) |
and specify the above equation:
ˇki(ˆx)−ˇki(ˇy)=−(ai(ˆxi)−ai(ˇyi))+η∑j=1(bij+cij)(ˉFj(ˆxj)−ˉFj(ˇyj)), | (2.11) |
Multiply the left and right of Eq (2.11) by sgn(ˆxi−ˇyi), where
sgn(ˆx)={1, ˆx>0,0, ˆx=0,−1, ˆx<0, |
Then, (2.11) becomes
sgn(ˆxi−ˇyi)(ˇki(ˆx)−ˇki(ˇy))=−sgn(ˆxi−ˇyi)(ai(ˆxi)−ai(ˇyi))+η∑j=1sgn(ˆxi−ˇyi)(bij+cij)(ˉFj(ˆxj)−ˉFj(ˇyj))≤−μi|ˆxi−ˇyi|+η∑j=1(|bij|+|cij|)Ki|ˆxi−ˇyi|≤−μm|ˆxi−ˇyi|+η∑j=1(|bij|+|cij|)Km|ˆxi−ˇyi|, |
from which we get
η∑i=1sgn(ˆxi−ˇyi)(ˇki(ˆx)−ˇki(ˇy))≤−η∑i=1μm|ˆxi−ˇyi|+η∑i=1η∑j=1(|bij|+|cij|)Km|ˆxi−ˇyi|≤−η∑i=1μm|ˆxi−ˇyi|+Kmη∑i=1η∑j=1(|bij|+|cij|)|ˆxi−ˇyi|≤−(μm−Km(‖B‖1+‖C‖1))‖ˆx−ˇy‖1, | (2.12) |
where ‖x̂ − y̌‖_1 = Σ_{i=1}^{η} |x̂_i − y̌_i|. For x̂ − y̌ ≠ 0, the condition ‖B‖_1 + ‖C‖_1 < μ_m/K_m implies that
η∑i=1sgn(ˆxi−ˇyi)(ˇki(ˆx)−ˇki(ˇy))≤0 |
or
η∑i=1|ˇki(ˆx)−ˇki(ˇy)|=‖ˇℑ(ˆx)−ˇℑ(ˇy)‖1>0. |
It can be seen that ℑ̌(x̂) ≠ ℑ̌(y̌) for any x̂ ≠ y̌.
Now, case (II):
ˇℑ(ˆx)−ˇℑ(ˇy)=−(A(ˆx)−A(ˇy)), |
from which one can obtain
sgn(ˆxi−ˇyi)(ˇki(ˆx)−ˇki(ˇy))=−sgn(ˆxi−ˇyi)(ai(ˆxi)−ai(ˇyi))≤−μi|ˆxi−ˇyi|. |
ˆx−ˇy≠0 implies that
η∑i=1sgn(ˆxi−ˇyi)(ˇki(ˆx)−ˇki(ˇy))<0 |
or
η∑i=1|ˇki(ˆx)−ˇki(ˇy)|=‖ˇℑ(ˆx)−ˇℑ(ˇy)‖1>0. |
From the above two inequalities, we conclude that ℑ̌(x̂) ≠ ℑ̌(y̌) whenever x̂ ≠ y̌.
Substitute ˇy=0 into inequality (2.12) to get
η∑i=1sgn(ˆxi)(ˇki(ˆx)−ˇki(0))≤−(μm−Km(‖B‖1+‖C‖1))‖ˆx‖1. |
Therefore,
(μm−Km(‖B‖1+‖C‖1))‖ˆx‖1≤|η∑i=1sgn(ˆxi)(ˇki(ˆx)−ˇki(0))|≤η∑i=1|ˇki(ˆx)−ˇki(0)|=‖ˇℑ(ˆx)−ˇℑ(0)‖1≤‖ˇℑ(ˆx)‖1+‖ˇℑ(0)‖1,‖ˇℑ(ˆx)‖1≥(μm−Km(‖B‖1+‖C‖1))‖ˆx‖1−‖ˇℑ(0)‖1. |
That is, when ‖ˆx‖→∞, ‖ˇℑ(ˆx)‖→∞.
This completes the proof of Theorem 2.1.
Remark 2.1. In terms of the construction of the LKF, different construction methods can handle different types of time-delay systems, and the utilization rates of different types of time-delay information also differ. How to construct an LKF with a small amount of computation and low conservatism is worth further exploration.
The system state is translated so that the equilibrium point lies at the origin of the coordinate system. Let the equilibrium point be x̂* = (x̂*_1, x̂*_2, ..., x̂*_η) and set κ̄(t) = x̂(t) − x̂*; then
˙κ̄_i(t) = α_i(κ̄_i(t)) [ −β_i(κ̄_i(t)) + Σ_{j=1}^{η} b_ij ℏ̄_j(κ̄_j(t)) + Σ_{j=1}^{η} c_ij ℏ̄_j(κ̄_j(t−ℓ_j(t))) ],
or, in vector form,
˙κ̄(t) = α(κ̄(t)) [ −β(κ̄(t)) + B ℏ̄(κ̄(t)) + C ℏ̄(κ̄(t−ℓ(t))) ],   (3.1)
where
ˉκ(t)=(ˉκ1(t),ˉκ2(t),...,ˉκη(t))T,α(ˉκ(t))=diag(α1(ˉκ1(t)),α2(ˉκ2(t)),...,αη(ˉκη(t))),β(ˉκ(t))=(β1(ˉκ1(t)),β2(ˉκ2(t)),...,βη(ˉκη(t)))T,ˉℏ(ˉκ(t−ℓ(t)))=(ˉℏ1(ˉκ1(t−ℓ1(t))),ˉℏ2(ˉκ2(t−ℓ2(t))),...,ˉℏη(ˉκη(t−ℓη(t))))T. |
For the transformed system (3.1), we have
α_i(κ̄_i(t)) = d_i(κ̄_i(t) + x̂*_i),   i = 1, 2, ..., η,
β_i(κ̄_i(t)) = a_i(κ̄_i(t) + x̂*_i) − a_i(x̂*_i),   i = 1, 2, ..., η,
ℏ̄_i(κ̄_i(t)) = F̄_i(κ̄_i(t) + x̂*_i) − F̄_i(x̂*_i),   i = 1, 2, ..., η.
Time-delay neural network models play a vital role in practical applications such as function approximation, parallel computation, and associative memory. Moreover, a key feature of a switched system is that it imposes constraints on the state variables, which may be input or output variables. Therefore, the next step is to study the SCGNN with time delay.
Consider the following SCGNN models with time-varying delay:
˙ˉκ(t)=α(ˉκ(t))[−β(ˉκ(t))+Bσ(t)ˉℏ(ˉκ(t))+Cσ(t)ˉℏ(ˉκ(t−ℓ(t)))], | (3.2) |
Here ℏ̄(κ̄(t)) = (ℏ̄_1(κ̄_1(t)), ℏ̄_2(κ̄_2(t)), ..., ℏ̄_η(κ̄_η(t)))^T is the neuron activation function, ℓ is bounded, and σ(t): [t_0, +∞) → N = {1, 2, ..., N} is a piecewise constant function of time t, called the switching signal, which activates a specific subsystem. There are N neural network subsystems. The corresponding switching sequence is represented as σ(t): {(t_0, σ(t_0)), ..., (t_ı, σ(t_ı)), ... | σ(t_ı) ∈ N, ı = 0, 1, ...}, where t_0 is the initial time and t_ı is the ıth switching instant. At the same time, σ(t_ı) indicates which subsystem is activated. For any ı, the pair (B_σ, C_σ) belongs to the finite set {(B_1, C_1), (B_2, C_2), ..., (B_N, C_N)}.
In this article, it is assumed that the switching signal σ(t) is not known a priori. Let σ(t_ı) = ı and define the indicator function ξ(t) = (ξ_1(t), ξ_2(t), ..., ξ_N(t))^T, where, for ı = 1, 2, ..., N,

ξ_ı(t) = 1 when the switched system is described by the ıth mode (B_{σ(t_ı)}, C_{σ(t_ı)}), and ξ_ı(t) = 0 otherwise.
Now, CGNN (3.2) with the switching signal can be rewritten as
˙ˉκ(t)=α(ˉκ(t)){−β(ˉκ(t))+N∑ı=1ξı(t)[Bσ(tı)ˉℏ(ˉκ(t))+Cσ(tı)ˉℏ(ˉκ(t−ℓ(t))]}, | (3.3) |
and it follows that Σ_{ı=1}^{N} ξ_ı(t) = 1.
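A minimal sketch (our own construction, with hypothetical switching instants and modes) of a piecewise-constant switching signal σ(t) and the indicator vector ξ(t) used in (3.3):

```python
import numpy as np

switch_times = [0.0, 1.0, 2.5, 4.0]   # hypothetical switching instants t_0 < t_1 < ...
modes = [1, 3, 2, 1]                   # sigma(t_k): subsystem active on [t_k, t_{k+1})
N = 3

def sigma(t):
    k = np.searchsorted(switch_times, t, side="right") - 1
    return modes[max(k, 0)]

def xi(t):
    # indicator vector: xi_i(t) = 1 iff subsystem i is active; entries sum to 1
    v = np.zeros(N)
    v[sigma(t) - 1] = 1.0
    return v

print(sigma(3.0), xi(3.0), xi(3.0).sum())   # mode 2 is active; the indicator sums to 1
```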
Translating the equilibrium point of system (2.1) to the origin, i.e., (x̂*_1, x̂*_2, ..., x̂*_η) to (0, 0, ..., 0), we have κ̄(0) = 0. By Assumption 2.2,
[β(κ̄(t)) − β(κ̄(0))] / [κ̄(t) − κ̄(0)] ≥ μ_i > 0,   i = 1, 2, ..., η,
and hence,
β(ˉκ(t))≥μiˉκ(t). |
Now,
˙ˉκ(t)≤−α(ˉκ(t))μiˉκ(t)+α(ˉκ(t))N∑ı=1ξı(t)Bσ(tı)ˉℏ(ˉκ(t))+α(ˉκ(t))N∑ı=1ξı(t)Cσ(tı)ˉℏ(ˉκ(t−ℓ(t)))≤−μiα(ˉκ(t))ˉκ(t)+α(ˉκ(t))Bσ(tı)ˉℏ(ˉκ(t))+α(ˉκ(t))Cσ(tı)ˉℏ(ˉκ(t−ℓ(t))), | (3.4) |
where ˉκ(t)∈Rη is the state vector, μiα(ˉκ(t)), α(ˉκ(t))Bσ(tı), α(ˉκ(t))Cσ(tı) are known continuous function matrices.
The time-varying delay ℓ(t) satisfies
0 ≤ ℓ_0 ≤ ℓ(t) ≤ ℓ_2,   ρ_1 ≤ ˙ℓ(t) ≤ ρ_2,   (3.5)

where ℓ_0, ℓ_2, ρ_1, and ρ_2 > 0 are known constants.
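For instance, a sinusoidal delay profile of the following form satisfies the bounds (3.5); the particular shape is an illustrative choice, not one prescribed by the paper.

```python
import numpy as np

l0, l2, rho = 0.0, 1.9, 0.8            # illustrative bounds (cf. the rho = 0.8 case of Table 1)
omega = 2.0 * rho / (l2 - l0)          # then |dl/dt| <= 0.5 * (l2 - l0) * omega = rho

def ell(t):
    return 0.5 * (l0 + l2) + 0.5 * (l2 - l0) * np.sin(omega * t)

t = np.linspace(0.0, 20.0, 2001)
dldt = np.gradient(ell(t), t)
print(ell(t).min() >= l0, ell(t).max() <= l2, np.abs(dldt).max() <= rho + 1e-3)
```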
For the activation function ˉℏl(⋅) (l=1,2,...,η_),
0 ≤ [ℏ̄_l(s_1) − ℏ̄_l(s_2)] / (s_1 − s_2) ≤ r_l,  s_1 ≠ s_2,     0 ≤ ℏ̄_l(s)/s ≤ r_l,  s ≠ 0,   (3.6)
where rl are real numbers, and R=diag{r1,r2,...,rη}.
Combining the flexible terminal interpolation method with the quadratic convex inequality helps capture more delay information; by selecting an appropriate LKF, sufficient conditions ensuring the stability of CGNN (3.2) with the switching signal are obtained.
Theorem 4.1. For given scalars ℓ0, ℓ2, ρ1, ρ2, 0<δ≤min{1,1ρ2} and ℓ(d,b)=ℓd−ℓb, ϱ=1−δ, the SCGNN (3.2) is asymptotically stable if there exist Q>0, Pi>0, J>0, diagonal matrices W1>0, W2>0, Δi>0 (i=1,2,...,2ı+1), matrices Ti,Lij (j>i=1,2,...,2ı+1−2), N,M=[M1M2], making the LMIs (4.1)-(4.4) hold:
[˜Ji−TiLij∗˜Ji−℘jTj]≥0, | (4.1) |
[˜Ji−℘iTiLij∗˜Ji−Tj]≥0,[˜JiLij∗˜Ji]≥0, | (4.2) |
Ψˉκ=[NM∗13˙˜ℓ2ı+1−12ıJ]≥0, | (4.3) |
Ψ(ℓ0)<0,Ψ(ℓ2)<0,−ℓ22d2+Ψ(ℓ0)<0, | (4.4) |
where
Ψ(ℓ(t))=Ψ0−2ΨˉF,Ψ0=He[ΩT1QΩ2+ΩT3ˆMΩ3+eT2ı+2W1U+(Re1−e2ı+2)TW2U]+ℓ2ı+1−12ıUTJU−˙˜ℓ2ı+1−12ıℓ2{2ı+1−1∑i=1εTi[˜Ji−(1−℘i)Ti]εi+2ı+1−1∑j>i=1(He[εTiLijεj]+℘2jεTiTjεi+℘2iεTjTiεj)}+2ı+1−1∑i=1{˙˜ℓi−12ıˆεTiPiˆεi−˙˜ℓi2ıˆεTi+1Piˆεi+1},ΨˉF=2ı+1−1∑i=1eT2ı+1−1+iΔi[e2ı+1−1+i−Rei]+2ı+1−1∑q=12ı+1∑p=2,p>q(e2ı+2−1+q−e2ı+2−1+p)TΔqp×[e2ı+2−1+q−e2ı+2−1+p−R(eq−ep)],Ω1=col[e1,ℓ12ıe2ı+1+1,ℓ(22ı,12ı)e2ı+1+2,...,ℓ(2ı+1−12ı,2ı+1−22ı)e2ı+1−1],Ω2=col[U,e1−˙˜ℓ12ıe2,˙˜ℓ12ıe2−˙˜ℓ22ıe3,...,˙˜ℓ2ı+1−22ıe2ı+1−1−˙˜ℓ2ı+1−12ıe2ı+1],Ω3=col[e2ı,e2ı+1], εi=col[ei−ei+1,ei+ei+1−3e2ı+1+i], ˆεi=col[ei,e2ı+2−1+i],ei=[0η×(i−1)η,Iη,0η×(3∗2ı+1−1−i)η] (i=1,2,...,3∗2ı+1−1),U=−μiα(ˉκ(t))e1+α(ˉκ(t))Bσ(tı)e2ı+2+α(ˉκ(t))Cσ(tı)e2ı+2+2ı,d2=122ı+1−1∑j>i=1d2([℘2iεTjTiεj+℘2jεTiTjεi])d2[ℓ(t)]2, ℘i=ℓ(i2ı,i−12ı)ℓ2, i=1,2,...,2ı+1−2,℘2ı+1−1=ℓ(2,2ı+1−22ı)ℓ2, ˙˜ℓi2ı=1−˙ℓi2ı, ˆM=[M1+MT1−M1+MT2∗−M2−MT2]+(ℓ(t)−ℓ2ı−12ı)N,˜Jl={diag[J,4J],l≠2ı,diag{J3,4J3},l=2ı. |
Proof. Choose the appropriate LKF:
V(t)=V1(t)+V2(t)+V3(t), | (4.5) |
where
V1(t)=˜ˉκT(t)Q˜ˉκ(t)+2η∑i=1w1i∫ˉκi(t)0ˉℏi(s)ds+2η∑i=1w2i∫ˉκi(t)0(ris−ˉℏi(s))ds,V2(t)=∫0−ℓ2ı+1−12ı∫tt+θ˙ˉκT(s)J˙ˉκ(s)dsdθ,V3(t)=2ı+1−1∑i=1∫t−ℓi−12ıt−ℓi2ıπT(s)Piπ(s)ds, |
with
˜ˉκ(t)=col[ˉκ(0),ℓ12ıˉv(12ı,0),ℓ(22ı,12ı)ˉv(22ı,12ı),...,ℓ(2ı+1−12ı,2ı+1−22ı)ˉv(2ı+1−12ı,2ı+1−22ı)],ς(t)=col[ˉκ(0),ˉκt(12ı),ˉκt(22ı),...,ˉκt(2ı+1−12ı),ˉv(12ı,0),ˉv(22ı,12ı),...,ˉv(2ı+1−12ı,2ı+1−22ı),ˉℏ(0),ˉℏt(12ı),...,ˉℏt(2ı+1−12ı)],π(s)=col[ˉκ(s),ˉℏ(s)], ˉκt(d)=ˉκ(t−d), π(d)=π(t−ℓd),ˉℏt(d)=ˉℏ(ˉκ(t−d)), ℓ(d,b)=ℓd−ℓb, ˉv=1ℓ(d,b)∫t−ℓbt−ℓdˉκ(s)ds. |
Find the derivatives of V1,V2,V3 along the trajectory of the system (3.2), respectively.
˙V1(t)=˙˜ˉκT(t)Q˜ˉκ(t)+˜ˉκT(t)Q˙˜ˉκ(t)+2W1ˉℏ(ˉκ(t))˙ˉκ(t)+2(Rˉκ(t)−ˉℏ(ˉκ(t)))TW2˙ˉκ(t)=2˜ˉκT(t)Q˙˜ˉκ(t)+2ˉℏT(ˉκ(t))W1˙ˉκ(t)+2(Rˉκ(t)−ˉℏ(ˉκ(t)))TW2˙ˉκ(t)=2˜ˉκT(t)Q˙˜ˉκ(t)+2ˉℏT(ˉκ(t))W1[−μiα(ˉκ(t))ˉκ(t)+α(ˉκ(t))Bσ(tı)ˉℏ(ˉκ(t))+α(ˉκ(t))Cσ(tı)ˉℏ(ˉκ(t−ℓ(t)))]+2(Rˉκ(t)−ˉℏ(ˉκ(t)))TW2[−μiα(ˉκ(t))ˉκ(t)+α(ˉκ(t))Bσ(tı)ˉℏ(ˉκ(t))+α(ˉκ(t))Cσ(tı)ˉℏ(ˉκ(t−ℓ(t)))]=ςT(t)[ΩT1QΩ2+ΩT2QΩ1]ς(t)+ςT(t)[eT2k+2W1U+UTW1e2ı+2]ς(t)+ςT(t)[(Re1−e2ı+2)TW2U+UTW2(Re1−e2ı+2)T]ς(t)=ςT(t)He[ΩT1QΩ2+eT2ı+2W1U+(Re1−e2ı+2)TW2U]ς(t), | (4.6) |
˙V2(t)=∫ℓ2ı+1−12ı0(˙ˉκT(t)J˙ˉκ(t)−[˙ˉκT(t+θ)J˙ˉκ(t+θ)])dθ=ℓ2ı+1−12ı˙ˉκT(t)J˙ˉκ(t)−˙˜ℓ2ı+1−12ı∫tt−ℓ2ı+1−12ı˙ˉκT(s)J˙ˉκ(s)ds=−˙˜ℓ2ı+1−12ı∫tt−ℓ2ı+1−12ı˙ˉκT(s)J˙ˉκ(s)ds+ℓ2ı+1−12ıςT(t)UTJUς(t). | (4.7) |
Now, we use the Wirtinger integral inequality and the quadratic reciprocally convex inequality (Lemma 2.1) to estimate the integral term in formula (4.7).
−∫tt−ℓ2ı+1−12ı˙ˉκT(s)J˙ˉκ(s)ds=−2ı+1−1∑i=1,i≠2ı∫t−ℓi−12ıt−ℓi2ı˙ˉκT(s)J˙ˉκ(s)ds−∫t−ℓ2ı−12ıt−ℓ2ı2ı˙ˉκT(s)J˙ˉκ(s)ds=−2ı+1−1∑i=1,i≠2ı1ℓ(i2ı,i−12ı)ˉvT(i2ı,i−12ı)Jˉv(i2ı,i−12ı)−∫t−ℓ2ı−12ıt−ℓ2ı2ı˙ˉκT(s)J˙ˉκ(s)ds≤−2ı+1−1∑i=1ςT(t)[1ℓ(i2ı,i−12ı)εi¯Jiεi]ς(t)−∫t−ℓ2ı−12ıt−ℓ(t)˙ˉκT(s)J3˙ˉκ(s)ds≤−1ℓ22ı+1−1∑i=1ςT(t)[1γiεi¯Jiεi]ς(t)−∫t−ℓ2ı−12ıt−ℓ(t)˙ˉκT(s)J3˙ˉκ(s)ds≤−1ℓ2{2ı+1−1∑i=1ςT(t)εTi[¯Ji−(1−γi)Ti]εi+2ı+1−1∑j>i=1(He[εTiLijεi]+℘2jεTiTjεi+℘2iεTjTiεj)}ς(t)−∫t−ℓ2ı−12ıt−ℓ(t)˙ˉκT(s)J3˙ˉκ(s)ds, | (4.8) |
then, ℓ2ı2ı=ℓ1=ℓ(t),
˙V3(t)=2ı+1−1∑i=1{(1−˙ℓi−12ı)πT(t−ℓi−12ı)Piπ(t−ℓi−12ı)−[(1−˙ℓi2ı)πT(t−ℓi2ı)Piπ(t−ℓi2ı)]}=2ı+1−1∑i=1{˙ℓi−12ıπT(ℓi−12ı)Piπ(ℓi−12ı)−˙˜ℓi2ıπT(ℓi2ı)Piπ(ℓi2ı)}=2ı+1−1∑i=1{˙˜ℓi−12ıςT(t)ˆεTiPiˆεiς(t)−˙˜ℓi2ıςT(t)ˆεTi+1Piˆεi+1ς(t)}=ςT(t)2ı+1−1∑i=1[˙˜ℓi−12ıˆεTiPiˆεi−˙˜ℓi2ıˆεTi+1Piˆεi+1]ς(t). | (4.9) |
For matrices M_1 and M_2 of appropriate dimensions, the Newton-Leibniz formula gives
2[ˉκTt(2ı−12ı)MT1+ˉκT(t−ℓ(t))MT2]×[ˉκt(2ı−12ı)−ˉκ(t−ℓ(t))−∫t−ℓ2ı−12ıt−ℓ(t)˙ˉκ(s)ds]=0, | (4.10) |
and for any matrix N, the zero equation below holds:
∫t−ℓ2ı−12ıt−ℓ(t)ςT1(t)Nς1(t)ds=(ℓ(t)−ℓ2ı−12ı)ςT1(t)Nς1(t), |
which is equivalent to
(ℓ(t)−ℓ2ı−12ı)ςT1(t)Nς1(t)−∫t−ℓ2ı−12ıt−ℓ(t)ςT1(t)Nς1(t)ds=0. | (4.11) |
Here, ς_1^T(t) = [κ̄_t((2^ı−1)/2^ı), κ̄(t−ℓ(t))]. Combining (4.6)-(4.11) then yields
˙V(t)≤ςT(t){He[ΩT1QΩ2+eT2ı+2W1U+(Re1−e2ı+2)TW2U]+2ı+1−1∑i=1[˙˜ℓi−12ıˆεTiPiˆεi−˙˜ℓi2ıˆεTi+1Piˆεi+1]+ℓ2ı+12ıUTJU−1ℓ22ı+1−1∑i=1εTi[¯Ji−(1−℘i)Ti]εi−1ℓ22ı+1−1∑j>i=1(He[εTiLijεi]+℘2jεTiTjεi+℘2iεTjTiεj)}ς(t)−∫t−ℓ2ı−12ıt−ℓ(t)˙ˉκT(s)J3˙ˉκ(s)ds≤ςT(t)Ψ0ς(t)−ςT(t)He[πT3ˆMπ3]ς(t)−∫t−ℓ2ı−12ıt−ℓ(t)˙ˉκT(s)J3˙ˉκ(s)ds. | (4.12) |
Based on formula (3.6), for any positive diagonal matrix Δ_i = diag{λ_1, λ_2, ..., λ_η}, we have
0≤−2ˉℏT(s)Δ[ˉℏ(s)−Rs]. |
Let s be t, t−ℓ_{1/2^ı}, t−ℓ_{2/2^ı}, ..., t−ℓ_{(2^{ı+1}−1)/2^ı}, and replace Δ with Δ_1, Δ_2, ..., Δ_{2^{ı+1}}, respectively; we then obtain
0≤−2ςT(t)eT2ı+2−1+iΔi[e2ı+2−1+i−Rei]ς(t). |
Here, i = 1, 2, ..., 2^{ı+1}, and summing gives
0≤−2ςT(t)2ı+1∑i=1eT2ı+1−1+iΔi[e2ı+1−1+i−Rei]ς(t), | (4.13) |
and for an arbitrary positive diagonal matrix Δ̄ = diag{λ̄_1, λ̄_2, ..., λ̄_η}, one obtains
0≤−2(ˉℏ(s1)−ˉℏ(s2))TˉΔ[ˉℏ(s1)−ˉℏ(s2)−R(s1−s2)]. |
Let s_1, s_2 range over t, t−ℓ_{1/2^ı}, t−ℓ_{2/2^ı}, ..., t−ℓ_{(2^{ı+1}−1)/2^ı}, and replace Δ̄ with Δ_qp, where q = 1, 2, ..., 2^{ı+1}−1, p = 2, 3, ..., 2^{ı+1}, and p > q. Then
0≤−2ςT(t)2ı+1−1∑q=12ı+1∑p=2,p>q[e2ı+2−1+q−e2ı+2−1+p]TΔqp×[e2ı+2−1+q−e2ı+2−1+p−K(eq−ep)]ς(t). | (4.14) |
Therefore, combining (4.12)-(4.14) with the zero equations (4.10) and (4.11), we obtain
˙V(t)≤ςT(t)Ψ(t)ς(t)−∫t−ℓ2ı−12ıt−ℓ(t)ςT2(t,s)Ψˉκς2(t,s)ds≤ςT(t)Ψ(t)ς(t), | (4.15) |
where ς_2(t,s) = [ς_1(t,s), ˙κ̄(s)].
Now, define Ψ(t) = d_2 ℓ²(t) + d_1 ℓ(t) + d_0, where d_1 and d_0 are matrices of suitable dimensions (i.e., free matrices). When Ψ(t) satisfies condition (4.4) of Theorem 4.1, Ψ(t) < 0 for all t ∈ [0, ℓ], that is, system (3.2) is asymptotically stable. The proof is as follows:
Proof. When d_2 ≥ 0, Ψ(t) is a quadratic function opening upward, that is, a convex function.
By the property of the convex function, the tangent of its crossing point (ℓ,Ψ(ℓ)) is expressed as
Ψ(t)−Ψ(ℓ)=˙Ψ(ℓ)(t−ℓ),Ψ(t)=˙Ψ(ℓ)(t−ℓ)+Ψ(ℓ), | (4.16) |
and from (4.4), Ψ(0) < 0 and Ψ(ℓ) < 0, hence Ψ(t) < 0.
When d_2 < 0, Ψ(t) is a quadratic function opening downward, that is, a concave function. Choose any t ∈ [0, ℓ]; then
˙Ψ(ℓ)≤Ψ(t)−Ψ(ℓ)t−ℓ, | (4.17) |
˙Ψ(ℓ)(t−ℓ) ≥ Ψ(t) − Ψ(ℓ),
Ψ(t) ≤ ˙Ψ(ℓ)(t−ℓ) + Ψ(ℓ) ≤ (2d_2ℓ + d_1)(t−ℓ) + d_2ℓ² + d_1ℓ + d_0 ≤ (2d_2ℓ + d_1)t − d_2ℓ² + d_0,
Γ(t) = (2d_2ℓ + d_1)t − d_2ℓ² + d_0,
Γ(0) = −d_2ℓ² + d_0 = −d_2ℓ² + Ψ(0),
and by (4.4), we get Γ(0) < 0.
At the same time, Γ(ℓ)=Ψ(ℓ)<0; therefore, for any t∈[0,ℓ], Ψ(t)<0.
We know that ˙V(t) ≤ ς^T(t) Ψ(t) ς(t) and Ψ(t) < 0, so ˙V(t) ≤ 0, that is, ˙V(t) is negative. By Lyapunov's second theorem, system (3.2) is asymptotically stable.
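The endpoint argument above can be illustrated with scalar stand-ins for the matrix conditions: a quadratic Ψ(s) = d_2 s² + d_1 s + d_0 is negative on [0, ℓ] whenever Ψ(0) < 0, Ψ(ℓ) < 0, and −d_2 ℓ² + Ψ(0) < 0. A brief numerical check (our own sketch):

```python
import numpy as np

def endpoint_test(d2, d1, d0, ell):
    # scalar analogues of the three conditions in (4.4)
    psi0, psiell = d0, d2 * ell**2 + d1 * ell + d0
    return psi0 < 0 and psiell < 0 and -d2 * ell**2 + psi0 < 0

def negative_on_interval(d2, d1, d0, ell, n=10001):
    s = np.linspace(0.0, ell, n)
    return bool(np.all(d2 * s**2 + d1 * s + d0 < 0))

# Concave case (d2 < 0): the endpoint test certifies negativity on the whole interval
d2, d1, d0, ell = -1.0, 0.5, -1.5, 1.0
print(endpoint_test(d2, d1, d0, ell), negative_on_interval(d2, d1, d0, ell))   # True True
```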
Remark 4.1. To prove that the equilibrium point of the SCGNN exists and is unique, only the connection weight matrices governed by the switching rule need to be handled, that is, B̃ = B_σ(t) = (b_ıj)_{η×η} and C̃ = C_σ(t) = (c_ıj)_{η×η} with σ(t) = ı, ı = 1, 2, ..., N, whose values may change over time. On each switching interval, however, the switching rule is fixed, so the connection weight matrices remain unchanged for a period of time, which is equivalent to a CGNN without a switching system; hence the proof is consistent with Theorem 2.1 of this paper.
Example 1. Take N = 3 and consider the SCGNN model with three subsystems:
˙ˆxi(t)=di(ˆxi(t)){−ai(ˆxi(t))+N∑ı=1ξı(t)[BıˉFj(ˆxj(t))+CıˉFj(ˆxj(t−ℓj(t)))]}. | (5.1) |
Among them, the neural network system parameters are

D(x̂(t)) = diag(2 + sin²(x̂_1), 2 + cos²(x̂_2), 2 + tanh²(x̂_3)),   a_i(x̂_i(t)) = x̂_i(t),   F̄_j(x̂_j(t)) = tanh(x̂_j(t)),   i, j = 1, 2, 3,
and the connection weight matrix is the following:
Subsystem 1:
B_1 = [ −0.1, −0.2, −0.2;  0.1, 0.3, −0.4;  0.2, 0.4, −0.3 ],   C_1 = [ −0.1, −0.3, −1.0;  1.3, −0.2, −0.4;  1.2, 1.1, −0.2 ].
Subsystem 2:
B_2 = [ −0.2, −0.6, −0.4;  0.2, 0.1, −0.1;  0.3, 0.5, 0.3 ],   C_2 = [ −0.25, 2.0, −0.7;  0.9, 0.4, −0.5;  0.3, 0.2, 0.2 ].
Subsystem 3:
B_3 = [ 0.1, −0.6, −0.4;  0.0, −0.2, 1.0;  0.3, 0.2, −0.3 ],   C_3 = [ −0.12, −0.1, 0.4;  −0.2, 0.15, 0.3;  0.33, −0.2, −0.4 ].
In order to satisfy Assumptions 2.1-2.3, take the following parameters: l̲_i = 1, l̄_i = 2, μ_i = 1.2, k_i = 1, r_1 = 0.4, r_2 = 0.8.
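The following Euler-discretized simulation sketch (our own reproduction attempt; the step size, constant delay, initial state, and periodic switching schedule are illustrative choices not specified above, and the weight matrices are read row by row from the data) integrates example (5.1) and prints the final state norm:

```python
import numpy as np

B = {1: np.array([[-0.1, -0.2, -0.2], [0.1, 0.3, -0.4], [0.2, 0.4, -0.3]]),
     2: np.array([[-0.2, -0.6, -0.4], [0.2, 0.1, -0.1], [0.3, 0.5, 0.3]]),
     3: np.array([[0.1, -0.6, -0.4], [0.0, -0.2, 1.0], [0.3, 0.2, -0.3]])}
C = {1: np.array([[-0.1, -0.3, -1.0], [1.3, -0.2, -0.4], [1.2, 1.1, -0.2]]),
     2: np.array([[-0.25, 2.0, -0.7], [0.9, 0.4, -0.5], [0.3, 0.2, 0.2]]),
     3: np.array([[-0.12, -0.1, 0.4], [-0.2, 0.15, 0.3], [0.33, -0.2, -0.4]])}

h, T = 1e-3, 20.0                      # Euler step and horizon (illustrative)
ell = 0.5                              # constant delay within the admissible range (illustrative)
d_steps = int(ell / h)

def D_of(x):                           # amplification matrix from the example
    return np.diag([2 + np.sin(x[0])**2, 2 + np.cos(x[1])**2, 2 + np.tanh(x[2])**2])

def sigma(t):                          # hypothetical periodic switching 1 -> 2 -> 3 -> 1 ...
    return 1 + int(t // 2.0) % 3

x = np.full(3, 0.8)                    # illustrative initial state
hist = [x.copy()] * (d_steps + 1)      # constant initial history on [-ell, 0]
for k in range(int(T / h)):
    mode = sigma(k * h)
    xd = hist[-(d_steps + 1)]          # delayed state x(t - ell)
    rhs = D_of(x) @ (-x + B[mode] @ np.tanh(x) + C[mode] @ np.tanh(xd))
    x = x + h * rhs
    hist.append(x.copy())

print(np.linalg.norm(x))               # expected to be near zero if the trajectory converges, as in Figures 2-7
```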
Figures 2-7 depict the simulation results for customized initial values under the three subsystems. Based on Theorem 4.1, the system is asymptotically stable.
When −ρ ≤ ˙ℓ(t) ≤ ρ, i.e., −ρ_1 = ρ_2 = ρ, according to the conditions of Theorem 4.1 we use the LMI toolbox to compute the maximum allowable upper bounds (MAUBs) of the time delay; see Table 1.
Table 1. Maximum allowable upper bounds (MAUBs) of the time delay for different ρ.

ρ                   | 0.8    | 0.9    | unknown
Theorem 4.1 (ı = 1) | 1.9384 | 1.4275 | 1.3128
Theorem 4.1 (ı = 2) | 2.1139 | 1.5965 | 1.4902
As can be seen from Table 1, when ˙ℓ(t) = ρ = 0.8, ð = 0.642, ℓ_0 = 0, q_1 = q_2 = 0, the maximum delay obtained with two interpolation steps is greater than that obtained with one, which shows that the flexible terminal interpolation method captures more time-delay information and thus has an advantage in reducing conservatism.
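In practice, the MAUBs in Table 1 are obtained by checking feasibility of LMIs (4.1)-(4.4) for increasing ℓ_2. A generic bisection wrapper is sketched below; `lmi_feasible` is a hypothetical placeholder for an SDP feasibility check of the theorem's conditions, not an implementation of them.

```python
def max_allowable_delay(lmi_feasible, lo=0.0, hi=10.0, tol=1e-3):
    """Bisection for the largest ell_2 at which the stability LMIs remain feasible.

    `lmi_feasible(ell2)` is a hypothetical user-supplied oracle returning True when
    LMIs (4.1)-(4.4) admit a solution for that delay bound (e.g. via an SDP solver).
    Feasibility is assumed monotone: feasible for small ell_2, infeasible past the MAUB.
    """
    if not lmi_feasible(lo):
        return None
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if lmi_feasible(mid) else (lo, mid)
    return lo

# Toy oracle that is feasible exactly up to 1.9384 (the Table 1 entry for rho = 0.8, i = 1)
print(max_allowable_delay(lambda e: e <= 1.9384))
```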
This paper analyzes the CGNN with time-varying delay and adds a switching system to the CGNN to study the asymptotic stability of the SCGNN. Starting from the existence and uniqueness of the CGNN equilibrium point, the equilibrium can be conveniently translated to the origin. In order to capture more time-delay information, a flexible terminal interpolation method is adopted, an LKF containing more time-delay information is constructed, and it is estimated using a quadratic convex inequality. In addition, based on linear matrix inequalities, a new criterion for the asymptotic stability of the SCGNN is obtained. Finally, numerical examples and simulation results show that the system is asymptotically stable under the derived criterion.
Compared with this paper, a recent related work [34] uses a quadratic inequality of real vectors and the LKF method to study the stability of a class of CGNNs with neutral delay terms and discrete time delays. That CGNN system is more specialized and complex, fully accounting for the uncertainties and interference factors arising in practical applications, and is therefore of strong practical significance. In addition to the stability of such complex systems, their limit behavior and singularities are worth further exploration.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
This work was supported by the Natural Science Foundation of China (62072164 and 11704109).
The authors declare no conflicts of interest.
[1] T. Abdeljawad, M. Grossman, On geometric fractional calculus, J. Semigroup Theory Appl., 2016 (2016), 2.
[2] D. Baleanu, Z. B. Guvenlç, J. A. T. Machado, New trends in nanotechnology and fractional calculus applications, New York: Springer, 2010. https://doi.org/10.1007/978-90-481-3293-5
[3] A. A. Kilbas, H. M. Srivastava, J. J. Trujillo, Theory and applications of fractional differential equations, New York: Elsevier, 2006.
[4] K. S. Miller, B. Ross, An introduction to fractional calculus and fractional differential equations, New York: Wiley, 1993.
[5] K. B. Oldham, J. Spanier, The fractional calculus theory and applications of differentiation and integration to arbitrary order, New York: Academic Press, 1974. https://doi.org/10.1016/S0076-5392(09)60219-8
[6] I. Podlubny, Fractional differential equations: An introduction to fractional derivatives, fractional differential equations, to methods of their solution and some of their applications, New York: Academic Press, 1998.
[7] T. Abdeljawad, On conformable fractional calculus, J. Comput. Appl. Math., 279 (2015), 57–66. https://doi.org/10.1016/j.cam.2014.10.016
[8] R. Khalil, M. Al Horani, A. Yousef, M. Sababheh, A new definition of fractional derivative, J. Comput. Appl. Math., 264 (2014), 65–70. https://doi.org/10.1016/j.cam.2014.01.002
[9] A. Atangana, D. Baleanu, A. Alsaedi, New properties of conformable derivative, Open Math., 13 (2015), 889–898. https://doi.org/10.1515/math-2015-0081
[10] T. Gülşen, E. Yilmaz, H. Kemaloǵlu, Conformable fractional Sturm-Liouville equation and some existence results on time scales, Turk. J. Math., 42 (2018), 1348–1360. https://doi.org/10.3906/mat-1704-120
[11] J. T. Machado, V. Kiryakova, F. Mainardi, Recent history of fractional calculus, Commun. Nonlinear Sci. Numer. Simul., 16 (2011), 1140–1153. https://doi.org/10.1016/j.cnsns.2010.05.027
[12] M. D. Ortigueira, J. A. T. Machado, What is a fractional derivative?, J. Comput. Phys., 293 (2015), 4–13. https://doi.org/10.1016/j.jcp.2014.07.019
[13] R. Kumar, S. Kumar, S. Kaur, S. Jain, Time fractional generalized Korteweg-de Vries equation: Explicit series solutions and exact solutions, J. Frac. Calc. Nonlinear Sys., 2 (2021), 62–77. https://doi.org/10.48185/jfcns.v2i2.315
[14] R. Ferreira, Generalized discrete operators, J. Frac. Calc. Nonlinear Sys., 2 (2021), 18–23. https://doi.org/10.48185/jfcns.v2i1.279
[15] M. Grossman, An introduction to Non-Newtonian calculus, Int. J. Math. Educ. Sci. Technol., 10 (1979), 525–528. https://doi.org/10.1080/0020739790100406
[16] M. Grossman, R. Katz, Non-Newtonian calculus, Pigeon Cove, MA: Lee Press, 1972.
[17] A. E. Bashirov, E. M. Kurpinar, A. Ozyapici, Multiplicative calculus and its applications, J. Math. Anal. Appl., 337 (2008), 36–48. https://doi.org/10.1016/j.jmaa.2007.03.081
[18] A. E. Bashirov, M. Riza, On complex multiplicative differentiation, TWMS J. Appl. Eng. Math., 1 (2011), 75–85.
[19] K. Boruah, B. Hazarika, G-calculus, TWMS J. Appl. Eng. Math., 8 (2018), 94–105.
[20] D. A. Stanley, A multiplicative calculus, Primus IX, 9 (1999), 310–326.
[21] D. Aniszewska, Multiplicative Runge-Kutta methods, Nonlinear Dyn., 50 (2007), 265–272. https://doi.org/10.1007/s11071-006-9156-3
[22] D. Aniszewska, M. Rybaczuk, Chaos in multiplicative systems, Chaotic Syst., 2010, 9–16. https://doi.org/10.1142/9789814299725_0002
[23] A. E. Bashirov, G. Bashirova, Dynamics of literary texts and diffusion, Online J. Commun. Media Technol., 1 (2011), 60–82.
[24] A. E. Bashirov, E. Misirli, Y. Tandogdu, A. Ozyapici, On modeling with multiplicative differential equations, Appl. Math. J. Chin. Univ., 26 (2011), 425–438. https://doi.org/10.1007/s11766-011-2767-6
[25] A. Benford, The Law of anomalous numbers, Proc. Am. Phil. Soc., 78 (1938), 551–572.
[26] M. Cheng, Z. Jiang, A new class of production function model and its application, J. Syst. Sci. Inf., 4 (2016), 177–185. https://doi.org/10.21078/JSSI-2016-177-09
[27] D. Filip, C. Piatecki, A non-Newtonian examination of the theory of exogenous economic growth, Math. Aeterna, 4 (2014), 101–117.
[28] L. Florack, H. van Assen, Multiplicative calculus in biomedical image analysis, J. Math. Imaging Vis., 42 (2012), 64–75. https://doi.org/10.1007/s10851-011-0275-1
[29] H. Özyapıcı, İ. Dalcı, A. Özyapıcı, Integrating accounting and multiplicative calculus: An effective estimation of learning curve, Comput. Math. Org. Theory, 23 (2017), 258–270. https://doi.org/10.1007/s10588-016-9225-1
[30] N. Yalcin, The solutions of multiplicative Hermite differential equation and multiplicative Hermite polynomials, Rend. Circ. Mat. Palermo II Ser., 70 (2021), 9–21. https://doi.org/10.1007/s12215-019-00474-5
[31] N. Yalcin, E. Celik, Solution of multiplicative homogeneous linear differential equations with constant exponentials, New Trend Math. Sci., 6 (2018), 58–67. http://dx.doi.org/10.20852/ntmsci.2018.270
[32] N. Yalcin, M. Dedeturk, Solutions of multiplicative ordinary differential equations via the multiplicative differential transform method, AIMS Mathematics, 6 (2021), 3393–3409. https://doi.org/10.3934/math.2021203
[33] E. Yilmaz, Multiplicative Bessel equation and its spectral properties, Ricerche Mat., 2021. https://doi.org/10.1007/s11587-021-00674-1
[34] Z. Zhao, T. Nazir, Existence of common coupled fixed points of generalized contractive mappings in ordered multiplicative metric spaces, Electron. J. Appl. Math., 1 (2023), 1–15. https://doi.org/10.61383/ejam.20231341
[35] B. Meftah, H. Boulares, A. Khan, T. Abdeljawad, Fractional multiplicative Ostrowski-type inequalities for multiplicative differentiable convex functions, Jordan J. Math. Stat., 17 (2024), 113–128. https://doi.org/10.47013/17.1.7
[36] S. Goktas, A new type of Sturm-Liouville equation in the Non-Newtonian calculus, J. Funct. Spaces, 2021 (2021), 5203939. https://doi.org/10.1155/2021/5203939
[37] S. Goktas, E. Yilmaz, A. C. Yar, Multiplicative derivative and its basic properties on time scales, Math. Methods Appl. Sci., 45 (2022), 2097–2109. https://doi.org/10.1002/mma.7910
[38] S. Goktas, H. Kemaloglu, E. Yilmaz, Multiplicative conformable fractional Dirac system, Turk. J. Math., 46 (2022), 973–990. https://doi.org/10.55730/1300-0098.3136
[39] M. Al-Refai, T. Abdeljawad, Fundamental results of conformable Sturm-Liouville eigenvalue problems, Complexity, 2017 (2017), 3720471. https://doi.org/10.1155/2017/3720471
[40] B. P. Allahverdiev, H. Tuna, Y. Yalçinkaya, Conformable fractional Sturm-Liouville equation, Math. Methods Appl. Sci., 42 (2019), 3508–3526. https://doi.org/10.1002/mma.5595
[41] M. Klimek, O. P. Agrawal, Fractional Sturm-Liouville problem, Comput. Math. Appl., 66 (2013), 795–812. https://doi.org/10.1016/j.camwa.2012.12.011
[42] M. Rivero, J. J. Trujillo, M. P. Velasco, A fractional approach to the Sturm-Liouville problem, Cent. Eur. J. Phys., 11 (2013), 1246–1254. https://doi.org/10.2478/s11534-013-0216-2
[43] U. Kadak, Y. Gurefe, A generalization on weighted means and convex functions with respect to the Non-Newtonian calculus, Int. J. Anal., 2016 (2016), 5416751. https://doi.org/10.1155/2016/5416751
[44] S. Goktas, Multiplicative conformable fractional differential equations, Turk. J. Sci. Tech., 17 (2022), 99–108. https://doi.org/10.55525/tjst.1065429