
Let $G$ be a graph and let $\delta$ be a distribution of pebbles on $G$. A pebbling move on $G$ consists of removing two pebbles from one vertex and placing one pebble on an adjacent vertex. Given a positive integer $d$, if pebbles can be moved to any target vertex $v$ in $G$ only from the vertices in the set $N_d[v]=\{u\in V(G):d(u,v)\le d\}$ by pebbling moves, where $d(u,v)$ is the distance between $u$ and $v$, then the pebbling played on $G$ is said to be distance $d$-restricted. For each target vertex $v\in V(G)$, we write $m(\delta,d,v)$ for the maximum number of pebbles that can be moved to $v$ using only the vertices in $N_d[v]$. If $m(\delta,d,v)\ge t$ for each $v\in V(G)$, then $\delta$ is said to be $(d,t)$-solvable. The optimal $(d,t)$-pebbling number of $G$, denoted $\pi^*_{(d,t)}(G)$, is the minimum number of pebbles for which a $(d,t)$-solvable distribution of $G$ exists. In this article, we study distance 2-restricted pebbling in cycles and show that for any $n$-cycle $C_n$ with $n\ge 6$, $\pi^*_{(2,t)}(C_n)=\pi^*_{(2,t-10)}(C_n)+4n$ for $t\ge 13$. It follows that if $n\ge 6$, then $\pi^*_{(2,10k+r)}(C_n)=\pi^*_{(2,r)}(C_n)+4kn$ for $k\ge 1$ and $3\le r\le 12$. Consequently, for $n\ge 6$, determining the exact value of $\pi^*_{(2,t)}(C_n)$ for all $t\ge 1$ reduces to determining the exact value of $\pi^*_{(2,r)}(C_n)$ for $r\in[1,12]$. We also consider $C_n$ with $3\le n\le 5$. When $n=3$, we have $\pi^*_{(2,t)}(C_3)=\pi^*_{(1,t)}(C_3)$, since the diameter of $C_3$ is one, and the exact value of $\pi^*_{(1,t)}(C_3)$ is known. When $n=4,5$, we determine the exact value of $\pi^*_{(2,t)}(C_n)$ for $t\ge 1$.
Citation: Chin-Lin Shiue, Tzu-Hsien Kwong. Distance 2-restricted optimal pebbling in cycles[J]. AIMS Mathematics, 2025, 10(2): 4355-4373. doi: 10.3934/math.2025201
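The quantities defined above can be checked by brute force on small cycles. The sketch below is our illustration, not part of the paper: all function names are ours, and we read the distance restriction as requiring both endpoints of every move to lie in $N_d[v]$.

```python
def cycle_dist(n, a, b):
    """Distance between vertices a and b on the n-cycle C_n."""
    d = abs(a - b) % n
    return min(d, n - d)

def max_pebbles_to_target(n, dist, d, v):
    """Brute-force value of m(delta, d, v) on C_n: the most pebbles that can be
    moved to v using only moves whose endpoints lie in N_d[v].
    `dist` is a tuple giving the pebble count on each vertex."""
    allowed = {u for u in range(n) if cycle_dist(n, u, v) <= d}
    memo = {}
    def dfs(state):
        if state in memo:
            return memo[state]
        best = state[v]
        for u in allowed:
            if state[u] >= 2:                      # a move needs two pebbles
                for w in ((u - 1) % n, (u + 1) % n):
                    if w in allowed:
                        nxt = list(state)
                        nxt[u] -= 2                # remove two from u ...
                        nxt[w] += 1                # ... place one on a neighbour
                        best = max(best, dfs(tuple(nxt)))
        memo[state] = best
        return best
    return dfs(tuple(dist))

def is_d_t_solvable(n, dist, d, t):
    """delta is (d, t)-solvable iff m(delta, d, v) >= t for every target v."""
    return all(max_pebbles_to_target(n, dist, d, v) >= t for v in range(n))
```

For example, on $C_6$ the distribution placing four pebbles on a single vertex is not $(2,1)$-solvable: a target at distance 3 lies outside the reach of $N_2[v]$, so no pebble can ever arrive there.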
The inertial neural network system is a class of second-order delay differential equations proposed by Babcock and Westervelt [1]. It is obtained by introducing an inertial term into multi-directional associative memory neural networks, and it is widely used in optimization, associative memory, image processing, psychophysics, and adaptive pattern recognition [1]. It is therefore of great significance to study the dynamic behaviors of such systems, including stability [2,3,4,5,6], dissipation [7,8,9], Hopf bifurcation [10,11,12], Lagrange stability [13,14,15], and synchronization [16,17,18,19,20]. It is worth noting that the dynamics analysis of inertial neural networks is usually carried out by converting them into a first-order differential system via a reduced-order variable substitution, under the assumption that the activation functions are bounded [21,22,23,24]. In particular, the periodicity, stability, and convergence of inertial neural network systems have been established in [25,26,27,28,29,30] by this reduced-order method. However, the method introduces new parameters and raises the dimension of the system, which greatly increases the amount of computation and is difficult to carry out in practice [3,4,20]. Therefore, the authors of [3,4,20] developed non-reduced-order methods to establish stability and synchronization conditions for inertial neural networks with constant or time-varying delays, respectively.
Because neural networks contain many parallel paths whose axons have a variety of sizes and lengths, it is necessary to introduce continuously distributed delays to describe the transmission of neuron signals. In recent years, many works have studied the dynamic behaviors of inertial neural networks with unbounded distributed delays [31,32,33,34,35,36,37]. In particular, the author of [36] used a non-reduced-order method to study the global convergence of the inertial neural networks with continuously distributed delays
$$x''_i(t)=-a_i(t)x'_i(t)-b_i(t)x_i(t)+\sum_{j=1}^{n}c_{ij}(t)\tilde{P}_j(x_j(t))+\sum_{j=1}^{n}h_{ij}(t)\int_{0}^{+\infty}K_{ij}(u)\tilde{R}_j(x_j(t-u))\,du+J_i(t),\quad i\in S:=\{1,2,\cdots,n\}, \tag{1.1}$$
and
$$x_i(s)=\varphi_i(s),\quad x'_i(s)=\psi_i(s),\quad -\infty\le s\le 0,\quad \varphi_i,\,\psi_i\in BC((-\infty,0],\mathbb{R}), \tag{1.2}$$
where $BC((-\infty,0],\mathbb{R})$ is the set of all bounded continuous functions from $(-\infty,0]$ to $\mathbb{R}$, $x(t)=(x_1(t),x_2(t),\cdots,x_n(t))$ is the state vector, $x''_i(t)$ is called the inertial term of (1.1), the time-varying connection weights $c_{ij},h_{ij}:\mathbb{R}\to\mathbb{R}$ and $a_i,b_i:\mathbb{R}\to(0,+\infty)$ are bounded continuous functions, the delay kernel $K_{ij}:[0,+\infty)\to\mathbb{R}$ is continuous, the external input $J_i(t)$ and the activation functions $\tilde{P}_j$ and $\tilde{R}_j$ are continuous, and $i,j\in S$.
Unfortunately, in the initial value condition (1.2), which was also adopted in [24,35], the assumption that
$$x'_i(s)=\psi_i(s),\quad -\infty\le s\le 0,\quad \psi_i\in BC((-\infty,0],\mathbb{R}),$$
is overly restrictive. In fact, in system (1.1) the transmission term $a_i(t)x'_i(t)$ is not affected by the delays. Combined with the theory of delay differential equations, we see that in the initial value problem (1.2) it is not necessary to assume that $x'_i(t)$ is bounded and continuous on $(-\infty,0]$, which leaves room for further improvement.
On the other hand, the dynamic characteristics of inertial neural networks are usually affected by both time-varying delays and distributed delays. It is therefore especially significant to study the following inertial neural network system with bounded time-varying delays and unbounded continuously distributed delays:
$$x''_i(t)=-a_i(t)x'_i(t)-b_i(t)x_i(t)+\sum_{j=1}^{n}c_{ij}(t)\tilde{P}_j(x_j(t))+\sum_{j=1}^{n}d_{ij}(t)\tilde{Q}_j(x_j(t-\tau_{ij}(t)))+\sum_{j=1}^{n}h_{ij}(t)\int_{0}^{+\infty}K_{ij}(u)\tilde{R}_j(x_j(t-u))\,du+J_i(t),\quad i\in S, \tag{1.3}$$
where $J_i,c_{ij},d_{ij},h_{ij}:\mathbb{R}\to\mathbb{R}$, $a_i,b_i:\mathbb{R}\to(0,+\infty)$ and $\tau_{ij}:\mathbb{R}\to\mathbb{R}^+$ are bounded continuous functions, the delay kernel $K_{ij}\in C([0,+\infty),\mathbb{R})$, the activation functions $\tilde{P}_i,\tilde{Q}_i,\tilde{R}_i$ are continuous, and $i,j\in S$.
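For intuition about systems of the form (1.3), a scalar special case can be integrated numerically. The sketch below is our illustration only: the coefficients $a=5$, $b=6$, $d=0.5$, the single delayed activation $\sin(\cdot)$, the constant history, and the step size are hypothetical choices, and the distributed-delay term is omitted. A forward-Euler scheme with a history buffer handles the bounded delay:

```python
import math

# Euler sketch of a scalar special case of (1.3):
#   x''(t) = -a x'(t) - b x(t) + d * Q(x(t - tau)),   Q(u) = sin(u),
# with the distributed-delay term dropped.  All parameter values below are
# hypothetical illustrations, not taken from the paper.
a, b, d, tau = 5.0, 6.0, 0.5, 0.2
dt, T = 0.001, 30.0
lag = int(round(tau / dt))          # the delay expressed in time steps

n = int(round(T / dt))
x = [1.0] * (n + 1)                 # constant history phi(s) = 1 on [-tau, 0]
v = 0.0                            # x'(0) = psi = 0, as in condition (1.4)
for k in range(n):
    x_delayed = x[k - lag] if k >= lag else 1.0   # look up x(t - tau)
    acc = -a * v - b * x[k] + d * math.sin(x_delayed)
    v += dt * acc                   # Euler step for the velocity x'
    x[k + 1] = x[k] + dt * v        # Euler step for the state x

final = abs(x[n])                   # with these stable coefficients, x decays to 0
```

With this strongly damped choice of $a$ and $b$, the trajectory and its derivative decay to zero, the behavior Theorem 2.1 below guarantees under (G1)–(G6).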
As is well known, the Lyapunov function for an unbounded time-delay system has a more complex structure than that for a bounded time-delay system, so the stability of the former is more difficult to establish. In particular, there are few studies on the dynamic behaviors of inertial neural networks with both bounded time-varying delays and unbounded distributed delays. So far, we have found only [38], whose authors discussed the existence and exponential stability of the periodic solution of system (1.3) with periodic input functions. However, to the best of our knowledge, no work has yet addressed the global convergence analysis of system (1.3) by a non-reduced-order method.
In view of the above discussion, in this manuscript the initial value condition (1.2) is modified to
$$x_i(s)=\varphi_i(s),\quad x'_i(0)=\psi_i,\quad -\infty\le s\le 0,\quad \varphi_i\in BC((-\infty,0],\mathbb{R}),\ \psi_i\in\mathbb{R}, \tag{1.4}$$
and the global convergence criterion for the inertial neural networks (1.3) is established by a non-reduced-order method. In a nutshell, the contributions of this paper can be summarized as follows. 1) Without assuming periodicity of the input functions, a class of inertial neural networks with bounded time-varying delays and unbounded continuously distributed delays is proposed; 2) under appropriate assumptions, a non-reduced-order approach is developed to show that all solutions of the proposed model, together with their derivatives, converge to the zero vector; 3) the initial value conditions (1.2) of [24,35,36] are relaxed to a broader setting; 4) numerical results, including comparisons, are presented to verify the theoretical results.
The structure of this paper is as follows. The global convergence of the solutions of system (1.3) and their derivatives is established in Section 2. Section 3 presents a concrete example with numerical simulations to show the feasibility of the main results. Section 4 concludes the paper and discusses further research on the topic.
In this section, we use the following Barbălat's lemma to prove the global convergence of system (1.3).
Lemma 2.1. [36] If $g(t)$ is uniformly continuous on the interval $[0,+\infty)$, and $\int_0^{+\infty}g(s)\,ds$ exists and is bounded, then $\lim_{t\to+\infty}g(t)=0$.
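To see why the uniform-continuity hypothesis of the lemma cannot be dropped, consider ever-narrower triangular spikes: the integral stays bounded while the function keeps returning to 1. The sketch below is our illustration (the spike widths and the crude Riemann sum are ad hoc choices):

```python
# Triangular spikes of height 1 and width 2**(-n) centred at each integer n >= 1:
# the total area is sum_n 2**(-n-1) = 1/2, so the integral over [0, +infinity)
# exists and is bounded, yet g(n) = 1 for every integer n, so g(t) does not tend
# to 0.  Uniform continuity is exactly the hypothesis this g violates.
def g(t):
    n = round(t)
    if n < 1:
        return 0.0
    half_width = 2.0 ** (-n) / 2.0
    return max(0.0, 1.0 - abs(t - n) / half_width)

# Midpoint Riemann-sum estimate of the integral over [0, 20]
# (the spikes beyond 20 contribute less than 2**(-20) in total).
h = 1e-4
area = sum(g((k + 0.5) * h) for k in range(int(20 / h))) * h
```

The estimate `area` stays near $1/2$ however far the sum is extended, while `g(n)` equals 1 at every integer, so both hypotheses of Lemma 2.1 are genuinely needed.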
Assumptions:
(G1) For each $j\in S$, there exist three constants $L^P_j$, $L^Q_j$ and $L^R_j$ such that
$$|\tilde{P}_j(u)|\le L^P_j|u|,\quad |\tilde{Q}_j(u)|\le L^Q_j|u|,\quad |\tilde{R}_j(u)|\le L^R_j|u|,\quad \text{for all }u\in\mathbb{R},\ j\in S.$$
(G2) For $i,j\in S$, $|K_{ij}(t)|$ is integrable on $[0,+\infty)$.
(G3) $u(t)=\max_{i\in S}|J_i(t)|$, and $U(t)=\int_0^t u(s)\,ds$ is bounded on $[0,+\infty)$.
(G4) For $i,j\in S$, $a'_i(t)$, $b'_i(t)$ and $\bigl(|c_{ij}(t)|L^P_j+|d_{ij}(t)|L^Q_j+|h_{ij}(t)|L^R_j\int_0^{+\infty}|K_{ij}(u)|\,du\bigr)'$ are bounded and continuous on $[0,+\infty)$.
(G5) There exist constants $\beta_i>0$ and $\alpha_i\ge 0$, $\gamma_i\ge 0$ satisfying
$$\sup_{t\in[0,+\infty)}D_i(t)<0,\quad \inf_{t\in[0,+\infty)}\bigl\{4D_i(t)E_i(t)-F_i^2(t)\bigr\}>0,\quad i\in S, \tag{2.1}$$
where
$$\begin{cases}
D_i(t)=\alpha_i\gamma_i-a_i(t)\alpha_i^2+\dfrac{1}{2}\alpha_i^2\displaystyle\sum_{j=1}^{n}\Bigl(|c_{ij}(t)|L^P_j+|d_{ij}(t)|L^Q_j+|h_{ij}(t)|L^R_j\int_0^{+\infty}|K_{ij}(u)|\,du\Bigr),\\[2mm]
E_i(t)=-b_i(t)\alpha_i\gamma_i+\dfrac{1}{2}\displaystyle\sum_{j=1}^{n}\Bigl(|c_{ij}(t)|L^P_j+|d_{ij}(t)|L^Q_j+|h_{ij}(t)|L^R_j\int_0^{+\infty}|K_{ij}(u)|\,du\Bigr)|\alpha_i\gamma_i|\\[1mm]
\qquad\quad+\dfrac{1}{2}\displaystyle\sum_{j=1}^{n}\alpha_j^2\Bigl(|c_{ji}(t)|L^P_i+d^+_{ji}L^Q_i\dfrac{1}{1-\dot{\tau}^+_{ji}}+h^+_{ji}L^R_i\int_0^{+\infty}|K_{ji}(u)|\,du\Bigr)\\[1mm]
\qquad\quad+\dfrac{1}{2}\displaystyle\sum_{j=1}^{n}\Bigl(|c_{ji}(t)|L^P_i+d^+_{ji}L^Q_i\dfrac{1}{1-\dot{\tau}^+_{ji}}+h^+_{ji}L^R_i\int_0^{+\infty}|K_{ji}(u)|\,du\Bigr)|\alpha_j\gamma_j|,\\[2mm]
F_i(t)=\beta_i+\gamma_i^2-a_i(t)\alpha_i\gamma_i-b_i(t)\alpha_i^2,
\end{cases}$$
with $\dot{\tau}^+_{ij}=\sup_{t\in[0,+\infty)}\tau'_{ij}(t)$, $d^+_{ij}=\sup_{t\in[0,+\infty)}|d_{ij}(t)|$, $h^+_{ij}=\sup_{t\in[0,+\infty)}|h_{ij}(t)|$, $i,j\in S$.
(G6) For $i,j\in S$, $\tau_{ij}$ is continuously differentiable, and $\tau'_{ij}(t)\le\dot{\tau}^+_{ij}<1$ for all $t\in\mathbb{R}$.
Remark 2.1. Combining (G1), (G2) and the basic theory of functional differential equations with infinite delay in [40], one can show that all solutions of the initial value problem (1.3) and (1.4) exist on $[0,+\infty)$.
Remark 2.2. In this paper, (G3) means that the input functions are absolutely integrable on $[0,+\infty)$, and (G4), which is related to the delay kernels, reflects the specific effect of the time delay functions on the convergence of system (1.3).
Theorem 2.1. Assume that (G1)–(G6) hold, and let $x(t)=(x_1(t),x_2(t),\cdots,x_n(t))$ be a solution of the initial value problem (1.3) and (1.4). Then $\lim_{t\to+\infty}x_i(t)=0$ and $\lim_{t\to+\infty}x'_i(t)=0$ for all $i\in S$.
Proof. According to (G3), (G5) and the boundedness of the coefficients in (1.3), one can choose two positive constants $\rho$ and $\varrho$ such that
$$-\rho=\max_{i\in S}\sup_{t\in[0,+\infty)}e^{-U(t)}D_i(t),\quad -\varrho=\max_{i\in S}\sup_{t\in[0,+\infty)}e^{-U(t)}\Bigl[E_i(t)-\frac{F_i^2(t)}{4D_i(t)}\Bigr]. \tag{2.2}$$
Let
$$\begin{aligned}
W(t)=e^{-U(t)}\Bigl\{&\frac12\sum_{i=1}^{n}\beta_ix_i^2(t)+\frac12\sum_{i=1}^{n}\bigl(\alpha_ix'_i(t)+\gamma_ix_i(t)\bigr)^2\\
&+\frac12\sum_{i=1}^{n}\sum_{j=1}^{n}\bigl(\alpha_i^2d^+_{ij}+|\alpha_i\gamma_i|d^+_{ij}\bigr)L^Q_j\frac{1}{1-\dot{\tau}^+_{ij}}\int_{t-\tau_{ij}(t)}^{t}x_j^2(s)\,ds\\
&+\frac12\sum_{i=1}^{n}\sum_{j=1}^{n}\bigl(\alpha_i^2h^+_{ij}+|\alpha_i\gamma_i|h^+_{ij}\bigr)L^R_j\int_{0}^{+\infty}|K_{ij}(u)|\int_{t-u}^{t}x_j^2(s)\,ds\,du+\frac12\sum_{i=1}^{n}\alpha_i^2\Bigr\}.
\end{aligned}$$
It follows from (G2), (G3), (G6) and (1.3) that
$$\begin{aligned}
W'(t)=&-u(t)W(t)+e^{-U(t)}\Bigl\{\sum_{i=1}^{n}(\beta_i+\gamma_i^2)x_i(t)x'_i(t)+\sum_{i=1}^{n}\bigl(\alpha_i^2x'_i(t)+\alpha_i\gamma_ix_i(t)\bigr)\\
&\times\Bigl[-a_i(t)x'_i(t)-b_i(t)x_i(t)+\sum_{j=1}^{n}c_{ij}(t)\tilde{P}_j(x_j(t))+\sum_{j=1}^{n}d_{ij}(t)\tilde{Q}_j(x_j(t-\tau_{ij}(t)))\\
&+\sum_{j=1}^{n}h_{ij}(t)\int_0^{+\infty}K_{ij}(u)\tilde{R}_j(x_j(t-u))\,du+J_i(t)\Bigr]+\sum_{i=1}^{n}\alpha_i\gamma_i(x'_i(t))^2\\
&+\frac12\sum_{i=1}^{n}\sum_{j=1}^{n}(\alpha_i^2d^+_{ij}+|\alpha_i\gamma_i|d^+_{ij})L^Q_j\frac{1}{1-\dot{\tau}^+_{ij}}x_j^2(t)\\
&-\frac12\sum_{i=1}^{n}\sum_{j=1}^{n}(\alpha_i^2d^+_{ij}+|\alpha_i\gamma_i|d^+_{ij})L^Q_j\frac{1}{1-\dot{\tau}^+_{ij}}x_j^2(t-\tau_{ij}(t))(1-\tau'_{ij}(t))\\
&+\frac12\sum_{i=1}^{n}\sum_{j=1}^{n}(\alpha_i^2h^+_{ij}+|\alpha_i\gamma_i|h^+_{ij})L^R_j\int_0^{+\infty}|K_{ij}(u)|\,du\,x_j^2(t)\\
&-\frac12\sum_{i=1}^{n}\sum_{j=1}^{n}(\alpha_i^2h^+_{ij}+|\alpha_i\gamma_i|h^+_{ij})L^R_j\int_0^{+\infty}|K_{ij}(u)|x_j^2(t-u)\,du\Bigr\}\\
\le&\,e^{-U(t)}\Bigl\{-u(t)\frac12\sum_{i=1}^{n}\bigl[(\alpha_ix'_i(t)+\gamma_ix_i(t))^2-2\alpha_i|\alpha_ix'_i(t)+\gamma_ix_i(t)|+\alpha_i^2\bigr]\\
&+\sum_{i=1}^{n}(\beta_i+\gamma_i^2-a_i(t)\alpha_i\gamma_i-b_i(t)\alpha_i^2)x_i(t)x'_i(t)+\sum_{i=1}^{n}(\alpha_i\gamma_i-a_i(t)\alpha_i^2)(x'_i(t))^2\\
&-\sum_{i=1}^{n}b_i(t)\alpha_i\gamma_ix_i^2(t)+\frac12\sum_{i=1}^{n}\sum_{j=1}^{n}(\alpha_i^2d^+_{ij}+|\alpha_i\gamma_i|d^+_{ij})L^Q_j\frac{1}{1-\dot{\tau}^+_{ij}}x_j^2(t)\\
&-\frac12\sum_{i=1}^{n}\sum_{j=1}^{n}(\alpha_i^2d^+_{ij}+|\alpha_i\gamma_i|d^+_{ij})L^Q_jx_j^2(t-\tau_{ij}(t))\\
&+\frac12\sum_{i=1}^{n}\sum_{j=1}^{n}(\alpha_i^2h^+_{ij}+|\alpha_i\gamma_i|h^+_{ij})L^R_j\int_0^{+\infty}|K_{ij}(u)|\,du\,x_j^2(t)\\
&-\frac12\sum_{i=1}^{n}\sum_{j=1}^{n}(\alpha_i^2h^+_{ij}+|\alpha_i\gamma_i|h^+_{ij})L^R_j\int_0^{+\infty}|K_{ij}(u)|x_j^2(t-u)\,du\\
&+\sum_{i=1}^{n}\sum_{j=1}^{n}(\alpha_i^2|x'_i(t)|+|\alpha_i\gamma_i||x_i(t)|)|c_{ij}(t)||\tilde{P}_j(x_j(t))|\\
&+\sum_{i=1}^{n}\sum_{j=1}^{n}(\alpha_i^2|x'_i(t)|+|\alpha_i\gamma_i||x_i(t)|)|d_{ij}(t)||\tilde{Q}_j(x_j(t-\tau_{ij}(t)))|\\
&+\sum_{i=1}^{n}\sum_{j=1}^{n}(\alpha_i^2|x'_i(t)|+|\alpha_i\gamma_i||x_i(t)|)|h_{ij}(t)|\int_0^{+\infty}|K_{ij}(u)||\tilde{R}_j(x_j(t-u))|\,du\Bigr\}\\
\le&\,e^{-U(t)}\Bigl\{\sum_{i=1}^{n}(\beta_i+\gamma_i^2-a_i(t)\alpha_i\gamma_i-b_i(t)\alpha_i^2)x_i(t)x'_i(t)+\sum_{i=1}^{n}(\alpha_i\gamma_i-a_i(t)\alpha_i^2)(x'_i(t))^2\\
&+\sum_{i=1}^{n}\Bigl[-b_i(t)\alpha_i\gamma_i+\frac12\sum_{j=1}^{n}(\alpha_j^2d^+_{ji}+|\alpha_j\gamma_j|d^+_{ji})L^Q_i\frac{1}{1-\dot{\tau}^+_{ji}}\\
&+\frac12\sum_{j=1}^{n}(\alpha_j^2h^+_{ji}+|\alpha_j\gamma_j|h^+_{ji})L^R_i\int_0^{+\infty}|K_{ji}(u)|\,du\Bigr]x_i^2(t)\\
&-\frac12\sum_{i=1}^{n}\sum_{j=1}^{n}(\alpha_i^2d^+_{ij}+|\alpha_i\gamma_i|d^+_{ij})L^Q_jx_j^2(t-\tau_{ij}(t))\\
&-\frac12\sum_{i=1}^{n}\sum_{j=1}^{n}(\alpha_i^2h^+_{ij}+|\alpha_i\gamma_i|h^+_{ij})L^R_j\int_0^{+\infty}|K_{ij}(u)|x_j^2(t-u)\,du\\
&+\sum_{i=1}^{n}\sum_{j=1}^{n}(\alpha_i^2|x'_i(t)|+|\alpha_i\gamma_i||x_i(t)|)|c_{ij}(t)||\tilde{P}_j(x_j(t))|\\
&+\sum_{i=1}^{n}\sum_{j=1}^{n}(\alpha_i^2|x'_i(t)|+|\alpha_i\gamma_i||x_i(t)|)|d_{ij}(t)||\tilde{Q}_j(x_j(t-\tau_{ij}(t)))|\\
&+\sum_{i=1}^{n}\sum_{j=1}^{n}(\alpha_i^2|x'_i(t)|+|\alpha_i\gamma_i||x_i(t)|)|h_{ij}(t)|\int_0^{+\infty}|K_{ij}(u)||\tilde{R}_j(x_j(t-u))|\,du\Bigr\}.
\end{aligned} \tag{2.3}$$
The assumption (G1) and the fact that $uv\le\frac12(u^2+v^2)$ for $u,v\in\mathbb{R}$ entail that
$$\sum_{i=1}^{n}\sum_{j=1}^{n}(\alpha_i^2|x'_i(t)|+|\alpha_i\gamma_i||x_i(t)|)|c_{ij}(t)||\tilde{P}_j(x_j(t))|\le\frac12\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_i^2|c_{ij}(t)|L^P_j(x'_i(t))^2+\frac12\sum_{i=1}^{n}\sum_{j=1}^{n}\bigl(|\alpha_i\gamma_i||c_{ij}(t)|L^P_j+\alpha_j^2|c_{ji}(t)|L^P_i+|\alpha_j\gamma_j||c_{ji}(t)|L^P_i\bigr)x_i^2(t),$$
$$\sum_{i=1}^{n}\sum_{j=1}^{n}(\alpha_i^2|x'_i(t)|+|\alpha_i\gamma_i||x_i(t)|)|d_{ij}(t)||\tilde{Q}_j(x_j(t-\tau_{ij}(t)))|\le\frac12\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_i^2|d_{ij}(t)|L^Q_j(x'_i(t))^2+\frac12\sum_{i=1}^{n}\sum_{j=1}^{n}|\alpha_i\gamma_i||d_{ij}(t)|L^Q_jx_i^2(t)+\frac12\sum_{i=1}^{n}\sum_{j=1}^{n}\bigl(\alpha_i^2|d_{ij}(t)|L^Q_j+|\alpha_i\gamma_i||d_{ij}(t)|L^Q_j\bigr)x_j^2(t-\tau_{ij}(t)),$$
and
$$\sum_{i=1}^{n}\sum_{j=1}^{n}(\alpha_i^2|x'_i(t)|+|\alpha_i\gamma_i||x_i(t)|)|h_{ij}(t)|\int_0^{+\infty}|K_{ij}(u)||\tilde{R}_j(x_j(t-u))|\,du\le\frac12\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_i^2|h_{ij}(t)|L^R_j\int_0^{+\infty}|K_{ij}(u)|\,du\,(x'_i(t))^2+\frac12\sum_{i=1}^{n}\sum_{j=1}^{n}|\alpha_i\gamma_i||h_{ij}(t)|L^R_j\int_0^{+\infty}|K_{ij}(u)|\,du\,x_i^2(t)+\frac12\sum_{i=1}^{n}\sum_{j=1}^{n}\bigl(\alpha_i^2|h_{ij}(t)|L^R_j+|\alpha_i\gamma_i||h_{ij}(t)|L^R_j\bigr)\int_0^{+\infty}|K_{ij}(u)|x_j^2(t-u)\,du,$$
which, together with (G4), (2.2) and (2.3), give
$$\begin{aligned}
W'(t)\le&\,e^{-U(t)}\Bigl\{\sum_{i=1}^{n}(\beta_i+\gamma_i^2-a_i(t)\alpha_i\gamma_i-b_i(t)\alpha_i^2)x_i(t)x'_i(t)\\
&+\sum_{i=1}^{n}\Bigl[\alpha_i\gamma_i-a_i(t)\alpha_i^2+\frac12\alpha_i^2\sum_{j=1}^{n}\Bigl(|c_{ij}(t)|L^P_j+|d_{ij}(t)|L^Q_j+|h_{ij}(t)|L^R_j\int_0^{+\infty}|K_{ij}(u)|\,du\Bigr)\Bigr](x'_i(t))^2\\
&+\sum_{i=1}^{n}\Bigl[-b_i(t)\alpha_i\gamma_i+\frac12\sum_{j=1}^{n}\Bigl(|c_{ij}(t)|L^P_j+|d_{ij}(t)|L^Q_j+|h_{ij}(t)|L^R_j\int_0^{+\infty}|K_{ij}(u)|\,du\Bigr)|\alpha_i\gamma_i|\\
&+\frac12\sum_{j=1}^{n}\alpha_j^2\Bigl(|c_{ji}(t)|L^P_i+d^+_{ji}L^Q_i\frac{1}{1-\dot{\tau}^+_{ji}}+h^+_{ji}L^R_i\int_0^{+\infty}|K_{ji}(u)|\,du\Bigr)\\
&+\frac12\sum_{j=1}^{n}\Bigl(|c_{ji}(t)|L^P_i+d^+_{ji}L^Q_i\frac{1}{1-\dot{\tau}^+_{ji}}+h^+_{ji}L^R_i\int_0^{+\infty}|K_{ji}(u)|\,du\Bigr)|\alpha_j\gamma_j|\Bigr]x_i^2(t)\Bigr\}\\
=&\,e^{-U(t)}\Bigl\{\sum_{i=1}^{n}D_i(t)\Bigl(x'_i(t)+\frac{F_i(t)}{2D_i(t)}x_i(t)\Bigr)^2+\sum_{i=1}^{n}\Bigl(E_i(t)-\frac{F_i^2(t)}{4D_i(t)}\Bigr)x_i^2(t)\Bigr\}\\
\le&-\rho\sum_{i=1}^{n}\Bigl(x'_i(t)+\frac{F_i(t)}{2D_i(t)}x_i(t)\Bigr)^2-\varrho\sum_{i=1}^{n}x_i^2(t)\le 0,\quad \forall t\in[0,+\infty).
\end{aligned} \tag{2.4}$$
This implies that $W(t)\le W(0)$ for all $t\in[0,+\infty)$, and
$$\frac12\sum_{i=1}^{n}\beta_ix_i^2(t)+\frac12\sum_{i=1}^{n}\bigl(\alpha_ix'_i(t)+\gamma_ix_i(t)\bigr)^2\le e^{U(t)}W(0)<+\infty,\quad t\in[0,+\infty).$$
Since $\alpha_i|x'_i(t)|\le|\alpha_ix'_i(t)+\gamma_ix_i(t)|+|\gamma_ix_i(t)|$, it follows that $x'_i(t)$ and $x_i(t)$ are uniformly bounded on $[0,+\infty)$ for all $i\in S$. By the continuity of the right-hand side of (1.3), $x''_i(t)$ is also uniformly bounded on $[0,+\infty)$ for all $i\in S$, which, combined with (G4), implies that $\sum_{i=1}^{n}\bigl(x'_i(t)+\frac{F_i(t)}{2D_i(t)}x_i(t)\bigr)^2$ and $\sum_{i=1}^{n}x_i^2(t)$ are uniformly continuous on $[0,+\infty)$.
In addition, (2.4) entails that
$$\sum_{i=1}^{n}\Bigl(x'_i(t)+\frac{F_i(t)}{2D_i(t)}x_i(t)\Bigr)^2\le-\frac{1}{\rho}W'(t),\quad \sum_{i=1}^{n}x_i^2(t)\le-\frac{1}{\varrho}W'(t),\quad \forall t\ge 0,$$
and
$$\lim_{t\to\infty}\int_0^t\sum_{i=1}^{n}\Bigl(x'_i(s)+\frac{F_i(s)}{2D_i(s)}x_i(s)\Bigr)^2ds\le\frac{W(0)}{\rho},\quad \lim_{t\to\infty}\int_0^t\sum_{i=1}^{n}x_i^2(s)\,ds\le\frac{W(0)}{\varrho},$$
which, together with Lemma 2.1, lead to
$$\lim_{t\to\infty}x_i(t)=0,\quad \lim_{t\to\infty}\Bigl(x'_i(t)+\frac{F_i(t)}{2D_i(t)}x_i(t)\Bigr)=0,\quad \lim_{t\to\infty}x'_i(t)=0,\quad i\in S.$$
The proof is complete.
Remark 2.3. Obviously, system (1.1) is a special case of system (1.3) with $d_{ij}\equiv 0$, $i,j\in S$, and the restrictions in the initial value condition (1.4) are weaker than those in (1.2); hence all the results in [30] can be derived from Theorem 2.1. Moreover, global Lipschitz conditions on the activation functions were crucial in [3,20,31], where the convergence of the state vector of the inertial neural network system was considered. In this paper, the global Lipschitz conditions have been abandoned, and global convergence has been established for an inertial neural network system with bounded time-varying delays and unbounded continuously distributed delays. This implies that Theorem 2.1 generalizes and complements the main results of [3,20,30,31].
Example 3.1. Consider the following inertial neural networks with mixed delays:
$$\begin{cases}
x''_1(t)=-(4.56+\sin^2 t)x'_1(t)-(13.58+\sin^2 t)x_1(t)+1.21(\sin t)\tilde{P}_1(x_1(t))\\
\qquad\quad+1.51(\cos t)\tilde{P}_2(x_2(t))-1.42(\sin^2 t)\tilde{Q}_1(x_1(t-0.2\sin^2 t))\\
\qquad\quad+1.95(\cos^2 t)\tilde{Q}_2(x_2(t-0.2\sin^2 t))-1.83(\sin^2 t)\displaystyle\int_0^{+\infty}\frac{\sin(2u)}{1+u^2}\tilde{R}_1(x_1(t-u))\,du\\
\qquad\quad+0.71(\cos^2 t)\displaystyle\int_0^{+\infty}\frac{\sin(2u)}{1+u^2}\tilde{R}_2(x_2(t-u))\,du+20\sin^4 t\,e^{-t^2},\\
x''_2(t)=-(4.71+\sin^2 t)x'_2(t)-(14.45+\sin^2 t)x_2(t)-0.83(\sin t)\tilde{P}_1(x_1(t))\\
\qquad\quad-1.47(\cos t)\tilde{P}_2(x_2(t))-1.52(\sin^2 t)\tilde{Q}_1(x_1(t-0.2\sin^2 t))\\
\qquad\quad+0.95(\cos^2 t)\tilde{Q}_2(x_2(t-0.2\sin^2 t))-3.51(\sin^2 t)\displaystyle\int_0^{+\infty}\frac{\cos(2u)}{1+u^2}\tilde{R}_1(x_1(t-u))\,du\\
\qquad\quad+1.17(\cos^2 t)\displaystyle\int_0^{+\infty}\frac{\cos(2u)}{1+u^2}\tilde{R}_2(x_2(t-u))\,du+30\sin^4 t\,e^{-t^2},
\end{cases} \tag{3.1}$$
where $\tilde{P}_1(u)=\tilde{Q}_1(u)=\tilde{R}_1(u)=0.25(|u+1|-|u-1|)$ and $\tilde{P}_2(u)=\tilde{Q}_2(u)=\tilde{R}_2(u)=0.5u\sin^3 u$. Taking $\alpha_i=\gamma_i=1$, $\beta_1=18.14$, $\beta_2=19.16$, $L^P_i=L^Q_i=L^R_i=0.5$, $i=1,2$, we obtain $D_i(t)<0$ and $4D_i(t)E_i(t)>F_i^2(t)$ for $i=1,2$ and $t\in\mathbb{R}$. By Theorem 2.1, all solutions of (3.1) and their derivatives converge to the zero vector. The simulations in Figures 1 and 2 show that the theoretical convergence agrees with the numerically observed behavior: the trajectories of $x_i(t)$ and $x'_i(t)$ converge to 0 as $t\to+\infty$.
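As a cross-check, condition (G5) can be verified numerically for these data. The sketch below is our illustration (the midpoint Riemann sum for the kernel integrals, the truncation at $u=1000$, and the sampling grid are ad hoc choices); it evaluates $D_i(t)$, $E_i(t)$, $F_i(t)$ from their definitions and tests (2.1) on a sample of one period of the coefficients:

```python
import math

# Numerical spot-check (not part of the proof) that the data of Example 3.1
# satisfy (G5): D_i(t) < 0 and 4 D_i(t) E_i(t) - F_i(t)^2 > 0, with
# alpha_i = gamma_i = 1, beta = (18.14, 19.16), L^P = L^Q = L^R = 0.5.
L = 0.5
alpha, gamma, beta = (1.0, 1.0), (1.0, 1.0), (18.14, 19.16)
tau_dot_sup = 0.2                      # tau_ij(t) = 0.2 sin^2 t, so sup tau'_ij = 0.2

def kernel_integral(f):
    """Midpoint Riemann sum for int_0^inf |f(2u)|/(1+u^2) du, truncated at 1000."""
    h, total = 0.005, 0.0
    for k in range(200000):
        u = (k + 0.5) * h
        total += abs(f(2.0 * u)) / (1.0 + u * u) * h
    return total                       # the tail beyond 1000 is below 1e-3

I_sin = kernel_integral(math.sin)      # kernels of the first equation of (3.1)
I_cos = kernel_integral(math.cos)      # kernels of the second equation
K_int = ((I_sin, I_sin), (I_cos, I_cos))
d_sup = ((1.42, 1.95), (1.52, 0.95))   # sup_t |d_ij(t)|
h_sup = ((1.83, 0.71), (3.51, 1.17))   # sup_t |h_ij(t)|

def DEF(t, i):
    s, c = math.sin(t), math.cos(t)
    a = (4.56 + s * s, 4.71 + s * s)
    b = (13.58 + s * s, 14.45 + s * s)
    cc = ((1.21 * s, 1.51 * c), (-0.83 * s, -1.47 * c))
    dd = ((-1.42 * s * s, 1.95 * c * c), (-1.52 * s * s, 0.95 * c * c))
    hh = ((-1.83 * s * s, 0.71 * c * c), (-3.51 * s * s, 1.17 * c * c))
    row = sum(abs(cc[i][j]) * L + abs(dd[i][j]) * L + abs(hh[i][j]) * L * K_int[i][j]
              for j in range(2))
    col = sum((alpha[j] ** 2 + abs(alpha[j] * gamma[j]))
              * (abs(cc[j][i]) * L + d_sup[j][i] * L / (1.0 - tau_dot_sup)
                 + h_sup[j][i] * L * K_int[j][i])
              for j in range(2))
    D = alpha[i] * gamma[i] - a[i] * alpha[i] ** 2 + 0.5 * alpha[i] ** 2 * row
    E = -b[i] * alpha[i] * gamma[i] + 0.5 * row * abs(alpha[i] * gamma[i]) + 0.5 * col
    F = beta[i] + gamma[i] ** 2 - a[i] * alpha[i] * gamma[i] - b[i] * alpha[i] ** 2
    return D, E, F

g5_ok = all(D < 0 and 4 * D * E - F * F > 0
            for k in range(1000)
            for t in [k * math.pi / 1000]   # one period of the coefficients
            for D, E, F in [DEF(t, 0), DEF(t, 1)])
```

Note that with these parameters $F_i(t)=\beta_i+1-a_i(t)-b_i(t)=1-2\sin^2 t=\cos 2t$, so $F_i^2(t)\le 1$ while $4D_i(t)E_i(t)$ stays well above 1 on the whole grid.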
Remark 3.1. It should be pointed out that $\tilde{P}_2(u)=\tilde{Q}_2(u)=\tilde{R}_2(u)=0.5u\sin^3 u$ does not satisfy the global Lipschitz condition, and $d_{11},d_{12},d_{21},d_{22}\ne 0$, so the results in the references [24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70] cannot be directly applied to show that every solution of system (3.1) and its derivative converge to the zero vector. Moreover, as far as the authors know, the convergence of inertial neural networks with bounded time-varying delays and unbounded continuously distributed delays has not previously been studied without the reduced-order method. Consequently, the main results established in this paper are essentially new and complement some existing ones.
In this paper, applying differential inequality techniques coupled with the Lyapunov function method instead of the reduced-order method, we have studied the global convergence of inertial neural networks with bounded time-varying delays and unbounded continuously distributed delays. Sufficient conditions have been established to guarantee that every solution of the addressed model and its derivative converge to the zero vector. The method applied in this paper provides a possible approach to studying the dynamical behaviors of other inertial neural network models with bounded time-varying delays and unbounded continuously distributed delays.
The authors would like to express their sincere appreciation to the editor and reviewers for their helpful comments, which improved the presentation and quality of the paper. This work was supported by the Postgraduate Scientific Research Innovation Project of Hunan Province (No. CX20200892).
We confirm that we have no conflict of interest.