Research article

Some novel results for DNNs via relaxed Lyapunov functionals

  • Received: 03 October 2023 Revised: 10 January 2024 Accepted: 04 February 2024 Published: 02 April 2024
  • The focus of this paper was to explore the stability issues associated with delayed neural networks (DNNs). We introduced a novel approach that departs from existing methods, which use quadratic functions to determine the negative definiteness of the derivative \dot{V}(t) of the Lyapunov-Krasovskii functional (LKF). Instead, we proposed a new method that utilizes the conditions of a positive definite quadratic function to establish the positive definiteness of LKFs. Based on this approach, we constructed a novel relaxed LKF that contains delay information. In addition, some combinations of inequalities were extended and used to reduce the conservatism of the results obtained. The criteria for achieving delay-dependent asymptotic stability were subsequently presented in the framework of linear matrix inequalities (LMIs). Finally, a numerical example confirmed the effectiveness of the theoretical result.

    Citation: Guoyi Li, Jun Wang, Kaibo Shi, Yiqian Tang. Some novel results for DNNs via relaxed Lyapunov functionals[J]. Mathematical Modelling and Control, 2024, 4(1): 110-118. doi: 10.3934/mmc.2024010




    Modular metric spaces were introduced in [4,5]. Behind this new notion there is a physical interpretation of the modular. A modular on a set is based on a nonnegative (possibly infinite valued) "field of (generalized) velocities": to each time λ>0 there is associated (the absolute value of) an average velocity ωλ(ρ,σ), such that, in order to cover the distance between points ρ,σ∈M, it takes time λ to move from ρ to σ with velocity ωλ(ρ,σ), whereas a metric on a set stands for non-negative finite distances between any two points of the set. The route to this notion of modular metric spaces is different: we actually treat these spaces as the nonlinear version of the classical modular spaces introduced by Nakano [12] on vector spaces, and of the modular function spaces introduced by Musielak [11] and Orlicz [13]. In [1,2], the authors defined and investigated the fixed point property in the context of modular metric spaces and introduced several results. For more on modular metric fixed point theory, the reader may consult the books [7,8,9]. Some recent work in modular metric spaces is presented in [14,15]. For almost a century, mathematicians have improved, extended, and enriched the classical Banach contraction principle [1] in different directions, along with a variety of applications. In 1969, Kannan [6] proved that if X is complete, then a Kannan mapping has a fixed point. It is interesting that Kannan's theorem is independent of the Banach contraction principle [3].

    In this research article, the fixed point problem for Kannan mappings in the framework of modular metric spaces is investigated.

    Let M be a nonempty set. Throughout this paper, for a function ω:(0,∞)×M×M→[0,∞], we will write

    ωλ(ρ,σ)=ω(λ,ρ,σ),

    for all λ>0 and ρ,σ∈M.

    Definition 1. [4,5] A function ω:(0,∞)×M×M→[0,∞] is called a modular metric on M if the following axioms hold:

    (ⅰ) ρ=σ if and only if ωλ(ρ,σ)=0, for all λ>0;

    (ⅱ) ωλ(ρ,σ)=ωλ(σ,ρ), for all λ>0, and ρ,σ∈M;

    (ⅲ) ωλ+μ(ρ,σ) ≤ ωλ(ρ,ς)+ωμ(ς,σ), for all λ,μ>0 and ρ,σ,ς∈M.

    A modular metric ω on M is called regular if the following weaker version of (ⅰ) is satisfied

    ρ=σ if and only if ωλ(ρ,σ)=0, for some λ>0.

    Moreover, ω is called convex if for λ,μ>0 and ρ,σ,ς∈M, it satisfies

    ωλ+μ(ρ,σ) ≤ (λ/(λ+μ)) ωλ(ρ,ς) + (μ/(λ+μ)) ωμ(ς,σ).

    Throughout this work, we assume ω is regular.
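    To make the axioms concrete, here is a minimal numeric sketch (our own illustration, not taken from the paper): the function ωλ(x,y)=|x−y|/λ on the real line is a convex, regular modular metric, and the script below spot-checks axioms (ⅰ)–(ⅲ) of Definition 1 on a small grid of points.

```python
# Spot-check of Definition 1 for the (assumed) example modular
# w_lambda(x, y) = |x - y| / lambda on the real line.
import itertools

def omega(lam: float, x: float, y: float) -> float:
    """Candidate modular metric: w_lambda(x, y) = |x - y| / lambda."""
    return abs(x - y) / lam

points = [-1.5, 0.0, 0.3, 2.0]
lambdas = [0.5, 1.0, 2.0]

for lam, mu in itertools.product(lambdas, repeat=2):
    for x, y, z in itertools.product(points, repeat=3):
        assert omega(lam, x, x) == 0.0                    # (i)  x = y gives 0
        assert omega(lam, x, y) == omega(lam, y, x)       # (ii) symmetry
        # (iii) triangle-type inequality over the split time lam + mu
        assert omega(lam + mu, x, y) <= omega(lam, x, z) + omega(mu, z, y) + 1e-12

print("axioms (i)-(iii) hold on the sample grid")
```

    Regularity also holds for this example, since ωλ(x,y)=0 for some λ>0 forces |x−y|=0.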

    Definition 2. [4,5] Let ω be a modular on M. Fix ρ0∈M. The two sets

    Mω = Mω(ρ0) = {ρ∈M : ωλ(ρ,ρ0) → 0 as λ → ∞},

    and

    M∗ω = M∗ω(ρ0) = {ρ∈M : ∃ λ=λ(ρ)>0 such that ωλ(ρ,ρ0) < ∞}

    are called modular spaces (around ρ0).

    It is obvious that Mω ⊂ M∗ω, but this inclusion may be proper in general. It follows from [4,5] that if ω is a modular on M, then the modular space Mω can be equipped with a (nontrivial) metric, generated by ω and given by

    dω(ρ,σ) = inf{λ>0 : ωλ(ρ,σ) ≤ λ},

    for any ρ,σ∈Mω. If ω is a convex modular on M, then according to [4,5] the two modular spaces coincide, i.e., Mω = M∗ω, and this common set can be endowed with the metric d∗ω given by

    d∗ω(ρ,σ) = inf{λ>0 : ωλ(ρ,σ) ≤ 1},

    for any ρ,σ∈Mω. These distances will be called Luxemburg distances.
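    As a small numerical illustration (our own, under the same assumed example modular ωλ(x,y)=|x−y|/λ as above), the two Luxemburg distances need not coincide: here dω(x,y)=√|x−y| while d∗ω(x,y)=|x−y|. The sketch below approximates both infima on a grid.

```python
# Grid approximation of the two Luxemburg distances for the assumed
# modular w_lambda(x, y) = |x - y| / lambda (an illustration, not from the paper).
import math

def omega(lam: float, x: float, y: float) -> float:
    return abs(x - y) / lam

def luxemburg(x: float, y: float, convex: bool = False,
              grid: int = 100_000, lam_max: float = 10.0) -> float:
    """Approximates inf{lam > 0 : w_lam(x, y) <= lam}   (convex=False, distance d_w)
       or          inf{lam > 0 : w_lam(x, y) <= 1}      (convex=True,  distance d*_w)."""
    best = math.inf
    for i in range(1, grid + 1):
        lam = lam_max * i / grid
        threshold = 1.0 if convex else lam
        if omega(lam, x, y) <= threshold:
            best = min(best, lam)
    return best

x, y = 0.0, 4.0
print(luxemburg(x, y))               # ~2.0 = sqrt(|x - y|)
print(luxemburg(x, y, convex=True))  # ~4.0 = |x - y|
```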

    The following example, presented by Abdou and Khamsi [1,2], is an important motivation for the concept of modular metric spaces.

    Example 3. Let Ω be a nonempty set and Σ be a nontrivial σ-algebra of subsets of Ω. Let P be a δ-ring of subsets of Ω, such that E∩A∈P for any E∈P and A∈Σ. Let us assume that there exists an increasing sequence of sets Kn∈P such that Ω=∪Kn. By E we denote the linear space of all simple functions with supports from P. By N we will denote the space of all extended measurable functions, i.e., all functions f:Ω→[−∞,∞] such that there exists a sequence {gn}⊂E, |gn|≤|f| and gn(ω)→f(ω) for all ω∈Ω. By 1A we denote the characteristic function of the set A. Let ρ:N→[0,∞] be a nontrivial, convex and even function. We say that ρ is a regular convex function pseudomodular if:

    (ⅰ) ρ(0)=0;

    (ⅱ) ρ is monotone, i.e., |f(ω)| ≤ |g(ω)| for all ω∈Ω implies ρ(f) ≤ ρ(g), where f,g∈N;

    (ⅲ) ρ is orthogonally subadditive, i.e., ρ(f1A∪B) ≤ ρ(f1A)+ρ(f1B) for any A,B∈Σ such that A∩B≠∅, f∈N;

    (ⅳ) ρ has the Fatou property, i.e., |fn(ω)| ↑ |f(ω)| for all ω∈Ω implies ρ(fn) ↑ ρ(f), where f∈N;

    (ⅴ) ρ is order continuous in E, i.e., gn∈E and |gn(ω)| ↓ 0 implies ρ(gn) ↓ 0.

    Similarly, as in the case of measure spaces, we say that a set A∈Σ is ρ-null if ρ(g1A)=0 for every g∈E. We say that a property holds ρ-almost everywhere if the exceptional set is ρ-null. As usual, we identify any pair of measurable sets whose symmetric difference is ρ-null, as well as any pair of measurable functions differing only on a ρ-null set. With this in mind, we define

    N(Ω,Σ,P,ρ) = {f∈N : |f(ω)| < ∞ ρ-a.e.},

    where each f∈N(Ω,Σ,P,ρ) is actually an equivalence class of functions equal ρ-a.e. rather than an individual function. Where no confusion exists, we will write M instead of N(Ω,Σ,P,ρ). Let ρ be a regular function pseudomodular.

    (a) We say that ρ is a regular function semimodular if ρ(αf)=0 for every α>0 implies f=0 ρ-a.e.;

    (b) We say that ρ is a regular function modular if ρ(f)=0 implies f=0 ρ-a.e.

    The class of all nonzero regular convex function modulars defined on Ω will be denoted by ℜ. Let us denote ρ(f,E)=ρ(f1E) for f∈N, E∈Σ. It is easy to prove that ρ(f,E) is a function pseudomodular in the sense of Definition 2.1.1 in [10] (more precisely, it is a function pseudomodular with the Fatou property). Therefore, we can use all the results of the standard theory of modular function spaces as per the framework defined by Kozlowski in [10]; see also Musielak [11] for the basics of the general modular theory. Let ρ be a convex function modular.

    (a) The associated modular function space is the vector space Lρ(Ω,Σ), or briefly Lρ, defined by

    Lρ = {f∈N : ρ(λf) → 0 as λ → 0}.

    (b) The following formula defines a norm in Lρ (frequently called the Luxemburg norm):

    \begin{equation*} \|f\|_{\rho } = \inf \{\lambda > 0:\rho (f/\lambda )\leq 1\}. \end{equation*}

    Modular function spaces furnish a wonderful example of modular metric spaces. Indeed, let L_{\rho} be a modular function space.

    Example 4. Define the function \omega by

    \omega_{\lambda}(f,g) = \rho\left( \frac{f-g}{\lambda}\right)

    for all \lambda > 0 , and f, g \in L_{\rho} . Then \omega is a modular metric on L_{\rho} . Note that \omega is convex if and only if \rho is convex. Moreover we have

    \|f-g\|_{\rho} = d_{\omega}^{*}(f,g),

    for any f, g \in L_{\rho} .

    For more examples, readers can see [4,5].
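    As a concrete special case of Example 4 (our own illustration, assuming \rho(f) = \int_{\Omega}|f|^{p}\,d\mu with p \geq 1 on a σ-finite measure space (\Omega,\Sigma,\mu) ), the modular, the induced modular metric and the Luxemburg distance read

    \begin{equation*} \rho (f) = \int_{\Omega }|f|^{p}\,d\mu ,\qquad \omega _{\lambda }(f,g) = \rho \left( \frac{f-g}{\lambda }\right) = \frac{1}{\lambda ^{p}}\int_{\Omega }|f-g|^{p}\,d\mu , \end{equation*}

    \begin{equation*} d_{\omega }^{*}(f,g) = \inf \Big\{\lambda > 0:\frac{1}{\lambda ^{p}}\int_{\Omega }|f-g|^{p}\,d\mu \leq 1\Big\} = \Big(\int_{\Omega }|f-g|^{p}\,d\mu \Big)^{1/p} = \|f-g\|_{p}, \end{equation*}

    so in this case the Luxemburg distance d_{\omega }^{*} recovers the usual L^{p} norm distance.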

    Definition 5. [1]

    (1). A sequence \{\rho _{n}\}\subset \mathcal{M}_{\omega } is \omega -convergent to \rho \in \mathcal{M}_{\omega } if and only if \omega _{1}(\rho _{n}, \rho)\rightarrow 0 .

    (2). A sequence \{\rho _{n}\}\subset \mathcal{M}_{\omega } is \omega -Cauchy if \omega _{1}(\rho _{n}, \rho _{m})\rightarrow 0 as n, m\rightarrow \infty .

    (3). A set K\subset \mathcal{M}_{\omega } is \omega -closed if the limit of any \omega -convergent sequence of elements of K always belongs to K .

    (4). A set K\subset \mathcal{M}_{\omega } is \omega -bounded if

    \begin{equation*} \delta _{\omega } = \sup \{\omega _{1}(\rho ,\sigma );\rho ,\sigma \in K\} \lt \infty . \end{equation*}

    (5). If any \omega -Cauchy sequence in a subset K of \mathcal{M} _{\omega } is a convergent sequence and its limit is in K, then K is called \omega -complete.

    (6). The \rho -centered \omega -ball of radius r is defined as

    \begin{equation*} B_{\omega }(\rho ,r) = \{\sigma \in \mathcal{M}_{\omega };\;\omega _{1}(\rho ,\sigma )\leq r\}, \end{equation*}

    for any \rho \in \mathcal{M}_{\omega } and r\geq 0 .

    Let (\mathcal{M}, \omega) be a modular metric space. In the rest of this work, we assume that \omega satisfies the Fatou property, i.e. if \{\rho _{n}\} \; \omega -converges to \rho and \{\sigma _{n}\} \; \omega -converges to \sigma , then we must have

    \begin{equation*} \omega _{1}(\rho ,\sigma )\leq \liminf\limits_{n\rightarrow \infty }\omega _{1}(\rho _{n},\sigma _{n}), \end{equation*}

    for any \rho \in \mathcal{M}_{\omega } .

    Definition 6. Let (\mathcal{M}, \omega) be a modular metric space. We define an admissible subset of \mathcal{M}_{\omega } as an intersection of modular balls.

    Note that if \omega satisfies the Fatou property, then the modular balls are \omega -closed. Hence any admissible subset is \omega -closed.


    It is well known that every Banach contractive mapping is a continuous function, and it is natural to ask whether a contractive-type fixed point theorem can hold without continuity. In 1968, Kannan [6] was the first mathematician to answer this question, presenting the following fixed point result in the setting of metric spaces.

    Theorem 7. [6] Let (\mathcal{M}, d) be a complete metric space and \mathcal{ J}:\mathcal{M\rightarrow M} be a self-mapping satisfying

    \begin{equation*} d(\mathcal{J}(\rho ),\mathcal{J}(\sigma ))\leq \alpha \ \Big(d(\rho , \mathcal{J}(\rho ))+d(\sigma ,\mathcal{J}(\sigma ))\Big), \end{equation*}

    \forall \; \rho, \sigma \in \mathcal{M} and \alpha \in \lbrack 0, \frac{1}{2}) . Then \mathcal{J} has a unique fixed point \varsigma \in \mathcal{M} , and for any \rho \in \mathcal{M} the sequence of iterates \{\mathcal{J}^{n}(\rho)\} converges to \varsigma .
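    To see Theorem 7 in action, here is a minimal numeric sketch (our own example, not from the paper): the map J(x)=x/4 on [0,\infty) with the usual metric d(x,y)=|x-y| satisfies the Kannan condition with \alpha = 1/3 , since |x-y|/4 \leq (1/3)(3x/4+3y/4) for all x,y \geq 0 , and the Picard iterates converge to the unique fixed point 0.

```python
# Kannan iteration for the assumed example J(x) = x / 4 with alpha = 1/3.
def J(x: float) -> float:
    return x / 4.0

alpha = 1.0 / 3.0
x = 5.0                                  # arbitrary starting point x0 >= 0
for n in range(1, 11):
    x_next = J(x)                        # x_next = J^n(x0)
    # Kannan inequality: d(J(x), J(x_next)) <= alpha*(d(x, J(x)) + d(x_next, J(x_next)))
    lhs = abs(J(x) - J(x_next))
    rhs = alpha * (abs(x - J(x)) + abs(x_next - J(x_next)))
    assert lhs <= rhs + 1e-12
    x = x_next
    print(n, x)                          # x = 5 / 4**n  -> 0
```

    Unlike Banach contractions, Kannan mappings need not be continuous, which is precisely what makes this class of maps interesting.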

    Before we state our results, we introduce the definition of Kannan mappings in modular metric spaces.

    Definition 8. Let K be a nonempty subset of \mathcal{M}_{\omega } . A mapping \mathcal{J}:K\rightarrow K is called Kannan \omega -Lipschitzian if \exists \; \alpha \geq 0 such that

    \begin{equation*} \omega _{1}(\mathcal{J}(\rho ),\mathcal{J}(\sigma ))\leq \alpha \ \Big( \omega _{1}(\rho ,\mathcal{J}(\rho ))+\omega _{1}(\sigma ,\mathcal{J}(\sigma ))\Big), \end{equation*}

    \forall \; \rho, \sigma \in K . The mapping \mathcal{J} is said to be:

    (1). Kannan \omega -contraction if \alpha < 1/2 ;

    (2). Kannan \omega -nonexpansive if \alpha = 1/2 .

    (3). \varsigma \in K is said to be a fixed point of \mathcal{J} if \mathcal{J}(\varsigma) = \varsigma.

    Note that all Kannan \omega -Lipschitzian mappings have at most one fixed point due to the regularity of \omega .

    The following result discusses the existence of fixed points for Kannan contraction maps in the setting of modular metric spaces.

    Theorem 9. Let (\mathcal{M}, \omega) be a modular metric space. Assume that K is a nonempty \omega -complete subset of \mathcal{M} _{\omega } . Let \mathcal{J}:K\rightarrow K be a Kannan \omega -contraction mapping. Let \varsigma \in K be such that \omega _{1}(\varsigma, \mathcal{J}(\varsigma)) < \infty . Then \{\mathcal{J} ^{n}(\varsigma)\} \; \omega -converges to some \tau \in K . Furthermore, we have \omega _{1}(\tau, \mathcal{J}(\tau)) = \infty or \omega _{1}(\tau , \mathcal{J}(\tau)) = 0 (i.e., \tau is the fixed point of \mathcal{J} ).

    Proof. Let \varsigma \in K be such that \omega _{1}(\varsigma, \mathcal{J} (\varsigma)) < +\infty . Now we establish that \{\mathcal{J}^{n}(\varsigma)\} is \omega -convergent. As K is \omega -complete, it suffices to prove that \{\mathcal{J}^{n}(\varsigma)\} is \omega -Cauchy. Since \mathcal{J} is a Kannan \omega -contraction mapping, there exists \alpha \in \lbrack 0, 1/2) such that

    \begin{equation*} \omega _{1}(\mathcal{J}(\rho ),\mathcal{J}(\sigma ))\leq \alpha \ \Big( \omega _{1}(\rho ,\mathcal{J}(\rho ))+\omega _{1}(\sigma ,\mathcal{J}(\sigma ))\Big), \end{equation*}

    for any \rho, \sigma \in K . Set k = \alpha /(1-\alpha) < 1 . Furthermore

    \begin{equation*} \omega _{1}(\mathcal{J}^{n}(\varsigma ),\mathcal{J}^{n+1}(\varsigma ))\leq \alpha \ \Big(\omega _{1}(\mathcal{J}^{n-1}(\varsigma ),\mathcal{J} ^{n}(\varsigma ))+\omega _{1}(\mathcal{J}^{n}(\varsigma ),\mathcal{J} ^{n+1}(\varsigma ))\Big), \end{equation*}

    which implies

    \begin{eqnarray*} \omega _{1}(\mathcal{J}^{n}(\varsigma ),\mathcal{J}^{n+1}(\varsigma )) &\leq &\frac{\alpha }{1-\alpha }\ \omega _{1}(\mathcal{J}^{n-1}(\varsigma ), \mathcal{J}^{n}(\varsigma )) \\ & = &k\ \omega _{1}(\mathcal{J}^{n-1}(\varsigma ),\mathcal{J}^{n}(\varsigma )), \end{eqnarray*}

    for any n\geq 1 . Hence, iterating this inequality,

    \begin{equation*} \omega _{1}(\mathcal{J}^{n}(\varsigma ),\mathcal{J}^{n+1}(\varsigma ))\leq k^{n}\ \omega _{1}(\varsigma ,\mathcal{J}(\varsigma )), \end{equation*}

    for any n\in \mathbb{N} . As \mathcal{J} is a Kannan \omega -contraction mapping, so we get

    \begin{equation*} \omega _{1}(\mathcal{J}^{n}(\varsigma ),\mathcal{J}^{n+h}(\varsigma ))\leq \alpha \ \Big(\omega _{1}(\mathcal{J}^{n-1}(\varsigma ),\mathcal{J} ^{n}(\varsigma ))+\omega _{1}(\mathcal{J}^{n+h-1}(\varsigma ),\mathcal{J} ^{n+h}(\varsigma ))\Big), \end{equation*}

    which implies

    \begin{equation} \omega _{1}(\mathcal{J}^{n}(\varsigma ),\mathcal{J}^{n+h}(\varsigma ))\leq \alpha \ \Big(k^{n-1}+k^{n+h-1}\Big)\omega _{1}(\varsigma ,\mathcal{J} (\varsigma )), \end{equation} (NL)

    for n\geq 1 and h\in \mathbb{N} . As k < 1 and \omega _{1}(\varsigma, \mathcal{J}(\varsigma)) < +\infty , we conclude that \{\mathcal{J} ^{n}(\varsigma)\} is \omega -Cauchy, as claimed. Let \tau \in K be the \omega -limit of \{\mathcal{J}^{n}(\varsigma)\} . As K is \omega -closed, we get \tau \in K . Suppose that \omega _{1}(\tau, \mathcal{J} (\tau)) < +\infty ; we will show that \omega _{1}(\tau, \mathcal{J} (\tau)) = 0 . We have

    \begin{eqnarray*} \omega _{1}(\mathcal{J}^{n}(\varsigma ),\mathcal{J}(\tau )) &\leq &\alpha \Big(\omega _{1}(\mathcal{J}^{n-1}(\varsigma ),\mathcal{J}^{n}(\varsigma ))+\omega _{1}(\tau ,\mathcal{J}(\tau ))\Big) \\ &\leq &\alpha \Big(k^{n-1}\ \omega _{1}(\varsigma ,\mathcal{J}(\varsigma ))+\omega _{1}(\tau ,\mathcal{J}(\tau ))\Big), \end{eqnarray*}

    for any n\geq 1 . By the use of Fatou's property, we obtain

    \begin{eqnarray*} \omega _{1}(\tau ,\mathcal{J}(\tau )) &\leq &\liminf\limits_{n\rightarrow \infty }\ \omega _{1}(\mathcal{J}^{n}(\varsigma ),\mathcal{J}(\tau )) \\ &\leq &\alpha \ \omega _{1}(\tau ,\mathcal{J}(\tau )). \end{eqnarray*}

    Since \alpha < 1/2 , we conclude that \omega _{1}(\tau, \mathcal{J}(\tau)) = 0 , i.e., \tau is the fixed point of \mathcal{J} .

    The upcoming result is the analogue of Kannan's extension of the classical Banach contraction principle in modular metric spaces.

    Corollary 10. Let K be a nonempty \omega -closed subset of \mathcal{M}_{\omega } . Let \mathcal{J}:K\rightarrow K be a Kannan \omega -contraction mapping such that \omega _{1}(\rho, \mathcal{J}(\rho )) < +\infty , for any \rho \in K . Then for any \varsigma \in K , \{ \mathcal{J}^{n}(\varsigma)\} \; \omega -converges to the unique fixed point \tau of \mathcal{J} . Furthermore, if \alpha is the Kannan constant associated with \mathcal{J} , then we have

    \begin{equation*} \omega _{1}(\mathcal{J}^{n}(\varsigma ),\tau )\leq \alpha \left( \frac{ \alpha }{1-\alpha }\right) ^{n-1}\omega _{1}(\varsigma ,\mathcal{J} (\varsigma )), \end{equation*}

    for any \varsigma \in K and n\geq 1 .

    Proof. From Theorem 9, we obtain the first part directly. Using the inequality (NL) and the fact that k < 1 , we get

    \begin{equation} \liminf\limits_{h\longrightarrow \infty}\omega _{1}(\mathcal{J}^{n}(\varsigma ),\mathcal{J}^{n+h}(\varsigma ))\leq \alpha \ \Big(k^{n-1}\Big)\omega _{1}(\varsigma ,\mathcal{J} (\varsigma )), \end{equation} (3.1)

    Now, using Fatou's property, we have

    \begin{equation*} \omega _{1}(\mathcal{J}^{n}(\varsigma ),\tau )\leq \alpha \ k^{n-1}\ \omega _{1}(\varsigma ,\mathcal{J}(\varsigma )) = \alpha \ \left( \frac{\alpha }{ 1-\alpha }\right) ^{n-1}\omega _{1}(\varsigma ,\mathcal{J}(\varsigma )), \end{equation*}

    for any n\geq 1 and \varsigma \in K .
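    Continuing the numeric sketch from above (our own J(x)=x/4 example with \alpha = 1/3 and \omega_{1}(x,y)=|x-y| , not from the paper), the a priori error bound of Corollary 10 can be checked directly: here k = \alpha/(1-\alpha) = 1/2 , the fixed point is \tau = 0 , and |J^{n}(\varsigma)-\tau| = \varsigma/4^{n} is indeed dominated by \alpha k^{n-1}|\varsigma - J(\varsigma)| = \varsigma/2^{n+1} for every n \geq 1 .

```python
# Numeric check of the Corollary 10 error bound for the assumed example
# J(x) = x/4, alpha = 1/3, omega_1(x, y) = |x - y|, fixed point tau = 0.
def J(x: float) -> float:
    return x / 4.0

alpha = 1.0 / 3.0
k = alpha / (1.0 - alpha)            # k = 1/2
s = 5.0                              # starting point varsigma
tau = 0.0                            # the unique fixed point of J
w1_s_Js = abs(s - J(s))              # omega_1(varsigma, J(varsigma)) = 3.75

x = s
for n in range(1, 11):
    x = J(x)                                     # x = J^n(varsigma)
    bound = alpha * k ** (n - 1) * w1_s_Js       # right-hand side of Corollary 10
    print(n, abs(x - tau), bound)
    assert abs(x - tau) <= bound + 1e-12
```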

    Recall that an admissible subset of \mathcal{M}_{\omega } is defined as an intersection of modular balls.

    Definition 11. We will say that:

    (ⅰ). if any decreasing sequence of nonempty \omega -bounded admissible subsets in \mathcal{M}_{\omega } has a nonempty intersection, then \mathcal{M}_{\omega } is said to satisfy the property (R),

    (ⅱ). if for any nonempty \omega -bounded admissible subset K with more than one point, there exists \rho \in K such that

    \begin{equation*} \omega _{1}(\rho ,\sigma ) \lt \delta _{\omega }(K) = \sup \{\omega _{1}(a,b);\ a,b\in K\}, \end{equation*}

    for any \sigma \in K , then \mathcal{M}_{\omega } is said to satisfy the \omega -quasi-normal property.

    The following technical lemma is very useful in the proof of our main theorem.

    Lemma 12. Suppose that \mathcal{M}_{\omega } satisfies both the property (R) and the \omega -quasi-normal property. Let K be a nonempty \omega -bounded admissible subset of \mathcal{M}_{\omega } and \mathcal{J }:K\rightarrow K be a Kannan \omega -nonexpansive mapping. Fix r > 0 . Suppose that A_{r} = \{\rho \in K; \ \omega _{1}(\rho, \mathcal{J}(\rho))\leq r\} \; \not = \emptyset . Set

    \begin{equation*} K_{r} = \bigcap \ \{B_{\omega }(a,t);\mathcal{J}(A_{r})\subset B_{\omega }(a,t)\}\cap K. \end{equation*}

    Then K_{r} is a nonempty \omega -closed admissible subset of K and

    \begin{equation*} \mathcal{J}(K_{r})\subset K_{r}\subset A_{r}\;\;\mathit{\text{ and}}\;\;\;\delta _{\omega }(K_{r})\leq r. \end{equation*}

    Proof. Since \mathcal{J}(A_{r}) is contained in each of the balls appearing in the definition of K_{r} and \mathcal{J}(A_{r})\subset K , we have \mathcal{J}(A_{r})\subset K_{r} , and K_{r} is not empty. From the definition of an admissible set, we deduce that K_{r} is an admissible subset of K . Let us prove that K_{r}\subset A_{r} . Let \rho \in K_{r} . If \omega _{1}(\rho, \mathcal{J}(\rho)) = 0 , then obviously we have \rho \in A_{r} . Otherwise, assume \omega _{1}(\rho, \mathcal{J}(\rho)) > 0 . Set

    \begin{equation*} s = \sup \ \{\omega _{1}(\mathcal{J}(\varsigma ),\mathcal{J}(\rho ));\;\varsigma \in A_{r}\}. \end{equation*}

    From the definition of s , we have \mathcal{J}(A_{r})\subset B_{\omega }(\mathcal{J}(\rho), s) . Hence K_{r}\subset B_{\omega }(\mathcal{J}(\rho), s) , which implies \omega _{1}(\rho, \mathcal{J}(\rho))\leq s . Let \varepsilon > 0 . Then \exists \; \varsigma \in A_{r} such that s-\varepsilon \leq \omega _{1}(\mathcal{J}(\rho), \mathcal{J}(\varsigma)) . Hence

    \begin{eqnarray*} \omega _{1}(\rho ,\mathcal{J}(\rho ))-\varepsilon &\leq &s-\varepsilon \\ &\leq &\omega _{1}(\mathcal{J}(\rho ),\mathcal{J}(\varsigma )) \\ &\leq &\frac{1}{2}\Big(\omega _{1}(\rho ,\mathcal{J}(\rho ))+\omega _{1}(\varsigma ,\mathcal{J}(\varsigma ))\Big) \\ &\leq &\frac{1}{2}\Big(\omega _{1}(\rho ,\mathcal{J}(\rho ))+r\Big). \\ && \end{eqnarray*}

    Since \varepsilon is an arbitrary positive number, we get

    \begin{equation*} \omega _{1}(\rho ,\mathcal{J}(\rho ))\leq \frac{1}{2}\Big(\omega _{1}(\rho , \mathcal{J}(\rho ))+r\Big), \end{equation*}

    which implies \omega _{1}(\rho, \mathcal{J}(\rho))\leq r , i.e., \rho \in A_{r} as claimed. Since \mathcal{J}(A_{r})\subset K_{r} , we get \mathcal{ J}(K_{r})\subset \mathcal{J}(A_{r})\subset K_{r} , i.e., K_{r} is \mathcal{J} -invariant. Now we prove that \delta _{\omega }(K_{r})\leq r . First, we observe that

    \begin{equation*} \omega _{1}(\mathcal{J}(\rho ),\mathcal{J}(\sigma ))\leq \frac{1}{2}\Big( \omega _{1}(\rho ,\mathcal{J}(\rho ))+\omega _{1}(\sigma ,\mathcal{J} (\sigma ))\Big)\leq r, \end{equation*}

    for any \rho, \sigma \in A_{r} . Fix \rho \in A_{r} . Then \mathcal{J} (A_{r})\subset B_{\omega }(\mathcal{J}(\rho), r) . The definition of K_{r} implies K_{r}\subset B_{\omega }(\mathcal{J}(\rho), r) . Thus \mathcal{J} (\rho)\in \bigcap\limits_{\sigma \in K_{r}}\ B_{\omega }(\sigma, r) , which implies \mathcal{J}(A_{r})\subset \bigcap\limits_{\sigma \in K_{r}}\ B_{\omega }(\sigma, r) . Again by the definition of K_{r} , we get K_{r}\subset \bigcap\limits_{\sigma \in K_{r}}\ B_{\omega }(\sigma, r) . Therefore, we have \omega _{1}(\sigma, \varsigma)\leq r , for any \sigma, \varsigma \in K_{r} , i.e., \delta _{\omega }(K_{r})\leq r .

    Now, we are able to state and prove our result for \omega -nonexpansive Kannan maps on modular metric spaces.

    Theorem 13. Suppose that \mathcal{M}_{\omega } satisfies both the property (R) and the \omega -quasi-normal property. Let K be a nonempty \omega -bounded admissible subset of \mathcal{M}_{\omega } and let \mathcal{J}:K\rightarrow K be a Kannan \omega -nonexpansive mapping. Then \mathcal{J} has a fixed point.

    Proof. Set r_{0} = \inf \ \{\omega _{1}(\rho, \mathcal{J}(\rho)); \ \rho \in K\} and r_{n} = r_{0}+1/n , for n\geq 1 . By the definition of r_{0} , the set A_{r_{n}} = \{\rho \in K; \ \omega _{1}(\rho, \mathcal{J}(\rho))\leq r_{n}\} is not empty, for any n\geq 1 . Consider K_{r_{n}} as defined in Lemma 12. It is easy to see that \{K_{r_{n}}\} is a decreasing sequence of nonempty \omega -bounded admissible subsets of K . The property (R) implies that K_{\infty } = \bigcap\limits_{n\geq 1}\ K_{r_{n}} \neq \emptyset . Let \rho \in K_{\infty } . Then we have \omega _{1}(\rho, \mathcal{J}(\rho))\leq r_{n} , for any n\geq 1 . Letting n\rightarrow \infty , we get \omega _{1}(\rho, \mathcal{J}(\rho))\leq r_{0} , which, since r_{0} is an infimum, implies \omega _{1}(\rho, \mathcal{J}(\rho)) = r_{0} . Hence the set A_{r_{0}}\neq \emptyset . We claim that r_{0} = 0 . Otherwise, assume r_{0} > 0 , which implies that \mathcal{J} fails to have a fixed point. Again consider the set K_{r_{0}} as defined in Lemma 12. Note that since \mathcal{J} fails to have a fixed point and K_{r_{0}} is \mathcal{J} -invariant, K_{r_{0}} has more than one point, i.e., \delta _{\omega }(K_{r_{0}}) > 0 . It follows from the \omega -quasi-normal property that there exists \rho \in K_{r_{0}} such that

    \begin{equation*} \omega _{1}(\rho ,\sigma ) \lt \delta _{\omega }(K_{r_{0}})\leq r_{0}, \end{equation*}

    for any \sigma \in K_{r_{0}} . From Lemma 12, we know that K_{r_{0}}\subset A_{r_{0}} . From the definition of K_{r_{0}} , we have

    \mathcal{J}(\rho ) \in \mathcal{J}(A_{r_{0}}) \subset K_{r_{0}}.

    This obviously implies

    \begin{equation*} \omega _{1}(\rho ,\mathcal{J}(\rho )) \lt \delta _{\omega }(K_{r_{0}})\leq r_{0}, \end{equation*}

    which contradicts the definition of r_{0} . Hence r_{0} = 0 , which implies that any point in K_{\infty } is a fixed point of \mathcal{J} , i.e., \mathcal{J} has a fixed point in K .

    In this paper, we have introduced some notions to study the existence of fixed points for contractive and nonexpansive Kannan maps in the setting of modular metric spaces. Our results were proved using modular convergence, which is weaker than metric convergence. The results obtained generalize and improve some results in the literature.

    This work was funded by the University of Jeddah, Saudi Arabia, under grant No. UJ-02-081-DR. The author, therefore, acknowledges with thanks the University's technical and financial support. The author would like to thank Prof. Mohamed Amine Khamsi for his fruitful discussions and continued support of this paper.

    The author declares that they have no competing interests.



    [1] Z. Tan, J. Chen, Q. Kang, M. Zhou, A. Abdullah, S. Khaled, Dynamic embedding projection-gated convolutional neural networks for text classification, IEEE Trans. Neural Networks Learn. Syst., 33 (2022), 973–982. https://doi.org/10.1109/TNNLS.2020.3036192 doi: 10.1109/TNNLS.2020.3036192
    [2] W. Niu, C. Ma, X. Sun, M. Li, Z. Gao, A brain network analysis-based double way deep neural network for emotion recognition, IEEE Trans. Neural Syst. Rehabil. Eng., 31 (2023), 917–925. https://doi.org/10.1109/TNSRE.2023.3236434 doi: 10.1109/TNSRE.2023.3236434
    [3] J. C. G. Diaz, H. Zhao, Y. Zhu, P. Samuel, H. Sebastian, Recurrent neural network equalization for wireline communication systems, IEEE Trans. Circuits Syst. II, 69 (2022), 2116–2120. https://doi.org/10.1109/TCSII.2022.3152051 doi: 10.1109/TCSII.2022.3152051
    [4] X. Li, R. Guo, J. Lu, T. Chen, X. Qian, Causality-driven graph neural network for early diagnosis of pancreatic cancer in non-contrast computerized tomography, IEEE Trans. Med. Imag., 42 (2023), 1656–1667. https://doi.org/10.1109/TMI.2023.3236162 doi: 10.1109/TMI.2023.3236162
    [5] F. Fang, Y. Liu, J. H. Park, Y. Liu, Outlier-resistant nonfragile control of T-S fuzzy neural networks with reaction-diffusion terms and its application in image secure communication, IEEE Trans. Fuzzy Syst., 31 (2023), 2929–2942. https://doi.org/10.1109/TFUZZ.2023.3239732 doi: 10.1109/TFUZZ.2023.3239732
    [6] Z. Zhang, J. Liu, G. Liu, J. Wang, J. Zhang, Robustness verification of swish neural networks embedded in autonomous driving systems, IEEE Trans. Comput. Soc. Syst., 10 (2023), 2041–2050. https://doi.org/10.1109/TCSS.2022.3179659 doi: 10.1109/TCSS.2022.3179659
    [7] S. Zhou, H. Xu, G. Zhang, T. Ma, Y. Yang, Leveraging deep convolutional neural networks pre-trained on autonomous driving data for vehicle detection from roadside LiDAR data, IEEE Trans. Intell. Transp. Syst., 23 (2022), 22367–22377. https://doi.org/10.1109/TITS.2022.3183889 doi: 10.1109/TITS.2022.3183889
    [8] Y. Bai, T. Chaolu, S. Bilige, The application of improved physics-informed neural network (IPINN) method in finance, Nonlinear Dyn., 107 (2022), 3655–3667. https://doi.org/10.1007/s11071-021-07146-z doi: 10.1007/s11071-021-07146-z
    [9] G. Rajchakit, R. Sriraman, Robust passivity and stability analysis of uncertain complex-valued impulsive neural networks with time-varying delays, Neural Process. Lett., 33 (2021), 581–606. https://doi.org/10.1007/s11063-020-10401-w doi: 10.1007/s11063-020-10401-w
    [10] A. Pratap, R. Raja, R. P. Agarwal, J. Alzabut, M. Niezabitowski, H. Evren, Further results on asymptotic and finite-time stability analysis of fractional-order time-delayed genetic regulatory networks, Neurocomputing, 475 (2022), 26–37. https://doi.org/10.1016/j.neucom.2021.11.088 doi: 10.1016/j.neucom.2021.11.088
    [11] G. Rajchakit, R. Sriraman, N. Boonsatit, P. Hammachukiattikul, C. P. Lim, P. Agarwal, Global exponential stability of Clifford-valued neural networks with time-varying delays and impulsive effects, Adv. Differ. Equations, 208 (2021), 26–37. https://doi.org/10.1186/s13662-021-03367-z doi: 10.1186/s13662-021-03367-z
    [12] H. Lin, H. Zeng, X. Zhang, W. Wang, Stability analysis for delayed neural networks via a generalized reciprocally convex inequality, IEEE Trans. Neural Networks Learn. Syst., 34 (2023), 7191–7499. https://doi.org/10.1109/TNNLS.2022.3144032 doi: 10.1109/TNNLS.2022.3144032
    [13] Z. Zhang, X. Zhang, T. Yu, Global exponential stability of neutral-type Cohen-Grossberg neural networks with multiple time-varying neutral and discrete delays, Neurocomputing, 490 (2022), 124–131. https://doi.org/10.1016/j.neucom.2022.03.068 doi: 10.1016/j.neucom.2022.03.068
    [14] H. Wang, Y. He, C. Zhang, Type-dependent average dwell time method and its application to delayed neural networks with large delays, IEEE Trans. Neural Networks Learn. Syst., 35 (2024), 2875–2880. https://doi.org/10.1109/TNNLS.2022.3184712 doi: 10.1109/TNNLS.2022.3184712
    [15] Z. Sheng, C. Lin, B. Chen, Q. Wang, Asymmetric Lyapunov-Krasovskii functional method on stability of time-delay systems, Int. J. Robust Nonlinear Control, 31 (2021), 2847–2854. https://doi.org/10.1002/rnc.5417 doi: 10.1002/rnc.5417
    [16] L. Guo, S. Huang, L. Wu, Novel delay-partitioning approaches to stability analysis for uncertain Lur'e systems with time-varying delays, J. Franklin Inst., 358 (2021), 3884–3900. https://doi.org/10.1016/j.jfranklin.2021.02.030 doi: 10.1016/j.jfranklin.2021.02.030
    [17] J. H. Kim, Further improvement of Jensen inequality and application to stability of time-delayed systems, Automatica, 64 (2016), 3884–3900. https://doi.org/10.1016/j.automatica.2015.08.025 doi: 10.1016/j.automatica.2015.08.025
    [18] J. Chen, X. Zhang, J. H. Park, S. Xu, Improved stability criteria for delayed neural networks using a quadratic function negative-definiteness approach, IEEE Trans. Neural Networks Learn. Syst., 33 (2020), 1348–1354. https://doi.org/10.1109/TNNLS.2020.3042307 doi: 10.1109/TNNLS.2020.3042307
    [19] G. Kong, L. Guo, Stability analysis of delayed neural networks based on improved quadratic function condition, Neurocomputing, 524 (2023), 158–166. https://doi.org/10.1016/j.neucom.2022.12.012 doi: 10.1016/j.neucom.2022.12.012
    [20] Z. Zhai, H. Yan, S. Chen, C. Chen, H. Zeng, Novel stability analysis methods for generalized neural networks with interval time-varying delay, Inf. Sci., 635 (2023), 208–220. https://doi.org/10.1016/j.ins.2023.03.041 doi: 10.1016/j.ins.2023.03.041
    [21] T. Lee, J. Park, M. Park, O. Kwon, H. Jung, On stability criteria for neural networks with time-varying delay using Wirtinger-based multiple integral inequality, J. Franklin Inst., 352 (2015), 5627–5645. https://doi.org/10.1016/j.jfranklin.2015.08.024 doi: 10.1016/j.jfranklin.2015.08.024
    [22] X. Zhang, Q. Han, X. Ge, The construction of augmented Lyapunov-Krasovskii functionals and the estimation of their derivatives in stability analysis of time-delay systems: a survey, Int. J. Syst. Sci., 53 (2022), 2480–2495. https://doi.org/10.1080/00207721.2021.2006356 doi: 10.1080/00207721.2021.2006356
    [23] L. V. Hien, H. Trinh, Refined Jensen-based inequality approach to stability analysis of time-delay systems, IET Control Theory Appl., 9 (2015), 2188–2194. https://doi.org/10.1049/iet-cta.2014.0962 doi: 10.1049/iet-cta.2014.0962
    [24] F. Yang, J. He, L. Li, Matrix quadratic convex combination for stability of linear systems with time-varying delay via new augmented Lyapunov functional, 2016 12th World Congress on Intelligent Control and Automation, 2016, 1866–1870. https://doi.org/10.1109/WCICA.2016.7578791 doi: 10.1109/WCICA.2016.7578791
    [25] C. Zhang, Y. He, L. Jiang, M. Wu, Stability analysis for delayed neural networks considering both conservativeness and complexity, IEEE Trans. Neural Networks Learn. Syst., 27 (2016), 1486–1501. https://doi.org/10.1109/TNNLS.2015.2449898 doi: 10.1109/TNNLS.2015.2449898
    [26] S. Ding, Z. Wang, Y. Wu, H. Zhang, Stability criterion for delayed neural networks via Wirtinger-based multiple integral inequality, Neurocomputing, 214 (2016), 53–60. https://doi.org/10.1016/j.neucom.2016.04.058 doi: 10.1016/j.neucom.2016.04.058
    [27] B. Yang, J. Wang, X. Liu, Improved delay-dependent stability criteria for generalized neural networks with time-varying delays, Inf. Sci., 214 (2017), 299–312. https://doi.org/10.1016/j.ins.2017.08.072 doi: 10.1016/j.ins.2017.08.072
    [28] B. Yang, J. Wang, J. Wang, Stability analysis of delayed neural networks via a new integral inequality, Neural Networks, 88 (2017), 49–57. https://doi.org/10.1016/j.neunet.2017.01.008 doi: 10.1016/j.neunet.2017.01.008
    [29] C. Hua, Y. Wang, S. Wu, Stability analysis of neural networks with time-varying delay using a new augmented Lyapunov-Krasovskii functional, Neurocomputing, 332 (2019), 1–9. https://doi.org/10.1016/j.neucom.2018.08.044 doi: 10.1016/j.neucom.2018.08.044
  • This article has been cited by:

    1. Alireza Alihajimohammad, Reza Saadati, Generalized modular fractal spaces and fixed point theorems, 2021, 2021, 1687-1847, 10.1186/s13662-021-03538-y
    2. Godwin Amechi Okeke, Daniel Francis, Aviv Gibali, On fixed point theorems for a class of \alpha-{\hat{v}}-Meir–Keeler-type contraction mapping in modular extended b-metric spaces, 2022, 30, 0971-3611, 1257, 10.1007/s41478-022-00403-3
    3. Mahpeyker Öztürk, Abdurrahman Büyükkaya, Fixed point results for Suzuki‐type Σ‐contractions via simulation functions in modular b ‐metric spaces , 2022, 45, 0170-4214, 12167, 10.1002/mma.7634
    4. Olivier Olela Otafudu, Katlego Sebogodi, On w-Isbell-convexity, 2022, 23, 1989-4147, 91, 10.4995/agt.2022.15739
    5. Maria del Mar Bibiloni-Femenias, Oscar Valero, Modular Quasi-Pseudo Metrics and the Aggregation Problem, 2024, 12, 2227-7390, 1826, 10.3390/math12121826
    6. Daniel Francis, Godwin Amechi Okeke, Aviv Gibali, Another Meir-Keeler-type nonlinear contractions, 2025, 10, 2473-6988, 7591, 10.3934/math.2025349
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
