Research article

A deep learning framework for predicting the spread of diffusion diseases

  • In this paper, we considered a partitioned epidemic model with reaction-diffusion behavior, analyzed the dynamics of populations in various compartments, and explored the significance of spreading parameters. Unlike traditional approaches, we proposed a novel paradigm for addressing the dynamics of epidemic models: Inferring model dynamics and, more importantly, parameter inversion to analyze disease spread using Reaction-Diffusion Disease Information Neural Networks (RD-DINN). This method leverages the principles of hidden disease spread to overcome the black-box mechanism of neural networks relying on large datasets. Through an embedded deep neural network incorporating disease information, the RD-DINN approximates the dynamics of the model while predicting unknown parameters. To demonstrate the robustness of the RD-DINN method, we conducted an analysis based on two disease models with reaction-diffusion terms. Additionally, we systematically investigated the impact of the number of training points and noise data on the performance of the RD-DINN method. Our results indicated that the RD-DINN method exhibits relative errors less than 1% in parameter inversion with 10% noise data. In terms of dynamic predictions, the absolute error at any spatiotemporal point does not exceed 5%. In summary, we present a novel deep learning framework RD-DINN, which has been shown to be effective for reaction-diffusion disease modeling, providing an advanced computational tool for dynamic and parametric prediction of epidemic spread. The data and code used can be found at https://github.com/yuanfanglila/RD-DINN.

    Citation: Xiao Chen, Fuxiang Li, Hairong Lian, Peiguang Wang. A deep learning framework for predicting the spread of diffusion diseases[J]. Electronic Research Archive, 2025, 33(4): 2475-2502. doi: 10.3934/era.2025110




Fixed point theory is concerned with properties which ensure that a self map M defined on a set B admits at least one fixed point. By a fixed point of M, we mean a point w∈B which solves the operator equation w=Mw, known as the fixed point equation. Let F(M) = {w∈B : w=Mw} stand for the set of all fixed points of M. The theory of fixed points plays a significant role in finding the solutions of problems which arise in different branches of mathematical analysis. For some years now, the advancement of fixed point theory in metric spaces has captured considerable interest from many authors as a result of its applications in many fields such as variational inequality, approximation theory and optimization theory.

The Banach contraction principle still remains one of the fundamental theorems in analysis. It states that if (B,d) is a complete metric space and M:B→B fulfills

d(Mf,Mh) ≤ e d(f,h), (1.1)

for all f,h∈B with e∈[0,1), then there exists a unique fixed point of M.
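As a quick illustration of the principle, the sketch below iterates the Picard scheme fk+1 = M(fk) for a sample contraction on the real line; the map M(f) = f/2 + 1 (with e = 1/2 and fixed point 2) is a toy example of ours, not one from the paper.

```python
# A toy illustration of the Banach contraction principle on (R, |.|); the map
# M(f) = f/2 + 1 (contraction constant e = 1/2, fixed point w = 2) is our own
# example, not one from the paper.

def picard(M, f0, tol=1e-12, max_iter=10_000):
    """Iterate f_{k+1} = M(f_k) until successive iterates are within tol."""
    f = f0
    for _ in range(max_iter):
        f_next = M(f)
        if abs(f_next - f) <= tol:
            return f_next
        f = f_next
    return f

M = lambda f: 0.5 * f + 1.0   # d(Mf, Mh) = (1/2)|f - h|, so e = 1/2 < 1
w = picard(M, f0=100.0)
print(w)  # converges to the unique fixed point 2.0
```

Note that the limit does not depend on the starting point, exactly as the principle asserts.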

Mappings satisfying (1.1) are known as contraction mappings. In 2003, Berinde [10] introduced the class of weak contraction mappings in metric spaces. This class of mappings is also called almost contractions. He proved that this class is a superclass of the class of Zamfirescu mappings [53], which properly contains the classes of contraction, Kannan [23] and Chatterjea [11] mappings.

Definition 1.1. A map M:B→B is called an almost contraction if there exist constants e∈(0,1) and L≥0 such that

d(Mf,Mh) ≤ e d(f,h) + L d(f,Mf), for all f,h∈B. (1.2)

    In [10], Berinde showed that every almost contraction mapping M has a unique fixed point in a complete metric space (B,d). The map M is termed nonexpansive if

d(Mg,Mh) ≤ d(g,h), (1.3)

for all g,h∈B. It is known as a quasi-nonexpansive mapping if F(M)≠∅ and

d(Mg,w) ≤ d(g,w), (1.4)

for all g∈B and w∈F(M). Due to the numerous applications of nonexpansive mappings in mathematics and other related fields, their extensions and generalizations in many directions have been studied in recent years by different authors, see [7,40,41,42,44].

In 2008, Suzuki [44] studied a class of mappings called generalized nonexpansive mappings (or mappings satisfying condition (C)). The author studied the existence and convergence analysis of mappings satisfying condition (C).

Definition 1.2. A map M:B→B is said to satisfy condition (C) if

(1/2) d(f,Mf) ≤ d(f,h) implies d(Mf,Mh) ≤ d(f,h), for all f,h∈B. (1.5)

In 2011, García-Falset et al. [20] introduced a general class of nonexpansive mappings as follows:

Definition 1.3. A mapping M:B→B is said to satisfy condition (Eμ) if there exists μ≥1 such that

d(f,Mh) ≤ μ d(f,Mf) + d(f,h), (1.6)

for all f,h∈B. Now, M is said to satisfy condition (E) whenever M satisfies condition (Eμ) for some μ≥1.

Remark 1.4. As shown in [40], the classes of generalized α-nonexpansive mappings [42], Reich-Suzuki nonexpansive mappings [41], Suzuki generalized nonexpansive mappings [44] and generalized α-Reich-Suzuki nonexpansive mappings [40] are properly included in the class of mappings satisfying (1.6).

The celebrated Banach contraction principle works with the Picard iteration process. This principle has some limitations when more general mappings are considered. To get a better rate of convergence and overcome these limitations, several authors have studied different iteration processes. Some of these prominent iteration processes include: Mann [31], Ishikawa [26], Noor [32], S [3], Abbas [1], Thakur [46], Picard-S [22] and M [48] iteration processes. In [3], the authors showed that the S-iterative scheme converges at the same rate as the Picard iterative algorithm and faster than the Mann iteration process. In [1], it is shown that the Abbas iteration method converges faster than the Picard, Mann [31] and S-iteration [3] processes. In 2016, Thakur et al. [46] defined a new iterative process. It was shown by the authors that their method enjoys a better speed of convergence than the Mann [31], Ishikawa [26], Noor [32], S [3] and Abbas [1] iteration processes. In 2021, the JK iteration process was constructed by Ahmad et al. [4] for mappings satisfying condition (C). In [4,5], the authors showed that the JK iteration process converges faster than the Mann [31], Ishikawa [26], Noor [32], S [3], Abbas [1] and Thakur [46] iteration processes for mappings satisfying condition (C) and generalized α-nonexpansive mappings, respectively.

    Recently, the following four steps iteration process known as the AH iteration process was introduced by Ofem et al. [33] in Banach spaces.

f1∈B,
qk = (1−δk)fk + δk M fk,
vk = M²qk,
hk = M²vk,
fk+1 = (1−mk)hk + mk M hk,  k∈N, (1.7)

where {mk} and {δk} are sequences in (0,1). It was analytically shown in [33] that the AH iterative algorithm (1.7) converges faster than the JK iteration process [4] for contractive-like mappings. Furthermore, they showed numerically that the AH iteration process (1.7) converges faster than several existing iteration processes for contractive-like mappings and Reich-Suzuki nonexpansive mappings, respectively.
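The scheme (1.7) can be sketched in a few lines. The fragment below (Python, for illustration only; the paper's experiments use MATLAB) runs (1.7) on the real line, where the convex structure is the usual (1−a)f + a h; the contraction M(f) = f/2 is a sample map of our choosing.

```python
# Sketch of the AH iteration (1.7) on the real line; the contraction
# M(f) = f/2 is our own sample map, not one from the paper.

def ah_iteration(M, f1, m, delta, steps=30):
    """Run (1.7): qk, then vk = M^2 qk, hk = M^2 vk, and fk+1."""
    f = f1
    for _ in range(steps):
        q = (1 - delta) * f + delta * M(f)
        v = M(M(q))
        h = M(M(v))
        f = (1 - m) * h + m * M(h)
    return f

M = lambda f: 0.5 * f   # contraction with F(M) = {0}
f_final = ah_iteration(M, f1=1.0, m=0.7, delta=0.3)
print(abs(f_final))     # tiny: the iterates approach the fixed point 0
```

The two applications of M inside each of the vk and hk steps are what give the scheme its fast per-iteration contraction.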

Motivated by the above results, in this article, we construct the hyperbolic space version of the AH iteration process (3.1). Furthermore, we prove that the modified iteration process is data dependent for almost contraction mappings. We study several strong and Δ-convergence results of the AH iterative scheme for mappings enriched with condition (E). Some numerical examples of mappings enriched with condition (E) are provided to show the efficiency of our method over some existing methods. Finally, we apply our main results to solving a nonlinear integral equation with two delays. Since hyperbolic spaces are more general than Banach spaces and, by Remark 3.5, the class of mappings enriched with condition (E) is a superclass of those considered in Ahmad et al. [4,5] and Ofem et al. [33], it follows that our results generalize and extend the results of Ahmad et al. [4,5], Ofem et al. [33] and many other existing results of well-known authors.

Throughout this paper, we let N denote the set of natural numbers, R the set of real numbers and C the set of complex numbers. Any space endowed with some convexity structure is an important tool for solving the operator equation w=Mw. Since every Banach space is a vector space, a Banach space naturally inherits a convexity structure. On the other hand, metric spaces do not naturally enjoy this convex structure.

In [45], Takahashi developed the concept of convex metric spaces and further investigated the fixed points of certain mappings in the setting of such spaces. It is well known that the class of convex metric spaces contains all normed spaces as well as their convex subsets. But there are many examples of convex metric spaces which are not embedded in any normed space [45]. For some decades now, many authors have introduced convex structures in metric spaces. The following W-hyperbolic space was introduced by Kohlenbach [25]:

Definition 2.1. A W-hyperbolic space (B,d,W) is a metric space (B,d) together with a convexity mapping W:B²×[0,1]→B satisfying the following properties:

(1) d(q,W(f,h,α)) ≤ (1−α)d(q,f) + αd(q,h);

(2) d(W(f,h,α),W(f,h,β)) = |α−β| d(f,h);

(3) W(f,h,α) = W(h,f,1−α);

(4) d(W(f,q,α),W(h,p,α)) ≤ (1−α)d(f,h) + αd(q,p);

for all f,h,q,p∈B and α,β∈[0,1].

Suppose (B,d,W) fulfils only condition (1); then (B,d,W) becomes the convex metric space considered by Takahashi [45]. It is well known that every hyperbolic space is a convex metric space, but the converse is not generally true [14].

Normed linear spaces, CAT(0) spaces, the Hilbert ball and Busemann spaces are important examples of W-hyperbolic spaces [52].

A hyperbolic space (B,d,W) is said to be uniformly convex [12] if for every r>0 and ε∈(0,2] there exists a constant γ∈(0,1] such that, for all f,h,q∈B with d(h,f) ≤ r, d(q,f) ≤ r and d(h,q) ≥ εr, we have

d(W(h,q,1/2),f) ≤ (1−γ)r.

The modulus of uniform convexity [56] of B is a mapping ξ:(0,∞)×(0,2]→(0,1] which gives γ = ξ(r,ε) for any r>0 and ε∈(0,2]. We call ξ monotone if it decreases with r (for fixed ε), see [56].

A nonempty subset D of a hyperbolic space B is called convex if W(f,h,α)∈D for all f,h∈D and α∈[0,1]. If f,h∈B and α∈[0,1], then we denote W(f,h,α) by (1−α)f ⊕ αh. It is shown in [28] that any normed space (B,‖·‖) is a hyperbolic space with (1−α)f ⊕ αh = (1−α)f + αh. This implies that the class of uniformly convex hyperbolic spaces is a natural generalization of the class of uniformly convex Banach spaces.
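As a sanity check of Definition 2.1 in this model case, the sketch below numerically verifies properties (1)-(3) for W(f,h,α) = (1−α)f + αh on R at sampled points (property (4) follows by the same computation); the helper names are ours.

```python
# Numerical spot check (not a proof) that W(f,h,a) = (1-a)f + a h on R
# satisfies properties (1)-(3) of Definition 2.1 at sampled points.

def W(f, h, a):
    return (1 - a) * f + a * h

def check_W(points, weights):
    d = lambda x, y: abs(x - y)
    for f in points:
        for h in points:
            for q in points:
                for a in weights:
                    # (1) convexity inequality
                    assert d(q, W(f, h, a)) <= (1 - a) * d(q, f) + a * d(q, h) + 1e-12
                    # (3) symmetry
                    assert abs(W(f, h, a) - W(h, f, 1 - a)) <= 1e-12
                    for b in weights:
                        # (2) the W-segment is metrically affine
                        assert abs(d(W(f, h, a), W(f, h, b)) - abs(a - b) * d(f, h)) <= 1e-12
    return True

ok_W = check_W([-1.0, 0.0, 0.5, 2.0], [0.0, 0.25, 0.5, 1.0])
print(ok_W)  # True
```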

The concept of Δ-convergence in the setting of general metric spaces was introduced by Lim [30]. This concept of convergence was used by Kirk and Panyanak [24] to prove some results in CAT(0) spaces that are analogues of Banach space results involving weak convergence. Furthermore, Δ-convergence results for the Picard, Mann [31] and Ishikawa [26] iteration processes in CAT(0) spaces were obtained by Dhompongsa and Panyanak [17]. In recent years, a number of articles concerning Δ-convergence have been published (see [2,19,21,27,34,52] and the references therein). To define Δ-convergence, we consider the following concepts.

Let {fk} be a bounded sequence in a hyperbolic space B. A function r(·,{fk}):B→[0,∞) can be defined by

r(f,{fk}) = lim sup_{k→∞} d(f,fk), for all f∈B.

The asymptotic radius of a bounded sequence {fk} with respect to a nonempty subset D of B is denoted and defined by

rD({fk}) = inf{ r(f,{fk}) : f∈D }.

The asymptotic center of a bounded sequence {fk} with respect to a nonempty subset D of B is denoted and defined by

AD({fk}) = { f∈D : r(f,{fk}) ≤ r(h,{fk}) for all h∈D }.

If the asymptotic radius and the asymptotic center are taken with respect to B, then they are simply denoted by r({fk}) and A({fk}), respectively. In general, A({fk}) may be empty or may even contain infinitely many points, see [2,19,21,27,52,56].

    The following lemmas, definitions and proposition will be useful in our main results.

Definition 2.2. [24] The sequence {fk} in B is said to be Δ-convergent to a point f∈B if f is the unique asymptotic center of every subsequence {fkj} of {fk}. In this case, we write Δ-lim_{k→∞} fk = f and call f the Δ-limit of {fk}.

Lemma 2.3. [28] In a complete uniformly convex hyperbolic space B with a monotone modulus of convexity ξ, every bounded sequence {fk} has a unique asymptotic center with respect to every nonempty closed convex subset D of B.

Lemma 2.4. [29] Let (B,d,W) be a complete uniformly convex hyperbolic space with a monotone modulus of convexity ξ. Assume f∈B and {αk} is a sequence in [n,m] for some n,m∈(0,1). If {fk} and {hk} are sequences in B such that lim sup_{k→∞} d(fk,f) ≤ a, lim sup_{k→∞} d(hk,f) ≤ a and lim_{k→∞} d(W(fk,hk,αk),f) = a for some a≥0, then

lim_{k→∞} d(fk,hk) = 0.

Lemma 2.5. [47] Let {ak} be a non-negative sequence for which one assumes that there exists an n0∈N such that, for all k≥n0,

ak+1 ≤ (1−σk)ak + σk gk

is satisfied, where σk∈(0,1) for all k∈N, Σ_{k=0}^{∞} σk = ∞ and gk ≥ 0 for all k∈N. Then the following holds:

0 ≤ lim sup_{k→∞} ak ≤ lim sup_{k→∞} gk.

Definition 2.6. [47] Let M,S:B→B. Then S is an approximate operator of M if, for a fixed ε>0, we have d(Mf,Sf) ≤ ε for all f∈B.

Proposition 2.7. [20] Let M:B→B be a mapping which satisfies condition (E) with F(M)≠∅. Then M is quasi-nonexpansive.

Lemma 2.8. [43] Let D be a subset of (B,d,W). A mapping M:D→D is said to fulfil condition (I) if there exists a non-decreasing function ϱ:[0,∞)→[0,∞) with ϱ(0)=0 and ϱ(r)>0 for all r∈(0,∞) such that d(f,Mf) ≥ ϱ(dist(f,F(M))) for all f∈D, where dist(f,F(M)) stands for the distance of f from F(M).

    Throughout the remaining part of this article, let (B,d,W) denote a complete uniformly convex hyperbolic space with a monotone modulus of convexity ξ and D be a nonempty closed convex subset of B.

    In this section, we construct a modified form of AH iteration process (1.7) in hyperbolic spaces as follows:

f1∈D,
qk = W(fk, Mfk, δk),
vk = M²qk,
hk = M²vk,
fk+1 = W(hk, Mhk, mk),  k∈N, (3.1)

    where {mk}, {δk} are sequences in (0,1) and M is a mapping enriched with condition (E).
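A minimal sketch of (3.1), with the convexity mapping W passed in explicitly; here W is the usual convex combination on R and M(f) = f/3 is a sample contraction, both illustrative choices of ours rather than data from the paper.

```python
# Sketch of the modified AH iteration (3.1) against an abstract convexity
# mapping W; W and M below are illustrative choices of ours.

def ah_hyperbolic(M, W, f1, m_seq, delta_seq):
    """Run (3.1): qk = W(fk, Mfk, dk), vk = M^2 qk, hk = M^2 vk,
    fk+1 = W(hk, M hk, mk)."""
    f = f1
    for m, delta in zip(m_seq, delta_seq):
        q = W(f, M(f), delta)
        v = M(M(q))
        h = M(M(v))
        f = W(h, M(h), m)
    return f

W = lambda f, h, a: (1 - a) * f + a * h    # hyperbolic structure of R
M = lambda f: f / 3.0                      # contraction with F(M) = {0}
f_out = ah_hyperbolic(M, W, 1.0, [0.7] * 25, [0.3] * 25)
print(abs(f_out) < 1e-10)  # True: the iterates approach the fixed point 0
```

Replacing W by the convexity mapping of any other hyperbolic space gives the general scheme.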

    In this section, we show the data dependence result of the iteration process (3.1) for almost contraction mappings. The following convergence theorem will be useful in obtaining the data dependence result.

Theorem 3.1. Let D be a nonempty, closed and convex subset of a hyperbolic space B and M:D→D an almost contraction mapping. If {fk} is the sequence defined by (3.1), then lim_{k→∞} fk = w, where w∈F(M).

Proof. Suppose that w∈F(M). From (1.2) and (3.1), we have

d(qk,w) = d(W(fk,Mfk,δk),w)
≤ (1−δk)d(fk,w) + δk d(Mfk,w)
≤ (1−δk)d(fk,w) + δk e d(fk,w)
= (1−(1−e)δk) d(fk,w). (3.2)

Since δk∈(0,1) and e∈(0,1), then 0 < 1−(1−e)δk < 1; thus (3.2) yields

d(qk,w) ≤ d(fk,w). (3.3)

    Also, by (3.1) and (3.3), we have

d(vk,w) = d(M²qk,w) = d(M(Mqk),w) ≤ e d(Mqk,w) ≤ e² d(qk,w) ≤ e² d(fk,w). (3.4)

    Again, from (3.1) and (3.4), we get

d(hk,w) = d(M²vk,w) = d(M(Mvk),w) ≤ e d(Mvk,w) ≤ e² d(vk,w) ≤ e⁴ d(fk,w). (3.5)

    Finally, using (3.1) and (3.5), we obtain

d(fk+1,w) = d(W(hk,Mhk,mk),w)
≤ (1−mk)d(hk,w) + mk d(Mhk,w)
≤ (1−mk)d(hk,w) + mk e d(hk,w)
= (1−(1−e)mk) d(hk,w)
≤ e⁴ d(fk,w). (3.6)

    Inductively, we obtain

d(fk+1,w) ≤ e^{4k} d(f1,w).

Since 0 < e < 1, it follows that lim_{k→∞} fk = w.

Theorem 3.2. Let D, B and M be the same as defined in Theorem 3.1. Let S be an approximate operator of M and {fk} a sequence defined by (3.1). We define an iterative sequence {xk} for the almost contraction mapping M as follows:

x1∈D,
zk = W(xk, Sxk, δk),
wk = S²zk,
yk = S²wk,
xk+1 = W(yk, Syk, mk),  k∈N, (3.7)

where {mk} and {δk} are sequences in (0,1) satisfying 1/2 ≤ mk for all k∈N and Σ_{k=0}^{∞} mk = ∞. If Mw=w and St=t are such that xk→t as k→∞, then we have

d(w,t) ≤ 11ε/(1−e),

where ε is the fixed number from Definition 2.6.

    Proof. Using (1.2), (3.1) and (3.7), we have

d(qk,zk) = d(W(fk,Mfk,δk), W(xk,Sxk,δk))
≤ (1−δk)d(fk,xk) + δk d(Mfk,Sxk)
≤ (1−δk)d(fk,xk) + δk d(Mfk,Mxk) + δk d(Mxk,Sxk)
≤ (1−δk)d(fk,xk) + δk e d(fk,xk) + δk L d(fk,Mfk) + δk ε
≤ [1−(1−e)δk] d(fk,xk) + δk L(1+e) d(fk,w) + δk ε, (3.8)

d(vk,wk) = d(M²qk,S²zk) = d(M(Mqk),S(Szk))
≤ d(M(Mqk),M(Szk)) + d(M(Szk),S(Szk))
≤ e d(Mqk,Szk) + L d(Mqk,M(Mqk)) + ε
≤ e(d(Mqk,Mzk) + d(Mzk,Szk)) + L d(Mqk,M(Mqk)) + ε
≤ e² d(qk,zk) + e L d(qk,Mqk) + e ε + L(d(Mqk,w) + d(w,M(Mqk))) + ε
≤ e² d(qk,zk) + e L(d(qk,w) + d(w,Mqk)) + e ε + L(e d(qk,w) + e d(w,Mqk)) + ε
≤ e² d(qk,zk) + e L(1+e) d(qk,w) + e ε + L e(1+e) d(qk,w) + ε, (3.9)

d(hk,yk) = d(M²vk,S²wk) = d(M(Mvk),S(Swk))
≤ d(M(Mvk),M(Swk)) + d(M(Swk),S(Swk))
≤ e d(Mvk,Swk) + L d(Mvk,M(Mvk)) + ε
≤ e(d(Mvk,Mwk) + d(Mwk,Swk)) + L d(Mvk,M(Mvk)) + ε
≤ e² d(vk,wk) + e L d(vk,Mvk) + e ε + L(d(Mvk,w) + d(w,M(Mvk))) + ε
≤ e² d(vk,wk) + e L(d(vk,w) + d(w,Mvk)) + e ε + L(e d(vk,w) + e d(w,Mvk)) + ε
≤ e² d(vk,wk) + e L(1+e) d(vk,w) + e ε + L e(1+e) d(vk,w) + ε, (3.10)

d(fk+1,xk+1) = d(W(hk,Mhk,mk), W(yk,Syk,mk))
≤ (1−mk)d(hk,yk) + mk d(Mhk,Syk)
≤ (1−mk)d(hk,yk) + mk d(Mhk,Myk) + mk d(Myk,Syk)
≤ (1−mk)d(hk,yk) + mk e d(hk,yk) + mk L d(hk,Mhk) + mk ε
≤ [1−(1−e)mk] d(hk,yk) + mk L(1+e) d(hk,w) + mk ε. (3.11)

    Using (3.8)–(3.11), we have

d(fk+1,xk+1) ≤ e⁴[1−(1−e)mk][1−(1−e)δk] d(fk,xk) + e⁴ δk[1−(1−e)mk] L(1+e) d(fk,w) + δk[1−(1−e)mk] ε
+ e³[1−(1−e)mk] L(1+e) d(qk,w) + e³[1−(1−e)mk] ε + e²[1−(1−e)mk] L e(1+e) d(qk,w) + e²[1−(1−e)mk] ε
+ e[1−(1−e)mk] L(1+e) d(vk,w) + e[1−(1−e)mk] ε + [1−(1−e)mk] L e(1+e) d(vk,w) + [1−(1−e)mk] ε
+ mk L(1+e) d(hk,w) + mk ε
= e⁴[1−(1−e)mk][1−(1−e)δk] d(fk,xk) + e⁴ δk[1−(1−e)mk] L(1+e) d(fk,w) + δk ε + mk δk(e−1) ε
+ e³[1−(1−e)mk] L(1+e) d(qk,w) + e³ ε + e⁴ mk ε + e²[1−(1−e)mk] L e(1+e) d(qk,w) + e² ε
+ e[1−(1−e)mk] L(1+e) d(vk,w) + e ε + [1−(1−e)mk] L e(1+e) d(vk,w) + ε
+ mk L(1+e) d(hk,w). (3.12)

Since {mk},{δk} ⊂ (0,1) and e∈(0,1), it follows that (e−1) ≤ 0, [1−(1−e)mk] ≤ 1 and [1−(1−e)δk] ≤ 1. Therefore, (3.12) becomes

d(fk+1,xk+1) ≤ [1−(1−e)mk] d(fk,xk) + L(1+e) d(fk,w) + L(1+e) d(qk,w) + L e(1+e) d(qk,w) + L(1+e) d(vk,w) + L e(1+e) d(vk,w) + mk L(1+e) d(hk,w) + mk ε + 5ε
= [1−(1−e)mk] d(fk,xk) + L(1+e) d(fk,w) + L(1+e)² d(qk,w) + L(1+e)² d(vk,w) + mk L(1+e) d(hk,w) + mk ε + 5ε. (3.13)

Since 1/2 ≤ mk for all k≥1, we have 1 ≤ 2mk for all k≥1. Thus, (3.13) becomes

d(fk+1,xk+1) ≤ [1−(1−e)mk] d(fk,xk) + 2mk L(1+e) d(fk,w) + 2mk L(1+e)² d(qk,w) + 2mk L(1+e)² d(vk,w) + mk L(1+e) d(hk,w) + mk ε + 10 mk ε
= [1−(1−e)mk] d(fk,xk) + mk(1−e) × { (2L(1+e) d(fk,w) + 2L(1+e)² d(qk,w) + 2L(1+e)² d(vk,w) + mk L(1+e) d(hk,w) + 11ε) / (1−e) }. (3.14)

    Therefore,

ak+1 ≤ (1−σk) ak + σk gk,

    where

    ak+1=d(fk+1,xk+1),

σk = (1−e) mk ∈ (0,1),

    and

gk = (2L(1+e) d(fk,w) + 2L(1+e)² d(qk,w) + 2L(1+e)² d(vk,w) + mk L(1+e) d(hk,w) + 11ε) / (1−e) ≥ 0.

From Theorem 3.1, we have that lim_{k→∞} d(fk,w) = lim_{k→∞} d(hk,w) = lim_{k→∞} d(vk,w) = lim_{k→∞} d(qk,w) = 0. By the hypothesis xk→t as k→∞ and using Lemma 2.5, we obtain

d(w,t) ≤ 11ε/(1−e).

    This completes the proof.
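The bound of Theorem 3.2 can be observed numerically. In the sketch below (an illustration of ours, not part of the paper), M(f) = f/2 is an almost contraction with e = 1/2 and w = 0, and S(f) = f/2 + ε satisfies d(Mf,Sf) = ε for every f; iterating (3.7) with S drives xk to the fixed point t = 2ε of S, and d(w,t) = 2ε indeed lies below 11ε/(1−e) = 22ε.

```python
# Numerical illustration of the data-dependence bound d(w,t) <= 11*eps/(1-e);
# the maps M and S below are our own sample operators.

eps = 0.01
M = lambda f: 0.5 * f            # almost contraction, fixed point w = 0
S = lambda f: 0.5 * f + eps      # approximate operator, fixed point t = 2*eps

def W(f, h, a):
    return (1 - a) * f + a * h

def iterate_37(T, x1, m, delta, steps=200):
    """Run scheme (3.7) driven by the operator T."""
    x = x1
    for _ in range(steps):
        z = W(x, T(x), delta)
        u = T(T(z))
        y = T(T(u))
        x = W(y, T(y), m)
    return x

t = iterate_37(S, x1=1.0, m=0.6, delta=0.5)   # note m >= 1/2, as required
w, e = 0.0, 0.5
print(abs(w - t) <= 11 * eps / (1 - e))       # True: 0.02 <= 0.22
```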

Now, we obtain the strong and Δ-convergence results for (3.1). To obtain these results, we will use the following lemmas.

Lemma 3.3. Let D and B be the same as defined in Theorem 3.2 and let M:D→D be a mapping enriched with condition (E) such that F(M)≠∅. Suppose {fk} is the sequence iteratively generated by (3.1). Then, lim_{k→∞} d(fk,w) exists for all w∈F(M).

Proof. Assume that w∈F(M). From Proposition 2.7 and (3.1), we have

d(qk,w) = d(W(fk,Mfk,δk),w) ≤ (1−δk)d(fk,w) + δk d(Mfk,w) ≤ (1−δk)d(fk,w) + δk d(fk,w) = d(fk,w). (3.15)

    From (3.15) and (3.1), we have

d(vk,w) = d(M²qk,w) = d(M(Mqk),w) ≤ d(Mqk,w) ≤ d(qk,w) ≤ d(fk,w). (3.16)

    Using (3.16) and (3.1), we get

d(hk,w) = d(M²vk,w) = d(M(Mvk),w) ≤ d(Mvk,w) ≤ d(vk,w) ≤ d(fk,w). (3.17)

Finally, using (3.17) and (3.1), we obtain

d(fk+1,w) = d(W(hk,Mhk,mk),w) ≤ (1−mk)d(hk,w) + mk d(Mhk,w) ≤ (1−mk)d(hk,w) + mk d(hk,w) = d(hk,w) ≤ d(fk,w). (3.18)

    This shows that {d(fk,w)} is a non-increasing sequence which is bounded below. Thus, limkd(fk,w) exists for each wF(M).

Lemma 3.4. Let D, B and M be the same as defined in Lemma 3.3. Let {fk} be the sequence defined by (3.1). Then, F(M)≠∅ if and only if {fk} is bounded and lim_{k→∞} d(fk,Mfk) = 0.

Proof. Assume that {fk} is a bounded sequence with lim_{k→∞} d(fk,Mfk) = 0. Let w∈AD({fk}). By the definition of the asymptotic radius, we have

r(Mw,{fk}) = lim sup_{k→∞} d(fk,Mw).

    Since M is a mapping which satisfies condition (E), we obtain

r(Mw,{fk}) = lim sup_{k→∞} d(fk,Mw) ≤ μ lim sup_{k→∞} d(Mfk,fk) + lim sup_{k→∞} d(fk,w) = r(w,{fk}).

    Recalling the uniqueness of the asymptotic center of {fk}, we get Mw=w.

Conversely, suppose F(M)≠∅ and let w∈F(M). Then by Lemma 3.3, lim_{k→∞} d(fk,w) exists. Now, suppose

lim_{k→∞} d(fk,w) = c. (3.19)

    From (3.15), (3.16) and (3.19), it follows that

lim sup_{k→∞} d(vk,w) ≤ c, (3.20)
lim sup_{k→∞} d(qk,w) ≤ c. (3.21)

    Using Proposition 2.7, we get

lim sup_{k→∞} d(Mfk,w) ≤ lim sup_{k→∞} d(fk,w) = c. (3.22)

    By Lemma 3.3 and (3.1), one obtains

d(fk+1,w) = d(W(hk,Mhk,mk),w)
≤ (1−mk)d(hk,w) + mk d(Mhk,w)
≤ (1−mk)d(fk,w) + mk d(hk,w)
= (1−mk)d(fk,w) + mk d(M²vk,w)
= (1−mk)d(fk,w) + mk d(M(Mvk),w)
≤ (1−mk)d(fk,w) + mk d(Mvk,w)
≤ (1−mk)d(fk,w) + mk d(vk,w)
= (1−mk)d(fk,w) + mk d(M²qk,w)
= (1−mk)d(fk,w) + mk d(M(Mqk),w)
≤ (1−mk)d(fk,w) + mk d(Mqk,w)
≤ (1−mk)d(fk,w) + mk d(qk,w). (3.23)

    From (3.23), it follows that

d(fk+1,w) − d(fk,w) ≤ (d(fk+1,w) − d(fk,w))/mk ≤ d(qk,w) − d(fk,w). (3.24)

    Thus,

c ≤ lim inf_{k→∞} d(qk,w). (3.25)

    From (3.21) and (3.25), we get

c = lim_{k→∞} d(qk,w). (3.26)

    Using (3.1) and (3.26), we have

c = lim_{k→∞} d(qk,w) = lim_{k→∞} d(W(fk,Mfk,δk),w). (3.27)

    So, from Lemma 2.4 we have

lim_{k→∞} d(fk,Mfk) = 0.

Now, we show the Δ-convergence of the iteration process (3.1) for the class of mappings enriched with condition (E).

Theorem 3.5. Let D, B and M be the same as in Lemma 3.4 such that F(M)≠∅. Let {fk} be the sequence defined by (3.1). Then, {fk} Δ-converges to a fixed point of M.

Proof. By Lemma 3.3, we know that {fk} is a bounded sequence. It follows that {fk} has a Δ-convergent subsequence. Now, we show that every Δ-convergent subsequence of {fk} has a unique Δ-limit in F(M). Let y and z stand for the Δ-limits of the subsequences {fki} and {fkj} of {fk}, respectively. Recalling Lemma 2.3, we have AD({fki}) = {y} and AD({fkj}) = {z}. From Lemma 3.4, it follows that lim_{i→∞} d(fki,Mfki) = 0 and lim_{j→∞} d(fkj,Mfkj) = 0. We first show that y∈F(M). We know that

r(My,{fki}) = lim sup_{i→∞} d(fki,My).

    Since M is a mapping which satisfies condition (E), we obtain

r(My,{fki}) = lim sup_{i→∞} d(fki,My) ≤ μ lim sup_{i→∞} d(Mfki,fki) + lim sup_{i→∞} d(fki,y) = lim sup_{i→∞} d(fki,y) = r(y,{fki}).

From the uniqueness of the asymptotic center, we get My=y; similarly, Mz=z. It is left to prove that y=z. Suppose, on the contrary, that y≠z; then from the uniqueness of asymptotic centers, it follows that

lim sup_{k→∞} d(fk,y) = lim sup_{i→∞} d(fki,y) < lim sup_{i→∞} d(fki,z) = lim sup_{k→∞} d(fk,z) = lim sup_{j→∞} d(fkj,z) < lim sup_{j→∞} d(fkj,y) = lim sup_{k→∞} d(fk,y),

which is clearly a contradiction. Therefore, y=z and hence, {fk} Δ-converges to a fixed point of M.

Next, we prove some strong convergence theorems as follows:

Theorem 3.6. Let D, B and M be the same as in Lemma 3.4 such that F(M)≠∅, and let {fk} be the sequence iteratively generated by (3.1). Then, {fk} converges strongly to a fixed point of M if and only if lim inf_{k→∞} dist(fk,F(M)) = 0, where dist(fk,F(M)) = inf{d(fk,w) : w∈F(M)}.

Proof. Suppose lim inf_{k→∞} dist(fk,F(M)) = 0. By Lemma 3.3, lim_{k→∞} d(fk,w) exists for every w∈F(M), and hence lim_{k→∞} dist(fk,F(M)) exists. Thus,

lim_{k→∞} dist(fk,F(M)) = 0. (3.28)

From (3.28), there exist a subsequence {fki} of {fk} and a sequence {ti} in F(M) such that d(fki,ti) ≤ 1/2^i for all i≥1. In view of Lemma 3.3, we obtain

d(fki+1,ti) ≤ d(fki,ti) ≤ 1/2^i. (3.29)

    Using (3.29), we have

d(ti+1,ti) ≤ d(ti+1,fki+1) + d(fki+1,ti) (3.30)
≤ 1/2^(i+1) + 1/2^i < 1/2^(i−1). (3.31)

It follows clearly that {fk} is a Cauchy sequence in D. Since D is a closed subset of B, lim_{k→∞} fk = z for some z∈D. Now, we prove that z is a fixed point of M. Since M is a mapping which satisfies condition (E), we have

d(fk,Mz) ≤ μ d(fk,Mfk) + d(fk,z).

Letting k→∞, by Lemma 3.4 we have lim_{k→∞} d(fk,Mfk) = 0, and it follows that d(z,Mz) = 0. So, z is a fixed point of M. Thus, {fk} converges strongly to a point in F(M).

Theorem 3.7. Let D, B and M be the same as in Lemma 3.4 such that F(M)≠∅. Suppose {fk} is the sequence defined by (3.1) and M satisfies condition (I). Then, {fk} converges strongly to an element of F(M).

Proof. Using Lemma 3.4, we have

lim inf_{k→∞} d(Mfk,fk) = 0. (3.32)

Since M fulfills condition (I), we get d(Mfk,fk) ≥ ϱ(dist(fk,F(M))). From (3.32), we obtain

lim inf_{k→∞} ϱ(dist(fk,F(M))) = 0.

Again, since the function ϱ:[0,∞)→[0,∞) is non-decreasing with ϱ(0)=0 and ϱ(r)>0 for all r∈(0,∞), we have

lim inf_{k→∞} dist(fk,F(M)) = 0.

Thus, all the conditions of Theorem 3.6 are fulfilled. Hence, {fk} converges strongly to a fixed point of M.

    Now, we give the following example to authenticate Theorem 3.7.

Example 3.8. Let B=R with the metric d(f,h)=|f−h| and D=[−3,∞). Define W:B²×[0,1]→B by W(f,h,α) = αf + (1−α)h for all f,h∈B and α∈[0,1]. Then (B,d,W) is a complete uniformly convex hyperbolic space with a monotone modulus of convexity, and D is a nonempty closed convex subset of B. Let M:D→D be defined by

Mf = { f/6, if f∈[−3,1/3],
       f/7, if f∈(1/3,∞).

Since M is not continuous at f=1/3 and every nonexpansive mapping is continuous, it follows that M is not a nonexpansive mapping. Next, we show that M is enriched with condition (E). To see this, we consider the following cases:

Case Ⅰ: Let f,h∈[−3,1/3]. Then we have

d(f,Mh) = |f − h/6| = |f − f/6 + f/6 − h/6| ≤ |f − f/6| + (1/6)|f−h| ≤ (36/35)|f − f/6| + |f−h| = (36/35) d(f,Mf) + d(f,h).

Case Ⅱ: Let f,h∈(1/3,∞). Then we have

d(f,Mh) = |f − h/7| = |f − f/7 + f/7 − h/7| ≤ |f − f/7| + (1/7)|f−h| ≤ (36/35)|f − f/7| + |f−h| = (36/35) d(f,Mf) + d(f,h).

Case Ⅲ: Let f∈[−3,1/3] and h∈(1/3,∞). Then we obtain

d(f,Mh) = |f − h/7| = |f − f/7 + f/7 − h/7| ≤ |f − f/7| + (1/7)|f−h| ≤ |f − f/6 + f/6 − f/7| + |f−h| = |(f − f/6) + (1/35)(f − f/6)| + |f−h| = (36/35)|f − f/6| + |f−h| = (36/35) d(f,Mf) + d(f,h).

Case Ⅳ: Let f∈(1/3,∞) and h∈[−3,1/3]. Then we get

d(f,Mh) = |f − h/6| = |f − f/6 + f/6 − h/6| ≤ |f − f/6| + (1/6)|f−h| ≤ |f − f/7 + f/7 − f/6| + |f−h| = |(f − f/7) − (1/36)(f − f/7)| + |f−h| = (35/36)|f − f/7| + |f−h| ≤ (36/35)|f − f/7| + |f−h| = (36/35) d(f,Mf) + d(f,h).

Clearly, from the cases shown above, M satisfies condition (E) with μ = 36/35, and the fixed point is w=0. Hence, F(M)={0}. Now, we consider the function ϱ(f) = f/4 for f∈[0,∞); then ϱ is non-decreasing with ϱ(0)=0 and ϱ(f)>0 for all f∈(0,∞).

Observe that

dist(f,F(M)) = inf_{w∈F(M)} d(f,w) = d(f,0) = |f|, so that ϱ(dist(f,F(M))) = |f|/4.

    Finally, we consider the following cases:

Case 1. Let f∈[−3,1/3]. Then we obtain

d(f,Mf) = |f − f/6| = (5/6)|f| ≥ (1/4)|f| = ϱ(dist(f,F(M))).

Case 2. Let f∈(1/3,∞). Then we obtain

d(f,Mf) = |f − f/7| = (6/7)|f| ≥ (1/4)|f| = ϱ(dist(f,F(M))).

Thus, the above cases prove that

d(f,Mf) ≥ ϱ(dist(f,F(M))).

Therefore, the mapping M satisfies condition (I). Clearly, all the hypotheses of Theorem 3.7 are fulfilled. Thus, by Theorem 3.7, the sequence {fk} defined by (3.1) converges strongly to the fixed point w=0 of M.
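The condition-(E) inequality of Example 3.8 can be probed numerically; the check below samples pairs from D (a spot check of ours, not a proof) and confirms d(f,Mh) ≤ (36/35) d(f,Mf) + d(f,h) at each sampled pair.

```python
# Spot check (samples only, not a proof) of Example 3.8: the discontinuous map
# M on D = [-3, oo) satisfies the condition-(E) inequality with mu = 36/35.

def M(f):
    return f / 6.0 if f <= 1/3 else f / 7.0

mu = 36.0 / 35.0
samples = [-3.0, -1.0, -0.2, 0.0, 0.3, 1/3, 0.4, 1.0, 5.0, 50.0]

ok = all(
    abs(f - M(h)) <= mu * abs(f - M(f)) + abs(f - h) + 1e-12
    for f in samples for h in samples
)
print(ok)  # True, in line with Cases I-IV above
```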

In this section, we construct a mapping enriched with condition (E) which does not satisfy condition (C). Then, using this example, we illustrate that the modified AH iterative algorithm (3.1) converges faster than several known methods.

Example 3.9. Let B=R and D=[−1,1] with the usual metric, that is, d(f,h)=|f−h|. Define M:D→D by

Mf = { −f,   if f∈[−1,0]∖{−1/5},
       0,    if f=−1/5,
       −f/5, if f∈(0,1].

If f=−1 and h=−1/5, we have

(1/2) d(h,Mh) = (1/2)|−1/5 − M(−1/5)| = 1/10 ≤ 4/5 = d(f,h).

But,

d(Mf,Mh) = 1 > 4/5 = d(f,h).

    Thus, the mapping M does not satisfy condition (C). Next, we show that M is a mapping which satisfies condition (E). For this, the following cases will be considered:

Case a: When f,h∈[−1,0]∖{−1/5}, we have

d(f,Mh) ≤ d(f,Mf) + d(Mf,Mh) = d(f,Mf) + d(f,h).

Case b: When f,h∈(0,1], we have

d(f,Mh) ≤ d(f,Mf) + d(Mf,Mh) = d(f,Mf) + |−f/5 + h/5| ≤ d(f,Mf) + |f−h| = d(f,Mf) + d(f,h).

Case c: When f∈[−1,0]∖{−1/5} and h∈(0,1], we have

d(f,Mh) ≤ d(f,Mf) + d(Mf,Mh) = d(f,Mf) + |−f + h/5| ≤ d(f,Mf) + |−f + h| (as f≤0, h>0) = d(f,Mf) + d(f,h).

Case d: When f∈[−1,0]∖{−1/5} and h=−1/5, we have

d(f,Mh) = |f| ≤ 2|f| + |f + 1/5| = d(f,Mf) + d(f,h).

Case e: When f∈(0,1] and h=−1/5, we have

d(f,Mh) = |f| ≤ (6/5)|f| + |f + 1/5| = d(f,Mf) + d(f,h).

Thus, M satisfies condition (E) for some μ≥1.
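The two claims of Example 3.9 can be checked numerically: the sketch below confirms that the pair (f,h) = (−1,−1/5) violates condition (C), and that the condition-(E) inequality holds at sampled pairs; the value μ = 2 used in the check is our own choice that suffices on these samples, not a claim from the case analysis above.

```python
# Numerical illustration of Example 3.9: M fails condition (C) at the pair
# (f, h) = (-1, -1/5) but passes the condition-(E) inequality at sampled
# points; mu = 2 is an assumption that suffices for these samples.

def M(f):
    if f == -1/5:
        return 0.0
    return -f if f <= 0 else -f / 5.0

f, h = -1.0, -1/5
lhs_C = 0.5 * abs(h - M(h))          # (1/2) d(h, Mh) = 1/10
print(lhs_C <= abs(f - h))           # True: 1/10 <= 4/5, premise of (C) holds
print(abs(M(f) - M(h)) > abs(f - h)) # True: 1 > 4/5, so (C) fails

mu = 2.0
samples = [-1.0, -0.5, -1/5, -0.1, 0.0, 0.2, 0.5, 1.0]
ok = all(abs(x - M(y)) <= mu * abs(x - M(x)) + abs(x - y) + 1e-12
         for x in samples for y in samples)
print(ok)  # True on these samples
```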

    In this work, we will be using MATLAB R2015a to obtain our numerical results.

Now, we will study the influence of the control parameters mk, δk and the initial value on the AH iteration process (3.1).

    Case Ⅰ: Here, we will examine the convergence behavior of (3.1) for different choices of control parameters with the same initial value. For this, we consider the following set of parameters and initial value:

(1) mk=0.70, δk=0.30 for all k∈N and f1=0.8;

(2) mk=0.65, δk=0.35 for all k∈N and f1=0.8;

(3) mk=0.55, δk=0.35 for all k∈N and f1=0.8.

We obtain the following Table 1 and Figure 1 for an initial value of 0.8.

    Table 1.  Tabular values of AH iteration (3.1) for Case Ⅰ.
    Step Parameter 1 Parameter 2 Parameter 3
    1 0.8000000000 0.8000000000 0.8000000000
    2 -0.1280000000 -0.0720000000 -0.0240000000
    3 0.0204800000 0.0064800000 0.0007200000
    4 -0.0032768000 -0.0005832000 -0.0000216000
    5 0.0005242880 0.0000524880 0.0000006480
    6 -0.0000838861 -0.0000047239 -0.0000000194
    7 0.0000134218 0.0000004252 0.0000000006
    8 -0.0000021475 -0.0000000383 -0.0000000000
    9 0.0000003436 0.0000000034 0.0000000000
    10 -0.0000000550 -0.0000000003 -0.0000000000
    11 0.0000000088 0.0000000000 0.0000000000
    12 -0.0000000014 -0.0000000000 -0.0000000000
    13 0.0000000002 0.0000000000 0.0000000000
    14 -0.0000000000 -0.0000000000 -0.0000000000

    Figure 1.  Graph corresponding to Table 1.
    Table 2.  Number of iterations and CPU time for Case Ⅰ.
    Parameter 1 Parameter 2 Parameter 3
    No of Iter. 14 11 8
    Sec. 40.4460 37.5415 30.9468

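    The structure of the modified AH iteration (3.1) can be read off from the estimates (3.37)–(3.43) proved later: q_k = W(f_k, Mf_k, δ_k), v_k = M²q_k, h_k = M²v_k, f_{k+1} = W(h_k, Mh_k, m_k). The sketch below implements this structure on the real line (where W(x, y, a) = (1 − a)x + ay) using the mapping of Example 3.9; note that Tables 1–4 were produced with Example 3.8, which is stated earlier in the paper, so the numbers here are not expected to match the tables:

```python
def M(f):
    # Mapping of Example 3.9; its unique fixed point is w = 0
    if f == -1/5:
        return 0.0
    if -1 <= f <= 0:
        return -f
    return -f / 5

def W(x, y, a):
    # Convex-combination operator of the hyperbolic space; on the real line:
    return (1 - a) * x + a * y

def ah_iteration(f, m, delta, tol=1e-10, max_iter=100):
    # q_k = W(f_k, Mf_k, delta_k); v_k = M^2 q_k; h_k = M^2 v_k;
    # f_{k+1} = W(h_k, Mh_k, m_k)  -- structure inferred from (3.37)-(3.43)
    for k in range(1, max_iter + 1):
        q = W(f, M(f), delta)
        v = M(M(q))
        h = M(M(v))
        f = W(h, M(h), m)
        if abs(f) < tol:
            return f, k
    return f, max_iter

f_star, iters = ah_iteration(0.8, m=0.70, delta=0.30)
print(f_star, iters)
```

    With the parameters of choice (1), the iterates collapse toward the fixed point w = 0 within a handful of steps, in line with the rapid decay visible in Table 1.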

    Case Ⅱ: Here, we again show the convergence behavior of our iterative method (3.1) for three different initial values with the same parameters. We consider the following choices:

    (a) m_k = 0.53, δ_k = 0.40 for all k ∈ ℕ and f_1 = 0.2,

    (b) m_k = 0.53, δ_k = 0.40 for all k ∈ ℕ and f_1 = −0.6,

    (c) m_k = 0.53, δ_k = 0.40 for all k ∈ ℕ and f_1 = −0.9.

    For these three initial values, we obtain the following Table 3 and Figure 2, respectively.

    Table 3.  Tabular values of AH iteration (3.1) for Case Ⅱ.
    Step Initial Value 1 Initial Value 2 Initial Value 3
    1 0.2000000000 -0.6000000000 -0.9000000000
    2 -0.0193200000 0.0579600000 0.0869400000
    3 0.0018663120 -0.0055989360 -0.0083984040
    4 -0.0001802857 0.0005408572 0.0008112858
    5 0.0000174156 -0.0000522468 -0.0000783702
    6 -0.0000016823 0.0000050470 0.0000075706
    7 0.0000001625 -0.0000004875 -0.0000007313
    8 -0.0000000157 0.0000000471 0.0000000706
    9 0.0000000015 -0.0000000045 -0.0000000068
    10 -0.0000000001 0.0000000004 0.0000000007
    11 0.0000000000 -0.0000000000 -0.0000000001
    12 -0.0000000000 0.0000000000 0.0000000000

    Figure 2.  Graph corresponding to Table 3.
    Table 4.  Number of iterations and CPU time for Case Ⅱ.
    Initial Value 1 Initial Value 2 Initial Value 3
    No of Iter. 11 11 12
    Sec. 35.5363 36.7645 38.4356


    Next, with the aid of Example 3.8, we show that the AH iterative method (3.1) enjoys a better speed of convergence than many known iterative schemes. We take m_k = 0.51, δ_k = 0.40, θ_k = 0.30 and f_1 = 0.4. From the following Tables 5–8 and Figures 3 and 4, it is clear that the AH iteration process (3.1) converges faster to w = 0 than the Noor [32], S [3], Abbas [1], Picard-S [22], M [48] and JK [4] iteration processes.

    Table 5.  Comparison of convergence behavior of AH iteration process (3.1) with Noor, S, Abbas iteration processes.
    Step Noor S Abbas AH
    1 0.40000000 0.40000000 0.40000000 0.40000000
    2 0.10624000 -0.23680000 0.06736000 -0.00160000
    3 0.02821734 0.14018560 0.01134342 0.00000640
    4 0.00749453 -0.08298988 0.00191023 -0.00000003
    5 0.00199055 0.04913001 0.00032168 0.00000000
    6 0.00052869 -0.02908496 0.00005417 -0.00000000
    7 0.00014042 0.01721830 0.00000912 0.00000000
    8 0.00003730 -0.01019323 0.00000154 -0.00000000
    9 0.00000991 0.00603439 0.00000026 0.00000000
    10 0.00000263 -0.00357236 0.00000004 -0.00000000
    11 0.00000070 0.00211484 0.00000001 0.00000000
    12 0.00000019 -0.00125198 0.00000000 -0.00000000
    13 0.00000005 0.00074117 0.00000000 0.00000000
    14 0.00000001 -0.00043878 0.00000000 -0.00000000
    15 0.00000000 0.00025975 0.00000000 0.00000000

    Table 6.  Number of iterations and CPU time for various iterative methods.
    Noor S Abbas AH
    No of Iter. 15 40 12 5
    Sec. 46.7935 89.1234 31.4352 15.2456

    Table 7.  Comparison of convergence behavior of AH iteration process (3.1) with Picard-S, M, JK iteration processes.
    Step Picard-S M JK AH
    1 0.40000000 0.40000000 0.40000000 0.40000000
    2 0.23680000 -0.00800000 -0.01600000 -0.00160000
    3 0.14018560 0.00016000 0.00064000 0.00000640
    4 0.08298988 -0.00000320 -0.00002560 -0.00000003
    5 0.04913001 0.00000006 0.00000102 0.00000000
    6 0.02908496 -0.00000000 -0.00000004 -0.00000000
    7 0.01721830 0.00000000 0.00000000 0.00000000
    8 0.01019323 -0.00000000 -0.00000000 -0.00000000
    9 0.00603439 0.00000000 0.00000000 0.00000000
    10 0.00357236 -0.00000000 -0.00000000 -0.00000000
    11 0.00211484 0.00000000 0.00000000 0.00000000
    12 0.00125198 -0.00000000 -0.00000000 -0.00000000
    13 0.00074117 0.00000000 0.00000000 0.00000000
    14 0.00043878 -0.00000000 -0.00000000 -0.00000000
    15 0.00025975 0.00000000 0.00000000 0.00000000

    Figure 3.  Graph corresponding to Table 5.
    Figure 4.  Graph corresponding to Table 7.
    Table 8.  Number of iterations and CPU time for various iterative methods.
    Picard-S M JK AH
    No of Iter. 37 6 7 5
    Sec. 46.7638 17.6372 18.5637 15.2456

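    A comparison of this kind is easy to reproduce in code. The sketch below contrasts the AH structure (3.1) with the standard three-step Noor scheme; since Example 3.8 and the exact parameter placement used for the competing schemes are stated earlier in the paper, the mapping of Example 3.9 and the textbook form of the Noor iteration are used here as stand-ins, so the iteration counts differ from Tables 6 and 8 while showing the same qualitative gap:

```python
def M(f):
    # Mapping of Example 3.9 (restated); fixed point w = 0
    if f == -1/5:
        return 0.0
    if -1 <= f <= 0:
        return -f
    return -f / 5

def noor_step(f, a=0.51, b=0.40, c=0.30):
    # Standard three-step Noor scheme (textbook form)
    z = (1 - c) * f + c * M(f)
    y = (1 - b) * f + b * M(z)
    return (1 - a) * f + a * M(y)

def ah_step(f, m=0.51, delta=0.40):
    # One step of (3.1): q = W(f, Mf, delta); v = M^2 q; h = M^2 v; W(h, Mh, m)
    q = (1 - delta) * f + delta * M(f)
    h = M(M(M(M(q))))
    return (1 - m) * h + m * M(h)

def iterations_to(tol, step, f=0.4, max_iter=10_000):
    for k in range(1, max_iter + 1):
        f = step(f)
        if abs(f) < tol:
            return k
    return max_iter

noor_iters = iterations_to(1e-8, noor_step)
ah_iters = iterations_to(1e-8, ah_step)
assert ah_iters < noor_iters
print("Noor:", noor_iters, "AH:", ah_iters)
```

    The advantage of (3.1) here comes from the two extra applications of M² per step, which multiply the per-iteration contraction factor.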

    Integral equations (IEs) are equations in which the unknown function appears under one or more integral signs [49]. Delay integral equations (DIEs) are those IEs in which the unknown function is evaluated at an earlier time [8]. On the basis of the limits of integration, DIEs are classified into two main types: Fredholm DIEs and Volterra DIEs. Fredholm DIEs are those in which the limits of integration are constant, while in Volterra DIEs one of the limits of integration is a constant and the other is a variable. Volterra-Fredholm DIEs consist of disjoint Volterra and Fredholm IEs [49]. DIEs play an important role in mathematics [51]; they are used in the modelling of various phenomena, such as systems with memory [6], electric circuits, and mechanical systems [9,50]. Several researchers have studied numerical solutions of delay IEs [35,36,37,38,39,54,55].

    In this article, our interest is to approximate the solution of the following nonlinear integral equation with two delays via the new iterative method (3.1):

    x(z) = g(z, x(z), x(α(z)), ∫_m^n p(z, ϑ, x(ϑ), x(β(ϑ))) dϑ),  (3.33)

    where m, n are fixed real numbers, g : [m, n] × C × C × C → C and p : [m, n] × [m, n] × C × C → C are continuous functions, and α, β : [m, n] → [m, n] are continuous delay functions which satisfy α(ϑ) ≤ ϑ and β(ϑ) ≤ ϑ for all ϑ ∈ [m, n].

    Let I = [m, n] (m < n) be a fixed finite interval and ϖ : I → (0, ∞) a nondecreasing function. We consider the space C(I) of continuous functions f : I → C, endowed with the Bielecki metric

    d(f, h) = sup_{z∈I} |f(z) − h(z)| / ϖ(z).  (3.34)

    It is well known that (C(I),d) is a complete metric space [15] and hence, it is a hyperbolic space.
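    For intuition, the Bielecki metric (3.34) can be approximated on a grid. The sketch below is purely illustrative; the weight e^z and the sample functions sin and cos are arbitrary choices, not taken from the paper:

```python
import numpy as np

def bielecki_distance(f, h, w, a=0.0, b=1.0, n=2001):
    # d(f, h) = sup_{z in [a, b]} |f(z) - h(z)| / w(z), approximated on a grid
    z = np.linspace(a, b, n)
    return float(np.max(np.abs(f(z) - h(z)) / w(z)))

w = lambda z: np.exp(z)     # any positive nondecreasing weight
dist = bielecki_distance(np.sin, np.cos, w)
dist_sup = bielecki_distance(np.sin, np.cos, lambda z: np.ones_like(z))
# Dividing by a weight >= 1 can only shrink the distance
assert dist <= dist_sup
print(dist, dist_sup)
```

    The point of the weight is exactly this shrinking effect: a suitably chosen ϖ turns the integral operator M of (3.36) into a contraction even when it is not one in the plain sup metric.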

    The following result regarding the existence of a solution of the problem (3.33) was proved by Castro and Simoes [16].

    Theorem 3.10. Let α, β : I → I be continuous delay functions with α(z) ≤ z and β(z) ≤ z for all z ∈ I, and let ϖ : I → (0, ∞) be a nondecreasing function. Moreover, suppose that there exists η ∈ ℝ such that

    ∫_m^n ϖ(ϑ) dϑ ≤ η ϖ(z),

    for each z ∈ I. In addition, assume that g : I × C × C × C → C is a continuous function which satisfies the Lipschitz condition:

    |g(z, f(z), f(α(z)), ρ(z)) − g(z, h(z), h(α(z)), φ(z))| ≤ λ (|f(z) − h(z)| + |f(α(z)) − h(α(z))| + |ρ(z) − φ(z)|),

    where λ > 0, and that the kernel p : I × I × C × C → C is a continuous function which fulfills the Lipschitz condition:

    |p(z, ϑ, f(ϑ), f(β(ϑ))) − p(z, ϑ, h(ϑ), h(β(ϑ)))| ≤ L |f(β(ϑ)) − h(β(ϑ))|,  (3.35)

    where L > 0. If λ(2 + Lη) < 1, then the problem (3.33) has a unique solution, say w ∈ C(I).

    Next, we prove that our new iteration process converges strongly to the unique solution of the nonlinear integral equation (3.33). For this, we give the main result of this section as follows:

    Theorem 3.11. Let C(I) be a hyperbolic space with the Bielecki metric. Assume that M : C(I) → C(I) is the mapping defined by

    (Mf)(z) = g(z, f(z), f(α(z)), ∫_m^n p(z, ϑ, f(ϑ), f(β(ϑ))) dϑ),  (3.36)

    for all z ∈ I and f ∈ C(I). Suppose that all the assumptions of Theorem 3.10 hold. If {f_k} is the sequence defined by (3.1), then {f_k} converges to the unique solution w ∈ C(I) of the problem (3.33).

    Proof. By Theorem 3.10, the problem (3.33) has a unique solution w ∈ C(I), which is the fixed point of M. We now show that f_k → w as k → ∞. Using (3.1), (3.36) and the above assumptions, with respect to the metric (3.34), we have

    d(q_k, w) = d(W(f_k, Mf_k, δ_k), w)  (3.37)
    ≤ (1 − δ_k) d(f_k, w) + δ_k d(Mf_k, w)
    = (1 − δ_k) d(f_k, w) + δ_k sup_{z∈I} |(Mf_k)(z) − (Mw)(z)| / ϖ(z)
    ≤ (1 − δ_k) d(f_k, w) + δ_k λ sup_{z∈I} (1/ϖ(z)) { |f_k(z) − w(z)| + |f_k(α(z)) − w(α(z))| + ∫_m^n |p(z, ϑ, f_k(ϑ), f_k(β(ϑ))) − p(z, ϑ, w(ϑ), w(β(ϑ)))| dϑ }
    ≤ (1 − δ_k) d(f_k, w) + δ_k λ sup_{z∈I} (1/ϖ(z)) { |f_k(z) − w(z)| + |f_k(α(z)) − w(α(z))| + L ∫_m^n ϖ(ϑ) (|f_k(β(ϑ)) − w(β(ϑ))| / ϖ(ϑ)) dϑ }
    ≤ (1 − δ_k) d(f_k, w) + δ_k λ { 2 d(f_k, w) + L d(f_k, w) sup_{z∈I} (1/ϖ(z)) ∫_m^n ϖ(ϑ) dϑ }
    ≤ (1 − δ_k) d(f_k, w) + δ_k λ (2 + Lη) d(f_k, w)
    = [1 − (1 − λ(2 + Lη)) δ_k] d(f_k, w).  (3.38)

    The same computation, applied to the operator M alone, shows that d(Mf, w) ≤ λ(2 + Lη) d(f, w) for every f ∈ C(I). Since v_k = M(Mq_k), applying this estimate twice gives

    d(v_k, w) = d(M(Mq_k), w) ≤ λ(2 + Lη) d(Mq_k, w),  (3.39)

    d(Mq_k, w) ≤ λ(2 + Lη) d(q_k, w).  (3.40)

    In exactly the same manner, since h_k = M(Mv_k),

    d(h_k, w) = d(M(Mv_k), w) ≤ λ(2 + Lη) d(Mv_k, w),  (3.41)

    d(Mv_k, w) ≤ λ(2 + Lη) d(v_k, w).  (3.42)

    Finally,

    d(f_{k+1}, w) = d(W(h_k, Mh_k, m_k), w)
    ≤ (1 − m_k) d(h_k, w) + m_k d(Mh_k, w)
    ≤ (1 − m_k) d(h_k, w) + m_k λ(2 + Lη) d(h_k, w)
    = [1 − (1 − λ(2 + Lη)) m_k] d(h_k, w).  (3.43)

    Combining (3.38)–(3.43), we have

    d(f_{k+1}, w) ≤ [λ(2 + Lη)]^4 [1 − (1 − λ(2 + Lη)) m_k] [1 − (1 − λ(2 + Lη)) δ_k] d(f_k, w).  (3.44)

    Since λ(2 + Lη) < 1 and 0 < δ_k, m_k < 1, it follows that [1 − (1 − λ(2 + Lη)) δ_k] < 1 and [1 − (1 − λ(2 + Lη)) m_k] < 1. Thus, (3.44) reduces to

    d(f_{k+1}, w) ≤ [λ(2 + Lη)]^4 d(f_k, w).  (3.45)

    If we set d(f_k, w) = Ω_k, then we obtain

    Ω_{k+1} ≤ [λ(2 + Lη)]^4 Ω_k,  k ∈ ℕ.

    Hence, {Ω_k} is a monotone decreasing sequence of nonnegative real numbers and, by induction,

    Ω_{k+1} ≤ [λ(2 + Lη)]^{4k} Ω_1.

    Since λ(2 + Lη) < 1, we obtain

    lim_{k→∞} Ω_k = 0.

    Therefore,

    lim_{k→∞} d(f_k, w) = 0.

    This ends the proof.

    Now, we present an example which validates all the conditions given in Theorem 3.11.

    Example 3.12. Consider the following nonlinear integral equation with two delays for a continuous function x : [0, 1] → ℝ:

    x(z) = z^6/120 − z^3/10 + z + (1/6) x(α(z)) + (1/4) ∫_0^z (ϑ − z) x(β(ϑ)) dϑ,  z ∈ [0, 1].  (3.46)

    Let ϖ : [0, 1] → (0, ∞) be the nondecreasing continuous function defined by ϖ(z) = 0.0094z + 0.0006, and let α, β : [0, 1] → [0, 1] be the continuous delay functions defined by α(z) = z/3 and β(z) = z/4, respectively. All assumptions of Theorem 3.11 are then fulfilled. Indeed, it is not hard to see that α and β are continuous and satisfy α(z) ≤ z and β(z) ≤ z. Moreover, for η = 0.63951, the function ϖ satisfies

    ∫_0^z (0.0094ϑ + 0.0006) dϑ ≤ η (0.0094z + 0.0006) = 0.63951 ϖ(z),  z ∈ [0, 1].  (3.47)
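    The bound (3.47) is elementary to verify: the left-hand side integrates exactly to 0.0047z² + 0.0006z, and a quick numerical sweep (a sanity check, not part of the proof) confirms the inequality on [0, 1]:

```python
import numpy as np

z = np.linspace(0.0, 1.0, 1001)
# Exact antiderivative: int_0^z (0.0094*t + 0.0006) dt = 0.0047 z^2 + 0.0006 z
lhs = 0.0047 * z**2 + 0.0006 * z
rhs = 0.63951 * (0.0094 * z + 0.0006)
assert np.all(lhs < rhs)
print("eta = 0.63951 works on [0, 1]")
```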

    The function g : [0, 1] × C × C × C → C defined by g(z, x(z), x(α(z)), ρ(z)) = z^6/120 − z^3/10 + z + (1/6) x(α(z)) + (1/4) ρ(z) is continuous and satisfies

    |g(z, f(z), f(α(z)), ρ(z)) − g(z, h(z), h(α(z)), φ(z))| ≤ (1/4) (|f(z) − h(z)| + |f(α(z)) − h(α(z))| + |ρ(z) − φ(z)|),

    for all z ∈ [0, 1]. Observe that λ = 1/4. Now, the kernel p : [0, 1] × [0, 1] × C × C → C defined by p(z, ϑ, x(ϑ), x(β(ϑ))) = (ϑ − z) x(β(ϑ)) is a continuous function satisfying

    |p(z, ϑ, f(ϑ), f(β(ϑ))) − p(z, ϑ, h(ϑ), h(β(ϑ)))| ≤ L |f(β(ϑ)) − h(β(ϑ))|,  ϑ ∈ [0, z], z ∈ [0, 1].  (3.48)

    Clearly, we may take L = 1, since |ϑ − z| ≤ 1. Thus, λ(2 + Lη) = 263951/400000 < 1. Therefore, all the conditions of Theorem 3.11 hold. Hence, our results are applicable.
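    Equation (3.46) can also be solved numerically. The sketch below uses plain Picard iteration (successive substitution) rather than the full scheme (3.1); since λ(2 + Lη) < 1 makes M a contraction, Picard already converges, and the grid size and iteration count are illustrative choices. Delayed values x(z/3) and x(ϑ/4) are obtained by linear interpolation, and the Volterra integral by the composite trapezoidal rule:

```python
import numpy as np

N = 201
z = np.linspace(0.0, 1.0, N)

def apply_M(x):
    # (Mx)(z) = z^6/120 - z^3/10 + z + x(z/3)/6 + (1/4) * int_0^z (t - z) x(t/4) dt
    x_a = np.interp(z / 3.0, z, x)      # x(alpha(z)), alpha(z) = z/3
    x_b = np.interp(z / 4.0, z, x)      # x(beta(t)),  beta(t)  = t/4
    out = np.empty_like(x)
    for i, zi in enumerate(z):
        t = z[: i + 1]
        g = (t - zi) * x_b[: i + 1]
        # composite trapezoidal rule on [0, zi]
        integral = 0.0 if i == 0 else np.sum((g[1:] + g[:-1]) * np.diff(t)) / 2.0
        out[i] = zi**6 / 120 - zi**3 / 10 + zi + x_a[i] / 6.0 + integral / 4.0
    return out

x = np.zeros(N)
for _ in range(60):
    x = apply_M(x)

residual = np.max(np.abs(x - apply_M(x)))
print("fixed-point residual:", residual)
```

    The residual of the discretized operator drops to machine precision within a few dozen iterations, consistent with the contraction estimate above.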

    Open Question

    Is it possible to obtain the results in this article for iterative methods involving two or more mappings in the setting of single-valued or multivalued mappings?

    In this article, we have considered the modified version of the AH iteration process given in (3.1). A data dependence result for the modified AH iteration process (3.1) with almost contraction mappings has been studied. We proved several strong and Δ-convergence results for (3.1) applied to mappings satisfying condition (E) in hyperbolic spaces. We provided two nontrivial examples of mappings which satisfy condition (E). The first example was used to validate our assumptions in Theorem 3.7, and the second was used to study the convergence behavior of the AH iteration process (3.1) for different control parameters and initial values; the second example was equally used to compare the speed of convergence of (3.1) with several existing iteration processes. It was observed that the AH iteration process (3.1) converges faster than the Noor [32], S [3], Abbas [1], Picard-S [22], M [48] and JK [4] iteration processes. Finally, we applied our main results to solve a nonlinear integral equation with two delays. Since hyperbolic spaces are more general than Banach spaces and, by Remark 3.5, the class of mappings satisfying condition (E) is more general than those considered in Ahmad et al. [4,5] and Ofem et al. [33], our results generalize and extend the results of [4,5], [33] and several other related results in the existing literature.

    The data used to support the findings of this study are included within the article.

    This study is supported via funding from Prince Sattam bin Abdulaziz University project number (No. PSAU/2023/R/1444).

    The authors declare no conflicts of interest.



  • © 2025 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)