
Applying faster algorithm for obtaining convergence, stability, and data dependence results with application to functional-integral equations

  • The goal of this manuscript is to construct a new iterative algorithm that converges faster than the leading algorithms in the earlier literature. In the setting of Banach spaces, this algorithm is used to establish convergence, stability, and data-dependence results. Basic numerical examples are also provided to highlight the behavior and effectiveness of the proposed approach. Finally, the proposed approach is applied to solve a functional Volterra-Fredholm integral equation.





    Throughout this paper, we assume that Δ is a non-empty, closed and convex subset (CCS) of a Banach space (BS) Λ, ℝ+ is the set of nonnegative real numbers and ℕ is the set of natural numbers. In addition, the symbol ⇀ refers to weak convergence and → to strong convergence.

    The set of all fixed points (FPs) of an operator Ξ:Δ→Δ is denoted by Υ(Ξ); a point ν∈Δ is a FP of Ξ if ν=Ξν.

    Let Ξ:Δ→Δ be a self-mapping. Then Ξ is called:

    (1) a contraction if there exists a constant α∈[0,1) such that d(Ξν,Ξϖ)≤αd(ν,ϖ) for all ν,ϖ∈Δ;

    (2) nonexpansive if d(Ξν,Ξϖ)≤d(ν,ϖ) for all ν,ϖ∈Δ.

    Clearly, every contraction mapping is nonexpansive; the nonexpansive condition corresponds to taking α=1 in the contraction condition.

    FP techniques are widely used, owing to their simplicity and flexibility, in areas such as optimization theory, approximation theory, fractional calculus, dynamical systems, and game theory; this is why researchers are attracted to them. The technique also plays a significant role not only in the above applications but also in nonlinear analysis and many branches of engineering. One important direction in FP methods is the study of the behavior and performance of iterative algorithms, which contribute greatly to real-world applications.

    One of the well-established principles of FP theory is Banach's contraction principle (BCP). This principle is significant as a source of existence and uniqueness theorems in various branches of science. BCP relies on the Picard one-step iteration, given by:

    \begin{equation*} \nu _{i+1} = \Xi \nu _{i},\quad i\geq 1, \end{equation*}

    where Ξ is a contraction mapping defined on a complete metric space (MS). Although BCP guarantees existence and uniqueness of the FP in the setting of a complete MS, it does not apply well to nonexpansive mappings, because Picard's iteration may fail to converge to a FP of such mappings. Therefore, many authors have introduced iterative methods for approximating FPs of nonexpansive mappings with improved performance and convergence behavior. Moreover, data-dependence results and stability results with respect to Ξ have been established for these methods. For more details, we refer to iterative methods such as those of Mann [1], Ishikawa [2], Noor [3], Agarwal et al. [4], and Abbas and Nazir [5], as well as the SP iteration [6], S-iteration [7], CR-iteration [8], normal-S iteration [9], Picard-S iteration [10], Thakur iteration [11], M-iteration [12], M∗-iteration [13], Garodia and Uddin iteration [14], and two-step Mann iteration [15]. Also, for more applications involving iterative methods, see Hasanen et al. [16,17] and many others.
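    As a quick illustration (ours, not taken from the paper), the Picard scheme can be run for a toy contraction on the real line; the mapping Ξ(ν)=ν/2+1 and the stopping tolerance below are illustrative assumptions only.

    # Minimal sketch of the Picard one-step iteration nu_{i+1} = Xi(nu_i)
    # for an illustrative contraction Xi(nu) = nu/2 + 1 (fixed point nu = 2).
    def picard(Xi, nu0, tol=1e-12, max_iter=1000):
        nu = nu0
        for i in range(max_iter):
            nu_next = Xi(nu)
            if abs(nu_next - nu) < tol:   # stop when successive iterates agree
                return nu_next, i + 1
            nu = nu_next
        return nu, max_iter

    if __name__ == "__main__":
        fp, steps = picard(lambda nu: nu / 2 + 1, nu0=0.0)
        print(fp, steps)                  # approximately 2.0 after a few steps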

    Assume that {αi}, {ηi} and {γi} are sequences in [0,1]. The algorithms below are known as the S algorithm [4], the Picard-S algorithm [10], the Thakur algorithm [11] and the K∗-algorithm [18], respectively:

    \begin{equation} \left\{ \begin{array}{l} \nu _{0}\in \Delta , \\ \varpi _{i} = (1-\eta _{i})\nu _{i}+\eta _{i}\Xi \nu _{i}, \\ \nu _{i+1} = (1-\gamma _{i})\nu _{i}+\gamma _{i}\Xi \varpi _{i}, \end{array} \right. \quad i\geq 1. \end{equation} (1.1)
    \begin{equation} \left\{ \begin{array}{l} \nu _{0}\in \Delta , \\ \varpi _{i} = (1-\eta _{i})\nu _{i}+\eta _{i}\Xi \nu _{i}, \\ \wp _{i} = (1-\gamma _{i})\nu _{i}+\gamma _{i}\Xi \varpi _{i}, \\ \nu _{i+1} = \Xi \wp _{i}, \end{array} \right. \quad i\geq 1. \end{equation} (1.2)
    \begin{equation} \left\{ \begin{array}{l} \nu _{0}\in \Delta , \\ \varpi _{i} = (1-\eta _{i})\nu _{i}+\eta _{i}\Xi \nu _{i}, \\ \wp _{i} = \Xi \left( (1-\gamma _{i})\nu _{i}+\gamma _{i}\varpi _{i}\right) , \\ \nu _{i+1} = \Xi \wp _{i}, \end{array} \right. \quad i\geq 1. \end{equation} (1.3)
    \begin{equation} \left\{ \begin{array}{l} \nu _{0}\in \Delta , \\ \varpi _{i} = (1-\alpha _{i})\nu _{i}+\alpha _{i}\Xi \nu _{i}, \\ \wp _{i} = \Xi \left( (1-\gamma _{i})\varpi _{i}+\gamma _{i}\Xi \varpi _{i}\right) , \\ \nu _{i+1} = \Xi \wp _{i}, \end{array} \right. \quad i\geq 1. \end{equation} (1.4)

    In 2014, Gursoy and Karakaya [10] presented the iterative method (1.2) and called it the Picard-S iteration. They proved numerically and analytically that the Picard-S iteration converges faster than Picard, Mann, Ishikawa, Noor, SP, CR, S, S*, Abbas and Nazir, normal-S and two-step Mann iteration procedures for almost contraction mappings (ACMs).

    In 2016, Thakur et al. [11] illustrated, by means of a numerical example, that the iteration (1.3) converges faster than the Picard, Mann, Ishikawa, Agarwal, Noor and Abbas iterations for Suzuki generalized nonexpansive mappings (SGNMs).

    Recently, Ullah and Arshad [18] introduced the K∗-algorithm (1.4) and proved that it converges faster than the S algorithm (1.1), the Picard-S algorithm (1.2) and the Thakur algorithm (1.3) for SGNMs. Moreover, they noted that the Picard-S iteration (1.2) and the Thakur iteration (1.3) have the same rate of convergence.

    On the other hand, nonlinear integral equations are used to describe mathematical models arising from mathematical physics, engineering, economics, biology, etc. [19]. In particular, Volterra-Fredholm equations arise from boundary value problems and from the mathematical modeling of the spatiotemporal evolution of epidemics. For various biological models, see [20,21]. Recently, a large number of researchers have turned to solving nonlinear integral equations with the help of iterative methods; for example, see [22,23,24,25,26].

    Based on the works mentioned above, in this manuscript we construct a new algorithm with a better convergence rate for ACMs and SGNMs, as follows:

    \begin{equation} \left\{ \begin{array}{l} \nu _{0}\in \Delta , \\ \varpi _{i} = (1-\alpha _{i})\nu _{i}+\alpha _{i}\Xi \nu _{i}, \\ \wp _{i} = \Xi \left( (1-\eta _{i})\varpi _{i}+\eta _{i}\Xi \varpi _{i}\right) , \\ \Im _{i} = \Xi \left( (1-\gamma _{i})\wp _{i}+\gamma _{i}\Xi \wp _{i}\right) , \\ \nu _{i+1} = \Xi \Im _{i}, \end{array} \right. \end{equation} (1.5)

    for each i≥1, where {αi}, {ηi} and {γi} are sequences in [0,1].
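    For readers who want to experiment with the scheme, the following Python sketch implements one run of iteration (1.5); the function name hr_iteration, the stopping rule, and the treatment of the control sequences as callables are our own illustrative choices, not part of the paper.

    # Sketch of the proposed iteration (1.5):
    #   varpi_i  = (1 - a_i) nu_i + a_i Xi(nu_i)
    #   wp_i     = Xi((1 - e_i) varpi_i + e_i Xi(varpi_i))
    #   Im_i     = Xi((1 - g_i) wp_i + g_i Xi(wp_i))
    #   nu_{i+1} = Xi(Im_i)
    def hr_iteration(Xi, nu0, alpha, eta, gamma, tol=1e-12, max_iter=500):
        """alpha, eta, gamma are callables i -> value in [0, 1]."""
        nu = nu0
        for i in range(1, max_iter + 1):
            varpi = (1 - alpha(i)) * nu + alpha(i) * Xi(nu)
            wp = Xi((1 - eta(i)) * varpi + eta(i) * Xi(varpi))
            im = Xi((1 - gamma(i)) * wp + gamma(i) * Xi(wp))
            nu_next = Xi(im)
            if abs(nu_next - nu) < tol:   # successive iterates agree: stop
                return nu_next, i
            nu = nu_next
        return nu, max_iter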

    Our work is organized as follows: In Section 2, we give some definitions, propositions, and lemmas that facilitate the reader's understanding of our results. In Section 3, the performance and convergence rate of our algorithm are analyzed analytically, and we show that the convergence rate is satisfactory for ACMs in a BS. Moreover, the weak and strong convergence of the proposed algorithm for SGNMs in the context of uniformly convex Banach spaces (UCBSs) is discussed in Section 4. In Section 5, we prove that our new iterative algorithm is Ξ-stable. Further, data-dependence results for ACMs under the iterative scheme (1.5) are studied in Section 6. In addition, two examples are presented in Section 7 to show that our method is faster than the iterative schemes (1.1)–(1.4). Ultimately, in Section 8, the proposed algorithm is applied to find the solution of a Volterra-Fredholm integral equation. In Section 9, conclusions and future work are given.

    This part is intended to give some definitions, propositions and lemmas that will help the reader in understanding our manuscript and will be useful in the sequel.

    Definition 2.1. A mapping Ξ:Λ→Λ is called a SGNM if

    \begin{equation*} \frac{1}{2}\left\Vert \nu -\Xi \nu \right\Vert \leq \left\Vert \nu -\varpi \right\Vert \ \Longrightarrow \ \left\Vert \Xi \nu -\Xi \varpi \right\Vert \leq \left\Vert \nu -\varpi \right\Vert ,\quad \forall \nu ,\varpi \in \Lambda . \end{equation*}

    Definition 2.2. A BS Λ is called uniformly convex if for each ϵ∈(0,2] there exists δ>0 such that for ν,ϖ∈Λ satisfying ‖ν‖≤1, ‖ϖ‖≤1 and ‖ν−ϖ‖>ϵ, we get ‖(ν+ϖ)/2‖<1−δ.

    Definition 2.3. A BS Λ is said to satisfy Opial's condition if for any sequence {νi} in Λ with νi⇀ν∈Λ, we have

    \begin{equation*} \limsup\limits_{i\rightarrow \infty }\left\Vert \nu _{i}-\nu \right\Vert < \limsup\limits_{i\rightarrow \infty }\left\Vert \nu _{i}-\varpi \right\Vert ,\quad \text{for all }\varpi \in \Lambda \text{ with }\nu \neq \varpi . \end{equation*}

    Definition 2.4. Let {νi} be a bounded sequence in a BS Λ. For ν∈Δ⊂Λ, we set

    \begin{equation*} R(\nu ,\{\nu _{i}\}) = \limsup\limits_{i\rightarrow \infty }\left\Vert \nu _{i}-\nu \right\Vert . \end{equation*}

    The asymptotic radius of {νi} relative to Λ is defined by

    \begin{equation*} R(\Lambda ,\{\nu _{i}\}) = \inf \left\{ R(\nu ,\{\nu _{i}\}):\nu \in \Lambda \right\} . \end{equation*}

    The asymptotic center of {νi} relative to Λ is defined by

    \begin{equation*} Q(\Lambda ,\{\nu _{i}\}) = \left\{ \nu \in \Lambda :R(\nu ,\{\nu _{i}\}) = R(\Lambda ,\{\nu _{i}\})\right\} . \end{equation*}

    It should be noted that Q(Λ,{νi}) consists of exactly one point in a UCBS.

    Definition 2.5. [27] A mapping Ξ:Λ→Λ is called an ACM if there exist θ∈(0,1) and some constant ℓ≥0 such that

    \begin{equation} \left\Vert \Xi \nu -\Xi \varpi \right\Vert \leq \theta \left\Vert \nu -\varpi \right\Vert +\ell \left\Vert \nu -\Xi \nu \right\Vert ,\quad \forall \nu ,\varpi \in \Lambda . \end{equation} (2.1)

    Definition 2.6. [28] Suppose that the real sequences {ηi} and {γi} converge to η and γ, respectively. Suppose also that

    \begin{equation*} \varkappa = \lim\limits_{i\rightarrow \infty }\frac{\left\vert \eta _{i}-\eta \right\vert }{\left\vert \gamma _{i}-\gamma \right\vert }. \end{equation*}

    Then, we say that

    (1) the sequence {ηi} converges faster to η than {γi} does to γ, if ϰ=0;

    (2) {ηi} and {γi} have the same rate of convergence, if ϰ∈(0,∞).
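    As a simple illustration (not taken from [28]), take ηi=2^{−i} and γi=1/i, both converging to 0; then

    \begin{equation*} \varkappa = \lim\limits_{i\rightarrow \infty }\frac{2^{-i}}{1/i} = \lim\limits_{i\rightarrow \infty }\frac{i}{2^{i}} = 0, \end{equation*}

    so {ηi} converges faster to 0 than {γi} does, in the sense of Definition 2.6.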

    Definition 2.7. [28] Suppose that Ξ,Ξ̂:Δ→Δ are given operators. An operator Ξ̂ is called an approximate operator of Ξ if, for some ϵ>0, we get

    \begin{equation*} \left\Vert \Xi \eta -\widehat{\Xi }\eta \right\Vert \leq \epsilon ,\quad \forall \eta \in \Delta . \end{equation*}

    Definition 2.8. [29] Let ℏ:ℝ+→ℝ+ be a nondecreasing function satisfying ℏ(0)=0 and ℏ(s)>0 for all s>0. A mapping Ξ:Λ→Λ is said to satisfy condition (I) if

    \begin{equation*} \left\Vert \nu -\Xi \nu \right\Vert \geq \hbar \left( d(\nu ,\Upsilon (\Xi ))\right) ,\quad \forall \nu \in \Lambda , \end{equation*}

    where d(ν,Υ(Ξ))=inf{‖ν−ζ‖:ζ∈Υ(Ξ)}.

    Proposition 2.1. [30] Let Ξ:Λ→Λ be a given map. If Ξ is

    (1) a nonexpansive mapping, then it is a SGNM;

    (2) a SGNM with a non-empty FP set, then it is a quasi-nonexpansive mapping;

    (3) a SGNM, then it satisfies the following inequality:

    \begin{equation*} \left\Vert \nu -\Xi \varpi \right\Vert \leq 3\left\Vert \Xi \nu -\nu \right\Vert +\left\Vert \nu -\varpi \right\Vert ,\quad \forall \nu ,\varpi \in \Lambda . \end{equation*}

    Lemma 2.1. [30] Let Δ be a subset of a BS Λ which satisfies Opial's condition, and let Ξ:Δ→Δ be a SGNM. If {νi}⇀ζ and lim_{i→∞}‖Ξνi−νi‖=0, then Ξζ=ζ, i.e., I−Ξ is demiclosed at zero.

    Lemma 2.2. [30] Let Δ be a weakly compact convex subset of a BS Λ with Opial's property. If Ξ:Δ→Δ is a SGNM, then Ξ possesses a FP.

    Lemma 2.3. [28] Assume that {φi} and {φi^∗} are nonnegative real sequences satisfying the inequality below:

    \begin{equation*} \varphi _{i+1}\leq (1-z_{i})\varphi _{i}+\varphi _{i}^{\ast }, \end{equation*}

    where zi∈(0,1) for each i≥1, Σ_{i=0}^{∞}zi=∞ and lim_{i→∞}φi^∗/zi=0; then lim_{i→∞}φi=0.

    Lemma 2.4. [31] Let {φi} and {φi^∗} be nonnegative real sequences satisfying the inequality below:

    \begin{equation*} \varphi _{i+1}\leq (1-z_{i})\varphi _{i}+z_{i}\varphi _{i}^{\ast }, \end{equation*}

    where zi∈(0,1) for each i≥1, Σ_{i=0}^{∞}zi=∞ and φi^∗≥0; then

    \begin{equation*} 0\leq \limsup\limits_{i\rightarrow \infty }\varphi _{i}\leq \limsup\limits_{i\rightarrow \infty }\varphi _{i}^{\ast }. \end{equation*}

    Lemma 2.5. [32] Assume that Λ is a UCBS and {ξi} is a sequence satisfying 0<a≤ξi≤b<1 for all i∈ℕ. Assume also that {ϖi} and {νi} are two sequences in Λ such that lim sup_{i→∞}‖ϖi‖≤ρ, lim sup_{i→∞}‖νi‖≤ρ and lim sup_{i→∞}‖ξiϖi+(1−ξi)νi‖=ρ for some ρ≥0. Then lim_{i→∞}‖ϖi−νi‖=0.

    In this section, we discuss the rate of convergence of our iterative algorithm for ACMs.

    Theorem 3.1. Assume that Λ is a BS and Δ is a CCS of Λ. Let Ξ:Λ→Λ be a mapping satisfying (2.1) with Υ(Ξ)≠∅. Suppose that {νi} is the iterative sequence generated by (1.5) with {αi},{ηi},{γi}⊂[0,1] such that Σ_{i=0}^{∞}γi=∞. Then {νi}→ζ∈Υ(Ξ).

    Proof. Consider ζ∈Υ(Ξ); then from (1.5), we get

    \begin{eqnarray} \left\Vert \varpi _{i}-\zeta \right\Vert & = &\left\Vert (1-\alpha _{i})\nu _{i}+\alpha _{i}\Xi \nu _{i}-\zeta \right\Vert = \left\Vert (1-\alpha _{i})(\nu _{i}-\zeta )+\alpha _{i}(\Xi \nu _{i}-\zeta )\right\Vert \\ &\leq &(1-\alpha _{i})\left\Vert \nu _{i}-\zeta \right\Vert +\alpha _{i}\left\Vert \Xi \nu _{i}-\zeta \right\Vert \leq (1-\alpha _{i})\left\Vert \nu _{i}-\zeta \right\Vert +\theta \alpha _{i}\left\Vert \nu _{i}-\zeta \right\Vert \\ & = &\left( 1-(1-\theta )\alpha _{i}\right) \left\Vert \nu _{i}-\zeta \right\Vert . \end{eqnarray} (3.1)

    Using (1.5) and (3.1), we have

    \begin{eqnarray} \left\Vert \wp _{i}-\zeta \right\Vert & = &\left\Vert \Xi \left( (1-\eta _{i})\varpi _{i}+\eta _{i}\Xi \varpi _{i}\right) -\zeta \right\Vert = \left\Vert \Xi \zeta -\Xi \left( (1-\eta _{i})\varpi _{i}+\eta _{i}\Xi \varpi _{i}\right) \right\Vert \\ &\leq &\theta \left\Vert \zeta -\left( (1-\eta _{i})\varpi _{i}+\eta _{i}\Xi \varpi _{i}\right) \right\Vert +\ell \left\Vert \zeta -\Xi \zeta \right\Vert \\ & = &\theta \left\Vert (1-\eta _{i})(\varpi _{i}-\zeta )+\eta _{i}(\Xi \varpi _{i}-\zeta )\right\Vert \\ &\leq &\theta \left[ (1-\eta _{i})\left\Vert \varpi _{i}-\zeta \right\Vert +\theta \eta _{i}\left\Vert \varpi _{i}-\zeta \right\Vert \right] = \theta \left[ 1-(1-\theta )\eta _{i}\right] \left\Vert \varpi _{i}-\zeta \right\Vert \\ &\leq &\theta \left( 1-(1-\theta )\eta _{i}\right) \left( 1-(1-\theta )\alpha _{i}\right) \left\Vert \nu _{i}-\zeta \right\Vert . \end{eqnarray} (3.2)

    Similarly, from (1.5) and (3.2), one can write

    \begin{eqnarray} \left\Vert \Im _{i}-\zeta \right\Vert &\leq &\theta \left( 1-(1-\theta )\gamma _{i}\right) \left\Vert \wp _{i}-\zeta \right\Vert \\ &\leq &\theta ^{2}\left( 1-(1-\theta )\gamma _{i}\right) \left( 1-(1-\theta )\eta _{i}\right) \left( 1-(1-\theta )\alpha _{i}\right) \left\Vert \nu _{i}-\zeta \right\Vert . \end{eqnarray} (3.3)

    It follows from (1.5) and (3.3) that

    \begin{eqnarray} \left\Vert \nu _{i+1}-\zeta \right\Vert & = &\left\Vert \Xi \Im _{i}-\zeta \right\Vert \leq \theta \left\Vert \Im _{i}-\zeta \right\Vert \\ &\leq &\theta ^{3}\left( 1-(1-\theta )\gamma _{i}\right) \left( 1-(1-\theta )\eta _{i}\right) \left( 1-(1-\theta )\alpha _{i}\right) \left\Vert \nu _{i}-\zeta \right\Vert . \end{eqnarray} (3.4)

    Because θ∈(0,1) and ηi,αi∈[0,1] for all i≥1, we have (1−(1−θ)ηi)(1−(1−θ)αi)≤1. Hence (3.4) reduces to

    \begin{equation} \left\Vert \nu _{i+1}-\zeta \right\Vert \leq \theta ^{3}\left( 1-(1-\theta )\gamma _{i}\right) \left\Vert \nu _{i}-\zeta \right\Vert . \end{equation} (3.5)

    It follows from (3.5) that

    \begin{eqnarray} \left\Vert \nu _{i+1}-\zeta \right\Vert &\leq &\theta ^{3}\left( 1-(1-\theta )\gamma _{i}\right) \left\Vert \nu _{i}-\zeta \right\Vert , \\ \left\Vert \nu _{i}-\zeta \right\Vert &\leq &\theta ^{3}\left( 1-(1-\theta )\gamma _{i-1}\right) \left\Vert \nu _{i-1}-\zeta \right\Vert , \\ &&\vdots \\ \left\Vert \nu _{1}-\zeta \right\Vert &\leq &\theta ^{3}\left( 1-(1-\theta )\gamma _{0}\right) \left\Vert \nu _{0}-\zeta \right\Vert . \end{eqnarray} (3.6)

    From (3.6), we have

    \begin{equation} \left\Vert \nu _{i+1}-\zeta \right\Vert \leq \theta ^{3(i+1)}\left\Vert \nu _{0}-\zeta \right\Vert \prod\limits_{u = 0}^{i}\left( 1-(1-\theta )\gamma _{u}\right) . \end{equation} (3.7)

    Again, since θ∈(0,1) and γu∈[0,1] for all u≥0, we have (1−(1−θ)γu)<1. It is known that 1−ν≤e^{−ν} for each ν∈[0,1]; then by (3.7), we obtain

    \begin{equation} \left\Vert \nu _{i+1}-\zeta \right\Vert \leq \theta ^{3(i+1)}e^{-(1-\theta )\sum\limits_{u = 0}^{i}\gamma _{u}}\left\Vert \nu _{0}-\zeta \right\Vert . \end{equation} (3.8)

    Taking the limit as i→∞ in (3.8) and using Σ_{u=0}^{∞}γu=∞, we have lim_{i→∞}‖νi−ζ‖=0, that is, {νi}→ζ∈Υ(Ξ).

    For uniqueness, let ζ,ζ^∗∈Υ(Ξ) with ζ≠ζ^∗. From the definition of Ξ, we can write

    \begin{equation*} \left\Vert \zeta -\zeta ^{\ast }\right\Vert = \left\Vert \Xi \zeta -\Xi \zeta ^{\ast }\right\Vert \leq \theta \left\Vert \zeta -\zeta ^{\ast }\right\Vert +\ell \left\Vert \zeta -\Xi \zeta \right\Vert = \theta \left\Vert \zeta -\zeta ^{\ast }\right\Vert < \left\Vert \zeta -\zeta ^{\ast }\right\Vert , \end{equation*}

    a contradiction. Hence ζ=ζ^∗.

    The next theorem shows that our iteration (1.5) converges faster than the iteration (1.4) in the sense of Berinde [28].

    Theorem 3.2. Let Δ be a CCS of a BS Λ and Ξ:Λ→Λ be a mapping satisfying (2.1) with Υ(Ξ)≠∅. Suppose that {νi} is the iterative sequence generated by the algorithm (1.5) with {αi},{ηi},{γi}⊂[0,1] such that 0<γ≤γi≤1 for all i≥1. Then the sequence {νi} converges to ζ∈Υ(Ξ) faster than the iterative scheme (1.4) does.

    Proof. It follows from (3.7) and the hypothesis 0<γ≤γi≤1 that

    \begin{eqnarray*} \left\Vert \nu _{i+1}-\zeta \right\Vert &\leq &\theta ^{3(i+1)}\left\Vert \nu _{0}-\zeta \right\Vert \prod\limits_{u = 0}^{i}\left( 1-(1-\theta )\gamma _{u}\right) \\ &\leq &\theta ^{3(i+1)}\left\Vert \nu _{0}-\zeta \right\Vert \left( 1-(1-\theta )\gamma \right) ^{i+1}. \end{eqnarray*}

    Analogously, the iterative process (1.4) (see [18], Theorem 3.2) satisfies

    \begin{equation} \left\Vert l_{i+1}-\zeta \right\Vert \leq \theta ^{2(i+1)}\left\Vert l_{0}-\zeta \right\Vert \prod\limits_{u = 0}^{i}\left( 1-(1-\theta )\gamma _{u}\right) . \end{equation} (3.9)

    Because γ≤γi≤1 for some γ>0 and all i≥1, (3.9) can be written as

    \begin{eqnarray*} \left\Vert l_{i+1}-\zeta \right\Vert &\leq &\theta ^{2(i+1)}\left\Vert l_{0}-\zeta \right\Vert \prod\limits_{u = 0}^{i}\left( 1-(1-\theta )\gamma _{u}\right) \\ &\leq &\theta ^{2(i+1)}\left\Vert l_{0}-\zeta \right\Vert \left( 1-(1-\theta )\gamma \right) ^{i+1}. \end{eqnarray*}

    Put

    \begin{equation*} a_{i} = \theta ^{3(i+1)}\left\Vert \nu _{0}-\zeta \right\Vert \left( 1-(1-\theta )\gamma \right) ^{i+1} \end{equation*}

    and

    \begin{equation*} b_{i} = \theta ^{2(i+1)}\left\Vert l_{0}-\zeta \right\Vert \left( 1-(1-\theta )\gamma \right) ^{i+1}. \end{equation*}

    Taking the same initial point l0=ν0 for both schemes, define

    \begin{equation*} \varrho _{i} = \frac{a_{i}}{b_{i}} = \frac{\theta ^{3(i+1)}\left\Vert \nu _{0}-\zeta \right\Vert \left( 1-(1-\theta )\gamma \right) ^{i+1}}{\theta ^{2(i+1)}\left\Vert l_{0}-\zeta \right\Vert \left( 1-(1-\theta )\gamma \right) ^{i+1}} = \theta ^{i+1}. \end{equation*}

    Taking the limit as i→∞, we have lim_{i→∞}ϱi=0. This means that {νi} converges faster than {li} to ζ.

    This part is devoted to some convergence results for our iteration procedure (1.5) for SGNMs in the setting of UCBSs.

    We begin with the proof of the following lemmas:

    Lemma 4.1. Let Δ be a CCS of a BS Λ and Ξ:Λ→Λ be a SGNM with Υ(Ξ)≠∅. If {νi} is the iterative sequence given by the algorithm (1.5), then lim_{i→∞}‖νi−ζ‖ exists for each ζ∈Υ(Ξ).

    Proof. Let ζ∈Υ(Ξ) and δ∈Δ. Clearly, from Proposition 2.1 (2), every SGNM with a FP is a quasi-nonexpansive mapping, so

    \begin{equation*} \frac{1}{2}\left\Vert \zeta -\Xi \zeta \right\Vert = 0\leq \left\Vert \zeta -\delta \right\Vert \ \Longrightarrow \ \left\Vert \Xi \zeta -\Xi \delta \right\Vert \leq \left\Vert \zeta -\delta \right\Vert . \end{equation*}

    Now, using (1.5), we obtain

    \begin{eqnarray} \left\Vert \varpi _{i}-\zeta \right\Vert & = &\left\Vert (1-\alpha _{i})\nu _{i}+\alpha _{i}\Xi \nu _{i}-\zeta \right\Vert \leq (1-\alpha _{i})\left\Vert \nu _{i}-\zeta \right\Vert +\alpha _{i}\left\Vert \Xi \nu _{i}-\zeta \right\Vert \\ &\leq &(1-\alpha _{i})\left\Vert \nu _{i}-\zeta \right\Vert +\alpha _{i}\left\Vert \nu _{i}-\zeta \right\Vert = \left\Vert \nu _{i}-\zeta \right\Vert . \end{eqnarray} (4.1)

    It follows from (1.5) and (4.1) that

    \begin{eqnarray} \left\Vert \wp _{i}-\zeta \right\Vert & = &\left\Vert \Xi \left( (1-\eta _{i})\varpi _{i}+\eta _{i}\Xi \varpi _{i}\right) -\zeta \right\Vert \leq \left\Vert (1-\eta _{i})\varpi _{i}+\eta _{i}\Xi \varpi _{i}-\zeta \right\Vert \\ &\leq &(1-\eta _{i})\left\Vert \varpi _{i}-\zeta \right\Vert +\eta _{i}\left\Vert \Xi \varpi _{i}-\zeta \right\Vert \leq (1-\eta _{i})\left\Vert \varpi _{i}-\zeta \right\Vert +\eta _{i}\left\Vert \varpi _{i}-\zeta \right\Vert \\ & = &\left\Vert \varpi _{i}-\zeta \right\Vert \leq \left\Vert \nu _{i}-\zeta \right\Vert . \end{eqnarray} (4.2)

    Similarly, from (1.5) and (4.2), one can write

    \begin{equation*} \left\Vert \Im _{i}-\zeta \right\Vert = \left\Vert \Xi \left( (1-\gamma _{i})\wp _{i}+\gamma _{i}\Xi \wp _{i}\right) -\zeta \right\Vert \leq \left\Vert \wp _{i}-\zeta \right\Vert \leq \left\Vert \nu _{i}-\zeta \right\Vert . \end{equation*}

    Finally, using (1.5) and the last inequality, we get

    \begin{equation*} \left\Vert \nu _{i+1}-\zeta \right\Vert = \left\Vert \Xi \Im _{i}-\zeta \right\Vert \leq \left\Vert \Im _{i}-\zeta \right\Vert \leq \left\Vert \nu _{i}-\zeta \right\Vert . \end{equation*}

    This shows that the sequence {‖νi−ζ‖} is bounded and nonincreasing for all ζ∈Υ(Ξ). Hence lim_{i→∞}‖νi−ζ‖ exists.

    Lemma 4.2. Let Δ be a non-empty CCS of a UCBS Λ and Ξ:Λ→Λ be a SGNM. If {νi} is the iterative sequence defined by the algorithm (1.5), then Υ(Ξ)≠∅ if and only if {νi} is bounded and lim_{i→∞}‖Ξνi−νi‖=0.

    Proof. Assume that Υ(Ξ)≠∅ and take ζ∈Υ(Ξ). Based on Lemma 4.1, lim_{i→∞}‖νi−ζ‖ exists and {νi} is bounded. Put

    \begin{equation} \lim\limits_{i\rightarrow \infty }\left\Vert \nu _{i}-\zeta \right\Vert = \varphi . \end{equation} (4.3)

    Applying (4.3) in (4.1) and taking the limit superior, we can write

    \begin{equation*} \limsup\limits_{i\rightarrow \infty }\left\Vert \varpi _{i}-\zeta \right\Vert \leq \limsup\limits_{i\rightarrow \infty }\left\Vert \nu _{i}-\zeta \right\Vert = \varphi . \end{equation*}

    According to Proposition 2.1 (2), we have

    \begin{equation} \limsup\limits_{i\rightarrow \infty }\left\Vert \Xi \nu _{i}-\zeta \right\Vert \leq \limsup\limits_{i\rightarrow \infty }\left\Vert \nu _{i}-\zeta \right\Vert = \varphi . \end{equation} (4.4)

    Again, using (1.5) and (4.1)–(4.3), we obtain

    \begin{eqnarray*} \left\Vert \nu _{i+1}-\zeta \right\Vert & = &\left\Vert \Xi \Im _{i}-\zeta \right\Vert \leq \left\Vert \Im _{i}-\zeta \right\Vert = \left\Vert \Xi \left( (1-\gamma _{i})\wp _{i}+\gamma _{i}\Xi \wp _{i}\right) -\zeta \right\Vert \\ &\leq &\left\Vert (1-\gamma _{i})\wp _{i}+\gamma _{i}\Xi \wp _{i}-\zeta \right\Vert \leq (1-\gamma _{i})\left\Vert \wp _{i}-\zeta \right\Vert +\gamma _{i}\left\Vert \Xi \wp _{i}-\zeta \right\Vert \\ &\leq &(1-\gamma _{i})\left\Vert \wp _{i}-\zeta \right\Vert +\gamma _{i}\left\Vert \wp _{i}-\zeta \right\Vert = \left\Vert \wp _{i}-\zeta \right\Vert \\ & = &\left\Vert \Xi \left( (1-\eta _{i})\varpi _{i}+\eta _{i}\Xi \varpi _{i}\right) -\zeta \right\Vert \leq \left\Vert (1-\eta _{i})\varpi _{i}+\eta _{i}\Xi \varpi _{i}-\zeta \right\Vert \\ &\leq &(1-\eta _{i})\left\Vert \varpi _{i}-\zeta \right\Vert +\eta _{i}\left\Vert \Xi \varpi _{i}-\zeta \right\Vert \leq (1-\eta _{i})\left\Vert \nu _{i}-\zeta \right\Vert +\eta _{i}\left\Vert \varpi _{i}-\zeta \right\Vert \\ & = &\left\Vert \nu _{i}-\zeta \right\Vert -\eta _{i}\left\Vert \nu _{i}-\zeta \right\Vert +\eta _{i}\left\Vert \varpi _{i}-\zeta \right\Vert . \end{eqnarray*}

    Thus, we have

    \begin{equation} \frac{\left\Vert \nu _{i+1}-\zeta \right\Vert -\left\Vert \nu _{i}-\zeta \right\Vert }{\eta _{i}}\leq \left\Vert \varpi _{i}-\zeta \right\Vert -\left\Vert \nu _{i}-\zeta \right\Vert . \end{equation} (4.5)

    Since ηi∈[0,1], then by (4.5), we get

    \begin{equation*} \left\Vert \nu _{i+1}-\zeta \right\Vert -\left\Vert \nu _{i}-\zeta \right\Vert \leq \frac{\left\Vert \nu _{i+1}-\zeta \right\Vert -\left\Vert \nu _{i}-\zeta \right\Vert }{\eta _{i}}\leq \left\Vert \varpi _{i}-\zeta \right\Vert -\left\Vert \nu _{i}-\zeta \right\Vert , \end{equation*}

    which implies that ‖ν_{i+1}−ζ‖ ≤ ‖ϖi−ζ‖. Then from (4.3), we have

    \begin{equation} \varphi \leq \liminf\limits_{i\rightarrow \infty }\left\Vert \varpi _{i}-\zeta \right\Vert . \end{equation} (4.6)

    It follows from (4.4) and (4.6) that

    \begin{eqnarray} \varphi & = &\lim\limits_{i\rightarrow \infty }\left\Vert \varpi _{i}-\zeta \right\Vert = \lim\limits_{i\rightarrow \infty }\left\Vert (1-\alpha _{i})\nu _{i}+\alpha _{i}\Xi \nu _{i}-\zeta \right\Vert \\ & = &\lim\limits_{i\rightarrow \infty }\left\Vert \alpha _{i}(\Xi \nu _{i}-\zeta )+(1-\alpha _{i})(\nu _{i}-\zeta )\right\Vert . \end{eqnarray} (4.7)

    From (4.3), (4.4), (4.7) and Lemma 2.5, we have lim_{i→∞}‖Ξνi−νi‖=0.

    Conversely, suppose that {νi} is bounded and lim_{i→∞}‖Ξνi−νi‖=0. Let ζ∈Q(Λ,{νi}); then by Definition 2.4 and Proposition 2.1 (3), we can write

    \begin{eqnarray*} R(\Xi \zeta ,\{\nu _{i}\}) & = &\limsup\limits_{i\rightarrow \infty }\left\Vert \nu _{i}-\Xi \zeta \right\Vert \leq \limsup\limits_{i\rightarrow \infty }\left( 3\left\Vert \Xi \nu _{i}-\nu _{i}\right\Vert +\left\Vert \nu _{i}-\zeta \right\Vert \right) \\ & = &\limsup\limits_{i\rightarrow \infty }\left\Vert \nu _{i}-\zeta \right\Vert = R(\zeta ,\{\nu _{i}\}), \end{eqnarray*}

    which leads to Ξζ∈Q(Λ,{νi}). Since Q(Λ,{νi}) is a singleton (Λ being uniformly convex), we get Ξζ=ζ, and hence Υ(Ξ)≠∅.

    Theorem 4.1. Let Λ, Δ and Ξ be as in Lemma 4.2. If Λ satisfies Opial's condition and Υ(Ξ)≠∅, then the sequence {νi} generated by (1.5) converges weakly to a FP of Ξ, that is, {νi}⇀ζ∈Υ(Ξ).

    Proof. Let ζ∈Υ(Ξ); based on Lemma 4.1, lim_{i→∞}‖νi−ζ‖ exists.

    Now, we prove that {νi} has a unique weak sequential limit in Υ(Ξ). Let {νij} and {νik} be subsequences of {νi} such that {νij}⇀ν and {νik}⇀ν^∗ for some ν,ν^∗∈Δ. Using Lemma 4.2, we obtain lim_{i→∞}‖Ξνi−νi‖=0. From Lemma 2.1, I−Ξ is demiclosed at zero, so (I−Ξ)ν=0, i.e., Ξν=ν; analogously, Ξν^∗=ν^∗. For the uniqueness, suppose ν≠ν^∗; then by Opial's property, we find that

    \begin{eqnarray*} \lim\limits_{i\rightarrow \infty }\left\Vert \nu _{i}-\nu \right\Vert & = &\lim\limits_{j\rightarrow \infty }\left\Vert \nu _{i_{j}}-\nu \right\Vert < \lim\limits_{j\rightarrow \infty }\left\Vert \nu _{i_{j}}-\nu ^{\ast }\right\Vert = \lim\limits_{i\rightarrow \infty }\left\Vert \nu _{i}-\nu ^{\ast }\right\Vert \\ & = &\lim\limits_{k\rightarrow \infty }\left\Vert \nu _{i_{k}}-\nu ^{\ast }\right\Vert < \lim\limits_{k\rightarrow \infty }\left\Vert \nu _{i_{k}}-\nu \right\Vert = \lim\limits_{i\rightarrow \infty }\left\Vert \nu _{i}-\nu \right\Vert , \end{eqnarray*}

    a contradiction, so ν=ν^∗ and {νi}⇀ν∈Υ(Ξ).

    Theorem 4.2. Let Λ be a UCBS and Δ be a non-empty compact convex subset of Λ. Assume that Ξ:Δ→Δ is a SGNM and {νi} is the iterative sequence given by (1.5). Then {νi}→ζ∈Υ(Ξ).

    Proof. According to Lemmas 2.2 and 4.2, we have Υ(Ξ)≠∅ and lim_{i→∞}‖Ξνi−νi‖=0. The compactness of Δ implies that there is a subsequence {νik} of {νi} such that νik→ζ for some ζ∈Δ. Using Proposition 2.1 (3), one can get

    \begin{equation*} \left\Vert \nu _{i_{k}}-\Xi \zeta \right\Vert \leq 3\left\Vert \Xi \nu _{i_{k}}-\nu _{i_{k}}\right\Vert +\left\Vert \nu _{i_{k}}-\zeta \right\Vert ,\quad \forall k\geq 1. \end{equation*}

    Passing to the limit as k→∞, we obtain Ξζ=ζ, that is, ζ∈Υ(Ξ). Also, by Lemma 4.1, lim_{i→∞}‖νi−ζ‖ exists for all ζ∈Υ(Ξ); thus νi→ζ strongly.

    Theorem 4.3. Let Λ, Δ and Ξ be as described in Lemma 4.2 and {νi} be the iterative sequence defined by (1.5). Then

    \begin{equation*} \{\nu _{i}\}\rightarrow \zeta \in \Upsilon (\Xi )\ \Longleftrightarrow \ \liminf\limits_{i\rightarrow \infty }d(\nu _{i},\Upsilon (\Xi )) = 0, \end{equation*}

    where d(ν,Υ(Ξ))=inf{‖ν−ζ‖:ζ∈Υ(Ξ)}.

    Proof. The necessity is clear. Conversely, let lim inf_{i→∞}d(νi,Υ(Ξ))=0. By Lemma 4.1, lim_{i→∞}‖νi−ζ‖ exists for all ζ∈Υ(Ξ); this implies that {d(νi,Υ(Ξ))} is nonincreasing, so lim_{i→∞}d(νi,Υ(Ξ)) exists. From our assumption lim inf_{i→∞}d(νi,Υ(Ξ))=0, we have lim_{i→∞}d(νi,Υ(Ξ))=0.

    Next, we shall show that the sequence {νi} is Cauchy in Δ. Because lim_{i→∞}d(νi,Υ(Ξ))=0, for a given ϵ>0 there is i0≥1 such that d(ν_{i0},Υ(Ξ))<ϵ/4; hence we may pick ζ∈Υ(Ξ) with ‖ν_{i0}−ζ‖<ϵ/2. Since {‖νi−ζ‖} is nonincreasing, for all i,j≥i0 we get

    \begin{equation*} \left\Vert \nu _{i}-\nu _{j}\right\Vert \leq \left\Vert \nu _{i}-\zeta \right\Vert +\left\Vert \zeta -\nu _{j}\right\Vert \leq 2\left\Vert \nu _{i_{0}}-\zeta \right\Vert < \frac{\epsilon }{2}+\frac{\epsilon }{2} = \epsilon . \end{equation*}

    This proves that {νi} is a Cauchy sequence in Δ. As Δ is closed, there is ν̂∈Δ such that lim_{i→∞}νi=ν̂. Since lim_{i→∞}d(νi,Υ(Ξ))=0, we get d(ν̂,Υ(Ξ))=0. Thus ν̂∈Υ(Ξ), since Υ(Ξ) is closed (Ξ being quasi-nonexpansive). This finishes the proof.

    Theorem 4.4. Let Λ, Δ and Ξ be as defined in Lemma 4.2 and {νi} be the iterative sequence generated by (1.5). If Ξ fulfills condition (I), then {νi}→ζ∈Υ(Ξ).

    Proof. It follows from Lemma 4.2 that

    \begin{equation} \lim\limits_{i\rightarrow \infty }\left\Vert \Xi \nu _{i}-\nu _{i}\right\Vert = 0. \end{equation} (4.8)

    Based on condition (I) in Definition 2.8 and using (4.8), we observe that

    \begin{equation*} \lim\limits_{i\rightarrow \infty }\hbar \left( d(\nu _{i},\Upsilon (\Xi ))\right) \leq \lim\limits_{i\rightarrow \infty }\left\Vert \Xi \nu _{i}-\nu _{i}\right\Vert = 0, \end{equation*}

    which leads to lim_{i→∞}ℏ(d(νi,Υ(Ξ)))=0. From the definition of ℏ, we get lim_{i→∞}d(νi,Υ(Ξ))=0. Applying Theorem 4.3, we conclude that {νi}→ζ∈Υ(Ξ).

    In this part, we show that our iteration process (1.5) is Ξ-stable.

    Theorem 5.1. Let Λ be a BS and Δ be a CCS of Λ. Suppose that Ξ:Λ→Λ is a self-mapping satisfying (2.1) and {νi} is the iterative sequence generated by (1.5) with {αi},{ηi},{γi}⊂[0,1] such that Σ_{i=0}^{∞}γi=∞. Then the algorithm (1.5) is Ξ-stable.

    Proof. Let {σi} be an arbitrary sequence in Δ, write the sequence generated by our iteration (1.5) as ν_{i+1}=f(Ξ,νi), which converges to the unique FP ζ by Theorem 3.1, and set ϕi=‖σ_{i+1}−f(Ξ,σi)‖. To show that the algorithm is Ξ-stable, we must prove that lim_{i→∞}ϕi=0 if and only if lim_{i→∞}σi=ζ.

    Assume that lim_{i→∞}ϕi=0; then from (1.5) and (3.5) (applied with σi in place of νi), we get

    \begin{eqnarray*} \left\Vert \sigma _{i+1}-\zeta \right\Vert & = &\left\Vert \sigma _{i+1}-f(\Xi ,\sigma _{i})+f(\Xi ,\sigma _{i})-\zeta \right\Vert \\ &\leq &\left\Vert \sigma _{i+1}-f(\Xi ,\sigma _{i})\right\Vert +\left\Vert f(\Xi ,\sigma _{i})-\zeta \right\Vert \\ &\leq &\phi _{i}+\theta ^{3}\left( 1-(1-\theta )\gamma _{i}\right) \left\Vert \sigma _{i}-\zeta \right\Vert , \end{eqnarray*}

    for all i≥1. Set

    \begin{equation*} \varphi _{i} = \left\Vert \sigma _{i}-\zeta \right\Vert ,\quad z_{i} = (1-\theta )\gamma _{i}\in (0,1)\quad \text{and}\quad \varphi _{i}^{\ast } = \phi _{i}. \end{equation*}

    Since θ³(1−(1−θ)γi)≤1−(1−θ)γi=1−zi, we obtain φ_{i+1}≤(1−zi)φi+φi^∗. Because lim_{i→∞}ϕi=0, we have lim_{i→∞}φi^∗/zi=lim_{i→∞}ϕi/((1−θ)γi)=0. Thus, all requirements of Lemma 2.3 are fulfilled. Hence lim_{i→∞}‖σi−ζ‖=0, that is, lim_{i→∞}σi=ζ.

    Conversely, assume that lim_{i→∞}σi=ζ; then we get

    \begin{eqnarray*} \phi _{i} & = &\left\Vert \sigma _{i+1}-f(\Xi ,\sigma _{i})\right\Vert = \left\Vert \sigma _{i+1}-\zeta +\zeta -f(\Xi ,\sigma _{i})\right\Vert \\ &\leq &\left\Vert \sigma _{i+1}-\zeta \right\Vert +\left\Vert \zeta -f(\Xi ,\sigma _{i})\right\Vert \\ &\leq &\left\Vert \sigma _{i+1}-\zeta \right\Vert +\theta ^{3}\left( 1-(1-\theta )\gamma _{i}\right) \left\Vert \sigma _{i}-\zeta \right\Vert ; \end{eqnarray*}

    taking the limit as i→∞, we have lim_{i→∞}ϕi=0. This finishes the proof.

    The following example supports Theorem 5.1.

    Example 5.1. Consider Λ=[0,1] and Ξν=ν/2. Clearly, 0 is a FP of Ξ. To prove that Ξ satisfies the condition (2.1), let θ=1/2; then for each ℓ≥0, we get

    \begin{equation*} \left\Vert \Xi \nu -\Xi \varpi \right\Vert -\theta \left\Vert \nu -\varpi \right\Vert -\ell \left\Vert \nu -\Xi \nu \right\Vert = \frac{1}{2}\left\vert \nu -\varpi \right\vert -\frac{1}{2}\left\vert \nu -\varpi \right\vert -\ell \left\vert \nu -\frac{\nu }{2}\right\vert = -\frac{\ell \nu }{2}\leq 0,\quad \text{for all }\nu ,\varpi \in \Lambda . \end{equation*}

    Now, we illustrate that our algorithm (1.5) is Ξ-stable. Suppose that αi=ηi=γi=1/(i+1) and ν0∈[0,1]; then we obtain

    \begin{eqnarray*} \varpi _{i} & = &\left( 1-\frac{1}{i+1}\right) \nu _{i}+\frac{1}{i+1}\cdot \frac{\nu _{i}}{2} = \left( 1-\frac{1}{2(i+1)}\right) \nu _{i}, \\ \wp _{i} & = &\frac{1}{2}\left( \left( 1-\frac{1}{i+1}\right) \varpi _{i}+\frac{1}{i+1}\cdot \frac{\varpi _{i}}{2}\right) = \frac{1}{2}\left( 1-\frac{1}{2(i+1)}\right) ^{2}\nu _{i}, \\ \Im _{i} & = &\Xi \left( (1-\gamma _{i})\wp _{i}+\gamma _{i}\Xi \wp _{i}\right) = \frac{1}{4}\left( 1-\frac{1}{2(i+1)}\right) ^{3}\nu _{i}, \\ \nu _{i+1} & = &\Xi \Im _{i} = \frac{1}{8}\left( 1-\frac{1}{2(i+1)}\right) ^{3}\nu _{i} = \left( 1-z_{i}\right) \nu _{i}, \end{eqnarray*}

    where

    \begin{equation*} z_{i} = 1-\frac{1}{8}\left( 1-\frac{1}{2(i+1)}\right) ^{3} = \frac{7}{8}+\frac{3}{16(i+1)}-\frac{3}{32(i+1)^{2}}+\frac{1}{64(i+1)^{3}}. \end{equation*}

    It is clear that zi∈(0,1) for all i∈ℕ and Σ_{i=0}^{∞}zi=∞; hence, by Lemma 2.3 (with φi=νi and φi^∗=0), lim_{i→∞}νi=0=ζ. Now choose σi=1/(i+2); then

    \begin{eqnarray*} \phi _{i} & = &\left\Vert \sigma _{i+1}-f(\Xi ,\sigma _{i})\right\Vert = \left\vert \frac{1}{i+3}-\frac{1}{8}\left( 1-\frac{1}{2(i+1)}\right) ^{3}\frac{1}{i+2}\right\vert \\ & = &\left\vert \frac{1}{i+3}-\frac{1}{8(i+2)}+\frac{3}{16(i+1)(i+2)}-\frac{3}{32(i+1)^{2}(i+2)}+\frac{1}{64(i+1)^{3}(i+2)}\right\vert , \end{eqnarray*}

    and letting i→∞, we obtain lim_{i→∞}ϕi=0, while lim_{i→∞}σi=0=ζ. This is in accordance with the Ξ-stability of our scheme established in Theorem 5.1.
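    The computation above can also be checked numerically. The following small Python sketch (ours, not part of the paper) evaluates ϕi=‖σ_{i+1}−f(Ξ,σi)‖ for σi=1/(i+2) and prints the first few values, which decrease toward 0.

    # Numerical check of Example 5.1: Xi(nu) = nu/2, alpha_i = eta_i = gamma_i = 1/(i+1),
    # test sequence sigma_i = 1/(i+2); phi_i = |sigma_{i+1} - f(Xi, sigma_i)| should tend to 0.
    def f(Xi, nu, i):
        a = e = g = 1.0 / (i + 1)            # control sequences of Example 5.1
        varpi = (1 - a) * nu + a * Xi(nu)
        wp = Xi((1 - e) * varpi + e * Xi(varpi))
        im = Xi((1 - g) * wp + g * Xi(wp))
        return Xi(im)                        # one step of scheme (1.5)

    Xi = lambda nu: nu / 2
    for i in range(1, 6):
        sigma_i, sigma_next = 1 / (i + 2), 1 / (i + 3)
        print(i, abs(sigma_next - f(Xi, sigma_i, i)))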

    In this part, we present a data-dependence result for an operator Ξ satisfying the inequality (2.1) via our iterative algorithm (1.5).

    Theorem 6.1. Let Ξ̂ be an approximate operator of a mapping Ξ fulfilling (2.1). Suppose that {νi} is the iterative sequence given by (1.5) for Ξ, and define an iterative sequence {ν̂i} for Ξ̂ as follows:

    \begin{equation} \left\{ \begin{array}{l} \widehat{\nu }_{0}\in \Delta , \\ \widehat{\varpi }_{i} = (1-\alpha _{i})\widehat{\nu }_{i}+\alpha _{i}\widehat{\Xi }\widehat{\nu }_{i}, \\ \widehat{\wp }_{i} = \widehat{\Xi }\left( (1-\eta _{i})\widehat{\varpi }_{i}+\eta _{i}\widehat{\Xi }\widehat{\varpi }_{i}\right) , \\ \widehat{\Im }_{i} = \widehat{\Xi }\left( (1-\gamma _{i})\widehat{\wp }_{i}+\gamma _{i}\widehat{\Xi }\widehat{\wp }_{i}\right) , \\ \widehat{\nu }_{i+1} = \widehat{\Xi }\widehat{\Im }_{i}, \end{array} \right. \end{equation} (6.1)

    for all i≥1, where {αi}, {ηi} and {γi} are sequences in [0,1] such that

    (p1) 1/2 ≤ γi, for all i≥1;

    (p2) Σ_{i=0}^{∞}γi=∞.

    If Ξζ=ζ and Ξ̂ζ̂=ζ̂ with lim_{i→∞}ν̂i=ζ̂, then

    \begin{equation*} \left\Vert \zeta -\widehat{\zeta }\right\Vert \leq \frac{14\epsilon }{1-\theta }, \end{equation*}

    where ϵ>0 is the fixed number appearing in Definition 2.7.

    Proof. It follows from (2.1), (1.5), (6.1) and Definition 2.7 that

    \begin{eqnarray} \left\Vert \varpi _{i}-\widehat{\varpi }_{i}\right\Vert & = &\left\Vert (1-\alpha _{i})\nu _{i}+\alpha _{i}\Xi \nu _{i}-\left( (1-\alpha _{i})\widehat{\nu }_{i}+\alpha _{i}\widehat{\Xi }\widehat{\nu }_{i}\right) \right\Vert \\ & = &\left\Vert (1-\alpha _{i})(\nu _{i}-\widehat{\nu }_{i})+\alpha _{i}(\Xi \nu _{i}-\widehat{\Xi }\widehat{\nu }_{i})\right\Vert \\ &\leq &(1-\alpha _{i})\left\Vert \nu _{i}-\widehat{\nu }_{i}\right\Vert +\alpha _{i}\left( \left\Vert \Xi \nu _{i}-\Xi \widehat{\nu }_{i}\right\Vert +\left\Vert \Xi \widehat{\nu }_{i}-\widehat{\Xi }\widehat{\nu }_{i}\right\Vert \right) \\ &\leq &(1-\alpha _{i})\left\Vert \nu _{i}-\widehat{\nu }_{i}\right\Vert +\alpha _{i}\theta \left\Vert \nu _{i}-\widehat{\nu }_{i}\right\Vert +\alpha _{i}\ell \left\Vert \nu _{i}-\Xi \nu _{i}\right\Vert +\alpha _{i}\epsilon \\ & = &\left( 1-(1-\theta )\alpha _{i}\right) \left\Vert \nu _{i}-\widehat{\nu }_{i}\right\Vert +\alpha _{i}\ell \left\Vert \nu _{i}-\Xi \nu _{i}\right\Vert +\alpha _{i}\epsilon . \end{eqnarray} (6.2)

    Again, using (2.1), (1.5), (6.1) and Definition 2.7, we have

    \begin{eqnarray} \left\Vert \wp _{i}-\widehat{\wp }_{i}\right\Vert & = &\left\Vert \Xi \left( (1-\eta _{i})\varpi _{i}+\eta _{i}\Xi \varpi _{i}\right) -\widehat{\Xi }\left( (1-\eta _{i})\widehat{\varpi }_{i}+\eta _{i}\widehat{\Xi }\widehat{\varpi }_{i}\right) \right\Vert \\ &\leq &\left\Vert \Xi \left( (1-\eta _{i})\varpi _{i}+\eta _{i}\Xi \varpi _{i}\right) -\Xi \left( (1-\eta _{i})\widehat{\varpi }_{i}+\eta _{i}\widehat{\Xi }\widehat{\varpi }_{i}\right) \right\Vert \\ &&+\left\Vert \Xi \left( (1-\eta _{i})\widehat{\varpi }_{i}+\eta _{i}\widehat{\Xi }\widehat{\varpi }_{i}\right) -\widehat{\Xi }\left( (1-\eta _{i})\widehat{\varpi }_{i}+\eta _{i}\widehat{\Xi }\widehat{\varpi }_{i}\right) \right\Vert \\ &\leq &\theta \left( (1-\eta _{i})\left\Vert \varpi _{i}-\widehat{\varpi }_{i}\right\Vert +\eta _{i}\left\Vert \Xi \varpi _{i}-\widehat{\Xi }\widehat{\varpi }_{i}\right\Vert \right) \\ &&+\ell \left[ (1-\eta _{i})\varpi _{i}+\eta _{i}\Xi \varpi _{i}-\Xi \left( (1-\eta _{i})\varpi _{i}+\eta _{i}\Xi \varpi _{i}\right) \right] +\epsilon \\ &\leq &\theta \left( (1-\eta _{i})\left\Vert \varpi _{i}-\widehat{\varpi }_{i}\right\Vert +\eta _{i}\theta \left\Vert \varpi _{i}-\widehat{\varpi }_{i}\right\Vert +\eta _{i}\ell \left\Vert \varpi _{i}-\Xi \varpi _{i}\right\Vert +\epsilon \right) \\ &&+\ell \left[ (1-\eta _{i})\varpi _{i}+\eta _{i}\Xi \varpi _{i}-\Xi \left( (1-\eta _{i})\varpi _{i}+\eta _{i}\Xi \varpi _{i}\right) \right] +\epsilon \\ & = &\theta \left( 1-(1-\theta )\eta _{i}\right) \left\Vert \varpi _{i}-\widehat{\varpi }_{i}\right\Vert +\theta \eta _{i}\ell \left\Vert \varpi _{i}-\Xi \varpi _{i}\right\Vert +\theta \epsilon \\ &&+\ell \left[ (1-\eta _{i})\varpi _{i}+\eta _{i}\Xi \varpi _{i}-\Xi \left( (1-\eta _{i})\varpi _{i}+\eta _{i}\Xi \varpi _{i}\right) \right] +\epsilon . \end{eqnarray} (6.3)

    Applying (6.2) on (6.3), we get

    \begin{eqnarray} \left\Vert \wp _{i}-\widehat{\wp }_{i}\right\Vert &\leq &\theta \left( 1-(1-\theta )\eta _{i}\right) \left[ (1-(1-\theta )\alpha _{i})\left\Vert \nu _{i}-\widehat{\nu }_{i}\right\Vert +\alpha _{i}\ell \left\Vert \nu _{i}-\Xi \nu _{i}\right\Vert +\alpha _{i}\epsilon \right] \\ &&+\theta \eta _{i}\ell \left\Vert \varpi _{i}-\Xi \varpi _{i}\right\Vert +\ell \left[ (1-\eta _{i})\varpi _{i}+\eta _{i}\Xi \varpi _{i}-\Xi \left( (1-\eta _{i})\varpi _{i}+\eta _{i}\Xi \varpi _{i}\right) \right] +\theta \epsilon +\epsilon \\ & = &\theta \left( 1-(1-\theta )\eta _{i}\right) (1-(1-\theta )\alpha _{i})\left\Vert \nu _{i}-\widehat{\nu }_{i}\right\Vert +\theta \left( 1-(1-\theta )\eta _{i}\right) \alpha _{i}\ell \left\Vert \nu _{i}-\Xi \nu _{i}\right\Vert \\ &&+\theta \eta _{i}\ell \left\Vert \varpi _{i}-\Xi \varpi _{i}\right\Vert +\ell \left[ (1-\eta _{i})\varpi _{i}+\eta _{i}\Xi \varpi _{i}-\Xi \left( (1-\eta _{i})\varpi _{i}+\eta _{i}\Xi \varpi _{i}\right) \right] \\ &&+\theta \alpha _{i}\epsilon +\theta \eta _{i}\alpha _{i}\epsilon \left( \theta -1\right) +\theta \epsilon +\epsilon . \end{eqnarray} (6.4)

    Similar to (6.3), one can write

    \begin{eqnarray} \left\Vert \Im _{i}-\widehat{\Im }_{i}\right\Vert &\leq &\theta \left( 1-(1-\theta )\gamma _{i}\right) \left\Vert \wp _{i}-\widehat{\wp }_{i}\right\Vert +\theta \gamma _{i}\ell \left\Vert \wp _{i}-\Xi \wp _{i}\right\Vert +\theta \epsilon \\ &&+\ell \left[ (1-\gamma _{i})\wp _{i}+\gamma _{i}\Xi \wp _{i}-\Xi \left( (1-\gamma _{i})\wp _{i}+\gamma _{i}\Xi \wp _{i}\right) \right] +\epsilon . \end{eqnarray} (6.5)

    Applying (6.4) on (6.5), we have

    \begin{eqnarray*} \left\Vert \Im _{i}-\widehat{\Im }_{i}\right\Vert &\leq &\theta \left( 1-(1-\theta )\gamma _{i}\right) \left\{ \theta \left( 1-(1-\theta )\eta _{i}\right) (1-(1-\theta )\alpha _{i})\left\Vert \nu _{i}-\widehat{\nu }_{i}\right\Vert +\theta \left( 1-(1-\theta )\eta _{i}\right) \alpha _{i}\ell \left\Vert \nu _{i}-\Xi \nu _{i}\right\Vert \right. \\ &&+\theta \eta _{i}\ell \left\Vert \varpi _{i}-\Xi \varpi _{i}\right\Vert +\ell \left[ (1-\eta _{i})\varpi _{i}+\eta _{i}\Xi \varpi _{i}-\Xi \left( (1-\eta _{i})\varpi _{i}+\eta _{i}\Xi \varpi _{i}\right) \right] \\ &&\left. +\theta \alpha _{i}\epsilon +\theta \eta _{i}\alpha _{i}\epsilon \left( \theta -1\right) +\theta \epsilon +\epsilon \right\} \\ &&+\theta \gamma _{i}\ell \left\Vert \wp _{i}-\Xi \wp _{i}\right\Vert +\ell \left[ (1-\gamma _{i})\wp _{i}+\gamma _{i}\Xi \wp _{i}-\Xi \left( (1-\gamma _{i})\wp _{i}+\gamma _{i}\Xi \wp _{i}\right) \right] +\theta \epsilon +\epsilon , \end{eqnarray*}

    it follows that

    \begin{eqnarray} \left\Vert \Im _{i}-\widehat{\Im }_{i}\right\Vert &\leq &\theta ^{2}\left( 1-(1-\theta )\gamma _{i}\right) \left( 1-(1-\theta )\eta _{i}\right) (1-(1-\theta )\alpha _{i})\left\Vert \nu _{i}-\widehat{\nu }_{i}\right\Vert \\ &&+\theta ^{2}\left( 1-(1-\theta )\gamma _{i}\right) \left( 1-(1-\theta )\eta _{i}\right) \alpha _{i}\ell \left\Vert \nu _{i}-\Xi \nu _{i}\right\Vert \\ &&+\theta ^{2}\left( 1-(1-\theta )\gamma _{i}\right) \eta _{i}\ell \left\Vert \varpi _{i}-\Xi \varpi _{i}\right\Vert \\ &&+\theta \left( 1-(1-\theta )\gamma _{i}\right) \ell \left[ (1-\eta _{i})\varpi _{i}+\eta _{i}\Xi \varpi _{i}-\Xi \left( (1-\eta _{i})\varpi _{i}+\eta _{i}\Xi \varpi _{i}\right) \right] \\ &&+\ell \left[ (1-\gamma _{i})\wp _{i}+\gamma _{i}\Xi \wp _{i}-\Xi \left( (1-\gamma _{i})\wp _{i}+\gamma _{i}\Xi \wp _{i}\right) \right] \\ &&+\theta \gamma _{i}\ell \left\Vert \wp _{i}-\Xi \wp _{i}\right\Vert +\theta ^{2}\alpha _{i}\epsilon +\theta ^{2}\gamma _{i}\alpha _{i}\epsilon \left( \theta -1\right) +\theta ^{2}\eta _{i}\alpha _{i}\epsilon \left( \theta -1\right) \\ &&+\theta ^{2}\gamma _{i}\eta _{i}\alpha _{i}\epsilon \left( \theta -1\right) ^{2}+\theta ^{2}\epsilon +\theta ^{2}\gamma _{i}\epsilon \left( \theta -1\right) +2\theta \epsilon +\theta \gamma _{i}\epsilon (\theta -1)+\epsilon . \end{eqnarray} (6.6)

    Finally, from (2.1), (1.5) and (6.6), we get

    \begin{eqnarray} \left\Vert \nu _{i+1}-\widehat{\nu }_{i+1}\right\Vert & = &\left\Vert \Xi \Im _{i}-\widehat{\Xi }\widehat{\Im }_{i}\right\Vert \\ & = &\left\Vert \Xi \Im _{i}-\Xi \widehat{\Im }_{i}+\Xi \widehat{\Im }_{i}- \widehat{\Xi }\widehat{\Im }_{i}\right\Vert \\ &\leq &\left\Vert \Xi \Im _{i}-\Xi \widehat{\Im }_{i}\right\Vert +\left\Vert \Xi \widehat{\Im }_{i}-\widehat{\Xi }\widehat{\Im }_{i}\right\Vert \\ &\leq &\theta \left\Vert \Im _{i}-\widehat{\Im }_{i}\right\Vert +\ell \left\Vert \Im _{i}-\Xi \Im _{i}\right\Vert +\epsilon \\ &\leq &\theta ^{3}\left( 1-(1-\theta )\gamma _{i}\right) \left( 1-(1-\theta )\eta _{i}\right) (1-(1-\theta )\alpha _{i})\left\Vert \nu _{i}-\widehat{\nu }_{i}\right\Vert \\ &&+\theta ^{3}\left( 1-(1-\theta )\gamma _{i}\right) \left( 1-(1-\theta )\eta _{i}\right) \alpha _{i}\ell \left\Vert \nu _{i}-\Xi \nu _{i}\right\Vert \\ &&+\theta ^{3}\left( 1-(1-\theta )\gamma _{i}\right) \eta _{i}\ell \left\Vert \varpi _{i}-\Xi \varpi _{i}\right\Vert \\ &&+\theta ^{2}\left( 1-(1-\theta )\gamma _{i}\right) \ell \left[ (1-\eta _{i})\varpi _{i}+\eta _{i}\Xi \varpi _{i}-\Xi \left( (1-\eta _{i})\varpi _{i}+\eta _{i}\Xi \varpi _{i}\right) \right] \\ &&+\theta \ell \left[ (1-\gamma _{i})\wp _{i}+\gamma _{i}\Xi \wp _{i}-\Xi \left( (1-\gamma _{i})\wp _{i}+\gamma _{i}\Xi \wp _{i}\right) \right] \\ &&+\theta ^{2}\gamma _{i}\ell \left\Vert \wp _{i}-\Xi \wp _{i}\right\Vert +\ell \left\Vert \Im _{i}-\Xi \Im _{i}\right\Vert +\theta ^{3}\alpha _{i}\epsilon +\theta ^{3}\gamma _{i}\alpha _{i}\epsilon \left( \theta -1\right) \\ &&+\theta ^{3}\eta _{i}\alpha _{i}\epsilon \left( \theta -1\right) +\theta ^{3}\gamma _{i}\eta _{i}\alpha _{i}\epsilon \left( \theta -1\right) ^{2}+\theta ^{3}\epsilon +\theta ^{3}\gamma _{i}\epsilon \left( \theta -1\right) +2\theta ^{2}\epsilon \\ &&+\theta ^{2}\gamma _{i}\epsilon (\theta -1)+\theta \epsilon +\epsilon . \end{eqnarray} (6.7)

    Since \alpha _{i}, \eta _{i}, \gamma _{i}\in \lbrack 0, 1] and \theta \in (0, 1), we conclude that

    \begin{equation} \left\{ \begin{array}{c} (1-(1-\theta )\alpha _{i}) < 1, \\ \left( 1-(1-\theta )\eta _{i}\right) < 1, \\ (1-(1-\theta )\gamma _{i}) < 1, \\ \left( \theta -1\right) < 0, \\ \theta ^{3},\theta ^{2},\theta < 1, \\ \theta ^{3}\alpha _{i},\theta ^{3}\gamma _{i},\theta ^{3}\eta _{i} < 1. \end{array} \right. \end{equation} (6.8)

    Applying (6.8) in (6.7), we have

    \begin{eqnarray*} \left\Vert \nu _{i+1}-\widehat{\nu }_{i+1}\right\Vert &\leq &\left( 1-(1-\theta )\gamma _{i}\right) \left\Vert \nu _{i}-\widehat{\nu } _{i}\right\Vert +\ell \left\Vert \nu _{i}-\Xi \nu _{i}\right\Vert +\ell \left\Vert \varpi _{i}-\Xi \varpi _{i}\right\Vert \\ &&+\ell \left[ (1-\eta _{i})\varpi _{i}+\eta _{i}\Xi \varpi _{i}-\Xi \left( (1-\eta _{i})\varpi _{i}+\eta _{i}\Xi \varpi _{i}\right) \right] \\ &&+\ell \left[ (1-\gamma _{i})\wp _{i}+\gamma _{i}\Xi \wp _{i}-\Xi \left( (1-\gamma _{i})\wp _{i}+\gamma _{i}\Xi \wp _{i}\right) \right] \\ &&+\ell \left\Vert \Im _{i}-\Xi \Im _{i}\right\Vert +\gamma _{i}\ell \left\Vert \wp _{i}-\Xi \wp _{i}\right\Vert +2\gamma _{i}\epsilon +6\epsilon . \end{eqnarray*}

    From assumption (p1), i.e., 2\gamma _{i}\geq 1, we can obtain

    \begin{eqnarray*} &&\left\Vert \nu _{i+1}-\widehat{\nu }_{i+1}\right\Vert \\ &\leq &\left( 1-(1-\theta )\gamma _{i}\right) \left\Vert \nu _{i}-\widehat{ \nu }_{i}\right\Vert +2\gamma _{i}\ell \left\Vert \nu _{i}-\Xi \nu _{i}\right\Vert \\ &&+2\gamma _{i}\ell \left\Vert \varpi _{i}-\Xi \varpi _{i}\right\Vert +2\gamma _{i}\ell \left\Vert \Im _{i}-\Xi \Im _{i}\right\Vert +\gamma _{i}\ell \left\Vert \wp _{i}-\Xi \wp _{i}\right\Vert \\ &&+2\gamma _{i}\ell \left[ (1-\eta _{i})\varpi _{i}+\eta _{i}\Xi \varpi _{i}-\Xi \left( (1-\eta _{i})\varpi _{i}+\eta _{i}\Xi \varpi _{i}\right) \right] \\ &&+2\gamma _{i}\ell \left[ (1-\gamma _{i})\wp _{i}+\gamma _{i}\Xi \wp _{i}-\Xi \left( (1-\gamma _{i})\wp _{i}+\gamma _{i}\Xi \wp _{i}\right) \right] \\ &&+2\gamma _{i}\epsilon +12\gamma _{i}\epsilon \\ & = &\left( 1-(1-\theta )\gamma _{i}\right) \left\Vert \nu _{i}-\widehat{\nu } _{i}\right\Vert \\ &&+\gamma _{i}(1-\theta )\times \left\{ \frac{2\ell \left\Vert \nu _{i}-\Xi \nu _{i}\right\Vert +2\ell \left\Vert \varpi _{i}-\Xi \varpi _{i}\right\Vert +2\ell \left\Vert \Im _{i}-\Xi \Im _{i}\right\Vert +\ell \left\Vert \wp _{i}-\Xi \wp _{i}\right\Vert }{(1-\theta )}\right. \\ &&+\frac{2\ell \left[ (1-\eta _{i})\varpi _{i}+\eta _{i}\Xi \varpi _{i}-\Xi \left( (1-\eta _{i})\varpi _{i}+\eta _{i}\Xi \varpi _{i}\right) \right] }{ (1-\theta )} \\ &&\left. +\frac{2\ell \left[ (1-\gamma _{i})\wp _{i}+\gamma _{i}\Xi \wp _{i}-\Xi \left( (1-\gamma _{i})\wp _{i}+\gamma _{i}\Xi \wp _{i}\right) \right] +14\epsilon }{(1-\theta )}\right\} . \end{eqnarray*}

    Put \varphi _{i} = \left\Vert \nu _{i}-\widehat{\nu }_{i}\right\Vert, z_{i} = (1-\theta)\gamma _{i}\in (0, 1) and

    \begin{eqnarray*} \varphi _{i}^{\ast } & = &\left\{ \frac{2\ell \left\Vert \nu _{i}-\Xi \nu _{i}\right\Vert +2\ell \left\Vert \varpi _{i}-\Xi \varpi _{i}\right\Vert +2\ell \left\Vert \Im _{i}-\Xi \Im _{i}\right\Vert +\ell \left\Vert \wp _{i}-\Xi \wp _{i}\right\Vert }{(1-\theta )}\right. \\ &&+\frac{2\ell \left[ (1-\eta _{i})\varpi _{i}+\eta _{i}\Xi \varpi _{i}-\Xi \left( (1-\eta _{i})\varpi _{i}+\eta _{i}\Xi \varpi _{i}\right) \right] }{ (1-\theta )} \\ &&+\left. \frac{2\ell \left[ (1-\gamma _{i})\wp _{i}+\gamma _{i}\Xi \wp _{i}-\Xi \left( (1-\gamma _{i})\wp _{i}+\gamma _{i}\Xi \wp _{i}\right) \right] +14\epsilon }{(1-\theta )}\right\} . \end{eqnarray*}

    We know from Theorem 3.1 that \lim\limits_{i\rightarrow \infty }\nu _{i} = \zeta , and since \Xi \zeta = \zeta , we have

    \begin{eqnarray*} \lim\limits_{i\rightarrow \infty }\left\Vert \nu _{i}-\Xi \nu _{i}\right\Vert & = &\lim\limits_{i\rightarrow \infty }\left\Vert \varpi _{i}-\Xi \varpi _{i}\right\Vert \\ & = &\lim\limits_{i\rightarrow \infty }\left\Vert \Im _{i}-\Xi \Im _{i}\right\Vert = \lim\limits_{i\rightarrow \infty }\left\Vert \wp _{i}-\Xi \wp _{i}\right\Vert \\ & = &\lim\limits_{i\rightarrow \infty }\left[ (1-\eta _{i})\varpi _{i}+\eta _{i}\Xi \varpi _{i}-\Xi \left( (1-\eta _{i})\varpi _{i}+\eta _{i}\Xi \varpi _{i}\right) \right] \\ & = &\lim\limits_{i\rightarrow \infty }\left[ (1-\gamma _{i})\wp _{i}+\gamma _{i}\Xi \wp _{i}-\Xi \left( (1-\gamma _{i})\wp _{i}+\gamma _{i}\Xi \wp _{i}\right) \right] = 0. \end{eqnarray*}

    Hence, from Lemma 2.4, we get

    \begin{equation} 0\leq \limsup\limits_{i\rightarrow \infty }\left\Vert \nu _{i}-\widehat{\nu } _{i}\right\Vert \leq \limsup\limits_{i\rightarrow \infty }\frac{14\epsilon }{ (1-\theta )}. \end{equation} (6.9)

    Since \lim\limits_{i\rightarrow \infty }\nu _{i} = \zeta and, by our assumption, \lim\limits_{i\rightarrow \infty }\widehat{\nu }_{i} = \widehat{\zeta }, the inequality (6.9) leads to

    \begin{equation*} \left\Vert \zeta -\widehat{\zeta }\right\Vert \leq \frac{14\epsilon }{ (1-\theta )}. \end{equation*}

    This completes the proof.
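    To illustrate Theorem 6.1 numerically, the following sketch (our own toy setup, not from the paper) compares the limits produced by (1.5) for Ξν=ν/2 and by (6.1) for the perturbed operator Ξ̂ν=ν/2+0.01, and checks the bound 14ϵ/(1−θ); the constant control sequences are an assumption chosen to satisfy (p1) and (p2).

    # Data-dependence check: Xi has FP 0, Xi_hat has FP 0.02, and
    # |Xi(nu) - Xi_hat(nu)| = 0.01 = epsilon for every nu, with theta = 1/2 in (2.1).
    def run(op, nu, steps=200):
        for _ in range(steps):
            a = e = g = 0.9                      # constant controls in [0, 1], gamma >= 1/2
            varpi = (1 - a) * nu + a * op(nu)
            wp = op((1 - e) * varpi + e * op(varpi))
            im = op((1 - g) * wp + g * op(wp))
            nu = op(im)
        return nu

    Xi = lambda nu: nu / 2
    Xi_hat = lambda nu: nu / 2 + 0.01
    zeta, zeta_hat = run(Xi, 1.0), run(Xi_hat, 1.0)
    epsilon, theta = 0.01, 0.5
    print(abs(zeta - zeta_hat), "<=", 14 * epsilon / (1 - theta))   # bound of Theorem 6.1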

    The following example supports the analytical results obtained from Theorem 3.2 and studies the performance and speed of our algorithm compared with the previous algorithms.

    Example 7.1. Let \Lambda = \mathbb{R} , \Delta = [0, 50] , and \Xi :\Delta \rightarrow \Delta be a mapping defined by

    \begin{equation*} \Xi (\nu ) = \sqrt{\nu ^{2}-9\nu +54}. \end{equation*}

    Clearly, 6.0000 is a FP of the mapping \Xi. Take \alpha _{i} = \eta _{i} = \gamma _{i} = \frac{1}{5(i+2)}, with different initial values. Then we obtain the following tables (see Tables 1–3) and graphs (see Figures 1–3) for the comparison of the various iterative methods.
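    A rough numerical comparison can be reproduced with the short Python sketch below (ours, not the paper's code); it implements schemes (1.1)–(1.5) exactly as displayed above with αi=ηi=γi=1/(5(i+2)) and counts the iterations each scheme needs to reach the FP 6 within a chosen tolerance. The tolerance and stopping rule are our own assumptions, so the counts need not coincide with Tables 1–3.

    # Rough comparison sketch of schemes (1.1)-(1.5) for Xi(nu) = sqrt(nu^2 - 9 nu + 54).
    import math

    Xi = lambda nu: math.sqrt(nu * nu - 9 * nu + 54)
    a = lambda i: 1 / (5 * (i + 2))              # alpha_i = eta_i = gamma_i

    def step_S(nu, i):                            # scheme (1.1)
        varpi = (1 - a(i)) * nu + a(i) * Xi(nu)
        return (1 - a(i)) * nu + a(i) * Xi(varpi)

    def step_picard_S(nu, i):                     # scheme (1.2)
        varpi = (1 - a(i)) * nu + a(i) * Xi(nu)
        wp = (1 - a(i)) * nu + a(i) * Xi(varpi)
        return Xi(wp)

    def step_thakur(nu, i):                       # scheme (1.3)
        varpi = (1 - a(i)) * nu + a(i) * Xi(nu)
        return Xi(Xi((1 - a(i)) * nu + a(i) * varpi))

    def step_Kstar(nu, i):                        # scheme (1.4)
        varpi = (1 - a(i)) * nu + a(i) * Xi(nu)
        return Xi(Xi((1 - a(i)) * varpi + a(i) * Xi(varpi)))

    def step_HR(nu, i):                           # scheme (1.5)
        varpi = (1 - a(i)) * nu + a(i) * Xi(nu)
        wp = Xi((1 - a(i)) * varpi + a(i) * Xi(varpi))
        im = Xi((1 - a(i)) * wp + a(i) * Xi(wp))
        return Xi(im)

    for name, step in [("S", step_S), ("Picard-S", step_picard_S),
                       ("Thakur", step_thakur), ("K*", step_Kstar), ("HR", step_HR)]:
        nu, it = 1.0, 0
        while abs(nu - 6.0) > 1e-12 and it < 100:
            nu, it = step(nu, it + 1), it + 1
        print(f"{name}: {it} iterations, nu = {nu:.12f}")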

    Table 1.  Example 7.1: Numerical effectiveness comparison of the proposed algorithm ( HR algorithm) when \nu _{\circ} = 1 .
    Iter (n) S algorithm Picard- S algorithm Thakur algorithm K^{\ast } -algorithm HR algorithm
    1 8.70091704981746 7.16921920849454 7.16914772374443 6.36059443309194 6.00727524833131
    2 7.16526232112099 6.10977177957558 6.10975923118057 6.01468466644452 6.00002780412529
    3 6.39088232469395 6.00714403607375 6.00714316980988 6.00065450058063 6.00000018328059
    4 6.10931564922512 6.00044721072023 6.00044715630748 6.00003168526056 6.00000000153606
    5 6.02823416956883 6.00002793225376 6.00002792885454 6.00000161316022 6.00000000001474
    6 6.00711636371271 6.00000174471605 6.00000174450373 6.00000008491151 6.00000000000015
    7 6.00178220920879 6.00000010899371 6.00000010898045 6.00000000457683
    8 6.00044563525887 6.00000000680959 6.00000000680876 6.00000000025113
    9 6.00011139089900 6.00000000042547 6.00000000042542 6.00000000001397
    10 6.00002784178933 6.00000000002659 6.00000000002658 6.00000000000079
    11 6.00000695905778 6.00000000000166 6.00000000000166
    12 6.00000173945939
    13 6.00000043479852
    14 6.00000010868515
    15 6.00000002716811
    16 6.00000000679132
    17 6.00000000169767
    18 6.00000000042438
    19 6.00000000010609
    20 6.00000000002652
    Note: CPU time in seconds, respectively: 0.0036537, 0.0067923, 0.0067674, 0.0067261, 0.0066002.

    Table 2.  Example 7.1: Numerical effectiveness comparison of the proposed algorithm ( HR algorithm) when \nu _{\circ} = 23 .
    Iter (n) S algorithm Picard- S algorithm Thakur algorithm K^{\ast } -algorithm HR algorithm
    1 19.3563152555029 15.9518056335603 15.9517798586745 12.5877267284391 6.85064626172682
    2 15.9377280808459 10.1547429779848 10.1547063746371 7.21310921795745 6.00404110670722
    3 12.8216267389821 6.83637415013381 6.83635232023217 6.07773436689889 6.00002666859578
    4 10.1453665006161 6.07061076131832 6.07060799598213 6.00385998151440 6.00000022350839
    5 8.09904894091384 6.00453182122915 6.00453163699128 6.00019677562843 6.00000000214472
    6 6.83342864338475 6.00028356649349 6.00028355493982 6.00001035833149 6.00000000002245
    7 6.26044908931579 6.00001771655983 6.00001771583789 6.00000055832830 6.00000000000025
    8 6.07032543085564 6.00000110688254 6.00000110683744 6.00000003063582
    9 6.01796113338512 6.00000006915944 6.00000006915662 6.00000000170455
    10 6.00451434416413 6.00000000432139 6.00000000432122 6.00000000009591
    11 6.00112994220417 6.00000000027003 6.00000000027002 6.00000000000545
    12 6.00028253511929 6.00000000001687 6.00000000001687
    13 6.00007062920329 6.00000000000105 6.00000000000105
    14 6.00001765533615
    15 6.00000441334114
    16 6.00000110322227
    17 6.00000027578013
    18 6.00000006893931
    19 6.00000001723354
    20 6.00000000430809
    21 6.00000000107696
    22 6.00000000026923
    23 6.00000000006730
    24 6.00000000001683
    Note: CPU time in seconds, respectively: 0.0088935, 0.0065673, 0.0062978, 0.0054715, 0.0052041

    Table 3.  Example 7.1: Numerical effectiveness comparison of the proposed algorithm ( HR algorithm) when \nu _{\circ} = 41 .
    Iter (n) S algorithm Picard- S algorithm Thakur algorithm K^{\ast } -algorithm HR algorithm
    1 36.9195393902310 32.9359459295575 32.9359410372494 28.6777050430159 18.0010100671820
    2 32.9185209048623 25.1854713159820 25.1854637821918 19.1184050275537 6.99692674147933
    3 28.9966652255498 17.9433658908193 17.9433563898048 11.5396125686070 6.00870649597241
    4 25.1701649172411 11.6863804499820 11.6863697408416 7.09098245036886 6.00007316294814
    5 21.4670877470757 7.49707008913586 7.49706191231517 6.07877285288080 6.00000070206652
    6 17.9313757495734 6.15612940775542 6.15612775011077 6.00426075582864 6.00000000734890
    7 14.6320375697776 6.01035659987680 6.01035647945727 6.00023000499555 6.00000000008173
    8 11.6781275774842 6.00064966715110 6.00064965955411 6.00001262154899 6.00000000000095
    9 9.23371552492475 6.00004060231534 6.00004060184040 6.00000070225384
    10 7.49350575600870 6.00000253705577 6.00000253702609 6.00000003951159
    11 6.53524954796329 6.00000015853311 6.00000015853125 6.00000000224357
    12 6.15563769964487 6.00000000990656 6.00000000990645 6.00000000012838
    13 6.04078294734902 6.00000000061907 6.00000000061906 6.00000000000739
    14 6.01032406840519 6.00000000003869 6.00000000003869 6.00000000000043
    15 6.00258903649963 6.00000000000242 6.00000000000242
    16 6.00064771546926
    17 6.00016194664418
    18 6.00004048534517
    19 6.00001012070523
    20 6.00000253001219
    21 6.00000063246434
    22 6.00000015810715
    23 6.00000003952473
    24 6.00000000988071
    25 6.00000000247007
    26 6.00000000061749
    27 6.00000000015437
    28 6.00000000003859
    29 6.00000000000965
    Note: CPU time in seconds, respectively: 0.0070634, 0.0068518, 0.0076826, 0.0067694, 0.006222

    Figure 1.  Graphical comparison of the proposed algorithm ( HR algorithm) when \nu _{\circ} = 1 .
    Figure 2.  Graphical comparison of the proposed algorithm ( HR algorithm) when \nu _{\circ} = 23 .
    Figure 3.  Graphical comparison of the proposed algorithm ( HR algorithm) when \nu _{\circ} = 41 .

    In the next example, we consider a mapping \Xi that is a SGNM but not nonexpansive, and we show, under certain control conditions, that our algorithm (1.5) outperforms some of the leading iterative methods in the previous literature in terms of convergence speed.

    Example 7.2. Consider a mapping \Xi :[0, 1]\rightarrow \lbrack 0, 1] described by

    \begin{equation*} \Xi \left( \nu \right) = \left\{ \begin{array}{cc} 1-\nu , &\;{ {if }}\;\nu \in \lbrack 0,\frac{1}{14}), \\ \frac{\nu +13}{14}, &\;{ {if }}\;\nu \in \lbrack \frac{1}{14},1]. \end{array} \right. \end{equation*}

    Now, we illustrate that \Xi is a SGNM but not nonexpansive. Set \nu = \frac{7}{100} and \varpi = \frac{1}{14}, we get

    \begin{eqnarray*} \left\Vert \Xi \nu -\Xi \varpi \right\Vert = \left\vert \Xi \nu -\Xi \varpi \right\vert = \left\vert 1-\nu -\left( \frac{\varpi +13}{14}\right) \right\vert = \left\vert \frac{93}{100}-\frac{183}{196}\right\vert = \frac{9}{2450}, \end{eqnarray*}

    and

    \begin{equation*} \left\Vert \nu -\varpi \right\Vert = \left\vert \nu -\varpi \right\vert = \frac{1}{700}. \end{equation*}

    It follows that \left\Vert \Xi \nu -\Xi \varpi \right\Vert = \frac{9}{2450} > \frac{1}{700} = \left\Vert \nu -\varpi \right\Vert. Thus \Xi is not a nonexpansive mapping. To prove that \Xi is a SGNM, we consider the cases below:

    (1) If \nu \in \lbrack 0, \frac{1}{14}), then

    \begin{equation*} \frac{1}{2}\left\Vert \nu -\Xi \nu \right\Vert = \frac{1}{2}\left\vert \nu -\left( 1-\nu \right) \right\vert = \frac{1-2\nu }{2}\in (\frac{3}{7},\frac{1 }{2}]. \end{equation*}

    For \frac{1}{2}\left\Vert \nu -\Xi \nu \right\Vert \leq \left\Vert \nu -\varpi \right\Vert, we must obtain \frac{1-2\nu }{2}\leq \left\vert \nu -\varpi \right\vert. Clearly \varpi < \nu impossible. So, we must take \varpi > \nu. Thus, \frac{1-2\nu }{2}\leq \varpi -\nu, which yields \varpi \geq \frac{1}{2} and hence \varpi \in \lbrack \frac{1}{2}, 1]. Now,

    \begin{equation*} \left\Vert \Xi \nu -\Xi \varpi \right\Vert = \left\vert \frac{\varpi +13}{14} -\left( 1-\nu \right) \right\vert = \left\vert \frac{\varpi +14\nu -1}{14} \right\vert < \frac{1}{14}, \end{equation*}

    and

    \begin{equation*} \left\Vert \nu -\varpi \right\Vert = \varpi -\nu > \frac{1}{2}-\frac{1}{14} = \frac{3}{7}. \end{equation*}

    Hence

    \begin{equation*} \frac{1}{2}\left\Vert \nu -\Xi \nu \right\Vert \leq \left\Vert \nu -\varpi \right\Vert \;{{ implies }}\;\left\Vert \Xi \nu -\Xi \varpi \right\Vert < \frac{1}{14} < \frac{3}{7} < \left\Vert \nu -\varpi \right\Vert . \end{equation*}

    (2) If \nu \in \lbrack \frac{1}{14}, 1], then

    \begin{equation*} \frac{1}{2}\left\Vert \nu -\Xi \nu \right\Vert = \frac{1}{2}\left\vert \frac{ \nu +13}{14}-\nu \right\vert = \frac{13-13\nu }{28}\in \lbrack 0,\frac{169}{ 392}]. \end{equation*}

    For \frac{1}{2}\left\Vert \nu -\Xi \nu \right\Vert \leq \left\Vert \nu -\varpi \right\Vert, we have \frac{13-13\nu }{28}\leq \left\vert \nu -\varpi \right\vert, which leads to the following possibilities:

    (i) When \nu < \varpi, we have

    \begin{equation*} \frac{13-13\nu }{28}\leq \varpi -\nu \Longrightarrow \varpi \geq \frac{ 13+15\nu }{28}\Longrightarrow \varpi \in \lbrack \frac{197}{392},1]\subset \lbrack \frac{1}{14},1]. \end{equation*}

    So

    \begin{equation*} \left\Vert \Xi \nu -\Xi \varpi \right\Vert = \left\vert \frac{\nu +13}{14}- \frac{\varpi +13}{14}\right\vert = \frac{1}{14}\left\vert \nu -\varpi \right\vert \leq \left\vert \nu -\varpi \right\vert . \end{equation*}

    Hence

    \begin{equation*} \frac{1}{2}\left\Vert \nu -\Xi \nu \right\Vert \leq \left\Vert \nu -\varpi \right\Vert \Rightarrow \left\Vert \Xi \nu -\Xi \varpi \right\Vert \leq \left\Vert \nu -\varpi \right\Vert . \end{equation*}

    (ii) When \nu > \varpi, we get

    \begin{equation*} \frac{13-13\nu }{28}\leq \nu -\varpi \Longrightarrow \varpi \leq \frac{41\nu -13}{28}\Longrightarrow \varpi \in \lbrack \frac{-141}{392},1]. \end{equation*}

    Because \varpi \in \lbrack 0, 1] and \varpi \leq \frac{41\nu -13}{28}, we can write \nu \geq \frac{28\varpi +13}{41}\Rightarrow \nu \in \lbrack \frac{ 13}{41}, 1].

    It should be noted that the case of \nu \in \lbrack \frac{13}{41}, 1] and \varpi \in \lbrack \frac{1}{14}, 1] is similar to case (i), so, we will discuss when \nu \in \lbrack \frac{13}{41}, 1] and \varpi \in \lbrack 0, \frac{1}{14}). So, we have

    \begin{equation*} \left\Vert \Xi \nu -\Xi \varpi \right\Vert = \left\vert \frac{\nu +13}{14} -\left( 1-\varpi \right) \right\vert = \left\vert \frac{\nu +14\varpi -1}{14} \right\vert < \frac{1}{14}, \end{equation*}

    and \left\Vert \nu -\varpi \right\Vert = \left\vert \nu -\varpi \right\vert > \left\vert \frac{13}{41}-\frac{1}{14}\right\vert = \frac{141}{574} > \frac{1}{ 14}. Thus \frac{1}{2}\left\Vert \nu -\Xi \nu \right\Vert \leq \left\Vert \nu -\varpi \right\Vert \Rightarrow \left\Vert \Xi \nu -\Xi \varpi \right\Vert \leq \left\Vert \nu -\varpi \right\Vert. Hence \Xi is SGNM.

    Now, we will discuss the behavior of the iterative scheme (1.5) and illustrate that it is faster than the S, Picard-S, Thakur and K^{\ast } iteration procedures, using the control sequences \alpha _{i} = \eta _{i} = \gamma _{i} = \frac{i}{(i+1)}.
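    As a quick experiment (ours, not from the paper), the sketch below runs scheme (1.5) for this SGNM with αi=ηi=γi=i/(i+1) from ν0=0.30; the printed iterates approach the FP 1, in line with Tables 4 and 5.

    # Sketch of scheme (1.5) for the SGNM of Example 7.2.
    def Xi(nu):
        return 1 - nu if nu < 1 / 14 else (nu + 13) / 14

    def hr_step(nu, i):
        t = i / (i + 1)                       # common control value alpha_i = eta_i = gamma_i
        varpi = (1 - t) * nu + t * Xi(nu)
        wp = Xi((1 - t) * varpi + t * Xi(varpi))
        im = Xi((1 - t) * wp + t * Xi(wp))
        return Xi(im)

    nu = 0.30
    for i in range(1, 6):
        nu = hr_step(nu, i)
        print(i, nu)                          # iterates approach the FP 1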

    Remark 7.1. The effectiveness of an iterative method is measured by two main factors: the computation time and the number of iterations. Obtaining strong convergence in a short time with as few iterations as possible saves effort in many problems of optimization and variational inequalities. Based on the tables (see Tables 4 and 5) and figures (see Figures 4 and 5), it is clear that our method is successful and that the behavior of our algorithm is satisfactory compared with some leading iterations in this direction.

    Table 4.  Example 7.2: Numerical effectiveness comparison of proposed algorithm ( HR algorithm) when \nu _{\circ} = 0.30 .
    Iter (n) S algorithm Picard- S algorithm Thakur algorithm K^{\ast } -algorithm HR algorithm
    1 0.918925619834711 0.992629601803156 0.992629601803156 0.999385287890171 0.999999491973463
    2 0.992659381189810 0.999939333728841 0.999939333728841 0.999997438874483 0.999999999938058
    3 0.999334187674034 0.999999499765345 0.999999499765345 0.999999986205653 0.999999999999984
    4 0.999939559648030 0.999999995871843 0.999999995871843 0.999999999916912
    5 0.999994510972627 0.999999999965918 0.999999999965918 0.999999999999467
    6 0.999999501367829 0.999999999999719 0.999999999999719
    7 0.999999954695558
    8 0.999999995883263
    9 0.999999999625887
    10 0.999999999966000
    11 0.999999999996910
    Note: CPU time in seconds, respectively: 0.0021438, 0.0086125, 0.0079631, 0.0063799, 0.0054917.

    Table 5.  Example 7.2: Numerical effectiveness comparison of proposed algorithm ( HR algorithm) when \nu _{\circ} = 0.80 .
    Iter (n) S algorithm Picard- S algorithm Thakur algorithm K^{\ast } -algorithm HR algorithm
    1 0.981983471074380 0.998362133734035 0.998362133734035 0.999863397308927 0.999999887105214
    2 0.998368751375513 0.999986518606409 0.999986518606409 0.999999430860996 0.999999999986235
    3 0.999852041705341 0.999999888836743 0.999999888836743 0.999999996934589 0.999999999999996
    4 0.999986568810673 0.999999999082632 0.999999999082632 0.999999999981536
    5 0.999998780216139 0.999999999992426 0.999999999992426 0.999999999999881
    6 0.999999889192851 0.999999999999938 0.999999999999938
    7 0.999999989932346
    8 0.999999999085170
    9 0.999999999916864
    10 0.999999999992444
    Note: CPU time in seconds, respectively: 0.0053331, 0.0078199, 0.0048301, 0.005278, 0.0001471.

    Figure 4.  Graphical comparison of the proposed algorithm ( HR algorithm) when \nu _{\circ} = 0.30 .
    Figure 5.  Graphical comparison of the proposed algorithm ( HR algorithm) when \nu _{\circ} = 0.80 .

    In this part, we apply our algorithm (1.5) to solve the Volterra-Fredholm integral equation suggested by Lungu and Rus [23].

    Consider the following problem:

    \begin{equation} \xi (\nu ,\varpi ) = \aleph (\nu ,\varpi ,z(\xi (\nu ,\varpi )))+\int\limits_{0}^{\nu }\int\limits_{0}^{\varpi }\mho \left( \nu ,\varpi ,\kappa ^{\ast },\tau ^{\ast },\xi (\kappa ^{\ast },\tau ^{\ast })\right) d\kappa ^{\ast }d\tau ^{\ast }, \end{equation} (8.1)

    for all \nu, \varpi \in \mathbb{R} _{+}. Assume that \left(\Gamma, \left\vert.\right\vert \right) is a BS, s > 0 and

    \begin{equation*} \chi _{s} = \left\{ \xi \in C\left( \mathbb{R} _{+}^{2},\Gamma \right) :{\rm{ there \;is }}\;U(\xi ) > 0\;{\rm{ so\; that }}\; \left\vert \xi (\nu ,\varpi )\right\vert e^{-s(\nu +\varpi )}\leq U(\xi )\right\} . \end{equation*}

    Define the norm on \chi _{s} as follows:

    \begin{equation*} \left\Vert \xi \right\Vert _{s} = \sup\limits_{\nu ,\varpi \in \mathbb{R} _{+}}\left( \left\vert \xi (\nu ,\varpi )\right\vert e^{-s(\nu +\varpi )}\right) . \end{equation*}

    It follows from the paper [33] that \left(\chi _{s}, \left\Vert \xi \right\Vert _{s}\right) is a BS.

    The following theorem helps us to prove our main result in this section.

    Theorem 8.1. [23] Assume that the postulates below are satisfied:

    (P_{i}) \aleph \in C\left(\mathbb{R} _{+}^{2}\times \Gamma, \Gamma \right) and \mho \in C\left(\mathbb{R} _{+}^{4}\times \Gamma, \Gamma \right);

    (P_{ii}) There are z:\chi _{s}\rightarrow \chi _{s} and \pi _{z} > 0 so that

    \begin{equation*} \left\vert z(\xi (\nu ,\varpi ))-z(\xi ^{\ast }(\nu ,\varpi ))\right\vert \leq \pi _{z}\left\Vert \xi -\xi ^{\ast }\right\Vert e^{s(\nu +\varpi )}, \end{equation*}

    for all \nu, \varpi \in \mathbb{R} _{+} and \xi, \xi ^{\ast }\in \chi _{s};

    (P_{iii}) For all \nu, \varpi \in \mathbb{R} _{+} and c, c^{\ast }\in \Gamma, there is \pi _{\aleph } > 0 so that

    \begin{equation*} \left\vert \aleph (\nu ,\varpi ,c)-\aleph (\nu ,\varpi ,c^{\ast })\right\vert \leq \pi _{\aleph }\left\vert c-c^{\ast }\right\vert ; \end{equation*}

    (P_{iv}) For all \nu, \varpi, \kappa ^{\ast }, \tau ^{\ast }\in \mathbb{R} _{+} and c, c^{\ast }\in \Gamma, there is \pi _{\mho }\left(\nu, \varpi, \kappa ^{\ast }, \tau ^{\ast }\right) > 0 so that

    \begin{equation*} \left\vert \mho \left( \nu ,\varpi ,\kappa ^{\ast },\tau ^{\ast },c\right) -\mho \left( \nu ,\varpi ,\kappa ^{\ast },\tau ^{\ast },c^{\ast }\right) \right\vert \leq \pi _{\mho }\left( \nu ,\varpi ,\kappa ^{\ast },\tau ^{\ast }\right) \left\vert c-c^{\ast }\right\vert ; \end{equation*}

    (P_{v}) \pi _{\mho }\in C\left(\mathbb{R} _{+}^{4}, \mathbb{R} _{+}\right) and

    \begin{equation*} \int\limits_{0}^{\nu }\int\limits_{0}^{\varpi }\pi _{\mho }\left( \nu ,\varpi ,\kappa ^{\ast },\tau ^{\ast }\right) e^{s(\kappa ^{\ast }+\tau ^{\ast })}d\kappa ^{\ast }d\tau ^{\ast }\leq \pi e^{s(\nu +\varpi )}, \end{equation*}

    for all \nu, \varpi \in \mathbb{R} _{+};

    (P_{vi}) \pi _{z}\pi _{\aleph }+\pi < 1.

    Then the problem (8.1) has a unique solution \zeta \in \chi _{s} and the iterative sequence

    \begin{equation*} \xi _{i+1}(\nu ,\varpi ) = \aleph (\nu ,\varpi ,z(\xi _{i}(\nu ,\varpi )))+\int\limits_{0}^{\nu }\int\limits_{0}^{\varpi }\mho \left( \nu ,\varpi ,\kappa ^{\ast },\tau ^{\ast },\xi _{i}(\kappa ^{\ast },\tau ^{\ast })\right) d\kappa ^{\ast }d\tau ^{\ast }, \end{equation*}

    for all i\geq 1 converges uniformly to \zeta.
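    To make the successive-approximation scheme of Theorem 8.1 concrete, here is a small Python sketch (entirely ours) on a truncated domain [0,T]×[0,T] with a crude Riemann-sum quadrature; the functions aleph, z and mho below are hypothetical choices satisfying π_zπ_ℵ+π<1 and are not taken from the paper.

    # Illustrative successive approximation for xi_{i+1} = aleph(.,.,z(xi_i)) + double integral of mho.
    T, N = 1.0, 20
    h = T / N
    grid = [k * h for k in range(N + 1)]

    def aleph(nu, varpi, c):            # hypothetical outer function, Lipschitz constant 1/4 in c
        return nu + varpi + c / 4

    def z(value):                       # hypothetical operator z with pi_z = 1
        return value

    def mho(nu, varpi, kappa, tau, c):  # hypothetical kernel, Lipschitz constant 1/8 in c
        return c / 8

    def step(xi):
        """One Picard step of the scheme in Theorem 8.1 on the grid."""
        new = [[0.0] * (N + 1) for _ in range(N + 1)]
        for a, nu in enumerate(grid):
            for b, varpi in enumerate(grid):
                integral = sum(mho(nu, varpi, grid[p], grid[q], xi[p][q]) * h * h
                               for p in range(a) for q in range(b))   # Riemann sum on [0,nu]x[0,varpi]
                new[a][b] = aleph(nu, varpi, z(xi[a][b])) + integral
        return new

    xi = [[0.0] * (N + 1) for _ in range(N + 1)]
    for it in range(10):
        xi_new = step(xi)
        diff = max(abs(xi_new[a][b] - xi[a][b]) for a in range(N + 1) for b in range(N + 1))
        xi = xi_new
    print(round(diff, 12))              # successive differences shrink, indicating convergence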

    Now, after the above hypotheses, we can present our main theorem as follows:

    Theorem 8.2. Let \{\nu _{i}\} be the iterative sequence generated by (1.5) (with \Xi replaced by the operator H defined below) with sequences \{\alpha _{i}\}, \{\eta _{i}\}, \{\gamma _{i}\}\in \lbrack 0, 1] such that \sum\limits_{i = 0}^{\infty }\gamma _{i} = \infty . If the postulates (P_{i})-(P_{vi}) of Theorem 8.1 hold, then the problem (8.1) has a unique solution \zeta \in \chi _{s} and the proposed algorithm (1.5) converges strongly to \zeta .

    Proof. Let \{\nu _{i}\} be an iterative sequence generated by (1.5) and define the operator H:\chi _{s}\rightarrow \chi _{s} by

    \begin{equation*} H\left( \xi (\nu ,\varpi )\right) = \aleph (\nu ,\varpi ,z(\xi (\nu ,\varpi )))+\int\limits_{0}^{\nu }\int\limits_{0}^{\varpi }\mho \left( \nu ,\varpi ,\kappa ^{\ast },\tau ^{\ast },\xi (\kappa ^{\ast },\tau ^{\ast })\right) d\kappa ^{\ast }d\tau ^{\ast }. \end{equation*}

    We shall prove that \lim\limits_{i\rightarrow \infty }\nu _{i} = \zeta . Based on (1.5) (with \Xi = H ), we get

    \begin{equation*} \left\Vert \nu _{i+1}-\zeta \right\Vert = \sup\limits_{\nu ,\varpi \in \mathbb{R} _{+}}\left( \left\vert H\left( \Im _{i}(\nu ,\varpi )\right) -H\left( \zeta (\nu ,\varpi )\right) \right\vert e^{-s(\nu +\varpi )}\right) . \end{equation*}

    Now,

    \begin{eqnarray*} &&\left\vert H\left( \Im _{i}(\nu ,\varpi )\right) -H\left( \zeta (\nu ,\varpi )\right) \right\vert \\ &\leq &\left\vert \aleph (\nu ,\varpi ,z(\Im _{i}(\nu ,\varpi )))-\aleph (\nu ,\varpi ,z(\zeta (\nu ,\varpi ))\right\vert \\ &&+\left\vert \int\limits_{0}^{\nu }\int\limits_{0}^{\varpi }\mho \left( \nu ,\varpi ,\kappa ^{\ast },\tau ^{\ast },\Im _{i}(\kappa ^{\ast },\tau ^{\ast })\right) d\kappa ^{\ast }d\tau ^{\ast }-\int\limits_{0}^{\nu }\int\limits_{0}^{\varpi }\mho \left( \nu ,\varpi ,\kappa ^{\ast },\tau ^{\ast },\zeta (\kappa ^{\ast },\tau ^{\ast })\right) d\kappa ^{\ast }d\tau ^{\ast }\right\vert \\ &\leq &\pi _{\aleph }\left\vert z(\Im _{i}(\nu ,\varpi ))-z(\zeta (\nu ,\varpi ))\right\vert \\ &&+\int\limits_{0}^{\nu }\int\limits_{0}^{\varpi }\left\vert \mho \left( \nu ,\varpi ,\kappa ^{\ast },\tau ^{\ast },\Im _{i}(\kappa ^{\ast },\tau ^{\ast })\right) -\mho \left( \nu ,\varpi ,\kappa ^{\ast },\tau ^{\ast },\zeta (\kappa ^{\ast },\tau ^{\ast })\right) \right\vert d\kappa ^{\ast }d\tau ^{\ast } \\ &\leq &\pi _{\aleph }\pi _{z}\left\Vert \Im _{i}-\zeta \right\Vert _{s}e^{s(\nu +\varpi )}+\int\limits_{0}^{\nu }\int\limits_{0}^{\varpi }\pi _{\mho }\left( \nu ,\varpi ,\kappa ^{\ast },\tau ^{\ast }\right) \left\vert \Im _{i}(\kappa ^{\ast },\tau ^{\ast })-\zeta (\kappa ^{\ast },\tau ^{\ast })\right\vert d\kappa ^{\ast }d\tau ^{\ast } \\ &\leq &\pi _{\aleph }\pi _{z}\left\Vert \Im _{i}-\zeta \right\Vert _{s}e^{s(\nu +\varpi )}+\pi \left\Vert \Im _{i}-\zeta \right\Vert _{s}e^{s(\nu +\varpi )} \\ & = &\left( \pi _{\aleph }\pi _{z}+\pi \right) \left\Vert \Im _{i}-\zeta \right\Vert _{s}e^{s(\nu +\varpi )}. \end{eqnarray*}

    Thus,

    \begin{equation} \left\Vert \nu _{i+1}-\zeta \right\Vert _{s}\leq \left( \pi _{\aleph }\pi _{z}+\pi \right) \left\Vert \Im _{i}-\zeta \right\Vert _{s}. \end{equation} (8.2)

    Again

    \begin{equation*} \left\Vert \Im _{i}-\zeta \right\Vert _{s} = \sup\limits_{\nu ,\varpi \in \mathbb{R} _{+}}\left( \left\vert H\left( (1-\gamma _{i})\wp _{i}+\gamma _{i}H\wp _{i}\right) \left( \nu ,\varpi \right) -H\left( \zeta (\nu ,\varpi )\right) \right\vert e^{-s(\nu +\varpi )}\right) , \end{equation*}

    and

    \begin{eqnarray*} &&\left\vert H\left( (1-\gamma _{i})\wp _{i}+\gamma _{i}H\wp _{i}\right) \left( \nu ,\varpi \right) -H\left( \zeta (\nu ,\varpi )\right) \right\vert \\ &\leq &\left\vert \aleph \left( \nu ,\varpi ,z\left( (1-\gamma _{i})\wp _{i}+\gamma _{i}H\wp _{i})\left( \nu ,\varpi \right) \right) \right) -\aleph (\nu ,\varpi ,z(\zeta (\nu ,\varpi ))\right\vert \\ &&+\left\vert \int\limits_{0}^{\nu }\int\limits_{0}^{\varpi }\mho \left( \nu ,\varpi ,\kappa ^{\ast },\tau ^{\ast },\left( (1-\gamma _{i})\wp _{i}+\gamma _{i}H\wp _{i}\right) \left( \kappa ^{\ast },\tau ^{\ast }\right) \right) d\kappa ^{\ast }d\tau ^{\ast }\right. \\ &&-\left. \int\limits_{0}^{\nu }\int\limits_{0}^{\varpi }\mho \left( \nu ,\varpi ,\kappa ^{\ast },\tau ^{\ast },\zeta (\kappa ^{\ast },\tau ^{\ast })\right) d\kappa ^{\ast }d\tau ^{\ast }\right\vert \\ &\leq &\pi _{\aleph }\left\vert z\left( \left( (1-\gamma _{i})\wp _{i}+\gamma _{i}H\wp _{i}\right) \left( \kappa ^{\ast },\tau ^{\ast }\right) \right) -z(\zeta (\nu ,\varpi )\right\vert \\ &&+\int\limits_{0}^{\nu }\int\limits_{0}^{\varpi }\left\vert \mho \left( \nu ,\varpi ,\kappa ^{\ast },\tau ^{\ast },\left( (1-\gamma _{i})\wp _{i}+\gamma _{i}H\wp _{i}\right) \left( \kappa ^{\ast },\tau ^{\ast }\right) \right) \right. \\ &&\left. -\mho \left( \nu ,\varpi ,\kappa ^{\ast },\tau ^{\ast },\zeta (\kappa ^{\ast },\tau ^{\ast })\right) \right\vert d\kappa ^{\ast }d\tau ^{\ast } \\ &\leq &\pi _{\aleph }\pi _{z}\left\Vert (1-\gamma _{i})\wp _{i}+\gamma _{i}H\wp _{i}-\zeta \right\Vert _{s}e^{s(\nu +\varpi )} \\ &&+\pi \left\Vert (1-\gamma _{i})\wp _{i}+\gamma _{i}H\wp _{i}-\zeta \right\Vert _{s}e^{s(\nu +\varpi )} \\ & = &\left( \pi _{\aleph }\pi _{z}+\pi \right) \left\Vert (1-\gamma _{i})\wp _{i}+\gamma _{i}H\wp _{i}-\zeta \right\Vert _{s}e^{s(\nu +\varpi )}. \end{eqnarray*}

    Thus

    \begin{equation} \left\Vert \Im _{i}-\zeta \right\Vert _{s}\leq \left( \pi _{\aleph }\pi _{z}+\pi \right) \left\Vert (1-\gamma _{i})\wp _{i}+\gamma _{i}H\wp _{i}-\zeta \right\Vert _{s}. \end{equation} (8.3)

    Similarly, one can write

    \begin{equation} \left\Vert \wp _{i}-\zeta \right\Vert _{s}\leq \left( \pi _{\aleph }\pi _{z}+\pi \right) \left\Vert (1-\eta _{i})\varpi _{i}+\eta _{i}H\varpi _{i}-\zeta \right\Vert _{s}. \end{equation} (8.4)

    Note that

    \begin{eqnarray} \left\Vert (1-\eta _{i})\varpi _{i}+\eta _{i}H\varpi _{i}-\zeta \right\Vert _{s} & = &\left\Vert \left( (1-\eta _{i})\varpi _{i}-\zeta \right) +\eta _{i}\left( H\varpi _{i}-\zeta \right) \right\Vert _{s} \\ &\leq &(1-\eta _{i})\left\Vert \varpi _{i}-\zeta \right\Vert _{s}+\eta _{i}\left\Vert H\varpi _{i}-\zeta \right\Vert _{s}. \end{eqnarray} (8.5)

    Now

    \begin{equation*} \left\Vert H\varpi _{i}-\zeta \right\Vert _{s} = \sup\limits_{\nu ,\varpi \in \mathbb{R} _{+}}\left( \left\vert H\left( \varpi _{i}\left( \nu ,\varpi \right) \right) -H\left( \zeta (\nu ,\varpi )\right) \right\vert e^{-s(\nu +\varpi )}\right) , \end{equation*}

    and

    \begin{eqnarray*} &&\left\vert H\left( \varpi _{i}(\nu ,\varpi )\right) -H\left( \zeta (\nu ,\varpi )\right) \right\vert \\ &\leq &\left\vert \aleph (\nu ,\varpi ,z(\varpi _{i}(\nu ,\varpi )))-\aleph (\nu ,\varpi ,z(\zeta (\nu ,\varpi )))\right\vert \\ &&+\left\vert \int\limits_{0}^{\nu }\int\limits_{0}^{\varpi }\mho \left( \nu ,\varpi ,\kappa ^{\ast },\tau ^{\ast },\varpi _{i}(\kappa ^{\ast },\tau ^{\ast })\right) d\kappa ^{\ast }d\tau ^{\ast }-\int\limits_{0}^{\nu }\int\limits_{0}^{\varpi }\mho \left( \nu ,\varpi ,\kappa ^{\ast },\tau ^{\ast },\zeta (\kappa ^{\ast },\tau ^{\ast })\right) d\kappa ^{\ast }d\tau ^{\ast }\right\vert \\ &\leq &\pi _{\aleph }\left\vert z(\varpi _{i}(\nu ,\varpi ))-z(\zeta (\nu ,\varpi ))\right\vert \\ &&+\int\limits_{0}^{\nu }\int\limits_{0}^{\varpi }\left\vert \mho \left( \nu ,\varpi ,\kappa ^{\ast },\tau ^{\ast },\varpi _{i}(\kappa ^{\ast },\tau ^{\ast })\right) -\mho \left( \nu ,\varpi ,\kappa ^{\ast },\tau ^{\ast },\zeta (\kappa ^{\ast },\tau ^{\ast })\right) \right\vert d\kappa ^{\ast }d\tau ^{\ast } \\ &\leq &\pi _{\aleph }\pi _{z}\left\Vert \varpi _{i}-\zeta \right\Vert _{s}e^{s(\nu +\varpi )}+\int\limits_{0}^{\nu }\int\limits_{0}^{\varpi }\pi _{\mho }\left( \nu ,\varpi ,\kappa ^{\ast },\tau ^{\ast }\right) \left\vert \varpi _{i}(\kappa ^{\ast },\tau ^{\ast })-\zeta (\kappa ^{\ast },\tau ^{\ast })\right\vert d\kappa ^{\ast }d\tau ^{\ast } \\ &\leq &\pi _{\aleph }\pi _{z}\left\Vert \varpi _{i}-\zeta \right\Vert _{s}e^{s(\nu +\varpi )}+\pi \left\Vert \varpi _{i}-\zeta \right\Vert _{s}e^{s(\nu +\varpi )} \\ & = &\left( \pi _{\aleph }\pi _{z}+\pi \right) \left\Vert \varpi _{i}-\zeta \right\Vert _{s}e^{s(\nu +\varpi )}. \end{eqnarray*}

    Thus

    \begin{equation} \left\Vert H\varpi _{i}-\zeta \right\Vert _{s}\leq \left( \pi _{\aleph }\pi _{z}+\pi \right) \left\Vert \varpi _{i}-\zeta \right\Vert _{s}. \end{equation} (8.6)

    Applying (8.6) in (8.5), we have

    \begin{eqnarray} &&\left\Vert (1-\eta _{i})\varpi _{i}+\eta _{i}H\varpi _{i}-\zeta \right\Vert _{s} \\ &\leq &(1-\eta _{i})\left\Vert \varpi _{i}-\zeta \right\Vert _{s}+\eta _{i}\left( \pi _{\aleph }\pi _{z}+\pi \right) \left\Vert \varpi _{i}-\zeta \right\Vert _{s} \\ & = &\left( 1-\eta _{i}\left( 1-\left( \pi _{\aleph }\pi _{z}+\pi \right) \right) \right) \left\Vert \varpi _{i}-\zeta \right\Vert _{s}. \end{eqnarray} (8.7)

    Using (8.4) and (8.7), we get

    \begin{equation} \left\Vert \wp _{i}-\zeta \right\Vert _{s}\leq \left( \pi _{\aleph }\pi _{z}+\pi \right) \left( 1-\eta _{i}\left( 1-\left( \pi _{\aleph }\pi _{z}+\pi \right) \right) \right) \left\Vert \varpi _{i}-\zeta \right\Vert _{s}. \end{equation} (8.8)

    In the same manner, (8.3) can be written as

    \begin{equation} \left\Vert \Im _{i}-\zeta \right\Vert _{s}\leq \left( \pi _{\aleph }\pi _{z}+\pi \right) \left( 1-\gamma _{i}\left( 1-\left( \pi _{\aleph }\pi _{z}+\pi \right) \right) \right) \left\Vert \wp _{i}-\zeta \right\Vert _{s}. \end{equation} (8.9)

    Substituting (8.8) into (8.9), we find that

    \begin{eqnarray} \left\Vert \Im _{i}-\zeta \right\Vert _{s} \leq \left( \pi _{\aleph }\pi _{z}+\pi \right) ^{2}\left( 1-\gamma _{i}\left( 1-\left( \pi _{\aleph }\pi _{z}+\pi \right) \right) \right) \times \left( 1-\eta _{i}\left( 1-\left( \pi _{\aleph }\pi _{z}+\pi \right) \right) \right) \left\Vert \varpi _{i}-\zeta \right\Vert _{s}. \end{eqnarray} (8.10)

    Applying (8.10) in (8.2), we get

    \begin{eqnarray} \left\Vert \nu _{i+1}-\zeta \right\Vert _{s} \leq \left( \pi _{\aleph }\pi _{z}+\pi \right) ^{3}\left( 1-\gamma _{i}\left( 1-\left( \pi _{\aleph }\pi _{z}+\pi \right) \right) \right) \times \left( 1-\eta _{i}\left( 1-\left( \pi _{\aleph }\pi _{z}+\pi \right) \right) \right) \left\Vert \varpi _{i}-\zeta \right\Vert _{s}. \end{eqnarray} (8.11)

    Using (1.5) and arguing as for (8.6), we obtain

    \begin{eqnarray} \left\Vert \varpi _{i}-\zeta \right\Vert _{s} & = &\left\Vert (1-\alpha _{i})\nu _{i}+\alpha _{i}H\nu _{i}-\zeta \right\Vert _{s} \\ &\leq &(1-\alpha _{i})\left\Vert \nu _{i}-\zeta \right\Vert _{s}+\alpha _{i}\left\Vert H\nu _{i}-\zeta \right\Vert _{s} \\ &\leq &(1-\alpha _{i})\left\Vert \nu _{i}-\zeta \right\Vert _{s}+\alpha _{i}\left( \pi _{\aleph }\pi _{z}+\pi \right) \left\Vert \nu _{i}-\zeta \right\Vert _{s} \\ & = &\left( 1-\alpha _{i}\left( 1-\left( \pi _{\aleph }\pi _{z}+\pi \right) \right) \right) \left\Vert \nu _{i}-\zeta \right\Vert _{s}. \end{eqnarray} (8.12)

    Substituting (8.12) into (8.11), we find that

    \begin{eqnarray*} \left\Vert \nu _{i+1}-\zeta \right\Vert _{s} &\leq & \left( \pi _{\aleph }\pi _{z}+\pi \right) ^{3}\left( 1-\gamma _{i}\left( 1-\left( \pi _{\aleph }\pi _{z}+\pi \right) \right) \right) \\ & \times & \left( 1-\eta _{i}\left( 1-\left( \pi _{\aleph }\pi _{z}+\pi \right) \right) \right) \left( 1-\alpha _{i}\left( 1-\left( \pi _{\aleph }\pi _{z}+\pi \right) \right) \right) \left\Vert \nu _{i}-\zeta \right\Vert _{s}. \end{eqnarray*}

    Since \alpha _{i}, \eta _{i}\in \lbrack 0, 1] and \pi _{\aleph }\pi _{z}+\pi < 1, each of the factors \left( \pi _{\aleph }\pi _{z}+\pi \right) ^{3}, 1-\eta _{i}\left( 1-\left( \pi _{\aleph }\pi _{z}+\pi \right) \right), and 1-\alpha _{i}\left( 1-\left( \pi _{\aleph }\pi _{z}+\pi \right) \right) is at most 1, so we have

    \begin{equation*} \left\Vert \nu _{i+1}-\zeta \right\Vert _{s}\leq \left( 1-\gamma _{i}\left( 1-\left( \pi _{\aleph }\pi _{z}+\pi \right) \right) \right) \left\Vert \nu _{i}-\zeta \right\Vert _{s}, \end{equation*}

    and so, by induction, we obtain

    \begin{equation} \left\Vert \nu _{i+1}-\zeta \right\Vert _{s}\leq \left\Vert \nu _{0}-\zeta \right\Vert _{s}\prod\limits_{r = 0}^{i}\left( 1-\gamma _{r}\left( 1-\left( \pi _{\aleph }\pi _{z}+\pi \right) \right) \right) . \end{equation} (8.13)

    It follows from the postulate (P_{vi}) and \gamma _{r}\in \lbrack 0, 1] that

    \begin{equation*} 1-\gamma _{r}\left( 1-\left( \pi _{\aleph }\pi _{z}+\pi \right) \right) < 1. \end{equation*}

    It is a standard fact of classical analysis that 1-\nu \leq e^{-\nu } for every \nu \in \lbrack 0, 1]. Applying this inequality with \nu = \gamma _{r}\left( 1-\left( \pi _{\aleph }\pi _{z}+\pi \right) \right) to each factor of the product, (8.13) takes the form

    \begin{equation*} \left\Vert \nu _{i+1}-\zeta \right\Vert _{s}\leq \left\Vert \nu _{0}-\zeta \right\Vert _{s}e^{-\left( 1-\left( \pi _{\aleph }\pi _{z}+\pi \right) \right) \sum\limits_{r = 0}^{i}\gamma _{r}}, \end{equation*}

    Since, by the postulate (P_{vi}), the series \sum\limits_{r = 0}^{\infty }\gamma _{r} diverges, the right-hand side tends to zero as i\rightarrow \infty , which implies that \lim\limits_{i\rightarrow \infty }\left\Vert \nu _{i}-\zeta \right\Vert _{s} = 0. This completes the proof.

    It is well known that the efficiency and effectiveness of iterative methods are judged mainly by two factors: the speed of convergence and the number of iterations required; a method that converges faster with fewer iterations is more successful at approximating fixed points. In this article, we demonstrated analytically and numerically that our algorithm outperforms several of the leading iterative schemes in the earlier literature [4,10,11,18] in terms of convergence speed.
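    To make this comparison concrete in code, the following is a minimal sketch (not the program used to produce the figures and tables of this paper) of how a four-step scheme of the form suggested by the inequalities (8.2)–(8.4) and (8.12) can be compared with the classical Mann iteration [1] on a toy contraction. The mapping H, the constant control parameters, the tolerance, and the starting point below are illustrative assumptions only.

import math

def H(x):
    # Toy contraction on [0, 1]: |H'(x)| = |sin x| <= sin 1 < 1, so H has a
    # unique fixed point (the Dottie number, approximately 0.739085).
    return math.cos(x)

def mann(x, alpha=0.7, tol=1e-12, max_iter=10**4):
    # Classical Mann iteration [1]: x_{i+1} = (1 - alpha_i) x_i + alpha_i H(x_i),
    # here with a constant parameter alpha (an illustrative assumption).
    for i in range(max_iter):
        x_new = (1 - alpha) * x + alpha * H(x)
        if abs(x_new - x) < tol:
            return x_new, i + 1
        x = x_new
    return x, max_iter

def four_step(x, alpha=0.7, eta=0.7, gamma=0.7, tol=1e-12, max_iter=10**4):
    # Sketch of a four-step scheme in the form suggested by (8.2)-(8.4) and (8.12):
    #   w_i = (1 - alpha_i) x_i + alpha_i H(x_i)
    #   p_i = H((1 - eta_i) w_i + eta_i H(w_i))
    #   s_i = H((1 - gamma_i) p_i + gamma_i H(p_i))
    #   x_{i+1} = H(s_i)
    # Constant parameters are, again, an illustrative assumption.
    for i in range(max_iter):
        w = (1 - alpha) * x + alpha * H(x)
        p = H((1 - eta) * w + eta * H(w))
        s = H((1 - gamma) * p + gamma * H(p))
        x_new = H(s)
        if abs(x_new - x) < tol:
            return x_new, i + 1
        x = x_new
    return x, max_iter

if __name__ == "__main__":
    for name, (fp, iters) in [("Mann", mann(1.0)), ("four-step", four_step(1.0))]:
        print(f"{name:10s} fixed point ~ {fp:.12f} after {iters} iterations")

    On this toy problem the four-step sketch reaches the tolerance in far fewer outer iterations than the Mann iteration, essentially because each outer step applies H several times; the sketch is only meant to mirror, in code, the kind of iteration counts reported in the tables above.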

    Moreover, the superiority and speed of convergence, together with the stability and data-dependence results, were illustrated by comparison graphs of the computations, and the proposed approach was ultimately applied to solve a functional Volterra-Fredholm integral equation. In light of the comparison with [4,10,11,18], our method is therefore effective. Finally, we suggest the following directions for future work:

    (1) If we define a mapping \Xi on a Hilbert space \Delta endowed with an inner product \langle \cdot ,\cdot \rangle, we can approximate a solution of the variational inequality problem by using our iteration (1.5). This problem can be stated as follows: find \wp ^{\ast }\in \Delta such that

    \begin{equation*} \langle \Xi \wp ^{\ast },\wp -\wp ^{\ast }\rangle \geq 0\text{ for all }\wp \in \Delta , \end{equation*}

    where \Xi :\Delta \rightarrow \Delta is a nonlinear mapping. Variational inequalities are an essential modeling tool in many fields, such as engineering mechanics, transportation, economics, and mathematical programming; see [34,35].

    (2) We can generalize our algorithm to gradient and extragradient projection methods; these methods are of central importance for finding saddle points and for solving many optimization problems, see [36]. A minimal sketch of the extragradient step is given after this list.

    (3) We can accelerate the convergence of the proposed algorithm by adding shrinking-projection and CQ terms. These techniques speed up algorithms and improve their performance so as to obtain strong convergence; for more details, see [1,37,38].

    (4) If we consider the mapping \Xi to be \alpha -inverse strongly monotone and add an inertial term to our algorithm, then we obtain an inertial proximal point algorithm. Such algorithms are used in many applications, including monotone variational inequalities, image restoration problems, convex optimization problems, and split convex feasibility problems, see [40,41,42,43]. Many of these problems can, in turn, be formulated as mathematical models, for instance in machine learning and in linear inverse problems.

    (5) We can also use our algorithm to solve second-order differential equations and fractional differential equations, since such equations can be converted into integral equations by means of Green's functions and can then be treated with the same approach used in Section 8; a standard instance of this conversion is recalled after this list.

    (6) We can try to establish error estimates for the proposed iteration.
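    Regarding direction (2), the following is a minimal sketch of Korpelevich's extragradient method [36] for a variational inequality over a closed convex set; the operator, the box constraint, the step size, and the starting point are illustrative assumptions and are not part of the algorithm studied in this paper.

import numpy as np

def project_box(x, lo=-1.0, hi=1.0):
    # Euclidean projection onto the (assumed) constraint set C = [lo, hi]^n.
    return np.clip(x, lo, hi)

def extragradient(Xi, x0, lam=0.1, tol=1e-10, max_iter=5000):
    # Korpelevich's extragradient method [36] for the problem:
    # find x* in C with <Xi(x*), x - x*> >= 0 for all x in C.
    #   y_k     = P_C(x_k - lam * Xi(x_k))   (prediction step)
    #   x_{k+1} = P_C(x_k - lam * Xi(y_k))   (correction step)
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        y = project_box(x - lam * Xi(x))
        x_new = project_box(x - lam * Xi(y))
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

if __name__ == "__main__":
    # A monotone rotation-type operator for which the plain projection iteration
    # fails to converge, while the extragradient method converges to the solution x* = 0.
    A = np.array([[0.0, 1.0], [-1.0, 0.0]])
    sol, iters = extragradient(lambda x: A @ x, [0.8, -0.5])
    print(f"approximate solution {sol} after {iters} iterations")

    The rotation-type operator above is monotone but not strongly monotone, which is precisely the situation in which the correction step of the extragradient method pays off.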
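    Concerning direction (5), the conversion mentioned there is classical; as a textbook illustration (not an equation taken from this paper), a two-point boundary value problem can be rewritten as an integral equation via its Green's function:

    \begin{equation*} u^{\prime \prime }(\nu ) = f(\nu ,u(\nu )),\quad u(0) = u(1) = 0\quad \Longleftrightarrow \quad u(\nu ) = -\int\limits_{0}^{1}G(\nu ,\kappa ^{\ast })f(\kappa ^{\ast },u(\kappa ^{\ast }))d\kappa ^{\ast }, \end{equation*}

    where

    \begin{equation*} G(\nu ,\kappa ^{\ast }) = \begin{cases} \kappa ^{\ast }(1-\nu ), & 0\leq \kappa ^{\ast }\leq \nu , \\ \nu (1-\kappa ^{\ast }), & \nu \leq \kappa ^{\ast }\leq 1, \end{cases} \end{equation*}

    so that the right-hand side defines an integral operator whose fixed point can be approximated by the same iteration (1.5) used in Section 8.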

    M. Zayed extends her appreciation to the Deanship of Scientific Research at King Khalid University, Saudi Arabia, for supporting this work through the Research Groups Program under grant R.G.P.2/207/43.

    The authors declare that they have no conflicts of interest.



    [1] W. R. Mann, Mean value methods in iteration, Proc. Amer. Math. Soc., 4 (1953), 506–510. https://doi.org/10.1090/S0002-9939-1953-0054846-3 doi: 10.1090/S0002-9939-1953-0054846-3
    [2] S. Ishikawa, Fixed points by a new iteration method, Proc. Amer. Math. Soc., 44 (1974), 147–150. https://doi.org/10.1090/S0002-9939-1974-0336469-5 doi: 10.1090/S0002-9939-1974-0336469-5
    [3] M. A. Noor, New approximation schemes for general variational inequalities, J. Math. Anal. Appl., 251 (2000), 217–229. https://doi.org/10.1006/jmaa.2000.7042 doi: 10.1006/jmaa.2000.7042
    [4] R. P. Agarwal, D. O. Regan, D. R. Sahu, Iterative construction of fixed points of nearly asymptotically nonexpansive mappings, J. Nonlinear Convex Anal., 8 (2007), 61–79.
    [5] M. Abbas, T. Nazir, A new faster iteration process applied to constrained minimization and feasibility problems, Math. Vesn., 66 (2014), 223–234.
    [6] W. Phuengrattana, S. Suantai, On the rate of convergence of Mann, Ishikawa, Noor and SP-iterations for continuous functions on an arbitrary interval, J. Comput. Appl. Math., 235 (2011), 3006–3014. https://doi.org/10.1016/j.cam.2010.12.022 doi: 10.1016/j.cam.2010.12.022
    [7] I. Karahan, M. Ozdemir, A general iterative method for approximation of fixed points and their applications, Adv. Fixed Point Theory, 3 (2013), 510–526.
    [8] R. Chugh, V. Kumar, S. Kumar, Strong convergence of a new three step iterative scheme in Banach spaces, Amer. J. Comput. Math., 2 (2012), 345–357. https://doi.org/10.4236/ajcm.2012.24048 doi: 10.4236/ajcm.2012.24048
    [9] D. R. Sahu, A. Petruşel, Strong convergence of iterative methods by strictly pseudocontractive mappings in Banach spaces, Nonlinear Anal.: Theory Methods Appl., 74 (2011), 6012–6023. https://doi.org/10.1016/j.na.2011.05.078 doi: 10.1016/j.na.2011.05.078
    [10] F. Gürsoy, V. Karakaya, A Picard-S hybrid type iteration method for solving a differential equation with retarded argument, arXiv, 2014. https://arXiv.org/abs/1403.2546
    [11] B. S. Thakur, D. Thakur, M. Postolache, A new iterative scheme for numerical reckoning fixed points of Suzuki's generalized nonexpansive mappings, Appl. Math. Comput., 275 (2016), 147–155. https://doi.org/10.1016/j.amc.2015.11.065 doi: 10.1016/j.amc.2015.11.065
    [12] K. Ullah, M. Arshad, Numerical reckoning fixed points for Suzuki's generalized nonexpansive mappings via new iteration process, Filomat, 32 (2018), 187–196. https://doi.org/10.2298/FIL1801187U doi: 10.2298/FIL1801187U
    [13] K. Ullah, M. Arshad, New iteration process and numerical reckoning fixed points in Banach spaces, U. P. B. Sci. Bull., Ser. A, 79 (2017), 113–122.
    [14] C. Garodia, I. Uddin, A new fixed point algorithm for finding the solution of a delay differential equation, AIMS Math., 5 (2020), 3182–3200. https://doi.org/10.3934/math.2020205 doi: 10.3934/math.2020205
    [15] S. Thianwan, Common fixed points of new iterations for two asymptotically nonexpansive nonself-mappings in a Banach space, J. Comput. Appl. Math., 224 (2009), 688–695. https://doi.org/10.1016/j.cam.2008.05.051 doi: 10.1016/j.cam.2008.05.051
    [16] H. A. Hammad, H. ur Rehman, M. De la Sen, A novel four-step iterative scheme for approximating the fixed point with a supportive application, Inf. Sci. Lett., 10 (2021), 14.
    [17] H. A. Hammad, H. ur Rehman, H. Almusawa, Tikhonov regularization terms for accelerating inertial Mann-Like algorithm with applications, Symmetry, 13 (2021), 554. https://doi.org/10.3390/sym13040554 doi: 10.3390/sym13040554
    [18] K. Ullah, M. Arshad, New three-step iteration process and fixed point approximation in Banach spaces, J. Linear Topol. Algebra, 7 (2018), 87–100.
    [19] K. Maleknejad, P. Torabi, Application of fixed point method for solving Volterra-Hammerstein integral equation, U. P. B. Sci. Bull., Ser. A., 74 (2012), 45–56.
    [20] K. Maleknejad, M. Hadizadeh, A New computational method for Volterra-Fredholm integral equations, Comput. Math. Appl., 37 (1999), 1–8. https://doi.org/10.1016/S0898-1221(99)00107-8 doi: 10.1016/S0898-1221(99)00107-8
    [21] A. M. Wazwaz, A reliable treatment for mixed Volterra-Fredholm integral equations, Appl. Math. Comput., 127 (2002), 405–414. https://doi.org/10.1016/S0096-3003(01)00020-0 doi: 10.1016/S0096-3003(01)00020-0
    [22] Y. Atlan, V. Karakaya, Iterative solution of functional Volterra-Fredholm integral equation with deviating argument, J. Nonlinear Convex Anal., 18 (2017), 675–684.
    [23] N. Lungu, I. A. Rus, On a functional Volterra-Fredholm integral equation, via Picard operators, J. Math. Ineq., 3 (2009), 519–527.
    [24] A. E. Ofem, D. I. Igbokwe, An efficient iterative method and its applications to a nonlinear integral equation and a delay differential equation in Banach spaces, Turkish J. Ineq., 4 (2020), 79–107.
    [25] A. E. Ofem, U. E. Udofia, Iterative solutions for common fixed points of nonexpansive mappings and strongly pseudocontractive mappings with applications, Canad. J. Appl. Math., 3 (2021), 18–36.
    [26] H. A. Hammad, H. Aydi, M. De la Sen, Solutions of fractional differential type equations by fixed point techniques for multivalued contractions, Complexity, 2021 (2021), 5730853. https://doi.org/10.1155/2021/5730853 doi: 10.1155/2021/5730853
    [27] V. Berinde, On the approximation of fixed points of weak contractive mapping, Carpathian J. Math., 19 (2003), 7–22.
    [28] V. Berinde, Picard iteration converges faster than Mann iteration for a class of quasicontractive operators, Fixed Point Theory Appl., 2004 (2004), 716359. https://doi.org/10.1155/S1687182004311058 doi: 10.1155/S1687182004311058
    [29] H. F. Senter, W. G. Dotson, Approximating fixed points of nonexpansive mapping, Proc. Amer. Math. Soc., 44 (1974), 375–380. https://doi.org/10.1090/S0002-9939-1974-0346608-8 doi: 10.1090/S0002-9939-1974-0346608-8
    [30] T. Suzuki, Fixed point theorems and convergence theorems for some generalized nonexpansive mappings, J. Math. Anal. Appl., 340 (2008), 1088–1095. https://doi.org/10.1016/j.jmaa.2007.09.023 doi: 10.1016/j.jmaa.2007.09.023
    [31] S. M. Şoltuz, T. Grosan, Data dependence for Ishikawa iteration when dealing with contractive like operators, Fixed Point Theory Appl., 2008 (2008), 242916. https://doi.org/10.1155/2008/242916 doi: 10.1155/2008/242916
    [32] J. Schu, Weak and strong convergence to fixed points of asymptotically nonexpansive mappings, B. Aust. Math. Soc., 43 (1991), 153–159. https://doi.org/10.1017/S0004972700028884 doi: 10.1017/S0004972700028884
    [33] A. Bielecki, Une remarque sur l'application de la méthode de Banach-Cacciopoli-Tikhonov dans la théorie de l'équation s = f (x, y, z, p, q), Bull. Acad. Polon. Sci. Sér. Sci. Math. Phys. Astr., 4 (1956), 265–357.
    [34] F. Facchinei, J. S. Pang, Finite-dimensional variational inequalities and complementarity problems, Springer Series in Operations Research, New York: Springer, 2003. https://doi.org/10.1007/b97543
    [35] I. Konnov, Combined relaxation methods for variational inequalities, Lecture Notes in Economics and Mathematical Systems, Berlin: Springer-Verlag, 2001. https://doi.org/10.1007/978-3-642-56886-2
    [36] G. M. Korpelevich, The extragradient method for finding saddle points and other problems, Matecon, 12 (1976), 747–756.
    [37] C. Martinez-Yanes, H. K. Xu, Strong convergence of the CQ method for fixed point iteration processes, Nonlinear Anal., 64 (2006), 2400–2411. https://doi.org/10.1016/j.na.2005.08.018 doi: 10.1016/j.na.2005.08.018
    [38] H. A. Hammad, H. ur. Rahman, M. De la Sen, Shrinking projection methods for accelerating relaxed inertial Tseng-type algorithm with applications, Math. Probl. Eng., 2020 (2020), 7487383. https://doi.org/10.1155/2020/7487383 doi: 10.1155/2020/7487383
    [39] T. M. Tuyen, H. A. Hammad, Effect of shrinking projection and CQ-methods on two inertial forward-backward algorithms for solving variational inclusion problems, Rend. Circ. Mat. Palermo, II. Ser., 70 (2021), 1669–1683. https://doi.org/10.1007/s12215-020-00581-8 doi: 10.1007/s12215-020-00581-8
    [40] H. H. Bauschke, P. L. Combettes, Convex analysis and monotone operator theory in Hilbert spaces, New York: Springer, 2011. https://doi.org/10.1007/978-1-4419-9467-7
    [41] H. H. Bauschke, J. M. Borwein, On projection algorithms for solving convex feasibility problems, SIAM Rev., 38 (1996), 367–426. https://doi.org/10.1137/S0036144593251710 doi: 10.1137/S0036144593251710
    [42] P. Chen, J. Huang, X. Zhang, A primal-dual fixed point algorithm for convex separable minimization with applications to image restoration, Inverse Probl., 29 (2013), 025011. https://doi.org/10.1088/0266-5611/29/2/025011 doi: 10.1088/0266-5611/29/2/025011
    [43] Y. Dang, J. Sun, H. Xu, Inertial accelerated algorithms for solving a split feasibility problem, J. Ind. Manag. Optim., 13 (2017), 1383–1394. https://doi.org/10.3934/jimo.2016078 doi: 10.3934/jimo.2016078
© 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)