
    Fixed point theory is concerned with conditions which ensure that a self map $ \mathcal{M} $ defined on a set $ \mathcal{B} $ admits at least one fixed point. By a fixed point of $ \mathcal{M} $, we mean a point $ w\in \mathcal{B} $ which solves the operator equation $ w = \mathcal{M}w $, known as the fixed point equation. Let $ F(\mathcal{M}) = \{w\in \mathcal{B}:w = \mathcal{M}w\} $ stand for the set of all fixed points of $ \mathcal{M} $. Fixed point theory plays a significant role in finding the solutions of problems which arise in different branches of mathematical analysis. For some years now, the advancement of fixed point theory in metric spaces has attracted considerable interest from many authors as a result of its applications in many fields, such as variational inequality, approximation theory and optimization theory.

    The Banach contraction principle remains one of the fundamental theorems in analysis. It states that if $ (\mathcal{B}, d) $ is a complete metric space and $ \mathcal{M} : \mathcal{B}\to \mathcal{B} $ fulfills

    $ d(\mathcal{M}f,\mathcal{M}h)\leq e\,d(f,h), $
    (1.1)

    for all $ f, h\in\mathcal{B} $ with $ e\in [0, 1) $, then $ \mathcal{M} $ has a unique fixed point.

    Mappings satisfying (1.1) are known as contraction mappings. In 2003, Berinde [10] introduced the class of weak contraction mappings in metric spaces. This class of mappings is also called almost contractions. He proved that this class is a superclass of the class of Zamfirescu mappings [53], which properly contains the classes of contraction, Kannan [23] and Chatterjea [11] mappings.

    Definition 1.1. A map $ \mathcal{M}:\mathcal{B}\to\mathcal{B} $ is called an almost contraction if there exist constants $ e\in (0, 1) $ and $ L\geq0 $ such that

    $ d(\mathcal{M}f,\mathcal{M}h)\leq e\,d(f,h)+L\,d(f,\mathcal{M}f),\qquad \forall f,h\in\mathcal{B}. $
    (1.2)

    In [10], Berinde showed that every almost contraction mapping $ \mathcal{M} $ has a unique fixed point in a complete metric space $ (\mathcal{B}, d) $. The map $ \mathcal{M} $ is termed nonexpansive if

    $ d(\mathcal{M}g,\mathcal{M}h)\leq d(g,h), $
    (1.3)

    for all $ g, h\in\mathcal{B} $. It is known as quasi-nonexpansive mapping if $ F(\mathcal{M})\neq\emptyset $ and

    $ d(\mathcal{M}g,w)\leq d(g,w), $
    (1.4)

    for all $ g\in\mathcal{B} $ and $ w\in F(\mathcal{M}) $. Due to the numerous applications of nonexpansive mappings in mathematics and other related fields, in recent years, their extensions and generalizations in many directions have been studied by different authors, see [7,40,41,42,44].

    In 2008, Suzuki [44] studied a class of mappings called generalized nonexpansive mappings (or mappings satisfying condition $ (C) $). The author studied the existence and convergence analysis of mappings satisfying condition $ (C) $.

    Definition 1.2. A map $ \mathcal{M}:\mathcal{B}\to \mathcal{B} $ is said to satisfy condition (C) if

    $ \frac{1}{2}d(f,\mathcal{M}f)\leq d(f,h)\implies d(\mathcal{M}f,\mathcal{M}h)\leq d(f,h),\qquad \forall f,h\in\mathcal{B}. $
    (1.5)

    In 2011, García-Falset et al. [20] introduced a general class of nonexpansive mappings as follows:

    Definition 1.3. A mapping $ \mathcal{M}:\mathcal{B}\to \mathcal{B} $ is said to satisfy condition $ E_\mu $, if there exists $ \mu\geq1 $ such that

    $ d(f,\mathcal{M}h)\leq\mu\,d(f,\mathcal{M}f)+d(f,h), $
    (1.6)

    for all $ f, h\in\mathcal{B} $. Now, $ \mathcal{M} $ is said to satisfy the condition $ E $ whenever $ \mathcal{M} $ satisfies the condition $ E_\mu $ for some $ \mu\geq1 $.

    Remark 1.4. As shown in [40], the classes of generalized $ \alpha $-nonexpansive mappings [42], Reich-Suzuki nonexpansive mappings [41], Suzuki generalized nonexpansive mappings [44], generalized $ \alpha $-Reich-Suzuki nonexpansive mapping [40] are properly included in the class of mappings satisfying (1.6).

    The celebrated Banach contraction principle works with the Picard iteration process. This principle has some limitations when more general classes of mappings are considered. To get a better rate of convergence and overcome these limitations, several authors have studied different iteration processes. Some of these prominent iteration processes include: Mann [31], Ishikawa [26], Noor [32], S [3], Abbas [1], Thakur [46], Picard-S [22] and M [48] iteration processes. In [3], the authors showed that the S-iterative scheme converges at the same rate as the Picard iterative algorithm and faster than the Mann iteration process. In [1], it is shown that the Abbas iteration method converges faster than the Picard, Mann [31] and S-iteration [3] processes. In 2016, Thakur et al. [46] defined a new iterative process. It was shown by the authors that their method enjoys a better speed of convergence than the Mann [31], Ishikawa [26], Noor [32], S [3] and Abbas [1] iteration processes. In 2021, the JK iteration process was constructed by Ahmad et al. [4] for mappings satisfying condition $ (C) $. In [4,5], the authors showed that the JK iteration process converges faster than the Mann [31], Ishikawa [26], Noor [32], S [3], Abbas [1] and Thakur [46] iteration processes for mappings satisfying condition $ (C) $ and generalized $ \alpha $-nonexpansive mappings, respectively.

    Recently, the following four steps iteration process known as the AH iteration process was introduced by Ofem et al. [33] in Banach spaces.

    $ \begin{cases} f_1\in\mathcal{B},\\ q_k=(1-\delta_k)f_k+\delta_k\mathcal{M}f_k,\\ v_k=\mathcal{M}^2q_k,\\ h_k=\mathcal{M}^2v_k,\\ f_{k+1}=(1-m_k)h_k+m_k\mathcal{M}h_k,\qquad k\in\mathbb{N}, \end{cases} $
    (1.7)

    where $ \{m_k\} $ and $ \{\delta_k\} $ are sequences in $ (0, 1) $. It was analytically shown in [33] that the AH iterative algorithm (1.7) converges faster than the JK iteration process [4] for contractive-like mappings. Furthermore, the authors showed numerically that the AH iteration process (1.7) converges faster than several existing iteration processes for both contractive-like mappings and Reich-Suzuki nonexpansive mappings.

    Motivated by the above results, in this article, we construct the hyperbolic space version of the AH iteration process (3.1). Furthermore, we prove that the modified iteration process is data dependent for almost contraction mappings. We study several strong and $ \vartriangle $-convergence results of the AH iterative scheme for mappings enriched with condition $ (E) $. Some numerical examples of mappings enriched with condition $ (E) $ are provided to show the efficiency of our method over some existing methods. Finally, we apply our main results to solving a nonlinear integral equation with two delays. Since hyperbolic spaces are more general than Banach spaces and, by Remark 1.4, the class of mappings enriched with condition $ (E) $ is a super-class of those considered in Ahmad et al. [4,5] and Ofem et al. [33], it follows that our results generalize and extend the results of Ahmad [4,5], Ofem et al. [33] and many other existing results of well known authors.

    Throughout this paper, we will let $ \mathbb{N} $ denote the set of natural numbers, $ \mathbb{R} $ the set of real numbers and $ \mathbb{C} $ the set of complex numbers. A convexity structure on the underlying space is an important tool for solving the operator equation $ w = \mathcal{M}w $. Since every Banach space is a vector space, a Banach space naturally inherits such a convexity structure. On the other hand, metric spaces do not naturally enjoy this convex structure.

    In [45], Takahashi developed the concept of convex metric spaces and further investigated the fixed points of certain mappings in the setting of such spaces. It is well known that convex metric spaces contain all normed spaces as well as their convex subsets, but there are many examples of convex metric spaces which are not embedded in any normed space [45]. For some decades now, many authors have introduced convex structures in metric spaces. The following notion of a $ \mathcal{W} $-hyperbolic space was introduced by Kohlenbach [25]:

    Definition 2.1. A $ \mathcal{W} $-hyperbolic space $ (\mathcal{B}, d, \mathcal{W}) $ is a metric space $ (\mathcal{B}, d) $ together with a convexity mapping $ \mathcal{W} : \mathcal{B}^2\times[0, 1] \to \mathcal{B} $ satisfying the following properties:

    $ (1) $ $ d(q, \mathcal{W}(f, h, \alpha)) \leq(1-\alpha)d(q, f) + \alpha d(q, h) $

    $ (2) $ $ d(\mathcal{W}(f, h, \alpha), \mathcal{W}(f, h, \beta)) = |\alpha-\beta|d(f, h) $

    $ (3) $ $ \mathcal{W}(f, h, \alpha) = \mathcal{W}(h, f, 1-\alpha) $

    $ (4) $ $ d(\mathcal{W}(f, q, \alpha), \mathcal{W}(h, p, \alpha))\leq(1-\alpha)d(f, h) + \alpha d(q, p) $

    for all $ f, h, q, p\in\mathcal{B} $ and $ \alpha, \beta\in [0, 1] $.

    Suppose $ (\mathcal{B}, d, \mathcal{W}) $ fulfils only condition (1); then $ (\mathcal{B}, d, \mathcal{W}) $ becomes the convex metric space considered by Takahashi [45]. It is well known that every hyperbolic space is a convex metric space, but the converse is not generally true [14].

    Normed linear spaces, CAT(0) spaces, the Hilbert ball and Busemann spaces are important examples of $ \mathcal{W} $-hyperbolic spaces [52].

    A hyperbolic space $ (\mathcal{B}, d, \mathcal{W}) $ is known as uniformly convex [12] if for any $ r > 0 $ and $ \varepsilon\in (0, 2] $ there exists a constant $ \gamma\in (0, 1] $ such that, for all $ f, h, q\in \mathcal{B} $ with $ d(f, h)\leq r $, $ d(q, f)\leq r $ and $ d(h, q)\geq \varepsilon r $, we get

    $ d\Big(\mathcal{W}\Big(h,q,\frac{1}{2}\Big),f\Big)\leq(1-\gamma)r. $

    The modulus of uniform convexity [56], of $ \mathcal{B} $ is a mapping $ \xi:(0, \infty)\times(0, 2]\to(0, 1] $ which gives $ \gamma = \xi(r, \varepsilon) $ for any $ r > 0 $ and $ \varepsilon\in (0, 2] $. We call $ \xi $ monotone if it decreases with $ r $ (for fixed $ \varepsilon $), see [56].

    A nonempty subset $ \mathcal{D} $ of a hyperbolic space $ \mathcal{B} $ is called convex if $ \mathcal{W}(f, h, \alpha)\in \mathcal{D} $ for all $ f, h\in\mathcal{D} $ and $ \alpha\in [0, 1] $. If $ f, h\in\mathcal{B} $ and $ \alpha\in[0, 1] $, then we denote $ \mathcal{W}(f, h, \alpha) $ by $ (1-\alpha)f\oplus\alpha h $. It is shown in [28] that any normed space $ (\mathcal{B}, \|.\|) $ is a hyperbolic space with $ (1-\alpha)f\oplus \alpha h = (1-\alpha)f+ \alpha h $. This implies that the class of uniformly convex hyperbolic spaces is a natural generalization of the class of uniformly convex Banach spaces.

    The concept of $ \vartriangle $-convergence in the setting of general metric spaces was introduced by Lim [30]. This concept of convergence was used by Kirk and Panyanak [24] to prove results in CAT(0) spaces that are analogues of Banach space results involving weak convergence. Furthermore, $ \vartriangle $-convergence results for the Picard, Mann [31] and Ishikawa [26] iteration processes in CAT(0) spaces were obtained by Dhompongsa and Panyanak [17]. In recent years, a number of articles concerning $ \vartriangle $-convergence have been published (see [2,19,21,27,34,52] and the references therein). To define $ \vartriangle $-convergence, we consider the following concepts.

    Let $ \{f_k\} $ be a sequence which is bounded in a hyperbolic space $ \mathcal{B} $. A function $ r(., {\{f_k\}}) :\mathcal{B}\to [0, \infty) $ can be defined by

    $ r(f,\{f_k\})=\limsup\limits_{k\to\infty}d(f,f_k),\qquad \text{for all } f\in\mathcal{B}. $

    An asymptotic radius of a bounded sequence $ \{f_k\} $ with respect to a nonempty subset $ \mathcal{D} $ of $ \mathcal{B} $ is denoted and defined by

    $ r_{\mathcal{D}}(\{f_k\})=\inf\{r(f,\{f_k\}):f\in\mathcal{D}\}. $

    An asymptotic center of a bounded sequence $ \{f_k\} $ with respect to a nonempty subset $ \mathcal{D} $ of $ \mathcal{B} $ is denoted and defined by

    $ A_{\mathcal{D}}(\{f_k\})=\{f\in\mathcal{B}:r(f,\{f_k\})\leq r(h,\{f_k\}),\ \text{for all } h\in\mathcal{D}\}. $

    Suppose the asymptotic radius and the asymptotic center are taken with respect to $ \mathcal{B} $; then these are simply denoted by $ r(\{f_k\}) $ and $ A(\{f_k\}) $, respectively. Generally, $ A(\{f_k\}) $ may be empty or may even contain infinitely many points, see [2,19,21,27,52,56].

    The following lemmas, definitions and proposition will be useful in our main results.

    Definition 2.2. [24] The sequence $ \{f_k\} $ in $ \mathcal{B} $ is said to be $ \vartriangle $-convergent to a point $ f\in \mathcal{B} $ if $ f $ is the unique asymptotic center of every sub-sequence $ \{f_{k_j}\} $ of $ \{f_k\} $. For this, we write $ \vartriangle-\lim\limits_{k\to\infty}f_k = f $ and call $ f $ the $ \vartriangle $-limit of $ \{f_k\} $.

    Lemma 2.3. [28] In a complete uniformly convex hyperbolic space $ \mathcal{B} $ with monotone modulus of convexity $ \xi $, every bounded sequence $ \{f_k\} $ has a unique asymptotic center with respect to every nonempty closed convex subset $ \mathcal{D} $ of $ \mathcal{B} $.

    Lemma 2.4. [29] Let $ (\mathcal{B}, d, \mathcal{W}) $ be a complete uniformly convex hyperbolic space with a monotone modulus of convexity $ \xi $. Assume $ f\in \mathcal{B} $ and $ \{\alpha_k\} $ is a sequence in $ [n, m] $ for some $ n, m\in (0, 1) $. Suppose $ \{f_k\} $ and $ \{h_k\} $ are sequences in $ \mathcal{B} $ such that $ \limsup\limits_{k\to\infty}d(f_k, f)\leq a $, $ \limsup\limits_{k\to\infty}d(h_k, f)\leq a $, $ \lim\limits_{k\to\infty}d(\mathcal{W}(f_k, h_k, \alpha_k), f) = a $ for some $ a\geq 0 $, then

    $ \lim\limits_{k\to\infty}d(f_k,h_k)=0. $

    Lemma 2.5. [47] Let $ \{a_k\} $ be a non–negative sequence for which one assumes that there exists an $ n_0\in\mathbb{N} $ such that, for all $ k \geq n_0 $,

    $ a_{k+1}\leq(1-\sigma_k)a_k+\sigma_k g_k $

    is satisfied, where $ \sigma_k\in(0, 1) $ for all $ k\in\mathbb{N} $, $ \sum_{k = 0}^{\infty}\sigma_k = \infty $ and $ g_k\geq0 $ $ \forall k\in \mathbb{N} $. Then the following holds:

    $ 0\leq\limsup\limits_{k\to\infty}a_k\leq\limsup\limits_{k\to\infty}g_k. $

    Definition 2.6. [47] Let $ \mathcal{M} $, $ \mathcal{S}:\mathcal{B}\to \mathcal{B} $. Then $ \mathcal{S} $ is called an approximate operator of $ \mathcal{M} $ if, for a fixed $ \epsilon > 0 $, we have $ d(\mathcal{M}f, \mathcal{S}f)\leq\epsilon $ for all $ f\in\mathcal{B} $.

    Proposition 2.7. [20] Let $ \mathcal{M} :\mathcal{B} \to\mathcal{B} $ be a mapping which satisfies the condition $ (E) $ with $ F(\mathcal{M})\neq\emptyset $, then $ \mathcal{M} $ is quasi-nonexpansive.

    Definition 2.8. [43] Let $ \mathcal{D} $ be a subset of $ (\mathcal{B}, d, \mathcal{W}) $. A mapping $ \mathcal{M}:\mathcal{D}\to\mathcal{D} $ is said to fulfil the condition $ (I) $ if there exists a non-decreasing function $ \varrho:[0, \infty)\to[0, \infty) $ with $ \varrho(0) = 0 $ and $ \varrho(r) > 0 $ for all $ r\in (0, \infty) $ such that $ d(f, \mathcal{M}f)\geq \varrho(\mathrm{dist}(f, F(\mathcal{M}))) $ for all $ f\in \mathcal{D} $, where $ \mathrm{dist}(f, F(\mathcal{M})) $ stands for the distance of $ f $ from $ F(\mathcal{M}) $.

    Throughout the remaining part of this article, let $ (\mathcal{B}, d, \mathcal{W}) $ denote a complete uniformly convex hyperbolic space with a monotone modulus of convexity $ \xi $ and $ \mathcal{D} $ be a nonempty closed convex subset of $ \mathcal{B} $.

    In this section, we construct a modified form of AH iteration process (1.7) in hyperbolic spaces as follows:

    $ \begin{cases} f_1\in\mathcal{D},\\ q_k=\mathcal{W}(f_k,\mathcal{M}f_k,\delta_k),\\ v_k=\mathcal{M}^2q_k,\\ h_k=\mathcal{M}^2v_k,\\ f_{k+1}=\mathcal{W}(h_k,\mathcal{M}h_k,m_k),\qquad k\in\mathbb{N}, \end{cases} $
    (3.1)

    where $ \{m_k\} $, $ \{\delta_k\} $ are sequences in $ (0, 1) $ and $ \mathcal{M} $ is a mapping enriched with condition $ (E) $.
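
    To make the scheme concrete, the following is a minimal MATLAB sketch of one run of (3.1), to be saved as ah_iteration.m. It is illustrative only: the mapping $ \mathcal{M} $, the convexity mapping $ \mathcal{W} $, the parameter sequences and the number of steps are supplied by the user, and the function name is our own; nothing in it is prescribed by the paper beyond the four steps of (3.1).

        function f = ah_iteration(M, W, f1, m, delta, K)
        % Minimal sketch of the modified AH scheme (3.1).
        %   M        : function handle for the mapping M
        %   W        : function handle W(f,h,a) for the convexity mapping
        %   f1       : starting point in D
        %   m, delta : function handles k -> m_k and k -> delta_k, values in (0,1)
        %   K        : number of iterations
        f = f1;
        for k = 1:K
            q = W(f, M(f), delta(k));     % q_k     = W(f_k, M f_k, delta_k)
            v = M(M(q));                  % v_k     = M^2 q_k
            h = M(M(v));                  % h_k     = M^2 v_k
            f = W(h, M(h), m(k));         % f_{k+1} = W(h_k, M h_k, m_k)
        end
        end

    For instance, on $ \mathcal{B} = \mathbb{R} $ with $ \mathcal{W}(f,h,\alpha) = (1-\alpha)f+\alpha h $, one may call ah_iteration(@(f) f/2, @(f,h,a) (1-a)*f + a*h, 1, @(k) 0.7, @(k) 0.3, 20), where the contraction $ \mathcal{M}f = f/2 $ and the constant parameters are purely illustrative choices.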

    In this section, we show the data dependence result of the iteration process (3.1) for almost contraction mappings. The following convergence theorem will be useful in obtaining the data dependence result.

    Theorem 3.1. Let $ \mathcal{D} $ be a nonempty, closed and convex subset of the hyperbolic space $ \mathcal{B} $ and $ \mathcal{M}:\mathcal{D}\to \mathcal{D} $ an almost contraction mapping. If $ \{f_k\} $ is the sequence defined by (3.1), then $ \lim\limits_{k\to\infty}f_k = w $, where $ w\in F(\mathcal{M}) $.

    Proof. Suppose that $ w\in F(\mathcal{M}) $, from (1.2), (3.1) and Proposition 2.7, we have

    $ \begin{aligned} d(q_k,w) &= d(\mathcal{W}(f_k,\mathcal{M}f_k,\delta_k),w)\\ &\leq (1-\delta_k)d(f_k,w)+\delta_k d(\mathcal{M}f_k,w)\\ &\leq (1-\delta_k)d(f_k,w)+\delta_k e\,d(f_k,w)\\ &= (1-(1-e)\delta_k)\,d(f_k,w). \end{aligned} $
    (3.2)

    Since $ \delta_k\in (0, 1) $ and $ e\in [0, 1) $, we have $ 0 < (1-(1-e)\delta_k)\leq1 $; thus (3.2) yields

    $ d(q_k,w)\leq d(f_k,w). $
    (3.3)

    Also, by (3.1) and (3.3), we have

    $ \begin{aligned} d(v_k,w) &= d(\mathcal{M}^2q_k,w)=d(\mathcal{M}(\mathcal{M}q_k),w)\\ &\leq e\,d(\mathcal{M}q_k,w)\leq e^2 d(q_k,w)\leq e^2 d(f_k,w). \end{aligned} $
    (3.4)

    Again, from (3.1) and (3.4), we get

    $ \begin{aligned} d(h_k,w) &= d(\mathcal{M}^2v_k,w)=d(\mathcal{M}(\mathcal{M}v_k),w)\\ &\leq e\,d(\mathcal{M}v_k,w)\leq e^2 d(v_k,w)\leq e^4 d(f_k,w). \end{aligned} $
    (3.5)

    Finally, using (3.1) and (3.5), we obtain

    $ \begin{aligned} d(f_{k+1},w) &= d(\mathcal{W}(h_k,\mathcal{M}h_k,m_k),w)\\ &\leq (1-m_k)d(h_k,w)+m_k d(\mathcal{M}h_k,w)\\ &\leq (1-m_k)d(h_k,w)+m_k e\,d(h_k,w)\\ &= (1-(1-e)m_k)\,d(h_k,w)\\ &\leq e^4 d(f_k,w). \end{aligned} $
    (3.6)

    Inductively, we obtain

    $ d(f_{k+1},w)\leq e^{4k}\,d(f_1,w). $

    Since $ 0\leq e < 1 $, it follows that $ \lim\limits_{k\to\infty}f_k = w $.

    Theorem 3.2. Let $ \mathcal{D} $, $ \mathcal{B} $ and $ \mathcal{M} $ be the same as defined in Theorem 3.1. Let $ \mathcal{S} $ be an approximate operator of $ \mathcal{M} $ and $ \{f_k\} $ the sequence defined by (3.1). We define an iterative sequence $ \{x_k\} $ by means of $ \mathcal{S} $ as follows:

    $ \begin{cases} x_1\in\mathcal{D},\\ z_k=\mathcal{W}(x_k,\mathcal{S}x_k,\delta_k),\\ w_k=\mathcal{S}^2z_k,\\ y_k=\mathcal{S}^2w_k,\\ x_{k+1}=\mathcal{W}(y_k,\mathcal{S}y_k,m_k),\qquad k\in\mathbb{N}, \end{cases} $
    (3.7)

    where $ \{m_k\} $ and $ \{\delta_k\} $ are sequences in $ (0, 1) $ satisfying $ \frac{1}{2}\leq m_k $, $ k\in \mathbb{N} $ and $ \sum_{k = 0}^{\infty}m_k = \infty $. If $ \mathcal{M}w = w $ and $ \mathcal{S}t = t $ such that $ x_k\to t $ as $ k\to\infty $, then we have

    $ d(w,t)\leq\frac{11\epsilon}{1-e}, $

    where $ \epsilon $ is a fixed number.

    Proof. Using (1.2), (3.1) and (3.7), we have

    $ \begin{aligned} d(q_k,z_k) &= d(\mathcal{W}(f_k,\mathcal{M}f_k,\delta_k),\mathcal{W}(x_k,\mathcal{S}x_k,\delta_k))\\ &\leq (1-\delta_k)d(f_k,x_k)+\delta_k d(\mathcal{M}f_k,\mathcal{S}x_k)\\ &\leq (1-\delta_k)d(f_k,x_k)+\delta_k d(\mathcal{M}f_k,\mathcal{M}x_k)+\delta_k d(\mathcal{M}x_k,\mathcal{S}x_k)\\ &\leq (1-\delta_k)d(f_k,x_k)+\delta_k e\,d(f_k,x_k)+\delta_k L\,d(f_k,\mathcal{M}f_k)+\delta_k\epsilon\\ &\leq [1-(1-e)\delta_k]d(f_k,x_k)+\delta_k L(1+e)d(f_k,w)+\delta_k\epsilon, \end{aligned} $
    (3.8)
    $ \begin{aligned} d(v_k,w_k) &= d(\mathcal{M}^2q_k,\mathcal{S}^2z_k)=d(\mathcal{M}(\mathcal{M}q_k),\mathcal{S}(\mathcal{S}z_k))\\ &\leq d(\mathcal{M}(\mathcal{M}q_k),\mathcal{M}(\mathcal{S}z_k))+d(\mathcal{M}(\mathcal{S}z_k),\mathcal{S}(\mathcal{S}z_k))\\ &\leq e\,d(\mathcal{M}q_k,\mathcal{S}z_k)+L\,d(\mathcal{M}q_k,\mathcal{M}(\mathcal{M}q_k))+\epsilon\\ &\leq e\big(d(\mathcal{M}q_k,\mathcal{M}z_k)+d(\mathcal{M}z_k,\mathcal{S}z_k)\big)+L\,d(\mathcal{M}q_k,\mathcal{M}(\mathcal{M}q_k))+\epsilon\\ &\leq e^2d(q_k,z_k)+eL\,d(q_k,\mathcal{M}q_k)+e\epsilon+L\big(d(\mathcal{M}q_k,w)+d(w,\mathcal{M}(\mathcal{M}q_k))\big)+\epsilon\\ &\leq e^2d(q_k,z_k)+eL\big(d(q_k,w)+d(w,\mathcal{M}q_k)\big)+e\epsilon+L\big(e\,d(q_k,w)+e\,d(w,\mathcal{M}q_k)\big)+\epsilon\\ &\leq e^2d(q_k,z_k)+eL(1+e)d(q_k,w)+e\epsilon+Le(1+e)d(q_k,w)+\epsilon, \end{aligned} $
    (3.9)
    $ \begin{aligned} d(h_k,y_k) &= d(\mathcal{M}^2v_k,\mathcal{S}^2w_k)=d(\mathcal{M}(\mathcal{M}v_k),\mathcal{S}(\mathcal{S}w_k))\\ &\leq d(\mathcal{M}(\mathcal{M}v_k),\mathcal{M}(\mathcal{S}w_k))+d(\mathcal{M}(\mathcal{S}w_k),\mathcal{S}(\mathcal{S}w_k))\\ &\leq e\,d(\mathcal{M}v_k,\mathcal{S}w_k)+L\,d(\mathcal{M}v_k,\mathcal{M}(\mathcal{M}v_k))+\epsilon\\ &\leq e\big(d(\mathcal{M}v_k,\mathcal{M}w_k)+d(\mathcal{M}w_k,\mathcal{S}w_k)\big)+L\,d(\mathcal{M}v_k,\mathcal{M}(\mathcal{M}v_k))+\epsilon\\ &\leq e^2d(v_k,w_k)+eL\,d(v_k,\mathcal{M}v_k)+e\epsilon+L\big(d(\mathcal{M}v_k,w)+d(w,\mathcal{M}(\mathcal{M}v_k))\big)+\epsilon\\ &\leq e^2d(v_k,w_k)+eL(1+e)d(v_k,w)+e\epsilon+Le(1+e)d(v_k,w)+\epsilon, \end{aligned} $
    (3.10)
    $ \begin{aligned} d(f_{k+1},x_{k+1}) &= d(\mathcal{W}(h_k,\mathcal{M}h_k,m_k),\mathcal{W}(y_k,\mathcal{S}y_k,m_k))\\ &\leq (1-m_k)d(h_k,y_k)+m_k d(\mathcal{M}h_k,\mathcal{S}y_k)\\ &\leq (1-m_k)d(h_k,y_k)+m_k d(\mathcal{M}h_k,\mathcal{M}y_k)+m_k d(\mathcal{M}y_k,\mathcal{S}y_k)\\ &\leq (1-m_k)d(h_k,y_k)+m_k e\,d(h_k,y_k)+m_k L\,d(h_k,\mathcal{M}h_k)+m_k\epsilon\\ &\leq [1-(1-e)m_k]d(h_k,y_k)+m_k L(1+e)d(h_k,w)+m_k\epsilon. \end{aligned} $
    (3.11)

    Using (3.8)–(3.11), we have

    $ \begin{aligned} d(f_{k+1},x_{k+1}) &\leq e^4[1-(1-e)m_k][1-(1-e)\delta_k]d(f_k,x_k)+e^4\delta_k[1-(1-e)m_k]L(1+e)d(f_k,w)+\delta_k[1-(1-e)m_k]\epsilon\\ &\quad+e^3[1-(1-e)m_k]L(1+e)d(q_k,w)+e^3[1-(1-e)m_k]\epsilon+e^2[1-(1-e)m_k]Le(1+e)d(q_k,w)+e^2[1-(1-e)m_k]\epsilon\\ &\quad+e[1-(1-e)m_k]L(1+e)d(v_k,w)+e[1-(1-e)m_k]\epsilon+[1-(1-e)m_k]Le(1+e)d(v_k,w)+[1-(1-e)m_k]\epsilon\\ &\quad+m_kL(1+e)d(h_k,w)+m_k\epsilon. \end{aligned} $
    (3.12)

    Since $ m_k, \delta_k\in (0, 1) $ and $ e\in (0, 1) $, we have $ [1-(1-e)m_k]\leq 1 $, $ [1-(1-e)\delta_k]\leq 1 $ and $ e\leq 1 $. Therefore, (3.12) becomes

    $ \begin{aligned} d(f_{k+1},x_{k+1}) &\leq [1-(1-e)m_k]d(f_k,x_k)+L(1+e)d(f_k,w)+L(1+e)d(q_k,w)+Le(1+e)d(q_k,w)\\ &\quad+L(1+e)d(v_k,w)+Le(1+e)d(v_k,w)+m_kL(1+e)d(h_k,w)+m_k\epsilon+5\epsilon\\ &= [1-(1-e)m_k]d(f_k,x_k)+L(1+e)d(f_k,w)+L(1+e)^2d(q_k,w)+L(1+e)^2d(v_k,w)\\ &\quad+m_kL(1+e)d(h_k,w)+m_k\epsilon+5\epsilon. \end{aligned} $
    (3.13)

    Since $ \frac{1}{2}\leq m_k $, $ \forall k\geq1 $, then $ 1\leq 2m_k $, $ \forall k\geq1 $. Thus, (3.13) becomes

    $ \begin{aligned} d(f_{k+1},x_{k+1}) &\leq [1-(1-e)m_k]d(f_k,x_k)+2m_kL(1+e)d(f_k,w)+2m_kL(1+e)^2d(q_k,w)+2m_kL(1+e)^2d(v_k,w)\\ &\quad+m_kL(1+e)d(h_k,w)+m_k\epsilon+10m_k\epsilon\\ &= [1-(1-e)m_k]d(f_k,x_k)\\ &\quad+m_k(1-e)\cdot\frac{2L(1+e)d(f_k,w)+2L(1+e)^2d(q_k,w)+2L(1+e)^2d(v_k,w)+L(1+e)d(h_k,w)+11\epsilon}{1-e}. \end{aligned} $
    (3.14)

    Therefore,

    $ a_{k+1}\leq(1-\sigma_k)a_k+\sigma_k g_k, $

    where

    $ a_{k+1} = d(f_{k+1}, x_{k+1}) $,

    $ \sigma_k = (1-e)m_k\in (0, 1) $,

    and

    $ g_k = \frac{2L(1+e)d(f_k, w)+2L(1+e)^2d(q_k, w)+2L(1+e)^2d(v_k, w) +L(1+e)d(h_k, w)+11\epsilon}{1-e}\geq 0 $.

    From Theorem 3.1, we have that $ \lim\limits_{k\to\infty}d(f_k, w) = \lim\limits_{k\to\infty}d(h_k, w) = \lim\limits_{k\to\infty}d(v_k, w) = \lim\limits_{k\to\infty}d(q_k, w) = 0 $. By the hypothesis $ x_k\to t $ as $ k\to\infty $ and using Lemma 2.5, we obtain

    $ d(w,t)11ϵ1e.
    $

    This completes the proof.

    Now, we obtain the strong and $ \vartriangle $-convergence results of (3.1). To obtain these results, we will use the following lemmas.

    Lemma 3.3. Let $ \mathcal{D} $ and $ \mathcal{B} $ be the same as defined in Theorem 3.2 and let $ \mathcal{M}:\mathcal{D}\to\mathcal{D} $ be a mapping enriched with the condition $ (E) $ such that $ F(\mathcal{M})\neq\emptyset $. Suppose $ \{f_k\} $ is the sequence iteratively generated by (3.1). Then, $ \lim\limits_{k\to\infty}d(f_k, w) $ exists for all $ w\in F(\mathcal{M}) $.

    Proof. Assume that $ w\in F(\mathcal{M}) $. From Proposition 2.7 and (3.1), we have

    $ \begin{aligned} d(q_k,w) &= d(\mathcal{W}(f_k,\mathcal{M}f_k,\delta_k),w)\\ &\leq (1-\delta_k)d(f_k,w)+\delta_k d(\mathcal{M}f_k,w)\\ &\leq (1-\delta_k)d(f_k,w)+\delta_k d(f_k,w)=d(f_k,w). \end{aligned} $
    (3.15)

    From (3.15) and (3.1), we have

    $ \begin{aligned} d(v_k,w) &= d(\mathcal{M}^2q_k,w)=d(\mathcal{M}(\mathcal{M}q_k),w)\\ &\leq d(\mathcal{M}q_k,w)\leq d(q_k,w)\leq d(f_k,w). \end{aligned} $
    (3.16)

    Using (3.16) and (3.1), we get

    $ \begin{aligned} d(h_k,w) &= d(\mathcal{M}^2v_k,w)=d(\mathcal{M}(\mathcal{M}v_k),w)\\ &\leq d(\mathcal{M}v_k,w)\leq d(v_k,w)\leq d(f_k,w). \end{aligned} $
    (3.17)

    Finally, from (3.17) and (3.1), we obtain

    $ \begin{aligned} d(f_{k+1},w) &= d(\mathcal{W}(h_k,\mathcal{M}h_k,m_k),w)\\ &\leq (1-m_k)d(h_k,w)+m_k d(\mathcal{M}h_k,w)\\ &\leq (1-m_k)d(h_k,w)+m_k d(h_k,w)=d(h_k,w). \end{aligned} $
    (3.18)

    This shows that $ \{d(f_k, w)\} $ is a non-increasing sequence which is bounded below. Thus, $ \lim\limits_{k\to\infty}d(f_k, w) $ exists for each $ w\in F(\mathcal{M}) $.

    Lemma 3.4. Let $ \mathcal{D} $, $ \mathcal{B} $ and $ \mathcal{M} $ be same as defined in Lemma 3.3. Let $ \{f_k\} $ be the sequence defined by (3.1). Then, $ F(\mathcal{M})\neq \emptyset $ if and only if $ \{f_k\} $ is bounded and $ \lim\limits_{k\to\infty}d(f_k, \mathcal{M}f_k) = 0 $.

    Proof. Assume that $ \{f_k\} $ is a bounded sequence with $ \lim\limits_{k\to\infty}d(f_k, \mathcal{M}f_k) = 0 $. Let $ w\in A(\mathcal{D}, \{f_k\}) $. By the definition of asymptotic radius, we have

    $ r(\mathcal{M}w,\{f_k\})=\limsup\limits_{k\to\infty}d(f_k,\mathcal{M}w). $

    Since $ \mathcal{M} $ is a mapping which satisfies condition $ (E) $, we obtain

    $ \begin{aligned} r(\mathcal{M}w,\{f_k\}) &= \limsup\limits_{k\to\infty}d(f_k,\mathcal{M}w)\leq\mu\limsup\limits_{k\to\infty}d(\mathcal{M}f_k,f_k)+\limsup\limits_{k\to\infty}d(f_k,w)\\ &= \limsup\limits_{k\to\infty}d(f_k,w)=r(w,\{f_k\}). \end{aligned} $

    Recalling the uniqueness of the asymptotic center of $ \{f_k\} $, we get $ \mathcal{M}w = w $.

    Conversely, let $ F(\mathcal{M})\neq \emptyset $ and $ w\in F(\mathcal{M}) $. Then by Lemma 3.3, $ \lim\limits_{k\to\infty}d(f_k, w) $ exists. Now, suppose

    $ \lim\limits_{k\to\infty}d(f_k,w)=c. $
    (3.19)

    From (3.15), (3.16) and (3.19), it follows that

    $ \limsup\limits_{k\to\infty}d(v_k,w)\leq c, $
    (3.20)
    $ \limsup\limits_{k\to\infty}d(q_k,w)\leq c. $
    (3.21)

    Using Proposition 2.7, we get

    $ \limsup\limits_{k\to\infty}d(\mathcal{M}f_k,w)\leq\limsup\limits_{k\to\infty}d(f_k,w)=c. $
    (3.22)

    By Lemma 3.3 and (3.1), one obtains

    $ \begin{aligned} d(f_{k+1},w) &= d(\mathcal{W}(h_k,\mathcal{M}h_k,m_k),w)\\ &\leq (1-m_k)d(h_k,w)+m_k d(\mathcal{M}h_k,w)\\ &\leq (1-m_k)d(f_k,w)+m_k d(h_k,w)\\ &= (1-m_k)d(f_k,w)+m_k d(\mathcal{M}^2v_k,w)\\ &= (1-m_k)d(f_k,w)+m_k d(\mathcal{M}(\mathcal{M}v_k),w)\\ &\leq (1-m_k)d(f_k,w)+m_k d(\mathcal{M}v_k,w)\\ &\leq (1-m_k)d(f_k,w)+m_k d(v_k,w)\\ &= (1-m_k)d(f_k,w)+m_k d(\mathcal{M}^2q_k,w)\\ &= (1-m_k)d(f_k,w)+m_k d(\mathcal{M}(\mathcal{M}q_k),w)\\ &\leq (1-m_k)d(f_k,w)+m_k d(\mathcal{M}q_k,w)\\ &\leq (1-m_k)d(f_k,w)+m_k d(q_k,w). \end{aligned} $
    (3.23)

    From (3.23), it follows that

    $ d(f_{k+1},w)-d(f_k,w)\leq\frac{d(f_{k+1},w)-d(f_k,w)}{m_k}\leq d(q_k,w)-d(f_k,w). $
    (3.24)

    Thus,

    $ c\leq\liminf\limits_{k\to\infty}d(q_k,w). $
    (3.25)

    From (3.21) and (3.25), we get

    $ c=\lim\limits_{k\to\infty}d(q_k,w). $
    (3.26)

    Using (3.1) and (3.26), we have

    $ c=\lim\limits_{k\to\infty}d(q_k,w)=\lim\limits_{k\to\infty}d(\mathcal{W}(f_k,\mathcal{M}f_k,\delta_k),w). $
    (3.27)

    So, from Lemma 2.4 we have

    $ \lim\limits_{k\to\infty}d(f_k,\mathcal{M}f_k)=0. $

    Now, we show the $ \vartriangle $-convergence result of the iteration process (3.1) for class of mappings enriched with condition $ (E) $.

    Theorem 3.5. Let $ \mathcal{D} $, $ \mathcal{B} $ and $ \mathcal{M} $ be the same as in Lemma 3.4 such that $ F(\mathcal{M})\neq\emptyset $, and let $ \{f_k\} $ be the sequence defined by (3.1). Then, $ \{f_k\} $ $ \vartriangle $-converges to a fixed point of $ \mathcal{M} $.

    Proof. By Lemma 3.3, we know that $ \{f_k\} $ is a bounded sequence. It follows that $ \{f_k\} $ has a $ \vartriangle $-convergent sub-sequence. Now, we show that every $ \vartriangle $-convergent sub-sequence of $ \{f_k\} $ has a unique $ \vartriangle $-limit in $ F(\mathcal{M}) $. Let $ y $ and $ z $ stand for the $ \vartriangle $-limits of the subsequences $ \{f_{k_i}\} $ and $ \{f_{k_j}\} $ of $ \{f_k\} $, respectively. Recalling Lemma 2.3, we have $ A(\mathcal{D}, \{f_{k_i}\}) = \{y\} $ and $ A(\mathcal{D}, \{f_{k_j}\}) = \{z\} $. From Lemma 3.4, it follows that $ \lim\limits_{i\to\infty}d(f_{k_{i}}, \mathcal{M}f_{k_{i}}) = 0 $ and $ \lim\limits_{j\to\infty}d(f_{k_{j}}, \mathcal{M}f_{k_{j}}) = 0 $. We first show that $ y\in F(\mathcal{M}) $. By the definition of the asymptotic radius, we know that

    $ r(\mathcal{M}y,\{f_{k_i}\})=\limsup\limits_{i\to\infty}d(f_{k_i},\mathcal{M}y). $

    Since $ \mathcal{M} $ is a mapping which satisfies condition $ (E) $, we obtain

    $ \begin{aligned} r(\mathcal{M}y,\{f_{k_i}\}) &= \limsup\limits_{i\to\infty}d(f_{k_i},\mathcal{M}y)\leq\mu\limsup\limits_{i\to\infty}d(\mathcal{M}f_{k_i},f_{k_i})+\limsup\limits_{i\to\infty}d(f_{k_i},y)\\ &\leq \limsup\limits_{i\to\infty}d(f_{k_i},y)=r(y,\{f_{k_i}\}). \end{aligned} $

    From the uniqueness of the asymptotic center, we obtain $ \mathcal{M}y = y $; in the same way, $ \mathcal{M}z = z $. Now, it is left to prove that $ y = z $. Suppose, on the contrary, that $ y\neq z $; then, by Lemma 3.3 and the uniqueness of asymptotic centers, it follows that

    $ \begin{aligned} \limsup\limits_{k\to\infty}d(f_k,y) &= \limsup\limits_{i\to\infty}d(f_{k_i},y)<\limsup\limits_{i\to\infty}d(f_{k_i},z)\\ &= \limsup\limits_{k\to\infty}d(f_k,z)=\limsup\limits_{j\to\infty}d(f_{k_j},z)\\ &< \limsup\limits_{j\to\infty}d(f_{k_j},y)=\limsup\limits_{k\to\infty}d(f_k,y), \end{aligned} $

    which is clearly a contradiction. Therefore, $ y = z $ and hence, $ \{f_k\} $ $ \vartriangle $-converges to a fixed point of $ \mathcal{M} $.

    Next, we prove some strong convergence theorems as follows:

    Theorem 3.6. Let $ \mathcal{D} $, $ \mathcal{B} $ and $ \mathcal{M} $ be the same as in Lemma 3.4 such that $ F(\mathcal{M})\neq\emptyset $, and let $ \{f_k\} $ be the sequence iteratively generated by (3.1). Then, $ \{f_k\} $ converges strongly to a fixed point of $ \mathcal{M} $ if and only if $ \liminf\limits_{k\to \infty}{\text{dist}}(f_k, F(\mathcal{M})) = 0 $, where $ {\text{dist}}(f_k, F(\mathcal{M})) = \inf\{d(f_k, w):w\in F(\mathcal{M})\} $.

    Proof. The necessity is obvious. Conversely, suppose that $ \liminf\limits_{k\to \infty}{\text{dist}}(f_k, F(\mathcal{M})) = 0 $. By Lemma 3.3, $ \lim\limits_{k\to\infty}{\text{dist}}(f_k, F(\mathcal{M})) $ exists. Thus,

    $ \lim\limits_{k\to\infty}{\text{dist}}(f_k,F(\mathcal{M}))=0. $
    (3.28)

    From (3.28), a sub-sequence $ \{f_{k_i}\} $ of $ \{f_k\} $ exists with $ d(f_{k_i}, t_i)\leq \frac{1}{2^i} $ for all $ i\geq 1 $, where $ \{t_i\} $ is a sequence in $ F(\mathcal{M}) $. In view of Lemma 3.3, we obtain

    $ d(f_{k_{i+1}},t_i)\leq d(f_{k_i},t_i)\leq\frac{1}{2^i}. $
    (3.29)

    Using (3.29), we have

    $ d(t_{i+1},t_i)\leq d(t_{i+1},f_{k_{i+1}})+d(f_{k_{i+1}},t_i) $
    (3.30)
    $ \leq\frac{1}{2^{i+1}}+\frac{1}{2^i}<\frac{1}{2^{i-1}}. $
    (3.31)

    It follows that $ \{t_i\} $ is a Cauchy sequence in $ \mathcal{D} $. Since $ \mathcal{D} $ is a closed subset of $ \mathcal{B} $, we have $ \lim\limits_{i\to\infty}t_i = z $ for some $ z\in \mathcal{D} $, and hence, by (3.29), $ \lim\limits_{i\to\infty}f_{k_i} = z $ as well. Now, we prove that $ z $ is a fixed point of $ \mathcal{M} $. Since $ \mathcal{M} $ is a mapping which satisfies condition $ (E) $, we have

    $ d(f_{k_i},\mathcal{M}z)\leq\mu\,d(f_{k_i},\mathcal{M}f_{k_i})+d(f_{k_i},z). $

    Letting $ i\to\infty $ and using Lemma 3.4, we have $ d(f_{k_i}, \mathcal{M}f_{k_i})\to 0 $, and it follows that $ d(z, \mathcal{M}z) = 0 $. So, $ z $ is a fixed point of $ \mathcal{M} $. Since $ \lim\limits_{k\to\infty}d(f_k, z) $ exists by Lemma 3.3, we conclude that $ \{f_k\} $ converges strongly to $ z\in F(\mathcal{M}) $.

    Theorem 3.7. Let $ \mathcal{D} $, $ \mathcal{B} $ and $ \mathcal{M} $ be the same as in Lemma 3.4 such that $ F(\mathcal{M})\neq\emptyset $, and suppose that $ \{f_k\} $ is the sequence defined by (3.1) and that $ \mathcal{M} $ satisfies condition $ (I) $. Then, $ \{f_k\} $ converges strongly to an element of $ F(\mathcal{M}) $.

    Proof. Using Lemma 3.4, we have

    $ \liminf\limits_{k\to\infty}d(\mathcal{M}f_k,f_k)=0. $
    (3.32)

    Since $ \mathcal{M} $ fulfills condition $ (I) $, we get $ d(\mathcal{M}f_k, f_k)\geq \varrho({\text{dist}}(f_k, F(\mathcal{M}))) $. From (3.32), we obtain

    $ \liminf\limits_{k\to\infty}\varrho\big({\text{dist}}(f_k,F(\mathcal{M}))\big)=0. $

    Again, since the function $ \varrho:[0, \infty)\to[0, \infty) $ is non-decreasing with $ \varrho(0) = 0 $ and $ \varrho(r) > 0 $ for all $ r\in (0, \infty) $, we have

    $ \liminf\limits_{k\to\infty}{\text{dist}}(f_k,F(\mathcal{M}))=0. $

    Thus, all the conditions of Theorem 3.6 are fulfilled. Hence, $ \{f_k\} $ converges strongly to a fixed point of $ \mathcal{M} $.

    Now, we give the following example to authenticate Theorem 3.7.

    Example 3.8. Let $ \mathcal{B} = \mathbb{R} $ with the metric $ d(f, h) = |f-h| $ and $ \mathcal{D} = [-3, \infty) $. Define $ \mathcal{W}:\mathcal{B}^2\times [0, 1]\to \mathcal{B} $ by $ \mathcal{W}(f, h, \alpha) = (1-\alpha) f+\alpha h $ for all $ f, h\in \mathcal{B} $ and $ \alpha\in [0, 1] $. Then $ (\mathcal{B}, d, \mathcal{W}) $ is a complete uniformly convex hyperbolic space with monotone modulus of convexity, and $ \mathcal{D} $ is a nonempty closed convex subset of $ \mathcal{B} $. Let $ \mathcal{M}:\mathcal{D}\to \mathcal{D} $ be defined by

    $ \mathcal{M}f=\begin{cases} \dfrac{f}{6}, & \text{if } f\in[-3,\frac{1}{3}],\\[1mm] \dfrac{f}{7}, & \text{if } f\in(\frac{1}{3},\infty). \end{cases} $

    Since $ \mathcal{M} $ is not continuous at $ f = \frac{1}{3} $ and owing to the fact that every nonexpansive mapping is continuous, it follows that $ \mathcal{M} $ is not a nonexpansive mapping. Next, we show that $ \mathcal{M} $ is enriched with condition $ (E) $. To see this, we consider the following cases:

    Case Ⅰ: Let $ f, h\in [-3, \frac{1}{3}] $, then we have

    $ \begin{aligned} d(f,\mathcal{M}h) &= \Big|f-\frac{h}{6}\Big|=\Big|f-\frac{f}{6}+\frac{f}{6}-\frac{h}{6}\Big|\leq\Big|f-\frac{f}{6}\Big|+\frac{1}{6}|f-h|\\ &\leq \frac{36}{35}\Big|f-\frac{f}{6}\Big|+|f-h|=\frac{36}{35}d(f,\mathcal{M}f)+d(f,h). \end{aligned} $

    Case Ⅱ: Let $ f, h\in (\frac{1}{3}, \infty) $, then we have

    $ \begin{aligned} d(f,\mathcal{M}h) &= \Big|f-\frac{h}{7}\Big|=\Big|f-\frac{f}{7}+\frac{f}{7}-\frac{h}{7}\Big|\leq\Big|f-\frac{f}{7}\Big|+\frac{1}{7}|f-h|\\ &\leq \frac{36}{35}\Big|f-\frac{f}{7}\Big|+|f-h|=\frac{36}{35}d(f,\mathcal{M}f)+d(f,h). \end{aligned} $

    Case Ⅲ: Let $ f\in[-3, \frac{1}{3}] $ and $ h\in (\frac{1}{3}, \infty) $, then we obtain

    $ \begin{aligned} d(f,\mathcal{M}h) &= \Big|f-\frac{h}{7}\Big|=\Big|f-\frac{f}{7}+\frac{f}{7}-\frac{h}{7}\Big|\leq\Big|f-\frac{f}{7}\Big|+\frac{1}{7}|f-h|\\ &\leq \Big|f-\frac{f}{6}+\frac{f}{6}-\frac{f}{7}\Big|+|f-h|=\Big|\Big(f-\frac{f}{6}\Big)+\frac{1}{35}\Big(f-\frac{f}{6}\Big)\Big|+|f-h|\\ &= \frac{36}{35}\Big|f-\frac{f}{6}\Big|+|f-h|=\frac{36}{35}d(f,\mathcal{M}f)+d(f,h). \end{aligned} $

    Case Ⅳ: Let $ f\in(\frac{1}{3}, \infty) $ and $ h\in [-3, \frac{1}{3}] $, then we get

    $ \begin{aligned} d(f,\mathcal{M}h) &= \Big|f-\frac{h}{6}\Big|=\Big|f-\frac{f}{6}+\frac{f}{6}-\frac{h}{6}\Big|\leq\Big|f-\frac{f}{6}\Big|+\frac{1}{6}|f-h|\\ &\leq \Big|f-\frac{f}{7}+\frac{f}{7}-\frac{f}{6}\Big|+|f-h|=\Big|\Big(f-\frac{f}{7}\Big)-\frac{1}{36}\Big(f-\frac{f}{7}\Big)\Big|+|f-h|\\ &= \frac{35}{36}\Big|f-\frac{f}{7}\Big|+|f-h|\leq\frac{36}{35}\Big|f-\frac{f}{7}\Big|+|f-h|=\frac{36}{35}d(f,\mathcal{M}f)+d(f,h). \end{aligned} $

    Clearly, from the cases shown above, $ \mathcal{M} $ satisfies the condition $ (E) $ with $ \mu = \frac{36}{35} $, and the only fixed point is $ w = 0 $. Hence, $ F(\mathcal{M}) = \{0\} $. Now, we consider the function $ \varrho (f) = \frac{f}{4} $ for $ f\in [0, \infty) $; then $ \varrho $ is non-decreasing with $ \varrho(0) = 0 $ and $ \varrho(f) > 0 $ for all $ f\in (0, \infty) $.

    Observe that

    $ \mathrm{dist}(f,F(\mathcal{M}))=\inf\limits_{w\in F(\mathcal{M})}d(f,w)=d(f,0)=|f|, \qquad\text{so that}\qquad \varrho\big(\mathrm{dist}(f,F(\mathcal{M}))\big)=\frac{|f|}{4}. $

    Finally, we consider the following cases:

    Case 1. Let $ f\in [-3, \frac{1}{3}] $, then we obtain

    $ d(f,\mathcal{M}f)=\Big|f-\frac{f}{6}\Big|=\frac{5}{6}|f|\geq\frac{|f|}{4}=\varrho\big(\mathrm{dist}(f,F(\mathcal{M}))\big). $

    Case 2. Let $ f\in(\frac{1}{3}, \infty) $, then we obtain

    $ d(f,\mathcal{M}f)=\Big|f-\frac{f}{7}\Big|=\frac{6}{7}|f|\geq\frac{|f|}{4}=\varrho\big(\mathrm{dist}(f,F(\mathcal{M}))\big). $

    Thus, the above considered cases prove that

    $ d(f,\mathcal{M}f)\geq\varrho\big(\mathrm{dist}(f,F(\mathcal{M}))\big). $

    Therefore, the mapping $ \mathcal{M} $ satisfies the condition $ (I) $. Clearly, all the hypotheses of Theorem 3.7 are fulfilled. Thus, using Theorem 3.7, it follows that the sequence $ \{f_k\} $ defined by (3.1) converges strongly to the fixed point $ w = 0 $ of $ \mathcal{M} $.
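
    As a quick numerical illustration of Theorem 3.7 on this example (a sketch only: the convexity mapping is the linear one fixed above, while the constant parameters $ m_k = 0.7 $, $ \delta_k = 0.3 $ and the starting point $ f_1 = 2 $ are arbitrary choices of ours, not taken from the paper), one can run (3.1) directly in MATLAB and observe the iterates approaching $ w = 0 $:

        % Mapping of Example 3.8 on D = [-3, inf); W(f,h,a) = (1-a)*f + a*h on R.
        M  = @(f) (f <= 1/3).*(f/6) + (f > 1/3).*(f/7);
        mk = 0.7;  dk = 0.3;          % constant control parameters (illustrative)
        f  = 2;                       % starting point f_1 in D
        for k = 1:15
            q = (1-dk)*f + dk*M(f);   % q_k     = W(f_k, M f_k, delta_k)
            v = M(M(q));              % v_k     = M^2 q_k
            h = M(M(v));              % h_k     = M^2 v_k
            f = (1-mk)*h + mk*M(h);   % f_{k+1} = W(h_k, M h_k, m_k)
        end
        fprintf('f_16 = %.3e (fixed point w = 0)\n', f);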

    In this section, we construct a mapping enriched with condition $ (E) $ which does not satisfy condition $ (C) $. Then, using this example, we will illustrate that the modified AH iterative algorithm (3.1) converges faster than several known methods.

    Example 3.9. Let $ \mathcal{B} = \mathbb{R} $ and $ \mathcal{D} = [-1, 1] $ with the usual metric, that is, $ d(f, h) = |f-h| $. Define $ \mathcal{M}:\mathcal{D}\to \mathcal{D} $ by

    $ \mathcal{M}f=\begin{cases} -f, & \text{if } f\in[-1,0]\setminus\{-\frac{1}{5}\},\\ 0, & \text{if } f=-\frac{1}{5},\\ -\dfrac{f}{5}, & \text{if } f\in(0,1]. \end{cases} $

    If $ f = -1 $ and $ h = -\frac{1}{5} $, we have

    $ \frac{1}{2}d(h,\mathcal{M}h)=\frac{1}{2}\Big|-\frac{1}{5}-\mathcal{M}\Big(-\frac{1}{5}\Big)\Big|=\frac{1}{10}\leq\frac{4}{5}=d(f,h). $

    But,

    $ d(\mathcal{M}f,\mathcal{M}h)=1>\frac{4}{5}=d(f,h). $

    Thus, the mapping $ \mathcal{M} $ does not satisfy condition $ (C) $. Next, we show that $ \mathcal{M} $ is a mapping which satisfies condition $ (E) $. For this, the following cases will be considered:

    Case a: When $ f, h\in [-1, 0]\backslash\{-\frac{1}{5}\} $, we have

    $ d(f,\mathcal{M}h)\leq d(f,\mathcal{M}f)+d(\mathcal{M}f,\mathcal{M}h)\leq d(f,\mathcal{M}f)+d(f,h). $

    Case b: When $ f, h\in(0, 1] $, we have

    $ d(f,\mathcal{M}h)\leq d(f,\mathcal{M}f)+d(\mathcal{M}f,\mathcal{M}h)= d(f,\mathcal{M}f)+\Big|-\frac{f}{5}+\frac{h}{5}\Big|\leq d(f,\mathcal{M}f)+|f-h|=d(f,\mathcal{M}f)+d(f,h). $

    Case c: When $ f\in [-1, 0]\backslash\{-\frac{1}{5}\} $ and $ h\in(0, 1] $, we have

    $ d(f,\mathcal{M}h)\leq d(f,\mathcal{M}f)+d(\mathcal{M}f,\mathcal{M}h)= d(f,\mathcal{M}f)+\Big|-f+\frac{h}{5}\Big|\leq d(f,\mathcal{M}f)+|-f+h|\ \ (\text{as } f\leq0,\ h>0)=d(f,\mathcal{M}f)+d(f,h). $

    Case d: When $ f\in [-1, 0]\backslash\{-\frac{1}{5}\} $ and $ h = -\frac{1}{5} $, we have

    $ d(f,\mathcal{M}h)=|f|\leq2|f|+\Big|f+\frac{1}{5}\Big|=d(f,\mathcal{M}f)+d(f,h). $

    Case e: When $ f\in (0, 1] $ and $ h = -\frac{1}{5} $, we have

    $ d(f,\mathcal{M}h)=|f|\leq\frac{6}{5}|f|+\Big|f+\frac{1}{5}\Big|=d(f,\mathcal{M}f)+d(f,h). $

    Thus, $ \mathcal{M} $ satisfies the condition $ (E) $ for a suitable $ \mu\geq 1 $.

    In this work, we will be using MATLAB R2015a to obtain our numerical results.
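
    Before the parameter study, a grid-based spot check of condition $ (E) $ for the mapping of Example 3.9 can also be carried out in MATLAB. The sketch below is only a numerical sanity check on finitely many sample pairs $ (f,h) $, not a proof; it simply estimates the smallest $ \mu $ for which the inequality $ d(f,\mathcal{M}h)\leq\mu\,d(f,\mathcal{M}f)+d(f,h) $ of (1.6) holds on the chosen grid.

        % Mapping of Example 3.9 (at f = -1/5 both masks fail, so the value is 0).
        M  = @(f) (-f).*(f >= -1 & f <= 0 & f ~= -1/5) + (-f/5).*(f > 0 & f <= 1);
        fs = unique([linspace(-1, 1, 401), -1/5]);      % sample points, including -1/5
        [F, H] = meshgrid(fs, fs);                      % all pairs (f, h)
        gap  = abs(F - M(H)) - abs(F - H);              % d(f,Mh) - d(f,h)
        den  = abs(F - M(F));                           % d(f,Mf)
        mask = den > 1e-12;                             % skip pairs where f is (nearly) a fixed point
        mu_est = max(0, max(gap(mask)./den(mask)));     % smallest mu that works on this grid
        fprintf('smallest mu on this grid: %.4f\n', mu_est);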

    Now, we will study the influence of the control parameters $ m_k, \delta_k $ and initial value on AH iteration process (3.1).

    Case Ⅰ: Here, we will examine the convergence behavior of (3.1) for different choices of control parameters with the same initial value. For this, we consider the following set of parameters and initial value:

    $ (1) $ $ m_k = 0.70 $, $ \delta_k = 0.30 $ for all $ k\in \mathbb{N} $ and $ f_1 = 0.8 $,

    $ (2) $ $ m_k = 0.65 $, $ \delta_k = 0.35 $ for all $ k\in \mathbb{N} $ and $ f_1 = 0.8 $,

    $ (3) $ $ m_k = 0.55 $, $ \delta_k = 0.35 $ for all $ k\in \mathbb{N} $ and $ f_1 = 0.8 $.

    We obtain the following Table 1 and Figure 1 for an initial value of 0.8.

    Table 1.  Tabular values of AH iteration (3.1) for Case Ⅰ.
    Step Parameter 1 Parameter 2 Parameter 3
    1 0.8000000000 0.8000000000 0.8000000000
    2 -0.1280000000 -0.0720000000 -0.0240000000
    3 0.0204800000 0.0064800000 0.0007200000
    4 -0.0032768000 -0.0005832000 -0.0000216000
    5 0.0005242880 0.0000524880 0.0000006480
    6 -0.0000838861 -0.0000047239 -0.0000000194
    7 0.0000134218 0.0000004252 0.0000000006
    8 -0.0000021475 -0.0000000383 -0.0000000000
    9 0.0000003436 0.0000000034 0.0000000000
    10 -0.0000000550 -0.0000000003 -0.0000000000
    11 0.0000000088 0.0000000000 0.0000000000
    12 -0.0000000014 -0.0000000000 -0.0000000000
    13 0.0000000002 0.0000000000 0.0000000000
    14 -0.0000000000 -0.0000000000 -0.0000000000

    Figure 1.  Graph corresponding to Table 1.
    Table 2.  Number of iteration and CPU time for Case Ⅰ.
    Parameter 1 Parameter 2 Parameter 3
    No of Iter. 14 11 8
    Sec. 40.4460 37.5415 30.9468


    Case Ⅱ: Here, we will show again the convergence behavior of our iterative method (3.1) for three different starting points with the same parameter. We consider the following set of parameters:

    (a) $ m_k = 0.53 $, $ \delta_k = 0.40 $ for all $ k\in \mathbb{N} $ and $ f_1 = 0.2 $,

    (b) $ m_k = 0.53 $, $ \delta_k = 0.40 $ for all $ k\in \mathbb{N} $ and $ f_1 = -0.6 $,

    (c) $ m_k = 0.53 $, $ \delta_k = 0.40 $ for all $ k\in \mathbb{N} $ and $ f_1 = -0.9 $.

    For the three initial values, we have the following Table 3 and Figure 2, respectively.

    Table 3.  Tabular values of AH iteration (3.1) for Case Ⅱ.
    Step Initial Value 1 Initial Value 2 Initial Value 3
    1 0.2000000000 -0.6000000000 -0.9000000000
    2 -0.0193200000 0.0579600000 0.0869400000
    3 0.0018663120 -0.0055989360 -0.0083984040
    4 -0.0001802857 0.0005408572 0.0008112858
    5 0.0000174156 -0.0000522468 -0.0000783702
    6 -0.0000016823 0.0000050470 0.0000075706
    7 0.0000001625 -0.0000004875 -0.0000007313
    8 -0.0000000157 0.0000000471 0.0000000706
    9 0.0000000015 -0.0000000045 -0.0000000068
    10 -0.0000000001 0.0000000004 0.0000000007
    11 0.0000000000 -0.0000000000 -0.0000000001
    12 -0.0000000000 0.0000000000 0.0000000000

    Figure 2.  Graph corresponding to Table 3.
    Table 4.  Number of iteration and CPU time for Case Ⅱ.
    Initial Value 1 Initial Value 2 Initial Value 3
    No of Iter. 11 11 12
    Sec. 35.5363 36.7645 38.4356


    Next, with the aid of Example 3.9, we will show that the AH iterative method (3.1) enjoys a better speed of convergence than many known iterative schemes. We take $ m_k = 0.51 $, $ \delta_k = 0.40 $, $ \theta_k = 0.30 $ and $ f_1 = 0.4 $. In the following Tables 5–7 and Figures 3 and 4, it is clear that the AH iteration process (3.1) converges faster to $ w = 0 $ than the Noor [32], S [3], Abbas [1], Picard-S [22], M [48] and JK [4] iteration processes.

    Table 5.  Comparison of convergence behavior of AH iteration process (3.1) with Noor, S, Abbas iteration processes.
    Step Noor S Abbas AH
    1 0.40000000 0.40000000 0.40000000 0.40000000
    2 0.10624000 -0.23680000 0.06736000 -0.00160000
    3 0.02821734 0.14018560 0.01134342 0.00000640
    4 0.00749453 -0.08298988 0.00191023 -0.00000003
    5 0.00199055 0.04913001 0.00032168 0.00000000
    6 0.00052869 -0.02908496 0.00005417 -0.00000000
    7 0.00014042 0.01721830 0.00000912 0.00000000
    8 0.00003730 -0.01019323 0.00000154 -0.00000000
    9 0.00000991 0.00603439 0.00000026 0.00000000
    10 0.00000263 -0.00357236 0.00000004 -0.00000000
    11 0.00000070 0.00211484 0.00000001 0.00000000
    12 0.00000019 -0.00125198 0.00000000 -0.00000000
    13 0.00000005 0.00074117 0.00000000 0.00000000
    14 0.00000001 -0.00043878 0.00000000 -0.00000000
    15 0.00000000 0.00025975 0.00000000 0.00000000

    Table 6.  Number of iteration and CPU time for various iterative methods.
    Noor S Abbas AH
    No of Iter. 15 40 12 5
    Sec. 46.7935 89.1234 31.4352 15.2456

    Table 7.  Comparison of convergence behavior of AH iteration process (3.1) with Picard-S, M, JK iteration processes.
    Step Picard-S M JK AH
    1 0.40000000 0.40000000 0.40000000 0.40000000
    2 0.23680000 -0.00800000 -0.01600000 -0.00160000
    3 0.14018560 0.00016000 0.00064000 0.00000640
    4 0.08298988 -0.00000320 -0.00002560 -0.00000003
    5 0.04913001 0.00000006 0.00000102 0.00000000
    6 0.02908496 -0.00000000 -0.00000004 -0.00000000
    7 0.01721830 0.00000000 0.00000000 0.00000000
    8 0.01019323 -0.00000000 -0.00000000 -0.00000000
    9 0.00603439 0.00000000 0.00000000 0.00000000
    10 0.00357236 -0.00000000 -0.00000000 -0.00000000
    11 0.00211484 0.00000000 0.00000000 0.00000000
    12 0.00125198 -0.00000000 -0.00000000 -0.00000000
    13 0.00074117 0.00000000 0.00000000 0.00000000
    14 0.00043878 -0.00000000 -0.00000000 -0.00000000
    15 0.00025975 0.00000000 0.00000000 0.00000000

    Figure 3.  Graph corresponding to Table 5.
    Figure 4.  Graph corresponding to Table 7.
    Table 8.  Number of iteration and CPU time for various iterative methods.
    Picard-S M JK AH
    No of Iter. 37 6 7 5
    Sec. 46.7638 17.6372 18.5637 15.2456

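
    The comparison above requires the full definitions of all the competing schemes. As a hedged illustration of the general behavior (not a reproduction of Tables 5–8), the MATLAB sketch below runs only the Picard and Mann iterations, whose definitions are standard ($ x_{k+1}=\mathcal{M}x_k $ and $ x_{k+1}=(1-m_k)x_k+m_k\mathcal{M}x_k $, respectively), alongside (3.1) on the mapping of Example 3.9 with $ m_k = 0.51 $, $ \delta_k = 0.40 $ and $ f_1 = 0.4 $ as in the text.

        % Mapping of Example 3.9; W(f,h,a) = (1-a)*f + a*h on R.
        M  = @(f) (-f).*(f >= -1 & f <= 0 & f ~= -1/5) + (-f/5).*(f > 0 & f <= 1);
        mk = 0.51;  dk = 0.40;
        fP = 0.4;  fM = 0.4;  fA = 0.4;                 % Picard, Mann and AH iterates
        fprintf('step    Picard        Mann          AH\n');
        for k = 1:10
            fP = M(fP);                                 % Picard
            fM = (1-mk)*fM + mk*M(fM);                  % Mann
            q  = (1-dk)*fA + dk*M(fA);                  % AH scheme (3.1)
            v  = M(M(q));   h = M(M(v));
            fA = (1-mk)*h + mk*M(h);
            fprintf('%4d  % .4e  % .4e  % .4e\n', k+1, fP, fM, fA);
        end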

    Integral equations (IEs) are equations in which the unknown functions appear under one or more integral signs [49]. Delay integral equations (DIEs) are IEs in which the unknown function also appears with delayed (earlier) time arguments [8]. DIEs are further classified into two main types, Fredholm DIEs and Volterra DIEs, on the basis of the limits of integration. Fredholm DIEs are those IEs in which the limits of integration are constant, while in Volterra DIEs one of the limits of integration is a constant and the other is a variable. A Volterra-Fredholm DIE consists of disjoint Volterra and Fredholm IEs [49]. DIEs play an important role in mathematics [51]. These equations are used for the modelling of various phenomena, such as systems with memory [6], electric circuits and mechanical systems [9,50]. Several researchers have studied numerical solutions of delay IEs [35,36,37,38,39,54,55].

    In this article, our interest is to approximate the solution of the following nonlinear integral equation with two delays via the new iterative method (3.1):

    $ x(z)=g\Big(z,x(z),x(\alpha(z)),\int_{m}^{n}p(z,\vartheta,x(\vartheta),x(\beta(\vartheta)))\,d\vartheta\Big), $
    (3.33)

    where $ m, n $ are fixed real numbers, $ g:[m, n]\times\mathbb{C}\times\mathbb{C}\times\mathbb{C}\to\mathbb{C} $ and $ p:[m, n]\times[m, n]\times\mathbb{C}\times\mathbb{C}\to\mathbb{C} $ are continuous functions, and $ \alpha, \beta :[m, n]\to[m, n] $ are continuous delay functions which satisfy $ \alpha(\vartheta)\leq \vartheta $ and $ \beta(\vartheta)\leq\vartheta $ for all $ \vartheta\in [m, n] $.

    Let $ I = [m, n] $ $ (m < n) $ be a fixed finite interval and $ \varpi:I\to(0, \infty) $ a nondecreasing function. We will consider the space $ C(I) $ of continuous functions, $ f:I\to \mathbb{C} $, endowed with the Bielecki metric

    $ d(f,h)=\sup\limits_{z\in I}\frac{|f(z)-h(z)|}{\varpi(z)}. $
    (3.34)

    It is well known that $ (C(I), d) $ is a complete metric space [15] and hence, it is a hyperbolic space.

    The following result regarding the existence of a solution of the problem (3.33) was proved by Castro and Simoes [16].

    Theorem 3.10. Let $ \alpha, \beta:I\to I $ be continuous delay functions with $ \alpha(d)\leq d $ and $ \beta(d)\leq d $ for all $ d\in [m, n] $, and let $ \varpi:I\to (0, \infty) $ be a nondecreasing function. Moreover, suppose that there exists $ \eta\in \mathbb{R} $ such that

    $ \int_{m}^{n}\varpi(\vartheta)\,d\vartheta\leq\eta\,\varpi(z), $

    for each $ z\in I $. In addition, assume that $ g:I\times\mathbb{C}\times\mathbb{C}\times\mathbb{C}\to\mathbb{C} $ is a continuous function which satisfies the Lipschitz condition:

    $ |g(z,f(z),f(\alpha(z)),\rho(z))-g(z,h(z),h(\alpha(z)),\varphi(z))|\leq\lambda\big(|f(z)-h(z)|+|f(\alpha(z))-h(\alpha(z))|+|\rho(z)-\varphi(z)|\big), $

    where $ \lambda > 0 $, and the kernel $ p:I\times I\times \mathbb{C}\times\mathbb{C}\to\mathbb{C} $ is a continuous kernel function which fulfills the Lipschitz condition:

    $ |p(z,\vartheta,f(\vartheta),f(\beta(\vartheta)))-p(z,\vartheta,h(\vartheta),h(\beta(\vartheta)))|\leq L\,|f(\beta(\vartheta))-h(\beta(\vartheta))|, $
    (3.35)

    where $ L > 0 $. If $ \lambda(2+L\eta) < 1 $, then the problem (3.33) has a unique solution, say $ w\in C(I) $.

    Next, we will prove that our new iteration process converges strongly to the unique solution of nonlinear integral equation (3.33). For this, we give our main result in this section as follows:

    Theorem 3.11. Let $ C(I) $ be a hyperbolic space with the Bielecki metric. Assume that $ \mathcal{M}:C(I)\to C(I) $ is the mapping defined by

    $ (\mathcal{M}f)(z)=g\Big(z,f(z),f(\alpha(z)),\int_{m}^{n}p(z,\vartheta,f(\vartheta),f(\beta(\vartheta)))\,d\vartheta\Big), $
    (3.36)

    for all $ z\in I $ and $ f\in C(I) $. Suppose all the assumptions in Theorem 3.10 are performed and if $ \{f_k\} $ is the sequence defined by (3.1), then $ \{f_k\} $ converges to the unique solution, say $ w\in C(I) $ of the problem (3.33).

    Proof. By Theorem 3.10, it is shown that (3.33) has a unique solution, so let us assume that $ w $ is the fixed point of $ \mathcal{M} $. We now show that $ f_k\to w $ as $ k\to\infty $. Now, using (3.1), (3.36) and under the present assumptions with respect to the metric (3.34), we have

    $ d(q_k,w)=d(\mathcal{W}(f_k,\mathcal{M}f_k,\delta_k),w) $
    (3.37)
    $ \begin{aligned} &\leq (1-\delta_k)d(f_k,w)+\delta_k d(\mathcal{M}f_k,w)\\ &= (1-\delta_k)d(f_k,w)+\delta_k\sup\limits_{z\in I}\frac{|(\mathcal{M}f_k)(z)-(\mathcal{M}w)(z)|}{\varpi(z)}\\ &\leq (1-\delta_k)d(f_k,w)+\delta_k\lambda\sup\limits_{z\in I}\frac{1}{\varpi(z)}\Big\{|f_k(z)-w(z)|+|f_k(\alpha(z))-w(\alpha(z))|\\ &\qquad\qquad+\int_{m}^{n}\big|p(z,\vartheta,f_k(\vartheta),f_k(\beta(\vartheta)))-p(z,\vartheta,w(\vartheta),w(\beta(\vartheta)))\big|\,d\vartheta\Big\}\\ &\leq (1-\delta_k)d(f_k,w)+\delta_k\lambda\Big\{2\sup\limits_{z\in I}\frac{|f_k(z)-w(z)|}{\varpi(z)}+L\sup\limits_{z\in I}\frac{1}{\varpi(z)}\int_{m}^{n}\varpi(\vartheta)\,\frac{|f_k(\beta(\vartheta))-w(\beta(\vartheta))|}{\varpi(\vartheta)}\,d\vartheta\Big\}\\ &\leq (1-\delta_k)d(f_k,w)+\delta_k\lambda\Big\{2d(f_k,w)+L\,d(f_k,w)\sup\limits_{z\in I}\frac{\eta\,\varpi(z)}{\varpi(z)}\Big\}\\ &= (1-\delta_k)d(f_k,w)+\delta_k\lambda(2+L\eta)d(f_k,w)=[1-(1-\lambda(2+L\eta))\delta_k]\,d(f_k,w). \end{aligned} $
    (3.38)

    Repeating exactly the same estimate for the operator $ \mathcal{M} $ given in (3.36), we also obtain

    $ d(v_k,w)=d(\mathcal{M}^2q_k,w)=d(\mathcal{M}(\mathcal{M}q_k),w)\leq\lambda(2+L\eta)\,d(\mathcal{M}q_k,w), $
    (3.39)
    $ d(\mathcal{M}q_k,w)=\sup\limits_{z\in I}\frac{|(\mathcal{M}q_k)(z)-(\mathcal{M}w)(z)|}{\varpi(z)}\leq\lambda(2+L\eta)\,d(q_k,w), $
    (3.40)
    $ d(h_k,w)=d(\mathcal{M}^2v_k,w)=d(\mathcal{M}(\mathcal{M}v_k),w)\leq\lambda(2+L\eta)\,d(\mathcal{M}v_k,w), $
    (3.41)
    $ d(\mathcal{M}v_k,w)=\sup\limits_{z\in I}\frac{|(\mathcal{M}v_k)(z)-(\mathcal{M}w)(z)|}{\varpi(z)}\leq\lambda(2+L\eta)\,d(v_k,w), $
    (3.42)
    $ \begin{aligned} d(f_{k+1},w) &= d(\mathcal{W}(h_k,\mathcal{M}h_k,m_k),w)\leq(1-m_k)d(h_k,w)+m_k d(\mathcal{M}h_k,w)\\ &\leq (1-m_k)d(h_k,w)+m_k\lambda(2+L\eta)d(h_k,w)=[1-(1-\lambda(2+L\eta))m_k]\,d(h_k,w). \end{aligned} $
    (3.43)

    Combining (3.38)–(3.43), we have

    $ d(f_{k+1},w)\leq[\lambda(2+L\eta)]^4\,[1-(1-\lambda(2+L\eta))m_k]\,[1-(1-\lambda(2+L\eta))\delta_k]\,d(f_k,w). $
    (3.44)

    Since $ \lambda(2+L\eta) < 1 $, $ 0 < \delta_k, m_k < 1 $, it follows that $ [1-(1-\lambda(2+L\eta))\delta_k] < 1 $ and $ [1-(1-\lambda(2+L\eta))m_k] < 1 $. Thus, (3.44) reduces to

    $ d(f_{k+1},w)\leq d(f_k,w). $
    (3.45)

    If we set $ d(f_k, w) = \Omega_k $, then we obtain

    $ \Omega_{k+1}\leq\Omega_k,\qquad\forall k\in\mathbb{N}. $

    Hence, $ \{\Omega_k\} $ is a monotone decreasing and bounded sequence of non-negative real numbers. Moreover, by (3.44) and the fact that $ \lambda(2+L\eta) < 1 $, we have $ \Omega_{k+1}\leq[\lambda(2+L\eta)]^4\,\Omega_k $, so that

    $ \lim\limits_{k\to\infty}\Omega_k=\inf\limits_{k\in\mathbb{N}}\{\Omega_k\}=0. $

    Therefore,

    $ \lim\limits_{k\to\infty}d(f_k,w)=0. $

    This ends the proof.

    Now, we present an example which validates all the conditions given in Theorem 3.10.

    Example 3.12. Consider the following nonlinear integral equation with two delays, for a continuous function $ x:[0, 1]\to\mathbb{R} $:

    $ x(z)=\frac{z^6}{120}-\frac{z^3}{10}+z+\frac{1}{6}x(\alpha(z))+\frac{1}{4}\int_{0}^{z}(\vartheta-z)\,x(\beta(\vartheta))\,d\vartheta,\qquad z\in[0,1]. $
    (3.46)

    Let $ \varpi:[0, 1]\to (0, \infty) $ be the nondecreasing continuous function defined by $ \varpi(z) = 0.0094z+0.0006 $, and let $ \alpha, \beta:[0, 1]\to [0, 1] $ be the continuous delay functions defined by $ \alpha(z) = z^3 $ and $ \beta(z) = z^4 $, respectively. All the assumptions of Theorem 3.10 are fulfilled. Indeed, it is not hard for the reader to see that $ \alpha $ and $ \beta $ are continuous functions such that $ \alpha(z)\leq z $ and $ \beta(z)\leq z $. Moreover, for $ \eta = 0.63951 $, the function $ \varpi $ satisfies

    $ \int_{0}^{z}(0.0094\vartheta+0.0006)\,d\vartheta\leq\eta\,(0.0094z+0.0006)=0.63951\,\varpi(z),\qquad z\in[0,1]. $
    (3.47)

    The function $ g:[0, 1]\times\mathbb{C}\times \mathbb{C}\times\mathbb{C}\to \mathbb{C} $ defined by $ g(z, x(z), x(\alpha(z)), \rho(z)) = \frac{z^6}{120}-\frac{z^3}{10}+z+\frac{1}{6}x(\alpha(z))+\frac{1}{4}\rho(z) $ is a continuous function which satisfies

    $ |g(z,f(z),f(\alpha(z)),\rho(z))-g(z,h(z),h(\alpha(z)),\varphi(z))|\leq\frac{1}{4}\big(|f(z)-h(z)|+|f(\alpha(z))-h(\alpha(z))|+|\rho(z)-\varphi(z)|\big), $

    for all $ z\in[0, 1] $. Observe that $ \lambda = \frac{1}{4} $. Now, the kernel $ p:[0, 1]\times[0, 1]\times \mathbb{C}\times \mathbb{C}\to \mathbb{C} $ which is defined by $ p(z, \vartheta, x(\vartheta), x(\beta(\vartheta))) = ((\vartheta-z)x(\beta(\vartheta))) $ is a continuous function satisfying the following condition

    $ |p(z,\vartheta,f(\vartheta),f(\beta(\vartheta)))-p(z,\vartheta,h(\vartheta),h(\beta(\vartheta)))|\leq L\,|f(\beta(\vartheta))-h(\beta(\vartheta))|,\qquad \vartheta\in[0,z],\ z\in[0,1]. $
    (3.48)

    Clearly, we have that $ L = 1 $. Thus, $ \lambda(2+L\eta) = \frac{263951}{400000} < 1 $. Therefore, we have shown above that all the conditions of Theorem 3.11 hold. Hence, our results are applicable.
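
    To close the example, the following MATLAB sketch approximates the solution of (3.46) by applying the operator $ \mathcal{M} $ of (3.36) within the scheme (3.1). The discretization choices (a uniform grid on $ [0,1] $, linear interpolation for the delayed arguments $ x(z^3) $ and $ x(\vartheta^4) $, cumulative trapezoidal quadrature for the Volterra integral, the zero function as $ f_1 $ and constant parameters $ m_k = 0.7 $, $ \delta_k = 0.3 $) are our own assumptions and are not prescribed by the paper; the reported sup-norm residual $ \|x-\mathcal{M}x\|_\infty $ only checks that the iterates settle on a fixed point of the discretized operator.

        % Discretized operator M of (3.36) for Eq. (3.46) and the AH iteration (3.1).
        z = linspace(0, 1, 201)';                                % uniform grid on I = [0,1]
        T = @(x) z.^6/120 - z.^3/10 + z ...
               + (1/6)*interp1(z, x, z.^3) ...                   % x(alpha(z)), alpha(z) = z^3
               + (1/4)*( cumtrapz(z, z.*interp1(z, x, z.^4)) ... % int_0^z (t - z) x(t^4) dt
                       - z.*cumtrapz(z, interp1(z, x, z.^4)) );
        mk = 0.7;  dk = 0.3;
        x  = zeros(size(z));                                     % f_1: the zero function
        for k = 1:25
            q = (1-dk)*x + dk*T(x);                              % W acts pointwise on C(I)
            v = T(T(q));   h = T(T(v));
            x = (1-mk)*h + mk*T(h);
        end
        fprintf('sup-norm residual ||x - Mx||_inf = %.2e\n', max(abs(x - T(x))));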

    Open Question

    Is it possible to obtain the results in this article for iterative methods involving two or more mappings in the setting of single-valued or multivalued mappings?

    In this article, we have considered the modified version of the AH iteration process as given in (3.1). The data dependence result of the modified AH iteration process (3.1) for almost contraction mappings has been studied. We proved several strong and $ \vartriangle $-convergence results of (3.1) for mappings enriched with condition $ (E) $ in hyperbolic spaces. We provided two nontrivial examples of mappings which satisfy condition $ (E) $. The first example was used to validate our assumptions in Theorem 3.7, and the second was used to study the convergence behavior of the AH iteration process (3.1) for different control parameters and initial values. The second example was equally used to compare the speed of convergence of the AH iteration (3.1) with several existing iteration processes. It was observed that the AH iteration process (3.1) converges faster than the Noor [32], S [3], Abbas [1], Picard-S [22], M [48] and JK [4] iteration processes. Finally, we applied our main results to solve a nonlinear integral equation with two delays. Since hyperbolic spaces are more general than Banach spaces and, by Remark 1.4, the class of mappings satisfying condition $ (E) $ is more general than those considered in Ahmad et al. [4,5] and Ofem et al. [33], it follows that our results generalize and extend the results of Ahmad [4,5], Ofem et al. [33] and several other related results in the existing literature.

Data availability

    The data used to support the findings of this study are included within the article.

Acknowledgments

    This study was supported by funding from Prince Sattam bin Abdulaziz University (project No. PSAU/2023/R/1444).

Conflict of interest

    The authors declare no conflict of interest.
