1.
Introduction
Throughout this paper, we assume that Δ is a non-empty, closed and convex subset (CCS) of a Banach space (BS) Λ, R+ is the set of nonnegative real numbers and N is the set of natural numbers. In addition, the symbol ⇀ refers to the weak convergence and ⟶ to the strong convergence.
The set of all fixed points (FPs) of an operator Ξ:Δ→Δ is denoted by Υ(Ξ)={ν∈Δ:ν=Ξν}.
Let Ξ:Δ→Δ be a self-mapping. Then Ξ is called:
(1) a contraction if there exists a constant α∈[0,1) such that d(Ξν,Ξϖ)≤αd(ν,ϖ), for all ν,ϖ∈Δ;
(2) nonexpansive if d(Ξν,Ξϖ)≤d(ν,ϖ), for all ν,ϖ∈Δ.
Clearly, every contraction mapping is nonexpansive; the nonexpansive case corresponds to allowing α=1.
FP techniques are applied in many concrete settings, owing to their simplicity and flexibility, such as optimization theory, approximation theory, fractional calculus, dynamical systems, and game theory; this is why researchers are attracted to them. The technique also plays a significant role in nonlinear analysis and many engineering sciences. One important trend in FP methods is the study of the behavior and performance of algorithms, which contributes greatly to real-world applications.
One of the well-established principles of the FP theory is Banach's contraction principle (BCP). This principle is significant as a source of existence and uniqueness theorem in various parts of science. BCP depends on the Picard one-step iteration, which is given by:
where Ξ is a contraction mapping defined on a complete metric space (MS). Although BCP guarantees existence and uniqueness in complete MSs, it does not apply well to nonexpansive mappings, since Picard's iteration may fail to converge to a FP for them. Hence, many authors created iterative methods for approximating FPs with improved performance and convergence behavior for nonexpansive mappings. Moreover, data-dependence results and stability results with respect to Ξ via these methods have been introduced. For more details, we refer to iterative methods such as those of Mann [1], Ishikawa [2], Noor [3], Agarwal et al. [4], and Abbas and Nazir [5]; in addition, the SP iteration [6], S∗-iteration [7], CR-iteration [8], normal-S iteration [9], Picard-S iteration [10], Thakur iteration [11], M-iteration [12], M∗-iteration [13], Garodia and Uddin iteration [14], and two-step Mann iteration [15]. For more applications involving iterative methods, see Hasanen et al. [16,17], among many others.
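To fix ideas, the simplest of these schemes, the Mann iteration [1], updates νi+1=(1−ηi)νi+ηiΞνi. A minimal sketch follows; the contraction and the step sizes are our own illustrative choices, not taken from the cited papers.

```python
# Mann one-step iteration: x_{i+1} = (1 - eta_i) * x_i + eta_i * Xi(x_i).
def mann(Xi, x0, eta, n_steps):
    x = x0
    for i in range(n_steps):
        x = (1 - eta(i)) * x + eta(i) * Xi(x)
    return x

# Illustrative contraction with alpha = 1/2 and fixed point 0.
Xi = lambda x: x / 2
approx = mann(Xi, 1.0, lambda i: 0.5, 80)
print(approx)  # converges to the fixed point 0
```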
Assume that {ηi} and {γi} are nonnegative sequences in [0,1]. The algorithms below are known as the S algorithm [4], Picard-S algorithm [10], Thakur algorithm [11] and K∗-algorithm [18], respectively:
In 2014, Gursoy and Karakaya [10] presented the iterative method (1.2) and called it the Picard-S iteration. They proved numerically and analytically that the Picard-S iteration converges faster than Picard, Mann, Ishikawa, Noor, SP, CR, S, S*, Abbas and Nazir, normal-S and two-step Mann iteration procedures for almost contraction mappings (ACMs).
In 2016, Thakur et al. [11] illustrated that the iteration (1.3) converges faster than Picard, Mann, Ishikawa, Agarwal, Noor and Abbas iteration for Suzuki generalized nonexpansive mappings (SGNMs) by a numerical example.
Recently, Ullah and Arshad [18] gave K∗-algorithm (1.4) and proved that K∗-algorithm (1.4) converges faster than S algorithm (1.1), Picard-S algorithm (1.2) and Thakur algorithm (1.3) for SGNMs. Moreover, they noted that the Picard-S iteration (1.2) and Thakur iteration (1.3) have the same rate of convergence.
On the other hand, nonlinear integral equations are used to describe mathematical models arising in mathematical physics, engineering, economics, biology, etc. [19]. In particular, Volterra-Fredholm equations arise from boundary value problems and from mathematical modeling of the spatiotemporal evolution of epidemics. For various biological models, see [20,21]. Recently, a large number of researchers have turned to solving nonlinear integral equations with iterative methods; for example, see [22,23,24,25,26].
Based on the works mentioned above, in this manuscript we construct a new algorithm with a faster convergence rate for ACMs and SGNMs as follows:
for each i≥1, where αi, ηi and γi are sequences in [0,1].
Our work is organized as follows: In section 2, we give some definitions, propositions, and lemmas, which facilitate the reader's understanding of our results. In section 3, the performance and convergence rate of our algorithm are analyzed analytically, and we find that the convergence rate is satisfactory for ACMs in a BS. Moreover, the weak and strong convergence of the proposed algorithm is discussed for SGNMs in the context of uniformly convex Banach spaces (UCBSs) in section 4. In section 5, we prove that our new iterative algorithm is Ξ-stable. Further, data-dependence results for ACMs under our iterative scheme (1.5) are studied in section 6. In addition, we present two examples in section 7 to show that our method is faster than the iteration schemes (1.1)–(1.4). Ultimately, in section 8, the proposed algorithm is applied to find the solution of a Volterra-Fredholm integral equation. In section 9, conclusions and future works are given.
2.
Preliminaries
This part is intended to give some definitions, propositions and lemmas that will assist the reader in understanding our manuscript and will be useful in the sequel.
Definition 2.1. A mapping Ξ:Λ→Λ is called a SGNM if, for all ν,ϖ∈Λ,
(1/2)‖ν−Ξν‖≤‖ν−ϖ‖ implies ‖Ξν−Ξϖ‖≤‖ν−ϖ‖.
Definition 2.2. A BS Λ is called uniformly convex if for each ϵ∈(0,2], there exists δ>0 such that for ν,ϖ∈Λ satisfying ‖ν‖≤1, ‖ϖ‖≤1 and ‖ν−ϖ‖>ϵ, we get ‖(ν+ϖ)/2‖<1−δ.
Definition 2.3. A BS Λ is said to satisfy Opial's condition if, for any sequence {νi} in Λ with νi⇀ν∈Λ,
lim infi→∞‖νi−ν‖ < lim infi→∞‖νi−ϖ‖, for all ϖ∈Λ with ϖ≠ν.
Definition 2.4. Let {νi} be a bounded sequence in a BS Λ. For ν∈Λ, we set
r(ν,{νi})=lim supi→∞‖νi−ν‖.
The asymptotic radius of {νi} relative to Λ is defined by
R(Λ,{νi})=inf{r(ν,{νi}):ν∈Λ}.
The asymptotic center of {νi} relative to Λ is defined by
Q(Λ,{νi})={ν∈Λ:r(ν,{νi})=R(Λ,{νi})}.
It should be noted that Q(Λ,{νi}) consists of exactly one point in a UCBS.
Definition 2.5. [27] A mapping Ξ:Λ→Λ is called an ACM if there exist θ∈(0,1) and a constant ℓ≥0 so that
Definition 2.6. [28] Suppose that the real sequences {ηi} and {γi} converge to η and γ, respectively. Suppose also that the limit
ϰ = limi→∞ (‖ηi−η‖/‖γi−γ‖)
exists.
Then, we say that
(1) the sequence {ηi} converges faster to η than {γi} does to γ, if ϰ=0,
(2) {ηi} and {γi} have the same rate of convergence, if ϰ∈(0,∞).
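For instance, with the toy sequences ηi=2⁻ⁱ and γi=1/i (our own illustrative choices, not from [28]), both converging to 0, the ratio in Definition 2.6 tends to 0, so {ηi} converges faster in this sense:

```python
# Ratio |eta_i - 0| / |gamma_i - 0| = i / 2**i for eta_i = 2**(-i) and
# gamma_i = 1/i; the ratio tends to 0, so eta_i converges faster (Def. 2.6).
ratios = [(2.0 ** -i) / (1.0 / i) for i in range(1, 60)]
print(ratios[0], ratios[-1])  # the ratio shrinks toward 0
```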
Definition 2.7. [28] Suppose that Ξ,ˆΞ:Δ→Δ are given operators. An operator ˆΞ is called an approximate operator for Ξ if, for some ϵ>0, we get ‖Ξν−ˆΞν‖≤ϵ, for all ν∈Δ.
Definition 2.8. [29] Let ℷ:R+→R+ be a nondecreasing function satisfying ℷ(0)=0 and ℷ(s)>0 for all s>0. A mapping Ξ:Λ→Λ is said to satisfy condition (I) if
‖ν−Ξν‖≥ℷ(d(ν,Υ(Ξ))), for all ν∈Λ,
where d(ν,Υ(Ξ))=inf{‖ν−ζ‖:ζ∈Υ(Ξ)}.
Proposition 2.1. [30] Let Ξ:Λ→Λ be a given map. If Ξ is
(1) a nonexpansive mapping, then it is a SGNM;
(2) a SGNM with a non-empty FP set, then it is a quasi-nonexpansive mapping;
(3) a SGNM, then it satisfies the inequality
‖ν−Ξϖ‖≤3‖Ξν−ν‖+‖ν−ϖ‖, for all ν,ϖ∈Λ.
Lemma 2.1. [30] Let Δ be a subset of a BS Λ satisfying Opial's condition and let Ξ:Δ→Δ be a SGNM. If {νi}⇀ζ and limi→∞‖Ξνi−νi‖=0, then Ξζ=ζ, i.e., I−Ξ is demiclosed at zero.
Lemma 2.2. [30] Let Δ be a weakly compact convex subset of a BS Λ with Opial's property. If Ξ:Δ→Δ is a SGNM, then Ξ possesses a FP.
Lemma 2.3. [28] Assume that {φi} and {φ∗i} are nonnegative real sequences satisfying the inequality
φi+1≤(1−zi)φi+φ∗i,
where zi∈(0,1) for each i≥1, ∞∑i=0 zi=∞ and limi→∞ (φ∗i/zi)=0; then limi→∞φi=0.
Lemma 2.4. [31] Let {φi} and {φ∗i} be nonnegative real sequences satisfying the inequality
φi+1≤(1−zi)φi+ziφ∗i,
where zi∈(0,1) for each i≥1, ∞∑i=0 zi=∞ and φ∗i≥0; then
0≤lim supi→∞ φi≤lim supi→∞ φ∗i.
Lemma 2.5. [32] Assume that Λ is a UCBS and {ξi} is a sequence satisfying 0<n≤ξi≤n∗<1, for all i∈N. Assume also that {ϖi} and {νi} are two sequences in Λ such that lim supi→∞‖ϖi‖≤ρ, lim supi→∞‖νi‖≤ρ and limi→∞‖ξiϖi+(1−ξi)νi‖=ρ for some ρ≥0. Then limi→∞‖ϖi−νi‖=0.
3.
Convergence rate
In this section, we discuss the rate of convergence of our iterative algorithm for ACMs.
Theorem 3.1. Assume that Λ is a BS and Δ is a closed convex subset (CCS) of Λ. Let Ξ:Λ→Λ be a mapping satisfying (2.1) with Υ(Ξ)≠∅. Suppose that {νi} is the iterative sequence generated by (1.5) with {αi},{ηi},{γi}∈[0,1] such that ∞∑i=0γi=∞. Then {νi}⟶ζ∈Υ(Ξ).
Proof. Consider ζ∈Υ(Ξ), then from (1.5), we get
Using (1.5) and (3.1), we have
Similarly, from (1.5) and (3.2), one can write
It follows from (1.5) and (3.3) that
Because θ∈(0,1) and ηi,αi∈[0,1], for all i≥1, then (1−(1−θ)ηi)(1−(1−θ)αi)<1. Hence (3.4) reduces to
It follows from (3.5) that
From (3.6), we have
Again, since θ∈(0,1) and γu∈[0,1] for all u≥1, we have (1−(1−θ)γu)<1. It is known that 1−ν≤e−ν for each ν∈[0,1]; hence, by (3.7), we obtain
Taking the limit as i→∞ in (3.8), we have limi→∞‖νi−ζ‖=0, that is {νi}⟶ζ∈Υ(Ξ).
For uniqueness, let ζ,ζ∗∈Υ(Ξ) be such that ζ≠ζ∗. From the definition of Ξ, we can write
a contradiction. Hence ζ=ζ∗.
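The exponential bound driving this argument, namely that the product ∏(1−(1−θ)γu) is dominated by exp(−(1−θ)∑γu), which vanishes when ∑γi=∞, can be checked numerically; θ and γu below are illustrative choices, not values taken from the paper.

```python
import math

# Check that prod(1 - (1-theta)*g_u) <= exp(-(1-theta)*sum(g_u)),
# the estimate used in Theorem 3.1, with theta = 0.5 and gamma_u = 1/(u+1).
theta = 0.5
gammas = [1.0 / (u + 1) for u in range(1, 2000)]
prod = 1.0
for g in gammas:
    prod *= 1.0 - (1.0 - theta) * g
bound = math.exp(-(1.0 - theta) * sum(gammas))
print(prod, bound)  # prod stays below the bound, and both shrink toward 0
```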
The next theorem shows that our iteration (1.5) converges faster than the iteration (1.4) in the sense of Berinde [28].
Theorem 3.2. Let Δ be a CCS of a BS Λ and let Ξ:Λ→Λ be a mapping satisfying (2.1) with Υ(Ξ)≠∅. Let {νi} be the iterative sequence generated by the algorithm (1.5) with {αi},{ηi},{γi}∈[0,1] such that 0<γ≤γi≤1, for all i≥1. Then the sequence {νi} converges faster to ν than the iterative scheme (1.4) does.
Proof. It follows from (3.7) and the hypothesis 0<γ≤γi≤1 that
Analogously, the iterative process (1.4) ([18], Theorem 3.2) takes the form
Because γ≤γi≤1, for some γ>0 and all i≥1, then (3.9) can be written as
Put
and
Define
Taking the limit as i→∞, we have limi→∞ϱi=0. This means that {νi} converges faster than {li} to ν.
4.
Weak and strong convergence
This part is devoted to some convergence results for our iteration procedure (1.5) for SGNMs in the setting of UCBSs.
We begin with the proof of the following lemmas:
Lemma 4.1. Let Δ be a CCS of a BS Λ and Ξ:Λ→Λ be a SGNM with Υ(Ξ)≠∅. If {νi} is the iterative sequence given by the algorithm (1.5), then limi→∞‖νi−ζ‖ exists for each ζ∈Υ(Ξ).
Proof. Let ζ∈Υ(Ξ) and δ∈Δ. By Proposition 2.1 (2), every SGNM with a non-empty FP set is a quasi-nonexpansive mapping, so
Now, using (1.5), we obtain
It follows from (1.5) and (4.1) that
Similarly, from (1.5) and (4.2), one can write
Finally, using (1.5) and (4.2), we get
This shows that the sequence {‖νi−ζ‖} is bounded and nonincreasing for all ζ∈Υ(Ξ). Hence limi→∞‖νi−ζ‖ exists.
Lemma 4.2. Let Δ be a non-empty CCS of a UCBS Λ and Ξ:Λ→Λ be a SGNM. Let {νi} be the iterative sequence defined by the algorithm (1.5). Then Υ(Ξ)≠∅ if and only if {νi} is bounded and limi→∞‖Ξνi−νi‖=0.
Proof. Assume that Υ(Ξ)≠∅ and take ζ∈Υ(Ξ). Based on Lemma 4.1, limi→∞‖νi−ζ‖ exists and {νi} is bounded. Put
Applying (4.3) in (4.1) and taking limsup, we can write
According to Proposition 2.1 (2), we have
Again, using (1.5) and (4.1)–(4.3), we obtain
Thus, we have
Since ηi∈[0,1], then by (4.5), we get
which implies that ‖νi+1−ζ‖≤‖ϖi−ζ‖. Then from (4.3), we have
It follows from (4.4) and (4.6) that
From (4.3), (4.4), (4.7) and Lemma 2.5, we have limi→∞‖Ξνi−νi‖=0.
Conversely, suppose that {νi} is bounded and limi→∞‖Ξνi−νi‖=0. Suppose also that ζ∈Q(Λ,{νi}); then by Definition 2.4 and Proposition 2.1 (3), we can write
which leads to Ξζ∈Q(Λ,{νi}). Since Q(Λ,{νi}) is a singleton, as Λ is uniformly convex, we get Ξζ=ζ.
Theorem 4.1. Let Λ, Δ and Ξ be as in Lemma 4.2. If Λ satisfies Opial's condition and Υ(Ξ)≠∅, then the sequence {νi} iterated by (1.5) converges weakly to a FP of Ξ, that is {νi}⇀ζ∈Υ(Ξ).
Proof. Let ζ∈Υ(Ξ), based on Lemma 4.1, limi→∞‖νi−ζ‖ exists.
Now, we prove that {νi} has a unique weak sequential limit in Υ(Ξ). Let {νij} and {νik} be subsequences of {νi} such that {νij}⇀ν and {νik}⇀ν∗ for some ν,ν∗∈Δ. Using Lemma 4.2, we obtain limi→∞‖Ξνi−νi‖=0. By Lemma 2.1, I−Ξ is demiclosed at zero, so (I−Ξ)ν=0⇒Ξν=ν; analogously, Ξν∗=ν∗. For uniqueness, suppose ν≠ν∗; then by Opial's property, we find that
a contradiction, so ν=ν∗ and {νi}⇀ζ∈Υ(Ξ).
Theorem 4.2. Let Λ be a UCBS and Δ be a non-empty compact convex subset of Λ. Assume that Ξ:Δ→Δ is SGNM and {νi} is the iterative sequence given by (1.5). Then {νi}⟶ζ∈Υ(Ξ).
Proof. According to Lemmas 2.2 and 4.2, we have Υ(Ξ)≠∅ and limi→∞‖Ξνi−νi‖=0. The compactness of Δ implies that there is a subsequence {νik} of {νi} so that νik→ζ for some ζ∈Δ. Using Proposition 2.1 (3), one can get
Letting k→∞, we obtain Ξζ=ζ, that is, ζ∈Υ(Ξ). Also, by Lemma 4.1, limi→∞‖νi−ζ‖ exists for all ζ∈Υ(Ξ); thus νi⟶ζ strongly.
Theorem 4.3. Let Λ, Δ and Ξ be described as Lemma 4.2 and {νi} be an iterative sequence defined by (1.5). Then
where d(ν,Υ(Ξ))=inf{‖ν−ζ‖:ζ∈Υ(Ξ)}.
Proof. Necessity is clear. Conversely, let lim infi→∞d(νi,Υ(Ξ))=0. By Lemma 4.1, limi→∞‖νi−ζ‖ exists for all ζ∈Υ(Ξ), which implies that limi→∞d(νi,Υ(Ξ)) exists. Since lim infi→∞d(νi,Υ(Ξ))=0, we have limi→∞d(νi,Υ(Ξ))=0.
Next, we show that {νi} is a Cauchy sequence in Δ. Because limi→∞d(νi,Υ(Ξ))=0, for a given ϵ>0 there is i0≥1 so that, for each i,j≥i0, we obtain
Hence, we get
This proves that {νi} is a Cauchy sequence in Δ. As Δ is closed, there is ˆν∈Δ so that limi→∞νi=ˆν. From limi→∞d(νi,Υ(Ξ))=0, it follows that d(ˆν,Υ(Ξ))=0. Thus, ˆν∈Υ(Ξ) since Υ(Ξ) is closed. This finishes the proof.
Theorem 4.4. Let Λ, Δ and Ξ be defined in Lemma 4.2 and {νi} be an iterative sequence generated by (1.5). If Ξ fulfills the condition (I), then {νi}⟶ζ∈Υ(Ξ).
Proof. It follows from Lemma 4.2 that
Based on the condition (I) in Definition 2.8 and using (4.8), we observe that
which leads to limi→∞ℷ(d(νi,Υ(Ξ)))=0. From the definition of ℷ, we get limi→∞d(νi,Υ(Ξ))=0. Applying Theorem 4.3, we conclude that {νi}⟶ζ∈Υ(Ξ).
5.
Stability theorem
In this part, we show that our iteration process (1.5) is Ξ-stable.
Theorem 5.1. Let Λ be a BS and Δ be a CCS of Λ. Suppose that Ξ:Λ→Λ is a self-mapping satisfying (2.1) and {νi} is the iterative sequence generated by (1.5) with {αi},{ηi},{γi}∈[0,1] so that ∞∑i=0γi=∞. Then the algorithm (1.5) is Ξ-stable.
Proof. Let {σi} be an arbitrary sequence in Δ. Write our iteration (1.5) as νi+1=f(Ξ,νi), which converges to the unique FP ζ, and set ϕi=‖σi+1−f(Ξ,σi)‖. To show that the iteration is Ξ-stable, we must prove that limi→∞ϕi=0 if and only if limi→∞σi=ζ.
Assume that limi→∞ϕi=0, then from (1.5) and (3.5), we get
for i≥1, set
Because limi→∞ϕi=0, then limi→∞φ∗izi=limi→∞ϕizi=0. Thus, all requirements of Lemma 2.3 are fulfilled. Hence limi→∞‖σi−ζ‖=0, that is limi→∞σi=ζ.
Conversely, assume that limi→∞σi=ζ, then, we get
taking the limit, we have limi→∞ϕi=0. This finishes the proof.
The following example supports Theorem 5.1.
Example 5.1. Consider Λ=[0,1] and Ξν=ν/2. Clearly, 0 is a FP of Ξ. To prove that Ξ satisfies condition (2.1), let θ=1/2; then for each ℓ≥0, we get
Now, we illustrate that our algorithm (1.5) is Ξ-stable. Suppose that αi=ηi=γi=1/(i+1) and ν0∈[0,1]; then we obtain
Take σi=7/8+3/(2(i+1))−3/(4(i+1)²)+1/(8(i+1)³). It is clear that σi∈(0,1) for all i∈N and ∞∑i=0σi=∞. According to Lemma 2.3, limi→∞νi=0. Choose ℏi=1/(i+2); then we get
As i→∞, we have limi→∞ϕi=0. This implies that our iterative scheme is Ξ-stable.
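The stability notion itself can be illustrated numerically with the map of Example 5.1. Since the full scheme (1.5) is not reproduced here, plain Picard iteration f(Ξ,ν)=Ξν stands in for it in this sketch; the perturbed sequence σi is our own choice.

```python
# Xi-stability illustration for Xi(v) = v/2 (Example 5.1, fixed point 0),
# using Picard iteration f(Xi, v) = Xi(v) as a stand-in for scheme (1.5).
Xi = lambda v: v / 2

# An arbitrary sequence sigma_i -> 0 and the residuals
# phi_i = |sigma_{i+1} - f(Xi, sigma_i)| from the stability definition.
sigma = [1.0 / (i + 1) for i in range(200)]
phi = [abs(sigma[i + 1] - Xi(sigma[i])) for i in range(len(sigma) - 1)]
print(phi[0], phi[-1])  # phi_i -> 0, consistent with sigma_i -> 0
```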
6.
Data-dependence theorem
In this part, we present a data-dependence result for an operator Ξ satisfying the inequality (2.1) via our iterative algorithm (1.5).
Theorem 6.1. Let ˆΞ be an approximate operator for a mapping Ξ fulfilling (2.1). Suppose that {νi} is the iterative sequence given by (1.5) for Ξ. Define an iterative sequence {ˆνi} for ˆΞ as follows:
for all i≥1, where {αi}, {ηi} and {γi} are sequences in [0,1] such that
(p1) for all i≥1, 2γi≥1,
(p2) ∞∑i=0γi=∞.
If Ξζ=ζ and ˆΞˆζ=ˆζ so that limi→∞ˆνi=ˆζ, then
for any fixed number ϵ>0.
Proof. It follows from (2.1), (1.5), (6.1) and Definition 2.7 that
Again, using (2.1), (1.5), (6.1) and Definition 2.7, we have
Applying (6.2) on (6.3), we get
Similar to (6.3), one can write
Applying (6.4) on (6.5), we have
it follows that
Finally, from (2.1), (1.5) and (6.6), we get
Since αi,ηi,γi∈[0,1] and θ∈(0,1), we conclude that
Applying (6.8) in (6.7), we have
From the condition 2γi≥1, we can obtain
Put φi=‖νi−ˆνi‖, zi=(1−θ)γi∈(0,1) and
From Theorem 3.1, we know that limi→∞νi=ζ and, since Ξζ=ζ, we have
Hence, from Lemma 2.4, we get
Since limi→∞νi=ζ and, by hypothesis, limi→∞ˆνi=ˆζ, the inequality (6.9) leads to
This completes the proof.
7.
Numerical examples
The following example supports the analytical results obtained in Theorem 3.2 and examines the performance and speed of our algorithm compared with previous algorithms.
Example 7.1. Let Λ=R, Δ=[0,50], and Ξ:Δ→Δ be a mapping defined by
Clearly, 6.0000 is a FP of the mapping Ξ. Take αi=ηi=γi=1/(5(i+2)) with different initial values. Then we obtain the following tables (see Tables 1–3) and graphs (see Figures 1–3) comparing the various iterative methods.
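The explicit formula of Ξ in Example 7.1 is not reproduced in the extracted text. As a hypothetical stand-in with the same fixed point 6.0000 on [0,50], the sketch below uses Ξν=√(ν+30) (a contraction there, since |Ξ′(ν)|≤1/(2√30)<1) to contrast Picard with Mann iteration under the example's control sequence.

```python
import math

# Hypothetical stand-in for Example 7.1's mapping: fixed point 6 on [0, 50].
Xi = lambda v: math.sqrt(v + 30.0)

def picard(x, n):
    for _ in range(n):
        x = Xi(x)
    return x

def mann(x, n, eta=lambda i: 1.0 / (5 * (i + 2))):  # control sequence of Example 7.1
    for i in range(n):
        x = (1 - eta(i)) * x + eta(i) * Xi(x)
    return x

print(picard(40.0, 20), mann(40.0, 20))  # Picard is already much closer to 6
```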
In the next example, we consider a mapping Ξ that is a SGNM but not nonexpansive, and we show under certain control conditions that our algorithm (1.5) outperforms some of the leading iterative methods in the previous literature in terms of convergence speed.
Example 7.2. Consider a mapping Ξ:[0,1]→[0,1] described by
Ξν = 1−ν if ν∈[0,1/14), and Ξν = (ν+13)/14 if ν∈[1/14,1].
Now, we illustrate that Ξ is a SGNM but not nonexpansive. Set ν=7/100 and ϖ=1/14; then we get
and
It follows that ‖Ξν−Ξϖ‖=9/2450>1/700=‖ν−ϖ‖. Thus Ξ is not a nonexpansive mapping. To prove that Ξ is a SGNM, we consider the cases below:
(1) If ν∈[0,1/14), then
For (1/2)‖ν−Ξν‖≤‖ν−ϖ‖, we must have (1−2ν)/2≤|ν−ϖ|. Clearly ϖ<ν is impossible, so we must take ϖ>ν. Thus (1−2ν)/2≤ϖ−ν, which yields ϖ≥1/2 and hence ϖ∈[1/2,1]. Now,
and
Hence
(2) If ν∈[1/14,1], then
For (1/2)‖ν−Ξν‖≤‖ν−ϖ‖, we have (13−13ν)/28≤|ν−ϖ|, which leads to the following possibilities:
(i) When ν<ϖ, we have
So
Hence
(ii) When ν>ϖ, we get
Because ϖ∈[0,1] and ϖ≤(41ν−13)/28, we can write ν≥(28ϖ+13)/41⇒ν∈[13/41,1].
It should be noted that the case ν∈[13/41,1] and ϖ∈[1/14,1] is similar to case (i), so we discuss the case ν∈[13/41,1] and ϖ∈[0,1/14). We have
and ‖ν−ϖ‖=|ν−ϖ|>|13/41−1/14|=141/574>1/14.
Thus (1/2)‖ν−Ξν‖≤‖ν−ϖ‖⇒‖Ξν−Ξϖ‖≤‖ν−ϖ‖.
Hence Ξ is a SGNM.
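The failure of nonexpansiveness at ν=7/100, ϖ=1/14 can be verified in exact arithmetic. The piecewise formula below is reconstructed from the case analysis above (‖ν−Ξν‖=1−2ν on [0,1/14) and 13(1−ν)/14 on [1/14,1]); it is an inference, since the original display is missing from the extracted text.

```python
from fractions import Fraction

# Reconstructed mapping of Example 7.2 (inferred from the case analysis).
def Xi(v):
    if v < Fraction(1, 14):
        return 1 - v
    return (v + 13) / 14

nu, w = Fraction(7, 100), Fraction(1, 14)
lhs, rhs = abs(Xi(nu) - Xi(w)), abs(nu - w)
print(lhs, rhs, lhs > rhs)  # 9/2450 > 1/700, so Xi is not nonexpansive
```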
Now, we discuss the behavior of the iterative scheme (1.5) and illustrate that it is faster than the S, Thakur and K∗ iteration procedures, using the control sequences αi=ηi=γi=i/(i+1).
Remark 7.1. The effectiveness and success of an iterative method are measured by two main factors: the time and the number of iterations. Obtaining strong convergence in a short time with as few iterations as possible saves effort and time in many problems of optimization and variational inequalities. Based on the tables (see Tables 4 and 5) and figures (see Figures 4 and 5), it is clear that our method is successful and that the behavior of our algorithm compares favorably with some leading iterations in this direction.
8.
Solve Volterra-Fredholm integral equation
In this part, we apply our algorithm (1.5) to solve the Volterra-Fredholm integral equation suggested by Lungu and Rus [23].
Consider the following problem:
for all ν,ϖ∈R+. Assume that (Γ,|·|) is a BS, s>0 and
Define the norm on χs as follows:
It follows from [33] that (χs,‖ξ‖s) is a BS.
The following theorem helps us for proving our main result in this section.
Theorem 8.1. [23] Assume that the postulates below are satisfied:
(Pi) ℵ∈C(R+²×Γ,Γ) and ℧∈C(R+⁴×Γ,Γ);
(Pii) there are z:χs→χs and πz>0 so that
for all ν,ϖ∈R+ and ξ,ξ∗∈χs;
(Piii) for all ν,ϖ∈R+ and c,c∗∈Γ, there is πℵ>0 so that
(Piv) for all ν,ϖ,κ∗,τ∗∈R+ and c,c∗∈Γ, there is π℧(ν,ϖ,κ∗,τ∗)>0 so that
(Pv) π℧∈C(R+⁴,R+) and
for all ν,ϖ∈R+;
(Pvi) πzπℵ+π<1.
Then the problem (8.1) has a unique solution ζ∈χs and the iterative sequence
for all i≥1 converges uniformly to ζ.
Now, under the above hypotheses, we can present our main theorem as follows:
Theorem 8.2. Let {νi} be the iterative sequence generated by (1.5) with sequences {αi},{ηi},{γi}∈[0,1] so that ∞∑i=0γi=∞. If the postulates (Pi)–(Pvi) of Theorem 8.1 hold, then the problem (8.1) has a unique solution ζ∈χs and the proposed algorithm (1.5) converges strongly to ζ.
Proof. Let {νi} be the iterative sequence generated by (1.5) and define the operator H:χs→χs by
We shall prove that limi→∞‖νi−ζ‖s=0. Based on (1.5), we get
Now,
Thus,
Again
and
Thus
Similarly, one can write
Since
Now
and
Thus
Applying (8.6) in (8.5), we have
Using (8.4) and (8.7), we get
By the same manner, (8.3) can be written as
From (8.8) in (8.9), we find that
Applying (8.10) in (8.2), we get
Using (1.5) and similar to (8.6), we obtain that
Substituting from (8.12) in (8.11), we find that
Since αi,ηi∈[0,1] and πℵπz+π<1, we have
by induction, we can write
It follows from postulate (Pvi) and γr∈[0,1] that
From classical analysis, we know that 1−ν≤e−ν for ν∈[0,1]. Therefore, (8.13) takes the form
which implies that limi→∞‖νi−ζ‖s=0. This completes the proof.
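The contraction mechanism behind this proof, successive approximations converging to the unique solution, can be demonstrated on a toy one-dimensional Fredholm equation (our own example, not the two-variable problem (8.1)): x(t)=1+(1/2)∫₀¹ t·s·x(s)ds, whose exact solution is x(t)=1+3t/10.

```python
import numpy as np

# Successive approximations for the toy Fredholm equation
#     x(t) = 1 + (1/2) * integral_0^1 t * s * x(s) ds,
# with exact solution x(t) = 1 + 3t/10.
t = np.linspace(0.0, 1.0, 201)
x = np.ones_like(t)                       # initial guess x_0(t) = 1
for _ in range(50):
    f = t * x                             # integrand s * x(s) on the grid
    integral = float(np.sum((f[1:] + f[:-1]) / 2 * np.diff(t)))  # trapezoid rule
    x = 1.0 + 0.5 * t * integral
print(x[-1])  # approximately 1.3 = 1 + 3/10 at t = 1
```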
9.
Conclusions and future works
It is well known that the efficiency and effectiveness of iterative methods are measured by two main factors: the first is the speed of convergence and the second is the number of iterations; if convergence is faster with fewer iterations, the method is successful in approximating fixed points. So, in this article, we demonstrated analytically and numerically that our algorithm performs better than some of the main iterative techniques in the previous literature [4,10,11,18] in terms of convergence speed.
Also, the superiority in speed of convergence, as well as the stability and data-dependence results, were displayed in comparison graphs and calculations. Moreover, our approach was supported by the solution of an integral problem as an application. In light of the references [4,10,11,18], our method is therefore successful and effective. Finally, as future works for this paper, we propose the following:
(1) If we define a mapping Ξ on a Hilbert space Δ endowed with an inner product, we can find a common solution to the variational inequality problem by using our iteration (1.5). This problem can be stated as follows: find ℘∗∈Δ such that
where Ξ:Δ→Δ is a nonlinear mapping. Variational inequalities are an important and essential modeling tool in many fields such as engineering mechanics, transportation, economics, and mathematical programming; see [34,35].
(2) We can generalize our algorithm to gradient and extragradient projection methods; these methods are very important for finding saddle points and solving many problems in optimization, see [36].
(3) We can accelerate the convergence of the proposed algorithm by adding shrinking-projection and CQ terms. These methods stimulate algorithms and improve their performance to obtain strong convergence; for more details, see [1,37,38].
(4) If we consider the mapping Ξ as an α-inverse strongly monotone operator and add an inertial term to our algorithm, we obtain an inertial proximal point algorithm. This algorithm is used in many applications such as monotone variational inequalities, image restoration problems, convex optimization problems, and split convex feasibility problems, see [40,41,42,43]. These problems can be expressed as mathematical models, such as machine learning and the linear inverse problem.
(5) We can also use our algorithm to solve second-order differential equations and fractional differential equations, since such equations can be converted into integral equations by means of Green's functions; they can then be treated with the same approach used in section 8.
(6) We can try to determine the error of our present iteration.
Acknowledgments
M. Zayed extends her appreciation to the Deanship of Scientific Research at King Khalid University, Saudi Arabia, for supporting this work through the research groups program under grant R.G.P.2/207/43.
Conflict of interest
The authors declare that they have no conflicts of interest.