1. Introduction
Throughout this paper, H denotes a real Hilbert space with inner product ⟨⋅,⋅⟩ and the induced norm ‖⋅‖, I the identity operator on H, N the set of all natural numbers, and R the set of all real numbers. For an operator T of H into itself, F(T) denotes the set of all fixed points of T.
Let H_1 and H_2 be real Hilbert spaces and let T : H_1 → H_2 be a bounded linear operator. Let {U_j}_{j=1}^t : H_1 → H_1 and {T_i}_{i=1}^r : H_2 → H_2 be two finite families of operators, where t, r ∈ N. The split common fixed point problem (SCFPP) is formulated as finding a point x^* ∈ H_1 such that
x^* ∈ ∩_{j=1}^t F(U_j)  and  Tx^* ∈ ∩_{i=1}^r F(T_i).   (1.1)
In particular, if t = r = 1, the SCFPP (1.1) reduces to finding a point x^* ∈ H_1 such that
The above problem is usually called the two-set SCFPP.
In recent years, the SCFPP (1.1) and the two-set SCFPP (1.2) have been studied and extended by many authors; see, for instance, [15,20,23,27,36,37,38,39,40,47,48,49]. It is known that the SCFPP includes the multiple-set split feasibility problem and the split feasibility problem as special cases. In fact, let {C_j}_{j=1}^t and {Q_i}_{i=1}^r be two finite families of nonempty closed convex subsets of H_1 and H_2, respectively. Let U_j = P_{C_j} and T_i = P_{Q_i}; then the SCFPP (1.1) becomes the multiple-set split feasibility problem (MSSFP):
When t = r = 1, the MSSFP (1.3) reduces to the split feasibility problem (SFP), which consists in finding a point x^* ∈ H_1 such that
x^* ∈ C  and  Tx^* ∈ Q,   (1.4)
where C := C_1 and Q := Q_1.
The SFP was first introduced by Censor and Elfving [22] with the aim of modeling certain inverse problems. It has turned out to also play an important role in, for example, medical image reconstruction and signal processing (see [2,4,15,17,21]). Since then, several iterative algorithms for solving (1.4) have been presented and analyzed. See, for instance [1,5,14,15,16,18,19,23,24,27] and references therein.
The CQ algorithm for solving the SFP has been extended by several authors to solve the multiple-set split feasibility problem. See, for instance, the papers by Censor and Segal [25], Elfving, Kopf and Bortfeld [23], Masad and Reich [35], and by Xu [53,54].
In 2020, Reich and Tuyen [45] proposed and analyzed the following split feasibility problem with multiple output sets in Hilbert spaces: let H and H_i, i = 1,2,…,m, be real Hilbert spaces, and let T_i : H → H_i, i = 1,2,…,m, be bounded linear operators. Furthermore, let C and Q_i be nonempty, closed and convex subsets of H and H_i, i = 1,2,…,m, respectively. Find an element u^† such that
that is, u^† ∈ C and T_i u^† ∈ Q_i for all i = 1,2,…,m.
To solve problem (1.5), Reich et al. [46] proposed the following iterative methods: for any u_0, v_0 ∈ C, let {u_n} and {v_n} be two sequences generated by:
where f : C → C is a strict k-contraction with k ∈ [0,1), {γ_n} ⊂ (0,∞) and {α_n} ⊂ (0,1). They established the weak and strong convergence of the iterative methods (1.6) and (1.7), respectively.
In 2021, Reich and Tuyen [44] considered the following split common null point problem with multiple output sets in Hilbert spaces: let H and H_i, i = 1,2,…,N, be real Hilbert spaces and let T_i : H → H_i, i = 1,2,…,N, be bounded linear operators. Let B : H → 2^H and B_i : H_i → 2^{H_i}, i = 1,2,…,N, be maximal monotone operators. Given H, H_i and T_i as defined above, the split common null point problem with multiple output sets is to find a point u^† such that
To solve problem (1.8), Reich and Tuyen [44] proposed the following iterative method:
Algorithm 1.1. For any u_0 ∈ H, let H_0 = H, T_0 = I_H, B_0 = B, and let {u_n} be the sequence generated by:
where {α_n} ⊂ (0,1), and {β_{i,n}} and {r_{i,n}}, i = 0,1,…,N, are sequences of positive real numbers such that {β_{i,n}} ⊂ [a,b] ⊂ (0,1) and ∑_{i=0}^{N} β_{i,n} = 1 for each n ≥ 0, and
τ_{i,n} = ρ_{i,n}‖(I_{H_i} − J^{B_i}_{r_{i,n}})T_i u_n‖^2 / (‖T_i^*(I_{H_i} − J^{B_i}_{r_{i,n}})T_i u_n‖^2 + θ_{i,n}),
where {ρ_{i,n}} ⊂ [c,d] ⊂ (0,2) and {θ_{i,n}} is a sequence of positive real numbers for each i = 0,1,…,N, and f : H → H is a strict contraction of H into itself with contraction coefficient k ∈ [0,1).
They established the strong convergence of the sequence {u_n} generated by Algorithm 1.1 to a solution of Problem (1.8).
Alvarez and Attouch [7] applied the following inertial technique to develop an inertial proximal method for finding a zero of a monotone operator, that is, for solving the inclusion 0 ∈ G(x),
where G : H → 2^H is a set-valued monotone operator. Given x_{n−1}, x_n ∈ H and two parameters θ_n ∈ [0,1), λ_n > 0, find x_{n+1} ∈ H such that
Here, the inertia is induced by the term θ_n(x_n − x_{n−1}). Equation (1.10) may be thought of as coming from an implicit discretization of the second-order differential system
where ρ > 0 is a damping (friction) parameter. This point of view has inspired various numerical methods carrying the inertial terminology, which enjoy good convergence properties [6,7,8,28,29,33]; by incorporating second-order information, the inertial term helps to speed up the convergence of an algorithm (see, e.g., [3,7,9,10,11,12,13,51,52] and the references therein).
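To make this connection concrete, assume that (1.11) has the standard heavy-ball form x''(t) + ρx'(t) + G(x(t)) ∋ 0 (an illustrative choice on our part). One possible implicit discretization with step size h > 0 is
0 ∈ (x_{n+1} − 2x_n + x_{n−1})/h² + ρ(x_{n+1} − x_n)/h + G(x_{n+1}).
Multiplying by h²/(1 + ρh) and setting θ_n = 1/(1 + ρh) ∈ (0,1) and λ_n = h²/(1 + ρh) > 0, this rearranges to
x_n + θ_n(x_n − x_{n−1}) ∈ x_{n+1} + λ_n G(x_{n+1}),  that is,  x_{n+1} = (I + λ_n G)^{−1}(x_n + θ_n(x_n − x_{n−1})),
which is an inertial proximal step of the form (1.10). The particular treatment of the damping term in this sketch need not coincide with the derivation in [7].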
Recently, Thong and Hieu [50] introduced an inertial algorithm for solving the split common fixed point problem (1.1). The algorithm is of the form
Under appropriate conditions, they showed that the sequence {x_n} generated by (1.12) converges weakly to a solution of the SCFPP (1.1).
It was shown by example in [43, Section 4] that one-step inertial extrapolation w_n = x_n + θ(x_n − x_{n−1}), θ ∈ [0,1), may fail to provide acceleration. It was remarked in [32, Chapter 4] that the use of inertia based on more than two points x_n, x_{n−1} could provide acceleration. For example, the following two-step inertial extrapolation
w_n = x_n + θ(x_n − x_{n−1}) + δ(x_{n−1} − x_{n−2})
with θ > 0 and δ < 0 can provide acceleration. The failure of one-step inertial acceleration of ADMM was also discussed in [42, Section 3], and adaptive acceleration for ADMM was proposed instead. Polyak [41] also noted that multi-step inertial methods can boost the speed of optimization methods, even though neither convergence nor rate results for such methods were established in [41]. Some results on multi-step inertial methods have recently been obtained in [26].
Our Contributions. Motivated by [44,50], in this paper we consider the following split common null point problem with multiple output sets in Hilbert spaces: let H_1 and H_2 be real Hilbert spaces, let {U_j}_{j=1}^r : H_1 → H_1 be a finite family of quasi-nonexpansive operators, let B_i : H_2 → 2^{H_2}, i = 1,2,…,t, be maximal monotone operators, and let {T_i}_{i=1}^t : H_1 → H_2 be a finite family of bounded linear operators. The split common null point problem with multiple output sets is to find a point x^* ∈ H_1 such that
x^* ∈ ∩_{j=1}^r F(U_j)  and  0 ∈ B_i(T_i x^*),  i = 1,2,…,t.   (1.14)
Let Υ be the solution set of (1.14). We propose a two-step inertial extrapolation algorithm with self-adaptive step sizes for solving problem (1.14) and prove weak convergence of the generated sequence in real Hilbert spaces. We also give numerical computations to show the efficiency of our proposed method.
2. Preliminaries
Let C be a nonempty, closed and convex subset of a real Hilbert space H. We know that for each point u^* ∈ H, there is a unique element P_C u^* ∈ C such that:
We recall that the mapping P_C : H → C defined by (2.1) is called the metric projection of H onto C. Moreover, we have (see, for instance, Section 3 in [31]):
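As a standard illustration (not taken from the paper), if C = {x ∈ H : ‖x‖ ≤ r} is a closed ball centered at the origin, then
P_C(x) = x if ‖x‖ ≤ r,  and  P_C(x) = (r/‖x‖)x otherwise,
and one checks directly that ⟨x − P_C x, y − P_C x⟩ ≤ 0 for every y ∈ C, which is the usual variational characterization of the metric projection.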
Definition 2.1. Let T:H→H be an operator with F(T)≠∅. Then
● T:H→H is called nonexpansive if
● T:H→H is quasi-nonexpansive if
We denote by F(T) the set of fixed points of the mapping T; that is, F(T) = {u^* ∈ H : Tu^* = u^*}. Given an operator E : H → 2^H, its domain, range, and graph are defined as follows:
and
The inverse operator E^{-1} of E is defined by:
Recall that the operator E is said to be monotone if, for each u^*, v^* ∈ D(E), we have ⟨f − g, u^* − v^*⟩ ≥ 0 for all f ∈ E(u^*) and g ∈ E(v^*). We denote by I_H the identity mapping on H. A monotone operator E is said to be maximal monotone if there is no proper monotone extension of E or, equivalently, by Minty's theorem, if R(I_H + λE) = H for all λ > 0. If E is maximal monotone, then we can define, for each λ > 0, a nonexpansive single-valued operator J^E_λ : R(I_H + λE) → D(E) by J^E_λ := (I_H + λE)^{-1}.
This operator is called the resolvent of E. It is easy to see that E^{-1}(0) = F(J^E_λ) for all λ > 0.
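Two standard examples may help fix ideas (they are illustrations, not part of the paper's setting): if E = ∂g is the subdifferential of a proper, convex, lower semicontinuous function g, then J^E_λ is the proximity operator of λg; in particular, for H = R and g(x) = |x|, one has J^E_λ(u) = sign(u)·max{|u| − λ, 0} (soft thresholding), whose fixed-point set is {0} = E^{-1}(0). If E = N_C is the normal cone operator of a nonempty, closed and convex set C, then J^E_λ = P_C for every λ > 0, so that E^{-1}(0) = F(J^E_λ) = C.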
Lemma 2.2. [45] Suppose that E : D(E) ⊂ H → 2^H is a monotone operator. Then we have the following statements:
(i) For r ≥ s > 0, we have:
for all elements u ∈ R(I_H + rE) ∩ R(I_H + sE).
(ii) For all numbers r > 0 and for all points u, v ∈ R(I_H + rE), we have:
(iii) For all numbers r > 0 and for all points u, v ∈ R(I_H + rE), we have:
(iv) If S = E^{-1}(0) ≠ ∅, then for all elements u^* ∈ S and u ∈ R(I_H + rE), we have:
Lemma 2.3. [30] Suppose that T is a nonexpansive mapping from a closed and convex subset C of a Hilbert space H into H. Then the mapping I_H − T is demiclosed on C; that is, for any sequence {u_n} ⊂ C such that u_n ⇀ u ∈ C and (I_H − T)(u_n) → v, we have (I_H − T)(u) = v.
Lemma 2.4. [34] Let N ≥ 1 be an integer. Assume that, for each i = 1,…,N, T_i : H → H is a k_i-demicontractive operator such that ∩_{i=1}^N F(T_i) ≠ ∅. Assume that {w_i}_{i=1}^N is a finite sequence of positive numbers such that ∑_{i=1}^N w_i = 1. Setting U = ∑_{i=1}^N w_i T_i, the following results hold:
(i) F(U) = ∩_{i=1}^N F(T_i).
(ii) U is a λ-demicontractive operator, where λ = max{k_i : i = 1,…,N}.
(iii) ⟨x − Ux, x − z⟩ ≥ ((1 − λ)/2) ∑_{i=1}^N w_i‖x − T_i x‖^2 for all x ∈ H and z ∈ ∩_{i=1}^N F(T_i).
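As a quick numerical sanity check of item (iii) (our own illustration, not taken from [34]), consider two simple quasi-nonexpansive, hence 0-demicontractive, maps on H = R with common fixed point z = 0; the following Python snippet verifies the inequality on random points.

import random

# Two quasi-nonexpansive (hence 0-demicontractive) maps on R with F(T1) = F(T2) = {0};
# the maps and weights are illustrative choices.
T1 = lambda x: 0.5 * x
T2 = lambda x: -0.5 * x
w1, w2 = 0.5, 0.5               # positive weights summing to 1
lam = 0.0                       # lambda = max{k_1, k_2}
z = 0.0
U = lambda x: w1 * T1(x) + w2 * T2(x)

random.seed(0)
for _ in range(1000):
    x = random.uniform(-10.0, 10.0)
    lhs = (x - U(x)) * (x - z)                                   # <x - Ux, x - z> on R
    rhs = 0.5 * (1 - lam) * (w1 * (x - T1(x))**2 + w2 * (x - T2(x))**2)
    assert lhs >= rhs - 1e-12                                    # inequality (iii) holds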
Lemma 2.5. Let x,y,z∈H and α,β∈R. Then
3. Main result
We make the following assumptions in order to carry out our convergence analysis.
Assumptions 3.1. We assume that the parameters θ ∈ [0, 1/2), ρ ∈ (0,1) and δ ∈ (−∞, 0] satisfy the following conditions.
(a)
(b)
(c)
Now, we present our proposed method and our convergence analysis as follows:
Lemma 3.3. For t ∈ N, let {B_i}_{i=1}^t : H_2 → 2^{H_2} be a finite family of maximal monotone operators and let {T_i}_{i=1}^t : H_1 → H_2 be a finite family of bounded linear operators. Define the operator V : H_1 → H_1 by
where τ_{i,n} is as defined in (3.2), {δ_{i,n}}_{i=1}^t ⊂ (0,1) and ∑_{i=1}^t δ_{i,n} = 1. Then we have the following results:
(1)
(2) x ∈ F(V) if and only if T_i x ∈ ∩_{i=1}^t F(J^{B_i}_{r_{i,n}}), for i = 1,2,…,t.
Proof. (1) Given a point z ∈ Υ, it follows from the convexity of the function ‖⋅‖^2 that:
Using J^{B_i}_{r_{i,n}}(T_i z) = T_i z and Lemma 2.2(iii), for each i = 1,2,…,t, we see that
Hence, from (3.4) and (3.5), we get
(2) It is obvious that if T_i x ∈ ∩_{i=1}^t F(J^{B_i}_{r_{i,n}}), then x ∈ F(V). We show the converse: let x ∈ F(V) and z ∈ T_i^{-1}(F(J^{B_i}_{r_{i,n}})); then we have
Since {ρ_{i,n}} ⊂ (0,2), we obtain
That is, T_i x ∈ ∩_{i=1}^t F(J^{B_i}_{r_{i,n}}).
□
Lemma 3.4. For t, r ∈ N, let {B_i}_{i=1}^t : H_2 → 2^{H_2} be maximal monotone operators such that ∩_{i=1}^t F(J^{B_i}_{r_{i,n}}) ≠ ∅ and let {U_j}_{j=1}^r : H_1 → H_1 be a finite family of quasi-nonexpansive operators such that ∩_{j=1}^r F(U_j) ≠ ∅. Assume that {I − U_j}_{j=1}^r and {I − J^{B_i}_{r_{i,n}}}_{i=1}^t are demiclosed at zero. Let {T_i}_{i=1}^t : H_1 → H_2 be bounded linear operators and suppose that Υ ≠ ∅. Let S : H_1 → H_1 be defined by
where {τ_{i,n}} is as defined in (3.2), and {w_j}_{j=1}^r and {δ_{i,n}}_{i=1}^t are in (0,1) with ∑_{j=1}^r w_j = 1 and ∑_{i=1}^t δ_{i,n} = 1. Assume that the following conditions are satisfied:
(A1) min_{i=1,2,…,t}{inf_n{r_{i,n}}} = r > 0;
(A2) max_{i=1,2,…,t}{sup_n{θ_{i,n}}} = K < ∞.
Then the following hold:
(a) The operator S is quasi-nonexpansive.
(b) F(S)=Υ.
(c) I−S is demiclosed at zero.
Proof. From the definition of V we can rewrite the operator S as
We show the following:
(i) {U_j V}_{j=1}^r is a finite family of quasi-nonexpansive operators,
(ii) ∩_{j=1}^r F(U_j V) = Υ,
(iii) for each j = 1,2,…,r, the operator I − U_j V is demiclosed at zero.
By Lemma 3.3, V is quasi-nonexpansive. Therefore, for each j = 1,2,…,r, the operator U_j V is quasi-nonexpansive. Next, we show that, for each j = 1,2,…,r, F(U_j V) = F(U_j) ∩ F(V).
Indeed, it suffices to show that, for each j = 1,2,…,r, F(U_j V) ⊂ F(U_j) ∩ F(V). Let p ∈ F(U_j V). It is enough to show that p ∈ F(V). Now, taking z ∈ F(U_j) ∩ F(V), we have
This implies that
That is, J^{B_i}_{r_{i,n}}(T_i p) = T_i p for all i = 1,2,…,t. This implies that T_i p ∈ ∩_{i=1}^t F(J^{B_i}_{r_{i,n}}). Thus, p ∈ F(V).
Therefore, F(U_j) ∩ F(V) = F(U_j V) for all j = 1,…,r. We now show that ∩_{j=1}^r F(U_j V) = Υ.
By Lemma 3.3, we have
Finally, we show that, for each j = 1,…,r, I − U_j V is demiclosed at zero. Let {x_n} ⊂ H_1 be a sequence such that x_n ⇀ z ∈ H_1 and U_j V x_n − x_n → 0; then we have
This implies that
By Lemma 3.3, we have
This implies that
Since {δ_{i,n}} ⊂ [a,b] ⊂ (0,1) and {ρ_{i,n}} ⊂ [c,d] ⊂ (0,2), (3.7) implies
for all i = 1,2,…,t. It follows from the boundedness of the sequence {x_n} that L := max_{i=1,…,t}{sup_n ‖T_i^*(I_{H_2} − J^{B_i}_{r_{i,n}})T_i x_n‖^2} < ∞. Thus, from Condition (A2), it follows that
Combining this with (3.8), we deduce that
for all i = 1,2,…,t. Lemma 2.2(i) and Condition (A1) now imply that
for all i = 1,2,…,t. Thus, using (3.9) and (3.10), we deduce that
for all i = 1,2,…,t.
From ‖Vx_n − x_n‖ = ‖∑_{i=1}^t δ_{i,n} τ_{i,n} T_i^*(I_{H_2} − J^{B_i}_{r_{i,n}})T_i x_n‖, the assumptions on {δ_{i,n}} and {τ_{i,n}}, and (3.10), it follows that
On the other hand
Since x_n ⇀ z, we have Vx_n ⇀ z, and by the demiclosedness of I − U_j at zero we have z ∈ F(U_j). Since, for each i = 1,2,…,t, T_i is a bounded linear operator, it follows that T_i x_n ⇀ T_i z. Thus, Lemma 2.3 and (3.11) imply that T_i z ∈ F(J^{B_i}_r) for all i = 1,…,t, that is, T_i z ∈ ∩_{i=1}^t F(J^{B_i}_r). By Lemma 3.3, we get z ∈ F(V). Therefore, z ∈ F(U_j) ∩ F(V) = F(U_j V).
By Claim (i) and Lemma 2.4, we obtain that Sx = ∑_{j=1}^r w_j U_j V x is quasi-nonexpansive and F(S) = ∩_{j=1}^r F(U_j V) = Υ.
Finally, we show that I − S is demiclosed at zero. Indeed, let {x_n} ⊂ H_1 be a sequence such that x_n ⇀ z ∈ H_1 and ‖x_n − Sx_n‖ → 0. Let p ∈ F(S); by Lemma 2.4, we have
This implies that, for each j = 1,…,r, we have
By the demiclosedness of I − U_j V, we have z ∈ F(U_j V). Therefore z ∈ ∩_{j=1}^r F(U_j V) = F(S). □
Theorem 3.5. For t, r ∈ N, let {B_i}_{i=1}^t : H_2 → 2^{H_2} be a finite family of maximal monotone operators such that ∩_{i=1}^t F(J^{B_i}_r) ≠ ∅ and let {U_j}_{j=1}^r : H_1 → H_1 be a finite family of quasi-nonexpansive operators such that ∩_{j=1}^r F(U_j) ≠ ∅. Assume that {I − U_j}_{j=1}^r and {I − J^{B_i}_r}_{i=1}^t are demiclosed at zero. Let T_i : H_1 → H_2, i = 1,2,…,t, be bounded linear operators and suppose that Υ ≠ ∅. Let {x_n} be the sequence generated by Algorithm 3.2 and suppose that Assumptions 3.1(a)–(c) are fulfilled. Then {x_n} converges weakly to an element of Υ.
Proof. Let S = ∑_{j=1}^r w_j U_j(I − ∑_{i=1}^t δ_{i,n} τ_{i,n} T_i^*(I_{H_2} − J^{B_i}_{r_{i,n}})T_i); then the sequence {x_{n+1}} can be rewritten as follows:
By Lemma 3.4, S is quasi-nonexpansive. Let z ∈ Υ; from (3.13), we get
Observe that
Hence by Lemma 2.5, we have
Note also that
and so,
Also,
which implies that
Similarly, we have
and thus
By (3.16)–(3.18) and the Cauchy–Schwarz inequality, one has
Observe that
Putting (3.20) in (3.14), we get
Combining (3.15) and (3.19) with (3.21) and noting that δ ≤ 0, we obtain
By rearranging we get
Define
Let us show that \Upsilon_n \geq 0, \forall n\geq 1. Now
Since \theta < 1/2, \delta \leq 0, \frac{2\theta\rho}{1-\rho} - (1-\theta) < \delta and 0\leq \theta < \frac{1-\rho}{1+\rho}, it follows from (3.24) that \Upsilon_n \geq 0. Furthermore, we derive from (3.23)
where
and
By our assumption, it holds that
As a result q_1 > 0. Also q_2 > 0 by Assumption 3.1 (c). Then by (3.25) we have
Let \bar{\Upsilon}_n := \Upsilon_n + q_1\|x_{n-1} - x_{n-2}\|^2. Then \bar{\Upsilon}_n \geq 0, \ \forall n\geq 1. Also, it follows from (3.28) that
These facts imply that the sequence \{\bar{\Upsilon}_n\} is decreasing and bounded from below and thus \underset{n\to \infty}\lim\bar{\Upsilon}_n exists. Consequently, we get from (3.28) and the squeeze theorem that
Hence
As a result
as n\to \infty. By \underset{n\to \infty}\lim\|x_{n+1} - x_n\| = 0 , one has
By (3.31) and the existence of \underset{n\to \infty}\lim\bar{\Upsilon}_n, we have that \underset{n\to \infty}\lim\Upsilon_n exists and hence \{\Upsilon_n\} is bounded. Now, since \underset{n\to \infty}\lim\|x_{n+1} - x_n\| = 0, we have from the definition of \Upsilon_n that
exists. Using the boundedness of \{\Upsilon_n\}, we obtain from (3.24) that \{x_n\} is bounded. Consequently \{y_n\} is bounded. From (3.8), we obtain
This implies that
Finally, we show that the sequence \{x_n\} converges weakly to some x^*\in \Upsilon. Indeed, since \{x_n\} is bounded, there exists a subsequence \{x_{n_j}\} of \{x_n\} such that x_{n_j} \rightharpoonup x^*\in H_1. Since \|x_n - y_n\| \to 0, we also have y_{n_j} \rightharpoonup x^*. Then, by the demiclosedness of I-S, we obtain x^*\in F(S) = \Upsilon.
Now, we show that \{x_n\} has a unique weak limit point in \Upsilon. Suppose that \{x_{m_j}\} is another subsequence of \{x_n\} such that x_{m_j} \rightharpoonup v^* as j\to \infty. Observe that
and
Therefore
and
Addition of (3.34)–(3.36) gives
According to (3.32), we get
exists and
exists. This implies from (3.21) that \underset{n\to \infty}\lim \langle x_{n} - \theta x_{n-1} - \delta x_{n-2}, x^* - v^*\rangle exists. Consequently,
Hence
Since \delta \leq 0 < 1-\theta, we obtain that x^* = v^*. Therefore, the sequence \{x_n\} converges weakly to x^*\in \Upsilon. This completes the proof. □
4. Numerical examples
In this section, we give a numerical example to illustrate how our proposed algorithm can be implemented in the setting of the real Hilbert space \mathbb{R}. Furthermore, we show the effect of the double inertia on the speed of convergence of the sequence generated by our proposed Algorithm 3.2. First, we give a set of parameters that satisfies the conditions given in Assumption 3.1. To this end, fix \rho = \frac{1}{2} and take
Clearly, these parameters satisfy the conditions given in Assumption 3.1. Next, we define the operators to be used in the implementation of Algorithm 3.2. In Algorithm 3.2, fix t = N = r = 3. Set H_1 = H_2 = H_3 = \mathbb{R}. Let \delta_{i,n} = \frac{1}{3}, \rho_{i,n} = \theta_{i,n} = \frac{2}{3}, r_{i,n} = \frac{1}{2} and w_j = \frac{1}{3}, where i, j\in\{1, 2, 3\} and n\geq 1. Let B_i, \; T_i, \; U_j :\mathbb{R} \to \mathbb{R} be defined by
Then,
With this, we are ready to implement our proposed Algorithm 3.2 in MATLAB. Choosing x_0 = 1, \; x_1 = -2 and x_2 = 0.5, and using a maximum of 150 iterations or a tolerance of 10^{-16} as our stopping criteria, we varied the double inertial parameters as given above. We obtained the following successive approximations:
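The reported approximations appear in Table 1 and Figures 1–3. As a complementary illustration, the following Python sketch (the authors' experiments were run in MATLAB) shows how an iteration of this type can be coded in H_1 = H_2 = \mathbb{R}. The relaxed two-step inertial update x_{n+1} = (1 − ρ)w_n + ρ S(w_n) with w_n = x_n + θ(x_n − x_{n−1}) + δ(x_{n−1} − x_{n−2}) is an assumed form of Algorithm 3.2, the self-adaptive step size follows the pattern of the τ_{i,n} displayed for Algorithm 1.1, and the operators U_j, T_i and resolvents J_i below are illustrative placeholders rather than the operators used in the paper's experiment.

theta, delta, rho = 0.25, -0.001, 0.5          # sample parameters (cf. Section 5)

# Illustrative operators on R; all share the common solution x* = 0.
U = [lambda x: 0.5 * x, lambda x: x / 3.0, lambda x: 0.0]   # quasi-nonexpansive maps
T = [lambda x: 2.0 * x, lambda x: -x, lambda x: 3.0 * x]    # bounded linear maps (self-adjoint on R)
J = [lambda y: y / 2.0, lambda y: y / 3.0, lambda y: 0.0]   # resolvents of monotone operators B_i
w_j, delta_i, rho_i, theta_i = 1/3, 1/3, 2/3, 2/3           # weights and step-size parameters

def S(x):
    # V x = x - sum_i delta_i * tau_i * T_i^*(I - J_i)(T_i x), with the self-adaptive step
    # tau_i = rho_i * |(I - J_i)T_i x|^2 / (|T_i^*(I - J_i)T_i x|^2 + theta_i).
    v = x
    for Ti, Ji in zip(T, J):
        res = Ti(x) - Ji(Ti(x))                    # (I - J_i)(T_i x)
        tau = rho_i * res**2 / (Ti(res)**2 + theta_i)
        v -= delta_i * tau * Ti(res)               # on R, T_i^* res = T_i res
    return sum(w_j * Uj(v) for Uj in U)            # S x = sum_j w_j U_j(V x)

x = [1.0, -2.0, 0.5]                               # x_0, x_1, x_2
for n in range(2, 150):
    w_n = x[n] + theta * (x[n] - x[n-1]) + delta * (x[n-1] - x[n-2])
    x.append((1 - rho) * w_n + rho * S(w_n))       # assumed relaxed update
    if abs(x[-1] - x[-2]) < 1e-16:                 # stopping criterion
        break
print(x[-1])                                       # approaches the common solution 0 of this toy instance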
5. Discussion
From the numerical simulations presented in Table 1 and Figures 1–3, we see that in this example the best choice of the double inertial parameters is \theta = \frac{1}{4} and \delta = -\frac{1}{1000}. Furthermore, we observe that as \theta decreases and \delta approaches 0, the number of iterations required to satisfy the stopping criterion increases.
Use of AI tools declaration
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
Acknowledgments
This research was supported by The Science, Research and Innovation Promotion Funding (TSRI) [grant number FRB660012/0168]. This research block grant was managed under Rajamangala University of Technology Thanyaburi [grant number FRB66E0628].
Conflict of interest
All authors declare no conflicts of interest in this paper.