This study investigates the weak convergence of the sequences generated by an inertial technique combined with the parallel monotone hybrid method for finding a common fixed point of a finite family of G-nonexpansive mappings, under suitable conditions, in Hilbert spaces endowed with graphs. Some numerical examples are also presented, providing applications to signal recovery in situations where the type of noise is unknown. Moreover, numerical experiments with the proposed algorithms, defined by different types of blurring matrices and noise, demonstrate their efficiency and implementation for the LASSO problem in signal recovery.
Citation: Nipa Jun-on, Raweerote Suparatulatorn, Mohamed Gamal, Watcharaporn Cholamjiak. An inertial parallel algorithm for a finite family of G-nonexpansive mappings applied to signal recovery. AIMS Mathematics, 2022, 7(2): 1775–1790. doi: 10.3934/math.2022102
Let H be a real Hilbert space with inner product ⟨⋅,⋅⟩ and induced norm ‖⋅‖. Let C be a nonempty subset of H, and let △ denote the diagonal of the Cartesian product C×C, i.e., △={(x,x):x∈C}. Consider a directed graph G such that the set V(G) of its vertices coincides with C and the set E(G) of its edges contains all loops, i.e., E(G)⊇△. We assume G has no parallel edges, so we can identify G with the pair (V(G),E(G)). A mapping T:C→C is said to be a G-contraction if T satisfies the following conditions:
(G1) T is edge-preserving, i.e.,
(x,y)∈E(G)⇒(Tx,Ty)∈E(G). |
(G2) T decreases weights of edges of G, i.e., there exists δ∈(0,1) such that
(x,y)∈E(G)⇒‖Tx−Ty‖≤δ‖x−y‖. |
A mapping T:C→C is said to be G-nonexpansive if T satisfies the condition (G1) and
(G3) T non-increases weights of edges of G, i.e.,
(x,y)∈E(G)⇒‖Tx−Ty‖≤‖x−y‖. |
The set of fixed points of T is denoted by F(T), that is, F(T)={z∈C:Tz=z}.
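Conditions (G1) and (G3) can be checked numerically on a toy example of our own (not from the paper): C=[0,2], the partial-order graph E(G)={(x,y):x≤y} (which is transitive and contains all loops), and T(x)=x/2+1/2. Sampling a grid cannot prove the properties, but it verifies them on many edges.

```python
import numpy as np

def T(x):
    # T maps [0, 2] into [1/2, 3/2] ⊆ C; fixed point solves z/2 + 1/2 = z
    return 0.5 * x + 0.5

grid = np.linspace(0.0, 2.0, 41)                       # samples of C
edges = [(x, y) for x in grid for y in grid if x <= y]  # sampled edges of G

# (G1) edge-preserving: x <= y implies T(x) <= T(y)
g1 = all(T(x) <= T(y) for x, y in edges)
# (G3) non-increasing edge weights: |T(x) - T(y)| <= |x - y|
g3 = all(abs(T(x) - T(y)) <= abs(x - y) + 1e-12 for x, y in edges)
```

Here F(T)={1}, since z/2+1/2=z gives z=1.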
In 2008, combining concepts from fixed point theory and graph theory, Jachymski [17] proved some generalizations of Banach's contraction principle in complete metric spaces endowed with a graph. Then, in 2012, Aleomraninejad et al. [2] introduced some iterative schemes for G-contractions and G-nonexpansive mappings in Banach spaces endowed with a graph. Recently, Alfuraidan and Khamsi [3] studied the existence of fixed points and proved a convergence result for monotone nonexpansive mappings on a Banach space endowed with a directed graph. Since then, many authors have established Browder-type convergence theorems as well as weak and strong convergence results for G-nonexpansive mappings in a Hilbert space with a directed graph (see, for example, [2,3,4,13,32,33]).
Motivated by the work of [1,23], Suparatulatorn et al. [28] studied the following modified S-iteration scheme:
x0∈C,
yn=(1−σn)xn+σnT1xn,
xn+1=(1−δn)T1xn+δnT2yn, n≥0,
where {δn} and {σn} are sequences in (0,1) and T1,T2:C→C are G-nonexpansive mappings. They proved weak and strong convergence theorems for approximating common fixed points of two G-nonexpansive mappings in a uniformly convex Banach space X endowed with a graph under this iteration.
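The scheme above can be sketched numerically with two simple nonexpansive linear maps on R² sharing the fixed point 0; the concrete maps and constant parameters below are our own illustrative choices, not the mappings studied in [28].

```python
import numpy as np

T1 = lambda u: 0.9 * u                           # contraction, Fix = {0}
T2 = lambda u: 0.8 * np.array([u[1], u[0]])      # swap-and-shrink, Fix = {0}

x = np.array([1.0, -2.0])
sigma, delta = 0.5, 0.5                          # sequences taken constant
for n in range(200):
    y = (1 - sigma) * x + sigma * T1(x)          # y_n
    x = (1 - delta) * T1(x) + delta * T2(y)      # x_{n+1}
```

Since both maps are strict contractions here, the iterates approach the common fixed point 0 geometrically.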
On the other hand, speeding up the convergence of iterative algorithms has attracted many mathematicians. One such technique is inertial extrapolation, which was initially proposed by Polyak [22] as an acceleration process based on the heavy ball method for a second-order time dynamical system, and has been used to solve various convex minimization problems. Inertial-type methods involve two iterative steps, the second of which is built from the previous two iterates. They are regarded as an efficient way to accelerate various iterative algorithms, particularly projection-based ones; see [5,8,9,21,30,31,34].
Very recently, Suantai et al. [27] used the idea of Anh and Hieu [6,7] to establish the convergence of an algorithm combining the shrinking projection method with the parallel monotone hybrid method for approximating common fixed points of a finite family of G-nonexpansive mappings. They applied the algorithm to signal recovery in situations where the type of noise is unknown. The algorithm is defined in a real Hilbert space as follows:
x1∈C, C0=C,
vin=αinxn+(1−αin)Tixn, i=1,2,...,N,
in=argmax{‖vin−xn‖:i=1,2,...,N}, ˉvn:=vinn,
Cn+1={v∈Cn:‖v−ˉvn‖≤‖v−xn‖},
xn+1=PCn+1x1, n≥1, | (1.1)
where {αin} is a sequence in [0,1] such that lim infn→∞αin(1−αin)>0 for all i=1,2,...,N. Here ˉvn is chosen as the vin farthest from xn; the closed convex set Cn+1 is then constructed from ˉvn, and the next approximation xn+1 is defined as the projection of x1 onto Cn+1. More recently, Cholamjiak et al. [11] proposed an inertial forward-backward splitting algorithm for finding a solution of common variational inclusion problems, based on the inertial technique and parallel monotone hybrid methods. They proved strong convergence results under suitable conditions in Hilbert spaces, and their algorithm proved very useful in image restoration. For given initial points x0,x1∈C1=H, the sequences {xn},{yn} are generated by
yn=xn+θn(xn−xn−1),
zin=(1−αin)yn+αinJBrn(I−rnAi)yn, i=1,2,...,N,
in=argmax{‖zin−xn‖:i=1,2,...,N}, ˉzn:=zinn,
Cn+1={v∈Cn:‖ˉzn−v‖2≤‖xn−v‖2+θ2n‖xn−xn−1‖2−2θn⟨xn−v,xn−1−xn⟩},
xn+1=PCn+1x1, n≥1, | (1.2)
where Ai:H→H and B:H→2H are monotone operators with JBrn=(I+rnB)−1, {rn}⊂(0,2α), {θn}⊂[0,θ] for some θ∈[0,1], and {αin} is a sequence in [0,1] for all i=1,2,...,N. Note that if {rn}⊂(0,2α), where α is the constant of the inverse strongly monotone operator Ai, then the mapping JBrn(I−rnAi) is nonexpansive. Later on, several results involving the parallel method for solving fixed point problems have appeared (see [10,12,14,15,16,29]). One algorithm for solving common fixed point problems of nonexpansive operators is the distributed inexact averaged operator algorithm (DIO) introduced by Li and Feng [20]. The DIO algorithm is proposed as follows:
xi,n+1=ˆxi,n+αi,n(Fi(ˆxi,n)+ϵi,n−ˆxi,n), |
for all i=1,2,...,N, where ˆxi,n is defined by ˆxi,n:=∑Nj=1aij,nxj,n, ϵi,n∈H is an error term for Fi(ˆxi,n), and Fi:H→H is nonexpansive for all i=1,2,...,N. Under the conditions ∑Nj=1aij,n=1 for all i=1,2,...,N with aij,n≥a>0 and αi,n∈[α,1−α] for some constant α∈(0,1/2), a weak convergence theorem was proved in Hilbert spaces.
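The DIO update above can be sketched for N = 2 agents on the real line with zero error terms ϵi,n, a doubly stochastic mixing matrix and contractive Fi sharing the fixed point 0; all concrete choices here are ours, for illustration.

```python
import numpy as np

F = [lambda u: 0.5 * u, lambda u: 0.8 * u]   # nonexpansive maps, Fix = {0}
A = np.array([[0.6, 0.4],                     # mixing weights a_ij,n,
              [0.4, 0.6]])                    # each row sums to 1
alpha = 0.4                                   # step alpha_{i,n} in [alpha, 1-alpha]

x = np.array([2.0, -3.0])                     # agent states x_{1,n}, x_{2,n}
for n in range(300):
    xhat = A @ x                              # averaging step: x̂_{i,n}
    x = np.array([xhat[i] + alpha * (F[i](xhat[i]) - xhat[i])
                  for i in range(2)])          # DIO update, ε_{i,n} = 0
```

Both agents settle on the common fixed point 0, consistent with the weak convergence theorem (trivially strong in finite dimension).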
In this paper, a parallel algorithm for finding a common fixed point of a finite family of G-nonexpansive mappings using the inertial technique is proposed, and a weak convergence theorem is proved under some control conditions in a Hilbert space endowed with graphs. Furthermore, examples and numerical results supporting the main results are provided, and the convergence rates of the iterative methods are compared. Finally, the proposed algorithm is applied to solve signal recovery problems, and the last section presents the numerical results.
In this section, some known definitions and lemmas which will be used in the later sections are stated.
Lemma 2.1. [5] Let {ψn}, {δn} and {αn} be sequences in [0,+∞) such that ψn+1≤ψn+αn(ψn−ψn−1)+δn for all n≥1, ∑∞n=1δn<+∞, and there exists a real number α with 0≤αn≤α<1 for all n≥1. Then the following hold:
(i) ∑n≥1[ψn−ψn−1]+<+∞, where [t]+=max{t,0};
(ii) There exists ψ∗∈[0,+∞) such that limn→+∞ψn=ψ∗.
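Lemma 2.1 can be illustrated numerically with toy sequences of our own that satisfy the recursion with equality, taking αn=0.5≤α<1 and the summable choice δn=1/n²; the positive variation stays bounded and ψn settles to a limit.

```python
# psi_{n+1} = psi_n + alpha_n (psi_n - psi_{n-1}) + delta_n,
# with alpha_n = 0.5 and delta_n = 1/n^2 (summable).
psi = [2.0, 1.0]          # psi_0, psi_1 (first difference is negative)
pos_var = 0.0             # running sum of [psi_n - psi_{n-1}]_+
for n in range(1, 20001):
    nxt = psi[-1] + 0.5 * (psi[-1] - psi[-2]) + 1.0 / n ** 2
    pos_var += max(nxt - psi[-1], 0.0)
    psi.append(nxt)
# conclusion (i): pos_var is finite; conclusion (ii): psi converges
```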
Lemma 2.2. [26] Let X be a Banach space satisfying Opial's condition and let {xn} be a sequence in X. Let u, v ∈ X be such that
limn→∞‖xn−u‖ and limn→∞‖xn−v‖ exist.
If {xnk} and {xmk} are subsequences of {xn} which converge weakly to u and v, respectively, then u=v.
Definition 2.3. Let G=(V(G),E(G)) be a directed graph and (u,v) be a directed edge from vertex u to vertex v. A graph G is called transitive if for any u,v,z∈V(G) such that (u,v) and (v,z) are in E(G), then (u,z)∈E(G).
Definition 2.4. [28] Let u0∈V(G) and let A be a subset of V(G). We say that
(i) A is dominated by u0 if (u0,u)∈E(G) for all u∈A.
(ii) A dominates u0 if for each u∈A, (u,u0)∈E(G).
Definition 2.5. Let G=(V(G),E(G)) be a directed graph. The set of edges E(G) is said to be convex if, whenever (ui,vi)∈E(G) for all i=1,2,...,N and αi∈(0,1) satisfy ∑Ni=1αi=1, we have (∑Ni=1αiui,∑Ni=1αivi)∈E(G).
Lemma 2.6. [24] Let C be a nonempty, closed and convex subset of a Hilbert space H and G=(V(G),E(G)) a directed graph such that V(G)=C. Let T:C→C be a G-nonexpansive mapping and {un} a sequence in C such that un⇀u for some u∈C. If there exists a subsequence {unk} of {un} such that (unk,u)∈E(G) for all k∈N and un−Tun→v for some v∈H, then (I−T)u=v.
In this section, we prove the following weak convergence theorem to find a common fixed point of a finite family of G-nonexpansive mappings in Hilbert spaces endowed with a graph.
Theorem 3.1. Let C be a nonempty closed and convex subset of a real Hilbert space H and let G=(V(G),E(G)) be a transitive directed graph such that V(G)=C and E(G) is convex. Let Ti:C⟶C, i=1,2,...,N, be a family of G-nonexpansive mappings such that F=⋂Ni=1F(Ti)≠∅. Let {xn}, {wn} be generated by x0,x1∈C and
wn=xn+θn(xn−xn−1),
yin=(1−βin)wn+βinTiwn, i=1,2,...,N,
zin=(1−αin)Tiwn+αinTiyin, i=1,2,...,N,
xn+1=zinn, where in∈argmax{‖zin−wn‖:i=1,2,...,N}, | (3.1)
where {θn}⊂[0,θ] for some θ∈(0,1] and {αin} and {βin} are sequences in [0,1]. Assume that the following conditions are satisfied:
(i) ∑∞n=1θn‖xn−xn−1‖<∞;
(ii) {wn} is dominated by t and {wn} dominates t for all t∈F, and if there exists a subsequence {wnk} of {wn} such that wnk⇀u∈C, then (wnk,u)∈E(G);
(iii) lim infn→∞αin>0;
(iv) 0<lim infn→∞βin≤lim supn→∞βin<1.
Then the sequence {xn} converges weakly to an element in F.
Proof. Let t∈F. Since {wn} dominates t and Ti is edge-preserving, we get (Tiwn,t)∈E(G) for all i=1,2,...,N. Since E(G) is convex, it follows that (yin,t)=((1−βin)wn+βinTiwn,(1−βin)t+βint)∈E(G). For all i=1,2,...,N, we get
‖zin−t‖ = ‖(1−αin)(Tiwn−t)+αin(Tiyin−t)‖
≤ (1−αin)‖Tiwn−t‖+αin‖Tiyin−t‖
≤ (1−αin)‖wn−t‖+αin‖yin−t‖
= (1−αin)‖wn−t‖+αin‖(1−βin)(wn−t)+βin(Tiwn−t)‖
≤ (1−αin)‖wn−t‖+αin{(1−βin)‖wn−t‖+βin‖Tiwn−t‖}
≤ ‖wn−t‖
≤ ‖xn−t‖+θn‖xn−xn−1‖.
This implies that ‖xn+1−t‖≤‖xn−t‖+θn‖xn−xn−1‖. From Lemma 2.1 and assumption (i), we obtain that limn→∞‖xn−t‖ exists; in particular, {xn} is bounded, and so are {yin} and {zin}. By the properties of a real Hilbert space H, we have
‖zin−t‖2 ≤ (1−αin)‖Tiwn−t‖2+αin‖Tiyin−t‖2
≤ (1−αin)‖wn−t‖2+αin‖yin−t‖2
≤ (1−αin)‖wn−t‖2+αin((1−βin)‖wn−t‖2+βin‖Tiwn−t‖2−(1−βin)βin‖Tiwn−wn‖2)
≤ ‖wn−t‖2−αin(1−βin)βin‖Tiwn−wn‖2
≤ ‖xn−t‖2+2θn⟨xn−xn−1,wn−t⟩−αin(1−βin)βin‖Tiwn−wn‖2. | (3.2)
This implies that for each n there exists in∈{1,2,...,N} such that
αinn(1−βinn)βinn‖Tinwn−wn‖2≤‖xn−t‖2−‖xn+1−t‖2+2θn⟨xn−xn−1,wn−t⟩. | (3.3) |
By assumptions (i), (iii) and (iv), together with (3.3) and the existence of limn→∞‖xn−t‖, we have
limn→∞‖Tinwn−wn‖=0. | (3.4) |
By the definition of our algorithm and the assumption (iv), we have
‖yinn−wn‖=βinn‖Tinwn−wn‖→0 | (3.5) |
as n→∞. Since (wn,t),(t,yin)∈E(G) and G is transitive, we have (wn,yin)∈E(G). It follows from (3.5) that
‖xn+1−Tinwn‖=αinn‖Tinyinn−Tinwn‖≤αinn‖yinn−wn‖→0 | (3.6) |
as n→∞. From (3.4) and (3.6), we have
limn→∞‖xn+1−wn‖=0. | (3.7) |
This implies that
‖zin−wn‖≤‖xn+1−wn‖→0 | (3.8) |
as n→∞ for all i=1,2,...,N. From (3.2), we have
αin(1−βin)βin‖Tiwn−wn‖2≤‖wn−t‖2−‖zin−t‖2. | (3.9) |
By our assumptions (iii) and (iv), it follows from (3.8) and (3.9) that
limn→∞‖Tiwn−wn‖=0, | (3.10) |
for all i=1,2,...,N.
Since {wn} is bounded and H is reflexive, ωw(wn)={x∈H:wnk⇀x,{wnk}⊂{wn}} is nonempty. Let p∈ωw(wn) be an arbitrary element. Then there exists a subsequence {wnk}⊂{wn} converging weakly to p. Let q∈ωw(wn) and {wnm}⊂{wn} be such that wnm⇀q. From Lemma 2.6 and (3.10), we have p,q∈F. Applying Lemma 2.2, we obtain p=q. Hence ωw(wn) consists of a single point p∈F. Since ‖xn+1−wn‖→0 by (3.7), the sequences {xn} and {wn} have the same weak cluster points, and therefore {xn} converges weakly to p∈F. This completes the proof.
We know that if T is nonexpansive, then T is G-nonexpansive (take, for instance, the complete graph with E(G)=C×C). As a direct consequence of Theorem 3.1, we have the following corollary:
Corollary 3.2. Let C be a nonempty closed and convex subset of a real Hilbert space H, and let Ti:C⟶C, i=1,2,...,N, be a family of nonexpansive mappings such that F=⋂Ni=1F(Ti)≠∅. Let {xn}, {wn} be generated by x0,x1∈C and
wn=xn+θn(xn−xn−1),
yin=(1−βin)wn+βinTiwn, i=1,2,...,N,
zin=(1−αin)Tiwn+αinTiyin, i=1,2,...,N,
xn+1=zinn, where in∈argmax{‖zin−wn‖:i=1,2,...,N}, | (3.11)
where {θn}⊂[0,θ] for some θ∈(0,1] and {αin} and {βin} are sequences in [0,1]. Assume that the following conditions are satisfied:
(i) ∑∞n=1θn‖xn−xn−1‖<∞;
(ii) lim infn→∞αin>0;
(iii) 0<lim infn→∞βin≤lim supn→∞βin<1.
Then the sequence {xn} converges weakly to an element in F.
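A minimal concrete instance of the corollary's scheme, with toy data of our own: the two nonexpansive maps are the metric projections onto the closed unit ball and onto the horizontal axis in R², whose common fixed points form the segment {(s,0):|s|≤1}; the iteration should settle on a point of that segment.

```python
import numpy as np

def proj_ball(u):                      # nonexpansive, Fix = closed unit ball
    r = np.linalg.norm(u)
    return u if r <= 1.0 else u / r

def proj_axis(u):                      # nonexpansive, Fix = horizontal axis
    return np.array([u[0], 0.0])

Ts = [proj_ball, proj_axis]
x_prev = np.array([3.0, 4.0])          # x_0
x = np.array([3.0, 4.0])               # x_1
for n in range(1, 201):
    d = np.linalg.norm(x - x_prev)     # inertial weight as we read the rule
    theta = 0.3 if d == 0 else min(1.0 / ((n + 100) ** 2 * d ** 2), 0.3)
    beta, alpha = 0.99, n / (n + 1.0)
    w = x + theta * (x - x_prev)
    zs = []
    for T in Ts:
        y = (1.0 - beta) * w + beta * T(w)
        zs.append((1.0 - alpha) * T(w) + alpha * T(y))
    x_prev = x
    x = max(zs, key=lambda z: np.linalg.norm(z - w))   # x_{n+1} = z_{i_n, n}
```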
A signal recovery problem can be modeled as the following underdetermined linear system:
v=Au+ϵ, | (4.1) |
where u∈RˉN is the original signal, v∈RM is the observed signal degraded by the filter matrix A:RˉN→RM and the noise ϵ. It is well known that problem (4.1) can be solved via the LASSO problem:
minu∈RˉN (1/2)‖v−Au‖22+λ‖u‖1, | (4.2)
where λ>0. Various techniques and iterative schemes have been developed over the years to solve the LASSO problem; see [18,19,25]. In this case, we set Tu=proxλg(u−λ∇f(u)), where f(u)=(1/2)‖v−Au‖22 and g(u)=λ‖u‖1. It is known that T is a nonexpansive mapping when λ∈(0,2/L), where L is the Lipschitz constant of ∇f.
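The operator T described above can be sketched as one forward-backward step for LASSO: ∇f(u)=Aᵀ(Au−v), and the prox of the ℓ1 term is componentwise soft-thresholding. For clarity we separate the step size from the regularization weight; the matrix, sparse signal and weight below are illustrative choices of ours, and iterating T is the classical ISTA scheme.

```python
import numpy as np

def soft_threshold(u, tau):
    # prox of tau * ‖·‖₁, applied componentwise
    return np.sign(u) * np.maximum(np.abs(u) - tau, 0.0)

def make_T(A, v, reg):
    L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of ∇f = ‖A‖₂²
    step = 1.0 / L                      # any step in (0, 2/L) keeps T nonexpansive
    def T(u):
        grad = A.T @ (A @ u - v)        # ∇f(u)
        return soft_threshold(u - step * grad, step * reg)
    return T

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))       # underdetermined: M = 20, N̄ = 50
x_true = np.zeros(50)
x_true[[3, 17, 41]] = [1.0, -2.0, 1.5]  # sparse original signal
v = A @ x_true
reg = 0.1
T = make_T(A, v, reg)

def objective(u):
    return 0.5 * np.sum((A @ u - v) ** 2) + reg * np.sum(np.abs(u))

u = np.zeros(50)
obj0 = objective(u)
for _ in range(500):                    # iterating T is exactly ISTA
    u = T(u)
obj1 = objective(u)
```

With this step size the objective decreases monotonically, so obj1 is far below obj0.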
The goal of this paper is to remove noise without knowing the type of filter and noise. Thus, we are interested in the following family of problems, introduced by Suantai et al. [27]:
minu∈RˉN (1/2)‖A1u−v1‖22+λ1‖u‖1,
minu∈RˉN (1/2)‖A2u−v2‖22+λ2‖u‖1,
⋮
minu∈RˉN (1/2)‖ANu−vN‖22+λN‖u‖1, | (4.3)
where u is the original signal, and Ai is a bounded linear operator and vi an observed signal with noise, for all i=1,2,...,N. We can apply our proposed algorithm (3.1) to solve problem (4.3) by setting Tiu=proxλigi(u−λi∇fi(u)).
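This instantiation can be put together as a hedged end-to-end sketch of scheme (3.1) for problem (4.3): each Ti is a forward-backward operator for one observation pair (Ai, vi). The problem sizes, seed, regularization weight, iteration budget and the simplified inertial weight θn = ˉθn are our own choices, not the paper's experimental setup.

```python
import numpy as np

def soft_threshold(u, tau):
    return np.sign(u) * np.maximum(np.abs(u) - tau, 0.0)

rng = np.random.default_rng(1)
Nbar, M, N = 60, 30, 3                       # small stand-ins for 1024/512/3
x_true = np.zeros(Nbar)
support = rng.choice(Nbar, size=4, replace=False)
x_true[support] = rng.uniform(-2.0, 2.0, size=4)
As = [rng.standard_normal((M, Nbar)) for _ in range(N)]
vs = [A @ x_true for A in As]                # noiseless observations for the sketch

def make_T(A, v, reg=1e-4):
    lam = 1.8 / np.linalg.norm(A, 2) ** 2    # λ_i = 18/(10‖A_i‖₂²), as in Case 4
    def T(u):
        return soft_threshold(u - lam * (A.T @ (A @ u - v)), lam * reg)
    return T

Ts = [make_T(A, v) for A, v in zip(As, vs)]

x_prev = np.zeros(Nbar)
x = rng.standard_normal(Nbar)
mse0 = np.mean((x - x_true) ** 2)
for n in range(1, 1001):
    theta = 1.0 / (n + 100) ** 2             # simple summable inertial weight
    beta, alpha = 0.99, n / (n + 1.0)        # satisfy conditions (iii), (iv)
    w = x + theta * (x - x_prev)
    zs = []
    for T in Ts:
        y = (1.0 - beta) * w + beta * T(w)
        zs.append((1.0 - alpha) * T(w) + alpha * T(y))
    x_prev = x
    x = max(zs, key=lambda z: np.linalg.norm(z - w))   # x_{n+1} = z_{i_n, n}
mse_end = np.mean((x - x_true) ** 2)
```

On this toy instance the mean-squared error drops by orders of magnitude, mirroring the recovery behavior reported in the experiments.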
For all experiments in this section, the signal size is ˉN=1024 and M=512, where the original signal x is generated by the uniform distribution in [−2,2] with m nonzero elements. Suppose that
θn = min{ˉθn/‖xn−xn−1‖2, 0.3} if xn≠xn−1, and θn = 0.3 otherwise,
for all n∈N. In the first part, we solve problem (4.2) by considering the different components within algorithm (3.1): λ, ˉθn, β1n and α1n. Let A be the Gaussian matrix generated by the command randn(M,ˉN), and let the observation b be degraded by white Gaussian noise with signal-to-noise ratio SNR = 40 and m=25. The initial points x0, x1 are generated by the command randn(ˉN,1). We use the mean-squared error, MSEn=(1/1024)‖xn−x‖22, to measure the restoration accuracy, and stop when MSEn<10−5. To find suitable parameters for the next numerical experiments, we present numerical results in Tables 1–4 with different parameters ˉθn, λ, β1n and α1n, respectively.
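The extrapolation-weight rule above, as we read the garbled display, caps ˉθn/‖xn−xn−1‖² at 0.3 and falls back to 0.3 when consecutive iterates coincide. The helper below hard-codes ˉθn = 1/(n+100)², the best-performing choice in Table 1.

```python
import numpy as np

def theta_n(n, x_n, x_prev, cap=0.3):
    # Inertial weight rule (our reading of the paper's display):
    # theta_n = min(theta_bar_n / ||x_n - x_{n-1}||^2, cap), else cap.
    if np.array_equal(x_n, x_prev):
        return cap
    theta_bar = 1.0 / (n + 100) ** 2
    return min(theta_bar / np.linalg.norm(x_n - x_prev) ** 2, cap)
```

This choice forces θn‖xn−xn−1‖ to shrink quickly, in the spirit of condition (i) of Theorem 3.1.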
Case 1. We compare the performance of the algorithm with different parameters ˉθn by setting λ=1/‖A‖22, β1n=0.5 and α1n=n/(5(n+1)) for all n∈N. The results are presented in Table 1.
ˉθn | 0 | 1/(n+1)1.1 | 1/(n+1)2 | 100/(n+1)2 | 1/(n+100)2 |
No. of Iter. | 3237 | 2293 | 2289 | 3286 | 2278 |
Elapsed Time (s) | 2.5272 | 1.5865 | 1.5386 | 2.1956 | 1.5113 |
Case 2. We compare the performance of the algorithm with different parameters λ by setting ˉθn=1/(n+100)2, with α1n and β1n the same as in Case 1. The results are presented in Table 2.
λ | 1/(2‖A‖22) | 3/(2‖A‖22) | 8/(5‖A‖22) | 17/(10‖A‖22) | 18/(10‖A‖22) |
No. of Iter. | 4709 | 1571 | 1473 | 1387 | 1309 |
Elapsed Time (s) | 3.2404 | 1.0867 | 1.0199 | 0.9493 | 0.8978 |
Case 3. We compare the performance of the algorithm with different parameters β1n by setting λ=18/(10‖A‖22) and ˉθn=1/(n+100)2, with α1n the same as in Case 1. The results are presented in Table 3.
β1n | 0.7 | 0.8 | 0.9 | 0.95 | 0.99 |
No. of Iter. | 1282 | 1259 | 1238 | 1228 | 1220 |
Elapsed Time (s) | 1.4545 | 0.9254 | 0.8734 | 0.8860 | 0.8718 |
Case 4. We compare the performance of the algorithm with different parameters α1n by setting λ=18/(10‖A‖22), ˉθn=1/(n+100)2 and β1n=0.99. The results are presented in Table 4.
α1n | n/(10(n+1)) | n/(4(n+1)) | n/(2(n+1)) | n/(n+1) | n/(n+100) |
No. of Iter. | 1358 | 1197 | 1000 | 753 | 863 |
Elapsed Time (s) | 0.9990 | 1.0939 | 1.2328 | 0.5437 | 0.6133 |
We notice that, in all four cases above, selecting α1n=n/(n+1) for all n∈N and setting λ, ˉθn and β1n as in Case 4 yields the best results.
In the next experiment, we compare the performance of the parallel monotone hybrid algorithm (1.1), the inertial parallel monotone hybrid algorithm (1.2) and algorithm (3.1) for solving problem (4.3) with three filters, i.e., N=3. The original signal is generated by the uniform distribution in the interval [−2,2] with m nonzero elements. Let Ai be the Gaussian matrix generated by the command randn(M,ˉN), and let the observation bi be degraded by white Gaussian noise with signal-to-noise ratio SNR = 40. We choose λi=18/(10‖Ai‖22), βin=0.99 and αin=n/(n+1) for all i=1,2,3, n∈N, and ˉθn=1/(n+100)2 for our algorithm (3.1). Choosing
αin = 10/(n+10) if 1≤n<K, and αin = 10/(K+10) otherwise, | (4.4)
for all i=1,2,3, n∈N, where K is the number of iterations at which we want to stop, for the parallel monotone hybrid algorithm (1.1); and αin=n/(n+1) for all i=1,2,3, n∈N and ˉθn=1/(n+100)2 for the inertial parallel monotone hybrid algorithm (1.2). We stop when MSEn<10−5. Further, we select x0 and x1 as in the first part. The results are presented in Table 5.
Algorithms | | m=50 | m=100 | m=150
Parallel monotone hybrid | Elapsed Time (s) | 0.3687 | 0.4073 | 1.6993
| No. of Iter. | 290 | 317 | 357
Inertial parallel monotone hybrid | Elapsed Time (s) | 0.3297 | 0.3966 | 0.9045
| No. of Iter. | 260 | 286 | 324
Our | Elapsed Time (s) | 0.1545 | 0.1721 | 0.2365
| No. of Iter. | 65 | 72 | 89
In the next comparison, we show the performance of our algorithm compared with the DIO algorithm [20] when N=3. The original signal is generated by the uniform distribution in the interval [−2,2] with m nonzero elements. We suppose Ai, bi, λi, ˉθn, βin, αin, x0 and x1 are the same as in the second part for our algorithm. For the DIO algorithm, we choose x1,1=x2,1=x3,1=x1, ai1,n=ai2,n=n/(4n+1), ai3,n=1−ai1,n−ai2,n and αi,n=0.3 for all i=1,2,3 and all n∈N. We stop when MSEn<10−5. The results are presented in Table 6.
Algorithms | | m=50 | m=100 | m=150
DIO | Elapsed Time (s) | 0.4371 | 0.4862 | 0.5984
| No. of Iter. | 487 | 550 | 604
Our | Elapsed Time (s) | 0.1268 | 0.1295 | 0.1986
| No. of Iter. | 65 | 74 | 89
Table 5 shows the comparison of the number of iterations and the elapsed time with m=50,100,150 nonzero elements for the three algorithms: the parallel monotone hybrid algorithm, the inertial parallel monotone hybrid algorithm, and our algorithm. The comparison of our algorithm and the DIO algorithm is shown in Table 6. In the case m=150, the original signal and the measurements of Tables 5 and 6 can be seen in Figures 1 and 4, respectively. The results of the comparison of the three algorithms can be seen in Figures 2 and 3, and the results of the DIO algorithm can be seen in Figures 5 and 6. Based on the above results, our proposed algorithm is less time-consuming and requires fewer iterations than the other three algorithms. We note that the numerical results of the DIO algorithm are reported for the best of the sequences {x1,n}, {x2,n}, {x3,n}.
The final experiment considers algorithm (3.1) for solving (4.3) with multiple inputs Ai. The original signal is generated by the uniform distribution in the interval [−2,2] with m nonzero elements. We suppose Ai, bi, λi, ˉθn, βin, αin, x0 and x1 are the same as in the second part. We stop when MSEn<5×10−5. The results are presented in Table 7.
Inputting | | m=25 | m=50 | m=75 | m=100
A1 | Elapsed Time (s) | 0.6693 | 0.6887 | 1.0002 | 1.1109
| No. of Iter. | 776 | 871 | 1057 | 1276
A2 | Elapsed Time (s) | 0.5325 | 0.6218 | 1.2367 | 1.0217
| No. of Iter. | 753 | 831 | 1032 | 1246
A3 | Elapsed Time (s) | 0.5636 | 0.6798 | 0.7639 | 1.0629
| No. of Iter. | 751 | 863 | 1052 | 1338
A1,A2 | Elapsed Time (s) | 0.3752 | 0.4090 | 0.4521 | 0.4320
| No. of Iter. | 222 | 216 | 259 | 248
A1,A3 | Elapsed Time (s) | 0.3718 | 0.4255 | 0.4889 | 0.5027
| No. of Iter. | 206 | 240 | 275 | 295
A2,A3 | Elapsed Time (s) | 0.4876 | 0.3809 | 0.4176 | 0.4791
| No. of Iter. | 220 | 212 | 234 | 243
A1,A2,A3 | Elapsed Time (s) | 0.1978 | 0.1707 | 0.1716 | 0.1785
| No. of Iter. | 62 | 64 | 68 | 68
Table 7 presents the number of iterations and the elapsed time with multiple inputs Ai and m=25,50,75,100 nonzero elements for our algorithm. The original signal and the measurements using A1–A3 of Table 7 are shown in Figure 7. From Figures 8 and 9, it can be observed that incorporating all three Gaussian matrices (A1–A3) into algorithm (3.1) is more effective, with respect to both time and number of iterations, than involving only one or two of them.
We introduce a new inertial parallel algorithm to solve the common fixed point problem for a finite family of G-nonexpansive mappings in a Hilbert space with a directed graph. Our primary theorems ensure that this algorithm converges weakly to an element of the problem's solution set under certain conditions. The algorithm is then used to solve the signal recovery problem involving several filters.
This research project was supported by the Thailand Science Research and Innovation Fund and the University of Phayao (Grant No. FF64-UoE002).
The authors declare that they have no competing interests.
[1] |
R. P. Agarwal, D. O'Regan, D. R. Sahu, Iterative construction of fixed points of nearly asymptotically nonexpansive mappings, J. Nonlinear Convex A., 8 (2007), 61–79. doi: 10.1007/s10851-006-9699-4. doi: 10.1007/s10851-006-9699-4
![]() |
[2] |
S. M. A. Aleomraninejad, S. Rezapour, N. Shahzad, Some fixed point results on a metric space with a graph, Topol. Appl., 159 (2012), 659–663. doi: 10.1016/j.topol.2011.10.013. doi: 10.1016/j.topol.2011.10.013
![]() |
[3] |
M. R. Alfuraidan, M. A. Khamsi, Fixed points of monotone nonexpansive mappings on a hyperbolic metric space with a graph, Fixed Point Theory A., 2015 (2015), 1–10. doi: 10.1186/s13663-015-0294-5. doi: 10.1186/s13663-015-0294-5
![]() |
[4] |
M. R. Alfuraidan, On monotone Ćirić quasi-contraction mappings with a graph, Fixed Point Theory A., 2015 (2015), 1–11. doi: 10.1186/s13663-015-0341-2. doi: 10.1186/s13663-015-0341-2
![]() |
[5] |
F. Alvarez, H. Attouch, An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping, Set-Valued Anal., 9 (2001), 3–11. doi: 10.1023/A:1011253113155. doi: 10.1023/A:1011253113155
![]() |
[6] |
P. K. Anh, D. V. Hieu, Parallel and sequential hybrid methods for a finite family of asymptotically quasi ϕ-nonexpansive mappings, J. Appl. Math. Comput., 48 (2015), 241–263. doi: 10.1007/s12190-014-0801-6. doi: 10.1007/s12190-014-0801-6
![]() |
[7] |
P. K. Anh, D. V. Hieu, Parallel hybrid iterative methods for variational inequalities, equilibrium problems, and common fixed point problems, Vietnam J. Math., 44 (2016), 351–374. doi: 10.1007/s10013-015-0129-z. doi: 10.1007/s10013-015-0129-z
![]() |
[8] |
H. Attouch, J. Peypouquet, P. Redont, A dynamical approach to an inertial forward-backward algorithm for convex minimization, SIAM J. Optimiz., 24 (2014), 232–256. doi: 10.1137/130910294. doi: 10.1137/130910294
![]() |
[9] |
A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM J. Imaging Sci., 2 (2009), 183–202. doi: 10.1137/080716542. doi: 10.1137/080716542
![]() |
[10] |
P. Cholamjiak, S. Suantai, P. Sunthrayuth, An explicit parallel algorithm for solving variational inclusion problem and fixed point problem in Banach spaces, Banach J. Math. Anal., 14 (2020), 20–40. doi: 10.1007/s43037-019-00030-4. doi: 10.1007/s43037-019-00030-4
![]() |
[11] |
W. Cholamjiak, S. A. Khan, D. Yambangwai, K. R. Kazmi, Strong convergence analysis of common variational inclusion problems involving an inertial parallel monotone hybrid method for a novel application to image restoration, RACSAM Rev. R. Acad. A, 114 (2020), 1–20. doi: 10.1007/s13398-020-00827-1. doi: 10.1007/s13398-020-00827-1
![]() |
[12] |
D. V. Hieu, A parallel hybrid method for equilibrium problems, variational inequalities and nonexpansive mappings in Hilbert space, J. Korean Math. Soc., 52 (2015), 373–388. doi: 10.4134/JKMS.2015.52.2.373. doi: 10.4134/JKMS.2015.52.2.373
![]() |
[13] |
D. V. Hieu, Y. J. Cho, Y. B. Xiao, P. Kumam, Modified extragradient method for pseudomonotone variational inequalities in infinite dimensional Hilbert spaces, Vietnam J. Math., 49 (2021), 1165–1183. doi: 10.1007/s10013-020-00447-7. doi: 10.1007/s10013-020-00447-7
![]() |
[14] |
D. V. Hieu, L. D. Muu, P. K. Anh, Parallel hybrid extragradient methods for pseudomonotone equilibrium problems and nonexpansive mappings, Numer. Algorithms, 73 (2016), 197–217. doi: 10.1007/s11075-015-0092-5. doi: 10.1007/s11075-015-0092-5
![]() |
[15] |
D. V. Hieu, Parallel and cyclic hybrid subgradient extragradient methods for variational inequalities, Afr. Math., 28 (2017), 677–692. doi: 10.1007/s13370-016-0473-5. doi: 10.1007/s13370-016-0473-5
![]() |
[16] |
D. V. Hieu, Parallel hybrid methods for generalized equilibrium problems and asymptotically strictly pseudocontractive mappings, J. Appl. Math. Comput., 53 (2017), 531–554. doi: 10.1007/s12190-015-0980-9. doi: 10.1007/s12190-015-0980-9
![]() |
[17] |
J. Jachymski, The contraction principle for mappings on a metric space with a graph, Proc. Am. Math. Soc., 136 (2008), 1359–1373. doi: 10.1090/S0002-9939-07-09110-1. doi: 10.1090/S0002-9939-07-09110-1
![]() |
[18] |
K. Kankam, N. Pholasa, P. Cholamjiak, On convergence and complexity of the modified forward-backward method involving new linesearches for convex minimization, Math. Method. Appl. Sci., 42 (2019), 1352–1362. doi: 10.1002/mma.5420. doi: 10.1002/mma.5420
![]() |
[19] |
D. Kitkuan, P. Kumam, J. Martínez-Moreno, K. Sitthithakerngkiet, Inertial viscosity forward-backward splitting algorithm for monotone inclusions and its application to image restoration problems, Int. J. Comput. Math., 97 (2020), 482–497. doi: 10.1080/00207160.2019.1649661. doi: 10.1080/00207160.2019.1649661
![]() |
[20] |
X. Li, G. Feng, Distributed algorithms for computing a common fixed point of a group of nonexpansive operators, IEEE T. Automat Contr., 66 (2020), 2130–2145. doi: 10.1109/TAC.2020.3004773. doi: 10.1109/TAC.2020.3004773
![]() |
[21] |
P. E. Maingé, Regularized and inertial algorithms for common fixed points of nonlinear operators, J. Math. Anal. Appl., 344 (2008), 876–887. doi: 10.1016/j.jmaa.2008.03.028. doi: 10.1016/j.jmaa.2008.03.028
![]() |
[22] |
B. T. Polyak, Some methods of speeding up the convergence of iterative methods, USSR Comput. Math. Math. Phys., 4 (1964), 1–17. doi: 10.1016/0041-5553(64)90137-5. doi: 10.1016/0041-5553(64)90137-5
![]() |
[23] |
P. Sridarat, R. Suparatulatorn, S. Suantai, Y. J. Cho, Convergence analysis of SP-iteration for G-nonexpansive mappings with directed graphs, Bull. Malays. Math. Sci. Soc., 42 (2019), 2361–2380. doi: 10.1007/s40840-018-0606-0. doi: 10.1007/s40840-018-0606-0
![]() |
[24] |
S. Suantai, M. Donganont, W. Cholamjiak, Hybrid methods for a countable family of G-nonexpansive mappings in Hilbert spaces endowed with graphs, Mathematics, 7 (2019), 1–13. doi: 10.3390/math7100936. doi: 10.3390/math7100936
![]() |
[25] |
S. Suantai, K. Kankam, P. Cholamjiak, A novel forward-backward algorithm for solving convex minimization problem in Hilbert spaces, Mathematics, 8 (2020), 1–13. doi: 10.3390/math8010042. doi: 10.3390/math8010042
![]() |
[26] |
S. Suantai, Weak and strong convergence criteria of Noor iterations for asymptotically nonexpansive mappings, J. Math. Anal. Appl., 311 (2005), 506–517. doi: 10.1016/j.jmaa.2005.03.002. doi: 10.1016/j.jmaa.2005.03.002
![]() |
[27] |
S. Suantai, K. Kankam, P. Cholamjiak, W. Cholamjiak, A parallel monotone hybrid algorithm for a finite family of G-nonexpansive mappings in Hilbert spaces endowed with a graph applicable in signal recovery, Comp. Appl. Math., 40 (2021), 1–17. doi: 10.1007/s40314-021-01530-6. doi: 10.1007/s40314-021-01530-6
![]() |
[28] |
R. Suparatulatorn, W. Cholamjiak, S. Suantai, A modified S-iteration process for G-nonexpansive mappings in Banach spaces with graphs, Numer. Algorithms, 77 (2018), 479–490. doi: 10.1007/s11075-017-0324-y. doi: 10.1007/s11075-017-0324-y
![]() |
[29] |
R. Suparatulatorn, S. Suantai, W. Cholamjiak, Hybrid methods for a finite family of G-nonexpansive mappings in Hilbert spaces endowed with graphs, AKCE Int. J. Graphs Co., 14 (2017), 101–111. doi: 10.1016/j.akcej.2017.01.001. doi: 10.1016/j.akcej.2017.01.001
![]() |
[30] D. V. Thong, D. V. Hieu, Inertial extragradient algorithms for strongly pseudomonotone variational inequalities, J. Comput. Appl. Math., 341 (2018), 80–98. doi: 10.1016/j.cam.2018.03.019
[31] D. V. Thong, D. V. Hieu, Modified subgradient extragradient method for variational inequality problems, Numer. Algorithms, 79 (2018), 597–610. doi: 10.1007/s11075-017-0452-4
[32] J. Tiammee, A. Kaewkhao, S. Suantai, On Browder's convergence theorem and Halpern iteration process for G-nonexpansive mappings in Hilbert spaces endowed with graphs, Fixed Point Theory A., 2015 (2015), 1–12. doi: 10.1186/s13663-015-0436-9
[33] O. Tripak, Common fixed points of G-nonexpansive mappings on Banach spaces with a graph, Fixed Point Theory A., 2016 (2016), 1–8. doi: 10.1186/s13663-016-0578-4
[34] L. Y. Zhang, H. Zhao, Y. B. Lv, A modified inertial projection and contraction algorithms for quasivariational inequalities, Appl. Set-Valued Anal. Optim., 1 (2019), 63–76. doi: 10.23952/asvao.1.2019.1.06
| $\bar{\theta}_n$ | $0$ | $\dfrac{1}{(n+1)^{1.1}}$ | $\dfrac{1}{(n+1)^{2}}$ | $\dfrac{100}{(n+1)^{2}}$ | $\dfrac{1}{(n+100)^{2}}$ |
| --- | --- | --- | --- | --- | --- |
| No. of Iter. | 3237 | 2293 | 2289 | 3286 | 2278 |
| Elapsed Time (s) | 2.5272 | 1.5865 | 1.5386 | 2.1956 | 1.5113 |

| $\lambda$ | $\dfrac{1}{2\Vert A\Vert_2^2}$ | $\dfrac{3}{2\Vert A\Vert_2^2}$ | $\dfrac{8}{5\Vert A\Vert_2^2}$ | $\dfrac{17}{10\Vert A\Vert_2^2}$ | $\dfrac{18}{10\Vert A\Vert_2^2}$ |
| --- | --- | --- | --- | --- | --- |
| No. of Iter. | 4709 | 1571 | 1473 | 1387 | 1309 |
| Elapsed Time (s) | 3.2404 | 1.0867 | 1.0199 | 0.9493 | 0.8978 |

| $\beta_{1,n}$ | 0.7 | 0.8 | 0.9 | 0.95 | 0.99 |
| --- | --- | --- | --- | --- | --- |
| No. of Iter. | 1282 | 1259 | 1238 | 1228 | 1220 |
| Elapsed Time (s) | 1.4545 | 0.9254 | 0.8734 | 0.8860 | 0.8718 |

| $\alpha_{1,n}$ | $\dfrac{n}{10(n+1)}$ | $\dfrac{n}{4(n+1)}$ | $\dfrac{n}{2(n+1)}$ | $\dfrac{n}{n+1}$ | $\dfrac{n}{n+100}$ |
| --- | --- | --- | --- | --- | --- |
| No. of Iter. | 1358 | 1197 | 1000 | 753 | 863 |
| Elapsed Time (s) | 0.9990 | 1.0939 | 1.2328 | 0.5437 | 0.6133 |
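As a minimal sketch (the matrix `A`, its size, and all variable names are illustrative assumptions, not the authors' implementation), the candidate parameter sequences compared in the tables above can be written as:

```python
import numpy as np

# Hypothetical sketch of the parameter choices tested above.
theta_candidates = [               # inertial parameters theta_bar_n
    lambda n: 0.0,
    lambda n: 1.0 / (n + 1) ** 1.1,
    lambda n: 1.0 / (n + 1) ** 2,
    lambda n: 100.0 / (n + 1) ** 2,
    lambda n: 1.0 / (n + 100) ** 2,
]
alpha_candidates = [               # relaxation parameters alpha_{1,n}
    lambda n: n / (10 * (n + 1)),
    lambda n: n / (4 * (n + 1)),
    lambda n: n / (n + 1) / 2,
    lambda n: n / (n + 1),
    lambda n: n / (n + 100),
]

# Step sizes are multiples of 1/||A||_2^2; A here is a random stand-in.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128))
norm_A_sq = np.linalg.norm(A, 2) ** 2   # squared spectral norm ||A||_2^2
lam = 18 / (10 * norm_A_sq)             # best-performing step size in the table
```

All tested step sizes stay below the usual forward-backward bound $2/\Vert A\Vert_2^2$, which matches the monotone improvement seen in the table as $\lambda$ grows toward that bound.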
($m$ denotes the number of nonzero elements.)

| Algorithms |  | $m=50$ | $m=100$ | $m=150$ |
| --- | --- | --- | --- | --- |
| Parallel monotone hybrid | Elapsed Time (s) | 0.3687 | 0.4073 | 1.6993 |
|  | No. of Iter. | 290 | 317 | 357 |
| Inertial parallel monotone hybrid | Elapsed Time (s) | 0.3297 | 0.3966 | 0.9045 |
|  | No. of Iter. | 260 | 286 | 324 |
| Our | Elapsed Time (s) | 0.1545 | 0.1721 | 0.2365 |
|  | No. of Iter. | 65 | 72 | 89 |
($m$ denotes the number of nonzero elements.)

| Algorithms |  | $m=50$ | $m=100$ | $m=150$ |
| --- | --- | --- | --- | --- |
| DIO | Elapsed Time (s) | 0.4371 | 0.4862 | 0.5984 |
|  | No. of Iter. | 487 | 550 | 604 |
| Our | Elapsed Time (s) | 0.1268 | 0.1295 | 0.1986 |
|  | No. of Iter. | 65 | 74 | 89 |
($m$ denotes the number of nonzero elements.)

| Inputting |  | $m=25$ | $m=50$ | $m=75$ | $m=100$ |
| --- | --- | --- | --- | --- | --- |
| $A_1$ | Elapsed Time (s) | 0.6693 | 0.6887 | 1.0002 | 1.1109 |
|  | No. of Iter. | 776 | 871 | 1057 | 1276 |
| $A_2$ | Elapsed Time (s) | 0.5325 | 0.6218 | 1.2367 | 1.0217 |
|  | No. of Iter. | 753 | 831 | 1032 | 1246 |
| $A_3$ | Elapsed Time (s) | 0.5636 | 0.6798 | 0.7639 | 1.0629 |
|  | No. of Iter. | 751 | 863 | 1052 | 1338 |
| $A_1,A_2$ | Elapsed Time (s) | 0.3752 | 0.4090 | 0.4521 | 0.4320 |
|  | No. of Iter. | 222 | 216 | 259 | 248 |
| $A_1,A_3$ | Elapsed Time (s) | 0.3718 | 0.4255 | 0.4889 | 0.5027 |
|  | No. of Iter. | 206 | 240 | 275 | 295 |
| $A_2,A_3$ | Elapsed Time (s) | 0.4876 | 0.3809 | 0.4176 | 0.4791 |
|  | No. of Iter. | 220 | 212 | 234 | 243 |
| $A_1,A_2,A_3$ | Elapsed Time (s) | 0.1978 | 0.1707 | 0.1716 | 0.1785 |
|  | No. of Iter. | 62 | 64 | 68 | 68 |
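The mappings compared in these tables operate on the LASSO problem $\min_x \tfrac{1}{2}\Vert Ax-b\Vert_2^2 + \mu\Vert x\Vert_1$. A minimal sketch of the standard forward-backward (soft-thresholding) operator from which such G-nonexpansive mappings are typically built — the data, the parameter values, and all names below are illustrative assumptions, not the authors' exact setup:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||.||_1: componentwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward_map(A, b, lam, mu):
    """Return T(x) = prox_{lam*mu*||.||_1}(x - lam * A^T (A x - b)).

    For lam < 2/||A||_2^2 this map is nonexpansive and its fixed
    points are exactly the minimizers of the LASSO objective.
    """
    def T(x):
        return soft_threshold(x - lam * A.T @ (A @ x - b), lam * mu)
    return T

# Illustrative use: plain fixed-point iteration on synthetic sparse data.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128))
x_true = np.zeros(128); x_true[:5] = 1.0      # 5 nonzero elements
b = A @ x_true                                 # noiseless measurements
lam = 18 / (10 * np.linalg.norm(A, 2) ** 2)    # step size from the tables
T = forward_backward_map(A, b, lam, mu=0.1)
x = np.zeros(128)
for _ in range(500):
    x = T(x)
```

The algorithms in the tables accelerate this basic iteration with inertial and parallel-hybrid steps; the sketch only shows the underlying operator they share.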