Research article

An efficient iterative algorithm for common solutions of three G-nonexpansive mappings in Banach spaces involving a graph with applications to signal and image restoration problems

  • Received: 26 May 2021 Accepted: 20 October 2021 Published: 25 October 2021
  • MSC : 47E10, 47H09, 47H10, 54H25

  • In this paper, an efficient modified three-step iteration algorithm for three G-nonexpansive mappings is introduced and analyzed. Assuming coordinate-convexity in a uniformly convex Banach space endowed with a directed graph, conditions for the weak and strong convergence of the scheme are established. We give numerical comparisons to support our main theorem, comparing the convergence behavior of our algorithm with that of the three-step Noor iteration and the SP-iteration. As an application, we use the proposed algorithm to solve image deblurring problems. In addition, we discuss a novel approach to signal recovery in situations where the type of noise is unknown.

    Citation: Damrongsak Yambangwai, Tanakit Thianwan. An efficient iterative algorithm for common solutions of three G-nonexpansive mappings in Banach spaces involving a graph with applications to signal and image restoration problems[J]. AIMS Mathematics, 2022, 7(1): 1366-1398. doi: 10.3934/math.2022081




    Fixed point theory is an extremely active research area because of its multifaceted applications. The typical situation is that a self-map on a set admits a fixed point under certain conditions. In a complete metric space, Banach [1] proved the existence of a unique fixed point for contractions. The Banach contraction principle has been generalized and extended in several ways due to its applications in mathematics and other related disciplines. The most recent version of the theorem was proved in Banach spaces endowed with a graph G, where G=(V(G),E(G)) is a directed graph whose vertex set is V(G) and whose edge set E(G) contains all loops.

    In order to extend the Banach contraction principle by combining concepts from fixed point theory and graph theory, Banach G-contractions were introduced by Jachymski [2] in complete metric spaces endowed with a graph G whose set of vertices is a convex subset of the underlying metric space; see, e.g., [3,4,5,6,7,8,9,10].

    In the last few decades, the approximation of fixed points by iterative schemes for G-contractions, G-nonexpansive and G-monotone nonexpansive mappings has been studied extensively by various authors. In 2012, Aleomraninejad et al. [11] presented some iterative schemes for G-contractions and G-nonexpansive mappings in a metric space with a directed graph and established the convergence of such schemes. In 2015, Alfuraidan and Khamsi [12] defined the concept of G-monotone nonexpansive multivalued mappings on a hyperbolic metric space with a graph. Alfuraidan [13] studied the existence of fixed points of monotone nonexpansive mappings on a Banach space endowed with a directed graph. Tiammee et al. [14] proved the Browder convergence theorem for G-nonexpansive mappings and studied the convergence of the Halpern iteration process to the projection of the initial point onto the set of fixed points of a G-nonexpansive mapping in a Hilbert space with a directed graph. Tripak [15] defined an Ishikawa-type iteration process for two G-nonexpansive mappings and gave some weak and strong convergence theorems for such iterations in a Banach space endowed with a graph.

    The three-step iterative scheme, usually called the Noor iteration method [16], is used to approximate a common fixed point of three G-nonexpansive mappings via

     zn = (1−γn)xn + γnT3xn, yn = (1−βn)xn + βnT2zn, xn+1 = (1−αn)xn + αnT1yn, n ≥ 0, (1.1)

    where {αn}, {βn} and {γn} are sequences in [0,1].
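The Noor update (1.1) can be sketched in a few lines of Python for concreteness. This is our own illustration, not code from the paper: the mapping T and the constant parameter choices below are placeholders, not the mappings or sequences studied later.

```python
def noor_step(x, T1, T2, T3, alpha, beta, gamma):
    """One step of the three-step Noor iteration (1.1)."""
    z = (1 - gamma) * x + gamma * T3(x)
    y = (1 - beta) * x + beta * T2(z)
    return (1 - alpha) * x + alpha * T1(y)

# Illustrative run with a simple contraction T(x) = x/2 + 1 (fixed point 2).
T = lambda x: 0.5 * x + 1.0
x = 0.0
for n in range(60):
    x = noor_step(x, T, T, T, alpha=0.5, beta=0.5, gamma=0.5)
```

With this toy contraction, the iterates approach the fixed point 2 linearly, so the loop above ends very close to it.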

    Glowinski and Le Tallec [17] employed a three-step iterative approach in elastoviscoplasticity, eigenvalue computation and the theory of liquid crystals. In [17], it was shown that the three-step iterative process yields better numerical results than the corresponding two-step and one-step iterations. In 1998, Haubruge, Nguyen and Strodiot [18] investigated the convergence analysis of the three-step methods of Glowinski and Le Tallec [17] and applied these methods to obtain new splitting-type algorithms for solving variational inequalities, separable convex programming and the minimization of a sum of convex functions. They also proved that three-step iterations lead to highly parallelized algorithms under certain conditions.

    Recently, Sridarat et al. [19] modified the SP-iteration process for three G-nonexpansive mappings T1,T2 and T3 as follows:

    zn = (1−γn)xn + γnT3xn, yn = (1−βn)zn + βnT2zn, xn+1 = (1−αn)yn + αnT1yn, n ≥ 0, (1.2)

    where {αn}, {βn} and {γn} are appropriate real sequences in [0,1]. They studied the weak and strong convergence of the iterative scheme (1.2) under proper conditions in a uniformly convex Banach space endowed with a graph.

    The main purpose of this paper is to construct an efficient modified three-step iteration algorithm for approximating common fixed points of three G-nonexpansive mappings in a uniformly convex Banach space endowed with a graph. This paper is organized as follows.

    The notations, basic definitions, and some lemmas for proving our main result are given in Section 2. Our main result is presented in Section 3. In this section, we also prove weak and strong convergence results of the proposed method under mild conditions in a uniformly convex Banach space endowed with a graph. In Section 4, we study the convergence behavior of our method, and compare its efficiency with competing methods. In Section 5, we apply our proposed method to solve certain image deblurring and signal recovering problems.

    In this section, we recall a few basic notions concerning the connectivity of graphs. All of these notions can be found, e.g., in [20].

    Let C be a nonempty subset of a real Banach space X. We identify the graph G with the pair (V(G),E(G)), where the set V(G) of its vertices coincides with C and the set of edges E(G) contains {(x,x):x∈C}. Also, G is such that no two edges are parallel. A mapping T:C→C is said to be a G-contraction if T preserves edges of G (or T is edge-preserving), i.e.,

    (x,y)∈E(G) ⇒ (Tx,Ty)∈E(G),

    and T decreases weights of edges of G in the following way: there exists α∈(0,1) such that

    (x,y)∈E(G) ⇒ ||Tx−Ty|| ≤ α||x−y||.

    A mapping T:C→C is said to be G-nonexpansive (see [12], Definition 2.3 (iii)) if T preserves edges of G, i.e.,

    (x,y)∈E(G) ⇒ (Tx,Ty)∈E(G),

    and T does not increase weights of edges of G in the following way:

    (x,y)∈E(G) ⇒ ||Tx−Ty|| ≤ ||x−y||.

    If x and y are vertices in a graph G, then a path in G from x to y of length N (N∈N∪{0}) is a sequence {xi}, i=0,…,N, of N+1 vertices such that x0=x, xN=y and (xi,xi+1)∈E(G) for i=0,1,…,N−1. A graph G is connected if there is a path between any two vertices. A directed graph G=(V(G),E(G)) is said to be transitive if, for any x,y,z∈V(G) such that (x,y) and (y,z) are in E(G), we have (x,z)∈E(G). We denote by G−1 the graph obtained from G by reversing the direction of its edges, that is,

    E(G−1) = {(x,y)∈X×X : (y,x)∈E(G)}.

    Let x0∈V(G) and let A be a subset of V(G). We say that A is dominated by x0 if (x0,x)∈E(G) for all x∈A, and that A dominates x0 if, for each x∈A, (x,x0)∈E(G).

    In this paper, we use → and ⇀ to denote strong convergence and weak convergence, respectively.

    A mapping T:C→C is said to be G-demiclosed at 0 if, for any sequence {xn} in C such that (xn,xn+1)∈E(G), xn ⇀ x and Txn → 0 imply Tx = 0.

    Let X be a Banach space with dimension X ≥ 2. The modulus of convexity of X is the function δX:(0,2]→[0,1] defined by

    δX(ϵ) = inf{1 − ||(x+y)/2|| : ||x|| = 1, ||y|| = 1, ||x−y|| ≥ ϵ}.

    A Banach space X is uniformly convex if and only if δX(ϵ) > 0 for all ϵ∈(0,2].

    Recall that a Banach space X is said to satisfy Opial's condition [21] if xn ⇀ x and x ≠ y imply that

    lim supn ||xn−x|| < lim supn ||xn−y||.

    Let C be a nonempty closed convex subset of a real uniformly convex Banach space X. Recall that the mappings Ti (i=1,2,3) on C are said to satisfy condition (C) [19] if there exists a nondecreasing function f:[0,∞)→[0,∞) with f(0)=0 and f(r)>0 for all r>0 such that for all x∈C,

    max{||x−T1x||, ||x−T2x||, ||x−T3x||} ≥ f(d(x,F)),

    where F = F(T1)∩F(T2)∩F(T3), F(Ti) (i=1,2,3) are the sets of fixed points of Ti and d(x,F) = inf{||x−q|| : q∈F}.

    Let C be a subset of a metric space (X,d). A mapping T:C→C is semi-compact [22] if, for any sequence {xn} in C with limn d(xn,Txn) = 0, there exists a subsequence {xnj} of {xn} such that xnj → p ∈ C.

    Let C be a nonempty subset of a normed space X and let G=(V(G),E(G)) be a directed graph such that V(G)=C. Then C is said to have Property G (see [19]) if, for each sequence {xn} in C converging weakly to x∈C with (xn,xn+1)∈E(G), there is a subsequence {xnj} of {xn} such that (xnj,x)∈E(G) for all j∈N.

    Remark 1. If G is transitive, then Property G is equivalent to the following property: if {xn} is a sequence in C with (xn,xn+1)∈E(G) such that some subsequence {xnj} of {xn} converges weakly to x∈X, then (xn,x)∈E(G) for all n∈N.

    Definition 1. ([23], Definition 3.1) Let X be a vector space and let D be a nonempty subset of X×X. Then D is said to be coordinate-convex if for all (p,u),(p,v),(u,p),(v,p)∈D and for all t∈[0,1], we have

    t(p,u)+(1−t)(p,v) ∈ D and t(u,p)+(1−t)(v,p) ∈ D.

    Remark 2. If D is convex in X×X, then D is coordinate-convex in X×X.

    In the sequel, the following lemmas are needed to prove our main results.

    Lemma 1. ([24]) Let X be a uniformly convex Banach space, and {αn} a sequence in [δ,1−δ] for some δ∈(0,1). Suppose that sequences {xn} and {yn} in X are such that lim supn ||xn|| ≤ c, lim supn ||yn|| ≤ c and lim supn ||αnxn+(1−αn)yn|| = c for some c ≥ 0. Then limn ||xn−yn|| = 0.

    Lemma 2. ([25]) Let X be a Banach space that satisfies Opial's condition and let {xn} be a sequence in X. Let u,v∈X be such that limn ||xn−u|| and limn ||xn−v|| exist. If {xnj} and {xnk} are subsequences of {xn} that converge weakly to u and v, respectively, then u=v.

    The following lemmas give some useful consequences of Property G of C in a uniformly convex Banach space.

    Lemma 3. ([19]) Suppose that X is a Banach space satisfying Opial's condition, C has Property G and T:C→C is a G-nonexpansive mapping. Then I−T is G-demiclosed at 0, i.e., if xn ⇀ x and xn−Txn → 0, then x∈F(T), where F(T) is the set of fixed points of T.

    Lemma 4. ([26]) Let C be a nonempty closed convex subset of a uniformly convex Banach space X and suppose that C has Property G. Let T be a G-nonexpansive mapping on C. Then I−T is G-demiclosed at 0.

    The following useful result is due to [27].

    Lemma 5. ([27]) Let {xn} be a bounded sequence in a reflexive Banach space X. If for any weakly convergent subsequence {xnj} of {xn}, both {xnj} and {xnj+1} converge weakly to the same point in X, then the sequence {xn} is weakly convergent.

    Throughout this section, we let C be a nonempty closed convex subset of a Banach space X endowed with a directed graph G such that V(G)=C and E(G) is coordinate-convex. We also suppose that the graph G is transitive. The mappings Ti (i=1,2,3) are G-nonexpansive from C to C with F = F(T1)∩F(T2)∩F(T3) nonempty. For an arbitrary x0∈C, define the sequence {xn} by

    zn = (1−γn)xn + γnT3xn, yn = (1−βn)T3xn + βnT2zn, xn+1 = (1−αn)T1yn + αnT2zn, n ≥ 0, (3.1)

    where {αn}, {βn} and {γn} are appropriate real sequences in [0,1].
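As a quick sanity check (our own sketch, not part of the paper's analysis), one step of (3.1) can be coded directly; note how it reuses T3xn in the second line and T2zn in both the second and third lines, unlike (1.1) and (1.2). The test mapping below is an arbitrary contraction chosen only to illustrate the update order.

```python
def proposed_step(x, T1, T2, T3, alpha, beta, gamma):
    """One step of the modified three-step scheme (3.1)."""
    t3x = T3(x)                       # T3 is evaluated once and reused
    z = (1 - gamma) * x + gamma * t3x
    y = (1 - beta) * t3x + beta * T2(z)
    return (1 - alpha) * T1(y) + alpha * T2(z)

T = lambda x: 0.5 * x + 1.0           # toy contraction with fixed point 2
x = 0.0
for n in range(60):
    x = proposed_step(x, T, T, T, alpha=0.5, beta=0.5, gamma=0.5)
```

For this toy contraction, the composite map of one step is itself a contraction with a small factor, so the loop ends essentially at the fixed point 2.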

    Proposition 1. Let q0∈F be such that (x0,q0) and (q0,x0) are in E(G). Then (xn,q0), (zn,q0), (yn,q0), (q0,xn), (q0,zn), (q0,yn), (xn,yn), (xn,zn) and (xn,xn+1) are in E(G).

    Proof. We proceed by induction. Since T3 is edge-preserving and (x0,q0)∈E(G), we have (T3x0,q0)∈E(G). Note that

    (z0,q0) = ((1−γ0)x0+γ0T3x0, q0) = (1−γ0)(x0,q0)+γ0(T3x0,q0).

    Using (x0,q0),(T3x0,q0)∈E(G) and the coordinate-convexity of E(G), we have (z0,q0)∈E(G). Again, since T2 is edge-preserving and (z0,q0)∈E(G), we have (T2z0,q0)∈E(G). We also have

    (y0,q0) = ((1−β0)T3x0+β0T2z0, q0) = (1−β0)(T3x0,q0)+β0(T2z0,q0).

    Since (T3x0,q0),(T2z0,q0)∈E(G), by the coordinate-convexity of E(G) we have (y0,q0)∈E(G). Then, since T1 is edge-preserving, we get (T1y0,q0)∈E(G). Note that

    (x1,q0) = ((1−α0)T1y0+α0T2z0, q0) = (1−α0)(T1y0,q0)+α0(T2z0,q0).

    By the coordinate-convexity of E(G) and (T2z0,q0),(T1y0,q0)∈E(G), we get (x1,q0)∈E(G). Thus, since T3 is edge-preserving, (T3x1,q0)∈E(G). Again, by the coordinate-convexity of E(G) and (T3x1,q0),(x1,q0)∈E(G), we have

    (z1,q0) = ((1−γ1)x1+γ1T3x1, q0) = (1−γ1)(x1,q0)+γ1(T3x1,q0) ∈ E(G).

    Then, since T2 is edge-preserving, (T2z1,q0)∈E(G). By the coordinate-convexity of E(G) and (T2z1,q0),(T3x1,q0)∈E(G), we get

    (y1,q0) = ((1−β1)T3x1+β1T2z1, q0) = (1−β1)(T3x1,q0)+β1(T2z1,q0) ∈ E(G),

    and hence (T1y1,q0)∈E(G).

    Next, we assume that (xk,q0)∈E(G). Since T3 is edge-preserving, we get (T3xk,q0)∈E(G). Thus

    (zk,q0) = ((1−γk)xk+γkT3xk, q0) = (1−γk)(xk,q0)+γk(T3xk,q0) ∈ E(G),

    since E(G) is coordinate-convex. Hence, by the edge-preservingness of T2 and (zk,q0)∈E(G), we have (T2zk,q0)∈E(G). By the coordinate-convexity of E(G) and (T3xk,q0),(T2zk,q0)∈E(G), we get

    (yk,q0) = ((1−βk)T3xk+βkT2zk, q0) = (1−βk)(T3xk,q0)+βk(T2zk,q0) ∈ E(G).

    Also, since T1 is edge-preserving and (yk,q0)∈E(G), we get (T1yk,q0)∈E(G). By the coordinate-convexity of E(G) and (T2zk,q0),(T1yk,q0)∈E(G), we obtain

    (xk+1,q0) = ((1−αk)T1yk+αkT2zk, q0) = (1−αk)(T1yk,q0)+αk(T2zk,q0) ∈ E(G).

    Hence, by the edge-preservingness of T3, we get (T3xk+1,q0)∈E(G), and so (zk+1,q0) = ((1−γk+1)xk+1+γk+1T3xk+1, q0) = (1−γk+1)(xk+1,q0)+γk+1(T3xk+1,q0) ∈ E(G), since E(G) is coordinate-convex. Again, by the edge-preservingness of T2, we obtain (T2zk+1,q0)∈E(G). By the coordinate-convexity of E(G) and (T3xk+1,q0),(T2zk+1,q0)∈E(G), we obtain (yk+1,q0) = ((1−βk+1)T3xk+1+βk+1T2zk+1, q0) = (1−βk+1)(T3xk+1,q0)+βk+1(T2zk+1,q0) ∈ E(G). Therefore, (xn,q0),(zn,q0),(yn,q0)∈E(G) for all n ≥ 1.

    Since T3 is edge-preserving and (q0,x0)∈E(G), we have (q0,T3x0)∈E(G), and so (q0,z0) = (q0,(1−γ0)x0+γ0T3x0) = (1−γ0)(q0,x0)+γ0(q0,T3x0) ∈ E(G). Again, since T2 is edge-preserving and (q0,z0)∈E(G), we have (q0,T2z0)∈E(G). By the coordinate-convexity of E(G) and (q0,T3x0),(q0,T2z0)∈E(G), we get (q0,y0) = (q0,(1−β0)T3x0+β0T2z0) = (1−β0)(q0,T3x0)+β0(q0,T2z0) ∈ E(G).

    Using a similar argument, we can show that (q0,xn),(q0,zn),(q0,yn)∈E(G) under the assumption that (q0,x0),(q0,z0),(q0,y0)∈E(G). By the transitivity of G, we get (xn,yn),(xn,zn),(xn,xn+1)∈E(G). This completes the proof.

    Lemma 6. Let X be a uniformly convex Banach space. Suppose that {αn}, {βn} and {γn} are real sequences in [δ,1−δ] for some δ∈(0,1) and (x0,q0),(q0,x0)∈E(G) for arbitrary x0∈C and q0∈F. Then

    (i) limn ||xn−q0|| exists;

    (ii) limn ||xn−T1xn|| = limn ||xn−T2xn|| = limn ||xn−T3xn|| = 0.

    Proof. (i) Let q0∈F. By Proposition 1, we have (xn,q0), (yn,q0) and (zn,q0)∈E(G). Then, by the G-nonexpansiveness of Ti (i=1,2,3) and using (3.1), we have

    ||zn−q0|| = ||(1−γn)xn+γnT3xn−q0|| = ||(1−γn)(xn−q0)+γn(T3xn−q0)|| ≤ (1−γn)||xn−q0||+γn||T3xn−q0|| ≤ (1−γn)||xn−q0||+γn||xn−q0|| = ||xn−q0||, (3.2)
    ||yn−q0|| = ||(1−βn)T3xn+βnT2zn−q0|| = ||(1−βn)(T3xn−q0)+βn(T2zn−q0)|| ≤ (1−βn)||T3xn−q0||+βn||T2zn−q0|| ≤ (1−βn)||xn−q0||+βn||zn−q0|| ≤ (1−βn)||xn−q0||+βn||xn−q0|| = ||xn−q0||, (3.3)

    and so

    ||xn+1−q0|| = ||(1−αn)T1yn+αnT2zn−q0|| = ||(1−αn)(T1yn−q0)+αn(T2zn−q0)|| ≤ (1−αn)||T1yn−q0||+αn||T2zn−q0|| ≤ (1−αn)||yn−q0||+αn||zn−q0|| ≤ (1−αn)||xn−q0||+αn||xn−q0|| = ||xn−q0||. (3.4)

    Hence {||xn−q0|| : n∈N} is a decreasing sequence, which is obviously bounded below. Therefore limn ||xn−q0|| exists. In particular, the sequence {xn} is bounded.

    (ii) Assume that limn ||xn−q0|| = c. If c=0, then by the G-nonexpansiveness of Ti (i=1,2,3), we get

    ||xn−Tixn|| ≤ ||xn−q0|| + ||q0−Tixn|| ≤ ||xn−q0|| + ||q0−xn||.

    Therefore, the result follows. Suppose that c>0. Taking the lim sup on both sides in the inequality (3.3), we obtain

    lim supn ||yn−q0|| ≤ lim supn ||xn−q0|| = c. (3.5)

    In addition, by the G-nonexpansiveness of T1, we have ||T1yn−q0|| ≤ ||yn−q0||; taking the lim sup on both sides of this inequality and using (3.5), we obtain

    lim supn ||T1yn−q0|| ≤ c. (3.6)

    Taking the lim sup on both sides in the inequality (3.2), we obtain

    lim supn ||zn−q0|| ≤ lim supn ||xn−q0|| = c. (3.7)

    In addition, by the G-nonexpansiveness of T2, we have ||T2zn−q0|| ≤ ||zn−q0||; taking the lim sup on both sides of this inequality and using (3.7), we obtain

    lim supn ||T2zn−q0|| ≤ c. (3.8)

    Since limn ||xn+1−q0|| = c, letting n→∞ in the inequality (3.4), we have

    limn ||(1−αn)(T1yn−q0)+αn(T2zn−q0)|| = c. (3.9)

    By using (3.6), (3.8) and (3.9) and Lemma 1, we have

    limn ||T2zn−T1yn|| = 0. (3.10)

    Moreover, we see that

    ||xn+1−q0|| = ||(1−αn)T1yn+αnT2zn−q0|| ≤ (1−αn)||T1yn−q0||+αn||T2zn−q0|| ≤ (1−αn)(||T1yn−T2zn||+||T2zn−q0||)+αn||T2zn−q0|| = (1−αn)||T1yn−T2zn||+(1−αn)||T2zn−q0||+αn||T2zn−q0|| = (1−αn)||T1yn−T2zn||+||T2zn−q0||. (3.11)

    Using (3.10) and (3.11), we have

    lim infn ||T2zn−q0|| ≥ c, (3.12)

    and so by (3.8) and (3.12), we have

    limn ||T2zn−q0|| = c. (3.13)

    On the other hand,

    ||T2zn−q0|| ≤ ||T2zn−T1yn||+||T1yn−q0|| ≤ ||T2zn−T1yn||+||yn−q0||,

    which yields

    lim infn ||yn−q0|| ≥ c. (3.14)

    From (3.5) and (3.14),

    limn ||yn−q0|| = c. (3.15)

    In addition, by the G-nonexpansiveness of T3, we have ||T3xn−q0|| ≤ ||xn−q0||; taking the lim sup on both sides of this inequality, we obtain

    lim supn ||T3xn−q0|| ≤ c. (3.16)

    By (3.3) and (3.15), we have

    limn ||(1−βn)(T3xn−q0)+βn(T2zn−q0)|| = c. (3.17)

    Using (3.8), (3.16) and (3.17) and Lemma 1,

    limn ||T2zn−T3xn|| = 0. (3.18)

    Moreover,

    ||xn+1−q0|| = ||(1−αn)T1yn+αnT2zn−q0|| ≤ (1−αn)||T1yn−q0||+αn||T2zn−q0|| ≤ (1−αn)||T1yn−q0||+αn(||T2zn−T1yn||+||T1yn−q0||) = (1−αn)||T1yn−q0||+αn||T2zn−T1yn||+αn||T1yn−q0|| = ||T1yn−q0||+αn||T2zn−T1yn||. (3.19)

    Using (3.10) and (3.19), we have

    lim infn ||T1yn−q0|| ≥ c, (3.20)

    and so by (3.6) and (3.20), we get

    limn ||T1yn−q0|| = c. (3.21)

    On the other hand,

    ||T1yn−q0|| ≤ ||T1yn−T2zn||+||T2zn−q0|| ≤ ||T1yn−T2zn||+||zn−q0||,

    which yields

    lim infn ||zn−q0|| ≥ c. (3.22)

    From (3.7) and (3.22), we get

    limn ||zn−q0|| = c. (3.23)

    From (3.2) and (3.23), we have

    limn ||(1−γn)(xn−q0)+γn(T3xn−q0)|| = c. (3.24)

    By using (3.16), (3.24) and Lemma 1,

    limn ||T3xn−xn|| = 0. (3.25)

    Thus, it follows from (3.25) that

    ||zn−xn|| = ||(1−γn)xn+γnT3xn−xn|| = γn||T3xn−xn|| → 0 (as n→∞). (3.26)

    Using (3.18), (3.25) and (3.26) together with G-nonexpansiveness of T2,

    ||T2xn−xn|| ≤ ||T2xn−T2zn||+||T2zn−xn|| ≤ ||xn−zn||+||T2zn−T3xn||+||T3xn−xn|| → 0 (as n→∞). (3.27)

    From (3.18) and (3.25), we have

    ||yn−xn|| = ||(1−βn)(T3xn−xn)+βn(T2zn−xn)|| ≤ (1−βn)||T3xn−xn||+βn||T2zn−xn|| ≤ (1−βn)||T3xn−xn||+βn(||T2zn−T3xn||+||T3xn−xn||) = ||T3xn−xn||+βn||T2zn−T3xn|| → 0 (as n→∞). (3.28)

    Using (3.10), (3.18), (3.25) and (3.28) together with G-nonexpansiveness of T1, we have

    ||T1xn−xn|| ≤ ||T1xn−T1yn||+||T1yn−xn|| ≤ ||xn−yn||+||T1yn−T2zn||+||T2zn−T3xn||+||T3xn−xn|| → 0 (as n→∞).

    Therefore, we conclude that limn ||xn−T1xn|| = limn ||xn−T2xn|| = limn ||xn−T3xn|| = 0. This completes the proof.

    We now prove the weak convergence of the sequence generated by the new modified three-step iteration method (3.1) for three G-nonexpansive mappings in a uniformly convex Banach space satisfying Opial's condition.

    Theorem 1. Let X be a uniformly convex Banach space which satisfies Opial's condition and suppose that C has Property G. Suppose that {αn}, {βn} and {γn} are real sequences in [δ,1−δ] for some δ∈(0,1). If (x0,q0),(q0,x0)∈E(G) for arbitrary x0∈C and q0∈F, then {xn} converges weakly to a common fixed point of T1, T2 and T3.

    Proof. Let q0∈F be such that (x0,q0),(q0,x0)∈E(G). From Lemma 6 (i), limn ||xn−q0|| exists, thus {xn} is bounded. It follows from Lemma 6 (ii) that limn ||xn−T1xn|| = limn ||xn−T2xn|| = limn ||xn−T3xn|| = 0. Since X is uniformly convex (hence reflexive) and {xn} is bounded, every subsequence of {xn} has a weakly convergent subsequence. Suppose that subsequences {xnk} and {xnj} of {xn} converge weakly to u and v, respectively. By Lemma 6 (ii), we obtain xnk−Tixnk → 0 and xnj−Tixnj → 0 as k,j→∞. Using Lemma 3, we have u,v∈F. By Lemma 6 (i), limn ||xn−u|| and limn ||xn−v|| exist. It follows from Lemma 2 that u=v. Therefore, {xn} converges weakly to a common fixed point of T1, T2 and T3.

    It is worth noting that Opial's condition has remained crucial in proving weak convergence theorems. Every lp space (1 ≤ p < ∞) satisfies Opial's condition, while the Lp spaces do not have this property unless p = 2.

    Next, we deal with the weak convergence of the sequence {xn} generated by (3.1) for three G-nonexpansive mappings, without assuming Opial's condition, in a uniformly convex Banach space with a directed graph.

    Theorem 2. Let X be a uniformly convex Banach space. Suppose that C has Property G, {αn}, {βn} and {γn} are real sequences in [δ,1−δ] for some δ∈(0,1), F is dominated by x0 and F dominates x0. If (x0,q0),(q0,x0)∈E(G) for arbitrary x0∈C and q0∈F, then {xn} converges weakly to a common fixed point of T1, T2 and T3.

    Proof. Let q0∈F be such that (x0,q0),(q0,x0) are in E(G). From Lemma 6 (i), limn ||xn−q0|| exists, so {xn} is bounded in C. Since C is a nonempty closed convex subset of the uniformly convex Banach space X, it is weakly compact, and hence there exists a subsequence {xnj} of {xn} such that {xnj} converges weakly to some point p∈C. By Lemma 6 (ii), we obtain

    limj ||xnj−T1xnj|| = limj ||xnj−T2xnj|| = limj ||xnj−T3xnj|| = 0. (3.29)

    In addition, ||yn−zn|| ≤ ||yn−xn||+||xn−zn||. Using (3.26) and (3.28), we have

    limn ||yn−zn|| = 0. (3.30)

    From (3.26) and G-nonexpansiveness of T2,

    ||T2zn−zn|| ≤ ||T2zn−T2xn||+||T2xn−xn||+||xn−zn|| ≤ ||zn−xn||+||T2xn−xn||+||xn−zn|| → 0 (as n→∞). (3.31)

    Thus, it follows from (3.10), (3.30) and (3.31) that

    ||T1yn−yn|| ≤ ||T1yn−T2zn||+||T2zn−zn||+||zn−yn|| → 0 (as n→∞). (3.32)

    Using Lemma 4, we have that I−T1, I−T2 and I−T3 are G-demiclosed at 0, so that p∈F. To complete the proof, it suffices to show that {xn} converges weakly to p. To this end, we show that {xn} satisfies the hypothesis of Lemma 5. Let {xnj} be a subsequence of {xn} which converges weakly to some q∈C. By similar arguments as above, q∈F. Now, for each j ≥ 1, using (3.1), we have

    xnj+1 = (1−αnj)T1ynj + αnjT2znj. (3.33)

    It follows from (3.29) that

    T3xnj = (T3xnj−xnj)+xnj ⇀ q. (3.34)

    Now from (3.1) and (3.34),

    znj = (1−γnj)xnj+γnjT3xnj ⇀ q. (3.35)

    Using (3.31) and (3.35), we have

    T2znj = (T2znj−znj)+znj ⇀ q. (3.36)

    Now from (3.1), (3.34) and (3.36),

    ynj = (1−βnj)T3xnj+βnjT2znj ⇀ q. (3.37)

    Also from (3.32) and (3.37),

    T1ynj = (T1ynj−ynj)+ynj ⇀ q. (3.38)

    It follows from (3.33), (3.36) and (3.38) that

    xnj+1 ⇀ q.

    Therefore, the sequence {xn} satisfies the hypothesis of Lemma 5, which in turn implies that {xn} converges weakly to q, so that p=q. This completes the proof.

    The strong convergence of the sequence generated by the new modified three-step iteration method (3.1) for three G-nonexpansive mappings in a uniformly convex Banach space with a directed graph is discussed in the rest of this section.

    Theorem 3. Let X be a uniformly convex Banach space. Suppose that {αn}, {βn} and {γn} are real sequences in [δ,1−δ] for some δ∈(0,1), Ti (i=1,2,3) satisfy condition (C), F is dominated by x0 and F dominates x0. Then {xn} converges strongly to a common fixed point of T1, T2 and T3.

    Proof. By Lemma 6 (i), limn ||xn−q|| exists for any q∈F, and so limn d(xn,F) exists. Also, by Lemma 6 (ii), limn ||xn−T1xn|| = limn ||xn−T2xn|| = limn ||xn−T3xn|| = 0. It follows from condition (C) that limn f(d(xn,F)) = 0. Since f:[0,∞)→[0,∞) is a nondecreasing function satisfying f(0)=0 and f(r)>0 for all r>0, we obtain limn d(xn,F) = 0. Hence, we can find a subsequence {xnj} of {xn} and a sequence {uj} in F such that ||xnj−uj|| < 1/2^j. Put nj+1 = nj+k for some k ≥ 1. Then, since {||xn−uj||} is nonincreasing in n,

    ||xnj+1−uj|| ≤ ||xnj+k−1−uj|| ≤ … ≤ ||xnj−uj|| < 1/2^j.

    We obtain ||uj+1−uj|| ≤ ||uj+1−xnj+1||+||xnj+1−uj|| < 3/2^(j+1), thus {uj} is a Cauchy sequence. Assume that uj → q0 ∈ C as j→∞. Since F is closed, we get q0∈F. Therefore, xnj → q0 as j→∞. Since limn ||xn−q0|| exists, we obtain xn → q0. This completes the proof.

    Theorem 4. Let X be a uniformly convex Banach space. Suppose that C has Property G, {αn}, {βn} and {γn} are real sequences in [δ,1−δ] for some δ∈(0,1), F is dominated by x0 and F dominates x0. If one of Ti (i=1,2,3) is semi-compact, then {xn} converges strongly to a common fixed point of T1, T2 and T3.

    Proof. It follows from Lemma 6 that {xn} is bounded and limn ||xn−T1xn|| = limn ||xn−T2xn|| = limn ||xn−T3xn|| = 0. Since one of T1, T2 and T3 is semi-compact, there exists a subsequence {xnj} of {xn} such that xnj → q ∈ C as j→∞. Since C has Property G and G is transitive, we obtain (xnj,q)∈E(G). Notice that, for each i∈{1,2,3}, limj ||xnj−Tixnj|| = 0. Then

    ||q−Tiq|| ≤ ||q−xnj||+||xnj−Tixnj||+||Tixnj−Tiq|| ≤ ||q−xnj||+||xnj−Tixnj||+||xnj−q|| → 0 (as j→∞).

    Hence q∈F, and limn d(xn,F) exists as in the proof of Theorem 3. We note that d(xnj,F) ≤ ||xnj−q|| → 0 as j→∞; hence limn d(xn,F) = 0. It follows, as in the proof of Theorem 3, that {xn} converges strongly to a common fixed point of T1, T2 and T3. This completes the proof.

    To support our main theorems, we now discuss approximate solutions of common fixed-point problems for a class of G-nonexpansive mappings on a closed interval. The following definition will be useful in this context.

    Definition 2. [28] Suppose {ζn} is a sequence that converges to ζ, with ζn ≠ ζ for all n. If positive constants ν and ψ exist with

    limn |ζn+1−ζ| / |ζn−ζ|^ψ = ν,

    then {ζn} converges to ζ of order ψ, with asymptotic error constant ν. If ψ=1 (and ν<1), the sequence converges linearly.
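Definition 2 suggests a simple numerical check: compute the successive ratios |ζn+1−ζ|/|ζn−ζ|^ψ and watch them settle toward ν. A minimal sketch (the function name and the test sequence are ours, not the paper's):

```python
def error_ratios(seq, limit, psi=1.0):
    """Successive ratios |z_{n+1} - z| / |z_n - z|**psi from Definition 2."""
    errs = [abs(s - limit) for s in seq]
    return [errs[n + 1] / errs[n] ** psi
            for n in range(len(errs) - 1) if errs[n] > 0]

# A linearly convergent sequence z_n = 2 + 3 * 0.5**n has nu = 0.5 when psi = 1.
seq = [2 + 3 * 0.5 ** n for n in range(20)]
ratios = error_ratios(seq, 2.0, psi=1.0)
```

For this geometric example every ratio is 0.5, so the sequence converges linearly with asymptotic error constant 0.5.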

    In 2011, Phuengrattana and Suantai [29] showed that the Ishikawa iteration converges faster than the Mann iteration for a class of continuous functions on a closed interval of the real line. To study the order of convergence of a real sequence {ζn} converging to ζ, we use the standard terminology of numerical analysis; see [28], for example.

    Example 1. Let X=R, C=[0,2] and let G=(V(G),E(G)) be the directed graph defined by V(G)=C and (x,y)∈E(G) if and only if 0.50 ≤ x ≤ y ≤ 1.70 or x=y∈C. Then E(G) is coordinate-convex and {(x,x):x∈V(G)}⊆E(G). Define mappings T1,T2,T3:C→C by

    T1x = (2/3)arcsin(x−1)+1, T2x = (1/3)tan(x−1)+1, T3x = √x,

    for any x∈C. It is easy to show that T1, T2, T3 are G-nonexpansive, but T1, T2, T3 are not nonexpansive because

    |T1x−T1y| > 0.50 = |x−y|,
    |T2u−T2v| > 0.07 = |u−v|,

    and

    |T3p−T3q| > 0.45 = |p−q|,

    when x=1.95, y=1.45, u=0.08, v=0.01, p=0.5 and q=0.05.

    Let

    αn = (n+1)/(5n+3), βn = (n+2)/(8n+5) and γn = (n+4)/(10n+7). (4.1)
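As an illustrative reimplementation of this setting (our own sketch; the paper's Matlab code is not reproduced here, and we read T3 as the square-root map), the proposed scheme (3.1) with the control sequences (4.1) can be run as follows:

```python
import math

T1 = lambda x: (2 / 3) * math.asin(x - 1) + 1
T2 = lambda x: (1 / 3) * math.tan(x - 1) + 1
T3 = lambda x: math.sqrt(x)

x = 1.65                        # common starting point used in the experiments
for n in range(1, 101):
    a = (n + 1) / (5 * n + 3)   # alpha_n from (4.1)
    b = (n + 2) / (8 * n + 5)   # beta_n from (4.1)
    g = (n + 4) / (10 * n + 7)  # gamma_n from (4.1)
    t3x = T3(x)
    z = (1 - g) * x + g * t3x
    y = (1 - b) * t3x + b * T2(z)
    x = (1 - a) * T1(y) + a * T2(z)
```

The iterates decrease monotonically toward the common fixed point x* = 1, consistent with the convergence behavior reported below.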

    Let {xn} be the sequence generated by (3.1), and let {yn} and {zn} be the sequences generated by the three-step Noor iteration (1.1) and the SP-iteration (1.2), respectively. Example 1 shows the convergence behavior of these three comparative methods; x* = 1 is a common fixed point of T1, T2 and T3. We choose the common starting point 1.65 and set the relative error |ζn−x*|/|x*| < 1.00e−08 as the stopping criterion, where {ζn} stands for any of the comparative sequences. We present the results of the three comparative algorithms under permutations of the operators T1, T2 and T3. Permuting the group of these three operators gives 27 case studies in all, but we are interested only in the permutations in which no operator is duplicated. That leaves the following 6 cases for the three comparative algorithms, namely the proposed algorithm (3.1), the three-step Noor iteration (1.1) and the SP-iteration (1.2).

    Case I. Three comparative algorithms with the operator order T1T2T3.

    Case II. Three comparative algorithms with the operator order T1T3T2.

    Case III. Three comparative algorithms with the operator order T2T1T3.

    Case IV. Three comparative algorithms with the operator order T2T3T1.

    Case V. Three comparative algorithms with the operator order T3T1T2.

    Case VI. Three comparative algorithms with the operator order T3T2T1.

    All numerical experiments on the common fixed point solution, using the three comparative methods with permutations of the operators T1, T2 and T3, are shown in Figures 1–6. Each figure contains three graphs showing the behavior of the numerical solution sequences, the relative error sequences and the asymptotic error sequences of the three comparative methods, respectively. The first two graphs of each figure show that all sequences generated by these three comparative methods converge to the common fixed point x* = 1, and that the relative errors decrease to zero as the number of iterations increases. The remaining graph of each figure shows the tendency of the asymptotic error constant ν for each sequence {ζn}, computed from the ratio |ζn+1−1|/|ζn−1|, for the three-step Noor, SP and proposed methods.

    Figure 1.  Numerical solution, relative error and asymptotic error constant of the three comparative methods for case I.
    Figure 2.  Numerical solution, relative error and asymptotic error constant of the three comparative methods for case II.
    Figure 3.  Numerical solution, relative error and asymptotic error constant of the three comparative methods for case III.
    Figure 4.  Numerical solution, relative error and asymptotic error constant of the three comparative methods for case IV.
    Figure 5.  Numerical solution, relative error and asymptotic error constant of the three comparative methods for case V.
    Figure 6.  Numerical solution, relative error and asymptotic error constant of the three comparative methods for case VI.

    It can be seen from the remaining graph of each figure that all methods converge linearly, which Definition 2 makes precise. Since a smaller asymptotic error constant means faster convergence of the sequence under consideration, it can be concluded from the remaining graphs that the proposed method converges faster than the three-step Noor and SP methods.

    Thus, it can be concluded from Figures 1–6 that the proposed method without duplicate operators gives faster convergence compared with the three-step Noor and SP methods, and that the fastest case occurs when the proposed method with the operator order T_3-T_1-T_2 is used.

    Now, we apply our proposed algorithms to solve certain image deblurring and signal recovery problems. All codes were written in Matlab and run on a laptop with an Intel Core i7 processor, 16.00 GB RAM and Windows 10 (64-bit).

    The minimization problem of the sum of two functions is to find a solution of

    \begin{eqnarray} \min\limits_{x \in \mathbb{R}^{n}} \{F(x) := f(x) + h(x)\}, \end{eqnarray} (5.1)

    where h: \mathbb{R}^{n} \to \mathbb{R} \cup \{\infty\} is a proper, convex and lower semicontinuous function, and f: \mathbb{R}^{n} \to \mathbb{R} is a convex differentiable function whose gradient \nabla f is L-Lipschitz continuous for some L > 0. The solution of (5.1) can be characterized by Fermat's rule, Theorem 16.3 of Bauschke and Combettes [30], as follows:

    x^{*} \ \text{is a minimizer of} \ (f+h) \ \text{if and only if} \ 0 \in \partial h(x^{*}) + \nabla f(x^{*}),

    where \partial h is the subdifferential of h and \nabla f is the gradient of f. The subdifferential of h at x, denoted by \partial h(x), is defined by

    \partial h(x) := \{u : h(y) - h(x) \geq \langle u, y - x \rangle \ \text{for all} \ y\}.

    It is also well-known that the solution of (5.1) is characterized by the following fixed point problem:

    x^{*} \ \text{is a minimizer of} \ (f+h) \ \text{if and only if} \ x^{*} = prox_{\mu h}(I - \mu \nabla f)(x^{*}),

    where \mu > 0 and prox_{h} is the proximity operator of h defined by prox_{h}(x) := \arg\min_{y}\{h(y) + \frac{1}{2}\|x - y\|_{2}^{2}\}; see [31] for more details. It is also known that prox_{\mu h}(I - \mu \nabla f) is a nonexpansive mapping when \mu \in (0, \frac{2}{L}).
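For instance, when h = 0 the proximity operator is the identity, so prox_{\mu h}(I - \mu \nabla f) reduces to a plain gradient step, and the nonexpansiveness claim can be checked numerically. Below is a minimal NumPy sketch (standing in for the paper's Matlab code) with a randomly generated least-squares objective; the matrix sizes and the random seed are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Least-squares data: f(x) = 0.5*||b - Mx||_2^2, so grad f(x) = M^T(Mx - b),
# which is L-Lipschitz with L = ||M^T M||_2 (the spectral norm).
M = rng.standard_normal((30, 20))
b = rng.standard_normal(30)
L = np.linalg.norm(M.T @ M, 2)
mu = 1.0 / L  # any mu in (0, 2/L) works

def T(x):
    # With h = 0, prox_{mu h}(I - mu grad f) is just a gradient step.
    return x - mu * (M.T @ (M @ x - b))

# Empirical check of nonexpansiveness: ||Tu - Tv|| <= ||u - v||.
u, v = rng.standard_normal(20), rng.standard_normal(20)
assert np.linalg.norm(T(u) - T(v)) <= np.linalg.norm(u - v)
```

The check passes because I - \mu M^{T}M has all eigenvalues in [0, 1] when \mu \in (0, 2/L).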

    Let B be the degraded image of the true image X in matrix form with \tilde{m} rows and \tilde{n} columns (B, X \in \mathbb{R}^{\tilde{m} \times \tilde{n}}). The key to obtaining the image restoration model is to rearrange the elements of the images B and X into column vectors by stacking the columns of these images into two long vectors b and x, where b = vec(B) and x = vec(X), both of length n = \tilde{m}\tilde{n}. The image restoration problem can then be modelled in one-dimensional vector form by the following linear system:

    b=Mx, (5.2)

    where x \in \mathbb{R}^{n} is the original image, b \in \mathbb{R}^{n} is the observed image, M \in \mathbb{R}^{n \times n} is the blurring matrix and n = \tilde{m}\tilde{n}. In order to solve problem (5.2), we aim to approximate the original image x, which leads to the following least squares (LS) problem:

    \begin{eqnarray} \min\limits_{x} \frac{1}{2}\|b - Mx\|_{2}^{2}, \end{eqnarray} (5.3)

    where \|\cdot\|_{2} is defined by \|x\|_{2} = \sqrt{\sum_{i=1}^{n}|x_{i}|^{2}}. We can apply the minimization problem of the sum of two functions (5.1) to the LS problem (5.3) by setting h(x) = 0 and f(x) = \frac{1}{2}\|b - Mx\|_{2}^{2}, where \nabla f(x) = M^{T}(Mx - b). Thus, the LS problem can be solved with our method (3.1) by setting

    Tx = x - \mu M^{T}(Mx - b), \quad \mu \in \left(0, \frac{2}{\|M^{T}M\|_{2}}\right).

    Then the following proposed method is obtained for finding the solution of the image deblurring problem:

    \begin{align} z_{n} & = (1 - \gamma_{n})x_{n} + \gamma_{n}\left(x_{n} - \mu M^{T}(Mx_{n} - b)\right), \\ y_{n} & = (1 - \beta_{n})\left(x_{n} - \mu M^{T}(Mx_{n} - b)\right) + \beta_{n}\left(z_{n} - \mu M^{T}(Mz_{n} - b)\right), \\ x_{n+1} & = (1 - \alpha_{n})\left(y_{n} - \mu M^{T}(My_{n} - b)\right) + \alpha_{n}\left(z_{n} - \mu M^{T}(Mz_{n} - b)\right), \end{align} (5.4)

    where \mu \in (0, \frac{2}{\|M^{T}M\|_{2}}) and \{\alpha_{n}\}, \{\beta_{n}\} and \{\gamma_{n}\} are sequences in [\delta, 1 - \delta] for all n \in \mathbb{N} and for some \delta \in (0, 1). The algorithm (5.4) is used for solving the image deblurring problem (5.2) with the default parameter (4.1) and \mu = 1/\|M^{T}M\|_{2}.
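A small self-contained sketch of iteration (5.4) on a synthetic, well-conditioned system follows (NumPy standing in for the Matlab implementation). The constant control sequences and the problem size are illustrative placeholders, not the paper's default parameter (4.1):

```python
import numpy as np

rng = np.random.default_rng(1)

# Small synthetic system b = Mx with a well-conditioned blurring matrix.
n = 15
M = np.eye(n) + 0.01 * rng.standard_normal((n, n))
x_true = rng.standard_normal(n)
b = M @ x_true

mu = 1.0 / np.linalg.norm(M.T @ M, 2)
T = lambda v: v - mu * (M.T @ (M @ v - b))  # T v = v - mu M^T(Mv - b)

# Placeholder constant control sequences in [delta, 1 - delta].
alpha = beta = gamma = 0.5

x = np.zeros(n)
for _ in range(2000):
    z = (1 - gamma) * x + gamma * T(x)
    y = (1 - beta) * T(x) + beta * T(z)
    x = (1 - alpha) * T(y) + alpha * T(z)

# The iterates approach the solution of b = Mx.
assert np.linalg.norm(x - x_true) / np.linalg.norm(x_true) < 1e-6
```

Since M is invertible here, the unique fixed point of T is the exact solution, and the iteration converges to it.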

    The goal of the image deblurring problem is to find the original image from the observed image, even without knowing which blurring matrix degraded it. However, the blurring matrix M must be known in order to apply algorithm (5.4). We now present a new idea for solving the image deblurring problem when the observed images b_{1}, b_{2}, \ldots, b_{N} can be restored by using the blurring matrices M_{1}, M_{2}, \ldots, M_{N}, respectively, in which

    \begin{equation} b_{i} = M_{i}x, \quad i = 1, 2, \ldots, N. \end{equation} (5.5)

    That is, the original image x is a common solution of the N linear equations (5.5). Let us now consider the following LS problems:

    \begin{eqnarray} \min\limits_{x \in \mathbb{R}^{n}} \frac{1}{2}\|M_{1}x - b_{1}\|_{2}^{2}, \quad \min\limits_{x \in \mathbb{R}^{n}} \frac{1}{2}\|M_{2}x - b_{2}\|_{2}^{2}, \quad \ldots, \quad \min\limits_{x \in \mathbb{R}^{n}} \frac{1}{2}\|M_{N}x - b_{N}\|_{2}^{2}, \end{eqnarray} (5.6)

    where x is the common solution of problem (5.6). To apply the proposed algorithm (5.4) in finding the original image x, only a group of three blurring matrices M_{j}, M_{k}, M_{l} chosen from the known blurring matrices M_{1}, M_{2}, \ldots, M_{N} is needed. We can then apply the minimization problem of the sum of two functions with our method (3.1) for solving the LS problems by setting

    \mathcal{T}_{i}x = x - \mu_{i}\nabla f_{i}(x)

    with h_{i}(x) = 0 and f_{i}(x) = \frac{1}{2}\|b_{i} - M_{i}x\|_{2}^{2}, where \nabla f_{i}(x) = M_{i}^{T}(M_{i}x - b_{i}). We now present the proposed algorithm with M_{j}-M_{k}-M_{l}:

    \begin{align} z_{n} & = (1 - \gamma_{n})x_{n} + \gamma_{n}\left(x_{n} - \mu_{j}M_{j}^{T}(M_{j}x_{n} - b_{j})\right), \\ y_{n} & = (1 - \beta_{n})\left(x_{n} - \mu_{j}M_{j}^{T}(M_{j}x_{n} - b_{j})\right) + \beta_{n}\left(z_{n} - \mu_{k}M_{k}^{T}(M_{k}z_{n} - b_{k})\right), \\ x_{n+1} & = (1 - \alpha_{n})\left(y_{n} - \mu_{l}M_{l}^{T}(M_{l}y_{n} - b_{l})\right) + \alpha_{n}\left(z_{n} - \mu_{k}M_{k}^{T}(M_{k}z_{n} - b_{k})\right), \end{align} (5.7)

    with the default parameter (4.1) and \mu_{j} = 1/\|M_{j}^{T}M_{j}\|_{2}, \mu_{k} = 1/\|M_{k}^{T}M_{k}\|_{2}, \mu_{l} = 1/\|M_{l}^{T}M_{l}\|_{2}; we call it the proposed algorithm with M_{j}-M_{k}-M_{l}. The implemented algorithm (5.7) is proposed for solving the image deblurring problem using every group of three blurring matrices chosen from the N known blurring matrices, with the default parameter (4.1). The original RGB color image shown in Figure 7 is used to demonstrate the practicability of the proposed algorithm. The relative error in the stopping criterion of the proposed algorithms is defined as \|x_{n} - x\|/\|x\| < 10^{-3}. The performance of the comparative algorithms at x_{n} in the image deblurring process is measured quantitatively by means of the peak signal-to-noise ratio (PSNR), which is defined by

    \text{PSNR}(x_{n}) = 20\log_{10}\left(\frac{255^{2}}{\text{MSE}}\right),
    Figure 7.  The original RGB image with matrix size 248×356×3.

    where \text{MSE} = \|x_{n} - x\|_{2}^{2}. Three different versions of the original RGB image degraded by the blurring matrices M_{1}, M_{2} and M_{3} are shown in Figure 8. These are used to test the implemented algorithm.
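As a small self-contained illustration of iteration (5.7), the following NumPy sketch runs the three-operator scheme on three synthetic invertible "blurring" matrices that share a common solution. The matrix sizes, seed and constant control sequences are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 12
x_true = rng.standard_normal(n)

# Three well-conditioned "blurring" matrices sharing the solution x_true.
Ms = [np.eye(n) + 0.01 * rng.standard_normal((n, n)) for _ in range(3)]
bs = [M @ x_true for M in Ms]
mus = [1.0 / np.linalg.norm(M.T @ M, 2) for M in Ms]

def T(i, v):
    # T_i v = v - mu_i M_i^T (M_i v - b_i)
    return v - mus[i] * (Ms[i].T @ (Ms[i] @ v - bs[i]))

alpha = beta = gamma = 0.5  # placeholder control sequences
j, k, l = 0, 1, 2           # operator order M_j-M_k-M_l

x = np.zeros(n)
for _ in range(2000):
    z = (1 - gamma) * x + gamma * T(j, x)
    y = (1 - beta) * T(j, x) + beta * T(k, z)
    x = (1 - alpha) * T(l, y) + alpha * T(k, z)

# The iterates approach the common solution of the three systems.
assert np.linalg.norm(x - x_true) / np.linalg.norm(x_true) < 1e-6
```

Because all three operators share the same fixed point x_true, the composite scheme converges to it regardless of the operator order chosen.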

    Figure 8.  The original RGB image degraded by blurred matrices M1, M2 and M3 respectively.

    Next, we present restoration of images that have been corrupted by the following blur types:

    Type I. Gaussian blur of filter size 9×9 with standard deviation σ=4 (The original image has been degraded by the blurring matrix M1).

    Type II. Out of focus blur (Disk) with radius r=6 (The original image has been degraded by the blurring matrix M2).

    Type III. Motion blur with a motion length of 21 pixels (len = 21) and a motion orientation of 11 degrees (\theta = 11^{\circ}) (The original image has been degraded by the blurring matrix M_3).

    Since the test image X and the three differently blurred images B (see Figure 8) are represented by their red-green-blue components, we denote by X_r, X_g, X_b and B_r, B_g, B_b the gray-scale images that constitute the red-green-blue channels of the image X and the blurred image B, respectively. Thus, we define the column vectors x and b from the color images X and B, both of length n = 3\tilde{m}\tilde{n}. We then apply the proposed algorithms to obtain the common solution of the image deblurring problem with these three blurring matrices, and we also compare with the three-step Noor iteration (1.1) and the SP-iteration (1.2).
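The channel stacking described above can be reproduced as follows (a NumPy sketch mirroring Matlab's column-major X(:); the order in which the three channel vectors are concatenated is our assumption, since the text does not fix it):

```python
import numpy as np

# A tiny 2x3 RGB "image": each channel is stacked column-by-column
# (Matlab's X(:), i.e. order='F' in NumPy), then the three channel
# vectors are concatenated into one vector of length n = 3*m*n.
X = np.arange(2 * 3 * 3).reshape(2, 3, 3)  # (rows, cols, channels)
channels = [X[:, :, c].flatten(order='F') for c in range(3)]
x = np.concatenate(channels)               # length 3 * 2 * 3 = 18
assert x.shape == (18,)

# Undo the stacking to recover the original image exactly.
X_back = np.stack(
    [x[c * 6:(c + 1) * 6].reshape(2, 3, order='F') for c in range(3)],
    axis=2)
assert np.array_equal(X_back, X)
```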

    The permutations of groups of these three blurring matrices give 27 case studies to consider. First, we are interested in the three simple cases in which all three blurring matrices are the same. Second, we are also interested in the cases without duplicate blurring matrices. That is, the following 9 cases of the three comparative algorithms, consisting of the proposed algorithm (5.7), the three-step Noor iteration (1.1) and the SP-iteration (1.2), are demonstrated and discussed.

    Case I. Three comparative algorithms with M_1-M_1-M_1.

    Case II. Three comparative algorithms with M_1-M_2-M_3.

    Case III. Three comparative algorithms with M_1-M_3-M_2.

    Case IV. Three comparative algorithms with M_2-M_2-M_2.

    Case V. Three comparative algorithms with M_2-M_1-M_3.

    Case VI. Three comparative algorithms with M_2-M_3-M_1.

    Case VII. Three comparative algorithms with M_3-M_3-M_3.

    Case VIII. Three comparative algorithms with M_3-M_1-M_2.

    Case IX. Three comparative algorithms with M_3-M_2-M_1.

    Figures 9–17 show the comparative behavior of the Cauchy error, relative error and PSNR quality of the reconstructed RGB image for the 9 cases.

    Figure 9.  The Cauchy norm, relative norm and PSNR quality plots of three comparative algorithms with case I.
    Figure 10.  The Cauchy norm, relative norm and PSNR quality plots of three comparative algorithms with case II.
    Figure 11.  The Cauchy norm, relative norm and PSNR quality plots of three comparative algorithms with case III.
    Figure 12.  The Cauchy norm, relative norm and PSNR quality plots of three comparative algorithms with case IV.
    Figure 13.  The Cauchy norm, relative norm and PSNR quality plots of three comparative algorithms with case V.
    Figure 14.  The Cauchy norm, relative norm and PSNR quality plots of three comparative algorithms with case VI.
    Figure 15.  The Cauchy norm, relative norm and PSNR quality plots of three comparative algorithms with case VII.
    Figure 16.  The Cauchy norm, relative norm and PSNR quality plots of three comparative algorithms with case VIII.
    Figure 17.  The Cauchy norm, relative norm and PSNR quality plots of three comparative algorithms with case IX.

    It is remarkable that the Cauchy error and relative error plots in each case of the three comparative algorithms decrease as the iteration number increases. Since the Cauchy and relative error plots show the validity and confirm the convergence of the presented methods, it is guaranteed that all presented methods in Figures 9–17 converge to the common solution of the deblurring problem (5.6). Based on the PSNR plots of each case, all images restored using these three comparative algorithms gain quality as the iteration number increases. It can be concluded from the comparative plots in Figures 9–17 that the proposed method is more efficient than the three-step Noor and SP methods. Moreover, the PSNR quality of the restored image is much better when the proposed method with M_j-M_k-M_l and j \neq k \neq l is used for solving the deblurring problem, compared with the proposed methods with M_1-M_1-M_1, M_2-M_2-M_2 and M_3-M_3-M_3. The best cases in recovering the observed image occur when the proposed methods with M_2-M_1-M_3 and M_3-M_1-M_2 are used. Figure 18 shows crops of the reconstructed RGB images at the 10,000th iteration obtained by the proposed algorithms for the common solution of the deblurring problem with operators M_1-M_1-M_1, M_2-M_2-M_2, M_3-M_3-M_3 and M_3-M_1-M_2, respectively.

    It can be seen from these figures that the image restored by the proposed algorithm with operators M_3-M_1-M_2 has the smoothest quality among the restorations of the degraded image.

    Next, we give some numerical examples for signal recovery.

    In signal processing, compressed sensing can be modeled as the following underdetermined linear equation:

    y = Ax + ν,

    where x \in \mathbb{R}^{n} is an original signal with n components to be recovered, \nu, y \in \mathbb{R}^{m} are the noise and the noisy observed signal with m components, respectively, and A \in \mathbb{R}^{m \times n} is a degraded matrix. Finding solutions of this underdetermined linear equation can be seen as solving the LASSO problem:

    \begin{eqnarray} \min\limits_{x \in \mathbb{R}^{n}} \frac{1}{2} \| y - Ax \|^{2}_{2} +\lambda\|x\|_1, \end{eqnarray} (5.8)

    where \lambda > 0 . A new idea for solving the signal recovery problem is presented when the observed signals y_1, y_2, ..., y_N can be recovered by using the known degraded matrices A_1, A_2, ..., A_N , in which

    \begin{equation} y_i = A_i \mathbf{x} + \nu_i, \quad i = 1, 2, \ldots, N. \end{equation} (5.9)

    where the true signal x is the common solution of problem (5.9).

    Let us consider the following LASSO problems:

    \begin{eqnarray} \min\limits_{x \in \mathbb{R}^{n}} \frac{1}{2} \| A_{1}x - y_{1} \|^{2}_{2} &+&\lambda_1\|x\|_1 , \\ \min\limits_{x \in \mathbb{R}^{n}} \frac{1}{2} \| A_{2}x - y_{2} \|^{2}_{2} &+&\lambda_2\|x\|_1 , \\ &\vdots& \\ \min\limits_{x \in \mathbb{R}^{n}} \frac{1}{2} \| A_{N}x - y_{N} \|^{2}_{2} &+&\lambda_N\|x\|_1 , \end{eqnarray} (5.10)

    where the true signal x is the common solution of problem (5.10). We can apply the minimization problem of the sum of two functions with our method (3.1) for solving the LASSO problems (5.10) by setting

    \mathcal{T}_ix = prox_{\mu_i h_i}\big(x -\mu_i\nabla f_i(x)\big), \quad \mu_i \in \left( 0, \frac{2}{\|A_i^T A_i\|_2} \right), \quad i = 1, 2, \ldots, N,

    with h_i(x) = \lambda_i\|x\|_1 and f_i(x) = \frac{1}{2}\|y_i-A_ix\|_2^2 , where \nabla f_i(x) = A_i^T(A_ix-y_i) . We now obtain the following proposed method to find the common solution of the LASSO problems (5.10) for a group of three degraded matrices A_j, A_k, A_l chosen from the N degraded matrices A_1, A_2, ..., A_N :

    \begin{align} z_{n} & = (1 - \gamma_{n})x_{n} + \gamma_{n} prox_{\mu_j h_j}\left( x_{n} - \mu_j A_j^T(A_jx_n - y_j) \right), \\ y_{n} & = (1 - \beta_{n}) prox_{\mu_j h_j}\left( x_{n} - \mu_j A_j^T(A_j x_n - y_j) \right) + \beta_{n} prox_{\mu_k h_k}\left( z_{n} - \mu_k A_k^T(A_k z_{n} - y_k) \right), \\ x_{n+1} & = (1 - \alpha_{n})prox_{\mu_l h_l} \left( y_{n} -\mu_l A_l^T(A_l y_{n} - y_l) \right) + \alpha_{n} prox_{\mu_k h_k}\left( z_{n} -\mu_k A_k^T(A_k z_{n} - y_k) \right), \end{align} (5.11)

    with the default parameter (4.1) and \mu_j = 1/\|A_j^T A_j\|_2, \mu_k = 1/\|A_k^T A_k\|_2, \mu_l = 1/\|A_l^T A_l\|_2 ; we call it the proposed algorithm with A_j-A_k-A_l . The implemented algorithm (5.11) is proposed for solving the signal recovery problem using every group of three degraded matrices chosen from the N degraded matrices in Eq (5.9), with the default parameter (4.1). We also compare the proposed algorithm (5.11) with the three-step Noor iteration (1.1) and the SP-iteration (1.2). The following 9 cases of the three comparative algorithms, consisting of the proposed algorithm (5.11), the three-step Noor iteration (1.1) and the SP-iteration (1.2), are demonstrated and discussed.
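Each operator \mathcal{T}_i above combines a gradient step with the proximity operator of the l1-norm, which has the closed-form soft-thresholding expression: each entry is shrunk toward zero by the threshold. A minimal NumPy sketch (the function names are ours, for illustration):

```python
import numpy as np

def soft_threshold(v, t):
    # prox_{t||.||_1}(v): shrink each entry toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def T_op(v, A, y, mu, lam):
    # T v = prox_{mu*lam*||.||_1}(v - mu * A^T (A v - y)),
    # i.e. one forward (gradient) step followed by one backward (prox) step.
    return soft_threshold(v - mu * (A.T @ (A @ v - y)), mu * lam)

# Entries larger than the threshold are shrunk by it; smaller ones vanish.
out = soft_threshold(np.array([3.0, -0.5, 1.0]), 1.0)
assert np.allclose(out, [2.0, 0.0, 0.0])
```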

    Case I. Three comparative algorithms with A_1-A_1-A_1 .

    Case II. Three comparative algorithms with A_1-A_2-A_3 .

    Case III. Three comparative algorithms with A_1-A_3-A_2 .

    Case IV. Three comparative algorithms with A_2-A_2-A_2 .

    Case V. Three comparative algorithms with A_2-A_1-A_3 .

    Case VI. Three comparative algorithms with A_2-A_3-A_1 .

    Case VII. Three comparative algorithms with A_3-A_3-A_3 .

    Case VIII. Three comparative algorithms with A_3-A_1-A_2 .

    Case IX. Three comparative algorithms with A_3-A_2-A_1 .
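The scheme (5.11) can be sketched end to end on a small synthetic sparse-recovery instance (NumPy standing in for the Matlab code). The dimensions, sparsity, regularization weight, seed and constant control sequences are illustrative assumptions, much smaller than the paper's m = 512, n = 1024 setting, and the observations are taken noiseless here so the check is deterministic:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, s = 80, 100, 5  # small stand-in for the paper's m = 512, n = 1024, 70 nonzeros

x_true = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x_true[support] = rng.uniform(-2.0, 2.0, size=s)

# Three degraded matrices sharing the common observation model y_i = A_i x.
As = [rng.standard_normal((m, n)) for _ in range(3)]
ys = [A @ x_true for A in As]
mus = [1.0 / np.linalg.norm(A.T @ A, 2) for A in As]
lam = 1e-4

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def T(i, v):
    # T_i v = prox_{mu_i lam ||.||_1}(v - mu_i A_i^T(A_i v - y_i))
    return soft(v - mus[i] * (As[i].T @ (As[i] @ v - ys[i])), mus[i] * lam)

alpha = beta = gamma = 0.5  # placeholder control sequences
j, k, l = 2, 0, 1           # operator order A_3-A_1-A_2

x = np.zeros(n)
for _ in range(5000):
    z = (1 - gamma) * x + gamma * T(j, x)
    y = (1 - beta) * T(j, x) + beta * T(k, z)
    x = (1 - alpha) * T(l, y) + alpha * T(k, z)

# The common solution of the three LASSO problems is close to x_true.
assert np.linalg.norm(x - x_true) / np.linalg.norm(x_true) < 1e-2
```

With a small \lambda the common fixed point of the three operators is a slightly biased version of the sparse signal, which is why the tolerance is loose.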

    Next, some experiments are provided to illustrate the convergence and effectiveness of the proposed algorithm (5.11). The original signal x with n = 1024 , generated by the uniform distribution on the interval [-2, 2] with 70 nonzero elements, is used to create the observed signals y_i = A_i x + \nu_i, i = 1, 2, 3 , with m = 512 (see Figure 19).

    Figure 18.  The images reconstructed by the proposed algorithms with cases I, IV, VI and VIII, respectively, presented at the 10, 000^{ \rm{th}} iteration.
    Figure 19.  Original signal ( x ) with 70 nonzero elements.

    The observed signals y_i = A_ix + \nu_i are shown in Figure 20.

    Figure 20.  Degraded Signals y_1 , y_2 , and y_3 , respectively.

    The process starts with the initial signal x_0 with n = 1024 picked randomly (see Figure 21).

    Figure 21.  Initial Signals x_0 .

    The matrices A_i are generated by the normal distribution with mean zero and variance one, and the white Gaussian noises \nu_i for i = 1, 2, 3 are shown in Figure 22.

    Figure 22.  Noise Signals \nu_1 , \nu_2 , and \nu_3 respectively.

    The Cauchy error and the relative signal error are measured using the max-norm, \|x_{n} - x_{n-1} \|_\infty and \|x_{n} -x\|_\infty/\|x\|_\infty , respectively. The performance of the comparative methods at the n^{th} iteration is measured quantitatively by means of the signal-to-noise ratio (SNR), which is defined by

    \text{SNR}(x_{n} ) = 20 \log_{10} \left( \dfrac{ \|x_{n} \|_2}{ \|x_{n} - x\|_2} \right),

    where x_{n} is the signal recovered at the n^{th} iteration by the proposed method. The Cauchy error and relative error plots in each case of the three comparative methods in Figures 23–31 guarantee that all presented methods converge to the common solution of the signal recovery problem. Based on the SNR plots in each case, all signals recovered using the proposed, three-step Noor and SP methods show quality improvement as the iteration number increases. It can be concluded from the comparative plots in Figures 23–31 that the proposed method is more efficient than the three-step Noor and SP methods.
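The error measures and the SNR above can be computed directly; a small NumPy sketch (function names are ours, for illustration):

```python
import numpy as np

def cauchy_err(xn, xprev):
    return np.max(np.abs(xn - xprev))                  # ||x_n - x_{n-1}||_inf

def rel_err(xn, x):
    return np.max(np.abs(xn - x)) / np.max(np.abs(x))  # ||x_n - x||_inf / ||x||_inf

def snr_db(xn, x):
    # SNR(x_n) = 20 log10(||x_n||_2 / ||x_n - x||_2)
    return 20.0 * np.log10(np.linalg.norm(xn) / np.linalg.norm(xn - x))

# A recovery closer to the true signal scores a higher SNR and lower errors.
x = np.ones(100)
good, bad = x + 0.001, x + 0.1
assert snr_db(good, x) > snr_db(bad, x)
assert rel_err(good, x) < rel_err(bad, x)
```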

    Figure 23.  The Cauchy norm, relative norm plots and SNR quality plots of three comparative algorithms with case I.
    Figure 24.  The Cauchy norm, relative norm plots and SNR quality plots of three comparative algorithms with case II.
    Figure 25.  The Cauchy norm, relative norm plots and SNR quality plots of three comparative algorithms with case III.
    Figure 26.  The Cauchy norm, relative norm plots and SNR quality plots of three comparative algorithms with case IV.
    Figure 27.  The Cauchy norm, relative norm plots and SNR quality plots of three comparative algorithms with case V.
    Figure 28.  The Cauchy norm, relative norm plots and SNR quality plots of three comparative algorithms with case VI.
    Figure 29.  The Cauchy norm, relative norm plots and SNR quality plots of three comparative algorithms with case VII.
    Figure 30.  The Cauchy norm, relative norm plots and SNR quality plots of three comparative algorithms with case VIII.
    Figure 31.  The Cauchy norm, relative norm plots and SNR quality plots of three comparative algorithms with case IX.

    Moreover, the SNR quality of the recovered signal is significantly improved when the proposed method with A_j-A_k-A_l and j \neq k \neq l is used, compared with the proposed methods with A_1-A_1-A_1 , A_2-A_2-A_2 and A_3-A_3-A_3 . The best cases in recovering the observed signal occur when the proposed methods with the operator orders A_2-A_1 -A_3 and A_3-A_1 -A_2 are used. Figure 32 shows the excellent quality of the signal restored using the proposed algorithm with A_3-A_1-A_2 .

    Figure 32.  Signals recovered by the proposed algorithms with cases I, IV, VII and VIII, respectively, presented at the 10, 000^{ \rm{th}} iteration.

    In this article, an efficient modified three-step iteration algorithm is proposed for approximating a common fixed point of three G-nonexpansive mappings in Banach spaces involving a graph. Assuming coordinate-convexity in a uniformly convex Banach space endowed with a directed graph, we proved a strong convergence theorem for the above algorithm and mappings by using condition (C), which is a generalization of condition (A) [32], and a weak convergence theorem by using Opial's condition [21]. We also proved a weak convergence theorem without using Opial's condition. The conditions for convergence of the method were established by systematic proof. Numerical examples illustrating the performance of the suggested algorithm were provided. All numerical experiments for the common fixed point solution using the three-step Noor iteration, the SP-iteration and the proposed method with permutations of three operators are shown in Figures 1–6. Our algorithm was found to be faster than the Noor and SP iterations. As applications, we applied the proposed algorithm to solve image restoration problems with permutations of the three blurring operators (see Figures 9–17). We also applied our algorithms to signal recovery in situations where the type of noise is unknown (see Figures 23–31). We found that the proposed algorithm is flexible and performs well for common types of blur and noise effects in image deblurring and signal recovery problems. Moreover, when the proposed method is used to find the common solution of image and signal recovery problems, it enhances the quality of the recovered image and signal.

    This research project was supported by the Thailand Science Research and Innovation Fund and the University of Phayao (Grant No. FF64-UoE011). The authors acknowledge the partial support provided by Thailand Science Research and Innovation under the Project IRN62W0007. The opinions expressed in this report are those of the researchers and do not necessarily reflect those of Thailand Science Research and Innovation. Also, T. Thianwan was supported by Contract No. PBTSC64019.

    The authors declare no conflict of interest.



    [1] S. Banach, Sur les opérations dans les ensembles abstraits et leur application aux équations intégrales, Fund. Math., 3 (1922), 133–181. doi: 10.4064/fm-3-1-133-181
    [2] J. Jachymski, The contraction principle for mappings on a metric space with a graph, Proc. Amer. Math. Soc., 136 (2008), 1359–1373. doi: 10.1090/S0002-9939-07-09110-1
    [3] R. P. Kelisky, T. J. Rivlin, Iterates of Bernstein polynomials, Pac. J. Math., 21 (1967), 511–520. doi: 10.2140/pjm.1967.21.511
    [4] M. Samreen, T. Kamran, Fixed point theorems for integral G-contractions, Fixed Point Theory Appl., 2013 (2013), 149. doi: 10.1186/1687-1812-2013-149
    [5] J. Tiammee, S. Suantai, Coincidence point theorems for graph-preserving multivalued mappings, Fixed Point Theory Appl., 2014 (2014), 70. doi: 10.1186/1687-1812-2014-70
    [6] A. Nicolae, D. O'Regan, A. Petrusel, Fixed point theorems for single-valued and multivalued generalized contractions in metric spaces endowed with a graph, Georgian Math. J., 18 (2011), 307–327. doi: 10.1515/gmj.2011.0019
    [7] F. Bojor, Fixed point of \phi-contraction in metric spaces endowed with a graph, Ann. Univ. Craiova Mat., 37 (2010), 85–92.
    [8] F. Bojor, Fixed point theorems for Reich type contractions on metric spaces with a graph, Nonlinear Anal., 75 (2012), 3895–3901. doi: 10.1016/j.na.2012.02.009
    [9] F. Bojor, Fixed points of Kannan mappings in metric spaces endowed with a graph, An. St. Univ. Ovidius Constanta, 20 (2012), 31–40.
    [10] J. H. Asl, B. Mohammadi, S. Rezapour, S. M. Vaezpour, Some fixed point results for generalized quasi-contractive multifunctions on graphs, Filomat, 27 (2013), 311–315. doi: 10.2298/FIL1302311A
    [11] S. M. A. Aleomraninejad, S. Rezapour, N. Shahzad, Some fixed point results on a metric space with a graph, Topol. Appl., 159 (2012), 659–663. doi: 10.1016/j.topol.2011.10.013
    [12] M. R. Alfuraidan, M. A. Khamsi, Fixed points of monotone nonexpansive mappings on a hyperbolic metric space with a graph, Fixed Point Theory Appl., 2015 (2015), 44. doi: 10.1186/s13663-015-0294-5
    [13] M. R. Alfuraidan, Fixed points of monotone nonexpansive mappings with a graph, Fixed Point Theory Appl., 2015 (2015), 49. doi: 10.1186/s13663-015-0299-0
    [14] J. Tiammee, A. Kaewkhao, S. Suantai, On Browder's convergence theorem and Halpern iteration process for G-nonexpansive mappings in Hilbert spaces endowed with graphs, Fixed Point Theory Appl., 2015 (2015), 187. doi: 10.1186/s13663-015-0436-9
    [15] O. Tripak, Common fixed points of G-nonexpansive mappings on Banach spaces with a graph, Fixed Point Theory Appl., 2016 (2016), 87. doi: 10.1186/s13663-016-0578-4
    [16] M. A. Noor, New approximation schemes for general variational inequalities, J. Math. Anal. Appl., 251 (2000), 217–229. doi: 10.1006/jmaa.2000.7042
    [17] R. Glowinski, P. Le Tallec, Augmented Lagrangian and operator-splitting methods in nonlinear mechanics, Society for Industrial and Applied Mathematics, 1989.
    [18] S. Haubruge, V. H. Nguyen, J. J. Strodiot, Convergence analysis and applications of the Glowinski Le Tallec splitting method for finding a zero of the sum of two maximal monotone operators, J. Optimiz. Theory App., 97 (1998), 645–673. doi: 10.1023/a:1022646327085
    [19] P. Sridarat, R. Suparaturatorn, S. Suantai, Y. J. Cho, Convergence analysis of SP-iteration for G-nonexpansive mappings with directed graphs, Bull. Malays. Math. Sci. Soc., 42 (2019), 2361–2380. doi: 10.1007/s40840-018-0606-0
    [20] R. Johnsonbaugh, Discrete mathematics, Prentice Hall, 1997.
    [21] Z. Opial, Weak convergence of the sequence of successive approximations for nonexpansive mappings, Bull. Amer. Math. Soc., 73 (1967), 591–597. doi: 10.1090/S0002-9904-1967-11761-0
    [22] S. Shahzad, R. Al-Dubiban, Approximating common fixed points of nonexpansive mappings in Banach spaces, Georgian Math. J., 13 (2006), 529–537. doi: 10.1515/GMJ.2006.529
    [23] N. V. Dung, N. T. Hieu, Convergence of a new three-step iteration process to common fixed points of three G-nonexpansive mappings in Banach spaces with directed graphs, RACSAM, 114 (2020), 140. doi: 10.1007/s13398-020-00872-w
    [24] J. Schu, Weak and strong convergence to fixed points of asymptotically nonexpansive mappings, B. Aust. Math. Soc., 43 (1991), 153–159. doi: 10.1017/S0004972700028884
    [25] S. Suantai, Weak and strong convergence criteria of Noor iterations for asymptotically nonexpansive mappings, J. Math. Anal. Appl., 331 (2005), 506–517. doi: 10.1016/j.jmaa.2005.03.002
    [26] D. Yambangwai, S. Aunruean, T. Thianwan, A new modified three-step iteration method for G-nonexpansive mappings in Banach spaces with a graph, Numer. Algor., 20 (2019), 1–29. doi: 10.1007/s11075-019-00768-w
    [27] M. G. Sangago, Convergence of iterative schemes for nonexpansive mappings, Asian-Eur. J. Math., 4 (2011), 671–682. doi: 10.1142/S1793557111000551
    [28] R. L. Burden, J. D. Faires, Numerical analysis, Brooks/Cole, Cengage Learning, 2010.
    [29] W. Phuengrattana, S. Suantai, On the rate of convergence of Mann, Ishikawa, Noor and SP-iterations for continuous functions on an arbitrary interval, J. Comput. Appl. Math., 235 (2011), 3006–3014. doi: 10.1016/j.cam.2010.12.022
    [30] H. H. Bauschke, P. L. Combettes, Convex analysis and monotone operator theory in Hilbert spaces, New York: Springer, 2011.
    [31] J. J. Moreau, Proximité et dualité dans un espace hilbertien, B. Soc. Math. Fr., 93 (1965), 273–299.
    [32] H. F. Senter, W. G. Dotson, Approximating fixed points of nonexpansive mappings, Proc. Amer. Math. Soc., 44 (1974), 375–380. doi: 10.1090/S0002-9939-1974-0346608-8
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)