Research article

A modified inertial proximal gradient method for minimization problems and applications

In this paper, we design a new proximal gradient algorithm by combining the inertial technique with an adaptive stepsize for solving convex minimization problems, and we prove convergence of the iterates under suitable assumptions. Numerical experiments on image deblurring are performed to show the efficiency of the proposed method.

    Citation: Suparat Kesornprom, Prasit Cholamjiak. A modified inertial proximal gradient method for minimization problems and applications[J]. AIMS Mathematics, 2022, 7(5): 8147-8161. doi: 10.3934/math.2022453




    In this paper, we investigate the following convex minimization problem

$$\min_{x \in H} \big(f(x) + g(x)\big), \qquad (1.1)$$

where $H$ is a real Hilbert space, $g: H \to (-\infty, +\infty]$ is proper, lower semicontinuous and convex, and $f: H \to \mathbb{R}$ is convex and differentiable with Lipschitz continuous gradient $\nabla f$. It is known that $x^*$ is a minimizer of $f + g$ if and only if

$$0 \in (\partial g + \nabla f)(x^*), \qquad (1.2)$$

where $\partial g$ denotes the subdifferential of $g$.

The convex minimization problem is an important mathematical model that unifies numerous problems in applied mathematics, for example, signal processing, image reconstruction and machine learning; see [1,3,8,9,11,22,31].

The most popular algorithm for solving the convex minimization problem is the so-called forward-backward algorithm (FB), which is generated by a starting point $x_1 \in H$ and

$$x_{n+1} = \mathrm{prox}_{\lambda g}\big(x_n - \lambda \nabla f(x_n)\big), \quad n \ge 1, \qquad (1.3)$$

where $\mathrm{prox}_{\lambda g}$ is the proximal operator of $g$, the stepsize $\lambda \in (0, 2/L)$, and $L$ is the Lipschitz constant of $\nabla f$.
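For concreteness, here is a minimal sketch of one forward-backward iteration (1.3) for the common case $g = \tau\|\cdot\|_1$, whose proximal operator is the componentwise soft-thresholding map; the function names and parameters are ours, for illustration only.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1: componentwise soft-thresholding.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def forward_backward_step(x, grad_f, lam, tau):
    # One iteration of (1.3) for g = tau * ||.||_1:
    # forward (gradient) step, then backward (proximal) step.
    return soft_threshold(x - lam * grad_f(x), lam * tau)
```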

Polyak [21] first proposed the inertial technique to accelerate the convergence of iterative methods. In recent years, many authors have introduced various fast iterative methods via the inertial technique; see, for example, [7,8,10,15,16,18,23,25,26,32].

In 2009, Beck and Teboulle [4] introduced the fast iterative shrinkage-thresholding algorithm (FISTA) for linear inverse problems. Let $t_0 = 1$ and $x_0 = x_1 \in H$. Compute

$$t_{n+1} = \frac{1 + \sqrt{1 + 4t_n^2}}{2}, \quad \theta_n = \frac{t_{n-1} - 1}{t_n}, \quad y_n = x_n + \theta_n(x_n - x_{n-1}), \quad x_{n+1} = \mathrm{prox}_{\frac{1}{L}g}\Big(y_n - \frac{1}{L}\nabla f(y_n)\Big), \quad n \ge 1. \qquad (1.4)$$

This improves the convergence rate to $O(1/n^2)$. However, the stepsize is determined by the Lipschitz constant, which is in general not known.
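For reference, a compact sketch of the FISTA updates (1.4), under one common indexing of the $t_n$ sequence; grad_f, prox_g(z, lam) and the Lipschitz constant L are assumed to be supplied by the user, and all names here are illustrative.

```python
import numpy as np

def fista(x1, grad_f, prox_g, L, n_iter):
    # FISTA (1.4): inertial extrapolation driven by the t_n sequence,
    # fixed stepsize 1/L.
    x_prev, x, t = x1.copy(), x1.copy(), 1.0
    for _ in range(n_iter):
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        theta = (t - 1.0) / t_next
        y = x + theta * (x - x_prev)
        x_prev, x, t = x, prox_g(y - grad_f(y) / L, 1.0 / L), t_next
    return x
```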

In 2000, Tseng [29] proposed a modified forward-backward algorithm (MFB) whose stepsize is obtained by a linesearch technique, as follows. Given $\sigma > 0$, $\rho \in (0, 1)$, $\delta \in (0, 1)$ and $x_1 \in H$. Compute

$$y_n = \mathrm{prox}_{\lambda_n g}\big(x_n - \lambda_n \nabla f(x_n)\big), \quad x_{n+1} = \mathrm{prox}_{\lambda_n g}\big(y_n - \lambda_n(\nabla f(y_n) - \nabla f(x_n))\big), \quad n \ge 1, \qquad (1.5)$$

where $\lambda_n$ is the largest $\lambda \in \{\sigma, \sigma\rho, \sigma\rho^2, \ldots\}$ satisfying $\lambda\|\nabla f(y_n) - \nabla f(x_n)\| \le \delta\|y_n - x_n\|$.
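A sketch of this backtracking linesearch, under the stated acceptance test; prox_g(z, lam) is assumed to return $\mathrm{prox}_{\lambda g}(z)$, and the default parameter values are illustrative.

```python
import numpy as np

def tseng_linesearch(x, grad_f, prox_g, sigma=0.1, rho=0.8, delta=0.5):
    # Find the largest lam in {sigma, sigma*rho, sigma*rho^2, ...} with
    # lam * ||grad_f(y) - grad_f(x)|| <= delta * ||y - x||, where
    # y = prox_{lam g}(x - lam * grad_f(x)), as in (1.5).
    gx = grad_f(x)
    lam = sigma
    while True:
        y = prox_g(x - lam * gx, lam)
        if lam * np.linalg.norm(grad_f(y) - gx) <= delta * np.linalg.norm(y - x):
            return lam, y
        lam *= rho
```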

In 2020, Padcharoen et al. [20] proposed the modified forward-backward splitting method based on the inertial Tseng method (IMFB). Given $\{\lambda_n\} \subset (0, \frac{1}{L})$ and $\{\theta_n\} \subset [0, \theta] \subset [0, 1)$. Let $x_0, x_1 \in H$ and compute

$$w_n = x_n + \theta_n(x_n - x_{n-1}), \quad y_n = \mathrm{prox}_{\lambda_n g}\big(w_n - \lambda_n \nabla f(w_n)\big), \quad x_{n+1} = y_n - \lambda_n(\nabla f(y_n) - \nabla f(w_n)), \quad n \ge 1. \qquad (1.6)$$

    They established weak convergence of the proposed method.

In 2015, Shehu et al. [24] introduced the modified split proximal method (MSP). Let $r: H \to H$ be a contraction mapping with constant $\alpha \in (0, 1)$. Set $\varphi(x) = \sqrt{\|\nabla h(x)\|^2 + \|\nabla \ell(x)\|^2}$ with $h(x) = \frac{1}{2}\|(I - \mathrm{prox}_{\lambda g})Ax\|^2$ and $\ell(x) = \frac{1}{2}\|(I - \mathrm{prox}_{\lambda \mu_n f})x\|^2$. Given an initial point $x_1 \in H$, construct

$$y_n = x_n - \mu_n A^*(I - \mathrm{prox}_{\lambda g})Ax_n, \quad x_{n+1} = \alpha_n r(x_n) + (1 - \alpha_n)\mathrm{prox}_{\lambda \mu_n f}(y_n), \quad n \ge 1, \qquad (1.7)$$

where the stepsize $\mu_n = \psi_n\,\frac{h(x_n) + \ell(x_n)}{\varphi^2(x_n)}$ with $0 < \psi_n < 4$. They proved a strong convergence theorem for proximal split feasibility problems.

In 2016, Bello Cruz and Nghia [5] presented a fast multistep forward-backward method (FMFB) with a linesearch. Given $\sigma > 0$, $\mu \in (0, \frac{1}{2})$, $\rho \in (0, 1)$ and $t_0 = 1$. Choose $x_0, x_1 \in H$ and compute

$$t_{n+1} = \frac{1 + \sqrt{1 + 4t_n^2}}{2}, \quad \theta_n = \frac{t_{n-1} - 1}{t_n}, \quad y_n = x_n + \theta_n(x_n - x_{n-1}), \quad x_{n+1} = \mathrm{prox}_{\lambda_n g}\big(y_n - \lambda_n \nabla f(y_n)\big), \quad n \ge 1, \qquad (1.8)$$

where $\lambda_n = \sigma\rho^{m_n}$ and $m_n$ is the smallest nonnegative integer such that

$$\lambda_n\big\|\nabla f\big(\mathrm{prox}_{\lambda_n g}(y_n - \lambda_n \nabla f(y_n))\big) - \nabla f(y_n)\big\| \le \mu\big\|\mathrm{prox}_{\lambda_n g}(y_n - \lambda_n \nabla f(y_n)) - y_n\big\|. \qquad (1.9)$$

Very recently, Malitsky and Tam [17] introduced the forward-reflected-backward algorithm (FRB). Given $\lambda_0 > 0$, $\delta \in (0, 1)$, $\gamma \in \{1, \beta^{-1}\}$ and $\beta \in (0, 1)$. Compute

$$x_{n+1} = \mathrm{prox}_{\lambda_n g}\big(x_n - \lambda_n \nabla f(x_n) - \lambda_{n-1}(\nabla f(x_n) - \nabla f(x_{n-1}))\big), \quad n \ge 1, \qquad (1.10)$$

where the stepsize $\lambda_n = \gamma\lambda_{n-1}\beta^i$ with $i$ being the smallest nonnegative integer satisfying $\lambda_n\|\nabla f(x_{n+1}) - \nabla f(x_n)\| \le \frac{\delta}{2}\|x_{n+1} - x_n\|$.

Very recently, Hieu et al. [13] proposed the modified forward-reflected-backward method (MFRB) with adaptive stepsize. Given $x_0, x_1 \in H$, $\lambda_0, \lambda_1 > 0$ and $\mu \in (0, \frac{1}{2})$:

$$x_{n+1} = \mathrm{prox}_{\lambda_n g}\big(x_n - \lambda_n \nabla f(x_n) - \lambda_{n-1}(\nabla f(x_n) - \nabla f(x_{n-1}))\big), \quad \lambda_{n+1} = \min\Big\{\lambda_n,\ \frac{\mu\|x_{n+1} - x_n\|}{\|\nabla f(x_{n+1}) - \nabla f(x_n)\|}\Big\}, \quad n \ge 1. \qquad (1.11)$$

This adaptive stepsize allows the method to be implemented without knowing the Lipschitz constant.
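A sketch of this stepsize rule in isolation; the helper name and arguments are ours, with g_new and g_old standing for $\nabla f(x_{n+1})$ and $\nabla f(x_n)$.

```python
import numpy as np

def update_stepsize(lam, mu, x_new, x_old, g_new, g_old):
    # Stepsize rule of (1.11): lam is reduced only when the observed local
    # ratio ||grad_f(x_{n+1}) - grad_f(x_n)|| / ||x_{n+1} - x_n|| requires it,
    # so no Lipschitz constant has to be known in advance.
    denom = np.linalg.norm(g_new - g_old)
    return min(lam, mu * np.linalg.norm(x_new - x_old) / denom) if denom > 0 else lam
```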

Inspired and motivated by these works, we propose an inertial proximal gradient algorithm with adaptive stepsize for convex minimization problems. This method requires more flexible conditions than fixed-stepsize methods do. We then establish weak convergence of our scheme under suitable assumptions. Moreover, we present numerical experiments in image deblurring, which reveal that our algorithm outperforms the other methods considered.

    In this section, we provide some definitions and lemmas for proving our theorem.

Weak and strong convergence of a sequence $\{x_n\} \subset \Omega$ to $z \in \Omega$ are denoted by $x_n \rightharpoonup z$ and $x_n \to z$, respectively.

Let $g: H \to (-\infty, +\infty]$ be a proper, lower semicontinuous and convex function. We denote the domain of $g$ by $\mathrm{dom}\,g = \{x \in H \mid g(x) < +\infty\}$. For any $x \in \mathrm{dom}\,g$, the subdifferential of $g$ at $x$ is defined by

$$\partial g(x) = \{v \in H \mid \langle v, y - x \rangle \le g(y) - g(x), \ \forall y \in H\}.$$

Recall that the proximal operator $\mathrm{prox}_g: H \to \mathrm{dom}\,g$ is given by $\mathrm{prox}_g(z) = (I + \partial g)^{-1}(z)$, $z \in H$. It is known that the proximal operator is single-valued. Moreover, we have

$$z - \mathrm{prox}_{\lambda g}(z) \in \lambda \partial g\big(\mathrm{prox}_{\lambda g}(z)\big) \quad \text{for all } z \in H,\ \lambda > 0. \qquad (2.1)$$
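As a concrete illustration of (2.1) (ours, not from the paper), take $H = \mathbb{R}$ and $g = |\cdot|$, whose proximal operator is the scalar soft-thresholding map:

$$\mathrm{prox}_{\lambda|\cdot|}(z) = \arg\min_{x \in \mathbb{R}}\Big(|x| + \tfrac{1}{2\lambda}(x - z)^2\Big) = \mathrm{sgn}(z)\max\{|z| - \lambda, 0\}.$$

Then $\frac{z - \mathrm{prox}_{\lambda|\cdot|}(z)}{\lambda}$ equals $1$ when $z > \lambda$, equals $-1$ when $z < -\lambda$, and equals $z/\lambda \in [-1, 1] = \partial|\cdot|(0)$ when $|z| \le \lambda$, so it indeed lies in $\partial|\cdot|\big(\mathrm{prox}_{\lambda|\cdot|}(z)\big)$, as (2.1) asserts.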

Definition 2.1. Let $S$ be a nonempty subset of $H$. A sequence $\{x_n\}$ in $H$ is said to be quasi-Fejér convergent to $S$ if and only if for all $x \in S$ there exists a positive sequence $\{\varepsilon_n\}$ such that $\sum_{n=1}^{\infty}\varepsilon_n < +\infty$ and $\|x_{n+1} - x\|^2 \le \|x_n - x\|^2 + \varepsilon_n$ for all $n \ge 1$. If $\{\varepsilon_n\}$ is a null sequence, we say that $\{x_n\}$ is Fejér convergent to $S$.

Lemma 2.1. [6] The subdifferential operator $\partial g$ is maximal monotone. Moreover, the graph of $\partial g$, $\mathrm{Gph}(\partial g) = \{(x, v) \in H \times H : v \in \partial g(x)\}$, is demiclosed, i.e., if a sequence $\{(x_n, v_n)\} \subset \mathrm{Gph}(\partial g)$ satisfies that $\{x_n\}$ converges weakly to $x$ and $\{v_n\}$ converges strongly to $v$, then $(x, v) \in \mathrm{Gph}(\partial g)$.

Lemma 2.2. [19] Let $\{a_n\}$, $\{b_n\}$ and $\{c_n\}$ be sequences of positive real numbers such that

$$a_{n+1} \le (1 + c_n)a_n + b_n, \quad n \ge 1.$$

If $\sum_{n=1}^{\infty} c_n < +\infty$ and $\sum_{n=1}^{\infty} b_n < +\infty$, then $\lim_{n \to +\infty} a_n$ exists.

Lemma 2.3. [12] Let $\{a_n\}$ and $\{\theta_n\}$ be sequences of positive real numbers such that

$$a_{n+1} \le (1 + \theta_n)a_n + \theta_n a_{n-1}, \quad n \ge 1.$$

Then $a_{n+1} \le K\prod_{i=1}^{n}(1 + 2\theta_i)$, where $K = \max\{a_1, a_2\}$. Moreover, if $\sum_{n=1}^{\infty}\theta_n < +\infty$, then $\{a_n\}$ is bounded.

Lemma 2.4. [2,14] If $\{x_n\}$ is quasi-Fejér convergent to $S$, then we have:

(i) $\{x_n\}$ is bounded.

(ii) If all weak accumulation points of $\{x_n\}$ are in $S$, then $\{x_n\}$ converges weakly to a point in $S$.

In this section, we assume that the following conditions are satisfied for our convergence analysis:

(A1) The solution set of the convex minimization problem (1.1) is nonempty, i.e., $\Omega = \arg\min(f + g) \neq \emptyset$.

(A2) $f, g: H \to (-\infty, +\infty]$ are proper, lower semicontinuous and convex functions.

(A3) $f$ is differentiable on $H$ and $\nabla f$ is Lipschitz continuous on $H$ with Lipschitz constant $L > 0$.

    We next introduce a new inertial forward-backward method for solving (1.1).

    Algorithm 3.1. Inertial modified forward-backward method (IMFB)

Initialization: Let $x_0 = x_1 \in H$, $\lambda_1 > 0$, $\theta_1 > 0$ and $\delta \in (0, 1)$.

Iterative step: For $n \ge 1$, calculate $x_{n+1}$ as follows:

    Step 1. Compute the inertial step:

$$w_n = x_n + \theta_n(x_n - x_{n-1}). \qquad (3.1)$$

    Step 2. Compute the forward-backward step:

$$y_n = \mathrm{prox}_{\lambda_n g}\big(w_n - \lambda_n \nabla f(w_n)\big). \qquad (3.2)$$

Step 3. Compute $x_{n+1}$:

$$x_{n+1} = y_n - \lambda_n(\nabla f(y_n) - \nabla f(w_n)), \qquad (3.3)$$

    where

$$\lambda_{n+1} = \begin{cases} \min\Big\{\dfrac{\delta\|w_n - y_n\|}{\|\nabla f(w_n) - \nabla f(y_n)\|},\ \lambda_n\Big\}, & \text{if } \nabla f(w_n) - \nabla f(y_n) \neq 0; \\[2mm] \lambda_n, & \text{otherwise.} \end{cases} \qquad (3.4)$$

    Set n=n+1 and return to Step 1.
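A minimal implementation sketch of Algorithm 3.1, assuming grad_f(x) returns $\nabla f(x)$ and prox_g(z, lam) returns $\mathrm{prox}_{\lambda g}(z)$; theta is a user-supplied summable inertial sequence (e.g., the choice used in the experiments of Section 4), the defaults mirror Table 1, and all names are ours rather than the paper's.

```python
import numpy as np

def imfb(x1, grad_f, prox_g, theta, lam1=0.5, delta=0.5, n_iter=1000):
    # Algorithm 3.1 (IMFB): inertial step (3.1), forward-backward step (3.2),
    # Tseng-type correction (3.3) and adaptive stepsize update (3.4).
    x_prev, x, lam = x1.copy(), x1.copy(), lam1
    for n in range(1, n_iter + 1):
        w = x + theta(n) * (x - x_prev)          # (3.1) inertial extrapolation
        gw = grad_f(w)
        y = prox_g(w - lam * gw, lam)            # (3.2) forward-backward step
        gy = grad_f(y)
        x_prev, x = x, y - lam * (gy - gw)       # (3.3) correction step
        diff = np.linalg.norm(gw - gy)
        if diff > 0:                             # (3.4) adaptive stepsize
            lam = min(delta * np.linalg.norm(w - y) / diff, lam)
    return x
```

Note that, by (3.4), lam is non-increasing along the loop, matching Remark 3.1 below.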

Remark 3.1. It is easy to see that the sequence $\{\lambda_n\}$ is non-increasing. From the Lipschitz continuity of $\nabla f$, there exists $L > 0$ such that $\|\nabla f(w_n) - \nabla f(y_n)\| \le L\|w_n - y_n\|$. Hence,

$$\lambda_{n+1} = \min\Big\{\frac{\delta\|w_n - y_n\|}{\|\nabla f(w_n) - \nabla f(y_n)\|},\ \lambda_n\Big\} \ge \min\Big\{\frac{\delta}{L},\ \lambda_n\Big\}. \qquad (3.5)$$

By the definition of $\{\lambda_n\}$, it follows that the sequence $\{\lambda_n\}$ is bounded from below by $\min\{\lambda_1, \delta/L\}$. So, we obtain $\lim_{n \to \infty}\lambda_n = \lambda > 0$.

Lemma 3.1. Let $\{x_n\}$ be generated by Algorithm 3.1. Then

$$\|x_{n+1} - x^*\|^2 \le \|w_n - x^*\|^2 - \Big(1 - \frac{\delta^2\lambda_n^2}{\lambda_{n+1}^2}\Big)\|y_n - w_n\|^2, \quad \forall x^* \in \Omega. \qquad (3.6)$$

Proof. Let $x^* \in \Omega$. Then

$$\begin{aligned}
\|x_{n+1} - x^*\|^2 &= \|y_n - \lambda_n(\nabla f(y_n) - \nabla f(w_n)) - x^*\|^2 \\
&= \|y_n - x^*\|^2 + \lambda_n^2\|\nabla f(y_n) - \nabla f(w_n)\|^2 - 2\lambda_n\langle y_n - x^*, \nabla f(y_n) - \nabla f(w_n)\rangle \\
&= \|y_n - w_n + w_n - x^*\|^2 + \lambda_n^2\|\nabla f(y_n) - \nabla f(w_n)\|^2 - 2\lambda_n\langle y_n - x^*, \nabla f(y_n) - \nabla f(w_n)\rangle \\
&= \|w_n - x^*\|^2 + \|y_n - w_n\|^2 + 2\langle w_n - x^*, y_n - w_n\rangle - 2\lambda_n\langle y_n - x^*, \nabla f(y_n) - \nabla f(w_n)\rangle + \lambda_n^2\|\nabla f(y_n) - \nabla f(w_n)\|^2 \\
&= \|w_n - x^*\|^2 + \|y_n - w_n\|^2 + 2\langle w_n - y_n + y_n - x^*, y_n - w_n\rangle - 2\lambda_n\langle y_n - x^*, \nabla f(y_n) - \nabla f(w_n)\rangle + \lambda_n^2\|\nabla f(y_n) - \nabla f(w_n)\|^2 \\
&= \|w_n - x^*\|^2 + \|y_n - w_n\|^2 - 2\|y_n - w_n\|^2 + 2\langle y_n - x^*, y_n - w_n\rangle - 2\langle y_n - x^*, \lambda_n(\nabla f(y_n) - \nabla f(w_n))\rangle + \lambda_n^2\|\nabla f(y_n) - \nabla f(w_n)\|^2 \\
&= \|w_n - x^*\|^2 - \|y_n - w_n\|^2 - 2\langle y_n - x^*, w_n - y_n + \lambda_n(\nabla f(y_n) - \nabla f(w_n))\rangle + \lambda_n^2\|\nabla f(y_n) - \nabla f(w_n)\|^2. \qquad (3.7)
\end{aligned}$$

    Note that

$$\lambda_{n+1} = \min\Big\{\frac{\delta\|w_n - y_n\|}{\|\nabla f(w_n) - \nabla f(y_n)\|},\ \lambda_n\Big\} \le \frac{\delta\|w_n - y_n\|}{\|\nabla f(w_n) - \nabla f(y_n)\|}. \qquad (3.8)$$

    It follows that

$$\|\nabla f(w_n) - \nabla f(y_n)\| \le \frac{\delta}{\lambda_{n+1}}\|w_n - y_n\|. \qquad (3.9)$$

    Combining (3.7) and (3.9), we obtain

$$\begin{aligned}
\|x_{n+1} - x^*\|^2 &\le \|w_n - x^*\|^2 - \|y_n - w_n\|^2 + \frac{\delta^2\lambda_n^2}{\lambda_{n+1}^2}\|y_n - w_n\|^2 - 2\langle y_n - x^*, w_n - y_n + \lambda_n(\nabla f(y_n) - \nabla f(w_n))\rangle \\
&= \|w_n - x^*\|^2 - \Big(1 - \frac{\delta^2\lambda_n^2}{\lambda_{n+1}^2}\Big)\|y_n - w_n\|^2 - 2\langle y_n - x^*, w_n - y_n + \lambda_n(\nabla f(y_n) - \nabla f(w_n))\rangle. \qquad (3.10)
\end{aligned}$$

From (3.2), we see that $w_n - \lambda_n \nabla f(w_n) \in (I + \lambda_n \partial g)(y_n)$. Since $\partial g$ is maximal monotone, there is $u_n \in \partial g(y_n)$ such that

$$w_n - \lambda_n \nabla f(w_n) = y_n + \lambda_n u_n. \qquad (3.11)$$

    This shows that

$$u_n = \frac{1}{\lambda_n}\big(w_n - \lambda_n \nabla f(w_n) - y_n\big). \qquad (3.12)$$

Since $0 \in (\nabla f + \partial g)(x^*)$ and $\nabla f(y_n) + u_n \in (\nabla f + \partial g)(y_n)$, by the monotonicity of $\nabla f + \partial g$ we get

$$\langle \nabla f(y_n) + u_n,\ y_n - x^* \rangle \ge 0. \qquad (3.13)$$

    Substituting (3.12) into (3.13), we have

$$\frac{1}{\lambda_n}\big\langle w_n - \lambda_n \nabla f(w_n) - y_n + \lambda_n \nabla f(y_n),\ y_n - x^* \big\rangle \ge 0. \qquad (3.14)$$

This implies that $\langle w_n - \lambda_n \nabla f(w_n) - y_n + \lambda_n \nabla f(y_n),\ y_n - x^* \rangle \ge 0$. Using (3.10), we derive

$$\|x_{n+1} - x^*\|^2 \le \|w_n - x^*\|^2 - \Big(1 - \frac{\delta^2\lambda_n^2}{\lambda_{n+1}^2}\Big)\|y_n - w_n\|^2. \qquad (3.15)$$

Lemma 3.2. Let $\{x_n\}$ be generated by Algorithm 3.1. If $\sum_{n=1}^{\infty}\theta_n < \infty$, then $\lim_{n \to \infty}\|x_n - x^*\|$ exists for all $x^* \in \Omega$.

Proof. Let $x^* \in \Omega$. From Lemma 3.1, we see that

$$\|x_{n+1} - x^*\| \le \|w_n - x^*\|. \qquad (3.16)$$

    So, we have

$$\begin{aligned}
\|x_{n+1} - x^*\| \le \|w_n - x^*\| &= \|x_n + \theta_n(x_n - x_{n-1}) - x^*\| \\
&\le \|x_n - x^*\| + \theta_n\|x_n - x_{n-1}\| \\
&\le \|x_n - x^*\| + \theta_n\big(\|x_n - x^*\| + \|x_{n-1} - x^*\|\big). \qquad (3.17)
\end{aligned}$$

    Hence

$$\|x_{n+1} - x^*\| \le (1 + \theta_n)\|x_n - x^*\| + \theta_n\|x_{n-1} - x^*\|. \qquad (3.18)$$

    By Lemma 2.3, we conclude that

$$\|x_{n+1} - x^*\| \le K\prod_{i=1}^{n}(1 + 2\theta_i), \qquad (3.19)$$

where $K = \max\{\|x_1 - x^*\|, \|x_2 - x^*\|\}$. Since $\sum_{n=1}^{\infty}\theta_n < +\infty$, by Lemma 2.3, $\{\|x_n - x^*\|\}$ is bounded. Hence $\sum_{n=1}^{\infty}\theta_n\|x_n - x_{n-1}\| < +\infty$. By Lemma 2.2 and (3.17), $\lim_{n \to \infty}\|x_n - x^*\|$ exists.

Lemma 3.3. Let $\{x_n\}$ be generated by Algorithm 3.1. If $\sum_{n=1}^{\infty}\theta_n < \infty$, then

$$\lim_{n \to \infty}\|x_{n+1} - x_n\| = 0.$$

    Proof. We see that

$$\begin{aligned}
\|w_n - x^*\|^2 &= \|x_n + \theta_n(x_n - x_{n-1}) - x^*\|^2 \\
&= \|x_n - x^*\|^2 + 2\theta_n\langle x_n - x^*, x_n - x_{n-1}\rangle + \theta_n^2\|x_n - x_{n-1}\|^2 \\
&\le \|x_n - x^*\|^2 + 2\theta_n\|x_n - x^*\|\|x_n - x_{n-1}\| + \theta_n^2\|x_n - x_{n-1}\|^2. \qquad (3.20)
\end{aligned}$$

    From (3.15) and (3.20), we have

$$\|x_{n+1} - x^*\|^2 \le \|x_n - x^*\|^2 + 2\theta_n\|x_n - x^*\|\|x_n - x_{n-1}\| + \theta_n^2\|x_n - x_{n-1}\|^2 - \Big(1 - \frac{\delta^2\lambda_n^2}{\lambda_{n+1}^2}\Big)\|w_n - y_n\|^2. \qquad (3.21)$$

Note that $\theta_n\|x_n - x_{n-1}\| \to 0$ and $\lim_{n \to \infty}\|x_n - x^*\|$ exists by Lemma 3.2. From (3.1) and (3.21), we have $\|w_n - x_n\| \to 0$ and $\|w_n - y_n\| \to 0$, respectively. It follows that $\|x_n - y_n\| \to 0$. Since $\nabla f$ is Lipschitz continuous, and hence uniformly continuous, we obtain

$$\lim_{n \to \infty}\|\nabla f(w_n) - \nabla f(y_n)\| = 0. \qquad (3.22)$$

    From (3.3) and (3.22), we get

$$\|x_{n+1} - y_n\| = \lambda_n\|\nabla f(y_n) - \nabla f(w_n)\| \to 0. \qquad (3.23)$$

    Thus, we have

$$\|x_{n+1} - x_n\| \le \|x_{n+1} - y_n\| + \|y_n - x_n\| \to 0. \qquad (3.24)$$

Theorem 3.1. Let $\{x_n\}$ be generated by Algorithm 3.1. If $\sum_{n=1}^{\infty}\theta_n < \infty$, then $\{x_n\}$ converges weakly to a point in $\Omega$.

Proof. Since $\{x_n\}$ is bounded, there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $x_{n_k} \rightharpoonup \bar{x} \in H$. From Lemma 3.3, we obtain $x_{n_k+1} \rightharpoonup \bar{x}$. We note that

$$y_{n_k} = \mathrm{prox}_{\lambda_{n_k} g}\big(w_{n_k} - \lambda_{n_k}\nabla f(w_{n_k})\big). \qquad (3.25)$$

    From (2.1), we obtain

$$\frac{w_{n_k} - \lambda_{n_k}\nabla f(w_{n_k}) - y_{n_k}}{\lambda_{n_k}} \in \partial g(y_{n_k}). \qquad (3.26)$$

    Hence

$$\frac{w_{n_k} - y_{n_k}}{\lambda_{n_k}} - \nabla f(w_{n_k}) + \nabla f(y_{n_k}) \in \partial g(y_{n_k}) + \nabla f(y_{n_k}). \qquad (3.27)$$

Since $\|x_n - y_n\| \to 0$, we also have $y_{n_k} \rightharpoonup \bar{x}$. Letting $k \to \infty$ in (3.27) and using (3.22), by Lemma 2.1 and Remark 3.1, we get

$$0 \in (\nabla f + \partial g)(\bar{x}). \qquad (3.28)$$

So $\bar{x} \in \Omega$. From (3.21), we see that $\{x_n\}$ is quasi-Fejér convergent to $\Omega$. Hence, by Lemma 2.4, we conclude that $\{x_n\}$ converges weakly to a point in $\Omega$. This completes the proof.

Image deblurring can be modeled by the following linear system:

$$Ax = b + v,$$

where $A \in \mathbb{R}^{M \times N}$ is the blurring matrix, $x \in \mathbb{R}^N$ is the original image, $b \in \mathbb{R}^M$ is the degraded image and $v \in \mathbb{R}^M$ is the noise.

An approximation of the clean image can be found by solving the following LASSO problem [27]:

$$\min_{x \in \mathbb{R}^N}\Big\{\frac{1}{2}\|b - Ax\|_2^2 + \tau\|x\|_1\Big\}, \qquad (4.1)$$

where $\tau$ is a positive parameter, $\|\cdot\|_1$ is the $\ell_1$-norm, and $\|\cdot\|_2$ is the Euclidean norm.

It is known that (4.1) can be written in the form (1.1) by defining $f(x) = \frac{1}{2}\|b - Ax\|_2^2$ and $g(x) = \tau\|x\|_1$. We compare our algorithm (IMFB, Algorithm 3.1) with FISTA, MFB, FRB, MFRB, IMFB (1.6), MSP and FMFB.
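For this choice of $f$ and $g$, the two oracles every algorithm above needs are $\nabla f(x) = A^{\top}(Ax - b)$ and the soft-thresholding prox of $g$. A sketch, with $A$ taken as a dense matrix for simplicity (the experiments instead apply the blur in the frequency domain via FFT); the helper name is ours.

```python
import numpy as np

def make_lasso_oracles(A, b, tau):
    # f(x) = 0.5 * ||b - A x||_2^2  =>  grad f(x) = A^T (A x - b).
    grad_f = lambda x: A.T @ (A @ x - b)
    # g(x) = tau * ||x||_1  =>  prox_{lam g}(z) = soft-thresholding at lam * tau.
    prox_g = lambda z, lam: np.sign(z) * np.maximum(np.abs(z) - lam * tau, 0.0)
    return grad_f, prox_g
```

These oracles can be passed directly to the IMFB sketch given after Algorithm 3.1.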

In our method IMFB (Algorithm 3.1), we set $t_0 = 1$, $t_n = \frac{1 + \sqrt{1 + 4t_{n-1}^2}}{2}$ and

$$\theta_n = \begin{cases} \dfrac{t_{n-1} - 1}{t_n}, & \text{if } 1 \le n \le 1000; \\[2mm] 0, & \text{otherwise,} \end{cases}$$

so that $\sum_{n=1}^{\infty}\theta_n < \infty$, as required by Theorem 3.1.

The regularization parameter is chosen as $\tau = 10^{-5}$, and the starting points are $x_0 = x_1 = (1, 1, 1, \ldots, 1) \in \mathbb{R}^N$. We set the remaining parameters as in Table 1.

Table 1. The parameters for each method ($\lambda_n = 1/\|A\|^2$ is used for the fixed-stepsize methods FISTA and IMFB (1.6); each remaining parameter applies only to the methods that use it):

$\lambda_0 = 0.1$, $\lambda_1 = 0.5$, $\sigma = 0.1$, $\rho = 0.8$, $\beta = 0.5$, $\gamma = 1/\beta$, $\delta = 0.5$, $\mu = 0.2$, $\alpha_n = \frac{1}{n+1}$, $\psi = 2$, $r(x_n) = \frac{1}{2}x_n$.


In this example, we set all parameters as in Table 1. For the experiments, we use RGB images of size $251 \times 189$, which are blurred by the following blur types:

(i) Motion blur with motion length of 45 pixels and motion orientation 180°.

(ii) Gaussian blur of filter size $5 \times 5$ with standard deviation 5.

(iii) Out-of-focus blur with radius 7.

We add Poisson noise and use the Fast Fourier Transform (FFT) to convert the images to the frequency domain. The structural similarity index measure (SSIM) [30] is used for measuring the similarity between two images. The peak signal-to-noise ratio (PSNR), in decibels (dB) [28], is defined by

$$\mathrm{PSNR} = 10\log_{10}\Big(\frac{255^2}{\mathrm{MSE}}\Big),$$

where $\mathrm{MSE} = \|x_n - x\|^2$ and $x$ is the original image. A higher PSNR generally indicates a reconstruction of higher quality. The SSIM index is a decimal value between 0 and 1, and a value of 1 indicates perfect structural similarity.
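A sketch of the PSNR computation exactly as defined above, with the unnormalized $\mathrm{MSE} = \|x_n - x\|^2$ as printed (many references instead divide by the number of pixels, which is an assumption to check against the experiments); the function name is ours.

```python
import numpy as np

def psnr(x, x_true):
    # PSNR = 10 * log10(255^2 / MSE), with MSE = ||x - x_true||^2 as in the text.
    mse = np.sum((np.asarray(x, dtype=float) - np.asarray(x_true, dtype=float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```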

The numerical experiments were carried out in MATLAB (R2020b) on a MacBook Pro M1 with 8 GB of RAM. For recovering the degraded RGB images, we limit the number of iterations to 1,000. We report the numerical results in Table 2.

Table 2. The comparison of PSNR, SSIM and CPU time (in seconds) of the restored images for each method.

Methods          | Motion blur                | Gaussian blur              | Out of focus
                 | PSNR     SSIM    CPU       | PSNR     SSIM    CPU       | PSNR     SSIM    CPU
FISTA            | 25.1122  0.7694  48.0184   | 34.3744  0.9320  47.2637   | 30.9043  0.8672  47.2831
MFB              | 24.8516  0.7640  71.2241   | 34.9546  0.9405  71.4400   | 28.3107  0.8152  70.4725
FRB              | 27.5733  0.8536  113.2112  | 38.6119  0.9703  112.5153  | 31.8188  0.8886  111.3155
MFRB             | 25.5158  0.7893  77.6740   | 36.0870  0.9515  70.3914   | 29.3660  0.8412  69.7562
IMFB (1.6)       | 33.6105  0.9453  42.8921   | 41.1854  0.9818  42.7064   | 34.8978  0.9293  42.9258
MSP              | 34.8218  0.9560  92.9287   | 38.0609  0.9675  93.2503   | 32.0816  0.8941  93.2242
FMFB             | 40.8550  0.9785  64.8279   | 43.9280  0.9888  64.9999   | 38.0780  0.9544  64.8632
IMFB (Alg. 3.1)  | 46.7885  0.9920  75.7321   | 47.3368  0.9939  75.5435   | 41.0665  0.9743  74.7891


In Table 2, we see that IMFB (Algorithm 3.1) attains a higher PSNR than FISTA, MFB, FRB, MFRB, IMFB (1.6), MSP and FMFB for the same number of iterations. Moreover, the SSIM of IMFB (Algorithm 3.1) is closer to 1 than that of the other methods. This shows that our algorithm converges better than the other methods in this example. However, we observe that IMFB (1.6) requires less CPU time than the other methods.

We next show the different types of blurred RGB images, with their PSNR values, in Figure 1.

Figure 1. (a), (c) and (e) show the original RGB images, and (b), (d) and (f) show the corresponding noisy blurred images for each blur type.

We next show the restored RGB images, with their PSNR values, for motion blur in Figure 2.

Figure 2. Recovered images via the different methods for images degraded by motion blur.

We next show the restored RGB images, with their PSNR values, for Gaussian blur in Figure 3.

Figure 3. Recovered images via the different methods for images degraded by Gaussian blur.

We next show the restored RGB images, with their PSNR values, for out-of-focus blur in Figure 4.

Figure 4. Recovered images via the different methods for images degraded by out-of-focus blur.

The results of the numerical experiments are summarized in Table 2. Figure 1 shows the original and blurred images for this experiment. In Figures 2–5, we report all results, including the images recovered by each algorithm. It is shown that IMFB (Algorithm 3.1) outperforms FISTA, MFB, FRB, MFRB, IMFB (1.6), MSP and FMFB in terms of PSNR and SSIM.

Figure 5. Graphs of PSNR and SSIM values for each blurred image and the images restored by FISTA, MFB, FRB, MFRB, IMFB (1.6), MSP, FMFB and IMFB (Algorithm 3.1).

In this paper, we established a convergence theorem for the iterates generated by a new modified inertial forward-backward algorithm with adaptive stepsize, under suitable conditions, for convex minimization problems. We applied our main result to image recovery. It was shown that our proposed method outperforms the other methods considered in terms of PSNR and SSIM. In future work, we will study the convergence rate of the iteration.

This project is funded by the National Research Council of Thailand (NRCT) under grant No. N41A640094. The authors wish to thank the University of Phayao and Thailand Science Research and Innovation (grant No. FF65-UOE001).

    The authors declare no conflict of interest.



    [1] Q. Ansari, A. Rehan, Split feasibility and fixed point problems, In: Nonlinear analysis, New Delhi: Birkhäuser, 2014,281–322. http://dx.doi.org/10.1007/978-81-322-1883-8_9
    [2] H. Bauschke, P. Combettes, Convex analysis and monotone operator theory in Hilbert spaces, New York: Springer, 2011. http://dx.doi.org/10.1007/978-1-4419-9467-7
    [3] H. Bauschke, M. Bui, X. Wang, Applying FISTA to optimization problems (with or) without minimizers, Math. Program., 184 (2020), 349–381. http://dx.doi.org/10.1007/s10107-019-01415-x doi: 10.1007/s10107-019-01415-x
    [4] A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM J. Imaging Sci., 2 (2009), 183–202. http://dx.doi.org/10.1137/080716542 doi: 10.1137/080716542
    [5] J. Bello Cruz, T. Nghia, On the convergence of the forward-backward splitting method with linesearches, Optim. Method. Softw., 31 (2016), 1209–1238. http://dx.doi.org/10.1080/10556788.2016.1214959 doi: 10.1080/10556788.2016.1214959
    [6] R. Burachik, A. Iusem, Enlargements of monotone operators, In: Set-valued mappings and enlargements of monotone operators, Boston: Springer, 2008,161–220. http://dx.doi.org/10.1007/978-0-387-69757-4_5
    [7] W. Cholamjiak, P. Cholamjiak, S. Suantai, An inertial forward-backward splitting method for solving inclusion problems in Hilbert spaces, J. Fixed Point Theory Appl., 20 (2018), 42. http://dx.doi.org/10.1007/s11784-018-0526-5 doi: 10.1007/s11784-018-0526-5
    [8] P. Cholamjiak, Y. Shehu, Inertial forward-backward splitting method in Banach spaces with application to compressed sensing, Appl. Math., 64 (2019), 409–435. http://dx.doi.org/10.21136/AM.2019.0323-18 doi: 10.21136/AM.2019.0323-18
    [9] F. Cui, Y. Tang, C. Zhu, Convergence analysis of a variable metric forward–backward splitting algorithm with applications, J. Inequal. Appl., 2019 (2019), 141. http://dx.doi.org/10.1186/s13660-019-2097-4 doi: 10.1186/s13660-019-2097-4
    [10] M. Farid, R. Ali, W. Cholamjiak, An inertial iterative algorithm to find common solution of a split generalized equilibrium and a variational inequality problem in hilbert spaces, J. Math., 2021 (2021), 3653807. http://dx.doi.org/10.1155/2021/3653807 doi: 10.1155/2021/3653807
    [11] R. Gu, A. Dogandžić, Projected nesterov's proximal-gradient algorithm for sparse signal recovery, IEEE T. Signal Proces., 65 (2017), 3510–3525. http://dx.doi.org/10.1109/TSP.2017.2691661 doi: 10.1109/TSP.2017.2691661
    [12] A. Hanjing, S. Suantai, A fast image restoration algorithm based on a fixed point and optimization method, Mathematics, 8 (2020), 378. http://dx.doi.org/10.3390/math8030378 doi: 10.3390/math8030378
    [13] D. Hieu Van, P. Anh, L. Muu, Modified forward-backward splitting method for variational inclusions, 4OR-Q. J. Oper. Res., 19 (2021), 127–151. http://dx.doi.org/10.1007/s10288-020-00440-3 doi: 10.1007/s10288-020-00440-3
    [14] A. Iusem, B. Svaiter, M. Teboulle, Entropy-like proximal methods in convex programming, Math. Oper. Res., 19 (1994), 790–814. http://dx.doi.org/10.1287/moor.19.4.790 doi: 10.1287/moor.19.4.790
    [15] S. Khan, W. Cholamjiak, K. Kazmi, An inertial forward-backward splitting method for solving combination of equilibrium problems and inclusion problems, Comp. Appl. Math., 37 (2018), 6283–6307. http://dx.doi.org/10.1007/s40314-018-0684-5 doi: 10.1007/s40314-018-0684-5
    [16] J. Liang, T. Luo, C. Schönlieb, Improving "fast iterative shrinkage-thresholding algorithm": faster, smarter and greedier, arXiv: 1811.01430.
    [17] Y. Malitsky, M. Tam, A forward-backward splitting method for monotone inclusions without cocoercivity, SIAM J. Optimiz., 30 (2020), 1451–1472. http://dx.doi.org/10.1137/18M1207260 doi: 10.1137/18M1207260
    [18] A. Moudafi, M. Oliny, Convergence of a splitting inertial proximal method for monotone operators, J. Comput. Appl. Math., 155 (2003), 447–454. http://dx.doi.org/10.1016/S0377-0427(02)00906-8 doi: 10.1016/S0377-0427(02)00906-8
    [19] M. Osilike, S. Aniagbosor, G. Akuchu, Fixed points of asymptotically demicontractive mappings in arbitrary Banach spaces, Panamerican Mathematical Journal, 12 (2002), 77–88.
    [20] A. Padcharoen, D. Kitkuan, W. Kumam, P. Kumam, Tseng methods with inertial for solving inclusion problems and application to image deblurring and image recovery problems, Comput. Math. Method., 3 (2021), 1088. http://dx.doi.org/10.1002/cmm4.1088 doi: 10.1002/cmm4.1088
    [21] B. Polyak, Some methods of speeding up the convergence of iteration methods, USSR Comp. Math. Math. Phys., 4 (1964), 1–17. http://dx.doi.org/10.1016/0041-5553(64)90137-5 doi: 10.1016/0041-5553(64)90137-5
    [22] D. Reem, S. Reich, A. De Pierro, A telescopic Bregmanian proximal gradient method without the global Lipschitz continuity assumption, J. Optim. Theory Appl., 182 (2019), 851–884. http://dx.doi.org/10.1007/s10957-019-01509-8 doi: 10.1007/s10957-019-01509-8
    [23] Y. Shehu, P. Cholamjiak, Iterative method with inertial for variational inequalities in Hilbert spaces, Calcolo, 56 (2019), 4. http://dx.doi.org/10.1007/s10092-018-0300-5 doi: 10.1007/s10092-018-0300-5
    [24] Y. Shehu, G. Cai, O. Iyiola, Iterative approximation of solutions for proximal split feasibility problems, Fixed Point Theory Appl., 2015 (2015), 123. http://dx.doi.org/10.1186/s13663-015-0375-5 doi: 10.1186/s13663-015-0375-5
    [25] S. Suantai, N. Pholasa, P. Cholamjiak, The modified inertial relaxed CQ algorithm for solving the split feasibility problems, J. Ind. Manag. Optim., 14 (2018), 1595–1615. http://dx.doi.org/10.3934/jimo.2018023 doi: 10.3934/jimo.2018023
    [26] R. Suparatulatorn, W. Cholamjiak, S. Suantai, Existence and convergence theorems for global minimization of best proximity points in Hilbert spaces, Acta Appl. Math., 165 (2020), 81–90. http://dx.doi.org/10.1007/s10440-019-00242-8 doi: 10.1007/s10440-019-00242-8
    [27] R. Tibshirani, Regression shrinkage and selection via the lasso, J. R. Stat. Soc. B, 58 (1996), 267–288. http://dx.doi.org/10.1111/j.2517-6161.1996.tb02080.x doi: 10.1111/j.2517-6161.1996.tb02080.x
    [28] K. H. Thung, P. Raveendran, A survey of image quality measures, Proceeding of International Conference for Technical Postgraduates, 2009, 1–4. http://dx.doi.org/10.1109/TECHPOS.2009.5412098 doi: 10.1109/TECHPOS.2009.5412098
[29] P. Tseng, A modified forward-backward splitting method for maximal monotone mappings, SIAM J. Control Optim., 38 (2000), 431–446. http://dx.doi.org/10.1137/S0363012998338806 doi: 10.1137/S0363012998338806
    [30] Z. Wang, A. Bovik, H. Sheikh, E. Simoncelli, Image quality assessment: from error visibility to structural similarity, IEEE Trans. Image Process., 13 (2004), 600–612. http://dx.doi.org/10.1109/tip.2003.819861 doi: 10.1109/tip.2003.819861
    [31] F. Wang, H. Xu, Weak and strong convergence of two algorithms for the split fixed point problem, Numer. Math. Theor. Meth. Appl., 11 (2018), 770–781. http://dx.doi.org/10.4208/nmtma.2018.s05 doi: 10.4208/nmtma.2018.s05
    [32] D. Yambangwai, S. Khan, H. Dutta, W. Cholamjiak, Image restoration by advanced parallel inertial forward–backward splitting methods, Soft Comput., 25 (2021), 6029–6042. http://dx.doi.org/10.1007/s00500-021-05596-6 doi: 10.1007/s00500-021-05596-6
© 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)