Research article

Preconditioned augmented Lagrangian method for mean curvature image deblurring

  • Image deblurring models with a mean curvature functional have been widely used to preserve edges and remove the staircase effect in the resulting images. However, the Euler-Lagrange equations of a mean curvature model are fourth-order non-linear integro-differential equations. Furthermore, the discretization of these equations produces an ill-conditioned system, so numerical schemes such as Krylov subspace methods (e.g., the conjugate gradient method) converge slowly. In this paper, we propose an augmented Lagrangian method for a mean curvature-based primal form of the image deblurring problem. A new circulant preconditioner is introduced to overcome the slow convergence of the conjugate gradient method inside the augmented Lagrangian method. With the proposed preconditioner, fast convergence is observed in the numerical results. Moreover, a comparison with existing numerical methods further reveals the effectiveness of the preconditioned augmented Lagrangian method.

    Citation: Shahbaz Ahmad, Faisal Fairag, Adel M. Al-Mahdi, Jamshaid ul Rahman. Preconditioned augmented Lagrangian method for mean curvature image deblurring[J]. AIMS Mathematics, 2022, 7(10): 17989-18009. doi: 10.3934/math.2022991




    Image deblurring or deconvolution is an important topic in image processing due to its wide range of applications in astronomical imaging, robot vision, medical image processing, remote sensing, virtual reality and many others. In the field of image deconvolution, the use of non-linear variational methods has received a great amount of attention in the last few decades. While applying these methods to blurred and noisy images, researchers face two main difficulties. One difficulty is non-linearity, and the other is related to solving the underlying large matrix system. In this paper, our primary goal is to deal with these two computational difficulties for a high-order variational model. The total variation (TV) model [1,29,36]

    \mathcal{F}(u) = \frac{1}{2}\|Ku - z\|^2 + \alpha \int_\Omega |\nabla u|\, dx \quad (1.1)

    is the most famous non-linear variational model. It has so many desirable properties, but this model transforms smooth functions into piecewise constant functions which generates staircase effects in the deblurred images. That is why the resulting images look blocky. To overcome the problem of staircase effects, one idea is to use mean curvature (MC)-based regularization models [12,19,31,39,40,41],

    \mathcal{F}(u) = \frac{1}{2}\|Ku - z\|^2 + \frac{\alpha}{2} \int_\Omega \left(\nabla\cdot\frac{\nabla u}{|\nabla u|}\right)^2 dx. \quad (1.2)

    The MC-based regularization models not only remove the staircase effects but also preserve the edges in the deblurred images. However, the Euler-Lagrange equations of mean curvature models are non-linear fourth-order integro-differential equations. This non-linear high-order term comes from the MC functional. Furthermore, the discretization of such an integro-differential equation results in a large ill-conditioned system that causes numerical algorithms like Krylov subspace methods (e.g., the conjugate gradient (CG) method) to converge slowly. The Jacobian matrix of the ill-conditioned system has a block-banded structure with a large bandwidth. An efficient and robust numerical solution is a key issue for MC-based regularization methods due to the ill-conditioned system and high non-linearity.
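The conditioning difficulty can be illustrated on a small one-dimensional analogue: a fourth-order term behaves like the square of a second-order difference operator, so its condition number is the square of an already large number. A minimal numpy sketch (an illustration only, not the paper's operator):

```python
import numpy as np

def second_difference(n):
    # 1D discrete Laplacian (Dirichlet) on n interior points, mesh size h = 1/(n+1)
    h = 1.0 / (n + 1)
    return (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
            + np.diag(np.ones(n - 1), -1)) / h**2

n = 50
L2 = second_difference(n)   # second-order operator: cond grows like h^-2
L4 = L2 @ L2                # fourth-order operator: cond grows like h^-4
c2 = np.linalg.cond(L2)
c4 = np.linalg.cond(L4)     # for this symmetric L2, c4 equals c2 squared
```

Since `L2` is symmetric, squaring it squares every singular value, so the fourth-order operator is dramatically worse conditioned on the same mesh.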

    In this paper, we propose an augmented Lagrangian method for an MC-based image deblurring problem. The augmented Lagrangian methods have already been successfully applied to the optimization problems in image processing and computer vision [9,22,32,37]. In these works, the augmented Lagrangian methods achieve much higher speeds than other numerical methods. The augmented Lagrangian methods have been shown to decompose the original nontrivial minimization problem into a number of subproblems that are easy and fast to solve. Some of them can be solved by fast solvers like the fast Fourier transform (FFT), and others have 'closed-form' solutions. Therefore, the construction of an efficient augmented Lagrangian method for the minimization of a given functional depends on whether one can break down the original functional into simple ones. In our proposed augmented Lagrangian method, we use the cell-centered finite difference scheme along with the midpoint quadrature scheme for discretizing the subproblem for the primary variable u (image) of the associated Lagrangian equations. However, this leads to a large non-linear matrix system. The coefficient matrix of this system is symmetric positive definite (SPD). So the CG method is suitable for the solution of this matrix system. But, the SPD matrix is quite dense which causes the CG method to slowly converge. To overcome the problem of slow convergence due to the CG method, we have introduced a new circulant preconditioned matrix. So, for the solution of the system, instead of applying ordinary CG method, we use a preconditioned CG (PCG) method. By using the proposed new preconditioner fast convergence can be observed in the numerical results.

    The contributions of the paper are as follows: (i) we present a preconditioned augmented Lagrangian method for an MC-based image deblurring model; (ii) we introduce a new circulant preconditioner that overcomes the slow convergence of the CG method inside the augmented Lagrangian scheme; and (iii) we present a comparison with existing numerical methods to demonstrate the efficiency of the preconditioned augmented Lagrangian method. The paper is organized as follows. The second section explains the image deblurring problem. The third section describes our proposed augmented Lagrangian method for an MC-based image deblurring model; the cell discretization and the matrix system are also presented in the third section. The fourth section describes the proposed circulant preconditioner, and the fifth section discusses the numerical experiments. We present the conclusions in the last section of the manuscript.

    We start the paper by presenting a concise description of the image deblurring problem. The mathematical relationship of u (original or true image) and z (blurred image) is

    z = Ku + \epsilon. \quad (2.1)

    The term \epsilon represents a noise function, which can be Gaussian noise, Brownian noise, salt-and-pepper noise, etc. In our experiments, we have only considered Gaussian noise. K represents the blurring operator,

    (Ku)(x) = \int_\Omega k(x, y)\, u(y)\, dy, \quad x \in \Omega, \quad (2.2)

    where k(x, y) = k(x - y) is known as a translation-invariant kernel. Figure 1 depicts the process of blurring.
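Because the kernel is translation-invariant, K acts as a convolution, which under periodic boundary conditions is a pointwise product in Fourier space. A small numpy sketch (the Gaussian kernel here is an illustrative stand-in, not the paper's exact kernel):

```python
import numpy as np

def gaussian_kernel(n, sigma):
    # Illustrative normalized 2D Gaussian point-spread function on an n x n grid
    ax = np.arange(n) - n // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def blur(u, k):
    # K u for a translation-invariant kernel: periodic convolution via FFT.
    # This is exactly the structure that makes K_h a BTTB matrix later on.
    return np.real(np.fft.ifft2(np.fft.fft2(u) * np.fft.fft2(np.fft.ifftshift(k))))

rng = np.random.default_rng(0)
u = rng.random((64, 64))
z = blur(u, gaussian_kernel(64, 2.0))   # blurred image: same mean, lower variance
```

Since the kernel is normalized, blurring preserves the mean intensity while smoothing out variation.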

    Figure 1.  (left) True image, (center-left) blurring kernel, (center-right) noise and (right) the resulting image.

    If K = I (the identity operator), then Problem (2.1) becomes an image denoising problem. If the blurring operator K is known, then the technique is referred to as non-blind deconvolution [14,34,38]. However, if the blurring operator is unknown, then it is called blind deconvolution [6,19,23]. Our research will focus on non-blind deconvolution. K is a Fredholm integral operator of the first kind, so it is compact. That is why Problem (2.1) is ill-posed [1,35,36]. The image intensity function u lies in a square \Omega \subset \mathbb{R}^2. The position of a pixel in \Omega is given by x = (x, y). Additionally, |x| = \sqrt{x^2 + y^2} and \|\cdot\| denote the Euclidean norm and the L^2(\Omega) norm, respectively. The recovery of u from z makes Problem (2.1) an unstable inverse problem [1,35,36]. The use of the MC regularization functional [12,31,39,41],

    J(u) = \int_\Omega \kappa(u)^2\, dx = \int_\Omega \left(\nabla\cdot\frac{\nabla u}{|\nabla u|}\right)^2 dx, \quad (2.3)

    makes Problem (2.1) stable. The problem is then to find a u that minimizes the functional

    \mathcal{F}(u) = \frac{1}{2}\|Ku - z\|^2 + \frac{\alpha}{2} J(u), \quad \alpha > 0. \quad (2.4)

    In [41], Zhu and Chan explained the well-posedness of Problem (2.4) for the case of synthetic image denoising. The Euler-Lagrange equations of Problem (2.4) are as follows:

    K^*(Ku - z) + \alpha\, \nabla\cdot\left(\frac{\nabla\kappa}{\sqrt{|\nabla u|^2 + \beta^2}} - \frac{(\nabla\kappa\cdot\nabla u)\,\nabla u}{\left(\sqrt{|\nabla u|^2 + \beta^2}\right)^3}\right) = 0 \quad \text{in } \Omega, \quad (2.5)
    \frac{\partial u}{\partial n} = 0 \quad \text{on } \partial\Omega, \quad (2.6)
    \kappa(u) = 0 \quad \text{on } \partial\Omega, \quad (2.7)

    where K^* is the adjoint operator of K and n is the outward unit normal. The parameter \beta > 0 is added to make the MC functional differentiable at zero. Equation (2.5) is a non-linear fourth-order differential equation.

    The MC-based regularization models not only remove the staircase effects but also preserve edges during the recovery of digital images. However, the Euler-Lagrange equations of the MC model form a non-linear fourth-order integro-differential equation. The MC functional generates a non-linear high-order term. When developing an efficient numerical method, one key issue is determining a proper approximation of the MC functional. In the next section, the non-linearity of MC is treated by introducing new variables for the associated Lagrangian.

    In the literature [37,40,42,43], one can find a lot of work on the augmented Lagrangian method (ALM) for the MC-based image denoising problem, but not for the image deblurring problem. In this paper, we extend the ALM algorithm to the image deblurring problem. To develop an ALM for the image deblurring model described by Problem (2.4), we introduce three new variables, i.e., w, v and t, and consider the following constrained minimization problem:

    \min_{u,w,v,t}\left[\frac{1}{2}\|Ku - z\|^2 + \alpha\int_\Omega |w|\right], \quad \text{with } w = \nabla\cdot v,\; v = \frac{t}{\sqrt{|t|^2 + \beta}},\; t = \nabla u. \quad (3.1)

    The associated augmented Lagrangian functional for Problem (3.1) is

    \mathcal{L}(u,w,v,t;\lambda_1,\lambda_2,\lambda_3) = \frac{1}{2}\|Ku - z\|^2 + \alpha\int_\Omega |w| \quad (3.2)
    \qquad + \frac{r_1}{2}\int_\Omega (w - \nabla\cdot v)^2 + \int_\Omega \lambda_1\,(w - \nabla\cdot v)
    \qquad + \frac{r_2}{2}\int_\Omega \left|v - \frac{t}{\sqrt{|t|^2 + \beta}}\right|^2 + \int_\Omega \lambda_2\cdot\left(v - \frac{t}{\sqrt{|t|^2 + \beta}}\right)
    \qquad + \frac{r_3}{2}\int_\Omega |t - \nabla u|^2 + \int_\Omega \lambda_3\cdot(t - \nabla u),

    where r_1, r_2 and r_3 are the penalization parameters, \lambda_1 \in \mathbb{R} and \lambda_2, \lambda_3 \in \mathbb{R}^3 are Lagrange multipliers, and v, t \in \mathbb{R}^3. One of the key issues with ALMs is handling the subproblems involving the term t/\sqrt{|t|^2 + \beta} = \nabla u/\sqrt{|\nabla u|^2 + \beta}. This type of term appears due to the mean curvature functional \nabla\cdot(\nabla u/|\nabla u|). One remedy is to introduce the variable t = \langle\nabla u, 1\rangle instead of t = \nabla u for the curvature term. Accordingly, we will use v = \langle\nabla u, 1\rangle/\sqrt{|\langle\nabla u, 1\rangle|^2 + \beta}. Then, our MC model becomes

    \min_{u,w,v,t}\left[\frac{1}{2}\|Ku - z\|^2 + \alpha\int_\Omega |w|\right], \quad \text{with } w = \nabla\cdot v,\; v = \frac{t}{\sqrt{|t|^2 + \beta}},\; t = \langle\nabla u, 1\rangle. \quad (3.3)

    Then, the associated augmented Lagrangian functional is

    \mathcal{L}(u,w,v,n,t;\lambda_1,\lambda_2,\lambda_3,\lambda_4) = \frac{1}{2}\|Ku - z\|^2 + \alpha\int_\Omega |w| \quad (3.4)
    \qquad + r_1\int_\Omega (|t| - t\cdot n) + \int_\Omega \lambda_1\,(|t| - t\cdot n)
    \qquad + \frac{r_2}{2}\int_\Omega |t - \langle\nabla u, 1\rangle|^2 + \int_\Omega \lambda_2\cdot(t - \langle\nabla u, 1\rangle)
    \qquad + \frac{r_3}{2}\int_\Omega (w - \partial_x v_1 - \partial_y v_2)^2 + \int_\Omega \lambda_3\,(w - \partial_x v_1 - \partial_y v_2)
    \qquad + \frac{r_4}{2}\int_\Omega |v - n|^2 + \int_\Omega \lambda_4\cdot(v - n) + \delta_R(n),

    where r_1, r_2, r_3 and r_4 are the penalty parameters, \lambda_1, \lambda_3 \in \mathbb{R} and \lambda_2, \lambda_4 \in \mathbb{R}^3 are Lagrange multipliers, and v, n, t \in \mathbb{R}^3. Here, we have introduced n to relax the variable v. R = \{n \in L^2(\Omega; \mathbb{R}^3) : |n| \le 1 \text{ a.e. in } \Omega\} and \delta_R(n) is the characteristic function of R, which can be expressed as

    \delta_R(n) = \begin{cases} 0, & n \in R \\ +\infty, & \text{otherwise}. \end{cases} \quad (3.5)

    To make the term |t| - t\cdot n always non-negative, we need n \in R. To find the saddle points of the minimization given by Problem (3.3), we need to find the saddle points of the augmented Lagrangian functional given by Eq (3.4). So, we fix all variables in Eq (3.4) except one particular variable and then find a critical point of the induced functional to update that variable. We repeat the same process for all variables in Eq (3.4). The Lagrange multipliers are advanced once all variables have been updated. The process continues until all variables in Eq (3.4) converge to a steady state. The ALM is summarized in the following algorithm.

    Algorithm 1: The Augmented Lagrangian method for MC model
        Step-Ⅰ: Initialize u^0, w^0, v^0, n^0, t^0 and \lambda_1^0, \lambda_2^0, \lambda_3^0, \lambda_4^0.
        Step-Ⅱ: For k = 1, 2, ...: compute (u^k, w^k, v^k, n^k, t^k) as an (approximate) minimizer of the augmented Lagrangian functional with the Lagrange multipliers \lambda_1^{k-1}, \lambda_2^{k-1}, \lambda_3^{k-1}, \lambda_4^{k-1}, i.e.,
    (u^k, w^k, v^k, n^k, t^k) \approx \arg\min \mathcal{L}(u, w, v, n, t; \lambda_1^{k-1}, \lambda_2^{k-1}, \lambda_3^{k-1}, \lambda_4^{k-1}). \quad (3.6)
        Step-Ⅲ: Update the Lagrange multipliers:
    \lambda_1^k = \lambda_1^{k-1} + r_1(|t^k| - t^k\cdot n^k), \quad (3.7)
    \lambda_2^k = \lambda_2^{k-1} + r_2(t^k - \langle\nabla u^k, 1\rangle), \quad (3.8)
    \lambda_3^k = \lambda_3^{k-1} + r_3(w^k - \partial_x v_1^k - \partial_y v_2^k), \quad (3.9)
    \lambda_4^k = \lambda_4^{k-1} + r_4(v^k - n^k), \quad (3.10)
    where n = \langle n_1, n_2, n_3\rangle.
        Step-Ⅳ: Stop the iteration if the relative residuals are smaller than a threshold \epsilon_r.
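To see the mechanics of Steps Ⅱ–Ⅳ in isolation, here is a toy instance of the same outer iteration on a much simpler problem (an equality-constrained least-squares problem, not the deblurring functional): the subproblem minimizer is available in closed form, the multiplier follows the Step Ⅲ update rule, and iteration stops on a small residual. All names and parameter values below are ours, chosen for illustration.

```python
import numpy as np

# Toy ALM: minimize 0.5*||x - z||^2 subject to a^T x = b
rng = np.random.default_rng(1)
z = rng.random(5)
a = np.ones(5)
b = 1.0
r, lam = 10.0, 0.0                  # penalty parameter and multiplier
x = np.zeros(5)
for k in range(200):
    # Step II: exact x-update, solving (I + r a a^T) x = z - lam*a + r*b*a,
    # the stationarity condition of the augmented Lagrangian in x
    A = np.eye(5) + r * np.outer(a, a)
    x = np.linalg.solve(A, z - lam * a + r * b * a)
    res = a @ x - b
    # Step III: multiplier ascent with the same penalty parameter
    lam = lam + r * res
    # Step IV: stop once the constraint residual is small
    if abs(res) < 1e-10:
        break
```

The iterates converge to the projection of z onto the constraint set, and the multiplier converges to the optimal dual variable.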


    Now we consider the following subproblems and find their critical points

    f_1(u) = \frac{1}{2}\|Ku - z\|^2 + \frac{r_2}{2}\int_\Omega |t - \langle\nabla u, 1\rangle|^2 + \int_\Omega \lambda_2\cdot(t - \langle\nabla u, 1\rangle), \quad (3.11)
    f_2(w) = \alpha\int_\Omega |w| + \frac{r_3}{2}\int_\Omega (w - \partial_x v_1 - \partial_y v_2)^2 + \int_\Omega \lambda_3\,(w - \partial_x v_1 - \partial_y v_2), \quad (3.12)
    f_3(t) = r_1\int_\Omega (|t| - t\cdot n) + \int_\Omega \lambda_1\,(|t| - t\cdot n) + \frac{r_2}{2}\int_\Omega |t - \langle\nabla u, 1\rangle|^2 + \int_\Omega \lambda_2\cdot(t - \langle\nabla u, 1\rangle), \quad (3.13)
    f_4(v) = \frac{r_3}{2}\int_\Omega (w - \partial_x v_1 - \partial_y v_2)^2 + \int_\Omega \lambda_3\,(w - \partial_x v_1 - \partial_y v_2) + \frac{r_4}{2}\int_\Omega |v - n|^2 + \int_\Omega \lambda_4\cdot(v - n), \quad (3.14)
    f_5(n) = r_1\int_\Omega (|t| - t\cdot n) + \int_\Omega \lambda_1\,(|t| - t\cdot n) + \frac{r_4}{2}\int_\Omega |v - n|^2 + \int_\Omega \lambda_4\cdot(v - n) + \delta_R(n). \quad (3.15)

    The standard procedure gives us the following Euler-Lagrange equations for the functionals f1(u) and f4(v):

    K^*Ku - r_2\Delta u = K^*z - \partial_x(r_2 t_1 + \lambda_{21}) - \partial_y(r_2 t_2 + \lambda_{22}), \quad (3.16)
    r_4 v_1 - r_3\,\partial_x(\partial_x v_1 + \partial_y v_2) = r_4 n_1 - \lambda_{41} - \partial_x(r_3 w + \lambda_3), \quad (3.17)
    r_4 v_2 - r_3\,\partial_y(\partial_x v_1 + \partial_y v_2) = r_4 n_2 - \lambda_{42} - \partial_y(r_3 w + \lambda_3), \quad (3.18)
    r_4 v_3 = r_4 n_3 - \lambda_{43}, \quad (3.19)

    respectively, where v = \langle v_1, v_2, v_3\rangle, t = \langle t_1, t_2, t_3\rangle, n = \langle n_1, n_2, n_3\rangle, \lambda_2 = \langle\lambda_{21}, \lambda_{22}, \lambda_{23}\rangle and \lambda_4 = \langle\lambda_{41}, \lambda_{42}, \lambda_{43}\rangle. To update the functionals f_2(w), f_3(t) and f_5(n), we need the following equations:

    \arg\min_w f_2(w) = \max\left\{0,\, 1 - \frac{\alpha}{r_3|\tilde{w}|}\right\}\tilde{w}, \quad \tilde{w} = \partial_x v_1 + \partial_y v_2 - \frac{\lambda_3}{r_3}, \quad (3.20)
    \arg\min_t f_3(t) = \max\left\{0,\, 1 - \frac{r_1 + \lambda_1}{r_2|\tilde{t}|}\right\}\tilde{t}, \quad \tilde{t} = \langle\nabla u, 1\rangle - \frac{\lambda_2}{r_2} + \frac{r_1 + \lambda_1}{r_2}\, n, \quad (3.21)
    \arg\min_n f_5(n) = \begin{cases}\tilde{n}, & |\tilde{n}| \le 1 \\ \tilde{n}/|\tilde{n}|, & \text{otherwise},\end{cases} \quad \tilde{n} = v + \frac{\lambda_4}{r_4} + \frac{r_1 + \lambda_1}{r_4}\, t. \quad (3.22)
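The w- and n-updates above are simple pointwise operations: an isotropic shrinkage ("soft threshold") and a projection onto the unit ball. A numpy sketch of both (function and variable names are ours):

```python
import numpy as np

def update_w(w_tilde, alpha, r3):
    # Closed-form w-update in the style of (3.20): shrink w_tilde toward zero,
    # with everything below the threshold alpha/r3 mapped exactly to zero
    mag = np.abs(w_tilde)
    return np.maximum(0.0, 1.0 - alpha / (r3 * np.maximum(mag, 1e-30))) * w_tilde

def update_n(n_tilde):
    # Closed-form n-update in the style of (3.22): project onto the ball |n| <= 1
    mag = np.linalg.norm(n_tilde, axis=-1, keepdims=True)
    return np.where(mag <= 1.0, n_tilde, n_tilde / mag)
```

Inputs with magnitude below the threshold are set to zero, larger ones have their magnitude reduced by the threshold; vectors already inside the unit ball pass through unchanged, others are normalized.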

    To update the Lagrangian multipliers, we have the following equations:

    \lambda_1^{new} = \lambda_1^{old} + r_1(|t| - t\cdot n), \quad (3.23)
    \lambda_2^{new} = \lambda_2^{old} + r_2(t - \langle\nabla u, 1\rangle), \quad (3.24)
    \lambda_3^{new} = \lambda_3^{old} + r_3(w - \partial_x v_1 - \partial_y v_2), \quad (3.25)
    \lambda_4^{new} = \lambda_4^{old} + r_4(v - n). \quad (3.26)

    Since the MC regularizer \kappa(u)^2 = (\nabla\cdot(\nabla u/|\nabla u|))^2 of the image deblurring problem is not homogeneous in u, we need a special kind of discretization, especially for the terms that involve derivatives. This is because the spatial mesh size plays an important role in the numerical performance of the MC model. So, we partition the domain \Omega = (0,1)\times(0,1) by \delta_x \times \delta_y. Additionally,

    \delta_x: 0 = x_{1/2} < x_{3/2} < ... < x_{n_x+1/2} = 1, \quad \delta_y: 0 = y_{1/2} < y_{3/2} < ... < y_{n_x+1/2} = 1,

    where the number of equispaced partitions in the direction of x or y is equal to nx and (xi,yj) represents the centers of the cells. Additionally,

    x_i = \frac{(2i - 1)h}{2} \quad \text{for } i = 1, 2, 3, ..., n_x,
    y_j = \frac{(2j - 1)h}{2} \quad \text{for } j = 1, 2, 3, ..., n_x,

    where h = 1/n_x; (x_{i\pm 1/2}, y_j) and (x_i, y_{j\pm 1/2}) represent the midpoints of cell edges:

    x_{i\pm 1/2} = x_i \pm \frac{h}{2} \quad \text{for } i = 1, 2, 3, ..., n_x,
    y_{j\pm 1/2} = y_j \pm \frac{h}{2} \quad \text{for } j = 1, 2, 3, ..., n_x.

    For each i=1,2,...,nx, and j=1,2,...,nx, define

    \Omega_{i,j} = (x_{i-1/2}, x_{i+1/2}) \times (y_{j-1/2}, y_{j+1/2}).

    For a function \theta(x, y), let \theta_{k,l} denote \theta(x_k, y_l), where k may take the values i-1, i or i+1 and l the values j-1, j or j+1, for integers i, j \ge 0. For the backward and forward discrete difference functions, we need values at the proper discrete points, so we define

    [d_x^+\theta]_{i,j} = \frac{\theta_{i+1,j} - \theta_{i,j}}{h}, \quad [d_x^-\theta]_{i,j} = \frac{\theta_{i,j} - \theta_{i-1,j}}{h}, \quad [d_y^+\theta]_{i,j} = \frac{\theta_{i,j+1} - \theta_{i,j}}{h}, \quad [d_y^-\theta]_{i,j} = \frac{\theta_{i,j} - \theta_{i,j-1}}{h}.

    Then the central difference discrete functions and discrete gradient are

    [d_x^c\theta]_{i,j} = \frac{[d_x^-\theta]_{i,j} + [d_x^+\theta]_{i,j}}{2}, \quad [d_y^c\theta]_{i,j} = \frac{[d_y^-\theta]_{i,j} + [d_y^+\theta]_{i,j}}{2},

    and

    [\nabla^+\theta]_{i,j} = \langle [d_x^+\theta]_{i,j}, [d_y^+\theta]_{i,j}\rangle, \quad [\nabla^-\theta]_{i,j} = \langle [d_x^-\theta]_{i,j}, [d_y^-\theta]_{i,j}\rangle,

    respectively. By applying midpoint quadrature approximation, we have that

    (Ku)(x_i, y_j) \approx [K_h U]_{(ij)}.
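In one dimension the difference operators above reduce to a few lines of numpy, and on a linear function every difference quotient recovers the slope exactly (a minimal 1D sketch on cell centers x_i = (2i-1)h/2, not the paper's 2D implementation):

```python
import numpy as np

nx, h = 8, 1.0 / 8
x = (2 * np.arange(1, nx + 1) - 1) * h / 2   # cell centers

def d_plus(theta, h):
    # forward difference [d_x^+ theta]_i = (theta_{i+1} - theta_i)/h
    return (theta[1:] - theta[:-1]) / h

def d_central(theta, h):
    # central difference: average of backward and forward differences,
    # i.e. (theta_{i+1} - theta_{i-1})/(2h)
    return (theta[2:] - theta[:-2]) / (2 * h)

theta = 3.0 * x            # a linear function with slope 3
fwd = d_plus(theta, h)
cen = d_central(theta, h)
```

Both operators reproduce the constant slope on this linear input, which is a quick sanity check for any finite-difference implementation.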

    Here, we consider the cell-centered finite-difference (CCFD) method for the MC-based image deblurring problem. The CCFD approximations \{U_{i,j}\} to \{u(x_i, y_j)\} are chosen. So from Eq (3.16), we have that

    [K_h^T K_h U]_{i,j} - r_2[\nabla^-\cdot\nabla^+ U]_{i,j} = [K_h^T Z]_{i,j} - [d_x^-(r_2 t_1 + \lambda_{21})]_{i,j} - [d_y^-(r_2 t_2 + \lambda_{22})]_{i,j},
    [K_h^T K_h U]_{i,j} - r_2[\nabla^-\cdot\nabla^+ U]_{i,j} = [K_h^T Z]_{i,j} + g(i,j), \quad (3.27)

    where g(i,j) = -[d_x^-(r_2 t_1 + \lambda_{21})]_{i,j} - [d_y^-(r_2 t_2 + \lambda_{22})]_{i,j}. According to the lexicographical ordering of the unknowns, U = [U_{11}\; U_{12}\; ...\; U_{n_x n_x}]^t. Then, from Eq (3.27), we have the following matrix system:

    (K_h^T K_h + r_2 B_h^T B_h)U = K_h^T Z + G_h. \quad (3.28)

    Here K_h is a matrix of size n_x^2 \times n_x^2 and B_h is a matrix of size 2n_x(n_x-1) \times n_x^2. The vectors K_h^T Z and G_h are of size n_x^2 \times 1. K_h^T K_h is symmetric positive semidefinite, and K_h is a block Toeplitz matrix with Toeplitz blocks (BTTB). The structure of the matrix B_h is

    B_h = \frac{1}{h}\begin{bmatrix} B_1 \\ B_2 \end{bmatrix},

    where the size of both B1 and B2 is nx(nx1)×n2x, and

    B_1 = C \otimes I, \quad B_2 = I \otimes C.

    The matrix
    C = \begin{bmatrix} -1 & 1 & & \\ & -1 & 1 & \\ & & \ddots & \ddots \\ & & & -1 \quad 1 \end{bmatrix}
    has size (n_x - 1) \times n_x and I is an identity matrix. To get the value of u, one has to solve the large matrix system given by Eq (3.28).
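The Kronecker block structure above is easy to assemble and check numerically; a sketch with numpy (variable names are ours, and C is taken as the bidiagonal difference matrix with -1 on the diagonal and +1 on the superdiagonal, consistent with the structure described above):

```python
import numpy as np

# Assemble B_h = (1/h) [B1; B2] with B1 = C (x) I and B2 = I (x) C
nx = 6
h = 1.0 / nx
C = np.zeros((nx - 1, nx))
C[np.arange(nx - 1), np.arange(nx - 1)] = -1.0   # diagonal: -1
C[np.arange(nx - 1), np.arange(1, nx)] = 1.0     # superdiagonal: +1
I = np.eye(nx)
B1 = np.kron(C, I)
B2 = np.kron(I, C)
Bh = np.vstack([B1, B2]) / h
BtB = Bh.T @ Bh       # a discrete negative Laplacian: symmetric positive semidefinite
```

The checks below confirm the dimensions stated in the text and that B_h^T B_h is symmetric positive semidefinite with the constant vector in its null space, matching the Neumann-type difference structure.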

    Now given Eqs (3.17) and (3.18), after discretizing, we have that

    r_4 v_1(i,j) - r_3\left([d_x^+ d_x^- v_1]_{i,j} + [d_x^+ d_y^- v_2]_{i,j}\right) = r_4 n_1(i,j) - \lambda_{41}(i,j) - [d_x^+(r_3 w + \lambda_3)]_{i,j}, \quad (3.29)
    r_4 v_2(i,j) - r_3\left([d_y^+ d_x^- v_1]_{i,j} + [d_y^+ d_y^- v_2]_{i,j}\right) = r_4 n_2(i,j) - \lambda_{42}(i,j) - [d_y^+(r_3 w + \lambda_3)]_{i,j}, \quad (3.30)

    Then, applying a discrete Fourier transform followed by a discrete inverse Fourier transform to Eqs (3.29) and (3.30), one can obtain v_1 and v_2; v_3 can be obtained directly from Eq (3.19). To update the variables w, t, n and the Lagrange multipliers \lambda_1, \lambda_2, \lambda_3, \lambda_4, we discretize Eqs (3.20)–(3.26) at the grid points (i,j). So we need the following equations:

    w(i,j) = \max\left\{0,\, 1 - \frac{\alpha}{r_3|\tilde{w}(i,j)|}\right\}\tilde{w}(i,j), \quad \tilde{w}(i,j) = \partial_x v_1(i,j) + \partial_y v_2(i,j) - \frac{\lambda_3}{r_3}, \quad (3.31)
    t(i,j) = \max\left\{0,\, 1 - \frac{r_1 + \lambda_1}{r_2|\tilde{t}(i,j)|}\right\}\tilde{t}(i,j), \quad (3.32)
    \tilde{t}(i,j) = \langle d_x^+ u(i,j),\, d_y^+ u(i,j),\, 1\rangle - \frac{\langle\lambda_{21}, \lambda_{22}, \lambda_{23}\rangle(i,j)}{r_2} + \frac{r_1 + \lambda_1}{r_2}\langle n_1, n_2, n_3\rangle(i,j),
    n(i,j) = \begin{cases}\tilde{n}(i,j), & |\tilde{n}| \le 1 \\ \tilde{n}(i,j)/|\tilde{n}(i,j)|, & \text{otherwise},\end{cases} \quad (3.33)
    \tilde{n}(i,j) = v(i,j) + \frac{\lambda_4(i,j)}{r_4} + \frac{r_1 + \lambda_1}{r_4}\, t(i,j).

    To update the Lagrangian multipliers, we have the following equations:

    \lambda_1^{new}(i,j) = \lambda_1^{old}(i,j) + r_1(|t(i,j)| - t(i,j)\cdot n(i,j)), \quad (3.34)
    \lambda_{21}^{new}(i,j) = \lambda_{21}^{old}(i,j) + r_2(t_1(i,j) - d_x^+ u(i,j)), \quad (3.35)
    \lambda_{22}^{new}(i,j) = \lambda_{22}^{old}(i,j) + r_2(t_2(i,j) - d_y^+ u(i,j)), \quad (3.36)
    \lambda_{23}^{new}(i,j) = \lambda_{23}^{old}(i,j) + r_2(t_3(i,j) - 1), \quad (3.37)
    \lambda_3^{new}(i,j) = \lambda_3^{old}(i,j) + r_3(w(i,j) - d_x^- v_1(i,j) - d_y^- v_2(i,j)), \quad (3.38)
    \lambda_{41}^{new}(i,j) = \lambda_{41}^{old}(i,j) + r_4(v_1(i,j) - n_1(i,j)), \quad (3.39)
    \lambda_{42}^{new}(i,j) = \lambda_{42}^{old}(i,j) + r_4(v_2(i,j) - n_2(i,j)), \quad (3.40)
    \lambda_{43}^{new}(i,j) = \lambda_{43}^{old}(i,j) + r_4(v_3(i,j) - n_3(i,j)). \quad (3.41)

    To find the value of our main variable u, one has to solve the matrix system given by Eq (3.28). The Hessian matrix K_h^T K_h + r_2 B_h^T B_h of this system is extremely large for practical applications and tends to be quite ill-conditioned when r_2 is small. This happens because the eigenvalues of the blurring operator \bar{K} are very small and close to zero [36]. K_h^T K_h is a full matrix, but an FFT can be used to evaluate K_h^T K_h u in O(n_x \log n_x) operations [36] because the blurring operator \bar{K} is translation-invariant. The good thing is that the Hessian matrix is SPD.

    Theorem 4.1. The inner product \langle\bar{A}U, U\rangle is positive for any vector U \ne 0, where \bar{A} = K_h^T K_h + r_2 B_h^T B_h is the Hessian matrix of the system given by Eq (3.28).

    Proof. For any vector U \ne 0, consider that

    \langle\bar{A}U, U\rangle = \langle(K_h^T K_h + r_2 B_h^T B_h)U, U\rangle
    = \langle K_h^T K_h U, U\rangle + r_2\langle B_h^T B_h U, U\rangle
    = \langle K_h U, K_h U\rangle + r_2\langle B_h U, B_h U\rangle
    = \|K_h U\|^2 + r_2\|B_h U\|^2 > 0.

    This completes the proof.

    The CG method is suitable for solving the system given by Eq (3.28) owing to the above-mentioned properties. But the CG method has a quite slow convergence rate because the system is large and ill-conditioned. So, we use a PCG method [7,8,10,24,25,26,30]. The preconditioning matrix P must be SPD so that we get an effective solution for our system [2,3,4,11,36]. Here, we introduce our SPD circulant preconditioner P of Strang type [27]:

    P = \tilde{K}_h^T \tilde{K}_h + \gamma I_h, \quad (4.1)

    where \tilde{K}_h is a circulant approximation of the matrix K_h, \gamma > 0 and I_h is an identity matrix. When applying the PCG method to Eq (3.28), one of the requirements is to invert the preconditioning matrix P. Since the second term in P is the identity matrix I_h, the inversion is not a problem. For the inversion of the first term \tilde{K}_h^T\tilde{K}_h, we can apply FFTs at a cost of O(n_x \log n_x) floating-point operations [36].
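The reason this inversion is cheap is that a circulant matrix is diagonalized by the discrete Fourier transform, so applying P^{-1} costs one FFT pair: the eigenvalues of \tilde{K}_h^T\tilde{K}_h + \gamma I are |\hat{k}|^2 + \gamma, where \hat{k} is the FFT of the first column of \tilde{K}_h. A 1D numpy sketch of the idea (the 2D case is analogous with fft2; all names here are ours):

```python
import numpy as np

n, gamma = 16, 1.0
rng = np.random.default_rng(2)
c = rng.random(n)                 # first column of the circulant approximation
r = rng.random(n)                 # residual vector to be preconditioned
k_hat = np.fft.fft(c)
# z = P^{-1} r computed with one forward and one inverse FFT:
z = np.real(np.fft.ifft(np.fft.fft(r) / (np.abs(k_hat)**2 + gamma)))
```

This matches a dense solve with the explicitly assembled circulant matrix, at O(n log n) instead of O(n^3) cost.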

    Now, let \bar{A} = K_h^T K_h + r_2 B_h^T B_h be the Hessian matrix of the system given by Eq (3.28). Let the eigenvalues of K_h^T K_h and B_h^T B_h be \lambda_i^K and \lambda_i^B respectively, with \lambda_i^K \ge 0 and \lambda_i^B \ge 0. Then the eigenvalues of P^{-1}\bar{A} are

    \theta_i = \frac{\lambda_i^K + r_2\lambda_i^B}{\lambda_i^K + \gamma}. \quad (4.2)

    One can notice that \theta_i \to r_2\lambda_i^B/\gamma as \lambda_i^K \to 0, which is the relevant regime since the eigenvalues of the blurring operator cluster near zero. Hence, for \gamma comparable to r_2\lambda_i^B, the spectrum of P^{-1}\bar{A} is more favorable than that of the Hessian matrix \bar{A}. This can also be observed in the numerical examples when we use the PCG algorithm.
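The effect of a clustered spectrum on CG can be checked with a textbook PCG loop. The matrix below is a synthetic ill-conditioned SPD example (not the deblurring Hessian), and the preconditioner is its exact diagonal, the idealized analogue of the clustering argument above:

```python
import numpy as np

def pcg(A, b, apply_Pinv, tol=1e-10, maxit=500):
    # Textbook preconditioned conjugate gradient; apply_Pinv(r) applies P^{-1}.
    # Returns the approximate solution and the number of iterations used.
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_Pinv(r)
    p = z.copy()
    rz = r @ z
    for k in range(1, maxit + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k
        z = apply_Pinv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

rng = np.random.default_rng(3)
d = np.logspace(0, 6, 200)                    # eigenvalues spread over 6 decades
A = np.diag(d)
b = rng.random(200)
x_cg, it_cg = pcg(A, b, lambda r: r)          # plain CG: P = I
x_pcg, it_pcg = pcg(A, b, lambda r: r / d)    # ideal preconditioner: P = A
```

With the ideal preconditioner the spectrum of P^{-1}A collapses to a single point and PCG converges almost immediately, while plain CG struggles with the wide spectrum.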

    Here, we present numerical examples for the image deblurring problem. In all experiments, we have used different values of n_x, and the resulting matrix system has n_x^2 unknowns; the mesh size is then h = 1/n_x. The numerical computations were performed using MATLAB software on an Intel(R) Core(TM) i7-4510U CPU @ 2.00 GHz 2.60 GHz. In all experiments, we have taken the initial guess to be the zero vector. The values of the parameters \alpha and \beta were set by referencing [5,41]. To determine the optimal value of \gamma, we conducted numerical experiments using the Goldhills image. We found that the value of the peak signal-to-noise ratio (PSNR) does not show much improvement after \gamma = 1. So, one can use an optimal value of \gamma close to one. The results of the experiments are given in Figure 2.
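For reference, the PSNR used throughout the tables is 10 log10(peak^2 / MSE), with higher values indicating a better restoration. A one-line sketch (assuming an 8-bit peak of 255, which the text does not state explicitly):

```python
import numpy as np

def psnr(u_true, u_rec, peak=255.0):
    # PSNR = 10 log10(peak^2 / MSE) between the true and the restored image
    mse = np.mean((u_true.astype(float) - u_rec.astype(float))**2)
    return 10.0 * np.log10(peak**2 / mse)
```

For example, a restoration that is off by exactly one gray level everywhere has MSE 1 and hence a PSNR of 10 log10(255^2), about 48.13 dB.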

    Figure 2.  γ vs PSNR.

    The PSNR is used to measure the quality of the restored images. For numerical calculations, we have used the kegen(nx,r,σ) kernel [13,16,17,28]. It is a circular Gaussian filter of size nx×nx with a standard deviation σ and radius r. Figure 3 depicts the kegen(120,40,4) kernel.
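The kegen routine itself is defined in the cited references [13,16,17,28]; a hedged reconstruction from the description above (a circular Gaussian of size nx by nx with standard deviation sigma, truncated at radius r and normalized) could look like:

```python
import numpy as np

def kegen_like(nx, r, sigma):
    # Hypothetical stand-in for kegen(nx, r, sigma): Gaussian with standard
    # deviation sigma, zeroed outside a disc of radius r, then normalized.
    # The exact kegen implementation may differ.
    ax = np.arange(nx) - (nx - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    k[xx**2 + yy**2 > r**2] = 0.0
    return k / k.sum()

k = kegen_like(120, 40, 4)   # same arguments as the kernel shown in Figure 3
```

The result is a normalized point-spread function peaked at the center of the grid.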

    Figure 3.  kegen(120,40,4) kernel.

    Example 1.

    For this example, we have used three different types of images: the Goldhills, Kids and Peppers images (see [33]). The Goldhills image is a real image and the Peppers image is a nontexture image. The Kids image is a complicated image, because it contains a small-scale texture part (shirt) and a large-scale cartoon part (face). The different aspects of all images can be seen in Figure 5. Each subfigure has a pixel size of 256×256. Here, we have also compared our MC-based ALMs, i.e., the ALM and preconditioned ALM (PALM), with the TV-based method (TVM) [1,29,36]. The Hessian matrix of the TV-based model is SPD, so for the numerical calculation we have used the CG method. The parameters of the ALM and PALM were established by referencing [42,43]. The parameters for the ALM and PALM were α=8e9, r1=9.5e7, r2=1e6, r3=1e8, r4=1e5, β=1 and γ=1. For the TVM algorithm (CG), we used α=1e6 and β=1, as according to [36]. For the numerical calculations, the kegen(nx,300,10) kernel was used. The stopping criterion for the numerical methods is based on the tolerance tol = 10^{-7}. Tables 1–3 contain all of the information on this experiment. A performance comparison of the TVM, ALM and PALM is depicted in Figure 7. The relative residues and eigenvalues of the ALM and PALM are depicted in Figures 4 and 6, respectively.

    Table 1.  Comparison of TVM, ALM and PALM performance on the Goldhills image.
    Image   Pixel size   Blurred PSNR   Method   Deblurred PSNR   CPU time (s)
    Goldhills 128×128 22.9784 TVM 48.5749 37.1002
    128×128 22.9784 ALM 56.3756 40.8027
    128×128 22.9784 PALM 56.3654 30.5428
    256×256 22.2335 TVM 42.3448 146.2581
    256×256 22.2335 ALM 51.9872 195.2980
    256×256 22.2335 PALM 51.8512 126.7891

    Table 2.  Comparison of TVM, ALM and PALM performance on the Kids image.
    Image   Pixel size   Blurred PSNR   Method   Deblurred PSNR   CPU time (s)
    Kids 128×128 20.4859 TVM 45.2541 21.0522
    128×128 20.4859 ALM 52.1174 40.1252
    128×128 20.4859 PALM 52.0586 23.5469
    256×256 20.4147 TVM 40.5482 133.0289
    256×256 20.4147 ALM 46.0595 189.2586
    256×256 20.4147 PALM 46.9837 141.2561

    Table 3.  Comparison of TVM, ALM and PALM performance on the Peppers image.
    Image   Pixel size   Blurred PSNR   Method   Deblurred PSNR   CPU time (s)
    Peppers 128×128 20.5545 TVM 46.2567 30.2596
    128×128 20.5545 ALM 49.7693 37.2904
    128×128 20.5545 PALM 49.1567 32.2563
    256×256 20.5531 TVM 49.2863 118.5327
    256×256 20.5531 ALM 51.0284 190.3504
    256×256 20.5531 PALM 51.4522 146.3177

    Figure 4.  Convergence of CG (original) and PCG (preconditioned) for the inner iteration count for the Goldhills image. Pixel size of (left) 128×128 and (right) 256×256.
    Figure 5.  (first row) Blurry images, (second row) TVM deblurred images, (third row) ALM deblurred images and (last row) PALM deblurred images.
    Figure 6.  (left) Eigenvalues of ˉA and (right) eigenvalues of P1ˉA.
    Figure 7.  Performance of TVM, ALM and PALM on different images. Pixel size of (left) 128×128 and (right) 256×256.

    Remarks:

    (1) From Figure 5 one can notice that the deblurred images produced via our methods (ALM and PALM) are much better than those produced via the TVM.

    (2) From Tables 1–3, one can observe that the PSNR values of the ALM and PALM were considerably higher than those of the TVM for all images. Although the TVM often requires less computational time than the ALM, its PSNR values are quite low. For example, for the Goldhills image with a pixel size of 128×128, the TVM took 37.1002 seconds to reach a PSNR of 48.5749, but the ALM and PALM reached higher PSNR values of 56.3756 and 56.3654 in 40.8027 and 30.5428 seconds, respectively. For the Kids image with a pixel size of 256×256, the TVM took 133.0289 seconds to reach a PSNR of 40.5482, but the ALM and PALM reached higher PSNR values of 46.0595 and 46.9837 in 189.2586 and 141.2561 seconds, respectively. Similarly, for the Peppers image with a pixel size of 256×256, the TVM took 118.5327 seconds to reach a PSNR of 49.2863, but the ALM and PALM reached higher PSNR values of 51.0284 and 51.4522 in 190.3504 and 146.3177 seconds, respectively. Similar behavior was observed for other sizes. So, the proposed ALM and PALM tend to generate high-quality deblurred images as compared to the TVM.

    Example 2.

    Here, we have compared our MC-based ALMs, i.e., the ALM and PALM, with the MC-based methods proposed by Fairag et al. [18], who developed a One-Level method (OLM) and a Two-Level method (TLM) for the MC-based image deblurring problem. For this experiment, we used three different types of images: the images of Goldhills (real), Cameraman (complicated) and Moon (non-texture). The different aspects of all images can be seen in Figure 8. Each subfigure has a pixel size of 256×256.

    Figure 8.  (first row) Blurry images, (second row) OLM deblurred images, (third row) TLM deblurred images, (fourth row) ALM deblurred images and (last row) PALM deblurred images.

    The parameters for the ALM and PALM were α=1e9, r1=9.5e7, r2=1e6, r3=1e8, r4=1e5, β=1 and γ=1. For the MC-based algorithms (OLM and TLM), we used α=8e9 and β=1. The Level-Ⅱ parameter ˜α of the TLM was calculated by referencing the formula presented in [18]. For the numerical calculations, the kegen(nx,300,10) kernel was used. The stopping criterion for the numerical methods is based on the tolerance tol = 10^{-8}. Tables 4–6 contain all of the information on this experiment. A performance comparison of the ALM, PALM, OLM and TLM is depicted in Figure 9.

    Table 4.  Comparison of OLM, TLM, ALM and PALM performance on the Goldhills image.
    Image   Pixel size   Blurred PSNR   Method   Deblurred PSNR   CPU time (s)
    Goldhills 128×128 22.9784 OLM 50.2303 47.9473
    128×128 22.9784 TLM 54.6042 31.0442
    128×128 22.9784 ALM 56.3756 40.8027
    128×128 22.9784 PALM 56.3654 30.5428
    256×256 22.2335 OLM 45.1287 248.5400
    256×256 22.2335 TLM 49.2369 128.4097
    256×256 22.2335 ALM 51.9872 195.2980
    256×256 22.2335 PALM 51.8512 126.7891

    Table 5.  Comparison of OLM, TLM, ALM and PALM performance on the Cameraman image.
    Image   Pixel size   Blurred PSNR   Method   Deblurred PSNR   CPU time (s)
    Cameraman 128×128 18.6322 OLM 43.8732 24.5327
    128×128 18.6322 TLM 44.1709 19.7541
    128×128 18.6322 ALM 48.7714 38.4161
    128×128 18.6322 PALM 48.9437 18.2997
    256×256 17.8172 OLM 38.8709 197.7316
    256×256 17.8172 TLM 40.7359 112.5587
    256×256 17.8172 ALM 40.1991 164.2474
    256×256 17.8172 PALM 40.9556 108.2873

    Table 6.  Comparison of OLM, TLM, ALM and PALM performance on the Moon image.
    Image   Pixel size   Blurred PSNR   Method   Deblurred PSNR   CPU time (s)
    Moon 128×128 26.1840 OLM 55.2128 34.3866
    128×128 26.1840 TLM 56.5319 24.8366
    128×128 26.1840 ALM 60.2041 25.3032
    128×128 26.1840 PALM 60.1598 16.4356
    256×256 26.4905 OLM 52.3328 179.7899
    256×256 26.4905 TLM 55.4471 125.3007
    256×256 26.4905 ALM 56.9144 148.3155
    256×256 26.4905 PALM 56.7752 119.1213

    Figure 9.  Performance of OLM, TLM, ALM and PALM on different images. Pixel size of (left) 128×128 and (right) 256×256.

    Remarks:

    (1) From Figure 8, one can notice that the ALM and PALM methods both generated slightly higher quality results.

    (2) From Tables 4–6, it can be observed that the PSNR values of the ALM and PALM were higher than those of the OLM and TLM for all images. The TLM generated its PSNR in much less time than the OLM and ALM, but not the PALM. For example, for the Goldhills image with a pixel size of 256×256, the OLM, TLM and ALM took 248.5400, 128.4097 and 195.2980 seconds, respectively, to deblur the image. But the PALM only required 126.7891 seconds for deblurring. For the Cameraman image with a pixel size of 128×128, the OLM, TLM and ALM took 24.5327, 19.7541 and 38.4161 seconds, respectively, to deblur the image. But the PALM only required 18.2997 seconds for deblurring. Similarly, for the Moon image with a pixel size of 128×128, the OLM, TLM and ALM took 34.3866, 24.8366 and 25.3032 seconds, respectively, to deblur the image. But the PALM only required 16.4356 seconds for deblurring. Similar behavior was observed for other sizes. So, our PALM consumed much less CPU time than the other methods, and it also generated high-quality deblurred images.

    An ALM for the primal form of the image deblurring problem with an MC regularization functional has been presented. A new SPD circulant preconditioner has been introduced to overcome the slow convergence of the conjugate gradient (CG) method inside the ALM. Numerical experiments were conducted on different kinds of images (synthetic, real, complicated and nontexture) using the proposed PALM with the new preconditioner. We have compared the TV (total variation)-based algorithm with our mean curvature-based algorithms (the ALM and PALM). The proposed methods, i.e., the ALM and PALM, were also compared with the latest MC-based techniques, i.e., the OLM and TLM. The numerical experiments showed the effectiveness of the proposed PALM with the new preconditioner. In the future, we will work on developing an ALM for other computationally expensive regularization functionals, such as the fractional TV regularized tensor model [21]. Furthermore, the matrix B_h in the system given by Eq (3.28) was also constructed to have a block Toeplitz structure, so a single-term preconditioner such as a tau preconditioner [15,20] can be used to approximate both matrices B_h and K_h in order to increase the convergence rate of the CG method inside the ALM.
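    The core idea behind the circulant preconditioning used here, i.e., approximating a Toeplitz coefficient matrix by a circulant matrix whose inverse can be applied in O(n log n) via the FFT [10], can be illustrated on a one-dimensional model problem. The following NumPy sketch builds T. Chan's optimal circulant preconditioner for a symmetric Toeplitz matrix and compares preconditioned against unpreconditioned CG; the helper names (`pcg`, `apply_Cinv`) and the 1-D Laplacian test matrix are illustrative stand-ins, not the paper's actual deblurring system:

    ```python
    import numpy as np

    # 1-D model problem: symmetric positive definite Toeplitz matrix
    # (discrete Laplacian), standing in for the deblurring system.
    n = 64
    t = np.zeros(n)
    t[0], t[1] = 2.0, -1.0
    A = t[np.abs(np.subtract.outer(np.arange(n), np.arange(n)))]

    # T. Chan's optimal circulant preconditioner C: its first column c
    # averages the Toeplitz diagonals, c_k = ((n-k) t_k + k t_{n-k}) / n.
    k = np.arange(n)
    c = ((n - k) * t + k * np.roll(t[::-1], 1)) / n
    lam = np.fft.fft(c).real          # eigenvalues of C (all positive here)

    def apply_Cinv(r):
        """Apply C^{-1} in O(n log n) using the FFT diagonalization of C."""
        return np.real(np.fft.ifft(np.fft.fft(r) / lam))

    def pcg(A, b, Minv, tol=1e-8, maxit=500):
        """Preconditioned conjugate gradient; returns (solution, iteration count)."""
        x = np.zeros_like(b)
        r = b.copy()
        z = Minv(r)
        p = z.copy()
        rz = r @ z
        for it in range(1, maxit + 1):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol * np.linalg.norm(b):
                return x, it
            z = Minv(r)
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x, maxit

    b = np.random.default_rng(0).standard_normal(n)
    x_pcg, it_pcg = pcg(A, b, apply_Cinv)     # circulant-preconditioned CG
    x_cg, it_cg = pcg(A, b, lambda r: r)      # plain CG (identity preconditioner)
    print(it_pcg, it_cg)   # the preconditioned run takes noticeably fewer iterations
    ```

    Because C is circulant, each application of C^{-1} costs only two FFTs, so the per-iteration overhead is small while the clustered spectrum of C^{-1}A sharply reduces the CG iteration count, which is the same mechanism the paper exploits inside the ALM.
    
    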

    The second and third authors would like to thank King Fahd University of Petroleum and Minerals (KFUPM) for its continuous support. The authors also thank the referees for their very careful reading and valuable comments. This work was funded by KFUPM, Grant No. #SB201012.

    The authors declare that there is no conflict of interest.



    [1] R. Acar, C. R. Vogel, Analysis of bounded variation penalty methods for ill-posed problems, Inverse Probl., 10 (1994), 1217–1229. https://doi.org/10.1088/0266-5611/10/6/003 doi: 10.1088/0266-5611/10/6/003
    [2] S. Ahmad, A. M. Al-Mahdi, R. Ahmed, Two new preconditioners for mean curvature-based image deblurring problem, AIMS Math., 6 (2021), 13824–13844.
    [3] S. Ahmad, F. Fairag, Circulant preconditioners for mean curvature-based image deblurring problem, J. Algorithms Comput., 15 (2021).
    [4] M. Benzi, G. H. Golub, A preconditioner for generalized saddle point problems, SIAM J. Matrix Anal. A., 26 (2004), 20–41.
    [5] C. Brito-Loeza, K. Chen, V. Uc-Cetina, Image denoising using the Gaussian curvature of the image surface, Numer. Meth. Part. D. E., 32 (2016), 1066–1089. https://doi.org/10.1002/num.22042 doi: 10.1002/num.22042
    [6] P. Campisi, K. Egiazarian, Blind image deconvolution: Theory and applications, CRC Press, 2016.
    [7] R. H. Chan, Toeplitz preconditioners for Toeplitz systems with nonnegative generating functions, IMA J. Numer. Anal., 11 (1991), 333–345.
    [8] R. H. Chan, K. P. Ng, Toeplitz preconditioners for Hermitian Toeplitz systems, Linear Algebra Appl., 190 (1993), 181–208. https://doi.org/10.1016/0024-3795(93)90226-E doi: 10.1016/0024-3795(93)90226-E
    [9] S. H. Chan, R. Khoshabeh, K. B. Gibson, P. E. Gill, T. Q. Nguyen, An augmented Lagrangian method for total variation video restoration, IEEE T. Image Process., 20 (2011), 3097–3111. https://doi.org/10.1109/TIP.2011.2158229 doi: 10.1109/TIP.2011.2158229
    [10] T. F. Chan, An optimal circulant preconditioner for Toeplitz systems, SIAM J. Sci. Comput., 9 (1988), 766–771. https://doi.org/10.1137/0909051 doi: 10.1137/0909051
    [11] C. Chen, C. Ma, A generalized shift-splitting preconditioner for saddle point problems, Appl. Math. Lett., 43 (2015), 49–55. https://doi.org/10.1016/j.aml.2014.12.001 doi: 10.1016/j.aml.2014.12.001
    [12] K. Chen, Introduction to variational image-processing models and applications, Int. J. Comput. Math., 90 (2013), 1–8.
    [13] K. Chen, F. Fairag, A. Al-Mahdi, Preconditioning techniques for an image deblurring problem, Numer. Linear Algebra, 23 (2016), 570–584. https://doi.org/10.1002/nla.2040 doi: 10.1002/nla.2040
    [14] N. R. Choi, A comparative study of non-blind and blind deconvolution of ultrasound images, Dissertations Theses Gradworks, University of Southern California, 2014.
    [15] F. Di Benedetto, Solution of Toeplitz normal equations by sine transform based preconditioning, Linear Algebra Appl., 285 (1998), 229–255. https://doi.org/10.1016/S0024-3795(98)10115-5 doi: 10.1016/S0024-3795(98)10115-5
    [16] F. Fairag, S. Ahmad, A two-level method for image deblurring problem, in 2019 8th International Conference on Modeling Simulation and Applied Optimization (ICMSAO), IEEE, 2019, 1–5.
    [17] F. Fairag, K. Chen, S. Ahmad, Analysis of the CCFD method for MC-based image denoising problems, Electron. T. Numer. Ana., 54 (2021), 108–127. https://doi.org/10.1553/etna_vol54s108 doi: 10.1553/etna_vol54s108
    [18] F. Fairag, K. Chen, C. Brito-Loeza, S. Ahmad, A two-level method for image denoising and image deblurring models using mean curvature regularization, Int. J. Comput. Math., 2021, 1–21.
    [19] X. Ge, J. Tan, L. Zhang, Y. Qian, Blind image deconvolution via salient edge selection and mean curvature regularization, Signal Process., 190 (2022), 108336.
    [20] X. M. Gu, Y. L. Zhao, X. L. Zhao, B. Carpentieri, Y. Y. Huang, A note on parallel preconditioning for the all-at-once solution of Riesz fractional diffusion equations, Numer. Math. Theor. Meth. Appl., 14 (2021), 893–919. https://doi.org/10.4208/nmtma.OA-2020-0020 doi: 10.4208/nmtma.OA-2020-0020
    [21] L. Guo, X. L. Zhao, X. M. Gu, Y. L. Zhao, Y. B. Zheng, T. Z. Huang, Three-dimensional fractional total variation regularized tensor optimized model for image deblurring, Appl. Math. Comput., 404 (2021), 126224.
    [22] C. Li, W. Yin, H. Jiang, Y. Zhang, An efficient augmented Lagrangian method with applications to total variation minimization, Comput. Optim. Appl., 56 (2013), 507–530. https://doi.org/10.1007/s10589-013-9576-1 doi: 10.1007/s10589-013-9576-1
    [23] L. Li, J. Pan, W. S. Lai, C. Gao, N. Sang, M. H. Yang, Learning a discriminative prior for blind image deblurring, IEEE conference on computer vision and pattern recognition, 2018, 6616–6625.
    [24] F. R. Lin, Preconditioners for block Toeplitz systems based on circulant preconditioners, Numer. Algorithms, 26 (2001), 365–379. https://doi.org/10.1023/A:1016674923507 doi: 10.1023/A:1016674923507
    [25] F. R. Lin, W. K. Ching, Inverse Toeplitz preconditioners for Hermitian Toeplitz systems, Numer. Linear Algebra, 12 (2005), 221–229. https://doi.org/10.1002/nla.397 doi: 10.1002/nla.397
    [26] F. R. Lin, C. X. Wang, BTTB preconditioners for BTTB systems, Numer. Algorithms, 60 (2012), 153–167. https://doi.org/10.1007/s11075-011-9516-z doi: 10.1007/s11075-011-9516-z
    [27] M. K. Ng, Iterative methods for Toeplitz systems, London, U.K.: Oxford University Press, 2004.
    [28] K. L. Riley, Two-level preconditioners for regularized ill-posed problems, PhD Thesis, Montana State University, 1999.
    [29] L. I. Rudin, S. Osher, E. Fatemi, Nonlinear total variation based noise removal algorithms, Physica D, 60 (1992), 259–268.
    [30] D. K. Salkuyeh, M. Masoudi, D. Hezari, On the generalized shift-splitting preconditioner for saddle point problems, Appl. Math. Lett., 48 (2015), 55–61. https://doi.org/10.1007/s10986-015-9265-0 doi: 10.1007/s10986-015-9265-0
    [31] L. Sun, K. Chen, A new iterative algorithm for mean curvature-based variational image denoising, BIT Numer. Math., 54 (2014), 523–553.
    [32] X. C. Tai, J. Hahn, G. J. Chung, A fast algorithm for Euler's elastica model using augmented Lagrangian method, SIAM J. Imaging Sci., 4 (2011), 313–344. https://doi.org/10.1137/100803730 doi: 10.1137/100803730
    [33] X. C. Tai, K. A. Lie, T. F. Chan, S. Osher, Image processing based on partial differential equations: Proceedings of the international conference on PDE-based image processing and related inverse problems, CMA, Oslo, August 8–12, 2005, Springer Science & Business Media, 2006.
    [34] S. Tao, W. Dong, H. Feng, Z. Xu, Q. Li, Non-blind image deconvolution using natural image gradient prior, Optik, 124 (2013), 6599–6605. https://doi.org/10.1016/j.ijleo.2013.05.068 doi: 10.1016/j.ijleo.2013.05.068
    [35] A. N. Tikhonov, Regularization of incorrectly posed problems, Soviet Math. Dokl., 4 (1963), 1624–1627.
    [36] C. R. Vogel, M. E. Oman, Fast, robust total variation-based reconstruction of noisy, blurred images, IEEE T. Image Process., 7 (1998), 813–824. https://doi.org/10.1109/83.679423 doi: 10.1109/83.679423
    [37] C. Wu, X. C. Tai, Augmented Lagrangian method, dual methods, and split Bregman iteration for ROF, vectorial TV, and high order models, SIAM J. Imaging Sci., 3 (2010), 300–339. https://doi.org/10.1137/090767558 doi: 10.1137/090767558
    [38] N. Xiong, R. W. Liu, M. Liang, D. Wu, Z. Liu, H. Wu, Effective alternating direction optimization methods for sparsity-constrained blind image deblurring, Sensors, 17 (2017), 174. https://doi.org/10.3390/s17010174 doi: 10.3390/s17010174
    [39] F. Yang, K. Chen, B. Yu, D. Fang, A relaxed fixed point method for a mean curvature-based denoising model, Optim. Meth. Softw., 29 (2014), 274–285.
    [40] J. Zhang, C. Deng, Y. Shi, S. Wang, Y. Zhu, A fast linearised augmented Lagrangian method for a mean curvature based model, E. Asian J. Appl. Math., 8 (2018), 463–476.
    [41] W. Zhu, T. Chan, Image denoising using mean curvature of image surface, SIAM J. Imaging Sci., 5 (2012), 1–32.
    [42] W. Zhu, X. C. Tai, T. Chan, Augmented Lagrangian method for a mean curvature based image denoising model, Inverse Probl. Imag., 7 (2013), 1409–1432.
    [43] W. Zhu, X. C. Tai, T. Chan, A fast algorithm for a mean curvature based image denoising model using augmented Lagrangian method, in Efficient Algorithms for Global Optimization Methods in Computer Vision, Springer, 2014, 104–118.
  • This article has been cited by:

    1. Azhar Iqbal, Shahbaz Ahmad, Junseok Kim, Two-Level method for blind image deblurring problems, 2025, 485, 00963003, 129008, 10.1016/j.amc.2024.129008
    2. Shahid Saleem, Faisal Fairag, Adel M. Al-Mahdi, Mohammad M. Al-Gharabli, Shahbaz Ahmad, Conformable fractional order variation-based image deblurring, 2024, 11, 26668181, 100827, 10.1016/j.padiff.2024.100827
    3. Yuzi Jin, Soobin Kwak, Seokjun Ham, Junseok Kim, A fast and efficient numerical algorithm for image segmentation and denoising, 2024, 9, 2473-6988, 5015, 10.3934/math.2024243
    4. Shahbaz Ahmad, Wasif Khan, Optimizing convergence: GMRES preconditioner for Darcy flow problem in a fracture network, 2025, 29, 1420-0597, 10.1007/s10596-024-10332-8
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
