Research article

A variational image denoising model under mixed Cauchy and Gaussian noise

  • Received: 23 June 2022 Revised: 11 August 2022 Accepted: 22 August 2022 Published: 06 September 2022
  • MSC : 68U10, 65K10, 94A08, 49J40

  • In this article, we propose a novel variational model for restoring images in the presence of a mixture of Cauchy and Gaussian noise. The model involves a novel data-fidelity term that features the mixed noise as an infimal convolution of two noise distributions, together with total variation regularization. This data-fidelity term contributes to a suitable separation of the Cauchy and Gaussian noise components, facilitating simultaneous removal of the mixed noise. In addition, the total variation regularization enables adequate denoising in homogeneous regions while preserving edges. Despite the nonconvexity of the model, the existence of a solution is proven. By employing an alternating minimization approach and the alternating direction method of multipliers, we present an iterative algorithm for solving the proposed model. Experimental results validate the effectiveness of the proposed model compared with existing models, in terms of both visual quality and image quality measurements.

    Citation: Miyoun Jung. A variational image denoising model under mixed Cauchy and Gaussian noise[J]. AIMS Mathematics, 2022, 7(11): 19696-19726. doi: 10.3934/math.20221080




    The Sylvester equation

$$AX + XB = C \qquad (1.1)$$

appears frequently in many areas of applied mathematics. We refer readers to the elegant survey by Bhatia and Rosenthal [1] and the references therein for the history of the Sylvester equation and many interesting and important theoretical results. The Sylvester equation is important in a number of applications, such as matrix eigenvalue decompositions [2,3], control theory [3,4,5], model reduction [6,7,8,9], mathematical physics for constructing exact solutions of nonlinear integrable equations [10], feature problems of slice semi-regular functions [11] and the numerical solution of matrix differential Riccati equations [12,13,14]. There are several numerical algorithms to compute the solution of the Sylvester equation. The standard ones are the Bartels–Stewart algorithm [15] and the Hessenberg–Schur method, first described by Enright [14] but more often attributed to Golub, Nash and Van Loan [16]. Other computationally efficient approaches for the case that both A and B are stable, i.e., both A and B have all their eigenvalues in the open left half plane, are the sign function method [17], the Smith method [18] and ADI iteration methods [19,20,21,22]. All these methods are efficient when the dense matrices A and B are of small size.

Recent interest is directed more towards large and sparse matrices A and B, with C of low rank. For dense A and B, an approach based on the sign function method that exploits the low-rank structure of C is suggested in [23]. This approach is further used in [24] in order to solve the large-scale Sylvester equation with sparse A and B, i.e., matrices A and B that can be represented by O(n log(n)) data. Problems concerning the sensitivity of the solution of the Sylvester equation have also been widely studied. Several books contain results on these problems [25,26,27].

    In this paper, we focus our attention on the multiple constrained least squares solution of the Sylvester equation, that is, the following multiple constrained least squares problem:

$$\min_{X^T = X,\; L \le X \le U,\; \lambda_{\min}(X) \ge \varepsilon > 0} \; f(X) = \frac{1}{2}\|AX + XB - C\|^2 \qquad (1.2)$$

where $A, B, C, L$ and $U$ are given $n\times n$ real matrices, $X$ is an $n\times n$ real symmetric matrix which we wish to find, $\lambda_{\min}(X)$ represents the smallest eigenvalue of the symmetric matrix $X$, and $\varepsilon$ is a given positive constant. The inequality $X \le Y$, for any two real matrices, means that $X_{ij} \le Y_{ij}$, where $X_{ij}$ and $Y_{ij}$ denote the $ij$th entries of the matrices $X$ and $Y$, respectively.
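To make the setting concrete, the following is a minimal MATLAB sketch (the function name objSylvester is illustrative, not from the paper) that evaluates the objective f(X) of (1.2) and checks the three constraints for a candidate matrix X.

```matlab
% Minimal sketch (illustrative name): evaluate f(X) = 0.5*||A*X + X*B - C||_F^2
% and check feasibility of X for problem (1.2): symmetry, L <= X <= U entrywise,
% and smallest eigenvalue at least eps0.
function [fval, feasible] = objSylvester(X, A, B, C, L, U, eps0)
    R = A*X + X*B - C;                     % residual of the Sylvester map
    fval = 0.5*norm(R, 'fro')^2;           % least squares objective
    symOK = norm(X - X', 'fro') < 1e-10;   % X should be symmetric
    boxOK = all(X(:) >= L(:)) && all(X(:) <= U(:));  % entrywise bounds
    eigOK = min(eig((X + X')/2)) >= eps0;            % eigenvalue constraint
    feasible = symOK && boxOK && eigOK;
end
```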

Least squares estimation of matrices under multiple constraints is widely used in mathematical economics, statistical data analysis, image reconstruction, recommendation problems and so on. Such problems differ from ordinary least squares problems in that the estimated matrices are usually required to be symmetric positive definite, bounded and, sometimes, to have some special construction patterns. For example, in the dynamic equilibrium model of economy [28], one needs to estimate an aggregate demand function derived from a second order analysis of the utility function of individuals. This problem is formulated as finding the least squares solution of the matrix equation $AX=B$, where $A$ and $B$ are given and the fitting matrix $X$ is symmetric, bounded and has smallest eigenvalue no less than a specified positive number, since, in a neighborhood of equilibrium, the utility function is approximated by a strictly concave quadratic through its Hessian matrix. Other examples, discussed in [29,30], are to find a symmetric positive definite patterned matrix closest to a sample covariance matrix and to find a symmetric, diagonally dominant matrix with positive diagonal closest to a given matrix, respectively. Based on the above analysis, we have a strong motivation to study the multiple constrained least squares problem (1.2).

In this paper, we first transform the multiple constrained least squares problem (1.2) into an equivalent constrained optimization problem. Then, we give the necessary and sufficient conditions for the existence of a solution to the equivalent constrained optimization problem. Noting that the alternating direction method of multipliers (ADMM) is a one-step iterative method, we propose a multi-step alternating direction method of multipliers (MSADMM) for the multiple constrained least squares problem (1.2), and analyze the global convergence of the proposed algorithm. We give some numerical examples to illustrate the effectiveness of the proposed algorithm for the multiple constrained least squares problem (1.2) and list some problems that should be studied in the near future. We also give some numerical comparisons between MSADMM, ADMM and ADMM with Anderson acceleration (ACADMM).

Throughout this paper, $R^{m\times n}$, $SR^{n\times n}$ and $SR^{n\times n}_{\ge 0}$ denote the set of $m\times n$ real matrices, $n\times n$ symmetric matrices and $n\times n$ symmetric positive semidefinite matrices, respectively. $I_n$ stands for the $n\times n$ identity matrix. $A_+$ denotes the matrix with $ij$th entry equal to $\max\{0, A_{ij}\}$. The inner product in the space $R^{m\times n}$ is defined as $\langle A, B\rangle = \mathrm{tr}(A^T B) = \sum_{ij} A_{ij}B_{ij}$ for all $A, B \in R^{m\times n}$, and the associated norm is the Frobenius norm, denoted by $\|A\|$. $P_\Omega(X)$ denotes the projection of the matrix $X$ onto the constrained matrix set $\Omega$, that is, $P_\Omega(X) = \arg\min_{Z\in\Omega} \|Z - X\|$.

    In this section, we give an existence theorem for a solution of the multiple constrained least squares problem (1.2) and some theoretical results for the optimization problems which are useful for discussions in the next sections.

Theorem 2.1. The matrix $\tilde{X}$ is a solution of the multiple constrained least squares problem (1.2) if and only if there exist matrices $\tilde{\Lambda}_1, \tilde{\Lambda}_2$ and $\tilde{\Lambda}_3$ such that the following conditions (2.1)–(2.4) are satisfied.

$$A^T(A\tilde{X}+\tilde{X}B-C) + (A\tilde{X}+\tilde{X}B-C)B^T - \tilde{\Lambda}_1 + \tilde{\Lambda}_2 - \tilde{\Lambda}_3 = 0, \qquad (2.1)$$
$$\langle \tilde{\Lambda}_1, \tilde{X}-L\rangle = 0, \quad \tilde{X}-L \ge 0, \quad \tilde{\Lambda}_1 \ge 0, \qquad (2.2)$$
$$\langle \tilde{\Lambda}_2, U-\tilde{X}\rangle = 0, \quad U-\tilde{X} \ge 0, \quad \tilde{\Lambda}_2 \ge 0, \qquad (2.3)$$
$$\langle \tilde{\Lambda}_3+\tilde{\Lambda}_3^T, \tilde{X}-\varepsilon I_n\rangle = 0, \quad \tilde{X}-\varepsilon I_n \in SR^{n\times n}_{\ge 0}, \quad \tilde{\Lambda}_3+\tilde{\Lambda}_3^T \in SR^{n\times n}_{\ge 0}. \qquad (2.4)$$

Proof. Obviously, the multiple constrained least squares problem (1.2) can be rewritten as

$$\min_{X\in SR^{n\times n}} F(X) = \frac{1}{2}\|AX+XB-C\|^2 \quad \text{s.t. } X-L \ge 0,\; U-X \ge 0,\; X-\varepsilon I_n \in SR^{n\times n}_{\ge 0}. \qquad (2.5)$$

Then, if $\tilde{X}$ is a solution to the constrained optimization problem (2.5), $\tilde{X}$ certainly satisfies the KKT conditions of the constrained optimization problem (2.5), and hence of the multiple constrained least squares problem (1.2). That is, there exist matrices $\tilde{\Lambda}_1, \tilde{\Lambda}_2$ and $\tilde{\Lambda}_3$ such that conditions (2.1)–(2.4) are satisfied.

Conversely, assume that there exist matrices $\tilde{\Lambda}_1, \tilde{\Lambda}_2$ and $\tilde{\Lambda}_3$ such that conditions (2.1)–(2.4) are satisfied. Let

$$\bar{F}(X) = F(X) - \langle \tilde{\Lambda}_1, X-L\rangle - \langle \tilde{\Lambda}_2, U-X\rangle - \left\langle \frac{\tilde{\Lambda}_3+\tilde{\Lambda}_3^T}{2}, X-\varepsilon I_n\right\rangle;$$

then, for any matrix $W\in SR^{n\times n}$, we have

$$\begin{aligned}
\bar{F}(\tilde{X}+W) &= \frac{1}{2}\|A(\tilde{X}+W)+(\tilde{X}+W)B-C\|^2 - \langle \tilde{\Lambda}_1, \tilde{X}+W-L\rangle - \langle \tilde{\Lambda}_2, U-\tilde{X}-W\rangle - \left\langle \frac{\tilde{\Lambda}_3+\tilde{\Lambda}_3^T}{2}, \tilde{X}+W-\varepsilon I_n\right\rangle\\
&= \bar{F}(\tilde{X}) + \frac{1}{2}\|AW+WB\|^2 + \langle AW+WB,\, A\tilde{X}+\tilde{X}B-C\rangle - \langle \tilde{\Lambda}_1, W\rangle + \langle \tilde{\Lambda}_2, W\rangle - \left\langle \frac{\tilde{\Lambda}_3+\tilde{\Lambda}_3^T}{2}, W\right\rangle\\
&= \bar{F}(\tilde{X}) + \frac{1}{2}\|AW+WB\|^2 + \left\langle W,\, A^T(A\tilde{X}+\tilde{X}B-C)+(A\tilde{X}+\tilde{X}B-C)B^T - \tilde{\Lambda}_1 + \tilde{\Lambda}_2 - \tilde{\Lambda}_3\right\rangle\\
&= \bar{F}(\tilde{X}) + \frac{1}{2}\|AW+WB\|^2 \;\ge\; \bar{F}(\tilde{X}).
\end{aligned}$$

This implies that $\tilde{X}$ is a global minimizer of the function $\bar{F}(X)$ over $X\in SR^{n\times n}$. Since

$$\langle \tilde{\Lambda}_1, \tilde{X}-L\rangle = 0, \quad \langle \tilde{\Lambda}_2, U-\tilde{X}\rangle = 0, \quad \langle \tilde{\Lambda}_3+\tilde{\Lambda}_3^T, \tilde{X}-\varepsilon I_n\rangle = 0,$$

and $\bar{F}(X)\ge\bar{F}(\tilde{X})$ holds for all $X\in SR^{n\times n}$, we have

$$F(X) \ge F(\tilde{X}) + \langle \tilde{\Lambda}_1, X-L\rangle + \langle \tilde{\Lambda}_2, U-X\rangle + \left\langle \frac{\tilde{\Lambda}_3+\tilde{\Lambda}_3^T}{2}, X-\varepsilon I_n\right\rangle.$$

Noting that $\tilde{\Lambda}_1\ge 0$, $\tilde{\Lambda}_2\ge 0$ and $\tilde{\Lambda}_3+\tilde{\Lambda}_3^T \in SR^{n\times n}_{\ge 0}$, it follows that $F(X) \ge F(\tilde{X})$ holds for all $X$ with $X-L\ge 0$, $U-X\ge 0$ and $X-\varepsilon I_n \in SR^{n\times n}_{\ge 0}$. Hence, $\tilde{X}$ is a solution of the constrained optimization problem (2.5), that is, $\tilde{X}$ is a solution of the multiple constrained least squares problem (1.2).
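As a numerical companion to Theorem 2.1, one can measure how far a candidate point and multipliers are from satisfying (2.1)–(2.4). The MATLAB sketch below (the function name kktResidual is illustrative, not from the paper) returns the stationarity residual and the complementarity gaps; feasibility and sign conditions can be checked separately as in the earlier sketch.

```matlab
% Sketch (illustrative helper): residuals of the KKT system (2.1)-(2.4)
% for a candidate Xt and multipliers Lam1, Lam2, Lam3.
function res = kktResidual(Xt, Lam1, Lam2, Lam3, A, B, C, L, U, eps0)
    n = size(Xt, 1);
    G = A*Xt + Xt*B - C;
    stat = A'*G + G*B' - Lam1 + Lam2 - Lam3;            % stationarity (2.1)
    res.stationarity = norm(stat, 'fro');
    res.comp1 = abs(sum(sum(Lam1 .* (Xt - L))));        % complementarity (2.2)
    res.comp2 = abs(sum(sum(Lam2 .* (U - Xt))));        % complementarity (2.3)
    S = Lam3 + Lam3';
    res.comp3 = abs(sum(sum(S .* (Xt - eps0*eye(n))))); % complementarity (2.4)
end
```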

Lemma 2.1. [31] Assume that $\tilde{x}$ is a solution of the optimization problem

$$\min f(x) \quad \text{s.t. } x\in\Omega,$$

where $f(x)$ is a continuously differentiable function and $\Omega$ is a closed convex set; then

$$\langle \nabla f(\tilde{x}),\, x-\tilde{x}\rangle \ge 0, \quad \forall x\in\Omega.$$

Lemma 2.2. [31] Assume that $(\tilde{x}_1,\tilde{x}_2,\ldots,\tilde{x}_n)$ is a solution of the optimization problem

$$\min \sum_{i=1}^n f_i(x_i) \quad \text{s.t. } \sum_{i=1}^n A_i x_i = b,\; x_i\in\Omega_i,\; i=1,2,\ldots,n, \qquad (2.6)$$

where $f_i(x_i)\,(i=1,2,\ldots,n)$ are continuously differentiable functions and $\Omega_i\,(i=1,2,\ldots,n)$ are closed convex sets; then

$$\langle \nabla_{x_i} f_i(\tilde{x}_i) - A_i^T\tilde{\lambda},\, x_i-\tilde{x}_i\rangle \ge 0, \quad \forall x_i\in\Omega_i,\; i=1,2,\ldots,n,$$

where $\tilde{\lambda}$ is a solution to the dual problem of (2.6).

Lemma 2.3. Assume that $\Omega=\{X\in R^{n\times n}: L\le X\le U\}$. Then, for any matrix $M\in R^{n\times n}$, if $Y=P_\Omega(Y-M)$, we have

$$\langle (M)_+,\, Y-L\rangle = 0, \quad \langle (-M)_+,\, U-Y\rangle = 0.$$

Proof. Let

$$\tilde{Z} = \arg\min_{Z\in\Omega} \|Z-(Y-M)\|^2$$

and note that the optimization problem

$$\min_{Z\in\Omega} \|Z-(Y-M)\|^2$$

is equivalent to the optimization problem

$$\min_{Z\in R^{n\times n},\; Z-L\ge 0,\; U-Z\ge 0} \|Z-(Y-M)\|^2; \qquad (2.7)$$

then $\tilde{Z}$ satisfies the KKT conditions for the optimization problem (2.7). That is, there exist matrices $\tilde{\Lambda}_1\ge 0$ and $\tilde{\Lambda}_2\ge 0$ such that

$$\tilde{Z}-Y+M-\tilde{\Lambda}_1+\tilde{\Lambda}_2 = 0, \quad \langle \tilde{\Lambda}_1, \tilde{Z}-L\rangle = 0, \quad \langle \tilde{\Lambda}_2, U-\tilde{Z}\rangle = 0, \quad \tilde{Z}-L\ge 0, \quad U-\tilde{Z}\ge 0.$$

Since

$$Y = P_\Omega(Y-M) = \arg\min_{Z\in\Omega} \|Z-(Y-M)\|^2,$$

then

$$M-\tilde{\Lambda}_1+\tilde{\Lambda}_2 = 0, \quad \langle \tilde{\Lambda}_1, Y-L\rangle = 0, \quad \langle \tilde{\Lambda}_2, U-Y\rangle = 0, \quad Y-L\ge 0, \quad U-Y\ge 0.$$

So we have from the above conditions that $(\tilde{\Lambda}_1)_{ij}(\tilde{\Lambda}_2)_{ij}=0$ when $L_{ij}\ne U_{ij}$, and $(\tilde{\Lambda}_1)_{ij}$ and $(\tilde{\Lambda}_2)_{ij}$ can be arbitrarily selected as $(\tilde{\Lambda}_1)_{ij}\ge 0$ and $(\tilde{\Lambda}_2)_{ij}\ge 0$ when $L_{ij}=U_{ij}$. Noting that $M=\tilde{\Lambda}_1-\tilde{\Lambda}_2$, $\tilde{\Lambda}_1$ and $\tilde{\Lambda}_2$ can be selected as $\tilde{\Lambda}_1=(M)_+$ and $\tilde{\Lambda}_2=(-M)_+$. Hence, the results hold.

Lemma 2.4. Assume that $\Omega=\{X\in R^{n\times n}: X^T=X,\; \lambda_{\min}(X)\ge\varepsilon>0\}$. Then, for any matrix $M\in R^{n\times n}$, if $Y=P_\Omega(Y-M)$, we have

$$\langle M+M^T,\, Y-\varepsilon I_n\rangle = 0, \quad M+M^T\in SR^{n\times n}_{\ge 0}, \quad Y-\varepsilon I_n\in SR^{n\times n}_{\ge 0}.$$

Proof. Let

$$\tilde{Z} = \arg\min_{Z\in\Omega} \|Z-(Y-M)\|^2$$

and note that the optimization problem

$$\min_{Z\in\Omega} \|Z-(Y-M)\|^2$$

is equivalent to the optimization problem

$$\min_{Z\in SR^{n\times n},\; Z-\varepsilon I_n\in SR^{n\times n}_{\ge 0}} \|Z-(Y-M)\|^2; \qquad (2.8)$$

then $\tilde{Z}$ satisfies the KKT conditions for the optimization problem (2.8). That is, there exists a matrix $\tilde{\Lambda}\in SR^{n\times n}_{\ge 0}$ such that

$$\tilde{Z}-Y+M-\tilde{\Lambda}+(\tilde{Z}-Y+M-\tilde{\Lambda})^T = 0, \quad \langle \tilde{\Lambda}, \tilde{Z}-\varepsilon I_n\rangle = 0, \quad \tilde{Z}-\varepsilon I_n\in SR^{n\times n}_{\ge 0}.$$

Since

$$Y = P_\Omega(Y-M) = \arg\min_{Z\in\Omega} \|Z-(Y-M)\|^2,$$

then

$$M+M^T-2\tilde{\Lambda} = 0, \quad \langle \tilde{\Lambda}, Y-\varepsilon I_n\rangle = 0, \quad Y-\varepsilon I_n\in SR^{n\times n}_{\ge 0}.$$

Hence, the results hold.

In this section, we give a multi-step alternating direction method of multipliers (MSADMM) for the multiple constrained least squares problem (1.2). Obviously, the multiple constrained least squares problem (1.2) is equivalent to the following constrained optimization problem:

$$\begin{aligned}
\min_X\;\; & F(X) = \frac{1}{2}\|AX+XB-C\|^2,\\
\text{s.t. } & X-Y=0,\; X-Z=0,\\
& X\in R^{n\times n},\\
& Y\in\Omega_1=\{Y\in R^{n\times n}: L\le Y\le U\},\\
& Z\in\Omega_2=\{Z\in R^{n\times n}: Z^T=Z,\; \lambda_{\min}(Z)\ge\varepsilon>0\}.
\end{aligned} \qquad (3.1)$$

The Lagrange function, the augmented Lagrangian function and the dual problem of the constrained optimization problem (3.1) are, respectively,

$$L(X,Y,Z,M,N) = F(X) - \langle M, X-Y\rangle - \langle N, X-Z\rangle, \qquad (3.2)$$
$$L_\alpha(X,Y,Z,M,N) = F(X) + \frac{\alpha}{2}\|X-Y-M/\alpha\|^2 + \frac{\alpha}{2}\|X-Z-N/\alpha\|^2, \qquad (3.3)$$
$$\max_{M,N\in R^{n\times n}}\; \inf_{X\in R^{n\times n},\, Y\in\Omega_1,\, Z\in\Omega_2} L(X,Y,Z,M,N), \qquad (3.4)$$

where $M$ and $N$ are Lagrangian multipliers and $\alpha$ is a penalty parameter.

The alternating direction method of multipliers [32,33] for the constrained optimization problem (3.1) can be described as the following Algorithm 3.1.

Algorithm 3.1. ADMM to solve problem (3.1).
Step 1. Input matrices $A, B, C, L$ and $U$. Input constant $\varepsilon>0$, error tolerance $\varepsilon_{out}>0$ and penalty parameter $\alpha>0$. Choose initial matrices $Y^0, Z^0, M^0, N^0\in R^{n\times n}$. Let $k:=0$;
Step 2. Compute
$$\begin{aligned}
(a)\;& X^{k+1} = \arg\min_{X\in R^{n\times n}} L_\alpha(X, Y^k, Z^k, M^k, N^k),\\
(b)\;& Y^{k+1} = \arg\min_{Y\in\Omega_1} L_\alpha(X^{k+1}, Y, Z^k, M^k, N^k) = P_{\Omega_1}(X^{k+1}-M^k/\alpha),\\
(c)\;& Z^{k+1} = \arg\min_{Z\in\Omega_2} L_\alpha(X^{k+1}, Y^{k+1}, Z, M^k, N^k) = P_{\Omega_2}(X^{k+1}-N^k/\alpha),\\
(d)\;& M^{k+1} = M^k - \alpha(X^{k+1}-Y^{k+1}),\\
(e)\;& N^{k+1} = N^k - \alpha(X^{k+1}-Z^{k+1});
\end{aligned} \qquad (3.5)$$
Step 3. If $\left(\|Y^{k+1}-Y^k\|^2+\|Z^{k+1}-Z^k\|^2+\|M^{k+1}-M^k\|^2+\|N^{k+1}-N^k\|^2\right)^{1/2} < \varepsilon_{out}$, stop. In this case, $X^{k+1}$ is an approximate solution of problem (3.1);
Step 4. Let $k:=k+1$ and go to Step 2.

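The following is a compact MATLAB sketch of one pass of Algorithm 3.1 (Steps 2 and 3), assuming the problem data A, B, C, L, U and the parameters alpha, eps0, tol, maxit have been set up and the iterates Y, Z, M, N initialized to zero matrices. The helpers solveXSub, projBox and projSymEig are placeholders: solveXSub should return the minimizer of $L_\alpha$ over $X$ (e.g., via LSQR as in Algorithm 5.1 below), and the two projections implement $P_{\Omega_1}$ and $P_{\Omega_2}$ as described in Section 5.

```matlab
% Sketch of Algorithm 3.1 (ADMM); solveXSub, projBox, projSymEig are assumed helpers.
for k = 0:maxit-1
    X = solveXSub(A, B, C, Y, Z, M, N, alpha);   % (3.5a): X-subproblem
    Y_new = projBox(X - M/alpha, L, U);          % (3.5b): projection onto Omega_1
    Z_new = projSymEig(X - N/alpha, eps0);       % (3.5c): projection onto Omega_2
    M_new = M - alpha*(X - Y_new);               % (3.5d): multiplier update
    N_new = N - alpha*(X - Z_new);               % (3.5e): multiplier update
    change = sqrt(norm(Y_new-Y,'fro')^2 + norm(Z_new-Z,'fro')^2 + ...
                  norm(M_new-M,'fro')^2 + norm(N_new-N,'fro')^2);
    Y = Y_new; Z = Z_new; M = M_new; N = N_new;
    if change < tol, break; end                  % stopping test of Step 3
end
```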

The alternating direction method of multipliers (ADMM) has been well studied in the context of linearly constrained convex optimization. In the last few years, we have witnessed a number of novel applications arising from image processing, compressive sensing, statistics, etc. ADMM is a splitting version of the augmented Lagrange method (ALM) in which the ALM subproblem is decomposed into multiple subproblems at each iteration, so that the variables can be solved for separately in alternating order. ADMM is, in fact, a one-step iterative method; that is, the current iterate is obtained from information at the previous step only, and the convergence rate of ADMM is only linear, as proved in [33]. In this paper we propose a multi-step alternating direction method of multipliers (MSADMM), which is more effective than ADMM, for the constrained optimization problem (3.1). The iterative pattern of MSADMM can be described as the following Algorithm 3.2.

Algorithm 3.2. MSADMM to solve problem (3.1).
Step 1. Input matrices $A, B, C, L$ and $U$. Input constant $\varepsilon>0$, error tolerance $\varepsilon_{out}>0$, penalty parameter $\alpha>0$ and correction factor $\gamma\in(0,2)$. Choose initial matrices $Y^0, Z^0, M^0, N^0\in R^{n\times n}$. Let $k:=0$;
Step 2. ADMM step
$$\begin{aligned}
(a)\;& \tilde{X}^k = \arg\min_{X\in R^{n\times n}} L_\alpha(X, Y^k, Z^k, M^k, N^k), & (3.6)\\
(b)\;& \tilde{M}^k = M^k - \alpha(\tilde{X}^k-Y^k), & (3.7)\\
(c)\;& \tilde{N}^k = N^k - \alpha(\tilde{X}^k-Z^k), & (3.8)\\
(d)\;& \tilde{Y}^k = \arg\min_{Y\in\Omega_1} L_\alpha(\tilde{X}^k, Y, Z^k, \tilde{M}^k, \tilde{N}^k) = P_{\Omega_1}(\tilde{X}^k-\tilde{M}^k/\alpha), & (3.9)\\
(e)\;& \tilde{Z}^k = \arg\min_{Z\in\Omega_2} L_\alpha(\tilde{X}^k, \tilde{Y}^k, Z, \tilde{M}^k, \tilde{N}^k) = P_{\Omega_2}(\tilde{X}^k-\tilde{N}^k/\alpha); & (3.10)
\end{aligned}$$
Step 3. Correction step
$$\begin{aligned}
(a)\;& Y^{k+1} = Y^k - \gamma(Y^k-\tilde{Y}^k), & (3.11)\\
(b)\;& Z^{k+1} = Z^k - \gamma(Z^k-\tilde{Z}^k), & (3.12)\\
(c)\;& M^{k+1} = M^k - \gamma(M^k-\tilde{M}^k), & (3.13)\\
(d)\;& N^{k+1} = N^k - \gamma(N^k-\tilde{N}^k); & (3.14)
\end{aligned}$$
Step 4. If $\left(\|Y^{k+1}-Y^k\|^2+\|Z^{k+1}-Z^k\|^2+\|M^{k+1}-M^k\|^2+\|N^{k+1}-N^k\|^2\right)^{1/2} < \varepsilon_{out}$, stop. In this case, $\tilde{X}^k$ is an approximate solution of problem (3.1);
Step 5. Let $k:=k+1$ and go to Step 2.

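For comparison with the ADMM sketch above, the following MATLAB sketch of one pass of Algorithm 3.2 shows the prediction in the order $X\to M\to N\to Y\to Z$ followed by the correction step with factor gamma in (0,2). The helpers solveXSub, projBox and projSymEig are the same placeholders as before; this is an illustration, not the authors' implementation.

```matlab
% Sketch of Algorithm 3.2 (MSADMM): prediction (3.6)-(3.10), then correction (3.11)-(3.14).
for k = 0:maxit-1
    Xt = solveXSub(A, B, C, Y, Z, M, N, alpha);  % (3.6)
    Mt = M - alpha*(Xt - Y);                     % (3.7)
    Nt = N - alpha*(Xt - Z);                     % (3.8)
    Yt = projBox(Xt - Mt/alpha, L, U);           % (3.9)
    Zt = projSymEig(Xt - Nt/alpha, eps0);        % (3.10)
    % correction step with relaxation factor gamma
    Y = Y - gamma*(Y - Yt);                      % (3.11)
    Z = Z - gamma*(Z - Zt);                      % (3.12)
    M = M - gamma*(M - Mt);                      % (3.13)
    N = N - gamma*(N - Nt);                      % (3.14)
end
```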

Compared with ADMM, MSADMM produces the new iterate in the order $X\to M\to N\to Y\to Z$, whereas ADMM uses the order $X\to Y\to Z\to M\to N$. Despite this difference, MSADMM and ADMM are equally effective in exploiting the separable structure of (3.1) and equally easy to implement. In fact, the resulting subproblems of the two methods have the same degree of decomposition and the same difficulty. We shall verify by numerical experiments that the two methods are also equally competitive numerically and that, if the correction factor $\gamma$ is chosen suitably, MSADMM is more efficient than ADMM.

    In this section, we prove the global convergence and the O(1/t) convergence rate for the proposed Algorithm 3.2. We start the proof with some lemmas which are useful for the analysis of coming theorems.

To simplify our analysis, we use the following notation throughout this section:

$$\Omega = R^{n\times n}\times\Omega_1\times\Omega_2\times R^{n\times n}\times R^{n\times n}; \qquad V = \begin{pmatrix} Y\\ Z\\ M\\ N\end{pmatrix}; \qquad W = \begin{pmatrix} X\\ V\end{pmatrix};$$
$$G(W) = \begin{pmatrix} \nabla F(X)-M-N\\ M\\ N\\ X-Y\\ X-Z\end{pmatrix}; \qquad Q = \begin{pmatrix} \alpha I_n & 0 & -I_n & 0\\ 0 & \alpha I_n & 0 & -I_n\\ -I_n & 0 & \frac{1}{\alpha}I_n & 0\\ 0 & -I_n & 0 & \frac{1}{\alpha}I_n\end{pmatrix}.$$

Lemma 4.1. Assume that $(X^*, Y^*, Z^*)$ is a solution of problem (3.1), $(M^*, N^*)$ is a solution of the dual problem (3.4) of the constrained optimization problem (3.1), and that the sequences $\{V^k\}$ and $\{\tilde{W}^k\}$ are generated by Algorithm 3.2. Then we have

$$\langle V^k-V^*,\, Q(V^k-\tilde{V}^k)\rangle \ge \langle V^k-\tilde{V}^k,\, Q(V^k-\tilde{V}^k)\rangle. \qquad (4.1)$$

Proof. By (3.6)–(3.10) and Lemma 2.1, we have, for any $(X,Y,Z,M,N)\in\Omega$, that

$$\left\langle \begin{pmatrix} X-\tilde{X}^k\\ Y-\tilde{Y}^k\\ Z-\tilde{Z}^k\\ M-\tilde{M}^k\\ N-\tilde{N}^k\end{pmatrix},\; \begin{pmatrix} \nabla F(\tilde{X}^k)-\tilde{M}^k-\tilde{N}^k\\ \tilde{M}^k\\ \tilde{N}^k\\ \tilde{X}^k-\tilde{Y}^k\\ \tilde{X}^k-\tilde{Z}^k\end{pmatrix} + \begin{pmatrix} 0 & 0 & 0 & 0\\ \alpha I_n & 0 & -I_n & 0\\ 0 & \alpha I_n & 0 & -I_n\\ -I_n & 0 & \frac{1}{\alpha}I_n & 0\\ 0 & -I_n & 0 & \frac{1}{\alpha}I_n\end{pmatrix} \begin{pmatrix} \tilde{Y}^k-Y^k\\ \tilde{Z}^k-Z^k\\ \tilde{M}^k-M^k\\ \tilde{N}^k-N^k\end{pmatrix}\right\rangle \ge 0, \qquad (4.2)$$

or, compactly,

$$\left\langle W-\tilde{W}^k,\; G(\tilde{W}^k) + \begin{pmatrix} 0\\ Q\end{pmatrix}(\tilde{V}^k-V^k)\right\rangle \ge 0. \qquad (4.3)$$

Choosing $W$ as $W^* = \begin{pmatrix} X^*\\ V^*\end{pmatrix}$, (4.3) can be rewritten as

$$\langle \tilde{V}^k-V^*,\, Q(V^k-\tilde{V}^k)\rangle \ge \langle \tilde{W}^k-W^*,\, G(\tilde{W}^k)\rangle.$$

Noting the monotonicity of the gradients of convex functions, we have by Lemma 2.2 that

$$\langle \tilde{W}^k-W^*,\, G(\tilde{W}^k)\rangle \ge \langle \tilde{W}^k-W^*,\, G(W^*)\rangle \ge 0.$$

Therefore, the above two inequalities imply that

$$\langle \tilde{V}^k-V^*,\, Q(V^k-\tilde{V}^k)\rangle \ge 0,$$

from which the assertion (4.1) is immediately derived.

Noting that the matrix $Q$ is symmetric and positive semi-definite, we use, for convenience, the notation

$$\|V^k-\tilde{V}^k\|_Q^2 := \langle V^k-\tilde{V}^k,\, Q(V^k-\tilde{V}^k)\rangle.$$

Then, the assertion (4.1) can be rewritten as

$$\langle V^k-V^*,\, Q(V^k-\tilde{V}^k)\rangle \ge \|V^k-\tilde{V}^k\|_Q^2. \qquad (4.4)$$

Lemma 4.2. Assume that $(X^*, Y^*, Z^*)$ is a solution of the constrained optimization problem (3.1), $(M^*, N^*)$ is a solution of the dual problem (3.4) of the constrained optimization problem (3.1), and that the sequences $\{V^k\}$ and $\{\tilde{V}^k\}$ are generated by Algorithm 3.2. Then we have

$$\|V^{k+1}-V^*\|_Q^2 \le \|V^k-V^*\|_Q^2 - \gamma(2-\gamma)\|V^k-\tilde{V}^k\|_Q^2. \qquad (4.5)$$

Proof. By elementary manipulation, we obtain

$$\begin{aligned}
\|V^{k+1}-V^*\|_Q^2 &= \|(V^k-V^*)-\gamma(V^k-\tilde{V}^k)\|_Q^2\\
&= \|V^k-V^*\|_Q^2 - 2\gamma\langle V^k-V^*,\, Q(V^k-\tilde{V}^k)\rangle + \gamma^2\|V^k-\tilde{V}^k\|_Q^2\\
&\le \|V^k-V^*\|_Q^2 - 2\gamma\|V^k-\tilde{V}^k\|_Q^2 + \gamma^2\|V^k-\tilde{V}^k\|_Q^2\\
&= \|V^k-V^*\|_Q^2 - \gamma(2-\gamma)\|V^k-\tilde{V}^k\|_Q^2,
\end{aligned}$$

where the inequality follows from (4.1) and (4.4).

Lemma 4.3. The sequences $\{V^k\}$ and $\{\tilde{W}^k\}$ generated by Algorithm 3.2 satisfy

$$\langle W-\tilde{W}^k,\, G(\tilde{W}^k)\rangle + \frac{1}{2\gamma}\left(\|V-V^k\|_Q^2 - \|V-V^{k+1}\|_Q^2\right) \ge \left(1-\frac{\gamma}{2}\right)\|V^k-\tilde{V}^k\|_Q^2 \qquad (4.6)$$

for any $(X,Y,Z,M,N)\in\Omega$.

Proof. By (4.2), or its compact form (4.3), we have, for any $(X,Y,Z,M,N)\in\Omega$, that

$$\langle W-\tilde{W}^k,\, G(\tilde{W}^k)\rangle \ge -\left\langle W-\tilde{W}^k,\, \begin{pmatrix} 0\\ Q\end{pmatrix}(\tilde{V}^k-V^k)\right\rangle = \langle V-\tilde{V}^k,\, Q(V^k-\tilde{V}^k)\rangle. \qquad (4.7)$$

Thus, it suffices to show that

$$\langle V-\tilde{V}^k,\, Q(V^k-\tilde{V}^k)\rangle + \frac{1}{2\gamma}\left(\|V-V^k\|_Q^2 - \|V-V^{k+1}\|_Q^2\right) \ge \left(1-\frac{\gamma}{2}\right)\|V^k-\tilde{V}^k\|_Q^2. \qquad (4.8)$$

By using the formula $2\langle a, Qb\rangle = \|a\|_Q^2 + \|b\|_Q^2 - \|a-b\|_Q^2$, we derive that

$$\langle V-V^{k+1},\, Q(V^k-V^{k+1})\rangle = \frac{1}{2}\|V-V^{k+1}\|_Q^2 + \frac{1}{2}\|V^k-V^{k+1}\|_Q^2 - \frac{1}{2}\|V-V^k\|_Q^2. \qquad (4.9)$$

Moreover, since (3.11)–(3.14) can be rewritten as $V^k-V^{k+1} = \gamma(V^k-\tilde{V}^k)$, we have

$$\langle V-V^{k+1},\, Q(V^k-\tilde{V}^k)\rangle = \frac{1}{\gamma}\langle V-V^{k+1},\, Q(V^k-V^{k+1})\rangle. \qquad (4.10)$$

Combining (4.9) and (4.10), we obtain

$$\langle V-V^{k+1},\, Q(V^k-\tilde{V}^k)\rangle = \frac{1}{2\gamma}\left(\|V-V^{k+1}\|_Q^2 - \|V-V^k\|_Q^2\right) + \frac{1}{2\gamma}\|V^k-V^{k+1}\|_Q^2. \qquad (4.11)$$

On the other hand, we have by using (3.11)–(3.14) that

$$\langle V^{k+1}-\tilde{V}^k,\, Q(V^k-\tilde{V}^k)\rangle = (1-\gamma)\|V^k-\tilde{V}^k\|_Q^2. \qquad (4.12)$$

By adding (4.11) and (4.12), and again using the fact that $V^k-V^{k+1} = \gamma(V^k-\tilde{V}^k)$, we obtain

$$\begin{aligned}
\langle V-\tilde{V}^k,\, Q(V^k-\tilde{V}^k)\rangle &= \frac{1}{2\gamma}\left(\|V-V^{k+1}\|_Q^2 - \|V-V^k\|_Q^2\right) + \frac{1}{2\gamma}\|V^k-V^{k+1}\|_Q^2 + (1-\gamma)\|V^k-\tilde{V}^k\|_Q^2\\
&= \frac{1}{2\gamma}\left(\|V-V^{k+1}\|_Q^2 - \|V-V^k\|_Q^2\right) + \left(1-\frac{\gamma}{2}\right)\|V^k-\tilde{V}^k\|_Q^2,
\end{aligned}$$

which implies (4.8). Hence, the lemma is proved.

Theorem 4.1. The sequences $\{V^k\}$ and $\{\tilde{W}^k\}$ generated by Algorithm 3.2 are bounded, and, furthermore, any accumulation point $\tilde{X}$ of the sequence $\{\tilde{X}^k\}$ is a solution of problem (1.2).

Proof. The inequality (4.5), with the restriction $\gamma\in(0,2)$, implies that

(i) $\lim_{k\to\infty}\|V^k-\tilde{V}^k\|_Q = 0$;

(ii) $\{\|V^k-V^*\|_Q\}$ is bounded above.

Recall that the matrix $Q$ is symmetric and positive semi-definite. Thus, we have by assertion (i) that $Q(V^k-\tilde{V}^k)\to 0$ as $k\to\infty$, which, together with (3.7) and (3.8), implies that $\tilde{X}^k-\tilde{Y}^k\to 0$ and $\tilde{X}^k-\tilde{Z}^k\to 0$ as $k\to\infty$. Assertion (ii) implies that the sequences $\{Y^k\}$, $\{Z^k\}$, $\{M^k\}$ and $\{N^k\}$ are bounded. Since (3.11)–(3.14) hold and the sequences $\{Y^k\}$, $\{Z^k\}$, $\{M^k\}$ and $\{N^k\}$ are bounded, the sequences $\{\tilde{Y}^k\}$, $\{\tilde{Z}^k\}$, $\{\tilde{M}^k\}$ and $\{\tilde{N}^k\}$ are also bounded. Hence, by the clustering theorem together with (3.11)–(3.14), there exist subsequences $\{\tilde{X}^k\}_K$, $\{\tilde{Y}^k\}_K$, $\{\tilde{Z}^k\}_K$, $\{\tilde{M}^k\}_K$, $\{\tilde{N}^k\}_K$, $\{Y^k\}_K$, $\{Z^k\}_K$, $\{M^k\}_K$ and $\{N^k\}_K$ such that

$$\lim_{k\to\infty,\, k\in K}\tilde{X}^k = \tilde{X}, \qquad \lim_{k\to\infty,\, k\in K}\tilde{Y}^k = \lim_{k\to\infty,\, k\in K}Y^k = \tilde{Y}, \qquad \lim_{k\to\infty,\, k\in K}\tilde{Z}^k = \lim_{k\to\infty,\, k\in K}Z^k = \tilde{Z},$$
$$\lim_{k\to\infty,\, k\in K}\tilde{M}^k = \lim_{k\to\infty,\, k\in K}M^k = \tilde{M}, \qquad \lim_{k\to\infty,\, k\in K}\tilde{N}^k = \lim_{k\to\infty,\, k\in K}N^k = \tilde{N}.$$

Furthermore, we have by (3.7) and (3.8) that

$$\tilde{X} = \tilde{Y} = \tilde{Z}. \qquad (4.13)$$

By (3.6)–(3.8), we have

$$A^T(A\tilde{X}^k+\tilde{X}^kB-C) + (A\tilde{X}^k+\tilde{X}^kB-C)B^T - \tilde{M}^k - \tilde{N}^k = 0.$$

So we have

$$A^T(A\tilde{X}+\tilde{X}B-C) + (A\tilde{X}+\tilde{X}B-C)B^T - \tilde{M} - \tilde{N} = 0. \qquad (4.14)$$

Letting $k\to\infty$, $k\in K$, we have by (3.9), (3.10) and (4.13) that

$$\tilde{X} = P_{\Omega_1}(\tilde{X}-\tilde{M}/\alpha), \qquad \tilde{X} = P_{\Omega_2}(\tilde{X}-\tilde{N}/\alpha). \qquad (4.15)$$

Noting that $\alpha>0$, we have by (4.15), Lemma 2.3 and Lemma 2.4 that

$$\langle (\tilde{M})_+,\, \tilde{X}-L\rangle = 0, \quad \tilde{X}-L\ge 0, \qquad (4.16)$$
$$\langle (-\tilde{M})_+,\, U-\tilde{X}\rangle = 0, \quad U-\tilde{X}\ge 0, \qquad (4.17)$$

and

$$\langle \tilde{N}+\tilde{N}^T,\, \tilde{X}-\varepsilon I_n\rangle = 0, \quad \tilde{N}+\tilde{N}^T\in SR^{n\times n}_{\ge 0}, \quad \tilde{X}-\varepsilon I_n\in SR^{n\times n}_{\ge 0}. \qquad (4.18)$$

Let

$$\tilde{\Lambda}_1 = (\tilde{M})_+, \qquad \tilde{\Lambda}_2 = (-\tilde{M})_+, \qquad \tilde{\Lambda}_3 = \tilde{N};$$

then we have by (4.14) and (4.16)–(4.18) that

$$\left\{\begin{aligned}
& A^T(A\tilde{X}+\tilde{X}B-C) + (A\tilde{X}+\tilde{X}B-C)B^T - \tilde{\Lambda}_1 + \tilde{\Lambda}_2 - \tilde{\Lambda}_3 = 0,\\
& \langle \tilde{\Lambda}_1, \tilde{X}-L\rangle = 0, \quad \tilde{X}-L\ge 0, \quad \tilde{\Lambda}_1\ge 0,\\
& \langle \tilde{\Lambda}_2, U-\tilde{X}\rangle = 0, \quad U-\tilde{X}\ge 0, \quad \tilde{\Lambda}_2\ge 0,\\
& \langle \tilde{\Lambda}_3+\tilde{\Lambda}_3^T, \tilde{X}-\varepsilon I_n\rangle = 0, \quad \tilde{X}-\varepsilon I_n\in SR^{n\times n}_{\ge 0}, \quad \tilde{\Lambda}_3+\tilde{\Lambda}_3^T\in SR^{n\times n}_{\ge 0}.
\end{aligned}\right. \qquad (4.19)$$

Hence, we have by Theorem 2.1 that $\tilde{X}$ is a solution of problem (1.2).

Theorem 4.2. Let the sequence $\{\tilde{W}^k\}$ be generated by Algorithm 3.2. For an integer $t>0$, let

$$\tilde{W}_t = \frac{1}{t+1}\sum_{k=0}^{t}\tilde{W}^k; \qquad (4.20)$$

then $\frac{1}{t+1}\sum_{k=0}^{t}(\tilde{X}^k,\tilde{Y}^k,\tilde{Z}^k,\tilde{M}^k,\tilde{N}^k)\in\Omega$ and the inequality

$$\langle \tilde{W}_t-W,\, G(W)\rangle \le \frac{1}{2\gamma(t+1)}\|V-V^0\|_Q^2 \qquad (4.21)$$

holds for any $(X,Y,Z,M,N)\in\Omega$.

Proof. First, for any integer $t>0$, we have $(\tilde{X}^k,\tilde{Y}^k,\tilde{Z}^k,\tilde{M}^k,\tilde{N}^k)\in\Omega$ for $k=0,1,2,\ldots,t$. Since $\frac{1}{t+1}\sum_{k=0}^{t}\tilde{W}^k$ can be viewed as a convex combination of the $\tilde{W}^k$'s, we obtain

$$\frac{1}{t+1}\sum_{k=0}^{t}(\tilde{X}^k,\tilde{Y}^k,\tilde{Z}^k,\tilde{M}^k,\tilde{N}^k)\in\Omega.$$

Second, since $\gamma\in(0,2)$, it follows from Lemma 4.3 that

$$\langle W-\tilde{W}^k,\, G(\tilde{W}^k)\rangle + \frac{1}{2\gamma}\left(\|V-V^k\|_Q^2 - \|V-V^{k+1}\|_Q^2\right) \ge 0, \quad \forall (X,Y,Z,M,N)\in\Omega. \qquad (4.22)$$

By combining the monotonicity of $G(W)$ with the inequality (4.22), we have

$$\langle W-\tilde{W}^k,\, G(W)\rangle + \frac{1}{2\gamma}\left(\|V-V^k\|_Q^2 - \|V-V^{k+1}\|_Q^2\right) \ge 0, \quad \forall (X,Y,Z,M,N)\in\Omega.$$

Summing the above inequality over $k=0,1,\ldots,t$, we derive that

$$\left\langle (t+1)W-\sum_{k=0}^{t}\tilde{W}^k,\, G(W)\right\rangle + \frac{1}{2\gamma}\left(\|V-V^0\|_Q^2 - \|V-V^{t+1}\|_Q^2\right) \ge 0, \quad \forall (X,Y,Z,M,N)\in\Omega.$$

By dropping the nonpositive term $-\frac{1}{2\gamma}\|V-V^{t+1}\|_Q^2$, we have

$$\left\langle (t+1)W-\sum_{k=0}^{t}\tilde{W}^k,\, G(W)\right\rangle + \frac{1}{2\gamma}\|V-V^0\|_Q^2 \ge 0, \quad \forall (X,Y,Z,M,N)\in\Omega,$$

which is equivalent to

$$\left\langle \frac{1}{t+1}\sum_{k=0}^{t}\tilde{W}^k-W,\, G(W)\right\rangle \le \frac{1}{2\gamma(t+1)}\|V-V^0\|_Q^2, \quad \forall (X,Y,Z,M,N)\in\Omega.$$

The proof is completed.

Note that problem (3.1) is equivalent to finding $(X^*,Y^*,Z^*,M^*,N^*)\in\Omega$ such that the inequality

$$\langle W-W^*,\, G(W)\rangle \ge 0 \qquad (4.23)$$

holds for any $(X,Y,Z,M,N)\in\Omega$. Theorem 4.2 means that, for any initial matrices $Y^0,Z^0,M^0,N^0\in R^{n\times n}$, the point $\tilde{W}_t$ defined in (4.20) satisfies

$$\langle \tilde{W}_t-W,\, G(W)\rangle \le \frac{\|V-V^0\|_Q^2}{2\gamma(t+1)},$$

which means that $\tilde{W}_t$ is an approximate solution of (4.23) with accuracy $O(1/t)$. That is, the convergence rate $O(1/t)$ of Algorithm 3.2 is established in an ergodic sense.

In this section, we present some numerical examples to illustrate the convergence of MSADMM for the constrained least squares problem (1.2). All tests were performed in Matlab 7 under the 64-bit Windows 7 operating system. In all tests, the constant $\varepsilon=0.1$, the matrix $L$ has all its elements equal to 1, and $U$ has all its elements equal to 3. The matrices $A$, $B$ and $C$ are randomly generated, i.e., generated in Matlab style as A=randn(n,n), B=randn(n,n), C=randn(n,n). In all algorithms, the initial matrices are chosen as the null matrices. The maximum numbers of inner and outer iterations are restricted to 5000. The error tolerances are $\varepsilon_{out}=\varepsilon_{in}=10^{-9}$ in Algorithms 3.1 and 3.2. The computational methods for the projections $P_{\Omega_i}(X)\,(i=1,2)$ are as follows [38].

$$P_{\Omega_1}(X)_{ij} = \begin{cases} X_{ij}, & \text{if } L_{ij}\le X_{ij}\le U_{ij},\\ U_{ij}, & \text{if } X_{ij}>U_{ij},\\ L_{ij}, & \text{if } X_{ij}<L_{ij},\end{cases} \qquad\qquad P_{\Omega_2}(X) = W\,\mathrm{diag}(d_1,d_2,\ldots,d_n)\,W^T,$$

where

$$d_i = \begin{cases} \lambda_i\!\left(\dfrac{X+X^T}{2}\right), & \text{if } \lambda_i\!\left(\dfrac{X+X^T}{2}\right)\ge\varepsilon,\\[2mm] \varepsilon, & \text{if } \lambda_i\!\left(\dfrac{X+X^T}{2}\right)<\varepsilon,\end{cases}$$

and $W$ is such that $\frac{X+X^T}{2}=W\Delta W^T$ is the spectral decomposition, i.e., $W^TW=I$ and $\Delta=\mathrm{diag}\left(\lambda_1\left(\frac{X+X^T}{2}\right),\lambda_2\left(\frac{X+X^T}{2}\right),\ldots,\lambda_n\left(\frac{X+X^T}{2}\right)\right)$. We use the LSQR algorithm described in [34], with necessary modifications, to solve the subproblems (3.5) in Algorithm 3.1, (3.6) in Algorithm 3.2 and (6.1) in Algorithm 6.2.
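The two projections above take only a few lines of MATLAB; the function names projBox and projSymEig below are illustrative and match the placeholders used in the earlier sketches.

```matlab
% Sketch of the projection P_Omega1: entrywise clipping to [L, U].
function P = projBox(X, L, U)
    P = min(max(X, L), U);
end

% Sketch of the projection P_Omega2: symmetrize, then lift eigenvalues
% below eps0 up to eps0, as in the spectral formula above.
function P = projSymEig(X, eps0)
    S = (X + X')/2;              % symmetric part of X
    [W, D] = eig(S);             % spectral decomposition S = W*D*W'
    d = max(diag(D), eps0);      % raise small eigenvalues to eps0
    P = W*diag(d)*W';
end
```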

The LSQR algorithm is an effective method for solving a consistent linear matrix equation or the least squares problem of an inconsistent linear matrix equation. Using this iterative algorithm, for any initial matrix, a solution can be obtained within finitely many iteration steps if exact arithmetic is used. In addition, a solution with minimum Frobenius norm can be obtained by choosing a special kind of initial matrix, and a solution nearest to a given matrix in the Frobenius norm can be obtained by first finding the minimum Frobenius norm solution of a new consistent matrix equation. The LSQR algorithm to solve the subproblems (3.5), (3.6) and (6.1) can be described as follows.
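Before stating the LSQR recursions, it may help to see the X-subproblem in matrix-vector form. The MATLAB sketch below solves (3.5a)/(3.6)/(6.1) directly by vectorization, using $\mathrm{vec}(AX+XB) = (I\otimes A + B^T\otimes I)\,\mathrm{vec}(X)$; this direct approach is practical only for small n and is not the LSQR method used in the paper, it is shown only to clarify what the subproblem is (the function name solveXSubDirect is hypothetical).

```matlab
% Alternative sketch for the X-subproblem: normal equations via vectorization.
% Minimizes 0.5*||A*X+X*B-C||^2 + (alpha/2)*||X-Y-M/alpha||^2 + (alpha/2)*||X-Z-N/alpha||^2.
function X = solveXSubDirect(A, B, C, Y, Z, M, N, alpha)
    n = size(A, 1);
    K = kron(eye(n), A) + kron(B', eye(n));      % matrix of the map X -> A*X + X*B
    rhs = K'*C(:) + alpha*Y(:) + M(:) + alpha*Z(:) + N(:);
    x = (K'*K + 2*alpha*eye(n^2)) \ rhs;         % solve the normal equations
    X = reshape(x, n, n);
end
```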

Algorithm 5.1. LSQR algorithm to solve subproblems (3.5) and (3.6).
Step 1. Input matrices $A, B, C, Y^k, Z^k, M^k$ and $N^k$, penalty parameter $\alpha>0$ and error tolerance $\varepsilon_{in}$. Set $X_1=0$ and compute
$$\begin{aligned}
& \eta_1 = \left(\|C\|^2 + \alpha\|Y^k+M^k/\alpha\|^2 + \alpha\|Z^k+N^k/\alpha\|^2\right)^{1/2},\\
& U^{(1)}_1 = \frac{C}{\eta_1}, \quad U^{(2)}_1 = \frac{\sqrt{\alpha}\,(Y^k+M^k/\alpha)}{\eta_1}, \quad U^{(3)}_1 = \frac{\sqrt{\alpha}\,(Z^k+N^k/\alpha)}{\eta_1},\\
& \xi_1 = \left\|A^TU^{(1)}_1 + U^{(1)}_1B^T + \sqrt{\alpha}\,U^{(2)}_1 + \sqrt{\alpha}\,U^{(3)}_1\right\|, \quad \Gamma_1 = \frac{A^TU^{(1)}_1 + U^{(1)}_1B^T + \sqrt{\alpha}\,U^{(2)}_1 + \sqrt{\alpha}\,U^{(3)}_1}{\xi_1},\\
& \Phi_1 = \Gamma_1, \quad \bar{\phi}_1 = \eta_1, \quad \bar{\rho}_1 = \xi_1. \quad \text{Let } i:=1;
\end{aligned}$$
Step 2. Compute
$$\begin{aligned}
& \eta_{i+1} = \left(\|A\Gamma_i+\Gamma_iB-\xi_iU^{(1)}_i\|^2 + \|\sqrt{\alpha}\,\Gamma_i-\xi_iU^{(2)}_i\|^2 + \|\sqrt{\alpha}\,\Gamma_i-\xi_iU^{(3)}_i\|^2\right)^{1/2},\\
& U^{(1)}_{i+1} = \frac{A\Gamma_i+\Gamma_iB-\xi_iU^{(1)}_i}{\eta_{i+1}}, \quad U^{(2)}_{i+1} = \frac{\sqrt{\alpha}\,\Gamma_i-\xi_iU^{(2)}_i}{\eta_{i+1}}, \quad U^{(3)}_{i+1} = \frac{\sqrt{\alpha}\,\Gamma_i-\xi_iU^{(3)}_i}{\eta_{i+1}},\\
& \xi_{i+1} = \left\|A^TU^{(1)}_{i+1} + U^{(1)}_{i+1}B^T + \sqrt{\alpha}\,U^{(2)}_{i+1} + \sqrt{\alpha}\,U^{(3)}_{i+1} - \eta_{i+1}\Gamma_i\right\|,\\
& \Gamma_{i+1} = \frac{A^TU^{(1)}_{i+1} + U^{(1)}_{i+1}B^T + \sqrt{\alpha}\,U^{(2)}_{i+1} + \sqrt{\alpha}\,U^{(3)}_{i+1} - \eta_{i+1}\Gamma_i}{\xi_{i+1}},\\
& \rho_i = \left(\bar{\rho}_i^2+\eta_{i+1}^2\right)^{1/2}, \quad c_i = \frac{\bar{\rho}_i}{\rho_i}, \quad s_i = \frac{\eta_{i+1}}{\rho_i}, \quad \theta_{i+1} = s_i\xi_{i+1}, \quad \bar{\rho}_{i+1} = -c_i\xi_{i+1},\\
& \phi_i = c_i\bar{\phi}_i, \quad \bar{\phi}_{i+1} = s_i\bar{\phi}_i, \quad X_{i+1} = X_i + \frac{\phi_i}{\rho_i}\Phi_i, \quad \Phi_{i+1} = \Gamma_{i+1} - \frac{\theta_{i+1}}{\rho_i}\Phi_i;
\end{aligned}$$
Step 3. If $\|X_{i+1}-X_i\|<\varepsilon_{in}$, terminate the execution of the algorithm. (In this case, $X_i$ is a solution of problem (3.5) or (3.6));
Step 4. Let $i:=i+1$ and go to Step 2.


Table 1 reports the average computing time (CPU time, in seconds) over 10 tests of Algorithm 3.1 (ADMM) and Algorithm 3.2 (MSADMM) with penalty parameter $\alpha=n$. Figures 1–4 report the computing time of ADMM for the same problem size and different penalty parameters $\alpha$. Figures 5–8 report the computing time of MSADMM for the same problem size and different correction factors $\gamma$. Figure 9 reports the computing time curves of Algorithm 3.1 (ADMM) and Algorithm 3.2 (MSADMM) for different matrix sizes.

Table 1.  Numerical comparisons between MSADMM and ADMM (average CPU time in seconds).
n (α = n)   ADMM   MSADMM(γ=0.8)   MSADMM(γ=1.0)   MSADMM(γ=1.5)
    20 0.0846 0.1254 0.0865 0.0538
    40 0.3935 0.4883 0.3836 0.2676
    80 1.3370 1.6930 1.3726 0.8965
    100 2.4766 3.1514 2.5015 1.6488
    150 5.8780 7.3482 5.8742 3.9154
    200 11.6398 14.6023 11.6162 7.6576
    300 35.1929 41.4151 32.9488 21.4898
    400 83.7386 108.9807 86.0472 56.8144
    500 147.5759 183.4758 144.2038 92.3137
    600 242.0408 302.6395 241.6225 157.8042

    Figure 1.  Computing times (seconds) vs. the values of α for n = 40.
    Figure 2.  Computing times (seconds) vs. the values of α for n = 60.
    Figure 3.  Computing times (seconds) vs. the values of α for n = 80.
    Figure 4.  Computing times (seconds) vs. the values of α for n = 100.
    Figure 5.  Computing times (seconds) vs. the values of γ for α = n = 40.
    Figure 6.  Computing times (seconds) vs. the values of γ for α = n = 60.
    Figure 7.  Computing times (seconds) vs. the values of γ for α = n = 80.
    Figure 8.  Computing times (seconds) vs. the values of γ for α = n = 100.
    Figure 9.  Numerical comparisons between ADMM and MSADMM.

Based on the tests reported in Table 1 and Figures 1–9, and on many other unreported tests which show similar patterns, we have the following results:

Remark 5.1. The convergence speed of ADMM is directly related to the penalty parameter α. In general, the penalty parameter α in this paper can be chosen as α ≈ n (see Figures 1–4). However, how to select the best penalty parameter α is an important problem that should be studied in the future.

Remark 5.2. The convergence speed of MSADMM is directly related to the penalty parameter α and the correction factor γ. The selection of the penalty parameter α is similar to that for ADMM, since MSADMM is a direct extension of ADMM. For the correction factor γ, as shown in Table 1 and Figures 5–8, aggressive values such as γ = 1.5 are often preferred. However, how to select the best correction factor γ is also an important problem that should be studied in the future.

Remark 5.3. As shown in Table 1 and Figure 9, MSADMM, with the correction factor γ = 1.5 and the penalty parameter α chosen the same as for ADMM, is more effective than ADMM.

Anderson acceleration, or Anderson mixing, was initially developed in 1965 by Donald Anderson [35] as an iterative procedure to solve some nonlinear integral equations arising in physics. It turns out that Anderson acceleration is very efficient for solving other types of nonlinear equations as well; see [36,37,38] and the literature cited therein. When Anderson acceleration is applied to the equation $f(x)=g(x)-x=0$, the iterative pattern can be described as the following Algorithm 6.1.

Algorithm 6.1. Anderson accelerated method to solve the equation $f(x)=0$.
Given $x_0\in R^n$ and an integer $m\ge 1$, this algorithm produces a sequence $\{x_k\}$ of iterates intended to converge to a fixed point of the function $g: R^n\to R^n$.
Step 1. Compute $x_1=g(x_0)$;
Step 2. For $k=1,2,\ldots$ until convergence;
Step 3. Let $m_k=\min(m,k)$;
Step 4. Compute $\lambda^k=(\lambda_1,\lambda_2,\ldots,\lambda_{m_k})^T\in R^{m_k}$ that solves
$$\min_{\lambda\in R^{m_k}} \left\| f(x_k) - \sum_{j=1}^{m_k}\lambda_j\bigl(f(x_{k-m_k+j}) - f(x_{k-m_k+j-1})\bigr)\right\|_2^2;$$
Step 5. Set
$$x_{k+1} = g(x_k) + \sum_{j=1}^{m_k-1}\lambda_j\bigl[g(x_{k-m_k+j+1}) - g(x_{k-m_k+j})\bigr].$$

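For reference, the following is a minimal MATLAB sketch of Anderson acceleration for a generic fixed-point map g, written in the common unconstrained least-squares form; it may differ from Algorithm 6.1 in indexing conventions and is only illustrative (the function name andersonFixedPoint is hypothetical). To apply it in the spirit of Algorithm 6.2, x would stack vec(Y), vec(Z), vec(M), vec(N), and g would perform the X-update (6.1) followed by steps (b)–(e) of Algorithm 3.1.

```matlab
% Illustrative sketch of Anderson acceleration for x = g(x), with window m.
function x = andersonFixedPoint(g, x0, m, maxit, tol)
    x  = x0;
    gx = g(x);  f = gx - x;                  % residual f(x) = g(x) - x
    Fhist = f;  Ghist = gx;                  % columns: past residuals / g-values
    for k = 1:maxit
        if size(Fhist, 2) == 1
            x = gx;                          % plain fixed-point step
        else
            dF  = diff(Fhist, 1, 2);         % residual differences
            dG  = diff(Ghist, 1, 2);         % g-value differences
            lam = dF \ f;                    % least squares coefficients
            x   = gx - dG*lam;               % accelerated iterate
        end
        gx = g(x);  f = gx - x;
        if norm(f) < tol, return; end
        Fhist = [Fhist, f];  Ghist = [Ghist, gx];
        if size(Fhist, 2) > m + 1            % keep at most m differences
            Fhist(:, 1) = [];  Ghist(:, 1) = [];
        end
    end
end
```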

In this section we define the matrix function

$$f(Y,Z,M,N) = g(Y,Z,M,N) - (Y,Z,M,N),$$

where $g(Y^k,Z^k,M^k,N^k)=(Y^{k+1},Z^{k+1},M^{k+1},N^{k+1})$ with $Y^{k+1}, Z^{k+1}, M^{k+1}$ and $N^{k+1}$ computed by (b)–(e) in Algorithm 3.1, and we let $f_k=f(Y^k,Z^k,M^k,N^k)$ and $g_k=g(Y^k,Z^k,M^k,N^k)$. Then Algorithm 3.1 with Anderson acceleration can be described as the following Algorithm 6.2.

Algorithm 6.2. Algorithm 3.1 with Anderson acceleration to solve problem (3.1).
Step 1. Input matrices $A, B, C, L$ and $U$. Input constant $\varepsilon>0$, error tolerance $\varepsilon_{out}>0$, penalty parameter $\alpha>0$ and integer $m\ge 1$. Choose initial matrices $Y^0, Z^0, M^0, N^0\in R^{n\times n}$. Let $k:=0$;
Step 2. Compute
$$X^{k+1} = \arg\min_{X\in R^{n\times n}} L_\alpha(X, Y^k, Z^k, M^k, N^k); \qquad (6.1)$$
Step 3. Let $m_k=\min(m,k)$;
Step 4. Compute $\lambda^k=(\lambda_1,\lambda_2,\ldots,\lambda_{m_k})^T\in R^{m_k}$ that solves
$$\min_{\lambda\in R^{m_k}} \left\| f_k - \sum_{j=1}^{m_k}\lambda_j\bigl(f_{k-m_k+j} - f_{k-m_k+j-1}\bigr)\right\|^2;$$
Step 5. Set
$$(Y^{k+1},Z^{k+1},M^{k+1},N^{k+1}) = g_k + \sum_{j=1}^{m_k-1}\lambda_j\bigl(g_{k-m_k+j+1} - g_{k-m_k+j}\bigr);$$
Step 6. If $\left(\|Y^{k+1}-Y^k\|^2+\|Z^{k+1}-Z^k\|^2+\|M^{k+1}-M^k\|^2+\|N^{k+1}-N^k\|^2\right)^{1/2} < \varepsilon_{out}$, stop. In this case, $X^{k+1}$ is an approximate solution of problem (3.1);
Step 7. Let $k:=k+1$ and go to Step 2.


Algorithm 6.2 (ACADMM) is an m-step iterative method; that is, the current iterate is obtained from a linear combination of the previous m steps. Furthermore, the coefficients of this linear combination are updated at each iteration step. By comparison, Algorithm 3.2 (MSADMM) is a two-step iterative method and the coefficients of its linear combination are fixed at each iteration step. The convergence speed of ACADMM is directly related to the penalty parameter α and the backtracking step m. The selection of the penalty parameter α is the same as for ADMM, since ACADMM's iterates are corrected by ADMM's iterates. For the backtracking step m, as shown in Table 2 (the average computing time of 10 tests) and Figure 10, aggressive values such as m = 10 are often preferred (in this case, ACADMM is more efficient than MSADMM). However, how to select the best backtracking step m is an important problem which should be studied in the near future.

Table 2.  Numerical comparisons between MSADMM and ACADMM (average CPU time in seconds).
n (α = n)   MSADMM(γ=1.5)   ACADMM(m=2)   ACADMM(m=10)   ACADMM(m=20)
    20 0.0735 0.0991 0.0521 0.0647
    40 0.2595 0.3485 0.2186 0.2660
    80 1.1866 1.1319 1.0232 1.1069
    100 1.7750 2.2081 1.6003 1.7660
    150 3.8474 5.2760 3.5700 3.9587
    200 7.6133 10.8719 6.9807 7.7318
    300 21.2970 28.5379 18.8233 19.9526
    400 56.2133 74.3192 44.8087 46.5275
    500 98.0542 130.2326 75.6044 80.1480
    600 157.7573 208.1124 125.2842 133.4549

    Figure 10.  Numerical comparisons between MSADMM and ACADMM.

In this paper, the multiple constrained least squares solution of the Sylvester equation AX+XB=C is discussed. The necessary and sufficient conditions for the existence of solutions to the considered problem are given (Theorem 2.1). MSADMM for solving the considered problem is proposed, and some convergence results of the proposed algorithm are proved (Theorem 4.1 and Theorem 4.2). Problems which should be studied in the near future are listed. Numerical experiments show that MSADMM with a suitable correction factor γ is more effective than ADMM (see Table 1 and Figure 9), and that ACADMM with a suitable backtracking step m is the most effective among ADMM, MSADMM and ACADMM (see Table 2 and Figure 10).

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This work was supported by National Natural Science Foundation of China (grant number 11961012) and Special Research Project for Guangxi Young Innovative Talents (grant number AD20297063).

    The authors declare no competing interests.



    [1] M. Shinde, S. Gupta, Signal detection in the presence of atmospheric noise in tropics, IEEE Trans. Commun., 22 (1974), 1055–1063. https://doi.org/10.1109/TCOM.1974.1092336 doi: 10.1109/TCOM.1974.1092336
    [2] M. A. Chitre, J. R. Potter, S. H. Ong, Optimal and near optimal signal detection in snapping shrimp dominated ambient noise, IEEE J. Oceanic Eng., 31 (2006), 497–503. https://doi.org/10.1109/JOE.2006.875272 doi: 10.1109/JOE.2006.875272
    [3] S. Banerjee, M. Agrawal, Underwater acoustic communication in the presence of heavy-tailed impulsive noise with bi-parameter cauchy-gaussian mixture model, 2013 Ocean Electronics (SYMPOL), 2013, 1–7. https://doi.org/10.1109/SYMPOL.2013.6701903
    [4] G. A. Tsihrintzis, P. Tsakalides, C. L. Nikias, Signal detection in severely heavy-tailed radar clutter, Conference Record of The Twenty-Ninth Asilomar Conference on Signals, Systems and Computers, 1995,865–869. https://doi.org/10.1109/ACSSC.1995.540823
    [5] E. E. Kuruoglu, W. J. Fitzgerald, P. J. W. Rayner, Near optimal detection of signals in impulsive noise modeled with asymmetric alpha-stable distribution, IEEE Commun. Lett., 2 (1998), 282–284. https://doi.org/10.1109/4234.725224 doi: 10.1109/4234.725224
    [6] H. El Ghannudi, L. Clavier, N. Azzaoui, F. Septier, P. A. Rolland, α-stable interference modeling and cauchy receiver for an ir-uwb ad hoc network, IEEE Trans. Commun., 58 (2010), 1748–1757. https://doi.org/10.1109/TCOMM.2010.06.090074 doi: 10.1109/TCOMM.2010.06.090074
    [7] M. Zimmermann, K. Dostert, Analysis and modeling of impulsive noise in broad-band powerline communications, IEEE Trans. Electromagn. Compat., 44 (2002), 249–258. https://doi.org/10.1109/15.990732 doi: 10.1109/15.990732
    [8] P. M. Reeves, A non-gaussian turbulence simulation, Air Force Flight Dynamics Laboratory, 1969.
    [9] A. Achim, P. Tsakalides, A. Bezerianos, Sar image denoising via bayesian wavelet shrinkage based on heavy-tailed modeling, IEEE Trans. Geosci. Remote Sens., 41 (2003), 1773–1784. https://doi.org/10.1109/TGRS.2003.813488 doi: 10.1109/TGRS.2003.813488
    [10] Y. Peng, J. Chen, X. Xu, F. Pu, Sar images statistical modeling and classification based on the mixture of alpha-stable distributions, Remote Sens., 5 (2013), 2145–2163. https://doi.org/10.3390/rs5052145 doi: 10.3390/rs5052145
[11] C. L. Nikias, M. Shao, Signal processing with alpha-stable distributions and applications, Hoboken, NJ, USA: Wiley, 1995.
    [12] S. A. Kassam, Signal detection in non-gaussian noise, New York, USA: Springer, 2012.
    [13] S. R. Krishna Vadali, P. Ray, S. Mula, P. K. Varshney, Linear detection of a weak signal in additive cauchy noise, IEEE Trans. Commun., 65 (2017), 1061–1076. https://doi.org/10.1109/TCOMM.2016.2647599 doi: 10.1109/TCOMM.2016.2647599
    [14] J. Ilow, D. Hatzinakos, Detection in alpha-stable noise environments based on prediction, Int. J. Adapt. Control Signal Proc., 11 (1997), 555–568.
    [15] D. Herranz, E. E. Kuruoglu, L. Toffolatti, An α-stable approach to the study of the p(d) distribution of unresolved point sources in cmb sky maps, Astron. Astrophys., 424 (2004), 1081–1096. https://doi.org/10.1051/0004-6361:20035858 doi: 10.1051/0004-6361:20035858
    [16] W. Feller, An introduction to probability theory and its applications, Vol. 2, 2 Eds., New York: John Wiley & Sons Inc., 1991.
    [17] N. L. Johnson, S. Kotz, N. Balakrishnan, Continuous univariate distributions, Vol. 1, 2 Eds., New York: Wiley, 1994.
    [18] L. Rudin, S. Osher, E. Fatemi, Nonlinear total variation based noise removal algorithm, Phys. D, 60 (1992), 259–268. https://doi.org/10.1016/0167-2789(92)90242-F doi: 10.1016/0167-2789(92)90242-F
    [19] M. A. Nikolova, A variational approach to remove outliers and impulse noise, J. Math. Imaging Vis., 20 (2004), 99–120. https://doi.org/10.1023/B:JMIV.0000011326.88682.e5 doi: 10.1023/B:JMIV.0000011326.88682.e5
    [20] R. H. Chan, Y. Dong, M. Hintermuller, An efficient two-phase l1-tv method for restoring blurred images with impulse noise, IEEE Trans. Image Process., 19 (2010), 1731–1739. https://doi.org/10.1109/TIP.2010.2045148 doi: 10.1109/TIP.2010.2045148
    [21] J. F. Cai, R. Chan, M. Nikolova, Fast two-phase image deblurring under impulse noise, J. Math. Imaging Vis., 36 (2010), 46–53. https://doi.org/10.1007/s10851-009-0169-7 doi: 10.1007/s10851-009-0169-7
    [22] G. Aubert, J. F. Aujol, A variational approach to removing multiplicative noise, SIAM J. Appl. Math., 68 (2008), 925–946. https://doi.org/10.1137/060671814 doi: 10.1137/060671814
    [23] J. Shi, S. Osher, A nonlinear inverse scale space method for a convex multiplicative noise model, SIAM J. Imaging Sci., 1 (2008), 294–321. https://doi.org/10.1137/070689954 doi: 10.1137/070689954
    [24] Y. Dong, T. Zeng, A convex variational model for restoring blurred images with multiplicative noise, SIAM J. Imaging Sci., 6 (2013), 1598–1625. https://doi.org/10.1137/120870621 doi: 10.1137/120870621
    [25] J. Lu, L. Shen, C. Xu, Y. Xu, Multiplicative noise removal in imaging: An exp-model and its fixed-point proximity algorithm, Appl. Comput. Harmon. Anal., 41 (2016), 518–539. https://doi.org/10.1016/j.acha.2015.10.003 doi: 10.1016/j.acha.2015.10.003
    [26] T. Le, R. Chartrand, T. Asaki, A variational approach to reconstructing images corrupted by poisson noise, J. Math. Imaging Vis., 27 (2007), 257–263. https://doi.org/10.1007/s10851-007-0652-y doi: 10.1007/s10851-007-0652-y
    [27] P. Getreuer, M. Tong, L. A. Vese, A variational model for the restoration of mr images corrupted by blur and rician noise, In: Advances in visual computing, Lecture Notes in Computer Science, Berlin, Heidelberg: Springer, 2011. https://doi.org/10.1007/978-3-642-24028-7_63
    [28] L. Chen, T. Zeng, A convex variational model for restoring blurred images with large rician noise, J. Math. Imaging Vis., 53 (2015), 92–111. https://doi.org/10.1007/s10851-014-0551-y doi: 10.1007/s10851-014-0551-y
    [29] F. Sciacchitano, Y. Dong, T. Zeng, Variational approach for restoring blurred images with cauchy noise, SIAM J. Imag. Sci., 8 (2015), 1894–1922. https://doi.org/10.1137/140997816 doi: 10.1137/140997816
    [30] J. J. Mei, Y. Dong, T. Z. Hunag, W. Yin, Cauchy noise removal by nonconvex admm with convergence guarantees, J. Sci. Comput., 74 (2018), 743–766. https://doi.org/10.1007/s10915-017-0460-5 doi: 10.1007/s10915-017-0460-5
    [31] Z. Yang, Z. Yang, G. Gui, A convex constraint variational method for restoring blurred images in the presence of alpha-stable noises, Sensors, 18 (2018). https://doi.org/10.3390/s18041175
    [32] Y. Chang, S. R. Kadaba, P. C. Doerschuk, S. B. Gelfand, Image restoration using recursive markov random field models driven by cauchy distributed noise, IEEE Signal Process. Lett., 8 (2001), 65–66. https://doi.org/10.1109/97.905941 doi: 10.1109/97.905941
    [33] A. Achim, E. Kuruoǧlu, Image denoising using bivariate α-stable distributions in the complex wavelet domain, IEEE Signal Process. Lett., 12 (2005), 17–20. https://doi.org/10.1109/LSP.2004.839692 doi: 10.1109/LSP.2004.839692
    [34] A. Loza, D. Bull, N. Canagarajah, A. Achim, Non-gaussian model-based fusion of noisy images in the wavelet domain, Comput. Vis. Image Und., 114 (2010), 54–65.
    [35] Y. Wang, W. Yin, J. Zeng, Global convergence of ADMM in nonconvex nonsmooth optimization, J. Sci. Comput., 78 (2019), 29–63. https://doi.org/10.1007/s10915-018-0757-z doi: 10.1007/s10915-018-0757-z
    [36] J. Yang, Y. Zhang, W. Yin, An efficient TVL1 algorithm for deblurring multichannel images corrupted by impulsive noise, SIAM J. Sci. Computing, 31 (2009), 2842–2865. https://doi.org/10.1137/080732894 doi: 10.1137/080732894
    [37] M. Ding, T. Z. Huang, S. Wang, J. J. Mei, X. L. Zhao, Total variation with overlapping group sparsity for deblurring images under cauchy noise, Appl. Math. Comput., 341 (2019), 128–147. https://doi.org/10.1016/j.amc.2018.08.014 doi: 10.1016/j.amc.2018.08.014
    [38] J. H. Yang, X. L. Zhao, J. J. Mei, S. Wang, T. H. Ma, T. Z. Huang, Total variation and high-order total variation adaptive model for restoring blurred images with cauchy noise, Comput. Math. Appl., 77 (2019), 1255–1272. https://doi.org/10.1016/j.camwa.2018.11.003 doi: 10.1016/j.camwa.2018.11.003
    [39] G. Kim, J. Cho, M. Kang, Cauchy noise removal by weighted nuclear norm minimization, J. Sci. Comput., 83 (2020), 1–21. https://doi.org/10.1007/s10915-020-01203-2 doi: 10.1007/s10915-020-01203-2
    [40] S. Lee, M. Kang, Group sparse representation for restoring blurred images with cauchy noise, J. Sci. Comput., 83 (2020), 1–27. https://doi.org/10.1007/s10915-020-01227-8 doi: 10.1007/s10915-020-01227-8
    [41] M. Jung, M. Kang, Image restoration under cauchy noise with sparse representation prior and total generalized variation, J. Comput. Math., 39 (2021), 81–107. https://doi.org/10.4208/jcm.1907-m2018-0234 doi: 10.4208/jcm.1907-m2018-0234
    [42] L. Bai, A new approach for cauchy noise removal, AIMS Math., 6 (2021), 10296–10312. https://doi.org/10.3934/math.2021596 doi: 10.3934/math.2021596
    [43] X. Ai, G. Ni, T. Zeng, Nonconvex regularization for blurred images with cauchy noise, Inverse Probl. Imag., 16 (2022), 625–646. https://doi.org/10.3934/ipi.2021065 doi: 10.3934/ipi.2021065
    [44] J. F. Cai, R. H. Chan, M. Nikolova, Two-phase approach for deblurring images corrupted by impulse plus gaussian noise, Inverse Probl. Imag., 2 (2008), 187–204. https://doi.org/10.3934/ipi.2008.2.187 doi: 10.3934/ipi.2008.2.187
    [45] Y. Xiao, T. Y. Zeng, J. Yu, M. K. Ng, Restoration of images corrupted by mixed gaussian-impulse noise via l1-l0 minimization, Pattern Recogn., 44 (2010), 1708–1720. https://doi.org/10.1016/j.patcog.2011.02.002 doi: 10.1016/j.patcog.2011.02.002
[46] R. Rojas, P. Rodríguez, B. Wohlberg, Mixed gaussian-impulse noise image restoration via total variation, 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2012, 1077–1080. https://doi.org/10.1109/ICASSP.2012.6288073
    [47] B. Dong, H. Ji, J. Li, Z. W. Shen, Y. H. Xu, Wavelet frame based blind image inpainting, Appl. Comput. Harmon. Anal., 32 (2011), 268–279. https://doi.org/10.1016/j.acha.2011.06.001 doi: 10.1016/j.acha.2011.06.001
    [48] J. Liu, X. C. Tai, H. Y. Huang, Z. D. Huan, A weighted dictionary learning models for denoising images corrupted by mixed noise, IEEE Trans. Image Process., 22 (2013), 1108–1120. https://doi.org/10.1109/TIP.2012.2227766 doi: 10.1109/TIP.2012.2227766
    [49] M. Yan, Restoration of images corrupted by impulse noise and mixed gaussian impulse noise using blind inpainting, SIAM J. Imaging Sci., 6 (2013), 1227–1245. https://doi.org/10.1137/12087178X doi: 10.1137/12087178X
    [50] M. Hintermüller, A. Langer, Subspace correction methods for a class of nonsmooth and nonadditive convex variational problems with mixed L1/L2 data-fidelity in image processing, SIAM J. Imaging Sci., 6 (2013), 2134–2173. https://doi.org/10.1137/120894130 doi: 10.1137/120894130
    [51] A. Langer, Automated parameter selection in the L1-L2-TV model for removing Gaussian plus impulse noise, Inverse Probl., 33 (2017). https://doi.org/10.1088/1361-6420/33/7/074002
    [52] A. Foi, M. Trimeche, V. Katkovnik, K. Egiazarian, Practical poissonian-gaussian noise modeling and fitting for single-image raw-data, IEEE Trans. Image Process., 17 (2008), 1737–1754. https://doi.org/10.1109/TIP.2008.2001399 doi: 10.1109/TIP.2008.2001399
    [53] A. Jezierska, C. Chaux, J. Pesquet, H. Talbot, An EM approach for Poisson-Gaussian noise modeling, 2011 19th European Signal Processing Conference, 2011, 2244–2248.
    [54] F. Murtagh, J. L. Starck, A. Bijaoui, Image restoration with noise suppression using a multiresolution support, Astron. Astrophys. Suppl. Ser., 112 (1995), 179–189.
    [55] B. Begovic, V. Stankovic, L. Stankovic, Contrast enhancement and denoising of poisson and gaussian mixture noise for solar images, 2011 18th IEEE International Conference on Image Processing, 2011,185–188. https://doi.org/10.1109/ICIP.2011.6115829
    [56] F. Luisier, T. Blu, M. Unser, Image denoising in mixed Poisson-Gaussian noise, IEEE Trans. Image Process., 20 (2011), 696–708. https://doi.org/10.1109/TIP.2010.2073477 doi: 10.1109/TIP.2010.2073477
    [57] M. Makitalo, A. Foi, Optimal inversion of the generalized anscombe transformation for Poisson-Gaussian noise, IEEE Trans. Image Process., 22 (2013), 91–103. https://doi.org/10.1109/TIP.2012.2202675 doi: 10.1109/TIP.2012.2202675
    [58] Y. Marnissi, Y. Zheng, J. Pesquet, Fast variational bayesian signal recovery in the presence of Poisson-Gaussian noise, 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2016, 3964–3968. https://doi.org/10.1109/ICASSP.2016.7472421
    [59] F. J. Anscombe, The transformation of poisson, binomial and negative-binomial data, Biometrika, 35 (1948), 246–254. https://doi.org/10.1093/biomet/35.3-4.246 doi: 10.1093/biomet/35.3-4.246
    [60] F. Benvenuto, A. La Camera, C. Theys, A. Ferrari, H. Lantéri, M. Bertero, The study of an iterative method for the reconstruction of images corrupted by poisson and gaussian noise, Inverse Probl., 24 (2008), 035016.
    [61] E. Chouzenoux, A. Jezierska, J. C. Pesquet, H. Talbot, A convex approach for image restoration with exact Poisson-Gaussian likelihood, SIAM J. Imaging Sci., 8 (2015), 17–30. https://doi.org/10.1137/15M1014395 doi: 10.1137/15M1014395
    [62] J. C. De los Reyes, C. B. Schönlieb, Image denoising: Learning the noise model via nonsmooth pde-constrained optimization, Inverse Probl. Imaging, 7 (2013), 1183–1214. https://doi.org/10.3934/ipi.2013.7.1183 doi: 10.3934/ipi.2013.7.1183
    [63] L. Calatroni, C. Chung, J. C. De Los Reyes, C. B. Schönlieb, T. Valkonen, Bilevel approaches for learning of variational imaging models, In: Variational methods: In imaging and geometric control, Berlin, Boston: De Gruyter, 2017. https://doi.org/10.1515/9783110430394-008
    [64] D. N. H. Thanh, S. D. Dvoenko, A method of total variation to remove the mixed poisson-gaussian noise, Pattern Recognit. Image Anal., 26 (2016), 285–293. https://doi.org/10.1134/S1054661816020231 doi: 10.1134/S1054661816020231
    [65] L. Calatroni, J. C. De Los Reyes, C. B. Schönlieb, Infimal convolution of data discrepancies for mixed noise removal, SIAM J. Imaging Sci., 10 (2017), 1196–1233. https://doi.org/10.1137/16M1101684 doi: 10.1137/16M1101684
    [66] L. Calatroni, K. Papafitsoros, Analysis and automatic parameter selection of a variational model for mixed gaussian and salt-and-pepper noise removal, Inverse Probl., 35 (2019), 114001.
    [67] J. Zhang, Y. Duan, Y. Lu, M. K. Ng, H. Chang, Bilinear constraint based admm for mixed poisson-gaussian noise removal, SIAM J. Imaging Sci., 15 (2021), 339–366. https://doi.org/10.3934/ipi.2020071 doi: 10.3934/ipi.2020071
    [68] Y. Chen, E. E. Kuruoglu, H. C. So, L. T. Huang, W. Q. Wang, Density parameter estimation for additive cauchy-gaussian mixture, 2014 IEEE Workshop on Statistical Signal Processing (SSP), 2014,197–200. https://doi.org/10.1109/SSP.2014.6884609
    [69] Y. Chen, E. E. Kuruoglu, H. C. So, Optimum linear regression in additive cauchy-gaussian noise, Signal Process., 106 (2015), 312–318. https://doi.org/10.1016/j.sigpro.2014.07.028 doi: 10.1016/j.sigpro.2014.07.028
    [70] A. Chambolle, T. Pock, A first-order primal-dual algorithm for convex problems with applications to imaging, J. Math. Imaging Vis., 40 (2011), 120–145. https://doi.org/10.1007/s10851-010-0251-1 doi: 10.1007/s10851-010-0251-1
[71] F. Li, C. Shen, C. Shen, J. Fan, Image restoration combining a total variational filter and a fourth-order filter, J. Vis. Commun. Image Represent., 18 (2007), 322–330. https://doi.org/10.1016/j.jvcir.2007.04.005 doi: 10.1016/j.jvcir.2007.04.005
    [72] K. Bredies, K. Kunisch, T. Pock, Total generalized variation, SIAM J. Imaging Sci., 3 (2010), 492–526. https://doi.org/10.1137/090769521
    [73] G. Gilboa, S. Osher, Nonlocal operators with applications to image processing, SIAM J. Multiscale Model. Simul., 7 (2009), 1005–1028. https://doi.org/10.1137/070698592 doi: 10.1137/070698592
    [74] M. Aharon, M. Elad, A. Bruckstein, K-SVD: An algorithm for designing of overcomplete dictionaries for sparse representation, IEEE Trans. Signal Process., 54 (2006), 4311–4322. https://doi.org/10.1109/TSP.2006.881199 doi: 10.1109/TSP.2006.881199
    [75] M. Elad, M. Aharon, Image denoising via sparse and redundant representations over learned dictionaries, IEEE Trans. Image Process., 15 (2006), 3736–3745. https://doi.org/10.1109/TIP.2006.881969 doi: 10.1109/TIP.2006.881969
    [76] Y. R. Li, L. Shen, D. Q. Dai, B. W. Suter, Framelet algorithms for de-blurring images corrupted by impulse plus gaussian noise, IEEE Trans. Image Process., 20 (2011), 1822–1837. https://doi.org/10.1109/TIP.2010.2103950 doi: 10.1109/TIP.2010.2103950
    [77] A. Chambolle, An algorithm for total variation minimization and applications, J. Math. Imaging Vis., 20 (2004), 89–97. https://doi.org/10.1023/B:JMIV.0000011325.36760.1e doi: 10.1023/B:JMIV.0000011325.36760.1e
    [78] T. Goldstein, S. Osher, The split bregman method for L1-regularized problems, SIAM J. Imaging Sci., 2 (2009), 323–343. https://doi.org/10.1137/080725891 doi: 10.1137/080725891
    [79] S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein, Distributed optimization and statistical learning via the alternating direction method of multipliers, Found. Trends Mach. Learn., 3 (2010), 1–122. http://doi.org/10.1561/2200000016 doi: 10.1561/2200000016
    [80] C. Chen, M. K. Ng, X. L. Zhao, Alternating direction method of multipliers for nonlinear image restoration problems, IEEE Trans. Image Process., 24 (2015), 33–43. http://doi.org/10.1109/TIP.2014.2369953 doi: 10.1109/TIP.2014.2369953
    [81] M. K. Ng, R. H. Chan, W. C. Tang, A fast algorithm for deblurring models with neumann boundary conditions, SIAM J. Sci. Comput., 21 (1999), 851–866. https://doi.org/10.1137/S1064827598341384 doi: 10.1137/S1064827598341384
    [82] N. Jacobson, Basic algebra, Freeman, New York, 1974.
    [83] B. R. Frieden, A new restoring algorithm for the preferential enhancement of edge gradients, J. Opt. Soc. Am., 66 (1976), 280–283. https://doi.org/10.1364/JOSA.66.000280 doi: 10.1364/JOSA.66.000280
    [84] J. P. Nolan, Numerical calculation of stable densities and distribution functions, Commun. Stat. Stoch. Models, 13 (1997), 759–774. https://doi.org/10.1080/15326349708807450 doi: 10.1080/15326349708807450
    [85] N. Balakrishnan, V. B. Nevzorov, A primer on statistical distributions, New York: John Wiley & Sons, 2003. https://doi.org/10.1002/0471722227
    [86] Z. Wang, A. C. Bovik, H. R. Sheikh, E. P. Simoncelli, Image quality assessment: From error visibility to structural similarity, IEEE Trans. Image Process., 13 (2004), 600–612. https://doi.org/10.1109/TIP.2003.819861 doi: 10.1109/TIP.2003.819861
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
