
Blind2Grad: Blind detail-preserving denoising via zero-shot with gradient regularized loss

  • Received: 28 March 2025 Revised: 25 May 2025 Accepted: 04 June 2025 Published: 19 June 2025
  • MSC : 49N45, 68U10, 90C15, 94A08

  • Over the past few years, deep learning-based approaches have come to dominate image processing, and image denoising is no exception. Recently, self-supervised denoising networks have garnered extensive attention in the zero-shot domain. Nevertheless, training a denoising network solely with zero-shot data leads to significant detail loss, which degrades the denoising quality. In this work, we proposed a novel dual-mask mechanism based on Euclidean distance selection as a global masker for the blind-spot network, thereby enhancing image quality and mitigating the loss of image detail. Furthermore, we constructed a recurrent framework, named Blind2Grad, which integrated a gradient-regularized loss and a uniform loss for training the denoiser and enabled the dual-mask mechanism to be continuously updated. This framework was capable of directly acquiring information from the original noisy image, giving priority to the key information that needed to be retained without collapsing to an identity mapping. Moreover, we conducted a thorough theoretical analysis of the convergence properties of our Blind2Grad loss from both probabilistic and game-theoretic viewpoints, demonstrating its consistency with supervised methods. Our Blind2Grad not only demonstrated outstanding performance on both synthetic and real-world datasets, but also exhibited significant efficacy in processing images with high noise levels.

    Citation: Bolin Song, Zhenhao Shuai, Yuanyuan Si, Ke Li. Blind2Grad: Blind detail-preserving denoising via zero-shot with gradient regularized loss[J]. AIMS Mathematics, 2025, 10(6): 14140-14166. doi: 10.3934/math.2025637




    Fractional diffusion equations (FDEs) have been utilized to model anomalous diffusion phenomena in the real world; see for instance [1,2,4,18,19,22,23,25]. One of the main features of the fractional differential operator is nonlocality, which poses a significant challenge for the numerical solution of FDEs: the coefficient matrix of the discretized FDEs is typically dense, so a direct solution method requires O(N^3) computational cost and O(N^2) memory storage, where N is the number of unknowns. However, by making use of the Toeplitz-like structure of the coefficient matrices, many efficient algorithms have been developed; see for instance [6,9,12,13,14,15,16,21].
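    To make the role of the Toeplitz-like structure concrete, the following sketch (ours, for illustration only; the helper name toeplitz_matvec and the test sizes are not taken from any cited work) shows the classical trick behind such fast algorithms: a Toeplitz matrix-vector product computed in O(N\log N) operations by embedding the Toeplitz matrix into a circulant matrix and applying the FFT.

```python
# Minimal sketch of a fast Toeplitz matrix-vector product via circulant embedding.
import numpy as np

def toeplitz_matvec(first_col, first_row, x):
    """Compute T @ x for the Toeplitz matrix T with the given first column/row."""
    n = len(x)
    # First column of the size-2n circulant embedding of T.
    c = np.concatenate([first_col, [0.0], first_row[:0:-1]])
    # Diagonalize the circulant with the FFT and multiply in the Fourier domain.
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x, len(c)))
    return y[:n].real

# Quick check against a dense product.
rng = np.random.default_rng(0)
col, row = rng.standard_normal(5), rng.standard_normal(5)
row[0] = col[0]
T = np.array([[col[i - j] if i >= j else row[j - i] for j in range(5)] for i in range(5)])
x = rng.standard_normal(5)
assert np.allclose(T @ x, toeplitz_matvec(col, row, x))
```

    The preconditioners developed below rely on the same FFT machinery, applied to circulant approximations of the Toeplitz factors rather than to exact embeddings.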

    In this paper, we consider the following initial-boundary value problem of spatial balanced FDEs [7,24]:

    \begin{equation}
    \begin{split}
    &\frac{\partial u(x,t)}{\partial t}
    +{}_{x_L}^{\rm RL}{\rm D}_{x}^{\alpha}\left(d_{+}(x,t)\,{}_{x}^{\rm C}{\rm D}_{x_R}^{\alpha}u(x,t)\right)
    +{}_{x}^{\rm RL}{\rm D}_{x_R}^{\beta}\left(d_{-}(x,t)\,{}_{x_L}^{\rm C}{\rm D}_{x}^{\beta}u(x,t)\right) = f(x,t),\quad (x,t)\in(x_L,x_R)\times(0,T],\\
    &u(x_L,t) = u_L(t),\quad u(x_R,t) = u_R(t),\quad 0\leq t\leq T,\\
    &u(x,0) = u_0(x),\quad x_L\leq x\leq x_R.
    \end{split}
    \end{equation} (1.1)

    where d_{\pm}(x,t) > 0 are diffusion coefficients, f(x,t) is the source term, and \alpha, \beta are the fractional orders satisfying \frac{1}{2} < \alpha, \beta < 1 . Here {}_{x_L}^{\rm RL}{\rm D}_{x}^{\gamma} and {}_{x}^{\rm RL}{\rm D}_{x_R}^{\gamma} denote the left-sided and right-sided Riemann-Liouville fractional derivatives for 0 < \gamma < 1 , respectively, defined by [17]

    \begin{equation*}
    {}_{x_L}^{\rm RL}{\rm D}_{x}^{\gamma}u(x) = \frac{1}{\Gamma(1-\gamma)}\frac{d}{dx}\int_{x_L}^{x}\frac{u(\xi)}{(x-\xi)^{\gamma}}\,d\xi,\qquad
    {}_{x}^{\rm RL}{\rm D}_{x_R}^{\gamma}u(x) = \frac{-1}{\Gamma(1-\gamma)}\frac{d}{dx}\int_{x}^{x_R}\frac{u(\xi)}{(\xi-x)^{\gamma}}\,d\xi,
    \end{equation*}

    where \Gamma(\cdot) is the Gamma function, while {}_{x_L}^{\rm C}{\rm D}_{x}^{\gamma} and {}_{x}^{\rm C}{\rm D}_{x_R}^{\gamma} denote the left-sided and right-sided Caputo fractional derivatives for 0 < \gamma < 1 , respectively, defined by [17]

    \begin{equation*}
    {}_{x_L}^{\rm C}{\rm D}_{x}^{\gamma}u(x) = \frac{1}{\Gamma(1-\gamma)}\int_{x_L}^{x}\frac{u'(\xi)}{(x-\xi)^{\gamma}}\,d\xi,\qquad
    {}_{x}^{\rm C}{\rm D}_{x_R}^{\gamma}u(x) = \frac{-1}{\Gamma(1-\gamma)}\int_{x}^{x_R}\frac{u'(\xi)}{(\xi-x)^{\gamma}}\,d\xi.
    \end{equation*}

    The fractional differential operator in (1.1) is called the balanced central fractional derivative, which was studied in [24]. One advantage of such a fractional differential operator is that its variational formulation has a symmetric bilinear form, which greatly benefits theoretical investigation.

    Recently, a finite volume approximation for the spatial balanced FDEs (1.1) was proposed in [7]. By applying a standard first-order difference scheme to the time derivative and a finite volume discretization scheme to the spatial balanced fractional differential operator, a series of systems of linear equations is generated, whose coefficient matrices share the form of the sum of a tridiagonal matrix and two Toeplitz-times-diagonal-times-Toeplitz matrices. One attractive feature of these coefficient matrices is that they are symmetric positive definite, so that the linear systems can be solved by the CG method, in which the three-term recurrence significantly reduces the computational and storage cost. However, due to the ill-conditioning of the coefficient matrices, the CG method, when applied to solve the resulting linear systems, usually converges very slowly. Therefore, preconditioners should be applied to improve the computational efficiency. In [7], the authors proposed two preconditioners: a circulant preconditioner for the constant diffusion coefficient case and a banded preconditioner for the variable diffusion coefficient case.

    In this paper, we consider approximate inverse preconditioners for the linear systems arising from the finite volume discretization of the spatial balanced FDEs (1.1). Our preconditioner is based on the symmetric approximate inverse strategy studied in [11] and the Sherman-Morrison-Woodbury formula. Rigorous analysis shows that the preconditioned matrix can be written as the sum of the identity matrix, a small norm matrix, and a low-rank matrix. Therefore, quick convergence of the CG method for solving the preconditioned linear systems is expected. Numerical examples, for both one-dimensional and two-dimensional cases, are given to demonstrate the robustness and effectiveness of the proposed preconditioner. We remark that our preconditioner can also be applied to another class of conservative balanced FDEs studied in [8,10].

    The rest of the paper is organized as follows. In Section 2, we present the discretized linear systems of the balanced FDEs. Our new preconditioners are given in Section 3, and their properties are investigated in detail in Section 4. In Section 5, we carry out the numerical experiments to demonstrate the performance of the proposed preconditioners. Finally, we give the concluding remarks in Section 6.

    Let \Delta t = T/M_t be the time step, where M_t is a given positive integer. We define a temporal partition t_j = j\Delta t for j = 0, 1, 2, \ldots, M_t . The first-order time derivative in (1.1) can be discretized by the standard backward Euler scheme, and we obtain the following semidiscrete form:

    \begin{equation}
    \frac{u(x,t_j)-u(x,t_{j-1})}{\Delta t}
    +{}_{x_L}^{\rm RL}{\rm D}_{x}^{\alpha}\left(d_{+}(x,t_j)\,{}_{x}^{\rm C}{\rm D}_{x_R}^{\alpha}u(x,t_j)\right)
    +{}_{x}^{\rm RL}{\rm D}_{x_R}^{\beta}\left(d_{-}(x,t_j)\,{}_{x_L}^{\rm C}{\rm D}_{x}^{\beta}u(x,t_j)\right) = f(x,t_j),
    \end{equation} (2.1)

    for j = 1, 2, \ldots, M_t . Let \Delta x = (x_R-x_L)/(N+1) be the size of the spatial grid, where N is a positive integer. We define a spatial partition x_i = x_L+i\Delta x for i = 0, 1, 2, \ldots, N+1 , and denote by x_{i-\frac{1}{2}} = \frac{x_{i-1}+x_{i}}{2} the midpoint of the interval [x_{i-1}, x_i] . Integrating both sides of (2.1) over [x_{i-\frac{1}{2}}, x_{i+\frac{1}{2}}] gives

    \begin{equation}
    \begin{split}
    &\frac{1}{\Delta t}\int_{x_{i-\frac{1}{2}}}^{x_{i+\frac{1}{2}}}u(x,t_j)\,dx
    +\int_{x_{i-\frac{1}{2}}}^{x_{i+\frac{1}{2}}}{}_{x_L}^{\rm RL}{\rm D}_{x}^{\alpha}\left(d_{+}(x,t_j)\,{}_{x}^{\rm C}{\rm D}_{x_R}^{\alpha}u(x,t_j)\right)dx
    +\int_{x_{i-\frac{1}{2}}}^{x_{i+\frac{1}{2}}}{}_{x}^{\rm RL}{\rm D}_{x_R}^{\beta}\left(d_{-}(x,t_j)\,{}_{x_L}^{\rm C}{\rm D}_{x}^{\beta}u(x,t_j)\right)dx\\
    &\qquad = \frac{1}{\Delta t}\int_{x_{i-\frac{1}{2}}}^{x_{i+\frac{1}{2}}}u(x,t_{j-1})\,dx
    +\int_{x_{i-\frac{1}{2}}}^{x_{i+\frac{1}{2}}}f(x,t_j)\,dx.
    \end{split}
    \end{equation} (2.2)

    Let S_{\Delta x}(x_L,x_R) be the space of continuous and piecewise-linear functions with respect to the spatial partition, and define the nodal basis functions \phi_k(x) as

    \begin{equation*}
    \phi_{k}(x) = \begin{cases}
    \dfrac{x-x_{k-1}}{\Delta x}, & x\in[x_{k-1},x_{k}],\\[0.4em]
    \dfrac{x_{k+1}-x}{\Delta x}, & x\in[x_{k},x_{k+1}],\\[0.4em]
    0, & \text{elsewhere},
    \end{cases}
    \end{equation*}

    for k = 1, 2, \ldots, N , and

    \begin{equation*}
    \phi_{0}(x) = \begin{cases}
    \dfrac{x_{1}-x}{\Delta x}, & x\in[x_{0},x_{1}],\\[0.4em]
    0, & \text{elsewhere},
    \end{cases}
    \qquad
    \phi_{N+1}(x) = \begin{cases}
    \dfrac{x-x_{N}}{\Delta x}, & x\in[x_{N},x_{N+1}],\\[0.4em]
    0, & \text{elsewhere}.
    \end{cases}
    \end{equation*}

    The approximate solution u_{\Delta x}(x,t_j)\in S_{\Delta x}(x_L,x_R) can be expressed as

    \begin{equation*}
    u_{\Delta x}(x,t_j) = \sum\limits_{k = 0}^{N+1}u_{k}^{(j)}\phi_{k}(x).
    \end{equation*}

    Therefore, the corresponding finite volume scheme leads to

    \begin{equation}
    \begin{split}
    &\frac{1}{\Delta t}\sum\limits_{k = 0}^{N+1}u_{k}^{(j)}\int_{x_{i-\frac{1}{2}}}^{x_{i+\frac{1}{2}}}\phi_{k}(x)\,dx
    +\frac{1}{\Gamma(1-\alpha)}\left.\int_{x_L}^{x}(x-\xi)^{-\alpha}\left(d_{+}(\xi,t_j)\,{}_{\xi}^{\rm C}{\rm D}_{x_R}^{\alpha}u_{h}(\xi,t_j)\right)d\xi\right|_{x_{i-\frac{1}{2}}}^{x_{i+\frac{1}{2}}}
    -\frac{1}{\Gamma(1-\beta)}\left.\int_{x}^{x_R}(\xi-x)^{-\beta}\left(d_{-}(\xi,t_j)\,{}_{x_L}^{\rm C}{\rm D}_{\xi}^{\beta}u_{h}(\xi,t_j)\right)d\xi\right|_{x_{i-\frac{1}{2}}}^{x_{i+\frac{1}{2}}}\\
    &\qquad = \frac{1}{\Delta t}\sum\limits_{k = 0}^{N+1}u_{k}^{(j-1)}\int_{x_{i-1/2}}^{x_{i+1/2}}\phi_{k}(x)\,dx
    +\int_{x_{i-1/2}}^{x_{i+1/2}}f(x,t_j)\,dx,
    \end{split}
    \end{equation} (2.3)

    which can be further approximated to obtain [7]

    \begin{equation}
    \begin{split}
    &\frac{1}{8}\left(u_{i-1}^{(j)}+6u_{i}^{(j)}+u_{i+1}^{(j)}\right)
    +\eta_{\alpha}\sum\limits_{l = 0}^{i}g_{i-l}^{(\alpha)}d_{+}\left(x_{l+\frac{1}{2}},t_j\right)\left(\sum\limits_{k = l}^{N}g_{k-l}^{(\alpha)}u_{k}^{(j)}-a_{N-l}^{(\alpha)}u_{N+1}^{(j)}\right)
    +\eta_{\beta}\sum\limits_{l = i}^{N+1}g_{l-i}^{(\beta)}d_{-}\left(x_{l-\frac{1}{2}},t_j\right)\left(\sum\limits_{k = 1}^{l}g_{l-k}^{(\beta)}u_{k}^{(j)}-a_{l-1}^{(\beta)}u_{0}^{(j)}\right)\\
    &\qquad = \frac{1}{8}\left(u_{i-1}^{(j-1)}+6u_{i}^{(j-1)}+u_{i+1}^{(j-1)}\right)
    +\frac{\Delta t}{\Delta x}\int_{x_{i-\frac{1}{2}}}^{x_{i+\frac{1}{2}}}f(x,t_j)\,dx,\quad 1\leq i\leq N,\ 1\leq j\leq M_t,
    \end{split}
    \end{equation} (2.4)

    where \eta_{\alpha} = \frac{\Delta t}{\Gamma(2-\alpha)^{2}\Delta x^{2\alpha}} , \eta_{\beta} = \frac{\Delta t}{\Gamma(2-\beta)^{2}\Delta x^{2\beta}} , and

    \begin{equation}
    g_{0}^{(\gamma)} = a_{0}^{(\gamma)},\qquad g_{k}^{(\gamma)} = a_{k}^{(\gamma)}-a_{k-1}^{(\gamma)},\quad k = 1,2,\ldots,N,
    \end{equation} (2.5)

    for \gamma = \alpha,\beta , with

    \begin{equation*}
    a_{0}^{(\gamma)} = \left(\frac{1}{2}\right)^{1-\gamma},\qquad
    a_{k}^{(\gamma)} = \left(k+\frac{1}{2}\right)^{1-\gamma}-\left(k-\frac{1}{2}\right)^{1-\gamma},\quad k = 1,2,\ldots,N.
    \end{equation*}
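    As a small illustration of the coefficients in (2.5), the following sketch (ours; the function name fv_coefficients and the sample order are illustrative) computes a_k^{(\gamma)} and g_k^{(\gamma)} and checks the sign and monotonicity pattern stated in Lemma 2.1 below.

```python
# Minimal sketch: the finite volume coefficients a_k and g_k of (2.5).
import numpy as np

def fv_coefficients(gamma, N):
    """Return arrays a[0..N] and g[0..N] for a given fractional order gamma."""
    k = np.arange(N + 1)
    a = np.empty(N + 1)
    a[0] = 0.5 ** (1.0 - gamma)
    a[1:] = (k[1:] + 0.5) ** (1.0 - gamma) - (k[1:] - 0.5) ** (1.0 - gamma)
    g = np.empty(N + 1)
    g[0] = a[0]
    g[1:] = a[1:] - a[:-1]          # g_k = a_k - a_{k-1}
    return a, g

# Sanity checks mirroring Lemma 2.1: g_0 > 0 and g_1 < g_2 < ... < 0.
a, g = fv_coefficients(0.7, 10)
assert g[0] > 0 and np.all(g[1:] < 0) and np.all(np.diff(g[1:]) > 0)
```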

    The initial and boundary conditions are

    \begin{equation*}
    u_{k}^{(0)} = u_{0}(x_{k}),\quad k = 0,1,2,\ldots,N+1,\qquad
    u_{0}^{(j)} = u_{L}(t_{j}),\quad u_{N+1}^{(j)} = u_{R}(t_{j}),\quad j = 1,2,\ldots,M_t.
    \end{equation*}

    Collecting all components i into a single matrix system, we obtain

    \begin{equation}
    \left(M+\eta_{\alpha}\tilde{G}_{\alpha}\tilde{D}_{+}^{(j)}\tilde{G}_{\alpha}^{ { \intercal }}
    +\eta_{\beta}\tilde{G}_{\beta}\tilde{D}_{-}^{(j)}\tilde{G}_{\beta}^{ { \intercal }}\right)u^{(j)}
     = Mu^{(j-1)}+\Delta t\,f^{(j)}+b^{(j)},\quad j = 1,2,\ldots,M_t,
    \end{equation} (2.6)

    where

    \begin{equation*}
    \tilde{D}_{+}^{(j)} = {\rm{diag}}\left(\left\{d_{+}\left(x_{k+\frac{1}{2}},t_{j}\right)\right\}_{k = 0}^{N}\right)
    \quad{\text{and}}\quad
    \tilde{D}_{-}^{(j)} = {\rm{diag}}\left(\left\{d_{-}\left(x_{k+\frac{1}{2}},t_{j}\right)\right\}_{k = 0}^{N}\right)
    \end{equation*}

    are diagonal matrices, M = \frac{1}{8}{\rm{tridiag}}(1,6,1) , u^{(j)} = [u_{1}^{(j)},u_{2}^{(j)},\ldots,u_{N}^{(j)}]^{ { \intercal }} , f^{(j)} = [f_{1}^{(j)},f_{2}^{(j)},\ldots,f_{N}^{(j)}]^{ { \intercal }} with

    \begin{equation*}
    f_{i}^{(j)} = \frac{1}{\Delta x}\int_{x_{i-\frac{1}{2}}}^{x_{i+\frac{1}{2}}}f(x,t_{j})\,dx,
    \end{equation*}

    and

    \begin{equation*}
    b^{(j)} = \left[b_{1}^{(j)}+\frac{1}{8}\left(u_{0}^{(j-1)}-u_{0}^{(j)}\right),\ b_{2}^{(j)},\ b_{3}^{(j)},\ldots,b_{N-1}^{(j)},\ b_{N}^{(j)}+\frac{1}{8}\left(u_{N+1}^{(j-1)}-u_{N+1}^{(j)}\right)\right]^{ { \intercal }}
    \end{equation*}

    with

    \begin{equation*}
    \begin{split}
    b_{i}^{(j)} & = \eta_{\alpha}\sum\limits_{l = 0}^{i}g_{i-l}^{(\alpha)}d_{+}\left(x_{l+\frac{1}{2}},t_{j}\right)a_{N-l}^{(\alpha)}u_{N+1}^{(j)}
    -\eta_{\alpha}g_{i}^{(\alpha)}d_{+}\left(x_{\frac{1}{2}},t_{j}\right)g_{0}^{(\alpha)}u_{0}^{(j)}\\
    &\quad+\eta_{\beta}\sum\limits_{l = i}^{N+1}g_{l-i}^{(\beta)}d_{-}\left(x_{l-\frac{1}{2}},t_{j}\right)a_{l-1}^{(\beta)}u_{0}^{(j)}
    -\eta_{\beta}g_{N-i+1}^{(\beta)}d_{-}\left(x_{N+\frac{1}{2}},t_{j}\right)g_{0}^{(\beta)}u_{N+1}^{(j)},\quad i = 1,2,\ldots,N.
    \end{split}
    \end{equation*}

    The matrices \tilde{G}_{\alpha} , \tilde{G}_{\beta} are N -by- (N+1) Toeplitz matrices defined by

    \begin{equation*}
    \tilde{G}_{\alpha} = \begin{bmatrix}
    g_{1}^{(\alpha)} & g_{0}^{(\alpha)} & & & \\
    g_{2}^{(\alpha)} & g_{1}^{(\alpha)} & g_{0}^{(\alpha)} & & \\
    \vdots & \ddots & \ddots & \ddots & \\
    g_{N}^{(\alpha)} & \cdots & g_{2}^{(\alpha)} & g_{1}^{(\alpha)} & g_{0}^{(\alpha)}
    \end{bmatrix},\qquad
    \tilde{G}_{\beta} = \begin{bmatrix}
    g_{0}^{(\beta)} & g_{1}^{(\beta)} & g_{2}^{(\beta)} & \cdots & g_{N}^{(\beta)} \\
     & g_{0}^{(\beta)} & g_{1}^{(\beta)} & \ddots & \vdots \\
     & & \ddots & \ddots & g_{2}^{(\beta)} \\
     & & & g_{0}^{(\beta)} & g_{1}^{(\beta)}
    \end{bmatrix}.
    \end{equation*}

    We remark that the entries of \tilde{G}_{\alpha} and \tilde{G}_{\beta} are independent of the time step t_j .

    We denote the coefficient matrix by A^{(j)} , that is,

    \begin{equation}
    A^{(j)} = M+\eta_{\alpha}\tilde{G}_{\alpha}\tilde{D}_{+}^{(j)}\tilde{G}_{\alpha}^{ { \intercal }}
    +\eta_{\beta}\tilde{G}_{\beta}\tilde{D}_{-}^{(j)}\tilde{G}_{\beta}^{ { \intercal }}.
    \end{equation} (2.7)

    As M and \tilde{D}_{+}^{(j)}, \tilde{D}_{-}^{(j)} are symmetric positive definite, we can see that the coefficient matrix A^{(j)} is symmetric positive definite too.
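    For a small N , the structure of A^{(j)} can be assembled densely and its symmetric positive definiteness verified numerically. The following sketch (ours; a naive assembly meant only to mirror (2.7), not an efficient implementation, and the sample coefficients d_{\pm} and step sizes are arbitrary) illustrates this.

```python
# Dense assembly sketch of the coefficient matrix (2.7) for a small N.
import numpy as np
from scipy.linalg import toeplitz
from scipy.special import gamma as Gamma

def g_coef(gam, n):
    # g_k^{(gam)} for k = 0..n via (2.5): first-order differences of a_k^{(gam)}.
    a = np.empty(n + 1)
    a[0] = 0.5 ** (1 - gam)
    kk = np.arange(1, n + 1)
    a[1:] = (kk + 0.5) ** (1 - gam) - (kk - 0.5) ** (1 - gam)
    return np.concatenate([[a[0]], np.diff(a)])

def assemble_A(N, alpha, beta, d_plus, d_minus, dt, dx):
    # eta_gamma = dt / (Gamma(2-gamma)^2 * dx^{2*gamma}), as below (2.4).
    eta_a = dt / (Gamma(2 - alpha) ** 2 * dx ** (2 * alpha))
    eta_b = dt / (Gamma(2 - beta) ** 2 * dx ** (2 * beta))
    ga, gb = g_coef(alpha, N), g_coef(beta, N)
    M = (np.diag(6.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
         + np.diag(np.ones(N - 1), -1)) / 8.0
    # N x (N+1) Toeplitz blocks: \tilde{G}_alpha (lower) and \tilde{G}_beta (upper).
    Gt_a = toeplitz(ga[1:], np.concatenate([[ga[1], ga[0]], np.zeros(N - 1)]))
    Gt_b = toeplitz(np.concatenate([[gb[0]], np.zeros(N - 1)]), gb)
    xh = dx * (np.arange(N + 1) + 0.5)                 # half points x_{k+1/2}
    Dp, Dm = np.diag(d_plus(xh)), np.diag(d_minus(xh))
    return M + eta_a * Gt_a @ Dp @ Gt_a.T + eta_b * Gt_b @ Dm @ Gt_b.T

A = assemble_A(64, 0.7, 0.8, lambda x: 1.0 + x, lambda x: 2.0 - x, dt=1e-2, dx=1.0 / 65)
assert np.allclose(A, A.T) and np.linalg.eigvalsh(A).min() > 0
```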

    For the entries of \tilde{G}_{\alpha} and \tilde{G}_{\beta} , we have the following result, which is directly obtained from Lemma 2.3 in [15].

    Lemma 2.1. Let g_{k}^{(\gamma)} be defined by (2.5) with \frac{1}{2} < \gamma < 1 . Then

    (1) g_{0}^{(\gamma)} > 0 , g_{1}^{(\gamma)} < g_{2}^{(\gamma)} < \cdots < g_{k}^{(\gamma)} < \cdots < 0 ;

    (2) |g_{k}^{(\gamma)}| < \frac{c_{\gamma}}{(k-1)^{\gamma+1}} for k = 2,3,\ldots , where c_{\gamma} = \gamma(1-\gamma) ;

    (3) \lim\limits_{k\to\infty}|g_{k}^{(\gamma)}| = 0 , \sum\limits_{k = 0}^{\infty}g_{k}^{(\gamma)} = 0 , and \sum\limits_{k = 0}^{p}g_{k}^{(\gamma)} > 0 for p\geq 1 .

    As the coefficient matrix is symmetric positive definite, we can apply the CG method to solve the linear systems (2.6). In order to improve the performance and reliability of the CG method, it is necessary to employ preconditioning. In this section, we develop the approximate inverse preconditioner for the linear system (2.6).

    For the convenience of analysis, we omit the superscript "(j)" for the jth time step, that is, we denote the coefficient matrix (2.7) as

    \begin{equation}
    A = M+\eta_{\alpha}\tilde{G}_{\alpha}\tilde{D}_{+}\tilde{G}_{\alpha}^{ { \intercal }}
    +\eta_{\beta}\tilde{G}_{\beta}\tilde{D}_{-}\tilde{G}_{\beta}^{ { \intercal }},
    \end{equation} (3.1)

    where \tilde{D}_{+} = \tilde{D}_{+}^{(j)} , \tilde{D}_{-} = \tilde{D}_{-}^{(j)} .

    Let g_{\alpha} and G_{\alpha} be the first column and the last N columns of \tilde{G}_{\alpha} , respectively, that is, \tilde{G}_{\alpha} = [g_{\alpha}, G_{\alpha}] , where

    \begin{equation*}
    g_{\alpha} = \begin{bmatrix}g_{1}^{(\alpha)}\\ g_{2}^{(\alpha)}\\ \vdots\\ g_{N}^{(\alpha)}\end{bmatrix}\in\mathbb{R}^{N}
    \quad{\text{and}}\quad
    G_{\alpha} = \begin{bmatrix}
    g_{0}^{(\alpha)} & & & \\
    g_{1}^{(\alpha)} & g_{0}^{(\alpha)} & & \\
    \vdots & \ddots & \ddots & \\
    g_{N-1}^{(\alpha)} & \cdots & g_{1}^{(\alpha)} & g_{0}^{(\alpha)}
    \end{bmatrix}\in\mathbb{R}^{N\times N}.
    \end{equation*}

    Analogously, we denote by g_{\beta} and G_{\beta} the last column and the first N columns of \tilde{G}_{\beta} , respectively, that is, \tilde{G}_{\beta} = [G_{\beta}, g_{\beta}] , where

    \begin{equation*}
    g_{\beta} = \begin{bmatrix}g_{N}^{(\beta)}\\ g_{N-1}^{(\beta)}\\ \vdots\\ g_{1}^{(\beta)}\end{bmatrix}\in\mathbb{R}^{N}
    \quad{\text{and}}\quad
    G_{\beta} = \begin{bmatrix}
    g_{0}^{(\beta)} & g_{1}^{(\beta)} & \cdots & g_{N-1}^{(\beta)} \\
     & g_{0}^{(\beta)} & \ddots & \vdots \\
     & & \ddots & g_{1}^{(\beta)} \\
     & & & g_{0}^{(\beta)}
    \end{bmatrix}\in\mathbb{R}^{N\times N}.
    \end{equation*}

    Then we have

    \begin{equation*}
    \begin{split}
    A & = M+\eta_{\alpha}[g_{\alpha},G_{\alpha}]\tilde{D}_{+}[g_{\alpha},G_{\alpha}]^{ { \intercal }}
    +\eta_{\beta}[G_{\beta},g_{\beta}]\tilde{D}_{-}[G_{\beta},g_{\beta}]^{ { \intercal }}\\
    & = M+\eta_{\alpha}G_{\alpha}\hat{D}_{+}G_{\alpha}^{ { \intercal }}
    +\eta_{\beta}G_{\beta}\hat{D}_{-}G_{\beta}^{ { \intercal }}
    +\eta_{\alpha}d_{+}\left(x_{\frac{1}{2}}\right)g_{\alpha}g_{\alpha}^{ { \intercal }}
    +\eta_{\beta}d_{-}\left(x_{N+\frac{1}{2}}\right)g_{\beta}g_{\beta}^{ { \intercal }},
    \end{split}
    \end{equation*}

    where

    \begin{equation*}
    \hat{D}_{+} = {\rm{diag}}\left(\left\{d_{+}\left(x_{k+\frac{1}{2}}\right)\right\}_{k = 1}^{N}\right),\qquad
    \hat{D}_{-} = {\rm{diag}}\left(\left\{d_{-}\left(x_{k+\frac{1}{2}}\right)\right\}_{k = 0}^{N-1}\right).
    \end{equation*}

    Therefore, A can be written as

    \begin{equation}
    A = \hat{A}+UU^{ { \intercal }},
    \end{equation} (3.2)

    where

    \begin{equation}
    \hat{A} = M+\eta_{\alpha}G_{\alpha}\hat{D}_{+}G_{\alpha}^{ { \intercal }}+\eta_{\beta}G_{\beta}\hat{D}_{-}G_{\beta}^{ { \intercal }}
    \end{equation} (3.3)

    and

    \begin{equation}
    U = \left[\sqrt{\eta_{\alpha}d_{+}\left(x_{\frac{1}{2}}\right)}\,g_{\alpha},\ \sqrt{\eta_{\beta}d_{-}\left(x_{N+\frac{1}{2}}\right)}\,g_{\beta}\right]\in\mathbb{R}^{N\times 2}.
    \end{equation} (3.4)

    According to the Sherman-Morrison-Woodbury formula, we have

    \begin{equation}
    A^{-1} = \left(\hat{A}+UU^{ { \intercal }}\right)^{-1}
     = \hat{A}^{-1}-\hat{A}^{-1}U\left(I+U^{ { \intercal }}\hat{A}^{-1}U\right)^{-1}U^{ { \intercal }}\hat{A}^{-1}.
    \end{equation} (3.5)

    In the following, we consider the approximation of \hat{A}^{-1} . To this end, we first define a matrix \tilde{A} with

    \begin{equation}
    \tilde{A} = M+\eta_{\alpha}G_{\alpha}D_{+}G_{\alpha}^{ { \intercal }}+\eta_{\beta}G_{\beta}D_{-}G_{\beta}^{ { \intercal }},
    \end{equation} (3.6)

    where D_{+} = {\rm{diag}}\left(\{d_{+}(x_{k})\}_{k = 1}^{N}\right) and D_{-} = {\rm{diag}}\left(\{d_{-}(x_{k})\}_{k = 1}^{N}\right) .

    Assume d_{+}(x), d_{-}(x)\in C[x_L,x_R] . Then it is easy to see that for any \epsilon > 0 , when the spatial grid size \Delta x is small enough, we have

    \begin{equation*}
    \left|d_{+}(x_{k})-d_{+}\left(x_{k\pm\frac{1}{2}}\right)\right| < \epsilon
    \quad{\text{and}}\quad
    \left|d_{-}(x_{k})-d_{-}\left(x_{k\pm\frac{1}{2}}\right)\right| < \epsilon.
    \end{equation*}

    Meanwhile, we learn from Lemma 2.1 that there exist c_{\alpha}, c_{\beta} > 0 such that \|G_{\alpha}\|_{2}\leq c_{\alpha} , \|G_{\beta}\|_{2}\leq c_{\beta} . Let

    \begin{equation}
    S_{1} = \hat{A}-\tilde{A}.
    \end{equation} (3.7)

    Then it holds that

    \begin{equation*}
    \|S_{1}\|_{2} = \left\|\eta_{\alpha}G_{\alpha}\left(\hat{D}_{+}-D_{+}\right)G_{\alpha}^{ { \intercal }}
    +\eta_{\beta}G_{\beta}\left(\hat{D}_{-}-D_{-}\right)G_{\beta}^{ { \intercal }}\right\|_{2}
    \leq\left(\eta_{\alpha}c_{\alpha}^{2}+\eta_{\beta}c_{\beta}^{2}\right)\epsilon.
    \end{equation*}

    This indicates that \tilde{A} can be a good approximation of \hat{A} when \Delta x is very small. However, it is not easy to compute the inverse of \tilde{A} .

    Motivated by the symmetric approximate inverse preconditioner proposed in [11], we consider the following approximations

    \begin{equation*}
    \tilde{A}^{-1/2}e_{i}\approx K_{i}^{-1/2}e_{i},\quad i = 1,2,\ldots,N,
    \end{equation*}

    where ei is the i-th column of the identity matrix I and

    \begin{equation}
    K_{i} = M+\eta_{\alpha}d_{+}(x_{i})G_{\alpha}G_{\alpha}^{ { \intercal }}+\eta_{\beta}d_{-}(x_{i})G_{\beta}G_{\beta}^{ { \intercal }},
    \end{equation} (3.8)

    which is symmetric positive definite. That is, we approximate the i-th column of \tilde{A}^{-1/2} by the i-th column of K_{i}^{-1/2} . Then we propose the following symmetric approximate inverse preconditioner

    \begin{equation}
    P_{1}^{-1} = \left(\sum\limits_{i = 1}^{N}K_{i}^{-1/2}e_{i}e_{i}^{ { \intercal }}\right)^{*}\left(\sum\limits_{i = 1}^{N}K_{i}^{-1/2}e_{i}e_{i}^{ { \intercal }}\right).
    \end{equation} (3.9)

    Although K_{i} is symmetric positive definite, it is not easy to compute the inverse of its square root. Hence we further approximate K_{i} by circulant matrices whose inverse square roots can be easily obtained by FFT.

    Let C_{M} , C_{\alpha} and C_{\beta} be Strang's circulant approximations [5] of M , G_{\alpha} and G_{\beta} , respectively. Then we obtain the following preconditioner

    \begin{equation}
    P_{2}^{-1} = \left(\sum\limits_{i = 1}^{N}C_{i}^{-1/2}e_{i}e_{i}^{ { \intercal }}\right)^{*}\left(\sum\limits_{i = 1}^{N}C_{i}^{-1/2}e_{i}e_{i}^{ { \intercal }}\right),
    \end{equation} (3.10)

    where C_{i} = C_{M}+\eta_{\alpha}d_{+}(x_{i})C_{\alpha}C_{\alpha}^{ { \intercal }}+\eta_{\beta}d_{-}(x_{i})C_{\beta}C_{\beta}^{ { \intercal }} is a circulant matrix.

    In order to make the preconditioner more practical, similar to the idea in [11,13], we utilize the interpolation technique. Let \{\tilde{x}_{k}\}_{k = 1}^{\ell} be \ell distinct interpolation points in [x_L, x_R] with \ell a small integer ( \ell\ll N ), and denote \lambda = (\lambda_{M}, \lambda_{\alpha}, \lambda_{\beta}) , where \lambda_{M} , \lambda_{\alpha} , \lambda_{\beta} are certain positive real numbers. Then we define the function

    \begin{equation*}
    g_{\lambda}(x) = \left(\lambda_{M}+\eta_{\alpha}\lambda_{\alpha}d_{+}(x)+\eta_{\beta}\lambda_{\beta}d_{-}(x)\right)^{-1/2},\quad x\in[x_L,x_R].
    \end{equation*}

    Let

    \begin{equation*}
    q_{\lambda}(x) = \phi_{1}(x)g_{\lambda}(\tilde{x}_{1})+\phi_{2}(x)g_{\lambda}(\tilde{x}_{2})+\cdots+\phi_{\ell}(x)g_{\lambda}(\tilde{x}_{\ell})
    \end{equation*}

    be the piecewise-linear interpolation of g_{\lambda}(x) based on the points \{(\tilde{x}_{k},g_{\lambda}(\tilde{x}_{k}))\}_{k = 1}^{\ell} . Define

    \begin{equation*}
    \tilde{C}_{k} = C_{M}+\eta_{\alpha}d_{+}(\tilde{x}_{k})C_{\alpha}C_{\alpha}^{ { \intercal }}+\eta_{\beta}d_{-}(\tilde{x}_{k})C_{\beta}C_{\beta}^{ { \intercal }},\quad k = 1,2,\ldots,\ell.
    \end{equation*}

    Then \tilde{C}_{k} is symmetric positive definite and can be diagonalized by FFT, that is,

    \begin{equation*}
    \tilde{C}_{k} = F^{*}\tilde{\Lambda}_{k}F,
    \end{equation*}

    where F is the Fourier matrix and \tilde{\Lambda}_{k} is a diagonal matrix whose diagonal entries are the eigenvalues of \tilde{C}_{k} . By making use of the interpolation technique, we approximate C_{i}^{-1/2} by

    \begin{equation*}
    C_{i}^{-1/2}\approx F^{*}\left(\sum\limits_{k = 1}^{\ell}\phi_{k}(x_{i})\tilde{\Lambda}_{k}^{-1/2}\right)F
     = \sum\limits_{k = 1}^{\ell}\phi_{k}(x_{i})\tilde{C}_{k}^{-1/2},\quad i = 1,2,\ldots,N.
    \end{equation*}

    By substituting these approximations into P_{2}^{-1} , we obtain the practical preconditioner

    \begin{equation}
    \begin{split}
    P_{3}^{-1} & = \left(\sum\limits_{i = 1}^{N}F^{*}\sum\limits_{k = 1}^{\ell}\phi_{k}(x_{i})\tilde{\Lambda}_{k}^{-1/2}F\,e_{i}e_{i}^{ { \intercal }}\right)^{*}
    \left(\sum\limits_{i = 1}^{N}F^{*}\sum\limits_{k = 1}^{\ell}\phi_{k}(x_{i})\tilde{\Lambda}_{k}^{-1/2}F\,e_{i}e_{i}^{ { \intercal }}\right)\\
    & = \left(\sum\limits_{i = 1}^{N}\sum\limits_{k = 1}^{\ell}\phi_{k}(x_{i})e_{i}e_{i}^{ { \intercal }}F^{*}\tilde{\Lambda}_{k}^{-1/2}\right)
    \left(\sum\limits_{i = 1}^{N}\sum\limits_{k = 1}^{\ell}\tilde{\Lambda}_{k}^{-1/2}F\,\phi_{k}(x_{i})e_{i}e_{i}^{ { \intercal }}\right)\\
    & = \left(\sum\limits_{k = 1}^{\ell}\sum\limits_{i = 1}^{N}\phi_{k}(x_{i})e_{i}e_{i}^{ { \intercal }}F^{*}\tilde{\Lambda}_{k}^{-1/2}\right)
    \left(\sum\limits_{k = 1}^{\ell}\tilde{\Lambda}_{k}^{-1/2}F\sum\limits_{i = 1}^{N}\phi_{k}(x_{i})e_{i}e_{i}^{ { \intercal }}\right)\\
    & = \left(\sum\limits_{k = 1}^{\ell}\Phi_{k}F^{*}\tilde{\Lambda}_{k}^{-1/2}\right)
    \left(\sum\limits_{k = 1}^{\ell}\tilde{\Lambda}_{k}^{-1/2}F\,\Phi_{k}\right),
    \end{split}
    \end{equation} (3.11)

    where \Phi_{k} = {\rm{diag}}\left(\phi_{k}(x_{1}),\phi_{k}(x_{2}),\ldots,\phi_{k}(x_{N})\right) . Now applying P_{3}^{-1} to any vector requires about O(\ell N\log N) operations, which is acceptable for a moderate number \ell .
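    The following sketch (ours; all helper names, the computation of the Strang eigenvalues via an FFT of the first column, and the hat-function evaluation through np.interp are implementation assumptions) illustrates how P_{3}^{-1} can be set up from \ell interpolation nodes and then applied to a vector with 2\ell FFTs.

```python
# Sketch: setup and FFT-based application of the preconditioner P_3^{-1} in (3.11).
import numpy as np

def strang_eigs_lower(g, N):
    # Eigenvalues (via FFT) of Strang's circulant approximation of the
    # lower-triangular Toeplitz matrix whose first column is g[0], g[1], ...
    c = np.zeros(N)
    c[:N // 2 + 1] = g[:N // 2 + 1]
    return np.fft.fft(c)

def build_P3(x, xt, d_plus, d_minus, g_a, g_b, eta_a, eta_b):
    """Precompute, for each interpolation node xt[k], the hat-function values
    Phi_k on the grid x and the eigenvalue array of the circulant C_tilde_k."""
    N = len(x)
    lam_M = np.fft.fft(np.r_[0.75, 0.125, np.zeros(N - 3), 0.125]).real
    sa2 = np.abs(strang_eigs_lower(g_a, N)) ** 2     # eigenvalues of C_a C_a^T
    sb2 = np.abs(strang_eigs_lower(g_b, N)) ** 2     # eigenvalues of C_b C_b^T
    phis, lams = [], []
    for k in range(len(xt)):
        e_k = (np.arange(len(xt)) == k).astype(float)
        phis.append(np.interp(x, xt, e_k))           # piecewise-linear phi_k(x_i)
        lams.append(lam_M + eta_a * d_plus(xt[k]) * sa2 + eta_b * d_minus(xt[k]) * sb2)
    return phis, lams

def apply_P3(v, phis, lams):
    # z = (sum_k Phi_k F^* Lam_k^{-1/2}) (sum_k Lam_k^{-1/2} F Phi_k) v, all by FFT.
    y = sum(lam ** -0.5 * np.fft.fft(phi * v) for phi, lam in zip(phis, lams))
    z = sum(phi * np.fft.ifft(lam ** -0.5 * y) for phi, lam in zip(phis, lams))
    return z.real
```

    Here g_a and g_b are the coefficient arrays from (2.5), and the per-node eigenvalue arrays are precomputed once, so each application of the preconditioner only touches diagonal scalings and FFTs.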

    Finally, by substituting \hat{A}^{-1} in the Sherman-Morrison-Woodbury formula (3.5) with P_{3}^{-1} , we obtain the preconditioner

    \begin{equation}
    P_{4}^{-1} = P_{3}^{-1}-P_{3}^{-1}U\left(I+U^{ { \intercal }}P_{3}^{-1}U\right)^{-1}U^{ { \intercal }}P_{3}^{-1}.
    \end{equation} (3.12)

    We remark that both P_{3}^{-1} and P_{4}^{-1} can be taken as preconditioners. It is clear that implementing P_{4}^{-1} requires more computational work than P_{3}^{-1} . However, we note that P_{4}^{-1} is a rank-2 update of P_{3}^{-1} , and g_{\alpha} , g_{\beta} are independent of t_j , which indicates that, when applying the preconditioner P_{4}^{-1} within the CG method, we can precompute P_{3}^{-1}U in advance, as well as U^{ { \intercal }}P_{3}^{-1}U and the inverse of the 2-by-2 matrix I+U^{ { \intercal }}P_{3}^{-1}U . Therefore, at each CG iteration with the preconditioner P_{4}^{-1} , besides the matrix-vector product with P_{3}^{-1} , only two inner products and two vector updates are needed. Hence, it is expected that P_{4}^{-1} may have better performance for one-dimensional problems.
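    A minimal sketch of this precomputation strategy (ours; apply_P3 is assumed to be any routine computing v\mapsto P_{3}^{-1}v , for instance a wrapper around the FFT-based sketch above, and the class name is illustrative) is given below.

```python
# Sketch: rank-2 Sherman-Morrison-Woodbury correction (3.12) on top of P_3^{-1}.
import numpy as np

class P4Preconditioner:
    """Precomputes P_3^{-1} U and the 2x2 matrix (I + U^T P_3^{-1} U)^{-1} once."""
    def __init__(self, apply_P3, U):
        self.apply_P3 = apply_P3
        self.W = np.column_stack([apply_P3(U[:, 0]), apply_P3(U[:, 1])])  # P_3^{-1} U
        self.S_inv = np.linalg.inv(np.eye(2) + U.T @ self.W)              # 2x2, done once

    def apply(self, v):
        z = self.apply_P3(v)
        # P_4^{-1} v = P_3^{-1} v - (P_3^{-1} U)(I + U^T P_3^{-1} U)^{-1} U^T P_3^{-1} v,
        # where U^T P_3^{-1} v = W^T v because P_3^{-1} is symmetric.
        return z - self.W @ (self.S_inv @ (self.W.T @ v))
```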

    Since P_{4}^{-1} is obtained by substituting \hat{A}^{-1} with P_{3}^{-1} in the expression of A^{-1} , the approximation property of P_{4}^{-1} to A^{-1} depends on how close P_{3}^{-1} is to \hat{A}^{-1} . Therefore, in this section, we study the difference between P_{3}^{-1} and \hat{A}^{-1} .

    We first introduce the off-diagonal decay property [20], which is crucial for studying the circulant approximation of the Toeplitz matrix.

    Definition 4.1. Let A = [a_{i,j}]_{i,j\in I} be a matrix, where the index set is I = \mathbb{Z}, \mathbb{N} or \{1,2,\ldots,N\} . We say A belongs to the class \mathcal{L}_{s} if

    \begin{equation}
    |a_{i,j}|\leq\frac{c}{(1+|i-j|)^{s}}
    \end{equation} (4.1)

    holds for s > 1 and some constant c > 0 , and say A belongs to the class \mathcal{E}_{r} if

    \begin{equation}
    |a_{i,j}|\leq c\,e^{-r|i-j|}
    \end{equation} (4.2)

    holds for r>0 and some constant c>0.

    The following results hold for the off-diagonal decay matrix classes \mathcal{L}_{s} and \mathcal{E}_{r} [3,11,15,20].

    Lemma 4.1. Let A = [a_{i,j}]_{i,j\in I} be a nonsingular matrix, where the index set is I = \mathbb{Z}, \mathbb{N} or \{1,2,\ldots,N\} .

    (1) If A\in\mathcal{L}_{s} for some s > 1 , then A^{-1}\in\mathcal{L}_{s} .

    (2) If A\in\mathcal{L}_{1+s_1} and B\in\mathcal{L}_{1+s_2} are finite matrices, then AB\in\mathcal{L}_{1+s} , where s = \min\{s_1,s_2\} .

    (3) If A\in\mathcal{E}_{r_1} and B\in\mathcal{E}_{r_2} are finite matrices, then AB\in\mathcal{E}_{r} for some constant 0 < r < \min\{r_1,r_2\} .

    (4) Let A be a banded finite matrix and Hermitian positive definite, and let f be an analytic function on [\lambda_{\min}(A),\lambda_{\max}(A)] such that f(\lambda) is real for real \lambda , where \lambda_{\min}(A) and \lambda_{\max}(A) denote the minimal and maximal eigenvalues of A , respectively. Then f(A) has the off-diagonal exponential decay property (4.2). In particular, taking f(x) = x^{-1}, x^{1/2}, and x^{-1/2} , respectively, shows that A^{-1} , A^{1/2} , and A^{-1/2} have the off-diagonal exponential decay property (4.2).

    Assume that d_{+}(x), d_{-}(x)\in C[x_L,x_R] . Then it follows from Lemma 2.1 and Lemma 4.1 that

    \begin{equation*}
    G_{\alpha},\ G_{\alpha}G_{\alpha}^{ { \intercal }},\ G_{\alpha}D_{+}G_{\alpha}^{ { \intercal }}\in\mathcal{L}_{1+\alpha}
    \quad{\text{and}}\quad
    G_{\beta},\ G_{\beta}G_{\beta}^{ { \intercal }},\ G_{\beta}D_{-}G_{\beta}^{ { \intercal }}\in\mathcal{L}_{1+\beta},
    \end{equation*}

    for \frac{1}{2} < \alpha,\beta < 1 . Therefore, it holds that \tilde{A}, \tilde{A}^{-1}\in\mathcal{L}_{1+\min\{\alpha,\beta\}} .

    Given an integer m , let G_{\alpha,m} and G_{\beta,m} be the (2m+1) -banded approximations of G_{\alpha} and G_{\beta} , respectively, that is,

    \begin{equation*}
    G_{\alpha,m}(i,j) = \begin{cases}G_{\alpha}(i,j), & |i-j|\leq m,\\ 0, & \text{otherwise},\end{cases}
    \qquad
    G_{\beta,m}(i,j) = \begin{cases}G_{\beta}(i,j), & |i-j|\leq m,\\ 0, & \text{otherwise}.\end{cases}
    \end{equation*}

    In the following discussion, we let

    \begin{equation}
    K_{i} = M+\eta_{\alpha}d_{+}(x_{i})G_{\alpha,m}G_{\alpha,m}^{ { \intercal }}
    +\eta_{\beta}d_{-}(x_{i})G_{\beta,m}G_{\beta,m}^{ { \intercal }},\quad i = 1,2,\ldots,N,
    \end{equation} (4.3)

    which can be regarded as a banded approximation of the K_{i} defined in (3.8). We remark that, in actual applications, we still employ K_{i} in (3.8) to construct the preconditioners.

    Define

    \begin{equation}
    \tilde{A}_{m} = M+\eta_{\alpha}G_{\alpha,m}D_{+}G_{\alpha,m}^{ { \intercal }}
    +\eta_{\beta}G_{\beta,m}D_{-}G_{\beta,m}^{ { \intercal }}.
    \end{equation} (4.4)

    As \tilde{A}_{m} and K_{i} are banded and symmetric positive definite, it follows from Lemma 4.1 that \tilde{A}_{m}^{-1} , K_{i}^{-1} and K_{i}^{-1/2} have the off-diagonal exponential decay property. Meanwhile, we have

    \begin{equation*}
    \lambda_{\min}(\tilde{A})\geq\lambda_{\min}(M)\geq\frac{1}{2},\qquad
    \lambda_{\min}(\tilde{A}_{m})\geq\lambda_{\min}(M)\geq\frac{1}{2},
    \end{equation*}

    and hence

    \begin{equation*}
    \|\tilde{A}^{-1}\|_{2}\leq 2\quad{\text{and}}\quad\|\tilde{A}_{m}^{-1}\|_{2}\leq 2.
    \end{equation*}

    For a given \epsilon > 0 , it follows from Theorem 12 in [7] that there exists an integer m_1 > 0 such that for all m\geq m_1 we have

    \begin{equation*}
    \|\tilde{A}_{m}-\tilde{A}\|_{2}\leq\epsilon/4.
    \end{equation*}

    Therefore, it holds that

    \begin{equation}
    \|\tilde{A}_{m}^{-1}-\tilde{A}^{-1}\|_{2}
     = \|\tilde{A}_{m}^{-1}(\tilde{A}-\tilde{A}_{m})\tilde{A}^{-1}\|_{2}
    \leq\|\tilde{A}_{m}^{-1}\|_{2}\,\|\tilde{A}_{m}-\tilde{A}\|_{2}\,\|\tilde{A}^{-1}\|_{2}
    \leq\epsilon,
    \end{equation} (4.5)

    and

    \begin{equation*}
    \|P_{1}^{-1}-\tilde{A}^{-1}\|_{2}
    \leq\|P_{1}^{-1}-\tilde{A}_{m}^{-1}\|_{2}+\|\tilde{A}_{m}^{-1}-\tilde{A}^{-1}\|_{2}
    \leq\|P_{1}^{-1}-\tilde{A}_{m}^{-1}\|_{2}+\epsilon.
    \end{equation*}

    Now, we turn to estimating the upper bound of \|P_{1}^{-1}-\tilde{A}_{m}^{-1}\|_{2} . Since both P_{1}^{-1} and \tilde{A}_{m}^{-1} are symmetric, we have

    \begin{equation}
    \begin{split}
    \|P_{1}^{-1}-\tilde{A}_{m}^{-1}\|_{2} & = \rho\left(P_{1}^{-1}-\tilde{A}_{m}^{-1}\right)
    \leq\|P_{1}^{-1}-\tilde{A}_{m}^{-1}\|_{1}
     = \max\limits_{1\leq j\leq N}\left\|\left(P_{1}^{-1}-\tilde{A}_{m}^{-1}\right)e_{j}\right\|_{1}\\
    &\leq\max\limits_{1\leq j\leq N}\left\|\left(P_{1}^{-1}-K_{j}^{-1}\right)e_{j}\right\|_{1}
    +\max\limits_{1\leq j\leq N}\left\|\left(K_{j}^{-1}-\tilde{A}_{m}^{-1}\right)e_{j}\right\|_{1}.
    \end{split}
    \end{equation} (4.6)

    For the first term, we have the following estimate.

    Lemma 4.2. Let K_{j} be defined by (4.3). Assume that 0 < d_{\min}\leq d_{+}(x), d_{-}(x)\leq d_{\max} . Then for a given \epsilon > 0 , there exist an integer N_1 > 0 and two positive constants c_{3}, c_{4} such that

    \begin{equation*}
    \left\|P_{1}^{-1}e_{j}-K_{j}^{-1}e_{j}\right\|_{1} < c_{3}\max\{\Delta_{N_1}d_{+,j},\ \Delta_{N_1}d_{-,j}\}+c_{4}\epsilon,
    \end{equation*}

    where \Delta_{N_1}d_{+,j} = \max\limits_{j-N_1\leq k\leq j+N_1}|d_{+,k}-d_{+,j}| and \Delta_{N_1}d_{-,j} = \max\limits_{j-N_1\leq k\leq j+N_1}|d_{-,k}-d_{-,j}| .

    Proof. We have

    \begin{equation}
    \begin{split}
    P_{1}^{-1}e_{j}-K_{j}^{-1}e_{j}
    & = \left(\sum\limits_{i = 1}^{N}K_{i}^{-1/2}e_{i}e_{i}^{ { \intercal }}\right)^{*}K_{j}^{-1/2}e_{j}-K_{j}^{-1}e_{j}
     = \sum\limits_{i = 1}^{N}e_{i}e_{i}^{ { \intercal }}K_{i}^{-1/2}K_{j}^{-1/2}e_{j}-\sum\limits_{i = 1}^{N}e_{i}e_{i}^{ { \intercal }}K_{j}^{-1}e_{j}\\
    & = \sum\limits_{i = 1}^{N}e_{i}e_{i}^{ { \intercal }}\left(K_{i}^{-1/2}-K_{j}^{-1/2}\right)K_{j}^{-1/2}e_{j}.
    \end{split}
    \end{equation} (4.7)

    As was shown in Theorem 2.2 of [3], for a given \epsilon > 0 , we can find a polynomial p_{k}(t) = \sum_{\ell = 0}^{k} a_{\ell} t^{\ell} of degree k such that

    \begin{equation}
    \left|\left[K_{i}^{-1/2}\right]_{l,r}-\left[p_{k}(K_i)\right]_{l,r}\right|
    \leq\left\|K_{i}^{-1/2}-p_{k}(K_i)\right\|_{2} < \epsilon,\quad 1\leq i\leq N,\ 1\leq l,r\leq N,
    \end{equation} (4.8)

    where [\,\cdot\,]_{l,r} denotes the (l,r) -th entry of a matrix. Then we can write (4.7) as

    \begin{equation}
    \begin{split}
    P_{1}^{-1}e_{j}-K_{j}^{-1}e_{j}
    & = \sum\limits_{i = 1}^{N}e_{i}e_{i}^{ { \intercal }}\left(K_{i}^{-1/2}-p_{k}(K_i)+p_{k}(K_i)-p_{k}(K_j)+p_{k}(K_j)-K_{j}^{-1/2}\right)K_{j}^{-1/2}e_{j}\\
    & = \sum\limits_{i = 1}^{N}e_{i}e_{i}^{ { \intercal }}H_{ij}^{(1)}e_{j}
    +\sum\limits_{i = 1}^{N}e_{i}e_{i}^{ { \intercal }}H_{ij}^{(2)}e_{j}
    +\sum\limits_{i = 1}^{N}e_{i}e_{i}^{ { \intercal }}H_{ij}^{(3)}e_{j},
    \end{split}
    \end{equation} (4.9)

    where

    \begin{equation*}
    H_{ij}^{(1)} = \left(K_{i}^{-1/2}-p_{k}(K_i)\right)K_{j}^{-1/2},\quad
    H_{ij}^{(2)} = \left(p_{k}(K_i)-p_{k}(K_j)\right)K_{j}^{-1/2},\quad
    H_{ij}^{(3)} = \left(p_{k}(K_j)-K_{j}^{-1/2}\right)K_{j}^{-1/2}.
    \end{equation*}

    Note that p_{k}(K_i) is a (2km+1) -banded matrix and K_{i}^{-1/2} has the off-diagonal exponential decay property, so K_{i}^{-1/2}-p_{k}(K_i) also has the off-diagonal exponential decay property. According to Lemma 4.1, H_{ij}^{(1)} has the off-diagonal exponential decay property. Hence, analogous to the proof of Lemma 3.8 in [11], we can find constants \hat{c} > 0 and \hat{r} > 0 such that

    \begin{equation}
    \left|\left[H_{ij}^{(1)}\right]_{l,r}\right|\leq\hat{c}\,e^{-\hat{r}|l-r|}\quad{\text{for}}\ l,r = 1,2,\ldots,N.
    \end{equation} (4.10)

    On the other hand, we denote the j -th column of H_{ij}^{(1)} by [h_{1,j},h_{2,j},\ldots,h_{N,j}]^{ { \intercal }} , and let \tilde{K}_{i} = K_{i}^{-1/2}-p_{k}(K_i) . Observe that

    \begin{equation*}
    h_{l,j} = \sum\limits_{r = 1}^{N}\tilde{K}_{i}(l,r)\,K_{j}^{-1/2}(r,j),\quad l = 1,2,\ldots,N.
    \end{equation*}

    Then, based on (4.8) and the fact that K_{j}^{-1/2} has the off-diagonal exponential decay property, we can show that there exists a constant \hat{c}_1 such that

    \begin{equation}
    |h_{l,j}| < \hat{c}_{1}\epsilon,\quad l = 1,2,\ldots,N.
    \end{equation} (4.11)

    Denote

    \begin{equation*}
    \sum\limits_{i = 1}^{N}e_{i}e_{i}^{ { \intercal }}H_{ij}^{(1)}e_{j}
     = \left[H_{1j}^{(1)}(1,j),\ H_{2j}^{(1)}(2,j),\ldots,H_{Nj}^{(1)}(N,j)\right]^{ { \intercal }}.
    \end{equation*}

    From (4.10) and (4.11), we can show that there exists a constant c_1 > 0 such that

    \begin{equation}
    \left\|\sum\limits_{i = 1}^{N}e_{i}e_{i}^{ { \intercal }}H_{ij}^{(1)}e_{j}\right\|_{1}
    \leq\sum\limits_{i = 1}^{N}\left|H_{ij}^{(1)}(i,j)\right| < c_{1}\epsilon.
    \end{equation} (4.12)

    Analogously, we can show that there exists a constant c_2 > 0 such that

    \begin{equation}
    \left\|\sum\limits_{i = 1}^{N}e_{i}e_{i}^{ { \intercal }}H_{ij}^{(3)}e_{j}\right\|_{1} < c_{2}\epsilon.
    \end{equation} (4.13)

    Now we turn to H_{ij}^{(2)} . It follows from the proof of Lemma 3.11 in [11] that p_{k}(K_i)-p_{k}(K_j) has the form

    \begin{equation}
    H_{ij}^{(2)} = \sum\limits_{\ell = 1}^{k}a_{\ell}\left(\sum\limits_{q = 0}^{\ell-1}K_{i}^{\ell-q-1}\left(K_{i}-K_{j}\right)K_{j}^{q}\right)K_{j}^{-1/2}
     = \left(d_{+,i}-d_{+,j}\right)\tilde{H}_{ij}^{(2)}+\left(d_{-,i}-d_{-,j}\right)\hat{H}_{ij}^{(2)},
    \end{equation} (4.14)

    where

    \begin{equation*}
    \tilde{H}_{ij}^{(2)} = \left(\sum\limits_{\ell = 1}^{k}\sum\limits_{q = 0}^{\ell-1}a_{\ell}\,\eta_{\alpha}K_{i}^{\ell-q-1}G_{\alpha}G_{\alpha}^{ { \intercal }}K_{j}^{q}\right)K_{j}^{-1/2}
    \quad{\text{and}}\quad
    \hat{H}_{ij}^{(2)} = \left(\sum\limits_{\ell = 1}^{k}\sum\limits_{q = 0}^{\ell-1}a_{\ell}\,\eta_{\beta}K_{i}^{\ell-q-1}G_{\beta}G_{\beta}^{ { \intercal }}K_{j}^{q}\right)K_{j}^{-1/2}.
    \end{equation*}

    It is easy to see that both \tilde{H}_{ij}^{(2)} and \hat{H}_{ij}^{(2)} have the off-diagonal exponential decay property. As

    \begin{equation*}
    \sum\limits_{i = 1}^{N}e_{i}e_{i}^{ { \intercal }}H_{ij}^{(2)}e_{j}
     = \left[(d_{+,1}-d_{+,j})\tilde{H}_{1j}^{(2)}(1,j),\ldots,(d_{+,N}-d_{+,j})\tilde{H}_{Nj}^{(2)}(N,j)\right]^{ { \intercal }}
    +\left[(d_{-,1}-d_{-,j})\hat{H}_{1j}^{(2)}(1,j),\ldots,(d_{-,N}-d_{-,j})\hat{H}_{Nj}^{(2)}(N,j)\right]^{ { \intercal }},
    \end{equation*}

    there exists an integer N_1 such that

    \begin{equation}
    \begin{split}
    \left\|\sum\limits_{i = 1}^{N}e_{i}e_{i}^{ { \intercal }}H_{ij}^{(2)}e_{j}\right\|_{1}
    &\leq\sum\limits_{k = 1}^{N}|d_{+,k}-d_{+,j}|\left|\tilde{H}_{kj}^{(2)}(k,j)\right|
    +\sum\limits_{k = 1}^{N}|d_{-,k}-d_{-,j}|\left|\hat{H}_{kj}^{(2)}(k,j)\right|\\
    & < \sum\limits_{k = j-N_1}^{j+N_1}|d_{+,k}-d_{+,j}|\left|\tilde{H}_{kj}^{(2)}(k,j)\right|
    +\sum\limits_{k = j-N_1}^{j+N_1}|d_{-,k}-d_{-,j}|\left|\hat{H}_{kj}^{(2)}(k,j)\right|+4d_{\max}\epsilon\\
    &\leq\tilde{c}\max\{\Delta_{N_1}d_{+,j},\ \Delta_{N_1}d_{-,j}\}+4d_{\max}\epsilon,
    \end{split}
    \end{equation} (4.15)

    where \Delta_{N_1}d_{+,j} = \max\limits_{j-N_1\leq k\leq j+N_1}|d_{+,k}-d_{+,j}| and \Delta_{N_1}d_{-,j} = \max\limits_{j-N_1\leq k\leq j+N_1}|d_{-,k}-d_{-,j}| .

    Let c_3 = \tilde{c} and c_4 = c_1+c_2+4d_{\max} . It follows from (4.12), (4.13) and (4.15) that

    \begin{equation*}
    \begin{split}
    \left\|P_{1}^{-1}e_{j}-K_{j}^{-1}e_{j}\right\|_{1}
    &\leq\left\|\sum\limits_{i = 1}^{N}e_{i}e_{i}^{ { \intercal }}H_{ij}^{(1)}e_{j}\right\|_{1}
    +\left\|\sum\limits_{i = 1}^{N}e_{i}e_{i}^{ { \intercal }}H_{ij}^{(2)}e_{j}\right\|_{1}
    +\left\|\sum\limits_{i = 1}^{N}e_{i}e_{i}^{ { \intercal }}H_{ij}^{(3)}e_{j}\right\|_{1}\\
    & < (c_{1}+c_{2})\epsilon+\tilde{c}\max\{\Delta_{N_1}d_{+,j},\ \Delta_{N_1}d_{-,j}\}+4d_{\max}\epsilon
     = c_{3}\max\{\Delta_{N_1}d_{+,j},\ \Delta_{N_1}d_{-,j}\}+c_{4}\epsilon.
    \end{split}
    \end{equation*}

    Now we consider the second term in (4.6).

    Lemma 4.3. Let K_{j} and \tilde{A}_{m} be defined by (4.3) and (4.4), respectively. Assume that 0 < d_{\min}\leq d_{+}(x), d_{-}(x)\leq d_{\max} . Then for a given \epsilon > 0 , there exist an integer N_2 > 0 and constants c_{5}, c_{6} > 0 such that

    \begin{equation}
    \left\|\left(K_{j}^{-1}-\tilde{A}_{m}^{-1}\right)e_{j}\right\|_{1}
    \leq c_{5}\max\limits_{1\leq k\leq N}\max\{\Delta_{N_2}d_{+,k},\ \Delta_{N_2}d_{-,k}\}+c_{6}\epsilon.
    \end{equation} (4.16)

    Proof. Let

    \begin{equation*}
    \hat{A}_{m} = M+\eta_{\alpha}G_{\alpha}G_{\alpha}^{ { \intercal }}D_{+}+\eta_{\beta}G_{\beta}G_{\beta}^{ { \intercal }}D_{-},
    \end{equation*}

    then we have

    \begin{equation}
    \left\|\left(K_{j}^{-1}-\tilde{A}_{m}^{-1}\right)e_{j}\right\|_{1}
    \leq\left\|\left(K_{j}^{-1}-\hat{A}_{m}^{-1}\right)e_{j}\right\|_{1}
    +\left\|\left(\hat{A}_{m}^{-1}-\tilde{A}_{m}^{-1}\right)e_{j}\right\|_{1}.
    \end{equation} (4.17)

    On the one hand, observe that

    \begin{equation*}
    K_{j}^{-1}-\hat{A}_{m}^{-1} = \hat{A}_{m}^{-1}\left(\hat{A}_{m}-K_{j}\right)K_{j}^{-1}
     = \eta_{\alpha}\hat{A}_{m}^{-1}G_{\alpha}G_{\alpha}^{ { \intercal }}\left(D_{+}-d_{+,j}I\right)K_{j}^{-1}
    +\eta_{\beta}\hat{A}_{m}^{-1}G_{\beta}G_{\beta}^{ { \intercal }}\left(D_{-}-d_{-,j}I\right)K_{j}^{-1},
    \end{equation*}

    we have

    \begin{equation*}
    \left\|\left(K_{j}^{-1}-\hat{A}_{m}^{-1}\right)e_{j}\right\|_{1}
    \leq\eta_{\alpha}\left\|\hat{A}_{m}^{-1}\right\|_{1}\left\|G_{\alpha}\right\|_{1}^{2}\left\|\left(D_{+}-d_{+,j}I\right)K_{j}^{-1}e_{j}\right\|_{1}
    +\eta_{\beta}\left\|\hat{A}_{m}^{-1}\right\|_{1}\left\|G_{\beta}\right\|_{1}^{2}\left\|\left(D_{-}-d_{-,j}I\right)K_{j}^{-1}e_{j}\right\|_{1},
    \end{equation*}

    where

    \begin{equation*}
    \left\|\left(D_{+}-d_{+,j}I\right)K_{j}^{-1}e_{j}\right\|_{1} = \sum\limits_{k = 1}^{N}\left|\left(d_{+,k}-d_{+,j}\right)K_{j}^{-1}(k,j)\right|
    \end{equation*}

    and

    \begin{equation*}
    \left\|\left(D_{-}-d_{-,j}I\right)K_{j}^{-1}e_{j}\right\|_{1} = \sum\limits_{k = 1}^{N}\left|\left(d_{-,k}-d_{-,j}\right)K_{j}^{-1}(k,j)\right|.
    \end{equation*}

    As K_{j}^{-1} has the off-diagonal exponential decay property, similar to the derivation of (4.15), we can show that, given an \epsilon > 0 , there exist an integer \tilde{N}_1 > 0 and a constant \tilde{c}_1 > 0 such that

    \begin{equation*}
    \left\|\left(D_{+}-d_{+,j}I\right)K_{j}^{-1}e_{j}\right\|_{1}\leq\tilde{c}_{1}\Delta_{\tilde{N}_1}d_{+,j}+2d_{\max}\epsilon,
    \end{equation*}

    and

    \begin{equation*}
    \left\|\left(D_{-}-d_{-,j}I\right)K_{j}^{-1}e_{j}\right\|_{1}\leq\tilde{c}_{1}\Delta_{\tilde{N}_1}d_{-,j}+2d_{\max}\epsilon.
    \end{equation*}

    Therefore, there exist positive constants \tilde{c}_2 and \tilde{c}_3 such that

    \begin{equation}
    \left\|\left(K_{j}^{-1}-\hat{A}_{m}^{-1}\right)e_{j}\right\|_{1}
    \leq\tilde{c}_{2}\max\{\Delta_{\tilde{N}_1}d_{+,j},\ \Delta_{\tilde{N}_1}d_{-,j}\}+\tilde{c}_{3}\epsilon.
    \end{equation} (4.18)

    On the other hand, observe that

    \begin{equation*}
    \begin{split}
    \hat{A}_{m}^{-1}-\tilde{A}_{m}^{-1}& = \tilde{A}_{m}^{-1}\left(\tilde{A}_{m}-\hat{A}_{m}\right)\hat{A}_{m}^{-1}
     = \eta_{\alpha}\tilde{A}_{m}^{-1}\left(G_{\alpha}D_{+}G_{\alpha}^{ { \intercal }}-G_{\alpha}G_{\alpha}^{ { \intercal }}D_{+}\right)\hat{A}_{m}^{-1}
    +\eta_{\beta}\tilde{A}_{m}^{-1}\left(G_{\beta}D_{-}G_{\beta}^{ { \intercal }}-G_{\beta}G_{\beta}^{ { \intercal }}D_{-}\right)\hat{A}_{m}^{-1}\\
    & = \eta_{\alpha}\tilde{A}_{m}^{-1}G_{\alpha}\left(D_{+}G_{\alpha}^{ { \intercal }}-G_{\alpha}^{ { \intercal }}D_{+}\right)\hat{A}_{m}^{-1}
    +\eta_{\beta}\tilde{A}_{m}^{-1}G_{\beta}\left(D_{-}G_{\beta}^{ { \intercal }}-G_{\beta}^{ { \intercal }}D_{-}\right)\hat{A}_{m}^{-1},
    \end{split}
    \end{equation*}

    we have

    \begin{equation*}
    \left\|\left(\hat{A}_{m}^{-1}-\tilde{A}_{m}^{-1}\right)e_{j}\right\|_{1}
    \leq\left\|\hat{A}_{m}^{-1}-\tilde{A}_{m}^{-1}\right\|_{1}
    \leq\left\|\tilde{A}_{m}^{-1}\right\|_{1}\left\|\hat{A}_{m}^{-1}\right\|_{1}
    \left(\eta_{\alpha}\left\|G_{\alpha}\right\|_{1}\left\|D_{+}G_{\alpha}^{ { \intercal }}-G_{\alpha}^{ { \intercal }}D_{+}\right\|_{1}
    +\eta_{\beta}\left\|G_{\beta}\right\|_{1}\left\|D_{-}G_{\beta}^{ { \intercal }}-G_{\beta}^{ { \intercal }}D_{-}\right\|_{1}\right).
    \end{equation*}

    Since G_{\alpha} and G_{\beta} have the off-diagonal decay property, as shown in the proof of Lemma 4.7 in [15], given an \epsilon > 0 , there exist an integer \tilde{N}_2 > 0 and a constant \tilde{c}_4 > 0 such that

    \begin{equation*}
    \left\|D_{+}G_{\alpha}^{ { \intercal }}-G_{\alpha}^{ { \intercal }}D_{+}\right\|_{1}
    \leq\tilde{c}_{4}\max\limits_{1\leq k\leq N}\Delta_{\tilde{N}_2}d_{+,k}+2d_{\max}\epsilon
    \end{equation*}

    and

    \begin{equation*}
    \left\|D_{-}G_{\beta}^{ { \intercal }}-G_{\beta}^{ { \intercal }}D_{-}\right\|_{1}
    \leq\tilde{c}_{4}\max\limits_{1\leq k\leq N}\Delta_{\tilde{N}_2}d_{-,k}+2d_{\max}\epsilon,
    \end{equation*}

    where \Delta_{\tilde{N}_2}d_{+,k} = \max\limits_{k-\tilde{N}_2\leq l\leq k+\tilde{N}_2}|d_{+,l}-d_{+,k}| and \Delta_{\tilde{N}_2}d_{-,k} = \max\limits_{k-\tilde{N}_2\leq l\leq k+\tilde{N}_2}|d_{-,l}-d_{-,k}| . Therefore, there exist positive constants \tilde{c}_5 and \tilde{c}_6 such that

    \begin{equation}
    \left\|\left(\hat{A}_{m}^{-1}-\tilde{A}_{m}^{-1}\right)e_{j}\right\|_{1}
    \leq\tilde{c}_{5}\max\limits_{1\leq k\leq N}\max\{\Delta_{\tilde{N}_2}d_{+,k},\ \Delta_{\tilde{N}_2}d_{-,k}\}+\tilde{c}_{6}\epsilon.
    \end{equation} (4.19)

    Now, let

    \begin{equation*}
    N_{2} = \max\{\tilde{N}_1,\tilde{N}_2\},\quad c_{5} = \tilde{c}_{2}+\tilde{c}_{5}\quad{\text{and}}\quad c_{6} = \tilde{c}_{3}+\tilde{c}_{6}.
    \end{equation*}

    It follows from (4.18) and (4.19) that, given an \epsilon > 0 , there exist an integer N_2 > 0 and constants c_{5}, c_{6} > 0 such that

    \begin{equation*} \|(K_j^{-1}-\tilde{A}_{m}^{-1})e_j\|_{1}\leq c_{5}\max\limits_{1\leq k \leq N} \max \{\Delta_{N_2}d_{+,k}, \Delta_{N_2}d_{-,k}\} + c_{6}\epsilon. \end{equation*}

    The proof is completed.

    In combination with (4.5), Lemma 4.2 and Lemma 4.3, we obtain the following theorem.

    Theorem 4.1. Assume 0 < d_{\min} \leq d_+(x), d_-(x)\leq d_{\max} . Then given an \epsilon > 0 , there exist an integer N_3 > 0 and constants c_{7}, c_{8} > 0 such that

    \begin{equation} \|P_{1}^{-1}-\tilde{A}^{-1}\|_{2}\leq c_{7}\max\limits_{1\leq k \leq N} \max\{\Delta_{N_3}d_{+,k}, \Delta_{N_3}d_{-,k}\} +c_{8}\epsilon. \end{equation} (4.20)

    Remark 1. If d_{+}(x), \, d_{-}(x)\in C[x_L, x_R] , then \max\limits_{1\leq k\leq N}\max\{\Delta_{N_3}d_{+, k}, \Delta_{N_3}d_{-, k}\} can be very small for sufficiently large N , which implies that P_{1}^{-1} will then be a good approximation to \tilde{A}^{-1} .

    Now we consider the difference between P_{2}^{-1} and P_{1}^{-1} . As shown in Section 3.3 of [11], we know that, for a given \epsilon > 0 , there exists a polynomial p_{k}(t) = \sum_{\ell = 0}^{k} a_\ell t^\ell of degree k such that

    \begin{equation} \left\|K_{i}^{-\frac{1}{2}}-p_{k}(K_i)\right\|_2 < \epsilon \; \; {\text{and}}\; \; \left\|C_{i}^{-\frac{1}{2}}-p_{k}(C_i)\right\|_2 < \epsilon, \quad {\text{for}}\ i = 1,2,\cdots,N. \end{equation} (4.21)

    Define

    \begin{equation} \tilde{P}_{1} = \left(\sum\limits_{i = 1}^{n} p_{k}\left(K_{i}\right) e_{i} e_{i}^{ {\intercal}}\right)^{*} \left(\sum\limits_{i = 1}^{n} p_{k}\left(K_{i}\right) e_{i} e_{i}^{ {\intercal}}\right) \end{equation} (4.22)

    and

    \begin{equation} \tilde{P}_{2} = \left(\sum\limits_{i = 1}^{n} p_{k}\left(C_{i}\right) e_{i} e_{i}^{ {\intercal}}\right)^{*} \left(\sum\limits_{i = 1}^{n} p_{k}\left(C_{i}\right) e_{i} e_{i}^{ {\intercal}}\right). \end{equation} (4.23)

    Then we have

    \begin{equation} P_2^{-1}-P_1^{-1} = (P_2^{-1}-\tilde{P}_2)+(\tilde{P}_2-\tilde{P}_1)+(\tilde{P}_1-P_1^{-1}). \end{equation} (4.24)

    Lemma 4.4. Let \tilde{P}_1 and \tilde{P}_2 be defined by (4.22) and (4.23), respectively. Then we have

    \begin{equation*} {\mathrm{rank}}(\tilde{P}_{2}-\tilde{P}_{1})\leq 8km. \end{equation*}

    Proof. Observe that

    \begin{equation} K_{i} - C_{i} = M-C(M)+\eta_{\alpha} d_{+,i} \left(G_{\alpha}G_{\alpha}^{ { \intercal }}-C_{\alpha}C_{\alpha}^{ { \intercal }}\right) + \eta_{\beta} d_{-,i}\left(G_{\beta}G_{\beta}^{ { \intercal }}-C_{\beta}C_{\beta}^{ { \intercal }}\right). \end{equation} (4.25)

    It is easy to see that

    \begin{equation} M-C(M) = \frac{1}{8}\left(\begin{bmatrix} 6 & 1 & & \\ 1 & 6 & \ddots & \\ & \ddots & \ddots & 1 \\ & & 1 & 6 \end{bmatrix} -\begin{bmatrix} 6 & 1 & & 1 \\ 1 & 6 & \ddots & \\ & \ddots & \ddots & 1 \\ 1 & & 1 & 6 \end{bmatrix}\right) = \frac{1}{8}\begin{bmatrix} & & & -1 \\ & & & \\ & & & \\ -1 & & & \end{bmatrix}. \end{equation} (4.26)

    For the matrix G_{\alpha} and its Strang's circulant approximation C_{\alpha} , they have the following form

    \begin{equation*} G_{\alpha} = \begin{bmatrix} K_0 & & & \\ K_1 & K_0 & & \\ &\ddots &\ddots & \\ & & K_1 & K_0 \end{bmatrix} \; \; {\text{and}}\; \; C_{\alpha} = \begin{bmatrix} K_0 & & & K_1 \\ K_1 & K_0 & & \\ & \ddots & \ddots & \\ & & K_1 & K_0 \end{bmatrix}, \end{equation*}

    where K_0, K_1\in\mathbb{R}^{m \times m} . Therefore, we have

    \begin{equation} \begin{split} C_{\alpha}C_{\alpha}^{ { \intercal }} & = \left(G_{\alpha}+\begin{bmatrix} & & K_{1} \\&&\\&&\end{bmatrix}\right) \left(G_{\alpha}^{ { \intercal }}+\begin{bmatrix} & & \\& &\\ K_{1}^{ { \intercal }}& &\end{bmatrix}\right) \\ & = G_{\alpha}G_{\alpha}^{ { \intercal }} +G_{\alpha}\begin{bmatrix} & & \\& &\\ K_{1}^{ { \intercal }}& &\end{bmatrix} +\begin{bmatrix} & & K_{1} \\&&\\&&\end{bmatrix}G_{\alpha}^{ { \intercal }} +\begin{bmatrix} & & K_{1} \\&&\\&&\end{bmatrix}\, \begin{bmatrix} & & \\& &\\ K_{1}^{ { \intercal }}& &\end{bmatrix}\\ & = G_{\alpha}G_{\alpha}^{ { \intercal }} +\begin{bmatrix} & & \\& &\\K_{0}K_{1}^{ { \intercal }} & &\end{bmatrix} +\begin{bmatrix} & & K_{1}K_{0}^{ { \intercal }}\\ & & \\ & &\end{bmatrix} +\begin{bmatrix} K_{1}K_{1}^{ { \intercal }} & & \\ & & \\ & &\end{bmatrix}. \end{split} \end{equation} (4.27)

    Similarly, we can show that

    \begin{equation} C_{\beta}C_{\beta}^{ { \intercal }} = G_{\beta}G_{\beta}^{ { \intercal }} +\begin{bmatrix} & & \tilde{K}_{0}\tilde{K}_{1}^{ { \intercal }} \\ & & \\ & &\end{bmatrix} +\begin{bmatrix} & & \\ & & \\ \tilde{K}_{1}\tilde{K}_{0}^{ { \intercal }} & &\end{bmatrix} +\begin{bmatrix} & & \\ & & \\ & & \tilde{K}_{1}\tilde{K}_{1}^{ { \intercal }}\end{bmatrix}. \end{equation} (4.28)

    Therefore, we can see from (4.25), (4.26), (4.27) and (4.28) that K_{i}-C_{i} is of the form

    \begin{equation*} K_{i}-C_{i} = \begin{bmatrix} + & & & + \\ & & & \\ + & & & + \end{bmatrix}, \end{equation*}

    where "+" denotes a m -by- m block matrix.

    Analogous to the proof of Lemma 3.11 in [11], it follows from

    \begin{equation*} p_k(C_i)-p_k(K_i) = \sum\limits_{j = 0}^{k} a_j \left(C_i^j-K_i^j\right) = \sum\limits_{j = 0}^{k}\sum\limits_{\ell = 0}^{j-1} a_j C_i^{j-\ell-1}(C_i-K_i)K_i^{\ell} \end{equation*}

    and

    \tilde{P}_{2}-\tilde{P}_{1} = \left(\sum\limits_{i = 1}^{N} e_ie_i^{ { \intercal }} p_k(C_i)\right) \left(\sum\limits_{i = 1}^{N} (p_k(C_i)-p_k(K_i)) e_ie_i^{ { \intercal }}\right) +\left(\sum\limits_{i = 1}^{N} (p_k(C_i)-p_k(K_i)) e_ie_i^{ { \intercal }}\right)^{*} \left(\sum\limits_{i = 1}^{N} p_k(K_i)e_ie_i^{ { \intercal }}\right)

    that {\mathrm{rank}}(\tilde{P}_2-\tilde{P}_1)\leq 8km .

    On the other hand, for P_1^{-1}-\tilde{P}_1 and P_2^{-1}-\tilde{P}_2 , we can directly apply the proof of Lemma 3.12 in [11] to obtain the following lemma.

    Lemma 4.5. Let \tilde{P}_1 and \tilde{P}_2 be defined by (4.22) and (4.23), respectively. Then given an \epsilon > 0 , there exist constants \tilde{c}_7 > 0 and \tilde{c}_8 > 0 such that

    \begin{equation*} \|P_1^{-1}-\tilde{P}_{1}\|_{2} < \tilde{c}_7\epsilon \; \; \mathit{{\text{and}}}\; \; \|P_2^{-1}-\tilde{P}_{2}\|_{2} < \tilde{c}_8\epsilon. \end{equation*}

    By Lemma 4.4 and Lemma 4.5, we obtain the following theorem.

    Theorem 4.2. Let P_{1}^{-1} and P_{2}^{-1} be defined by (3.9) and (3.10), respectively. Then given an \epsilon > 0 , there exist constants \tilde{c}_9 > 0 and k > 0 such that

    \begin{equation*} P_2^{-1}-P_{1}^{-1} = E+M, \end{equation*}

    where E = (P_2^{-1}-\tilde{P}_2)+(\tilde{P}_1-P_1^{-1}) is a small norm matrix with \|E\|_{2} < \tilde{c}_9\epsilon , and M = \tilde{P}_2-\tilde{P}_1 is a low rank matrix with {\mathrm{rank}}(M)\leq 8km .

    We can directly apply the analysis of section 3.4 in [11] to reach the following conclusion.

    Lemma 4.6. Let P_{2}^{-1} and P_{3}^{-1} be defined by (3.10) and (3.11), respectively. Then, for a given \epsilon > 0 , there exist an integer \ell_{0} > 0 and a constant c_{9} > 0 such that

    \begin{equation*} \|P_{3}^{-1}-P_{2}^{-1}\|_{2} < c_{9}\epsilon \end{equation*}

    holds when the number of interpolation points in P_{3}^{-1} satisfies \ell\geq \ell_{0} .

    Summarizing our analysis, we have the following theorem.

    Theorem 4.3. Let P_{3}^{-1} and A be defined by (3.11) and (3.2), respectively. Then we have

    \begin{equation*} P_{3}^{-1}A = I+E+S, \end{equation*}

    where E is a low rank matrix and S is a small norm matrix.

    Proof. Observe that

    \begin{equation} \begin{split} P_{3}^{-1}A & = \left(P_{3}^{-1}-\tilde{A}^{-1}+\tilde{A}^{-1}\right)A = \left(P_{3}^{-1}-\tilde{A}^{-1}\right)A + \tilde{A}^{-1}\left(\tilde{A}+A-\tilde{A}\right) = I+\left(P_{3}^{-1}-\tilde{A}^{-1}\right)A + \tilde{A}^{-1}\left(A-\tilde{A}\right). \end{split} \end{equation} (4.29)

    By (3.2), (3.7), Theorems 4.1, 4.2 and Lemma 4.6, we can show that

    \begin{equation*} A-\tilde{A} = E_1+S_1 \quad{\text{and}}\quad P_{3}^{-1}-\tilde{A}^{-1} = E_2+S_2, \end{equation*}

    where E_1 and E_2 are two low rank matrices, and S_1 and S_2 are two small norm matrices. Therefore, it is easy to see from (4.29) that

    \begin{equation*} P_{3}^{-1}A = I+E+S, \end{equation*}

    where E is a low rank matrix and S is a small norm matrix.

    In this section, we carry out numerical experiments to show the performance of the proposed preconditioners P_3 and P_4 . We use the preconditioned conjugate gradient (PCG) method to solve the symmetric positive definite linear systems (2.6). In all experiments, we set the stopping criterion as

    \begin{equation*} \frac{\|r_k\|_{2}}{\|b\|_{2}} < 10^{-8}, \end{equation*}

    where r_{k} is the residual vector after k iterations and b is the right-hand side.
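    For completeness, the following sketch (ours; a generic textbook PCG loop, not the authors' code) shows the preconditioned conjugate gradient iteration with exactly this relative-residual stopping rule; matvec and apply_prec stand for the products with the coefficient matrix and with the chosen preconditioner, respectively.

```python
# Sketch: preconditioned conjugate gradient with relative-residual stopping.
import numpy as np

def pcg(matvec, b, apply_prec, tol=1e-8, maxiter=500):
    """Solve A x = b with PCG, stopping when ||r_k||_2 / ||b||_2 < tol."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    z = apply_prec(r)
    p = z.copy()
    rz = r @ z
    b_norm = np.linalg.norm(b)
    for it in range(1, maxiter + 1):
        Ap = matvec(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) / b_norm < tol:
            return x, it
        z = apply_prec(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxiter
```

    For instance, pcg(lambda v: A @ v, rhs, p4.apply) would combine this loop with the P_4 sketch above at each time step.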

    For the sake of comparison, we also test two other types of preconditioners, which were proposed in [7]. One is Strang's circulant preconditioner C_s , defined as

    C_s = s(M)+\eta_{\alpha}\tilde{d}_{+}s(G_{\alpha})s(G_{\alpha})^{ { \intercal }} +\eta_{\beta}\tilde{d}_{-}s(G_{\beta})s(G_{\beta})^{ { \intercal }},

    where s(M) , s(G_{\alpha}) and s(G_{\beta}) are the Strang circulant approximations of the Toeplitz matrices M , G_{\alpha} and G_{\beta} , and \tilde{d}_{+} and \tilde{d}_{-} are the mean values of the diagonals of \tilde{D}_{+} and \tilde{D}_{-} , respectively. The other is the banded preconditioner B(\ell) defined as

    \begin{equation*} B(\ell) = M+\eta_{\alpha}\tilde{B}_{\alpha}\tilde{D}_{+}^{(j)}\tilde{B}_{\alpha}^{ { \intercal }} +\eta_{\beta}\tilde{B}_{\beta}\tilde{D}_{-}^{(j)}\tilde{B}_{\beta}^{ { \intercal }}, \end{equation*}

    where \tilde{B}_{\alpha} and \tilde{B}_{\beta} are the banded approximations of \tilde{G}_{\alpha} and \tilde{G}_{\beta} which are of the form

    \begin{equation*} \begin{bmatrix} g_{1}^{(\alpha)} & g_{0}^{(\alpha)} & & & & \\ \vdots & g_{1}^{(\alpha)} & g_{0}^{(\alpha)} & & & \\ g_{\ell}^{(\alpha)} & \ddots & \ddots & \ddots & & \\ & \ddots & \ddots & \ddots & \ddots & \\ & & \sum\limits_{k = \ell}^{N}g_{k}^{(\alpha)} & \cdots & g_{1}^{(\alpha)} & g_{0}^{(\alpha)} \end{bmatrix}\quad {\text{and}} \quad \begin{bmatrix} g_{0}^{(\beta)} & g_{1}^{(\beta)} & \cdots & \sum\limits_{k = \ell}^{N}g_{k}^{(\beta)} & & \\ & g_{0}^{(\beta)} & g_{1}^{(\beta)} & \ddots & \ddots & \\ & & \ddots & \ddots & \ddots & g_{\ell}^{(\beta)} \\ & & & \ddots & \ddots & \vdots\\ & & & & g_{0}^{(\beta)} & g_{1}^{(\beta)} \end{bmatrix}, \end{equation*}

    respectively.
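    As a reference for how the circulant approximations entering C_s (and our \tilde{C}_{k} ) can be formed, the following sketch (ours; strang_circulant is an illustrative helper) builds Strang's circulant approximation of a Toeplitz matrix from its first column and first row by copying the central diagonals and wrapping them around, and checks that the result is diagonalized by the FFT.

```python
# Sketch: Strang's circulant approximation of a Toeplitz matrix.
import numpy as np
from scipy.linalg import circulant

def strang_circulant(T_col, T_row):
    """Circulant with the central diagonals of the Toeplitz matrix wrapped around."""
    N = len(T_col)
    h = N // 2
    c = np.zeros(N)
    c[:h + 1] = T_col[:h + 1]          # diagonals 0, -1, ..., -h (first column part)
    c[h + 1:] = T_row[1:N - h][::-1]   # superdiagonals N-h-1, ..., 1, wrapped around
    return circulant(c)

# A circulant is diagonalized by the discrete Fourier matrix: C = F^* diag(lam) F / N.
C = strang_circulant(np.array([4.0, 1.0, 0.3, 0.1, 0.0]),
                     np.array([4.0, 2.0, 0.5, 0.2, 0.0]))
lam = np.fft.fft(C[:, 0])
F = np.fft.fft(np.eye(5))
assert np.allclose(C, (np.conj(F.T) @ np.diag(lam) @ F / 5).real)
```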

    Example 1. We consider the balanced fractional diffusion equation (1.1) with (x_L, x_R) = (0, 1) and T = 1 . The diffusion coefficients are given by

    \begin{equation*} d_{+}(x,t) = 1000 (x+1)^{4+\alpha}+t^2,\quad d_{-}(x,t) = 1000 (x+1)^{4+\beta}+t^2, \end{equation*}

    and f(x, t) = 1000 .

    In the numerical testing, we set the number of temporal grids to M_t = N/2 . The results for different values of \alpha and \beta are reported in Table 1. We do not report the numerical results of the unpreconditioned CG method as it converges very slowly.

    Table 1.  Numerical results for Example 1.
    C_{s} B(4) B(6) P_3(3) P_3(4) P_4(3) P_4(4)
    N Iter CPU Iter CPU Iter CPU Iter CPU Iter CPU Iter CPU Iter CPU
    \alpha=0.6, \ \beta=0.6
    2^{11} 61.44 14.90 35.00 10.26 31.00 9.60 14.00 5.21 12.00 5.01 13.00 5.24 11.00 5.06
    2^{12} 62.98 58.68 42.81 47.63 37.00 42.57 14.00 18.80 13.00 18.36 13.00 18.73 12.00 17.76
    2^{13} 64.00 231.27 50.01 210.70 44.00 187.71 15.00 76.60 13.00 69.07 13.00 71.41 12.00 69.55
    2^{14} 65.00 1060.76 58.01 1068.46 52.00 989.73 15.00 334.53 14.00 319.51 14.00 337.00 13.00 325.66
    \alpha=0.6, \ \beta=0.7
    2^{11} 65.06 16.61 31.00 9.63 27.41 8.97 14.00 5.43 13.00 5.54 13.00 5.36 11.00 5.15
    2^{12} 66.72 61.84 37.00 40.06 32.88 39.05 15.85 21.13 13.00 18.23 13.00 18.85 12.00 17.94
    2^{13} 68.00 248.34 43.00 182.87 38.00 170.32 16.01 83.84 14.54 79.90 14.34 79.94 13.00 77.52
    2^{14} 69.01 1171.84 49.00 910.07 43.95 862.68 18.00 409.78 15.95 374.91 14.99 366.86 13.99 357.92
    \alpha=0.7, \ \beta=0.7
    2^{11} 67.55 17.41 28.29 8.97 25.01 8.41 14.96 5.85 13.52 5.82 13.00 5.38 12.00 5.64
    2^{12} 69.01 65.25 34.00 38.90 30.11 37.09 16.00 22.02 13.86 20.20 14.00 20.61 12.00 18.76
    2^{13} 70.09 264.48 40.00 176.53 35.39 164.35 17.98 95.51 15.95 87.88 14.00 77.99 13.00 77.78
    2^{14} 71.31 1197.63 46.00 857.65 41.00 797.35 18.00 415.70 17.00 408.09 15.00 383.49 13.00 349.08
    \alpha=0.7, \ \beta=0.8
    2^{11} 72.00 17.96 23.97 7.53 21.06 7.00 16.00 6.02 14.00 5.95 13.04 5.41 12.00 5.60
    2^{12} 73.68 68.46 28.00 32.62 25.00 30.81 18.73 25.53 16.00 22.81 15.00 22.06 12.99 20.87
    2^{13} 75.16 282.66 32.00 141.63 28.98 140.93 18.96 101.00 17.47 96.63 15.00 84.31 14.00 82.02
    2^{14} 76.76 1262.66 35.99 683.73 32.98 653.98 20.00 458.79 17.79 430.50 16.00 397.79 14.00 370.01
    \alpha=0.8, \ \beta=0.8
    2^{11} 74.69 18.91 21.00 7.04 19.00 6.65 17.94 6.82 14.94 6.34 14.00 5.76 12.00 5.60
    2^{12} 76.09 70.32 24.00 28.41 22.00 27.70 19.00 26.66 16.65 24.02 14.00 20.89 12.00 18.97
    2^{13} 77.57 290.59 28.00 128.10 25.00 120.11 20.00 105.85 18.00 100.20 15.00 83.79 13.00 78.07
    2^{14} 79.37 1291.96 31.00 592.09 28.99 587.79 20.24 464.72 18.00 432.03 15.99 405.75 14.00 370.07
    \alpha=0.8, \ \beta=0.9
    2^{11} 79.85 20.58 16.00 5.67 15.00 5.66 19.19 7.45 17.00 7.20 14.96 6.18 13.00 6.03
    2^{12} 81.45 77.06 18.00 22.74 17.00 22.56 19.71 26.73 17.00 24.08 15.00 21.87 13.00 19.83
    2^{13} 84.14 320.03 20.00 95.92 18.99 100.71 22.00 115.14 19.12 104.20 16.00 90.63 14.00 81.37
    2^{14} 85.47 1413.42 21.98 449.53 19.99 428.83 23.22 529.87 19.93 478.16 16.97 424.17 14.00 374.99
    \alpha=0.9, \ \beta=0.9
    2^{11} 82.93 20.64 14.00 4.95 13.00 4.95 20.00 7.51 17.00 7.22 14.00 5.62 12.00 5.57
    2^{12} 84.41 77.71 15.00 19.57 14.00 19.73 22.14 29.98 18.83 26.68 15.00 22.15 13.00 20.20
    2^{13} 85.77 324.09 17.00 83.82 16.00 84.84 22.52 118.15 20.00 111.70 15.99 90.72 13.99 84.47
    2^{14} 86.84 1422.57 18.00 384.27 17.00 385.21 23.96 541.14 20.00 494.21 16.00 409.06 13.99 373.06


    In the table, we use " P_3(\ell) " and " P_4(\ell) " to denote our proposed preconditioners with \ell being the number of interpolation points, "Iter" to denote the average number of iteration steps required to solve the discrete fractional diffusion equation (2.6) at each time step, and "CPU" to denote the total CPU time in seconds for solving the whole discretized system.

    From Table 1, we can see that the banded preconditioner B(\ell) and our proposed preconditioners P_3(\ell) and P_4(\ell) perform much better than the circulant preconditioner C_s in terms of both iteration number and elapsed time. The banded preconditioner B(\ell) has better performance when \alpha = \beta = 0.9 . In this case, the off-diagonals of the coefficient matrix decay to zero very quickly. For other cases, P_3(\ell) and P_4(\ell) perform much better.

    We also see that the performance of P_3(\ell) and P_4(\ell) is very robust in the sense that the average iteration numbers are almost the same for different values of \alpha and \beta , as well as for different mesh sizes.

    Example 2. Consider the two dimensional balanced fractional diffusion equation

    \begin{equation*} \begin{split} &\frac{\partial u(x,y,t)}{\partial t} +{}_{{x_L}}^{{{\rm{RL}}}}{{\rm{D}}}_{x}^{{\alpha_1}}\left(d_{+}(x,y,t){}_{{x}}^{{{\rm{C}}}}{{\rm{D}}}_{x_R}^{\alpha_1} u(x,y,t)\right) +{}_{{x}}^{{{\rm{RL}}}}{{\rm{D}}}_{x_R}^{\beta_1}\left(d_{-}(x,y,t){}_{{x_L\!}}^{{{\rm{C}}}}{{\rm{D}}}_{x}^{{\beta_1}} u(x,y,t)\right) \\ &\qquad +{}_{{y_L}}^{{{\rm{RL}}}}{{\rm{D}}}_{y}^{\alpha_2}\left(e_{+}(x,y,t){}_{{y}}^{{{\rm{C}}}}{{\rm{D}}}_{y_R}^{\alpha_2} u(x,y,t)\right) +{}_{{y}}^{{{\rm{RL}}}}{{\rm{D}}}_{y_R}^{\beta_2}\left(e_{-}(x,y,t){}_{{y_L}}^{{{\rm{C}}}}{{\rm{D}}}_{y}^{\beta_2} u(x,y,t)\right) = f(x,y,t),\\[0.3em] & (x,y) \in \Omega = \left(x_{L}, x_{R}\right) \times (y_L,y_R), \quad t\in (0,T], \\[0.2em] & u(x,y,t) = 0, \quad (x,y)\in \partial\Omega, t\in [0,T], \\[0.2em] & u(x,y,0) = u_{0}(x,y), \quad (x,y)\in [x_L,x_R]\times[y_L,y_R], \end{split} \end{equation*}

    where \frac{1}{2} < \alpha_1, \beta_1, \alpha_2, \beta_2 < 1 , and d_{\pm}(x, y, t) > 0 , e_{\pm}(x, y, t) > 0 for (x, y, t)\in\Omega \times (0, T] .

    As was shown in [7], the finite volume discretization leads to

    \begin{equation*} A^{(j)}u^{(j)} = (M_{x}\otimes M_{y}) u^{(j-1)}+f^{(j)}, \quad j = 1,2,\ldots, M_t, \end{equation*}

    where

    \begin{equation*} \begin{aligned} A^{(j)}& = M_{x} \otimes M_{y} +\eta_{\alpha_{1}}\left(G_{\alpha_{1}} \otimes I_{y}\right){D}_{+}^{(j)} \left(G_{\alpha_{1}}^{ { \intercal }} \otimes I_{y}\right) +\eta_{\beta_{1}}\left(G_{\beta_{1}} \otimes I_{y}\right){D}_{-}^{(j)} \left(G_{\beta_{1}}^{ { \intercal }} \otimes I_{y}\right) \\ &\quad +\eta_{\alpha_{2}}\left(I_{x} \otimes G_{\alpha_{2}}\right){E}_{+}^{(j)} \left(I_{x} \otimes G_{\alpha_{2}}^{ { \intercal }}\right) +\eta_{\beta_{2}}\left(I_{x} \otimes G_{\beta_{2}}\right){E}_{-}^{(j)} \left(I_{x} \otimes G_{\beta_{2}}^{ { \intercal }}\right). \end{aligned} \end{equation*}

    Here M_{x}, M_{y} are tridiagonal matrices and D_{\pm}^{(j)} , E_{\pm}^{(j)} are block diagonal matrices

    \begin{equation*} {D}_{\pm}^{(j)} = {\rm{diag}}\left(\left\{\left\{ d_{\pm}\left(x_{i+\frac{1}{2}}, y_{k}, t_{j} \right)\right\}_{k = 1}^{N_{y}}\right\}_{i = 0}^{N_{x}}\right), \quad {E}_{\pm}^{(j)} = {\rm{diag}}\left(\left\{\left\{ e_{\pm}\left(x_{i}, y_{k+\frac{1}{2}}, t_{j} \right)\right\}_{k = 0}^{N_{y}}\right\}_{i = 1}^{N_{x}}\right), \end{equation*}

    where N_{x} and N_{y} denote the number of grid points in the x -direction and y -direction, respectively.

    Similar to the construction of P_{3} of (3.11), we can define an approximate inverse preconditioner for the two dimensional problems. For convenience of expression, we omit the superscript in A^{(j)} . To construct the preconditioner, we define the following circulant matrices

    \begin{equation*} \begin{aligned} \tilde{C}_{i,j}& = C_{M_x} \otimes C_{M_y} +\eta_{\alpha_1} d_{+}(x_i,y_j) \left(C_{\alpha_1}C_{\alpha_1}^{ { \intercal }}\right)\otimes I_y +\eta_{\beta_1} d_{-}(x_i,y_j) \left(C_{\beta_1}C_{\beta_1}^{ { \intercal }}\right)\otimes I_y \\ &\quad +\eta_{\alpha_2} e_{+}(x_i,y_j) I_{x} \otimes \left(C_{\alpha_2}C_{\alpha_2}^{ { \intercal }}\right) +\eta_{\beta_2} e_{-}(x_i,y_j) I_{x} \otimes \left(C_{\beta_2}C_{\beta_2}^{ { \intercal }}\right). \end{aligned} \end{equation*}

    Then we choose the interpolation points \{(\tilde{x}_i, \tilde{y}_j)\} , i = 1, 2, \ldots, \ell_x , j = 1, 2, \ldots, \ell_y , and we approximate \tilde{C}_{i, j}^{-1/2} by

    \begin{equation*} \tilde{C}_{i,j}^{-1/2}\approx F^{*}\left(\sum\limits_{k = 1}^{\ell_{x}}\sum\limits_{l = 1}^{\ell_{y}} \phi_{k,l}(x_i,y_j)\tilde{\Lambda}_{k,l}^{-1/2}\right)F, \end{equation*}

    where F refers to the two dimensional discrete Fourier transform matrix of size N_{x}N_{y} , and \tilde{\Lambda}_{k, l} is the diagonal matrix whose diagonals are the eigenvalues of \tilde{C}_{k, l} for 1\leq k\leq \ell_x and 1\leq l\leq \ell_y . Then we obtain the resulting preconditioner

    \begin{equation*} \begin{split} P_3^{-1}& = \left(\sum\limits_{i = 1}^{N_x}\sum\limits_{j = 1}^{N_y} F^{*} \sum\limits_{k = 1}^{\ell_x}\sum\limits_{l = 1}^{\ell_y} \phi_{k,l}(x_i,y_j)\tilde{\Lambda}_{k,l}^{-1/2}F \,(e_i\otimes e_j)(e_{i}\otimes e_j)^{ { \intercal }} \right)^{*}\\ &\quad\left(\sum\limits_{i = 1}^{N_x}\sum\limits_{j = 1}^{N_y} F^{*} \sum\limits_{k = 1}^{\ell_x}\sum\limits_{l = 1}^{\ell_y} \phi_{k,l}(x_i,y_j)\tilde{\Lambda}_{k,l}^{-1/2}F \,(e_i\otimes e_j)(e_{i}\otimes e_j)^{ { \intercal }} \right)\\ & = \left(\sum\limits_{i = 1}^{N_x}\sum\limits_{j = 1}^{N_y}\sum\limits_{k = 1}^{\ell_x}\sum\limits_{l = 1}^{\ell_y} (e_i\otimes e_j)(e_{i}\otimes e_j)^{ { \intercal }}\phi_{k,l}(x_i,y_j)\, F^{*}\tilde{\Lambda}_{k,l}^{-1/2} \right)\\ &\quad\left(\sum\limits_{i = 1}^{N_x}\sum\limits_{j = 1}^{N_y}\sum\limits_{k = 1}^{\ell_x}\sum\limits_{l = 1}^{\ell_y} \tilde{\Lambda}_{k,l}^{-1/2} F \, (e_i\otimes e_j)(e_{i}\otimes e_j)^{ { \intercal }}\phi_{k,l}(x_i,y_j) \right)\\ & = \left(\sum\limits_{k = 1}^{\ell_x}\sum\limits_{l = 1}^{\ell_y} \sum\limits_{i = 1}^{N_x}\sum\limits_{j = 1}^{N_y} (e_i\otimes e_j)(e_{i}\otimes e_j)^{ { \intercal }}\phi_{k,l}(x_i,y_j)\, F^{*}\tilde{\Lambda}_{k,l}^{-1/2} \right)\\ &\quad\left(\sum\limits_{k = 1}^{\ell_x}\sum\limits_{l = 1}^{\ell_y} \tilde{\Lambda}_{k,l}^{-1/2} F \sum\limits_{i = 1}^{N_x}\sum\limits_{j = 1}^{N_y} (e_i\otimes e_j)(e_{i}\otimes e_j)^{ { \intercal }}\phi_{k,l}(x_i,y_j) \right)\\ & = \left(\sum\limits_{k = 1}^{\ell_x}\sum\limits_{l = 1}^{\ell_y} \Phi_{k,l} F^{*}\tilde{\Lambda}_{k,l}^{-1/2} \right) \left(\sum\limits_{k = 1}^{\ell_x}\sum\limits_{l = 1}^{\ell_y} \tilde{\Lambda}_{k,l}^{-1/2} F \Phi_{k,l} \right), \end{split} \end{equation*}

    where \Phi_{k, l} = {\rm{diag}}\left(\phi_{k, l}(x_1, y_1), \phi_{k, l}(x_1, y_2), \cdots, \phi_{k, l}(x_{N_x}, y_{N_y})\right) for 1\leq k\leq \ell_x and 1\leq l\leq \ell_y .
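    The application of this two dimensional preconditioner again reduces to FFTs, now two dimensional ones. The following sketch (ours; it assumes the values \Phi_{k,l} and the eigenvalue arrays of \tilde{C}_{k,l} have been precomputed and stored as (N_x, N_y) grids, and this storage layout is an implementation assumption) shows one possible apply routine.

```python
# Sketch: applying the two dimensional P_3^{-1} with 2D FFTs.
import numpy as np

def apply_P3_2d(v, phis, lams, shape):
    """Apply P_3^{-1} to a vector v of length N_x*N_y, given precomputed lists
    phis[idx] (hat-function values Phi_{k,l} on the grid) and lams[idx]
    (eigenvalues of C_tilde_{k,l}), each stored as an (N_x, N_y) array."""
    V = v.reshape(shape)
    Y = np.zeros(shape, dtype=complex)
    for phi, lam in zip(phis, lams):
        Y += lam ** -0.5 * np.fft.fft2(phi * V)     # Lam^{-1/2} F Phi v
    Z = np.zeros(shape, dtype=complex)
    for phi, lam in zip(phis, lams):
        Z += phi * np.fft.ifft2(lam ** -0.5 * Y)    # Phi F^* Lam^{-1/2} y
    return Z.real.reshape(-1)
```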

    In the tests, we set \Omega = (0, 1)\times(0, 1) , T = 1 ,

    \begin{equation*} \begin{aligned} &d_{+}(x,y,t) = {\mathrm{e}}^{t}(1000 x^{4+\alpha_1}+1000 y^{4+\alpha_1}+1),\ d_{-}(x,y,t) = {\mathrm{e}}^{t}(1000 x^{4+\beta_1}+1000 y^{4+\beta_1}+1), \\ &e_{+}(x,y,t) = {\mathrm{e}}^{t}(1000 x^{4+\alpha_2}+1000 y^{4+\alpha_2}+1),\ e_{-}(x,y,t) = {\mathrm{e}}^{t}(1000 x^{4+\beta_2}+1000 y^{4+\beta_2}+1), \end{aligned} \end{equation*}

    and f(x, y, t) = 1000 . We also set N_{x} = N_{y} = N , M_t = N/2 and \ell_{x} = \ell_{y} = \ell . The maximum iteration number is set to 500. As it is expensive to compute the second term of the Sherman-Morrison-Woodbury formula (3.5) for the two dimensional problem, we only test the preconditioner P_3(\ell) .

    The results are reported in Table 2. We can see that, for the two dimensional problem, although our proposed preconditioners P_3(3) and P_3(4) take more iteration steps to converge than the banded preconditioners B(3) and B(4) , our new preconditioners are much cheaper to implement than the banded and circulant preconditioners in terms of CPU time.

    Table 2.  Numerical results for Example 2.
    C_{s} B(3) B(4) P_3(3) P_3(4)
    N Iter CPU Iter CPU Iter CPU Iter CPU Iter CPU
    \alpha_1=\alpha_2=0.6, \ \beta_1=\beta_2=0.6
    2^{7} 497.05 242.31 20.95 39.17 18.00 45.78 28.81 18.30 28.73 21.17
    2^{8} > 500 - 25.95 588.07 22.98 718.04 33.02 112.40 31.97 126.29
    2^{9} > 500 - 31.11 9887.04 27.98 12435.90 38.02 868.40 36.80 1023.78
    \alpha_1=\alpha_2=0.7, \ \beta_1=\beta_2=0.7
    2^{7} > 500 - 17.02 33.78 15.95 43.17 36.30 23.01 35.31 25.93
    2^{8} > 500 - 21.00 511.49 19.00 637.12 43.07 144.57 41.18 163.50
    2^{9} > 500 - 25.84 8799.13 23.42 11217.25 52.83 1233.09 50.97 1455.41
    \alpha_1=\alpha_2=0.8, \ \beta_1=\beta_2=0.8
    2^{7} > 500 - 13.98 29.96 12.73 38.05 46.31 29.15 45.38 33.82
    2^{8} > 500 - 16.00 437.62 15.00 568.48 58.30 198.03 56.60 223.90
    2^{9} > 500 - 18.99 7353.96 17.72 9675.65 76.26 1767.78 72.09 2053.69
    \alpha_1=\alpha_2=0.9, \ \beta_1=\beta_2=0.9
    2^{7} > 500 - 9.98 25.37 9.00 32.09 62.08 39.55 58.61 42.91
    2^{8} > 500 - 11.00 360.76 10.00 470.30 83.97 284.48 78.90 309.27
    2^{9} > 500 - 12.00 5933.38 11.99 8162.24 114.65 2631.44 109.24 3042.00


    In this paper, we consider preconditioners for the linear systems resulting from the discretization of the balanced fractional diffusion equations. The coefficient matrices are symmetric positive definite and can be written as the sum of a tridiagonal matrix and two Toeplitz-times-diagonal-times-Toeplitz matrices. We investigate the approximate inverse preconditioners and show that the spectra of the preconditioned matrices are clustered around 1. Numerical experiments have shown that the conjugate gradient method, when applied to solve the preconditioned systems, converges very quickly. Besides, we extend our preconditioning technique to the case of two dimensional problems.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This research is supported by NSFC grant 11771148 and Science and Technology Commission of Shanghai Municipality grant 22DZ2229014.

    All authors declare no conflicts of interest in this paper.



    [1] L. Lu, T. Deng, A method of self-supervised denoising and classification for sensor-based human activity recognition, IEEE Sens. J., 23 (2023), 27997–28011. https://doi.org/10.1109/JSEN.2023.3323314 doi: 10.1109/JSEN.2023.3323314
    [2] W. Dong, H. Wang, F. Wu, G. Shi, X. Li, Deep spatial-spectral representation learning for hyperspectral image denoising, IEEE T. Comput. Imag., 5 (2019), 635–648. https://doi.org/10.1109/TCI.2019.2911881 doi: 10.1109/TCI.2019.2911881
    [3] L. Zhuang, M. K. Ng, L. Gao, Z. Wang, Eigen-CNN: Eigenimages plus eigennoise level maps guided network for hyperspectral image denoising, IEEE T. Geosci. Remote, 62 (2024), 5512018. https://doi.org/10.1109/TGRS.2024.3379199 doi: 10.1109/TGRS.2024.3379199
    [4] Q. Jiang, H. Shi, S. Gao, J. Zhang, K. Yang, L. Sun, et al., Computational imaging for machine perception: Transferring semantic segmentation beyond aberrations, IEEE T. Comput. Imag., 10 (2024), 535–548. https://doi.org/10.1109/TCI.2024.3380363 doi: 10.1109/TCI.2024.3380363
    [5] F. Xiao, R. Liu, Y. Zhu, H. Zhang, J. Zhang, S. Chen, A dense multicross self-attention and adaptive gated perceptual unit method for few-shot semantic segmentation, IEEE T. Artif. Intell., 5 (2024), 2493–2504. https://doi.org/10.1109/TAI.2024.3369553 doi: 10.1109/TAI.2024.3369553
    [6] Y. Jin, S. Kwak, S. Ham, J. Kim, A fast and efficient numerical algorithm for image segmentation and denoising, AIMS Mathematics, 9 (2024), 5015–5027. https://doi.org/10.3934/math.2024243 doi: 10.3934/math.2024243
    [7] J. Liao, C. He, J. Li, J. Sun, S. Zhang, X. Zhang, Classifier-guided neural blind deconvolution: A physics-informed denoising module for bearing fault diagnosis under noisy conditions, Mech. Syst. Signal Pr., 222 (2025), 111750. https://doi.org/10.1016/j.ymssp.2024.111750 doi: 10.1016/j.ymssp.2024.111750
    [8] Y. Li, H. Chang, Y. Duan, CurvPnP: Plug-and-play blind image restoration with deep curvature denoiser, Signal Process., 233 (2025), 109951. https://doi.org/10.1016/j.sigpro.2025.109951 doi: 10.1016/j.sigpro.2025.109951
    [9] A. C. Bovik, T. S. Huang, D. C. Munson, The effect of median filtering on edge estimation and detection, IEEE T. Pattern Anal., PAMI-9 (1987), 181–194. https://doi.org/10.1109/TPAMI.1987.4767894 doi: 10.1109/TPAMI.1987.4767894
    [10] L. K. Choi, J. You, A. C. Bovik, Referenceless prediction of perceptual fog density and perceptual image defogging, IEEE T. Image Process., 24 (2015), 3888–3901. https://doi.org/10.1109/TIP.2015.2456502 doi: 10.1109/TIP.2015.2456502
    [11] H. Feng, L. Wang, Y. Wang, F. Han, H. Huang, Learnability enhancement for low-light raw image denoising: A data perspective, IEEE T. Pattern Anal., 46 (2024), 370–387. https://doi.org/10.1109/TPAMI.2023.3301502 doi: 10.1109/TPAMI.2023.3301502
    [12] Y. Luo, B. You, G. Yue, J. Ling, Pseudo-supervised low-light image enhancement with mutual learning, IEEE T. Circ. Syst. Vid., 34 (2024), 85–96. https://doi.org/10.1109/TCSVT.2023.3284856 doi: 10.1109/TCSVT.2023.3284856
    [13] Y. Zhao, Q. Zheng, P. Zhu, X. Zhang, M. Wenpeng, TUFusion: A transformer-based universal fusion algorithm for multimodal images, IEEE T. Circ. Syst. Vid., 34 (2024), 1712–1725. https://doi.org/10.1109/TCSVT.2023.3296745 doi: 10.1109/TCSVT.2023.3296745
    [14] M. Kang, M. Jung, Nonconvex fractional order total variation based image denoising model under mixed stripe and Gaussian noise, AIMS Mathematics, 9 (2024), 21094–21124. https://doi.org/10.3934/math.20241025 doi: 10.3934/math.20241025
    [15] J. Yin, X. Zhuang, W. Sui, Y. Sheng. J. Wang, R. Song, et al., A bearing signal adaptive denoising technique based on manifold learning and genetic algorithm, IEEE Sens. J., 24 (2024), 20758–20768. https://doi.org/10.1109/JSEN.2024.3403845 doi: 10.1109/JSEN.2024.3403845
    [16] X. Chen, L. Zhao, J. Xu, Z. Dai, L. Xu, N. Guo, et al., Feature-adaptive self-supervised leaning for microthrust measurement signals blind denoising, IEEE Sens. J., 25 (2025), 1015–1028. https://doi.org/10.1109/JSEN.2024.3493104 doi: 10.1109/JSEN.2024.3493104
    [17] S. Zhang, Y. Yang, Q. Qin, L. Feng, L. Jiao, Rapid blind denoising method for grating fringe images based on noise level estimation, IEEE Sens. J., 21 (2021), 8150–8160. https://doi.org/10.1109/JSEN.2021.3050237 doi: 10.1109/JSEN.2021.3050237
    [18] W. Wang, F. Wen, Z. Yan, P. Liu, Optimal transport for unsupervised denoising learning, IEEE T. Pattern Anal., 45 (2023), 2104–2118. https://doi.org/10.1109/TPAMI.2022.3170155 doi: 10.1109/TPAMI.2022.3170155
    [19] Y. Hou, J. Xu, M. Liu, G. Liu, L. Liu, F. Zhu, et al., NLH: A blind pixel-level non-local method for real-world image denoising, IEEE T. Image Process., 29 (2020), 5121–5135. https://doi.org/10.1109/TIP.2020.2980116 doi: 10.1109/TIP.2020.2980116
    [20] D. Gilton, G. Ongie, R. Willett, Deep equilibrium architectures for inverse problems in imaging, IEEE T. Comput. Imag., 7 (2021), 1123–1133. https://doi.org/10.1109/TCI.2021.3118944 doi: 10.1109/TCI.2021.3118944
    [21] B. Jiang, Y. Lu, J. Wang, G. Lu, D. Zhang, Deep image denoising with adaptive priors, IEEE T. Circ. Syst. Vid., 32 (2022), 5124–5136. https://doi.org/10.1109/TCSVT.2022.3149518 doi: 10.1109/TCSVT.2022.3149518
    [22] W. Wang, D. Yin, C. Fang, Q. Zhang, Q. Li, A novel multi-image stripe noise removal method based on wavelet and recurrent networks, IEEE Sens. J., 24 (2024), 26058–26069. https://doi.org/10.1109/JSEN.2024.3421337 doi: 10.1109/JSEN.2024.3421337
    [23] S. H. Chan, X. Wang, O. A. Elgendy, Plug-and-play ADMM for image restoration: Fixed-point convergence and applications, IEEE T. Comput. Imag., 3 (2017), 84–98. https://doi.org/10.1109/TCI.2016.2629286 doi: 10.1109/TCI.2016.2629286
    [24] Y. Sun, B. Wohlberg, U. S. Kamilov, An online plug-and-play algorithm for regularized image reconstruction, IEEE T. Comput. Imag., 5 (2019), 395–408. https://doi.org/10.1109/TCI.2019.2893568 doi: 10.1109/TCI.2019.2893568
    [25] K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, R. Timofte, Plug-and-play image restoration with deep denoiser prior, IEEE T. Pattern Anal., 44 (2021), 6360–6376. https://doi.org/10.1109/TPAMI.2021.3088914 doi: 10.1109/TPAMI.2021.3088914
    [26] J. Lehtinen, J. Munkberg, J. Hasselgren, S. Laine, T. Karras, M. Aittala, et al., Noise2Noise: Learning image restoration without clean data, 2018, arXiv: 1803.04189. https://doi.org/10.48550/arXiv.1803.04189
    [27] T. Huang, S. Li, X. Jia, H. Lu, J. Liu, Neighbor2Neighbor: Self-supervised denoising from single noisy images, In: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, 14776–14785. https://doi.org/10.1109/CVPR46437.2021.01454
    [28] J. Xu, Y. Huang, M. Cheng, L. Liu, F. Zhu, Z. Xu, et al., Noisy-as-clean: Learning self-supervised denoising from corrupted image, IEEE T. Image Process., 29 (2020), 9316–9329. https://doi.org/10.1109/TIP.2020.3026622 doi: 10.1109/TIP.2020.3026622
    [29] N. Moran, D. Schmidt, Y. Zhong, P. Coady, Noisier2Noise: Learning to denoise from unpaired noisy data, In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, 12061–12069. https://doi.org/10.1109/CVPR42600.2020.01208
    [30] V. Lempitsky, A. Vedaldi, D. Ulyanov, Deep image prior, In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018, 9446–9454. https://doi.org/10.1109/CVPR.2018.00984
    [31] A. Krull, T. Buchholz, F. Jug, Noise2Void-Learning denoising from single noisy images, In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, 2124–2132. https://doi.org/10.1109/CVPR.2019.00223
    [32] J. Batson, L. Royer, Noise2Self: Blind denoising by self-supervision, 2019, arXiv: 1901.11365. https://doi.org/10.48550/arXiv.1901.11365
    [33] Y. Quan, M. Chen, T. Pang, H. Ji, Self2Self with dropout: Learning self-supervised denoising from single image, In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, 1887–1895. https://doi.org/10.1109/CVPR42600.2020.00196
    [34] J. Lequyer, R. Philip, A. Sharma, W. Hsu, L. Pelletier, A fast blind zero-shot denoiser, Nat. Mach. Intell., 4 (2022), 953–963. https://doi.org/10.1038/s42256-022-00547-8 doi: 10.1038/s42256-022-00547-8
    [35] Y. Mansour, R. Heckel, Zero-Shot Noise2Noise: Efficient image denoising without any data, In: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023, 14018–14027. https://doi.org/10.1109/CVPR52729.2023.01347
    [36] Z. Wang, J. Liu, G. Li, H. Han, Blind2Unblind: Self-supervised image denoising with visible blind spots, In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, 2017–2026. https://doi.org/10.1109/CVPR52688.2022.00207
    [37] X. Ma, Z. Wei, Y. Jin, P. Ling, T. Liu, B. Wang, et al., Masked pre-training enables universal zero-shot denoiser, 2024, arXiv: 2401.14966. https://doi.org/10.48550/arXiv.2401.14966
    [38] O. Ghahabi, J. Hernando, Restricted Boltzmann machines for vector representation of speech in speaker recognition, Comput. Speech Lang., 47 (2018), 16–29. https://doi.org/10.1016/j.csl.2017.06.007 doi: 10.1016/j.csl.2017.06.007
    [39] Y. Xie, Z. Wang, S. Ji, Noise2Same: Optimizing a self-supervised bound for image denoising, 2020, arXiv: 2010.11971. https://doi.org/10.48550/arXiv.2010.11971
    [40] C. R. Hauf, J. S. Houchin, The FlashPix™ image file format, In: Proc. IS&T 4th Color and Imaging Conf., 4 (1996), 234–234. https://doi.org/10.2352/CIC.1996.4.1.art00060
    [41] R. Zeyde, M. Elad, M. Protter, On single image scale-up using sparse-representations, In: Curves and surfaces, Berlin, Heidelberg: Springer, 2012, 711–730. https://doi.org/10.1007/978-3-642-27413-8_47
    [42] D. Martin, C. Fowlkes, D. Tal, J. Malik, A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics, In: Proceedings Eighth IEEE International Conference on Computer Vision, 2001, 416–423. https://doi.org/10.1109/ICCV.2001.937655
    [43] J. Huang, A. Singh, N. Ahuja, Single image super-resolution from transformed self-exemplars, In: 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, 5197–5206. https://doi.org/10.1109/CVPR.2015.7299156
    [44] M. Weigert, U. Schmidt, T. Boothe, A. Müller, A. Dibrov, A. Jain, et al., Content-aware image restoration: Pushing the limits of fluorescence microscopy, Nat. Methods, 15 (2018), 1090–1097. https://doi.org/10.1038/s41592-018-0216-7 doi: 10.1038/s41592-018-0216-7
    [45] A. Abdelhamed, S. Lin, M. S. Brown, A high-quality denoising dataset for smartphone cameras, In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018, 1692–1700. https://doi.org/10.1109/CVPR.2018.00182
    [46] J. Xu, H. Li, Z. Liang, D. Zhang, L. Zhang, Real-world noisy image denoising: A new benchmark, 2018, arXiv: 1804.02603. https://doi.org/10.48550/arXiv.1804.02603
    [47] S. Nam, Y. Hwang, Y. Matsushita, S. J. Kim, A holistic approach to cross-channel image noise modeling and its application to image denoising, In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, 1683–1691. https://doi.org/10.1109/CVPR.2016.186
    [48] K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering, IEEE T. Image Process., 16 (2007), 2080–2095. https://doi.org/10.1109/TIP.2007.901238 doi: 10.1109/TIP.2007.901238
    [49] C. Mou, Q. Wang, J. Zhang, Deep generalized unfolding networks for image restoration, In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, 17378–17389. https://doi.org/10.1109/CVPR52688.2022.01688
    [50] W. Lee, S. Son, K. M. Lee, AP-BSN: Self-supervised denoising for real-world images via asymmetric PD and blind-spot network, In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, 17704–17713. https://doi.org/10.1109/CVPR52688.2022.01720
    [51] Z. Wang, Y. Fu, J. Liu, Y. Zhang, LG-BPN: Local and global blind-patch network for self-supervised real-world denoising, In: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023, 18156–18165. https://doi.org/10.1109/CVPR52729.2023.01741
    [52] H. Chihaoui, P. Favaro, Masked and shuffled blind spot denoising for real-world images, In: 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, 3025–3034. https://doi.org/10.1109/CVPR52733.2024.00292
© 2025 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)