Research article

Restoring medical images with combined noise based on the nonlinear inverse scale space method

  • Received: 10 September 2024 Revised: 24 October 2024 Accepted: 28 November 2024 Published: 09 June 2025
  • This paper proposes a model that eliminates combined additive and multiplicative noise by merging the Rudin-Osher-Fatemi (ROF) and I-divergence data fidelity terms. The addition of the I-divergence fidelity term increases the complexity of the model and the difficulty of its solution compared with the ROF model. To solve the model, we first propose a generalized notion of the greatest common divisor for use with the inverse scale space algorithm. Unlike general denoising algorithms, the inverse scale space method exploits the fact that u starts at some value c0 and gradually approaches the noisy image f as time passes, which better handles noise while preserving image details, resulting in sharper and more natural-looking images. Furthermore, a proof of the existence and uniqueness of the minimizer of the model is provided. The experimental findings reveal that the proposed model has an excellent denoising effect on images corrupted by additive and multiplicative noise simultaneously. Compared with general methods, numerical results demonstrate that the nonlinear inverse scale space method achieves better performance and faster running time on medical images with combined noise, especially images containing lesions.

    Citation: Chenwei Li, Donghong Zhao. Restoring medical images with combined noise base on the nonlinear inverse scale space method[J]. Mathematical Modelling and Control, 2025, 5(2): 216-235. doi: 10.3934/mmc.2025016




    During image acquisition, storage, processing, and transmission, the sensor is easily affected by various factors, and external interference can introduce noise during transmission. Noise blurs the image and destroys fine details of the original, complicating any subsequent image analysis. Removing the effect of noise on image quality is a standard problem in image restoration, and denoising is one of its central tasks. Mathematical restoration models use prior knowledge to describe how the image was degraded, and then invert the degradation process to restore the image [1].

    The additive noise model has been extensively studied. Most of the literature considers additive Gaussian noise, assuming that a given original image u is corrupted by additive noise $\eta \sim N(\mu, \sigma^2)$. The task is to recover u from the observed image f:

    $$ f = u + \eta. \quad (1.1) $$

    Many effective methods have been developed for this problem; well-known techniques include wavelet approaches [2], learning-based methods [3,4,5], and variational approaches [6]. Learning-based approaches are currently the subject of intense research. In [7], the authors proposed a new end-to-end deep learning method to denoise medical images corrupted by Gaussian and impulse noise. Learning-based denoising methods can generally remove only specific types of noise. Variational methods are another important class of denoising methods. Building on [6], researchers have proposed several improvements: high-order total variation [8], fractional-order total variation [9], and Weberized total variation [10,11,12].

    In addition, there is another kind of noise called multiplicative noise. In a multiplicative noise model, the observed noisy image f is the original image u corrupted by multiplicative noise η:

    $$ f = u\eta. \quad (1.2) $$

    There are many possible distributions of multiplicative noise. We assume that the multiplicative noise in this paper follows a Gamma distribution with mean one, whose probability density function is

    $$ G(\eta) = \begin{cases} \dfrac{L^L \eta^{L-1}}{\Gamma(L)} e^{-L\eta}, & \eta > 0,\\[4pt] 0, & \eta \le 0, \end{cases} $$

    where 1/L is the variance. Generally, L takes an integer value; the smaller L is, the more severely the noise corrupts the original image. $\Gamma(\cdot)$ is the Gamma function, with $\Gamma(L) = (L-1)!$ for integer L. Multiplicative noise removal has recently attracted considerable interest. Unlike additive noise, multiplicative noise is usually non-Gaussian and interacts multiplicatively with the original image, which makes it harder and more demanding to remove. Many techniques have been proposed for eliminating multiplicative noise, such as the Lee filter [13], the scale threshold method [14], and variational methods [15,16,17,18,19,20,21,22].
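As an illustration, mean-one Gamma multiplicative noise with the density above can be generated with NumPy's Gamma sampler (shape L, scale 1/L). This is a small sketch of our own, not code from the paper; the helper name `add_gamma_noise` is hypothetical:

```python
import numpy as np

def add_gamma_noise(u, L=10, seed=0):
    """Corrupt an image u with mean-one Gamma multiplicative noise.

    eta ~ Gamma(shape=L, scale=1/L) has mean 1 and variance 1/L,
    matching the density G(eta) above; smaller L means stronger noise.
    """
    rng = np.random.default_rng(seed)
    eta = rng.gamma(shape=L, scale=1.0 / L, size=np.asarray(u).shape)
    return np.asarray(u, dtype=float) * eta
```

With L = 10 the noise field has variance 0.1, so a flat image of ones comes back with roughly unit mean and that variance.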

    However, real observed images are contaminated by many different types of noise, while each of the above denoising methods removes only one kind of noise well and has limited effect on the others. In this paper we focus on removing combined additive and multiplicative noise. Most learning-based denoising methods remove one specific type (additive or multiplicative) of noise, whereas variational methods have been studied for combined additive and multiplicative noise, and several variational models are available for restoring images affected by combined noises. In [23], the author provided restoration of images corrupted with additive, multiplicative, and mixed noise by a new total least squares-based image denoising algorithm. In [24], the authors developed a variational model and an efficient nonlinear multigrid approach via a robust fixed-point smoother. Other scholars in [25] proposed a variational model combining a fractional-order total variation norm with the fields-of-experts image prior [26], employing a fast alternating minimization algorithm, a Newton method, and the non-convex optimization algorithm iPiano [27].

    In [28], the authors derived a scale space method for inverse problems. Conventional scale space methods start from a noisy image and gradually smooth it; in contrast, the inverse scale space method starts from u(x,0) = c0 and gradually approximates the noisy image f as time grows. The inverse scale space method is related to regularization theory, especially Tikhonov regularization [29]. In [28], the inverse scale space method worked effectively for quadratic regularization functionals, for which a linear evolution equation can be derived, but it did not yield good results for total variation functionals. The authors of [30,31] then derived a general inverse scale space method as the limit of an iterative refinement process; with this new approach, the nonlinear inverse scale space method was implemented for the total variation model. Later, Shi and Osher extended the nonlinear inverse scale space method to the total variation model for multiplicative noise in [17].

    In fact, real images often contain more than one type of noise, namely a mixture of additive and multiplicative noise, which makes their processing more challenging. The scholars above applied inverse scale space methods only to specific types of noise. Motivated by these works, we propose a new variational model to restore images damaged by combined noise and generalize the nonlinear inverse scale space method to this model. This paper is dedicated to recovering images corrupted by a mixture of additive and multiplicative noise, and the method provides a unified framework for this mixed setting.

    This paper is organized as follows. In Section 2, we introduce some basic mathematical background and review some classic variational models for removing additive or multiplicative noise. In Section 3, we propose our new restoration model and prove that a minimizer exists. In Section 4, the relaxed inverse scale space method is introduced for the solution of the proposed model. Numerical results of the experiments are presented in Section 5, and the paper concludes with Section 6.

    Image denoising methods based on variational methods and partial differential equations (PDEs) have been widely studied because there is a complete set of mathematical theoretical systems and calculation methods.

    The basic theory of bounded variation (BV) function space is the theoretical basis of image processing based on the variational method and PDEs. The total variation (TV) model is a classic model of BV function space theory applied to image denoising. Next, we introduce the theory of bounded variation function space.

    Definition 2.1 (Total variation). Let $\Omega \subset \mathbb{R}^n$ be a bounded open subset and $u \in L^1_{loc}(\Omega)$ be a locally integrable function. The total variation of u is

    $$ TV(u) = \sup\left\{ \int_\Omega u\,\mathrm{div}\,\varphi\,dx \;:\; \varphi = (\varphi_1, \varphi_2, \dots, \varphi_n) \in C_0^1(\Omega)^n,\ \|\varphi\|_{L^\infty(\Omega)} \le 1 \right\}. $$

    If $TV(u) < \infty$, then u is called a function of bounded variation, denoted $u \in BV(\Omega)$. The space of all functions of bounded variation on Ω is denoted $BV(\Omega)$.

    Definition 2.2. Let a grayscale image u be a two-dimensional image belonging to the Sobolev space $W^{1,1}(\Omega)$. Then $\nabla u \in L^1(\Omega)$ and

    $$ \int_\Omega u\,\mathrm{div}\,\varphi\,dx = -\int_\Omega \nabla u \cdot \varphi\,dx. $$

    The total variation of the two-dimensional image u can then be defined as

    $$ TV(u) = \int_\Omega |\nabla u|\,dx, $$

    where $\nabla u = (u_x, u_y) = \left(\frac{\partial u}{\partial x}, \frac{\partial u}{\partial y}\right)$, the derivative of the two-dimensional gray image with respect to the variables x and y, is the gradient of the image, and $|\nabla u| = \sqrt{u_x^2 + u_y^2}$ denotes the modulus of the image gradient.
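For a discrete image, the total variation above is commonly approximated with finite differences. The sketch below uses an isotropic forward-difference discretization with replicated last row/column; the discretization convention is our own choice, not taken from the paper:

```python
import numpy as np

def total_variation(u):
    """Discrete isotropic TV: sum over pixels of |grad u|, using forward
    differences; the gradient is set to zero on the last row/column."""
    u = np.asarray(u, dtype=float)
    ux = np.diff(u, axis=1, append=u[:, -1:])  # u_x (zero at right edge)
    uy = np.diff(u, axis=0, append=u[-1:, :])  # u_y (zero at bottom edge)
    return np.sqrt(ux**2 + uy**2).sum()
```

A constant image has zero TV, while a unit step across a 4-row image contributes one unit of variation per row.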

    Definition 2.3 (Bounded variation function space). The space of functions of bounded variation is defined as

    $$ BV(\Omega) = \{u \in L^1(\Omega) : TV(u) < \infty\}. $$

    Equipped with the natural norm $\|u\|_{BV} = \|u\|_{L^1} + TV(u)$, $BV(\Omega)$ is a Banach space.

    Remark 2.1. The total variation functional TV(u) is convex on $BV(\Omega)$, but not strictly convex.

    ● ROF model

    In 1992, Rudin, Osher, and Fatemi used the space of functions of bounded variation to establish the total variation regularized image denoising (ROF) model in [6]. The authors considered the energy functional

    $$ E(u) = \int_\Omega |\nabla u|\,dx\,dy, \quad (2.1) $$

    subject to the two constraints

    $$ \int_\Omega u\,dx\,dy = \int_\Omega f\,dx\,dy, \qquad \frac{1}{|\Omega|}\int_\Omega (u-f)^2\,dx\,dy = \sigma^2. \quad (2.2) $$

    The first constraint states that the noise has zero mean, and the second that its variance is σ². Introducing a Lagrange multiplier transforms (2.1) and (2.2) into the unconstrained energy functional

    $$ E(u) = \int_\Omega |\nabla u|\,dx\,dy + \frac{\lambda}{2}\int_\Omega (u-f)^2\,dx\,dy, $$

    where λ is a positive parameter that balances smoothing and data fidelity, u denotes the image to be restored, and f denotes the noisy image.

    Among the many denoising methods for multiplicative noise, variational models have attracted the attention of many scholars. In this section, we review several classic total variation models for removing multiplicative noise.

    ● RLO model

    In 2003, the first variational model for multiplicative noise was proposed by Rudin, Lions, and Osher in [15]. Assume the noise η in (1.2) has mean 1 and variance σ²; that is, it satisfies

    $$ \int_\Omega \frac{f}{u}\,dx\,dy = 1, \qquad \frac{1}{|\Omega|}\int_\Omega \left(\frac{f}{u} - 1\right)^2 dx\,dy = \sigma^2. \quad (2.3) $$

    Using the total variation $\int_\Omega |\nabla u|\,dx\,dy$ as the regularization term, the RLO model reads

    $$ E(u) = \int_\Omega |\nabla u|\,dx\,dy + \lambda\int_\Omega \frac{f}{u}\,dx\,dy + \mu\int_\Omega \left(\frac{f}{u} - 1\right)^2 dx\,dy, $$

    where λ and μ are two Lagrange multipliers.

    ● AA model

    In 2008, based on the maximum a posteriori (MAP) estimator, Aubert and Aujol in [16] derived the non-convex variational model (AA) (2.4) and transformed its solution into solving the corresponding Euler-Lagrange equation. The model is suited to Gamma-distributed multiplicative noise with mean 1:

    $$ E(u) = \int_\Omega |\nabla u|\,dx\,dy + \lambda\int_\Omega \left(\log u + \frac{f}{u}\right) dx\,dy, \quad (2.4) $$

    where λ is a positive parameter.

    Although the AA model is non-convex, the authors proved the existence of a minimizer and gave a sufficient condition for uniqueness. However, because of the non-convexity, a computed solution may not be the global minimizer and depends on the choice of the initial value.

    ● I-divergence model

    In 2009, Steidl and Teuber in [18] proposed a convex variational model using the I-divergence as the fidelity term to remove multiplicative noise.

    Definition 2.4. Let $u > 0$ and $f > 0$. The I-divergence of f from u is defined in [32] by

    $$ I(f\|u) := \int_\Omega \left(f\log\frac{f}{u} - f + u\right) dx. $$

    It satisfies $I(f\|u) \ge 0$, with equality if and only if $u = f$. Ignoring the terms that do not depend on u, the data fidelity term simplifies to $\int_\Omega (u - f\log u)\,dx$, and the denoising model is

    $$ E(u) = \int_\Omega |\nabla u|\,dx\,dy + \lambda\int_\Omega (u - f\log u)\,dx\,dy, $$

    where λ is a positive parameter.

    Although the I-divergence is usually used as a fidelity term to remove Poisson noise, the authors applied Douglas-Rachford splitting techniques and the alternating direction method of multipliers and proved that it can also effectively remove Gamma noise.

    In this section, we propose a novel variational model for combined additive noise (AN) and multiplicative noise (MN). Combined noise is more common in practice, and the resulting model is more challenging to solve. We begin with an observed image contaminated by AN and MN, under the assumption that the two noises are independent. In [24,33], the following model is proposed for the observed image:

    $$ f = u + k_0\eta + k_1 u\eta, $$

    where $k_0$ and $k_1$ are constants and η represents Gaussian noise with variance 1.
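The degradation f = u + k0·η + k1·uη can be simulated directly. A minimal sketch (the parameter defaults are our own illustrative choices, not values from the paper):

```python
import numpy as np

def combined_noise(u, k0=0.1, k1=0.1, seed=0):
    """Simulate f = u + k0*eta + k1*u*eta with eta ~ N(0, 1).

    The k0 term contributes pure additive noise and the k1 term
    multiplicative noise; both use the same Gaussian field eta,
    as in the model above.
    """
    rng = np.random.default_rng(seed)
    u = np.asarray(u, dtype=float)
    eta = rng.standard_normal(u.shape)
    return u + k0 * eta + k1 * u * eta
```

Since η has zero mean, the corrupted image keeps the mean of the clean image, while its pointwise standard deviation is k0 + k1·u.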

    We propose the following convex model:

    $$ E(u) = \int_\Omega |\nabla u|\,dx\,dy + \alpha_1\int_\Omega (u - f\log u)\,dx\,dy + \frac{\alpha_2}{2}\int_\Omega (u-f)^2\,dx\,dy, \quad (3.1) $$

    where $\alpha_1$ and $\alpha_2$ are parameters that can be adjusted to balance the multiplicative and additive parts of the fidelity.

    ● When $\alpha_1 = 0$, the model reduces to the model in [6].

    ● When $\alpha_2 = 0$, the model reduces to the model in [18].

    To prove the existence of a solution of the model, the boundedness of the solution must be established first [34].

    Lemma 3.1 (Boundedness). Let $f \in L^2(\Omega)$ with $\inf_\Omega f > 0$, where $\Omega \subset \mathbb{R}^2$. If the optimization problem (3.1) has a solution $\hat{u}$, then $\inf_\Omega f \le \hat{u} \le \sup_\Omega f$.

    Proof. Define $\alpha = \inf_\Omega f$, $\beta = \sup_\Omega f$. For $u > f$, both $u - f\log u$ and $(u-f)^2$ are strictly increasing in u. Then

    $$ \int_\Omega \big(\inf(u,\beta) - f\log(\inf(u,\beta))\big)\,dx\,dy \le \int_\Omega (u - f\log u)\,dx\,dy, \quad (3.2) $$
    $$ \int_\Omega (\inf(u,\beta) - f)^2\,dx\,dy \le \int_\Omega (u-f)^2\,dx\,dy. \quad (3.3) $$

    Besides, Lemma 1 in [35] states that

    $$ TV(\inf(u,\beta)) \le TV(u). \quad (3.4) $$

    Combining (3.2)-(3.4),

    $$ E(\inf(u,\beta)) \le E(u), \quad (3.5) $$

    with equality if and only if $u \le \beta$.

    Since $\hat{u}$ is a minimizer of (3.1), equality must hold at $u = \hat{u}$, and thus $\hat{u} \le \beta$.

    In the same way,

    $$ E(\sup(u,\alpha)) \le E(u), \quad (3.6) $$

    and thus $\hat{u} \ge \alpha$.

    Therefore $\alpha \le \hat{u} \le \beta$, that is, $\inf_\Omega f \le \hat{u} \le \sup_\Omega f$.

    Theorem 3.1 (Existence). Let $f \in L^2(\Omega)$ with $\inf_\Omega f > 0$, where $\Omega \subset \mathbb{R}^2$. Then (3.1) has at least one solution in the solution space $BV(\Omega)$.

    Proof. Define $\alpha = \inf_\Omega f$, $\beta = \sup_\Omega f$. Since the constant function $u \equiv \beta$ belongs to $BV(\Omega)$, the admissible set is not empty [10]. Let $\{u_n\} \subset BV(\Omega)$ be a minimizing sequence of the optimization problem. By Lemma 3.1, $\alpha \le u_n \le \beta$.

    Since $\{u_n\}$ is bounded and Ω is also bounded,

    $$ \|u_n\|_{L^1(\Omega)} < +\infty. \quad (3.7) $$

    Moreover, by the definition of a minimizing sequence, there is a constant $C > 0$ with $E(u_n) \le C$. The term $\int_\Omega (u_n - f\log u_n)\,dx\,dy$ is bounded below, attaining its minimum $\int_\Omega (f - f\log f)\,dx\,dy$ at $u_n = f$, and $\int_\Omega (u_n - f)^2\,dx\,dy \ge 0$. Hence

    $$ TV(u_n) \le C. \quad (3.8) $$

    From (3.7) and (3.8), $\{u_n\}$ is uniformly bounded in $BV(\Omega)$ with respect to n. By the compactness of $BV(\Omega)$, there exist a subsequence (still denoted $\{u_n\}$ for simplicity) and a function $u \in BV(\Omega)$ such that $u_n \to u$ strongly in $L^1(\Omega)$. Furthermore, we may assume $u_n(x,y) \to u(x,y)$ a.e. in Ω.

    The Lebesgue dominated convergence theorem then gives

    $$ \int_\Omega (u - f\log u)\,dx\,dy = \lim_{n\to\infty}\int_\Omega (u_n - f\log u_n)\,dx\,dy. \quad (3.9) $$

    By the lower semicontinuity of the total variation,

    $$ E(u) \le \liminf_{n\to\infty} E(u_n). \quad (3.10) $$

    Since $\{u_n\}$ is a minimizing sequence, u is a minimizer of the optimization problem (3.1); hence (3.1) has at least one solution in its solution space.

    The objective function E(u) in the optimization problem (3.1) is strictly convex, since the quadratic fidelity term is strictly convex; the uniqueness of the solution thus follows directly.

    In this section, we introduce the basic principles and solution process of the inverse scale space method and apply it to our proposed model. Some scholars have used the inverse scale space method to remove additive noise, and others have used it to remove multiplicative noise; however, to our knowledge it has not been used to remove combined noise. Motivated by this, we make some improvements to the inverse scale space method to remove combined noise.

    In the following we generalize nonlinear inverse scale space flow [30,31] to the proposed model (3.1), a relaxation method using two evolution equations to approximate the real evolution equations.

    The method relies on a convex variational problem:

    $$ \min_u \{J(u) + \lambda G(u,f)\}, \quad (4.1) $$

    where J(u) is the regularization term and G(u,f) is the data fidelity term. We assume that f has zero mean: $\int_\Omega f\,dx\,dy = 0$.

    Definition 4.1 (Subdifferential). Let f be a proper lower-semicontinuous convex function on $\mathbb{R}^m$, where $\mathbb{R}^m$ is m-dimensional Euclidean space. The subdifferential of f at $x \in \mathbb{R}^m$ is defined by

    $$ \partial f(x) := \{y \in \mathbb{R}^m : f(z) \ge f(x) + \langle y, z - x\rangle,\ \forall z \in \mathbb{R}^m\}. $$

    Definition 4.2 (Bregman distance). Assume $J: X \to \mathbb{R}$ is a convex function, where $X \subset \mathbb{R}^m$, $u, v \in X$, and $p \in \partial J(v)$. The Bregman distance between u and v is defined as

    $$ D_J^p(u,v) = J(u) - J(v) - \langle u - v, p\rangle. $$

    From the definition of the subdifferential, $D_J^p(u,v) \ge 0$. The Bregman distance does not satisfy symmetry: $D_J^p(u,v) \ne D_J^p(v,u)$ in general. Indeed, assume $p \in \partial J(u) \cap \partial J(v)$; then

    $$ D_J^p(u,v) = J(u) - J(v) - \langle u - v, p\rangle = -\big(J(v) - J(u) - \langle v - u, p\rangle\big) = -D_J^p(v,u). $$

    Applying Bregman's regularization method, (4.1) can be rewritten as

    $$ u_k = \arg\min_u \{D(u, u_{k-1}) + \lambda G(u,f)\}. \quad (4.2) $$

    By the generalized form of the Bregman distance, $D(u, u_{k-1}) = J(u) - J(u_{k-1}) - \langle u - u_{k-1}, p_{k-1}\rangle$ with $p_{k-1} \in \partial J(u_{k-1})$, so (4.2) can be rewritten as

    $$ u_k = \arg\min_u \{J(u) - J(u_{k-1}) - \langle u - u_{k-1}, p_{k-1}\rangle + \lambda G(u,f)\}. $$

    Omitting the constant terms (those involving only $u_{k-1}$) irrelevant to the minimization yields

    $$ u_k = \arg\min_u \{J(u) - \langle u, p_{k-1}\rangle + \lambda G(u,f)\}. \quad (4.3) $$

    The Euler-Lagrange equation of (4.3) is

    $$ p(u_k) - p(u_{k-1}) + \lambda\,\partial_u G(u_k,f) = 0, \quad (4.4) $$

    where $\partial_u G$ denotes the variation of G with respect to u.

    We are led to the relation

    $$ \frac{p(u_k) - p(u_{k-1})}{\lambda} = -\partial_u G(u_k,f). \quad (4.5) $$

    Let $\lambda = \Delta t$ and $p(u_k) = p(u(k\Delta t))$, so that

    $$ \frac{p(u_k) - p(u_{k-1})}{\Delta t} = -\partial_u G(u_k,f). $$

    Letting $\Delta t \to 0$, we arrive at the continuous process

    $$ \frac{\partial p}{\partial t} = -\partial_u G(u(t),f), \quad p \in \partial J(u). \quad (4.6) $$

    The initial conditions for (4.6) are $u|_{t=0} = c_0$, $p|_{t=0} = 0$, with $\int_\Omega \partial_u G(c_0,f) = 0$.

    The flow begins with $u(0) = c_0$ and converges back to the image f as $t \to \infty$, i.e., $\lim_{t\to\infty} u(t) = f$. By (4.6), the image u(t) flows from the smoothest possible image ($u(0) = c_0$) to the noisy image f; we use this flow to denoise the image.

    Since the relationship between p and u is difficult to calculate in the case of non-linearity, (4.6) cannot be directly calculated, so we propose a relaxation method to approximate the direct flow.

    The first-order optimality condition and $v_k = p_k/\lambda$ yield the following from (4.5):

    $$ v_k - v_{k-1} = -\partial_u G(u_k,f), $$
    $$ v_k = v_{k-1} - \partial_u G(u_k,f), \quad k \ge 1, \quad v_0 = 0. \quad (4.7) $$

    Therefore,

    $$ v_k = -\sum_{j=1}^{k} \partial_u G(u_j,f), \quad k \ge 1, $$

    and a system of equations for $u_k$ (4.3) and $v_k$ (4.7) can be expressed as

    $$ \begin{cases} u_k = \arg\min_u \{J(u) + \lambda(G(u,f) - \langle u, v_{k-1}\rangle)\},\\ v_k = v_{k-1} - \partial_u G(u_k,f), \end{cases} \quad (4.8) $$

    where $u_0 = c_0$, $v_0 = 0$, $k = 1, 2, 3, \dots$, and $\int_\Omega \partial_u G(c_0,f) = 0$.

    The first equation can also be rewritten as

    $$ \begin{aligned} u_k &= \arg\min_u \{J(u) - \langle u, \lambda v_{k-1}\rangle + \lambda G(u,f)\}\\ &= \arg\min_u \Big\{J(u) + \Big\langle u, \lambda\sum_{j=1}^{k-1}\partial_u G(u_j,f)\Big\rangle + \lambda G(u,f)\Big\}. \end{aligned} $$

    After convergence to a minimizer $u_k$ in the first equation of (4.8), it is easy to compute $\partial_u G(u_k,f)$ and update $v_k$. However, the computational cost of this process is quite high.

    The standard way of solving for $u_k$ and $v_k$ is first to evolve a steepest descent flow for $u_k$ with $v_{k-1}$ fixed, based on the Euler-Lagrange equation of (4.8):

    $$ p + \lambda(\partial_u G(u,f) - v_{k-1}) = 0, \quad p \in \partial J(u). $$

    The corresponding diffusion equation is

    $$ u_t = -p - \lambda(\partial_u G(u,f) - v_k), \quad p \in \partial J(u), \quad u|_{t=0} = u_{k-1}. \quad (4.9) $$

    The update for $v_k$ in the second equation of (4.8) can be viewed as an iterative descent in $v_{k-1}$ for minimizing $G(u_k,f)$. This is an indirect minimization, which affects $u_k$ through its coupling with $v_k$. We write the solution for $v_k$ as

    $$ v_\tau = -\partial_u G(u_k,f), \quad v|_{\tau=0} = v_{k-1}, $$

    where τ is a time variable similar to t. This extends the definition of the sequence $v_k$ to a continuous formulation.

    We propose two continuous flows u(t) and v(t) to approximate the sequences $u_k$ and $v_k$. Let $\tau = mt$; hence we obtain

    $$ v_t = m v_\tau = -m\,\partial_u G(u_k,f). \quad (4.10) $$

    Replacing $v_k$ in (4.9) by v(t) and $u_k$ in (4.10) by u(t) yields the relaxed inverse scale space flow

    $$ \begin{cases} u_t = -p + \lambda(-\partial_u G(u,f) + v),\\ v_t = -m\,\partial_u G(u,f), \end{cases} \quad (4.11) $$

    with $p \in \partial J(u)$ and initial conditions $u|_{t=0} = c_0$, $v|_{t=0} = 0$, $\int_\Omega \partial_u G(c_0,f) = 0$.

    Consider our model

    $$ E(u) = \int_\Omega |\nabla u|\,dx\,dy + \alpha_1\int_\Omega (u - f\log u)\,dx\,dy + \frac{\alpha_2}{2}\int_\Omega (u-f)^2\,dx\,dy. $$

    In [17,20], the authors used the inverse scale space method to solve the denoising model containing only one fidelity term. For the two fidelity terms in our paper, we do the following processing to apply the inverse scale space method theory.

    The greatest common divisor of two or more integers is the largest integer that divides each of them. The fidelity parameters of the model proposed in this paper may be small non-integers, so we generalize the concept of the greatest common divisor from integers to decimals. Define $\lambda = d(\alpha_1, \alpha_2)$ by

    $$ d(\alpha_1, \alpha_2) = \frac{\alpha_1}{\mu_1} = \frac{\alpha_2}{\mu_2}, $$

    where λ denotes the greatest common divisor of $\alpha_1$ and $\alpha_2$, $\mu_1$ and $\mu_2$ are integers, and $\alpha_1$ and $\alpha_2$ can be integers or decimals. Two examples of λ:

    When $\alpha_1 = 1$, $\alpha_2 = 2$, then $\lambda = d(1,2) = 1$; λ is an integer.

    When $\alpha_1 = 0.36$, $\alpha_2 = 0.45$, then $\lambda = d(0.36, 0.45) = 0.09$; λ is a decimal.

    In the model of this paper, G(u,f) consists of two terms whose fidelity coefficients are two different decimals, whereas the traditional inverse scale space method applies to a single fidelity term. We therefore use the generalized greatest common divisor $d(\alpha_1, \alpha_2)$ to combine the two terms with the traditional inverse scale space method.
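The generalized greatest common divisor can be computed exactly by writing the decimals as fractions. The sketch below is our own helper (the name `d` mirrors the notation above) and assumes the inputs are given as exact decimals:

```python
from fractions import Fraction
from math import gcd

def d(a1, a2):
    """Greatest common divisor generalized to decimals.

    Writes a1 and a2 as exact fractions and returns the largest number
    lambda such that a1/lambda and a2/lambda are both integers:
    gcd(p1/q1, p2/q2) = gcd(p1*q2, p2*q1) / (q1*q2).
    """
    f1, f2 = Fraction(str(a1)), Fraction(str(a2))
    return Fraction(gcd(f1.numerator * f2.denominator,
                        f2.numerator * f1.denominator),
                    f1.denominator * f2.denominator)
```

For the paper's second example, d(0.36, 0.45) = 0.09, giving the integer multipliers μ1 = 4 and μ2 = 5.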

    We denote the last two terms of E(u) by g(u,f) and let $g(u,f) = \lambda G(u,f)$; therefore,

    $$ G(u,f) = \mu_1\int_\Omega (u - f\log u)\,dx\,dy + \frac{\mu_2}{2}\int_\Omega (u-f)^2\,dx\,dy. $$

    Thus, it is possible to obtain

    $$ \partial_u G(u(t),f) = \mu_1\left(1 - \frac{f}{u}\right) + \mu_2(u - f). $$

    Then, we get the following inverse scale space model from (4.6):

    $$ p_t = \mu_1\left(\frac{f}{u} - 1\right) + \mu_2(f - u), \quad p \in \partial J(u). $$

    We also have, from (4.10),

    $$ v_t = -m\,\partial_u G(u(t),f) = m\left[\mu_1\left(\frac{f}{u} - 1\right) + \mu_2(f - u)\right]. $$

    The corresponding relaxed inverse scale space flow of the model proposed in this paper is

    $$ \begin{cases} u_t = -p(u) + \lambda\left[\mu_1\left(\dfrac{f}{u} - 1\right) + \mu_2(f - u) + v\right],\\[4pt] v_t = m\left[\mu_1\left(\dfrac{f}{u} - 1\right) + \mu_2(f - u)\right],\\[4pt] p(u) = -\mathrm{div}\left(\dfrac{\nabla u}{|\nabla u|}\right), \end{cases} \quad (4.12) $$

    with initial conditions $u|_{t=0} = c_0$, $v|_{t=0} = 0$, and $\int_\Omega \partial_u G(c_0,f) = 0$.

    The procedure for solving (4.12) is summarized as follows.

    Algorithm: Inverse scale space method (ISSM).
    Input: $\Delta t, \mu_1, \mu_2, \lambda, m$; β is a small fixed parameter to avoid singularity.
    1. Initialization: $u^0 = c_0$, $v^0 = 0$.
    2. Iteration. For every $k = 0, 1, 2, \dots$, compute $u^{k+1}$ and $v^{k+1}$:
    $$ u^{k+1} = u^k + \Delta t\left\{\mathrm{div}\left(\frac{\nabla u^k}{\sqrt{|\nabla u^k|^2 + \beta^2}}\right) + \lambda\left[\mu_1\left(\frac{f}{u^k} - 1\right) + \mu_2(f - u^k) + v^k\right]\right\}, $$
    $$ v^{k+1} = v^k + m\,\Delta t\left[\mu_1\left(\frac{f}{u^k} - 1\right) + \mu_2(f - u^k)\right]. $$
    3. Set $k = k + 1$ and go to Step 2; if the stopping condition is met, output the recovered image u.
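The ISSM update can be sketched in NumPy as follows. This is a minimal illustration under our own assumptions (periodic boundary handling via `np.roll`, a fixed iteration count instead of the paper's stopping condition, and a clip to keep u positive for the f/u term); it is not the authors' implementation:

```python
import numpy as np

def issm_denoise(f, mu1, mu2, lam, m=1.0, dt=0.1, beta=1e-4, n_iter=200):
    """Relaxed inverse scale space iteration for the combined-noise model.

    Each step adds dt times the regularized curvature term
    div(grad u / sqrt(|grad u|^2 + beta^2)) plus lam times the combined
    fidelity mu1*(f/u - 1) + mu2*(f - u) and the auxiliary variable v.
    """
    u = np.asarray(f, dtype=float).copy()   # c0 = f, as derived for the ISS flow
    v = np.zeros_like(u)
    for _ in range(n_iter):
        # forward differences (periodic boundaries for simplicity)
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux**2 + uy**2 + beta**2)
        px, py = ux / mag, uy / mag
        # divergence via backward differences (adjoint of forward differences)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        fidelity = mu1 * (f / u - 1.0) + mu2 * (f - u)
        u = u + dt * (div + lam * (fidelity + v))
        v = v + m * dt * fidelity
        u = np.clip(u, 1e-6, None)          # keep u positive for the f/u term
    return u
```

The clip is a purely numerical safeguard; Lemma 3.1 guarantees the continuous minimizer stays within the range of f.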

    It is obvious that the equilibrium of these equations ($u_t = 0$, $v_t = 0$) is $u^* = f$, $v^* = p(f)/\lambda$. We now show that for any $f \in BV$, the solutions converge to this steady state.

    The following energy function is defined to analyze the convergence:

    $$ e(t) = \frac{1}{2\lambda}\|u - f\|_2^2 + \frac{1}{2\alpha}\left\|v - \frac{q}{\lambda}\right\|_2^2, $$

    where $q \in \partial J(f)$, $u = u(t)$, $v = v(t)$, and α plays the role of the relaxation parameter m.

    Proposition 4.1. Let u(0), v(0) be initial values such that $e(0) < \infty$, and let u(t), v(t) be the solution of (4.12) with $\lambda > 0$, $\alpha > 0$. Then the energy e(t) decreases monotonically, and $e(t) \to 0$ as $t \to \infty$.

    Proof. Computing the time derivative of the energy functional, we obtain

    $$ \begin{aligned} \frac{de(t)}{dt} &= \frac{1}{\lambda}\langle u-f, u_t\rangle + \frac{1}{\alpha}\left\langle v - \frac{q}{\lambda}, v_t\right\rangle\\ &= \frac{1}{\lambda}\left\langle u-f, -p(u) + \lambda\left[\mu_1\left(\tfrac{f}{u}-1\right) + \mu_2(f-u) + v\right]\right\rangle + \frac{1}{\alpha}\left\langle v - \frac{q}{\lambda}, \alpha\left[\mu_1\left(\tfrac{f}{u}-1\right) + \mu_2(f-u)\right]\right\rangle\\ &= \left\langle u-f, -\frac{p(u)}{\lambda} + \mu_1\left(\tfrac{f}{u}-1\right) + \mu_2(f-u) + v\right\rangle + \left\langle v - \frac{q}{\lambda}, \mu_1\left(\tfrac{f}{u}-1\right) + \mu_2(f-u)\right\rangle. \end{aligned} $$

    At steady state $u - f = 0$, which is equivalent to $\mu_1(\frac{f}{u}-1) + \mu_2(f-u) = 0$; the two expressions vanish together, so we replace $\mu_1(\frac{f}{u}-1) + \mu_2(f-u)$ by $f - u$:

    $$ \begin{aligned} \frac{de(t)}{dt} &= \left\langle u-f, -\frac{p(u)}{\lambda} + f-u + v\right\rangle + \left\langle v - \frac{q}{\lambda}, f-u\right\rangle\\ &= \langle u-f, f-u\rangle + \left\langle u-f, -\frac{p(u)}{\lambda} + v\right\rangle + \left\langle v - \frac{q}{\lambda}, f-u\right\rangle\\ &= -\|u-f\|^2 - \left\langle u-f, \frac{p(u)}{\lambda} - \frac{q}{\lambda}\right\rangle\\ &= -\|u-f\|^2 - \frac{1}{\lambda}\langle u-f, p(u) - q\rangle. \end{aligned} $$

    By the definition of the Bregman distance, $D(u,f) = J(u) - J(f) - \langle u-f, q\rangle$ with $q \in \partial J(f)$, and $D(f,u) = J(f) - J(u) - \langle f-u, p(u)\rangle$ with $p(u) \in \partial J(u)$. Therefore $D(u,f) + D(f,u) = \langle u-f, p(u) - q\rangle$.

    Finally, we get

    $$ \frac{de(t)}{dt} = -\|u-f\|^2 - \frac{1}{\lambda}\big(D(u,f) + D(f,u)\big) \le 0, $$

    which implies that the energy e(t) decreases monotonically. Since $e(t) \ge 0$ and $de(t)/dt \le 0$, we get $\lim_{t\to\infty} e(t) = 0$.

    The dual function of E is $E^*(p) := \sup_u\{\langle u, p\rangle - E(u)\}$; then $p \in \partial_u E(u)$ and $u \in \partial_p E^*(p)$ are equivalent. If we can compute the dual function $E^*$, we obtain an explicit relationship u(p). Based on these conditions, we can derive the following estimates.

    First, the time derivatives of the fitting term and of u(t) are computed separately:

    $$ \frac{1}{2}\frac{d}{dt}\|u(t)-f\|_{L^2}^2 = \langle u(t)-f, \partial_t u(t)\rangle, \quad (4.13) $$
    $$ \partial_t u(t) = \frac{d}{dt}\,\partial_p E^*(p) = G(p(t))\,\partial_t p(t) = -G(p(t))\left(\frac{\mu_1}{u} + \mu_2\right)(u(t)-f). \quad (4.14) $$

    Because $u > 0$, u is bounded, $\mu_1 > 0$, and $\mu_2 > 0$, we may write $\frac{\mu_1}{u} + \mu_2 = a > 0$, and then

    $$ \partial_t u(t) = -a\,G(p(t))(u(t)-f), \quad (4.15) $$

    where $G = \partial_{pp} E^*$ is the Hessian matrix of the dual function.

    If $E^*$ is strictly convex, then there is a constant $b > 0$ such that

    $$ \langle \varphi, G(\omega)\varphi\rangle \ge b\|\varphi\|^2 \quad (4.16) $$

    for all $\varphi, \omega \in BV$. Combining (4.13), (4.15), and (4.16), we get

    $$ \frac{1}{2}\frac{d}{dt}\|u(t)-f\|_{L^2}^2 = -\langle u(t)-f,\ a\,G(p(t))(u(t)-f)\rangle \le -ab\|u(t)-f\|_{L^2}^2. $$

    Using Gronwall's inequality,

    $$ \|u(t)-f\|_{L^2} \le e^{-ab(t-s)}\|u(s)-f\|_{L^2} \le e^{-abt}\|f\|_{L^2} $$

    for $t > s$. Thus $u \to f$ in $L^2$ as $t \to \infty$.

    In the $L^2$ estimate above, f can be either a clean image or an image containing noise. Assume f is a clean image with $E(f) < \infty$; then we can derive the Bregman distance error estimate

    $$ \frac{d}{dt}D(f,u(t)) = \frac{d}{dt}\big[E(f) - E(u(t)) - \langle f-u(t), p(t)\rangle\big] = -\langle f-u(t), \partial_t p(t)\rangle = -\left\langle f-u(t), \left(\frac{\mu_1}{u}+\mu_2\right)(f-u(t))\right\rangle = -a\|u(t)-f\|_{L^2}^2. $$

    If $q \in \partial E(f)$ with $q \in L^2$, we can estimate p(t) against q:

    $$ \frac{1}{2}\frac{d}{dt}\|p(t)-q\|_{L^2}^2 = \langle \partial_t p(t), p(t)-q\rangle = \left\langle \left(\frac{\mu_1}{u}+\mu_2\right)(f-u(t)), p(t)-q\right\rangle = a\langle f-u(t), p(t)-q\rangle = -a\big(D(f,u) + D(u,f)\big). $$

    Integrating the above formula,

    $$ \frac{1}{2}\left(\|p(t)-q\|_{L^2}^2 - \|p(0)-q\|_{L^2}^2\right) + \int_0^t a\big(D(f,u(s)) + D(u(s),f)\big)\,ds = 0. $$

    Since $p(0) = 0$, since $\|p(t)-q\|_{L^2}^2$ and $D(u(s),f)$ are non-negative, and since $D(f,u(s)) \ge D(f,u(t))$ for $s \le t$, it follows that $D(f,u(t)) \le \frac{\|q\|_{L^2}^2}{2at}$. Thus $u \to f$ in the Bregman distance as $t \to \infty$.

    When f is a noisy image and g is a clean image with $E(g) < \infty$, it may happen that $E(f) = \infty$. In this case we have the following proposition.

    Proposition 4.2. Under the above conditions, if $\|f - u(t)\|_{L^2} > \sigma$ and $\|f - g\|_{L^2} \le \sigma$, where σ is the standard deviation of the noise, then $D(g, u(t))$ is monotonically decreasing in time.

    Proof.

    $$ \begin{aligned} \frac{d}{dt}D(g,u(t)) &= -\langle \partial_t p(t), g-u(t)\rangle = -\left\langle \left(\frac{\mu_1}{u}+\mu_2\right)(f-u(t)),\ g-u(t)\right\rangle\\ &= -a\langle f-u(t), g-f\rangle - a\|f-u(t)\|_{L^2}^2\\ &\le -\frac{a}{2}\|f-u(t)\|_{L^2}^2 + \frac{a}{2}\|f-g\|_{L^2}^2. \end{aligned} $$

    When $\|f-u(t)\|_{L^2} > \|f-g\|_{L^2}$, the right-hand side of the inequality is negative, and $D(g,u(t))$ is monotonically decreasing in time.

    For the initialization of the inverse scale space flow, $u(0) = c_0$, we use the fact that

    $$ \int_\Omega \partial_t p = 0, $$

    which means that $\int_\Omega \partial_u G(u(t),f) = 0$ is time invariant for all time.

    Letting $t \to 0$ gives

    $$ \int_\Omega \partial_u G(c_0,f) = 0. $$

    Combining this with $\partial_u G(u,f) = \mu_1\left(1 - \frac{f}{u}\right) + \mu_2(u-f)$, we obtain

    $$ \int_\Omega \left[\mu_1\left(1 - \frac{f}{c_0}\right) + \mu_2(c_0 - f)\right] = 0, $$
    $$ \int_\Omega \frac{\mu_1(c_0 - f) + \mu_2(c_0^2 - f c_0)}{c_0} = 0, $$
    $$ \int_\Omega \frac{\mu_2 c_0^2 + (\mu_1 - \mu_2 f)c_0 - \mu_1 f}{c_0} = 0, $$

    which means that

    $$ \frac{\mu_2 c_0^2 + (\mu_1 - \mu_2 f)c_0 - \mu_1 f}{c_0} = 0, \quad c_0 \ne 0, $$

    so

    $$ \mu_2 c_0^2 + (\mu_1 - \mu_2 f)c_0 - \mu_1 f = 0. \quad (4.17) $$

    The solutions of Eq (4.17) for $c_0$ are $c_{01} = f$ and $c_{02} = -\frac{\mu_1}{\mu_2} < 0$.

    Thus the initial condition of the ISS flow is $c_0 = f$.

    Our relaxed model can be written in a general form as

    {ut=p(u)+λ(uG(u,f)+v),vt=muG(u,f). (4.18)

    The first equation in (4.18) can be written in u with a second-order derivative in the time domain.

    2ttu=tp(u)+λ(t(uG(u,f))+tv).

    Replacing tv in the above formula with the second equation in (4.18), we can obtain

    2ttu=up(u)tu+λ((2uuG(u,f)tu)muG(u,f)).

    The above formula can be rewritten as

    2ttu+up(u)tu+λ(2uuG(u,f)tu)+λmuG(u,f)=0

    with the initial conditions ut=0 = c0,utt=0 = λuG(0,f).

    As in [30,31] we approximate p(u)=Δu, which leads to

    2ttu+(Δ+λ2uuG(u,f))tu+λmuG(u,f)=0. (4.19)

    We fix u as a constant u0 and f as a constant f0. Rewriting (4.19) in the frequency domain ξ by taking the Fourier transform, we obtain the characteristic equation

    r2+(ξ2+λ2uuG(u0,f0))r+λmuG(u0,f0)=0

    with two solutions

    r±=(ξ2+λ2uuG(u0,f0))2±(ξ2+λ2uuG(u0,f0))24λmuG(u0,f0)2.

Both roots are real for all frequencies if

$$\big(\partial_{uu}^2 G(u_0,f_0)\big)^2 \ge \frac{4m}{\lambda}\,\partial_u G(u_0,f_0),$$

    and we know

$$\partial_u G(u_0,f_0) = \mu_1\Big(1-\frac{f_0}{u_0}\Big) + \mu_2(u_0-f_0),$$
$$\partial_{uu}^2 G(u_0,f_0) = \frac{\mu_1 f_0}{u_0^2} + \mu_2.$$

Thus, we should require the parameters $m$ and $\lambda$ to satisfy

$$\frac{m}{\lambda} \le \frac{\Big(\dfrac{\mu_1 f_0}{u_0^2} + \mu_2\Big)^2}{4\Big(\mu_1\Big(1-\dfrac{f_0}{u_0}\Big) + \mu_2(u_0-f_0)\Big)}$$

    to ensure monotone evolution of u toward f.
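The closed-form roots $r_\pm$ and the reality condition can be sanity-checked numerically. In the sketch below, `b` stands for $\xi^2+\lambda\,\partial_{uu}^2 G(u_0,f_0)$ and `c` for $\lambda m\,\partial_u G(u_0,f_0)$; the values are arbitrary and chosen so that the condition above holds.

```python
import math

# b plays the role of xi^2 + lambda * G_uu(u0, f0),
# c plays the role of lambda * m * G_u(u0, f0); arbitrary test values
# with b^2 >= 4c, so both roots are real.
b, c = 5.0, 4.0

disc = b * b - 4.0 * c            # discriminant of r^2 + b*r + c = 0
r_plus = (-b + math.sqrt(disc)) / 2.0
r_minus = (-b - math.sqrt(disc)) / 2.0

# Both closed-form roots satisfy the characteristic equation.
for r in (r_plus, r_minus):
    assert abs(r * r + b * r + c) < 1e-12
```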

In this section, numerical experiments on medical images demonstrate the performance of the proposed model and extend its field of application. The peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) are currently the most widely used measures of image quality: the larger the PSNR value, the closer the denoised image is to the original image and the better the restoration.

    First, we introduce the mean square error (MSE).

$$\mathrm{MSE}(f,u) = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\big(f_{ij}-u_{ij}\big)^2,$$

where $f_{ij}$ and $u_{ij}$ respectively represent the pixel values of the restored image and the original image at $(i,j)$, and $M$ and $N$ are the numbers of rows and columns of the image.

    The specific calculation formula of the PSNR is as follows:

$$\mathrm{PSNR} = 10 \times \log_{10}\!\Big(\frac{L^2}{\mathrm{MSE}}\Big),$$

    where L denotes the maximum gray level of the image. For 8-bit grayscale images, L is equal to 255.
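The MSE and PSNR formulas above translate directly into code; a small NumPy sketch (the function names are ours):

```python
import numpy as np

def mse(f, u):
    """Mean square error between two equally sized grayscale images."""
    f, u = np.asarray(f, dtype=float), np.asarray(u, dtype=float)
    return float(np.mean((f - u) ** 2))

def psnr(f, u, L=255.0):
    """Peak signal-to-noise ratio in dB; L is the maximum gray level
    (255 for 8-bit images)."""
    return 10.0 * np.log10(L ** 2 / mse(f, u))
```

Note that `psnr` diverges for identical images (`mse == 0`), so it is only meaningful for distinct inputs.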

    The expression for SSIM is:

$$\mathrm{SSIM} = \frac{(2\mu_f\mu_u+c_1)(2\sigma_{fu}+c_2)}{(\mu_f^2+\mu_u^2+c_1)(\sigma_f^2+\sigma_u^2+c_2)},$$

where $\mu_f$ and $\mu_u$ are the means of $f$ and $u$, respectively; $\sigma_f^2$ and $\sigma_u^2$ are the variances of $f$ and $u$; $\sigma_{fu}$ is the covariance of $f$ and $u$; and $c_1$ and $c_2$ are constants used to maintain stability.
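For completeness, a single-window version of the SSIM formula above can be sketched as follows. Note that library implementations (and, presumably, the experiments in this paper) usually average the index over local sliding windows rather than computing one global value:

```python
import numpy as np

def ssim_global(f, u, L=255.0):
    """Single-window (global) SSIM between two grayscale images.
    c1 and c2 use the common choices (0.01*L)^2 and (0.03*L)^2."""
    f, u = np.asarray(f, dtype=float), np.asarray(u, dtype=float)
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_f, mu_u = f.mean(), u.mean()
    var_f, var_u = f.var(), u.var()
    cov = ((f - mu_f) * (u - mu_u)).mean()
    return ((2 * mu_f * mu_u + c1) * (2 * cov + c2)) / \
           ((mu_f ** 2 + mu_u ** 2 + c1) * (var_f + var_u + c2))
```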

    As shown in Figure 1, our numerical experiments use six real medical images, which are called "Adenocarcinoma", "Chest", "Brain", "CXR", "Knee", and "Foot".

    Figure 1.  (a) The original "Adenocarcinoma" image; (b) the original "Chest" image; (c) the original "Brain" image; (d) the original "CXR" image; (e) the original "Knee" image; (f) the original "Foot" image.

First, a more thorough analysis is given. We compared the denoising effect of the ISSM method of this paper on different sizes of the same image at the same noise level (25, 0.1); the results are shown in Table 1. According to Table 1, the PSNR value after denoising increases with the image size: a larger image contains more pixels and appears clearer, but also takes longer to process. Based on these considerations, we chose images of size 512×512 for the experiments in this paper.

    Table 1.  Comparison results of PSNR values for different image sizes.
| Size | Adenocarcinoma (Noisy) | Adenocarcinoma (Denoised) | Chest (Noisy) | Chest (Denoised) |
|---|---|---|---|---|
| 128×128 | 19.30 | 27.05 | 18.63 | 25.08 |
| 256×256 | 19.30 | 28.76 | 18.63 | 26.24 |
| 512×512 | 19.30 | 29.52 | 18.63 | 27.75 |
| 1024×1024 | 19.30 | 30.53 | 18.63 | 28.66 |

| Size | Brain (Noisy) | Brain (Denoised) | CXR (Noisy) | CXR (Denoised) |
|---|---|---|---|---|
| 128×128 | 19.68 | 25.18 | 18.48 | 27.01 |
| 256×256 | 19.68 | 27.68 | 18.48 | 27.91 |
| 512×512 | 19.68 | 29.86 | 18.48 | 28.62 |
| 1024×1024 | 19.68 | 30.42 | 18.48 | 28.93 |

| Size | Knee (Noisy) | Knee (Denoised) | Foot (Noisy) | Foot (Denoised) |
|---|---|---|---|---|
| 128×128 | 18.64 | 28.11 | 19.32 | 28.10 |
| 256×256 | 18.64 | 28.67 | 19.32 | 28.81 |
| 512×512 | 18.64 | 29.33 | 19.32 | 29.79 |
| 1024×1024 | 18.64 | 29.44 | 19.32 | 30.31 |


In the first experiment, six images with a size of 512×512 were corrupted by different degrees of noise: $(k_0,k_1)=(10,0),(25,0),(25,0.1),(25,0.2),(50,0)$. The PSNR values of the noisy images and of the denoised images are shown in Table 2. After extensive experiments, to obtain the best noise-reduction effect, we chose the parameters $dt=0.03$, $\beta=0.01$, $\alpha_1=0.01$, $\alpha_2=0.005$, and 800 iterations. It can be seen that the PSNR values of all tested images are improved after denoising by our model.
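The paper does not spell out the degradation model behind the pair $(k_0,k_1)$; one common reading, additive Gaussian noise with standard deviation $k_0$ combined with multiplicative (speckle) noise of variance $k_1$, can be sketched as follows. This model and the helper name are our assumptions:

```python
import numpy as np

def add_combined_noise(u, k0, k1, seed=0):
    """Assumed degradation model f = u * (1 + eta) + eps, where eta is
    zero-mean Gaussian with variance k1 (multiplicative part) and eps is
    zero-mean Gaussian with standard deviation k0 (additive part)."""
    rng = np.random.default_rng(seed)
    eta = np.sqrt(k1) * rng.standard_normal(u.shape)  # multiplicative
    eps = k0 * rng.standard_normal(u.shape)           # additive
    return u * (1.0 + eta) + eps
```

Under this reading, a pixel of intensity $v$ ends up with noise variance $v^2 k_1 + k_0^2$.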

    Table 2.  The PSNR and SSIM values of noisy and denoised images.
| Image | $(k_0,k_1)$ | Noisy PSNR/SSIM | Denoised PSNR/SSIM |
|---|---|---|---|
| Adenocarcinoma | (10, 0) | 28.13/0.2650 | 37.29/0.3986 |
| | (25, 0) | 20.16/0.1383 | 30.92/0.3237 |
| | (25, 0.1) | 19.30/0.1271 | 29.52/0.2962 |
| | (25, 0.2) | 17.25/0.1080 | 26.13/0.2358 |
| | (50, 0) | 14.14/0.0727 | 21.96/0.1496 |
| Chest | (10, 0) | 28.13/0.4677 | 34.72/0.6835 |
| | (25, 0) | 20.16/0.2190 | 29.73/0.4887 |
| | (25, 0.1) | 18.63/0.1872 | 27.75/0.4370 |
| | (25, 0.2) | 15.88/0.1454 | 23.36/0.3434 |
| | (50, 0) | 14.15/0.1056 | 21.16/0.2386 |
| Brain | (10, 0) | 28.13/0.4351 | 36.42/0.7259 |
| | (25, 0) | 20.19/0.2071 | 30.40/0.5453 |
| | (25, 0.1) | 19.68/0.1936 | 29.86/0.5209 |
| | (25, 0.2) | 18.51/0.1696 | 27.79/0.4599 |
| | (50, 0) | 14.16/0.0897 | 21.32/0.2404 |
| CXR | (10, 0) | 28.12/0.2974 | 37.14/0.6335 |
| | (25, 0) | 20.19/0.0881 | 31.50/0.4123 |
| | (25, 0.1) | 18.48/0.0694 | 28.62/0.3212 |
| | (25, 0.2) | 15.54/0.0458 | 23.44/0.2053 |
| | (50, 0) | 14.16/0.0272 | 21.44/0.1096 |
| Knee | (10, 0) | 28.13/0.1884 | 38.94/0.5220 |
| | (25, 0) | 20.18/0.0533 | 32.18/0.3122 |
| | (25, 0.1) | 18.64/0.0408 | 29.33/0.2286 |
| | (25, 0.2) | 15.91/0.0259 | 24.31/0.1261 |
| | (50, 0) | 14.16/0.0167 | 21.51/0.0685 |
| Foot | (10, 0) | 28.12/0.2627 | 37.01/0.3787 |
| | (25, 0) | 20.17/0.0903 | 31.30/0.2920 |
| | (25, 0.1) | 19.32/0.0764 | 29.79/0.2527 |
| | (25, 0.2) | 17.42/0.0586 | 26.17/0.1837 |
| | (50, 0) | 14.16/0.0338 | 21.42/0.1003 |


Figure 2 shows the PSNR values of the six medical images of Figure 1 at different noise levels, from which we can see that the denoising effect varies among the six images at the same noise level. Under the noise levels (10, 0) and (25, 0), the "Knee" image has the highest PSNR value after denoising and the best recovery effect; under (25, 0.1), the PSNR values of "Adenocarcinoma", "Brain", "Knee", and "Foot" are roughly the same, with "Brain" slightly higher; under (25, 0.2), the PSNR of "Brain" is the highest and its denoising effect is the best; under (50, 0), "Adenocarcinoma" has the highest PSNR value, though not distinctly higher than those of the other tested images.

    Figure 2.  The PSNR of the tested images at different noise levels (k0,k1).

We plot the PSNR and running time (CPU) of the tested images after denoising at different noise levels as bar charts (Figure 3), where 3(a) shows the PSNR after denoising and 3(b) the running time (CPU). The dark blue, red, yellow, light blue, green, and purple bars represent the "Adenocarcinoma", "Chest", "Brain", "CXR", "Knee", and "Foot" images, respectively. From 3(a), the denoising ability differs among images at different noise levels. From 3(b), the denoising runtime for the "Chest" image is the longest at all noise levels, while the runtimes of the other five images are roughly the same. We can also see from 3(b) that there is little difference in running time when the noise level is relatively small; when the noise level is large, the differences become more obvious, and the running times of the X-ray images (1(d)–1(f)) are more similar to each other than to those of the CT images (1(a)–1(c)).

    Figure 3.  (a) PSNR of the tested images at different noise levels (k0,k1); (b) CPU of the tested images at different noise levels (k0,k1).

In the second experiment, we illustrate from several aspects that our model performs well. We take $(k_0,k_1)=(25,0.1)$ and denoise the images in Figure 1. The parameters remain $dt=0.03$, $\beta=0.01$, $\alpha_1=0.01$, $\alpha_2=0.005$.

    Figure 4 shows that our proposed model can remove the mixture of additive and multiplicative noise to some extent.

Figure 4.  (a)–(c) The original image, the noisy image, and the denoised image of "Adenocarcinoma", respectively. (d)–(f) The original image, the noisy image, and the denoised image of "Chest", respectively. (g)–(i) The original image, the noisy image, and the denoised image of "Brain", respectively. (k)–(m) The original image, the noisy image, and the denoised image of "CXR", respectively. (n)–(p) The original image, the noisy image, and the denoised image of "Knee", respectively. (q)–(s) The original image, the noisy image, and the denoised image of "Foot", respectively.

In Figure 5, the histogram of each image shows the distribution of gray levels over all pixels. The horizontal axis represents the pixel value, and the vertical axis represents the number of times the corresponding pixel value appears. The histogram of the noise-corrupted image is completely different from that of the original image, whereas the histogram of the denoised image is very close to the original.

    Figure 5.  (a) Histogram of the "Adenocarcinoma" image; (b) histogram of the "Chest" image; (c) histogram of the "Brain" image; (d) histogram of the "CXR" image; (e) histogram of the "Knee" image; (f) histogram of the "Foot" image, where the left is the original image, the center is the noisy image, and the right is the denoised image.
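Computing and comparing such gray-level histograms is straightforward; a small sketch, where the helper names and the $L^1$ comparison are our illustrative choices:

```python
import numpy as np

def gray_histogram(img, levels=256):
    """Occurrence count of each gray level 0..levels-1 in an 8-bit image."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    return hist

def hist_l1(a, b, levels=256):
    """L1 distance between the normalized histograms of two images;
    small values mean the gray-level distributions are close."""
    ha = gray_histogram(a, levels) / np.asarray(a).size
    hb = gray_histogram(b, levels) / np.asarray(b).size
    return float(np.abs(ha - hb).sum())
```

A denoised image whose histogram is close to the original's yields a small `hist_l1` value, quantifying the visual comparison in Figure 5.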

In Figure 6, we show the 250th line of the tested images. The original, noisy, and denoised images are represented by blue, purple, and yellow solid lines, respectively. These figures show that when the original image is contaminated by noise, the one-dimensional profile oscillates greatly. The yellow solid line agrees closely with the blue one, which indicates that the denoising effect of our proposed model is very effective.

    Figure 6.  (a) The 250th line of the "Adenocarcinoma" image; (b) the 250th line of the "Chest" image; (c) the 250th line of the "Brain" image; (d) the 250th line of the "CXR" image; (e) the 250th line of the "Knee" image; (f) the 250th line of the "Foot" image.

In Figure 7, we look at the same area of each tested image and mark it with a red rectangle. The three-dimensional surface map of the original image is relatively flat, while that of the noisy image is rugged and accompanied by many sharp protrusions. The surface map of the denoised image is clearly highly consistent with that of the original, which shows that our proposed model has a good denoising effect.

    Figure 7.  (a) The 3D surface map of the "Adenocarcinoma" image; (b) the 3D surface map of the "Chest" image; (c) the 3D surface map of the "Brain" image; (d) the 3D surface map of the "CXR" image; (e) the 3D surface map of the "Knee" image; (f) the 3D surface map of the "Foot" image. In each figure, the top left corner is the original image with the red rectangular box, the top right corner is its corresponding 3D surface map, the bottom left corner is the 3D surface map with noise, and the bottom right corner is the 3D surface map after denoising.

In the third experiment, we first selected several different sets of parameter values and conducted experiments; the results are shown in Table 3. According to Table 3, different parameter values affect the denoising effect (PSNR and SSIM values); therefore, to find suitable parameter values, we use a grid search.

    Table 3.  Comparison results of PSNR and SSIM values with different parameters α1 and α2.
| $\alpha_1$ | $\alpha_2$ | Noisy PSNR | Noisy SSIM | Denoised PSNR | Denoised SSIM |
|---|---|---|---|---|---|
| 0.001 | 0.005 | 19.36 | 0.1156 | 29.88 | 0.2851 |
| 0.005 | 0.005 | 19.38 | 0.1160 | 29.94 | 0.2842 |
| 0.01 | 0.005 | 19.35 | 0.1155 | 29.90 | 0.2840 |
| 0.05 | 0.005 | 19.34 | 0.1157 | 24.95 | 0.2830 |
| 0.01 | 0.001 | 19.36 | 0.1157 | 28.75 | 0.2850 |
| 0.01 | 0.005 | 19.36 | 0.1156 | 29.97 | 0.2831 |
| 0.01 | 0.01 | 19.34 | 0.1152 | 29.86 | 0.2818 |
| 0.01 | 0.05 | 19.36 | 0.1157 | 29.82 | 0.2829 |


We examine the impact of varying $\alpha_1 \in [0.001, 0.05]$ and $\alpha_2 \in [0.001, 0.05]$ on the proposed variational method. Our method is executed with the parameters $dt=0.03$, $\beta=0.01$, the number of iterations $N=700$, and a noise level of (25, 0.1). In Figure 8(a), we set $\alpha_2=0.005$; according to the PSNR values, this type of noise is removed excellently when $\alpha_1 \in [0.001, 0.015]$. In 8(b), we set $\alpha_1=0.003$; the noise is removed excellently when $\alpha_2 \in [0.001, 0.01]$. Figure 8(c) shows a three-dimensional plot of the PSNR values as the regularization parameters $\alpha_1$ and $\alpha_2$ vary: the darker the red, the higher the PSNR value, and the red regions of higher PSNR are consistent with the conclusions drawn from 8(a) and 8(b). For more details, see Figure 8.

    Figure 8.  (a) The PSNR value of the restored image with different values of α1; (b) the PSNR value of the restored image with different values of α2; (c) the PSNR value of the restored image with different values of α1 and α2.

From Figure 8, we can further see that although the sizes of the parameters $\alpha_1$ and $\alpha_2$ do not have a significant impact on the PSNR value after denoising overall, individual parameter values can make the PSNR drop dramatically, or even become negative. To make the model's denoising effect more robust and accurate, we need to discuss the parameters.
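The grid search mentioned above can be organized as a plain double loop. In this sketch, `denoise` and `psnr` are placeholder callables, not the paper's actual routines, and the grid bounds follow the ranges quoted above:

```python
import numpy as np

def grid_search(noisy, clean, denoise, psnr,
                alphas1=np.linspace(0.001, 0.05, 8),
                alphas2=np.linspace(0.001, 0.05, 8)):
    """Return (best_score, alpha1, alpha2) maximizing PSNR on a reference
    pair. `denoise(noisy, a1, a2)` and `psnr(clean, restored)` are
    user-supplied callables."""
    best = (-np.inf, None, None)
    for a1 in alphas1:
        for a2 in alphas2:
            score = psnr(clean, denoise(noisy, a1, a2))
            if score > best[0]:
                best = (score, a1, a2)
    return best
```

A finer grid (or a coarse-to-fine refinement around the best cell) trades runtime for precision in the recovered parameters.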

In the fourth experiment, in order to decide the value of $N$, we study the relation between the PSNR and the number of iterations $N$. Figure 9 shows this relation for the "Adenocarcinoma", "Brain", and "Knee" images corrupted by combined additive and multiplicative noise with intensity $(k_0,k_1)=(25,0.1)$; the curves differ slightly among the images. In 9(a), the curve first rises, then decreases, and finally reaches a steady state; in 9(b) and 9(c), the curves rise and then reach a steady state. Therefore, for the "Adenocarcinoma" and "Brain" images, the optimal number of iterations for removing the mixed noise lies in the interval (1000, 1250), while for the "Knee" image it is around 1500.

    Figure 9.  The relationship between PSNR and the number of iterations N. (a) The "Adenocarcinoma" image; (b) the "Brain" image; (c) the "Knee" image.

In the following, we again test the six images of Figure 1. Noise of $(k_0,k_1)=(25,0.1)$ is added to the images, which are then denoised with gradient descent (GDE) and the inverse scale space algorithm (ISSM); Table 4 reports the PSNR, SSIM, and running time (CPU) after denoising, with higher PSNR/SSIM values and shorter CPU times in bold. In this experiment, the parameters are $dt=0.03$, $\beta=0.01$, $(\alpha_1,\alpha_2)=(0.01,0.005)$, and 500 iterations. From Table 4, the PSNR and SSIM values of the ISSM are slightly higher than those of the GDE; for images 1(a), 1(c), and 1(f), the running time of the GDE is slightly longer than that of the ISSM, whereas for images 1(b), 1(d), and 1(e) it is about 2 seconds longer. In general, the ISSM runs somewhat faster than the GDE. Therefore, the method in this paper is better than the GDE in terms of PSNR, SSIM, and runtime.

    Table 4.  Comparison results of PSNR, SSIM values, and runtime (CPU) of the two methods.
| Image | Noisy PSNR | GDE PSNR | ISSM PSNR | Noisy SSIM | GDE SSIM | ISSM SSIM | GDE Time (s) | ISSM Time (s) |
|---|---|---|---|---|---|---|---|---|
| Adenocarcinoma | 19.30 | 28.34 | **29.52** | 0.1271 | 0.2923 | **0.2962** | 25.7696 | **25.1879** |
| Chest | 18.63 | 27.51 | **27.75** | 0.1872 | 0.4317 | **0.4370** | 30.0489 | **28.8607** |
| Brain | 19.68 | 29.38 | **29.86** | 0.1936 | 0.5095 | **0.5209** | 27.0294 | **27.0153** |
| CXR | 18.48 | 28.29 | **28.62** | 0.0694 | 0.3103 | **0.3212** | 27.5674 | **24.9295** |
| Knee | 18.64 | 28.96 | **29.33** | 0.0408 | 0.2174 | **0.2286** | 27.6340 | **24.4423** |
| Foot | 19.32 | 29.24 | **29.79** | 0.0764 | 0.2479 | **0.2527** | 26.8773 | **26.0720** |


    We used the "Adenocarcinoma" image to compare the ISSM method with the GDE method in detail. As shown in Figure 10, 10(a) shows that the GDE denoising method yields a greater number of low-pixel values than the ISSM method. 10(b) shows that the ISSM method and the GDE method are essentially identical at line 370th of the image after denoising. However, in certain instances, the ISSM method outperforms the GDE method. As shown in 10(c), the 3D surface maps of the images denoised by the ISSM method and the GDE method are basically the same. Thus, Figure 10 as a whole reflects that the inverse scale space method (ISSM) is superior to the gradient descent method (GDE).

    Figure 10.  (a) The histogram of the "Adenocarcinoma" image; (b) the 370th line of the "Adenocarcinoma" image; (c) the 3D surface map of the "Adenocarcinoma" image.

In this subsection of the experiment, we tested brain tumour MRI and fetal head ultrasound images, shown in Figure 11. The original image, the noisy image, the ISSM-denoised image, and the GDE-denoised image are shown from left to right in Figure 12, where the noise level is (25, 0.1). Visually, very little difference can be seen with the naked eye, so we also compare the methods quantitatively in Table 5.

    Table 5.  Comparison results of PSNR, SSIM values, and runtime (CPU) of the two methods.
| Image | Noisy PSNR | GDE PSNR | ISSM PSNR | Noisy SSIM | GDE SSIM | ISSM SSIM | GDE Time (s) | ISSM Time (s) |
|---|---|---|---|---|---|---|---|---|
| MRI | 19.76 | 26.65 | **30.30** | 0.1271 | 0.2601 | **0.3390** | 25.9930 | **24.7379** |
| Ultrasound | 19.98 | 25.76 | **30.40** | 0.1117 | 0.2330 | **0.2836** | 27.5113 | **27.2909** |

    Figure 11.  (a) Brain tumour MRI image; (b) fetal head ultrasound image.
    Figure 12.  Original image, noisy image, and image after denoising by different methods.

Table 5 clearly shows that the PSNR and SSIM values obtained by our method are higher than those of the GDE, which indicates that our method can also effectively denoise two further image types, MRI and ultrasound, besides X-ray and CT. The comparison of the running times in the last two columns shows that our method also offers a slight improvement over the GDE, further illustrating the adaptability of our method to medical image denoising.

In this section of the experiment, we add salt-and-pepper noise with a noise level of 0.02 to the images in Figure 11 and denoise them with our method and the GDE; the results are shown in Figure 13 and Table 6. From the figure, both our method and the GDE can effectively remove part of the salt-and-pepper noise, and the ISSM denoises better than the GDE, but neither method can effectively retain the texture details. Table 6 shows that the PSNR values after denoising are not high, and there is little difference between the two methods.

    Table 6.  Comparison results of PSNR values of the two methods.
| Image | Noisy PSNR | GDE PSNR | ISSM PSNR |
|---|---|---|---|
| MRI | 9.86 | 10.23 | 10.26 |
| Ultrasound | 13.06 | 14.02 | 14.09 |

    Figure 13.  Original image, noisy image, and image after denoising by different methods.

In this paper, we combined the data fidelity terms of the ROF model and the I-divergence model to restore medical images corrupted by combined noise. We proved the existence and uniqueness of the minimum solution of the model, and used the inverse scale space method to solve the nonlinear PDE arising from minimizing the proposed functional. Experimental results demonstrate from several perspectives that the proposed model can remove combined noise from medical images; on this model, the inverse scale space method improves the PSNR more and runs faster than traditional methods. Parameter sensitivity was also discussed.

Medical imaging is often affected by noise, which degrades image quality. The model and method proposed in this paper can effectively denoise medical images, enabling doctors to identify lesions more accurately and reducing the likelihood of misdiagnosis and missed diagnosis, thereby improving diagnostic accuracy and lesion detection. It can also reduce the number of repeated examinations and improve the patient experience. Learning-based methods are currently a popular research direction; combining the modeling approach of this paper with deep learning, and further exploring real noisy medical images, are directions for future work.

    Conceptualization, Chenwei Li and Donghong Zhao; methodology, Chenwei Li and Donghong Zhao; software, Chenwei Li; formal analysis, Chenwei Li and Donghong Zhao; writing—original draft preparation, Chenwei Li; writing—review and editing, Chenwei Li and Donghong Zhao; visualization, Chenwei Li. All authors have read and approved the final version of the manuscript for publication.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors thank the reviewers for their helpful comments and valuable suggestions. This research is funded by the National Natural Science Foundation of China (grant number 12371481) and Youth Teaching Talents Training Program of USTB (2016JXGGRC-002).

    All authors declare no conflicts of interest.



[1] G. Aubert, P. Kornprobst, Mathematical problems in image processing: partial differential equations and the calculus of variations, 2nd ed., Applied Mathematical Sciences, 2006. http://dx.doi.org/10.1007/978-0-387-44588-5
    [2] D. L. Donoho, I. M. Johnstone, Adapting to unknown smoothness via wavelet shrinkage, J. Am. Stat. Assoc., 90 (1995), 1200–1224. http://dx.doi.org/10.1080/01621459.1995.10476626 doi: 10.1080/01621459.1995.10476626
    [3] S. Osher, M. Burger, D. Goldfarb, J. Xu, W. Yin, An iterative regularization method for total variation-based image restoration, Multiscale Model. Simul., 4 (2005), 460–489. http://dx.doi.org/10.1137/040605412 doi: 10.1137/040605412
    [4] X. Yu, D. Zhao, A weberized total variance regularization-based image multiplicative noise model, Image Anal. Stereol., 42 (2023), 65–76. http://dx.doi.org/10.5566/ias.2837 doi: 10.5566/ias.2837
    [5] P. Kornprobst, R. Deriche, G. Aubert, Image sequence analysis via partial differential equations, J. Math. Imaging Vis., 11 (1999), 5–26. http://dx.doi.org/10.1023/A:1008318126505 doi: 10.1023/A:1008318126505
    [6] S. M. A. Sharif, R. A. Naqvi, Z. Mehmood, J. Hussain, A. Ali, S. W. Lee, Meddeblur: medical image deblurring with residual dense spatial-asymmetric attention, Mathematics, 11 (2023), 115. http://dx.doi.org/10.3390/math11010115 doi: 10.3390/math11010115
    [7] R. A. Naqvi, A. Haider, H. S. Kim, D. Jeong, S. W. Lee, Transformative noise reduction: leveraging a transformer-based deep network for medical image denoising, Mathematics, 12 (2024), 2313. http://dx.doi.org/10.3390/math12152313 doi: 10.3390/math12152313
    [8] S. Umirzakova, S. Mardieva, S. Muksimova, S. Ahmad, T. Whangbo, Enhancing the super-resolution of medical images: introducing the deep residual feature distillation channel attention network for optimized performance and efficiency, Bioengineering, 10 (2023), 1332. http://dx.doi.org/10.3390/bioengineering10111332 doi: 10.3390/bioengineering10111332
    [9] M. Shakhnoza, S. Umirzakova, M. Sevara, Y. I. Cho, Enhancing medical image denoising with innovative teacher student model-based approaches for precision diagnostics, Sensors, 23 (2023), 9502. http://dx.doi.org/10.3390/s23239502 doi: 10.3390/s23239502
    [10] L. I. Rudin, S. Osher, E. Fatemi, Nonlinear total variation based noise removal algorithms, Phys. D, 60 (1992), 259–268. http://dx.doi.org/10.1016/0167-2789(92)90242-F doi: 10.1016/0167-2789(92)90242-F
    [11] A. Boukdir, M. Nachaoui, A. Laghrib, Hybrid variable exponent model for image denoising: a nonstandard high-order pde approach with local and nonlocal coupling, J. Math. Anal. Appl., 536 (2024), 128245. http://dx.doi.org/10.1016/j.jmaa.2024.128245 doi: 10.1016/j.jmaa.2024.128245
    [12] W. Lian, X. Liu, Non-convex fractional-order TV model for impulse noise removal, J. Comput. Appl. Math., 417 (2023), 114615. http://dx.doi.org/10.1016/j.cam.2022.114615 doi: 10.1016/j.cam.2022.114615
[13] J. Shen, On the foundations of vision modeling: I. Weber's law and Weberized TV restoration, Phys. D, 175 (2003), 241–251. http://dx.doi.org/10.1016/S0167-2789(02)00734-0 doi: 10.1016/S0167-2789(02)00734-0
    [14] C. Liu, X. Qian, C. Li, A texture image denoising model based on image frequency and energy minimization, Springer Berlin Heidelberg, 2012,939–949. http://dx.doi.org/10.1007/978-3-642-34531-9_101
[15] C. Li, D. Zhao, A non-convex fractional-order differential equation for medical image restoration, Symmetry, 16 (2024), 258. http://dx.doi.org/10.3390/sym16030258 doi: 10.3390/sym16030258
    [16] J. S. Lee, Digital image enhancement and noise filtering by use of local statistics, IEEE Trans. Pattern Anal. Mach. Intell., PAMI-2 (1980), 165–168. http://dx.doi.org/10.1109/TPAMI.1980.4766994 doi: 10.1109/TPAMI.1980.4766994
    [17] A. Achim, A. Bezerianos, P. Tsakalides, Novel bayesian multiscale method for speckle removal in medical ultrasound images, IEEE Trans. Med. Imaging, 20 (2001), 772–783. http://dx.doi.org/10.1109/42.938245 doi: 10.1109/42.938245
    [18] L. Rudin, P. L. Lions, S. Osher, Multiplicative denoising and deblurring: theory and algorithms, In: Geometric level set methods in imaging, vision, and graphics, New York: Springer, 2003,103–119. https://dx.doi.org/10.1007/0-387-21810-6_6
    [19] G. Aubert, J. F. Aujol, A variational approach to removing multiplicative noise, SIAM J. Appl. Math., 68 (2008), 925–946. http://dx.doi.org/10.1137/060671814 doi: 10.1137/060671814
    [20] J. Shi, S. Osher, A nonlinear inverse scale space method for a convex multiplicative noise model, SIAM J. Imaging Sci., 1 (2008), 294–321. http://dx.doi.org/10.1137/070689954 doi: 10.1137/070689954
[21] G. Steidl, T. Teuber, Removing multiplicative noise by Douglas-Rachford splitting methods, J. Math. Imaging Vis., 36 (2010), 168–184. http://dx.doi.org/10.1007/s10851-009-0179-5 doi: 10.1007/s10851-009-0179-5
    [22] L. Xiao, L. L. Huang, Z. H. Wei, A weberized total variation regularization-based image multiplicative noise removal algorithm, EURASIP J. Adv. Signal Process., 2010 (2010), 1–15. http://dx.doi.org/10.1155/2010/490384 doi: 10.1155/2010/490384
    [23] L. Huang, L. Xiao, Z. Wei, A nonlinear inverse scale space method for multiplicative noise removal based on weberized total variation, 2009 Fifth International Conference on Image and Graphics, Xi'an, China, 2009,119–123. http://dx.doi.org/10.1109/ICIG.2009.19
    [24] X. Liu, T. Sun, Hybrid non-convex regularizers model for removing multiplicative noise, Comput. Math. Appl., 126 (2022), 182–195. http://dx.doi.org/10.1016/j.camwa.2022.09.012 doi: 10.1016/j.camwa.2022.09.012
    [25] C. Li, C. He, Fractional-order diffusion coupled with integer-order diffusion for multiplicative noise removal, Comput. Math. Appl., 136 (2023), 34–43. http://dx.doi.org/10.1016/j.camwa.2023.01.036 doi: 10.1016/j.camwa.2023.01.036
    [26] K. Hirakawa, T. W. Parks, Image denoising using total least squares, IEEE Trans. Image Process., 15 (2006), 2730–2742. http://dx.doi.org/10.1109/TIP.2006.877352 doi: 10.1109/TIP.2006.877352
    [27] N. Chumchob, K. Chen, C. Brito-Loeza, A new variational model for removal of combined additive and multiplicative noise and a fast algorithm for its numerical approximation, Int. J. Comput. Math., 90 (2013), 140–161. http://dx.doi.org/10.1080/00207160.2012.709625 doi: 10.1080/00207160.2012.709625
    [28] A. Ullah, W. Chen, M. A. Khan, H. G. Sun, An efficient variational method for restoring images with combined additive and multiplicative noise, Int. J. Appl. Comput. Math., 3 (2017), 1999–2019. http://dx.doi.org/10.1007/s40819-016-0219-y doi: 10.1007/s40819-016-0219-y
    [29] Y. Chen, W. Feng, R. Ranftl, H. Qiao, T. Pock, A higher-order mrf based variational model for multiplicative noise reduction, IEEE Signal Process Lett., 21 (2014), 1370–1374. http://dx.doi.org/10.1109/LSP.2014.2337274 doi: 10.1109/LSP.2014.2337274
    [30] P. Ochs, Y. Chen, T. Brox, T. Pock, ipiano: inertial proximal algorithm for nonconvex optimization, SIAM J. Imaging Sci., 7 (2014), 1388–1419. http://dx.doi.org/10.1137/130942954 doi: 10.1137/130942954
    [31] O. Scherzer, C. Groetsch, Inverse scale space theory for inverse problems, In: Scale-space and morphology in computer vision, Springer, Berlin, Heidelberg, 2106 (2001), 317–325. http://dx.doi.org/10.1007/3-540-47778-0_29
[32] C. W. Groetsch, O. Scherzer, Nonstationary iterated Tikhonov-Morozov method and third order differential equations for the evaluation of unbounded operators, Math. Methods Appl. Sci., 23 (2000), 1287–1300.
    [33] M. Burger, S. Osher, J. Xu, G. Gilboa, Nonlinear inverse scale space methods for image restoration, In: Variational, geometric, and level set methods in computer vision, Springer, Berlin, Heidelberg, 3752 (2005), 25–36. http://dx.doi.org/10.1007/11567646_3
    [34] M. Burger, G. Gilboa, S. Osher, J. Xu, Nonlinear inverse scale space methods, Commun. Math. Sci., 4 (2006), 179–212. http://dx.doi.org/10.4310/CMS.2006.v4.n1.a7 doi: 10.4310/CMS.2006.v4.n1.a7
    [35] I. Csiszar, Why least squares and maximum entropy? an axiomatic approach to inference for linear inverse problems, Ann. Statist., 19 (1991), 2032–2066. http://dx.doi.org/10.1214/AOS/1176348385 doi: 10.1214/AOS/1176348385
© 2025 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)