Research article

Multi-convolutional neural network brain image denoising study based on feature distillation learning and dense residual attention

  • Medical image denoising is particularly important in brain image processing. Noise in acquisition and transmission degrades image quality and affects the reliability of diagnosis and research. Due to the complexity of the brain's structure and minor density differences, noise can increase diagnosis difficulty, so high-quality images are essential for disease detection, prognosis assessment, and treatment plan development. This paper proposes a multi-convolutional neural network based on feature distillation learning and dense residual attention to enhance the quality of brain images and improve denoising performance. The overall network structure contains four parts: a global sparse network (GSN), a dense residual attention network (DRAN), a feature distiller network (FDN), and a feature processing block (FPB). Before feeding the brain images into the denoising network model, they are preprocessed using a modified watershed algorithm based on a combination of a morphological gradient, Sobel's operator, and Canny's operator. The GSN is used to extract global features and enlarge the receptive field, and the DRAN efficiently extracts key features by combining improved channel attention and spatial attention mechanisms. The FDN extracts useful features through two feature distillation blocks, suppresses redundant information, and reduces computational complexity. The FPB performs feature fusion. Experimental results on brain image datasets and public open datasets show that the proposed model outperforms existing methods in several metrics, and helps to improve the accuracy of brain disease diagnosis and treatment.

    Citation: Huimin Qu, Haiyan Xie, Qianying Wang. Multi-convolutional neural network brain image denoising study based on feature distillation learning and dense residual attention[J]. Electronic Research Archive, 2025, 33(3): 1231-1266. doi: 10.3934/era.2025055




    Coagulation-fragmentation (CF) equations have been used to model many physical and biological phenomena [1,2]. In particular, when combined with transport terms, these equations can be used to model the population dynamics of oceanic phytoplankton [3,4,5]. Setting such models in the space of Radon measures allows for the unified study of both discrete and continuous structures. Not only are the classical discrete and continuous CF equations special cases of the measure valued model (as shown in [6]), but this setting allows for a mixing of the two structures, which has become of interest in particular applications [7,8].

    With the above applications in mind, numerical schemes to solve CF equations are of great importance to researchers. In particular, finite difference methods offer numerical schemes which are easy to implement and approximate the solution with a high order of accuracy. The latter benefit is especially important in the study of stability and optimal control of such equations.

The purpose of this article is to make improvements on two of the three first-order schemes presented in [9], namely the fully explicit and semi-implicit schemes. These schemes are shown to have certain advantages and disadvantages, as discussed in the aforementioned study. In particular, the fully explicit scheme has the qualitative property of conservation of mass through coagulation. On the other hand, the semi-implicit scheme has a more relaxed Courant–Friedrichs–Lewy (CFL) condition, which does not depend on the initial condition. We have decided not to attempt to improve the third scheme presented in [9], as there does not seem to be a significant advantage of the named conservation law scheme to outweigh its drastic computational cost. The main improvement here is to lift these two first-order schemes to second-order ones on the space of Radon measures; however, as this state space contains singular elements (including point measures), the improvement of these schemes must be handled with care. As shown in [10], discontinuities and singularities in the solution can cause drastic changes not only in the order of convergence of the scheme, but also in the behavior of the scheme. To address these issues, we turn to a high resolution scheme studied with classical structured population models (i.e., without coagulation-fragmentation) in [11,12,13]. This scheme makes use of a minmod flux limiter to control any oscillatory behavior of the scheme caused by irregularities. With this new flux, we show that it is possible for second-order convergence rates to be obtained for continuous density solutions. However, as the solutions become more irregular, one should expect the convergence rate to decline. Such a phenomenon is demonstrated in [10,13], and we direct the reader to these manuscripts for more discussion.

    The layout of the paper is as follows. In Section 2, we present the notation and preliminary results about the model and state space used throughout the paper. In Section 3, we describe the model and state all assumptions imposed on the model parameters. In Section 4, we present the numerical schemes, their CFL conditions, and state the main theorem of the paper. In Section 5, we test the convergence rate of the schemes against well-known examples. In Section 6, we provide a conclusion and in the Appendix (Section 7) we provide proofs for some of our results.

We make use of standard notation for function spaces. The most common examples of these are $C^1(\mathbb{R}_+)$ for the space of real-valued continuously differentiable functions and $W^{1,\infty}(\mathbb{R}_+)$ for the usual Sobolev space. The space of Radon measures will be denoted by $\mathcal{M}(\mathbb{R}_+)$, with $\mathcal{M}^+(\mathbb{R}_+)$ representing its positive cone. This space will be equipped with the bounded Lipschitz (BL) norm given by

$$\|\mu\|_{BL} := \sup\left\{ \int_{\mathbb{R}_+} \phi(x)\,\mu(dx) \;:\; \phi \in W^{1,\infty}(\mathbb{R}_+),\ \|\phi\|_{W^{1,\infty}} \le 1 \right\}.$$

Another norm of interest on this space is the well-studied total variation (TV) norm, given by

$$\|\nu\|_{TV} = |\nu|(\mathbb{R}_+) = \sup\left\{ \int_{\mathbb{R}_+} f(x)\,\nu(dx) \;:\; f \in C_c(\mathbb{R}_+),\ \|f\|_{\infty} \le 1 \right\}.$$

For more information about these particular norms and their relationship, we direct the reader to [14,15]. For clarity, we use operator notation in place of integration when we believe it necessary, namely

$$(\mu, f) := \int_A f(x)\,\mu(dx),$$

where the set $A$ is the support of the measure $\mu$. Finally, we denote the minmod function by $\mathrm{mm}(a,b)$ and use the following definition:

$$\mathrm{mm}(a,b) := \frac{\operatorname{sign}(a) + \operatorname{sign}(b)}{2}\,\min(|a|, |b|).$$
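As a quick illustration (our own sketch, not part of the paper), the minmod function takes a few lines: it returns 0 whenever its two arguments disagree in sign, and otherwise returns the argument of smaller magnitude.

```python
def minmod(a, b):
    """mm(a, b) = (sign(a) + sign(b)) / 2 * min(|a|, |b|).

    Returns 0 when a and b have opposite signs (or either is 0); otherwise
    returns the smaller-magnitude argument, keeping the common sign.
    """
    sign = lambda v: (v > 0) - (v < 0)
    return (sign(a) + sign(b)) / 2 * min(abs(a), abs(b))
```

Zeroing out the limiter at sign changes is what suppresses spurious oscillations near irregularities of the solution.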

The model of interest is the size-structured coagulation-fragmentation model given by

$$\begin{cases} \partial_t \mu + \partial_x\big(g(t,\mu)\mu\big) + d(t,\mu)\mu = K[\mu] + F[\mu], & (t,x) \in (0,T) \times (0,\infty),\\[1mm] g(t,\mu)(0)\,\dfrac{D\mu}{Dx}(0) = \displaystyle\int_{\mathbb{R}_+} \beta(t,\mu)(y)\,\mu(dy), & t \in [0,T],\\[1mm] \mu(0) = \mu_0 \in \mathcal{M}^+(\mathbb{R}_+), \end{cases}\tag{3.1}$$

where $\mu(t) \in \mathcal{M}^+(\mathbb{R}_+)$ represents the individuals' size distribution at time $t$, and the functions $g$, $d$, $\beta$ are their growth, death, and reproduction rates, respectively. The coagulation and fragmentation processes of a population distributed according to $\mu \in \mathcal{M}^+(\mathbb{R}_+)$ are modeled by the measures $K[\mu]$ and $F[\mu]$ given by

$$(K[\mu], \phi) = \frac{1}{2}\int_{\mathbb{R}_+}\!\int_{\mathbb{R}_+} \kappa(y,x)\,\phi(x+y)\,\mu(dx)\,\mu(dy) - \int_{\mathbb{R}_+}\!\int_{\mathbb{R}_+} \kappa(y,x)\,\phi(x)\,\mu(dy)\,\mu(dx)$$

    and

$$(F[\mu], \phi) = \int_{\mathbb{R}_+} \big(b(y,\cdot), \phi\big)\,a(y)\,\mu(dy) - \int_{\mathbb{R}_+} a(y)\,\phi(y)\,\mu(dy)$$

for any test function $\phi$. Here, $\kappa(x,y)$ is the rate at which individuals of size $x$ coalesce with individuals of size $y$, $a(y)$ is the global fragmentation rate of individuals of size $y$, and $b(y,\cdot)$ is a measure supported on $[0,y]$ such that $b(y,A)$ represents the probability that a particle of size $y$ fragments to a particle with size in the Borel set $A$.

Definition 3.1. Given $T > 0$, we say a function $\mu \in C([0,T], \mathcal{M}^+(\mathbb{R}_+))$ is a weak solution to (3.1) if for all $\phi \in (C^1 \cap W^{1,\infty})([0,T] \times \mathbb{R}_+)$ and for all $t \in [0,T]$, the following holds:

$$\begin{aligned} \int_0^\infty \phi(t,x)\,\mu_t(dx) - \int_0^\infty \phi(0,x)\,\mu_0(dx) ={}& \int_0^t\!\!\int_0^\infty \big[\partial_t\phi(s,x) + g(s,\mu_s)(x)\,\partial_x\phi(s,x) - d(s,\mu_s)(x)\,\phi(s,x)\big]\,\mu_s(dx)\,ds\\ &+ \int_0^t \big(K[\mu_s] + F[\mu_s],\,\phi(s,\cdot)\big)\,ds + \int_0^t\!\!\int_0^\infty \phi(s,0)\,\beta(s,\mu_s)(x)\,\mu_s(dx)\,ds. \end{aligned}\tag{3.2}$$

    For the numerical scheme, we will restrict ourselves to a finite domain, [0,xmax]. Thus, we impose the following assumptions on the growth, death and birth functions:

(A1) For any $R > 0$, there exists $L_R > 0$ such that for all $\|\mu_i\|_{TV} \le R$ and $t_i \in [0,\infty)$ ($i = 1,2$), the following holds for $f = g, d, \beta$:

$$\|f(t_1,\mu_1) - f(t_2,\mu_2)\|_{\infty} \le L_R\big(|t_1 - t_2| + \|\mu_1 - \mu_2\|_{BL}\big),$$

(A2) There exists $\zeta > 0$ such that for all $T > 0$,

$$\sup_{t \in [0,T]}\ \sup_{\mu \in \mathcal{M}^+(\mathbb{R}_+)} \|g(t,\mu)\|_{W^{1,\infty}} + \|d(t,\mu)\|_{W^{1,\infty}} + \|\beta(t,\mu)\|_{W^{1,\infty}} < \zeta,$$

(A3) For all $(t,\mu) \in [0,\infty) \times \mathcal{M}^+(\mathbb{R}_+)$,

$$g(t,\mu)(0) > 0 \quad\text{and}\quad g(t,\mu)(x_{\max}) = 0$$

for some large $x_{\max} > 0$.

We assume that the coagulation kernel $\kappa$ satisfies the following assumptions:

(K1) $\kappa$ is symmetric, nonnegative, bounded by a constant $C_\kappa$, and globally Lipschitz with Lipschitz constant $L_\kappa$.

(K2) $\kappa(x,y) = 0$ whenever $x + y > x_{\max}$.

    We assume that the fragmentation kernel satisfies the following assumptions:

(F1) $a \in W^{1,\infty}(\mathbb{R}_+)$ is non-negative,

(F2) for any $y \ge 0$, $b(y,dx)$ is a measure such that

(i) $b(y,dx)$ is non-negative and supported in $[0,y]$, and there exists a $C_b > 0$ such that $b(y,\mathbb{R}_+) \le C_b$ for all $y > 0$,

(ii) there exists $L_b$ such that for any $y, \bar{y} \ge 0$,

$$\|b(y,\cdot) - b(\bar{y},\cdot)\|_{BL} \le L_b\,|y - \bar{y}|,$$

(iii) for any $y \ge 0$,

$$\big(b(y,dx), x\big) = \int_0^y x\,b(y,dx) = y.$$

The existence and uniqueness of mass-conserving solutions of model (3.1) under these assumptions were established in [6].

We adopt the numerical discretization presented in [6]. For some fixed mesh sizes $\Delta x, \Delta t > 0$, we discretize the size domain $[0, x_{\max}]$ with the cells

$$\Lambda_j^{\Delta x} := \Big[\big(j - \tfrac12\big)\Delta x,\ \big(j + \tfrac12\big)\Delta x\Big), \quad \text{for } j = 1, \dots, J,$$

    and

$$\Lambda_0^{\Delta x} := \Big[0, \tfrac{\Delta x}{2}\Big).$$

We denote the midpoints of these cells by $x_j$. The initial condition $\mu_0 \in \mathcal{M}^+(\mathbb{R}_+)$ will be approximated by a combination of Dirac measures

$$\mu_0^{\Delta x} = \sum_{j=0}^{J} m_j^0\,\delta_{x_j}, \quad \text{where } m_j^0 := \mu_0\big(\Lambda_j^{\Delta x}\big).$$

    We first approximate the model coefficients κ, a, b as follows. For the physical ingredients, we define

$$a_i^{\Delta x} = \frac{1}{\Delta x}\int_{\Lambda_i^{\Delta x}} a(y)\,dy, \qquad \kappa_{i,j}^{\Delta x} = \frac{1}{\Delta x^2}\int_{\Lambda_i^{\Delta x} \times \Lambda_j^{\Delta x}} \kappa(x,y)\,dx\,dy$$

for $i, j \ge 1$, and

$$a_0^{\Delta x} = \frac{2}{\Delta x}\int_{\Lambda_0^{\Delta x}} a(y)\,dy, \qquad \kappa_{0,0}^{\Delta x} = \frac{4}{\Delta x^2}\int_{\Lambda_0^{\Delta x} \times \Lambda_0^{\Delta x}} \kappa(x,y)\,dx\,dy$$

(with the natural modifications for $\kappa_{0,j}^{\Delta x}$ and $\kappa_{i,0}^{\Delta x}$, $i \ge 1$). We then let $a^{\Delta x} \in W^{1,\infty}(\mathbb{R}_+)$ and $\kappa^{\Delta x} \in W^{1,\infty}(\mathbb{R}_+ \times \mathbb{R}_+)$ be the linear interpolations of the $a_i^{\Delta x}$ and $\kappa_{i,j}^{\Delta x}$, respectively. Finally, we define the measure $b^{\Delta x}(x_j, \cdot) \in \mathcal{M}^+(\Delta x\,\mathbb{N})$ by

$$b^{\Delta x}(x_j, \cdot) = \sum_{i \le j} b\big(x_j, \Lambda_i^{\Delta x}\big)\,\delta_{x_i} =: \sum_{i \le j} b_{j,i}^{\Delta x}\,\delta_{x_i}$$

and then $b^{\Delta x}(x, \cdot) \in \mathcal{M}^+(\Delta x\,\mathbb{N}_0)$ for $x \ge 0$ as the linear interpolant between the $b^{\Delta x}(x_j, \cdot)$. When the context is clear, we omit the $\Delta x$ from the notation above.

    We make use of these approximations to combine the high-resolution scheme presented in [13] with the fully explicit and semi-implicit schemes presented in [9]. Together, these schemes give us the numerical scheme

$$\begin{cases} m_j^{k+1} = m_j^k - \dfrac{\Delta t}{\Delta x}\big(f_{j+\frac12}^k - f_{j-\frac12}^k\big) - \Delta t\,d_j^k m_j^k + \Delta t\,\big(C_{j,k} + F_{j,k}\big), & j = 1, \dots, J,\\[2mm] g_0^k m_0^k = \Delta x \displaystyle\sum_{j=1}^{J} \beta_j^k m_j^k := \Delta x\left(\frac32\beta_1^k m_1^k + \frac12\beta_J^k m_J^k + \sum_{j=2}^{J-1}\beta_j^k m_j^k\right), \end{cases}\tag{4.1}$$

    where the flux term is given by

$$f_{j+\frac12}^k = \begin{cases} g_j^k m_j^k + \dfrac12\big(g_{j+1}^k - g_j^k\big)m_j^k + \dfrac12 g_j^k\,\mathrm{mm}\big(\Delta_+ m_j^k, \Delta_- m_j^k\big), & j = 2, 3, \dots, J-2,\\[1mm] g_j^k m_j^k, & j = 0, 1, J-1, J, \end{cases}\tag{4.2}$$

where $\Delta_+ m_j^k := m_{j+1}^k - m_j^k$ and $\Delta_- m_j^k := m_j^k - m_{j-1}^k$ denote the forward and backward differences,
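This flux translates directly into code (a sketch under our own indexing convention, with lists `g` and `m` holding $g^k_j$ and $m^k_j$ for $j = 0, \dots, J$):

```python
def minmod(a, b):
    # mm(a, b): 0 on sign disagreement, else the smaller-magnitude argument
    sgn = lambda v: (v > 0) - (v < 0)
    return (sgn(a) + sgn(b)) / 2 * min(abs(a), abs(b))

def limited_flux(g, m):
    """Numerical flux f_{j+1/2} of (4.2) for j = 0..J: first-order upwind
    flux near the boundaries, limited second-order flux in the interior."""
    J = len(m) - 1
    f = [0.0] * (J + 1)          # f[j] stores f_{j+1/2}
    for j in range(J + 1):
        if 2 <= j <= J - 2:
            dp = m[j + 1] - m[j]     # forward difference  Delta_+ m_j
            dm = m[j] - m[j - 1]     # backward difference Delta_- m_j
            f[j] = (g[j] * m[j] + 0.5 * (g[j + 1] - g[j]) * m[j]
                    + 0.5 * g[j] * minmod(dp, dm))
        else:
            f[j] = g[j] * m[j]
    return f
```

For constant $g$ and smooth (e.g., linear) data the limiter returns the common slope and the flux is second-order accurate; near extrema or jumps minmod returns 0 and the flux falls back to first-order upwinding.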

    the fragmentation term, Fj,k, is given by

$$F_{j,k} := \sum_{i=j}^{J} b_{i,j}\,a_i\,m_i^k - a_j m_j^k,\tag{4.3}$$

    and the coagulation term, Cj, is either given by an explicit discretization as

$$C_{j,k}^{\exp} := \frac12\sum_{i=1}^{j-1}\kappa_{i,j-i}\,m_i^k m_{j-i}^k - \sum_{i=1}^{J}\kappa_{i,j}\,m_i^k m_j^k,\tag{4.4}$$

    or by an implicit one as

$$C_{j,k}^{\mathrm{imp}} := \frac12\sum_{i=1}^{j-1}\kappa_{i,j-i}\,m_i^{k+1} m_{j-i}^k - \sum_{i=1}^{J}\kappa_{i,j}\,m_i^k m_j^{k+1}.\tag{4.5}$$
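The explicit coagulation sum (4.4) also translates directly (a sketch of our own; index 0 is unused padding so the code indices match the paper's 1-based ones):

```python
def coag_explicit(m, kappa):
    """C^exp_j for j = 1..J: gain from pairs (i, j-i) merging into size x_j,
    minus the loss of size-x_j particles merging with anything.
    m is a list of length J+1 (m[0] unused); kappa is a (J+1)x(J+1) table."""
    J = len(m) - 1
    C = [0.0] * (J + 1)
    for j in range(1, J + 1):
        gain = 0.5 * sum(kappa[i][j - i] * m[i] * m[j - i] for i in range(1, j))
        loss = sum(kappa[i][j] * m[i] * m[j] for i in range(1, J + 1))
        C[j] = gain - loss
    return C
```

The double sum makes each time step $O(J^2)$, which is consistent with the rapid growth of the run times reported in Tables 1–3 as the mesh is refined.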

As discussed in [9], the explicit scheme, which uses (4.4) to approximate the coagulation term, and the semi-implicit scheme, which instead uses (4.5), behave differently with respect to mass conservation and have different Courant–Friedrichs–Lewy (CFL) conditions. The assumed CFL conditions for the schemes are

$$\begin{aligned} &\text{Explicit:} && \Delta t\left(C_\kappa\|\mu_0\|_{TV}\exp\big((\zeta + C_b C_a)T\big) + C_a\max\{1, C_b\} + \Big(1 + \frac{3}{2\Delta x}\Big)\zeta\right) \le 1,\\ &\text{Semi-implicit:} && \bar\zeta\Big(2 + \frac{3}{2\Delta x}\Big)\Delta t \le 1, \end{aligned}\tag{4.6}$$

where $\bar\zeta = \max\{\zeta, \|a\|_{W^{1,\infty}}\}$ and $C_a = \|a\|_\infty$. The CFL conditions above are similar to those used in [9], but are adjusted due to the flux limiter term as in [13]. It is clear that the semi-implicit scheme has a less restrictive and simpler CFL condition than the explicit scheme. In particular, the CFL condition of the semi-implicit scheme is independent of the initial condition, unlike its counterpart. The trade-off for this is a loss of qualitative behavior of the scheme in the sense of mass conservation. Indeed, as shown in [9], when $\beta = d = g = a = 0$, the semi-implicit coagulation term does not conserve mass, whereas the explicit term does. As shown in the appendix, this loss is controlled by the time step size, $\Delta t$.

    It is useful to define the following coefficients:

$$A_j^k = \begin{cases} g_j^k, & j = 1, J,\\[1mm] \dfrac12\left(g_{j+1}^k + g_j^k + g_j^k\,\dfrac{\mathrm{mm}(\Delta_+ m_j^k, \Delta_- m_j^k)}{\Delta_- m_j^k}\right), & j = 2,\\[2mm] \dfrac12\left(g_{j+1}^k + g_j^k + g_j^k\,\dfrac{\mathrm{mm}(\Delta_+ m_j^k, \Delta_- m_j^k)}{\Delta_- m_j^k} - g_{j-1}^k\,\dfrac{\mathrm{mm}(\Delta_- m_j^k, \Delta_- m_{j-1}^k)}{\Delta_- m_j^k}\right), & j = 3, \dots, J-2,\\[2mm] \dfrac12\left(2g_j^k - g_{j-1}^k\,\dfrac{\mathrm{mm}(\Delta_- m_j^k, \Delta_- m_{j-1}^k)}{\Delta_- m_j^k}\right), & j = J-1, \end{cases}$$

    and

$$B_j^k = \begin{cases} \Delta_- g_j^k, & j = 1, J,\\[1mm] \dfrac12\,\Delta_+ g_j^k, & j = 2,\\[1mm] \dfrac12\big(\Delta_+ g_j^k + \Delta_- g_j^k\big), & j = 3, \dots, J-2,\\[1mm] \dfrac12\,\Delta_- g_j^k, & j = J-1. \end{cases}$$

Notice $|A_j^k| \le \frac32\zeta$, so that $\frac{\Delta t}{\Delta x}|A_j^k| \le \frac{3\Delta t}{2\Delta x}\zeta$, and $A_j^k - B_j^k \ge 0$, as

$$2\big(A_j^k - B_j^k\big) = \begin{cases} 2g_{j-1}^k, & j = 1, J,\\[1mm] g_j^k\left(2 + \dfrac{\mathrm{mm}(\Delta_+ m_j^k, \Delta_- m_j^k)}{\Delta_- m_j^k}\right), & j = 2,\\[2mm] g_j^k\left(1 + \dfrac{\mathrm{mm}(\Delta_+ m_j^k, \Delta_- m_j^k)}{\Delta_- m_j^k}\right) + g_{j-1}^k\left(1 - \dfrac{\mathrm{mm}(\Delta_- m_j^k, \Delta_- m_{j-1}^k)}{\Delta_- m_j^k}\right), & j = 3, \dots, J-2,\\[2mm] g_j^k + g_{j-1}^k\left(1 - \dfrac{\mathrm{mm}(\Delta_- m_j^k, \Delta_- m_{j-1}^k)}{\Delta_- m_j^k}\right), & j = J-1. \end{cases}$$

    Scheme (4.1) can then be rewritten as

$$\begin{cases} m_j^{k+1} = \left(1 - \dfrac{\Delta t}{\Delta x}A_j^k - \Delta t\,(d_j^k + a_j)\right)m_j^k + \dfrac{\Delta t}{\Delta x}\big(A_j^k - B_j^k\big)m_{j-1}^k + \Delta t\displaystyle\sum_{i=j}^{J} b_{i,j}\,a_i\,m_i^k + \Delta t\,C_{j,k},\\[2mm] g_0^k m_0^k = \Delta x\displaystyle\sum_{j=1}^{J}\beta_j^k m_j^k. \end{cases}\tag{4.7}$$

    Depending on the choice of coagulation term, this formulation leads to either

$$\begin{cases} m_j^{k+1} = \left(1 - \dfrac{\Delta t}{\Delta x}A_j^k - \Delta t\,(d_j^k + a_j) - \Delta t\displaystyle\sum_{i=1}^{J}\kappa_{i,j}m_i^k\right)m_j^k + \dfrac{\Delta t}{\Delta x}\big(A_j^k - B_j^k\big)m_{j-1}^k + \Delta t\displaystyle\sum_{i=j}^{J} b_{i,j}\,a_i\,m_i^k + \dfrac{\Delta t}{2}\displaystyle\sum_{i=1}^{j-1}\kappa_{i,j-i}\,m_i^k m_{j-i}^k,\\[2mm] g_0^k m_0^k = \Delta x\displaystyle\sum_{j=1}^{J}\beta_j^k m_j^k, \end{cases}\tag{4.8}$$

for the explicit term, $C_{j,k}^{\exp}$, or

$$\begin{cases} \left(1 + \Delta t\displaystyle\sum_{i=1}^{J}\kappa_{i,j}m_i^k\right)m_j^{k+1} = \left(1 - \dfrac{\Delta t}{\Delta x}A_j^k - \Delta t\,(d_j^k + a_j)\right)m_j^k + \dfrac{\Delta t}{\Delta x}\big(A_j^k - B_j^k\big)m_{j-1}^k + \Delta t\displaystyle\sum_{i=j}^{J} b_{i,j}\,a_i\,m_i^k + \dfrac{\Delta t}{2}\displaystyle\sum_{i=1}^{j-1}\kappa_{i,j-i}\,m_i^{k+1} m_{j-i}^k,\\[2mm] g_0^k m_0^k = \Delta x\displaystyle\sum_{j=1}^{J}\beta_j^k m_j^k, \end{cases}\tag{4.9}$$

for the implicit term, $C_{j,k}^{\mathrm{imp}}$.

For these schemes, we have the following lemmas, which are proven in the appendix:

Lemma 4.1. For each $k = 1, 2, \dots, \bar{k}$,

$$m_j^k \ge 0 \quad \text{for all } j = 1, 2, \dots, J,$$

$$\|\mu_{\Delta x}^k\|_{TV} \le \|\mu_0\|_{TV}\exp\big((\zeta + C_b C_a)T\big).$$

Lemma 4.2. For any $l, p = 1, 2, \dots, \bar{k}$,

$$\|\mu_{\Delta x}^l - \mu_{\Delta x}^p\|_{BL} \le L\,\Delta t\,|l - p|.$$

    Using the above two Lemmas, we can arrive at analogous results for the linear interpolation (4.10):

$$\mu_{\Delta x}^{\Delta t}(t) := \mu_{\Delta x}^0\,\chi_{\{0\}}(t) + \sum_{k=0}^{\bar{k}-1}\left[\left(1 - \frac{t - k\Delta t}{\Delta t}\right)\mu_{\Delta x}^k + \frac{t - k\Delta t}{\Delta t}\,\mu_{\Delta x}^{k+1}\right]\chi_{(k\Delta t, (k+1)\Delta t]}(t).\tag{4.10}$$

Thus, by the well-known Arzelà–Ascoli theorem, we have the existence of a convergent subsequence of the net $\{\mu_{\Delta x}^{\Delta t}(t)\}$ in $C([0,T], \mathcal{M}^+([0, x_{\max}]))$. We now need only show that any convergent subsequence converges to the unique solution of (3.2).

Theorem 4.1. As $\Delta x, \Delta t \to 0$, the sequence $\mu_{\Delta x}^{\Delta t}$ converges in $C([0,T], \mathcal{M}^+([0, x_{\max}]))$ to the solution of (3.1).

Proof. By multiplying (4.1) by a sufficiently smooth test function $\phi \in (W^{1,\infty} \cap C^2)([0,T] \times \mathbb{R})$, denoting $\phi_j^k := \phi(k\Delta t, x_j)$, summing over all $j$ and $k$, and rearranging, we arrive at

$$\sum_{k=0}^{\bar{k}-1}\sum_{j=1}^{J}\left(\big(m_j^{k+1} - m_j^k\big)\phi_j^k + \frac{\Delta t}{\Delta x}\big(f_{j+\frac12}^k - f_{j-\frac12}^k\big)\phi_j^k\right) + \Delta t\sum_{k=0}^{\bar{k}-1}\sum_{j=1}^{J} d_j^k m_j^k \phi_j^k = \Delta t\sum_{k=1}^{\bar{k}-1}\sum_{j=1}^{J}\phi_j^k\left(\frac12\sum_{i=1}^{j-1}\kappa_{i,j-i}m_i^k m_{j-i}^k - \sum_{i=1}^{J}\kappa_{i,j}m_i^k m_j^k + \sum_{i=j}^{J} b_{i,j}a_i m_i^k - a_j m_j^k\right).\tag{4.11}$$

    The left-hand side of equation (4.11) was shown in [13] to be equivalent to

$$\begin{aligned} &\int_0^{x_{\max}}\phi(T,x)\,d\mu_{\Delta x}^{\bar{k}}(x) - \int_0^{x_{\max}}\phi(0,x)\,d\mu_{\Delta x}^{0}(x) - \Delta t\sum_{k=0}^{\bar{k}-1}\Bigg(\int_0^{x_{\max}}\partial_t\phi(t_k,x)\,d\mu_{\Delta x}^k(x) + \int_0^{x_{\max}}\partial_x\phi(t_k,x)\,g(t_k,\mu_{\Delta x}^k)(x)\,d\mu_{\Delta x}^k(x)\\ &\qquad - \int_{\mathbb{R}_+} d(t_k,\mu_{\Delta x}^k)(x)\,\phi(t_k,x)\,d\mu_{\Delta x}^k(x) + \int_0^{x_{\max}}\phi(t_k,\Delta x)\,\beta(t_k,\mu_{\Delta x}^k)(x)\,d\mu_{\Delta x}^k(x)\Bigg) + o(1), \end{aligned}$$

where $o(1) \to 0$ as $\Delta t, \Delta x \to 0$.

    The right-hand side of (4.11) was shown in [9] to be equal to

$$\Delta t\sum_{k=1}^{\bar{k}-1}\Big\{\big(K[\mu_{\Delta x}^{\Delta t}(t_k)],\,\phi(t_k,\cdot)\big) + \big(F[\mu_{\Delta x}^{\Delta t}(t_k)],\,\phi(t_k,\cdot)\big)\Big\} + O(\Delta x).$$

Making use of these results, it is then easy to see that (4.11) is equivalent to

$$\begin{aligned} &\int_0^{x_{\max}}\phi(T,x)\,d\mu_{\Delta x}^{\Delta t}(T)(x) - \int_0^{x_{\max}}\phi(0,x)\,d\mu_{\Delta x}^0(x)\\ &\quad = \int_0^T\Bigg(\int_0^{x_{\max}}\big[\partial_t\phi(t,x) + \partial_x\phi(t,x)\,g\big(t,\mu_{\Delta x}^{\Delta t}(t)\big)(x)\big]\,d\mu_{\Delta x}^{\Delta t}(t)(x) - \int_0^{x_{\max}} d\big(t,\mu_{\Delta x}^{\Delta t}(t)\big)(x)\,\phi(t,x)\,d\mu_{\Delta x}^{\Delta t}(t)(x)\\ &\qquad + \int_0^{x_{\max}}\phi(t,\Delta x)\,\beta\big(t,\mu_{\Delta x}^{\Delta t}(t)\big)(x)\,d\mu_{\Delta x}^{\Delta t}(t)(x)\Bigg)\,dt + \int_0^T\big(K[\mu_{\Delta x}^{\Delta t}(t)],\,\phi(t,\cdot)\big) + \big(F[\mu_{\Delta x}^{\Delta t}(t)],\,\phi(t,\cdot)\big)\,dt + o(1). \end{aligned}$$

Passing to the limit as $\Delta t, \Delta x \to 0$ along a converging subsequence, we then obtain that Eq (3.2) holds for any $\phi \in (C^2 \cap W^{1,\infty})([0,T] \times \mathbb{R}_+)$ with compact support. A standard density argument shows that Eq (3.2) holds for any $\phi \in (C^1 \cap W^{1,\infty})([0,T] \times \mathbb{R}_+)$. As the weak solution is unique [6], we conclude that the net $\{\mu_{\Delta x}^{\Delta t}\}$ converges to the solution of model (3.1).

We point out that while these schemes are higher-order in space, they are only first-order in time. To lift these schemes to second order in time as well, we make use of the second-order Runge–Kutta time discretization [16] for the explicit scheme and second-order Richardson extrapolation [17] for the semi-implicit scheme.

In this section, we provide numerical simulations which test the order of the explicit and semi-implicit schemes developed in the previous sections. We test each component separately, beginning first with a pure coagulation equation in example 1 (where we set $g = \beta = d = a = 0$), then a pure fragmentation equation in example 2 (where we set $g = \beta = d = \kappa = 0$). In example 3, we consider all components of model (3.1), including the boundary term, which is implemented as in scheme (4.7). For readers interested in the schemes' performance in the absence of the coagulation-fragmentation processes, we direct the reader to [11,12,13]. For each example, we give the BL error and the order of convergence. To appreciate the gain in the order of convergence compared to those studied in [9], which are based on a first-order approximation of the transport term, we add some of the numerical results from the scheme presented in [9].

    In some of the following examples, the exact solution of the model problem is given. In these cases, we approximate the order of accuracy, q, with the standard calculation:

$$q = \log_2\left(\frac{\rho\big(\mu_{\Delta x}^{\Delta t}(T),\,\mu(T)\big)}{\rho\big(\mu_{0.5\Delta x}^{0.5\Delta t}(T),\,\mu(T)\big)}\right),$$

    where μ represents the exact solution of the examples considered. In the cases where the exact solutions are unknown, we approximate the order by

$$q = \log_2\left(\frac{\rho\big(\mu_{\Delta x}^{\Delta t}(T),\,\mu_{2\Delta x}^{2\Delta t}(T)\big)}{\rho\big(\mu_{0.5\Delta x}^{0.5\Delta t}(T),\,\mu_{\Delta x}^{\Delta t}(T)\big)}\right),$$

    and we report the numerator of the log argument as the error. The metric ρ we use here was introduced in [18] and is equivalent to the BL metric, namely

$$C\,\rho(\mu,\nu) \le \|\mu - \nu\|_{BL} \le \rho(\mu,\nu)$$

    for some constant C (dependent on the finite domain). As discussed in [18], this metric is more efficient to compute than the BL norm and maintains the same order of convergence. An alternative to this algorithm would be to make use of the algorithms presented in [19], where convergence in the Fortet-Mourier distance is considered.
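The order computation itself is a one-liner; as an illustration (our own helper name, applied to two BL errors of the explicit scheme reported in Table 1 below):

```python
import math

def observed_order(err_coarse, err_fine):
    """Observed order q = log2(e(h) / e(h/2)) when both mesh sizes are halved."""
    return math.log2(err_coarse / err_fine)

# explicit-scheme BL errors from Table 1 at (Nx, Nt) = (200, 500) and (400, 1000):
q = observed_order(0.00054068, 0.00013802)
```

The value is close to 2, consistent with second-order convergence of the scheme on this smooth example.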

Example 1 In this example, we test the quality of the finite difference schemes against coagulation equations. To this end, we take $\kappa(x,y) \equiv 1$ and $\mu_0 = e^{-x}\,dx$, with all other ingredients set to 0. This example has an exact solution given by

$$\mu_t = \left(\frac{2}{2+t}\right)^2 \exp\left(-\frac{2}{2+t}\,x\right)dx;$$

see [20] for more details. The simulation is performed over the truncated domain $(x,t) \in [0,20] \times [0,0.5]$. We present the BL error and the numerical order of convergence for both schemes in Table 1.
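A quick numerical sanity check of our own (not from [20]): the first moment (total mass) of this exact solution stays equal to 1 for every t, as coagulation only moves mass between sizes.

```python
import math

def first_moment(t, xmax=60.0, n=100_000):
    """Trapezoid approximation of int_0^xmax x * mu_t(dx) for the exact
    solution mu_t = (2/(2+t))^2 exp(-2x/(2+t)) dx of example 1.
    xmax and n are quadrature choices; the tail beyond xmax is negligible."""
    c = 2.0 / (2.0 + t)
    h = xmax / n
    total = 0.0
    for i in range(n + 1):
        x = i * h
        w = 0.5 if i in (0, n) else 1.0     # trapezoid end weights
        total += w * x * c * c * math.exp(-c * x)
    return total * h
```

Any drift in the discrete first moment of a scheme therefore comes from the scheme, not from the solution being approximated.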

Table 1.  Error, order, and computation time for example 1. Here, Nx and Nt represent the number of points in x and t, respectively. The numerical result in the last row for the 1st-order variant is generated from the scheme presented in [9].

| Nx | Nt | BL error (explicit) | Order (explicit) | Time, s (explicit) | BL error (semi-implicit) | Order (semi-implicit) | Time, s (semi-implicit) |
|---|---|---|---|---|---|---|---|
| 100 | 250 | 0.0020733 | – | 1.0374 | 0.0020886 | – | 0.79633 |
| 200 | 500 | 0.00054068 | 1.9391 | 6.8224 | 0.00054408 | 1.9407 | 5.1724 |
| 400 | 1000 | 0.00013802 | 1.9699 | 98.525 | 0.00013883 | 1.9705 | 73.298 |
| 800 | 2000 | 3.4842e-05 | 1.9860 | 2430.2 | 3.5040e-05 | 1.9862 | 1792.5 |
| 1600 | 4000 | 8.7417e-06 | 1.9948 | 43381 | 8.7906e-06 | 1.9950 | 32361 |
| 800 (1st order) | 2000 | 0.015675 | 0.96974 | 523.11 | 0.010996 | 0.97418 | 1393.3 |

Example 2 In this example, we test the quality of the finite difference scheme against fragmentation equations. We point out that in this case, the two schemes are identical in the spatial component. For this demonstration, we take $\mu_0 = e^{-x}\,dx$, $b(y,dx) = \frac{2}{y}\,dx$, and $a(x) = x$. As given in [21], this problem has an exact solution of

$$\mu_t = (1+t)^2\exp\big(-x(1+t)\big)\,dx.$$
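As a sanity check of our own (not from [21]), one can verify numerically that this density satisfies the pure fragmentation equation, i.e., that the time derivative of n(x, t) = (1+t)^2 e^{-x(1+t)} matches the fragmentation gain minus loss with a(y) = y and b(y, dx) = (2/y) dx:

```python
import math

def n(x, t):
    # density of the claimed exact solution
    return (1 + t) ** 2 * math.exp(-x * (1 + t))

def frag_rhs(x, t, ymax=200.0, steps=200_000):
    """F[mu] density at x: gain 2 * int_x^ymax n(y, t) dy (since the fragment
    distribution (2/y) dx times the rate a(y) = y cancels the 1/y), minus the
    loss a(x) n(x) = x n(x).  Trapezoid quadrature; ymax truncates the tail."""
    h = (ymax - x) / steps
    total = 0.5 * (n(x, t) + n(ymax, t))
    for i in range(1, steps):
        total += n(x + i * h, t)
    return 2 * h * total - x * n(x, t)

def dt_n(x, t, eps=1e-5):
    # central finite difference in time
    return (n(x, t + eps) - n(x, t - eps)) / (2 * eps)
```

At sample points the residual is at the level of the quadrature error, confirming the exact solution used to measure the errors in Table 2.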

The simulation is performed over the finite domain $(x,t) \in [0,20] \times [0,0.5]$. We present the BL error and the numerical order of convergence for both schemes in Table 2. Note that, as compared to coagulation, the fragmentation process is more affected by the truncation of the domain. This results in the numerical order of the scheme being further from 2 than in example 1.

Table 2.  Error, order, and computation time for example 2. Here, Nx and Nt represent the number of points in x and t, respectively. The numerical result in the last row for the 1st-order variant is generated from the scheme presented in [9].

| Nx | Nt | BL error (explicit) | Order (explicit) | Time, s (explicit) | BL error (semi-implicit) | Order (semi-implicit) | Time, s (semi-implicit) |
|---|---|---|---|---|---|---|---|
| 100 | 250 | 0.0053857 | – | 1.0148 | 0.0053836 | – | 0.78499 |
| 200 | 500 | 0.0014548 | 1.8883 | 6.7398 | 0.0014536 | 1.8890 | 5.1448 |
| 400 | 1000 | 0.00037786 | 1.9449 | 99.38 | 0.00037753 | 1.9449 | 73.587 |
| 800 | 2000 | 9.6317e-05 | 1.9720 | 2369.4 | 9.6322e-05 | 1.9707 | 1763.3 |
| 1600 | 4000 | 2.4468e-05 | 1.9769 | 43512 | 2.4514e-05 | 1.9743 | 32585 |
| 800 (1st order) | 2000 | 0.059804 | 0.9128 | 574.91 | 0.096943 | 0.86667 | 1368.9 |

Example 3 In this example, we test the schemes against the complete model (i.e., with all biological and physical processes). To this end, we take $\mu_0 = e^{-x}\,dx$, $g(x) = 2 - 2e^{x-20}$, $\beta(x) = 2$, $d(x) = 1$, $\kappa(x,y) = 1$, $a(x) = x$, and $b(y,dx) = \frac{2}{y}\,dx$. The simulation is performed over the finite domain $(x,t) \in [0,20] \times [0,0.5]$. To our knowledge, the exact solution of this problem is unknown.

Table 3.  Error, order, and computation time for example 3. Here, Nx and Nt represent the number of points in x and t, respectively. The numerical result in the last row for the 1st-order variant is generated from the scheme presented in [9].

| Nx | Nt | BL error (explicit) | Order (explicit) | Time, s (explicit) | BL error (semi-implicit) | Order (semi-implicit) | Time, s (semi-implicit) |
|---|---|---|---|---|---|---|---|
| 100 | 250 | 0.0023026 | – | 1.0332 | 0.0028799 | – | 0.74398 |
| 200 | 500 | 0.00085562 | 1.4282 | 6.8831 | 0.00076654 | 1.9096 | 5.5104 |
| 400 | 1000 | 0.0002743 | 1.6412 | 100.57 | 0.00076654 | 1.9549 | 75.055 |
| 800 | 2000 | 7.5404e-05 | 1.8631 | 2371.2 | 5.021e-05 | 1.9775 | 1739.1 |
| 1600 | 4000 | 1.9495e-05 | 1.9515 | 43779 | 1.2651e-05 | 1.9887 | 32286 |
| 800 (1st order) | 2000 | 0.0092432 | 0.97728 | 625.3 | 0.0014192 | 0.98355 | 1112.8 |

    Example 4 As mentioned in [9], the mixed discrete and continuous fragmentation model studied in [7,8], with adjusted assumptions, is a special case of model (3.1). Indeed, by removing the biological and coagulation terms and letting the kernel

$$\big(b(y,\cdot), \phi\big) = \sum_{i=1}^{N} b_i(y)\,\phi(ih) + \int_{Nh}^{y} \phi(x)\,b_c(y,x)\,dx$$

with $\mathrm{supp}\,b_c(y,\cdot) \subset [Nh, y]$ for some $h > 0$, we have the mixed model in question. We wish to demonstrate that the finite difference scheme presented here maintains this mixed structure.

    To this end, we take the fragmentation kernel

$$b_c(y,x) = \frac{2}{y}, \qquad b_i(y) = \frac{2}{y}, \qquad \text{and } a(x) = x - 1,$$

with initial condition $\mu_0 = \sum_{i=1}^{5}\delta_i + \chi_{[5,15]}(x)\,dx$, where $\chi_A$ represents the characteristic function of the set $A$. This is similar to some examples in [8], where more detail and analysis are provided. In Figure 1, we present the simulation of this example. Notice, the mixed structure is preserved in finite time. For examples of this type, the scheme could be improved upon by the inclusion of mass conservative fragmentation terms similar to those presented in [6].

    Figure 1.  Initial condition and numerical solution at time T=4 of example 4.

In this paper, we have lifted two of the first-order finite difference schemes presented in [9] to second-order high resolution schemes using flux limiter methods. The difference between the two schemes lies only in the coagulation term, where the semi-implicit scheme is made linear. In the context of standard structured population models (i.e., without coagulation or fragmentation), these types of schemes have been shown to be well-behaved in the presence of discontinuities and singularities. This quality makes them a well-suited tool for studying PDEs in spaces of measures. We prove the convergence of both schemes under the assumption of natural CFL conditions. The order of convergence of both schemes is then tested numerically with previously used examples.

In summary, the schemes perform as expected in the presence of smooth initial conditions. In all such simulations, the numerical schemes presented demonstrate a convergence rate of order 2. For simulations with biological terms, this convergence rate is expected to drop when singularities and discontinuities occur, as demonstrated in [13]. Mass conservation of the schemes, an important property for coagulation-fragmentation processes, is discussed in detail in [6,9].

    The research of ASA is supported in part by funds from R.P. Authement Eminent Scholar and Endowed Chair in Computational Mathematics at the University of Louisiana at Lafayette. RL is grateful for the support of the Carl Tryggers Stiftelse via the grant CTS 21:1656.

    In this section, we present the proofs of Lemmas 4.1 and 4.2 for the explicit coagulation term. The semi-implicit term follows from similar arguments in the same fashion as [9].

    Proof of Lemma 4.1

Proof. We first prove via induction that for any $k = 1, 2, \dots, \bar{k}$, $\mu_{\Delta x}^k$ satisfies the following:

(i) $\mu_{\Delta x}^k \in \mathcal{M}^+(\mathbb{R}_+)$, i.e., $m_j^k \ge 0$ for all $j = 1, \dots, J$,

(ii) $\|\mu_{\Delta x}^k\|_{TV} \le \|\mu_{\Delta x}^0\|_{TV}\big(1 + (\zeta + C_b C_a)\Delta t\big)^k$.

Then, the TV bound in the lemma follows from standard arguments (see, e.g., Lemma 4.1 in [9]). We prove this result for the choice of the explicit coagulation term, $C_{j,k}^{\exp}$, as the implicit case is similar and more straightforward.

We begin by showing that $m_j^{k+1} \ge 0$ for every $j = 1, 2, \dots, J$. Notice that by way of (4.8), this reduces to showing

$$\frac{\Delta t}{\Delta x}A_j^k + \Delta t\,(d_j^k + a_j) + \Delta t\sum_{i=1}^{J}\kappa_{i,j}m_i^k \le 1.$$

Indeed, by the CFL condition (4.6), the induction hypothesis, and

$$\sum_{i=1}^{J}\kappa_{i,j}m_i^k \le C_\kappa\sum_{i=1}^{J}m_i^k = C_\kappa\|\mu_{\Delta x}^k\|_{TV} \le C_\kappa\|\mu_{\Delta x}^0\|_{TV}\exp\big((\zeta + C_b C_a)T\big),$$

    we arrive at the result.

For the TV bound, since the $m_j^k$ are non-negative, we have $\|\mu_{\Delta x}^k\|_{TV} = \sum_{j=1}^{J}m_j^k$. By rearranging (4.8) and summing over $j = 1, 2, \dots, J$, we have

$$\|\mu_{\Delta x}^{k+1}\|_{TV} \le \sum_{j=1}^{J}m_j^k + \frac{\Delta t}{\Delta x}\sum_{j=1}^{J}\big(f_{j-\frac12}^k - f_{j+\frac12}^k\big) + \Delta t\sum_{j=1}^{J}\sum_{i=j}^{J}b_{i,j}a_i m_i^k + \Delta t\left(\frac12\sum_{j=1}^{J}\sum_{i=1}^{j-1}\kappa_{i,j-i}m_i^k m_{j-i}^k - \sum_{j=1}^{J}\sum_{i=1}^{J}\kappa_{i,j}m_i^k m_j^k\right).\tag{7.1}$$

    To bound the right-hand side of equation (7.1), we directly follow the arguments of Lemma 4.1 in [9] which yields

$$\|\mu_{\Delta x}^{k+1}\|_{TV} \le \big(1 + (\zeta + C_a C_b)\Delta t\big)\sum_{j=1}^{J}m_j^k = \big(1 + (\zeta + C_a C_b)\Delta t\big)\|\mu_{\Delta x}^k\|_{TV}.$$

Using the induction hypothesis, we obtain $\|\mu_{\Delta x}^{k+1}\|_{TV} \le \|\mu_{\Delta x}^0\|_{TV}\big(1 + (\zeta + C_b C_a)\Delta t\big)^{k+1}$, as desired.

    Proof of Lemma 4.2

Proof. For $\phi \in W^{1,\infty}(\mathbb{R}_+)$ with $\|\phi\|_{W^{1,\infty}} \le 1$, and denoting $\phi_j := \phi(x_j)$, we have for any $k$,

$$\big(\mu_{\Delta x}^{k+1} - \mu_{\Delta x}^k, \phi\big) = \sum_{j=1}^{J}\big(m_j^{k+1} - m_j^k\big)\phi_j = \Delta t\sum_{j=1}^{J}\phi_j\left(\frac{1}{\Delta x}\big(f_{j-\frac12}^k - f_{j+\frac12}^k\big) - d_j^k m_j^k - a_j m_j^k + \frac12\sum_{i=1}^{j-1}\kappa_{i,j-i}m_i^k m_{j-i}^k - \sum_{i=1}^{J}\kappa_{i,j}m_i^k m_j^k + \sum_{i=j}^{J}b_{i,j}a_i m_i^k\right).$$

Let $C$ be the right-hand side of the TV bound from Lemma 4.1; we then see

$$\big(\mu_{\Delta x}^{k+1} - \mu_{\Delta x}^k, \phi\big) \le \frac{\Delta t}{\Delta x}\sum_{j=1}^{J}\phi_j\big(f_{j-\frac12}^k - f_{j+\frac12}^k\big) + \Delta t\left(\zeta + C_a + C_b C_a + \frac32 C_\kappa C\right)C.$$

Moreover, since $g_J^k = 0$, the sum on the right-hand side takes the form

$$\phi_1 g_0^k m_0^k + \sum_{j=1}^{J-1}\big(\phi_{j+1} - \phi_j\big)f_{j+\frac12}^k = \Delta x\,\phi_1\sum_{j=1}^{J}\beta_j^k m_j^k + \sum_{j=1}^{J-1}\big(\phi_{j+1} - \phi_j\big)f_{j+\frac12}^k \le 3.5\,\Delta x\,\zeta\,C.$$

    We thus obtain

$$\big(\mu_{\Delta x}^{k+1} - \mu_{\Delta x}^k, \phi\big) \le L\,\Delta t, \qquad L := \left(3.5\,\zeta + C_a + C_b C_a + \frac32 C_\kappa C\right)C.$$

Taking the supremum over $\phi$ gives $\|\mu_{\Delta x}^{k+1} - \mu_{\Delta x}^k\|_{BL} \le L\,\Delta t$ for any $k$. The result follows.

In this section, we consider the semi-implicit scheme (4.9) without any biological ingredients or fragmentation (i.e., $a, g, d, \beta = 0$), where mass is not conserved (as observed in the numerical experiments in Section 5), and show that this change is controlled by the time step. For a bound on the loss of mass via fragmentation, we direct the reader to Section 6.1 of [6]. Multiplying (4.9) by $x_j$ and summing over $j$, we arrive at

$$\sum_{j=1}^{J}x_j m_j^{k+1} = \sum_{j=1}^{J}x_j m_j^k + \frac{\Delta t}{2}\sum_{j=1}^{J}\sum_{i=1}^{j-1}x_j\kappa_{i,j-i}\,m_i^{k+1}m_{j-i}^k - \Delta t\sum_{j=1}^{J}\sum_{i=1}^{J}x_j\kappa_{i,j}\,m_i^k m_j^{k+1}.$$

    In [6, Section 6.1] it was shown that the explicit scheme conserves mass through the coagulation process, i.e.,

$$\frac{\Delta t}{2}\sum_{j=1}^{J}\sum_{i=1}^{j-1}x_j\kappa_{i,j-i}\,m_i^k m_{j-i}^k - \Delta t\sum_{j=1}^{J}\sum_{i=1}^{J}x_j\kappa_{i,j}\,m_i^k m_j^k = 0.$$
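This identity is easy to confirm numerically (a sketch of our own, with an arbitrary symmetric kernel satisfying the cutoff (K2), i.e., kappa_{i,j} = 0 when i + j > J, on the uniform grid x_j = j*dx):

```python
import random

random.seed(1)
J, dx = 40, 0.5
m = [0.0] + [random.random() for _ in range(J)]   # masses m_j, j = 1..J
x = [j * dx for j in range(J + 1)]                # midpoints x_j = j*dx
# symmetric kernel with the (K2) cutoff: kappa_{i,j} = 0 when i + j > J
kappa = [[1.0 if 0 < i and 0 < j and i + j <= J else 0.0
          for j in range(J + 1)] for i in range(J + 1)]

# gain: mass created at size x_j by pairs (i, j - i) merging
gain = 0.5 * sum(x[j] * kappa[i][j - i] * m[i] * m[j - i]
                 for j in range(2, J + 1) for i in range(1, j))
# loss: mass removed from size x_j by merging with any partner
loss = sum(x[j] * kappa[i][j] * m[i] * m[j]
           for j in range(1, J + 1) for i in range(1, J + 1))
```

The two sums agree to round-off: the explicit update moves mass between sizes without creating or destroying it, because x_{i+l} = x_i + x_l on a uniform mesh.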

    Adding this to the previous equation we have

$$\begin{aligned} \sum_{j=1}^{J}x_j m_j^{k+1} &= \sum_{j=1}^{J}x_j m_j^k + \frac{\Delta t}{2}\sum_{j=1}^{J}\sum_{i=1}^{j-1}x_j\kappa_{i,j-i}\big(m_i^{k+1} - m_i^k\big)m_{j-i}^k - \Delta t\sum_{j=1}^{J}\sum_{i=1}^{J}x_j\kappa_{i,j}\,m_i^k\big(m_j^{k+1} - m_j^k\big)\\ &= \sum_{j=1}^{J}x_j m_j^k + \frac{\Delta t}{2}\sum_{i=1}^{J}\sum_{l=1}^{J}x_{l+i}\,\kappa_{i,l}\big(m_i^{k+1} - m_i^k\big)m_l^k - \Delta t\sum_{j=1}^{J}\sum_{i=1}^{J}x_j\kappa_{i,j}\,m_i^k\big(m_j^{k+1} - m_j^k\big), \end{aligned}$$

where in the last equality, we change the order of summation and introduce the new index $l = j - i$. Noticing that, due to the uniform mesh size, $x_{l+i} = x_l + x_i$, we can split the second term on the right-hand side and obtain the equation

$$\sum_{j=1}^{J}x_j m_j^{k+1} = \sum_{j=1}^{J}x_j m_j^k + \frac{\Delta t}{2}\sum_{i=1}^{J}\sum_{l=1}^{J}x_i\,\kappa_{i,l}\Big(m_l^k\big(m_i^{k+1} - m_i^k\big) - m_i^k\big(m_l^{k+1} - m_l^k\big)\Big).$$

Since $x_j \le x_{\max}$, we can bound the last term on the right-hand side:

$$\left|\frac{\Delta t}{2}\sum_{i=1}^{J}\sum_{l=1}^{J}x_i\,\kappa_{i,l}\Big(m_l^k\big(m_i^{k+1} - m_i^k\big) - m_i^k\big(m_l^{k+1} - m_l^k\big)\Big)\right| \le \Delta t\,C_\kappa\,x_{\max}\,\|\mu_{\Delta x}^k\|_{TV}\,\|\mu_{\Delta x}^{k+1} - \mu_{\Delta x}^k\|_{BL}.$$

    Using Lemmas 4.1 and 4.2, we have the estimate

$$\left|\frac{\sum_{j=1}^{J}x_j m_j^{k+1} - \sum_{j=1}^{J}x_j m_j^k}{\Delta t}\right| \le C_\kappa\,x_{\max}\,L\,\exp\big((\zeta + C_a C_b)T\big)\,\|\mu_0\|_{TV}\,\Delta t.$$


    [11] K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising, IEEE Trans. Image Process., 26 (2017), 3142–3155. https://doi.org/10.1109/TIP.2017.2662206 doi: 10.1109/TIP.2017.2662206
    [12] C. Tian, Y. Xu, L. Fei, J. Wang, J. Wen, N. Luo, Enhanced CNN for image denoising, CAAI Trans. Intell. Technol., 4 (2019), 17–23. https://doi.org/10.1049/trit.2018.1054 doi: 10.1049/trit.2018.1054
    [13] K. Zhang, W. Zuo, L. Zhang, FFDNet: Toward a fast and flexible solution for CNN-based image denoising, IEEE Trans. Image Process., 27 (2018), 4608–4622. https://doi.org/10.1109/TIP.2018.2839891 doi: 10.1109/TIP.2018.2839891
    [14] W. Jifara, F. Jiang, S. Rho, M. Cheng, S. Liu, Medical image denoising using convolutional neural network: A residual learning approach, J. Supercomput., 75 (2019), 704–718. https://doi.org/10.1007/s11227-017-2080-0 doi: 10.1007/s11227-017-2080-0
    [15] C. Tian, Y. Xu, W. Zuo, Image denoising using deep CNN with batch renormalization, Neural Networks, 121 (2020), 461–473. https://doi.org/10.1016/j.neunet.2019.08.022 doi: 10.1016/j.neunet.2019.08.022
    [16] M. S. Hema, Sowjanya, N. Sharma, G. Abhishek, G. Shivani, P. P. Kumar, Identification and classification of brain tumor using convolutional neural network with autoencoder feature selection, in International Conference on Emerging Technologies in Computer Engineering, 1591 (2022), 251–258. https://doi.org/10.1007/978-3-031-07012-9_22
    [17] C. Tian, M. Zheng, W. Zuo, B. Zhang, Y. Zhang, D. Zhang, Multi-stage image denoising with the wavelet transform, Pattern Recognit., 134 (2023), 109050. https://doi.org/10.1016/j.patcog.2022.109050 doi: 10.1016/j.patcog.2022.109050
    [18] T. Li, Z. Zhang, M. Zhu, Z. Cui, D. Wei, Combining transformer global and local feature extraction for object detection, Complex Intell. Syst., 10 (2024), 4897–4920. https://doi.org/10.1007/s40747-024-01409-z doi: 10.1007/s40747-024-01409-z
    [19] Q. Wang, B. Wu, P. Zhu, P. Li, W. Zuo, Q. Hu, ECA-Net: Efficient channel attention for deep convolutional neural networks, in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (2020), 11531–11539. https://doi.org/10.1109/CVPR42600.2020.01155
    [20] D. Hong, C. Huang, C. Yang, J. Li, Y. Qian, C. Cai, FFA-DMRI: A network based on feature fusion and attention mechanism for brain MRI denoising, Front. Neurosci., 14 (2020), 577937. https://doi.org/10.3389/fnins.2020.577937 doi: 10.3389/fnins.2020.577937
    [21] Y. Chen, R. Xia, K. Yang, K. Zou, MFMAM: Image inpainting via multi-scale feature module with attention module, Comput. Vision Image Understanding, 238 (2024), 103883. https://doi.org/10.1016/j.cviu.2023.103883 doi: 10.1016/j.cviu.2023.103883
    [22] J. Deng, C. Hu, Recovering a clean background: A new progressive multi-scale CNN for image denoising, Signal Image Video Process., 18 (2024), 4541–4552. https://doi.org/10.1007/s11760-024-03093-5 doi: 10.1007/s11760-024-03093-5
    [23] M. Duong, B. N. Thi, S. Lee, M. Hong, Multi-branch network for color image denoising using dilated convolution and attention mechanisms, Sensors, 24 (2024), 3608. https://doi.org/10.3390/s24113608 doi: 10.3390/s24113608
    [24] Y. Zhang, Y. Tian, Y. Kong, B. Zhong, Y. Fu, Residual dense network for image restoration, IEEE Trans. Pattern Anal. Mach. Intell., 43 (2021), 2480–2495. https://doi.org/10.1109/TPAMI.2020.2968521 doi: 10.1109/TPAMI.2020.2968521
    [25] F. Gao, Y. Wang, Z. Yang, Y. Ma, Q. Zhang, Single image super-resolution based on multi-scale dense attention network, Soft Comput., 27 (2023), 2981–2992. https://doi.org/10.1007/s00500-022-07456-3 doi: 10.1007/s00500-022-07456-3
    [26] Y. Li, T. Xie, D. Mei, Application of convolutional neural networks for parallel multi-scale feature extraction in noise image denoising, IEEE Access, 12 (2024), 98599–98610. https://doi.org/10.1109/ACCESS.2024.3427143 doi: 10.1109/ACCESS.2024.3427143
    [27] Y. Zhang, C. Wang, X. Lv, Y. Song, Attention-driven residual-dense network for no-reference image quality assessment, Signal Image Video Process., 18 (2024), 537–551. https://doi.org/10.1007/s11760-024-03172-7 doi: 10.1007/s11760-024-03172-7
    [28] Z. Hui, X. Wang, X. Gao, Fast and accurate single image super-resolution via information distillation network, in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, (2018), 723–731. https://doi.org/10.1109/CVPR.2018.00082
    [29] J. Liu, J. Tang, G. Wu, Residual feature distillation network for lightweight image super-resolution, in European Conference on Computer Vision, 12537 (2020), 41–55. https://doi.org/10.1007/978-3-030-67070-2_2
    [30] Y. Zhang, Y. Liu, Q. Li, J. Wang, M. Qi, H. Sun, et al., A lightweight fusion distillation network for image deblurring and deraining, Sensors, 21 (2021), 5312. https://doi.org/10.3390/s21165312 doi: 10.3390/s21165312
    [31] Z. Zong, L. Zha, J. Jiang, X. Liu, Asymmetric information distillation network for lightweight super resolution, in 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), (2022), 1248–1257. https://doi.org/10.1109/CVPRW56347.2022.00131
    [32] Z. Yu, K. Xie, C. Wen, J. He, W. Zhang, A lightweight image super-resolution reconstruction algorithm based on the residual feature distillation mechanism, Sensors, 24 (2024), 1049. https://doi.org/10.3390/s24041049 doi: 10.3390/s24041049
    [33] M. Jiang, C. You, M. Wang, H. Zhang, Z. Gao, D. Wu, et al., Controllable deep learning denoising model for ultrasound images using synthetic noisy image, in Computer Graphics International Conference, 14495 (2024), 297–308. https://doi.org/10.1007/978-3-031-50069-5_25
    [34] W. Wu, S. Liu, Y. Xia, Y. Zhang, Dual residual attention network for image denoising, Pattern Recognit., 149 (2024), 110291. https://doi.org/10.1016/j.patcog.2024.110291 doi: 10.1016/j.patcog.2024.110291
    [35] W. Wu, G. Lv, S. Liao, Y. Zhang, FEUNet: A flexible and effective U-shaped network for image denoising, Signal Image Video Process., 17 (2023), 2545–2553. https://doi.org/10.1007/s11760-022-02471-1 doi: 10.1007/s11760-022-02471-1
    [36] R. K. Thakur, S. K. Maji, Multi scale pixel attention and feature extraction based neural network for image denoising, Pattern Recognit., 141 (2023), 109603. https://doi.org/10.1016/j.patcog.2023.109603 doi: 10.1016/j.patcog.2023.109603
    [37] C. Tian, Y. Xu, W. Zuo, B. Du, C. Lin, D. Zhang, Designing and training of a dual CNN for image denoising, Knowl. Based Syst., 226 (2021), 106949. https://doi.org/10.1016/j.knosys.2021.106949 doi: 10.1016/j.knosys.2021.106949
    [38] J. Yang, H. Xie, N. Xue, A. Zhang, Research on underwater image denoising based on dual-channel residual network, Comput. Eng., 49 (2023), 188–198. https://doi.org/10.19678/j.issn.1000-3428.0064662 doi: 10.19678/j.issn.1000-3428.0064662
    [39] S. Ghaderi, S. Mohammadi, K. Ghaderi, F. Kiasat, M. Mohammadi, Marker-controlled watershed algorithm and fuzzy C-means clustering machine learning: automated segmentation of glioblastoma from MRI images in a case series, Annal. Med. Surg., 86 (2024), 1460–1475. https://doi.org/10.1097/MS9.0000000000001756 doi: 10.1097/MS9.0000000000001756
    [40] V. Sivakumar, N. Janakiraman, A novel method for segmenting brain tumor using modified watershed algorithm in MRI image with FPGA, Biosystems, 198 (2020), 104226. https://doi.org/10.1016/j.biosystems.2020.104226 doi: 10.1016/j.biosystems.2020.104226
    [41] Y. Liang, J. Fu, Watershed algorithm for medical image segmentation based on morphology and total variation model, Int. J. Pattern Recognit Artif Intell., 33 (2019), 1954019. https://doi.org/10.1142/S0218001419540193 doi: 10.1142/S0218001419540193
    [42] Y. Wu, Q. Li, The algorithm of watershed color image segmentation based on morphological gradient, Sensors, 22 (2022), 8202. https://doi.org/10.3390/s22218202 doi: 10.3390/s22218202
    [43] T. Yesmin, H. Lohiya, P. P. Acharjya, Detection and segmentation of brain tumor by using modified watershed algorithm and thresholding to reduce over-segmentation, in 2023 IEEE International Conference on Contemporary Computing and Communications (InC4), (2023), 1–6. https://doi.org/10.1109/InC457730.2023.10262891
    [44] X. Xu, S. Xu, L. Jin, E. Song, Characteristic analysis of Otsu threshold and its applications, Pattern Recognit. Lett., 32 (2011), 956–961. https://doi.org/10.1016/j.patrec.2011.01.021 doi: 10.1016/j.patrec.2011.01.021
    [45] C. Kumari, A. Mustafi, A novel radial kernel watershed basis segmentation algorithm for color image segmentation, Wireless Pers. Commun., 133 (2023), 2105–2124. https://doi.org/10.1007/s11277-023-10831-4 doi: 10.1007/s11277-023-10831-4
    [46] C. Zhang, J. Fang, Edge detection based on improved sobel operator, in Proceedings of the 2016 International Conference on Computer Engineering and Information Systems, (2016), 129–132. https://doi.org/10.2991/ceis-16.2016.25
    [47] B. Chen, W. Wu, Z. Li, T. Han, Z. Chen, W. Zhang, Attention-guided cross-modal multiple feature aggregation network for RGB-D salient object detection, Electron. Res. Arch., 32 (2024), 643–669. https://doi.org/10.3934/era.2024031 doi: 10.3934/era.2024031
    [48] Z. Cai, L. Xu, J. Zhang, Y. Feng, L. Zhu, F. Liu, ViT-DualAtt: An efficient pornographic image classification method based on Vision Transformer with dual attention, Electron. Res. Arch., 32 (2024), 6698–6716. https://doi.org/10.3934/era.2024313 doi: 10.3934/era.2024313
    [49] Y. Chen, R. Xia, K. Yang, K. Zou, DARGS: Image inpainting algorithm via deep attention residuals group and semantics, J. King Saud Univ. Comput. Inf. Sci., 35 (2023), 101567. https://doi.org/10.1016/j.jksuci.2023.101567 doi: 10.1016/j.jksuci.2023.101567
    [50] J. Yang, F. Xie, H. Fan, Z. Jiang, J. Liu, Classification for dermoscopy images using convolutional neural networks based on region average pooling, IEEE Access, 6 (2018), 65130–65138. https://doi.org/10.1109/ACCESS.2018.2877587 doi: 10.1109/ACCESS.2018.2877587
    [51] H. Wu, X. Gu, Max-pooling dropout for regularization of convolutional neural networks, in International Conference on Neural Information Processing, 9489 (2015), 46–54. https://doi.org/10.1007/978-3-319-26532-2_6
    [52] K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2016), 770–778. https://doi.org/10.1109/CVPR.2016.90
    [53] G. Huang, Z. Liu, L. V. D. Maaten, K. Q. Weinberger, Densely connected convolutional networks, in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2017), 2261–2269. https://doi.org/10.1109/CVPR.2017.243
    [54] Z. Feng, Y. Chen, L. Xie, Unsupervised anomaly detection via knowledge distillation with non-directly-coupled student block fusion, Mach. Vision Appl., 34 (2023), 104. https://doi.org/10.1007/s00138-023-01454-7 doi: 10.1007/s00138-023-01454-7
    [55] X. Zhao, X. Cai, Y. Xue, Y. Liao, L. Lin, T. Zhao, UKD-Net: efficient image enhancement with knowledge distillation, J. Electron. Imaging, 33 (2024), 023024. https://doi.org/10.1117/1.JEI.33.2.023024 doi: 10.1117/1.JEI.33.2.023024
    [56] Y. Li, J. Cao, Z. Li, S. Oh, N. Komuro, Lightweight single image super-resolution with dense connection distillation network, ACM Trans. Multimedia Comput. Commun. Appl., 17 (2021), 1–17. https://doi.org/10.1145/3414838 doi: 10.1145/3414838
    [57] D. P. Kingma, J. Ba, Adam: A method for stochastic optimization, preprint, arXiv: 1412.6980.
    [58] K. A. Johnson, J. A. Becker, The Whole Brain Atlas, 2005. Available from: https://www.med.harvard.edu/aanlib/home.html.
    [59] S. Roth, M. J. Black, Fields of experts: A framework for learning image priors, in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), 2 (2005), 860–867. https://doi.org/10.1109/CVPR.2005.160
    [60] True Color Kodak Images, Kodak Lossless True Color Image Suite: PhotoCD PCD0992, 2013. Available from: https://r0k.us/graphics/kodak/.
    [61] A. Abdelhamed, S. Lin, M. S. Brown, A high-quality denoising dataset for smartphone cameras, in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, (2018), 1692–1700. https://doi.org/10.1109/CVPR.2018.00182
    [62] M. Lebrun, M. Colom, J. Morel, The noise clinic: A blind image denoising algorithm, Image Process. On Line, 5 (2015), 1–54. https://doi.org/10.5201/ipol.2015.125 doi: 10.5201/ipol.2015.125
  • © 2025 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)