
    Polynomial approximation serves as a fundamental method across various domains of numerical analysis [1,2]. Not only does it provide a robust tool for approximating complex functions, but it also plays a crucial role in numerical integration and solving differential and integral equations. The Lagrange interpolation polynomial at Chebyshev points of the first or second kind has been observed to mitigate the Runge phenomenon [3], surpassing interpolants at equally spaced points. Moreover, the accuracy of approximation exhibits rapid enhancement with an increase in the number of interpolation points [4,5]. Functions of bounded variation hold significant importance in various branches of mathematical physics, optimization [6], free-discontinuity problems [7], and hyperbolic systems of conservation laws [8]. Additionally, these functions find application in image segmentation and related models [9]. However, despite their relevance, the theory of numerical approximations for such functions remains relatively underdeveloped, primarily due to the inherent singularities they exhibit.

A significant body of research has focused on approximating non-smooth functions through decay estimates of series coefficients. Xiang [10] explored the decay behavior of coefficients in polynomial expansions of functions with limited regularity, specifically examining Jacobi and Gegenbauer polynomial series. The goal is to derive optimal asymptotic results for the decay of these coefficients, investigating how the decay rate varies for functions with both interior and boundary singularities, across different parameters. Dell'Accio et al. [11,12] introduced the constrained mock-Chebyshev least squares (CMCLS) approximation method, which mitigates the Runge phenomenon by interpolating functions on nodes near the Chebyshev-Lobatto nodes and using the remaining nodes for regression, for both univariate and bivariate functions. More recently, Wang [13,14,15] addressed error localization in Chebyshev spectral methods for functions with singularities. This study begins with a pointwise error analysis for Chebyshev projections of such functions, revealing that the convergence rate away from the singularity is faster than at the singularity itself by a factor of $1/x$. The analysis rigorously explains the observed error localization phenomenon, suggesting that Chebyshev spectral differentiations generally outperform other methods, except near singularities, where the latter exhibit faster convergence.

Liu et al. [16] introduced a novel theoretical framework grounded in fractional Sobolev-type spaces, leveraging Riemann-Liouville fractional integrals and derivatives to obtain optimal error estimates of Chebyshev polynomial interpolation for functions with limited regularity. Key components include fractional integration by parts and generalized Gegenbauer functions of fractional degree (GGF-Fs). This framework facilitates the estimation of the optimal decay rate of Chebyshev expansion coefficients for functions with singularities, leading to enhanced error estimates for spectral expansions and related operations. In a separate study, Wang [15] derived error bounds for Legendre approximations of differentiable functions in terms of their Legendre coefficients, with related estimates obtained by Hamzehnejad [17]. Additionally, Xie [14] recently obtained bounds for Chebyshev approximations of functions with endpoint singularities. Zhang and Boyd [18] derived estimates for weak endpoint singularities, while Zhang [19,20] obtained bounds for logarithmic endpoint singularities. The work in [21] focused on a specialized filtered approximation technique that generates interpolation polynomials at the Chebyshev zeros using de la Vallée Poussin filters, with the aim of approximating locally continuous functions in weighted uniform norms; ensuring that the associated Lebesgue constants remain uniformly bounded is crucial for this endeavor.

The methodologies discussed above primarily concentrate on Chebyshev interpolation [19,21,22,23], yielding results in terms of exact Chebyshev series coefficients. However, computing exact series coefficients poses a general challenge and proves impractical for numerical algorithms, diminishing their utility in practical applications. Furthermore, these approaches usually involve Jacobi, Gegenbauer, and Legendre polynomials on the fixed intervals where the respective series' basis functions are defined. Such limitations highlight the need for more versatile and efficient approximation methods in numerical analysis. Addressing this gap necessitates the utilization of efficient approximation techniques. Chebyshev polynomials, renowned for their versatility and effectiveness across diverse fields such as digital signal processing [24], spectral graph neural networks [25,26,27], image processing [22], and graph signal filtering [28,29], present a promising avenue for approximating functions of bounded variation. Many physical systems modeled using partial differential equations (PDEs) involve boundary layers or discontinuities, where functions of bounded variation frequently occur. Extending Chebyshev approximations to these functions allows for more accurate error analysis and truncation in numerical simulations of such systems.

Truncated Chebyshev expansions have proven capable of yielding minimax polynomial approximations for analytic functions [30]. Our objective is to employ these polynomials not only for approximating functions of bounded variation but also for conducting a comprehensive convergence analysis of Chebyshev polynomial approximation techniques. At the core of our convergence analysis lies the estimation of the decay of the Chebyshev coefficients. We leverage two recently established decay estimates: Majidian's decay estimate for Chebyshev series coefficients of functions defined on the interval $[-1,1]$, subject to specific regularity conditions [31]; and a sharper decay estimate demonstrated by Xiang [10], under a more relaxed smoothness assumption on the function. In our pursuit of convergence results, we take an initial step by extending these decay bounds to Chebyshev series coefficients of functions defined on a general interval $[a,b]$.

The main contributions are threefold:

    1. Generalization of Chebyshev polynomial approximation: The article extends the traditional Chebyshev polynomial approximation to a broader domain beyond the fixed intervals where basis functions are typically defined. This generalization allows for more flexible application of Chebyshev approximations in various settings.

    2. Optimal error estimates: Two new optimal error estimates for Chebyshev polynomial approximations are presented, specifically tailored for functions of bounded variation. These error estimates are derived using approximated Chebyshev series coefficients rather than exact ones, addressing a significant gap in the existing literature.

    3. Practical computation with approximated coefficients: By focusing on approximated Chebyshev series coefficients, the article offers a more practical approach for numerical algorithms, overcoming the challenges associated with computing exact series coefficients. The theoretical findings are supported by numerical experiments, providing empirical evidence of the efficacy and accuracy of the proposed error estimates.

    These contributions collectively advance the understanding and application of Chebyshev polynomial approximations, particularly for functions with bounded variation, and offer practical solutions for numerical analysis.

While preparing this manuscript, we came across some very interesting recent works in the domain of approximation theory and Chebyshev polynomials. One of them [32] introduced unified Chebyshev polynomials (UCPs) and established their foundational properties, including analytic forms, moments, and inversion formulas. UCPs are shown to be expressible through three consecutive Chebyshev polynomials of the second kind. The authors derive new derivative expressions and connection formulas between different UCP classes, linking them with orthogonal and non-orthogonal polynomials. The second work [33] proposed two numerical schemes for solving the time-fractional heat equation (TFHE) using collocation and tau spectral methods, with UCPs of the first and second kinds as a new basis, and derived novel theoretical results for these polynomials.

The structure of the article is as follows: Section 2 provides the necessary preliminaries, including the Chebyshev series expansion of a function, the Gauss-Chebyshev quadrature rule, and several lemmas that are utilized to develop the main results. In Section 3, we derive decay bounds for the Chebyshev coefficients of functions of bounded variation and functions with limited smoothness. Section 4 presents $L^1$-error estimates for the Chebyshev approximation of $f$, leveraging the two decay estimates established in Section 3. Section 5 numerically demonstrates that the improved decay estimates of the Chebyshev coefficients and the $L^1$-error estimates of the truncated Chebyshev series approximation obtained in Section 4 are sharper than previously known results. Finally, we conclude the paper in Section 6 by outlining some promising future research directions.

The Chebyshev polynomial of the first kind, denoted $T_j(t)$ for a given integer $j \ge 0$, is defined as:

$$T_j(t) = \cos(j\theta), \tag{2.1}$$

where $\theta = \cos^{-1}(t)$ and $t \in [-1,1]$. Notably, $T_j(t)$ is a polynomial of degree $j$ in the variable $t$. These polynomials are orthogonal with respect to the weight function $\omega(t) = \frac{1}{\sqrt{1-t^2}}$ on the interval $[-1,1]$. Specifically, they satisfy the following orthogonality relations:

$$\int_{-1}^{1} \omega(s)\, T_p(s)\, T_q(s)\, ds = \begin{cases} 0, & \text{if } p \ne q,\\ \pi, & \text{if } p = q = 0,\\ \frac{\pi}{2}, & \text{if } p = q \ne 0. \end{cases}$$

The Chebyshev series expansion of a function $f : [-1,1] \to \mathbb{R}$ is expressed as follows:

$$f(t) = \frac{c_0}{2} + \sum_{j=1}^{\infty} c_j T_j(t), \quad \text{where} \quad c_j = \frac{\langle f, T_j \rangle_\omega}{\|T_j\|_\omega^2}, \tag{2.2}$$

and

$$\langle f, T_j \rangle_\omega = \int_{-1}^{1} \omega(s)\, f(s)\, T_j(s)\, ds.$$

The norm $\|T_j\|_\omega$ is computed as:

$$\|T_j\|_\omega = \langle T_j, T_j \rangle_\omega^{1/2} = \begin{cases} \sqrt{\pi}, & j = 0,\\ \sqrt{\pi/2}, & j \ne 0. \end{cases} \tag{2.3}$$

Hence, the Chebyshev coefficients $c_j$ can be obtained in the integral form:

$$c_j = \frac{2}{\pi} \int_{-1}^{1} f(s)\, T_j(s)\, \omega(s)\, ds. \tag{2.4}$$

Given the difficulty of evaluating the integral (2.4) exactly for general functions, we resort to the Gauss-Chebyshev quadrature rule to approximate $c_j$, the $j$th coefficient of the series.

Quadrature methods are renowned for numerically computing definite integrals of the type presented in (2.4). The Gauss-Chebyshev quadrature formula, a variant of Gaussian quadrature employing the weight function $\omega$ and $n$ Chebyshev points, provides an explicit formula for numerical integration (see [3,34,35]):

$$\int_{-1}^{1} \omega(s)\, F(s)\, ds \approx \frac{\pi}{n} \sum_{l=1}^{n} F(t_l), \tag{2.5}$$

where $t_1, t_2, \dots, t_n$ are the $n$ roots of the Chebyshev polynomial $T_n(t)$ of degree $n$, also known as Chebyshev points, given by:

$$t_l = \cos\left(\frac{(2l-1)\pi}{2n}\right), \quad l = 1, 2, \dots, n. \tag{2.6}$$

Leveraging the quadrature formula (2.5), we can readily approximate the Chebyshev series coefficients (2.4) of any function using the formula provided by Rivlin [35, p. 148]:

$$c_k \approx \frac{2}{n} \sum_{l=1}^{n} f(t_l)\, T_k(t_l) =: c_{k,n}. \tag{2.7}$$

Here, $c_{k,n}$ denotes the approximated Chebyshev coefficient computed with $n$ quadrature points.
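For concreteness, the rule (2.6)-(2.7) is straightforward to implement. The following is a minimal NumPy sketch; the function name `cheb_coeffs` and the test function are ours, not from the paper:

```python
import numpy as np

def cheb_coeffs(f, d, n):
    """Approximate the Chebyshev coefficients c_0, ..., c_d of f on [-1, 1]
    by the n-point Gauss-Chebyshev rule (2.7)."""
    theta = (2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n)  # t_l = cos(theta_l), Eq. (2.6)
    k = np.arange(d + 1)
    T = np.cos(np.outer(k, theta))           # T_k(t_l) = cos(k * theta_l)
    return (2.0 / n) * T @ f(np.cos(theta))  # c_{k,n} of Eq. (2.7)

# Smooth test: for f = exp, the coefficients are c_k = 2 I_k(1) (modified Bessel),
# so the first few values should be about 2.5321, 1.1303, 0.2715, 0.0443.
print(cheb_coeffs(np.exp, 8, 64)[:4])
```

Note that no interpolation or linear solve is involved: each coefficient is a single weighted sum of function samples, so computing $c_{0,n}, \dots, c_{d,n}$ costs $O(dn)$ operations (or $O(n \log n)$ via a discrete cosine transform; the direct formula is used above for clarity).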

We denote the Chebyshev series expansion of a function $f \in L^2_\omega[a,b]$ by $C[f](x)$, defined as:

$$C[f](x) := \sum_{j=0}^{\infty} c_j T_j(G^{-1}(x)),$$

where the $j = 0$ term is halved, in keeping with the convention of (2.2), and $G : [-1,1] \to [a,b]$ is the bijection given by:

$$G(t) = a + \frac{(b-a)}{2}(t+1), \quad t \in [-1,1].$$

The Chebyshev coefficients $c_j$ are calculated as:

$$c_j = \frac{2}{\pi} \int_{-1}^{1} \frac{f(G(t))\, T_j(t)}{\sqrt{1-t^2}}\, dt.$$

Using the change of variable $t = \cos\theta$, we express $c_j$ as:

$$c_j = \frac{2}{\pi} \int_{0}^{\pi} f(G(\cos\theta)) \cos j\theta \, d\theta. \tag{2.8}$$

The $d$th partial sum, $C_d[f](x)$, approximates the function $f$ at a point $x \in [a,b]$ and is given by:

$$C_d[f](x) := \sum_{j=0}^{d} c_j T_j(G^{-1}(x)). \tag{2.9}$$

In our results, we use

$$C_{d,n}[f](x) := \sum_{j=0}^{d} c_{j,n} T_j(G^{-1}(x)) \tag{2.10}$$

to denote the corresponding approximation of $f$ using $n$ quadrature points.

Additionally, we represent the Chebyshev series expansion of $f$, approximated with $n$ quadrature points, as:

$$C_{\infty,n}[f](t) := \sum_{j=0}^{\infty} c_{j,n} T_j(t). \tag{2.11}$$
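The definitions (2.9)-(2.10) translate directly into code. Below is a hedged sketch (names and parameter choices are ours, NumPy assumed) that builds $C_{d,n}[f]$ on a general interval $[a,b]$ using the map $G$ and the quadrature rule (2.7); the $c_0/2$ convention of (2.2) is applied when the series is evaluated:

```python
import numpy as np

def cheb_approx(f, a, b, d, n):
    """Return a callable evaluating C_{d,n}[f] on [a, b], per (2.10)."""
    theta = (2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n)
    G = lambda t: a + 0.5 * (b - a) * (t + 1)        # G : [-1, 1] -> [a, b]
    k = np.arange(d + 1)
    c = (2.0 / n) * np.cos(np.outer(k, theta)) @ f(G(np.cos(theta)))  # c_{j,n}
    def Cdn(x):
        t = np.clip(2 * (np.asarray(x, float) - a) / (b - a) - 1, -1, 1)  # G^{-1}(x)
        return np.cos(np.outer(np.arccos(t), k)) @ c - 0.5 * c[0]  # c_0/2 convention
    return Cdn

approx = cheb_approx(np.sin, 0.0, 3.0, 12, 64)
x = np.linspace(0.0, 3.0, 101)
print(np.max(np.abs(approx(x) - np.sin(x))))  # ~ 1e-12: sin is smooth
```

For smooth $f$ the error above decays super-geometrically in $d$; the point of Sections 3 and 4 is to quantify what survives of this picture when $f$ merely has bounded variation.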

    The following lemmas are used in deriving the required error estimates.

Lemma 2.1. For a given positive integer $n$, we have

$$c_{k,n} - c_k = \sum_{j=1}^{\infty} (-1)^j \left( c_{2jn-k} + c_{2jn+k} \right),$$

for any integer $k$ such that $0 \le k < 2n$.

Proof. Since the identity is trivially satisfied for $k = n$, we assume that $k \ne n$.

Using (2.2) in the quadrature formula (2.7), we get

$$c_{k,n} = \frac{2}{n} \sum_{i=0}^{n-1} \left( \frac{c_0}{2} + \sum_{j=1}^{\infty} c_j T_j(t_i) \right) T_k(t_i).$$

First, consider the case $k = 0$. In this case, we can write

$$c_{0,n} = \frac{2}{n} \left\{ \frac{c_0}{2} \left( \sum_{i=0}^{n-1} T_0(t_i) T_0(t_i) \right) + \sum_{j=1}^{\infty} c_j \left( \sum_{i=0}^{n-1} T_j(t_i) T_0(t_i) \right) \right\}.$$

Using the fact that $T_0 \equiv 1$, we see that

$$c_{0,n} = c_0 + \frac{2}{n} \sum_{j=1}^{\infty} c_j \left( \sum_{i=0}^{n-1} T_j(t_i) T_0(t_i) \right).$$

A direct computation shows that

$$\sum_{i=0}^{n-1} T_j(t_i) T_0(t_i) = \begin{cases} (-1)^p n, & \text{if } j = 2pn \text{ for } p = 1, 2, \dots,\\ 0, & \text{otherwise}. \end{cases}$$

Thus, we can write

$$c_{0,n} = c_0 + 2 \sum_{p=1}^{\infty} (-1)^p c_{2pn},$$

which is the required identity for $k = 0$.

We now assume that $k \ne 0$ (recalling that we have already assumed $k \ne n$). We can write

$$c_{k,n} = \frac{2}{n} \left\{ \sum_{j=0}^{k-1} c_j \left( \sum_{i=0}^{n-1} T_j(t_i) T_k(t_i) \right) + c_k \left( \sum_{i=0}^{n-1} T_k(t_i) T_k(t_i) \right) + \sum_{j=k+1}^{\infty} c_j \left( \sum_{i=0}^{n-1} T_j(t_i) T_k(t_i) \right) \right\}.$$

Let us first evaluate the second term on the right-hand side. Since $0 < k < 2n$ with $k \ne n$, we see by taking $j = k$ that

$$j + k = 2k \ne 2pn \quad \text{for any nonnegative integer } p,$$

and

$$|j - k| = 0, \quad \text{corresponding to } s = 0,$$

and hence we see that

$$\sum_{i=0}^{n-1} T_k(t_i) T_k(t_i) = \frac{n}{2}.$$

Therefore, the above expression can be written as

$$c_{k,n} = c_k + \frac{2}{n} \left\{ \sum_{j=0}^{k-1} c_j \left( \sum_{i=0}^{n-1} T_j(t_i) T_k(t_i) \right) + \sum_{j=k+1}^{\infty} c_j \left( \sum_{i=0}^{n-1} T_j(t_i) T_k(t_i) \right) \right\}. \tag{2.12}$$

Let us now consider two cases, namely, $0 < k < n$ and $n < k < 2n$. We skip the proof for $0 < k < n$ and consider only the case $n < k < 2n$ (recall that $k = n$ has already been handled separately).

(1) For $j = 0, \dots, k-1$, we write $j = k - \alpha$ for $\alpha = 1, 2, \dots, k$. Then, for some $p \in \mathbb{Z}^+$,

$$j + k = 2k - \alpha = 2pn \iff \alpha = 2k - 2pn.$$

Since $\alpha$ ranges from $1$ to $k$, we cannot have $p = 0$, for otherwise $\alpha = 2k$, which is not possible. Also, $j + k \ne 2pn$ for any $p = 2, 3, \dots$, for then $\alpha$ becomes negative. However, for $p = 1$, we have $\alpha = 2k - 2n$. Thus,

for $k = n+1, n+2, \dots, 2n-1$, we have $\alpha = 2, 4, \dots, 2n-2 \ (= k-1)$, respectively.

Thus, exactly one term in the first summation within the braces of (2.12) is nonzero, depending on the given value of $k$ between $n$ and $2n$. Note that, for this to happen, we need $n \ge 2$ (because only then does $\alpha$ have a meaningful range). Also note that for the present case $n < k < 2n$ to occur at all, we need $n \ge 2$; interestingly, this is implied by the assumptions of Theorem 4.1.

On the other hand, for any nonnegative integer $s$,

$$|j - k| = 2sn \iff \alpha = 2sn.$$

Since $\alpha$ ranges from $1$ to $k$ and $n < k < 2n$, this condition does not hold for any nonnegative integer $s$, and therefore

$$|j - k| \ne 2sn \quad \text{for any } s \in \mathbb{Z}^+.$$

Thus we see that

$$\sum_{i=0}^{n-1} T_j(t_i) T_k(t_i) = \begin{cases} -\dfrac{n}{2}, & \text{if } \alpha = 2k - 2n,\\ 0, & \text{otherwise}. \end{cases}$$

Thus, we have

$$\sum_{j=0}^{k-1} c_j \left( \sum_{i=0}^{n-1} T_j(t_i) T_k(t_i) \right) = \sum_{\alpha=1}^{k} c_{k-\alpha} \left( \sum_{i=0}^{n-1} T_{k-\alpha}(t_i) T_k(t_i) \right) = -\frac{n}{2}\, c_{k-(2k-2n)} = -\frac{n}{2}\, c_{2n-k}. \tag{2.13}$$

(2) For $j = k+1, k+2, \dots$, let us write $j = k + \alpha$ for $\alpha = 1, 2, \dots$. For some $p \in \mathbb{Z}^+$,

$$j + k = 2k + \alpha = 2pn \iff \alpha = 2pn - 2k.$$

This is not possible for $p = 0, 1$, because then $\alpha$ would be negative. However, it is possible for $p = 2, 3, \dots$, for which we have

$$\alpha = 4n - 2k,\ 6n - 2k,\ \dots.$$

On the other hand, for any $q \in \mathbb{Z}^+$,

$$|j - k| = 2qn \iff \alpha = 2qn.$$

This is possible for $q = 1, 2, \dots$, for which we have

$$\alpha = 2n,\ 4n,\ \dots.$$

Note that we have to check whether $j + k = 2pn$ and $|j - k| = 2qn$ can hold simultaneously for some $p, q \in \mathbb{Z}^+$. If so, the corresponding values of $\alpha$ must be equal; that is, for some $p$ and $q$,

$$2pn - 2k = 2qn \implies pn - k = qn \implies k = (p - q)n.$$

This shows that both cases occur simultaneously if and only if $k$ is a multiple of $n$. But in the present case $n < k < 2n$, so the two cases cannot happen simultaneously.

From the above discussion, we see that

either $j + k = 2pn$, or $|j - k| = 2qn$, or neither of the two,

for any $p = 2, 3, \dots$ and $q = 1, 2, \dots$. Thus, for $j = k+1, k+2, \dots$,

$$\sum_{i=0}^{n-1} T_j(t_i) T_k(t_i) = \sum_{i=0}^{n-1} T_{k+\alpha}(t_i) T_k(t_i) = \begin{cases} (-1)^p \dfrac{n}{2}, & \text{if } \alpha = 2pn - 2k \text{ for } p = 2, 3, \dots, \text{ or } \alpha = 2pn \text{ for } p = 1, 2, \dots,\\ 0, & \text{otherwise}. \end{cases}$$

Using this, we can write

$$\sum_{j=k+1}^{\infty} c_j \left( \sum_{i=0}^{n-1} T_j(t_i) T_k(t_i) \right) = \sum_{\alpha=1}^{\infty} c_{k+\alpha} \left( \sum_{i=0}^{n-1} T_{k+\alpha}(t_i) T_k(t_i) \right) = \frac{n}{2} \left\{ -c_{2n+k} + \sum_{p=2}^{\infty} (-1)^p \left( c_{2pn-k} + c_{2pn+k} \right) \right\}. \tag{2.14}$$

Substituting (2.13) and (2.14) in (2.12), we get

$$c_{k,n} = c_k - c_{2n-k} - c_{2n+k} + \sum_{p=2}^{\infty} (-1)^p \left( c_{2pn-k} + c_{2pn+k} \right) = c_k + \sum_{p=1}^{\infty} (-1)^p \left( c_{2pn-k} + c_{2pn+k} \right).$$

    This completes the proof.
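Since Lemma 2.1 is central to everything that follows, a quick numerical sanity check is reassuring. The sketch below (parameters and names are ours) compares $c_{k,n} - c_k$ against a truncation of the alternating aliasing series, using a large quadrature as a stand-in for the exact coefficients:

```python
import numpy as np

def cheb_coeffs(f, d, n):
    theta = (2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n)
    return (2.0 / n) * np.cos(np.outer(np.arange(d + 1), theta)) @ f(np.cos(theta))

f = lambda t: np.exp(np.sin(3 * t))
n, k, J = 16, 5, 10                           # check c_{k,n} for a modest n; truncate at j = J
c_ref = cheb_coeffs(f, 2 * J * n + k, 4096)   # reference ("exact") coefficients
lhs = cheb_coeffs(f, k, n)[k] - c_ref[k]
rhs = sum((-1) ** j * (c_ref[2 * j * n - k] + c_ref[2 * j * n + k])
          for j in range(1, J + 1))
print(abs(lhs - rhs))                         # ~ machine precision: Lemma 2.1 holds
```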

Lemma 2.2. For $0 \le d < 2n$, we have

$$\|C_d[f] - C_{d,n}[f]\|_1 \le (b-a) \sum_{j=1}^{\infty} \sum_{i=2jn-d}^{2jn+d} |c_i|,$$

for any $f \in L^1[a,b]$.

Proof. For any $t \in [a,b]$, we have

$$|C_d[f](t) - C_{d,n}[f](t)| = \left| \sum_{k=0}^{d} (c_k - c_{k,n}) T_k(G^{-1}(t)) \right| \le \sum_{k=0}^{d} |c_k - c_{k,n}|.$$

By Lemma 2.1, we have

$$|C_d[f](t) - C_{d,n}[f](t)| \le \sum_{k=0}^{d} \left| \sum_{j=1}^{\infty} (-1)^j \left( c_{2jn-k} + c_{2jn+k} \right) \right| \le \sum_{j=1}^{\infty} \left\{ \sum_{k=0}^{d} \left( |c_{2jn-k}| + |c_{2jn+k}| \right) \right\}.$$

Note that each term of the right-hand-side series can be rewritten as (the $k = 0$ term carries the factor $\tfrac{1}{2}$ from the convention in (2.2))

$$\sum_{k=0}^{d} \left( |c_{2jn-k}| + |c_{2jn+k}| \right) = \frac{|c_{2jn}|}{2} + \frac{|c_{2jn}|}{2} + |c_{2jn-1}| + |c_{2jn+1}| + \cdots + |c_{2jn-d}| + |c_{2jn+d}| = \sum_{i=2jn-d}^{2jn+d} |c_i|.$$

Substituting this expression in the above inequality, we get

$$|C_d[f](t) - C_{d,n}[f](t)| \le \sum_{j=1}^{\infty} \sum_{i=2jn-d}^{2jn+d} |c_i|.$$

Therefore,

$$\|C_d[f] - C_{d,n}[f]\|_1 = \int_a^b |C_d[f](t) - C_{d,n}[f](t)|\, dt \le \int_a^b \sum_{j=1}^{\infty} \sum_{i=2jn-d}^{2jn+d} |c_i| \, dt = (b-a) \sum_{j=1}^{\infty} \sum_{i=2jn-d}^{2jn+d} |c_i|.$$
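As an illustration, the bound of Lemma 2.2 can be checked numerically for $f(x) = |x|$ on $[-1,1]$, whose Chebyshev coefficients are known in closed form from the classical expansion of $|x|$: $c_{2m} = \frac{4}{\pi}\frac{(-1)^{m+1}}{4m^2-1}$, with odd coefficients vanishing. The following sketch (parameter choices ours) compares the measured $L^1$ gap with the right-hand side of the lemma:

```python
import numpy as np

def c_abs(i):
    """Exact Chebyshev coefficients of |x| on [-1, 1] (classical expansion)."""
    i = np.asarray(i)
    m = i // 2
    v = (4 / np.pi) * (-1.0) ** (m + 1) / (4 * m**2 - 1.0)
    return np.where(i % 2 == 0, v, 0.0)

n, d = 32, 20
theta = (2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n)
k = np.arange(d + 1)
c_dn = (2.0 / n) * np.cos(np.outer(k, theta)) @ np.abs(np.cos(theta))
diff = c_abs(k) - c_dn
diff[0] *= 0.5                                  # the c_0/2 convention of (2.2)
x = np.linspace(-1, 1, 8001)
lhs = np.sum(np.abs(np.cos(np.outer(np.arccos(x), k)) @ diff)) * (x[1] - x[0])
rhs = 2 * sum(np.abs(c_abs(np.arange(2 * j * n - d, 2 * j * n + d + 1))).sum()
              for j in range(1, 400))           # (b - a) = 2; tail truncated at j = 400
print(lhs, rhs, lhs <= rhs)                     # the bound of Lemma 2.2 holds
```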

    Using the preliminaries and the lemmas presented in Section 2, we establish the decay estimates for the Chebyshev coefficients.

In this section, we extend the decay bounds established in the prior works of Majidian [31] and Xiang [10]. This generalization is pivotal for numerous applications. In practical scenarios, the function to be approximated may not always reside within the domain $[-1,1]$. Moreover, in various applications, local schemes [36] or piecewise approximations [37,38] are preferred over global ones. In such cases, decay estimates on a general domain become imperative.

Theorem 3.1. For some integer $k \ge 0$, let $f, f', \dots, f^{(k-1)}$ be absolutely continuous on the interval $[a,b]$. If $V_k := \|f^{(k)}\|_T < \infty$, where

$$\|f\|_T := \int_0^\pi \left| f'(G(\cos\theta)) \right| d\theta, \tag{3.1}$$

then for $j \ge k+1$ and for the integer $s \ge 0$ determined by the parity of $k$, we have

$$|c_j| \le \begin{cases} \left( \dfrac{b-a}{2} \right)^{2s+1} \dfrac{2V_k}{\pi \prod_{i=-s}^{s} (j+2i)}, & \text{if } k = 2s,\\[2mm] \left( \dfrac{b-a}{2} \right)^{2s+2} \dfrac{2V_k}{\pi \prod_{i=-s}^{s+1} (j+2i-1)}, & \text{if } k = 2s+1, \end{cases} \tag{3.2}$$

where $c_k$, $k = 0, 1, \dots$, are the Chebyshev coefficients of $f$.

Proof. For a given nonnegative integer $r$, we define

$$c_j^{(r)} = \frac{2}{\pi} \int_0^\pi f^{(r)}(G(\cos\theta)) \cos j\theta \, d\theta, \tag{3.3}$$

with the understanding that $c_j^{(0)} = c_j$. In dealing with non-smooth functions, we must use the weak (distributional) derivative on the right-hand side of the above expression, if it exists. Employing integration by parts in (3.3), we can express $c_j^{(r)}$ as

$$c_j^{(r)} = \frac{(b-a)}{4j} \left( c_{j-1}^{(r+1)} - c_{j+1}^{(r+1)} \right), \tag{3.4}$$

for $j = 1, 2, \dots$. In order to prove the required estimate, we prove the following more general inequality:

$$\left| c_j^{(k-m)} \right| \le \frac{2V_k}{\pi} \begin{cases} \left( \dfrac{b-a}{2} \right)^{m+1} \dfrac{1}{\prod_{i=-s}^{s} (j+2i)}, & \text{if } m = 2s,\ s \ge 0,\\[2mm] \left( \dfrac{b-a}{2} \right)^{m+1} \dfrac{1}{\prod_{i=-s}^{s+1} (j+2i-1)}, & \text{if } m = 2s+1,\ s \ge 0, \end{cases} \tag{3.5}$$

for $m = 0, \dots, k$ and $j \ge m+1$, using induction on $m$; the case $m = k$ then gives the required result. First, we claim that (3.5) holds for $m = 0$. From (3.4) and (3.3), and since $|c_j^{(k+1)}| \le \frac{2}{\pi} \int_0^\pi |f^{(k+1)}(G(\cos\theta))|\, d\theta = \frac{2V_k}{\pi}$, we have

$$\left| c_j^{(k)} \right| \le \frac{b-a}{4j} \left( \left| c_{j-1}^{(k+1)} \right| + \left| c_{j+1}^{(k+1)} \right| \right) \le \frac{(b-a)}{j\pi} V_k.$$

This is precisely the inequality (3.5) for $m = 0$. Assume now that the inequality (3.5) holds for $m = 2s$ for some $s \ge 0$. Then for $m = 2s+1$ (odd), we have

$$\left| c_j^{(k-2s-1)} \right| \le \frac{b-a}{4j} \left( \left| c_{j-1}^{(k-2s)} \right| + \left| c_{j+1}^{(k-2s)} \right| \right).$$

Using the assumption that (3.5) holds for $m = 2s$, we can write

$$\left| c_j^{(k-2s-1)} \right| \le \frac{b-a}{4j} \left( \frac{b-a}{2} \right)^{2s+1} \frac{2V_k}{\pi} \left[ \frac{1}{\prod_{i=-s}^{s} (j+2i-1)} + \frac{1}{\prod_{i=-s}^{s} (j+2i+1)} \right].$$

Simplifying the right-hand side, we get

$$\left| c_j^{(k-2s-1)} \right| \le \left( \frac{b-a}{2} \right)^{2s+2} \frac{2V_k}{\pi \prod_{i=-s}^{s+1} (j+2i-1)},$$

which is precisely the required inequality (3.5) for $m = 2s+1$. Finally, assume that (3.5) holds for $m = 2s+1$. Then for $m = 2s+2$ (even), we have

$$\left| c_j^{(k-2s-2)} \right| \le \frac{b-a}{4j} \left( \left| c_{j-1}^{(k-2s-1)} \right| + \left| c_{j+1}^{(k-2s-1)} \right| \right).$$

Using the assumption that (3.5) holds for $m = 2s+1$, we can write

$$\left| c_j^{(k-2s-2)} \right| \le \frac{b-a}{4j} \left( \frac{b-a}{2} \right)^{2s+2} \frac{2V_k}{\pi} \left[ \frac{1}{\prod_{i=-s}^{s+1} (j+2i-2)} + \frac{1}{\prod_{i=-s}^{s+1} (j+2i)} \right].$$

Simplifying the right-hand side, we get

$$\left| c_j^{(k-2s-2)} \right| \le \left( \frac{b-a}{2} \right)^{2s+3} \frac{2V_k}{\pi \prod_{i=-(s+1)}^{s+1} (j+2i)},$$

which is precisely the required inequality (3.5) for $m = 2s+2$. The proof now follows by induction.

Remark 3.1. Note that if $f^{(k)}$ is absolutely continuous, then $V_k$ is precisely the total variation of $f^{(k)}$; hence, in this case, the assumption that $V_k$ is finite implies that $f^{(k)}$ is of bounded variation on $[a,b]$. If $f^{(k)}$ has a jump discontinuity, then one necessarily has to use the distributional derivative of $f^{(k)}$ in computing $V_k$.
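To see Theorem 3.1 (together with Remark 3.1) in action, take $f(t) = |t|$ on $[-1,1]$ with $k = 1$: $f$ is absolutely continuous, $f'' = 2\delta_0$ in the distributional sense, and a short computation (composing the delta at the origin with $\cos\theta$) gives $V_1 = \|f'\|_T = 2$, so (3.2) with $s = 0$ reads $|c_j| \le \frac{4}{\pi (j-1)(j+1)}$. A minimal check, using the classical closed-form coefficients of $|t|$ (our choice of test case):

```python
import numpy as np

# f(t) = |t| on [-1, 1]: |c_j| = (4/pi) / (j^2 - 1) for even j, 0 for odd j.
# Theorem 3.1 with k = 1, V_1 = 2 gives |c_j| <= 4 / (pi (j - 1)(j + 1)) for j >= 2.
j = np.arange(2, 31)
cj = np.where(j % 2 == 0, (4 / np.pi) / (j**2 - 1.0), 0.0)   # |c_j|
bound = 4.0 / (np.pi * (j - 1) * (j + 1))
print(np.all(cj <= bound), np.max(cj / bound))  # True 1.0 -- the bound is attained at even j
```

That the bound is attained here suggests that the constant in (3.2) cannot be improved in general.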

    The following lemma is the generalization of a result in [10].

Lemma 3.1. Let $f$ be a function defined on an interval $[a,b]$ such that, for some integer $k \ge 1$, $c_j^{(k)}$ is well-defined and $f^{(k)}$ is of bounded variation on $[a,b]$. Then we have

$$c_j = \left( \frac{b-a}{4} \right)^p \sum_{i=0}^{p} \binom{p}{i} \frac{(-1)^i (j+2i-p)}{(j+i)(j+i-1)\cdots(j+i-p)}\, c_{j+2i-p}^{(p)}, \tag{3.6}$$

where $j = p, p+1, \dots$ and $p = 1, 2, \dots, k$.

Theorem 3.2. Let $f$ be a function defined on $[a,b]$ such that, for some nonnegative integer $k$, $f^{(k)}$ is of bounded variation with $V_k = \mathrm{Var}(f^{(k)}) < \infty$. Then we have

$$\left| c_j^{(k)} \right| \le \frac{2V_k}{j\pi}, \quad j = 1, 2, \dots, \tag{3.7}$$

$$|c_j| \le \frac{2V_k}{\pi} \left( \frac{b-a}{4} \right)^k \sum_{i=0}^{k} \binom{k}{i} \frac{1}{(j+i)(j+i-1)\cdots(j+i-k)}, \tag{3.8}$$

for $j = k+1, k+2, \dots$.

Proof. Since $f^{(k)}$ is of bounded variation, we can write (see Lang [39])

$$f^{(k)} = g_1 - g_2 \quad \text{with} \quad \mathrm{Var}(f^{(k)}) = \mathrm{Var}(g_1) + \mathrm{Var}(g_2), \tag{3.9}$$

where $g_1$ and $g_2$ are monotonically increasing functions on $[a,b]$ (the Jordan decomposition). Define $u(\theta) := g_i(G(\cos\theta))$, which is monotonically decreasing for $\theta \in [0,\pi]$. Further, with $v(\theta) := -u(\theta)$, which is monotonically increasing, we have

$$\frac{2}{\pi} \int_0^\pi g_i(G(\cos\theta)) \cos j\theta \, d\theta = -\frac{2}{\pi} \int_0^\pi v(\theta) \cos j\theta \, d\theta.$$

By the second mean value theorem of integral calculus (Apostol [40, Theorem 7.37]), there exists $x_0 \in [0,\pi]$ such that

$$\frac{2}{\pi} \int_0^\pi g_i(G(\cos\theta)) \cos j\theta \, d\theta = -\frac{2}{\pi} \left( v(0) \int_0^{x_0} \cos j\theta \, d\theta + v(\pi) \int_{x_0}^{\pi} \cos j\theta \, d\theta \right). \tag{3.10}$$

By the definition of $v$, we have

$$v(0) = -u(0) = -g_i(G(\cos 0)) = -g_i(G(1)) = -g_i(b), \qquad v(\pi) = -u(\pi) = -g_i(G(\cos\pi)) = -g_i(G(-1)) = -g_i(a).$$

Substituting in (3.10) and then integrating yields

$$\frac{2}{\pi} \int_0^\pi g_i(G(\cos\theta)) \cos j\theta \, d\theta = \frac{2}{\pi} \, \frac{g_i(b) - g_i(a)}{j} \sin jx_0.$$

Using (3.3) and (3.9), we get $\left| c_j^{(k)} \right| \le \frac{2V_k}{j\pi}$ for $j = 1, 2, \dots$, which is (3.7). Now consider (3.6) with $p = k$, which gives

$$c_j = \left( \frac{b-a}{4} \right)^k \sum_{i=0}^{k} \binom{k}{i} \frac{(-1)^i (j+2i-k)}{(j+i)(j+i-1)\cdots(j+i-k)}\, c_{j+2i-k}^{(k)}.$$

Taking the modulus on both sides and using the bound (3.7), we get

$$|c_j| \le \left( \frac{b-a}{4} \right)^k \sum_{i=0}^{k} \binom{k}{i} \frac{(j+2i-k)}{(j+i)(j+i-1)\cdots(j+i-k)} \cdot \frac{2V_k}{(j+2i-k)\pi},$$

for $j = k+1, k+2, \dots$, which leads to the desired result. Note that the case $j = k$ is excluded, since for $i = 0$ the factor $(j+i-k)$ in the denominator vanishes.

For the well-known decay estimate of the Chebyshev coefficients of a real analytic function, we refer to Rivlin [35] (see also Xiang et al. [41]).
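The intermediate bound (3.7) is also easy to probe numerically. For $f(t) = |t|$ with $k = 1$ we have $f' = \operatorname{sign}(t)$ and $V_1 = \mathrm{Var}(\operatorname{sign}) = 2$; the coefficients $c_j^{(1)}$ of $\operatorname{sign}(t)$ are $\frac{4}{\pi j}$ for odd $j$ (a standard expansion), so (3.7) holds with equality at every odd $j$. A sketch (grid sizes ours):

```python
import numpy as np

N = 4096
theta = (2 * np.arange(1, N + 1) - 1) * np.pi / (2 * N)
j = np.arange(1, 26)
cj1 = (2.0 / N) * np.cos(np.outer(j, theta)) @ np.sign(np.cos(theta))  # coeffs of f' = sign
bound = 2 * 2 / (np.pi * j)                       # (3.7) with V_1 = 2
print(np.all(np.abs(cj1) <= bound + 1e-6))        # True (cushion for roundoff)
print(np.max(np.abs(cj1) * np.pi * j / 4))        # ~ 1.0: equality at odd j, slightly
                                                  # below 1 due to aliasing (Lemma 2.1)
```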

In this section, we derive $L^1$-error estimates for the Chebyshev approximation of $f$, utilizing the two decay estimates provided in Theorems 3.1 and 3.2. Specifically, we establish the error estimate for the truncated Chebyshev series approximation relying on the decay estimate (3.2) of the Chebyshev coefficients, as presented in the following theorem.

Theorem 4.1. Assume the hypotheses of Theorem 3.1. Then for any given integers $n$ and $d$ such that $n - 1 \ge k \ge 1$ and $k \le d \le 2n-k-1$, we have

$$\|f - C_{d,n}[f]\|_1 \le T_{d,n},$$

where

(1) if $d = n - l$, for some $l = 1, 2, \dots, n-k$, then we have

$$T_{d,n} := \begin{cases} \left( \dfrac{b-a}{2} \right)^{k+2} \dfrac{4V_k}{k\pi} \left( \Pi_{1,1}(n-l) + \Pi_{1,2}(n-l) \right), & \text{if } k = 2s,\\[2mm] \left( \dfrac{b-a}{2} \right)^{k+2} \dfrac{4V_k}{k\pi} \left( \Pi_{0,0}(n-l) + \Pi_{0,1}(n-l) \right), & \text{if } k = 2s+1, \end{cases} \tag{4.1}$$

(2) if $d = n + l$, for some $l = 0, 1, \dots, n-k-1$, then we have

$$T_{d,n} := \begin{cases} \left( \dfrac{b-a}{2} \right)^{k+2} \dfrac{6V_k}{k\pi} \left( \Pi_{1,0}(n-l) + \Pi_{1,1}(n-l) \right), & \text{if } k = 2s,\\[2mm] \left( \dfrac{b-a}{2} \right)^{k+2} \dfrac{6V_k}{k\pi} \left( \Pi_{0,-1}(n-l) + \Pi_{0,0}(n-l) \right), & \text{if } k = 2s+1, \end{cases} \tag{4.2}$$

for some integer $s \ge 0$, where

$$\Pi_{\alpha,\beta}(\eta) := \frac{1}{\prod_{i=-s}^{s-\alpha} (\eta + 2i + \beta)}, \quad \alpha = 0, 1, \quad \beta = -1, 0, 1, 2. \tag{4.3}$$

Proof. We have

$$\|f - C_{d,n}[f]\|_1 \le \|f - C_d[f]\|_1 + \|C_d[f] - C_{d,n}[f]\|_1.$$

For estimating the second term on the right-hand side, we use the well-known identity (see Fox and Parker [42])

$$c_{d,n} - c_d = \sum_{j=1}^{\infty} (-1)^j \left( c_{2jn-d} + c_{2jn+d} \right),$$

for $0 \le d < 2n$. Using this property, with an obvious rearrangement of the terms in the series, we obtain

$$\|f - C_{d,n}[f]\|_1 \le (b-a) E, \tag{4.4}$$

where

$$E := \sum_{j=d+1}^{\infty} |c_j| + \sum_{j=1}^{\infty} \sum_{i=2jn-d}^{2jn+d} |c_i|.$$

By adding appropriate positive terms, we can see that (see also Xiang et al. [41])

$$E \le \begin{cases} 2 \displaystyle\sum_{i=n-l+1}^{\infty} |c_i|, & \text{for } d = n-l,\ l = 1, 2, \dots, n,\\[2mm] 3 \displaystyle\sum_{i=n-l}^{\infty} |c_i|, & \text{for } d = n+l,\ l = 0, 1, \dots, n-1. \end{cases} \tag{4.5}$$

We restrict the integer $d$ to $k \le d \le 2n-k-1$ so that the decay estimate of Theorem 3.1 can be used. Now, using the telescoping property of the resulting series (see also Majidian [31]), we arrive at the required estimate.

Remark 4.1. From the above theorem, we see that for a fixed $n$ (as in the hypotheses), the upper bound $T_{d,n}$ decreases for $d < n$ and increases for $d \ge n$. Further, we see that $T_{n-l-1,n} = \frac{2}{3} T_{n+l,n}$; however, $C_{n-l-1,n}[f]$ is computationally more efficient than $C_{n+l,n}[f]$.
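Because $\Pi_{\alpha,\beta}$ collapses to a single factor when $s = 0$, the bound of Theorem 4.1 is particularly simple for $k = 1$, and the ratio claimed in Remark 4.1 can be confirmed directly. A minimal sketch (helper name and parameter values are ours), assuming $[a,b] = [-1,1]$; the value of $V_1$ cancels in the ratio:

```python
import numpy as np

# T_{d,n} of Theorem 4.1 for k = 1 (s = 0, so Pi_{0,beta}(eta) = 1/(eta + beta)).
def T41(n, d, V=1.0, a=-1.0, b=1.0):
    l = n - d if d < n else d - n
    if d < n:   # Eq. (4.1), k = 2s+1 with s = 0
        return ((b - a) / 2) ** 3 * (4 * V / np.pi) * (1 / (n - l) + 1 / (n - l + 1))
    else:       # Eq. (4.2)
        return ((b - a) / 2) ** 3 * (6 * V / np.pi) * (1 / (n - l - 1) + 1 / (n - l))

n, l = 200, 8
print(T41(n, n - l - 1) / T41(n, n + l))   # 0.666... = 2/3, as stated in Remark 4.1
```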

    The following theorem states the error estimate for the truncated Chebyshev series approximation based on the decay estimate (3.8).

Theorem 4.2. Assume the hypotheses of Theorem 3.2 for an integer $k \ge 1$. For given integers $d$ and $n$ such that $n \ge k$ and $k \le d \le 2n-k-1$, let $C_{d,n}[f]$ be the truncated Chebyshev series of $f$ with approximated coefficients. Then we have

$$\|f - C_{d,n}[f]\|_1 \le T_{d,n}, \tag{4.6}$$

where

(1) if $d = n - l$, for some $l = 1, 2, \dots, n-k$, we have

$$T_{d,n} = \frac{4V_k (b-a)^{k+1}}{4^k k\pi} \sum_{j=0}^{k} \binom{k}{j} \frac{1}{(n-l+j)(n-l+j-1)\cdots(n-l+j-k+1)}, \tag{4.7}$$

(2) if $d = n + l$, for some $l = 0, 1, 2, \dots, n-k-1$, we have

$$T_{d,n} = \frac{6V_k (b-a)^{k+1}}{4^k k\pi} \sum_{j=0}^{k} \binom{k}{j} \frac{1}{(n-l+j-1)(n-l+j-2)\cdots(n-l+j-k)}. \tag{4.8}$$

Proof. Recall the $L^1$-error estimate for the truncated Chebyshev series expansion of $f$ with approximated coefficients given by (4.4) and (4.5):

$$\|f - C_{d,n}[f]\|_1 \le \begin{cases} 2(b-a) \displaystyle\sum_{i=n-l+1}^{\infty} |c_i|, & \text{for } d = n-l,\ l = 1, 2, \dots, n-k,\\[2mm] 3(b-a) \displaystyle\sum_{i=n-l}^{\infty} |c_i|, & \text{for } d = n+l,\ l = 0, 1, \dots, n-k-1. \end{cases} \tag{4.9}$$

Case 1. Let us take the first case in the above estimate and apply Theorem 3.2 for $d = n-l$, $l = 1, 2, \dots, n-k$, to get

$$\sum_{i=n-l+1}^{\infty} |c_i| \le \frac{2V_k}{\pi} \left( \frac{b-a}{4} \right)^k \sum_{j=0}^{k} \binom{k}{j} \sum_{i=n-l+1}^{\infty} \frac{1}{(i+j)(i+j-1)\cdots(i+j-k)} = \frac{2V_k}{\pi} \left( \frac{b-a}{4} \right)^k \sum_{j=0}^{k} \binom{k}{j} \sum_{i=n-l+1}^{\infty} \frac{1}{k} \left( \frac{1}{(i+j-1)\cdots(i+j-k)} - \frac{1}{(i+j)\cdots(i+j-k+1)} \right).$$

Hence, using the telescoping property of the above series, we have

$$\sum_{i=n-l+1}^{\infty} |c_i| \le \frac{2V_k}{k\pi} \left( \frac{b-a}{4} \right)^k \sum_{j=0}^{k} \binom{k}{j} \frac{1}{(n-l+j)(n-l+j-1)\cdots(n-l+j-k+1)}. \tag{4.10}$$

Case 2. Similarly, for $d = n+l$, $l = 0, 1, \dots, n-k-1$, we apply Theorem 3.2 to get

$$\sum_{i=n-l}^{\infty} |c_i| \le \frac{2V_k}{k\pi} \left( \frac{b-a}{4} \right)^k \sum_{j=0}^{k} \binom{k}{j} \frac{1}{(n-l+j-1)(n-l+j-2)\cdots(n-l+j-k)}. \tag{4.11}$$

By substituting (4.10) and (4.11) in (4.9), we get

$$\|f - C_{d,n}[f]\|_1 \le \begin{cases} 2(b-a) \dfrac{2V_k}{k\pi} \left( \dfrac{b-a}{4} \right)^k \displaystyle\sum_{j=0}^{k} \binom{k}{j} \dfrac{1}{(n-l+j)(n-l+j-1)\cdots(n-l+j-k+1)}, & \text{for } d = n-l,\ l = 1, 2, \dots, n-k,\\[2mm] 3(b-a) \dfrac{2V_k}{k\pi} \left( \dfrac{b-a}{4} \right)^k \displaystyle\sum_{j=0}^{k} \binom{k}{j} \dfrac{1}{(n-l+j-1)(n-l+j-2)\cdots(n-l+j-k)}, & \text{for } d = n+l,\ l = 0, 1, \dots, n-k-1. \end{cases}$$

    Hence, we have the required results.
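The bounds (4.7)-(4.8) are explicit and cheap to evaluate, so they can be compared directly with a measured $L^1$ error. The sketch below (helper names and the test case are ours) does this for $f(t) = |t|$ on $[-1,1]$, for which $k = 1$ and $V_1 = \mathrm{Var}(\operatorname{sign}) = 2$:

```python
import numpy as np
from math import comb, pi

def T_bound(a, b, k, Vk, n, d):
    """Evaluate T_{d,n} of Theorem 4.2: Eq. (4.7) if d < n, Eq. (4.8) if d >= n."""
    l = n - d if d < n else d - n
    const = (4 if d < n else 6) * Vk * (b - a) ** (k + 1) / (4**k * k * pi)
    start = (n - l) if d < n else (n - l - 1)               # first factor at j = 0
    falling = lambda m: np.prod([m - i for i in range(k)])  # m (m-1) ... (m-k+1)
    return const * sum(comb(k, j) / falling(start + j) for j in range(k + 1))

n, d = 64, 60                                   # d = n - l with l = 4
theta = (2 * np.arange(1, n + 1) - 1) * pi / (2 * n)
deg = np.arange(d + 1)
c = (2.0 / n) * np.cos(np.outer(deg, theta)) @ np.abs(np.cos(theta))
x = np.linspace(-1, 1, 20001)
Cdn = np.cos(np.outer(np.arccos(x), deg)) @ c - 0.5 * c[0]
err = np.sum(np.abs(np.abs(x) - Cdn)) * (x[1] - x[0])   # ||f - C_{d,n}[f]||_1
print(err, T_bound(-1, 1, 1, 2.0, n, d), err <= T_bound(-1, 1, 1, 2.0, n, d))
```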

In this section, we numerically illustrate that the improved decay estimate of the Chebyshev coefficients and the $L^1$-error estimate of the truncated Chebyshev series approximation, obtained in Section 4, are sharper than the earlier ones obtained in [31].

Example 5.1. Let us consider the following example:

$$g(t) = \frac{|t|}{t+2}, \quad t \in [-1,1]. \tag{5.1}$$

The function $g$ is absolutely continuous, and

$$g'(t) = \begin{cases} -\dfrac{2}{(t+2)^2}, & \text{if } -1 \le t < 0,\\[2mm] \dfrac{2}{(t+2)^2}, & \text{if } 0 < t \le 1, \end{cases}$$

which is not continuous. Therefore, we have to take $k = 1$. Let us check the other hypotheses of Theorems 3.1 and 3.2. Using the weak derivative of $g'$, we can compute $V_1$ in Theorems 3.1 and 4.1 as

$$V_1 = 1 + \frac{2\pi}{\sqrt{3}} < \infty,$$

and the total variation of $g'$ is approximately $2.7778$, which is taken as the value of $V_1$ in Theorems 3.2 and 4.2.

The decay estimates of the Chebyshev series coefficients $c_j$ of $g$ (the bounds (3.2) and (3.8) of Theorems 3.1 and 3.2), for $j = 2, 3, \dots$, as functions of $j$, are depicted in Figure 1(a). The error estimates for the truncated Chebyshev series obtained in Theorems 4.1 and 4.2 are demonstrated in Figure 1(b). It can be seen that the improved estimates derived using the results of Xiang [10] are sharper than the earlier ones obtained using the results of Majidian [31].

Figure 1. (a) Comparison between the decay bounds of $|c_j|$, for $j = 2, 3, \dots, 30$, derived in Theorem 3.1 (blue line) and Theorem 3.2 (red line). (b) Comparison between the error estimates obtained in Theorem 4.1 (blue line) and Theorem 4.2 (red line) for $d = n - l$, where $n = 200$ and $l = 2^j$, $j = 1, 2, \dots, 7$.
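The comparison in Figure 1(a) is easy to reproduce. In the sketch below (grid sizes ours), the coefficients of $g$ are computed by the quadrature rule (2.7); for $k = 1$ both (3.2) and (3.8) reduce to the form $\frac{2V_1}{\pi (j^2 - 1)}$, which makes the comparison transparent: the Theorem 3.2 bound is smaller precisely because $\mathrm{Var}(g') = 25/9 \approx 2.7778$ is smaller than $1 + 2\pi/\sqrt{3} \approx 4.6276$.

```python
import numpy as np

g = lambda t: np.abs(t) / (t + 2)               # Example 5.1 on [-1, 1]
N = 4096
theta = (2 * np.arange(1, N + 1) - 1) * np.pi / (2 * N)
j = np.arange(2, 31)
cj = np.abs((2.0 / N) * np.cos(np.outer(j, theta)) @ g(np.cos(theta)))
V1_thm31 = 1 + 2 * np.pi / np.sqrt(3)           # ||g'||_T, Theorems 3.1 / 4.1
V1_thm32 = 25 / 9                               # Var(g'), Theorems 3.2 / 4.2
b31 = 2 * V1_thm31 / (np.pi * (j**2 - 1))       # bound (3.2), k = 1
b32 = 2 * V1_thm32 / (np.pi * (j**2 - 1))       # bound (3.8), k = 1
print(np.all(cj <= b32), np.all(b32 < b31))     # True True: (3.8) is sharper, as in Fig. 1(a)
```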

In conclusion, this study extends the applicability of Chebyshev series to functions defined beyond the traditional $[-1,1]$ domain, broadening the scope of Chebyshev approximations for a variety of real-world applications. By introducing generalized decay bounds and truncation error results for Chebyshev approximations, we provide a more efficient and accessible framework for approximating functions, particularly in situations where exact computation of Chebyshev coefficients is not feasible. The results are highly relevant for fields such as spectral graph neural networks (GNNs), where Chebyshev approximations are commonly used to analyze graph signals and compute spectral filters efficiently. Additionally, these findings can enhance approximation techniques in image processing, particularly in tasks like edge detection and image compression, where rapid, accurate approximations are crucial. Overall, the theoretical advancements presented here offer a promising path toward improved computational efficiency and approximation accuracy across a wide range of applications, particularly for functions with bounded variation.

    S. Akansha: Conceptualization, methodology, writing-original draft preparation, visualization; Aditya Subramaniam: Investigation, validation, writing and proofreading, reviewing and editing. All authors have read and approved the final version of the manuscript for publication.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    Dr. Akansha expresses her sincere gratitude to her Ph.D. supervisor, Prof. S. Baskar, for his invaluable insights and constructive feedback on the research problem addressed in this article. His thoughtful guidance greatly enhanced the clarity and overall quality of this work.

    The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.


