    The study of difference equations has long been a cornerstone in the understanding of discrete dynamical systems, with applications spanning fields as diverse as economics, biology, engineering, and time series analysis. Since the 18th century, significant progress has been made in deriving closed-form solutions for certain classes of difference equations and systems (see [1,2]). These early contributions laid the foundation for the exploration of solvability in difference equations, a topic still prevalent in modern research. Many classical texts dedicate chapters to the solvability of these systems (see [3,4,5]), while recent advancements continue to extend the boundaries of this field (see [6,7]). Despite substantial progress, solvable difference equations remain a narrow subset of the broader spectrum of systems, prompting the use of alternative methods to analyze those that lack closed-form solutions (see [4,5,6,7,8,9,10,11,12,13]).

    The quest for solvability often revolves around deriving applicable closed-form formulas, though these can sometimes become exceedingly complex. In such cases, qualitative analysis becomes a more practical approach, providing insights into the system's behavior when explicit solutions are either cumbersome or impossible to obtain (see [14]). Nevertheless, when possible, having general solution formulas for new classes of difference equations is invaluable. Many systems, including those involving nonlinearities, can often be transformed into known solvable ones, thereby inheriting their solvability properties. Notable examples include two-dimensional systems studied by Stević, where mathematical transformations were applied to render the systems solvable. For instance, in the work by Stević [15], the system of difference equations

    $$p_m=\frac{\delta_m q_{m-3}}{\alpha_m+\beta_m q_{m-1}p_{m-2}q_{m-3}},\qquad q_m=\frac{\delta_m p_{m-3}}{\alpha_m+\beta_m p_{m-1}q_{m-2}p_{m-3}},\qquad m\in\mathbb{N}_0,$$

    was transformed into a more manageable form, leading to its solvability. Similarly, in another study [16], the system

    $$p_m=\frac{p_{m-i}q_{m-j}}{\alpha_m p_{m-i}+\beta_m q_{m-i-j}},\qquad q_m=\frac{q_{m-i}p_{m-j}}{\alpha_m q_{m-i}+\beta_m p_{m-i-j}},\qquad m\in\mathbb{N}_0,$$

    was also successfully transformed into a solvable form. Additionally, in a further study [17], a similar approach was used for the system

    $$p_{m+1}=p_m\,\frac{\alpha p_m q_m+\beta p_{m-1}q_{m-1}}{p_{m-1}q_m},\qquad q_{m+1}=\frac{p_{m-1}q_m^2}{\alpha p_m q_m+\beta p_{m-1}q_{m-1}},\qquad m\in\mathbb{N}_0,$$

    demonstrating how transformations could simplify the system's structure and allow for deeper insights into its dynamics. These examples underscore the continued relevance of transformation techniques in the analysis of complex dynamic behaviors within difference systems. Such transformations not only render the systems solvable but also provide a clearer understanding of the underlying patterns governing their behavior.

    As complex systems continue to emerge in various scientific domains, the need for robust models capable of capturing both the linear and nonlinear behaviors of these systems has grown significantly. Among the various classes of difference equations, three-dimensional systems present a particularly rich area of investigation due to their potential to model real-world phenomena where multiple interacting variables influence the system's evolution. Recent advancements in the study of three-dimensional difference equations have highlighted their critical role in understanding complex dynamical behaviors, especially in systems with nonlinear feedback and interacting variables. For instance, Elsayed et al. [18] investigated solutions for nonlinear rational systems, revealing intricate behaviors governed by recursive relations. Similarly, Khaliq et al. [19] analyzed exponential difference equations, uncovering stability properties and persistent dynamics in three-dimensional systems. Additionally, Khaliq et al. [20] investigated the boundedness, persistence, global and local behavior, and rate of convergence of positive solutions of a second-order three-dimensional system of exponential difference equations, and provided examples to support their theoretical results. These studies underscore the growing importance of developing analytical and computational tools to tackle the challenges posed by such systems, particularly when closed-form solutions are not readily available. Our paper builds on these contributions by introducing a bilinear transformation approach, providing a novel perspective to study and simplify three-dimensional systems. In this paper, we focus on a specific three-dimensional system of difference equations,

    $$p_{m+1}=p_m\,\frac{\alpha r_m+\beta p_{m-1}q_{m-1}}{p_{m-1}q_m},\qquad q_{m+1}=\frac{p_{m-1}q_m^2}{\gamma r_m+\delta p_{m-1}q_{m-1}},\qquad r_{m+1}=p_m q_m\,\frac{\varepsilon p_m q_m+\lambda p_{m-1}q_{m-1}}{\tau p_m q_m+\sigma p_{m-1}q_{m-1}},\qquad m\in\mathbb{N}_0, \tag{1.a}$$

    where $\alpha,\beta,\gamma,\delta,\varepsilon,\lambda,\tau,\sigma\in\mathbb{R}$ with $\alpha^2+\beta^2\neq0$, $\gamma^2+\delta^2\neq0$, $\varepsilon^2+\lambda^2\neq0$, $\tau^2+\sigma^2\neq0$, and initial values $p_0,q_0,p_{-1},q_{-1},r_0\in\mathbb{R}$. We employ advanced mathematical transformations and computational tools to unlock a deeper understanding of its dynamics. One of the critical challenges in studying such systems is the complexity of their solutions, especially when nonlinearities and feedback mechanisms are involved. To address this, we introduce a bilinear transformation that simplifies the original system, offering not only a clearer interpretation of its structure but also a more tractable equivalent two-dimensional representation. This approach allows us to focus on essential properties like stability, oscillations, and growth, and how these characteristics evolve under different parameter settings.

    Our investigation reveals that this system exhibits rich dynamical behaviors, including oscillatory and stable states, which are significantly influenced by the discriminant of the quadratic polynomial that defines the system. By examining cases of repeated and distinct characteristic roots, we uncover how subtle changes in system parameters can lead to dramatically different outcomes. This insight is particularly valuable for applications in fields like neural networks, where the dynamic interactions between neurons often mirror such complex behaviors.

    In the pursuit of developing an advanced model for analyzing the complex interactions within neural systems, we present a sophisticated mathematical framework that integrates recursive equations and nonlinear dynamics. This system aims to explore the evolution of neural signals over time, focusing on how different variables interact to influence overall system stability and activity patterns. By constructing a model based on three critical variables: $p_m$, representing the first neural activity signal; $q_m$, denoting the second neural response; and $r_m$, reflecting the combined influence of both signals, we investigate how fluctuations in these signals impact both short-term dynamics and long-term stability. The recursive nature of the model enables us to track the dynamic behavior of these variables over time, incorporating nonlinear terms to simulate feedback loops characteristic of neural systems. Parameters of our model modulate interactions between the signals, with small constants introduced to ensure numerical stability. Through simulations under varying initial conditions, the model demonstrates how changes in parameter values can lead to markedly different outcomes, such as oscillatory or stable states.

    The use of visualizations such as time series, phase diagrams, and frequency analysis complements the theoretical findings, offering insights into the system's behavior across multiple dimensions. These methods are particularly useful in identifying patterns like periodicity, bifurcations, or chaotic behavior, which are critical in understanding how neural systems respond to external stimuli and internal perturbations. This approach not only enhances our understanding of the underlying dynamics in neural networks but also provides a powerful tool for simulating real-world neural behavior. Through this analysis, we aim to bridge the gap between theoretical models and practical applications, offering insights relevant to fields such as neurobiology, artificial neural networks, and systems neuroscience.

    The rest of the paper is organized as follows: In Section 2, we address the resolution of the equation system in (1.a). Section 3 focuses on simulation, providing a dynamic analysis of neural activity and system stability using recursive models. Finally, Section 4 concludes the paper with a discussion of the findings and potential directions for future research.

    In this section, we turn our attention to the systematic approach required to solve the difference equations outlined in (1.a), subject to specific initial conditions. Consider the sequence $(p_m,q_m,r_m)$ representing the solution to the system. It is important to highlight that the system may break down and lose its definition if any of the initial values are zero. To maintain the viability and continuity of the solution, the condition $r_{m+1}\,q_m\,p_m\neq0$ must hold for every $m\geq-1$. This ensures the solution is consistent and free from singularities. From this stage onward, we adopt the assumption that $r_0\,q_{-k}\,p_{-k}\neq0$ for $k\in\{0,1\}$. This assumption plays a pivotal role in ensuring the continued existence of a well-defined solution. By securing the integrity of the system through these conditions, we can delve deeper into its dynamics and uncover potential behaviors that the system might exhibit over time.

    To ease the analysis and provide a fresh perspective on the system (1.a), we introduce a strategic change of variables. This redefinition aims to simplify the relationships between the variables and allow for clearer insight into the nature of the system. The transformation is expressed as:

    $$\hat p_m=\frac{p_m q_m}{p_{m-1}q_{m-1}},\qquad \hat r_m=\frac{r_m}{p_{m-1}q_{m-1}},\qquad m\in\mathbb{N}_0. \tag{2.a}$$

    The primary objective of this redefinition is to restructure the system into a more manageable form while retaining its key characteristics. By leveraging these new variables, we simplify the interactions between components, enabling a more straightforward exploration of the system's dynamic behavior. This approach sets the stage for deriving essential properties of the model. To further this simplification, we multiply the corresponding sides of the first and second equations in (1.a), obtaining:

    $$p_{m+1}q_{m+1}=p_m q_m\,\frac{\alpha r_m+\beta p_{m-1}q_{m-1}}{\gamma r_m+\delta p_{m-1}q_{m-1}},\qquad r_{m+1}=p_m q_m\,\frac{\varepsilon p_m q_m+\lambda p_{m-1}q_{m-1}}{\tau p_m q_m+\sigma p_{m-1}q_{m-1}},\qquad m\in\mathbb{N}_0.$$

    Simplifying further, we express the system in terms of the transformed variables:

    $$\frac{p_{m+1}q_{m+1}}{p_m q_m}=\frac{\alpha\frac{r_m}{p_{m-1}q_{m-1}}+\beta}{\gamma\frac{r_m}{p_{m-1}q_{m-1}}+\delta},\qquad \frac{r_{m+1}}{p_m q_m}=\frac{\varepsilon\frac{p_m q_m}{p_{m-1}q_{m-1}}+\lambda}{\tau\frac{p_m q_m}{p_{m-1}q_{m-1}}+\sigma},\qquad m\in\mathbb{N}_0.$$

    Through this process, the original system (1.a) is recast into the following bilinear form:

    $$\hat p_{m+1}=\frac{\alpha\hat r_m+\beta}{\gamma\hat r_m+\delta},\qquad \hat r_{m+1}=\frac{\varepsilon\hat p_m+\lambda}{\tau\hat p_m+\sigma},\qquad m\in\mathbb{N}_0. \tag{2.b}$$

    This new system formulation has a bilinear structure that makes its behavior considerably easier to analyze. By adopting this approach, we can better understand the interplay between the variables $\hat p_m$ and $\hat r_m$. Moreover, this transformation not only aids in solving the system but also provides valuable insights into the long-term dynamics. The bilinear form allows us to examine specific cases where the system may exhibit stability, periodicity, or chaotic behavior, depending on the chosen initial conditions and parameter values.
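As a quick numerical sanity check of this reduction, one can iterate (1.a) directly and verify that the transformed variables defined in (2.a) indeed obey the bilinear system (2.b). The sketch below does this in Python; the parameter and initial values are arbitrary illustrative assumptions, not values used elsewhere in the paper.

```python
# Sanity check of the reduction (1.a) -> (2.b); all numeric values here are
# illustrative assumptions, not taken from the paper.
a, b, g, d = 0.7, 0.5, 0.4, 0.6          # alpha, beta, gamma, delta
e, lam, tau, sig = 0.8, 0.9, 1.0, 1.1    # epsilon, lambda, tau, sigma

p_prev, q_prev = 0.5, 0.2                # p_{-1}, q_{-1}
p, q, r = 0.3, 0.1, 0.2                  # p_0, q_0, r_0

for m in range(20):
    # one step of system (1.a)
    p_next = p * (a * r + b * p_prev * q_prev) / (p_prev * q)
    q_next = p_prev * q**2 / (g * r + d * p_prev * q_prev)
    r_next = p * q * (e * p * q + lam * p_prev * q_prev) \
                   / (tau * p * q + sig * p_prev * q_prev)

    # transformed variables (2.a) at steps m and m+1
    p_hat, r_hat = p * q / (p_prev * q_prev), r / (p_prev * q_prev)
    p_hat1, r_hat1 = p_next * q_next / (p * q), r_next / (p * q)

    # they must satisfy the bilinear system (2.b)
    assert abs(p_hat1 - (a * r_hat + b) / (g * r_hat + d)) < 1e-9
    assert abs(r_hat1 - (e * p_hat + lam) / (tau * p_hat + sig)) < 1e-9

    p_prev, q_prev, p, q, r = p, q, p_next, q_next, r_next
```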

    The nonlinear two-dimensional system of difference equations described in (2.b) was originally introduced by Stević and Tollu [17], who demonstrated that the system is solvable by providing a comprehensive method for determining its general solution. Their analysis reveals that certain constraints on the system's parameters are necessary to ensure its solvability. For instance, when $\gamma=0$, the first equation in the system (2.b) can be substituted into the second equation, which results in a system characterized by interlacing indices, a structure derived from a bilinear equation. This special case is solvable due to the simplified form of the system. A similar outcome is observed when $\tau=0$, which forms a dual case to the one where $\gamma=0$. Given these observations, we will operate under the assumption that neither $\gamma$ nor $\tau$ is equal to zero, ensuring the system maintains its nonlinear complexity. Furthermore, to avoid dealing with ill-defined solutions, additional conditions must be met: specifically, we assume $\gamma\hat r_m+\delta\neq0$ and $\tau\hat p_m+\sigma\neq0$ for all $m\in\mathbb{N}_0$. These conditions guarantee that the terms in the equations remain valid across iterations. With these assumptions in place, we present the following lemma, as derived from the work of Stević and Tollu [17], which lays the groundwork for further analysis of the system.

    Lemma 2.1. Let $\alpha,\beta,\gamma,\delta,\varepsilon,\lambda,\tau,\sigma\in\mathbb{R}$, ensuring that the following condition holds: $\gamma(\alpha^2+\beta^2)\,\tau(\varepsilon^2+\lambda^2)\neq0$. Furthermore, let $\mu_1$ and $\mu_2$ represent the roots of the quadratic polynomial:

    $$Q(\mu)=\mu^2-(\alpha\varepsilon+\beta\tau+\gamma\lambda+\delta\sigma)\mu+(\beta\gamma-\alpha\delta)(\lambda\tau-\varepsilon\sigma).$$

    Under these conditions, the system of difference equations given in (2.b) admits a closed-form solution. The general solution to this system can be described in two distinct cases, depending on whether the roots $\mu_1$ and $\mu_2$ coincide or differ.

    i. Case 1: $\mu_1=\mu_2$. In the case where the polynomial $Q(\mu)$ has a repeated root $\mu_2$, the general solution to the system is given by:

    $$\frac{\tau\hat p_{2m}+\sigma}{\tau\hat p_0+\sigma}=\frac{\big((\tau\hat p_2+\sigma)(\gamma\hat r_1+\delta)-\mu_2\big)m+\mu_2}{\big((\tau\hat p_0+\sigma)(\gamma\hat r_1+\delta)-\mu_2\big)m+\mu_2},\qquad \frac{(\gamma\hat r_0+\delta)(\tau\hat p_{2m+1}+\sigma)}{\mu_2}=\frac{\big((\tau\hat p_1+\sigma)(\gamma\hat r_0+\delta)-\mu_2\big)(m+1)+\mu_2}{\big((\tau\hat p_1+\sigma)(\gamma\hat r_2+\delta)-\mu_2\big)m+\mu_2},$$
    $$\frac{\gamma\hat r_{2m}+\delta}{\gamma\hat r_0+\delta}=\frac{\big((\tau\hat p_1+\sigma)(\gamma\hat r_2+\delta)-\mu_2\big)m+\mu_2}{\big((\tau\hat p_1+\sigma)(\gamma\hat r_0+\delta)-\mu_2\big)m+\mu_2},\qquad \frac{(\tau\hat p_0+\sigma)(\gamma\hat r_{2m+1}+\delta)}{\mu_2}=\frac{\big((\tau\hat p_0+\sigma)(\gamma\hat r_1+\delta)-\mu_2\big)(m+1)+\mu_2}{\big((\tau\hat p_2+\sigma)(\gamma\hat r_1+\delta)-\mu_2\big)m+\mu_2},$$
    for $m\in\mathbb{N}_0$, where $\hat p_1=\frac{\alpha\hat r_0+\beta}{\gamma\hat r_0+\delta}$, $\hat r_1=\frac{\varepsilon\hat p_0+\lambda}{\tau\hat p_0+\sigma}$, $\hat p_2=\frac{\alpha\hat r_1+\beta}{\gamma\hat r_1+\delta}$, and $\hat r_2=\frac{\varepsilon\hat p_1+\lambda}{\tau\hat p_1+\sigma}$ denote the first iterates of (2.b).

    This solution highlights how the behavior of the system evolves in the case of a repeated root, where the sequences $(\hat p_m)$ and $(\hat r_m)$ depend on both the initial values and the root $\mu_2$, with a linear dependence on the index $m$.

    ii. Case 2: $\mu_1\neq\mu_2$. When the roots $\mu_1$ and $\mu_2$ are distinct, the general solution takes a different form, reflecting the more complex interplay between the two roots:

    $$\frac{\tau\hat p_{2m}+\sigma}{\tau\hat p_0+\sigma}=\frac{(\tau\hat p_2+\sigma)(\gamma\hat r_1+\delta)(\mu_1^{m}-\mu_2^{m})+\mu_1\mu_2(\mu_2^{m-1}-\mu_1^{m-1})}{(\tau\hat p_0+\sigma)(\gamma\hat r_1+\delta)(\mu_1^{m}-\mu_2^{m})+\mu_1\mu_2(\mu_2^{m-1}-\mu_1^{m-1})},$$
    $$(\gamma\hat r_0+\delta)(\tau\hat p_{2m+1}+\sigma)=\frac{(\tau\hat p_1+\sigma)(\gamma\hat r_0+\delta)(\mu_1^{m+1}-\mu_2^{m+1})+\mu_1\mu_2(\mu_2^{m}-\mu_1^{m})}{(\tau\hat p_1+\sigma)(\gamma\hat r_2+\delta)(\mu_1^{m}-\mu_2^{m})+\mu_1\mu_2(\mu_2^{m-1}-\mu_1^{m-1})},$$
    $$\frac{\gamma\hat r_{2m}+\delta}{\gamma\hat r_0+\delta}=\frac{(\tau\hat p_1+\sigma)(\gamma\hat r_2+\delta)(\mu_1^{m}-\mu_2^{m})+\mu_1\mu_2(\mu_2^{m-1}-\mu_1^{m-1})}{(\tau\hat p_1+\sigma)(\gamma\hat r_0+\delta)(\mu_1^{m}-\mu_2^{m})+\mu_1\mu_2(\mu_2^{m-1}-\mu_1^{m-1})},$$
    $$(\tau\hat p_0+\sigma)(\gamma\hat r_{2m+1}+\delta)=\frac{(\tau\hat p_0+\sigma)(\gamma\hat r_1+\delta)(\mu_1^{m+1}-\mu_2^{m+1})+\mu_1\mu_2(\mu_2^{m}-\mu_1^{m})}{(\tau\hat p_2+\sigma)(\gamma\hat r_1+\delta)(\mu_1^{m}-\mu_2^{m})+\mu_1\mu_2(\mu_2^{m-1}-\mu_1^{m-1})},$$
    for $m\in\mathbb{N}_0$, with $\hat p_1,\hat r_1,\hat p_2,\hat r_2$ the first iterates of (2.b) computed from $\hat p_0$ and $\hat r_0$.

    This form of the solution captures the intricate dynamics of the system when the two roots differ. The system exhibits oscillatory or exponential behavior depending on the magnitude and sign of $\mu_1$ and $\mu_2$, reflecting the underlying complexity of the difference equations.

    Remark 2.1. In certain cases, when specific relationships hold among the parameters of the system, simpler solutions can be derived. Specifically, if $\beta\gamma=\alpha\delta$ or $\lambda\tau=\varepsilon\sigma$, the system (2.b) can be significantly simplified. When $\beta\gamma=\alpha\delta$, the system reduces to a straightforward form where the sequences $(\hat p_m)$ and $(\hat r_m)$ stabilize to constant values. In this case, we have the following:

    $$\hat p_m=\begin{cases}\dfrac{\alpha}{\gamma}, & \gamma\neq0,\\[1ex]\dfrac{\beta}{\delta}, & \delta\neq0,\end{cases}\qquad \hat r_m=\begin{cases}\dfrac{\alpha\varepsilon+\gamma\lambda}{\alpha\tau+\gamma\sigma}, & \gamma\neq0,\\[1ex]\dfrac{\beta\varepsilon+\delta\lambda}{\beta\tau+\delta\sigma}, & \delta\neq0,\end{cases}\qquad m\in\mathbb{N}_0.$$

    Similarly, if $\lambda\tau=\varepsilon\sigma$, the sequences $(\hat p_m)$ and $(\hat r_m)$ are also reduced to constant expressions, but with a different set of parameters governing their values:

    $$\hat p_m=\begin{cases}\dfrac{\varepsilon\alpha+\tau\beta}{\varepsilon\gamma+\tau\delta}, & \tau\neq0,\\[1ex]\dfrac{\lambda\alpha+\sigma\beta}{\lambda\gamma+\sigma\delta}, & \sigma\neq0,\end{cases}\qquad \hat r_m=\begin{cases}\dfrac{\varepsilon}{\tau}, & \tau\neq0,\\[1ex]\dfrac{\lambda}{\sigma}, & \sigma\neq0,\end{cases}\qquad m\in\mathbb{N}_0.$$

    These simplified cases are particularly useful when certain parameter combinations lead to degeneracies in the system, enabling us to derive constant sequences for $(\hat p_m)$ and $(\hat r_m)$, which can make further analysis much more manageable.
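The degenerate case of Remark 2.1 is easy to confirm numerically. The sketch below, with hypothetical parameter values chosen so that $\beta\gamma=\alpha\delta$, shows that $(\hat p_m)$ and $(\hat r_m)$ freeze at the constants given above after a single step of (2.b):

```python
# Numerical illustration of Remark 2.1 (parameter values are hypothetical):
# with beta*gamma = alpha*delta, the p-hat map in (2.b) collapses to the constant
# alpha/gamma, and r-hat then freezes at (alpha*eps + gamma*lam)/(alpha*tau + gamma*sig).
a, g, d = 0.6, 0.3, 0.5                  # alpha, gamma, delta
b = a * d / g                            # choose beta so that beta*gamma = alpha*delta
e, lam, tau, sig = 0.8, 0.9, 1.0, 1.1    # epsilon, lambda, tau, sigma

p_hat, r_hat = 0.3, 2.0                  # arbitrary starting values
for _ in range(5):
    p_hat = (a * r_hat + b) / (g * r_hat + d)
    r_hat = (e * p_hat + lam) / (tau * p_hat + sig)

assert abs(p_hat - a / g) < 1e-12
assert abs(r_hat - (a * e + g * lam) / (a * tau + g * sig)) < 1e-12
```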

    Remark 2.2. Additionally, from the quadratic polynomial $Q(\mu)$, a necessary and sufficient condition for the roots $\mu_1$ and $\mu_2$ to be distinct is that the following inequality holds:

    $$\frac{(\alpha\varepsilon+\beta\tau+\gamma\lambda+\delta\sigma)^2}{(\beta\gamma-\alpha\delta)(\lambda\tau-\varepsilon\sigma)}\neq 4. \tag{2.c}$$

    This condition ensures that the system avoids critical cases where the roots $\mu_1$ and $\mu_2$ coincide, thereby maintaining the generality of the solution structure.
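In practice, $\mu_1$ and $\mu_2$ can be computed conveniently by observing that $Q(\mu)$ is the characteristic polynomial of the product $M=AB$ of the coefficient matrices of the two Möbius maps in (2.b); this is an observation that follows from the bilinear structure rather than being stated explicitly above. A short numerical check, with illustrative parameter values, is the following sketch:

```python
import numpy as np

# Q(mu) coincides with the characteristic polynomial of M = A @ B, where A and B
# collect the coefficients of the two Moebius maps in (2.b). Parameter values
# below are illustrative assumptions.
a, b, g, d = 0.7, 0.5, 0.4, 0.6          # alpha, beta, gamma, delta
e, lam, tau, sig = 0.8, 0.9, 1.0, 1.1    # epsilon, lambda, tau, sigma

A = np.array([[a, b], [g, d]])
B = np.array([[e, lam], [tau, sig]])
M = A @ B

trace = a * e + b * tau + g * lam + d * sig      # coefficient of mu in Q (up to sign)
det = (b * g - a * d) * (lam * tau - e * sig)    # constant term of Q
assert abs(np.trace(M) - trace) < 1e-12
assert abs(np.linalg.det(M) - det) < 1e-12

mu = np.sort(np.roots([1.0, -trace, det]))       # mu_1, mu_2 as roots of Q
assert np.allclose(mu, np.sort(np.linalg.eigvals(M)))

# condition (2.c) for distinct roots holds for these parameter values
assert trace**2 / det != 4
```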

    According to Lemma 2.1, we derive the closed-form expressions for the solutions of system (1.a). From these solutions, we can formulate the following theorem.

    Theorem 2.1. Let $\alpha,\beta,\gamma,\delta,\varepsilon,\lambda,\tau,\sigma\in\mathbb{R}$, where the condition $\gamma(\alpha^2+\beta^2)\,\tau(\varepsilon^2+\lambda^2)\neq0$ holds. Under these parameters, the system of difference equations presented in (1.a) admits a general closed-form solution. The solution to this system is expressed in two distinct cases, based on whether the roots $\mu_1$ and $\mu_2$ are identical or distinct.

    For $s\in\{0,1\}$ and $m\in\mathbb{N}_0$ with $2m+s\geq1$, the solution is given by

    $$p_{2m+s}=p_1\prod_{l=2}^{2m+s}\Big(\frac{q_0}{q_1}\,(\alpha\hat r_{l-1}+\beta)\prod_{k=1}^{l-2}(\gamma\hat r_k+\delta)\Big),\qquad q_{2m+s}=q_0\prod_{l=1}^{2m+s}\Big(\frac{q_1}{q_0}\prod_{k=1}^{l-1}\frac{1}{\gamma\hat r_k+\delta}\Big),$$

    $$r_{2m}=p_{2m-1}\,q_{2m-1}\,\hat r_{2m},\qquad r_{2m+1}=p_{2m}\,q_{2m}\,\hat r_{2m+1},$$

    where $p_1=p_0\,\dfrac{\alpha r_0+\beta p_{-1}q_{-1}}{p_{-1}q_0}$ and $q_1=\dfrac{p_{-1}q_0^2}{\gamma r_0+\delta p_{-1}q_{-1}}$ are obtained directly from (1.a), and the sequence $(\hat r_m)$ is given by the closed-form expressions of Lemma 2.1 applied with $\hat p_0=\dfrac{p_0q_0}{p_{-1}q_{-1}}$ and $\hat r_0=\dfrac{r_0}{p_{-1}q_{-1}}$, in the following two cases.

    i. Case 1: When the assumption specified in (2.c) is satisfied, the roots $\mu_1$ and $\mu_2$ are distinct, and $(\hat r_m)$ is given by the formulas of Case ii of Lemma 2.1.

    ii. Case 2: When the assumption in (2.c) is not satisfied, the roots coincide, $\mu_1=\mu_2$, and $(\hat r_m)$ is given by the formulas of Case i of Lemma 2.1.

    Proof. To prove this, we first express the system (1.a) in a more suitable form by rewriting it in terms of the transformed variables (2.a):

    $$p_{m+1}=\frac{q_{m-1}\,p_m}{q_m}\,(\alpha\hat r_m+\beta),\qquad \frac{q_{m+1}}{q_m}=\frac{q_m}{q_{m-1}}\,\big(\gamma\hat r_m+\delta\big)^{-1},\qquad m\in\mathbb{N}_0.$$

    From the second equation, we find:

    $$q_m=q_{m-1}\,\frac{q_1}{q_0}\prod_{k=1}^{m-1}\frac{1}{\gamma\hat r_k+\delta}=\frac{q_0\,q_1^m}{\prod_{l=1}^{m}\Big(q_0\prod_{k=1}^{l-1}(\gamma\hat r_k+\delta)\Big)},\qquad m\in\mathbb{N}_0. \tag{2.d}$$

    Substituting this result into the first equation of the system yields:

    $$p_m=p_{m-1}\,\frac{q_0}{q_1}\,(\alpha\hat r_{m-1}+\beta)\prod_{k=1}^{m-2}(\gamma\hat r_k+\delta),\qquad m\geq2.$$

    Thus, we obtain:

    $$p_m=p_1\prod_{l=2}^{m}\Big(\frac{q_0}{q_1}\,(\alpha\hat r_{l-1}+\beta)\prod_{k=1}^{l-2}(\gamma\hat r_k+\delta)\Big),\qquad m\in\mathbb{N}_0. \tag{2.e}$$

    Finally, using the expressions from (2.d) and (2.e) in the second change of variables from (2.a), we find that $r_m=p_{m-1}q_{m-1}\hat r_m$ for $m\in\mathbb{N}_0$. Upon further decomposition, this results in the following expressions: $r_{2m}=p_{2m-1}q_{2m-1}\hat r_{2m}$ and $r_{2m+1}=p_{2m}q_{2m}\hat r_{2m+1}$ for $m\in\mathbb{N}_0$. These equations highlight the recursive structure of the system, where each term is expressed in terms of preceding variables and parameters, reflecting the intricate interdependence of the system's components under the applied transformations.
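The product representations (2.d) and (2.e) can also be checked numerically: iterating (1.a) directly and rebuilding $q_m$ and $p_m$ from the transformed sequence $(\hat r_m)$ must give the same values. The following is a sketch with illustrative parameters and initial data (all assumed, not taken from the paper):

```python
# Numerical check that the product representations (2.d) and (2.e) reproduce
# direct iteration of (1.a); all numeric values are illustrative assumptions.
a, b, g, d = 0.7, 0.5, 0.4, 0.6          # alpha, beta, gamma, delta
e, lam, tau, sig = 0.8, 0.9, 1.0, 1.1    # epsilon, lambda, tau, sigma

p = [0.5, 0.3]   # list index i holds p_{i-1}: p_{-1}, p_0, p_1, ...
q = [0.2, 0.1]   # q_{-1}, q_0, q_1, ...
r = [0.2]        # r_0, r_1, ...

for m in range(12):
    pm1, qm1, pm, qm, rm = p[m], q[m], p[m + 1], q[m + 1], r[m]
    p.append(pm * (a * rm + b * pm1 * qm1) / (pm1 * qm))
    q.append(pm1 * qm**2 / (g * rm + d * pm1 * qm1))
    r.append(pm * qm * (e * pm * qm + lam * pm1 * qm1)
                     / (tau * pm * qm + sig * pm1 * qm1))

# r_hat_m = r_m / (p_{m-1} q_{m-1})
r_hat = [r[m] / (p[m] * q[m]) for m in range(len(r))]

for m in range(2, 10):
    prod = 1.0
    for k in range(1, m):
        prod *= g * r_hat[k] + d
    # (2.d): q_m = q_{m-1} (q_1/q_0) / prod_{k=1}^{m-1}(g r_hat_k + d)
    assert abs(q[m + 1] - q[m] * (q[2] / q[1]) / prod) < 1e-9 * abs(q[m + 1])
    # step form of (2.e): p_m = p_{m-1} (q_0/q_1)(a r_hat_{m-1} + b) prod_{k=1}^{m-2}(g r_hat_k + d)
    prod2 = prod / (g * r_hat[m - 1] + d)
    assert abs(p[m + 1] - p[m] * (q[1] / q[2]) * (a * r_hat[m - 1] + b) * prod2) \
        < 1e-9 * abs(p[m + 1])
```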

    Remark 2.3. In our paper on the three-dimensional system of difference equations, we analyzed two main cases for the roots of the characteristic polynomial: repeated roots and distinct roots. In the case of repeated roots, the system's behavior is characterized by stability or linear growth, where the solutions depend on a linear relationship between the index $m$ and the repeated root $\mu_2$. This type of behavior reflects simpler dynamics with a strong dependence on the system's initial values. On the other hand, in the case of distinct roots, the solutions exhibit more complex dynamics due to the interactions between the roots $\mu_1$ and $\mu_2$. In this scenario, the system may display oscillatory behavior or exponential growth, depending on the magnitude and sign of the roots. Our analysis highlights that these variations directly impact the system's stability and dynamic nature. These findings underscore the importance of the bilinear transformation in simplifying the three-dimensional system, offering a novel framework for understanding complex dynamics and interpreting diverse behaviors in difference systems.

    Example 2.1. In this example, we examine a system defined by three sequences: (pm), (qm), and (rm). The evolution of these sequences is governed by the following recursive equations:

    $$p_{m+1}=p_m\,\frac{0.7\,r_m+0.5\,p_{m-1}q_{m-1}}{p_{m-1}q_m},\qquad q_{m+1}=\frac{p_{m-1}q_m^2}{0.4\,r_m+0.6\,p_{m-1}q_{m-1}},\qquad r_{m+1}=p_m q_m\,\frac{0.8\,p_m q_m+0.9\,p_{m-1}q_{m-1}}{p_m q_m+1.1\,p_{m-1}q_{m-1}},\qquad m\in\mathbb{N}_0.$$

    The initial values are specified as $p_{-1}=0.5$, $p_0=0.3$, $q_{-1}=0.2$, $q_0=0.1$, and $r_0=0.2$. To illustrate the behavior of these sequences, we plot $(p_m)$, $(q_m)$, and $(r_m)$ over multiple iterations, as shown in Figure 1.

    Figure 1.  Distinct behaviors of the sequences $(p_m)$, $(q_m)$, and $(r_m)$, and their even- and odd-indexed subsequences.

    In Figure 1, the plots of the sequences $(p_m)$, $(q_m)$, and $(r_m)$ display unique behaviors over several iterations. The first plot (top left) shows the evolution of $p_m$, where the values oscillate significantly between positive and negative regions before stabilizing. In the second plot (top middle), the sequence $(q_m)$ exhibits extreme fluctuations, with values reaching very high positive and negative magnitudes. The third plot (top right) presents the sequence $(r_m)$, where the values remain close to zero, showing only moderate fluctuations. The second row of Figure 1 highlights the behavior of these sequences when observed for even and odd indices. The fourth plot displays $p_{2m}$ and $p_{2m+1}$, revealing a slight difference between even- and odd-indexed values, though both maintain an oscillatory pattern. The fifth plot illustrates the behavior of $q_{2m}$ and $q_{2m+1}$, showing extreme variations, with odd-indexed values exhibiting larger magnitudes. Finally, the sixth plot presents $r_{2m}$ and $r_{2m+1}$, where the values remain relatively close to zero, with mild fluctuations between the even and odd subsequences.
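The iteration in Example 2.1 can be sketched in a few lines of Python; plotting (for instance with matplotlib) is omitted, and the even- and odd-indexed subsequences correspond to the second row of Figure 1. The signs of the initial data are uncertain in the source, so the trajectories produced here need not reproduce the figure exactly.

```python
# Minimal simulation sketch of Example 2.1. The initial data are taken as listed
# above; their signs are an assumption, so this need not match Figure 1 exactly.
p = [0.5, 0.3]   # p_{-1}, p_0
q = [0.2, 0.1]   # q_{-1}, q_0
r = [0.2]        # r_0

for m in range(30):
    pm1, qm1, pm, qm, rm = p[m], q[m], p[m + 1], q[m + 1], r[m]
    p.append(pm * (0.7 * rm + 0.5 * pm1 * qm1) / (pm1 * qm))
    q.append(pm1 * qm**2 / (0.4 * rm + 0.6 * pm1 * qm1))
    r.append(pm * qm * (0.8 * pm * qm + 0.9 * pm1 * qm1)
                     / (pm * qm + 1.1 * pm1 * qm1))

# even- and odd-indexed subsequences, as in the second row of Figure 1
p_even, p_odd = p[1::2], p[2::2]
q_even, q_odd = q[1::2], q[2::2]
```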

    Example 2.2. In this example, we examine a system defined by three sequences: (pm), (qm), and (rm). The evolution of these sequences is governed by the following recursive equations:

    $$p_{m+1}=p_m\,\frac{0.2\,r_m+0.1\,p_{m-1}q_{m-1}}{p_{m-1}q_m},\qquad q_{m+1}=\frac{p_{m-1}q_m^2}{0.1\,r_m+0.2\,p_{m-1}q_{m-1}},\qquad r_{m+1}=p_m q_m\,\frac{2\,p_m q_m+0.1\,p_{m-1}q_{m-1}}{0.1\,p_m q_m+0.2\,p_{m-1}q_{m-1}},\qquad m\in\mathbb{N}_0.$$

    The initial values are specified as $p_{-1}=1.2$, $p_0=1.5$, $q_{-1}=1.4$, $q_0=1.5$, and $r_0=0.5$. To illustrate the behavior of these sequences, we plot $(p_m)$, $(q_m)$, and $(r_m)$ over multiple iterations, as shown in Figure 2.

    Figure 2.  Distinct behaviors of the sequences $(p_m)$, $(q_m)$, and $(r_m)$, and their even- and odd-indexed subsequences.

    In Figure 2, the sequences $(p_m)$, $(q_m)$, and $(r_m)$ evolve under different recursive equations and initial conditions. The first plot (top left) shows the sequence $(p_m)$, which displays a smooth and continuous increase without noticeable oscillations. The second plot (top middle) illustrates the sequence $(q_m)$, where the values rise sharply with each iteration, reaching progressively higher magnitudes. Similarly, the third plot (top right) presents the sequence $(r_m)$, which follows a comparable pattern to $(q_m)$, with steadily increasing values over time. The second row of Figure 2 highlights the behavior of these sequences for even and odd indices. The fourth plot shows $p_{2m}$ and $p_{2m+1}$, where there is a slight divergence between even- and odd-indexed values, though both follow a smooth and increasing trend. The fifth plot depicts $q_{2m}$ and $q_{2m+1}$, revealing substantial growth, with larger values observed for the odd indices. Finally, the sixth plot illustrates the sequences $(r_{2m})$ and $(r_{2m+1})$, both of which follow a similar upward trend, with odd-indexed values growing more rapidly.

    In the pursuit of developing an advanced model for analyzing the complex interactions within neural systems, we present a sophisticated mathematical framework that integrates recursive equations with nonlinear dynamics. This system aims to explore the evolution of neural signals over time, focusing on how various variables interact to influence overall system stability and activity patterns. The model is designed to investigate how fluctuations in neural activity impact both short-term dynamics and long-term stability under varying conditions.

    The system revolves around three critical variables: pm, representing the first neural activity signal; qm, denoting the second neural response; and rm, which reflects the combined influence of both neural signals on system stability. These variables evolve based on several key parameters: α, which affects the influence of the external signal on the first neural activity, is set at 0.7; β, which modulates the interaction between the neural signals, is set at 0.5; γ, impacting the internal feedback on the second neural response, is 0.4; and δ, governing the influence of the first neural signal on overall system dynamics, is set at 0.6. Additional parameters, such as ε and λ, further regulate complex interdependencies within the system.

    The model employs recursive iterations to track the dynamic behavior of these variables over time, carefully incorporating nonlinear terms to capture the complex feedback loops characteristic of neural systems. Small constants are introduced to avoid singularities and ensure numerical stability. Various plots, including time series of neural activity, phase diagrams, and heatmaps of signal distributions, are used to visualize how changes in system parameters affect overall behavior. The analysis also includes frequency domain investigations and topological maps to illustrate connectivity and functional relationships within the neural network.
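    The use of small guarding constants can be sketched as follows. Only the parameter values α = 0.7, β = 0.5, γ = 0.4, and δ = 0.6 come from the text; the update rules themselves are illustrative placeholders, not the paper's equations:

```python
EPS = 1e-8  # small constant keeping denominators away from zero

# Parameter values from the text; the nonlinear update rules below are
# illustrative placeholders, not the model's actual equations.
alpha, beta, gamma, delta = 0.7, 0.5, 0.4, 0.6

def step(p, q, r):
    p_new = alpha * p + beta * q / (abs(r) + EPS)    # EPS guards r near 0
    q_new = gamma * q + delta * p * q / (abs(p) + EPS)
    r_new = 0.5 * (p_new + q_new) - 0.1 * r          # combined influence
    return p_new, q_new, r_new

# Track the trajectory over a number of iterations
traj = [(0.5, 0.3, 0.1)]
for _ in range(50):
    traj.append(step(*traj[-1]))
```

    Without the EPS terms, a trajectory passing through p = 0 or r = 0 would divide by zero; with them, the iteration stays numerically stable.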

    Figure 3 presents the time series of the three primary variables pm, qm, and rm, which represent different aspects of neural activity. Each variable is tracked over time, providing an initial understanding of how the system evolves. The time series reveals fluctuations, possible stability points, or oscillations in the variables, which can indicate periodic behavior or instability in the neural system. By visualizing the individual trajectories of these variables, we obtain a clearer understanding of their temporal evolution. However, while this plot offers insight into the independent dynamics of each variable, it does not reveal how these variables interact with one another. To gain a deeper understanding of the relationships between these variables, we transition to the phase plot in Figure 4, which allows for the exploration of the dynamic interplay between pm and qm.

    Figure 3.  Time series of neural activity variables pm, qm, and rm.
    Figure 4.  Phase plot between pm and qm.

    The phase plot in Figure 4 illustrates the dynamic relationship between pm and qm by plotting their values against each other in a two-dimensional space. This type of plot provides a clear view of how these two variables interact over time, revealing potential cycles, attractors, or chaotic behavior. Phase plots are particularly useful for identifying recurring patterns that may not be visible in the time series data. By examining this plot, we can discern whether the system exhibits periodic behavior, remains stable, or shows signs of chaos. The next logical step is to examine the frequency components of these variables, as understanding the oscillations at different frequencies provides further insight into the system's behavior. This leads us to the frequency analysis in Figure 5.
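    As a toy illustration of what a phase plot reveals, a periodic orbit traces a closed curve in the (p, q) plane, so the trajectory revisits earlier states after one period (the circular orbit below is a synthetic stand-in, not model output):

```python
import math

# Stand-in periodic orbit: a closed curve in the (p, q) phase plane.
period = 50
pts = [(math.cos(2 * math.pi * m / period), math.sin(2 * math.pi * m / period))
       for m in range(2 * period)]

# One full period later, the orbit returns (numerically) to its start,
# which is the signature of a periodic attractor in a phase plot.
gap = math.dist(pts[0], pts[period])
```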

    Figure 5.  Frequency analysis of pm and qm.

    Figure 5 shows the frequency analysis of the variables pm and qm using Fourier transforms. This analysis decomposes the time-domain signals into their frequency components, highlighting the dominant frequencies or oscillatory patterns that govern the system's dynamics. Peaks in the frequency spectrum indicate the presence of strong periodic components, while a spread of frequencies may point to chaotic or broadband behavior. This frequency analysis complements the time series and phase plot by identifying the key frequencies at which the system oscillates. By isolating these frequencies, we can better understand the neural system's stability and rhythmic behavior. Having examined both the time-domain and frequency-domain characteristics, the next step is to explore how the system responds to external stimuli, as illustrated in Figure 6.
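    A frequency analysis of this kind can be sketched with a discrete Fourier transform; the signal below is a synthetic stand-in with two arbitrary component frequencies, not data from the model:

```python
import numpy as np

# Stand-in signal with two oscillatory components; the frequencies
# 0.05 and 0.12 cycles/step are arbitrary illustration choices.
t = np.arange(1024)
p = np.sin(2 * np.pi * 0.05 * t) + 0.3 * np.sin(2 * np.pi * 0.12 * t)

spec = np.abs(np.fft.rfft(p))              # magnitude spectrum
freqs = np.fft.rfftfreq(t.size, d=1.0)     # frequency axis, cycles/step
dominant = freqs[1:][np.argmax(spec[1:])]  # strongest peak, skipping DC
```

    A sharp dominant peak indicates strong periodicity; a broad spread of comparable magnitudes would instead suggest chaotic or broadband behavior.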

    Figure 6.  Neural response graph to external stimuli.

    Figure 6 examines the system's response to external neural stimuli, illustrating how the variables pm, qm, and rm react when subjected to an external perturbation. The response graph emphasizes the system's adaptability, illustrating how quickly and effectively the neural signals stabilize or oscillate following a stimulus. Understanding the system's responsiveness is essential for evaluating its robustness and ability to maintain stability under varying conditions. This analysis identifies whether the system can recover from disturbances or if it transitions into a new behavioral state. To further investigate the system's internal structure, we now proceed to Figure 7, which depicts the functional connectivity within the system.

    Figure 7.  Functional connectivity map.

    Figure 7 illustrates a functional connectivity map, depicting the relationships and interactions between different components of the neural system. Each node in the map represents a variable, and the edges indicate the strength and direction of their interactions. This visualization helps uncover how the neural variables influence each other, highlighting key nodes that drive system behavior. By examining this map, we can identify densely connected regions within the system, which may be critical for maintaining overall stability or contributing to instability. The next figure builds upon this analysis by investigating the structural organization of the neural system using topological maps.
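    Such a connectivity map can be represented as a weighted directed graph; the edge weights below are made-up illustrative values (not measured interaction strengths), and the weighted degree picks out the most densely connected node:

```python
# Illustrative functional-connectivity map: nodes are the variables and
# edge weights are assumed interaction strengths, not measured values.
edges = {
    ("p", "q"): 0.8, ("q", "p"): 0.3,
    ("p", "r"): 0.6, ("q", "r"): 0.5,
    ("r", "p"): 0.2,
}

# Weighted degree (sum of incident edge weights) per node
strength = {}
for (src, dst), w in edges.items():
    strength[src] = strength.get(src, 0.0) + w
    strength[dst] = strength.get(dst, 0.0) + w

# The hub is the most densely connected node in the map
hub = max(strength, key=strength.get)
```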

    Figure 8 presents a topological brain map, a spatial representation of the neural network's structure. The topological map demonstrates how neural components are organized and interconnected, providing insights into the system's overall architecture. Clusters of nodes or regions with strong connectivity highlight functional modules that contribute to specific behaviors or functions. Through topological analysis, we gain a better understanding of the neural system's resilience and efficiency in processing information. By visualizing the layout and connections, we develop a deeper appreciation of how the network is structured to support stability and adaptability. From here, we progress to Figure 9, which offers a detailed view of the neural network graph, elaborating on its structural complexity.

    Figure 8.  Topological brain map.
    Figure 9.  Neural network graph.

    Figure 9 illustrates the neural network graph, depicting the detailed interconnections between nodes in the system. The graph emphasizes the relationships between individual neural components, showing how they are linked to form a cohesive network. This representation is critical for understanding how information is transmitted through the network and how the overall system behaves under different conditions. By analyzing the graph, we can pinpoint key nodes and pathways that play a significant role in determining the system's overall stability and performance. The structural insights gained here are further enhanced by the following figure, which provides a three-dimensional visualization of neural activity across time and space.

    In Figure 10, we see a three-dimensional visualization of neural activity across the three key variables pm, qm, and rm. This plot offers a spatial and temporal overview of how these variables evolve together, providing a richer, more dynamic understanding of the system's behavior. The 3D perspective allows for the identification of complex patterns, such as oscillations or chaotic behavior, that may not be easily observed in two-dimensional plots. The 3D plot in Figure 10 provides an overall impression of how the variables interact in space and time, but it may overlook finer details about how each variable, such as pm, behaves over specific ranges or regions. The transition to the heatmap facilitates a clearer, more focused visualization of pm alone, highlighting areas of concentration, clustering, or anomalies that may not be obvious in a 3D plot.

    Figure 10.  3D visualization of neural activity.

    Figure 11 displays a heatmap of the pm values, offering a visual representation of the distribution and variation of pm across the simulation. The heatmap uses color gradients to indicate the intensity of pm at different points in the data, enabling us to quickly identify regions of high or low activity. Areas with more intense colors correspond to higher values of pm, while lighter areas indicate lower values. Heatmaps are particularly useful for detecting spatial patterns or clusters of activity within the data that might not be evident from the time series or phase plots. This visualization provides a more comprehensive view of the data by mapping the values onto a two-dimensional grid, making it easier to spot regions of interest or anomalies in the system's behavior.
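    A heatmap of this kind can be produced by mapping the series onto a two-dimensional grid and coloring each cell by its value; the series below is a synthetic stand-in for pm:

```python
import math

# Stand-in p_m series reshaped onto a 2D grid; a heatmap colors each
# cell by its value so clusters of high activity stand out.
n = 400
p = [math.sin(0.1 * m) + m / n for m in range(n)]
grid = [p[row * 20:(row + 1) * 20] for row in range(20)]  # 20x20 grid

# Locate the most intense cell (what the brightest heatmap color marks)
peak = max((v, r, c) for r, row in enumerate(grid) for c, v in enumerate(row))
```

    With matplotlib, `plt.imshow(grid)` would render the grid directly; the peak cell corresponds to the brightest region of the heatmap.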

    Figure 11.  Heatmap of pm values.

    Analysis of the simulation results reveals strong agreement with theoretical predictions, enhancing the model's credibility in representing the complex dynamics of neural systems. Overall, the results demonstrate the model's ability to make accurate predictions about the interactions between the key variables pm, qm, and rm, while accounting for the periodic fluctuations and chaotic behaviors that are an essential part of neural systems. These results are consistent with the scenarios assumed in the theoretical design, thus supporting the theory on which the model is based.

    In terms of temporal analysis, the graphs show that the system exhibits periodic behavior under certain conditions, with the variables moving between relative stability and mild fluctuations, aligning with theoretical expectations about the stability of neural systems under normal conditions. The phase diagrams also show the presence of periodic attractors, which supports the hypothesis that the system can reach a state of dynamic equilibrium in which the variables interact in a balanced manner over time. These results represent an important step toward verifying the validity of the theoretical framework underlying the model.

    On the other hand, when radical parameter changes were introduced (such as modifying α, β, γ, and δ), unexpected behaviors appeared: the patterns evolved into chaotic dynamics, or new attractors emerged. This reflects the complexity of the system, showing that it does not behave stably under all conditions. Such sudden changes in behavior highlight the system's flexibility and responsiveness to parameter variations, and call for further analysis to understand the exact mechanisms that produce them.
    Regarding the frequency analysis, the Fourier-transform results showed that the system is characterized by strong peaks at certain frequencies, while inhomogeneous frequency components appear in chaotic cases. This suggests that the dominant frequencies provide stable signals under certain conditions, whereas frequency dispersion appears in disordered or chaotic regimes, which helps characterize the system's response to parameter changes.

    Overall, this analysis underscores the strength of the model in simulating neural systems across a wide range of different scenarios, which enhances the credibility of the research. The model not only simulates the expected interactions but also uncovers new possibilities in representing unexpected patterns or chaotic behaviors, thereby opening up new avenues for understanding complex neural systems. These unexpected behaviors may prompt us to consider more complex models or include additional factors that may contribute to distinguishing these phenomena. In addition, this analysis demonstrates the flexibility and adaptability of the model to changes, strengthening its position as an effective tool for analyzing neural systems under a variety of conditions. These results open new avenues for future research on how to manage stability in neural systems under changing influences, and may help in designing strategies to predict or control the behavior of complex neural systems in medical applications or advanced scientific research.

    In this paper, we thoroughly analyzed the system of difference equations represented by (2.b), aiming to gain a deeper understanding of its dynamic properties through advanced mathematical transformations and computational techniques. The successful application of a transformation that simplified the original system (1.a) into a bilinear form (2.b) was pivotal. This approach not only clarified the system's underlying behavior but also facilitated the direct derivation of key properties. By transforming the system, we obtained a clearer perspective on its dynamics, allowing for a more comprehensive exploration of its structural features.

    Our analysis, outlined in Lemma 2.1, revealed that the system of difference equations possesses a closed-form solution under the condition $\gamma(\alpha^{2}+\beta^{2})\,\tau(\varepsilon^{2}+\lambda^{2})\neq 0$. This result is crucial, as it ensures the system's solvability while shedding light on the relationship between the solutions and the parameters defined by the discriminant of the quadratic polynomial Q(μ). By distinguishing between cases with repeated and distinct roots of the characteristic polynomial, we observed two distinctly different dynamical behaviors. In Case 1, where the roots are repeated, the solutions were characterized by simpler linear dependencies on the index m. On the other hand, Case 2, with distinct roots, showcased more complex dynamics, including oscillatory and exponential behaviors. This differentiation is significant as it underscores how varying parameter values can drastically influence the system's evolution, leading to different qualitative behaviors.
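    The repeated-versus-distinct-root dichotomy can be illustrated on the generic second-order linear recurrence x_{n+1} = a x_n + b x_{n-1}; this is a sketch of the standard construction, not the paper's specific system:

```python
import cmath

def solve_linear_recurrence(a, b, x0, x1, m):
    """General term of x_{n+1} = a*x_n + b*x_{n-1} via the characteristic
    polynomial mu^2 - a*mu - b, illustrating the lemma's two cases."""
    sq = cmath.sqrt(a * a + 4 * b)       # discriminant decides the case
    mu1, mu2 = (a + sq) / 2, (a - sq) / 2
    if abs(mu1 - mu2) > 1e-12:           # Case 2: distinct roots
        c2 = (x1 - mu1 * x0) / (mu2 - mu1)
        c1 = x0 - c2
        return c1 * mu1 ** m + c2 * mu2 ** m
    # Case 1: repeated root (assumed nonzero): (c1 + c2*m) * mu^m,
    # i.e. the solution depends linearly on the index m.
    mu = mu1
    c1 = x0
    c2 = x1 / mu - x0
    return (c1 + c2 * m) * mu ** m
```

    For a = b = 1 with x0 = 0, x1 = 1 (distinct roots) this reproduces the Fibonacci numbers, while a = 2, b = -1 (repeated root μ = 1) yields an arithmetic progression, matching the linear-in-m behavior of Case 1.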

    The simulations performed under diverse initial conditions and parameter values further demonstrated the model's versatility in capturing a wide range of dynamical behaviors such as oscillations, stability, and growth patterns. These findings highlight the model's potential for real-world applications, especially in the analysis of neural system dynamics, population models, and other complex interactions. The ability of the model to adapt to various dynamic regimes makes it a valuable tool for practical implementation in a range of scientific fields.

    The results of this paper offer several exciting avenues for future research. Expanding the model to higher-dimensional systems is a logical next step, as it could provide a more comprehensive understanding of multi-variable dynamic interactions. Additionally, the incorporation of additional nonlinearities would further enhance the realism of the model, allowing it to capture more intricate behaviors. Another promising direction is optimizing computational methods to handle singularities, ensuring greater efficiency in solving more complex systems. By refining these aspects, future research could build on this foundational work, offering deeper insights into the behavior of dynamical systems and broadening the scope of applications in fields such as neural science, population biology, and ecological modeling.

    In conclusion, this paper makes significant contributions to the understanding of systems of difference equations and their dynamic behaviors, while also paving the way for further advancements in both theoretical analysis and practical applications. The proposed directions for future research will undoubtedly enrich the field, providing new perspectives and opportunities for further exploration.

    The author declares he has not used Artificial Intelligence (AI) tools in the creation of this article.

    The author declares no conflicts of interest in this paper.


