Research article

Classification of MRI brain tumors based on registration preprocessing and deep belief networks

  • In recent years, augmented reality has emerged as a technology with huge potential in image-guided surgery, and in particular, its application in brain tumor surgery seems promising. Augmented reality can be divided into two parts: hardware and software. Further, artificial intelligence, and deep learning in particular, have attracted great interest from researchers in the medical field, especially for the diagnosis of brain tumors. In this paper, we focus on the software part of an augmented reality scenario. The main objective of this study was to develop a classification technique based on a deep belief network (DBN) and a softmax classifier to (1) distinguish a benign brain tumor from a malignant one by exploiting the spatial heterogeneity of cancer tumors and homologous anatomical structures, and (2) extract the brain tumor features. Our classification method comprises three steps. In the first step, a global affine transformation is applied as a registration preprocessing step to obtain the same or similar results for different locations (voxels, ROIs). In the next step, an unsupervised DBN with unlabeled features is used for the learning process. The discriminative subsets of features obtained in the first two steps serve as input to the classifier and are used in the third step for evaluation by a hybrid system combining the DBN and a softmax classifier. For the evaluation, we used data from Harvard Medical School to train the DBN with softmax regression. The model performed well in the classification phase, achieving an accuracy of 97.2%.

    Citation: Karim Gasmi, Ahmed Kharrat, Lassaad Ben Ammar, Ibtihel Ben Ltaifa, Moez Krichen, Manel Mrabet, Hamoud Alshammari, Samia Yahyaoui, Kais Khaldi, Olfa Hrizi. Classification of MRI brain tumors based on registration preprocessing and deep belief networks[J]. AIMS Mathematics, 2024, 9(2): 4604-4631. doi: 10.3934/math.2024222




    In science, wind speed is a fundamental quantity that results from the movement of air from high-pressure areas to low-pressure areas, primarily caused by temperature differences. Wind speed has diverse impacts on life and the economy and is important in areas such as renewable energy production, aviation operations, and crop production. Additionally, monitoring and predicting wind speed contributes to preparedness and disaster prevention. Mohammadi, Alavi, and McGowan [1] studied the estimation of wind speed distributions and demonstrated that the Birnbaum-Saunders distribution is the most suitable. However, on days or at times with no wind, where the wind speed is zero, the Birnbaum-Saunders distribution cannot be used for analysis since it is supported only on positive values. Therefore, the delta-Birnbaum-Saunders distribution is a more suitable option. The delta-Birnbaum-Saunders distribution accommodates both zero and positive values: the zero observations follow a binomial distribution with binomial proportion δ, whereas the positive observations, occurring with probability 1−δ, follow the Birnbaum-Saunders (BS) distribution. It is well known that the BS distribution is widely applied across various fields such as environmental research, agriculture, business, industry, and medical sciences [2,3,4,5]. Since the Birnbaum-Saunders distribution is positively skewed and defined only for positive values, it is not suitable for datasets containing zeros. However, real-world data may include zeros, making the delta-Birnbaum-Saunders distribution a more appropriate choice. The concept of the delta-Birnbaum-Saunders distribution originates from Aitchison's research [6]. Subsequently, several researchers have applied the concept of incorporating zero values into various positive distributions, providing more diverse and accurate approaches for statistical analysis, such as the delta-lognormal distribution.
Hasan and Krishnamoorthy [7] used the delta-lognormal distribution to construct confidence intervals for the mean, employing both the fiducial approach and the method of variance estimate recovery (MOVER). Maneerat, Niwitpong, and Niwitpong [8] constructed confidence intervals for the difference between variances using the delta-lognormal distribution, comparing the highest posterior density (HPD) method with the normal approximation (NA), parametric bootstrap (PB), and fiducial generalized confidence interval (FGCI) methods. Singhasomboon, Panichkitkosolkul, and Volodin [9] proposed methods for constructing confidence intervals for the ratio of medians of lognormal distributions, including the NA, the MOVER, and the generalized confidence interval (GCI). Their findings indicate that the GCI performs well in terms of coverage probabilities, and they recommend the NA method for moderate to large sample sizes when the mean and variance are small. For the delta-Birnbaum-Saunders distribution, Ratasukharom, Niwitpong, and Niwitpong [10] constructed confidence intervals for the variance using the GCI, the bootstrap confidence interval, the generalized fiducial confidence interval (GFCI), and the NA, estimating the proportion of zeros with the variance-stabilized transformation (VST), Wilson, and Hannig methods. They found that the GFCI based on the Wilson method is most suitable for small sample sizes, the GFCI based on the Hannig method is optimal for medium sample sizes, and the GFCI based on the VST method performs best for large sample sizes. For the delta-gamma distribution, Guo et al. [11] proposed GCIs based on fiducial inference, the Box-Cox transformation, PB, and MOVER to construct confidence intervals for the difference between coefficients of variation. They found that all four methods provided satisfactory results in terms of coverage probabilities. For the delta-two-parameter exponential distribution, Khooriphan, Niwitpong, and Niwitpong [12] proposed methods for constructing confidence intervals for the mean using PB, standard bootstrapping, the GCI, and the MOVER; they recommend the GCI for small to moderate sample sizes and PB for large sample sizes.

    The coefficient of variation is a statistical measure of relative dispersion used to compare the variability of distinct datasets. It is defined as the ratio of the standard deviation to the mean and is typically expressed as a percentage. A higher coefficient of variation indicates greater relative variability, while a lower value indicates less. Moreover, the coefficient of variation is a useful tool applied in various real-world scenarios, for example, investment analysis, healthcare, education, and economics. Importantly, environmental scientists use coefficients of variation to study the variability in environmental data, such as rainfall patterns, temperature fluctuations, or pollutant levels [13,14,15]. Furthermore, numerous researchers have studied confidence intervals for the coefficient of variation in various distributions. For the normal distribution, Vangel [16] constructed confidence intervals for the coefficient of variation. Buntao and Niwitpong [17] used delta-lognormal and lognormal distributions to construct confidence intervals for the coefficient of variation. D'Cunha and Rao [14] described a method for estimating the coefficient of variation of the lognormal distribution using Bayesian inference. Sangnawakij and Niwitpong [18] examined the ratio of coefficients of variation of gamma distributions. Yosboonruang, Niwitpong, and Niwitpong [19] suggested confidence intervals for the difference between two independent coefficients of variation of two delta-lognormal distributions. Puggard, Niwitpong, and Niwitpong [20] proposed confidence intervals for the coefficient of variation of the Birnbaum-Saunders distribution. La-ongkaew, Niwitpong, and Niwitpong [21] presented confidence intervals for the ratio of the coefficients of variation of two Weibull distributions.

    Many researchers have studied and developed confidence intervals for parameters in various probability distributions. From studies on constructing confidence intervals for parameters of positive distributions that include zero values, the generalized confidence interval and normal approximation methods were found to be effective. Additionally, the bootstrap confidence interval is recognized as a fundamental technique for constructing confidence intervals, and many researchers recommend these methods after comparing them with alternatives. However, to date, no research has been conducted on confidence intervals for the coefficient of variation of the delta-Birnbaum-Saunders distribution. As a result, the purpose of this study is to construct confidence intervals for the coefficient of variation of the delta-Birnbaum-Saunders distribution. This study proposes three methods: the normal approximation, the generalized confidence interval that estimates the proportion of zeros using the variance-stabilizing transformation, as proposed by Wu and Hsieh [22], and the generalized confidence interval that estimates the proportion of zeros using the Wilson score method, as proposed by Li, Zhou, and Tian [23]. These three methods are then compared with the bootstrap confidence interval. Furthermore, to validate the accuracy of these methods, all four of them are applied to real-world data, specifically wind speed data collected in Ubon Ratchathani and Si Sa Ket, Thailand.

    Let $Y=(Y_1,Y_2,\ldots,Y_n)$ be a random sample from the delta-Birnbaum-Saunders (DBS) distribution with proportion of zeros $\delta$, shape parameter $\alpha$, and scale parameter $\beta$, denoted by $Y\sim DBS(\delta,\alpha,\beta)$. The probability density function of the delta-Birnbaum-Saunders population is expressed as

    $$f(y;\delta,\alpha,\beta)=\delta I_{\{0\}}[y]+(1-\delta)\frac{1}{2\alpha\beta\sqrt{2\pi}}\left[\left(\frac{\beta}{y}\right)^{1/2}+\left(\frac{\beta}{y}\right)^{3/2}\right]\exp\left[-\frac{1}{2\alpha^{2}}\left(\frac{y}{\beta}+\frac{\beta}{y}-2\right)\right]I_{(0,\infty)}[y],$$

    where $I$ is an indicator function, with

    $$I_{\{0\}}[y]=\begin{cases}1; & y=0,\\ 0; & \text{otherwise},\end{cases}\qquad I_{(0,\infty)}[y]=\begin{cases}0; & y=0,\\ 1; & y>0.\end{cases}$$

    Then the distribution function of $Y$ is given by

    $$G(y;\delta,\alpha,\beta)=\begin{cases}\delta; & y=0,\\ \delta+(1-\delta)F(y;\alpha,\beta); & y>0,\end{cases}\tag{1}$$

    where $F(y;\alpha,\beta)$ is the Birnbaum-Saunders distribution function. For $Y=0$, the number of zero observations follows the binomial distribution, $n^{(0)}\sim\text{Binomial}(n,\delta)$. Given $n=n^{(1)}+n^{(0)}$, where $n^{(1)}$ and $n^{(0)}$ represent the numbers of positive and zero values, respectively, the maximum likelihood estimator of $\delta$ is $\hat{\delta}=n^{(0)}/n$. Following Aitchison's concept [6], the population mean, variance, and coefficient of variation can be calculated as follows:

    $$E(Y)=(1-\delta)\beta\left(1+\frac{\alpha^{2}}{2}\right),\qquad V(Y)=(1-\delta)(\alpha\beta)^{2}\left(1+\frac{5\alpha^{2}}{4}\right)+\delta(1-\delta)\beta^{2}\left(1+\frac{\alpha^{2}}{2}\right)^{2},$$

    and

    $$\theta=\frac{\sqrt{V(Y)}}{E(Y)}=\frac{1}{2+\alpha^{2}}\sqrt{\frac{\alpha^{2}(4+5\alpha^{2})+\delta(2+\alpha^{2})^{2}}{1-\delta}}.\tag{2}$$

    The method for constructing confidence intervals for the coefficient of variation of the delta-Birnbaum-Saunders distribution will be presented in the next section.
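As a quick numerical sanity check of Eq (2), the sketch below simulates DBS draws via the standard normal representation of the Birnbaum-Saunders distribution and compares the empirical coefficient of variation with the closed form. This is an illustration, not part of the paper; `dbs_sample` and `dbs_cv` are hypothetical helper names, and NumPy is assumed.

```python
import numpy as np

def dbs_sample(n, delta, alpha, beta, rng):
    """Draw n observations from DBS(delta, alpha, beta): zeros with
    probability delta, Birnbaum-Saunders(alpha, beta) otherwise."""
    z = rng.standard_normal(n)
    x = alpha * z / 2.0
    y = beta * (x + np.sqrt(x**2 + 1.0))**2   # BS(alpha, beta) via its normal representation
    y[rng.random(n) < delta] = 0.0            # replace a delta-fraction with exact zeros
    return y

def dbs_cv(delta, alpha):
    """Coefficient of variation theta from Eq (2); note it is free of beta."""
    s = alpha**2 * (4.0 + 5.0 * alpha**2) + delta * (2.0 + alpha**2)**2
    return np.sqrt(s / (1.0 - delta)) / (2.0 + alpha**2)

rng = np.random.default_rng(1)
y = dbs_sample(200_000, delta=0.3, alpha=0.5, beta=1.0, rng=rng)
empirical_cv = y.std() / y.mean()
```

With a large sample, `empirical_cv` agrees with `dbs_cv(0.3, 0.5)` to about two decimal places, which exercises both the mean and variance expressions above.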

    The normal approximation (NA) method is a technique that depends on the sample size, becoming more accurate as the sample size increases. A statistical approach used to derive an estimator with an asymptotically normal distribution is the delta method. Let

    $$\theta=g(\alpha,\delta)=\frac{1}{2+\alpha^{2}}\sqrt{\frac{\alpha^{2}(4+5\alpha^{2})+\delta(2+\alpha^{2})^{2}}{1-\delta}}.$$

    Using the delta method, the asymptotic distribution of the estimator is obtained from the first-order Taylor expansion of $g(\hat{\alpha},\hat{\delta})$ about $(\alpha,\delta)$:

    $$g(\hat{\alpha},\hat{\delta})=g(\alpha,\delta)+\frac{\partial g(\alpha,\delta)}{\partial\alpha}(\hat{\alpha}-\alpha)+\frac{\partial g(\alpha,\delta)}{\partial\delta}(\hat{\delta}-\delta)+\text{Remainder}.\tag{3}$$

    It can be shown that $\sqrt{n}\,\text{Remainder}$ converges in probability to 0 as the sample size $n$ approaches infinity. Since $\hat{\alpha}\sim N\!\left(\alpha,\frac{\alpha^{2}}{2n^{(1)}}\right)$ and $\hat{\delta}\sim N\!\left(\delta,\frac{\delta(1-\delta)}{n}\right)$ asymptotically, after some computation we obtain

    $$g(\hat{\alpha},\hat{\delta})\approx\frac{1}{2+\alpha^{2}}\sqrt{\frac{\alpha^{2}(4+5\alpha^{2})+\delta(2+\alpha^{2})^{2}}{1-\delta}}+\frac{8\alpha(1+2\alpha^{2})}{(2+\alpha^{2})^{2}\sqrt{(1-\delta)[\alpha^{2}(4+5\alpha^{2})+\delta(2+\alpha^{2})^{2}]}}(\hat{\alpha}-\alpha)+\frac{2+\alpha^{2}(4+3\alpha^{2})}{(2+\alpha^{2})\sqrt{(1-\delta)^{3}[\alpha^{2}(4+5\alpha^{2})+\delta(2+\alpha^{2})^{2}]}}(\hat{\delta}-\delta).$$

    Subsequently, we can calculate the asymptotic mean and variance of the estimator as follows:

    $$E[g(\hat{\alpha},\hat{\delta})]\approx\frac{1}{2+\alpha^{2}}\sqrt{\frac{\alpha^{2}(4+5\alpha^{2})+\delta(2+\alpha^{2})^{2}}{1-\delta}}$$

    and

    $$V[g(\hat{\alpha},\hat{\delta})]\approx\frac{1}{O(1-\delta)[\alpha^{2}(4+5\alpha^{2})+O\delta]}\left\{\frac{32\alpha^{4}(1+2\alpha^{2})^{2}}{n^{(1)}O}+\frac{\delta[2+\alpha^{2}(4+3\alpha^{2})]^{2}}{n(1-\delta)}\right\},$$

    where $O=(2+\alpha^{2})^{2}$. Detailed procedures for deriving the asymptotic mean and variance are provided in the appendix. Assume that $\hat{\alpha}$ and $\hat{\delta}$ are independent; then the maximum likelihood estimator of $\theta$ can be determined as

    $$\hat{\theta}=\frac{1}{2+\hat{\alpha}^{2}}\sqrt{\frac{\hat{\alpha}^{2}(4+5\hat{\alpha}^{2})+\hat{\delta}(2+\hat{\alpha}^{2})^{2}}{1-\hat{\delta}}},\tag{4}$$

    where $\hat{\alpha}=\left\{2\left[\left(\left(\frac{1}{n^{(1)}}\sum_{i=1}^{n^{(1)}}y_{i}\right)\left(\frac{1}{n^{(1)}}\sum_{i=1}^{n^{(1)}}y_{i}^{-1}\right)\right)^{1/2}-1\right]\right\}^{1/2}$ is the modified moment estimator of $\alpha$ proposed by Ng, Kundu, and Balakrishnan [24]. Then, the estimated variance of $\hat{\theta}$ can be written as

    $$\hat{V}[\hat{\theta}]\approx\frac{1}{H(1-\hat{\delta})[\hat{\alpha}^{2}(4+5\hat{\alpha}^{2})+H\hat{\delta}]}\left\{\frac{32\hat{\alpha}^{4}(1+2\hat{\alpha}^{2})^{2}}{n^{(1)}H}+\frac{\hat{\delta}[2+\hat{\alpha}^{2}(4+3\hat{\alpha}^{2})]^{2}}{n(1-\hat{\delta})}\right\},\tag{5}$$

    where $H=(2+\hat{\alpha}^{2})^{2}$. By the central limit theorem, $Z=\frac{\hat{\theta}-\theta}{\sqrt{\hat{V}(\hat{\theta})}}\sim N(0,1)$. Therefore, the $(1-\upsilon)100\%$ CI for $\theta$ based on NA is given by

    $$[L_{NA},U_{NA}]=\left[\hat{\theta}-z_{\upsilon/2}\sqrt{\hat{V}(\hat{\theta})},\ \hat{\theta}+z_{\upsilon/2}\sqrt{\hat{V}(\hat{\theta})}\right],\tag{6}$$

    where $z_{\upsilon/2}$ is the $(\upsilon/2)$th quantile of the standard normal distribution.
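The NA interval of Eqs (4)–(6) can be sketched as follows. This is a minimal illustration rather than the authors' code; NumPy is assumed, the 97.5% standard-normal quantile is hard-coded, and `na_ci` is a hypothetical helper name.

```python
import numpy as np

def na_ci(y, z=1.959964):
    """95% normal-approximation CI for theta (Eqs 4-6). y holds zeros and
    positive values; alpha is estimated by the modified moment estimator."""
    n = len(y)
    pos = y[y > 0]
    n1 = len(pos)
    d = 1.0 - n1 / n                                 # delta-hat = n0 / n
    a2 = 2.0 * (np.sqrt(pos.mean() * np.mean(1.0 / pos)) - 1.0)  # alpha-hat squared (MME)
    H = (2.0 + a2)**2
    S = a2 * (4.0 + 5.0 * a2) + d * H                # alpha^2(4+5alpha^2) + delta(2+alpha^2)^2
    theta = np.sqrt(S / (1.0 - d)) / (2.0 + a2)      # Eq (4)
    var = (32.0 * a2**2 * (1.0 + 2.0 * a2)**2 / (n1 * H)
           + d * (2.0 + a2 * (4.0 + 3.0 * a2))**2 / (n * (1.0 - d))) / (H * (1.0 - d) * S)  # Eq (5)
    half = z * np.sqrt(var)
    return theta - half, theta + half

# illustrative run on simulated DBS data (delta = 0.3, alpha = 0.5, beta = 1)
rng = np.random.default_rng(7)
zn = rng.standard_normal(500)
x = 0.5 * zn / 2.0
y = (x + np.sqrt(x**2 + 1.0))**2
y[rng.random(500) < 0.3] = 0.0
lo, hi = na_ci(y)
```

For these parameters the true coefficient of variation is about 0.894, and the computed interval sits around that value.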

    The concept of the generalized confidence interval (GCI) method proposed by Weerahandi [25] provides a general framework for constructing confidence intervals by considering the generalized pivotal quantity (GPQ). In constructing confidence intervals for $\theta$ based on GCI, the GPQs of $\beta$ and $\alpha$ are taken into consideration. Let $T\sim t(n^{(1)}-1)$. Sun [26] recommended that the GPQ of $\beta$ be given by

    $$R_{\beta}(y;T)=\begin{cases}\max(\beta_{1},\beta_{2}); & T\le 0,\\ \min(\beta_{1},\beta_{2}); & T>0,\end{cases}\tag{7}$$

    where $\beta_{1}$ and $\beta_{2}$ are the two solutions of the quadratic equation

    $$\left[(n^{(1)}-1)A^{2}-\frac{BT^{2}}{n^{(1)}}\right]\beta^{2}-2\left[(n^{(1)}-1)AC-(1-AC)T^{2}\right]\beta+(n^{(1)}-1)C^{2}-\frac{DT^{2}}{n^{(1)}}=0,\tag{8}$$

    where $A=\frac{1}{n^{(1)}}\sum_{i=1}^{n^{(1)}}\frac{1}{\sqrt{Y_{i}}}$, $B=\sum_{i=1}^{n^{(1)}}\left(\frac{1}{\sqrt{Y_{i}}}-A\right)^{2}$, $C=\frac{1}{n^{(1)}}\sum_{i=1}^{n^{(1)}}\sqrt{Y_{i}}$, and $D=\sum_{i=1}^{n^{(1)}}\left(\sqrt{Y_{i}}-C\right)^{2}$, while the GPQ of $\alpha$ is given by

    $$R_{\alpha}(y;U,T)=\sqrt{\frac{E_{1}+E_{2}R_{\beta}^{2}-2n^{(1)}R_{\beta}}{R_{\beta}U}},\tag{9}$$

    where $E_{1}=\sum_{i=1}^{n^{(1)}}Y_{i}$, $E_{2}=\sum_{i=1}^{n^{(1)}}\frac{1}{Y_{i}}$, and $U\sim\chi^{2}_{n^{(1)}}$.
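A single GPQ draw for (R_β, R_α) from Eqs (7)–(9) might be computed as below. This is a sketch assuming NumPy; `gpq_beta_alpha` is a hypothetical name, and a small guard is added against round-off in the discriminant (an implementation detail the text does not specify).

```python
import numpy as np

def gpq_beta_alpha(pos, t, u):
    """One GPQ draw (R_beta, R_alpha) from Eqs (7)-(9), given the positive
    observations pos, a t(n1 - 1) draw t, and a chi-square(n1) draw u."""
    n1 = len(pos)
    sq = np.sqrt(pos)
    A = np.mean(1.0 / sq)
    B = np.sum((1.0 / sq - A)**2)
    C = np.mean(sq)
    D = np.sum((sq - C)**2)
    # coefficients of the quadratic in beta, Eq (8)
    qa = (n1 - 1) * A**2 - B * t**2 / n1
    qb = -2.0 * ((n1 - 1) * A * C - (1.0 - A * C) * t**2)
    qc = (n1 - 1) * C**2 - D * t**2 / n1
    disc = max(qb**2 - 4.0 * qa * qc, 0.0)           # guard tiny negatives from round-off
    b1 = (-qb + np.sqrt(disc)) / (2.0 * qa)
    b2 = (-qb - np.sqrt(disc)) / (2.0 * qa)
    r_beta = max(b1, b2) if t <= 0 else min(b1, b2)  # Eq (7)
    e1, e2 = np.sum(pos), np.sum(1.0 / pos)
    r_alpha = np.sqrt((e1 + e2 * r_beta**2 - 2.0 * n1 * r_beta) / (r_beta * u))  # Eq (9)
    return r_beta, r_alpha

# deterministic illustration on BS(0.5, 1) data: t = 0 gives the double root,
# and u is fixed at its chi-square mean n1
rng = np.random.default_rng(11)
z = rng.standard_normal(2000)
x = 0.5 * z / 2.0
pos = (x + np.sqrt(x**2 + 1.0))**2
r_beta, r_alpha = gpq_beta_alpha(pos, t=0.0, u=2000.0)
```

At t = 0 the quadratic collapses to (Aβ − C)² = 0, so R_β ≈ β = 1 here, and R_α lands near the true α = 0.5.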

    For the GPQ of δ, we use two concepts: the variance-stabilized transformation (VST) and the Wilson score method (WS). The details are explained in the following subsections.

    According to Wu and Hsieh [22], the GPQ of $\delta$ is defined as

    $$R^{VST}_{\delta}=\sin^{2}\left[\arcsin\sqrt{\hat{\delta}}-\frac{V}{2\sqrt{n}}\right],\tag{10}$$

    where $V=2\sqrt{n}\left(\arcsin\sqrt{\hat{\delta}}-\arcsin\sqrt{\delta}\right)\sim N(0,1)$. Hence, the GPQ for $\theta$ is

    $$R^{VST}_{\theta}=\frac{1}{2+R_{\alpha}^{2}}\sqrt{\frac{R_{\alpha}^{2}(4+5R_{\alpha}^{2})+R^{VST}_{\delta}(2+R_{\alpha}^{2})^{2}}{1-R^{VST}_{\delta}}}.\tag{11}$$

    Consequently, the $(1-\upsilon)100\%$ CI for $\theta$ based on G.VST is given by

    $$[L_{G.VST},U_{G.VST}]=\left[R^{VST}_{\theta}(\upsilon/2),\ R^{VST}_{\theta}(1-\upsilon/2)\right],\tag{12}$$

    where $R^{VST}_{\theta}(\upsilon/2)$ is the $(\upsilon/2)$th percentile of $R^{VST}_{\theta}$.

    In accordance with Li, Zhou, and Tian [23], the GPQ of $\delta$ is described as

    $$R^{WS}_{\delta}=\frac{n^{(0)}+(W^{2}/2)}{n+W^{2}}-\frac{W}{n+W^{2}}\sqrt{n^{(0)}\left(1-\frac{n^{(0)}}{n}\right)+\frac{W^{2}}{4}},\tag{13}$$

    where $W=\frac{n^{(0)}-n\delta}{\sqrt{n\delta(1-\delta)}}\sim N(0,1)$ asymptotically. Thus, the GPQ for $\theta$ is

    $$R^{WS}_{\theta}=\frac{1}{2+R_{\alpha}^{2}}\sqrt{\frac{R_{\alpha}^{2}(4+5R_{\alpha}^{2})+R^{WS}_{\delta}(2+R_{\alpha}^{2})^{2}}{1-R^{WS}_{\delta}}}.\tag{14}$$

    Therefore, the $(1-\upsilon)100\%$ CI for $\theta$ based on G.WS is given by

    $$[L_{G.WS},U_{G.WS}]=\left[R^{WS}_{\theta}(\upsilon/2),\ R^{WS}_{\theta}(1-\upsilon/2)\right],\tag{15}$$

    where $R^{WS}_{\theta}(\upsilon/2)$ is the $(\upsilon/2)$th percentile of $R^{WS}_{\theta}$.
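The two GPQs for δ can be compared numerically as below. This is a sketch assuming NumPy, under one reading of Eqs (10) and (13) in which the pivots V and W are treated as standard-normal draws; both function names are illustrative.

```python
import numpy as np

def gpq_delta_vst(n, n0, v):
    """VST-based GPQ for delta, Eq (10); v is a standard-normal draw (or array)."""
    return np.sin(np.arcsin(np.sqrt(n0 / n)) - v / (2.0 * np.sqrt(n)))**2

def gpq_delta_ws(n, n0, w):
    """Wilson-score-based GPQ for delta, Eq (13); w plays the role of the
    asymptotically standard-normal pivot W."""
    centre = (n0 + w**2 / 2.0) / (n + w**2)
    halfwidth = (w / (n + w**2)) * np.sqrt(n0 * (1.0 - n0 / n) + w**2 / 4.0)
    return centre - halfwidth

# with n = 100 observations and n0 = 30 zeros, both GPQ clouds should
# concentrate around delta-hat = 0.3
rng = np.random.default_rng(5)
draws = rng.standard_normal(10_000)
vst = gpq_delta_vst(100, 30, draws)
ws = gpq_delta_ws(100, 30, draws)
```

Both samples of GPQ values centre near 0.3; the VST version is guaranteed to stay inside [0, 1] because of the sin² transform.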

    The bootstrap method is a resampling technique used to estimate the sampling distribution of a statistic by repeatedly resampling from the observed data with replacement, as proposed by Efron [27]. Let $\hat{\alpha}^{*}$ and $\hat{\delta}^{*}$ be observed values of $\hat{\alpha}$ and $\hat{\delta}$ based on bootstrap samples. Suppose that $K$ bootstrap samples are available. The bootstrap expectation $E(\hat{\alpha})$ can be approximated by using the mean $\hat{\alpha}^{*(\cdot)}=\frac{1}{K}\sum_{j=1}^{K}\hat{\alpha}^{*}_{j}$, where $\hat{\alpha}^{*}_{j}$, $j=1,2,\ldots,K$, are the bootstrap MLEs of $\alpha$. The bootstrap bias estimate based on $K$ replications of $\hat{\alpha}$ is given by

    $$\hat{K}(\hat{\alpha},\alpha)=\hat{\alpha}^{*(\cdot)}-\hat{\alpha}.$$

    Then, the constant-bias-correcting estimates, as defined by MacKinnon and Smith [28], are used for creating the bias-corrected estimator, which is

    $$\hat{\alpha}^{\#}=\hat{\alpha}^{*}-2\hat{K}(\hat{\alpha},\alpha).\tag{16}$$

    Brown, Cai, and DasGupta [29] proposed the Jeffreys interval for the binomial proportion, which employs the Jeffreys prior Beta(0.5, 0.5). Therefore, it results in

    $$\hat{\delta}^{*}\sim Beta\left(n^{(0)}+0.5,\ n^{(1)}+0.5\right),\tag{17}$$

    where $n^{(0)}=n\hat{\delta}^{*}$ and $n^{(1)}=n(1-\hat{\delta}^{*})$. The bootstrap estimator of $\theta$ can be written as

    $$\hat{\theta}^{(Boot)}=\frac{1}{2+(\hat{\alpha}^{\#})^{2}}\sqrt{\frac{(\hat{\alpha}^{\#})^{2}\left(4+5(\hat{\alpha}^{\#})^{2}\right)+\hat{\delta}^{*}\left(2+(\hat{\alpha}^{\#})^{2}\right)^{2}}{1-\hat{\delta}^{*}}}.\tag{18}$$

    Consequently, the $(1-\upsilon)100\%$ CI for $\theta$ based on BCI is given by

    $$[L_{BCI},U_{BCI}]=\left[\hat{\theta}^{(Boot)}(\upsilon/2),\ \hat{\theta}^{(Boot)}(1-\upsilon/2)\right],\tag{19}$$

    where $\hat{\theta}^{(Boot)}(\upsilon/2)$ is the $(\upsilon/2)$th percentile of $\hat{\theta}^{(Boot)}$.
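The bootstrap interval of Eqs (16)–(19) can be sketched as follows. This is an illustrative implementation, not the authors' code, assuming NumPy; a small floor keeps the bias-corrected α̂# nonnegative (an assumption, since the text does not specify how negative corrections are handled), and the helper names are hypothetical.

```python
import numpy as np

def theta_from(a2, d):
    """theta as a function of alpha^2 and delta, Eq (2)."""
    return np.sqrt((a2 * (4.0 + 5.0 * a2) + d * (2.0 + a2)**2) / (1.0 - d)) / (2.0 + a2)

def mme_alpha(pos):
    """Modified moment estimator of alpha (Ng, Kundu and Balakrishnan)."""
    return np.sqrt(2.0 * (np.sqrt(pos.mean() * np.mean(1.0 / pos)) - 1.0))

def bootstrap_ci(y, K=500, conf=0.95, seed=0):
    """Percentile bootstrap CI for theta with the bias-corrected alpha of
    Eq (16) and a Jeffreys Beta draw for delta, Eqs (17)-(19)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    a_hat = mme_alpha(y[y > 0])
    boots = [rng.choice(y, size=n, replace=True) for _ in range(K)]
    a_star = np.array([mme_alpha(b[b > 0]) for b in boots])
    bias = a_star.mean() - a_hat                  # K-hat(alpha-hat, alpha)
    thetas = []
    for a_j, b in zip(a_star, boots):
        n0 = int(np.sum(b == 0))
        d_j = rng.beta(n0 + 0.5, n - n0 + 0.5)    # Jeffreys draw, Eq (17)
        a_bc = max(a_j - 2.0 * bias, 1e-8)        # Eq (16), floored near zero
        thetas.append(theta_from(a_bc**2, d_j))
    lo, hi = np.quantile(thetas, [(1 - conf) / 2.0, (1 + conf) / 2.0])
    return lo, hi

# illustrative run on simulated DBS data (delta = 0.3, alpha = 0.5, beta = 1)
rng = np.random.default_rng(9)
z = rng.standard_normal(300)
x = 0.5 * z / 2.0
y = (x + np.sqrt(x**2 + 1.0))**2
y[rng.random(300) < 0.3] = 0.0
lo, hi = bootstrap_ci(y, K=200, seed=1)
```

The resulting interval brackets the true coefficient of variation (about 0.894 for these parameters) on typical runs.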

    In this simulation study, we compared the performance of the proposed methods by considering coverage probabilities greater than or equal to the nominal confidence level of 0.95, together with the shortest expected lengths. The comparison was conducted via Monte Carlo simulation in the statistical software R, with 5,000 replications in total, 1,000 GPQ replications for the GCI, and 500 bootstrap replications for the BCI. The sample sizes were set to n = 30, 50, 100, 150, and 200, and the parameters were specified as δ = 0.1, 0.3, and 0.5; α = 0.25, 0.50, 0.75, 1.00, and 1.50; and β = 1. The algorithm below presents the steps for estimating the coverage probability and expected length to compare the efficiency of the proposed methods.

    The results in Table 1 show that the NA and BCI methods have similar values, both in terms of coverage probabilities and expected lengths. Similarly, the G.VST and G.WS methods exhibit values close to each other in almost all the cases studied, with coverage probabilities remaining stable and close to 0.95 and the shortest expected lengths; the G.VST and G.WS methods are therefore more efficient than the NA and BCI methods. Figure 1 compares the methods in terms of the shape parameter with respect to coverage probability and expected length. The coverage probabilities of the G.VST and G.WS methods are consistently close to the nominal confidence level of 0.95 in almost all cases. The BCI method achieves a coverage probability close to the specified criterion when the shape parameter is 0.25 or 1.00, while the NA method meets the criterion when the shape parameter is small; as the shape parameter increases, the coverage probability of the NA method tends to decrease. Considering expected lengths, a consistent trend is observed for all methods: as the shape parameter increases, the expected lengths also increase. Figure 2 compares the methods in terms of sample size. The coverage probabilities of the G.VST and G.WS methods meet the specified criteria, the coverage probability of the NA method increases with the sample size, and the BCI method provides a coverage probability close to the specified level only when the sample size is 50. Regarding expected length, as the sample size increases, the expected length of every method decreases, improving efficiency.
    Figure 3 compares the methods in terms of the proportion of zeros. The coverage probabilities of the G.VST and G.WS methods consistently align closely with the specified confidence level, whereas the NA and BCI methods provide coverage probabilities below it. For expected length, a similar trend is observed across all methods: as the proportion of zeros increases, the expected length also increases. Nonetheless, the G.VST and G.WS methods yield shorter expected lengths than the NA and BCI methods.

    Algorithm: The coverage probability and expected length
      Ⅰ.   For given $\alpha$, $n$, $\delta$, and $\beta$.
      Ⅱ.   For $m=1$ to $M$: generate a sample from the DBS distribution and calculate $\hat{\alpha}$ and $\hat{\delta}$.
      Ⅲ.   Construct CIs for $\theta$ based on the NA, GCI, and BCI:
            For the NA: calculate $\hat{V}(\hat{\theta})$ using Equation (5), and then $L_{NA}$ and $U_{NA}$ using Equation (6).
            For the GCI:
            1. Calculate $A$, $B$, $C$, $D$, $E_{1}$, and $E_{2}$, respectively.
            2. At the $g$th step:
                a) Generate $T\sim t(n^{(1)}-1)$, and then calculate $R_{\beta}(y;T)$ using Equations (7) and (8).
                b) If $R_{\beta}(y;T)<0$, regenerate $T\sim t(n^{(1)}-1)$.
                c) Generate $U\sim\chi^{2}_{n^{(1)}}$, and then calculate $R_{\alpha}(y;U,T)$ using Equation (9).
                d) For the G.VST method, calculate $R^{VST}_{\delta}$ and $R^{VST}_{\theta}$ using Equations (10) and (11).
                e) For the G.WS method, calculate $R^{WS}_{\delta}$ and $R^{WS}_{\theta}$ using Equations (13) and (14).
            3. Repeat step 2 a total of $G=1{,}000$ times.
            4. Calculate $L_{G.VST}$, $U_{G.VST}$, $L_{G.WS}$, and $U_{G.WS}$ using Equations (12) and (15).
            For the BCI:
            1. At the $b$th step:
                a) Generate a bootstrap sample $y^{*}_{1},y^{*}_{2},\ldots,y^{*}_{n}$ with replacement from $y_{1},y_{2},\ldots,y_{n}$.
                b) Calculate $\hat{\alpha}^{*}$ and $\hat{K}(\hat{\alpha},\alpha)$.
                c) Calculate $\hat{\alpha}^{\#}$, $\hat{\delta}^{*}$, and $\hat{\theta}^{(Boot)}$ using Equations (16)–(18).
            2. Repeat step 1 a total of $B=500$ times.
            3. Calculate $L_{BCI}$ and $U_{BCI}$ using Equation (19).
      Ⅳ.   If $L\le\theta\le U$, set $H_{m}=1$; otherwise set $H_{m}=0$. The coverage probability and expected length for each method are obtained from $CP=\frac{1}{M}\sum_{m=1}^{M}H_{m}$ and $EL=\frac{1}{M}\sum_{m=1}^{M}(U_{m}-L_{m})$, where $U_{m}$ and $L_{m}$ are the upper and lower confidence limits of the $m$th replication. (End $m$ loop)
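Steps Ⅰ–Ⅳ can be mirrored in code. The sketch below estimates CP and EL for the NA interval only, with a deliberately small M for speed (the paper uses M = 5,000); NumPy is assumed and `coverage_sim` is an illustrative name.

```python
import numpy as np

def coverage_sim(M=300, n=100, delta=0.3, alpha=0.5, beta=1.0, seed=3):
    """Monte Carlo estimate of coverage probability (CP) and expected length
    (EL) for the NA interval, mirroring steps I-IV of the algorithm."""
    rng = np.random.default_rng(seed)
    a2t = alpha**2
    theta = np.sqrt((a2t * (4 + 5 * a2t) + delta * (2 + a2t)**2) / (1 - delta)) / (2 + a2t)
    hits, total_len = 0, 0.0
    for _ in range(M):
        # step II: one DBS sample
        z = rng.standard_normal(n)
        x = alpha * z / 2.0
        y = beta * (x + np.sqrt(x**2 + 1.0))**2
        y[rng.random(n) < delta] = 0.0
        pos = y[y > 0]
        n1 = len(pos)
        d = 1.0 - n1 / n
        a2 = 2.0 * (np.sqrt(pos.mean() * np.mean(1.0 / pos)) - 1.0)
        # step III: NA interval from Eqs (4)-(6)
        H = (2.0 + a2)**2
        S = a2 * (4.0 + 5.0 * a2) + d * H
        th = np.sqrt(S / (1.0 - d)) / (2.0 + a2)
        var = (32.0 * a2**2 * (1.0 + 2.0 * a2)**2 / (n1 * H)
               + d * (2.0 + a2 * (4.0 + 3.0 * a2))**2 / (n * (1.0 - d))) / (H * (1.0 - d) * S)
        half = 1.959964 * np.sqrt(var)
        # step IV: record hit and length
        hits += int(th - half <= theta <= th + half)
        total_len += 2.0 * half
    return hits / M, total_len / M

cp, el = coverage_sim()
```

For these settings (α = 0.5, n = 100, δ = 0.3), the estimates land in the neighbourhood of the Table 1 entries for the NA method.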

    Table 1.  The coverage probabilities and expected lengths for the 95% CIs for θ.
    α n δ Coverage probabilities Expected lengths
    NA G.VST G.WS BCI NA G.VST G.WS BCI
    0.25 30 0.1 0.9176 0.9508 0.9498 0.9318 0.3337 0.1001 0.1002 0.3308
    0.3 0.9492 0.9515 0.9589 0.9444 0.5126 0.0934 0.0928 0.5075
    0.5 0.9502 0.9489 0.9522 0.9505 0.7606 0.1195 0.1187 0.7635
    50 0.1 0.9256 0.9526 0.9450 0.9417 0.2646 0.0740 0.0739 0.2590
    0.3 0.9508 0.9520 0.9574 0.9422 0.3963 0.0671 0.0667 0.3919
    0.5 0.9564 0.9552 0.9520 0.9488 0.5818 0.0795 0.0802 0.5791
    100 0.1 0.9522 0.9513 0.9525 0.9597 0.1898 0.0506 0.0506 0.1861
    0.3 0.9561 0.9540 0.9604 0.9549 0.2770 0.0446 0.0442 0.2724
    0.5 0.9532 0.9537 0.9520 0.9502 0.4064 0.0515 0.0513 0.4013
    150 0.1 0.9406 0.9501 0.9484 0.9478 0.1547 0.0408 0.0408 0.1516
    0.3 0.9518 0.9542 0.9552 0.9510 0.2278 0.0356 0.0356 0.2268
    0.5 0.9522 0.9518 0.9532 0.9541 0.3298 0.0410 0.0409 0.3066
    200 0.1 0.9500 0.9520 0.9502 0.9530 0.1341 0.0350 0.0351 0.1337
    0.3 0.9506 0.9512 0.9546 0.9528 0.1968 0.0307 0.0307 0.1824
    0.5 0.9546 0.9548 0.9532 0.9502 0.2856 0.0348 0.0348 0.2589
    0.50 30 0.1 0.9341 0.9533 0.9555 0.9523 0.3683 0.2688 0.2678 0.3641
    0.3 0.9512 0.9514 0.9549 0.9487 0.5689 0.2919 0.2932 0.5545
    0.5 0.9524 0.9519 0.9516 0.9422 0.8574 0.3832 0.3848 0.8439
    50 0.1 0.9465 0.9465 0.9516 0.9347 0.2875 0.2005 0.2004 0.2818
    0.3 0.9554 0.9517 0.9484 0.9479 0.4339 0.2142 0.2134 0.4194
    0.5 0.9547 0.9493 0.9497 0.9405 0.6514 0.2696 0.2689 0.6311
    100 0.1 0.9556 0.9484 0.9466 0.9492 0.2035 0.1385 0.1384 0.1981
    0.3 0.9562 0.9525 0.9476 0.9637 0.3078 0.1449 0.1439 0.2945
    0.5 0.9628 0.9504 0.9491 0.9465 0.4615 0.1785 0.1780 0.4433
    150 0.1 0.9506 0.9505 0.9515 0.9042 0.1663 0.1122 0.1122 0.1686
    0.3 0.9566 0.9537 0.9529 0.9174 0.2511 0.1168 0.1167 0.2570
    0.5 0.9544 0.9536 0.9552 0.9294 0.3739 0.1426 0.1425 0.3694
    200 0.1 0.9526 0.9526 0.9543 0.9256 0.1442 0.0968 0.0968 0.1404
    0.3 0.9562 0.9524 0.9560 0.9376 0.2174 0.1005 0.1004 0.2126
    0.5 0.9620 0.9518 0.9520 0.9498 0.3235 0.1220 0.1222 0.1200
    0.75 30 0.1 0.9352 0.9511 0.9416 0.9538 0.4527 0.4075 0.4038 0.4480
    0.3 0.9310 0.9509 0.9450 0.9412 0.6593 0.4705 0.4708 0.6473
    0.5 0.9306 0.9530 0.9409 0.9405 0.9800 0.6242 0.6206 0.9646
    50 0.1 0.9475 0.9651 0.9448 0.9547 0.3545 0.3070 0.3081 0.3503
    0.3 0.9526 0.9563 0.9602 0.9510 0.5122 0.3538 0.3533 0.4972
    0.5 0.9459 0.9456 0.9564 0.9395 0.7601 0.4572 0.4583 0.7385
    100 0.1 0.9523 0.9460 0.9615 0.9482 0.2518 0.2144 0.2135 0.2475
    0.3 0.9622 0.9459 0.9536 0.9513 0.3627 0.2436 0.2453 0.3515
    0.5 0.9567 0.9431 0.9487 0.9500 0.5386 0.3132 0.3101 0.5203
    150 0.1 0.9538 0.9510 0.9524 0.8598 0.2061 0.1743 0.1741 0.2027
    0.3 0.9530 0.9504 0.9512 0.8790 0.2967 0.1986 0.1986 0.2670
    0.5 0.9558 0.9546 0.9519 0.9306 0.4399 0.2522 0.2525 0.4332
    200 0.1 0.9518 0.9502 0.9528 0.8724 0.1793 0.1510 0.1509 0.1709
    0.3 0.9518 0.9516 0.9532 0.9012 0.2567 0.1711 0.1709 0.2207
    0.5 0.9530 0.9546 0.9552 0.9386 0.3810 0.2178 0.2177 0.3418
    1.00 30 0.1 0.9190 0.9482 0.9506 0.9352 0.5233 0.4901 0.4904 0.5209
    0.3 0.9247 0.9471 0.9505 0.9407 0.7447 0.5875 0.5879 0.7419
    0.5 0.9321 0.9460 0.9500 0.9413 1.1191 0.7878 0.7901 1.1260
    50 0.1 0.9342 0.9478 0.9528 0.9387 0.4100 0.3755 0.3749 0.4065
    0.3 0.9333 0.9506 0.9625 0.9471 0.5742 0.4467 0.4455 0.5743
    0.5 0.9411 0.9576 0.9610 0.9410 0.8602 0.5933 0.5933 0.8628
    100 0.1 0.9472 0.9511 0.9504 0.9548 0.2901 0.2625 0.2616 0.2892
    0.3 0.9490 0.9521 0.9582 0.9522 0.4090 0.3111 0.3127 0.4080
    0.5 0.9489 0.9545 0.9597 0.9514 0.6065 0.4093 0.4101 0.6084
    150 0.1 0.9468 0.9514 0.9516 0.9556 0.2375 0.2142 0.2141 0.2318
    0.3 0.9446 0.9526 0.9508 0.9536 0.3335 0.2538 0.2538 0.3035
    0.5 0.9504 0.9530 0.9502 0.9526 0.4945 0.3328 0.3329 0.4903
    200 0.1 0.9482 0.9540 0.9588 0.9524 0.2061 0.1854 0.1853 0.2059
    0.3 0.9524 0.9510 0.9532 0.9508 0.2892 0.2196 0.2195 0.2768
    0.5 0.9522 0.9524 0.9546 0.9510 0.4279 0.2872 0.2875 0.4209
    1.50 30 0.1 0.9209 0.9490 0.9542 0.9433 0.5521 0.5213 0.5235 0.5702
    0.3 0.9072 0.9540 0.9528 0.9570 0.7735 0.6436 0.6442 0.8364
    0.5 0.8931 0.9505 0.9449 0.9308 1.1629 0.8781 0.8739 1.2941
    50 0.1 0.9246 0.9529 0.9548 0.9418 0.4281 0.4016 0.4012 0.4409
    0.3 0.9162 0.9580 0.9522 0.9409 0.5987 0.4948 0.4947 0.6473
    0.5 0.9097 0.9498 0.9560 0.9494 0.8909 0.6693 0.6685 0.9837
    100 0.1 0.9453 0.9571 0.9535 0.9530 0.3028 0.2822 0.2823 0.3135
    0.3 0.9349 0.9498 0.9526 0.9591 0.4231 0.3465 0.3468 0.4583
    0.5 0.9164 0.9520 0.9484 0.9438 0.6292 0.4660 0.4664 0.6997
    150 0.1 0.9338 0.9522 0.9514 0.9006 0.2475 0.2298 0.2301 0.2580
    0.3 0.9272 0.9542 0.9522 0.9086 0.3439 0.2822 0.2822 0.3483
    0.5 0.9188 0.9504 0.9500 0.9230 0.5113 0.3794 0.3795 0.5248
    200 0.1 0.9522 0.9506 0.9510 0.9218 0.2144 0.1991 0.1992 0.2232
    0.3 0.9364 0.9554 0.9542 0.9226 0.2977 0.2441 0.2440 0.3085
    0.5 0.9276 0.9532 0.9500 0.9430 0.4417 0.3277 0.3279 0.3317

    Figure 1.  Graphs comparing the performance of the shape parameter with respect to the (A) coverage probability and (B) expected length.
    Figure 2.  Graphs comparing the performance of the sample sizes with respect to the (C) coverage probability and (D) expected length (a = 30, b = 50, c = 100, d = 150, e = 200).
    Figure 3.  Graphs comparing the performance of the proportion of zero with respect to the (E) coverage probability and (F) expected length.

    Wind speed plays multiple important roles and has various impacts, particularly in agriculture, where it affects plant growth rates and crop yields. Because Thailand is known as an agricultural country, a large portion of its population has always been engaged in farming or related occupations; wind speed is therefore an important factor affecting agriculture in Thailand. In this research, hourly wind speed data from Ubon Ratchathani province for March 9–10, 2023, and from Si Sa Ket province for April 3–7, 2023, were used for analysis, as presented in Tables 2 and 3. The wind speed data for both provinces were obtained from the Automatic Weather System in Thailand (http://www.aws-observation.tmd.go.th/main/main). We plotted histograms of the wind speed data for Ubon Ratchathani and Si Sa Ket provinces to visualize the data distributions, shown in Figures 4 and 5. Since the wind speed data include both zero values (no wind) and positive values, we examined the suitability of the distribution of the positive values by comparing it to several alternatives, including the normal, exponential, Cauchy, logistic, and Birnbaum-Saunders distributions. To assess the suitability of these distributions for the data, we used the Akaike information criterion (AIC) and the Bayesian information criterion (BIC), calculated as

AIC = -2ln(L) + 2p,

    and

BIC = -2ln(L) + p ln(o),

respectively, where p represents the number of estimated parameters, o represents the number of observations, and L represents the maximized likelihood. From Table 4, it is evident that the AIC and BIC values for the Birnbaum-Saunders distribution are the lowest among the candidate distributions, suggesting that it is the most suitable model for the positive wind speed values. Since the wind speed data contain both positive and zero values, they are modeled with the delta-Birnbaum-Saunders distribution, which we then used to calculate confidence intervals for the coefficients of variation of the wind speed data. In addition, summary statistics for the wind speed data are presented in Table 5. Here, the parameter α represents the shape (skewness) of the distribution, reflecting the tendency toward lower or higher-than-normal wind speeds; the parameter β indicates the scale of the wind speed distribution in the area, so a change in β shifts the distribution; and the parameter δ represents the proportion of zero values in the dataset. The point estimates of the coefficient of variation for Ubon Ratchathani and Si Sa Ket provinces were 1.2183 and 1.3085, respectively. Table 6 presents the calculated 95% confidence intervals for the coefficient of variation of the wind speed data from the two provinces.
For Ubon Ratchathani, the closest simulation setting in Table 1 is the sample size n = 50 with parameters α = 0.75 and δ = 0.3. At this setting, the simulation results indicate that the NA, G.VST, G.WS, and BCI methods all achieve coverage probabilities greater than the specified confidence level of 0.95, and the G.WS method provides the shortest confidence interval. Consistent with this, the G.WS interval for the Ubon Ratchathani data is (1.0797, 1.4925), with a length of 0.4128, the shortest among the methods. For Si Sa Ket, the closest simulation setting is n = 100 with α = 1.00 and δ = 0.3. There, the G.VST, G.WS, and BCI methods achieve coverage probabilities above the specified confidence level, while the NA method falls below it; among the remaining methods, the G.VST method provides the shortest confidence interval. Consistent with this, the G.VST interval for the Si Sa Ket data is (1.2002, 1.4655), with a length of 0.2653, the shortest among all methods. Consequently, to construct confidence intervals for the coefficient of variation of wind speed data in Thailand, we recommend the G.WS method for Ubon Ratchathani province and the G.VST method for Si Sa Ket province.

    Table 2.  Data on the wind speed of Ubon Ratchathani, Thailand.
    Data on the wind speed of Ubon Ratchathani (Knots)
    4.9 2.9 2.3 1.0 3.9 0.0 0.6 5.8
    0.8 1.6 3.3 2.3 2.1 0.0 0.0 4.7
    6.2 3.9 0.8 0.0 1.6 0.0 0.0 1.6
    3.3 5.2 1.9 0.0 1.9 0.0 0.6 0.0
    3.3 0.6 3.3 0.0 1.6 1.6 0.0 0.6
    0.4 3.1 3.9 0.0 0.0 0.0 0.0 0.0

Table 3.  Data on the wind speed of Si Sa Ket, Thailand.
    Data on the wind speed of Si Sa Ket (Knots)
    0.2 0.2 1.2 0.4 1.6 2.5 4.3 2.1 3.3 0.0 1.2 0.0
    2.3 0.0 3.3 0.4 1.9 9.9 8.7 0.0 2.9 1.0 0.0 1.6
    0.4 0.0 5.8 0.8 1.0 5.2 6.2 1.2 1.2 0.6 0.0 0.8
    2.3 0.0 7.6 0.6 0.6 7.2 5.8 0.0 3.9 0.4 0.0 0.0
    8.7 0.6 5.8 0.0 1.2 5.6 8.9 0.6 2.1 0.0 1.6 1.2
    3.3 2.5 5.2 0.6 0.6 0.8 2.5 0.8 3.5 1.6 0.6 1.0
    3.9 1.2 0.8 0.0 0.6 8.4 2.1 0.8 2.9 0.0 0.0 0.0
    2.9 0.0 2.9 1.2 0.0 2.5 0.2 0.2 3.9 1.2 0.0 0.0
    2.5 0.8 1.2 2.9 0.8 4.3 3.3 0.8 0.4 0.0 1.6 0.6
    3.1 0.2 2.3 1.2 1.6 1.6 3.1 0.0 0.6 1.0 0.6 0.0

    Figure 4.  Histogram of wind speed data for Ubon Ratchathani.
Figure 5.  Histogram of wind speed data for Si Sa Ket.
    Table 4.  The AIC and BIC values of each model for the wind speed data.
Data Criterion Normal Exponential Birnbaum-Saunders Cauchy Logistic
    Ubon Ratchathani AIC 125.404 125.910 121.308 139.931 127.085
    Ubon Ratchathani BIC 128.335 127.378 125.705 142.863 130.016
    Si Sa Ket AIC 321.226 294.814 288.638 345.058 316.721
    Si Sa Ket BIC 326.135 297.268 296.001 349.967 321.630

    Table 5.  Summary statistics for the wind speed data.
Data n α̂ β̂ δ̂ θ̂
    Ubon Ratchathani 48 0.7968 1.9355 0.3333 1.2183
    Si Sa Ket 120 0.9682 1.3744 0.2833 1.3085

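The point estimates θ̂ in Table 5 follow directly from the closed-form coefficient of variation of the delta-Birnbaum-Saunders distribution derived in the appendix; a quick check in Python:

```python
import math

def cv_delta_bs(alpha, delta):
    """CV of the delta-Birnbaum-Saunders distribution:
    sqrt([a^2(4+5a^2) + d(2+a^2)^2] / (1-d)) / (2+a^2)."""
    s = alpha**2 * (4 + 5 * alpha**2) + delta * (2 + alpha**2) ** 2
    return math.sqrt(s / (1 - delta)) / (2 + alpha**2)

# Plugging in the estimates from Table 5:
print(round(cv_delta_bs(0.7968, 0.3333), 4))  # ≈ 1.2182 (Table 5: 1.2183, from unrounded MLEs)
print(round(cv_delta_bs(0.9682, 0.2833), 4))  # ≈ 1.3085 (Table 5: 1.3085)
```

The tiny discrepancy in the first value comes only from the rounding of α̂ and δ̂ in Table 5.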
    Table 6.  The 95% confidence intervals for the coefficients of variation of the wind speed data.
    Data Methods Interval Length
    Ubon Ratchathani NA (0.9314, 1.5052) 0.5738
    G.VST (1.0736, 1.4905) 0.4169
    G.WS (1.0797, 1.4925) 0.4128
    BCI (0.9795, 1.5470) 0.5675
Si Sa Ket NA (1.1290, 1.4881) 0.3591
    G.VST (1.2002, 1.4655) 0.2653
    G.WS (1.1992, 1.4783) 0.2791
    BCI (1.1382, 1.4969) 0.3587

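For reference, limits of the kind reported in the NA row of Table 6 can be approximated from the point estimate and the delta-method variance derived in the appendix. This sketch assumes the NA interval is the usual Wald interval θ̂ ± z₀.₉₇₅√V̂; because it starts from the rounded estimates in Table 5, its output is close to, but not identical with, the published NA row:

```python
import math

def cv_and_var(alpha, delta, n):
    """Point estimate and asymptotic (delta-method) variance of the CV."""
    s = alpha**2 * (4 + 5 * alpha**2) + delta * (2 + alpha**2) ** 2
    theta = math.sqrt(s / (1 - delta)) / (2 + alpha**2)
    pre = 1 / ((2 + alpha**2) ** 2 * (1 - delta) * s)
    t1 = 32 * alpha**4 * (1 + 2 * alpha**2) ** 2 / (n * (1 - delta) * (2 + alpha**2) ** 2)
    t2 = delta * (2 + alpha**2 * (4 + 3 * alpha**2)) ** 2 / (n * (1 - delta))
    return theta, pre * (t1 + t2)

z = 1.959964  # 97.5% standard normal quantile
theta, var = cv_and_var(0.7968, 0.3333, 48)   # Ubon Ratchathani, Table 5
lo, hi = theta - z * math.sqrt(var), theta + z * math.sqrt(var)
# Roughly (0.94, 1.50); Table 6 reports (0.9314, 1.5052) from unrounded MLEs.
print(f"NA-style interval: ({lo:.4f}, {hi:.4f})")
```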

In this study, we constructed confidence intervals for the coefficient of variation of the delta-Birnbaum-Saunders distribution. We proposed three methods, NA, G.VST, and G.WS, and compared them with the BCI method. Performance was assessed by the coverage probability, requiring it to be at least the nominal 0.95 confidence level, together with the expected length, preferring the shortest interval. The simulation results indicate that the coverage probabilities of the G.VST and G.WS methods are greater than or close to the nominal confidence level, whereas the NA method attains coverage above the nominal level only when the shape parameter is small. The coverage probability of the BCI method approaches the nominal level as the sample size increases. In terms of expected length, the BCI method provides shorter confidence intervals than the NA method except when the shape parameter is large, but the G.VST and G.WS methods yield the shortest and most similar intervals, making them the most efficient overall. Moreover, all the proposed methods were applied to wind speed data in Thailand and yielded results consistent with the simulation outcomes. Therefore, the G.VST and G.WS methods are recommended for constructing confidence intervals for the coefficient of variation of the delta-Birnbaum-Saunders distribution. In future research, we will investigate new methods and extend the parameters of interest for the delta-Birnbaum-Saunders distribution to further improve confidence interval construction.

    Usanee Janthasuwan analyzed the data, drafted, and wrote the manuscript. Suparat Niwitpong conceptualized and designed the experiment and revised the manuscript. Sa-Aat Niwitpong proposed analytical tools, approved the final draft, and secured funding.

    The authors would like to thank the editor and reviewers, whose valuable comments and suggestions enhanced the quality of the paper. This research was funded by the King Mongkut's University of Technology North Bangkok. Contract no: KMUTNB-68-KNOW-17.

    The authors declare no conflict of interest.

    The asymptotic mean and variance

We use the delta method, based on a first-order Taylor expansion, to obtain an asymptotically normal estimator as follows:

$$g(\hat{\alpha},\hat{\delta}) = g(\alpha,\delta) + \frac{\partial g(\alpha,\delta)}{\partial \alpha}(\hat{\alpha}-\alpha) + \frac{\partial g(\alpha,\delta)}{\partial \delta}(\hat{\delta}-\delta) + \text{Remainder}, \quad (A.1)$$

where
$$g(\alpha,\delta) = \frac{1}{2+\alpha^{2}}\sqrt{\frac{\alpha^{2}(4+5\alpha^{2})+\delta(2+\alpha^{2})^{2}}{1-\delta}}.$$
Now, we calculate the partial derivative of $g(\alpha,\delta)$ with respect to $\alpha$ as follows:

$$\begin{aligned}
\frac{\partial g(\alpha,\delta)}{\partial \alpha}
&= \frac{\partial}{\partial \alpha}\left[\frac{1}{2+\alpha^{2}}\sqrt{\frac{\alpha^{2}(4+5\alpha^{2})+\delta(2+\alpha^{2})^{2}}{1-\delta}}\right] \\
&= \frac{(2+\alpha^{2})\left(4\alpha+10\alpha^{3}+4\alpha\delta+2\alpha^{3}\delta\right)-2\alpha\left[\alpha^{2}(4+5\alpha^{2})+\delta(2+\alpha^{2})^{2}\right]}{(2+\alpha^{2})^{2}\sqrt{(1-\delta)\left[\alpha^{2}(4+5\alpha^{2})+\delta(2+\alpha^{2})^{2}\right]}} \\
&= \frac{8\alpha(1+2\alpha^{2})}{(2+\alpha^{2})^{2}\sqrt{(1-\delta)\left[\alpha^{2}(4+5\alpha^{2})+\delta(2+\alpha^{2})^{2}\right]}}.
\end{aligned}$$

Next, we calculate the partial derivative of $g(\alpha,\delta)$ with respect to $\delta$, obtaining

$$\begin{aligned}
\frac{\partial g(\alpha,\delta)}{\partial \delta}
&= \frac{\partial}{\partial \delta}\left[\frac{1}{2+\alpha^{2}}\sqrt{\frac{\alpha^{2}(4+5\alpha^{2})+\delta(2+\alpha^{2})^{2}}{1-\delta}}\right] \\
&= \frac{(1-\delta)(2+\alpha^{2})^{2}+\alpha^{2}(4+5\alpha^{2})+\delta(2+\alpha^{2})^{2}}{2(2+\alpha^{2})\sqrt{(1-\delta)^{3}\left[\alpha^{2}(4+5\alpha^{2})+\delta(2+\alpha^{2})^{2}\right]}} \\
&= \frac{2+\alpha^{2}(4+3\alpha^{2})}{(2+\alpha^{2})\sqrt{(1-\delta)^{3}\left[\alpha^{2}(4+5\alpha^{2})+\delta(2+\alpha^{2})^{2}\right]}}.
\end{aligned}$$

Substituting these derivatives into Eq (A.1), we obtain

$$g(\hat{\alpha},\hat{\delta}) \approx \frac{1}{2+\alpha^{2}}\sqrt{\frac{\alpha^{2}(4+5\alpha^{2})+\delta(2+\alpha^{2})^{2}}{1-\delta}} + \frac{8\alpha(1+2\alpha^{2})\,(\hat{\alpha}-\alpha)}{(2+\alpha^{2})^{2}\sqrt{(1-\delta)\left[\alpha^{2}(4+5\alpha^{2})+\delta(2+\alpha^{2})^{2}\right]}} + \frac{\left[2+\alpha^{2}(4+3\alpha^{2})\right](\hat{\delta}-\delta)}{(2+\alpha^{2})\sqrt{(1-\delta)^{3}\left[\alpha^{2}(4+5\alpha^{2})+\delta(2+\alpha^{2})^{2}\right]}},

as $n\to\infty$. It is well known that the asymptotic distributions of $\hat{\alpha}$ and $\hat{\delta}$ are given by

$$\sqrt{n(1-\delta)}\,(\hat{\alpha}-\alpha) \xrightarrow{D} N\!\left(0,\tfrac{\alpha^{2}}{2}\right) \quad \text{and} \quad \sqrt{n}\,(\hat{\delta}-\delta) \xrightarrow{D} N\!\left(0,\delta(1-\delta)\right),$$

    respectively. We have calculated the asymptotic mean of the coefficient of variation of the Delta-Birnbaum-Saunders distribution as follows:

$$\begin{aligned}
E\left[g(\hat{\alpha},\hat{\delta})\right]
&\approx g(\alpha,\delta) + \frac{\partial g(\alpha,\delta)}{\partial \alpha}\,E(\hat{\alpha}-\alpha) + \frac{\partial g(\alpha,\delta)}{\partial \delta}\,E(\hat{\delta}-\delta) \\
&\approx \frac{1}{2+\alpha^{2}}\sqrt{\frac{\alpha^{2}(4+5\alpha^{2})+\delta(2+\alpha^{2})^{2}}{1-\delta}},
\end{aligned}$$

since $E(\hat{\alpha}-\alpha)\approx 0$ and $E(\hat{\delta}-\delta)\approx 0$ asymptotically.

    In addition, the asymptotic variance of the coefficient of variation of the Delta-Birnbaum-Saunders distribution is given by

$$\begin{aligned}
V\left[g(\hat{\alpha},\hat{\delta})\right]
&\approx \left[\frac{\partial g(\alpha,\delta)}{\partial \alpha}\right]^{2}V(\hat{\alpha}) + \left[\frac{\partial g(\alpha,\delta)}{\partial \delta}\right]^{2}V(\hat{\delta}) \\
&= \frac{1}{(2+\alpha^{2})^{2}(1-\delta)\left[\alpha^{2}(4+5\alpha^{2})+\delta(2+\alpha^{2})^{2}\right]}\left\{\frac{32\alpha^{4}(1+2\alpha^{2})^{2}}{n(1-\delta)(2+\alpha^{2})^{2}} + \frac{\delta\left[2+\alpha^{2}(4+3\alpha^{2})\right]^{2}}{n(1-\delta)}\right\}.
\end{aligned}$$

Note that, asymptotically, $\hat{\alpha}\sim N\!\left(\alpha,\tfrac{\alpha^{2}}{2n(1-\delta)}\right)$ and $\hat{\delta}\sim N\!\left(\delta,\tfrac{\delta(1-\delta)}{n}\right)$.
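The closed-form derivatives above can be sanity-checked numerically. A minimal sketch using central finite differences at an arbitrary (hypothetical) parameter point:

```python
import numpy as np

def cv_delta_bs(alpha, delta):
    """g(alpha, delta): CV of the delta-Birnbaum-Saunders distribution."""
    s = alpha**2 * (4 + 5 * alpha**2) + delta * (2 + alpha**2) ** 2
    return np.sqrt(s / (1 - delta)) / (2 + alpha**2)

def grad_cv(alpha, delta):
    """Closed-form partial derivatives derived in the appendix."""
    s = alpha**2 * (4 + 5 * alpha**2) + delta * (2 + alpha**2) ** 2
    d_alpha = 8 * alpha * (1 + 2 * alpha**2) / (
        (2 + alpha**2) ** 2 * np.sqrt((1 - delta) * s))
    d_delta = (2 + alpha**2 * (4 + 3 * alpha**2)) / (
        (2 + alpha**2) * np.sqrt((1 - delta) ** 3 * s))
    return d_alpha, d_delta

# Central finite differences at (alpha, delta) = (0.75, 0.3).
a, d, h = 0.75, 0.3, 1e-6
num_a = (cv_delta_bs(a + h, d) - cv_delta_bs(a - h, d)) / (2 * h)
num_d = (cv_delta_bs(a, d + h) - cv_delta_bs(a, d - h)) / (2 * h)
ana_a, ana_d = grad_cv(a, d)
print(num_a, ana_a)   # the two values should agree to high precision
print(num_d, ana_d)
```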



  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)