




The work of Markowitz [1] stands as one of the pioneering works on portfolio theory [2] and details Markowitz's major contributions to the rise of modern portfolio theory. Besides providing a thorough description of Markowitz's model of portfolio choice [3], the work in [1] traced future research directions explored by other scientists, such as the suggestion to replace variance with semi-variance as a measure of risk, recommendations on the maximization of the expected logarithmic utility of return, and the outline of a market model developed in depth by Sharpe [4].

However, Markowitz's portfolio selection framework relies on the premise that one can measure an asset's return variance, as emphasized by the statement that "examples of rapidly increasing variances are of mostly academic interest" [1]. This restriction raises concerns about the framework's suitability in a scenario where heavy-tailed distributions better model asset returns by allowing fast-varying volatilities resulting from extreme events. There is evidence in the literature that financial data are better modelled by $\alpha$-stable processes (a heavy-tailed alternative to Brownian motion [5]) or by heavy-tailed time series models [6,7]. Although in this work we rely on the general hypothesis that logarithmic returns in financial data follow an $\alpha$-stable process with parameter $0<\alpha<2$ [8] (which implies that the variance of returns is undefined), without loss of generality, we use results from extreme value theory (EVT) regarding the generalized extreme value (GEV) distribution [9] as an alternative to $\alpha$-stable distributions. This approach can be considered valid since the GEV distribution has fat-tailed behaviour and can be used as a proxy for various fat-tailed distributions.

From an economic point of view, it is well known that extreme share returns on stock markets can have important implications for financial risk management, and several studies have successfully applied the GEV distribution to model financial data [10]. For example, Gettinby et al. [11] characterized the distribution of extreme returns for a UK share index over the years 1975 to 2000. They considered the suitability of several distributions, with the weekly maxima and minima of daily returns best modelled by the GEV and the Generalised Logistic distributions. For the UK case, the Generalised Logistic was a better choice overall, while the GEV offered a similar modelling capability together with important properties stemming from EVT. Also, Hussain & Li [12] studied the distribution of the extreme daily returns of the Shanghai Stock Exchange (SSE) Composite Index. They modelled the SSE Composite Index returns based on data from 1991 to 2013, which indicated that the Generalized Logistic distribution is a better fit for the minima series and that the GEV distribution is a better fit for the maxima series of the returns for the Chinese stock market.

    EVT is a branch of probability and statistics that deals with the modeling of extreme events that are related to maximums and minimums of independent random samples. Applications of this theory are found in finance [13], natural catastrophes, and equipment failures, among others. The books [6,14,15,16] provide extensive coverage that allows for a detailed study of EVT.

    Furthermore, EVT provides a theoretical basis and framework to deal with extreme deviations from the mean of distribution functions (DFs) by restricting the behavior of the DFs in the tails. It focuses on the study of the possible limiting distributions and their properties for the normalized maximum.

Specifically, let $X_1,X_2,\ldots,X_n$ be a sequence of independent and identically distributed (i.i.d.) random variables (RVs) with common distribution function $F$ and set $M_n=\max\{X_1,\ldots,X_n\}$. The theory is concerned with properties of $F$ and of the possible non-degenerate distribution functions $G$ satisfying

$$\lim_{n\to\infty}P\left(\frac{M_n-b_n}{a_n}\le x\right)=\lim_{n\to\infty}F^{n}(a_nx+b_n)=G(x),\quad x\in C(G), \qquad (1.1)$$

for sequences of constants $a_n>0$ and $b_n\in\mathbb{R}$ ($n=1,2,\ldots$), suitably chosen, where $C(G)$ denotes the set of continuity points of $G$.

    The possible distribution functions G satisfying (1.1) have been known for some time [17] and have been extensively studied by several authors from then on. They are also known as max-stable laws (or max-stable distributions) and can only be of three well-known types: Fréchet, Weibull or Gumbel.

Goncu et al. [18] used EVT to model the extreme return behaviour of the Istanbul Stock Exchange (ISE), Turkey. They considered the Gumbel, Fréchet and Weibull distributions for modelling extreme returns over different investment horizons. When the Value at Risk (VaR) is computed with the proposed distributions, their backtesting results indicate that EVT provides superior risk management in all the sub-intervals considered, compared with VaR estimation under the assumption of a normal distribution.

    For statistical applications, the max-stable distributions can be summarized in a single distribution function called generalized extreme value (GEV) distribution. Essentially, the GEV distribution has the cumulative distribution function (CDF) given by

$$G(x)=\exp\left\{-(1+\gamma x)^{-1/\gamma}\right\},\quad 1+\gamma x>0,\ \gamma\in\mathbb{R}, \qquad (1.2)$$

    where γ is the shape parameter.

Our interest in this work is the property of stress-strength probability which, in general terms, consists of the study of the probability of failure of a system or component based on the comparison of the applied stress to the strength of the system. Let stress $Y$ and strength $X$ be independent continuous RVs with probability density function (PDF) $f_Y$ and CDF $F_X$, respectively. The stress-strength probability (or reliability) is defined as

$$R=P(X<Y)=\int_{-\infty}^{\infty}F_X(x)\,f_Y(x)\,dx. \qquad (1.3)$$

    There are several applications of this theory such as in engineering and manufacturing, aerospace and defense, automotive industry, energy sector, healthcare, and electronics, among others. See [19] for more details.

The stress-strength reliability framework is versatile and finds various applications in economics. Besides the present contribution regarding financial data, previous works have applied it to economic inequality [19,20], and the framework has also been explored for financial data in [21,22,23,24].

    The stress-strength probability for the extreme Fréchet, Weibull, and Gumbel distributions has been widely studied in the literature. Nadarajah [25] considered the class of extreme value distributions and derived the corresponding forms for the reliability R in terms of special functions. Confidence limits for R involving Weibull models were presented in [26]. Kundu & Raqab [27] proposed a modified maximum likelihood estimator of R and obtained the asymptotic distribution of the modified maximum likelihood estimators, which was used to construct the confidence interval of R. The previous results of R for Weibull distribution were generalized by Nojosa & Rathie [28], where R was expressed in terms of H-functions. Bayesian estimation of R for Fréchet and Weibull distributions has also been explored [29,30].

The goal of this paper is to present an asset selection approach based on the probability $R=P(X<Y)$ when $X$ and $Y$ represent the returns of two assets. In particular, we seek to derive an expression for $R$ when $X$ and $Y$ have three-parameter GEV distributions and to propose an estimation procedure for $R$ that does not transform the data and imposes as few parameter restrictions as possible.

The paper is organized as follows: In Section 2, we define the H-function, the extreme-value H-function and the three-parameter GEV distribution. Section 3 deals with the derivation of $R$ when $X$ and $Y$ are independent GEV RVs. The maximum likelihood estimation of $R$ is presented in Section 4. In Section 5, we present Monte-Carlo simulations for the estimation of $R$ and also deal with two real situations involving log-returns of stock prices and different-length carbon fibers. The last section deals with the conclusions, and the Appendix presents the correlation matrices of the data set modeled in Section 5.

    In this section, we present definitions and results on which our contributions are based.

    Recently, Rathie et al. [31] introduced the extreme-value H-function as:

$$H(a_1,a_2,a_3,a_4,a_5,a_6):=\int_{0}^{\infty}y^{a_6}\exp\left\{-a_1y-\left(a_2y^{a_3}+a_4\right)^{a_5}\right\}dy, \qquad (2.1)$$

where $\Re(a_1),\Re(a_2),\Re(a_4)\in\mathbb{R}^+$, $a_3,a_5\in\mathbb{C}$, not both $\Re(a_1)$ and $\Re(a_2)$ can be equal to zero at the same time, $\Re(a_6)>-1$ when $a_1\neq0$, or when $a_1=0$ and $\mathrm{sign}(a_3)=\mathrm{sign}(a_5)$, and $\Re(a_6)<-1$ when $a_1=0$ and $\mathrm{sign}(a_3)\neq\mathrm{sign}(a_5)$. In this paper, $\mathbb{R}$, $\mathbb{C}$ and $\Re$ denote the real numbers, the complex numbers and the real part of a complex number, respectively.

In this work, we are interested in the case $a_6=0$. Thus, we omit this parameter from the representation and denote simply:

$$H(a_1,a_2,a_3,a_4,a_5):=\int_{0}^{\infty}\exp\left\{-a_1y-\left(a_2y^{a_3}+a_4\right)^{a_5}\right\}dy. \qquad (2.2)$$

    In the following sections, we prove that all stress-strength probabilities involving three-parameter GEV distribution with shape parameters of equal sign can be written as H-functions.
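For concreteness, the integral in (2.2) can be evaluated by direct numerical quadrature. The following is a minimal sketch in R (our own illustration, not code from the paper; Hev() is a hypothetical helper name):

# Minimal sketch: numerical evaluation of the extreme-value H-function (2.2) with base R.
Hev <- function(a1, a2, a3, a4, a5) {
  integrand <- function(y) exp(-a1 * y - (a2 * y^a3 + a4)^a5)
  integrate(integrand, lower = 0, upper = Inf)$value
}
# Check: with a1 = 1, a2 = 1, a3 = -1, a4 = 0, a5 = -1 the integrand is exp(-2y),
# so the value should be 0.5.
Hev(1, 1, -1, 0, -1)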

    Note that (2.1) generalizes some important cases of the H-function (cf. [32]) defined by

$$H^{m,n}_{p,q}\left[z\,\middle|\,\begin{matrix}(a_1,A_1),\ldots,(a_p,A_p)\\(b_1,B_1),\ldots,(b_q,B_q)\end{matrix}\right]=\frac{1}{2\pi i}\int_{L}\frac{\prod_{j=1}^{m}\Gamma(b_j+B_js)\prod_{j=1}^{n}\Gamma(1-a_j-A_js)}{\prod_{j=m+1}^{q}\Gamma(1-b_j-B_js)\prod_{j=n+1}^{p}\Gamma(a_j+A_js)}\,z^{-s}\,ds, \qquad (2.3)$$

where $0\le m\le q$, $0\le n\le p$ (not both $m$ and $n$ simultaneously zero), $A_j>0$ ($j=1,\ldots,p$), $B_k>0$ ($k=1,\ldots,q$), $a_j$ and $b_k$ are complex numbers such that no poles of $\Gamma(b_k+B_ks)$ ($k=1,\ldots,m$) coincide with poles of $\Gamma(1-a_j-A_js)$ ($j=1,\ldots,n$). $L$ is a suitable contour from $w-i\infty$ to $w+i\infty$, $w\in\mathbb{R}$, separating the poles of the two types mentioned above. For more details, see [32].

An important special case of this function is obtained by taking $a_4=0$, which represents an upper (or lower) bound for its value depending on the sign of $a_5$. This case is, therefore, an extreme value of the function and can be written in terms of the H-function as [31]:

$$H(a_1,a_2,a_3,0,a_5,a_6)=\int_{0}^{\infty}y^{a_6}\exp\left\{-a_1y-a_2^{a_5}y^{a_3a_5}\right\}dy=\frac{1}{a_2^{(1+a_6)/a_3}\,a_3a_5}H^{1,1}_{1,1}\left[a_1a_2^{-1/a_3}\,\middle|\,\begin{matrix}\left(1-\frac{1+a_6}{a_3a_5},\frac{1}{a_3a_5}\right)\\(0,1)\end{matrix}\right]=\frac{1}{a_1^{a_6+1}}H^{1,1}_{1,1}\left[\left(a_2a_1^{-a_3}\right)^{a_5}\,\middle|\,\begin{matrix}(-a_6,a_3a_5)\\(0,1)\end{matrix}\right], \qquad (2.4)$$

    when sign(a3)=sign(a5) and:

$$H(a_1,a_2,a_3,0,a_5,a_6)=\frac{1}{a_2^{(1+a_6)/a_3}\,|a_3a_5|}H^{2,0}_{0,2}\left[a_1a_2^{-1/a_3}\,\middle|\,\begin{matrix}-\\(0,1),\left(\frac{1+a_6}{a_3a_5},\frac{1}{|a_3a_5|}\right)\end{matrix}\right]=\frac{1}{a_1^{a_6+1}}H^{2,0}_{0,2}\left[\left(a_2a_1^{-a_3}\right)^{a_5}\,\middle|\,\begin{matrix}-\\(0,1),(1+a_6,|a_3a_5|)\end{matrix}\right], \qquad (2.5)$$

    otherwise.

The three-parameter GEV distribution is obtained by taking a CDF of the same type as the standard GEV $G$ defined in (1.2). That means, $G(x;\mu,\sigma,\gamma)=G\left(\frac{x-\mu}{\sigma}\right)$. We denote by $X\sim GEV(\mu,\sigma,\gamma)$, $\mu,\gamma\in\mathbb{R}$ and $\sigma\in\mathbb{R}^+$, an RV with CDF given by

$$G(x;\mu,\sigma,\gamma)=\exp\left\{-\left[1+\frac{\gamma}{\sigma}(x-\mu)\right]^{-1/\gamma}\right\},\quad 1+\frac{\gamma}{\sigma}(x-\mu)>0, \qquad (2.6)$$

    where μ is the location parameter, σ is the scale parameter and γ is the shape parameter. The corresponding probability density function (PDF) is given by

$$g(x;\mu,\sigma,\gamma)=G(x;\mu,\sigma,\gamma)\,\frac{1}{\sigma}\left[1+\frac{\gamma}{\sigma}(x-\mu)\right]^{-\frac{1}{\gamma}-1},\quad 1+\frac{\gamma}{\sigma}(x-\mu)>0. \qquad (2.7)$$

    Figure 1 shows the behavior of g for some parameter choices. Note that the location parameter shifts the curve, the scale controls dispersion, and the density changes according to the sign of the shape parameter.

    Figure 1.  Plot for the PDF g.
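A minimal R sketch of (2.6) and (2.7) (our own illustration, not the authors' code; pgev3() and dgev3() are hypothetical names) can be used to reproduce Figure 1-style curves:

# CDF and PDF of the three-parameter GEV for gamma != 0; outside the support
# the CDF is 0 (gamma > 0) or 1 (gamma < 0) and the density is 0.
pgev3 <- function(x, mu, sigma, gamma) {
  z <- 1 + gamma * (x - mu) / sigma
  ifelse(z > 0, exp(-z^(-1 / gamma)), as.numeric(gamma < 0))
}
dgev3 <- function(x, mu, sigma, gamma) {
  z <- 1 + gamma * (x - mu) / sigma
  ifelse(z > 0, pgev3(x, mu, sigma, gamma) * z^(-1 / gamma - 1) / sigma, 0)
}
curve(dgev3(x, mu = 0, sigma = 1, gamma = 0.3), from = -3, to = 6)  # a Figure 1-style curve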

In this section, the reliability of two independent three-parameter GEV RVs is derived in terms of the extreme-value H-function (2.2). In addition, with suitable parameter restrictions, representations of $R$ as an H-function and in explicit form are obtained. We consider the case of two independent GEV distributions with different shape parameters (but of the same sign). Cases of opposite signs of the shape parameters are not normally of interest, as they would indicate that random variables with incompatible supports are being compared. Therefore, these cases are not treated in the present paper.

Theorem 3.1. Let $Y$ and $X$ be independent RVs with distributions $GEV(\mu_1,\sigma_1,\gamma_1)$ and $GEV(\mu_2,\sigma_2,\gamma_2)$, respectively, with $\mu_j\in\mathbb{R}$, $\sigma_j\in\mathbb{R}^+$, $\gamma_j\in\mathbb{R}$ ($\gamma_j\neq0$), $j=1,2$. Then

    When γj>0, j=1,2:

$$R=P(X<Y)=H\left(1,\ \frac{\gamma_2\sigma_1}{\sigma_2\gamma_1},\ -\gamma_1,\ 1+\frac{\gamma_2}{\sigma_2}\left(\mu_1-\mu_2-\frac{\sigma_1}{\gamma_1}\right),\ -\frac{1}{\gamma_2}\right), \qquad (3.1)$$

provided that $\mu_1-\sigma_1/\gamma_1\ge\mu_2-\sigma_2/\gamma_2$. When $\mu_1-\sigma_1/\gamma_1\le\mu_2-\sigma_2/\gamma_2$:

$$R=P(X<Y)=1-H\left(1,\ \frac{\gamma_1\sigma_2}{\sigma_1\gamma_2},\ -\gamma_2,\ 1+\frac{\gamma_1}{\sigma_1}\left(\mu_2-\mu_1-\frac{\sigma_2}{\gamma_2}\right),\ -\frac{1}{\gamma_1}\right). \qquad (3.2)$$

    When γj<0, j=1,2:

$$R=P(X<Y)=H\left(1,\ \frac{\gamma_2\sigma_1}{\sigma_2\gamma_1},\ -\gamma_1,\ 1+\frac{\gamma_2}{\sigma_2}\left(\mu_1-\mu_2-\frac{\sigma_1}{\gamma_1}\right),\ -\frac{1}{\gamma_2}\right), \qquad (3.3)$$

provided that $\mu_1-\sigma_1/\gamma_1\le\mu_2-\sigma_2/\gamma_2$. When $\mu_1-\sigma_1/\gamma_1\ge\mu_2-\sigma_2/\gamma_2$:

$$R=P(X<Y)=1-H\left(1,\ \frac{\gamma_1\sigma_2}{\sigma_1\gamma_2},\ -\gamma_2,\ 1+\frac{\gamma_1}{\sigma_1}\left(\mu_2-\mu_1-\frac{\sigma_2}{\gamma_2}\right),\ -\frac{1}{\gamma_1}\right). \qquad (3.4)$$

In particular, if $\mu_1-\sigma_1/\gamma_1=\mu_2-\sigma_2/\gamma_2$, we have

$$R=\frac{\gamma_2}{\gamma_1}\left(\frac{\gamma_2\sigma_1}{\gamma_1\sigma_2}\right)^{1/\gamma_1}H^{1,1}_{1,1}\left[\left(\frac{\gamma_2\sigma_1}{\gamma_1\sigma_2}\right)^{1/\gamma_1}\,\middle|\,\begin{matrix}\left(\frac{\gamma_1-\gamma_2}{\gamma_1},\frac{\gamma_2}{\gamma_1}\right)\\(0,1)\end{matrix}\right]. \qquad (3.5)$$

Proof. Set $\mu_j\in\mathbb{R}$, $\sigma_j,\gamma_j\in\mathbb{R}^+$ ($j=1,2$). Then

$$R=P(X<Y)=\int_{-\infty}^{\infty}G(x;\mu_2,\sigma_2,\gamma_2)\,g(x;\mu_1,\sigma_1,\gamma_1)\,dx=\int_{M}^{+\infty}\exp\left\{-\left[1+\frac{\gamma_2}{\sigma_2}(x-\mu_2)\right]^{-\frac{1}{\gamma_2}}-\left[1+\frac{\gamma_1}{\sigma_1}(x-\mu_1)\right]^{-\frac{1}{\gamma_1}}\right\}\left[1+\frac{\gamma_1}{\sigma_1}(x-\mu_1)\right]^{-\frac{1}{\gamma_1}-1}\frac{dx}{\sigma_1}, \qquad (3.6)$$

where $M=\max\left\{\mu_1-\frac{\sigma_1}{\gamma_1},\,\mu_2-\frac{\sigma_2}{\gamma_2}\right\}$. Substituting $y=\left[1+\frac{\gamma_1}{\sigma_1}(x-\mu_1)\right]^{-1/\gamma_1}$ and taking $M=\mu_1-\frac{\sigma_1}{\gamma_1}$, we can rewrite (3.6) as

$$R=\int_{0}^{+\infty}\exp\left\{-y-\left[1+\frac{\gamma_2}{\sigma_2}\left(\mu_1-\mu_2-\frac{\sigma_1}{\gamma_1}\right)+\frac{\gamma_2\sigma_1}{\sigma_2\gamma_1}\,y^{-\gamma_1}\right]^{-1/\gamma_2}\right\}dy. \qquad (3.7)$$

Hence, (3.1) follows from (2.1) and (3.7). For the case where $\gamma_j>0$, $j=1,2$, and $\mu_1-\sigma_1/\gamma_1\le\mu_2-\sigma_2/\gamma_2$, it suffices to notice that $P(X<Y)=1-P(Y<X)$ and the result in (3.1) is applied with interchanged sub-indices. For the cases where $\gamma_j<0$, $j=1,2$, the same rationale can be applied, just noticing that in such cases $x$ mostly takes negative values. The case where $\gamma_j=0$, $j=1,2$, can be obtained as a limiting procedure and is explicitly explored later in the present paper. In addition, applying (2.4) with $\mu_1-\sigma_1/\gamma_1=\mu_2-\sigma_2/\gamma_2$, we obtain (3.5).

Remark 3.2. In a practical scenario, the estimates $(\hat\mu_1,\hat\sigma_1,\hat\gamma_1,\hat\mu_2,\hat\sigma_2,\hat\gamma_2)$ should be obtained. Then, if $\mathrm{sign}(\hat\gamma_1)=\mathrm{sign}(\hat\gamma_2)$, the condition $\hat\mu_1-\hat\sigma_1/\hat\gamma_1\ge\hat\mu_2-\hat\sigma_2/\hat\gamma_2$ or $\hat\mu_1-\hat\sigma_1/\hat\gamma_1\le\hat\mu_2-\hat\sigma_2/\hat\gamma_2$ must be verified and the corresponding expression for $R$ should be used.
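As an illustration of Remark 3.2, the following minimal R sketch (our own illustration with hypothetical helper names, building on the Hev() sketch above) checks the support-endpoint condition and evaluates the corresponding expression of Theorem 3.1:

gev_reliability <- function(mu1, s1, g1, mu2, s2, g2) {
  stopifnot(sign(g1) == sign(g2), g1 != 0)
  end1 <- mu1 - s1 / g1; end2 <- mu2 - s2 / g2          # support endpoints
  direct <- if (g1 > 0) end1 >= end2 else end1 <= end2  # which case of Theorem 3.1 applies
  if (direct) {
    Hev(1, g2 * s1 / (s2 * g1), -g1, 1 + (g2 / s2) * (mu1 - mu2 - s1 / g1), -1 / g2)
  } else {
    1 - Hev(1, g1 * s2 / (s1 * g2), -g2, 1 + (g1 / s1) * (mu2 - mu1 - s2 / g2), -1 / g1)
  }
}
# Sanity check: identical marginals must give R = 1/2.
gev_reliability(0, 1, 0.3, 0, 1, 0.3)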

    Next, we consider some special cases of two independent GEV random variables. We have the following immediate consequence of Theorem 3.1:

Corollary 3.3. Let $Y\sim GEV(\mu_1,\sigma_1,\gamma_1)$ and $X\sim GEV(\mu_2,\sigma_2,\gamma_2)$ be independent RVs, with $\gamma_1=\gamma_2=\gamma\in\mathbb{R}$, $\gamma\neq0$, $\mu_1,\mu_2\in\mathbb{R}$, $\sigma_1,\sigma_2\in\mathbb{R}^+$ and $\mu_1-\mu_2=\frac{\sigma_1}{\gamma}-\frac{\sigma_2}{\gamma}$. Then, we have

$$R=P(X<Y)=\frac{\sigma_1^{1/\gamma}}{\sigma_1^{1/\gamma}+\sigma_2^{1/\gamma}}. \qquad (3.8)$$

    Lastly, we consider the cases of two independent GEV with γ1=γ2=0.

Theorem 3.4. Let $Y$ and $X$ be independent RVs with distributions $GEV(\mu_1,\sigma_1,0)$ and $GEV(\mu_2,\sigma_2,0)$, respectively, with $\mu_j\in\mathbb{R}$, $\sigma_j\in\mathbb{R}^+$, $j=1,2$. Then

$$R=e^{\mu_1/\sigma_1}\,H\!\left(e^{\mu_1/\sigma_1},\,e^{\mu_2/\sigma_2},\,\frac{\sigma_1}{\sigma_2},\,0,\,1\right)=e^{\frac{\mu_1-\mu_2}{\sigma_1}}\,\frac{\sigma_2}{\sigma_1}\,H^{1,1}_{1,1}\!\left[e^{\frac{\mu_1-\mu_2}{\sigma_1}}\,\middle|\,\begin{matrix}\left(\frac{\sigma_1-\sigma_2}{\sigma_1},\frac{\sigma_2}{\sigma_1}\right)\\(0,1)\end{matrix}\right]. \qquad (3.9)$$

Proof. Set $\mu_j\in\mathbb{R}$ and $\sigma_j\in\mathbb{R}^+$ ($j=1,2$). Then

$$R=P(X<Y)=\int_{-\infty}^{\infty}G(x;\mu_2,\sigma_2,0)\,g(x;\mu_1,\sigma_1,0)\,dx=\int_{-\infty}^{\infty}\exp\left\{-\exp\left\{-\frac{x-\mu_2}{\sigma_2}\right\}-\exp\left\{-\frac{x-\mu_1}{\sigma_1}\right\}\right\}\exp\left\{-\frac{x-\mu_1}{\sigma_1}\right\}\frac{dx}{\sigma_1}. \qquad (3.10)$$

Substituting $y=\exp\{-x/\sigma_1\}$, we can rewrite (3.10) as

$$R=e^{\mu_1/\sigma_1}\int_{0}^{+\infty}\exp\left\{-e^{\mu_1/\sigma_1}\,y-e^{\mu_2/\sigma_2}\,y^{\sigma_1/\sigma_2}\right\}dy. \qquad (3.11)$$

    Hence, (3.9) follows from (2.1) and (3.11).

We have the following immediate consequence of Theorem 3.4.

Corollary 3.5. Let $Y\sim GEV(\mu_1,\sigma_1,0)$ and $X\sim GEV(\mu_2,\sigma_2,0)$ be independent RVs, with $\sigma_1=\sigma_2=\sigma\in\mathbb{R}^+$. Then, we have

$$R=P(X<Y)=\frac{e^{\mu_1/\sigma}}{e^{\mu_1/\sigma}+e^{\mu_2/\sigma}}. \qquad (3.12)$$

The results presented in Theorems 3.1 and 3.4 are more general than those presented in the literature. The H-function allows us to write the probability $R$ with as few parameter restrictions as possible. Table 1 lists related studies and their parameter restrictions.

Table 1.  Extreme distributions and related studies of stress-strength probability.
sign(γ) | Distribution | Reference | Parameter restriction
0 | Gumbel | [25] | σ1 = σ2, or σ1 = 2σ2, or σ2/σ1 > 1
1 | Fréchet | [25] | μ1 = μ2 and (γ1 = γ2, or γ2 = 2γ1, or γ2/γ1 = p/q)*
  |  | [30] | μ1 = μ2 = 0
  |  | [29] | μ1 = μ2 = 0 and γ1 = γ2
  |  | [33] | μ1 = μ2 = 0
−1 | Weibull (min) | [25] | μ1 = μ2 and (γ1 = γ2, or γ2 = 2γ1, or γ2/γ1 = p/q)*
  |  | [26] | μ1 = μ2 = 0 and γ1 = γ2 = γ
  |  | [27] | μ1 = μ2 = μ and γ1 = γ2 = γ
  |  | [28] | μ1 = μ2 = 0
*p and q are coprime integers.


Remark 3.6. The particular case of the GEV with $\mathrm{sign}(\gamma)=-1$ (cf. [14]) is called the reversed Weibull distribution. The Weibull distribution studied by the authors cited in Table 1 is obtained as the limit of a normalized minimum of i.i.d. RVs. That is, the Weibull distribution is obtained by

$$\lim_{n\to\infty}P\left(\frac{\min\{X_1,\ldots,X_n\}-b_n}{a_n}\le x\right)=\lim_{n\to\infty}\left(1-\left(1-F(a_nx+b_n)\right)^{n}\right)=1-\exp\left\{-\left(\frac{x-\mu}{\sigma}\right)^{\gamma}\right\},\quad x\ge\mu, \qquad (3.13)$$

where $X_1,\ldots,X_n$ are i.i.d. RVs with distribution function $F$, and $a_n$ and $b_n$ are suitable sequences of constants (see Theorem 2.1.5 in [14]).

Let $X_1,\ldots,X_n$ be i.i.d. RVs with distribution $GEV(\mu_2,\sigma_2,\gamma_2)$ and $Y$ be an independent RV with distribution $GEV(\mu_1,\sigma_1,\gamma_1)$. Set $M_n=\max\{X_1,\ldots,X_n\}$. Then, $P(M_n\le u)=G^{n}(u;\mu_2,\sigma_2,\gamma_2)$ and we have

$$P(X_1<Y,\ldots,X_n<Y)=P(M_n\le Y)=\int_{-\infty}^{\infty}G^{n}(u;\mu_2,\sigma_2,\gamma_2)\,g(u;\mu_1,\sigma_1,\gamma_1)\,du=:I_n. \qquad (3.14)$$

Closed expressions for (3.14) are presented in the following result. Its proof follows the same steps as those of Theorems 3.1 and 3.4 and will be omitted.

Theorem 3.7. Let $X_1,\ldots,X_n$ be i.i.d. RVs with distribution $GEV(\mu_2,\sigma_2,\gamma_2)$ and $Y$ be an independent RV with distribution $GEV(\mu_1,\sigma_1,\gamma_1)$. Then

    When γj>0, j=1,2:

$$P(X_1<Y,\ldots,X_n<Y)=H\left(1,\ \frac{\gamma_2\sigma_1}{\sigma_2\gamma_1}n^{-\gamma_2},\ -\gamma_1,\ \left[1+\frac{\gamma_2}{\sigma_2}\left(\mu_1-\mu_2-\frac{\sigma_1}{\gamma_1}\right)\right]n^{-\gamma_2},\ -\frac{1}{\gamma_2}\right), \qquad (3.15)$$

provided that $\mu_1-\sigma_1/\gamma_1\ge\mu_2-\sigma_2/\gamma_2$. When $\mu_1-\sigma_1/\gamma_1\le\mu_2-\sigma_2/\gamma_2$:

$$P(X_1<Y,\ldots,X_n<Y)=1-H\left(1,\ \frac{\gamma_1\sigma_2}{\sigma_1\gamma_2}n^{\gamma_2},\ -\gamma_2,\ 1+\frac{\gamma_1}{\sigma_1}\left(\mu_2-\mu_1-\frac{\sigma_2}{\gamma_2}\right),\ -\frac{1}{\gamma_1}\right). \qquad (3.16)$$

    When γj<0, j=1,2:

$$P(X_1<Y,\ldots,X_n<Y)=H\left(1,\ \frac{\gamma_2\sigma_1}{\sigma_2\gamma_1}n^{-\gamma_2},\ -\gamma_1,\ \left[1+\frac{\gamma_2}{\sigma_2}\left(\mu_1-\mu_2-\frac{\sigma_1}{\gamma_1}\right)\right]n^{-\gamma_2},\ -\frac{1}{\gamma_2}\right), \qquad (3.17)$$

provided that $\mu_1-\sigma_1/\gamma_1\le\mu_2-\sigma_2/\gamma_2$. When $\mu_1-\sigma_1/\gamma_1\ge\mu_2-\sigma_2/\gamma_2$:

$$P(X_1<Y,\ldots,X_n<Y)=1-H\left(1,\ \frac{\gamma_1\sigma_2}{\sigma_1\gamma_2}n^{\gamma_2},\ -\gamma_2,\ 1+\frac{\gamma_1}{\sigma_1}\left(\mu_2-\mu_1-\frac{\sigma_2}{\gamma_2}\right),\ -\frac{1}{\gamma_1}\right). \qquad (3.18)$$

    When γ1=γ2=0:

$$P(X_1<Y,\ldots,X_n<Y)=e^{\mu_1/\sigma_1}\,H\!\left(e^{\mu_1/\sigma_1},\ n\,e^{\mu_2/\sigma_2},\ \frac{\sigma_1}{\sigma_2},\ 0,\ 1\right). \qquad (3.19)$$

Remark 3.8. In a broader $k$-out-of-$n$ multicomponent reliability context, consider independent RVs $Y,X_1,\ldots,X_k$ with $Y\sim GEV(\mu_1,\sigma_1,\gamma_1)$ and $X_j\sim GEV(\mu_2,\sigma_2,\gamma_2)$, for $j=1,\ldots,k$ (the $X_j$'s are i.i.d.). The reliability for this kind of model is given by

$$R_{s,k}=P\left(\text{at least } s \text{ out of } (X_1,\ldots,X_k) \text{ exceed } Y\right)=\sum_{j=s}^{k}\binom{k}{j}\int_{-\infty}^{\infty}\left(1-G(u;\mu_2,\sigma_2,\gamma_2)\right)^{j}\left(G(u;\mu_2,\sigma_2,\gamma_2)\right)^{k-j}g(u;\mu_1,\sigma_1,\gamma_1)\,du.$$

    Using a binomial expansion, we obtain

$$R_{s,k}=\sum_{j=s}^{k}\sum_{r=0}^{j}\binom{k}{j}\binom{j}{r}(-1)^{j-r}\int_{-\infty}^{\infty}\left(G(u;\mu_2,\sigma_2,\gamma_2)\right)^{k-r}g(u;\mu_1,\sigma_1,\gamma_1)\,du. \qquad (3.20)$$

Note that the integral terms in (3.20) are particular cases of (3.14), provided that $n=k-r$. Therefore,

$$R_{s,k}=\sum_{j=s}^{k}\sum_{r=0}^{j}\binom{k}{j}\binom{j}{r}(-1)^{j-r}I_{k-r}.$$
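A minimal numerical sketch of (3.14) and (3.20) in R (our own illustration with hypothetical helper names; it integrates the pgev3()/dgev3() functions sketched earlier instead of using the H-function forms of Theorem 3.7):

I_n <- function(n, mu1, s1, g1, mu2, s2, g2) {
  # I_n of (3.14): integral of G^n(u; mu2, s2, g2) g(u; mu1, s1, g1) over the real line
  integrate(function(u) pgev3(u, mu2, s2, g2)^n * dgev3(u, mu1, s1, g1),
            lower = -Inf, upper = Inf)$value
}
R_sk <- function(s, k, ...) {
  # R_{s,k} of (3.20) via the binomial expansion and the integrals I_{k-r}
  total <- 0
  for (j in s:k) for (r in 0:j)
    total <- total + choose(k, j) * choose(j, r) * (-1)^(j - r) * I_n(k - r, ...)
  total
}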

This section deals with parameter estimation for $R=P(X<Y)$ given two independent GEV RVs. The literature presents some maximum likelihood estimators of $R$ that rely on explicit forms of $R$ obtained under strong parameter restrictions on the extreme value distributions (such as [27,29,30]). Those approaches require the parameters to be estimated jointly from the two samples. In our case, we drop any requirement of shared parameters between the samples, since we deal with expressions of $R$ in terms of the function $H$.

Consider the PDF $g(\cdot;\mu,\sigma,\gamma)$ defined in (2.7). Take $\boldsymbol{X}=(X_1,\ldots,X_n)$ as a sample of $n$ observations. The likelihood function for the $GEV(\mu,\sigma,\gamma)$ is given by:

$$L(\mu,\sigma,\gamma;\boldsymbol{X})=\prod_{i=1}^{n}g(X_i;\mu,\sigma,\gamma)\,\mathbb{1}_{[1+\gamma(X_i-\mu)/\sigma>0]}, \qquad (4.1)$$

where $\mathbb{1}_A$ denotes the indicator function of the set $A$. Note that $\prod_{i=1}^{n}\mathbb{1}_{[1+\gamma(X_i-\mu)/\sigma>0]}>0$ if and only if $x_i\in\operatorname{supp}g(\cdot;\mu,\sigma,\gamma)$ for all $i=1,\ldots,n$. Here, $\operatorname{supp}g$ denotes the support of the function $g$. Then, if $\gamma\neq0$, we are not able to obtain the MLE explicitly, so an additional numerical procedure is required in the likelihood maximization (see [6] for a more detailed discussion).
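The numerical maximization can be done with base R alone. The sketch below (an assumed workflow, not the authors' script) maximizes the log-likelihood derived from (4.1) with optim(), reusing the dgev3() density sketched earlier:

gev_mle <- function(x) {
  nll <- function(p) {
    mu <- p[1]; sigma <- exp(p[2]); gamma <- p[3]  # log-parametrization keeps sigma > 0
    d <- dgev3(x, mu, sigma, gamma)
    if (any(d <= 0)) return(1e10)                  # penalize points outside the support
    -sum(log(d))
  }
  fit <- optim(c(mean(x), log(sd(x)), 0.1), nll)
  c(mu = fit$par[1], sigma = exp(fit$par[2]), gamma = fit$par[3])
}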

Remark 4.1. Set $\boldsymbol{X}=(X_1,\ldots,X_n)$, a random sample of $GEV(\mu_2,\sigma_2,\gamma_2)$, and $\boldsymbol{Y}=(Y_1,\ldots,Y_m)$, a random sample of $GEV(\mu_1,\sigma_1,\gamma_1)$, with $\gamma_j>0$, $j=1,2$, and $\mu_1-\sigma_1/\gamma_1\ge\mu_2-\sigma_2/\gamma_2$ (or $\gamma_j<0$, $j=1,2$, and $\mu_1-\sigma_1/\gamma_1\le\mu_2-\sigma_2/\gamma_2$). Let $\hat\mu_i,\hat\sigma_i,\hat\gamma_i$ ($i=1,2$) be the estimates of $\mu_i,\sigma_i,\gamma_i$. We are able to estimate $R$ by the invariance property of the MLE, as follows:

$$\hat{R}=H\left(1,\ \frac{\hat\gamma_2\hat\sigma_1}{\hat\sigma_2\hat\gamma_1},\ -\hat\gamma_1,\ 1+\frac{\hat\gamma_2}{\hat\sigma_2}\left(\hat\mu_1-\hat\mu_2-\frac{\hat\sigma_1}{\hat\gamma_1}\right),\ -\frac{1}{\hat\gamma_2}\right). \qquad (4.2)$$

Alternatively, whenever $\gamma_j>0$, $j=1,2$, and $\mu_1-\sigma_1/\gamma_1\le\mu_2-\sigma_2/\gamma_2$ (or $\gamma_j<0$, $j=1,2$, and $\mu_1-\sigma_1/\gamma_1\ge\mu_2-\sigma_2/\gamma_2$), the same invariance property can be applied, leading to:

$$\hat{R}=1-H\left(1,\ \frac{\hat\gamma_1\hat\sigma_2}{\hat\sigma_1\hat\gamma_2},\ -\hat\gamma_2,\ 1+\frac{\hat\gamma_1}{\hat\sigma_1}\left(\hat\mu_2-\hat\mu_1-\frac{\hat\sigma_2}{\hat\gamma_2}\right),\ -\frac{1}{\hat\gamma_1}\right). \qquad (4.3)$$

This is due to Theorems 3.1 and 3.4, which describe $R$ in terms of the function $H$ (an integral, hence a continuous and measurable function).

    Whenever a single set of realizations of the random variables involved is available, the MLE approach above is of utmost importance. This is the case, for example, of asset selection, when a single time series of observed returns is available for each asset.

On the other hand, to illustrate the suitability of the analytical closed-form expressions hereby derived, a direct simulation approach can be carried out. In such a case, several samples of size $n$ can be drawn from each random variable and used to estimate the empirical value of $R$, and the procedure can be repeated several times. Both approaches are explored in the next section.

To evaluate the correctness of the closed-form expressions for $R$ given in Theorem 3.1, we generate $N$ Monte-Carlo samples, each of size $n$, of the random variables $GEV(\mu_2,\sigma_2,\gamma_2)$ and $GEV(\mu_1,\sigma_1,\gamma_1)$. In these cases, the values of $\mu_2,\sigma_2,\gamma_2,\mu_1,\sigma_1,\gamma_1$ are pre-specified.

The GEV distribution with a negative shape parameter is treated in Tables 2 and 3, where we analyze the estimates $\hat{R}$, the bias and the root mean squared error (RMSE). Table 4 deals with a positive shape parameter.

    Table 2.  Negative-shape mean, bias and RMSE of ˆRMC (N=100 and n=100).
    μ2 σ2 γ2 μ1 σ1 γ1 R ˆRMC Bias RMSE
    2.0 1.5 -1.0 0.0 0.5 -0.3 0.1147 0.1151 -0.0004 0.0330
    0.0 1.5 -1.0 0.0 0.5 -0.3 0.4350 0.4280 0.0070 0.0502
    0.4 1.5 -1.0 0.5 0.5 -0.3 0.4650 0.4591 0.0059 0.0525
    2.0 1.0 -1.0 0.0 0.7 -0.3 0.0798 0.0796 0.0002 0.0241
    0.0 1.0 -1.0 0.0 0.7 -0.3 0.5298 0.5277 0.0021 0.0466
    0.4 1.0 -1.0 0.5 0.7 -0.3 0.5686 0.5669 0.0017 0.0492
    2.0 1.5 -1.0 0.0 0.9 -0.3 0.1414 0.1387 0.0027 0.0389
    0.0 1.5 -1.0 0.0 0.9 -0.3 0.5092 0.5129 -0.0037 0.0462
    0.4 1.5 -1.0 0.5 0.9 -0.3 0.5371 0.5362 0.0009 0.0461
    2.0 1.5 -1.0 0.0 0.5 -1.0 0.1015 0.1025 -0.0010 0.0313
    0.0 1.5 -1.0 0.0 0.5 -1.0 0.3851 0.3873 -0.0022 0.0463
    0.4 1.5 -1.0 0.5 0.5 -1.0 0.4116 0.4159 -0.0043 0.0511
    2.0 1.0 -1.0 0.0 0.7 -1.0 0.0590 0.0573 0.0017 0.0270
    0.0 1.0 -1.0 0.0 0.7 -1.0 0.4358 0.4275 0.0083 0.0530
    0.4 1.0 -1.0 0.5 0.7 -1.0 0.4816 0.4807 0.0009 0.0474
    2.0 1.5 -1.0 0.0 0.9 -1.0 0.1104 0.1120 -0.0016 0.0277
    0.0 1.5 -1.0 0.0 0.9 -1.0 0.4190 0.4279 -0.0089 0.0474
    0.4 1.5 -1.0 0.5 0.9 -1.0 0.4478 0.4500 -0.0022 0.0506
    2.0 1.5 -1.5 0.0 0.5 -1.5 0.1237 0.1234 0.0003 0.0302
    0.0 1.5 -1.5 0.0 0.5 -1.5 0.3715 0.3701 0.0014 0.0488
    0.4 1.5 -1.5 0.5 0.5 -1.5 0.3989 0.4034 -0.0045 0.0472
    2.0 1.0 -1.5 0.0 0.7 -1.5 0.0822 0.0797 0.0025 0.0276
    0.0 1.0 -1.5 0.0 0.7 -1.5 0.4160 0.4134 0.0026 0.0403
    0.4 1.0 -1.5 0.5 0.7 -1.5 0.4739 0.4794 -0.0055 0.0548
    2.0 1.5 -1.5 0.0 0.9 -1.5 0.1271 0.1303 -0.0032 0.0325
    0.0 1.5 -1.5 0.0 0.9 -1.5 0.3999 0.4068 -0.0069 0.0457
    0.4 1.5 -1.5 0.5 0.9 -1.5 0.4329 0.4344 -0.0015 0.0452

    Table 3.  Negative-shape mean, bias and RMSE of ˆRMC (N=1000 and n=1000).
    μ2 σ2 γ2 μ1 σ1 γ1 R ˆRMC Bias RMSE
    2.0 1.5 -1.0 0.0 0.5 -0.3 0.1147 0.1147 -0.0000 0.0106
    0.0 1.5 -1.0 0.0 0.5 -0.3 0.4350 0.4355 -0.0005 0.0151
    0.4 1.5 -1.0 0.5 0.5 -0.3 0.4650 0.4645 0.0005 0.0156
    2.0 1.0 -1.0 0.0 0.7 -0.3 0.0798 0.0798 0.0001 0.0084
    0.0 1.0 -1.0 0.0 0.7 -0.3 0.5298 0.5299 -0.0001 0.0159
    0.4 1.0 -1.0 0.5 0.7 -0.3 0.5686 0.5693 -0.0007 0.0151
    2.0 1.5 -1.0 0.0 0.9 -0.3 0.1414 0.1414 -0.0000 0.0112
    0.0 1.5 -1.0 0.0 0.9 -0.3 0.5092 0.5096 -0.0004 0.0162
    0.4 1.5 -1.0 0.5 0.9 -0.3 0.5371 0.5374 -0.0003 0.0158
    2.0 1.5 -1.0 0.0 0.5 -1.0 0.1015 0.1018 -0.0003 0.0098
    0.0 1.5 -1.0 0.0 0.5 -1.0 0.3851 0.3852 -0.0001 0.0152
    0.4 1.5 -1.0 0.5 0.5 -1.0 0.4116 0.4119 -0.0003 0.0152
    2.0 1.0 -1.0 0.0 0.7 -1.0 0.0590 0.0587 0.0003 0.0074
    0.0 1.0 -1.0 0.0 0.7 -1.0 0.4358 0.4366 -0.0008 0.0154
    0.4 1.0 -1.0 0.5 0.7 -1.0 0.4816 0.4814 0.0002 0.0150
    2.0 1.5 -1.0 0.0 0.9 -1.0 0.1104 0.1106 -0.0002 0.0101
    0.0 1.5 -1.0 0.0 0.9 -1.0 0.4190 0.4196 -0.0006 0.0157
    0.4 1.5 -1.0 0.5 0.9 -1.0 0.4478 0.4471 0.0008 0.0150
    2.0 1.5 -1.5 0.0 0.5 -1.5 0.1237 0.1243 -0.0006 0.0103
    0.0 1.5 -1.5 0.0 0.5 -1.5 0.3715 0.3722 -0.0007 0.0158
    0.4 1.5 -1.5 0.5 0.5 -1.5 0.3989 0.3974 0.0014 0.0154
    2.0 1.0 -1.5 0.0 0.7 -1.5 0.0822 0.0822 0.0000 0.0086
    0.0 1.0 -1.5 0.0 0.7 -1.5 0.4160 0.4166 -0.0005 0.0160
    0.4 1.0 -1.5 0.5 0.7 -1.5 0.4739 0.4736 0.0004 0.0162
    2.0 1.5 -1.5 0.0 0.9 -1.5 0.1271 0.1265 0.0007 0.0105
    0.0 1.5 -1.5 0.0 0.9 -1.5 0.3999 0.3998 0.0001 0.0155
    0.4 1.5 -1.5 0.5 0.9 -1.5 0.4329 0.4323 0.0006 0.0151

    Table 4.  Positive-shape mean, bias and RMSE of ˆRMC (N=1000 and n=100).
    μ2 σ2 γ2 μ1 σ1 γ1 R ˆRMC Bias RMSE
    2.0 1.5 1.0 0.0 0.5 0.3 0.0617 0.0624 -0.0007 0.0244
    0.0 1.5 1.0 0.0 0.5 0.3 0.4288 0.4306 -0.0018 0.0497
    0.4 1.5 1.0 0.5 0.5 0.3 0.4491 0.4478 0.0012 0.0514
    2.0 1.0 1.0 0.0 0.7 0.3 0.0906 0.0891 0.0015 0.0284
    0.0 1.0 1.0 0.0 0.7 0.3 0.4443 0.4463 -0.0020 0.0507
    0.4 1.0 1.0 0.5 0.7 0.3 0.4717 0.4727 -0.0010 0.0499
    2.0 1.5 1.0 0.0 0.9 0.3 0.1299 0.1300 -0.0001 0.0332
    0.0 1.5 1.0 0.0 0.9 0.3 0.4419 0.4423 -0.0004 0.0509
    0.4 1.5 1.0 0.5 0.9 0.3 0.4611 0.4602 0.0008 0.0497
    2.0 1.5 1.0 0.0 0.5 1.0 0.1469 0.1474 -0.0004 0.0359
    0.0 1.5 1.0 0.0 0.5 1.0 0.4764 0.4774 -0.0010 0.0486
    0.4 1.5 1.0 0.5 0.5 1.0 0.4947 0.4940 0.0007 0.0504
    2.0 1.0 1.0 0.0 0.7 1.0 0.1846 0.1847 -0.0001 0.0382
    0.0 1.0 1.0 0.0 0.7 1.0 0.4980 0.4985 -0.0006 0.0520
    0.4 1.0 1.0 0.5 0.7 1.0 0.5240 0.5235 0.0004 0.0480
    2.0 1.5 1.0 0.0 0.9 1.0 0.2144 0.2164 -0.0019 0.0431
    0.0 1.5 1.0 0.0 0.9 1.0 0.4951 0.4925 0.0026 0.0497
    0.4 1.5 1.0 0.5 0.9 1.0 0.5128 0.5124 0.0004 0.0480
    2.0 1.5 1.5 0.0 0.5 1.5 0.1868 0.1883 -0.0015 0.0393
    0.0 1.5 1.5 0.0 0.5 1.5 0.4906 0.4894 0.0013 0.0523
    0.4 1.5 1.5 0.5 0.5 1.5 0.5076 0.5072 0.0004 0.0495
    2.0 1.0 1.5 0.0 0.7 1.5 0.2237 0.2243 -0.0006 0.0414
    0.0 1.0 1.5 0.0 0.7 1.5 0.5056 0.5052 0.0004 0.0501
    0.4 1.0 1.5 0.5 0.7 1.5 0.5318 0.5310 0.0008 0.0496
    2.0 1.5 1.5 0.0 0.9 1.5 0.2461 0.2451 0.0011 0.0444
    0.0 1.5 1.5 0.0 0.9 1.5 0.5046 0.5042 0.0004 0.0491
    0.4 1.5 1.5 0.5 0.9 1.5 0.5220 0.5226 -0.0006 0.0490


For the simulation, for each line in Tables 2–4, the following procedure was carried out (a code sketch is given after the list):

(1) for each Monte-Carlo sample, the estimate $\hat{R}$ is computed empirically, simply as $n^{-1}\sum_{i}\mathbb{1}(x_i<y_i)$, where $\mathbb{1}(\cdot)$ is an indicator function, which is 1 for true arguments and 0 otherwise;

(2) $\hat{R}_{MC}$ is evaluated by taking the sample mean of the Monte-Carlo estimates $\hat{R}$;

(3) the bias is computed as the difference between the value obtained from Theorem 3.1 and $\hat{R}_{MC}$. The same holds for the RMSE, which also takes the analytically obtained value as the true one.
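A minimal R sketch of this procedure (our own illustration; rgev3() inverts (2.6), and gev_reliability() is the helper sketched earlier):

rgev3 <- function(n, mu, sigma, gamma) mu + sigma * ((-log(runif(n)))^(-gamma) - 1) / gamma
mc_check <- function(N, n, mu2, s2, g2, mu1, s1, g1) {
  R_true <- gev_reliability(mu1, s1, g1, mu2, s2, g2)        # value from Theorem 3.1
  R_hat  <- replicate(N, mean(rgev3(n, mu2, s2, g2) < rgev3(n, mu1, s1, g1)))
  c(R = R_true, R_MC = mean(R_hat),
    Bias = R_true - mean(R_hat), RMSE = sqrt(mean((R_hat - R_true)^2)))
}
mc_check(100, 100, mu2 = 2, s2 = 1.5, g2 = -1, mu1 = 0, s1 = 0.5, g1 = -0.3)  # first row of Table 2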

As expected, Tables 2 and 3 illustrate that the analytical results match the empirical ones. This is clear evidence of the correctness of the new expressions hereby derived. Besides, it is clear that increasing the sample size $n$ leads to more precise estimates of $R$, with reduced bias and greater consistency. In Table 4, we observe the same good behavior of the estimator, characterized by low bias and RMSE.

In order to evaluate the proposed framework, we model stock price log-returns as GEV RVs and compare them in a reliability sense. For this, we assume that the returns are independent. To meet this independence requirement, we take stock log-returns that show low correlation, with correlations measured using the Pearson, Kendall and Spearman methods.

Denote by $X_1,X_2,X_3$ and $X_4$ the stock price log-returns of BBAS3, ITUB4, VALE3 and VIIA3, respectively. The data sets are retrieved directly through the software R [34] by the command

    quantmod::getSymbols("BBAS3.SA", src = "yahoo", auto.assign = FALSE, from = '2022-01-01', to = '2023-04-30', return.class = 'xts').

    The data sets have information in Brazilian currency (R$, BRL).
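A minimal sketch of the assumed workflow for building the log-return series from the downloaded prices (not necessarily the authors' exact script):

library(quantmod)
bbas3 <- getSymbols("BBAS3.SA", src = "yahoo", auto.assign = FALSE,
                    from = "2022-01-01", to = "2023-04-30", return.class = "xts")
x1 <- as.numeric(na.omit(diff(log(Cl(bbas3)))))  # daily log-returns of BBAS3 (X1)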

    Summary statistics for the data sets X1,X2,X3 and X4 are presented in Table 5. A boxplot is presented in Figure 2 showing the symmetry around zero of the log-returns and that X4 has more dispersion than the others.

    Table 5.  Summary statistics for the stock price log-returns X1,X2,X3 and X4.
    RV Data Set min 1st Qu. Median Mean 3rd Qu. Max n
    X1 BBAS3 -0.1057 -0.0097 0.0019 0.0012 0.0136 0.0736 330
    X2 ITUB4 -0.0492 -0.0105 0.0004 0.0006 0.0109 0.0794 330
    X3 VALE3 -0.0689 -0.0140 0.0001 -0.0002 0.0128 0.0989 330
    X4 VIIA3 -0.1075 -0.0344 -0.006 -0.0030 0.0231 0.1504 330

    Figure 2.  Boxplot for the stock price log-returns of the data sets BBAS3, ITUB4, VALE3 and VIIA3.

    Maximum likelihood (ML) estimates and Kolmogorov-Smirnov (KS) p-values are given in Table 6. Figure 3 shows histograms and the fit of the GEV model to X1,X2,X3 and X4. For each data set, the empirical CDF (ECDF) is compared to the theoretical model in Figure 4.
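A minimal sketch of how the fits in Table 6 can be reproduced (an assumed workflow using the gev_mle() and pgev3() helpers sketched earlier and the x1 vector of BBAS3 log-returns):

par_x1 <- gev_mle(x1)   # ML estimates (mu, sigma, gamma)
# KS test against the fitted GEV CDF (ties in the data may trigger a warning)
ks.test(x1, pgev3, mu = par_x1["mu"], sigma = par_x1["sigma"], gamma = par_x1["gamma"])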

    Table 6.  ML estimates, log-likelihood (llmax) and KS p-values for the GEV models.
    RV Data set ˆμ ˆσ ˆγ llmax KS p-value
    X1 BBAS3 -0.0063 0.0219 -0.2535 -803.9168 0.0147
    X2 ITUB4 -0.0064 0.0165 -0.1545 -870.9392 0.4299
    X3 VALE3 -0.0095 0.0222 -0.1631 -774.0895 0.2331
    X4 VIIA3 -0.0217 0.0396 -0.1170 -567.6979 0.6996

    Figure 3.  Histograms and fitted GEV densities for the stock prices log-returns.
    Figure 4.  Fitted ECDF for GEV models.

    Although the p-value of the Kolmogorov-Smirnov test is small for the BBAS3 (X1), the graphical analysis does not invalidate the good fit of the distribution to the data.

Aiming at estimating probabilities of the type $R=P(X<Y)$ via Theorems 3.1–3.4, we need $X$ and $Y$ to be independent RVs. In this sense, we analyzed the dependency structures of $X_1,\ldots,X_4$ using the Pearson, Kendall and Spearman correlation matrices, and the results are presented in the Appendix. The pairs chosen to be compared are $X_3$ and $X_2$, $X_3$ and $X_4$, and $X_3$ and $X_1$.

    Reliability measures of the type R=P(X<Y) appear in a decision process of an investor. In summary, when X and Y represent profit RVs and R<1/2, it is advisable that the investor chooses the variable X. If R>1/2, the opposite occurs. The case R=1/2 is inconclusive. Thus, knowing how to evaluate R accurately is important to support the decision process. In this sense, Table 7 presents the estimates of P(X3<X1), P(X3<X2) and P(X3<X4) and the 95% bootstrap confidence intervals (CI). For all subsequent Tables, ˆR is the estimate obtained by first fitting the dataset to a GEV distribution and using the results from Remark 4.1 (invariance property of MLE of the parameters and the analytical expression obtained for R). Also, ˆRNP is a non-parametric estimation of R, which considers a similar approach as in the simulation study (uses an indicator function to compare samples from the two distributions). Finally, ˆRBoot is the mean value of the bootstrap estimation of R using the non-parametric approach and the CI reported is for such mean.
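A minimal sketch of the non-parametric estimate and its bootstrap CI (our own illustration; x3 and x1 are assumed to hold the VALE3 and BBAS3 log-return vectors built as in the earlier snippet):

R_np <- function(x, y) mean(x < y)                       # non-parametric estimate of P(X < Y)
set.seed(1)
boot <- replicate(2000, R_np(sample(x3, replace = TRUE),
                             sample(x1, replace = TRUE)))
c(R_boot = mean(boot), quantile(boot, c(0.025, 0.975)))  # bootstrap mean and 95% CI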

    Table 7.  Stress-strength probability estimates and bootstrap CI.
    R=P(X<Y) ˆR ˆRNP ˆRBoot 95% CI
    P(X3<X1) 0.5283 0.5242 0.5277 (0.4883; 0.5676)
    P(X3<X2) 0.5174 0.5242 0.5181 (0.4766; 0.5569)
    P(X3<X4) 0.4506 0.4364 0.4500 (0.4055; 0.4944)


According to the decision rule above, the estimates of $R$ indicate that, within the analyzed time period, BBAS3 and ITUB4 would be preferred over VALE3, while VALE3 would be preferred over VIIA3. On the other hand, the bootstrap CI estimates indicate that only the case $P(X_3<X_4)$ was conclusive, since 0.5 does not belong to the estimated confidence interval.

    It is important to compare different distributions as candidate models for the log-returns modelling. Considering previous results in the literature [11], we compared the performance of the GEV and generalized logistic distributions as models for daily returns, as presented in Table 8.

    Table 8.  Log-likelihood comparison between different candidate random variables.
    RV Data set GEV Generalized Logistic
X1 BBAS3 -803.92 -833.43
X2 ITUB4 -870.94 -882.01
X3 VALE3 -774.09 -787.81
X4 VIIA3 -567.70 -569.60


    It is possible to see that both GEV and generalized logistic provided quite similar modelling capabilities (about the same log-likelihood values). Since the two distributions considered have three parameters, there is no need to consider information criteria.

    Instead of considering the log-returns for the closing prices, as in the previous analyses, one may study how the weekly maximums (or minimums) behave. This has a direct economic interpretation: a proxy for greater profits (or greater losses, i.e., shortfalls and values-at-risk) and has been explored in the literature [11,12,18].

Following previous studies [12], it is of interest to explore the modelling scenario where the weekly maximum returns are considered GEV random variables. Table 9 shows the GEV ML estimates for this case.
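A minimal sketch of how the weekly maxima can be built from the daily log-returns (an assumed workflow; bbas3 is the xts object downloaded in the earlier snippet):

ret_x1 <- na.omit(diff(log(Cl(bbas3))))                       # daily log-returns as an xts series
x1_weekly_max <- as.numeric(xts::apply.weekly(ret_x1, max))   # weekly maxima to be fitted as GEV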

    Table 9.  ML estimates, log-likelihood (llmax) and KS p-values for weekly maximum log-returns and their GEV modelling.
    RV Data set ˆμ ˆσ ˆγ llmax KS p-value
    X1 BBAS3 -0.0128 0.0393 -0.1157 -116.3016 0.8076
    X2 ITUB4 -0.091 0.0397 -0.4319 -126.5530 0.8292
    X3 VALE3 -0.0197 0.0426 -0.2003 -114.0183 0.9365
    X4 VIIA3 -0.0464 0.0750 -0.1339 -72.9854 0.5378


    Table 9 indicates that the GEV random variable is adequate for every case considered, which is expected as the EVT predicts such convergence. Now that the parameters have been obtained, Table 10 presents the reliability calculations.

    Table 10.  Stress-strength probability estimates and bootstrap CI for weekly maximums.
    R=P(X<Y) ˆR ˆRNP ˆRBoot 95% CI
    P(X3<X1) 0.5453 0.5522 0.5413 (0.4483; 0.6334)
    P(X3<X2) 0.5409 0.5522 0.5483 (0.4421; 0.6358)
    P(X3<X4) 0.4364 0.4328 0.4350 (0.3379; 0.5350)


    Since R=0.5 is within the confidence intervals, the metric becomes inconclusive. It is possible to notice, on the other hand, that if ˆR is considered as a metric by itself, it would precisely reflect the high volatility of VIIA3, which suffered severe instability and losses during the time window analyzed.

    Besides the direct application to financial assets selection, some engineering applications can also benefit from the new relations hereby defined. One application is illustrated in the next subsection.

Stress-strength reliability can also be applied to the modelling and comparison of carbon fibers of lengths 10 and 20 mm. The data represent strength measured in GPa (gigapascals) for single carbon fibers tested under tension. These data are frequently used in the literature and are also presented below (e.g., [28]).

    Carbon fibers of length 20 mm:

    X=(1.312,1.314,1.479,1.552,1.700,1.803,1.861,1.865,1.944,1.958,1.966,1.977,2.006,2.021,2.027,2.055,2.063,2.098,2.140,2.179,2.224,2.240,2.253,2.270,2.272,2.274,2.301,2.301,2.359,2.382,2.382,2.426,2.434,2.435,2.478,2.490,2.511,2.514,2.535,2.554,2.566,2.570,2.586,2.629,2.633,2.642,2.648,2.684,2.697,2.726,2.770,2.773,2.800,2.809,2.818,2.821,2.848,2.880,2.954,3.012,3.067,3.084,3.090,3.096,3.128,3.233,3.433,3.585,3.585).

    Carbon fibers of length 10 mm:

    Y=(1.901,2.132,2.203,2.228,2.257,2.350,2.361,2.396,2.397,2.445,2.454,2.474,2.518,2.522,2.525,2.532,2.575,2.614,2.616,2.618,2.624,2.659,2.675,2.738,2.740,2.856,2.917,2.928,2.937,2.937,2.977,2.996,3.030,3.125,3.139,3.145,3.220,3.223,3.235,3.243,3.264,3.272,3.294,3.332,3.346,3.377,3.408,3.435,3.493,3.501,3.537,3.554,3.562,3.628,3.852,3.871,3.886,3.971,4.024,4.027,4.225,4.395,5.020).

Table 11 and Figure 5 show the summary statistics of $X$ and $Y$, from which it is possible to observe that $Y$ (carbon fibers of length 10 mm) tends to have greater strength values than $X$ (carbon fibers of length 20 mm).

    Table 11.  Summary statistics for the carbon fibers of length 20 mm (X) and 10 mm (Y).
    RV Data set Min. 1st Qu. Median Mean 3rd Qu. Max. n
    X Carbon fibers of 20 mm 1.312 2.098 2.478 2.451 2.773 3.585 69
    Y Carbon fibers of 10 mm 1.901 2.554 2.996 3.059 3.421 5.020 63

    Figure 5.  Boxplot for the carbon fibers of length 20 mm (X) and 10 mm (Y).

    ML estimates and KS p-values for the GEV model are presented in Table 12 and the good fit can be observed in Figure 6.

    Table 12.  ML estimates and Kolmogorov-Smirnov (KS) p-values for the GEV model.
    Data set ˆμ ˆσ ˆγ KS p-value
    X 2.2781 0.4956 -0.2851 0.9978
    Y 2.7904 0.5245 -0.0747 0.8216

    Figure 6.  Histogram, ECDF and fitted GEV model for carbon fibers.

The estimates of the stress-strength reliability and the bootstrap 95% CI are $\hat{R}=0.774$ and (0.699, 0.851), respectively. Since $P(X<Y)>1/2$ and 0.5 is not within the CI, we conclude that $X<Y$ in a statistical sense, i.e., the fibers of length 10 mm have statistically greater strength values than those of length 20 mm.

Despite its significant limitations, Markowitz's modern portfolio theory is still relied upon by many practitioners because of its user-friendly simplicity. For this reason, studying alternative approaches that are also straightforward to understand is of utmost importance.

    In this paper, we studied the stress-strength reliability R=P(X<Y) when both X and Y follow three-parameter GEV distributions. In summary, when X and Y represent return RVs and R<1/2, it is advisable that the investor chooses the variable X. If R>1/2, the opposite occurs. The case R=1/2 is inconclusive. Thus, exact expressions for R have been obtained in terms of the extreme-value H-function with minimal parameter restrictions. With additional restrictions, it was shown that R can be calculated in terms of H-functions and even in terms of standard functions (such as exponential functions).

    Monte-Carlo simulations attested to the performance of the analytical closed-form expressions hereby derived. By applying our methodology to real-world financial data, we could orient a stock selection procedure by calculating P(X<Y) when both X and Y represent stock returns. Besides, an engineering application was also described, where carbon fibers tested under tension were modelled in a stress-strength reliability sense.

In this appendix, we present the correlation matrices of the data sets (log-returns) modeled in Section 5. Tables 13–15 give the Pearson, Spearman and Kendall correlations, respectively.

    Table 13.  Pearson correlation matrix.
    X1 X2 X3 X4
    X1 1.00 0.61 0.10 0.29
    X2 1.00 0.15 0.31
    X3 1.00 0.03
    X4 1.00

    Table 14.  Spearman rank correlation matrix.
    X1 X2 X3 X4
    X1 1.00 0.63 0.15 0.31
    X2 1.00 0.22 0.34
    X3 1.00 0.06
    X4 1.00

    Table 15.  Kendall rank correlation matrix.
    X1 X2 X3 X4
    X1 1.00 0.45 0.10 0.21
    X2 1.00 0.15 0.24
    X3 1.00 0.04
    X4 1.00


    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors acknowledge the support provided by the University of Brasilia (UnB). Additionally, M.O. acknowledges the Coordination for the Improvement of Higher Education Personnel (CAPES) for awarding him a scholarship for a Master of Science program.

    The authors declare no conflicts of interest.



    [75] K.-K. Maninis, J. Pont-Tuset, P. Arbeláez, L. Van Gool, Deep retinal image understanding, in International conference on medical image computing and computer-assisted intervention, Springer, 2016,140–148.
    [76] B. Al-Bander, W. Al-Nuaimy, B. M. Williams, Y. Zheng, Multiscale sequential convolutional neural networks for simultaneous detection of fovea and optic disc, Biomed. Signal Process Control., 40 (2018), 91–101. doi: 10.1016/j.bspc.2017.09.008
    [77] A. Mitra, P. S. Banerjee, S. Roy, S. Roy, S. K. Setua, The region of interest localization for glaucoma analysis from retinal fundus image using deep learning, Comput. Methods Programs Biomed., 165 (2018), 25–35. doi: 10.1016/j.cmpb.2018.08.003
    [78] S. S. Kruthiventi, K. Ayush, R. V. Babu, Deepfix: A fully convolutional neural network for predicting human eye fixations, IEEE Trans. Image. Process., 26 (2017), 4446–4456. doi: 10.1109/TIP.2017.2710620
    [79] M. Norouzifard, A. Nemati, A. Abdul-Rahman, H. GholamHosseini, R. Klette, A comparison of transfer learning techniques, deep convolutional neural network and multilayer neural network methods for the diagnosis of glaucomatous optic neuropathy, in International Computer Symposium, Springer, 2018,627–635.
    [80] X. Sun, Y. Xu, W. Zhao, T. You, J. Liu, Optic disc segmentation from retinal fundus images via deep object detection networks, in EMBC., IEEE, 2018, 5954–5957.
    [81] Z. Ghassabi, J. Shanbehzadeh, K. Nouri-Mahdavi, A unified optic nerve head and optic cup segmentation using unsupervised neural networks for glaucoma screening, in EMBC., IEEE, 2018, 5942–5945.
    [82] J. H. Tan, S. V. Bhandary, S. Sivaprasad, Y. Hagiwara, A. Bagchi, U. Raghavendra, et al., Age-related macular degeneration detection using deep convolutional neural network, Future Gener. Comput. Syst., 87 (2018), 127–135. doi: 10.1016/j.future.2018.05.001
    [83] J. Zilly, J. M. Buhmann, D. Mahapatra, Glaucoma detection using entropy sampling and ensemble learning for automatic optic cup and disc segmentation, Comput. Med. Imaging Graph., 55 (2017), 28–41. doi: 10.1016/j.compmedimag.2016.07.012
    [84] J. Cheng, J. Liu, Y. Xu, F. Yin, D. W. K. Wong, N.-M. Tan, et al., Superpixel classification based optic disc and optic cup segmentation for glaucoma screening, IEEE Trans. Med. Imaging, 32 (2013), 1019–1032. doi: 10.1109/TMI.2013.2247770
    [85] H. Ahmad, A. Yamin, A. Shakeel, S. O. Gillani, U. Ansari, Detection of glaucoma using retinal fundus images, in iCREATE., IEEE, 2014,321–324.
    [86] S. Kavitha, S. Karthikeyan, K. Duraiswamy, Neuroretinal rim quantification in fundus images to detect glaucoma, IJCSNS., 10 (2010), 134–140.
    [87] Z. Zhang, B. H. Lee, J. Liu, D. W. K. Wong, N. M. Tan, J. H. Lim, et al., Optic disc region of interest localization in fundus image for glaucoma detection in argali, in 2010 5th IEEE Conference on Industrial Electronics and Applications, IEEE, 2010, 1686–1689.
    [88] D. Welfer, J. Scharcanski, C. M. Kitamura, M. M. Dal Pizzol, L. W. Ludwig, D. R. Marinho, Segmentation of the optic disk in color eye fundus images using an adaptive morphological approach, Computers Biol. Med., 40 (2010), 124–137. doi: 10.1016/j.compbiomed.2009.11.009
    [89] H. Tjandrasa, A. Wijayanti, N. Suciati, Optic nerve head segmentation using hough transform and active contours, Telkomnika, 10 (2012), 531. doi: 10.12928/telkomnika.v10i3.833
    [90] M. Tavakoli, M. Nazar, A. Golestaneh, F. Kalantari, Automated optic nerve head detection based on different retinal vasculature segmentation methods and mathematical morphology, in NSS/MIC., IEEE, 2017, 1–7.
    [91] P. Bibiloni, M. González-Hidalgo, S. Massanet, A real-time fuzzy morphological algorithm for retinal vessel segmentation, J. Real Time Image Process., 16 (2019), 2337–2350. doi: 10.1007/s11554-018-0748-1
    [92] A. Agarwal, A. Issac, A. Singh, M. K. Dutta, Automatic imaging method for optic disc segmentation using morphological techniques and active contour fitting, in 2016 Ninth International Conference on Contemporary Computing (IC3), IEEE, 2016, 1–5.
    [93] S. Pal, S. Chatterjee, Mathematical morphology aided optic disk segmentation from retinal images, in 2017 3rd International Conference on Condition Assessment Techniques in Electrical Systems (CATCON), IEEE, 2017,380–385.
    [94] L. Wang, A. Bhalerao, Model based segmentation for retinal fundus images, in Scandinavian Conference on Image Analysis, Springer, 2003,422–429.
    [95] G. Deng, L. Cahill, An adaptive gaussian filter for noise reduction and edge detection, in 1993 IEEE conference record nuclear science symposium and medical imaging conference, IEEE, 1993, 1615–1619.
    [96] K. A. Vermeer, F. M. Vos, H. G. Lemij, A. M. Vossepoel, A model based method for retinal blood vessel detection, Comput. Biol. Med., 34 (2004), 209–219. doi: 10.1016/S0010-4825(03)00055-6
    [97] J. I. Orlando, E. Prokofyeva, M. B. Blaschko, A discriminatively trained fully connected conditional random field model for blood vessel segmentation in fundus images, IEEE Trans. Biomed. Eng., 64 (2016), 16–27.
    [98] R. Ingle, P. Mishra, Cup segmentation by gradient method for the assessment of glaucoma from retinal image, Int. J. Latest Trends. Eng. Technol., 4 (2013), 2540–2543.
    [99] G. D. Joshi, J. Sivaswamy, S. Krishnadas, Optic disk and cup segmentation from monocular color retinal images for glaucoma assessment, IEEE Trans. Biomed. Eng., 30 (2011), 1192–1205.
    [100] W. W. K. Damon, J. Liu, T. N. Meng, Y. Fengshou, W. T. Yin, Automatic detection of the optic cup using vessel kinking in digital retinal fundus images, in 2012 9th IEEE International Symposium on Biomedical Imaging (ISBI), IEEE, 2012, 1647–1650.
    [101] D. Finkelstein, Kinks, J. Math. Phys., 7 (1966), 1218–1225.
    [102] Y. Xu, L. Duan, S. Lin, X. Chen, D. W. K. Wong, T. Y. Wong, et al., Optic cup segmentation for glaucoma detection using low-rank superpixel representation, in International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, 2014,788–795.
    [103] Y. Xu, J. Liu, S. Lin, D. Xu, C. Y. Cheung, T. Aung, et al., Efficient optic cup detection from intra-image learning with retinal structure priors, in International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, 2012, 58–65.
    [104] M. A. Aslam, M. N. Salik, F. Chughtai, N. Ali, S. H. Dar, T. Khalil, Image classification based on mid-level feature fusion, in 2019 15th International Conference on Emerging Technologies (ICET), IEEE, 2019, 1–6.
    [105] N. Ali, K. B. Bajwa, R. Sablatnig, S. A. Chatzichristofis, Z. Iqbal, M. Rashid, et al., A novel image retrieval based on visual words integration of sift and surf, PloS. one., 11 (2016), e0157428. doi: 10.1371/journal.pone.0157428
    [106] C.-Y. Ho, T.-W. Pai, H.-T. Chang, H.-Y. Chen, An atomatic fundus image analysis system for clinical diagnosis of glaucoma, in 2011 International Conference on Complex, Intelligent, and Software Intensive Systems, IEEE, 2011,559–564.
    [107] H.-T. Chang, C.-H. Liu, T.-W. Pai, Estimation and extraction of b-cell linear epitopes predicted by mathematical morphology approaches, J. Mol. Recognit., 21 (2008), 431–441. doi: 10.1002/jmr.910
    [108] D. Wong, J. Liu, J. Lim, N. Tan, Z. Zhang, S. Lu, et al., Intelligent fusion of cup-to-disc ratio determination methods for glaucoma detection in argali, in 2009 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, IEEE, 2009, 5777–5780.
    [109] F. Yin, J. Liu, D. W. K. Wong, N. M. Tan, C. Cheung, M. Baskaran, et al., Automated segmentation of optic disc and optic cup in fundus images for glaucoma diagnosis, in 2012 25th IEEE international symposium on computer-based medical systems (CBMS), IEEE, 2012, 1–6.
    [110] S. Chandrika, K. Nirmala, Analysis of cdr detection for glaucoma diagnosis, IJERA., 2 (2013), 23–27.
    [111] N. Annu, J. Justin, Automated classification of glaucoma images by wavelet energy features, IJERA., 5 (2013), 1716–1721.
    [112] H. Fu, J. Cheng, Y. Xu, D. W. K. Wong, J. Liu, X. Cao, Joint optic disc and cup segmentation based on multi-label deep network and polar transformation, IEEE Trans. Med. Imaging., 37 (2018), 1597–1605. doi: 10.1109/TMI.2018.2791488
    [113] P. K. Dhar, T. Shimamura, Blind svd-based audio watermarking using entropy and log-polar transformation, JISA., 20 (2015), 74–83.
    [114] D. Wong, J. Liu, J. Lim, H. Li, T. Wong, Automated detection of kinks from blood vessels for optic cup segmentation in retinal images, in Medical Imaging 2009: Computer-Aided Diagnosis, vol. 7260, International Society for Optics and Photonics, 2009, 72601J.
    [115] A. Murthi, M. Madheswaran, Enhancement of optic cup to disc ratio detection in glaucoma diagnosis, in 2012 International Conference on Computer Communication and Informatics, IEEE, 2012, 1–5.
    [116] N. E. A. Khalid, N. M. Noor, N. M. Ariff, Fuzzy c-means (fcm) for optic cup and disc segmentation with morphological operation, Procedia. Comput. Sci., 42 (2014), 255–262. doi: 10.1016/j.procs.2014.11.060
    [117] H. A. Nugroho, W. K. Oktoeberza, A. Erasari, A. Utami, C. Cahyono, Segmentation of optic disc and optic cup in colour fundus images based on morphological reconstruction, in 2017 9th International Conference on Information Technology and Electrical Engineering (ICITEE), IEEE, 2017, 1–5.
    [118] L. Zhang, M. Fisher, W. Wang, Retinal vessel segmentation using multi-scale textons derived from keypoints, Comput. Med. Imaging Graph., 45 (2015), 47–56. doi: 10.1016/j.compmedimag.2015.07.006
    [119] X. Li, B. Aldridge, R. Fisher, J. Rees, Estimating the ground truth from multiple individual segmentations incorporating prior pattern analysis with application to skin lesion segmentation, in 2011 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, IEEE, 2011, 1438–1441.
    [120] M. M. Fraz, P. Remagnino, A. Hoppe, S. Velastin, B. Uyyanonvara, S. Barman, A supervised method for retinal blood vessel segmentation using line strength, multiscale gabor and morphological features, in 2011 IEEE International Conference on Signal and Image Processing Applications (ICSIPA), IEEE, 2011,410–415.
    [121] M. Niemeijer, J. Staal, B. van Ginneken, M. Loog, M. D. Abramoff, Comparative study of retinal vessel segmentation methods on a new publicly available database, in Medical imaging 2004: Image processing, vol. 5370, International Society for Optics and Photonics, 2004,648–656.
    [122] J. Y. Choi, T. K. Yoo, J. G. Seo, J. Kwak, T. T. Um, T. H. Rim, Multi-categorical deep learning neural network to classify retinal images: a pilot study employing small database, PloS one, 12.
    [123] L. Zhang, M. Fisher, W. Wang, Comparative performance of texton based vascular tree segmentation in retinal images, in 2014 IEEE International Conference on Image Processing (ICIP), IEEE, 2014,952–956.
    [124] A. Septiarini, A. Harjoko, R. Pulungan, R. Ekantini, Automated detection of retinal nerve fiber layer by texture-based analysis for glaucoma evaluation, Healthc. Inform. Res., 24 (2018), 335–345. doi: 10.4258/hir.2018.24.4.335
    [125] B. S. Kirar, D. K. Agrawal, Computer aided diagnosis of glaucoma using discrete and empirical wavelet transform from fundus images, IET Image Process., 13 (2018), 73–82.
    [126] K. Nirmala, N. Venkateswaran, C. V. Kumar, J. S. Christobel, Glaucoma detection using wavelet based contourlet transform, in 2017 International Conference on Intelligent Computing and Control (I2C2), IEEE, 2017, 1–5.
    [127] A. A. G. Elseid, A. O. Hamza, Glaucoma detection using retinal nerve fiber layer texture features, J. Clin. Eng., 44 (2019), 180–185. doi: 10.1097/JCE.0000000000000361
    [128] M. Claro, L. Santos, W. Silva, F. Araújo, N. Moura, A. Macedo, Automatic glaucoma detection based on optic disc segmentation and texture feature extraction, CLEI Electron. J., 19 (2016), 5.
    [129] L. Abdel-Hamid, Glaucoma detection from retinal images using statistical and textural wavelet features, J. Digit Imaging., 1–8.
    [130] S. Maetschke, B. Antony, H. Ishikawa, G. Wollstein, J. Schuman, R. Garnavi, A feature agnostic approach for glaucoma detection in oct volumes, PloS. one., 14 (2019), e0219126. doi: 10.1371/journal.pone.0219915
    [131] D. C. Hood, A. S. Raza, On improving the use of oct imaging for detecting glaucomatous damage, Br. J. Ophthalmol., 98 (2014), ii1–ii9. doi: 10.1136/bjophthalmol-2014-305156
    [132] D. C. Hood, Improving our understanding, and detection, of glaucomatous damage: an approach based upon optical coherence tomography (oct), Prog.Retin. Eye Res., 57 (2017), 46–75. doi: 10.1016/j.preteyeres.2016.12.002
    [133] H. S. Basavegowda, G. Dagnew, Deep learning approach for microarray cancer data classification, CAAI Trans. Intell. Technol., 5 (2020), 22–33. doi: 10.1049/trit.2019.0028
    [134] X. Chen, Y. Xu, D. W. K. Wong, T. Y. Wong, J. Liu, Glaucoma detection based on deep convolutional neural network, in 2015 37th annual international conference of the IEEE engineering in medicine and biology society (EMBC), IEEE, 2015,715–718.
    [135] U. T. Nguyen, A. Bhuiyan, L. A. Park, K. Ramamohanarao, An effective retinal blood vessel segmentation method using multi-scale line detection, Pattern Recognit., 46 (2013), 703–715. doi: 10.1016/j.patcog.2012.08.009
    [136] J. H. Tan, U. R. Acharya, S. V. Bhandary, K. C. Chua, S. Sivaprasad, Segmentation of optic disc, fovea and retinal vasculature using a single convolutional neural network, J. Comput. Sci., 20 (2017), 70–79. doi: 10.1016/j.jocs.2017.02.006
    [137] Y. Chai, H. Liu, J. Xu, Glaucoma diagnosis based on both hidden features and domain knowledge through deep learning models, Knowl. Based Syst., 161 (2018), 147–156. doi: 10.1016/j.knosys.2018.07.043
    [138] A. Pal, M. R. Moorthy, A. Shahina, G-eyenet: A convolutional autoencoding classifier framework for the detection of glaucoma from retinal fundus images, in 2018 25th IEEE International Conference on Image Processing (ICIP), IEEE, 2018, 2775–2779.
    [139] R. Asaoka, M. Tanito, N. Shibata, K. Mitsuhashi, K. Nakahara, Y. Fujino, et al., Validation of a deep learning model to screen for glaucoma using images from different fundus cameras and data augmentation, Ophthalmol. Glaucoma., 2 (2019), 224–231. doi: 10.1016/j.ogla.2019.03.008
    [140] X. Chen, Y. Xu, S. Yan, D. W. K. Wong, T. Y. Wong, J. Liu, Automatic feature learning for glaucoma detection based on deep learning, in International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, 2015,669–677.
    [141] A. Singh, S. Sengupta, V. Lakshminarayanan, Glaucoma diagnosis using transfer learning methods, in Applications of Machine Learning, vol. 11139, International Society for Optics and Photonics, 2019, 111390U.
    [142] S. Maheshwari, V. Kanhangad, R. B. Pachori, Cnn-based approach for glaucoma diagnosis using transfer learning and lbp-based data augmentation, arXiv. preprint.
    [143] S. Sengupta, A. Singh, H. A. Leopold, T. Gulati, V. Lakshminarayanan, Ophthalmic diagnosis using deep learning with fundus images–a critical review, Artif. Intell. Med., 102 (2020), 101758. doi: 10.1016/j.artmed.2019.101758
    [144] U. Raghavendra, H. Fujita, S. V. Bhandary, A. Gudigar, J. H. Tan, U. R. Acharya, Deep convolution neural network for accurate diagnosis of glaucoma using digital fundus images, Inf. Sci., 441 (2018), 41–49. doi: 10.1016/j.ins.2018.01.051
    [145] R. Asaoka, H. Murata, A. Iwase, M. Araie, Detecting preperimetric glaucoma with standard automated perimetry using a deep learning classifier, Ophthalmol., 123 (2016), 1974–1980. doi: 10.1016/j.ophtha.2016.05.029
    [146] Z. Li, Y. He, S. Keel, W. Meng, R. T. Chang, M. He, Efficacy of a deep learning system for detecting glaucomatous optic neuropathy based on color fundus photographs, Ophthalmol., 125 (2018), 1199–1206. doi: 10.1016/j.ophtha.2018.01.023
    [147] V. V. Raghavan, V. N. Gudivada, V. Govindaraju, C. R. Rao, Cognitive computing: Theory and applications, Elsevier., 2016.
    [148] J. Sivaswamy, S. Krishnadas, G. D. Joshi, M. Jain, A. U. S. Tabish, Drishti-gs: Retinal image dataset for optic nerve head (onh) segmentation, in 2014 IEEE 11th international symposium on biomedical imaging (ISBI), IEEE, 2014, 53–56.
    [149] A. Chakravarty, J. Sivaswamy, Glaucoma classification with a fusion of segmentation and image-based features, in 2016 IEEE 13th international symposium on biomedical imaging (ISBI), IEEE, 2016,689–692.
    [150] E. Decencière, X. Zhang, G. Cazuguel, B. Lay, B. Cochener, C. Trone, et al., Feedback on a publicly distributed image database: The messidor database, Image Analys. Stereol., 33 (2014), 231–234. doi: 10.5566/ias.1155
    [151] A. Allam, A. Youssif, A. Ghalwash, Automatic segmentation of optic disc in eye fundus images: a survey, ELCVIA, 14 (2015), 1–20. doi: 10.5565/rev/elcvia.762
    [152] P. Porwal, S. Pachade, R. Kamble, M. Kokare, G. Deshmukh, V. Sahasrabuddhe, et al., Indian diabetic retinopathy image dataset (idrid): A database for diabetic retinopathy screening research, Data, 3 (2018), 25. doi: 10.3390/data3030025
    [153] F. Calimeri, A. Marzullo, C. Stamile, G. Terracina, Optic disc detection using fine tuned convolutional neural networks, in 2016 12th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), IEEE, 2016, 69–75.
    [154] M. N. Reza, Automatic detection of optic disc in color fundus retinal images using circle operator, Biomed. Signal. Process. Control., 45 (2018), 274–283. doi: 10.1016/j.bspc.2018.05.027
    [155] Z. Zhang, F. S. Yin, J. Liu, W. K. Wong, N. M. Tan, B. H. Lee, et al., Origa-light: An online retinal fundus image database for glaucoma analysis and research, in 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, IEEE, 2010, 3065–3068.
    [156] Z. Zhang, J. Liu, F. Yin, B.-H. Lee, D. W. K. Wong, K. R. Sung, Achiko-k: Database of fundus images from glaucoma patients, in 2013 IEEE 8th Conference on Industrial Electronics and Applications (ICIEA), IEEE, 2013,228–231.
    [157] F. Yin, J. Liu, D. W. K. Wong, N. M. Tan, B. H. Lee, J. Cheng, et al., Achiko-i retinal fundus image database and its evaluation on cup-to-disc ratio measurement, in 2013 IEEE 8th Conference on Industrial Electronics and Applications (ICIEA), IEEE, 2013,224–227.
    [158] F. Fumero, S. Alayón, J. L. Sanchez, J. Sigut, M. Gonzalez-Hernandez, Rim-one: An open retinal image database for optic nerve evaluation, in 2011 24th international symposium on computer-based medical systems (CBMS), IEEE, 2011, 1–6.
    [159] J. Lowell, A. Hunter, D. Steel, A. Basu, R. Ryder, E. Fletcher, et al., Optic nerve head segmentation, IEEE Trans. Med. Imaging., 23 (2004), 256–264. doi: 10.1109/TMI.2003.823261
    [160] C. C. Sng, L.-L. Foo, C.-Y. Cheng, J. C. Allen Jr, M. He, G. Krishnaswamy, et al., Determinants of anterior chamber depth: the singapore chinese eye study, Ophthalmol., 119 (2012), 1143–1150. doi: 10.1016/j.ophtha.2012.01.011
    [161] Z. Zhang, J. Liu, C. K. Kwoh, X. Sim, W. T. Tay, Y. Tan, et al., Learning in glaucoma genetic risk assessment, in 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, IEEE, 2010, 6182–6185.
    [162] D. Wong, J. Liu, J. Lim, X. Jia, F. Yin, H. Li, et al., Level-set based automatic cup-to-disc ratio determination using retinal fundus images in argali, in 2008 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, IEEE, 2008, 2266–2269.
    [163] T. Ge, L. Cui, B. Chang, Z. Sui, F. Wei, M. Zhou, Seri: A dataset for sub-event relation inference from an encyclopedia, in CCF International Conference on Natural Language Processing and Chinese Computing, Springer, 2018,268–277.
    [164] M. Haloi, Improved microaneurysm detection using deep neural networks, arXiv. preprint..
    [165] B. Antal, A. Hajdu, An ensemble-based system for automatic screening of diabetic retinopathy, Knowl. Based Syst., 60 (2014), 20–27. doi: 10.1016/j.knosys.2013.12.023
    [166] J. Nayak, R. Acharya, P. S. Bhat, N. Shetty, T.-C. Lim, Automated diagnosis of glaucoma using digital fundus images, J. Med. Syst., 33 (2009), 337. doi: 10.1007/s10916-008-9195-z
    [167] J. V. Soares, J. J. Leandro, R. M. Cesar, H. F. Jelinek, M. J. Cree, Retinal vessel segmentation using the 2-d gabor wavelet and supervised classification, IEEE Trans. Med. Imaging., 25 (2006), 1214–1222. doi: 10.1109/TMI.2006.879967
    [168] S. Kankanahalli, P. M. Burlina, Y. Wolfson, D. E. Freund, N. M. Bressler, Automated classification of severity of age-related macular degeneration from fundus photographs, Invest. Ophthalmol. Vis. Sci., 54 (2013), 1789–1796. doi: 10.1167/iovs.12-10928
    [169] K. Prasad, P. Sajith, M. Neema, L. Madhu, P. Priya, Multiple eye disease detection using deep neural network, in TENCON 2019-2019 IEEE Region 10 Conference (TENCON), IEEE, 2019, 2148–2153.
© 2021 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0).