
Data modeling played a crucial role in a variety of research domains due to its widespread practical applications, especially when handling complex datasets. This study explored a specific discrete distribution, characterized by a single parameter, developed using the weighted combining discretization method. The statistical properties of this distribution were rigorously derived and expressed mathematically, covering essential aspects such as moments, skewness, kurtosis, covariance, index of dispersion, order statistics, entropies, mean residual life, residual coefficient of variation function, stress-strength models, and premium principles. These properties highlighted the model's suitability for analyzing right-skewed data with heavy tails, making it a powerful tool for probabilistic modeling in situations where data exhibited overdispersion and increasing failure rates. The research introduced a range of estimation techniques, including maximum product of spacings, method of moments, Anderson-Darling, right-tail Anderson-Darling, maximum likelihood, least squares, weighted least squares, Cramer-Von-Mises, and percentile, each explained in detail. A ranking simulation study was performed to assess the performance of these estimators, with ranking techniques used to determine the most effective estimator across various sample sizes. The study further applied the proposed model to real-world datasets, demonstrating its ability to address complex data scenarios and showcasing its superior performance in comparison to traditional models such as the geometric, Poisson, and negative binomial distributions. Overall, the results emphasized the proposed model's potential as a versatile and effective tool for modeling over-dispersed and skewed data, with promising implications for future research in diverse fields.
Citation: Mahmoud El-Morshedy, Mohamed S. Eliwa, Mohamed El-Dawoody, Hend S. Shahen. A weighted hybrid discrete probability model: Mathematical framework, statistical analysis, estimation techniques, simulation-based ranking, and goodness-of-fit evaluation for over-dispersed data[J]. Electronic Research Archive, 2025, 33(4): 2061-2091. doi: 10.3934/era.2025091
Abbreviations: HDWC: Hybrid discrete weighted combining; PMF: Probability mass function; CDF: Cumulative distribution function; HRF: Hazard rate function; RHRF: Reversed hazard rate function; IFR: Increasing failure rate; MGF: Moment generating function; PGF: Probability generating function; COV: Covariance; IOD: Index of dispersion; Sk and Ku: Skewness and kurtosis; MRL: Mean residual life; SF: Survival function; RCOV: Residual coefficient of variation; SSM: Stress-strength model; EVP: Expected value principle; EPP: Exponential premium principle; MLE: Maximum likelihood estimation; ME: Moment estimation; MPSE: Maximum product of spacings estimator; PCE: Percentile estimator; ADE: Anderson-Darling estimator; RADE: Right-tail Anderson-Darling estimator; LSE: Least-squares estimator; WLSE: Weighted least-squares estimator; CVME: Cramer-Von-Mises estimator; MCMC: Markov chain Monte Carlo; MSE: Mean squared errors; GOF: Goodness-of-fit; AIC: Akaike information criterion; CAIC: Corrected Akaike information criterion; BIC: Bayesian information criterion; HQIC: Hannan-Quinn information criterion; D.F: Degree of freedom; EDS: Empirical descriptive statistics; KS: Kolmogorov-Smirnov statistic; P-P: Probability-Probability
In recent years, there has been remarkable progress in developing discrete probability distributions, which are essential for modeling count data across a wide range of disciplines, including statistical quality control, engineering, epidemiology, biology, agriculture, medicine, and social sciences. These distributions are used to represent diverse scenarios, such as the number of species in an ecological community, customer purchases, regional accidents, or votes received by candidates in an election. Count data, which generally represent the occurrence of random events within a specific time frame, are often viewed as the outcomes of underlying continuous-time count processes. Discretizing continuous data is frequently preferred because it simplifies analysis and interpretation in many practical applications. Such data, consisting of nonnegative integers (including zero), typically lack an upper limit, making them ideal candidates for modeling with discrete probability distributions.
While classical models like the Poisson, geometric, and negative binomial distributions have long been used for this purpose, they come with notable limitations. To address these challenges, statisticians have developed various methods for generating discrete distributions as counterparts to continuous ones, enriching the tools available for analyzing count data. These include the survival discretization approach (see Nakagawa and Osaki [1]), infinite series discretization technique (see Kulasekera and Tonkyn [2] and Sato et al. [3]), round-off method (see Smith [4]), binning method (see Roederer et al. [5]), minimizing Cramer-von Mises-type distances method (see Barbiero and Hitaj [6]), reversed hazard function discretization method (see Ghosh et al. [7]), compound two-phase approach (see Chakraborty [8]), and others. For more details (see Kotsiantis and Kanellopoulos [9]). These methods offer different approaches to discretizing continuous distributions, each with its own advantages and implications. For a comprehensive understanding of these discretization methods and their differences, interested readers are encouraged to refer to Chakraborty [8], which provides a detailed survey on the topic.
Additionally, there is another approach for proposing a discrete probability model known as the combining approach. This method involves combining two probability mass functions (PMFs) to create a new PMF, a technique commonly used in probability theory and statistics. The combining approach is often applied in situations where the outcomes of interest can be modeled by multiple distributions, aiming to produce a combined distribution that captures the characteristics of both original distributions. Various methods can be employed for combining PMFs, including convolution, mixture models, and weighted averages. Convolution, a mathematical operation, merges two functions to generate a third function. In the context of probability distributions, convolution merges the PMFs of two random variables to derive the PMF of their sum, as described in Casella and Berger [10], which is particularly beneficial for dealing with independent random variables. Mixture models, on the other hand, are probabilistic models representing a variable's distribution as a mixture of several component distributions. Each component distribution is weighted by a mixing coefficient, resulting in an overall distribution that is a weighted sum of these components, as outlined in Bishop [11]. This approach is valuable for modeling data originating from different subpopulations, each with its distinct distribution. Furthermore, weighted averages can also be employed to combine PMFs by taking the weighted averages of their values, as discussed in Gelman et al. [12]. This method is frequently utilized in Bayesian statistics, where prior distributions are combined with likelihood functions to yield a posterior distribution. The weighted combining approach is vital for analyzing count-type data across diverse sectors, especially in situations where accurately measuring quality characteristics on a numerical scale is challenging. 
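As an illustration of these combining mechanisms, the following minimal Python sketch (function and variable names are my own, not from the paper) builds a convolution and a weighted combination of two geometric PMFs and checks that each construction remains a valid PMF:

```python
from math import isclose

def geom_pmf(x, theta):
    """Geometric PMF on {0,1,2,...}: Pr(X = x) = (1 - theta) * theta**x."""
    return (1 - theta) * theta**x

# Convolution: PMF of the sum of two independent geometric variables.
def conv_pmf(s, t1, t2):
    return sum(geom_pmf(k, t1) * geom_pmf(s - k, t2) for k in range(s + 1))

# Weighted combination (a mixture when the weights are nonnegative and sum to 1).
def weighted_pmf(x, t1, t2, w1, w2):
    return w1 * geom_pmf(x, t1) + w2 * geom_pmf(x, t2)

N = 200  # truncation point; the neglected tail mass is negligible here
total_conv = sum(conv_pmf(s, 0.4, 0.6) for s in range(N))
total_mix = sum(weighted_pmf(x, 0.4, 0.6, 0.3, 0.7) for x in range(N))
assert isclose(total_conv, 1.0, abs_tol=1e-9)  # both constructions are valid PMFs
assert isclose(total_mix, 1.0, abs_tol=1e-9)
```

The HDWC construction below uses weights 2 and −1, so it is a weighted combination that is not a mixture in the classical sense; nonnegativity must then be verified separately.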
This challenge is particularly pronounced in statistical quality control scenarios involving defects in manufactured items or service error frequencies. By integrating dependencies or correlations between observations within integer-valued time series data models, this approach enhances the accuracy and reliability of performance evaluation, addressing a critical aspect in count-type data analysis where the assumption of observation independence in traditional control charts may not hold. As a result, the discrete weighted PMF stands out as a potent tool that boosts the flexibility, accuracy, and interpretability of probabilistic models, making it indispensable in various analytical and modeling tasks across different domains.
Building upon these principles, the proposed paper introduces a new PMF with a single parameter, known as the hybrid discrete weighted combining (HDWC) distribution. The HDWC model is distinct within the family of statistical models due to its capacity to handle complex data behaviors that other classical distributions may not effectively represent. While the geometric, Poisson, and negative binomial distributions each have their strengths, they struggle with data exhibiting heavy tails, overdispersion, or extreme values. In contrast, the HDWC model is specifically designed to address these challenges. The geometric distribution is useful for modeling the number of trials until the first success in Bernoulli trials, where the data follows a memoryless property. However, it cannot account for overdispersion or heavy tails, limitations that the HDWC model overcomes with its ability to model skewed and heavy-tailed data. Similarly, while the Poisson distribution is widely used to model count data with a constant mean and variance, it fails to handle overdispersion or data with greater variability than the mean. The HDWC model excels in this area, as it accommodates both overdispersion and outlier-sensitive, heavy-tailed data, offering a more robust fit for real-world datasets. The negative binomial distribution extends the Poisson model to address overdispersion by introducing an additional parameter. However, it assumes a specific variance structure, which may not always match observed data. The HDWC model offers greater flexibility, effectively capturing leptokurtic shapes and data with increasing failure rates, making it especially valuable for analyzing extreme data and phenomena with high variability.
The HDWC model presents distinct advantages compared to the geometric, Poisson, and negative binomial distributions, especially in its capacity to handle data with heavy tails, overdispersion, and extreme values. Its closed-form statistical properties and versatility in addressing complex datasets make it a valuable tool for researchers in fields like engineering sustainability, actuarial analysis, and medical data. Thus, the motivation for introducing the HDWC model can be summarized as follows:
● It allows for expressing statistical properties in closed forms with a simple formulation.
● It is suitable for modeling positively skewed data with a leptokurtic shape.
● It can be used to analyze overdispersed data.
● It enables the discussion of extreme and outlier data, fitting well under heavy-tailed shapes.
● It is considered a suitable choice for researchers analyzing the phenomenon of increasing failure rates.
The paper is organized as follows: Section 2 introduces the mathematical framework of the HDWC distribution, while Section 3 explores its key statistical properties. Section 4 derives various mathematical estimation methods for practical applications. In Section 5, these methods are evaluated and compared using a ranked-simulation approach to identify optimal model parameters across different sample sizes. Section 6 demonstrates the HDWC model's flexibility in fitting real-world datasets, highlighting its performance against well-established PMFs in the statistical literature, with applications in system reliability and treatment analysis. Finally, Section 7 concludes with a summary of key findings and suggestions for future research directions.
Let f_1(x;θ) = (1−θ)θ^x represent a geometric distribution where the probability of success in each trial is θ, and the probability of failure is 1−θ. On the other hand, f_2(x;θ) = (1−θ^2)θ^{2x} = (1−θ^2)(θ^2)^x is itself a geometric PMF with parameter θ^2, rather than a separate distribution restricted to the even integers. Let f(x;θ) = 2f_1(x;θ) − f_2(x;θ), where f_1(x;θ) and f_2(x;θ) represent two PMFs such that 2f_1(x;θ) − f_2(x;θ) ≥ 0 for all x = 0, 1, 2, ⋯ and 0 < θ < 1. Then, the PMF of the HDWC distribution can be formulated as
f(x;θ) = 2(1−θ)θ^x − (1−θ^2)θ^{2x},  x = 0, 1, 2, ⋯,  0 < θ < 1.  (2.1)
The conditions for a valid PMF, including nonnegativity and normalization, have been verified. Figure 1 illustrates the potential PMF shapes of the HDWC distribution, revealing unimodal right-skewed patterns with extended right tails.
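These validity checks can be reproduced numerically; the sketch below (a hypothetical helper, not code from the paper) truncates the infinite support and verifies nonnegativity and normalization of Eq (2.1):

```python
def hdwc_pmf(x, theta):
    """HDWC PMF from Eq (2.1)."""
    return 2*(1 - theta)*theta**x - (1 - theta**2)*theta**(2*x)

for theta in (0.1, 0.5, 0.9):
    probs = [hdwc_pmf(x, theta) for x in range(2000)]
    assert all(p >= 0 for p in probs)        # nonnegativity
    assert abs(sum(probs) - 1) < 1e-8        # normalization (truncated series)
```

Nonnegativity also follows analytically, since 2(1−θ)θ^x ≥ (1−θ^2)θ^{2x} reduces to (1+θ)θ^x ≤ 2, which holds for all x ≥ 0 and 0 < θ < 1.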
The corresponding cumulative distribution function (CDF), denoted as F(x;θ), can be written as
F(x;θ) = Σ_{t=0}^{x} Pr(X=t;θ) = 1 − 2θ^{x+1} + θ^{2x+2},  x = 0, 1, 2, ⋯,  0 < θ < 1.  (2.2)
The hazard rate function (HRF) can be expressed as
h(x;θ) = (1−θ)[(1+θ)θ^x − 2] / (θ^x − 2),  x = 0, 1, 2, ⋯,  0 < θ < 1.  (2.3)
The reversed hazard rate function (RHRF) can be expressed as
r(x;θ) = [2(1−θ)θ^x − (1−θ^2)θ^{2x}] / [1 − 2θ^{x+1} + θ^{2x+2}],  x = 0, 1, 2, ⋯,  0 < θ < 1.  (2.4)
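As a cross-check of Eqs (2.2) and (2.3), the following sketch (helper names are assumptions of this illustration) confirms that the CDF is the partial sum of the PMF and that the HRF equals f(x;θ)/Pr(X ≥ x), with Pr(X ≥ x) = θ^x(2 − θ^x):

```python
def hdwc_pmf(x, t):  return 2*(1 - t)*t**x - (1 - t**2)*t**(2*x)
def hdwc_cdf(x, t):  return 1 - 2*t**(x + 1) + t**(2*x + 2)          # Eq (2.2)
def hdwc_hrf(x, t):  return (1 - t)*((1 + t)*t**x - 2)/(t**x - 2)    # Eq (2.3)
def hdwc_rhrf(x, t): return hdwc_pmf(x, t)/hdwc_cdf(x, t)            # Eq (2.4)

theta = 0.6
for x in range(40):
    # The CDF is the partial sum of the PMF.
    partial = sum(hdwc_pmf(k, theta) for k in range(x + 1))
    assert abs(hdwc_cdf(x, theta) - partial) < 1e-9
    # HRF = f(x) / Pr(X >= x), where Pr(X >= x) = theta^x (2 - theta^x).
    surv = theta**x * (2 - theta**x)
    assert abs(hdwc_hrf(x, theta) - hdwc_pmf(x, theta)/surv) < 1e-9
```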
Figure 2 presents the HRF for the HDWC model across different parameter values. The analysis indicates that the failure rate of the HDWC distribution exhibits a monotonically increasing or increasing-constant shape. A log-concave distribution has discrete increasing failure rate (IFR) (see Barlow [13]). The relationship between the IFR and other discrete aging classes like new better than used (NBU) can be explored in Lai and Xie [14].
The finding that the failure rate of the HDWC distribution shows either a consistently increasing trend or a steady, constant increase has significant implications across various domains of data analysis and analytics. In reliability engineering, this behavior aids in predicting system failures and scheduling maintenance. Healthcare analytics can leverage it to assess medical treatment effectiveness over time. Financial risk management benefits from understanding investment failure rates, while manufacturing and quality control can optimize processes based on product failure trends. Additionally, in survival analysis and event history studies, this insight proves valuable for analyzing time-to-event data in epidemiology, social sciences, and marketing, enhancing decision-making and risk mitigation strategies across diverse sectors.
The PMF described in Eq (2.1) exhibits log-concavity across all θ values, with the ratio of successive probabilities f(x+1;θ)/f(x;θ) being a decreasing function in x. For a discrete distribution, the value of B = lim_{x→∞} f(x+1;θ)/f(x;θ) indicates its relative long-tailedness; for the HDWC distribution B = θ > 0, whereas the Poisson distribution has B = 0, so the HDWC distribution exhibits longer tails than the Poisson distribution. Long-tailedness refers to the slow decay of probabilities in the tail of a distribution, where the probability of observing large values remains higher compared to short-tailed distributions. Unlike light-tailed distributions such as the Poisson, long-tailed distributions decay more slowly, with their tail probabilities taking longer to approach zero. This characteristic is particularly important in fields like risk assessment, finance, and insurance, where rare but extreme events can have significant consequences.
The HDWC distribution exhibits log-concavity. Specifically, since the ratio of successive probabilities is a strictly decreasing function for x ≥ 0, it follows that
f(x;θ)^2 > f(x−1;θ) f(x+1;θ),  x = 1, 2, ....
This condition is sufficient for the log-concavity of a PMF. A log-concave distribution has an increasing failure rate, is strongly unimodal, has finite moments of all orders, retains its log-concavity when truncated, and its convolution with any other discrete distribution remains both unimodal and log-concave.
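These log-concavity and tail properties can be verified numerically; the sketch below checks the sufficient condition f(x)^2 > f(x−1)f(x+1) and the decreasing probability ratio with limit θ (the x-ranges are restricted to avoid floating-point round-off in the far tail):

```python
def f(x, t): return 2*(1 - t)*t**x - (1 - t**2)*t**(2*x)

for theta in (0.2, 0.5, 0.8):
    # Sufficient condition for log-concavity: f(x)^2 > f(x-1) f(x+1).
    assert all(f(x, theta)**2 > f(x - 1, theta)*f(x + 1, theta)
               for x in range(1, 20))
    # The ratio of successive probabilities decreases in x ...
    ratios = [f(x + 1, theta)/f(x, theta) for x in range(15)]
    assert all(a > b for a, b in zip(ratios, ratios[1:]))
    # ... and tends to B = theta > 0 (the Poisson has B = 0).
    assert abs(f(61, theta)/f(60, theta) - theta) < 1e-5
```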
The moment generating function (MGF) and probability generating function (PGF) are fundamental tools in probability theory and statistics with diverse practical applications. The MGF, denoted as M_X(t), for a random variable X, is defined as the expected value of e^{tX}, where t is a parameter. It is a powerful tool for finding moments of a distribution. Specifically, the nth moment of X can be obtained by differentiating the MGF n times and evaluating at t = 0, i.e., E(X^n) = M_X^{(n)}(0). This makes the MGF useful for calculating moments without directly summing over the distribution. The PGF, denoted as ψ_X(z), for X is defined as the expected value of z^X, where z is a parameter typically restricted to the interval [0,1]. It is particularly useful for discrete random variables and is employed to derive probabilities of various events related to the random variable X. For the HDWC distribution, the MGF and PGF can be formulated as
M_X(t) = Σ_{x=0}^{∞} e^{tx} Pr(X=x;θ) = 2(1−θ)/(1−θe^t) − (1−θ^2)/(1−θ^2 e^t),  θe^t < 1,  (3.1)
and
ψ_X(z) = Σ_{x=0}^{∞} z^x Pr(X=x;θ) = 2(1−θ)/(1−θz) − (1−θ^2)/(1−θ^2 z).  (3.2)
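A quick numerical sanity check of Eqs (3.1) and (3.2) against truncated direct sums (the truncation point is chosen so the neglected tail is negligible; function names are choices of this illustration):

```python
from math import exp

def pmf(x, t): return 2*(1 - t)*t**x - (1 - t**2)*t**(2*x)
def pgf(z, t): return 2*(1 - t)/(1 - t*z) - (1 - t**2)/(1 - t**2*z)            # Eq (3.2)
def mgf(s, t): return 2*(1 - t)/(1 - t*exp(s)) - (1 - t**2)/(1 - t**2*exp(s))  # Eq (3.1)

theta = 0.4
for z in (0.0, 0.3, 0.7, 1.0):
    direct = sum(z**x * pmf(x, theta) for x in range(400))
    assert abs(pgf(z, theta) - direct) < 1e-10

# The MGF converges for theta * e^t < 1, i.e., t < -log(theta).
s = 0.5
assert theta*exp(s) < 1
direct = sum(exp(s*x) * pmf(x, theta) for x in range(400))
assert abs(mgf(s, theta) - direct) < 1e-10
```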
MGFs are extensively employed in statistical analysis to compute moments, crucial for determining central tendencies, variances, index of dispersion, and other statistical properties of distributions, while PGFs are utilized in analyzing discrete distributions, particularly in problems involving counting and combinatorial aspects. In reliability engineering, MGFs aid in analyzing component and system lifetimes, predicting failure rates, and assessing system reliability over time. Queueing theory benefits from both MGFs and PGFs, assisting in the analysis of arrival and service times, queue lengths, and overall system performance metrics. Actuarial science utilizes MGFs for modeling insurance risks, mortality rates, and financial aspects related to insurance and pensions. In epidemiology and public health, probability generating functions play a key role in modeling disease spread, calculating infection probabilities, and evaluating the impact of interventions. In machine learning and data science, these functions are integral to probability distributions used in various models, such as the Poisson distribution for count data and the exponential distribution for survival analysis. For the HDWC distribution, the raw moments, say E(X^n), can be formulated as
E(X^n) = Σ_{x=0}^{∞} [(x+1)^n − x^n](1 − F(x;θ)) = Σ_{x=0}^{∞} [(x+1)^n − x^n] θ^{x+1}(2 − θ^{x+1}).  (3.3)
Using Eq (3.3), the initial four moments can be represented in closed forms as
E(X) = θ(θ+2)/(1−θ^2),  E(X^2) = θ(θ^3 + 6θ^2 + 5θ + 2)/(1−θ^2)^2,
E(X^3) = θ(θ^5 + 14θ^4 + 28θ^3 + 32θ^2 + 13θ + 2)/(1−θ^2)^3,
E(X^4) = θ(θ^7 + 30θ^6 + 111θ^5 + 230θ^4 + 219θ^3 + 122θ^2 + 29θ + 2)/(1−θ^2)^4.
They are called "raw" moments because they are taken about the origin (zero), without centering on the mean or any other adjustment. The skewness, kurtosis, covariance (COV), and index of dispersion (IOD) can be calculated from the first four moments. Figure 3 illustrates several descriptive statistics pertaining to the HDWC distribution.
The data depicted in Figure 3 indicate that the HDWC model is adept at analyzing positively skewed data with a leptokurtic shape and can also be used to investigate phenomena associated with overdispersion (IOD(X) > 1). Moreover, a positive covariance signifies that the variables tend to co-vary in the same direction.
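The overdispersion and positive-skewness claims can be checked directly from the closed-form moments; the sketch below (names hypothetical) compares truncated moment sums with the closed forms and computes the IOD and skewness for θ = 0.6:

```python
def pmf(x, t): return 2*(1 - t)*t**x - (1 - t**2)*t**(2*x)

theta = 0.6
m1 = sum(x * pmf(x, theta) for x in range(3000))
m2 = sum(x**2 * pmf(x, theta) for x in range(3000))
m3 = sum(x**3 * pmf(x, theta) for x in range(3000))

# Closed-form raw moments from the text.
assert abs(m1 - theta*(theta + 2)/(1 - theta**2)) < 1e-8
assert abs(m2 - theta*(theta**3 + 6*theta**2 + 5*theta + 2)/(1 - theta**2)**2) < 1e-8
assert abs(m3 - theta*(theta**5 + 14*theta**4 + 28*theta**3 + 32*theta**2
                       + 13*theta + 2)/(1 - theta**2)**3) < 1e-6

var = m2 - m1**2
iod = var/m1                                 # index of dispersion
skew = (m3 - 3*m1*var - m1**3)/var**1.5      # third central moment / sigma^3
assert iod > 1       # overdispersion
assert skew > 0      # positive (right) skew
```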
Order statistics refer to the statistical analysis of ranked data points within a dataset. They are particularly useful in understanding the distribution and characteristics of extreme values, such as the minimum and maximum values, as well as percentiles and quartiles. By arranging data points in ascending or descending order, order statistics allow researchers to analyze the spread and central tendencies of a dataset more effectively. For instance, in environmental studies, order statistics can help identify the most extreme weather events, such as the hottest day of the year or the coldest month, which is crucial for risk assessment, infrastructure planning, and climate change analysis. Suppose we have a random sample X1,X2,⋯,Xn of size n from the CDF of the HDWC model. The CDF of the ith order statistic can be expressed as
F_{i:n}(x;θ) = Σ_{k=i}^{n} C(n,k) F(x;θ)^k (1 − F(x;θ))^{n−k}
= Σ_{k=i}^{n} Σ_{j=0}^{k} (−1)^j C(n,k) C(k,j) (1 − F(x;θ))^{n−k+j}
= Σ_{k=i}^{n} Σ_{j=0}^{k} Σ_{r=0}^{n−k+j} (−1)^{j+r} C(n,k) C(k,j) C(n−k+j, r) 2^{n−k+j−r} θ^{(x+1)(n−k+j+r)}
= Σ_{k=i}^{n} Σ_{j=0}^{k} Σ_{r=0}^{n−k+j} w_{j,r,k} θ^{(x+1)(n−k+j+r)},
where
w_{j,r,k} = (−1)^{j+r} C(n,k) C(k,j) C(n−k+j, r) 2^{n−k+j−r}.
The PMF of the ith order statistic for x = 1, 2, ⋯ can be formulated as
Pr(X_{i:n} = x;θ) = F_{i:n}(x;θ) − F_{i:n}(x−1;θ) = Σ_{k=i}^{n} Σ_{j=0}^{k} Σ_{r=0}^{n−k+j} w_{j,r,k} [θ^{(x+1)(n−k+j+r)} − θ^{x(n−k+j+r)}] = Σ_{k=i}^{n} Σ_{j=0}^{k} Σ_{r=0}^{n−k+j} w_{j,r,k} θ^{x(n−k+j+r)} (θ^{n−k+j+r} − 1),  (3.4)
and
Pr(X_{i:n} = 0;θ) = F_{i:n}(0;θ) = Σ_{k=i}^{n} C(n,k) (1−θ)^{2k} (1 − (1−θ)^2)^{n−k}.
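The triple-sum representation of F_{i:n}(x;θ) can be validated against the defining binomial sum; the following sketch (function names are choices of this illustration) performs this comparison for a small sample size:

```python
from math import comb

def cdf(x, t): return 1 - 2*t**(x + 1) + t**(2*x + 2)

def cdf_order_binomial(x, t, i, n):
    """Defining form: sum_{k=i}^{n} C(n,k) F^k (1-F)^(n-k)."""
    F = cdf(x, t)
    return sum(comb(n, k) * F**k * (1 - F)**(n - k) for k in range(i, n + 1))

def cdf_order_triple_sum(x, t, i, n):
    """Expanded form with weights w_{j,r,k}."""
    total = 0.0
    for k in range(i, n + 1):
        for j in range(k + 1):
            for r in range(n - k + j + 1):
                w = ((-1)**(j + r) * comb(n, k) * comb(k, j)
                     * comb(n - k + j, r) * 2**(n - k + j - r))
                total += w * t**((x + 1)*(n - k + j + r))
    return total

theta, n = 0.5, 5
for i in range(1, n + 1):
    for x in range(10):
        assert abs(cdf_order_binomial(x, theta, i, n)
                   - cdf_order_triple_sum(x, theta, i, n)) < 1e-9
```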
By referring to Eq (3.4), it is possible to calculate the rth moments of order statistics, and subsequently, derive the L-moments. L-moments are statistical measures used to describe the shape, location, and scale of a probability distribution. They are particularly useful in situations where traditional moments like mean and variance may not adequately capture the distribution's characteristics, especially in cases of skewed or heavy-tailed data. L-moments are derived from linear combinations of order statistics, providing information about the distribution's central tendency, dispersion, and skewness. They are commonly used in hydrology, meteorology, and other fields where analyzing extreme values and fitting distributions to observed data is important for risk assessment and decision-making.
Residual entropy and cumulative residual entropy are measures used in information theory to quantify the uncertainty or unpredictability of a random variable or system. Residual entropy captures the remaining uncertainty after observing a sequence of events, while cumulative residual entropy accounts for the accumulated uncertainty across multiple observations. These measures are particularly useful in assessing the information content or randomness in data sequences, such as in cryptography, where residual entropy helps evaluate the security of encryption algorithms based on the unpredictability of ciphertexts. The residual entropy of variable X and its cumulative expression can be formulated, respectively, as follows:
E(X) = −Σ_{x=0}^{∞} F(x) log(F(x)),
and
CE(X) = −Σ_{x=0}^{∞} (1 − F(x)) log(1 − F(x)).
By utilizing algebraic techniques with respect to the HDWC model, we can derive
E(X) = Σ_{k=0}^{∞} Σ_{j=0}^{k} Σ_{i=0}^{j} (−1)^i C(k,j) C(j,i) θ^{i+k}/(1−θ^{i+k}) − Σ_{k=0}^{∞} Σ_{j=0}^{k+1} Σ_{i=0}^{j} (−1)^i C(k+1,j) C(j,i) θ^{i+k+1}/(1−θ^{i+k+1}),  (3.5)
and
CE(X) = −θ(2θ^2 + 3θ + 2) log(θ)/(1−θ^2)^2 + Σ_{k=0}^{∞} Σ_{j=0}^{k+1} (−1)^{k+1} C(k+1,j) θ^{j+1}(2 − θ − θ^{j+2}) / [(k+1)(1−θ^{j+1})(1−θ^{j+2})].  (3.6)
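Since the series expressions above are intricate, a direct numerical evaluation of both entropies from their definitions provides a useful sanity check (the truncation point and function names are choices of this illustration):

```python
from math import log

def cdf(x, t): return 1 - 2*t**(x + 1) + t**(2*x + 2)

def residual_entropy(t, n_max=2000):
    # -sum F log F; since F -> 1 geometrically, the series converges.
    return -sum(cdf(x, t)*log(cdf(x, t)) for x in range(n_max))

def cum_residual_entropy(t, n_max=2000):
    # -sum (1-F) log(1-F), with 1 - F(x) = theta^(x+1) (2 - theta^(x+1)).
    total = 0.0
    for x in range(n_max):
        sf = 1 - cdf(x, t)
        if sf <= 0:
            break  # survival mass numerically exhausted
        total -= sf*log(sf)
    return total

for t in (0.2, 0.5, 0.8):
    assert residual_entropy(t) > 0 and cum_residual_entropy(t) > 0
# The cumulative measure grows with theta as the distribution becomes more dispersed.
assert cum_residual_entropy(0.2) < cum_residual_entropy(0.5) < cum_residual_entropy(0.8)
```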
On the other hand, Shannon entropy and Rényi entropy are also measures of uncertainty in information theory but differ in their sensitivity to outliers and the concentration of probability mass. Shannon entropy, the most widely used entropy measure, characterizes the average information content in a probability distribution, providing insights into the diversity and spread of data. In contrast, Rényi entropy generalizes Shannon entropy by introducing a parameter that controls the emphasis on rare events, making it more robust to outliers and suitable for analyzing distributions with heavy tails or extreme values. These entropy measures find applications in various fields such as data compression, pattern recognition, and statistical physics, where understanding the information content and structure of data is critical for modeling and decision-making. The Shannon entropy for the HDWC can be formulated as
S(X) = −log(2(1−θ)) − [θ(θ+2)/(1−θ^2)] log(θ) + Σ_{k=0}^{∞} [((θ+1)/2)^{k+1}/(k+1)] [2(1−θ)/(1−θ^{k+2}) − (1−θ^2)/(1−θ^{k+3})],  (3.7)
where
S(X) = −E[log(Pr(X=x;θ))] = −log(2(1−θ)) − E(X) log(θ) + Σ_{k=0}^{∞} ((θ+1)/2)^{k+1} E(θ^{X(k+1)})/(k+1),
since
−log(Pr(X=x;θ)) = −log(2(1−θ)) − x log(θ) − log(1 − ((1+θ)/2)θ^x) = −log(2(1−θ)) − x log(θ) + Σ_{k=0}^{∞} ((θ+1)/2)^{k+1} θ^{x(k+1)}/(k+1),
and E(θ^{X(k+1)}) = ψ_X(θ^{k+1}) follows from Eq (3.2).
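The Shannon entropy can be cross-checked numerically by comparing the direct sum −Σ f(x)log f(x) with the series representation written via the PGF, as in the sketch below (truncation points are choices of this illustration):

```python
from math import log

def pmf(x, t): return 2*(1 - t)*t**x - (1 - t**2)*t**(2*x)

def shannon_direct(t, n_max=2000):
    total = 0.0
    for x in range(n_max):
        p = pmf(x, t)
        if p <= 0:
            break  # probability numerically exhausted
        total -= p*log(p)
    return total

def shannon_series(t, n_terms=400):
    mean = t*(t + 2)/(1 - t**2)
    s = -log(2*(1 - t)) - mean*log(t)
    for k in range(n_terms):
        s += (((t + 1)/2)**(k + 1)/(k + 1)
              * (2*(1 - t)/(1 - t**(k + 2)) - (1 - t**2)/(1 - t**(k + 3))))
    return s

for t in (0.3, 0.5, 0.7):
    assert abs(shannon_direct(t) - shannon_series(t)) < 1e-6
```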
The Rényi entropy of variable X, obtained through the use of generalized binomial expansion, is defined as
I_R(γ) = [1/(1−γ)] log(Σ_{x=0}^{∞} f(x;θ)^γ) = [1/(1−γ)] {γ log(2(1−θ)) + log(Σ_{i=0}^{∞} (−1)^i ((θ+1)/2)^i C(γ,i)/(1−θ^{i+γ}))},  (3.8)
where γ > 0 and γ ≠ 1. Some numerical values of the Rényi entropy of the HDWC model are listed in Table 1.
γ↓θ→ | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 |
0.5 | 0.8219 | 1.1588 | 1.4393 | 1.7058 | 1.9796 | 2.2803 | 2.6355 | 3.1010 | 3.8412 |
2 | 0.3778 | 0.7073 | 0.9942 | 1.2568 | 1.5185 | 1.8051 | 2.1485 | 2.6065 | 3.3508 |
5 | 0.2633 | 0.5538 | 0.8552 | 1.1247 | 1.3610 | 1.6250 | 1.9632 | 2.4234 | 3.1698 |
Table 1 quantifies the uncertainty linked to its parameter combinations. For fixed γ, entropy grows nonlinearly with θ, while for fixed θ, it decreases as γ increases. Smaller γ values, associated with heavier tails, lead to higher entropy and greater randomness, whereas larger γ values result in a more concentrated distribution and lower entropy. Larger θ values amplify entropy, reflecting a broader and less predictable distribution, with entropy rising faster at higher θ. The interplay between γ and θ shapes the overall randomness. For example, a small γ and large θ produce the highest entropy, while the reverse yields the lowest. These insights are valuable for applications in engineering, medicine, and actuarial sciences, where entropy helps assess the suitability of the HDWC distribution for modeling uncertainty. The relationship between entropy and parameters guides parameter selection and model fitting, leveraging the distribution's flexibility to match specific data or phenomena.
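Table 1 can be reproduced by evaluating the Rényi entropy directly from its definition with a truncated sum; the sketch below recovers two of the tabulated values for θ = 0.5 (natural logarithm assumed):

```python
from math import log

def pmf(x, t): return 2*(1 - t)*t**x - (1 - t**2)*t**(2*x)

def renyi_entropy(gamma, t, n_max=4000):
    return log(sum(pmf(x, t)**gamma for x in range(n_max)))/(1 - gamma)

# Two entries of Table 1 for theta = 0.5.
assert abs(renyi_entropy(2.0, 0.5) - 1.5185) < 1e-3
assert abs(renyi_entropy(5.0, 0.5) - 1.3610) < 1e-3
```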
The mean residual life (MRL) and its variance function are important concepts in survival analysis, which deals with the time until an event of interest occurs. In the case of discrete random variables, such as in discrete survival analysis, these concepts are adapted to work with discrete time points. Considering F(x;.) as the CDF of an element with a finite first moment, where X denotes the random variable associated with F(x;.), in the discrete context, the MRL, say ϖ(k;θ), is defined as follows
ϖ(k;θ) = E(X−k | X≥k) = [1/Pr(X≥k)] Σ_{x=k}^{∞} (x−k) Pr(X=x;θ),  k = 0, 1, 2, ....  (3.9)
Consider X to be an HDWC random variable, then the MRL can be formulated as
ϖ(k;θ) = (2θ^{k+2} − θ^{2k+2} + 2θ^{k+1}) / [(θ^2−1)(θ^{2k} − 2θ^k)],  k = 0, 1, 2, ....  (3.10)
The MRL of the HDWC distribution is a decreasing function. The relationship between the HRF and MRL can be established as
h(k;θ) = 1 − ϖ(k;θ)/(1 + ϖ(k+1;θ)),  k = 0, 1, 2, ....
Thus, the survival function (SF) of the HDWC model can be expressed as
S(k;θ) = ∏_{0≤i≤k} ϖ(i;θ)/(1 + ϖ(i+1;θ)),  ϖ(0;θ) = E(X),  k = 0, 1, 2, ....  (3.11)
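Equation (3.10) and the decreasing-MRL claim can be checked numerically against the defining conditional expectation in Eq (3.9) (helper names are choices of this illustration):

```python
def pmf(x, t): return 2*(1 - t)*t**x - (1 - t**2)*t**(2*x)

def mrl(k, t):
    """Closed form, Eq (3.10)."""
    return ((2*t**(k + 2) - t**(2*k + 2) + 2*t**(k + 1))
            / ((t**2 - 1)*(t**(2*k) - 2*t**k)))

def mrl_direct(k, t, n_max=3000):
    """Defining conditional expectation, Eq (3.9)."""
    surv = t**k * (2 - t**k)                  # Pr(X >= k)
    return sum((x - k)*pmf(x, t) for x in range(k, n_max))/surv

theta = 0.7
for k in range(8):
    assert abs(mrl(k, theta) - mrl_direct(k, theta)) < 1e-8
# mrl(0) equals the mean, and the MRL decreases in k (consistent with the IFR shape).
assert abs(mrl(0, theta) - theta*(theta + 2)/(1 - theta**2)) < 1e-9
assert all(mrl(k + 1, theta) < mrl(k, theta) for k in range(30))
```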
The function for the variance of residual life (VRL), denoted as Υ_{VRL}(k;θ), is defined as
Υ_{VRL}(k;θ) = Var(X | X≥k) = E(X^2 | X≥k) − [E(X | X≥k)]^2
= [2/(2θ^k − θ^{2k})] { [(k−1)θ^2 − k] θ^{2k+2}/(θ^2−1)^2 − 2[(k−1)θ − k] θ^{k+1}/(θ−1)^2 } − (2k−1) ϖ(k;θ) − [ϖ(k;θ)]^2.
The random variable X exhibits increasing (decreasing) VRL if
Υ_{VRL}(k+1;θ) ≥ (≤) ϖ(k;θ)[1 + ϖ(k+1;θ)].
The residual coefficient of variation (RCOV), denoted as Ξ(k;θ), can be explicitly derived as
Ξ(k;θ) = √(Υ_{VRL}(k;θ)) / ϖ(k;θ),  k = 0, 1, 2, ....
The HRF, MRL, and VRL of the HDWC model are interconnected in the following manner
Υ_{VRL}(k+1;θ) − Υ_{VRL}(k;θ) = h(k;θ)[Υ_{VRL}(k+1;θ) − ϖ(k;θ)(1 + ϖ(k+1;θ))].
Further, the HRF, MRL, VRL, and RCOV of the HDWC distribution are interconnected as
Υ_{VRL}(k+1;θ) − Υ_{VRL}(k;θ) = h(k;θ)[ϖ(k+1;θ)]^2 { [Ξ(k+1;θ)]^2 − ϖ(k;θ)(1 + ϖ(k+1;θ))/[ϖ(k+1;θ)]^2 }.  (3.12)
The stress-strength model (SSM) is a statistical framework used to assess reliability and risk in systems involving discrete random variables. It compares the strength of a system or material (represented by a random variable) against the stress it is subjected to, often in engineering, manufacturing, and quality control contexts. For example, in mechanical engineering, this model can evaluate the likelihood of a component failing under varying stress conditions. Similarly, in quality control, it helps determine if a product meets certain strength criteria given manufacturing variations. The model is also applicable in pharmaceuticals to assess the efficacy of drug formulations concerning patient-specific factors. Let X1∼ HDWC(θ1) and X2∼ HDWC(θ2), then
\Pr(X_{1}\le X_{2})=\sum_{x=0}^{\infty}\Pr(X_{1}\le X_{2}\mid X_{2}=x)\Pr(X_{2}=x)=\sum_{x=0}^{\infty}F_{X_{1}}(x;\theta_{1})\Pr(X_{2}=x;\theta_{2})=1-\frac{4\theta_{1}(1-\theta_{2})}{1-\theta_{1}\theta_{2}}+\frac{2\theta_{1}(1-\theta_{2}^{2})}{1-\theta_{1}\theta_{2}^{2}}+\frac{2\theta_{1}^{2}(1-\theta_{2})}{1-\theta_{2}\theta_{1}^{2}}-\frac{\theta_{1}^{2}(1-\theta_{2}^{2})}{1-\theta_{1}^{2}\theta_{2}^{2}}. \qquad (3.13)
By quantifying the relationship between stress and strength through probability distributions, the SSM aids in decision-making regarding design tolerances, product specifications, and risk mitigation strategies.
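As a sanity check, the stress-strength probability can be evaluated by direct summation over the PMFs, following the middle expression of Eq. (3.13). The sketch below is illustrative Python (not the authors' code); it assumes the HDWC PMF p(x) = (1−θ)θ^x[2−(1+θ)θ^x], and the parameter values 0.3 and 0.7 and the truncation point are arbitrary.

```python
# Direct-summation evaluation of the stress-strength probability; a sketch
# assuming the HDWC PMF p(x) = (1 - theta)*theta**x*(2 - (1 + theta)*theta**x).
# The parameter values and the truncation point `upper` are arbitrary.

def hdwc_pmf(x, theta):
    return (1 - theta) * theta**x * (2 - (1 + theta) * theta**x)

def prob_x1_le_x2(theta1, theta2, upper=600):
    """Pr(X1 <= X2) = sum_x F_X1(x) * Pr(X2 = x), as in Eq. (3.13)."""
    total, cdf1 = 0.0, 0.0
    for x in range(upper):
        cdf1 += hdwc_pmf(x, theta1)          # running CDF of X1 at x
        total += cdf1 * hdwc_pmf(x, theta2)  # F_X1(x) * Pr(X2 = x)
    return total

p = prob_x1_le_x2(0.3, 0.7)   # strength X1 stochastically below stress X2
q = prob_x1_le_x2(0.7, 0.3)   # roles swapped: Pr(X2 <= X1)
tie = sum(hdwc_pmf(x, 0.3) * hdwc_pmf(x, 0.7) for x in range(600))
```

Since ties count toward both events, p + q = 1 + Pr(X1 = X2), and stochastic ordering of the two HDWC laws forces p > 1/2; both serve as quick sanity checks on the implementation.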
Insurance premiums are determined using premium principles, which consider the risk levels associated with different events. Over time, various premium principles have been developed. In this section, we introduce some of these principles, assuming that the loss distribution follows the HDWC distribution. Within this context, let λ≥0 represent the risk loading parameter.
The expected value principle (EVP) is a cornerstone in insurance and risk management, guiding the calculation of insurance premiums. It asserts that the premium charged for insurance protection should equal the expected value of the anticipated losses, adjusted by a risk loading component. The EVP can be articulated as
\mathrm{EVP}(\lambda;\cdot)=(1+\lambda)E(Z),
where EVP(λ;·) represents the insurance premium, λ is the risk loading factor, and E(Z) denotes the expected value of losses. The multiplier (1+λ) is included to cover miscellaneous expenses and to secure a profit margin for the insurer. The value of λ varies depending on factors like administrative costs, claims processing, underwriting, and the insurer's desired profit level. The EVP of the HDWC distribution, denoted as EVP(λ;θ), is defined by the equation:
\mathrm{EVP}(\lambda;\theta)=\frac{\theta(1+\lambda)(\theta+2)}{1-\theta^{2}}. \qquad (3.14)
The EVP serves as a fundamental concept in insurance pricing, applicable across diverse insurance categories such as property, liability, health, and life insurance. Insurers use the EVP to set premiums based on expected losses, adjusting for risk loading to ensure fairness in pricing. This methodology aligns premiums with the expected costs of coverage, enabling insurers to manage expenses, generate profits, and ensure that policyholders pay appropriate premiums relative to the risks transferred. From an economic standpoint, the EVP promotes risk transfer, enhances efficiency through risk pooling, and minimizes the financial impacts of uncertainty. However, it operates under the assumption of known and accurately estimated loss distributions, overlooking factors such as moral hazard and market competition that can influence pricing and market dynamics.
The exponential premium principle (EPP) is a key aspect of insurance pricing strategies, employing an exponential function to calculate premiums. This principle acknowledges that the likelihood of encountering significant loss events escalates quickly as risk levels elevate. The EPP asserts that insurance premiums should escalate exponentially in proportion to the risk level linked to the insured event. Put differently, as the probability of severe losses increases, premiums should surge at an escalating pace to sufficiently address the potential expenses associated with such occurrences. The EPP is determined by solving for EPP in the equation:
\varphi\left(v-\mathrm{EPP}(\lambda;\cdot)\right)=E\left[\varphi(v-Z)\right].
Here, v refers to an individual's wealth, and \varphi(t)=-e^{-\lambda t} signifies the exponential utility function. As a result, the EPP is derived as:
\mathrm{EPP}(\lambda;\cdot)=\frac{1}{\lambda}\log M_{Z}(\lambda),
where M_{Z}(\lambda) is the MGF of Z. The EPP of the HDWC distribution, say EPP(λ;θ), is characterized as
\mathrm{EPP}(\lambda;\theta)=\frac{1}{\lambda}\log\left\{\frac{2(1-\theta)}{1-\theta e^{\lambda}}-\frac{1-\theta^{2}}{1-\theta^{2}e^{\lambda}}\right\};\quad \theta e^{\lambda}<1. \qquad (3.15)
This principle finds widespread application across different insurance sectors, such as property insurance (e.g., protection against natural disasters), liability insurance (e.g., coverage for high-risk activities), and health insurance (e.g., coverage for pre-existing medical conditions). Within these sectors, premiums are frequently determined using sophisticated actuarial models that integrate the EPP to guarantee sufficient coverage for various risks.
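The two premium principles can be sketched in a few lines. The Python code below is illustrative: it assumes the HDWC mean E(Z) = θ(θ+2)/(1−θ²) from Eq. (3.14), the MGF M_Z(t) = 2(1−θ)/(1−θe^t) − (1−θ²)/(1−θ²e^t) obtained by summing the PMF, and the standard exponential-principle form (1/λ)log M_Z(λ); both closed forms are cross-checked against brute-force sums over the PMF.

```python
import math

# Premium principles for an HDWC loss Z; a sketch assuming the mean
# E(Z) = theta*(theta + 2)/(1 - theta**2), the MGF
# M_Z(t) = 2(1 - theta)/(1 - theta*e^t) - (1 - theta**2)/(1 - theta**2*e^t),
# and the standard exponential-principle form (1/lam)*log M_Z(lam).

def evp(lam, theta):
    """Expected value principle: (1 + lam) * E(Z)."""
    return (1 + lam) * theta * (theta + 2) / (1 - theta**2)

def epp(lam, theta):
    """Exponential premium principle; requires theta * e^lam < 1."""
    e = math.exp(lam)
    mgf = 2 * (1 - theta) / (1 - theta * e) - (1 - theta**2) / (1 - theta**2 * e)
    return math.log(mgf) / lam

# Cross-check both closed forms against brute-force sums over the PMF.
theta, lam = 0.5, 0.2
pmf = [(1 - theta) * theta**x * (2 - (1 + theta) * theta**x) for x in range(300)]
mean = sum(x * p for x, p in enumerate(pmf))
mgf_num = sum(math.exp(lam * x) * p for x, p in enumerate(pmf))
```

By Jensen's inequality the exponential premium always exceeds the pure expected loss, which is the risk loading at work.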
Suppose we have a random sample X_{1}, X_{2}, \ldots, X_{k} obtained from the HDWC model. The log-likelihood function for the HDWC distribution, say L(\underline{x}\mid\theta), can be expressed as follows:
L(\underline{x}\mid\theta)=k\log(1-\theta)+\log(\theta)\sum_{i=1}^{k}x_{i}+\sum_{i=1}^{k}\log\left(2-(1+\theta)\theta^{x_{i}}\right). \qquad (4.1)
Taking the derivative of L(\underline{x}\mid\theta) with respect to θ and equating it to zero, we obtain
\frac{\partial L(\underline{x}\mid\theta)}{\partial\theta}=-\frac{k}{1-\theta}+\frac{1}{\theta}\sum_{i=1}^{k}x_{i}-\sum_{i=1}^{k}\frac{\theta^{x_{i}-1}\left[\theta+(1+\theta)x_{i}\right]}{2-(1+\theta)\theta^{x_{i}}}=0. \qquad (4.2)
It is not feasible to find an analytical solution for this equation. Therefore, it necessitates the use of a numerical iterative method such as the Newton-Raphson method within R software or other optimization techniques.
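As a concrete illustration of this numerical step, the sketch below maximizes the log-likelihood (4.1) by successive grid refinement rather than Newton-Raphson (in R one would typically call optim() instead); the data vector is hypothetical and the scheme assumes a unimodal likelihood in θ.

```python
import math

# Maximizing the log-likelihood (4.1) numerically; a sketch using successive
# grid refinement instead of Newton-Raphson (in R one would call optim()).
# The data vector below is hypothetical.

def loglik(theta, data):
    """Log-likelihood L(x | theta) of Eq. (4.1)."""
    return (len(data) * math.log(1 - theta)
            + math.log(theta) * sum(data)
            + sum(math.log(2 - (1 + theta) * theta**x) for x in data))

def mle(data, passes=6):
    """Repeatedly refine a 101-point grid around the running maximizer."""
    lo, hi = 1e-6, 1 - 1e-6
    best = lo
    for _ in range(passes):
        grid = [lo + (hi - lo) * i / 100 for i in range(101)]
        best = max(grid, key=lambda t: loglik(t, data))
        span = (hi - lo) / 100
        lo, hi = max(1e-6, best - span), min(1 - 1e-6, best + span)
    return best

data = [0, 1, 2, 3, 1, 0, 2, 5, 1, 2]  # hypothetical counts
theta_hat = mle(data)
```

Each pass shrinks the bracketing interval fifty-fold, so six passes already exceed double-precision needs for a one-parameter problem.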
If we have a random sample X_{1}, X_{2}, \ldots, X_{k} derived from the HDWC model, then the moment estimator, referred to as \hat{\theta}_{M}, can be formulated as
\hat{\theta}_{M}=\frac{-1+\sqrt{1+\bar{X}\left(1+\bar{X}\right)}}{1+\bar{X}}, \qquad (4.3)
where ¯X represents the sample mean of the observed values. Estimation techniques like MLE and ME offer powerful tools for parameter estimation, but they come with notable computational and methodological challenges, especially when implemented in R. MLE, while efficient and consistent for large datasets, often requires iterative numerical methods such as Newton-Raphson, which can face convergence issues, instability due to singular or near-singular Hessians, and challenges with non-convex likelihood surfaces. These issues are amplified in high-dimensional models or with poor initial parameter values. ME, on the other hand, provides a simpler alternative but is less efficient, more prone to bias and variance in small samples, and heavily dependent on the choice of moments, which may not capture the data's complexity.
In R, implementing these methods demands careful handling of numerical precision, algorithmic stability, and computational efficiency. Challenges include the iterative nature of MLE, handling divergence or oscillation in numerical methods, and optimizing performance for large datasets. Users must choose starting values judiciously, apply robust optimization algorithms, and leverage advanced features in R packages like optim and bbmle to mitigate these issues. By addressing these computational hurdles and understanding the trade-offs of each method, practitioners can effectively apply MLE and ME to a wide range of statistical modeling problems while ensuring reliable and accurate results.
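The closed-form moment estimator can be verified by a round trip through the mean function: since E(X) = θ(θ+2)/(1−θ²), equating the sample mean X̄ to E(X) yields the quadratic (1+X̄)θ² + 2θ − X̄ = 0, whose positive root is the estimator, so applying it to an exact mean must recover θ. An illustrative Python sketch:

```python
import math

# Round-trip check of the moment estimator; a sketch using the HDWC mean
# E(X) = theta*(theta + 2)/(1 - theta**2). Equating the sample mean xbar to
# E(X) gives (1 + xbar)*theta**2 + 2*theta - xbar = 0.

def hdwc_mean(theta):
    return theta * (theta + 2) / (1 - theta**2)

def moment_estimator(xbar):
    """Positive root of (1 + xbar)*theta^2 + 2*theta - xbar = 0."""
    return (-1 + math.sqrt(1 + xbar * (1 + xbar))) / (1 + xbar)

# Applying the estimator to an exact mean must recover theta.
for t in (0.1, 0.3, 0.5, 0.8, 0.95):
    assert abs(moment_estimator(hdwc_mean(t)) - t) < 1e-9
```

This self-inverse property is a cheap unit test worth keeping next to any ME implementation.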
In this section, we explore the HDWC parameter estimation using the MPSE method with a complete sample. Suppose we have a random sample X_{1}, X_{2}, \ldots, X_{k} obtained from the HDWC distribution. For h=1,2,\ldots,k+1, consider the following:
W_{h}(\theta)=F(x_{(h)}\mid\theta)-F(x_{(h-1)}\mid\theta),
to be the uniform spacings of a random sample from the HDWC model, where F(x_{(0)}\mid\theta)=0, F(x_{(k+1)}\mid\theta)=1, and \sum_{h=1}^{k+1}W_{h}(\theta)=1. The MPSE of θ, say \hat{\theta}_{MPS}, can be derived by maximizing the geometric mean of the spacings
V(\theta)=\left[\prod_{h=1}^{k+1}W_{h}(\theta)\right]^{\frac{1}{k+1}}, \qquad (4.4)
with respect to the parameter θ.
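A direct way to compute the MPSE for the discrete model is to maximize the mean log-spacing over a grid of θ values. Ties in a discrete sample produce zero spacings, which the sketch below guards with a small floor; the code is illustrative Python with a hypothetical sample, and the CDF comes from summing the PMF implied by the log-likelihood (4.1).

```python
import math

# Grid-search MPSE; a sketch assuming the CDF obtained by summing the HDWC
# PMF p(x) = (1 - theta)*theta**x*(2 - (1 + theta)*theta**x). Ties in a
# discrete sample give zero spacings, so a small floor keeps logs finite.
# The sample and the grid resolution are arbitrary.

def hdwc_cdf(x, theta):
    return sum((1 - theta) * theta**j * (2 - (1 + theta) * theta**j)
               for j in range(int(x) + 1))

def mean_log_spacing(theta, data):
    """Log of the geometric mean of the k+1 spacings in Eq. (4.4)."""
    probs = [0.0] + [hdwc_cdf(x, theta) for x in sorted(data)] + [1.0]
    gaps = [max(b - a, 1e-300) for a, b in zip(probs, probs[1:])]
    return sum(math.log(w) for w in gaps) / len(gaps)

def mpse(data):
    grid = [i / 1000 for i in range(1, 1000)]
    return max(grid, key=lambda t: mean_log_spacing(t, data))

data = [0, 1, 2, 4, 7]  # hypothetical sample
theta_hat = mpse(data)
```

Maximizing the log of V(θ) is equivalent to maximizing V(θ) itself and is numerically far more stable.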
Consider s_{h}=h/(k+1) to be an unbiased estimator of F(x_{(h)}\mid\theta). Hence, the PCE of the parameter θ, denoted by \hat{\theta}_{PC}, can be obtained by minimizing
P(\theta)=\sum_{h=1}^{k}\left(x_{(h)}-Q(s_{h})\right)^{2}, \qquad (4.5)
with respect to the parameter θ, where Q(s_{h})=F^{-1}(s_{h}\mid\theta) is the quantile function of the HDWC model.
Consider a random sample X1, X2, …, Xk drawn from the HDWC model. The Anderson-Darling estimator (ADE) serves as another form of minimum distance estimator. The ADE for the HDWC parameter, denoted as ˆθAD, is obtained by minimizing
AD(\theta)=-k-\frac{1}{k}\sum_{h=1}^{k}(2h-1)\left[\log F(x_{(h)}\mid\theta)+\log\left(1-F(x_{(k+1-h)}\mid\theta)\right)\right]. \qquad (4.6)
The ADE is minimized with respect to the parameter θ, whereas the right-tail Anderson-Darling estimator (RADE) for the model parameter is obtained by minimizing
RAD(\theta)=\frac{k}{2}-2\sum_{h=1}^{k}F(x_{(h:k)}\mid\theta)-\frac{1}{k}\sum_{h=1}^{k}(2h-1)\log\left(1-F(x_{(k+1-h:k)}\mid\theta)\right), \qquad (4.7)
with respect to the parameter θ.
Let's consider a random sample from the HDWC model, denoted by order statistics X(1),X(2),⋯,X(k). The least-squares estimator (LSE) of the HDWC parameter, represented as ˆθLS, is derived by solving the nonlinear equation defined as follows:
\sum_{h=1}^{k}\left[F(x_{(h)}\mid\theta)-\frac{h}{k+1}\right]\frac{\partial}{\partial\theta}F(x_{(h)}\mid\theta)=0, \qquad (4.8)
with respect to the parameter θ. The weighted LSE (WLSE), say ˆθWLS, can be derived by solving the nonlinear equation defined by
\sum_{h=1}^{k}\frac{(k+1)^{2}(k+2)}{h(k-h+1)}\left[F(x_{(h)}\mid\theta)-\frac{h}{k+1}\right]\frac{\partial}{\partial\theta}F(x_{(h)}\mid\theta)=0, \qquad (4.9)
with respect to the parameter θ.
The Cramér-von Mises estimator (CVME) is based on the difference between the estimated CDF and the empirical CDF. Determining the CVME of the HDWC parameter entails solving the nonlinear equation specified as follows:
\sum_{h=1}^{k}\left[F(x_{(h)}\mid\theta)-\frac{2h-1}{2k}\right]\frac{\partial}{\partial\theta}F(x_{(h)}\mid\theta)=0, \qquad (4.10)
with respect to the parameter θ.
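The minimum-distance criteria above can equivalently be minimized directly instead of solving their estimating equations. The sketch below does this for the Cramér-von Mises objective, Σ_h [F(x_(h)|θ) − (2h−1)/(2k)]², whose stationarity condition is Eq. (4.10); the sample and the grid resolution are arbitrary, and the CDF is again built from the PMF implied by the log-likelihood (4.1).

```python
# Minimizing the Cramér-von Mises objective directly; a sketch assuming the
# CDF from summing the HDWC PMF p(x) = (1 - theta)*theta**x*(2-(1+theta)*theta**x).
# Setting the derivative of this objective to zero yields Eq. (4.10).
# The sample and the grid resolution are arbitrary.

def hdwc_cdf(x, theta):
    return sum((1 - theta) * theta**j * (2 - (1 + theta) * theta**j)
               for j in range(int(x) + 1))

def cvm_objective(theta, data):
    s = sorted(data)
    k = len(s)
    return sum((hdwc_cdf(x, theta) - (2 * h - 1) / (2 * k)) ** 2
               for h, x in enumerate(s, start=1))

def cvme(data):
    grid = [i / 1000 for i in range(1, 1000)]
    return min(grid, key=lambda t: cvm_objective(t, data))

data = [0, 0, 1, 2, 3, 5, 1, 2]  # hypothetical sample
theta_hat = cvme(data)
```

The same grid-minimization skeleton serves for the ADE, RADE, LSE, and WLSE objectives by swapping the criterion function.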
In this section, we evaluate the performance of the MPSE, ME, ADE, MLE, LSE, RADE, PCE, CVME, and WLSE with respect to the sample size n for the HDWC parameter, using R software. The process of generating a random variable X from the HDWC distribution starts with generating a value Y from its continuous counterpart; the obtained Y value is then discretized to produce X, defined as the largest integer less than or equal to Y. To replicate this procedure, we conduct Monte Carlo simulations using various schemes. The evaluation is conducted through a simulation study:
1) Generate N=10000 samples of various sizes "ni;i=1,2,3,4,5" from the HDWC model as follows:
● Scheme Ⅰ: θ=0.05 | n1=20, n2=100, n3=250, n4=500, n5=750.
● Scheme Ⅱ: θ=0.2 | n1=20, n2=100, n3=250, n4=500, n5=750.
● Scheme Ⅲ: θ=0.5 | n1=20, n2=100, n3=250, n4=500, n5=750.
● Scheme Ⅳ: θ=0.8 | n1=20, n2=100, n3=250, n4=500, n5=750.
2) Compute the MPSE, ME, ADE, MLE, LSE, RADE, PCE, CVME, and WLSE for the 10,000 samples, say ˆθk for k=1,2,..., 10,000.
3) Calculate the bias and the mean squared error (MSE) for the N=10,000 samples as
\left|\mathrm{Bias}(\theta)\right|=\frac{1}{N}\sum_{k=1}^{N}\left|\hat{\theta}_{k}-\theta\right|,\qquad \mathrm{MSE}(\theta)=\frac{1}{N}\sum_{k=1}^{N}\left(\hat{\theta}_{k}-\theta\right)^{2}.
The MSE quantifies the average squared deviation between predicted and actual values, where a lower MSE indicates closer predictions to the actual values.
4) The empirical simulation results are listed in Tables 2–6.
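A miniature version of this design can be scripted directly: draw samples by inversion of the HDWC CDF, apply an estimator, and average |Bias| and MSE over replications. The Python sketch below uses the moment estimator, N = 300 replications instead of 10,000, and a fixed seed; it is illustrative only, not the authors' simulation code.

```python
import random

# A miniature bias/MSE simulation; a sketch, not the authors' code. Samples
# are drawn by inversion (walking the PMF until the running CDF exceeds a
# uniform draw); theta is estimated by the moment method. N and the seed
# are deliberately small, arbitrary choices.

def hdwc_sample(n, theta, rng):
    out = []
    for _ in range(n):
        u, x, cdf = rng.random(), 0, 0.0
        while True:
            cdf += (1 - theta) * theta**x * (2 - (1 + theta) * theta**x)
            if u <= cdf or x > 10000:
                break
            x += 1
        out.append(x)
    return out

def moment_estimator(xbar):
    return (-1 + (1 + xbar * (1 + xbar)) ** 0.5) / (1 + xbar)

def bias_mse(n, theta, N=300, seed=1):
    rng = random.Random(seed)
    est = [moment_estimator(sum(hdwc_sample(n, theta, rng)) / n)
           for _ in range(N)]
    bias = sum(abs(e - theta) for e in est) / N
    mse = sum((e - theta) ** 2 for e in est) / N
    return bias, mse

# Both |Bias| and MSE shrink as n grows, mirroring the trend in Tables 2-6.
b20, m20 = bias_mse(20, 0.5)
b500, m500 = bias_mse(500, 0.5)
```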
n | Est. | MPSE | RADE | ME | MLE | ADE | PCE | LSE | WLSE | CVME |
20 | |Bias| | 0.385823 | 0.394885 | 0.359482 | 0.347831 | 0.427519 | 0.417688 | 0.409827 | 0.388944 | 0.399876 |
MSE | 0.284985 | 0.304876 | 0.253842 | 0.210491 | 0.328479 | 0.322648 | 0.310857 | 0.276483 | 0.284714 | |
Sum of Ranks | 84 | 116 | 42 | 21 | 189 | 168 | 147 | 73 | 105 | |
100 | |Bias| | 0.243753 | 0.266345 | 0.213532 | 0.209471 | 0.308759 | 0.288968 | 0.279587 | 0.262954 | 0.270856 |
MSE | 0.177496 | 0.194877 | 0.143751 | 0.154853 | 0.204879 | 0.199478 | 0.164855 | 0.153262 | 0.161834 | |
Sum of Ranks | 94 | 126.5 | 31 | 42 | 188 | 169 | 126.5 | 63 | 105 | |
250 | |Bias| | 0.103843 | 0.1327577 | 0.088642 | 0.042741 | 0.153649 | 0.144898 | 0.130866 | 0.128854 | 0.129475 |
MSE | 0.054645 | 0.099487 | 0.003751 | 0.009482 | 0.110949 | 0.107458 | 0.084746 | 0.018344 | 0.011473 | |
Sum of Ranks | 84 | 147 | 31.5 | 31.5 | 189 | 168 | 126 | 84 | 84 | |
500 | |Bias| | 0.005681 | 0.0183826 | 0.009283 | 0.008142 | 0.074479 | 0.066488 | 0.053767 | 0.012865 | 0.012084 |
MSE | 0.000833 | 0.002096 | 0.000282 | 0.000141 | 0.008478 | 0.009589 | 0.002297 | 0.000844 | 0.000995 | |
Sum of Ranks | 42 | 126 | 53 | 31 | 178.5 | 178.5 | 147 | 94.5 | 94.5 | |
750 | |Bias| | 0.000586 | 0.000264 | 0.000042 | 0.000011 | 0.002839 | 0.000988 | 0.000727 | 0.000183 | 0.000295 |
MSE | 0.000085 | 0.000496 | 0.000012 | 0.000001 | 0.000958 | 0.001099 | 0.000647 | 0.000063 | 0.000074 | |
Sum of Ranks | 116 | 105 | 42 | 21 | 178.5 | 178.5 | 147 | 63 | 94 |
n | Est. | MPSE | RADE | ME | MLE | ADE | PCE | LSE | WLSE | CVME |
20 | |Bias| | 0.892943 | 0.964876 | 0.818931 | 0.857372 | 0.977367 | 1.183679 | 0.994778 | 0.917365 | 0.910744 |
MSE | 0.517482 | 0.665316 | 0.483841 | 0.547623 | 0.689327 | 0.709478 | 0.804769 | 0.550874 | 0.564875 | |
Sum of Ranks | 52.5 | 126 | 21 | 52.5 | 147 | 178.5 | 178.5 | 94.5 | 94.5 | |
100 | |Bias| | 0.503763 | 0.553824 | 0.464861 | 0.475972 | 0.568835 | 0.724149 | 0.684687 | 0.697568 | 0.589756 |
MSE | 0.243663 | 0.290485 | 0.210341 | 0.228942 | 0.284864 | 0.437869 | 0.408738 | 0.397497 | 0.310396 | |
Sum of Ranks | 63 | 94.5 | 21 | 42 | 94.5 | 189 | 157.5 | 157.5 | 126 | |
250 | |Bias| | 0.294763 | 0.386817 | 0.268842 | 0.233741 | 0.345835 | 0.428119 | 0.388468 | 0.364866 | 0.308474 |
MSE | 0.129433 | 0.171036 | 0.100382 | 0.083651 | 0.163865 | 0.210389 | 0.192298 | 0.188377 | 0.139844 | |
Sum of Ranks | 63 | 136.5 | 42 | 21 | 105 | 189 | 168 | 136.5 | 84 | |
500 | |Bias| | 0.138745 | 0.153868 | 0.084672 | 0.038461 | 0.120473 | 0.149777 | 0.142846 | 0.158419 | 0.122984 |
MSE | 0.006866 | 0.019939 | 0.000912 | 0.000231 | 0.001933 | 0.007387 | 0.005635 | 0.018268 | 0.002784 | |
Sum of Ranks | 95 | 178.5 | 42 | 21 | 63 | 147 | 116 | 178.5 | 84 | |
750 | |Bias| | 0.002465 | 0.008416 | 0.000142 | 0.000081 | 0.000884 | 0.029858 | 0.034299 | 0.028467 | 0.000743 |
MSE | 0.000295 | 0.000586 | 0.000033 | 0.000001 | 0.000094 | 0.000989 | 0.000647 | 0.000768 | 0.000022 | |
Sum of Ranks | 105 | 126 | 52.5 | 21 | 84 | 179 | 168 | 157 | 52.5 |
n | Est. | MPSE | RADE | ME | MLE | ADE | PCE | LSE | WLSE | CVME |
20 | |Bias| | 0.650928 | 0.673919 | 0.487641 | 0.507432 | 0.595796 | 0.628747 | 0.557625 | 0.537564 | 0.537313 |
MSE | 0.437548 | 0.428577 | 0.282751 | 0.309352 | 0.385766 | 0.439659 | 0.330964 | 0.348675 | 0.329563 | |
Sum of Ranks | 168 | 168 | 21 | 42 | 126 | 168 | 94.5 | 94.5 | 63 | |
100 | |Bias| | 0.426489 | 0.405757 | 0.310451 | 0.350683 | 0.395916 | 0.410498 | 0.370985 | 0.311052 | 0.350924 |
MSE | 0.288469 | 0.274988 | 0.184662 | 0.234755 | 0.254866 | 0.274437 | 0.220683 | 0.165861 | 0.224614 | |
Sum of Ranks | 189 | 157.5 | 31.5 | 84 | 126 | 157.5 | 84 | 31.5 | 84 | |
250 | |Bias| | 0.249609 | 0.234877 | 0.184651 | 0.209483 | 0.240488 | 0.230866 | 0.230585 | 0.230294 | 0.200852 |
MSE | 0.129687 | 0.110682 | 0.085631 | 0.125725 | 0.135469 | 0.129858 | 0.125766 | 0.120584 | 0.116733 | |
Sum of Ranks | 168 | 95 | 21 | 83.5 | 179 | 147 | 116 | 83.5 | 52 | |
500 | |Bias| | 0.108578 | 0.068564 | 0.028651 | 0.059783 | 0.110489 | 0.095866 | 0.102957 | 0.084855 | 0.055482 |
MSE | 0.008578 | 0.000281 | 0.000855 | 0.000433 | 0.009589 | 0.002056 | 0.007457 | 0.000302 | 0.000784 | |
Sum of Ranks | 168 | 51 | 63 | 63 | 189 | 126 | 147 | 75 | 63 | |
750 | |Bias| | 0.003858 | 0.000584 | 0.000041 | 0.000082 | 0.004929 | 0.003747 | 0.001096 | 0.000645 | 0.000233 |
MSE | 0.000098 | 0.000086 | 0.000001 | 0.000002 | 0.000099 | 0.000087 | 0.000074 | 0.000085 | 0.000063 | |
Sum of Ranks | 167 | 105 | 21 | 42 | 189 | 148 | 105 | 105 | 63 |
n | Est. | MPSE | RADE | ME | MLE | ADE | PCE | LSE | WLSE | CVME |
50 | |Bias| | 1.257559 | 1.110488 | 0.895641 | 0.937563 | 1.085756 | 1.104985 | 1.109487 | 0.933252 | 0.985664 |
MSE | 0.547369 | 0.528468 | 0.417351 | 0.456382 | 0.472095 | 0.479746 | 0.467374 | 0.459753 | 0.508847 | |
Sum of Ranks | 189 | 168 | 21 | 52.5 | 115.5 | 115.5 | 115.5 | 52.5 | 115.5 | |
200 | |Bias| | 0.749759 | 0.685975 | 0.517481 | 0.577953 | 0.700586 | 0.720588 | 0.718857 | 0.550692 | 0.675974 |
MSE | 0.355389 | 0.326944 | 0.266382 | 0.294073 | 0.320984 | 0.343738 | 0.338767 | 0.254871 | 0.328736 | |
Sum of Ranks | 189 | 94 | 31.5 | 63 | 105.5 | 168 | 147 | 31.5 | 105.5 | |
400 | |Bias| | 0.433069 | 0.404767 | 0.298761 | 0.320582 | 0.410588 | 0.398596 | 0.386595 | 0.340583 | 0.385794 |
MSE | 0.238469 | 0.219467 | 0.144831 | 0.165782 | 0.224348 | 0.205385 | 0.208736 | 0.183653 | 0.198364 | |
Sum of Ranks | 189 | 147 | 21 | 42 | 168 | 115.5 | 115.5 | 63 | 84 | |
500 | |Bias| | 0.288599 | 0.244378 | 0.103871 | 0.140962 | 0.230596 | 0.233097 | 0.210955 | 0.165843 | 0.199464 |
MSE | 0.148269 | 0.120988 | 0.015371 | 0.098733 | 0.118467 | 0.117466 | 0.1053875 | 0.100484 | 0.084632 | |
Sum of Ranks | 189 | 168 | 21 | 52 | 136.5 | 136.5 | 105 | 74 | 63 | |
750 | |Bias| | 0.087569 | 0.018766 | 0.000461 | 0.003953 | 0.019587 | 0.029688 | 0.008694 | 0.000962 | 0.008815 |
MSE | 0.008759 | 0.001348 | 0.000021 | 0.000093 | 0.000847 | 0.000745 | 0.000836 | 0.000082 | 0.000174 | |
Sum of Ranks | 189 | 147.5 | 21 | 63 | 147.5 | 136 | 105 | 42 | 94 |
Scheme | n | MPSE | RADE | ME | MLE | ADE | PCE | LSE | WLSE | CVME |
Scheme Ⅰ | 20 | 4 | 6 | 2 | 1 | 9 | 8 | 7 | 3 | 5
100 | 4 | 6.5 | 1 | 2 | 8 | 9 | 6.5 | 3 | 5 | |
250 | 4 | 7 | 1.5 | 1.5 | 9 | 8 | 4 | 4 | 4 | |
500 | 2 | 6 | 3 | 1 | 8.5 | 8.5 | 4.5 | 4.5 | 4.5 | |
750 | 6 | 5 | 2 | 1 | 8.5 | 8.5 | 3 | 3 | 4 | |
Scheme Ⅱ | 20 | 2.5 | 6 | 1 | 2.5 | 7 | 8.5 | 8.5 | 4.5 | 4.5
100 | 3 | 4.5 | 1 | 2 | 4.5 | 9 | 7.5 | 7.5 | 6 | |
250 | 3 | 6.5 | 2 | 1 | 5 | 9 | 8 | 6.5 | 4 | |
500 | 5 | 8.5 | 2 | 1 | 3 | 7 | 6 | 8.5 | 4 | |
750 | 5 | 6 | 2.5 | 1 | 4 | 9 | 8 | 7 | 2.5 | |
Scheme Ⅲ | 20 | 8 | 8 | 1 | 2 | 6 | 8 | 4.5 | 4.5 | 3
100 | 9 | 7.5 | 1.5 | 4 | 6 | 7.5 | 4 | 1.5 | 4 | |
250 | 8 | 5 | 1 | 3.5 | 9 | 7 | 6 | 3.5 | 2 | |
500 | 8 | 1 | 3 | 3 | 9 | 6 | 7 | 5 | 3 | |
750 | 7 | 5 | 1 | 2 | 9 | 8 | 5 | 5 | 3 | |
Scheme Ⅳ | 20 | 9 | 8 | 1 | 2.5 | 5.5 | 5.5 | 5.5 | 2.5 | 5.5
100 | 9 | 4 | 1.5 | 3 | 5.5 | 8 | 7 | 1.5 | 5.5 | |
250 | 9 | 7 | 1 | 2 | 8 | 5.5 | 5.5 | 3 | 4 | |
500 | 9 | 8 | 1 | 2 | 6.5 | 6.5 | 5 | 4 | 3 | |
750 | 9 | 7.5 | 1 | 3 | 7.5 | 6 | 5 | 2 | 4 | |
Sum of Ranks | 123.5 | 123 | 31 | 41 | 138.5 | 152.5 | 117.5 | 84 | 80.5 | |
Overall Rank | 7 | 6 | 1 | 2 | 8 | 9 | 5 | 4 | 3 |
As the sample size 'n' increases, the trend observed in Tables 2 to 6 shows a clear reduction in the bias of the parameter θ, approaching zero, while the MSE of the HDWC parameter also decreases toward zero. The findings indicate that the derived estimators consistently improve in performance as the sample size increases. Across varying sample sizes, all estimation methods perform well, with Table 6 demonstrating that the ME method outperforms MLE in simulations based on a ranking approach. This ranking process evaluates the methods against criteria such as bias, variance, MSE, efficiency, consistency, and robustness. Simulated data are generated, the estimation methods are applied, and their metrics are compared to determine their effectiveness. The final ranking balances these criteria, providing insights into the most suitable method for the problem at hand.
The choice between MLE and ME ultimately depends on the specific context, including factors like data size, computational complexity, and model requirements. Each method offers distinct advantages: MLE is celebrated for its efficiency, consistency, and precision, particularly in large datasets, making it a powerful choice when computational resources and well-defined likelihood functions are available. On the other hand, ME is a simpler, computationally lighter alternative, particularly advantageous when the likelihood function is difficult to derive or when computational simplicity is a priority. Selecting the appropriate method requires a careful balance between accuracy, simplicity, and the resources available for analysis.
In this section, we showcase the versatility of the HDWC distribution in accurately modeling a variety of datasets from different domains. We assess the fitted distributions using the Chi-square (χ2) test and its corresponding P-value in dataset Ⅰ. In datasets Ⅱ and Ⅲ, the tested models are compared using the Kolmogorov-Smirnov (KS) test with its corresponding P-value and several criteria, namely, the maximized log-likelihood (l), Akaike information criterion (AIC), corrected AIC (CAIC), Bayesian IC (BIC), and Hannan-Quinn IC (HQIC). Since there is a limited number of frequencies for each observation in datasets Ⅱ and Ⅲ, Pearson's Chi-square statistic cannot be employed for an inference test; the KS measure is adequate in this case. The codes are available at the following link: https://docs.google.com/document/d/13XpIsgrCGryY9Ma1777NQ46fDpz32Ufc/edit. We will compare the performance of the HDWC distribution against the competing distributions listed in Table 7.
Distribution | Abbreviation | Author(s) |
Discrete Inverted Nadarajah-Haghighi | DINH | Singh et al. [15] |
Geometric | Geo | - |
Negative Binomial (1 Parameter) | NBI | - |
Discrete Rayleigh | DR | Roy [16] |
Discrete Inverse Rayleigh | DIR | Hussain and Ahmad [17] |
Poisson | Poi | Poisson [18] |
Discrete Burr-Hatke | DBH | El-Morshedy et al. [19] |
Discrete Pareto | DPa | Krishna and Pundir [20] |
The dataset provided represents the count of computer breakdowns occurring over a period of 128 consecutive weeks of operation (Hand et al. [21]). Nonparametric plots are suitable for the initial visualization of this dataset, and the corresponding results are shown in Figure 4. It is observed that there are some extreme values, displaying a right-skewed distribution. Table 8 presents the MLEs for the parameter(s), −l, and Chi-square test values, accompanied by their corresponding P-values, for all competing distributions in dataset Ⅰ.
X | Obs. Fre. | HDWC | DINH | Geo | NBI | DR | DIR | Poi | DBH | DPa |
0 | 15 | 10.339 | 12.401 | 25.527 | 25.520 | 3.636 | 3.237 | 2.310 | 65.853 | 47.806 |
1 | 19 | 20.099 | 30.161 | 20.434 | 20.432 | 10.302 | 47.809 | 9.272 | 21.915 | 19.190 |
2 | 23 | 20.893 | 19.935 | 16.360 | 16.358 | 15.306 | 34.021 | 18.613 | 10.931 | 10.760 |
3 | 14 | 18.288 | 12.782 | 13.096 | 13.097 | 18.041 | 16.649 | 24.912 | 6.538 | 7.021 |
4 | 15 | 14.798 | 8.721 | 10.491 | 10.486 | 18.440 | 8.773 | 25.014 | 4.342 | 5.001 |
5 | 10 | 11.467 | 6.291 | 8.385 | 8.395 | 16.918 | 5.079 | 20.085 | 3.088 | 3.774 |
6 | 8 | 8.657 | 4.732 | 6.720 | 6.721 | 14.172 | 3.174 | 13.440 | 2.304 | 2.967 |
7 | 4 | 6.426 | 3.693 | 5.381 | 5.381 | 10.944 | 2.106 | 7.713 | 1.782 | 2.404 |
8 | 6 | 4.717 | 2.950 | 4.312 | 4.308 | 7.839 | 1.466 | 3.870 | 1.417 | 1.994 |
9 | 2 | 3.436 | 2.412 | 3.456 | 3.449 | 5.228 | 1.059 | 1.736 | 1.151 | 1.686 |
10 | 3 | 2.490 | 2.013 | 2.753 | 2.762 | 3.256 | 0.789 | 0.690 | 0.953 | 1.447 |
11 | 3 | 1.798 | 1.691 | 2.205 | 2.211 | 1.897 | 0.604 | 0.253 | 0.800 | 1.258 |
12 | 2 | 1.295 | 1.460 | 1.766 | 1.770 | 1.036 | 0.472 | 0.080 | 0.680 | 1.106 |
+13 | 4 | 3.297 | 18.758 | 7.114 | 7.110 | 0.985 | 2.762 | 0.006 | 6.246 | 21.586 |
Total | 128 | 128 | 128 | 128 | 128 | 128 | 128 | 128 | 128 | 128 |
−l | 317.707 | 331.931 | 320.703 | 320.703 | 347.148 | 356.525 | 384.974 | 379.346 | 369.766 | |
MLEθ | 0.716 | 1.244 | 0.801 | 0.895 | 0.972 | 0.025 | 4.016 | 0.971 | 0.509 | |
MLEq | − | 1.634 | − | − | − | − | − | − | − | |
χ2 | 4.950 | 20.199 | 11.651 | 10.684 | 49.897 | 50.533 | 88.995 | 143.174 | 94.971 | |
D.F | 8 | 6 | 9 | 9 | 8 | 5 | 6 | 5 | 6 | |
P-value | 0.763 | 0.003 | 0.234 | 0.298 | 0 | 0 | 0 | 0 | 0 |
Of all the distributions tested, the HDWC distribution shows the best performance, making it the most suitable for fitting dataset Ⅰ. The estimated PMFs of all tested distributions for dataset Ⅰ are illustrated in Figure 5, which corroborates the results derived from Table 8.
Table 9 lists different estimators for dataset Ⅰ; it was noted that the MLE and ME are the best estimators for this dataset. In addition, the CVME, LSE, and WLSE techniques work quite well for modeling these data.
MPSE | ADE | ME | MLE | LSE | RADE | PCE | CVME | WLSE | |
θ | 0.8615 | 0.8623 | 0.7173 | 0.7158 | 0.7386 | 0.8611 | 0.8660 | 0.7385 | 0.7387 |
χ2 | 147.0785 | 149.5152 | 4.9860 | 4.950 | 8.3453 | 145.6854 | 161.6198 | 8.3215 | 8.3522 |
D.F | 11 | 11 | 8 | 8 | 9 | 10 | 11 | 9 | 9 |
P.value | 0 | 0 | 0.7591 | 0.763 | 0.4999 | 0 | 0 | 0.5021 | 0.4991 |
Table 10 presents numerical summaries of the empirical descriptive statistics (EDS).
Approach | E(X) | Var(X) | IOD(X) | Sk(X) | Ku(X) |
ME | 4.0148 | 11.4015 | 2.8399 | 1.5941 | 36.1645 |
MLE | 4.0281 | 11.4683 | 2.8471 | 1.5942 | 36.2005 |
LSE | 4.4507 | 13.6949 | 3.0769 | 1.5965 | 37.2589 |
CVME | 4.4485 | 13.6827 | 3.0758 | 1.5965 | 37.2537 |
WLSE | 4.4529 | 13.7071 | 3.0782 | 1.5965 | 37.2640 |
The characteristics of dataset Ⅰ clearly show a right-skewed distribution with leptokurtic features and overdispersion.
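The descriptive statistics reported for a fitted parameter can be reproduced by truncated summation of the PMF. The sketch below computes E(X), Var(X), and the index of dispersion IOD = Var/E for the ME fit of dataset Ⅰ (θ = 0.7173 from Table 9), recovering the ME row of Table 10 to the reported precision; skewness and kurtosis would follow analogously from the third and fourth moments.

```python
# Reproducing Table 10's ME row from the fitted parameter; a sketch computing
# E(X), Var(X), and the index of dispersion IOD = Var/E by truncated summation
# of the HDWC PMF p(x) = (1 - theta)*theta**x*(2 - (1 + theta)*theta**x).

def hdwc_moments(theta, upper=2000):
    pmf = [(1 - theta) * theta**x * (2 - (1 + theta) * theta**x)
           for x in range(upper)]
    mean = sum(x * p for x, p in enumerate(pmf))
    var = sum(x * x * p for x, p in enumerate(pmf)) - mean**2
    return mean, var, var / mean

# ME fit for dataset I (theta = 0.7173, Table 9).
mean, var, iod = hdwc_moments(0.7173)
```

An IOD well above 1 confirms the overdispersion that motivates the HDWC model for this dataset.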
The second dataset comprises the failure times for a sample of 15 electronic components subjected to an accelerated life test (refer to Lawless [22]). Nonparametric plots for dataset Ⅱ are illustrated in Figure 6. It is noted that there are no extreme values, yet the distribution remains right-skewed.
The MLEs and goodness of fit (GOF) measures for this data are reported in Table 11.
Statistic | HDWC | DINH | Geo | NBI | DR | DIR | Poi | DBH | DPa |
MLEθ | 0.948 | 0.578 | 0.965 | 0.982 | 0.999 | 1.8×10−7 | 27.533 | 0.999 | 0.720 |
MLEq | − | 0.193 | − | − | − | − | − | − | − |
−l | 64.770 | 67.879 | 65.002 | 65.000 | 66.394 | 89.096 | 151.206 | 91.368 | 77.402 |
AIC | 131.540 | 139.758 | 132.002 | 132.000 | 134.788 | 180.192 | 304.413 | 184.737 | 156.805 |
CAIC | 131.848 | 140.758 | 132.310 | 132.308 | 135.096 | 180.499 | 304.721 | 185.045 | 157.112 |
BIC | 132.248 | 141.174 | 132.710 | 132.708 | 135.496 | 180.899 | 305.121 | 185.445 | 157.513 |
HQIC | 131.533 | 139.743 | 131.994 | 131.992 | 134.781 | 180.184 | 304.405 | 184.729 | 156.797 |
KS | 0.112 | 0.207 | 0.176 | 0.177 | 0.216 | 0.698 | 0.381 | 0.791 | 0.405 |
P-value | 0.981 | 0.481 | 0.673 | 0.675 | 0.433 | <0.0001 | 0.025 | <0.0001 | 0.009 |
As observed, at a significance level of 0.05, the HDWC, NBI, Geo, DINH, and DR models (for full names, refer to Table 7) perform effectively in modeling dataset Ⅱ, with the HDWC distribution being the most flexible. Figure 7 displays the probability-probability (P-P) plots for dataset Ⅱ, which support the empirical findings presented in Table 11.
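The criteria in Table 11 follow from the maximized log-likelihood via the standard definitions AIC = 2p − 2l, CAIC = AIC + 2p(p+1)/(n−p−1), BIC = p·log n − 2l, and HQIC = 2p·log(log n) − 2l, with p parameters and n observations. The sketch below reproduces the HDWC column from −l = 64.770, p = 1, and n = 15.

```python
import math

# Reproducing the information criteria from the maximized log-likelihood;
# a sketch using the standard definitions with p parameters, n observations.

def criteria(neg_loglik, p, n):
    aic = 2 * p + 2 * neg_loglik
    caic = aic + 2 * p * (p + 1) / (n - p - 1)
    bic = p * math.log(n) + 2 * neg_loglik
    hqic = 2 * p * math.log(math.log(n)) + 2 * neg_loglik
    return aic, caic, bic, hqic

# HDWC on dataset II: -l = 64.770, one parameter, n = 15 failure times.
aic, caic, bic, hqic = criteria(64.770, 1, 15)
```

Since every fitted model here has p = 1 except the DINH (p = 2), the criteria mostly echo the ordering of the log-likelihoods themselves.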
Table 12 lists various estimators for dataset Ⅱ; it was noted that all techniques work quite well for modeling these data except the RADE.
MPSE | ADE | ME | MLE | LSE | RADE | PCE | CVME | WLSE | |
θ | 0.9509 | 0.9496 | 0.9479 | 0.9478 | 0.9503 | 0.5512 | 0.9504 | 0.9499 | 0.9506 |
KS | 0.1119 | 0.1079 | 0.1120 | 0.1119 | 0.1073 | 0.8781 | 0.1102 | 0.1077 | 0.11087 |
P-Value | 0.9810 | 0.9869 | 0.9810 | 0.9809 | 0.9875 | 0.0000 | 0.9838 | 0.9871 | 0.9827 |
The empirical CDF plots for dataset Ⅱ are shown in Figure 8: the left panel compares all competing distributions, while the right panel shows the HDWC fits obtained with the different estimation methods. Both panels support the empirical results reported in Tables 11 and 12.
Table 13 presents numerical summaries of the EDS.
Approach | E(X) | Var(X) | IOD(X) | Sk(X) | Ku(X) |
MPSE | 29.2936 | 493.2271 | 16.8373 | 1.6096 | 47.9050 |
ADE | 28.5054 | 467.4808 | 16.3997 | 1.6095 | 47.8417 |
ME | 27.5341 | 436.7002 | 15.8603 | 1.6095 | 47.7587 |
MLE | 27.5895 | 438.4270 | 15.8911 | 1.6095 | 47.7636 |
LSE | 28.9247 | 481.0908 | 16.6325 | 1.6096 | 47.8757 |
PCE | 28.9856 | 483.0828 | 16.6663 | 1.6096 | 47.8807 |
CVME | 28.6837 | 473.2432 | 16.4987 | 1.6095 | 47.8562 |
WLSE | 29.1080 | 487.1033 | 16.7343 | 1.6096 | 47.8904 |
Dataset Ⅱ clearly exhibits a right-skewed distribution with leptokurtic characteristics and overdispersion.
This dataset includes leukemia remission times (in weeks) for 20 patients, as discussed by Damien and Walker [23] in relation to the concept of discretization. Nonparametric plots for dataset Ⅲ are displayed in Figure 9.
The MLEs and the GOF test and measures for this data are reported in Table 14.
Statistic | HDWC | DINH | Geo | NBI | DR | DIR | Poi | DBH | DPa |
MLEθ | 0.928 | 0.737 | 0.951 | 0.975 | 0.998 | 7.82×10−7 | 19.550 | 0.998 | 0.696 |
MLEq | − | 14.798 | − | − | − | − | − | − | − |
−l | 79.053 | 82.818 | 79.962 | 79.963 | 81.175 | 101.987 | 152.718 | 110.283 | 95.448 |
AIC | 160.107 | 169.635 | 161.925 | 161.926 | 164.351 | 205.975 | 307.436 | 222.565 | 192.896 |
CAIC | 160.329 | 170.341 | 162.147 | 162.148 | 164.572 | 206.197 | 307.658 | 222.787 | 193.118 |
BIC | 161.102 | 171.627 | 162.921 | 162.922 | 165.346 | 206.973 | 308.432 | 223.561 | 193.892 |
HQIC | 160.301 | 170.024 | 162.119 | 162.120 | 164.544 | 206.169 | 307.630 | 222.759 | 193.090 |
KS | 0.103 | 0.189 | 0.145 | 0.144 | 0.199 | 0.556 | 0.352 | 0.751 | 0.392 |
P-value | 0.984 | 0.467 | 0.796 | 0.795 | 0.401 | <0.001 | 0.014 | <0.001 | 0.004 |
As can be seen, at a significance level of 0.05, all tested distributions work quite well for analyzing dataset Ⅲ except the DIR, DBH, DPa, and Poi models, with the HDWC distribution being the best (for full names, refer to Table 7). Figure 10 presents the P-P plots for dataset Ⅲ, reinforcing the empirical results shown in Table 14.
Table 15 provides a list of various estimators for dataset Ⅲ, noting that all methods perform flexibly in modeling these data.
MPSE | ADE | ME | MLE | LSE | RADE | PCE | CVME | WLSE | |
θ | 0.9371 | 0.9373 | 0.9279 | 0.9279 | 0.9319 | 0.9373 | 0.9387 | 0.9315 | 0.9318 |
KS | 0.0763 | 0.0769 | 0.1026 | 0.1026 | 0.1137 | 0.0769 | 0.0828 | 0.1145 | 0.1138 |
P-Value | 0.9996 | 0.9995 | 0.9843 | 0.9843 | 0.9530 | 0.9995 | 0.9986 | 0.9524 | 0.9529 |
Figure 11 (left panel) displays the empirical CDF plots for dataset Ⅲ across all competing distributions, while Figure 11 (right panel) presents the empirical CDF plots for the HDWC distribution using various estimation methods. Together, these figures support the empirical findings discussed in Tables 14 and 15.
Table 16 provides numerical summaries of the EDS for dataset Ⅲ.
Approach | E(X) | Var(X) | IOD(X) | Sk(X) | Ku(X) |
MPSE | 22.5892 | 296.2575 | 13.1149 | 1.6093 | 47.2310 |
ADE | 22.6653 | 298.2128 | 13.1572 | 1.6093 | 47.2409 |
ME | 19.5450 | 223.3083 | 11.4252 | 1.6091 | 46.7802 |
MLE | 19.5739 | 223.9526 | 11.4413 | 1.6091 | 46.7851 |
LSE | 20.7676 | 251.3672 | 12.1038 | 1.6092 | 46.9764 |
RADE | 22.6653 | 298.2128 | 13.1572 | 1.6093 | 47.2409 |
PCE | 23.2119 | 312.4469 | 13.4606 | 1.6093 | 47.3094 |
CVME | 20.6389 | 248.3358 | 12.0323 | 1.6092 | 46.9568 |
WLSE | 20.7353 | 250.6043 | 12.0858 | 1.6092 | 46.9715 |
Dataset Ⅲ distinctly displays a right-skewed distribution with leptokurtic traits and overdispersion.
The analysis of the three datasets provides insights into their reliability characteristics from different perspectives. Dataset Ⅰ, representing the count of computer breakdowns over 128 consecutive weeks, reveals an increasing HRF, indicating a growing likelihood of breakdowns as time progresses, likely due to aging or wear-and-tear effects. The MRL decreases as more breakdowns occur, suggesting shorter expected operational times before subsequent failures. Similarly, the RHRF and VRL also decrease, reinforcing the notion of declining system reliability over time and highlighting the importance of proactive maintenance strategies. Dataset Ⅱ, involving failure times for 15 electronic components subjected to accelerated life tests, also shows an increasing HRF, suggesting higher failure probabilities as time advances under stress conditions. The decreasing MRL highlights the reduced lifespan of components as failures occur, while the declining RHRF and VRL confirm the ongoing degradation. These observations emphasize the necessity of robust design, quality control, and testing to enhance component reliability in challenging environments. Dataset Ⅲ, which contains leukemia remission times for 20 patients, demonstrates a similar reliability trend. The increasing HRF suggests that the risk of remission ending grows over time, potentially due to disease progression or diminishing treatment effectiveness. The decreasing MRL indicates shorter expected remission durations, while the declining RHRF and VRL highlight the reduced stability of remission periods. These findings underscore the importance of tailored treatment plans and continuous monitoring to improve patient outcomes. 
Figure 12 visually summarizes these findings, demonstrating consistent patterns across the datasets, all of which reflect progressively worsening reliability over time, calling for targeted strategies like preventive maintenance, rigorous testing, and personalized interventions to enhance performance and outcomes.
The reliability analysis of these datasets provides valuable insights into the progression of failures and the potential for improving performance and resilience. By understanding these patterns, researchers and practitioners can develop effective approaches tailored to the unique challenges presented by engineering systems, electronic components, and medical treatment plans.
The article introduced the HDWC distribution, a one-parameter discrete model developed using a weighted combining discretization approach. Its statistical properties are all derived and expressed in closed form, and the model proved particularly effective for right-skewed, leptokurtic datasets, making it well suited to real-world data with heavy tails and extreme values. The HDWC distribution is especially valuable when the hazard rate function is increasing and the data contain outliers, providing a more robust framework than traditional models. Nine estimation methods were explored for the HDWC model (MPSE, ME, ADE, MLE, LSE, RADE, PCE, CVME, and WLSE), and all estimated the model's parameter successfully; among them, the ME approach performed best. To further validate the distribution's effectiveness, three real-world datasets were analyzed, and the HDWC distribution outperformed competing distributions, including the geometric, Poisson, and negative binomial models, in every aspect of the analysis. The article also outlined several promising directions for future research: extending the model to time-dependent stochastic processes for applications such as survival analysis and event prediction, adapting it to spatially correlated data in fields such as epidemiology and environmental studies, and incorporating Bayesian methods to improve parameter estimation, especially for small datasets. Additionally, combining the HDWC model with machine learning techniques, or exploring hybrid models, could further enhance its flexibility and predictive power, opening new avenues for its application in complex data modeling.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
The authors extend their appreciation to Prince Sattam bin Abdulaziz University for funding this research work through the project number (PSAU/2024/01/31935).
The data that supports the findings of this study are available within the article.
The authors declare there are no conflicts of interest.
[1] T. Nakagawa, S. Osaki, The discrete Weibull distribution, IEEE Trans. Reliab., 24 (1975), 300–301. https://doi.org/10.1109/TR.1975.5214915
[2] K. B. Kulasekera, D. W. Tonkyn, A new discrete distribution, with applications to survival, dispersal and dispersion, Commun. Stat. Simul. Comput., 21 (1992), 499–518. https://doi.org/10.1080/03610919208813032
[3] H. Sato, M. Ikota, A. Sugimoto, H. Masuda, A new defect distribution metrology with a consistent discrete exponential formula and its applications, IEEE Trans. Semicond. Manuf., 12 (1999), 409–418. https://doi.org/10.1109/66.806118
[4] J. D. Smith, A review of Finn, Fischer, and Handler (Eds.), Collaborative/Therapeutic Assessment: A Casebook and Guide, JPA, 95 (2012), 234–235. https://doi.org/10.1080/00223891.2012.730086
[5] M. Roederer, A. Treister, W. Moore, L. A. Herzenberg, Probability binning comparison: A metric for quantitating univariate distribution differences, Cytometry, 45 (2001), 37–46. https://doi.org/10.1002/1097-0320(20010901)45:1<37::AID-CYTO1142>3.0.CO;2-E
[6] A. Barbiero, A. Hitaj, Discrete approximations of continuous probability distributions obtained by minimizing Cramer-von Mises-type distances, Stat. Papers, 64 (2023), 1669–1697. https://doi.org/10.1007/s00362-022-01356-2
[7] T. Ghosh, D. Roy, N. K. Chandra, Reliability approximation through the discretization of random variables using reversed hazard rate function, Int. J. Math. Comput. Stat. Nat. Phys. Eng., 7 (2013), 96–100.
[8] S. Chakraborty, Generating discrete analogues of continuous probability distributions: A survey of methods and constructions, J. Stat. Distrib. Appl., 2 (2015), 6. https://doi.org/10.1186/s40488-015-0028-6
[9] S. Kotsiantis, D. Kanellopoulos, Discretization techniques: A recent survey, GESTS Int. Trans. Comput. Sci. Eng., 32 (2006), 47–58.
[10] G. Casella, R. L. Berger, Statistical Inference, Duxbury Press, 1990.
[11] C. M. Bishop, Pattern Recognition and Machine Learning, Springer, 2006.
[12] A. Gelman, J. B. Carlin, H. S. Stern, D. B. Dunson, A. Vehtari, D. B. Rubin, Bayesian Data Analysis, CRC Press, 2013. https://doi.org/10.1201/b16018
[13] R. E. Barlow, Statistical Theory of Reliability and Life Testing, 1975.
[14] C. D. Lai, M. Xie, Stochastic Ageing and Dependence for Reliability, Springer, 2006. https://doi.org/10.1007/0-387-34232-X
[15] B. Singh, R. P. Singh, A. S. Nayal, A. Tyagi, Discrete inverted Nadarajah-Haghighi distribution: Properties and classical estimation with application to complete and censored data, Stat. Optim. Inf. Comput., 10 (2022), 1293–1313. https://doi.org/10.19139/soic-2310-5070-1365
[16] D. Roy, Discrete Rayleigh distribution, IEEE Trans. Reliab., 53 (2004), 255–260. https://doi.org/10.1109/TR.2004.829161
[17] T. Hussain, M. Ahmad, Discrete inverse Rayleigh distribution, Pak. J. Stat., 30 (2014).
[18] S. D. Poisson, Probabilité des Jugements en Matière Criminelle et en Matière Civile, Précédées des Règles Générales du Calcul des Probabilités, Bachelier, Paris, 1837.
[19] M. El-Morshedy, M. S. Eliwa, E. Altun, Discrete Burr-Hatke distribution with properties, estimation methods and regression model, IEEE Access, 8 (2020), 74359–74370. https://doi.org/10.1109/ACCESS.2020.2988431
[20] H. Krishna, P. S. Pundir, Discrete Burr and discrete Pareto distributions, Stat. Methodol., 6 (2009), 177–188. https://doi.org/10.1016/j.stamet.2008.07.001
[21] D. J. Hand, F. Daly, K. J. McConway, A. D. Lunn, E. O. Ostrowski, A Handbook of Small Data Sets, Chapman and Hall/CRC, 1993. https://doi.org/10.1201/9780429246579
[22] J. F. Lawless, Statistical Models and Methods for Lifetime Data, John Wiley & Sons, 2011.
[23] P. Damien, S. Walker, A Bayesian non-parametric comparison of two treatments, Scand. J. Stat., 29 (2002), 51–56. https://doi.org/10.1111/1467-9469.00891
γ↓θ→ | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 |
0.5 | 0.8219 | 1.1588 | 1.4393 | 1.7058 | 1.9796 | 2.2803 | 2.6355 | 3.1010 | 3.8412 |
2 | 0.3778 | 0.7073 | 0.9942 | 1.2568 | 1.5185 | 1.8051 | 2.1485 | 2.6065 | 3.3508 |
5 | 0.2633 | 0.5538 | 0.8552 | 1.1247 | 1.3610 | 1.6250 | 1.9632 | 2.4234 | 3.1698 |
n | Est. | MPSE | RADE | ME | MLE | ADE | PCE | LSE | WLSE | CVME
20 | |Bias| | 0.385823 | 0.394885 | 0.359482 | 0.347831 | 0.427519 | 0.417688 | 0.409827 | 0.388944 | 0.399876 |
MSE | 0.284985 | 0.304876 | 0.253842 | 0.210491 | 0.328479 | 0.322648 | 0.310857 | 0.276483 | 0.284714 | |
Sum of Ranks | 84 | 116 | 42 | 21 | 189 | 168 | 147 | 73 | 105 | |
100 | |Bias| | 0.243753 | 0.266345 | 0.213532 | 0.209471 | 0.308759 | 0.288968 | 0.279587 | 0.262954 | 0.270856 |
MSE | 0.177496 | 0.194877 | 0.143751 | 0.154853 | 0.204879 | 0.199478 | 0.164855 | 0.153262 | 0.161834 | |
Sum of Ranks | 94 | 126.5 | 31 | 42 | 188 | 169 | 126.5 | 63 | 105 | |
250 | |Bias| | 0.103843 | 0.1327577 | 0.088642 | 0.042741 | 0.153649 | 0.144898 | 0.130866 | 0.128854 | 0.129475 |
MSE | 0.054645 | 0.099487 | 0.003751 | 0.009482 | 0.110949 | 0.107458 | 0.084746 | 0.018344 | 0.011473 | |
Sum of Ranks | 84 | 147 | 31.5 | 31.5 | 189 | 168 | 126 | 84 | 84 | |
500 | |Bias| | 0.005681 | 0.0183826 | 0.009283 | 0.008142 | 0.074479 | 0.066488 | 0.053767 | 0.012865 | 0.012084 |
MSE | 0.000833 | 0.002096 | 0.000282 | 0.000141 | 0.008478 | 0.009589 | 0.002297 | 0.000844 | 0.000995 | |
Sum of Ranks | 42 | 126 | 53 | 31 | 178.5 | 178.5 | 147 | 94.5 | 94.5 | |
750 | |Bias| | 0.000586 | 0.000264 | 0.000042 | 0.000011 | 0.002839 | 0.000988 | 0.000727 | 0.000183 | 0.000295 |
MSE | 0.000085 | 0.000496 | 0.000012 | 0.000001 | 0.000958 | 0.001099 | 0.000647 | 0.000063 | 0.000074 | |
Sum of Ranks | 116 | 105 | 42 | 21 | 178.5 | 178.5 | 147 | 63 | 94 |
n | Est. | MPSE | RADE | ME | MLE | ADE | PCE | LSE | WLSE | CVME |
20 | |Bias| | 0.892943 | 0.964876 | 0.818931 | 0.857372 | 0.977367 | 1.183679 | 0.994778 | 0.917365 | 0.910744 |
MSE | 0.517482 | 0.665316 | 0.483841 | 0.547623 | 0.689327 | 0.709478 | 0.804769 | 0.550874 | 0.564875 | |
Sum of Ranks | 52.5 | 126 | 21 | 52.5 | 147 | 178.5 | 178.5 | 94.5 | 94.5 | |
100 | |Bias| | 0.503763 | 0.553824 | 0.464861 | 0.475972 | 0.568835 | 0.724149 | 0.684687 | 0.697568 | 0.589756 |
MSE | 0.243663 | 0.290485 | 0.210341 | 0.228942 | 0.284864 | 0.437869 | 0.408738 | 0.397497 | 0.310396 | |
Sum of Ranks | 63 | 94.5 | 21 | 42 | 94.5 | 189 | 157.5 | 157.5 | 126 | |
250 | |Bias| | 0.294763 | 0.386817 | 0.268842 | 0.233741 | 0.345835 | 0.428119 | 0.388468 | 0.364866 | 0.308474 |
MSE | 0.129433 | 0.171036 | 0.100382 | 0.083651 | 0.163865 | 0.210389 | 0.192298 | 0.188377 | 0.139844 | |
Sum of Ranks | 63 | 136.5 | 42 | 21 | 105 | 189 | 168 | 136.5 | 84 | |
500 | |Bias| | 0.138745 | 0.153868 | 0.084672 | 0.038461 | 0.120473 | 0.149777 | 0.142846 | 0.158419 | 0.122984 |
MSE | 0.006866 | 0.019939 | 0.000912 | 0.000231 | 0.001933 | 0.007387 | 0.005635 | 0.018268 | 0.002784 | |
Sum of Ranks | 95 | 178.5 | 42 | 21 | 63 | 147 | 116 | 178.5 | 84 | |
750 | |Bias| | 0.002465 | 0.008416 | 0.000142 | 0.000081 | 0.000884 | 0.029858 | 0.034299 | 0.028467 | 0.000743 |
MSE | 0.000295 | 0.000586 | 0.000033 | 0.000001 | 0.000094 | 0.000989 | 0.000647 | 0.000768 | 0.000022 | |
Sum of Ranks | 105 | 126 | 52.5 | 21 | 84 | 179 | 168 | 157 | 52.5 |
n | Est. | MPSE | RADE | ME | MLE | ADE | PCE | LSE | WLSE | CVME |
20 | |Bias| | 0.650928 | 0.673919 | 0.487641 | 0.507432 | 0.595796 | 0.628747 | 0.557625 | 0.537564 | 0.537313 |
MSE | 0.437548 | 0.428577 | 0.282751 | 0.309352 | 0.385766 | 0.439659 | 0.330964 | 0.348675 | 0.329563 | |
Sum of Ranks | 168 | 168 | 21 | 42 | 126 | 168 | 94.5 | 94.5 | 63 | |
100 | |Bias| | 0.426489 | 0.405757 | 0.310451 | 0.350683 | 0.395916 | 0.410498 | 0.370985 | 0.311052 | 0.350924 |
MSE | 0.288469 | 0.274988 | 0.184662 | 0.234755 | 0.254866 | 0.274437 | 0.220683 | 0.165861 | 0.224614 | |
Sum of Ranks | 189 | 157.5 | 31.5 | 84 | 126 | 157.5 | 84 | 31.5 | 84 | |
250 | |Bias| | 0.249609 | 0.234877 | 0.184651 | 0.209483 | 0.240488 | 0.230866 | 0.230585 | 0.230294 | 0.200852 |
MSE | 0.129687 | 0.110682 | 0.085631 | 0.125725 | 0.135469 | 0.129858 | 0.125766 | 0.120584 | 0.116733 | |
Sum of Ranks | 168 | 95 | 21 | 83.5 | 179 | 147 | 116 | 83.5 | 52 | |
500 | |Bias| | 0.108578 | 0.068564 | 0.028651 | 0.059783 | 0.110489 | 0.095866 | 0.102957 | 0.084855 | 0.055482 |
MSE | 0.008578 | 0.000281 | 0.000855 | 0.000433 | 0.009589 | 0.002056 | 0.007457 | 0.000302 | 0.000784 | |
Sum of Ranks | 168 | 51 | 63 | 63 | 189 | 126 | 147 | 75 | 63 | |
750 | |Bias| | 0.003858 | 0.000584 | 0.000041 | 0.000082 | 0.004929 | 0.003747 | 0.001096 | 0.000645 | 0.000233 |
MSE | 0.000098 | 0.000086 | 0.000001 | 0.000002 | 0.000099 | 0.000087 | 0.000074 | 0.000085 | 0.000063 | |
Sum of Ranks | 167 | 105 | 21 | 42 | 189 | 148 | 105 | 105 | 63 |
n | Est. | MPSE | RADE | ME | MLE | ADE | PCE | LSE | WLSE | CVME |
50 | |Bias| | 1.257559 | 1.110488 | 0.895641 | 0.937563 | 1.085756 | 1.104985 | 1.109487 | 0.933252 | 0.985664 |
MSE | 0.547369 | 0.528468 | 0.417351 | 0.456382 | 0.472095 | 0.479746 | 0.467374 | 0.459753 | 0.508847 | |
Sum of Ranks | 189 | 168 | 21 | 52.5 | 115.5 | 115.5 | 115.5 | 52.5 | 115.5 | |
200 | |Bias| | 0.749759 | 0.685975 | 0.517481 | 0.577953 | 0.700586 | 0.720588 | 0.718857 | 0.550692 | 0.675974 |
MSE | 0.355389 | 0.326944 | 0.266382 | 0.294073 | 0.320984 | 0.343738 | 0.338767 | 0.254871 | 0.328736 | |
Sum of Ranks | 189 | 94 | 31.5 | 63 | 105.5 | 168 | 147 | 31.5 | 105.5 | |
400 | |Bias| | 0.433069 | 0.404767 | 0.298761 | 0.320582 | 0.410588 | 0.398596 | 0.386595 | 0.340583 | 0.385794 |
MSE | 0.238469 | 0.219467 | 0.144831 | 0.165782 | 0.224348 | 0.205385 | 0.208736 | 0.183653 | 0.198364 | |
Sum of Ranks | 189 | 147 | 21 | 42 | 168 | 115.5 | 115.5 | 63 | 84 | |
500 | |Bias| | 0.288599 | 0.244378 | 0.103871 | 0.140962 | 0.230596 | 0.233097 | 0.210955 | 0.165843 | 0.199464 |
MSE | 0.148269 | 0.120988 | 0.015371 | 0.098733 | 0.118467 | 0.117466 | 0.1053875 | 0.100484 | 0.084632 | |
Sum of Ranks | 189 | 168 | 21 | 52 | 136.5 | 136.5 | 105 | 74 | 63 | |
750 | |Bias| | 0.087569 | 0.018766 | 0.000461 | 0.003953 | 0.019587 | 0.029688 | 0.008694 | 0.000962 | 0.008815 |
MSE | 0.008759 | 0.001348 | 0.000021 | 0.000093 | 0.000847 | 0.000745 | 0.000836 | 0.000082 | 0.000174 | |
Sum of Ranks | 189 | 147.5 | 21 | 63 | 147.5 | 136 | 105 | 42 | 94 |
Schema | n | MPSE | RADE | ME | MLE | ADE | PCE | LSE | WLSE | CVME
Schema Ⅰ | 20 | 4 | 6 | 2 | 1 | 9 | 8 | 7 | 3 | 5 |
100 | 4 | 6.5 | 1 | 2 | 8 | 9 | 6.5 | 3 | 5 | |
250 | 4 | 7 | 1.5 | 1.5 | 9 | 8 | 4 | 4 | 4 | |
500 | 2 | 6 | 3 | 1 | 8.5 | 8.5 | 4.5 | 4.5 | 4.5 | |
750 | 6 | 5 | 2 | 1 | 8.5 | 8.5 | 3 | 3 | 4 | |
Schema Ⅱ | 20 | 2.5 | 6 | 1 | 2.5 | 7 | 8.5 | 8.5 | 4.5 | 4.5 |
100 | 3 | 4.5 | 1 | 2 | 4.5 | 9 | 7.5 | 7.5 | 6 | |
250 | 3 | 6.5 | 2 | 1 | 5 | 9 | 8 | 6.5 | 4 | |
500 | 5 | 8.5 | 2 | 1 | 3 | 7 | 6 | 8.5 | 4 | |
750 | 5 | 6 | 2.5 | 1 | 4 | 9 | 8 | 7 | 2.5 | |
Schema Ⅲ | 20 | 8 | 8 | 1 | 2 | 6 | 8 | 4.5 | 4.5 | 3 |
100 | 9 | 7.5 | 1.5 | 4 | 6 | 7.5 | 4 | 1.5 | 4 | |
250 | 8 | 5 | 1 | 3.5 | 9 | 7 | 6 | 3.5 | 2 | |
500 | 8 | 1 | 3 | 3 | 9 | 6 | 7 | 5 | 3 | |
750 | 7 | 5 | 1 | 2 | 9 | 8 | 5 | 5 | 3 | |
Schema Ⅳ | 20 | 9 | 8 | 1 | 2.5 | 5.5 | 5.5 | 5.5 | 2.5 | 5.5 |
100 | 9 | 4 | 1.5 | 3 | 5.5 | 8 | 7 | 1.5 | 5.5 | |
250 | 9 | 7 | 1 | 2 | 8 | 5.5 | 5.5 | 3 | 4 | |
500 | 9 | 8 | 1 | 2 | 6.5 | 6.5 | 5 | 4 | 3 | |
750 | 9 | 7.5 | 1 | 3 | 7.5 | 6 | 5 | 2 | 4 | |
Sum of Ranks | 123.5 | 123 | 31 | 41 | 138.5 | 152.5 | 117.5 | 84 | 80.5 | |
Overall Rank | 7 | 6 | 1 | 2 | 8 | 9 | 5 | 4 | 3 |
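The overall ranking in the table above is obtained by ranking each estimator's summed partial ranks in ascending order, with ties receiving the average of their positions. A minimal sketch reproducing the "Overall Rank" row from the "Sum of Ranks" row:

```python
import numpy as np

def average_ranks(values):
    """Rank values ascending (1 = best), giving tied entries the
    average of the positions they span (mid-rank method)."""
    values = np.asarray(values, dtype=float)
    order = np.argsort(values, kind="stable")
    ranks = np.empty(len(values))
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        ranks[order[i:j + 1]] = (i + j) / 2 + 1  # average of 1-based positions i..j
        i = j + 1
    return ranks

# "Sum of Ranks" row for MPSE, RADE, ME, MLE, ADE, PCE, LSE, WLSE, CVME:
sums = [123.5, 123, 31, 41, 138.5, 152.5, 117.5, 84, 80.5]
overall = average_ranks(sums)  # matches the reported overall ranks
```

The same mid-rank convention explains the fractional entries (e.g., 6.5, 8.5) in the per-schema rows.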
Distribution | Abbreviation | Author(s) |
Discrete Inverted Nadarajah-Haghighi | DINH | Singh et al. [15] |
Geometric | Geo | - |
Negative Binomial (1 Parameter) | NBI | - |
Discrete Rayleigh | DR | Roy [16] |
Discrete Inverse Rayleigh | DIR | Hussain and Ahmad [17] |
Poisson | Poi | Poisson [18] |
Discrete Burr-Hatke | DBH | El-Morshedy et al. [19] |
Discrete Pareto | DPa | Krishna and Pundir [20] |
X | Obs. Fre. | HDWC | DINH | Geo | NBI | DR | DIR | Poi | DBH | DPa |
0 | 15 | 10.339 | 12.401 | 25.527 | 25.520 | 3.636 | 3.237 | 2.310 | 65.853 | 47.806 |
1 | 19 | 20.099 | 30.161 | 20.434 | 20.432 | 10.302 | 47.809 | 9.272 | 21.915 | 19.190 |
2 | 23 | 20.893 | 19.935 | 16.360 | 16.358 | 15.306 | 34.021 | 18.613 | 10.931 | 10.760 |
3 | 14 | 18.288 | 12.782 | 13.096 | 13.097 | 18.041 | 16.649 | 24.912 | 6.538 | 7.021 |
4 | 15 | 14.798 | 8.721 | 10.491 | 10.486 | 18.440 | 8.773 | 25.014 | 4.342 | 5.001 |
5 | 10 | 11.467 | 6.291 | 8.385 | 8.395 | 16.918 | 5.079 | 20.085 | 3.088 | 3.774 |
6 | 8 | 8.657 | 4.732 | 6.720 | 6.721 | 14.172 | 3.174 | 13.440 | 2.304 | 2.967 |
7 | 4 | 6.426 | 3.693 | 5.381 | 5.381 | 10.944 | 2.106 | 7.713 | 1.782 | 2.404 |
8 | 6 | 4.717 | 2.950 | 4.312 | 4.308 | 7.839 | 1.466 | 3.870 | 1.417 | 1.994 |
9 | 2 | 3.436 | 2.412 | 3.456 | 3.449 | 5.228 | 1.059 | 1.736 | 1.151 | 1.686 |
10 | 3 | 2.490 | 2.013 | 2.753 | 2.762 | 3.256 | 0.789 | 0.690 | 0.953 | 1.447 |
11 | 3 | 1.798 | 1.691 | 2.205 | 2.211 | 1.897 | 0.604 | 0.253 | 0.800 | 1.258 |
12 | 2 | 1.295 | 1.460 | 1.766 | 1.770 | 1.036 | 0.472 | 0.080 | 0.680 | 1.106 |
+13 | 4 | 3.297 | 18.758 | 7.114 | 7.110 | 0.985 | 2.762 | 0.006 | 6.246 | 21.586 |
Total | 128 | 128 | 128 | 128 | 128 | 128 | 128 | 128 | 128 | 128 |
−l | 317.707 | 331.931 | 320.703 | 320.703 | 347.148 | 356.525 | 384.974 | 379.346 | 369.766 | |
MLEθ | 0.716 | 1.244 | 0.801 | 0.895 | 0.972 | 0.025 | 4.016 | 0.971 | 0.509 | |
MLEq | − | 1.634 | − | − | − | − | − | − | − | |
χ2 | 4.950 | 20.199 | 11.651 | 10.684 | 49.897 | 50.533 | 88.995 | 143.174 | 94.971 | |
D.F | 8 | 6 | 9 | 9 | 8 | 5 | 6 | 5 | 6 | |
P-value | 0.763 | 0.003 | 0.234 | 0.298 | 0 | 0 | 0 | 0 | 0 |
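The χ² values in the table compare observed and fitted frequencies via the Pearson statistic χ² = Σ (O − E)²/E. A minimal sketch for the HDWC column of Dataset Ⅰ; note that the reported value (4.950 on 8 degrees of freedom) implies the sparse tail cells were pooled before computing the statistic, so the unpooled value below comes out somewhat larger:

```python
import numpy as np

# Observed breakdown counts and HDWC expected frequencies from the table.
obs = np.array([15, 19, 23, 14, 15, 10, 8, 4, 6, 2, 3, 3, 2, 4])
exp = np.array([10.339, 20.099, 20.893, 18.288, 14.798, 11.467, 8.657,
                6.426, 4.717, 3.436, 2.490, 1.798, 1.295, 3.297])

# Pearson chi-square over all cells (no pooling of low-expected cells).
chi2 = float(np.sum((obs - exp) ** 2 / exp))
```

Degrees of freedom are (number of cells after pooling) − 1 − (number of fitted parameters), which is how the table arrives at 8 for the one-parameter HDWC fit.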
Statistic | MPSE | ADE | ME | MLE | LSE | RADE | PCE | CVME | WLSE
θ | 0.8615 | 0.8623 | 0.7173 | 0.7158 | 0.7386 | 0.8611 | 0.8660 | 0.7385 | 0.7387 |
χ2 | 147.0785 | 149.5152 | 4.9860 | 4.950 | 8.3453 | 145.6854 | 161.6198 | 8.3215 | 8.3522 |
D.F | 11 | 11 | 8 | 8 | 9 | 10 | 11 | 9 | 9 |
P-value | 0 | 0 | 0.7591 | 0.763 | 0.4999 | 0 | 0 | 0.5021 | 0.4991
Approach | E(X) | Var(X) | IOD(X) | Sk(X) | Ku(X) |
ME | 4.0148 | 11.4015 | 2.8399 | 1.5941 | 36.1645 |
MLE | 4.0281 | 11.4683 | 2.8471 | 1.5942 | 36.2005 |
LSE | 4.4507 | 13.6949 | 3.0769 | 1.5965 | 37.2589 |
CVME | 4.4485 | 13.6827 | 3.0758 | 1.5965 | 37.2537 |
WLSE | 4.4529 | 13.7071 | 3.0782 | 1.5965 | 37.2640 |
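The IOD(X) column is simply the index of dispersion, Var(X)/E(X); values well above 1 confirm the overdispersion the HDWC model targets. A quick check against the ME row of the table above:

```python
# ME estimates of the mean and variance from the table.
mean_x, var_x = 4.0148, 11.4015

iod = var_x / mean_x        # index of dispersion, Var(X)/E(X)
overdispersed = iod > 1.0   # IOD > 1 signals overdispersion
```

The same ratio reproduces the IOD columns in the later moment tables for Datasets Ⅱ and Ⅲ.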
Statistic | HDWC | DINH | Geo | NBI | DR | DIR | Poi | DBH | DPa |
MLEθ | 0.948 | 0.578 | 0.965 | 0.982 | 0.999 | 1.8×10⁻⁷ | 27.533 | 0.999 | 0.720
MLEq | − | 0.193 | − | − | − | − | − | − | − |
−l | 64.770 | 67.879 | 65.002 | 65.000 | 66.394 | 89.096 | 151.206 | 91.368 | 77.402 |
AIC | 131.540 | 139.758 | 132.002 | 132.000 | 134.788 | 180.192 | 304.413 | 184.737 | 156.805 |
CAIC | 131.848 | 140.758 | 132.310 | 132.308 | 135.096 | 180.499 | 304.721 | 185.045 | 157.112 |
BIC | 132.248 | 141.174 | 132.710 | 132.708 | 135.496 | 180.899 | 305.121 | 185.445 | 157.513 |
HQIC | 131.533 | 139.743 | 131.994 | 131.992 | 134.781 | 180.184 | 304.405 | 184.729 | 156.797 |
KS | 0.112 | 0.207 | 0.176 | 0.177 | 0.216 | 0.698 | 0.381 | 0.791 | 0.405 |
P-value | 0.981 | 0.481 | 0.673 | 0.675 | 0.433 | <0.0001 | 0.025 | <0.0001 | 0.009 |
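The information criteria in the table follow the standard definitions AIC = 2k − 2ℓ, CAIC = AIC + 2k(k+1)/(n − k − 1), BIC = k ln n − 2ℓ, and HQIC = 2k ln(ln n) − 2ℓ, where ℓ is the maximized log-likelihood, k the number of parameters, and n the sample size. A sketch reproducing (up to rounding) the HDWC column for Dataset Ⅱ:

```python
import math

def info_criteria(neg_ll, k, n):
    """Model-selection criteria from the negative maximized
    log-likelihood neg_ll (= -l), k parameters, n observations."""
    aic = 2 * k + 2 * neg_ll
    caic = aic + 2 * k * (k + 1) / (n - k - 1)  # small-sample corrected AIC
    bic = k * math.log(n) + 2 * neg_ll
    hqic = 2 * k * math.log(math.log(n)) + 2 * neg_ll
    return aic, caic, bic, hqic

# HDWC fit to Dataset II (15 electronic components, one parameter, -l = 64.770).
aic, caic, bic, hqic = info_criteria(64.770, k=1, n=15)
```

Smaller values indicate a better fit after penalizing model complexity, which is why the HDWC column dominates the comparison.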
Statistic | MPSE | ADE | ME | MLE | LSE | RADE | PCE | CVME | WLSE
θ | 0.9509 | 0.9496 | 0.9479 | 0.9478 | 0.9503 | 0.5512 | 0.9504 | 0.9499 | 0.9506 |
KS | 0.1119 | 0.1079 | 0.1120 | 0.1119 | 0.1073 | 0.8781 | 0.1102 | 0.1077 | 0.11087 |
P-value | 0.9810 | 0.9869 | 0.9810 | 0.9809 | 0.9875 | 0.0000 | 0.9838 | 0.9871 | 0.9827
Approach | E(X) | Var(X) | IOD(X) | Sk(X) | Ku(X) |
MPSE | 29.2936 | 493.2271 | 16.8373 | 1.6096 | 47.9050 |
ADE | 28.5054 | 467.4808 | 16.3997 | 1.6095 | 47.8417 |
ME | 27.5341 | 436.7002 | 15.8603 | 1.6095 | 47.7587 |
MLE | 27.5895 | 438.4270 | 15.8911 | 1.6095 | 47.7636 |
LSE | 28.9247 | 481.0908 | 16.6325 | 1.6096 | 47.8757 |
PCE | 28.9856 | 483.0828 | 16.6663 | 1.6096 | 47.8807 |
CVME | 28.6837 | 473.2432 | 16.4987 | 1.6095 | 47.8562 |
WLSE | 29.1080 | 487.1033 | 16.7343 | 1.6096 | 47.8904 |
Statistic | HDWC | DINH | Geo | NBI | DR | DIR | Poi | DBH | DPa |
MLEθ | 0.928 | 0.737 | 0.951 | 0.975 | 0.998 | 7.82×10⁻⁷ | 19.550 | 0.998 | 0.696
MLEq | − | 14.798 | − | − | − | − | − | − | − |
−l | 79.053 | 82.818 | 79.962 | 79.963 | 81.175 | 101.987 | 152.718 | 110.283 | 95.448
AIC | 160.107 | 169.635 | 161.925 | 161.926 | 164.351 | 205.975 | 307.436 | 222.565 | 192.896 |
CAIC | 160.329 | 170.341 | 162.147 | 162.148 | 164.572 | 206.197 | 307.658 | 222.787 | 193.118 |
BIC | 161.102 | 171.627 | 162.921 | 162.922 | 165.346 | 206.973 | 308.432 | 223.561 | 193.892 |
HQIC | 160.301 | 170.024 | 162.119 | 162.120 | 164.544 | 206.169 | 307.630 | 222.759 | 193.090 |
KS | 0.103 | 0.189 | 0.145 | 0.144 | 0.199 | 0.556 | 0.352 | 0.751 | 0.392
P-value | 0.984 | 0.467 | 0.796 | 0.795 | 0.401 | <0.001 | 0.014 | <0.001 | 0.004 |
Statistic | MPSE | ADE | ME | MLE | LSE | RADE | PCE | CVME | WLSE
θ | 0.9371 | 0.9373 | 0.9279 | 0.9279 | 0.9319 | 0.9373 | 0.9387 | 0.9315 | 0.9318 |
KS | 0.0763 | 0.0769 | 0.1026 | 0.1026 | 0.1137 | 0.0769 | 0.0828 | 0.1145 | 0.1138 |
P-value | 0.9996 | 0.9995 | 0.9843 | 0.9843 | 0.9530 | 0.9995 | 0.9986 | 0.9524 | 0.9529
Approach | E(X) | Var(X) | IOD(X) | Sk(X) | Ku(X) |
MPSE | 22.5892 | 296.2575 | 13.1149 | 1.6093 | 47.2310 |
ADE | 22.6653 | 298.2128 | 13.1572 | 1.6093 | 47.2409 |
ME | 19.5450 | 223.3083 | 11.4252 | 1.6091 | 46.7802 |
MLE | 19.5739 | 223.9526 | 11.4413 | 1.6091 | 46.7851 |
LSE | 20.7676 | 251.3672 | 12.1038 | 1.6092 | 46.9764 |
RADE | 22.6653 | 298.2128 | 13.1572 | 1.6093 | 47.2409 |
PCE | 23.2119 | 312.4469 | 13.4606 | 1.6093 | 47.3094 |
CVME | 20.6389 | 248.3358 | 12.0323 | 1.6092 | 46.9568 |
WLSE | 20.7353 | 250.6043 | 12.0858 | 1.6092 | 46.9715 |
γ↓θ→ | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 |
0.5 | 0.8219 | 1.1588 | 1.4393 | 1.7058 | 1.9796 | 2.2803 | 2.6355 | 3.1010 | 3.8412 |
2 | 0.3778 | 0.7073 | 0.9942 | 1.2568 | 1.5185 | 1.8051 | 2.1485 | 2.6065 | 3.3508 |
5 | 0.2633 | 0.5538 | 0.8552 | 1.1247 | 1.3610 | 1.6250 | 1.9632 | 2.4234 | 3.1698 |
n | MPSE | RADE | ME | MLE | ADE | PCE | LSE | WLSE | CVME | |
20 | |Bias| | 0.385823 | 0.394885 | 0.359482 | 0.347831 | 0.427519 | 0.417688 | 0.409827 | 0.388944 | 0.399876 |
MSE | 0.284985 | 0.304876 | 0.253842 | 0.210491 | 0.328479 | 0.322648 | 0.310857 | 0.276483 | 0.284714 | |
Sum of Ranks | 84 | 116 | 42 | 21 | 189 | 168 | 147 | 73 | 105 | |
100 | |Bias| | 0.243753 | 0.266345 | 0.213532 | 0.209471 | 0.308759 | 0.288968 | 0.279587 | 0.262954 | 0.270856 |
MSE | 0.177496 | 0.194877 | 0.143751 | 0.154853 | 0.204879 | 0.199478 | 0.164855 | 0.153262 | 0.161834 | |
Sum of Ranks | 94 | 126.5 | 31 | 42 | 188 | 169 | 126.5 | 63 | 105 | |
250 | |Bias| | 0.103843 | 0.1327577 | 0.088642 | 0.042741 | 0.153649 | 0.144898 | 0.130866 | 0.128854 | 0.129475 |
MSE | 0.054645 | 0.099487 | 0.003751 | 0.009482 | 0.110949 | 0.107458 | 0.084746 | 0.018344 | 0.011473 | |
Sum of Ranks | 84 | 147 | 31.5 | 31.5 | 189 | 168 | 126 | 84 | 84 | |
500 | |Bias| | 0.005681 | 0.0183826 | 0.009283 | 0.008142 | 0.074479 | 0.066488 | 0.053767 | 0.012865 | 0.012084 |
MSE | 0.000833 | 0.002096 | 0.000282 | 0.000141 | 0.008478 | 0.009589 | 0.002297 | 0.000844 | 0.000995 | |
Sum of Ranks | 42 | 126 | 53 | 31 | 178.5 | 178.5 | 147 | 94.5 | 94.5 | |
750 | |Bias| | 0.000586 | 0.000264 | 0.000042 | 0.000011 | 0.002839 | 0.000988 | 0.000727 | 0.000183 | 0.000295 |
MSE | 0.000085 | 0.000496 | 0.000012 | 0.000001 | 0.000958 | 0.001099 | 0.000647 | 0.000063 | 0.000074 | |
Sum of Ranks | 116 | 105 | 42 | 21 | 178.5 | 178.5 | 147 | 63 | 94 |
n | Est. | MPSE | RADE | ME | MLE | ADE | PCE | LSE | WLSE | CVME |
20 | |Bias| | 0.892943 | 0.964876 | 0.818931 | 0.857372 | 0.977367 | 1.183679 | 0.994778 | 0.917365 | 0.910744 |
MSE | 0.517482 | 0.665316 | 0.483841 | 0.547623 | 0.689327 | 0.709478 | 0.804769 | 0.550874 | 0.564875 | |
Sum of Ranks | 52.5 | 126 | 21 | 52.5 | 147 | 178.5 | 178.5 | 94.5 | 94.5 | |
100 | |Bias| | 0.503763 | 0.553824 | 0.464861 | 0.475972 | 0.568835 | 0.724149 | 0.684687 | 0.697568 | 0.589756 |
MSE | 0.243663 | 0.290485 | 0.210341 | 0.228942 | 0.284864 | 0.437869 | 0.408738 | 0.397497 | 0.310396 | |
Sum of Ranks | 63 | 94.5 | 21 | 42 | 94.5 | 189 | 157.5 | 157.5 | 126 | |
250 | |Bias| | 0.294763 | 0.386817 | 0.268842 | 0.233741 | 0.345835 | 0.428119 | 0.388468 | 0.364866 | 0.308474 |
MSE | 0.129433 | 0.171036 | 0.100382 | 0.083651 | 0.163865 | 0.210389 | 0.192298 | 0.188377 | 0.139844 | |
Sum of Ranks | 63 | 136.5 | 42 | 21 | 105 | 189 | 168 | 136.5 | 84 | |
500 | |Bias| | 0.138745 | 0.153868 | 0.084672 | 0.038461 | 0.120473 | 0.149777 | 0.142846 | 0.158419 | 0.122984 |
MSE | 0.006866 | 0.019939 | 0.000912 | 0.000231 | 0.001933 | 0.007387 | 0.005635 | 0.018268 | 0.002784 | |
Sum of Ranks | 95 | 178.5 | 42 | 21 | 63 | 147 | 116 | 178.5 | 84 | |
750 | |Bias| | 0.002465 | 0.008416 | 0.000142 | 0.000081 | 0.000884 | 0.029858 | 0.034299 | 0.028467 | 0.000743 |
MSE | 0.000295 | 0.000586 | 0.000033 | 0.000001 | 0.000094 | 0.000989 | 0.000647 | 0.000768 | 0.000022 | |
Sum of Ranks | 105 | 126 | 52.5 | 21 | 84 | 179 | 168 | 157 | 52.5 |
n | Est. | MPSE | RADE | ME | MLE | ADE | PCE | LSE | WLSE | CVME |
20 | |Bias| | 0.650928 | 0.673919 | 0.487641 | 0.507432 | 0.595796 | 0.628747 | 0.557625 | 0.537564 | 0.537313 |
MSE | 0.437548 | 0.428577 | 0.282751 | 0.309352 | 0.385766 | 0.439659 | 0.330964 | 0.348675 | 0.329563 | |
Sum of Ranks | 168 | 168 | 21 | 42 | 126 | 168 | 94.5 | 94.5 | 63 | |
100 | |Bias| | 0.426489 | 0.405757 | 0.310451 | 0.350683 | 0.395916 | 0.410498 | 0.370985 | 0.311052 | 0.350924 |
MSE | 0.288469 | 0.274988 | 0.184662 | 0.234755 | 0.254866 | 0.274437 | 0.220683 | 0.165861 | 0.224614 | |
Sum of Ranks | 189 | 157.5 | 31.5 | 84 | 126 | 157.5 | 84 | 31.5 | 84 | |
250 | |Bias| | 0.249609 | 0.234877 | 0.184651 | 0.209483 | 0.240488 | 0.230866 | 0.230585 | 0.230294 | 0.200852 |
MSE | 0.129687 | 0.110682 | 0.085631 | 0.125725 | 0.135469 | 0.129858 | 0.125766 | 0.120584 | 0.116733 | |
Sum of Ranks | 168 | 95 | 21 | 83.5 | 179 | 147 | 116 | 83.5 | 52 | |
500 | |Bias| | 0.108578 | 0.068564 | 0.028651 | 0.059783 | 0.110489 | 0.095866 | 0.102957 | 0.084855 | 0.055482 |
MSE | 0.008578 | 0.000281 | 0.000855 | 0.000433 | 0.009589 | 0.002056 | 0.007457 | 0.000302 | 0.000784 | |
Sum of Ranks | 168 | 51 | 63 | 63 | 189 | 126 | 147 | 75 | 63 | |
750 | |Bias| | 0.003858 | 0.000584 | 0.000041 | 0.000082 | 0.004929 | 0.003747 | 0.001096 | 0.000645 | 0.000233 |
MSE | 0.000098 | 0.000086 | 0.000001 | 0.000002 | 0.000099 | 0.000087 | 0.000074 | 0.000085 | 0.000063 | |
Sum of Ranks | 167 | 105 | 21 | 42 | 189 | 148 | 105 | 105 | 63 |
n | Est. | MPSE | RADE | ME | MLE | ADE | PCE | LSE | WLSE | CVME |
50 | |Bias| | 1.257559 | 1.110488 | 0.895641 | 0.937563 | 1.085756 | 1.104985 | 1.109487 | 0.933252 | 0.985664 |
MSE | 0.547369 | 0.528468 | 0.417351 | 0.456382 | 0.472095 | 0.479746 | 0.467374 | 0.459753 | 0.508847 | |
Sum of Ranks | 189 | 168 | 21 | 52.5 | 115.5 | 115.5 | 115.5 | 52.5 | 115.5 | |
200 | |Bias| | 0.749759 | 0.685975 | 0.517481 | 0.577953 | 0.700586 | 0.720588 | 0.718857 | 0.550692 | 0.675974 |
MSE | 0.355389 | 0.326944 | 0.266382 | 0.294073 | 0.320984 | 0.343738 | 0.338767 | 0.254871 | 0.328736 | |
Sum of Ranks | 189 | 94 | 31.5 | 63 | 105.5 | 168 | 147 | 31.5 | 105.5 | |
400 | |Bias| | 0.433069 | 0.404767 | 0.298761 | 0.320582 | 0.410588 | 0.398596 | 0.386595 | 0.340583 | 0.385794 |
MSE | 0.238469 | 0.219467 | 0.144831 | 0.165782 | 0.224348 | 0.205385 | 0.208736 | 0.183653 | 0.198364 | |
Sum of Ranks | 189 | 147 | 21 | 42 | 168 | 115.5 | 115.5 | 63 | 84 | |
500 | |Bias| | 0.288599 | 0.244378 | 0.103871 | 0.140962 | 0.230596 | 0.233097 | 0.210955 | 0.165843 | 0.199464 |
MSE | 0.148269 | 0.120988 | 0.015371 | 0.098733 | 0.118467 | 0.117466 | 0.1053875 | 0.100484 | 0.084632 | |
Sum of Ranks | 189 | 168 | 21 | 52 | 136.5 | 136.5 | 105 | 74 | 63 | |
750 | |Bias| | 0.087569 | 0.018766 | 0.000461 | 0.003953 | 0.019587 | 0.029688 | 0.008694 | 0.000962 | 0.008815 |
MSE | 0.008759 | 0.001348 | 0.000021 | 0.000093 | 0.000847 | 0.000745 | 0.000836 | 0.000082 | 0.000174 | |
Sum of Ranks | 189 | 147.5 | 21 | 63 | 147.5 | 136 | 105 | 42 | 94 |
n | MPSE | RADE | ME | MLE | ADE | PCE | LSE | WLSE | CVME | |
Schema Ⅰ | 20 | 4 | 6 | 2 | 1 | 9 | 8 | 7 | 3 | 5 |
100 | 4 | 6.5 | 1 | 2 | 8 | 9 | 6.5 | 3 | 5 | |
250 | 4 | 7 | 1.5 | 1.5 | 9 | 8 | 4 | 4 | 4 | |
500 | 2 | 6 | 3 | 1 | 8.5 | 8.5 | 4.5 | 4.5 | 4.5 | |
750 | 6 | 5 | 2 | 1 | 8.5 | 8.5 | 3 | 3 | 4 | |
Schema Ⅱ | 20 | 2.5 | 6 | 1 | 2.5 | 7 | 8.5 | 8.5 | 4.5 | 4.5 |
100 | 3 | 4.5 | 1 | 2 | 4.5 | 9 | 7.5 | 7.5 | 6 | |
250 | 3 | 6.5 | 2 | 1 | 5 | 9 | 8 | 6.5 | 4 | |
500 | 5 | 8.5 | 2 | 1 | 3 | 7 | 6 | 8.5 | 4 | |
750 | 5 | 6 | 2.5 | 1 | 4 | 9 | 8 | 7 | 2.5 | |
Schema Ⅲ | 20 | 8 | 8 | 1 | 2 | 6 | 8 | 4.5 | 4.5 | 3 |
100 | 9 | 7.5 | 1.5 | 4 | 6 | 7.5 | 4 | 1.5 | 4 | |
250 | 8 | 5 | 1 | 3.5 | 9 | 7 | 6 | 3.5 | 2 | |
500 | 8 | 1 | 3 | 3 | 9 | 6 | 7 | 5 | 3 | |
750 | 7 | 5 | 1 | 2 | 9 | 8 | 5 | 5 | 3 | |
Schema Ⅳ | 20 | 9 | 8 | 1 | 2.5 | 5.5 | 5.5 | 5.5 | 2.5 | 5.5 |
100 | 9 | 4 | 1.5 | 3 | 5.5 | 8 | 7 | 1.5 | 5.5 | |
250 | 9 | 7 | 1 | 2 | 8 | 5.5 | 5.5 | 3 | 4 | |
500 | 9 | 8 | 1 | 2 | 6.5 | 6.5 | 5 | 4 | 3 | |
750 | 9 | 7.5 | 1 | 3 | 7.5 | 6 | 5 | 2 | 4 | |
Sum of Ranks | 123.5 | 123 | 31 | 41 | 138.5 | 152.5 | 117.5 | 84 | 80.5 | |
Overall Rank | 7 | 6 | 1 | 2 | 8 | 9 | 5 | 4 | 3 |
Distribution | Abbreviation | Author(s) |
Discrete Inverted Nadarajah-Haghighi | DINH | Singh et al. [15] |
Geometric | Geo | - |
Negative Binomial (1 Parameter) | NBI | - |
Discrete Rayleigh | DR | Roy [16] |
Discrete Inverse Rayleigh | DIR | Hussain and Ahmad [17] |
Poisson | Poi | Poisson [18] |
Discrete Burr-Hatke | DBH | El-Morshedy et al. [19] |
Discrete Pareto | DPa | Krishna and Pundir [20] |
X | Obs. Fre. | HDWC | DINH | Geo | NBI | DR | DIR | Poi | DBH | DPa |
0 | 15 | 10.339 | 12.401 | 25.527 | 25.520 | 3.636 | 3.237 | 2.310 | 65.853 | 47.806 |
1 | 19 | 20.099 | 30.161 | 20.434 | 20.432 | 10.302 | 47.809 | 9.272 | 21.915 | 19.190 |
2 | 23 | 20.893 | 19.935 | 16.360 | 16.358 | 15.306 | 34.021 | 18.613 | 10.931 | 10.760 |
3 | 14 | 18.288 | 12.782 | 13.096 | 13.097 | 18.041 | 16.649 | 24.912 | 6.538 | 7.021 |
4 | 15 | 14.798 | 8.721 | 10.491 | 10.486 | 18.440 | 8.773 | 25.014 | 4.342 | 5.001 |
5 | 10 | 11.467 | 6.291 | 8.385 | 8.395 | 16.918 | 5.079 | 20.085 | 3.088 | 3.774 |
6 | 8 | 8.657 | 4.732 | 6.720 | 6.721 | 14.172 | 3.174 | 13.440 | 2.304 | 2.967 |
7 | 4 | 6.426 | 3.693 | 5.381 | 5.381 | 10.944 | 2.106 | 7.713 | 1.782 | 2.404 |
8 | 6 | 4.717 | 2.950 | 4.312 | 4.308 | 7.839 | 1.466 | 3.870 | 1.417 | 1.994 |
9 | 2 | 3.436 | 2.412 | 3.456 | 3.449 | 5.228 | 1.059 | 1.736 | 1.151 | 1.686 |
10 | 3 | 2.490 | 2.013 | 2.753 | 2.762 | 3.256 | 0.789 | 0.690 | 0.953 | 1.447 |
11 | 3 | 1.798 | 1.691 | 2.205 | 2.211 | 1.897 | 0.604 | 0.253 | 0.800 | 1.258 |
12 | 2 | 1.295 | 1.460 | 1.766 | 1.770 | 1.036 | 0.472 | 0.080 | 0.680 | 1.106 |
+13 | 4 | 3.297 | 18.758 | 7.114 | 7.110 | 0.985 | 2.762 | 0.006 | 6.246 | 21.586 |
Total | 128 | 128 | 128 | 128 | 128 | 128 | 128 | 128 | 128 | 128 |
−l | 317.707 | 331.931 | 320.703 | 320.703 | 347.148 | 356.525 | 384.974 | 379.346 | 369.766 | |
MLEθ | 0.716 | 1.244 | 0.801 | 0.895 | 0.972 | 0.025 | 4.016 | 0.971 | 0.509 | |
MLEq | − | 1.634 | − | − | − | − | − | − | − | |
χ2 | 4.950 | 20.199 | 11.651 | 10.684 | 49.897 | 50.533 | 88.995 | 143.174 | 94.971 | |
D.F | 8 | 6 | 9 | 9 | 8 | 5 | 6 | 5 | 6 | |
P-value | 0.763 | 0.003 | 0.234 | 0.298 | 0 | 0 | 0 | 0 | 0 |
MPSE | ADE | ME | MLE | LSE | RADE | PCE | CVME | WLSE | |
θ | 0.8615 | 0.8623 | 0.7173 | 0.7158 | 0.7386 | 0.8611 | 0.8660 | 0.7385 | 0.7387 |
χ2 | 147.0785 | 149.5152 | 4.9860 | 4.950 | 8.3453 | 145.6854 | 161.6198 | 8.3215 | 8.3522 |
D.F | 11 | 11 | 8 | 8 | 9 | 10 | 11 | 9 | 9 |
P.value | 0 | 0 | 0.7591 | 0.763 | 0.4999 | 0 | 0 | 0.5021 | 0.4991 |
Approach | E(X) | Var(X) | IOD(X) | Sk(X) | Ku(X) |
ME | 4.0148 | 11.4015 | 2.8399 | 1.5941 | 36.1645 |
MLE | 4.0281 | 11.4683 | 2.8471 | 1.5942 | 36.2005 |
LSE | 4.4507 | 13.6949 | 3.0769 | 1.5965 | 37.2589 |
CVME | 4.4485 | 13.6827 | 3.0758 | 1.5965 | 37.2537 |
WLSE | 4.4529 | 13.7071 | 3.0782 | 1.5965 | 37.2640 |
Statistic | HDWC | DINH | Geo | NBI | DR | DIR | Poi | DBH | DPa |
MLEθ | 0.948 | 0.578 | 0.965 | 0.982 | 0.999 | 1.8×10−7 | 27.533 | 0.999 | 0.720 |
MLEq | − | 0.193 | − | − | − | − | − | − | − |
−l | 64.770 | 67.879 | 65.002 | 65.000 | 66.394 | 89.096 | 151.206 | 91.368 | 77.402 |
AIC | 131.540 | 139.758 | 132.002 | 132.000 | 134.788 | 180.192 | 304.413 | 184.737 | 156.805 |
CAIC | 131.848 | 140.758 | 132.310 | 132.308 | 135.096 | 180.499 | 304.721 | 185.045 | 157.112 |
BIC | 132.248 | 141.174 | 132.710 | 132.708 | 135.496 | 180.899 | 305.121 | 185.445 | 157.513 |
HQIC | 131.533 | 139.743 | 131.994 | 131.992 | 134.781 | 180.184 | 304.405 | 184.729 | 156.797 |
KS | 0.112 | 0.207 | 0.176 | 0.177 | 0.216 | 0.698 | 0.381 | 0.791 | 0.405 |
P-value | 0.981 | 0.481 | 0.673 | 0.675 | 0.433 | <0.0001 | 0.025 | <0.0001 | 0.009 |
Estimates of θ and the resulting Kolmogorov–Smirnov fit under the nine estimation methods, second data set:

| Statistic | MPSE | ADE | ME | MLE | LSE | RADE | PCE | CVME | WLSE |
|---|---|---|---|---|---|---|---|---|---|
| θ | 0.9509 | 0.9496 | 0.9479 | 0.9478 | 0.9503 | 0.5512 | 0.9504 | 0.9499 | 0.9506 |
| KS | 0.1119 | 0.1079 | 0.1120 | 0.1119 | 0.1073 | 0.8781 | 0.1102 | 0.1077 | 0.11087 |
| P-value | 0.9810 | 0.9869 | 0.9810 | 0.9809 | 0.9875 | 0.0000 | 0.9838 | 0.9871 | 0.9827 |
Moment summaries implied by each estimation approach, second data set:

| Approach | E(X) | Var(X) | IOD(X) | Sk(X) | Ku(X) |
|---|---|---|---|---|---|
| MPSE | 29.2936 | 493.2271 | 16.8373 | 1.6096 | 47.9050 |
| ADE | 28.5054 | 467.4808 | 16.3997 | 1.6095 | 47.8417 |
| ME | 27.5341 | 436.7002 | 15.8603 | 1.6095 | 47.7587 |
| MLE | 27.5895 | 438.4270 | 15.8911 | 1.6095 | 47.7636 |
| LSE | 28.9247 | 481.0908 | 16.6325 | 1.6096 | 47.8757 |
| PCE | 28.9856 | 483.0828 | 16.6663 | 1.6096 | 47.8807 |
| CVME | 28.6837 | 473.2432 | 16.4987 | 1.6095 | 47.8562 |
| WLSE | 29.1080 | 487.1033 | 16.7343 | 1.6096 | 47.8904 |
Goodness-of-fit comparison of the HDWC model with the competing discrete models, third data set:

| Statistic | HDWC | DINH | Geo | NBI | DR | DIR | Poi | DBH | DPa |
|---|---|---|---|---|---|---|---|---|---|
| MLE θ | 0.928 | 0.737 | 0.951 | 0.975 | 0.998 | 7.82×10⁻⁷ | 19.550 | 0.998 | 0.696 |
| MLE q | − | 14.798 | − | − | − | − | − | − | − |
| −l | 79.053 | 82.818 | 79.962 | 79.963 | 81.175 | 101.987 | 152.718 | 110.283 | 95.448 |
| AIC | 160.107 | 169.635 | 161.925 | 161.926 | 164.351 | 205.975 | 307.436 | 222.565 | 192.896 |
| CAIC | 160.329 | 170.341 | 162.147 | 162.148 | 164.572 | 206.197 | 307.658 | 222.787 | 193.118 |
| BIC | 161.102 | 171.627 | 162.921 | 162.922 | 165.346 | 206.973 | 308.432 | 223.561 | 193.892 |
| HQIC | 160.301 | 170.024 | 162.119 | 162.120 | 164.544 | 206.169 | 307.630 | 222.759 | 193.090 |
| KS | 0.103 | 0.189 | 0.145 | 0.144 | 0.199 | 0.556 | 0.352 | 0.751 | 0.392 |
| P-value | 0.984 | 0.467 | 0.796 | 0.795 | 0.401 | <0.001 | 0.014 | <0.001 | 0.004 |
Estimates of θ and the resulting Kolmogorov–Smirnov fit under the nine estimation methods, third data set:

| Statistic | MPSE | ADE | ME | MLE | LSE | RADE | PCE | CVME | WLSE |
|---|---|---|---|---|---|---|---|---|---|
| θ | 0.9371 | 0.9373 | 0.9279 | 0.9279 | 0.9319 | 0.9373 | 0.9387 | 0.9315 | 0.9318 |
| KS | 0.0763 | 0.0769 | 0.1026 | 0.1026 | 0.1137 | 0.0769 | 0.0828 | 0.1145 | 0.1138 |
| P-value | 0.9996 | 0.9995 | 0.9843 | 0.9843 | 0.9530 | 0.9995 | 0.9986 | 0.9524 | 0.9529 |
Moment summaries implied by each estimation approach, third data set:

| Approach | E(X) | Var(X) | IOD(X) | Sk(X) | Ku(X) |
|---|---|---|---|---|---|
| MPSE | 22.5892 | 296.2575 | 13.1149 | 1.6093 | 47.2310 |
| ADE | 22.6653 | 298.2128 | 13.1572 | 1.6093 | 47.2409 |
| ME | 19.5450 | 223.3083 | 11.4252 | 1.6091 | 46.7802 |
| MLE | 19.5739 | 223.9526 | 11.4413 | 1.6091 | 46.7851 |
| LSE | 20.7676 | 251.3672 | 12.1038 | 1.6092 | 46.9764 |
| RADE | 22.6653 | 298.2128 | 13.1572 | 1.6093 | 47.2409 |
| PCE | 23.2119 | 312.4469 | 13.4606 | 1.6093 | 47.3094 |
| CVME | 20.6389 | 248.3358 | 12.0323 | 1.6092 | 46.9568 |
| WLSE | 20.7353 | 250.6043 | 12.0858 | 1.6092 | 46.9715 |