    The Teissier distribution, also known as the Muth distribution, was developed by a French scientist named Teissier [1]. It is used as a model for the frequency of death solely due to aging. Due to its exponentially increasing failure rate, it finds applications in various fields such as hydrology, computer code failure rate, longevity, economics, and actuarial sciences. Laurent [2] revisited the distribution, characterizing it based on life expectancy and exploring its potential applications in demographic studies. Muth [3] applied this distribution to reliability analysis. Jodra et al. [4] investigated the statistical properties of the Teissier distribution, renaming it the Muth distribution. Additionally, Jodra et al. [5] proposed a two-parameter extension of the Teissier distribution, referred to as the power Muth distribution, and demonstrated its application to reliability data. Distribution and probability analysis experts have modified or expanded the characteristics of the Teissier distribution to enhance its flexibility and applicability. Recently, Sharma et al. [6] introduced a new two-parameter extension of the Teissier model called the exponentiated Teissier distribution (ExT). This extension incorporates increasing, decreasing, and bathtub-shaped hazard rate functions (HRF).

    Suppose X is a random variable that follows the ExT distribution. Then, the probability density function (PDF) as well as the cumulative distribution function (CDF) of X are expressed, respectively, as

f(x;\gamma,\sigma)=\gamma\sigma\left(e^{\sigma x}-1\right)\exp\left(1+\sigma x-e^{\sigma x}\right)\left[1-\exp\left(1+\sigma x-e^{\sigma x}\right)\right]^{\gamma-1},\quad x>0, \qquad (1.1)

    and

F(x;\gamma,\sigma)=\left[1-\exp\left(1+\sigma x-e^{\sigma x}\right)\right]^{\gamma}, \qquad (1.2)

    where γ>0 and σ>0 denote the shape and scale parameters, respectively. On the other hand, the reliability function (RF) and HRF related to the random variable X are

R(t;\gamma,\sigma)=1-\left[1-\exp\left(1+\sigma t-e^{\sigma t}\right)\right]^{\gamma},\quad t>0,

    and

h(t;\gamma,\sigma)=\frac{\gamma\sigma\left(e^{\sigma t}-1\right)\exp\left(1+\sigma t-e^{\sigma t}\right)\left[1-\exp\left(1+\sigma t-e^{\sigma t}\right)\right]^{\gamma-1}}{1-\left[1-\exp\left(1+\sigma t-e^{\sigma t}\right)\right]^{\gamma}},

    respectively. Figure 1 shows that the ExT's shape parameter γ controls the skewness and flatness of the ExT density and that it is unimodal and flexible enough to accommodate a variety of skewed datasets. It also indicates that the ExT's failure rate has decreasing or bathtub shape (for γ<1) and increasing shape (for γ>1). So, the ExT distribution may describe datasets with decreasing, increasing, and bathtub-shaped HRF shapes, as well as positively-skewed density shapes; for more details see Sharma et al. [6]. Recently, Pasha-Zanoosi [7] studied the problem of reliability evaluation in a multicomponent stress-strength for the ExT distribution. Pasha-Zanoosi [8] considered the reliability of the stress-strength model for the ExT distribution using lower record values.

    Figure 1.  Plots of PDF (left) and HRF (right) functions of ExT distribution.
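For illustration, these four functions can be evaluated in R with a minimal sketch such as the following (the function names dext, pext, rfun, and hfun are ours for illustration; the HRF is computed directly as the PDF/RF ratio):

```r
# Minimal R sketch of the ExT functions in (1.1)-(1.2); the function names
# (dext, pext, rfun, hfun) are illustrative and not taken from any package.
dext <- function(x, gamma, sigma) {
  a <- 1 + sigma * x - exp(sigma * x)               # Teissier exponent
  gamma * sigma * (exp(sigma * x) - 1) * exp(a) * (1 - exp(a))^(gamma - 1)
}
pext <- function(x, gamma, sigma) {
  (1 - exp(1 + sigma * x - exp(sigma * x)))^gamma   # CDF in (1.2)
}
rfun <- function(t, gamma, sigma) 1 - pext(t, gamma, sigma)                      # RF
hfun <- function(t, gamma, sigma) dext(t, gamma, sigma) / rfun(t, gamma, sigma)  # HRF = PDF/RF

curve(dext(x, gamma = 0.5, sigma = 1), 0.01, 5, ylab = "PDF")   # unimodal, right-skewed
curve(hfun(x, gamma = 0.5, sigma = 1), 0.01, 5, ylab = "HRF")   # gamma < 1: decreasing/bathtub per Figure 1
```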

To evaluate a product's reliability, it is critical to begin testing its elements and components early in development. This is especially important for new products with no historical or vendor data. Thus, test results should come from well-designed tests performed under controlled conditions with random samples of items. For more details, see Blischke and Murthy [9]. For highly reliable products, it is not practical for the investigator to wait and gather data on every single tested item. Therefore, researchers often resort to censored samples, where the test is terminated before all items fail. Numerous censoring plans are discussed in the literature. Among them, one of the most commonly employed is progressive Type-II censoring (T2PC). This plan enables the experimenter to remove live items from the experiment according to a predetermined removal pattern. In the progressive Type-II censoring scheme, n units are placed on test, with a predetermined effective sample size m < n and a progressive censoring plan (R_1, \ldots, R_m) fixed in advance. Under this scheme, when the ith failure X_{i:m:n} occurs, R_i randomly selected surviving units, i = 1, \ldots, m-1, are removed from the experiment. Finally, when the time X_{m:m:n} is reached, the test is stopped and all remaining units are withdrawn. For more information on the progressive Type-II censoring scheme, refer to Aggarwala and Balakrishnan [10], Anwar et al. [11], and Lone et al. [12].

In contrast, Kundu and Joarder [13] suggested a progressive Type-I hybrid censoring scheme which uses the same framework as the conventional progressive Type-II censoring scheme, but ends testing at T^* = \min(T, X_{m:m:n}), where T denotes a prespecified time. However, one major disadvantage of this strategy is that the effective sample size is random and may be very small, rendering statistical inference techniques ineffective. Ng et al. [14] proposed adaptive progressive Type-II hybrid censoring (adaptive-T2PHC) as a more flexible approach to overcome this problem. The adaptive-T2PHC plan allows the testing time to exceed the predefined time T, and therefore some of the values R_i, i = 1, \ldots, m-1, may change during the test. If X_{m:m:n} < T, the test terminates at X_{m:m:n} and the usual progressive Type-II censoring plan is applied. Otherwise, no surviving items are withdrawn from the experiment after T, by setting R_{d+1} = R_{d+2} = \cdots = R_{m-1} = 0, where d denotes the number of recorded failures prior to time T. After the final failure X_{m:m:n}, all remaining units are removed, so that R_m^* = n - m - \sum_{i=1}^{d} R_i. This adaptation ensures that the test stops once the required number of failures m is reached, while keeping the overall test time close to the target time T. Several studies have examined this scheme, for instance, Nassar and Abo-Kasem [15], Panahi and Moradi [16], Elshahhat and Nassar [17], Elshahhat et al. [18], Elshahhat et al. [19], and Lv et al. [20], as well as the references cited therein. Given an observed sample from a continuous population under the adaptive-T2PHC plan, the likelihood function can be expressed as

L=C\prod_{i=1}^{m}f(x_{i:m:n})\prod_{i=1}^{d}\left[1-F(x_{i:m:n})\right]^{R_{i}}\left[1-F(x_{m:m:n})\right]^{R_{m}^{*}}, \qquad (1.3)

where C is a constant that does not depend on the parameters.

    Although the ExT distribution has gained increasing relevance in modeling HRF patterns, including increasing, decreasing, and bathtub-shaped trends, no research to date has tackled its estimation challenges in the context of censoring schemes. Existing studies primarily focus on parameter estimation within complete sample scenarios, thereby neglecting cases that involve censored data. Furthermore, these investigations have largely overlooked the assessment of critical reliability metrics, such as the RF and HRF. Additionally, as shown in the real data analysis section, the ExT distribution demonstrates a better fit for actual datasets compared to several well-established statistical models. This further motivates us to develop and complete this work. To the best of our knowledge, this is the first study to examine the estimation of the ExT distribution within the context of censoring plans, as well as to analyze its reliability metrics from both classical and Bayesian perspectives. This study seeks to fill a significant gap in reliability analysis by introducing a comprehensive framework for the estimation of both parameters and reliability metrics of the ExT distribution under an adaptive-T2PHC scheme. Another key motivation for this research is the adoption of the adaptive-T2PHC scheme over other censoring methods. This approach was chosen based on its flexibility in concluding life-testing experiments within a reasonable timeframe, while ensuring a pre-specified number of failures, which is particularly relevant in the context of high-reliability products. Furthermore, this scheme enhances statistical efficiency compared to other censoring plans, such as the progressive Type-I hybrid censoring scheme. In this work, we study the estimations of the ExT distribution using the adaptive-T2PHC data from both classical and Bayesian perspectives. In addition to estimating γ and σ, we also examine the estimations of RF and HRF. Classical estimates are provided through maximum likelihood estimation along with a pair of approximated confidence intervals (ACIs). On the other hand, Bayes estimates are obtained using the Markov chain Monte Carlo (MCMC) method and the squared error (SE) loss function. We consider two types of credible interval ranges: Bayes credible intervals (BCIs) and highest posterior density (HPD) credible intervals. We then utilize the simulation approach to evaluate the effectiveness of the two estimation methods and determine which strategy yields more accurate estimations. We examine two engineering datasets to assess the ExT distribution's ability to align with real-world data, and demonstrate the practical implications of the proposed estimation methods.

    The remaining sections of the article are organized as follows: Section 2 covers the traditional estimations of the parameters, RF and HRF, which include both points and two types of ACIs. Section 3 shows how to simulate samples with the MCMC approach to produce Bayes point estimates, BCIs, and HPD intervals. Section 4 offers a Monte Carlo simulation that compares the results of the various strategies. Section 5 examines two engineering datasets to demonstrate the superiority of the ExT distribution and the usefulness of the methodologies employed in this study. Section 6 presents numerous findings.

    The maximum likelihood approach is used in this section to obtain the maximum likelihood estimates (MLEs) of γ,σ,R(t), and h(t) of the ExT distribution in the presence of adaptive-T2PHC data. The asymptotic traits of the MLEs are then considered in order to obtain the ACIs of these parameters.

Based on an adaptive-T2PHC sample, expressed as x_{1:m:n}<\cdots<x_{d:m:n}<T<x_{d+1:m:n}<\cdots<x_{m:m:n}, selected from an ExT population with censoring plan (R_{1},\ldots,R_{d},0,\ldots,0,R_{m}^{*}), one can write the likelihood function using (1.1)–(1.3), without the constant term, as follows:

L(\gamma,\sigma)=(\gamma\sigma)^{m}e^{\sum_{i=1}^{m}a_{i}}\left[1-\phi_{m}^{\gamma}\right]^{R_{m}^{*}}\prod_{i=1}^{m}\left(e^{\sigma x_{i}}-1\right)\phi_{i}^{\gamma-1}\prod_{i=1}^{d}\left[1-\phi_{i}^{\gamma}\right]^{R_{i}}, \qquad (2.1)

where x_{i}=x_{i:m:n}, a_{i}=1+\sigma x_{i}-e^{\sigma x_{i}}, and \phi_{i}=1-e^{a_{i}}, i=1,\ldots,m. Thus, the natural logarithm of (2.1) is as follows:

\log L(\gamma,\sigma)=m\log(\gamma\sigma)+\sum_{i=1}^{m}a_{i}+\sum_{i=1}^{m}\log\left(e^{\sigma x_{i}}-1\right)+(\gamma-1)\sum_{i=1}^{m}\log(\phi_{i})+\sum_{i=1}^{d}R_{i}\log\left[1-\phi_{i}^{\gamma}\right]+R_{m}^{*}\log\left[1-\phi_{m}^{\gamma}\right]. \qquad (2.2)

    The two normal equations to be solved to obtain the MLEs of γ and σ are given by

\frac{\partial\log L(\gamma,\sigma)}{\partial\gamma}=\frac{m}{\gamma}+\sum_{i=1}^{m}\log(\phi_{i})-\sum_{i=1}^{d}\frac{R_{i}\phi_{i}^{\gamma}\log(\phi_{i})}{1-\phi_{i}^{\gamma}}-\frac{R_{m}^{*}\phi_{m}^{\gamma}\log(\phi_{m})}{1-\phi_{m}^{\gamma}}=0 \qquad (2.3)

    and

\frac{\partial\log L(\gamma,\sigma)}{\partial\sigma}=\frac{m}{\sigma}+\sum_{i=1}^{m}b_{i}x_{i}-\sum_{i=1}^{m}\frac{x_{i}e^{\sigma x_{i}}}{b_{i}}-(\gamma-1)\sum_{i=1}^{m}\frac{x_{i}b_{i}e^{a_{i}}}{\phi_{i}}+\gamma\sum_{i=1}^{d}\frac{R_{i}x_{i}b_{i}e^{a_{i}}\phi_{i}^{\gamma-1}}{1-\phi_{i}^{\gamma}}+\frac{\gamma R_{m}^{*}x_{m}b_{m}e^{a_{m}}\phi_{m}^{\gamma-1}}{1-\phi_{m}^{\gamma}}=0, \qquad (2.4)

where b_{i}=1-e^{\sigma x_{i}}, i=1,\ldots,m.

    The preceding normal equations are clearly not in analytically tractable forms, hence we suggest employing the Newton-Raphson (N-R) iteration approach to obtain the MLEs of γ and σ. By plotting the log-likelihood function for each of the model parameters using the two real datasets, as subsequently indicated, we observe evidence suggesting that the classical estimates may indeed be unique.

    Let ˆγ and ˆσ be the MLEs of γ and σ, respectively. Then, by applying the MLEs' invariance property, we are able to acquire the MLEs of RF and HRF at mission time t, as follows:

\hat{R}(t)=1-\left[1-\exp\left(1+\hat{\sigma}t-e^{\hat{\sigma}t}\right)\right]^{\hat{\gamma}}

    and

\hat{h}(t)=\frac{\hat{\gamma}\hat{\sigma}\left(e^{\hat{\sigma}t}-1\right)\exp\left(1+\hat{\sigma}t-e^{\hat{\sigma}t}\right)\left[1-\exp\left(1+\hat{\sigma}t-e^{\hat{\sigma}t}\right)\right]^{\hat{\gamma}-1}}{1-\left[1-\exp\left(1+\hat{\sigma}t-e^{\hat{\sigma}t}\right)\right]^{\hat{\gamma}}}.

    In conjunction with obtaining the point estimates, we employ the MLEs to produce the confidence intervals for the individual parameters. The ACIs are acquired based on the normal approximation (ACIs-NA) of the MLEs, as well as the normal approximation of the log-transformed (ACIs-NL) of the MLEs. In this case, we estimate the needed variances using the observed Fisher information (FI) matrix. To obtain the required variances, we must first obtain the second derivatives listed below:

\frac{\partial^{2}\log L(\gamma,\sigma)}{\partial\gamma^{2}}=-\frac{m}{\gamma^{2}}-\sum_{i=1}^{d}\frac{R_{i}\log^{2}(\phi_{i})\phi_{i}^{\gamma}}{\left(1-\phi_{i}^{\gamma}\right)^{2}}-\frac{R_{m}^{*}\log^{2}(\phi_{m})\phi_{m}^{\gamma}}{\left(1-\phi_{m}^{\gamma}\right)^{2}},

    and

\frac{\partial^{2}\log L(\gamma,\sigma)}{\partial\sigma^{2}}=-\frac{m}{\sigma^{2}}-\sum_{i=1}^{m}x_{i}^{2}e^{\sigma x_{i}}-\sum_{i=1}^{m}\frac{x_{i}^{2}e^{\sigma x_{i}}\left(b_{i}+e^{\sigma x_{i}}\right)}{b_{i}^{2}}-(\gamma-1)\sum_{i=1}^{m}\frac{x_{i}^{2}e^{a_{i}}\left[b_{i}^{2}-\phi_{i}e^{\sigma x_{i}}\right]}{\phi_{i}^{2}}+\gamma\sum_{i=1}^{d}\frac{R_{i}x_{i}^{2}e^{a_{i}}\phi_{i}^{\gamma-1}w_{i}}{1-\phi_{i}^{\gamma}}+\frac{\gamma R_{m}^{*}x_{m}^{2}e^{a_{m}}\phi_{m}^{\gamma-1}w_{m}}{1-\phi_{m}^{\gamma}}

    and

\frac{\partial^{2}\log L(\gamma,\sigma)}{\partial\gamma\partial\sigma}=-\sum_{i=1}^{m}\frac{x_{i}b_{i}e^{a_{i}}}{\phi_{i}}+\sum_{i=1}^{d}\frac{R_{i}x_{i}b_{i}e^{a_{i}}\phi_{i}^{\gamma-1}v_{i}}{\left(1-\phi_{i}^{\gamma}\right)^{2}}+\frac{R_{m}^{*}x_{m}b_{m}e^{a_{m}}\phi_{m}^{\gamma-1}v_{m}}{\left(1-\phi_{m}^{\gamma}\right)^{2}},

where w_{i}=b_{i}^{2}-(\gamma-1)b_{i}^{2}e^{a_{i}}\phi_{i}^{-1}-e^{\sigma x_{i}}-\frac{\gamma b_{i}^{2}e^{a_{i}}\phi_{i}^{\gamma-1}}{1-\phi_{i}^{\gamma}} and v_{i}=1+\gamma\log(\phi_{i})-\phi_{i}^{\gamma}. Based on these quantities, we can write the asymptotic variance-covariance matrix, denoted by I^{-1}(\hat{\gamma},\hat{\sigma}), as follows:

I^{-1}(\hat{\gamma},\hat{\sigma})=\begin{pmatrix}-\frac{\partial^{2}\log L(\gamma,\sigma)}{\partial\gamma^{2}} & -\frac{\partial^{2}\log L(\gamma,\sigma)}{\partial\gamma\partial\sigma}\\ -\frac{\partial^{2}\log L(\gamma,\sigma)}{\partial\sigma\partial\gamma} & -\frac{\partial^{2}\log L(\gamma,\sigma)}{\partial\sigma^{2}}\end{pmatrix}^{-1}_{(\gamma,\sigma)=(\hat{\gamma},\hat{\sigma})}=\begin{pmatrix}\hat{V}(\hat{\gamma}) & \hat{C}(\hat{\gamma},\hat{\sigma})\\ \hat{C}(\hat{\sigma},\hat{\gamma}) & \hat{V}(\hat{\sigma})\end{pmatrix}, \qquad (2.5)

    where the main diagonal elements represent the required estimated variances of the MLEs of γ and σ, respectively.

Before proceeding, it is important to note that while the Fisher information (FI) matrix plays a crucial role in statistical inference theory, calculating it exactly for the proposed sampling scheme is tedious and computationally challenging. To derive the exact expectations of the second derivatives, one needs the probability mass function of the discrete random variable d and the marginal distribution of X_{i:m:n}. To simplify the process, we follow the approach of Balakrishnan et al. [21] and approximate the FI matrix by the observed information matrix evaluated at the MLEs of \gamma and \sigma. Based on the asymptotic normality of the MLEs, (\hat{\gamma},\hat{\sigma})\sim N\left((\gamma,\sigma),I^{-1}(\hat{\gamma},\hat{\sigma})\right), we can construct the required ACIs. Let z_{\alpha/2} be the upper (\alpha/2)th percentile point of the standard normal distribution. Then, the 100(1-\alpha)\% ACIs-NA of \gamma and \sigma can be obtained, respectively, as

\hat{\gamma}\pm z_{\alpha/2}\sqrt{\hat{V}(\hat{\gamma})}\quad\text{and}\quad\hat{\sigma}\pm z_{\alpha/2}\sqrt{\hat{V}(\hat{\sigma})}.

On the other hand, such ACIs-NA for the RF and HRF are also of interest. To construct them, we must first determine the variances of their MLEs. In our scenario, we use the so-called delta method to approximate the required variances. Based on the asymptotic variance-covariance matrix in (2.5), the approximate estimated variances of the MLEs of R(t) and h(t) can be derived as follows:

\hat{V}_{R}\simeq\left(\frac{\partial R(t)}{\partial\gamma},\frac{\partial R(t)}{\partial\sigma}\right)I^{-1}(\hat{\gamma},\hat{\sigma})\left(\frac{\partial R(t)}{\partial\gamma},\frac{\partial R(t)}{\partial\sigma}\right)^{\top} \qquad (2.6)

and

\hat{V}_{h}\simeq\left(\frac{\partial h(t)}{\partial\gamma},\frac{\partial h(t)}{\partial\sigma}\right)I^{-1}(\hat{\gamma},\hat{\sigma})\left(\frac{\partial h(t)}{\partial\gamma},\frac{\partial h(t)}{\partial\sigma}\right)^{\top}, \qquad (2.7)

    where the estimated variances in (2.6) and (2.7) are computed based on ˆγ and ˆσ, and

\frac{\partial R(t)}{\partial\gamma}=-\left[1-\exp\left(1+\sigma t-e^{\sigma t}\right)\right]^{\gamma}\log\left[1-\exp\left(1+\sigma t-e^{\sigma t}\right)\right],
\frac{\partial R(t)}{\partial\sigma}=\gamma t\left(1-e^{\sigma t}\right)\exp\left(1+\sigma t-e^{\sigma t}\right)\left[1-\exp\left(1+\sigma t-e^{\sigma t}\right)\right]^{\gamma-1},
\frac{\partial h(t)}{\partial\gamma}=\frac{\sigma\left(e^{\sigma t}-1\right)\exp\left(1+\sigma t-e^{\sigma t}\right)\left[1-\exp\left(1+\sigma t-e^{\sigma t}\right)\right]^{\gamma-1}\left\{1-\left[1-\exp\left(1+\sigma t-e^{\sigma t}\right)\right]^{\gamma}+\gamma\log\left[1-\exp\left(1+\sigma t-e^{\sigma t}\right)\right]\right\}}{\left(1-\left[1-\exp\left(1+\sigma t-e^{\sigma t}\right)\right]^{\gamma}\right)^{2}}

and

\frac{\partial h(t)}{\partial\sigma}=h(t;\gamma,\sigma)\left[\frac{1}{\sigma}+\frac{t e^{\sigma t}}{e^{\sigma t}-1}-t\left(e^{\sigma t}-1\right)+\frac{(\gamma-1)\,t\left(e^{\sigma t}-1\right)\exp\left(1+\sigma t-e^{\sigma t}\right)}{1-\exp\left(1+\sigma t-e^{\sigma t}\right)}+\frac{\gamma t\left(e^{\sigma t}-1\right)\exp\left(1+\sigma t-e^{\sigma t}\right)\left[1-\exp\left(1+\sigma t-e^{\sigma t}\right)\right]^{\gamma-1}}{1-\left[1-\exp\left(1+\sigma t-e^{\sigma t}\right)\right]^{\gamma}}\right].

Thus, the ACIs-NA for the RF and HRF are as follows:

\hat{R}(t)\pm z_{\alpha/2}\sqrt{\hat{V}_{R}}\quad\text{and}\quad\hat{h}(t)\pm z_{\alpha/2}\sqrt{\hat{V}_{h}}.

    It is crucial to note that obtaining the ACIs-NA may result in negative lower bounds. In such cases, the ACIs-NL might be employed to prevent this problem. The ACIs-NL can be obtained for any parameter, such as ς, as follows:

\hat{\varsigma}\exp\left(\pm\frac{z_{\alpha/2}\sqrt{\hat{V}(\hat{\varsigma})}}{\hat{\varsigma}}\right).

It is useful to mention here that the N-R iterative method, via the maxLik package (by Henningsen and Toomet [22]) in R, is recommended to calculate the MLEs of \gamma, \sigma, R(t), and h(t), in addition to their 100(1-\alpha)\% ACI-NA/ACI-NL estimates, once the information on n, m, T, and R is available.
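As an illustration of this workflow, a minimal R sketch (not the authors' code) is given below. It assumes an adaptive-T2PHC sample stored in x with censoring plan R, d failures observed before T, and final removal Rstar, and it reuses rfun and hfun from the earlier sketch to obtain the delta-method intervals in (2.6) and (2.7) with numerical gradients.

```r
# Minimal sketch (not the authors' code): MLEs of (gamma, sigma), 95% ACI-NA/ACI-NL,
# and delta-method variances for R(t) and h(t), assuming an adaptive-T2PHC sample
# `x` (ordered), plan `R`, failures `d` before T, and final removal `Rstar`.
library(maxLik)

loglik <- function(par, x, R, d, Rstar) {           # Eq. (2.2)
  g <- par[1]; s <- par[2]
  if (g <= 0 || s <= 0) return(-Inf)
  m   <- length(x)
  a   <- 1 + s * x - exp(s * x)
  phi <- 1 - exp(a)
  sum_d <- if (d >= 1) sum(R[1:d] * log(1 - phi[1:d]^g)) else 0
  m * log(g * s) + sum(a) + sum(log(exp(s * x) - 1)) +
    (g - 1) * sum(log(phi)) + sum_d + Rstar * log(1 - phi[m]^g)
}

fit <- maxLik(loglik, start = c(gamma = 1, sigma = 1), method = "NR",
              x = x, R = R, d = d, Rstar = Rstar)
est <- coef(fit)
vc  <- vcov(fit)                                    # estimate of I^{-1}, Eq. (2.5)
se  <- sqrt(diag(vc))
z   <- qnorm(0.975)
aci_na <- cbind(lower = est - z * se, upper = est + z * se)          # ACI-NA
aci_nl <- cbind(lower = est * exp(-z * se / est),
                upper = est * exp( z * se / est))                    # ACI-NL

# Delta-method variances (2.6)-(2.7) via numerical gradients at the MLEs:
t0 <- 0.1
num_grad <- function(f, par, eps = 1e-6) {
  sapply(seq_along(par), function(j) {
    e <- numeric(length(par)); e[j] <- eps
    (f(par + e) - f(par - e)) / (2 * eps)
  })
}
gR <- num_grad(function(p) rfun(t0, p[1], p[2]), est)
gh <- num_grad(function(p) hfun(t0, p[1], p[2]), est)
VR <- drop(t(gR) %*% vc %*% gR)
Vh <- drop(t(gh) %*% vc %*% gh)
rfun(t0, est[1], est[2]) + c(-1, 1) * z * sqrt(VR)   # ACI-NA for R(t0)
hfun(t0, est[1], est[2]) + c(-1, 1) * z * sqrt(Vh)   # ACI-NA for h(t0)
```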

    Bayesian estimation is concerned with revising past beliefs about the unknown parameters as a result of new information. All parameters in the Bayesian procedure are treated as random variables with certain distributions known as the prior distributions. In this section, we consider the Bayesian estimation of the model parameters, as well as the RF and HRF. The Bayes estimates are acquired based on the SE loss function and utilizing the assumption that the parameters γ and σ are independent and follow a gamma distribution. Moreover, two Bayes credible intervals are computed. According to the independent gamma priors, one can write the joint prior distribution of γ and σ as

p(\gamma,\sigma)\propto\gamma^{\theta_{1}-1}\sigma^{\theta_{2}-1}e^{-(\beta_{1}\gamma+\beta_{2}\sigma)},\quad\gamma,\sigma>0, \qquad (3.1)

where \theta_{i},\beta_{i}>0, i=1,2. Gamma priors were selected for their alignment with the model parameters, ensuring logical consistency in Bayesian inference. They facilitate computational efficiency when utilizing MCMC sampling, thereby circumventing convergence issues often encountered with alternative priors. Furthermore, the gamma hyperparameters (shape and rate) provide flexibility in encoding prior knowledge. The use of gamma priors optimally balances computational stability and interpretability, rendering them particularly well-suited for our application. The joint posterior distribution of \gamma and \sigma can be obtained by combining the likelihood function in (2.1) with the joint prior distribution in (3.1) as follows:

\psi(\gamma,\sigma|\underline{x})=\frac{\gamma^{m+\theta_{1}-1}\sigma^{m+\theta_{2}-1}}{A}e^{\sum_{i=1}^{m}a_{i}-(\beta_{1}\gamma+\beta_{2}\sigma)}\left[1-\phi_{m}^{\gamma}\right]^{R_{m}^{*}}\prod_{i=1}^{m}\left(e^{\sigma x_{i}}-1\right)\phi_{i}^{\gamma-1}\prod_{i=1}^{d}\left[1-\phi_{i}^{\gamma}\right]^{R_{i}}, \qquad (3.2)

where \underline{x} is the vector of the observed data and A is the normalizing constant, expressed as

A=\int_{0}^{\infty}\int_{0}^{\infty}\gamma^{m+\theta_{1}-1}\sigma^{m+\theta_{2}-1}e^{\sum_{i=1}^{m}a_{i}-(\beta_{1}\gamma+\beta_{2}\sigma)}\left[1-\phi_{m}^{\gamma}\right]^{R_{m}^{*}}\prod_{i=1}^{m}\left(e^{\sigma x_{i}}-1\right)\phi_{i}^{\gamma-1}\prod_{i=1}^{d}\left[1-\phi_{i}^{\gamma}\right]^{R_{i}}\,d\gamma\,d\sigma.

    According to (3.2), the marginal posterior distributions of γ and σ cannot be acquired analytically. As a result, obtaining the Bayes estimators for the various parameters is impossible. The reason for this is that the integrals in (3.2) are typically challenging to compute analytically, and numerical techniques may not succeed. The MCMC method is an alternate approach for estimating parameters in such scenarios. We suggest taking advantage of the MCMC technique to derive the required Bayes estimates of the various parameters, and to create the appropriate credible intervals.

    To estimate the parameters of the ExT distribution through the MCMC technique, we must first derive what are known as full conditional distributions of the unknown parameters γ and σ, as follows:

\psi_{1}(\gamma|\sigma,\underline{x})\propto\gamma^{m+\theta_{1}-1}e^{-\beta_{1}\gamma}\left[1-\phi_{m}^{\gamma}\right]^{R_{m}^{*}}\prod_{i=1}^{m}\left(e^{\sigma x_{i}}-1\right)\phi_{i}^{\gamma-1}\prod_{i=1}^{d}\left[1-\phi_{i}^{\gamma}\right]^{R_{i}}, \qquad (3.3)

    and

\psi_{2}(\sigma|\gamma,\underline{x})\propto\sigma^{m+\theta_{2}-1}e^{\sum_{i=1}^{m}a_{i}-\beta_{2}\sigma}\left[1-\phi_{m}^{\gamma}\right]^{R_{m}^{*}}\prod_{i=1}^{m}\left(e^{\sigma x_{i}}-1\right)\phi_{i}^{\gamma-1}\prod_{i=1}^{d}\left[1-\phi_{i}^{\gamma}\right]^{R_{i}}. \qquad (3.4)

It is clear that the conditional posterior distributions presented in (3.3) and (3.4) cannot be analytically simplified to well-known distributions. Consequently, conventional sampling methods are ineffective for generating samples from these distributions. Without loss of generality, by drawing an adaptive-T2PHC sample from the ExT model at (\gamma,\sigma)=(1,2), with (n,m,T)=(100,50,0.5), R_{i}=1, i=1,2,\ldots,m, and (\theta_{1},\theta_{2},\beta_{1},\beta_{2})=(5,10,5,5), we can plot the conditional distributions as shown in Figure 2. The proposed ExT parameter values are specified arbitrarily for illustrative purposes, whereas the hyperparameter values of \theta_{i} and \beta_{i} (for i=1,2) are specified using the hyperparameter elicitation idea suggested by Kundu [23]. In addition, we have tested multiple samples using various choices of parameter values, and the resulting plots generally align well with the pattern shown in Figure 2. It is important to mention here that an apparent departure from normality in the full conditional distribution of a parameter, such as the presence of skewness, multimodality, or heavy tails, can have significant implications for both the accuracy of inference and the reliability of MCMC sampling methods. In the next sections, several diagnostic tools are employed to ensure that the MCMC samples accurately represent the target posterior distribution, and to assess the potential impact of non-normal full conditionals. The results indicate that the full conditional distributions (3.3) and (3.4) for \gamma and \sigma, respectively, resemble Gaussian distributions. Therefore, we utilize the Metropolis-Hastings (MH) algorithm with a normal proposal distribution to generate random samples from these distributions. The symmetry of the normal distribution simplifies the calculation of acceptance probabilities in the MH algorithm. Furthermore, generating proposals from a normal distribution is both computationally efficient and numerically stable, making it a practical choice for implementation.

    Figure 2.  Conditional distribution curves of γ (left) and σ (right).

    The MH algorithm is a general type of MCMC method used to generate random samples from complex target distributions of any dimension. The MH algorithm is indispensable for Bayesian estimation in complex models like the ExT distribution under the adaptive-T2PHC plan. By iteratively proposing and accepting/rejecting parameter values, it constructs a Markov chain that asymptotically samples from the posterior. Presently, we suggest the following strategy for generating γ and σ from (3.3) and (3.4), respectively, and obtaining the Bayes estimates as well as the credible intervals.

Step 1. Set l=1.

Step 2. Use the MLEs as initial values by setting (\gamma^{(0)},\sigma^{(0)})=(\hat{\gamma},\hat{\sigma}).

Step 3. Simulate a proposal \gamma^{(l)} from \psi_{1}(\gamma|\sigma^{(l-1)},\underline{x}) given by (3.3) employing the MH steps:

● Generate \gamma^{*} from N(\hat{\gamma},\hat{V}(\hat{\gamma})).

● Calculate the acceptance probability (AP):

AP_{\gamma}=\min\left(1,\frac{\psi_{1}(\gamma^{*}|\sigma^{(l-1)},\underline{x})}{\psi_{1}(\gamma^{(l-1)}|\sigma^{(l-1)},\underline{x})}\right).

● Accept/reject: set \gamma^{(l)}=\gamma^{*} with probability AP_{\gamma}; otherwise set \gamma^{(l)}=\gamma^{(l-1)}.

Step 4. In a similar manner, repeat Step 3 to get a proposal \sigma^{(l)} from \psi_{2}(\sigma|\gamma^{(l)},\underline{x}) given by (3.4).

Step 5. At each generated pair (\gamma^{(l)},\sigma^{(l)}), compute R^{(l)}(t) and h^{(l)}(t).

Step 6. Set l=l+1.

Step 7. Terminate when l=M, where M is the total number of repetitions needed. As a result, we have the following series:

\left[\gamma^{(l)},\sigma^{(l)},R^{(l)}(t),h^{(l)}(t)\right],\quad l=1,\ldots,M.

Let the unknown parameter to be estimated be denoted by \delta. Then the Bayes estimate of \delta under the SE loss function, after a burn-in period M^{*}, is expressed as

\tilde{\delta}=\frac{1}{B}\sum_{l=M^{*}+1}^{M}\delta^{(l)},\quad B=M-M^{*}.

After sorting the generated sample of \delta^{(l)} as \left(\delta_{(M^{*}+1)},\ldots,\delta_{(M)}\right), one can get the 100(1-\alpha)\% BCI of \delta as follows:

\left[\delta_{(\alpha B/2)},\;\delta_{((1-\alpha/2)B)}\right].

Moreover, the 100(1-\alpha)\% HPD credible interval of \delta can be obtained as

\left[\delta_{(l^{*})},\;\delta_{(l^{*}+[(1-\alpha)B])}\right],

where l^{*} is determined by

\delta_{(l^{*}+[(1-\alpha)B])}-\delta_{(l^{*})}=\min_{1\le l\le\alpha B}\left(\delta_{(l+[(1-\alpha)B])}-\delta_{(l)}\right),

where [\cdot] denotes the largest integer that does not exceed its argument.

Once the information on n, m, T, and R is available, it is recommended to calculate the Bayes estimates of \gamma, \sigma, R(t), and h(t), in addition to their 100(1-\alpha)\% BCI/HPD interval estimates, via the 'coda' package (by Plummer et al. [24]) in R.
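A minimal R sketch of Steps 1–7 and of the SE-loss estimates with their 95% HPD intervals is given below (not the authors' code); it reuses loglik, est, and vc from the maximum likelihood sketch, and the hyperparameter values (5, 10, 5, 5) correspond to the illustrative Prior-1 choice mentioned above.

```r
# Minimal sketch (not the authors' code) of the MH sampler and the SE-loss Bayes
# estimates with 95% HPD intervals; reuses loglik(), `est`, and `vc` defined earlier.
library(coda)

log_post <- function(g, s, th1, th2, b1, b2, ...) {          # kernel of Eq. (3.2)
  loglik(c(g, s), ...) + (th1 - 1) * log(g) - b1 * g + (th2 - 1) * log(s) - b2 * s
}

M <- 12000; burn <- 2000
draws <- matrix(NA, M, 2, dimnames = list(NULL, c("gamma", "sigma")))
cur <- est                                                   # Step 2: start at the MLEs
for (l in 1:M) {
  for (j in 1:2) {                                           # Steps 3-4: gamma then sigma
    prop    <- cur
    prop[j] <- rnorm(1, est[j], sqrt(vc[j, j]))              # normal proposal
    if (prop[j] > 0) {
      lr <- log_post(prop[1], prop[2], 5, 10, 5, 5, x = x, R = R, d = d, Rstar = Rstar) -
            log_post(cur[1],  cur[2],  5, 10, 5, 5, x = x, R = R, d = d, Rstar = Rstar)
      if (log(runif(1)) < lr) cur <- prop                    # accept with probability AP
    }
  }
  draws[l, ] <- cur
}

post    <- as.mcmc(draws[(burn + 1):M, ])
R_draws <- rfun(0.1, post[, 1], post[, 2])                   # Step 5: R^(l)(t) and h^(l)(t) at t = 0.1
h_draws <- hfun(0.1, post[, 1], post[, 2])
colMeans(post); mean(R_draws); mean(h_draws)                 # Bayes estimates under SE loss
HPDinterval(post, prob = 0.95)                               # 95% HPD credible intervals
```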

To emphasize the precise performance of the proposed point and interval theoretical results for \gamma, \sigma, R(t), and h(t), extensive Monte Carlo simulations based on 1,000 adaptive-T2PHC samples from ExT(1,2) are carried out (see Algorithm 1). To evaluate the acquired estimators of the reliability indices R(t) and h(t) at t=0.1, their plausible values are taken as 0.97882 and 0.44281, respectively. All adaptive-T2PHC samples are collected using T (= 0.5, 1.5) and n (= 40, 80), as well as several options of m and R. Since the examination stops when the number of failed subjects reaches the preassigned m, the failure percentage (FP), (m/n)\times100\%, is taken as 50 and 80\% for each n. To assess the effect of the progressive pattern R, we consider the following schemes:

Scheme-1: R_{1}=n-m, R_{i}=0 for i\neq 1;
Scheme-2: R_{m/2}=n-m, R_{i}=0 for i\neq m/2;
Scheme-3: R_{m}=n-m, R_{i}=0 for i\neq m.
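For reference, these removal patterns can be generated with a small illustrative helper (the function name is ours):

```r
# Illustrative helper that builds the three removal patterns for given n and m:
# all n - m removals placed at the first, middle, or last failure.
scheme <- function(n, m, which = 1) {
  R <- rep(0, m)
  R[c(1, m %/% 2, m)[which]] <- n - m
  R
}
scheme(40, 20, 1)   # Scheme-1: R1 = n - m and Ri = 0 otherwise
```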

Algorithm 1 Generation steps of an adaptive-T2PHC sample.
1: Input: Assign the values of ExT(\gamma,\sigma)
2: Input: Assign the levels of T, n, and m
3: Input: Generate m independent observations g_{1},g_{2},\ldots,g_{m} from the U(0,1) distribution
4: Input: Set \tau_{i}=g_{i}^{1/(i+\sum_{j=m-i+1}^{m}R_{j})}, i=1,2,\ldots,m
5: Input: Let u_{i}=1-\tau_{m}\tau_{m-1}\cdots\tau_{m-i+1} for i=1,2,\ldots,m
6: Input: Obtain u_{i}, i=1,2,\ldots,m, which is a conventional T2PC sample of size m from the U(0,1) distribution
7: Output: Get X_{i}=F^{-1}(u_{i};\gamma,\sigma), i=1,2,\ldots,m
8: Output: Determine d, the number of failures recorded before T
9: Output: Get X_{i} for i=d+2,\ldots,m as the first m-d-1 order statistics of a sample of size n-d-1-\sum_{i=1}^{d}R_{i} drawn from f(x)\left[1-F(x_{d+1})\right]^{-1}
10: Find R_{m}^{*} as
11: if T<X_{m} then
12:  Set R_{m}^{*}=n-m-\sum_{i=1}^{d}R_{i}
13: end if
14: if T\geq X_{m} then
15:  Set R_{m}^{*}=n-m-\sum_{i=1}^{m-1}R_{i}
16: end if
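A minimal R sketch of Algorithm 1 (not the authors' implementation) is given below; it reuses pext from the first sketch and inverts (1.2) numerically to obtain quantiles.

```r
# Minimal R sketch of Algorithm 1 (not the authors' implementation).
qext <- function(u, gamma, sigma) {
  sapply(u, function(p) uniroot(function(x) pext(x, gamma, sigma) - p,
                                lower = 1e-8, upper = 1e3)$root)
}

rext_adaptive <- function(n, m, T0, R, gamma, sigma) {
  g   <- runif(m)                                       # steps 3-6: T2PC uniform sample
  tau <- g^(1 / (1:m + cumsum(rev(R))))
  u   <- 1 - cumprod(rev(tau))
  x   <- qext(u, gamma, sigma)                          # step 7
  d   <- sum(x < T0)                                    # step 8
  if (d < m - 1) {                                      # step 9: regenerate x_{d+2},...,x_m
    k  <- n - d - 1 - sum(R[seq_len(d)])                # from ExT left-truncated at x_{d+1}
    Fd <- pext(x[d + 1], gamma, sigma)
    y  <- qext(Fd + (1 - Fd) * runif(k), gamma, sigma)
    x[(d + 2):m] <- sort(y)[1:(m - d - 1)]
  }
  Rstar <- if (T0 < x[m]) {
    n - m - sum(R[seq_len(d)])                          # steps 10-13
  } else {
    n - m - sum(R[1:(m - 1)])                           # steps 14-16
  }
  list(x = x, d = d, Rstar = Rstar)
}

# e.g., one sample under Scheme-1 of the simulation design:
# rext_adaptive(n = 40, m = 20, T0 = 0.5, R = scheme(40, 20, 1), gamma = 1, sigma = 2)
```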

To illustrate the behavior of the gamma priors, we consider two (informative) sets of the unknown hyperparameters \theta_{i} and \beta_{i} for i=1,2, namely: Prior 1: (\theta_{1},\theta_{2})=(5,10) and \beta_{i}=5 for i=1,2; Prior 2: (\theta_{1},\theta_{2})=(10,20) and \beta_{i}=10 for i=1,2.

    Following the procedure reported by Kundu [23], the values in Prior 1 and 2 are selected in such a way that the prior mean becomes the expected value of the model parameter. It is clear that, when θi=βi=0, i=1,2, the posterior distribution is proportional to the corresponding likelihood function. Therefore, if one does not have prior information on ExT(γ,σ) parameters, it is better to use the frequentist estimates instead of the Bayesian estimates because the latter are computationally more expensive.

    Once 1,000 samples of Adaptive-T2PHC are collected, the MLEs along with their ACI-NA and ACI-NL estimates of γ, σ, R(t), and h(t) are developed. Following Section 3, we ignore the first 2,000 from the full 12,000 MCMC variates. Next, the Bayes' estimates along with their 95% BCI and HPD intervals of γ, σ, R(t), and h(t) are obtained. Here, the frequentist estimates are utilized as starting points to implement the MCMC mechanism. After installing the two recommended packages in the R 4.2.2 software, namely the 'maxLik' and 'coda' packages, all acquired maximum likelihood and Bayes estimates along with their 95% ACI-NA, ACI-NL, BCI, and HPD interval estimates are calculated.

Assessing convergence, mixing, and the validity of the MCMC procedure plays a critical role in verifying that the MCMC samples provide a reliable representation of the target posterior distribution. Here, three diagnostics are considered, namely: (i) autocorrelation, (ii) Brooks-Gelman-Rubin (BGR), and (iii) trace plots; they are shown in Figure 3. These plots are obtained using the simulated Markov draws when (T,n,m)=(0.5,40,20), Scheme-1, and Prior-1. Figure 3 indicates that the autocorrelation values become quite close to zero as the lag grows. This result is evidence that the collected MCMC iterations of \gamma, \sigma, R(t), and h(t) are essentially uncorrelated. Also, Figure 3 demonstrates that removing the first 2,000 iterations of each chain as burn-in is sufficient to handle the autocorrelation problem. Additionally, in Figure 3, every 5th iteration is taken as thinning (sub-sampling) to check whether the acquired MCMC variates are independent. It shows that the acquired MCMC samples of \gamma, \sigma, R(t), and h(t) are mixed appropriately.

    Figure 3.  Autocorrelation (top), BGR (center), trace and its Gaussian kernel (bottom) plots from ExT distribution using Adaptive-T2PHC data in Monte Carlo simulation.
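In terms of code, the three diagnostics can be produced with the coda package roughly as follows, applied to the post object from the sampler sketch of Section 3 (the BGR statistic additionally requires at least two parallel chains):

```r
# Sketch of the three MCMC diagnostics applied to the `post` object defined earlier.
library(coda)
autocorr.plot(post)                     # (i) autocorrelation against lag
traceplot(post)                         # (iii) trace plots after burn-in
densplot(post)                          # Gaussian kernel density of each parameter
post_thin <- window(post, thin = 5)     # keep every 5th draw, as in Figure 3
# gelman.diag(mcmc.list(chain1, chain2))  # (ii) BGR, given two chains chain1 and chain2
```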

    Specifically, for each setup, the average estimates (Av.Es) of γ (for instance) are calculated as

\overline{\hat{\gamma}}=\frac{1}{1000}\sum_{i=1}^{1000}\hat{\gamma}^{(i)},

where \hat{\gamma}^{(i)} denotes the estimate of \gamma obtained from the ith sample.

    The acquired point estimates of γ are compared based on their root mean squared-errors (RMSEs) and mean relative absolute biases (MRABs) as

\text{RMSE}(\hat{\gamma})=\sqrt{\frac{1}{1000}\sum_{i=1}^{1000}\left(\hat{\gamma}^{(i)}-\gamma\right)^{2}},

    and

\text{MRAB}(\hat{\gamma})=\frac{1}{1000}\sum_{i=1}^{1000}\frac{1}{\gamma}\left|\hat{\gamma}^{(i)}-\gamma\right|,

    respectively.

    Additionally, the provided 95% interval estimates of γ are contrasted in terms of their coverage percentages (CPs) and average interval lengths (AILs) as

\text{CP}^{95\%}(\gamma)=\frac{1}{1000}\sum_{i=1}^{1000}{\mathit{\boldsymbol{1}}}^{\circledast}_{\left(\mathcal{L}_{\hat{\gamma}^{(i)}};\,\mathcal{U}_{\hat{\gamma}^{(i)}}\right)}\left(\gamma\right)

    and

    \begin{equation*} \text{AIL}^{95\%}(\gamma) = \frac{1}{1000}\sum\nolimits_{i = 1}^{1000}{\left({\mathcal{U}_{\hat{\gamma}^{(i)}}}-{\mathcal{L}_{\hat{\gamma}^{(i)}}}\right)}, \end{equation*}

respectively, where {\mathit{\boldsymbol{1}}}^{\circledast}(\cdot) is the indicator function. The Av.Es, RMSEs, and MRABs of \gamma , \sigma , R(t) , and h(t) (presented in Tables 1-4) are tabulated in the first, second, and third columns, respectively. The AILs and CPs (presented in Tables 5-8) are tabulated in the first and second columns, respectively. For brevity, we report all Monte Carlo results for \gamma , \sigma , R(t) , and h(t) when n = 40 , while the full results are provided in the supplementary file.
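These summaries reduce to a few lines of R; the sketch below uses assumed object names (ests for the 1,000 estimates of a quantity with true value truth, and lo/hi for the corresponding 95% bounds):

```r
# Illustrative sketch with assumed object names: `ests` holds the 1000 estimates of a
# quantity with true value `truth`; `lo` and `hi` hold the matching 95% bounds.
av_e <- mean(ests)                          # average estimate (Av.E)
rmse <- sqrt(mean((ests - truth)^2))        # root mean squared error
mrab <- mean(abs(ests - truth) / truth)     # mean relative absolute bias
cp   <- mean(lo <= truth & truth <= hi)     # coverage percentage
ail  <- mean(hi - lo)                       # average interval length
```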

    Table 1.  Point evaluations of \gamma .
T | n[FP%] | Scheme | ML: Av.E, RMSE, MRAB | MCMC Prior 1: Av.E, RMSE, MRAB | MCMC Prior 2: Av.E, RMSE, MRAB
    0.5 40[50%] 1 1.3643 0.4205 0.3661 1.2692 0.3602 0.3058 1.0522 0.2995 0.2178
    2 1.4874 0.5268 0.4874 1.4140 0.4916 0.4257 1.0497 0.3097 0.2281
    3 1.5020 0.5541 0.5024 1.4535 0.5330 0.4804 0.6143 0.3971 0.3871
    40[80%] 1 0.6961 0.3183 0.3039 1.0336 0.2474 0.1855 0.9886 0.2072 0.1629
    2 0.6801 0.3343 0.3109 1.2405 0.3155 0.2632 1.0351 0.2532 0.1921
    3 0.5972 0.4158 0.4029 0.6611 0.3544 0.3791 1.3098 0.3489 0.3201
    1.5 40[50%] 1 1.3427 0.4018 0.3587 1.3847 0.3133 0.2174 1.0624 0.2419 0.1820
    2 1.4155 0.4806 0.4278 1.2474 0.3455 0.2919 1.0490 0.2671 0.2007
    3 1.5306 0.5852 0.5306 1.3492 0.4424 0.3881 1.0382 0.2713 0.2130
    40[80%] 1 1.3440 0.3744 0.3440 1.1164 0.2123 0.1706 1.1101 0.1878 0.1520
    2 1.4076 0.4374 0.4076 1.1892 0.2455 0.2061 1.1323 0.2105 0.1708
    3 1.4260 0.4561 0.4260 1.2423 0.2945 0.2516 1.1436 0.2451 0.1814

    Table 2.  Point evaluations of \sigma .
T | n[FP%] | Scheme | ML: Av.E, RMSE, MRAB | MCMC Prior 1: Av.E, RMSE, MRAB | MCMC Prior 2: Av.E, RMSE, MRAB
    0.5 40[50%] 1 2.1673 0.3673 0.1640 2.2012 0.2539 0.1061 2.0989 0.2111 0.0863
    2 1.8858 0.5728 0.2706 2.3121 0.3453 0.1566 2.0903 0.3143 0.1354
    3 1.9453 0.6618 0.3774 2.3352 0.3791 0.1765 2.3518 0.3696 0.1708
    40[80%] 1 2.2525 0.2791 0.1282 1.9110 0.1955 0.0814 2.0107 0.1575 0.0622
    2 2.1523 0.3121 0.1358 2.0735 0.2506 0.1018 1.9875 0.2164 0.0810
    3 1.9013 0.5003 0.2825 2.0140 0.3091 0.1493 2.2162 0.2762 0.1145
    1.5 40[50%] 1 2.0456 0.2496 0.0970 2.0991 0.1969 0.0796 2.1081 0.1591 0.0677
    2 2.0634 0.2899 0.1114 2.1160 0.2225 0.0911 2.1142 0.1659 0.0706
    3 2.1630 0.4919 0.2718 1.9870 0.2719 0.1500 1.9824 0.2118 0.1213
    40[80%] 1 2.0463 0.1963 0.0816 2.0588 0.1812 0.0756 2.0888 0.1352 0.0552
    2 2.0510 0.2196 0.0858 2.0690 0.1875 0.0776 2.0846 0.1404 0.0630
    3 2.1861 0.4439 0.2001 1.9790 0.2557 0.1243 1.9986 0.1768 0.0913

    Table 3.  Point evaluations of R(t) .
T | n[FP%] | Scheme | ML: Av.E, RMSE, MRAB | MCMC Prior 1: Av.E, RMSE, MRAB | MCMC Prior 2: Av.E, RMSE, MRAB
    0.5 40[50%] 1 0.9723 0.0226 0.0178 0.9755 0.0193 0.0169 0.9933 0.0155 0.0150
    2 0.9739 0.0202 0.0167 0.9911 0.0176 0.0137 0.9908 0.0137 0.0128
    3 0.9754 0.0172 0.0145 0.9878 0.0149 0.0123 0.9858 0.0123 0.0113
    40[80%] 1 0.9728 0.0206 0.0162 0.9779 0.0174 0.0155 0.9933 0.0153 0.0129
    2 0.9758 0.0167 0.0142 0.9759 0.0160 0.0131 0.9688 0.0137 0.0105
    3 0.9812 0.0142 0.0119 0.9789 0.0126 0.0122 0.9899 0.0113 0.0101
    1.5 40[50%] 1 0.9745 0.0189 0.0164 0.9768 0.0185 0.0155 0.9959 0.0174 0.0141
    2 0.9893 0.0174 0.0140 0.9765 0.0152 0.0132 0.9919 0.0139 0.0121
    3 0.9812 0.0142 0.0119 0.9899 0.0133 0.0114 0.9839 0.0116 0.0095
    40[80%] 1 0.9778 0.0160 0.0136 0.9771 0.0154 0.0131 0.9944 0.0158 0.0122
    2 0.9864 0.0154 0.0122 0.9776 0.0144 0.0114 0.9906 0.0131 0.0094
    3 0.9789 0.0126 0.0113 0.9853 0.0124 0.0101 0.9862 0.0107 0.0097

    Table 4.  Point evaluations of h(t) .
T | n[FP%] | Scheme | ML: Av.E, RMSE, MRAB | MCMC Prior 1: Av.E, RMSE, MRAB | MCMC Prior 2: Av.E, RMSE, MRAB
    0.5 40[50%] 1 0.2072 0.3140 0.6715 0.1837 0.2862 0.5965 0.4768 0.2650 0.4846
    2 0.2720 0.2541 0.5242 0.2351 0.2462 0.5028 0.4584 0.2286 0.4178
    3 0.2785 0.2221 0.4787 0.3619 0.2105 0.4458 0.3980 0.1865 0.3491
    40[80%] 1 0.1867 0.2765 0.5841 0.2361 0.2635 0.5440 0.4512 0.2290 0.4182
    2 0.4621 0.2405 0.4881 0.2587 0.2217 0.4490 0.4556 0.2149 0.3915
    3 0.3565 0.2002 0.4130 0.2967 0.1944 0.3839 0.3903 0.1782 0.3333
    1.5 40[50%] 1 0.1131 0.3376 0.7446 0.1475 0.3172 0.6790 0.4811 0.2643 0.4749
    2 0.2355 0.2985 0.6286 0.2060 0.2753 0.5646 0.4578 0.2216 0.4019
    3 0.2959 0.2510 0.5094 0.2528 0.2361 0.4821 0.2983 0.2022 0.3901
    40[80%] 1 0.1439 0.3046 0.6750 0.1780 0.2776 0.5984 0.4490 0.2244 0.4093
    2 0.2321 0.2377 0.4874 0.4416 0.2150 0.3713 0.4434 0.2112 0.3861
    3 0.3565 0.2002 0.4199 0.3638 0.1899 0.3584 0.3903 0.1782 0.3333

    Table 5.  Interval evaluations of \gamma .
n[FP%] | Scheme | T=0.5: ACI-NA (AIL, CP), BCI Prior 1 (AIL, CP), BCI Prior 2 (AIL, CP) | T=1.5: ACI-NA (AIL, CP), BCI Prior 1 (AIL, CP), BCI Prior 2 (AIL, CP)
(The lower block of rows reports, in the same arrangement, the ACI-NL and HPD results.)
    40[50%] 1 0.897 0.926 0.882 0.930 0.693 0.940 0.919 0.923 0.884 0.928 0.698 0.937
    2 0.996 0.918 0.909 0.923 0.694 0.933 0.994 0.915 0.976 0.921 0.784 0.930
    3 1.036 0.916 0.998 0.921 0.771 0.931 1.009 0.913 0.995 0.919 0.802 0.928
    40[80%] 1 0.788 0.933 0.711 0.939 0.546 0.948 0.835 0.930 0.798 0.936 0.594 0.945
    2 0.846 0.930 0.820 0.937 0.562 0.946 0.861 0.927 0.849 0.934 0.601 0.943
    3 0.885 0.928 0.854 0.933 0.659 0.943 0.881 0.925 0.876 0.931 0.620 0.940
    ACI-NL HPD ACI-NL HPD
    40[50%] 1 0.909 0.924 0.865 0.932 0.689 0.942 0.948 0.921 0.877 0.930 0.693 0.941
    2 1.027 0.917 0.938 0.924 0.694 0.934 1.032 0.914 0.959 0.922 0.782 0.931
    3 1.065 0.915 1.016 0.922 0.770 0.932 1.134 0.912 0.965 0.920 0.797 0.929
    40[80%] 1 0.797 0.932 0.725 0.940 0.544 0.949 0.880 0.929 0.831 0.937 0.565 0.946
    2 0.847 0.930 0.816 0.937 0.560 0.946 0.872 0.927 0.850 0.934 0.572 0.943
    3 0.893 0.927 0.849 0.934 0.653 0.944 0.902 0.924 0.869 0.932 0.593 0.941

    Table 6.  Interval evaluations of \sigma .
n[FP%] | Scheme | T=0.5: ACI-NA (AIL, CP), BCI Prior 1 (AIL, CP), BCI Prior 2 (AIL, CP) | T=1.5: ACI-NA (AIL, CP), BCI Prior 1 (AIL, CP), BCI Prior 2 (AIL, CP)
(The lower block of rows reports, in the same arrangement, the ACI-NL and HPD results.)
    40[50%] 1 1.043 0.932 0.722 0.938 0.704 0.947 1.119 0.928 0.746 0.931 0.712 0.944
    2 0.913 0.936 0.707 0.942 0.687 0.951 0.936 0.932 0.707 0.935 0.685 0.948
    3 0.808 0.942 0.641 0.946 0.634 0.956 0.820 0.938 0.662 0.940 0.673 0.953
    40[80%] 1 0.733 0.947 0.637 0.954 0.484 0.963 0.789 0.943 0.658 0.947 0.465 0.960
    2 0.675 0.952 0.617 0.958 0.466 0.967 0.718 0.948 0.647 0.950 0.456 0.964
    3 0.651 0.954 0.608 0.960 0.451 0.969 0.688 0.950 0.637 0.951 0.449 0.966
    ACI-NL HPD ACI-NL HPD
    40[50%] 1 1.067 0.929 0.726 0.940 0.703 0.950 1.143 0.925 0.744 0.935 0.709 0.947
    2 0.921 0.933 0.706 0.944 0.685 0.954 0.944 0.929 0.700 0.938 0.684 0.951
    3 0.813 0.938 0.671 0.950 0.631 0.960 0.826 0.934 0.666 0.944 0.656 0.957
    40[80%] 1 0.742 0.945 0.625 0.956 0.475 0.965 0.797 0.941 0.653 0.949 0.462 0.962
    2 0.678 0.949 0.612 0.961 0.458 0.970 0.724 0.945 0.642 0.954 0.451 0.967
    3 0.654 0.951 0.604 0.963 0.449 0.972 0.691 0.947 0.627 0.956 0.447 0.969

    Table 7.  Interval evaluations of R(t) .
n[FP%] | Scheme | T=0.5: ACI-NA (AIL, CP), BCI Prior 1 (AIL, CP), BCI Prior 2 (AIL, CP) | T=1.5: ACI-NA (AIL, CP), BCI Prior 1 (AIL, CP), BCI Prior 2 (AIL, CP)
(The lower block of rows reports, in the same arrangement, the ACI-NL and HPD results.)
    40[50%] 1 0.097 0.947 0.089 0.950 0.066 0.955 0.087 0.949 0.072 0.953 0.036 0.956
    2 0.083 0.949 0.073 0.952 0.057 0.957 0.066 0.953 0.059 0.957 0.035 0.959
    3 0.080 0.951 0.077 0.954 0.052 0.960 0.063 0.956 0.055 0.960 0.027 0.962
    40[80%] 1 0.074 0.954 0.072 0.957 0.040 0.962 0.067 0.958 0.058 0.962 0.029 0.965
    2 0.071 0.956 0.067 0.959 0.036 0.964 0.060 0.960 0.053 0.964 0.026 0.967
    3 0.067 0.959 0.063 0.961 0.035 0.967 0.053 0.963 0.040 0.967 0.023 0.970
    ACI-NL HPD ACI-NL HPD
    40[50%] 1 0.094 0.949 0.079 0.952 0.059 0.957 0.072 0.953 0.058 0.957 0.034 0.959
    2 0.082 0.951 0.073 0.954 0.054 0.959 0.064 0.955 0.052 0.959 0.033 0.961
    3 0.070 0.954 0.064 0.957 0.049 0.962 0.059 0.958 0.047 0.962 0.026 0.965
    40[80%] 1 0.069 0.957 0.061 0.960 0.035 0.965 0.055 0.961 0.046 0.965 0.028 0.967
    2 0.066 0.959 0.056 0.962 0.034 0.966 0.052 0.963 0.041 0.967 0.024 0.969
    3 0.060 0.961 0.055 0.963 0.032 0.969 0.050 0.965 0.035 0.969 0.022 0.972

    Table 8.  Interval evaluations of h(t) .
n[FP%] | Scheme | T=0.5: ACI-NA (AIL, CP), BCI Prior 1 (AIL, CP), BCI Prior 2 (AIL, CP) | T=1.5: ACI-NA (AIL, CP), BCI Prior 1 (AIL, CP), BCI Prior 2 (AIL, CP)
(The lower block of rows reports, in the same arrangement, the ACI-NL and HPD results.)
    40[50%] 1 1.015 0.931 0.994 0.927 0.708 0.931 1.004 0.933 0.988 0.930 0.605 0.934
    2 0.911 0.933 0.886 0.929 0.608 0.933 0.896 0.935 0.869 0.932 0.581 0.936
    3 0.871 0.937 0.851 0.932 0.586 0.936 0.858 0.939 0.837 0.935 0.487 0.939
    40[80%] 1 0.844 0.939 0.810 0.934 0.559 0.938 0.826 0.941 0.811 0.937 0.463 0.941
    2 0.828 0.941 0.790 0.937 0.519 0.941 0.751 0.943 0.724 0.940 0.446 0.944
    3 0.794 0.944 0.753 0.940 0.479 0.944 0.709 0.946 0.638 0.943 0.402 0.947
    ACI-NL HPD ACI-NL HPD
    40[50%] 1 1.285 0.924 0.876 0.934 0.679 0.938 1.309 0.926 0.865 0.937 0.586 0.941
    2 1.048 0.926 0.795 0.936 0.589 0.940 1.060 0.928 0.785 0.939 0.567 0.943
    3 1.007 0.929 0.793 0.940 0.572 0.944 1.021 0.931 0.719 0.943 0.462 0.947
    40[80%] 1 1.004 0.931 0.709 0.942 0.557 0.947 0.977 0.933 0.690 0.945 0.444 0.949
    2 0.973 0.934 0.690 0.944 0.501 0.949 0.855 0.936 0.667 0.947 0.439 0.951
    3 0.846 0.937 0.674 0.947 0.472 0.952 0.831 0.939 0.599 0.950 0.389 0.954


From Tables 1–8, judging by the lowest RMSE, MRAB, and AIL values as well as the largest CP values, we list the following observations:

● All estimates of the unknown ExT parameters \gamma , \sigma , R(t) , and h(t) perform well overall. It is evident that as the number of observed failures increases, the estimates converge to the true parameter values. This behavior indicates that the obtained estimates are consistent.

    ● As n (or FP%) grows, the behavior of the offered point/interval estimates improves. This finding is also noted when m\rightarrow n .

    ● As T grows, it is observed that:

- The RMSE and MRAB values of \gamma , \sigma , and R(t) decrease, while those associated with h(t) increase.

- The AILs of \gamma and \sigma increase, while those associated with R(t) and h(t) decrease. The opposite pattern is observed for the CP values of all unknown parameters.

    ● Evaluating the censoring plans, it is observed that:

    - In point inference, the acquired results of \gamma and \sigma behave more satisfactorily based on Scheme-1 'right-censoring', while those of R(t) and h(t) behave more satisfactorily based on Scheme-3 'left-censoring' than others.

    - In interval inference, the acquired results of \gamma behave more satisfactorily based on Scheme-1 'right-censoring', while those of \sigma , R(t) , and h(t) behave more satisfactorily based on Scheme-3 'left-censoring' than others.

    ● Since the Bayes MCMC estimates contain gamma information, as we anticipated, these estimates perform better compared to the frequentist estimates.

    ● Due to Prior 2's lower variance than Prior 1, the Bayes findings derived from Prior 2 outperform those derived from Prior 1.

    ● Evaluating the interval techniques, it is clear that:

- The adaptive-T2PHC scheme has an effect on the asymptotic normality of the MLEs, especially with small effective sample sizes, potentially leading to inefficient ACIs. For small effective sample sizes, such as 40[50\%] , intervals assuming asymptotic normality performed poorly, showing the widest lengths and lowest coverage probabilities. However, as the effective sample size increased, the performance improved, with shorter intervals and coverage probabilities converging to the nominal level, as seen in the 80[80\%] case.

    - In the classical interval setup, the estimates of \gamma , \sigma , and h(t) created from the ACI-NA approach perform superior to others, whereas those of R(t) created from the ACI-NL approach perform superior to others.

    - In the credible interval setup, the estimates of \gamma , \sigma , R(t) , and h(t) created from the HPD approach perform better than others.

- Recall that the BCI and HPD interval estimates incorporate informative gamma priors, so they performed better than the ACI-NA and ACI-NL estimates, as expected.

● In summary, it is recommended to apply the Bayes paradigm to estimate the model parameters and reliability characteristics of the ExT lifespan model when adaptive-T2PHC censored data are available.

    This part investigates two real-world datasets from the engineering sector to examine how the estimating methodologies proposed in this study work in practice.

Accelerated testing of electronic components helps detect weaknesses and potential failure modes much faster than under normal use conditions. This process ensures higher reliability, reduces development time, and improves product confidence before market release. This application, from the engineering field, analyzes a real-world dataset containing the failure times (in minutes) of 15 electronic components that were subjected to an accelerated test; see Lawless [25]. In Table 9, each failure time has been divided by ten for computational simplicity.

    Table 9.  Failure times of 15 electronic components.
    0.14 0.51 0.63 1.08 1.21 1.85 1.97 2.22 2.30 3.06
    3.73 4.63 5.39 5.98 6.62


First, to highlight the superiority of the proposed ExT model, we consider seven other lifetime distributions, including: (i) alpha power exponential (APE (\gamma, \sigma) ) by Mahdavi and Kundu [26], (ii) generalized-exponential (GE (\gamma, \sigma) ) by Gupta and Kundu [27], (iii) Nadarajah-Haghighi (NH (\gamma, \sigma) ) by Nadarajah and Haghighi [28], (iv) Weibull (W (\gamma, \sigma) ) by Weibull [29], (v) gamma (G (\gamma, \sigma) ) by Johnson et al. [30], (vi) Teissier (T (\sigma) ) by Teissier [1], and (vii) exponential (E (\sigma) ) by Johnson et al. [30]. To determine the best model, besides the Kolmogorov–Smirnov ( \mathcal{KS} ) statistic along with its P –value, five model-selection metrics are utilized, namely the (i) negative log–likelihood ( \mathcal{NL} ), (ii) Akaike ( \mathcal{A} ), (iii) Bayesian ( \mathcal{B} ), (iv) consistent Akaike ( \mathcal{CA} ), and (v) Hannan–Quinn ( \mathcal{HQ} ) information criteria; see Table 10. Using the ' \textsf{AdequacyModel} ' package in \mathcal{R} 4.2.2, by Marinho et al. [31], the MLEs (with their standard errors (St.Ers)) of \gamma and \sigma are obtained and provided in Table 10. It implies that the ExT distribution provides the best fit for the given electronic components dataset compared to the others.

    Table 10.  Summary fit of the ExT and others using electronic components data.
    Model \gamma \sigma \mathcal{NL} \mathcal{A} \mathcal{B} \mathcal{CA} \mathcal{HQ} \mathcal{KS} ( P –value)
    Est. St.Er Est. St.Er
    ExT 0.4461 0.1278 0.2461 0.0396 29.089 62.179 63.107 62.707 62.163 0.0948(0.997)
    APE 5.8185 8.6061 0.5270 0.1661 29.581 63.162 64.578 64.162 63.147 0.1377(0.902)
    GE 1.4431 0.5130 0.4528 0.1373 29.699 63.397 64.813 64.397 63.382 0.1080(0.987)
    NH 10.173 17.726 0.0224 0.0412 29.149 62.298 63.714 63.298 62.283 0.1137(0.978)
    W 1.3057 0.2744 2.9764 0.6184 29.481 62.963 64.379 63.963 62.948 0.0981(0.996)
    G 1.4311 0.4759 1.9281 0.7665 29.647 63.295 64.711 64.295 63.280 0.1035(0.992)
    T - - 0.2973 0.0291 34.100 70.199 70.907 70.507 70.191 0.3427(0.045)
    E - - 0.3630 0.0937 30.199 62.399 63.595 63.179 62.391 0.1558(0.807)

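For readers who wish to reproduce a fit summary of this kind for the ExT model, a base-R sketch is given below (this is not the AdequacyModel workflow used by the authors; it reuses dext and pext from the first sketch and the usual definitions of the information criteria):

```r
# Base-R sketch (not the authors' AdequacyModel workflow) of the ExT fit summary.
dat <- c(0.14, 0.51, 0.63, 1.08, 1.21, 1.85, 1.97, 2.22, 2.30, 3.06,
         3.73, 4.63, 5.39, 5.98, 6.62)
nll <- function(par) {
  if (any(par <= 0)) return(1e10)
  -sum(log(dext(dat, par[1], par[2])))
}
fit <- optim(c(1, 0.3), nll, hessian = TRUE)
k <- 2; n <- length(dat); NL <- fit$value
c(NL = NL,
  A  = 2 * NL + 2 * k,                       # Akaike
  B  = 2 * NL + k * log(n),                  # Bayesian
  CA = 2 * NL + 2 * k * n / (n - k - 1),     # consistent (corrected) Akaike
  HQ = 2 * NL + 2 * k * log(log(n)))         # Hannan-Quinn
ks.test(dat, function(q) pext(q, fit$par[1], fit$par[2]))   # KS statistic and P-value
```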

Furthermore, using the complete electronic components dataset, we examine the flexibility of the ExT model through three graphical plots, namely the PP, PDF, and RF plots; see Figure 4. They indicate that the ExT model is a better choice than its seven competitors and support the findings in Table 10. On the other hand, to show the existence and uniqueness of the MLEs \hat\gamma and \hat\sigma of the ExT parameters \gamma and \sigma , respectively, from Table 9, the contour plot of the log-likelihood of ExT (\gamma, \sigma) is also plotted and shown in Figure 4(d). It shows that the offered MLEs \hat\gamma and \hat\sigma exist and are unique.

    Figure 4.  The PP, PDF, RF, contour, and TTT diagrams from electronic components data.

We propose using the acquired estimates \hat\gamma\approx{0.4461} and \hat\sigma\approx{0.2461} as starting values for any future computations based on the electronic components data. To identify the HRF shape of the electronic components dataset, the scaled–TTT transform plot is displayed in Figure 4(e). It indicates that the ExT failure rate model provides an increasing failure rate.
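A scaled-TTT plot of this type can be drawn with a few lines of base R (an illustrative sketch, with dat the electronic components data from the previous sketch):

```r
# Illustrative base-R sketch of the scaled-TTT transform; a curve lying above the
# diagonal and concave points to an increasing failure rate.
ttt_plot <- function(x) {
  x <- sort(x); n <- length(x)
  ttt <- cumsum(x) + (n - 1:n) * x           # total time on test at the i-th failure
  plot(1:n / n, ttt / ttt[n], type = "b", xlab = "i/n", ylab = "scaled TTT")
  abline(0, 1, lty = 2)
}
ttt_plot(dat)
```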

To investigate how the acquired estimates of \gamma , \sigma , R(t) , and h(t) can be applied to the electronic components data, three adaptive-T2PHC samples with m = 10 are created; see Table 11. For brevity, the scheme (1, 1, 1, 1, 1, 0, 0, 0, 0, 0) is denoted as ( 1^{5} , 0^{5} ). Since we lack prior knowledge about the ExT parameters \gamma and \sigma from the given real data, the Bayes MCMC estimates, along with their BCI/HPD interval estimates of \gamma , \sigma , R(t) , and h(t) (at t = 1 ), are obtained based on improper gamma priors. To remove the influence of the starting points (\gamma^{(0)}, \sigma^{(0)}) = (0.4461, 0.2461) , we discard the first 10,000 iterations from the total 50,000 MCMC iterations. The point estimates (with their St.Ers) as well as the interval estimates (with their interval widths (IWs)) are presented in Table 12. It is evident, because non-informative priors are utilized, that the maximum likelihood and MCMC estimates are very similar. It also indicates that the ACI estimates (from the NA or NL method) are very close to the BCI/HPD interval estimates.

    Table 11.  Three Adaptive-T2PHC samples from electronic components data.
    Sample Scheme T(d) R_{m}^{*} Data
    S1 ( 1^{5} , 0^{5} ) 0.6(2) 3 0.14, 0.51, 0.63, 1.08, 1.21, 1.85, 1.97, 2.22, 2.30, 3.06
    S2 ( 0^{3} , 1^{5} , 0^{2} ) 2.1(6) 2 0.14, 0.51, 0.63, 1.08, 1.21, 1.97, 2.22, 2.30, 3.73, 4.63
    S3 ( 0^{5} , 1^{5} ) 4.8(9) 1 0.14, 0.51, 0.63, 1.08, 1.21, 1.85, 2.22, 2.30, 4.63, 5.39

    Table 12.  Estimates of \gamma , \sigma , R(t) and h(t) from electronic components data.
    Sample Par. MLE ACI-NA BCI
    MCMC ACI-NL HPD
    Est. St.Er Lower Upper IW Lower Upper IW
    S1 \gamma 0.5397 0.1781 0.1905 0.8888 0.6983 0.2889 0.6072 0.3183
    0.4417 0.1275 0.2826 1.0306 0.7480 0.2881 0.6058 0.3178
    \sigma 0.3641 0.0847 0.1981 0.5301 0.3319 0.1858 0.4202 0.2343
    0.3017 0.0872 0.2308 0.5744 0.3436 0.1858 0.4201 0.2343
    R(1) 0.7576 0.0918 0.5778 0.9375 0.3597 0.5843 0.8479 0.2636
    0.7291 0.0735 0.5975 0.9606 0.3631 0.5894 0.8501 0.2608
    h(1) 0.3539 0.1012 0.1556 0.5522 0.3966 0.1892 0.5026 0.3134
    0.3315 0.0834 0.2021 0.6198 0.4176 0.1833 0.4934 0.3101
    S2 \gamma 0.4456 0.1408 0.1695 0.7216 0.5521 0.2245 0.5061 0.2816
    0.3544 0.1171 0.2398 0.8279 0.5881 0.2195 0.5008 0.2813
    \sigma 0.2445 0.0630 0.1211 0.3680 0.2469 0.0921 0.2916 0.1995
    0.1923 0.0730 0.1476 0.4051 0.2575 0.0918 0.2907 0.1989
    R(1) 0.7844 0.0869 0.6141 0.9546 0.3406 0.6007 0.8668 0.2661
    0.7452 0.0794 0.6313 0.9745 0.3432 0.6143 0.8731 0.2588
    h(1) 0.2513 0.0711 0.1119 0.3906 0.2787 0.1329 0.3670 0.2340
    0.2419 0.0606 0.1443 0.4375 0.2932 0.1290 0.3605 0.2316
    S3 \gamma 0.4141 0.1304 0.1584 0.6697 0.5113 0.1999 0.4753 0.2754
    0.3255 0.1140 0.2233 0.7677 0.5444 0.1982 0.4713 0.2731
    \sigma 0.2059 0.0567 0.0949 0.3170 0.2221 0.0685 0.2497 0.1812
    0.1578 0.0674 0.1201 0.3531 0.2330 0.0681 0.2488 0.1807
    R(1) 0.7923 0.0855 0.6248 0.9597 0.3350 0.6040 0.8722 0.2682
    0.7493 0.0827 0.6413 0.9788 0.3375 0.6085 0.8757 0.2673
    h(1) 0.2223 0.0631 0.0986 0.3460 0.2474 0.1172 0.3293 0.2122
    0.2159 0.0548 0.1275 0.3878 0.2603 0.1102 0.3204 0.2102


For each sample S i, \ i = 1, 2, 3 , Figure 5 depicts the profile log-likelihood functions of \gamma and \sigma . It shows that the acquired MLE values of \gamma and \sigma exist and are unique. Moreover, from the remaining 40,000 iterations of {\gamma} , \sigma , R(t) , and h(t) , seven statistics are computed for each unknown parameter, namely the mean, mode, three quartiles {Q}_i, \ i = 1, 2, 3 , standard deviation (St.D), and skewness (Skew.); see Table 13.

    Figure 5.  The log-likelihood of \gamma (left) and \sigma (right) from electronic components data.
    Table 13.  Statistics of {\gamma} , {\sigma} , R(t) , and h(t) from electronic components data.
    Sample Par. Mean Mode Q_1 Q_2 Q_3 St.D Skew.
    S1 \gamma 0.44169 0.30750 0.38469 0.43990 0.49586 0.08160 0.13295
    \sigma 0.30166 0.20150 0.25929 0.30169 0.34305 0.06085 0.01851
    R(1) 0.72908 0.69293 0.68526 0.73397 0.77765 0.06774 -0.38216
    h(1) 0.33146 0.27894 0.27590 0.32602 0.38307 0.08031 0.38970
    S2 \gamma 0.35440 0.23191 0.30296 0.35159 0.40201 0.07344 0.26194
    \sigma 0.19229 0.12470 0.15709 0.19235 0.22789 0.05095 0.00273
    R(1) 0.74520 0.67292 0.69871 0.74989 0.79446 0.06908 -0.40344
    h(1) 0.24194 0.22929 0.20038 0.23883 0.28169 0.05987 0.28649
    S3 \gamma 0.32550 0.20039 0.27496 0.32252 0.37162 0.07176 0.28390
    \sigma 0.15782 0.08610 0.12403 0.15805 0.19039 0.04727 0.03042
    R(1) 0.74930 0.67252 0.70215 0.75491 0.80002 0.07066 -0.45168
    h(1) 0.21590 0.19762 0.17773 0.21205 0.25195 0.05445 0.31892


Based on the same remaining Markovian draws for each unknown parameter, from S1 listed in Table 11, the estimated density (with Gaussian kernel) and trace plots are shown in Figure 6. They confirm the facts listed in Table 13 and show that the collected MCMC draws converged well. They also indicate that the acquired posterior distributions of {\gamma} and \sigma are fairly symmetric, while those of R(t) and h(t) are markedly negatively and positively skewed, respectively.

    Figure 6.  The MCMC diagrams of \gamma , \sigma , R(t) , and h(t) from electronic components data.

For additional examination of the convergence status of the MCMC iterations, using S1 (as an example), three diagnostics, namely, (i) autocorrelation, (ii) BGR, and (iii) trace plots, are shown in Figure 7. Figure 7(a) shows that the autocorrelation values approach zero as the lag increases, indicating good mixing of the MCMC samples. This result is evidence that the collected MCMC iterations of \gamma , \sigma , R(t) , and h(t) are approximately independent. Figure 7(b) demonstrates that the series of MCMC samples of \gamma , \sigma , R(t) , and h(t) are well mixed. Figure 7(c) implies that the burn-in is sufficiently large to eliminate the impact of the initial values and properly mix the simulated samples.

    Figure 7.  Autocorrelation, trace, and BGR plots using S1 from electronic components data.

Aircraft windshields protect pilots and passengers from external elements like wind, debris, and extreme weather at high speeds and altitudes. They also maintain cabin pressure and provide clear visibility, ensuring safety and operational performance during flight. An airplane windshield is a complex piece of equipment composed of many layers of material, incorporating a highly protective outer layer. This application analyzes the failure times of 84 aircraft windshields, provided by Murthy et al. [32]; see Table 14.

    Table 14.  Failure times of aircraft windshields.
    0.040 0.301 0.309 0.557 0.943 1.070 1.124 1.248 1.281 1.281 1.303 1.432
    1.480 1.505 1.506 1.568 1.615 1.619 1.652 1.652 1.757 1.866 1.876 1.899
    1.911 1.912 1.914 1.981 2.010 2.038 2.085 2.089 2.097 2.135 2.154 2.190
    2.194 2.223 2.224 2.229 2.300 2.324 2.385 2.481 2.610 2.625 2.632 2.646
    2.661 2.688 2.823 2.890 2.902 2.934 2.962 2.964 3.000 3.103 3.114 3.117
    3.166 3.344 3.376 3.443 3.467 3.478 3.578 3.595 3.699 3.779 3.924 4.035
    4.121 4.167 4.240 4.255 4.278 4.305 4.376 4.449 4.485 4.570 4.602 4.663


Table 15 shows the MLEs (with St.Ers) of \gamma and \sigma , as well as the fitted model-selection criteria (including \mathcal{NL} , \mathcal{A} , \mathcal{B} , \mathcal{CA} , \mathcal{HQ} , and \mathcal{KS} ( P –value)), of the ExT, APE, GE, NH, W, G, T, and E models. It shows that the ExT model produces the lowest values of all the selection metrics and the greatest P -value among all fitted lifespan models. As a result, the ExT model is the best choice based on the aircraft windshield data.

    Table 15.  Summary fit of the ExT and others using aircraft windshield data.
    Model \gamma \sigma \mathcal{NL} \mathcal{A} \mathcal{B} \mathcal{CA} \mathcal{HQ} \mathcal{KS} ( P –value)
    Est. St.Er Est. St.Er
    ExT 0.9476 0.1248 0.3889 0.0202 128.326 258.821 261.252 258.870 259.799 0.0615(0.909)
    APE 84.976 41.758 0.8171 0.0612 134.306 272.611 277.473 272.759 274.565 0.1228(0.158)
    GE 3.5605 0.6110 0.7579 0.0769 139.841 283.681 288.543 283.829 285.635 0.1210(0.171)
    NH 34.031 136.94 0.0083 0.0335 143.787 291.573 296.435 291.722 293.528 0.2582(0.005)
    W 2.3744 0.2096 2.8629 0.1375 130.053 264.107 268.968 264.255 266.061 0.0537(0.969)
    G 3.4732 0.5128 0.7364 0.1170 136.938 277.875 282.737 278.023 279.829 0.1041(0.322)
    T 0.0000 0.0000 0.3935 0.0165 128.411 260.652 265.514 260.800 262.606 0.0666(0.851)
    E 0.0000 0.0000 0.3910 0.0427 162.877 327.754 330.185 327.803 328.731 0.3028(0.008)


In Figure 8, the PP, PDF, RF, and contour plots of \gamma and \sigma and the scaled–TTT transform from the aircraft windshield data are provided. Figure 8(a–c) supports the same facts presented in Table 15. Further, Figure 8(d) shows that the acquired MLEs \hat\gamma and \hat\sigma exist and are unique. The estimates \hat\gamma\approx{0.9476} and \hat\sigma\approx{0.3889} will be utilized for any future evaluations based on the aircraft windshield data. Moreover, Figure 8(e) supports the same result displayed in Figure 4(e).

    Figure 8.  The PP, PDF, RF, contour, and TTT diagrams from aircraft windshield data.

    Now, three different Adaptive-T2PHC samples (with m = 44 ) are gathered; see Table 16. The proposed point estimates (with their St.Ers) and interval estimates (with their IWs) of \gamma , \sigma , R(t) , and h(t) (at t = 1 ) are reported in Table 17. Taking M = 50,000 and M^* = 10,000, the Bayesian calculations are conducted based on improper gamma density priors. The findings in Table 17 demonstrate how closely the point and interval estimates of \gamma , \sigma , R(t) , and h(t) provided by the Bayes MCMC and likelihood techniques match one another. The obtained MLEs \hat\gamma and \hat\sigma exist and are unique, as shown by the log-likelihood functions of \gamma and \sigma in Figure 9, which are derived from the samples in Table 16.
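    To make this computational setup concrete, the sketch below shows a generic random-walk Metropolis-Hastings sampler of the kind described here, with M = 50,000 total draws and M^* = 10,000 burn-in as defaults, independent gamma priors, and the usual progressively censored likelihood form. The Weibull stand-in density, prior hyperparameters, proposal step sizes, and synthetic data are placeholders; the paper's exact ExT likelihood is not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2025)

def censored_loglik(log_pdf, log_sf, x, R, R_star):
    """Progressively censored log-likelihood (up to a constant):
    sum_i log f(x_i) + sum_i R_i log S(x_i) + R_star * log S(x_m)."""
    x, R = np.asarray(x, float), np.asarray(R, float)
    return log_pdf(x).sum() + (R * log_sf(x)).sum() + R_star * log_sf(x[-1])

def log_prior(theta, a=(1e-3, 1e-3), b=(1e-3, 1e-3)):
    """Independent gamma priors on both parameters (hyperparameters are placeholders)."""
    theta, a, b = map(np.asarray, (theta, a, b))
    if np.any(theta <= 0):
        return -np.inf
    return np.sum((a - 1) * np.log(theta) - b * theta)

def mh_sampler(log_post, start, step, M=50_000, burn_in=10_000):
    """Random-walk Metropolis-Hastings; returns the draws kept after discarding burn_in."""
    theta = np.asarray(start, float)
    lp = log_post(theta)
    draws = np.empty((M, theta.size))
    for i in range(M):
        cand = theta + rng.normal(0.0, step)
        cand_lp = log_post(cand)
        if np.log(rng.uniform()) < cand_lp - lp:    # accept/reject step
            theta, lp = cand, cand_lp
        draws[i] = theta
    return draws[burn_in:]

# Illustrative run with a Weibull stand-in for the ExT law (shape, scale);
# substitute the ExT pdf/survival function to mirror the paper's setting.
x = np.sort(rng.weibull(2.4, size=44)) * 2.9        # synthetic observed failure times
R = np.r_[np.full(4, 10), np.zeros(40)]             # removal scheme (10^4, 0^40)

def post(th):
    """Log-posterior of the stand-in model; -inf outside the support."""
    if np.any(th <= 0):
        return -np.inf
    lpdf = lambda t: stats.weibull_min.logpdf(t, th[0], scale=th[1])
    lsf = lambda t: stats.weibull_min.logsf(t, th[0], scale=th[1])
    return censored_loglik(lpdf, lsf, x, R, R_star=0) + log_prior(th)

samples = mh_sampler(post, start=(2.0, 3.0), step=(0.10, 0.15), M=5_000, burn_in=1_000)
print(samples.mean(axis=0))
```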

    Table 16.  Three Adaptive-T2PHC samples from aircraft windshield data.
    Sample Scheme T(d) R_{m}^{*} Data
    S1 ( 10^{4} , 0^{40} ) 1.5(10) 11 0.040, 0.301, 0.309, 0.557, 0.943, 1.070, 1.124, 1.248, 1.281, 1.281,
    1.480, 1.505, 1.506, 1.568, 1.615, 1.619, 1.652, 1.652, 1.757, 1.866,
    1.899, 1.911, 1.912, 1.914, 1.981, 2.010, 2.038, 2.085, 2.089, 2.097,
    2.154, 2.190, 2.194, 2.223, 2.224, 2.229, 2.300, 2.324, 2.385, 2.481,
    2.625, 2.632, 2.646, 2.661
    S2 ( 0^{20} , 10^{4} , 0^{20} ) 1.9(22) 20 0.040, 0.301, 0.309, 0.557, 0.943, 1.070, 1.124, 1.248, 1.281, 1.281,
    1.303, 1.432, 1.480, 1.505, 1.506, 1.568, 1.615, 1.619, 1.652, 1.652,
    1.757, 1.866, 1.912, 1.914, 1.981, 2.010, 2.038, 2.085, 2.154, 2.190,
    2.194, 2.223, 2.224, 2.229, 2.300, 2.324, 2.385, 2.481, 2.610, 2.632,
    2.646, 2.661, 2.688, 2.902
    S3 ( 0^{40} , 10^{4} ) 2.7(43) 10 0.040, 0.301, 0.309, 0.557, 0.943, 1.070, 1.124, 1.248, 1.281, 1.281,
    1.303, 1.432, 1.480, 1.505, 1.506, 1.568, 1.615, 1.619, 1.652, 1.652,
    1.757, 1.866, 1.876, 1.899, 1.911, 1.912, 1.914, 1.981, 2.010, 2.038,
    2.085, 2.089, 2.097, 2.135, 2.154, 2.190, 2.194, 2.223, 2.224, 2.229,
    2.300, 2.324, 2.688, 2.823

    Table 17.  Estimates of \gamma , \sigma , R(t) , and h(t) from aircraft windshield data.
    Sample Par. Est. St.Er Lower Upper IW Lower Upper IW
    For each parameter, the first row gives the MLE with its ACI-NA and BCI bounds, and the second row gives the MCMC estimate with its ACI-NL and HPD bounds.
    S1 \gamma 0.9335 0.1580 0.6239 1.2431 0.6192 0.6829 1.0032 0.3203
    0.8428 0.1215 0.6700 1.3006 0.6307 0.6845 1.0033 0.3188
    \sigma 0.3988 0.0380 0.3244 0.4732 0.1488 0.3134 0.4312 0.1179
    0.3740 0.0390 0.3309 0.4805 0.1497 0.3107 0.4282 0.1175
    R(1) 0.8975 0.0290 0.8406 0.9543 0.1138 0.8374 0.9206 0.0832
    0.8835 0.0252 0.8423 0.9562 0.1138 0.8424 0.9235 0.0812
    h(1) 0.2182 0.0392 0.1413 0.2951 0.1538 0.1657 0.2943 0.1286
    0.2255 0.0332 0.1534 0.3104 0.1570 0.1640 0.2907 0.1268
    S2 \gamma 0.9495 0.1540 0.6476 1.2514 0.6037 0.7005 1.0193 0.3188
    0.8598 0.1204 0.6909 1.3049 0.6140 0.7005 1.0183 0.3178
    \sigma 0.3897 0.0372 0.3169 0.4625 0.1457 0.3055 0.4213 0.1158
    0.3655 0.0382 0.3233 0.4698 0.1465 0.3051 0.4203 0.1152
    R(1) 0.9057 0.0262 0.8543 0.9571 0.1029 0.8506 0.9273 0.0768
    0.8929 0.0232 0.8557 0.9586 0.1029 0.8551 0.9300 0.0749
    h(1) 0.2024 0.0361 0.1315 0.2732 0.1417 0.1536 0.2730 0.1194
    0.2093 0.0311 0.1426 0.2872 0.1446 0.1523 0.2714 0.1191
    S3 \gamma 0.9207 0.1533 0.6203 1.2212 0.6008 0.6717 0.9905 0.3188
    0.8304 0.1208 0.6644 1.2760 0.6116 0.6717 0.9894 0.3176
    \sigma 0.3813 0.0387 0.3054 0.4572 0.1518 0.2939 0.4131 0.1192
    0.3557 0.0396 0.3125 0.4652 0.1528 0.2934 0.4114 0.1180
    R(1) 0.9028 0.0268 0.8503 0.9552 0.1049 0.8460 0.9248 0.0789
    0.8895 0.0240 0.8518 0.9568 0.1050 0.8491 0.9263 0.0772
    h(1) 0.2030 0.0355 0.1335 0.2726 0.1390 0.1536 0.2729 0.1193
    0.2093 0.0309 0.1442 0.2859 0.1417 0.1540 0.2730 0.1190

    Figure 9.  The log-likelihood of \gamma (left) and \sigma (right) from aircraft windshield data.
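    For completeness, the two asymptotic intervals reported in Table 17 can be reconstructed from a point estimate and its standard error using the usual normal-approximation (ACI-NA) and log-normal-approximation (ACI-NL) formulas. The sketch below assumes these standard constructions; the numerical example reuses the S1 estimate of \sigma from Table 17 purely as an illustration.

```python
import numpy as np
from scipy import stats

def aci_na(est, se, level=0.95):
    """Normal-approximation CI: est -/+ z * se (may fall below zero for positive parameters)."""
    z = stats.norm.ppf(0.5 + level / 2)
    return est - z * se, est + z * se

def aci_nl(est, se, level=0.95):
    """Normal approximation on the log scale: est * exp(-/+ z * se / est), always positive."""
    z = stats.norm.ppf(0.5 + level / 2)
    factor = np.exp(z * se / est)
    return est / factor, est * factor

# Example with the S1 estimate of sigma from Table 17 (est = 0.3988, se = 0.0380)
print(aci_na(0.3988, 0.0380))   # roughly (0.324, 0.473), cf. the ACI-NA bounds
print(aci_nl(0.3988, 0.0380))   # roughly (0.331, 0.481), cf. the ACI-NL bounds
```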

    In Table 18, making use of the remaining 40,000 iterations of {\gamma} , \sigma , R(t) , and h(t) , the same posterior summary characteristics reported for the electronic components data are recalculated from the aircraft windshield data. Moreover, to assess the convergence status of the simulated MCMC variates, the density and trace plots of the remaining 40,000 MCMC variates of \gamma , \sigma , R(t) , and h(t) are displayed in Figure 10. They show that the MCMC approach converges effectively, and that the obtained posterior variates of \gamma and \sigma are fairly symmetric, while those of R(t) and h(t) are negatively and positively skewed, respectively. They also demonstrate that removing the first 10,000 iterations as burn-in is sufficient to neutralize the influence of the initial values. This confirms the same result listed in Table 13.
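    The summary measures reported in Table 18 can be obtained directly from the retained draws. The sketch below computes them for a generic vector of MCMC variates; the mode is approximated by the peak of a Gaussian kernel density estimate, which is an assumption (the paper does not state its mode estimator), and the synthetic draws are placeholders.

```python
import numpy as np
from scipy import stats

def posterior_summary(draws):
    """Mean, approximate mode (KDE peak), quartiles, St.D, and skewness of MCMC draws."""
    draws = np.asarray(draws, dtype=float)
    kde = stats.gaussian_kde(draws)
    grid = np.linspace(draws.min(), draws.max(), 1000)
    mode = grid[np.argmax(kde(grid))]
    q1, q2, q3 = np.percentile(draws, [25, 50, 75])
    return {
        "Mean": draws.mean(), "Mode": mode,
        "Q1": q1, "Q2": q2, "Q3": q3,
        "St.D": draws.std(ddof=1), "Skew.": stats.skew(draws),
    }

# Illustrative use on synthetic draws standing in for the retained 40,000 gamma variates
rng = np.random.default_rng(7)
print(posterior_summary(rng.normal(0.84, 0.08, size=40_000)))
```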

    Table 18.  Statistics of {\gamma} , {\sigma} , R(t) , and h(t) from aircraft windshield data.
    Sample Par. Mean Mode Q_1 Q_2 Q_3 St.D Skew.
    S1 \gamma 0.84282 0.75055 0.78802 0.84195 0.89594 0.08082 0.04738
    \sigma 0.37402 0.35816 0.35395 0.37460 0.39458 0.03012 -0.08938
    R(1) 0.88345 0.86415 0.87004 0.88457 0.89832 0.02092 -0.41895
    h(1) 0.22545 0.24192 0.20302 0.22396 0.24583 0.03237 0.25786
    S2 \gamma 0.85981 0.78040 0.80535 0.85864 0.91283 0.08032 0.05071
    \sigma 0.36547 0.35764 0.34606 0.36606 0.38568 0.02951 -0.10023
    R(1) 0.89287 0.87482 0.88049 0.89395 0.90671 0.01934 -0.41047
    h(1) 0.20931 0.22899 0.18824 0.20818 0.22830 0.03028 0.24202
    S3 \gamma 0.83042 0.73781 0.77565 0.82921 0.88327 0.08027 0.05388
    \sigma 0.35567 0.34068 0.33544 0.35655 0.37629 0.03026 -0.09416
    R(1) 0.88950 0.86972 0.87655 0.89085 0.90371 0.02000 -0.42335
    h(1) 0.20931 0.22675 0.18828 0.20835 0.22874 0.03028 0.24692

    Figure 10.  The MCMC diagrams of \gamma , \sigma , R(t) , and h(t) from aircraft windshield data.

    Moreover, using S1 (as an example) from aircraft windshield data, subplots shown in Figure 11 support the same facts displayed in Figure 7.

    Figure 11.  Autocorrelation, trace, and BGR plots using S1 from aircraft windshield data.

    This subsection summarizes the main findings from the two real-data applications, namely the failure times of the electronic components and the failure times of the 84 aircraft windshields. From the numerical results of these two real datasets, it is clear that:

    ● The primary deduction drawn is that the Bayesian approach performs better than the likelihood approach.

    ● Each real dataset that was examined showed how flexible the ExT model is and how it can outperform several popular models.

    ● In both applications, the contour and profile log-likelihood diagrams showed that the obtained MLEs \hat\gamma and \hat\sigma exist and are unique; we therefore recommend these estimates as starting points for the iterative procedures.

    ● The trace diagrams for the remaining 40,000 MCMC iterations of {\gamma} , \sigma , R(t) , and h(t) show that the simulated variates are adequately mixed and that the burn-in size is sufficient to remove the effect of the initial starting values.

    ● We therefore conclude that the examined approaches provide an excellent analysis of the ExT model when the data are collected under the adaptive-T2PHC strategy.

    ● Overall, the current study advances reliability analysis by introducing a useful framework utilizing the ExT distribution within the adaptive-T2PHC model, which is particularly beneficial for industries such as electronics and aerospace. The adaptive-T2PHC model ensures efficient termination of experiments following a pre-specified number of failures, thereby reducing testing time and costs while maintaining an adequate volume of data. The ExT model effectively captures diverse hazard patterns, facilitating the development of precise predictive maintenance schedules that aim to minimize downtime. Bayesian estimation employing gamma priors integrates historical data and effectively quantifies uncertainty, particularly in scenarios with censored data. Classical maximum likelihood estimation provides a suitable baseline in circumstances involving abundant data. Collectively, these methodologies enhance reliability analysis and lifetime prediction, optimize maintenance strategies, and support adherence to safety standards. The framework is instrumental in identifying phases prone to failure, which is crucial for high-stakes applications, including aircraft components. Consequently, industries are empowered to make data-driven decisions that balance cost, safety, and efficiency. This approach ultimately contributes to improved operational reliability and risk assessment across the manufacturing and technology sectors.

    In this work, we have studied the estimation problems related to the exponentiated Teissier distribution's parameters, reliability, and hazard rate functions. To accomplish this, an adaptive progressive Type-II hybrid censoring approach is used to collect data from the studied population. Two estimation approaches are considered: maximum likelihood and Bayesian estimation. Using the likelihood approach, the point estimates are computed and two approximate interval ranges are constructed, with the delta approach used to derive the estimators' variances for the reliability and hazard rate metrics. The Bayesian paradigm uses the Markov chain Monte Carlo procedure, specifically the Metropolis-Hastings algorithm, to compute Bayes estimates under a squared error loss function, and two Bayes credible intervals are determined.

    A detailed simulation study was conducted to judge the performance of the provided estimation methods. The numerical evaluation outputs show that the Bayesian estimation approach, based on sampling from the posterior distribution, is recommended for producing point and interval estimates of the exponentiated Teissier distribution using adaptive progressively Type-II hybrid censored samples: the Bayes estimates have the lowest root mean square errors and interval widths in comparison with the maximum likelihood estimates. Two engineering applications are considered to highlight the significance of the chosen distribution and methodologies. They demonstrate that the exponentiated Teissier distribution is versatile and provides a better fit to real data than several existing models.

    One limitation of the current work is the use of the squared error loss, which assigns equal weight to underestimation and overestimation; this assumption may not be applicable in numerous scenarios. As a direction for future research, it is recommended to investigate estimation for the exponentiated Teissier distribution under asymmetric loss functions, such as the LINEX or weighted squared error loss functions. Another limitation of the current study is the reliance on the maximum likelihood estimation method for obtaining point and interval estimates of the model and reliability parameters. For highly reliable products, tests often conclude with a small predetermined number of failures to reduce costs and time; in such cases, the maximum likelihood approach may yield ineffective estimates due to the limited sample size. To address this limitation in future work, alternative classical estimation methods that perform better with small sample sizes should be explored; one promising approach is the maximum product of spacings method, which may provide more reliable results under these conditions. Another topic for future research is the use of more advanced censoring schemes, such as the improved adaptive progressive Type-II censoring plan, which generalizes the approach used in this study and can potentially improve reliability estimation, particularly for highly reliable products. Finally, a new comparison between the proposed censoring strategy and its competitors, using the same proposed methods, is recommended as future research.
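    As a brief illustration of the delta method recalled above, the sketch below approximates the standard error of a function of the MLEs (such as R(t) or h(t) ) from the estimated covariance matrix using a numerical gradient. The reliability function, parameter values, and covariance matrix in the example are hypothetical stand-ins, not the ExT quantities used in this study.

```python
import numpy as np
from scipy import stats

def delta_method_se(fun, theta_hat, cov, eps=1e-6):
    """Approximate SE of fun(theta) at the MLE via Var ≈ g' Sigma g with a numerical gradient."""
    theta_hat = np.asarray(theta_hat, dtype=float)
    grad = np.empty_like(theta_hat)
    for j in range(theta_hat.size):
        step = np.zeros_like(theta_hat)
        step[j] = eps
        grad[j] = (fun(theta_hat + step) - fun(theta_hat - step)) / (2 * eps)
    return float(np.sqrt(grad @ np.asarray(cov, float) @ grad))

# Hypothetical usage: reliability at t = 1 of a Weibull stand-in (replace with the ExT RF)
R_at_1 = lambda th: stats.weibull_min.sf(1.0, th[0], scale=th[1])
cov_hat = np.array([[0.044, -0.002], [-0.002, 0.019]])   # illustrative covariance matrix
print(delta_method_se(R_at_1, theta_hat=(2.37, 2.86), cov=cov_hat))
```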

    Refah Alotaibi: Conceptualization, methodology, investigation, funding acquisition, writing – original draft; Mazen Nassar: Conceptualization, methodology, investigation, writing – review & editing; Ahmed Elshahhat: Software, Data curation, writing – original draft. All authors read and approved the final manuscript.

    The authors declare that they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This research project was funded by the Deanship of Scientific Research and Libraries, Princess Nourah bint Abdulrahman University, through the Program of Research Project Funding After Publication, grant (No. RPFAP-32-1445).

    The authors would like to express their thanks to the Editor-in-Chief and the four anonymous referees for their valuable comments.

    The authors confirm that the data supporting the findings of this study are available within the article.

    The authors declare no conflicts of interest.


