Research article

The least common multiple of consecutive terms in a cubic progression

  • Received: 30 October 2019 Accepted: 07 February 2020 Published: 14 February 2020
  • MSC : 11B25, 11N13, 11A05

  • Let $k$ be a positive integer and $f(x)$ a polynomial with integer coefficients. Associated to the least common multiple $\mathrm{lcm}_{0\le i\le k}\{f(n+i)\}$, we define the function $\mathcal{G}_{k,f}$ for all positive integers $n\in\mathbb{N}\setminus Z_{k,f}$ by $\mathcal{G}_{k,f}(n):=\frac{\prod_{i=0}^{k}|f(n+i)|}{\mathrm{lcm}_{0\le i\le k}\{f(n+i)\}}$, where $Z_{k,f}:=\bigcup_{i=0}^{k}\{n\in\mathbb{N}: f(n+i)=0\}$. If $f(x)=x$, then Farhi showed in 2007 that $\mathcal{G}_{k,f}$ is periodic with $k!$ as its period. Subsequently, Hong and Yang improved Farhi's period $k!$ to $\mathrm{lcm}(1,\ldots,k)$. Later on, Farhi and Kane confirmed a conjecture of Hong and Yang and determined the smallest period of $\mathcal{G}_{k,f}$. For the general linear polynomial $f(x)$, Hong and Qian showed in 2011 that $\mathcal{G}_{k,f}$ is periodic and obtained a formula for its smallest period. In 2015, Hong and Qian characterized the quadratic polynomials $f(x)$ for which $\mathcal{G}_{k,f}$ is almost periodic and also arrived at an explicit formula for the smallest period of $\mathcal{G}_{k,f}$. If $\deg f(x)\ge 3$, one naturally asks the following interesting question: Is the arithmetic function $\mathcal{G}_{k,f}$ almost periodic and, if so, what is its smallest period? In this paper, we answer this question for the case $f(x)=x^3+2$. First, with the help of Hua's identity, we prove that $\mathcal{G}_{k,x^3+2}$ is periodic. Then we use Hensel's lemma, develop a detailed $p$-adic analysis of $\mathcal{G}_{k,x^3+2}$, and in particular investigate arithmetic properties of the congruences $x^3+2\equiv 0\pmod{p^e}$ and $x^6+108\equiv 0\pmod{p^e}$; with further effort, its smallest period is finally determined. Furthermore, an asymptotic formula for $\log\mathrm{lcm}_{0\le i\le k}\{(n+i)^3+2\}$ is given.

    Citation: Zongbing Lin, Shaofang Hong. The least common multiple of consecutive terms in a cubic progression[J]. AIMS Mathematics, 2020, 5(3): 1757-1778. doi: 10.3934/math.2020119



    The stress-strength model plays an essential role in lifetime studies and engineering applications. In reliability terms, stress-strength reliability is defined as $\delta=P(X>Y)$, where $X$ denotes the strength of a system or unit and $Y$ the stress acting on it; the system or unit works normally when $X>Y$. Aziz and Chassapis [1] considered the performance $\delta=P(X>Y)$ of a gearing system, where $Y$ denotes the stress on the gear tooth and $X$ denotes the strength of the tooth root. Dong et al. [2] studied the biomechanical performance $\delta=P(X>Y)$ of healthy and reconstructed pelvic models, where $X$ denotes the strength of the pelvic model and $Y$ indicates daily activities such as knee bending, standing up, stair descent and stair ascent. Zhou et al. [3] studied the effect of the stress-strength ratio and fiber length on the creep properties of polypropylene fiber-reinforced alkali-activated slag concrete.

    Since stress-strength reliability is widely applied, its statistical inference has attracted the attention of many researchers. Mehdi and Mehrdad [4] assumed that the strength $X$ follows a Pareto distribution with outliers while the stress $Y$ follows an uncontaminated Pareto distribution, and considered estimation of the stress-strength reliability; they found that maximum likelihood estimation and modified maximum likelihood estimation perform better than the method of moments and least squares. Mohamed and Reda [5] proposed a stress-strength model with a type-Ⅱ censored sample and studied it under the odd generalized exponential-exponential distribution; they observed that Bayesian estimation performs better than maximum likelihood estimation in terms of mean square error. Based on progressive first-failure censored samples, Shi and Shi [6] derived estimators of the stress-strength reliability for the beta log-Weibull distribution and showed that Bayesian estimation outperforms maximum likelihood estimation in terms of average absolute bias and mean squared error. For more research on stress-strength reliability, please refer to [7,8,9,10,11,12,13,14,15,16,17].

    The inverse Weibull distribution (IWD) is a lifetime distribution commonly employed in reliability analysis, with application fields including engineering, medicine and so on. Aljeddani and Mohammed [18] proposed that the IWD is an effective tool for modeling wind speed characteristics, offering a deep understanding of the density and cumulative distribution functions of wind speed. The IWD can also be used for statistical process control: Baklizi and Ghannam [19] proposed a control chart for the case where the product lifetime follows the IWD and extended the applicability of the control chart method to censored lifetime tests. The probability density function (PDF) and cumulative distribution function (CDF) of the IWD are given by Eqs (1) and (2), respectively.

    $f(x;\zeta,\sigma)=\zeta\sigma x^{-\sigma-1}\exp(-\zeta x^{-\sigma})$ , $x>0$, (1)
    $F(x;\zeta,\sigma)=\exp(-\zeta x^{-\sigma})$ , $x>0$, (2)

    where $\zeta>0$ is the scale parameter and $\sigma>0$ is the shape parameter. For convenience, denote the IWD with PDF (1) by $\mathrm{IW}(\zeta,\sigma)$. In practical production, hazard rate functions are often non-monotonic. As shown in Figure 1, the hazard rate function (hrf) of the IWD exhibits an inverted bathtub shape, making it highly suitable for modeling such data. For the degradation of diesel engine mechanical parts, Keller and Kanath [20] pointed out that the IWD fits failure data of pistons, crankshafts and main bearings better than the exponential and Weibull distributions.
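
    A minimal numerical sketch of Eqs (1) and (2) and the resulting hazard rate, assuming NumPy is available (the parameter values are purely illustrative):

        import numpy as np

        def iw_pdf(x, zeta, sigma):
            # PDF of IW(zeta, sigma), Eq (1)
            return zeta * sigma * x ** (-sigma - 1.0) * np.exp(-zeta * x ** (-sigma))

        def iw_cdf(x, zeta, sigma):
            # CDF of IW(zeta, sigma), Eq (2)
            return np.exp(-zeta * x ** (-sigma))

        def iw_hazard(x, zeta, sigma):
            # hazard rate h(x) = f(x) / (1 - F(x)); unimodal ("inverted bathtub") in x
            return iw_pdf(x, zeta, sigma) / (1.0 - iw_cdf(x, zeta, sigma))

        x = np.linspace(0.05, 5.0, 500)
        h = iw_hazard(x, zeta=2.0, sigma=5.0)
        print("hazard peaks near x =", x[np.argmax(h)])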

    Figure 1.  The hazard rate functions of IWD.

    In recent years, statistical inference for the IWD has attracted many authors. Alam and Nassar [21] considered the estimation of entropy for the IWD based on improved adaptive progressive type-Ⅱ censored data. Lin et al. [22] considered the estimation of parameters and percentiles for the Marshall-Olkin extended IWD based on progressive type-Ⅱ censored data; they found that least-squares estimation, maximum likelihood estimation and percentile estimation are not stable, and therefore focused on Bayesian estimation. Nassar and Ahmed [23] studied the constant-stress partially accelerated life test using adaptive progressive type-Ⅰ censored samples, assuming that the product lifetime under normal use conditions follows the IWD; maximum likelihood estimation, the maximum product of spacings method and Bayesian estimation were used to obtain point and interval estimates of the model parameters and the acceleration factor. Amany [24] proposed different predictive and reconstructive pivotal quantities for the IWD based on dual generalized order statistics. Based on complete samples, Hassan [25] obtained a modified maximum likelihood estimator and confidence intervals of the stress-strength reliability for the IWD by ranked set sampling. Jia et al. [26] discussed maximum likelihood and Bayesian estimation of the stress-strength model $P(X>Y)$ under first-failure progressive unified hybrid censored samples, where $X$ and $Y$ are independent random variables from the IWD. Based on complete samples, Bi and Gui [27] considered classical and Bayesian estimation of the stress-strength reliability of the IWD. Under adaptive progressive type-Ⅱ (APT-Ⅱ) censored samples, Alslman and Helu [28] obtained the maximum likelihood and maximum product of spacing estimators of the stress-strength reliability for the IWD. Yadav et al. [29] derived the maximum likelihood and Bayesian estimators of the stress-strength reliability for the IWD under progressively type-Ⅱ censored data.

    The available literature is still not comprehensive in terms of censoring schemes and estimation methods. Therefore, we consider the estimation of the stress-strength reliability $\delta=P(X>Y)$ under APT-Ⅱ censored samples, where $X$ and $Y$ are two independent random variables from the IWD with the same shape parameter but different scale parameters. The rest of this paper is organized as follows: Section 2 introduces the APT-Ⅱ censored scheme. Section 3 derives the maximum likelihood estimator (MLE) and the asymptotic distribution of $\delta$; an approximate maximum likelihood estimator (AMLE) and an asymptotic confidence interval (ACI) are constructed. Section 4 derives the Bayesian estimators (BEs) of $\delta$ and approximates them using Lindley's approximation. Section 5 presents the Monte Carlo simulation. In Section 6, the application of the proposed methods is illustrated by two real data sets. Section 7 contains the conclusions.

    In situations where products have long life spans, obtaining failure time data can be time-consuming and costly. To address this issue, experimenters often employ censoring schemes. Two commonly used schemes are the progressive type-Ⅰ censored scheme and the progressive type-Ⅱ censored scheme. The progressive type-Ⅰ censored scheme ends the test at a predetermined time; however, it may yield only a small number of observed failures when the product life is long, which limits the accuracy and efficiency of statistical inference. The progressive type-Ⅱ censored scheme ends the test after a predetermined number of failures occur; while this guarantees a sufficient number of observed failures, it can lead to prolonged test times, which can be costly and impractical in some cases. Ng et al. [30] developed the APT-Ⅱ censored scheme to address these limitations. In this scheme, the experimenter can not only ensure that enough failures are observed but also speed up the test, which greatly improves the efficiency of statistical inference.

    Assume that $n$ units are put on a lifetime test and only $m$ failures can be observed. A censoring scheme $Q=(Q_1,Q_2,\ldots,Q_m)$ satisfies $Q_1+Q_2+\cdots+Q_m+m=n$. Denote the lifetimes of the observed failed units by $X_i$ $(i=1,2,\ldots,m)$. When the first failure $X_1$ is observed, $Q_1$ units are randomly removed from the remaining $n-1$ surviving units. Similarly, $Q_2$ units are randomly removed from the remaining $n-Q_1-2$ units at the time of the second failure $X_2$. When the $m$-th failure $X_m$ is observed, all the remaining $Q_m$ units are removed. Then $(X_1,X_2,\ldots,X_m)$ is a progressive type-Ⅱ censored sample.

    The APT-Ⅱ censored scheme is essentially a hybrid of the type-Ⅰ censored scheme and the progressive type-Ⅱ censored scheme, as detailed in Figures 2 and 3. A desired total test time $T$ is given, but the actual test time is allowed to exceed $T$. If the number of failed units reaches $m$ before time $T$, the test stops before $T$. On the contrary, if the test time exceeds $T$ and fewer than $m$ failures have been observed, the testers want to terminate the test as soon as possible; to this end they modify the removals during the test so that $m$ failures can still be observed without the actual test time exceeding $T$ by too much. Therefore, to terminate the test as soon as possible without changing $m$, it is necessary to retain more surviving units in the test. The two situations of the APT-Ⅱ censored scheme are given below, followed by a short code sketch of the scheme adjustment.

    Figure 2.  The APT-Ⅱ censored scheme with the situation (1).
    Figure 3.  The APT-Ⅱ censored scheme with the situation (2).

    (1) If $m$ failures have been observed before $T$, the censoring scheme is $Q=(Q_1,Q_2,\ldots,Q_m)$.

    (2) Suppose that $J$ $(J<m)$ failures are observed before time $T$, that is, $X_J<T<X_{J+1}$. To retain more surviving units in the test, the testers set $Q_{J+1}=Q_{J+2}=\cdots=Q_{m-1}=0$ and $Q_m=n-m-Q_1-Q_2-\cdots-Q_J$.
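
    The scheme change in situation (2) can be sketched as follows; this is a hypothetical helper that assumes the planned removals Q and the ordered failure times are already available:

        def apt2_scheme(failure_times, Q, T, n):
            """Effective removal scheme under APT-II censoring.

            failure_times : the m ordered failure times
            Q             : planned removals (Q_1, ..., Q_m) with sum(Q) + m == n
            T             : desired total test time
            """
            m = len(Q)
            J = sum(1 for x in failure_times if x < T)   # failures observed before T
            if J >= m:                                   # situation (1): finished before T
                return list(Q)
            # situation (2): no removals after T until the m-th failure, then remove the rest
            return list(Q[:J]) + [0] * (m - J - 1) + [n - m - sum(Q[:J])]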

    Suppose that $X$ and $Y$ are two independent random variables, where $X\sim\mathrm{IW}(\zeta_1,\sigma)$ and $Y\sim\mathrm{IW}(\zeta_2,\sigma)$. The stress-strength reliability $\delta=P(X>Y)$ is given by

    $\delta=P(X>Y)=\int_0^{+\infty}f(x;\zeta_1,\sigma)P(Y\le x)\,dx=\int_0^{+\infty}f(x;\zeta_1,\sigma)F(x;\zeta_2,\sigma)\,dx=\frac{\zeta_1}{\zeta_1+\zeta_2}.$ (3)
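
    Eq (3) can be checked numerically; a small sketch with SciPy (parameter values illustrative only):

        import numpy as np
        from scipy import integrate

        zeta1, zeta2, sigma = 2.0, 3.0, 5.0

        def iw_pdf(x, zeta, sigma):
            return zeta * sigma * x ** (-sigma - 1.0) * np.exp(-zeta * x ** (-sigma))

        def iw_cdf(x, zeta, sigma):
            return np.exp(-zeta * x ** (-sigma))

        # integral of f(x; zeta1, sigma) * F(x; zeta2, sigma) over (0, +inf)
        val, _ = integrate.quad(lambda x: iw_pdf(x, zeta1, sigma) * iw_cdf(x, zeta2, sigma), 0.0, np.inf)
        print(val, zeta1 / (zeta1 + zeta2))    # both are approximately 0.4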

    Let $X=(X_1,X_2,\ldots,X_m)$ be an APT-Ⅱ censored sample from $\mathrm{IW}(\zeta_1,\sigma)$ with $X_1<X_2<\cdots<X_m$ under the censoring scheme $Q=(Q_1,\ldots,Q_J,0,\ldots,0,Q_m=n_1-m-\sum_{i=1}^{J}Q_i)$ such that $X_J<T_1<X_{J+1}$. Let $Y=(Y_1,Y_2,\ldots,Y_t)$ be an APT-Ⅱ censored sample from $\mathrm{IW}(\zeta_2,\sigma)$ with $Y_1<Y_2<\cdots<Y_t$ under the censoring scheme $R=(R_1,\ldots,R_K,0,\ldots,0,R_t=n_2-t-\sum_{i=1}^{K}R_i)$ such that $Y_K<T_2<Y_{K+1}$. Denote by $x=(x_1,x_2,\ldots,x_m)$ and $y=(y_1,y_2,\ldots,y_t)$ the observations of $X$ and $Y$, respectively. The joint likelihood function can be written as

    $\ell(\zeta_1,\zeta_2,\sigma;x,y)=A_1A_2\left[\prod_{i=1}^{m}f_1(x_i)\right]\prod_{i=1}^{J}\left[1-F_1(x_i)\right]^{Q_i}\left[1-F_1(x_m)\right]^{Q_m}\left[\prod_{i=1}^{t}f_2(y_i)\right]\prod_{i=1}^{K}\left[1-F_2(y_i)\right]^{R_i}\left[1-F_2(y_t)\right]^{R_t}=A_1A_2\zeta_1^{m}\zeta_2^{t}\sigma^{m+t}\prod_{i=1}^{m}x_i^{-\sigma-1}e^{-\zeta_1x_i^{-\sigma}}\left[\prod_{i=1}^{J}(1-e^{-\zeta_1x_i^{-\sigma}})^{Q_i}\right](1-e^{-\zeta_1x_m^{-\sigma}})^{Q_m}\prod_{i=1}^{t}y_i^{-\sigma-1}e^{-\zeta_2y_i^{-\sigma}}\left[\prod_{i=1}^{K}(1-e^{-\zeta_2y_i^{-\sigma}})^{R_i}\right](1-e^{-\zeta_2y_t^{-\sigma}})^{R_t},$ (4)

    where

    $A_1=n_1(n_1-1-Q_1)(n_1-2-Q_1-Q_2)\cdots(n_1-m+1-\sum_{i=1}^{m-1}Q_i)$,
    $A_2=n_2(n_2-1-R_1)(n_2-2-R_1-R_2)\cdots(n_2-t+1-\sum_{i=1}^{t-1}R_i)$,
    $f_1(x)=\zeta_1\sigma x^{-\sigma-1}e^{-\zeta_1x^{-\sigma}}$,
    $f_2(y)=\zeta_2\sigma y^{-\sigma-1}e^{-\zeta_2y^{-\sigma}}$,
    $F_1(x)=e^{-\zeta_1x^{-\sigma}}$,
    $F_2(y)=e^{-\zeta_2y^{-\sigma}}$.

    The log-likelihood function is

    $L(\zeta_1,\zeta_2,\sigma;x,y)=\ln \ell(\zeta_1,\zeta_2,\sigma;x,y)=\ln A_1A_2+m\ln\zeta_1+t\ln\zeta_2+(m+t)\ln\sigma-(\sigma+1)\sum_{i=1}^{m}\ln x_i-\zeta_1\sum_{i=1}^{m}x_i^{-\sigma}+\sum_{i=1}^{J}Q_i\ln(1-e^{-\zeta_1x_i^{-\sigma}})+Q_m\ln(1-e^{-\zeta_1x_m^{-\sigma}})-(\sigma+1)\sum_{i=1}^{t}\ln y_i-\zeta_2\sum_{i=1}^{t}y_i^{-\sigma}+\sum_{i=1}^{K}R_i\ln(1-e^{-\zeta_2y_i^{-\sigma}})+R_t\ln(1-e^{-\zeta_2y_t^{-\sigma}}).$ (5)

    The partial derivatives of the log-likelihood function $L(\zeta_1,\zeta_2,\sigma;x,y)$ with respect to $\zeta_1$, $\zeta_2$ and $\sigma$ are given by

    $\frac{\partial L(\zeta_1,\zeta_2,\sigma;x,y)}{\partial\zeta_1}=\frac{m}{\zeta_1}-\sum_{i=1}^{m}x_i^{-\sigma}+\sum_{i=1}^{J}\frac{Q_ix_i^{-\sigma}\exp(-\zeta_1x_i^{-\sigma})}{1-\exp(-\zeta_1x_i^{-\sigma})}+\frac{Q_mx_m^{-\sigma}\exp(-\zeta_1x_m^{-\sigma})}{1-\exp(-\zeta_1x_m^{-\sigma})},$ (6)
    $\frac{\partial L(\zeta_1,\zeta_2,\sigma;x,y)}{\partial\zeta_2}=\frac{t}{\zeta_2}-\sum_{i=1}^{t}y_i^{-\sigma}+\sum_{i=1}^{K}\frac{R_iy_i^{-\sigma}\exp(-\zeta_2y_i^{-\sigma})}{1-\exp(-\zeta_2y_i^{-\sigma})}+\frac{R_ty_t^{-\sigma}\exp(-\zeta_2y_t^{-\sigma})}{1-\exp(-\zeta_2y_t^{-\sigma})},$ (7)
    $\frac{\partial L(\zeta_1,\zeta_2,\sigma;x,y)}{\partial\sigma}=\frac{m+t}{\sigma}+\sum_{i=1}^{m}(\zeta_1x_i^{-\sigma}\ln x_i-\ln x_i)+\sum_{i=1}^{t}(\zeta_2y_i^{-\sigma}\ln y_i-\ln y_i)-\zeta_1\sum_{i=1}^{J}\frac{Q_ix_i^{-\sigma}\exp(-\zeta_1x_i^{-\sigma})\ln x_i}{1-\exp(-\zeta_1x_i^{-\sigma})}-\zeta_2\sum_{i=1}^{K}\frac{R_iy_i^{-\sigma}\exp(-\zeta_2y_i^{-\sigma})\ln y_i}{1-\exp(-\zeta_2y_i^{-\sigma})}-\zeta_1\frac{Q_mx_m^{-\sigma}\exp(-\zeta_1x_m^{-\sigma})\ln x_m}{1-\exp(-\zeta_1x_m^{-\sigma})}-\zeta_2\frac{R_ty_t^{-\sigma}\exp(-\zeta_2y_t^{-\sigma})\ln y_t}{1-\exp(-\zeta_2y_t^{-\sigma})},$ (8)
    $\begin{cases}\frac{\partial L(\zeta_1,\zeta_2,\sigma;x,y)}{\partial\zeta_1}=0\\ \frac{\partial L(\zeta_1,\zeta_2,\sigma;x,y)}{\partial\zeta_2}=0\\ \frac{\partial L(\zeta_1,\zeta_2,\sigma;x,y)}{\partial\sigma}=0.\end{cases}$ (9)

    The MLEs $\hat\zeta_{1,ML}$, $\hat\zeta_{2,ML}$ and $\hat\sigma_{ML}$ are the solutions of the likelihood equations (9). Because of their nonlinearity, we propose an iteration method to obtain approximate solutions. By the invariance property of maximum likelihood estimation, the MLE $\hat\delta_{ML}$ of $\delta$ can be written as

    $\hat\delta_{ML}=\frac{\hat\zeta_{1,ML}}{\hat\zeta_{1,ML}+\hat\zeta_{2,ML}}.$ (10)
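
    Because Eq (9) has no closed-form solution, the MLEs must be found numerically. The paper uses an iteration method; the sketch below instead maximizes the log-likelihood (5) directly with a generic SciPy optimizer (the data x, y, the schemes Q, R and the counts J, K are assumed given; the starting values are illustrative):

        import numpy as np
        from scipy.optimize import minimize

        def neg_loglik(theta, x, Q, J, y, R, K):
            # negative joint log-likelihood of Eq (5); the constant ln(A1*A2) is dropped
            z1, z2, s = np.exp(theta)          # log-parametrization keeps zeta1, zeta2, sigma > 0
            def part(data, scheme, idx, zeta):
                u = data ** (-s)
                ll = len(data) * np.log(zeta) - (s + 1.0) * np.log(data).sum() - zeta * u.sum()
                ll += (scheme[:idx] * np.log(1.0 - np.exp(-zeta * u[:idx]))).sum()
                ll += scheme[-1] * np.log(1.0 - np.exp(-zeta * u[-1]))
                return ll
            ll = (len(x) + len(y)) * np.log(s) + part(x, Q, J, z1) + part(y, R, K, z2)
            return -ll

        # x, y: APT-II censored observations (NumPy arrays); Q, R: removal schemes (NumPy arrays)
        # res = minimize(neg_loglik, x0=np.log([1.0, 1.0, 1.0]),
        #                args=(x, Q, J, y, R, K), method="Nelder-Mead")
        # z1_hat, z2_hat, s_hat = np.exp(res.x)
        # delta_ml = z1_hat / (z1_hat + z2_hat)          # Eq (10)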

    Since an explicit form of $\hat\delta_{ML}$ cannot be obtained in Section 3.1, we now consider approximate maximum likelihood estimation.

    Let $W=-\ln X$ and $V=-\ln Y$. The CDFs of $W$ and $V$ are easily obtained:

    $F_W(w)=P(W\le w)=1-P(X\le e^{-w})=1-\exp(-\zeta_1e^{\sigma w}),$ (11)
    $F_V(v)=P(V\le v)=1-P(Y\le e^{-v})=1-\exp(-\zeta_2e^{\sigma v}).$ (12)

    Let $\sigma=\beta^{-1}$, $\zeta_1=e^{-\sigma\alpha_1}$ and $\zeta_2=e^{-\sigma\alpha_2}$. The CDFs (11) and (12) can be rewritten as

    $F_W(w)=1-\exp\left[-\exp\left(\frac{w-\alpha_1}{\beta}\right)\right],$ (13)
    $F_V(v)=1-\exp\left[-\exp\left(\frac{v-\alpha_2}{\beta}\right)\right].$ (14)

    It is obvious that $W$ and $V$ follow the extreme value distribution. Denote $W\sim\mathrm{EV}(\alpha_1,\beta)$ and $V\sim\mathrm{EV}(\alpha_2,\beta)$. We set $w_j=-\ln x_j$ $(j=1,2,\ldots,m)$ and $v_k=-\ln y_k$ $(k=1,2,\ldots,t)$.

    Given the observations $w_1,w_2,\ldots,w_m$ and $v_1,v_2,\ldots,v_t$, the log-likelihood function of $\alpha_1$, $\alpha_2$ and $\beta$ is

    $L_{WV}=A_3-(m+t)\ln\beta+\sum_{j=1}^{m}\omega_j-\sum_{j=1}^{m}e^{\omega_j}-\sum_{j=1}^{J}Q_je^{\omega_j}-Q_me^{\omega_m}+\sum_{k=1}^{t}\upsilon_k-\sum_{k=1}^{t}e^{\upsilon_k}-\sum_{k=1}^{K}R_ke^{\upsilon_k}-R_te^{\upsilon_t},$ (15)

    where $\omega_j=\beta^{-1}(w_j-\alpha_1)$, $\upsilon_k=\beta^{-1}(v_k-\alpha_2)$ and $A_3$ is a constant.

    Next, we expand the functions $e^{\omega_j}$ and $e^{\upsilon_k}$ around $\omega_j^0=\ln[-\ln(1-p_j)]$ and $\upsilon_k^0=\ln[-\ln(1-q_k)]$, respectively, retaining only the terms up to the first derivative:

    $e^{\omega_j}\approx a_{x,j}+b_{x,j}\omega_j,$ (16)
    $e^{\upsilon_k}\approx a_{y,k}+b_{y,k}\upsilon_k,$ (17)

    where

    $p_j=1-\prod_{i=m-j+1}^{m}\frac{i+Q_{m-i+1}+\cdots+Q_m}{i+1+Q_{m-i+1}+\cdots+Q_m}$ , $a_{x,j}=e^{\omega_j^0}(1-\omega_j^0)$ , $b_{x,j}=e^{\omega_j^0}$,
    $q_k=1-\prod_{i=t-k+1}^{t}\frac{i+R_{t-i+1}+\cdots+R_t}{i+1+R_{t-i+1}+\cdots+R_t}$ , $a_{y,k}=e^{\upsilon_k^0}(1-\upsilon_k^0)$ , $b_{y,k}=e^{\upsilon_k^0}$.

    Thus,

    $\frac{\partial L_{WV}}{\partial\alpha_1}\approx-\frac{1}{\beta}\left[m-\sum_{j=1}^{m}(a_{x,j}+b_{x,j}\omega_j)-\sum_{j=1}^{J}Q_j(a_{x,j}+b_{x,j}\omega_j)-Q_m(a_{x,m}+b_{x,m}\omega_m)\right],$ (18)
    $\frac{\partial L_{WV}}{\partial\alpha_2}\approx-\frac{1}{\beta}\left[t-\sum_{k=1}^{t}(a_{y,k}+b_{y,k}\upsilon_k)-\sum_{k=1}^{K}R_k(a_{y,k}+b_{y,k}\upsilon_k)-R_t(a_{y,t}+b_{y,t}\upsilon_t)\right],$ (19)
    $\frac{\partial L_{WV}}{\partial\beta}\approx-\frac{1}{\beta}\Big[m+t+\sum_{j=1}^{m}\omega_j+\sum_{k=1}^{t}\upsilon_k-\sum_{j=1}^{m}\omega_j(a_{x,j}+b_{x,j}\omega_j)-\sum_{k=1}^{t}\upsilon_k(a_{y,k}+b_{y,k}\upsilon_k)-\sum_{j=1}^{J}Q_j\omega_j(a_{x,j}+b_{x,j}\omega_j)-\sum_{k=1}^{K}R_k\upsilon_k(a_{y,k}+b_{y,k}\upsilon_k)-Q_m\omega_m(a_{x,m}+b_{x,m}\omega_m)-R_t\upsilon_t(a_{y,t}+b_{y,t}\upsilon_t)\Big],$ (20)
    $\begin{cases}\frac{\partial L_{WV}}{\partial\alpha_1}=0\\ \frac{\partial L_{WV}}{\partial\alpha_2}=0\\ \frac{\partial L_{WV}}{\partial\beta}=0.\end{cases}$ (21)

    The solutions of likelihood Eq (21) are

    $\begin{cases}\hat\alpha_1=(B_x-A_x\hat\beta)\,C_x^{-1}\\ \hat\alpha_2=(B_y-A_y\hat\beta)\,C_y^{-1}\\ \hat\beta=\left[\sqrt{(DC_x^2-A_xB_xC_x)^2-4mC_x^2(B_x^2C_x-EC_x^2)}+A_xB_xC_x-DC_x^2\right](2mC_x^2)^{-1},\end{cases}$ (22)

    where

    $A_x=m-\sum_{j=1}^{m}a_{x,j}-\sum_{j=1}^{J}Q_ja_{x,j}-Q_ma_{x,m}$,
    $A_y=t-\sum_{k=1}^{t}a_{y,k}-\sum_{k=1}^{K}R_ka_{y,k}-R_ta_{y,t}$,
    $B_x=\sum_{j=1}^{m}b_{x,j}w_j+\sum_{j=1}^{J}Q_jb_{x,j}w_j+Q_mb_{x,m}w_m$,
    $B_y=\sum_{k=1}^{t}b_{y,k}v_k+\sum_{k=1}^{K}R_kb_{y,k}v_k+R_tb_{y,t}v_t$,
    $C_x=\sum_{j=1}^{m}b_{x,j}+\sum_{j=1}^{J}Q_jb_{x,j}+Q_mb_{x,m}$,
    $C_y=\sum_{k=1}^{t}b_{y,k}+\sum_{k=1}^{K}R_kb_{y,k}+R_tb_{y,t}$,
    $D=\sum_{j=1}^{m}w_j-\sum_{j=1}^{m}a_{x,j}w_j-\sum_{j=1}^{J}Q_ja_{x,j}w_j-Q_ma_{x,m}w_m$,
    $E=\sum_{j=1}^{m}b_{x,j}w_j^2+\sum_{j=1}^{J}Q_jb_{x,j}w_j^2+Q_mb_{x,m}w_m^2$.

    Hence, the AMLE of δ is given by

    $\hat\delta_{AML}=\frac{\hat\zeta_{1,AML}}{\hat\zeta_{1,AML}+\hat\zeta_{2,AML}},$ (23)

    where

    $\hat\sigma_{AML}=\hat\beta^{-1}$ , $\hat\zeta_{1,AML}=\exp(-\hat\sigma_{AML}\hat\alpha_1)$ , $\hat\zeta_{2,AML}=\exp(-\hat\sigma_{AML}\hat\alpha_2)$.
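
    The key computable ingredients of the AMLE are the plotting positions $p_j$, $q_k$ and the Taylor anchors in Eqs (16) and (17). A small sketch, assuming the reconstruction of $p_j$ above (note that with no removals it reduces to the familiar $p_j=j/(m+1)$):

        import numpy as np

        def plotting_positions(scheme):
            # p_j = 1 - prod_{i=m-j+1}^{m} (i + Q_{m-i+1}+...+Q_m) / (i + 1 + Q_{m-i+1}+...+Q_m)
            Q = np.asarray(scheme, dtype=float)
            m = len(Q)
            p = np.empty(m)
            for j in range(1, m + 1):
                prod = 1.0
                for i in range(m - j + 1, m + 1):
                    tail = Q[m - i:].sum()          # Q_{m-i+1} + ... + Q_m (1-based indices)
                    prod *= (i + tail) / (i + 1.0 + tail)
                p[j - 1] = 1.0 - prod
            return p

        def taylor_anchors(p):
            # omega0_j = ln(-ln(1 - p_j)); a_j = e^{omega0_j}(1 - omega0_j), b_j = e^{omega0_j}
            w0 = np.log(-np.log(1.0 - p))
            return np.exp(w0) * (1.0 - w0), np.exp(w0)

        # sanity check: without removals, p_j = j / (m + 1)
        print(plotting_positions([0, 0, 0, 0]))     # [0.2, 0.4, 0.6, 0.8]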

    It can be seen from Section 3.1 that the MLE of $\delta$ cannot be given in an explicit form, so an exact confidence interval cannot be constructed. Based on the asymptotic normality of maximum likelihood estimation, we construct the ACI of $\delta$ in this subsection.

    Denote $\theta=(\zeta_1,\zeta_2,\sigma)$ and $\hat\theta_{ML}=(\hat\zeta_{1,ML},\hat\zeta_{2,ML},\hat\sigma_{ML})$. The observed Fisher information matrix can be expressed as

    $H=\begin{bmatrix}H_{11}(\hat\theta_{ML})&H_{12}(\hat\theta_{ML})&H_{13}(\hat\theta_{ML})\\H_{21}(\hat\theta_{ML})&H_{22}(\hat\theta_{ML})&H_{23}(\hat\theta_{ML})\\H_{31}(\hat\theta_{ML})&H_{32}(\hat\theta_{ML})&H_{33}(\hat\theta_{ML})\end{bmatrix}.$ (24)

    Here,

    $H_{11}(\theta)=\frac{\partial^2L(\zeta_1,\zeta_2,\sigma;x,y)}{\partial\zeta_1^2}=-\frac{m}{\zeta_1^2}-\sum_{i=1}^{J}\frac{Q_ix_i^{-2\sigma}e^{-\zeta_1x_i^{-\sigma}}}{(1-e^{-\zeta_1x_i^{-\sigma}})^2}-\frac{Q_mx_m^{-2\sigma}e^{-\zeta_1x_m^{-\sigma}}}{(1-e^{-\zeta_1x_m^{-\sigma}})^2}$,
    $H_{22}(\theta)=\frac{\partial^2L(\zeta_1,\zeta_2,\sigma;x,y)}{\partial\zeta_2^2}=-\frac{t}{\zeta_2^2}-\sum_{i=1}^{K}\frac{R_iy_i^{-2\sigma}e^{-\zeta_2y_i^{-\sigma}}}{(1-e^{-\zeta_2y_i^{-\sigma}})^2}-\frac{R_ty_t^{-2\sigma}e^{-\zeta_2y_t^{-\sigma}}}{(1-e^{-\zeta_2y_t^{-\sigma}})^2}$,
    $H_{13}(\theta)=\frac{\partial^2L(\zeta_1,\zeta_2,\sigma;x,y)}{\partial\zeta_1\partial\sigma}=\sum_{i=1}^{m}x_i^{-\sigma}\ln x_i+\zeta_1\sum_{i=1}^{J}\frac{Q_ix_i^{-\sigma}e^{-\zeta_1x_i^{-\sigma}}\ln x_i}{(1-e^{-\zeta_1x_i^{-\sigma}})^2}+\zeta_1\frac{Q_mx_m^{-\sigma}e^{-\zeta_1x_m^{-\sigma}}\ln x_m}{(1-e^{-\zeta_1x_m^{-\sigma}})^2}$,
    $H_{23}(\theta)=\frac{\partial^2L(\zeta_1,\zeta_2,\sigma;x,y)}{\partial\zeta_2\partial\sigma}=\sum_{i=1}^{t}y_i^{-\sigma}\ln y_i+\zeta_2\sum_{i=1}^{K}\frac{R_iy_i^{-\sigma}e^{-\zeta_2y_i^{-\sigma}}\ln y_i}{(1-e^{-\zeta_2y_i^{-\sigma}})^2}+\zeta_2\frac{R_ty_t^{-\sigma}e^{-\zeta_2y_t^{-\sigma}}\ln y_t}{(1-e^{-\zeta_2y_t^{-\sigma}})^2}$,
    $H_{33}(\theta)=\frac{\partial^2L(\zeta_1,\zeta_2,\sigma;x,y)}{\partial\sigma^2}=-\frac{m+t}{\sigma^2}-\zeta_1\sum_{i=1}^{m}x_i^{-\sigma}(\ln x_i)^2-\zeta_2\sum_{i=1}^{t}y_i^{-\sigma}(\ln y_i)^2+\zeta_1\sum_{i=1}^{J}Q_ix_i^{-\sigma}e^{-\zeta_1x_i^{-\sigma}}(\ln x_i)^2+\zeta_2\sum_{i=1}^{K}R_iy_i^{-\sigma}e^{-\zeta_2y_i^{-\sigma}}(\ln y_i)^2+\zeta_1Q_mx_m^{-\sigma}e^{-\zeta_1x_m^{-\sigma}}(\ln x_m)^2+\zeta_2R_ty_t^{-\sigma}e^{-\zeta_2y_t^{-\sigma}}(\ln y_t)^2-\zeta_1^2\sum_{i=1}^{J}\frac{Q_ix_i^{-2\sigma}e^{-\zeta_1x_i^{-\sigma}}(\ln x_i)^2(1+e^{-\zeta_1x_i^{-\sigma}})}{1-e^{-\zeta_1x_i^{-\sigma}}}-\zeta_1^2\frac{Q_mx_m^{-2\sigma}e^{-\zeta_1x_m^{-\sigma}}(\ln x_m)^2(1+e^{-\zeta_1x_m^{-\sigma}})}{1-e^{-\zeta_1x_m^{-\sigma}}}-\zeta_2^2\sum_{i=1}^{K}\frac{R_iy_i^{-2\sigma}e^{-\zeta_2y_i^{-\sigma}}(\ln y_i)^2(1+e^{-\zeta_2y_i^{-\sigma}})}{1-e^{-\zeta_2y_i^{-\sigma}}}-\zeta_2^2\frac{R_ty_t^{-2\sigma}e^{-\zeta_2y_t^{-\sigma}}(\ln y_t)^2(1+e^{-\zeta_2y_t^{-\sigma}})}{1-e^{-\zeta_2y_t^{-\sigma}}}$,
    $H_{12}(\theta)=H_{21}(\theta)=0$ , $H_{31}(\theta)=H_{13}(\theta)$ , $H_{32}(\theta)=H_{23}(\theta)$.

    Next, the Delta method is used to derive the ACI of $\delta$. Let $\phi=(\phi_1(\hat\theta_{ML}),\phi_2(\hat\theta_{ML}),\phi_3(\hat\theta_{ML}))^T$, and

    $\phi_1(\theta)=\frac{\partial\delta}{\partial\zeta_1}=\frac{\zeta_2}{(\zeta_1+\zeta_2)^2}$ , $\phi_2(\theta)=\frac{\partial\delta}{\partial\zeta_2}=-\frac{\zeta_1}{(\zeta_1+\zeta_2)^2}$ , $\phi_3(\theta)=\frac{\partial\delta}{\partial\sigma}=0$.

    According to the Delta method, the variance $\mathrm{Var}(\hat\delta_{ML})$ is approximated by Eq (25), where $H^{-1}$ is the inverse of the Fisher information matrix $H$.

    $\mathrm{Var}(\hat\delta_{ML})=\phi^TH^{-1}\phi.$ (25)

    Then, the $100(1-\lambda)\%$ ACI of $\delta$ is given by Eq (26), where $z_{\lambda/2}$ is the upper $\lambda/2$ quantile of the standard normal distribution.

    $\left(\hat\delta_{ML}-z_{\lambda/2}\sqrt{\mathrm{Var}(\hat\delta_{ML})}\ ,\ \hat\delta_{ML}+z_{\lambda/2}\sqrt{\mathrm{Var}(\hat\delta_{ML})}\right).$ (26)
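
    A short sketch of Eqs (25) and (26), assuming the MLEs and a positive-definite observed information matrix H evaluated at them are already available:

        import numpy as np
        from scipy.stats import norm

        def delta_aci(zeta1_hat, zeta2_hat, H, lam=0.05):
            # Delta-method variance (25) and 100(1 - lam)% asymptotic interval (26) for delta
            s = zeta1_hat + zeta2_hat
            phi = np.array([zeta2_hat / s**2, -zeta1_hat / s**2, 0.0])
            var = phi @ np.linalg.inv(H) @ phi
            d_hat = zeta1_hat / s
            half = norm.ppf(1.0 - lam / 2.0) * np.sqrt(var)
            return d_hat - half, d_hat + half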

    In this section, we assume that $\zeta_1$ and $\zeta_2$ are independent random variables following gamma priors. The BEs of $\delta$ are derived under the symmetric entropy loss function and the LINEX loss function.

    In Bayesian estimation, selecting a prior distribution for the unknown parameters is a significant matter. First, the gamma prior is versatile enough to accommodate different shapes of the prior density. Second, the gamma prior is relatively simple and does not lead to overly complicated computational issues; its advantage lies in conjugacy and mathematical convenience. As a result, we adopt gamma priors. The prior distributions of $\zeta_1$ and $\zeta_2$ are given as

    $\pi(\zeta_1)\propto\zeta_1^{a_1-1}\exp(-b_1\zeta_1)$, $a_1,b_1>0$, (27)
    $\pi(\zeta_2)\propto\zeta_2^{a_2-1}\exp(-b_2\zeta_2)$, $a_2,b_2>0$. (28)

    Denote $\zeta_1\sim G(a_1,b_1)$ and $\zeta_2\sim G(a_2,b_2)$. The joint prior is

    $\pi(\zeta_1,\zeta_2)\propto\zeta_1^{a_1-1}\zeta_2^{a_2-1}\exp(-b_1\zeta_1-b_2\zeta_2).$ (29)

    Therefore, the joint posterior distribution given observation data is

    $\pi(\zeta_1,\zeta_2,\sigma|x,y)=A_4\zeta_1^{m+a_1-1}\zeta_2^{t+a_2-1}\sigma^{m+t}e^{-b_1\zeta_1-b_2\zeta_2}\prod_{i=1}^{m}x_i^{-\sigma-1}e^{-\zeta_1x_i^{-\sigma}}\prod_{i=1}^{t}y_i^{-\sigma-1}e^{-\zeta_2y_i^{-\sigma}}\prod_{i=1}^{J}(1-e^{-\zeta_1x_i^{-\sigma}})^{Q_i}\left[\prod_{i=1}^{K}(1-e^{-\zeta_2y_i^{-\sigma}})^{R_i}\right](1-e^{-\zeta_1x_m^{-\sigma}})^{Q_m}(1-e^{-\zeta_2y_t^{-\sigma}})^{R_t},$ (30)

    and $A_4^{-1}$ equals the triple integral of the right-hand side of Eq (30), without the factor $A_4$, over $\zeta_1,\zeta_2,\sigma\in(0,+\infty)$, so that the posterior integrates to one.

    Let \hat \rho be the estimator of \rho . The symmetric entropy loss function (Xu et al. [31]) and LINEX loss function (Varian [32]) are defined as

    {L_S}(\rho ,\hat \rho ) = \frac{{\hat \rho }}{\rho } + \frac{\rho }{{\hat \rho }} - 2 , (31)
    {L_E}(\rho ,\hat \rho ) = \exp [d(\hat \rho - \rho )] - d(\hat \rho - \rho ) - 1 , (32)

    where $d$ is the hyper-parameter of the LINEX loss function. Given observations $x$ and $y$, the BEs of $\rho$ under the symmetric entropy loss function and the LINEX loss function are given by Eqs (33) and (34), where ${\rm{E}}(\cdot |x, y)$ denotes the posterior expectation.

    {\hat \rho _S} = {[\frac{{{\rm{E}} (\rho |x,y)}}{{{\rm{E}} ({\rho ^{ - 1}}|x,y)}}]^{\frac{1}{2}}} (33)
    {\hat \rho _E} = - \frac{1}{d}\ln [{\rm{E}} ({e^{ - d\rho }}|x,y)] (34)

    Thus, based on APT-Ⅱ censored samples, the BE {\hat \delta _S} of \delta under symmetric entropy loss function is given by

    \begin{array}{l} {{\hat \delta }_S} = {[\frac{{{\rm{E}} (\delta |x,y)}}{{{\rm{E}} ({\delta ^{ - 1}}|x,y)}}]^{\frac{1}{2}}} \\ = {[\frac{{\int_0^{ + \infty } {\int_0^{ + \infty } {\int_0^{ + \infty } {\delta \pi ({\zeta _1},{\zeta _2},\sigma |x,y)} } } d{\zeta _1}d{\zeta _2}d\sigma }}{{\int_0^{ + \infty } {\int_0^{ + \infty } {\int_0^{ + \infty } {{\delta ^{ - 1}}\pi ({\zeta _1},{\zeta _2},\sigma |x,y)} } } d{\zeta _1}d{\zeta _2}d\sigma }}]^{\frac{1}{2}}} \\ = \{ (\int_0^{ + \infty } {\int_0^{ + \infty } {\int_0^{ + \infty } {\zeta _1^{m + {a_1}}} } } \zeta _2^{t + {a_2} - 1}{({\zeta _1} + {\zeta _2})^{ - 1}}{\sigma ^{m + t}}{e^{ - {b_1}{\zeta _1} - {b_2}{\zeta _2}}}\prod\limits_{i = 1}^m {x_i^{ - \sigma - 1}{e^{ - {\zeta _1}x_i^{ - \sigma }}}} \prod\limits_{i = 1}^t {y_i^{ - \sigma - 1}{e^{ - {\zeta _2}y_i^{ - \sigma }}}} \\ \prod\limits_{i = 1}^J {{{(1 - {e^{ - {\zeta _1}x_i^{ - \sigma }}})}^{{Q_i}}}} [\prod\limits_{i = 1}^K {{{(1 - {e^{ - {\zeta _2}y_i^{ - \sigma }}})}^{{R_i}}}} ]{(1 - {e^{ - {\zeta _1}x_m^{ - \sigma }}})^{{Q_m}}}{(1 - {e^{ - {\zeta _2}y_t^{ - \sigma }}})^{{R_t}}}d{\zeta _1}d{\zeta _2}d\sigma ) \\ [\int_0^{ + \infty } {\int_0^{ + \infty } {\int_0^{ + \infty } {\zeta _1^{m + {a_1} - 2}} } } \zeta _2^{t + {a_2} - 1}({\zeta _1} + {\zeta _2}){\sigma ^{m + t}}{e^{ - {b_1}{\zeta _1} - {b_2}{\zeta _2}}}\prod\limits_{i = 1}^m {x_i^{ - \sigma - 1}{e^{ - {\zeta _1}x_i^{ - \sigma }}}} \prod\limits_{i = 1}^t {y_i^{ - \sigma - 1}{e^{ - {\zeta _2}y_i^{ - \sigma }}}} \\ \prod\limits_{i = 1}^J {{{(1 - {e^{ - {\zeta _1}x_i^{ - \sigma }}})}^{{Q_i}}}} [\prod\limits_{i = 1}^K {{{(1 - {e^{ - {\zeta _2}y_i^{ - \sigma }}})}^{{R_i}}}} ]{(1 - {e^{ - {\zeta _1}x_m^{ - \sigma }}})^{{Q_m}}}{(1 - {e^{ - {\zeta _2}y_t^{ - \sigma }}})^{{R_t}}}d{\zeta _1}d{\zeta _2}d\sigma {]^{ - 1}}{\} ^{\frac{1}{2}}} \\ \end{array} . (35)

    The BE {\hat \delta _E} of \delta under LINEX loss function is given by

    \begin{array}{l} {{\hat \delta }_E} = - \frac{1}{d}\ln [{\rm{E}} ({e^{ - d\delta }}|x,y)] \\ = - \frac{1}{d}\ln [{A_4}\int_0^{ + \infty } {\int_0^{ + \infty } {\int_0^{ + \infty } {\zeta _1^{m + {a_1} - 1}} } } \zeta _2^{t + {a_2} - 1}{\sigma ^{m + t}}{e^{ - ({b_1} + d){\zeta _1} - {b_2}{\zeta _2} - d{{({\zeta _1} + {\zeta _2})}^{ - 1}}}}\prod\limits_{i = 1}^m {x_i^{ - \sigma - 1}{e^{ - {\zeta _1}x_i^{ - \sigma }}}} \\ \prod\limits_{i = 1}^t {y_i^{ - \sigma - 1}{e^{ - {\zeta _2}y_i^{ - \sigma }}}} \prod\limits_{i = 1}^J {{{(1 - {e^{ - {\zeta _1}x_i^{ - \sigma }}})}^{{Q_i}}}} [\prod\limits_{i = 1}^K {{{(1 - {e^{ - {\zeta _2}y_i^{ - \sigma }}})}^{{R_i}}}} ]{(1 - {e^{ - {\zeta _1}x_m^{ - \sigma }}})^{{Q_m}}}{(1 - {e^{ - {\zeta _2}y_t^{ - \sigma }}})^{{R_t}}}d{\zeta _1}d{\zeta _2}d\sigma ] \\ \end{array} . (36)

    It can be seen that both Eqs (35) and (36) involve ratios of integrals whose forms are complex. Hence, we use Lindley's approximation (Lindley [33]) to compute approximate Bayesian estimates. Lindley's approximation provides a way to approximate posterior expectations of the following form.

    {\rm{E}} [\eta ({\zeta _1},{\zeta _2},\sigma )|x,y] = \frac{{\int {\eta ({\zeta _1},{\zeta _2},\sigma ){e^{L({\zeta _1},{\zeta _2},\sigma ;x,y) + {\pi ^*}({\zeta _1},{\zeta _2},\sigma )}}} d({\zeta _1},{\zeta _2},\sigma )}}{{\int {{e^{L({\zeta _1},{\zeta _2},\sigma ;x,y) + {\pi ^*}({\zeta _1},{\zeta _2},\sigma )}}} d({\zeta _1},{\zeta _2},\sigma )}} . (37)

    In Eq (37), \eta ({\zeta _1}, {\zeta _2}, \sigma) is a function of {\zeta _1}, {\zeta _2} and \sigma , and {\pi ^*}({\zeta _1}, {\zeta _2}, \sigma) = \ln \pi ({\zeta _1}, {\zeta _2}, \sigma) . According to Lindley's approximation, the form of posterior expectation (37) can be rewritten as

    \begin{array}{l} {\rm{E}} [\eta ({\zeta _1},{\zeta _2},\sigma )|x,y] = \eta + \frac{1}{2}[({\eta _{11}} + 2{\eta _1}\pi _1^*){\varphi _{11}} + ({\eta _{21}} + 2{\eta _2}\pi _1^*){{\hat \varphi }_{21}} + ({\eta _{12}} + 2{\eta _1}\pi _2^*){\varphi _{12}} \\ + ({\eta _{22}} + 2{\eta _2}\pi _2^*){\varphi _{22}} + ({\eta _1}{\varphi _{11}} + {\eta _2}{\varphi _{12}})({L_{111}}{\varphi _{11}} + {L_{121}}{\varphi _{12}} + {L_{211}}{\varphi _{21}} + {L_{221}}{\varphi _{22}}) \\ + ({\eta _1}{\varphi _{21}} + {\eta _2}{\varphi _{22}})({L_{112}}{\varphi _{11}} + {L_{122}}{\varphi _{12}} + {L_{212}}{\varphi _{21}} + {L_{222}}{\varphi _{22}})] \\ \end{array} , (38)

    where

    {L_{111}} = \frac{{{\partial ^3}L({\zeta _1},{\zeta _2},\sigma ;x,y)}}{{\partial \zeta _1^3}} = \frac{{2m}}{{\zeta _1^3}} + \sum\limits_{i = 1}^J {\frac{{{Q_i}x_i^{ - 3\sigma }{e^{ - {\zeta _1}x_i^{ - \sigma }}}}}{{{{(1 - {e^{ - {\zeta _1}x_i^{ - \sigma }}})}^3}}}} + \frac{{{Q_m}x_m^{ - 3\sigma }{e^{ - {\zeta _1}x_m^{ - \sigma }}}}}{{{{(1 - {e^{ - {\zeta _1}x_m^{ - \sigma }}})}^3}}} ,
    {L_{222}} = \frac{{{\partial ^3}L({\zeta _1},{\zeta _2},\sigma ;x,y)}}{{\partial \zeta _2^3}} = \frac{{2t}}{{\zeta _2^3}} + \sum\limits_{i = 1}^K {\frac{{{R_i}y_i^{ - 3\sigma }{e^{ - {\zeta _2}y_i^{ - \sigma }}}}}{{{{(1 - {e^{ - {\zeta _2}y_i^{ - \sigma }}})}^3}}}} + \frac{{{R_t}y_t^{ - 3\sigma }{e^{ - {\zeta _2}y_t^{ - \sigma }}}}}{{{{(1 - {e^{ - {\zeta _2}y_t^{ - \sigma }}})}^3}}} ,
    \pi _1^* = \frac{{{a_1} - 1}}{{{\zeta _1}}} - {b_1}{\text{ , }}\pi _2^* = \frac{{{a_2} - 1}}{{{\zeta _2}}} - {b_2} ,
    {L_{121}} = {L_{211}} = {L_{221}} = {L_{112}} = {L_{122}} = {L_{212}} = 0 ,
    \varphi = {\left[ {\begin{array}{*{20}{c}} { - \frac{{{\partial ^2}L({\zeta _1},{\zeta _2},\sigma ;x,y)}}{{\partial \zeta _1^2}}}&0 \\ 0&{ - \frac{{{\partial ^2}L({\zeta _1},{\zeta _2},\sigma ;x,y)}}{{\partial \zeta _2^2}}} \end{array}} \right]^{ - 1}} ,

    and {\varphi _{ij}} (i, j = 1, 2) is the element of \varphi .

    Under the symmetric entropy loss function, we need to approximate ${\rm{E}} (\delta |x, y) and {\rm{E}} ({\delta ^{ - 1}}|x, y) by referring to Eq (38). Let \eta = \eta ({\zeta _1}, {\zeta _2}) be a function of {\zeta _1} and {\zeta _2}, and denote

    {\eta _1} = \frac{{\partial \eta }}{{\partial {\zeta _1}}},{\eta _2} = \frac{{\partial \eta }}{{\partial {\zeta _2}}},{\eta _{11}} = \frac{{{\partial ^2}\eta }}{{\partial \zeta _1^2}},{\eta _{22}} = \frac{{{\partial ^2}\eta }}{{\partial \zeta _2^2}},{\eta _{12}} = \frac{{{\partial ^2}\eta }}{{\partial {\zeta _1}\partial {\zeta _2}}}{\text{ ,}}{\eta _{21}} = \frac{{{\partial ^2}\eta }}{{\partial {\zeta _2}\partial {\zeta _1}}}.

    When the function mentioned in Eq (37) is \eta = {\zeta _1}{({\zeta _1} + {\zeta _2})^{ - 1}}, the partial derivatives are

    \begin{array}{l} {\eta _1} = \frac{{{\zeta _2}}}{{{{({\zeta _1} + {\zeta _2})}^2}}}{\text{ }},{\text{ }}{\eta _2} = \frac{{ - {\zeta _1}}}{{{{({\zeta _1} + {\zeta _2})}^2}}}, \\ {\eta _{11}} = \frac{{ - 2{\zeta _2}}}{{{{({\zeta _1} + {\zeta _2})}^3}}}{\text{ }},{\text{ }}{\eta _{12}} = \frac{{{\zeta _1} - {\zeta _2}}}{{{{({\zeta _1} + {\zeta _2})}^3}}}{\text{ }},{\text{ }}{\eta _{22}} = \frac{{2{\zeta _1}}}{{{{({\zeta _1} + {\zeta _2})}^3}}}{\text{ }},{\text{ }}{\eta _{21}} = {\eta _{12}}. \\ \end{array} (39)

    Therefore,

    \begin{array}{l} {\rm{E}} (\delta |x,y) = \frac{{{\zeta _1}}}{{{\zeta _1} + {\zeta _2}}} + [\frac{{ - {\zeta _2}}}{{{{({\zeta _1} + {\zeta _2})}^3}}} + \frac{{{\zeta _2}\pi _1^*}}{{{{({\zeta _1} + {\zeta _2})}^2}}}]{\varphi _{11}} + [\frac{{{\zeta _1}}}{{{{({\zeta _1} + {\zeta _2})}^3}}} - \frac{{{\zeta _1}\pi _2^*}}{{{{({\zeta _1} + {\zeta _2})}^2}}}]{\varphi _{22}} \\ + \frac{1}{2}[(\frac{{{\zeta _2}}}{{{{({\zeta _1} + {\zeta _2})}^2}}}\varphi _{11}^2{L_{111}} - \frac{{{\zeta _1}}}{{{{({\zeta _1} + {\zeta _2})}^2}}}\varphi _{22}^2{L_{222}}] \\ \end{array} . (40)

    When the function mentioned in Eq (37) is \eta = \zeta _1^{ - 1}({\zeta _1} + {\zeta _2}), the partial derivatives are

    {\eta _1} = \frac{{ - {\zeta _2}}}{{\zeta _1^2}}{\text{ }},{\text{ }}{\eta _2} = \frac{1}{{{\zeta _1}}}{\text{ }},{\text{ }}{\eta _{11}} = \frac{{2{\zeta _2}}}{{\zeta _1^3}}{\text{ }},{\text{ }}{\eta _{12}} = \frac{{ - 1}}{{\zeta _1^2}}{\text{ }},{\text{ }}{\eta _{22}} = 0{\text{ }},{\text{ }}{\eta _{21}} = {\eta _{12}} . (41)

    Therefore,

    {\rm{E}} ({\delta ^{ - 1}}|x,y) = 1 + \frac{{{\zeta _2}}}{{{\zeta _1}}} + (\frac{{{\zeta _2}}}{{\zeta _1^3}} - \frac{{{\zeta _2}}}{{\zeta _1^2}}\pi _1^*){\varphi _{11}} + \frac{1}{{{\zeta _1}}}\pi _2^*{\varphi _{22}} + \frac{1}{2}(\frac{1}{{{\zeta _1}}}{L_{222}}\varphi _{22}^2 - \frac{{{\zeta _2}}}{{\zeta _1^2}}\varphi _{11}^2{L_{111}}) . (42)

    The BE {\hat \delta _S} is given by Eq (43).

    {\hat \delta _S} = {[\frac{{{\rm{E}} (\delta |x,y)}}{{{\rm{E}} ({\delta ^{ - 1}}|x,y)}}]^{\frac{1}{2}}}{|_{({\zeta _1},{\zeta _2},\sigma ) = ({{\hat \zeta }_{1,ML}},{{\hat \zeta }_{2,ML}},{{\hat \sigma }_{ML}})}} . (43)
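
    A direct transcription of Eqs (40), (42) and (43) is sketched below; all inputs are assumed to be evaluated at the MLEs, with $\varphi_{11}$, $\varphi_{22}$, $L_{111}$, $L_{222}$, $\pi_1^*$ and $\pi_2^*$ computed from the formulas above:

        import numpy as np

        def delta_bayes_entropy(z1, z2, pi1, pi2, phi11, phi22, L111, L222):
            # Lindley approximations of E(delta | x, y), Eq (40), and E(1/delta | x, y), Eq (42),
            # combined into the symmetric-entropy Bayes estimate, Eq (43)
            s = z1 + z2
            e_delta = (z1 / s
                       + (-z2 / s**3 + z2 * pi1 / s**2) * phi11
                       + (z1 / s**3 - z1 * pi2 / s**2) * phi22
                       + 0.5 * (z2 / s**2 * phi11**2 * L111 - z1 / s**2 * phi22**2 * L222))
            e_inv = (1.0 + z2 / z1
                     + (z2 / z1**3 - z2 / z1**2 * pi1) * phi11
                     + pi2 / z1 * phi22
                     + 0.5 * (L222 * phi22**2 / z1 - z2 / z1**2 * phi11**2 * L111))
            return np.sqrt(e_delta / e_inv)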

    Under the LINEX loss function, we only need to approximate {\rm{E}} ({e^{ - d\delta }}|x, y) . When \eta = \exp (\frac{{ - d{\zeta _1}}}{{{\zeta _1} + {\zeta _2}}}), the partial derivatives are

    \begin{array}{l} {\eta _1} = \frac{{ - d{\zeta _2}}}{{{{({\zeta _1} + {\zeta _2})}^2}}}\exp (\frac{{ - d{\zeta _1}}}{{{\zeta _1} + {\zeta _2}}}){\text{ }},{\text{ }}{\eta _2} = \frac{{d{\zeta _1}}}{{{{({\zeta _1} + {\zeta _2})}^2}}}\exp (\frac{{ - d{\zeta _1}}}{{{\zeta _1} + {\zeta _2}}}){\text{ }}, \\ {\eta _{11}} = [\frac{{2d{\zeta _2}}}{{{{({\zeta _1} + {\zeta _2})}^3}}} + \frac{{{d^2}\zeta _2^2}}{{{{({\zeta _1} + {\zeta _2})}^4}}}]\exp (\frac{{ - d{\zeta _1}}}{{{\zeta _1} + {\zeta _2}}}){\text{ }}, \\ {\eta _{22}} = [\frac{{ - 2d{\zeta _1}}}{{{{({\zeta _1} + {\zeta _2})}^3}}} + \frac{{{d^2}\zeta _1^2}}{{{{({\zeta _1} + {\zeta _2})}^4}}}]\exp (\frac{{ - d{\zeta _1}}}{{{\zeta _1} + {\zeta _2}}}){\text{ }}, \\ {\eta _{12}} = [\frac{{d{\zeta _2} - d{\zeta _1}}}{{{{({\zeta _1} + {\zeta _2})}^3}}} - \frac{{{d^2}{\zeta _1}{\zeta _2}}}{{{{({\zeta _1} + {\zeta _2})}^4}}}]\exp (\frac{{ - d{\zeta _1}}}{{{\zeta _1} + {\zeta _2}}}){\text{ }},{\text{ }}{\eta _{21}} = {\eta _{12}}{\text{ }}{\text{.}} \\ \end{array} (44)

    Thus,

    \begin{array}{l} {\rm{E}} ({e^{ - d\delta }}|x,y) = \exp (\frac{{ - d{\zeta _1}}}{{{\zeta _1} + {\zeta _2}}}) + \frac{1}{2}\exp (\frac{{ - d{\zeta _1}}}{{{\zeta _1} + {\zeta _2}}})\{ [\frac{{2d{\zeta _2}}}{{{{({\zeta _1} + {\zeta _2})}^3}}} + \frac{{{d^2}\zeta _2^2}}{{{{({\zeta _1} + {\zeta _2})}^4}}} - \frac{{2d{\zeta _2}}}{{{{({\zeta _1} + {\zeta _2})}^2}}}\pi _1^*]{\varphi _{11}} \\ + [\frac{{ - 2d{\zeta _1}}}{{{{({\zeta _1} + {\zeta _2})}^3}}} + \frac{{{d^2}\zeta _1^2}}{{{{({\zeta _1} + {\zeta _2})}^4}}} + \frac{{2d{\zeta _1}}}{{{{({\zeta _1} + {\zeta _2})}^2}}}\pi _2^*]{\varphi _{22}} + \frac{{ - d{\zeta _2}}}{{{{({\zeta _1} + {\zeta _2})}^2}}}\varphi _{11}^2{L_{111}} + \frac{{d{\zeta _1}}}{{{{({\zeta _1} + {\zeta _2})}^2}}}{L_{222}}\varphi _{22}^2\} \\ \end{array} . (45)

    The BE {\hat \delta _E} is given by Eq (46)

    {\hat \delta _E} = - \frac{1}{d}\ln [{\rm{E}} ({e^{ - d\delta }}|x,y)]{|_{({\zeta _1},{\zeta _2},\sigma ) = ({{\hat \zeta }_{1,ML}},{{\hat \zeta }_{2,ML}},{{\hat \sigma }_{ML}})}} . (46)
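
    Similarly, Eqs (45) and (46) for the LINEX estimate can be transcribed as follows (same assumed inputs, plus the LINEX hyper-parameter d):

        import numpy as np

        def delta_bayes_linex(z1, z2, d, pi1, pi2, phi11, phi22, L111, L222):
            # Lindley approximation of E(exp(-d * delta) | x, y), Eq (45),
            # and the LINEX Bayes estimate, Eq (46)
            s = z1 + z2
            e0 = np.exp(-d * z1 / s)
            e_exp = e0 + 0.5 * e0 * (
                (2.0 * d * z2 / s**3 + d**2 * z2**2 / s**4 - 2.0 * d * z2 / s**2 * pi1) * phi11
                + (-2.0 * d * z1 / s**3 + d**2 * z1**2 / s**4 + 2.0 * d * z1 / s**2 * pi2) * phi22
                - d * z2 / s**2 * phi11**2 * L111
                + d * z1 / s**2 * phi22**2 * L222)
            return -np.log(e_exp) / d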

    In this section, Monte Carlo simulation is used to evaluate the behavior of the different estimators under different APT-Ⅱ censoring schemes. We take the true values as $(\zeta_{1,real},\zeta_{2,real},\sigma_{real})=(2,3,5)$, so that the true value $\delta_{real}$ is 0.4000. Two priors are considered, namely Priors 1 and 2. The hyper-parameters of Prior 1 are $(a_1,b_1)=(5,2)$ and $(a_2,b_2)=(3,6)$. Prior 2 is the non-informative prior, that is, $a_1=a_2=b_1=b_2=0$. Without loss of generality, let $T_1=x_{m/2}$ and $T_2=y_{t/5}$. The number of Monte Carlo trials is $N=10{,}000$. We consider three cases with different censoring schemes, which are detailed in Table 1. The point estimates are compared by average bias (AB) and mean squared error (MSE); the performance of the confidence interval is measured by the average width (AW) and coverage probability (CP). All the results are displayed in Tables 2–8. Since the iteration method requires initial values, we take the AMLE $\hat\delta_{AML}$ in place of the MLE $\hat\delta_{ML}$. The algorithm for generating APT-Ⅱ censored data is shown in Algorithm 1 (a code sketch is given after its steps). Finally, the AB, MSE and AW are calculated by the following formulas:

    {\text{AB}} = \frac{1}{N}\sum\limits_{i = 1}^N {({{\hat \delta }_i} - {\delta _{real}})} , {\text{MSE}} = \frac{1}{N}\sum\limits_{i = 1}^N {{{({{\hat \delta }_i} - {\delta _{real}})}^2}} \;{\rm{ and}}\; {\text{AW}} = \frac{1}{N}\sum\limits_{i = 1}^N {({{\hat \delta }_{i, up}} - {{\hat \delta }_{i, low}})} .
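
    These performance measures can be computed as follows (a minimal sketch; delta_hat, lower and upper collect the results of the N replications):

        import numpy as np

        def performance(delta_hat, delta_real, lower, upper):
            # AB, MSE, AW and the coverage probability CP over the N Monte Carlo replications
            delta_hat, lower, upper = map(np.asarray, (delta_hat, lower, upper))
            ab = np.mean(delta_hat - delta_real)
            mse = np.mean((delta_hat - delta_real) ** 2)
            aw = np.mean(upper - lower)
            cp = np.mean((lower <= delta_real) & (delta_real <= upper))
            return ab, mse, aw, cp
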
    Table 1.  The censored schemes.
    ({n_1}, m) Q ({n_2}, t) R
    Case 1 (30, 10) Q1 = (0*8, 10*2) (40, 20) R1 = (2*10, 0*10)
    Q2 = (20*1, 0*9) R2 = (0*19, 20*1)
    Q3 = ((0, 5)*5) R3 = ((0, 0, 0, 0, 5)*4)
    Case 2 (50, 20) Q1 = (10*1, 0*18, 20*1) (50, 30) R1 = (5*2, 0*13, 5*2, 0*13)
    Q2 = (0*19, 30*1) R2 = (0*20, 2*10)
    Q3 = (0*10, 2*15) R3 = (10*1, 0*28, 10*1)
    Case 3 (100, 50) Q1 = (10*5, 0*45) (150, 70) R1 = (2*40, 0*30)
    Q2 = (20*1, 0*24, 30*1, 0*24) R2 = (30*1, 0*30, 50*1, 0*38)
    Q3 = (0*20, 50*1, 0*29) R3 = (0*45, 80*1, 0*24)

    Table 2.  The MSEs and ABs of \delta under Prior 1 based on Case 1.
    Censored scheme MSE AB
    {\hat \delta _{AML}} {\hat \delta _S} {\hat \delta _E} {\hat \delta _{AML}} {\hat \delta _S} {\hat \delta _E}
    d = 3 d = - 3 d = 3 d = - 3
    Q1, R1 0.0165 0.0495 0.0406 0.0397 0.1129 0.2181 0.2003 0.1957
    Q1, R2 0.0064 0.0044 0.0868 0.0291 -0.0152 0.3182 0.3529 0.1627
    Q1, R3 0.0060 0.0031 0.0663 0.0331 0.0400 0.2613 0.2569 0.1765
    Q2, R1 0.0008 0.0034 0.0031 0.0030 -0.0140 0.0543 0.0514 0.0492
    Q2, R2 0.0032 0.0025 0.0034 0.0019 -0.0481 0.0571 0.0532 0.0373
    Q2, R3 0.0021 0.0029 0.0025 0.0018 -0.0374 0.0473 0.0447 0.0368
    Q3, R1 0.0032 0.0151 0.0137 0.0135 0.0422 0.1189 0.1139 0.1114
    Q3, R2 0.0027 0.0078 0.0167 0.0107 -0.0011 0.1285 0.1247 0.0968
    Q3, R3 0.0021 0.0185 0.0130 0.0106 0.0130 0.1155 0.1099 0.0973

    Table 3.  The MSEs and ABs of \delta under Prior 1 based on Case 2.
    Censored scheme MSE AB
    {\hat \delta _{AML}} {\hat \delta _S} {\hat \delta _E} {\hat \delta _{AML}} {\hat \delta _S} {\hat \delta _E}
    d = 3 d = - 3 d = 3 d = - 3
    Q1, R1 0.0021 0.0096 0.0090 0.0090 0.0385 0.0955 0.0923 0.0922
    Q1, R2 0.0008 0.0076 0.0072 0.0061 0.0047 0.0850 0.0824 0.0751
    Q1, R3 0.0017 0.0094 0.0088 0.0086 0.0331 0.0944 0.0913 0.0899
    Q2, R1 0.0136 0.0320 0.0288 0.0298 0.1113 0.1775 0.1683 0.1709
    Q2, R2 0.0029 0.0349 0.0273 0.0222 0.0530 0.1707 0.1640 0.1464
    Q2, R3 0.0113 0.0311 0.0281 0.0284 0.1008 0.1749 0.1662 0.1665
    Q3, R1 0.0129 0.0318 0.0286 0.0292 0.1068 0.1767 0.1676 0.1685
    Q3, R2 0.0035 0.0297 0.0281 0.0203 0.0377 0.1712 0.1661 0.1390
    Q3, R3 0.0102 0.0306 0.0276 0.0272 0.0941 0.1731 0.1646 0.1625

    Table 4.  The MSEs and ABs of \delta under Prior 1 based on Case 3.
    Censored scheme MSE AB
    {\hat \delta _{AML}} {\hat \delta _S} {\hat \delta _E} {\hat \delta _{AML}} {\hat \delta _S} {\hat \delta _E}
    d = 3 d = - 3 d = 3 d = - 3
    Q1, R1 3.61E-4 0.0012 0.0011 0.0012 0.0137 0.0318 0.0304 0.0326
    Q1, R2 7.16E-4 0.0018 0.0017 0.0019 0.0229 0.0403 0.0389 0.0412
    Q1, R3 6.24E-4 1.60E-4 1.56E-4 1.65E-4 -0.0208 0.0013 2.01E-4 0.0019
    Q2, R1 0.0052 0.0082 0.0078 0.0083 0.0708 0.0893 0.0872 0.0899
    Q2, R2 0.0065 0.0096 0.0092 0.0097 0.0793 0.0972 0.0950 0.0977
    Q2, R3 0.0016 0.0038 0.0035 0.0038 0.0373 0.0597 0.0579 0.0601
    Q3, R1 0.0024 0.0046 0.0040 0.0046 0.0421 0.0638 0.0620 0.0641
    Q3, R2 0.0039 0.0066 0.0063 0.0067 0.0574 0.0775 0.0757 0.0780
    Q3, R3 9.39E-4 9.27E-4 8.63 E-4 9.09E-4 -0.0144 0.0205 0.0192 0.0193

    Table 5.  The MSEs and ABs of \delta under Prior 2 based on Case 1.
    Censored scheme MSE AB
    {\hat \delta _{AML}} {\hat \delta _S} {\hat \delta _E} {\hat \delta _{AML}} {\hat \delta _S} {\hat \delta _E}
    d = 3 d = - 3 d = 3 d = - 3
    Q1, R1 0.0165 0.0179 0.0129 0.0175 0.0385 0.1173 0.0967 0.1177
    Q1, R2 0.0064 0.0061 0.0059 0.0059 0.0047 -0.0063 -0.0235 -0.0015
    Q1, R3 0.0060 0.0066 0.0044 0.0066 0.0331 0.0447 0.0240 0.0483
    Q2, R1 0.0008 0.0009 0.0010 0.0007 0.1113 -0.0162 -0.0204 -0.0100
    Q2, R2 0.0032 0.0035 0.0038 0.0027 0.0530 -0.0504 -0.0538 -0.0427
    Q2, R3 0.0021 0.0023 0.0025 0.0017 0.1008 -0.0397 -0.0434 -0.0327
    Q3, R1 0.0032 0.0032 0.0026 0.0037 0.1068 0.0424 0.0353 0.0480
    Q3, R2 0.0027 0.0028 0.0027 0.0027 0.0377 -0.0019 -0.0082 0.0059
    Q3, R3 0.0021 0.0022 0.0019 0.0023 0.0941 0.0134 0.0063 0.0201

    Table 6.  The MSEs and ABs of \delta under Prior 2 based on Case 2.
    Censored scheme MSE AB
    {\hat \delta _{AML}} {\hat \delta _S} {\hat \delta _E} {\hat \delta _{AML}} {\hat \delta _S} {\hat \delta _E}
    d = 3 d = - 3 d = 3 d = - 3
    Q1, R1 0.0021 0.0021 0.0018 0.0025 0.0385 0.0386 0.0342 0.0433
    Q1, R2 0.0008 0.0008 0.0007 0.0009 0.0047 0.0048 0.0009 0.0101
    Q1, R3 0.0017 0.0018 0.0015 0.0021 0.0331 0.0340 0.0295 0.0380
    Q2, R1 0.0136 0.0141 0.0118 0.0145 0.1113 0.1139 0.1036 0.1161
    Q2, R2 0.0029 0.0049 0.0037 0.0053 0.0530 0.0565 0.0465 0.0605
    Q2, R3 0.0113 0.0118 0.0097 0.0123 0.1008 0.1026 0.0923 0.1052
    Q3, R1 0.0129 0.0134 0.0110 0.0136 0.1068 0.1097 0.0987 0.1111
    Q3, R2 0.0035 0.0037 0.0029 0.0040 0.0377 0.0408 0.0313 0.0451
    Q3, R3 0.0102 0.0108 0.0087 0.0111 0.0941 0.0964 0.0855 0.0988

    Table 7.  The MSEs and ABs of \delta under Prior 2 based on Case 3.
    Censored scheme MSE AB
    {\hat \delta _{AML}} {\hat \delta _S} {\hat \delta _E} {\hat \delta _{AML}} {\hat \delta _S} {\hat \delta _E}
    d = 3 d = - 3 d = 3 d = - 3
    Q1, R1 3.61E-4 3.50E-4 3.11E-4 3.95E-4 0.0137 0.0132 0.0117 0.0149
    Q1, R2 7.16E-4 6.92E-4 6.24E-4 7.66E-4 0.0229 0.0223 0.0208 0.0239
    Q1, R3 6.24E-4 6.54E-4 7.08E-4 5.76E-4 -0.0208 -0.0215 -0.0228 -0.0196
    Q2, R1 0.0052 0.0053 0.0049 0.0055 0.0708 0.0710 0.0688 0.0724
    Q2, R2 0.0065 0.0066 0.0062 0.0068 0.0793 0.0798 0.0775 0.0811
    Q2, R3 0.0016 0.0016 0.0015 0.0017 0.0373 0.0371 0.0350 0.0387
    Q3, R1 0.0024 0.0024 0.0022 0.0025 0.0421 0.0426 0.0407 0.0442
    Q3, R2 0.0039 0.0039 0.0037 0.0041 0.0574 0.0574 0.0554 0.0589
    Q3, R3 9.39E-4 9.52E-4 9.92E-4 8.85E-4 -0.0144 -0.0151 -0.0168 -0.0131

    Table 8.  The CP and AW of \delta (\lambda = 0.05).
    Censored scheme CP AW
    Case 1 Case 2 Case 3 Case 1 Case 2 Case 3
    Q1, R1 0.8838 0.9564 0.9993 0.3178 0.2518 0.1287
    Q1, R2 0.8384 0.9075 0.9951 0.3723 0.2999 0.1288
    Q1, R3 0.8855 0.9289 0.9976 0.3001 0.2714 0.1315
    Q2, R1 0.9681 0.9456 0.9869 0.2493 0.2657 0.1341
    Q2, R2 0.8468 0.9702 0.9840 0.3669 0.2500 0.1339
    Q2, R3 0.9238 0.9396 0.9707 0.2783 0.2672 0.1374
    Q3, R1 0.9059 0.9422 0.9766 0.2456 0.2599 0.1238
    Q3, R2 0.8831 0.9747 0.9873 0.2831 0.2588 0.1287
    Q3, R3 0.8298 0.9657 0.9634 0.2730 0.2588 0.1283


    Algorithm 1.

    (1) Generate two sets of random numbers ({w_{x, 1}}, {w_{x, 2}}, ..., {w_{x, m}}) and ({w_{y, 1}}, {w_{y, 2}}, ..., {w_{y, t}}) from {\text{U}}(0, 1).

    (2) Let {v_{x, i}} = {w_{x, i}}^{{{(i + {Q_m} + {Q_{m - 1}} +... + {Q_{m - i + 1}})}^{ - 1}}} (i = 1, 2, ..., m) and {v_{y, j}} = {w_{y, j}}^{{{(j + {R_t} + {R_{t - 1}} +... + {R_{t - j + 1}})}^{ - 1}}} (j = 1, 2, ..., t). Set {u_{x, i}} = 1 - {v_{x, m}}{v_{x, m - 1}}...{v_{x, m - i + 1}} and {u_{y, j}} = 1 - {v_{y, t}}{v_{y, t - 1}}...{v_{y, t - j + 1}}.

    (3) Let {x_i} = {F^{ - 1}}({u_{x, i}}; {\zeta _{1, real}}, {\sigma _{real}}) and {y_j} = {F^{ - 1}}({u_{y, j}}; {\zeta _{2, real}}, {\sigma _{real}}), where F is the CDF of IWD. Then, ({x_1}, {x_2}, ..., {x_m}) is the progressive type-Ⅱ censored data from {\text{IW}}({\zeta _{1, real}}, {\sigma _{real}}) with censored scheme ({Q_1}, {Q_2}, ..., {Q_m}) and ({y_1}, {y_2}, ..., {y_t}) is the progressive type-Ⅱ censored data from {\text{IW}}({\zeta _{2, real}}, {\sigma _{real}}) with censored scheme ({R_1}, {R_2}, ..., {R_t}).

    (4) Determine J and K such that {x_J} < {T_1} < {x_{J + 1}} and {y_K} < {T_2} < {y_{K + 1}}. Remove {x_{J + 2}}, {x_{J + 3}}, ..., {x_m} and {y_{K + 2}}, {y_{K + 3}}, ..., {y_t}.

    (5) Generate the first m - J - 1 order statistics from the truncated distribution \frac{{f(x; {\zeta _{1, real}}, {\sigma _{real}})}}{{1 - F({x_{J + 1}}; {\zeta _{1, real}}, {\sigma _{real}})}} and denote them as {x_{J + 2}}, {x_{J + 3}}, ..., {x_m}. Then, the censored scheme changes to ({Q_1}, ..., {Q_J}, 0, ..., 0, {Q_m} = {n_1} - m - \sum\limits_{i = 1}^J {{Q_i}}). Similarly, generate the first t - K - 1 order statistics from \frac{{f(y; {\zeta _{2, real}}, {\sigma _{real}})}}{{1 - F({y_{K + 1}}; {\zeta _{2, real}}, {\sigma _{real}})}} as {y_{K + 2}}, {y_{K + 3}}, ..., {y_t}. Then, the censored scheme changes to ({R_1}, ..., {R_K}, 0, ..., 0, {R_t} = {n_2} - t - \sum\limits_{i = 1}^K {{R_i}}).
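
    A Python sketch of Algorithm 1 for one sample, assuming NumPy is available (the scheme, parameters and T in the usage comment are illustrative only):

        import numpy as np

        def iw_cdf(x, zeta, sigma):
            return np.exp(-zeta * x ** (-sigma))

        def iw_ppf(u, zeta, sigma):
            # inverse CDF of IW(zeta, sigma): F^{-1}(u) = (-ln(u) / zeta)^(-1/sigma)
            return (-np.log(u) / zeta) ** (-1.0 / sigma)

        def progressive_type2(n, scheme, zeta, sigma, rng):
            # steps (1)-(3): progressive type-II censored sample of size m = len(scheme)
            m = len(scheme)
            w = rng.uniform(size=m)
            tails = np.array([np.sum(scheme[m - i:]) for i in range(1, m + 1)], dtype=float)
            v = w ** (1.0 / (np.arange(1, m + 1) + tails))   # v_i = w_i^(1/(i + Q_m + ... + Q_{m-i+1}))
            u = 1.0 - np.cumprod(v[::-1])                    # u_i = 1 - v_m v_{m-1} ... v_{m-i+1}
            return iw_ppf(u, zeta, sigma)

        def apt2_sample(n, scheme, zeta, sigma, T, rng):
            # steps (4)-(5): adjust the progressive type-II sample to the APT-II scheme
            m = len(scheme)
            x = progressive_type2(n, scheme, zeta, sigma, rng)
            J = int(np.sum(x < T))
            if J >= m:                                       # all m failures occurred before T
                return x, list(scheme), J
            x = x[: J + 1]                                   # keep x_1, ..., x_{J+1}
            survivors = n - (J + 1) - int(np.sum(scheme[:J]))
            u = rng.uniform(size=survivors)
            F_at_trunc = iw_cdf(x[-1], zeta, sigma)
            tail = np.sort(iw_ppf(F_at_trunc + u * (1.0 - F_at_trunc), zeta, sigma))  # draws above x_{J+1}
            x = np.concatenate([x, tail[: m - J - 1]])       # first m - J - 1 order statistics
            new_scheme = list(scheme[:J]) + [0] * (m - J - 1) + [n - m - int(np.sum(scheme[:J]))]
            return x, new_scheme, J

        # rng = np.random.default_rng(2024)
        # x, Q_eff, J = apt2_sample(30, [0] * 8 + [10, 10], zeta=2.0, sigma=5.0, T=1.0, rng=rng)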

    From Tables 2–8, the following conclusions may be drawn:

    (1) When the effective sample sizes (m and t) increase, the MSEs of AMLE and BE decrease. Therefore, enlarging the effective sample size can appropriately enhance the accuracy of the estimation.

    (2) The BEs under Prior 2 perform similarly to the AMLE in terms of MSEs. However, the BEs under Prior 1 perform worse than AMLE.

    (3) Under the same prior, as the sample size increases, the available information increases. Therefore, the MSEs show a decreasing trend.

    (4) Under Prior 1, the BE based on LINEX loss function with d = - 3 has better behavior than the other BEs. Under Prior 2, the performance of all the BEs is comparable.

    (5) With the increase of the effective sample sizes, the CPs gradually reach the confidence level of 95%.

    In this section, two real data sets are used to validate the feasibility of the proposed methods. These data sets were reported by Nelson [34] and record the breakdown times of an insulating fluid between electrodes at different voltages. X represents the breakdown times at a voltage of 34 kV, and Y represents the breakdown times at a voltage of 36 kV. The data sets are:

    X: 0.19, 0.78, 0.96, 1.31, 2.78, 3.16, 4.15, 4.67, 4.85, 6.50, 7.35, 8.01, 8.27, 12.06, 31.75, 32.52, 33.91, 36.71, 72.89;

    Y: 0.35, 0.59, 0.96, 0.99, 1.69, 1.97, 2.07, 2.58, 2.71, 2.90, 3.67, 3.99, 5.35, 13.77, 25.50.

    First, we need to check whether the IWD fits these data sets. We know that if a random variable T follows the Weibull distribution, then X = {T^{ - 1}} follows the IWD. Accordingly, set T = {X^{ - 1}} and Z = {Y^{ - 1}}. The transformed data sets, Anderson-Darling (A-D) statistics and p-values are presented in Table 9.

    Table 9.  The transformed data sets and p-values.
    Data set | Transformed values | A-D | p-value
    T | 0.0137 0.0272 0.0295 0.0308 0.0315 0.0829 0.1209 0.1248 0.1361 0.1538 0.2062 0.2141 0.2410 0.3165 0.3597 0.7634 1.0417 1.2821 5.2632 | 0.6006 | 0.1132
    Z | 0.0392 0.0726 0.1869 0.2506 0.2725 0.3448 0.3690 0.3876 0.4831 0.5076 0.5917 1.0101 1.0417 1.6949 2.8571 | 0.3121 | 0.5858


    As we can see, the p-values are all greater than the 5% significance level, which means that the Weibull distribution fits the data sets T and Z effectively. In other words, the IWD is suitable for fitting the data sets X and Y. Figures 4 and 5 give probability-probability (P-P) and quantile-quantile (Q-Q) plots to visually show the fit.

    Figure 4.  P-P and Q-Q plots for Data T.
    Figure 5.  P-P and Q-Q plots for Data Z.
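
    A rough way to reproduce the check behind Table 9 is sketched below: fit a two-parameter Weibull to the transformed data and compute the A-D statistic. The p-value would additionally require critical values for estimated parameters or a parametric bootstrap, and the fitting method here (SciPy's default maximum likelihood fit) may differ from the paper's, so the numbers need not match the table exactly:

        import numpy as np
        from scipy import stats

        X = np.array([0.19, 0.78, 0.96, 1.31, 2.78, 3.16, 4.15, 4.67, 4.85, 6.50,
                      7.35, 8.01, 8.27, 12.06, 31.75, 32.52, 33.91, 36.71, 72.89])
        T = np.sort(1.0 / X)                       # if X follows the IWD, T = 1/X is Weibull

        c, loc, scale = stats.weibull_min.fit(T, floc=0)     # two-parameter Weibull fit
        u = stats.weibull_min.cdf(T, c, loc=0, scale=scale)
        n = len(T)
        i = np.arange(1, n + 1)
        # Anderson-Darling statistic: A^2 = -n - (1/n) sum (2i - 1)(ln u_(i) + ln(1 - u_(n+1-i)))
        A2 = -n - np.mean((2 * i - 1) * (np.log(u) + np.log(1.0 - u[::-1])))
        print("A-D statistic for T:", A2)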

    Next, Table 10 presents the different APT-Ⅱ censoring schemes. Since no prior information is available, we take the hyper-parameters of the prior distributions as {a_1} = {b_1} = {a_2} = {b_2} = 0. The approximate maximum likelihood estimates, the Bayesian estimates under the symmetric entropy loss function and the LINEX loss function with d = 3 and d = - 3, and the 95% ACIs are given in Table 11. We illustrate the existence and uniqueness of the MLEs through visual representations; without loss of generality, we choose censored scheme 3 in Table 10 for the plot, as shown in Figure 6.

    Table 10.  Different censored schemes.
    Censored scheme Q R
    1 (2*5, 0*2, 1*1) (1*5, 0*5)
    2 (0*7, 11*1) (0*9, 5*1)
    3 (0*3, 5*1, 0*3, 6*1) (0*4, 5*1, 0*3)

    Table 11.  The estimates and ACIs of \delta .
    Censored scheme {\hat \delta _{AML}} {\hat \delta _S} {\hat \delta _E} {\text{ACI}}
    d = 3 d = - 3
    1 0.5179 0.5228 0.5080 0.5335 (0.3815, 0.6579)
    2 0.5055 0.5131 0.4912 0.5244 (0.2808, 0.7303)
    3 0.4425 0.4507 0.4315 0.4634 (0.2852, 0.5998)

    Figure 6.  The graphs of partial derivatives of the log-likelihood function.

    The APT-Ⅱ censored scheme allows more flexibility during the lifetime test, thereby providing more control over the test and leading to shorter test times and more observed failures. In this paper, we investigate classical and Bayesian estimation of the stress-strength reliability based on APT-Ⅱ censored samples for the IWD with the same shape but different scale parameters. The MLE can be obtained by an iteration algorithm; since its form is not explicit, we propose the AMLE and construct the ACI. The BEs are also derived based on gamma priors under the symmetric entropy loss function and the LINEX loss function, and Lindley's approximation is used to obtain approximate Bayesian estimates. The simulation results show that the AMLE has smaller MSE than the BE under the gamma prior. In addition, the censoring scheme has a significant impact on the estimates. Yan et al. [35] proposed an improved adaptive progressive type-Ⅱ censored scheme; based on this scheme, we will consider the statistical inference of multi-component stress-strength reliability for other distributions, such as the weighted exponential distribution and the improved Lomax distribution.

    This research was funded by the National Natural Science Foundation of China, grant number 71661012.

    The authors declare no conflict of interest.



    [1] P. Bateman, J. Kalb and A. Stenger, A limit involving least common multiples, Amer. Math. Monthly, 109 (2002), 393-394.
    [2] P. L. Chebyshev, Memoire sur les nombres premiers, J. Math. Pures Appl., 17 (1852), 366-390.
    [3] B. Farhi, Minorations non triviales du plus petit commun multiple de certaines suites finies d'entiers, C. R. Acad. Sci. Paris, Ser. I, 341 (2005), 469-474. doi: 10.1016/j.crma.2005.09.019
    [4] B. Farhi, Nontrivial lower bounds for the least common multiple of some finite sequences of integers, J. Number Theory, 125 (2007), 393-411. doi: 10.1016/j.jnt.2006.10.017
    [5] B. Farhi, An identity involving the least common multiple of binomial coefficients and its application, Amer. Math. Monthly, 116 (2009), 836-839. doi: 10.4169/000298909X474909
    [6] B. Farhi, On the derivatives of the integer-valued polynomials, arXiv:1810.07560.
    [7] B. Farhi and D. Kane, New results on the least common multiple of consecutive integers, Proc. Amer. Math. Soc., 137 (2009), 1933-1939.
    [8] C. J. Goutziers, On the least common multiple of a set of integers not exceeding N, Indag. Math., 42 (1980), 163-169.
    [9] D. Hanson, On the product of the primes, Canad. Math. Bull., 15 (1972), 33-37. doi: 10.4153/CMB-1972-007-7
    [10] S. F. Hong and W. D. Feng, Lower bounds for the least common multiple of finite arithmetic progressions, C. R. Acad. Sci. Paris, Ser. I, 343 (2006), 695-698. doi: 10.1016/j.crma.2006.11.002
    [11] S. F. Hong, Y. Y. Luo, G. Y. Qian, et al. Uniform lower bound for the least common multiple of a polynomial sequence, C.R. Acad. Sci. Paris, Ser. I, 351 (2013), 781-785. doi: 10.1016/j.crma.2013.10.005
    [12] S. F. Hong and G. Y. Qian, The least common multiple of consecutive arithmetic progression terms, Proc. Edinb. Math. Soc., 54 (2011), 431-441. doi: 10.1017/S0013091509000431
    [13] S. F. Hong and G. Y. Qian, The least common multiple of consecutive quadratic progression terms, Forum Math., 27 (2015), 3335-3396.
    [14] S. F. Hong and G. Y. Qian, New lower bounds for the least common multiple of polynomial sequences, J. Number Theory, 175 (2017), 191-199. doi: 10.1016/j.jnt.2016.11.026
    [15] S. F. Hong, G. Y. Qian and Q. R. Tan, The least common multiple of a sequence of products of linear polynomials, Acta Math. Hungar., 135 (2012), 160-167. doi: 10.1007/s10474-011-0173-4
    [16] S. F. Hong and Y. J. Yang, On the periodicity of an arithmetical function, C. R. Acad. Sci. Paris Sér. I, 346 (2008), 717-721.
    [17] S. F. Hong and Y. J. Yang, Improvements of lower bounds for the least common multiple of arithmetic progressions, Proc. Amer. Math. Soc., 136 (2008), 4111-4114. doi: 10.1090/S0002-9939-08-09565-8
    [18] L.-K. Hua, Introduction to number theory, Springer-Verlag, Berlin Heidelberg, 1982.
    [19] N. Koblitz, p-Adic numbers, p-adic analysis, and zeta-functions, Springer-Verlag, Heidelberg, 1977.
    [20] M. Nair, On Chebyshev-type inequalities for primes, Amer. Math. Monthly, 89 (1982), 126-129. doi: 10.1080/00029890.1982.11995398
    [21] J. Neukirch, Algebraic number theory, Springer-Verlag, 1999.
    [22] S. M. Oon, Note on the lower bound of least common multiple, Abstr. Appl. Anal., 2013.
    [23] G. Y. Qian and S. F. Hong, Asymptotic behavior of the least common multiple of consecutive arithmetic progression terms, Arch. Math., 100 (2013), 337-345. doi: 10.1007/s00013-013-0510-7
    [24] G. Y. Qian, Q. R. Tan and S. F. Hong, The least common multiple of consecutive terms in a quadratic progression, Bull. Aust. Math. Soc., 86 (2012), 389-404. doi: 10.1017/S0004972712000202
    [25] R. J. Wu, Q. R. Tan and S. F. Hong, New lower bounds for the least common multiple of arithmetic progressions, Chinese Annals of Mathematics, Series B, 34 (2013), 861-864. doi: 10.1007/s11401-013-0805-9
  • © 2020 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)