
Molecular dynamics study of homo-oligomeric ion channels: Structures of the surrounding lipids and dynamics of water movement

  • Molecular dynamics simulations were used to study the structural perturbations of lipids surrounding transmembrane ion channel forming helices/helical bundles and the movement of water within the pores of the ion-channels/bundles. Specifically, helical monomers to hexameric helical bundles embedded in palmitoyl-oleoyl-phosphatidyl-choline (POPC) lipid bilayer were studied. Two amphipathic α-helices with the sequence Ac-(LSLLLSL)3-NH2 (LS2), and Ac-(LSSLLSL)3-NH2 (LS3), which are known to form ion channels, were used. To investigate the surrounding lipid environment, we examined the hydrophobic mismatch, acyl chain order parameter profiles, lipid head-to-tail vector projection on the membrane surface, and the lipid headgroup vector projection. We find that the lipid structure is perturbed within approximately two lipid solvation shells from the protein bundle for each system (~15.0 Å). Beyond two lipid “solvation” shells bulk lipid bilayer properties were observed in all systems. To understand water flow, we enumerated each time a water molecule enters or exited the channel, which allowed us to calculate the number of water crossing events and their rates, and the residence time of water in the channel. We correlate the rate of water crossing with the structural properties of these ion channels and find that the movements of water are predominantly governed by the packing and pore diameter, rather than the topology of each peptide or the pore (hydrophobic or hydrophilic). We show that the crossing events of water fit quantitatively to a stochastic process and that water molecules are traveling diffusively through the pores. These lipid and water findings can be used for understanding the environment within and around ion channels. Furthermore, these findings can benefit various research areas such as rational design of novel therapeutics, in which the drug interacts with membranes and transmembrane proteins to enhance the efficacy or reduce off-target effects.

    Citation: Thuy Hien Nguyen, Catherine C. Moore, Preston B. Moore, Zhiwei Liu. Molecular dynamics study of homo-oligomeric ion channels: Structures of the surrounding lipids and dynamics of water movement[J]. AIMS Biophysics, 2018, 5(1): 50-76. doi: 10.3934/biophy.2018.1.50



Type-Ⅰ and Type-Ⅱ censoring schemes are the two most popular censoring schemes used in practice. A mixture of the Type-Ⅰ and Type-Ⅱ censoring schemes, known as the hybrid censoring scheme, was first introduced by Epstein [3]. The hybrid censoring scheme has become quite popular in reliability and life-testing experiments; see, for example, Fairbanks et al. [4], Draper and Guttman [5], Chen and Bhattacharya [6], Jeong et al. [7], Childs et al. [8] and Gupta and Kundu [9]. Balakrishnan and Kundu [10] extensively reviewed Type-Ⅰ and Type-Ⅱ hybrid censoring schemes and the associated inferential issues. They presented details on the developments of the generalized hybrid and unified hybrid censoring schemes that have been introduced in the literature, together with several examples illustrating the described results. From now on, we refer to this hybrid censoring scheme as the Type-Ⅰ hybrid censoring scheme (Type-Ⅰ HCS). It is evident that the complete sample situation, as well as the Type-Ⅰ and Type-Ⅱ right censoring schemes, are all special cases of this Type-Ⅰ HCS.

Recently, Tripathic and Lodhi [11] discussed inferential procedures for a Weibull competing risks model with partially observed failure causes under generalized progressive hybrid censoring. Jeon and Kang [12] estimated the half-logistic distribution based on multiply Type-Ⅱ hybrid censoring. Nassar and Dobbah [13] analyzed the reliability characteristics of a bathtub-shaped distribution under adaptive Type-Ⅰ progressive hybrid censoring. Algarni, Almarashi and Abd-Elmougoud [14] considered joint Type-Ⅰ generalized hybrid censoring for the estimation of two Weibull distributions.

A three-parameter Dagum distribution was proposed by Dagum [15,16]; it plays an important role in modeling the size distribution of personal income. The distribution also offers considerable flexibility for modeling lifetime data, such as in reliability. The Dagum distribution is not very popular, perhaps because of its difficult mathematical procedures. In the 1970s, Camilo Dagum embarked on a quest for a statistical distribution closely fitting empirical income and wealth distributions. Not satisfied with the classical distributions, he looked for a model accommodating the heavy tails present in empirical income and wealth distributions as well as permitting an interior mode. He ended up with the Dagum Type Ⅰ distribution, a three-parameter distribution, and two four-parameter generalizations; see Dagum [16,17,18]. The Dagum distribution is also called the inverse Burr, especially in the actuarial literature, as it is the reciprocal transformation of the Burr XII, which is widely known in various fields of science. Since Dagum proposed his model as an income distribution, its properties have been appreciated in economics and finance, and its features have been extensively discussed in studies of income and wealth. Kleiber and Kotz [19] and Kleiber [20] provided an exhaustive review of the origin of the Dagum model and its applications. Quintano and D'Agostino [21] adjusted the Dagum model for income distribution to account for individual characteristics, while Domma et al. [22,23] studied the Fisher information matrix in doubly censored data from the Dagum distribution and the reliability properties of the Dagum distribution. An important characteristic of the Dagum distribution is that its hazard function can be monotonically decreasing, upside-down bathtub, or bathtub and then upside-down bathtub shaped; see Domma [24]. This behavior has led several authors to study the model in different fields.
In fact, the Dagum distribution has been studied from a reliability point of view and used to analyze survival data; see Domma et al. [23].

The Dagum distribution is specified by the probability density function (pdf)

$ f(x;\lambda,\beta,\theta) = \lambda\beta\theta x^{-(\beta+1)}\left(1+\lambda x^{-\beta}\right)^{-(\theta+1)}, \quad x > 0;\ \lambda,\beta,\theta > 0,
$
(1.1)

    and cumulative distribution function (cdf)

$ F(x;\lambda,\beta,\theta) = \left(1+\lambda x^{-\beta}\right)^{-\theta}, \quad x > 0;\ \lambda,\beta,\theta > 0,
$
(1.2)

    where $ \lambda $ is the scale parameter and $ \beta, \theta $ are the shape parameters.
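As a quick numerical companion to (1.1) and (1.2), the pdf, cdf and inverse cdf can be coded directly; solving $ F(x) = u $ gives the closed form $ F^{-1}(u) = [\lambda/(u^{-1/\theta}-1)]^{1/\beta} $, which makes simulation from the Dagum distribution straightforward. This is an illustrative sketch of ours, not code from the paper:

```python
import numpy as np

def dagum_pdf(x, lam, beta, theta):
    # f(x) = lam*beta*theta * x^{-(beta+1)} * (1 + lam*x^{-beta})^{-(theta+1)}, Eq (1.1)
    return lam * beta * theta * x ** (-(beta + 1)) * (1 + lam * x ** (-beta)) ** (-(theta + 1))

def dagum_cdf(x, lam, beta, theta):
    # F(x) = (1 + lam*x^{-beta})^{-theta}, Eq (1.2)
    return (1 + lam * x ** (-beta)) ** (-theta)

def dagum_ppf(u, lam, beta, theta):
    # Invert F(x) = u:  x = (lam / (u^{-1/theta} - 1))^{1/beta}
    return (lam / (u ** (-1.0 / theta) - 1.0)) ** (1.0 / beta)

# Inverse-cdf sampling: F^{-1}(U) with U ~ Uniform(0, 1) yields Dagum draws
rng = np.random.default_rng(1)
sample = dagum_ppf(rng.uniform(size=1000), 1.0, 2.0, 1.5)
```

The same inverse-cdf route is the natural way to generate the censored samples used in the simulation studies.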

Huang and Yang [1] have considered a combined hybrid censoring sampling (CHCS) scheme, which is defined as follows: For fixed $ m $, $ r $ $ \in\left\{1, 2, \ldots, n\right\} $ and $ (T_{1}, T_{2}) \in(0, \infty) $ such that $ m < r $ and $ T_{1} < T_{2} $, let $ T^{*} $ denote the terminating time of the experiment. If the $ m $th failure occurs before time $ T_{1} $, the experiment terminates at $ \min\{X_{r:n}, T_{1}\} $; if the $ m $th failure occurs between $ T_{1} $ and $ T_{2} $, the experiment terminates at $ X_{m:n} $; and finally, if the $ m $th failure occurs after time $ T_{2} $, the experiment terminates at $ T_{2} $. For later convenience, we abbreviate this scheme as CHCS$ (m, r; T_{1}, T_{2}) $. This scheme contains the following six cases, and obviously, in each case some part of the data is unobservable:

$ T^{*}=\begin{cases} X_{m:n}, & 0 < T_{1} < X_{m:n} < (T_{2} < X_{r:n}), \\ X_{m:n}, & 0 < T_{1} < X_{m:n} < (X_{r:n} < T_{2}), \\ T_{2}, & 0 < T_{1} < T_{2} < (X_{m:n} < X_{r:n}), \\ X_{r:n}, & 0 < X_{m:n} < X_{r:n} < (T_{1} < T_{2}), \\ T_{1}, & 0 < X_{m:n} < T_{1} < (X_{r:n} < T_{2}), \\ T_{1}, & 0 < X_{m:n} < T_{1} < (T_{2} < X_{r:n}), \end{cases}
$
(1.3)

    where the data in parentheses are unobservable.
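The CHCS termination rule described above can be stated compactly in code. The sketch below is our illustration (1-based order statistics mapped to 0-based indexing); it returns the termination time $ T^{*} $ and the number $ k $ of observed failures:

```python
def chcs_termination(x_sorted, m, r, T1, T2):
    """Termination time T* and observed failure count k under CHCS(m, r; T1, T2).

    x_sorted: ordered failure times X_{1:n} <= ... <= X_{n:n} (a Python list).
    """
    xm, xr = x_sorted[m - 1], x_sorted[r - 1]
    if xm < T1:                 # m-th failure before T1: stop at min(X_{r:n}, T1)
        t_star = min(xr, T1)
    elif xm <= T2:              # m-th failure in [T1, T2]: stop at X_{m:n}
        t_star = xm
    else:                       # m-th failure after T2: stop at T2
        t_star = T2
    k = sum(1 for x in x_sorted if x <= t_star)
    return t_star, k

# Example: with X = (0.5, 1.0, 1.5, 2.0, 2.5), m = 2, r = 4, T1 = 1.2, T2 = 3.0,
# the 2nd failure (1.0) precedes T1, so the test stops at min(X_{4:5}, T1) = 1.2.
```

Running the three branches against the six cases of (1.3) is a convenient way to check an implementation.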

Balakrishnan et al. [2] have proposed a unified hybrid censoring sampling (UHCS) scheme as follows: For fixed $ m $, $ r $ $ \in\left\{1, 2, \ldots, n\right\} $ and $ (T_{1}, T_{2}) \in(0, \infty) $ with $ m < r $ and $ T_{1} < T_{2} $, let $ T^{*} $ denote the terminating time of the experiment. If the $ m $th failure occurs before time $ T_{1} $, the experiment terminates at $ \min\{\max\{X_{r:n}, T_{1}\}, T_{2}\} $; if the $ m $th failure occurs between $ T_{1} $ and $ T_{2} $, the experiment terminates at $ \min\{X_{r:n}, T_{2}\} $; and finally, if the $ m $th failure occurs after time $ T_{2} $, the experiment terminates at $ X_{m:n} $. Again, for later convenience, we abbreviate this scheme as UHCS$ (m, r; T_{1}, T_{2}) $. Similarly, this scheme contains six cases, and obviously, in each case some part of the data is unobservable, as follows:

$ T^{*}=\begin{cases} T_{2}, & 0 < T_{1} < X_{m:n} < T_{2} < (X_{r:n}), \\ X_{r:n}, & 0 < T_{1} < X_{m:n} < X_{r:n} < (T_{2}), \\ X_{m:n}, & 0 < T_{1} < T_{2} < X_{m:n} < (X_{r:n}), \\ T_{1}, & 0 < X_{m:n} < X_{r:n} < T_{1} < (T_{2}), \\ X_{r:n}, & 0 < X_{m:n} < T_{1} < X_{r:n} < (T_{2}), \\ T_{2}, & 0 < X_{m:n} < T_{1} < T_{2} < (X_{r:n}), \end{cases}
$
(1.4)

    where the data in parentheses are unobservable.

In this paper, we merge CHCS$ (m, r; T_{1}, T_{2}) $ and UHCS$ (m, r; T_{1}, T_{2}) $ in a unified approach, known as the combined-unified hybrid censoring scheme (C-UHCS$ (m, r; T_{1}, T_{2}) $). To the best of our knowledge, no attempt has been made at estimating the parameters of the Dagum distribution using CHCS$ (m, r; T_{1}, T_{2}) $ or UHCS$ (m, r; T_{1}, T_{2}) $; therefore, we apply C-UHCS$ (m, r; T_{1}, T_{2}) $ to the Dagum distribution. We first obtain the maximum likelihood estimates of the parameters and use them to construct asymptotic and bootstrap confidence intervals (CIs). Next, we obtain the Bayes estimates of $ \lambda, \beta $ and $ \theta $. The layout of this paper is as follows. In Section 2, we first describe the construction of the likelihood function based on C-UHCS$ (m, r; T_{1}, T_{2}) $ and obtain the MLEs of $ \lambda, \beta $ and $ \theta $. The asymptotic and bootstrap confidence intervals based on the observed Fisher information matrix are also discussed there. Next, in Section 3, we consider Bayesian estimation of the unknown parameters under the squared error and LINEX loss functions. Simulation studies are carried out in Section 4 to assess the performance of the proposed methods. Section 5 contains a brief conclusion.

Let $ X_{1}, X_{2}, ..., X_{n} $ denote the lifetimes of reliability experiment units placed on a life-test; we shall assume that the variables of this sample are iid from an absolutely continuous population with cumulative distribution function (cdf) $ F(x) $ and probability density function (pdf) $ f(x) $. In this section, we construct the likelihood function under the censoring scheme C-UHCS$ (m, r; T_{1}, T_{2}) $. Let $ D_{j} $ denote the maximum number of failures until $ T_{j}, j = 1, 2 $; obviously, $ D_{1}\leq D_{2} $. Then, the likelihood function of CHCS$ (m, r; T_{1}, T_{2}) $, for a parameter space $ \Omega $, is given as

$ L^{(C)}\left(\Omega|\textbf{x}\right)=\begin{cases} \frac{n!}{(n-m)!}\left[1-F(x_{m})\right]^{n-m}\prod\limits_{i=1}^{m}f(x_{i}); & D_{1}=0,\ldots,m-1,\ D_{2}=m, \\ \frac{n!}{(n-D_{2})!}\left[1-F(T_{2})\right]^{n-D_{2}}\prod\limits_{i=1}^{D_{2}}f(x_{i}); & D_{1},D_{2}=0,\ldots,m-1, \\ \frac{n!}{(n-r)!}\left[1-F(x_{r})\right]^{n-r}\prod\limits_{i=1}^{r}f(x_{i}); & D_{1}=D_{2}=r, \\ \frac{n!}{(n-D_{1})!}\left[1-F(T_{1})\right]^{n-D_{1}}\prod\limits_{i=1}^{D_{1}}f(x_{i}); & D_{1}=D_{2}=m,\ldots,r-1. \end{cases}
$
(2.1)

    Similarly, the observed likelihood function based on UHCS$ (m, r; T_{1}, T_{2}) $ is given as

$ L^{(U)}\left(\Omega|\textbf{x}\right)=\begin{cases} \frac{n!}{(n-D)!}\left[1-F(T_{1})\right]^{n-D}\prod\limits_{i=1}^{D}f(x_{i}); & D_{1}=D_{2}=D=r,\ldots,n, \\ \frac{n!}{(n-r)!}\left[1-F(x_{r})\right]^{n-r}\prod\limits_{i=1}^{r}f(x_{i}); & D_{1}=m,\ldots,r-1,\ D_{2}=r, \\ \frac{n!}{(n-D_{2})!}\left[1-F(T_{2})\right]^{n-D_{2}}\prod\limits_{i=1}^{D_{2}}f(x_{i}); & D_{1},D_{2}=m,\ldots,r-1, \\ \frac{n!}{(n-r)!}\left[1-F(x_{r})\right]^{n-r}\prod\limits_{i=1}^{r}f(x_{i}); & D_{1}=0,\ldots,m-1,\ D_{2}=r, \\ \frac{n!}{(n-D_{2})!}\left[1-F(T_{2})\right]^{n-D_{2}}\prod\limits_{i=1}^{D_{2}}f(x_{i}); & D_{1}=0,\ldots,m-1,\ D_{2}=m,\ldots,r-1, \\ \frac{n!}{(n-m)!}\left[1-F(x_{m})\right]^{n-m}\prod\limits_{i=1}^{m}f(x_{i}); & D_{1},D_{2}=0,\ldots,m-1. \end{cases}
$
(2.2)

Assume that, in every case, the experiment terminates at $ T $, which may be the time $ T_{1} $, the time $ T_{2} $, the observation $ x_{m} $ or the observation $ x_{r} $, and let $ k $ denote the number of failures observed up to $ T $, equal to $ D_{1} $, $ D_{2} $, $ m $ and $ r $, respectively. The likelihood function of C-UHCS$ (k, r; T_1, T_2) $, which represents all the previous likelihood functions $ L^{(C)}\left(\Omega|\textbf{x}\right) $ and $ L^{(U)}\left(\Omega|\textbf{x}\right) $ under different values of $ k $, $ T $ and $ \textbf{x}_{k} = (x_{1}, x_{2}, ..., x_{k}) $, can be written as

$ L\left(\Omega|\textbf{x}_{k}\right)=\frac{n!}{(n-k)!}\left[1-F(T)\right]^{n-k}\prod\limits_{i=1}^{k}f(x_{i}),
$
(2.3)

    where $ k $ and $ T $ can be chosen as:

| Case | Condition | $ L^{(C)}\left(\Omega|\textbf{x}\right) $: $ k $ | $ L^{(C)}\left(\Omega|\textbf{x}\right) $: $ T $ | $ L^{(U)}\left(\Omega|\textbf{x}\right) $: $ k $ | $ L^{(U)}\left(\Omega|\textbf{x}\right) $: $ T $ |
|---|---|---|---|---|---|
| 1 | $ 0 < T_{1} < X_{k:n} < T_{2} < X_{r:n} $ | $ m $ | $ X_{m:n} $ | $ D_{2} $ | $ T_{2} $ |
| 2 | $ 0 < T_{1} < X_{k:n} < X_{r:n} < T_{2} $ | $ m $ | $ X_{m:n} $ | $ r $ | $ X_{r:n} $ |
| 3 | $ 0 < T_{1} < T_{2} < X_{k:n} < X_{r:n} $ | $ D_{2} $ | $ T_{2} $ | $ m $ | $ X_{m:n} $ |
| 4 | $ 0 < X_{k:n} < X_{r:n} < T_{1} < T_{2} $ | $ r $ | $ X_{r:n} $ | $ D_{1} $ | $ T_{1} $ |
| 5 | $ 0 < X_{k:n} < T_{1} < X_{r:n} < T_{2} $ | $ D_{1} $ | $ T_{1} $ | $ r $ | $ X_{r:n} $ |
| 6 | $ 0 < X_{k:n} < T_{1} < T_{2} < X_{r:n} $ | $ D_{1} $ | $ T_{1} $ | $ D_{2} $ | $ T_{2} $ |


Suppose that $ \{x_{1}, x_{2}, ..., x_{k}\} $ is a sequence of observed data from the Dagum distribution. Substituting (1.1) and (1.2) into (2.3), the observed likelihood function of $ \lambda, \beta $ and $ \theta $ based on C-UHCS$ (k, r; T_{1}, T_{2}) $ becomes

$ L(\lambda,\beta,\theta|\textbf{x}_{k})=\frac{n!}{(n-k)!}\lambda^{k}\beta^{k}\theta^{k}\left[1-(1+\lambda T^{-\beta})^{-\theta}\right]^{n-k}\prod\limits_{i=1}^{k}x_{i}^{-(\beta+1)}\left(1+\lambda x_{i}^{-\beta}\right)^{-(\theta+1)},
$
(3.1)

    and the corresponding log-likelihood function $ (Ł) $ is

$ Ł=\log L(\lambda,\beta,\theta|\textbf{x}_{k})=\log\frac{n!}{(n-k)!}+k\log\lambda+k\log\beta+k\log\theta+(n-k)\log\left[1-(1+\lambda T^{-\beta})^{-\theta}\right]-\sum\limits_{i=1}^{k}\left[(\beta+1)\log x_{i}+(\theta+1)\log(1+\lambda x_{i}^{-\beta})\right].
$
(3.2)

Taking the first partial derivatives of the log-likelihood (3.2) with respect to $ \lambda, \beta, \theta $ and equating each to zero, with $ M_{a} = (1+\lambda T^{-\beta})^{-(\theta+a)} $, we obtain

$ \frac{\partial Ł}{\partial \lambda}=\frac{k}{\lambda}+\frac{(n-k)\theta T^{-\beta}M_{1}}{1-M_{0}}-\sum\limits_{i=1}^{k}\frac{(\theta+1)x_{i}^{-\beta}}{1+\lambda x_{i}^{-\beta}}=0,
$
(3.3)
$ \frac{\partial Ł}{\partial \beta}=\frac{k}{\beta}-\frac{(n-k)\theta\lambda T^{-\beta}M_{1}\log T}{1-M_{0}}-\sum\limits_{i=1}^{k}\left[\log x_{i}-\frac{(\theta+1)\lambda x_{i}^{-\beta}\log x_{i}}{1+\lambda x_{i}^{-\beta}}\right]=0,
$
(3.4)
$ \frac{\partial Ł}{\partial \theta}=\frac{k}{\theta}+\frac{(n-k)M_{0}\log(1+\lambda T^{-\beta})}{1-M_{0}}-\sum\limits_{i=1}^{k}\log(1+\lambda x_{i}^{-\beta})=0.
$
(3.5)

    The solutions of the above nonlinear equations are the maximum likelihood estimators of the Dagum distribution parameters $ \lambda, \beta $ and $ \theta $. As the equations expressed in (3.3), (3.4) and (3.5) cannot be solved analytically, one must use a numerical procedure to solve them.
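One convenient numerical route is to maximize the log-likelihood (3.2) directly rather than solving (3.3)-(3.5); the constant $ \log\frac{n!}{(n-k)!} $ can be dropped. The sketch below, our illustration, uses SciPy's Nelder-Mead on simulated data; the starting values and simulation settings are illustrative choices of ours:

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, x_k, n, T):
    """Negative of Eq (3.2), up to the additive constant log(n!/(n-k)!)."""
    lam, beta, theta = params
    if lam <= 0 or beta <= 0 or theta <= 0:
        return np.inf                           # keep the search in the support
    k = len(x_k)
    M0 = (1 + lam * T ** (-beta)) ** (-theta)
    ll = k * np.log(lam * beta * theta)
    if n > k:
        ll += (n - k) * np.log1p(-M0)           # censoring term [1 - F(T)]^{n-k}
    ll -= np.sum((beta + 1) * np.log(x_k) + (theta + 1) * np.log1p(lam * x_k ** (-beta)))
    return -ll

# Simulate n = 200 Dagum draws (inverse cdf) and observe those below T
rng = np.random.default_rng(0)
lam0, beta0, theta0 = 1.0, 2.0, 1.5
u = rng.uniform(size=200)
data = np.sort((lam0 / (u ** (-1 / theta0) - 1)) ** (1 / beta0))
T = data[159]                                   # stop after 160 of 200 failures
x_k = data[data <= T]

res = minimize(neg_loglik, x0=[1.0, 1.0, 1.0], args=(x_k, 200, T), method="Nelder-Mead")
```

Any other general-purpose optimizer or root-finder applied to (3.3)-(3.5) would serve equally well.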

Then, we can use the asymptotic normality of the MLEs to compute asymptotic confidence intervals for the parameters $ \lambda, \beta $ and $ \theta $. The observed variance-covariance matrix for the MLEs of the parameters, $ \hat{V} = [ \sigma_{i, j}], i, j = 1, 2, 3 $, is taken as

$ \hat{V}=\left[\begin{matrix} -\frac{\partial^{2}Ł}{\partial\lambda^{2}} & -\frac{\partial^{2}Ł}{\partial\lambda\partial\beta} & -\frac{\partial^{2}Ł}{\partial\lambda\partial\theta} \\ -\frac{\partial^{2}Ł}{\partial\beta\partial\lambda} & -\frac{\partial^{2}Ł}{\partial\beta^{2}} & -\frac{\partial^{2}Ł}{\partial\beta\partial\theta} \\ -\frac{\partial^{2}Ł}{\partial\theta\partial\lambda} & -\frac{\partial^{2}Ł}{\partial\theta\partial\beta} & -\frac{\partial^{2}Ł}{\partial\theta^{2}} \end{matrix}\right]^{-1}_{(\lambda=\hat{\lambda}_{ML},\,\beta=\hat{\beta}_{ML},\,\theta=\hat{\theta}_{ML})},
$
(3.6)

    where

$ \frac{\partial^{2}Ł}{\partial\lambda^{2}}=-\frac{k}{\lambda^{2}}-\frac{(n-k)\left[\theta^{2}M_{1}^{2}+\theta(\theta+1)(1-M_{0})M_{2}\right]T^{-2\beta}}{(1-M_{0})^{2}}+\sum\limits_{i=1}^{k}\frac{(\theta+1)x_{i}^{-2\beta}}{(1+\lambda x_{i}^{-\beta})^{2}},
$
(3.7)
$ \frac{\partial^{2}Ł}{\partial\beta\partial\lambda}=-\frac{(n-k)\theta\log T\,T^{-\beta}}{(1-M_{0})^{2}}\left\{\left[M_{1}-(\theta+1)\lambda M_{2}T^{-\beta}\right](1-M_{0})-\theta\lambda M_{1}^{2}T^{-\beta}\right\}+\sum\limits_{i=1}^{k}\frac{(\theta+1)x_{i}^{-\beta}\log x_{i}}{(1+\lambda x_{i}^{-\beta})^{2}},
$
(3.8)
$ \frac{\partial^{2}Ł}{\partial\theta\partial\lambda}=\frac{(n-k)M_{1}\left(1-M_{0}-\theta\log(1+\lambda T^{-\beta})\right)T^{-\beta}}{(1-M_{0})^{2}}-\sum\limits_{i=1}^{k}\frac{x_{i}^{-\beta}}{1+\lambda x_{i}^{-\beta}},
$
(3.9)
$ \frac{\partial^{2}Ł}{\partial\beta^{2}}=-\frac{k}{\beta^{2}}-\sum\limits_{i=1}^{k}\frac{(\theta+1)\lambda x_{i}^{-\beta}\log^{2}x_{i}}{(1+\lambda x_{i}^{-\beta})^{2}}+\frac{(n-k)\log^{2}T\,T^{-\beta}}{(1-M_{0})^{2}}\left\{\theta\lambda\left(M_{1}-(\theta+1)M_{2}\lambda T^{-\beta}\right)(1-M_{0})-T^{-\beta}(\theta\lambda M_{1})^{2}\right\},
$
(3.10)
$ \frac{\partial^{2}Ł}{\partial\theta\partial\beta}=\frac{(n-k)\lambda T^{-\beta}M_{1}\log T\left(\theta\log(1+\lambda T^{-\beta})+M_{0}-1\right)}{(1-M_{0})^{2}}+\sum\limits_{i=1}^{k}\frac{\lambda x_{i}^{-\beta}\log x_{i}}{1+\lambda x_{i}^{-\beta}},
$
(3.11)
$ \frac{\partial^{2}Ł}{\partial\theta^{2}}=-\frac{k}{\theta^{2}}-\frac{(n-k)M_{0}\log^{2}(1+\lambda T^{-\beta})}{(1-M_{0})^{2}}.
$
(3.12)

The $ 100(1-\alpha)\% $ two-sided approximate confidence intervals for the parameters $ \lambda $, $ \beta $ and $ \theta $ are then given by

$ \hat{\lambda}\pm z_{\alpha/2}\sqrt{V(\hat{\lambda})},
$
(3.13)
$ \hat{\beta}\pm z_{\alpha/2}\sqrt{V(\hat{\beta})},
$
(3.14)

and

$ \hat{\theta}\pm z_{\alpha/2}\sqrt{V(\hat{\theta})},
$
(3.15)

    respectively, where $ V(\widehat{\lambda }), $ $ V(\widehat{\beta }), $ and $ V(\widehat{\theta }) \; $ are the estimated variances of $ \widehat{\lambda } _{ML}, $ $ \widehat{\beta }_{ML} $ and $ \widehat{\theta }_{ML} $, which are given by the diagonal elements of $ \widehat{V} $, and $ z_{\alpha /2} $ is the upper $ \left(\frac{\alpha}{2}\right) $ percentile of standard normal distribution.
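In practice, the second derivatives (3.7)-(3.12) can be evaluated at the MLEs, or the observed information can be approximated by finite differences of the log-likelihood. The generic sketch below, our illustration, builds Wald intervals of the form (3.13)-(3.15) from any log-likelihood function:

```python
import numpy as np
from scipy.stats import norm

def wald_intervals(loglik, mle, alpha=0.05, h=1e-4):
    """100(1-alpha)% intervals mle_i +/- z_{alpha/2}*sqrt(V_ii), with V the
    inverse of a finite-difference observed information matrix at the MLE."""
    p = len(mle)
    H = np.empty((p, p))
    for i in range(p):
        for j in range(p):
            ei = np.zeros(p); ei[i] = h
            ej = np.zeros(p); ej[j] = h
            # central finite-difference estimate of the (i, j) second derivative
            H[i, j] = (loglik(mle + ei + ej) - loglik(mle + ei - ej)
                       - loglik(mle - ei + ej) + loglik(mle - ei - ej)) / (4 * h * h)
    V = np.linalg.inv(-H)                  # as in Eq (3.6)
    z = norm.ppf(1 - alpha / 2)
    se = np.sqrt(np.diag(V))
    return mle - z * se, mle + z * se
```

For the Dagum model, `loglik` would be the log-likelihood (3.2) as a function of $ (\lambda, \beta, \theta) $.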

In order to construct the bootstrap confidence intervals (Boot-p) for the unknown parameters $ \phi = (\lambda, \beta $, $ \theta $) based on the C-UHCS scheme, we apply the following algorithm (for more details, one may refer to Kundu and Joarder [25] and Dube, Garg and Krishna [26]).

    Boot-p interval's Algorithm:

    step-1: Simulate $ x_{1:n}, x_{2:n}, ..., x_{k:n} $ from Dagum distribution given in (1.1) and derive an estimate $ \hat{\phi} $ of $ {\phi} $.

    step-2: Simulate another sample $ x^{*}_{1:n}, x^{*}_{2:n}, ..., x^{*}_{k:n} $ using $ \hat{\phi} $, $ k $ and $ T $. Then derive the updated bootstrap estimate $ \hat{\phi}^{*} $ of $ {\phi} $.

step-3: Repeat the previous step a prescribed number $ B $ of times.

    step-4: With $ \hat{F}(x) = P(\hat{\phi}^{*} \leq x) $ denoting the distribution function of $ \hat{\phi}^{*} $, the $ 100(1 - \alpha)\% $ confidence interval of $ {\phi} $ is given by

    $ \left(\hat{\phi}_{Boot-p} (\frac{\alpha}{2}),\hat{\phi}_{Boot-p} (1-\frac{\alpha}{2})\right), $

    where $ \hat{\phi }_{Boot-p} (x) = \hat{F}^{-1}(x) $ and $ x $ is prefixed.
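The Boot-p steps above amount to a parametric bootstrap with percentile intervals. A generic sketch of ours, where `simulate` and `fit` are user-supplied callables standing in for steps 1-2:

```python
import numpy as np

def boot_p_interval(phi_hat, simulate, fit, B=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI: resimulate B datasets under phi_hat, refit each,
    and return the empirical alpha/2 and 1-alpha/2 quantiles (step-4)."""
    rng = np.random.default_rng(seed)
    boot = np.array([fit(simulate(phi_hat, rng)) for _ in range(B)])
    return tuple(np.quantile(boot, [alpha / 2, 1 - alpha / 2]))

# Toy usage with an exponential mean, where fitting is simply the sample mean:
lo, hi = boot_p_interval(2.0,
                         simulate=lambda mean, rng: rng.exponential(mean, size=100),
                         fit=np.mean)
```

For the Dagum model, `simulate` would draw a C-UHCS censored sample from (1.1) under $ \hat{\phi} $, and `fit` would return the MLE of the parameter of interest.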

Bayesian inference is a convenient method to use with C-UHCS$ (k, r; T_{1}, T_{2}) $. Indeed, given that data observed under C-UHCS$ (k, r; T_{1}, T_{2}) $ are so scarce, prior information is welcome. Loss functions are chosen depending on how one measures the distance between the estimate and the unknown parameter. In order to conduct the Bayesian analysis, a quadratic loss function is usually considered. A very popular quadratic loss is the squared error (SE) loss function given by

$ L_{S}\left(g(\varphi),\hat{g}(\varphi)\right)=\left(g(\varphi)-\hat{g}(\varphi)\right)^{2},
$
(4.1)

where $ \hat{g}(\varphi) $ is an estimate of the parametric function $ {g}(\varphi) $. The Bayes estimate of $ {g}(\varphi) $ against the SE loss function, say $ \hat{g}_{S}(\varphi) $, is the posterior mean, given by

$ \hat{g}_{S}(\varphi)=E_{\varphi}\left[g(\varphi)|\textbf{x}_{k}\right].
$
(4.2)

Using the SE loss function in the Bayesian approach leads to equal penalization of underestimation and overestimation, which is inappropriate for practical purposes. For instance, in estimating reliability characteristics, overestimation is more serious than underestimation. Therefore, asymmetric loss functions, such as the LINEX loss function, are considered by researchers. The LINEX loss function, given by

$ L_{L}\left(g(\varphi),\hat{g}(\varphi)\right)=\exp\left[\rho\left(\hat{g}(\varphi)-g(\varphi)\right)\right]-\rho\left(\hat{g}(\varphi)-g(\varphi)\right)-1,\quad\rho\neq 0,
$
(4.3)

is a popular asymmetric loss function that penalizes underestimation and overestimation for negative and positive values of $ \rho $, respectively. For $ \rho $ close to zero, the LINEX loss is approximately equal to the SE loss and therefore almost symmetric. The Bayes estimate of $ g(\varphi) $ under the LINEX loss function becomes

$ \hat{g}_{L}(\varphi)=-\frac{1}{\rho}\log\left(E_{\varphi}\left[\exp(-\rho g(\varphi))|\textbf{x}_{k}\right]\right).
$
(4.4)
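Given draws from the posterior of $ g(\varphi) $, the two estimates (4.2) and (4.4) reduce to simple Monte Carlo averages. A small sketch of ours of the contrast:

```python
import numpy as np

def bayes_se(draws):
    # Eq (4.2): the posterior mean
    return float(np.mean(draws))

def bayes_linex(draws, rho):
    # Eq (4.4): -(1/rho) * log E[exp(-rho * g)]
    return float(-np.log(np.mean(np.exp(-rho * np.asarray(draws)))) / rho)
```

For $ \rho > 0 $ the LINEX estimate sits below the posterior mean (overestimation is penalized more heavily), and it approaches the SE estimate as $ \rho \to 0 $.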

Here, we derive the Bayes estimates using the loss functions mentioned above. Under the assumption that the parameters $ \lambda $, $ \beta $ and $ \theta $ are unknown and independent, we assume the joint prior density function suggested by Al-Hussaini et al. [27], which gave good results:

$ \pi(\lambda,\beta,\theta)=\nu_{1}\nu_{2}\nu_{3}\exp\left[-(\nu_{1}\lambda+\nu_{2}\beta+\nu_{3}\theta)\right],\quad\lambda,\beta,\theta>0,
$
(4.5)

    where $ \nu_{1}, \nu_{2} $ and $ \nu_{3} $ are positive constants.

    In order to use Tierney-Kadane's approximation technique, we set

$ \phi(\lambda,\beta,\theta)=\frac{1}{n}\log\left[L(\lambda,\beta,\theta|\textbf{x}_{k})\pi(\lambda,\beta,\theta)\right]\ \text{ and }\ \phi^{(g)}(\lambda,\beta,\theta)=\phi(\lambda,\beta,\theta)+\frac{1}{n}\log g(\lambda,\beta,\theta).
$
(4.6)

Now, assuming the squared error loss function, the Bayes estimate of the function of parameters $ {g}(\lambda, \beta, \theta) $ can be written in terms of (4.6) as

$ \hat{g}_{ST}(\lambda,\beta,\theta)=\frac{\int_{0}^{\infty}\int_{0}^{\infty}\int_{0}^{\infty}g(\lambda,\beta,\theta)L(\lambda,\beta,\theta|\textbf{x}_{k})\pi(\lambda,\beta,\theta)\,d\lambda\,d\beta\,d\theta}{\int_{0}^{\infty}\int_{0}^{\infty}\int_{0}^{\infty}L(\lambda,\beta,\theta|\textbf{x}_{k})\pi(\lambda,\beta,\theta)\,d\lambda\,d\beta\,d\theta}=\frac{\int_{0}^{\infty}\int_{0}^{\infty}\int_{0}^{\infty}\exp\left(n\phi^{(g)}(\lambda,\beta,\theta)\right)d\lambda\,d\beta\,d\theta}{\int_{0}^{\infty}\int_{0}^{\infty}\int_{0}^{\infty}\exp\left(n\phi(\lambda,\beta,\theta)\right)d\lambda\,d\beta\,d\theta}.
$
(4.7)

    By using Tierney and Kadane [28], the approximate form of (4.7) becomes

$ \hat{g}_{ST}(\lambda,\beta,\theta)=\left[\frac{\det H^{(g)}}{\det H}\right]^{\frac{1}{2}}\exp\left(n\left[\phi^{(g)}(\overline{\lambda}^{(g)},\overline{\beta}^{(g)},\overline{\theta}^{(g)})-\phi(\overline{\lambda},\overline{\beta},\overline{\theta})\right]\right),
$
(4.8)

where $ (\overline{\lambda}^{(g)}, \overline{\beta}^{(g)}, \overline{\theta}^{(g)}) $ and $ (\overline{\lambda}, \overline{\beta}, \overline{\theta}) $ maximize $ \phi^{(g)}(\lambda, \beta, \theta) $ and $ \phi (\lambda, \beta, \theta) $, respectively, and $ H^{(g)} $ and $ H $ are minus the inverse Hessian matrices of $ \phi^{(g)}(\lambda, \beta, \theta) $ and $ \phi (\lambda, \beta, \theta) $ at $ (\overline{\lambda}^{(g)}, \overline{\beta}^{(g)}, \overline{\theta}^{(g)}) $ and $ (\overline{\lambda}, \overline{\beta}, \overline{\theta}) $, respectively. Here, from (3.2), (4.5) and (4.6), we have

$ \phi(\lambda,\beta,\theta)=\frac{1}{n}\left\{\log\nu_{1}\nu_{2}\nu_{3}-\nu_{1}\lambda-\nu_{2}\beta-\nu_{3}\theta+Ł\right\}.
$
(4.9)

    Now, $ (\overline{\lambda}, \overline{\beta}, \overline{\theta}) $ can be calculated from the simultaneous solution of the nonlinear equations

$ \frac{\partial}{\partial\lambda}Ł(\lambda,\beta,\theta)=\nu_{1},\quad\frac{\partial}{\partial\beta}Ł(\lambda,\beta,\theta)=\nu_{2}\ \text{ and }\ \frac{\partial}{\partial\theta}Ł(\lambda,\beta,\theta)=\nu_{3}.
$
(4.10)

    The second order derivatives of $ Ł $, given in (3.7)-(3.12) can be used to determine the determinant of the negative of the inverse Hessian matrix of $ \phi(\lambda, \beta, \theta) $ at $ (\overline{\lambda}, \overline{\beta}, \overline{\theta}) $ as

$ \det H=\frac{1}{n^{3}}\det\left[\begin{matrix} \frac{\partial^{2}Ł}{\partial\lambda^{2}} & \frac{\partial^{2}Ł}{\partial\lambda\partial\beta} & \frac{\partial^{2}Ł}{\partial\lambda\partial\theta} \\ \frac{\partial^{2}Ł}{\partial\beta\partial\lambda} & \frac{\partial^{2}Ł}{\partial\beta^{2}} & \frac{\partial^{2}Ł}{\partial\beta\partial\theta} \\ \frac{\partial^{2}Ł}{\partial\theta\partial\lambda} & \frac{\partial^{2}Ł}{\partial\theta\partial\beta} & \frac{\partial^{2}Ł}{\partial\theta^{2}} \end{matrix}\right]^{-1}_{(\lambda=\overline{\lambda},\,\beta=\overline{\beta},\,\theta=\overline{\theta})}.
$
(4.11)

Then, the Bayesian estimates of $ \lambda, \beta $ and $ \theta $ based on the squared error loss function can be obtained by replacing $ g(\lambda, \beta, \theta) $ by $ \lambda, \beta $ and $ \theta $, respectively; the corresponding $ \phi^{(g)}_{ST} (\lambda, \beta, \theta) $ takes the forms:

$ \phi^{(g)}_{ST}(\lambda,\beta,\theta)=\begin{cases} \phi^{(\lambda)}_{ST}(\lambda,\beta,\theta)=\phi(\lambda,\beta,\theta)+\frac{1}{n}\log\lambda, & g(\lambda,\beta,\theta)=\lambda, \\ \phi^{(\beta)}_{ST}(\lambda,\beta,\theta)=\phi(\lambda,\beta,\theta)+\frac{1}{n}\log\beta, & g(\lambda,\beta,\theta)=\beta, \\ \phi^{(\theta)}_{ST}(\lambda,\beta,\theta)=\phi(\lambda,\beta,\theta)+\frac{1}{n}\log\theta, & g(\lambda,\beta,\theta)=\theta. \end{cases}
$
(4.12)

Hence, $ (\overline{\lambda}_{ST}^{(\lambda)}, \overline{\beta}_{ST}^{(\lambda)}, \overline{\theta}_{ST}^{(\lambda)}) $, $ (\overline{\lambda}_{ST}^{(\beta)}, \overline{\beta}_{ST}^{(\beta)}, \overline{\theta}_{ST}^{(\beta)}) $ and $ (\overline{\lambda}_{ST}^{(\theta)}, \overline{\beta}_{ST}^{(\theta)}, \overline{\theta}_{ST}^{(\theta)}) $ can be computed by maximizing $ \phi^{(\lambda)} _{ST}(\lambda, \beta, \theta) $, $ \phi^{(\beta)} _{ST}(\lambda, \beta, \theta) $ and $ \phi^{(\theta)} _{ST}(\lambda, \beta, \theta) $, respectively, through the simultaneous solution of each of the following systems:

    System 1: $ \frac{\partial}{\partial \lambda}Ł(\lambda, \beta, \theta) -\nu_{1}+\frac{1}{n\lambda} = 0, \quad \frac{\partial{ }}{\partial \beta}Ł(\lambda, \beta, \theta) -\nu_{2} = 0 $ and $ \frac{\partial{}}{\partial \theta} Ł(\lambda, \beta, \theta) -\nu_{3} = 0, $

    System 2: $ \frac{\partial}{\partial \lambda}Ł(\lambda, \beta, \theta) -\nu_{1} = 0, \quad \frac{\partial{ }}{\partial \beta}Ł(\lambda, \beta, \theta) -\nu_{2}+\frac{1}{n\beta} = 0 $ and $ \frac{\partial{}}{\partial \theta} Ł(\lambda, \beta, \theta) -\nu_{3} = 0, $

    System 3: $ \frac{\partial}{\partial \lambda}Ł(\lambda, \beta, \theta) -\nu_{1} = 0, \quad \frac{\partial{ }}{\partial \beta}Ł(\lambda, \beta, \theta) -\nu_{2} = 0 $ and $ \frac{\partial{}}{\partial \theta} Ł(\lambda, \beta, \theta) -\nu_{3}+\frac{1}{n\theta} = 0. $

Again, the second order derivatives of $ \phi^{(\lambda)} _{ST}(\lambda, \beta, \theta) $, $ \phi^{(\beta)} _{ST}(\lambda, \beta, \theta) $ and $ \phi^{(\theta)} _{ST}(\lambda, \beta, \theta) $ at $ (\overline{\lambda}_{ST}^{(\lambda)}, \overline{\beta}_{ST}^{(\lambda)}, \overline{\theta}_{ST}^{(\lambda)}) $, $ (\overline{\lambda}_{ST}^{(\beta)}, \overline{\beta}_{ST}^{(\beta)}, \overline{\theta}_{ST}^{(\beta)}) $ and $ (\overline{\lambda}_{ST}^{(\theta)}, \overline{\beta}_{ST}^{(\theta)}, \overline{\theta}_{ST}^{(\theta)}) $ can be used to calculate the determinants of $ H^{(\lambda)}_{ST} $, $ H^{(\beta)}_{ST} $ and $ H^{(\theta)}_{ST} $, respectively, as:

$ \det H^{(\lambda)}_{ST}=\frac{1}{n^{3}}\det\left[\begin{matrix} \frac{\partial^{2}Ł}{\partial\lambda^{2}}-\frac{1}{\lambda^{2}} & \frac{\partial^{2}Ł}{\partial\lambda\partial\beta} & \frac{\partial^{2}Ł}{\partial\lambda\partial\theta} \\ \frac{\partial^{2}Ł}{\partial\beta\partial\lambda} & \frac{\partial^{2}Ł}{\partial\beta^{2}} & \frac{\partial^{2}Ł}{\partial\beta\partial\theta} \\ \frac{\partial^{2}Ł}{\partial\theta\partial\lambda} & \frac{\partial^{2}Ł}{\partial\theta\partial\beta} & \frac{\partial^{2}Ł}{\partial\theta^{2}} \end{matrix}\right]^{-1}_{(\lambda=\overline{\lambda}^{(\lambda)}_{ST},\,\beta=\overline{\beta}^{(\lambda)}_{ST},\,\theta=\overline{\theta}^{(\lambda)}_{ST})},
$
(4.13)
$ \det H^{(\beta)}_{ST}=\frac{1}{n^{3}}\det\left[\begin{matrix} \frac{\partial^{2}Ł}{\partial\lambda^{2}} & \frac{\partial^{2}Ł}{\partial\lambda\partial\beta} & \frac{\partial^{2}Ł}{\partial\lambda\partial\theta} \\ \frac{\partial^{2}Ł}{\partial\beta\partial\lambda} & \frac{\partial^{2}Ł}{\partial\beta^{2}}-\frac{1}{\beta^{2}} & \frac{\partial^{2}Ł}{\partial\beta\partial\theta} \\ \frac{\partial^{2}Ł}{\partial\theta\partial\lambda} & \frac{\partial^{2}Ł}{\partial\theta\partial\beta} & \frac{\partial^{2}Ł}{\partial\theta^{2}} \end{matrix}\right]^{-1}_{(\lambda=\overline{\lambda}^{(\beta)}_{ST},\,\beta=\overline{\beta}^{(\beta)}_{ST},\,\theta=\overline{\theta}^{(\beta)}_{ST})},
$
(4.14)

    and

$ \det H^{(\theta)}_{ST}=\frac{1}{n^{3}}\det\left[\begin{matrix} \frac{\partial^{2}Ł}{\partial\lambda^{2}} & \frac{\partial^{2}Ł}{\partial\lambda\partial\beta} & \frac{\partial^{2}Ł}{\partial\lambda\partial\theta} \\ \frac{\partial^{2}Ł}{\partial\beta\partial\lambda} & \frac{\partial^{2}Ł}{\partial\beta^{2}} & \frac{\partial^{2}Ł}{\partial\beta\partial\theta} \\ \frac{\partial^{2}Ł}{\partial\theta\partial\lambda} & \frac{\partial^{2}Ł}{\partial\theta\partial\beta} & \frac{\partial^{2}Ł}{\partial\theta^{2}}-\frac{1}{\theta^{2}} \end{matrix}\right]^{-1}_{(\lambda=\overline{\lambda}^{(\theta)}_{ST},\,\beta=\overline{\beta}^{(\theta)}_{ST},\,\theta=\overline{\theta}^{(\theta)}_{ST})}.
$
(4.15)

Therefore, the approximate Bayes estimates of $ \lambda, \beta $ and $ \theta $ based on the squared error loss function are:

$ \left.\begin{aligned} \hat{\lambda}_{ST}&=\left[\frac{\det H^{(\lambda)}_{ST}}{\det H}\right]^{\frac{1}{2}}\exp\left(n\left[\phi^{(\lambda)}_{ST}(\overline{\lambda}^{(\lambda)}_{ST},\overline{\beta}^{(\lambda)}_{ST},\overline{\theta}^{(\lambda)}_{ST})-\phi(\overline{\lambda},\overline{\beta},\overline{\theta})\right]\right), \\ \hat{\beta}_{ST}&=\left[\frac{\det H^{(\beta)}_{ST}}{\det H}\right]^{\frac{1}{2}}\exp\left(n\left[\phi^{(\beta)}_{ST}(\overline{\lambda}^{(\beta)}_{ST},\overline{\beta}^{(\beta)}_{ST},\overline{\theta}^{(\beta)}_{ST})-\phi(\overline{\lambda},\overline{\beta},\overline{\theta})\right]\right), \\ \hat{\theta}_{ST}&=\left[\frac{\det H^{(\theta)}_{ST}}{\det H}\right]^{\frac{1}{2}}\exp\left(n\left[\phi^{(\theta)}_{ST}(\overline{\lambda}^{(\theta)}_{ST},\overline{\beta}^{(\theta)}_{ST},\overline{\theta}^{(\theta)}_{ST})-\phi(\overline{\lambda},\overline{\beta},\overline{\theta})\right]\right). \end{aligned}\right\}
$
(4.16)

Next, in order to obtain the Bayesian estimates of $ \lambda, \beta $ and $ \theta $ based on the LINEX loss function, we replace $ g(\lambda, \beta, \theta) $ by $ e^{-\rho \lambda}, e^{-\rho \beta} $ and $ e^{-\rho\theta} $, respectively; the corresponding $ \phi^{(g)}_{LT} (\lambda, \beta, \theta) $ takes the forms:

$ \phi^{(g)}_{LT}(\lambda,\beta,\theta)=\begin{cases} \phi^{(\lambda)}_{LT}(\lambda,\beta,\theta)=\phi(\lambda,\beta,\theta)-\frac{\rho\lambda}{n}, & g(\lambda,\beta,\theta)=e^{-\rho\lambda}, \\ \phi^{(\beta)}_{LT}(\lambda,\beta,\theta)=\phi(\lambda,\beta,\theta)-\frac{\rho\beta}{n}, & g(\lambda,\beta,\theta)=e^{-\rho\beta}, \\ \phi^{(\theta)}_{LT}(\lambda,\beta,\theta)=\phi(\lambda,\beta,\theta)-\frac{\rho\theta}{n}, & g(\lambda,\beta,\theta)=e^{-\rho\theta}. \end{cases}
$
(4.17)

    Hence, $ (\overline{\lambda}_{LT}^{(\lambda)}, \overline{\beta}_{LT}^{(\lambda)}, \overline{\theta}_{LT}^{(\lambda)}) $, $ (\overline{\lambda}_{LT}^{(\beta)}, \overline{\beta}_{LT}^{(\beta)}, \overline{\theta}_{LT}^{(\beta)}) $ and $ (\overline{\lambda}_{LT}^{(\theta)}, \overline{\beta}_{LT}^{(\theta)}, \overline{\theta}_{LT}^{(\theta)}) $ can be computed by maximizing $ \phi^{(\lambda)} _{LT}(\lambda, \beta, \theta) $, $ \phi^{(\beta)} _{LT}(\lambda, \beta, \theta) $ and $ \phi^{(\theta)} _{LT}(\lambda, \beta, \theta) $, respectively, through solving simultaneously the following systems:

    System 4: $ \frac{\partial}{\partial \lambda}Ł(\lambda, \beta, \theta) -\nu_{1}-\frac{\rho}{n} = 0, \quad \frac{\partial{ }}{\partial \beta}Ł(\lambda, \beta, \theta) -\nu_{2} = 0 $ and $ \frac{\partial{}}{\partial \theta} Ł(\lambda, \beta, \theta) -\nu_{3} = 0. $

    System 5: $ \frac{\partial}{\partial \lambda}Ł(\lambda, \beta, \theta) -\nu_{1} = 0, \quad \frac{\partial{ }}{\partial \beta}Ł(\lambda, \beta, \theta) -\nu_{2}-\frac{\rho}{n} = 0 $ and $ \frac{\partial{}}{\partial \theta} Ł(\lambda, \beta, \theta) -\nu_{3} = 0, $

    System 6: $ \frac{\partial}{\partial \lambda}Ł(\lambda, \beta, \theta) -\nu_{1} = 0, \quad \frac{\partial{ }}{\partial \beta}Ł(\lambda, \beta, \theta) -\nu_{2} = 0 $ and $ \frac{\partial{}}{\partial \theta} Ł(\lambda, \beta, \theta) -\nu_{3}-\frac{\rho}{n} = 0. $

Once again, we can derive $ H^{(\lambda)}_{LT} = H^{(\beta)}_{LT} = H^{(\theta)}_{LT} = H_{LT} $ by calculating the second order derivatives of $ \phi^{(\lambda)} _{LT}(\lambda, \beta, \theta) $, $ \phi^{(\beta)} _{LT}(\lambda, \beta, \theta) $ and $ \phi^{(\theta)} _{LT}(\lambda, \beta, \theta) $ at $ (\overline{\lambda}_{LT}^{(\lambda)}, \overline{\beta}_{LT}^{(\lambda)}, \overline{\theta}_{LT}^{(\lambda)}) $, $ (\overline{\lambda}_{LT}^{(\beta)}, \overline{\beta}_{LT}^{(\beta)}, \overline{\theta}_{LT}^{(\beta)}) $ and $ (\overline{\lambda}_{LT}^{(\theta)}, \overline{\beta}_{LT}^{(\theta)}, \overline{\theta}_{LT}^{(\theta)}) $, in the same manner as in (4.12)-(4.15). Therefore, the approximate Bayes estimates of $ \lambda, \beta $ and $ \theta $ based on the LINEX loss function are:

$ \left.\begin{aligned} \hat{\lambda}_{LT}&=-\frac{1}{\rho}\log\left(\left[\frac{\det H_{LT}}{\det H}\right]^{\frac{1}{2}}\exp\left(n\left[\phi^{(\lambda)}_{LT}(\overline{\lambda}^{(\lambda)}_{LT},\overline{\beta}^{(\lambda)}_{LT},\overline{\theta}^{(\lambda)}_{LT})-\phi(\overline{\lambda},\overline{\beta},\overline{\theta})\right]\right)\right), \\ \hat{\beta}_{LT}&=-\frac{1}{\rho}\log\left(\left[\frac{\det H_{LT}}{\det H}\right]^{\frac{1}{2}}\exp\left(n\left[\phi^{(\beta)}_{LT}(\overline{\lambda}^{(\beta)}_{LT},\overline{\beta}^{(\beta)}_{LT},\overline{\theta}^{(\beta)}_{LT})-\phi(\overline{\lambda},\overline{\beta},\overline{\theta})\right]\right)\right), \\ \hat{\theta}_{LT}&=-\frac{1}{\rho}\log\left(\left[\frac{\det H_{LT}}{\det H}\right]^{\frac{1}{2}}\exp\left(n\left[\phi^{(\theta)}_{LT}(\overline{\lambda}^{(\theta)}_{LT},\overline{\beta}^{(\theta)}_{LT},\overline{\theta}^{(\theta)}_{LT})-\phi(\overline{\lambda},\overline{\beta},\overline{\theta})\right]\right)\right). \end{aligned}\right\}
$
(4.18)

In order to calculate $ 100(1-\alpha)\% $ HPD credible intervals for the Bayesian estimates under both the SE and LINEX loss functions for any parameter, say $ \delta $, we follow the steps below:

    HPD credible interval:

    1. Simulate censored sample of size $ n $ from Dagum distribution given in (1.1) and calculate the estimate of $ \delta $ under a certain choice of $ k, r, T_1 $ and $ T_2 $, say $ \delta^* $.

    2. Repeat the previous step M times to get $ \delta^*_1, \delta^*_2, \dots, \delta^*_M $, and the order values are: $ \delta^*_{1:M}, \delta^*_{2:M}, \dots, \delta^*_{M:M} $

    3. The $ 100(1-\alpha)\% $ HPD credible interval for $ \delta $ is the shortest-length interval among the intervals $ (\delta^*_{j:M}, \delta^*_{j+(1-\alpha)M:M}), j = 1, 2, \dots, \alpha M. $
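As an illustration, the three steps above can be sketched in Python. The sampler inverts the Dagum cdf $ F(x) = (1+\lambda x^{-\beta})^{-\theta} $; the function names, the use of the sample median as the estimate of $ \delta $, and the parameter values are illustrative, not the paper's estimators:

```python
import numpy as np

def dagum_rvs(lam, beta, theta, size, rng=None):
    # Inverse-transform sampling from the Dagum cdf F(x) = (1 + lam*x**(-beta))**(-theta):
    # solving u = F(x) for x gives x = (lam / (u**(-1/theta) - 1))**(1/beta).
    rng = np.random.default_rng(rng)
    u = rng.uniform(size=size)
    return (lam / (u ** (-1.0 / theta) - 1.0)) ** (1.0 / beta)

def hpd_interval(samples, alpha=0.05):
    # Shortest-length interval among (delta*_{j:M}, delta*_{j+(1-alpha)M:M}), steps 2-3.
    s = np.sort(np.asarray(samples))
    M = len(s)
    w = int(np.floor((1.0 - alpha) * M))   # offset between lower and upper order statistics
    widths = s[w:] - s[:M - w]             # widths of all candidate intervals
    j = int(np.argmin(widths))             # index of the shortest candidate
    return s[j], s[j + w]

# Illustration: HPD interval for the sample median of Dagum(5, 2, 2) data, n = 30
est = [np.median(dagum_rvs(5.0, 2.0, 2.0, size=30, rng=i)) for i in range(500)]
lo, hi = hpd_interval(est, alpha=0.05)
```

By construction the returned interval contains a fraction $ 1-\alpha $ of the replicated estimates while having the smallest width among all such intervals.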

    In the previous subsection, we used Tierney-Kadane's approximation to derive the Bayes estimates of the parameters. However, HPD credible intervals cannot be obtained with that method. In this subsection, we adopt a Metropolis-Hastings within Gibbs sampling approach to generate random samples from the conditional densities of the parameters and use them to obtain the HPD credible intervals and point Bayes estimates. From (3.1) and (4.5), the posterior density of $ \lambda, \beta $ and $ \theta $ can be extracted as

    $ \pi^*(\lambda, \beta, \theta) \propto \lambda^{k}\beta^{k}\theta^{k}\exp\left[-\left(\nu_{1}\lambda+\nu_{2}\beta+\nu_{3}\theta\right)\right]\times\left[1-\left(1+\lambda T^{-\beta}\right)^{-\theta}\right]^{n-k}\prod\limits_{i = 1}^{k}x_{i}^{-(\beta+1)}\left(1+\lambda x_{i}^{-\beta}\right)^{-(\theta+1)}.
    $
    (4.19)

    In the following algorithm, we employ the Metropolis-Hastings (M-H) technique with a normal proposal distribution to generate samples from this posterior.

    1. Start with initial values of the parameters ($ \lambda^{(0)}, \beta^{(0)}, \theta^{(0)} $). Then, simulate a censored sample of size $ k $ under a certain choice of $ m, r, T_1 $ and $ T_2 $ from the Dagum distribution given in (1.1) and set $ l = 1 $.

    2. Generate $ \lambda^{(*)}, \beta^{(*)}, \theta^{(*)} $ using the proposal distributions $ N(\lambda^{(l-1)}, 1), N(\beta^{(l-1)}, 1) $ and $ N(\theta^{(l-1)}, 1) $, respectively.

    3. Calculate the acceptance probability $ r = \min\left(1, \frac{ \pi^*(\lambda^{(*)}, \beta^{(*)}, \theta^{(*)})}{ \pi^*(\lambda^{(l-1)}, \beta^{(l-1)}, \theta^{(l-1)})}\right) $.

    4. Generate $ U $ from uniform(0, 1).

    5. Accept the proposal and set $ \left(\lambda^{(l)}, \beta^{(l)}, \theta^{(l)}\right) = \left(\lambda^{(*)}, \beta^{(*)}, \theta^{(*)}\right) $ if $ U < r $. Otherwise, reject it and set $ \left(\lambda^{(l)}, \beta^{(l)}, \theta^{(l)}\right) = \left(\lambda^{(l-1)}, \beta^{(l-1)}, \theta^{(l-1)}\right) $.

    6. Set $ l = l+1 $.

    7. Repeat Steps 2-6, M times, and obtain $ \lambda^{(l)}, \beta^{(l)} $ and $ \theta^{(l)} $ for $ l = 1, ..., M $.
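A compact sketch of this sampler, with the log of the posterior kernel (4.19) coded directly. The observed failures `x`, the full sample size `n`, the censoring time `T` and the hyper-parameter values are illustrative inputs, and for numerical stability the acceptance test of steps 3-5 is carried out on the log scale:

```python
import numpy as np

def log_post(lam, beta, theta, x, n, T, nu=(0.2, 0.5, 0.5)):
    # Log of the posterior kernel (4.19); x holds the k observed failures,
    # n is the full sample size and T the censoring time.
    if min(lam, beta, theta) <= 0:
        return -np.inf                       # parameters must be positive
    k = len(x)
    s = 1.0 - (1.0 + lam * T ** (-beta)) ** (-theta)   # censored-survival factor
    return (k * (np.log(lam) + np.log(beta) + np.log(theta))
            - (nu[0] * lam + nu[1] * beta + nu[2] * theta)
            + (n - k) * np.log(s)
            - (beta + 1.0) * np.sum(np.log(x))
            - (theta + 1.0) * np.sum(np.log1p(lam * x ** (-beta))))

def mh_sampler(x, n, T, M=5000, init=(1.0, 1.0, 1.0), rng=None):
    # Metropolis-Hastings with N(current, 1) proposals, following steps 1-7.
    rng = np.random.default_rng(rng)
    chain = np.empty((M, 3))
    cur = np.array(init, dtype=float)
    cur_lp = log_post(*cur, x, n, T)
    for l in range(M):
        prop = rng.normal(cur, 1.0)                     # step 2: normal proposals
        prop_lp = log_post(*prop, x, n, T)
        if np.log(rng.uniform()) < prop_lp - cur_lp:    # steps 3-5 on the log scale
            cur, cur_lp = prop, prop_lp
        chain[l] = cur                                  # step 6: record current state
    return chain
```

Negative proposals receive log-posterior $ -\infty $ and are always rejected, so the chain never leaves the positive orthant.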

    By using the random samples generated from the above Gibbs sampling technique, and with $ N $ denoting the number of burn-in iterations, the approximate Bayes estimates of the parameters under the squared error and LINEX loss functions can be obtained as

    $ \hat{\lambda}_{SM} = \frac{1}{M-N}\sum\limits_{l = N+1}^{M}\lambda^{(l)},\qquad \hat{\beta}_{SM} = \frac{1}{M-N}\sum\limits_{l = N+1}^{M}\beta^{(l)},\qquad \hat{\theta}_{SM} = \frac{1}{M-N}\sum\limits_{l = N+1}^{M}\theta^{(l)}, $
    $ \hat{\lambda}_{LM} = \frac{-1}{\rho}\log\left(\frac{\sum\limits_{l = N+1}^{M}exp\left(-\rho \lambda^{(l)}\right)}{M-N} \right),\qquad \hat{\beta}_{LM} = \frac{-1}{\rho}\log\left(\frac{\sum\limits_{l = N+1}^{M}exp\left(-\rho \beta^{(l)}\right)}{M-N} \right), $

    and

    $ \hat{\theta}_{LM} = \frac{-1}{\rho}\log\left(\frac{\sum\limits_{l = N+1}^{M}exp\left(-\rho \theta^{(l)}\right)}{M-N} \right). $
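These two estimators can be sketched for a single parameter's chain. The chain below is synthetic; note that for a posterior concentrated near its mean, the LINEX estimate with $ \rho > 0 $ falls below the posterior mean, reflecting its heavier penalty on overestimation:

```python
import numpy as np

def bayes_estimates(chain, N, rho):
    # Drop the first N burn-in draws, then compute the posterior mean
    # (squared error) and the LINEX estimate -(1/rho)*log(mean(exp(-rho*draw))).
    kept = np.asarray(chain)[N:]
    se = kept.mean()
    le = -np.log(np.mean(np.exp(-rho * kept))) / rho
    return se, le

# Synthetic chain roughly N(2, 0.3^2): with rho = 1.5 the LINEX estimate
# is approximately mean - rho*var/2, i.e. a little below the posterior mean.
draws = np.random.default_rng(1).normal(2.0, 0.3, size=6000)
se, le = bayes_estimates(draws, N=1000, rho=1.5)
```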

    MCMC HPD credible interval Algorithm:

    1. Arrange the generated values $ \lambda^{(l)}, \beta^{(l)} $ and $ \theta^{(l)} $ in increasing order.

    2. Find the position of the lower bound, which is $ (M -N) \alpha/2 $, then determine the lower bounds of $ \lambda, \beta $ and $ \theta $.

    3. Find the position of the upper bound, which is $ (M -N) (1-\alpha/2) $, then determine the upper bounds of $ \lambda, \beta $ and $ \theta $.

    4. Repeat the above steps $ M $ times, and find the average values of the lower and upper bounds of the MCMC HPD credible interval of $ \lambda, \beta $ and $ \theta $.

    5. This yields $ n $ MCMC HPD credible intervals; report the average values of the lower and upper bounds of the credible interval of $ \lambda, \beta $ and $ \theta $.
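Steps 2-3 of this algorithm amount to reading off the positions $ (M-N)\alpha/2 $ and $ (M-N)(1-\alpha/2) $ in the sorted post-burn-in chain; a minimal sketch, with the function name illustrative:

```python
import numpy as np

def credible_bounds(chain, N, alpha=0.05):
    # Sort the post-burn-in draws (step 1) and read off the lower/upper
    # bound positions (M-N)*alpha/2 and (M-N)*(1-alpha/2) (steps 2-3).
    kept = np.sort(np.asarray(chain)[N:])
    m = len(kept)
    lower = kept[int(np.floor(m * alpha / 2.0))]
    upper = kept[min(int(np.ceil(m * (1.0 - alpha / 2.0))) - 1, m - 1)]
    return lower, upper
```

For a well-mixed chain these bounds approximate the $ \alpha/2 $ and $ 1-\alpha/2 $ posterior quantiles of the parameter.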

    In this section, we show the usefulness of the theoretical findings of this paper by conducting a series of simulation experiments. The simulations report the bias and estimated risk of the maximum likelihood and Bayesian estimates, respectively. The Bayesian estimates are calculated under the squared error and LINEX loss functions. In addition, the $ 95\% $ and $ 90\% $ confidence, bootstrap and HPD credible intervals are calculated with the corresponding widths. The simulation experiments can be explained through the following steps:

    We evaluate the performances of the Bayes estimators obtained from the LINEX and squared error loss functions. To investigate the sensitivity of the estimators with respect to the choices of hyperparameters, the above mentioned priors are considered. We perform simulations to investigate the behavior of the different methods for $ n = 30 $ and for various $ r, k, T_1, T_2. $

    1. Fix different censoring cases as given in (7) as $ X_{r = 10:n} $, $ X_{r = 15:n} $, $ X_{k = 20:n} $, $ X_{k = 25:n} $, $ T_1 = 18 $, $ T_1 = 24 $, $ T_2 = 30 $ and $ T_2 = 36 $, then generate the censored samples from the Dagum distribution using $ \lambda = 5, \beta = 2 \mbox{ and } \theta = 2 $.

    2. Use each of the censoring cases in step (1) for calculating the MLEs by solving the system of nonlinear equations (3.3), (3.4) and (3.5).

    3. Again, we use each of the censoring cases in step (1) for calculating the Bayesian estimates by using Tierney-Kadane's approximation in Section (4) for both the squared error and LINEX loss functions. Let the hyper-parameters be the inverse of the initial values, namely $ \nu_{1} = 0.2 $ and $ \nu_{2} = \nu_{3} = 0.5 $. The parameter $ \rho $ in the LINEX loss function is chosen as -0.5, 1.0 and 1.5.

    4. Steps (1)-(3) are repeated 1000 times, then the bias and estimated risk (ER) in each case are calculated in Table 1. The ER of a parameter $ \varphi $ under the squared error and LINEX loss functions is given by:

    $ ER_{S} = \frac{1}{R}\sum\limits_{i = 1}^{R}\left(\hat{\varphi}_{i}-\varphi\right)^{2}, \qquad ER_{L} = \frac{1}{R}\sum\limits_{i = 1}^{R}\left(\exp\left(\rho\left(\hat{\varphi}_{i}-\varphi\right)\right)-\rho\left(\hat{\varphi}_{i}-\varphi\right)-1\right),
    $
    Table 1.  Bias, MSE and estimated risk (ER) of the parameters estimation when $n = 30, \lambda = 5, \beta = 2$ and $\theta = 2$.
    $T$ $\lambda_{ML}$ $\lambda_{ST}$ $\lambda_{LT}$ $\beta_{ML}$ $\beta_{ST}$ $\beta_{LT}$ $\theta_{ML}$ $\theta_{ST}$ $\theta_{LT}$
    $\rho=-0.5$ $\rho=1.0$ $\rho=1.5$ $\rho=-0.5$ $\rho=1.0$ $\rho=1.5$ $\rho=-0.5$ $\rho=1.0$ $\rho=1.5$
    $X_{r=10:n}$ 0.2363 0.2324 0.2149 -0.0605 0.0144 -0.0284 0.0214 0.0264 0.0157 0.0085 0.1374 -0.4014 -0.2774 -0.3524 -0.2393
    8.0744 0.1993 0.1928 0.1553 0.1618 0.6231 0.2048 0.1938 0.0941 0.0976 0.4086 0.6174 0.2512 0.3189 0.2156
    $X_{r=15:n}$ 0.0067 0.3116 0.2985 0.0925 0.1149 0.0702 0.0753 0.0834 0.078 0.0866 -0.1417 -0.1755 -0.1803 -0.2811 0.1547
    4.0237 0.1795 0.1768 0.1428 0.1529 0.3233 0.1876 0.1851 0.0802 0.0879 0.2084 0.3061 0.1431 0.2539 0.1394
    $X_{k=20:n}$ 0.4301 0.3551 0.3714 0.2116 0.2363 0.1619 0.1027 0.1113 0.1141 0.1085 -0.106 -0.1258 -0.1694 -0.2164 0.1096
    2.359 0.1517 0.1435 0.1339 0.1421 0.179 0.1419 0.1471 0.0726 0.0676 0.1209 0.1637 0.1128 0.1953 0.0987
    $X_{k=25:n}$ 0.1065 0.4041 0.3928 0.2801 0.3183 0.0599 0.131 0.134 0.1255 0.133 -0.0684 -0.0768 -0.1111 -0.1882 -0.0928
    1.5964 0.1407 0.1327 0.1216 0.1158 0.1059 0.1081 0.0555 0.0628 0.0646 0.0811 0.0993 0.1002 0.1698 0.0835
    $T_1=18$ 0.4675 0.3669 0.3542 0.2379 0.2477 0.1206 0.1256 0.1258 0.1235 0.1209 -0.1262 0.1234 -0.1131 -0.1814 0.1035
    1.9223 0.1461 0.1381 0.1237 0.1224 0.1505 0.1168 0.1011 0.0711 0.0668 0.0952 0.112 0.1021 0.1737 0.0932
    $T_1=24$ -0.1587 0.2724 0.2641 0.06 0.1081 -0.0359 0.0564 0.0765 0.0665 0.0544 0.1528 -0.1616 -0.157 -0.3176 0.1755
    5.1027 0.1839 0.1871 0.1535 0.1565 0.3924 0.1959 0.1888 0.0898 0.0949 0.2527 0.3225 0.1621 0.287 0.1581
    $T_2=30$ 0.3561 0.4094 0.4098 0.3132 0.2822 0.0627 0.1302 0.1395 0.1346 0.1357 -0.0716 -0.0833 -0.1149 -0.1548 -0.0851
    1.4869 0.1339 0.1309 0.2813 0.1135 0.0952 0.0877 0.0455 0.0521 0.0532 0.0618 0.0777 0.0936 0.1396 0.0767
    $T_2=36$ 0.2507 0.3533 0.3359 0.1864 0.2058 0.1621 0.0937 0.1057 0.092 0.0951 -0.1454 0.1328 -0.1532 -0.2664 0.1301
    3.1372 0.1598 0.1516 0.1372 0.1446 0.2343 0.1507 0.1781 0.0727 0.0756 0.1525 0.2219 0.1385 0.2406 0.1172
    The first row represents the Bias while the second row represents the MSE and ER.


    where $ \hat{\varphi}_i $ is the $ i $th estimate of $ \varphi $ and $ R $ is the number of replications.

    5. The $ 90\% $ and $ 95\% $ approximate confidence, bootstrap and HPD credible intervals with their widths for the parameters $ \lambda, \beta $ and $ \theta $ are calculated in Tables 3, 4 and 5, respectively.
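The bias and estimated-risk computations of step 4 can be sketched as follows (the function name and inputs are illustrative):

```python
import numpy as np

def estimated_risks(est, true_value, rho):
    # Estimated risk of a vector of replicated estimates under squared error
    # (ER_S) and LINEX (ER_L) loss, matching the formulas of step 4.
    d = np.asarray(est, dtype=float) - true_value   # estimation errors
    er_s = np.mean(d ** 2)
    er_l = np.mean(np.exp(rho * d) - rho * d - 1.0)
    return er_s, er_l
```

Both risks vanish when every replicate equals the true value; for $ \rho > 0 $ the LINEX risk grows exponentially for overestimation but only linearly for underestimation.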

    From Tables 1-6, we see that:

    Table 2.  Bias and the estimated risk (ER) of the parameters estimation using MCMC when $n = 30, \lambda = 5, \beta = 2$ and $\theta = 2$.
    $T$ $\lambda_{SE}$ $\lambda_{LE}$ $\beta_{SE}$ $\beta_{LE}$ $\theta_{SE}$ $\theta_{LE}$
    $\rho=-0.5$ $\rho=1.0$ $\rho=1.5$ $\rho=-0.5$ $\rho=1.0$ $\rho=1.5$ $\rho=-0.5$ $\rho=1.0$ $\rho=1.5$
    $X_{r=10:n}$ -2.3890 -2.1640 -2.8200 -2.7040 -1.1830 -1.1730 -1.2130 -1.2030 -0.1960 -0.1470 -0.3210 -0.2830
    0.0276 0.0373 0.0188 0.0181 0.0637 0.0653 0.0564 0.0593 0.0081 0.0089 0.0062 0.0064
    $X_{r=15:n}$ -2.7440 -2.6220 -3.0110 -2.9340 -1.1810 -1.1740 -1.2030 -1.1960 -0.4460 -0.4160 -0.5270 -0.5020
    0.0222 0.0270 0.0151 0.0184 0.0382 0.0386 0.0336 0.0346 0.0063 0.0065 0.0047 0.0047
    $X_{k=20:n}$ -2.9700 -2.8930 -3.1550 -3.1000 -1.0840 -1.0770 -1.1050 -1.0980 -0.6160 -0.5960 -0.6740 -0.6550
    0.0118 0.0091 0.0068 0.0065 0.0143 0.0147 0.0118 0.0129 0.0027 0.0018 0.0014 0.0025
    $X_{k=25:n}$ -3.1700 -3.1150 -3.3110 -3.2680 -1.0820 -1.0770 -1.0990 -1.0940 -0.7550 -0.7380 -0.8000 -0.7850
    0.0083 0.0038 0.0021 0.0038 0.0073 0.0059 0.0066 0.0055 0.0003 0.0013 0.0011 0.0017
    $T_1=18$ -2.6400 -2.4940 -2.9540 -2.8650 -1.2480 -1.2410 -1.2700 -1.2630 -0.3480 -0.3110 -0.4470 -0.4160
    0.0187 0.0200 0.0118 0.0113 0.0351 0.0354 0.0299 0.0311 0.0059 0.0060 0.0040 0.0048
    $T_1=24$ -2.8350 -2.7320 -3.0700 -3.0010 -1.1650 -1.1590 -1.1860 -1.1790 -0.5250 -0.4990 -0.5960 -0.5740
    0.0092 0.0112 0.0065 0.0086 0.0254 0.0240 0.0213 0.0229 0.0034 0.0036 0.0031 0.0025
    $T_2=30$ -3.1250 -3.0620 -3.2830 -3.2350 -1.1130 -1.1070 -1.1300 -1.1250 -0.7110 -0.6930 -0.7610 -0.7450
    0.0100 0.0083 0.0043 0.0068 0.0201 0.0208 0.0195 0.0184 0.0017 0.0018 0.0012 0.0014
    $T_2=36$ -3.2540 -3.2040 -3.3820 -3.3430 -1.1130 -1.1080 -1.1280 -1.1230 -0.8060 -0.7920 -0.8470 -0.8340
    0.0096 0.0075 0.0087 0.0072 0.0061 0.0057 0.0064 0.0065 0.0020 0.0008 0.0013 0.0012
    The first row represents the Bias while the second row represents ER.

    Table 3.  $ 95\% $ and $ 90\% $ Interval estimation of the parameter $ \lambda $ when $ n = 30, \lambda = 5, \beta = 2 $ and $ \theta = 2 $.
    $ T $ $ {ML} $ $ {Bootstrap} $ $ {HPD_{S}} $ $ HPD_{LT} $
    $ \rho=-0.5 $ $ \rho=1.0 $ $ \rho=1.5 $
    $ X_{r=10:n} $ $ 95\% $ 3.8809 7.1027 4.301 5.85 4.2645 5.691 4.703 5.6513 4.322 5.6302 4.2076 5.6923
    3.2218 1.549 1.4266 0.9483 1.3082 1.4846
    $ 90\% $ 3.8901 7.0931 4.3333 5.8146 4.2644 5.6723 4.703 5.5978 4.322 5.5623 4.2076 5.6161
    3.203 1.4813 1.4079 0.8949 1.2402 1.4085
    $ X_{r=15:n} $ $ 95\% $ 4.2337 5.8233 4.6868 5.7555 4.6531 5.6376 4.9182 5.6107 4.6543 5.5599 4.6097 5.6464
    1.5896 1.0687 0.9845 0.6925 0.9057 1.0368
    $ 90\% $ 4.2337 5.8099 4.7149 5.7202 4.6431 5.6065 4.9254 5.5946 4.6543 5.5128 4.6097 5.6421
    1.5762 1.0052 0.9634 0.6692 0.8585 1.0324
    $ X_{k=20:n} $ $ 95\% $ 4.6878 6.1134 4.8764 5.6987 4.8931 5.6201 4.9164 5.5526 4.8637 5.5532 4.8085 5.5939
    1.4256 0.8222 0.727 0.6362 0.6895 0.7854
    $ 90\% $ 4.6896 6.0885 4.898 5.6722 4.8931 5.6135 4.9204 5.5567 4.8637 5.5449 4.8085 5.5508
    1.3989 0.7743 0.7204 0.6363 0.6812 0.7423
    $ X_{k=25:n} $ $ 95\% $ 4.9406 5.8616 4.9989 5.655 4.9981 5.5968 4.9585 5.5467 4.9862 5.5477 4.9615 5.5974
    0.921 0.6561 0.5987 0.5882 0.5614 0.6359
    $ 90\% $ 4.9501 5.8366 4.9989 5.6381 4.9984 5.5068 4.9985 5.5381 4.9862 5.5197 4.9615 5.5821
    0.8865 0.6392 0.5084 0.5396 0.5335 0.6206
    $ T_1=18 $ $ 95\% $ 4.5417 5.8111 4.9665 5.676 4.9704 5.5971 4.9182 5.5517 4.9683 5.5676 4.909 5.6031
    1.2694 0.7095 0.6267 0.6335 0.5993 0.6942
    $ 90\% $ 4.5451 5.8043 4.9865 5.6605 4.9904 5.5962 4.9645 5.5421 4.9683 5.5277 4.909 5.5548
    1.2592 0.674 0.6058 0.5776 0.5594 0.6458
    $ T_1=24 $ $ 95\% $ 4.4725 5.9568 4.3615 5.7797 4.3455 5.6537 4.885 5.6217 4.5357 5.5698 4.4527 5.6489
    1.4843 1.4181 1.3082 0.7367 1.0341 1.1962
    $ 90\% $ 4.4765 5.9381 4.5019 5.743 4.5445 5.6123 4.885 5.6061 4.5357 5.516 4.4527 5.5817
    1.4616 1.1411 1.0678 0.721 0.9803 1.129
    $ T_2=30 $ $ 95\% $ 4.8469 5.8925 4.9994 5.651 4.9983 5.5882 4.9772 5.5389 4.9832 5.5436 4.9779 5.6068
    1.0456 0.6516 0.5899 0.5617 0.5604 0.6289
    $ 90\% $ 4.8547 5.8712 4.9991 5.6347 4.9993 5.4456 4.9992 5.5373 4.9972 5.5161 4.9712 5.5067
    1.0165 0.6356 0.4463 0.5381 0.5189 0.5355
    $ T_2=36 $ $ 95\% $ 4.3165 6.296 4.7768 5.7342 4.6733 5.6214 4.9254 5.5959 4.7816 5.575 4.7175 5.6404
    1.9795 0.9574 0.9481 0.6705 0.7934 0.923
    $ 90\% $ 4.3255 6.2768 4.8078 5.7113 4.6893 5.5926 4.9645 5.6077 4.7816 5.5594 4.7175 5.5734
    1.9513 0.9035 0.9033 0.6432 0.7778 0.856
    HPDS: HPD credible interval based on the squared error loss function
    HPDLT: HPD credible interval based on the LINEX loss function

    Table 4.  $ 95\% $ and $ 90\% $ Interval estimation of the parameter $ \beta $ when $ n = 30, \lambda = 5, \beta = 2 $ and $ \theta = 2 $.
    $ T $ $ {ML} $ $ {Bootstrap} $ $ {HPD_{S}} $ $ HPD_{LT} $
    $ \rho=-0.5 $ $ \rho=1.0 $ $ \rho=1.5 $
    $ X_{r=10:n} $ $ 95\% $ 1.7489 2.656 1.9078 2.2267 1.9125 2.1361 1.4056 2.02 1.1589 2.5683 1.4042 2.0164
    0.9071 0.3189 0.2236 0.6144 1.4094 0.6122
    $ 90\% $ 1.7495 2.6351 1.9145 2.2176 1.9101 2.1315 1.419 2.0052 1.1669 2.5609 1.4208 2.0078
    0.8856 0.3031 0.2214 0.5862 1.394 0.587
    $ X_{r=15:n} $ $ 95\% $ 1.8257 2.514 1.9782 2.2249 1.9681 2.1799 1.5921 2.0159 1.4358 2.3563 1.5886 2.0187
    0.6883 0.2467 0.2118 0.4239 0.9205 0.4302
    $ 90\% $ 1.8301 2.5079 1.9788 2.2192 1.9491 2.1611 1.6049 2.0028 1.4378 2.3499 1.5977 2.0115
    0.6778 0.2404 0.212 0.3979 0.9122 0.4138
    $ X_{k=20:n} $ $ 95\% $ 1.8536 2.4237 1.9789 2.2207 1.9671 2.1645 1.6915 2.0137 1.7138 2.2432 1.6882 2.0141
    0.5701 0.2418 0.1974 0.3222 0.5294 0.3259
    $ 90\% $ 1.8584 2.4104 1.9912 2.2169 1.9895 2.1515 1.7022 2.0032 1.7165 2.1922 1.6973 2.0022
    0.552 0.2257 0.162 0.301 0.4757 0.3049
    $ X_{k=25:n} $ $ 95\% $ 1.805 2.2553 1.9888 2.2191 1.9878 2.1742 1.7504 2.0143 1.6093 2.0064 1.7519 2.013
    0.4503 0.2303 0.1864 0.2639 0.3971 0.2611
    $ 90\% $ 1.8102 2.2424 1.9989 2.2144 1.9978 2.1793 1.7576 2.0076 1.6094 2.0009 1.7599 2.0049
    0.4322 0.2155 0.1815 0.25 0.3914 0.245
    $ T_1=18 $ $ 95\% $ 1.8275 2.2544 1.9882 2.2192 1.9718 2.1698 1.7269 2.0157 1.6462 2.1718 1.7317 2.0148
    0.4269 0.231 0.198 0.2888 0.5256 0.283
    $ 90\% $ 1.8337 2.2401 1.9982 2.2156 1.9899 2.175 1.7356 2.01 1.7526 2.156 1.7401 2.007
    0.4064 0.2174 0.1851 0.2744 0.4034 0.2669
    $ T_1=24 $ $ 95\% $ 1.8581 2.2525 1.9785 2.2282 1.9698 2.1857 1.5349 2.016 1.556 2.4197 1.5334 2.0189
    0.3944 0.2497 0.2159 0.4812 0.8637 0.4855
    $ 90\% $ 1.8769 2.2362 1.9868 2.2208 1.9581 2.1738 1.5473 2.0055 1.5628 2.4043 1.5461 2.0107
    0.3593 0.234 0.2157 0.4581 0.8415 0.4646
    $ T_2=30 $ $ 95\% $ 1.8678 2.2417 1.9978 2.217 1.9998 2.1812 1.7666 2.0137 1.7155 2.0919 1.7681 2.0111
    0.3739 0.2192 0.1814 0.2471 0.3764 0.243
    $ 90\% $ 1.899 2.2401 1.9995 2.2141 1.9971 2.1768 1.7726 2.0079 1.7156 2.0682 1.7737 2.005
    0.3411 0.2146 0.1797 0.2353 0.3526 0.2313
    $ T_2=36 $ $ 95\% $ 1.9684 2.264 1.9784 2.2246 1.9668 2.1652 1.6365 2.0148 1.3802 2.3056 1.6354 2.0157
    0.2956 0.2462 0.1984 0.3784 0.9254 0.3803
    $ 90\% $ 1.9762 2.2549 1.9802 2.2186 1.988 2.1536 1.6502 2.0056 1.3855 2.2989 1.6443 2.0019
    0.2786 0.2384 0.1656 0.3553 0.9134 0.3576
    HPDS: HPD credible interval based on the squared error loss function
    HPDLT: HPD credible interval based on the LINEX loss function

    Table 5.  $ 95\% $ and $ 90\% $ Interval estimation of the parameter $ \theta $ when $ n = 30, \lambda = 5, \beta = 2 $ and $ \theta = 2 $.
    $ T $ $ {ML} $ $ {Bootstrap} $ $ {HPD_{S}} $ $ HPD_{LT} $
    $ \rho=-0.5 $ $ \rho=1.0 $ $ \rho=1.5 $
    $ X_{r=10:n} $ $ 95\% $ 0.7189 3.2086 0.4627 2.8471 1.4028 2.6683 1.2825 2.2195 1.5545 2.3768 1.625 2.7123
    2.4898 2.3844 1.2655 0.937 0.8223 1.0873
    $ 90\% $ 0.8205 3.1491 0.5627 2.9076 1.4032 2.6546 1.2825 2.1632 1.5636 2.3573 1.625 2.7123
    2.3286 2.3449 1.2514 0.8806 0.7937 1.0873
    $ X_{r=15:n} $ $ 95\% $ 1.1368 2.8548 1.0812 2.6651 1.7381 2.4344 1.5355 2.1759 1.5868 2.215 1.7435 2.5074
    1.7181 1.5839 0.6963 0.6404 0.6282 0.7638
    $ 90\% $ 1.1726 2.8136 1.1012 2.6801 1.7442 2.4119 1.5555 2.1887 1.5964 2.1953 1.7435 2.47
    1.641 1.5789 0.6677 0.6332 0.5989 0.7265
    $ X_{k=20:n} $ $ 95\% $ 1.3395 2.647 1.2596 2.4642 1.5794 2.2142 1.6333 2.1204 1.5763 2.0877 1.8405 2.4217
    1.3076 1.2046 0.6348 0.4871 0.5114 0.5812
    $ 90\% $ 1.3623 2.6044 1.3175 2.5177 1.8941 2.516 1.6333 2.104 1.5833 2.0654 1.8405 2.3672
    1.2421 1.2002 0.6219 0.4707 0.4821 0.5268
    $ X_{k=25:n} $ $ 95\% $ 1.4763 2.5274 1.4401 2.3948 1.7151 2.3253 1.6987 2.0984 1.766 2.0669 1.87 2.3354
    1.0511 0.9546 0.6102 0.3997 0.3009 0.4654
    $ 90\% $ 1.5002 2.5044 1.4401 2.3452 1.723 2.2991 1.6987 2.0823 1.7685 2.0411 1.87 2.3101
    1.0043 0.9051 0.5761 0.3835 0.2726 0.4401
    $ T_1=18 $ $ 95\% $ 1.4217 2.5676 1.3419 2.3888 1.6474 2.2818 1.6718 2.0971 1.6078 2.0092 1.8339 2.3407
    1.1459 1.0468 0.6344 0.4253 0.4014 0.5068
    $ 90\% $ 1.447 2.5303 1.3419 2.3042 1.589 2.1938 1.6818 2.104 1.615 1.9903 1.8339 2.3349
    1.0833 0.9623 0.6048 0.4222 0.3753 0.501
    $ T_1=24 $ $ 95\% $ 1.0018 2.9762 0.9066 2.7038 1.6136 2.6236 1.4692 2.2074 1.4449 2.0777 1.7081 2.5718
    1.9744 1.7972 1.01 0.7382 0.6328 0.8638
    $ 90\% $ 1.082 2.8934 1.1066 2.7118 1.6187 2.6062 1.4892 2.2104 1.4482 2.0705 1.7081 2.5337
    1.8114 1.6052 0.9875 0.7213 0.6223 0.8256
    $ T_2=30 $ $ 95\% $ 1.4913 2.4924 1.4721 2.3811 1.7284 2.2596 1.7319 2.1083 1.7869 2.0398 1.8609 2.2978
    1.001 0.909 0.5312 0.3765 0.2529 0.437
    $ 90\% $ 1.5199 2.4638 1.4721 2.3679 1.7331 2.2516 1.7419 2.1107 1.7968 2.0363 1.8709 2.316
    0.9438 0.8959 0.5185 0.3688 0.2395 0.4251
    $ T_2=36 $ $ 95\% $ 1.2225 2.7506 1.1188 2.5164 1.8853 2.5395 1.5903 2.1688 1.5935 2.1895 1.7732 2.4407
    1.5281 1.3977 0.6542 0.5784 0.596 0.6675
    $ 90\% $ 1.2715 2.7046 1.1188 2.4309 1.648 2.2724 1.5913 2.1692 1.5942 2.1794 1.7732 2.4396
    1.4331 1.3121 0.6244 0.5799 0.5852 0.6664
    HPDS: HPD credible interval based on the squared error loss function
    HPDLT: HPD credible interval based on the LINEX loss function

    Table 6.  CIs of the parameters using MCMC when $ n = 30, \lambda = 5, \beta = 2 $ and $ \theta = 2 $.
    $ T $ $ \lambda $ $ \beta $ $ \theta $
    $ 95\% $ $ 90\% $ $ 95\% $ $ 90\% $ $ 95\% $ $ 90\% $
    $ X_{r=10:n} $ 2.2612 5.7079 2.397 5.2419 1.4673 2.2299 1.5146 2.1574 1.089 2.7555 1.1792 2.568
    3.4467 2.8449 0.7627 0.6427 1.6665 1.3888
    $ X_{r=15:n} $ 3.2011 5.774 3.3156 5.4827 1.5123 2.1693 1.5558 2.1016 0.9799 2.3119 1.0516 2.1579
    2.573 2.1672 0.657 0.5459 1.332 1.1063
    $ X_{k=20:n} $ 3.1594 5.2554 3.2552 5.0265 1.6151 2.2627 1.6567 2.1992 1.8853 2.991 1.9577 2.8866
    2.096 1.7713 0.6476 0.5425 1.1057 0.9289
    $ X_{k=25:n} $ 4.0678 5.8652 4.1599 5.6728 1.6377 2.2228 1.6794 2.1732 1.8064 2.7845 1.8634 2.6919
    1.7974 1.5129 0.5851 0.4938 0.9781 0.8285
    $ T_1=18 $ 2.2023 5.0454 3.3361 5.7186 1.4538 2.0967 1.4946 2.0382 1.0156 2.5102 1.0914 2.3422
    2.8431 2.3825 0.6429 0.5436 1.4946 1.2508
    $ T_1=24 $ 1.1682 5.5887 1.2789 5.2896 1.5336 2.1702 1.5748 2.1128 0.92 2.1664 0.9965 2.036
    2.4205 2.0107 0.6367 0.538 1.2464 1.0395
    $ T_2=30 $ 4.0586 5.978 4.1574 5.7554 1.606 2.2067 1.646 2.1437 1.8271 2.8706 1.8869 2.7562
    1.9194 1.598 0.6007 0.4976 1.0434 0.8693
    $ T_2=36 $ 4.0154 5.7364 4.1155 5.5376 1.6228 2.1806 1.6627 2.1322 1.7817 2.7215 1.8362 2.6184
    1.7209 1.4221 0.5577 0.4695 0.9398 0.7823


    1. The estimate of $ \lambda $ is overestimated except in just a few cases. The Bayesian estimate of $ \lambda $ is best in terms of bias under the LINEX loss function at $ \rho = 1.0 $, and the ER also supports the Bayesian estimate under the LINEX loss function.

    2. Again, the estimate of $ \beta $ is overestimated except in just a few cases. The Bayesian estimate of $ \beta $ behaves better in terms of bias under the LINEX loss function; a similar statement holds for the ER.

    3. Once again, the estimate of $ \theta $ is underestimated in most cases. The ER shows that the Bayesian estimate of $ \theta $ is the best in terms of bias under the LINEX loss function.

    4. The HPD credible interval estimation for $ \lambda $ behaves better in terms of the interval width under the LINEX loss function when $ \rho = -0.5. $

    5. The HPD credible interval estimation for $ \beta $ behaves better in terms of the interval width under the SE loss function.

    6. The HPD credible interval estimation for $ \theta $ behaves better in terms of the interval width under the LINEX loss function when $ \rho = 1.0. $

    Here we use one data set to compare the estimators presented in this paper. The data set is taken from Nichols and Padgett [29] and consists of 100 observations on the breaking stress of carbon fibers (in GPa). Dey et al. [30] have fitted the Dagum distribution to this data set. The data are: 3.7, 2.74, 2.73, 3.11, 3.27, 2.87, 4.42, 2.41, 3.19, 3.28, 3.09, 1.87, 3.75, 2.43, 2.95, 2.96, 2.3, 2.67, 3.39, 2.81, 4.2, 3.31, 3.31, 2.85, 3.15, 2.35, 2.55, 2.81, 2.77, 2.17, 1.41, 3.68, 2.97, 2.76, 4.91, 3.68, 3.19, 1.57, 0.81, 1.59, 2, 1.22, 2.17, 1.17, 5.08, 3.51, 2.17, 1.69, 1.84, 0.39, 3.68, 1.61, 2.79, 4.7, 1.57, 1.08, 2.03, 1.89, 2.88, 2.82, 2.5, 3.6, 1.47, 3.11, 3.22, 1.69, 3.15, 4.9, 2.97, 3.39, 2.93, 3.22, 3.33, 2.55, 2.56, 3.56, 2.59, 2.38, 2.83, 1.92, 1.36, 0.98, 1.84, 1.59, 5.56, 1.73, 1.12, 1.71, 2.48, 1.18, 1.25, 4.38, 2.48, 0.85, 2.03, 1.8, 1.61, 2.12, 2.05, 3.65.

    The point and interval estimation techniques in Sections (3), (4) and (5) can be applied to this data set through the steps below:

    1. Sorting the data set in ascending order.

    2. Applying the censoring scheme C-UHCS$ (m, r; T_{1}, T_{2}) $ using one arbitrary case of Type-II censoring at $ X_{25:100} = 1.74 $ and an arbitrary case of Type-I censoring at $ T = 2.4 $.

    3. Applying the point estimations of the parameters $ \lambda, \beta $ and $ \theta $ using the MLE, Tierney-Kadane and MCMC methods (the MCMC results are based on 15000 repetitions with 5000 burn-in iterations).

    4. Calculating the 95% and 90% HPD credible intervals using MCMC based on squared error loss function.

    5. The results of the point and interval estimation of the unknown parameters are displayed in Tables 7 and 8.

    Table 7.  Point estimation and the estimated variances of the unknown parameters using the real data set.
    Censoring MLE Tierney-Kadane MCMC
    SE LE(-.5) LE(1) LE(1.5) SE LE(-.5) LE(1) LE(1.5)
    $ x_{25:100} $ $ \lambda $ 6.6126 1.7776 1.6093 1.6511 1.6632 1.4560 1.5198 1.5566 1.5972
    6.6948 0.0482 0.0434 0.0431 0.0432 0.0508 0.0432 0.0434 0.0433
    $ \beta $ 2.5055 1.3135 1.3900 1.3308 1.3922 1.3444 1.3639 1.3737 1.3837
    2.5544 0.0460 0.0455 0.0411 0.0488 0.0469 0.0465 0.0468 0.0409
    $ \theta $ 2.4578 2.7547 2.8720 2.8539 2.7734 2.4850 2.5687 2.6144 2.6644
    1.6470 0.0268 0.0218 0.0271 0.0263 0.0272 0.0247 0.0242 0.0246
    $ T=2.4 $ $ \lambda $ 2.9549 1.3291 1.3711 1.4598 1.4453 1.4568 1.4973 1.5183 1.5396
    2.8270 0.0364 0.0367 0.0337 0.0361 0.0367 0.0327 0.0327 0.0326
    $ \beta $ 1.7964 1.3560 1.3494 1.3707 1.3223 1.3630 1.3763 1.3830 1.3898
    1.7344 0.0340 0.0360 0.0345 0.0370 0.0328 0.0456 0.0337 0.0316
    $ \theta $ 1.9669 2.7632 2.7933 2.8013 2.7365 2.3428 2.4169 2.4573 2.4995
    1.4303 0.0142 0.0170 0.0169 0.0146 0.0197 0.0181 0.0188 0.0123
    The first row represents the point estimation and the second row is the estimated variance.
    LE(a)=Linex loss function when ρ = a

    Table 8.  90% and 95% C.I and the interval widths of the unknown parameters using the real data set.
    Censoring MLE Tierney-Kadane MCMC
    $ 95\% $ $ 90\% $ $ 95\% $ $ 90\% $ $ 95\% $ $ 90\% $
    $ x_{25:100} $ $ \lambda $ 0.052 13.174 1.089 12.136 0.981 2.000 1.102 2.004 0.968 2.020 0.987 2.018
    13.122 11.046 1.019 0.902 1.052 1.031
    $ \beta $ 0.002 5.009 0.398 4.613 1.006 1.685 1.009 1.632 1.018 1.672 1.021 1.666
    5.007 4.215 0.639 0.622 0.654 0.645
    $ \theta $ 0.844 4.072 1.099 3.817 1.693 3.200 1.884 3.173 1.833 3.183 1.900 3.129
    3.228 2.718 1.507 1.290 1.350 1.230
    $ T=2.4 $ $ \lambda $ 0.184 5.725 0.623 5.287 0.942 2.069 1.028 1.967 0.927 2.070 1.026 1.983
    5.541 4.665 1.127 0.938 1.144 0.957
    $ \beta $ 0.097 3.496 0.366 3.227 1.051 1.699 1.091 1.675 1.053 1.695 1.092 1.621
    3.399 2.862 0.648 0.584 0.642 0.529
    $ \theta $ 0.565 3.369 0.787 3.147 1.713 3.148 1.793 3.057 1.742 3.131 1.790 3.057
    2.803 2.360 1.435 1.264 1.388 1.267
    The first row represents the interval estimation while the second row is the interval width.


    The underlined selections in Table 7 represent the best point estimates with minimum variance. Also, the underlined selections in Table 8 represent the best interval estimates with minimum interval width.

    In this paper, point and interval estimation of the parameters of the Dagum distribution under the C-U hybrid censoring scheme is discussed from both classical and Bayesian perspectives. The MLEs and asymptotic CIs for the parameters of interest are computed. Since the Bayesian estimates of the involved parameters could not be obtained analytically, Tierney and Kadane's approach was employed to obtain approximate Bayes estimates. It is found that the performance of the Bayesian estimates based on the LINEX loss function is superior to that of the corresponding ML estimators. Similar improvements are observed for the Bayesian estimates evaluated under different loss functions. Moreover, depending on the value of the asymmetry parameter $ \rho $, the ER under the LINEX loss function may be smaller than that of the MLEs. The point estimates for the real data set show that the Tierney-Kadane approximation and MCMC are comparable in terms of the estimated variances, as well as in interval estimation in terms of the interval width.

    The authors would like to thank the editor and referees for their helpful comments, which improved the presentation of the paper. Also, the authors would like to extend their sincere appreciation to the Deanship of Scientific Research, King Saud University for funding the Research Group (RG -1435-056).

    The authors have no conflict of interest.

    [1] Lee AG (2004) How lipids affect the activities of integral membrane proteins. BBA-Biomembranes 1666: 62–87. doi: 10.1016/j.bbamem.2004.05.012
    [2] Pohorille A, Schweighofer K, Wilson MA (2006) The origin and early evolution of membrane channels. Astrobiology 5: 1–17. doi: 10.1017/S1473550406002886
    [3] Hille B (2001) Ion Channels of Excitable Membranes, 3 Eds., Sinauer.
    [4] Kaczorowski GJ, Mcmanus OB, Priest BT, et al. (2008) Ion channels as drug targets: The next GPCRs. J Gen Physiol 131: 399–405. doi: 10.1085/jgp.200709946
    [5] Ackerman MJ, Clapham DE (1997) Ion channels-basic science and clinical disease. N Engl J Med 336: 1575–1586. doi: 10.1056/NEJM199705293362207
    [6] Lear JD, Wasserman ZR, Degrado WF (1988) Synthetic amphiphilic peptide models for protein ion channels. Science 240: 1177–1181. doi: 10.1126/science.2453923
    [7] Kienker PK, Degrado WF, Lear JD (1994) A helical-dipole model describes the single-channel current rectification of an uncharged peptide ion channel. Proc Natl Acad Sci USA 91: 4859–4863. doi: 10.1073/pnas.91.11.4859
    [8] Petrache HI, Zuckerman DM, Sachs JN, et al. (2002) Hydrophobic matching mechanism investigated by molecular dynamics simulations. Langmuir 18: 1340–1351. doi: 10.1021/la011338p
    [9] Nguyen THT, Liu Z, Moore PB (2013) Molecular dynamics simulations of homo-oligomeric bundles embedded within a lipid bilayer. Biophys J 105: 1569–1580. doi: 10.1016/j.bpj.2013.07.053
    [10] Howard KP, Lear JD, Degrado WF (2002) Sequence determinants of the energetics of folding of a transmembrane four-helix-bundle protein. Proc Natl Acad Sci USA 99: 8568–8572. doi: 10.1073/pnas.132266099
    [11] Arseneault M, Dumont M, Otis F, et al. (2012) Characterization of channel-forming peptide nanostructures. Biophys Chem 162: 6–13. doi: 10.1016/j.bpc.2011.12.001
    [12] Fischer WB (2005) Viral Membrane Proteins: Structure, Function, and Drug Design, In: Protein Rev, Kluwer Academic/Plenum Publishers.
    [13] Wang J, Kim S, Kovacs F, et al. (2001) Structure of the transmembrane region of the M2 protein H+ channel. Protein Sci 10: 2241–2250.
    [14] Stouffer AL, Acharya R, Salom D, et al. (2008) Structural basis for the function and inhibition of an influenza virus proton channel. Nature 451: 596–599. doi: 10.1038/nature06528
    [15] Schnell JR, Chou JJ (2008) Structure and mechanism of the M2 proton channel of influenza A virus. Nature 451: 591–595. doi: 10.1038/nature06531
    [16] Kovacs FA, Cross TA (1997) Transmembrane four-helix bundle of influenza A M2 protein channel: Structural implications from helix tilt and orientation. Biophys J 73: 2511–2517. doi: 10.1016/S0006-3495(97)78279-1
    [17] Acharya R, Carnevale V, Fiorin G, et al. (2010) Structure and mechanism of proton transport through the transmembrane tetrameric M2 protein bundle of the influenza A virus. Proc Natl Acad Sci USA 107: 15075–15080. doi: 10.1073/pnas.1007071107
    [18] Moore PB, Zhong Q, Husslein T, et al. (1998) Simulation of the HIV-1 Vpu transmembrane domain as a pentameric bundle. FEBS Lett 431: 143–148. doi: 10.1016/S0014-5793(98)00714-5
    [19] Woolley GA, Wallace BA (1992) Model ion channels: Gramicidin and alamethicin. J Membrane Biol 129: 109–136.
    [20] Opella SJ, Marassi FM, Gesell JJ, et al. (1999) Structures of the M2 channel-lining segments from nicotinic acetylcholine and NMDA receptors by NMR spectroscopy. Nat Struct Mol Biol 6: 374–379. doi: 10.1038/7610
    [21] Akerfeldt KS, Kienker PK, Lear JD (1996) Structure and conduction mechanisms of minimalist ion channels. Compr Supramol Chem 10: 659–686.
    [22] Gratkowski H, Lear JD, Degrado WF (2001) Polar side chains drive the association of model transmembrane peptides. Proc Natl Acad Sci USA 98: 880–885. doi: 10.1073/pnas.98.3.880
    [23] Randa HS, Forrest LR, Voth GA, et al. (1999) Molecular dynamics of synthetic leucine-serine ion channels in a phospholipid membrane. Biophys J 77: 2400–2410. doi: 10.1016/S0006-3495(99)77077-3
    [24] Oiki S, Danho W, Madison V, et al. (1988) M2 δ, a candidate for the structure lining the ionic channel of the nicotinic cholinergic receptor. Proc Natl Acad Sci USA 85: 8703–8707. doi: 10.1073/pnas.85.22.8703
    [25] Carruthers A, Melchior DL (1986) How bilayer lipids affect membrane protein activity. Trends Biochem Sci 11: 331–335. doi: 10.1016/0968-0004(86)90292-6
    [26] Palsdottir H, Hunte C (2004) Lipids in membrane protein structures. BBA-Biomembranes 1666: 2–18. doi: 10.1016/j.bbamem.2004.06.012
    [27] Lindahl E, Sansom MSP (2008) Membrane proteins: Molecular dynamics simulations. Curr Opin Struct Biol 18: 425–431. doi: 10.1016/j.sbi.2008.02.003
    [28] Phillips JC, Braun R, Wang W, et al. (2005) Scalable molecular dynamics with NAMD. J Comput Chem 26: 1781–1802. doi: 10.1002/jcc.20289
    [29] Duan Y, Wu C, Chowdhury S, et al. (2003) A point-charge force field for molecular mechanics simulations of proteins based on condensed-phase quantum mechanical calculations. J Comput Chem 24: 1999–2012. doi: 10.1002/jcc.10349
    [30] Hu Z, Jiang J (2010) Assessment of biomolecular force fields for molecular dynamics simulations in a protein crystal. J Comput Chem 31: 371–380.
    [31] Darden T, York D, Pedersen L (1993) Particle mesh Ewald: An N·log(N) method for Ewald sums in large systems. J Chem Phys 98: 10089–10092.
    [32] Feller SE, Yin D, Pastor RW, et al. (1997) Molecular dynamics simulation of unsaturated lipid bilayers at low hydration: Parameterization and comparison with diffraction studies. Biophys J 73: 2269–2279. doi: 10.1016/S0006-3495(97)78259-6
    [33] Jojart B, Martinek TA (2007) Performance of the general amber force field in modeling aqueous POPC membrane bilayers. J Comput Chem 28: 2051–2058. doi: 10.1002/jcc.20748
    [34] Taylor J, Whiteford NE, Bradley G, et al. (2009) Validation of all-atom phosphatidylcholine lipid force fields in the tensionless NPT ensemble. BBA-Biomembranes 1788: 638–649. doi: 10.1016/j.bbamem.2008.10.013
    [35] Rosso L, Gould IR (2007) Structure and dynamics of phospholipid bilayers using recently developed general all-atom force fields. J Comput Chem 29: 24–37.
    [36] Kučerka N, Tristram-Nagle S, Nagle JF (2006) Structure of fully hydrated fluid phase lipid bilayers with monounsaturated chains. J Membrane Biol 208: 193–202. doi: 10.1007/s00232-005-7006-8
    [37] Nielsen SO, Ensing B, Ortiz V, et al. (2005) Lipid bilayer perturbations around a transmembrane nanotube: A coarse grain molecular dynamics study. Biophys J 88: 3822–3828. doi: 10.1529/biophysj.104.057703
    [38] De Planque MR, Killian JA (2003) Protein-lipid interactions studied with designed transmembrane peptides: Role of hydrophobic matching and interfacial anchoring (Review). Mol Membr Biol 20: 271–284. doi: 10.1080/09687680310001605352
    [39] Nyholm TKM, Oezdirekcan S, Killian JA (2007) How protein transmembrane segments sense the lipid environment. Biochemistry 46: 1457–1465. doi: 10.1021/bi061941c
    [40] Sonne J, Jensen MO, Hansen FY, et al. (2007) Reparameterization of all-atom dipalmitoylphosphatidylcholine lipid parameters enables simulation of fluid bilayers at zero tension. Biophys J 92: 4157–4167. doi: 10.1529/biophysj.106.087130
    [41] Venturoli M, Smit B, Sperotto MM (2005) Simulation studies of protein-induced bilayer deformations, and lipid-induced protein tilting, on a mesoscopic model for lipid bilayers with embedded proteins. Biophys J 88: 1778–1798. doi: 10.1529/biophysj.104.050849
    [42] Chung LA, Lear JD, Degrado WF (1992) Fluorescence studies of the secondary structure and orientation of a model ion channel peptide in phospholipid vesicles. Biochemistry 31: 6608–6616. doi: 10.1021/bi00143a035
    [43] Choma C, Gratkowski H, Lear JD, et al. (2000) Asparagine-mediated self-association of a model transmembrane helix. Nat Struct Mol Biol 7: 161–166. doi: 10.1038/72440
    [44] Douliez JP, Leonard A, Dufourc EJ (1995) Restatement of order parameters in biomembranes: Calculation of C-C bond order parameters from C-D quadrupolar splittings. Biophys J 68: 1727–1739. doi: 10.1016/S0006-3495(95)80350-4
    [45] Douliez JP, Ferrarini A, Dufourc EJ (1998) On the relationship between C-C and C-D order parameters and its use for studying the conformation of lipid acyl chains in biomembranes. J Chem Phys 109: 2513–2518. doi: 10.1063/1.476823
    [46] Seelig J, Niederberger W (1974) Two pictures of a lipid bilayer. A comparison between deuterium label and spin-label experiments. Biochemistry 13: 1585–1588.
    [47] Seelig J, Waespe-Sarcevic N (1978) Molecular order in cis and trans unsaturated phospholipid bilayers. Biochemistry 17: 3310–3315. doi: 10.1021/bi00609a021
    [48] Smart OS, Breed J, Smith GR, et al. (1997) A novel method for structure-based prediction of ion channel conductance properties. Biophys J 72: 1109–1126. doi: 10.1016/S0006-3495(97)78760-5
    [49] Smart OS, Neduvelil JG, Wang X, et al. (1996) HOLE: A program for the analysis of the pore dimensions of ion channel structural models. J Mol Graphics 14: 354–360. doi: 10.1016/S0263-7855(97)00009-X
    [50] Jorgensen WL, Chandrasekhar J, Madura JD, et al. (1983) Comparison of simple potential functions for simulating liquid water. J Chem Phys 79: 926–935. doi: 10.1063/1.445869
    [51] Zhong Q, Jiang Q, Moore PB, et al. (1998) Molecular dynamics simulation of a synthetic ion channel. Biophys J 74: 3–10. doi: 10.1016/S0006-3495(98)77761-6
    [52] Larsen RJ, Marx ML (2017) An Introduction to Mathematical Statistics and Its Applications, Pearson, 742.
    [53] Heijmans RDH, Pollock DSG, Satorra A (2000) Innovations in Multivariate Statistical Analysis, Springer US, 298.
    [54] Canal L (2005) A normal approximation for the chi-square distribution. Comput Stat Data An 48: 803–808. doi: 10.1016/j.csda.2004.04.001
  • © 2018 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)