Research article

Bounds for the stop-loss distance of an independent random sum via Stein's method

  • Received: 23 January 2025 Revised: 19 May 2025 Accepted: 22 May 2025 Published: 06 June 2025
  • MSC : 60F05

  • Let $W=X_1+X_2+\cdots+X_N$ be a random sum and $Z$ be the standard normal random variable. In this paper, we investigated uniform and non-uniform bounds of the stop-loss distance, which measures the difference between two random variables, $W$ and $Z$, using the expression $|Eh_k(W)-Eh_k(Z)|$, where $h_k(x)=(x-k)^+$ is a call function. In particular, we focused on the case that $X_1,X_2,\ldots$ are independent random variables, and $N$ is a non-negative, integer-valued random variable independent of the $X_j$'s. Our methods were Stein's method and the concentration inequality approach. The value $Eh_k(W)=E(W-k)^+$ represents the excess over a threshold and is relevant to applications in collateralized debt obligations (CDOs) and the collective risk model.

    Citation: Punyapat Kammoo, Kritsana Neammanee, Kittipong Laipaporn. Bounds for the stop-loss distance of an independent random sum via Stein's method[J]. AIMS Mathematics, 2025, 10(6): 13082-13103. doi: 10.3934/math.2025587




    Let $X_1,X_2,X_3,\ldots$ be a sequence of random variables and let $N$ be a non-negative, integer-valued random variable independent of the $X_j$'s. The summation, $X_1+X_2+\cdots+X_N$, is called a random sum. Random sums appear frequently in modern probability theory and are widely applied across various fields, including finance for risk assessment and portfolio optimization, telecommunications for modeling call arrivals, operations research for inventory management, and particularly in insurance contexts (see Chapter 17 in [1,2,3] for additional examples).

    The approximation of random sums $X_1+X_2+\cdots+X_N$ by the standard normal random variable $Z$ was initiated by Robbins [4] and Gnedenko & Korolev [5]. A widely used probability metric to compare such approximations is the Wasserstein distance. The Wasserstein distance between random variables $W$ and $Z$, denoted $d_W(W,Z)$, is defined by

    $$d_W(W,Z)=\sup_{h\in\mathcal{H}}|Eh(W)-Eh(Z)|,$$

    where $\mathcal{H}$ is the class of all Lipschitz continuous functions $h$ on $\mathbb{R}$ with a Lipschitz constant not greater than 1. Many authors have studied approximations of the random sums using this metric under various assumptions about the random variable $N$. In the absence of specific distributional assumptions about $N$, the results for Gaussian approximation are provided in works such as [6,7,8]. When $N$ is treated as a known random variable, common distributional assumptions include Poisson (see [6,9]), binomial [10,11], negative binomial [9,12], or mixed Poisson [9].

    For $k>0$, let $h_k(x)=(x-k)^+$ be a call function, where $x^+=\max\{0,x\}$, and let $W$ be any random variable. In an insurance context, if we consider $W$ as the claim amount claimed by an insured and $k$ as a retention, then $(W-k)^+$ represents the excess money paid by a reinsurer (see [13] for more details). The main concern for a reinsurer is determining the magnitude of excess losses, i.e., $E(W-k)^+$. In this paper, we focus on a specialized metric known as the stop-loss distance. This distance is useful for modeling the excess over a set threshold, which is important for assessing potential losses. For any random variables $W$ and $Z$, the stop-loss distance is defined for some $k\in\mathbb{R}^+$ as

    $$d_{SL}^{(k)}(W,Z)=\left|E(W-k)^+-E(Z-k)^+\right| \qquad (1.1)$$

    and the uniform version as

    $$d_{SL}(W,Z)=\sup_{k\in\mathbb{R}^+}\left|E(W-k)^+-E(Z-k)^+\right|. \qquad (1.2)$$

    The stop-loss distances can be classified into two distinct categories. For a fixed value of $k$ in Eq (1.1), the resulting error bound typically depends on $k$, which is referred to as a non-uniform bound. In contrast, the error bound corresponding to Eq (1.2), which applies uniformly across all values of $k\in\mathbb{R}^+$, is called a uniform bound. In situations where the stop-loss threshold $k$ is sufficiently large, non-uniform bounds provide a more precise approximation than uniform bounds.
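To make the distinction concrete, the sketch below (our own illustration, not part of the paper's analysis) estimates the non-uniform distance (1.1) by Monte Carlo. The choice of $W$ as a standardized sum of 30 Uniform(-1,1) variables is an arbitrary assumption; for the standard normal, $E(Z-k)^+$ has the closed form $\varphi(k)-k(1-\Phi(k))$.

```python
import math
import random

def call_expectation_normal(k):
    """E(Z-k)^+ for standard normal Z: phi(k) - k*(1 - Phi(k))."""
    phi = math.exp(-k * k / 2) / math.sqrt(2 * math.pi)
    Phi = 0.5 * (1 + math.erf(k / math.sqrt(2)))
    return phi - k * (1 - Phi)

def stop_loss_distance(samples_w, k):
    """Empirical d_SL^(k)(W, Z) = |E(W-k)^+ - E(Z-k)^+|."""
    ew = sum(max(w - k, 0.0) for w in samples_w) / len(samples_w)
    return abs(ew - call_expectation_normal(k))

random.seed(0)
n = 30
# W: standardized sum of n i.i.d. Uniform(-1, 1) variables (variance n/3)
scale = math.sqrt(n / 3)
W = [sum(random.uniform(-1, 1) for _ in range(n)) / scale for _ in range(20000)]

d1 = stop_loss_distance(W, 1.0)   # non-uniform distance at the fixed threshold k = 1
```

For large thresholds such as $k=3$, both call expectations are tiny, which is why a bound that shrinks with $k$ (a non-uniform bound) is more informative there.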

    We focus on the stop-loss distance because it is useful for estimating potential losses and can be applied in many different areas. In risk management and insurance, for example, it models excess losses in the collective risk model and default risks in financial products, such as collateralized debt obligations (CDOs). Its relevance also extends to finance and investment, where understanding the distribution of excess losses beyond a certain point is crucial for effective risk management. For instance, in an insurance context, if we consider $W:=X_1+X_2+\cdots+X_N$ as the total claim amount with a retention $k$, where $X_j$ denotes the claim amount from the $j$th contract and $N$ represents the number of claims, then $(W-k)^+$ represents the excess money paid by a reinsurer. A key application of this setup is in modeling total claims from a portfolio of insurance contracts. There are two approaches to modeling $W$: the individual risk model and the collective risk model, which differ based on whether $N$ is fixed or random. In the individual risk model, the total claims are represented as $X_1+X_2+\cdots+X_n$, where $n$ is the fixed number of contracts in the portfolio. However, in many real-world insurance products, such as accident insurance, the number of claims is not fixed and can vary unpredictably. This uncertainty leads to the collective risk model, where $N$ is treated as a random variable representing the number of claims. In practice, obtaining the precise value of $E(W-k)^+$ is extremely difficult. To address this, we estimate $E(W-k)^+$ using stop-loss distances with suitable limiting distributions $Z$. Beyond applications in the insurance context, this concept is also applied to risk management for CDOs, as discussed further in Section 4.

    Throughout this paper, we let $X_1,X_2,\ldots$ be independent random variables with a zero mean and finite absolute third moment, and let $N$ be a non-negative, integer-valued random variable with finite variance that is independent of the $X_j$'s.

    We will consistently use the following notation for any random variable $X_i$: $\sigma_i^2=\mathrm{Var}(X_i)=EX_i^2$ and $\gamma_i=E|X_i|^3$. For a positive integer $n$, let

    $$s_n^2=\sum_{i=1}^n\sigma_i^2\quad\text{and}\quad\delta_n=\frac{1}{s_n^2\sqrt{Es_N^2}}\sum_{i=1}^n\gamma_i.$$

    For a non-negative integer-valued random variable N, we similarly define the random variables

    $$s_N^2=\sum_{i=1}^N\sigma_i^2\quad\text{and}\quad\delta_N=\frac{1}{s_N^2\sqrt{Es_N^2}}\sum_{i=1}^N\gamma_i.$$

    Additionally, we denote

    $$Y_i=\frac{X_i}{\sqrt{Es_N^2}},\quad\text{and}\quad W_N=\sum_{i=1}^NY_i.$$

    From these, we observe that $EW_N=0$ and $\mathrm{Var}(W_N)=1$, since $\mathrm{Var}\left(\sum_{i=1}^NX_i\right)=Es_N^2$.
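As a quick sanity check (our own illustration, not part of the paper), one can simulate $W_N$ and confirm the standardization. The summand distribution Uniform(-1,1) and the Poisson law for $N$ are illustrative assumptions; in the i.i.d. case $Es_N^2=\sigma^2EN$.

```python
import math
import random

random.seed(1)
lam = 8.0            # Poisson rate for N (illustrative choice)
sigma2 = 1.0 / 3     # Var(X_i) for X_i ~ Uniform(-1, 1)

# For i.i.d. X_i, s_N^2 = N * sigma^2, hence E[s_N^2] = sigma^2 * E[N]
E_sN2 = sigma2 * lam

def sample_poisson(l):
    # Knuth's multiplication algorithm (fine for moderate rates)
    L, k, p = math.exp(-l), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def sample_WN():
    N = sample_poisson(lam)
    total = sum(random.uniform(-1, 1) for _ in range(N))
    return total / math.sqrt(E_sN2)   # W_N = (sum of X_i) / sqrt(E s_N^2)

samples = [sample_WN() for _ in range(50000)]
mean = sum(samples) / len(samples)
var = sum(x * x for x in samples) / len(samples) - mean ** 2
```

The empirical mean and variance come out near 0 and 1, as the display above requires.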

    As previously discussed, numerous authors have explored approximations for the random sum $X_1+X_2+\cdots+X_N$, with a particular emphasis on Gaussian approximations. In this paper, we aim to find both uniform and non-uniform bounds for the stop-loss distance under the assumptions mentioned earlier. A recent result on the Wasserstein distance was presented by Döbler [7] in 2015, which is stated as follows:

    Theorem 1.1. [7] Let $X_1,X_2,\ldots$ be identically distributed. Then

    $$d_W(W_N,Z)\le\frac{2\sqrt{\mathrm{Var}(N)}}{EN}+\frac{3\gamma_1}{\sigma_1^3\sqrt{EN}}.$$

    We observe that $d_{SL}(W_N,Z)\le d_W(W_N,Z)$, since any call function is a Lipschitz continuous function. Using this inequality, Theorem 1.1 directly provides a uniform bound for $d_{SL}(W_N,Z)$ in the case where the $X_j$'s are independent and identically distributed (i.i.d.). However, we cannot apply this to obtain a non-uniform bound. In this paper, we provide uniform and non-uniform bounds for the stop-loss distance without requiring the $X_j$'s to be identically distributed. A key tool in our approach, as well as in Döbler's work [7], is Stein's method. Additionally, Stein's method has been applied in non-i.i.d. settings in works such as [14,15]. Using Stein's method along with the concentration inequality approach outlined in Section 3, we derive the main results presented in the following theorems.

    Theorem 1.2. (Uniform bound)

    $$d_{SL}(W_N,Z)\le\sqrt{\frac{2}{\pi}}\left(\frac{\sqrt{\mathrm{Var}(s_N^2)}}{Es_N^2}\right)+\frac{7.24\,E[\delta_Ns_N]}{\sqrt{Es_N^2}}+\frac{3\,E[\delta_Ns_N^2]}{Es_N^2}.$$

    Theorem 1.3. (Non-uniform bound) For $k\ge 3$, we have

    $$d_{SL}^{(k)}(W_N,Z)\le\frac{1}{1+k}\left[1.38\,\frac{\sqrt{\mathrm{Var}(s_N^2)}}{Es_N^2}+\frac{25.047\,E[\delta_Ns_N]}{\sqrt{Es_N^2}}+\frac{1.725\,E[\delta_Ns_N^2]}{Es_N^2}+\frac{1.68\,E[\delta_Ns_N^4]}{(Es_N^2)^2}\right].$$

    To gain insight into the behavior of the bounds, which we expect to converge to zero, we often examine a specific case where the $X_i$'s are assumed to be identically distributed. Under this assumption, we can directly derive the following corollary.

    Corollary 1.1. Let $X_1,X_2,\ldots$ be identically distributed. Then

    (i)

    $$d_{SL}(W_N,Z)\le\sqrt{\frac{2}{\pi}}\left(\frac{\sqrt{\mathrm{Var}(N)}}{EN}\right)+\frac{10.24\,\gamma_1}{\sigma_1^3\sqrt{EN}},\quad\text{and}$$

    (ii) for $k\ge 3$, we have

    $$d_{SL}^{(k)}(W_N,Z)\le\frac{1}{1+k}\left[\frac{1.38\sqrt{\mathrm{Var}(N)}}{EN}+\frac{26.772\,\gamma_1}{\sigma_1^3\sqrt{EN}}+\frac{1.68\,\gamma_1EN^2}{\sigma_1^3(EN)^{\frac{5}{2}}}\right].$$

    To ensure the convergence of each bound, it is essential to verify that all terms involving the random variable $N$, such as $\frac{\sqrt{\mathrm{Var}(N)}}{EN}$, $\frac{1}{\sqrt{EN}}$, and $\frac{EN^2}{(EN)^{5/2}}$, approach zero. These terms represent different aspects of the variability and scale of $N$, and their convergence is crucial for controlling the bounds effectively. As stated in the above corollary, in the trivial case where $N=n$, it is evident that the bound converges to zero as $n\to\infty$. To explore other possible and more complex forms of $N$, we provide the following remark, which illustrates the behavior of the bound in such cases.

    Remark 1.1. We present the result from Corollary 1.1 for the case where N follows specific, well-known distributions:

    (i) For $N\sim\mathrm{Bin}(n,p)$, $n\in\mathbb{N}$, $p\in(0,1)$, we have

    $$d_{SL}(W_N,Z)\le\frac{1}{\sqrt{np}}\left[\sqrt{\frac{2(1-p)}{\pi}}+\frac{10.24\,\gamma_1}{\sigma_1^3}\right]\quad\text{and}\quad d_{SL}^{(k)}(W_N,Z)\le\frac{1}{(1+k)\sqrt{np}}\left[1.38\sqrt{1-p}+\frac{26.772\,\gamma_1}{\sigma_1^3}+\frac{1.68\,\gamma_1((1-p)+np)}{np\,\sigma_1^3}\right],\quad\text{for }k\ge 3.$$

    In this case, it is straightforward to observe that the bounds converge to zero as $n\to\infty$.

    (ii) For $N\sim\mathrm{Poi}(\lambda)$, $\lambda>0$,

    $$d_{SL}(W_N,Z)\le\frac{1}{\sqrt{\lambda}}\left[\sqrt{\frac{2}{\pi}}+\frac{10.24\,\gamma_1}{\sigma_1^3}\right]\quad\text{and}\quad d_{SL}^{(k)}(W_N,Z)\le\frac{1}{(1+k)\sqrt{\lambda}}\left[1.38+\frac{26.772\,\gamma_1}{\sigma_1^3}+\frac{1.68\,\gamma_1(\lambda+1)}{\lambda\sigma_1^3}\right],\quad\text{for }k\ge 3.$$

    As $\lambda\to\infty$, the bounds still converge to zero.

    (iii) For $N\sim\mathrm{NB}(r,p)$, $r>0$, $p\in(0,1)$,

    $$d_{SL}(W_N,Z)\le\frac{1}{\sqrt{r(1-p)}}\left[\sqrt{\frac{2}{\pi}}+\frac{10.24\,\gamma_1\sqrt{p}}{\sigma_1^3}\right]\quad\text{and}\quad d_{SL}^{(k)}(W_N,Z)\le\frac{1}{(1+k)\sqrt{r(1-p)}}\left[1.38+\frac{26.772\,\gamma_1\sqrt{p}}{\sigma_1^3}+\frac{1.68\,\gamma_1\sqrt{p}\,(1+r(1-p))}{r(1-p)\,\sigma_1^3}\right],\quad\text{for }k\ge 3.$$

    In the final case of the remark, it is clear that both bounds approach zero as $r$ increases.
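The three cases of the remark can be evaluated numerically. The helper functions below are our own sketch of the uniform bounds; the default $\gamma_1=\sigma_1=1$ corresponds to symmetric $\pm 1$ summands and is an assumption for illustration, as are the parameter values.

```python
import math

def bin_uniform_bound(n, p, gamma1=1.0, sigma1=1.0):
    """Uniform stop-loss bound when N ~ Bin(n, p), per Remark 1.1(i)."""
    r = gamma1 / sigma1 ** 3
    return (math.sqrt(2 * (1 - p) / math.pi) + 10.24 * r) / math.sqrt(n * p)

def poi_uniform_bound(lam, gamma1=1.0, sigma1=1.0):
    """Uniform stop-loss bound when N ~ Poi(lam), per Remark 1.1(ii)."""
    r = gamma1 / sigma1 ** 3
    return (math.sqrt(2 / math.pi) + 10.24 * r) / math.sqrt(lam)

def nb_uniform_bound(r_, p, gamma1=1.0, sigma1=1.0):
    """Uniform stop-loss bound when N ~ NB(r, p) with EN = r(1-p)/p, per Remark 1.1(iii)."""
    c = gamma1 * math.sqrt(p) / sigma1 ** 3
    return (math.sqrt(2 / math.pi) + 10.24 * c) / math.sqrt(r_ * (1 - p))

b_bin = bin_uniform_bound(10000, 0.5)   # matches the 15.28/sqrt(n) figure below
b_poi = poi_uniform_bound(5000.0)
b_nb = nb_uniform_bound(1000.0, 0.001)  # small p makes the gamma term nearly vanish
```

With $p=0.5$ the binomial bound reduces to about $15.28/\sqrt{n}$, while for $\mathrm{NB}(r,0.001)$ it is about $1.13/\sqrt{r}$, consistent with the comparison in Remark 1.2.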

    We observe that to apply the theorem most effectively, it is important to carefully select an appropriate form of N along with its parameters, as these choices significantly influence the efficiency and tightness of the resulting bounds.

    In the following remark, we present specific examples and situations where Döbler's approach yields better bounds and, conversely, where our results show improvements over Döbler's.

    Remark 1.2. Let $X_1,X_2,\ldots$ be independent and identically distributed symmetric random signs, i.e., $P(X_j=\pm 1)=0.5$, and let $N$ be a non-negative, integer-valued random variable independent of the $X_j$'s.

    (i) For $N\sim\mathrm{Bin}(n,0.5)$, we apply Theorem 1.1 to obtain that $d_{SL}(W_N,Z)\le\frac{6.25}{\sqrt{n}}$, while Corollary 1.1 provides $d_{SL}(W_N,Z)\le\frac{15.28}{\sqrt{n}}$.

    (ii) For $N\sim\mathrm{NB}(r,0.001)$, using Theorem 1.1 and Corollary 1.1, we have, for $r\ge 1$, $d_{SL}(W_N,Z)\le\frac{3.2}{\sqrt{r}}$ and $d_{SL}(W_N,Z)\le\frac{1.13}{\sqrt{r}}$, respectively.

    In this work, we discuss Stein's method, which is an important tool for our study, in Section 2. Subsequently, the detailed proof of the main result is presented in Section 3. Finally, we demonstrate the applications of our results in specific areas, such as collective risk models and CDOs.

    In this section, we introduce the foundational tools for our work, beginning with the ingenious approach developed by Charles Stein in 1972, commonly referred to as Stein's method [16]. Let $\Phi$ be the distribution function of the standard normal $Z$, and $C_{bd}$ be the set of continuous and piecewise continuously differentiable functions $f:\mathbb{R}\to\mathbb{R}$ with $E|f'(Z)|<\infty$. Stein's method begins with the Stein equation for normal approximation,

    $$xf(x)-f'(x)=h(x)-Eh(Z) \qquad (2.1)$$

    for a given function $h$ and $f\in C_{bd}$. The solution of Eq (2.1) is

    $$f(x)=e^{\frac{x^2}{2}}\int_x^{\infty}e^{-\frac{t^2}{2}}\left[h(t)-Eh(Z)\right]dt,$$

    see [17, p. 15].

    In this work, we apply the Stein equation (2.1) with $h(x)=(x-k)^+$ and $k>0$. Then we have

    $$xf(x)-f'(x)=(x-k)^+-E(Z-k)^+. \qquad (2.2)$$

    The solution of Eq (2.2) is

    $$f_k(x)=\begin{cases}\sqrt{2\pi}\,e^{\frac{x^2}{2}}E(Z-k)^+\Phi(x),&\text{if }x\le k,\\[4pt] 1-\sqrt{2\pi}\,e^{\frac{x^2}{2}}\left(k+E(Z-k)^+\right)(1-\Phi(x)),&\text{if }x>k,\end{cases}$$

    and the expression for the first derivative, denoted by $f_k'$, is as follows:

    $$f_k'(x)=\begin{cases}E(Z-k)^+\left(1+\sqrt{2\pi}\,xe^{\frac{x^2}{2}}\Phi(x)\right),&\text{if }x<k,\\[4pt]\left(k+E(Z-k)^+\right)\left(1-\sqrt{2\pi}\,xe^{\frac{x^2}{2}}(1-\Phi(x))\right),&\text{if }x>k.\end{cases} \qquad (2.3)$$

    It is important to note that (2.3) leaves $f_k'$ undefined at the point $x=k$, owing to the non-differentiability of $h_k$ at this point. However, by using the solution at $x=k$ in conjunction with Stein's equation (2.2), we can complete the expression for $f_k'$ by setting

    $$f_k'(k)=E(Z-k)^+\left(1+\sqrt{2\pi}\,ke^{\frac{k^2}{2}}\Phi(k)\right).$$

    From this fact and (2.3), we have

    $$f_k'(x)=\begin{cases}E(Z-k)^+\left(1+\sqrt{2\pi}\,xe^{\frac{x^2}{2}}\Phi(x)\right),&\text{if }x\le k,\\[4pt]\left(k+E(Z-k)^+\right)\left(1-\sqrt{2\pi}\,xe^{\frac{x^2}{2}}(1-\Phi(x))\right),&\text{if }x>k.\end{cases}$$
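The piecewise formulas above can be checked numerically. The sketch below is our own illustration (not from the paper): it implements $f_k$ and $f_k'$ using the standard-normal CDF via `math.erf`, verifies the Stein equation $xf_k(x)-f_k'(x)=(x-k)^+-E(Z-k)^+$ at sample points, and checks that $0\le f_k'\le\sqrt{2/\pi}$ there.

```python
import math

SQRT2PI = math.sqrt(2 * math.pi)

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def EZ_call(k):
    """E(Z-k)^+ = phi(k) - k(1 - Phi(k))."""
    return math.exp(-k * k / 2) / SQRT2PI - k * (1 - Phi(k))

def f_k(x, k):
    """Solution of the Stein equation x f(x) - f'(x) = (x-k)^+ - E(Z-k)^+."""
    if x <= k:
        return SQRT2PI * math.exp(x * x / 2) * EZ_call(k) * Phi(x)
    return 1 - SQRT2PI * math.exp(x * x / 2) * (k + EZ_call(k)) * (1 - Phi(x))

def f_k_prime(x, k):
    """First derivative of the solution (the combined, two-branch form)."""
    if x <= k:
        return EZ_call(k) * (1 + SQRT2PI * x * math.exp(x * x / 2) * Phi(x))
    return (k + EZ_call(k)) * (1 - SQRT2PI * x * math.exp(x * x / 2) * (1 - Phi(x)))

k = 3.0
# Residuals of the Stein equation at points on both sides of k
residuals = [abs(x * f_k(x, k) - f_k_prime(x, k) - (max(x - k, 0.0) - EZ_call(k)))
             for x in [-2.0, 0.0, 1.5, 2.9, 3.5, 5.0]]
```

The residuals vanish up to floating-point rounding, confirming the reconstruction of both branches.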

    A crucial component of Stein's method is identifying the properties of the solution $f_k$. From Lemma 2.4 of Chen et al. [18] and some observations shown by Jongpreechaharn and Neammanee [19, p. 210–211], we have that

    $$0\le f_k'(x)\le\sqrt{\frac{2}{\pi}},\quad\text{for }x\in\mathbb{R}. \qquad (2.4)$$

    For the non-uniform bound for $f_k'$, we use some results from [19] to derive the bounds for $f_k'$ in the following proposition.

    Proposition 2.1. For $k\ge 3$ and $x\in\mathbb{R}$, $0\le f_k'(x)\le\dfrac{1.38}{1+k}$.

    Proof. Using the fact that $0\le f_k'(x)\le\dfrac{e^{-\frac{k^2}{2}}}{\sqrt{2\pi}}\cdot\dfrac{k^2+1}{k}$ for all $x\in\mathbb{R}$ and $k\ge 1$ [19, p. 210–211], and

    $$\frac{1}{k}\le\frac{4}{3(1+k)}\quad\text{and}\quad e^{-\frac{k^2}{2}}(k^2+1)\le 2,\quad\text{for }k\ge 3,$$

    we have

    $$0\le f_k'(x)\le\frac{2}{\sqrt{2\pi}}\cdot\frac{4}{3(1+k)}\le\frac{1.38}{1+k}.$$

    Additionally, the second derivative, denoted by $f_k''$, is expressed as follows:

    $$f_k''(x)=\begin{cases}E(Z-k)^+\left[x+\sqrt{2\pi}\,e^{\frac{x^2}{2}}\Phi(x)(x^2+1)\right],&\text{if }x<k,\\[4pt]\left(k+E(Z-k)^+\right)\left[x-\sqrt{2\pi}\,e^{\frac{x^2}{2}}(1-\Phi(x))(x^2+1)\right],&\text{if }x>k.\end{cases}$$

    We notice that $f_k''$ does not exist at $x=k$. It is important to highlight that this prevents the direct application of the mean value theorem on any interval $[a,b]$ where $k\in(a,b)$, since the theorem requires differentiability over the entire open interval $(a,b)$. The uniform bound for $f_k''$ was observed by [20, p. 3501], indicating that

    $$|f_k''(x)|\le 2,\quad\text{for }x\in\mathbb{R}\setminus\{k\}. \qquad (2.5)$$

    Beyond the uniform bound for fk, Jongpreechaharn and Neammanee [20, p. 3502] utilized the result

    $$E(Z-k)^+\le\frac{e^{-\frac{k^2}{2}}}{\sqrt{2\pi}}\min\left\{1,\frac{1}{k^2}\right\} \qquad (2.6)$$

    in [21, p. 115] to show that

    $$|f_k''(x)|\le 1.43\,e^{-\frac{3k^2}{8}}\quad\text{for }x\le\frac{k}{2}. \qquad (2.7)$$

    We employ some ideas from the process of finding (2.7) to obtain the following results.

    Proposition 2.2. For $k\ge 3$,

    (i) $|f_k''(x)|\le\dfrac{0.23}{1+k}$ for $x\le k-1$,

    (ii) $|f_k''(x)|\le 1.12$ for $x\in\mathbb{R}\setminus\{k\}$.

    Proof. (i) For $x<0$, by (2.6) and the fact that $|f_k''(x)|\le 2E(Z-k)^+$ [20, p. 3502], we obtain

    $$|f_k''(x)|\le\frac{2e^{-\frac{k^2}{2}}}{\sqrt{2\pi}\,k^2}\le\frac{8}{9(1+k)}\cdot\frac{e^{-\frac{9}{2}}}{\sqrt{2\pi}}\le\frac{0.004}{1+k},$$

    where we use $\frac{1}{k^2}\le\frac{4}{9(1+k)}$ in the second inequality. Otherwise, for $0\le x\le k-1$, we first observe that

    $$0\le f_k''(x)=E(Z-k)^+\left[x+\sqrt{2\pi}\,\Phi(x)e^{\frac{x^2}{2}}(x^2+1)\right]\le E(Z-k)^+\left[(k-1)+\sqrt{2\pi}\,e^{\frac{(k-1)^2}{2}}\left((k-1)^2+1\right)\right]=A+B,$$

    where

    $$A=(k-1)E(Z-k)^+\quad\text{and}\quad B=E(Z-k)^+\left[\sqrt{2\pi}\,e^{\frac{(k-1)^2}{2}}\left((k-1)^2+1\right)\right].$$

    Using (2.6) along with the fact that $\frac{k-1}{k^2}\le\frac{1}{1+k}$, we obtain that

    $$(k-1)E(Z-k)^+\le\frac{(k-1)e^{-\frac{k^2}{2}}}{\sqrt{2\pi}\,k^2}\le\frac{1}{1+k}\cdot\frac{e^{-\frac{9}{2}}}{\sqrt{2\pi}}\le\frac{0.0045}{1+k}.$$

    To bound $B$, we divide into two cases. If $3\le k\le 4$, we can use (2.6) along with the facts that $\frac{1}{e^k}\le\frac{1}{5(1+k)}$ and $\frac{(k-1)^2}{k^2}\le\frac{9}{16}$ to show that

    $$E(Z-k)^+\left[\sqrt{2\pi}\,(k-1)^2e^{\frac{(k-1)^2}{2}}\right]\le\frac{e^{-\frac{k^2}{2}}}{\sqrt{2\pi}\,k^2}\left[\sqrt{2\pi}\,(k-1)^2e^{\frac{(k-1)^2}{2}}\right]=\frac{(k-1)^2}{k^2}\cdot\frac{\sqrt{e}}{e^k}<\frac{0.1855}{1+k}$$

    and

    $$E(Z-k)^+\left[\sqrt{2\pi}\,e^{\frac{(k-1)^2}{2}}\right]\le\frac{e^{-\frac{k^2}{2}}}{\sqrt{2\pi}\,k^2}\left[\sqrt{2\pi}\,e^{\frac{(k-1)^2}{2}}\right]=\frac{1}{k^2}\cdot\frac{\sqrt{e}}{e^k}\le\frac{0.04}{1+k}.$$

    Thus, for $3\le k\le 4$, $|f_k''(x)|\le\frac{0.23}{1+k}$. For $k\ge 4$, we follow the same argument, using the facts that $\frac{1}{e^k}\le\frac{0.1}{1+k}$ and $\frac{(k-1)^2}{k^2}\le 1$, to obtain that $|f_k''(x)|\le\frac{0.18}{1+k}$.

    (ii) We first observe that, for $k-1<x<k$, we make use of (2.6) to conclude that

    $$0\le f_k''(x)=E(Z-k)^+\left[x+\sqrt{2\pi}\,e^{\frac{x^2}{2}}\Phi(x)(x^2+1)\right]\le\frac{e^{-\frac{k^2}{2}}}{\sqrt{2\pi}\,k^2}\left[k+\sqrt{2\pi}\,e^{\frac{k^2}{2}}(k^2+1)\right]\le\frac{e^{-\frac{k^2}{2}}}{\sqrt{2\pi}\,k}+1+\frac{1}{k^2}<1.12\quad\text{for }k\ge 3. \qquad (2.8)$$

    For the final case where $x>k$, we note that

    $$f_k''(x)=\left(k+E(Z-k)^+\right)\left[x-\sqrt{2\pi}\,e^{\frac{x^2}{2}}(1-\Phi(x))(x^2+1)\right].$$

    To bound this term, we employ the Gaussian tail bound (see [18, p. 16, 38] and [22, p. 252]):

    $$\frac{xe^{-\frac{x^2}{2}}}{(x^2+1)\sqrt{2\pi}}\le 1-\Phi(x)\le\min\left\{\frac{1}{2},\frac{1}{x\sqrt{2\pi}}\right\}e^{-\frac{x^2}{2}}\quad\text{for }x>0,$$

    which gives us

    $$x\le\sqrt{2\pi}\,e^{\frac{x^2}{2}}(1-\Phi(x))(x^2+1)\le x+\frac{1}{x}.$$

    Thus, for $x>k$, we have $f_k''(x)\le 0$, which leads to

    $$|f_k''(x)|=\left(k+E(Z-k)^+\right)\left[\sqrt{2\pi}\,e^{\frac{x^2}{2}}(1-\Phi(x))(x^2+1)-x\right]\le\left(k+E(Z-k)^+\right)\frac{1}{x}\le 1+\frac{e^{-\frac{k^2}{2}}}{\sqrt{2\pi}\,k^3}\le 1.01\quad\text{for }k\ge 3, \qquad (2.9)$$

    where we use (2.6) prior to the last inequality. For $x\le k-1$, (ii) follows directly from (i). Using this fact, along with (2.8) and (2.9), we conclude that (ii) holds.
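The constants in Proposition 2.2 can be spot-checked numerically. The sketch below is our own illustration: it implements $f_k''$ from its piecewise formula and evaluates it at points with $x\le k-1$ and at points near or above $k$, for $k=3$.

```python
import math

SQRT2PI = math.sqrt(2 * math.pi)

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def EZ_call(k):
    """E(Z-k)^+ = phi(k) - k(1 - Phi(k))."""
    return math.exp(-k * k / 2) / SQRT2PI - k * (1 - Phi(k))

def f_k_dd(x, k):
    """Second derivative of the Stein solution (defined for x != k)."""
    g = SQRT2PI * math.exp(x * x / 2)
    if x < k:
        return EZ_call(k) * (x + g * Phi(x) * (x * x + 1))
    return (k + EZ_call(k)) * (x - g * (1 - Phi(x)) * (x * x + 1))

k = 3.0
left = [abs(f_k_dd(x, k)) for x in [-2.0, -1.0, 0.0, 1.0, 2.0]]   # region x <= k - 1
mid = [abs(f_k_dd(x, k)) for x in [2.5, 2.9, 3.1, 3.5, 6.0]]      # near and above k
```

At these points the values respect both the $0.23/(1+k)$ bound on $x\le k-1$ and the global $1.12$ bound, with room to spare.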

    We apply Stein's equation (2.2) to establish the following:

    $$EW_Nf_k(W_N)-Ef_k'(W_N)=E(W_N-k)^+-E(Z-k)^+. \qquad (3.1)$$

    Thus, we can bound $|EW_Nf_k(W_N)-Ef_k'(W_N)|$ instead of $|E(W_N-k)^+-E(Z-k)^+|$. To handle the term $|EW_Nf_k(W_N)-Ef_k'(W_N)|$, we use the concentration inequality approach as a complementary tool alongside Stein's method. This technique was first applied by Chen [23] and has been frequently employed to find bounds in this field (see [18,20,22,24] for more examples). Furthermore, the concentration inequality approach has been applied in recent studies, including the work of [25] and Auld and Neammanee [26]. We recall that, for $i=1,2,\ldots,n$, $Y_i=\frac{X_i}{\sqrt{Es_N^2}}$ and, in a context outside of the random sum framework, we denote

    $$W_n=\sum_{i=1}^nY_i,\quad\text{and}\quad W_n^{(i)}=W_n-Y_i.$$

    From these notations, we can observe that

    $$EW_n^2=\sum_{i=1}^nEY_i^2=\frac{s_n^2}{Es_N^2},\quad\text{and}\quad\sum_{i=1}^nE|Y_i|^3=\frac{\delta_ns_n^2}{Es_N^2}. \qquad (3.2)$$

    For $i=1,2,\ldots$, let $K_i(t)=E\left(Y_iI(0\le t\le Y_i)-Y_iI(Y_i\le t<0)\right)$. The K-function acts as a bridge that links the concentration inequality approach with Stein's method. It allows us to control and understand how well a test function approximates a distribution, especially in terms of its tail behavior and deviations from expected values. To illustrate its tail behavior more clearly, we provide examples in Figures 1 and 2.

    Figure 1.  The case when $Y_i$ has the standard normal distribution.
    Figure 2.  The case when $Y_i+1$ follows the chi-square distribution with parameter 1.

    Chen [23, p. 100] showed that

    $$\int_{-\infty}^{\infty}K_i(t)\,dt=EY_i^2\quad\text{and}\quad\int_{-\infty}^{\infty}|t|K_i(t)\,dt=\frac{1}{2}E|Y_i|^3,$$

    which implies the following:

    $$\sum_{i=1}^n\int_{-\infty}^{\infty}K_i(t)\,dt=\sum_{i=1}^nEY_i^2=\frac{s_n^2}{Es_N^2}\quad\text{and}\quad\sum_{i=1}^n\int_{-\infty}^{\infty}|t|K_i(t)\,dt=\frac{1}{2}\sum_{i=1}^nE|Y_i|^3=\frac{\delta_ns_n^2}{2Es_N^2}. \qquad (3.3)$$

    Using these results, together with Lyapunov's inequality, we derive that

    $$\int_{-\infty}^{\infty}K_i(t)\left(E|Y_i|+|t|\right)dt=EY_i^2\,E|Y_i|+\frac{1}{2}E|Y_i|^3\le\frac{3}{2}E|Y_i|^3.$$

    This leads to the following bound:

    $$\sum_{i=1}^n\int_{-\infty}^{\infty}K_i(t)\left(E|Y_i|+|t|\right)dt\le\frac{3}{2}\sum_{i=1}^nE|Y_i|^3=\frac{3\delta_ns_n^2}{2Es_N^2}. \qquad (3.4)$$
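The identities behind these displays are easy to verify in a concrete case. For $Y_i\sim\mathrm{Uniform}(-1,1)$ (an illustrative choice of ours, not from the paper), a direct calculation gives $K_i(t)=(1-t^2)/4$ on $[-1,1]$, and numerical quadrature recovers $EY_i^2=1/3$ and $\frac{1}{2}E|Y_i|^3=1/8$:

```python
import math

def K(t):
    """K-function of Y ~ Uniform(-1, 1): (1 - t^2)/4 on [-1, 1], zero elsewhere."""
    return (1 - t * t) / 4 if -1 <= t <= 1 else 0.0

def integrate(f, a, b, m=200000):
    """Midpoint-rule quadrature, accurate enough for this smooth integrand."""
    h = (b - a) / m
    return h * sum(f(a + (j + 0.5) * h) for j in range(m))

EY2 = 1.0 / 3        # E Y^2 for Uniform(-1, 1)
EabsY3 = 1.0 / 4     # E|Y|^3 for Uniform(-1, 1)

int_K = integrate(K, -1, 1)                           # should equal E Y^2
int_tK = integrate(lambda t: abs(t) * K(t), -1, 1)    # should equal E|Y|^3 / 2
```

Both integrals match the moment identities of Chen quoted above, which is what makes the K-function usable as a weight in the Stein-equation manipulations that follow.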

    In order to apply Stein's method, we encounter terms like $E[W_nf_k(W_n)]$ and $Ef_k'(W_n)$ that require careful handling. By the fact that $Y_i$ is independent of $W_n^{(i)}$ and $EY_i=0$, we have that

    $$E[W_nf_k(W_n)]=\sum_{i=1}^n\int_{-\infty}^{\infty}Ef_k'(W_n^{(i)}+t)K_i(t)\,dt, \qquad (3.5)$$

    and

    $$Ef_k'(W_n)=\frac{Es_N^2}{s_n^2}\sum_{i=1}^n\int_{-\infty}^{\infty}Ef_k'(W_n)K_i(t)\,dt, \qquad (3.6)$$

    see [18, p. 20]. These results lead to the key expression:

    $$\left|E[W_nf_k(W_n)]-Ef_k'(W_n)\right|=\left|\sum_{i=1}^n\int_{-\infty}^{\infty}E\left(f_k'(W_n^{(i)}+t)-\frac{Es_N^2}{s_n^2}f_k'(W_n)\right)K_i(t)\,dt\right|,$$

    which plays a crucial role in our analysis. Beyond the importance of the K-function highlighted in its introduction, we will see that the K-function serves as a key tool in addressing the expression above, paving the way for the application of concentration inequalities. The use of these inequalities, however, will be demonstrated later in the proof of the main results. One famous concentration inequality theorem, which will be used in our work, is stated as follows:

    Lemma 3.1. [18, p. 54] Let $\xi_1,\xi_2,\ldots,\xi_n$ be independent random variables with zero means, satisfying $\sum_{j=1}^n\mathrm{Var}(\xi_j)=1$. For all real $a<b$, and for every $1\le i\le n$,

    $$P\left(a\le\sum_{j=1,j\ne i}^n\xi_j\le b\right)\le\sqrt{2}\,(b-a)+2(\sqrt{2}+1)\sum_{j=1}^nE|\xi_j|^3.$$
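To see the lemma in action (an illustration we added, not from the paper), take $\xi_j=\pm 1/\sqrt{n}$ with equal probability, so that $\sum_j\mathrm{Var}(\xi_j)=1$ and $\sum_jE|\xi_j|^3=1/\sqrt{n}$. Simulation confirms that the left-hand side stays below the bound:

```python
import math
import random

random.seed(2)
n = 400
xi_scale = 1 / math.sqrt(n)        # xi_j = +-1/sqrt(n): sum of variances is 1
sum_E_abs3 = n * xi_scale ** 3     # sum_j E|xi_j|^3 = 1/sqrt(n)

a, b = -0.25, 0.25
trials = 10000
hits = 0
for _ in range(trials):
    # sum over j != i: n - 1 independent random signs
    heads = sum(random.random() < 0.5 for _ in range(n - 1))
    s = (2 * heads - (n - 1)) * xi_scale
    if a <= s <= b:
        hits += 1
prob = hits / trials

bound = math.sqrt(2) * (b - a) + 2 * (math.sqrt(2) + 1) * sum_E_abs3
```

Here the empirical probability is roughly $0.24$ against a bound of about $0.95$; the value of the lemma is that the bound scales linearly in $b-a$, which is exactly what the proofs below exploit.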

    We are now ready to prove the main results.

    Proof of Theorem 1.2. From (3.3), (3.5), and (3.6), we have that

    $$\begin{aligned}\left|E[W_nf_k(W_n)]-Ef_k'(W_n)\right|&=\left|\sum_{i=1}^n\int_{-\infty}^{\infty}E\left(f_k'(W_n^{(i)}+t)-\frac{Es_N^2}{s_n^2}f_k'(W_n)\right)K_i(t)\,dt\right|\\&\le\sum_{i=1}^n\int_{-\infty}^{\infty}E\left|f_k'(W_n^{(i)}+t)-f_k'(W_n)\right|K_i(t)\,dt+\sum_{i=1}^n\int_{-\infty}^{\infty}\left|1-\frac{Es_N^2}{s_n^2}\right|E\left|f_k'(W_n)\right|K_i(t)\,dt\\&\le\sum_{i=1}^n\int_{-\infty}^{\infty}E\left|f_k'(W_n^{(i)}+t)-f_k'(W_n)\right|K_i(t)\,dt+\|f_k'\|\left|\frac{s_n^2-Es_N^2}{s_n^2}\right|\sum_{i=1}^n\int_{-\infty}^{\infty}K_i(t)\,dt\\&=T+\|f_k'\|\left|\frac{s_n^2-Es_N^2}{Es_N^2}\right|,\end{aligned} \qquad (3.7)$$

    where $\|f_k'\|=\sup_{x\in\mathbb{R}}|f_k'(x)|$ and

    $$T=\sum_{i=1}^n\int_{-\infty}^{\infty}E\left|f_k'(W_n^{(i)}+t)-f_k'(W_n)\right|K_i(t)\,dt.$$

    To bound $T$, we note that

    $$T=\sum_{i=1}^n\int_{-\infty}^{\infty}E\left|f_k'(W_n^{(i)}+t)-f_k'(W_n)\right|K_i(t)I(A_{i,t})\,dt+\sum_{i=1}^n\int_{-\infty}^{\infty}E\left|f_k'(W_n^{(i)}+t)-f_k'(W_n)\right|K_i(t)I(A_{i,t}^c)\,dt:=T_1+T_2, \qquad (3.8)$$

    where

    $$A_{i,t}=\left\{W_n^{(i)}+Y_i>k,\,W_n^{(i)}+t\le k\right\}\cup\left\{W_n^{(i)}+Y_i\le k,\,W_n^{(i)}+t>k\right\}.$$

    To bound $T_1$, we observe that

    $$I(A_{i,t})=I\left(k-Y_i<W_n^{(i)}\le k-t\right)+I\left(k-t<W_n^{(i)}\le k-Y_i\right)\le I\left(k-|t|-|Y_i|<W_n^{(i)}\le k+|t|+|Y_i|\right). \qquad (3.9)$$

    Using Lemma 3.1, (3.2), and the fact that $\mathrm{Var}\left(\frac{\sqrt{Es_N^2}}{s_n}W_n\right)=1$, we have

    $$\begin{aligned}&E\left[P\left(k-|t|-|Y_i|<W_n^{(i)}\le k+|t|+|Y_i|\,\Big|\,Y_i\right)\right]\\&=E\left[P\left(\frac{\sqrt{Es_N^2}}{s_n}\left(k-|t|-|Y_i|\right)<\frac{\sqrt{Es_N^2}}{s_n}W_n^{(i)}\le\frac{\sqrt{Es_N^2}}{s_n}\left(k+|t|+|Y_i|\right)\,\Big|\,Y_i\right)\right]\\&\le 2\sqrt{2}\left(|t|+E|Y_i|\right)\frac{\sqrt{Es_N^2}}{s_n}+2(\sqrt{2}+1)\frac{(Es_N^2)^{\frac{3}{2}}}{s_n^3}\sum_{j=1}^nE|Y_j|^3\\&=2\sqrt{2}\left(|t|+E|Y_i|\right)\frac{\sqrt{Es_N^2}}{s_n}+2(\sqrt{2}+1)\frac{\delta_n\sqrt{Es_N^2}}{s_n}.\end{aligned} \qquad (3.10)$$

    Applying this inequality along with (3.3), (3.4), (3.9), and (3.10), we obtain that

    $$\begin{aligned}\sum_{i=1}^n\int_{-\infty}^{\infty}EK_i(t)I(A_{i,t})\,dt&\le\sum_{i=1}^n\int_{-\infty}^{\infty}EK_i(t)I\left(k-|t|-|Y_i|<W_n^{(i)}\le k+|t|+|Y_i|\right)dt\\&=\sum_{i=1}^n\int_{-\infty}^{\infty}K_i(t)E\left[P\left(k-|t|-|Y_i|<W_n^{(i)}\le k+|t|+|Y_i|\,\Big|\,Y_i\right)\right]dt\\&\le\sum_{i=1}^n\int_{-\infty}^{\infty}K_i(t)\left[2\sqrt{2}\left(|t|+E|Y_i|\right)\frac{\sqrt{Es_N^2}}{s_n}+2(\sqrt{2}+1)\frac{\delta_n\sqrt{Es_N^2}}{s_n}\right]dt\\&\le\sum_{i=1}^n\left[3\sqrt{2}\,E|Y_i|^3\frac{\sqrt{Es_N^2}}{s_n}+2(\sqrt{2}+1)\frac{\delta_n\sqrt{Es_N^2}}{s_n}EY_i^2\right]\\&\le(5\sqrt{2}+2)\frac{\delta_ns_n}{\sqrt{Es_N^2}}.\end{aligned}$$

    Combining this fact and (2.4), we derive the bound for $T_1$ as follows:

    $$T_1\le\sqrt{\frac{2}{\pi}}\left((5\sqrt{2}+2)\frac{\delta_ns_n}{\sqrt{Es_N^2}}\right)\le\frac{7.24\,\delta_ns_n}{\sqrt{Es_N^2}}. \qquad (3.11)$$

    To bound $T_2$, we note that

    $$I\left(W_n^{(i)}+t\le k,\,W_n^{(i)}+Y_i\le k\right)+I\left(W_n^{(i)}+t>k,\,W_n^{(i)}+Y_i>k\right)=I(A_{i,t}^c)\le 1.$$

    Since $f_k''$ exists on $(-\infty,k)$ and $(k,\infty)$, we can apply the mean value theorem on $A_{i,t}^c$, (2.5), and (3.4) to obtain that

    $$T_2=\sum_{i=1}^n\int_{-\infty}^{\infty}E\left|f_k'(W_n^{(i)}+t)-f_k'(W_n)\right|K_i(t)I(A_{i,t}^c)\,dt\le 2\sum_{i=1}^n\int_{-\infty}^{\infty}E\left(|t|+|Y_i|\right)K_i(t)I(A_{i,t}^c)\,dt\le\frac{3\delta_ns_n^2}{Es_N^2}. \qquad (3.12)$$

    From (2.4), (3.1), (3.7), (3.8), (3.11), and (3.12), we have

    $$\begin{aligned}\left|E(W_N-k)^+-E(Z-k)^+\right|&\le\sum_{n=1}^{\infty}P(N=n)\left|E[W_nf_k(W_n)]-Ef_k'(W_n)\right|\\&\le\sum_{n=1}^{\infty}P(N=n)\left[\frac{7.24\,\delta_ns_n}{\sqrt{Es_N^2}}+\frac{3\delta_ns_n^2}{Es_N^2}+\sqrt{\frac{2}{\pi}}\left|\frac{s_n^2-Es_N^2}{Es_N^2}\right|\right]\\&\le\frac{7.24\,E[\delta_Ns_N]}{\sqrt{Es_N^2}}+\frac{3\,E[\delta_Ns_N^2]}{Es_N^2}+\sqrt{\frac{2}{\pi}}\left(\frac{\sqrt{\mathrm{Var}(s_N^2)}}{Es_N^2}\right).\end{aligned}$$

    Proof of Theorem 1.3. From the proof of Theorem 1.2, it remains to bound $T$. Note that

    $$\begin{aligned}T&=\sum_{i=1}^n\int_{-\infty}^{\infty}E\left|f_k'(W_n^{(i)}+t)-f_k'(W_n)\right|K_i(t)I(B_{1,i,t})\,dt+\sum_{i=1}^n\int_{-\infty}^{\infty}E\left|f_k'(W_n^{(i)}+t)-f_k'(W_n)\right|K_i(t)I(B_{2,i,t})\,dt\\&\quad+\sum_{i=1}^n\int_{-\infty}^{\infty}E\left|f_k'(W_n^{(i)}+t)-f_k'(W_n)\right|K_i(t)I(B_{3,i,t})\,dt:=S_1+S_2+S_3,\end{aligned} \qquad (3.13)$$

    where

    $$\begin{aligned}B_{1,i,t}&=\left\{W_n^{(i)}+Y_i>k-1,\,W_n^{(i)}+t\le k-1\right\}\cup\left\{W_n^{(i)}+Y_i\le k-1,\,W_n^{(i)}+t>k-1\right\}\\&\quad\cup\left\{k-1<W_n\le k,\,W_n^{(i)}+t>k\right\}\cup\left\{W_n>k,\,k-1<W_n^{(i)}+t\le k\right\},\\B_{2,i,t}&=\left\{W_n\le k-1,\,W_n^{(i)}+t\le k-1\right\},\quad\text{and}\\B_{3,i,t}&=\left\{k-1<W_n\le k,\,k-1<W_n^{(i)}+t\le k\right\}\cup\left\{W_n>k,\,W_n^{(i)}+t>k\right\}.\end{aligned}$$

    To bound $S_1$, we note from (3.9) that

    $$\begin{aligned}I(B_{1,i,t})&=I\left(k-1-Y_i<W_n^{(i)}\le k-1-t\right)+I\left(k-1-t<W_n^{(i)}\le k-1-Y_i\right)\\&\quad+I\left(k-1<W_n\le k,\,W_n^{(i)}+t>k\right)+I\left(W_n>k,\,k-1<W_n^{(i)}+t\le k\right)\\&\le I\left(k-1-|t|-|Y_i|<W_n^{(i)}\le k-1+|t|+|Y_i|\right)+I\left(k-|t|-|Y_i|<W_n^{(i)}\le k+|t|+|Y_i|\right).\end{aligned}$$

    By applying Lemma 3.1 in a similar fashion as (3.10), we derive the following inequality:

    $$E\left[P\left(k-1-|t|-|Y_i|<W_n^{(i)}\le k-1+|t|+|Y_i|\,\Big|\,Y_i\right)\right]\le 2\sqrt{2}\left(|t|+E|Y_i|\right)\frac{\sqrt{Es_N^2}}{s_n}+2(\sqrt{2}+1)\frac{\delta_n\sqrt{Es_N^2}}{s_n}.$$

    Applying this inequality along with (3.3), (3.4), and (3.10), we obtain that

    $$\begin{aligned}\sum_{i=1}^n\int_{-\infty}^{\infty}EK_i(t)I(B_{1,i,t})\,dt&\le\sum_{i=1}^n\int_{-\infty}^{\infty}K_i(t)\Big[E\left(P\left(k-1-|t|-|Y_i|<W_n^{(i)}\le k-1+|t|+|Y_i|\,\big|\,Y_i\right)\right)\\&\qquad+E\left(P\left(k-|t|-|Y_i|<W_n^{(i)}\le k+|t|+|Y_i|\,\big|\,Y_i\right)\right)\Big]dt\\&\le\sum_{i=1}^n\int_{-\infty}^{\infty}K_i(t)\left[4\sqrt{2}\left(|t|+E|Y_i|\right)\frac{\sqrt{Es_N^2}}{s_n}+4(\sqrt{2}+1)\frac{\delta_n\sqrt{Es_N^2}}{s_n}\right]dt\\&\le\frac{18.15\,\delta_ns_n}{\sqrt{Es_N^2}}.\end{aligned}$$

    Combining this fact and Proposition 2.1, we derive the bound for $S_1$ as follows:

    $$S_1\le\frac{18.15\,\delta_ns_n\|f_k'\|}{\sqrt{Es_N^2}}\le\frac{1}{1+k}\left(\frac{25.047\,\delta_ns_n}{\sqrt{Es_N^2}}\right). \qquad (3.14)$$

    To bound $S_2$, we use the mean value theorem, Proposition 2.2(i), and (3.4) to obtain that

    $$\begin{aligned}S_2&=\sum_{i=1}^n\int_{-\infty}^{\infty}E\left|f_k'(W_n^{(i)}+t)-f_k'(W_n)\right|K_i(t)I\left(W_n^{(i)}+t\le k-1,\,W_n^{(i)}+Y_i\le k-1\right)dt\\&\le\sum_{i=1}^n\int_{-\infty}^{\infty}E|f_k''(w)|\left(|t|+|Y_i|\right)K_i(t)\,dt,\quad\text{for some }w\le k-1\\&\le\frac{0.23}{1+k}\left[\sum_{i=1}^n\int_{-\infty}^{\infty}E\left(|t|+|Y_i|\right)K_i(t)\,dt\right]\le\frac{0.23}{1+k}\left[\frac{3\delta_ns_n^2}{2Es_N^2}\right]=\frac{0.345\,\delta_ns_n^2}{(1+k)Es_N^2}.\end{aligned} \qquad (3.15)$$

    To bound $S_3$, we note that

    $$\begin{aligned}S_3&=\sum_{i=1}^n\int_{-\infty}^{\infty}E\left|f_k'(W_n^{(i)}+t)-f_k'(W_n)\right|K_i(t)I(B_{3,i,t})\,dt\\&=\sum_{i=1}^n\int_{-\infty}^{\infty}E\left|f_k'(W_n^{(i)}+t)-f_k'(W_n)\right|K_i(t)I(B_{3,i,t})I(|Y_i|\le 1)\,dt\\&\quad+\sum_{i=1}^n\int_{-\infty}^{\infty}E\left|f_k'(W_n^{(i)}+t)-f_k'(W_n)\right|K_i(t)I(B_{3,i,t})I(|Y_i|>1)\,dt:=S_{3,1}+S_{3,2}.\end{aligned}$$

    For $S_{3,1}$, we note that

    $$I\left(k-1<W_n\le k,\,k-1<W_n^{(i)}+t\le k\right)I(|Y_i|\le 1)\le I(W_n>k-1)I(|Y_i|\le 1)\le I\left(W_n^{(i)}>k-2\right)I(|Y_i|\le 1).$$

    Similarly,

    $$I\left(W_n>k,\,W_n^{(i)}+t>k\right)I(|Y_i|\le 1)\le I(W_n>k)I(|Y_i|\le 1)\le I\left(W_n^{(i)}>k-2\right)I(|Y_i|\le 1).$$

    From these observations and since $\{k-1<W_n\le k,\,k-1<W_n^{(i)}+t\le k\}$ and $\{W_n>k,\,W_n^{(i)}+t>k\}$ are disjoint, we conclude that

    $$I(B_{3,i,t})I(|Y_i|\le 1)\le I\left(W_n^{(i)}>k-2\right)I(|Y_i|\le 1).$$

    By applying the mean value theorem in conjunction with the above fact, Proposition 2.2(ii), and (3.2), we obtain

    $$\begin{aligned}S_{3,1}&\le 1.12\sum_{i=1}^n\int_{-\infty}^{\infty}E\left(|Y_i|+|t|\right)K_i(t)I(B_{3,i,t})I(|Y_i|\le 1)\,dt\\&\le 1.12\sum_{i=1}^n\int_{-\infty}^{\infty}E\left(|Y_i|+|t|\right)K_i(t)I\left(W_n^{(i)}>k-2\right)I(|Y_i|\le 1)\,dt\\&\le 1.12\sum_{i=1}^n\left[\frac{3E|Y_i|^3}{2}P\left(W_n^{(i)}>k-2\right)\right]\le 1.68\sum_{i=1}^n\left[E|Y_i|^3\left(\frac{E(W_n^{(i)})^2}{(k-2)^2}\right)\right]\\&\le 1.68\sum_{i=1}^n\left[E|Y_i|^3\left(\frac{s_n^2}{(1+k)Es_N^2}\right)\right]=\frac{1.68\,\delta_ns_n^4}{(1+k)(Es_N^2)^2},\end{aligned}$$

    where we use the fact that for $k\ge 3$, $\frac{1}{(k-2)^2}\le\frac{1}{1+k}$ in the final inequality of the chain. To bound $S_{3,2}$, we use Proposition 2.1, (3.2), and (3.3) to obtain

    $$S_{3,2}\le\frac{1.38}{1+k}\sum_{i=1}^n\int_{-\infty}^{\infty}EK_i(t)I(|Y_i|>1)\,dt\le\frac{1.38}{1+k}\sum_{i=1}^nE\left[Y_i^2I(|Y_i|>1)\right]\le\frac{1.38}{1+k}\sum_{i=1}^nE|Y_i|^3=\frac{1.38\,\delta_ns_n^2}{(1+k)Es_N^2}.$$

    Then

    $$S_3\le\frac{1.68\,\delta_ns_n^4}{(1+k)(Es_N^2)^2}+\frac{1.38\,\delta_ns_n^2}{(1+k)Es_N^2}. \qquad (3.16)$$

    Combining (3.7) and (3.13)–(3.16), we have

    $$\left|E[W_nf_k(W_n)]-Ef_k'(W_n)\right|\le\frac{1}{1+k}\left[\frac{25.047\,\delta_ns_n}{\sqrt{Es_N^2}}+\frac{1.725\,\delta_ns_n^2}{Es_N^2}+\frac{1.68\,\delta_ns_n^4}{(Es_N^2)^2}+1.38\left|\frac{s_n^2-Es_N^2}{Es_N^2}\right|\right].$$

    Therefore, using this inequality along with (3.1), we conclude that

    $$\left|E(W_N-k)^+-E(Z-k)^+\right|\le\frac{1}{1+k}\left[\frac{25.047\,E[\delta_Ns_N]}{\sqrt{Es_N^2}}+\frac{1.725\,E[\delta_Ns_N^2]}{Es_N^2}+\frac{1.68\,E[\delta_Ns_N^4]}{(Es_N^2)^2}+1.38\,\frac{\sqrt{\mathrm{Var}(s_N^2)}}{Es_N^2}\right].$$

    Since the $X_j$'s are identically distributed, we have $\delta_n=\frac{\gamma_1}{\sigma_1^3\sqrt{EN}}$, which implies that

    $$Es_N^2=\sigma_1^2EN,\quad\frac{\delta_ns_n}{\sqrt{Es_N^2}}=\frac{\gamma_1\sqrt{n}}{\sigma_1^3EN},\quad\text{and}\quad\frac{\delta_ns_n^2}{Es_N^2}=\frac{\gamma_1n}{\sigma_1^3(EN)^{\frac{3}{2}}}.$$

    Then

    $$\frac{E[\delta_Ns_N]}{\sqrt{Es_N^2}}\le\frac{E[\delta_Ns_N^2]}{Es_N^2}=\frac{\gamma_1}{\sigma_1^3\sqrt{EN}}.$$

    In conclusion, the proof can be finalized by noting that

    $$\frac{E[\delta_Ns_N^4]}{(Es_N^2)^2}=\frac{\gamma_1EN^2}{\sigma_1^3(EN)^{\frac{5}{2}}}.$$

    A fascinating example of utilizing the call function has emerged in the pricing of CDO tranches. A CDO is a complex financial product that pools together various debt instruments, such as bonds and loans, and then divides the pool into tranches with varying levels of risk and return. Each tranche is assigned a different payment priority and interest rate. The tranches used in CDOs are typically known as senior, mezzanine, and junior. The senior tranche includes securities with high credit ratings and tends to be low risk, and therefore has lower returns. Investors have the option to invest in various tranches based on their preferences. In the conventional pricing model for a CDO with a total of $n$ portfolios, each portfolio $i$ is assumed to possess a recovery rate $R>0$, which signifies the proportion of bad debt that can be recovered. The percentage loss at time $T$ is given by the total loss on the portfolio,

    $$L(T)=\frac{1-R}{n}\sum_{i=1}^nI_{\{\tau_i\le T\}},$$

    where $\tau_i$ is the default time of the $i$th portfolio and $I_A$ is the indicator function of the set $A$.

    For each CDO tranche, there exists a detachment point (a limit above which the tranche loss does not increase) and an attachment point (a limit below which the tranche bears none of the loss). Since the cash flow of a CDO tranche is driven by its loss, the pricing problem can be reduced to the problem of calculating the expectation of a call function, i.e., $E(L(T)-l)^+$, where $l$ is the attachment or the detachment point of the tranche (see [27,28,29,30] for more details).

    The approximation of the stop-loss model for CDOs has been widely studied, as evidenced by [20,27,31,32]. To apply Theorem 1.3 effectively in the Gaussian approximation, we need to center the summand variables. Specifically, for each $i=1,2,\ldots,n$, define

    $$\xi_i=\frac{1-R}{n}I_{\{\tau_i\le T\}}\quad\text{and}\quad X_i=\xi_i-\mu_i,$$

    where $\mu_i=E\xi_i=\frac{(1-R)p_i}{n}$ and $p_i=P(\tau_i\le T)$. Then we let

    $$Y_i=\frac{X_i}{s_n}\quad\text{and}\quad W_n=\sum_{i=1}^nY_i=\frac{L(T)-\mu}{s_n},$$

    where $\mu:=\sum_{i=1}^n\mu_i$. Then we can apply Theorems 1.2 and 1.3 to obtain that

    $$\left|E(W_n-k)^+-E(Z-k)^+\right|\le 10.24\,\delta_n$$

    and for $k\ge 3$,

    $$\left|E(W_n-k)^+-E(Z-k)^+\right|\le\frac{28.452\,\delta_n}{1+k},$$

    where

    $$s_n^2=\sum_{i=1}^n\mathrm{Var}(X_i)=\left(\frac{1-R}{n}\right)^2\sum_{i=1}^np_i(1-p_i)\quad\text{and}\quad\delta_n=\left(\frac{1-R}{ns_n}\right)^3\sum_{i=1}^np_i(1-p_i)\left(1-2p_i+2p_i^2\right).$$

    Based on the behavior of the bounds above, we observe that both bounds converge to zero at the rate $O(\delta_n)=O\left(\frac{1}{\sqrt{n}}\right)$.
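The rate claim can be made concrete. The sketch below is our own illustration: it evaluates $s_n$, $\delta_n$, and the two bounds for a homogeneous portfolio with $p_i=p$ for all $i$, where the parameter values ($p=0.05$, $R=0.4$) are assumptions chosen for illustration. Scaling $n$ by 100 shrinks both bounds by a factor of 10, matching $O(1/\sqrt{n})$.

```python
import math

def cdo_bounds(n, p, R, k=3.0):
    """Uniform and non-uniform stop-loss bounds for the standardized CDO loss,
    specialized to a homogeneous portfolio (p_i = p for all i).
    Note: R cancels in delta_n, so the bounds depend only on n and p."""
    s2 = ((1 - R) / n) ** 2 * n * p * (1 - p)
    s = math.sqrt(s2)
    delta = ((1 - R) / (n * s)) ** 3 * n * p * (1 - p) * (1 - 2 * p + 2 * p * p)
    return 10.24 * delta, 28.452 * delta / (1 + k)

u100, nu100 = cdo_bounds(100, 0.05, 0.4)
u10000, nu10000 = cdo_bounds(10000, 0.05, 0.4)
```

Here `u100 / u10000` equals exactly $\sqrt{10000/100}=10$, and the non-uniform bound is smaller than the uniform one once $k\ge 3$, as expected.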

    As previously discussed, the collective risk model represents total claims as $\xi_1+\xi_2+\cdots+\xi_N$, where the $\xi_i$'s are independent and identically distributed, each $\xi_i$ represents the amount of the $i$th claim, and $N$ represents either the number of claims or the number of claim payments. Let us explain more precisely why we regard $N$ as a random variable: when selling insurance that permits individuals to make multiple claims within a given period, such as accident insurance, we face uncertainty regarding the exact number of claims. To address this uncertainty, we model it with a random variable $N$. Commonly used distributions for the number of claims include the binomial, Poisson, and negative binomial distributions.

    Before applying our results, we denote, for $i=1,2,\ldots$,

    $$X_i=\xi_i-\mu,\quad Y_i=\frac{X_i}{\sigma\sqrt{EN}},\quad\text{and}\quad W_N=\sum_{i=1}^NY_i,$$

    where $\mu=E\xi_i$ and $\sigma^2=\mathrm{Var}(\xi_i)$. To align our results more closely with realistic scenarios, we consider $\xi$ as the claim amount in a health insurance contract. It is common to model the random index $N$, representing the number of claims, using a Poisson distribution, as it is well-suited for rare events (e.g., the likelihood of a single person frequently claiming health insurance is relatively low). Referring to Remark 1.1, we observe that if $N\sim\mathrm{Poi}(\lambda)$, then

    $$\left|E(W_N-k)^+-E(Z-k)^+\right|\le\frac{1}{\sqrt{\lambda}}\left[\sqrt{\frac{2}{\pi}}+\frac{10.24\,\gamma_1}{\sigma_1^3}\right]$$

    and for $k\ge 3$,

    $$\left|E(W_N-k)^+-E(Z-k)^+\right|\le\frac{1}{(1+k)\sqrt{\lambda}}\left[1.38+\frac{28.46\,\gamma_1}{\sigma_1^3}+\frac{1.68\,\gamma_1}{\lambda\sigma_1^3}\right].$$

    As noted in Remark 1.1, these bounds converge to zero at a rate of $O\left(\frac{1}{\sqrt{\lambda}}\right)$ as $\lambda\to\infty$.
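A simulation makes the guarantee tangible (our own illustration, not from the paper). The claim law Exp(1) and the rate $\lambda=50$ are assumptions for illustration; for Exp(1) claims, $\mu=\sigma=1$ and $\gamma_1=E|\xi-1|^3=12/e-2\approx 2.415$. The simulated stop-loss gap at $k=1$ sits far below the uniform bound.

```python
import math
import random

random.seed(3)
lam = 50.0
mu, sigma = 1.0, 1.0              # Exp(1) claims: mean 1, variance 1
gamma1 = 12 / math.e - 2          # E|xi - 1|^3 for Exp(1), about 2.415

uniform_bound = (math.sqrt(2 / math.pi) + 10.24 * gamma1 / sigma ** 3) / math.sqrt(lam)

def sample_poisson(l):
    L, k, p = math.exp(-l), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def EZ_call(k):
    phi = math.exp(-k * k / 2) / math.sqrt(2 * math.pi)
    Phi = 0.5 * (1 + math.erf(k / math.sqrt(2)))
    return phi - k * (1 - Phi)

k = 1.0
m = 20000
acc = 0.0
for _ in range(m):
    N = sample_poisson(lam)
    S = sum(random.expovariate(1.0) for _ in range(N))
    WN = (S - N * mu) / (sigma * math.sqrt(lam))   # standardized collective risk sum
    acc += max(WN - k, 0.0)
gap = abs(acc / m - EZ_call(k))
```

The bound is conservative by design; what matters for applications is its explicit $1/\sqrt{\lambda}$ decay as portfolios grow.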

    In this work, we studied the uniform and non-uniform bounds for the stop-loss distance between a random sum $W=X_1+X_2+\cdots+X_N$ and the standard normal random variable $Z$. The stop-loss distance, expressed as $|Eh_k(W)-Eh_k(Z)|$, with $h_k(x)=(x-k)^+$, serves as a crucial measure of the deviation between the two random variables.

    Our approach combined Stein's method with concentration inequalities, leveraging their strengths to derive precise bounds. By focusing on the case where $X_1,X_2,\ldots,X_N$ are independent random variables and $N$ is a non-negative integer-valued random variable independent of the $X_j$'s, we provided detailed insights into the behavior of the stop-loss distance under various conditions.

    In particular, for the i.i.d. case, our results demonstrated improvements over Döbler's results in specific scenarios, as illustrated with examples in the paper. Furthermore, our investigation into non-uniform bounds represents a novel contribution, offering a detailed characterization of the stop-loss distance for random sums compared to the normal distribution.

    Lastly, we explored practical applications of our results, emphasizing their relevance to financial and insurance contexts, such as collateralized debt obligations (CDOs) and the collective risk model. These applications highlight the utility of our findings in understanding and managing risk in real-world scenarios.

    Punyapat Kammoo: Visualization, writing – original draft, funding acquisition, formal analysis, validation, writing – review & editing; Kritsana Neammanee: Conceptualization, methodology, project administration, visualization, writing – original draft, Formal analysis, Validation, Writing – review & editing; Kittipong Laipaporn: Conceptualization, methodology, project administration, funding acquisition, formal analysis, validation, writing – review & editing. All authors have read and approved the final version of the manuscript for publication.

    The authors declare that they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This work was supported by the Development and Promotion of Science and Technology Talents Project (DPST).

    All authors declare no conflicts of interest in this paper.



    [1] S. D. Promislow, Fundamentals of actuarial mathematics, 2 Eds., John Wiley & Sons, Ltd, 2006. http://dx.doi.org/10.1002/9781119971528
    [2] N. Kolev, D. Paiva, Multinomial model for random sums, Insurance Math. Econom., 37 (2005), 494–504. http://dx.doi.org/10.1016/j.insmatheco.2005.05.005
    [3] N. Kolev, D. Paiva, Random sums of exchangeable variables and actuarial applications, Insurance Math. Econom., 42 (2008), 147–153. http://dx.doi.org/10.1016/j.insmatheco.2007.01.010
    [4] H. Robbins, The asymptotic distribution of the sum of a random number of random variables, Bull. Amer. Math. Soc., 54 (1948), 1151–1161. https://doi.org/10.1090/S0002-9904-1948-09142-X
    [5] B. V. Gnedenko, V. Y. Korolev, Random summation: Limit theorems and applications, Boca Raton: CRC Press, 1996. http://dx.doi.org/10.1201/9781003067894
    [6] F. Daly, Gamma, Gaussian and Poisson approximations for random sums using size-biased and generalized zero-biased couplings, Scand. Actuar. J., 2022 (2022), 471–487. https://doi.org/10.1080/03461238.2021.1984293
    [7] C. Döbler, New Berry-Esseen and Wasserstein bounds in the CLT for non-randomly centered random sums by probabilistic methods, ALEA, Lat. Am. J. Probab. Math. Stat., 12 (2015), 863–902.
    [8] J. K. Sunklodas, L1 bounds for asymptotic normality of random sums of independent random variables, Lith. Math. J., 53 (2013), 438–447. https://doi.org/10.1007/S10986-013-9220-X
    [9] I. G. Shevtsova, Convergence rate estimates in the global CLT for compound mixed Poisson distributions, Theory Prob. Appl., 63 (2018), 72–93. https://doi.org/10.1137/S0040585X97T988927
    [10] I. G. Shevtsova, A moment inequality with application to convergence rate estimates in the global CLT for Poisson-binomial random sums, Theory Prob. Appl., 62 (2018), 278–294. https://doi.org/10.1137/S0040585X97T988605
    [11] J. K. Sunklodas, On the normal approximation of a binomial random sum, Lith. Math. J., 54 (2014), 356–365. http://dx.doi.org/10.1007/s10986-014-9248-6
    [12] J. K. Sunklodas, On the normal approximation of a negative binomial random sum, Lith. Math. J., 55 (2015), 150–158. https://doi.org/10.1007/s10986-015-9271-2
    [13] H. You, X. Zhou, The Pareto-optimal stop-loss reinsurance, Math. Probl. Eng., 2021 (2021), 2839726. https://doi.org/10.1155/2021/2839726
    [14] X. Fang, Y. Koike, High-dimensional central limit theorems by Stein's method, Ann. Appl. Probab., 31 (2021), 1660–1686. https://doi.org/10.1214/20-AAP1629
    [15] S. Bhar, R. Mukherjee, P. Patil, Kac's central limit theorem by Stein's method, Stat. Probab. Lett., 219 (2025), 110329. https://doi.org/10.1016/j.spl.2024.110329
    [16] C. Stein, A bound for the error in the normal approximation to the distribution of a sum of dependent random variables, In: Proceedings of the sixth Berkeley symposium on mathematical statistics and probability, 2 (1972), 583–602.
    [17] C. Stein, Approximate computation of expectations, In: Institute of mathematical statistics lecture notes, 7 (1986), 161–164.
    [18] L. H. Y. Chen, L. Goldstein, Q. M. Shao, Normal approximation by Stein's method, 1 Eds., Heidelberg: Springer Berlin, 2010. https://doi.org/10.1007/978-3-642-15007-4
    [19] S. Jongpreechaharn, K. Neammanee, Non-uniform bound on normal approximation for call function of locally dependent random variables, Int. J. Math. Comput. Sci., 17 (2022), 207–216.
    [20] S. Jongpreechaharn, K. Neammanee, Normal approximation for call function via Stein's method, Comm. Statist. Theory Methods, 48 (2019), 3498–3517. https://doi.org/10.1080/03610926.2018.1476716
    [21] S. Jongpreechaharn, K. Neammanee, A constant of approximation of the expectation of call function by the expectation of normal distributed random variable, In: Proceedings of the 11th conference on science and technology for youth year 2016, 2016, 112–121.
    [22] L. H. Y. Chen, Q. M. Shao, A non-uniform Berry–Esseen bound via Stein's method, Probab. Theory Relat. Fields, 120 (2001), 236–254. https://doi.org/10.1007/PL00008782
    [23] L. H. Y. Chen, Stein's method: Some perspectives with applications, In: Probability towards 2000, New York: Springer, 1998, 97–122. https://doi.org/10.1007/978-1-4612-2224-8_6
    [24] N. Chaidee, M. Tuntapthai, Berry-Esseen bounds for random sums of non-i.i.d. random variables, Int. Math. Forum, 4 (2009), 1281–1288.
    [25] L. H. Y. Chen, Stein's method of normal approximation: Some recollections and reflections, Ann. Statist., 49 (2021), 1850–1863. https://doi.org/10.1214/21-aos2083
    [26] G. Auld, K. Neammanee, Explicit constants in the nonuniform local limit theorem for Poisson binomial random variables, J. Inequal. Appl., 2024 (2024), 67. https://doi.org/10.1186/s13660-024-03143-z
    [27] N. E. Karoui, Y. Jiao, Stein's method and zero bias transformation for CDO tranche pricing, Finance Stoch., 13 (2009), 151–180. https://doi.org/10.1007/s00780-008-0084-6
    [28] P. Glasserman, S. Suchintabandid, Correlations for CDO pricing, J. Bank. Finance, 31 (2007), 1375–1398. https://doi.org/10.1016/j.jbankfin.2006.10.018
    [29] J. Hull, A. White, Valuation of a CDO and an nth to default CDS without Monte Carlo simulation, J. Deriv., 12 (2004), 8–23. http://dx.doi.org/10.3905/jod.2004.450964
    [30] N. E. Karoui, Y. Jiao, D. Kurtz, Gaussian and Poisson approximation: Applications to CDOs tranche pricing, J. Comput. Finance, 12 (2008), 31–58. http://dx.doi.org/10.21314/JCF.2008.180
    [31] K. Neammanee, N. Yonghint, Poisson approximation for call function via Stein-Chen method, Bull. Malays. Math. Sci. Soc., 43 (2020), 1135–1152. https://doi.org/10.1007/s40840-019-00729-5
    [32] N. Yonghint, K. Neammanee, Refinement on Poisson approximation of CDOs, Sci. Asia, 47 (2021), 388–392. http://dx.doi.org/10.2306/scienceasia1513-1874.2021.041
  • © 2025 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)