Research article

Complete convergence and complete moment convergence for maximal weighted sums of extended negatively dependent random variables under sub-linear expectations

  • Received: 10 April 2023 Revised: 17 May 2023 Accepted: 28 May 2023 Published: 09 June 2023
  • MSC : 60F05, 60F15

  • In this article, we study the complete convergence and complete moment convergence for maximal weighted sums of extended negatively dependent random variables under sub-linear expectations. We also give some sufficient assumptions for the convergence. Moreover, we get the Marcinkiewicz-Zygmund type strong law of large numbers for weighted sums of extended negatively dependent random variables. The results obtained in this paper generalize the relevant ones in probability space.

    Citation: Mingzhou Xu. Complete convergence and complete moment convergence for maximal weighted sums of extended negatively dependent random variables under sub-linear expectations[J]. AIMS Mathematics, 2023, 8(8): 19442-19460. doi: 10.3934/math.2023992




    Peng [1,2] introduced the fundamental concepts of sub-linear expectation spaces to describe uncertainty in probability. Stimulated by the works of Peng [1,2], many scholars have sought to establish, under sub-linear expectations, results analogous to those in classical probability space. Zhang [3,4] proved exponential inequalities and Rosenthal's inequality under sub-linear expectations. Xu et al. [5] and Xu and Kong [6] obtained complete convergence and complete moment convergence of weighted sums of negatively dependent random variables under sub-linear expectations. For more limit theorems under sub-linear expectations, the reader may refer to Zhang [7], Xu and Zhang [8,9], Wu and Jiang [10], Zhang and Lin [11], Zhong and Wu [12], Hu et al. [13], Gao and Xu [14], Kuczmaszewska [15], Xu and Cheng [16,17], Zhang [18], Chen [19], Zhang [20], Chen and Wu [21], and references therein.

    In classical probability space, Yan [22] established complete convergence and complete moment convergence for maximal weighted sums of extended negatively dependent random variables. For references on complete moment convergence and complete convergence in probability space, the reader may refer to Hsu and Robbins [23], Chow [24], Hosseini and Nezakati [25], Meng et al. [26] and references therein. Stimulated by the works of Yan [22], Xu et al. [5], and Xu and Kong [6], we prove complete convergence and complete moment convergence for maximal weighted sums of extended negatively dependent random variables under sub-linear expectations, together with the corresponding Marcinkiewicz-Zygmund strong law of large numbers, which extends the corresponding results in Yan [22]. Another main contribution is a Rosenthal-type inequality for maximal sums of extended negatively dependent random variables, provided in Lemma 2.3.

    We organize the remainder of this article as follows. In Section 2 we present the relevant basic notions, concepts and properties, and state the necessary lemmas under sub-linear expectations. In Section 3 we give our main results, Theorems 3.1–3.3, whose proofs are presented in Section 4.

    Hereafter, we use notation similar to that in the works of Peng [2] and Zhang [4]. Let $(\Omega,\mathcal{F})$ be a given measurable space. Suppose that $\mathcal{H}$ is a set of random variables on $(\Omega,\mathcal{F})$ such that $\varphi(X_1,\ldots,X_n)\in\mathcal{H}$ whenever $X_1,\ldots,X_n\in\mathcal{H}$ and $\varphi\in C_{l,\mathrm{Lip}}(\mathbb{R}^n)$, where $C_{l,\mathrm{Lip}}(\mathbb{R}^n)$ denotes the set of functions $\varphi$ satisfying

    $$|\varphi(x)-\varphi(y)|\le C\,(1+|x|^m+|y|^m)\,|x-y|,\quad x,y\in\mathbb{R}^n,$$

    for some $C>0$ and $m\in\mathbb{N}$ depending on $\varphi$.

    Definition 2.1. A sub-linear expectation $\mathbb{E}$ on $\mathcal{H}$ is a functional $\mathbb{E}:\mathcal{H}\to\bar{\mathbb{R}}:=[-\infty,\infty]$ satisfying the following: for every $X,Y\in\mathcal{H}$,

    (a) $X\ge Y$ implies $\mathbb{E}[X]\ge\mathbb{E}[Y]$;

    (b) $\mathbb{E}[c]=c$ for every $c\in\mathbb{R}$;

    (c) $\mathbb{E}[\lambda X]=\lambda\mathbb{E}[X]$ for every $\lambda\ge 0$;

    (d) $\mathbb{E}[X+Y]\le\mathbb{E}[X]+\mathbb{E}[Y]$ whenever $\mathbb{E}[X]+\mathbb{E}[Y]$ is not of the form $\infty-\infty$ or $-\infty+\infty$.
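    Properties (a)–(d) are exactly those of an upper expectation over a family of linear expectations. The following minimal numerical sketch (not from the paper; the finite prior set and payoff vectors are illustrative assumptions) checks the four properties for $\mathbb{E}[X]=\sup_{P\in\mathcal{P}}E_P[X]$ on a three-point sample space:

```python
import numpy as np

def sublinear_E(X, priors):
    """Upper expectation of a payoff vector X over a finite set of priors."""
    return max(float(np.dot(p, X)) for p in priors)

# illustrative priors on a 3-point sample space (assumptions, not from the paper)
priors = [np.array([0.5, 0.5, 0.0]),
          np.array([0.2, 0.3, 0.5])]
X = np.array([1.0, 2.0, 3.0])
Y = np.array([0.0, 1.0, 5.0])

# (a) monotonicity: X + 1 >= X pointwise
assert sublinear_E(X, priors) <= sublinear_E(X + 1.0, priors)
# (b) constant preservation
assert abs(sublinear_E(np.full(3, 4.0), priors) - 4.0) < 1e-12
# (c) positive homogeneity
assert abs(sublinear_E(2.0 * X, priors) - 2.0 * sublinear_E(X, priors)) < 1e-12
# (d) sub-additivity
assert sublinear_E(X + Y, priors) <= sublinear_E(X, priors) + sublinear_E(Y, priors) + 1e-12
```

    The supremum representation is the canonical example of a sub-linear expectation; linearity is recovered exactly when the prior set is a singleton.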

    Definition 2.2. We say that $\{X_n;n\ge 1\}$ is stochastically dominated by a random variable $X$ under $(\Omega,\mathcal{H},\mathbb{E})$ if there exists a constant $C$ such that, for all $n\ge 1$ and all non-negative $h\in C_{l,\mathrm{Lip}}(\mathbb{R})$, $\mathbb{E}(h(X_n))\le C\,\mathbb{E}(h(X))$.

    A set function $V:\mathcal{F}\to[0,1]$ is called a capacity if

    (a) $V(\emptyset)=0$, $V(\Omega)=1$;

    (b) $V(A)\le V(B)$ whenever $A\subset B$, $A,B\in\mathcal{F}$.

    Furthermore, a capacity $V$ is continuous if it obeys

    (c) $A_n\uparrow A$ yields $V(A_n)\uparrow V(A)$;

    (d) $A_n\downarrow A$ yields $V(A_n)\downarrow V(A)$.

    $V$ is said to be sub-additive if $V(A\cup B)\le V(A)+V(B)$ for all $A,B\in\mathcal{F}$.

    Under $(\Omega,\mathcal{H},\mathbb{E})$, set $V(A):=\inf\{\mathbb{E}[\xi]:I_A\le\xi,\ \xi\in\mathcal{H}\}$, $A\in\mathcal{F}$ (cf. Zhang [3]). $V$ is a sub-additive capacity. Write

    $$C_V(X):=\int_0^{\infty}V(X>x)\,dx+\int_{-\infty}^{0}\big(V(X>x)-1\big)\,dx.$$
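    The Choquet integral $C_V$ can be approximated numerically when $V$ is the upper probability $V(A)=\sup_P P(A)$ over a finite set of priors. A hedged sketch (the discrete priors, payoff vector, and grid resolution below are illustrative assumptions, not part of the paper):

```python
import numpy as np

def upper_V(indicator, priors):
    """Capacity V(A) = max over priors P of P(A), for an event given as a 0/1 vector."""
    return max(float(np.dot(p, indicator)) for p in priors)

def choquet_CV(X, priors, lo=-3.0, hi=3.0, steps=60000):
    """Riemann-sum approximation of C_V(X) = int_0^inf V(X>x) dx + int_{-inf}^0 (V(X>x)-1) dx."""
    xs = np.linspace(lo, hi, steps, endpoint=False)
    dx = (hi - lo) / steps
    vals = np.array([upper_V((X > x).astype(float), priors) for x in xs])
    pos = xs >= 0.0
    return float(np.sum(vals[pos]) * dx + np.sum(vals[~pos] - 1.0) * dx)

priors = [np.array([0.5, 0.5, 0.0]), np.array([0.2, 0.3, 0.5])]
X = np.array([0.0, 1.0, 2.0])

cv = choquet_CV(X, priors)
# For this X: V(X>x) = 0.8 on [0,1) and 0.5 on [1,2), so C_V(X) = 1.3,
# which here coincides with the upper expectation max_P E_P[X] = 1.3.
assert abs(cv - 1.3) < 1e-2
```

    For comonotone-additive capacities of this upper-probability type, $C_V$ agrees with the upper expectation on simple non-negative random variables, as the check illustrates.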

    Under the sub-linear expectation space $(\Omega,\mathcal{H},\mathbb{E})$, random variables $\{X_n;n\ge 1\}$ are called upper (resp. lower) extended negatively dependent if there is a constant $K\ge 1$ such that

    $$\mathbb{E}\Big[\prod_{i=1}^{n}g_i(X_i)\Big]\le K\prod_{i=1}^{n}\mathbb{E}[g_i(X_i)],\quad n\ge 1,$$

    whenever the non-negative functions $g_i\in C_{b,\mathrm{Lip}}(\mathbb{R})$, $i=1,2,\ldots$, are all non-decreasing (resp. all non-increasing) (cf. Definition 2.4 of Zhang [18]). They are called extended negatively dependent (END) if they are both upper extended negatively dependent and lower extended negatively dependent.

    Suppose $X_1$ and $X_2$ are two $n$-dimensional random vectors defined on $(\Omega_1,\mathcal{H}_1,\mathbb{E}_1)$ and $(\Omega_2,\mathcal{H}_2,\mathbb{E}_2)$, respectively. They are said to be identically distributed if for every $\psi\in C_{l,\mathrm{Lip}}(\mathbb{R}^n)$,

    $$\mathbb{E}_1[\psi(X_1)]=\mathbb{E}_2[\psi(X_2)].$$

    The sequence $\{X_n;n\ge 1\}$ is said to be identically distributed if, for every $i\ge 1$, $X_i$ and $X_1$ are identically distributed.

    Throughout this paper, we suppose that $\mathbb{E}$ is countably sub-additive, i.e., $\mathbb{E}(X)\le\sum_{n=1}^{\infty}\mathbb{E}(X_n)$ whenever $X\le\sum_{n=1}^{\infty}X_n$, with $X,X_n\in\mathcal{H}$, $X\ge 0$, $X_n\ge 0$, $n=1,2,\ldots$. Write $S_n=\sum_{i=1}^{n}X_i$, $n\ge 1$. Let $C$ denote a positive constant which may change from line to line. $I(A)$ or $I_A$ denotes the indicator function of $A$. The symbol $a_x\approx b_x$ means that there exist two positive constants $C_1,C_2$ such that $C_1|b_x|\le|a_x|\le C_2|b_x|$; $x^{+}$ stands for $\max\{x,0\}$ and $x^{-}=(-x)^{+}$, for $x\in\mathbb{R}$.

    As shown in Zhang [18], if $X_1,X_2,\ldots,X_n$ are extended negatively dependent random variables and $f_1(x),f_2(x),\ldots,f_n(x)\in C_{l,\mathrm{Lip}}(\mathbb{R})$ are all non-increasing (or all non-decreasing) functions, then $f_1(X_1),f_2(X_2),\ldots,f_n(X_n)$ are extended negatively dependent random variables.

    We cite the following results under sub-linear expectations.

    Lemma 2.1. (Cf. Lemma 4.5 (iii) of Zhang [3]) If $\mathbb{E}$ is countably sub-additive under $(\Omega,\mathcal{H},\mathbb{E})$, then for $X\in\mathcal{H}$,

    $$\mathbb{E}|X|\le C_V(|X|).$$

    Lemma 2.2. Assume that $p>1$ and $\{X_n;n\ge 1\}$ is a sequence of upper extended negatively dependent random variables with $\mathbb{E}[X_k]\le 0$, $k\ge 1$, under $(\Omega,\mathcal{H},\mathbb{E})$. Then for every $n\ge 1$ there exists a positive constant $C=C(p)$ depending on $p$ such that for $p\ge 2$,

    $$\mathbb{E}\Big[\Big(\Big(\sum_{j=1}^{n}X_j\Big)^{+}\Big)^{p}\Big]\le C_V\Big[\Big(\Big(\sum_{j=1}^{n}X_j\Big)^{+}\Big)^{p}\Big]\le C\Big\{\sum_{i=1}^{n}C_V[(X_i^{+})^{p}]+\Big(\sum_{i=1}^{n}\mathbb{E}[X_i^{2}]\Big)^{p/2}\Big\}. \tag{2.1}$$

    By (2.1) of Lemma 2.2 and an argument similar to the proof of Lemma 2.4 of Xu et al. [5], we can obtain the following.

    Lemma 2.3. Assume that $p>1$ and $\{X_n;n\ge 1\}$ is a sequence of upper extended negatively dependent random variables with $\mathbb{E}[X_k]\le 0$, $k\ge 1$, under $(\Omega,\mathcal{H},\mathbb{E})$. Then for every $n\ge 1$ there exists a positive constant $C=C(p)$ depending on $p$ such that for $p\ge 2$,

    $$\mathbb{E}\Big[\max_{1\le i\le n}\Big(\Big(\sum_{j=1}^{i}X_j\Big)^{+}\Big)^{p}\Big]\le C(\log n)^{p}\Big\{\sum_{i=1}^{n}C_V[(X_i^{+})^{p}]+\Big(\sum_{i=1}^{n}\mathbb{E}[X_i^{2}]\Big)^{p/2}\Big\}. \tag{2.2}$$
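    Lemma 2.3's maximal inequality can be sanity-checked by Monte Carlo in the classical independent case, where $C_V$ reduces to the ordinary expectation. The normal distribution, sample sizes, and seed below are illustrative assumptions; the check only confirms that the stated bound holds with $C=1$ in this particular instance:

```python
import numpy as np

rng = np.random.default_rng(42)
n, p, reps = 1000, 2.0, 2000

X = rng.standard_normal((reps, n))     # centered increments, so E[X_k] <= 0 holds
S = np.cumsum(X, axis=1)
# Monte Carlo estimate of E[max_{1<=i<=n} ((S_i)^+)^p]
lhs = np.mean(np.maximum(S, 0.0).max(axis=1) ** p)

# classical case: C_V[(X^+)^p] = E[(X^+)^2] = 1/2 for standard normals, E[X^2] = 1
rhs = np.log(n) ** p * (n * 0.5 + (n * 1.0) ** (p / 2))

assert lhs < rhs  # the (log n)^p Rosenthal-type bound holds comfortably here
```

    By Doob's inequality the left side is at most $4n$ for $p=2$, while the right side carries the extra $(\log n)^p$ factor, so the bound is far from tight in this independent special case.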

    We first give two lemmas.

    Lemma 2.4. Suppose that $0<\alpha<2$, $\gamma>0$, and $\{a_{ni},1\le i\le n,n\ge 1\}$ is an array of real numbers satisfying

    $$\sum_{i=1}^{n}|a_{ni}|^{\alpha}=O(n). \tag{2.3}$$

    Assume that $\{X_{ni},i\ge 1,n\ge 1\}$ is stochastically dominated by a random variable $X$ with $C_V\{|X|^{\alpha}\}<\infty$. Moreover, suppose that $\mathbb{E}(a_{ni}X_{ni})=0$ for $1<\alpha<2$, and let $b_n=n^{1/\alpha}(\log n)^{3/\gamma}$. Then

    $$\frac{1}{b_n}\max_{1\le j\le n}\sum_{i=1}^{j}\mathbb{E}(Y_{ni})\le C(\log n)^{-3\alpha/\gamma}C_V\{|X|^{\alpha}\}\to 0,\quad\text{as } n\to\infty, \tag{2.4}$$

    where $Y_{ni}=a_{ni}X_{ni}I(|a_{ni}X_{ni}|\le b_n)+b_nI(a_{ni}X_{ni}>b_n)-b_nI(a_{ni}X_{ni}<-b_n)$ for each $1\le i\le n$, $n\ge 1$.
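    The truncation $Y_{ni}$ used throughout the proofs is just $a_{ni}X_{ni}$ clipped to $[-b_n,b_n]$, and the remainder $Z_{ni}=a_{ni}X_{ni}-Y_{ni}$ is supported on $\{|a_{ni}X_{ni}|>b_n\}$. A small sketch (the sample values are illustrative, not from the paper) verifying both identities numerically:

```python
import numpy as np

def truncate(ax, b):
    """Y = ax*1{|ax|<=b} + b*1{ax>b} - b*1{ax<-b}, i.e. ax clipped to [-b, b]."""
    return ax * (np.abs(ax) <= b) + b * (ax > b) - b * (ax < -b)

ax = np.array([-5.0, -1.0, 0.5, 2.0, 7.0])  # illustrative values of a_ni * X_ni
b = 2.0                                      # truncation level b_n

Y = truncate(ax, b)
Z = ax - Y

# the three-indicator truncation coincides with clipping to [-b, b]
assert np.allclose(Y, np.clip(ax, -b, b))
# the remainder is dominated by |ax| on the event {|ax| > b} and vanishes otherwise
assert np.all(np.abs(Z) <= np.abs(ax) * (np.abs(ax) > b))
```

    The second assertion is exactly the bound $|Z_{ni}|\le|a_{ni}X_{ni}|I\{|a_{ni}X_{ni}|>b_n\}$ used in the proof of Lemma 2.4.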

    Lemma 2.5. Assume that $\{a_{ni},1\le i\le n,n\ge 1\}$ is an array of real numbers satisfying (2.3) and $X\in\mathcal{H}$ under the sub-linear expectation space $(\Omega,\mathcal{H},\mathbb{E})$. Suppose $b_n$ is as in Lemma 2.4.

    (i) If $p>\max\{\alpha,\gamma(\beta+1)/3\}$ for some $\beta$, then

    $$\sum_{n=2}^{\infty}\frac{(\log n)^{\beta}}{n\,b_n^{p}}\sum_{i=1}^{n}\int_{0}^{b_n^{p}}V\{|a_{ni}X|^{p}>x\}\,dx\le\begin{cases}C\,C_V\{|X|^{\alpha}\}, & \text{for }\alpha>\gamma(\beta+1)/3,\\ C\,C_V\{|X|^{\alpha}\log(|X|+1)\}, & \text{for }\alpha=\gamma(\beta+1)/3,\\ C\,C_V\{|X|^{\gamma(\beta+1)/3}\}, & \text{for }\alpha<\gamma(\beta+1)/3.\end{cases}$$

    (ii) If $p=\alpha$ and $\beta=2$, then

    $$\sum_{n=2}^{\infty}\frac{(\log n)^{2}}{n\,b_n^{\alpha}}\sum_{i=1}^{n}\int_{0}^{b_n^{\alpha}}V\{|a_{ni}X|^{\alpha}>x\}\,dx\le\begin{cases}C\,C_V\{|X|^{\alpha}\}, & \text{for }\alpha>\gamma,\\ C\,C_V\{|X|^{\alpha}\log(1+|X|)\}, & \text{for }\alpha=\gamma,\\ C\,C_V\{|X|^{\gamma}\}, & \text{for }\alpha<\gamma.\end{cases}$$

    Below are our main results.

    Theorem 3.1. Suppose $\{X_{ni},i\ge 1,n\ge 1\}$ is an array of rowwise END random variables which is stochastically dominated by $X$ under $(\Omega,\mathcal{H},\mathbb{E})$. Assume that for some $0<\alpha<2$, $0<\gamma<2$, $\{a_{ni},1\le i\le n,n\ge 1\}$ is an array of real numbers, all non-negative or all non-positive, satisfying (2.3), and $b_n$ is as in Lemma 2.5. Moreover, suppose that $\mathbb{E}(a_{ni}X_{ni})=0$ for $1<\alpha<2$. If

    $$\begin{cases}C_V\{|X|^{\alpha}\}<\infty, & \text{for }\alpha>\gamma,\\ C_V\{|X|^{\alpha}\log(1+|X|)\}<\infty, & \text{for }\alpha=\gamma,\\ C_V\{|X|^{\gamma}\}<\infty, & \text{for }\alpha<\gamma,\end{cases} \tag{3.1}$$

    then for all $\varepsilon>0$,

    $$\sum_{n=2}^{\infty}\frac{1}{n}V\Big(\max_{1\le j\le n}\sum_{i=1}^{j}a_{ni}X_{ni}>\varepsilon b_n\Big)<\infty. \tag{3.2}$$

    Similarly, with the condition $\mathbb{E}(-a_{ni}X_{ni})=0$ for $1<\alpha<2$ in place of $\mathbb{E}(a_{ni}X_{ni})=0$ for $1<\alpha<2$, we have for all $\varepsilon>0$,

    $$\sum_{n=2}^{\infty}\frac{1}{n}V\Big(\max_{1\le j\le n}\sum_{i=1}^{j}(-a_{ni}X_{ni})>\varepsilon b_n\Big)<\infty. \tag{3.3}$$

    Moreover, suppose that $\mathbb{E}(X_{ni})=\mathbb{E}(-X_{ni})=0$ for $1<\alpha<2$. Then (3.1) implies

    $$\sum_{n=2}^{\infty}\frac{1}{n}V\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j}a_{ni}X_{ni}\Big|>\varepsilon b_n\Big)<\infty,\quad\text{for all }\varepsilon>0. \tag{3.4}$$

    Theorem 3.2. Suppose $\{X_{ni},i\ge 1,n\ge 1\}$ is an array of rowwise END random variables which is stochastically dominated by $X$ under $(\Omega,\mathcal{H},\mathbb{E})$. Assume that for some $0<\alpha<2$, $0<\gamma<2$, $\{a_{ni},1\le i\le n,n\ge 1\}$ is an array of real numbers, all non-negative or all non-positive, satisfying (2.3), and $b_n$ is as in Lemma 2.5. Suppose (3.1) holds. Moreover, suppose that $\mathbb{E}(a_{ni}X_{ni})=0$ for $1<\alpha<2$. Then for $0<\tau<\alpha$,

    $$\sum_{n=2}^{\infty}\frac{1}{n}C_V\Big\{\Big(\Big(\frac{1}{b_n}\max_{1\le j\le n}\sum_{i=1}^{j}a_{ni}X_{ni}-\varepsilon\Big)^{+}\Big)^{\tau}\Big\}<\infty,\quad\text{for all }\varepsilon>0. \tag{3.5}$$

    Similarly, with the condition $\mathbb{E}(-a_{ni}X_{ni})=0$ for $1<\alpha<2$ in place of $\mathbb{E}(a_{ni}X_{ni})=0$ for $1<\alpha<2$, we have for $0<\tau<\alpha$,

    $$\sum_{n=2}^{\infty}\frac{1}{n}C_V\Big\{\Big(\Big(\frac{1}{b_n}\max_{1\le j\le n}\sum_{i=1}^{j}(-a_{ni}X_{ni})-\varepsilon\Big)^{+}\Big)^{\tau}\Big\}<\infty,\quad\text{for all }\varepsilon>0. \tag{3.6}$$

    Moreover, if $\mathbb{E}(X_{ni})=\mathbb{E}(-X_{ni})=0$ for $1<\alpha<2$, then

    $$\sum_{n=2}^{\infty}\frac{1}{n}C_V\Big\{\Big(\Big(\frac{1}{b_n}\max_{1\le j\le n}\Big|\sum_{i=1}^{j}a_{ni}X_{ni}\Big|-\varepsilon\Big)^{+}\Big)^{\tau}\Big\}<\infty,\quad\text{for all }\varepsilon>0. \tag{3.7}$$

    Remark 3.1. From (3.7) it follows that (3.4) holds. Hence complete moment convergence implies complete convergence.
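    The implication in Remark 3.1 rests on a pointwise Markov-type bound: for any $\delta>0$ and $\tau>0$, $I(x>\varepsilon+\delta)\le\big((x-\varepsilon)^{+}/\delta\big)^{\tau}$, so each capacity term in (3.4) (with $\varepsilon$ replaced by $\varepsilon+\delta$) is controlled by the corresponding $C_V$ term in (3.7). A quick numerical check of the pointwise bound (the grid and parameter values are illustrative):

```python
import numpy as np

eps, delta, tau = 1.0, 0.5, 0.7
x = np.linspace(0.0, 5.0, 1001)

lhs = (x > eps + delta).astype(float)             # indicator I(x > eps + delta)
rhs = (np.maximum(x - eps, 0.0) / delta) ** tau   # ((x - eps)^+ / delta)^tau

# if x > eps + delta then (x - eps)^+/delta > 1, so its tau-th power exceeds 1
assert np.all(lhs <= rhs + 1e-12)
```

    Applying this bound with $x=\frac{1}{b_n}\max_{1\le j\le n}|\sum_{i\le j}a_{ni}X_{ni}|$ and taking $C_V$ on both sides gives the claimed implication.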

    Theorem 3.3. Under the conditions of Theorem 3.1, and assuming in addition that the capacity $V$ induced by $\mathbb{E}$ is countably sub-additive, we have

    $$V\Big(\limsup_{n\to\infty}\frac{\sum_{i=1}^{n}a_{ni}X_{ni}}{b_n}\le 0\Big)=1,\qquad V\Big(\limsup_{n\to\infty}\frac{\sum_{i=1}^{n}(-a_{ni}X_{ni})}{b_n}\le 0\Big)=1.$$

    Moreover, if $\mathbb{E}(X_{ni})=\mathbb{E}(-X_{ni})=0$ for $1<\alpha<2$, then

    $$V\Big(\limsup_{n\to\infty}\frac{\big|\sum_{i=1}^{n}a_{ni}X_{ni}\big|}{b_n}=0\Big)=1.$$
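    In the special case where the sub-linear expectation reduces to a classical linear one (a single prior), Theorem 3.3 predicts $\max_{1\le j\le n}|\sum_{i\le j}X_i|/b_n\to 0$ for centered i.i.d. summands with weights $a_{ni}\equiv 1$. A Monte Carlo sanity check under this classical reduction (the Student-$t(3)$ distribution, the seed, and the sample sizes are illustrative assumptions; $t(3)$ has mean zero and finite moment of order $\alpha=3/2$):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, gamma = 1.5, 1.0               # exponents from the theorem, 0 < alpha, gamma < 2
ratios = []
for n in (10**3, 10**4, 10**5):
    X = rng.standard_t(df=3, size=n)  # centered, E|X|^alpha < infinity
    b_n = n ** (1 / alpha) * np.log(n) ** (3 / gamma)
    ratios.append(np.abs(np.cumsum(X)).max() / b_n)

# the normalized maximal partial sums shrink toward zero
assert ratios[-1] < 0.01
```

    The normalization $b_n=n^{1/\alpha}(\log n)^{3/\gamma}$ grows much faster than the $\sqrt{n\log\log n}$ scale of the partial-sum maxima here, so the ratio is already tiny at $n=10^5$.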

    Proof of Lemma 2.4. We will investigate (2.4) in two cases.

    Case I. 0<α1.

    By Markov's inequality under sub-linear expectations and $C_V\{|X|^{\alpha}\}<\infty$, we see that

    $$\begin{aligned}
    \frac{1}{b_n}\max_{1\le j\le n}\sum_{i=1}^{j}\mathbb{E}(Y_{ni})
    &\le\frac{1}{b_n}\max_{1\le j\le n}\sum_{i=1}^{j}\mathbb{E}(Y_{ni}^{+})
    \le\frac{1}{b_n}\sum_{i=1}^{n}\mathbb{E}(Y_{ni}^{+})
    \le\frac{C}{b_n}\sum_{i=1}^{n}\mathbb{E}((Y'_{ni})^{+})
    \le\frac{C}{b_n}\sum_{i=1}^{n}C_V\{(Y'_{ni})^{+}\}\\
    &\le\frac{C}{b_n}\sum_{i=1}^{n}\int_{0}^{b_n}V\{|a_{ni}X|>x\}\,dx
    \le\frac{C}{b_n}\sum_{i=1}^{n}\int_{0}^{b_n}\frac{\mathbb{E}|a_{ni}X|^{\alpha}}{x^{\alpha}}\,dx\\
    &=\frac{C}{b_n}\,b_n^{1-\alpha}\sum_{i=1}^{n}|a_{ni}|^{\alpha}C_V\{|X|^{\alpha}\}
    \le C(\log n)^{-3\alpha/\gamma}C_V\{|X|^{\alpha}\}\to 0,
    \end{aligned} \tag{4.1}$$

    where $Y'_{ni}\equiv a_{ni}XI\{|a_{ni}X|\le b_n\}+b_nI\{a_{ni}X>b_n\}-b_nI\{a_{ni}X<-b_n\}$.

    Case II. 1<α<2.

    For all $1\le i\le n$, $n\ge 1$, write

    $$Z_{ni}=a_{ni}X_{ni}-Y_{ni}=(a_{ni}X_{ni}-b_n)I\{a_{ni}X_{ni}>b_n\}+(a_{ni}X_{ni}+b_n)I\{a_{ni}X_{ni}<-b_n\}.$$

    Then $0<Z_{ni}=a_{ni}X_{ni}-b_n<a_{ni}X_{ni}$ when $a_{ni}X_{ni}>b_n$, and $a_{ni}X_{ni}<Z_{ni}=a_{ni}X_{ni}+b_n<0$ when $a_{ni}X_{ni}<-b_n$. Therefore, $|Z_{ni}|\le|a_{ni}X_{ni}|I\{|a_{ni}X_{ni}|>b_n\}$.

    From $\mathbb{E}(a_{ni}X_{ni})=0$ for $1<\alpha<2$ and $C_V\{|X|^{\alpha}\}<\infty$ it follows that

    $$\begin{aligned}
    \frac{1}{b_n}\max_{1\le j\le n}\sum_{i=1}^{j}\mathbb{E}(Y_{ni})
    &\le\frac{1}{b_n}\max_{1\le j\le n}\Big|\sum_{i=1}^{j}\mathbb{E}(Y_{ni})\Big|
    \le\frac{1}{b_n}\max_{1\le j\le n}\sum_{i=1}^{j}|\mathbb{E}(Y_{ni})|
    =\frac{1}{b_n}\max_{1\le j\le n}\sum_{i=1}^{j}|\mathbb{E}(Y_{ni})-\mathbb{E}(a_{ni}X_{ni})|\\
    &\le\frac{1}{b_n}\sum_{i=1}^{n}\mathbb{E}|Z_{ni}|
    \le\frac{C}{b_n}\sum_{i=1}^{n}\mathbb{E}|Z'_{ni}|
    \le\frac{C}{b_n}\sum_{i=1}^{n}C_V\{|Z'_{ni}|\}
    \le\frac{C}{b_n}\sum_{i=1}^{n}\int_{0}^{\infty}V\{|a_{ni}X|I\{|a_{ni}X|>b_n\}>x\}\,dx\\
    &\le\frac{C}{b_n}\sum_{i=1}^{n}\int_{0}^{\infty}V\{|a_{ni}X|^{\alpha}I\{|a_{ni}X|>b_n\}>x\,b_n^{\alpha-1}\}\,dx
    =\frac{C}{b_n\,b_n^{\alpha-1}}\sum_{i=1}^{n}\int_{0}^{\infty}V\{|a_{ni}X|^{\alpha}I\{|a_{ni}X|>b_n\}>x\}\,dx\\
    &\le\frac{C}{b_n^{\alpha}}\sum_{i=1}^{n}|a_{ni}|^{\alpha}C_V\{|X|^{\alpha}\}
    \le C(\log n)^{-3\alpha/\gamma}C_V\{|X|^{\alpha}\}\to 0,\quad\text{as }n\to\infty,
    \end{aligned} \tag{4.2}$$

    where $Z'_{ni}\equiv(a_{ni}X-b_n)I\{a_{ni}X>b_n\}+(a_{ni}X+b_n)I\{a_{ni}X<-b_n\}$. Combining (4.1) and (4.2) yields (2.4).

    Proof of Lemma 2.5. Without loss of generality, we suppose that

    $$\sum_{i=1}^{n}|a_{ni}|^{\alpha}\le n.$$

    Write

    $$I_{nj}:=\{1\le i\le n:\ n^{1/\alpha}(j+1)^{-1/\alpha}<|a_{ni}|\le n^{1/\alpha}j^{-1/\alpha}\}.$$

    Then by Lemma 2.4 of Yan [22], we have

    $$\begin{aligned}
    &\sum_{n=2}^{\infty}\frac{(\log n)^{\beta}}{n\,b_n^{p}}\sum_{i=1}^{n}\int_{0}^{b_n^{p}}V\{|a_{ni}X|^{p}>x\}\,dx
    =\sum_{n=2}^{\infty}n^{-1-p/\alpha}(\log n)^{\beta-3p/\gamma}\sum_{i=1}^{n}|a_{ni}|^{p}\int_{0}^{b_n^{p}/|a_{ni}|^{p}}V\{|X|^{p}>x\}\,dx\\
    &=\sum_{n=2}^{\infty}n^{-1-p/\alpha}(\log n)^{\beta-3p/\gamma}\sum_{j=1}^{\infty}\sum_{i\in I_{nj}}|a_{ni}|^{p}\int_{0}^{b_n^{p}/|a_{ni}|^{p}}V\{|X|^{p}>x\}\,dx\\
    &\le\sum_{n=2}^{\infty}n^{-1}(\log n)^{\beta-3p/\gamma}\sum_{j=1}^{\infty}\#I_{nj}\,j^{-p/\alpha}\int_{0}^{(j+1)^{p/\alpha}(\log n)^{3p/\gamma}}V\{|X|^{p}>x\}\,dx\\
    &=\sum_{n=2}^{\infty}n^{-1}(\log n)^{\beta-3p/\gamma}\sum_{j=1}^{\infty}\#I_{nj}\,j^{-p/\alpha}\sum_{k=0}^{j}\int_{k^{p/\alpha}(\log n)^{3p/\gamma}}^{(k+1)^{p/\alpha}(\log n)^{3p/\gamma}}V\{|X|^{p}>x\}\,dx\\
    &\le\sum_{n=2}^{\infty}n^{-1}(\log n)^{\beta-3p/\gamma}\sum_{j=1}^{\infty}\#I_{nj}\,j^{-p/\alpha}\int_{0}^{(\log n)^{3p/\gamma}}V\{|X|^{p}>x\}\,dx\\
    &\quad+\sum_{n=2}^{\infty}n^{-1}(\log n)^{\beta-3p/\gamma}\sum_{j=1}^{\infty}\#I_{nj}\,j^{-p/\alpha}\sum_{k=1}^{j}\int_{k^{p/\alpha}(\log n)^{3p/\gamma}}^{(k+1)^{p/\alpha}(\log n)^{3p/\gamma}}V\{|X|^{p}>x\}\,dx
    =:I_1+I_2.
    \end{aligned}$$

    First, for $I_1$: when $\alpha>\gamma(\beta+1)/3$, from Lemma 2.4 of Yan [22] and $p>\alpha$ it follows that

    $$\begin{aligned}
    I_1&\le C\sum_{n=2}^{\infty}n^{-1}(\log n)^{\beta-3p/\gamma}\int_{0}^{(\log n)^{3/\gamma}}V\{|X|>x\}x^{p-1}\,dx\\
    &\le C\sum_{n=2}^{\infty}n^{-1}(\log n)^{\beta-3p/\gamma}\int_{0}^{(\log n)^{3/\gamma}}V\{|X|>x\}x^{\alpha-1}(\log n)^{3(p-\alpha)/\gamma}\,dx\\
    &\le C\sum_{n=2}^{\infty}n^{-1}(\log n)^{\beta-3\alpha/\gamma}C_V\{|X|^{\alpha}\}\le C\,C_V\{|X|^{\alpha}\}.
    \end{aligned}$$

    When $\alpha\le\gamma(\beta+1)/3$, from Lemma 2.4 of Yan [22] and $p>\gamma(\beta+1)/3$ it follows that

    $$\begin{aligned}
    I_1&\le C\sum_{n=2}^{\infty}n^{-1}(\log n)^{\beta-3p/\gamma}\int_{0}^{(\log n)^{3/\gamma}}V\{|X|>x\}x^{p-1}\,dx\\
    &=C\sum_{n=2}^{\infty}n^{-1}(\log n)^{\beta-3p/\gamma}\sum_{m=2}^{n}\int_{(\log(m-1))^{3/\gamma}}^{(\log m)^{3/\gamma}}V\{|X|>x\}x^{p-1}\,dx\\
    &=C\sum_{m=2}^{\infty}\int_{(\log(m-1))^{3/\gamma}}^{(\log m)^{3/\gamma}}V\{|X|>x\}x^{p-1}\,dx\sum_{n=m}^{\infty}n^{-1}(\log n)^{\beta-3p/\gamma}\\
    &\le C\sum_{m=2}^{\infty}\int_{(\log(m-1))^{3/\gamma}}^{(\log m)^{3/\gamma}}V\{|X|>x\}x^{p-1}(\log m)^{\beta-3p/\gamma+1}\,dx\\
    &\le C\sum_{m=2}^{\infty}\int_{(\log(m-1))^{3/\gamma}}^{(\log m)^{3/\gamma}}V\{|X|>x\}x^{\frac{\gamma(\beta+1)}{3}-1}(\log m)^{\beta-3p/\gamma+1}(\log m)^{\frac{3}{\gamma}\left(p-\frac{\gamma(\beta+1)}{3}\right)}\,dx\\
    &\le C\,C_V\{|X|^{\gamma(\beta+1)/3}\}.
    \end{aligned}$$

    Next, for $I_2$, from Lemma 2.4 of Yan [22] and $p>\alpha$ it follows that

    $$\begin{aligned}
    I_2&\le C\sum_{n=2}^{\infty}n^{-1}(\log n)^{\beta-3p/\gamma}\sum_{k=1}^{\infty}\int_{k^{p/\alpha}(\log n)^{3p/\gamma}}^{(k+1)^{p/\alpha}(\log n)^{3p/\gamma}}V\{|X|^{p}>x\}\,dx\sum_{j=k}^{\infty}\#I_{nj}\,j^{-p/\alpha}\\
    &\le C\sum_{n=2}^{\infty}n^{-1}(\log n)^{\beta-3p/\gamma}\sum_{k=1}^{\infty}(k+1)^{1-p/\alpha}\int_{k^{p/\alpha}(\log n)^{3p/\gamma}}^{(k+1)^{p/\alpha}(\log n)^{3p/\gamma}}V\{|X|^{p}>x\}\,dx\\
    &\le C\sum_{n=2}^{\infty}n^{-1}(\log n)^{\beta-3p/\gamma}\sum_{k=1}^{\infty}(k+1)^{1-p/\alpha}\int_{k^{1/\alpha}(\log n)^{3/\gamma}}^{(k+1)^{1/\alpha}(\log n)^{3/\gamma}}V\{|X|>x\}x^{\alpha-1}x^{p-\alpha}\,dx\\
    &\le C\sum_{n=2}^{\infty}n^{-1}(\log n)^{\beta-3\alpha/\gamma}\sum_{k=1}^{\infty}\int_{k^{1/\alpha}(\log n)^{3/\gamma}}^{(k+1)^{1/\alpha}(\log n)^{3/\gamma}}V\{|X|>x\}x^{\alpha-1}\,dx\\
    &=C\sum_{n=2}^{\infty}n^{-1}(\log n)^{\beta-3\alpha/\gamma}\int_{(\log n)^{3/\gamma}}^{\infty}V\{|X|>x\}x^{\alpha-1}\,dx\\
    &=C\sum_{n=2}^{\infty}n^{-1}(\log n)^{\beta-3\alpha/\gamma}\sum_{m=n}^{\infty}\int_{(\log m)^{3/\gamma}}^{(\log(m+1))^{3/\gamma}}V\{|X|>x\}x^{\alpha-1}\,dx\\
    &\le C\sum_{m=2}^{\infty}\int_{(\log m)^{3/\gamma}}^{(\log(m+1))^{3/\gamma}}V\{|X|>x\}x^{\alpha-1}\,dx\sum_{n=2}^{m}n^{-1}(\log n)^{\beta-3\alpha/\gamma}.
    \end{aligned}$$

    Noting that

    $$\sum_{n=2}^{m}n^{-1}(\log n)^{\beta-3\alpha/\gamma}\le\begin{cases}C, & \text{for }\alpha>\gamma(\beta+1)/3,\\ C\log\log m, & \text{for }\alpha=\gamma(\beta+1)/3,\\ C(\log m)^{\beta-\frac{3\alpha}{\gamma}+1}, & \text{for }\alpha<\gamma(\beta+1)/3,\end{cases}$$

    we obtain

    $$I_2\le\begin{cases}C\,C_V\{|X|^{\alpha}\}, & \text{for }\alpha>\gamma(\beta+1)/3,\\ C\,C_V\{|X|^{\alpha}\log(1+|X|)\}, & \text{for }\alpha=\gamma(\beta+1)/3,\\ C\,C_V\{|X|^{\gamma(\beta+1)/3}\}, & \text{for }\alpha<\gamma(\beta+1)/3.\end{cases}$$

    Hence,

    $$I=I_1+I_2\le\begin{cases}C\,C_V\{|X|^{\alpha}\}, & \text{for }\alpha>\gamma(\beta+1)/3,\\ C\,C_V\{|X|^{\alpha}\log(1+|X|)\}, & \text{for }\alpha=\gamma(\beta+1)/3,\\ C\,C_V\{|X|^{\gamma(\beta+1)/3}\}, & \text{for }\alpha<\gamma(\beta+1)/3.\end{cases}$$

    The proof is completed.

    Proof of Theorem 3.1. By (2.3) and $a_{ni}=a_{ni}^{+}-a_{ni}^{-}$, we may and do assume that $a_{ni}\ge 0$ and $\sum_{i=1}^{n}a_{ni}^{\alpha}\le n$. It suffices to prove (3.2).

    For fixed $n\ge 1$, write $Z_{ni}$ as in the proof of Lemma 2.4, and

    $$A=\bigcap_{i=1}^{n}\{Y_{ni}=a_{ni}X_{ni}\},\qquad B=\bar{A}=\bigcup_{i=1}^{n}\{Y_{ni}\ne a_{ni}X_{ni}\}=\bigcup_{i=1}^{n}\{|a_{ni}X_{ni}|>b_n\},$$
    $$E_n=\Big\{\max_{1\le j\le n}\sum_{i=1}^{j}a_{ni}X_{ni}>\varepsilon b_n\Big\}.$$

    We easily see that for all $\varepsilon>0$,

    $$E_n=E_nA\cup E_nB\subset\Big\{\max_{1\le j\le n}\sum_{i=1}^{j}Y_{ni}>\varepsilon b_n\Big\}\cup\Big\{\bigcup_{i=1}^{n}\{|a_{ni}X_{ni}|>b_n\}\Big\}.$$

    Then by Lemma 2.4, for sufficiently large $n$,

    $$\begin{aligned}
    V(E_n)&\le V\Big(\max_{1\le j\le n}\sum_{i=1}^{j}Y_{ni}>\varepsilon b_n\Big)+V\Big(\bigcup_{i=1}^{n}\{|a_{ni}X_{ni}|>b_n\}\Big)\\
    &\le V\Big(\max_{1\le j\le n}\sum_{i=1}^{j}(Y_{ni}-\mathbb{E}Y_{ni})>\varepsilon b_n-\max_{1\le j\le n}\sum_{i=1}^{j}\mathbb{E}Y_{ni}\Big)+V\Big(\bigcup_{i=1}^{n}\{|a_{ni}X_{ni}|>b_n\}\Big)\\
    &\le V\Big(\max_{1\le j\le n}\sum_{i=1}^{j}(Y_{ni}-\mathbb{E}Y_{ni})>\varepsilon b_n/2\Big)+V\Big(\bigcup_{i=1}^{n}\{|a_{ni}X_{ni}|>b_n\}\Big).
    \end{aligned} \tag{4.3}$$

    By (4.3), to establish (3.2) we only need to prove that

    $$J_1:=\sum_{n=2}^{\infty}\frac{1}{n}V\Big(\max_{1\le j\le n}\sum_{i=1}^{j}(Y_{ni}-\mathbb{E}Y_{ni})>\varepsilon b_n/2\Big)<\infty, \tag{4.4}$$
    $$J_2:=\sum_{n=2}^{\infty}\frac{1}{n}\sum_{i=1}^{n}V(|a_{ni}X_{ni}|>b_n)<\infty. \tag{4.5}$$

    For $J_1$, note that $\{Y_{ni}-\mathbb{E}(Y_{ni}),1\le i\le n,n\ge 1\}$ is an array of rowwise END random variables under sub-linear expectations. Therefore, by Markov's inequality under sub-linear expectations, Lemmas 2.1 and 2.3, and an argument similar to the proof of (2.8) of Zhang [20], we obtain

    $$\begin{aligned}
    J_1&\le\sum_{n=2}^{\infty}\frac{1}{n}V\Big(\max_{1\le j\le n}\Big(\sum_{i=1}^{j}(Y_{ni}-\mathbb{E}Y_{ni})\Big)^{+}>\varepsilon b_n/2\Big)
    \le C\sum_{n=2}^{\infty}\frac{1}{n\,b_n^{2}}\mathbb{E}\Big(\max_{1\le j\le n}\Big(\Big(\sum_{i=1}^{j}(Y_{ni}-\mathbb{E}Y_{ni})\Big)^{+}\Big)^{2}\Big)\\
    &\le C\sum_{n=2}^{\infty}\frac{(\log n)^{2}}{n\,b_n^{2}}\Big(\sum_{i=1}^{n}C_V\{((Y_{ni}-\mathbb{E}(Y_{ni}))^{+})^{2}\}+\sum_{i=1}^{n}\mathbb{E}((Y_{ni}-\mathbb{E}(Y_{ni}))^{2})\Big)\\
    &\le C\sum_{n=2}^{\infty}\frac{(\log n)^{2}}{n\,b_n^{2}}\sum_{i=1}^{n}C_V\{|Y_{ni}|^{2}\}+C\sum_{n=2}^{\infty}\frac{(\log n)^{2}}{n\,b_n^{2}}\Big(\sum_{i=1}^{n}|\mathbb{E}(Y_{ni})|\Big)^{2}\\
    &\le C\sum_{n=2}^{\infty}\frac{(\log n)^{2}}{n\,b_n^{2}}\sum_{i=1}^{n}\int_{0}^{b_n^{2}}V\{|a_{ni}X|^{2}>x\}\,dx+C\sum_{n=2}^{\infty}\frac{(\log n)^{2}}{n\,b_n^{2}}\Big(\sum_{i=1}^{n}|\mathbb{E}(Y_{ni})|\Big)^{2}=:J_{11}+J_{12}.
    \end{aligned}$$

    By Lemma 2.5(i) (for p=β=2) and its proof, we have J11<.

    For $J_{12}$, when $0<\alpha\le 1$, by Lemma 2.4 of Yan [22] and its proof, we see that

    $$\begin{aligned}
    J_{12}&\le C\sum_{n=2}^{\infty}\frac{(\log n)^{2}}{n\,b_n^{2}}\Big(\sum_{i=1}^{n}\mathbb{E}(|Y_{ni}|)\Big)^{2}
    \le C\sum_{n=2}^{\infty}\frac{(\log n)^{2}}{n\,b_n^{2}}\Big(\sum_{i=1}^{n}\mathbb{E}(|Y'_{ni}|)\Big)^{2}
    \le C\sum_{n=2}^{\infty}\frac{(\log n)^{2}}{n\,b_n^{2}}\Big(\sum_{i=1}^{n}C_V\{|Y'_{ni}|\}\Big)^{2}\\
    &\le C\sum_{n=2}^{\infty}n^{-1-2/\alpha}(\log n)^{2-6/\gamma}\Big(\sum_{i=1}^{n}\int_{0}^{b_n}V\{|a_{ni}X|>x\}\,dx\Big)^{2}\\
    &\le C\sum_{n=2}^{\infty}n^{-1-2/\alpha}(\log n)^{2-6/\gamma}\Big(\sum_{j=1}^{\infty}\#I_{nj}\int_{0}^{(\log n)^{3/\gamma}n^{1/\alpha}}V\{|X|>x\,j^{1/\alpha}/n^{1/\alpha}\}\,dx\Big)^{2}\\
    &\le C\sum_{n=2}^{\infty}n^{-1}(\log n)^{2-6/\gamma}\Big(\sum_{j=1}^{\infty}\#I_{nj}\,j^{-1/\alpha}\sum_{k=0}^{j-1}\int_{k^{1/\alpha}(\log n)^{3/\gamma}}^{(k+1)^{1/\alpha}(\log n)^{3/\gamma}}V\{|X|>x\}\,dx\Big)^{2}\\
    &\le C\sum_{n=2}^{\infty}n^{-1}(\log n)^{2-6/\gamma}\Big(\sum_{k=0}^{\infty}\int_{k^{1/\alpha}(\log n)^{3/\gamma}}^{(k+1)^{1/\alpha}(\log n)^{3/\gamma}}V\{|X|>x\}\,dx\sum_{j=k+1}^{\infty}\#I_{nj}\,j^{-1/\alpha}\Big)^{2}\\
    &\le C\sum_{n=2}^{\infty}n^{-1}(\log n)^{2-6/\gamma}\Big(\sum_{k=0}^{\infty}(k+1)^{1-1/\alpha}\int_{k^{1/\alpha}(\log n)^{3/\gamma}}^{(k+1)^{1/\alpha}(\log n)^{3/\gamma}}V\{|X|>x\}\,dx\Big)^{2}\\
    &\le J_{121}+\begin{cases}C\sum_{n=2}^{\infty}n^{-1}(\log n)^{2-6\alpha/\gamma}(C_V\{|X|^{\alpha}\})^{2}<\infty, & \text{for }\alpha\ge\gamma,\\ C\sum_{n=2}^{\infty}n^{-1}(\log n)^{-4}(C_V\{|X|^{\gamma}\})^{2}<\infty, & \text{for }\alpha<\gamma,\end{cases}
    \end{aligned}$$

    where for $0<\gamma<2$,

    $$\begin{aligned}
    J_{121}&=C\sum_{n=2}^{\infty}n^{-1}(\log n)^{2-6/\gamma}\Big(\int_{0}^{(\log n)^{3/\gamma}}V\{|X|>x\}\,dx\Big)^{2}\\
    &\le C\int_{2}^{\infty}y^{-1}(\log y)^{2-6/\gamma}\,dy\int_{0}^{(\log y)^{3/\gamma}}V\{|X|>z\}\,dz\int_{0}^{z}V\{|X|>x\}\,dx\\
    &\le C\int_{2}^{\infty}y^{-1}(\log y)^{2-6/\gamma}\,dy\int_{0}^{(\log y)^{3/\gamma}}zV\{|X|>z\}\,dz\\
    &\le C\int_{0}^{\infty}zV\{|X|>z\}\,dz\int_{\max\{2,e^{z^{\gamma/3}}\}}^{\infty}y^{-1}(\log y)^{2-6/\gamma}\,dy\\
    &\le C+C\int_{1}^{\infty}z^{\gamma-1}V\{|X|>z\}\,dz\le C+C\,C_V\{|X|^{\gamma}\}<\infty.
    \end{aligned}$$

    When $1<\alpha<2$, by $\mathbb{E}(a_{ni}X_{ni})=0$, we have

    $$\begin{aligned}
    J_{12}&\le C\sum_{n=2}^{\infty}\frac{(\log n)^{2}}{n\,b_n^{2}}\Big(\sum_{i=1}^{n}|\mathbb{E}(Y_{ni})-\mathbb{E}(a_{ni}X_{ni})|\Big)^{2}
    \le C\sum_{n=2}^{\infty}\frac{(\log n)^{2}}{n\,b_n^{2}}\Big(\sum_{i=1}^{n}\mathbb{E}(|Y_{ni}-a_{ni}X_{ni}|)\Big)^{2}\\
    &\le C\sum_{n=2}^{\infty}\frac{(\log n)^{2}}{n\,b_n^{2}}\Big(\sum_{i=1}^{n}\mathbb{E}(|Y'_{ni}-a_{ni}X|)\Big)^{2}
    \le C\sum_{n=2}^{\infty}\frac{(\log n)^{2}}{n\,b_n^{2}}\Big(\sum_{i=1}^{n}C_V(|Y'_{ni}-a_{ni}X|)\Big)^{2}\\
    &\le C\sum_{n=2}^{\infty}\frac{(\log n)^{2}}{n\,b_n^{2}}\Big(\sum_{i=1}^{n}C_V\{|a_{ni}X|I\{|a_{ni}X|>b_n\}\}\Big)^{2}
    \le C\sum_{n=2}^{\infty}(\log n)^{2-6/\gamma}n^{-1-2/\alpha}\Big(\sum_{i=1}^{n}|a_{ni}|\,C_V\{|X|I\{|a_{ni}X|>b_n\}\}\Big)^{2}\\
    &\le C\sum_{n=2}^{\infty}(\log n)^{2-6/\gamma}n^{-1-2/\alpha}\Big(\sum_{j=1}^{\infty}\sum_{i\in I_{nj}}n^{1/\alpha}j^{-1/\alpha}\int_{0}^{\infty}V\{|X|I\{|X|>(\log n)^{3/\gamma}j^{1/\alpha}\}>x\}\,dx\Big)^{2}\\
    &\le C\sum_{n=2}^{\infty}(\log n)^{2-6/\gamma}n^{-1-2/\alpha}\Big(\sum_{j=1}^{\infty}\sum_{i\in I_{nj}}n^{1/\alpha}j^{-1/\alpha}\int_{0}^{(\log n)^{3/\gamma}j^{1/\alpha}}V\{|X|>(\log n)^{3/\gamma}j^{1/\alpha}\}\,dx\Big)^{2}\\
    &\quad+C\sum_{n=2}^{\infty}(\log n)^{2-6/\gamma}n^{-1-2/\alpha}\Big(\sum_{j=1}^{\infty}\sum_{i\in I_{nj}}n^{1/\alpha}j^{-1/\alpha}\sum_{k=j}^{\infty}\int_{k^{1/\alpha}(\log n)^{3/\gamma}}^{(k+1)^{1/\alpha}(\log n)^{3/\gamma}}V\{|X|>x\}\,dx\Big)^{2}\\
    &=:J_{121}+J_{122}.
    \end{aligned}$$

    Since $\#I_{nj}\le C(j+1)$, we see that

    $$\begin{aligned}
    J_{121}&\le C\sum_{n=2}^{\infty}(\log n)^{2-6/\gamma}n^{-1-2/\alpha}\Big(\sum_{j=1}^{\infty}\#I_{nj}\,n^{1/\alpha}(\log n)^{3/\gamma}V\{|X|>(\log n)^{3/\gamma}j^{1/\alpha}\}\Big)^{2}\\
    &\le C\sum_{n=2}^{\infty}(\log n)^{2-6/\gamma}n^{-1-2/\alpha}\Big(\sum_{j=1}^{\infty}j\,n^{1/\alpha}(\log n)^{3/\gamma}V\{|X|>(\log n)^{3/\gamma}j^{1/\alpha}\}\Big)^{2}\\
    &\le\begin{cases}C\sum_{n=2}^{\infty}(\log n)^{2-6\alpha/\gamma}n^{-1}\Big(\sum_{j=1}^{\infty}j\,(\log n)^{3\alpha/\gamma}V\{|X|^{\alpha}>(\log n)^{3\alpha/\gamma}j\}\Big)^{2}, & \text{for }\alpha>\gamma,\\[4pt] C\sum_{n=2}^{\infty}(\log n)^{2-6}n^{-1}\Big(\sum_{j=1}^{\infty}j^{\gamma/\alpha-1}(\log n)^{3}V\{|X|^{\gamma}>(\log n)^{3}j^{\gamma/\alpha}\}\,j^{2-2\gamma/\alpha}\Big)^{2}, & \text{for }\alpha\le\gamma,\end{cases}\\
    &\le\begin{cases}C\sum_{n=2}^{\infty}(\log n)^{2-6\alpha/\gamma}n^{-1}(C_V\{|X|^{\alpha}\})^{2}<\infty, & \text{for }\alpha>\gamma,\\ C\sum_{n=2}^{\infty}(\log n)^{-4}n^{-1}(C_V\{|X|^{\gamma}\})^{2}<\infty, & \text{for }\alpha\le\gamma.\end{cases}
    \end{aligned}$$

    Since $\sum_{j=1}^{k}\#I_{nj}\,j^{-1}\le C$ and $\sum_{j=1}^{k}\#I_{nj}\,j^{-1/\alpha}\le\sum_{j=1}^{k}\#I_{nj}\,j^{-1}\,j^{1-1/\alpha}\le Ck^{1-1/\alpha}$, we see that

    $$\begin{aligned}
    J_{122}&\le C\sum_{n=2}^{\infty}(\log n)^{2-6/\gamma}n^{-1}\Big(\sum_{k=1}^{\infty}\int_{k^{1/\alpha}(\log n)^{3/\gamma}}^{(k+1)^{1/\alpha}(\log n)^{3/\gamma}}V\{|X|>x\}\,dx\sum_{j=1}^{k}\#I_{nj}\,j^{-1/\alpha}\Big)^{2}\\
    &\le C\sum_{n=2}^{\infty}(\log n)^{2-6/\gamma}n^{-1}\Big(\sum_{k=1}^{\infty}k^{1-1/\alpha}\int_{k^{1/\alpha}(\log n)^{3/\gamma}}^{(k+1)^{1/\alpha}(\log n)^{3/\gamma}}V\{|X|>x\}\,dx\Big)^{2}\\
    &\le\begin{cases}C\sum_{n=2}^{\infty}(\log n)^{2-6\alpha/\gamma}n^{-1}\Big(\sum_{k=1}^{\infty}\int_{k^{1/\alpha}(\log n)^{3/\gamma}}^{(k+1)^{1/\alpha}(\log n)^{3/\gamma}}V\{|X|>x\}x^{\alpha-1}\,dx\Big)^{2}, & \text{for }\alpha>\gamma,\\[4pt] C\sum_{n=2}^{\infty}(\log n)^{2-6}n^{-1}\Big(\sum_{k=1}^{\infty}k^{1-1/\alpha-(\gamma-1)/\alpha}\int_{k^{1/\alpha}(\log n)^{3/\gamma}}^{(k+1)^{1/\alpha}(\log n)^{3/\gamma}}V\{|X|>x\}x^{\gamma-1}\,dx\Big)^{2}, & \text{for }\alpha\le\gamma,\end{cases}\\
    &\le\begin{cases}C\sum_{n=2}^{\infty}(\log n)^{2-6\alpha/\gamma}n^{-1}(C_V\{|X|^{\alpha}\})^{2}<\infty, & \text{for }\alpha>\gamma,\\ C\sum_{n=2}^{\infty}(\log n)^{-4}n^{-1}(C_V\{|X|^{\gamma}\})^{2}<\infty, & \text{for }\alpha\le\gamma.\end{cases}
    \end{aligned}$$

    Hence, we prove that J1<.

    Define $g_\mu(x)\in C_{l,\mathrm{Lip}}(\mathbb{R})$ such that $I\{|x|\le\mu\}\le g_\mu(|x|)\le I\{|x|\le 1\}$ for some $0<\mu<1$; then $I\{|x|>\mu\}\ge 1-g_\mu(|x|)\ge I\{|x|>1\}$. For $J_2$, by $\sum_{j=1}^{\infty}\frac{\#I_{nj}}{j+1}\le 1$, we see that

    $$\begin{aligned}
    J_2&\le\sum_{n=2}^{\infty}\frac{1}{n}\sum_{i=1}^{n}V\big(|a_{ni}X_{ni}|>n^{1/\alpha}(\log n)^{3/\gamma}\big)
    \le C\sum_{n=2}^{\infty}\frac{1}{n}\sum_{j=1}^{\infty}\sum_{i\in I_{nj}}\mathbb{E}\Big(1-g_\mu\Big(\Big|\frac{a_{ni}X_{ni}}{n^{1/\alpha}(\log n)^{3/\gamma}}\Big|\Big)\Big)\\
    &\le C\sum_{n=2}^{\infty}\frac{1}{n}\sum_{j=1}^{\infty}\sum_{i\in I_{nj}}\mathbb{E}\Big(1-g_\mu\Big(\Big|\frac{a_{ni}X}{n^{1/\alpha}(\log n)^{3/\gamma}}\Big|\Big)\Big)
    \le C\sum_{n=2}^{\infty}\frac{1}{n}\sum_{j=1}^{\infty}\#I_{nj}\,V\{|X|>\mu j^{1/\alpha}(\log n)^{3/\gamma}\}\\
    &\le C\sum_{n=2}^{\infty}\frac{1}{n}\sum_{j=1}^{\infty}\frac{\#I_{nj}}{j+1}(j+1)V\{|X|>\mu j^{1/\alpha}(\log n)^{3/\gamma}\}
    \le C\sum_{n=2}^{\infty}\frac{1}{n}\max_{y\ge 1}\,yV\{|X|>\mu y^{1/\alpha}(\log n)^{3/\gamma}\}\\
    &\le\begin{cases}C\sum_{n=2}^{\infty}\frac{1}{n}(\log n)^{-3\alpha/\gamma}\max_{y\ge\mu^{\alpha}}\,y(\log n)^{3\alpha/\gamma}V\{|X|^{\alpha}>y(\log n)^{3\alpha/\gamma}\}, & \text{for }\alpha>\gamma,\\[4pt] C\sum_{n=2}^{\infty}\frac{1}{n}(\log n)^{-3}\max_{y\ge\mu^{\alpha}}\,y^{\gamma/\alpha}(\log n)^{3}V\{|X|^{\gamma}>y^{\gamma/\alpha}(\log n)^{3}\}\,y^{1-\gamma/\alpha}, & \text{for }\alpha\le\gamma,\end{cases}\\
    &\le\begin{cases}C\sum_{n=2}^{\infty}\frac{1}{n}(\log n)^{-3\alpha/\gamma}<\infty, & \text{for }\alpha>\gamma,\\ C\sum_{n=2}^{\infty}\frac{1}{n}(\log n)^{-3}<\infty, & \text{for }\alpha\le\gamma.\end{cases}
    \end{aligned}$$

    The proof of Theorem 3.1 is finished.

    Proof of Theorem 3.2. We only prove (3.5). For all $\varepsilon>0$, we see that

    $$\begin{aligned}
    &\sum_{n=2}^{\infty}\frac{1}{n}C_V\Big\{\Big(\Big(\frac{1}{b_n}\max_{1\le j\le n}\sum_{i=1}^{j}a_{ni}X_{ni}-\varepsilon\Big)^{+}\Big)^{\tau}\Big\}
    =\sum_{n=2}^{\infty}\frac{1}{n}\int_{0}^{\infty}V\Big\{\frac{1}{b_n}\max_{1\le j\le n}\sum_{i=1}^{j}a_{ni}X_{ni}-\varepsilon>t^{1/\tau}\Big\}\,dt\\
    &=\sum_{n=2}^{\infty}\frac{1}{n}\int_{0}^{1}V\Big\{\frac{1}{b_n}\max_{1\le j\le n}\sum_{i=1}^{j}a_{ni}X_{ni}-\varepsilon>t^{1/\tau}\Big\}\,dt
    +\sum_{n=2}^{\infty}\frac{1}{n}\int_{1}^{\infty}V\Big\{\frac{1}{b_n}\max_{1\le j\le n}\sum_{i=1}^{j}a_{ni}X_{ni}-\varepsilon>t^{1/\tau}\Big\}\,dt\\
    &\le\sum_{n=2}^{\infty}\frac{1}{n}V\Big\{\max_{1\le j\le n}\sum_{i=1}^{j}a_{ni}X_{ni}>\varepsilon b_n\Big\}
    +\sum_{n=2}^{\infty}\frac{1}{n}\int_{1}^{\infty}V\Big\{\max_{1\le j\le n}\sum_{i=1}^{j}a_{ni}X_{ni}>b_nt^{1/\tau}\Big\}\,dt=:K_1+K_2.
    \end{aligned} \tag{4.6}$$

    To establish (3.5), it is enough to prove that $K_1<\infty$ and $K_2<\infty$. By Theorem 3.1, we know $K_1<\infty$. For $K_2$: for each $1\le i\le n$, $n\ge 1$, and $t\ge 1$, write

    $$Y_{ni}=a_{ni}X_{ni}I\{|a_{ni}X_{ni}|\le b_nt^{1/\tau}\}+b_nt^{1/\tau}I\{a_{ni}X_{ni}>b_nt^{1/\tau}\}-b_nt^{1/\tau}I\{a_{ni}X_{ni}<-b_nt^{1/\tau}\},$$
    $$Z_{ni}=a_{ni}X_{ni}-Y_{ni},\qquad A=\bigcap_{i=1}^{n}\{Y_{ni}=a_{ni}X_{ni}\},$$
    $$B=\bar{A}=\bigcup_{i=1}^{n}\{Y_{ni}\ne a_{ni}X_{ni}\}=\bigcup_{i=1}^{n}\{|a_{ni}X_{ni}|>b_nt^{1/\tau}\},\qquad E_n=\Big\{\max_{1\le j\le n}\sum_{i=1}^{j}a_{ni}X_{ni}>b_nt^{1/\tau}\Big\}.$$

    By Lemma 2.4, for all $t\ge 1$ and sufficiently large $n$, we obtain

    $$\begin{aligned}
    V\{E_n\}&\le V\Big\{\max_{1\le j\le n}\sum_{i=1}^{j}Y_{ni}>b_nt^{1/\tau}\Big\}+V\Big\{\bigcup_{i=1}^{n}\{|a_{ni}X_{ni}|>b_nt^{1/\tau}\}\Big\}\\
    &\le V\Big\{\max_{1\le j\le n}\sum_{i=1}^{j}(Y_{ni}-\mathbb{E}(Y_{ni}))>b_nt^{1/\tau}-\max_{1\le j\le n}\sum_{i=1}^{j}\mathbb{E}(Y_{ni})\Big\}+\sum_{i=1}^{n}V\{|a_{ni}X_{ni}|>b_nt^{1/\tau}\}\\
    &\le V\Big\{\max_{1\le j\le n}\Big(\sum_{i=1}^{j}(Y_{ni}-\mathbb{E}(Y_{ni}))\Big)^{+}>b_nt^{1/\tau}/2\Big\}+\sum_{i=1}^{n}V\{|a_{ni}X_{ni}|>b_nt^{1/\tau}\}.
    \end{aligned} \tag{4.7}$$

    To establish $K_2<\infty$, we only need to prove that

    $$K_{21}:=\sum_{n=2}^{\infty}\frac{1}{n}\int_{1}^{\infty}V\Big\{\max_{1\le j\le n}\Big(\sum_{i=1}^{j}(Y_{ni}-\mathbb{E}(Y_{ni}))\Big)^{+}>b_nt^{1/\tau}/2\Big\}\,dt<\infty, \tag{4.8}$$
    $$K_{22}:=\sum_{n=2}^{\infty}\frac{1}{n}\int_{1}^{\infty}\sum_{i=1}^{n}V\{|a_{ni}X_{ni}|>b_nt^{1/\tau}\}\,dt<\infty. \tag{4.9}$$

    We know that $\{Y_{ni}-\mathbb{E}(Y_{ni}),1\le i\le n,n\ge 1\}$ is an array of rowwise END random variables under sub-linear expectations. Therefore, by Markov's inequality under sub-linear expectations, Lemmas 2.1 and 2.3, and an argument similar to the proof of (2.8) of Zhang [20], we obtain

    $$\begin{aligned}
    K_{21}&\le C\sum_{n=2}^{\infty}\frac{1}{n}\int_{1}^{\infty}\frac{1}{b_n^{2}t^{2/\tau}}\mathbb{E}\Big\{\max_{1\le j\le n}\Big(\Big(\sum_{i=1}^{j}(Y_{ni}-\mathbb{E}(Y_{ni}))\Big)^{+}\Big)^{2}\Big\}\,dt\\
    &\le C\sum_{n=2}^{\infty}\frac{1}{n}\int_{1}^{\infty}\frac{(\log n)^{2}}{b_n^{2}t^{2/\tau}}\Big\{\sum_{i=1}^{n}C_V\{((Y_{ni}-\mathbb{E}(Y_{ni}))^{+})^{2}\}+\sum_{i=1}^{n}\mathbb{E}[(Y_{ni}-\mathbb{E}(Y_{ni}))^{2}]\Big\}\,dt\\
    &\le C\sum_{n=2}^{\infty}\frac{1}{n}\int_{1}^{\infty}\frac{(\log n)^{2}}{b_n^{2}t^{2/\tau}}\sum_{i=1}^{n}C_V\{(Y_{ni})^{2}\}\,dt+C\sum_{n=2}^{\infty}\frac{1}{n}\int_{1}^{\infty}\frac{(\log n)^{2}}{b_n^{2}t^{2/\tau}}\Big(\sum_{i=1}^{n}|\mathbb{E}[Y_{ni}]|\Big)^{2}\,dt\\
    &\le C\sum_{n=2}^{\infty}\frac{1}{n}\int_{1}^{\infty}\frac{(\log n)^{2}}{b_n^{2}t^{2/\tau}}\sum_{i=1}^{n}\int_{0}^{b_n}V(|a_{ni}X|>x)\,x\,dx\,dt
    +C\sum_{n=2}^{\infty}\frac{1}{n}\int_{1}^{\infty}\frac{(\log n)^{2}}{b_n^{2}t^{2/\tau}}\sum_{i=1}^{n}\int_{b_n}^{b_nt^{1/\tau}}V(|a_{ni}X|>x)\,x\,dx\,dt\\
    &\quad+C\sum_{n=2}^{\infty}\frac{1}{n}\int_{1}^{\infty}\frac{(\log n)^{2}}{b_n^{2}t^{2/\tau}}\Big(\sum_{i=1}^{n}|\mathbb{E}[Y_{ni}]|\Big)^{2}\,dt=:K_{211}+K_{212}+K_{213}.
    \end{aligned} \tag{4.10}$$

    Since $0<\tau<\alpha<2$, by Lemma 2.5 and its proof, we have

    $$K_{211}\le C\sum_{n=2}^{\infty}\frac{(\log n)^{2}}{n\,b_n^{2}}\sum_{i=1}^{n}\int_{0}^{b_n}V(|a_{ni}X|>x)\,x\,dx<\infty. \tag{4.11}$$

    By substituting $t=x^{\tau}$ and using Markov's inequality under sub-linear expectations together with Lemmas 2.1 and 2.5, we see that

    $$\begin{aligned}
    K_{212}&\le C\sum_{n=2}^{\infty}\frac{(\log n)^{2}}{n\,b_n^{2}}\int_{1}^{\infty}x^{\tau-3}\sum_{i=1}^{n}\int_{b_n}^{b_nx}V(|a_{ni}X|>y)\,y\,dy\,dx\\
    &\le C\sum_{n=2}^{\infty}\frac{(\log n)^{2}}{n\,b_n^{2}}\sum_{m=1}^{\infty}\int_{m}^{m+1}x^{\tau-3}\sum_{i=1}^{n}\int_{b_n}^{b_nx}V(|a_{ni}X|>y)\,y\,dy\,dx\\
    &\le C\sum_{n=2}^{\infty}\frac{(\log n)^{2}}{n\,b_n^{2}}\sum_{m=1}^{\infty}m^{\tau-3}\sum_{i=1}^{n}\int_{b_n}^{(m+1)b_n}V(|a_{ni}X|>y)\,y\,dy\\
    &\le C\sum_{n=2}^{\infty}\frac{(\log n)^{2}}{n\,b_n^{2}}\sum_{i=1}^{n}\sum_{m=1}^{\infty}m^{\tau-3}\sum_{s=1}^{m}\int_{b_ns}^{b_n(s+1)}V(|a_{ni}X|>y)\,y\,dy\\
    &\le C\sum_{n=2}^{\infty}\frac{(\log n)^{2}}{n\,b_n^{2}}\sum_{i=1}^{n}\sum_{s=1}^{\infty}\int_{b_ns}^{b_n(s+1)}V(|a_{ni}X|>y)\,y\,dy\sum_{m=s}^{\infty}m^{\tau-3}\\
    &\le C\sum_{n=2}^{\infty}\frac{(\log n)^{2}}{n\,b_n^{2}}\sum_{i=1}^{n}\sum_{s=1}^{\infty}s^{\tau-2}\int_{b_ns}^{b_n(s+1)}V(|a_{ni}X|>y)\,y^{\tau-1}y^{2-\tau}\,dy\\
    &\le C\sum_{n=2}^{\infty}\frac{(\log n)^{2}}{n\,b_n^{\tau}}\sum_{i=1}^{n}\int_{b_n}^{\infty}V(|a_{ni}X|>y)\,y^{\tau-1}\,dy
    \le C\sum_{n=2}^{\infty}\frac{(\log n)^{2}}{n\,b_n^{\tau}}\sum_{i=1}^{n}\int_{b_n^{\tau}}^{\infty}V(|a_{ni}X|^{\tau}>y)\,dy<\infty.
    \end{aligned} \tag{4.12}$$

    For $K_{213}$, when $0<\alpha\le 1$, by an argument similar to that for $J_{12}$ in the proof of Theorem 3.1, we see that

    $$\begin{aligned}
    K_{213}&\le C\sum_{n=2}^{\infty}\frac{1}{n}\int_{1}^{\infty}\frac{(\log n)^{2}}{b_n^{2}t^{2/\tau}}\Big(\sum_{i=1}^{n}\mathbb{E}\big[|a_{ni}X|I\{|a_{ni}X|\le b_nt^{1/\tau}\}+b_nt^{1/\tau}I\{|a_{ni}X|>b_nt^{1/\tau}\}\big]\Big)^{2}\,dt\\
    &\le C\sum_{n=2}^{\infty}\frac{1}{n}\int_{1}^{\infty}\frac{(\log n)^{2}}{b_n^{2}t^{2/\tau}}\Big(\sum_{i=1}^{n}C_V\big\{|a_{ni}X|I\{|a_{ni}X|\le b_nt^{1/\tau}\}+b_nt^{1/\tau}I\{|a_{ni}X|>b_nt^{1/\tau}\}\big\}\Big)^{2}\,dt\\
    &\le C\sum_{n=2}^{\infty}\frac{1}{n}\int_{1}^{\infty}\frac{(\log n)^{2}}{b_n^{2}t^{2/\tau}}\Big(\sum_{j=1}^{\infty}\sum_{i\in I_{nj}}\int_{0}^{b_nt^{1/\tau}}V\{|X|>x\,j^{1/\alpha}/n^{1/\alpha}\}\,dx\Big)^{2}\,dt\\
    &\le C\sum_{n=2}^{\infty}\frac{1}{n}\int_{1}^{\infty}\frac{(\log n)^{2}}{b_n^{2}t^{2/\tau}}\Big(\sum_{j=1}^{\infty}\#I_{nj}\,n^{1/\alpha}j^{-1/\alpha}\sum_{k=0}^{j-1}\int_{k^{1/\alpha}(\log n)^{3/\gamma}t^{1/\tau}}^{(k+1)^{1/\alpha}(\log n)^{3/\gamma}t^{1/\tau}}V\{|X|>x\}\,dx\Big)^{2}\,dt\\
    &\le C\sum_{n=2}^{\infty}\frac{1}{n}\int_{1}^{\infty}\frac{(\log n)^{2-6/\gamma}}{t^{2/\tau}}\Big(\sum_{k=0}^{\infty}(k+1)^{1-1/\alpha}\int_{k^{1/\alpha}(\log n)^{3/\gamma}t^{1/\tau}}^{(k+1)^{1/\alpha}(\log n)^{3/\gamma}t^{1/\tau}}V\{|X|>x\}\,dx\Big)^{2}\,dt\\
    &\le K_{2130}+\begin{cases}C\sum_{n=2}^{\infty}\frac{1}{n}\int_{1}^{\infty}(\log n)^{2-6\alpha/\gamma}t^{-2\alpha/\tau}(C_V\{|X|^{\alpha}\})^{2}\,dt<\infty, & \text{for }\alpha>\gamma,\\[4pt] C\sum_{n=2}^{\infty}\frac{1}{n}\int_{1}^{\infty}(\log n)^{2-6}t^{-2\gamma/\tau}(C_V\{|X|^{\gamma}\})^{2}\,dt<\infty, & \text{for }\alpha\le\gamma,\end{cases}
    \end{aligned} \tag{4.13}$$

    where

    $$\begin{aligned}
    K_{2130}&=C\sum_{n=2}^{\infty}\frac{1}{n}\int_{1}^{\infty}\frac{(\log n)^{2-6/\gamma}}{t^{2/\tau}}\Big(\int_{0}^{(\log n)^{3/\gamma}t^{1/\tau}}V\{|X|>x\}\,dx\Big)^{2}\,dt\\
    &\le C\int_{2}^{\infty}\int_{1}^{\infty}y^{-1}(\log y)^{2-6/\gamma}t^{-2/\tau}\,dt\,dy\int_{0}^{(\log y)^{3/\gamma}t^{1/\tau}}zV\{|X|>z\}\,dz\\
    &\le C\int_{0}^{\infty}zV\{|X|>z\}\,dz\int_{1}^{\infty}t^{-2/\tau}\,dt\int_{\max\{2,e^{(z/t^{1/\tau})^{\gamma/3}}\}}^{\infty}y^{-1}(\log y)^{2-6/\gamma}\,dy\\
    &\le C\int_{0}^{\infty}z^{\gamma-1}V\{|X|>z\}\,dz\int_{1}^{\infty}t^{-2/\tau}\,dt\le C\,C_V\{|X|^{\gamma}\}<\infty.
    \end{aligned}$$

    When $1<\alpha<2$, by $\mathbb{E}(a_{ni}X_{ni})=0$ and an argument similar to that for $J_{12}$ with $1<\alpha<2$ in Theorem 3.1, we have

    $$\begin{aligned}
    K_{213}&\le C\sum_{n=2}^{\infty}\frac{1}{n}\int_{1}^{\infty}\frac{(\log n)^{2}}{b_n^{2}t^{2/\tau}}\Big(\sum_{i=1}^{n}|\mathbb{E}[Y_{ni}]-\mathbb{E}[a_{ni}X_{ni}]|\Big)^{2}\,dt\\
    &\le C\sum_{n=2}^{\infty}\frac{1}{n}\int_{1}^{\infty}\frac{(\log n)^{2}}{b_n^{2}t^{2/\tau}}\Big(\sum_{i=1}^{n}\mathbb{E}\big[|a_{ni}X_{ni}-b_nt^{1/\tau}|I\{|a_{ni}X_{ni}|>b_nt^{1/\tau}\}\big]\Big)^{2}\,dt\\
    &\le C\sum_{n=2}^{\infty}\frac{1}{n}\int_{1}^{\infty}\frac{(\log n)^{2}}{b_n^{2}t^{2/\tau}}\Big(\sum_{i=1}^{n}C_V\big\{|a_{ni}X-b_nt^{1/\tau}|I\{|a_{ni}X|>b_nt^{1/\tau}\}\big\}\Big)^{2}\,dt\\
    &\le C\sum_{n=2}^{\infty}\frac{1}{n}\int_{1}^{\infty}\frac{(\log n)^{2}}{b_n^{2}t^{2/\tau}}\Big(\sum_{i=1}^{n}C_V\big\{|a_{ni}X|I\{|a_{ni}X|>b_nt^{1/\tau}\}\big\}\Big)^{2}\,dt\\
    &\le C\sum_{n=2}^{\infty}\frac{(\log n)^{2}}{n\,b_n^{2}}\Big(\sum_{i=1}^{n}C_V\{|a_{ni}X|I\{|a_{ni}X|>b_n\}\}\Big)^{2}\int_{1}^{\infty}t^{-2/\tau}\,dt<\infty.
    \end{aligned} \tag{4.14}$$

    Combining (4.10)–(4.14) yields (4.8). By an argument similar to the proof of $J_2$ in Theorem 3.1, for $0<\mu<1$, we have

    $$\begin{aligned}
    K_{22}&\le C\sum_{n=2}^{\infty}\frac{1}{n}\int_{1}^{\infty}\sum_{i=1}^{n}V\{|a_{ni}X_{ni}|>b_nt^{1/\tau}\}\,dt
    \le C\sum_{n=2}^{\infty}\frac{1}{n}\int_{1}^{\infty}\sum_{j=1}^{\infty}\frac{\#I_{nj}}{j+1}(j+1)V\{|X|>\mu j^{1/\alpha}(\log n)^{3/\gamma}t^{1/\tau}\}\,dt\\
    &\le C\sum_{n=2}^{\infty}\frac{1}{n}\int_{1}^{\infty}\max_{y\ge 1}\,yV\{|X|>\mu y^{1/\alpha}(\log n)^{3/\gamma}t^{1/\tau}\}\,dt\\
    &\le\begin{cases}C\sum_{n=2}^{\infty}\frac{1}{n}(\log n)^{-3\alpha/\gamma}\int_{1}^{\infty}t^{-\alpha/\tau}\max_{y\ge\mu^{\alpha}}\,y(\log n)^{3\alpha/\gamma}t^{\alpha/\tau}V\{|X|^{\alpha}>y(\log n)^{3\alpha/\gamma}t^{\alpha/\tau}\}\,dt, & \text{for }\alpha>\gamma,\\[4pt] C\sum_{n=2}^{\infty}\frac{1}{n}(\log n)^{-3}\int_{1}^{\infty}t^{-\gamma/\tau}\max_{y\ge\mu^{\alpha}}\,y^{\gamma/\alpha}(\log n)^{3}t^{\gamma/\tau}V\{|X|^{\gamma}>y^{\gamma/\alpha}(\log n)^{3}t^{\gamma/\tau}\}\,y^{1-\gamma/\alpha}\,dt, & \text{for }\alpha\le\gamma,\end{cases}\\
    &\le\begin{cases}C\sum_{n=2}^{\infty}\frac{1}{n}(\log n)^{-3\alpha/\gamma}\int_{1}^{\infty}t^{-\alpha/\tau}\,dt<\infty, & \text{for }\alpha>\gamma,\\ C\sum_{n=2}^{\infty}\frac{1}{n}(\log n)^{-3}\int_{1}^{\infty}t^{-\gamma/\tau}\,dt<\infty, & \text{for }\alpha\le\gamma.\end{cases}
    \end{aligned} \tag{4.15}$$

    (4.15) together with (4.6)–(4.9) completes the proof of Theorem 3.2.

    Proof of Theorem 3.3. The proof is similar to that of Corollary 3.1 of Xu and Kong [6], with Theorem 3.1 here in place of Theorem 3.1 of Xu and Kong [6] (cf. also the proof of Theorem 2.11 of Yan [22]); we therefore omit it. This completes the proof.

    We have obtained new results on complete convergence and complete moment convergence for maximal weighted sums of extended negatively dependent random variables under sub-linear expectations. The results obtained in this article generalize those for extended negatively dependent random variables in probability space, and Theorems 3.1–3.3 complement the results of Xu et al. [5] and Xu and Kong [6] in some sense. In addition, Lemma 2.3 provides a Rosenthal-type inequality for maximal sums of extended negatively dependent random variables, which is another main contribution of this article.

    The author declares that no Artificial Intelligence (AI) tools were used in the creation of this article.

    This study was supported by Science and Technology Research Project of Jiangxi Provincial Department of Education of China (No. GJJ2201041), Doctoral Scientific Research Starting Foundation of Jingdezhen Ceramic University (No. 102/01003002031), Academic Achievement Re-cultivation Project of Jingdezhen Ceramic University (Grant No. 215/20506277).

    The author declares no conflict of interest in this article.



    [1] S. Peng, G-expectation, G-Brownian motion and related stochastic calculus of Itô type, In: Stochastic analysis and applications, Berlin: Springer, 2007,541–567. https://doi.org/10.1007/978-3-540-70847-6_25
    [2] S. Peng, Nonlinear expectations and stochastic calculus under uncertainty, 1 Eds., Berlin: Springer, 2019. https://doi.org/10.1007/978-3-662-59903-7
    [3] L. Zhang, Exponential inequalities under the sub-linear expectations with applications to laws of the iterated logarithm, Sci. China Math., 59 (2016), 2503–2526. https://doi.org/10.1007/s11425-016-0079-1 doi: 10.1007/s11425-016-0079-1
    [4] L. Zhang, Rosenthal's inequalities for independent and negatively dependent random variables under sub-linear expectations with applications, Sci. China Math., 59 (2016), 751–768. https://doi.org/10.1007/s11425-015-5105-2 doi: 10.1007/s11425-015-5105-2
    [5] M. Xu, K. Cheng, W. Yu, Complete convergence for weighted sums of negatively dependent random variables under sub-linear expectations, AIMS Mathematics, 7 (2022), 19998–20019. https://doi.org/10.3934/math.20221094 doi: 10.3934/math.20221094
    [6] M. Xu, X. Kong, Note on complete convergence and complete moment convergence for negatively dependent random variables under sub-linear expectations, AIMS Mathematics, 8 (2023), 8504–8521. https://doi.org/10.3934/math.2023428 doi: 10.3934/math.2023428
    [7] L. Zhang, Donsker's invariance principle under the sub-linear expectation with an application to Chung's law of the iterated logarithm, Commun. Math. Stat., 3 (2015), 187–214. https://doi.org/10.1007/s40304-015-0055-0 doi: 10.1007/s40304-015-0055-0
    [8] J. Xu, L. Zhang, Three series theorem for independent random variables under sub-linear expectations with applications, Acta. Math. Sin.-English Ser., 35 (2019), 172–184. https://doi.org/10.1007/s10114-018-7508-9 doi: 10.1007/s10114-018-7508-9
    [9] J. Xu, L. Zhang, The law of logarithm for arrays of random variables under sub-linear expectations, Acta. Math. Sin.-English Ser., 36 (2020), 670–688. https://doi.org/10.1007/s10255-020-0958-8 doi: 10.1007/s10255-020-0958-8
    [10] Q. Wu, Y. Jiang, Strong law of large numbers and Chover's law of the iterated logarithm under sub-linear expectations, J. Math. Anal. Appl., 460 (2018), 252–270. https://doi.org/10.1016/j.jmaa.2017.11.053 doi: 10.1016/j.jmaa.2017.11.053
    [11] L. Zhang, J. Lin, Marcinkiewicz's strong law of large numbers for nonlinear expectations, Stat. Probabil. Lett., 137 (2018), 269–276. https://doi.org/10.1016/j.spl.2018.01.022 doi: 10.1016/j.spl.2018.01.022
    [12] H. Zhong, Q. Wu, Complete convergence and complete moment convergence for weighted sums of extended negatively dependent random variables under sub-linear expectation, J. Inequal. Appl., 2017 (2017), 261. https://doi.org/10.1186/s13660-017-1538-1 doi: 10.1186/s13660-017-1538-1
    [13] F. Hu, Z. Chen, D. Zhang, How big are the increments of G-Brownian motion? Sci. China Math., 57 (2014), 1687–1700. https://doi.org/10.1007/s11425-014-4816-0 doi: 10.1007/s11425-014-4816-0
    [14] F. Gao, M. Xu, Large deviations and moderate deviations for independent random variables under sublinear expectations (Chinese), Scientia Sinica Mathematica, 41 (2011), 337–352. https://doi.org/10.1360/012009-879 doi: 10.1360/012009-879
    [15] A. Kuczmaszewska, Complete convergence for widely acceptable random variables under the sublinear expectations, J. Math. Anal. Appl., 484 (2020), 123662. https://doi.org/10.1016/j.jmaa.2019.123662 doi: 10.1016/j.jmaa.2019.123662
    [16] M. Xu, K. Cheng, Convergence for sums of iid random variables under sublinear expectations, J. Inequal. Appl., 2021 (2021), 157. https://doi.org/10.1186/s13660-021-02692-x doi: 10.1186/s13660-021-02692-x
    [17] M. Xu, K. Cheng, How small are the increments of G-Brownian motion, Stat. Probabil. Lett., 186 (2022), 109464. https://doi.org/10.1016/j.spl.2022.109464 doi: 10.1016/j.spl.2022.109464
    [18] L. Zhang, Strong limit theorems for extended independent random variables and extended negatively dependent random variables under sub-linear expectations, Acta Math. Sci., 42 (2022), 467–490. https://doi.org/10.1007/s10473-022-0203-z doi: 10.1007/s10473-022-0203-z
    [19] Z. Chen, Strong laws of large numbers for sub-linear expectations, Sci. China Math., 59 (2016), 945–954. https://doi.org/10.1007/s11425-015-5095-0 doi: 10.1007/s11425-015-5095-0
    [20] L. Zhang, On the laws of the iterated logarithm under sub-linear expectations, Probab. Uncertain. Qua., 6 (2021), 409–460. https://doi.org/10.3934/puqr.2021020 doi: 10.3934/puqr.2021020
    [21] X. Chen, Q. Wu, Complete convergence and complete integral convergence of partial sums for moving average process under sub-linear expectations, AIMS Mathematics, 7 (2022), 9694–9715. https://doi.org/10.3934/math.2022540 doi: 10.3934/math.2022540
    [22] J. Yan, Complete convergence and complete moment convergence for maximal weighted sums of extended negatively dependent random variables, Acta. Math. Sin.-English Ser., 34 (2018), 1501–1516. https://doi.org/10.1007/s10114-018-7133-7 doi: 10.1007/s10114-018-7133-7
    [23] P. Hsu, H. Robbins, Complete convergence and the law of large numbers, PNAS, 33 (1947), 25–31. https://doi.org/10.1073/pnas.33.2.25 doi: 10.1073/pnas.33.2.25
    [24] Y. Chow, On the rate of moment convergence of sample sums and extremes, Bull. Inst. Math. Acad. Sin., 16 (1988), 177–201.
    [25] S. Hosseini, A. Nezakati, Complete moment convergence for the dependent linear processes with random coefficients, Acta. Math. Sin.-English Ser., 35 (2019), 1321–1333. https://doi.org/10.1007/s10114-019-8205-z doi: 10.1007/s10114-019-8205-z
    [26] B. Meng, D. Wang, Q. Wu, Complete convergence and complete moment convergence for weighted sums of extended negatively dependent random variables, Commun. Stat.-Theor. M., 51 (2022), 3847–3863. https://doi.org/10.1080/03610926.2020.1804587 doi: 10.1080/03610926.2020.1804587
© 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)