In this article, we study complete convergence and complete moment convergence for maximal weighted sums of extended negatively dependent random variables under sub-linear expectations. We also give sufficient conditions for the convergence. Moreover, we obtain the Marcinkiewicz-Zygmund type strong law of large numbers for weighted sums of extended negatively dependent random variables. The results obtained in this paper generalize the corresponding ones in classical probability space.
Citation: Mingzhou Xu. Complete convergence and complete moment convergence for maximal weighted sums of extended negatively dependent random variables under sub-linear expectations[J]. AIMS Mathematics, 2023, 8(8): 19442-19460. doi: 10.3934/math.2023992
Peng [1,2] introduced the important concept of the sub-linear expectation space to describe uncertainty in probability. Stimulated by the works of Peng [1,2], many scholars have sought to establish results under sub-linear expectations analogous to those in classical probability space. Zhang [3,4] proved exponential inequalities and Rosenthal's inequality under sub-linear expectations. Xu et al. [5] and Xu and Kong [6] obtained complete convergence and complete moment convergence of weighted sums of negatively dependent random variables under sub-linear expectations. For more limit theorems under sub-linear expectations, the reader may refer to Zhang [7], Xu and Zhang [8,9], Wu and Jiang [10], Zhang and Lin [11], Zhong and Wu [12], Hu et al. [13], Gao and Xu [14], Kuczmaszewska [15], Xu and Cheng [16,17], Zhang [18], Chen [19], Zhang [20], Chen and Wu [21], Xu et al. [5], Xu and Kong [6], and the references therein.
In classical probability space, Yan [22] established complete convergence and complete moment convergence for maximal weighted sums of extended negatively dependent random variables. For references on complete moment convergence and complete convergence in probability space, the reader may consult Hsu and Robbins [23], Chow [24], Hosseini and Nezakati [25], Meng et al. [26] and the references therein. Stimulated by the works of Yan [22], Xu et al. [5], and Xu and Kong [6], we prove complete convergence and complete moment convergence for maximal weighted sums of extended negatively dependent random variables under sub-linear expectations, together with the corresponding Marcinkiewicz-Zygmund strong law of large numbers, which extends the corresponding results in Yan [22]. Another main innovation here is the Rosenthal-type inequality for maximal sums of extended negatively dependent random variables, provided in Lemma 2.3.
We organize the remainder of this article as follows. In Section 2 we present the relevant basic notions, concepts and properties, and give the lemmas needed under sub-linear expectations. In Section 3 we state our main results, Theorems 3.1–3.3, whose proofs are presented in Section 4.
Hereafter, we use notation similar to that in the works of Peng [2] and Zhang [4]. Assume that (Ω,F) is a given measurable space. Let H be a set of random variables on (Ω,F) such that φ(X1,⋯,Xn)∈H whenever X1,⋯,Xn∈H and φ∈Cl,Lip(Rn), where Cl,Lip(Rn) denotes the set of functions φ fulfilling
$$|\varphi(x)-\varphi(y)|\le C\,(1+|x|^{m}+|y|^{m})\,|x-y|,\quad \forall x,y\in\mathbb{R}^{n},$$
for some $C>0$ and $m\in\mathbb{N}$ depending on $\varphi$.
Definition 2.1. A sub-linear expectation E on H is a functional $E:\mathcal{H}\to\bar{\mathbb{R}}:=[-\infty,\infty]$ fulfilling the following: for every X,Y∈H,
(a) X≥Y implies E[X]≥E[Y];
(b) E[c]=c, ∀c∈R;
(c) E[λX]=λE[X], ∀λ≥0;
(d) E[X+Y]≤E[X]+E[Y] whenever E[X]+E[Y] is not of the form ∞−∞ or −∞+∞.
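As an illustrative aside (not from the paper), the canonical example of a sub-linear expectation is the upper expectation over a finite family of linear expectations, E[X]=max_θ E_θ[X]; properties (a)–(d) can then be checked numerically. The two-point sample space and the family of measures below are hypothetical choices:

```python
import numpy as np

# A sub-linear expectation realized as the upper expectation over a
# finite family of linear expectations: E[X] = max_theta E_theta[X].
# The family {P1, P2} on a 2-point space is purely illustrative.
P = np.array([[0.5, 0.5],
              [0.25, 0.75]])  # two probability vectors (dyadic, so arithmetic is exact)

def E(x):
    """Upper expectation of the random variable x over the family P."""
    return float(np.max(P @ np.asarray(x, dtype=float)))

X, Y = [1.0, 3.0], [2.0, 0.0]
assert E([2.0, 3.0]) >= E([1.0, 3.0])                  # (a) monotonicity
assert E([4.0, 4.0]) == 4.0                            # (b) constant preserving
assert E([2.0 * v for v in X]) == 2.0 * E(X)           # (c) positive homogeneity
assert E([x + y for x, y in zip(X, Y)]) <= E(X) + E(Y) # (d) sub-additivity
```

Note that such an E is generally not additive: sub-additivity (d) is what separates it from a classical expectation.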
Definition 2.2. We say that {Xn;n≥1} is stochastically dominated by a random variable X under (Ω,H,E) if there exists a constant C such that for all n≥1 and all non-negative h∈Cl,Lip(R), E(h(Xn))≤CE(h(X)).
V:F↦[0,1] is called a capacity if
(a)V(∅)=0, V(Ω)=1.
(b)V(A)≤V(B), A⊂B, A,B∈F.
Furthermore, if V is continuous, then V obeys
(c) An↑A yields V(An)↑V(A).
(d) An↓A yields V(An)↓V(A).
V is said to be sub-additive when V(A⋃B)≤V(A)+V(B), A,B∈F.
Under (Ω,H,E), set V(A):=inf{E[ξ]:IA≤ξ,ξ∈H}, ∀A∈F (cf. Zhang [3]). V is a sub-additive capacity. Write
$$C_V(X):=\int_{0}^{\infty}V(X>x)\,dx+\int_{-\infty}^{0}\big(V(X>x)-1\big)\,dx.$$
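As an illustrative aside (not from the paper), when V is the upper capacity of a finite family of probability measures and X≥0, the second integral vanishes and C_V(X) reduces to ∫_0^∞ V(X>x)dx, which can be approximated numerically. The two-measure family and the law of X below are hypothetical choices:

```python
import numpy as np

# Numerical sketch of C_V(X) = \int_0^\infty V(X > x) dx for X >= 0, where V
# is taken as the upper capacity V(A) = max_theta P_theta(A) of two discrete
# measures (illustrative choice; all numbers are hypothetical).
values = np.array([1.0, 2.0])            # the two values X can take
P = np.array([[0.5, 0.5],
              [0.25, 0.75]])             # two probability vectors

def V(x):
    """Upper capacity V(X > x) = max_theta P_theta(X > x)."""
    return float(np.max(P @ (values > x)))

xs = np.arange(0.0, values.max(), 1e-4)  # V(X > x) = 0 beyond max(values)
C_V = float(np.sum([V(x) for x in xs]) * 1e-4)  # left Riemann sum
# V(X > x) = 1 on [0, 1) and 0.75 on [1, 2), so C_V should be about 1.75
assert abs(C_V - 1.75) < 1e-2
```

Because V(X>x) is a maximum of non-increasing step functions, it is itself a non-increasing step function, so the Riemann sum converges quickly.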
Under the sub-linear expectation space (Ω,H,E), {Xn;n≥1} are said to be upper (resp. lower) extended negatively dependent if there is a constant K≥1 fulfilling
$$E\Big[\prod_{i=1}^{n}g_i(X_i)\Big]\le K\prod_{i=1}^{n}E[g_i(X_i)],\quad n\ge 1,$$
whenever the non-negative functions gi∈Cb,Lip(R), i=1,2,… are all non-decreasing (resp. all non-increasing) (cf. Definition 2.4 of Zhang [18]). They are named extended negatively dependent (END) if they are both upper extended negatively dependent and lower extended negatively dependent.
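As a sanity check (not part of the text), under an ordinary linear expectation, independent random variables satisfy the defining inequality with K=1, since the expectation of the product factorizes. The discrete laws and the bounded, non-decreasing Lipschitz functions g1, g2 below are hypothetical choices:

```python
# Exact discrete computation: for independent X1, X2 under a linear
# expectation, E[g1(X1) g2(X2)] = E[g1(X1)] E[g2(X2)], so the END
# inequality holds with K = 1.  Laws and g_i are illustrative.
px = {0.0: 0.3, 1.0: 0.7}              # law of X1
py = {-1.0: 0.4, 2.0: 0.6}             # law of X2
g1 = lambda t: min(max(t, 0.0), 1.0)   # non-negative, non-decreasing, Lipschitz
g2 = lambda t: min(max(t + 1.0, 0.0), 2.0)

lhs = sum(p * q * g1(x) * g2(y) for x, p in px.items() for y, q in py.items())
rhs = sum(p * g1(x) for x, p in px.items()) * sum(q * g2(y) for y, q in py.items())
assert abs(lhs - rhs) < 1e-12          # equality, hence the END bound with K = 1
```

The interesting content of the END definition is that it tolerates a uniform multiplicative constant K≥1 in place of this exact factorization.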
Suppose X1 and X2 are two n-dimensional random vectors under (Ω1,H1,E1) and (Ω2,H2,E2) respectively. They are said to be identically distributed if for every ψ∈Cl,Lip(Rn),
$$E_1[\psi(X_1)]=E_2[\psi(X_2)].$$
{Xn;n≥1} is said to be identically distributed if for every i≥1, Xi and X1 are identically distributed.
Throughout this paper, we suppose that E is countably sub-additive, i.e., E(X)≤∑∞n=1E(Xn) whenever X≤∑∞n=1Xn with X,Xn∈H and X≥0, Xn≥0, n=1,2,…. Write Sn=∑ni=1Xi, n≥1. Let C denote a positive constant which may change from line to line. I(A) or IA denotes the indicator function of A. The symbol ax≈bx means that there exist two positive constants C1, C2 fulfilling C1|bx|≤|ax|≤C2|bx|; x+ stands for max{x,0} and x−=(−x)+, for x∈R.
As in Zhang [18], if X1,X2,…,Xn are extended negatively dependent random variables and f1(x),f2(x),…,fn(x)∈Cl,Lip(R) are all non-increasing (or all non-decreasing) functions, then f1(X1), f2(X2),…,fn(Xn) are extended negatively dependent random variables.
We cite the following under sub-linear expectations.
Lemma 2.1. (Cf. Lemma 4.5 (iii) of Zhang [3]) If E is countably sub-additive under (Ω,H,E), then for X∈H,
$$E|X|\le C_V(|X|).$$
Lemma 2.2. Assume that p>1 and {Xn;n≥1} is a sequence of upper extended negatively dependent random variables with E[Xk]≤0, k≥1, under (Ω,H,E). Then for every n≥1, there exists a positive constant C=C(p) depending on p such that for p≥2,
$$E\Big[\Big(\Big(\sum_{j=1}^{n}X_j\Big)^{+}\Big)^{p}\Big]\le C_V\Big[\Big(\Big(\sum_{j=1}^{n}X_j\Big)^{+}\Big)^{p}\Big]\le C\Big\{\sum_{i=1}^{n}C_V[(X_i^{+})^{p}]+\Big(\sum_{i=1}^{n}E[X_i^{2}]\Big)^{p/2}\Big\}.\tag{2.1}$$
By (2.1) of Lemma 2.2 and a proof similar to that of Lemma 2.4 of Xu et al. [5], we obtain the following.
Lemma 2.3. Assume that p>1 and {Xn;n≥1} is a sequence of upper extended negatively dependent random variables with E[Xk]≤0, k≥1, under (Ω,H,E). Then for every n≥1, there exists a positive constant C=C(p) depending on p such that for p≥2,
$$E\Big[\max_{1\le i\le n}\Big(\Big(\sum_{j=1}^{i}X_j\Big)^{+}\Big)^{p}\Big]\le C(\log n)^{p}\Big\{\sum_{i=1}^{n}C_V[(X_i^{+})^{p}]+\Big(\sum_{i=1}^{n}E[X_i^{2}]\Big)^{p/2}\Big\}.\tag{2.2}$$
We next give two further lemmas.
Lemma 2.4. Suppose that 0<α<2, γ>0, and {ani,1≤i≤n,n≥1} is an array of real numbers fulfilling
$$\sum_{i=1}^{n}|a_{ni}|^{\alpha}=O(n).\tag{2.3}$$
Assume that {Xni,i≥1,n≥1} is stochastically dominated by a random variable X with CV{|X|α}<∞. Moreover, suppose that E(aniXni)=0 when 1<α<2, and set bn=n1/α(logn)3/γ. Then
$$\frac{1}{b_n}\max_{1\le j\le n}\sum_{i=1}^{j}E(Y_{ni})\le C(\log n)^{-3\alpha/\gamma}C_V\{|X|^{\alpha}\}\to 0,\quad n\to\infty,\tag{2.4}$$
where Yni=aniXniI(|aniXni|≤bn)+bnI(aniXni>bn)−bnI(aniXni<−bn) for each 1≤i≤n, n≥1.
Lemma 2.5. Assume that {ani,1≤i≤n,n≥1} is an array of real numbers fulfilling (2.3) and X∈H under the sub-linear expectation space (Ω,H,E). Suppose bn is as in Lemma 2.4.
(i) If p>max{α,γ(β+1)/3} for some β, then
$$\sum_{n=2}^{\infty}\frac{(\log n)^{\beta}}{n\,b_n^{p}}\sum_{i=1}^{n}\int_{0}^{b_n^{p}}V\{|a_{ni}X|^{p}>x\}\,dx\le\begin{cases}C\,C_V\{|X|^{\alpha}\},&\alpha>\gamma(\beta+1)/3,\\ C\,C_V\{|X|^{\alpha}\log(|X|+1)\},&\alpha=\gamma(\beta+1)/3,\\ C\,C_V\{|X|^{\gamma(\beta+1)/3}\},&\alpha<\gamma(\beta+1)/3.\end{cases}$$
(ii) If p=α, β=2, then
$$\sum_{n=2}^{\infty}\frac{(\log n)^{2}}{n\,b_n^{\alpha}}\sum_{i=1}^{n}\int_{b_n^{\alpha}}^{\infty}V\{|a_{ni}X|^{\alpha}>x\}\,dx\le\begin{cases}C\,C_V\{|X|^{\alpha}\},&\alpha>\gamma,\\ C\,C_V\{|X|^{\alpha}\log(1+|X|)\},&\alpha=\gamma,\\ C\,C_V\{|X|^{\gamma}\},&\alpha<\gamma.\end{cases}$$
Below are our main results.
Theorem 3.1. Suppose {Xni,i≥1,n≥1} is an array of rowwise END random variables which is stochastically dominated by X under (Ω,H,E). Assume that for some 0<α<2, 0<γ<2, {ani,1≤i≤n,n≥1} is an array of real numbers, all non-negative or all non-positive, fulfilling (2.3), and bn is as in Lemma 2.5. Moreover, suppose that E(aniXni)=0 when 1<α<2. If
$$\begin{cases}C_V\{|X|^{\alpha}\}<\infty,&\alpha>\gamma,\\ C_V\{|X|^{\alpha}\log(1+|X|)\}<\infty,&\alpha=\gamma,\\ C_V\{|X|^{\gamma}\}<\infty,&\alpha<\gamma,\end{cases}\tag{3.1}$$
then for all ε>0,
$$\sum_{n=2}^{\infty}\frac{1}{n}V\Big(\max_{1\le j\le n}\Big(\sum_{i=1}^{j}a_{ni}X_{ni}\Big)>\varepsilon b_n\Big)<\infty.\tag{3.2}$$
Similarly, if the condition E(−aniXni)=0 for 1<α<2 replaces E(aniXni)=0 for 1<α<2, then for all ε>0,
$$\sum_{n=2}^{\infty}\frac{1}{n}V\Big(\max_{1\le j\le n}\Big(\sum_{i=1}^{j}(-a_{ni}X_{ni})\Big)>\varepsilon b_n\Big)<\infty.\tag{3.3}$$
Moreover, suppose that E(Xni)=E(−Xni)=0 for 1<α<2. Then (3.1) implies
$$\sum_{n=2}^{\infty}\frac{1}{n}V\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j}a_{ni}X_{ni}\Big|>\varepsilon b_n\Big)<\infty\quad\text{for all }\varepsilon>0.\tag{3.4}$$
Theorem 3.2. Suppose {Xni,i≥1,n≥1} is an array of rowwise END random variables which is stochastically dominated by X under (Ω,H,E). Assume that for some 0<α<2, 0<γ<2, {ani,1≤i≤n,n≥1} is an array of real numbers, all non-negative or all non-positive, fulfilling (2.3), and bn is as in Lemma 2.5. Suppose (3.1) holds. Moreover, suppose that E(aniXni)=0 when 1<α<2. Then for 0<τ<α,
$$\sum_{n=2}^{\infty}\frac{1}{n}C_V\Big\{\Big(\frac{1}{b_n}\max_{1\le j\le n}\sum_{i=1}^{j}a_{ni}X_{ni}-\varepsilon\Big)_{+}^{\tau}\Big\}<\infty\quad\text{for all }\varepsilon>0.\tag{3.5}$$
Similarly, if the condition E(−aniXni)=0 for 1<α<2 replaces E(aniXni)=0 for 1<α<2, then for 0<τ<α,
$$\sum_{n=2}^{\infty}\frac{1}{n}C_V\Big\{\Big(\frac{1}{b_n}\max_{1\le j\le n}\sum_{i=1}^{j}(-a_{ni}X_{ni})-\varepsilon\Big)_{+}^{\tau}\Big\}<\infty\quad\text{for all }\varepsilon>0.\tag{3.6}$$
Moreover, if E(Xni)=E(−Xni)=0 for 1<α<2, then
$$\sum_{n=2}^{\infty}\frac{1}{n}C_V\Big\{\Big(\frac{1}{b_n}\max_{1\le j\le n}\Big|\sum_{i=1}^{j}a_{ni}X_{ni}\Big|-\varepsilon\Big)_{+}^{\tau}\Big\}<\infty\quad\text{for all }\varepsilon>0.\tag{3.7}$$
Remark 3.1. From (3.7) it follows that (3.4) holds. Hence the complete moment convergence implies the complete convergence.
Theorem 3.3. Under the conditions of Theorem 3.1, if the capacity V induced by E is countably sub-additive, then
$$V\Big(\limsup_{n\to\infty}\frac{\sum_{i=1}^{n}a_{ni}X_{ni}}{b_n}\le 0\Big)=1,\qquad V\Big(\limsup_{n\to\infty}\frac{\sum_{i=1}^{n}(-a_{ni}X_{ni})}{b_n}\le 0\Big)=1.$$
Moreover, if E(Xni)=E(−Xni)=0 for 1<α<2, then
$$V\Big(\limsup_{n\to\infty}\Big|\frac{\sum_{i=1}^{n}a_{ni}X_{ni}}{b_n}\Big|=0\Big)=1.$$
Proof of Lemma 2.4. We will investigate (2.4) in two cases.
Case I. 0<α≤1.
By Markov's inequality under sub-linear expectations and CV{|X|α}<∞, we see that
1bnmax1≤j≤nj∑i=1E(Yni)≤1bnmax1≤j≤nj∑i=1E(Y+ni)≤1bnn∑i=1E(Y+ni)≤C1bnn∑i=1E((Y′ni)+)≤Cbnn∑i=1CV{(Y′ni)+}≤Cbnn∑i=1∫bn0V{|aniX|>x}dx≤Cbnn∑i=1∫bn0E|aniX|αxαdx=Cbnb1−αnn∑i=1|ani|αCV{|X|α}≤C(logn)−3α/γCV{|X|α}→0, | (4.1) |
where Y′ni≡aniXI{|aniX|≤bn}+bnI{aniX>bn}−bnI{aniX<−bn}.
Case II. 1<α<2.
For all 1≤i≤n, n≥1, write
$$Z_{ni}=a_{ni}X_{ni}-Y_{ni}=(a_{ni}X_{ni}-b_n)I\{a_{ni}X_{ni}>b_n\}+(a_{ni}X_{ni}+b_n)I\{a_{ni}X_{ni}<-b_n\}.$$
Then 0<Zni=aniXni−bn<aniXni for aniXni>bn, aniXni<Zni=aniXni+bn<0 for aniXni<−bn. Therefore, |Zni|≤|aniXni|I{|aniXni|>bn}.
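As a quick illustration (not part of the proof), the truncation Y is exactly clipping aX to the interval [−bn, bn], and the stated bound |Z|≤|aX|I{|aX|>bn} can be checked numerically; the sample values below are hypothetical:

```python
import numpy as np

# The truncation Y = aX I(|aX|<=b) + b I(aX>b) - b I(aX<-b) is exactly
# clipping aX to [-b, b]; the remainder Z = aX - Y only keeps the
# overshoot beyond +-b.  Sample values are illustrative.
def truncate(ax, b):
    """Literal transcription of the three-term definition of Y."""
    return ax * (np.abs(ax) <= b) + b * (ax > b) - b * (ax < -b)

b = 2.0
ax = np.array([-5.0, -2.0, -0.5, 0.0, 1.5, 2.0, 7.0])
Y = truncate(ax, b)
assert np.allclose(Y, np.clip(ax, -b, b))
Z = ax - Y
assert np.all(np.abs(Z) <= np.abs(ax) * (np.abs(ax) > b))
```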
From E(aniXni)=0 for 1<α<2 and CV{|X|α}<∞ it follows that
1bnmax1≤j≤nj∑i=1E(Yni)≤1bnmax1≤j≤n|j∑i=1E(Yni)|≤1bnmax1≤j≤nj∑i=1|E(Yni)|≤1bnmax1≤j≤nj∑i=1|E(Yni)−E(aniXni)|≤1bnn∑i=1E|Zni|≤1bnn∑i=1E|Z′ni|≤1bnn∑i=1CV{|Z′ni|}≤1bnn∑i=1∫∞0V{|aniX|I{|aniX|>bn}>x}dx≤1bnn∑i=1∫∞0V{|aniX|αI{|aniX|>bn}>xbα−1n}dx≤1bnn∑i=1∫∞0V{|aniX|αI{|aniX|>bn}>x}dx/bα−1n≤1bαnn∑i=1|ani|αCV{|X|α}≤C(logn)−3α/γCV{|X|α}→0, asn→∞, | (4.2) |
where Z′ni≡(aniX−bn)I{aniX>bn}+(aniX+bn)I{aniX<−bn}. Combining (4.1) and (4.2) results in (2.4).
Proof of Lemma 2.5. Without loss of generality, we suppose that
$$\sum_{i=1}^{n}|a_{ni}|^{\alpha}\le n.$$
Write
$$I_{nj}:=\{1\le i\le n:\ n^{1/\alpha}(j+1)^{-1/\alpha}<|a_{ni}|\le n^{1/\alpha}j^{-1/\alpha}\}.$$
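The sets I_nj classify the indices i by the magnitude of |a_ni|: i∈I_nj precisely when j≤n/|a_ni|^α<j+1, i.e., j=⌊n/|a_ni|^α⌋, and the normalization ∑|a_ni|^α≤n forces #I_nj≤j+1, a bound used repeatedly below. A numerical sketch with hypothetical weights:

```python
import numpy as np

# Check that each index lands in exactly one I_nj (with j the floor of
# n / |a_ni|^alpha) and that #I_nj <= j + 1.  The weights are illustrative.
alpha, n = 1.5, 50
rng = np.random.default_rng(0)
a = rng.uniform(0.1, 1.0, size=n)
a *= (n / np.sum(np.abs(a) ** alpha)) ** (1.0 / alpha)  # enforce sum |a_ni|^alpha = n

def members(j):
    lo = n ** (1 / alpha) * (j + 1) ** (-1 / alpha)
    hi = n ** (1 / alpha) * j ** (-1 / alpha)
    return {i for i in range(n) if lo < abs(a[i]) <= hi}

for i in range(n):                     # membership at j = floor(n / |a_ni|^alpha)
    j = int(n / abs(a[i]) ** alpha)
    assert i in members(j)
for j in range(1, 2 * n):
    assert len(members(j)) <= j + 1    # the cardinality bound used in the proofs
```

The cardinality bound follows because every i∈I_nj contributes more than n/(j+1) to ∑|a_ni|^α≤n.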
Then by Lemma 2.4 of Yan [22], we have
∞∑n=2(logn)βnbpnn∑i=1∫bpn0V{|aniX|p>x}dx=∞∑n=2n−1−p/α(logn)β−3p/γn∑i=1|ani|p∫bpn/|ani|p0V{|X|p>x}dx=∞∑n=2n−1−p/α(logn)β−3p/γ∞∑j=1∑i∈Inj|ani|p∫bpn/|ani|p0V{|X|p>x}dx≤∞∑n=2n−1(logn)β−3p/γ∞∑j=1#Injj−p/α∫(j+1)p/α(logn)3p/γ0V{|X|p>x}dx=∞∑n=2n−1(logn)β−3p/γ∞∑j=1#Injj−p/αj∑k=0∫(k+1)p/α(logn)3p/γkp/α(logn)3p/γV{|X|p>x}dx≤∞∑n=2n−1(logn)β−3p/γ∞∑j=1#Injj−p/α∫(logn)3p/γ0V{|X|p>x}dx+∞∑n=2n−1(logn)β−3p/γ∞∑j=1#Injj−p/αj∑k=1∫(k+1)p/α(logn)3p/γkp/α(logn)3p/γV{|X|p>x}dx=:I1+I2. |
First, for I1, when α>γ(β+1)/3, from Lemma 2.4 of Yan [22] and p>α it follows that
I1≤C∞∑n=2n−1(logn)β−3p/γ∫(logn)3/γ0V{|X|>x}xp−1dx≤C∞∑n=2n−1(logn)β−3p/γ∫(logn)3/γ0V{|X|>x}xα−1(logn)3(p−α)/γdx≤C∞∑n=2n−1(logn)β−3α/γCV{|X|α}≤CCV{|X|α}. |
When α≤γ(β+1)/3, from Lemma 2.4 of Yan [22] and p>γ(β+1)/3 it follows that
I1≤C∞∑n=2n−1(logn)β−3p/γ∫(logn)3/γ0V{|X|>x}xp−1dx=C∞∑n=2n−1(logn)β−3p/γn∑m=2∫(logm)3/γ(log(m−1))3/γV{|X|>x}xp−1dx=C∞∑m=2∫(logm)3/γ(log(m−1))3/γV{|X|>x}xp−1dx∞∑n=mn−1(logn)β−3p/γ≤C∞∑m=2∫(logm)3/γ(log(m−1))3/γV{|X|>x}xp−1(logm)β−3p/γ+1dx≤C∞∑m=2∫(logm)3/γ(log(m−1))3/γV{|X|>x}xγ(β+1)3−1(logm)β−3p/γ+1(logm)3γ(p−γ(β+1)3)dx≤CCV{|X|γ(β+1)3}. |
Next, for I2, from Lemma 2.4 of Yan [22] and p>α it follows that
I2≤C∞∑n=2n−1(logn)β−3p/γ∞∑k=1∫(k+1)p/α(logn)3p/γkp/α(logn)3p/γV{|X|p>x}dx∞∑j=k#Injj−p/α≤C∞∑n=2n−1(logn)β−3p/γ∞∑k=1(k+1)1−p/α∫(k+1)p/α(logn)3p/γkp/α(logn)3p/γV{|X|p>x}dx≤C∞∑n=2n−1(logn)β−3p/γ∞∑k=1(k+1)1−p/α∫(k+1)1/α(logn)3/γk1/α(logn)3/γV{|X|>x}xα−1xp−αdx≤C∞∑n=2n−1(logn)β−3α/γ∞∑k=1∫(k+1)1/α(logn)3/γk1/α(logn)3/γV{|X|>x}xα−1dx≤C∞∑n=2n−1(logn)β−3α/γ∞∑k=1∫(k+1)1/α(logn)3/γk1/α(logn)3/γV{|X|>x}xα−1dx=C∞∑n=2n−1(logn)β−3α/γ∫∞(logn)3/γV{|X|>x}xα−1dx=C∞∑n=2n−1(logn)β−3α/γ∞∑m=n∫(log(m+1))3/γ(logm)3/γV{|X|>x}xα−1dx≤C∞∑m=2∫(log(m+1))3/γ(logm)3/γV{|X|>x}xα−1dxm∑n=2n−1(logn)β−3α/γ. |
Noting that
$$\sum_{n=2}^{m}n^{-1}(\log n)^{\beta-3\alpha/\gamma}\le\begin{cases}C,&\alpha>\gamma(\beta+1)/3,\\ C\log\log m,&\alpha=\gamma(\beta+1)/3,\\ C(\log m)^{\beta-3\alpha/\gamma+1},&\alpha<\gamma(\beta+1)/3,\end{cases}$$
we obtain
$$I_2\le\begin{cases}C\,C_V\{|X|^{\alpha}\},&\alpha>\gamma(\beta+1)/3,\\ C\,C_V\{|X|^{\alpha}\log(1+|X|)\},&\alpha=\gamma(\beta+1)/3,\\ C\,C_V\{|X|^{\gamma(\beta+1)/3}\},&\alpha<\gamma(\beta+1)/3.\end{cases}$$
Hence,
$$I=I_1+I_2\le\begin{cases}C\,C_V\{|X|^{\alpha}\},&\alpha>\gamma(\beta+1)/3,\\ C\,C_V\{|X|^{\alpha}\log(1+|X|)\},&\alpha=\gamma(\beta+1)/3,\\ C\,C_V\{|X|^{\gamma(\beta+1)/3}\},&\alpha<\gamma(\beta+1)/3.\end{cases}$$
The proof is completed.
Proof of Theorem 3.1. By (2.3) and ani=a+ni−a−ni, without loss of generality we may suppose that ani≥0 and ∑ni=1aαni≤n. We need only prove (3.2).
For fixed n≥1, write Zni as in the proof of Lemma 2.4, and
$$A=\bigcap_{i=1}^{n}\{Y_{ni}=a_{ni}X_{ni}\},\qquad B=\bar A=\bigcup_{i=1}^{n}\{Y_{ni}\ne a_{ni}X_{ni}\}=\bigcup_{i=1}^{n}\{|a_{ni}X_{ni}|>b_n\},$$
$$E_n=\Big\{\max_{1\le j\le n}\sum_{i=1}^{j}a_{ni}X_{ni}>\varepsilon b_n\Big\}.$$
We easily see that for all ε>0,
$$E_n=E_nA\cup E_nB\subset\Big\{\max_{1\le j\le n}\sum_{i=1}^{j}Y_{ni}>\varepsilon b_n\Big\}\cup\Big(\bigcup_{i=1}^{n}\{|a_{ni}X_{ni}|>b_n\}\Big).$$
Then by Lemma 2.4, for sufficiently large n,
V(En)≤V(max1≤j≤nj∑i=1Yni>εbn)+V(n⋃i=1{|aniXni|>bn})≤V(max1≤j≤nj∑i=1(Yni−EYni)>εbn−max1≤j≤nj∑i=1EYni)+V(n⋃i=1{|aniXni|>bn})≤V(max1≤j≤nj∑i=1(Yni−EYni)>εbn/2)+V(n⋃i=1{|aniXni|>bn}). | (4.3) |
In view of (4.3), to establish (3.2) we only need to prove that
J1:=∞∑n=21nV(max1≤j≤nj∑i=1(Yni−EYni)>εbn/2)<∞, | (4.4) |
J2:=∞∑n=21nn∑i=1V(|aniXni|>bn)<∞. | (4.5) |
For J1, note that {Yni−E(Yni),1≤i≤n,n≥1} is an array of rowwise END random variables under sub-linear expectations. Therefore, by Markov's inequality under sub-linear expectations, Lemmas 2.1 and 2.3, and a proof similar to that of (2.8) of Zhang [20], we obtain
J1≤∞∑n=21nV(max1≤j≤n(j∑i=1(Yni−EYni))+>εbn/2)≤C∞∑n=11nb2nE(max1≤j≤n((j∑i=1(Yni−EYni))+)2)≤C∞∑n=2(logn)2nb2n(n∑i=1CV{((Yni−E(Yni))+)2}+n∑i=1E((Yni−E(Yni))2))≤C∞∑n=2(logn)2nb2nn∑i=1CV{|Yni|2}+C∞∑n=2(logn)2nb2n(n∑i=1|E(Yni)|)2≤C∞∑n=2(logn)2nb2nn∑i=1CV{|Y′ni|2}+C∞∑n=2(logn)2nb2n(n∑i=1|E(Yni)|)2≤C∞∑n=2(logn)2nb2nn∑i=1∫b2n0V{|aniX|2>x}dx+C∞∑n=2(logn)2nb2n(n∑i=1|E(Yni)|)2=:J11+J12. |
By Lemma 2.5(i) (for p=β=2) and its proof, we have J11<∞.
For J12, when 0<α≤1, by Lemma 2.4 of Yan [22] and its proof, we see that
J12≤C∞∑n=2(logn)2nb2n(n∑i=1E(|Yni|))2≤C∞∑n=2(logn)2nb2n(n∑i=1E(|Y′ni|))2≤C∞∑n=2(logn)2nb2n(n∑i=1CV{|Y′ni|})2≤C∞∑n=2n−1−2/α(logn)2−6/γ(n∑i=1∫bn0V{|aniX|>x}dx)2≤C∞∑n=2n−1−2/α(logn)2−6/γ(∞∑j=1∑i∈Inj∫(logn)3/γn1/α0V{|aniX|>x}dx)2≤C∞∑n=2n−1−2/α(logn)2−6/γ(∞∑j=1#Inj∫(logn)3/γn1/α0V{|X|>xj1/α/n1/α}dx)2≤C∞∑n=2n−1−2/α(logn)2−6/γ(∞∑j=1#Inj∫(logn)3/γj1/α0V{|X|>x}n1/αj−1/αdx)2≤C∞∑n=2n−1(logn)2−6/γ(∞∑j=1#Injj−1/αj−1∑k=0∫(logn)3/γ(k+1)1/α(logn)3/γk1/αV{|X|>x}dx)2≤C∞∑n=2n−1(logn)2−6/γ(∞∑k=0∫(logn)3/γ(k+1)1/α(logn)3/γk1/αV{|X|>x}dx∞∑j=k+1#Injj−1/α)2≤C∞∑n=2n−1(logn)2−6/γ(∞∑k=0∫(logn)3/γ(k+1)1/α(logn)3/γk1/α(k+1)1−1/αV{|X|>x}dx)2≤J121+C∞∑n=2n−1(logn)2−6/γ×{(∑∞k=1∫(logn)3/γ(k+1)1/α(logn)3/γk1/αxα−1V{|X|>x}dx/(logn)3(α−1)/γ)2, forα≥γ,(∑∞k=1∫(logn)3/γ(k+1)1/α(logn)3/γk1/αxγ−1V{|X|>x}dx⋅k1−γ/α/(logn)3(γ−1)/γ)2, forα<γ,≤J121+{C∑∞n=2n−1(logn)2−6α/γ(CV{|X|α})2<∞, forα≥γ,C∑∞n=2n−1(logn)−4(CV{|X|γ})2<∞, forα<γ, |
where for 0<γ<2,
J121=C∞∑n=2n−1(logn)2−6/γ(∫(logn)3/γ0V{|X|>x}dx)2≤C∫∞2y−1(logy)2−6/γdy∫(logy)3/γ0V{|X|>z}dz∫z0V{|X|>x}dx≤C∫∞2y−1(logy)2−6/γdy∫(logy)3/γ0zV{|X|>z}dz≤C∫∞0zV{|X|>z}dz∫∞max{2,ezγ/3}y−1(logy)2−6/γdy≤C+C∫∞1zγ−1V{|X|>z}dz≤C+CCV{|X|γ}<∞. |
When 1<α<2, by E(aniXni)=0, we have
J12≤C∞∑n=2(logn)2nb2n(n∑i=1|E(Yni)−E(aniXni)|)2≤C∞∑n=2(logn)2nb2n(n∑i=1E(|Yni−aniXni|))2≤C∞∑n=2(logn)2nb2n(n∑i=1E(|Y′ni−aniX|))2≤C∞∑n=2(logn)2nb2n(n∑i=1CV(|Y′ni−aniX|))2≤C∞∑n=2(logn)2nb2n(n∑i=1CV{|aniX|I{|aniX|>bn}})2≤C∞∑n=2(logn)2−6/γn−1−2/α(n∑i=1|ani|CV{|X|I{|aniX|>bn}})2≤C∞∑n=2(logn)2−6/γn−1−2/α(∞∑j=1∑i∈Injn1/αj−1/α∫∞0V{|X|I{|X|>(logn)3/γj1/α}>x}dx)2≤C∞∑n=2(logn)2−6/γn−1−2/α(∞∑j=1∑i∈Injn1/αj−1/α∫(logn)3/γj1/α0V{|X|>(logn)3/γj1/α}dx)2+∞∑n=2(logn)2−6/γn−1−2/α(∞∑j=1∑i∈Injn1/αj−1/α∞∑k=j∫(logn)3/γ(k+1)1/α(logn)3/γk1/αV{|X|>x}dx)2=:J121+J122. |
By #Inj≤(j+1), we see that
J121≤C∞∑n=2(logn)2−6/γn−1−2/α(∞∑j=1#Injn1/α(logn)3/γV{|X|>(logn)3/γj1/α})2≤C∞∑n=2(logn)2−6/γn−1−2/α(∞∑j=1j⋅n1/α(logn)3/γV{|X|>(logn)3/γj1/α})2≤{C∑∞n=2(logn)2−6α/γn−1(∑∞j=1j⋅(logn)3α/γV{|X|α>(logn)3α/γj})2, forα>γ,C∑∞n=2(logn)2−6n−1(∑∞j=1jγ/α−1⋅(logn)3V{|X|γ>(logn)3jγ/α}j2−2γ/α)2, forα≤γ,≤{C∑∞n=2(logn)2−6α/γn−1(CV{|X|α})2<∞, forα>γ,C∑∞n=2(logn)−4n−1(CV{|X|γ})2<∞, forα≤γ. |
By ∑kj=1#Injj−1≤C, ∑kj=1#Injj−1/α≤∑kj=1#Injj−1/αj1−1/α≤Ck1−1/α, we see that
J122≤C∞∑n=2(logn)2−6/γn−1−2/α(∞∑j=1∑i∈Injn1/αj−1/α∞∑k=j∫(logn)3/γ(k+1)1/α(logn)3/γk1/αV{|X|>x}dx)2≤C∞∑n=2(logn)2−6/γn−1(∞∑k=1∫(logn)3/γ(k+1)1/α(logn)3/γk1/αV{|X|>x}dxk∑j=1#Injj−1/α)2≤C∞∑n=2(logn)2−6/γn−1(∞∑k=1∫(logn)3/γ(k+1)1/α(logn)3/γk1/αV{|X|>x}dx⋅k1−1/α)2≤{C∑∞n=2(logn)2−6α/γn−1(∑∞k=1∫(logn)3/γ(k+1)1/α(logn)3/γk1/αV{|X|>x}xα−1dx)2, forα>γ,C∑∞n=2(logn)2−6n−1(∑∞k=1∫(logn)3/γ(k+1)1/α(logn)3/γk1/αV{|X|>x}xγ−1dx⋅k1−1/α−(γ−1)/α)2, for α≤γ,≤{C∑∞n=2(logn)2−6α/γn−1(CV{|X|α})2, forα>γ,C∑∞n=2(logn)−4n−1(CV{|X|γ})2, forα≤γ. |
Hence, we prove that J1<∞.
Define gμ(x)∈Cl,Lip(R) such that I{|x|≤μ}≤gμ(|x|)≤I{|x|≤1}, for some 0<μ<1. Then I{|x|>μ}≥1−gμ(|x|)≥I{|x|>1}. For J2, by ∑∞j=1#Inj/(j+1)≤1, we see that
J2≤∞∑n=21nn∑i=1V(|aniXni|>n1/α(logn)3/γ)≤C∞∑n=21n∞∑j=1∑i∈InjE(1−gμ(|aniXnin1/α(logn)3/γ|))≤C∞∑n=21n∞∑j=1∑i∈InjE(1−gμ(|aniXn1/α(logn)3/γ|))≤C∞∑n=21n∞∑j=1#InjV{|X|>μj1/α(logn)3/γ}≤C∞∑n=21n∞∑j=1#Injj+1(j+1)V{|X|>μj1/α(logn)3/γ}≤C∞∑n=21nmaxy≥1y⋅V{|X|>μy1/α(logn)3/γ}≤{C∑∞n=21n⋅(logn)3α/γmaxy≥μαy(logn)3α/γ⋅V{|X|α>y(logn)3α/γ}, for α>γ,C∑∞n=21n(logn)3maxy≥μαyγ/α⋅V{|X|γ>yγ/α(logn)3}⋅y1−γ/α, for α≤γ,≤{C∑∞n=21n⋅(logn)3α/γ<∞, for α>γ,C∑∞n=21n(logn)3<∞, forα≤γ. |
The proof of Theorem 3.1 is finished.
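The smoothing function gμ used in the estimate of J2 above only needs to satisfy the sandwich I{|x|≤μ}≤gμ(|x|)≤I{|x|≤1}. One concrete piecewise-linear choice (our illustrative construction; the text only requires existence of such a function in Cl,Lip(R)) can be checked numerically:

```python
import numpy as np

# A hypothetical concrete g_mu: piecewise linear, equal to 1 on [0, mu]
# and 0 on [1, infinity), hence Lipschitz and sandwiched between the two
# indicators I{|x| <= mu} and I{|x| <= 1}.
mu = 0.5

def g(t):
    """Piecewise-linear g_mu on [0, inf): 1 on [0, mu], 0 on [1, inf)."""
    return float(np.clip((1.0 - t) / (1.0 - mu), 0.0, 1.0))

for x in np.linspace(-2.0, 2.0, 801):
    t = abs(x)
    # booleans coerce to 0/1, so this is the sandwich inequality
    assert (t <= mu) <= g(t) <= (t <= 1.0)
```

Consequently 1−gμ(|x|) is sandwiched between I{|x|>1} and I{|x|>μ}, which is exactly how gμ is used in bounding J2.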
Proof of Theorem 3.2. We only prove (3.5). For all ε>0, we see that
∞∑n=21nCV{(1bnmax1≤j≤nj∑i=1aniXni−ε)τ+}=∞∑n=21n∫∞0V{1bnmax1≤j≤nj∑i=1aniXni−ε>t1/τ}dt=∞∑n=21n∫10V{1bnmax1≤j≤nj∑i=1aniXni−ε>t1/τ}dt+∞∑n=21n∫∞1V{1bnmax1≤j≤nj∑i=1aniXni−ε>t1/τ}dt≤∞∑n=21nV{max1≤j≤nj∑i=1aniXni>εbn}+∞∑n=21n∫∞1V{max1≤j≤nj∑i=1aniXni>bnt1/τ}dt=:K1+K2. | (4.6) |
To establish (3.5), it is enough to prove that K1<∞ and K2<∞. By Theorem 3.1, we know K1<∞. For K2, for each 1≤i≤n, n≥1, and t≥1, write
Y′ni=aniXniI{|aniXni|≤bnt1/τ}+bnt1/τI{aniXni>bnt1/τ}−bnt1/τI{aniXni<−bnt1/τ}, |
Z′ni=aniXni−Y′ni,A′=n⋂i=1{Y′ni=aniXni}, |
B′=¯A′=n⋃i=1{Y′ni≠aniXni}=n⋃i=1{|aniXni|>bnt1/τ}, |
E′n={max1≤j≤nj∑i=1aniXni>bnt1/τ}. |
By Lemma 2.4, for all t≥1 and sufficiently large n, we obtain
V{E′n}≤V{max1≤j≤nj∑i=1Y′ni>bnt1/τ}+V{n⋃i=1{|aniXni|>bnt1/τ}}≤V{max1≤j≤nj∑i=1(Y′ni−E(Y′ni))>bnt1/τ−max1≤j≤nj∑i=1E(Y′ni)}+n∑i=1V{|aniXni|>bnt1/τ}≤V{max1≤j≤n(j∑i=1(Y′ni−E(Y′ni)))+>bnt1/τ/2}+n∑i=1V{|aniXni|>bnt1/τ}. | (4.7) |
To establish K2<∞, we only need to prove that
K21:=∞∑n=21n∫∞1V{max1≤j≤n(j∑i=1(Y′ni−E(Y′ni)))+>bnt1/τ/2}dt<∞, | (4.8) |
K22:=∞∑n=21n∫∞1n∑i=1V{|aniXni|>bnt1/τ}dt<∞. | (4.9) |
We know that {Y′ni−E(Y′ni),1≤i≤n,n≥1} is an array of rowwise END random variables under sub-linear expectations. Therefore, by Markov's inequality under sub-linear expectations, Lemmas 2.1 and 2.3, and a proof similar to that of (2.8) of Zhang [20], we obtain
K21≤C∞∑n=21n∫∞11b2nt2/τE{max1≤j≤n((j∑i=1(Y′ni−E(Y′ni)))+)2}dt≤C∞∑n=21n∫∞1(logn)2b2nt2/τ{n∑i=1CV{((Y′ni−E(Y′ni))+)2}+n∑i=1E[(Y′ni−E(Y′ni))2]}dt≤C∞∑n=21n∫∞1(logn)2b2nt2/τn∑i=1CV{(Y′ni)2}dt+C∞∑n=21n∫∞1(logn)2b2nt2/τ(n∑i=1|E[Y′ni]|)2dt≤C∞∑n=21n∫∞1(logn)2b2nt2/τn∑i=1∫bn0V(|aniX|>x)xdxdt+C∞∑n=21n∫∞1(logn)2b2nt2/τn∑i=1∫bnt1/τbnV(|aniX|>x)xdxdt+C∞∑n=21n∫∞1(logn)2b2nt2/τ(n∑i=1|E[Y′ni]|)2dt=:K211+K212+K213. | (4.10) |
By 0<τ<α<2 and Lemma 2.5 and its proof, we have
K211=C∞∑n=2(logn)2nb2nn∑i=1∫bn0V(|aniX|>x)xdx<∞. | (4.11) |
By the substitution t=xτ, Markov's inequality under sub-linear expectations, and Lemmas 2.1 and 2.5, we see that
K212≤C∞∑n=2(logn)2nb2n∫∞1xτ−3n∑i=1∫bnxbnV(|aniX|>y)ydydx≤C∞∑n=2(logn)2nb2n∞∑m=1∫m+1mxτ−3n∑i=1∫bnxbnV(|aniX|>y)ydydx≤C∞∑n=2(logn)2nb2n∞∑m=1mτ−3n∑i=1∫bn(m+1)bnV(|aniX|>y)ydy≤C∞∑n=2(logn)2nb2nn∑i=1∞∑m=1mτ−3m∑s=1∫bn(s+1)bnsV(|aniX|>y)ydy≤C∞∑n=2(logn)2nb2nn∑i=1∞∑s=1∫bn(s+1)bnsV(|aniX|>y)ydy∞∑m=smτ−3≤C∞∑n=2(logn)2nb2nn∑i=1∞∑s=1sτ−2∫bn(s+1)bnsV(|aniX|>y)ydy≤C∞∑n=2(logn)2nb2nn∑i=1∞∑s=1sτ−2∫bn(s+1)bnsV(|aniX|>y)yτ−1y2−τdy≤C∞∑n=2(logn)2nbτnn∑i=1∫∞bnV(|aniX|>y)yτ−1dy≤C∞∑n=2(logn)2nbτnn∑i=1∫∞bτnV(|aniX|τ>y)dy<∞. | (4.12) |
For K213, when 0<α≤1, by a proof similar to that of J12 in Theorem 3.1, we see that
K213≤C∞∑n=21n∫∞1(logn)2b2nt2/τ(n∑i=1E[|Y′ni|])2dt≤C∞∑n=21n∫∞1(logn)2b2nt2/τ(n∑i=1E[|aniX|I{|aniX|≤bnt1/τ}+bnt1/τI{|aniX|>bnt1/τ}])2dt≤C∞∑n=21n∫∞1(logn)2b2nt2/τ(n∑i=1CV{|aniX|I{|aniX|≤bnt1/τ}+bnt1/τI{|aniX|>bnt1/τ}})2dt≤C∞∑n=21n∫∞1(logn)2b2nt2/τ(∞∑j=1∑i∈Inj∫bnt1/τ0V{|X|>xn−1/αj1/α}dx)2dt≤C∞∑n=21n∫∞1(logn)2b2nt2/τ(∞∑j=1#Injn1/αj−1/α∫(logn)3/γj1/αt1/τ0V{|X|>x}dx)2dt≤C∞∑n=21n∫∞1(logn)2b2nt2/τ(∞∑j=1#Injn1/αj−1/αj−1∑k=0∫(logn)3/γ(k+1)1/αt1/τ(logn)3/γk1/αt1/τV{|X|>x}dx)2dt≤C∞∑n=21n∫∞1(logn)2b2nt2/τ(∞∑k=0∫(logn)3/γ(k+1)1/αt1/τ(logn)3/γk1/αt1/τV{|X|>x}dx∞∑j=k+1#Injn1/αj−1/α)2dt≤C∞∑n=21n∫∞1(logn)2(logn)6/γt2/τ(∞∑k=0∫(logn)3/γ(k+1)1/αt1/τ(logn)3/γk1/αt1/τV{|X|>x}dx(k+1)1−1/α)2dt≤K2130+C∞∑n=21n∫∞1(logn)2(logn)6/γt2/τ(∞∑k=1∫(logn)3/γ(k+1)1/αt1/τ(logn)3/γk1/αt1/τV{|X|>x}dx(k+1)1−1/α)2dt≤K2130+{C∑∞n=21n∫∞1(logn)2(logn)6α/γt2α/τ(∑∞k=1∫(logn)3/γ(k+1)1/αt1/τ(logn)3/γk1/αt1/τxα−1V{|X|>x}dx)2dt, forα>γ,C∑∞n=21n∫∞1(logn)2(logn)6t2γ/τ(∑∞k=1∫(logn)3/γ(k+1)1/αt1/τ(logn)3/γk1/αt1/τV{|X|>x}dxk1−α/γ)2dt, for α≤γ,≤K2130+{C∑∞n=21n∫∞1(logn)2(logn)6α/γt2α/τ(CV{|X|α})2dt<∞, for α>γ,C∑∞n=21n∫∞1(logn)2(logn)6t2γ/τ(CV{|X|γ})2dt<∞, for α≤γ, | (4.13) |
where
K2130=C∞∑n=21n∫∞1(logn)2(logn)6/γt2/τ(∫(logn)3/γt1/τ0V{|X|>x}dx)2dt≤C∫∞21y∫∞1(logy)2(logy)6/γt2/τdtdy∫(logy)3/γt1/τ0V{|X|>z}dz∫z0V{|X|>x}dx≤C∫∞21y∫∞1(logy)2(logy)6/γt2/τdtdy∫(logy)3/γt1/τ0zV{|X|>z}dz≤C∫∞0zV{|X|>z}dz∫∞1dt∫∞max{2,e(z/t1/τ)γ/3}(logy)2y(logy)6/γt2/τdy≤C∫∞0zV{|X|>z}dz∫∞1dt∫∞max{2,ezγ/3}(logy)2y(logy)6/γt2/τdy≤C∫∞0zγ−1V{|X|>z}dz∫∞1t−2/τdt≤CCV{|X|γ}<∞. |
When 1<α<2, by E(aniXni)=0 and a proof similar to that of J12 for 1<α<2 in Theorem 3.1, we have
K213≤C∞∑n=21n∫∞1(logn)2b2nt2/τ(n∑i=1|E[Y′ni]−E[aniXni]|)2dt≤C∞∑n=21n∫∞1(logn)2b2nt2/τ(n∑i=1E[|Y′ni−aniXni|])2dt≤C∞∑n=21n∫∞1(logn)2b2nt2/τ(n∑i=1E[|aniXni−bnt1/τ|I{|aniXni|>bnt1/τ}])2dt≤C∞∑n=21n∫∞1(logn)2b2nt2/τ(n∑i=1E[|aniX−bnt1/τ|I{|aniX|>bnt1/τ}])2dt≤C∞∑n=21n∫∞1(logn)2b2nt2/τ(n∑i=1CV{|aniX−bnt1/τ|I{|aniX|>bnt1/τ}})2dt≤C∞∑n=21n∫∞1(logn)2b2nt2/τ(n∑i=1CV{|aniX|I{|aniX|>bnt1/τ}})2dt≤C∞∑n=21n(logn)2b2n(n∑i=1CV{|aniX|I{|aniX|>bn}})2<∞. | (4.14) |
Combining (4.10)–(4.14) results in (4.8). By a proof similar to that of J2 in Theorem 3.1, for 0<μ<1, we have
K22≤C∞∑n=21n∫∞1n∑i=1V{|aniXni|>bnt1/τ}dt≤C∞∑n=21n∫∞1dt∞∑j=1#Injj+1(j+1)V{|X|>μj1/α(logn)3/γt1/τ}≤C∞∑n=21n∫∞1dtmaxy≥1y⋅V{|X|>μy1/α(logn)3/γt1/τ}≤{C∑∞n=21n(logn)3α/γ∫∞1t−α/τdtmaxy≥μαy(logn)3α/γtα/τ⋅V{|X|α>y(logn)3α/γtα/τ}, forα>γ,C∑∞n=21n(logn)3∫∞1t−γ/τdtmaxy≥μαyγ/α(logn)3tγ/τ⋅V{|X|γ>yγ/α(logn)3tγ/τ}⋅y1−γ/α, forα≤γ,≤{C∑∞n=21n(logn)3α/γ∫∞1t−α/τdt<∞, forα>γ,C∑∞n=21n(logn)3∫∞1t−γ/τdt<∞, for α≤γ. | (4.15) |
(4.15) together with (4.6)–(4.9) completes the proof of Theorem 3.2.
Proof of Theorem 3.3. The proof is similar to that of Corollary 3.1 of Xu and Kong [6], with Theorem 3.1 here in place of Theorem 3.1 of Xu and Kong [6] (cf. also the proof of Theorem 2.11 of Yan [22]); hence it is omitted. This completes the proof.
We have obtained new results on complete convergence and complete moment convergence for maximal weighted sums of extended negatively dependent random variables under sub-linear expectations. The results obtained in this article generalize those for extended negatively dependent random variables in classical probability space, and Theorems 3.1–3.3 complement the results of Xu et al. [5] and Xu and Kong [6] in some sense. In addition, Lemma 2.3 provides a Rosenthal-type inequality for maximal sums of extended negatively dependent random variables, which is another main innovation of this article.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
This study was supported by Science and Technology Research Project of Jiangxi Provincial Department of Education of China (No. GJJ2201041), Doctoral Scientific Research Starting Foundation of Jingdezhen Ceramic University (No. 102/01003002031), Academic Achievement Re-cultivation Project of Jingdezhen Ceramic University (Grant No. 215/20506277).
The author states no conflict of interest in this article.
[1] | S. Peng, G-expectation, G-Brownian motion and related stochastic calculus of Itô type, In: Stochastic analysis and applications, Berlin: Springer, 2007,541–567. https://doi.org/10.1007/978-3-540-70847-6_25 |
[2] | S. Peng, Nonlinear expectations and stochastic calculus under uncertainty, 1 Eds., Berlin: Springer, 2019. https://doi.org/10.1007/978-3-662-59903-7 |
[3] | L. Zhang, Exponential inequalities under the sub-linear expectations with applications to laws of the iterated logarithm, Sci. China Math., 59 (2016), 2503–2526. https://doi.org/10.1007/s11425-016-0079-1 |
[4] | L. Zhang, Rosenthal's inequalities for independent and negatively dependent random variables under sub-linear expectations with applications, Sci. China Math., 59 (2016), 751–768. https://doi.org/10.1007/s11425-015-5105-2 |
[5] | M. Xu, K. Cheng, W. Yu, Complete convergence for weighted sums of negatively dependent random variables under sub-linear expectations, AIMS Mathematics, 7 (2022), 19998–20019. https://doi.org/10.3934/math.20221094 |
[6] | M. Xu, X. Kong, Note on complete convergence and complete moment convergence for negatively dependent random variables under sub-linear expectations, AIMS Mathematics, 8 (2023), 8504–8521. https://doi.org/10.3934/math.2023428 |
[7] | L. Zhang, Donsker's invariance principle under the sub-linear expectation with an application to Chung's law of the iterated logarithm, Commun. Math. Stat., 3 (2015), 187–214. https://doi.org/10.1007/s40304-015-0055-0 |
[8] | J. Xu, L. Zhang, Three series theorem for independent random variables under sub-linear expectations with applications, Acta. Math. Sin.-English Ser., 35 (2019), 172–184. https://doi.org/10.1007/s10114-018-7508-9 |
[9] | J. Xu, L. Zhang, The law of logarithm for arrays of random variables under sub-linear expectations, Acta. Math. Sin.-English Ser., 36 (2020), 670–688. https://doi.org/10.1007/s10255-020-0958-8 |
[10] | Q. Wu, Y. Jiang, Strong law of large numbers and Chover's law of the iterated logarithm under sub-linear expectations, J. Math. Anal. Appl., 460 (2018), 252–270. https://doi.org/10.1016/j.jmaa.2017.11.053 |
[11] | L. Zhang, J. Lin, Marcinkiewicz's strong law of large numbers for nonlinear expectations, Stat. Probabil. Lett., 137 (2018), 269–276. https://doi.org/10.1016/j.spl.2018.01.022 |
[12] | H. Zhong, Q. Wu, Complete convergence and complete moment convergence for weighted sums of extended negatively dependent random variables under sub-linear expectation, J. Inequal. Appl., 2017 (2017), 261. https://doi.org/10.1186/s13660-017-1538-1 |
[13] | F. Hu, Z. Chen, D. Zhang, How big are the increments of G-Brownian motion? Sci. China Math., 57 (2014), 1687–1700. https://doi.org/10.1007/s11425-014-4816-0 |
[14] | F. Gao, M. Xu, Large deviations and moderate deviations for independent random variables under sublinear expectations (Chinese), Scientia Sinica Mathematica, 41 (2011), 337–352. https://doi.org/10.1360/012009-879 |
[15] | A. Kuczmaszewska, Complete convergence for widely acceptable random variables under the sublinear expectations, J. Math. Anal. Appl., 484 (2020), 123662. https://doi.org/10.1016/j.jmaa.2019.123662 |
[16] | M. Xu, K. Cheng, Convergence for sums of iid random variables under sublinear expectations, J. Inequal. Appl., 2021 (2021), 157. https://doi.org/10.1186/s13660-021-02692-x |
[17] | M. Xu, K. Cheng, How small are the increments of G-Brownian motion, Stat. Probabil. Lett., 186 (2022), 109464. https://doi.org/10.1016/j.spl.2022.109464 |
[18] | L. Zhang, Strong limit theorems for extended independent random variables and extended negatively dependent random variables under sub-linear expectations, Acta Math. Sci., 42 (2022), 467–490. https://doi.org/10.1007/s10473-022-0203-z |
[19] | Z. Chen, Strong laws of large numbers for sub-linear expectations, Sci. China Math., 59 (2016), 945–954. https://doi.org/10.1007/s11425-015-5095-0 |
[20] | L. Zhang, On the laws of the iterated logarithm under sub-linear expectations, Probab. Uncertain. Qua., 6 (2021), 409–460. https://doi.org/10.3934/puqr.2021020 |
[21] | X. Chen, Q. Wu, Complete convergence and complete integral convergence of partial sums for moving average process under sub-linear expectations, AIMS Mathematics, 7 (2022), 9694–9715. https://doi.org/10.3934/math.2022540 |
[22] | J. Yan, Complete convergence and complete moment convergence for maximal weighted sums of extended negatively dependent random variables, Acta. Math. Sin.-English Ser., 34 (2018), 1501–1516. https://doi.org/10.1007/s10114-018-7133-7 |
[23] | P. Hsu, H. Robbins, Complete convergence and the law of large numbers, PNAS, 33 (1947), 25–31. https://doi.org/10.1073/pnas.33.2.25 |
[24] | Y. Chow, On the rate of moment convergence of sample sums and extremes, Bull. Inst. Math. Acad. Sin., 16 (1988), 177–201. |
[25] | S. Hosseini, A. Nezakati, Complete moment convergence for the dependent linear processes with random coefficients, Acta. Math. Sin.-English Ser., 35 (2019), 1321–1333. https://doi.org/10.1007/s10114-019-8205-z |
[26] | B. Meng, D. Wang, Q. Wu, Complete convergence and complete moment convergence for weighted sums of extended negatively dependent random variables, Commun. Stat.-Theor. M., 51 (2022), 3847–3863. https://doi.org/10.1080/03610926.2020.1804587 |