Peng [1,2] introduced the seminal concepts of sub-linear expectation spaces to study uncertainty in probability. These works have stimulated many scholars to investigate results under sub-linear expectations that extend those of classical probability space. Zhang [3,4] obtained exponential inequalities and Rosenthal-type inequalities under sub-linear expectations. For more limit theorems under sub-linear expectations, the reader may refer to Zhang [5], Xu and Zhang [6,7], Wu and Jiang [8], Zhang and Lin [9], Zhong and Wu [10], Chen [11], Chen and Wu [12], Zhang [13], Hu et al. [14], Gao and Xu [15], Kuczmaszewska [16], Xu and Cheng [17,18,19], Xu et al. [20] and the references therein.
In classical probability space, Shen et al. [21] obtained equivalent conditions of complete convergence and complete moment convergence for extended negatively dependent random variables. For references on complete moment convergence and complete convergence in probability space, the reader may refer to Hsu and Robbins [22], Chow [23], Ko [24], Meng et al. [25], Hosseini and Nezakati [26], Meng et al. [27] and the references therein. Inspired by the work of Shen et al. [21], we investigate complete convergence and complete moment convergence for negatively dependent (ND) random variables under sub-linear expectations, together with a Marcinkiewicz-Zygmund type result for ND random variables under sub-linear expectations, which complements the relevant results of Shen et al. [21].
Recently, Srivastava et al. [28] introduced and studied the concept of statistical probability convergence. Srivastava et al. [29] investigated statistical probability convergence via the deferred Nörlund summability mean. For more recent work, the interested reader may refer to Srivastava et al. [30,31,32], Paikary et al. [33] and the references therein. We conjecture that the notions and results of statistical probability convergence can be extended to the sub-linear expectation setting.
The remainder of this article is organized as follows. Section 2 recalls the basic notions, concepts and properties, and presents the relevant lemmas under sub-linear expectations. Section 3 states our main results, Theorems 3.1 and 3.2, whose proofs are given in Section 4.
In this article, we use the notation of Peng [2] and Zhang [4]. Suppose that $(\Omega,\mathcal{F})$ is a given measurable space. Assume that $\mathcal{H}$ is a collection of random variables on $(\Omega,\mathcal{F})$ such that $\varphi(X_1,\cdots,X_n)\in\mathcal{H}$ for all $X_1,\cdots,X_n\in\mathcal{H}$ and all $\varphi\in C_{l,Lip}(\mathbb{R}^n)$, where $C_{l,Lip}(\mathbb{R}^n)$ denotes the space of functions $\varphi$ satisfying

$$|\varphi(x)-\varphi(y)|\le C\,(1+|x|^m+|y|^m)\,|x-y|,\qquad\forall x,y\in\mathbb{R}^n,$$

for some $C>0$ and $m\in\mathbb{N}$ depending on $\varphi$.
Definition 2.1. A sub-linear expectation $\mathbb{E}$ on $\mathcal{H}$ is a functional $\mathbb{E}:\mathcal{H}\mapsto\bar{\mathbb{R}}:=[-\infty,\infty]$ satisfying the following: for every $X,Y\in\mathcal{H}$,

(a) $X\ge Y$ implies $\mathbb{E}[X]\ge\mathbb{E}[Y]$;

(b) $\mathbb{E}[c]=c$, $\forall c\in\mathbb{R}$;

(c) $\mathbb{E}[\lambda X]=\lambda\mathbb{E}[X]$, $\forall\lambda\ge0$;

(d) $\mathbb{E}[X+Y]\le\mathbb{E}[X]+\mathbb{E}[Y]$ whenever $\mathbb{E}[X]+\mathbb{E}[Y]$ is not of the form $\infty-\infty$ or $-\infty+\infty$.
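A standard way to produce such a functional, and the mechanism behind Example 2.1 below, is to take an upper expectation over a family of probability measures. The following display is a quick check, added here for orientation rather than taken from the original development, that the axioms hold for this construction:

$$\mathbb{E}[X]:=\sup_{Q\in\mathcal{P}}E_Q[X]\quad\text{satisfies (a)--(d); e.g., for (d),}\quad\sup_{Q\in\mathcal{P}}E_Q[X+Y]\le\sup_{Q\in\mathcal{P}}E_Q[X]+\sup_{Q\in\mathcal{P}}E_Q[Y],$$

since each $E_Q$ is linear and monotone, properties (a)–(c) hold for each $E_Q$ before taking the supremum, and the supremum of a sum is at most the sum of the suprema.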
We call a set function $V:\mathcal{F}\mapsto[0,1]$ a capacity if

(a) $V(\emptyset)=0$, $V(\Omega)=1$;

(b) $V(A)\le V(B)$ whenever $A\subset B$, $A,B\in\mathcal{F}$.

Moreover, $V$ is said to be continuous if it satisfies

(c) $A_n\uparrow A$ implies $V(A_n)\uparrow V(A)$;

(d) $A_n\downarrow A$ implies $V(A_n)\downarrow V(A)$.

$V$ is said to be sub-additive if $V(A\cup B)\le V(A)+V(B)$ for all $A,B\in\mathcal{F}$.
Under the sub-linear expectation space $(\Omega,\mathcal{H},\mathbb{E})$, set $V(A):=\inf\{\mathbb{E}[\xi]:I_A\le\xi,\ \xi\in\mathcal{H}\}$, $\forall A\in\mathcal{F}$ (cf. Zhang [3,4,9,13], Chen and Wu [12], Xu et al. [20]); then $V$ is a sub-additive capacity. Set

$$V^{*}(A)=\inf\Big\{\sum_{n=1}^{\infty}V(A_n):A\subset\bigcup_{n=1}^{\infty}A_n\Big\},\qquad A\in\mathcal{F}.$$

By Definition 4.2 and Lemma 4.3 of Zhang [34], if $\mathbb{E}=E$ is a linear expectation, then $V^{*}$ coincides with the probability measure induced by $E$. As in Zhang [3], $V^{*}$ is countably sub-additive and $V^{*}(A)\le V(A)$. Hence, in Theorem 3.1 and Corollary 3.1, $V$ can be replaced by $V^{*}$, so the results here can be regarded as natural extensions of the corresponding results in classical probability space. Write

$$C_V(X):=\int_0^{\infty}V(X>x)\,dx+\int_{-\infty}^{0}\big(V(X>x)-1\big)\,dx.$$
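As a quick sanity check of this Choquet-type functional (an illustration added here, not part of the original text), take $X=I_A$ for $A\in\mathcal{F}$: the second integral vanishes because $V(I_A>x)=V(\Omega)=1$ for $x<0$, and

$$C_V(I_A)=\int_0^{\infty}V(I_A>x)\,dx=\int_0^{1}V(A)\,dx=V(A),$$

so on indicators $C_V$ reduces to the capacity itself, and for $X\ge0$ only the first integral contributes.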
Assume that $\mathbf{X}=(X_1,\cdots,X_m)$, $X_i\in\mathcal{H}$, and $\mathbf{Y}=(Y_1,\cdots,Y_n)$, $Y_i\in\mathcal{H}$, are two random vectors on $(\Omega,\mathcal{H},\mathbb{E})$. $\mathbf{Y}$ is said to be negatively dependent on $\mathbf{X}$ if for $\psi_1\in C_{l,Lip}(\mathbb{R}^m)$ and $\psi_2\in C_{l,Lip}(\mathbb{R}^n)$ we have $\mathbb{E}[\psi_1(\mathbf{X})\psi_2(\mathbf{Y})]\le\mathbb{E}[\psi_1(\mathbf{X})]\mathbb{E}[\psi_2(\mathbf{Y})]$ whenever $\psi_1(\mathbf{X})\ge0$, $\mathbb{E}[\psi_2(\mathbf{Y})]\ge0$, $\mathbb{E}[|\psi_1(\mathbf{X})\psi_2(\mathbf{Y})|]<\infty$, $\mathbb{E}[|\psi_1(\mathbf{X})|]<\infty$, $\mathbb{E}[|\psi_2(\mathbf{Y})|]<\infty$, and either $\psi_1$ and $\psi_2$ are both coordinatewise nondecreasing or both coordinatewise nonincreasing (cf. Definition 2.3 of Zhang [3], Definition 1.5 of Zhang [4]).

A sequence $\{X_n\}_{n=1}^{\infty}$ is said to be negatively dependent if $X_{n+1}$ is negatively dependent on $(X_1,\cdots,X_n)$ for each $n\ge1$. The existence of negatively dependent random variables $\{X_n\}_{n=1}^{\infty}$ under sub-linear expectations follows from Example 1.6 of Zhang [4] together with Kolmogorov's existence theorem in classical probability space. We give a concrete example below.
Example 2.1. Let $\mathcal{P}=\{Q_1,Q_2\}$ be a family of probability measures on $(\Omega,\mathcal{F})$. Suppose that $\{X_n\}_{n=1}^{\infty}$ are independent and identically distributed under each $Q_i$, $i=1,2$, with $Q_1(X_1=-1)=Q_1(X_1=1)=1/2$ and $Q_2(X_1=-1)=1$. Define $\mathbb{E}[\xi]=\sup_{Q\in\mathcal{P}}E_Q[\xi]$ for each random variable $\xi$. Then $\mathbb{E}[\cdot]$ is a sub-linear expectation. By the discussion of Example 1.6 of Zhang [4], $\{X_n\}_{n=1}^{\infty}$ are negatively dependent random variables under $\mathbb{E}$.
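For later reference in Remark 3.1, the sub-linear means in this example can be computed directly:

$$\mathbb{E}[X_1]=\max\{E_{Q_1}[X_1],E_{Q_2}[X_1]\}=\max\{0,-1\}=0,\qquad\mathbb{E}[-X_1]=\max\{E_{Q_1}[-X_1],E_{Q_2}[-X_1]\}=\max\{0,1\}=1.$$

Thus $\mathbb{E}(X_1)=0$ while $\mathbb{E}(-X_1)=1\ne0$, which is precisely the situation excluded by the assumption $\mathbb{E}(X)=\mathbb{E}(-X)=0$ in Theorem 3.1 below.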
Assume that $\mathbf{X}_1$ and $\mathbf{X}_2$ are two $n$-dimensional random vectors defined in the sub-linear expectation spaces $(\Omega_1,\mathcal{H}_1,\mathbb{E}_1)$ and $(\Omega_2,\mathcal{H}_2,\mathbb{E}_2)$, respectively. They are called identically distributed if for every $\psi\in C_{l,Lip}(\mathbb{R}^n)$,

$$\mathbb{E}_1[\psi(\mathbf{X}_1)]=\mathbb{E}_2[\psi(\mathbf{X}_2)].$$

A sequence $\{X_n\}_{n=1}^{\infty}$ is said to be identically distributed if for every $i\ge1$, $X_i$ and $X_1$ are identically distributed.
In this article we assume that $\mathbb{E}$ is countably sub-additive, i.e., $\mathbb{E}(X)\le\sum_{n=1}^{\infty}\mathbb{E}(X_n)$ whenever $X\le\sum_{n=1}^{\infty}X_n$ with $X,X_n\in\mathcal{H}$ and $X\ge0$, $X_n\ge0$, $n=1,2,\ldots$. Write $S_n=\sum_{i=1}^{n}X_i$, $n\ge1$. Let $C$ denote a positive constant whose value may vary from place to place. $I(A)$ or $I_A$ denotes the indicator function of $A$. The notation $a_x\approx b_x$ means that there exist positive constants $C_1$, $C_2$ such that $C_1|b_x|\le|a_x|\le C_2|b_x|$.
As in Zhang [4], it follows directly from the definition that if $X_1,X_2,\ldots,X_n$ are negatively dependent random variables and $f_1,f_2,\ldots,f_n$ are all nonincreasing (or all nondecreasing) functions, then $f_1(X_1),f_2(X_2),\ldots,f_n(X_n)$ are still negatively dependent random variables.
We next cite some useful inequalities under sub-linear expectations.
Lemma 2.1. (See Lemma 4.5 (III) of Zhang [3]) If $\mathbb{E}$ is countably sub-additive under $(\Omega,\mathcal{H},\mathbb{E})$, then for $X\in\mathcal{H}$,

$$\mathbb{E}|X|\le C_V(|X|).$$
Lemma 2.2. (See Lemmas 2.3 and 2.4 of Xu et al. [20] and Theorem 2.1 of Zhang [4]) Assume that $p\ge1$ and $\{X_n;n\ge1\}$ is a sequence of negatively dependent random variables under $(\Omega,\mathcal{H},\mathbb{E})$. Then there exists a positive constant $C=C(p)$ depending on $p$ such that

$$\mathbb{E}\Big[\Big|\sum_{j=1}^{n}X_j\Big|^{p}\Big]\le C\Big\{\sum_{i=1}^{n}\mathbb{E}|X_i|^{p}+\Big(\sum_{i=1}^{n}\big[|\mathbb{E}(-X_i)|+|\mathbb{E}(X_i)|\big]\Big)^{p}\Big\},\qquad 1\le p\le2,\qquad(2.1)$$

$$\mathbb{E}\Big[\max_{1\le i\le n}\Big|\sum_{j=1}^{i}X_j\Big|^{p}\Big]\le C(\log n)^{p}\Big\{\sum_{i=1}^{n}\mathbb{E}|X_i|^{p}+\Big(\sum_{i=1}^{n}\big[|\mathbb{E}(-X_i)|+|\mathbb{E}(X_i)|\big]\Big)^{p}\Big\},\qquad 1\le p\le2,\qquad(2.2)$$

$$\mathbb{E}\Big[\max_{1\le i\le n}\Big|\sum_{j=1}^{i}X_j\Big|^{p}\Big]\le C\Big\{\sum_{i=1}^{n}\mathbb{E}|X_i|^{p}+\Big(\sum_{i=1}^{n}\mathbb{E}X_i^{2}\Big)^{p/2}+\Big(\sum_{i=1}^{n}\big[|\mathbb{E}(-X_i)|+|\mathbb{E}(X_i)|\big]\Big)^{p}\Big\},\qquad p\ge2.\qquad(2.3)$$
Lemma 2.3. Assume that $X\in\mathcal{H}$, $\alpha>0$, $\gamma>0$ and $C_V(|X|^{\alpha})<\infty$. Then there exists a positive constant $C$ depending on $\alpha,\gamma$ such that

$$\int_0^{\infty}V\{|X|>\gamma y\}\,y^{\alpha-1}\,dy\le C\,C_V(|X|^{\alpha})<\infty.$$
Proof. Substituting $z=(\gamma y)^{\alpha}$, i.e., $y=z^{1/\alpha}/\gamma$ and $dy=z^{1/\alpha-1}\,dz/(\alpha\gamma)$, in the definite integral, and noting that the powers of $z$ cancel ($(\alpha-1)/\alpha+1/\alpha-1=0$), we get

$$\int_0^{\infty}V\{|X|>\gamma y\}\,y^{\alpha-1}\,dy=\int_0^{\infty}V\{|X|^{\alpha}>z\}\big(z^{1/\alpha}/\gamma\big)^{\alpha-1}\frac{z^{1/\alpha-1}}{\alpha\gamma}\,dz=\frac{1}{\alpha\gamma^{\alpha}}\int_0^{\infty}V\{|X|^{\alpha}>z\}\,dz\le C\,C_V(|X|^{\alpha})<\infty.$$
Lemma 2.4. Let $Y_n,Z_n\in\mathcal{H}$. Then for any $q>1$, $\varepsilon>0$ and $a>0$,

$$\mathbb{E}\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j}(Y_i+Z_i)\Big|-\varepsilon a\Big)^{+}\le\Big(\frac{1}{\varepsilon^{q}}+\frac{1}{q-1}\Big)\frac{1}{a^{q-1}}\,\mathbb{E}\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j}Y_i\Big|^{q}\Big)+\mathbb{E}\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j}Z_i\Big|\Big).\qquad(2.4)$$
Proof. The claim follows from Markov's inequality under sub-linear expectations, Lemma 2.1, and an argument similar to the proof of Lemma 2.4 of Sung [35]; the details are omitted.
Our main results are as follows.
Theorem 3.1. Suppose $\alpha>\frac12$ and $\alpha p>1$. Assume that $\{X_n,n\ge1\}$ is a sequence of negatively dependent random variables and that, for each $n\ge1$, $X_n$ is identically distributed as $X$ under the sub-linear expectation space $(\Omega,\mathcal{H},\mathbb{E})$. Moreover, assume $\mathbb{E}(X)=\mathbb{E}(-X)=0$ if $p\ge1$. Suppose $C_V(|X|^p)<\infty$. Then for all $\varepsilon>0$,

$$\sum_{n=1}^{\infty}n^{\alpha p-2}\,V\Big\{\max_{1\le j\le n}\Big|\sum_{i=1}^{j}X_i\Big|>\varepsilon n^{\alpha}\Big\}<\infty.\qquad(3.1)$$
Remark 3.1. By Example 2.1, the assumption $\mathbb{E}(X)=\mathbb{E}(-X)=0$ if $p\ge1$ in Theorem 3.1 cannot be weakened to $\mathbb{E}(X)=0$ if $p\ge1$. In fact, for the random variables of Example 2.1, if $\frac12<\alpha\le1$ and $\alpha p>1$, then for any $0<\varepsilon<1$,

$$\sum_{n=1}^{\infty}n^{\alpha p-2}V\Big\{\max_{1\le j\le n}\Big|\sum_{i=1}^{j}X_i\Big|>\varepsilon n^{\alpha}\Big\}\ge\sum_{n=1}^{\infty}n^{\alpha p-2}V\Big\{\max_{1\le j\le n}\Big|\sum_{i=1}^{j}X_i\Big|\ge n\Big\}=\sum_{n=1}^{\infty}n^{\alpha p-2}=+\infty,$$

which implies that Theorems 3.1 and 3.2 and Corollary 3.1 do not hold in this case. However, by Example 1.6 of Zhang [4], the assumptions of Theorem 3.3 and Corollary 3.2 hold for the random variables of Example 2.1, hence Theorem 3.3 and Corollary 3.2 remain valid in this example.
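Two steps in the displayed chain deserve a word of justification, spelled out here for the reader's convenience: the inclusion of events holds because $\varepsilon n^{\alpha}<n$ when $0<\varepsilon<1$ and $\alpha\le1$, and the capacity of the smaller event equals one because, under $Q_2$, every $X_i=-1$ almost surely, so that

$$V\Big\{\max_{1\le j\le n}\Big|\sum_{i=1}^{j}X_i\Big|\ge n\Big\}\ge Q_2\Big\{\Big|\sum_{i=1}^{n}X_i\Big|=n\Big\}=1,$$

where the first inequality uses $\mathbb{E}[\xi]\ge E_{Q_2}[\xi]\ge Q_2(A)$ for every $\xi\in\mathcal{H}$ with $\xi\ge I_A$.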
From Theorem 3.1 we obtain the following Marcinkiewicz-Zygmund strong law of large numbers for negatively dependent random variables under sub-linear expectations.
Corollary 3.1. Let $\alpha>\frac12$ and $\alpha p>1$. Assume that, under the sub-linear expectation space $(\Omega,\mathcal{H},\mathbb{E})$, $\{X_n\}$ is a sequence of negatively dependent random variables and for each $n$, $X_n$ is identically distributed as $X$. Moreover, assume $\mathbb{E}(X)=\mathbb{E}(-X)=0$ if $p\ge1$. Assume that the capacity $V$ induced by $\mathbb{E}$ is countably sub-additive. Suppose $C_V\{|X|^p\}<\infty$. Then

$$V\Big(\limsup_{n\to\infty}\frac{1}{n^{\alpha}}\Big|\sum_{i=1}^{n}X_i\Big|>0\Big)=0.\qquad(3.2)$$
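To read (3.2) in familiar terms: as noted in Section 2, when $\mathbb{E}=E$ is a linear expectation the induced capacity is a probability measure $P$, and (3.2) becomes the classical Marcinkiewicz-Zygmund statement

$$P\Big(\limsup_{n\to\infty}\frac{1}{n^{\alpha}}\Big|\sum_{i=1}^{n}X_i\Big|>0\Big)=0,\quad\text{i.e.,}\quad\frac{S_n}{n^{\alpha}}\longrightarrow0\quad P\text{-a.s.},$$

since for the nonnegative sequence $n^{-\alpha}|S_n|$ the event $\{\limsup_{n}n^{-\alpha}|S_n|>0\}$ is exactly the complement of $\{S_n/n^{\alpha}\to0\}$.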
Theorem 3.2. If the assumptions of Theorem 3.1 hold for $p\ge1$ and $C_V\{|X|^{p}\log^{\theta}|X|\}<\infty$ for some $\theta>\max\big\{\frac{\alpha p-1}{\alpha-\frac12},p\big\}$, then for any $\varepsilon>0$,

$$\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}\,\mathbb{E}\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j}X_i\Big|-\varepsilon n^{\alpha}\Big)^{+}<\infty.\qquad(3.3)$$
By an argument similar to the proof of Theorem 3.1, with Theorem 2.1 (b) for negatively dependent random variables of Zhang [4] (cf. the proof of Theorem 2.1 (c) there) in place of Lemma 2.2 here, we obtain the following result.
Theorem 3.3. Suppose $\alpha>\frac12$, $p\ge1$ and $\alpha p>1$. Assume that $X_k$ is negatively dependent on $(X_{k+1},\ldots,X_n)$ for each $k=1,\ldots,n$, $n\ge1$. Suppose that for each $n$, $X_n$ is identically distributed as $X$ under the sub-linear expectation space $(\Omega,\mathcal{H},\mathbb{E})$. Suppose $C_V(|X|^p)<\infty$. Then for all $\varepsilon>0$,

$$\sum_{n=1}^{\infty}n^{\alpha p-2}V\Big\{\max_{1\le j\le n}\sum_{i=1}^{j}\big[X_i-\mathbb{E}(X_i)\big]>\varepsilon n^{\alpha}\Big\}<\infty,$$

$$\sum_{n=1}^{\infty}n^{\alpha p-2}V\Big\{\max_{1\le j\le n}\sum_{i=1}^{j}\big[-X_i-\mathbb{E}(-X_i)\big]>\varepsilon n^{\alpha}\Big\}<\infty.$$
By an argument similar to the proof of Corollary 3.1, with Theorem 3.3 in place of Theorem 3.1, we obtain the following result.
Corollary 3.2. Let $\alpha>\frac12$, $p\ge1$ and $\alpha p>1$. Assume that $X_k$ is negatively dependent on $(X_{k+1},\ldots,X_n)$ for each $k=1,\ldots,n$, $n\ge1$. Suppose that for each $n$, $X_n$ is identically distributed as $X$ under the sub-linear expectation space $(\Omega,\mathcal{H},\mathbb{E})$. Assume that the capacity $V$ induced by $\mathbb{E}$ is countably sub-additive. Suppose $C_V\{|X|^p\}<\infty$. Then

$$V\Big(\Big\{\limsup_{n\to\infty}\frac{1}{n^{\alpha}}\sum_{i=1}^{n}\big[X_i-\mathbb{E}(X_i)\big]>0\Big\}\bigcup\Big\{\limsup_{n\to\infty}\frac{1}{n^{\alpha}}\sum_{i=1}^{n}\big[-X_i-\mathbb{E}(-X_i)\big]>0\Big\}\Big)=0.$$
By arguments similar to the proofs of Theorem 3.1 and Corollary 3.1, adapting the proof of (4.10), we obtain the following result.
Corollary 3.3. Suppose $\alpha>1$ and $p\ge1$. Assume that $\{X_n,n\ge1\}$ is a sequence of negatively dependent random variables and that, for each $n\ge1$, $X_n$ is identically distributed as $X$ under the sub-linear expectation space $(\Omega,\mathcal{H},\mathbb{E})$. Suppose $C_V(|X|^p)<\infty$. Then for all $\varepsilon>0$,

$$\sum_{n=1}^{\infty}n^{\alpha p-2}V\Big\{\max_{1\le j\le n}\sum_{i=1}^{j}\big[X_i-\mathbb{E}(X_i)\big]>\varepsilon n^{\alpha}\Big\}<\infty,$$

$$\sum_{n=1}^{\infty}n^{\alpha p-2}V\Big\{\max_{1\le j\le n}\sum_{i=1}^{j}\big[-X_i-\mathbb{E}(-X_i)\big]>\varepsilon n^{\alpha}\Big\}<\infty.$$

Moreover, if the capacity $V$ induced by $\mathbb{E}$ is countably sub-additive, then

$$V\Big(\Big\{\limsup_{n\to\infty}\frac{1}{n^{\alpha}}\sum_{i=1}^{n}\big[X_i-\mathbb{E}(X_i)\big]>0\Big\}\bigcup\Big\{\limsup_{n\to\infty}\frac{1}{n^{\alpha}}\sum_{i=1}^{n}\big[-X_i-\mathbb{E}(-X_i)\big]>0\Big\}\Big)=0.$$
By the discussion below Definition 4.1 of Zhang [34] and Corollary 3.2, we conjecture the following.
Conjecture 3.1. Suppose $\frac12<\alpha\le1$ and $\alpha p>1$. Assume that $\{X_n,n\ge1\}$ is a sequence of negatively dependent random variables and that, for each $n\ge1$, $X_n$ is identically distributed as $X$ under the sub-linear expectation space $(\Omega,\mathcal{H},\mathbb{E})$. Assume that the capacity $V$ induced by $\mathbb{E}$ is continuous. Suppose $C_V\{|X|^p\}<\infty$. Then

$$V\Big(\Big\{\limsup_{n\to\infty}\frac{1}{n^{\alpha}}\sum_{i=1}^{n}\big[X_i-\mathbb{E}(X_i)\big]>0\Big\}\bigcup\Big\{\limsup_{n\to\infty}\frac{1}{n^{\alpha}}\sum_{i=1}^{n}\big[-X_i-\mathbb{E}(-X_i)\big]>0\Big\}\Big)=0.$$
Proof of Theorem 3.1. We consider the following two cases.

Case 1: $0<p<1$.

For fixed $n\ge1$ and $1\le i\le n$, write

$$Y_{ni}=-n^{\alpha}I\{X_i<-n^{\alpha}\}+X_iI\{|X_i|\le n^{\alpha}\}+n^{\alpha}I\{X_i>n^{\alpha}\},$$

$$Z_{ni}=(X_i-n^{\alpha})I\{X_i>n^{\alpha}\}+(X_i+n^{\alpha})I\{X_i<-n^{\alpha}\},$$

$$Y_n=-n^{\alpha}I\{X<-n^{\alpha}\}+XI\{|X|\le n^{\alpha}\}+n^{\alpha}I\{X>n^{\alpha}\},$$

$$Z_n=X-Y_n.$$
Observing that $X_i=Y_{ni}+Z_{ni}$, we see that for all $\varepsilon>0$,

$$\begin{aligned}\sum_{n=1}^{\infty}n^{\alpha p-2}V\Big\{\max_{1\le j\le n}\Big|\sum_{i=1}^{j}X_i\Big|>\varepsilon n^{\alpha}\Big\}&\le\sum_{n=1}^{\infty}n^{\alpha p-2}V\Big\{\max_{1\le j\le n}\Big|\sum_{i=1}^{j}Y_{ni}\Big|>\varepsilon n^{\alpha}/2\Big\}\\&\quad+\sum_{n=1}^{\infty}n^{\alpha p-2}V\Big\{\max_{1\le j\le n}\Big|\sum_{i=1}^{j}Z_{ni}\Big|>\varepsilon n^{\alpha}/2\Big\}=:I_1+I_2.\end{aligned}\qquad(4.1)$$
By Markov's inequality under sub-linear expectations, the $C_r$ inequality, and Lemmas 2.1 and 2.3, we conclude that

$$\begin{aligned}I_1&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}\sum_{i=1}^{n}\mathbb{E}|Y_{ni}|=C\sum_{n=1}^{\infty}n^{\alpha p-1-\alpha}\mathbb{E}|Y_n|\le C\sum_{n=1}^{\infty}n^{\alpha p-1-\alpha}C_V(|Y_n|)\le C\sum_{n=1}^{\infty}n^{\alpha p-1-\alpha}\int_0^{n^{\alpha}}V\{|Y_n|>x\}\,dx\\&=C\sum_{n=1}^{\infty}n^{\alpha p-1-\alpha}\sum_{k=1}^{n}\int_{(k-1)^{\alpha}}^{k^{\alpha}}V\{|X|>x\}\,dx=C\sum_{k=1}^{\infty}\int_{(k-1)^{\alpha}}^{k^{\alpha}}V\{|X|>x\}\,dx\sum_{n=k}^{\infty}n^{\alpha p-1-\alpha}\\&\le C\sum_{k=1}^{\infty}k^{\alpha-1}V\{|X|>(k-1)^{\alpha}\}\,k^{\alpha p-\alpha}\le C\sum_{k=1}^{\infty}k^{\alpha p-1}V\{|X|>k^{\alpha}\}+C\\&\le C\int_0^{\infty}x^{\alpha p-1}V\{|X|>x^{\alpha}\}\,dx+C\le C\,C_V\{|X|^p\}+C<\infty,\end{aligned}\qquad(4.2)$$
and

$$\begin{aligned}I_2&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha p/2}\sum_{i=1}^{n}\mathbb{E}|Z_{ni}|^{p/2}\le C\sum_{n=1}^{\infty}n^{\alpha p/2-1}\mathbb{E}|Z_n|^{p/2}\le C\sum_{n=1}^{\infty}n^{\alpha p/2-1}C_V\{|Z_n|^{p/2}\}\le C\sum_{n=1}^{\infty}n^{\alpha p/2-1}C_V\{|X|^{p/2}I\{|X|>n^{\alpha}\}\}\\&\le C\sum_{n=1}^{\infty}n^{\alpha p/2-1}\Big[\int_0^{n^{\alpha}}V\{|X|>n^{\alpha}\}s^{p/2-1}\,ds+\int_{n^{\alpha}}^{\infty}V\{|X|>s\}s^{p/2-1}\,ds\Big]\\&\le C\sum_{n=1}^{\infty}n^{\alpha p-1}V\{|X|>n^{\alpha}\}+C\sum_{n=1}^{\infty}n^{\alpha p/2-1}\sum_{k=n}^{\infty}\int_{k^{\alpha}}^{(k+1)^{\alpha}}V\{|X|>s\}s^{p/2-1}\,ds\\&\le C\,C_V\{|X|^p\}+C\sum_{k=1}^{\infty}\int_{k^{\alpha}}^{(k+1)^{\alpha}}V\{|X|>s\}s^{p/2-1}\,ds\sum_{n=1}^{k}n^{\alpha p/2-1}\le C\,C_V\{|X|^p\}+C\sum_{k=1}^{\infty}V\{|X|>k^{\alpha}\}k^{\alpha p-1}\\&\le C\,C_V\{|X|^p\}+C\,C_V\{|X|^p\}<\infty.\end{aligned}\qquad(4.3)$$
Therefore, by (4.1)–(4.3), we deduce that (3.1) holds.
Case 2: $p\ge1$.

Since $\alpha p>1$, we may choose $q$ such that $\frac{1}{\alpha p}<q<1$. For fixed $n\ge1$ and $1\le i\le n$, write

$$X_{ni}^{(1)}=-n^{\alpha q}I\{X_i<-n^{\alpha q}\}+X_iI\{|X_i|\le n^{\alpha q}\}+n^{\alpha q}I\{X_i>n^{\alpha q}\},$$

$$X_{ni}^{(2)}=(X_i-n^{\alpha q})I\{X_i>n^{\alpha q}\},\qquad X_{ni}^{(3)}=(X_i+n^{\alpha q})I\{X_i<-n^{\alpha q}\},$$

and let $X_n^{(1)}$, $X_n^{(2)}$, $X_n^{(3)}$ be defined as $X_{ni}^{(1)}$, $X_{ni}^{(2)}$, $X_{ni}^{(3)}$, respectively, with $X$ in place of $X_i$. Observing that $\sum_{i=1}^{j}X_i=\sum_{i=1}^{j}X_{ni}^{(1)}+\sum_{i=1}^{j}X_{ni}^{(2)}+\sum_{i=1}^{j}X_{ni}^{(3)}$ for $1\le j\le n$, we see that for all $\varepsilon>0$,

$$\begin{aligned}\sum_{n=1}^{\infty}n^{\alpha p-2}V\Big\{\max_{1\le j\le n}\Big|\sum_{i=1}^{j}X_i\Big|>\varepsilon n^{\alpha}\Big\}&\le\sum_{n=1}^{\infty}n^{\alpha p-2}V\Big\{\max_{1\le j\le n}\Big|\sum_{i=1}^{j}X_{ni}^{(1)}\Big|>\varepsilon n^{\alpha}/3\Big\}+\sum_{n=1}^{\infty}n^{\alpha p-2}V\Big\{\max_{1\le j\le n}\Big|\sum_{i=1}^{j}X_{ni}^{(2)}\Big|>\varepsilon n^{\alpha}/3\Big\}\\&\quad+\sum_{n=1}^{\infty}n^{\alpha p-2}V\Big\{\max_{1\le j\le n}\Big|\sum_{i=1}^{j}X_{ni}^{(3)}\Big|>\varepsilon n^{\alpha}/3\Big\}=:II_1+II_2+II_3.\end{aligned}\qquad(4.4)$$
Therefore, to establish (3.1), it suffices to prove that $II_1<\infty$, $II_2<\infty$ and $II_3<\infty$.
For $II_1$, we first establish that

$$n^{-\alpha}\max_{1\le j\le n}\Big|\sum_{i=1}^{j}\mathbb{E}X_{ni}^{(1)}\Big|\to0,\qquad\text{as }n\to\infty.\qquad(4.5)$$
By $\mathbb{E}(X)=0$, Markov's inequality under sub-linear expectations and Lemma 2.1, we conclude that

$$\begin{aligned}n^{-\alpha}\max_{1\le j\le n}\Big|\sum_{i=1}^{j}\mathbb{E}X_{ni}^{(1)}\Big|&\le n^{-\alpha}\sum_{i=1}^{n}\big|\mathbb{E}(X_n^{(1)})\big|\le n^{1-\alpha}\big|\mathbb{E}(X_n^{(1)})-\mathbb{E}(X)\big|\le n^{1-\alpha}\mathbb{E}\big|X_n^{(1)}-X\big|\le n^{1-\alpha}C_V\big(|X_n^{(1)}-X|\big)\\&\le Cn^{1-\alpha}\int_0^{\infty}V\{|X|I\{|X|>n^{\alpha q}\}>x\}\,dx\le Cn^{1-\alpha}\Big[\int_0^{n^{\alpha q}}V\{|X|>n^{\alpha q}\}\,dx+\int_{n^{\alpha q}}^{\infty}V\{|X|>y\}\,dy\Big]\\&\le Cn^{1-\alpha+\alpha q}V\{|X|>n^{\alpha q}\}+Cn^{1-\alpha}\int_{n^{\alpha q}}^{\infty}V\{|X|>y\}\frac{y^{p-1}}{n^{\alpha q(p-1)}}\,dy\\&\le Cn^{1-\alpha+\alpha q}\frac{\mathbb{E}|X|^{p}}{n^{\alpha qp}}+Cn^{1-\alpha+\alpha q-\alpha qp}C_V\{|X|^p\}\le Cn^{1-\alpha qp-\alpha+\alpha q}C_V\{|X|^p\},\end{aligned}$$
which yields (4.5), since $C_V\{|X|^p\}<\infty$ and the exponent $1-\alpha qp-\alpha+\alpha q$ is negative: $\alpha qp>1$ and $\alpha q<\alpha$ because $\frac{1}{\alpha p}<q<1$. Thus, from (4.5), it follows that

$$II_1\le C\sum_{n=1}^{\infty}n^{\alpha p-2}V\Big\{\max_{1\le j\le n}\Big|\sum_{i=1}^{j}\big(X_{ni}^{(1)}-\mathbb{E}X_{ni}^{(1)}\big)\Big|>\frac{\varepsilon n^{\alpha}}{6}\Big\}.\qquad(4.6)$$
For fixed $n\ge1$, we note that $\{X_{ni}^{(1)}-\mathbb{E}X_{ni}^{(1)},1\le i\le n\}$ are negatively dependent random variables. By (4.6), Markov's inequality under sub-linear expectations and Lemma 2.2, we see that for any $\beta\ge2$,

$$\begin{aligned}II_1&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha\beta}\mathbb{E}\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j}\big(X_{ni}^{(1)}-\mathbb{E}X_{ni}^{(1)}\big)\Big|^{\beta}\Big)\\&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha\beta}\Big[\sum_{i=1}^{n}\mathbb{E}|X_{ni}^{(1)}|^{\beta}+\Big(\sum_{i=1}^{n}\mathbb{E}|X_{ni}^{(1)}|^{2}\Big)^{\beta/2}+\Big(\sum_{i=1}^{n}\big[|\mathbb{E}X_{ni}^{(1)}|+|\mathbb{E}(-X_{ni}^{(1)})|\big]\Big)^{\beta}\Big]=:II_{11}+II_{12}+II_{13}.\end{aligned}\qquad(4.7)$$
Taking $\beta>\max\big\{\frac{\alpha p-1}{\alpha-1/2},\,2,\,p,\,\frac{\alpha p-1}{\alpha qp-\alpha q+\alpha-1}\big\}$, we obtain

$$\alpha p-\alpha\beta+\alpha q\beta-\alpha pq-1=\alpha(p-\beta)(1-q)-1<-1,$$

$$\alpha p-2-\alpha\beta+\beta/2<-1,$$

and

$$\alpha p-2-\alpha\beta+\beta-\alpha q(p-1)\beta<-1.$$
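For completeness, the routine algebra behind these three inequalities (used above without comment) is as follows: the first holds because $\beta>p$ and $q<1$ make $\alpha(p-\beta)(1-q)<0$; the second and third are direct rearrangements of the lower bounds on $\beta$,

$$\beta>\frac{\alpha p-1}{\alpha-\frac12}\iff\beta\Big(\alpha-\frac12\Big)>\alpha p-1\iff\alpha p-2-\alpha\beta+\frac{\beta}{2}<-1,$$

$$\beta>\frac{\alpha p-1}{\alpha qp-\alpha q+\alpha-1}\iff\beta\big(\alpha q(p-1)+\alpha-1\big)>\alpha p-1\iff\alpha p-2-\alpha\beta+\beta-\alpha q(p-1)\beta<-1,$$

where the last rearrangement is legitimate because the denominator is positive: $\alpha q(p-1)+\alpha-1>\frac{p-1}{p}+\alpha-1=\alpha-\frac1p>0$, using $q>\frac{1}{\alpha p}$ and $\alpha p>1$.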
By the $C_r$ inequality, Markov's inequality under sub-linear expectations and Lemma 2.1, we see that

$$\begin{aligned}II_{11}&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha\beta}\sum_{i=1}^{n}\mathbb{E}|X_{ni}^{(1)}|^{\beta}\le C\sum_{n=1}^{\infty}n^{\alpha p-1-\alpha\beta}\mathbb{E}|X_n^{(1)}|^{\beta}\le C\sum_{n=1}^{\infty}n^{\alpha p-1-\alpha\beta}C_V\{|X_n^{(1)}|^{\beta}\}\\&=C\sum_{n=1}^{\infty}n^{\alpha p-1-\alpha\beta}\int_0^{n^{\alpha q\beta}}V\{|X|^{\beta}>x\}\,dx\le C\sum_{n=1}^{\infty}n^{\alpha p-1-\alpha\beta}\int_0^{n^{\alpha q}}V\{|X|>x\}x^{\beta-1}\,dx\\&\le C\sum_{n=1}^{\infty}n^{\alpha p-1-\alpha\beta}\int_0^{n^{\alpha q}}V\{|X|>x\}x^{p-1}n^{\alpha q(\beta-p)}\,dx\le C\sum_{n=1}^{\infty}n^{\alpha p-1-\alpha\beta+\alpha q\beta-\alpha qp}\,C_V\{|X|^p\}<\infty,\end{aligned}\qquad(4.8)$$

$$\begin{aligned}II_{12}&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha\beta}\Big(\sum_{i=1}^{n}\mathbb{E}|X_n^{(1)}|^{2}\Big)^{\beta/2}\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha\beta+\beta/2}\big(C_V\{|X_n^{(1)}|^{2}\}\big)^{\beta/2}\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha\beta+\beta/2}\Big(\int_0^{n^{\alpha q}}V\{|X|>x\}x\,dx\Big)^{\beta/2}\\&\le\begin{cases}C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha\beta+\beta/2}\big(C_V\{|X|^{2}\}\big)^{\beta/2},&\text{if }p\ge2;\\C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha\beta+\beta/2}\Big(\int_0^{n^{\alpha q}}V\{|X|>x\}x^{p-1}n^{\alpha q(2-p)}\,dx\Big)^{\beta/2},&\text{if }1\le p<2,\end{cases}\\&\le\begin{cases}C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha\beta+\beta/2}\big(C_V\{|X|^{2}\}\big)^{\beta/2}<\infty,&\text{if }p\ge2;\\C\sum_{n=1}^{\infty}n^{(\alpha p-1)(1-\beta/2)-1}\big(C_V(|X|^p)\big)^{\beta/2}<\infty,&\text{if }1\le p<2,\end{cases}\end{aligned}\qquad(4.9)$$

and

$$II_{13}\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha\beta}\Big(\sum_{i=1}^{n}\big[|\mathbb{E}X_n^{(1)}|+|\mathbb{E}(-X_n^{(1)})|\big]\Big)^{\beta}\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha\beta+\beta}\big(\mathbb{E}|X_n^{(1)}-X|\big)^{\beta}\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha\beta+\beta}\Big(\frac{\mathbb{E}|X|^{p}}{n^{\alpha q(p-1)}}\Big)^{\beta}\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha\beta+\beta-\alpha q(p-1)\beta}\big(C_V(|X|^p)\big)^{\beta}<\infty.\qquad(4.10)$$
Therefore, combining (4.7)–(4.10) yields $II_1<\infty$.
Next, we establish that $II_2<\infty$. Let $g_{\mu}(x)$ be a nonincreasing Lipschitz function such that $I\{x\le\mu\}\le g_{\mu}(x)\le I\{x\le1\}$, $\mu\in(0,1)$. Obviously, $I\{x>\mu\}\ge1-g_{\mu}(x)\ge I\{x>1\}$. For fixed $n\ge1$ and $1\le i\le n$, write

$$X_{ni}^{(4)}=(X_i-n^{\alpha q})I\{n^{\alpha q}<X_i\le n^{\alpha}+n^{\alpha q}\}+n^{\alpha}I\{X_i>n^{\alpha}+n^{\alpha q}\},$$

and

$$X_{n}^{(4)}=(X-n^{\alpha q})I\{n^{\alpha q}<X\le n^{\alpha}+n^{\alpha q}\}+n^{\alpha}I\{X>n^{\alpha}+n^{\alpha q}\}.$$
We see that

$$\Big\{\max_{1\le j\le n}\Big|\sum_{i=1}^{j}X_{ni}^{(2)}\Big|>\frac{\varepsilon n^{\alpha}}{3}\Big\}\subset\Big\{\max_{1\le i\le n}|X_i|>n^{\alpha}\Big\}\bigcup\Big\{\max_{1\le j\le n}\Big|\sum_{i=1}^{j}X_{ni}^{(4)}\Big|>\frac{\varepsilon n^{\alpha}}{3}\Big\},$$

which results in

$$II_2\le\sum_{n=1}^{\infty}n^{\alpha p-2}\sum_{i=1}^{n}V\{|X_i|>n^{\alpha}\}+\sum_{n=1}^{\infty}n^{\alpha p-2}V\Big\{\max_{1\le j\le n}\Big|\sum_{i=1}^{j}X_{ni}^{(4)}\Big|>\frac{\varepsilon n^{\alpha}}{3}\Big\}=:II_{21}+II_{22}.\qquad(4.11)$$
By $C_V\{|X|^p\}<\infty$, we conclude that

$$II_{21}\le C\sum_{n=1}^{\infty}n^{\alpha p-2}\sum_{i=1}^{n}\mathbb{E}\big[1-g_{\mu}(|X_i|/n^{\alpha})\big]=C\sum_{n=1}^{\infty}n^{\alpha p-1}\mathbb{E}\big[1-g_{\mu}(|X|/n^{\alpha})\big]\le C\sum_{n=1}^{\infty}n^{\alpha p-1}V\{|X|>\mu n^{\alpha}\}\le C\int_0^{\infty}x^{\alpha p-1}V\{|X|>\mu x^{\alpha}\}\,dx+C\le C\,C_V(|X|^p)+C<\infty.\qquad(4.12)$$
Observing that $\frac{1}{\alpha p}<q<1$, it follows from the definition of $X_{ni}^{(4)}$ that

$$\begin{aligned}n^{-\alpha}\max_{1\le j\le n}\Big|\sum_{i=1}^{j}\mathbb{E}X_{ni}^{(4)}\Big|&\le Cn^{1-\alpha}\mathbb{E}|X_n^{(4)}|\le Cn^{1-\alpha}C_V(|X_n^{(4)}|)\le Cn^{1-\alpha}\Big[\int_0^{n^{\alpha q}}V\{|X|I\{|X|>n^{\alpha q}\}>x\}\,dx+\int_{n^{\alpha q}}^{\infty}V\{|X|>x\}\,dx\Big]\\&\le Cn^{1-\alpha+\alpha q}\frac{\mathbb{E}|X|^{p}}{n^{\alpha pq}}+Cn^{1-\alpha}\int_{n^{\alpha q}}^{\infty}V\{|X|>x\}\frac{x^{p-1}}{n^{\alpha q(p-1)}}\,dx\le Cn^{1-\alpha+\alpha q-\alpha pq}C_V\{|X|^p\}\to0\quad\text{as }n\to\infty.\end{aligned}\qquad(4.13)$$
Since $X_{ni}^{(4)}\ge0$, by (4.11)–(4.13) we see that

$$II_{22}\le C\sum_{n=1}^{\infty}n^{\alpha p-2}V\Big\{\Big|\sum_{i=1}^{n}\big[X_{ni}^{(4)}-\mathbb{E}X_{ni}^{(4)}\big]\Big|>\frac{\varepsilon n^{\alpha}}{6}\Big\}.\qquad(4.14)$$
For fixed $n\ge1$, we know that $\{X_{ni}^{(4)}-\mathbb{E}X_{ni}^{(4)},1\le i\le n\}$ are negatively dependent random variables under sub-linear expectations. By Markov's inequality under sub-linear expectations, the $C_r$ inequality and Lemma 2.2, we obtain (writing $J_1,J_2,J_3$ for the three resulting terms, to avoid clashing with the notation of (4.11))

$$II_{22}\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha\beta}\mathbb{E}\Big(\Big|\sum_{i=1}^{n}\big[X_{ni}^{(4)}-\mathbb{E}X_{ni}^{(4)}\big]\Big|^{\beta}\Big)\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha\beta}\Big[\sum_{i=1}^{n}\mathbb{E}|X_{ni}^{(4)}|^{\beta}+\Big(\sum_{i=1}^{n}\mathbb{E}(X_{ni}^{(4)})^{2}\Big)^{\beta/2}+\Big(\sum_{i=1}^{n}\big[|\mathbb{E}X_{ni}^{(4)}|+|\mathbb{E}(-X_{ni}^{(4)})|\big]\Big)^{\beta}\Big]=:J_1+J_2+J_3.\qquad(4.15)$$
By the $C_r$ inequality and Lemma 2.3, we have

$$\begin{aligned}J_1&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha\beta}\sum_{i=1}^{n}\mathbb{E}|X_{n}^{(4)}|^{\beta}\le C\sum_{n=1}^{\infty}n^{\alpha p-1-\alpha\beta}C_V\{|X_n^{(4)}|^{\beta}\}\le C\sum_{n=1}^{\infty}n^{\alpha p-1-\alpha\beta}C_V\big\{|X|^{\beta}I\{n^{\alpha q}<X\le n^{\alpha}+n^{\alpha q}\}+n^{\alpha\beta}I\{X>n^{\alpha}+n^{\alpha q}\}\big\}\\&\le C\sum_{n=1}^{\infty}n^{\alpha p-1-\alpha\beta}\int_0^{2n^{\alpha}}V\{|X|>x\}x^{\beta-1}\,dx\le C\sum_{n=1}^{\infty}n^{\alpha p-1-\alpha\beta}\sum_{k=1}^{n}\int_{2(k-1)^{\alpha}}^{2k^{\alpha}}V\{|X|>x\}x^{\beta-1}\,dx\\&\le C\sum_{k=1}^{\infty}V\{|X|>2(k-1)^{\alpha}\}k^{\alpha\beta-1}\sum_{n=k}^{\infty}n^{\alpha p-1-\alpha\beta}\le C\sum_{k=1}^{\infty}V\{|X|>2(k-1)^{\alpha}\}k^{\alpha p-1}\\&\le C\int_0^{\infty}V\{|X|>2x^{\alpha}\}x^{\alpha p-1}\,dx+C\le C\,C_V\{|X|^p\}+C<\infty.\end{aligned}\qquad(4.16)$$
As in the proofs of (4.9) and (4.16), we can deduce that $J_2<\infty$.
By Lemma 2.1, we see that

$$J_3\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha\beta}\,n^{\beta}\big(\mathbb{E}|X_n^{(4)}|\big)^{\beta}\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha\beta+\beta}\Big(\frac{\mathbb{E}|X|^{p}}{n^{\alpha q(p-1)}}\Big)^{\beta}\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha\beta+\beta-\alpha q(p-1)\beta}\big(C_V\{|X|^p\}\big)^{\beta}<\infty.\qquad(4.17)$$
By (4.15)–(4.17), we deduce that $II_{22}<\infty$, and hence, by (4.11) and (4.12), $II_2<\infty$.
As in the proof of $II_2<\infty$, we can also obtain $II_3<\infty$. Therefore, combining (4.4) with $II_1<\infty$, $II_2<\infty$ and $II_3<\infty$ yields (3.1). This finishes the proof.
Proof of Corollary 3.1. By $C_V\{|X|^p\}<\infty$ and Theorem 3.1, we deduce that for all $\varepsilon>0$,

$$\sum_{n=1}^{\infty}n^{\alpha p-2}V\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j}X_i\Big|>\varepsilon n^{\alpha}\Big)<\infty.\qquad(4.18)$$
By (4.18), we conclude that for any $\varepsilon>0$,

$$\begin{aligned}\infty&>\sum_{n=1}^{\infty}n^{\alpha p-2}V\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j}X_i\Big|>\varepsilon n^{\alpha}\Big)=\sum_{k=0}^{\infty}\sum_{n=2^{k}}^{2^{k+1}-1}n^{\alpha p-2}V\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j}X_i\Big|>\varepsilon n^{\alpha}\Big)\\&\ge\begin{cases}\sum_{k=0}^{\infty}(2^{k})^{\alpha p-2}\,2^{k}\,V\Big(\max_{1\le j\le 2^{k}}\Big|\sum_{i=1}^{j}X_i\Big|>\varepsilon 2^{(k+1)\alpha}\Big),&\text{if }\alpha p\ge2,\\[2pt]\sum_{k=0}^{\infty}(2^{k+1})^{\alpha p-2}\,2^{k}\,V\Big(\max_{1\le j\le 2^{k}}\Big|\sum_{i=1}^{j}X_i\Big|>\varepsilon 2^{(k+1)\alpha}\Big),&\text{if }1<\alpha p<2,\end{cases}\\&\ge\begin{cases}\sum_{k=0}^{\infty}V\Big(\max_{1\le j\le 2^{k}}\Big|\sum_{i=1}^{j}X_i\Big|>\varepsilon 2^{(k+1)\alpha}\Big),&\text{if }\alpha p\ge2,\\[2pt]\sum_{k=0}^{\infty}\frac12\,V\Big(\max_{1\le j\le 2^{k}}\Big|\sum_{i=1}^{j}X_i\Big|>\varepsilon 2^{(k+1)\alpha}\Big),&\text{if }1<\alpha p<2,\end{cases}\end{aligned}$$
which, combined with the Borel-Cantelli lemma under sub-linear expectations, yields

$$V\Big(\limsup_{k\to\infty}\frac{\max_{1\le j\le 2^{k}}\big|\sum_{i=1}^{j}X_i\big|}{2^{(k+1)\alpha}}>0\Big)=0.\qquad(4.19)$$
For every positive integer $n$ there exists a positive integer $k$ such that $2^{k-1}\le n<2^{k}$, and then

$$n^{-\alpha}\Big|\sum_{i=1}^{n}X_i\Big|\le\max_{2^{k-1}\le m\le 2^{k}}m^{-\alpha}\Big|\sum_{i=1}^{m}X_i\Big|\le 2^{2\alpha}\,\frac{\max_{1\le j\le 2^{k}}\big|\sum_{i=1}^{j}X_i\big|}{2^{(k+1)\alpha}},$$

which yields (3.2). This completes the proof.
Proof of Theorem 3.2. For fixed $n\ge1$ and $1\le i\le n$, write

$$Y_{ni}=-n^{\alpha}I\{X_i<-n^{\alpha}\}+X_iI\{|X_i|\le n^{\alpha}\}+n^{\alpha}I\{X_i>n^{\alpha}\},$$

$$Z_{ni}=X_i-Y_{ni}=(X_i-n^{\alpha})I\{X_i>n^{\alpha}\}+(X_i+n^{\alpha})I\{X_i<-n^{\alpha}\},$$

and

$$Y_{n}=-n^{\alpha}I\{X<-n^{\alpha}\}+XI\{|X|\le n^{\alpha}\}+n^{\alpha}I\{X>n^{\alpha}\},$$

$$Z_{n}=X-Y_{n}=(X-n^{\alpha})I\{X>n^{\alpha}\}+(X+n^{\alpha})I\{X<-n^{\alpha}\}.$$
From Lemma 2.4 it follows that for any $\beta>1$,

$$\begin{aligned}\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}\mathbb{E}\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j}X_i\Big|-\varepsilon n^{\alpha}\Big)^{+}&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha\beta}\mathbb{E}\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j}(Y_{ni}-\mathbb{E}Y_{ni})\Big|^{\beta}\Big)\\&\quad+\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}\mathbb{E}\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j}(Z_{ni}-\mathbb{E}Z_{ni})\Big|\Big)=:III_1+III_2.\end{aligned}\qquad(4.20)$$
Noticing that $|Z_n|\le(|X|-n^{\alpha})I\{|X|>n^{\alpha}\}\le|X|I\{|X|>n^{\alpha}\}$, by Lemma 2.3 we see that

$$\begin{aligned}III_2&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha}\sum_{i=1}^{n}\mathbb{E}|Z_{ni}|\le C\sum_{n=1}^{\infty}n^{\alpha p-1-\alpha}\mathbb{E}|Z_n|\le C\sum_{n=1}^{\infty}n^{\alpha p-1-\alpha}C_V\{|Z_n|\}\le C\sum_{n=1}^{\infty}n^{\alpha p-1-\alpha}C_V\{|X|I(|X|>n^{\alpha})\}\\&\le C\sum_{n=1}^{\infty}n^{\alpha p-1-\alpha}\Big[\int_0^{n^{\alpha}}V\{|X|>n^{\alpha}\}\,dx+\int_{n^{\alpha}}^{\infty}V\{|X|>x\}\,dx\Big]\\&\le C\sum_{n=1}^{\infty}n^{\alpha p-1}V\{|X|>n^{\alpha}\}+C\sum_{n=1}^{\infty}n^{\alpha p-1-\alpha}\sum_{k=n}^{\infty}\int_{k^{\alpha}}^{(k+1)^{\alpha}}V\{|X|>x\}\,dx\\&\le C\,C_V\{|X|^p\}+C\sum_{k=1}^{\infty}V\{|X|>k^{\alpha}\}k^{\alpha-1}\sum_{n=1}^{k}n^{\alpha p-1-\alpha}\\&\le\begin{cases}C\,C_V\{|X|^p\}+C\sum_{k=1}^{\infty}V\{|X|>k^{\alpha}\}k^{\alpha-1}\log k,&\text{if }p=1,\\C\,C_V\{|X|^p\}+C\sum_{k=1}^{\infty}V\{|X|>k^{\alpha}\}k^{\alpha p-1},&\text{if }p>1,\end{cases}\\&\le\begin{cases}C\,C_V\{|X|\log|X|\}<\infty,&\text{if }p=1,\\C\,C_V\{|X|^p\}<\infty,&\text{if }p>1.\end{cases}\end{aligned}\qquad(4.21)$$
Now we establish $III_1<\infty$. Since $\theta>p\ge1$, we may choose $\beta=\theta$. We analyze the following two cases.
Case 1: $1<\theta\le2$. From (2.2) of Lemma 2.2, Lemma 2.1, $\mathbb{E}(X)=\mathbb{E}(-X)=0$ and Markov's inequality under sub-linear expectations, it follows that

$$\begin{aligned}III_1&=C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha\theta}\mathbb{E}\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j}(Y_{ni}-\mathbb{E}Y_{ni})\Big|^{\theta}\Big)\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha\theta}\log^{\theta}n\Big[\sum_{i=1}^{n}\mathbb{E}|Y_{ni}|^{\theta}+\Big(\sum_{i=1}^{n}\big[|\mathbb{E}(Y_{ni})|+|\mathbb{E}(-Y_{ni})|\big]\Big)^{\theta}\Big]\\&=C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha\theta}\log^{\theta}n\Big[n\,\mathbb{E}|Y_{n}|^{\theta}+\Big(n\big[|\mathbb{E}(Y_{n})|+|\mathbb{E}(-Y_{n})|\big]\Big)^{\theta}\Big]\\&\le C\sum_{n=1}^{\infty}n^{\alpha p-1-\alpha\theta}\log^{\theta}n\,C_V\{|Y_n|^{\theta}\}+C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha\theta+\theta}\log^{\theta}n\,\big(\mathbb{E}|Y_n-X|\big)^{\theta}\\&\le C\sum_{n=1}^{\infty}n^{\alpha p-1-\alpha\theta}\log^{\theta}n\int_0^{n^{\alpha}}V\{|X|>x\}x^{\theta-1}\,dx+C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha\theta+\theta}\log^{\theta}n\,\big(C_V\{|X|I\{|X|>n^{\alpha}\}\}\big)^{\theta}\\&\le C\sum_{k=1}^{\infty}V\{|X|>(k-1)^{\alpha}\}k^{\alpha\theta-1}\sum_{n=k}^{\infty}n^{\alpha p-1-\alpha\theta}\log^{\theta}n+C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha\theta+\theta}\log^{\theta}n\Big(\int_0^{\infty}V\{|X|I\{|X|>n^{\alpha}\}>y\}\,dy\Big)^{\theta}\\&\le C\sum_{k=1}^{\infty}V\{|X|>(k-1)^{\alpha}\}k^{\alpha p-1}\log^{\theta}k+C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha\theta+\theta}\log^{\theta}n\Big(\frac{n^{\alpha}\,\mathbb{E}\{|X|^{p}\log^{\theta}|X|\}}{n^{\alpha p}\log^{\theta}n}\Big)^{\theta}\\&\quad+C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha\theta+\theta}\log^{\theta}n\Big(\int_{n^{\alpha}}^{\infty}V\big\{|X|^{p}\log^{\theta}|X|>y^{p}\log^{\theta}y\big\}\,\frac{d(y^{p}\log^{\theta}y)}{n^{\alpha(p-1)}\log^{\theta}n}\Big)^{\theta}\\&\le C\int_0^{\infty}V\{|X|>x^{\alpha}\}x^{\alpha p-1}\log^{\theta}x\,dx+C\big(C_V\{|X|^{p}\log^{\theta}|X|\}\big)^{\theta}+C\sum_{n=1}^{\infty}n^{\alpha p-2+\theta-\alpha p\theta}\log^{\theta-\theta^{2}}n\,\big(C_V\{|X|^{p}\log^{\theta}|X|\}\big)^{\theta}\\&\le C\,C_V\{|X|^{p}\log^{\theta}|X|\}+C\big(C_V\{|X|^{p}\log^{\theta}|X|\}\big)^{\theta}<\infty.\end{aligned}\qquad(4.22)$$
Case 2: $\theta>2$. Observing that $\theta>\frac{\alpha p-1}{\alpha-\frac12}$, we conclude that $\alpha p-2-\alpha\theta+\frac{\theta}{2}<-1$. As in the proof of (4.22), by Lemma 2.2 and the $C_r$ inequality, we see that

$$III_1=C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha\theta}\mathbb{E}\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^{j}\big(Y_{ni}-\mathbb{E}(Y_{ni})\big)\Big|^{\theta}\Big)\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha\theta}\Big[\sum_{i=1}^{n}\mathbb{E}|Y_{ni}|^{\theta}+\Big(\sum_{i=1}^{n}\mathbb{E}|Y_{ni}|^{2}\Big)^{\theta/2}+\Big(\sum_{i=1}^{n}\big[|\mathbb{E}(Y_{ni})|+|\mathbb{E}(-Y_{ni})|\big]\Big)^{\theta}\Big]=:III_{11}+III_{12}+III_{13}.\qquad(4.23)$$
By Lemma 2.3, we see that

$$\begin{aligned}III_{11}&\le C\sum_{n=1}^{\infty}n^{\alpha p-1-\alpha\theta}\mathbb{E}|Y_n|^{\theta}\le C\sum_{n=1}^{\infty}n^{\alpha p-1-\alpha\theta}C_V\{|Y_n|^{\theta}\}\le C\sum_{n=1}^{\infty}n^{\alpha p-1-\alpha\theta}\int_0^{n^{\alpha}}V\{|X|>x\}x^{\theta-1}\,dx\\&=C\sum_{n=1}^{\infty}n^{\alpha p-1-\alpha\theta}\sum_{k=1}^{n}\int_{(k-1)^{\alpha}}^{k^{\alpha}}V\{|X|>x\}x^{\theta-1}\,dx\le C\sum_{k=1}^{\infty}V\{|X|>(k-1)^{\alpha}\}k^{\alpha\theta-1}\sum_{n=k}^{\infty}n^{\alpha p-1-\alpha\theta}\\&\le C\sum_{k=1}^{\infty}V\{|X|>(k-1)^{\alpha}\}k^{\alpha p-1}\le C\int_0^{\infty}V\{|X|>x^{\alpha}\}x^{\alpha p-1}\,dx+C\le C\,C_V\{|X|^p\}+C<\infty.\end{aligned}$$
By Lemma 2.1, we deduce that

$$\begin{aligned}III_{12}&\le C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha\theta}\Big(\sum_{i=1}^{n}\mathbb{E}|Y_n|^{2}\Big)^{\theta/2}\le\begin{cases}C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha\theta+\theta/2}\big(\mathbb{E}|X|^{2}\big)^{\theta/2},&\text{if }p\ge2,\\C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha\theta+\theta/2}\big(\mathbb{E}|X|^{p}\,n^{\alpha(2-p)}\big)^{\theta/2},&\text{if }1\le p<2,\end{cases}\\&\le\begin{cases}C\sum_{n=1}^{\infty}n^{\alpha p-2-\alpha\theta+\theta/2}\big(C_V\{|X|^{2}\}\big)^{\theta/2}<\infty,&\text{if }p\ge2,\\C\sum_{n=1}^{\infty}n^{\alpha p-2+\theta/2-\alpha p\theta/2}\big(C_V\{|X|^{p}\}\big)^{\theta/2}<\infty,&\text{if }1\le p<2.\end{cases}\end{aligned}$$
The proof of $III_{13}<\infty$ is similar to that of (4.22). This finishes the proof.
We have obtained new results on complete convergence and complete moment convergence for maximal partial sums of negatively dependent random variables under sub-linear expectations. The results obtained in this article extend those for negatively dependent random variables in classical probability space; moreover, Theorems 3.1 and 3.2 here differ from Theorems 3.1 and 3.2 of Xu et al. [20] and cannot be deduced from them. Corollary 3.1 complements Theorem 3.1 of Zhang [9] in the case $p\ge2$, and Corollaries 3.2 and 3.3 complement Theorem 3.3 of Zhang [4] in the case $p>1$ in some sense.
This study was supported by Science and Technology Research Project of Jiangxi Provincial Department of Education of China (No. GJJ2201041), Academic Achievement Re-cultivation Project of Jingdezhen Ceramic University (No. 215/20506135), Doctoral Scientific Research Starting Foundation of Jingdezhen Ceramic University (No. 102/01003002031).
The authors declare no conflicts of interest in this article.