Recently, [1] introduced the process $S^{H,K}=\{S^{H,K}_t,\ t\ge 0\}$ on the probability space $(\Omega,\mathcal{F},P)$, with indices $H\in(0,1)$ and $K\in(0,1]$, called the sub-bifractional Brownian motion (sbfBm) and defined by
\[
S^{H,K}_t=\frac{1}{2^{(2-K)/2}}\left(B^{H,K}_t+B^{H,K}_{-t}\right),
\]
where $\{B^{H,K}_t,\ t\in\mathbb{R}\}$ is a bifractional Brownian motion (bfBm) with indices $H\in(0,1)$ and $K\in(0,1]$, namely a centered Gaussian process, starting from zero, with covariance
\[
E\left[B^{H,K}_tB^{H,K}_s\right]=\frac{1}{2^{K}}\left[\left(|t|^{2H}+|s|^{2H}\right)^{K}-|t-s|^{2HK}\right],
\]
with $H\in(0,1)$ and $K\in(0,1]$.

Clearly, the sbfBm is a centered Gaussian process such that $S^{H,K}_0=0$ with probability $1$, and $\mathrm{Var}\left(S^{H,K}_t\right)=\left(2^{K}-2^{2HK-1}\right)t^{2HK}$. Since $(2H-1)K-1<K-1\le 0$, it follows that $2HK-1<K$. One easily verifies that $S^{H,K}$ is self-similar with index $HK$. When $K=1$, $S^{H,1}$ is the sub-fractional Brownian motion (sfBm); for more on the sub-fractional Brownian motion, see [2,3,4,5]. Straightforward computations show that for all $s,t\ge 0$,
\[
R^{H,K}(t,s)=E\left(S^{H,K}_tS^{H,K}_s\right)=\left(t^{2H}+s^{2H}\right)^{K}-\frac{1}{2}(t+s)^{2HK}-\frac{1}{2}|t-s|^{2HK} \tag{1.1}
\]
and
\[
C_1|t-s|^{2HK}\le E\left[\left(S^{H,K}_t-S^{H,K}_s\right)^{2}\right]\le C_2|t-s|^{2HK}, \tag{1.2}
\]
where
\[
C_1=\min\left\{2^{K-1},\,2^{K}-2^{2HK-1}\right\},\qquad C_2=\max\left\{1,\,2-2^{2HK-1}\right\} \tag{1.3}
\]
(see [1]). [6] investigated the collision local time of two independent sub-bifractional Brownian motions. [7] obtained Berry-Esséen bounds and proved the almost sure central limit theorem for the quadratic variation of the sub-bifractional Brownian motion. For more on the sbfBm, see [8,9,10].
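The law of $S^{H,K}$ is fully determined by the covariance (1.1), so the process can be simulated on a grid and the formulas above checked numerically. The following Python sketch (ours, not part of the paper; the grid, the parameter values and the sample size are arbitrary choices) draws discretized paths by Cholesky factorization of the covariance matrix and compares the empirical variance with $(2^{K}-2^{2HK-1})t^{2HK}$.

```python
# A minimal simulation sketch (ours): sample the sbfBm on a grid via Cholesky factorization
# of the covariance (1.1) and compare the empirical variance with (2^K - 2^{2HK-1}) t^{2HK}.
import numpy as np

def sbfbm_cov(t, s, H, K):
    """Covariance R^{H,K}(t,s) of the sub-bifractional Brownian motion, Eq. (1.1)."""
    return (t**(2*H) + s**(2*H))**K - 0.5*(t + s)**(2*H*K) - 0.5*np.abs(t - s)**(2*H*K)

H, K = 0.6, 0.8
t = np.linspace(0.05, 1.0, 20)                      # avoid t = 0 (degenerate row)
R = sbfbm_cov(t[:, None], t[None, :], H, K)         # covariance matrix on the grid
L = np.linalg.cholesky(R + 1e-12*np.eye(len(t)))    # small jitter for numerical stability

rng = np.random.default_rng(0)
paths = L @ rng.standard_normal((len(t), 20000))    # each column is a discretized path

emp_var = paths.var(axis=1)
theo_var = (2**K - 2**(2*H*K - 1)) * t**(2*H*K)
print(np.max(np.abs(emp_var - theo_var)))           # small (Monte Carlo error only)
```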
Reference [11] studied limits of bifractional Brownian noises, and [12] obtained limit results for sub-fractional Brownian and weighted fractional Brownian noises. Motivated by these studies, in this paper we study the increment process $\{S^{H,K}_{h+t}-S^{H,K}_h,\ t\ge 0\}$ of $S^{H,K}$ and the noise generated by $S^{H,K}$, and we examine how close this process is to a process with stationary increments. Since the sub-bifractional Brownian motion does not have stationary increments, its increment process depends on $h$.

We have organized the paper as follows. In Section 2 we prove our main result: the increment process of $S^{H,K}$ converges to the fractional Brownian motion $B^{HK}$. Section 3 offers a different view of this result; there we analyze the noise generated by the sub-bifractional Brownian motion and study its asymptotic behavior. In Section 4 we prove limit theorems for the sub-bifractional Brownian motion obtained from a correlated non-stationary Gaussian sequence. Finally, Section 5 describes the behavior of the tangent process of the sbfBm.
In this section, we prove the following main result, which says that the increment process of the sub-bifractional Brownian motion $S^{H,K}$ converges to the fractional Brownian motion with Hurst index $HK$.

Theorem 2.1. Let $K\in(0,1)$. Then, as $h\to\infty$,
\[
\left\{S^{H,K}_{h+t}-S^{H,K}_h,\ t\ge 0\right\}\ \overset{d}{\Longrightarrow}\ \left\{B^{HK}_t,\ t\ge 0\right\},
\]
where $\overset{d}{\Longrightarrow}$ means convergence of all finite-dimensional distributions and $B^{HK}$ is the fractional Brownian motion with Hurst index $HK$.
In order to prove Theorem 2.1, we first establish a decomposition of the sub-bifractional Brownian motion with parameters $H$ and $K$ into the sum of a sub-fractional Brownian motion with Hurst parameter $HK$ and a stochastic process with absolutely continuous trajectories. Similar results were obtained in [13] for the bifractional Brownian motion and in [14] for the sub-fractional Brownian motion. Such a decomposition is useful for deriving simpler proofs of various properties of the sbfBm (such as variation, strong variation and Chung's law of the iterated logarithm).
We consider the following decomposition of the covariance function of the sub-bifractional Brownian motion:
\[
\begin{aligned}
R^{H,K}(t,s)=E\left(S^{H,K}_tS^{H,K}_s\right)&=\left(t^{2H}+s^{2H}\right)^{K}-\frac{1}{2}(t+s)^{2HK}-\frac{1}{2}|t-s|^{2HK}\\
&=\left[\left(t^{2H}+s^{2H}\right)^{K}-t^{2HK}-s^{2HK}\right]\\
&\quad+\left[t^{2HK}+s^{2HK}-\frac{1}{2}(t+s)^{2HK}-\frac{1}{2}|t-s|^{2HK}\right].
\end{aligned}
\tag{2.1}
\]
The second summand in (2.1) is the covariance of a sub-fractional Brownian motion with Hurst parameter $HK$. The first summand turns out to be non-positive definite, and after a change of sign it is the covariance of a Gaussian process. Let $\{W_t,\ t\ge 0\}$ be a standard Brownian motion and, for any $0<K<1$, define the process $X^{K}=\{X^{K}_t,\ t\ge 0\}$ by
\[
X^{K}_t=\int_0^{\infty}\left(1-e^{-\theta t}\right)\theta^{-\frac{1+K}{2}}\,dW_\theta. \tag{2.2}
\]
Then $X^{K}$ is a centered Gaussian process with covariance
\[
\begin{aligned}
E\left(X^{K}_tX^{K}_s\right)&=\int_0^{\infty}\left(1-e^{-\theta t}\right)\left(1-e^{-\theta s}\right)\theta^{-1-K}\,d\theta\\
&=\int_0^{\infty}\left(1-e^{-\theta t}\right)\theta^{-1-K}\,d\theta-\int_0^{\infty}\left(1-e^{-\theta t}\right)e^{-\theta s}\theta^{-1-K}\,d\theta\\
&=\int_0^{\infty}\left(\int_0^{t}\theta e^{-\theta u}\,du\right)\theta^{-1-K}\,d\theta-\int_0^{\infty}\left(\int_0^{t}\theta e^{-\theta u}\,du\right)e^{-\theta s}\theta^{-1-K}\,d\theta\\
&=\int_0^{t}\left(\int_0^{\infty}\theta^{-K}e^{-\theta u}\,d\theta\right)du-\int_0^{t}\left(\int_0^{\infty}\theta^{-K}e^{-\theta(u+s)}\,d\theta\right)du\\
&=\frac{\Gamma(1-K)}{K}\left[t^{K}+s^{K}-(t+s)^{K}\right],
\end{aligned}
\tag{2.3}
\]
where $\Gamma(\alpha)=\int_0^{\infty}x^{\alpha-1}e^{-x}\,dx$.
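The identity (2.3) is easy to verify numerically. The following sketch (ours, not part of the paper; the test values of $t$, $s$, $K$ are arbitrary) compares the defining integral with the closed form $\frac{\Gamma(1-K)}{K}\left[t^{K}+s^{K}-(t+s)^{K}\right]$.

```python
# A small numerical check (ours) of identity (2.3): the integral defining E(X_t^K X_s^K)
# equals Gamma(1-K)/K * (t^K + s^K - (t+s)^K). The test values below are arbitrary.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def cov_integral(t, s, K):
    integrand = lambda th: (1 - np.exp(-th*t)) * (1 - np.exp(-th*s)) * th**(-1 - K)
    val, _ = quad(integrand, 0, np.inf, limit=200)
    return val

def cov_closed_form(t, s, K):
    return gamma(1 - K)/K * (t**K + s**K - (t + s)**K)

for (t, s, K) in [(1.0, 2.0, 0.3), (0.5, 3.0, 0.7), (2.0, 2.0, 0.9)]:
    print(cov_integral(t, s, K), cov_closed_form(t, s, K))   # the two columns agree
```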
Therefore we obtain the following result.

Lemma 2.1. Let $S^{H,K}$ be a sub-bifractional Brownian motion with $K\in(0,1)$, and assume that $\{W_t,\ t\ge 0\}$ is a standard Brownian motion independent of $S^{H,K}$. Let $X^{K}$ be the process defined by (2.2). Then the processes $\left\{\sqrt{\frac{K}{\Gamma(1-K)}}\,X^{K}_{t^{2H}}+S^{H,K}_t,\ t\ge 0\right\}$ and $\left\{S^{HK}_t,\ t\ge 0\right\}$ have the same distribution, where $\left\{S^{HK}_t,\ t\ge 0\right\}$ is a sub-fractional Brownian motion with Hurst parameter $HK$.

Proof. Let $Y_t=\sqrt{\frac{K}{\Gamma(1-K)}}\,X^{K}_{t^{2H}}+S^{H,K}_t$. Then, from (2.1) and (2.3), we have, for $s,t\ge 0$,
\[
\begin{aligned}
E(Y_sY_t)&=\frac{K}{\Gamma(1-K)}E\left(X^{K}_{s^{2H}}X^{K}_{t^{2H}}\right)+E\left(S^{H,K}_sS^{H,K}_t\right)\\
&=t^{2HK}+s^{2HK}-\left(t^{2H}+s^{2H}\right)^{K}+\left(t^{2H}+s^{2H}\right)^{K}-\frac{1}{2}(t+s)^{2HK}-\frac{1}{2}|t-s|^{2HK}\\
&=t^{2HK}+s^{2HK}-\frac{1}{2}(t+s)^{2HK}-\frac{1}{2}|t-s|^{2HK},
\end{aligned}
\]
which completes the proof.

Lemma 2.1 implies that
\[
\left\{S^{H,K}_t,\ t\ge 0\right\}\ \overset{d}{=}\ \left\{S^{HK}_t-\sqrt{\frac{K}{\Gamma(1-K)}}\,X^{K}_{t^{2H}},\ t\ge 0\right\}, \tag{2.4}
\]
where $\overset{d}{=}$ means equality of all finite-dimensional distributions.
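The covariance identity behind Lemma 2.1, and hence (2.4), can also be verified numerically from (1.1) and (2.3). A short Python sketch (ours; the parameters and test points are arbitrary):

```python
# A small numerical check (ours) of Lemma 2.1: K/Gamma(1-K) * E(X_{t^{2H}}^K X_{s^{2H}}^K)
# plus R^{H,K}(t,s) equals the sub-fractional Brownian covariance with Hurst parameter HK.
from math import gamma

H, K = 0.6, 0.7
HK = H*K

def R_sbf(t, s):      # sbfBm covariance (1.1)
    return (t**(2*H) + s**(2*H))**K - 0.5*(t + s)**(2*HK) - 0.5*abs(t - s)**(2*HK)

def cov_X(a, b):      # covariance (2.3) of X^K
    return gamma(1 - K)/K * (a**K + b**K - (a + b)**K)

def R_sfbm(t, s):     # sub-fractional Brownian motion covariance, Hurst parameter HK
    return t**(2*HK) + s**(2*HK) - 0.5*(t + s)**(2*HK) - 0.5*abs(t - s)**(2*HK)

for t, s in [(1.0, 2.0), (0.3, 0.9), (2.5, 2.5)]:
    lhs = K/gamma(1 - K) * cov_X(t**(2*H), s**(2*H)) + R_sbf(t, s)
    print(lhs, R_sfbm(t, s))   # the two values coincide
```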
By Theorem 2 in [13], the process $X^{K}$ has a version whose trajectories are infinitely differentiable on $(0,\infty)$ and absolutely continuous on $[0,\infty)$.

Reference [15] presented a decomposition of the sub-fractional Brownian motion into the sum of a fractional Brownian motion and a stochastic process with absolutely continuous trajectories. Namely, we have the following lemma.
Lemma 2.2. Let $B^{H}$ be a fractional Brownian motion with Hurst parameter $H$, let $S^{H}$ be a sub-fractional Brownian motion with Hurst parameter $H$, and let $B=\{B_t,\ t\ge 0\}$ be a standard Brownian motion. Let
\[
Y^{H}_t=\int_0^{\infty}\left(1-e^{-\theta t}\right)\theta^{-\frac{1+2H}{2}}\,dB_\theta. \tag{2.5}
\]
(1) If $0<H<\frac{1}{2}$ and $B^{H}$ and $B$ are independent, then the processes $\left\{\sqrt{\frac{H}{\Gamma(1-2H)}}\,Y^{H}_t+B^{H}_t,\ t\ge 0\right\}$ and $\left\{S^{H}_t,\ t\ge 0\right\}$ have the same distribution.

(2) If $\frac{1}{2}<H<1$ and $S^{H}$ and $B$ are independent, then the processes $\left\{\sqrt{\frac{H(2H-1)}{\Gamma(2-2H)}}\,Y^{H}_t+S^{H}_t,\ t\ge 0\right\}$ and $\left\{B^{H}_t,\ t\ge 0\right\}$ have the same distribution.

Proof. See the proof of Theorem 2.2 in [15] or the proof of Theorem 3.5 in [14].
By (2.4) and Lemma 2.2, we get, for $0<HK<\frac{1}{2}$,
\[
\left\{S^{H,K}_t,\ t\ge 0\right\}\ \overset{d}{=}\ \left\{B^{HK}_t+\sqrt{\frac{HK}{\Gamma(1-2HK)}}\,Y^{HK}_t-\sqrt{\frac{K}{\Gamma(1-K)}}\,X^{K}_{t^{2H}},\ t\ge 0\right\} \tag{2.6}
\]
and, for $\frac{1}{2}<HK<1$,
\[
\left\{S^{H,K}_t,\ t\ge 0\right\}\ \overset{d}{=}\ \left\{B^{HK}_t-\sqrt{\frac{HK(2HK-1)}{\Gamma(2-2HK)}}\,Y^{HK}_t-\sqrt{\frac{K}{\Gamma(1-K)}}\,X^{K}_{t^{2H}},\ t\ge 0\right\}. \tag{2.7}
\]
The following Lemma 2.3 comes from Proposition 2.2 in [11].

Lemma 2.3. Let $X^{K}_t$ be defined by (2.2). Then, as $h\to\infty$,
\[
E\left[\left(X^{K}_{(h+t)^{2H}}-X^{K}_{h^{2H}}\right)^{2}\right]=\frac{\Gamma(1-K)}{K}2^{K}H^{2}K(1-K)\,t^{2}h^{2(HK-1)}(1+o(1)).
\]
Therefore, as $h\to\infty$,
\[
\left\{X^{K}_{(h+t)^{2H}}-X^{K}_{h^{2H}},\ t\ge 0\right\}\ \overset{d}{\Longrightarrow}\ \left\{X_t\equiv 0,\ t\ge 0\right\}.
\]
Lemma 2.4. Let $Y^{H}_t$ be defined by (2.5). Then, as $h\to\infty$,
\[
E\left[\left(Y^{HK}_{h+t}-Y^{HK}_h\right)^{2}\right]=2^{2HK-2}\Gamma(2-2HK)\,t^{2}h^{2(HK-1)}(1+o(1)).
\]
Therefore, as $h\to\infty$,
\[
\left\{Y^{HK}_{h+t}-Y^{HK}_h,\ t\ge 0\right\}\ \overset{d}{\Longrightarrow}\ \left\{Y_t\equiv 0,\ t\ge 0\right\}.
\]
Proof. By Proposition 2.1 in [15], we have
\[
E\left(Y^{H}_tY^{H}_s\right)=
\begin{cases}
\dfrac{\Gamma(1-2H)}{2H}\left[t^{2H}+s^{2H}-(t+s)^{2H}\right], & \text{if } 0<H<\frac{1}{2};\\[2mm]
\dfrac{\Gamma(2-2H)}{2H(2H-1)}\left[(t+s)^{2H}-t^{2H}-s^{2H}\right], & \text{if } \frac{1}{2}<H<1.
\end{cases}
\]
When $0<HK<\frac{1}{2}$, we get
\[
E\left(Y^{HK}_tY^{HK}_s\right)=\frac{\Gamma(1-2HK)}{2HK}\left[t^{2HK}+s^{2HK}-(t+s)^{2HK}\right].
\]
In particular, for every $t\ge 0$,
\[
E\left[\left(Y^{HK}_t\right)^{2}\right]=\frac{\Gamma(1-2HK)}{2HK}\left(2-2^{2HK}\right)t^{2HK}.
\]
Hence, we obtain
\[
E\left[\left(Y^{HK}_{h+t}-Y^{HK}_h\right)^{2}\right]=-\frac{\Gamma(1-2HK)}{2HK}2^{2HK}\left[(h+t)^{2HK}+h^{2HK}\right]+\frac{\Gamma(1-2HK)}{2HK}\,2(2h+t)^{2HK}.
\]
Then, for every large $h>0$, by using Taylor's expansion, we have
\[
\begin{aligned}
I&:=\frac{2HK}{\Gamma(1-2HK)}E\left[\left(Y^{HK}_{h+t}-Y^{HK}_h\right)^{2}\right]\\
&=-2^{2HK}\left[(h+t)^{2HK}+h^{2HK}\right]+2(2h+t)^{2HK}\\
&=-2^{2HK}h^{2HK}\left[\left(1+th^{-1}\right)^{2HK}+1\right]+2h^{2HK}\left(2+th^{-1}\right)^{2HK}\\
&=-2^{2HK}h^{2HK}\left[2+2HK\,th^{-1}+HK(2HK-1)t^{2}h^{-2}(1+o(1))\right]\\
&\quad+2h^{2HK}\left[2^{2HK}+2^{2HK-1}\cdot 2HK\,th^{-1}+2^{2HK-2}HK(2HK-1)t^{2}h^{-2}(1+o(1))\right]\\
&=2^{2HK-1}HK(1-2HK)\,t^{2}h^{2(HK-1)}(1+o(1)).
\end{aligned}
\]
Thus,
\[
\begin{aligned}
E\left[\left(Y^{HK}_{h+t}-Y^{HK}_h\right)^{2}\right]&=2^{2HK-2}(1-2HK)\Gamma(1-2HK)\,t^{2}h^{2(HK-1)}(1+o(1))\\
&=2^{2HK-2}\Gamma(2-2HK)\,t^{2}h^{2(HK-1)}(1+o(1)).
\end{aligned}
\]
The case $\frac{1}{2}<HK<1$ can be proved similarly. This finishes the proof of Lemma 2.4.
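The rate in Lemma 2.4 can be checked directly from the explicit covariance of $Y^{HK}$ above. A minimal Python sketch (ours; the parameter values and the $h$-grid are arbitrary):

```python
# A numerical check (ours) of the rate in Lemma 2.4 for 0 < HK < 1/2, using the explicit
# covariance of Y^{HK} quoted from Proposition 2.1 in [15]; parameters are arbitrary.
from math import gamma

H, K = 0.5, 0.7                   # HK = 0.35 < 1/2
HK = H*K

def cov_Y(t, s):                  # E(Y_t^{HK} Y_s^{HK}) in the case 0 < HK < 1/2
    return gamma(1 - 2*HK)/(2*HK) * (t**(2*HK) + s**(2*HK) - (t + s)**(2*HK))

t = 1.0
for h in [1e2, 1e3, 1e4]:
    incr_var = cov_Y(h + t, h + t) + cov_Y(h, h) - 2*cov_Y(h + t, h)
    asym = 2**(2*HK - 2) * gamma(2 - 2*HK) * t**2 * h**(2*(HK - 1))
    print(h, incr_var / asym)     # the ratio tends to 1 as h grows
```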
Proof of Theorem 2.1. Theorem 2.1 follows immediately from (2.6), (2.7), Lemma 2.3 and Lemma 2.4.
In this section, we revisit Theorem 2.1 through the sub-bifractional Brownian noise, that is, the increments of the sub-bifractional Brownian motion. For every integer $n\ge 0$, the sub-bifractional Brownian noise is defined by
\[
Y_n:=S^{H,K}_{n+1}-S^{H,K}_n.
\]
Denote
\[
R(a,a+n):=E\left(Y_aY_{a+n}\right)=E\left[\left(S^{H,K}_{a+1}-S^{H,K}_a\right)\left(S^{H,K}_{a+n+1}-S^{H,K}_{a+n}\right)\right]. \tag{3.1}
\]
We obtain
\[
R(a,a+n)=f_a(n)+g(n)-g(2a+n+1), \tag{3.2}
\]
where
\[
\begin{aligned}
f_a(n)&=\left[(a+1)^{2H}+(a+n+1)^{2H}\right]^{K}-\left[(a+1)^{2H}+(a+n)^{2H}\right]^{K}\\
&\quad-\left[a^{2H}+(a+n+1)^{2H}\right]^{K}+\left[a^{2H}+(a+n)^{2H}\right]^{K}
\end{aligned}
\]
and
\[
g(n)=\frac{1}{2}\left[(n+1)^{2HK}+(n-1)^{2HK}-2n^{2HK}\right].
\]
The function $g$ is the covariance function of the fractional Brownian noise with Hurst index $HK$. Thus, to understand how far the sub-bifractional Brownian noise is from the fractional Brownian noise, in other words, how far the sub-bifractional Brownian motion is from a process with stationary increments, we need to analyze the function $f_a$.
The sub-bifractional Brownian noise is not stationary. The following theorem shows, however, that it converges to a stationary sequence.

Theorem 3.1. For each $n$, as $a\to\infty$, we have
\[
f_a(n)=2H^{2}K(K-1)\,a^{2(HK-1)}(1+o(1)) \tag{3.3}
\]
and
\[
g(2a+n+1)=2^{2HK-2}HK(2HK-1)\,a^{2(HK-1)}(1+o(1)). \tag{3.4}
\]
Therefore $\lim_{a\to\infty}f_a(n)=0$ and $\lim_{a\to\infty}g(2a+n+1)=0$ for each $n$.

Proof. (3.3) follows from Theorem 3.3 in Maejima and Tudor [11]. For (3.4), we have
\[
\begin{aligned}
g(2a+n+1)&=\frac{1}{2}\left[(2a+n+2)^{2HK}+(2a+n)^{2HK}-2(2a+n+1)^{2HK}\right]\\
&=2^{2HK-1}a^{2HK}\left[\left(1+\tfrac{n+2}{2}a^{-1}\right)^{2HK}+\left(1+\tfrac{n}{2}a^{-1}\right)^{2HK}-2\left(1+\tfrac{n+1}{2}a^{-1}\right)^{2HK}\right]\\
&=2^{2HK-1}a^{2HK}\Big[1+2HK\tfrac{n+2}{2}a^{-1}+HK(2HK-1)\left(\tfrac{n+2}{2}\right)^{2}a^{-2}(1+o(1))\\
&\qquad+1+2HK\tfrac{n}{2}a^{-1}+HK(2HK-1)\left(\tfrac{n}{2}\right)^{2}a^{-2}(1+o(1))\\
&\qquad-2\left(1+2HK\tfrac{n+1}{2}a^{-1}+HK(2HK-1)\left(\tfrac{n+1}{2}\right)^{2}a^{-2}(1+o(1))\right)\Big]\\
&=2^{2HK-2}HK(2HK-1)\,a^{2(HK-1)}(1+o(1)).
\end{aligned}
\]
Hence the proof of Theorem 3.1 is completed.
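The decomposition (3.2) is an exact identity and is easy to verify numerically from the covariance (1.1). A short Python sketch (ours; the parameter values and the test pairs $(a,n)$ are arbitrary):

```python
# A small numerical check (ours) of the decomposition (3.2): the covariance of the
# sub-bifractional Brownian noise computed directly from (1.1) equals f_a(n)+g(n)-g(2a+n+1).
H, K = 0.55, 0.7
HK = H*K

def R_cov(t, s):                      # covariance (1.1)
    return (t**(2*H) + s**(2*H))**K - 0.5*(t + s)**(2*HK) - 0.5*abs(t - s)**(2*HK)

def f_a(a, n):                        # first summand in (3.2)
    return ((a+1)**(2*H) + (a+n+1)**(2*H))**K - ((a+1)**(2*H) + (a+n)**(2*H))**K \
         - (a**(2*H) + (a+n+1)**(2*H))**K + (a**(2*H) + (a+n)**(2*H))**K

def g(m):                             # fractional Brownian noise covariance with index HK
    return 0.5*((m+1)**(2*HK) + (m-1)**(2*HK) - 2*m**(2*HK))

for a, n in [(1, 3), (5, 2), (50, 10)]:
    direct = R_cov(a+1, a+n+1) - R_cov(a+1, a+n) - R_cov(a, a+n+1) + R_cov(a, a+n)
    print(direct, f_a(a, n) + g(n) - g(2*a + n + 1))   # the two values coincide
```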
We are now interested in the behavior of the sub-bifractional Brownian noise (3.1) with respect to $n$ (as $n\to\infty$). We have the following result.

Theorem 3.2. For integers $a,n\ge 0$, let $R(a,a+n)$ be given by (3.1). Then, for large $n$,
\[
R(a,a+n)=HK(K-1)\left[(a+1)^{2H}-a^{2H}\right]n^{2(HK-1)+(1-2H)}+o\left(n^{2(HK-1)+(1-2H)}\right).
\]
Proof. By (3.2), we have
\[
R(a,a+n)=f_a(n)+g(n)-g(2a+n+1).
\]
By the proof of Theorem 4.1 in [11], for large $n$ the term $f_a(n)$ behaves as
\[
HK(K-1)\left[(a+1)^{2H}-a^{2H}\right]n^{2(HK-1)+(1-2H)}+o\left(n^{2(HK-1)+(1-2H)}\right).
\]
The term $g(n)$ behaves as $HK(2HK-1)n^{2(HK-1)}$ for large $n$, and a computation similar to the one in the proof of Theorem 3.1 shows that $g(2a+n+1)$ also behaves as $HK(2HK-1)n^{2(HK-1)}$ for large $n$; hence their difference $g(n)-g(2a+n+1)$ is of order $n^{2HK-3}$, which is $o\left(n^{2(HK-1)+(1-2H)}\right)$ since $H<1$. Hence we have finished the proof of Theorem 3.2.

It is easy to obtain the following corollary.

Corollary 3.1. For integers $a\ge 1$ and $n\ge 0$, let $R(a,a+n)$ be given by (3.1). Then, for every $a\in\mathbb{N}$, we have
\[
\sum_{n\ge 0}R(a,a+n)<\infty.
\]
Proof. By Theorem 3.2, the leading term of $R(a,a+n)$ is of order $n^{2HK-2H-1}$, and since $2HK-2H-1<-1$, the series converges.
In this section, we prove two limit theorems for the sub-bifractional Brownian motion. Define a function $g(t,s)$, $t\ge 0$, $s\ge 0$, by
\[
\begin{aligned}
g(t,s)=\frac{\partial^{2}R^{H,K}(t,s)}{\partial t\,\partial s}&=4H^{2}K(K-1)\left(t^{2H}+s^{2H}\right)^{K-2}(ts)^{2H-1}+HK(2HK-1)|t-s|^{2HK-2}\\
&\quad-HK(2HK-1)(t+s)^{2HK-2}\\
&=:g_1(t,s)+g_2(t,s)-g_3(t,s),
\end{aligned}
\tag{4.1}
\]
for $(t,s)$ with $t\neq s$, $t\neq 0$, $s\neq 0$ and $t+s\neq 0$.

Theorem 4.1. Assume that $2HK>1$ and let $\{\xi_j,\ j=1,2,\cdots\}$ be a sequence of standard normal random variables such that $E(\xi_i\xi_j)=g(i,j)$, where $g(t,s)$ is defined by (4.1). Then, as $n\to\infty$,
\[
\left\{n^{-HK}\sum_{j=1}^{[nt]}\xi_j,\ t\ge 0\right\}\ \overset{d}{\Longrightarrow}\ \left\{S^{H,K}_t,\ t\ge 0\right\}.
\]
Remark 1. Theorems 4.1 and 4.2 (below) are analogues of the central limit theorem and can serve as a basis for many subsequent studies.
In order to prove Theorem 4.1, we need the following lemma.

Lemma 4.1. When $2HK>1$, we have
\[
\int_0^{t}\int_0^{s}g(u,v)\,du\,dv=\left(t^{2H}+s^{2H}\right)^{K}-\frac{1}{2}(t+s)^{2HK}-\frac{1}{2}|t-s|^{2HK}.
\]
Proof. This follows from the fact that $g(t,s)=\frac{\partial^{2}R^{H,K}(t,s)}{\partial t\,\partial s}$ for every $t\ge 0$, $s\ge 0$, together with the condition $2HK>1$.
Proof of Theorem 4.1. It is enough to show that, as $n\to\infty$,
\[
I_n:=E\left[\left(n^{-HK}\sum_{i=1}^{[nt]}\xi_i\right)\left(n^{-HK}\sum_{j=1}^{[ns]}\xi_j\right)\right]\to E\left(S^{H,K}_tS^{H,K}_s\right).
\]
In fact, we have
\[
I_n=n^{-2HK}\sum_{i=1}^{[nt]}\sum_{j=1}^{[ns]}E(\xi_i\xi_j)=n^{-2HK}\sum_{i=1}^{[nt]}\sum_{j=1}^{[ns]}g(i,j).
\]
Note that
\[
\begin{aligned}
g\left(\tfrac{i}{n},\tfrac{j}{n}\right)&=4H^{2}K(K-1)\left[\left(\tfrac{i}{n}\right)^{2H}+\left(\tfrac{j}{n}\right)^{2H}\right]^{K-2}\left(\tfrac{ij}{n^{2}}\right)^{2H-1}\\
&\quad+HK(2HK-1)\left|\tfrac{i}{n}-\tfrac{j}{n}\right|^{2HK-2}-HK(2HK-1)\left(\tfrac{i}{n}+\tfrac{j}{n}\right)^{2HK-2}\\
&=n^{2(1-HK)}g(i,j).
\end{aligned}
\tag{4.2}
\]
Thus, as $n\to\infty$,
\[
\begin{aligned}
I_n&=n^{-2HK}\sum_{i=1}^{[nt]}\sum_{j=1}^{[ns]}n^{2HK-2}g\left(\tfrac{i}{n},\tfrac{j}{n}\right)=n^{-2}\sum_{i=1}^{[nt]}\sum_{j=1}^{[ns]}g\left(\tfrac{i}{n},\tfrac{j}{n}\right)\\
&\to\int_0^{t}\int_0^{s}g(u,v)\,du\,dv=\left(t^{2H}+s^{2H}\right)^{K}-\frac{1}{2}(t+s)^{2HK}-\frac{1}{2}|t-s|^{2HK}=E\left(S^{H,K}_tS^{H,K}_s\right).
\end{aligned}
\]
Hence, we have finished the proof of Theorem 4.1.
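The covariance convergence used in this proof can also be inspected numerically. The sketch below (ours; the parameter values, the pair $(t,s)$ and the convention for the diagonal terms are our own choices) evaluates $n^{-2HK}\sum_{i,j}g(i,j)$, with the diagonal terms set to $E(\xi_i^{2})=1$, and compares it with $R^{H,K}(t,s)$.

```python
# A numerical illustration (ours) of the proof of Theorem 4.1: for 2HK > 1, the normalized
# double sum of g(i,j) approaches R^{H,K}(t,s). Diagonal terms are replaced by E(xi_i^2)=1.
import numpy as np

H, K = 0.8, 0.8            # 2HK = 1.28 > 1
HK = H*K

def R_cov(t, s):           # covariance (1.1)
    return (t**(2*H) + s**(2*H))**K - 0.5*(t + s)**(2*HK) - 0.5*abs(t - s)**(2*HK)

def g(u, v):               # the kernel (4.1), defined for u != v
    return (4*H*H*K*(K - 1)*(u**(2*H) + v**(2*H))**(K - 2)*(u*v)**(2*H - 1)
            + HK*(2*HK - 1)*np.abs(u - v)**(2*HK - 2)
            - HK*(2*HK - 1)*(u + v)**(2*HK - 2))

t, s = 1.0, 0.6
for n in [200, 800, 3200]:
    i = np.arange(1, int(n*t) + 1, dtype=float)
    j = np.arange(1, int(n*s) + 1, dtype=float)
    with np.errstate(divide='ignore', invalid='ignore'):
        G = g(i[:, None], j[None, :])
    np.fill_diagonal(G[:len(j), :len(j)], 1.0)   # overwrite the singular diagonal by 1
    # The sum slowly approaches R^{H,K}(t,s); the kernel is singular near the diagonal.
    print(n, n**(-2*HK) * G.sum(), R_cov(t, s))
```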
We now consider a more general sequence of nonlinear functionals of standard normal random variables. Let $f$ be a real-valued function such that $f(x)$ does not vanish on a set of positive measure, $E[f(\xi_1)]=0$ and $E[(f(\xi_1))^{2}]<\infty$. Let $H_k$ denote the $k$-th Hermite polynomial with leading coefficient $1$. We have
\[
f(x)=\sum_{k=1}^{\infty}c_kH_k(x),
\]
where $\sum_{k=1}^{\infty}c_k^{2}k!<\infty$ and $c_k=\frac{1}{k!}E\left[f(\xi_j)H_k(\xi_j)\right]$ (see, e.g., [16]). Assume that $c_1\neq 0$. Let $\eta_j=f(\xi_j)$, $j=1,2,\cdots$, where $\{\xi_j,\ j=1,2,\cdots\}$ is the same sequence of standard normal random variables as before.
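As a concrete illustration of this expansion (ours, not from the paper), the sketch below computes $c_k=\frac{1}{k!}E[f(\xi)H_k(\xi)]$ for the test function $f(x)=x^{3}$, for which $c_1=3$, $c_3=1$ and all other coefficients vanish, and checks that $\sum_k c_k^{2}k!=E[f(\xi)^{2}]=15$.

```python
# A small sketch (ours) of the Hermite expansion: coefficients c_k = E[f(xi) H_k(xi)]/k!
# for f(x) = x^3, using probabilists' Hermite polynomials He_k (leading coefficient 1)
# and Gauss-Hermite quadrature with weight exp(-x^2/2).
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

x, w = He.hermegauss(60)                      # quadrature nodes/weights
f = lambda y: y**3                            # test function with E[f(xi)] = 0

def hermite_coeff(k):
    Hk = He.hermeval(x, [0]*k + [1])          # He_k evaluated at the nodes
    return np.sum(w * f(x) * Hk) / sqrt(2*pi) / factorial(k)

coeffs = [hermite_coeff(k) for k in range(1, 6)]
print(np.round(coeffs, 10))                                            # ~ [3, 0, 1, 0, 0]
print(sum(c**2 * factorial(k) for k, c in enumerate(coeffs, 1)))       # = E[f(xi)^2] = 15
```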
Theorem 4.2. Assume that $2HK>\frac{3}{2}$ and let $\{\xi_j,\ j=1,2,\cdots\}$ be a sequence of standard normal random variables such that $E(\xi_i\xi_j)=g(i,j)$, where $g(t,s)$ is defined by (4.1). Then, as $n\to\infty$,
\[
\left\{n^{-HK}\sum_{j=1}^{[nt]}\eta_j,\ t\ge 0\right\}\ \overset{d}{\Longrightarrow}\ \left\{c_1S^{H,K}_t,\ t\ge 0\right\}.
\]
Proof. Note that $\eta_j=f(\xi_j)=c_1\xi_j+\sum_{k=2}^{\infty}c_kH_k(\xi_j)$. We obtain
\[
n^{-HK}\sum_{j=1}^{[nt]}\eta_j=c_1n^{-HK}\sum_{j=1}^{[nt]}\xi_j+n^{-HK}\sum_{j=1}^{[nt]}\sum_{k=2}^{\infty}c_kH_k(\xi_j).
\]
Using Theorem 4.1, it is enough to show that, as $n\to\infty$,
\[
E\left[\left(n^{-HK}\sum_{j=1}^{[nt]}\sum_{k=2}^{\infty}c_kH_k(\xi_j)\right)^{2}\right]\to 0. \tag{4.3}
\]
In fact, we get
\[
J_n:=E\left[\left(n^{-HK}\sum_{j=1}^{[nt]}\sum_{k=2}^{\infty}c_kH_k(\xi_j)\right)^{2}\right]
=n^{-2HK}\sum_{i=1}^{[nt]}\sum_{j=1}^{[nt]}\sum_{k=2}^{\infty}\sum_{l=2}^{\infty}c_kc_lE\left[H_k(\xi_i)H_l(\xi_j)\right].
\]
We know that, if $\xi$ and $\eta$ are two random variables with joint Gaussian distribution such that $E(\xi)=E(\eta)=0$, $E(\xi^{2})=E(\eta^{2})=1$ and $E(\xi\eta)=r$, then
\[
E\left[H_k(\xi)H_l(\eta)\right]=\delta_{k,l}\,r^{k}k!,
\]
where
\[
\delta_{k,l}=
\begin{cases}
1, & \text{if } k=l;\\
0, & \text{if } k\neq l.
\end{cases}
\]
Thus,
\[
\begin{aligned}
J_n&=n^{-2HK}\sum_{i=1}^{[nt]}\sum_{j=1}^{[nt]}\sum_{k=2}^{\infty}c_k^{2}\left(E(\xi_i\xi_j)\right)^{k}k!\\
&=n^{-2HK}[nt]\sum_{k=2}^{\infty}c_k^{2}k!+n^{-2HK}\sum_{\substack{i,j=1\\ i\neq j}}^{[nt]}\sum_{k=2}^{\infty}c_k^{2}k!\left[g(i,j)\right]^{k}.
\end{aligned}
\]
Since $|g(i,j)|\le\left(E(\xi_i^{2})\right)^{\frac{1}{2}}\left(E(\xi_j^{2})\right)^{\frac{1}{2}}=1$, we get, by (4.2),
\[
\begin{aligned}
J_n&\le n^{-2HK}[nt]\sum_{k=2}^{\infty}c_k^{2}k!+n^{-2HK}\sum_{\substack{i,j=1\\ i\neq j}}^{[nt]}\sum_{k=2}^{\infty}c_k^{2}k!\left[g(i,j)\right]^{2}\\
&=n^{-2HK}[nt]\sum_{k=2}^{\infty}c_k^{2}k!+n^{-2HK}\left(\sum_{k=2}^{\infty}c_k^{2}k!\right)\sum_{\substack{i,j=1\\ i\neq j}}^{[nt]}\left[g(i,j)\right]^{2}\\
&\le tn^{1-2HK}\sum_{k=2}^{\infty}c_k^{2}k!+n^{2(HK-1)}\left(\sum_{k=2}^{\infty}c_k^{2}k!\right)n^{-2}\sum_{\substack{i,j=1\\ i\neq j}}^{[nt]}\left[g\left(\tfrac{i}{n},\tfrac{j}{n}\right)\right]^{2}.
\end{aligned}
\tag{4.4}
\]
On the one hand, since $\sum_{k=2}^{\infty}c_k^{2}k!<\infty$ and $2HK>\frac{3}{2}>1$, we get, as $n\to\infty$,
\[
tn^{1-2HK}\sum_{k=2}^{\infty}c_k^{2}k!\to 0. \tag{4.5}
\]
On the other hand, we have
\[
n^{-2}\sum_{\substack{i,j=1\\ i\neq j}}^{[nt]}\left[g\left(\tfrac{i}{n},\tfrac{j}{n}\right)\right]^{2}
=n^{-2}\sum_{\substack{i,j=1\\ i\neq j}}^{[nt]}\left[g_1\left(\tfrac{i}{n},\tfrac{j}{n}\right)+g_2\left(\tfrac{i}{n},\tfrac{j}{n}\right)-g_3\left(\tfrac{i}{n},\tfrac{j}{n}\right)\right]^{2}
\le 3n^{-2}\sum_{\substack{i,j=1\\ i\neq j}}^{[nt]}\left\{\left[g_1\left(\tfrac{i}{n},\tfrac{j}{n}\right)\right]^{2}+\left[g_2\left(\tfrac{i}{n},\tfrac{j}{n}\right)\right]^{2}+\left[g_3\left(\tfrac{i}{n},\tfrac{j}{n}\right)\right]^{2}\right\}.
\]
Since $|g_1(u,v)|\le C(uv)^{HK-1}$ and $2HK>\frac{3}{2}>1$, we obtain
\[
n^{-2}\sum_{\substack{i,j=1\\ i\neq j}}^{[nt]}\left[g_1\left(\tfrac{i}{n},\tfrac{j}{n}\right)\right]^{2}\to\int_0^{t}\int_0^{t}g_1^{2}(u,v)\,du\,dv\le C\int_0^{t}\int_0^{t}(uv)^{2HK-2}\,du\,dv<\infty. \tag{4.6}
\]
We also have
\[
n^{-2}\sum_{\substack{i,j=1\\ i\neq j}}^{[nt]}\left[g_3\left(\tfrac{i}{n},\tfrac{j}{n}\right)\right]^{2}\to\int_0^{t}\int_0^{t}g_3^{2}(u,v)\,du\,dv=H^{2}K^{2}(2HK-1)^{2}\int_0^{t}\int_0^{t}(u+v)^{4HK-4}\,du\,dv<\infty, \tag{4.7}
\]
since $2HK>\frac{3}{2}>1$, and
\[
n^{-2}\sum_{\substack{i,j=1\\ i\neq j}}^{[nt]}\left[g_2\left(\tfrac{i}{n},\tfrac{j}{n}\right)\right]^{2}\to\int_0^{t}\int_0^{t}g_2^{2}(u,v)\,du\,dv=H^{2}K^{2}(2HK-1)^{2}\int_0^{t}\int_0^{t}|u-v|^{4HK-4}\,du\,dv<\infty, \tag{4.8}
\]
since $2HK>\frac{3}{2}$. Thus (4.3) follows from (4.4)–(4.8) and $2HK>\frac{3}{2}$. The proof is completed.
Remark 2. [11] pointed out that, when $2HK>1$, the convergence of
\[
n^{2(HK-1)}n^{-2}\sum_{\substack{i,j=1\\ i\neq j}}^{[nt]}\left[g_2\left(\tfrac{i}{n},\tfrac{j}{n}\right)\right]^{2}
\]
had already been proved in [16]. However, we could not find the details in [16]. Here we only give the proof when $2HK>\frac{3}{2}$, because $2HK>\frac{3}{2}$ is the condition under which (4.8) holds.
In this section, we study an approximation in law of the fractional Brownian motion via the tangent process generated by the sbfBm $S^{H,K}$.

Theorem 5.1. Let $H\in(0,1)$ and $K\in(0,1)$. For every $t_0>0$, as $\epsilon\to 0$, the tangent process satisfies
\[
\left\{\frac{S^{H,K}_{t_0+\epsilon u}-S^{H,K}_{t_0}}{\epsilon^{HK}},\ u\ge 0\right\}\ \overset{d}{\Longrightarrow}\ \left\{B^{HK}_u,\ u\ge 0\right\}, \tag{5.1}
\]
where $B^{HK}$ is the fractional Brownian motion with Hurst index $HK$.

Proof. For $0<HK<\frac{1}{2}$, by (2.6), we get
\[
\left\{S^{H,K}_t,\ t\ge 0\right\}\ \overset{d}{=}\ \left\{B^{HK}_t+\sqrt{\frac{HK}{\Gamma(1-2HK)}}\,Y^{HK}_t-\sqrt{\frac{K}{\Gamma(1-K)}}\,X^{K}_{t^{2H}},\ t\ge 0\right\}.
\]
By (2.5) in [12], there exists a constant $C(H,K)>0$ such that
\[
E\left[\left(\frac{X^{K}_{(t_0+\epsilon u)^{2H}}-X^{K}_{t_0^{2H}}}{\epsilon^{HK}}\right)^{2}\right]=C(H,K)\,t_0^{2(HK-1)}u^{2}\epsilon^{2(1-HK)}(1+o(1)),
\]
which tends to zero as $\epsilon\to 0$, since $1-HK>0$.

On the other hand, similarly to the proof of Lemma 2.4, we obtain
\[
E\left[\left(\frac{Y^{HK}_{t_0+\epsilon u}-Y^{HK}_{t_0}}{\epsilon^{HK}}\right)^{2}\right]=2^{2HK-2}\Gamma(2-2HK)\,t_0^{2(HK-1)}u^{2}\epsilon^{2(1-HK)}(1+o(1)),
\]
which also tends to zero as $\epsilon\to 0$. Therefore (5.1) holds. Similarly, (5.1) also holds for the case $\frac{1}{2}<HK<1$. This finishes the proof.
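The normalization in Theorem 5.1 can be checked directly from the covariance (1.1): $E\left[\left((S^{H,K}_{t_0+\epsilon u}-S^{H,K}_{t_0})/\epsilon^{HK}\right)^{2}\right]\to u^{2HK}$ as $\epsilon\to 0$, which is the variance of $B^{HK}_u$. A minimal Python sketch (ours; the parameter values are arbitrary):

```python
# A numerical check (ours) of the tangent-process normalization in Theorem 5.1:
# E[((S_{t0+eps*u} - S_{t0}) / eps^{HK})^2] -> u^{2HK} as eps -> 0.
H, K = 0.7, 0.6
HK = H*K

def R_cov(t, s):          # covariance (1.1)
    return (t**(2*H) + s**(2*H))**K - 0.5*(t + s)**(2*HK) - 0.5*abs(t - s)**(2*HK)

t0, u = 2.0, 1.5
print("target u^{2HK} =", u**(2*HK))
for eps in [1e-1, 1e-2, 1e-3, 1e-4]:
    a, b = t0 + eps*u, t0
    incr_var = R_cov(a, a) + R_cov(b, b) - 2*R_cov(a, b)
    print(eps, incr_var / eps**(2*HK))   # approaches u^{2HK} as eps decreases
```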
In this paper, we have proved that the increment process generated by the sub-bifractional Brownian motion converges to the fractional Brownian motion. Moreover, we have studied the behavior of the noise associated with the sbfBm and the behavior of the tangent process of the sbfBm. In future work, we will investigate limits of Gaussian noises.
Nenghui Kuang was supported by the Natural Science Foundation of Hunan Province under Grant 2021JJ30233. The author wishes to thank the anonymous referees for their careful reading of the previous version of this paper and for their comments, which improved the paper.
The author declares there is no conflict of interest.