Research article

Driver identification and fatigue detection algorithm based on deep learning


  • Received: 07 December 2022 Revised: 03 February 2023 Accepted: 12 February 2023 Published: 27 February 2023
  • To help prevent traffic accidents caused by driver fatigue, smoking and talking on the phone, it is necessary to design an effective fatigue detection algorithm. First, this paper reviews existing driver fatigue detection algorithms in China and abroad and analyzes their advantages and disadvantages. Second, a face recognition module is introduced: the acquired faces are cropped, aligned and fed into the Facenet network for feature extraction, which completes the identification of the driver. Third, a new deep-learning-based driver fatigue detection algorithm is designed on top of the Single Shot MultiBox Detector (SSD), and the additional layers of SSD are redesigned using the idea of inverted residuals. Detection of smoking and phone use is added, the sizes and number of the SSD prior boxes are adjusted, and the feature pyramid network (FPN) and squeeze-and-excitation (SE) modules are improved, so that driver identification and verification can be realized together with fatigue detection. In the experiments, the model size decreased from 96.62 MB to 18.24 MB, the average accuracy increased from 89.88% to 95.69%, and the detection speed increased from 51.69 to 71.86 frames per second. With the confidence threshold set to 0.5, the recall of closed eyes increased from 46.69% to 65.87%, that of yawning increased from 59.72% to 82.72%, and that of smoking increased from 65.87% to 83.09%. These results show that the improved network model has better feature extraction ability for small targets.
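    The channel-attention and inverted-residual components mentioned above can be illustrated with a short, generic sketch. The PyTorch-style code below is an illustration under our own assumptions about layer sizes and names; it is not the authors' exact network configuration:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Generic squeeze-and-excitation (SE) channel-attention block."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                 # squeeze: global average pooling
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                        # excite: reweight the channels

class InvertedResidual(nn.Module):
    """Generic inverted-residual block (expand -> depthwise conv -> project)."""
    def __init__(self, in_ch: int, out_ch: int, expand: int = 6, stride: int = 1):
        super().__init__()
        hidden = in_ch * expand
        self.use_res = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 1, bias=False), nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, stride, 1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, out_ch, 1, bias=False), nn.BatchNorm2d(out_ch))

    def forward(self, x):
        y = self.block(x)
        return x + y if self.use_res else y

x = torch.randn(1, 32, 38, 38)                              # a dummy SSD-style feature map
y = SEBlock(32)(InvertedResidual(32, 32)(x))
print(y.shape)                                              # torch.Size([1, 32, 38, 38])
```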

    Citation: Yuhua Ma, Ye Tao, Yuandan Gong, Wenhua Cui, Bo Wang. Driver identification and fatigue detection algorithm based on deep learning[J]. Mathematical Biosciences and Engineering, 2023, 20(5): 8162-8189. doi: 10.3934/mbe.2023355




    Recently, [1] introduced the process $S^{H,K}=\{S^{H,K}_t,\,t\ge 0\}$ on the probability space $(\Omega,\mathcal{F},P)$, with indices $H\in(0,1)$ and $K\in(0,1]$, named the sub-bifractional Brownian motion (sbfBm) and defined as follows:

    $S^{H,K}_t=\dfrac{1}{2^{(2-K)/2}}\left(B^{H,K}_t+B^{H,K}_{-t}\right),$

    where $\{B^{H,K}_t,\,t\in\mathbb{R}\}$ is a bifractional Brownian motion (bfBm) with indices $H\in(0,1)$ and $K\in(0,1]$, namely, $\{B^{H,K}_t,\,t\in\mathbb{R}\}$ is a centered Gaussian process, starting from zero, with covariance

    $E\left[B^{H,K}_tB^{H,K}_s\right]=\dfrac{1}{2^K}\left[\left(|t|^{2H}+|s|^{2H}\right)^K-|t-s|^{2HK}\right],$

    with $H\in(0,1)$ and $K\in(0,1]$.

    Clearly, the sbfBm is a centered Gaussian process such that $S^{H,K}_0=0$ with probability 1, and $\mathrm{Var}(S^{H,K}_t)=\left(2^K-2^{2HK-1}\right)t^{2HK}$. Since $(2H-1)K-1<K-1\le 0$, it follows that $2HK-1<K$. We can easily verify that $S^{H,K}$ is self-similar with index $HK$. When $K=1$, $S^{H,1}$ is the sub-fractional Brownian motion (sfBm); for more on the sub-fractional Brownian motion, see [2,3,4,5]. The following computations show that for all $s,t\ge 0$,

    $R^{H,K}(t,s)=E\left(S^{H,K}_tS^{H,K}_s\right)=\left(t^{2H}+s^{2H}\right)^K-\dfrac{1}{2}(t+s)^{2HK}-\dfrac{1}{2}|t-s|^{2HK}$ (1.1)

    and

    $C_1|t-s|^{2HK}\le E\left[\left(S^{H,K}_t-S^{H,K}_s\right)^2\right]\le C_2|t-s|^{2HK},$ (1.2)

    where

    $C_1=\min\left\{2^{K-1},\,2^K-2^{2HK-1}\right\},\qquad C_2=\max\left\{1,\,2-2^{2HK-1}\right\}.$ (1.3)

    (See [1].) The collision local time of two independent sub-bifractional Brownian motions was investigated in [6], and [7] obtained Berry-Esséen bounds and proved an almost sure central limit theorem for the quadratic variation of the sub-bifractional Brownian motion. For more on the sbfBm, see [8,9,10].
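    For readers who wish to experiment with these formulas, the covariance (1.1) and the two-sided bound (1.2)-(1.3) are easy to evaluate numerically. The following minimal Python sketch (illustrative only; the parameter values and function names are our own choices, not part of the paper) compares the increment variance $E[(S^{H,K}_t-S^{H,K}_s)^2]$ with the bounds on randomly sampled points:

```python
import numpy as np

def R_sbfbm(t, s, H, K):
    """Covariance of the sub-bifractional Brownian motion, Eq (1.1)."""
    return (t**(2*H) + s**(2*H))**K - 0.5*(t + s)**(2*H*K) - 0.5*np.abs(t - s)**(2*H*K)

def incr_var(t, s, H, K):
    """E[(S_t - S_s)^2] = R(t,t) + R(s,s) - 2 R(t,s)."""
    return R_sbfbm(t, t, H, K) + R_sbfbm(s, s, H, K) - 2*R_sbfbm(t, s, H, K)

H, K = 0.6, 0.8
C1 = min(2**(K - 1), 2**K - 2**(2*H*K - 1))     # constants of Eq (1.3)
C2 = max(1.0, 2 - 2**(2*H*K - 1))

rng = np.random.default_rng(0)
t, s = rng.uniform(0.01, 10.0, size=(2, 100_000))
ratio = incr_var(t, s, H, K) / np.abs(t - s)**(2*H*K)
# the observed ratios should lie between C1 and C2, in line with (1.2)
print(f"C1 = {C1:.4f} <= {ratio.min():.4f} ... {ratio.max():.4f} <= C2 = {C2:.4f}")
```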

    Reference [11] studied the limits of bifractional Brownian noises, and [12] obtained limit results for sub-fractional Brownian and weighted fractional Brownian noises. Motivated by these studies, in this paper we study the increment process $\{S^{H,K}_{h+t}-S^{H,K}_h,\,t\ge 0\}$ of $S^{H,K}$ and the noise generated by $S^{H,K}$, and we examine how close this process is to a process with stationary increments. Since the sub-bifractional Brownian motion is not a process with stationary increments, its increment process depends on $h$.

    We have organized our paper as follows: In Section 2 we prove our main result that the increment process of $S^{H,K}$ converges to the fractional Brownian motion $B^{HK}$. Section 3 is devoted to a different view of this main result: we analyze the noise generated by the sub-bifractional Brownian motion and study its asymptotic behavior. In Section 4 we prove limit theorems for the sub-bifractional Brownian motion obtained from a correlated non-stationary Gaussian sequence. Finally, Section 5 describes the behavior of the tangent process of the sbfBm.

    In this section, we prove the following main result, which says that the increment process of the sub-bifractional Brownian motion $S^{H,K}$ converges to the fractional Brownian motion with Hurst index $HK$.

    Theorem 2.1. Let $K\in(0,1)$. Then, as $h\to\infty$,

    $\{S^{H,K}_{h+t}-S^{H,K}_h,\,t\ge 0\}\xrightarrow{d}\{B^{HK}_t,\,t\ge 0\},$

    where $\xrightarrow{d}$ means convergence of all finite-dimensional distributions and $B^{HK}$ is the fractional Brownian motion with Hurst index $HK$.

    In order to prove Theorem 2.1, we first show a decomposition of the sub-bifractional Brownian motion with parameters H and K into the sum of a sub-fractional Brownian motion with Hurst parameter HK plus a stochastic process with absolutely continuous trajectories. Some similar results were obtained in [13] for the bifractional Brownian motion and in [14] for the sub-fractional Brownian motion. Such a decomposition is useful in order to derive easier proofs for different properties of sbfBm (like variation, strong variation and Chung's LIL).

    We consider the following decomposition of the covariance function of the sub-bifractional Brownian motion:

    $R^{H,K}(t,s)=E\left(S^{H,K}_tS^{H,K}_s\right)=\left(t^{2H}+s^{2H}\right)^K-\dfrac{1}{2}(t+s)^{2HK}-\dfrac{1}{2}|t-s|^{2HK}$
    $=\left[\left(t^{2H}+s^{2H}\right)^K-t^{2HK}-s^{2HK}\right]+\left[t^{2HK}+s^{2HK}-\dfrac{1}{2}(t+s)^{2HK}-\dfrac{1}{2}|t-s|^{2HK}\right].$ (2.1)

    The second summand in (2.1) is the covariance of a sub-fractional Brownian motion with Hurst parameter $HK$. The first summand turns out to be non-positive definite, and with a change of sign it is the covariance of a Gaussian process. Let $\{W_t,\,t\ge 0\}$ be a standard Brownian motion; for any $0<K<1$, define the process $X^K=\{X^K_t,\,t\ge 0\}$ by

    $X^K_t=\displaystyle\int_0^{\infty}\left(1-e^{-\theta t}\right)\theta^{-\frac{1+K}{2}}\,dW_\theta.$ (2.2)

    Then, $X^K$ is a centered Gaussian process with covariance

    $E\left(X^K_tX^K_s\right)=\displaystyle\int_0^{\infty}\left(1-e^{-\theta t}\right)\left(1-e^{-\theta s}\right)\theta^{-1-K}\,d\theta$
    $=\displaystyle\int_0^{\infty}\left(1-e^{-\theta t}\right)\theta^{-1-K}\,d\theta-\int_0^{\infty}\left(1-e^{-\theta t}\right)e^{-\theta s}\theta^{-1-K}\,d\theta$
    $=\displaystyle\int_0^{\infty}\left(\int_0^{t}\theta e^{-\theta u}\,du\right)\theta^{-1-K}\,d\theta-\int_0^{\infty}\left(\int_0^{t}\theta e^{-\theta u}\,du\right)e^{-\theta s}\theta^{-1-K}\,d\theta$
    $=\displaystyle\int_0^{t}\left(\int_0^{\infty}\theta^{-K}e^{-\theta u}\,d\theta\right)du-\int_0^{t}\left(\int_0^{\infty}\theta^{-K}e^{-\theta(u+s)}\,d\theta\right)du$
    $=\dfrac{\Gamma(1-K)}{K}\left[t^K+s^K-(t+s)^K\right],$ (2.3)

    where $\Gamma(\alpha)=\int_0^{\infty}x^{\alpha-1}e^{-x}\,dx$.

    Therefore we obtain the following result:

    Lemma 2.1. Let $S^{H,K}$ be a sub-bifractional Brownian motion, $K\in(0,1)$, and assume that $\{W_t,\,t\ge 0\}$ is a standard Brownian motion independent of $S^{H,K}$. Let $X^K$ be the process defined by (2.2). Then the processes $\left\{\sqrt{\frac{K}{\Gamma(1-K)}}\,X^K_{t^{2H}}+S^{H,K}_t,\,t\ge 0\right\}$ and $\{S^{HK}_t,\,t\ge 0\}$ have the same distribution, where $\{S^{HK}_t,\,t\ge 0\}$ is a sub-fractional Brownian motion with Hurst parameter $HK$.

    Proof. Let $Y_t=\sqrt{\frac{K}{\Gamma(1-K)}}\,X^K_{t^{2H}}+S^{H,K}_t$. Then, from (2.1) and (2.3), we have, for $s,t\ge 0$,

    $E(Y_sY_t)=\dfrac{K}{\Gamma(1-K)}E\left(X^K_{s^{2H}}X^K_{t^{2H}}\right)+E\left(S^{H,K}_sS^{H,K}_t\right)$
    $=t^{2HK}+s^{2HK}-\left(t^{2H}+s^{2H}\right)^K+\left(t^{2H}+s^{2H}\right)^K-\dfrac{1}{2}(t+s)^{2HK}-\dfrac{1}{2}|t-s|^{2HK}$
    $=t^{2HK}+s^{2HK}-\dfrac{1}{2}(t+s)^{2HK}-\dfrac{1}{2}|t-s|^{2HK},$

    which completes the proof.
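    The covariance identity behind Lemma 2.1 is easy to check numerically. The sketch below (an illustration under our own parameter choices, not part of the paper) verifies that $\frac{K}{\Gamma(1-K)}E(X^K_{t^{2H}}X^K_{s^{2H}})+R^{H,K}(t,s)$ coincides with the covariance of a sub-fractional Brownian motion with Hurst parameter $HK$; for centered Gaussian processes this is equivalent to equality of all finite-dimensional distributions:

```python
import numpy as np
from scipy.special import gamma

def R_sbfbm(t, s, H, K):                          # Eq (1.1)
    return (t**(2*H) + s**(2*H))**K - 0.5*(t + s)**(2*H*K) - 0.5*np.abs(t - s)**(2*H*K)

def cov_X(a, b, K):                               # Eq (2.3): covariance of X^K
    return gamma(1 - K)/K*(a**K + b**K - (a + b)**K)

def cov_sfbm(t, s, Hp):                           # sub-fractional BM with Hurst Hp
    return t**(2*Hp) + s**(2*Hp) - 0.5*((t + s)**(2*Hp) + np.abs(t - s)**(2*Hp))

H, K = 0.7, 0.6
rng = np.random.default_rng(1)
t, s = rng.uniform(0.1, 5.0, size=(2, 1000))
lhs = K/gamma(1 - K)*cov_X(t**(2*H), s**(2*H), K) + R_sbfbm(t, s, H, K)
rhs = cov_sfbm(t, s, H*K)
print(np.max(np.abs(lhs - rhs)))                  # ~ 1e-14, i.e. the covariances agree
```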

    Lemma 2.1 implies that

    $\{S^{H,K}_t,\,t\ge 0\}\stackrel{d}{=}\left\{S^{HK}_t-\sqrt{\dfrac{K}{\Gamma(1-K)}}\,X^K_{t^{2H}},\,t\ge 0\right\},$ (2.4)

    where $\stackrel{d}{=}$ means equality of all finite-dimensional distributions.

    By Theorem 2 in [13], the process $X^K$ has a version whose trajectories are infinitely differentiable on $(0,\infty)$ and absolutely continuous on $[0,\infty)$.

    Reference [15] presented a decomposition of the sub-fractional Brownian motion into the sum of a fractional Brownian motion plus a stochastic process with absolutely continuous trajectories. Namely, we have the following lemma.

    Lemma 2.2. Let $B^H$ be a fractional Brownian motion with Hurst parameter $H$, let $S^H$ be a sub-fractional Brownian motion with Hurst parameter $H$, and let $B=\{B_t,\,t\ge 0\}$ be a standard Brownian motion. Let

    $Y^H_t=\displaystyle\int_0^{\infty}\left(1-e^{-\theta t}\right)\theta^{-\frac{1+2H}{2}}\,dB_\theta.$ (2.5)

    (1) If $0<H<\frac{1}{2}$ and $B^H$ and $B$ are independent, then the processes

    $\left\{\sqrt{\dfrac{H}{\Gamma(1-2H)}}\,Y^H_t+B^H_t,\,t\ge 0\right\}$ and $\{S^H_t,\,t\ge 0\}$ have the same distribution.

    (2) If $\frac{1}{2}<H<1$ and $S^H$ and $B$ are independent, then the processes

    $\left\{\sqrt{\dfrac{H(2H-1)}{\Gamma(2-2H)}}\,Y^H_t+S^H_t,\,t\ge 0\right\}$ and $\{B^H_t,\,t\ge 0\}$ have the same distribution.

    Proof. See the proof of Theorem 2.2 in [15] or the proof of Theorem 3.5 in [14].
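    Both parts of Lemma 2.2 are again covariance identities for centered Gaussian processes, so they can be checked numerically in the same spirit as above; note that the coefficients of Lemma 2.2 enter the covariance squared. A minimal sketch (our own illustration, with arbitrary parameter values) using the covariance formula for $Y^H$ recalled at the beginning of the proof of Lemma 2.4 below:

```python
import numpy as np
from scipy.special import gamma

def cov_Y(t, s, H):
    """Covariance of Y^H (Proposition 2.1 in [15]), for H != 1/2."""
    if H < 0.5:
        return gamma(1 - 2*H)/(2*H)*(t**(2*H) + s**(2*H) - (t + s)**(2*H))
    return gamma(2 - 2*H)/(2*H*(2*H - 1))*((t + s)**(2*H) - t**(2*H) - s**(2*H))

def cov_fbm(t, s, H):                              # fractional Brownian motion
    return 0.5*(t**(2*H) + s**(2*H) - np.abs(t - s)**(2*H))

def cov_sfbm(t, s, H):                             # sub-fractional Brownian motion
    return t**(2*H) + s**(2*H) - 0.5*((t + s)**(2*H) + np.abs(t - s)**(2*H))

rng = np.random.default_rng(2)
t, s = rng.uniform(0.1, 5.0, size=(2, 1000))

H = 0.3                                            # case (1): 0 < H < 1/2
print(np.max(np.abs(H/gamma(1 - 2*H)*cov_Y(t, s, H) + cov_fbm(t, s, H) - cov_sfbm(t, s, H))))

H = 0.8                                            # case (2): 1/2 < H < 1
print(np.max(np.abs(H*(2*H - 1)/gamma(2 - 2*H)*cov_Y(t, s, H) + cov_sfbm(t, s, H) - cov_fbm(t, s, H))))
```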

    By (2.4) and Lemma 2.2, we get, when $0<HK<\frac{1}{2}$,

    $\{S^{H,K}_t,\,t\ge 0\}\stackrel{d}{=}\left\{B^{HK}_t+\sqrt{\dfrac{HK}{\Gamma(1-2HK)}}\,Y^{HK}_t-\sqrt{\dfrac{K}{\Gamma(1-K)}}\,X^K_{t^{2H}},\,t\ge 0\right\}$ (2.6)

    and, when $\frac{1}{2}<HK<1$,

    $\{S^{H,K}_t,\,t\ge 0\}\stackrel{d}{=}\left\{B^{HK}_t-\sqrt{\dfrac{HK(2HK-1)}{\Gamma(2-2HK)}}\,Y^{HK}_t-\sqrt{\dfrac{K}{\Gamma(1-K)}}\,X^K_{t^{2H}},\,t\ge 0\right\}.$ (2.7)

    The following Lemma 2.3 comes from Proposition 2.2 in [11].

    Lemma 2.3. Let $X^K_t$ be defined by (2.2). Then, as $h\to\infty$,

    $E\left[\left(X^K_{(h+t)^{2H}}-X^K_{h^{2H}}\right)^2\right]=\dfrac{\Gamma(1-K)}{K}2^KH^2K(1-K)\,t^2h^{2(HK-1)}(1+o(1)).$

    Therefore, as $h\to\infty$,

    $\{X^K_{(h+t)^{2H}}-X^K_{h^{2H}},\,t\ge 0\}\xrightarrow{d}\{0,\,t\ge 0\}.$

    Lemma 2.4. Let $Y^H_t$ be defined by (2.5). Then, as $h\to\infty$,

    $E\left[\left(Y^{HK}_{h+t}-Y^{HK}_h\right)^2\right]=2^{2HK-2}\Gamma(2-2HK)\,t^2h^{2(HK-1)}(1+o(1)).$

    Therefore, as $h\to\infty$,

    $\{Y^{HK}_{h+t}-Y^{HK}_h,\,t\ge 0\}\xrightarrow{d}\{0,\,t\ge 0\}.$

    Proof. By Proposition 2.1 in [15], we have

    $E\left(Y^H_tY^H_s\right)=\begin{cases}\dfrac{\Gamma(1-2H)}{2H}\left[t^{2H}+s^{2H}-(t+s)^{2H}\right], & \text{if }0<H<\frac{1}{2};\\[2mm]\dfrac{\Gamma(2-2H)}{2H(2H-1)}\left[(t+s)^{2H}-t^{2H}-s^{2H}\right], & \text{if }\frac{1}{2}<H<1.\end{cases}$

    When $0<HK<\frac{1}{2}$, we get

    $E\left(Y^{HK}_tY^{HK}_s\right)=\dfrac{\Gamma(1-2HK)}{2HK}\left[t^{2HK}+s^{2HK}-(t+s)^{2HK}\right].$

    In particular, for every $t\ge 0$,

    $E\left[\left(Y^{HK}_t\right)^2\right]=\dfrac{\Gamma(1-2HK)}{2HK}\left(2-2^{2HK}\right)t^{2HK}.$

    Hence, we obtain

    $E\left[\left(Y^{HK}_{h+t}-Y^{HK}_h\right)^2\right]=-\dfrac{\Gamma(1-2HK)}{2HK}2^{2HK}\left[(h+t)^{2HK}+h^{2HK}\right]+\dfrac{\Gamma(1-2HK)}{2HK}2(2h+t)^{2HK}.$

    Then, for large $h>0$, using Taylor's expansion, we have

    $I:=\dfrac{2HK}{\Gamma(1-2HK)}E\left[\left(Y^{HK}_{h+t}-Y^{HK}_h\right)^2\right]$
    $=-2^{2HK}\left[(h+t)^{2HK}+h^{2HK}\right]+2(2h+t)^{2HK}$
    $=-2^{2HK}h^{2HK}\left[\left(1+th^{-1}\right)^{2HK}+1\right]+2h^{2HK}\left(2+th^{-1}\right)^{2HK}$
    $=-2^{2HK}h^{2HK}\left[2+2HK\,th^{-1}+HK(2HK-1)t^2h^{-2}(1+o(1))\right]$
    $\quad+2h^{2HK}\left[2^{2HK}+2^{2HK-1}\cdot 2HK\,th^{-1}+2^{2HK-2}HK(2HK-1)t^2h^{-2}(1+o(1))\right]$
    $=2^{2HK-1}HK(1-2HK)\,t^2h^{2(HK-1)}(1+o(1)).$

    Thus,

    $E\left[\left(Y^{HK}_{h+t}-Y^{HK}_h\right)^2\right]=2^{2HK-2}(1-2HK)\Gamma(1-2HK)\,t^2h^{2(HK-1)}(1+o(1))$
    $=2^{2HK-2}\Gamma(2-2HK)\,t^2h^{2(HK-1)}(1+o(1)).$

    Similarly, we can prove the case $\frac{1}{2}<HK<1$. This completes the proof of Lemma 2.4.

    Proof of Theorem 2.1. Theorem 2.1 follows immediately from (2.6), (2.7), Lemma 2.3 and Lemma 2.4.
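    Theorem 2.1 can also be observed numerically: since all the processes involved are centered Gaussian, it suffices to look at covariances. The sketch below (illustrative only; the parameters and the grid are our own choices) computes the covariance of the increment process $\{S^{H,K}_{h+t}-S^{H,K}_h\}$ directly from (1.1) and compares it with the fractional Brownian motion covariance with Hurst index $HK$ as $h$ grows:

```python
import numpy as np

def R_sbfbm(t, s, H, K):                           # Eq (1.1)
    return (t**(2*H) + s**(2*H))**K - 0.5*(t + s)**(2*H*K) - 0.5*np.abs(t - s)**(2*H*K)

def cov_increment(h, t, s, H, K):
    """Cov(S_{h+t} - S_h, S_{h+s} - S_h), computed from the covariance (1.1)."""
    return (R_sbfbm(h + t, h + s, H, K) - R_sbfbm(h + t, h, H, K)
            - R_sbfbm(h, h + s, H, K) + R_sbfbm(h, h, H, K))

def cov_fbm(t, s, Hp):                             # fBm with Hurst index Hp
    return 0.5*(t**(2*Hp) + s**(2*Hp) - np.abs(t - s)**(2*Hp))

H, K = 0.7, 0.8
t, s = np.meshgrid(np.linspace(0.1, 3.0, 30), np.linspace(0.1, 3.0, 30))
for h in (1.0, 10.0, 100.0, 1000.0):
    gap = np.max(np.abs(cov_increment(h, t, s, H, K) - cov_fbm(t, s, H*K)))
    print(h, gap)                                  # the gap shrinks as h grows (Theorem 2.1)
```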

    In this section, we give another view of Theorem 2.1 by considering the sub-bifractional Brownian noise, i.e., the increments of the sub-bifractional Brownian motion. For every integer $n\ge 0$, the sub-bifractional Brownian noise is defined by

    $Y_n:=S^{H,K}_{n+1}-S^{H,K}_n.$

    Denote

    $R(a,a+n):=E(Y_aY_{a+n})=E\left[\left(S^{H,K}_{a+1}-S^{H,K}_a\right)\left(S^{H,K}_{a+n+1}-S^{H,K}_{a+n}\right)\right].$ (3.1)

    We obtain

    $R(a,a+n)=f_a(n)+g(n)-g(2a+n+1),$ (3.2)

    where

    $f_a(n)=\left[(a+1)^{2H}+(a+n+1)^{2H}\right]^K-\left[(a+1)^{2H}+(a+n)^{2H}\right]^K$
    $\qquad\quad-\left[a^{2H}+(a+n+1)^{2H}\right]^K+\left[a^{2H}+(a+n)^{2H}\right]^K$

    and

    $g(n)=\dfrac{1}{2}\left[(n+1)^{2HK}+(n-1)^{2HK}-2n^{2HK}\right].$

    We know that the function $g$ is the covariance function of the fractional Brownian noise with Hurst index $HK$. Thus we need to analyze the function $f_a$ to understand "how far" the sub-bifractional Brownian noise is from the fractional Brownian noise. In other words, how far is the sub-bifractional Brownian motion from a process with stationary increments?
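    The decomposition (3.2) is an algebraic identity and can be sanity-checked directly from (1.1) and (3.1); a minimal sketch (illustrative, with arbitrary parameter values) is:

```python
import numpy as np

def R_sbfbm(t, s, H, K):                           # Eq (1.1)
    return (t**(2*H) + s**(2*H))**K - 0.5*(t + s)**(2*H*K) - 0.5*np.abs(t - s)**(2*H*K)

def R_noise(a, n, H, K):                           # Eq (3.1), expanded bilinearly
    return (R_sbfbm(a + 1, a + n + 1, H, K) - R_sbfbm(a + 1, a + n, H, K)
            - R_sbfbm(a, a + n + 1, H, K) + R_sbfbm(a, a + n, H, K))

def f_a(a, n, H, K):
    return ((a + 1)**(2*H) + (a + n + 1)**(2*H))**K - ((a + 1)**(2*H) + (a + n)**(2*H))**K \
           - (a**(2*H) + (a + n + 1)**(2*H))**K + (a**(2*H) + (a + n)**(2*H))**K

def g(n, H, K):
    return 0.5*((n + 1)**(2*H*K) + (n - 1)**(2*H*K) - 2*n**(2*H*K))

H, K = 0.65, 0.9
for a in (1, 5, 20):
    for n in (1, 2, 10, 50):
        lhs = R_noise(a, n, H, K)
        rhs = f_a(a, n, H, K) + g(n, H, K) - g(2*a + n + 1, H, K)
        assert abs(lhs - rhs) < 1e-10              # the decomposition (3.2) holds exactly
print("decomposition (3.2) verified on the sampled (a, n)")
```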

    The sub-bifractional Brownian noise is not stationary. However, the following theorem shows that it converges to a stationary sequence.

    Theorem 3.1. For each $n$, as $a\to\infty$, we have

    $f_a(n)=2H^2K(K-1)a^{2(HK-1)}(1+o(1))$ (3.3)

    and

    $g(2a+n+1)=2^{2HK-2}HK(2HK-1)a^{2(HK-1)}(1+o(1)).$ (3.4)

    Therefore $\lim_{a\to\infty}f_a(n)=0$ and $\lim_{a\to\infty}g(2a+n+1)=0$ for each $n$.

    Proof. (3.3) is obtained from Theorem 3.3 in Maejima and Tudor [11]. For (3.4), we have

    $g(2a+n+1)=\dfrac{1}{2}\left[(2a+n+2)^{2HK}+(2a+n)^{2HK}-2(2a+n+1)^{2HK}\right]$
    $=2^{2HK-1}a^{2HK}\left[\left(1+\dfrac{n+2}{2}a^{-1}\right)^{2HK}+\left(1+\dfrac{n}{2}a^{-1}\right)^{2HK}-2\left(1+\dfrac{n+1}{2}a^{-1}\right)^{2HK}\right]$
    $=2^{2HK-1}a^{2HK}\left[1+2HK\dfrac{n+2}{2}a^{-1}+HK(2HK-1)\left(\dfrac{n+2}{2}\right)^2a^{-2}(1+o(1))+1+2HK\dfrac{n}{2}a^{-1}+HK(2HK-1)\left(\dfrac{n}{2}\right)^2a^{-2}(1+o(1))-2\left(1+2HK\dfrac{n+1}{2}a^{-1}+HK(2HK-1)\left(\dfrac{n+1}{2}\right)^2a^{-2}(1+o(1))\right)\right]$
    $=2^{2HK-2}HK(2HK-1)a^{2(HK-1)}(1+o(1)).$

    Hence the proof of Theorem 3.1 is completed.

    We are now interested in the behavior of the sub-bifractional Brownian noise (3.1) with respect to $n$ (as $n\to\infty$). We have the following result.

    Theorem 3.2. For integers $a,n\ge 0$, let $R(a,a+n)$ be given by (3.1). Then, for large $n$,

    $R(a,a+n)=HK(K-1)\left[(a+1)^{2H}-a^{2H}\right]n^{2(HK-1)+(1-2H)}+o\left(n^{2(HK-1)+(1-2H)}\right).$

    Proof. By (3.2), we have

    $R(a,a+n)=f_a(n)+g(n)-g(2a+n+1).$

    By the proof of Theorem 4.1 in [11], for large $n$ the term $f_a(n)$ behaves as

    $HK(K-1)\left[(a+1)^{2H}-a^{2H}\right]n^{2(HK-1)+(1-2H)}+o\left(n^{2(HK-1)+(1-2H)}\right).$

    We know that the term $g(n)$ behaves as $HK(2HK-1)n^{2(HK-1)}$ for large $n$. As for $g(2a+n+1)$, a computation similar to that in the proof of Theorem 3.1 shows that it also behaves as $HK(2HK-1)n^{2(HK-1)}$ for large $n$, so these two terms cancel at leading order. This completes the proof of Theorem 3.2.

    It is easy to obtain the following corollary.

    Corollary 3.1. For integers $a\ge 1$ and $n\ge 0$, let $R(a,a+n)$ be given by (3.1). Then, for every $a\in\mathbb{N}$, we have

    $\displaystyle\sum_{n\ge 0}R(a,a+n)<\infty.$

    Proof. By Theorem 3.2, the main term of $R(a,a+n)$ is $n^{2HK-2H-1}$, and since $2HK-2H-1<-1$, the series is convergent.
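    As a quick illustration of this summability (again a numerical sketch of our own, with arbitrary parameters), the partial sums of $R(a,a+n)$ computed directly from (3.1) settle down as the number of terms grows:

```python
import numpy as np

def R_sbfbm(t, s, H, K):                           # Eq (1.1)
    return (t**(2*H) + s**(2*H))**K - 0.5*(t + s)**(2*H*K) - 0.5*np.abs(t - s)**(2*H*K)

def R_noise(a, n, H, K):                           # Eq (3.1)
    return (R_sbfbm(a + 1, a + n + 1, H, K) - R_sbfbm(a + 1, a + n, H, K)
            - R_sbfbm(a, a + n + 1, H, K) + R_sbfbm(a, a + n, H, K))

H, K, a = 0.7, 0.8, 3
for N in (10, 100, 1000, 10_000):
    # partial sums approach a finite limit (slowly, since the terms decay like n^{2HK-2H-1})
    print(N, sum(R_noise(a, n, H, K) for n in range(N + 1)))
```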

    In this section, we prove two limit theorems for the sub-bifractional Brownian motion. Define a function $g(t,s)$, $t\ge 0$, $s\ge 0$, by

    $g(t,s)=\dfrac{\partial^2R^{H,K}(t,s)}{\partial t\,\partial s}=4H^2K(K-1)\left(t^{2H}+s^{2H}\right)^{K-2}(ts)^{2H-1}+HK(2HK-1)|t-s|^{2HK-2}$
    $\qquad\qquad-HK(2HK-1)(t+s)^{2HK-2}$
    $=:g_1(t,s)+g_2(t,s)-g_3(t,s),$ (4.1)

    for $(t,s)$ with $t\ne s$, $t\ge 0$, $s\ge 0$ and $t+s\ne 0$.

    Theorem 4.1. Assume that $2HK>1$ and let $\{\xi_j,\,j=1,2,\dots\}$ be a sequence of standard normal random variables. $g(t,s)$ is defined by (4.1). Suppose that $E(\xi_i\xi_j)=g(i,j)$. Then, as $n\to\infty$,

    $\left\{n^{-HK}\displaystyle\sum_{j=1}^{[nt]}\xi_j,\,t\ge 0\right\}\xrightarrow{d}\{S^{H,K}_t,\,t\ge 0\}.$

    Remark 1. Theorems 4.1 and 4.2 (below) are analogues of the central limit theorem and can serve as a basis for many subsequent studies.

    In order to prove Theorem 4.1, we need the following lemma.

    Lemma 4.1. When $2HK>1$, we have

    $\displaystyle\int_0^t\!\!\int_0^s g(u,v)\,dv\,du=\left(t^{2H}+s^{2H}\right)^K-\dfrac{1}{2}(t+s)^{2HK}-\dfrac{1}{2}|t-s|^{2HK}.$

    Proof. It follows from the fact that $g(t,s)=\dfrac{\partial^2R^{H,K}(t,s)}{\partial t\,\partial s}$ for every $t\ge 0$, $s\ge 0$, and by using that $2HK>1$.

    Proof of Theorem 4.1. Since all the processes involved are centered Gaussian, it is enough to show that, as $n\to\infty$,

    $I_n:=E\left[\left(n^{-HK}\displaystyle\sum_{i=1}^{[nt]}\xi_i\right)\left(n^{-HK}\sum_{j=1}^{[ns]}\xi_j\right)\right]\to E\left(S^{H,K}_tS^{H,K}_s\right).$

    In fact, we have

    $I_n=n^{-2HK}\displaystyle\sum_{i=1}^{[nt]}\sum_{j=1}^{[ns]}E(\xi_i\xi_j)=n^{-2HK}\sum_{i=1}^{[nt]}\sum_{j=1}^{[ns]}g(i,j).$

    Note that

    $g\left(\dfrac{i}{n},\dfrac{j}{n}\right)=4H^2K(K-1)\left[\left(\dfrac{i}{n}\right)^{2H}+\left(\dfrac{j}{n}\right)^{2H}\right]^{K-2}\left(\dfrac{ij}{n^2}\right)^{2H-1}$
    $\qquad+HK(2HK-1)\left|\dfrac{i}{n}-\dfrac{j}{n}\right|^{2HK-2}-HK(2HK-1)\left(\dfrac{i}{n}+\dfrac{j}{n}\right)^{2HK-2}$
    $=n^{2(1-HK)}g(i,j).$ (4.2)

    Thus, as $n\to\infty$,

    $I_n=n^{-2HK}\displaystyle\sum_{i=1}^{[nt]}\sum_{j=1}^{[ns]}n^{2HK-2}g\left(\dfrac{i}{n},\dfrac{j}{n}\right)$
    $=n^{-2}\displaystyle\sum_{i=1}^{[nt]}\sum_{j=1}^{[ns]}g\left(\dfrac{i}{n},\dfrac{j}{n}\right)$
    $\to\displaystyle\int_0^t\!\!\int_0^s g(u,v)\,dv\,du$
    $=\left(t^{2H}+s^{2H}\right)^K-\dfrac{1}{2}(t+s)^{2HK}-\dfrac{1}{2}|t-s|^{2HK}$
    $=E\left(S^{H,K}_tS^{H,K}_s\right).$

    Hence, the proof of Theorem 4.1 is complete.
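    The key covariance convergence in this proof can be visualized numerically. The sketch below (an illustration with our own parameter choices) evaluates the Riemann-type sum $n^{-2HK}\sum_{i\ne j}g(i,j)$ for $t=s=1$ and compares it with $E[(S^{H,K}_1)^2]=2^K-2^{2HK-1}$; the diagonal terms are omitted, since they contribute at most $n^{1-2HK}\to 0$:

```python
import numpy as np

def g(t, s, H, K):
    """The function of Eq (4.1), defined for t != s."""
    g1 = 4*H**2*K*(K - 1)*(t**(2*H) + s**(2*H))**(K - 2)*(t*s)**(2*H - 1)
    g2 = H*K*(2*H*K - 1)*np.abs(t - s)**(2*H*K - 2)
    g3 = H*K*(2*H*K - 1)*(t + s)**(2*H*K - 2)
    return g1 + g2 - g3

H, K = 0.9, 0.9                                    # 2HK = 1.62 > 1
target = 2**K - 2**(2*H*K - 1)                     # R^{H,K}(1,1) = E[(S_1)^2]
for n in (50, 200, 800):
    i, j = np.meshgrid(np.arange(1, n + 1), np.arange(1, n + 1))
    off_diag = i != j
    I_n = n**(-2*H*K)*np.sum(g(i[off_diag].astype(float), j[off_diag].astype(float), H, K))
    print(n, I_n, "->", target)                    # I_n approaches R^{H,K}(1,1) as n grows
```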

    We now consider a more general sequence of nonlinear functionals of standard normal random variables. Let $f$ be a real-valued function such that $f(x)$ does not vanish on a set of positive measure, $E[f(\xi_1)]=0$ and $E[(f(\xi_1))^2]<\infty$. Let $H_k$ denote the $k$-th Hermite polynomial with highest coefficient 1. We have

    $f(x)=\displaystyle\sum_{k=1}^{\infty}c_kH_k(x),$

    where $\sum_{k=1}^{\infty}c_k^2k!<\infty$ and $c_k=\frac{1}{k!}E[f(\xi_j)H_k(\xi_j)]$ (see, e.g., [16]). Assume that $c_1\ne 0$. Let $\eta_j=f(\xi_j)$, $j=1,2,\dots$, where $\{\xi_j,\,j=1,2,\dots\}$ is the same sequence of standard normal random variables as before.
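    For concreteness, the Hermite coefficients can be computed numerically. The short sketch below (our own example, not from the paper) uses the probabilists' Hermite polynomials $He_k$ (leading coefficient 1, as assumed here) and the test function $f(x)=x^3$, whose expansion is $f=3H_1+H_3$, so that $c_1=3$, $c_3=1$ and $\sum_k c_k^2k!=15=E[\xi^6]$:

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval    # probabilists' Hermite polynomials He_k
from scipy.integrate import quad

f = lambda x: x**3
phi = lambda x: np.exp(-x**2/2)/np.sqrt(2*np.pi)   # standard normal density

def c(k):
    """c_k = E[f(xi) He_k(xi)] / k! for xi ~ N(0,1)."""
    e_k = np.zeros(k + 1); e_k[k] = 1.0            # coefficient vector selecting He_k
    val, _ = quad(lambda x: f(x)*hermeval(x, e_k)*phi(x), -np.inf, np.inf)
    return val/factorial(k)

coeffs = [c(k) for k in range(6)]
print(np.round(coeffs, 6))                         # approximately [0, 3, 0, 1, 0, 0]
print(sum(ck**2*factorial(k) for k, ck in enumerate(coeffs)))   # approximately 15 = E[f(xi)^2]
```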

    Theorem 4.2. Assume that $2HK>\frac{3}{2}$ and let $\{\xi_j,\,j=1,2,\dots\}$ be a sequence of standard normal random variables. $g(t,s)$ is defined by (4.1). Suppose that $E(\xi_i\xi_j)=g(i,j)$. Then, as $n\to\infty$,

    $\left\{n^{-HK}\displaystyle\sum_{j=1}^{[nt]}\eta_j,\,t\ge 0\right\}\xrightarrow{d}\{c_1S^{H,K}_t,\,t\ge 0\}.$

    Proof. Note that $\eta_j=f(\xi_j)=c_1\xi_j+\sum_{k=2}^{\infty}c_kH_k(\xi_j)$. We obtain

    $n^{-HK}\displaystyle\sum_{j=1}^{[nt]}\eta_j=c_1n^{-HK}\sum_{j=1}^{[nt]}\xi_j+n^{-HK}\sum_{j=1}^{[nt]}\sum_{k=2}^{\infty}c_kH_k(\xi_j).$

    Using Theorem 4.1, it is enough to show that, as $n\to\infty$,

    $E\left[\left(n^{-HK}\displaystyle\sum_{j=1}^{[nt]}\sum_{k=2}^{\infty}c_kH_k(\xi_j)\right)^2\right]\to 0.$ (4.3)

    In fact, we get

    $J_n:=E\left[\left(n^{-HK}\displaystyle\sum_{j=1}^{[nt]}\sum_{k=2}^{\infty}c_kH_k(\xi_j)\right)^2\right]$
    $=n^{-2HK}\displaystyle\sum_{i=1}^{[nt]}\sum_{j=1}^{[nt]}\sum_{k=2}^{\infty}\sum_{l=2}^{\infty}c_kc_lE[H_k(\xi_i)H_l(\xi_j)].$

    We know that, if $\xi$ and $\eta$ are two random variables with joint Gaussian distribution such that $E(\xi)=E(\eta)=0$, $E(\xi^2)=E(\eta^2)=1$ and $E(\xi\eta)=r$, then

    $E[H_k(\xi)H_l(\eta)]=\delta_{k,l}r^kk!,$

    where

    $\delta_{k,l}=\begin{cases}1, & \text{if }k=l;\\ 0, & \text{if }k\ne l.\end{cases}$

    Thus,

    $J_n=n^{-2HK}\displaystyle\sum_{i=1}^{[nt]}\sum_{j=1}^{[nt]}\sum_{k=2}^{\infty}c_k^2\left(E(\xi_i\xi_j)\right)^kk!$
    $=n^{-2HK}[nt]\displaystyle\sum_{k=2}^{\infty}c_k^2k!+n^{-2HK}\sum_{i,j=1;\,i\ne j}^{[nt]}\sum_{k=2}^{\infty}c_k^2k!\,[g(i,j)]^k.$

    Since $|g(i,j)|\le\left(E(\xi_i^2)\right)^{\frac{1}{2}}\left(E(\xi_j^2)\right)^{\frac{1}{2}}=1$, we get, by (4.2),

    $J_n\le n^{-2HK}[nt]\displaystyle\sum_{k=2}^{\infty}c_k^2k!+n^{-2HK}\sum_{i,j=1;\,i\ne j}^{[nt]}\sum_{k=2}^{\infty}c_k^2k!\,[g(i,j)]^2$
    $=n^{-2HK}[nt]\displaystyle\sum_{k=2}^{\infty}c_k^2k!+n^{-2HK}\sum_{k=2}^{\infty}c_k^2k!\sum_{i,j=1;\,i\ne j}^{[nt]}[g(i,j)]^2$
    $\le tn^{1-2HK}\displaystyle\sum_{k=2}^{\infty}c_k^2k!+n^{2(HK-1)}\left(\sum_{k=2}^{\infty}c_k^2k!\right)n^{-2}\sum_{i,j=1;\,i\ne j}^{[nt]}\left[g\left(\dfrac{i}{n},\dfrac{j}{n}\right)\right]^2.$ (4.4)

    On one hand, by $\sum_{k=2}^{\infty}c_k^2k!<\infty$ and $2HK>\frac{3}{2}>1$, we get, as $n\to\infty$,

    $tn^{1-2HK}\displaystyle\sum_{k=2}^{\infty}c_k^2k!\to 0.$ (4.5)

    On the other hand, we have

    $n^{-2}\displaystyle\sum_{i,j=1;\,i\ne j}^{[nt]}\left[g\left(\dfrac{i}{n},\dfrac{j}{n}\right)\right]^2=n^{-2}\sum_{i,j=1;\,i\ne j}^{[nt]}\left[g_1\left(\dfrac{i}{n},\dfrac{j}{n}\right)+g_2\left(\dfrac{i}{n},\dfrac{j}{n}\right)-g_3\left(\dfrac{i}{n},\dfrac{j}{n}\right)\right]^2$
    $\le 3n^{-2}\displaystyle\sum_{i,j=1;\,i\ne j}^{[nt]}\left\{\left[g_1\left(\dfrac{i}{n},\dfrac{j}{n}\right)\right]^2+\left[g_2\left(\dfrac{i}{n},\dfrac{j}{n}\right)\right]^2+\left[g_3\left(\dfrac{i}{n},\dfrac{j}{n}\right)\right]^2\right\}.$

    Since $|g_1(u,v)|\le C(uv)^{HK-1}$ and $2HK>\frac{3}{2}>1$, we obtain

    $n^{-2}\displaystyle\sum_{i,j=1;\,i\ne j}^{[nt]}\left[g_1\left(\dfrac{i}{n},\dfrac{j}{n}\right)\right]^2\to\int_0^t\!\!\int_0^t g_1^2(u,v)\,du\,dv\le C\int_0^t\!\!\int_0^t(uv)^{2HK-2}\,du\,dv<\infty.$ (4.6)

    We know that

    $n^{-2}\displaystyle\sum_{i,j=1;\,i\ne j}^{[nt]}\left[g_3\left(\dfrac{i}{n},\dfrac{j}{n}\right)\right]^2\to\int_0^t\!\!\int_0^t g_3^2(u,v)\,du\,dv$
    $=H^2K^2(2HK-1)^2\displaystyle\int_0^t\!\!\int_0^t(u+v)^{4HK-4}\,du\,dv$
    $<\infty,$ (4.7)

    since $2HK>\frac{3}{2}>1$.

    We also have

    $n^{-2}\displaystyle\sum_{i,j=1;\,i\ne j}^{[nt]}\left[g_2\left(\dfrac{i}{n},\dfrac{j}{n}\right)\right]^2\to\int_0^t\!\!\int_0^t g_2^2(u,v)\,du\,dv$
    $=H^2K^2(2HK-1)^2\displaystyle\int_0^t\!\!\int_0^t|u-v|^{4HK-4}\,du\,dv$
    $<\infty,$ (4.8)

    since $2HK>\frac{3}{2}$. Thus (4.3) holds from (4.4)–(4.8) and $2HK>\frac{3}{2}$. The proof is completed.

    Remark 2. As pointed out in [11], when $2HK>1$ the convergence of

    $n^{2(HK-1)}\,n^{-2}\displaystyle\sum_{i,j=1;\,i\ne j}^{[nt]}\left[g_2\left(\dfrac{i}{n},\dfrac{j}{n}\right)\right]^2$

    had already been proved in [16]. However, we could not find the details in [16]. Here we only give the proof for $2HK>\frac{3}{2}$, because $2HK>\frac{3}{2}$ is the condition under which (4.8) holds.

    In this section, we study an approximation in law of the fractional Brownian motion via the tangent process generated by the sbfBm $S^{H,K}$.

    Theorem 5.1. Let $H\in(0,1)$ and $K\in(0,1)$. For every $t_0>0$, as $\epsilon\to 0$, the tangent process satisfies

    $\left\{\dfrac{S^{H,K}_{t_0+\epsilon u}-S^{H,K}_{t_0}}{\epsilon^{HK}},\,u\ge 0\right\}\xrightarrow{d}\{B^{HK}_u,\,u\ge 0\},$ (5.1)

    where $B^{HK}$ is the fractional Brownian motion with Hurst index $HK$.

    Proof. When $0<HK<\frac{1}{2}$, by (2.6), we get

    $\{S^{H,K}_t,\,t\ge 0\}\stackrel{d}{=}\left\{B^{HK}_t+\sqrt{\dfrac{HK}{\Gamma(1-2HK)}}\,Y^{HK}_t-\sqrt{\dfrac{K}{\Gamma(1-K)}}\,X^K_{t^{2H}},\,t\ge 0\right\}.$

    By (2.5) in [12], there exists a constant $C(H,K)>0$ such that

    $E\left[\left(\dfrac{X^K_{(t_0+\epsilon u)^{2H}}-X^K_{t_0^{2H}}}{\epsilon^{HK}}\right)^2\right]=C(H,K)\,t_0^{2(HK-1)}u^2\epsilon^{2(1-HK)}(1+o(1)),$

    which tends to zero as $\epsilon\to 0$, since $1-HK>0$.

    On the other hand, similarly to the proof of Lemma 2.4, we obtain

    $E\left[\left(\dfrac{Y^{HK}_{t_0+\epsilon u}-Y^{HK}_{t_0}}{\epsilon^{HK}}\right)^2\right]=2^{2HK-2}\Gamma(2-2HK)\,t_0^{2(HK-1)}u^2\epsilon^{2(1-HK)}(1+o(1)),$

    which also tends to zero as $\epsilon\to 0$. Therefore (5.1) holds. Similarly, (5.1) also holds for the case $\frac{1}{2}<HK<1$. This completes the proof.
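    As with Theorem 2.1, the tangent-process convergence is a statement about Gaussian covariances and can be inspected numerically. A minimal sketch (our own illustration; $t_0$ and the parameters are arbitrary) compares the covariance of the rescaled increments, computed from (1.1), with the fBm covariance as $\epsilon$ decreases:

```python
import numpy as np

def R_sbfbm(t, s, H, K):                           # Eq (1.1)
    return (t**(2*H) + s**(2*H))**K - 0.5*(t + s)**(2*H*K) - 0.5*np.abs(t - s)**(2*H*K)

def cov_tangent(t0, eps, u, v, H, K):
    """Cov of (S_{t0+eps*u}-S_{t0})/eps^{HK} and (S_{t0+eps*v}-S_{t0})/eps^{HK}."""
    c = (R_sbfbm(t0 + eps*u, t0 + eps*v, H, K) - R_sbfbm(t0 + eps*u, t0, H, K)
         - R_sbfbm(t0, t0 + eps*v, H, K) + R_sbfbm(t0, t0, H, K))
    return c/eps**(2*H*K)

def cov_fbm(u, v, Hp):
    return 0.5*(u**(2*Hp) + v**(2*Hp) - np.abs(u - v)**(2*Hp))

H, K, t0 = 0.6, 0.7, 2.0
u, v = np.meshgrid(np.linspace(0.1, 2.0, 20), np.linspace(0.1, 2.0, 20))
for eps in (1e-1, 1e-2, 1e-3):
    gap = np.max(np.abs(cov_tangent(t0, eps, u, v, H, K) - cov_fbm(u, v, H*K)))
    print(eps, gap)                                # the gap shrinks as eps -> 0 (Theorem 5.1)
```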

    In this paper, we have proved that the increment process generated by the sub-bifractional Brownian motion converges to the fractional Brownian motion. Moreover, we have studied the behavior of the noise associated with the sbfBm and the behavior of its tangent process. In the future, we will investigate limits of Gaussian noises.

    Nenghui Kuang was supported by the Natural Science Foundation of Hunan Province under Grant 2021JJ30233. The author wishes to thank the anonymous referees for their careful reading of a previous version of this paper and for their comments, which improved the paper.

    The author declares there is no conflict of interest.



    [1] D. Shi, C. Sun, X. Sheng, X. Bi, Design of monitoring system for driving safety based on convolutional neural network, J. Hebei North Univ., 36 (2020), 57–61. https://doi.org/10.3969/j.issn.1673-1492.2020.09.011 doi: 10.3969/j.issn.1673-1492.2020.09.011
    [2] X. Meng, Driving fatigue caused by traffic accident characteristics and effective prevention analysis, Logist. Eng. Manage., 8 (2014), 187–188. https://doi.org/10.3969/j.issn.1674-4993.2014.08.073 doi: 10.3969/j.issn.1674-4993.2014.08.073
    [3] X. Gong, J. Fang, X. Tan, A. Liao, C. Xiao, Analysis of the current situation of road traffic accidents in the 31 provinces/municipalities of China and the projection for achieving the SDGs target of halving the numbers of death and injury, Chin. J. Dis. Control Prev., 24 (2020), 4–8. http://doi.org/10.16462/j.cnki.zhjbkz.2020.01.002 doi: 10.16462/j.cnki.zhjbkz.2020.01.002
    [4] S. Chen, J. Hu, Causative analysis of road traffic accidents and research on safety prevention measures, Leg. Syst. Soc., 27 (2020), 143–144. https://doi.org/10.19387/j.cnki.1009-0592.2020.09.247 doi: 10.19387/j.cnki.1009-0592.2020.09.247
    [5] J. Wang, X. Yu, Q. Liu, Y. Zhou, Research on key technologies of intelligent transportation based on image recognition and anti-fatigue driving, EURASIP J. Image Video Process., 1 (2019), 33–45. https://doi.org/10.1186/s13640-018-0403-6 doi: 10.1186/s13640-018-0403-6
    [6] X. Wang, R. Chen, B. Huang, Implementation of driver driving safety monitoring system based on android system, Electron. Meas. Technol., 42 (2019), 56–60. https://doi.org/10.19651/j.cnki.emt.1802406 doi: 10.19651/j.cnki.emt.1802406
    [7] S. Liu, L. He, Fatigue driving detection system based on image processing, J. Yuncheng Univ., 39 (2021), 51–54. https://doi.org/10.15967/j.cnki.cn14-1316/g4.2021.06.013 doi: 10.15967/j.cnki.cn14-1316/g4.2021.06.013
    [8] F. Liu, D. Chen, J. Zhou, F. Xu, A review of driver fatigue detection and its advances on the use of RGB-D camera and deep learning, Eng. Appl. Artif. Intell., 116 (2022), 105399. https://doi.org/10.1016/j.engappai.2022.105399 doi: 10.1016/j.engappai.2022.105399
    [9] Y. Sui, Z. Yan, L. Dai, H. Jing, Face multi-attribute detection algorithm based on RetinaFace, Railway Comput. Appl., 30 (2021), 1–4. https://doi.org/10.3969/j.issn.1005-8451.2021.03.001 doi: 10.3969/j.issn.1005-8451.2021.03.001
    [10] S. Yang, P. Luo, C. C. Loy, X. Tang, Wider face: a face detection benchmark, in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2016), 5525–5533. https://doi.org/10.1109/CVPR.2016.596
    [11] G. M. Clayton, S. Devasia, Image-based compensation of dynamic effects in scanning tunnelling microscopes, Nanotechnology, 16 (2005), 809–818. https://doi.org/10.1088/0957-4484/16/6/032 doi: 10.1088/0957-4484/16/6/032
    [12] L. Huang, H. Yang, B. Wang, Research and improvement of multi-method combined face image illumination compensation algorithm, J. Chongqing Univ. Technol., 31 (2017), 6–12. https://doi.org/10.3969/j.issn.1674-8425(z).2017.11.027 doi: 10.3969/j.issn.1674-8425(z).2017.11.027
    [13] L. Shao, R. Yan, X. Li, Y. Liu, From heuristic optimization to dictionary learning: A review and comprehensive comparison of Image denoising algorithms, IEEE Trans. Cybern., 44 (2017), 1001–1013. https://doi.org/10.1109/TCYB.2013.2278548 doi: 10.1109/TCYB.2013.2278548
    [14] C. Shi, C. Zhang, Q. He, H. Wang, Target detection based on improved feature pyramid, Electron. Meas. Technol., 44 (2021), 150–156. https://doi.org/10.19651/j.cnki.emt.2107598 doi: 10.19651/j.cnki.emt.2107598
    [15] X. Guo, Research on Multi-Scale Face Detection Based on Convolution Neural Networks, M.S. thesis, North China Electric Power University in Hebei, 2020.
    [16] F. Chen, Research on Cosine Loss Algorithm for Face Verification, M.S. thesis, Xiangtan University in Hunan, 2020. https://doi.org/10.27426/d.cnki.gxtdu.2020.001269
    [17] Z. Yang, L. Hou, D. Yang, Improved face recognition algorithm of attitude correction, Cyber Secur. Data Governance, 35 (2016), 56–60. https://doi.org/10.19358/j.issn.1674-7720.2016.03.019 doi: 10.19358/j.issn.1674-7720.2016.03.019
    [18] S. Preetha, S. V. Sheela, Security monitoring system using facenet for wireless sensor network, preprint, arXiv: 2112.01305.
    [19] X. Li, R. Huang, Z. Chen, Y. Long, L. Xu, An improved face detection and recognition algorithm based on FaceNet and MTCNN, J. Guangdong Univ. of Petrochem. Technol., 31 (2021), 45–47.
    [20] J. Wang, J. Li, X. Zhou, X. Zhang, Improved SSD algorithm and its performance analysis of small target detection in remote sensing images, Acta Opt. Sin. 39 (2019), 10. https://doi.org/10.3788/AOS201939.0628005 doi: 10.3788/AOS201939.0628005
    [21] S. Mao, H. Li, Research on improved SSD algorithm for detection in traffic, Microprocessors, 43 (2022), 26–29.
    [22] B. Wang, Y. Lv, X. Hei, H. Jin, Lightweight deep convolutional neural network model based on dilated convolution, 2020. Available from: https://kns.cnki.net/kcms2/article/abstract?v=kxaUMs6x7-4I2jr5WTdXti3zQ9F92xu0dKxhnJcY9pxwfrkG2rAGFOJWdZMiOIJjZJ9FLVWmYcCCgfpgeyHSjqedCLDh_ut5&uniplatform=NZKPT
    [23] L. Jiang, J. Li, B. Huang, Research on face feature detection algorithm based on improved SSD, Mach. Des. Manuf. Eng., 50 (2021), 82–86. https://doi.org/10.3969/j.issn.2095-509X.2021.07.017 doi: 10.3969/j.issn.2095-509X.2021.07.017
    [24] X. Zhang, A. Jiang, SSD Small Target detection algorithm combining feature enhancement and self-attention, Comput. Eng. Appl., 58 (2022), 247–255. https://doi.org/10.3778/j.issn.1002-8331.2109-0356 doi: 10.3778/j.issn.1002-8331.2109-0356
    [25] J. Guo, T. Yu, Y. Cui, X. Zhou, Research on vehicle small target detection algorithm based on improved SSD, Comput. Technol. Dev., 32 (2022), 1–7.
    [26] Q. Zheng, L. Wang, F. Wang, Candidate box generation method based on improved ssd network, 2020. Available from: https://kns.cnki.net/kcms2/article/abstract?v=kxaUMs6x7-4I2jr5WTdXti3zQ9F92xu0ManZHCyoNk-lwS3y-OLIR4fcD18PUKrUkLhyHScAkvpkTgimuL-OfVjGi7Jisy2h&uniplatform=NZKPT
    [27] Q. Song, X. Wang, C. Zhang, Y. Chen, H. Song, A residual SSD model based on window size clustering for traffic sign detection, J. Hunan Univ., 46 (2019), 133–140. https://doi.org/10.16339/j.cnki.hdxbzkb.2019.10.016 doi: 10.16339/j.cnki.hdxbzkb.2019.10.016
    [28] W. Chen, Lightweight convolutional neural network remote sensing image target detection, Beijing Surv. Mapp., 36 (2018), 178–183. https://doi.org/10.19580/j.cnki.1007-3000.2022.02.014 doi: 10.19580/j.cnki.1007-3000.2022.02.014
    [29] K. Chen, Research on SSD-based Multi-scale Detection Algorithm, M.S. thesis, Beijing Jiaotong University in Beijing, 2020. https://doi.org/10.26944/d.cnki.gbfju.2020.002225
    [30] H. Zhang, M. Zhang, SSD Target Detection Algorithm with Channel Attention Mechanism, Comput. Eng., 46 (2020), 264–270. https://doi.org/10.19678/j.issn.1000-3428.0054946 doi: 10.19678/j.issn.1000-3428.0054946
    [31] Z. A. Haq, Z. Hasan, Eye-blink rate detection for fatigue determination, in 2016 1st India International Conference on Information Processing (IICIP), (2016), 1–5. https://doi.org/10.1109/IICIP.2016.7975348
    [32] X. Zhou, S. Wang, W. Zhao, X. Zhao, T. Li, Fatigue Driving Detection Based on State Recognition of Eyes and Mouth, J. Jilin Univ., 35 (2017), 204–211. https://doi.org/10.19292/j.cnki.jdxxp.2017.02.015 doi: 10.19292/j.cnki.jdxxp.2017.02.015
  • This article has been cited by:

    1. Nenghui Kuang, Huantian Xie, Least squares type estimators for the drift parameters in the sub-bifractional Vasicek processes, 2023, 26, 0219-0257, 10.1142/S0219025723500042
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
