
A taxi detour trajectory detection model based on iBAT and DTW algorithm

  • Received: 19 July 2022 Revised: 10 September 2022 Accepted: 20 September 2022 Published: 17 October 2022
  • Taxi detour is a chronic problem in urban transport systems: it largely undermines passengers' riding experience and the city's image while unnecessarily worsening traffic congestion, and tourists unfamiliar with city roads are particularly prone to it. It is therefore important for regulatory authorities to have a tool that detects and identifies detour behavior. This study proposes a detour trajectory detection model framework based on taxi trajectory data that identifies taxi detour fraud at the microscopic level and analyzes the characteristics of detouring trajectories from the perspective of microscopic motion traits. Measuring the deviation from normal driving trajectories, the framework enables automatic detection of detour trajectories for the off-site taxi supervision platform. Considering the drawbacks of the isolation-Based Anomalous Trajectory (iBAT) algorithm, this paper makes further improvements to trajectory anomaly detection. Three methods, namely iBAT, iBAT + Dynamic Time Warping (DTW), and iBAT + DTW with driving distance and time taken into account, are compared using the relevant experimental data, and the case studies verify that the proposed method outperforms the other methods. In experiments based on trajectory data from Nanjing, the false positive rate of this framework is only 1.64%.

    Citation: Jian Wan, Peiyun Yang, Wenbo Zhang, Yaxing Cheng, Runlin Cai, Zhiyuan Liu. A taxi detour trajectory detection model based on iBAT and DTW algorithm[J]. Electronic Research Archive, 2022, 30(12): 4507-4529. doi: 10.3934/era.2022229




    Over the past decades, owing to a broad variety of applications in engineering, the sciences and economics, the linear complementarity problem (LCP) has been an active topic in the optimization community and has attracted considerable interest. The LCP is a powerful mathematical model that is intimately related to many significant scientific problems, such as the well-known primal-dual linear programming, the bimatrix game, convex quadratic programming, the American option pricing problem and others; see e.g., [1,2,3] for more details. The LCP consists in determining a vector $z \in \mathbb{R}^n$ such that

    $$z \ge 0, \quad v = Az + q \ge 0 \quad\text{and}\quad z^{\top}v = 0, \tag{1.1}$$

    where $A \in \mathbb{R}^{n\times n}$ and $q \in \mathbb{R}^n$ are given. We hereafter abbreviate the problem (1.1) as LCP($A,q$).

    The LCP($A,q$) of form (1.1), together with its extensions, has been extensively investigated in recent years, and designing numerical algorithms that obtain the solution of the LCP($A,q$) (1.1) quickly and economically is of great significance. Many iterative algorithms have been developed for this purpose over the past decades, such as the pivot algorithms [1,2,4], the projected iterative methods [5,6,7,8], the multisplitting methods [9,10,11,12,13,14], the Newton-type iteration methods [15,16] and others; see e.g., [17,18,19] and the references therein. The modulus-based matrix splitting (MMS) iteration method, first introduced in [20], is particularly attractive for solving the LCP($A,q$) (1.1). Using the variable transformation $z = \frac{1}{\gamma}(|x| + x)$ and $v = \frac{\Omega}{\gamma}(|x| - x)$, and letting $A = M - N$, Bai reformulated the LCP($A,q$) (1.1) as the equivalent form [20]

    $$(\Omega + M)x = Nx + (\Omega - A)|x| - \gamma q,$$

    where $\gamma > 0$ and $\Omega \in \mathbb{R}^{n\times n}$ is a positive diagonal matrix. He then skillfully designed a general framework of the MMS iteration method for solving the large-scale sparse LCP($A,q$) (1.1), which exhibits the following formal formulation.

    Algorithm 1.1. ([20]) (The MMS method) Let $A = M - N$ be a splitting of the matrix $A \in \mathbb{R}^{n\times n}$. Assume that $x^0 \in \mathbb{R}^n$ is an arbitrary initial guess. For $k = 0, 1, 2, \ldots$, compute $x^{k+1}$ by solving the linear system

    $$(\Omega + M)x^{k+1} = Nx^k + (\Omega - A)|x^k| - \gamma q,$$

    and then set

    $$z^{k+1} = \frac{1}{\gamma}\bigl(|x^{k+1}| + x^{k+1}\bigr)$$

    until the iteration sequence $\{z^k\}$ is convergent. Here, $\Omega \in \mathbb{R}^{n\times n}$ is a positive diagonal matrix and $\gamma$ is a positive constant.
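    For concreteness, the following is a minimal Python/NumPy sketch of Algorithm 1.1. The Gauss-Seidel choice $M = D_A - L_A$, the parameters $\Omega = D_A$ and $\gamma = 1$, the zero initial guess and the residual-based stopping rule are illustrative assumptions, not prescribed by the algorithm itself.

```python
import numpy as np

def mms_lcp(A, q, gamma=1.0, tol=1e-5, max_it=500):
    """Sketch of the MMS iteration (Algorithm 1.1) with assumed defaults:
    Omega = D_A, gamma = 1 and the Gauss-Seidel splitting M = D_A - L_A."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)              # strictly lower part, sign convention A = D_A - L_A - U_A
    M = D - L                        # modulus-based Gauss-Seidel (MGS) choice
    N = M - A
    Omega = D
    x = np.zeros(A.shape[0])
    for k in range(max_it):
        # (Omega + M) x_{k+1} = N x_k + (Omega - A)|x_k| - gamma * q
        x = np.linalg.solve(Omega + M, N @ x + (Omega - A) @ np.abs(x) - gamma * q)
        z = (np.abs(x) + x) / gamma
        res = np.linalg.norm(np.minimum(A @ z + q, z))   # residual as used in Section 4
        if res < tol:
            break
    return z, k + 1, res
```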

    The MMS iteration method not only covers some existing iteration methods, such as the nonstationary extrapolated modulus method [21] and the modified modulus method [22], as special cases, but also yields a series of modulus-based relaxation methods, such as the modulus-based Jacobi (MJ), the modulus-based Gauss-Seidel (MGS), the modulus-based successive overrelaxation (MSOR) and the modulus-based accelerated overrelaxation (MAOR) methods. Owing to its promising performance and elegant mathematical properties, the MMS iterative scheme quickly received considerable attention, and diverse variants of it have appeared. For instance, Zheng and Yin [23] established a new class of accelerated MMS (AMMS) iteration methods for solving the large-scale sparse LCP($A,q$) (1.1) and analyzed their convergence when the system matrix $A$ is a positive definite matrix or an $H_+$-matrix. In order to further accelerate the MMS method, Zheng et al. [24] combined the relaxation strategy with the matrix splitting technique in the modulus equation of [25] and presented a relaxation MMS (RMMS) iteration method for solving the LCP($A,q$) (1.1); the parameter selection strategies of the RMMS method were discussed in depth in [24]. In addition, the RMMS method covers the general MMS (GMMS) method [25] as a special case. In the sequel, by extending the two-sweep iteration methods [26,27], Wu and Li [28] developed a general framework of two-sweep MMS (TMMS) iteration methods for the LCP($A,q$) (1.1), and the convergence of the TMMS method was established for the system matrix $A$ being either an $H_+$-matrix or a positive-definite matrix. Ren et al. [29] proposed a class of general two-sweep MMS (GTMMS) iteration methods for the LCP($A,q$) (1.1), which encompasses the TMMS method by selecting appropriate parameter matrices. Peng et al. [30] presented a relaxation two-sweep MMS (RTMMS) iteration method for solving the LCP($A,q$) (1.1) and gave its convergence theory for $A$ being an $H_+$-matrix or a positive-definite matrix. Huang et al. [31] combined the parametric strategy, the relaxation technique and the acceleration technique to construct an accelerated relaxation MMS (ARMMS) iteration method for solving the LCP($A,q$) (1.1). The ARMMS method can be regarded as a generalization of several existing methods, such as the MMS [20], the GMMS [25] and the RMMS [24] methods. For more modulus-based matrix splitting type iteration methods, see [32,33,34,35,36,37,38,39,40,41] and the references therein.

    On the other hand, Bai and Tong [42] equivalently transformed the LCP($A,q$) (1.1) into a nonlinear equation without using a variable transformation and proposed an efficient iterative algorithm based on matrix splittings and extrapolation acceleration techniques. Relaxed versions of the method of [42] were then constructed by Bai and Huang [43], and their convergence theories were established under mild conditions. Recently, Wu and Li [44] recast the LCP($A,q$) (1.1) as the implicit fixed-point equation

    $$(\Omega + M)z = Nz + |(A - \Omega)z + q| - q, \tag{1.2}$$

    where $A = M - N$. In fact, if $M = A$ and $\Omega = I$, then (1.2) reduces to the fixed-point equation proposed in [42]. Based on (1.2), the new MMS (NMMS) method for solving the LCP($A,q$) (1.1) was constructed in [44].

    Algorithm 1.2. ([44]) (The NMMS method) Let $A = M - N$ be a splitting of the matrix $A \in \mathbb{R}^{n\times n}$ and let the matrix $\Omega + M$ be nonsingular, where $\Omega \in \mathbb{R}^{n\times n}$ is a positive diagonal matrix. Given a nonnegative initial vector $z^0 \in \mathbb{R}^n$, for $k = 0, 1, 2, \ldots$ until the iteration sequence $\{z^k\}$ is convergent, compute $z^{k+1} \in \mathbb{R}^n$ by solving the linear system

    $$(\Omega + M)z^{k+1} = Nz^k + |(A - \Omega)z^k + q| - q.$$

    It is obvious that the NMMS method does not need any variable transformation, which distinguishes it from the above-mentioned MMS-type iteration methods. Nevertheless, the NMMS method inherits the merits of the MMS-type iteration methods, and some relaxed versions of it have been studied. A sketch of Algorithm 1.2 is given below.
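    The following is a minimal Python/NumPy sketch of Algorithm 1.2. The choices $\Omega = D_A$ and the Gauss-Seidel splitting $M = D_A - L_A$ (i.e., the NMGS case used later in Section 4), as well as the initial vector and the stopping rule, are assumptions made here for illustration.

```python
import numpy as np

def nmms_lcp(A, q, z0=None, tol=1e-5, max_it=500):
    """Sketch of the NMMS iteration (Algorithm 1.2), assuming Omega = D_A and
    the Gauss-Seidel splitting M = D_A - L_A (the NMGS case of Remark 1.1)."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    M = D - L
    N = M - A
    Omega = D
    z = np.zeros(A.shape[0]) if z0 is None else z0.astype(float).copy()
    for k in range(max_it):
        # (Omega + M) z_{k+1} = N z_k + |(A - Omega) z_k + q| - q
        z = np.linalg.solve(Omega + M, N @ z + np.abs((A - Omega) @ z + q) - q)
        res = np.linalg.norm(np.minimum(A @ z + q, z))
        if res < tol:
            break
    return z, k + 1, res
```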

    Remark 1.1. Let $A = D_A - L_A - U_A$, where $D_A$, $L_A$ and $U_A$ denote the diagonal, strictly lower-triangular and strictly upper-triangular parts of $A$, respectively. It has been mentioned in [44] that Algorithm 1.2 reduces to the following methods.

    (i) If $M = A$, $\Omega = I$ and $N = 0$, then Algorithm 1.2 becomes the new modulus method:

    $$(I + A)z^{k+1} = |(A - I)z^k + q| - q.$$

    (ii) If $M = A$, $N = 0$ and $\Omega = \alpha I$, then Algorithm 1.2 turns into the new modified modulus iteration method:

    $$(\alpha I + A)z^{k+1} = |(A - \alpha I)z^k + q| - q.$$

    (iii) If $M = \frac{1}{\alpha}(D_A - \beta L_A)$ and $N = \frac{1}{\alpha}\bigl[(1-\alpha)D_A + (\alpha - \beta)L_A + \alpha U_A\bigr]$, then Algorithm 1.2 reduces to the new MAOR iteration method:

    $$(\alpha\Omega + D_A - \beta L_A)z^{k+1} = \bigl[(1-\alpha)D_A + (\alpha-\beta)L_A + \alpha U_A\bigr]z^k + \alpha\bigl(|(A - \Omega)z^k + q| - q\bigr). \tag{1.3}$$

    Evidently, based on (1.3), when $(\alpha,\beta)$ is taken as $(\alpha,\alpha)$, $(1,1)$ and $(1,0)$, we obtain the new MSOR (NMSOR), the new MGS (NMGS) and the new MJ (NMJ) iteration methods, respectively.

    The goal of this paper is to further improve the computational efficiency of Algorithm 1.2 for solving the LCP($A,q$) (1.1). To this end, we combine the two-sweep matrix splitting iteration technique of [28,29] with a relaxation technique and construct a new class of relaxed acceleration two-sweep MMS (NRATMMS) iteration methods for solving the LCP($A,q$) (1.1). The convergence of the NRATMMS iteration method is studied in detail. By choosing suitable parameter matrices, the NRATMMS iteration method generates several relaxation variants. Numerical results are reported to demonstrate the efficiency of the NRATMMS iteration method.

    The remainder of this paper is organized as follows. In Section 2, we present some notations and definitions used hereinafter. Section 3 is devoted to establishing the NRATMMS iteration method for solving the LCP($A,q$) (1.1), and the global linear convergence of the proposed method is explored. Section 4 reports the numerical results. Finally, some concluding remarks are given in Section 5.

    In this section, we collect some notations, classical definitions and some auxiliary results which lay the foundation of our developments.

    $\mathbb{R}^{n\times n}$ denotes the set of all $n\times n$ real matrices and $\mathbb{R}^n = \mathbb{R}^{n\times 1}$. $I$ is the identity matrix of suitable dimension. $|\cdot|$ denotes the absolute value of a real scalar or the modulus of a complex scalar. For $x \in \mathbb{R}^n$, $x_i$ refers to its $i$-th entry and $|x| = (|x_1|, |x_2|, \ldots, |x_n|)^{\top} \in \mathbb{R}^n$ represents the componentwise absolute value of $x$. $\mathrm{tridiag}(a,b,c)$ denotes a tridiagonal matrix with $a$, $b$ and $c$ as its subdiagonal, main diagonal and superdiagonal entries, respectively. $\mathrm{Tridiag}(A,B,C)$ denotes a block tridiagonal matrix with $A$, $B$ and $C$ as its subdiagonal, main diagonal and superdiagonal blocks, respectively.

    For two matrices $P = (p_{ij}) \in \mathbb{R}^{m\times n}$ and $Q = (q_{ij}) \in \mathbb{R}^{m\times n}$, we write $P \ge Q$ ($P > Q$) if $p_{ij} \ge q_{ij}$ ($p_{ij} > q_{ij}$) holds for all $i$ and $j$. For $A = (a_{ij}) \in \mathbb{R}^{m\times n}$, $A^{\top}$ and $|A| = (|a_{ij}|) \in \mathbb{R}^{m\times n}$ represent the transpose and the componentwise absolute value of $A$, respectively. For $A = (a_{ij}) \in \mathbb{R}^{n\times n}$, $\rho(A)$ denotes its spectral radius. Moreover, the comparison matrix $\langle A\rangle = (\langle a\rangle_{ij})$ is defined by

    $$\langle a\rangle_{ij} = \begin{cases} |a_{ij}|, & \text{if } i = j,\\ -|a_{ij}|, & \text{if } i \ne j, \end{cases} \qquad i, j = 1, 2, \ldots, n.$$

    A matrix $A \in \mathbb{R}^{n\times n}$ is called a Z-matrix if all of its off-diagonal entries are nonpositive, and a P-matrix if all of its principal minors are positive. A Z-matrix $A$ is called an M-matrix if $A^{-1} \ge 0$, and $A$ is called an H-matrix if its comparison matrix $\langle A\rangle$ is an M-matrix. In particular, an H-matrix with positive diagonal entries is called an $H_+$-matrix [9]. In addition, a sufficient condition for $A$ to be a P-matrix is that $A$ is an $H_+$-matrix. $A \in \mathbb{R}^{n\times n}$ is called strictly diagonally dominant if $|a_{ii}| > \sum_{j\ne i}|a_{ij}|$ for all $1 \le i \le n$.

    Let $M$ be nonsingular. Then $A = M - N$ is called an M-splitting if $M$ is an M-matrix and $N \ge 0$, an H-splitting if $\langle M\rangle - |N|$ is an M-matrix, and an H-compatible splitting if $\langle A\rangle = \langle M\rangle - |N|$ [45]. Finally, the following lemmas are needed in the convergence analysis of the proposed method; a small numerical helper for checking these splitting conditions is sketched after this paragraph.
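    As an illustration, the definitions above can be checked numerically. The following Python/NumPy sketch builds the comparison matrix and tests the M-matrix and splitting conditions; the tolerance and the inverse-based M-matrix test are simplifying assumptions rather than part of the paper.

```python
import numpy as np

def comparison_matrix(A):
    """<A>: |a_ii| on the diagonal, -|a_ij| off the diagonal."""
    C = -np.abs(A)
    np.fill_diagonal(C, np.abs(np.diag(A)))
    return C

def is_m_matrix(A, tol=1e-12):
    """Numerical check: Z-matrix whose inverse is (entrywise) nonnegative."""
    off = A - np.diag(np.diag(A))
    if not np.all(off <= tol):
        return False
    try:
        return bool(np.all(np.linalg.inv(A) >= -tol))
    except np.linalg.LinAlgError:
        return False

def is_h_splitting(M, N):
    """A = M - N is an H-splitting if <M> - |N| is an M-matrix."""
    return is_m_matrix(comparison_matrix(M) - np.abs(N))

def is_h_compatible_splitting(M, N, A, tol=1e-12):
    """A = M - N is H-compatible if <A> = <M> - |N|."""
    return bool(np.allclose(comparison_matrix(A),
                            comparison_matrix(M) - np.abs(N), atol=tol))
```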

    Lemma 1. ([46]) Let $A \in \mathbb{R}^{n\times n}$ be an $H_+$-matrix. Then the LCP($A,q$) (1.1) has a unique solution for any $q \in \mathbb{R}^n$.

    Lemma 2. ([47]) Let $B \in \mathbb{R}^{n\times n}$ be a strictly diagonally dominant matrix. Then for all $C \in \mathbb{R}^{n\times n}$,

    $$\|B^{-1}C\|_{\infty} \le \max_{1\le i\le n} \frac{(|C|e)_i}{(\langle B\rangle e)_i}$$

    holds, where $e = (1, 1, \ldots, 1)^{\top}$.

    Lemma 3. ([48]) Let $A$ be an H-matrix. Then $|A^{-1}| \le \langle A\rangle^{-1}$.

    Lemma 4. ([49]) If $A$ is an M-matrix, then there exists a positive diagonal matrix $V$ such that $AV$ is a strictly diagonally dominant matrix with positive diagonal entries.

    Lemma 5. ([49]) Let $A$ and $B$ be two Z-matrices, let $A$ be an M-matrix and let $B \ge A$. Then $B$ is an M-matrix.

    Lemma 6. ([26]) Let

    $$\mathcal{A} = \begin{pmatrix} B & C\\ I & 0 \end{pmatrix} \ge 0 \quad\text{and}\quad \rho(B + C) < 1;$$

    then $\rho(\mathcal{A}) < 1$.

    Lemma 7. ([45]) If $A = M - N$ is an M-splitting of $A$, then $\rho(M^{-1}N) < 1$ if and only if $A$ is an M-matrix.

    In this section, the NRATMMS iteration method for solving the LCP(A,q) (1.1) is developed, and the general convergence analysis of the NRATMMS iteration method will be explored.

    Let $A = M_1 - N_1 = M_2 - N_2$ be two splittings of $A$ and let $\Omega = \Omega_1 - \Omega_2 = \Omega_3 - \Omega_4$, where $\Omega_i$ ($i = 1,2,3,4$) are nonnegative diagonal matrices. Then (1.2) can be reformulated as the following fixed-point equation:

    $$(\Omega_1 + M_1)z = (N_1 + \Omega_2)\bigl[\theta z + (1-\theta)z\bigr] + \bigl|(M_2 - \Omega_3)z + (\Omega_4 - N_2)z + q\bigr| - q, \tag{3.1}$$

    where $\theta \ge 0$ is a relaxation parameter. Based on (3.1), the NRATMMS iteration method is established as the following Algorithm 3.1.

    Algorithm 3.1. (The NRATMMS iteration method) Let $A = M_1 - N_1 = M_2 - N_2$ be two splittings of $A$ and let $\Omega = \Omega_1 - \Omega_2 = \Omega_3 - \Omega_4$, where $\Omega_i$ ($i = 1,2,3,4$) are nonnegative diagonal matrices such that $M_1 + \Omega_1$ is nonsingular. Given two initial guesses $z^0, z^1 \in \mathbb{R}^n$ and a nonnegative relaxation parameter $\theta$, the iteration sequence $\{z^k\}$ is generated by

    $$(\Omega_1 + M_1)z^{k+1} = (N_1 + \Omega_2)\bigl[\theta z^k + (1-\theta)z^{k-1}\bigr] + \bigl|(M_2 - \Omega_3)z^k + (\Omega_4 - N_2)z^{k-1} + q\bigr| - q \tag{3.2}$$

    for $k = 1, 2, \ldots$ until convergence.
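    For illustration, the following is a minimal Python/NumPy sketch of Algorithm 3.1. The splitting matrices, the parameter matrices $\Omega_i$ and the relaxation parameter $\theta$ are passed in by the caller; the zero defaults for the initial vectors and the residual-based stopping rule are assumptions.

```python
import numpy as np

def nratmms_lcp(A, q, M1, N1, M2, N2, Om1, Om2, Om3, Om4,
                theta=1.0, z0=None, z1=None, tol=1e-5, max_it=500):
    """Sketch of the NRATMMS iteration (Algorithm 3.1):
    (Om1 + M1) z_{k+1} = (N1 + Om2)[theta z_k + (1 - theta) z_{k-1}]
                         + |(M2 - Om3) z_k + (Om4 - N2) z_{k-1} + q| - q."""
    n = A.shape[0]
    z_prev = np.zeros(n) if z0 is None else z0.astype(float).copy()   # z^{k-1}
    z_curr = np.zeros(n) if z1 is None else z1.astype(float).copy()   # z^k
    lhs = Om1 + M1
    for k in range(max_it):
        rhs = (N1 + Om2) @ (theta * z_curr + (1.0 - theta) * z_prev) \
              + np.abs((M2 - Om3) @ z_curr + (Om4 - N2) @ z_prev + q) - q
        z_prev, z_curr = z_curr, np.linalg.solve(lhs, rhs)
        res = np.linalg.norm(np.minimum(A @ z_curr + q, z_curr))
        if res < tol:
            break
    return z_curr, k + 1, res
```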

    Algorithm 3.1 provides a general framework of NMMS-type iteration methods for solving the LCP($A,q$) (1.1), and it yields a series of NMMS-type iteration methods for suitable choices of the matrix splittings and the relaxation parameter. For instance, when $\theta = 1$ and $\Omega_i = 0$ ($i = 1,2,3,4$), Algorithm 3.1 reduces to the new accelerated two-sweep MMS (NATMMS) iteration method

    $$M_1 z^{k+1} = N_1 z^k + |M_2 z^k - N_2 z^{k-1} + q| - q.$$

    When $\theta = 1$, $\Omega_1 = \Omega_3 = \Omega$, $\Omega_2 = \Omega_4 = 0$, $M_2 = A$ and $N_2 = 0$, Algorithm 3.1 reduces to Algorithm 1.2. When $M_1 = \frac{1}{\alpha}(D_A - \beta L_A)$, $N_1 = \frac{1}{\alpha}\bigl[(1-\alpha)D_A + (\alpha - \beta)L_A + \alpha U_A\bigr]$, $M_2 = D_A - U_A$ and $N_2 = L_A$ with $\alpha, \beta > 0$, Algorithm 3.1 gives the new relaxed acceleration two-sweep MAOR (NRATMAOR) iteration method; a sketch of how to build these splitting matrices is given below. If $(\alpha, \beta)$ is taken as $(\alpha,\alpha)$, $(1,1)$ and $(1,0)$, the NRATMAOR iteration method reduces to the new relaxed acceleration two-sweep MSOR (NRATMSOR), the new relaxed acceleration two-sweep MGS (NRATMGS) and the new relaxed acceleration two-sweep MJ (NRATMJ) iteration methods, respectively.
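    The NRATMAOR splitting matrices can be constructed directly from the triangular parts of $A$; the following Python/NumPy helper is a sketch under the sign convention $A = D_A - L_A - U_A$ used above.

```python
import numpy as np

def nratmaor_splittings(A, alpha, beta):
    """Splittings for the NRATMAOR method:
    M1 = (D_A - beta*L_A)/alpha, N1 = ((1-alpha)*D_A + (alpha-beta)*L_A + alpha*U_A)/alpha,
    M2 = D_A - U_A, N2 = L_A, with A = D_A - L_A - U_A."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)     # L_A (so that A = D - L - U)
    U = -np.triu(A, 1)      # U_A
    M1 = (D - beta * L) / alpha
    N1 = ((1.0 - alpha) * D + (alpha - beta) * L + alpha * U) / alpha
    M2 = D - U
    N2 = L
    return M1, N1, M2, N2
```

    With $(\alpha,\beta) = (1,1)$ this returns the NRATMGS splittings used in the experiments of Section 4; in all cases $M_1 - N_1 = M_2 - N_2 = A$ by construction.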

    The convergence analysis for Algorithm 3.1 is carried out for the case where the system matrix $A$ of the LCP($A,q$) (1.1) is an $H_+$-matrix.

    Lemma 8. Assume that $A \in \mathbb{R}^{n\times n}$ is an $H_+$-matrix. Let $A = M_1 - N_1$ and $A = M_2 - N_2$ be an H-splitting and a general splitting of $A$, respectively, and let $\Omega_i$ ($i = 1,2,3,4$) be four nonnegative diagonal matrices such that $M_1 + \Omega_1$ is nonsingular. Denote

    $$\tilde{L} = \langle\Omega_1 + M_1\rangle^{-1}\bigl[(\theta + |1-\theta|)|N_1 + \Omega_2| + |M_2 - \Omega_3| + |\Omega_4 - N_2|\bigr].$$

    Then the iteration sequence $\{z^k\}$ generated by Algorithm 3.1 converges to the unique solution $z^*$ for arbitrary two initial vectors if $\rho(\tilde{L}) < 1$.

    Proof. Let $z^*$ be the exact solution of the LCP($A,q$) (1.1); then it satisfies

    $$(\Omega_1 + M_1)z^* = (N_1 + \Omega_2)\bigl[\theta z^* + (1-\theta)z^*\bigr] + \bigl|(M_2 - \Omega_3)z^* + (\Omega_4 - N_2)z^* + q\bigr| - q. \tag{3.3}$$

    Subtracting (3.3) from (3.2), we have

    $$\begin{aligned}
    |z^{k+1} - z^*| &= \Bigl|(\Omega_1 + M_1)^{-1}(N_1 + \Omega_2)\bigl[\theta(z^k - z^*) + (1-\theta)(z^{k-1} - z^*)\bigr]\\
    &\qquad + (\Omega_1 + M_1)^{-1}\bigl|(M_2 - \Omega_3)z^k + (\Omega_4 - N_2)z^{k-1} + q\bigr| - (\Omega_1 + M_1)^{-1}\bigl|(M_2 - \Omega_3)z^* + (\Omega_4 - N_2)z^* + q\bigr|\Bigr|\\
    &\le |(\Omega_1 + M_1)^{-1}|\,|N_1 + \Omega_2|\,\bigl|\theta(z^k - z^*) + (1-\theta)(z^{k-1} - z^*)\bigr|\\
    &\qquad + |(\Omega_1 + M_1)^{-1}|\,\Bigl|\,\bigl|(M_2 - \Omega_3)z^k + (\Omega_4 - N_2)z^{k-1} + q\bigr| - \bigl|(M_2 - \Omega_3)z^* + (\Omega_4 - N_2)z^* + q\bigr|\,\Bigr|\\
    &\le |(\Omega_1 + M_1)^{-1}|\,|N_1 + \Omega_2|\bigl[\theta|z^k - z^*| + |1-\theta||z^{k-1} - z^*|\bigr]\\
    &\qquad + |(\Omega_1 + M_1)^{-1}|\,\bigl|(M_2 - \Omega_3)(z^k - z^*) + (\Omega_4 - N_2)(z^{k-1} - z^*)\bigr|\\
    &\le |(\Omega_1 + M_1)^{-1}|\,|N_1 + \Omega_2|\bigl[\theta|z^k - z^*| + |1-\theta||z^{k-1} - z^*|\bigr]\\
    &\qquad + |(\Omega_1 + M_1)^{-1}|\bigl[|M_2 - \Omega_3||z^k - z^*| + |\Omega_4 - N_2||z^{k-1} - z^*|\bigr]\\
    &= |(\Omega_1 + M_1)^{-1}|\bigl[\theta|N_1 + \Omega_2| + |M_2 - \Omega_3|\bigr]|z^k - z^*|\\
    &\qquad + |(\Omega_1 + M_1)^{-1}|\bigl[|1-\theta||N_1 + \Omega_2| + |\Omega_4 - N_2|\bigr]|z^{k-1} - z^*|.
    \end{aligned}$$

    For simplicity, let

    $$F = |(\Omega_1 + M_1)^{-1}|\bigl[\theta|N_1 + \Omega_2| + |M_2 - \Omega_3|\bigr] \tag{3.4}$$

    and

    $$G = |(\Omega_1 + M_1)^{-1}|\bigl[|1-\theta||N_1 + \Omega_2| + |\Omega_4 - N_2|\bigr]. \tag{3.5}$$

    Then we have

    $$\begin{pmatrix} |z^{k+1} - z^*|\\ |z^k - z^*| \end{pmatrix} \le \begin{pmatrix} F & G\\ I & 0 \end{pmatrix} \begin{pmatrix} |z^k - z^*|\\ |z^{k-1} - z^*| \end{pmatrix}.$$

    Let

    $$L = \begin{pmatrix} F & G\\ I & 0 \end{pmatrix};$$

    then the iteration sequence $\{z^k\}$ converges to the unique solution $z^*$ if $\rho(L) < 1$. Since $L \ge 0$, according to Lemma 6, $\rho(F + G) < 1$ implies $\rho(L) < 1$. Hence, to prove the convergence of Algorithm 3.1, it suffices to prove $\rho(F + G) < 1$.

    Since $A$ is an $H_+$-matrix and $A = M_1 - N_1$ is an H-splitting of $A$, i.e., $\langle M_1\rangle - |N_1|$ is an M-matrix, it follows from Lemma 5 and $\langle M_1\rangle \ge \langle M_1\rangle - |N_1|$ that $M_1$ is an H-matrix, and hence $\Omega_1 + M_1$ is also an H-matrix. In light of Lemma 3, it follows that

    $$0 \le |(\Omega_1 + M_1)^{-1}| \le \langle\Omega_1 + M_1\rangle^{-1}.$$

    Recalling (3.4) and (3.5), we obtain

    $$F + G = |(\Omega_1 + M_1)^{-1}|\bigl[(\theta + |1-\theta|)|N_1 + \Omega_2| + |M_2 - \Omega_3| + |\Omega_4 - N_2|\bigr],$$

    which yields

    $$0 \le F + G \le \langle\Omega_1 + M_1\rangle^{-1}\bigl[(\theta + |1-\theta|)|N_1 + \Omega_2| + |M_2 - \Omega_3| + |\Omega_4 - N_2|\bigr] := \tilde{L}.$$

    As a consequence, based on the monotonicity of the spectral radius for nonnegative matrices, the iteration sequence $\{z^k\}$ generated by Algorithm 3.1 converges to the unique solution $z^*$ of the LCP($A,q$) (1.1) if $\rho(\tilde{L}) < 1$. The proof is completed.
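    Lemma 8 gives a checkable sufficient condition. The following Python/NumPy sketch evaluates $\rho(\tilde{L})$ for given splittings and parameter matrices, reusing the comparison_matrix helper from the sketch in Section 2; it is meant only for numerical verification of the condition, not as part of the method itself.

```python
import numpy as np

def rho_L_tilde(M1, N1, M2, N2, Om1, Om2, Om3, Om4, theta):
    """Spectral radius of L~ = <Om1 + M1>^{-1} [ (theta + |1 - theta|)|N1 + Om2|
                                                 + |M2 - Om3| + |Om4 - N2| ]."""
    lhs = comparison_matrix(Om1 + M1)
    rhs = (theta + abs(1.0 - theta)) * np.abs(N1 + Om2) \
          + np.abs(M2 - Om3) + np.abs(Om4 - N2)
    L_tilde = np.linalg.solve(lhs, rhs)
    return float(np.max(np.abs(np.linalg.eigvals(L_tilde))))
```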

    Theorem 3.1. Assume that $A \in \mathbb{R}^{n\times n}$ is an $H_+$-matrix. Let $A = M_1 - N_1$ be an H-compatible splitting and $A = M_2 - N_2$ be an M-splitting of $A$, and let $\Omega_i$ ($i = 1,2,3,4$) be four nonnegative diagonal matrices such that $M_1 + \Omega_1$ is nonsingular. Denote

    $$\tilde{L} = \langle\Omega_1 + M_1\rangle^{-1}\bigl[(\theta + |1-\theta|)|N_1 + \Omega_2| + |M_2 - \Omega_3| + |\Omega_4 - N_2|\bigr].$$

    Then the iteration sequence $\{z^k\}$ generated by Algorithm 3.1 converges to the unique solution $z^*$ of the LCP($A,q$) (1.1) for arbitrary two initial vectors if one of the following two conditions holds.

    (i) $0 < \theta \le 1$ and $\Omega_i$ ($i = 1,2,3,4$) satisfy

    $$\begin{cases} \langle A\rangle Ve > \Omega_4 Ve, & \text{if } \Omega_3 \ge D_{M_2},\\ (\langle A\rangle + \Omega)Ve > D_{M_2}Ve, & \text{if } \Omega_3 < D_{M_2}. \end{cases} \tag{3.6}$$

    (ii) $\theta > 1$ and $\Omega_i$ ($i = 1,2,3,4$) satisfy

    $$\begin{cases} \theta < 1 + \min\limits_{1\le i\le n}\dfrac{[(\langle A\rangle - \Omega_4)Ve]_i}{[(|N_1| + \Omega_2)Ve]_i}\ \text{ and }\ \dfrac{[(\langle A\rangle - \Omega_4)Ve]_i}{[(|N_1| + \Omega_2)Ve]_i} > 0, & \text{if } \Omega_3 \ge D_{M_2},\\[3mm] \theta < 1 + \min\limits_{1\le i\le n}\dfrac{[(\langle A\rangle + \Omega - D_{M_2})Ve]_i}{[(|N_1| + \Omega_2)Ve]_i}\ \text{ and }\ \dfrac{[(\langle A\rangle + \Omega - D_{M_2})Ve]_i}{[(|N_1| + \Omega_2)Ve]_i} > 0, & \text{if } \Omega_3 < D_{M_2}. \end{cases} \tag{3.7}$$

    Here, $D_{M_2}$ denotes the diagonal part of $M_2$, $\Omega = \Omega_1 - \Omega_2 = \Omega_3 - \Omega_4$, and $V$ is an arbitrary positive diagonal matrix such that $\langle\Omega_1 + M_1\rangle V$ is a strictly diagonally dominant matrix.

    Proof. According to Lemma 8, we only need to show $\rho(\tilde{L}) < 1$. On the basis of Lemma 2 and Lemma 4, it follows that

    $$\begin{aligned}
    \rho(\tilde{L}) = \rho(V^{-1}\tilde{L}V) &\le \|V^{-1}\tilde{L}V\|_{\infty}\\
    &= \bigl\|\,[\langle\Omega_1 + M_1\rangle V]^{-1}\bigl[(\theta + |1-\theta|)|N_1 + \Omega_2| + |M_2 - \Omega_3| + |\Omega_4 - N_2|\bigr]V\,\bigr\|_{\infty}\\
    &\le \max_{1\le i\le n}\frac{\bigl\{\bigl[(\theta + |1-\theta|)|N_1 + \Omega_2| + |M_2 - \Omega_3| + |\Omega_4 - N_2|\bigr]Ve\bigr\}_i}{\bigl[\langle\Omega_1 + M_1\rangle Ve\bigr]_i}.
    \end{aligned}$$

    When $0 < \theta \le 1$, it holds that

    $$\rho(\tilde{L}) \le \max_{1\le i\le n}\frac{\bigl\{\bigl[|N_1 + \Omega_2| + |M_2 - \Omega_3| + |\Omega_4 - N_2|\bigr]Ve\bigr\}_i}{\bigl[\langle\Omega_1 + M_1\rangle Ve\bigr]_i}. \tag{3.8}$$

    Since $A = M_2 - N_2$ is an M-splitting of $A$, $M_2$ is an M-matrix. Let $M_2 = D_{M_2} - B_{M_2}$ be the splitting of $M_2$ into its (positive) diagonal part $D_{M_2}$ and its off-diagonal part $-B_{M_2}$.

    If $\Omega_3 \ge D_{M_2}$, it can be concluded that

    $$\begin{aligned}
    &\langle\Omega_1 + M_1\rangle Ve - \bigl[|N_1 + \Omega_2| + |M_2 - \Omega_3| + |\Omega_4 - N_2|\bigr]Ve\\
    &\quad = \bigl(\langle\Omega_1 + M_1\rangle - |N_1 + \Omega_2| - |\Omega_3 - D_{M_2} + B_{M_2}| - |\Omega_4 - N_2|\bigr)Ve\\
    &\quad \ge \bigl(\langle\Omega_1 + M_1\rangle - |N_1 + \Omega_2| - |\Omega_3 - D_{M_2}| - |B_{M_2}| - |\Omega_4 - N_2|\bigr)Ve\\
    &\quad \ge \bigl(\Omega_1 + \langle M_1\rangle - |N_1| - \Omega_2 - \Omega_3 + D_{M_2} - |B_{M_2}| - \Omega_4 - |N_2|\bigr)Ve\\
    &\quad = \bigl(\Omega_1 + \langle M_1\rangle - |N_1| - \Omega_2 - \Omega_3 + \langle M_2\rangle - |N_2| - \Omega_4\bigr)Ve\\
    &\quad \ge \bigl(2\langle A\rangle + \Omega_1 - \Omega_2 - \Omega_3 - \Omega_4\bigr)Ve = \bigl(2\langle A\rangle - 2\Omega_4\bigr)Ve. 
    \end{aligned}\tag{3.9}$$

    If $\Omega_3 < D_{M_2}$, we get

    $$\begin{aligned}
    &\langle\Omega_1 + M_1\rangle Ve - \bigl[|N_1 + \Omega_2| + |M_2 - \Omega_3| + |\Omega_4 - N_2|\bigr]Ve\\
    &\quad = \bigl(\langle\Omega_1 + M_1\rangle - |N_1 + \Omega_2| - |D_{M_2} - \Omega_3 - B_{M_2}| - |\Omega_4 - N_2|\bigr)Ve\\
    &\quad \ge \bigl(\langle\Omega_1 + M_1\rangle - |N_1 + \Omega_2| - |D_{M_2} - \Omega_3| - |B_{M_2}| - |\Omega_4 - N_2|\bigr)Ve\\
    &\quad \ge \bigl(\Omega_1 + \langle M_1\rangle - |N_1| - \Omega_2 + \Omega_3 - 2D_{M_2} + D_{M_2} - |B_{M_2}| - \Omega_4 - |N_2|\bigr)Ve\\
    &\quad = \bigl(\Omega_1 + \langle M_1\rangle - |N_1| - \Omega_2 + \Omega_3 - 2D_{M_2} + \langle M_2\rangle - |N_2| - \Omega_4\bigr)Ve\\
    &\quad \ge \bigl(2\langle A\rangle - 2D_{M_2} + \Omega_1 - \Omega_2 + \Omega_3 - \Omega_4\bigr)Ve = \bigl(2\langle A\rangle - 2D_{M_2} + 2\Omega\bigr)Ve. 
    \end{aligned}\tag{3.10}$$

    According to (3.8), (3.9) and (3.10), we have $\rho(\tilde{L}) < 1$ if (3.6) holds.

    When $\theta > 1$, it follows that

    $$\rho(\tilde{L}) \le \max_{1\le i\le n}\frac{\bigl\{\bigl[(2\theta - 1)|N_1 + \Omega_2| + |M_2 - \Omega_3| + |\Omega_4 - N_2|\bigr]Ve\bigr\}_i}{\bigl[\langle\Omega_1 + M_1\rangle Ve\bigr]_i}. \tag{3.11}$$

    If $\Omega_3 \ge D_{M_2}$, it can be derived that

    $$\begin{aligned}
    &\langle\Omega_1 + M_1\rangle Ve - \bigl[(2\theta - 1)|N_1 + \Omega_2| + |M_2 - \Omega_3| + |\Omega_4 - N_2|\bigr]Ve\\
    &\quad \ge \bigl(\Omega_1 + \langle M_1\rangle - (2\theta - 1)|N_1| - (2\theta - 1)\Omega_2 - |\Omega_3 - D_{M_2}| - |B_{M_2}| - \Omega_4 - |N_2|\bigr)Ve\\
    &\quad = \bigl(\Omega_1 + \langle M_1\rangle - |N_1| - 2(\theta - 1)|N_1| - 2(\theta - 1)\Omega_2 - \Omega_2 - \Omega_3 + \langle M_2\rangle - |N_2| - \Omega_4\bigr)Ve\\
    &\quad \ge \bigl(2\langle A\rangle - 2\Omega_4 - 2(\theta - 1)(|N_1| + \Omega_2)\bigr)Ve,
    \end{aligned}$$

    from which we have

    $$\bigl[2\langle A\rangle - 2\Omega_4 - 2(\theta - 1)(|N_1| + \Omega_2)\bigr]Ve > 0 \tag{3.12}$$

    provided that $1 < \theta < 1 + \min\limits_{1\le i\le n}\frac{[(\langle A\rangle - \Omega_4)Ve]_i}{[(|N_1| + \Omega_2)Ve]_i}$ and $\frac{[(\langle A\rangle - \Omega_4)Ve]_i}{[(|N_1| + \Omega_2)Ve]_i} > 0$ ($i = 1, 2, \ldots, n$).

    If $\Omega_3 < D_{M_2}$, it is implied that

    $$\begin{aligned}
    &\langle\Omega_1 + M_1\rangle Ve - \bigl[(2\theta - 1)|N_1 + \Omega_2| + |M_2 - \Omega_3| + |\Omega_4 - N_2|\bigr]Ve\\
    &\quad \ge \bigl(\Omega_1 + \langle M_1\rangle - (2\theta - 1)|N_1| - (2\theta - 1)\Omega_2 - |D_{M_2} - \Omega_3| - |B_{M_2}| - \Omega_4 - |N_2|\bigr)Ve\\
    &\quad = \bigl(\Omega_1 + \langle M_1\rangle - |N_1| - 2(\theta - 1)|N_1| - 2(\theta - 1)\Omega_2 - \Omega_2 + \Omega_3 - 2D_{M_2} + \langle M_2\rangle - |N_2| - \Omega_4\bigr)Ve\\
    &\quad \ge \bigl(2\langle A\rangle + 2\Omega - 2D_{M_2} - 2(\theta - 1)(|N_1| + \Omega_2)\bigr)Ve,
    \end{aligned}$$

    from which we have

    $$\bigl[2\langle A\rangle + 2\Omega - 2D_{M_2} - 2(\theta - 1)(|N_1| + \Omega_2)\bigr]Ve > 0 \tag{3.13}$$

    provided that $1 < \theta < 1 + \min\limits_{1\le i\le n}\frac{[(\langle A\rangle + \Omega - D_{M_2})Ve]_i}{[(|N_1| + \Omega_2)Ve]_i}$ and $\frac{[(\langle A\rangle + \Omega - D_{M_2})Ve]_i}{[(|N_1| + \Omega_2)Ve]_i} > 0$ ($i = 1, 2, \ldots, n$).

    According to (3.11), (3.12) and (3.13), we have $\rho(\tilde{L}) < 1$ if (3.7) holds.

    Theorem 3.2. Assume that $A \in \mathbb{R}^{n\times n}$ is an $H_+$-matrix and let $\varrho \triangleq \rho(D_A^{-1}|B_A|)$, where $B_A = L_A + U_A$. Assume that the four nonnegative diagonal matrices $\Omega_i$ ($i = 1,2,3,4$) and the three positive parameters $\alpha, \beta, \theta$ are chosen such that $M_1 + \Omega_1$ is nonsingular and either $\Omega_3 \le D_A \le \min\{2\Omega,\ 2\Omega - 2(\theta - 1)\Omega_2\}$ or $\max\{2\Omega_4,\ 2\Omega_4 + 2(\theta - 1)\Omega_2\} \le D_A < \Omega_3$ holds. Then the NRATMAOR iteration method is convergent for arbitrary two initial vectors if one of the following eight conditions holds:

    (i) $0 < \theta \le 1$, $0 < \alpha \le 1$, $0 < \beta \le \alpha$, $\varrho < \frac{1}{2}$;

    (ii) $0 < \theta \le 1$, $1 < \alpha < 2$, $0 < \beta \le \alpha$, $\varrho < \frac{2-\alpha}{2\alpha}$;

    (iii) $0 < \theta \le 1$, $0 < \alpha \le 1$, $0 < \alpha \le \beta$, $\varrho < \frac{\alpha}{2\beta}$;

    (iv) $0 < \theta \le 1$, $1 < \alpha < 2$, $0 < \alpha \le \beta$, $\varrho < \frac{2-\alpha}{2\beta}$;

    (v) $1 < \theta < \frac{2-\alpha}{2\alpha\varrho - 2\alpha + 2}$, $\frac{2(\theta-1)}{2\theta-1} < \alpha \le 1$, $0 < \beta \le \alpha$, $\varrho < \frac{1}{2}$;

    (vi) $1 < \theta < \frac{\alpha}{2\alpha\varrho + 2\alpha - 2}$, $1 < \alpha < \frac{2\theta}{2\theta-1}$, $0 < \beta \le \alpha$, $\varrho < \frac{2-\alpha}{2\alpha}$;

    (vii) $1 < \theta < \frac{2-\alpha}{2\beta\varrho - 2\alpha + 2}$, $\frac{2(\theta-1)}{2\theta-1} < \alpha \le 1$, $0 < \alpha \le \beta$, $\varrho < \frac{\alpha}{2\beta}$;

    (viii) $1 < \theta < \frac{\alpha}{2\beta\varrho + 2\alpha - 2}$, $1 < \alpha < \frac{2\theta}{2\theta-1}$, $0 < \alpha \le \beta$, $\varrho < \frac{2-\alpha}{2\beta}$.

    Proof. For the NRATMAOR iteration method, we have $A = M_1 - N_1 = M_2 - N_2$ with

    $$M_1 = \frac{1}{\alpha}(D_A - \beta L_A), \qquad N_1 = \frac{1}{\alpha}\bigl[(1-\alpha)D_A + (\alpha - \beta)L_A + \alpha U_A\bigr] \tag{3.14}$$

    and

    $$M_2 = D_A - U_A, \qquad N_2 = L_A,$$

    where $\alpha, \beta > 0$ are parameters. In order to use the result of Lemma 8, we need $A = M_1 - N_1$ to be an H-splitting of $A$. Since $A$ is an $H_+$-matrix, we have $D_A > 0$. It follows from (3.14) that

    $$\begin{aligned}
    \langle M_1\rangle - |N_1| &= \Bigl\langle\frac{1}{\alpha}(D_A - \beta L_A)\Bigr\rangle - \Bigl|\frac{1}{\alpha}\bigl[(1-\alpha)D_A + (\alpha - \beta)L_A + \alpha U_A\bigr]\Bigr|\\
    &= \frac{1}{\alpha}\bigl(D_A - \beta|L_A|\bigr) - \frac{1}{\alpha}\bigl[|1-\alpha|D_A + |\alpha - \beta||L_A| + \alpha|U_A|\bigr]\\
    &= \frac{1 - |1-\alpha|}{\alpha}D_A - \frac{\beta + |\alpha - \beta|}{\alpha}|L_A| - |U_A| \triangleq S.
    \end{aligned}$$

    If $0 < \beta \le \alpha$, then

    $$S = \frac{1 - |1-\alpha|}{\alpha}D_A - |L_A| - |U_A| = \frac{1 - |1-\alpha|}{\alpha}D_A - |B_A|,$$

    and it follows from Lemma 7 that $S$ is an M-matrix if

    $$1 - |1-\alpha| > 0 \quad\text{and}\quad \varrho < \frac{1 - |1-\alpha|}{\alpha},$$

    which is satisfied if

    $$0 < \alpha \le 1 \quad\text{and}\quad \varrho < 1$$

    or

    $$1 < \alpha < 2 \quad\text{and}\quad \varrho < \frac{2-\alpha}{\alpha}.$$

    If $0 < \alpha \le \beta$, then

    $$S \ge \frac{1 - |1-\alpha|}{\alpha}D_A - \frac{2\beta}{\alpha}|L_A| - |U_A| \ge \frac{1 - |1-\alpha|}{\alpha}D_A - \frac{2\beta}{\alpha}|B_A| \triangleq \bar{S}. \tag{3.15}$$

    It follows from Lemma 7 that $\bar{S}$ is an M-matrix if

    $$1 - |1-\alpha| > 0 \quad\text{and}\quad \varrho < \frac{1 - |1-\alpha|}{2\beta},$$

    which is satisfied if

    $$0 < \alpha \le 1 \quad\text{and}\quad \varrho < \frac{\alpha}{2\beta} \tag{3.16}$$

    or

    $$1 < \alpha < 2 \quad\text{and}\quad \varrho < \frac{2-\alpha}{2\beta}. \tag{3.17}$$

    In this case, since $S$ is a Z-matrix, it follows from Lemma 5 and (3.15) that $S$ is an M-matrix if (3.16) or (3.17) holds.

    In conclusion, $A = M_1 - N_1$ is an H-splitting of $A$ (or, equivalently, $S$ is an M-matrix) if one of the following four conditions holds:

    $$0 < \alpha \le 1, \quad 0 < \beta \le \alpha, \quad \varrho < 1, \tag{3.18}$$
    $$1 < \alpha < 2, \quad 0 < \beta \le \alpha, \quad \varrho < \frac{2-\alpha}{\alpha}, \tag{3.19}$$
    $$0 < \alpha \le 1, \quad 0 < \alpha \le \beta, \quad \varrho < \frac{\alpha}{2\beta} \tag{3.20}$$

    or

    $$1 < \alpha < 2, \quad 0 < \alpha \le \beta, \quad \varrho < \frac{2-\alpha}{2\beta}. \tag{3.21}$$

    In the following, let $\hat{A} = \hat{M} - \hat{N}$ with $\hat{M} = \langle\Omega_1 + M_1\rangle$ and $\hat{N} = (\theta + |1-\theta|)|N_1 + \Omega_2| + |M_2 - \Omega_3| + |\Omega_4 - N_2|$, so that $\tilde{L} = \hat{M}^{-1}\hat{N}$. In order to prove the convergence of the NRATMAOR iteration method, based on Lemma 8, it suffices to prove $\rho(\tilde{L}) < 1$ provided that $A = M_1 - N_1$ is an H-splitting of $A$.

    Since

    $$\hat{M} = \Bigl\langle\Omega_1 + \frac{1}{\alpha}(D_A - \beta L_A)\Bigr\rangle = \Omega_1 + \frac{1}{\alpha}\bigl(D_A - \beta|L_A|\bigr)$$

    is a lower triangular matrix with positive diagonal entries and nonpositive off-diagonal entries, it is an M-matrix. In addition, $\hat{N} \ge 0$. According to Lemma 7, if $\hat{A}$ is an M-matrix then $\rho(\tilde{L}) < 1$. Thus, we prove in the following that the Z-matrix $\hat{A}$ is an M-matrix.

    Case I: $0 < \theta \le 1$. In this case, we have

    $$\begin{aligned}
    \hat{N} &= |N_1 + \Omega_2| + |M_2 - \Omega_3| + |\Omega_4 - N_2|\\
    &= \Bigl|\frac{1}{\alpha}\bigl[(1-\alpha)D_A + (\alpha - \beta)L_A + \alpha U_A\bigr] + \Omega_2\Bigr| + |D_A - \Omega_3 - U_A| + |\Omega_4 - L_A|\\
    &\le \frac{|1-\alpha|}{\alpha}D_A + \frac{\alpha + |\alpha - \beta|}{\alpha}|L_A| + 2|U_A| + \Omega_2 + \Omega_4 + |D_A - \Omega_3| \triangleq \tilde{P},
    \end{aligned}$$

    from which we have

    $$\begin{aligned}
    \hat{A} = \hat{M} - \hat{N} &\ge \hat{M} - \tilde{P}\\
    &= \Omega_1 + \frac{1}{\alpha}\bigl(D_A - \beta|L_A|\bigr) - \frac{|1-\alpha|}{\alpha}D_A - \frac{\alpha + |\alpha - \beta|}{\alpha}|L_A| - 2|U_A| - \Omega_2 - \Omega_4 - |D_A - \Omega_3|\\
    &= \bigl(\Omega_3 - 2\Omega_4 - |D_A - \Omega_3|\bigr) + \frac{1 - |1-\alpha|}{\alpha}D_A - \frac{\alpha + \beta + |\alpha - \beta|}{\alpha}|L_A| - 2|U_A|. 
    \end{aligned}\tag{3.22}$$

    It is easy to verify that the first term of (3.22) is nonnegative if

    $$\Omega_3 \le D_A \le 2\Omega \tag{3.23}$$

    or

    $$2\Omega_4 \le D_A < \Omega_3. \tag{3.24}$$

    Then it follows from (3.22) that

    $$\hat{A} \ge \frac{1 - |1-\alpha|}{\alpha}D_A - \frac{\alpha + \beta + |\alpha - \beta|}{\alpha}|L_A| - 2|U_A|. \tag{3.25}$$

    (i) If $0 < \beta \le \alpha$, then it can be deduced from (3.25) that

    $$\hat{A} \ge \frac{1 - |1-\alpha|}{\alpha}D_A - 2|L_A| - 2|U_A| = \frac{1 - |1-\alpha|}{\alpha}D_A - 2|B_A| \triangleq T,$$

    from which and Lemma 5 we obtain that $\hat{A}$ is an M-matrix whenever $T$ is. It follows from Lemma 7 that $T$ is an M-matrix if

    $$1 - |1-\alpha| > 0 \quad\text{and}\quad \varrho < \frac{1 - |1-\alpha|}{2\alpha},$$

    which is satisfied if

    $$0 < \alpha \le 1 \quad\text{and}\quad \varrho < \frac{1}{2}$$

    or

    $$1 < \alpha < 2 \quad\text{and}\quad \varrho < \frac{2-\alpha}{2\alpha}.$$

    (ii) If $0 < \alpha \le \beta$, it can be deduced from (3.25) that

    $$\hat{A} \ge \frac{1 - |1-\alpha|}{\alpha}D_A - \frac{2\beta}{\alpha}|L_A| - 2|U_A| \ge \frac{1 - |1-\alpha|}{\alpha}D_A - \frac{2\beta}{\alpha}|B_A| = \bar{S},$$

    which is an M-matrix if (3.16) or (3.17) holds.

    In Case I, it can be concluded from (i) and (ii) that $\hat{A}$ is an M-matrix if one of the following four conditions holds:

    $$0 < \theta \le 1, \quad 0 < \alpha \le 1, \quad 0 < \beta \le \alpha, \quad \varrho < \frac{1}{2}, \tag{3.26}$$
    $$0 < \theta \le 1, \quad 1 < \alpha < 2, \quad 0 < \beta \le \alpha, \quad \varrho < \frac{2-\alpha}{2\alpha}, \tag{3.27}$$
    $$0 < \theta \le 1, \quad 0 < \alpha \le 1, \quad 0 < \alpha \le \beta, \quad \varrho < \frac{\alpha}{2\beta} \tag{3.28}$$

    or

    $$0 < \theta \le 1, \quad 1 < \alpha < 2, \quad 0 < \alpha \le \beta, \quad \varrho < \frac{2-\alpha}{2\beta}. \tag{3.29}$$

    Case II: $\theta > 1$. In this case, we have

    $$\begin{aligned}
    \hat{N} &= (2\theta - 1)|N_1 + \Omega_2| + |M_2 - \Omega_3| + |\Omega_4 - N_2|\\
    &= (2\theta - 1)\Bigl|\frac{1}{\alpha}\bigl[(1-\alpha)D_A + (\alpha - \beta)L_A + \alpha U_A\bigr] + \Omega_2\Bigr| + |D_A - \Omega_3 - U_A| + |\Omega_4 - L_A|\\
    &\le \frac{(2\theta - 1)|1-\alpha|}{\alpha}D_A + \frac{\alpha + (2\theta - 1)|\alpha - \beta|}{\alpha}|L_A| + 2\theta|U_A| + (2\theta - 1)\Omega_2 + \Omega_4 + |D_A - \Omega_3| \triangleq \tilde{N},
    \end{aligned}$$

    from which we obtain

    $$\begin{aligned}
    \hat{A} = \hat{M} - \hat{N} &\ge \hat{M} - \tilde{N}\\
    &= \Omega_1 + \frac{1}{\alpha}\bigl(D_A - \beta|L_A|\bigr) - \frac{(2\theta - 1)|1-\alpha|}{\alpha}D_A - \frac{\alpha + (2\theta - 1)|\alpha - \beta|}{\alpha}|L_A| - 2\theta|U_A| - (2\theta - 1)\Omega_2 - \Omega_4 - |D_A - \Omega_3|\\
    &= \bigl(\Omega_3 - 2\Omega_4 - 2(\theta - 1)\Omega_2 - |D_A - \Omega_3|\bigr) + \frac{1 - (2\theta - 1)|1-\alpha|}{\alpha}D_A - \frac{\alpha + \beta + (2\theta - 1)|\alpha - \beta|}{\alpha}|L_A| - 2\theta|U_A|. 
    \end{aligned}\tag{3.30}$$

    The first term of (3.30) is nonnegative if

    $$\Omega_3 \le D_A \le 2\Omega - 2(\theta - 1)\Omega_2 \le 2\Omega \tag{3.31}$$

    or

    $$2\Omega_4 \le 2\Omega_4 + 2(\theta - 1)\Omega_2 \le D_A < \Omega_3. \tag{3.32}$$

    Then it follows from (3.30) that

    $$\hat{A} \ge \frac{1 - (2\theta - 1)|1-\alpha|}{\alpha}D_A - \frac{\alpha + \beta + (2\theta - 1)|\alpha - \beta|}{\alpha}|L_A| - 2\theta|U_A|. \tag{3.33}$$

    (a) If $0 < \beta \le \alpha$, then it follows from (3.33) that

    $$\hat{A} \ge \frac{1 - (2\theta - 1)|1-\alpha|}{\alpha}D_A - 2\theta|B_A| \triangleq R,$$

    from which and Lemma 5 we find that $\hat{A}$ is an M-matrix whenever $R$ is. It follows from Lemma 7 that $R$ is an M-matrix if

    $$1 - (2\theta - 1)|1-\alpha| > 0 \quad\text{and}\quad \varrho < \frac{1 - (2\theta - 1)|1-\alpha|}{2\theta\alpha},$$

    which is satisfied if

    $$\frac{2(\theta - 1)}{2\theta - 1} < \alpha \le 1, \quad 1 < \theta < \frac{2-\alpha}{2\alpha\varrho - 2\alpha + 2}, \quad \varrho < \frac{1}{2}$$

    or

    $$1 < \alpha < \frac{2\theta}{2\theta - 1}, \quad 1 < \theta < \frac{\alpha}{2\alpha\varrho + 2\alpha - 2}, \quad \varrho < \frac{2-\alpha}{2\alpha}.$$

    (b) If $0 < \alpha \le \beta$, then

    $$\begin{aligned}
    \hat{A} &\ge \frac{1 - (2\theta - 1)|1-\alpha|}{\alpha}D_A - \frac{2\theta\beta - 2\alpha(\theta - 1)}{\alpha}|L_A| - 2\theta|U_A|\\
    &= \frac{1 - (2\theta - 1)|1-\alpha|}{\alpha}D_A - \Bigl(\frac{2\theta\beta}{\alpha}|L_A| + 2\theta|U_A|\Bigr) + 2(\theta - 1)|L_A|\\
    &\ge \frac{1 - (2\theta - 1)|1-\alpha|}{\alpha}D_A - 2\theta\Bigl(\frac{\beta}{\alpha}|L_A| + |U_A|\Bigr)\\
    &\ge \frac{1 - (2\theta - 1)|1-\alpha|}{\alpha}D_A - \frac{2\theta\beta}{\alpha}|B_A| \triangleq \tilde{R},
    \end{aligned}$$

    from which and Lemma 5 we find that $\hat{A}$ is an M-matrix whenever $\tilde{R}$ is. It follows from Lemma 7 that $\tilde{R}$ is an M-matrix if

    $$1 - (2\theta - 1)|1-\alpha| > 0 \quad\text{and}\quad \varrho < \frac{1 - (2\theta - 1)|1-\alpha|}{2\theta\beta},$$

    which is satisfied if

    $$\frac{2(\theta - 1)}{2\theta - 1} < \alpha \le 1, \quad 1 < \theta < \frac{2-\alpha}{2\beta\varrho - 2\alpha + 2}, \quad \varrho < \frac{\alpha}{2\beta}$$

    or

    $$1 < \alpha < \frac{2\theta}{2\theta - 1}, \quad 1 < \theta < \frac{\alpha}{2\beta\varrho + 2\alpha - 2}, \quad \varrho < \frac{2-\alpha}{2\beta}.$$

    In Case II, it can be concluded from (a) and (b) that $\hat{A}$ is an M-matrix if one of the following four conditions holds:

    $$\theta > 1, \quad \frac{2(\theta - 1)}{2\theta - 1} < \alpha \le 1, \quad 0 < \beta \le \alpha, \quad 1 < \theta < \frac{2-\alpha}{2\alpha\varrho - 2\alpha + 2}, \quad \varrho < \frac{1}{2}, \tag{3.34}$$
    $$\theta > 1, \quad 1 < \alpha < \frac{2\theta}{2\theta - 1}, \quad 0 < \beta \le \alpha, \quad 1 < \theta < \frac{\alpha}{2\alpha\varrho + 2\alpha - 2}, \quad \varrho < \frac{2-\alpha}{2\alpha}, \tag{3.35}$$
    $$\theta > 1, \quad \frac{2(\theta - 1)}{2\theta - 1} < \alpha \le 1, \quad 0 < \alpha \le \beta, \quad 1 < \theta < \frac{2-\alpha}{2\beta\varrho - 2\alpha + 2}, \quad \varrho < \frac{\alpha}{2\beta} \tag{3.36}$$

    or

    $$\theta > 1, \quad 1 < \alpha < \frac{2\theta}{2\theta - 1}, \quad 0 < \alpha \le \beta, \quad 1 < \theta < \frac{\alpha}{2\beta\varrho + 2\alpha - 2}, \quad \varrho < \frac{2-\alpha}{2\beta}. \tag{3.37}$$

    The proof is completed by combining (3.18)–(3.21), (3.23), (3.24), (3.26)–(3.29), (3.31), (3.32) and (3.34)–(3.37).

    In this section, three numerical examples are performed to validate the effectiveness of the NRATMMS iteration method.

    All test problems are conducted in MATLAB R2016a on a personal computer with 1.19 GHz central processing unit (Intel (R) Core (TM) i5-1035U), 8.00 GB memory and Windows 10 operating system. In the numerical results, we report the number of iteration steps (denoted by "IT"), the elapsed CPU time in seconds (denoted as "CPU") and the norm of the absolute residual vector (denoted by "RES"). Here, RES is defined by

    $$\mathrm{RES}(z^k) \triangleq \bigl\|\min\{Az^k + q,\; z^k\}\bigr\|_2.$$

    As in [44], the following three examples are used.

    Example 4.1. ([20]) Consider the LCP($A,q$), where $A = \hat{A} + \mu I_{m^2}$ ($\mu \ge 0$) with

    $$\hat{A} = \mathrm{Tridiag}(-I_m, S_m, -I_m) \in \mathbb{R}^{m^2\times m^2}, \qquad S_m = \mathrm{tridiag}(-1, 4, -1) \in \mathbb{R}^{m\times m},$$

    and $q = -Az^* \in \mathbb{R}^{m^2}$, where $z^* = (1, 2, 1, 2, \ldots, 1, 2, \ldots)^{\top}$ is the unique solution of the LCP($A,q$) (1.1).
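    As an illustration, the test problem of Example 4.1 can be assembled as follows; the sketch returns a dense matrix for simplicity, which is adequate for the moderate sizes in Tables 1 and 2 (a sparse format would be preferable for much larger $m$).

```python
import numpy as np
from scipy.sparse import diags, identity, kron

def example_4_1(m, mu):
    """Build A = Tridiag(-I, S, -I) + mu*I with S = tridiag(-1, 4, -1),
    the solution z* = (1, 2, 1, 2, ...) and q = -A z*."""
    S = diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(m, m))
    T = diags([-1.0, -1.0], [-1, 1], shape=(m, m))     # block sub/superdiagonal pattern
    A = (kron(identity(m), S) + kron(T, identity(m)) + mu * identity(m * m)).toarray()
    z_star = np.ones(m * m)
    z_star[1::2] = 2.0
    q = -A @ z_star
    return A, q, z_star
```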

    Example 4.2. ([20]) Consider the LCP($A,q$), where $A = \hat{A} + \mu I_{m^2}$ ($\mu \ge 0$) with

    $$\hat{A} = \mathrm{Tridiag}(-1.5I_m, S_m, -0.5I_m) \in \mathbb{R}^{m^2\times m^2}, \qquad S_m = \mathrm{tridiag}(-1.5, 4, -0.5) \in \mathbb{R}^{m\times m},$$

    and $q = -Az^* \in \mathbb{R}^{m^2}$, where $z^* = (1, 2, 1, 2, \ldots, 1, 2, \ldots)^{\top}$ is the unique solution of the LCP($A,q$) (1.1).

    Example 4.3. (Black-Scholes American option pricing). In [50], the original Black-Scholes-Merton model is transformed into the following partial differential complementarity system

    $$\Bigl(\frac{\partial y}{\partial \tau} - \frac{\partial^2 y}{\partial x^2}\Bigr)\bigl(y(x,\tau) - g(x,\tau)\bigr) = 0, \qquad \frac{\partial y}{\partial \tau} - \frac{\partial^2 y}{\partial x^2} \ge 0, \qquad y(x,\tau) - g(x,\tau) \ge 0. \tag{4.1}$$

    The initial and boundary conditions of the American put option price $y(x,\tau)$ become $y(x,0) = g(x,0)$ and $\lim_{x\to\pm\infty} y(x,\tau) = \lim_{x\to\pm\infty} g(x,\tau)$, where $g(x,\tau)$ is the transformed payoff function. For the price $x \in [a,b]$, $(a,b)$ is taken as $(-1.5, 1.5)$, $(-2, 2)$ or $(-4, 4)$. Let $\vartheta$ and $\eta$ be the number of time steps and the number of $x$-nodes, and let $\sigma$ and $T$ be the fixed volatility and the expiry time. According to [50], by discretizing (4.1), we obtain the LCP

    $$w := Au - d \ge 0, \qquad u - g \ge 0 \qquad\text{and}\qquad w^{\top}(u - g) = 0, \tag{4.2}$$

    with $A = \mathrm{tridiag}(-\tau, 1 + 2\tau, -\tau)$ and $\tau = \frac{\Delta t}{(\Delta x)^2}$, where $\Delta t = \frac{0.5\sigma^2 T}{\vartheta}$ and $\Delta x = \frac{b-a}{\eta}$ denote the time step and the price step, respectively. Employing the transformation $z = u - g$ and $q = Ag - d$ in (4.2), the American option pricing problem can be rewritten as the LCP (1.1). In our numerical computations, we take $g = 0.5z$ with $z = (1, 0, 1, 0, \ldots, 1, 0, \ldots)^{\top}$, and the vector $d$ is adjusted such that $d = Az - w$, where $w = (0, 1, 0, 1, \ldots, 0, 1, \ldots)^{\top}$. The involved parameters are detailed in Table 3.
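    For illustration, the following sketch assembles the data of Example 4.3 exactly as described above; taking the system dimension equal to the number of $x$-nodes $\eta$ is an assumption made here for simplicity.

```python
import numpy as np
from scipy.sparse import diags

def example_4_3(eta, vartheta, sigma, T, a, b):
    """Build A = tridiag(-tau, 1+2*tau, -tau) and q = A g - d with
    z = (1,0,1,0,...), w = (0,1,0,1,...), g = 0.5 z and d = A z - w."""
    dt = 0.5 * sigma**2 * T / vartheta          # time step
    dx = (b - a) / eta                          # price step
    tau = dt / dx**2
    n = eta                                     # assumed system dimension
    A = diags([-tau, 1.0 + 2.0 * tau, -tau], [-1, 0, 1], shape=(n, n)).toarray()
    z = np.ones(n); z[1::2] = 0.0
    w = np.zeros(n); w[1::2] = 1.0
    g = 0.5 * z
    d = A @ z - w
    q = A @ g - d
    return A, q
```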

    As shown in [44], the NMGS method performs best among the six methods tested there when they are applied to the LCP($A,q$) of the three examples. Therefore, in this paper, we focus on comparing the NMGS method of [44] with the NRATMGS method. For the NMGS iteration method, $\Omega = D_A$ is used, as in [44]. For the NRATMGS method, we take $\Omega_1 = \Omega_3 = D_A$, $\Omega_2 = \Omega_4 = 0$, $M_2 = A$, $N_2 = 0$ and $\alpha = \beta = 1$. In addition, the optimal parameter $\theta_{\exp}$ in the NRATMGS iteration method is obtained experimentally (ranging from 0 to 2 with step size 0.1 for Examples 4.1 and 4.2, and with step size 0.01 for Example 4.3) by minimizing the corresponding number of iteration steps. For the sake of fairness, each method is run 10 times and the average computing time is reported as the CPU time. Both methods are started from the initial vectors $z^0 = z^1 = (1, 0, 1, 0, \ldots, 1, 0, \ldots)^{\top}$ and stopped if $\mathrm{RES}(z^k) < 10^{-5}$ or the prescribed maximal iteration number $k_{\max} = 500$ is exceeded. The involved linear systems are solved by "". Numerical results for Examples 4.1–4.3 are reported in Tables 1–3. It follows from Tables 1–3 that the NRATMGS method outperforms the NMGS method (and the other methods tested in [44]) in terms of the number of iteration steps and the CPU time when the parameter $\theta_{\exp}$ is selected appropriately. A small driver combining the sketches above is given after this paragraph.
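    The comparison just described can be reproduced in outline with the sketches given earlier (example_4_1, nratmaor_splittings, nmms_lcp and nratmms_lcp). The value $\theta = 1.2$ below is merely an example taken from Table 1 ($\mu = 4$, $m = 16$), and the whole driver is an illustrative assumption rather than the authors' MATLAB code.

```python
import numpy as np

# Build the test problem of Example 4.1 (m = 16, mu = 4).
A, q, z_star = example_4_1(16, 4.0)
n = A.shape[0]
D = np.diag(np.diag(A))
Z0 = np.ones(n); Z0[1::2] = 0.0        # initial vectors z^0 = z^1 = (1, 0, 1, 0, ...)

# NMGS: Omega = D_A with the Gauss-Seidel splitting (defaults of nmms_lcp).
z1, it_nmgs, res_nmgs = nmms_lcp(A, q, z0=Z0)

# NRATMGS: alpha = beta = 1, Omega1 = Omega3 = D_A, Omega2 = Omega4 = 0, M2 = A, N2 = 0.
M1, N1, _, _ = nratmaor_splittings(A, alpha=1.0, beta=1.0)
O0 = np.zeros((n, n))
z2, it_nra, res_nra = nratmms_lcp(A, q, M1, N1, A, O0, D, O0, D, O0,
                                  theta=1.2, z0=Z0, z1=Z0)

print("NMGS:    IT =", it_nmgs, " RES =", res_nmgs)
print("NRATMGS: IT =", it_nra, " RES =", res_nra)
```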

    Table 1.  Numerical results of Example 4.1.

                                   m = 16         m = 32         m = 64         m = 128
    μ = 2   NMGS      IT           28             31             32             34
                      CPU          0.0009         0.0018         0.0055         0.0364
                      RES          9.6887×10^-6   6.1988×10^-6   8.7866×10^-6   6.8116×10^-6
            NRATMGS   θexp         1.4            1.4            1.4            1.4
                      IT           24             25             26             28
                      CPU          0.0005         0.0012         0.0048         0.0318
                      RES          6.6171×10^-6   8.5424×10^-6   9.5986×10^-6   5.5387×10^-6
    μ = 4   NMGS      IT           18             20             21             21
                      CPU          0.0003         0.0009         0.0036         0.0224
                      RES          9.3072×10^-6   4.4327×10^-6   4.2816×10^-6   9.0760×10^-6
            NRATMGS   θexp         1.2            1.2            1.3            1.3
                      IT           16             17             17             18
                      CPU          0.0002         0.0009         0.0032         0.0161
                      RES          9.3811×10^-6   9.4009×10^-6   8.9594×10^-6   5.9780×10^-6
    μ = 6   NMGS      IT           15             16             16             17
                      CPU          0.0002         0.0011         0.0030         0.0194
                      RES          4.3687×10^-6   3.6549×10^-6   8.1157×10^-6   5.6681×10^-6
            NRATMGS   θexp         1.2            1.2            1.2            1.2
                      IT           13             14             14             15
                      CPU          0.0002         0.0009         0.0025         0.0138
                      RES          5.5869×10^-6   3.4746×10^-6   6.9767×10^-6   3.9164×10^-6
    μ = 8   NMGS      IT           13             13             14             15
                      CPU          0.0003         0.0006         0.0027         0.0173
                      RES          4.0397×10^-6   9.9549×10^-6   5.9143×10^-6   3.3650×10^-6
            NRATMGS   θexp         1.2            1.2            1.2            1.2
                      IT           11             12             12             13
                      CPU          0.0003         0.0006         0.0022         0.0118
                      RES          8.9972×10^-6   4.0405×10^-6   6.4052×10^-6   2.6779×10^-6

    Table 2.  Numerical results of Example 4.2.

                                   m = 16         m = 32         m = 64         m = 128
    μ = 2   NMGS      IT           24             26             28             29
                      CPU          0.0007         0.0011         0.0051         0.0324
                      RES          8.9826×10^-6   9.8366×10^-6   7.5002×10^-6   9.2033×10^-6
            NRATMGS   θexp         1.8            1.9            1.9            1.9
                      IT           18             19             20             21
                      CPU          0.0003         0.0010         0.0037         0.0232
                      RES          8.1939×10^-6   9.1146×10^-6   7.2571×10^-6   5.7367×10^-6
    μ = 4   NMGS      IT           16             17             18             19
                      CPU          0.0002         0.0007         0.0030         0.0238
                      RES          8.3608×10^-6   8.5767×10^-6   7.6698×10^-6   6.2746×10^-6
            NRATMGS   θexp         1.5            1.5            1.6            1.6
                      IT           13             14             17             14
                      CPU          0.0002         0.0007         0.0026         0.0145
                      RES          6.5082×10^-6   4.9487×10^-6   6.5055×10^-6   9.6328×10^-6
    μ = 6   NMGS      IT           13             14             15             15
                      CPU          0.0003         0.0008         0.0028         0.0224
                      RES          7.5880×10^-6   5.6490×10^-6   3.6901×10^-6   7.7841×10^-6
            NRATMGS   θexp         1.4            1.4            1.4            1.4
                      IT           11             11             12             12
                      CPU          0.0002         0.0008         0.0022         0.0121
                      RES          4.7675×10^-6   8.9849×10^-6   3.8629×10^-6   7.2580×10^-6
    μ = 8   NMGS      IT           12             12             13             13
                      CPU          0.0003         0.0005         0.0024         0.0163
                      RES          2.8368×10^-6   7.0877×10^-6   3.6923×10^-6   7.7385×10^-6
            NRATMGS   θexp         1.3            1.3            1.3            1.3
                      IT           10             10             11             11
                      CPU          0.0002         0.0005         0.0021         0.0112
                      RES          3.3256×10^-6   7.2249×10^-6   2.6219×10^-6   5.2755×10^-6

    Table 3.  Numerical results of Example 4.3.

    Case                    Grid (η, ϑ)       τ          NMGS: IT  CPU     RES            NRATMGS: θexp  IT  CPU     RES
    σ = 0.2, T = 0.5,       (4000, 2000)      8.8889     23        0.0034  3.4764×10^-6   0.95           20  0.0028  3.2943×10^-6
    a = -1.5, b = 1.5       (6000, 3000)      13.3333    26        0.0062  7.7964×10^-6   0.93           22  0.0046  5.1432×10^-6
                            (8000, 5000)      14.2222    25        0.0075  1.5296×10^-6   1.03           23  0.0062  7.9327×10^-6
                            (8000, 8000)      8.8889     23        0.0065  4.9204×10^-6   0.95           20  0.0056  4.6315×10^-6
                            (10000, 10000)    11.1111    23        0.0077  9.1134×10^-7   1.04           21  0.0073  5.2285×10^-6
    σ = 0.2, T = 0.25,      (6000, 3000)      6.6667     21        0.0043  6.0408×10^-6   0.93           18  0.0038  7.5730×10^-6
    a = -1.5, b = 1.5       (8000, 4000)      8.8889     23        0.0064  4.9204×10^-6   0.95           20  0.0055  4.6315×10^-6
                            (10000, 5000)     11.1111    23        0.0078  9.1134×10^-7   1.04           21  0.0073  5.2285×10^-6
                            (15000, 15000)    8.3333     22        0.0111  8.1100×10^-6   0.82           19  0.0101  7.3832×10^-6
                            (20000, 20000)    11.1111    23        0.0177  1.2828×10^-6   1.04           21  0.0152  7.2461×10^-6
    σ = 0.3, T = 0.5,       (4000, 2500)      9          23        0.0034  3.8926×10^-6   0.95           20  0.0029  3.1298×10^-6
    a = -2, b = 2           (6000, 3000)      16.875     27        0.0056  7.5378×10^-6   1.1            25  0.0052  7.0849×10^-6
                            (8000, 4000)      22.5       32        0.0088  5.2859×10^-6   0.99           28  0.0076  4.8398×10^-6
                            (8000, 6000)      15         26        0.0071  4.9459×10^-6   0.86           23  0.0062  7.8720×10^-6
                            (10000, 10000)    14.0625    25        0.0087  2.5353×10^-6   1.05           23  0.0081  6.5500×10^-6
    σ = 0.3, T = 0.25,      (8000, 4000)      2.8125     18        0.0050  6.2985×10^-6   0.92           16  0.0045  8.3277×10^-6
    a = -4, b = 4           (16000, 8000)     5.625      21        0.0112  5.5724×10^-6   0.94           18  0.0101  7.3356×10^-6
                            (20000, 10000)    7.0313     23        0.0177  6.2544×10^-7   0.91           20  0.0144  6.1941×10^-6
                            (24000, 15000)    6.75       23        0.0202  6.6919×10^-7   0.91           20  0.0179  6.9170×10^-6
                            (30000, 24000)    6.5918     23        0.0275  7.3343×10^-7   0.91           20  0.0243  7.7511×10^-6


    In this paper, by applying matrix splitting, a relaxation technique and a two-sweep iteration form to the new modulus-based matrix splitting formulation, we propose a new relaxed acceleration two-sweep modulus-based matrix splitting (NRATMMS) iteration method for solving the LCP($A,q$) (1.1). We investigate the convergence properties of the NRATMMS iteration method when the system matrix $A$ of the LCP($A,q$) (1.1) is an $H_+$-matrix. Numerical experiments illustrate that the NRATMMS iteration method is effective and can be superior to some existing methods. However, the theoretical choice of the optimal parameters requires further investigation.

    The authors are grateful to the five reviewers and the editor for their helpful comments and suggestions that have helped to improve the paper. This research is supported by the National Natural Science Foundation of China (12201275, 12131004), the Ministry of Education in China of Humanities and Social Science Project (21YJCZH204), the Project of Liaoning Provincial Federation Social Science Circles (2023lslqnkt-044, 2022lslwtkt-069), the Natural Science Foundation of Fujian Province (2021J01661) and the Ministry of Science and Technology of China (2021YFA1003600).

    The authors declare that there is no conflict of interest.



    [1] Q. Cheng, Z. Liu, Y. Lin, X. S. Zhou, An s-shaped three-parameter (S3) traffic stream model with consistent car following relationship, Transp. Res. Part B Methodol., 153 (2021), 246-271. https://doi.org/10.1016/j.trb.2021.09.004 doi: 10.1016/j.trb.2021.09.004
    [2] H. Wang, Transportation-enabled urban services: A brief discussion, Mutimodal Transp., 1 (2022), 100007. https://doi.org/10.1016/j.multra.2022.100007 doi: 10.1016/j.multra.2022.100007
    [3] M. Ester, H. P. Kriegel, J. Sander, X. Xu, A density-based algorithm for discovering clusters in large spatial databases with noise, in Kdd, AAAI, 96 (1996), 226-231.
    [4] Q. Cheng, Z. Liu, J. Guo, X. Wu, R. Pendyala, B. Belezamo, et al., Estimating key traffic state parameters through parsimonious spatial queue models, Transp. Res. Part C Emerging Technol., 137 (2022), 103596. https://doi.org/10.1016/j.trc.2022.103596 doi: 10.1016/j.trc.2022.103596
    [5] Z. Liu, Y. Wang, Q. Cheng, H. Yang, Analysis of the information entropy on traffic flows, IEEE Trans. Intell. Transp. Syst., 2022 (2022), 1-12. https://doi.org/10.1109/TITS.2022.3155933 doi: 10.1109/TITS.2022.3155933
    [6] D. Huang, J. Xing, Z. Liu, Q. An, A multi-stage stochastic optimization approach to the stop-skipping and bus lane reservation schemes, Transportmetrica A Transp. Sci., 17 (2021), 1272-1304. https://doi.org/10.1080/23249935.2020.1858206 doi: 10.1080/23249935.2020.1858206
    [7] F. Giannotti, M. Nanni, F. Pinelli, D. Pedreschi, Trajectory pattern mining, in Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, San Jose, USA, (2007), 330-339. https://doi.org/10.1145/1281192.1281230
    [8] I. Syarif, A. Prugel-Bennett, G. Wills, Data mining approaches for network intrusion detection: from dimensionality reduction to misuse and anomaly detection, J. Inf. Technol. Rev., 3 (2012), 70-83.
    [9] Y. Yue, H. D. Wang, B. Hu, Q. Li, Y. G. Li, A. G. Yeh, Exploratory calibration of a spatial interaction model using taxi GPS trajectories, Comput. Environ. Urban Syst., 36 (2012), 140-153. https://doi.org/10.1016/j.compenvurbsys.2011.09.002 doi: 10.1016/j.compenvurbsys.2011.09.002
    [10] Q. Cheng, Y. Chen, Z. Liu, A bi-level programming model for the optimal lane reservation problem, Expert Syst. Appl., 189 (2022), 116147. https://doi.org/10.1016/j.eswa.2021.116147 doi: 10.1016/j.eswa.2021.116147
    [11] G. Münz, S. Li, G. Carle, Traffic anomaly detection using k-means clustering, in GI/ITG Workshop MMBnet, 7 (2007), 9.
    [12] I. N. Junejo, O. Javed, M. Shah, Multi feature path modeling for video surveillance, in Proceedings of the 17th International Conference on Pattern Recognition, IEEE, Cambridge, UK, 2 (2004), 716-719. https://doi.org/10.1109/ICPR.2004.1334359
    [13] Q. Meng, P. Liu, Z. Liu, Integrating multimodal transportation research, J. Multimodal Transport., 1 (2022), 100001. https://doi.org/10.1016/j.multra.2022.100001 doi: 10.1016/j.multra.2022.100001
    [14] Y. Zheng, Trajectory data mining: an overview, ACM Trans. Intell. Syst. Technol. (TIST), 6 (2015), 1-41. https://doi.org/10.1145/2743025 doi: 10.1145/2743025
    [15] Z. Feng, Y. Zhu, A survey on trajectory data mining: techniques and applications, IEEE Access, 4 (2016), 2056-2067. https://doi.org/10.1109/ACCESS.2016.2553681 doi: 10.1109/ACCESS.2016.2553681
    [16] N. Paragios, R. Deriche, Geodesic active regions: a new framework to deal with frame partition problems in computer vision, J. Visual Commun. Image Represent., 13 (2002), 249-268. https://doi.org/10.1006/jvci.2001.0475 doi: 10.1006/jvci.2001.0475
    [17] C. Stauffer, W. E. Grimson, Adaptive background mixture models for real-time tracking, in 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149), IEEE, Collins, USA, 2 (1999), 246-252. https://doi.org/10.1109/CVPR.1999.784637
    [18] S. Coşar, G. Donatiello, V. Bogorny, C. Garate, L. O. Alvares, F. Bremond, Toward abnormal trajectory and event detection in video surveillance, IEEE Trans. Circuits Syst. Video Technol., 27 (2016), 683-695. https://doi.org/10.1109/TCSVT.2016.2589859 doi: 10.1109/TCSVT.2016.2589859
    [19] J. Huo, X. Fu, Z. Liu, Q. Zhang, Short-term estimation and prediction of pedestrian density in urban hot spots based on mobile phone data, IEEE Trans. Intell. Transp. Syst., 23 (2022), 10827-10838. https://doi.org/10.1109/TITS.2021.3096274 doi: 10.1109/TITS.2021.3096274
    [20] D. Huang, Y. Wang, S. Jia, Z. Liu, A Lagrangian relaxation approach for the electric bus charging scheduling optimisation problem, Transportmetrica A Transp. Sci., 2022 (2022), 1-24. https://doi.org/10.1080/23249935.2021.2023690 doi: 10.1080/23249935.2021.2023690
    [21] J. Simon, Remote supply revisited: the jeep problem with costly transfer points, Multimodal Transp., 1 (2022), 100019. https://doi.org/10.1016/j.multra.2022.100019 doi: 10.1016/j.multra.2022.100019
    [22] J. Qiu, K. Huang, J. Hawkins, The taxi sharing practices: matching, routing and pricing methods, Multimodal Transp., 1 (2022), 100003. https://doi.org/10.1016/j.multra.2022.100003 doi: 10.1016/j.multra.2022.100003
    [23] A. T. Palma, V. Bogorny, B. Kuijpers, L. O. Alvares, A clustering-based approach for discovering interesting places in trajectories, in Proceedings of the 2008 ACM Symposium on Applied Computing, ACM, Fortaleza, Brazil, (2008), 863-868. https://doi.org/10.1145/1363686.1363886
    [24] L. Grady, E. L. Schwartz, Isoperimetric graph partitioning for image segmentation, IEEE Trans. Pattern Anal. Mach. Intell., 28 (2006), 469-475. https://doi.org/10.1109/TPAMI.2006.57 doi: 10.1109/TPAMI.2006.57
    [25] L. Zhao, G. Shi, J. Yang, An adaptive hierarchical clustering method for ship trajectory data based on DBSCAN algorithm, in 2017 IEEE 2nd International Conference on Big Data Analysis (ICBDA), IEEE, Beijing, China, (2017), 329-336. https://doi.org/10.1109/ICBDA.2017.8078834
    [26] Y. Xi, D. Huang, Y. Yuan, Z. Liu, K. Anish, N. Zheng, Improved dynamic time warping algorithm for bus route trajectory curve fitting, J. Transp. Eng., 147 (2021), 04021044. https://doi.org/10.1061/JTEPBS.0000544 doi: 10.1061/JTEPBS.0000544
    [27] R. F. Ibrahim, A recommendation system based on clustering and classification for optimal trajectory analysis, PhD thesis, Carleton University, 2019. https://doi.org/10.22215/etd/2019-13400
    [28] M. Khashei, M. Bijari, A novel hybridization of artificial neural networks and ARIMA models for time series forecasting, Appl. Soft Comput., 11 (2011), 2664-2675. https://doi.org/10.1016/j.asoc.2010.10.015 doi: 10.1016/j.asoc.2010.10.015
    [29] Y. Yuan, W. Zhang, X. Yang, Y. Liu, Z. Liu, W. Wang, Traffic state classification and prediction based on trajectory data, J. Intell. Transp. Syst., 2021 (2021), 1-15. https://doi.org/10.1080/15472450.2021.1955210 doi: 10.1080/15472450.2021.1955210
    [30] V. Hodge, J. Austin, A survey of outlier detection methodologies, Artif. Intell. Rev., 22 (2004), 85-126. https://doi.org/10.1023/B:AIRE.0000045502.10941.a9 doi: 10.1023/B:AIRE.0000045502.10941.a9
    [31] S. Y. Huang, Y. N. Huang, Network traffic anomaly detection based on growing hierarchical SOM, in 2013 43rd Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), IEEE, Budapest, Hungary, (2013), 1-2. https://doi.org/10.1109/DSN.2013.6575338
    [32] A. S. da Silva, J. A. Wickboldt, L. Z. Granville, A. Schaeffer-Filho, ATLANTIC: A framework for anomaly traffic detection, classification, and mitigation in SDN, in NOMS 2016-2016 IEEE/IFIP Network Operations and Management Symposium, IEEE, Istanbul, Turkey, (2016), 27-35. https://doi.org/10.1109/NOMS.2016.7502793
    [33] K. K. Santhosh, D. P. Dogra, P. P. Roy, Anomaly detection in road traffic using visual surveillance: A survey, ACM Comput. Surv. (CSUR), 53 (2020), 1-26. https://doi.org/10.1145/3417989 doi: 10.1145/3417989
    [34] S. Chawla, Y. Zheng, J. Hu, Inferring the root cause in road traffic anomalies, in 2012 IEEE 12th International Conference on Data Mining, IEEE, Brussels, Belgium, (2012), 141-150. https://doi.org/10.1109/ICDM.2012.104
    [35] P. R. Lei, A framework for anomaly detection in maritime trajectory behavior, Knowl. Inf. Syst., 47 (2016), 189-214. https://doi.org/10.1007/s10115-015-0845-4 doi: 10.1007/s10115-015-0845-4
    [36] J. Wang, I. C. Paschalidis, Statistical traffic anomaly detection in time-varying communication networks, IEEE Trans. Control Network Syst., 2 (2014), 100-111. https://doi.org/10.1109/TCNS.2014.2378631 doi: 10.1109/TCNS.2014.2378631
    [37] E. M. Knorr, R. T. Ng, V. Tucakov, Distance-based outliers: Algorithms and applications, VLDB J., 8 (2000), 237-253. https://doi.org/10.1007/s007780050006 doi: 10.1007/s007780050006
    [38] E. M. Knorr, R.T. Ng, Finding intensional knowledge of distance-based outliers, in Vldb, 99 (1999), 211-222.
    [39] J. G. Lee, J. Han, X. Li, Trajectory outlier detection: A partition-and-detect framework, in 2008 IEEE 24th International Conference on Data Engineering, IEEE, Cancun, Mexico, (2008), 140-149. https://doi.org/10.1109/ICDE.2008.4497422
    [40] S. A. Ahmed, D. P. Dogra, S. Kar, P. P. Roy, Surveillance scene representation and trajectory abnormality detection using aggregation of multiple concepts, Expert Syst. Appl., 101 (2018), 43-55. https://doi.org/10.1016/j.eswa.2018.02.013 doi: 10.1016/j.eswa.2018.02.013
    [41] Y. Ge, H. Xiong, Z. Zhou, H. Ozdemir, J. Yu, K. C. Lee, Top-eye: top-k evolving trajectory outlier detection, in Proceedings of the 19th ACM International Conference on Information and Knowledge Management, ACM, Toronto, Canada, (2010), 1733-1736. https://doi.org/10.1145/1871437.1871716
    [42] W. Qin, J. Tang, C. Lu, S. Lao, A trajectory abnormal detection method based on segmentation and clustering, in Journal of Physics: Conference Series, 2010 (2021), 012188. https://doi.org/10.1088/1742-6596/2010/1/012188
    [43] X. Zhao, Y. Rao, J. Cai, W. Ma, Abnormal trajectory detection based on a sparse subgraph, IEEE Access, 8 (2020), 29987-30000. https://doi.org/10.1109/ACCESS.2020.2972299 doi: 10.1109/ACCESS.2020.2972299
    [44] Z. Ding, M. Fei, An anomaly detection approach based on isolation forest algorithm for streaming data using sliding window, IFAC Proc. Vol., 46 (2013), 12-17. https://doi.org/10.3182/20130902-3-CN-3020.00044 doi: 10.3182/20130902-3-CN-3020.00044
    [45] D. Xu, Y. Wang, Y. Meng, Z. Zhang, An improved data anomaly detection method based on isolation forest, in 2017 10th International Symposium on Computational Intelligence and Design (ISCID), IEEE, Hangzhou, China, 2 (2017), 287-291. https://doi.org/10.1109/ISCID.2017.202
    [46] Z. Cheng, C. Zou, J. Dong, Outlier detection using isolation forest and local outlier factor, in Proceedings of the Conference on Research in Adaptive and Convergent Systems, ACM, Chongqing, China, (2019), 161-168. https://doi.org/10.1145/3338840.3355641
  • This article has been cited by:

    1. Zhengge Huang, Jingjing Cui, Accelerated Relaxation Two-Sweep Modulus-Based Matrix Splitting Iteration Method for Linear Complementarity Problems, 2024, 21, 0219-8762, 10.1142/S0219876223500251
    2. Dongmei Yu, Huiling Wei, Cairong Chen, Deren Han, Scalable Relaxation Two-Sweep Modulus-Based Matrix Splitting Methods for Vertical LCP, 2024, 203, 0022-3239, 714, 10.1007/s10957-024-02529-9
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)