Research article

Advanced analysis in epidemiological modeling: detection of waves

  • Received: 16 February 2022 Revised: 23 June 2022 Accepted: 18 July 2022 Published: 08 August 2022
  • MSC : 26A33, 65C40, 92D30

  • Mathematical concepts have been used in recent decades to predict the spread of infectious diseases. Among them, the reproductive number has been used in many published papers to study the stability of the mathematical models that predict spread patterns. Conditions were suggested for concluding whether there would be stability or instability, and analyses were devised to determine conditions under which infectious classes grow or die out. Some authors pointed out limitations of the reproductive number, noting its inability to help predict spread patterns. The concept of the strength number and the analysis of second derivatives of the mathematical models were suggested as additional tools to help detect waves. This paper aims to apply these additional analyses to a simple model to predict future waves.

    Citation: Abdon Atangana, Seda İğret Araz. Advanced analysis in epidemiological modeling: detection of waves[J]. AIMS Mathematics, 2022, 7(10): 18010-18030. doi: 10.3934/math.2022992




    This article studies the generalized absolute value equation (GAVE):

    Ax − B|x| = b,  (1.1)

    where A, B ∈ R^{n×n}, b ∈ R^n and |x| := (|x_1|, |x_2|, …, |x_n|)^T. When B is the identity matrix, the GAVE (1.1) reduces to the absolute value equation (AVE):

    Ax − |x| = b.  (1.2)

    The GAVE (1.1) was first proposed by Rohn as a class of nonlinear nondifferentiable optimization problems [1]. Since then, the AVE (1.2) and the GAVE (1.1) have found applications in many areas of optimization, including the linear complementarity problem, bimatrix games, and constrained least squares problems; see [1,2,3,4,5,6]. A prominent instance is the linear complementarity problem (LCP) [5]: determine z such that

    Mz + q ≥ 0,  z ≥ 0  and  z^T(Mz + q) = 0,  with M ∈ R^{n×n}, q ∈ R^n.  (1.3)

    In [3], Mangasarian proved that the GAVE (1.1) and the LCP (1.3) can be transformed into each other under certain conditions, and he also established the unique solvability of the AVE. Many scholars have since studied the solvability and uniqueness theory of the AVE (1.2) [7,8,9]. In early algorithmic research, several efficient iteration methods were proposed for the GAVE (1.1) and the AVE (1.2); see Mangasarian [2,10,11], Rohn [12], and others. Then, in [13], in order to overcome the difficulty of constructing the gradient of the absolute value term, Mangasarian presented the generalized Newton (GN) algorithm using the generalized Jacobian matrix. The method can be summarized as follows:

    x^{(k+1)} = (A − D(x^{(k)}))^{−1} b,  k = 0, 1, 2, …,  (1.4)

    where D(x) := diag(sgn(x)), x ∈ R^n, and sgn(x) denotes the vector whose components equal 1, 0 or −1 according to whether the corresponding component of x is positive, zero or negative. The GN algorithm can also be applied to the GAVE (1.1) if B·D(x^{(k)}) is used instead of D(x^{(k)}). Based on the GN method, Li [14] proposed a modified generalized Newton method. It should be noted that the Jacobian matrix changes with the iteration index k, which greatly increases the amount of computation. To avoid the changing Jacobian matrix, many methods for the AVE (1.2) and the GAVE (1.1) have been proposed. For instance, Rohn et al. presented the Picard method, valid when A is invertible, in [15]:

    x^{(k+1)} = A^{−1}(|x^{(k)}| + b),  k = 0, 1, 2, … .  (1.5)
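As an illustration, the GN iteration (1.4) and the Picard iteration (1.5) for the AVE can be sketched as follows. This is a minimal NumPy sketch on a toy 2×2 problem; the function names, the toy data and the stopping rule are our own illustrations, not taken from the references.

```python
import numpy as np

def generalized_newton(A, b, tol=1e-10, max_iter=100):
    """GN iteration (1.4) for the AVE Ax - |x| = b (sketch).

    Assumes A - D(x^(k)) stays invertible at every iterate."""
    x = np.zeros(len(b))
    for _ in range(max_iter):
        D = np.diag(np.sign(x))            # D(x) = diag(sgn(x))
        x_new = np.linalg.solve(A - D, b)  # solve (A - D(x^(k))) x = b
        if np.linalg.norm(x_new - x) <= tol:
            break
        x = x_new
    return x_new

def picard(A, b, tol=1e-10, max_iter=100):
    """Picard iteration (1.5): x^(k+1) = A^{-1}(|x^(k)| + b)."""
    x = np.zeros(len(b))
    for _ in range(max_iter):
        x_new = np.linalg.solve(A, np.abs(x) + b)
        if np.linalg.norm(x_new - x) <= tol:
            break
        x = x_new
    return x_new

# Toy AVE with known solution x* = (1, -2)^T: b = A x* - |x*|.
A = np.array([[4.0, 0.0], [0.0, 4.0]])
b = np.array([3.0, -10.0])
```

Both schemes recover x* = (1, −2)^T on this toy problem; the Picard fixed-point iteration converges here because ‖A^{−1}‖₂ = 1/4 < 1.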

    In [16], Wang et al. added a term Ωx (with Ω positive semi-definite, so that A + Ω can be guaranteed invertible) to the general problem. Noting that Ax + Ωx is differentiable while Ωx + B|x| + b is non-differentiable, they studied a modified Newton-type (MN) iteration method:

    x^{(k+1)} = (A + Ω)^{−1}(Ωx^{(k)} + B|x^{(k)}| + b),  k = 0, 1, 2, … .  (1.6)

    To further accelerate such schemes, Li et al. [17] presented a generalized version of the AOR method. In fact, the above algorithms all belong to the class of Newton-based matrix splitting methods and their relaxed versions proposed in [18]. Sometimes the coefficient matrix A has special properties; for example, when A is an M-matrix, Ali et al. [19] proposed two new generalized Gauss-Seidel iteration methods. In [20], a new iteration method was presented by equivalently recasting the AVE (1.2) as a 2×2 block nonlinear equation. Also using matrix blocking, the block-diagonal and anti-block-diagonal splitting (BAS) method [21] and a modified BAS method [22] were proposed.

    Recently, in order to further improve the efficiency of solving the AVE (1.2), several inexact iteration algorithms have been studied. Each step of the Picard method can be viewed as solving a linear system with coefficient matrix A. Salkuyeh [23] therefore presented the Picard-HSS method for the AVE (1.2), which uses the HSS method [24] to approximate the iterate x^{(k+1)} at each Picard step. On this basis, Miao et al. [25] used the single-step HSS (SHSS) method [26] to solve the linear system in the Picard iteration and proposed the Picard-SHSS method, a new Picard-type method. Inspired by these, we adopt the modified Newton-type iteration method, which converges faster, as the outer iteration, with the HSS and SHSS methods respectively as the inner iteration; we thus obtain two new inexact iterative algorithms, abbreviated as the MN-HSS method and the MN-SHSS method.

    The remainder of this article is laid out as follows. In Section 2, the MN-HSS method is described, and we investigate its convergence property. Section 3 is devoted to introducing the MN-SHSS method and analyzing its convergence. Section 4 contains several numerical experiments, showing the feasibility and efficiency of the two new inexact iteration methods. Section 5 draws some conclusions.

    At the beginning of this section, some notation is given below. ones(n,1) denotes the vector (1, 1, …, 1)^T ∈ R^n. ρ(A) denotes the spectral radius of A. The symbol (·)^T denotes the transpose of a vector or matrix. Recall that lim_{k→+∞} A^k = 0 if and only if ρ(A) < 1 (see [27,28]).

    Let A ∈ R^{n×n} be a non-Hermitian positive definite matrix, and let H = (A + A^T)/2 and S = (A − A^T)/2 be the Hermitian and skew-Hermitian parts of A, so that A = H + S. The HSS iteration method was presented by Bai et al. [24] for solving the positive definite system Ax = b. Its iterative process can be expressed as follows:

    x^{(l+1/2)} = (αI + H)^{−1}((αI − S)x^{(l)} + b),
    x^{(l+1)} = (αI + S)^{−1}((αI − H)x^{(l+1/2)} + b),  (2.1)

    where α is a positive constant.
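The two half-steps of (2.1) can be sketched as follows. This is a minimal dense NumPy sketch with a fixed parameter α and a toy stopping rule of our own choosing; a practical implementation would instead factor αI + H and αI + S once and reuse the factorizations.

```python
import numpy as np

def hss_solve(A, b, alpha=1.0, tol=1e-10, max_iter=500):
    """HSS iteration (2.1) for a positive definite system Ax = b (sketch).

    H and S are the Hermitian and skew-Hermitian parts of A; alpha > 0."""
    n = len(b)
    H = (A + A.T) / 2
    S = (A - A.T) / 2
    I = np.eye(n)
    x = np.zeros(n)
    for _ in range(max_iter):
        # first half-step: Hermitian subsystem
        x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
        # second half-step: skew-Hermitian subsystem
        x_new = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + b)
        if np.linalg.norm(A @ x_new - b) <= tol * np.linalg.norm(b):
            return x_new
        x = x_new
    return x
```

On a small positive definite, non-symmetric test matrix the iteration converges for any α > 0, in line with the theory of [24].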

    Section 1 introduced the MN method, which is equivalent to the following form:

    (A + Ω)x^{(k+1)} = Ωx^{(k)} + B|x^{(k)}| + b,  k = 0, 1, 2, … .  (2.2)

    Similar to the Picard-HSS method, we can choose a suitable Ω to ensure the positive definiteness of A + Ω. The HSS method (2.1) can then be applied as the inner iteration of the MN method for solving the GAVE (1.1); the resulting scheme is called the MN-HSS method.

    Algorithm (The MN-HSS iteration method):

    For k = 0, 1, …, until convergence, do:

    Set x^{(k,0)} := x^{(k)}

    For l = 0, 1, …, until convergence, do:

    x^{(k,l+1/2)} = (αI + H)^{−1}((αI − S)x^{(k,l)} + Ωx^{(k)} + B|x^{(k)}| + b),
    x^{(k,l+1)} = (αI + S)^{−1}((αI − H)x^{(k,l+1/2)} + Ωx^{(k)} + B|x^{(k)}| + b),  (2.3)

    where H = ((A + Ω) + (A + Ω)^T)/2, S = ((A + Ω) − (A + Ω)^T)/2, and α is a positive constant.

    Endfor

    Set x^{(k+1)} := x^{(k,l_k)}.

    Endfor
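A compact sketch of the MN-HSS algorithm above, assuming a fixed number of inner HSS sweeps (l_k = inner), Ω supplied by the caller, and dense NumPy solves in place of the Cholesky/LU solvers a real implementation would use:

```python
import numpy as np

def mn_hss(A, B, b, Omega, alpha, inner=10, tol=1e-7, max_outer=500):
    """MN-HSS iteration (2.3) for the GAVE Ax - B|x| = b (sketch).

    Omega is chosen so that A + Omega is positive definite; the inner
    HSS loop is simply truncated after `inner` sweeps."""
    n = len(b)
    AO = A + Omega
    H = (AO + AO.T) / 2
    S = (AO - AO.T) / 2
    I = np.eye(n)
    x = np.zeros(n)
    for _ in range(max_outer):
        rhs = Omega @ x + B @ np.abs(x) + b   # frozen over the inner sweeps
        y = x.copy()
        for _ in range(inner):
            y_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ y + rhs)
            y = np.linalg.solve(alpha * I + S, (alpha * I - H) @ y_half + rhs)
        x = y
        res = np.linalg.norm(A @ x - B @ np.abs(x) - b) / np.linalg.norm(b)
        if res <= tol:
            break
    return x
```

On a toy 2×2 GAVE with known solution, the outer MN iteration contracts because ‖(A + Ω)^{−1}‖₂(‖B‖₂ + ‖Ω‖₂) < 1, and ten inner HSS sweeps solve (A + Ω)y = rhs accurately enough.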

    The following theorem establishes convergence of the MN-HSS algorithm under suitable conditions.

    Theorem 2.1. Let A, B ∈ R^{n×n} and choose Ω ∈ R^{n×n} such that A + Ω is positive definite, and let H = ((A + Ω) + (A + Ω)^T)/2 and S = ((A + Ω) − (A + Ω)^T)/2 be the Hermitian and skew-Hermitian parts of A + Ω, respectively. Assume also that ‖A^{−1}‖₂‖B‖₂ < 1 and ‖(A + Ω)^{−1}‖₂ < 1/(‖B‖₂ + ‖Ω‖₂). Then the GAVE (1.1) has a unique solution x*, and for any starting vector x^{(0)} ∈ R^n and any sequence of positive integers l_k, k = 0, 1, …, the iteration sequence {x^{(k)}}_{k=0}^{∞} produced by the MN-HSS iteration method converges to x* provided that l = lim inf_{k→∞} l_k ≥ N, where N is a natural number satisfying

    ‖T_α^s‖₂ < (1 − ‖(A + Ω)^{−1}‖₂(‖B‖₂ + ‖Ω‖₂)) / (‖(A + Ω)^{−1}‖₂(‖B‖₂ + ‖A‖₂)),  ∀ s ≥ N,

    where T_α = M_α^{−1}N_α, M_α = (1/(2α))(αI + H)(αI + S), and N_α = (1/(2α))(αI − H)(αI − S).

    Proof. Let G_α = M_α^{−1}. According to the iteration formula (2.3), the (k+1)th iterate x^{(k+1)} of the MN-HSS iteration method can be written as

    x^{(k+1)} = T_α^{l_k} x^{(k)} + ∑_{j=0}^{l_k−1} T_α^j G_α (Ωx^{(k)} + B|x^{(k)}| + b),  k = 0, 1, 2, … .  (2.4)

    Since ‖A^{−1}‖₂‖B‖₂ < 1, the GAVE (1.1) has a unique solution x* [3], which satisfies

    x* = T_α^{l_k} x* + ∑_{j=0}^{l_k−1} T_α^j G_α (Ωx* + B|x*| + b),  k = 0, 1, 2, … .  (2.5)

    Subtracting (2.5) from (2.4), we have

    x^{(k+1)} − x* = T_α^{l_k}(x^{(k)} − x*) + ∑_{j=0}^{l_k−1} T_α^j G_α (Ω(x^{(k)} − x*) + B|x^{(k)}| − B|x*|).  (2.6)

    It follows from [24] that ρ(T_α) < 1, hence

    ∑_{j=0}^{l_k−1} T_α^j G_α = (I − T_α^{l_k})(I − T_α)^{−1} G_α = (I − T_α^{l_k})(I − M_α^{−1}N_α)^{−1} M_α^{−1} = (I − T_α^{l_k})(M_α − N_α)^{−1} = (I − T_α^{l_k})(A + Ω)^{−1}.

    Equation (2.6) then becomes

    x^{(k+1)} − x* = T_α^{l_k}(x^{(k)} − x*) + (I − T_α^{l_k})(A + Ω)^{−1}(Ω(x^{(k)} − x*) + B|x^{(k)}| − B|x*|)
    = T_α^{l_k}((A + Ω)^{−1}A(x^{(k)} − x*) − (A + Ω)^{−1}B(|x^{(k)}| − |x*|)) + (A + Ω)^{−1}(Ω(x^{(k)} − x*) + B|x^{(k)}| − B|x*|).

    It is obvious that ‖|x| − |y|‖₂ ≤ ‖x − y‖₂ for any x, y ∈ R^n. Therefore,

    ‖x^{(k+1)} − x*‖₂ ≤ [‖T_α^{l_k}‖₂(‖(A + Ω)^{−1}‖₂‖A‖₂ + ‖(A + Ω)^{−1}‖₂‖B‖₂) + ‖(A + Ω)^{−1}‖₂‖Ω‖₂ + ‖(A + Ω)^{−1}‖₂‖B‖₂] ‖x^{(k)} − x*‖₂.

    On the other hand, since ρ(T_α) < 1, lim_{s→∞} T_α^s = 0. Therefore, there is a natural number N such that

    ‖T_α^s‖₂ < (1 − ‖(A + Ω)^{−1}‖₂(‖B‖₂ + ‖Ω‖₂)) / (‖(A + Ω)^{−1}‖₂(‖B‖₂ + ‖A‖₂)),  ∀ s ≥ N.

    Thus, if l = lim inf_{k→∞} l_k ≥ N, the contraction factor above is less than 1 and the iteration sequence converges to x*. This completes the proof.
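The hypotheses of Theorem 2.1 can be spot-checked numerically. The following sketch does this for a toy pair (A, B) of our own choosing; the matrices, Ω and α here are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Toy data: check the hypotheses of Theorem 2.1 numerically.
A = np.array([[4.0, 1.0], [-1.0, 4.0]])
B = 0.5 * np.eye(2)
Omega = 0.5 * np.eye(2)
alpha = 1.0

AO = A + Omega
H = (AO + AO.T) / 2          # Hermitian part of A + Omega
S = (AO - AO.T) / 2          # skew-Hermitian part of A + Omega
I = np.eye(2)

norm2 = lambda X: np.linalg.norm(X, 2)
cond1 = norm2(np.linalg.inv(A)) * norm2(B) < 1
cond2 = norm2(np.linalg.inv(AO)) < 1 / (norm2(B) + norm2(Omega))

# Iteration matrix T_alpha = M_alpha^{-1} N_alpha of the inner HSS sweep.
Malpha = (alpha * I + H) @ (alpha * I + S) / (2 * alpha)
Nalpha = (alpha * I - H) @ (alpha * I - S) / (2 * alpha)
T = np.linalg.solve(Malpha, Nalpha)
rho = max(abs(np.linalg.eigvals(T)))   # spectral radius, should be < 1
```

For this toy data both norm conditions hold and ρ(T_α) = 7/11 ≈ 0.64 < 1, so Theorem 2.1 guarantees convergence of MN-HSS for large enough inner iteration counts.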

    The MN-HSS method can also be expressed in the residual-updating form:

    Algorithm (The residual-updating variant of the MN-HSS iteration method):

    For k = 0, 1, …, until convergence, do:

    Set d^{(k,0)} = 0 and b^{(k)} = B|x^{(k)}| + b − Ax^{(k)}

    For l = 0, 1, …, until convergence, do:

    d^{(k,l+1/2)} = (αI + H)^{−1}((αI − S)d^{(k,l)} + b^{(k)}),
    d^{(k,l+1)} = (αI + S)^{−1}((αI − H)d^{(k,l+1/2)} + b^{(k)}),

    where H = ((A + Ω) + (A + Ω)^T)/2, S = ((A + Ω) − (A + Ω)^T)/2, and α is a positive constant.

    Endfor

    Set x^{(k+1)} = x^{(k)} + d^{(k,l_k)}.

    Endfor

    From the iteration scheme (2.3) we see that each step of the HSS iteration is equivalent to solving two linear subsystems, whose coefficient matrices are Hermitian and skew-Hermitian, respectively. The first subsystem, with Hermitian coefficient matrix, can be solved quickly by the conjugate gradient method or a Cholesky factorization. The other subsystem, however, requires much more computation. To accelerate the algorithm, we replace the inner iteration with the single-step HSS (SHSS) method, obtaining a new inexact method, named the MN-SHSS iteration method.

    Compared with the HSS method, the SHSS method omits the second subsystem with the skew-Hermitian matrix, and can be expressed as

    (αI + H)x^{(l+1)} = (αI − S)x^{(l)} + b.  (3.1)

    By restricting the iteration parameter α, Li et al. [26] proved that the SHSS method converges to the unique solution for any initial guess x^{(0)}.

    Algorithm (The MN-SHSS iteration method):

    For k = 0, 1, …, until convergence, do:

    Set x^{(k,0)} := x^{(k)}

    For l = 0, 1, …, until convergence, do:

    x^{(k,l+1)} = (αI + H)^{−1}((αI − S)x^{(k,l)} + Ωx^{(k)} + B|x^{(k)}| + b),  (3.2)

    where H = ((A + Ω) + (A + Ω)^T)/2, S = ((A + Ω) − (A + Ω)^T)/2, and α is a positive constant.

    Endfor

    Set x^{(k+1)} := x^{(k,l_k)}.

    Endfor
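The MN-SHSS algorithm differs from MN-HSS only in its inner sweep, which solves the single Hermitian subsystem (3.2). A minimal sketch under the same assumptions as before (fixed inner count l_k, caller-supplied Ω, dense NumPy solves):

```python
import numpy as np

def mn_shss(A, B, b, Omega, alpha, inner=10, tol=1e-7, max_outer=500):
    """MN-SHSS iteration (3.2) for the GAVE Ax - B|x| = b (sketch).

    Only the Hermitian subsystem (alpha*I + H) is solved per inner sweep."""
    n = len(b)
    AO = A + Omega
    H = (AO + AO.T) / 2
    S = (AO - AO.T) / 2
    I = np.eye(n)
    x = np.zeros(n)
    for _ in range(max_outer):
        rhs = Omega @ x + B @ np.abs(x) + b   # frozen over the inner sweeps
        y = x.copy()
        for _ in range(inner):
            y = np.linalg.solve(alpha * I + H, (alpha * I - S) @ y + rhs)
        x = y
        res = np.linalg.norm(A @ x - B @ np.abs(x) - b) / np.linalg.norm(b)
        if res <= tol:
            break
    return x
```

Per outer step this performs one triangular-type solve instead of two, which is the source of the CPU-time advantage reported in the experiments below.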

    The next theorem shows that the MN-SHSS method is convergent under a restriction on the parameter α.

    Theorem 3.1. Let A, B ∈ R^{n×n} and choose Ω ∈ R^{n×n} such that A + Ω is positive definite, and let H = ((A + Ω) + (A + Ω)^T)/2 and S = ((A + Ω) − (A + Ω)^T)/2 be the Hermitian and skew-Hermitian parts of A + Ω, respectively. Assume also that ‖A^{−1}‖₂‖B‖₂ < 1, ‖(A + Ω)^{−1}‖₂ < 1/(‖B‖₂ + ‖Ω‖₂), and that α is a constant with α > max{0, (δ_max² − λ_min²)/(2λ_min)}, where δ_max is the largest singular value of S and λ_min is the smallest eigenvalue of H. Then the GAVE (1.1) has a unique solution x*, and for any starting vector x^{(0)} ∈ R^n and any sequence of positive integers l_k, k = 0, 1, …, the iteration sequence {x^{(k)}}_{k=0}^{∞} produced by the MN-SHSS iteration method converges to x* provided that l = lim inf_{k→∞} l_k ≥ N, where N is a natural number satisfying

    ‖K_α^s‖₂ < (1 − ‖(A + Ω)^{−1}‖₂(‖B‖₂ + ‖Ω‖₂)) / (‖(A + Ω)^{−1}‖₂(‖B‖₂ + ‖A‖₂)),  ∀ s ≥ N,

    where K_α = M_α^{−1}N_α, M_α = αI + H, and N_α = αI − S.

    Proof. According to the iteration formula (3.2), the (k+1)th iterate x^{(k+1)} of the MN-SHSS method is

    x^{(k+1)} = K_α^{l_k} x^{(k)} + ∑_{j=0}^{l_k−1} K_α^j M_α^{−1}(Ωx^{(k)} + B|x^{(k)}| + b).  (3.3)

    On the other hand, the unique solution x* of the GAVE (1.1) satisfies

    x* = K_α^{l_k} x* + ∑_{j=0}^{l_k−1} K_α^j M_α^{−1}(Ωx* + B|x*| + b).  (3.4)

    Subtracting (3.4) from (3.3), we have

    x^{(k+1)} − x* = K_α^{l_k}(x^{(k)} − x*) + ∑_{j=0}^{l_k−1} K_α^j M_α^{−1}(Ω(x^{(k)} − x*) + B|x^{(k)}| − B|x*|).  (3.5)

    In [26], Li et al. proved that ρ(K_α) < 1 when α > max{0, (δ_max² − λ_min²)/(2λ_min)}; hence ∑_{j=0}^{l_k−1} K_α^j M_α^{−1} = (I − K_α^{l_k})(A + Ω)^{−1}. Then (3.5) becomes

    x^{(k+1)} − x* = K_α^{l_k}(x^{(k)} − x*) + (I − K_α^{l_k})(A + Ω)^{−1}(Ω(x^{(k)} − x*) + B|x^{(k)}| − B|x*|).  (3.6)

    Therefore, we can estimate

    ‖x^{(k+1)} − x*‖₂ ≤ [‖K_α^{l_k}‖₂(‖(A + Ω)^{−1}‖₂‖A‖₂ + ‖(A + Ω)^{−1}‖₂‖B‖₂) + ‖(A + Ω)^{−1}‖₂‖Ω‖₂ + ‖(A + Ω)^{−1}‖₂‖B‖₂] ‖x^{(k)} − x*‖₂.  (3.7)

    Since ρ(K_α) < 1, K_α^s → 0 as s → ∞. So there is a natural number N such that

    ‖K_α^s‖₂ < (1 − ‖(A + Ω)^{−1}‖₂(‖B‖₂ + ‖Ω‖₂)) / (‖(A + Ω)^{−1}‖₂(‖B‖₂ + ‖A‖₂)),  ∀ s ≥ N.

    Then, if l = lim inf_{k→∞} l_k ≥ N, it is clear that ‖x^{(k+1)} − x*‖₂ < ‖x^{(k)} − x*‖₂; that is, the iteration sequence {x^{(k)}}_{k=0}^{∞} produced by the MN-SHSS iteration method converges to x*. This completes the proof.

    In the same way, the MN-SHSS method also has a residual-updating form:

    Algorithm (The residual-updating variant of the MN-SHSS iteration method):

    For k = 0, 1, …, until convergence, do:

    Set d^{(k,0)} = 0 and b^{(k)} = B|x^{(k)}| + b − Ax^{(k)}

    For l = 0, 1, …, until convergence, do:

    d^{(k,l+1)} = (αI + H)^{−1}((αI − S)d^{(k,l)} + b^{(k)}),

    where H = ((A + Ω) + (A + Ω)^T)/2, S = ((A + Ω) − (A + Ω)^T)/2, and α is a positive constant.

    Endfor

    Set x^{(k+1)} = x^{(k)} + d^{(k,l_k)}.

    Endfor

    In this section, the effectiveness of the MN-HSS and MN-SHSS methods is shown by comparing them with other existing inexact iteration methods for solving the GAVE (1.1): the Picard-HSS method and the Picard-SHSS method. In addition, we also compare our methods with a descent method [29]. For this purpose, we compare the number of outer iteration steps (denoted by 'IT'), the elapsed CPU time (denoted by 'CPU') and the residual error (denoted by 'RES'), where

    RES := ‖Ax^{(k)} − B|x^{(k)}| − b‖₂ / ‖b‖₂.

    In our computations, we use the residual-updating versions with the optimal parameters α (those yielding the fewest iterations for each algorithm) and the zero vector as the starting iterate. We use the Cholesky factorization to solve the systems with coefficient matrix αI + H and the LU factorization for the other subsystem. All runs are terminated once the residual error satisfies RES ≤ 10^{−7} or the maximum iteration number kmax = 500 is exceeded. For the inner iterations, the stopping criterion of the different methods is set as

    ‖b^{(k)} − A d^{(k,l)}‖₂ / ‖b^{(k)}‖₂ ≤ 0.01,

    and a limit of 10 inner iterations (l_k = 10, k = 1, 2, …) is used. The parameters of the descent method are chosen as σ = 0.2, δ = 0.8, γ = 0.001, p = 3 and ϵ₀ = 0.01.

    The two numerical experiments below are performed in MATLAB R2020b.

    Example 1. [30] The first example comes from the LCP (1.3). In [3,30], it is shown that the LCP (1.3) can be transformed into

    (M + I)x − (M − I)|x| = q,  with x = ((M − I)z + q)/2,  (4.1)

    which obviously belongs to the GAVE class (1.1). We therefore let A = M + I and B = M − I, where M = M̂ + μI and the right-hand vector q is determined by q = −Mz*, in which

    M̂ = Tridiag(−I, S, −I) ∈ R^{n×n}

    is the block tridiagonal matrix with the blocks S on its diagonal and −I on its sub- and super-diagonals,

    S = tridiag(−1, 4, −1) ∈ R^{m×m}

    is the tridiagonal matrix with 4 on its diagonal and −1 on its sub- and super-diagonals, and n = m². Then z* = 1.2·ones(n,1) is the unique solution of the LCP (1.3), and the exact solution of the GAVE (4.1) is x* = −0.6·ones(n,1).
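The data of Example 1 can be generated as follows. This is a sketch; the sign conventions q = −Mz* and x* = −0.6·ones are our reading of the construction above, and the small dimension m is only for illustration.

```python
import numpy as np

def build_example1(m, mu):
    """Data for Example 1 (sketch): A = M + I, B = M - I, b = q = -M z*,
    with M = Mhat + mu*I and z* = 1.2*ones the LCP solution."""
    n = m * m
    C = np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1)
    S = 4 * np.eye(m) - C                                  # tridiag(-1, 4, -1)
    Mhat = np.kron(np.eye(m), S) - np.kron(C, np.eye(m))   # Tridiag(-I, S, -I)
    M = Mhat + mu * np.eye(n)
    z_star = 1.2 * np.ones(n)
    q = -M @ z_star
    return M + np.eye(n), M - np.eye(n), q                 # A, B, b
```

With these conventions, x* = ((M − I)z* + q)/2 = −z*/2 = −0.6·ones, and one can verify directly that A x* − B|x*| = b.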

    In this experiment, we consider three choices of the parameter μ: μ = 4, −1, −4. The matrices A and M are both symmetric positive definite when μ = 4, and symmetric indefinite when μ = −4. When μ = −1, the matrix A is symmetric positive definite but M is symmetric indefinite. We choose four sizes of n (n = 3600, 4900, 6400, 8100) for each parameter μ. For the simplicity and efficiency of our experiments, we use the same matrix Ω = 5I in the MN-HSS and MN-SHSS methods. For each tested method, the value of α yielding the fewest iteration steps is chosen; see Table 1. In Tables 2-4, we list the numerical results of the four studied inexact iteration methods for μ = 4, −1, −4. In these tables, "-" indicates that the corresponding algorithm fails to converge within the maximum number kmax of iteration steps, or diverges.

    Table 1.  The values of α for the testing methods in Example 1.
    Method n=3600 n=4900 n=6400 n=8100
    μ=4 Picard-HSS 7.6 7.5 7.6 7.5
    Picard-SHSS 0.8 0.1 0.1 0.1
    MN-HSS 11.4 11.6 11.2 11.4
    MN-SHSS 0.1 0.1 0.1 0.1
    μ=−1 Picard-HSS - - - -
    Picard-SHSS - - - -
    MN-HSS 10.4 10.4 10.4 10.4
    MN-SHSS 0.4 0.1 0.3 0.1
    μ=−4 Picard-HSS - - - -
    Picard-SHSS - - - -
    MN-HSS 8.7 7.9 7.8 7
    MN-SHSS 0.1 0.1 0.1 0.5

    Table 2.  Numerical results of the testing methods for Example 1 with μ=4.
    Methods n=3600 n=4900 n=6400 n=8100
    Picard-HSS IT 63 62 63 62
    CPU 7.2267 13.2534 23.4046 37.3651
    RES 7.8433e-8 8.7166e-8 7.8872e-8 8.7574e-8
    Picard-SHSS IT 72 73 73 72
    CPU 4.8826 7.5158 13.4642 20.8528
    RES 9.6906e-8 8.8579e-8 8.4765e-8 9.938e-8
    The descent method IT 3 3 3 3
    CPU 3.1777 7.5666 16.1594 31.5073
    RES 1.1654e-9 1.1643e-9 1.1635e-9 1.1628e-9
    MN-HSS IT 12 12 12 12
    CPU 1.8227 3.5410 6.6261 11.3629
    RES 3.7154e-8 3.9922e-8 4.8592e-8 3.8995e-8
    MN-SHSS IT 12 12 12 12
    CPU 0.7606 1.4825 2.6502 4.4347
    RES 4.9155e-8 4.9827e-8 5.0337e-8 5.0737e-8

    Table 3.  Numerical results of the testing methods for Example 1 with μ = −1.
    Methods n=3600 n=4900 n=6400 n=8100
    Picard-HSS IT - - - -
    CPU - - - -
    RES - - - -
    Picard-SHSS IT - - - -
    CPU - - - -
    RES - - - -
    The descent method IT 6 6 6 6
    CPU 6.1136 14.769 31.1904 64.9632
    RES 2.4038e-9 2.0506e-9 1.7879e-9 1.5849e-9
    MN-HSS IT 69 68 68 68
    CPU 7.9967 14.7294 25.6464 43.1252
    RES 8.7811e-8 9.9034e-8 9.3516e-8 8.8168e-8
    MN-SHSS IT 68 68 68 68
    CPU 4.5587 7.8019 14.1982 22.0090
    RES 9.9988e-8 9.5126e-8 8.7701e-8 8.5202e-8

    Table 4.  Numerical results of the testing methods for Example 1 with μ = −4.
    Methods n=3600 n=4900 n=6400 n=8100
    Picard-HSS IT - - - -
    CPU - - - -
    RES - - - -
    Picard-SHSS IT - - - -
    CPU - - - -
    RES - - - -
    The descent method IT - - - -
    CPU - - - -
    RES - - - -
    MN-HSS IT 46 46 46 46
    CPU 6.9628 13.1563 21.5587 36.8080
    RES 9.8167e-8 9.3627e-8 8.8073e-8 8.5249e-8
    MN-SHSS IT 46 46 46 46
    CPU 3.1611 5.7271 9.9884 16.9793
    RES 9.7483e-8 9.0358e-8 8.4596e-8 8.2290e-8


    From Table 2 we see that for μ = 4 all five evaluated methods successfully produce an approximate solution of the GAVE (4.1) at every matrix scale, and for each tested method the CPU time grows with the size n of the coefficient matrix. The numerical results show that our two inexact iteration methods outperform the Picard-HSS and Picard-SHSS methods in both iteration count and CPU time. Although the descent method needs fewer iteration steps, our methods require less CPU time. When μ = −1, −4, the convergence of the Picard-HSS and Picard-SHSS iteration methods is not guaranteed; accordingly, Tables 3 and 4 show that the two Picard-type methods converge slowly or fail to converge, while the MN-HSS and MN-SHSS methods still converge quickly. The freedom in choosing the matrix Ω gives our proposed algorithms greater robustness. When μ = −1, our methods generally still outperform the descent method in elapsed CPU time, and when μ = −4 the descent method fails to converge within the maximum number of iterations.

    Example 2. The coefficient matrix A of the GAVE (1.1) comes from the finite difference approximation of the two-dimensional convection-diffusion equation

    −(u_xx + u_yy) + q(u_x + u_y) + pu = f(x, y),  (x, y) ∈ S,
    u(x, y) = 0,  (x, y) ∈ ∂S,

    where S = (0,1) × (0,1), ∂S is its boundary, q is a positive constant and p is a real number. Discretizing the diffusive terms with the five-point finite difference scheme and the convective terms with central differences yields the coefficient matrix

    A = I_m ⊗ T_x + T_y ⊗ I_m + pI_n,

    where I_m and I_n are the identity matrices of order m and n with n = m², ⊗ denotes the Kronecker product, and T_x = tridiag(a₂, a₁, a₃) ∈ R^{m×m}, T_y = tridiag(a₂, 0, a₃) ∈ R^{m×m} with a₁ = 4, a₂ = −1 − Re, a₃ = −1 + Re. Here Re = qh/2 and h = 1/(m + 1) are the mesh Reynolds number and the equidistant step size, respectively. We let the coefficient matrix B = I and take the exact solution x* = (−1, 2, −3, …, (−1)ⁱ i, …, (−1)ⁿ n)^T, so the right-hand side vector is determined accordingly. In this instance the GAVE is actually an AVE (1.2).
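The coefficient matrix of Example 2 can be assembled with Kronecker products. This is a sketch under our reading of the construction above; in particular, the exact arrangement of the Kronecker factors and the placement of a₂/a₃ on the sub-/super-diagonals are assumptions reconstructed from the stated discretization.

```python
import numpy as np

def build_example2(m, p, q):
    """Coefficient matrix A = I_m (x) T_x + T_y (x) I_m + p*I_n (sketch)."""
    n = m * m
    h = 1.0 / (m + 1)
    Re = q * h / 2                      # mesh Reynolds number
    sup = np.diag(np.ones(m - 1), 1)    # superdiagonal pattern
    sub = np.diag(np.ones(m - 1), -1)   # subdiagonal pattern
    Tx = 4 * np.eye(m) + (-1 - Re) * sub + (-1 + Re) * sup
    Ty = (-1 - Re) * sub + (-1 + Re) * sup
    return np.kron(np.eye(m), Tx) + np.kron(Ty, np.eye(m)) + p * np.eye(n)
```

For q = 0 the matrix is symmetric (no convection), and for p = 0 it is the standard positive definite five-point Laplacian stencil.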

    In our experiments, we test the values p = 0, −1, q = 0, 1 and n = 100, 400, 1600, 6400. The parameters α of the four tested methods yielding the fewest iteration steps are chosen, while the matrix Ω in our methods is chosen as Ω = ωI; see Tables 5 and 6. We refer to [25] for the parameters of the Picard-HSS and Picard-SHSS methods.

    Table 5.  The values of α and ω for the testing methods in Example 2 (p=0).
    Methods n=100 n=400 n=1600 n=6400
    q=0 Picard-HSS 11.69 12.6 13.4 13
    Picard-SHSS 5.745 6.5 6.4 6.6
    MN-HSS 2.2 2.5 2.2 2.2
    Ω=0.5I Ω=0.7I Ω=0.7I Ω=0.7I
    MN-SHSS 0.1 0.1 0.1 0.1
    Ω=0.5I Ω=0.7I Ω=0.7I Ω=0.7I
    q=1 Picard-HSS 12.01 13.6 14 13
    Picard-SHSS 5.926 6.6 6.75 6.6
    MN-HSS 2.2 2.4 2.2 2
    Ω=0.5I Ω=0.7I Ω=0.7I Ω=0.7I
    MN-SHSS 0.3 0.1 0.1 0.1
    Ω=0.5I Ω=0.7I Ω=0.7I Ω=0.7I

    Table 6.  The values of α and ω for the testing methods in Example 2 (p=−1).
    Methods n=100 n=400 n=1600 n=6400
    q=0 Picard-HSS 13.8 11.2 10.36 10.1
    Picard-SHSS 6.96 5.7 5.21 5.1
    MN-HSS 1.1 0.6 0.5 0.3
    Ω=I Ω=I Ω=I Ω=I
    MN-SHSS 0.1 0.1 0.01 0.01
    Ω=I Ω=I Ω=I Ω=I
    q=1 Picard-HSS 14 11.29 10.4 11
    Picard-SHSS 7.05 5.72 5.2 5.1
    MN-HSS 1 0.6 0.3 0.1
    Ω=I Ω=I Ω=I Ω=I
    MN-SHSS 0.1 0.1 0.1 0.1
    Ω=I Ω=I Ω=I Ω=I


    From Tables 7 and 8, the iteration counts show that the Picard-type methods and the descent method converge in fewer outer steps than the MN-HSS method, but in terms of elapsed CPU time the newly proposed MN-SHSS method generally has the best computing efficiency. In Tables 9 and 10, that is for p = −1, the iteration counts and CPU times of the Picard-type methods increase dramatically with the matrix size, and the situation is even more pronounced for the descent method. In contrast, our methods still maintain low iteration counts and high computational efficiency. Therefore, the newly proposed methods are very effective.

    Table 7.  Numerical results of the testing methods for Example 2 with p=0 and q=0.
    Methods n=100 n=400 n=1600 n=6400
    Picard-HSS IT 36 32 30 28
    CPU 0.0149 0.2109 2.8908 41.9898
    RES 9.8815e-8 9.3643e-8 9.9900e-8 9.9043e-8
    Picard-SHSS IT 37 32 30 29
    CPU 0.0086 0.0747 1.5120 23.6302
    RES 9.4432e-8 9.6857e-8 9.7569e-8 9.9256e-8
    The descent method IT 5 5 5 5
    CPU 0.0160 0.0334 0.7701 26.7046
    RES 1.5532e-12 8.5090e-11 3.6631e-16 2.0878e-16
    MN-HSS IT 52 44 41 38
    CPU 0.0120 0.1491 2.4287 32.2431
    RES 9.3740e-8 8.0719e-8 7.6775e-8 9.9322e-8
    MN-SHSS IT 51 44 41 39
    CPU 0.0036 0.0294 0.5693 9.2022
    RES 8.1308e-8 8.0474e-8 7.9437e-8 7.6008e-8

    Table 8.  Numerical results of the testing methods for Example 2 with p=0 and q=1.
    Methods n=100 n=400 n=1600 n=6400
    Picard-HSS IT 35 32 31 28
    CPU 0.0131 0.2037 3.0079 42.4078
    RES 9.1372e-8 9.8614e-8 9.3177e-8 9.7012e-8
    Picard-SHSS IT 36 32 31 29
    CPU 0.0072 0.0782 1.5742 23.1650
    RES 9.8685e-8 9.4248e-8 9.4655e-8 9.6233e-8
    The descent method IT 6 5 5 5
    CPU 0.0048 0.0278 0.7420 26.1576
    RES 1.5836e-11 3.7231e-14 1.8960e-14 2.1382e-16
    MN-HSS IT 41 44 40 38
    CPU 0.0087 0.1642 2.1150 33.1043
    RES 8.5432e-8 9.9138e-8 9.6349e-8 9.4551e-8
    MN-SHSS IT 41 44 40 38
    CPU 0.0028 0.0299 0.5588 9.0679
    RES 8.4236e-8 9.8677e-8 9.9416e-8 9.8603e-8

    Table 9.  Numerical results of the testing methods for Example 2 with p=−1 and q=0.
    Methods n=100 n=400 n=1600 n=6400
    Picard-HSS IT 24 62 206 -
    CPU 0.0096 0.3811 19.5610 -
    RES 9.6164e-8 9.5608e-8 9.8436e-8 -
    Picard-SHSS IT 21 52 166 -
    CPU 0.0047 0.1255 8.2082 -
    RES 9.6537e-8 9.7425e-8 9.9905e-8 -
    The descent method IT 19 - - -
    CPU 0.0116 - - -
    RES 3.1376e-12 - - -
    MN-HSS IT 5 10 32 73
    CPU 0.0031 0.0700 3.0579 103.9036
    RES 9.5409e-8 7.5888e-8 6.886e-8 9.7832e-8
    MN-SHSS IT 4 5 4 6
    CPU 0.0007 0.0160 0.1692 5.6192
    RES 2.6017e-8 3.1319e-8 3.9399e-9 3.1310e-8

    Table 10.  Numerical results of the testing methods for Example 2 with p=−1 and q=1.
    Methods n=100 n=400 n=1600 n=6400
    Picard-HSS IT 24 61 203 -
    CPU 0.0095 0.3904 22.6934 -
    RES 9.7842e-8 9.9013e-8 9.3292e-8 -
    Picard-SHSS IT 21 51 166 -
    CPU 0.0043 0.1228 8.1885 -
    RES 9.5087e-8 9.5556e-8 9.5308e-8 -
    The descent method IT - - - -
    CPU - - - -
    RES - - - -
    MN-HSS IT 6 10 20 42
    CPU 0.0026 0.0735 1.9553 69.7912
    RES 1.0635e-8 6.2950e-8 5.5025e-8 9.4565e-8
    MN-SHSS IT 3 5 13 48
    CPU 0.0008 0.0138 0.6755 37.6225
    RES 8.1615e-8 6.7032e-8 7.7787e-8 9.9529e-8


    In this article, to accelerate the MN iteration method, we proposed the MN-HSS and MN-SHSS iteration algorithms for solving the generalized absolute value equation. Moreover, we gave sufficient conditions under which our methods converge for the GAVE. Finally, we provided numerical experiments confirming the feasibility and effectiveness of the newly presented methods.

    This work is supported by the National Natural Science Foundation of China (Grant no. 1227010254, Research on some efficient and fast algorithms for complex nonlinear problems).

    The authors declare that they have no conflicts of interest.



  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
