    The Sasa-Satsuma equation

    \begin{equation} u_t+u_{xxx}+6|u|^{2}u_x+3u(|u|^{2})_x = 0, \end{equation} (1.1)

    the so-called higher-order nonlinear Schrödinger equation [1], is relevant to several physical phenomena, for example, in optical fibers [2,3], in deep water waves [4] and, more generally, in dispersive nonlinear media [5]. Because this equation describes these important nonlinear phenomena, it has received considerable attention and extensive research. The Sasa-Satsuma equation has been discussed by means of various approaches, such as the inverse scattering transform [1], the Riemann-Hilbert method [6], the Hirota bilinear method [7], the Darboux transformation [8], and others [9,10,11]. The initial-boundary value problem for the Sasa-Satsuma equation on a finite interval was studied by the Fokas method [12], which is also effective for initial-boundary value problems on the half-line [35,36,37]. In Ref. [13], finite genus solutions of the coupled Sasa-Satsuma hierarchy are obtained on the basis of the theory of trigonal curves, the Baker-Akhiezer function and the meromorphic functions [14,15,16]. In Ref. [17], the super Sasa-Satsuma hierarchy associated with the 3×3 matrix spectral problem was proposed, and its bi-Hamiltonian structures were derived with the aid of the super trace identity.

    The nonlinear steepest descent method [18], also called the Deift-Zhou method, for oscillatory Riemann-Hilbert problems is a powerful tool to study the long-time asymptotic behavior of the solution for the soliton equation, by which the long-time asymptotic behaviors for a number of integrable nonlinear evolution equations associated with 2×2 matrix spectral problems have been obtained, for example, the mKdV equation, the KdV equation, the sine-Gordon equation, the modified nonlinear Schrödinger equation, the Camassa-Holm equation, the derivative nonlinear Schrödinger equation and so on [19,20,21,22,23,24,25,26,27,28,29,30]. However, there is little literature on the long-time asymptotic behavior of solutions for integrable nonlinear evolution equations associated with 3×3 matrix spectral problems [31,32,33]. Usually, it is difficult and complicated for the 3×3 case. Recently, the nonlinear steepest descent method was successfully generalized to derive the long-time asymptotics of the initial value problems for the coupled nonlinear Schrödinger equation and the Sasa-Satsuma equation with the complex potentials [33,34]. The main difference between the 2×2 and 3×3 cases is that the former corresponds to a scalar Riemann-Hilbert problem, while the latter corresponds to a matrix Riemann-Hilbert problem. In general, the solution of the matrix Riemann-Hilbert problem cannot be given in explicit form, but the scalar Riemann-Hilbert problem can be solved by the Plemelj formula.

    The main aim of this paper is to study the long-time asymptotics of the Cauchy problem for the generalized Sasa-Satsuma equation [38] via the nonlinear steepest descent method,

    \begin{equation} \left\{\begin{array}{l} u_t+u_{xxx}-6a|u|^{2}u_x-6bu^{2}u^{\ast}_x-3au(|u|^{2})_x-3bu^{\ast}(|u|^{2})_x = 0,\\ u(x,0) = u_0(x), \end{array}\right. \end{equation} (1.2)

    where a is a real constant, b is a complex constant satisfying a^{2}\neq|b|^{2}, and the asterisk "\ast" denotes the complex conjugate. It is easy to see that the generalized Sasa-Satsuma equation (1.2) can be reduced to the Sasa-Satsuma equation (1.1) when a = -1 and b = 0. Suppose that the initial value u_0(x) lies in the Schwartz space \mathcal{S}(\mathbb{R}) = \{f(x)\in C^{\infty}(\mathbb{R}):\sup_{x\in\mathbb{R}}|x^{\alpha}\partial^{\beta}f(x)|<\infty,\ \alpha,\beta\in\mathbb{N}\}. The vector function \gamma(k) is determined by the initial data in (2.15) and (2.19), and \gamma(k) satisfies the conditions (P_1) and (P_2), where

    (P_1): \begin{equation*} \left\{\begin{array}{l} \gamma^{\dagger}(k)B_1\gamma(k)+\dfrac{|b|^{2}}{4}\big(\gamma^{\dagger}(k)\sigma_3\gamma(k)\big)^{2}<1,\\[1mm] \gamma^{\dagger}(k)B_1\gamma(k)+a\,\gamma^{\dagger}(k)\sigma_3\gamma(k)<2,\\[1mm] 2\gamma^{\dagger}(k)B_1\gamma(k)+|\gamma(k)|^{2}+|B_1\gamma(k)|^{2}<4, \end{array}\right. \end{equation*}

    with

    \begin{equation} \sigma_3 = \left(\begin{array}{cc} 1 & 0\\ 0 & -1 \end{array}\right),\qquad B_1 = \left(\begin{array}{cc} a & b^{\ast}\\ b & a \end{array}\right); \end{equation} (1.3)

    (P_2): When \det B_1>0 and a>0, \big(2a-|B_1\gamma(k)|^{2}\big) and \big(2a-\det B_1|\gamma(k)|^{2}\big)/\big(1-\gamma^{\dagger}(k)B_1\gamma(k)\big) are positive and bounded; otherwise, \big(|B_1\gamma(k)|^{2}-2a\big) and \big(\det B_1|\gamma(k)|^{2}-2a\big)/\big(1-\gamma^{\dagger}(k)B_1\gamma(k)\big) are positive and bounded.

    The main result of this paper is as follows:

    Theorem 1.1. Let u(x,t) be the solution of the Cauchy problem for the generalized Sasa-Satsuma equation (1.2) with the initial value u_0\in\mathcal{S}(\mathbb{R}). Suppose that the vector function \gamma(k) is defined in (2.19) and that the hypotheses (P_1) and (P_2) hold. Then, for x<0 and x/t<C,

    \begin{equation} u(x,t) = u_a(x,t)+O\big(c(k_0)t^{-1}\log t\big), \end{equation} (1.4)

    where C is a fixed constant, and

    \begin{align*} & u_a(x,t) = \sqrt{\frac{\nu}{12tk_0\,\gamma^{\dagger}(k_0)B_1\gamma(k_0)}}\left(|\gamma_1(k_0)|e^{i(\phi+\arg\gamma_1(k_0))}+|\gamma_2(k_0)|e^{i(\phi+\arg\gamma_2(k_0))}\right),\\ & k_0 = \sqrt{-\frac{x}{12t}},\qquad \nu = -\frac{1}{2\pi}\log\big(1-\gamma^{\dagger}(k_0)B_1\gamma(k_0)\big),\\ & \phi = \nu\log(196tk_0^{3})-16tk_0^{3}+\arg\Gamma(i\nu)+\frac{1}{\pi}\int_{-k_0}^{k_0}\log|\xi+k_0|\,\mathrm{d}\big(1-\gamma^{\dagger}(\xi)B_1\gamma(\xi)\big)-\frac{\pi}{4}, \end{align*}

    c(\cdot) is rapidly decreasing, \Gamma(\cdot) is the Gamma function, and \gamma_1 and \gamma_2 are the first and the second rows of \gamma(k), respectively.

    Remark 1.1. The two conditions (P_1) and (P_2) satisfied by \gamma(k) are necessary. The condition (P_1) guarantees the existence and the uniqueness of the solutions of the basic Riemann-Hilbert problem (2.16) and the Riemann-Hilbert problem (3.1). The boundedness of the function \delta(k) defined in subsection 3.1 relies on the condition (P_2).

    Remark 1.2. In the case of a = -1 and b = 0, the generalized Sasa-Satsuma equation (1.2) can be reduced to the Sasa-Satsuma equation. Then it is obvious that the condition (P_1) holds, and the condition (P_2) reduces to the requirement that |\gamma(k)| is bounded. Therefore, the conditions (P_1) and (P_2) in this case are equivalent to the condition on the reflection coefficient in [34], that is, |\gamma(k)| is bounded for the Sasa-Satsuma equation.
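
    Concretely, when a = -1 and b = 0 one has B_1 = -I and \gamma^{\dagger}(k)B_1\gamma(k) = -|\gamma(k)|^{2}, so the three inequalities in (P_1) become

    \begin{equation*} -|\gamma(k)|^{2}<1, \qquad -2|\gamma_1(k)|^{2}<2, \qquad 0<4, \end{equation*}

    which hold automatically. Moreover, \det B_1 = 1 and a<0 put (P_2) into its second branch, where |B_1\gamma(k)|^{2}-2a = |\gamma(k)|^{2}+2 and (\det B_1|\gamma(k)|^{2}-2a)/(1-\gamma^{\dagger}(k)B_1\gamma(k)) = (|\gamma(k)|^{2}+2)/(1+|\gamma(k)|^{2})\in(1,2], so (P_2) holds exactly when |\gamma(k)| is bounded.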

    The outline of this paper is as follows. In section 2, we derive a Riemann-Hilbert problem from the scattering relation, and the solution of the generalized Sasa-Satsuma equation is expressed in terms of the solution of the Riemann-Hilbert problem. In section 3, we deal with the Riemann-Hilbert problem via the nonlinear steepest descent method, from which the long-time asymptotics in Theorem 1.1 is obtained at the end.

    We begin with the 3×3 Lax pair of the generalized Sasa-Satsuma equation

    \begin{equation} \psi_x = (ik\sigma+U)\psi, \end{equation} (2.1a)
    \begin{equation} \psi_t = (4ik^{3}\sigma+V)\psi, \end{equation} (2.1b)

    where \psi is a matrix function and k is the spectral parameter, \sigma = \mathrm{diag}(1,1,-1),

    \begin{equation} U = \left(\begin{array}{ccc} 0 & 0 & u\\ 0 & 0 & u^{\ast}\\ -au^{\ast}-bu & -b^{\ast}u^{\ast}-au & 0 \end{array}\right), \end{equation} (2.2)
    \begin{equation} V = 4k^{2}U+2ik(U^{2}+U_x)\sigma+[U_x,U]-U_{xx}+2U^{3}. \end{equation} (2.3)

    We introduce a new eigenfunction \mu through \mu = \psi e^{-ik\sigma x-4ik^{3}\sigma t}, where e^{\sigma} = \mathrm{diag}(e,e,e^{-1}). Then (2.1a) and (2.1b) become

    \begin{equation} \mu_x = ik[\sigma,\mu]+U\mu, \end{equation} (2.4a)
    \begin{equation} \mu_t = 4ik^{3}[\sigma,\mu]+V\mu, \end{equation} (2.4b)

    where [\cdot,\cdot] is the commutator, [\sigma,\mu] = \sigma\mu-\mu\sigma. From (2.4a), the matrix Jost solutions \mu_{\pm} satisfy the Volterra integral equations

    \begin{equation} \mu_{\pm}(k;x,t) = I+\int_{\pm\infty}^{x}e^{ik\sigma(x-\xi)}U(\xi,t)\mu_{\pm}(k;\xi,t)e^{-ik\sigma(x-\xi)}\,\mathrm{d}\xi. \end{equation} (2.5)

    Let \mu_{\pm L} denote the first two columns of \mu_{\pm} and \mu_{\pm R} the third column, i.e., \mu_{\pm} = (\mu_{\pm L},\mu_{\pm R}). Furthermore, we can infer that \mu_{+R} and \mu_{-L} are analytic in the lower half complex k-plane \mathbb{C}^{-}, while \mu_{+L} and \mu_{-R} are analytic in the upper half complex k-plane \mathbb{C}^{+}. Then we can introduce the sectionally analytic functions P_1(k) and P_2(k) by

    \begin{equation*} P_1(k) = (\mu_{-L}(k),\mu_{+R}(k)),\quad k\in\mathbb{C}^{-},\qquad P_2(k) = (\mu_{+L}(k),\mu_{-R}(k)),\quad k\in\mathbb{C}^{+}. \end{equation*}

    One can find from (2.2) that U is traceless, so \det\mu_{\pm} is independent of x. Besides, \det\mu_{\pm} = 1 according to the evaluation of \det\mu_{\pm} as x\to\pm\infty. Because both \mu_{\pm}e^{ik\sigma x+4ik^{3}\sigma t} satisfy the differential equations (2.1a) and (2.1b), they are linearly related. So there exists a scattering matrix s(k) that satisfies

    \begin{equation} \mu_{-} = \mu_{+}e^{ik\sigma x+4ik^{3}\sigma t}s(k)e^{-ik\sigma x-4ik^{3}\sigma t},\qquad \det s(k) = 1. \end{equation} (2.6)
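
    Indeed, since U is traceless and \mathrm{tr}\big(\mu_{\pm}^{-1}[\sigma,\mu_{\pm}]\big) = 0, equation (2.4a) gives

    \begin{equation*} \frac{\partial}{\partial x}\det\mu_{\pm} = \det\mu_{\pm}\,\mathrm{tr}\big(\mu_{\pm}^{-1}(ik[\sigma,\mu_{\pm}]+U\mu_{\pm})\big) = \det\mu_{\pm}\,\mathrm{tr}\,U = 0,
    \end{equation*}

    so \det\mu_{\pm} is independent of x, and letting x\to\pm\infty in (2.5) yields \det\mu_{\pm} = 1.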

    In this paper, we denote a 3×3 matrix A by the block form

    \begin{equation*} A = \left(\begin{array}{cc} A_{11} & A_{12}\\ A_{21} & A_{22} \end{array}\right), \end{equation*}

    where A_{11} is a 2×2 matrix and A_{22} is a scalar. Let q = (u,u^{\ast})^{T}; then we can rewrite U of (2.2) as

    \begin{equation*} U = \left(\begin{array}{cc} 0_{2\times2} & q\\ -q^{\dagger}B_1 & 0 \end{array}\right), \end{equation*}

    where "" is the Hermitian conjugate. In addition, there are two symmetry properties for U,

    \begin{equation} B^{-1}U^{\dagger}(k^{\ast})B = -U(k),\qquad \tau U^{\ast}(-k^{\ast})\tau = U(k), \end{equation} (2.7)
    \begin{equation} B = \left(\begin{array}{cc} B_1 & 0\\ 0 & 1 \end{array}\right),\quad \tau = \left(\begin{array}{cc} \sigma_1 & 0\\ 0 & 1 \end{array}\right),\quad \sigma_1 = \left(\begin{array}{cc} 0 & 1\\ 1 & 0 \end{array}\right), \end{equation} (2.8)

    where B and τ are represented as block forms. Hence, the Jost solutions μ± and the scattering matrix s(k) also have the corresponding symmetry properties

    \begin{equation} B^{-1}\mu_{\pm}^{\dagger}(k^{\ast})B = \mu_{\pm}^{-1}(k),\qquad \tau\mu_{\pm}^{\ast}(-k^{\ast})\tau = \mu_{\pm}(k); \end{equation} (2.9)
    \begin{equation} B^{-1}s^{\dagger}(k^{\ast})B = s^{-1}(k),\qquad \tau s^{\ast}(-k^{\ast})\tau = s(k). \end{equation} (2.10)

    We write s(k) in the block form (s_{ij})_{2\times2}, and from the symmetry properties (2.10) we have

    \begin{equation} s_{22}(k) = \det[s_{11}^{\ast}(k^{\ast})],\qquad B_1^{-1}s_{21}^{\dagger}(k^{\ast}) = -\mathrm{adj}[s_{11}(k)]s_{12}(k), \end{equation} (2.11)

    where \mathrm{adj}\,X denotes the adjugate of the matrix X. Then we can write s(k) as

    \begin{equation} s(k) = \left(\begin{array}{cc} s_{11}(k) & s_{12}(k)\\ -s_{12}^{\dagger}(k^{\ast})\,\mathrm{adj}[s_{11}^{\dagger}(k^{\ast})]B_1 & \det[s_{11}^{\ast}(k^{\ast})] \end{array}\right), \end{equation} (2.12)

    where

    \begin{equation} \sigma_1s_{11}^{\ast}(-k^{\ast})\sigma_1 = s_{11}(k),\qquad \sigma_1s_{12}^{\ast}(-k^{\ast}) = s_{12}(k). \end{equation} (2.13)
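
    A brief check of (2.11): since \det s(k) = 1, we have s^{-1}(k) = \mathrm{adj}\,s(k), and the block-inversion formula gives (when s_{11}(k) is invertible)

    \begin{equation*} s^{-1}(k) = \left(\begin{array}{cc} \ast & -\mathrm{adj}[s_{11}(k)]s_{12}(k)\\ \ast & \det[s_{11}(k)] \end{array}\right), \end{equation*}

    while the corresponding blocks of B^{-1}s^{\dagger}(k^{\ast})B are B_1^{-1}s_{21}^{\dagger}(k^{\ast}) and s_{22}^{\ast}(k^{\ast}); equating them through (2.10) yields (2.11).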

    From the evaluation of (2.6) at t=0, one infers

    \begin{equation} s(k) = \lim\limits_{x\to+\infty}e^{-ikx\sigma}\mu_{-}(k;x,0)e^{ikx\sigma}, \end{equation} (2.14)

    which implies that

    \begin{equation} \left\{\begin{array}{l} s_{11}(k) = I+\displaystyle\int_{-\infty}^{+\infty}q(\xi,0)\mu_{-21}(k;\xi,0)\,\mathrm{d}\xi,\\[2mm] s_{12}(k) = \displaystyle\int_{-\infty}^{+\infty}e^{-2ik\xi}q(\xi,0)\mu_{-22}(k;\xi,0)\,\mathrm{d}\xi. \end{array}\right. \end{equation} (2.15)

    Theorem 2.1. Let M(k;x,t) be analytic for k\in\mathbb{C}\backslash\mathbb{R} and satisfy the Riemann-Hilbert problem

    \begin{equation} \begin{cases} M_{+}(k;x,t) = M_{-}(k;x,t)J(k;x,t), & k\in\mathbb{R},\\ M(k;x,t)\to I, & k\to\infty, \end{cases} \end{equation} (2.16)

    where

    \begin{equation} M_{\pm}(k;x,t) = \lim\limits_{\epsilon\to0^{+}}M(k\pm i\epsilon;x,t), \end{equation} (2.17)
    \begin{equation} J(k;x,t) = \left(\begin{array}{cc} I-\gamma(k)\gamma^{\dagger}(k^{\ast})B_1 & -e^{-2it\theta}\gamma(k)\\ e^{2it\theta}\gamma^{\dagger}(k^{\ast})B_1 & 1 \end{array}\right), \end{equation} (2.18)
    \begin{equation} \theta(k;x,t) = -\frac{x}{t}k-4k^{3},\qquad \gamma(k) = s_{11}^{-1}(k)s_{12}(k), \end{equation} (2.19)

    γ(k) lies in Schwartz space and satisfies

    \begin{equation} \sigma_1\gamma^{\ast}(-k^{\ast}) = \gamma(k). \end{equation} (2.20)

    Then the solution of this Riemann-Hilbert problem exists and is unique, and the function

    \begin{equation} q(x,t) = (u(x,t),u^{\ast}(x,t))^{T} = -2i\lim\limits_{k\to\infty}\big(k(M(k;x,t))_{12}\big), \end{equation} (2.21)

    and u(x,t) is the solution of the generalized Sasa-Satsuma equation.

    Proof. The matrix (J(k;x,t)+J^{\dagger}(k;x,t))/2 is positive definite because of the condition (P_1) that \gamma(k) satisfies; hence the solution of the Riemann-Hilbert problem (2.16) exists and is unique according to the Vanishing Lemma [39]. We define M(k;x,t) by

    \begin{equation} M(k;x,t) = \begin{cases} \left(\mu_{-L}(k),\dfrac{\mu_{+R}(k)}{\det[a(k)]}\right), & k\in\mathbb{C}^{-},\\[3mm] \left(\mu_{+L}(k)a(k),\mu_{-R}(k)\right), & k\in\mathbb{C}^{+}. \end{cases} \end{equation} (2.22)

    Considering the scattering relation (2.6) and the construction of M(k;x,t), we can obtain the jump condition and the corresponding Riemann-Hilbert problem (2.16) after tedious but straightforward algebraic manipulations. Substituting the large-k asymptotic expansion of M(k;x,t) into (2.4a) and comparing the coefficients of O(1/k), we can get (2.21).

    In this section, we analyze the Riemann-Hilbert problem (2.16) by the nonlinear steepest descent method and study the long-time asymptotic behavior of the solution. We make the following basic notations. (i) For any matrix M define |M| = (\mathrm{tr}\,M^{\dagger}M)^{\frac{1}{2}} and for any matrix function A(\cdot) define \Vert A(\cdot)\Vert_{p} = \Vert\,|A(\cdot)|\,\Vert_{p}. (ii) For two quantities A and B, write A\lesssim B if there exists a constant C>0 such that |A|\leqslant C|B|. If C depends on the parameter \alpha we shall say that A\lesssim_{\alpha}B. (iii) For any oriented contour \Sigma, we say that the left side is + and the right side is -.

    First of all, it is noteworthy that there are two stationary points \pm k_0, where \pm k_0 = \pm \sqrt{-\frac{x}{12t}} satisfy \frac{\mathrm{d}\theta}{\mathrm{d}k}\big\vert_{k = \pm k_0} = 0. The jump matrix J(k; x, t) has a lower-upper triangular factorization and an upper-lower triangular factorization. We can introduce an appropriate Riemann-Hilbert problem to unify these two factorizations. In this process, we have to reorient the contour of the Riemann-Hilbert problem.

    The two factorizations of the jump matrix J are

    \begin{equation*} J = \begin{cases} \left(\begin{array}{cc} I & -e^{-2it\theta}\gamma(k)\\ 0 & 1\\ \end{array}\right) \left(\begin{array}{cc} I & 0\\ e^{2it\theta}\gamma^{\dagger}(k^{\ast})B_1 & 1\\ \end{array}\right), \\ \left(\begin{array}{cc} I & 0\\ \frac{e^{2it\theta}\gamma^{\dagger}(k^{\ast})B_1}{1-\gamma^{\dagger}(k^{\ast})B_1\gamma(k)} & 1\\ \end{array}\right) \left(\begin{array}{cc} I-\gamma(k)\gamma^{\dagger}(k^{\ast})B_1 & 0\\ 0 & (1-\gamma^{\dagger}(k^{\ast})B_1\gamma(k))^{-1}\\ \end{array}\right) \left(\begin{array}{cc} I & \frac{-e^{-2it\theta}\gamma(k)}{1-\gamma^{\dagger}(k^{\ast})B_1\gamma(k)}\\ 0 & 1\\ \end{array}\right). \end{cases} \end{equation*}
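
    For instance, multiplying out the first factorization recovers the jump matrix (2.18):

    \begin{equation*} \left(\begin{array}{cc} I & -e^{-2it\theta}\gamma(k)\\ 0 & 1\\ \end{array}\right) \left(\begin{array}{cc} I & 0\\ e^{2it\theta}\gamma^{\dagger}(k^{\ast})B_1 & 1\\ \end{array}\right) = \left(\begin{array}{cc} I-\gamma(k)\gamma^{\dagger}(k^{\ast})B_1 & -e^{-2it\theta}\gamma(k)\\ e^{2it\theta}\gamma^{\dagger}(k^{\ast})B_1 & 1\\ \end{array}\right) = J. \end{equation*}

    The second factorization is verified in the same way, using the identity \gamma^{\dagger}(k^{\ast})B_1\big(I-\gamma(k)\gamma^{\dagger}(k^{\ast})B_1\big) = \big(1-\gamma^{\dagger}(k^{\ast})B_1\gamma(k)\big)\gamma^{\dagger}(k^{\ast})B_1.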

    We introduce a 2 \times 2 matrix function \delta(k) to unify the two factorizations, and \delta(k) satisfies the following Riemann-Hilbert problem

    \begin{align} \left\{\begin{array}{rll} \delta_{+}(k)& = \delta_{-}(k)(I-\gamma(k)\gamma^{\dagger}(k^{\ast})B_1), & k\in(-k_0, k_0), \\ & = \delta_{-}(k), & k\in(-\infty, -k_0)\cup(k_0, +\infty), \\ \delta(k)&\to I, & k\to\infty, \end{array}\right. \end{align} (3.1)

    which implies a scalar Riemann-Hilbert problem

    \begin{equation} \left\{\begin{array}{rll} \det\delta_{+}(k)& = \det\delta_{-}(k)(1-\gamma^\dagger(k^\ast)B_1\gamma(k)), & k\in(-k_0, k_0), \\ & = \det\delta_{-}(k), & k\in(-\infty, -k_0)\cup(k_0, +\infty), \\ \det\delta(k)&\to 1, & k\to\infty. \end{array}\right. \end{equation} (3.2)

    The jump matrix I-\gamma(k)\gamma^{\dagger}(k^{\ast})B_1 of the Riemann-Hilbert problem (3.1) is positive definite, so the solution \delta(k) exists and is unique. The scalar Riemann-Hilbert problem (3.2) can be solved by the Plemelj formula,

    \begin{equation} \det\delta(k) = \left(\frac{k-k_0}{k+k_0}\right)^{-i\nu}e^{\chi(k)}, \end{equation} (3.3)

    where

    \begin{gather*} \nu = -\frac{1}{2\pi}\log(1-\gamma^\dagger(k_0)B_1\gamma(k_0)), \\ \chi(k) = -\frac{1}{2\pi i}\int_{-k_0}^{k_0}\log\left(\frac{1-\gamma^\dagger(\xi^\ast)B_1\gamma(\xi)}{1-\gamma^\dagger(k_0^\ast)B_1\gamma(k_0)}\right)\, \frac{\mathrm{d}\xi}{\xi-k}. \end{gather*}

    Then we have by uniqueness that

    \begin{equation} \delta(k) = B_1^{-1}(\delta^\dagger(k^\ast))^{-1}B_1, \quad \delta(k) = \sigma_1\delta^\ast(-k^\ast)\sigma_1. \end{equation} (3.4)

    Substituting (3.4) into (3.1), we have

    \begin{equation} \delta^\dagger_+(k^\ast)B_1\delta_+(k) = B_1-B_1\gamma(k)\gamma^\dagger(k^\ast)B_1, \end{equation} (3.5)

    which means that

    \begin{equation} \mathrm{tr}[\delta^\dagger_+(k^\ast)B_1\delta_+(k)] = 2a-|B_1\gamma(k)|^2. \end{equation} (3.6)
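
    Indeed, taking the trace of (3.5) and using \mathrm{tr}\,B_1 = 2a together with the cyclic property, we get

    \begin{equation*} \mathrm{tr}\big[B_1-B_1\gamma(k)\gamma^{\dagger}(k^{\ast})B_1\big] = 2a-\gamma^{\dagger}(k^{\ast})B_1^{2}\gamma(k) = 2a-|B_1\gamma(k)|^{2}, \end{equation*}

    which is exactly (3.6), since B_1 is Hermitian.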

    Actually, the condition (P_2) satisfied by \gamma(k) guarantees the boundedness of \delta_\pm(k), and we give a brief proof below. When \det B_1 > 0, we find that the Hermitian matrix B_1 can be decomposed. In other words, there exists a triangular matrix S that satisfies B_1 = a S^\dagger S. So \mathrm{tr}[\delta^\dagger_+B_1\delta_+] = a|S\delta_+|^2. When \det B_1 < 0 and |a| > 0, the matrix B_1 has a decomposition B_1 = S^\dagger DS, where S is a triangular matrix and D is a diagonal matrix whose diagonal elements have opposite signs. In the case of a > 0, B_1 can be decomposed as below,

    \begin{equation} B_1 = \left(\begin{array}{cc} -a&b^\ast\\ 0&1\\ \end{array}\right)^{-1} \left(\begin{array}{cc} a\det B_1&0\\ 0&a\\ \end{array}\right) \left(\begin{array}{cc} -a&0\\ b&1\\ \end{array}\right)^{-1}. \end{equation} (3.7)

    We denote S\delta_+(k) by (G_{ij})_{2 \times 2} and note that c_1 = 2a-|B_1\gamma(k)|^2 is negative; then

    \begin{equation} a\det B_1(|G_{11}|^2+|G_{21}|^2)+a(|G_{12}|^2+|G_{22}|^2) = c_1. \end{equation} (3.8)

    Noticing that \det B_1 < 0, we find a negative constant c_2 that satisfies c_2\leqslant a\det B_1(c_3-1)/(1-\det B_1c_3), where c_3 is a constant with 0 < c_3 < 1, which implies

    \begin{equation} |S\delta_+(k)|^2\leqslant\frac{c_1}{c_2}\lesssim1. \end{equation} (3.9)

    The case a < 0 is similar. In particular, when a = 0, then |b| > 0 and it is easy to see that B_1 is not definite. For |\mathrm{Re}\,b| > 0, we have the decomposition

    \begin{equation} B_1 = \left(\begin{array}{cc} \frac{b}{|b|^2+(b^\ast)^2}&\frac{b^\ast}{|b|^2+(b^\ast)^2}\\ \frac{b^\ast}{|b|^2+(b^\ast)^2}&-\frac{b^\ast}{|b|^2+(b^\ast)^2}\\ \end{array}\right)^{-1} \left(\begin{array}{cc} \frac{1}{b+b^\ast}&0\\ 0&-\frac{1}{b+b^\ast}\\ \end{array}\right) \left(\begin{array}{cc} \frac{b^\ast}{|b|^2+b^2}&\frac{b}{|b|^2+b^2}\\ \frac{b}{|b|^2+b^2}&-\frac{b}{|b|^2+b^2}\\ \end{array}\right)^{-1}. \end{equation} (3.10)

    For |\mathrm{Re}b| = 0 , we have

    \begin{equation} B_1 = \left(\begin{array}{cc} \frac{i}{2}&\frac{1-i}{2}\\ -\frac{i}{2}&\frac{1+i}{2}\\ \end{array}\right)^{-1} \left(\begin{array}{cc} -ib/2&0\\ 0&ib/2\\ \end{array}\right) \left(\begin{array}{cc} -\frac{i}{2}&\frac{i}{2}\\ \frac{1+i}{2}&\frac{1-i}{2}\\ \end{array}\right)^{-1}. \end{equation} (3.11)

    So we get the boundedness of |\delta_+(k)|. The other quantities admit the same analysis, based on

    \begin{align} &\delta^\dagger_-(k^\ast)B_1\delta_-(k) = (B_1-\gamma(k)\gamma^\dagger(k^\ast))^{-1}, \quad \mathrm{as} \quad k\in(-k_0, k_0), \end{align} (3.12)
    \begin{align} &|\delta_+(k)|^2 = |\delta_-(k)|^2 = 2, \quad \mathrm{as} \quad k\in(-\infty, -k_0)\cup(k_0, +\infty), \end{align} (3.13)
    \begin{align} &|\det\delta_+(k)| = \begin{cases} 1-\gamma^\dagger(k^\ast)B_1\gamma(k), & k\in(-k_0, k_0), \\ 1 , & k\in(-\infty, -k_0)\cup(k_0, +\infty), \\ \end{cases} \end{align} (3.14)
    \begin{align} &|\det\delta_-(k)| = \begin{cases} \frac{1}{1-\gamma^\dagger(k^\ast)B_1\gamma(k)}, & k\in(-k_0, k_0), \\ 1, & k\in(-\infty, -k_0)\cup(k_0, +\infty).\\ \end{cases} \end{align} (3.15)

    Hence, by the maximum principle, we have

    \begin{equation} \vert\delta(k)\vert\leqslant \mathrm{const} \lt \infty, \quad \vert\det\delta(k)\vert\leqslant \mathrm{const} \lt \infty, \end{equation} (3.16)

    for all k\in\mathbb{C} . We define the functions

    \begin{align} &\rho(k) = \begin{cases} -\gamma(k), & k\in(-\infty, -k_0)\cup(k_0, +\infty), \\ \dfrac{\gamma(k)}{1-\gamma^\dagger(k^\ast)B_1\gamma(k)}, & k\in(-k_0, k_0), \\ \end{cases} \end{align} (3.17)
    \begin{align} &\Delta(k) = \left(\begin{array}{cc} \delta(k) & 0\\ 0 & (\det\delta(k))^{-1}\\ \end{array}\right). \end{align} (3.18)

    We reverse the orientation for k\in(-\infty, -k_0)\cup(k_0, +\infty) as in Figure 1, and M^\Delta(k; x, t) = M(k; x, t)\Delta^{-1}(k) satisfies the Riemann-Hilbert problem on the reoriented contour

    \begin{equation} \begin{cases} M^\Delta_+(k;x, t) = M^\Delta_-(k;x, t)J^\Delta(k;x, t), & k\in\mathbb{R}, \\ M^\Delta(k;x, t)\to I, & k\to\infty, \\ \end{cases} \end{equation} (3.19)
    Figure 1.  The reoriented contour on \mathbb{R} .

    where the jump matrix J^\Delta(k; x, t) has a decomposition

    \begin{equation} J^\Delta(k;x, t) = (b_-)^{-1}b_+ = \left(\begin{array}{cc} I & 0\\ \dfrac{e^{2it\theta}(k)\rho^\dagger(k^\ast)B_1\delta_-^{-1}(k)}{\det\delta_-(k)} & 1\\ \end{array}\right) \left(\begin{array}{cc} I & -e^{-2it\theta}\delta_+(k)\rho(k)[\det\delta_+(k)]\\ 0 & 1\\ \end{array}\right). \end{equation} (3.20)

    For the convenience of discussion, we define

    \begin{align*} L = \{k_0+\alpha k_0e^{-\frac{3\pi i}{4}}:-\infty \lt \alpha\leqslant\sqrt{2}\}\cup\{-k_0+\alpha k_0e^{-\frac{\pi i}{4}}:-\infty \lt \alpha\leqslant\sqrt{2}\}, \\ L_\epsilon = \{k_0+\alpha k_0e^{-\frac{3\pi i}{4}}:-\epsilon \lt \alpha\leqslant\sqrt{2}\}\cup\{-k_0+\alpha k_0e^{-\frac{\pi i}{4}}:-\epsilon \lt \alpha\leqslant\sqrt{2}\}. \end{align*}

    Theorem 3.1. The vector function \rho(k) has a decomposition

    \begin{equation*} \rho(k) = h_1(k)+h_2(k)+R(k), \quad k\in\mathbb{R}, \end{equation*}

    where R(k) is a piecewise-rational function and h_2(k) has an analytic continuation to L. Besides, they admit the following estimates

    \begin{gather} |e^{-2it\theta(k)}h_{1}(k)|\lesssim\frac{1}{(1+\vert k\vert^{2})t^{l}}, \quad k\in\mathbb{R}, \end{gather} (3.21)
    \begin{gather} \vert e^{-2it\theta(k)}h_{2}(k)\vert\lesssim\frac{1}{(1+\vert k\vert^{2})t^{l}}, \quad k\in L, \end{gather} (3.22)
    \begin{gather} \vert e^{-2it\theta(k)}R(k)\vert\lesssim e^{-16\epsilon^{2}k^{3}_{0}t}, \quad k\in L_{\epsilon}, \end{gather} (3.23)

    for an arbitrary positive integer l . Considering the Schwartz conjugate

    \begin{equation*} \rho^{\dagger}(k^{\ast}) = R^{\dagger}(k^{\ast})+h^{\dagger}_{1}(k^{\ast})+h^{\dagger}_{2}(k^{\ast}), \end{equation*}

    we can obtain the same estimate for e^{2it\theta(k)}h^{\dagger}_{1}(k^{\ast}) , e^{2it\theta(k)}h^{\dagger}_{2}(k^{\ast}) and e^{2it\theta(k)}R^{\dagger}(k^{\ast}) on \mathbb{R}\cup L^{\ast} .

    Proof. It follows from Proposition 4.2 in [18].
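
    In particular, the exponential decay in (3.23) can be read off from \theta: since \frac{x}{t} = -12k_0^{2}, a Taylor expansion at k_0 gives \theta(k) = 8k_0^{3}-12k_0(k-k_0)^{2}-4(k-k_0)^{3}, so on the branch k = k_0+\alpha k_0e^{-\frac{3\pi i}{4}} of L,

    \begin{equation*} \mathrm{Re}\big(i\theta(k)\big) = k_0^{3}\alpha^{2}\big(12-2\sqrt{2}\,\alpha\big)\geqslant8\alpha^{2}k_0^{3}, \qquad \alpha\leqslant\sqrt{2}, \end{equation*}

    and hence \vert e^{-2it\theta(k)}\vert\leqslant e^{-16tk_0^{3}\alpha^{2}}; the branch emanating from -k_0 is treated in the same way.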

    A direct calculation shows that b_\pm of (3.20) can be decomposed further

    \begin{align*} b_{+}& = b^{o}_{+}b^{a}_{+} = (I_{3 \times 3}+\omega^{o}_{+})(I_{3 \times 3}+\omega^{a}_{+})\notag\\ & = \left(\begin{array}{cc} I_{2 \times 2}&-e^{-2it\theta}[\mathrm{det}\delta_+(k)]\delta_+(k)h_{1}(k)\\ 0&1\\ \end{array}\right) \left(\begin{array}{cc} I_{2 \times 2}&-e^{-2it\theta}[\mathrm{det}\delta_+(k)]\delta_+(k)[h_{2}(k)+R(k)]\\ 0&1\\ \end{array}\right), \\ b_{-}& = b^{o}_{-}b^{a}_{-} = (I_{3 \times 3}-\omega^{o}_{-})(I_{3 \times 3}-\omega^{a}_{-})\notag\\ & = \left(\begin{array}{cc} I_{2 \times 2}&0\\ -\dfrac{e^{2it\theta}h^{\dagger}_{1}(k^{\ast})B_1\delta_-^{-1}(k)}{\mathrm{det}\delta_-(k)}&1\\ \end{array}\right) \left(\begin{array}{cc} I_{2 \times 2}&0\\ -\dfrac{e^{2it\theta}[h^{\dagger}_{2}(k^{\ast})+R^{\dagger}(k^{\ast})]B_1\delta_-^{-1}(k)}{\mathrm{det}\delta_-(k)}&1\\ \end{array}\right). \end{align*}

    Define the oriented contour \Sigma by \Sigma = L\cup L^\ast as in Figure 2. Let

    \begin{equation} M^\sharp(k;x, t) = \begin{cases} M^\Delta(k;x, t), & k\in\Omega_{1}\cup\Omega_{2}, \\ M^\Delta(k;x, t)(b_+^a)^{-1}, & k\in\Omega_{3}\cup\Omega_{4}\cup\Omega_{5}, \\ M^\Delta(k;x, t)(b_-^a)^{-1}, & k\in\Omega_{6}\cup\Omega_{7}\cup\Omega_{8}.\\ \end{cases} \end{equation} (3.24)
    Figure 2.  The contour \Sigma .

    Lemma 3.1. M^\sharp(k; x, t) is the solution of the Riemann-Hilbert problem

    \begin{equation} \begin{cases} M^\sharp_+(k;x, t) = M^\sharp_-(k;x, t)J^\sharp(k;x, t), & k\in\Sigma, \\ M^\sharp(k;x, t)\to I, & k\to\infty, \end{cases} \end{equation} (3.25)

    where the jump matrix J^\sharp(k; x, t) satisfies

    \begin{equation} J^\sharp(k;x, t) = (b_-^\sharp)^{-1}b_+^\sharp = \begin{cases} I^{-1}b_+^a, & k\in L, \\ (b_-^a)^{-1}I, & k\in L^\ast, \\ (b_-^o)^{-1} b_+^o, & k\in\mathbb{R}.\\ \end{cases} \end{equation} (3.26)

    Proof. We can construct the Riemann-Hilbert problem (3.25) based on the Riemann-Hilbert problem (3.19) and the decomposition of b_\pm . In the meantime, the asymptotics of M^\sharp(k; x, t) is derived from the convergence of b_\pm as k\to\infty . For fixed x and t , we pay attention to the domain \Omega_{3} . Noticing the boundedness of \delta(k) and \det\delta(k) in (3.16), we arrive at

    \begin{equation*} \vert e^{-2it\theta}[\mathrm{det}\delta(k)][h_{2}(k)+R(k)]\delta(k)\vert \lesssim \vert e^{-2it\theta}h_{2}(k)\vert+\vert e^{-2it\theta}R(k)\vert. \end{equation*}

    Consider the definition of R(k) in this domain,

    \begin{align*} |e^{-2it\theta}h_2(k)| \lesssim \dfrac{1}{\vert k+i\vert^{2}}, \quad \vert e^{-2it\theta}R(k)\vert \lesssim \dfrac{\vert\sum\limits_{i = 0}^m\mu_i(k-k_0)^i\vert}{\vert(k+i)^{m+5}\vert} \lesssim \dfrac{1}{\vert k+i\vert^{5}}, \end{align*}

    where m is a positive integer and \mu_i are the coefficients of the Taylor series around k_0. Combining with the boundedness of h_2(k) in Theorem 3.1, we obtain that M^\sharp(k; x, t)\to I when k\in\Omega_3 and k\to\infty. The other domains are similar.

    The above Riemann-Hilbert problem (3.25) can be solved as follows. Set

    \begin{equation*} \omega^\sharp_\pm = \pm(b^\sharp_\pm-I), \quad \omega^\sharp = \omega^\sharp_++\omega^\sharp_-. \end{equation*}

    Let

    \begin{equation} (C_\pm f)(k) = \int_\Sigma\frac{f(\xi)}{\xi-k_\pm} \, \frac{\mathrm{d}\xi}{2\pi i}, \quad f\in \mathscr{L}^2(\Sigma) \end{equation} (3.27)

    denote the Cauchy operator, where C_+f\ (C_-f) denotes the left (right) boundary value for the oriented contour \Sigma in Figure 2. Define the operator C_{\omega^\sharp}:\mathscr{L}^2(\Sigma)+\mathscr{L}^\infty(\Sigma)\to\mathscr{L}^2(\Sigma) by

    \begin{equation} C_{\omega^{\sharp}}f = C_+\left(f\omega^\sharp_-\right)+C_-\left(f\omega^\sharp_+\right) \end{equation} (3.28)

    for the 3 \times 3 matrix function f .

    Lemma 3.2 (Beals-Coifman). If \mu^\sharp(k; x, t)\in \mathscr{L}^2(\Sigma)+\mathscr{L}^\infty(\Sigma) is the solution of the singular integral equation

    \begin{equation*} \mu^\sharp = I+C_{\omega^\sharp}\mu^\sharp, \end{equation*}

    then

    \begin{equation*} M^\sharp(k;x, t) = I+\int_\Sigma\dfrac{\mu^\sharp(\xi;x, t)\omega^\sharp(\xi;x, t)}{\xi-k} \, \frac{\mathrm{d}\xi}{2\pi i} \end{equation*}

    is the solution of the Riemann-Hilbert problem (3.25).

    Proof. See [18, p. 322] and [40].

    Theorem 3.2. The expression of the solution q(x, t) can be written as

    \begin{equation} q(x, t) = (u(x, t), u^\ast(x, t))^T = \frac{1}{\pi}\left(\int_\Sigma\left((1-C_{\omega^\sharp})^{-1}I\right)(\xi)\omega^\sharp(\xi)\, \mathrm{d}\xi\right)_{12}. \end{equation} (3.29)

    Proof. From (2.21), (3.24) and Lemma 3.2, the solution q(x, t) of the generalized Sasa-Satsuma equation is expressed by

    \begin{align} \begin{split} q(x, t)& = \lim\limits_{k\to\infty}-2i\left[k(M^\sharp(k;x, t))_{12}\right]\\ & = \frac{1}{\pi}\left(\int_\Sigma\mu^\sharp(\xi;x, t)\omega^\sharp(\xi)\, \mathrm{d}\xi\right)_{12}\\ & = \frac{1}{\pi}\left(\int_\Sigma((1-C_{\omega^\sharp})^{-1}I)(\xi)\omega^\sharp(\xi)\, \mathrm{d}\xi\right)_{12}. \end{split} \end{align}

    Set \Sigma^\prime = \Sigma\backslash(\mathbb{R}\cup L_\epsilon\cup L_\epsilon^\ast) oriented as in Figure 3. We will convert the Riemann-Hilbert problem on the contour \Sigma to a Riemann-Hilbert problem on the contour \Sigma^\prime and estimate the errors between the two Riemann-Hilbert problems. Let \omega^\sharp = \omega^e+\omega^\prime = \omega^a+\omega^b+\omega^c+\omega^\prime , where \omega^a = \omega^\sharp |_\mathbb{R} is supported on \mathbb{R} and is composed of terms of type h_1(k) and h_1^\dagger(k^\ast) ; \omega^b is supported on L\cup L^\ast and is composed of contributions to \omega^\sharp from terms of type h_2(k) and h_2^\dagger(k^\ast) ; \omega^c is supported on L_\epsilon\cup L_\epsilon^\ast and is composed of contributions to \omega^\sharp from terms of type R(k) and R^\dagger(k^\ast) .

    Figure 3.  The contour \Sigma^\prime .

    Lemma 3.3. For arbitrary positive integer l , as t\to\infty ,

    \begin{gather} \Vert\omega^a\Vert_{\mathscr{L}^1(\mathbb{R})\cap\mathscr{L}^2(\mathbb{R})\cap\mathscr{L}^\infty(\mathbb{R})}\lesssim t^{-l}, \end{gather} (3.30)
    \begin{gather} \Vert\omega^b\Vert_{\mathscr{L}^1(L\cup L^\ast)\cap\mathscr{L}^2(L\cup L^\ast)\cap\mathscr{L}^\infty(L\cup L^\ast)}\lesssim t^{-l}, \end{gather} (3.31)
    \begin{gather} \Vert\omega^c\Vert_{\mathscr{L}^1(L_\epsilon\cup L_\epsilon^\ast)\cap\mathscr{L}^2(L_\epsilon\cup L_\epsilon^\ast)\cap\mathscr{L}^\infty(L_\epsilon\cup L_\epsilon^\ast)}\lesssim e^{-16\epsilon^{2}k^{3}_{0}t}, \end{gather} (3.32)
    \begin{gather} \Vert\omega^\prime\Vert_{\mathscr{L}^2(\Sigma)}\lesssim (tk_0^3)^{-\frac{1}{4}}, \quad \Vert\omega^\prime\Vert_{\mathscr{L}^1(\Sigma)}\lesssim (tk_0^3)^{-\frac{1}{2}} \end{gather} (3.33)

    Proof. The proof of the estimates (3.30), (3.31), (3.32) follows from Theorem 3.1. Afterwards, we consider the definition of R(k) on the contour \{k = k_0+\alpha k_0e^{\frac{-3\pi i}{4}}\vert -\infty < \alpha \leqslant -\epsilon\} ,

    \begin{equation*} |R(k)|\lesssim (1+|k|^5)^{-1}. \end{equation*}

    Resorting to \mathrm{Re}(i\theta)\geqslant8\alpha^2 k_0^3 and the boundedness of \delta(k) and \det\delta(k) in (3.16), we can obtain

    \begin{equation*} \vert e^{-2it\theta}[\det\delta(k)]R(k)\delta(k)\vert \lesssim e^{-16tk_0^3\alpha^2}(1+|k|^5)^{-1}. \end{equation*}

    Then we obtain (3.33) by simple computations.

    Lemma 3.4. As t\to\infty , (1-C_{\omega^\prime})^{-1}:\mathscr{L}^2(\Sigma)\to\mathscr{L}^2(\Sigma) exists and is uniformly bounded:

    \begin{equation*} \Vert(1-C_{\omega^\prime})^{-1}\Vert_{\mathscr{L}^2(\Sigma)}\lesssim1. \end{equation*}

    Furthermore, \Vert(1-C_{\omega^\sharp})^{-1}\Vert_{\mathscr{L}^2(\Sigma)}\lesssim1 .

    Proof. It follows from Proposition 2.23 and Corollary 2.25 in [18].

    Lemma 3.5. As t\to\infty ,

    \begin{equation} \int_\Sigma((1-C_{\omega^\sharp})^{-1}I)(\xi)\omega^\sharp(\xi) \, \mathrm{d}\xi = \int_\Sigma((1-C_{\omega^\prime})^{-1}I)(\xi)\omega^\prime(\xi) \, \mathrm{d}\xi+O((tk_0^3)^{-l}). \end{equation} (3.34)

    Proof. A simple computation shows that

    \begin{align} \begin{split} \left((1-C_{\omega^\sharp}\right)^{-1}I)\omega^\sharp = &\left((1-C_{\omega^\prime})^{-1}I\right)\omega^\prime+\omega^e+\left((1-C_{\omega^\prime})^{-1}(C_{\omega^e}I)\right)\omega^\sharp\\ &+((1-C_{\omega^\prime})^{-1}(C_{\omega^\prime}I))\omega^e +\left((1-C_{\omega^\prime})^{-1}C_{\omega^e}(1-C_{\omega^\sharp})\right)(C_{\omega^\sharp}I)\omega^\sharp. \end{split} \end{align} (3.35)

    After a series of tedious computations and utilizing the consequences of Lemmas 3.3 and 3.4, we arrive at

    \begin{align*} &\Vert\omega^e\Vert_{\mathscr{L}^1(\Sigma)} \leqslant \Vert\omega^a\Vert_{\mathscr{L}^1(\mathbb{R})}+\Vert\omega^b\Vert_{\mathscr{L}^1(L\cup L^\ast)}+\Vert\omega^c\Vert_{\mathscr{L}^1(L_\epsilon\cup L_\epsilon^\ast)} \lesssim (tk_0^3)^{-l}, \\ &\begin{aligned} \Vert\left((1-C_{\omega^\prime})^{-1}(C_{\omega^e}I)\right)\omega^\sharp\Vert_{\mathscr{L}^1(\Sigma)} &\leqslant \Vert(1-C_{\omega^\prime})^{-1}\Vert_{\mathscr{L}^2(\Sigma)}\Vert C_{\omega^e}I\Vert_{\mathscr{L}^2(\Sigma)}\Vert\omega^\sharp\Vert_{\mathscr{L}^2(\Sigma)}\\ &\lesssim \Vert\omega^e\Vert_{\mathscr{L}^2(\Sigma)}\Vert\omega^\sharp\Vert_{\mathscr{L}^2(\Sigma)} \lesssim (tk_0^3)^{-l-\frac{1}{4}}, \end{aligned}\\ &\begin{aligned} \Vert\left((1-C_{\omega^\prime})^{-1}(C_{\omega^\prime}I)\right)\omega^e\Vert_{\mathscr{L}^1(\Sigma)} &\leqslant \Vert(1-C_{\omega^\prime}^{-1})\Vert_{\mathscr{L}^2(\Sigma)}\Vert C_{\omega^\prime}I\Vert_{\mathscr{L}^2(\Sigma)}\Vert\omega^e\Vert_{\mathscr{L}^2(\Sigma)}\\ &\lesssim \Vert\omega^\prime\Vert_{\mathscr{L}^2(\Sigma)}\Vert\omega^e\Vert_{\mathscr{L}^2(\Sigma)} \lesssim (tk_0^3)^{-l-\frac{1}{4}}, \end{aligned}\\ &\Vert\left((1-C_{\omega^\prime})^{-1}C_{\omega^e}(1-C_{\omega^\sharp})\right)(C_{\omega^\sharp}I)\omega^\sharp\Vert_{\mathscr{L}^1(\Sigma)}\\ &\leqslant \Vert(1-C_{\omega^\prime})^{-1}\Vert_{\mathscr{L}^2(\Sigma)}\Vert(1-C_{\omega^\sharp})^{-1}\Vert_{\mathscr{L}^2(\Sigma)}\Vert C_{\omega^e}\Vert_{\mathscr{L}^2(\Sigma)}\Vert C_{\omega^\sharp}I\Vert_{\mathscr{L}^2(\Sigma)}\Vert\omega^\sharp\Vert_{\mathscr{L}^2(\Sigma)}\\ &\lesssim \Vert\omega^e\Vert_{\mathscr{L}^\infty(\Sigma)}\Vert\omega^\sharp\Vert^2_{\mathscr{L}^2(\Sigma)} \lesssim (tk_0^3)^{-l-\frac{1}{2}}. \end{align*}

    Then the proof is accomplished by substituting the estimates above into (3.35).

    Notice that \omega^\prime(k) = 0 when k\in\Sigma\backslash\Sigma^\prime , let C_{\omega^\prime}|_{\mathscr{L}^2(\Sigma^\prime)} denote the restriction of C_{\omega^\prime} to \mathscr{L}^2(\Sigma^\prime) . For simplicity, we write C_{\omega^\prime}|_{\mathscr{L}^2(\Sigma^\prime)} as C_{\omega^\prime} . Then

    \begin{equation*} \int_\Sigma((1-C_{\omega^\prime})^{-1}I)(\xi)\omega^\prime(\xi) \, \mathrm{d}\xi = \int_{\Sigma^\prime}((1-C_{\omega^\prime})^{-1}I)(\xi)\omega^\prime(\xi) \, \mathrm{d}\xi. \end{equation*}

    Lemma 3.6. As t\to\infty ,

    \begin{equation} q(x, t) = (u(x, t), u^\ast(x, t))^T = \frac{1}{\pi}\left(\int_{\Sigma^\prime}((1-C_{\omega^\prime})^{-1}I)(\xi)\omega^\prime(\xi)\, \mathrm{d}\xi\right)_{12}+O((tk_0^3)^{-l}). \end{equation} (3.36)

    Proof. From (3.29) and (3.34), we can obtain the result directly.

    Let L^\prime = L\backslash L_\epsilon and \mu^\prime = (1-C_{\omega^\prime})^{-1}I . Then

    \begin{equation*} M^\prime(k;x, t) = I+\int_{\Sigma^\prime}\frac{\mu^\prime(\xi;x, t)\omega^\prime(\xi;x, t)}{\xi-k} \, \frac{\mathrm{d}\xi}{2\pi i} \end{equation*}

    solves the Riemann-Hilbert problem

    \begin{equation*} \begin{cases} M_+^\prime(k;x, t) = M_-^\prime(k;x, t)J^\prime(k;x, t), & k\in\Sigma^\prime, \\ M^\prime(k;x, t)\to I, & k\to\infty, \end{cases} \end{equation*}

    where

    \begin{gather*} J^\prime = (b_-^\prime)^{-1}b_+^\prime = (I-\omega_-^\prime)^{-1}(I+\omega_+^\prime), \\ \omega^\prime = \omega_+^\prime+\omega_-^\prime, \\ b_+^\prime = \left( \begin{array}{cc} I & -e^{-2it\theta}[\det\delta(k)]\delta(k)R(k)\\ 0 & 1 \\ \end{array} \right), \quad b_-^\prime = I, \quad \mathrm{on}\ L^\prime, \\ b_+^\prime = I, \quad b_-^\prime = \left( \begin{array}{cc} I & 0\\ -\dfrac{e^{2it\theta}R^\dagger(k^\ast)B_1\delta^{-1}(k)}{\det\delta(k)} & 1\\ \end{array} \right), \quad \mathrm{on}\ ({L^\prime})^\ast. \end{gather*}

    Let the contour \Sigma^{\prime} = \Sigma^{\prime}_{A}\cup\Sigma^{\prime}_{B} and \omega^{\prime}_{\pm} = \omega^{\prime}_{A\pm}+\omega^{\prime}_{B\pm} , where

    \begin{equation} \omega^{\prime}_{A\pm}(k) = \begin{cases} \omega^{\prime}_{\pm}(k), & k\in\Sigma^{\prime}_{A}, \\ 0, & k\in\Sigma^{\prime}_{B}, \\ \end{cases}\quad \omega^{\prime}_{B\pm}(k) = \begin{cases} 0, & k\in\Sigma^{\prime}_{A}, \\ \omega^{\prime}_{\pm}(k), & k\in\Sigma^{\prime}_{B}.\\ \end{cases} \end{equation} (3.37)

    Define the operators C_{\omega^{\prime}_{A}} and C_{\omega^{\prime}_{B}} : \mathscr{L}^{2}(\Sigma^{\prime})+\mathscr{L}^{\infty}(\Sigma^{\prime})\rightarrow\mathscr{L}^{2}(\Sigma^{\prime}) as in definition (3.28).

    Lemma 3.7.

    \begin{equation*} ||C_{\omega^{\prime}_{B}}C_{\omega^{\prime}_{A}}||_{\mathscr{L}^{2}(\Sigma^{\prime})} = || C_{\omega^{\prime}_{A}}C_{\omega^{\prime}_{B}}||_{\mathscr{L}^{2}(\Sigma^{\prime})}\lesssim_{k_{0}}(tk_0^3)^{-\frac{1}{2}}, \end{equation*}
    \begin{equation*} ||C_{\omega^{\prime}_{B}}C_{\omega^{\prime}_{A}}||_{\mathscr{L}^{\infty}(\Sigma^{\prime})\rightarrow\mathscr{L}^{2}(\Sigma^{\prime})}, || C_{\omega^{\prime}_{A}}C_{\omega^{\prime}_{B}}||_{\mathscr{L}^{\infty}(\Sigma^{\prime})\rightarrow\mathscr{L}^{2}(\Sigma^{\prime})}\lesssim_{k_{0}}(tk_0^3)^{-\frac{3}{4}}. \end{equation*}

    Proof. See Lemma 3.5 in [18].

    Lemma 3.8. As t\rightarrow\infty ,

    \begin{align} \begin{split} \int_{\Sigma^{\prime}}((1-C_{\omega^{\prime}})^{-1}I)(\xi)\omega^{\prime}(\xi)\, \mathrm{d}\xi = &\int_{\Sigma^{\prime}_{A}}((1-C_{\omega^{\prime}_{A}})^{-1}I)(\xi)\omega^{\prime}_{A}(\xi)\, \mathrm{d}\xi\\ &+\int_{\Sigma^{\prime}_{B}}((1-C_{\omega^{\prime}_{B}})^{-1}I)(\xi)\omega^{\prime}_{B}(\xi)\, \mathrm{d}\xi+O(\frac{c(k_{0})}{t}). \end{split} \end{align} (3.38)

    Proof. From identity

    \begin{align*} (1-C_{\omega^{\prime}_{A}}-C_{\omega^{\prime}_{B}})(1+C_{\omega^{\prime}_{A}}(1-C_{\omega^{\prime}_{A}})^{-1}+C_{\omega^{\prime}_{B}}(1-C_{\omega^{\prime}_{B}})^{-1})\\ = 1-C_{\omega^{\prime}_{B}}C_{\omega^{\prime}_{A}}(1-C_{\omega^{\prime}_{A}})^{-1}-C_{\omega^{\prime}_{A}}C_{\omega^{\prime}_{B}}(1-C_{\omega^{\prime}_{B}})^{-1}, \end{align*}

    we have

    \begin{align*} (1-C_{\omega^{\prime}})^{-1} = &1+C_{\omega^{\prime}_{A}}(1-C_{\omega^{\prime}_{A}})^{-1}+C_{\omega^{\prime}_{B}}(1-C_{\omega^{\prime}_{B}})^{-1}\\ &+[1+C_{\omega^{\prime}_{A}}(1-C_{\omega^{\prime}_{A}})^{-1}+C_{\omega^{\prime}_{B}}(1-C_{\omega^{\prime}_{B}})^{-1}][1-C_{\omega^{\prime}_{B}}C_{\omega^{\prime}_{A}}(1-C_{\omega^{\prime}_{A}})^{-1}\\ &-C_{\omega^{\prime}_{A}}C_{\omega^{\prime}_{B}}(1-C_{\omega^{\prime}_{B}})^{-1}]^{-1}[C_{\omega^{\prime}_{B}}C_{\omega^{\prime}_{A}}(1-C_{\omega^{\prime}_{A}})^{-1}+C_{\omega^{\prime}_{A}}C_{\omega^{\prime}_{B}}(1-C_{\omega^{\prime}_{B}})^{-1}]. \end{align*}

    Based on Lemmas 3.7 and 3.4, we arrive at (3.38).

    For the sake of convenience, we write the restriction C_{\omega^{\prime}_{A}}\mid_{\mathscr{L}^{2}(\Sigma^{\prime}_{A})} as C_{\omega^{\prime}_{A}}, and similarly for C_{\omega^{\prime}_{B}}. From the consequences of Lemma 3.6 and Lemma 3.8, as t\rightarrow\infty, we have

    \begin{equation} \begin{split} q(x, t) = &-\left(\int_{\Sigma^{\prime}_{A}}((1-C_{\omega^{\prime}_{A}})^{-1}I)(\xi)\omega^{\prime}_{A}(\xi)\, \frac{\mathrm{d}\xi}{\pi}\right)_{12}\\ &-\left(\int_{\Sigma^{\prime}_{B}}((1-C_{\omega^{\prime}_{B}})^{-1}I)(\xi)\omega^{\prime}_{B}(\xi)\, \frac{\mathrm{d}\xi}{\pi}\right)_{12}+O(\frac{c(k_{0})}{t}). \end{split} \end{equation} (3.39)

    Extend the contours \Sigma^{\prime}_{A} and \Sigma^{\prime}_{B} to the contours

    \begin{gather} \hat{\Sigma}^{\prime}_{A} = \left\{k = -k_{0}+k_{0}\alpha e^{\pm\frac{\pi i}{4}}:\alpha\in\mathbb{R}\right\}, \end{gather} (3.40)
    \begin{gather} \hat{\Sigma}^{\prime}_{B} = \left\{k = k_{0}+k_{0}\alpha e^{\pm\frac{3\pi i}{4}}:\alpha\in\mathbb{R}\right\}, \end{gather} (3.41)

    respectively. We introduce \hat{\omega}^{\prime}_{A} and \hat{\omega}^{\prime}_{B} on \hat{\Sigma}^{\prime}_{A} and \hat{\Sigma}^{\prime}_{B} , respectively, by

    \begin{gather} \hat{\omega}^{\prime}_{A\pm} = \begin{cases} \omega^{\prime}_{A\pm}(k), & k\in\Sigma^{\prime}_{A}, \\ 0, & k\in\hat{\Sigma}^{\prime}_{A}\backslash\Sigma^{\prime}_{A}, \end{cases}\quad \hat{\omega}^{\prime}_{B\pm} = \begin{cases} \omega^{\prime}_{B\pm}(k), & k\in\Sigma^{\prime}_{B}, \\ 0, & k\in\hat{\Sigma}^{\prime}_{B}\backslash\Sigma^{\prime}_{B}. \end{cases} \end{gather} (3.42)

    Let \Sigma_{A} and \Sigma_{B} denote the contours \{k = k_{0}\alpha e^{\pm\frac{\pi i}{4}}:\alpha\in\mathbb{R}\} oriented inward as in \Sigma^{\prime}_{A} , \hat{\Sigma}^{\prime}_{A} , and outward as in \Sigma^{\prime}_{B} , \hat{\Sigma}^{\prime}_{B} , respectively. Define the scaling operators

    \begin{gather} \begin{split} N_{A}:\ &\mathscr{L}^{2}(\hat{\Sigma}^{\prime}_{A})\rightarrow\mathscr{L}^{2}(\Sigma_{A}), \\ &f(k)\rightarrow(N_{A}f)(k) = f(\frac{k}{\sqrt{48tk_{0}}}-k_{0}), \end{split} \end{gather} (3.43)
    \begin{gather} \begin{split} N_{B}:\ &\mathscr{L}^{2}(\hat{\Sigma}^{\prime}_{B})\rightarrow\mathscr{L}^{2}(\Sigma_{B}), \\ &f(k)\rightarrow(N_{B}f)(k) = f(\frac{k}{\sqrt{48tk_{0}}}+k_{0}), \end{split} \end{gather} (3.44)

    and set

    \begin{equation*} \omega_{A} = N_{A}\hat{\omega}^{\prime}_{A}, \quad \omega_{B} = N_{B}\hat{\omega}^{\prime}_{B}. \end{equation*}

    A simple change-of-variable argument shows that

    \begin{equation*} \label{Cwprime definition} C_{\hat{\omega}^{\prime}_{A}} = N^{-1}_{A}C_{\omega_{A}}N_{A}, \quad C_{\hat{\omega}^{\prime}_{B}} = N^{-1}_{B}C_{\omega_{B}}N_{B}, \end{equation*}

    where the operator C_{\omega_{A}}\ (C_{\omega_{B}}) is a bounded map from \mathscr{L}^{2}(\Sigma_{A})\ (\mathscr{L}^{2}(\Sigma_{B})) into \mathscr{L}^{2}(\Sigma_{A})\ (\mathscr{L}^{2}(\Sigma_{B})) . On the part

    \begin{equation*} L_{A} = \left\{k = \alpha k_{0}\sqrt{48tk_{0}}e^{\frac{3\pi i}{4}}:-\epsilon \lt \alpha \lt +\infty\right\} \end{equation*}

    of \Sigma_{A} , we have

    \begin{equation*} \omega_{A} = \omega_{A+} = \left(\begin{matrix}0&(N_{A}s_{1})(k)\cr 0&0\end{matrix}\right), \end{equation*}

    on L^{\ast}_{A} we have

    \begin{equation*} \omega_{A} = \omega_{A-} = \left(\begin{matrix}0&0\cr (N_{A}s_{2})(k)&0\end{matrix}\right), \end{equation*}

    where

    \begin{equation*} s_{1}(k) = -e^{-2it\theta(k)}[\det\delta(k)]\delta(k)R(k), \quad s_{2}(k) = \frac{e^{2it\theta}R^{\dagger}(k^{\ast})B_1\delta^{-1}(k)}{\det\delta(k)}. \end{equation*}

    Lemma 3.9. As t\rightarrow\infty and k\in L_{A}, we have

    \begin{equation} \left|(N_{A}\tilde{\delta})(k)\right|\lesssim t^{-l}, \end{equation} (3.45)

    where \tilde{\delta}(k) = e^{-2it\theta(k)}[\delta(k)R(k)-(\mathrm{det}\delta(k))R(k)] .

    Proof. It follows from (3.1) and (3.2) that \tilde{\delta} satisfies the following Riemann-Hilbert problem:

    \begin{equation} \begin{cases} \tilde{\delta}_{+}(k) = \tilde{\delta}_{-}(k)(1-\gamma^\dagger(k^\ast)B_1\gamma(k))+e^{-2it\theta}f(k), & k\in(-k_{0}, k_{0}), \\ \tilde{\delta}(k)\rightarrow0, & k\rightarrow\infty. \end{cases} \end{equation} (3.46)

    where f(k) = \delta_{-}(k)[\gamma^\dagger(k^\ast)B_1\gamma(k)I-\gamma(k)\gamma^\dagger(k^\ast)B_1]R(k) . The solution for the above Riemann-Hilbert problem can be expressed by

    \begin{gather*} \tilde{\delta}(k) = X(k)\int_{k_{0}}^{-k_{0}}\frac{e^{-2it\theta(\xi)}f(\xi)}{X_{+}(\xi)(\xi-k)}\, \frac{\mathrm{d}\xi}{2\pi i}, \\ X(k) = \mathrm{exp}\left\{{\frac{1}{2\pi i}\int_{k_{0}}^{-k_{0}}\frac{\log(1-\left|\gamma(\xi)\right|^{2})}{\xi-k}}\, \mathrm{d}\xi\right\}. \end{gather*}

    Observing that

    \begin{equation*} \begin{split} (\gamma^\dagger(k^\ast)B_1\gamma(k)I-\gamma(k)\gamma^\dagger(k^\ast)B_1)R(k)& = (\gamma^\dagger(k^\ast)B_1\gamma(k)I-\gamma(k)\gamma^\dagger(k^\ast)B_1)(R(k)-\rho(k))\\ & = \mathrm{adj}[B_1]\mathrm{adj}[\gamma(k)\gamma^\dagger(k^\ast)](h_{1}(k)+h_{2}(k)), \end{split} \end{equation*}

    we obtain f(k) = O((k^{2}-k^{2}_{0})^{l}). Similar to Lemma 3.1, f(k) can be decomposed into two parts: f(k) = f_{1}(k)+f_{2}(k), and

    \begin{equation} \left|e^{-2it\theta(k)}f_{1}(k)\right|\lesssim\frac{1}{(1+\left|k\right|^{2})t^{l}}, \quad k\in\mathbb{R}, \end{equation} (3.47)
    \begin{equation} \left|e^{-2it\theta(k)}f_{2}(k)\right|\lesssim\frac{1}{(1+\left|k\right|^{2})t^{l}}, \quad k\in L_{t}, \end{equation} (3.48)

    where f_{2}(k) has an analytic continuation to L_{t} , l is a positive integer and l\geqslant2 ,

    \begin{equation*} \begin{split} L_{t} = &\left\{k = k_{0}+k_{0}\alpha e^{-\frac{3\pi i}{4}}:0\leqslant\alpha\leqslant\sqrt{2}(1-\frac{1}{2t})\right\}\\ &\cup\left\{k = \frac{k_{0}}{t}-k_{0}+k_{0}\alpha e^{\frac{-\pi i}{4}}:0\leqslant\alpha\leqslant\sqrt{2}(1-\frac{1}{2t})\right\}, \end{split} \end{equation*}

    (see Figure 5).

    Figure 4.  The contour \Sigma_A(\Sigma_B) .
    Figure 5.  The contour L_t .

    As k\in L_{A} , we obtain

    \begin{equation*} \begin{split} (N_{A}\tilde{\delta})(k) = &X(\frac{k}{\sqrt{48tk_{0}}}-k_{0})\int^{-k_{0}}_{\frac{k_{0}}{t}-k_{0}}\frac{e^{-2it\theta(\xi)}f(\xi)}{X_{+}(\xi)(\xi+k_{0}-\frac{k}{\sqrt{48tk_{0}}})}\, \frac{\mathrm{d}\xi}{2\pi i}\\ &+X(\frac{k}{\sqrt{48tk_{0}}}-k_{0})\int^{\frac{k_{0}}{t}-k_{0}}_{k_{0}}\frac{e^{-2it\theta(\xi)}f_{1}(\xi)}{X_{+}(\xi)(\xi+k_{0}-\frac{k}{\sqrt{48tk_{0}}})}\, \frac{\mathrm{d}\xi}{2\pi i}\\ &+X(\frac{k}{\sqrt{48tk_{0}}}-k_{0})\int^{\frac{k_{0}}{t}-k_{0}}_{k_{0}}\frac{e^{-2it\theta(\xi)}f_{2}(\xi)}{X_{+}(\xi)(\xi+k_{0}-\frac{k}{\sqrt{48tk_{0}}})}\, \frac{\mathrm{d}\xi}{2\pi i}\\ = &I_{1}+I_{2}+I_{3}. \end{split} \end{equation*}
    \begin{equation*} \left|I_{1}\right|\lesssim\int_{-k_{0}}^{\frac{k_{0}}{t}-k_{0}}\frac{\left|f(\xi)\right|}{|\xi+k_{0}-\frac{k}{\sqrt{48tk_{0}}}|}\, \mathrm{d}\xi\lesssim t^{-l-1}, \\ \end{equation*}
    \begin{equation*} \left|I_{2}\right|\lesssim\int_{\frac{k_{0}}{t}-k_{0}}^{k_{0}}\frac{\left|e^{-2it\theta(\xi)}f_{1}(\xi)\right|}{|\xi+k_{0}-\frac{k}{\sqrt{48tk_{0}}}|}\, \mathrm{d}\xi\leqslant t^{-l}\frac{\sqrt{2}t}{k_{0}}(2k_{0}-\frac{k_{0}}{t})\lesssim t^{-l+1}. \end{equation*}

    As a consequence of Cauchy's Theorem, we can evaluate I_{3} along the contour L_{t} instead of the interval (\frac{k_{0}}{t}-k_{0}, k_{0}) and obtain \left|I_{3}\right|\lesssim t^{-l+1}. Therefore, (3.45) holds.

    Corollary 3.1. As t\rightarrow\infty and k\in L_{A}^\ast, we have

    \begin{equation} \left|(N_{A}\hat{\delta})(k)\right|\lesssim t^{-l}, \quad t\rightarrow\infty, \quad k\in L^{\ast}_{A}, \end{equation} (3.49)

    where \hat{\delta}(k) = e^{2it\theta(k)}R^{\dagger}(k^{\ast})B_1[\delta^{-1}(k)-(\mathrm{det}\delta(k))^{-1}I] .

    Let J^{A^0} = (I-\omega_{A^0-})^{-1}(I+\omega_{A^0+}) , where

    \begin{gather} \omega_{A^0} = \omega_{A^0+} = \begin{cases} \left(\begin{array}{cc} 0 & -(\delta_A^0)^{2}(-k)^{2i\nu}e^{-\frac{ik^2}{2}}\frac{\gamma(-k_0)}{1-\gamma^\dagger(-k_0)B_1\gamma(-k_0)}\\ 0 & 0\\ \end{array}\right), & k\in\Sigma_A^1, \\ \left(\begin{array}{cc} 0 & (\delta_A^0)^{2}(-k)^{2i\nu}e^{-\frac{ik^2}{2}}\gamma(-k_0)\\ 0 & 0\\ \end{array}\right), & k\in\Sigma_A^3, \\ \end{cases} \end{gather} (3.50)
    \begin{gather} \delta_A^0 = (196tk_0^3)^{-\frac{i\nu}{2}}e^{8itk_0^3}e^{\chi(-k_0)} \end{gather} (3.51)
    \begin{gather} \omega_{A^0} = \omega_{A^0-} = \begin{cases} \left(\begin{array}{cc} 0 & 0\\ (\delta_A^0)^{-2}(-k)^{-2i\nu}e^{\frac{ik^2}{2}}\frac{\gamma^\dagger(-k_0)B_1}{1-\gamma^\dagger(-k_0)B_1\gamma(-k_0)} & 0\\ \end{array}\right), & k\in\Sigma_A^2, \\ \left(\begin{array}{cc} 0 & 0\\ -(\delta_A^0)^{-2}(-k)^{-2i\nu}e^{\frac{ik^2}{2}}\gamma^\dagger(-k_0)B_1 & 0\\ \end{array}\right), & k\in\Sigma_A^4.\\ \end{cases} \end{gather} (3.52)

    It follows from (3.78) in [18] that

    \begin{equation} \Vert\omega_A-\omega_{A^0}\Vert_{\mathscr{L}^1(\Sigma_A)\cap\mathscr{L}^2(\Sigma_A)\cap\mathscr{L}^\infty(\Sigma_A)} \lesssim_{k_0} \frac{\log{t}}{\sqrt{tk_0^3}}. \end{equation} (3.53)

    There are similar consequences for k\in\Sigma_B . Let J^{B^0} = (I-\omega_{B^0-})^{-1}(I+\omega_{B^0+}) , where

    \begin{gather} \omega_{B^0} = \omega_{B^0+} = \begin{cases} \left(\begin{array}{cc} 0 & (\delta_B^0)^{2}k^{-2i\nu}e^{\frac{ik^2}{2}}\gamma(k_0)\\ 0 & 0\\ \end{array}\right), & k\in\Sigma_B^2, \\ \left(\begin{array}{cc} 0 & -(\delta_B^0)^{2}k^{-2i\nu}e^{\frac{ik^2}{2}}\frac{\gamma(k_0)}{1-\gamma^\dagger(k_0)B_1\gamma(k_0)}\\ 0 & 0\\ \end{array}\right), & k\in\Sigma_B^4, \\ \end{cases} \end{gather} (3.54)
    \begin{gather} \delta_B^0 = (196tk_0^3)^{\frac{i\nu}{2}}e^{-8itk_0^3}e^{\chi(k_0)} \end{gather} (3.55)
    \begin{gather} \omega_{B^0} = \omega_{B^0-} = \begin{cases} \left(\begin{array}{cc} 0 & 0\\ -(\delta_B^0)^{-2}k^{2i\nu}e^{-\frac{ik^2}{2}}\gamma^\dagger(k_0)B_1 & 0\\ \end{array}\right), & k\in\Sigma_B^1, \\ \left(\begin{array}{cc} 0 & 0\\ (\delta_B^0)^{-2}k^{2i\nu}e^{-\frac{ik^2}{2}}\frac{\gamma^\dagger(k_0)B_1}{1-\gamma^\dagger(k_0)B_1\gamma(k_0)} & 0\\ \end{array}\right), & k\in\Sigma_B^3.\\ \end{cases} \end{gather} (3.56)

    Theorem 3.3. As t\to\infty ,

    \begin{align} \begin{split} q(x, t) = &(u(x, t), u^\ast(x, t))^T\\ = &\frac{1}{\pi\sqrt{48tk_0}}\left(\int_{\Sigma_A}\left((1-C_{\omega_{A^0}})^{-1}I\right)(\xi)\omega_{A^0}(\xi)\, \mathrm{d}\xi\right)_{12}\\ &+\frac{1}{\pi\sqrt{48tk_0}}\left(\int_{\Sigma_B}\left((1-C_{\omega_{B^0}})^{-1}I\right)(\xi)\omega_{B^0}(\xi)\, \mathrm{d}\xi\right)_{12}+O\left(\frac{c(k_0)\log{t}}{t}\right). \end{split} \end{align} (3.57)

    Proof. Notice that

    \begin{align*} \left((1-C_{\omega_A})^{-1}I\right)\omega_A = &\left((1-C_{\omega_{A^0}})^{-1}I\right)\omega_{A^0}+\left((1-C_{\omega_A})^{-1}I\right)(\omega_A-\omega_{A^0})\notag\\ &+(1-C_{\omega_A})^{-1}(C_{\omega_A}-C_{\omega_{A^0}})(1-C_{\omega_{A^0}})I\omega_{A^0}. \end{align*}

    Utilizing the triangle inequality and the boundedness in (3.53), we have

    \begin{align*} \int_{\Sigma_A}\left((1-C_{\omega_A})^{-1}I\right)(\xi)\omega_A(\xi)\, \mathrm{d}\xi = \int_{\Sigma_A}\left((1-C_{\omega_{A^0}})^{-1}I\right)(\xi)\omega_{A^0}(\xi)\, \mathrm{d}\xi+O\left(\frac{\log{t}}{\sqrt{t}}\right). \end{align*}

    According to (3.5) and a simple change-of-variable argument, we have

    \begin{align*} \begin{split} &\frac{1}{\pi}\left(\int_{\Sigma^\prime}\left((1-C_{\omega_A^\prime})^{-1}I\right)(\xi)\omega_A^\prime(\xi)\, \mathrm{d}\xi\right)_{12}\\ = &\frac{1}{\pi}\left(\int_{\hat\Sigma_A^\prime}\left(N_A^{-1}(1-C_{\omega_A})^{-1}I\right)(\xi)\omega_A^\prime(\xi)\, \mathrm{d}\xi\right)_{12}\\ = &\frac{1}{\pi}\left(\int_{\hat\Sigma_A^\prime}\left((1-C_{\omega_A})^{-1}I\right)\left((\xi+k_0)\sqrt{48tk_0}\right)(N_A\omega_A^\prime)\left((\xi+k_0)\sqrt{48tk_0}\right)\, \mathrm{d}\xi\right)_{12}\\ = &\frac{1}{\pi\sqrt{48tk_0}}\left(\int_{\Sigma_A}\left((1-C_{\omega_A})^{-1}I\right)(\xi)\omega_A(\xi)\, \mathrm{d}\xi\right)_{12}\\ = &\frac{1}{\pi\sqrt{48tk_0}}\left(\int_{\Sigma_A}\left((1-C_{\omega_{A^0}})^{-1}I\right)(\xi)\omega_{A^0}(\xi)\, \mathrm{d}\xi\right)_{12}+O\left(\frac{c(k_0)\log{t}}{t}\right). \end{split} \end{align*}

    There are similar computations for the other case. Together with (3.39), one can obtain (3.57).

    For k\in\mathbb{C}\backslash\Sigma_A , set

    \begin{equation} M^{A^0}(k;x, t) = I+\int_{\Sigma_A}\frac{\left((1-C_{\omega_{A^0}})^{-1}I\right)(\xi)\omega_{A^0}(\xi)}{\xi-k} \, \frac{\mathrm{d}\xi}{2\pi i}. \end{equation} (3.58)

    Then M^{A^0}(k; x, t) is the solution of the Riemann-Hilbert problem

    \begin{equation} \begin{cases} M^{A^0}_+(k;x, t) = M^{A^0}_-(k;x, t)J^{A^0}(k;x, t), & k\in\Sigma_A, \\ M^{A^0}(k;x, t)\to I, & k\to\infty. \end{cases} \end{equation} (3.59)

    In particular

    \begin{equation} M^{A^0}(k) = I+\frac{M^{A^0}_1}{k}+O(k^{-2}), \quad k\rightarrow\infty, \end{equation} (3.60)

    then

    \begin{equation} M^{A^0}_1 = -\int_{\Sigma_A}\left((1-C_{\omega_{A^0}})^{-1}I\right)(\xi)\omega_{A^0}(\xi)\, \frac{\mathrm{d}\xi}{2\pi i}. \end{equation} (3.61)

    There is an analogous Riemann-Hilbert problem on \Sigma_{B},

    \begin{equation} \begin{cases} M^{B^0}_+(k;x, t) = M^{B^0}_-(k;x, t)J^{B^0}(k;x, t), & k\in\Sigma_B, \\ M^{B^0}(k;x, t)\to I, & k\to\infty, \end{cases} \end{equation} (3.62)

    where J^{B^0}(k; x, t) is defined in (3.54) and (3.56). In the meantime, we have

    \begin{equation} M^{B^0}(k) = I+\frac{M^{B^0}_1}{k}+O(k^{-2}), \quad k\rightarrow\infty. \end{equation} (3.63)

    Next, we consider the relation between M^{A^0}_1 and M^{B^0}_1. From the expressions (3.50), (3.52), (3.54) and (3.56), we have the symmetry relation

    \begin{equation*} J^{A^0}(k) = \tau(J^{B^0}(-k^\ast))^\ast\tau. \end{equation*}

    By the uniqueness of the Riemann-Hilbert problem,

    \begin{equation*} M^{A^0}(k) = \tau(M^{B^0}(-k^\ast))^\ast\tau. \end{equation*}

    Combining with the expansion (3.60) and (3.63), one can verify that

    \begin{equation*} M^{A^0}_1 = -\tau(M^{B^0}_1)^\ast\tau, \quad (M^{A^0}_1)_{12} = -\sigma_1(M^{B^0}_1)^\ast_{12}. \end{equation*}
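
    Indeed, expanding M^{A^0}(k) = \tau(M^{B^0}(-k^\ast))^\ast\tau for large k gives

    \begin{equation*} I+\frac{M^{A^0}_1}{k}+O(k^{-2}) = \tau\Big(I+\frac{(M^{B^0}_1)^{\ast}}{-k}+O(k^{-2})\Big)\tau = I-\frac{\tau(M^{B^0}_1)^{\ast}\tau}{k}+O(k^{-2}), \end{equation*}

    so M^{A^0}_1 = -\tau(M^{B^0}_1)^{\ast}\tau, and taking the 12-block with the block form of \tau in (2.8) gives (M^{A^0}_1)_{12} = -\sigma_1(M^{B^0}_1)^{\ast}_{12}.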

    Therefore, from (3.57) and (3.61), we have

    \begin{align} \begin{split} q(x, t) = &(u(x, t), u^\ast(x, t))^T\\ = &\frac{-2i}{\sqrt{48tk_0}}\left(M_1^{A^0}+M_1^{B^0}\right)_{12}+O\left(\frac{c(k_0)\log{t}}{t}\right)\\ = &-\frac{i}{\sqrt{12tk_0}}\left((M_1^{A^0})_{12}-\sigma_1(M_1^{A^0})^\ast_{12}\right)+O\left(\frac{c(k_0)\log{t}}{t}\right). \end{split} \end{align} (3.64)

    In this subsection, we compute (M_1^{A^0})_{12} explicitly. To this end, we set

    \begin{equation} \Psi(k) = H(k)(-k)^{i\nu\sigma}e^{-\frac{1}{4}ik^2\sigma}, \quad H(k) = (\delta_A^0)^{-\sigma}M^{A^0}(k)(\delta_A^0)^{\sigma}. \end{equation} (3.65)

    Then it follows from (3.59) that

    \begin{equation} \Psi_+(k) = \Psi_-(k)v(-k_0), \quad v = e^{\frac{1}{4}ik^2\sigma}(-k)^{-i\nu\sigma}(\delta_A^0)^{-\sigma}J^{A^0}(k)(\delta_A^0)^{\sigma}(-k)^{i\nu\sigma}e^{-\frac{1}{4}ik^2\sigma}. \end{equation} (3.66)

    The jump matrix v(-k_0) is a constant matrix on each of the four rays \Sigma_A^1, \Sigma_A^2, \Sigma_A^3, \Sigma_A^4, so

    \begin{equation} \frac{\mathrm{d}\Psi_+(k)}{\mathrm{d}k} = \frac{\mathrm{d}\Psi_-(k)}{\mathrm{d}k}v(-k_0). \end{equation} (3.67)

    Then it follows that (\mathrm{d}\Psi/\mathrm{d}k+\frac{ik}{2}\sigma\Psi)\Psi^{-1} has no jump discontinuity along any of the four rays. Besides, from the relation between \Psi(k) and H(k), we have

    \begin{align*} \frac{\mathrm{d}\Psi(k)}{\mathrm{d}k}\Psi^{-1}(k) = &\frac{\mathrm{d}H(k)}{\mathrm{d}k}H^{-1}(k)-\frac{ik}{2}H(k)\sigma H^{-1}(k)+\frac{i\nu}{k}H(k)\sigma H^{-1}(k)\notag\\ = &O(k^{-1})-\frac{ik\sigma}{2}+\frac{i}{2}(\delta_A^0)^{\sigma}[\sigma, M^{A^0}_1](\delta_A^0)^{-\sigma}. \end{align*}

    It follows from Liouville's Theorem that

    \begin{equation} \frac{\mathrm{d}\Psi(k)}{\mathrm{d}k}+\frac{ik}{2}\sigma\Psi(k) = \beta\Psi(k), \end{equation} (3.68)

    where

    \begin{equation*} \beta = \frac{i}{2}(\delta_A^0)^{\sigma}[\sigma, M^{A^0}_1](\delta_A^0)^{-\sigma} = \left(\begin{array}{cc} 0 & \beta_{12}\\ \beta_{21} & 0 \end{array}\right). \end{equation*}

    Moreover,

    \begin{equation} (M_1^{A^0})_{12} = -i(\delta_A^0)^{-2}\beta_{12}. \end{equation} (3.69)
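
    Indeed, since \sigma = \mathrm{diag}(1,1,-1), the 12-block of [\sigma, M^{A^0}_1] equals 2(M^{A^0}_1)_{12}, and conjugation by (\delta_A^0)^{\sigma} multiplies this block by (\delta_A^0)^{2}, so that

    \begin{equation*} \beta_{12} = \frac{i}{2}(\delta_A^0)^{2}\cdot2(M^{A^0}_1)_{12} = i(\delta_A^0)^{2}(M^{A^0}_1)_{12}, \end{equation*}

    which is equivalent to (3.69).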

    Set

    \begin{equation*} \Psi(k) = \left(\begin{array}{cc} \Psi_{11}(k) & \Psi_{12}(k)\\ \Psi_{21}(k) & \Psi_{22}(k)\\ \end{array}\right). \end{equation*}

    From (3.68) and its differential, we obtain

    \begin{gather*} \frac{\mathrm{d}^{2}\beta_{21}\Psi_{11}(k)}{\mathrm{d}k^{2}}+\left(\frac{i}{2}+\frac{k^2}{4}-\beta_{21}\beta_{12}\right)\beta_{21}\Psi_{11}(k) = 0, \\ \Psi_{21}(k) = \frac{1}{\beta_{21}\beta_{12}}\left(\frac{\mathrm{d}\beta_{21}\Psi_{11}(k)}{\mathrm{d}k}+\frac{ik}{2}\beta_{21}\Psi_{11}(k)\right), \\ \frac{\mathrm{d}^{2}\Psi_{22}(k)}{\mathrm{d}k^{2}}+\left(-\frac{i}{2}+\frac{k^2}{4}-\beta_{21}\beta_{12}\right)\Psi_{22}(k) = 0, \\ \beta_{21}\Psi_{12}(k) = \left(\frac{\mathrm{d}\Psi_{22}(k)}{\mathrm{d}k}-\frac{ik}{2}\Psi_{22}(k)\right). \end{gather*}
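
    These relations follow by writing (3.68) in block form: since \beta is off-diagonal and \sigma = \mathrm{diag}(1,1,-1),

    \begin{gather*} \frac{\mathrm{d}\Psi_{11}(k)}{\mathrm{d}k}+\frac{ik}{2}\Psi_{11}(k) = \beta_{12}\Psi_{21}(k), \qquad \frac{\mathrm{d}\Psi_{21}(k)}{\mathrm{d}k}-\frac{ik}{2}\Psi_{21}(k) = \beta_{21}\Psi_{11}(k), \\ \frac{\mathrm{d}\Psi_{12}(k)}{\mathrm{d}k}+\frac{ik}{2}\Psi_{12}(k) = \beta_{12}\Psi_{22}(k), \qquad \frac{\mathrm{d}\Psi_{22}(k)}{\mathrm{d}k}-\frac{ik}{2}\Psi_{22}(k) = \beta_{21}\Psi_{12}(k), \end{gather*}

    and then eliminating \Psi_{21} (respectively \Psi_{12}) from each pair.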

    As is well known, Weber's equation

    \begin{equation*} \frac{\mathrm{d}^{2}g(\zeta)}{\mathrm{d}\zeta^{2}}+\left(\varrho+\frac{1}{2}-\frac{\zeta^{2}}{4}\right)g(\zeta) = 0 \end{equation*}

    has the solution

    \begin{equation*} g(\zeta) = c_{1}D_{\varrho}(\zeta)+c_{2}D_{\varrho}(-\zeta), \end{equation*}

    where D_{\varrho}(\cdot) denotes the standard parabolic-cylinder function, and c_1 , c_2 are constants. The parabolic-cylinder function satisfies [41]

    \begin{gather} \frac{\mathrm{d}D_{\varrho}(\zeta)}{\mathrm{d}\zeta}+\frac{\zeta}{2}D_{\varrho}(\zeta)-\varrho D_{\varrho-1}(\zeta) = 0, \end{gather} (3.70)
    \begin{gather} D_{\varrho}(\pm\zeta) = \frac{\Gamma(\varrho+1)e^{\frac{i\pi \varrho}{2}}}{\sqrt{2\pi}}D_{-\varrho-1}(\pm i\zeta)+\frac{\Gamma(\varrho+1)e^{-\frac{i\pi \varrho}{2}}}{\sqrt{2\pi}}D_{-\varrho-1}(\mp i\zeta). \end{gather} (3.71)
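
    As a quick sanity check on (3.70) and (3.71), one may use the classical closed forms D_0(\zeta) = e^{-\frac{\zeta^{2}}{4}} , D_1(\zeta) = \zeta e^{-\frac{\zeta^{2}}{4}} and D_{-1}(\zeta) = \sqrt{\frac{\pi}{2}}e^{\frac{\zeta^{2}}{4}}\operatorname{erfc}\left(\frac{\zeta}{\sqrt{2}}\right) : with \varrho = 1 , the left-hand side of (3.70) vanishes identically, while with \varrho = 0 , (3.71) reduces to

    \begin{equation*} D_0(\zeta) = \frac{1}{\sqrt{2\pi}}\left(D_{-1}(i\zeta)+D_{-1}(-i\zeta)\right) = \frac{1}{2}e^{-\frac{\zeta^{2}}{4}}\left(\operatorname{erfc}\left(\frac{i\zeta}{\sqrt{2}}\right)+\operatorname{erfc}\left(-\frac{i\zeta}{\sqrt{2}}\right)\right) = e^{-\frac{\zeta^{2}}{4}}, \end{equation*}

    since \operatorname{erfc}(z)+\operatorname{erfc}(-z) = 2 .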

    As \zeta\rightarrow\infty , from [42], we have

    \begin{equation} D_{\varrho}(\zeta) = \begin{cases} \zeta^{\varrho}e^{-\frac{\zeta^{2}}{4}}(1+O(\zeta^{-2})), & \left|\arg{\zeta}\right| \lt \frac{3\pi}{4}, \\ \zeta^{\varrho}e^{-\frac{\zeta^{2}}{4}}(1+O(\zeta^{-2}))-\frac{\sqrt{2\pi}}{\Gamma(-\varrho)}e^{\varrho\pi i+\frac{\zeta^{2}}{4}}\zeta^{-\varrho-1}(1+O(\zeta^{-2})), & \frac{\pi}{4} \lt \arg{\zeta} \lt \frac{5\pi}{4}, \\ \zeta^{\varrho}e^{-\frac{\zeta^{2}}{4}}(1+O(\zeta^{-2}))-\frac{\sqrt{2\pi}}{\Gamma(-\varrho)}e^{-\varrho\pi i+\frac{\zeta^{2}}{4}}\zeta^{-\varrho-1}(1+O(\zeta^{-2})), & -\frac{5\pi}{4} \lt \arg{\zeta} \lt -\frac{\pi}{4}, \end{cases} \end{equation} (3.72)

    where \Gamma(\cdot) is the Gamma function. Setting \varrho = i\beta_{21}\beta_{12} , the solutions take the form

    \begin{gather} \beta_{21}\Psi_{11}(k) = c_1D_\varrho\left(e^{\frac{\pi i}{4}}k\right)+c_2D_\varrho\left(e^{\frac{-3\pi i}{4}}k\right), \end{gather} (3.73)
    \begin{gather} \Psi_{22}(k) = c_3D_{-\varrho}\left(e^{\frac{-\pi i}{4}}k\right)+c_4D_{-\varrho}\left(e^{\frac{3\pi i}{4}}k\right), \end{gather} (3.74)

    where c_1 , c_2 , c_3 , c_4 are constants. As \arg{k}\in(-\pi, -\frac{3\pi}{4})\cup(\frac{3\pi}{4}, \pi) and k\rightarrow\infty , we arrive at

    \begin{equation*} \Psi_{11}(k)(-k)^{-i\nu}e^{\frac{ik^{2}}{4}}\rightarrow I, \quad \Psi_{22}(k)(-k)^{i\nu}e^{-\frac{ik^{2}}{4}}\rightarrow 1, \end{equation*}

    then

    \begin{gather*} \beta_{21}\Psi_{11}(k) = \beta_{21}e^{\frac{\pi\nu}{4}}D_{\varrho}\left(e^{-\frac{3\pi i}{4}}k\right), \quad \nu = \beta_{21}\beta_{12}, \\ \Psi_{22}(k) = e^{\frac{\pi\nu}{4}}D_{-\varrho}\left(e^{\frac{3\pi i}{4}}k\right). \end{gather*}
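
    These constants are fixed by matching with (3.72). For instance, for \arg{k}\in(\frac{3\pi}{4}, \pi) one has \left|\arg\left(e^{-\frac{3\pi i}{4}}k\right)\right| \lt \frac{3\pi}{4} and (with principal branches) (-k)^{i\nu} = e^{\pi\nu}k^{i\nu} , so with \varrho = i\nu ,

    \begin{equation*} D_{\varrho}\left(e^{-\frac{3\pi i}{4}}k\right) \sim e^{-\frac{3\pi i\varrho}{4}}k^{i\nu}e^{-\frac{ik^{2}}{4}} = e^{\frac{3\pi\nu}{4}}k^{i\nu}e^{-\frac{ik^{2}}{4}}, \end{equation*}

    and the normalization \Psi_{11}(k)(-k)^{-i\nu}e^{\frac{ik^{2}}{4}}\rightarrow I forces the prefactor e^{\pi\nu}e^{-\frac{3\pi\nu}{4}} = e^{\frac{\pi\nu}{4}} .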

    Consequently,

    \begin{gather*} \Psi_{21}(k) = \beta_{21}e^{\frac{\pi\nu}{4}}e^{-\frac{\pi i}{4}}D_{\varrho-1}\left(e^{-\frac{3\pi i}{4}}k\right), \\ \beta_{21}\Psi_{12}(k) = \varrho e^{\frac{\pi\nu}{4}}e^{-\frac{\pi i}{4}}D_{-\varrho-1}\left(e^{\frac{3\pi i}{4}}k\right). \end{gather*}
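
    The expression for \Psi_{21} , for example, follows from the first-order relation and the recurrence (3.70): with \zeta = e^{-\frac{3\pi i}{4}}k , the terms involving D_{\varrho} cancel because e^{-\frac{3\pi i}{4}}\frac{\zeta}{2} = \frac{ik}{2} , so

    \begin{equation*} \Psi_{21}(k) = \frac{1}{\beta_{21}\beta_{12}}\left(\frac{\mathrm{d}(\beta_{21}\Psi_{11})}{\mathrm{d}k}+\frac{ik}{2}\beta_{21}\Psi_{11}\right) = \frac{\varrho}{\beta_{21}\beta_{12}}\beta_{21}e^{\frac{\pi\nu}{4}}e^{-\frac{3\pi i}{4}}D_{\varrho-1}(\zeta) = \beta_{21}e^{\frac{\pi\nu}{4}}e^{-\frac{\pi i}{4}}D_{\varrho-1}\left(e^{-\frac{3\pi i}{4}}k\right), \end{equation*}

    where the last step uses \varrho = i\beta_{21}\beta_{12} ; the formula for \beta_{21}\Psi_{12} is obtained analogously from \Psi_{22} .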

    For \arg{k}\in(-\frac{3\pi}{4}, -\frac{\pi}{4}) and k\rightarrow\infty , we have

    \begin{equation*} \Psi_{11}(k)(-k)^{-i\nu}e^{\frac{ik^{2}}{4}}\rightarrow I, \quad \Psi_{22}(k)(-k)^{i\nu}e^{-\frac{ik^{2}}{4}}\rightarrow 1, \end{equation*}

    then

    \begin{gather*} \beta_{21}\Psi_{11}(k) = \beta_{21}e^{-\frac{3\pi\nu}{4}}D_{\varrho}\left(e^{\frac{\pi i}{4}}k\right), \\ \Psi_{22}(k) = e^{\frac{\pi\nu}{4}}D_{-\varrho}\left(e^{\frac{3\pi i}{4}}k\right). \end{gather*}

    Consequently,

    \begin{gather*} \Psi_{21}(k) = \beta_{21}e^{-\frac{3\pi\nu}{4}}e^{\frac{3\pi i}{4}}D_{\varrho-1}\left(e^{\frac{\pi i}{4}}k\right), \\ \beta_{21}\Psi_{12}(k) = \varrho e^{\frac{\pi\nu}{4}}e^{-\frac{\pi i}{4}}D_{-\varrho-1}\left(e^{\frac{3\pi i}{4}}k\right). \end{gather*}

    Along the ray \arg k = -\frac{3\pi}{4},

    \begin{equation} \Psi_{+}(k) = \Psi_{-}(k) \left(\begin{array}{cc} I & 0\\ -\gamma^\dagger(-k_0)B_1 & 1\\ \end{array}\right). \end{equation} (3.75)
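
    Written out blockwise, the jump (3.75) reads

    \begin{equation*} (\Psi_{+})_{11} = (\Psi_{-})_{11}-(\Psi_{-})_{12}\gamma^\dagger(-k_0)B_1, \qquad (\Psi_{+})_{21} = (\Psi_{-})_{21}-(\Psi_{-})_{22}\gamma^\dagger(-k_0)B_1, \end{equation*}

    while the second columns are unchanged.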

    Substituting the expressions obtained in the two adjacent sectors into the (2, 1) entry of this jump relation, we obtain

    \begin{align*} &\beta_{21}e^{\frac{\pi(\nu-i)}{4}}D_{\varrho-1}(e^{-\frac{3\pi i}{4}}k)\\ = &\beta_{21}e^{\frac{\pi(3i-3\nu)}{4}}D_{\varrho-1}(e^{\frac{\pi i}{4}}k)-e^{\frac{\pi\nu}{4}}D_{-\varrho}(e^{\frac{3\pi i}{4}}k)\gamma^\dagger(-k_0)B_1. \end{align*}

    It follows from (3.71) that

    \begin{equation*} D_{-\varrho}(e^{\frac{3\pi i}{4}}k) = \frac{\Gamma(-\varrho+1)e^{\frac{\pi\nu}{2}}}{\sqrt{2\pi}}D_{\varrho-1}(e^{-\frac{3\pi i}{4}}k)+\frac{\Gamma(-\varrho+1)e^{-\frac{\pi\nu}{2}}}{\sqrt{2\pi}}D_{\varrho-1}(e^{\frac{\pi i}{4}}k). \end{equation*}

    Then we separate the coefficients of the two linearly independent functions D_{\varrho-1}(e^{-\frac{3\pi i}{4}}k) and D_{\varrho-1}(e^{\frac{\pi i}{4}}k) and obtain

    \begin{gather} \beta_{21} = e^{-\frac{3\pi i}{4}}e^{\frac{\pi\nu}{2}}\frac{\Gamma(-\varrho+1)}{\sqrt{2\pi}}\gamma^\dagger(-k_0)B_1. \end{gather} (3.76)
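
    Explicitly, comparing the coefficients of D_{\varrho-1}(e^{-\frac{3\pi i}{4}}k) gives

    \begin{equation*} \beta_{21}e^{\frac{\pi\nu}{4}}e^{-\frac{\pi i}{4}} = -e^{\frac{\pi\nu}{4}}\frac{\Gamma(-\varrho+1)e^{\frac{\pi\nu}{2}}}{\sqrt{2\pi}}\gamma^\dagger(-k_0)B_1, \end{equation*}

    which is (3.76) after noting -e^{\frac{\pi i}{4}} = e^{-\frac{3\pi i}{4}} ; the coefficients of D_{\varrho-1}(e^{\frac{\pi i}{4}}k) yield the same relation.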

    Noting that B^{-1}(J^{A^0}(k^\ast))^\dagger B = (J^{A^0}(k))^{-1} , we have \beta_{12} = -B_1^{-1}\beta_{21}^\dagger , which means that

    \begin{equation} \beta_{12} = -B_1^{-1}B_1^\dagger\gamma(-k_0)e^{\frac{3\pi i}{4}}e^{\frac{\pi\nu}{2}}\frac{\Gamma(-\varrho+1)}{\sqrt{2\pi}} = e^{-\frac{\pi i}{4}}e^{\frac{\pi\nu}{2}}\nu\frac{\Gamma(-i\nu)}{\sqrt{2\pi}}\gamma(-k_0). \end{equation} (3.77)

    Finally, we can obtain (1.4) from (3.64), (3.69) and (3.77).

    This work is supported by the National Natural Science Foundation of China (Grant Nos. 11871440 and 11931017).

    The authors declare no conflict of interest.



    [1] Monkeypox outbreak 2022-Global, WHO. Available from: https://www.who.int/emergencies/situations/monkeypox-oubreak-2022.
    [2] I. D. Ladnyj, P. Ziegler, E. Kima, A human infection caused by monkeypox virus in basankusu territory, democratic Republic of the Congo, B. World Health Organ., 46 (1972), 593–597. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2480792/
    [3] A. Jezek, S. S. Marennikova, M. Mutumbo, J. H. Nakano, K. M. Paluku, M. Szczeniowski, Human monkeypox: A study of 2510 contacts of 214 patients, J. Infect. Dis., 154 (1986), 551–555. https://doi.org/10.1093/infdis/154.4.551 doi: 10.1093/infdis/154.4.551
    [4] D. A. Kulesh, B. M. Loveless, D. Norwood, J. Garrison, C. A. Whitehouse, C. Hartmann, Monkeypox virus detection in rodents using real-time 3^{\prime}-minor groove binder TaqMan assays on the Roche LightCycler, Lab Invest., 84 (2004), 1200–1208. https://doi.org/10.1038/labinvest.3700143 doi: 10.1038/labinvest.3700143
    [5] Y. Li, V. A. Olson, T. Laue, M. T. Laker, I. K. Damon, Detection of monkeypox virus with real-time PCR assays, J. Clin. Virol., 36 (2006), 194–203. https://doi.org/10.1016/j.jcv.2006.03.012 doi: 10.1016/j.jcv.2006.03.012
    [6] V. A. Olson, T. Laue, M. T. Laker, I. V. Babkin, C. Drosten, S. N. Shchelkunov, et al., Real-time PCR system for detection of orthopoxviruses and simultaneous identification of smallpox virus, J. Clin. Microbiol., 42 (2004), 1940–1946. https://doi.org/10.1128/jcm.42.5.1940-1946.2004 doi: 10.1128/jcm.42.5.1940-1946.2004
    [7] J. G. Breman, D. A. Henderson, Diagnosis and management of smallpox, N. Engl. J. Med., 346 (2002), 1300–1308. https://www.nejm.org/doi/full/10.1056/NEJMra020025
    [8] J. G. Breman, R. Kalisa, M. V. Steniowski, E. Zanotto, A. I. Gromyko, I. Arita, Human monkeypox 1970–1979, B. World Health Organ., 58 (1980), 165–182. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2395797/
    [9] Z. Jezek, F. Fenner, Human monkeypox, New York: Karger, 1988.
    [10] P. E. M. Fine, Z. Jezek, B. Grab, H. Dixon, The transmission potential of monkeypox virus in human populations, Int. J. Epidemiol., 17 (1988), 643–650. https://doi.org/10.1093/ije/17.3.643 doi: 10.1093/ije/17.3.643
    [11] H. Meyer, R. Ehmann, G. L. Smith, Smallpox in the post-eradication era, Viruses, 12 (2020), 138. https://doi.org/10.3390/v12020138 doi: 10.3390/v12020138
    [12] A. W. Rimoin, P. M. Mulembakani, S. C. Johnston, J. O. L. Smith, N. K. Kisalu, T. L. Kinkela, et al., Major increase in human monkeypox incidence 30 years after smallpox vaccination campaigns cease in the Democratic Republic of Congo, Proc. Natl. Acad. Sci., 107 (2010), 16262–16267. https://doi.org/10.1073/pnas.100576910 doi: 10.1073/pnas.100576910
    [13] C. P. Bhunu, S. Mushayabasa, Modelling the transmission dynamics of pox-like infections, IAENG Int. J. Appl. Math., 41 (2011), 1–9. Available from: https://www.iaeng.org/IJAM/issues_v41/issue_2/.
    [14] S. Usman, I. I. Adamu, Modeling the transmission dynamics of the monkeypox virus infection with treatment and vaccination interventions, J. Appl. Math. Phys., 5 (2017), 2335–2353. https://doi.org/10.4236/jamp.2017.512191 doi: 10.4236/jamp.2017.512191
    [15] S. A. Somma, N. I. Akinwande, U. D. Chado, A mathematical model of monkeypox virus transmission dynamics, Ife J. Sci., 21 (2019), 195–204. https://doi.org/10.4314/ijs.v21i1.17 doi: 10.4314/ijs.v21i1.17
    [16] S. V. Bankuru, S. Kossol, W. Hou, P. Mahmoudi, J. Rychtár, D. Taylor, A game-theoretic model of monkeypox to assess vaccination strategies, PeerJ, 8 (2020), https://doi.org/10.7717/peerj.9272 doi: 10.7717/peerj.9272
    [17] O. J. Peter, S. Kumar, N. Kumari, F. A. Oguntolu, K. Oshinubi, R. Musa, Transmission dynamics of monkeypox virus: A mathematical modelling approach, Model. Earth Syst. Environ., 8 (2022), 3423–3434. https://doi.org/10.1007/s40808-021-01313-2 doi: 10.1007/s40808-021-01313-2
    [18] L. E. Depero, E. Bontempi, Comparing the spreading characteristics of monkeypox (MPX) and COVID-19: Insights from a quantitative model, Environ. Res., 235 (2023), 116521. https://doi.org/10.1016/j.envres.2023.116521 doi: 10.1016/j.envres.2023.116521
    [19] B. Liu, S. Farid, S. Ullah, M. Altanji, R. Nawaz, S. W. Teklu, Mathematical assessment of monkeypox disease with the impact of vaccination using a fractional epidemiological modeling approach, Sci. Rep., 13 (2023), 13550. https://doi.org/10.1038/s41598-023-40745-x doi: 10.1038/s41598-023-40745-x
    [20] A. Elsonbaty, W. Adel, A. Aldurayhim, A. El-Mesady, Mathematical modeling and analysis of a novel monkeypox virus spread integrating imperfect vaccination and nonlinear incidence rates, Ain Shams Eng. J., 15 (2024). https://doi.org/10.1016/j.asej.2023.102451 doi: 10.1016/j.asej.2023.102451
    [21] A. A. Kilbas, H. H. Srivastava, J. J. Trujillo, Theory and applications of fractional differential equations, Elsevier: Amsterdam, The Netherlands, 2006.
    [22] M. Caputo, M. Fabrizio, A new definition of fractional derivative without singular kernel, Prog. Fract. Differ. Appl., 1 (2015), 73–85.
    [23] A. Atangana, D. Baleanu, New fractional derivatives with non-local and non-singular kernel: Theory and application to heat transfer model, Therm. Sci., 20 (2016), 763–769. https://doi.org/10.2298/TSCI160111018A doi: 10.2298/TSCI160111018A
    [24] M. U. Rahman, Generalized fractal-fractional order problems under non-singular Mittag-Leffler kernel, Results Phys., 35 (2022), https://doi.org/10.1016/j.rinp.2022.105346 doi: 10.1016/j.rinp.2022.105346
    [25] J. Losada, J. J. Nieto, Properties of a new fractional derivative without singular kernel, Progr. Fract. Differ. Appl., 1 (2015), 87–92. https://doi.org/10.12785/pfda/010202 doi: 10.12785/pfda/010202
    [26] R. Kanno, Representation of random walk in fractal space-time, Physica A, 248 (1998), 165–175. https://doi.org/10.1016/S0378-4371(97)00422-6 doi: 10.1016/S0378-4371(97)00422-6
    [27] B. Ghanbari, K. S. Nisar, Some effective numerical techniques for chaotic systems involving fractal-fractional derivatives with different laws, Front. Phys., 8 (2020), 192. https://doi.org/10.3389/fphy.2020.00192 doi: 10.3389/fphy.2020.00192
    [28] A. Atangana, Modelling the spread of COVID-19 with new fractal-fractional operators: Can the lockdown save mankind before vaccination? Chaos Soliton. Fract., 136 (2020), 109860. https://doi.org/10.1016/j.chaos.2020.109860 doi: 10.1016/j.chaos.2020.109860
    [29] M. Arfan, H. Alrabaiah, M. ur Rahman, Y. L. Sun, A. S. Hashim, B. A. Pansera, et al., Investigation of fractal-fractional order model of COVID-19 in Pakistan under Atangana-Baleanu Caputo (ABC) derivative, Results Phys., 24 (2021), 104046. https://doi.org/10.1016/j.rinp.2021.104046 doi: 10.1016/j.rinp.2021.104046
    [30] J. F. Gomez-Aguilar, T. Cordova-Fraga, T. Abdeljawad, A. Khan, H. Khan, Analysis of fractal-fractional malaria transmission model, Fractals, 28 (2020), 2040041. https://doi.org/10.1142/S0218348X20400411 doi: 10.1142/S0218348X20400411
    [31] M. Farman, A. Akgül, M. T. Tekin, M. M. Akram, A. Ahmad, E. E. Mahmoud, et al., Fractal fractional-order derivative for HIV/AIDS model with Mittag-Leffler kernel, Alex. Eng. J., 61 (2022), 10965–10980. https://doi.org/10.1016/j.aej.2022.04.030 doi: 10.1016/j.aej.2022.04.030
    [32] E. Addai, A. Adeniji, O. J. Peter, J. O. Agbaje, K. Oshinubi, Dynamics of age-structure smoking models with government intervention coverage under fractal-fractional order derivatives, Fractal. Fract., 7 (2023), 370. https://doi.org/10.3390/fractalfract7050370 doi: 10.3390/fractalfract7050370
    [33] N. Zhang, E. Addai, L. Zhang, M. Ngungu, E. Marinda, J. K. K. Asamoah, Fractional modeling and numerical simulation for unfolding marburg-monkeypox virus co-infection transmission, Fractals, 31 (2023), 2350086. https://doi.org/10.1142/S0218348X2350086X doi: 10.1142/S0218348X2350086X
    [34] E. Addai, A. Adeniji, M. Ngungu, G. K. Tawiah, E. Marinda, J. K. K. Asamoah, et al., A nonlinear fractional epidemic model for the Marburg virus transmission with public health education, Sci. Rep., 13 (2023), 19292. https://doi.org/10.1038/s41598-023-46127-7 doi: 10.1038/s41598-023-46127-7
    [35] H. Najafi, S. Etemad, N. Patanarapeelert, J. K. K. Asamoah, S. Rezapour, T. Sitthiwirattham, A study on dynamics of CD4^+ T-cells under the effect of HIV-1 infection based on a mathematical fractal-fractional model via the Adams-Bashforth scheme and Newton polynomials, Mathematics, 10 (2022), 1366. https://doi.org/10.3390/math10091366 doi: 10.3390/math10091366
    [36] A. Atangana, S. I. Araz, New numerical scheme with Newton polynomial: Theory, methods, and applications, 1 Eds, Elsevier, 2021. https://doi.org/10.1016/C2020-0-02711-8
    [37] V. S. Erturk, P. Kumar, Solution of a COVID-19 model via new generalized Caputo-type fractional derivatives, Chaos Soliton. Fract., 139 (2020), 110280, 1–9. https://doi.org/10.1016/j.chaos.2020.110280 doi: 10.1016/j.chaos.2020.110280
    [38] A. El. Mesady, A. Elsonbaty, W. Adel, On nonlinear dynamics of a fractional order monkeypox virus model, Chaos Soliton. Fract., 164 (2022), 112716. https://doi.org/10.1016/j.chaos.2022.112716 doi: 10.1016/j.chaos.2022.112716
    [39] M. A. Qurashi, S. Rashid, A. M. Alshehri, F. Jarad, F. Safdar, New numerical dynamics of the fractional monkeypox virus model transmission pertaining to nonsingular kernels, Math. Biosci. Eng., 20 (2022), 40236. https://doi.org/10.3934/mbe.2023019 doi: 10.3934/mbe.2023019
    [40] O. J. Peter, F. A. Oguntolu, M. M. Ojo, A. O. Oyeniyi, R. Jan, I. Khan, Fractional order mathematical model of monkeypox transmission dynamics, Phys. Scr., 97 (2022), 084005. https://doi.org/10.1088/1402-4896/ac7ebc doi: 10.1088/1402-4896/ac7ebc
    [41] A. Atangana, A. Akgu, K. M. Owolabi, Analysis of fractal fractional differential equations, Alex. Eng. J., 59 (2020), 1117–1134. https://doi.org/10.1016/j.aej.2020.01.005 doi: 10.1016/j.aej.2020.01.005
    [42] S. Qureshi, A. Atangana, A. Shaikh, Strange chaotic attractors under fractal fractional operators using newly proposed numerical methods, Eur. Phys. J. Plus, 134 (2019), https://doi.org/10.1140/epjp/i2019-13003-7 doi: 10.1140/epjp/i2019-13003-7
    [43] A. Granas, J. Dugundji, Fixed point theory, Springer: New York, 2003. https://doi.org/10.1007/978-0-387-21593-8
    [44] M. A. Krasnosel'skii, Two remarks on the method of successive approximations, Usp. Mat. Nauk., 10 (1955), 123–127.
    [45] G. O. Fosu, E. Akweittey, A. S. Albert, Next-generation matrices and basic reproductive numbers for all phases of the coronavirus disease, Open J. Math. Sci., 4 (2020), 261–272. https://doi.org/10.30538/oms2020.0117 doi: 10.30538/oms2020.0117
    [46] C. P. Bhunu, W. Garira, G. Magombedze, Mathematical analysis of a two strain HIV/AIDS model with antiretroviral treatment, Acta Biotheor., 57 (2009), 361–381. https://doi.org/10.1007/s10441-009-9080-2 doi: 10.1007/s10441-009-9080-2
    [47] M. R. Odom, R. C. Hendrickson, E. J. Lefkowitz, Poxvirus protein evolution: Family wide assessment of possible horizontal gene transfer events, Virus Res., 144 (2009), 233–249. https://doi.org/10.1016/j.virusres.2009.05.006 doi: 10.1016/j.virusres.2009.05.006
    [48] M. Ngungu, E. Addai, A. Adeniji, U. M. Adam, K. Oshinubi, Mathematical epidemiological modeling and analysis of monkeypox dynamism with non-pharmaceutical intervention using real data from United Kingdom, Front. Public Health., 11 (2023), 1101436. https://doi.org/10.3389/fpubh.2023.1101436 doi: 10.3389/fpubh.2023.1101436
    [49] Monkeypox cases confirmed in England-Latest updates, UK Health Security Agency, 2022. Available from: https://www.gov.uk/government/news/monkeypox-casesconfirmed-in-england-latest-updates (accessed August 29, 2022).
