Research article

Explicit evaluations of subfamilies of the hypergeometric function 3F2(1) along with specific fractional integrals

  • Received: 02 February 2025 Revised: 01 March 2025 Accepted: 07 March 2025 Published: 14 March 2025
  • MSC : 26A33, 33C05, 33C20

  • The present study explores the application of hypergeometric functions in evaluating fractional integrals, providing a comprehensive framework to bridge fractional calculus and special functions. As a generalization of classical integrals, fractional integrals have gained prominence due to their wide applicability in modeling anomalous diffusion, viscoelastic systems, and other non-local phenomena. Hypergeometric functions, renowned for their rich analytical properties and ability to represent solutions to differential equations, offer an elegant and versatile tool for solving fractional integrals. In this paper, we evaluate a new class of fractional integrals, presenting results that contribute significantly to the study of generalized hypergeometric functions, particularly 3F2(1). The results reveal previously unexplored connections within these functions, providing new insights and extending their applicability. Furthermore, evaluating these fractional integrals holds promise for advancing the theoretical understanding and practical applications of fractional differential equations.

    Citation: Abdelhamid Zaidi, Saleh Almuthaybiri. Explicit evaluations of subfamilies of the hypergeometric function 3F2(1) along with specific fractional integrals[J]. AIMS Mathematics, 2025, 10(3): 5731-5761. doi: 10.3934/math.2025264




    The Sasa-Satsuma equation

    \begin{equation} u_t+u_{xxx}+6|u|^2u_x+3u(|u|^2)_x = 0, \end{equation} (1.1)

    the so-called higher-order nonlinear Schrödinger equation [1], is relevant to several physical phenomena, for example, in optical fibers [2,3], in deep water waves [4], and more generally in dispersive nonlinear media [5]. Because it describes these important nonlinear phenomena, the equation has received considerable attention and has been studied extensively. The Sasa-Satsuma equation has been discussed by means of various approaches, such as the inverse scattering transform [1], the Riemann-Hilbert method [6], the Hirota bilinear method [7], the Darboux transformation [8], and others [9,10,11]. The initial-boundary value problem for the Sasa-Satsuma equation on a finite interval was studied by the Fokas method [12], which is also effective for initial-boundary value problems on the half-line [35,36,37]. In Ref. [13], finite-genus solutions of the coupled Sasa-Satsuma hierarchy were obtained on the basis of the theory of trigonal curves, the Baker-Akhiezer function, and meromorphic functions [14,15,16]. In Ref. [17], the super Sasa-Satsuma hierarchy associated with a 3×3 matrix spectral problem was proposed, and its bi-Hamiltonian structures were derived with the aid of the super trace identity.

    The nonlinear steepest descent method [18], also called the Deift-Zhou method, for oscillatory Riemann-Hilbert problems is a powerful tool for studying the long-time asymptotic behavior of solutions of soliton equations. With it, the long-time asymptotic behaviors of a number of integrable nonlinear evolution equations associated with 2×2 matrix spectral problems have been obtained, for example, the mKdV equation, the KdV equation, the sine-Gordon equation, the modified nonlinear Schrödinger equation, the Camassa-Holm equation, the derivative nonlinear Schrödinger equation, and so on [19,20,21,22,23,24,25,26,27,28,29,30]. However, there is little literature on the long-time asymptotic behavior of solutions of integrable nonlinear evolution equations associated with 3×3 matrix spectral problems [31,32,33]; the 3×3 case is usually difficult and complicated. Recently, the nonlinear steepest descent method was successfully generalized to derive the long-time asymptotics of the initial value problems for the coupled nonlinear Schrödinger equation and the Sasa-Satsuma equation with complex potentials [33,34]. The main difference between the 2×2 and 3×3 cases is that the former leads to a scalar Riemann-Hilbert problem, while the latter leads to a matrix Riemann-Hilbert problem. In general, the solution of a matrix Riemann-Hilbert problem cannot be given in explicit form, whereas a scalar Riemann-Hilbert problem can be solved by the Plemelj formula.

    The main aim of this paper is to study the long-time asymptotics of the Cauchy problem for the generalized Sasa-Satsuma equation [38] via the nonlinear steepest descent method,

    \begin{equation} \begin{cases} u_t+u_{xxx}-6a|u|^2u_x-6bu^2u^\ast_x-3au(|u|^2)_x-3bu^\ast(|u|^2)_x = 0, \\ u(x,0) = u_0(x), \end{cases} \end{equation} (1.2)

    where a is a real constant, b is a complex constant satisfying a^2\neq|b|^2 , and the asterisk " \ast " denotes the complex conjugate. It is easy to see that the generalized Sasa-Satsuma equation (1.2) reduces to the Sasa-Satsuma equation (1.1) when a = -1 and b = 0 . Suppose that the initial value u_0(x) lies in the Schwartz space \mathcal{S}(\mathbb{R}) = \{f(x)\in C^\infty(\mathbb{R}):\sup_{x\in\mathbb{R}}|x^\alpha\partial^\beta f(x)| < \infty,\ \alpha,\beta\in\mathbb{N}\} . The vector function \gamma(k) is determined by the initial data through (2.15) and (2.19), and \gamma(k) satisfies the conditions (P1) and (P2), where

    (P1): \begin{equation*} \begin{cases} \gamma^\dagger(k)B_1\gamma(k)+\frac{|b|^2}{4}\left(\gamma^\dagger(k)\sigma_3\gamma(k)\right)^2 < 1, \\ \gamma^\dagger(k)B_1\gamma(k)+a\,\gamma^\dagger(k)\sigma_3\gamma(k) < 2, \\ 2\gamma^\dagger(k)B_1\gamma(k)+|\gamma(k)|^2+|B_1\gamma(k)|^2 < 4, \end{cases} \end{equation*}

    with

    \begin{equation} \sigma_3 = \left(\begin{array}{cc} 1 & 0\\ 0 & -1 \end{array}\right), \qquad B_1 = \left(\begin{array}{cc} a & b^\ast\\ b & a \end{array}\right); \end{equation} (1.3)

    (P2): When \det B_1 > 0 and a > 0 , \left(2a-|B_1\gamma(k)|^2\right) and \left(2a-\det B_1\,|\gamma(k)|^2\right)/\left(1-\gamma^\dagger(k)B_1\gamma(k)\right) are positive and bounded; otherwise, \left(|B_1\gamma(k)|^2-2a\right) and \left(\det B_1\,|\gamma(k)|^2-2a\right)/\left(1-\gamma^\dagger(k)B_1\gamma(k)\right) are positive and bounded.

    The main result of this paper is as follows:

    Theorem 1.1. Let u(x,t) be the solution of the Cauchy problem for the generalized Sasa-Satsuma equation (1.2) with the initial value u_0\in\mathcal{S}(\mathbb{R}) . Suppose that the vector function \gamma(k) is defined in (2.19) and that the hypotheses (P1) and (P2) hold. Then, for x < 0 and x/t < C ,

    \begin{equation} u(x,t) = u_a(x,t)+O\left(c(k_0)t^{-1}\log t\right), \end{equation} (1.4)

    where C is a fixed constant, and

    \begin{align*} u_a(x,t) &= \sqrt{\frac{\nu}{12tk_0\,\gamma^\dagger(k_0)B_1\gamma(k_0)}}\left(|\gamma_1(k_0)|e^{i(\phi+\arg\gamma_1(k_0))}+|\gamma_2(k_0)|e^{-i(\phi+\arg\gamma_2(k_0))}\right), \\ k_0 &= \sqrt{-\frac{x}{12t}}, \qquad \nu = -\frac{1}{2\pi}\log\left(1-\gamma^\dagger(k_0)B_1\gamma(k_0)\right), \\ \phi &= \nu\log\left(196tk_0^3\right)-16tk_0^3+\arg\Gamma(i\nu)+\frac{1}{\pi}\int_{-k_0}^{k_0}\log|\xi+k_0|\,\mathrm{d}\left(1-\gamma^\dagger(\xi)B_1\gamma(\xi)\right)-\frac{\pi}{4}, \end{align*}

    c(\cdot) is rapidly decreasing, \Gamma(\cdot) is the Gamma function, and \gamma_1 and \gamma_2 are the first and the second rows of \gamma(k) , respectively.

    Remark 1.1. The two conditions (P1) and (P2) satisfied by γ(k) are necessary. The condition (P1) guarantees the existence and the uniqueness of the solutions of the basic Riemann-Hilbert problem (2.16) and the Riemann-Hilbert problem (3.1). The boundedness of the function δ(k) defined in subsection 3.1 relies on the condition (P2).

    Remark 1.2. In the case of a = -1 and b = 0 , the generalized Sasa-Satsuma equation (1.2) reduces to the Sasa-Satsuma equation. Then it is obvious that the condition (P1) holds, and the condition (P2) reduces to the requirement that |\gamma(k)| is bounded. Therefore, the conditions (P1) and (P2) in this case are equivalent to the condition imposed on the reflection coefficient in [34], namely that |\gamma(k)| is bounded, for the Sasa-Satsuma equation.
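    As a brief check of Remark 1.2 (a minimal computation, assuming the sign conventions reconstructed above, i.e., a = -1 , b = 0 , so that B_1 = -I and \gamma^\dagger B_1\gamma = -|\gamma|^2 ), the three inequalities in (P1) become

    \begin{equation*} -|\gamma(k)|^2 < 1, \qquad -2|\gamma_1(k)|^2 < 2, \qquad -2|\gamma(k)|^2+|\gamma(k)|^2+|\gamma(k)|^2 = 0 < 4, \end{equation*}

    which hold automatically, while the branch a < 0 of (P2) requires

    \begin{equation*} |B_1\gamma(k)|^2-2a = |\gamma(k)|^2+2 \qquad \text{and} \qquad \frac{\det B_1\,|\gamma(k)|^2-2a}{1-\gamma^\dagger(k)B_1\gamma(k)} = \frac{|\gamma(k)|^2+2}{1+|\gamma(k)|^2}\le2 \end{equation*}

    to be positive and bounded, which is the case precisely when |\gamma(k)| is bounded.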

    The outline of this paper is as follows. In Section 2, we derive a Riemann-Hilbert problem from the scattering relation; the solution of the generalized Sasa-Satsuma equation is thereby expressed in terms of the solution of this Riemann-Hilbert problem. In Section 3, we analyze the Riemann-Hilbert problem via the nonlinear steepest descent method, from which the long-time asymptotics in Theorem 1.1 is obtained at the end.

    We begin with the 3×3 Lax pair of the generalized Sasa-Satsuma equation

    \begin{gather} \psi_x = (ik\sigma+U)\psi, \end{gather} (2.1a)
    \begin{gather} \psi_t = (4ik^3\sigma+V)\psi, \end{gather} (2.1b)

    where \psi is a matrix function and k is the spectral parameter, \sigma = \mathrm{diag}(1,1,-1) ,

    \begin{gather} U = \left(\begin{array}{ccc} 0 & 0 & u\\ 0 & 0 & u^\ast\\ -au^\ast-bu & -au-b^\ast u^\ast & 0 \end{array}\right), \end{gather} (2.2)
    \begin{gather} V = 4k^2U+2ik\left(U^2+U_x\right)\sigma+[U_x,U]-U_{xx}+2U^3. \end{gather} (2.3)

    We introduce a new eigenfunction \mu through \mu = \psi e^{-ik\sigma x-4ik^3\sigma t} , where e^{\sigma} = \mathrm{diag}(e, e, e^{-1}) . Then (2.1a) and (2.1b) become

    \begin{gather} \mu_x = ik[\sigma,\mu]+U\mu, \end{gather} (2.4a)
    \begin{gather} \mu_t = 4ik^3[\sigma,\mu]+V\mu, \end{gather} (2.4b)

    where [\cdot,\cdot] is the commutator, [\sigma,\mu] = \sigma\mu-\mu\sigma . From (2.4a), the matrix Jost solutions \mu_\pm satisfy the Volterra integral equations

    \begin{gather} \mu_\pm(k;x,t) = I+\int_{\pm\infty}^{x}e^{ik\sigma(x-\xi)}U(\xi,t)\mu_\pm(k;\xi,t)e^{-ik\sigma(x-\xi)}\,\mathrm{d}\xi. \end{gather} (2.5)

    Let \mu_{\pm L} denote the first two columns of \mu_\pm and \mu_{\pm R} denote the third column, i.e., \mu_\pm = (\mu_{\pm L}, \mu_{\pm R}) . Furthermore, we can infer that \mu_{+R} and \mu_{-L} are analytic in the lower complex k -plane \mathbb{C}^- , while \mu_{+L} and \mu_{-R} are analytic in the upper complex k -plane \mathbb{C}^+ . Then we can introduce the sectionally analytic functions P_1(k) and P_2(k) by

    \begin{align*} P_1(k) &= \left(\mu_{-L}(k), \mu_{+R}(k)\right), \quad k\in\mathbb{C}^-, \\ P_2(k) &= \left(\mu_{+L}(k), \mu_{-R}(k)\right), \quad k\in\mathbb{C}^+. \end{align*}

    From (2.2), one finds that U is traceless, so \det\mu_\pm are independent of x . Moreover, \det\mu_\pm = 1 by evaluating \det\mu_\pm at x = \pm\infty . Because both \mu_\pm e^{ik\sigma x+4ik^3\sigma t} satisfy the differential equations (2.1a) and (2.1b), they are linearly related. Hence there exists a scattering matrix s(k) such that

    \begin{gather} \mu_- = \mu_+e^{ik\sigma x+4ik^3\sigma t}s(k)e^{-ik\sigma x-4ik^3\sigma t}, \qquad \det s(k) = 1. \end{gather} (2.6)

    In this paper, we denote a 3×3 matrix A by the block form

    \begin{equation*} A = \left(\begin{array}{cc} A_{11} & A_{12}\\ A_{21} & A_{22} \end{array}\right), \end{equation*}

    where A_{11} is a 2\times2 matrix and A_{22} is a scalar. Let q = (u, u^\ast)^T ; then we can rewrite U of (2.2) as

    \begin{equation*} U = \left(\begin{array}{cc} 0_{2\times2} & q\\ -q^\dagger B_1 & 0 \end{array}\right), \end{equation*}

    where "" is the Hermitian conjugate. In addition, there are two symmetry properties for U,

    \begin{gather} B^{-1}U^\dagger(k^\ast)B = -U(k), \qquad \tau U^\ast(-k^\ast)\tau = U(k), \end{gather} (2.7)
    \begin{gather} B = \left(\begin{array}{cc} B_1 & 0\\ 0 & 1 \end{array}\right), \qquad \tau = \left(\begin{array}{cc} \sigma_1 & 0\\ 0 & 1 \end{array}\right), \qquad \sigma_1 = \left(\begin{array}{cc} 0 & 1\\ 1 & 0 \end{array}\right), \end{gather} (2.8)

    where B and \tau are written in block form. Hence, the Jost solutions \mu_\pm and the scattering matrix s(k) have the corresponding symmetry properties

    \begin{gather} B^{-1}\mu_\pm^\dagger(k^\ast)B = \mu_\pm^{-1}(k), \qquad \tau\mu_\pm^\ast(-k^\ast)\tau = \mu_\pm(k); \end{gather} (2.9)
    \begin{gather} B^{-1}s^\dagger(k^\ast)B = s^{-1}(k), \qquad \tau s^\ast(-k^\ast)\tau = s(k). \end{gather} (2.10)

    We write s(k) in the block form (s_{ij})_{2\times2} , and from the symmetry properties (2.10) we have

    \begin{gather} s_{22}^\ast(k^\ast) = \det[s_{11}(k)], \qquad B_1^{-1}s_{21}^\dagger(k^\ast) = -\,\mathrm{adj}[s_{11}(k)]\,s_{12}(k), \end{gather} (2.11)

    where \mathrm{adj}\,X denotes the adjugate (classical adjoint) of the matrix X . Then we can write s(k) as

    \begin{gather} s(k) = \left(\begin{array}{cc} s_{11}(k) & s_{12}(k)\\ -s_{12}^\dagger(k^\ast)\,\mathrm{adj}[s_{11}^\dagger(k^\ast)]B_1 & \det[s_{11}^\ast(k^\ast)] \end{array}\right), \end{gather} (2.12)

    where

    \begin{gather} \sigma_1s_{11}^\ast(-k^\ast)\sigma_1 = s_{11}(k), \qquad \sigma_1s_{12}^\ast(-k^\ast) = s_{12}(k). \end{gather} (2.13)

    From the evaluation of (2.6) at t=0, one infers

    \begin{gather} s(k) = \lim\limits_{x\to+\infty}e^{-ikx\sigma}\mu_-(k;x,0)e^{ikx\sigma}, \end{gather} (2.14)

    which implies that

    \begin{equation} \begin{cases} s_{11}(k) = I+\int_{-\infty}^{+\infty}q(\xi,0)\mu_{-21}(k;\xi,0)\,\mathrm{d}\xi, \\ s_{12}(k) = \int_{-\infty}^{+\infty}e^{-2ik\xi}q(\xi,0)\mu_{-22}(k;\xi,0)\,\mathrm{d}\xi. \end{cases} \end{equation} (2.15)

    Theorem 2.1. Let M(k;x,t) be analytic for k\in\mathbb{C}\backslash\mathbb{R} and satisfy the Riemann-Hilbert problem

    \begin{equation} \begin{cases} M_+(k;x,t) = M_-(k;x,t)J(k;x,t), & k\in\mathbb{R}, \\ M(k;x,t)\to I, & k\to\infty, \end{cases} \end{equation} (2.16)

    where

    \begin{gather} M_\pm(k;x,t) = \lim\limits_{\epsilon\to0^+}M(k\pm i\epsilon;x,t), \end{gather} (2.17)
    \begin{gather} J(k;x,t) = \left(\begin{array}{cc} I-\gamma(k)\gamma^\dagger(k)B_1 & e^{2it\theta}\gamma(k)\\ -e^{-2it\theta}\gamma^\dagger(k)B_1 & 1 \end{array}\right), \end{gather} (2.18)
    \begin{gather} \theta(k;x,t) = -\frac{x}{t}k-4k^3, \qquad \gamma(k) = s_{11}^{-1}(k)s_{12}(k), \end{gather} (2.19)

    γ(k) lies in Schwartz space and satisfies

    \begin{gather} \sigma_1\gamma^\ast(-k) = \gamma(k). \end{gather} (2.20)

    Then the solution of this Riemann-Hilbert problem exists and is unique, the function

    \begin{equation} q(x,t) = (u(x,t), u^\ast(x,t))^T = -2i\lim\limits_{k\to\infty}\left(k\left(M(k;x,t)\right)_{12}\right), \end{equation} (2.21)

    and u(x,t) is the solution of the generalized Sasa-Satsuma equation.

    Proof. The matrix \left(J(k;x,t)+J^\dagger(k;x,t)\right)/2 is positive definite because of the condition (P1) satisfied by \gamma(k) ; hence the solution of the Riemann-Hilbert problem (2.16) exists and is unique according to the Vanishing Lemma [39]. We define M(k;x,t) by

    \begin{equation} M(k;x,t) = \begin{cases} \left(\mu_{-L}(k), \dfrac{\mu_{+R}(k)}{\det[a(k)]}\right), & k\in\mathbb{C}^-, \\ \left(\mu_{+L}(k)a^{-1}(k), \mu_{-R}(k)\right), & k\in\mathbb{C}^+. \end{cases} \end{equation} (2.22)

    Considering the scattering relation (2.6) and the construction of M(k;x,t) , we can obtain the jump condition and the corresponding Riemann-Hilbert problem (2.16) after tedious but straightforward algebraic manipulations. Substituting the large- k asymptotic expansion of M(k;x,t) into (2.4a) and comparing the coefficients of O(\frac{1}{k}) , we obtain (2.21).

    In this section, we analyze the Riemann-Hilbert problem (2.16) by the nonlinear steepest descent method and study the long-time asymptotic behavior of the solution. We fix the following basic notation. (i) For any matrix M , define |M| = \left(\mathrm{tr}\,M^\dagger M\right)^{\frac{1}{2}} , and for any matrix function A(\cdot) , define \|A(\cdot)\|_p = \||A(\cdot)|\|_p . (ii) For two quantities A and B , write A\lesssim B if there exists a constant C > 0 such that |A|\le CB ; if C depends on a parameter \alpha , we write A\lesssim_\alpha B . (iii) For any oriented contour \Sigma , we say that the left side is " + " and the right side is " - ".

    First of all, it is noteworthy that there are two stationary points \pm k_0 , where \pm k_0 = \pm\sqrt{-\frac{x}{12t}} satisfy \frac{\mathrm{d}\theta}{\mathrm{d}k}\big|_{k = \pm k_0} = 0 . The jump matrix J(k;x,t) has a lower-upper triangular factorization and an upper-lower triangular factorization. We can introduce an appropriate Riemann-Hilbert problem to unify these two factorizations. In this process, we have to reorient the contour of the Riemann-Hilbert problem.
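    For completeness (a one-line computation, using the phase \theta as reconstructed in (2.19)), the stationary points follow from

    \begin{equation*} \frac{\mathrm{d}\theta}{\mathrm{d}k} = -\frac{x}{t}-12k^2 = 0 \quad\Longrightarrow\quad k = \pm\sqrt{-\frac{x}{12t}} = \pm k_0, \end{equation*}

    which are real precisely because x < 0 and t > 0 .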

    The two factorizations of the jump matrix J are

    \begin{equation*} J = \begin{cases} \left(\begin{array}{cc} I & e^{2it\theta}\gamma(k)\\ 0 & 1 \end{array}\right)\left(\begin{array}{cc} I & 0\\ -e^{-2it\theta}\gamma^\dagger(k)B_1 & 1 \end{array}\right), \\[2mm] \left(\begin{array}{cc} I & 0\\ \dfrac{-e^{-2it\theta}\gamma^\dagger(k)B_1}{1-\gamma^\dagger(k)B_1\gamma(k)} & 1 \end{array}\right)\left(\begin{array}{cc} I-\gamma(k)\gamma^\dagger(k)B_1 & 0\\ 0 & \left(1-\gamma^\dagger(k)B_1\gamma(k)\right)^{-1} \end{array}\right)\left(\begin{array}{cc} I & \dfrac{e^{2it\theta}\gamma(k)}{1-\gamma^\dagger(k)B_1\gamma(k)}\\ 0 & 1 \end{array}\right). \end{cases} \end{equation*}

    We introduce a 2\times2 matrix function \delta(k) to unify the two factorizations; \delta(k) satisfies the following Riemann-Hilbert problem

    \begin{equation} \begin{cases} \delta_+(k) = \delta_-(k)\left(I-\gamma(k)\gamma^\dagger(k)B_1\right), & k\in(-k_0,k_0), \\ \delta_+(k) = \delta_-(k), & k\in(-\infty,-k_0)\cup(k_0,+\infty), \\ \delta(k)\to I, & k\to\infty, \end{cases} \end{equation} (3.1)

    which implies a scalar Riemann-Hilbert problem

    \begin{equation} \begin{cases} \det\delta_+(k) = \det\delta_-(k)\left(1-\gamma^\dagger(k)B_1\gamma(k)\right), & k\in(-k_0,k_0), \\ \det\delta_+(k) = \det\delta_-(k), & k\in(-\infty,-k_0)\cup(k_0,+\infty), \\ \det\delta(k)\to1, & k\to\infty. \end{cases} \end{equation} (3.2)

    The jump matrix I-\gamma(k)\gamma^\dagger(k)B_1 of the Riemann-Hilbert problem (3.1) is positive definite, so the solution \delta(k) exists and is unique. The scalar Riemann-Hilbert problem (3.2) can be solved by the Plemelj formula,

    \begin{gather} \det\delta(k) = \left(\frac{k-k_0}{k+k_0}\right)^{i\nu}e^{\chi(k)}, \end{gather} (3.3)

    where

    \begin{align*} \nu &= -\frac{1}{2\pi}\log\left(1-\gamma^\dagger(k_0)B_1\gamma(k_0)\right), \\ \chi(k) &= \frac{1}{2\pi i}\int_{-k_0}^{k_0}\log\left(\frac{1-\gamma^\dagger(\xi)B_1\gamma(\xi)}{1-\gamma^\dagger(k_0)B_1\gamma(k_0)}\right)\frac{\mathrm{d}\xi}{\xi-k}. \end{align*}
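    To indicate how (3.3) arises (a standard Plemelj-formula computation, sketched here for the reader's convenience), the additive jump problem for \log\det\delta(k) gives

    \begin{equation*} \log\det\delta(k) = \frac{1}{2\pi i}\int_{-k_0}^{k_0}\frac{\log\left(1-\gamma^\dagger(\xi)B_1\gamma(\xi)\right)}{\xi-k}\,\mathrm{d}\xi, \end{equation*}

    and splitting the logarithm in the numerator into its value at \xi = k_0 plus the remainder, together with \frac{1}{2\pi i}\int_{-k_0}^{k_0}\frac{\mathrm{d}\xi}{\xi-k} = \frac{1}{2\pi i}\log\frac{k-k_0}{k+k_0} , yields exactly the factor \left(\frac{k-k_0}{k+k_0}\right)^{i\nu} and the function \chi(k) above.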

    Then we have by uniqueness that

    \begin{gather} \delta(k) = B_1^{-1}\left(\delta^\dagger(k^\ast)\right)^{-1}B_1, \qquad \delta(k) = \sigma_1\delta^\ast(-k^\ast)\sigma_1. \end{gather} (3.4)

    Substituting (3.4) into (3.1), we have

    \begin{gather} \delta_+^\dagger(k)B_1\delta_+(k) = B_1-B_1\gamma(k)\gamma^\dagger(k)B_1, \end{gather} (3.5)

    which means that

    \begin{gather} \mathrm{tr}\left[\delta_+^\dagger(k)B_1\delta_+(k)\right] = 2a-|B_1\gamma(k)|^2. \end{gather} (3.6)

    Actually, the condition (P2) satisfied by \gamma(k) guarantees the boundedness of \delta_\pm(k) , and we give a brief proof below. When \det B_1 > 0 , the Hermitian matrix B_1 admits a decomposition: there exists a triangular matrix S such that B_1 = aS^\dagger S , so that \mathrm{tr}\left[\delta_+^\dagger B_1\delta_+\right] = a|S\delta_+|^2 . When \det B_1 < 0 and |a| > 0 , the matrix B_1 has a decomposition B_1 = S^\dagger DS , where S is a triangular matrix and D is a diagonal matrix whose diagonal elements have opposite signs. In the case a > 0 , B_1 can be decomposed as follows,

    \begin{gather} B_1 = \left(\begin{array}{cc} a & -b^\ast\\ 0 & 1 \end{array}\right)^{-1}\left(\begin{array}{cc} a\det B_1 & 0\\ 0 & a \end{array}\right)\left(\begin{array}{cc} a & 0\\ -b & 1 \end{array}\right)^{-1}. \end{gather} (3.7)
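    Since the entries and signs in (3.7) are easy to mis-transcribe, the following is a quick numerical sanity check of the factorization as reconstructed here (an illustrative sketch with arbitrarily chosen sample values a = 1 , b = 2+i , so that \det B_1 < 0 ; it is not part of the original argument):

    ```python
    import numpy as np

    # Sample parameters with a > 0 and det(B1) = a^2 - |b|^2 < 0 (assumed values).
    a = 1.0
    b = 2.0 + 1.0j
    detB1 = a**2 - abs(b)**2

    # Reconstructed B1 = [[a, conj(b)], [b, a]] and the three factors in (3.7).
    B1 = np.array([[a, np.conj(b)], [b, a]])
    P = np.array([[a, -np.conj(b)], [0.0, 1.0]])      # left triangular factor
    D = np.diag([a * detB1, a])                       # indefinite diagonal middle factor
    Q = np.array([[a, 0.0], [-b, 1.0]])               # right triangular factor

    reconstructed = np.linalg.inv(P) @ D @ np.linalg.inv(Q)
    print(np.allclose(reconstructed, B1))             # True: the factorization reproduces B1
    ```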

    We denote S\delta_+(k) by (G_{ij})_{2\times2} , and c_1 = 2a-|B_1\gamma(k)|^2 is negative; then

    \begin{gather} a\det B_1\left(|G_{11}|^2+|G_{21}|^2\right)+a\left(|G_{12}|^2+|G_{22}|^2\right) = c_1. \end{gather} (3.8)

    Noticing that \det B_1 < 0 , we can find a negative constant c_2 that satisfies c_2\le a\det B_1(c_3-1)/(1-\det B_1c_3) , where c_3 is a constant with 0 < c_3 < 1 , which implies

    \begin{gather} |S\delta_+(k)|^2\le\frac{c_1}{c_2}\lesssim1. \end{gather} (3.9)

    The case a < 0 is similar. In particular, when a = 0 , then |b| > 0 , and it is easy to see that B_1 is not definite. For |\mathrm{Re}\,b| > 0 , we have the decomposition

    B1=(b|b|2+(b)2b|b|2+(b)2b|b|2+(b)2b|b|2+(b)2)1(1b+b001b+b)(b|b|2+b2b|b|2+b2b|b|2+b2b|b|2+b2)1. (3.10)

    For |Reb|=0, we have

    B1=(i21i2i21+i2)1(ib/200ib/2)(i2i21+i21i2)1. (3.11)

    So we obtain the boundedness of |\delta_+(k)| . The other quantities admit the same analysis:

    \begin{gather} \delta_-^\dagger(k)B_1\delta_-(k) = \left(B_1^{-1}-\gamma(k)\gamma^\dagger(k)\right)^{-1}, \quad k\in(-k_0,k_0), \end{gather} (3.12)
    \begin{gather} |\delta_+(k)|^2 = |\delta_-(k)|^2 = 2, \quad k\in(-\infty,-k_0)\cup(k_0,+\infty), \end{gather} (3.13)
    \begin{gather} |\det\delta_+(k)| = \begin{cases} \sqrt{1-\gamma^\dagger(k)B_1\gamma(k)}, & k\in(-k_0,k_0), \\ 1, & k\in(-\infty,-k_0)\cup(k_0,+\infty), \end{cases} \end{gather} (3.14)
    \begin{gather} |\det\delta_-(k)| = \begin{cases} \dfrac{1}{\sqrt{1-\gamma^\dagger(k)B_1\gamma(k)}}, & k\in(-k_0,k_0), \\ 1, & k\in(-\infty,-k_0)\cup(k_0,+\infty). \end{cases} \end{gather} (3.15)

    Hence, by the maximum principle, we have

    \begin{gather} |\delta(k)|\le\mathrm{const} < \infty, \qquad |\det\delta(k)|\le\mathrm{const} < \infty, \end{gather} (3.16)

    for all k\in\mathbb{C} . We define the functions

    \begin{gather} \rho(k) = \begin{cases} \gamma(k), & k\in(-\infty,-k_0)\cup(k_0,+\infty), \\ \dfrac{\gamma(k)}{1-\gamma^\dagger(k)B_1\gamma(k)}, & k\in(-k_0,k_0), \end{cases} \end{gather} (3.17)
    \begin{gather} \Delta(k) = \left(\begin{array}{cc} \delta(k) & 0\\ 0 & \left(\det\delta(k)\right)^{-1} \end{array}\right). \end{gather} (3.18)

    We reverse the orientation for k\in(-\infty,-k_0)\cup(k_0,+\infty) as in Figure 1, and M^\Delta(k;x,t) = M(k;x,t)\Delta^{-1}(k) satisfies the Riemann-Hilbert problem on the reoriented contour

    \begin{gather} \begin{cases} M^\Delta_+(k;x,t) = M^\Delta_-(k;x,t)J^\Delta(k;x,t), & k\in\mathbb{R}, \\ M^\Delta(k;x,t)\to I, & k\to\infty, \end{cases} \end{gather} (3.19)
    Figure 1.  The reoriented contour on R.

    where the jump matrix JΔ(k;x,t) has a decomposition

    \begin{gather} J^\Delta(k;x,t) = (b_-)^{-1}b_+ = \left(\begin{array}{cc} I & 0\\ -e^{-2it\theta(k)}\rho^\dagger(k)B_1\delta_-^{-1}(k)\left[\det\delta_-(k)\right]^{-1} & 1 \end{array}\right)\left(\begin{array}{cc} I & e^{2it\theta}\delta_+(k)\rho(k)\left[\det\delta_+(k)\right]\\ 0 & 1 \end{array}\right). \end{gather} (3.20)

    For the convenience of discussion, we define

    \begin{align*} L &= \left\{k_0+\alpha k_0e^{\frac{3\pi i}{4}}:-\infty < \alpha\le\sqrt{2}\right\}\cup\left\{-k_0+\alpha k_0e^{\frac{\pi i}{4}}:-\infty < \alpha\le\sqrt{2}\right\}, \\ L_\epsilon &= \left\{k_0+\alpha k_0e^{\frac{3\pi i}{4}}:\epsilon < \alpha\le\sqrt{2}\right\}\cup\left\{-k_0+\alpha k_0e^{\frac{\pi i}{4}}:\epsilon < \alpha\le\sqrt{2}\right\}. \end{align*}

    Theorem 3.1. The vector function ρ(k) has a decomposition

    \begin{equation*} \rho(k) = h_1(k)+h_2(k)+R(k), \quad k\in\mathbb{R}, \end{equation*}

    where R(k) is a piecewise-rational function and h_2(k) has an analytic continuation to L . Besides, they admit the following estimates

    \begin{gather} \left|e^{2it\theta(k)}h_1(k)\right|\lesssim\frac{1}{(1+|k|^2)t^{l}}, \quad k\in\mathbb{R}, \end{gather} (3.21)
    \begin{gather} \left|e^{2it\theta(k)}h_2(k)\right|\lesssim\frac{1}{(1+|k|^2)t^{l}}, \quad k\in L, \end{gather} (3.22)
    \begin{gather} \left|e^{2it\theta(k)}R(k)\right|\lesssim e^{-16\epsilon^2k_0^3t}, \quad k\in L_\epsilon, \end{gather} (3.23)

    for an arbitrary positive integer l. Considering the Schwartz conjugate

    \begin{equation*} \rho^\dagger(k) = R^\dagger(k)+h_1^\dagger(k)+h_2^\dagger(k), \end{equation*}

    we can obtain the same estimates for e^{-2it\theta(k)}h_1^\dagger(k) , e^{-2it\theta(k)}h_2^\dagger(k) and e^{-2it\theta(k)}R^\dagger(k) on \mathbb{R}\cup\bar{L} .

    Proof. It follows from Proposition 4.2 in [18].

    A direct calculation shows that b± of (3.20) can be decomposed further

    b+=bo+ba+=(I3×3+ωo+)(I3×3+ωa+)=(I2×2e2itθ[detδ+(k)]δ+(k)h1(k)01)(I2×2e2itθ[detδ+(k)]δ+(k)[h2(k)+R(k)]01),b=boba=(I3×3ωo)(I3×3ωa)=(I2×20e2itθh1(k)B1δ1(k)detδ(k)1)(I2×20e2itθ[h2(k)+R(k)]B1δ1(k)detδ(k)1).

    Define the oriented contour \Sigma by \Sigma = L\cup\bar{L} as in Figure 2. Let

    \begin{equation} M'(k;x,t) = \begin{cases} M^\Delta(k;x,t), & k\in\Omega_1\cup\Omega_2, \\ M^\Delta(k;x,t)(b_+^a)^{-1}, & k\in\Omega_3\cup\Omega_4\cup\Omega_5, \\ M^\Delta(k;x,t)(b_-^a)^{-1}, & k\in\Omega_6\cup\Omega_7\cup\Omega_8. \end{cases} \end{equation} (3.24)
    Figure 2.  The contour Σ.

    Lemma 3.1. M'(k;x,t) is the solution of the Riemann-Hilbert problem

    \begin{equation} \begin{cases} M'_+(k;x,t) = M'_-(k;x,t)J'(k;x,t), & k\in\Sigma, \\ M'(k;x,t)\to I, & k\to\infty, \end{cases} \end{equation} (3.25)

    where the jump matrix J'(k;x,t) satisfies

    \begin{equation} J'(k;x,t) = (b'_-)^{-1}b'_+ = \begin{cases} I^{-1}b_+^a, & k\in L, \\ (b_-^a)^{-1}I, & k\in\bar{L}, \\ (b_-^o)^{-1}b_+^o, & k\in\mathbb{R}. \end{cases} \end{equation} (3.26)

    Proof. We can construct the Riemann-Hilbert problem (3.25) based on the Riemann-Hilbert problem (3.19) and the decomposition of b_\pm . In the meantime, the asymptotics of M'(k;x,t) is derived from the convergence of b_\pm as k\to\infty . For fixed x and t , we pay attention to the domain \Omega_3 . Noticing the boundedness of \delta(k) and \det\delta(k) in (3.16), we arrive at

    \begin{equation*} \left|e^{2it\theta}\left[\det\delta(k)\right]\left[h_2(k)+R(k)\right]\delta(k)\right|\lesssim\left|e^{2it\theta}h_2(k)\right|+\left|e^{2it\theta}R(k)\right|. \end{equation*}

    Consider the definition of R(k) in this domain,

    |e2itθh2(k)|1|k+i|2,|e2itθR(k)||mi=0μi(kk0)i||(k+i)m+5|1|k+i|5,

    where m is a positive integer and μi is the coefficient of the Taylor series around k0. Combining with the boundedness of h2(k) in Theorem 3.1, we obtain that M(k;x,t)I when kΩ3 and k. The others are similar to this domain.

    The above Riemann-Hilbert problem (3.25) can be solved as follows. Set

    \begin{equation*} \omega_\pm = \pm(b'_\pm-I), \qquad \omega = \omega_++\omega_-. \end{equation*}

    Let

    \begin{gather} (C_\pm f)(k) = \int_\Sigma\frac{f(\xi)}{\xi-k_\pm}\,\frac{\mathrm{d}\xi}{2\pi i}, \qquad f\in L^2(\Sigma), \end{gather} (3.27)

    denote the Cauchy operators, where C_+f ( C_-f ) denotes the left (right) boundary value on the oriented contour \Sigma in Figure 2. Define the operator C_\omega: L^2(\Sigma)+L^\infty(\Sigma)\to L^2(\Sigma) by

    \begin{gather} C_\omega f = C_+(f\omega_-)+C_-(f\omega_+) \end{gather} (3.28)

    for the 3×3 matrix function f.

    Lemma 3.2 (Beals-Coifman). Suppose \mu(k;x,t)\in L^2(\Sigma)+L^\infty(\Sigma) is the solution of the singular integral equation

    \begin{equation*} \mu = I+C_\omega\mu. \end{equation*}

    Then

    \begin{equation*} M'(k;x,t) = I+\int_\Sigma\frac{\mu(\xi;x,t)\,\omega(\xi;x,t)}{\xi-k}\,\frac{\mathrm{d}\xi}{2\pi i} \end{equation*}

    is the solution of the Riemann-Hilbert problem (3.25).

    Proof. See [18], P. 322 and [40].
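    For the reader's convenience, here is the short computation behind Lemma 3.2 (a standard argument, sketched in the notation above; it is not spelled out in the source). Using \mu = I+C_\omega\mu = I+C_+(\mu\omega_-)+C_-(\mu\omega_+) and the Plemelj relation C_+-C_- = 1 ,

    \begin{align*} M'_+ &= I+C_+(\mu\omega_-)+C_+(\mu\omega_+) = \mu+\mu\omega_+ = \mu b'_+, \\ M'_- &= I+C_-(\mu\omega_-)+C_-(\mu\omega_+) = \mu-\mu\omega_- = \mu b'_-, \end{align*}

    so that M'_+ = M'_-\,(b'_-)^{-1}b'_+ = M'_-J' on \Sigma , which is the jump condition in (3.25).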

    Theorem 3.2. The solution q(x,t) can be expressed as

    \begin{equation} q(x,t) = (u(x,t), u^\ast(x,t))^T = \frac{1}{\pi}\left(\int_\Sigma\left((1-C_\omega)^{-1}I\right)(\xi)\,\omega(\xi)\,\mathrm{d}\xi\right)_{12}. \end{equation} (3.29)

    Proof. From (2.21), (3.24) and Lemma 3.2, the solution q(x,t) of the generalized Sasa-Satsuma equation is expressed by

    \begin{align*} q(x,t) = -2i\lim\limits_{k\to\infty}\left[k\left(M(k;x,t)\right)_{12}\right] = \frac{1}{\pi}\left(\int_\Sigma\mu(\xi;x,t)\,\omega(\xi)\,\mathrm{d}\xi\right)_{12} = \frac{1}{\pi}\left(\int_\Sigma\left((1-C_\omega)^{-1}I\right)(\xi)\,\omega(\xi)\,\mathrm{d}\xi\right)_{12}. \end{align*}

    Set \Sigma' = \Sigma\backslash\left(\mathbb{R}\cup L_\epsilon\cup\bar{L}_\epsilon\right) , oriented as in Figure 3. We will convert the Riemann-Hilbert problem on the contour \Sigma to a Riemann-Hilbert problem on the contour \Sigma' and estimate the error between the two Riemann-Hilbert problems. Let \omega = \omega^e+\omega' = \omega^a+\omega^b+\omega^c+\omega' , where \omega^a = \omega|_{\mathbb{R}} is supported on \mathbb{R} and is composed of terms of type h_1(k) and h_1^\dagger(k) ; \omega^b is supported on L\cup\bar{L} and is composed of the contributions to \omega from terms of type h_2(k) and h_2^\dagger(k) ; \omega^c is supported on L_\epsilon\cup\bar{L}_\epsilon and is composed of the contributions to \omega from terms of type R(k) and R^\dagger(k) .

    Figure 3.  The contour Σ.

    Lemma 3.3. For an arbitrary positive integer l , as t\to\infty ,

    \begin{gather} \|\omega^a\|_{L^1(\mathbb{R})\cap L^2(\mathbb{R})\cap L^\infty(\mathbb{R})}\lesssim t^{-l}, \end{gather} (3.30)
    \begin{gather} \|\omega^b\|_{L^1(L\cup\bar{L})\cap L^2(L\cup\bar{L})\cap L^\infty(L\cup\bar{L})}\lesssim t^{-l}, \end{gather} (3.31)
    \begin{gather} \|\omega^c\|_{L^1(L_\epsilon\cup\bar{L}_\epsilon)\cap L^2(L_\epsilon\cup\bar{L}_\epsilon)\cap L^\infty(L_\epsilon\cup\bar{L}_\epsilon)}\lesssim e^{-16\epsilon^2k_0^3t}, \end{gather} (3.32)
    \begin{gather} \|\omega'\|_{L^2(\Sigma')}\lesssim(tk_0^3)^{-\frac{1}{4}}, \qquad \|\omega'\|_{L^1(\Sigma')}\lesssim(tk_0^3)^{-\frac{1}{2}}. \end{gather} (3.33)

    Proof. The proof of estimates (3.30), (3.31), (3.32) follows from Theorem 3.1. Afterwards, we consider the definition of R(k) on the contour {k=k0+αk0e3πi4|<α<ϵ},

    |R(k)|(1+|k|5)1.

    Resorting to Re(iθ)8α2k30 and the boundedness of δ(k) and detδ(k) in (3.16), we can obtain

    |e2itθ[detδ(k)]R(k)δ(k)|e16tk30α2(1+|k|5)1.

    Then we obtain (3.33) by simple computations.

    Lemma 3.4. As t\to\infty , (1-C_\omega)^{-1}:L^2(\Sigma)\to L^2(\Sigma) exists and is uniformly bounded:

    \begin{equation*} \left\|(1-C_\omega)^{-1}\right\|_{L^2(\Sigma)}\lesssim1. \end{equation*}

    Furthermore, \left\|(1-C_{\omega'})^{-1}\right\|_{L^2(\Sigma)}\lesssim1 .

    Proof. It follows from Proposition 2.23 and Corollary 2.25 in [18].

    Lemma 3.5. As t\to\infty ,

    \begin{gather} \int_\Sigma\left((1-C_\omega)^{-1}I\right)(\xi)\,\omega(\xi)\,\mathrm{d}\xi = \int_\Sigma\left((1-C_{\omega'})^{-1}I\right)(\xi)\,\omega'(\xi)\,\mathrm{d}\xi+O\left((tk_0^3)^{-l}\right). \end{gather} (3.34)

    Proof. A simple computation shows that

    ((1Cω)1I)ω=((1Cω)1I)ω+ωe+((1Cω)1(CωeI))ω+((1Cω)1(CωI))ωe+((1Cω)1Cωe(1Cω))(CωI)ω. (3.35)

    After a series of tedious computations and utilizing the consequence of Lemma 3.4, we arrive at

    ωeL1(Σ)ωaL1(R)+ωbL1(LL)+ωcL1(LϵLϵ)(tk30)l,((1Cω)1(CωeI))ωL1(Σ)(1Cω)1L2(Σ)CωeIL2(Σ)ωL2(Σ)ωeL2(Σ)ωL2(Σ)(tk30)l14,((1Cω)1(CωI))ωeL1(Σ)(1C1ω)L2(Σ)CωIL2(Σ)ωeL2(Σ)ωL2(Σ)ωeL2(Σ)(tk30)l14,((1Cω)1Cωe(1Cω))(CωI)ωL1(Σ)(1Cω)1L2(Σ)(1Cω)1L2(Σ)CωeL2(Σ)CωIL2(Σ)ωL2(Σ)ωeL(Σ)ω2L2(Σ)(tk30)l12.

    Then the proof is accomplished as long as we substitute the estimates above into (3.35).

    Notice that \omega'(k) = 0 when k\in\Sigma\backslash\Sigma' ; let C_{\omega'}|_{L^2(\Sigma')} denote the restriction of C_{\omega'} to L^2(\Sigma') . For simplicity, we write C_{\omega'}|_{L^2(\Sigma')} as C_{\omega'} . Then

    \begin{equation*} \int_\Sigma\left((1-C_{\omega'})^{-1}I\right)(\xi)\,\omega'(\xi)\,\mathrm{d}\xi = \int_{\Sigma'}\left((1-C_{\omega'})^{-1}I\right)(\xi)\,\omega'(\xi)\,\mathrm{d}\xi. \end{equation*}

    Lemma 3.6. As t\to\infty ,

    \begin{gather} q(x,t) = (u(x,t), u^\ast(x,t))^T = \frac{1}{\pi}\left(\int_{\Sigma'}\left((1-C_{\omega'})^{-1}I\right)(\xi)\,\omega'(\xi)\,\mathrm{d}\xi\right)_{12}+O\left((tk_0^3)^{-l}\right). \end{gather} (3.36)

    Proof. From (3.29) and (3.34), we can obtain the result directly.

    Let L' = L\backslash L_\epsilon and \mu' = (1-C_{\omega'})^{-1}I . Then

    M(k;x,t)=I+Σμ(k;x,t)ω(k;x,t)ξkdξ2πi

    solves the Riemann-Hilbert problem

    {M+(k;x,t)=M(k;x,t)J(k;x,t),kΣ,M(k;x,t)I,k,

    where

    J=(b)1b+=(Iω)1(I+ω+),ω=ω++ω,b+=(Ie2itθ[detδ(k)]δ(k)R(k)01),b=I,on L,b+=I,b=(I0e2itθR(k)B1δ1(k)detδ(k)1),on (L).

    Let the contour Σ=ΣAΣB and ω±=ωA±+ωB±, where

    ωA±(k)={ω±(k),kΣA,0,kΣB,ωB±(k)={0,kΣA,ω±(k),kΣB. (3.37)

    Define the operators CωA and CωB: L2(Σ)+L(Σ)L2(Σ) as in definition (3.28).

    Lemma 3.7.

    ||CωBCωA||L2(Σ)=||CωACωB||L2(Σ)k0(tk30)12,
    ||CωBCωA||L(Σ)L2(Σ),||CωACωB||L(Σ)L2(Σ)k0(tk30)34.

    Proof. See Lemma 3.5 in [18].

    Lemma 3.8. As t,

    Σ((1Cω)1I)(ξ)ω(ξ)dξ=ΣA((1CωA)1I)(ξ)ωA(ξ)dξ+ΣB((1CωB)1I)(ξ)ωB(ξ)dξ+O(c(k0)t). (3.38)

    Proof. From identity

    (1CωACωB)(1+CωA(1CωA)1+CωB(1CωB)1)=1CωBCωA(1CωA)1CωACωB(1CωB)1,

    we have

    (1Cω)1=1+CωA(1CωA)1+CωB(1CωB)1+[1+CωA(1CωA)1+CωB(1CωB)1][1CωBCωA(1CωA)1CωACωB(1CωB)1]1[CωBCωA(1CωA)1+CωACωB(1CωB)1].

    Based on Lemma 3.7 and Lemma 3.4, we arrive at (3.38).

    For the sake of convenience, we write the restriction CωAL2(ΣA) as CωA, similar for CωB. From the consequences of Lemma 3.6 and Lemma 3.8, as t, we have

    \begin{gather} q(x,t) = \frac{1}{\pi}\left(\int_{\Sigma_A}\left((1-C_{\omega_A^\prime})^{-1}I\right)(\xi)\,\omega_A^\prime(\xi)\,\mathrm{d}\xi\right)_{12}+\frac{1}{\pi}\left(\int_{\Sigma_B}\left((1-C_{\omega_B^\prime})^{-1}I\right)(\xi)\,\omega_B^\prime(\xi)\,\mathrm{d}\xi\right)_{12}+O\left(\frac{c(k_0)}{t}\right). \end{gather} (3.39)

    Extend the contours ΣA and ΣB to the contours

    \begin{gather} \hat\Sigma_A = \left\{k = -k_0+k_0\alpha e^{\pm\frac{\pi i}{4}}:\alpha\in\mathbb{R}\right\}, \end{gather} (3.40)
    \begin{gather} \hat\Sigma_B = \left\{k = k_0+k_0\alpha e^{\pm\frac{3\pi i}{4}}:\alpha\in\mathbb{R}\right\}, \end{gather} (3.41)

    respectively. We introduce ˆωA and ˆωB on ˆΣA and ˆΣB, respectively, by

    ˆωA±={ωA±(k),kΣA,0,kˆΣAΣA,ˆωB±={ωB±(k),kΣB,0,kˆΣBΣB. (3.42)

    Let ΣA and ΣB denote the contours {k=k0αe±πi4:αR} oriented inward as in ΣA, ˆΣA, and outward as in ΣB, ˆΣB, respectively. Define the scaling operators

    \begin{gather} N_A:\ L^2(\hat\Sigma_A)\to L^2(\Sigma^A), \quad f(k)\mapsto(N_Af)(k) = f\left(\frac{k}{\sqrt{48tk_0}}-k_0\right), \end{gather} (3.43)
    \begin{gather} N_B:\ L^2(\hat\Sigma_B)\to L^2(\Sigma^B), \quad f(k)\mapsto(N_Bf)(k) = f\left(\frac{k}{\sqrt{48tk_0}}+k_0\right), \end{gather} (3.44)

    and set

    \begin{equation*} \omega_A = N_A\hat\omega_A, \qquad \omega_B = N_B\hat\omega_B. \end{equation*}

    A simple change-of-variable argument shows that

    \begin{equation*} C_{\hat\omega_A} = N_A^{-1}C_{\omega_A}N_A, \qquad C_{\hat\omega_B} = N_B^{-1}C_{\omega_B}N_B, \end{equation*}

    where the operator CωA (CωB) is a bounded map from L2(ΣA) (L2(ΣB)) into L2(ΣA) (L2(ΣB)). On the part

    LA={k=αk048tk0e3πi4:ϵ<α<+}

    of ΣA, we have

    ωA=ωA+=(0(NAs1)(k)00),

    on LA we have

    ωA=ωA=(00(NAs2)(k)0),

    where

    s1(k)=e2itθ(k)[detδ(k)]δ(k)R(k),s2(k)=e2itθR(k)B1δ1(k)detδ(k).

    Lemma 3.9. As t, and kLA, then

    |(NA˜δ)(k)|tl, (3.45)

    where ˜δ(k)=e2itθ(k)[δ(k)R(k)(detδ(k))R(k)].

    Proof. It follows from (3.1) and (3.2) that ˜δ satisfies the following Riemann-Hilbert problem:

    {˜δ+(k)=˜δ(k)(1γ(k)B1γ(k))+e2itθf(k),k(k0,k0),˜δ(k)0,k. (3.46)

    where f(k)=δ(k)[γ(k)B1γ(k)Iγ(k)γ(k)B1]R(k). The solution for the above Riemann-Hilbert problem can be expressed by

    ˜δ(k)=X(k)k0k0e2itθ(ξ)f(ξ)X+(ξ)(ξk)dξ2πi,X(k)=exp{12πik0k0log(1|γ(ξ)|2)ξkdξ}.

    Observing that

    (γ(k)B1γ(k)Iγ(k)γ(k)B1)R(k)=(γ(k)B1γ(k)Iγ(k)γ(k)B1)(R(k)ρ(k))=adj[B1]adj[γ(k)γ(k)](h1(k)+h2(k)),

    we obtain f(k)=O((k2k20)l). Similar to the Lemma 3.1, f(k) can be decomposed into two parts: f(k)=f1(k)+f2(k), and

    |e2itθ(k)f1(k)|1(1+|k|2)tl,kR, (3.47)
    |e2itθ(k)f2(k)|1(1+|k|2)tl,kLt, (3.48)

    where f2(k) has an analytic continuation to Lt, l is a positive integer and l2,

    Lt={k=k0+k0αe3πi4:0α2(112t)}{k=k0tk0+k0αeπi4:0α2(112t)},

    (see Figure 5).

    Figure 4.  The contour ΣA(ΣB).
    Figure 5.  The contour Lt.

    As kLA, we obtain

    (NA˜δ)(k)=X(k48tk0k0)k0k0tk0e2itθ(ξ)f(ξ)X+(ξ)(ξ+k0k48tk0)dξ2πi+X(k48tk0k0)k0tk0k0e2itθ(ξ)f1(ξ)X+(ξ)(ξ+k0k48tk0)dξ2πi+X(k48tk0k0)k0tk0k0e2itθ(ξ)f2(ξ)X+(ξ)(ξ+k0k48tk0)dξ2πi=I1+I2+I3.
    |I1|k0tk0k0|f(ξ)||ξ+k0k48tk0|dξtl1,
    |I2|k0k0tk0|e2itθ(ξ)f1(ξ)||ξ+k0k48tk0|dξtl2tk0(2k0k0t)tl+1.

    As a consequence of Cauchy's Theorem, we can evaluate I3 along the contour Lt instead of the interval (k0tk0,k0) and obtain |I3|tl+1. Therefore, (3.45) holds.

    Corollary 3.1. As t, and kLA, then

    |(NAˆδ)(k)|tl,t,kLA, (3.49)

    where ˆδ(k)=e2itθ(k)R(k)B1[δ1(k)(detδ(k))1I].

    Let JA0=(IωA0)1(I+ωA0+), where

    ωA0=ωA0+={(0(δ0A)2(k)2iνeik22γ(k0)1γ(k0)B1γ(k0)00),kΣ1A,(0(δ0A)2(k)2iνeik22γ(k0)00),kΣ3A, (3.50)
    δ0A=(196tk30)iν2e8itk30eχ(k0) (3.51)
    ωA0=ωA0={(00(δ0A)2(k)2iνeik22γ(k0)B11γ(k0)B1γ(k0)0),kΣ2A,(00(δ0A)2(k)2iνeik22γ(k0)B10),kΣ4A. (3.52)

    It follows from (3.78) in [18] that

    ωAωA0L1(ΣA)L2(ΣA)L(ΣA)k0logttk30. (3.53)

    There are similar consequences for kΣB. Let JB0=(IωB0)1(I+ωB0+), where

    ωB0=ωB0+={(0(δ0B)2k2iνeik22γ(k0)00),kΣ2B,(0(δ0B)2k2iνeik22γ(k0)1γ(k0)B1γ(k0)00),kΣ4B, (3.54)
    \begin{gather} \delta_B^0 = (196tk_0^3)^{\frac{i\nu}{2}}e^{-8itk_0^3}e^{\chi(k_0)} \end{gather} (3.55)
    \begin{gather} \omega_{B^0} = \omega_{B^0-} = \begin{cases} \left(\begin{array}{cc} 0 & 0\\ -(\delta_B^0)^{-2}k^{2i\nu}e^{-\frac{ik^2}{2}}\gamma^\dagger(k_0)B_1 & 0\\ \end{array}\right), & k\in\Sigma_B^1, \\ \left(\begin{array}{cc} 0 & 0\\ (\delta_B^0)^{-2}k^{2i\nu}e^{-\frac{ik^2}{2}}\frac{\gamma^\dagger(k_0)B_1}{1-\gamma^\dagger(k_0)B_1\gamma(k_0)} & 0\\ \end{array}\right), & k\in\Sigma_B^3.\\ \end{cases} \end{gather} (3.56)

    Theorem 3.3. As t\to\infty ,

    \begin{align} \begin{split} q(x, t) = &(u(x, t), u^\ast(x, t))^T\\ = &\frac{1}{\pi\sqrt{48tk_0}}\left(\int_{\Sigma_A}\left((1-C_{\omega_{A^0}})^{-1}I\right)(\xi)\omega_{A^0}(\xi)\, \mathrm{d}\xi\right)_{12}\\ &+\frac{1}{\pi\sqrt{48tk_0}}\left(\int_{\Sigma_B}\left((1-C_{\omega_{B^0}})^{-1}I\right)(\xi)\omega_{B^0}(\xi)\, \mathrm{d}\xi\right)_{12}+O\left(\frac{c(k_0)\log{t}}{t}\right). \end{split} \end{align} (3.57)

    Proof. Notice that

    \begin{align*} \left((1-C_{\omega_A})^{-1}I\right)\omega_A = &\left((1-C_{\omega_{A^0}})^{-1}I\right)\omega_{A^0}+\left((1-C_{\omega_A})^{-1}I\right)(\omega_A-\omega_{A^0})\notag\\ &+(1-C_{\omega_A})^{-1}(C_{\omega_A}-C_{\omega_{A^0}})(1-C_{\omega_{A^0}})I\omega_{A^0}. \end{align*}

    Utilizing the triangle inequality and the boundedness in (3.53), we have

    \begin{align*} \int_{\Sigma_A}\left((1-C_{\omega_A})^{-1}I\right)(\xi)\omega_A(\xi)\, \mathrm{d}\xi = \int_{\Sigma_A}\left((1-C_{\omega_{A^0}})^{-1}I\right)(\xi)\omega_{A^0}(\xi)\, \mathrm{d}\xi+O\left(\frac{\log{t}}{\sqrt{t}}\right). \end{align*}

    According to (3.5) and a simple change-of-variable argument, we have

    \begin{align*} \begin{split} &\frac{1}{\pi}\left(\int_{\Sigma^\prime}\left((1-C_{\omega_A^\prime})^{-1}I\right)(\xi)\omega_A^\prime(\xi)\, \mathrm{d}\xi\right)_{12}\\ = &\frac{1}{\pi}\left(\int_{\hat\Sigma_A^\prime}\left(N_A^{-1}(1-C_{\omega_A})^{-1}I\right)(\xi)\omega_A^\prime(\xi)\, \mathrm{d}\xi\right)_{12}\\ = &\frac{1}{\pi}\left(\int_{\hat\Sigma_A^\prime}\left((1-C_{\omega_A})^{-1}I\right)\left((\xi+k_0)\sqrt{48tk_0}\right)(N_A\omega_A^\prime)\left((\xi+k_0)\sqrt{48tk_0}\right)\, \mathrm{d}\xi\right)_{12}\\ = &\frac{1}{\pi\sqrt{48tk_0}}\left(\int_{\Sigma_A}\left((1-C_{\omega_A})^{-1}I\right)(\xi)\omega_A(\xi)\, \mathrm{d}\xi\right)_{12}\\ = &\frac{1}{\pi\sqrt{48tk_0}}\left(\int_{\Sigma_A}\left((1-C_{\omega_{A^0}})^{-1}I\right)(\xi)\omega_{A^0}(\xi)\, \mathrm{d}\xi\right)_{12}+O\left(\frac{c(k_0)\log{t}}{t}\right). \end{split} \end{align*}

    There are similar computations for the other case. Together with (3.39), one can obtain (3.57).

    For k\in\mathbb{C}\backslash\Sigma_A , set

    \begin{equation} M^{A^0}(k;x, t) = I+\int_{\Sigma_A}\frac{\left((1-C_{\omega_{A^0}})^{-1}I\right)(\xi)\omega_{A^0}(\xi)}{\xi-k} \, \frac{\mathrm{d}\xi}{2\pi i}. \end{equation} (3.58)

    Then M^{A^0}(k; x, t) is the solution of the Riemann-Hilbert problem

    \begin{equation} \begin{cases} M^{A^0}_+(k;x, t) = M^{A^0}_-(k;x, t)J^{A^0}(k;x, t), & k\in\Sigma_A, \\ M^{A^0}(k;x, t)\to I, & k\to\infty. \end{cases} \end{equation} (3.59)

    In particular

    \begin{equation} M^{A^0}(k) = I+\frac{M^{A^0}_1}{k}+O(k^{-2}), \quad k\rightarrow\infty, \end{equation} (3.60)

    then

    \begin{equation} M^{A^0}_1 = -\int_{\Sigma_A}\left((1-C_{\omega_{A^0}})^{-1}I\right)(\xi)\omega_{A^0}(\xi)\, \frac{\mathrm{d}\xi}{2\pi i}. \end{equation} (3.61)

    There is an analogous Riemann-Hilbert problem on \Sigma_{B} ,

    \begin{equation} \begin{cases} M^{B^0}_+(k;x, t) = M^{B^0}_-(k;x, t)J^{B^0}(k;x, t), & k\in\Sigma_B, \\ M^{B^0}(k;x, t)\to I, & k\to\infty, \end{cases} \end{equation} (3.62)

    where J^{B^0}(k; x, t) is defined in (3.54) and (3.56). In the meantime, we have

    \begin{equation} M^{B^0}(k) = I+\frac{M^{B^0}_1}{k}+O(k^{-2}), \quad k\rightarrow\infty. \end{equation} (3.63)

    Next, we consider the relation between M^{A^0}_1 and M^{B^0}_1 . From the expression (3.50), (3.52), (3.54) and (3.56), we have the symmetry relation

    \begin{equation*} J^{A^0}(k) = \tau(J^{B^0}(-k^\ast))^\ast\tau. \end{equation*}

    By the uniqueness of the Riemann-Hilbert problem,

    \begin{equation*} M^{A^0}(k) = \tau(M^{B^0}(-k^\ast))^\ast\tau. \end{equation*}

    Combining with the expansion (3.60) and (3.63), one can verify that

    \begin{equation*} M^{A^0}_1 = -\tau(M^{B^0}_1)^\ast\tau, \quad (M^{A^0}_1)_{12} = -\sigma_1(M^{B^0}_1)^\ast_{12}. \end{equation*}

    Therefore, from (3.57) and (3.61), we have

    \begin{align} \begin{split} q(x, t) = &(u(x, t), u^\ast(x, t))^T\\ = &\frac{-2i}{\sqrt{48tk_0}}\left(M_1^{A^0}+M_1^{B^0}\right)_{12}+O\left(\frac{c(k_0)\log{t}}{t}\right)\\ = &-\frac{i}{\sqrt{12tk_0}}\left((M_1^{A^0})_{12}-\sigma_1(M_1^{A^0})^\ast_{12}\right)+O\left(\frac{c(k_0)\log{t}}{t}\right). \end{split} \end{align} (3.64)

    In this subsection, we compute (M_1^{A^0})_{12} explicitly. For this purpose, set

    \begin{equation} \Psi(k) = H(k)(-k)^{i\nu\sigma}e^{-\frac{1}{4}ik^2\sigma}, \quad H(k) = (\delta_A^0)^{-\sigma}M^{A^0}(k)(\delta_A^0)^{\sigma}. \end{equation} (3.65)

    Then it follows from (3.59) that

    \begin{equation} \Psi_+(k) = \Psi_-(k)v(-k_0), \quad v = e^{\frac{1}{4}ik^2\sigma}(-k)^{-i\nu\sigma}(\delta_A^0)^{-\sigma}J^{A^0}(k)(\delta_A^0)^{\sigma}(-k)^{i\nu\sigma}e^{-\frac{1}{4}ik^2\sigma}. \end{equation} (3.66)

    The jump matrix is constant on each of the four rays \Sigma_A^1 , \Sigma_A^2 , \Sigma_A^3 , \Sigma_A^4 , so

    \begin{equation} \frac{\mathrm{d}\Psi_+(k)}{\mathrm{d}k} = \frac{\mathrm{d}\Psi_-(k)}{\mathrm{d}k}v(-k_0). \end{equation} (3.67)

    Then it follows that (\mathrm{d}\Psi/\mathrm{d}k+\frac{ik}{2}\sigma\Psi)\Psi^{-1} has no jump discontinuity along any of the four rays. Besides, from the relation between \Psi(k) and H(k) , we have

    \begin{align*} \frac{\mathrm{d}\Psi(k)}{\mathrm{d}k}\Psi^{-1}(k) = &\frac{\mathrm{d}H(k)}{\mathrm{d}k}H^{-1}(k)-\frac{ik}{2}H(k)\sigma H^{-1}(k)+\frac{i\nu}{k}H(k)\sigma H^{-1}(k)\notag\\ = &O(k^{-1})-\frac{ik\sigma}{2}+\frac{i}{2}(\delta_A^0)^{\sigma}[\sigma, M^{A^0}_1](\delta_A^0)^{-\sigma}. \end{align*}

    It follows by Liouville's theorem that

    \begin{equation} \frac{\mathrm{d}\Psi(k)}{\mathrm{d}k}+\frac{ik}{2}\sigma\Psi(k) = \beta\Psi(k), \end{equation} (3.68)

    where

    \begin{equation*} \beta = \frac{i}{2}(\delta_A^0)^{\sigma}[\sigma, M^{A^0}_1](\delta_A^0)^{-\sigma} = \left(\begin{array}{cc} 0 & \beta_{12}\\ \beta_{21} & 0 \end{array}\right). \end{equation*}

    Moreover,

    \begin{equation} (M_1^{A^0})_{12} = -i(\delta_A^0)^{-2}\beta_{12}. \end{equation} (3.69)

    Set

    \begin{equation*} \Psi(k) = \left(\begin{array}{cc} \Psi_{11}(k) & \Psi_{12}(k)\\ \Psi_{21}(k) & \Psi_{22}(k)\\ \end{array}\right). \end{equation*}

    From (3.68) and its differential, we obtain

    \begin{gather*} \frac{\mathrm{d}^{2}\beta_{21}\Psi_{11}(k)}{\mathrm{d}k^{2}}+\left(\frac{i}{2}+\frac{k^2}{4}-\beta_{21}\beta_{12}\right)\beta_{21}\Psi_{11}(k) = 0, \\ \Psi_{21}(k) = \frac{1}{\beta_{21}\beta_{12}}\left(\frac{\mathrm{d}\beta_{21}\Psi_{11}(k)}{\mathrm{d}k}+\frac{ik}{2}\beta_{21}\Psi_{11}(k)\right), \\ \frac{\mathrm{d}^{2}\Psi_{22}(k)}{\mathrm{d}k^{2}}+\left(-\frac{i}{2}+\frac{k^2}{4}-\beta_{21}\beta_{12}\right)\Psi_{22}(k) = 0, \\ \beta_{21}\Psi_{12}(k) = \left(\frac{\mathrm{d}\Psi_{22}(k)}{\mathrm{d}k}-\frac{ik}{2}\Psi_{22}(k)\right). \end{gather*}

    As is well known, Weber's equation

    \begin{equation*} \frac{\mathrm{d}^{2}g(\zeta)}{\mathrm{d}\zeta^{2}}+\left(\varrho+\frac{1}{2}-\frac{\zeta^{2}}{4}\right)g(\zeta) = 0 \end{equation*}

    has the solution

    \begin{equation*} g(\zeta) = c_{1}D_{\varrho}(\zeta)+c_{2}D_{\varrho}(-\zeta), \end{equation*}

    where D_{\varrho}(\cdot) denotes the standard parabolic-cylinder function, and c_1 , c_2 are constants. The parabolic-cylinder function satisfies [41]

    \begin{gather} \frac{\mathrm{d}D_{\varrho}(\zeta)}{\mathrm{d}\zeta}+\frac{\zeta}{2}D_{\varrho}(\zeta)-\varrho D_{\varrho-1}(\zeta) = 0, \end{gather} (3.70)
    \begin{gather} D_{\varrho}(\pm\zeta) = \frac{\Gamma(\varrho+1)e^{\frac{i\pi \varrho}{2}}}{\sqrt{2\pi}}D_{-\varrho-1}(\pm i\zeta)+\frac{\Gamma(\varrho+1)e^{-\frac{i\pi \varrho}{2}}}{\sqrt{2\pi}}D_{-\varrho-1}(\mp i\zeta). \end{gather} (3.71)
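    The recurrence (3.70) can be checked numerically; the following is a small illustrative sketch using SciPy's parabolic-cylinder routine pbdv for real arguments (the parameter values are arbitrary choices, not taken from the paper):

    ```python
    import numpy as np
    from scipy.special import pbdv

    # Check the identity (3.70): D_rho'(z) + (z/2) D_rho(z) - rho * D_{rho-1}(z) = 0.
    # pbdv(v, x) returns (D_v(x), d/dx D_v(x)) for real v and x.
    rho, z = 0.7, 1.3                       # sample real values (illustrative)
    d, dp = pbdv(rho, z)                    # D_rho(z) and its derivative
    d_prev, _ = pbdv(rho - 1.0, z)          # D_{rho-1}(z)
    residual = dp + 0.5 * z * d - rho * d_prev
    print(abs(residual))                    # prints a rounding-error-sized number, confirming (3.70)
    ```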

    As \zeta\rightarrow\infty , from [42], we have

    \begin{equation} D_{\varrho}(\zeta) = \begin{cases} \zeta^{\varrho}e^{-\frac{\zeta^{2}}{4}}(1+O(\zeta^{-2})), & \left|\arg{\zeta}\right| \lt \frac{3\pi}{4}, \\ \zeta^{\varrho}e^{-\frac{\zeta^{2}}{4}}(1+O(\zeta^{-2}))-\frac{\sqrt{2\pi}}{\Gamma(-\varrho)}e^{\varrho\pi i+\frac{\zeta^{2}}{4}}\zeta^{-\varrho-1}(1+O(\zeta^{-2})), & \frac{\pi}{4} \lt \arg{\zeta} \lt \frac{5\pi}{4}, \\ \zeta^{\varrho}e^{-\frac{\zeta^{2}}{4}}(1+O(\zeta^{-2}))-\frac{\sqrt{2\pi}}{\Gamma(-\varrho)}e^{-\varrho\pi i+\frac{\zeta^{2}}{4}}\zeta^{-\varrho-1}(1+O(\zeta^{-2})), & -\frac{5\pi}{4} \lt \arg{\zeta} \lt -\frac{\pi}{4}, \end{cases} \end{equation} (3.72)

    where \Gamma(\cdot) is the Gamma function. Set \varrho = i\beta_{21}\beta_{12} ,

    \begin{gather} \beta_{21}\Psi_{11}(k) = c_1D_\varrho\left(e^{\frac{\pi i}{4}}k\right)+c_2D_\varrho\left(e^{\frac{-3\pi i}{4}}k\right), \end{gather} (3.73)
    \begin{gather} \Psi_{22}(k) = c_3D_{-\varrho}\left(e^{\frac{-\pi i}{4}}k\right)+c_4D_{-\varrho}\left(e^{\frac{3\pi i}{4}}k\right), \end{gather} (3.74)

    where c_1, c_2, c_3, c_4 are constants. As \arg{k}\in(-\pi, -\frac{3\pi}{4})\cup(\frac{3\pi}{4}, \pi) and k\rightarrow\infty , we arrive at

    \begin{equation*} \Psi_{11}(k)(-k)^{-i\nu}e^{\frac{ik^{2}}{4}}\rightarrow I, \quad \Psi_{22}(k)(-k)^{i\nu}e^{-\frac{ik^{2}}{4}}\rightarrow 1, \end{equation*}

    then

    \begin{gather*} \beta_{21}\Psi_{11}(k) = \beta_{21}e^{\frac{\pi\nu}{4}}D_{\varrho}\left(e^{-\frac{3\pi i}{4}}k\right), \quad \nu = \beta_{21}\beta_{12}, \\ \Psi_{22}(k) = e^{\frac{\pi\nu}{4}}D_{-\varrho}\left(e^{\frac{3\pi i}{4}}k\right). \end{gather*}

    Consequently,

    \begin{gather*} \Psi_{21}(k) = \beta_{21}e^{\frac{\pi\nu}{4}}e^{-\frac{\pi i}{4}}D_{\varrho-1}\left(e^{-\frac{3\pi i}{4}}k\right), \\ \beta_{21}\Psi_{12}(k) = \varrho e^{\frac{\pi\nu}{4}}e^{-\frac{\pi i}{4}}D_{-\varrho-1}\left(e^{\frac{3\pi i}{4}}k\right). \end{gather*}

    For \arg{k}\in(-\frac{3\pi}{4}, -\frac{\pi}{4}) and k\rightarrow\infty , we have

    \begin{equation*} \Psi_{11}(k)(-k)^{-i\nu}e^{\frac{ik^{2}}{4}}\rightarrow I, \quad \Psi_{22}(k)(-k)^{i\nu}e^{-\frac{ik^{2}}{4}}\rightarrow 1, \end{equation*}

    then

    \begin{gather*} \beta_{21}\Psi_{11}(k) = \beta_{21}e^{-\frac{3\pi\nu}{4}}D_{\varrho}\left(e^{\frac{\pi i}{4}}k\right), \\ \Psi_{22}(k) = e^{\frac{\pi\nu}{4}}D_{-\varrho}\left(e^{\frac{3\pi i}{4}}k\right). \end{gather*}

    Consequently,

    \begin{gather*} \Psi_{21}(k) = \beta_{21}e^{-\frac{3\pi\nu}{4}}e^{\frac{3\pi i}{4}}D_{\varrho-1}\left(e^{\frac{\pi i}{4}}k\right), \\ \beta_{21}\Psi_{12}(k) = \varrho e^{\frac{\pi\nu}{4}}e^{-\frac{\pi i}{4}}D_{-\varrho-1}\left(e^{\frac{3\pi i}{4}}k\right). \end{gather*}

    Along the ray \arg k = -\frac{3\pi}{4},

    \begin{equation} \Psi_{+}(k) = \Psi_{-}(k) \left(\begin{array}{cc} I & 0\\ -\gamma^\dagger(-k_0)B_1 & 1\\ \end{array}\right). \end{equation} (3.75)

    Considering the (2, 1) entry of this jump relation,

    \begin{align*} &\beta_{21}e^{\frac{\pi(\nu-i)}{4}}D_{\varrho-1}(e^{-\frac{3\pi i}{4}}k)\\ = &\beta_{21}e^{\frac{\pi(3i-3\nu)}{4}}D_{\varrho-1}(e^{\frac{\pi i}{4}}k)-e^{\frac{\pi\nu}{4}}D_{-\varrho}(e^{\frac{3\pi i}{4}}k)\gamma^\dagger(-k_0)B_1. \end{align*}

    It follows from (3.71) that

    \begin{equation*} D_{-\varrho}(e^{\frac{3\pi i}{4}}k) = \frac{\Gamma(-\varrho+1)e^{\frac{\pi\nu}{2}}}{\sqrt{2\pi}}D_{\varrho-1}(e^{-\frac{3\pi i}{4}}k)+\frac{\Gamma(-\varrho+1)e^{-\frac{\pi\nu}{2}}}{\sqrt{2\pi}}D_{\varrho-1}(e^{\frac{\pi i}{4}}k). \end{equation*}

    Then we separate the coefficients of the two independent functions and obtain

    \begin{gather} \beta_{21} = e^{-\frac{3\pi i}{4}}e^{\frac{\pi\nu}{2}}\frac{\Gamma(-\varrho+1)}{\sqrt{2\pi}}\gamma^\dagger(-k_0)B_1. \end{gather} (3.76)

    Noting that B^{-1}(J^{A^0}(k^\ast))^\dagger B = (J^{A^0}(k))^{-1} , we have \beta_{12} = -B_1^{-1}\beta_{21}^\dagger , which means that

    \begin{equation} \beta_{12} = -B_1^{-1}B_1^\dagger\gamma(-k_0)e^{\frac{3\pi i}{4}}e^{\frac{\pi\nu}{2}}\frac{\Gamma(-\varrho+1)}{\sqrt{2\pi}} = e^{-\frac{\pi i}{4}}e^{\frac{\pi\nu}{2}}\nu\frac{\Gamma(-i\nu)}{\sqrt{2\pi}}\gamma(-k_0). \end{equation} (3.77)

    Finally, we can obtain (1.4) from (3.64), (3.69) and (3.77).

    This work is supported by the National Natural Science Foundation of China (Grant Nos. 11871440 and 11931017).

    The authors declare no conflict of interest.



    [1] E. Özergin, Some properties of hypergeometric functions, Eastern Mediterranean University, 2011. Available from: http://hdl.handle.net/11129/217
    [2] F. Ouimet, Central and noncentral moments of the multivariate hypergeometric distribution, arXiv, 2024. https://doi.org/10.48550/arXiv.2404.09118
    [3] W. Briggs, R. Zaretzki, A new look at inference for the hypergeometric distribution, 2009.
    [4] P. Sheridan, M. Onsjö, The hypergeometric test performs comparably to TF-IDF on standard text analysis tasks, Multimed. Tools Appl., 83 (2024), 28875–28890. https://doi.org/10.1007/s11042-023-16615-z doi: 10.1007/s11042-023-16615-z
    [5] H. Alzer, K. Richards, Combinatorial identities and hypergeometric functions, Rocky Mountain J. Math., 52 (2022), 1921–1928. https://doi.org/10.1216/rmj.2022.52.1921 doi: 10.1216/rmj.2022.52.1921
    [6] H. Gould, Combinatorial identities: A standardized set of tables listing 500 binomial coefficient summations, Morgantown, 1972.
    [7] E. Diekema, Combinatorial identities and hypergeometric series, arXiv, 2022. https://doi.org/10.48550/arXiv.2204.05647
    [8] J. Borwein, A. Straub, C. Vignat, Densities of short uniform random walks in higher dimensions, J. Math. Anal. Appl., 437 (2016), 668–707. https://doi.org/10.1016/j.jmaa.2016.01.017 doi: 10.1016/j.jmaa.2016.01.017
    [9] J. Borwein, D. Nuyens, A. Straub, J. Wan, Random walks in the plane, Discrete Math. Theor. Comput. Sci., 2010,191–202.
    [10] J. McCrorie, Moments in Pearson's four-step uniform random walk problem and other applications of very well-poised generalized hypergeometric series, Sankhya B, 83 (2021), 244–281. https://doi.org/10.1007/s13571-020-00230-1 doi: 10.1007/s13571-020-00230-1
    [11] S. Janson, D. Knuth, T. Łuczak, B. Pittel, The birth of the giant component, Random Struct. Algor., 4 (1993), 233–358. https://doi.org/10.1002/rsa.3240040303 doi: 10.1002/rsa.3240040303
    [12] E. P. Wigner, On the matrices which reduce the Kronecker products of representations of S. R. groups, In: The collected works of eugene paul wigner, Berlin, Heidelberg: Springer, 1993,608–654. https://doi.org/10.1007/978-3-662-02781-3_42
    [13] K. S. Rao, Symmetries of 3n-j coefficients and generalized hypergeometric functions, In: Symmetries in science X, Boston: Springer, 1998,383–399. https://doi.org/10.1007/978-1-4899-1537-5_24
    [14] M. Harmer, Note on the Schwarz triangle functions, Bull. Aust. Math. Soc., 72 (2005), 38–389. https://doi.org/10.1017/S0004972700035218 doi: 10.1017/S0004972700035218
    [15] A. Ali, M. Islam, A. Noreen, Z. Nisa, Solution of fractional k-hypergeometric differential equation, Int. J. Math. Anal., 14 (2020), 125–132. https://doi.org/10.12988/ijma.2020.91287 doi: 10.12988/ijma.2020.91287
    [16] M. Abul-Ez, M. Zayed, A. Youssef, Further study on the conformable fractional Gauss hypergeometric function, AIMS Mathematics, 6 (2021), 10130–10163. https://doi.org/10.3934/math.2021588 doi: 10.3934/math.2021588
    [17] M. Chen, W. Chu, Yabu's formulae for hypergeometric _3F_2 -series through Whipple's quadratic transformations, AIMS Mathematics, 9 (2024), 21799–21815. https://doi.org/10.3934/math.20241060 doi: 10.3934/math.20241060
    [18] M. Atia, M. Alkilayh, Extension of Chu–Vandermonde identity and quadratic transformation conditions, Axioms, 13 (2024), 825. https://doi.org/10.3390/axioms13120825 doi: 10.3390/axioms13120825
    [19] M. Atia, On the inverse of the linearization coefficients of Bessel polynomials, Symmetry, 16 (2024), 737. https://doi.org/10.3390/sym16060737 doi: 10.3390/sym16060737
    [20] M. Atia, A. Rathie, On a generalization of the Kummer's quadratic transformation and a resolution of an isolated case, Axioms, 12 (2023), 821. https://doi.org/10.3390/axioms12090821 doi: 10.3390/axioms12090821
    [21] M. Atia, A. Al-Mohaimeed, On a resolution of another isolated case of a Kummer's quadratic transformation for _{2}F_{1}, Axioms, 12 (2023), 221. https://doi.org/10.3390/axioms12020221 doi: 10.3390/axioms12020221
    [22] A. Shehata, S. Moustafa, Some new results for Horn's hypergeometric functions \Gamma_{1} and \Gamma_{2}, J. Math. Comput. Sci., 23 (2020), 26–35. https://doi.org/10.22436/jmcs.023.01.03 doi: 10.22436/jmcs.023.01.03
    [23] W. Mohammed, C. Cesarano, F. Al-Askar, Solutions to the (4+1)-dimensional time-fractional Fokas equation with M-truncated derivative, Mathematics, 11 (2023), 194. https://doi.org/10.3390/math11010194 doi: 10.3390/math11010194
    [24] F. He, A. Bakhet, M. Hidan, H. Abd-Elmageed, On the construction of (p, k)-hypergeometric function and applications, Fractals, 30 (2022), 2240261. https://doi.org/10.1142/S0218348X22402617 doi: 10.1142/S0218348X22402617
    [25] H. Exton, A new two-term relation for the _3F_2 hypergeometric function of unit argument, J. Comput. Appl. Math., 106 (1999), 395–397. https://doi.org/10.1016/S0377-0427(99)00077-1 doi: 10.1016/S0377-0427(99)00077-1
    [26] M. Milgram, On hypergeometrics _3F_2(1)-A review, arXiv, 2010. https://doi.org/10.48550/arXiv.1011.4546
    [27] Y. Kim, J. Choi, A. Rathie, Two results for the terminating _3F_2(2) with applications, Bull. Korean Math. Soc., 49 (2012), 621–633. https://doi.org/10.4134/BKMS.2012.49.3.621 doi: 10.4134/BKMS.2012.49.3.621
    [28] K. Chen, Explicit formulas for some infinite _3F_2(1)-series, Axioms, 10 (2021), 125. https://doi.org/10.3390/axioms10020125 doi: 10.3390/axioms10020125
    [29] M. Chen, W. Chu, Bisection series approach for exotic _3F_2(1)-series, Mathematics, 12 (2024), 1915. https://doi.org/10.3390/math12121915 doi: 10.3390/math12121915
    [30] A. Kilbas, H. Srivastava, J. Trujillo, Theory and applications of fractional differential equations, Elsevier, 204 (2006).
    [31] E. D. Rainville, Special functions, New York: Macmillan, 1960.
  • © 2025 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
