Research article

Modified PNN classifier for diagnosing skin cancer severity condition using SMO optimization technique

  • Received: 27 July 2022 Revised: 08 November 2022 Accepted: 10 November 2022 Published: 27 December 2022
  • Skin cancer is now a worldwide pandemic disease responsible for numerous deaths. Early-phase detection is pre-eminent for controlling the spread of tumours throughout the body. However, existing algorithms for detecting skin cancer severity still have drawbacks: the analysis of skin lesions is not trivial, their accuracy is slightly worse than that of dermatologists, and they are costly and time-consuming. Various machine learning algorithms have been used to detect the severity of the disease, but diagnosis remains complex. To overcome these issues, a modified Probabilistic Neural Network (MPNN) classifier is proposed to determine the severity of skin cancer. The proposed method contains two phases, training and testing. The features collected from the data of infected people are used as input to the modified PNN classifier, and the network is trained using the Spider Monkey Optimization (SMO) approach. For analyzing the severity level, the classifier predicts four classes, and the degree of skin cancer is determined from the classification. According to the findings, the system achieved a 0.10 False Positive Rate (FPR), 0.03 error and 0.98 accuracy, while previous methods such as KNN, NB, RF and SVM have accuracies of 0.90, 0.70, 0.803 and 0.86 respectively, which are lower than the proposed approach.

    Citation: J. Rajeshwari, M. Sughasiny. Modified PNN classifier for diagnosing skin cancer severity condition using SMO optimization technique[J]. AIMS Electronics and Electrical Engineering, 2023, 7(1): 75-99. doi: 10.3934/electreng.2023005




    The Sasa-Satsuma equation

    \begin{equation} u_{t}+u_{xxx}+6|u|^{2}u_{x}+3u\left(|u|^{2}\right)_{x} = 0, \end{equation} (1.1)

    the so-called higher-order nonlinear Schrödinger equation [1], is relevant to several physical phenomena, for example, in optical fibers [2,3], in deep water waves [4] and, more generally, in dispersive nonlinear media [5]. Because it describes these important nonlinear phenomena, this equation has received considerable attention and extensive research. The Sasa-Satsuma equation has been discussed by means of various approaches, such as the inverse scattering transform [1], the Riemann-Hilbert method [6], the Hirota bilinear method [7], the Darboux transformation [8], and others [9,10,11]. The initial-boundary value problem for the Sasa-Satsuma equation on a finite interval was studied by the Fokas method [12], which is also effective for initial-boundary value problems on the half-line [35,36,37]. In Ref. [13], finite genus solutions of the coupled Sasa-Satsuma hierarchy are obtained on the basis of the theory of trigonal curves, the Baker-Akhiezer function and meromorphic functions [14,15,16]. In Ref. [17], the super Sasa-Satsuma hierarchy associated with a 3\times3 matrix spectral problem was proposed, and its bi-Hamiltonian structures were derived with the aid of the super trace identity.

    The nonlinear steepest descent method [18], also called the Deift-Zhou method, for oscillatory Riemann-Hilbert problems is a powerful tool for studying the long-time asymptotic behavior of solutions of soliton equations. By this method, the long-time asymptotic behaviors of a number of integrable nonlinear evolution equations associated with 2\times2 matrix spectral problems have been obtained, for example, the mKdV equation, the KdV equation, the sine-Gordon equation, the modified nonlinear Schrödinger equation, the Camassa-Holm equation, the derivative nonlinear Schrödinger equation and so on [19,20,21,22,23,24,25,26,27,28,29,30]. However, there is little literature on the long-time asymptotic behavior of solutions of integrable nonlinear evolution equations associated with 3\times3 matrix spectral problems [31,32,33]; the 3\times3 case is usually difficult and complicated. Recently, the nonlinear steepest descent method was successfully generalized to derive the long-time asymptotics of the initial value problems for the coupled nonlinear Schrödinger equation and the Sasa-Satsuma equation with complex potentials [33,34]. The main difference between the 2\times2 and 3\times3 cases is that the former leads to a scalar Riemann-Hilbert problem, while the latter leads to a matrix Riemann-Hilbert problem. In general, the solution of a matrix Riemann-Hilbert problem cannot be given in explicit form, whereas a scalar Riemann-Hilbert problem can be solved by the Plemelj formula.

    The main aim of this paper is to study the long-time asymptotics of the Cauchy problem for the generalized Sasa-Satsuma equation [38] via the nonlinear steepest descent method,

    \begin{equation} \begin{cases} u_{t}+u_{xxx}+6a|u|^{2}u_{x}+6bu^{\ast2}u_{x}+3au\left(|u|^{2}\right)_{x}+3bu^{\ast}\left(|u|^{2}\right)_{x} = 0,\\ u(x,0) = u_{0}(x), \end{cases} \end{equation} (1.2)

    where a is a real constant, b is a complex constant satisfying a^{2}\neq|b|^{2}, and the asterisk "\ast" denotes the complex conjugate. It is easy to see that the generalized Sasa-Satsuma equation (1.2) reduces to the Sasa-Satsuma equation (1.1) when a = 1 and b = 0. Suppose that the initial value u_{0}(x) lies in the Schwartz space \mathscr{S}(\mathbb{R}) = \{f(x)\in C^{\infty}(\mathbb{R}):\sup_{x\in\mathbb{R}}|x^{\alpha}\partial^{\beta}f(x)|<\infty,\ \alpha,\beta\in\mathbb{N}\}. The vector function \gamma(k) is determined by the initial data through (2.15) and (2.19), and \gamma(k) satisfies the conditions (P1) and (P2), where

    (P1): \begin{equation*} \begin{cases} \gamma^{\dagger}(k)B_{1}\gamma(k)+\dfrac{|b|^{2}}{4}\left(\gamma^{\dagger}(k)\sigma_{3}\gamma(k)\right)^{2}<1,\\ \gamma^{\dagger}(k)B_{1}\gamma(k)+a\gamma^{\dagger}(k)\sigma_{3}\gamma(k)<2,\\ 2\gamma^{\dagger}(k)B_{1}\gamma(k)+|\gamma(k)|^{2}+|B_{1}\gamma(k)|^{2}<4, \end{cases} \end{equation*}

    with

    \begin{equation} \sigma_{3} = \begin{pmatrix} 1&0\\0&-1 \end{pmatrix},\qquad B_{1} = \begin{pmatrix} a&b\\b^{\ast}&a \end{pmatrix}; \end{equation} (1.3)
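Under this reading of B_{1} (Hermitian, with the real constant a on the diagonal), the condition a^{2}\neq|b|^{2} in (1.2) is exactly the invertibility of B_{1}:

```latex
\det B_{1} = \begin{vmatrix} a & b\\ b^{\ast} & a \end{vmatrix}
           = a^{2}-bb^{\ast} = a^{2}-|b|^{2},
```

so \det B_{1}\neq0 precisely when a^{2}\neq|b|^{2}, and the sign of \det B_{1} = a^{2}-|b|^{2} is what separates the two branches of condition (P2) below.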

    (P2): When \det B_{1}>0 and a>0, the quantities (2a-|B_{1}\gamma(k)|^{2}) and (2a-\det B_{1}|\gamma(k)|^{2})/(1-\gamma^{\dagger}(k)B_{1}\gamma(k)) are positive and bounded; otherwise, (|B_{1}\gamma(k)|^{2}-2a) and (\det B_{1}|\gamma(k)|^{2}-2a)/(1-\gamma^{\dagger}(k)B_{1}\gamma(k)) are positive and bounded.

    The main result of this paper is as follows:

    Theorem 1.1. Let u(x,t) be the solution of the Cauchy problem for the generalized Sasa-Satsuma equation (1.2) with the initial value u_{0}\in\mathscr{S}(\mathbb{R}). Suppose that the vector function \gamma(k) is defined in (2.19) and that the hypotheses (P1) and (P2) hold. Then, for x<0 and |x|/t<C,

    \begin{equation} u(x,t) = u_{a}(x,t)+O\left(c(k_{0})t^{-1}\log t\right), \end{equation} (1.4)

    where C is a fixed constant, and

    \begin{equation*} \begin{split} &u_{a}(x,t) = \sqrt{\frac{\nu}{12tk_{0}}}\,\frac{1}{\gamma^{\dagger}(k_{0})B_{1}\gamma(k_{0})}\left(|\gamma_{1}(k_{0})|e^{i(\phi+\arg\gamma_{1}(k_{0}))}+|\gamma_{2}(k_{0})|e^{i(\phi+\arg\gamma_{2}(k_{0}))}\right),\\ &k_{0} = \sqrt{-\frac{x}{12t}},\qquad \nu = -\frac{1}{2\pi}\log\left(1-\gamma^{\dagger}(k_{0})B_{1}\gamma(k_{0})\right),\\ &\phi = \nu\log(196tk_{0}^{3})-16tk_{0}^{3}+\arg\Gamma(i\nu)+\frac{1}{\pi}\int_{-k_{0}}^{k_{0}}\log|\xi+k_{0}|\,\mathrm{d}\log\left(1-\gamma^{\dagger}(\xi)B_{1}\gamma(\xi)\right)-\frac{\pi}{4}, \end{split} \end{equation*}

    c(\cdot) is a rapidly decreasing function, \Gamma(\cdot) is the Gamma function, and \gamma_{1} and \gamma_{2} are the first and second components of \gamma(k), respectively.

    Remark 1.1. The two conditions (P1) and (P2) satisfied by γ(k) are necessary. The condition (P1) guarantees the existence and the uniqueness of the solutions of the basic Riemann-Hilbert problem (2.16) and the Riemann-Hilbert problem (3.1). The boundedness of the function δ(k) defined in subsection 3.1 relies on the condition (P2).

    Remark 1.2. In the case of a=1 and b=0, the generalized Sasa-Satsuma equation (1.2) can be reduced to the Sasa-Satsuma equation. Then it is obvious that the condition (P1) is true, and the condition (P2) is reduced to the case that |γ(k)| is bounded. Therefore, the conditions (P1) and (P2) in this case are equivalent to the condition related to the reflection coefficient in [34], that is, |γ(k)| is bounded for the Sasa-Satsuma equation.

    The outline of this paper is as follows. In section 2, we derive a Riemann-Hilbert problem from the scattering relation; the solution of the generalized Sasa-Satsuma equation is thereby converted into the solution of this Riemann-Hilbert problem. In section 3, we analyze the Riemann-Hilbert problem via the nonlinear steepest descent method, from which the long-time asymptotics in Theorem 1.1 is obtained at the end.

    We begin with the 3×3 Lax pair of the generalized Sasa-Satsuma equation

    \begin{gather} \psi_{x} = (ik\sigma+U)\psi, \end{gather} (2.1a)
    \begin{gather} \psi_{t} = (4ik^{3}\sigma+V)\psi, \end{gather} (2.1b)

    where \psi is a matrix function and k is the spectral parameter, \sigma = \mathrm{diag}(1,1,-1),

    \begin{gather} U = \begin{pmatrix} 0&0&u\\ 0&0&u^{\ast}\\ -(au^{\ast}+b^{\ast}u)&-(au+bu^{\ast})&0 \end{pmatrix}, \end{gather} (2.2)
    \begin{gather} V = 4k^{2}U+2ik(U^{2}+U_{x})\sigma+[U_{x},U]-U_{xx}+2U^{3}. \end{gather} (2.3)

    We introduce a new eigenfunction \mu through \mu = \psi e^{-ik\sigma x-4ik^{3}\sigma t}, where e^{a\sigma} = \mathrm{diag}(e^{a},e^{a},e^{-a}). Then (2.1a) and (2.1b) become

    \begin{gather} \mu_{x} = ik[\sigma,\mu]+U\mu, \end{gather} (2.4a)
    \begin{gather} \mu_{t} = 4ik^{3}[\sigma,\mu]+V\mu, \end{gather} (2.4b)

    where [\cdot,\cdot] is the commutator, [\sigma,\mu] = \sigma\mu-\mu\sigma. From (2.4a), the matrix Jost solutions \mu_{\pm} satisfy the Volterra integral equations

    \begin{equation} \mu_{\pm}(k;x,t) = I+\int_{\pm\infty}^{x}e^{ik\sigma(x-\xi)}U(\xi,t)\mu_{\pm}(k;\xi,t)e^{-ik\sigma(x-\xi)}\,\mathrm{d}\xi. \end{equation} (2.5)
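The Volterra structure of (2.5) is what makes the Jost solutions well defined for Schwartz-class data: the Picard (Neumann) series converges absolutely. As a quick self-contained illustration — not the actual 3\times3 system, but a toy scalar analogue with an illustrative Gaussian potential — one can iterate the integral equation numerically and compare with the closed-form solution:

```python
import numpy as np

# Toy scalar analogue of the Volterra equation (2.5):
#     mu(x) = 1 + \int_{-inf}^{x} u(xi) mu(xi) dxi,
# whose continuum solution is mu(x) = exp(\int_{-inf}^{x} u).
# The potential u and the grid are illustrative assumptions only.
xs = np.linspace(-10.0, 10.0, 4001)
dx = xs[1] - xs[0]
u = np.exp(-xs**2)  # rapidly decreasing "potential"

def cumtrapz(f):
    # cumulative trapezoid rule approximating \int_{-inf}^{x} f
    return np.concatenate(([0.0], np.cumsum((f[1:] + f[:-1]) * dx / 2.0)))

mu = np.ones_like(xs)
for _ in range(40):            # Picard iteration: mu <- 1 + I[u mu]
    mu = 1.0 + cumtrapz(u * mu)

exact = np.exp(cumtrapz(u))    # closed-form solution of the toy equation
assert np.max(np.abs(mu - exact)) < 1e-3
```

The n-th Picard term is bounded by (\int u)^{n}/n!, so the iteration converges for any integrable potential; the same mechanism underlies the existence of \mu_{\pm} for u_{0}\in\mathscr{S}(\mathbb{R}).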

    Let \mu_{\pm}^{L} denote the first two columns of \mu_{\pm} and \mu_{\pm}^{R} the third column, i.e., \mu_{\pm} = (\mu_{\pm}^{L},\mu_{\pm}^{R}). Furthermore, we can infer that \mu_{+}^{R} and \mu_{-}^{L} are analytic in the lower complex k-plane \mathbb{C}_{-}, while \mu_{+}^{L} and \mu_{-}^{R} are analytic in the upper complex k-plane \mathbb{C}_{+}. Then we can introduce the sectionally analytic functions P_{1}(k) and P_{2}(k) by

    \begin{equation*} P_{1}(k) = \left(\mu_{-}^{L}(k),\mu_{+}^{R}(k)\right),\quad k\in\mathbb{C}_{-},\qquad P_{2}(k) = \left(\mu_{+}^{L}(k),\mu_{-}^{R}(k)\right),\quad k\in\mathbb{C}_{+}. \end{equation*}

    One can see from (2.2) that U is traceless, so \det\mu_{\pm} is independent of x. Moreover, \det\mu_{\pm} = 1 according to the evaluation of \det\mu_{\pm} at x = \pm\infty. Because both \mu_{\pm}e^{ik\sigma x+4ik^{3}\sigma t} satisfy the differential equations (2.1a) and (2.1b), they are linearly related. So there exists a scattering matrix s(k) that satisfies

    \begin{equation} \mu_{-} = \mu_{+}e^{ik\sigma x+4ik^{3}\sigma t}s(k)e^{-ik\sigma x-4ik^{3}\sigma t},\qquad \det s(k) = 1. \end{equation} (2.6)

    In this paper, we denote a 3×3 matrix A by the block form

    \begin{equation*} A = \begin{pmatrix} A_{11}&A_{12}\\ A_{21}&A_{22} \end{pmatrix}, \end{equation*}

    where A_{11} is a 2\times2 matrix and A_{22} is a scalar. Let q = (u,u^{\ast})^{T}; then we can rewrite U of (2.2) as

    \begin{equation*} U = \begin{pmatrix} 0_{2\times2}&q\\ -q^{\dagger}B_{1}&0 \end{pmatrix}, \end{equation*}

    where "\dagger" denotes the Hermitian conjugate. In addition, there are two symmetry properties for U,

    \begin{gather} B^{-1}U^{\dagger}B = -U,\qquad \tau U^{\ast}\tau = U, \end{gather} (2.7)
    \begin{gather} B = \begin{pmatrix} B_{1}&0\\0&1 \end{pmatrix},\qquad \tau = \begin{pmatrix} \sigma_{1}&0\\0&1 \end{pmatrix},\qquad \sigma_{1} = \begin{pmatrix} 0&1\\1&0 \end{pmatrix}, \end{gather} (2.8)

    where B and \tau are written in block form. Hence, the Jost solutions \mu_{\pm} and the scattering matrix s(k) have the corresponding symmetry properties

    \begin{gather} B^{-1}\mu_{\pm}^{\dagger}(k^{\ast})B = \mu_{\pm}^{-1}(k),\qquad \tau\mu_{\pm}^{\ast}(k^{\ast})\tau = \mu_{\pm}(k); \end{gather} (2.9)
    \begin{gather} B^{-1}s^{\dagger}(k^{\ast})B = s^{-1}(k),\qquad \tau s^{\ast}(k^{\ast})\tau = s(k). \end{gather} (2.10)

    We write s(k) in the block form (s_{ij})_{2\times2}, and from the symmetry properties (2.10) we have

    \begin{equation} s_{22}(k) = \det\left[s_{11}^{\ast}(k^{\ast})\right],\qquad B_{1}^{-1}s_{21}^{\dagger}(k^{\ast}) = \mathrm{adj}\left[s_{11}(k)\right]s_{12}(k), \end{equation} (2.11)

    where \mathrm{adj}\,X denotes the adjugate of the matrix X. Then we can write s(k) as

    \begin{equation} s(k) = \begin{pmatrix} s_{11}(k)&s_{12}(k)\\ s_{12}^{\dagger}(k^{\ast})\,\mathrm{adj}\left[s_{11}(k)\right]B_{1}&\det\left[s_{11}^{\ast}(k^{\ast})\right] \end{pmatrix}, \end{equation} (2.12)

    where

    \begin{equation} \sigma_{1}s_{11}^{\ast}(k^{\ast})\sigma_{1} = s_{11}(k),\qquad \sigma_{1}s_{12}^{\ast}(k^{\ast}) = s_{12}(k). \end{equation} (2.13)

    From the evaluation of (2.6) at t=0, one infers

    \begin{equation} s(k) = \lim\limits_{x\rightarrow+\infty}e^{-ikx\sigma}\mu_{-}(k;x,0)e^{ikx\sigma}, \end{equation} (2.14)

    which implies that

    \begin{equation} \begin{cases} s_{11}(k) = I+\int_{-\infty}^{+\infty}q(\xi,0)\mu_{21}^{-}(k;\xi,0)\,\mathrm{d}\xi,\\ s_{12}(k) = \int_{-\infty}^{+\infty}e^{-2ik\xi}q(\xi,0)\mu_{22}^{-}(k;\xi,0)\,\mathrm{d}\xi. \end{cases} \end{equation} (2.15)

    Theorem 2.1. Let M(k;x,t) be analytic for k\in\mathbb{C}\setminus\mathbb{R} and satisfy the Riemann-Hilbert problem

    \begin{equation} \begin{cases} M_{+}(k;x,t) = M_{-}(k;x,t)J(k;x,t),&k\in\mathbb{R},\\ M(k;x,t)\rightarrow I,&k\rightarrow\infty, \end{cases} \end{equation} (2.16)

    where

    \begin{gather} M_{\pm}(k;x,t) = \lim\limits_{\epsilon\rightarrow0^{+}}M(k\pm i\epsilon;x,t), \end{gather} (2.17)
    \begin{gather} J(k;x,t) = \begin{pmatrix} I-\gamma(k)\gamma^{\dagger}(k^{\ast})B_{1}&e^{-2it\theta}\gamma(k)\\ -e^{2it\theta}\gamma^{\dagger}(k^{\ast})B_{1}&1 \end{pmatrix}, \end{gather} (2.18)
    \begin{gather} \theta(k;x,t) = \frac{x}{t}k+4k^{3},\qquad \gamma(k) = s_{11}^{-1}(k)s_{12}(k), \end{gather} (2.19)

    γ(k) lies in Schwartz space and satisfies

    \begin{equation} \sigma_{1}\gamma^{\ast}(k^{\ast}) = \gamma(k). \end{equation} (2.20)

    Then the solution of this Riemann-Hilbert problem exists and is unique, and the function

    \begin{equation} q(x,t) = (u(x,t),u^{\ast}(x,t))^{T} = 2i\lim\limits_{k\rightarrow\infty}\left(k\left(M(k;x,t)\right)_{12}\right) \end{equation} (2.21)

    and u(x,t) is the solution of the generalized Sasa-Satsuma equation.

    Proof. The matrix (J(k;x,t)+J^{\dagger}(k;x,t))/2 is positive definite because of the condition (P1) satisfied by \gamma(k); then the solution of the Riemann-Hilbert problem (2.16) exists and is unique according to the Vanishing Lemma [39]. We define M(k;x,t) by

    \begin{equation} M(k;x,t) = \begin{cases} \left(\mu_{-}^{L}(k),\dfrac{\mu_{+}^{R}(k)}{\det[a(k)]}\right),&k\in\mathbb{C}_{-},\\[2mm] \left(\mu_{+}^{L}(k)a^{-1}(k),\mu_{-}^{R}(k)\right),&k\in\mathbb{C}_{+}. \end{cases} \end{equation} (2.22)

    Considering the scattering relation (2.6) and the construction of M(k;x,t), we can obtain the jump condition and the corresponding Riemann-Hilbert problem (2.16) after tedious but straightforward algebraic manipulations. Substituting the large-k asymptotic expansion of M(k;x,t) into (2.4a) and comparing the coefficients of O(\frac{1}{k}), we get (2.21).

    In this section, we analyze the Riemann-Hilbert problem (2.16) by the nonlinear steepest descent method and study the long-time asymptotic behavior of the solution. We make the following basic notational conventions. (i) For any matrix M define |M| = (\mathrm{tr}\,M^{\dagger}M)^{\frac{1}{2}}, and for any matrix function A(\cdot) define \Vert A(\cdot)\Vert_{p} = \Vert|A(\cdot)|\Vert_{p}. (ii) For two quantities A and B, write A\lesssim B if there exists a constant C>0 such that |A|\leqslant CB; if C depends on the parameter \alpha, we write A\lesssim_{\alpha}B. (iii) For any oriented contour \Sigma, we say that the left side is + and the right side is -.

    First of all, it is noteworthy that there are two stationary points \pm k_{0} = \pm\sqrt{-\frac{x}{12t}}, which satisfy \frac{\mathrm{d}\theta}{\mathrm{d}k}\big|_{k = \pm k_{0}} = 0. The jump matrix J(k;x,t) has a lower-upper triangular factorization and an upper-lower triangular factorization. We can introduce an appropriate Riemann-Hilbert problem to unify these two factorizations; in this process, we have to reorient the contour of the Riemann-Hilbert problem.
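A quick numerical sanity check of the location of the stationary points (assuming the phase \theta(k) = \frac{x}{t}k+4k^{3} read off from (2.19), so that \frac{\mathrm{d}\theta}{\mathrm{d}k} = \frac{x}{t}+12k^{2}):

```python
import math

def theta_prime(k, x, t):
    # d(theta)/dk for theta(k) = (x/t) k + 4 k^3; this sign convention is
    # an assumption consistent with k0 = sqrt(-x/(12t)) requiring x < 0.
    return x / t + 12.0 * k ** 2

def k0(x, t):
    # stationary point k0 = sqrt(-x/(12t)), defined for x < 0, t > 0
    return math.sqrt(-x / (12.0 * t))

x, t = -3.0, 1.0
k = k0(x, t)                        # k0 = 0.5 for x = -3, t = 1
assert abs(k - 0.5) < 1e-12
assert abs(theta_prime(k, x, t)) < 1e-12
assert abs(theta_prime(-k, x, t)) < 1e-12
```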

    The two factorizations of the jump matrix J are

    \begin{equation*} J = \begin{cases} \begin{pmatrix} I&e^{-2it\theta}\gamma(k)\\ 0&1 \end{pmatrix}\begin{pmatrix} I&0\\ -e^{2it\theta}\gamma^{\dagger}(k^{\ast})B_{1}&1 \end{pmatrix},\\[2mm] \begin{pmatrix} I&0\\ \dfrac{-e^{2it\theta}\gamma^{\dagger}(k^{\ast})B_{1}}{1-\gamma^{\dagger}(k^{\ast})B_{1}\gamma(k)}&1 \end{pmatrix}\begin{pmatrix} I-\gamma(k)\gamma^{\dagger}(k^{\ast})B_{1}&0\\ 0&\left(1-\gamma^{\dagger}(k^{\ast})B_{1}\gamma(k)\right)^{-1} \end{pmatrix}\begin{pmatrix} I&\dfrac{e^{-2it\theta}\gamma(k)}{1-\gamma^{\dagger}(k^{\ast})B_{1}\gamma(k)}\\ 0&1 \end{pmatrix}. \end{cases} \end{equation*}
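Under one consistent reading of the extraction-damaged jump matrix (upper entries carrying e^{-2it\theta} and lower entries e^{2it\theta}, matching the later intact formulas for s_{1}(k) and s_{2}(k)), the first factorization can be checked by direct block multiplication:

```latex
\begin{pmatrix} I & e^{-2it\theta}\gamma\\ 0 & 1 \end{pmatrix}
\begin{pmatrix} I & 0\\ -e^{2it\theta}\gamma^{\dagger}B_{1} & 1 \end{pmatrix}
 = \begin{pmatrix} I-\gamma\gamma^{\dagger}B_{1} & e^{-2it\theta}\gamma\\
                   -e^{2it\theta}\gamma^{\dagger}B_{1} & 1 \end{pmatrix},
```

where the oscillatory factors cancel in the (11) block; this cancellation is why the triangular factorizations are the right starting point for deforming the contour off \mathbb{R}.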

    We introduce a 2\times2 matrix function \delta(k) to unify the two factorizations; \delta(k) satisfies the following Riemann-Hilbert problem:

    \begin{equation} \begin{cases} \delta_{+}(k) = \delta_{-}(k)\left(I-\gamma(k)\gamma^{\dagger}(k^{\ast})B_{1}\right),&k\in(-k_{0},k_{0}),\\ \delta_{+}(k) = \delta_{-}(k),&k\in(-\infty,-k_{0})\cup(k_{0},+\infty),\\ \delta(k)\rightarrow I,&k\rightarrow\infty, \end{cases} \end{equation} (3.1)

    which implies a scalar Riemann-Hilbert problem

    \begin{equation} \begin{cases} \det\delta_{+}(k) = \det\delta_{-}(k)\left(1-\gamma^{\dagger}(k^{\ast})B_{1}\gamma(k)\right),&k\in(-k_{0},k_{0}),\\ \det\delta_{+}(k) = \det\delta_{-}(k),&k\in(-\infty,-k_{0})\cup(k_{0},+\infty),\\ \det\delta(k)\rightarrow1,&k\rightarrow\infty. \end{cases} \end{equation} (3.2)

    The jump matrix I-\gamma(k)\gamma^{\dagger}(k^{\ast})B_{1} of the Riemann-Hilbert problem (3.1) is positive definite, so the solution \delta(k) exists and is unique. The scalar Riemann-Hilbert problem (3.2) can be solved by the Plemelj formula,

    \begin{equation} \det\delta(k) = \left(\frac{k-k_{0}}{k+k_{0}}\right)^{i\nu}e^{\chi(k)}, \end{equation} (3.3)

    where

    \begin{equation*} \nu = -\frac{1}{2\pi}\log\left(1-\gamma^{\dagger}(k_{0})B_{1}\gamma(k_{0})\right),\qquad \chi(k) = \frac{1}{2\pi i}\int_{-k_{0}}^{k_{0}}\log\left(\frac{1-\gamma^{\dagger}(\xi)B_{1}\gamma(\xi)}{1-\gamma^{\dagger}(k_{0})B_{1}\gamma(k_{0})}\right)\frac{\mathrm{d}\xi}{\xi-k}. \end{equation*}
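For the reader's convenience, here is the one-line Plemelj computation behind (3.3): the additive jump problem for \log\det\delta has the Cauchy-integral solution, and splitting off the constant value of the density at k_{0} produces the power factor and \chi(k),

```latex
\log\det\delta(k)
 = \frac{1}{2\pi i}\int_{-k_{0}}^{k_{0}}
   \frac{\log\left(1-\gamma^{\dagger}(\xi)B_{1}\gamma(\xi)\right)}{\xi-k}\,\mathrm{d}\xi
 = i\nu\log\left(\frac{k-k_{0}}{k+k_{0}}\right)+\chi(k),
```

since \frac{1}{2\pi i}\log\left(1-\gamma^{\dagger}(k_{0})B_{1}\gamma(k_{0})\right)\int_{-k_{0}}^{k_{0}}\frac{\mathrm{d}\xi}{\xi-k} = i\nu\log\frac{k-k_{0}}{k+k_{0}} with \nu as defined above.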

    Then we have by uniqueness that

    \begin{equation} \delta^{\dagger}(k^{\ast}) = B_{1}^{-1}\left(\delta(k)\right)^{-1}B_{1},\qquad \delta^{\ast}(k^{\ast}) = \sigma_{1}\delta(k)\sigma_{1}. \end{equation} (3.4)

    Substituting (3.4) into (3.1), we have

    \begin{equation} \delta_{+}^{\dagger}(k)B_{1}\delta_{+}(k) = B_{1}-B_{1}\gamma(k)\gamma^{\dagger}(k)B_{1}, \end{equation} (3.5)

    which means that

    \begin{equation} \mathrm{tr}\left[\delta_{+}^{\dagger}(k)B_{1}\delta_{+}(k)\right] = 2a-|B_{1}\gamma(k)|^{2}. \end{equation} (3.6)
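The right-hand side of (3.6) follows from two elementary trace identities for the Hermitian matrix B_{1}:

```latex
\mathrm{tr}\,B_{1} = a+a = 2a,\qquad
\mathrm{tr}\left[B_{1}\gamma(k)\gamma^{\dagger}(k)B_{1}\right]
 = \gamma^{\dagger}(k)B_{1}^{2}\gamma(k) = |B_{1}\gamma(k)|^{2},
```

where the last step uses B_{1}^{\dagger} = B_{1}, so that \gamma^{\dagger}B_{1}^{2}\gamma = (B_{1}\gamma)^{\dagger}(B_{1}\gamma).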

    Actually, the condition (P2) satisfied by \gamma(k) guarantees the boundedness of \delta_{\pm}(k), and we give a brief proof below. When \det B_{1}>0, the Hermitian matrix B_{1} can be decomposed: there exists a triangular matrix S satisfying B_{1} = aS^{\dagger}S, so \mathrm{tr}[\delta_{+}^{\dagger}B_{1}\delta_{+}] = a|S\delta_{+}|^{2}. When \det B_{1}<0 and |a|>0, the matrix B_{1} has a decomposition B_{1} = S^{\dagger}DS, where S is a triangular matrix and D is a diagonal matrix whose diagonal elements have opposite signs. In the case a>0, B_{1} can be decomposed as below,

    \begin{equation} B_{1} = \begin{pmatrix} a&-b\\0&1 \end{pmatrix}^{-1}\begin{pmatrix} a\det B_{1}&0\\0&a \end{pmatrix}\begin{pmatrix} a&0\\-b^{\ast}&1 \end{pmatrix}^{-1}. \end{equation} (3.7)

    We denote S\delta_{+}(k) by (G_{ij})_{2\times2}; since c_{1} = 2a-|B_{1}\gamma(k)|^{2} is negative, then

    \begin{equation} a\det B_{1}\left(|G_{11}|^{2}+|G_{21}|^{2}\right)+a\left(|G_{12}|^{2}+|G_{22}|^{2}\right) = c_{1}. \end{equation} (3.8)

    Noticing that \det B_{1}<0, we can find a negative constant c_{2} satisfying c_{2}\leqslant a\det B_{1}(c_{3}-1)/(1-\det B_{1}c_{3}), where c_{3} is a constant with 0<c_{3}<1, which implies

    \begin{equation} \left|S\delta_{+}(k)\right|^{2}\lesssim c_{1}c_{2}^{-1}. \end{equation} (3.9)

    The case a<0 is similar. In particular, when a = 0, then |b|>0, and it is easy to see that B_{1} is not definite. For |\mathrm{Re}\,b|>0, we have the decomposition

    \begin{equation} B_{1} = \begin{pmatrix} \dfrac{b}{|b|^{2}+(b^{\ast})^{2}}&\dfrac{b^{\ast}}{|b|^{2}+(b^{\ast})^{2}}\\[2mm] \dfrac{b}{|b|^{2}+(b^{\ast})^{2}}&-\dfrac{b^{\ast}}{|b|^{2}+(b^{\ast})^{2}} \end{pmatrix}^{-1}\begin{pmatrix} \dfrac{1}{b+b^{\ast}}&0\\[2mm] 0&-\dfrac{1}{b+b^{\ast}} \end{pmatrix}\begin{pmatrix} \dfrac{b^{\ast}}{|b|^{2}+b^{2}}&\dfrac{b^{\ast}}{|b|^{2}+b^{2}}\\[2mm] \dfrac{b}{|b|^{2}+b^{2}}&-\dfrac{b}{|b|^{2}+b^{2}} \end{pmatrix}^{-1}. \end{equation} (3.10)

    For |Reb|=0, we have

    \begin{equation} B_{1} = \begin{pmatrix} \dfrac{i}{2}&\dfrac{1-i}{2}\\[2mm] \dfrac{i}{2}&\dfrac{1+i}{2} \end{pmatrix}^{-1}\begin{pmatrix} ib/2&0\\ 0&-ib/2 \end{pmatrix}\begin{pmatrix} \dfrac{i}{2}&\dfrac{i}{2}\\[2mm] \dfrac{1+i}{2}&\dfrac{1-i}{2} \end{pmatrix}^{-1}. \end{equation} (3.11)

    So we get the boundedness of |\delta_{+}(k)|. The other cases admit the same analysis:

    \begin{gather} \delta_{-}^{\dagger}(k)B_{1}\delta_{-}(k) = \left(B_{1}^{-1}-\gamma(k)\gamma^{\dagger}(k)\right)^{-1},\quad \mathrm{as}\ k\in(-k_{0},k_{0}), \end{gather} (3.12)
    \begin{gather} |\delta_{+}(k)|^{2} = |\delta_{-}(k)|^{2} = 2,\quad \mathrm{as}\ k\in(-\infty,-k_{0})\cup(k_{0},+\infty), \end{gather} (3.13)
    \begin{gather} |\det\delta_{+}(k)| = \begin{cases} \sqrt{1-\gamma^{\dagger}(k)B_{1}\gamma(k)},&k\in(-k_{0},k_{0}),\\ 1,&k\in(-\infty,-k_{0})\cup(k_{0},+\infty), \end{cases} \end{gather} (3.14)
    \begin{gather} |\det\delta_{-}(k)| = \begin{cases} \dfrac{1}{\sqrt{1-\gamma^{\dagger}(k)B_{1}\gamma(k)}},&k\in(-k_{0},k_{0}),\\ 1,&k\in(-\infty,-k_{0})\cup(k_{0},+\infty). \end{cases} \end{gather} (3.15)

    Hence, by the maximum principle, we have

    \begin{equation} |\delta(k)|\leqslant\mathrm{const}<\infty,\qquad |\det\delta(k)|\leqslant\mathrm{const}<\infty, \end{equation} (3.16)

    for all k\in\mathbb{C}. We define the functions

    \begin{gather} \rho(k) = \begin{cases} \gamma(k),&k\in(-\infty,-k_{0})\cup(k_{0},+\infty),\\ \dfrac{\gamma(k)}{1-\gamma^{\dagger}(k)B_{1}\gamma(k)},&k\in(-k_{0},k_{0}), \end{cases} \end{gather} (3.17)
    \begin{gather} \Delta(k) = \begin{pmatrix} \delta(k)&0\\ 0&\left(\det\delta(k)\right)^{-1} \end{pmatrix}. \end{gather} (3.18)

    We reverse the orientation for k\in(-\infty,-k_{0})\cup(k_{0},+\infty) as in Figure 1, and M^{\Delta}(k;x,t) = M(k;x,t)\Delta^{-1}(k) satisfies the Riemann-Hilbert problem on the reoriented contour

    \begin{gather} \begin{cases} M_{+}^{\Delta}(k;x,t) = M_{-}^{\Delta}(k;x,t)J^{\Delta}(k;x,t),&k\in\mathbb{R},\\ M^{\Delta}(k;x,t)\rightarrow I,&k\rightarrow\infty, \end{cases} \end{gather} (3.19)
    Figure 1.  The reoriented contour on R.

    where the jump matrix JΔ(k;x,t) has a decomposition

    \begin{equation} J^{\Delta}(k;x,t) = (b_{-})^{-1}b_{+} = \begin{pmatrix} I&0\\ \dfrac{e^{2it\theta(k)}\rho^{\dagger}(k)B_{1}\delta_{-}^{-1}(k)}{\det\delta_{-}(k)}&1 \end{pmatrix}\begin{pmatrix} I&e^{-2it\theta}\delta_{+}(k)\rho(k)\left[\det\delta_{+}(k)\right]\\ 0&1 \end{pmatrix}. \end{equation} (3.20)

    For the convenience of discussion, we define

    \begin{equation*} \begin{split} L = &\left\{k = k_{0}+\alpha k_{0}e^{\frac{3\pi i}{4}}:-\infty<\alpha\leqslant\sqrt{2}\right\}\cup\left\{k = -k_{0}+\alpha k_{0}e^{\frac{\pi i}{4}}:-\infty<\alpha\leqslant\sqrt{2}\right\},\\ L_{\epsilon} = &\left\{k = k_{0}+\alpha k_{0}e^{\frac{3\pi i}{4}}:\epsilon<\alpha\leqslant\sqrt{2}\right\}\cup\left\{k = -k_{0}+\alpha k_{0}e^{\frac{\pi i}{4}}:\epsilon<\alpha\leqslant\sqrt{2}\right\}. \end{split} \end{equation*}

    Theorem 3.1. The vector function ρ(k) has a decomposition

    \begin{equation*} \rho(k) = h_{1}(k)+h_{2}(k)+R(k),\qquad k\in\mathbb{R}, \end{equation*}

    where R(k) is a piecewise-rational function and h_{2}(k) has an analytic continuation to L. Besides, they admit the following estimates:

    \begin{gather} \left|e^{-2it\theta(k)}h_{1}(k)\right|\lesssim\frac{1}{(1+|k|^{2})t^{l}},\qquad k\in\mathbb{R}, \end{gather} (3.21)
    \begin{gather} \left|e^{-2it\theta(k)}h_{2}(k)\right|\lesssim\frac{1}{(1+|k|^{2})t^{l}},\qquad k\in L, \end{gather} (3.22)
    \begin{gather} \left|e^{-2it\theta(k)}R(k)\right|\lesssim e^{-16\epsilon^{2}k_{0}^{3}t},\qquad k\in L_{\epsilon}, \end{gather} (3.23)

    for an arbitrary positive integer l. Considering the Schwartz conjugate

    \begin{equation*} \rho^{\dagger}(k^{\ast}) = R^{\dagger}(k^{\ast})+h_{1}^{\dagger}(k^{\ast})+h_{2}^{\dagger}(k^{\ast}), \end{equation*}

    we can obtain the same estimates for e^{2it\theta(k)}h_{1}^{\dagger}(k^{\ast}), e^{2it\theta(k)}h_{2}^{\dagger}(k^{\ast}) and e^{2it\theta(k)}R^{\dagger}(k^{\ast}) on \mathbb{R}\cup L^{\ast}.

    Proof. It follows from Proposition 4.2 in [18].

    A direct calculation shows that b_{\pm} of (3.20) can be decomposed further:

    \begin{equation*} \begin{split} b_{+}& = b_{+}^{o}b_{+}^{a} = (I_{3\times3}+\omega_{+}^{o})(I_{3\times3}+\omega_{+}^{a})\\ & = \begin{pmatrix} I_{2\times2}&e^{-2it\theta}\left[\det\delta_{+}(k)\right]\delta_{+}(k)h_{1}(k)\\ 0&1 \end{pmatrix}\begin{pmatrix} I_{2\times2}&e^{-2it\theta}\left[\det\delta_{+}(k)\right]\delta_{+}(k)\left[h_{2}(k)+R(k)\right]\\ 0&1 \end{pmatrix},\\ b_{-}& = b_{-}^{o}b_{-}^{a} = (I_{3\times3}-\omega_{-}^{o})(I_{3\times3}-\omega_{-}^{a})\\ & = \begin{pmatrix} I_{2\times2}&0\\ \dfrac{e^{2it\theta}h_{1}^{\dagger}(k^{\ast})B_{1}\delta_{-}^{-1}(k)}{\det\delta_{-}(k)}&1 \end{pmatrix}\begin{pmatrix} I_{2\times2}&0\\ \dfrac{e^{2it\theta}\left[h_{2}^{\dagger}(k^{\ast})+R^{\dagger}(k^{\ast})\right]B_{1}\delta_{-}^{-1}(k)}{\det\delta_{-}(k)}&1 \end{pmatrix}. \end{split} \end{equation*}

    Define the oriented contour \Sigma by \Sigma = L\cup L^{\ast} as in Figure 2. Let

    \begin{equation} M^{\prime}(k;x,t) = \begin{cases} M^{\Delta}(k;x,t),&k\in\Omega_{1}\cup\Omega_{2},\\ M^{\Delta}(k;x,t)(b_{+}^{a})^{-1},&k\in\Omega_{3}\cup\Omega_{4}\cup\Omega_{5},\\ M^{\Delta}(k;x,t)(b_{-}^{a})^{-1},&k\in\Omega_{6}\cup\Omega_{7}\cup\Omega_{8}. \end{cases} \end{equation} (3.24)
    Figure 2.  The contour Σ.

    Lemma 3.1. M^{\prime}(k;x,t) is the solution of the Riemann-Hilbert problem

    \begin{equation} \begin{cases} M_{+}^{\prime}(k;x,t) = M_{-}^{\prime}(k;x,t)J^{\prime}(k;x,t),&k\in\Sigma,\\ M^{\prime}(k;x,t)\rightarrow I,&k\rightarrow\infty, \end{cases} \end{equation} (3.25)

    where the jump matrix J^{\prime}(k;x,t) satisfies

    \begin{equation} J^{\prime}(k;x,t) = (b_{-}^{\prime})^{-1}b_{+}^{\prime} = \begin{cases} b_{+}^{a},&k\in L,\\ (b_{-}^{a})^{-1},&k\in L^{\ast},\\ (b_{-}^{o})^{-1}b_{+}^{o},&k\in\mathbb{R}. \end{cases} \end{equation} (3.26)

    Proof. We can construct the Riemann-Hilbert problem (3.25) based on the Riemann-Hilbert problem (3.19) and the decomposition of b_{\pm}. In the meantime, the asymptotics of M^{\prime}(k;x,t) is derived from the convergence of b_{\pm} as k\rightarrow\infty. For fixed x and t, we pay attention to the domain \Omega_{3}. Noticing the boundedness of \delta(k) and \det\delta(k) in (3.16), we arrive at

    \begin{equation*} \left|e^{-2it\theta}\left[\det\delta(k)\right]\left[h_{2}(k)+R(k)\right]\delta(k)\right|\lesssim\left|e^{-2it\theta}h_{2}(k)\right|+\left|e^{-2it\theta}R(k)\right|. \end{equation*}

    Consider the definition of R(k) in this domain,

    \begin{equation*} \left|e^{-2it\theta}h_{2}(k)\right|\lesssim\frac{1}{|k+i|^{2}},\qquad \left|e^{-2it\theta}R(k)\right|\lesssim\frac{\left|\sum_{i = 0}^{m}\mu_{i}(k-k_{0})^{i}\right|}{\left|(k+i)^{m+5}\right|}\lesssim\frac{1}{|k+i|^{5}}, \end{equation*}

    where m is a positive integer and \mu_{i} is the coefficient of the Taylor series around k_{0}. Combining this with the boundedness of h_{2}(k) in Theorem 3.1, we obtain that M^{\prime}(k;x,t)\rightarrow I when k\in\Omega_{3} and k\rightarrow\infty. The other domains are similar.

    The above Riemann-Hilbert problem (3.25) can be solved as follows. Set

    \begin{equation*} \omega_{\pm} = \pm(b_{\pm}^{\prime}-I),\qquad \omega = \omega_{+}+\omega_{-}. \end{equation*}

    Let

    \begin{equation} (C_{\pm}f)(k) = \int_{\Sigma}\frac{f(\xi)}{\xi-k_{\pm}}\,\frac{\mathrm{d}\xi}{2\pi i},\qquad f\in\mathscr{L}^{2}(\Sigma), \end{equation} (3.27)

    denote the Cauchy operators, where C_{+}f (C_{-}f) denotes the left (right) boundary value on the oriented contour \Sigma in Figure 2. Define the operator C_{\omega}:\mathscr{L}^{2}(\Sigma)+\mathscr{L}^{\infty}(\Sigma)\rightarrow\mathscr{L}^{2}(\Sigma) by

    \begin{equation} C_{\omega}f = C_{+}(f\omega_{-})+C_{-}(f\omega_{+}) \end{equation} (3.28)

    for the 3×3 matrix function f.

    Lemma 3.2 (Beals-Coifman). Suppose that \mu(k;x,t)\in\mathscr{L}^{2}(\Sigma)+\mathscr{L}^{\infty}(\Sigma) is the solution of the singular integral equation

    \begin{equation*} \mu = I+C_{\omega}\mu. \end{equation*}

    Then

    \begin{equation*} M^{\prime}(k;x,t) = I+\int_{\Sigma}\frac{\mu(\xi;x,t)\,\omega(\xi;x,t)}{\xi-k}\,\frac{\mathrm{d}\xi}{2\pi i} \end{equation*}

    is the solution of the Riemann-Hilbert problem (3.25).

    Proof. See [18], p. 322, and [40].

    Theorem 3.2. The expression of the solution q(x,t) can be written as

    \begin{equation} q(x,t) = (u(x,t),u^{\ast}(x,t))^{T} = -\frac{1}{\pi}\left(\int_{\Sigma}\left((1-C_{\omega})^{-1}I\right)(\xi)\,\omega(\xi)\,\mathrm{d}\xi\right)_{12}. \end{equation} (3.29)

    Proof. From (2.21), (3.24) and Lemma 3.2, the solution q(x,t) of the generalized Sasa-Satsuma equation is expressed by

    \begin{equation*} q(x,t) = \lim\limits_{k\rightarrow\infty}2i\left[k\left(M^{\prime}(k;x,t)\right)_{12}\right] = -\frac{1}{\pi}\left(\int_{\Sigma}\mu(\xi;x,t)\,\omega(\xi)\,\mathrm{d}\xi\right)_{12} = -\frac{1}{\pi}\left(\int_{\Sigma}\left((1-C_{\omega})^{-1}I\right)(\xi)\,\omega(\xi)\,\mathrm{d}\xi\right)_{12}. \end{equation*}
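The prefactor in (3.29) comes from the large-k expansion of the Beals-Coifman representation: expanding \frac{1}{\xi-k} = -\frac{1}{k}\left(1+\frac{\xi}{k}+\cdots\right) gives

```latex
k\left(M^{\prime}(k)-I\right)
 = k\int_{\Sigma}\frac{\mu(\xi)\omega(\xi)}{\xi-k}\,\frac{\mathrm{d}\xi}{2\pi i}
 \;\xrightarrow[k\rightarrow\infty]{}\;
 -\frac{1}{2\pi i}\int_{\Sigma}\mu(\xi)\omega(\xi)\,\mathrm{d}\xi,
```

and combining with q = 2i\lim_{k\rightarrow\infty}k\left(M^{\prime}\right)_{12} yields the factor 2i\cdot\left(-\frac{1}{2\pi i}\right) = -\frac{1}{\pi}.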

    Set \Sigma^{\prime} = \Sigma\setminus(\mathbb{R}\cup L_{\epsilon}\cup L_{\epsilon}^{\ast}), oriented as in Figure 3. We will convert the Riemann-Hilbert problem on the contour \Sigma to a Riemann-Hilbert problem on the contour \Sigma^{\prime} and estimate the error between the two Riemann-Hilbert problems. Let \omega = \omega^{e}+\omega^{\prime} = \omega^{a}+\omega^{b}+\omega^{c}+\omega^{\prime}, where \omega^{a} = \omega|_{\mathbb{R}} is supported on \mathbb{R} and is composed of the terms of type h_{1}(k) and h_{1}^{\dagger}(k^{\ast}); \omega^{b} is supported on L\cup L^{\ast} and is composed of the contribution to \omega from the terms of type h_{2}(k) and h_{2}^{\dagger}(k^{\ast}); \omega^{c} is supported on L_{\epsilon}\cup L_{\epsilon}^{\ast} and is composed of the contribution to \omega from the terms of type R(k) and R^{\dagger}(k^{\ast}).

    Figure 3.  The contour Σ.

    Lemma 3.3. For an arbitrary positive integer l, as t\rightarrow\infty,

    \begin{gather} \Vert\omega^{a}\Vert_{\mathscr{L}^{1}(\mathbb{R})\cap\mathscr{L}^{2}(\mathbb{R})\cap\mathscr{L}^{\infty}(\mathbb{R})}\lesssim t^{-l}, \end{gather} (3.30)
    \begin{gather} \Vert\omega^{b}\Vert_{\mathscr{L}^{1}(L\cup L^{\ast})\cap\mathscr{L}^{2}(L\cup L^{\ast})\cap\mathscr{L}^{\infty}(L\cup L^{\ast})}\lesssim t^{-l}, \end{gather} (3.31)
    \begin{gather} \Vert\omega^{c}\Vert_{\mathscr{L}^{1}(L_{\epsilon}\cup L_{\epsilon}^{\ast})\cap\mathscr{L}^{2}(L_{\epsilon}\cup L_{\epsilon}^{\ast})\cap\mathscr{L}^{\infty}(L_{\epsilon}\cup L_{\epsilon}^{\ast})}\lesssim e^{-16\epsilon^{2}k_{0}^{3}t}, \end{gather} (3.32)
    \begin{gather} \Vert\omega^{\prime}\Vert_{\mathscr{L}^{2}(\Sigma^{\prime})}\lesssim(tk_{0}^{3})^{-\frac{1}{4}},\qquad \Vert\omega^{\prime}\Vert_{\mathscr{L}^{1}(\Sigma^{\prime})}\lesssim(tk_{0}^{3})^{-\frac{1}{2}}. \end{gather} (3.33)

    Proof. The estimates (3.30), (3.31) and (3.32) follow from Theorem 3.1. Afterwards, we consider the definition of R(k) on the contour \{k = k_{0}+\alpha k_{0}e^{\frac{3\pi i}{4}}:-\infty<\alpha<\epsilon\},

    \begin{equation*} |R(k)|\lesssim(1+|k|^{5})^{-1}. \end{equation*}

    Resorting to \mathrm{Re}(i\theta)\geqslant8\alpha^{2}k_{0}^{3} and the boundedness of \delta(k) and \det\delta(k) in (3.16), we can obtain

    \begin{equation*} \left|e^{-2it\theta}\left[\det\delta(k)\right]R(k)\delta(k)\right|\lesssim e^{-16tk_{0}^{3}\alpha^{2}}\left(1+|k|^{5}\right)^{-1}. \end{equation*}

    Then we obtain (3.33) by simple computations.

    Lemma 3.4. As t\rightarrow\infty, (1-C_{\omega})^{-1}:\mathscr{L}^{2}(\Sigma)\rightarrow\mathscr{L}^{2}(\Sigma) exists and is uniformly bounded:

    \begin{equation*} \Vert(1-C_{\omega})^{-1}\Vert_{\mathscr{L}^{2}(\Sigma)}\lesssim1. \end{equation*}

    Furthermore, \Vert(1-C_{\omega^{\prime}})^{-1}\Vert_{\mathscr{L}^{2}(\Sigma)}\lesssim1.

    Proof. It follows from Proposition 2.23 and Corollary 2.25 in [18].

    Lemma 3.5. As t\rightarrow\infty,

    \begin{equation} \int_{\Sigma}\left((1-C_{\omega})^{-1}I\right)(\xi)\,\omega(\xi)\,\mathrm{d}\xi = \int_{\Sigma^{\prime}}\left((1-C_{\omega^{\prime}})^{-1}I\right)(\xi)\,\omega^{\prime}(\xi)\,\mathrm{d}\xi+O\left((tk_{0}^{3})^{-l}\right). \end{equation} (3.34)

    Proof. A simple computation shows that

    \begin{equation} \begin{split} \left((1-C_{\omega})^{-1}I\right)\omega = &\left((1-C_{\omega^{\prime}})^{-1}I\right)\omega^{\prime}+\omega^{e}+\left((1-C_{\omega^{\prime}})^{-1}(C_{\omega^{e}}I)\right)\omega\\ &+\left((1-C_{\omega^{\prime}})^{-1}(C_{\omega^{\prime}}I)\right)\omega^{e}+\left((1-C_{\omega})^{-1}C_{\omega^{e}}(1-C_{\omega^{\prime}})^{-1}\right)(C_{\omega^{\prime}}I)\omega. \end{split} \end{equation} (3.35)

    After a series of tedious computations utilizing the consequences of Lemma 3.4, we arrive at

    \begin{equation*} \begin{split} &\Vert\omega^{e}\Vert_{\mathscr{L}^{1}(\Sigma)}\lesssim\Vert\omega^{a}\Vert_{\mathscr{L}^{1}(\mathbb{R})}+\Vert\omega^{b}\Vert_{\mathscr{L}^{1}(L\cup L^{\ast})}+\Vert\omega^{c}\Vert_{\mathscr{L}^{1}(L_{\epsilon}\cup L_{\epsilon}^{\ast})}\lesssim(tk_{0}^{3})^{-l},\\ &\left\Vert\left((1-C_{\omega^{\prime}})^{-1}(C_{\omega^{e}}I)\right)\omega\right\Vert_{\mathscr{L}^{1}(\Sigma)}\leqslant\Vert(1-C_{\omega^{\prime}})^{-1}\Vert_{\mathscr{L}^{2}(\Sigma)}\Vert C_{\omega^{e}}I\Vert_{\mathscr{L}^{2}(\Sigma)}\Vert\omega\Vert_{\mathscr{L}^{2}(\Sigma)}\lesssim\Vert\omega^{e}\Vert_{\mathscr{L}^{2}(\Sigma)}\Vert\omega\Vert_{\mathscr{L}^{2}(\Sigma)}\lesssim(tk_{0}^{3})^{-l-\frac{1}{4}},\\ &\left\Vert\left((1-C_{\omega^{\prime}})^{-1}(C_{\omega^{\prime}}I)\right)\omega^{e}\right\Vert_{\mathscr{L}^{1}(\Sigma)}\leqslant\Vert(1-C_{\omega^{\prime}})^{-1}\Vert_{\mathscr{L}^{2}(\Sigma)}\Vert C_{\omega^{\prime}}I\Vert_{\mathscr{L}^{2}(\Sigma)}\Vert\omega^{e}\Vert_{\mathscr{L}^{2}(\Sigma)}\lesssim\Vert\omega\Vert_{\mathscr{L}^{2}(\Sigma)}\Vert\omega^{e}\Vert_{\mathscr{L}^{2}(\Sigma)}\lesssim(tk_{0}^{3})^{-l-\frac{1}{4}},\\ &\left\Vert\left((1-C_{\omega})^{-1}C_{\omega^{e}}(1-C_{\omega^{\prime}})^{-1}\right)(C_{\omega^{\prime}}I)\omega\right\Vert_{\mathscr{L}^{1}(\Sigma)}\leqslant\Vert(1-C_{\omega})^{-1}\Vert_{\mathscr{L}^{2}(\Sigma)}\Vert(1-C_{\omega^{\prime}})^{-1}\Vert_{\mathscr{L}^{2}(\Sigma)}\Vert C_{\omega^{e}}\Vert_{\mathscr{L}^{2}(\Sigma)}\Vert C_{\omega^{\prime}}I\Vert_{\mathscr{L}^{2}(\Sigma)}\\ &\qquad\lesssim\Vert\omega^{e}\Vert_{\mathscr{L}^{\infty}(\Sigma)}\Vert\omega\Vert^{2}_{\mathscr{L}^{2}(\Sigma)}\lesssim(tk_{0}^{3})^{-l-\frac{1}{2}}. \end{split} \end{equation*}

    Then the proof is accomplished as long as we substitute the estimates above into (3.35).

    Notice that \omega^{\prime}(k) = 0 when k\in\Sigma\setminus\Sigma^{\prime}; let C_{\omega^{\prime}}|_{\mathscr{L}^{2}(\Sigma^{\prime})} denote the restriction of C_{\omega^{\prime}} to \mathscr{L}^{2}(\Sigma^{\prime}). For simplicity, we write C_{\omega^{\prime}}|_{\mathscr{L}^{2}(\Sigma^{\prime})} as C_{\omega^{\prime}}. Then

    \begin{equation*} \int_{\Sigma}\left((1-C_{\omega^{\prime}})^{-1}I\right)(\xi)\,\omega^{\prime}(\xi)\,\mathrm{d}\xi = \int_{\Sigma^{\prime}}\left((1-C_{\omega^{\prime}})^{-1}I\right)(\xi)\,\omega^{\prime}(\xi)\,\mathrm{d}\xi. \end{equation*}

    Lemma 3.6. As t\rightarrow\infty,

    \begin{equation} q(x,t) = (u(x,t),u^{\ast}(x,t))^{T} = -\frac{1}{\pi}\left(\int_{\Sigma^{\prime}}\left((1-C_{\omega^{\prime}})^{-1}I\right)(\xi)\,\omega^{\prime}(\xi)\,\mathrm{d}\xi\right)_{12}+O\left((tk_{0}^{3})^{-l}\right). \end{equation} (3.36)

    Proof. From (3.29) and (3.34), we can obtain the result directly.

    Let L^{\prime} = L\setminus L_{\epsilon} and \mu^{\prime} = (1-C_{\omega^{\prime}})^{-1}I. Then

    \begin{equation*} M^{\prime}(k;x,t) = I+\int_{\Sigma^{\prime}}\frac{\mu^{\prime}(\xi;x,t)\,\omega^{\prime}(\xi;x,t)}{\xi-k}\,\frac{\mathrm{d}\xi}{2\pi i} \end{equation*}

    solves the Riemann-Hilbert problem

    \begin{equation*} \begin{cases} M_{+}^{\prime}(k;x,t) = M_{-}^{\prime}(k;x,t)J^{\prime}(k;x,t),&k\in\Sigma^{\prime},\\ M^{\prime}(k;x,t)\rightarrow I,&k\rightarrow\infty, \end{cases} \end{equation*}

    where

    \begin{equation*} \begin{split} &J^{\prime} = (b_{-}^{\prime})^{-1}b_{+}^{\prime} = (I-\omega_{-}^{\prime})^{-1}(I+\omega_{+}^{\prime}),\qquad \omega^{\prime} = \omega_{+}^{\prime}+\omega_{-}^{\prime},\\ &b_{+}^{\prime} = \begin{pmatrix} I&e^{-2it\theta}\left[\det\delta(k)\right]\delta(k)R(k)\\ 0&1 \end{pmatrix},\quad b_{-}^{\prime} = I,\quad \text{on}\ L^{\prime},\\ &b_{+}^{\prime} = I,\quad b_{-}^{\prime} = \begin{pmatrix} I&0\\ \dfrac{e^{2it\theta}R^{\dagger}(k^{\ast})B_{1}\delta^{-1}(k)}{\det\delta(k)}&1 \end{pmatrix},\quad \text{on}\ (L^{\prime})^{\ast}. \end{split} \end{equation*}

    Let the contour \Sigma^{\prime} = \Sigma_{A}^{\prime}\cup\Sigma_{B}^{\prime} and \omega_{\pm}^{\prime} = \omega_{A\pm}^{\prime}+\omega_{B\pm}^{\prime}, where

    \begin{equation} \omega_{A\pm}^{\prime}(k) = \begin{cases} \omega_{\pm}^{\prime}(k),&k\in\Sigma_{A}^{\prime},\\ 0,&k\in\Sigma_{B}^{\prime}, \end{cases}\qquad \omega_{B\pm}^{\prime}(k) = \begin{cases} 0,&k\in\Sigma_{A}^{\prime},\\ \omega_{\pm}^{\prime}(k),&k\in\Sigma_{B}^{\prime}. \end{cases} \end{equation} (3.37)

    Define the operators C_{\omega_{A}^{\prime}} and C_{\omega_{B}^{\prime}}:\mathscr{L}^{2}(\Sigma^{\prime})+\mathscr{L}^{\infty}(\Sigma^{\prime})\rightarrow\mathscr{L}^{2}(\Sigma^{\prime}) as in definition (3.28).

    Lemma 3.7.

    \begin{gather*} \Vert C_{\omega_{B}^{\prime}}C_{\omega_{A}^{\prime}}\Vert_{\mathscr{L}^{2}(\Sigma^{\prime})} = \Vert C_{\omega_{A}^{\prime}}C_{\omega_{B}^{\prime}}\Vert_{\mathscr{L}^{2}(\Sigma^{\prime})}\lesssim_{k_{0}}(tk_{0}^{3})^{-\frac{1}{2}},\\ \Vert C_{\omega_{B}^{\prime}}C_{\omega_{A}^{\prime}}\Vert_{\mathscr{L}^{\infty}(\Sigma^{\prime})\rightarrow\mathscr{L}^{2}(\Sigma^{\prime})},\ \Vert C_{\omega_{A}^{\prime}}C_{\omega_{B}^{\prime}}\Vert_{\mathscr{L}^{\infty}(\Sigma^{\prime})\rightarrow\mathscr{L}^{2}(\Sigma^{\prime})}\lesssim_{k_{0}}(tk_{0}^{3})^{-\frac{3}{4}}. \end{gather*}

    Proof. See Lemma 3.5 in [18].

    Lemma 3.8. As t\rightarrow\infty,

    \begin{equation} \begin{split} \int_{\Sigma^{\prime}}\left((1-C_{\omega^{\prime}})^{-1}I\right)(\xi)\,\omega^{\prime}(\xi)\,\mathrm{d}\xi = &\int_{\Sigma_{A}^{\prime}}\left((1-C_{\omega_{A}^{\prime}})^{-1}I\right)(\xi)\,\omega_{A}^{\prime}(\xi)\,\mathrm{d}\xi\\ &+\int_{\Sigma_{B}^{\prime}}\left((1-C_{\omega_{B}^{\prime}})^{-1}I\right)(\xi)\,\omega_{B}^{\prime}(\xi)\,\mathrm{d}\xi+O\left(\frac{c(k_{0})}{t}\right). \end{split} \end{equation} (3.38)

    Proof. From identity

    \begin{equation*} \begin{split} &\left(1-C_{\omega_{A}^{\prime}}-C_{\omega_{B}^{\prime}}\right)\left(1+C_{\omega_{A}^{\prime}}(1-C_{\omega_{A}^{\prime}})^{-1}+C_{\omega_{B}^{\prime}}(1-C_{\omega_{B}^{\prime}})^{-1}\right)\\ &\qquad = 1-C_{\omega_{B}^{\prime}}C_{\omega_{A}^{\prime}}(1-C_{\omega_{A}^{\prime}})^{-1}-C_{\omega_{A}^{\prime}}C_{\omega_{B}^{\prime}}(1-C_{\omega_{B}^{\prime}})^{-1}, \end{split} \end{equation*}
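This identity can be verified directly: writing C_{A} = C_{\omega_{A}^{\prime}}, C_{B} = C_{\omega_{B}^{\prime}} and using C_{A}(1-C_{A})^{-1} = (1-C_{A})^{-1}-1,

```latex
\begin{aligned}
(1-C_{A}-C_{B})&\left[(1-C_{A})^{-1}+(1-C_{B})^{-1}-1\right]\\
& = 1-C_{B}(1-C_{A})^{-1}+1-C_{A}(1-C_{B})^{-1}-(1-C_{A}-C_{B})\\
& = 1-C_{B}\left[(1-C_{A})^{-1}-1\right]-C_{A}\left[(1-C_{B})^{-1}-1\right]\\
& = 1-C_{B}C_{A}(1-C_{A})^{-1}-C_{A}C_{B}(1-C_{B})^{-1}.
\end{aligned}
```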

    we have

    \begin{equation*} \begin{split} (1-C_{\omega^{\prime}})^{-1} = &\ 1+C_{\omega_{A}^{\prime}}(1-C_{\omega_{A}^{\prime}})^{-1}+C_{\omega_{B}^{\prime}}(1-C_{\omega_{B}^{\prime}})^{-1}\\ &+\left[1+C_{\omega_{A}^{\prime}}(1-C_{\omega_{A}^{\prime}})^{-1}+C_{\omega_{B}^{\prime}}(1-C_{\omega_{B}^{\prime}})^{-1}\right]\left[1-C_{\omega_{B}^{\prime}}C_{\omega_{A}^{\prime}}(1-C_{\omega_{A}^{\prime}})^{-1}-C_{\omega_{A}^{\prime}}C_{\omega_{B}^{\prime}}(1-C_{\omega_{B}^{\prime}})^{-1}\right]^{-1}\\ &\qquad\times\left[C_{\omega_{B}^{\prime}}C_{\omega_{A}^{\prime}}(1-C_{\omega_{A}^{\prime}})^{-1}+C_{\omega_{A}^{\prime}}C_{\omega_{B}^{\prime}}(1-C_{\omega_{B}^{\prime}})^{-1}\right]. \end{split} \end{equation*}

    Based on Lemma 3.7 and Lemma 3.4, we arrive at (3.38).

    For the sake of convenience, we write the restriction C_{\omega_{A}^{\prime}}|_{\mathscr{L}^{2}(\Sigma_{A}^{\prime})} as C_{\omega_{A}^{\prime}}, and similarly for C_{\omega_{B}^{\prime}}. From the consequences of Lemma 3.6 and Lemma 3.8, as t\rightarrow\infty, we have

    \begin{equation} q(x,t) = -\frac{1}{\pi}\left(\int_{\Sigma_{A}^{\prime}}\left((1-C_{\omega_{A}^{\prime}})^{-1}I\right)(\xi)\,\omega_{A}^{\prime}(\xi)\,\mathrm{d}\xi\right)_{12}-\frac{1}{\pi}\left(\int_{\Sigma_{B}^{\prime}}\left((1-C_{\omega_{B}^{\prime}})^{-1}I\right)(\xi)\,\omega_{B}^{\prime}(\xi)\,\mathrm{d}\xi\right)_{12}+O\left(\frac{c(k_{0})}{t}\right). \end{equation} (3.39)

    Extend the contours \Sigma_{A}^{\prime} and \Sigma_{B}^{\prime} to the contours

    \begin{gather} \hat{\Sigma}_{A}^{\prime} = \left\{k = -k_{0}+k_{0}\alpha e^{\pm\frac{\pi i}{4}}:\alpha\in\mathbb{R}\right\}, \end{gather} (3.40)
    \begin{gather} \hat{\Sigma}_{B}^{\prime} = \left\{k = k_{0}+k_{0}\alpha e^{\pm\frac{3\pi i}{4}}:\alpha\in\mathbb{R}\right\}, \end{gather} (3.41)

    respectively. We introduce \hat{\omega}_{A}^{\prime} and \hat{\omega}_{B}^{\prime} on \hat{\Sigma}_{A}^{\prime} and \hat{\Sigma}_{B}^{\prime}, respectively, by

    \begin{equation} \hat{\omega}_{A\pm}^{\prime} = \begin{cases} \omega_{A\pm}^{\prime}(k),&k\in\Sigma_{A}^{\prime},\\ 0,&k\in\hat{\Sigma}_{A}^{\prime}\setminus\Sigma_{A}^{\prime}, \end{cases}\qquad \hat{\omega}_{B\pm}^{\prime} = \begin{cases} \omega_{B\pm}^{\prime}(k),&k\in\Sigma_{B}^{\prime},\\ 0,&k\in\hat{\Sigma}_{B}^{\prime}\setminus\Sigma_{B}^{\prime}. \end{cases} \end{equation} (3.42)

    Let \Sigma_{A} and \Sigma_{B} denote the contours \{k = k_{0}\alpha e^{\pm\frac{\pi i}{4}}:\alpha\in\mathbb{R}\} , oriented inward as in \Sigma^{\prime}_{A} , \hat{\Sigma}^{\prime}_{A} , and outward as in \Sigma^{\prime}_{B} , \hat{\Sigma}^{\prime}_{B} , respectively. Define the scaling operators

    \begin{gather} \begin{split} N_{A}:\ &\mathscr{L}^{2}(\hat{\Sigma}^{\prime}_{A})\rightarrow\mathscr{L}^{2}(\Sigma_{A}), \\ &f(k)\rightarrow(N_{A}f)(k) = f(\frac{k}{\sqrt{48tk_{0}}}-k_{0}), \end{split} \end{gather} (3.43)
    \begin{gather} \begin{split} N_{B}:\ &\mathscr{L}^{2}(\hat{\Sigma}^{\prime}_{B})\rightarrow\mathscr{L}^{2}(\Sigma_{B}), \\ &f(k)\rightarrow(N_{B}f)(k) = f(\frac{k}{\sqrt{48tk_{0}}}+k_{0}), \end{split} \end{gather} (3.44)

    and set

    \begin{equation*} \omega_{A} = N_{A}\hat{\omega}^{\prime}_{A}, \quad \omega_{B} = N_{B}\hat{\omega}^{\prime}_{B}. \end{equation*}

    A simple change-of-variables argument shows that

    \begin{equation*} \label{Cwprime definition} C_{\hat{\omega}^{\prime}_{A}} = N^{-1}_{A}C_{\omega_{A}}N_{A}, \quad C_{\hat{\omega}^{\prime}_{B}} = N^{-1}_{B}C_{\omega_{B}}N_{B}, \end{equation*}

    where the operator C_{\omega_{A}}\ (C_{\omega_{B}}) is a bounded map from \mathscr{L}^{2}(\Sigma_{A})\ (\mathscr{L}^{2}(\Sigma_{B})) into \mathscr{L}^{2}(\Sigma_{A})\ (\mathscr{L}^{2}(\Sigma_{B})) . On the part

    \begin{equation*} L_{A} = \left\{k = \alpha k_{0}\sqrt{48tk_{0}}e^{\frac{3\pi i}{4}}:-\epsilon \lt \alpha \lt +\infty\right\} \end{equation*}

    of \Sigma_{A} , we have

    \begin{equation*} \omega_{A} = \omega_{A+} = \left(\begin{matrix}0&(N_{A}s_{1})(k)\cr 0&0\end{matrix}\right), \end{equation*}

    on L^{\ast}_{A} we have

    \begin{equation*} \omega_{A} = \omega_{A-} = \left(\begin{matrix}0&0\cr (N_{A}s_{2})(k)&0\end{matrix}\right), \end{equation*}

    where

    \begin{equation*} s_{1}(k) = -e^{-2it\theta(k)}[\det\delta(k)]\delta(k)R(k), \quad s_{2}(k) = \frac{e^{2it\theta}R^{\dagger}(k^{\ast})B_1\delta^{-1}(k)}{\det\delta(k)}. \end{equation*}

    Lemma 3.9. As t\rightarrow\infty and k\in L_{A} ,

    \begin{equation} \left|(N_{A}\tilde{\delta})(k)\right|\lesssim t^{-l}, \end{equation} (3.45)

    where \tilde{\delta}(k) = e^{-2it\theta(k)}[\delta(k)R(k)-(\mathrm{det}\delta(k))R(k)] .

    Proof. It follows from (3.1) and (3.2) that \tilde{\delta} satisfies the following Riemann-Hilbert problem:

    \begin{equation} \begin{cases} \tilde{\delta}_{+}(k) = \tilde{\delta}_{-}(k)(1-\gamma^\dagger(k^\ast)B_1\gamma(k))+e^{-2it\theta}f(k), & k\in(-k_{0}, k_{0}), \\ \tilde{\delta}(k)\rightarrow0, & k\rightarrow\infty. \end{cases} \end{equation} (3.46)

    where f(k) = \delta_{-}(k)[\gamma^\dagger(k^\ast)B_1\gamma(k)I-\gamma(k)\gamma^\dagger(k^\ast)B_1]R(k) . The solution for the above Riemann-Hilbert problem can be expressed by

    \begin{gather*} \tilde{\delta}(k) = X(k)\int_{k_{0}}^{-k_{0}}\frac{e^{-2it\theta(\xi)}f(\xi)}{X_{+}(\xi)(\xi-k)}\, \frac{\mathrm{d}\xi}{2\pi i}, \\ X(k) = \mathrm{exp}\left\{{\frac{1}{2\pi i}\int_{k_{0}}^{-k_{0}}\frac{\log(1-\left|\gamma(\xi)\right|^{2})}{\xi-k}}\, \mathrm{d}\xi\right\}. \end{gather*}

    Observing that

    \begin{equation*} \begin{split} (\gamma^\dagger(k^\ast)B_1\gamma(k)I-\gamma(k)\gamma^\dagger(k^\ast)B_1)R(k)& = (\gamma^\dagger(k^\ast)B_1\gamma(k)I-\gamma(k)\gamma^\dagger(k^\ast)B_1)(R(k)-\rho(k))\\ & = \mathrm{adj}[B_1]\mathrm{adj}[\gamma(k)\gamma^\dagger(k^\ast)](h_{1}(k)+h_{2}(k)), \end{split} \end{equation*}

    we obtain f(k) = O((k^{2}-k^{2}_{0})^{l}). Similar to Theorem 3.1, f(k) can be decomposed into two parts: f(k) = f_{1}(k)+f_{2}(k) , and

    \begin{equation} \left|e^{-2it\theta(k)}f_{1}(k)\right|\lesssim\frac{1}{(1+\left|k\right|^{2})t^{l}}, \quad k\in\mathbb{R}, \end{equation} (3.47)
    \begin{equation} \left|e^{-2it\theta(k)}f_{2}(k)\right|\lesssim\frac{1}{(1+\left|k\right|^{2})t^{l}}, \quad k\in L_{t}, \end{equation} (3.48)

    where f_{2}(k) has an analytic continuation to L_{t} , l is a positive integer and l\geqslant2 ,

    \begin{equation*} \begin{split} L_{t} = &\left\{k = k_{0}+k_{0}\alpha e^{-\frac{3\pi i}{4}}:0\leqslant\alpha\leqslant\sqrt{2}(1-\frac{1}{2t})\right\}\\ &\cup\left\{k = \frac{k_{0}}{t}-k_{0}+k_{0}\alpha e^{\frac{-\pi i}{4}}:0\leqslant\alpha\leqslant\sqrt{2}(1-\frac{1}{2t})\right\}, \end{split} \end{equation*}

    (see Figure 5).

    Figure 4.  The contour \Sigma_A(\Sigma_B) .
    Figure 5.  The contour L_t .

    As k\in L_{A} , we obtain

    \begin{equation*} \begin{split} (N_{A}\tilde{\delta})(k) = &X(\frac{k}{\sqrt{48tk_{0}}}-k_{0})\int^{-k_{0}}_{\frac{k_{0}}{t}-k_{0}}\frac{e^{-2it\theta(\xi)}f(\xi)}{X_{+}(\xi)(\xi+k_{0}-\frac{k}{\sqrt{48tk_{0}}})}\, \frac{\mathrm{d}\xi}{2\pi i}\\ &+X(\frac{k}{\sqrt{48tk_{0}}}-k_{0})\int^{\frac{k_{0}}{t}-k_{0}}_{k_{0}}\frac{e^{-2it\theta(\xi)}f_{1}(\xi)}{X_{+}(\xi)(\xi+k_{0}-\frac{k}{\sqrt{48tk_{0}}})}\, \frac{\mathrm{d}\xi}{2\pi i}\\ &+X(\frac{k}{\sqrt{48tk_{0}}}-k_{0})\int^{\frac{k_{0}}{t}-k_{0}}_{k_{0}}\frac{e^{-2it\theta(\xi)}f_{2}(\xi)}{X_{+}(\xi)(\xi+k_{0}-\frac{k}{\sqrt{48tk_{0}}})}\, \frac{\mathrm{d}\xi}{2\pi i}\\ = &I_{1}+I_{2}+I_{3}. \end{split} \end{equation*}
    \begin{equation*} \left|I_{1}\right|\lesssim\int_{-k_{0}}^{\frac{k_{0}}{t}-k_{0}}\frac{\left|f(\xi)\right|}{|\xi+k_{0}-\frac{k}{\sqrt{48tk_{0}}}|}\, \mathrm{d}\xi\lesssim t^{-l-1}, \\ \end{equation*}
    \begin{equation*} \left|I_{2}\right|\lesssim\int_{\frac{k_{0}}{t}-k_{0}}^{k_{0}}\frac{\left|e^{-2it\theta(\xi)}f_{1}(\xi)\right|}{|\xi+k_{0}-\frac{k}{\sqrt{48tk_{0}}}|}\, \mathrm{d}\xi\leqslant t^{-l}\frac{\sqrt{2}t}{k_{0}}(2k_{0}-\frac{k_{0}}{t})\lesssim t^{-l+1}. \end{equation*}

    As a consequence of Cauchy's Theorem, we can evaluate I_{3} along the contour L_{t} instead of the interval (\frac{k_{0}}{t}-k_{0}, k_{0}) and obtain \left|I_{3}\right|\lesssim t^{-l+1}. Therefore, (3.45) holds.

    Corollary 3.1. As t\rightarrow\infty and k\in L_{A}^{\ast} ,

    \begin{equation} \left|(N_{A}\hat{\delta})(k)\right|\lesssim t^{-l}, \quad t\rightarrow\infty, \quad k\in L^{\ast}_{A}, \end{equation} (3.49)

    where \hat{\delta}(k) = e^{2it\theta(k)}R^{\dagger}(k^{\ast})B_1[\delta^{-1}(k)-(\mathrm{det}\delta(k))^{-1}I] .

    Let J^{A^0} = (I-\omega_{A^0-})^{-1}(I+\omega_{A^0+}) , where

    \begin{gather} \omega_{A^0} = \omega_{A^0+} = \begin{cases} \left(\begin{array}{cc} 0 & -(\delta_A^0)^{2}(-k)^{2i\nu}e^{-\frac{ik^2}{2}}\frac{\gamma(-k_0)}{1-\gamma^\dagger(-k_0)B_1\gamma(-k_0)}\\ 0 & 0\\ \end{array}\right), & k\in\Sigma_A^1, \\ \left(\begin{array}{cc} 0 & (\delta_A^0)^{2}(-k)^{2i\nu}e^{-\frac{ik^2}{2}}\gamma(-k_0)\\ 0 & 0\\ \end{array}\right), & k\in\Sigma_A^3, \\ \end{cases} \end{gather} (3.50)
    \begin{gather} \delta_A^0 = (192tk_0^3)^{-\frac{i\nu}{2}}e^{8itk_0^3}e^{\chi(-k_0)} \end{gather} (3.51)
    \begin{gather} \omega_{A^0} = \omega_{A^0-} = \begin{cases} \left(\begin{array}{cc} 0 & 0\\ (\delta_A^0)^{-2}(-k)^{-2i\nu}e^{\frac{ik^2}{2}}\frac{\gamma^\dagger(-k_0)B_1}{1-\gamma^\dagger(-k_0)B_1\gamma(-k_0)} & 0\\ \end{array}\right), & k\in\Sigma_A^2, \\ \left(\begin{array}{cc} 0 & 0\\ -(\delta_A^0)^{-2}(-k)^{-2i\nu}e^{\frac{ik^2}{2}}\gamma^\dagger(-k_0)B_1 & 0\\ \end{array}\right), & k\in\Sigma_A^4.\\ \end{cases} \end{gather} (3.52)

    It follows from (3.78) in [18] that

    \begin{equation} \Vert\omega_A-\omega_{A^0}\Vert_{\mathscr{L}^1(\Sigma_A)\cap\mathscr{L}^2(\Sigma_A)\cap\mathscr{L}^\infty(\Sigma_A)} \lesssim_{k_0} \frac{\log{t}}{\sqrt{tk_0^3}}. \end{equation} (3.53)

    There are similar consequences for k\in\Sigma_B . Let J^{B^0} = (I-\omega_{B^0-})^{-1}(I+\omega_{B^0+}) , where

    \begin{gather} \omega_{B^0} = \omega_{B^0+} = \begin{cases} \left(\begin{array}{cc} 0 & (\delta_B^0)^{2}k^{-2i\nu}e^{\frac{ik^2}{2}}\gamma(k_0)\\ 0 & 0\\ \end{array}\right), & k\in\Sigma_B^2, \\ \left(\begin{array}{cc} 0 & -(\delta_B^0)^{2}k^{-2i\nu}e^{\frac{ik^2}{2}}\frac{\gamma(k_0)}{1-\gamma^\dagger(k_0)B_1\gamma(k_0)}\\ 0 & 0\\ \end{array}\right), & k\in\Sigma_B^4, \\ \end{cases} \end{gather} (3.54)
    \begin{gather} \delta_B^0 = (192tk_0^3)^{\frac{i\nu}{2}}e^{-8itk_0^3}e^{\chi(k_0)} \end{gather} (3.55)
    \begin{gather} \omega_{B^0} = \omega_{B^0-} = \begin{cases} \left(\begin{array}{cc} 0 & 0\\ -(\delta_B^0)^{-2}k^{2i\nu}e^{-\frac{ik^2}{2}}\gamma^\dagger(k_0)B_1 & 0\\ \end{array}\right), & k\in\Sigma_B^1, \\ \left(\begin{array}{cc} 0 & 0\\ (\delta_B^0)^{-2}k^{2i\nu}e^{-\frac{ik^2}{2}}\frac{\gamma^\dagger(k_0)B_1}{1-\gamma^\dagger(k_0)B_1\gamma(k_0)} & 0\\ \end{array}\right), & k\in\Sigma_B^3.\\ \end{cases} \end{gather} (3.56)

    Theorem 3.3. As t\to\infty ,

    \begin{align} \begin{split} q(x, t) = &(u(x, t), u^\ast(x, t))^T\\ = &\frac{1}{\pi\sqrt{48tk_0}}\left(\int_{\Sigma_A}\left((1-C_{\omega_{A^0}})^{-1}I\right)(\xi)\omega_{A^0}(\xi)\, \mathrm{d}\xi\right)_{12}\\ &+\frac{1}{\pi\sqrt{48tk_0}}\left(\int_{\Sigma_B}\left((1-C_{\omega_{B^0}})^{-1}I\right)(\xi)\omega_{B^0}(\xi)\, \mathrm{d}\xi\right)_{12}+O\left(\frac{c(k_0)\log{t}}{t}\right). \end{split} \end{align} (3.57)

    Proof. Notice that

    \begin{align*} \left((1-C_{\omega_A})^{-1}I\right)\omega_A = &\left((1-C_{\omega_{A^0}})^{-1}I\right)\omega_{A^0}+\left((1-C_{\omega_A})^{-1}I\right)(\omega_A-\omega_{A^0})\notag\\ &+\left((1-C_{\omega_A})^{-1}(C_{\omega_A}-C_{\omega_{A^0}})(1-C_{\omega_{A^0}})^{-1}I\right)\omega_{A^0}. \end{align*}

    Utilizing the triangle inequality and the bound (3.53), we have

    \begin{align*} \int_{\Sigma_A}\left((1-C_{\omega_A})^{-1}I\right)(\xi)\omega_A(\xi)\, \mathrm{d}\xi = \int_{\Sigma_A}\left((1-C_{\omega_{A^0}})^{-1}I\right)(\xi)\omega_{A^0}(\xi)\, \mathrm{d}\xi+O\left(\frac{\log{t}}{\sqrt{t}}\right). \end{align*}

    According to (3.5) and a simple change-of-variable argument, we have

    \begin{align*} \begin{split} &\frac{1}{\pi}\left(\int_{\Sigma^\prime}\left((1-C_{\omega_A^\prime})^{-1}I\right)(\xi)\omega_A^\prime(\xi)\, \mathrm{d}\xi\right)_{12}\\ = &\frac{1}{\pi}\left(\int_{\hat\Sigma_A^\prime}\left(N_A^{-1}(1-C_{\omega_A})^{-1}I\right)(\xi)\omega_A^\prime(\xi)\, \mathrm{d}\xi\right)_{12}\\ = &\frac{1}{\pi}\left(\int_{\hat\Sigma_A^\prime}\left((1-C_{\omega_A})^{-1}I\right)\left((\xi+k_0)\sqrt{48tk_0}\right)(N_A\omega_A^\prime)\left((\xi+k_0)\sqrt{48tk_0}\right)\, \mathrm{d}\xi\right)_{12}\\ = &\frac{1}{\pi\sqrt{48tk_0}}\left(\int_{\Sigma_A}\left((1-C_{\omega_A})^{-1}I\right)(\xi)\omega_A(\xi)\, \mathrm{d}\xi\right)_{12}\\ = &\frac{1}{\pi\sqrt{48tk_0}}\left(\int_{\Sigma_A}\left((1-C_{\omega_{A^0}})^{-1}I\right)(\xi)\omega_{A^0}(\xi)\, \mathrm{d}\xi\right)_{12}+O\left(\frac{c(k_0)\log{t}}{t}\right). \end{split} \end{align*}

    There are similar computations for the other case. Together with (3.39), one can obtain (3.57).

    For k\in\mathbb{C}\backslash\Sigma_A , set

    \begin{equation} M^{A^0}(k;x, t) = I+\int_{\Sigma_A}\frac{\left((1-C_{\omega_{A^0}})^{-1}I\right)(\xi)\omega_{A^0}(\xi)}{\xi-k} \, \frac{\mathrm{d}\xi}{2\pi i}. \end{equation} (3.58)

    Then M^{A^0}(k; x, t) is the solution of the Riemann-Hilbert problem

    \begin{equation} \begin{cases} M^{A^0}_+(k;x, t) = M^{A^0}_-(k;x, t)J^{A^0}(k;x, t), & k\in\Sigma_A, \\ M^{A^0}(k;x, t)\to I, & k\to\infty. \end{cases} \end{equation} (3.59)

    In particular, if

    \begin{equation} M^{A^0}(k) = I+\frac{M^{A^0}_1}{k}+O(k^{-2}), \quad k\rightarrow\infty, \end{equation} (3.60)

    then

    \begin{equation} M^{A^0}_1 = -\int_{\Sigma_A}\left((1-C_{\omega_{A^0}})^{-1}I\right)(\xi)\omega_{A^0}(\xi)\, \frac{\mathrm{d}\xi}{2\pi i}. \end{equation} (3.61)
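
    Formula (3.61) is the standard large- k expansion of the Cauchy integral (3.58): for \xi in any bounded part of \Sigma_A ,

    \begin{equation*} \frac{1}{\xi-k} = -\frac{1}{k}\left(1-\frac{\xi}{k}\right)^{-1} = -\frac{1}{k}+O\left(\frac{\xi}{k^{2}}\right), \quad k\rightarrow\infty, \end{equation*}

    and the Gaussian decay of \omega_{A^0} along the rays of \Sigma_A allows this expansion to be integrated term by term; comparing the result with (3.60) gives (3.61).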

    There is an analogous Riemann-Hilbert problem on \Sigma_{B} ,

    \begin{equation} \begin{cases} M^{B^0}_+(k;x, t) = M^{B^0}_-(k;x, t)J^{B^0}(k;x, t), & k\in\Sigma_B, \\ M^{B^0}(k;x, t)\to I, & k\to\infty, \end{cases} \end{equation} (3.62)

    where J^{B^0}(k; x, t) is defined in (3.54) and (3.56). Similarly, we have the expansion

    \begin{equation} M^{B^0}(k) = I+\frac{M^{B^0}_1}{k}+O(k^{-2}), \quad k\rightarrow\infty. \end{equation} (3.63)

    Next, we consider the relation between M^{A^0}_1 and M^{B^0}_1 . From the expressions (3.50), (3.52), (3.54) and (3.56), we have the symmetry relation

    \begin{equation*} J^{A^0}(k) = \tau(J^{B^0}(-k^\ast))^\ast\tau. \end{equation*}

    By the uniqueness of the solution of the Riemann-Hilbert problem,

    \begin{equation*} M^{A^0}(k) = \tau(M^{B^0}(-k^\ast))^\ast\tau. \end{equation*}

    Combining with the expansion (3.60) and (3.63), one can verify that

    \begin{equation*} M^{A^0}_1 = -\tau(M^{B^0}_1)^\ast\tau, \quad (M^{A^0}_1)_{12} = -\sigma_1(M^{B^0}_1)^\ast_{12}. \end{equation*}
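
    Indeed, substituting the expansions (3.60) and (3.63) into this symmetry relation gives, as k\rightarrow\infty ,

    \begin{equation*} I+\frac{M^{A^0}_1}{k}+O(k^{-2}) = \tau\left(I+\frac{M^{B^0}_1}{-k^{\ast}}+O(k^{-2})\right)^{\ast}\tau = I-\frac{\tau(M^{B^0}_1)^{\ast}\tau}{k}+O(k^{-2}), \end{equation*}

    since \left(-1/k^{\ast}\right)^{\ast} = -1/k ; matching the coefficients of 1/k yields the first identity above.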

    Therefore, from (3.57) and (3.61), we have

    \begin{align} \begin{split} q(x, t) = &(u(x, t), u^\ast(x, t))^T\\ = &\frac{-2i}{\sqrt{48tk_0}}\left(M_1^{A^0}+M_1^{B^0}\right)_{12}+O\left(\frac{c(k_0)\log{t}}{t}\right)\\ = &-\frac{i}{\sqrt{12tk_0}}\left((M_1^{A^0})_{12}-\sigma_1(M_1^{A^0})^\ast_{12}\right)+O\left(\frac{c(k_0)\log{t}}{t}\right). \end{split} \end{align} (3.64)

    In this subsection, we compute (M_1^{A^0})_{12} explicitly. To this end, set

    \begin{equation} \Psi(k) = H(k)(-k)^{i\nu\sigma}e^{-\frac{1}{4}ik^2\sigma}, \quad H(k) = (\delta_A^0)^{-\sigma}M^{A^0}(k)(\delta_A^0)^{\sigma}. \end{equation} (3.65)

    Then it follows from (3.59) that

    \begin{equation} \Psi_+(k) = \Psi_-(k)v(-k_0), \quad v = e^{\frac{1}{4}ik^2\sigma}(-k)^{-i\nu\sigma}(\delta_A^0)^{-\sigma}J^{A^0}(k)(\delta_A^0)^{\sigma}(-k)^{i\nu\sigma}e^{-\frac{1}{4}ik^2\sigma}. \end{equation} (3.66)

    The jump matrix is constant along each of the four rays \Sigma_A^1 , \Sigma_A^2 , \Sigma_A^3 , \Sigma_A^4 , so

    \begin{equation} \frac{\mathrm{d}\Psi_+(k)}{\mathrm{d}k} = \frac{\mathrm{d}\Psi_-(k)}{\mathrm{d}k}v(-k_0). \end{equation} (3.67)

    Then it follows that (\mathrm{d}\Psi/\mathrm{d}k+\frac{ik}{2}\sigma\Psi)\Psi^{-1} has no jump discontinuity along any of the four rays. Moreover, from the relation between \Psi(k) and H(k) , we have

    \begin{align*} \frac{\mathrm{d}\Psi(k)}{\mathrm{d}k}\Psi^{-1}(k) = &\frac{\mathrm{d}H(k)}{\mathrm{d}k}H^{-1}(k)-\frac{ik}{2}H(k)\sigma H^{-1}(k)+\frac{i\nu}{k}H(k)\sigma H^{-1}(k)\notag\\ = &O(k^{-1})-\frac{ik\sigma}{2}+\frac{i}{2}(\delta_A^0)^{-\sigma}[\sigma, M^{A^0}_1](\delta_A^0)^{\sigma}. \end{align*}

    It follows by Liouville's Theorem that

    \begin{equation} \frac{\mathrm{d}\Psi(k)}{\mathrm{d}k}+\frac{ik}{2}\sigma\Psi(k) = \beta\Psi(k), \end{equation} (3.68)

    where

    \begin{equation*} \beta = \frac{i}{2}(\delta_A^0)^{-\sigma}[\sigma, M^{A^0}_1](\delta_A^0)^{\sigma} = \left(\begin{array}{cc} 0 & \beta_{12}\\ \beta_{21} & 0 \end{array}\right). \end{equation*}

    Moreover, since [\sigma, M^{A^0}_1]_{12} = 2(M^{A^0}_1)_{12} , and hence \beta_{12} = i(\delta_A^0)^{-2}(M^{A^0}_1)_{12} , we obtain

    \begin{equation} (M_1^{A^0})_{12} = -i(\delta_A^0)^{2}\beta_{12}. \end{equation} (3.69)

    Set

    \begin{equation*} \Psi(k) = \left(\begin{array}{cc} \Psi_{11}(k) & \Psi_{12}(k)\\ \Psi_{21}(k) & \Psi_{22}(k)\\ \end{array}\right). \end{equation*}

    From (3.68) and its differential, we obtain

    \begin{gather*} \frac{\mathrm{d}^{2}\beta_{21}\Psi_{11}(k)}{\mathrm{d}k^{2}}+\left(\frac{i}{2}+\frac{k^2}{4}-\beta_{21}\beta_{12}\right)\beta_{21}\Psi_{11}(k) = 0, \\ \Psi_{21}(k) = \frac{1}{\beta_{21}\beta_{12}}\left(\frac{\mathrm{d}\beta_{21}\Psi_{11}(k)}{\mathrm{d}k}+\frac{ik}{2}\beta_{21}\Psi_{11}(k)\right), \\ \frac{\mathrm{d}^{2}\Psi_{22}(k)}{\mathrm{d}k^{2}}+\left(-\frac{i}{2}+\frac{k^2}{4}-\beta_{21}\beta_{12}\right)\Psi_{22}(k) = 0, \\ \beta_{21}\Psi_{12}(k) = \left(\frac{\mathrm{d}\Psi_{22}(k)}{\mathrm{d}k}-\frac{ik}{2}\Psi_{22}(k)\right). \end{gather*}
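
    The first pair of relations follows by eliminating \Psi_{21} from the block-componentwise form of (3.68), with \sigma acting as +1 on the first block row and -1 on the second: from

    \begin{equation*} \Psi_{11}' = -\frac{ik}{2}\Psi_{11}+\beta_{12}\Psi_{21}, \qquad \Psi_{21}' = \frac{ik}{2}\Psi_{21}+\beta_{21}\Psi_{11}, \end{equation*}

    one computes

    \begin{equation*} \Psi_{11}'' = -\frac{i}{2}\Psi_{11}-\frac{ik}{2}\Psi_{11}'+\beta_{12}\Psi_{21}' = -\left(\frac{i}{2}+\frac{k^{2}}{4}-\beta_{12}\beta_{21}\right)\Psi_{11}, \end{equation*}

    and left-multiplication by \beta_{21} (which turns \beta_{12}\beta_{21} into the scalar \beta_{21}\beta_{12} ) gives the first equation; the equations for \Psi_{22} and \Psi_{12} follow in the same way from the second block column.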

    As is well known, Weber's equation

    \begin{equation*} \frac{\mathrm{d}^{2}g(\zeta)}{\mathrm{d}\zeta^{2}}+\left(\varrho+\frac{1}{2}-\frac{\zeta^{2}}{4}\right)g(\zeta) = 0 \end{equation*}

    has the solution

    \begin{equation*} g(\zeta) = c_{1}D_{\varrho}(\zeta)+c_{2}D_{\varrho}(-\zeta), \end{equation*}

    where D_{\varrho}(\cdot) denotes the standard parabolic-cylinder function, and c_1 , c_2 are constants. The parabolic-cylinder function satisfies [41]

    \begin{gather} \frac{\mathrm{d}D_{\varrho}(\zeta)}{\mathrm{d}\zeta}+\frac{\zeta}{2}D_{\varrho}(\zeta)-\varrho D_{\varrho-1}(\zeta) = 0, \end{gather} (3.70)
    \begin{gather} D_{\varrho}(\pm\zeta) = \frac{\Gamma(\varrho+1)e^{\frac{i\pi \varrho}{2}}}{\sqrt{2\pi}}D_{-\varrho-1}(\pm i\zeta)+\frac{\Gamma(\varrho+1)e^{-\frac{i\pi \varrho}{2}}}{\sqrt{2\pi}}D_{-\varrho-1}(\mp i\zeta). \end{gather} (3.71)
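
    These classical facts lend themselves to a quick numerical sanity check. The sketch below (illustrative only, not part of the proof; the orders and sample points are arbitrary choices) uses mpmath's implementation pcfd(v, z) of D_{v}(z) to test Weber's equation above, the recurrence (3.70), the connection formula (3.71) with the upper signs, and the leading behaviour D_{v}(z)\sim z^{v}e^{-z^{2}/4} for real z\rightarrow+\infty :

```python
# Numerical sanity checks of the parabolic-cylinder facts used here,
# via mpmath's pcfd(v, z) = D_v(z). Illustrative only; sample values arbitrary.
import mpmath as mp

mp.mp.dps = 30          # work with 30 significant digits
I = mp.mpc(0, 1)        # imaginary unit

def weber_residual(v, z):
    """D_v''(z) + (v + 1/2 - z^2/4) D_v(z), which should vanish."""
    return mp.diff(lambda t: mp.pcfd(v, t), z, 2) + (v + mp.mpf(1)/2 - z**2/4)*mp.pcfd(v, z)

def recurrence_residual(v, z):
    """Left-hand side of the recurrence (3.70)."""
    return mp.diff(lambda t: mp.pcfd(v, t), z) + z/2*mp.pcfd(v, z) - v*mp.pcfd(v - 1, z)

def connection_residual(v, z):
    """D_v(z) minus the right-hand side of (3.71) with the upper signs."""
    rhs = mp.gamma(v + 1)/mp.sqrt(2*mp.pi)*(
        mp.exp(I*mp.pi*v/2)*mp.pcfd(-v - 1, I*z)
        + mp.exp(-I*mp.pi*v/2)*mp.pcfd(-v - 1, -I*z))
    return mp.pcfd(v, z) - rhs

def relative_error(v, z):
    """|D_v(z)/(z^v e^{-z^2/4}) - 1|, expected to be O(|z|^{-2})."""
    return abs(mp.pcfd(v, z)/(z**v*mp.exp(-z**2/4)) - 1)

r_weber = weber_residual(mp.mpf("0.3"), mp.mpf("1.2"))
r_rec = recurrence_residual(mp.mpf("0.4"), mp.mpf("1.7"))
r_con = connection_residual(mp.mpf("0.4"), mp.mpf("0.8") - mp.mpf("0.3")*I)
e8, e16 = relative_error(mp.mpf("0.25"), mp.mpf("8")), relative_error(mp.mpf("0.25"), mp.mpf("16"))
print(abs(r_weber), abs(r_rec), abs(r_con), e8, e16)
```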

    As \zeta\rightarrow\infty , from [42], we have

    \begin{equation} D_{\varrho}(\zeta) = \begin{cases} \zeta^{\varrho}e^{-\frac{\zeta^{2}}{4}}(1+O(\zeta^{-2})), & \left|\arg{\zeta}\right| \lt \frac{3\pi}{4}, \\ \zeta^{\varrho}e^{-\frac{\zeta^{2}}{4}}(1+O(\zeta^{-2}))-\frac{\sqrt{2\pi}}{\Gamma(-\varrho)}e^{\varrho\pi i+\frac{\zeta^{2}}{4}}\zeta^{-\varrho-1}(1+O(\zeta^{-2})), & \frac{\pi}{4} \lt \arg{\zeta} \lt \frac{5\pi}{4}, \\ \zeta^{\varrho}e^{-\frac{\zeta^{2}}{4}}(1+O(\zeta^{-2}))-\frac{\sqrt{2\pi}}{\Gamma(-\varrho)}e^{-\varrho\pi i+\frac{\zeta^{2}}{4}}\zeta^{-\varrho-1}(1+O(\zeta^{-2})), & -\frac{5\pi}{4} \lt \arg{\zeta} \lt -\frac{\pi}{4}, \end{cases} \end{equation} (3.72)

    where \Gamma(\cdot) is the Gamma function. Set \varrho = i\beta_{21}\beta_{12} ,

    \begin{gather} \beta_{21}\Psi_{11}(k) = c_1D_\varrho\left(e^{\frac{\pi i}{4}}k\right)+c_2D_\varrho\left(e^{\frac{-3\pi i}{4}}k\right), \end{gather} (3.73)
    \begin{gather} \Psi_{22}(k) = c_3D_{-\varrho}\left(e^{\frac{-\pi i}{4}}k\right)+c_4D_{-\varrho}\left(e^{\frac{3\pi i}{4}}k\right), \end{gather} (3.74)

    where c_1, c_2, c_3, c_4 are constants. As \arg{k}\in(-\pi, -\frac{3\pi}{4})\cup(\frac{3\pi}{4}, \pi) and k\rightarrow\infty , we arrive at

    \begin{equation*} \Psi_{11}(k)(-k)^{-i\nu}e^{\frac{ik^{2}}{4}}\rightarrow I, \quad \Psi_{22}(k)(-k)^{i\nu}e^{-\frac{ik^{2}}{4}}\rightarrow 1, \end{equation*}

    then

    \begin{gather*} \beta_{21}\Psi_{11}(k) = \beta_{21}e^{\frac{\pi\nu}{4}}D_{\varrho}\left(e^{-\frac{3\pi i}{4}}k\right), \quad \nu = \beta_{21}\beta_{12}, \\ \Psi_{22}(k) = e^{\frac{\pi\nu}{4}}D_{-\varrho}\left(e^{\frac{3\pi i}{4}}k\right). \end{gather*}

    Consequently,

    \begin{gather*} \Psi_{21}(k) = \beta_{21}e^{\frac{\pi\nu}{4}}e^{-\frac{\pi i}{4}}D_{\varrho-1}\left(e^{-\frac{3\pi i}{4}}k\right), \\ \beta_{21}\Psi_{12}(k) = \varrho e^{\frac{\pi\nu}{4}}e^{-\frac{\pi i}{4}}D_{-\varrho-1}\left(e^{\frac{3\pi i}{4}}k\right). \end{gather*}
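
    Here the passage from \Psi_{11} to \Psi_{21} rests on the recurrence (3.70): with \zeta = e^{-\frac{3\pi i}{4}}k one has \frac{ik}{2} = e^{-\frac{3\pi i}{4}}\frac{\zeta}{2} , so that

    \begin{equation*} \frac{\mathrm{d}}{\mathrm{d}k}D_{\varrho}\left(e^{-\frac{3\pi i}{4}}k\right)+\frac{ik}{2}D_{\varrho}\left(e^{-\frac{3\pi i}{4}}k\right) = e^{-\frac{3\pi i}{4}}\left(\frac{\mathrm{d}D_{\varrho}(\zeta)}{\mathrm{d}\zeta}+\frac{\zeta}{2}D_{\varrho}(\zeta)\right) = e^{-\frac{3\pi i}{4}}\varrho D_{\varrho-1}(\zeta), \end{equation*}

    and dividing by \nu = \beta_{21}\beta_{12} (note \varrho/\nu = i ) yields the expression for \Psi_{21}(k) above; the formula for \beta_{21}\Psi_{12}(k) follows in the same way.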

    For \arg{k}\in(-\frac{3\pi}{4}, -\frac{\pi}{4}) and k\rightarrow\infty , we have

    \begin{equation*} \Psi_{11}(k)(-k)^{-i\nu}e^{\frac{ik^{2}}{4}}\rightarrow I, \quad \Psi_{22}(k)(-k)^{i\nu}e^{-\frac{ik^{2}}{4}}\rightarrow 1, \end{equation*}

    then

    \begin{gather*} \beta_{21}\Psi_{11}(k) = \beta_{21}e^{-\frac{3\pi\nu}{4}}D_{\varrho}\left(e^{\frac{\pi i}{4}}k\right), \\ \Psi_{22}(k) = e^{\frac{\pi\nu}{4}}D_{-\varrho}\left(e^{\frac{3\pi i}{4}}k\right). \end{gather*}

    Consequently,

    \begin{gather*} \Psi_{21}(k) = \beta_{21}e^{-\frac{3\pi\nu}{4}}e^{\frac{3\pi i}{4}}D_{\varrho-1}\left(e^{\frac{\pi i}{4}}k\right), \\ \beta_{21}\Psi_{12}(k) = \varrho e^{\frac{\pi\nu}{4}}e^{-\frac{\pi i}{4}}D_{-\varrho-1}\left(e^{\frac{3\pi i}{4}}k\right). \end{gather*}

    Along the ray \arg k = -\frac{3\pi}{4},

    \begin{equation} \Psi_{+}(k) = \Psi_{-}(k) \left(\begin{array}{cc} I & 0\\ -\gamma^\dagger(-k_0)B_1 & 1\\ \end{array}\right). \end{equation} (3.75)

    Comparing the (2, 1) entries of this jump relation, we obtain

    \begin{align*} &\beta_{21}e^{\frac{\pi(\nu-i)}{4}}D_{\varrho-1}(e^{-\frac{3\pi i}{4}}k)\\ = &\beta_{21}e^{\frac{\pi(3i-3\nu)}{4}}D_{\varrho-1}(e^{\frac{\pi i}{4}}k)-e^{\frac{\pi\nu}{4}}D_{-\varrho}(e^{\frac{3\pi i}{4}}k)\gamma^\dagger(-k_0)B_1. \end{align*}

    It follows from (3.71) that

    \begin{equation*} D_{-\varrho}(e^{\frac{3\pi i}{4}}k) = \frac{\Gamma(-\varrho+1)e^{\frac{\pi\nu}{2}}}{\sqrt{2\pi}}D_{\varrho-1}(e^{-\frac{3\pi i}{4}}k)+\frac{\Gamma(-\varrho+1)e^{-\frac{\pi\nu}{2}}}{\sqrt{2\pi}}D_{\varrho-1}(e^{\frac{\pi i}{4}}k). \end{equation*}

    Then we separate the coefficients of the two independent functions and obtain

    \begin{gather} \beta_{21} = e^{-\frac{3\pi i}{4}}e^{\frac{\pi\nu}{2}}\frac{\Gamma(-\varrho+1)}{\sqrt{2\pi}}\gamma^\dagger(-k_0)B_1. \end{gather} (3.76)

    Noting that B^{-1}(J^{A^0}(k^\ast))^\dagger B = (J^{A^0}(k))^{-1} , we have \beta_{12} = -B_1^{-1}\beta_{21}^\dagger , which means that

    \begin{equation} \beta_{12} = -B_1^{-1}B_1^\dagger\gamma(-k_0)e^{\frac{3\pi i}{4}}e^{\frac{\pi\nu}{2}}\frac{\Gamma(-\varrho+1)}{\sqrt{2\pi}} = e^{-\frac{\pi i}{4}}e^{\frac{\pi\nu}{2}}\nu\frac{\Gamma(-i\nu)}{\sqrt{2\pi}}\gamma(-k_0). \end{equation} (3.77)
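
    Evaluating the moduli of such coefficients (for instance |\beta_{12}| ) typically uses the classical identity \left|\Gamma(i\nu)\right|^{2} = \pi/(\nu\sinh(\pi\nu)) for real \nu > 0 , together with \left|\Gamma(-i\nu)\right| = \left|\Gamma(i\nu)\right| . As an illustrative aside, this identity can be confirmed numerically:

```python
# Numerical confirmation (illustrative only) of the classical identity
#   |Gamma(i*nu)|^2 = pi / (nu * sinh(pi*nu)),   nu > 0 real,
# which underlies modulus computations for coefficients such as beta_12.
import mpmath as mp

mp.mp.dps = 30  # 30 significant digits

def identity_residual(nu):
    """Absolute difference between the two sides at a real nu > 0."""
    lhs = abs(mp.gamma(mp.mpc(0, nu)))**2       # |Gamma(i nu)|^2
    rhs = mp.pi/(nu*mp.sinh(mp.pi*nu))
    return abs(lhs - rhs)

res_small, res_large = identity_residual(mp.mpf("0.37")), identity_residual(mp.mpf("1.5"))
print(res_small, res_large)  # both should be negligibly small
```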

    Finally, we can obtain (1.4) from (3.64), (3.69) and (3.77).

    This work was supported by the National Natural Science Foundation of China (Grant Nos. 11871440 and 11931017).

    The authors declare no conflict of interest.



    © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)