Research article

Machine learning and artificial neural networks to construct P2P lending credit-scoring model: A case using Lending Club data

  • Received: 16 March 2022 Revised: 26 May 2022 Accepted: 30 May 2022 Published: 09 June 2022
  • JEL Codes: D12, E41, E44, G20

  • In this study, we constructed a credit-scoring model for P2P loans by using several machine learning and artificial neural network (ANN) methods, including logistic regression (LR), a support vector machine, a decision tree, random forest, XGBoost, LightGBM and 2-layer neural networks. This study explores several hyperparameter settings for each method by performing a grid search and cross-validation to obtain the most suitable credit-scoring model in terms of training time and test performance. We obtained and cleaned the open P2P loan data from Lending Club and applied feature engineering concepts. To find significant default factors, we used an XGBoost method to pre-train all data and obtain the feature importance. The 16 selected features can provide economic implications for research about default prediction in P2P loans. Moreover, the empirical results show that gradient-boosting decision tree methods, including XGBoost and LightGBM, outperform the ANN and LR methods that are commonly used for traditional credit scoring. Among all of the methods, XGBoost performed the best.

    Citation: An-Hsing Chang, Li-Kai Yang, Rua-Huan Tsaih, Shih-Kuei Lin. Machine learning and artificial neural networks to construct P2P lending credit-scoring model: A case using Lending Club data[J]. Quantitative Finance and Economics, 2022, 6(2): 303-325. doi: 10.3934/QFE.2022013




    Fractional calculus occupies a distinguished place in modern research owing to its integrated applications in diverse areas such as mathematical physics, fluid dynamics and mathematical biology. Convex functions and exponentially convex functions [1,2,3,4,5], the related inequalities such as the trapezium inequality, Ostrowski's inequality and the Hermite-Hadamard inequality, and the associated integrals [6,7,8,9,10], which have succeeded in mathematical analysis and approximation theory owing to their immense applications [11,12], have great importance in mathematical theory. Many authors have established quadrature rules in numerical analysis for approximating definite integrals. Recently, the Pólya-Szegö and Chebyshev inequalities have occupied considerable space in the field of analysis. Chebyshev [13] introduced the well-known inequality now called the Chebyshev inequality.

    In the literature on convex functions, the Jensen inequality has gained much importance; it describes a connection between the integral of a convex function and the value of the convex function over an interval [14,15,16]. Pshtiwan and Thabet [17] considered the modified Hermite-Hadamard inequality in the context of fractional calculus using the Riemann-Liouville fractional integrals. Arran and Pshtiwan [18] discussed Hermite-Hadamard inequality results for fractional integrals and derivatives with the Mittag-Leffler kernel. Pshtiwan and Thabet [19] constructed a connection between the Riemann-Liouville fractional integrals of a function with respect to a monotone function and the Atangana-Baleanu integrals with a nonsingular kernel. Pshtiwan and Brevik [20] obtained an inequality of Hermite-Hadamard type for Riemann-Liouville fractional integrals and applied the obtained inequalities to modified Bessel functions and the q-digamma function. In [21], Set et al. introduced Grüss type inequalities by employing generalized k-fractional integrals. Recently, Nisar et al. [22] gave some new generalized fractional integral inequalities.

    Very recently, the fractional conformable and proportional fractional integral operators were given in [23,24]. Later on, Huang et al. [25] gave Hermite-Hadamard type inequalities by using fractional conformable integrals (FCI). Qi et al. [26] investigated Čebyšev type inequalities involving FCI. Chebyshev type inequalities and certain Minkowski type inequalities are found in [27,28,29]. Nisar et al. [30] investigated some new inequalities for a class of n (n\in\mathbb{N}) positive, continuous and decreasing functions by employing FCI. Rahman et al. [31] introduced Grüss type inequalities for k-fractional conformable integrals.

    Some significant inequalities are given as applications of fractional integrals [32,33,34,35,36,37,38]. Recently, Rahman et al. [39,40] presented fractional integral inequalities involving tempered fractional integrals. Qiang et al. [41] discussed a fractional integral containing the Mittag-Leffler function in inequality theory and contributed a Hadamard type inequality, continuity and boundedness results, and upper bounds for that integral. Nisar et al. [42] established weighted fractional Pólya-Szegö and Chebyshev type integral inequalities by operating with the generalized weighted fractional integral involving a kernel function. The dynamical approach of fractional calculus has also been applied in the field of inequalities [43,44,45,46,47,48,49].

    The Grüss inequality [50] was established for two integrable functions as follows:

    \begin{equation} |T(h, l)|\leq\frac{(K-k)(S-s)}{4}, \end{equation} (1.1)

    where T(h, l) = \frac{1}{b-a}\int^{b}_{a}h(z)l(z)dz-\frac{1}{b-a}\int^{b}_{a}h(z)dz\cdot\frac{1}{b-a}\int^{b}_{a}l(z)dz denotes the Chebyshev functional, and h and l are two integrable functions which are synchronous on [a, b] and satisfy

    \begin{equation} s\leq h(z)\leq S, \quad k\leq l(y_{1})\leq K, \quad \forall\; z, y_{1}\in[a, b], \end{equation} (1.2)

    for some s, k, S, K\in\mathbb{R} .

    Pólya and Szegö [51] proved the inequality

    \begin{equation} \frac{\int^{b}_{a}h^{2}(z)dz\int^{b}_{a}l^{2}(z)dz}{\left(\int^{b}_{a}h(z)l(z)dz\right)^{2}}\leq\frac{1}{4}\left(\sqrt{\frac{KS}{ks}}+\sqrt{\frac{ks}{KS}}\right)^{2}. \end{equation} (1.3)

    Dragomir and Diamond [52] proved, by using the Pólya-Szegö inequality, that

    \begin{equation} |T(h, l)|\leq\frac{(S-s)(K-k)}{4(b-a)^{2}\sqrt{skSK}}\int^{b}_{a}h(z)l(z)dz, \end{equation} (1.4)

    where h and l are two integrable functions which are synchronous on [a, b] , and

    \begin{equation} 0 < s\leq h(z)\leq S < \infty, \quad 0 < k\leq l(y_{1})\leq K < \infty, \quad z, y_{1}\in[a, b], \end{equation} (1.5)

    for some s, k, S, K\in\mathbb{R} .
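    The following short Python sketch is our illustrative addition (not part of the original derivations): it checks the classical bounds (1.1), (1.3) and (1.4) numerically for one arbitrary choice of h , l and the bounds s, S, k, K on [a, b] = [0, 1] .

```python
# Numerical sanity check (illustrative addition) of the classical Gruss (1.1),
# Polya-Szego (1.3) and Dragomir-Diamond (1.4) inequalities.
# The functions h, l and the bounds s, S, k, K are arbitrary example choices.
import math
from scipy.integrate import quad

a, b = 0.0, 1.0
h = lambda z: 2.0 + math.sin(z)          # s <= h(z) <= S on [a, b]
l = lambda z: 1.0 + z ** 2               # k <= l(z) <= K on [a, b]
s, S = 2.0, 2.0 + math.sin(1.0)
k, K = 1.0, 2.0

integral = lambda f: quad(f, a, b)[0]
mean = lambda f: integral(f) / (b - a)

# Chebyshev functional T(h, l)
T = mean(lambda z: h(z) * l(z)) - mean(h) * mean(l)

# (1.1) Gruss bound
assert abs(T) <= (S - s) * (K - k) / 4.0

# (1.3) Polya-Szego bound
lhs = integral(lambda z: h(z) ** 2) * integral(lambda z: l(z) ** 2) \
      / integral(lambda z: h(z) * l(z)) ** 2
rhs = 0.25 * (math.sqrt(K * S / (k * s)) + math.sqrt(k * s / (K * S))) ** 2
assert lhs <= rhs

# (1.4) Dragomir-Diamond bound
bound = (S - s) * (K - k) / (4.0 * (b - a) ** 2 * math.sqrt(s * k * S * K)) \
        * integral(lambda z: h(z) * l(z))
assert abs(T) <= bound
print(T, lhs, rhs, bound)
```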

    The aim of this paper is to establish new versions of the Pólya-Szegö inequality, the Chebyshev integral inequality and the Hermite-Hadamard type integral inequality by means of a fractional integral operator having a nonsingular function (the generalized multi-index Bessel function) as its kernel; the established results make a notable contribution to the field of inequalities. The Hermite-Hadamard type integral inequality provides upper and lower estimates of the integral average of a convex function over any given interval.

    The structure of the paper is as follows:

    In section 2, we present some well-known definitions and mathematical preliminaries. The new generalized fractional integral with a nonsingular function as its kernel is defined in section 3. In section 4, we present a Hermite-Hadamard type Mercer inequality for the newly designed fractional integral operator with a nonsingular function (the generalized multi-index Bessel function) as its kernel. Some inequalities for (s-m)-preinvex functions involving the newly designed fractional integral operator with a nonsingular function (the generalized multi-index Bessel function) as its kernel are presented in section 5. In sections 6 and 7, we present Pólya-Szegö and Chebyshev integral inequalities, respectively, involving the generalized fractional integral operator with a nonsingular function as its kernel.

    Definition 2.1. A mapping g: K\rightarrow\mathbb{R} is called a convex function if it satisfies

    \begin{equation} g(\delta y_{1}+(1-\delta)y_{2})\leq\delta g(y_{1})+(1-\delta)g(y_{2}), \end{equation} (2.1)

    where y_{1}, y_{2}\in K and \delta\in[0, 1] .

    Definition 2.2. The inequality derived by Hermite [53] is called the Hermite-Hadamard inequality:

    \begin{equation} g\left(\frac{y_{1}+y_{2}}{2}\right)\leq\frac{1}{y_{2}-y_{1}}\int^{y_{2}}_{y_{1}}g(t)dt\leq\frac{g(y_{1})+g(y_{2})}{2}, \end{equation} (2.2)

    where y_{1}, y_{2}\in I with y_{2}\neq y_{1} , if g: I\subseteq\mathbb{R}\rightarrow\mathbb{R} is a convex function.
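    As a quick numerical illustration (our addition), the two bounds in (2.2) can be checked for the convex function g(t) = e^{t} on [0, 1] :

```python
# Hermite-Hadamard inequality (2.2) for g(t) = exp(t) on [y1, y2] = [0, 1].
import math
from scipy.integrate import quad

g = math.exp
y1, y2 = 0.0, 1.0
average = quad(g, y1, y2)[0] / (y2 - y1)
assert g((y1 + y2) / 2) <= average <= (g(y1) + g(y2)) / 2
print(g((y1 + y2) / 2), average, (g(y1) + g(y2)) / 2)   # ~1.649 <= ~1.718 <= ~1.859
```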

    Definition 2.3. Let y_{j}\in K for all j\in I_{n} and \omega_{j} > 0 be such that \sum_{j\in I_{n}}\omega_{j} = 1 . Then the Jensen inequality

    \begin{equation} g\left(\sum\limits_{j\in I_{n}}\omega_{j}y_{j}\right)\leq\sum\limits_{j\in I_{n}}\omega_{j}g(y_{j}) \end{equation} (2.3)

    holds if g: K\rightarrow\mathbb{R} is a convex function.

    Mercer [54] derived the Mercer inequality by applying the Jensen inequality and the properties of convex functions.

    Definition 2.4. Let y_{j}\in K for all j\in I_{n} , \omega_{j} > 0 be such that \sum_{j\in I_{n}}\omega_{j} = 1 , m = \min_{j\in I_{n}}\{y_{j}\} and n = \max_{j\in I_{n}}\{y_{j}\} . Then the following inequality holds for a convex function:

    \begin{equation} g\left(m+n-\sum\limits_{j\in I_{n}}\omega_{j}y_{j}\right)\leq g(m)+g(n)-\sum\limits_{j\in I_{n}}\omega_{j}g(y_{j}), \end{equation} (2.4)

    if g: K\rightarrow\mathbb{R} is a convex function.
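    A small numerical illustration (our addition) of the Jensen-Mercer inequality (2.4), with g(t) = t^{2} and arbitrary example points and weights:

```python
# Jensen-Mercer inequality (2.4) for the convex function g(t) = t**2.
g = lambda t: t ** 2
m, n = 0.0, 2.0                     # m = min, n = max of the interval
ys = [0.5, 1.5]                     # example points y_j in [m, n]
ws = [0.4, 0.6]                     # positive weights summing to 1
lhs = g(m + n - sum(w * y for w, y in zip(ws, ys)))
rhs = g(m) + g(n) - sum(w * g(y) for w, y in zip(ws, ys))
assert lhs <= rhs
print(lhs, rhs)                     # 0.81 <= 2.55
```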

    Definition 2.5. [55] A real-valued mapping g: K\rightarrow\mathbb{R} is said to be exponentially convex if

    \begin{equation} g(\delta y_{1}+(1-\delta)y_{2})\leq\delta\frac{g(y_{1})}{e^{\theta y_{1}}}+(1-\delta)\frac{g(y_{2})}{e^{\theta y_{2}}}, \end{equation} (2.5)

    where y_{1}, y_{2}\in K , \delta\in[0, 1] and \theta\in\mathbb{R} .

    Suppose that \Omega\subseteq\mathbb{R}^{n} is a set. Let g: \Omega\rightarrow\mathbb{R} be a continuous function and let \xi: \Omega\times\Omega\rightarrow\mathbb{R}^{n} be a continuous bifunction.

    Definition 2.6. [56] A set \Omega is called an invex set with respect to the bifunction \xi(\cdot, \cdot) if

    \begin{equation} y_{1}+\delta\xi(y_{2}, y_{1})\in\Omega, \end{equation} (2.6)

    where y_{1}, y_{2}\in\Omega and \delta\in[0, 1] .

    Definition 2.7. [57] On an invex set \Omega , a mapping g is called a preinvex function with respect to \xi(\cdot, \cdot) if

    \begin{equation} g(y_{1}+\delta\xi(y_{2}, y_{1}))\leq(1-\delta)g(y_{1})+\delta g(y_{2}), \end{equation} (2.7)

    where y_{1}, y_{2}+\xi(y_{2}, y_{1})\in\Omega and \delta\in[0, 1] .

    Definition 2.8. On an invex set \Omega , a real-valued mapping g is called exponentially preinvex with respect to \xi(\cdot, \cdot) if the inequality

    \begin{equation} g(y_{1}+\delta\xi(y_{2}, y_{1}))\leq(1-\delta)\frac{g(y_{1})}{e^{\theta y_{1}}}+\delta\frac{g(y_{2})}{e^{\theta y_{2}}} \end{equation} (2.8)

    holds for all y_{1}, y_{2}+\xi(y_{2}, y_{1})\in\Omega , \delta\in[0, 1] and \theta\in\mathbb{R} .

    Definition 2.9. On an invex set \Omega , a real-valued mapping g is called exponentially s-preinvex with respect to \xi(\cdot, \cdot) if

    \begin{equation} g(y_{1}+\delta\xi(y_{2}, y_{1}))\leq(1-\delta)^{s}\frac{g(y_{1})}{e^{\theta y_{1}}}+\delta^{s}\frac{g(y_{2})}{e^{\theta y_{2}}}, \end{equation} (2.9)

    for all y_{1}, y_{2}+\xi(y_{2}, y_{1})\in\Omega , \delta\in[0, 1] , s\in(0, 1] and \theta\in\mathbb{R} .

    Definition 2.10. On an invex set \Omega , a real-valued mapping g is called exponentially (s-m)-preinvex with respect to \xi(\cdot, \cdot) if

    \begin{equation} g(y_{1}+m\delta\xi(y_{2}, y_{1}))\leq(1-\delta)^{s}\frac{g(y_{1})}{e^{\theta y_{1}}}+m\delta^{s}\frac{g(y_{2})}{e^{\theta y_{2}}}, \end{equation} (2.10)

    for all y_{1}, y_{2}+\xi(y_{2}, y_{1})\in\Omega , \delta, m\in[0, 1] and \theta\in\mathbb{R} .

    Definition 2.11. [58] The generalized multi-index Bessel function is defined by Choi et al. as follows:

    \begin{equation} \mathrm{J}^{(\xi_j)_m, \lambda}_{(\delta_j)_m, \sigma}(z) = \sum\limits^{\infty}_{s = 0}\frac{(\lambda)_{\sigma s}}{\prod^{m}_{j = 1}\Gamma(\xi_{j}s+\delta_{j}+1)}\frac{(-z)^{s}}{s!}, \end{equation} (2.11)

    where \xi_j, \delta_j, \lambda\in\mathbb{C}, \ (j = 1, \cdots, m) , \Re(\lambda) > 0 , \Re(\delta_{j}) > -1 , \sum^{m}_{j = 1}\Re(\xi_{j}) > \max\{0: \Re(\sigma)-1\} and \sigma > 0 .

    Definition 2.12. [58] The Pochhammer symbol is defined for \lambda\in\mathbb{C} as follows:

    \begin{equation} (\lambda)_{s} = \begin{cases} \lambda(\lambda+1)\cdots(\lambda+s-1), & s\in\mathbb{N}, \\ 1, & s = 0, \end{cases} \end{equation} (2.12)
    \begin{equation} (\lambda)_{s} = \frac{\Gamma(\lambda+s)}{\Gamma(\lambda)}, \qquad (\lambda\in\mathbb{C}\setminus\mathbb{Z}^{-}_{0}), \end{equation} (2.13)

    where \Gamma denotes the Gamma function.
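    The series in Definition 2.11 is straightforward to evaluate numerically by truncation. The following Python sketch is our illustrative addition; it assumes the alternating-sign convention (-z)^{s} written in (2.11), and all parameter values are arbitrary examples.

```python
# Truncated series for the generalized multi-index Bessel function of Definition 2.11.
import math

def multi_index_bessel(z, xi, delta, lam, sigma, terms=60):
    """Approximate J^{(xi_j)_m, lam}_{(delta_j)_m, sigma}(z) by its first `terms` terms."""
    total = 0.0
    for s in range(terms):
        poch = math.gamma(lam + sigma * s) / math.gamma(lam)      # (lam)_{sigma s}, Eq (2.13)
        denom = math.prod(math.gamma(x * s + d + 1) for x, d in zip(xi, delta))
        total += poch / denom * (-z) ** s / math.factorial(s)
    return total

# m = 2 indices; comparing two truncation levels shows the sum has converged
print(multi_index_bessel(0.5, xi=[1.0, 0.5], delta=[0.2, 0.3], lam=1.0, sigma=1.0, terms=30))
print(multi_index_bessel(0.5, xi=[1.0, 0.5], delta=[0.2, 0.3], lam=1.0, sigma=1.0, terms=60))
```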

    This section presents a generalized fractional integral operator with a nonsingular function (multi-index Bessel function) as a kernel.

    Definition 3.1. Let \xi_j, \delta_j, \lambda, \zeta\in\mathbb{C}, \ (j = 1, \cdots, m) , \Re(\lambda) > 0 , \Re(\delta_{j}) > -1 , \sum^{m}_{j = 1}\Re(\xi_{j}) > \max\{0: \Re(\sigma)-1\} , \sigma > 0 . Let g\in L[y_{1}, y_{2}] and t\in[y_{1}, y_{2}] . Then the corresponding left-sided and right-sided generalized integral operators having the generalized multi-index Bessel function as kernel are defined by:

    \begin{equation} \left(Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}g\right)(z) = \int^{z}_{y_{1}}(z-t)^{\delta_{j}}\mathrm{J}^{(\xi_j)_m, \lambda}_{(\delta_j)_m, \sigma}(\zeta(z-t)^{\xi_{j}})g(t)dt, \end{equation} (3.1)

    and

    \begin{equation} \left(Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{2}}^{-}}g\right)(z) = \int^{y_{2}}_{z}(t-z)^{\delta_{j}}\mathrm{J}^{(\xi_j)_m, \lambda}_{(\delta_j)_m, \sigma}(\zeta(t-z)^{\xi_{j}})g(t)dt. \end{equation} (3.2)

    Remark 3.1. The special cases of generalized fractional integrals with nonsingular kernel are given below:

    1. If we set j = m = 1 , \sigma = 0 and take the limits of integration over [0, z] in Eq (3.1), we get the fractional integral defined by Srivastava and Singh in [59] as

    \begin{equation} \left(Œ^{\xi_1, \delta_1}_{\lambda, 0, \zeta; 0^{+}}g\right)(z) = \int^{z}_{0}(z-t)^{\delta_{1}}\mathrm{J}^{\delta_{1}}_{\xi_{1}}(\zeta(z-t)^{\xi_{1}})g(t)dt = f(z). \end{equation} (3.3)

    2. If we set j = m = 1 and replace \delta_{1} by \delta_{1}-1 in Eq (3.1), we have the fractional integral defined by Srivastava and Tomovski in [60] as

    \begin{equation} \left(Œ^{\xi_1, \delta_1-1}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}g\right)(z) = \left(E^{\zeta; \lambda, \sigma}_{{y_{1}}^{+}; \xi_1, \delta_1}g\right)(z). \end{equation} (3.4)

    3. If we set j = m = 1 , replace \delta_{1} by \delta_{1}-1 and take \zeta = 0 in Eq (3.1), we get the Riemann-Liouville fractional integral operator defined in [61] as

    \begin{equation} \left(Œ^{\xi_1, \delta_1}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}g\right)(z) = \left(I^{\delta_1}_{{y_{1}}^{+}}g\right)(z). \end{equation} (3.5)

    4. If we set j = m = 1 , \sigma = 1 and replace \delta_{1} by \delta_{1}-1 in Eq (3.1) and Eq (3.2), we get the fractional integral operators defined by Prabhakar in [62] as follows:

    \begin{equation} \left(Œ^{\xi_1, \delta_1-1}_{\lambda, 1, \zeta; {y_{1}}^{+}}g\right)(z) = \left(E(\xi_1, \delta_1; \lambda; \zeta)g\right)(z), \end{equation} (3.6)
    \begin{equation} \left(Œ^{\xi_1, \delta_1-1}_{\lambda, 1, \zeta; {y_{2}}^{-}}g\right)(z) = \left(E(\xi_1, \delta_1; \lambda; \zeta)g\right)(z). \end{equation} (3.7)

    Lemma 3.1. For the generalized fractional integral operator, we have

    \begin{align} \left(Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}1\right)(z) & = \int^{z}_{y_{1}}(z-t)^{\delta_{j}}\mathrm{J}^{(\xi_j)_m, \lambda}_{(\delta_j)_m, \sigma}(\zeta(z-t)^{\xi_{j}})dt\\ & = \int^{z}_{y_{1}}(z-t)^{\delta_{j}}\sum\limits^{\infty}_{s = 0}\frac{(\lambda)_{\sigma s}(-\zeta)^{s}}{\prod^{m}_{j = 1}\Gamma(\xi_{j}s+\delta_{j}+1)}\frac{(z-t)^{\xi_{j}s}}{s!}dt\\ & = \sum\limits^{\infty}_{s = 0}\frac{(\lambda)_{\sigma s}(-\zeta)^{s}}{\prod^{m}_{j = 1}\Gamma(\xi_{j}s+\delta_{j}+1)s!}\int^{z}_{y_{1}}(z-t)^{\xi_{j}s+\delta_{j}}dt\\ & = (z-y_{1})^{\delta_{j}+1}\sum\limits^{\infty}_{s = 0}\frac{(\lambda)_{\sigma s}(-\zeta)^{s}}{\prod^{m}_{j = 1}\Gamma(\xi_{j}s+\delta_{j}+1)s!}\frac{(z-y_{1})^{\xi_{j}s}}{\xi_{j}s+\delta_{j}+1}. \end{align} (3.8)

    Hence, Eq (3.8) becomes

    \begin{equation} \left(Œ^{(\xi_j, \delta_j+1)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}1\right)(z) = (z-y_{1})^{\delta_{j}+1}\mathrm{J}^{(\xi_j)_m, \lambda}_{(\delta_j)_m+1, \sigma}(\zeta(z-y_{1})^{\xi_{j}}), \end{equation} (3.9)

    and similarly we have

    \begin{equation} \left(Œ^{(\xi_j, \delta_j+1)_m}_{\lambda, \sigma, \zeta; {y_{2}}^{-}}1\right)(z) = (y_{2}-z)^{\delta_{j}+1}\mathrm{J}^{(\xi_j)_m, \lambda}_{(\delta_j)_m+1, \sigma}(\zeta(y_{2}-z)^{\xi_{j}}). \end{equation} (3.10)
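    Lemma 3.1 can be verified numerically. The sketch below (our addition) specialises to the single-index case m = 1 , computes the left-hand integral by quadrature and compares it with the closed form on the right of (3.9); since both sides use the same truncated series, the check is self-consistent whatever sign convention is adopted in (2.11). All parameter values are illustrative.

```python
# Numerical check of Lemma 3.1 / Eq (3.9) for m = 1.
import math
from scipy.integrate import quad

def J(z, xi, delta, lam, sigma, terms=80):
    """Truncated series of Definition 2.11 with a single index (m = 1)."""
    return sum(math.gamma(lam + sigma * s) / math.gamma(lam)
               / math.gamma(xi * s + delta + 1) * (-z) ** s / math.factorial(s)
               for s in range(terms))

xi, delta, lam, sigma, zeta = 0.8, 0.4, 1.2, 1.0, 0.3
y1, z = 0.0, 1.0

lhs = quad(lambda t: (z - t) ** delta * J(zeta * (z - t) ** xi, xi, delta, lam, sigma), y1, z)[0]
rhs = (z - y1) ** (delta + 1) * J(zeta * (z - y1) ** xi, xi, delta + 1, lam, sigma)
print(lhs, rhs)    # the two values agree up to quadrature/truncation error
```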

    In this section, we derive a Hermite-Hadamard type Mercer inequality for the newly designed fractional integral operator with the generalized multi-index Bessel function as its kernel.

    Theorem 4.1. Let g: [m, n]\rightarrow(0, \infty) be a convex function such that g\in\chi_{c}(m, n) , and let x, y\in[m, n] . With the left-sided operator defined in Eq (3.1) and the right-sided operator defined in Eq (3.2), we have

    \begin{align} g\left(m+n-\frac{x+y}{2}\right)&\leq g(m)+g(n)-\frac{\left[\mathrm{J}^{(\xi_j)_m, \lambda}_{(\delta_j)_m+1, \sigma}(\zeta)\right]^{-1}}{2(y-x)}\left[Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; x^{+}}g(y)+Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; y^{-}}g(x)\right] \end{align} (4.1)
    \begin{align} &\leq g(m)+g(n)-\frac{g(x)+g(y)}{2}. \end{align} (4.2)

    Proof. Consider the Mercer inequality

    \begin{equation} g\left(m+n-\frac{y_{1}+y_{2}}{2}\right)\leq g(m)+g(n)-\frac{g(y_{1})+g(y_{2})}{2}, \qquad y_{1}, y_{2}\in[m, n]. \end{equation} (4.3)

    Let x, y\in[m, n] , t\in[z-1, z] , y_{1} = (z-t)x+(1-z+t)y and y_{2} = (1-z+t)x+(z-t)y ; then inequality (4.3) becomes

    \begin{equation} g\left(m+n-\frac{y_{1}+y_{2}}{2}\right)\leq g(m)+g(n)-\frac{g((z-t)x+(1-z+t)y)+g((1-z+t)x+(z-t)y)}{2}. \end{equation} (4.4)

    Multiplying both sides of Eq (4.4) by (z-t)^{\delta_{j}}\mathrm{J}^{(\xi_j)_m, \lambda}_{(\delta_j)_m, \sigma}(\zeta(z-t)^{\xi_{j}}) and integrating with respect to t over [z-1, z] , we get

    J(ξj)m,λ(δj)m+1,σ(ζ)g(m+nx+y2)J(ξj)m,λ(δj)m+1,σ(ζ)[g(m)+g(n)]12[zz1(zt)δjJ(ξj)m,λ(δj)m,σ(ζ(zt)ξj)×[g((zt)y1+(1z+t)y2)+g(1z+t)x+(zt)y2]]dt=J(ξj)m,λ(δj)m+1,σ(ζ)[g(m)+g(n)]12[yx(yuyx)δjJ(ξj)m,λ(δj)m,σ(ζ(yuyx)ξj)×g(u)(yx)du+xy(uxyx)δjJ(ξj)m,λ(δj)m,σ(ζ(uxyx)ξj)g(u)(yx)du]=J(ξj)m,λ(δj)m+1,σ(ζ)[g(m)+g(n)]12(yx)[Œ(ξj,δj)mλ,σ,ζ;x+g(y)+Œ(ξj,δj)mλ,σ,ζ;yg(x)],

    we get the desired inequality, as

    g(m+nx+y2)g(m)+g(n)[J(ξj)m,λ(δj)m+1,σ(ζ)]12(yx)[Œ(ξj,δj)mλ,σ,ζ;x+g(y)+Œ(ξj,δj)mλ,σ,ζ;yg(x)]. (4.5)

    Thus, we get inequality (4.1). Let t\in[z-1, z] . From the convexity of the function g we have

    \begin{equation} g\left(\frac{x+y}{2}\right) = g\left[\frac{(z-t)x+(1-z+t)y+(1-z+t)x+(z-t)y}{2}\right]\leq\frac{g((z-t)x+(1-z+t)y)+g((1-z+t)x+(z-t)y)}{2}. \end{equation} (4.6)

    Multiplying both sides of Eq (4.6) by (z-t)^{\delta_{j}}\mathrm{J}^{(\xi_j)_m, \lambda}_{(\delta_j)_m, \sigma}(\zeta(z-t)^{\xi_{j}}) and integrating with respect to t over [z-1, z] , we obtain

    J(ξj)m,λ(δj)m,σ(ζ)g(x+y2)zz1(zt)δjJ(ξj)m,λ(δj)m,σ(ζ(zt)ξj)×[g((zt)x+(1z+t)y)+g((1z+t)x+(zt)y)]dt=12(yx)[Œ(ξj,δj)mλ,σ,ζ;x+g(y)+Œ(ξj,δj)mλ,σ,ζ;yg(x)].

    We thus get the inequality with a negative sign,

    \begin{equation} -g\left(\frac{x+y}{2}\right)\geq-\frac{\left[\mathrm{J}^{(\xi_j)_m, \lambda}_{(\delta_j)_m+1, \sigma}(\zeta)\right]^{-1}}{2(y-x)}\left[Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; x^{+}}g(y)+Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; y^{-}}g(x)\right]. \end{equation} (4.7)

    By adding g(m)+g(n) to both sides of inequality (4.7), we have

    \begin{align} g(m)+g(n)-g\left(\frac{x+y}{2}\right)\geq g(m)+g(n)-\frac{\left[\mathrm{J}^{(\xi_j)_m, \lambda}_{(\delta_j)_m+1, \sigma}(\zeta)\right]^{-1}}{2(y-x)}\left[Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; x^{+}}g(y)+Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; y^{-}}g(x)\right]. \end{align}

    Hence, we get the inequality (4.2).

    Theorem 4.2. Let g: [m, n]\rightarrow(0, \infty) be a convex function such that g\in\chi_{c}(m, n) ; then we have the following inequalities:

    \begin{align} g\left(m+n-\frac{x+y}{2}\right)&\leq\frac{\left[\mathrm{J}^{(\xi_j)_m, \lambda}_{(\delta_j)_m, \sigma}(\zeta)\right]^{-1}}{2(y-x)}\left[Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; (m+n-y)^{+}}g(m+n-x)+Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; (m+n-x)^{-}}g(m+n-y)\right] \end{align} (4.8)
    \begin{align} &\leq\frac{g(m+n-x)+g(m+n-y)}{2}\leq g(m)+g(n)-\frac{g(x)+g(y)}{2}, \end{align} (4.9)

    where x, y\in[m, n] .

    Proof. From the convexity of g we see that

    \begin{equation} g\left(m+n-\frac{y_{1}+y_{2}}{2}\right) = g\left(\frac{m+n-y_{1}+m+n-y_{2}}{2}\right)\leq\frac{1}{2}\left[g(m+n-y_{1})+g(m+n-y_{2})\right], \qquad y_{1}, y_{2}\in[m, n]. \end{equation} (4.10)

    Let x, y\in[m, n] , t\in[z-1, z] , m+n-y_{1} = (z-t)(m+n-x)+(1-z+t)(m+n-y) and m+n-y_{2} = (1-z+t)(m+n-x)+(z-t)(m+n-y) ; then inequality (4.10) gives

    \begin{equation} g\left(m+n-\frac{y_{1}+y_{2}}{2}\right)\leq\frac{1}{2}g[(z-t)(m+n-x)+(1-z+t)(m+n-y)]+\frac{1}{2}g[(1-z+t)(m+n-x)+(z-t)(m+n-y)]. \end{equation} (4.11)

    Multiplying both sides of inequality (4.11) by (z-t)^{\delta_{j}}\mathrm{J}^{(\xi_j)_m, \lambda}_{(\delta_j)_m, \sigma}(\zeta(z-t)^{\xi_{j}}) and integrating with respect to t over [z-1, z] , we get

    J(ξj)m,λ(δj)m,σ(ζ)g(m+nx+y2)12zz1(zt)δjJ(ξj)m,λ(δj)m,σ(ζ(zt)ξj)g[(zt)(m+nx)+(1z+t)(m+ny)]dt+12zz1(zt)δjJ(ξj)m,λ(δj)m,σ(ζ(zt)ξj)g[(1z+t)(m+nx)+(zt)(m+ny)]dt=12(yx)[m+nxm+ny(u(m+ny)yx)δj)J(ξj)m,λ(δj)m,σ(ζ(u(m+ny)yx)ξj)g(u)du+m+nym+nx((m+ny)uyx)δj)J(ξj)m,λ(δj)m,σ(ζ((m+ny)uyx)ξj)g(u)du]=12(yx)[Œ(ξj,δj)mλ,σ,ζ;(m+ny)+g(m+nx)+Œ(ξj,δj)mλ,σ,ζ;(m+nx)g(m+ny)].

    Thus, we get the inequality (4.8)

    g(m+nx+y2)[J(ξj)m,λ(δj)m,σ(ζ)]12(yx)[Œ(ξj,δj)mλ,σ,ζ;(m+ny)+g(m+nx)+Œ(ξj,δj)mλ,σ,ζ;(m+nx)g(m+ny)].

    From the convexity of g, we obtain

    \begin{equation} g((z-t)(m+n-x)+(1-z+t)(m+n-y))\leq(z-t)g(m+n-x)+(1-z+t)g(m+n-y), \end{equation} (4.12)

    and

    \begin{equation} g((1-z+t)(m+n-x)+(z-t)(m+n-y))\leq(1-z+t)g(m+n-x)+(z-t)g(m+n-y). \end{equation} (4.13)

    Adding the above inequalities and applying the Jensen-Mercer inequality, we get

    \begin{align} g((z-t)(m+n-x)+(1-z+t)(m+n-y))+g((1-z+t)(m+n-x)+(z-t)(m+n-y))\\ \leq g(m+n-x)+g(m+n-y)\leq 2[g(m)+g(n)]-[g(x)+g(y)]. \end{align} (4.14)

    Multiplying both sides of inequality (4.14) by (z-t)^{\delta_{j}}\mathrm{J}^{(\xi_j)_m, \lambda}_{(\delta_j)_m, \sigma}(\zeta(z-t)^{\xi_{j}}) and then integrating with respect to t over [z-1, z] , we obtain the inequalities in (4.9).

    In this section, we derive, in the form of theorems, some inequalities for exponentially (s-m)-preinvex functions involving the newly designed fractional integral operator \left(Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta}g\right)(z) having the generalized multi-index Bessel function as its kernel.

    Theorem 5.1. Suppose a real-valued function g: [y_{1}, y_{1}+\xi(y_{2}, y_{1})]\rightarrow\mathbb{R} is an exponentially (s-m)-preinvex function; then the following fractional inequality holds:

    (Œ(ξj,δj)mλ,σ,ζ;y+1g)(z)+(Œ(ξj,μj)mλ,σ,ζ;(y1+ξ(y2,y1))+g)(z)(zy1)s+1(Œ(ξj,δj)mλ,σ,ζ;y+11)(z)[g(y1)eθ1y1+mg(z)eθ1z]+(y1+ξ(y2,y1)z)s+1(Œ(ξj,μj)mλ,σ,ζ;(y1+ξ(y2,y1))+1)(z)[g(y1+ξ(y2,y1))eθ2(y1+ξ(y2,y1))+mg(z)eθ2z].

    Here z\in[y_{1}, y_{1}+\xi(y_{2}, y_{1})] and \theta_{1}, \theta_{2}\in\mathbb{R} .

    Proof. Let z\in[y_{1}, y_{1}+\xi(y_{2}, y_{1})] ; then for t\in[y_{1}, z) and \delta_{j} > -1 , we have the subsequent inequality

    \begin{equation} (z-t)^{\delta_{j}}\mathrm{J}^{(\xi_j)_m, \lambda}_{(\delta_j)_m, \sigma}(\zeta(z-t)^{\xi_{j}})\leq(z-y_{1})^{\delta_{j}}\mathrm{J}^{(\xi_j)_m, \lambda}_{(\delta_j)_m, \sigma}(\zeta(z-y_{1})^{\xi_{j}}). \end{equation} (5.1)

    Since g is an exponentially (s-m)-preinvex function, we obtain

    \begin{equation} g(t)\leq\left(\frac{z-t}{z-y_{1}}\right)^{s}\frac{g(y_{1})}{e^{\theta_{1}y_{1}}}+m\left(\frac{t-y_{1}}{z-y_{1}}\right)^{s}\frac{g(z)}{e^{\theta_{1}z}}. \end{equation} (5.2)

    Taking the product of (5.1) and (5.2) and integrating with respect to t from y_{1} to z , we get

    zy1(zt)δjJ(ξj)m,λ(δj)m,σ(ζ(zt)ξj)g(t)dtzy1(zy1)δjJ(ξj)m,λ(δj)m,σ(ζ(zy1)ξj)×[(ztzy1)sg(y1)eθ1y1+m(ty1zy1)sg(z)eθ1z]dt, (5.3)

    Applying the definition (13) in Eq (5.3), we have

    (Œ(ξj,δj)mλ,σ,ζ;y+1g)(z)(zy1)s+1(Œ(ξj,δj)mλ,σ,ζ;y+11)(z)[g(y1)eθ1y1+mg(z)eθ1z]. (5.4)

    Analogously, for t\in(z, y_{1}+\xi(y_{2}, y_{1})] and \mu_{j} > -1 , we have

    \begin{equation} (t-z)^{\mu_{j}}\mathrm{J}^{(\xi_j)_m, \lambda}_{(\mu_j)_m, \sigma}(\zeta(t-z)^{\xi_{j}})\leq(y_{1}+\xi(y_{2}, y_{1})-z)^{\mu_{j}}\mathrm{J}^{(\xi_j)_m, \lambda}_{(\mu_j)_m, \sigma}(\zeta(y_{1}+\xi(y_{2}, y_{1})-z)^{\xi_{j}}). \end{equation} (5.5)

    Further, by the exponential (s-m)-preinvexity of g , we get

    \begin{equation} g(t)\leq\left(\frac{t-z}{y_{1}+\xi(y_{2}, y_{1})-z}\right)^{s}\frac{g(y_{1}+\xi(y_{2}, y_{1}))}{e^{\theta_{2}(y_{1}+\xi(y_{2}, y_{1}))}}+m\left(\frac{y_{1}+\xi(y_{2}, y_{1})-t}{y_{1}+\xi(y_{2}, y_{1})-z}\right)^{s}\frac{g(z)}{e^{\theta_{2}z}}. \end{equation} (5.6)

    Taking the product of (5.5) and (5.6) and integrating with respect to t from z to y_{1}+\xi(y_{2}, y_{1}) , we have

    y1+ξ(y2,y1)z(tz)μjJ(ξj)m,λ(μj)m,σ(ζ(tz)ξj)g(t)dty1+ξ(y2,y1)z(y1+ξ(y2,y1)z)μjJ(ξj)m,λ(μj)m,σ(ζ(y1+ξ(y2,y1)z)ξj)×[(tzy1+ξ(y2,y1)z)sg(y1+ξ(y2,y1))eθ2(y1+ξ(y2,y1))+m(y1+ξ(y2,y1)ty1+ξ(y2,y1)z)sg(z)eθ2z]dt, (5.7)

    Applying the definition (13) in inequality (5.7), we have

    (Œ(ξj,μj)mλ,σ,ζ;(y1+ξ(y2,y1))+g)(z)(y1+ξ(y2,y1)z)s+1(Œ(ξj,μj)mλ,σ,ζ;(y1+ξ(y2,y1))+1)(z)[g(y1+ξ(y2,y1))eθ2(y1+ξ(y2,y1))+mg(z)eθ2z]. (5.8)

    Now, adding the inequalities (5.4) and (5.8), we get the result

    (Œ(ξj,δj)mλ,σ,ζ;y+1g)(z)+(Œ(ξj,μj)mλ,σ,ζ;(y1+ξ(y2,y1))+g)(z)(zy1)s+1(Œ(ξj,δj)mλ,σ,ζ;y+11)(z)[g(y1)eθ1y1+mg(z)eθ1z]+(y1+ξ(y2,y1)z)s+1(Œ(ξj,μj)mλ,σ,ζ;(y1+ξ(y2,y1))+1)(z)[g(y1+ξ(y2,y1))eθ2(y1+ξ(y2,y1))+mg(z)eθ2z].

    Corollary 5.1. If g\in L[y_{1}, y_{1}+\xi(y_{2}, y_{1})] , then under the assumptions of Theorem 5.1 we have

    (Œ(ξj,δj)mλ,σ,ζ;y+1g)(z)+(Œ(ξj,μj)mλ,σ,ζ;(y1+ξ(y2,y1))+g)(z)||g||s+1[(zy1)(Œ(ξj,δj)mλ,σ,ζ;y+11)(z)(1eθ1y1+m1eθ1z)+(y1+η(y2,y1)z)(Œ(ξj,μj)mλ,σ,ζ;(y1+ξ(y2,y1))+1)(z)(1eθ2(y1+ξ(y2,y1))+m1eθ2z)].

    Corollary 5.2. Setting m = 1 and g\in L[y_{1}, y_{1}+\xi(y_{2}, y_{1})] , then under the assumptions of Theorem 5.1 we have

    (Œ(ξj,δj)mλ,σ,ζ;y+1g)(z)+(Œ(ξj,μj)mλ,σ,ζ;(y1+ξ(y2,y1))+g)(z)||g||s+1[(zy1)(Œ(ξj,δj)mλ,σ,ζ;y+11)(z)(1eθ1y1+m1eθ1z)+(y1+ξ(y2,y1)z)(Œ(ξj,μj)mλ,σ,ζ;(y1+ξ(y2,y1))+1)(z)(1eθ2(y1+ξ(y2,y1))+1eθ2z)].

    Corollary 5.3. Setting m = s = 1 and g\in L[y_{1}, y_{1}+\xi(y_{2}, y_{1})] , then under the assumptions of Theorem 5.1 we have

    (Œ(ξj,δj)mλ,σ,ζ;y+1g)(z)+(Œ(ξj,μj)mλ,σ,ζ;(y1+ξ(y2,y1))+g)(z)||g||2[(zy1)(Œ(ξj,δj)mλ,σ,ζ;y+11)(z)(1eθ1y1+m1eθ1z)+(y1+ξ(y2,y1)z)(Œ(ξj,μj)mλ,σ,ζ;(y1+ξ(y2,y1))+1)(z)(1eθ2(y1+ξ(y2,y1))+1eθ2z)].

    Corollary 5.4. Setting \xi(y_{2}, y_{1}) = y_{2}-y_{1} and g\in L[y_{1}, y_{2}] , then under the assumptions of Theorem 5.1 we have

    (Œ(ξj,δj)mλ,σ,ζ;y+1g)(z)+(Œ(ξj,μj)mλ,σ,ζ;(y1+ξ(y2,y1))+g)(z)||g||s+1[(zy1)(Œ(ξj,δj)mλ,σ,ζ;y+11)(z)(1eθ1y1+m1eθ1z)+(y2z)(Œ(ξj,μj)mλ,σ,ζ;y+21)(z)(1eθ2y2+1eθ2z)].

    Theorem 5.2. Suppose a real-valued function g: [y_{1}, y_{1}+\xi(y_{2}, y_{1})]\rightarrow\mathbb{R} is differentiable and |g'| is exponentially (s-m)-preinvex; then the following fractional inequality for (3.1) and (3.2) holds:

    |(Œ(ξj)m,(δj1)mλ,σ,ζ:y+1g)(z)+(Œ(ξj)m,(μj1)mλ,σ,ζ;(y1+ξ(y2,y1))g)(z)[(Œ(ξj,δj)mλ,σ,ζ;y+11)(z)]g(y1)[(Œ(ξj,μj)mλ,σ,ζ;(y1+ξ(y2,y1))1)(z)]g(y1+ξ(y2,y1))|(zy1)s+1(Œ(ξj,δj)mλ,σ,ζ;y+11)(z)[|g(y1)|eθ1y1+m|g(z)|eθ1z]+((y1+ξ(y2,y1)z)s+1(Œ(ξj,μj)mλ,σ,ζ;(y1+ξ(y2,y1))1)(z)[|g(y1+ξ(y2,y1))|eθ1(y1+ξ(y2,y1))+m|g(z)|eθ1z].

    Here z\in[y_{1}, y_{1}+\xi(y_{2}, y_{1})] and \theta_{1}, \theta_{2}\in\mathbb{R} .

    Proof. Let z\in[y_{1}, y_{1}+\xi(y_{2}, y_{1})] and t\in[y_{1}, z) ; applying the exponential (s-m)-preinvexity of |g'| , we get

    \begin{equation} |g'(t)|\leq\left(\frac{z-t}{z-y_{1}}\right)^{s}\frac{|g'(y_{1})|}{e^{\theta_{1}y_{1}}}+m\left(\frac{t-y_{1}}{z-y_{1}}\right)^{s}\frac{|g'(z)|}{e^{\theta_{1}z}}. \end{equation} (5.9)

    From inequality (5.9), we have

    \begin{equation} g'(t)\leq\left(\frac{z-t}{z-y_{1}}\right)^{s}\frac{|g'(y_{1})|}{e^{\theta_{1}y_{1}}}+m\left(\frac{t-y_{1}}{z-y_{1}}\right)^{s}\frac{|g'(z)|}{e^{\theta_{1}z}}. \end{equation} (5.10)

    We also have the subsequent inequality:

    \begin{equation} (z-t)^{\delta_{j}}\mathrm{J}^{(\xi_j)_m, \lambda}_{(\delta_j)_m, \sigma}(\zeta(z-t)^{\xi_{j}})\leq(z-y_{1})^{\delta_{j}}\mathrm{J}^{(\xi_j)_m, \lambda}_{(\delta_j)_m, \sigma}(\zeta(z-y_{1})^{\xi_{j}}). \end{equation} (5.11)

    Taking the product of inequalities (5.10) and (5.11), we have

    \begin{equation} (z-t)^{\delta_{j}}\mathrm{J}^{(\xi_j)_m, \lambda}_{(\delta_j)_m, \sigma}(\zeta(z-t)^{\xi_{j}})g'(t)\leq(z-y_{1})^{\delta_{j}}\mathrm{J}^{(\xi_j)_m, \lambda}_{(\delta_j)_m, \sigma}(\zeta(z-y_{1})^{\xi_{j}})\left[\left(\frac{z-t}{z-y_{1}}\right)^{s}\frac{|g'(y_{1})|}{e^{\theta_{1}y_{1}}}+m\left(\frac{t-y_{1}}{z-y_{1}}\right)^{s}\frac{|g'(z)|}{e^{\theta_{1}z}}\right]. \end{equation} (5.12)

    Integrating the above inequality with respect to t from y_{1} to z , we have

    zy1(zt)δjJ(ξj)m,λ(δj)m,σ(ζ(zt)ξj)g(t)dtzy1(zy1)δjJ(ξj)m,λ(δj)m,k(ζ(zy1)ξj)[(ztzy1)s|g(y1)|eθ1y1+m(ty1zy1)s|g(z)|eθ1z]dt=(zy1)s+1(Œ(ξj,δj)mλ,σ,ζ;y+11)(z)[|g(y1)|eθ1y1+m|g(z)|eθ1z]. (5.13)

    Now, solving the left side of (5.13) by putting z-t = \alpha , we have

    zy1(zt)δjJ(ξj)m,λ(δj)m,σ(ζ(zt)ξj)g(t)dt=zy10αδjJ(ξj)m,λ(δj)m,σ(ζ(α)ξj)g(zα)dα=(zy1)δjJ(ξj)m,λ(δj)m,σ(ζ(zy1)ξj)g(y1)+zy10αδj1J(ξj)m,λ(δj)m1,σ(ζ(α)ξj)g(zα)dα.

    Now, again substituting z-\alpha = t , we get

    zy1(zt)δjJ(ξj)m,λ(δj)m,σ(ζ(zt)ξj)g(t)dt=zy1(zt)δj1J(ξj)m,λ(δj)m1,σ(ζ(zt)ξj)g(t)dt(zy1)δjJ(ξj)m,λ(δj)m,σ(ζ(zy1)ξj)g(y1)=(Œ(ξj)m,(δj1)mλ,σ,ζ:y+1g)(z)[(Œ(ξj,δj)mλ,σ,ζ;y+11)(z)]g(y1).

    Therefore, inequality (5.13) takes the following form:

    (Œ(ξj)m,(δj1)mλ,σ,ζ:y+1g)(x)[(Œ(ξj,δj)mλ,σ,ζ;y+11)(z)]g(y1)(zy1)s+1(Œ(ξj,δj)mλ,σ,ζ;y+11)(z)[|g(y1)|eθ1y1+m|g(z)|eθ1z]. (5.14)

    Also from (5.9), we get

    \begin{equation} g'(t)\geq-\left(\frac{z-t}{z-y_{1}}\right)^{s}\frac{|g'(y_{1})|}{e^{\theta_{1}y_{1}}}-m\left(\frac{t-y_{1}}{z-y_{1}}\right)^{s}\frac{|g'(z)|}{e^{\theta_{1}z}}. \end{equation} (5.15)

    Adopting the same procedure as we have done for (5.10), we obtain

    (Œ(ξj)m,(δj1)mλ,σ,ζ:y+1g)(z)[(Œ(ξj,δj)mλ,σ,ζ;y+11)(z)]g(y1)(zy1)s+1(Œ(ξj,δj)mλ,σ,ζ;y+11)(z)[|g(y1)|eθ1y1+m|g(z)|eθ1z]. (5.16)

    From (5.14) and (5.16), we get

    |(Œ(ξj)m,(δj1)mλ,σ,ζ:y+1g)(z)[(Œ(ξj,δj)mλ,σ,ζ;y+11)(z)]g(y1)|(zy1)s+1(Œ(ξj,δj)mλ,σ,ζ;y+11)(z)[|g(y1)|eθ1y1+m|g(z)|eθ1z]. (5.17)

    Now, let z\in[y_{1}, y_{1}+\eta(y_{2}, y_{1})] and t\in(z, y_{1}+\xi(y_{2}, y_{1})] ; then by the exponential (s-m)-preinvexity of |g'| , we get

    |g(t)|(tzy1+ξ(y2,y1)z)s|g(y1+ξ(y2,y1))|eθ2(y1+ξ(y2,y1))+m(y1+ξ(y2,y1)ty1+ξ(y2,y1)z)s|g(z)|eθ2z, (5.18)

    Repeating the same procedure from Eq (5.9) to Eq (5.17), we get

    |(Œ(ξj)m,(μj1)mλ,σ,ζ;(y1+ξ(y2,y1))g)(z)[(Œ(ξj,μj)mλ,σ,ζ;(y1+ξ(y2,y1))1)(z)]g(y1+ξ(y2,y1))|((y1+ξ(y2,y1)z)s+1(Œ(ξj,μj)mλ,σ,ζ;(y1+ξ(y2,y1))1)(z)[|g(y1+ξ(y2,y1))|eθ1(y1+ξ(y2,y1))+m|g(z)|eθ1z]. (5.19)

    From inequalities (5.17) and (5.19), we have

    |(Œ(ξj)m,(δj1)mλ,σ,ζ:y+1g)(z)+(Œ(ξj)m,(μj1)mλ,σ,ζ;(y1+ξ(y2,y1))g)(z)[(Œ(ξj,δj)mλ,k,ζ;y+11)(z)]g(y1)[(Œ(ξj,μj)mλ,σ,ζ;(y1+ξ(y2,y1))1)(z)]g(y1+ξ(y2,y1))|(zy1)s+1(Œ(ξj,δj)mλ,σ,ζ;y+11)(z)[|g(y1)|eθ1y1+m|g(z)|eθ1z]+((y1+ξ(y2,y1)z)s+1(Œ(ξj,μj)mλ,σ,ζ;(y1+ξ(y2,y1))1)(z)[|g(y1+ξ(y2,y1))|eθ1(y1+ξ(y2,y1))+m|g(z)|eθ1z].

    Corollary 5.5. Setting \xi(y_{2}, y_{1}) = y_{2}-y_{1} , then under the assumptions of Theorem 5.2 we have

    |(Œ(ξj)m,(δj1)mλ,σ,ζ:y+1g)(z)+(Œ(ξj)m,(μj1)mλ,σ,ζ;y2g)(z)[(Œ(ξj,δj)mλ,k,ζ;y+11)(z)]g(y1)[(Œ(ξj,μj)mλ,σ,ζ;y21)(z)]g(y2)|(zy1)s+1(Œ(ξj,δj)mλ,σ,ζ;y+11)(z)[|g(y1)|eθ1y1+m|g(z)|eθ1z]+(y2z)s+1(Œ(ξj,μj)mλ,σ,ζ;y21)(z)[|g(y2)|eθ1(y2)+m|g(z)|eθ1z].

    Here t\in[y_{1}, y_{2}] and \theta_{1}, \theta_{2}\in\mathbb{R} .

    Corollary 5.6. Setting \xi(y_{2}, y_{1}) = y_{2}-y_{1} along with m = s = 1 , then under the assumptions of Theorem 5.2 we have

    |(Œ(ξj)m,(δj1)mλ,σ,ζ:y+1g)(z)+(Œ(ξj)m,(μj1)mλ,σ,ζ;y2g)(z)[(Œ(ξj,δj)mλ,k,ζ;y+11)(z)]g(y1)[(Œ(ξj,μj)mλ,σ,ζ;y21)(z)]g(y2)|(zy1)2(Œ(ξj,δj)mλ,σ,ζ;y+11)(z)[|g(y1)|eθ1y1+|g(z)|eθ1z]+(y2z)2(Œ(ξj,μj)mλ,σ,ζ;y21)(z)[|g(y2)|eθ1(y2)+|g(z)|eθ1z].

    Here t\in[y_{1}, y_{2}] and \theta_{1}, \theta_{2}\in\mathbb{R} .

    Definition 5.1. Let g: [y_{1}, y_{1}+\xi(y_{2}, y_{1})]\rightarrow\mathbb{R} be a function; then g is exponentially symmetric about \frac{2y_{1}+\xi(y_{2}, y_{1})}{2} if

    \begin{equation} \frac{g(z)}{e^{\theta z}} = \frac{g(2y_{1}+\xi(y_{2}, y_{1})-z)}{e^{\theta(2y_{1}+\xi(y_{2}, y_{1})-z)}}, \qquad \theta\in\mathbb{R}. \end{equation} (5.20)

    Lemma 5.1. Let g: [y_{1}, y_{1}+\xi(y_{2}, y_{1})]\rightarrow\mathbb{R} be exponentially symmetric; then

    \begin{equation} g\left(\frac{2y_{1}+\xi(y_{2}, y_{1})}{2}\right)\leq(1+m)\frac{g(z)}{2^{s}e^{\theta z}}, \qquad \theta\in\mathbb{R}. \end{equation} (5.21)

    Proof. Since g is exponentially (s-m)-preinvex, we have

    \begin{equation} g\left(\frac{2y_{1}+\xi(y_{2}, y_{1})}{2}\right)\leq\frac{g(y_{1}+\delta\xi(y_{2}, y_{1}))}{2^{s}e^{\theta(y_{1}+\delta\xi(y_{2}, y_{1}))}}+m\frac{g(y_{1}+(1-\delta)\xi(y_{2}, y_{1}))}{2^{s}e^{\theta(y_{1}+(1-\delta)\xi(y_{2}, y_{1}))}}. \end{equation} (5.22)

    Let t = y_{1}+\delta\xi(y_{2}, y_{1}) , where t\in[y_{1}, y_{1}+\xi(y_{2}, y_{1})] , and then 2y_{1}+\xi(y_{2}, y_{1})-t = y_{1}+(1-\delta)\xi(y_{2}, y_{1}) ; we have

    \begin{equation} g\left(\frac{2y_{1}+\xi(y_{2}, y_{1})}{2}\right)\leq\frac{g(z)}{2^{s}e^{\theta z}}+m\frac{g(2y_{1}+\xi(y_{2}, y_{1})-z)}{2^{s}e^{\theta(2y_{1}+\xi(y_{2}, y_{1})-z)}}. \end{equation} (5.23)

    Applying the exponential symmetry of g , we obtain

    \begin{equation} g\left(\frac{2y_{1}+\xi(y_{2}, y_{1})}{2}\right)\leq(1+m)\frac{g(z)}{2^{s}e^{\theta z}}. \end{equation} (5.24)

    Theorem 5.3. Suppose a real-valued function g: [y_{1}, y_{1}+\xi(y_{2}, y_{1})]\rightarrow\mathbb{R} is exponentially (s-m)-preinvex and exponentially symmetric about \frac{2y_{1}+\xi(y_{2}, y_{1})}{2} ; then the following integral inequality for (3.1) and (3.2) holds:

    2s1+mf(2y1+ξ(y2,y1)2)[eθy1(Œ(μj,τj)mλ,σ,ζ;(y1+ξ(y2,y1))1)(y1)+(Œ(μj,δj)mλ,σ,ζ;y+11)(y1+ξ(y2,y1))](Œ(μj,τj)mλ,σ,ζ;(y1+ξ(y2,y1))g)(z)+(Œ(μj,τj)mλ,σ,ζ;y+1g)(y1+ξ(y2,y1))ξ(y2,y1)s+1(g(y1+ξ(y2,y1))eθ1(y1+ξ(y2,y1))+mg(y1)eθ1y1)×[(Œ(ξj,δj)mλ,σ,ζ;(y1+ξ(y2,y1))1)(z)+(Œ(ξj,μj)mλ,σ,ζ;(y1+ξ(y2,y1))1)(y1+ξ(y2,y1))]. (5.25)

    Proof. For z\in[y_{1}, y_{1}+\xi(y_{2}, y_{1})] , we have

    \begin{equation} (z-y_{1})^{\delta_{j}}\mathrm{J}^{(\xi_j)_m, \lambda}_{(\delta_j)_m, \sigma}(\zeta(z-y_{1})^{\xi_{j}})\leq(\xi(y_{2}, y_{1}))^{\delta_{j}}\mathrm{J}^{(\xi_j)_m, \lambda}_{(\delta_j)_m, \sigma}(\zeta(\xi(y_{2}, y_{1}))^{\xi_{j}}). \end{equation} (5.26)

    Since the real-valued function g is exponentially (s-m)-preinvex, for z\in[y_{1}, y_{1}+\xi(y_{2}, y_{1})] we get

    \begin{equation} g(z)\leq\left(\frac{z-y_{1}}{\xi(y_{2}, y_{1})}\right)^{s}\frac{g(y_{1}+\xi(y_{2}, y_{1}))}{e^{\theta_{1}(y_{1}+\xi(y_{2}, y_{1}))}}+m\left(\frac{y_{1}+\xi(y_{2}, y_{1})-z}{\xi(y_{2}, y_{1})}\right)^{s}\frac{g(y_{1})}{e^{\theta_{1}y_{1}}}. \end{equation} (5.27)

    Taking the product of (5.26) and (5.27) and integrating with respect to z from y_{1} to y_{2} , we get

    y2y1(zy1)δjJ(ξj)m,λ(δj)m,σ(ζ(zy1)ξj)g(z)dzy2y1(ξ(y2,y1))δjJ(ξj)m,λ(δj)m,σ(ζ(ξ(y2,y1))ξj)×[(zy1ξ(y2,y1))sg(y1+ξ(y2,y1))eθ1(y1+ξ(y2,y1))+m((y1+ξ(y2,y1)z)ξ(y2,y1))sg(y1)eθ1y1]dz, (5.28)

    then we have

    (Œ(ξj,δj)mλ,σ,ζ;(y1+ξ(y2,y1))g)(z)(ξ(y2,y1))δjJ(ξj)m,λ(δj)m,σ(ζ(ξ(y2,y1))ξj)ξ(y2,y1)s+1[g(y1+ξ(y2,y1))eθ1(y1+ξ(y2,y1))+mg(y1)eθ1y1]=(Œ(ξj,δj)mλ,σ,ζ;(y1+ξ(y2,y1))1)(z)ξ(y2,y1)s+1[g(y1+ξ(y2,y1))eθ1(y1+ξ(y2,y1))+mg(y1)eθ1y1]. (5.29)

    Analogously, for z\in[y_{1}, y_{1}+\xi(y_{2}, y_{1})] , we have

    \begin{equation} (y_{1}+\xi(y_{2}, y_{1})-z)^{\mu_{j}}\mathrm{J}^{(\xi_j)_m, \lambda}_{(\mu_j)_m, \sigma}(\zeta(z-y_{1})^{\xi_{j}})\leq(\xi(y_{2}, y_{1}))^{\mu_{j}}\mathrm{J}^{(\xi_j)_m, \lambda}_{(\mu_j)_m, \sigma}(\zeta(\xi(y_{2}, y_{1}))^{\xi_{j}}). \end{equation} (5.30)

    Taking the product of (5.27) and (5.30) and integrating with respect to z from y_{1} to y_{2} , we have

    y2y1(y1+ξ(y2,y1)z)μjJ(ξj)m,λ(μj)m,σ(ζ(zy1)ξj)g(z)dzy2y1(ξ(y2,y1))μjJ(ξj)m,λ(μj)m,σ(ζ(ξ(y2,y1))ξj)[(zy1ξ(y2,y1))sg(y1+ξ(y2,y1))eθ1(y1+ξ(y2,y1))+m((y1+ξ(y2,y1)z)ξ(y2,y1))sg(y1)eθ1y1]dz=(ξ(y2,y1))μjJ(ξj)m,λ(μj)m,σ(ζ(ξ(y2,y1))ξj)ξ(y2,y1)s+1[g(y1+ξ(y2,y1))eθ1(y1+ξ(y2,y1))+mg(y1)eθ1y1],

    then

    (Œ(ξj,μj)mλ,σ,ζ;y+1g)(z)(Œ(ξj,μj)mλ,σ;(y1+ξ(y2,y1))1)(y1+ξ(y2,y1))ξ(y2,y1)s+1[g(y1+ξ(y2,y1))eθ1(y1+ξ(y2,y1))+mg(y1)eθ1y1]. (5.31)

    Summing (5.29) and (5.31), we obtain

    (Œ(ξj,δj)mλ,σ,ζ;(y1+ξ(y2,y1))g)(z)+(Œ(ξj,μj)mλ,σ,ζ;y+1g)(z)ξ(y2,y1)s+1(g(y1+ξ(y2,y1))eθ1(y1+ξ(y2,y1))+mg(y1)eθ1y1)[(Œ(ξj,δj)mλ,σ,ζ;(y1+ξ(y2,y1))1)(z)+(Œ(ξj,μj)mλ,σ,ζ;(y1+ξ(y2,y1))1)(y1+ξ(y2,y1))]. (5.32)

    Taking the product of Eq (5.21) with (z-y_{1})^{\tau_{j}}\mathrm{J}^{(\mu_j)_m, \lambda}_{(\tau_j)_m, \sigma}(\zeta(z-y_{1})^{\mu_{j}}) and integrating with respect to z from y_{1} to y_{2} , we have

    g(2y1+ξ(y2,y1)2)y2y1(zy1)τjJ(μj)m,λ(τj)m,σ(ζ(zy1)μj)dz(1+m)2sy2y1(zy1)τjJ(μj)m,λ(τj)m,σ(ζ(zy1)μj)g(z)eθzdz (5.33)

    Using the definition (13), we have

    g(2y1+ξ(y2,y1)2)(Œ(μj,τj)mλ,σ,ζ;(y1+ξ(y2,y1))1)(y1)(1+m)2seθy1(Œ(μj,τj)mλ,σ,ζ;(y1+ξ(y2,y1))g)(z). (5.34)

    Taking the product of (5.21) with (y_{1}+\xi(y_{2}, y_{1})-z)^{\delta_{j}}\mathrm{J}^{(\mu_j)_m, \lambda}_{(\delta_j)_m, \sigma}(\zeta(y_{1}+\xi(y_{2}, y_{1})-z)^{\mu_{j}}) and integrating with respect to the variable z from y_{1} to y_{2} , we have

    g(2y1+ξ(y2,y1)2)(Œ(μj,δj)mλ,σ,ζ;y+11)(y1+ξ(y2,y1))(1+m)2seθ1(y1+ξ(y2,y1))(Œ(μj,τj)mλ,σ,ζ;y+1g)(y1+ξ(y2,y1)). (5.35)

    Summing up (5.34) and (5.35), we get

    2s1+mg(2y1+ξ(y2,y1)2)[eθy1(Œ(μj,τj)mλ,σ,ζ;(y1+ξ(y2,y1))1)(y1)+(Œ(μj,δj)mλ,σ,ζ;y+11)(y1+ξ(y2,y1))](Œ(μj,τj)mλ,σ,ζ;(y1+ξ(y2,y1))g)(z)+(Œ(μj,τj)mλ,σ,ζ;y+1g)(y1+ξ(y2,y1)). (5.36)

    Now, combining (5.32) and (5.36), we get inequality

    2s1+mg(2y1+ξ(y2,y1)2)[eθy1(Œ(μj,τj)mλ,σ,ζ;(y1+η(y2,y1))1)(y1)+(Œ(μj,δj)mλ,σ,ζ;y+11)(y1+ξ(y2,y1))](Œ(μj,τj)mλ,σ,ζ;(y1+ξ(y2,y1))g)(z)+(Œ(μj,τj)mλ,σ,ζ;y+1g)(y1+ξ(y2,y1))ξ(y2,y1)s+1(g(y1+ξ(y2,y1))eθ1(y1+ξ(y2,y1))+mg(y1)eθ1y1)×[(Œ(ξj,δj)mλ,σ,ζ;(y1+ξ(y2,y1))1)(z)+(Œ(ξj,μj)mλ,σ,ζ;(y1+ξ(y2,y1))1)(y1+ξ(y2,y1))].

    Corollary 5.7. Setting \xi(y_{2}, y_{1}) = y_{2}-y_{1} , then under the assumptions of Theorem 5.3 we have

    2s1+mg(y1+y22)[eθy1(Œ(μj,τj)mλ,σ,ζ;y21)(y1)+(Œ(μj,δj)mλ,σ,ζ;y+11)(y2)](Œ(μj,τj)mλ,σ,ζ;y2g)(z)+(Œ(μj,τj)mλ,σ,ζ;y+1g)(y2)(y2y1)s+1(g(y2y1)eθ1(y2y1)+mg(y1)eθ1y1)×[(Œ(ξj,δj)mλ,σ,ζ;y21)(z)+(Œ(ξj,μj)mλ,σ,ζ;y21)(y2)]. (5.37)

    In this section, we derive, in the form of theorems, some Pólya-Szegö inequalities for four positive integrable functions involving the fractional operator Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma}(z) .

    Theorem 6.1. Let h and l be integrable functions on [y_{1}, \infty) . Suppose that there exist integrable functions \theta_{1}, \theta_{2}, \psi_{1} and \psi_{2} on [y_{1}, \infty) such that:

    (R1) \ 0 < \theta_{1}(b)\leq h(b)\leq\theta_{2}(b) , \ 0 < \psi_{1}(b)\leq l(b)\leq\psi_{2}(b) \quad (b\in[y_{1}, z], \ z > y_{1}) .

    Then, for z > y_{1}, y_{1}\geq0 , \xi_j, \delta_j, \lambda \in\mathbb{C}, (j = 1, \cdots, m), \Re(\lambda) > 0, \Re(\delta_{j}) > -1, \sum^{m}_{j = 1} \Re(\xi_j) > \max\{0: \Re(\sigma)-1\}, \sigma > 0 and (z-b)\in \Omega , the following inequality holds:

    \begin{equation} \frac{Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[(\psi_{1}\psi_{2})h^{2}](z)\, Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[(\theta_{1}\theta_{2})l^{2}](z)}{\left[Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[(\theta_{1}\psi_{1}+\theta_{2}\psi_{2})hl](z)\right]^{2}}\leq\frac{1}{4}. \end{equation} (6.1)

    Proof. From (R1), for b\in[y_{1}, z] , z > y_{1} , we have

    \begin{equation} \frac{h(b)}{l(b)}\leq\frac{\theta_{2}(b)}{\psi_{1}(b)}, \end{equation} (6.2)

    which can be written as

    \begin{equation} \left(\frac{\theta_{2}(b)}{\psi_{1}(b)}-\frac{h(b)}{l(b)}\right)\geq0. \end{equation} (6.3)

    Similarly, we get

    \begin{equation} \frac{\theta_{1}(b)}{\psi_{2}(b)}\leq\frac{h(b)}{l(b)}, \end{equation} (6.4)

    thus

    \begin{equation} \left(\frac{h(b)}{l(b)}-\frac{\theta_{1}(b)}{\psi_{2}(b)}\right)\geq0. \end{equation} (6.5)

    Multiplying Eq (6.3) and Eq (6.5), it follows

    \begin{equation} \left(\frac{\theta_{2}(b)}{\psi_{1}(b)}-\frac{h(b)}{l(b)}\right)\left(\frac{h(b)}{l(b)}-\frac{\theta_{1}(b)}{\psi_{2}(b)}\right)\geq0, \end{equation} (6.6)

    i.e.

    \begin{equation} \left(\frac{\theta_{2}(b)}{\psi_{1}(b)}+\frac{\theta_{1}(b)}{\psi_{2}(b)}\right)\frac{h(b)}{l(b)}\geq\frac{h^{2}(b)}{l^{2}(b)}+\frac{\theta_{1}(b)\theta_{2}(b)}{\psi_{1}(b)\psi_{2}(b)}. \end{equation} (6.7)

    The last inequality can be written as

    \begin{equation} \left(\theta_{1}(b)\psi_{1}(b)+\theta_{2}(b)\psi_{2}(b)\right)h(b)l(b)\geq\psi_{1}(b)\psi_{2}(b)h^{2}(b)+\theta_{1}(b)\theta_{2}(b)l^{2}(b). \end{equation} (6.8)

    Consequently, multiplying both sides of (6.8) by (z-b)^{\delta_{j}}\mathrm{J}^{(\xi_j)_m, \lambda}_{(\delta_j)_m, \sigma}(\zeta(z-b)^{\xi_{j}}) , (z-b)\in\Omega , and integrating with respect to b from y_{1} to z , we get

    \begin{equation} Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[(\theta_{1}\psi_{1}+\theta_{2}\psi_{2})hl](z)\geq Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[\psi_{1}\psi_{2}h^{2}](z)+Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[\theta_{1}\theta_{2}l^{2}](z). \end{equation} (6.9)

    Besides, by the AM-GM (arithmetic mean-geometric mean) inequality, i.e., a_{1}+b_{1}\geq2\sqrt{a_{1}b_{1}} , \ a_{1}, b_{1}\in\mathbb{R}^{+} , we get

    \begin{equation} Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[(\theta_{1}\psi_{1}+\theta_{2}\psi_{2})hl](z)\geq2\sqrt{Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[\psi_{1}\psi_{2}h^{2}](z)\, Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[\theta_{1}\theta_{2}l^{2}](z)}, \end{equation} (6.10)

    and the statement of Eq (6.1) follows straightforwardly.

    Corollary 6.1. Let h and l be two integrable functions on [0, \infty) satisfying the inequality

    (R2) \ 0 < s\leq h(b)\leq S , \ 0 < k\leq l(b)\leq K \quad (b\in[y_{1}, \tau], \ z > y_{1}). (6.11)

    Then, for z > y_{1}, y_{1}\geq0 , \xi_j, \delta_j, \lambda \in\mathbb{C}, (j = 1, \cdots, m), \Re(\lambda) > 0, \Re(\delta_{j}) > -1, \sum^{m}_{j = 1} \Re(\xi_j) > \max\{0: \Re(\sigma)-1\}, \sigma > 0 and (z-b)\in \Omega , the following inequality holds:

    \begin{equation} \frac{Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[h^{2}](z)\, Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[l^{2}](z)}{\left(Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[hl](z)\right)^{2}}\leq\frac{1}{4}\left(\sqrt{\frac{SK}{sk}}+\sqrt{\frac{sk}{SK}}\right)^{2}. \end{equation} (6.12)
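    The bound (6.12) can also be illustrated numerically. The sketch below (our addition) specialises to m = 1 , uses a small \zeta so that the kernel stays positive on the integration range (which is what the weighted Pólya-Szegö argument needs), and takes h , l and the bounds s, S, k, K as arbitrary example choices.

```python
# Numerical illustration of Corollary 6.1 / Eq (6.12) for m = 1.
import math
from scipy.integrate import quad

def J(z, xi, delta, lam, sigma, terms=80):
    return sum(math.gamma(lam + sigma * s) / math.gamma(lam)
               / math.gamma(xi * s + delta + 1) * (-z) ** s / math.factorial(s)
               for s in range(terms))

xi, delta, lam, sigma, zeta = 0.8, 0.4, 1.2, 1.0, 0.1
y1, z = 0.0, 1.0
kernel = lambda b: (z - b) ** delta * J(zeta * (z - b) ** xi, xi, delta, lam, sigma)
op = lambda f: quad(lambda b: kernel(b) * f(b), y1, z)[0]     # left-sided operator at z

h = lambda b: 2.0 + math.sin(b)      # s <= h(b) <= S
l = lambda b: 1.0 + b ** 2           # k <= l(b) <= K
s, S, k, K = 2.0, 2.0 + math.sin(1.0), 1.0, 2.0

lhs = op(lambda b: h(b) ** 2) * op(lambda b: l(b) ** 2) / op(lambda b: h(b) * l(b)) ** 2
rhs = 0.25 * (math.sqrt(S * K / (s * k)) + math.sqrt(s * k / (S * K))) ** 2
assert lhs <= rhs
print(lhs, rhs)
```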

    Theorem 6.2. Let h and l be positive integrable functions on [y_{1}, \infty) . Suppose that there exist integrable functions \theta_{1}, \theta_{2}, \psi_{1} and \psi_{2} on [y_{1}, \infty) satisfying (R1) on [y_{1}, \infty) . Then, for z > y_{1}, y_{1}\geq0 , \xi_j, \delta_j, \lambda \in\mathbb{C}, (j = 1, \cdots, m), \Re(\lambda) > 0, \Re(\delta_{j}) > -1, \sum^{m}_{j = 1} \Re(\xi_j) > \max\{0: \Re(\sigma)-1\}, \sigma > 0 and (z-b), (\tau-z)\in \Omega , the following inequality holds:

    Œ(ξj,δj)mλ,σ,ζ;y1+[h2](z)Œ(ξj,δj)mλ,σ,ζ;y2[ψ1ψ2](z)Œ(ξj,δj)mλ,σ,ζ;y1+[θ1θ2](z)Œ(ξj,δj)mλ,σ,ζ;y2[l2](z)[Œ(ξj,δj)mλ,σ,ζ;y1+[θ1h](z)Œ(ξj,δj)mλ,σ,ζ;y2[ψ1h](z)+Œ(ξj,δj)mλ,σ,ζ;y1+[θ2h](z)Œ(ξj,δj)mλ,σ,ζ;y2[ψ2l](z)]214. (6.13)

    Proof. By condition (R1), it is clear that

    \begin{equation} \left(\frac{\theta_{2}(b)}{\psi_{1}(\alpha)}-\frac{h(b)}{l(\alpha)}\right)\geq0, \end{equation} (6.14)

    and

    \begin{equation} \left(\frac{h(b)}{l(\alpha)}-\frac{\theta_{1}(b)}{\psi_{2}(\alpha)}\right)\geq0. \end{equation} (6.15)

    These inequalities imply that

    \begin{equation} \left(\frac{\theta_{1}(b)}{\psi_{2}(\alpha)}+\frac{\theta_{2}(b)}{\psi_{1}(\alpha)}\right)\frac{h(b)}{l(\alpha)}\geq\frac{h^{2}(b)}{l^{2}(\alpha)}+\frac{\theta_{1}(b)\theta_{2}(b)}{\psi_{1}(\alpha)\psi_{2}(\alpha)}. \end{equation} (6.16)

    Multiplying both sides of Eq (6.16) by \psi_{1}(\alpha)\psi_{2}(\alpha)l^{2}(\alpha) , we have

    \begin{equation} \theta_{1}(b)h(b)\psi_{1}(\alpha)l(\alpha)+\theta_{2}(b)h(b)\psi_{2}(\alpha)l(\alpha)\geq\psi_{1}(\alpha)\psi_{2}(\alpha)h^{2}(b)+\theta_{1}(b)\theta_{2}(b)l^{2}(\alpha). \end{equation} (6.17)

    Hence, multiplying both sides of Eq (6.17) by

    \begin{equation} (z-b)^{\delta_{j}}\mathrm{J}^{(\xi_j)_m, \lambda}_{(\delta_j)_m, \sigma}(\zeta(z-b)^{\xi_{j}}), \qquad (\alpha-z)^{\delta_{j}}\mathrm{J}^{(\xi_j)_m, \lambda}_{(\delta_j)_m, \sigma}(\zeta(\alpha-z)^{\xi_{j}}), \end{equation} (6.18)

    and integrating twice with respect to b and \alpha from y_{1} to z and from z to y_{2} , respectively, we have

    Œ(ξj,δj)mλ,σ,ζ;y1+[θ1h](z)Œ(ξj,δj)mλ,σ,ζ;y2[ψ1l](z)+Œ(ξj,δj)mλ,σ,ζ;y1+[θ2h](z)Œ(ξj,δj)mλ,σ,ζ;y2[ψ2l](z)Œ(ξj,δj)mλ,σ,ζ;y1+[h2](z)Œ(ξj,δj)mλ,σ,ζ;y2[ψ1ψ2](z)Œ(ξj,δj)mλ,σ,ζ;y1+[θ1θ2](z)Œ(ξj,δj)mλ,σ,ζ;y2[l2](z). (6.19)

    Finally, we arrive at Eq (6.13) by applying the arithmetic-geometric mean inequality to the above inequality.

    Theorem 6.3. Let h and l be integrable functions on [y_{1}, \infty) . Suppose that there exist integrable functions \theta_{1}, \theta_{2}, \psi_{1} and \psi_{2} on [y_{1}, \infty) satisfying (R1) on [y_{1}, \infty) . Then, for z > y_{1}, y_{1}\geq0 , \xi_j, \delta_j, \lambda \in\mathbb{C}, (j = 1, \cdots, m), \Re(\lambda) > 0, \Re(\delta_{j}) > -1, \sum^{m}_{j = 1} \Re(\xi_j) > \max\{0: \Re(\sigma)-1\}, \sigma > 0 and (z-b), (\alpha-z)\in \Omega , the following inequality holds:

    \begin{equation} Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[h^{2}](z)\, Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{2}}^{-}}[l^{2}](z)\leq Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[(\theta_{2}hl)/\psi_{1}](z)\, Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{2}}^{-}}[(\psi_{2}hl)/\theta_{1}](z). \end{equation} (6.20)

    Proof. For any (z-b), (\alpha-z)\in\Omega , from Eq (6.2) we have

    zy1(zb)δjJ(ξj,δj)mλ,σ(ζ(zb)ξj)h2(b)dby1z(αz)ξjJ(ξj,δj)mλ,σ(ζ(αz)ξj)θ2(α)ψ1(α)h(α)l(α)dα,

    which implies

    \begin{equation} Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[h^{2}](z)\leq Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[(\theta_{2}hl)/\psi_{1}](z), \end{equation} (6.21)

    and analogously, by Eq (6.4), we get

    \begin{equation} Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{2}}^{-}}[l^{2}](z)\leq Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{2}}^{-}}[(\psi_{2}hl)/\theta_{1}](z), \end{equation} (6.22)

    hence, by multiplying Eq (6.21) and Eq (6.22), Eq (6.20) follows.

    Corollary 6.2. Let h and l be integrable functions on [y_{1}, \infty) satisfying (R2) . Then, for z > y_{1}, y_{1}\geq0 , \xi_j, \delta_j, \lambda \in\mathbb{C}, (j = 1, \cdots, m), \Re(\lambda) > 0, \Re(\delta_{j}) > -1, \sum^{m}_{j = 1} \Re(\xi)_j > max\{0: \Re(\sigma)-1\}, \sigma > 0 and (z-b), (\alpha-z)\in \Omega , we obtain

    \begin{equation} \frac{Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[h^{2}](z)Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{2}}^{-}}[l^{2}](z)}{Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[hl](z)Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{2}}^{-}}[hl](z)}\leq\frac{SK}{sk}. \end{equation} (6.23)

    In this section, Chebyshev type integral inequalities are established involving the fractional operator Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma}(z) and using the Pólya-Szegö fractional integral inequality of Theorem 6.1, in the form of a theorem; we then discuss its corollary.

    Theorem 7.1. Let h and l be integrable functions on [y_{1}, \infty) , and suppose that there exist integrable functions \theta_{1}, \theta_{2}, \psi_{1} and \psi_{2} on [y_{1}, \infty) satisfying (R1) . Then, for z > y_{1}, y_{1}\geq0 , \xi_j, \delta_j, \lambda \in\mathbb{C}, (j = 1, \cdots, m), \Re(\lambda) > 0, \Re(\delta_{j}) > -1, \sum^{m}_{j = 1} \Re(\xi_j) > \max\{0: \Re(\sigma)-1\}, \sigma > 0 and (z-b), (\alpha-z)\in \Omega , the following inequality holds:

    \begin{align} |Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[hl](z)Œ^{(\nu_j, \mu_j)_m}_{\lambda, \sigma, \zeta; {y_{2}}^{-}}[1](z)+Œ^{(\nu_j, \mu_j)_m}_{\lambda, \sigma, \zeta; {y_{2}}^{-}}[hl](z)Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[1](z)\\ -Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[h](z)Œ^{(\nu_j, \mu_j)_m}_{\lambda, \sigma, \zeta; {y_{2}}^{-}}[l](z)-Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[l](z)Œ^{(\nu_j, \mu_j)_m}_{\lambda, \sigma, \zeta; {y_{2}}^{-}}[h](z)|\\ \leq 2[G_{y_{1}, y_{2}}(h, \theta_{1}, \theta_{2})G_{y_{1}, y_{2}}(l, \psi_{1}, \psi_{2})]^{\frac{1}{2}}. \end{align} (7.1)

    where

    \begin{align} G_{y_{1}, y_{2}}(b, y, x)(z) = \frac{1}{8}\frac{[Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[(y+x)b](z)]^{2}}{Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[yx](z)}Œ^{(\nu_j, \mu_j)_m}_{\lambda, \sigma, \zeta; {y_{2}}^{-}}[1](z)\\ +\frac{1}{8}\frac{[Œ^{(\nu_j, \mu_j)_m}_{\lambda, \sigma, \zeta; {y_{2}}^{-}}[(y+x)b](z)]^{2}}{Œ^{(\mu_j, \nu_j)_m}_{\lambda, \sigma, \zeta; {y_{2}}^{-}}[yx](z)}Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[1](z)\\ -Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[b](z)Œ^{(\nu_j, \mu_j)_m}_{\lambda, \sigma, \zeta; {y_{2}}^{-}}[b](z). \end{align}

    Proof. For (b, \alpha)\in(y_{1}, z) \ (z > y_{1}) , we define A(b, \alpha) = (h(b)-h(\alpha))(l(b)-l(\alpha)) , which is the same as

    \begin{equation} A(b, \alpha) = h(b)l(b)+h(\alpha)l(\alpha)-h(b)l(\alpha)-h(\alpha)l(b). \end{equation} (7.2)

    Further, multiplying both sides of Eq (7.2) by

    \begin{equation} (z-b)^{\xi_{j}}\mathrm{J}^{(\xi_j, \delta_j)_m}_{\lambda, \sigma}(\zeta (z-b)^{\delta_{j}})(\alpha-z)^{\nu_{j}}\mathrm{J}^{(\mu_j)_m, \lambda}_{(\nu_j)_m, \sigma}(\zeta (\alpha-z)^{\mu_{j}}), \end{equation} (7.3)

    and integrating twice with respect to b and \alpha from y_{1} to z and from z to y_{2} , respectively, we get

    \begin{align} &\int^{z}_{y_{1}} \int^{y_{2}}_{z}(z-b)^{\xi_{j}}\mathrm{J}^{(\xi_j, \delta_j)_m}_{\lambda, \sigma}(\zeta (z-b)^{\delta_{j}})(\alpha-z)^{\nu_{j}}\mathrm{J}^{(\mu_j)_m, \lambda}_{(\nu_j)_m, \sigma}(\zeta (\alpha-z)^{\mu_{j}})A(b,\alpha)dbd\alpha\\ & = \int^{z}_{y_{1}}(z-b)^{\xi_{j}}\mathrm{J}^{(\xi_j, \delta_j)_m}_{\lambda, \sigma}(\zeta (z-b)^{\delta_{j}})h(b)l(b)db\int^{y_{2}}_{z}(\alpha-z)^{\nu_{j}}\mathrm{J}^{(\mu_j)_m, \lambda}_{(\nu_j)_m, \sigma}(\zeta (\alpha-z)^{\mu_{j}})d\alpha\\ &+\int^{z}_{y_{1}}(z-b)^{\xi_{j}}\mathrm{J}^{(\xi_j, \delta_j)_m}_{\lambda, \sigma}(\zeta (z-b)^{\delta_{j}})db\int^{y_{2}}_{z}(\alpha-z)^{\nu_{j}}\mathrm{J}^{(\mu_j)_m, \lambda}_{(\nu_j)_m, \sigma}(\zeta (\alpha-z)^{\mu_{j}})h(\alpha)l(\alpha)d\alpha\\ &-\int^{y_{1}}_{z}(z-b)^{\xi_{j}}\mathrm{J}^{(\xi_j, \delta_j)_m}_{\lambda, \sigma}(\zeta (z-b)^{\delta_{j}})h(b)db\int^{y_{2}}_{z}(\alpha-z)^{\nu_{j}}\mathrm{J}^{(\mu_j)_m, \lambda}_{(\nu_j)_m, \sigma}(\zeta (\alpha-z)^{\mu_{j}})h(\alpha)d\alpha\\ &-\int^{y_{1}}_{z}(z-b)^{\xi_{j}}\mathrm{J}^{(\xi_j, \delta_j)_m}_{\lambda, \sigma}(\zeta (z-b)^{\delta_{j}})l(b)db\int^{y_{2}}_{z}(\alpha-z)^{\nu_{j}}\mathrm{J}^{(\mu_j)_m, \lambda}_{(\nu_j)_m, \sigma}(\zeta (\alpha-z)^{\mu_{j}})h(\alpha)d\alpha\\ & = Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[hl](z)Œ^{(\nu_j, \mu_j)_m}_{\lambda, \sigma, \zeta; {y_{2}}^{-}}[1](z)+Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[1](z)Œ^{(\nu_j, \mu_j)_m}_{\lambda, \sigma, \zeta; {y_{2}}^{-}}[hl](z)\\ &-Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[h](z)Œ^{(\nu_j, \mu_j)_m}_{\lambda, \sigma, \zeta; {y_{2}}^{-}}[l](z)-Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[l](z)Œ^{(\nu_j, \mu_j)_m}_{\lambda, \sigma, \zeta; {y_{2}}^{-}}[h](z). \end{align} (7.4)

    Now, applying the Cauchy-Schwarz inequality for integrals, we get

    \begin{multline} \bigg|\int^{z}_{y_{1}}\int^{y_{2}}_{z}(z-b)^{\xi_{j}}\mathrm{J}^{(\xi_j)_m, \lambda}_{(\delta_j)_m, \sigma}(\zeta (z-b)^{\delta_{j}})(\alpha-z)^{\nu_{j}}\mathrm{J}^{(\mu_j)_m, \lambda}_{(\nu_j)_m, \sigma}(\zeta (\alpha-z)^{\mu_{j}})A(b, \alpha)dbd\alpha\bigg|\\ \leq\bigg(\int^{z}_{y_{1}}\int^{y_{2}}_{z}(z-b)^{\xi_{j}}\mathrm{J}^{(\xi_j)_m, \lambda}_{(\delta_j)_m, \sigma}(\zeta (z-b)^{\delta_{j}})(\alpha-z)^{\nu_{j}}\mathrm{J}^{(\mu_j)_m, \lambda}_{(\nu_j)_m, \sigma}(\zeta (\alpha-z)^{\mu_{j}})\alpha[h(b)]^{2}dbd\alpha\\ +\int^{z}_{y_{1}}\int^{y_{2}}_{z}(z-b)^{\xi_{j}}\mathrm{J}^{(\xi_j)_m, \lambda}_{(\delta_j)_m, \sigma}(\zeta (z-b)^{\delta_{j}})(\alpha-z)^{\nu_{j}}\mathrm{J}^{(\mu_j)_m, \lambda}_{(\nu_j)_m, \sigma}(\zeta (\alpha-z)^{\mu_{j}})[h(\alpha)]^{2}dbd\alpha\\ -2\int^{z}_{y_{1}}\int^{y_{2}}_{z}(z-b)^{\xi_{j}}\mathrm{J}^{(\xi_j)_m, \lambda}_{(\delta_j)_m, \sigma}(\zeta (z-b)^{\delta_{j}})(\alpha-z)^{\nu_{j}}\mathrm{J}^{(\mu_j)_m, \lambda}_{(\nu_j)_m, \sigma}(\zeta (\alpha-z)^{\mu_{j}})h(b)h(\alpha)dbd\alpha \bigg)^{1/2}\\ \times\bigg(\int^{z}_{y_{1}}\int^{y_{2}}_{z}(z-b)^{\xi_{j}}\mathrm{J}^{(\xi_j)_m, \lambda}_{(\delta_j)_m, \sigma}(\zeta (z-b)^{\delta_{j}})(\alpha-z)^{\nu_{j}}\mathrm{J}^{(\mu_j)_m, \lambda}_{(\nu_j)_m, \sigma}(\zeta (\alpha-z)^{\mu_{j}})\alpha[l(b)]^{2}dbd\alpha\\ +\int^{z}_{y_{1}}\int^{y_{2}}_{z}(z-b)^{\xi_{j}}\mathrm{J}^{(\xi_j)_m, \lambda}_{(\delta_j)_m, \sigma}(\zeta (z-b)^{\delta_{j}})(\alpha-z)^{\nu_{j}}\mathrm{J}^{(\mu_j)_m, \lambda}_{(\nu_j)_m, \sigma}(\zeta (\alpha-z)^{\mu_{j}})[l(\alpha)]^{2}dbd\alpha\\ -2\int^{z}_{y_{1}}\int^{y_{2}}_{z}(z-b)^{\xi_{j}}\mathrm{J}^{(\xi_j)_m, \lambda}_{(\delta_j)_m, \sigma}(\zeta (z-b)^{\delta_{j}})(\alpha-z)^{\nu_{j}}\mathrm{J}^{(\mu_j)_m, \lambda}_{(\nu_j)_m, \sigma}(\zeta (\alpha-z)^{\mu_{j}})l(b)l(\alpha)dbd\alpha \bigg)^{1/2}, \end{multline} (7.5)

    it follows that

    \begin{multline} \bigg|\int^{z}_{y_{1}}\int^{y_{2}}_{z}(z-b)^{\xi_{j}}\mathrm{J}^{(\xi_j)_m, \lambda}_{(\delta_j)_m, \sigma}(\zeta (z-b)^{\delta_{j}})(\alpha-z)^{\nu_{j}}\mathrm{J}^{(\mu_j)_m, \lambda}_{(\nu_j)_m, \sigma}(\zeta (\alpha-z)^{\mu_{j}})A(b, \alpha)dbd\alpha\bigg|\\ \leq2\{1/2Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[h^{2}](z)Œ^{(\nu_j, \mu_j)_m}_{\lambda, \sigma, \zeta; {y_{2}}^{-}}[1](z)+1/2Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[1](z)Œ^{(\nu_j, \mu_j)_m}_{\lambda, \sigma, \zeta; {y_{2}}^{-}}[h^{2}](z)\\ -Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[h](z)Œ^{(\nu_j, \mu_j)_m}_{\lambda, \sigma, \zeta; {y_{2}}^{-}}[h](z)\}^{1/2}\times\{1/2Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[l^{2}](z)Œ^{(\nu_j, \mu_j)_m}_{\lambda, \sigma, \zeta; {y_{2}}^{-}}[1](z)\\+1/2Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[1](z)Œ^{(\nu_j, \mu_j)_m}_{\lambda, \sigma, \zeta; {y_{2}}^{-}}[l^{2}](z) -Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[l](z)Œ^{(\nu_j, \mu_j)_m}_{\lambda, \sigma, \zeta; {y_{2}}^{-}}[l](z)\}^{1/2}. \end{multline} (7.6)

    By applying inequality (6.1) for \psi_{1}(z) = \psi_{2}(z) = l(z) = 1 , we get, for any \mathrm{J}^{(\xi_j, \delta_j)_m}_{\lambda, \sigma}(z)^{\delta_{j}}\in \Omega ,

    \begin{align} Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}} [h^{2}](z)\leq\frac{1}{4}\frac{[Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; y_{1} ^{+}} [(\theta_{1}+\theta_{2})h](z)]^{2}}{Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; y_{1} ^{+}} [(\theta_{1}\theta_{2})](z)}, \end{align} (7.7)

    this implies

    \begin{align} &1/2Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[h^{2}](z)Œ^{(\nu_j, \mu_j)_m}_{\lambda, \sigma, \zeta; {y_{2}}^{-}}[1](z)+1/2Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[1](z)Œ^{(\nu_j, \mu_j)_m}_{\lambda, \sigma, \zeta; {y_{2}}^{-}}[h^{2}](z)\\ &-Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[h](z)Œ^{(\nu_j, \mu_j)_m}_{\lambda, \sigma, \zeta; {y_{2}}^{-}}[h](z)\leq\frac{1}{8}\frac{[Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; y_{1} ^{+}} [(\theta_{1}+\theta_{2})h](z)]^{2}}{Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; y_{1} ^{+}} [(\theta_{1}\theta_{2})](z)}Œ^{(\nu_j, \mu_j)_m}_{\lambda, \sigma, \zeta; {y_{2}}^{-}}[1](z)\\ &+\frac{1}{8}Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[1](z)\frac{[Œ^{(\nu_j, \mu_j)_m}_{\lambda, \sigma, \zeta; y_{1} ^{+}} [(\theta_{1}+\theta_{2})h](z)]^{2}}{Œ^{(\nu_j, \mu_j)_m}_{\lambda, \sigma, \zeta; y_{1} ^{+}} [(\theta_{1}\theta_{2})](z)} -Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[h](z)Œ^{(\nu_j, \mu_j)_m}_{\lambda, \sigma, \zeta; {y_{2}}^{-}}[h](z)\\ & = G_{y_{1}, y_{2}}(h, \theta_{1}, \theta_{2}). \end{align} (7.8)

    Analogously, when \theta_{1}(z) = \theta_{2}(z) = h(z) = 1 , according to inequality (6.1) we get

    \begin{align} &1/2Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[l^{2}](z)Œ^{(\nu_j, \mu_j)_m}_{\lambda, \sigma, \zeta; {y_{2}}^{-}}[1](z)+1/2Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[1](z)Œ^{(\nu_j, \mu_j)_m}_{\lambda, \sigma, \zeta; {y_{2}}^{-}}[l^{2}](z)\\ &-Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[l](z)Œ^{(\nu_j, \mu_j)_m}_{\lambda, \sigma, \zeta; {y_{2}}^{-}}[l](x)\leq\frac{1}{8}\frac{[Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; y_{1} ^{+}} [(\psi_{1}+\psi_{2})l](z)]^{2}}{Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; y_{1} ^{+}} [(\psi_{1}\psi_{2})](z)}Œ^{(\nu_j, \mu_j)_m}_{\lambda, \sigma, \zeta; {y_{2}}^{-}}[1](z)\\ &+\frac{1}{8}Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[1](z)\frac{[Œ^{(\nu_j, \mu_j)_m}_{\lambda, \sigma, \zeta; y_{1} ^{+}} [(\psi_{1}+\psi_{2})l](z)]^{2}}{Œ^{(\nu_j, \mu_j)_m}_{\lambda, \sigma, \zeta; y_{1} ^{+}} [(\psi_{1}\psi_{2})](z)} -Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[l](z)Œ^{(\nu_j, \mu_j)_m}_{\lambda, \sigma, \zeta; {y_{2}}^{-}}[l](z)\\ & = G_{y_{1}, y_{2}}(l, \psi_{1}, \psi_{2}). \end{align} (7.9)

    Thus, by combining Eqs (7.4), (7.6), (7.8) and (7.9), we get the desired inequality (7.1).

    Corollary 7.1. Let h and l be integrable functions on [y_{1}, \infty) , suppose that there exist integrable functions \theta_{1}, \theta_{2}, \psi_{1} and \psi_{2} on [y_{1}, \infty) satisfying (R1) . Then, for z > y_{1}, y_{1}\geq0 , \xi_j, \delta_j, \lambda \in\mathbb{C}, (j = 1, \cdots, m), \Re(\lambda) > 0, \Re(\delta_{j}) > -1, \sum^{m}_{j = 1} \Re(\xi)_j > max\{0: \Re(\sigma)-1\}, \sigma > 0 and (z-b), (\alpha-z)\in \Omega the following inequalities hold:

    \begin{align} |Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[hl](z)Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[1](z) -Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[h](z)Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[l](z)|\\ \leq [G_{y_{1}, y_{2}}(h, \theta_{1}, \theta_{2})G_{y_{1}, y_{1}}(l, \theta_{1}, \theta_{2})]^{\frac{1}{2}}, \end{align}

    where

    \begin{align} G_{y_{1}, y_{1}}(b, y, x)(z) = \frac{1}{4}\frac{[Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[(y+x)b](z)]^{2}}{Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[yx](z)}Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[1]-(Œ^{(\xi_j, \delta_j)_m}_{\lambda, \sigma, \zeta; {y_{1}}^{+}}[b](z))^{2}. \end{align}

    This article analyzed the generalized fractional integral operator having a nonsingular function (the generalized multi-index Bessel function) as its kernel and developed new versions of several inequalities. We estimated the Hermite-Hadamard type Mercer inequality, inequalities for exponentially (s-m)-preinvex functions, the Pólya-Szegö type integral inequality and the Chebyshev type inequality with the generalized fractional integral operator having a nonsingular function as its kernel. Introducing new versions of these inequalities for the newly constructed operators strengthens the underlying ideas and results.

    The authors declare that they have no competing interests.



    [1] Aldrich JH, Nelson FD (1984) Quantitative Applications in the Social Sciences: Linear Probability, Logit, and Probit Models, Thousand Oaks, CA: SAGE Publications. https://doi.org/10.4135/9781412984744
    [2] Alexander VE, Clifford CC (1996) Categorical Variables in Developmental Research: Methods of Analysis, Elsevier. https://doi.org/10.1016/B978-012724965-0/50003-1
    [3] Arya S, Eckel C, Wichman C (2013) Anatomy of the Credit Score. J Econ Behav Organ 95: 175–185. https://doi.org/10.1016/j.jebo.2011.05.005 doi: 10.1016/j.jebo.2011.05.005
    [4] Baesens B, Van Gestel T, Viaene S, et al. (2003) Benchmarking state-of-the-art classification algorithms for credit scoring. J Oper Res Soc 54: 627–635. https://doi.org/10.1057/palgrave.jors.2601545 doi: 10.1057/palgrave.jors.2601545
    [5] Baesens B, Gestel TV, Stepanova M, et al. (2004) Neural Network Survival Analysis for Personal Loan Data. J Oper Res Soc 56: 1089–1098. https://doi.org/10.1057/palgrave.jors.2601990 doi: 10.1057/palgrave.jors.2601990
    [6] Bishop CM (2006) Pattern Recognition and Machine Learning, Springer. https://doi.org/10.1007/978-0-387-45528-0_5
    [7] Bolton C (2010) Logistic Regression and its Application in Credit Scoring, University of Pretoria. Available from: http://hdl.handle.net/2263/27333.
    [8] Breiman L (1996) Bagging Predictors. Mach Learn 24: 123–140. https://doi.org/10.1007/BF00058655 doi: 10.1007/BF00058655
    [9] Breiman L, Friedman J, Stone CJ, et al. (1984) Classification and Regression Trees, Taylor & Francis. https://doi.org/10.1201/9781315139470
    [10] Brown M, Grundy M, Lin D, et al. (1999) Knowledge-Base Analysis of Microarray Gene Expression Data Using Support Vector Machines, University of California in Santa Cruz. https://doi.org/10.1073/pnas.97.1.262
    [11] Byanjankar A, Heikkilä M, Mezei J (2015) Predicting credit risk in peer-to-peer lending: A neural network approach. In 2015 IEEE symposium series on computational intelligence, IEEE, 719–725. https://doi.org/10.1109/SSCI.2015.109
    [12] Cao A, He H, Chen Z, et al. (2018) Performance evaluation of machine learning approaches for credit scoring. Int J Econ Financ Manage Sci 6: 255–260. https://doi.org/10.11648/j.ijefm.20180606.12 doi: 10.11648/j.ijefm.20180606.12
    [13] Chen S, Wang Q, Liu S (2019) Credit risk prediction in peer-to-peer lending with ensemble learning framework. In 2019 Chinese Control And Decision Conference (CCDC), IEEE, 4373–4377. https://doi.org/10.1109/CCDC.2019.8832412
    [14] Chen TQ, Guestrin C (2016) XGBoost: A Scalable Tree Boosting System. KDD'16 Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794. https://doi.org/10.1145/2939672.2939785
    [15] Cortes C, Vapnik V (1995) Support-vector networks. Mach Learn 20: 273–297. https://doi.org/10.1007/BF00994018 doi: 10.1007/BF00994018
    [16] Crouhy M, Galai D, Mark R (2014) The Essentials of Risk Management, 2nd Edition. McGraw-Hill. Available from: https://www.mhprofessional.com/9780071818513-usa-the-essentials-of-risk-management-second-edition-group.
    [17] Cybenko G (1989) Approximation by Superpositions of a Sigmoidal Function Mathematics of Control. Signals Syst 2: 303–314. https://doi.org/10.1007/BF02551274 doi: 10.1007/BF02551274
    [18] Duan J (2019) Financial system modeling using deep neural networks (DNNs) for effective risk assessment and prediction. J Franklin Inst 356: 4716–4731. https://doi.org/10.1016/j.jfranklin.2019.01.046 doi: 10.1016/j.jfranklin.2019.01.046
    [19] Duchi J, Hazan E, Singer Y (2011) Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. J Mach Learn Rese 12: 2121–2159. https://doi.org/10.5555/1953048.2021068 doi: 10.5555/1953048.2021068
    [20] Elrahman SMA, Abraham A (2013) A Review of Class Imbalance Problem. J Network Innov Comput 1: 332–340. http://ias04.softcomputing.net/jnic2.pdf
    [21] Everett CR (2015) Group Membership, Relationship Banking and Loan Default Risk: The Case of Online Social Lending. Bank Financ Rev 7: 15–54. https://doi.org/10.2139/ssrn.1114428
    [22] Friedman JH (2001) Greedy Function Approximation: A Gradient Boosting Machine. Ann Stat 29: 1189–1232. https://doi.org/10.1214/aos/1013203451
    [23] Genuer R, Poggi JM, Tuleau-Malot C (2010) Variable Selection Using Random Forests. Pattern Recogn Lett 31: 2225–2236. https://doi.org/10.1016/j.patrec.2010.03.014
    [24] Glorot X, Bengio Y (2010) Understanding the Difficulty of Training Deep Feedforward Neural Networks. J Mach Learn Res 9: 249–256. Available from: http://proceedings.mlr.press/v9/glorot10a.html.
    [25] Guyon I, Elisseeff A (2003) An Introduction to Variable and Feature Selection. J Mach Learn Res 3: 1157–1182. Available from: https://www.jmlr.org/papers/v3/guyon03a.html.
    [26] Hastie T, Tibshirani R, Friedman JH (2009) The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Springer. https://doi.org/10.1007/978-0-387-84858-7
    [27] He KM, Zhang XY, Ren SQ, et al. (2015) Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. IEEE International Conference on Computer Vision. https://doi.org/10.1109/ICCV.2015.123
    [28] Ho TK (1995) Random Decision Forests, Proceedings of the 3rd International Conference on Document Analysis and Recognition, 278–282. https://doi.org/10.1109/ICDAR.1995.598994
    [29] Ho TK (1998) The Random Subspace Method for Constructing Decision Forests. IEEE T Pattern Anal 20: 832–844. https://doi.org/10.1109/34.709601
    [30] Hochreiter S, Bengio Y, Frasconi P, et al. (2001) Gradient Flow in Recurrent Nets: The Difficulty of Learning Long-Term Dependencies. In A Field Guide to Dynamical Recurrent Networks, IEEE, 237–243. https://doi.org/10.1109/9780470544037.ch14
    [31] Hsu CW, Chang CC, Lin CJ (2003) A Practical Guide to Support Vector Classification. National Taiwan University, 1–12. Available from: https://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf.
    [32] Ioffe S, Szegedy C (2015) Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. International Conference on Machine Learning, 448–456. https://doi.org/10.48550/arXiv.1502.03167
    [33] Iyer R, Khwaja AI, Luttmer EF, et al. (2009) Screening in New Credit Markets: Can Individual Lenders Infer Borrower Creditworthiness in Peer-to-Peer Lending? AFA 2011 Denver Meetings Paper. https://doi.org/10.2139/ssrn.1570115
    [34] Kang H (2013) The Prevention and Handling of the Missing Data. Korean J Anesthesiol 64: 402–406. https://doi.org/10.4097/kjae.2013.64.5.402
    [35] Ke GL, Meng Q, Finley T, et al. (2017) LightGBM: A Highly Efficient Gradient Boosting Decision Tree, Neural Information Processing Systems, 3149–3157. Available from: https://proceedings.neurips.cc/paper/2017/file/6449f44a102fde848669bdd9eb6b76fa-Paper.pdf.
    [36] Keogh E, Mueen A (2017) Curse of Dimensionality. Encyclopedia of Machine Learning and Data Mining, Boston: Springer. https://doi.org/10.1007/978-1-4899-7687-1_192
    [37] Kingma DP, Ba JL (2015) Adam: A Method for Stochastic Optimization. International Conference on Learning Representations, 1–13. https://doi.org/10.48550/arXiv.1412.6980
    [38] Krizhevsky A, Sutskever I, Hinton GE (2012) ImageNet Classification with Deep Convolutional Neural Networks, Advances in Neural Information Processing Systems, 1097–1105. https://doi.org/10.1145/3065386
    [39] Lantz B (2013) Machine Learning with R. Packt Publishing Limited. Available from: https://edu.kpfu.ru/pluginfile.php/278552/mod_resource/content/1/MachineLearningR__Brett_Lantz.pdf.
    [40] Li LH, Sharma AK, Ahmad R, et al. (2021) Predicting the Default Borrowers in P2P Platform Using Machine Learning Models. In International Conference on Artificial Intelligence and Sustainable Computing. https://doi.org/10.1007/978-3-030-82322-1_20
    [41] Lin HT, Lin CJ (2003) A Study on Sigmoid Kernels for SVM and the Training of Non-PSD Kernels by SMO-type Methods. National Taiwan University. Available from: https://www.csie.ntu.edu.tw/~cjlin/papers/tanh.pdf.
    [42] Lu L, Shin YJ, Su YH, et al. (2019) Dying ReLU and Initialization: Theory and Numerical Examples. arXiv preprint arXiv: 1903.06733. https://doi.org/10.4208/cicp.OA-2020-0165
    [43] Maas AL, Hannun AY, Ng AY (2013) Rectifier Nonlinearities Improve Neural Network Acoustic Models. ICML Workshop on Deep Learning for Audio, Speech, and Language Processing. Available from: https://ai.stanford.edu/~amaas/papers/relu_hybrid_icml2013_final.pdf.
    [44] Madasamy K, Ramaswami M (2017) Data Imbalance and Classifiers: Impact and Solutions from a Big Data Perspective. Int J Comput Intell Res 13: 2267–2281. Available from: https://www.ripublication.com/ijcir17/ijcirv13n9_09.pdf.
    [45] McCulloch WS, Pitts W (1943) A Logical Calculus of the Ideas Immanent in Nervous Activity. Bull Math Biophys 5: 115–133. https://doi.org/10.2307/2268029
    [46] Mester LJ (1997) What's the Point of Credit Scoring? Bus Rev 3: 3–16. Available from: https://www.philadelphiafed.org/-/media/frbp/assets/economy/articles/business-review/1997/september-october/brso97lm.pdf.
    [47] Mijwel MM (2018) Artificial Neural Networks Advantages and Disadvantages. Available from: https://www.linkedin.com/pulse/artificial-neural-networks-advantages-disadvantages-maad-m-mijwel/.
    [48] Mills KG, McCarthy B (2016) The State of Small Business Lending: Innovation and Technology and the Implications for Regulation. HBS Working Paper No. 17-042. https://doi.org/10.2139/ssrn.2877201
    [49] Mountcastle VB (1957) Modality and Topographic Properties of Single Neurons of Cat's Somatic Sensory Cortex. J Neurophysiol 20: 408–434. https://doi.org/10.1152/jn.1957.20.4.408
    [50] Ohlson JA (1980) Financial Ratios and the Probabilistic Prediction of Bankruptcy. J Account Res 18: 109–131. https://doi.org/10.2307/2490395
    [51] Patro SGK, Sahu KK (2015) Normalization: A Preprocessing Stage. https://doi.org/10.17148/IARJSET.2015.2305
    [52] Pontil M, Verri A (1998) Support Vector Machines for 3D Object Recognition. IEEE T Pattern Anal 20: 637–646. https://doi.org/10.1109/34.683777
    [53] Qian N (1999) On the Momentum Term in Gradient Descent Learning Algorithms. Neural Networks 12: 145–151. https://doi.org/10.1016/S0893-6080(98)00116-6
    [54] Quinlan JR (1987) Simplifying Decision Trees. Int J Man-Mach Stud 27: 221–234. https://doi.org/10.1016/S0020-7373(87)80053-6
    [55] Quinlan JR (1992) C4.5: Programs for Machine Learning. San Mateo: Morgan Kaufmann Publishers Inc. Available from: https://www.elsevier.com/books/c45/quinlan/978-0-08-050058-4.
    [56] Rajan U, Seru A, Vig V (2015) The Failure of Models that Predict Failure: Distance, Incentives, and Defaults. J Financ Econ 115: 237–260. https://doi.org/10.1016/j.jfineco.2014.09.012
    [57] Rosenblatt F (1958) The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain. Psychol Rev 65: 386–408. https://doi.org/10.1037/h0042519
    [58] Ruder S (2017) An Overview of Gradient Descent Optimization Algorithms. arXiv preprint arXiv: 1609.04747. https://doi.org/10.48550/arXiv.1609.04747
    [59] Rumelhart DE, Hinton GE, Williams RJ (1986) Learning Representations by Back-Propagating Errors. Nature 323: 533–536. https://doi.org/10.1038/323533a0
    [60] Samitsu A (2017) The Structure of P2P Lending and Legal Arrangements: Focusing on P2P Lending Regulation in the UK. IMES Discussion Paper Series, No. 17-J-3. Available from: https://www.boj.or.jp/en/research/wps_rev/lab/lab17e06.htm/.
    [61] Serrano-Cinca C, Gutiérrez-Nieto B, López-Palacios L (2015) Determinants of Default in P2P Lending. PLoS One 10: e0139427. https://doi.org/10.1371/journal.pone.0139427
    [62] Serrano-Cinca C, Gutiérrez-Nieto B (2016) The Use of Profit Scoring as an Alternative to Credit Scoring Systems in Peer-to-Peer (P2P) Lending. Decis Support Syst 89: 113–122. https://doi.org/10.1016/j.dss.2016.06.014
    [63] Shannon C (1948) A Mathematical Theory of Communication. Bell Syst Tech J 27: 379–423. https://doi.org/10.1002/j.1538-7305.1948.tb01338.x
    [64] Shelke MS, Deshmukh PR, Shandilya VK (2017) A Review on Imbalanced Data Handling Using Undersampling and Oversampling Technique. Int J Recent Trends Eng Res. https://doi.org/10.23883/IJRTER.2017.3168.0UWXM
    [65] Singh S, Gupta P (2014) Comparative Study ID3, CART and C4.5 Decision Tree Algorithm: A Survey. Int J Adv Inf Sci Technol (IJAIST) 27: 97–103. https://doi.org/10.15693/ijaist/2014.v3i7.47-52
    [66] Srivastava N, Hinton G, Krizhevsky A, et al. (2014) Dropout: A Simple Way to Prevent Neural Networks from Overfitting. J Mach Learn Res 15: 1929–1958. Available from: https://jmlr.org/papers/v15/srivastava14a.html.
    [67] Thomas LC (2000) A Survey of Credit and Behavioural Scoring: Forecasting Financial Risk of Lending to Consumers. Int J Forecast 16: 149–172. https://doi.org/10.1016/S0169-2070(00)00034-0
    [68] Tibshirani R (1996) Regression Shrinkage and Selection via the Lasso. J R Stat Soc Ser B (Methodological) 58: 267–288. https://doi.org/10.1111/j.2517-6161.1996.tb02080.x
    [69] Tieleman T, Hinton G (2012) Lecture 6.5 - RMSProp, COURSERA: Neural Networks for Machine Learning.
    [70] Verified Market Research (2021) Global Peer to Peer (P2P) Lending Market Size by Type, by End User, by Geographic Scope and Forecast. Available from: https://www.verifiedmarketresearch.com/product/peer-to-peer-p2p-lending-market/.
    [71] Wang Z, Cui P, Li FT, et al. (2014) A Data-Driven Study of Image Feature Extraction and Fusion. Inf Sci 281: 536–558. https://doi.org/10.1016/j.ins.2014.02.030
© 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0).