
Refined matrix completion for spectrum estimation of heart rate variability


  • Heart rate variability (HRV) is an important metric in cardiovascular health monitoring. Spectral analysis of HRV provides essential insights into the functioning of the cardiac autonomic nervous system. However, data artefacts could degrade signal quality, potentially leading to unreliable assessments of cardiac activities. In this study, we introduced a novel approach for estimating uncertainties in HRV spectrum based on matrix completion. The proposed method utilises the low-rank characteristic of HRV spectrum matrix to efficiently estimate data uncertainties. In addition, we developed a refined matrix completion technique to enhance the estimation accuracy and computational cost. Benchmarking on five public datasets, our model shows effectiveness and reliability in estimating uncertainties in HRV spectrum, and has superior performance against five deep learning models. The results underscore the potential of our developed matrix completion-based statistical machine learning model in providing reliable HRV spectrum uncertainty estimation.

    Citation: Lei Lu, Tingting Zhu, Ying Tan, Jiandong Zhou, Jenny Yang, Lei Clifton, Yuan-Ting Zhang, David A. Clifton. Refined matrix completion for spectrum estimation of heart rate variability[J]. Mathematical Biosciences and Engineering, 2024, 21(8): 6758-6782. doi: 10.3934/mbe.2024296




    Inequalities are essential and instrumental in dealing with several complex mathematical quantities that appear in diverse domains of the physical sciences. They have been investigated from multiple aspects, including expanding the applicable domain, eliminating the limitations of already proved results, and utilizing various approaches from functional analysis, generalized calculus, and convex analysis. Originally, they were pivotal to acquiring bounds for different mappings and integrals, the uniqueness and stability of solutions, error analysis of quadrature algorithms, information theory, etc. Based on these factors, this theory has grown exponentially through convex analysis. Additionally, the impact of convexity on inequalities is exemplary, particularly because several inequalities can be derived from the premise of convexity. This suggests that one of the main motives for studying classical inequalities is to characterize convexity and its generalizations. One of the notable classes of convexity, depending upon quadratic support, is known as strong convexity, which generalizes the classical concepts and is widely applied to obtain novel refinements of already proven results. This class has inspired the development of several new mapping classes in the literature. For comprehensive details on generalizations of strongly convex mappings, consult [1,2,3,4,5,6]. In 2007, Abramovich et al. [7] introduced the related concept of a super-quadratic mapping, built from translations of the mapping itself and a support line. It is defined as:

    A mapping \Psi:[0, \infty)\rightarrow \mathbb{R} is said to be super-quadratic if, for every \mu\geq 0 , there exists a constant C(\mu)\in \mathbb{R} such that

    \begin{align*} \Psi(\mu_{1})\geq \Psi(\mu)+C(\mu)(\mu_{1}-\mu)+\Psi(|\mu_{1}-\mu|), \quad \forall\ \mu_{1}\geq 0. \end{align*}

    It can also be interpreted as:

    Definition 1.1 ([7]). A mapping \Psi:[0, \infty)\rightarrow \mathbb{R} is called super-quadratic if, and only if,

    \begin{align} \Psi((1-\varphi)\mu+\varphi y)\leq (1-\varphi)\Psi(\mu)+\varphi \Psi(y)-\varphi \Psi((1-\varphi)|\mu-y|)-(1-\varphi)\Psi(\varphi|\mu-y|), \end{align} (1.1)

    holds for all \mu, y\geq 0 and 0\leq \varphi\leq 1 .
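    As a quick sanity check of Definition 1.1 (our own illustration, not part of [7]), the following Python sketch verifies inequality (1.1) numerically for the prototypical super-quadratic mapping \Psi(\mu) = \mu^{3} ; the grid sizes and tolerance are arbitrary choices made for illustration.

```python
import itertools
import numpy as np

def psi(u):
    # Prototypical super-quadratic mapping on [0, infinity); u**p is
    # super-quadratic for p >= 2 (p = 2 gives equality in (1.1)).
    return u ** 3

def rhs(mu, y, phi):
    # Right-hand side of inequality (1.1).
    return ((1 - phi) * psi(mu) + phi * psi(y)
            - phi * psi((1 - phi) * abs(mu - y))
            - (1 - phi) * psi(phi * abs(mu - y)))

grid = np.linspace(0.0, 5.0, 21)
phis = np.linspace(0.0, 1.0, 11)
violations = [
    (mu, y, phi)
    for mu, y, phi in itertools.product(grid, grid, phis)
    if psi((1 - phi) * mu + phi * y) > rhs(mu, y, phi) + 1e-9
]
print("violations of (1.1):", len(violations))  # expected: 0
```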

    Here, we provide some instrumental results to discuss super-quadratic mappings.

    Lemma 1.1 ([7]). If \Psi:[0, \infty)\rightarrow \mathbb{R} is a super-quadratic mapping, then

    (1) \Psi(0)\leq 0 ,

    (2) C(\mu) = \Psi'(\mu) whenever \Psi is differentiable and \Psi(0) = \Psi'(0) = 0 , for all \mu\geq 0 ,

    (3) if \Psi(\mu)\geq 0 for all \mu\geq 0 , then \Psi is convex and \Psi(0) = \Psi'(0) = 0 .

    Kian and his coauthors [8,9] came up with the idea of operator super-quadratic mappings and Jensen's kinds of inequalities that are related to them. Oguntuase and Persson [10] discussed Hardy-like inequalities utilizing the notion of super-quadratic mappings. Study these additional papers for more comprehensive research on super-quadraticity, [11,12,13,14,15].

    Varosanec [16] proposed the unified class of \hslash_{\circ} -convexity through a control mapping and provided new insight for research in this field. Throughout the investigation, let \hslash_{\circ}:(0, 1)\rightarrow \mathbb{R} be a mapping such that \hslash_{\circ}\geq 0 and \hslash_{\circ}\not\equiv 0 .

    Definition 1.2 ([16]). A mapping \Psi:[s_{3}, s_{4}]\rightarrow \mathbb{R} is considered to be \hslash_{\circ} -convex if

    \begin{align*} \Psi((1-\varphi)\mu+\varphi y)\leq \hslash_{\circ}(1-\varphi)\Psi(\mu)+\hslash_{\circ}(\varphi)\Psi(y). \end{align*}

    Inspired by the idea presented in [16], Alomari and Chesneau [17] developed a general class of super-quadratic mappings and investigated some of their essential properties, defined as:

    Definition 1.3 ([17]). A mapping \Psi:[s_{3}, s_{4}]\rightarrow \mathbb{R} is regarded as \hslash_{\circ} -super-quadratic if

    \begin{align*} \Psi((1-\varphi)\mu+\varphi y)\leq \hslash_{\circ}(\varphi)[\Psi(y)-\Psi((1-\varphi)|y-\mu|)]+\hslash_{\circ}(1-\varphi)[\Psi(\mu)-\Psi(\varphi|\mu-y|)]. \end{align*}

    Lemma 1.2 ([17]). Suppose \Psi:[s_{3}, s_{4}]\rightarrow \mathbb{R} is an \hslash_{\circ} -super-quadratic mapping, then

    (1) \Psi(0)\leq 0 ,

    (2) if \Psi(\mu)\geq 0 for all \mu\geq 0 , then \Psi is \hslash_{\circ} -convex and \Psi(0) = \Psi'(0) = 0 .

    Jensen's inequality for this class of mappings is given as follows.

    Theorem 1.1 ([17]). Suppose \Psi:[s_{3}, s_{4}]\rightarrow \mathbb{R} is an \hslash_{\circ} -super-quadratic mapping, then

    \begin{align*} \Psi\left(\frac{1}{\mathcal{C}_{\vartheta}}\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}\right)\leq \sum\limits_{\nu = 1}^{\vartheta}\hslash_{\circ}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\right)\Psi(\mu_{\nu})-\sum\limits_{\nu = 1}^{\vartheta}\hslash_{\circ}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\right)\Psi\left(\left|\mu_{\nu}-\frac{1}{\mathcal{C}_{\vartheta}}\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}\right|\right). \end{align*}

    Also, they proved the Jensen-Mercer inequality for \hslash_{\circ} -super-quadratic mappings.

    Theorem 1.2 ([17]). Let \Psi:[s_{3}, s_{4}]\rightarrow \mathbb{R} be an \hslash_{\circ} -super-quadratic mapping, then

    \begin{align*} \Psi\left(s_{3}+s_{4}-\frac{1}{\mathcal{C}_{\vartheta}}\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}\right)\leq \Psi(s_{3})+\Psi(s_{4})-\sum\limits_{\nu = 1}^{\vartheta}\hslash_{\circ}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\right)\Psi(\mu_{\nu})-\sum\limits_{\nu = 1}^{\vartheta}\hslash_{\circ}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\right)[\Psi(\mu_{\nu}-s_{3})+\Psi(s_{4}-\mu_{\nu})]-\sum\limits_{\nu = 1}^{\vartheta}\hslash_{\circ}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\right)\Psi\left(\left|\mu_{\nu}-\frac{1}{\mathcal{C}_{\vartheta}}\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}\right|\right). \end{align*}

    Set-valued analysis and its subdomains are cornerstones in mathematical sciences to generalize the previously obtained results. In this regard, Moore [18] applied the set-valued mappings to establish bounded solutions of differential equations. Recently, researchers have focused on decision-making, multi-objective optimization, numerical analysis, mathematical modeling, and advanced nonlinear analysis through interval-valued mapping. Probabilistic and interval-valued techniques are utilized to extract the results from data having randomness. However, these approaches are not applicable to quantities that possess vagueness. To deal with such problems, Zadeh [19] proposed the idea of a fuzzy set based on generalized indicator mapping and also presented the idea of a fuzzy convex set. This theory emerged as a potential theory in the last few decades. The contribution of these concepts in optimization, decision-making, inequalities, differential equations, mathematical modeling, approximation methodologies, dynamic systems, and computer science is unprecedented. Note that this theory is not statistical in nature but sets new trends in possibility theory. Dubois and Prade [20] researched preliminary terminologies related to area and tangent problems and offered new insights to carry new developments. Nanda and Kar [21] explored diverse groups of fuzzy convex mappings and reported their essential characterization.

    Before moving ahead, let us recall certain previously established concepts of interval and fuzzy interval analysis. Assume that \mathcal{K}_{c} symbolizes the space of all closed and bounded intervals of \mathbb{R} , while \mathcal{K}_{c}^{+} represents the space of positive intervals. An interval \chi is described by its endpoints as:

    \begin{align*} \chi = [\underline{\chi}, \overline{\chi}] = \{\mu\in\mathbb{R}: \underline{\chi}\leq \mu\leq \overline{\chi}\}. \end{align*}

    Given \chi, \phi\in \mathcal{K}_{c} and \delta_{1}\in \mathbb{R} , the scalar multiplication is given as:

    \begin{align*} \delta_{1}\cdot\chi: = \left\{ \begin{array}{ll} [\delta_{1}\underline{\chi}, \delta_{1}\overline{\chi}], & \mathrm{if}\ \delta_{1}\geq 0, \\ \ [\delta_{1}\overline{\chi}, \delta_{1}\underline{\chi}], & \mathrm{if}\ \delta_{1} < 0. \end{array} \right. \end{align*}

    Then the Minkowski addition \chi+\phi and product \chi\times\phi for \chi, \phi\in \mathcal{K}_{c} are defined by

    \begin{align*} [\underline{\phi}, \overline{\phi}]+[\underline{\chi}, \overline{\chi}]: = [\underline{\phi}+\underline{\chi}, \overline{\phi}+\overline{\chi}], \end{align*}

    and

    \begin{align*} [\underline{\phi}, \overline{\phi}]\times[\underline{\chi}, \overline{\chi}]: = [\min\{\underline{\phi}\underline{\chi}, \underline{\phi}\overline{\chi}, \overline{\phi}\underline{\chi}, \overline{\phi}\overline{\chi}\}, \max\{\underline{\phi}\underline{\chi}, \underline{\phi}\overline{\chi}, \overline{\phi}\underline{\chi}, \overline{\phi}\overline{\chi}\}]. \end{align*}

    Definition 1.4 ([22]). For any compact intervals A = [\underline{s_{3}}, \overline{s_{3}}] , B = [\underline{s_{4}}, \overline{s_{4}}] , and C = [\underline{c}, \overline{c}] , the generalized Hukuhara difference (gH-difference) is defined as:

    \begin{align*} [\underline{s_{3}}, \overline{s_{3}}]\ominus_{g}[\underline{s_{4}}, \overline{s_{4}}] = [\underline{c}, \overline{c}]\ \Longleftrightarrow\ \left\{ \begin{array}{ll} \mathrm{(i)} & \left\{ \begin{array}{l} \underline{s_{3}} = \underline{s_{4}}+\underline{c}\\ \overline{s_{3}} = \overline{s_{4}}+\overline{c}, \end{array} \right.\\ \mathrm{(ii)} & \left\{ \begin{array}{l} \underline{s_{4}} = \underline{s_{3}}-\overline{c}\\ \overline{s_{4}} = \overline{s_{3}}-\underline{c}. \end{array} \right. \end{array} \right. \end{align*}

    Also, the gH-difference can be illustrated as:

    \begin{align*} [\underline{s_{3}}, \overline{s_{3}}]\ominus_{g}[\underline{s_{4}}, \overline{s_{4}}] = [\min\{\underline{s_{3}}-\underline{s_{4}}, \overline{s_{3}}-\overline{s_{4}}\}, \max\{\underline{s_{3}}-\underline{s_{4}}, \overline{s_{3}}-\overline{s_{4}}\}]. \end{align*}

    Also, for A\in \mathcal{K}_{c} , the length of the interval is computed by l(A) = \overline{s_{3}}-\underline{s_{3}} . Then, for all A, B\in \mathcal{K}_{c} , we have

    \begin{align*} A\ominus_{g}B = \left\{ \begin{array}{ll} [\underline{s_{3}}-\underline{s_{4}}, \overline{s_{3}}-\overline{s_{4}}], & l(A)\geq l(B), \\ \ [\overline{s_{3}}-\overline{s_{4}}, \underline{s_{3}}-\underline{s_{4}}], & l(A)\leq l(B). \end{array} \right. \end{align*}
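    For readers who prefer to experiment, here is a minimal Python sketch (our own, with illustrative class and function names) of the interval arithmetic and the gH-difference recalled above.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):        # Minkowski addition
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):        # Minkowski product via min/max of endpoint products
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def scale(self, d):              # scalar multiplication delta_1 . chi
        return Interval(d * self.lo, d * self.hi) if d >= 0 else Interval(d * self.hi, d * self.lo)

    def length(self):                # l(A) = upper endpoint minus lower endpoint
        return self.hi - self.lo

def gh_difference(a, b):
    # gH-difference: [min{a_lo - b_lo, a_hi - b_hi}, max{a_lo - b_lo, a_hi - b_hi}]
    lo, hi = a.lo - b.lo, a.hi - b.hi
    return Interval(min(lo, hi), max(lo, hi))

A, B = Interval(1.0, 5.0), Interval(2.0, 3.0)
C = gh_difference(A, B)
print(C)            # Interval(lo=-1.0, hi=2.0)
# Case (i) of Definition 1.4: since l(A) >= l(B), we have A = B + C.
print(B + C)        # Interval(lo=1.0, hi=5.0), which equals A
```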

    Definition 1.5 ([23]). The relation " \preceq_{\rho} " over \mathcal{K}_{c} is provided as:

    \begin{align*} [\underline{\phi}, \overline{\phi}]\preceq_{\rho}[\underline{\chi}, \overline{\chi}], \end{align*}

    if and only if,

    \begin{align*} \underline{\phi}\leq \underline{\chi}, \qquad \overline{\phi}\leq \overline{\chi}, \end{align*}

    for all [\underline{\phi}, \overline{\phi}], [\underline{\chi}, \overline{\chi}]\in \mathcal{K}_{c} ; this is a pseudo-order or left-right (LR) ordering relation.

    Theorem 1.3 ([23,24]). For every fuzzy set \pi^{1} and \delta^{1}\in(0, 1] , the \delta^{1} -level set of \pi^{1} is given by \pi^{1}_{\delta^{1}} = \{\mu\in\mathbb{R}: \pi^{1}(\mu)\geq \delta^{1}\} , and \mathrm{supp}(\pi^{1}) = \mathrm{cl}\{\mu\in\mathbb{R}: \pi^{1}(\mu) > 0\} . The fuzzy sets in \mathbb{R} are represented by \Theta_{\delta^{1}} , and \pi^{1}\in \Theta_{\delta^{1}} . A fuzzy set \pi^{1} is considered to be a fuzzy number (interval) if it is normal, fuzzy convex, upper semi-continuous, and has compact support. The space of all real fuzzy numbers is specified by \Upsilon_{\delta^{1}} .

    A fuzzy set \pi^{1}\in \Upsilon_{\delta^{1}} is a fuzzy interval if, and only if, each \delta^{1} -level [\pi^{1}]_{\delta^{1}} is a compact convex subset of \mathbb{R} . Now, we deliver the representation of a fuzzy number:

    \begin{align*} [\pi^{1}]_{\delta^{1}} = [\pi^{1}_{*}(\delta^{1}), \pi^{1*}(\delta^{1})], \end{align*}

    where

    \begin{align*} \pi^{1}_{*}(\delta^{1}): = \inf\{\mu\in\mathbb{R}: \pi^{1}(\mu)\geq \delta^{1}\}, \qquad \pi^{1*}(\delta^{1}): = \sup\{\mu\in\mathbb{R}: \pi^{1}(\mu)\geq \delta^{1}\}. \end{align*}

    Thus, a fuzzy interval can be investigated and characterized by the parameterized triplet. For more details, see [26].

    \begin{align*} \{\pi^{1}_{*}(\delta^{1}), \pi^{1*}(\delta^{1}); \ \delta^{1}\in[0, 1]\}. \end{align*}

    These two endpoint mappings \pi^{1}_{*}(\delta^{1}) and \pi^{1*}(\delta^{1}) play a vital role in exploring fuzzy numbers.
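    The role of the endpoint mappings can be illustrated with a short Python sketch (ours, not from the cited sources) that computes the \delta^{1} -cuts of a triangular fuzzy number; the triangular shape, and the particular numbers used, are assumptions chosen for simplicity.

```python
import numpy as np

def level_cut(a, b, c, delta):
    """delta-level cut of a triangular fuzzy number with support [a, c] and core {b}.

    The endpoint mappings are pi_lower(delta) = a + delta*(b - a) and
    pi_upper(delta) = c - delta*(c - b).
    """
    return a + delta * (b - a), c - delta * (c - b)

for delta in np.linspace(0.0, 1.0, 5):
    lo, hi = level_cut(0.0, 3.0, 6.0, delta)
    print(f"delta = {delta:.2f} -> [{lo:.2f}, {hi:.2f}]")
# The lower endpoint is non-decreasing and the upper endpoint is
# non-increasing in delta, so every cut is a nested compact interval,
# as required for a fuzzy number.
```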

    Proposition 1.1 ([25]). If V, \chi\in \Upsilon_{\delta^{1}} , then the relation \preceq defined on \Upsilon_{\delta^{1}} by

    V\preceq \chi if, and only if, [V]_{\delta^{1}}\preceq_{\rho}[\chi]_{\delta^{1}} for all \delta^{1}\in[0, 1] , is a partial order relation.

    For V, \chi\in \Upsilon_{\delta^{1}} and c\in \mathbb{R} , the scalar product c\odot V , the sum with a constant c\oplus V , the sum V\oplus \chi , and the product V\otimes \chi are defined level-wise by:

    \begin{align*} [c\odot V]_{\delta^{1}} = c[V]_{\delta^{1}}, \quad [c\oplus V]_{\delta^{1}} = c+[V]_{\delta^{1}}, \quad [V\oplus \chi]_{\delta^{1}} = [V]_{\delta^{1}}+[\chi]_{\delta^{1}}, \quad [V\otimes \chi]_{\delta^{1}} = [V]_{\delta^{1}}\times[\chi]_{\delta^{1}}. \end{align*}

    The level-wise difference of fuzzy numbers is stated as follows:

    Definition 1.6 ([27]). Let V, \chi be two fuzzy numbers. Then the level-wise difference is defined as

    \begin{align*} [V\ominus \chi]_{\delta^{1}} = [V_{*}(\delta^{1})-\chi_{*}(\delta^{1}), V^{*}(\delta^{1})-\chi^{*}(\delta^{1})]. \end{align*}

    To overcome the limitations of the Hukuhara difference, the generalized difference is defined as follows.

    Definition 1.7 ([22]). Let V, \chi be two fuzzy numbers. Then the generalized Hukuhara difference (gH-difference) V\ominus_{g}\chi is a fuzzy number \xi such that

    \begin{align*} V\ominus_{g}\chi = \xi\ \Longleftrightarrow\ \left\{ \begin{array}{ll} \mathrm{(i)} & V = \chi\oplus\xi, \ \ \mathrm{or}\\ \mathrm{(ii)} & \chi = V\oplus(-1)\odot\xi. \end{array} \right. \end{align*}

    Also, the gH-difference based on \delta^{1} can be illustrated as:

    \begin{align*} [V\ominus_{g}\chi]_{\delta^{1}} = [\min\{V_{*}(\delta^{1})-\chi_{*}(\delta^{1}), V^{*}(\delta^{1})-\chi^{*}(\delta^{1})\}, \max\{V_{*}(\delta^{1})-\chi_{*}(\delta^{1}), V^{*}(\delta^{1})-\chi^{*}(\delta^{1})\}]. \end{align*}

    Also, for V\in \Upsilon_{\delta^{1}} , the length of the fuzzy interval is given by l(V(\delta^{1})) = V^{*}(\delta^{1})-V_{*}(\delta^{1}) . Then, for all V, \chi\in \Upsilon_{\delta^{1}} , we have

    \begin{align} V\ominus_{g}\chi = \left\{ \begin{array}{ll} [V_{*}(\delta^{1})-\chi_{*}(\delta^{1}), V^{*}(\delta^{1})-\chi^{*}(\delta^{1})], & l(V(\delta^{1}))\geq l(\chi(\delta^{1})), \\ \ [V^{*}(\delta^{1})-\chi^{*}(\delta^{1}), V_{*}(\delta^{1})-\chi_{*}(\delta^{1})], & l(V(\delta^{1}))\leq l(\chi(\delta^{1})). \end{array} \right. \end{align} (1.2)

    Note that a mapping \Psi:[s_{3}, s_{4}]\subseteq \mathbb{R}\rightarrow \Upsilon_{\delta^{1}} is said to be l -increasing if the length function \mathrm{len}([\Psi(\mu)]_{\delta^{1}}) = \Psi^{*}(\mu, \delta^{1})-\Psi_{*}(\mu, \delta^{1}) is increasing with respect to \mu for all \delta^{1}\in[0, 1] ; that is, for any \mu_{1}, \mu_{2}\in[s_{3}, s_{4}] with \mu_{1}\leq \mu_{2} , \mathrm{len}([\Psi(\mu_{2})]_{\delta^{1}})\geq \mathrm{len}([\Psi(\mu_{1})]_{\delta^{1}}) , \delta^{1}\in[0, 1] . For more details, see [28].

    Proposition 1.2 ([28]). Let \Psi:T = (s_{3}, s_{4})\subseteq \mathbb{R}\rightarrow \Upsilon_{\delta^{1}} be a fuzzy-number-valued (F.N.V) mapping. If \Psi(\mu+h)\ominus_{g}\Psi(\mu) exists for some h such that \mu+h\in T , then one of the following conditions holds:

    \begin{align*} \mathit{Case\ (i)}\ \left\{ \begin{array}{l} \mathrm{len}([\Psi(\mu+h)]_{\delta^{1}})\geq \mathrm{len}([\Psi(\mu)]_{\delta^{1}}), \ \ \delta^{1}\in[0, 1], \\ \Psi_{*}(\mu+h, \delta^{1})-\Psi_{*}(\mu, \delta^{1})\ \mathrm{is\ monotonically\ increasing\ with\ respect\ to}\ \delta^{1}, \\ \Psi^{*}(\mu+h, \delta^{1})-\Psi^{*}(\mu, \delta^{1})\ \mathrm{is\ monotonically\ decreasing\ with\ respect\ to}\ \delta^{1}. \end{array} \right. \end{align*}
    \begin{align*} \mathit{Case\ (ii)}\ \left\{ \begin{array}{l} \mathrm{len}([\Psi(\mu+h)]_{\delta^{1}})\leq \mathrm{len}([\Psi(\mu)]_{\delta^{1}}), \ \ \delta^{1}\in[0, 1], \\ \Psi_{*}(\mu+h, \delta^{1})-\Psi_{*}(\mu, \delta^{1})\ \mathrm{is\ monotonically\ decreasing\ with\ respect\ to}\ \delta^{1}, \\ \Psi^{*}(\mu+h, \delta^{1})-\Psi^{*}(\mu, \delta^{1})\ \mathrm{is\ monotonically\ increasing\ with\ respect\ to}\ \delta^{1}. \end{array} \right. \end{align*}

    Remark 1.1. From Proposition 1.2, \Psi(\mu+h)\ominus_{g}\Psi(\mu) can be written, by the definition of the fuzzy interval, as:

    \begin{align*} \mathit{Case\ (i)}\ \left\{ \begin{array}{l} \mathrm{len}([\Psi(\mu)]_{\delta^{1}})\ \mathrm{is\ monotonically\ increasing\ with\ respect\ to}\ \mu, \ \ \delta^{1}\in[0, 1], \\ \Psi(\mu+h)\ominus_{g}\Psi(\mu) = [\Psi_{*}(\mu+h, \delta^{1})-\Psi_{*}(\mu, \delta^{1}), \Psi^{*}(\mu+h, \delta^{1})-\Psi^{*}(\mu, \delta^{1})]. \end{array} \right. \end{align*}
    \begin{align*} \mathit{Case\ (ii)}\ \left\{ \begin{array}{l} \mathrm{len}([\Psi(\mu)]_{\delta^{1}})\ \mathrm{is\ monotonically\ decreasing\ with\ respect\ to}\ \mu, \ \ \delta^{1}\in[0, 1], \\ \Psi(\mu+h)\ominus_{g}\Psi(\mu) = [\Psi^{*}(\mu+h, \delta^{1})-\Psi^{*}(\mu, \delta^{1}), \Psi_{*}(\mu+h, \delta^{1})-\Psi_{*}(\mu, \delta^{1})]. \end{array} \right. \end{align*}

    Definition 1.8 ([25]). Let \Psi:[s_{3}, s_{4}]\subseteq \mathbb{R}\rightarrow \Upsilon_{\delta^{1}} be an F.N.V mapping. For each \delta^{1}\in[0, 1] , its \delta^{1} -cuts define the bundle of interval-valued mappings \Psi_{\delta^{1}}:[s_{3}, s_{4}]\subseteq \mathbb{R}\rightarrow \mathcal{K}_{c} given by \Psi_{\delta^{1}}(\mu) = [\Psi_{*}(\mu, \delta^{1}), \Psi^{*}(\mu, \delta^{1})] , \mu\in[s_{3}, s_{4}] . For every \delta^{1}\in[0, 1] , the left and right real-valued mappings \Psi_{*}(\cdot, \delta^{1}), \Psi^{*}(\cdot, \delta^{1}):[s_{3}, s_{4}]\rightarrow \mathbb{R} are referred to as the endpoint mappings of \Psi .

    Definition 1.9 ([29]). Let \Psi:[s_{3}, s_{4}]\subseteq \mathbb{R}\rightarrow \Upsilon_{\delta^{1}} be an F.N.V mapping. Then the fuzzy integral of \Psi over [s_{3}, s_{4}] , denoted (FR)\int_{s_{3}}^{s_{4}}\Psi(\mu)\mathrm{d}\mu , is defined level-wise by

    \begin{align*} \left[(FR)\int_{s_{3}}^{s_{4}}\Psi(\mu)\mathrm{d}\mu\right]_{\delta^{1}} = (IR)\int_{s_{3}}^{s_{4}}\Psi_{\delta^{1}}(\mu)\mathrm{d}\mu = \left\{\int_{s_{3}}^{s_{4}}\Psi(\mu, \delta^{1})\mathrm{d}\mu: \Psi(\mu, \delta^{1})\in \mathcal{R}([s_{3}, s_{4}], \delta^{1})\right\}, \end{align*}

    for all \delta^{1}\in[0, 1] , where \mathcal{R}([s_{3}, s_{4}], \delta^{1}) denotes the space of Riemann integrable mappings.

    Theorem 1.4 ([26]). Let \Psi:[s_{3}, s_{4}]\subseteq \mathbb{R}\rightarrow \Upsilon_{\delta^{1}} be an F.N.V mapping whose \delta^{1} -cuts define the bundle of interval-valued mappings \Psi_{\delta^{1}}:[s_{3}, s_{4}]\subseteq \mathbb{R}\rightarrow \mathcal{K}_{c} given by \Psi_{\delta^{1}}(\mu) = [\Psi_{*}(\mu, \delta^{1}), \Psi^{*}(\mu, \delta^{1})] , \mu\in[s_{3}, s_{4}] . Then \Psi is fuzzy Riemann integrable (FR-integrable) over [s_{3}, s_{4}] if, and only if, \Psi_{*}(\mu, \delta^{1}), \Psi^{*}(\mu, \delta^{1})\in \mathcal{R}([s_{3}, s_{4}], \delta^{1}) , in which case

    \begin{align*} \left[(FR)\int_{s_{3}}^{s_{4}}\Psi(\mu)\mathrm{d}\mu\right]_{\delta^{1}} = \left[(R)\int_{s_{3}}^{s_{4}}\Psi_{*}(\mu, \delta^{1})\mathrm{d}\mu, (R)\int_{s_{3}}^{s_{4}}\Psi^{*}(\mu, \delta^{1})\mathrm{d}\mu\right] = (IR)\int_{s_{3}}^{s_{4}}\Psi_{\delta^{1}}(\mu)\mathrm{d}\mu, \end{align*}

    for all \delta^{1}\in[0, 1] , where (IR) represents the interval Riemann integral of \Psi_{\delta^{1}}(\mu) . For all \delta^{1}\in[0, 1] , \mathcal{FR}([s_{3}, s_{4}], \delta^{1}) specifies the class of all FR-integrable F.N.V mappings over [s_{3}, s_{4}] .

    Definition 1.10 ([21]). A mapping \Psi:[s_{3}, s_{4}]\rightarrow \Upsilon_{\delta^{1}} is termed an F.N.V LR-convex mapping on [s_{3}, s_{4}] if

    \begin{align} \Psi((1-\varphi)\mu+\varphi Y)\preceq (1-\varphi)\Psi(\mu)\oplus\varphi\Psi(Y), \end{align} (1.3)

    for all \mu, Y\in[s_{3}, s_{4}] and \varphi\in[0, 1] , where \Psi(\mu)\succeq \tilde{0} for all \mu\in[s_{3}, s_{4}] .

    Definition 1.11 ([26]). Let \tau > 0 and \mathcal{L}([s_{3}, s_{4}], \Upsilon_{\delta^{1}}) be the space of all Lebesgue measurable F.N.V mappings on [s_{3}, s_{4}] . Then the fuzzy left and right RL-fractional integral operators of \Psi\in \mathcal{L}([s_{3}, s_{4}], \Upsilon_{\delta^{1}}) are defined as:

    \begin{align*} \mathcal{J}^{\tau}_{s_{3}+}\Psi(\mu) = \frac{1}{\Gamma(\tau)}\int_{s_{3}}^{\mu}(\mu-\varrho)^{\tau-1}\Psi(\varrho)\mathrm{d}\varrho, \quad \mu > s_{3}, \end{align*}

    and

    \begin{align*} \mathcal{J}^{\tau}_{s_{4}-}\Psi(\mu) = \frac{1}{\Gamma(\tau)}\int_{\mu}^{s_{4}}(\varrho-\mu)^{\tau-1}\Psi(\varrho)\mathrm{d}\varrho, \quad \mu < s_{4}. \end{align*}

    Furthermore, the left and right RL-fractional operators can be described through the left and right endpoint mappings, that is,

    \begin{align*} \left[\mathcal{J}^{\tau}_{s_{3}+}\Psi(\mu)\right]_{\delta^{1}} = \frac{1}{\Gamma(\tau)}\int_{s_{3}}^{\mu}(\mu-\varrho)^{\tau-1}\Psi_{\delta^{1}}(\varrho)\mathrm{d}\varrho = \frac{1}{\Gamma(\tau)}\int_{s_{3}}^{\mu}(\mu-\varrho)^{\tau-1}[\Psi_{*}(\varrho, \delta^{1}), \Psi^{*}(\varrho, \delta^{1})]\mathrm{d}\varrho, \end{align*}

    where

    \begin{align*} \mathcal{J}^{\tau}_{s_{3}+}\Psi_{*}(\mu, \delta^{1}) = \frac{1}{\Gamma(\tau)}\int_{s_{3}}^{\mu}(\mu-\varrho)^{\tau-1}\Psi_{*}(\varrho, \delta^{1})\mathrm{d}\varrho, \end{align*}

    and

    \begin{align*} \mathcal{J}^{\tau}_{s_{3}+}\Psi^{*}(\mu, \delta^{1}) = \frac{1}{\Gamma(\tau)}\int_{s_{3}}^{\mu}(\mu-\varrho)^{\tau-1}\Psi^{*}(\varrho, \delta^{1})\mathrm{d}\varrho. \end{align*}

    By a similar argument, we can define the right operator.
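    As a quick numerical illustration of Definition 1.11 (our own sketch, not part of the paper), the following Python snippet evaluates the left RL-fractional integral of a sample endpoint mapping by quadrature and compares it with the known closed form for power functions; the choice \Psi_{*}(\varrho) = \varrho^{2} , the parameters, and the function names are assumptions made for illustration only.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def left_rl_integral(f, s3, mu, tau):
    # Left Riemann-Liouville fractional integral:
    # J^tau_{s3+} f(mu) = 1/Gamma(tau) * int_{s3}^{mu} (mu - rho)^(tau - 1) f(rho) drho
    val, _ = quad(lambda rho: (mu - rho) ** (tau - 1) * f(rho), s3, mu, limit=200)
    return val / gamma(tau)

tau, mu = 0.5, 1.5
f = lambda rho: rho ** 2                                   # sample endpoint mapping
numeric = left_rl_integral(f, 0.0, mu, tau)
closed_form = gamma(3) / gamma(3 + tau) * mu ** (2 + tau)  # known formula for rho**2 with s3 = 0
print(numeric, closed_form)                                # the two values should agree closely
```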

    The authors [30] employed interval-valued unified approximate convexity to examine new refinements of inequalities. Nwaeze et al.[31] proposed the class of interval-valued ϑ-polynomial convex mappings and reported several interesting inequalities. Abdeljawad et al. [32] utilized the p mean to develop the idea of interval-valued p convexity and presented some corresponding general inequalities of Hermite-Hadamard type. Shi and his colleagues [33] studied the totally ordered unified convexity in the perspective of integral inequalities. Through interval-valued log-convexity and cr-harmonic convexity, Liu et al. [34,35] found the trapezium and Jensen's-like inequalities.

    Budak et al. [36] implemented interval-valued RL-fractional operators and convexity to derive trapezoidal inequalities. Vivas-Cortez [37] introduced totally ordered τ-convex mappings and analyzed several Jensen-, Schur-, and fractional Hadamard-type inequalities. Cheng et al. [38] looked at new kinds of Hadamard-like inequalities using fuzzy-valued mappings and fractional quantum calculus. Bin-Mohsin et al. [39] bridged harmonic coordinated convexity and fractional operators relying on Raina's special mapping to establish new two-dimensional inequalities. For comprehensive details, consult [40,41,42,43,44].

    Recently, Fahad [45] proposed some novel bounds of classical inequalities pertaining to center-radius ordered geometric-arithmetic convexity and some interesting applications to information theory. Authors [46] explored the unified class of stochastic convex processes relying on quasi-weighted mean and cr ordering relation to conclude new forms of inequalities. For the first time, Khan and Butt established the new counterparts of classical inequalities depending upon partially and totally ordered super-quadraticity, respectively, in [47,48].

    Costa et al. [49] implemented fuzzy-valued mappings to acquire some bounds in a one-point quadrature scheme. Zhang and his coauthors [50] focused on set-valued Jensen-like inequalities along with some interesting applications. Khan et al. [51] examined fractional analogues of fuzzy interval-valued integral inequalities. In [52], the authors discussed fuzzy-valued Hadamard-like inequalities associated with log convexity. Abbaszadeh and Eshaghi employed fuzzy-valued r-convex mappings to establish trapezium-type inequalities. Bin-Mohsin [53] introduced the idea of fuzzy bi-convex mappings and derived various inequalities. For comprehensive details, see [54,55,56,57,58].

    The above literature is evidence that theories of inequalities are interlinked with convexity. Several classes of convexity have been introduced to reduce the limitations of classical convexity or to acquire better estimations of existing results. Researchers have applied several techniques to produce better estimations of mathematical quantities. The principle motivation is to explore super-quadratic mappings in fuzzy environments through a unified approach. First, we will propose a new class of fuzzy number-valued super-quadratic mappings based on left and right ordering relations. Further, we will give a detailed description of newly developed concepts along with their potential cases. Most importantly, we will derive classical inequalities like the trapezoidal inequality, the weighted form for symmetric mappings, and Jensen's and its related inequalities. Later on, some fractional inequalities will be presented and graphed. To increase the reliability and accuracy, some visuals and related numerical data will be provided. Finally, we will present applications based on our primary findings. This is the first study regarding super-quadraticity via fuzzy calculus.

    This part contains the results related to the newly proposed notion of fuzzy super-quadratic mappings.

    First, we introduce the fuzzy-number-valued \hslash_{\circ} -super-quadratic mapping.

    Definition 2.1. Suppose \hslash_{\circ}\geq 0 and \hslash_{\circ}\not\equiv 0 . Let \Psi:[s_{3}, s_{4}]\subseteq[0, \infty)\rightarrow \Upsilon_{\delta^{1}} be an F.N.V mapping such that \Psi_{\delta^{1}}(\gamma) = [\Psi_{*}(\gamma, \delta^{1}), \Psi^{*}(\gamma, \delta^{1})] , and \mathrm{len}([\Psi(\gamma)]_{\delta^{1}}) = \Psi^{*}(\gamma, \delta^{1})-\Psi_{*}(\gamma, \delta^{1}) is increasing with respect to \gamma for all \delta^{1}\in[0, 1] . Then \Psi is considered to be an F.N.V \hslash_{\circ} -super-quadratic mapping if

    \begin{align*} \Psi((1-\varphi)\gamma+\varphi y)\preceq \hslash_{\circ}(\varphi)[\Psi(y)\ominus_{g}\Psi((1-\varphi)|\gamma-y|)]\oplus\hslash_{\circ}(1-\varphi)[\Psi(\gamma)\ominus_{g}\Psi(\varphi|\gamma-y|)], \end{align*}

    holds for all \gamma, y\in[s_{3}, s_{4}] such that \gamma < y and |y-\gamma| < \gamma , where \varphi\in[0, 1] .

    Now, we list some potential deductions of Definition 2.1.

    ● Inserting \hslash_{\circ}(\varphi) = \varphi , we recapture the class of F.N.V super-quadratic mappings.

    ● Inserting \hslash_{\circ}(\varphi) = \varphi^{s} , we recapture the class of F.N.V s -super-quadratic mappings:

    \begin{align*} \Psi((1-\varphi)\gamma+\varphi y)\preceq \varphi^{s}[\Psi(y)\ominus_{g}\Psi((1-\varphi)|\gamma-y|)]\oplus(1-\varphi)^{s}[\Psi(\gamma)\ominus_{g}\Psi(\varphi|\gamma-y|)]. \end{align*}

    ● Inserting \hslash_{\circ}(\varphi) = \varphi^{-s} , we recapture the class of F.N.V s -Godunova-Levin super-quadratic mappings:

    \begin{align*} \Psi((1-\varphi)\gamma+\varphi y)\preceq \varphi^{-s}[\Psi(y)\ominus_{g}\Psi((1-\varphi)|\gamma-y|)]\oplus(1-\varphi)^{-s}[\Psi(\gamma)\ominus_{g}\Psi(\varphi|\gamma-y|)]. \end{align*}

    ● Inserting \hslash_{\circ}(\varphi) = \varphi(1-\varphi) , we recapture the class of F.N.V tgs -super-quadratic mappings:

    \begin{align*} \Psi((1-\varphi)\gamma+\varphi y)\preceq \varphi(1-\varphi)[\Psi(y)\ominus_{g}\Psi((1-\varphi)|\gamma-y|)]\oplus\varphi(1-\varphi)[\Psi(\gamma)\ominus_{g}\Psi(\varphi|\gamma-y|)]. \end{align*}

    ● Inserting \hslash_{\circ}(\varphi) = 1 , we recapture the class of F.N.V P -super-quadratic mappings:

    \begin{align*} \Psi((1-\varphi)\gamma+\varphi y)\preceq [\Psi(y)\ominus_{g}\Psi((1-\varphi)|\gamma-y|)]\oplus[\Psi(\gamma)\ominus_{g}\Psi(\varphi|\gamma-y|)]. \end{align*}

    ● Inserting \hslash_{\circ}(\varphi) = \exp(\varphi)-1 , we recapture the class of F.N.V exponential super-quadratic mappings:

    \begin{align*} \Psi((1-\varphi)\gamma+\varphi y)\preceq [\exp(\varphi)-1][\Psi(y)\ominus_{g}\Psi((1-\varphi)|\gamma-y|)]\oplus[\exp(1-\varphi)-1][\Psi(\gamma)\ominus_{g}\Psi(\varphi|\gamma-y|)]. \end{align*}

    ● Selecting \Psi_{*}(\gamma, \delta^{1}) = \Psi^{*}(\gamma, \delta^{1}) and \delta^{1} = 1 , we acquire the notion of the \hslash_{\circ} -super-quadratic mapping defined in [17].

    The spaces of \hslash_{\circ} -super-quadratic mappings and of l -increasing F.N.V \hslash_{\circ} -super-quadratic mappings defined over [s_{3}, s_{4}] are represented by SSQF([s_{3}, s_{4}], \hslash_{\circ}) and SSQFNF([s_{3}, s_{4}], \hslash_{\circ}) , respectively.

    Proposition 2.1. Let \Psi, g:[s_{3}, s_{4}]\rightarrow \Upsilon_{\delta^{1}} be two F.N.V mappings. If \Psi, g\in SSQFNF([s_{3}, s_{4}], \hslash_{\circ}) , then

    \Psi\oplus g\in SSQFNF([s_{3}, s_{4}], \hslash_{\circ}) ;

    c\odot\Psi\in SSQFNF([s_{3}, s_{4}], \hslash_{\circ}) , for every c\geq 0 .

    Proof. The proof is obvious.

    Proposition 2.2. If \Psi\in SSQFNF([s_{3}, s_{4}], \hslash_{\circ_{1}}) and \hslash_{\circ_{1}}(\varphi)\leq \hslash_{\circ_{2}}(\varphi) , then \Psi\in SSQFNF([s_{3}, s_{4}], \hslash_{\circ_{2}}) .

    Now we prove a criterion characterizing the class of F.N.V \hslash_{\circ} -super-quadratic mappings.

    Proposition 2.4. Let \Psi:[s_{3}, s_{4}]\subseteq[0, \infty)\rightarrow \Upsilon_{\delta^{1}} be an F.N.V mapping, and let \gamma, y\in[s_{3}, s_{4}] be such that \gamma < y and |y-\gamma| < \gamma . Then \Psi\in SSQFNF([s_{3}, s_{4}], \hslash_{\circ}) if, and only if, \Psi_{*}(\cdot, \delta^{1}), \Psi^{*}(\cdot, \delta^{1})\in SSQF([s_{3}, s_{4}], \hslash_{\circ}) and \mathrm{len}([\Psi(\gamma)]_{\delta^{1}}) = \Psi^{*}(\gamma, \delta^{1})-\Psi_{*}(\gamma, \delta^{1}) is increasing with respect to \gamma for all \delta^{1}\in[0, 1] .

    Proof. Let \Psi_{*}, \Psi^{*}\in SSQF([s_{3}, s_{4}], \hslash_{\circ}) and \gamma, y\in[s_{3}, s_{4}] be such that \gamma < y and |y-\gamma| < \gamma . Then:

    \begin{align} \Psi_{*}((1-\varphi)y+\varphi\gamma, \delta^{1})\leq \hslash_{\circ}(1-\varphi)[\Psi_{*}(y, \delta^{1})-\Psi_{*}(\varphi|y-\gamma|, \delta^{1})]+\hslash_{\circ}(\varphi)[\Psi_{*}(\gamma, \delta^{1})-\Psi_{*}((1-\varphi)|y-\gamma|, \delta^{1})], \end{align} (2.1)

    and

    \begin{align} \Psi^{*}((1-\varphi)y+\varphi\gamma, \delta^{1})\leq \hslash_{\circ}(1-\varphi)[\Psi^{*}(y, \delta^{1})-\Psi^{*}(\varphi|y-\gamma|, \delta^{1})]+\hslash_{\circ}(\varphi)[\Psi^{*}(\gamma, \delta^{1})-\Psi^{*}((1-\varphi)|y-\gamma|, \delta^{1})]. \end{align} (2.2)

    Combining (2.1) and (2.2) by the definition of the pseudo ordering relation, we have

    \begin{align*} &[\Psi_{*}((1-\varphi)y+\varphi\gamma, \delta^{1}), \Psi^{*}((1-\varphi)y+\varphi\gamma, \delta^{1})]\\ &\preceq_{\rho}\big[\hslash_{\circ}(1-\varphi)[\Psi_{*}(y, \delta^{1})-\Psi_{*}(\varphi|y-\gamma|, \delta^{1})]+\hslash_{\circ}(\varphi)[\Psi_{*}(\gamma, \delta^{1})-\Psi_{*}((1-\varphi)|y-\gamma|, \delta^{1})], \\ &\qquad \hslash_{\circ}(1-\varphi)[\Psi^{*}(y, \delta^{1})-\Psi^{*}(\varphi|y-\gamma|, \delta^{1})]+\hslash_{\circ}(\varphi)[\Psi^{*}(\gamma, \delta^{1})-\Psi^{*}((1-\varphi)|y-\gamma|, \delta^{1})]\big]. \end{align*}

    Then, by Case (i) of Remark 1.1, we have

    \begin{align*} \Psi((1-\varphi)y+\varphi\gamma)\preceq \hslash_{\circ}(1-\varphi)[\Psi(y)\ominus_{g}\Psi(\varphi|y-\gamma|)]\oplus\hslash_{\circ}(\varphi)[\Psi(\gamma)\ominus_{g}\Psi((1-\varphi)|y-\gamma|)]. \end{align*}

    This completes the proof of the first part. For the converse part, consider \Psi\in SSQFNF([s_{3}, s_{4}], \hslash_{\circ}) ; then

    \begin{align*} \Psi((1-\varphi)y+\varphi\gamma)\preceq \hslash_{\circ}(1-\varphi)[\Psi(y)\ominus_{g}\Psi(\varphi|y-\gamma|)]\oplus\hslash_{\circ}(\varphi)[\Psi(\gamma)\ominus_{g}\Psi((1-\varphi)|y-\gamma|)]. \end{align*}

    The above inequality can be written as

    \begin{align*} &[\Psi_{*}((1-\varphi)y+\varphi\gamma, \delta^{1}), \Psi^{*}((1-\varphi)y+\varphi\gamma, \delta^{1})]\\ &\preceq_{\rho}\Big[\min\big\{\hslash_{\circ}(1-\varphi)[\Psi_{*}(y, \delta^{1})-\Psi_{*}(\varphi|y-\gamma|, \delta^{1})]+\hslash_{\circ}(\varphi)[\Psi_{*}(\gamma, \delta^{1})-\Psi_{*}((1-\varphi)|y-\gamma|, \delta^{1})], \\ &\qquad\qquad \hslash_{\circ}(1-\varphi)[\Psi^{*}(y, \delta^{1})-\Psi^{*}(\varphi|y-\gamma|, \delta^{1})]+\hslash_{\circ}(\varphi)[\Psi^{*}(\gamma, \delta^{1})-\Psi^{*}((1-\varphi)|y-\gamma|, \delta^{1})]\big\}, \\ &\qquad \max\big\{\hslash_{\circ}(1-\varphi)[\Psi_{*}(y, \delta^{1})-\Psi_{*}(\varphi|y-\gamma|, \delta^{1})]+\hslash_{\circ}(\varphi)[\Psi_{*}(\gamma, \delta^{1})-\Psi_{*}((1-\varphi)|y-\gamma|, \delta^{1})], \\ &\qquad\qquad \hslash_{\circ}(1-\varphi)[\Psi^{*}(y, \delta^{1})-\Psi^{*}(\varphi|y-\gamma|, \delta^{1})]+\hslash_{\circ}(\varphi)[\Psi^{*}(\gamma, \delta^{1})-\Psi^{*}((1-\varphi)|y-\gamma|, \delta^{1})]\big\}\Big]. \end{align*}

    From Definition 2.1, it is clear that \mathrm{len}([\Psi(\gamma)]_{\delta^{1}}) is increasing. Using Case (i) of Remark 1.1, we have

    \begin{align} &[\Psi_{*}((1-\varphi)y+\varphi\gamma, \delta^{1}), \Psi^{*}((1-\varphi)y+\varphi\gamma, \delta^{1})]\\ &\preceq_{\rho}\big[\hslash_{\circ}(1-\varphi)[\Psi_{*}(y, \delta^{1})-\Psi_{*}(\varphi|y-\gamma|, \delta^{1})]+\hslash_{\circ}(\varphi)[\Psi_{*}(\gamma, \delta^{1})-\Psi_{*}((1-\varphi)|y-\gamma|, \delta^{1})], \\ &\qquad \hslash_{\circ}(1-\varphi)[\Psi^{*}(y, \delta^{1})-\Psi^{*}(\varphi|y-\gamma|, \delta^{1})]+\hslash_{\circ}(\varphi)[\Psi^{*}(\gamma, \delta^{1})-\Psi^{*}((1-\varphi)|y-\gamma|, \delta^{1})]\big]. \end{align} (2.3)

    From (2.3), we can write

    \begin{align} \Psi_{*}((1-\varphi)y+\varphi\gamma, \delta^{1})\leq \hslash_{\circ}(1-\varphi)[\Psi_{*}(y, \delta^{1})-\Psi_{*}(\varphi|y-\gamma|, \delta^{1})]+\hslash_{\circ}(\varphi)[\Psi_{*}(\gamma, \delta^{1})-\Psi_{*}((1-\varphi)|y-\gamma|, \delta^{1})], \end{align} (2.4)

    and

    \begin{align} \Psi^{*}((1-\varphi)y+\varphi\gamma, \delta^{1})\leq \hslash_{\circ}(1-\varphi)[\Psi^{*}(y, \delta^{1})-\Psi^{*}(\varphi|y-\gamma|, \delta^{1})]+\hslash_{\circ}(\varphi)[\Psi^{*}(\gamma, \delta^{1})-\Psi^{*}((1-\varphi)|y-\gamma|, \delta^{1})]. \end{align} (2.5)

    It is evident from (2.4) and (2.5) that both \Psi_{*}, \Psi^{*}\in SSQF([s_{3}, s_{4}], \hslash_{\circ}) . Hence, the result is proved.

    It is noteworthy that Proposition 2.4 provides a necessary and sufficient condition for an F.N.V \hslash_{\circ} -super-quadratic mapping.

    Example 2.1. Let us consider the F.N.V mapping \Psi:[s_{3}, s_{4}] = [0, 2]\rightarrow \Upsilon_{\delta^{1}} , which is defined as follows

    \begin{align*} {\Psi}_{{{\mu}}}({\mu_{1}}) = \left\{ \begin{array}{ll} \frac{{\mu_{1}}}{3{{\mu}}^3}, \quad {\mu_{1}}\in [0, 3{{\mu}}^3]\\ \frac{6{{\mu}}^3-{\mu_{1}}}{3{{\mu}}^3}, \quad {\mu_{1}}\in(3{{\mu}}^3, 6{{\mu}}^3]. \end{array} \right. \end{align*}

    Then, for \delta^{1}\in[0, 1] , we have

    \begin{align*} {\Psi}_{\delta^{1}} = \left[3\delta^{1} {{\mu}}^3, \, (6-3\delta^{1}){{\mu}}^3\right]. \end{align*}

    Notice that both endpoint mappings \Psi_{*}(\mu, \delta^{1}) = 3\delta^{1}\mu^{3} and \Psi^{*}(\mu, \delta^{1}) = (6-3\delta^{1})\mu^{3} are \hslash_{\circ} -super-quadratic mappings, and \mathrm{len}([\Psi(\mu)]_{\delta^{1}}) = (6-6\delta^{1})\mu^{3} is increasing with respect to \mu for all \delta^{1}\in[0, 1] . So, \Psi\in SSQFNF([s_{3}, s_{4}], \hslash_{\circ}) .

    Now, we prove an alternative characterization of this class of mappings for \vartheta different points of [s_{3}, s_{4}] , known as Jensen's inequality. This inequality is useful for the development of further integral inequalities.

    Theorem 2.1. Let \hslash_{\circ}:(0, 1]\rightarrow[0, \infty) be a nonnegative super-multiplicative mapping. If \Psi\in SSQFNF([s_{3}, s_{4}], \hslash_{\circ}) , then

    \begin{align} \Psi\left(\frac{1}{\mathcal{C}_{\vartheta}}\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}\right)\preceq \sum\limits_{\nu = 1}^{\vartheta}\hslash_{\circ}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\right)\Psi(\mu_{\nu})\ominus_{g}\sum\limits_{\nu = 1}^{\vartheta}\hslash_{\circ}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\right)\Psi\left(\left|\mu_{\nu}-\frac{1}{\mathcal{C}_{\vartheta}}\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}\right|\right), \end{align} (2.6)

    for \mu_{\nu}\in[s_{3}, s_{4}] and \varphi_{\nu}\in[0, 1] such that \mathcal{C}_{\vartheta} = \sum_{\nu = 1}^{\vartheta}\varphi_{\nu} .

    Proof. If \Psi\in SSQFNF([s_{3}, s_{4}], \hslash_{\circ}) , then (2.6) can be written as:

    \begin{align*} &\left[\Psi_{*}\left(\frac{1}{\mathcal{C}_{\vartheta}}\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}, \delta^{1}\right), \Psi^{*}\left(\frac{1}{\mathcal{C}_{\vartheta}}\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}, \delta^{1}\right)\right]\\ &\preceq_{\rho}\left[\sum\limits_{\nu = 1}^{\vartheta}\hslash_{\circ}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\right)\Psi_{*}(\mu_{\nu}, \delta^{1})-\sum\limits_{\nu = 1}^{\vartheta}\hslash_{\circ}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\right)\Psi_{*}\left(\left|\mu_{\nu}-\frac{1}{\mathcal{C}_{\vartheta}}\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}\right|, \delta^{1}\right), \right.\\ &\qquad\left.\sum\limits_{\nu = 1}^{\vartheta}\hslash_{\circ}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\right)\Psi^{*}(\mu_{\nu}, \delta^{1})-\sum\limits_{\nu = 1}^{\vartheta}\hslash_{\circ}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\right)\Psi^{*}\left(\left|\mu_{\nu}-\frac{1}{\mathcal{C}_{\vartheta}}\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}\right|, \delta^{1}\right)\right]. \end{align*}

    By the pseudo order relation, one can resolve the above inequality as:

    \begin{align} \Psi_{*}\left(\frac{1}{\mathcal{C}_{\vartheta}}\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}, \delta^{1}\right)\leq \sum\limits_{\nu = 1}^{\vartheta}\hslash_{\circ}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\right)\Psi_{*}(\mu_{\nu}, \delta^{1})-\sum\limits_{\nu = 1}^{\vartheta}\hslash_{\circ}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\right)\Psi_{*}\left(\left|\mu_{\nu}-\frac{1}{\mathcal{C}_{\vartheta}}\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}\right|, \delta^{1}\right), \end{align} (2.7)

    and

    \begin{align} \Psi^{*}\left(\frac{1}{\mathcal{C}_{\vartheta}}\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}, \delta^{1}\right)\leq \sum\limits_{\nu = 1}^{\vartheta}\hslash_{\circ}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\right)\Psi^{*}(\mu_{\nu}, \delta^{1})-\sum\limits_{\nu = 1}^{\vartheta}\hslash_{\circ}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\right)\Psi^{*}\left(\left|\mu_{\nu}-\frac{1}{\mathcal{C}_{\vartheta}}\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}\right|, \delta^{1}\right). \end{align} (2.8)

    We employ the induction technique to prove both inequalities (2.7) and (2.8). Fixing \vartheta = 2 , \frac{\varphi_{1}}{\mathcal{C}_{2}} = \alpha , and \frac{\varphi_{2}}{\mathcal{C}_{2}} = 1-\alpha in (2.7), we acquire the definition of an \hslash_{\circ} -super-quadratic mapping. To proceed further, we assume that (2.7) holds true for \vartheta-1 ; then

    \begin{align} \Psi_{*}\left(\sum\limits_{\nu = 1}^{\vartheta-1}\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta-1}}\mu_{\nu}, \delta^{1}\right)\leq \sum\limits_{\nu = 1}^{\vartheta-1}\hslash_{\circ}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta-1}}\right)\Psi_{*}(\mu_{\nu}, \delta^{1})-\sum\limits_{\nu = 1}^{\vartheta-1}\hslash_{\circ}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta-1}}\right)\Psi_{*}\left(\left|\mu_{\nu}-\sum\limits_{\nu = 1}^{\vartheta-1}\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta-1}}\mu_{\nu}\right|, \delta^{1}\right). \end{align} (2.9)

    Next, we prove the validity of (2.6).

    \begin{align} \Psi_{*}\left(\frac{1}{\mathcal{C}_{\vartheta}}\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}, \delta^{1}\right)& = \Psi_{*}\left(\frac{\varphi_{\vartheta}\mu_{\vartheta}}{\mathcal{C}_{\vartheta}}+\frac{\mathcal{C}_{\vartheta-1}}{\mathcal{C}_{\vartheta}}\sum\limits_{\nu = 1}^{\vartheta-1}\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta-1}}\mu_{\nu}, \delta^{1}\right)\\ &\leq \hslash_{\circ}\left(\frac{\varphi_{\vartheta}}{\mathcal{C}_{\vartheta}}\right)\Psi_{*}(\mu_{\vartheta}, \delta^{1})+\hslash_{\circ}\left(\frac{\mathcal{C}_{\vartheta-1}}{\mathcal{C}_{\vartheta}}\right)\Psi_{*}\left(\sum\limits_{\nu = 1}^{\vartheta-1}\frac{\varphi_{\nu}\mu_{\nu}}{\mathcal{C}_{\vartheta-1}}, \delta^{1}\right)\\ &\quad-\hslash_{\circ}\left(\frac{\varphi_{\vartheta}}{\mathcal{C}_{\vartheta}}\right)\Psi_{*}\left(\frac{\mathcal{C}_{\vartheta-1}}{\mathcal{C}_{\vartheta}}\left|\mu_{\vartheta}-\sum\limits_{\nu = 1}^{\vartheta-1}\frac{\varphi_{\nu}\mu_{\nu}}{\mathcal{C}_{\vartheta-1}}\right|, \delta^{1}\right)-\hslash_{\circ}\left(\frac{\mathcal{C}_{\vartheta-1}}{\mathcal{C}_{\vartheta}}\right)\Psi_{*}\left(\frac{\varphi_{\vartheta}}{\mathcal{C}_{\vartheta}}\left|\mu_{\vartheta}-\sum\limits_{\nu = 1}^{\vartheta-1}\frac{\varphi_{\nu}\mu_{\nu}}{\mathcal{C}_{\vartheta-1}}\right|, \delta^{1}\right). \end{align} (2.10)

    Using (2.9) in (2.10) and the super-multiplicative property of \hslash_{\circ} , we recapture

    \begin{align*} \Psi_{*}\left(\frac{1}{\mathcal{C}_{\vartheta}}\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}, \delta^{1}\right)&\leq \hslash_{\circ}\left(\frac{\varphi_{\vartheta}}{\mathcal{C}_{\vartheta}}\right)\Psi_{*}(\mu_{\vartheta}, \delta^{1})+\hslash_{\circ}\left(\frac{\mathcal{C}_{\vartheta-1}}{\mathcal{C}_{\vartheta}}\right)\left[\sum\limits_{\nu = 1}^{\vartheta-1}\hslash_{\circ}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta-1}}\right)\Psi_{*}(\mu_{\nu}, \delta^{1})-\sum\limits_{\nu = 1}^{\vartheta-1}\hslash_{\circ}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta-1}}\right)\Psi_{*}\left(\left|\mu_{\nu}-\sum\limits_{\nu = 1}^{\vartheta-1}\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta-1}}\mu_{\nu}\right|, \delta^{1}\right)\right]\\ &\quad-\hslash_{\circ}\left(\frac{\varphi_{\vartheta}}{\mathcal{C}_{\vartheta}}\right)\Psi_{*}\left(\frac{\mathcal{C}_{\vartheta-1}}{\mathcal{C}_{\vartheta}}\left|\mu_{\vartheta}-\sum\limits_{\nu = 1}^{\vartheta-1}\frac{\varphi_{\nu}\mu_{\nu}}{\mathcal{C}_{\vartheta-1}}\right|, \delta^{1}\right)-\hslash_{\circ}\left(\frac{\mathcal{C}_{\vartheta-1}}{\mathcal{C}_{\vartheta}}\right)\Psi_{*}\left(\frac{\varphi_{\vartheta}}{\mathcal{C}_{\vartheta}}\left|\mu_{\vartheta}-\sum\limits_{\nu = 1}^{\vartheta-1}\frac{\varphi_{\nu}\mu_{\nu}}{\mathcal{C}_{\vartheta-1}}\right|, \delta^{1}\right). \end{align*}

    Thus, we have

    \begin{align} \Psi_{*}\left(\frac{1}{\mathcal{C}_{\vartheta}}\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}, \delta^{1}\right)\leq \sum\limits_{\nu = 1}^{\vartheta}\hslash_{\circ}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\right)\Psi_{*}(\mu_{\nu}, \delta^{1})-\sum\limits_{\nu = 1}^{\vartheta}\hslash_{\circ}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\right)\Psi_{*}\left(\left|\mu_{\nu}-\frac{1}{\mathcal{C}_{\vartheta}}\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}\right|, \delta^{1}\right). \end{align} (2.11)

    By similar proceedings, we have

    \begin{align} \Psi^{*}\left(\frac{1}{\mathcal{C}_{\vartheta}}\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}, \delta^{1}\right)\leq \sum\limits_{\nu = 1}^{\vartheta}\hslash_{\circ}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\right)\Psi^{*}(\mu_{\nu}, \delta^{1})-\sum\limits_{\nu = 1}^{\vartheta}\hslash_{\circ}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\right)\Psi^{*}\left(\left|\mu_{\nu}-\frac{1}{\mathcal{C}_{\vartheta}}\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}\right|, \delta^{1}\right). \end{align} (2.12)

    Comparing inequalities (2.11) and (2.12) through the pseudo ordering relation, we have

    \begin{align*} &\left[\Psi_{*}\left(\frac{1}{\mathcal{C}_{\vartheta}}\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}, \delta^{1}\right), \Psi^{*}\left(\frac{1}{\mathcal{C}_{\vartheta}}\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}, \delta^{1}\right)\right]\\ &\preceq_{\rho}\left[\sum\limits_{\nu = 1}^{\vartheta}\hslash_{\circ}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\right)\Psi_{*}(\mu_{\nu}, \delta^{1})-\sum\limits_{\nu = 1}^{\vartheta}\hslash_{\circ}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\right)\Psi_{*}\left(\left|\mu_{\nu}-\frac{1}{\mathcal{C}_{\vartheta}}\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}\right|, \delta^{1}\right), \right.\\ &\qquad\left.\sum\limits_{\nu = 1}^{\vartheta}\hslash_{\circ}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\right)\Psi^{*}(\mu_{\nu}, \delta^{1})-\sum\limits_{\nu = 1}^{\vartheta}\hslash_{\circ}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\right)\Psi^{*}\left(\left|\mu_{\nu}-\frac{1}{\mathcal{C}_{\vartheta}}\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}\right|, \delta^{1}\right)\right]. \end{align*}

    Finally, it can be transformed as

    \begin{align*} \Psi\left(\frac{1}{\mathcal{C}_{\vartheta}}\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}\right)\preceq \sum\limits_{\nu = 1}^{\vartheta}\hslash_{\circ}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\right)\Psi(\mu_{\nu})\ominus_{g}\sum\limits_{\nu = 1}^{\vartheta}\hslash_{\circ}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\right)\Psi\left(\left|\mu_{\nu}-\frac{1}{\mathcal{C}_{\vartheta}}\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}\right|\right). \end{align*}

    Hence, the result is accomplished.

    We deliver some corollaries of Theorem 2.1.

    ● Setting \sum_{\nu = 1}^{\vartheta}\varphi_{\nu} = \mathcal{C}_{\vartheta} = 1 in Theorem 2.1, we attain the generalized Jensen inequality:

    \begin{align*} \Psi\left(\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}\right)\preceq \sum\limits_{\nu = 1}^{\vartheta}\hslash_{\circ}(\varphi_{\nu})\Psi(\mu_{\nu})\ominus_{g}\sum\limits_{\nu = 1}^{\vartheta}\hslash_{\circ}(\varphi_{\nu})\Psi\left(\left|\mu_{\nu}-\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}\right|\right). \end{align*}

    ● To attain Jensen's inequality for the fuzzy interval-valued super-quadratic mapping, we set \hslash_{\circ}(\varphi) = \varphi in Theorem 2.1.

    \begin{align*} \Psi\left(\frac{1}{\mathcal{C}_{\vartheta}}\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}\right)\preceq \sum\limits_{\nu = 1}^{\vartheta}\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\Psi(\mu_{\nu})\ominus_{g}\sum\limits_{\nu = 1}^{\vartheta}\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\Psi\left(\left|\mu_{\nu}-\frac{1}{\mathcal{C}_{\vartheta}}\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}\right|\right). \end{align*}

    ● To attain Jensen's inequality for the fuzzy interval-valued s -super-quadratic mapping, we set \hslash_{\circ}(\varphi) = \varphi^{s} in Theorem 2.1.

    \begin{align*} \Psi\left(\frac{1}{\mathcal{C}_{\vartheta}}\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}\right)\preceq \sum\limits_{\nu = 1}^{\vartheta}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\right)^{s}\Psi(\mu_{\nu})\ominus_{g}\sum\limits_{\nu = 1}^{\vartheta}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\right)^{s}\Psi\left(\left|\mu_{\nu}-\frac{1}{\mathcal{C}_{\vartheta}}\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}\right|\right). \end{align*}

    ● To attain Jensen's inequality for the fuzzy interval-valued s -Godunova-Levin super-quadratic mapping, we set \hslash_{\circ}(\varphi) = \varphi^{-s} in Theorem 2.1.

    \begin{align*} \Psi\left(\frac{1}{\mathcal{C}_{\vartheta}}\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}\right)\preceq \sum\limits_{\nu = 1}^{\vartheta}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\right)^{-s}\Psi(\mu_{\nu})\ominus_{g}\sum\limits_{\nu = 1}^{\vartheta}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\right)^{-s}\Psi\left(\left|\mu_{\nu}-\frac{1}{\mathcal{C}_{\vartheta}}\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}\right|\right). \end{align*}

    ● To attain Jensen's inequality for the fuzzy interval-valued P -super-quadratic mapping, we set \hslash_{\circ}(\varphi) = 1 in Theorem 2.1.

    \begin{align*} \Psi\left(\frac{1}{\mathcal{C}_{\vartheta}}\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}\right)\preceq \sum\limits_{\nu = 1}^{\vartheta}\Psi(\mu_{\nu})\ominus_{g}\sum\limits_{\nu = 1}^{\vartheta}\Psi\left(\left|\mu_{\nu}-\frac{1}{\mathcal{C}_{\vartheta}}\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}\right|\right). \end{align*}

    ● To attain Jensen's inequality for the fuzzy interval-valued exponential super-quadratic mapping, we set \hslash_{\circ}(\varphi) = \exp(\varphi)-1 in Theorem 2.1.

    \begin{align*} \Psi\left(\frac{1}{\mathcal{C}_{\vartheta}}\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}\right)\preceq \sum\limits_{\nu = 1}^{\vartheta}\left[\exp\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\right)-1\right]\Psi(\mu_{\nu})\ominus_{g}\sum\limits_{\nu = 1}^{\vartheta}\left[\exp\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\right)-1\right]\Psi\left(\left|\mu_{\nu}-\frac{1}{\mathcal{C}_{\vartheta}}\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}\right|\right). \end{align*}

    ● By taking \Psi_{*}(\mu, \delta^{1}) = \Psi^{*}(\mu, \delta^{1}) and \delta^{1} = 1 in Theorem 2.1, we get Theorem 1.1 (a numerical sanity check of the level-wise inequality is sketched right after this list).
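    The following Python sketch is our own verification of the level-wise Jensen inequalities (2.7) and (2.8); it assumes \hslash_{\circ}(\varphi) = \varphi and uses the endpoint mappings of Example 2.1 purely as a test case, with randomly sampled points, weights, and levels.

```python
import numpy as np

rng = np.random.default_rng(0)

def psi_lower(u, d):   # lower endpoint mapping of Example 2.1 (test case only)
    return 3 * d * u ** 3

def psi_upper(u, d):   # upper endpoint mapping of Example 2.1 (test case only)
    return (6 - 3 * d) * u ** 3

def jensen_gap(psi, mu, w, d):
    # Right-hand side minus left-hand side of the level-wise Jensen
    # inequality with h(phi) = phi; should be nonnegative.
    c = w.sum()
    xbar = (w * mu).sum() / c
    rhs = (w / c * psi(mu, d)).sum() - (w / c * psi(np.abs(mu - xbar), d)).sum()
    return rhs - psi(xbar, d)

for _ in range(1000):
    mu = rng.uniform(0.0, 2.0, size=5)   # points in [s3, s4] = [0, 2]
    w = rng.uniform(0.1, 1.0, size=5)    # positive weights phi_nu
    d = rng.uniform(0.0, 1.0)            # level delta^1
    assert jensen_gap(psi_lower, mu, w, d) >= -1e-9
    assert jensen_gap(psi_upper, mu, w, d) >= -1e-9
print("level-wise Jensen inequality held on all sampled configurations")
```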

    The next result is the Schur inequality for fuzzy interval-valued super-quadratic mappings.

    Theorem 2.2. If \Psi\in SSQFNF([s_{3}, s_{4}], \hslash_{\circ}) and \mu, y, \mu_{3}\in[s_{3}, s_{4}] with \mu < y < \mu_{3} such that y-\mu, \mu_{3}-y, \mu_{3}-\mu\in[0, 1] , then

    \begin{align*} \hslash_{\circ}(\mu_{3}-\mu)\Psi(y)\preceq \hslash_{\circ}(\mu_{3}-y)[\Psi(\mu)\ominus_{g}\Psi(y-\mu)]\oplus\hslash_{\circ}(y-\mu)[\Psi(\mu_{3})\ominus_{g}\Psi(\mu_{3}-y)]. \end{align*}

    Proof. Assume that \mu, y, \mu_{3}\in I with \mu < y < \mu_{3} such that y-\mu, \mu_{3}-y, \mu_{3}-\mu\in[0, 1] . Since \Psi\in SSQFNF([s_{3}, s_{4}], \hslash_{\circ}) and \hslash_{\circ} is a super-multiplicative mapping, then

    \begin{align*} \Psi_{*}(y, \delta^{1}) = \Psi_{*}\left(\frac{\mu_{3}-y}{\mu_{3}-\mu}\mu+\frac{y-\mu}{\mu_{3}-\mu}\mu_{3}, \delta^{1}\right)\leq \hslash_{\circ}\left(\frac{\mu_{3}-y}{\mu_{3}-\mu}\right)\left[\Psi_{*}(\mu, \delta^{1})-\Psi_{*}\left(\frac{y-\mu}{\mu_{3}-\mu}|\mu_{3}-\mu|, \delta^{1}\right)\right]+\hslash_{\circ}\left(\frac{y-\mu}{\mu_{3}-\mu}\right)\left[\Psi_{*}(\mu_{3}, \delta^{1})-\Psi_{*}\left(\frac{\mu_{3}-y}{\mu_{3}-\mu}|\mu_{3}-\mu|, \delta^{1}\right)\right]. \end{align*}

    Multiplying both sides of the aforementioned inequality by \hslash_{\circ}(\mu_{3}-\mu) and utilizing the super-multiplicative property, we recapture

    \begin{align} \hslash_{\circ}(\mu_{3}-\mu)\Psi_{*}(y, \delta^{1})\leq \hslash_{\circ}(\mu_{3}-y)[\Psi_{*}(\mu, \delta^{1})-\Psi_{*}(y-\mu, \delta^{1})]+\hslash_{\circ}(y-\mu)[\Psi_{*}(\mu_{3}, \delta^{1})-\Psi_{*}(\mu_{3}-y, \delta^{1})]. \end{align} (2.13)

    Likewise, we can prove that

    \begin{align} \hslash_{\circ}(\mu_{3}-\mu)\Psi^{*}(y, \delta^{1})\leq \hslash_{\circ}(\mu_{3}-y)[\Psi^{*}(\mu, \delta^{1})-\Psi^{*}(y-\mu, \delta^{1})]+\hslash_{\circ}(y-\mu)[\Psi^{*}(\mu_{3}, \delta^{1})-\Psi^{*}(\mu_{3}-y, \delta^{1})]. \end{align} (2.14)

    From (2.13) and (2.14), we get the desired containment.

    Remark 2.1. For different substitutions \hslash_{\circ}(\varphi) = \varphi, \varphi^{s}, \varphi^{-s}, \varphi(1-\varphi) , we get a blend of new counterparts for different classes of super-quadraticity. By taking \Psi_{*}(\mu, \delta^{1}) = \Psi^{*}(\mu, \delta^{1}) and \delta^{1} = 1 in Theorem 2.2, we get the reverse Jensen inequality proved in [17].

    Through Theorem 2.2, we now construct the reverse Jensen inequality leveraging fuzzy-number-valued super-quadraticity.

    Theorem 2.3. Let \varphi_{\nu}\geq 0 and \mu_{\nu}\in(v, V)\subseteq I . Let \hslash_{\circ}:(0, 1]\rightarrow[0, \infty) be a nonnegative super-multiplicative mapping and \Psi\in SSQFNF([s_{3}, s_{4}], \hslash_{\circ}) . Then,

    \begin{align*} \sum\limits_{\nu = 1}^{\vartheta}\hslash_{\circ}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\right)\Psi(\mu_{\nu})&\preceq \sum\limits_{\nu = 1}^{\vartheta}\hslash_{\circ}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\right)\left[\hslash_{\circ}\left(\frac{V-\mu_{\nu}}{V-v}\right)\Psi(v)\oplus\hslash_{\circ}\left(\frac{\mu_{\nu}-v}{V-v}\right)\Psi(V)\right]\\ &\quad\ominus_{g}\sum\limits_{\nu = 1}^{\vartheta}\hslash_{\circ}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\right)\left[\hslash_{\circ}\left(\frac{V-\mu_{\nu}}{V-v}\right)\Psi(\mu_{\nu}-v)\oplus\hslash_{\circ}\left(\frac{\mu_{\nu}-v}{V-v}\right)\Psi(V-\mu_{\nu})\right]. \end{align*}

    Proof. Substitute \mu = v , y = \mu_{\nu} , and \mu_{3} = V in Theorem 2.2, and multiply both sides by \hslash_{\circ}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\right) . Finally, we sum over \nu up to \vartheta to acquire the desired estimate.

    Now, we prove the Jensen-Mercer inequality pertaining to fuzzy interval-valued super-quadraticity.

    Theorem 2.4. Let \Psi\in SSQFNF([s_{3}, s_{4}], \hslash_{\circ}) , \mu_{\nu}\in(s_{3}, s_{4}) , and \varphi_{\nu}\geq 0 ; then

    \begin{align*} \Psi\left(s_{3}+s_{4}-\frac{1}{\mathcal{C}_{\vartheta}}\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}\right)\preceq \Psi(s_{3})\oplus\Psi(s_{4})\ominus_{g}\sum\limits_{\nu = 1}^{\vartheta}\hslash_{\circ}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\right)\Psi(\mu_{\nu})\ominus_{g}\sum\limits_{\nu = 1}^{\vartheta}\hslash_{\circ}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\right)[\Psi(\mu_{\nu}-s_{3})\oplus\Psi(s_{4}-\mu_{\nu})]\ominus_{g}\sum\limits_{\nu = 1}^{\vartheta}\hslash_{\circ}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\right)\Psi\left(\left|\mu_{\nu}-\frac{1}{\mathcal{C}_{\vartheta}}\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}\right|\right). \end{align*}

    Proof. Let \Psi_{*}, \Psi^{*}\in SSQF([s_{3}, s_{4}], \hslash_{\circ}) ; then from Theorem 1.2, we get the following inequalities.

    \begin{align} \Psi_{*}\left(s_{3}+s_{4}-\frac{1}{\mathcal{C}_{\vartheta}}\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}, \delta^{1}\right)&\leq \Psi_{*}(s_{3}, \delta^{1})+\Psi_{*}(s_{4}, \delta^{1})-\sum\limits_{\nu = 1}^{\vartheta}\hslash_{\circ}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\right)\Psi_{*}(\mu_{\nu}, \delta^{1})\\ &\quad-\sum\limits_{\nu = 1}^{\vartheta}\hslash_{\circ}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\right)[\Psi_{*}(\mu_{\nu}-s_{3}, \delta^{1})+\Psi_{*}(s_{4}-\mu_{\nu}, \delta^{1})]-\sum\limits_{\nu = 1}^{\vartheta}\hslash_{\circ}\left(\frac{\varphi_{\nu}}{\mathcal{C}_{\vartheta}}\right)\Psi_{*}\left(\left|\mu_{\nu}-\frac{1}{\mathcal{C}_{\vartheta}}\sum\limits_{\nu = 1}^{\vartheta}\varphi_{\nu}\mu_{\nu}\right|, \delta^{1}\right). \end{align} (2.15)

    and

    \begin{align} {\Psi}^{*}\left({{s_{3}}}+{{s_{4}}}-\frac{1}{{\mathcal{C}}_{{\vartheta}}}\sum\limits_{{\nu} = 1}^{\vartheta} {{{\varphi}}}_{{\nu}} {{\mu}}_{{\nu}}, \delta^{1}\right)&\preceq {\Psi}^{*}({{s_{3}}}, \delta^{1})+{\Psi}^{*}({{s_{4}}}, \delta^{1})-\sum\limits_{{\nu} = 1}^{\vartheta} {\hslash_{\circ}}\left(\frac{{{{\varphi}}}_{{\nu}}}{{\mathcal{C}}_{{\vartheta}}}\right){\Psi}^{*}({{\mu}}_{{\nu}}, \delta^{1})\\ &-\sum\limits_{{\nu} = 1}^{\vartheta} {\hslash_{\circ}}\left(\frac{{{{\varphi}}}_{{\nu}}}{{\mathcal{C}}_{{\vartheta}}}\right)[{\Psi}^{*}({{\mu}}_{{\nu}}-{{s_{3}}}, \delta^{1})+{\Psi}^{*}({{s_{4}}}-{{\mu}}_{\nu}, \delta^{1})]\\ &-\sum\limits_{\nu = 1}^{\vartheta} {\hslash_{\circ}}\left(\frac{{{{\varphi}}}_{\nu}}{{\mathcal{C}}_{{\vartheta}}}\right){\Psi}^{*}\left(\left|{{\mu}}_{\nu}-\frac{1}{{\mathcal{C}}_{{\vartheta}}}\sum\limits_{\nu = 1}^{\vartheta} {{{\varphi}}}_{{\nu}}{{\mu}}_{{\nu}}\right|, \delta^{1}\right). \end{align} (2.16)

    Bridging (2.15) and (2.16) through Pseudo ordering, we acquire the fuzzy-valued Jensen-Mercer inequality.

    Remark 2.2. For different substitutions \hslash_{\circ}({\varphi}) = {\varphi}, {\varphi}^s, {\varphi}^{-s}, {\varphi}(1-{\varphi}) , we get fuzzy number-valued Jensen-Mercer inequalities for different classes of super-quadraticity. By taking \Psi_{*}({\mu}, \delta^{1}) = \Psi^{*}({\mu}, \delta^{1}) and \delta^{1} = 1 in Theorem 2.4, we get the Jensen-Mercer inequality proved in [17].

    Theorem 2.5. If {\Psi}\in SSQFNF([{s_{3}}, {s_{4}}], {\hslash_{\circ}}) , then

    \begin{align*} &\frac{1}{2\hslash_{\circ}\left(\frac{1}{2}\right)}{\Psi}\left(\frac{{s_{3}}+{s_{4}}}{2}\right) {\oplus}\frac{1}{({s_{4}}-{s_{3}})}\int^{s_{4}}_{s_{3}} {\Psi}\left(\left|{{\mu}}-\frac{{s_{3}}+{s_{4}}}{2}\right|\right)\mathrm{d}{{\mu}}\preceq \frac{1}{{s_{4}}-{s_{3}}}\int^{s_{4}}_{s_{3}} {\Psi}({{\mu}})\mathrm{d}{{\mu}}\\ &\preceq\frac{{\Psi}({s_{3}}){\oplus}{\Psi}({s_{4}})}{2}\cdot\frac{1}{({s_{4}}-{s_{3}})} \int^{s_{4}}_{s_{3}}\left({\hslash_{\circ}}\left(\frac{{s_{4}}-{{\mu}}}{{s_{4}}-{s_{3}}}\right) +{\hslash_{\circ}}\left(\frac{{{\mu}}-{s_{3}}}{{s_{4}}-{s_{3}}}\right)\right)\mathrm{d}{{\mu}}\\ &\quad\ominus_{g}\frac{1}{{s_{4}}-{s_{3}}}\int^{s_{4}}_{s_{3}}\left[{\hslash_{\circ}}\left(\frac{{s_{4}}-{{\mu}}}{{s_{4}}-{s_{3}}}\right){\Psi}({{\mu}}-{s_{3}}){\oplus} {\hslash_{\circ}}\left(\frac{{{\mu}}-{s_{3}}}{{s_{4}}-{s_{3}}}\right){\Psi}({s_{4}}-{{\mu}})\right]\mathrm{d}{{\mu}}. \end{align*}

    Proof. Since {\Psi}:[{{s_{3}}}, {{s_{4}}}]\rightarrow \mathbb{R}^+_I is a fuzzy interval-valued {\hslash_{\circ}} -super-quadratic mapping, we first consider {{{\varphi}}} = \frac{1}{2} , then we get

    \begin{align*} {\Psi}_{*}\left(\frac{{{\mu}}+{y}}{2}, \delta^{1}\right)\leq {\hslash_{\circ}}\left(\frac{1}{2}\right)[{\Psi}_{*}({{\mu}}, \delta^{1})+{\Psi}_{*}({y}, \delta^{1})] -2{\hslash_{\circ}}\left(\frac{1}{2}\right){\Psi}_{*}\left(\frac{1}{2}|{y}-{{\mu}}|, \delta^{1}\right). \end{align*}

    The above inequality can be transformed as

    \begin{align} {\Psi}_{*}\left(\frac{{{s_{3}}}+{{s_{4}}}}{2}, \delta^{1}\right) &\leq {\hslash_{\circ}}\left(\frac{1}{2}\right){\Psi}_{*}\left({{{{\varphi}}}}{{s_{4}}}+(1-{{{{\varphi}}}}){{s_{3}}}, \delta^{1}\right) +{\hslash_{\circ}}\left(\frac{1}{2}\right){\Psi}_{*}\left({{{{\varphi}}}}{{s_{3}}}+(1-{{{{\varphi}}}}){{s_{4}}}, \delta^{1}\right) \\&-2{\hslash_{\circ}}\left(\frac{1}{2}\right){\Psi}_{*}\left(\frac{{{s_{4}}}-{{s_{3}}}}{2}|1-2{{{{\varphi}}}}|, \delta^{1}\right). \end{align} (2.17)

    Integrating with respect to {{\varphi}} on [0, 1] , we get

    \begin{align*} {\Psi}_{*}\left(\frac{{s_{3}}+{s_{4}}}{2}, \delta^{1}\right) &\leq {\hslash_{\circ}}\left(\frac{1}{2}\right)\int^1_0 {\Psi}_{*}({{{\varphi}}}{s_{4}}+(1-{{{\varphi}}}){s_{3}}, \delta^{1})\mathrm{d}{{{\varphi}}}+ {\hslash_{\circ}}\left(\frac{1}{2}\right)\int^1_0 {\Psi}_{*}({{{\varphi}}}{s_{3}}+(1-{{{\varphi}}}){s_{4}}, \delta^{1})\mathrm{d}{{{\varphi}}}\\ &\quad-2{\hslash_{\circ}}\left(\frac{1}{2}\right)\int^1_0 {\Psi}_{*}\left(\frac{{s_{4}}-{s_{3}}}{2}\left|1-2{{\varphi}}\right|, \delta^{1}\right)\mathrm{d}{{{\varphi}}}, \end{align*}

    which can be rearranged as

    \begin{align*} \frac{1}{2\hslash_{\circ}\left(\frac{1}{2}\right)}{\Psi}_{*}\left(\frac{{s_{3}}+{s_{4}}}{2}, \delta^{1}\right) +\frac{1}{({s_{4}}-{s_{3}})}\int^{s_{4}}_{s_{3}} {\Psi}_{*}\left(\left|{{\mu}}-\frac{{s_{3}}+{s_{4}}}{2}\right|, \delta^{1}\right)\mathrm{d}{{\mu}} \leq \frac{1}{{s_{4}}-{s_{3}}}\int^{s_{4}}_{s_{3}} {\Psi}_{*}({{\mu}}, \delta^{1})\mathrm{d}{{\mu}}. \end{align*}

    By similar arguments, we have

    \begin{align*} &\frac{1}{2\hslash_{\circ}\left(\frac{1}{2}\right)}{\Psi}^{*}\left(\frac{{s_{3}}+{s_{4}}}{2}, \delta^{1}\right) +\frac{1}{({s_{4}}-{s_{3}})}\int^{s_{4}}_{s_{3}} {\Psi}^{*}\left(\left|{{\mu}}-\frac{{s_{3}}+{s_{4}}}{2}\right|, \delta^{1}\right)\mathrm{d}{{\mu}}\leq \frac{1}{{s_{4}}-{s_{3}}}\int^{s_{4}}_{s_{3}} {\Psi}^{*}({{\mu}}, \delta^{1})\mathrm{d}{{\mu}}. \end{align*}

    This implies that,

    \begin{align} &\frac{1}{2\hslash_{\circ}\left(\frac{1}{2}\right)}{\Psi}\left(\frac{{s_{3}}+{s_{4}}}{2}\right) {\oplus}\frac{1}{({s_{4}}-{s_{3}})}\int^{s_{4}}_{s_{3}} {\Psi}\left(\left|{{\mu}}-\frac{{s_{3}}+{s_{4}}}{2}\right|\right)\mathrm{d}{{\mu}}\preceq \frac{1}{{s_{4}}-{s_{3}}}\int^{s_{4}}_{s_{3}} {\Psi}({{\mu}})\mathrm{d}{{\mu}}. \end{align} (2.18)

    Since {\Psi}\in SSQFNF([{s_{3}}, {s_{4}}], {\hslash_{\circ}}) , then

    \begin{align} {\Psi}\left((1-{{{\varphi}}}){s_{4}}+{{{\varphi}}}{s_{3}}\right)&\preceq{\hslash_{\circ}}(1-{{{\varphi}}}){\Psi}({s_{4}}){\oplus}{\hslash_{\circ}}({{\varphi}}){\Psi}({s_{3}})\\&{{\ominus}_{g}}{\hslash_{\circ}}({{\varphi}}){\Psi}\left((1-{{{\varphi}}})|{s_{4}}-{s_{3}}|\right) {{\ominus}_{g}}{\hslash_{\circ}}(1-{{{\varphi}}}){\Psi}\left({{{\varphi}}}|{s_{4}}-{s_{3}}|\right), \end{align} (2.19)

    and

    \begin{align} {\Psi}({{{\varphi}}}{s_{4}}+(1-{{{\varphi}}}){s_{3}})&\preceq{\hslash_{\circ}}({{\varphi}}){\Psi}({s_{4}}){\oplus}{\hslash_{\circ}}(1-{{{\varphi}}}){\Psi}({s_{3}})\\&{{\ominus}_{g}}{\hslash_{\circ}}({{\varphi}}){\Psi}((1-{{{\varphi}}})|{s_{4}}-{s_{3}}|){{\ominus}_{g}}{\hslash_{\circ}}(1-{{{\varphi}}}){\Psi}({{{\varphi}}}|{s_{4}}-{s_{3}}|) . \end{align} (2.20)

    Adding (2.19) and (2.20), we have

    \begin{align*} &{\Psi}\left((1-{{{\varphi}}}){s_{4}}+{{{\varphi}}}{s_{3}}\right){\oplus}{\Psi}({{{\varphi}}}{s_{4}}+(1-{{{\varphi}}}){s_{3}})\nonumber\\ &\preceq [\hslash_{\circ}({{\varphi}})+\hslash_{\circ}(1-{{\varphi}})][\Psi({s_{3}}){\oplus}\Psi({s_{4}})]{{\ominus}_{g}}2\hslash_{\circ}({{\varphi}}){\Psi}((1-{{{\varphi}}})|{s_{4}}-{s_{3}}|){{\ominus}_{g}}2{\hslash_{\circ}}(1-{{{\varphi}}}){\Psi}({{{\varphi}}}|{s_{4}}-{s_{3}}|). \end{align*}

    This can be transformed as

    \begin{align} &{\Psi}_{*}((1-{{{\varphi}}}){s_{4}}+{{{\varphi}}}{s_{3}}, \delta^{1})+{\Psi}_{*}({{{\varphi}}}{s_{4}}+(1-{{{\varphi}}}){s_{3}}, \delta^{1})\\ &\leq [{\hslash_{\circ}}({{\varphi}})+\hslash_{\circ}(1-{{\varphi}})][{\Psi}_{*}({s_{4}}, \delta^{1})+{\Psi}_{*}({s_{3}}, \delta^{1})]-2{\hslash_{\circ}}({{\varphi}}){\Psi}_{*}((1-{{{\varphi}}})|{s_{4}}-{s_{3}}|, \delta^{1}) \\ &\quad-2{\hslash_{\circ}}(1-{{{\varphi}}}){\Psi}_{*}({{{\varphi}}}|{s_{4}}-{s_{3}}|, \delta^{1}), \end{align} (2.21)

    Integrating (2.21) with respect to {{\varphi}} on [0, 1] ,

    \begin{align} &\frac{1}{{s_{4}}-{s_{3}}}\int^{s_{4}}_{s_{3}}{\Psi}_{*}({{\mu}}, \delta^{1})\mathrm{d}{{\mu}}\leq \frac{{\Psi}_{*}({s_{3}}, \delta^{1})+{\Psi}_{*}({s_{4}}, \delta^{1})}{2}\cdot \frac{1}{{s_{4}}-{s_{3}}}\int^{s_{4}}_{s_{3}} \left({\hslash_{\circ}}\left(\frac{{s_{4}}-{{\mu}}}{{s_{4}}-{s_{3}}}\right) +{\hslash_{\circ}}\left(\frac{{{\mu}}-{s_{3}}}{{s_{4}}-{s_{3}}}\right)\right)\mathrm{d}{{\mu}}\\ &\quad-\frac{1}{{s_{4}}-{s_{3}}}\int^{s_{4}}_{s_{3}}\left[{\hslash_{\circ}}\left(\frac{{s_{4}}-{{\mu}}}{{s_{4}}-{s_{3}}}\right){\Psi}_{*}({{\mu}}-{s_{3}}, \delta^{1}) +{\hslash_{\circ}}\left(\frac{{{\mu}}-{s_{3}}}{{s_{4}}-{s_{3}}}\right){\Psi}_{*}({s_{4}}-{{\mu}}, \delta^{1})\right]\mathrm{d}{{\mu}}. \end{align} (2.22)

    Through a similar strategy, we acquire

    \begin{align} &\frac{1}{{s_{4}}-{s_{3}}}\int^{s_{4}}_{s_{3}}{\Psi}^{*}({{\mu}})\mathrm{d}{{\mu}}\leq \frac{{\Psi}^{*}({s_{3}})+{\Psi}^{*}({s_{4}})}{2}\cdot \frac{1}{{s_{4}}-{s_{3}}}\int^{s_{4}}_{s_{3}} \left({\hslash_{\circ}}\left(\frac{{s_{4}}-{{\mu}}}{{s_{4}}-{s_{3}}}\right)+{\hslash_{\circ}}\left(\frac{{{\mu}}-{s_{3}}}{{s_{4}}-{s_{3}}}\right)\right)\mathrm{d}{{\mu}} \\ &\quad-\frac{1}{{s_{4}}-{s_{3}}}\int^{s_{4}}_{s_{3}}\left[{\hslash_{\circ}}\left(\frac{{s_{4}}-{{\mu}}}{{s_{4}}-{s_{3}}}\right){\Psi}^{*}({{\mu}}-{s_{3}})+{\hslash_{\circ}}\left(\frac{{{\mu}}-{s_{3}}}{{s_{4}}-{s_{3}}}\right){\Psi}^{*}({s_{4}}-{{\mu}})\right]\mathrm{d}{{\mu}}. \end{align} (2.23)

    Combining inequalities (2.18), (2.22), and (2.23), we recapture the desired result. So, the result is accomplished.

    Now, we present some deductions of Theorem 2.5.

    ● Taking {\hslash_{\circ}}({{\varphi}}) = {{\varphi}} , we recapture

    \begin{align*} &{\Psi}\left(\frac{{s_{3}}+{s_{4}}}{2}\right){\oplus}\frac{1}{({s_{4}}-{s_{3}})}\int^{s_{4}}_{s_{3}} {\Psi}\left(\left|{{\mu}}-\frac{{s_{3}}+{s_{4}}}{2}\right|\right)\mathrm{d}{{\mu}}\preceq \frac{1}{{s_{4}}-{s_{3}}}\int^{s_{4}}_{s_{3}} {\Psi}({{\mu}})\mathrm{d}{{\mu}}\\ &\preceq\frac{{\Psi}({s_{3}}){\oplus}{\Psi}({s_{4}})}{2}{{\ominus}_{g}}\frac{1}{{s_{4}}-{s_{3}}}\int^{s_{4}}_{s_{3}}\left[\left(\frac{{s_{4}}-{{\mu}}}{{s_{4}}-{s_{3}}}\right){\Psi}({{\mu}}-{s_{3}}){\oplus} \left(\frac{{{\mu}}-{s_{3}}}{{s_{4}}-{s_{3}}}\right){\Psi}({s_{4}}-{{\mu}})\right]\mathrm{d}{{\mu}}. \end{align*}

    ● Taking {\hslash_{\circ}}({{\varphi}}) = {{\varphi}}^s , we recapture

    \begin{align*} &2^{s-1}{\Psi}\left(\frac{{s_{3}}+{s_{4}}}{2}\right){\oplus}\frac{1}{({s_{4}}-{s_{3}})}\int^{s_{4}}_{s_{3}} {\Psi}\left(\left|{{\mu}}-\frac{{s_{3}}+{s_{4}}}{2}\right|\right)\mathrm{d}{{\mu}}\preceq \frac{1}{{s_{4}}-{s_{3}}}\int^{s_{4}}_{s_{3}} {\Psi}({{\mu}})\mathrm{d}{{\mu}}\\ &\preceq\frac{{\Psi}({s_{3}}){\oplus}{\Psi}({s_{4}})}{s+1}{{\ominus}_{g}}\frac{1}{{s_{4}}-{s_{3}}}\int^{s_{4}}_{s_{3}}\left[\left(\frac{{s_{4}}-{{\mu}}}{{s_{4}}-{s_{3}}}\right)^s{\Psi}({{\mu}}-{s_{3}}){\oplus}\left(\frac{{{\mu}}-{s_{3}}}{{s_{4}}-{s_{3}}}\right)^s{\Psi}({s_{4}}-{{\mu}})\right]\mathrm{d}{{\mu}}. \end{align*}

    ● Taking {\hslash_{\circ}}({{\varphi}}) = {{\varphi}}^{-s} , we recapture

    \begin{align*} &2^{-s-1}{\Psi}\left(\frac{{s_{3}}+{s_{4}}}{2}\right){\oplus}\frac{1}{({s_{4}}-{s_{3}})}\int^{s_{4}}_{s_{3}} {\Psi}\left(\left|{{\mu}}-\frac{{s_{3}}+{s_{4}}}{2}\right|\right)\mathrm{d}{{\mu}}\preceq \frac{1}{{s_{4}}-{s_{3}}}\int^{s_{4}}_{s_{3}} {\Psi}({{\mu}})\mathrm{d}{{\mu}}\\ &\preceq\frac{{\Psi}({s_{3}}){\oplus}{\Psi}({s_{4}})}{1-s}{{\ominus}_{g}}\frac{1}{{s_{4}}-{s_{3}}}\int^{s_{4}}_{s_{3}} \left[\left(\frac{{s_{4}}-{{\mu}}}{{s_{4}}-{s_{3}}}\right)^{-s}{\Psi} ({{\mu}}-{s_{3}}){\oplus}\left(\frac{{{\mu}}-{s_{3}}}{{s_{4}}-{s_{3}}}\right)^{-s}{\Psi}({s_{4}}-{{\mu}})\right]\mathrm{d}{{\mu}}. \end{align*}

    ● Taking {\hslash_{\circ}}({{\varphi}}) = 1 , we recapture

    \begin{align*} &2^{-1}{\Psi}\left(\frac{{s_{3}}+{s_{4}}}{2}\right){\oplus}\frac{1}{({s_{4}}-{s_{3}})}\int^{s_{4}}_{s_{3}} {\Psi}\left(\left|{{\mu}}-\frac{{s_{3}}+{s_{4}}}{2}\right|\right)\mathrm{d}{{\mu}}\preceq \frac{1}{{s_{4}}-{s_{3}}}\int^{s_{4}}_{s_{3}} {\Psi}({{\mu}})\mathrm{d}{{\mu}}\\ &\preceq[{\Psi}({s_{3}}){\oplus}{\Psi}({s_{4}})]{{\ominus}_{g}}\frac{1}{{s_{4}}-{s_{3}}}\int^{s_{4}}_{s_{3}}\left[{\Psi}({{\mu}}-{s_{3}}){\oplus}{\Psi}({s_{4}}-{{\mu}})\right]\mathrm{d}{{\mu}}. \end{align*}

    Remark 2.3. We can get a blend of new Hermite-Hadamard type inequalities for different values of \hslash_{\circ} , and by taking \Psi_{*}({\mu}, \delta^{1}) = \Psi^{*}({\mu}, \delta^{1}) and \delta^{1} = 1 in Theorem 2.5, we get the classical Hermite-Hadamard inequality for super-quadratic mappings, which is derived in [15].

    Example 2.2. Let {\Psi}:[{s_{3}}, \, {s_{4}}] = [0, 2]\rightarrow \Upsilon_{\delta^{1}} be a fuzzy valued super-quadratic mapping, which is defined as

    \begin{align*} {\Psi}_{{{\mu}}}({\mu_{1}}) = \left\{ \begin{array}{ll} \frac{{\mu_{1}}}{3{{\mu}}^3}, \quad {\mu_{1}}\in [0, 3{{\mu}}^3]\\ \frac{6{{\mu}}^3-{\mu_{1}}}{3{{\mu}}^3}, \quad {\mu_{1}}\in(3{{\mu}}^3, 6{{\mu}}^3]. \end{array} \right. \end{align*}

    and its level cuts are {\Psi}_{{\delta^{1}}} = \left[3{\delta^{1}} {{\mu}}^3, \, (6-3 {\delta^{1}}){{\mu}}^3 \right] . It fulfills the conditions of Theorem 2.5; applying the theorem with {\hslash_{\circ}}({{\varphi}}) = {{\varphi}} on [{s_{3}}, {s_{4}}] = [0, {s_{4}}] gives

    \begin{align*} {\mathit{\hbox{Left Term}}}& = \left[\frac{15 \delta^{1} {s_{4}}^3}{32}, \, \frac{1}{32} \left(30-15 \delta^{1} \right) {s_{4}}^3\right], \nonumber\\ {\mathit{\hbox{Middle Term}}}& = \left[\frac{3 \delta^{1} {s_{4}}^3}{4}, \frac{1}{4} \left(6-3 \delta^{1} \right) {s_{4}}^3\right], \nonumber\\ {\mathit{\hbox{Right Term}}}& = \left[\frac{6 \delta^{1} {s_{4}}^3}{5}, \frac{2}{5} \left(6-3 \delta^{1} \right) {s_{4}}^3\right]. \end{align*}

    To visualize the above formulations, we fix \delta^{1} and vary {s_{4}} .

    Figure 1.  Visual analysis of Theorem 2.5.

    Note that L.L.F, L.U.F, M.L.F, M.U.F, R.L.F, and R.U.F specify the endpoint mappings of the left, middle, and right terms of Theorem 2.5.

    Table 1.  Numerical validation of Example 2.2.
    (\delta^{1}, {s_{4}}) L_*(\delta^{1}, {s_{4}}) L^*(\delta^{1}, {s_{4}}) M_*(\delta^{1}, {s_{4}}) M^*(\delta^{1}, {s_{4}}) R_*(\delta^{1}, {s_{4}}) R^*(\delta^{1}, {s_{4}})
    (0.3, 1) 0.140625 0.796875 0.2250 1.2750 0.3600 2.0400
    (0.4, 1.3) 0.411938 1.64775 0.6591 2.6364 1.05456 4.21824
    (0.5, 1.5) 0.791016 2.37305 1.26563 3.79688 2.025 6.0750
    (0.8, 1.8) 2.1870 3.2805 3.4992 5.2488 5.59872 8.39808

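    The tabulated values can be reproduced numerically. The following Python sketch is our own verification code (not part of the paper); it assumes \hslash_{\circ}(\varphi) = \varphi , s_{3} = 0 , and plain level-wise (endpoint-wise) arithmetic, evaluates the three terms of Theorem 2.5 for Example 2.2 by a midpoint rule, and checks the claimed ordering.

```python
import numpy as np

def psi(u, d):
    # Level cuts of the fuzzy mapping in Example 2.2: [3*d*u^3, (6 - 3*d)*u^3].
    return np.array([3 * d * u ** 3, (6 - 3 * d) * u ** 3])

def terms(d, b, n=200_000):
    # Left, middle, and right terms of Theorem 2.5 with h(phi) = phi on [0, b],
    # computed endpoint-wise with a midpoint rule.
    mu = (np.arange(n) + 0.5) * b / n       # midpoint grid on [0, b]
    avg = lambda vals: vals.mean(axis=-1)   # approximates (1/b) * integral over [0, b]
    mid = 0.5 * b
    left = psi(mid, d) + avg(psi(np.abs(mu - mid), d))
    middle = avg(psi(mu, d))
    right = (psi(0.0, d) + psi(b, d)) / 2.0 - avg(
        ((b - mu) / b) * psi(mu, d) + (mu / b) * psi(b - mu, d))
    return left, middle, right

for d, b in [(0.3, 1.0), (0.4, 1.3), (0.5, 1.5), (0.8, 1.8)]:
    L, M, R = terms(d, b)
    print((d, b), np.round(L, 5), np.round(M, 5), np.round(R, 5))
    # LR (pseudo) order of Theorem 2.5: each endpoint of L is below M, and M below R.
    assert np.all(L <= M + 1e-6) and np.all(M <= R + 1e-6)
```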

    Theorem 2.6. If {\Psi}\in SSQFNF([{s_{3}}, {s_{4}}], {\hslash_{\circ}}) and g:[s_{3}, {s_{4}}]\rightarrow \mathbb{R} is a symmetric mapping, then

    \begin{align*} &\frac{1}{2{\hslash_{\circ}}\left(\frac{1}{2}\right)}{\Psi}\left(\frac{{s_{3}}+{s_{4}}}{2}\right)\int_{s_{3}}^{s_{4}} {g}(\mu)\mathrm{d}\mu{\oplus}\int_{s_{3}}^{s_{4}} {\Psi}\left(\left|\mu-\frac{{s_{3}}+{s_{4}}}{2}\right|\right){g}(\mu)\mathrm{d}{\mu}\nonumber\\ &\preceq \int_{s_{3}}^{s_{4}} {\Psi}(\mu, \delta^{1}){g}(\mu)\mathrm{d}{\mu}\nonumber\\ &\leq \frac{[{\Psi}({s_{4}}){\oplus}{\Psi}({s_{3}})]}{2}\int_{s_{3}}^{s_{4}} \left[{\hslash_{\circ}}\left(\frac{{s_{4}}-{\mu}}{{s_{4}}-{s_{3}}}\right)+{\hslash_{\circ}}\left(\frac{{\mu}-{s_{3}}}{{s_{4}}-{s_{3}}}\right)\right]{g}(\mu)\mathrm{d}{\mu}\nonumber\\ &{{\ominus}_{g}}\int_{s_{3}}^{s_{4}}{\hslash_{\circ}}\left(\frac{{\mu}-{s_{3}}}{{s_{4}}-{s_{3}}}\right){\Psi}({s_{4}}-{\mu}){g}(\mu)\mathrm{d}{\mu}{{\ominus}_{g}}\int_{s_{3}}^{s_{4}} {\hslash_{\circ}}\left(\frac{{s_{4}}-{\mu}}{{s_{4}}-{s_{3}}}\right){\Psi}({\mu}-{s_{3}}){g}(\mu)\mathrm{d}{\mu}. \end{align*}

    Proof. Since {\Psi}:[{{s_{3}}}, {{s_{4}}}]\rightarrow \mathbb{R}^+_I is a fuzzy interval-valued {\hslash_{\circ}} -super-quadratic mapping, then by multiplying (2.17) by {g}(\varphi{s_{4}}+(1-{{\varphi}})a) and integrating with respect to {{\varphi}} on [0, 1] , we get

    \begin{align*} {\Psi}_{*}\left(\frac{{s_{3}}+{s_{4}}}{2}, \delta^{1}\right)\int_0^1 &{g}(\varphi{s_{4}}+(1-{{\varphi}})a)\mathrm{d}{{\varphi}} \leq {\hslash_{\circ}}\left(\frac{1}{2}\right)\int^1_0 {\Psi}_{*}({{{\varphi}}}{s_{4}}+(1-{{{\varphi}}}){s_{3}}, \delta^{1}){g}(\varphi{s_{4}}+(1-{{\varphi}})a)\mathrm{d}{{{\varphi}}}\\&+ {\hslash_{\circ}}\left(\frac{1}{2}\right)\int^1_0 {\Psi}_{*}({{{\varphi}}}{s_{3}}+(1-{{{\varphi}}}){s_{4}}, \delta^{1}){g}(\varphi{s_{4}}+(1-{{\varphi}})a)\mathrm{d}{{{\varphi}}}\\ &-2{\hslash_{\circ}}\left(\frac{1}{2}\right)\int_0^1 {\Psi}_{*}\left(\frac{{s_{4}}-{s_{3}}}{2}\left|1-2{{\varphi}}\right|, \delta^{1}\right){g}(\varphi{s_{4}}+(1-{{\varphi}})a)\mathrm{d}{{{\varphi}}}. \end{align*}

    Since g is a symmetric mapping about \frac{{s_{3}}+{s_{4}}}{2} , then g({{{\varphi}}}{s_{4}}+(1-{{\varphi}}){s_{3}}) = g((1-{{{\varphi}}}){s_{4}}+{{{\varphi}}}{s_{3}}) . Using this fact in the above inequality, we get

    \begin{align*} &{\Psi}_{*}\left(\frac{{s_{3}}+{s_{4}}}{2}, \delta^{1}\right)\int_0^1 {g}(\varphi{s_{4}}+(1-{{\varphi}})a)\mathrm{d}{{\varphi}}\nonumber\\ &\leq {\hslash_{\circ}}\left(\frac{1}{2}\right)\int^1_0 {\Psi}_{*}({{{\varphi}}}{s_{4}}+(1-{{{\varphi}}}){s_{3}}, \delta^{1}){g}(\varphi{s_{4}}+(1-{{\varphi}})a)\mathrm{d}{{{\varphi}}}\\ &\, \, + {\hslash_{\circ}}\left(\frac{1}{2}\right)\int^1_0 {\Psi}_{*}({{{\varphi}}}{s_{3}}+(1-{{{\varphi}}}){s_{4}}, \delta^{1}){g}({{{\varphi}}}{s_{3}}+(1-{{\varphi}}){s_{4}})\mathrm{d}{{{\varphi}}}\\ &\quad-2{\hslash_{\circ}}\left(\frac{1}{2}\right)\int_0^1 {\Psi}_{*}\left(\frac{{s_{4}}-{s_{3}}}{2}\left|1-2{{\varphi}}\right|, \delta^{1}\right){g}(\varphi{s_{4}}+(1-{{\varphi}})a)\mathrm{d}{{{\varphi}}}. \end{align*}

    This implies that

    \begin{align*} \frac{1}{2{\hslash_{\circ}}\left(\frac{1}{2}\right)}{\Psi}_{*}\left(\frac{{s_{3}}+{s_{4}}}{2}, \delta^{1}\right)\int_{s_{3}}^{s_{4}} {g}(\mu)\mathrm{d}\mu+\int_{s_{3}}^{s_{4}} {\Psi}_{*}\left(\left|\mu-\frac{{s_{3}}+{s_{4}}}{2}\right|, \delta^{1}\right){g}(\mu)\mathrm{d}{\mu}\leq \int_{s_{3}}^{s_{4}} {\Psi}_{*}(\mu, \delta^{1}){g}(\mu)\mathrm{d}{\mu}. \end{align*}

    Similarly, we have

    \begin{align*} \frac{1}{2{\hslash_{\circ}}\left(\frac{1}{2}\right)}{\Psi}^{*}\left(\frac{{s_{3}}+{s_{4}}}{2}, \delta^{1}\right)\int_{s_{3}}^{s_{4}} {g}(\mu)\mathrm{d}\mu +\int_{s_{3}}^{s_{4}} {\Psi}^{*}\left(\left|\mu-\frac{{s_{3}}+{s_{4}}}{2}\right|, \delta^{1}\right){g}(\mu)\mathrm{d}{\mu}\leq \int_{s_{3}}^{s_{4}} {\Psi}^{*}(\mu, \delta^{1}){g}(\mu)\mathrm{d}{\mu}. \end{align*}

    Combining the last two inequalities in the Pseudo ordering relation, we have

    \begin{align*} &\left[\frac{1}{2{\hslash_{\circ}}\left(\frac{1}{2}\right)}{\Psi}_{*}\left(\frac{{s_{3}}+{s_{4}}}{2}, \delta^{1}\right)\int_{s_{3}}^{s_{4}} {g}(\mu)\mathrm{d}\mu+\int_{s_{3}}^{s_{4}} {\Psi}_{*}\left(\left|\mu-\frac{{s_{3}}+{s_{4}}}{2}\right|, \delta^{1}\right){g}(\mu)\mathrm{d}{\mu}, \right.\nonumber\\ &\left.\ \frac{1}{2{\hslash_{\circ}}\left(\frac{1}{2}\right)}{\Psi}^{*}\left(\frac{{s_{3}}+{s_{4}}}{2}, \delta^{1}\right)\int_{s_{3}}^{s_{4}} {g}(\mu)\mathrm{d}\mu +\int_{s_{3}}^{s_{4}} {\Psi}^{*}\left(\left|\mu-\frac{{s_{3}}+{s_{4}}}{2}\right|, \delta^{1}\right){g}(\mu)\mathrm{d}{\mu}\quad\right]\nonumber\\ &\preceq \left[\int_{s_{3}}^{s_{4}} {\Psi}_{*}(\mu, \delta^{1}){g}(\mu)\mathrm{d}{\mu}, \, \int_{s_{3}}^{s_{4}} {\Psi}^{*}(\mu, \delta^{1}){g}(\mu)\mathrm{d}{\mu}\right]. \end{align*}

    Finally, we can write

    \begin{align} \frac{1}{2{\hslash_{\circ}}\left(\frac{1}{2}\right)}{\Psi}\left(\frac{{s_{3}}+{s_{4}}}{2}\right)\int_{s_{3}}^{s_{4}} {g}(\mu)\mathrm{d}\mu{\oplus}\int_{s_{3}}^{s_{4}} {\Psi}\left(\left|\mu-\frac{{s_{3}}+{s_{4}}}{2}\right|\right){g}(\mu)\mathrm{d}{\mu}\preceq \int_{s_{3}}^{s_{4}} {\Psi}(\mu){g}(\mu)\mathrm{d}{\mu}. \end{align} (2.24)

    Multiplying (2.21) by {g}({{{\varphi}}}{s_{4}}+(1-{{{\varphi}}}){s_{3}}) and integrating with respect to {{{\varphi}}} on [0, 1] , we get

    \begin{align*} &\int_0^1{\Psi}_{*}({{{\varphi}}}{s_{3}}+(1-{{{\varphi}}}){s_{4}}, \delta^{1}){g}({{{\varphi}}}{s_{4}}+(1-{{{\varphi}}}){s_{3}})\mathrm{d}{{{\varphi}}}+\int_0^1{\Psi}_{*}({{{\varphi}}}{s_{4}}+(1-{{{\varphi}}}){s_{3}}, \delta^{1}) {g}({{{\varphi}}}{s_{4}}+(1-{{{\varphi}}}){s_{3}})\mathrm{d}{{{\varphi}}}\nonumber\\ &\leq [{\Psi}_{*}({s_{4}}, \delta^{1})+{\Psi}_{*}({s_{3}}, \delta^{1})]\int_0^1[{\hslash_{\circ}}({{\varphi}})+\hslash_{\circ}(1-{{\varphi}})]{g}({{{\varphi}}}{s_{4}}+(1-{{{\varphi}}}){s_{3}})\mathrm{d}{{{\varphi}}}\nonumber\\ &-2\int_0^1{\hslash_{\circ}}({{\varphi}}){\Psi}_{*}((1-{{{\varphi}}})|{s_{4}}-{s_{3}}|, \delta^{1}) {g}({{{\varphi}}}{s_{4}}+(1-{{{\varphi}}}){s_{3}})\mathrm{d}{{{\varphi}}}\nonumber\\ &-2\int_0^1 {\hslash_{\circ}}(1-{{{\varphi}}}){\Psi}_{*}({{{\varphi}}}|{s_{4}}-{s_{3}}|, \delta^{1}){g}({{{\varphi}}}{s_{4}}+(1-{{{\varphi}}}){s_{3}})\mathrm{d}{{{\varphi}}}. \end{align*}

    We can write

    \begin{align} &\int_{s_{3}}^{s_{4}}{\Psi}_{*}(\mu, \delta^{1}){g}(\mu)\mathrm{d}{{{\varphi}}}\leq \frac{[{\Psi}_{*}({s_{4}}, \delta^{1})+{\Psi}_{*}({s_{3}}, \delta^{1})]}{2}\int_{s_{3}}^{s_{4}}\left[{\hslash_{\circ}}\left(\frac{{s_{4}}-{\mu}}{{s_{4}}-{s_{3}}}\right)+{\hslash_{\circ}}\left(\frac{{\mu}-{s_{3}}}{{s_{4}}-{s_{3}}}\right)\right]{g}(\mu)\mathrm{d}{\mu}\\ &-\int_{s_{3}}^{s_{4}}{\hslash_{\circ}}\left(\frac{{\mu}-{s_{3}}}{{s_{4}}-{s_{3}}}\right){\Psi}_{*}({s_{4}}-{\mu}, \delta^{1}) {g}(\mu)\mathrm{d}{\mu}-\int_{s_{3}}^{s_{4}} {\hslash_{\circ}}\left(\frac{{s_{4}}-{\mu}}{{s_{4}}-{s_{3}}}\right){\Psi}_{*}({\mu}-{s_{3}}, \delta^{1}){g}(\mu)\mathrm{d}{\mu}. \end{align} (2.25)

    Also,

    \begin{align} &\int_{s_{3}}^{s_{4}}{\Psi}^{*}(\mu, \delta^{1}){g}(\mu)\mathrm{d}{{{\varphi}}}\leq \frac{[{\Psi}^{*}({s_{4}}, \delta^{1})+{\Psi}^{*}({s_{3}}, \delta^{1})]}{2}\int_{s_{3}}^{s_{4}} \left[{\hslash_{\circ}}\left(\frac{{s_{4}}-{\mu}}{{s_{4}}-{s_{3}}}\right)+{\hslash_{\circ}}\left(\frac{{\mu}-{s_{3}}}{{s_{4}}-{s_{3}}}\right)\right]{g}(\mu)\mathrm{d}{\mu}\\ &-\int_{s_{3}}^{s_{4}}{\hslash_{\circ}}\left(\frac{{\mu}-{s_{3}}}{{s_{4}}-{s_{3}}}\right){\Psi}^{*}({s_{4}}-{\mu}, \delta^{1}) {g}(\mu)\mathrm{d}{\mu}-\int_{s_{3}}^{s_{4}} {\hslash_{\circ}}\left(\frac{{s_{4}}-{\mu}}{{s_{4}}-{s_{3}}}\right){\Psi}^{*}({\mu}-{s_{3}}, \delta^{1}){g}(\mu)\mathrm{d}{\mu}. \end{align} (2.26)

    Combining (2.25) and (2.26), we achieve the required inequality. Hence, the result is completed.

    Remark 2.4. By selecting \hslash_{\circ}({\varphi}) = {\varphi}, {\varphi}^s, {\varphi}^{-s}, {\varphi}(1-{\varphi}) in Theorem 2.6, we get \verb"F".\verb"N".\verb"V" Hermite-Hadamard-Fejer inequalities for different classes of super-quadraticity. If we take g({{\mu}}) = 1 in Theorem 2.6, we obtain the Hermite-Hadamard inequality. Also, by taking \Psi_{*}({\mu}, \delta^{1}) = \Psi^{*}({\mu}, \delta^{1}) and \delta^{1} = 1 in Theorem 2.6, we obtain the Hermite-Hadamard-Fejer inequality.

    Example 2.3. Let {\Psi}:[{s_{3}}, \, {s_{4}}] = [0, 2]\rightarrow \Upsilon_{\delta^{1}} be fuzzy valued super-quadratic mapping, which is defined as

    \begin{align*} {\Psi}_{{{\mu}}}({\mu_{1}}) = \left\{ \begin{array}{ll} \frac{{\mu_{1}}}{3{{\mu}}^3}, \quad {\mu_{1}}\in [0, 3{{\mu}}^3]\\ \frac{6{{\mu}}^3-{\mu_{1}}}{3{{\mu}}^3}, \quad {\mu_{1}}\in(3{{\mu}}^3, 6{{\mu}}^3]. \end{array} \right. \end{align*}

    and its level cuts are {\Psi}_{{\delta^{1}}} = \left[3{\delta^{1}} {{\mu}}^3, \, (6-3 {\delta^{1}}){{\mu}}^3 \right] . Also g:[0, 2]\rightarrow \mathbb{R} is a symmetric integrable mapping and is defined as g({\mu}) = \left\{ \begin{array}{ll} {\mu}, \quad {\mu}\in[0, 1] \\ 2-{\mu}, \quad {\mu}\in (1, 2]. \end{array} \right. . Both mappings fulfill the condition of Theorem 2.6, then

    \begin{align*} {\mathit{\hbox{Left Term}}}& = \left[\frac{3}{16} {s_{4}}^3 \left(2-(2-{s_{4}})^2\right) \delta^{1}, \, \frac{1}{16} {s_{4}}^3 \left(2-(2-{s_{4}})^2\right) (6-3 \delta^{1} )\right], \nonumber\\ {\mathit{\hbox{Middle Term}}}& = \left[\frac{3}{10} \left({s_{4}}^4 (5-2 {s_{4}})-3\right) \delta^{1} +\frac{3 \delta^{1} }{5}, \frac{1}{10} \left({s_{4}}^4 (5-2 {s_{4}})-3\right) (6-3 \delta^{1} )+\frac{1}{5} (6-3 \delta^{1})\right], \nonumber\\ {\mathit{\hbox{Right Term}}}& = \left[\frac{3}{4} {s_{4}}^3 \left(2-(2-{s_{4}})^2\right) \delta^{1} -\frac{\left(-3 {s_{4}}^6+12 {s_{4}}^5-20 {s_{4}}^3+30 {s_{4}}^2-24 {s_{4}}+8\right) (3 \delta^{1} )}{60 {s_{4}}}, \right.\nonumber\\ &\left.\quad\frac{1}{4} {s_{4}}^3 \left(2-(2-{s_{4}})^2\right) (6-3 \delta^{1} )-\frac{\left(-3 {s_{4}}^6+12 {s_{4}}^5-20 {s_{4}}^3+30 {s_{4}}^2-24 {s_{4}}+8\right) (6-3 \delta^{1} )}{60 {s_{4}}}\right]. \end{align*}

    To visualize the above formulations, we fix \delta^{1} and vary {s_{4}} .

    Figure 2.  Visual analysis of Theorem 2.6.

    Note that L.L.F, L.U.F, M.L.F, M.U.F, R.L.F, and R.U.F specify the endpoint mappings of the left, middle, and right terms of Theorem 2.6.

    Table 2.  Numerical validation of Example 2.3.
    (\delta^{1}, {s_{4}}) L_*(\delta^{1}, {s_{4}}) L^*(\delta^{1}, {s_{4}}) M_*(\delta^{1}, {s_{4}}) M^*(\delta^{1}, {s_{4}}) R_*(\delta^{1}, {s_{4}}) R^*(\delta^{1}, {s_{4}})
    (0.3, 1) 0.05625 0.31875 0.1800 1.0200 0.1800 1.0200
    (0.4, 1.3) 0.24881 0.995241 0.702557 2.81023 0.785476 3.1419
    (0.5, 1.5) 0.553711 1.66113 1.36875 4.10625 1.73229 5.19688
    (0.8, 1.8) 1.71461 2.57191 3.28719 4.93079 5.30129 7.95193


    This section contains fractional trapezoidal-like inequalities incorporated with \verb"F".\verb"N".\verb"V" - \hslash_{\circ} super-quadratic mappings.

    Lemma 3.1. If {\Psi}\in SSQFNF([{s_{3}}, {s_{4}}], {\hslash_{\circ}}) , then

    \begin{align*} {\Psi}\left(\frac{{s_{3}}+{s_{4}}}{2}\right)\preceq {\hslash_{\circ}}\left(\frac{1}{2}\right){\Psi}({{\mu}}){\oplus}{\hslash_{\circ}}\left(\frac{1}{2}\right){\Psi}({s_{3}}+{s_{4}}-{{\mu}}){{\ominus}_{g}} 2{\hslash_{\circ}}\left(\frac{1}{2}\right){\Psi}\left(\left|\frac{{s_{3}}+{s_{4}}}{2}-{{\mu}}\right|\right). \end{align*}

    Proof. Let {\Psi}:[{s_{3}}, {s_{4}}]\rightarrow \mathbb{R}^{+}_I be an \verb"F".\verb"N".\verb"V" \hslash_{\circ} -super-quadratic mapping; then we have

    \begin{align*} {\Psi}((1-{{{\varphi}}})y+{{{\varphi}}}{\mu})\preceq{\hslash_{\circ}}(1-{{{\varphi}}}){\Psi}(y){\oplus}{\hslash_{\circ}}({{\varphi}}){\Psi}({\mu}) {{\ominus}_{g}}{\hslash_{\circ}}({{\varphi}}){\Psi}((1-{{{\varphi}}})|y-{\mu}|){{\ominus}_{g}}{\hslash_{\circ}}(1-{{{\varphi}}}){\Psi}({{{\varphi}}}|y-{\mu}|). \end{align*}

    We can break the above inequality as

    \begin{align*} {\Psi}_{*}((1-{{{\varphi}}})y+{{{\varphi}}}{\mu}, \delta^{1}) & \leq {\hslash_{\circ}}(1-{{{\varphi}}}){\Psi}_{*}(y, \delta^{1})+{\hslash_{\circ}}({{\varphi}}){\Psi}_{*}({\mu}, \delta^{1})\\ &-{\hslash_{\circ}}({{\varphi}}){\Psi}_{*}((1-{{{\varphi}}})|y-{\mu}|, \delta^{1})-{\hslash_{\circ}}(1-{{{\varphi}}}){\Psi}_{*}({{{\varphi}}}|y-{\mu}|, \delta^{1}), \end{align*}

    and

    \begin{align*} {\Psi}^{*}((1-{{{\varphi}}})y+{{{\varphi}}}{\mu}, \delta^{1}) & \leq {\hslash_{\circ}}(1-{{{\varphi}}}){\Psi}^{*}(y, \delta^{1})+{\hslash_{\circ}}({{\varphi}}){\Psi}^{*}({\mu}, \delta^{1})\\ &-{\hslash_{\circ}}({{\varphi}}){\Psi}^{*}((1-{{{\varphi}}})|y-{\mu}|, \delta^{1})-{\hslash_{\circ}}(1-{{{\varphi}}}){\Psi}^{*}({{{\varphi}}}|y-{\mu}|, \delta^{1}). \end{align*}

    Furthermore, we can write:

    \begin{align} {\Psi}_{*}\left(\frac{{s_{3}}+{s_{4}}}{2}, \delta^{1}\right)&\leq {\hslash_{\circ}}\left(\frac{1}{2}\right){\Psi}_{*}({{\mu}}, \delta^{1}) +{\hslash_{\circ}}\left(\frac{1}{2}\right){\Psi}_{*}({s_{3}}+{s_{4}}-{{\mu}}, \delta^{1}) \\ &-{\hslash_{\circ}}\left(\frac{1}{2}\right){\Psi}_{*}\left(\left|{{\mu}}-\left(\frac{{s_{3}}+{s_{4}}}{2}\right)\right|, \delta^{1}\right)-{\hslash_{\circ}}\left(\frac{1}{2}\right){\Psi}_{*}\left(\left|{s_{3}}+{s_{4}}-{{\mu}}-\left(\frac{{s_{3}}+{s_{4}}}{2}\right)\right|, \delta^{1}\right)\\ &\leq {\hslash_{\circ}}\left(\frac{1}{2}\right){\Psi}_{*}({{\mu}}, \delta^{1}) +{\hslash_{\circ}}\left(\frac{1}{2}\right){\Psi}_{*}({s_{3}}+{s_{4}}-{{\mu}}, \delta^{1}) -2{\hslash_{\circ}}\left(\frac{1}{2}\right){\Psi}_{*}\left(\left|\left(\frac{{s_{3}}+{s_{4}}}{2}-{{\mu}}\right)\right|, \delta^{1}\right). \end{align} (3.1)

    Moreover, we have

    \begin{align} {\Psi}^{*}\left(\frac{{s_{3}}+{s_{4}}}{2}, \delta^{1}\right)&\leq {\hslash_{\circ}}\left(\frac{1}{2}\right){\Psi}^{*}({{\mu}}, \delta^{1})+ {\hslash_{\circ}}\left(\frac{1}{2}\right){\Psi}^{*}({s_{3}}+{s_{4}}-{{\mu}}, \delta^{1}) \\ &-2{\hslash_{\circ}}\left(\frac{1}{2}\right){\Psi}^{*}\left(\left|\left(\frac{{s_{3}}+{s_{4}}}{2}-{{\mu}}\right)\right|, \delta^{1}\right). \end{align} (3.2)

    Comparing (3.1) and (3.2) through the pseudo order relation, we achieve the required result.

    Lemma 3.2. If {\Psi}\in SSQFNF([{s_{3}}, {s_{4}}], {\hslash_{\circ}}) , then

    \begin{align*} &{\Psi}({{\mu}}){\oplus}{\Psi}({s_{3}}+{s_{4}}-{{\mu}})\preceq {\Psi}({s_{3}}){\oplus}{\Psi}({s_{4}}){{\ominus}_{g}}2{\hslash_{\circ}}\left(\frac{{{\mu}}-{s_{3}}}{{s_{4}}-{s_{3}}}\right){\Psi}({s_{4}}-{{\mu}}){{\ominus}_{g}}2{\hslash_{\circ}}\left(\frac{{s_{4}}-{{\mu}}}{{s_{4}}-{s_{3}}}\right){\Psi}({{\mu}}-{s_{3}}). \end{align*}

    Proof. Assume that {\Psi}:[{s_{3}}, {s_{4}}]\rightarrow \mathbb{R}^{+}_I is an \verb"F".\verb"N".\verb"V" {\hslash_{\circ}} -super-quadratic mapping on [{s_{3}}, {s_{4}}] ; then

    \begin{align*} {\Psi}_{*}((1-{{{\varphi}}}){s_{4}}+{{{\varphi}}}{s_{3}}, \delta^{1}) &\leq {\hslash_{\circ}}(1-{{{\varphi}}}){\Psi}_{*}({s_{4}}, \delta^{1})+{\hslash_{\circ}}({{\varphi}}){\Psi}_{*}({s_{3}}, \delta^{1})\\ &\quad-{\hslash_{\circ}}({{\varphi}}){\Psi}_{*}(|{s_{3}}-((1-{{{\varphi}}}){s_{4}}+{{{\varphi}}}{s_{3}})|, \delta^{1})\\ &\quad-{\hslash_{\circ}}(1-{{{\varphi}}}){\Psi}_{*}(|{s_{4}}-((1-{{{\varphi}}}){s_{4}}+{{{\varphi}}}{s_{3}})|, \delta^{1}). \end{align*}

    Substitute {{\mu}} = ((1-{{{\varphi}}}){s_{4}}+{{{\varphi}}}{s_{3}}) , then

    \begin{align} {\Psi}_{*}({{\mu}}, \delta^{1})&\leq {\hslash_{\circ}}\left(\frac{{s_{4}}-{{\mu}}}{{s_{4}}-{s_{3}}}\right){\Psi}_{*}({s_{4}}, \delta^{1}) +{\hslash_{\circ}}\left(\frac{{{\mu}}-{s_{3}}}{{s_{4}}-{s_{3}}}\right){\Psi}_{*}({s_{3}}, \delta^{1})\\ &\quad -{\hslash_{\circ}}\left(\frac{{s_{4}}-{{\mu}}}{{s_{4}}-{s_{3}}}\right){\Psi}_{*}({{\mu}}-{s_{3}}, \delta^{1})-{\hslash_{\circ}}\left(\frac{{{\mu}}-{s_{3}}}{{s_{4}}-{s_{3}}}\right){\Psi}_{*}({s_{4}}-{{\mu}}, \delta^{1}). \end{align} (3.3)

    Replacing {{\mu}} by {s_{3}}+{s_{4}}-{{\mu}} in (3.3), we get

    \begin{align} {\Psi}_{*}({s_{3}}+{s_{4}}-{{\mu}}, \delta^{1})&\leq {\hslash_{\circ}}\left(\frac{{s_{4}}-{{\mu}}}{{s_{4}}-{s_{3}}}\right){\Psi}_{*}({s_{4}}, \delta^{1}) +{\hslash_{\circ}}\left(\frac{{{\mu}}-{s_{3}}}{{s_{4}}-{s_{3}}}\right){\Psi}_{*}({s_{3}}, \delta^{1})\\ &\quad -{\hslash_{\circ}}\left(\frac{{s_{4}}-{{\mu}}}{{s_{4}}-{s_{3}}}\right){\Psi}_{*}({{\mu}}-{s_{3}}, \delta^{1}) -{\hslash_{\circ}}\left(\frac{{{\mu}}-{s_{3}}}{{s_{4}}-{s_{3}}}\right){\Psi}_{*}({s_{4}}-{{\mu}}, \delta^{1}). \end{align} (3.4)

    Adding (3.3) and (3.4), we have

    \begin{align} {\Psi}_{*}({{\mu}}, \delta^{1})+{\Psi}_{*}({s_{3}}+{s_{4}}-{{\mu}}, \delta^{1})&\leq {\Psi}_{*}({s_{3}}, \delta^{1})+{\Psi}_{*}({s_{4}}, \delta^{1})-2{\hslash_{\circ}}\left(\frac{{{\mu}}-{s_{3}}}{{s_{4}}-{s_{3}}}\right){\Psi}_{*}({s_{4}}-{{\mu}}, \delta^{1})\\ &\quad -2{\hslash_{\circ}}\left(\frac{{s_{4}}-{{\mu}}}{{s_{4}}-{s_{3}}}\right){\Psi}_{*}({{\mu}}-{s_{3}}, \delta^{1}). \end{align} (3.5)

    Similarly,

    \begin{align} {\Psi}^{*}({{\mu}}, \delta^{1})+{\Psi}^{*}({s_{3}}+{s_{4}}-{{\mu}}, \delta^{1})&\leq {\Psi}^{*}({s_{3}}, \delta^{1})+{\Psi}^{*}({s_{4}}, \delta^{1})-2{\hslash_{\circ}}\left(\frac{{{\mu}}-{s_{3}}}{{s_{4}}-{s_{3}}}\right){\Psi}^{*}({s_{4}}-{{\mu}}, \delta^{1})\\ &\quad -2{\hslash_{\circ}}\left(\frac{{s_{4}}-{{\mu}}}{{s_{4}}-{s_{3}}}\right){\Psi}^{*}({{\mu}}-{s_{3}}, \delta^{1}). \end{align} (3.6)

    Comparing (3.5) and (3.6) through the pseudo ordering relation, we get the desired outcome.
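    Before these lemmas are used in the fractional estimates below, a quick numerical spot-check is instructive. The following sketch is ours and purely illustrative: it takes the crisp super-quadratic choice {\Psi}({\mu}) = {\mu}^3 on [0, 2] with \hslash_{\circ}({\varphi}) = {\varphi} and verifies the pointwise inequalities of Lemmas 3.1 and 3.2 on a grid of {\mu} values.

```python
import numpy as np

# Spot-check of Lemmas 3.1 and 3.2 (our own sketch) for the crisp super-quadratic
# mapping Psi(mu) = mu**3 on [s3, s4] = [0, 2] with h(phi) = phi.
Psi = lambda x: x ** 3
h = lambda p: p
s3, s4 = 0.0, 2.0
mid = 0.5 * (s3 + s4)

for mu in np.linspace(s3, s4, 201):
    # Lemma 3.1
    rhs1 = h(0.5) * Psi(mu) + h(0.5) * Psi(s3 + s4 - mu) - 2 * h(0.5) * Psi(abs(mid - mu))
    assert Psi(mid) <= rhs1 + 1e-12

    # Lemma 3.2
    rhs2 = (Psi(s3) + Psi(s4)
            - 2 * h((mu - s3) / (s4 - s3)) * Psi(s4 - mu)
            - 2 * h((s4 - mu) / (s4 - s3)) * Psi(mu - s3))
    assert Psi(mu) + Psi(s3 + s4 - mu) <= rhs2 + 1e-12

print("Lemmas 3.1 and 3.2 hold on the sampled grid for Psi(mu) = mu**3.")
```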

    Theorem 3.1. If {\Psi}\in SSQFNF([{s_{3}}, {s_{4}}], {\hslash_{\circ}}) , then

    \begin{align} &\frac{1}{\hslash_{\circ}\left(\frac{1}{2}\right)}{\Psi}\left(\frac{{s_{3}}+{s_{4}}}{2}\right){\oplus}\frac{\tau}{({s_{4}}-{s_{3}})^{\tau}} \int^{s_{4}}_{s_{3}} {\Psi}\left(\left|\frac{{s_{3}}+{s_{4}}}{2}-{{\mu}}\right|\right)\left(({s_{4}}-{{\mu}})^{\tau-1}+({{\mu}}-{s_{3}})^{\tau-1}\right)\mathrm{d}{{\mu}}\\ &\preceq\frac{\Gamma(1+\tau)}{({s_{4}}-{s_{3}})^{\tau}}\biggl(J^{\tau}_{{s_{3}}+}{\Psi}({s_{4}}){\oplus}J^{\tau}_{{s_{4}}-}{\Psi}({s_{3}})\biggr)\\ &\preceq{\Psi}({s_{3}}){\oplus}{\Psi}({s_{4}}){{\ominus}_{g}}\frac{\tau}{({s_{4}}-{s_{3}})^{\tau}}\int^{s_{4}}_{s_{3}} \left({\hslash_{\circ}}\left(\frac{{{\mu}}-{s_{3}}}{{s_{4}}-{s_{3}}}\right){\Psi}({s_{4}}-{{\mu}}){\oplus}{\hslash_{\circ}} \left(\frac{{s_{4}}-{{\mu}}}{{s_{4}}-{s_{3}}}\right){\Psi}({{\mu}}-{s_{3}})\right) \end{align} (3.7)
    \begin{align} &\quad\left(({s_{4}}-{{\mu}})^{\tau-1}+({{\mu}}-{s_{3}})^{\tau-1}\right)\mathrm{d}{{\mu}}. \end{align} (3.8)

    Proof. Since {\Psi} is an \verb"F".\verb"N".\verb"V" super-quadratic mapping, we have

    \begin{align*} {\Psi}_{*}\left(\frac{{s_{3}}+{s_{4}}}{2}, \delta^{1}\right)& = {\Psi}_{*}\left(\frac{{s_{3}}+{s_{4}}}{2}, \delta^{1}\right)\frac{\tau}{2({s_{4}}-{s_{3}})^{\tau}}\int^{s_{4}}_{s_{3}} \left(({s_{4}}-{{\mu}})^{\tau-1}+({{\mu}}-{s_{3}})^{\tau-1}\right)\mathrm{d}{{\mu}}\\ & = \frac{\tau}{({s_{4}}-{s_{3}})^{\tau}}\int^{\frac{{s_{3}}+{s_{4}}}{2}}_{s_{3}} {\Psi}_{*}\left(\frac{{s_{3}}+{s_{4}}}{2}, \delta^{1}\right)\left(({s_{4}}-{{\mu}})^{\tau-1}+({{\mu}}-{s_{3}})^{\tau-1}\right)\mathrm{d}{{\mu}}. \end{align*}

    Through Lemma 3.1, we can interpret

    \begin{align} {\Psi}_{*}\left(\frac{{s_{3}}+{s_{4}}}{2}, \delta^{1}\right)&\leq\frac{\tau}{({s_{4}}-{s_{3}})^{\tau}} \int^{\frac{{s_{3}}+{s_{4}}}{2}}_{s_{3}} \left[{\hslash_{\circ}}\left(\frac{1}{2}\right){\Psi}_{*}({{\mu}}, \delta^{1}) +{\hslash_{\circ}}\left(\frac{1}{2}\right){\Psi}_{*}({s_{3}}+{s_{4}}-{{\mu}}, \delta^{1})\right.\\ &\left.\quad-2{\hslash_{\circ}}\left(\frac{1}{2}\right){\Psi}_{*}\left(\left|\frac{{s_{3}}+{s_{4}}}{2}-{{\mu}}\right|, \delta^{1}\right)\right] \left(({s_{4}}-{{\mu}})^{\tau-1}+({{\mu}}-{s_{3}})^{\tau-1}\right)\mathrm{d}{{\mu}}. \end{align} (3.9)

    From (3.9), we have

    \begin{align} \frac{1}{{\hslash_{\circ}}\left(\frac{1}{2}\right)}{\Psi}_{*}\left(\frac{{s_{3}}+{s_{4}}}{2}, \delta^{1}\right)&\leq\frac{\tau}{({s_{4}}-{s_{3}})^{\tau}}\int^{\frac{{s_{3}}+{s_{4}}}{2}}_{s_{3}} \left[{\Psi}_{*}({{\mu}}, \delta^{1})+{\Psi}_{*}({s_{3}}+{s_{4}}-{{\mu}}, \delta^{1})\right.\\ &\left.\quad-2{\Psi}_{*}\left(\left|\frac{{s_{3}}+{s_{4}}}{2}-{{\mu}}\right|, \delta^{1}\right) \left(({s_{4}}-{{\mu}})^{\tau-1}+({{\mu}}-{s_{3}})^{\tau-1}\right)\right]\mathrm{d}{{\mu}}\\ & = \frac{\tau}{({s_{4}}-{s_{3}})^{\tau}}\left[\int^{\frac{{s_{3}}+{s_{4}}}{2}}_{s_{3}} {\Psi}_{*}({{\mu}}, \delta^{1}) \left(({s_{4}}-{{\mu}})^{\tau-1}+({{\mu}}-{s_{3}})^{\tau-1}\right)\mathrm{d}{{\mu}}\right.\\ &\quad\left.+\int_{\frac{{s_{3}}+{s_{4}}}{2}}^{s_{4}} {\Psi}_{*}({{\mu}}, \delta^{1})\left(({s_{4}}-{{\mu}})^{\tau-1}+({{\mu}}-{s_{3}})^{\tau-1}\right)\mathrm{d}{{\mu}}\right.\\ &\left.\quad-\int^{\frac{{s_{3}}+{s_{4}}}{2}}_{s_{3}} {\Psi}_{*}\left(\left|\frac{{s_{3}}+{s_{4}}}{2}-{{\mu}}\right|, \delta^{1}\right) \left(({s_{4}}-{{\mu}})^{\tau-1}+({{\mu}}-{s_{3}})^{\tau-1}\right)\mathrm{d}{{\mu}}\right.\\ &\left.\quad-\int_{\frac{{s_{3}}+{s_{4}}}{2}}^{s_{4}} {\Psi}_{*}\left(\left|\frac{{s_{3}}+{s_{4}}}{2}-{{\mu}}\right|, \delta^{1}\right)\left(({s_{4}}-{{\mu}})^{\tau-1}+ ({{\mu}}-{s_{3}})^{\tau-1}\right) \mathrm{d}{{\mu}}\right]\\ & = \frac{\tau}{({s_{4}}-{s_{3}})^{\tau}}\left[\int_{s_{3}}^{s_{4}} {\Psi}_{*}({{\mu}}, \delta^{1})\left(({s_{4}}-{{\mu}})^{\tau-1}+({{\mu}}-{s_{3}})^{\tau-1}\right)\mathrm{d}{{\mu}}\right.\\ &\quad\left.-\int_{s_{3}}^{s_{4}} {\Psi}_{*}\left(\left|\frac{{s_{3}}+{s_{4}}}{2}-{{\mu}}\right|, \delta^{1}\right)\left(({s_{4}}-{{\mu}})^{\tau-1}+ ({{\mu}}-{s_{3}})^{\tau-1}\right) \mathrm{d}{{\mu}}\right]\\ & = \frac{\Gamma(\tau+1)}{({s_{4}}-{s_{3}})^{\tau}}\left[J^{\tau}_{{s_{3}}+}{\Psi}_{*}({s_{4}}, \delta^{1})+J^{\tau}_{{s_{4}}-}{\Psi}_{*}({s_{3}}, \delta^{1})\right]\\ &-\frac{\tau}{({s_{4}}-{s_{3}})^{\tau}}\int_{s_{3}}^{s_{4}} {\Psi}_{*}\left(\left|\frac{{s_{3}}+{s_{4}}}{2}-{{\mu}}\right|, \delta^{1}\right)\left(({s_{4}}-{{\mu}})^{\tau-1}+({{\mu}}-{s_{3}})^{\tau-1}\right)\mathrm{d}{{\mu}}. \end{align} (3.10)

    Also,

    \begin{align} &\frac{1}{{\hslash_{\circ}}\left(\frac{1}{2}\right)}{\Psi}^{*}\left(\frac{{s_{3}}+{s_{4}}}{2}, \delta^{1}\right)\leq\frac{\Gamma(\tau+1)}{({s_{4}}-{s_{3}})^{\tau}}\left[J^{\tau}_{{s_{3}}+}{\Psi}^{*}({s_{4}}, \delta^{1})+J^{\tau}_{{s_{4}}-} {\Psi}^{*}({s_{3}}, \delta^{1})\right]\\ &-\frac{\tau}{({s_{4}}-{s_{3}})^{\tau}}\int_{s_{3}}^{s_{4}} {\Psi}^{*}\left(\left|\frac{{s_{3}}+{s_{4}}}{2}-{{\mu}}\right|, \delta^{1}\right)\left(({s_{4}}-{{\mu}})^{\tau-1}+({{\mu}}-{s_{3}})^{\tau-1}\right)\mathrm{d}{{\mu}}. \end{align} (3.11)

    Combining (3.10) and (3.11) via the pseudo order relation, we acquire the first inequality of (3.7). From Lemma 3.2, we can write

    \begin{align} &\frac{\Gamma(1+\tau)}{({s_{4}}-{s_{3}})^{\tau}}\left(J^{\tau}_{{s_{3}}+}{\Psi}_{*}({s_{4}}, \delta^{1}) +J^{\tau}_{{s_{4}}-}{\Psi}_{*}({s_{3}}, \delta^{1})\right)\\ & = \frac{\tau}{({s_{4}}-{s_{3}})^{\tau}}\int^{\frac{{s_{3}}+{s_{4}}}{2}}_{s_{3}} [{\Psi}_{*}({{\mu}}, \delta^{1})+{\Psi}_{*}({s_{3}}+{s_{4}}-{{\mu}}, \delta^{1})]\left(({s_{4}}-{{\mu}})^{\tau-1}+({{\mu}}-{s_{3}})^{\tau-1}\right)\mathrm{d}{{\mu}}\\ &\leq\frac{\tau}{({s_{4}}-{s_{3}})^{\tau}}\int^{\frac{{s_{3}}+{s_{4}}}{2}}_{s_{3}} \left({\Psi}_{*}({s_{3}}, \delta^{1})+{\Psi}_{*}({s_{4}}, \delta^{1}) -2{\hslash_{\circ}}\left(\frac{{{\mu}}-{s_{3}}}{{s_{4}}-{s_{3}}}\right){\Psi}_{*}({s_{4}}-{{\mu}}, \delta^{1}) \right.\\ &\left.-2{\hslash_{\circ}}\left(\frac{{s_{4}}-{{\mu}}}{{s_{4}}-{s_{3}}}\right){\Psi}_{*}({{\mu}}-{s_{3}}, \delta^{1})\right)\left(({s_{4}}-{{\mu}})^{\tau-1}+({{\mu}}-{s_{3}})^{\tau-1}\right)\mathrm{d}{{\mu}}\\ & = {\Psi}_{*}({s_{3}}, \delta^{1})+{\Psi}_{*}({s_{4}}, \delta^{1})-\frac{\tau}{({s_{4}}-{s_{3}})^{\tau}} \int^{s_{4}}_{s_{3}}\left({\hslash_{\circ}}\left(\frac{{{\mu}}-{s_{3}}}{{s_{4}}-{s_{3}}}\right) {\Psi}_{*}({s_{4}}-{{\mu}}, \delta^{1})\right.\\ &\left.+{\hslash_{\circ}}\left(\frac{{s_{4}}-{{\mu}}}{{s_{4}}-{s_{3}}}\right) {\Psi}_{*}({{\mu}}-{s_{3}}, \delta^{1})\right)\left(({s_{4}}-{{\mu}})^{\tau-1}+({{\mu}}-{s_{3}})^{\tau-1}\right)\mathrm{d}{{\mu}}. \end{align} (3.12)

    Similarly, we have

    \begin{align} &\frac{\Gamma(1+\tau)}{({s_{4}}-{s_{3}})^{\tau}}\left[J^{\tau}_{{s_{3}}+}{\Psi}^{*}({s_{4}}, \delta^{1}) +J^{\tau}_{{s_{4}}-}{\Psi}^{*}({s_{3}}, \delta^{1})\right]\leq{\Psi}^{*}({s_{3}}, \delta^{1})+{\Psi}^{*}({s_{4}}, \delta^{1})\\&-\frac{\tau}{({s_{4}}-{s_{3}})^{\tau}}\int^{s_{4}}_{s_{3}} \left({\hslash_{\circ}}\left(\frac{{{\mu}}-{s_{3}}}{{s_{4}}-{s_{3}}}\right){\Psi}^{*}({s_{4}}-{{\mu}}, \delta^{1})\right.\\ &\left.+{\hslash_{\circ}} \left(\frac{{s_{4}}-{{\mu}}}{{s_{4}}-{s_{3}}}\right){\Psi}^{*}({{\mu}}-{s_{3}}, \delta^{1})\right)\left(({s_{4}}-{{\mu}})^{\tau-1}+({{\mu}}-{s_{3}})^{\tau-1}\right)\mathrm{d}{{\mu}}. \end{align} (3.13)

    Comparing (3.12) and (3.13) through the pseudo ordering relation, we achieve the desired result.

    Remark 3.1. For \tau = 1 , Theorem 3.1 reduces to Theorem 2.5. By selecting \hslash_{\circ}({\varphi}) = {\varphi}, {\varphi}^s, {\varphi}^{-s}, {\varphi}(1-{\varphi}) in Theorem 3.1, we get various fractional \verb"F".\verb"N".\verb"V" Hermite-Hadamard inequalities for different classes of super-quadraticity. Also, by taking \Psi_{*}({\mu}, \delta^{1}) = \Psi^{*}({\mu}, \delta^{1}) and \delta^{1} = 1 in Theorem 3.1, we obtain the fractional Hermite-Hadamard inequality for {\hslash_{\circ}} -super-quadratic mappings, which is given in [59].

    Example 3.1. Let {\Psi}:[{s_{3}}, \, {s_{4}}] = [0, 2]\rightarrow \Upsilon_{\delta^{1}} be a fuzzy valued super-quadratic mapping which is defined as

    \begin{align*} {\Psi}_{{{\mu}}}({\mu_{1}}) = \left\{ \begin{array}{ll} \frac{{\mu_{1}}}{3{{\mu}}^3}, \quad {\mu_{1}}\in [0, 3{{\mu}}^3]\\ \frac{6{{\mu}}^3-{\mu_{1}}}{3{{\mu}}^3}, \quad {\mu_{1}}\in(3{{\mu}}^3, 6{{\mu}}^3]. \end{array} \right. \end{align*}

    and its level cuts are {\Psi}_{{\delta^{1}}} = \left[3{\delta^{1}} {{\mu}}^3, \, (6-3 {\delta^{1}}){{\mu}}^3 \right] . It fulfills the conditions of Theorem 3.1; then

    \begin{align*} {\mathit{\hbox{Left Term}}}& = \left[\frac{\left(2 \left(2^{\tau } \tau ^3+5\ 2^{\tau } \tau -3\ 2^{\tau +1}+12\right)\right) (3 \tau \delta^{1} )}{2^{\tau } (\tau (\tau +1) (\tau +2) (\tau +3))}+6 \delta^{1}, \right.\nonumber\\ &\left.\quad \frac{\left(2 \left(2^{\tau } \tau ^3+5\ 2^{\tau } \tau -3\ 2^{\tau +1}+12\right)\right) (\tau (6-3 \delta^{1} ))}{2^{\tau } (\tau (\tau +1) (\tau +2) (\tau +3))}+(12-6 \delta^{1} )\right], \nonumber\\ {\mathit{\hbox{Middle Term}}}& = \left[\frac{(3 \tau \delta^{1} ) \left(2^{\tau +3} \left(\frac{1}{\tau +3}+\frac{6 \Gamma (\tau )}{\Gamma (\tau +4)}\right)\right)}{2^{\tau }}, \frac{(\tau (6-3 \delta^{1} )) \left(2^{\tau +3} \left(\frac{1}{\tau +3}+\frac{6 \Gamma (\tau )}{\Gamma (\tau +4)}\right)\right)}{2^{\tau }}\right], \nonumber\\ {\mathit{\hbox{Right Term}}}& = \left[24 \delta^{1} -\frac{\left(2^{\tau +5} (\tau (\tau +3)+8)\right) (3 \tau \delta^{1} )}{2^{\tau +1} ((\tau +1) (\tau +2) (\tau +3) (\tau +4))}, \right.\nonumber\\ &\left.\quad(48-24 \delta^{1} )-\frac{\left(2^{\tau +5} (\tau (\tau +3)+8)\right) (\tau (6-3 \delta^{1} ))}{2^{\tau +1} ((\tau +1) (\tau +2) (\tau +3) (\tau +4))}\right]. \end{align*}

    To visualize the above formulations, we fix \delta^{1} and vary \tau .

    Figure 3.  Graphical validation of Theorem 3.1.

    Note that L.L.F, L.U.F, M.L.F, M.U.F, R.L.F, and R.U.F denote the endpoint mappings of the left, middle, and right terms of Theorem 3.1.

    Table 3.  Numerical validation of Example 3.1.
    (\delta^{1}, \tau) L_*(\delta^{1}, \tau) L^*(\delta^{1}, \tau) M_*(\delta^{1}, \tau) M^*(\delta^{1}, \tau) R_*(\delta^{1}, \tau) R^*(\delta^{1}, \tau)
    (0.3, 0.5) 2.50084 14.1714 4.32 24.48 6.01143 34.0648
    (0.4, 1) 3.0000 12.0000 4.8000 19.2000 7.6800 30.7200
    (0.5, 1.5) 3.69468 11.084 5.82857 17.4857 9.54805 28.6442
    (0.8, 2) 6.0000 9.0000 9.6000 14.4000 15.3600 23.0400

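    To complement the closed forms above, the three terms of Theorem 3.1 can also be evaluated by direct quadrature. The sketch below is ours; it assumes \hslash_{\circ}({\varphi}) = {\varphi} (the example does not state \hslash_{\circ} explicitly) and works with the lower endpoint mapping \Psi_{*}({\mu}, \delta^{1}) = 3\delta^{1}{\mu}^3 on [0, 2] , confirming the chain Left \leq Middle \leq Right for a few (\delta^{1}, \tau) pairs.

```python
from scipy.integrate import quad

# Quadrature check of Theorem 3.1 (our own sketch), assuming h(phi) = phi and the
# lower endpoint mapping Psi_*(mu) = 3*delta1*mu**3 of Example 3.1 on [s3, s4] = [0, 2].
s3, s4 = 0.0, 2.0
mid = 0.5 * (s3 + s4)
h = lambda p: p

def w(mu, tau):
    # fractional weight (s4 - mu)^(tau-1) + (mu - s3)^(tau-1)
    return (s4 - mu) ** (tau - 1) + (mu - s3) ** (tau - 1)

def terms(delta1, tau):
    Psi = lambda x: 3 * delta1 * x ** 3
    c = tau / (s4 - s3) ** tau
    left = Psi(mid) / h(0.5) + c * quad(lambda m: Psi(abs(mid - m)) * w(m, tau), s3, s4)[0]
    middle = c * quad(lambda m: Psi(m) * w(m, tau), s3, s4)[0]
    right = Psi(s3) + Psi(s4) - c * quad(
        lambda m: (h((m - s3) / (s4 - s3)) * Psi(s4 - m)
                   + h((s4 - m) / (s4 - s3)) * Psi(m - s3)) * w(m, tau), s3, s4)[0]
    return left, middle, right

for delta1, tau in [(0.4, 1.0), (0.5, 1.5), (0.8, 2.0)]:
    L, M, R = terms(delta1, tau)
    assert L <= M <= R
    print(f"delta1={delta1}, tau={tau}: {L:.4f} <= {M:.4f} <= {R:.4f}")
```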

    Now, we prove the weighted Hermite-Hadamard's inequality for symmetric mappings.

    Theorem 3.2. If {\Psi}\in SSQFNF([{s_{3}}, {s_{4}}], {\hslash_{\circ}}) and {g}:[{s_{3}}, {s_{4}}]\rightarrow \mathbb{R} is a nonnegative integrable symmetric mapping about \frac{{s_{3}}+{s_{4}}}{2} , then

    \begin{align*} &\frac{1}{2\hslash_{\circ}\left(\frac{1}{2}\right)}{\Psi}\left(\frac{{s_{3}}+{s_{4}}}{2}\right)\frac{\Gamma(\tau+1)}{({s_{4}}-{s_{3}})^{\tau}}\left[J_{{s_{3}}^{+}}^{\tau}g({s_{4}})+ J_{{s_{4}}^{-}}^{\tau}g({s_{3}})\right]\nonumber\\ &\quad{\oplus}\frac{\tau}{({s_{4}}-{s_{3}})^{\tau}}\int_{s_{3}}^{s_{4}}\left[({s_{4}}-{\mu})^{\tau-1}+({\mu}-{s_{3}})^{\tau-1}\right] {\Psi}\left(\left|{\mu}-\frac{{s_{3}}+{s_{4}}}{2}\right|\right){g}({\mu})\mathrm{d}{\mu}\nonumber\\ &\preceq\frac{\Gamma(\tau+1)}{({s_{4}}-{s_{3}})^{\tau}}\left[J_{{s_{3}}^{+}}^{\tau}{\Psi}{g}({s_{4}}){\oplus} J_{{s_{4}}^{-}}^{\tau}{\Psi}{g}({s_{3}})\right]\nonumber\\ &\preceq ({\Psi}({s_{3}}){\oplus}{\Psi}({s_{4}}))\cdot\frac{\tau}{({s_{4}}-{s_{3}})^{\tau}}\int_{s_{3}}^{s_{4}} \left[({s_{4}}-{\mu})^{\tau-1}+({\mu}-{s_{3}})^{\tau-1}\right]\left[\hslash_{\circ}\left(\frac{{s_{4}}-{\mu}}{{s_{4}}-{s_{3}}}\right) +\hslash_{\circ}\left(\frac{{\mu}-{s_{3}}}{{s_{4}}-{s_{3}}}\right)\right]{g}({\mu})\mathrm{d}{\mu}\nonumber\\ &\quad{{\ominus}_{g}}\frac{\tau}{({s_{4}}-{s_{3}})^{\tau}}\int_{s_{3}}^{s_{4}}[({s_{4}}-{\mu})^{\tau-1}+({\mu}-{s_{3}})^{\tau-1}] \left[\hslash_{\circ}\left(\frac{{s_{4}}-{\mu}}{{s_{4}}-{s_{3}}}\right){\Psi}({\mu}-{s_{3}}){\oplus}\hslash_{\circ}\left(\frac{{\mu}-{s_{3}}}{{s_{4}}-{s_{3}}}\right){\Psi}({s_{4}}-{\mu})\right]{g}({\mu})\mathrm{d}{\mu}. \end{align*}

    Proof. Since {\Psi}:[{{s_{3}}}, {{s_{4}}}]\rightarrow \mathbb{R}^+_I is a fuzzy interval-valued {\hslash_{\circ}} -super-quadratic mapping, multiplying (2.17) by {{{\varphi}}}^{\tau-1}{g}({{{\varphi}}}{s_{4}}+(1-{{\varphi}}){s_{3}}) and integrating with respect to {{\varphi}} on [0, 1] , we get

    \begin{align*} &\frac{1}{\hslash_{\circ}\left(\frac{1}{2}\right)}{\Psi}_{*}\left(\frac{{s_{3}}+{s_{4}}}{2}, \delta^{1}\right)\int_0^1 {{{\varphi}}}^{\tau-1} {g}(\varphi{s_{4}}+(1-{{\varphi}}){s_{3}})\mathrm{d}{{\varphi}}\nonumber\\ &\leq \int^1_0 {{{\varphi}}}^{\tau-1}{\Psi}_{*}({{{\varphi}}}{s_{4}}+(1-{{{\varphi}}}){s_{3}}, \delta^{1}){g}(\varphi{s_{4}}+(1-{{\varphi}}){s_{3}})\mathrm{d}{{{\varphi}}}\\&+ \int^1_0 {{{\varphi}}}^{\tau-1} {\Psi}_{*}({{{\varphi}}}{s_{3}}+(1-{{{\varphi}}}){s_{4}}, \delta^{1}){g}(\varphi{s_{4}}+(1-{{\varphi}}){s_{3}})\mathrm{d}{{{\varphi}}}\\ &\quad-2\int_0^1 {{{\varphi}}}^{\tau-1} {\Psi}_{*}\left(\frac{{s_{4}}-{s_{3}}}{2}\left|1-2{{\varphi}}\right|, \delta^{1}\right){g}(\varphi{s_{4}}+(1-{{\varphi}}){s_{3}})\mathrm{d}{{{\varphi}}}. \end{align*}

    Since g is a symmetric mapping about \frac{{s_{3}}+{s_{4}}}{2} , we have g({{{\varphi}}}{s_{4}}+(1-{{\varphi}}){s_{3}}) = g((1-{{{\varphi}}}){s_{4}}+{{{\varphi}}}{s_{3}}) . Using this fact in the above inequality, we get

    \begin{align*} &\frac{1}{2\hslash_{\circ}\left(\frac{1}{2}\right)}{\Psi}_{*}\left(\frac{{s_{3}}+{s_{4}}}{2}, \delta^{1}\right)\int_0^1 {{{\varphi}}}^{\tau-1}[{g}(\varphi{s_{4}}+(1-{{\varphi}}){s_{3}})+{g}({{{\varphi}}}{s_{3}}+(1-{{{\varphi}}}){s_{4}})]\mathrm{d}{{\varphi}}\nonumber\\ &\leq\int^1_0 {{{\varphi}}}^{\tau-1} {\Psi}_{*}({{{\varphi}}}{s_{4}}+(1-{{{\varphi}}}){s_{3}}, \delta^{1}){g}(\varphi{s_{4}}+(1-{{\varphi}}){s_{3}})\mathrm{d}{{{\varphi}}}\\ &\, +\int^1_0 {{{\varphi}}}^{\tau-1} {\Psi}_{*}({{{\varphi}}}{s_{3}}+(1-{{{\varphi}}}){s_{4}}, \delta^{1}){g}({{{\varphi}}}{s_{3}}+(1-{{\varphi}}){s_{4}})\mathrm{d}{{{\varphi}}}\\ &\quad-\int_0^1 {{{\varphi}}}^{\tau-1} {\Psi}_{*}\left(\frac{{s_{4}}-{s_{3}}}{2}\left|1-2{{\varphi}}\right|, \delta^{1}\right)[{g}(\varphi{s_{4}}+(1-{{\varphi}}){s_{3}})+{g}({{{\varphi}}}{s_{3}}+(1-{{{\varphi}}}){s_{4}})]\mathrm{d}{{{\varphi}}}. \end{align*}

    After some simple computations, we have the following inequality

    \begin{align} &\frac{1}{2\hslash_{\circ}\left(\frac{1}{2}\right)}{\Psi}_{*}\left(\frac{{s_{3}}+{s_{4}}}{2}, \delta^{1}\right)\frac{\Gamma(\tau+1)}{({s_{4}}-{s_{3}})^{\tau}}\left[J_{{s_{3}}^{+}}^{\tau}g({s_{4}})+ J_{{s_{4}}^{-}}^{\tau}g({s_{3}})\right]\\ &\leq\frac{\Gamma(\tau+1)}{({s_{4}}-{s_{3}})^{\tau}}\left[J_{{s_{3}}^{+}}^{\tau}{\Psi}_{*}({s_{4}}, \delta^{1}){g}({s_{4}})+ J_{{s_{4}}^{-}}^{\tau}{\Psi}_{*}({s_{3}}, \delta^{1})g({s_{3}})\right]\\ &\quad-\frac{\tau}{({s_{4}}-{s_{3}})^{\tau}}\int_{s_{3}}^{s_{4}}\left[({s_{4}}-{\mu})^{\tau-1}+({\mu}-{s_{3}})^{\tau-1}\right]{\Psi}_{*}\left(\left|{\mu}-\frac{{s_{3}}+{s_{4}}}{2}\right|, \delta^{1}\right){g}({\mu})\mathrm{d}{\mu}. \end{align} (3.14)

    By following a similar procedure, we get

    \begin{align} &\frac{1}{2\hslash_{\circ}\left(\frac{1}{2}\right)}{\Psi}^{*}\left(\frac{{s_{3}}+{s_{4}}}{2}, \delta^{1}\right)\frac{\Gamma(\tau+1)}{({s_{4}}-{s_{3}})^{\tau}}\left[J_{{s_{3}}^{+}}^{\tau}g({s_{4}})+ J_{{s_{4}}^{-}}^{\tau}g({s_{3}})\right]\\ &\leq\frac{\Gamma(\tau+1)}{({s_{4}}-{s_{3}})^{\tau}}\left[J_{{s_{3}}^{+}}^{\tau}{\Psi}^{*}({s_{4}}, \delta^{1}){g}({s_{4}})+ J_{{s_{4}}^{-}}^{\tau}{\Psi}^{*}({s_{3}}, \delta^{1})g({s_{3}})\right]\\ &\quad-\frac{\tau}{({s_{4}}-{s_{3}})^{\tau}}\int_{s_{3}}^{s_{4}}\left[({s_{4}}-{\mu})^{\tau-1}+({\mu}-{s_{3}})^{\tau-1}\right]{\Psi}^{*}\left(\left|{\mu}-\frac{{s_{3}}+{s_{4}}}{2}\right|, \delta^{1}\right){g}({\mu})\mathrm{d}{\mu}. \end{align} (3.15)

    Implementing the pseudo ordering relation on (3.14) and (3.15) results in the following relation

    \begin{align} &\frac{1}{2\hslash_{\circ}\left(\frac{1}{2}\right)}{\Psi}\left(\frac{{s_{3}}+{s_{4}}}{2}\right)\frac{\Gamma(\tau+1)}{({s_{4}}-{s_{3}})^{\tau}}\left[J_{{s_{3}}^{+}}^{\tau}g({s_{4}})+ J_{{s_{4}}^{-}}^{\tau}g({s_{3}})\right]\\ &\quad{\oplus}\frac{\tau}{({s_{4}}-{s_{3}})^{\tau}}\int_{s_{3}}^{s_{4}}\left[({s_{4}}-{\mu})^{\tau-1}+({\mu}-{s_{3}})^{\tau-1}\right] {\Psi}\left(\left|{\mu}-\frac{{s_{3}}+{s_{4}}}{2}\right|\right){g}({\mu})\mathrm{d}{\mu}\\ &\preceq\frac{\Gamma(\tau+1)}{({s_{4}}-{s_{3}})^{\tau}}\left[J_{{s_{3}}^{+}}^{\tau}{\Psi}{g}({s_{4}}){\oplus} J_{{s_{4}}^{-}}^{\tau}{\Psi}{g}({s_{3}})\right]. \end{align} (3.16)

    Now, we establish our second inequality. Multiplying (2.21) by {{{\varphi}}}^{\tau-1}{g}({{{\varphi}}}{s_{4}}+(1-{{{\varphi}}}){s_{3}}) and integrating with respect to {{{\varphi}}} on [0, 1] , we get

    \begin{align*} &\int_0^1 {{{\varphi}}}^{\tau-1}{\Psi}_{*}({{{\varphi}}}{s_{3}}+(1-{{{\varphi}}}){s_{4}}, \delta^{1}){g}({{{\varphi}}}{s_{4}}+(1-{{{\varphi}}}){s_{3}})\mathrm{d}{{{\varphi}}}\\ &\, \, +\int_0^1{{{\varphi}}}^{\tau-1}{\Psi}_{*}({{{\varphi}}}{s_{4}}+(1-{{{\varphi}}}){s_{3}}, \delta^{1}) {g}({{{\varphi}}}{s_{4}}+(1-{{{\varphi}}}){s_{3}})\mathrm{d}{{{\varphi}}}\nonumber\\ &\leq [{\Psi}_{*}({s_{4}}, \delta^{1})+{\Psi}_{*}({s_{3}}, \delta^{1})]\int_0^1{{{\varphi}}}^{\tau-1}[{\hslash_{\circ}}({{\varphi}})+\hslash_{\circ}(1-{{\varphi}})]{g}({{{\varphi}}}{s_{4}}+(1-{{{\varphi}}}){s_{3}})\mathrm{d}{{{\varphi}}}\nonumber\\ &-2\int_0^1{{{\varphi}}}^{\tau-1}{\hslash_{\circ}}({{\varphi}}){\Psi}_{*}((1-{{{\varphi}}})|{s_{4}}-{s_{3}}|, \delta^{1}) {g}({{{\varphi}}}{s_{4}}+(1-{{{\varphi}}}){s_{3}})\mathrm{d}{{{\varphi}}}\nonumber\\ &-2\int_0^1 {{{\varphi}}}^{\tau-1}{\hslash_{\circ}}(1-{{{\varphi}}}){\Psi}_{*}({{{\varphi}}}|{s_{4}}-{s_{3}}|, \delta^{1}){g}({{{\varphi}}}{s_{4}}+(1-{{{\varphi}}}){s_{3}})\mathrm{d}{{{\varphi}}}. \end{align*}

    After performing some computations, we get

    \begin{align} &\frac{\Gamma(\tau+1)}{({s_{4}}-{s_{3}})^{\tau}}\left[J_{{s_{3}}^{+}}^{\tau}{\Psi}_{*}({s_{4}}, \delta^{1}){g}({s_{4}})+ J_{{s_{4}}^{-}}^{\tau}{\Psi}_{*}({s_{3}}, \delta^{1})g({s_{3}})\right]\\ &\leq ({{\Psi}_{*}({s_{3}}, \delta^{1})+{\Psi}_{*}({s_{4}}, \delta^{1})})\cdot\frac{\tau}{({s_{4}}-{s_{3}})^{\tau}}\int_{s_{3}}^{s_{4}} \\ &\times\left[({s_{4}}-{\mu})^{\tau-1}+({\mu}-{s_{3}})^{\tau-1}\right]\left[\hslash_{\circ}\left(\frac{{s_{4}}-{\mu}}{{s_{4}}-{s_{3}}}\right)+\hslash_{\circ}\left(\frac{{\mu}-{s_{3}}}{{s_{4}}-{s_{3}}}\right)\right]{g}({\mu})\mathrm{d}{\mu}\\ &\quad-\frac{\tau}{({s_{4}}-{s_{3}})^{\tau}}\int_{s_{3}}^{s_{4}}[({s_{4}}-{\mu})^{\tau-1}+({\mu}-{s_{3}})^{\tau-1}]\\ &\times \left[\hslash_{\circ}\left(\frac{{s_{4}}-{\mu}}{{s_{4}}-{s_{3}}}\right){\Psi}_{*}({\mu}-{s_{3}}, \delta^{1})+\hslash_{\circ}\left(\frac{{\mu}-{s_{3}}}{{s_{4}}-{s_{3}}}\right){\Psi}_{*}({s_{4}}-{\mu}, \delta^{1})\right]{g}({\mu})\mathrm{d}{\mu}. \end{align} (3.17)

    Similarly, we have

    \begin{align} &\frac{\Gamma(\tau+1)}{({s_{4}}-{s_{3}})^{\tau}}\left[J_{{s_{3}}^{+}}^{\tau}{\Psi}^{*}({s_{4}}, \delta^{1}){g}({s_{4}})+ J_{{s_{4}}^{-}}^{\tau}{\Psi}^{*}({s_{3}}, \delta^{1})g({s_{3}})\right]\\ &\leq ({\Psi}^{*}({s_{3}}, \delta^{1})+{\Psi}^{*}({s_{4}}, \delta^{1}))\cdot\frac{\tau}{({s_{4}}-{s_{3}})^{\tau}}\int_{s_{3}}^{s_{4}} \\ &\, \, \times\left[({s_{4}}-{\mu})^{\tau-1}+({\mu}-{s_{3}})^{\tau-1}\right]\left[\hslash_{\circ}\left(\frac{{s_{4}}-{\mu}}{{s_{4}}-{s_{3}}}\right)+\hslash_{\circ}\left(\frac{{\mu}-{s_{3}}}{{s_{4}}-{s_{3}}}\right)\right]{g}({\mu})\mathrm{d}{\mu}\\ &\quad-\frac{\tau}{({s_{4}}-{s_{3}})^{\tau}}\int_{s_{3}}^{s_{4}}[({s_{4}}-{\mu})^{\tau-1}+({\mu}-{s_{3}})^{\tau-1}]\\ &\, \, \times\left[\hslash_{\circ}\left(\frac{{s_{4}}-{\mu}}{{s_{4}}-{s_{3}}}\right){\Psi}^{*}({\mu}-{s_{3}}, \delta^{1})+\hslash_{\circ}\left(\frac{{\mu}-{s_{3}}}{{s_{4}}-{s_{3}}}\right){\Psi}^{*}({s_{4}}-{\mu}, \delta^{1})\right]{g}({\mu})\mathrm{d}{\mu}. \end{align} (3.18)

    Inequalities (3.17) and (3.18) produce the following relation

    \begin{align} &\frac{\Gamma(\tau+1)}{({s_{4}}-{s_{3}})^{\tau}}\left[J_{{s_{3}}^{+}}^{\tau}{\Psi}{g}({s_{4}}){\oplus} J_{{s_{4}}^{-}}^{\tau}{\Psi}{g}({s_{3}})\right]\\ &\preceq ({\Psi}({s_{3}}){\oplus}{\Psi}({s_{4}}))\cdot\frac{\tau}{({s_{4}}-{s_{3}})^{\tau}}\int_{s_{3}}^{s_{4}} \left[({s_{4}}-{\mu})^{\tau-1}+({\mu}-{s_{3}})^{\tau-1}\right]\left[\hslash_{\circ}\left(\frac{{s_{4}}-{\mu}}{{s_{4}}-{s_{3}}}\right) +\hslash_{\circ}\left(\frac{{\mu}-{s_{3}}}{{s_{4}}-{s_{3}}}\right)\right]{g}({\mu})\mathrm{d}{\mu}\\ &\quad{{\ominus}_{g}}\frac{\tau}{({s_{4}}-{s_{3}})^{\tau}}\int_{s_{3}}^{s_{4}}[({s_{4}}-{\mu})^{\tau-1}+({\mu}-{s_{3}})^{\tau-1}] \left[\hslash_{\circ}\left(\frac{{s_{4}}-{\mu}}{{s_{4}}-{s_{3}}}\right){\Psi}({\mu}-{s_{3}}){\oplus}\hslash_{\circ}\left(\frac{{\mu}-{s_{3}}}{{s_{4}}-{s_{3}}}\right){\Psi}({s_{4}}-{\mu})\right]{g}({\mu})\mathrm{d}{\mu}. \end{align} (3.19)

    Finally, combining inequalities (3.16) and (3.19), we achieve the Hermite-Hadamard-Fejer inequality.
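    As with Theorem 3.1, the weighted inequality just proved can be probed numerically. The sketch below is ours and purely illustrative: it assumes \hslash_{\circ}({\varphi}) = {\varphi} , takes the lower endpoint mapping \Psi_{*}({\mu}, \delta^{1}) = 3\delta^{1}{\mu}^3 on [0, 2] together with the symmetric weight g({\mu}) = ({\mu}-1)^2 , and evaluates the three terms of Theorem 3.2 by quadrature.

```python
from scipy.integrate import quad

# Quadrature check of Theorem 3.2 (our own sketch), assuming h(phi) = phi,
# Psi_*(mu) = 3*delta1*mu**3 and the symmetric weight g(mu) = (mu - 1)**2 on [0, 2].
s3, s4 = 0.0, 2.0
mid = 0.5 * (s3 + s4)
h = lambda p: p
g = lambda mu: (mu - 1.0) ** 2

def w(mu, tau):
    return (s4 - mu) ** (tau - 1) + (mu - s3) ** (tau - 1)

def terms(delta1, tau):
    Psi = lambda x: 3 * delta1 * x ** 3
    c = tau / (s4 - s3) ** tau
    frac_g = c * quad(lambda m: g(m) * w(m, tau), s3, s4)[0]      # fractional integrals of g alone
    left = Psi(mid) * frac_g / (2 * h(0.5)) + c * quad(
        lambda m: Psi(abs(m - mid)) * g(m) * w(m, tau), s3, s4)[0]
    middle = c * quad(lambda m: Psi(m) * g(m) * w(m, tau), s3, s4)[0]
    right = (Psi(s3) + Psi(s4)) * c * quad(
        lambda m: (h((s4 - m) / (s4 - s3)) + h((m - s3) / (s4 - s3))) * g(m) * w(m, tau), s3, s4)[0] \
        - c * quad(lambda m: (h((s4 - m) / (s4 - s3)) * Psi(m - s3)
                              + h((m - s3) / (s4 - s3)) * Psi(s4 - m)) * g(m) * w(m, tau), s3, s4)[0]
    return left, middle, right

for delta1, tau in [(0.4, 1.0), (0.8, 2.0)]:
    L, M, R = terms(delta1, tau)
    assert L <= M <= R
    print(f"delta1={delta1}, tau={tau}: {L:.4f} <= {M:.4f} <= {R:.4f}")
```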

    Now we discuss some special scenarios of Theorem 3.2.

    ● By setting \hslash_{\circ}({\varphi}) = {\varphi} in Theorem 3.2, we have

    \begin{align*} &{\Psi}\left(\frac{{s_{3}}+{s_{4}}}{2}\right)\frac{\Gamma(\tau+1)}{({s_{4}}-{s_{3}})^{\tau}}\left[J_{{s_{3}}^{+}}^{\tau}g({s_{4}})+ J_{{s_{4}}^{-}}^{\tau}g({s_{3}})\right]\nonumber\\ &\quad{\oplus}\frac{\tau}{({s_{4}}-{s_{3}})^{\tau}}\int_{s_{3}}^{s_{4}}\left[({s_{4}}-{\mu})^{\tau-1}+({\mu}-{s_{3}})^{\tau-1}\right] {\Psi}\left(\left|{\mu}-\frac{{s_{3}}+{s_{4}}}{2}\right|\right){g}({\mu})\mathrm{d}{\mu}\nonumber\\ &\preceq\frac{\Gamma(\tau+1)}{({s_{4}}-{s_{3}})^{\tau}}\left[J_{{s_{3}}^{+}}^{\tau}{\Psi}{g}({s_{4}}){\oplus} J_{{s_{4}}^{-}}^{\tau}{\Psi}{g}({s_{3}})\right]\nonumber\\ &\preceq ({\Psi}({s_{3}}){\oplus}{\Psi}({s_{4}}))\cdot\frac{\tau}{({s_{4}}-{s_{3}})^{\tau}}\int_{s_{3}}^{s_{4}} \left[({s_{4}}-{\mu})^{\tau-1}+({\mu}-{s_{3}})^{\tau-1}\right]{g}({\mu})\mathrm{d}{\mu}\nonumber\\ &\quad{{\ominus}_{g}}\frac{\tau}{({s_{4}}-{s_{3}})^{\tau}}\int_{s_{3}}^{s_{4}}[({s_{4}}-{\mu})^{\tau-1}+({\mu}-{s_{3}})^{\tau-1}] \left[\left(\frac{{s_{4}}-{\mu}}{{s_{4}}-{s_{3}}}\right){\Psi}({\mu}-{s_{3}}){\oplus}\left(\frac{{\mu}-{s_{3}}}{{s_{4}}-{s_{3}}}\right) {\Psi}({s_{4}}-{\mu})\right]{g}({\mu})\mathrm{d}{\mu}. \end{align*}

    ● By setting \hslash_{\circ}({\varphi}) = {\varphi}^s in Theorem 3.2, we have

    \begin{align*} &2^{s-1}{\Psi}\left(\frac{{s_{3}}+{s_{4}}}{2}\right)\frac{\Gamma(\tau+1)}{({s_{4}}-{s_{3}})^{\tau}}\left[J_{{s_{3}}^{+}}^{\tau}g({s_{4}})+ J_{{s_{4}}^{-}}^{\tau}g({s_{3}})\right]\nonumber\\ &\quad{\oplus}\frac{\tau}{({s_{4}}-{s_{3}})^{\tau}}\int_{s_{3}}^{s_{4}}\left[({s_{4}}-{\mu})^{\tau-1}+({\mu}-{s_{3}})^{\tau-1}\right] {\Psi}\left(\left|{\mu}-\frac{{s_{3}}+{s_{4}}}{2}\right|\right){g}({\mu})\mathrm{d}{\mu}\nonumber\\ &\preceq\frac{\Gamma(\tau+1)}{({s_{4}}-{s_{3}})^{\tau}}\left[J_{{s_{3}}^{+}}^{\tau}{\Psi}{g}({s_{4}}){\oplus} J_{{s_{4}}^{-}}^{\tau}{\Psi}{g}({s_{3}})\right]\nonumber\\ &\preceq ({\Psi}({s_{3}}){\oplus}{\Psi}({s_{4}}))\cdot\frac{\tau}{({s_{4}}-{s_{3}})^{\tau}}\int_{s_{3}}^{s_{4}} \left[({s_{4}}-{\mu})^{\tau-1}+({\mu}-{s_{3}})^{\tau-1}\right]\left[\left(\frac{{s_{4}}-{\mu}}{{s_{4}}-{s_{3}}}\right)^s +\left(\frac{{\mu}-{s_{3}}}{{s_{4}}-{s_{3}}}\right)^s\right]{g}({\mu})\mathrm{d}{\mu}\nonumber\\ &\quad{{\ominus}_{g}}\frac{\tau}{({s_{4}}-{s_{3}})^{\tau}}\int_{s_{3}}^{s_{4}}[({s_{4}}-{\mu})^{\tau-1}+({\mu}-{s_{3}})^{\tau-1}] \left[\left(\frac{{s_{4}}-{\mu}}{{s_{4}}-{s_{3}}}\right)^s{\Psi}({\mu}-{s_{3}}){\oplus}\left(\frac{{\mu}-{s_{3}}}{{s_{4}}-{s_{3}}}\right)^s {\Psi}({s_{4}}-{\mu})\right]{g}({\mu})\mathrm{d}{\mu}. \end{align*}

    ● By setting \hslash_{\circ}({\varphi}) = 1 in Theorem 3.2, we have

    \begin{align*} &\frac{1}{2\hslash_{\circ}\left(\frac{1}{2}\right)}{\Psi}\left(\frac{{s_{3}}+{s_{4}}}{2}\right)\frac{\Gamma(\tau+1)}{({s_{4}}-{s_{3}})^{\tau}}\left[J_{{s_{3}}^{+}}^{\tau}g({s_{4}})+ J_{{s_{4}}^{-}}^{\tau}g({s_{3}})\right]\nonumber\\ &\quad{\oplus}\frac{\tau}{({s_{4}}-{s_{3}})^{\tau}}\int_{s_{3}}^{s_{4}}\left[({s_{4}}-{\mu})^{\tau-1}+({\mu}-{s_{3}})^{\tau-1}\right] {\Psi}\left(\left|{\mu}-\frac{{s_{3}}+{s_{4}}}{2}\right|\right){g}({\mu})\mathrm{d}{\mu}\nonumber\\ &\preceq\frac{\Gamma(\tau+1)}{({s_{4}}-{s_{3}})^{\tau}}\left[J_{{s_{3}}^{+}}^{\tau}{\Psi}{g}({s_{4}}){\oplus} J_{{s_{4}}^{-}}^{\tau}{\Psi}{g}({s_{3}})\right]\nonumber\\ &\preceq 2({\Psi}({s_{3}}){\oplus}{\Psi}({s_{4}}))\cdot\frac{\tau}{({s_{4}}-{s_{3}})^{\tau}}\int_{s_{3}}^{s_{4}} \left[({s_{4}}-{\mu})^{\tau-1}+({\mu}-{s_{3}})^{\tau-1}\right]{g}({\mu})\mathrm{d}{\mu}\nonumber\\ &\quad{{\ominus}_{g}}\frac{\tau}{({s_{4}}-{s_{3}})^{\tau}}\int_{s_{3}}^{s_{4}}[({s_{4}}-{\mu})^{\tau-1}+({\mu}-{s_{3}})^{\tau-1}] \left[{\Psi}({\mu}-{s_{3}}){\oplus}{\Psi}({s_{4}}-{\mu})\right]{g}({\mu})\mathrm{d}{\mu}. \end{align*}

    Remark 3.2. For \tau = 1 , Theorem 3.2 reduces to Theorem 2.6. By selecting \hslash_{\circ}({\varphi}) = {\varphi}^{-s}, {\varphi}(1-{\varphi}), \exp({{{\varphi}}})-1 in Theorem 3.2, we get various fractional \verb"F".\verb"N".\verb"V" Hermite-Hadamard-Fejer inequalities for different classes of super-quadraticity. Also, by taking \Psi_{*}({\mu}, \delta^{1}) = \Psi^{*}({\mu}, \delta^{1}) and \delta^{1} = 1 in Theorem 3.2, we obtain the fractional Hermite-Hadamard-Fejer inequality for {\hslash_{\circ}} -super-quadratic mappings.

    Example 3.2. Let {\Psi}:[{s_{3}}, \, {s_{4}}] = [0, 2]\rightarrow \Upsilon_{\delta^{1}} be a fuzzy valued super-quadratic mapping, which is defined as

    \begin{align*} {\Psi}_{{{\mu}}}({\mu_{1}}) = \left\{ \begin{array}{ll} \frac{{\mu_{1}}}{3{{\mu}}^3}, \quad {\mu_{1}}\in [0, 3{{\mu}}^3]\\ \frac{6{{\mu}}^3-{\mu_{1}}}{3{{\mu}}^3}, \quad {\mu_{1}}\in(3{{\mu}}^3, 6{{\mu}}^3]. \end{array} \right. \end{align*}

    and its level cuts are {\Psi}_{{\delta^{1}}} = \left[3{\delta^{1}} {{\mu}}^3, \, (6-3 {\delta^{1}}){{\mu}}^3 \right] . Also, g:[0, 2]\rightarrow \mathbb{R} is a symmetric integrable mapping defined as g({\mu}) = ({\mu}-1)^2 . Both mappings fulfill the conditions of Theorem 3.2; then

    \begin{align*} {\mathit{\hbox{Left Term}}}& = \left[\frac{\left(2 \left(2^{\tau } (\tau -1) (\tau (\tau +1) (\tau (\tau +5)+26)+120)+240\right)\right) (3 \delta^{1} )}{2^{\tau } ((\tau +1) (\tau +2) (\tau +3) (\tau +4) (\tau +5))}, \right.\nonumber\\ &\left.\quad\frac{\left(2 \left(2^{\tau } (\tau -1) (\tau (\tau +1) (\tau (\tau +5)+26)+120)+240\right)\right) (6-3 \delta^{1} )}{2^{\tau } ((\tau +1) (\tau +2) (\tau +3) (\tau +4) (\tau +5))}\right], \nonumber\\ {\mathit{\hbox{Middle Term}}}& = \left[\frac{\left(2^{\tau +3} (\tau (\tau (\tau (\tau +3)+10)-10)+24)\right) (3 \delta^{1} )}{2^{\tau } ((\tau +1) (\tau +2) (\tau +3) (\tau +4))}, \, \frac{\left(2^{\tau +3} (\tau (\tau (\tau (\tau +3)+10)-10)+24)\right) (6-3 \delta^{1} )}{2^{\tau } ((\tau +1) (\tau +2) (\tau +3) (\tau +4))}\right], \nonumber\\ {\mathit{\hbox{Right Term}}}& = \left[\frac{\left(2^{\tau +1} \left(\tau ^6+9 \tau ^5+55 \tau ^4+75 \tau ^3+304 \tau ^2-444 \tau +720\right)\right) (21 \delta^{1} )}{2^{\tau } (\tau (\tau +1) (\tau +2) (\tau +3) (\tau +4) (\tau +5) (\tau +6))}, \right.\nonumber\\ &\left.\quad\frac{\left(2^{\tau +1} \left(\tau ^6+9 \tau ^5+55 \tau ^4+75 \tau ^3+304 \tau ^2-444 \tau +720\right)\right) (48-21 \delta^{1} )}{2^{\tau } (\tau (\tau +1) (\tau +2) (\tau +3) (\tau +4) (\tau +5) (\tau +6))}\right]. \end{align*}

    To visualize the above formulations, we fix \delta^{1} and vary \tau .

    Figure 4.  Graphical validation of Theorem 3.2.

    Note that L.L.F, L.U.F, M.L.F, M.U.F, R.L.F, and R.U.F denote the endpoint mappings of the left, middle, and right terms of Theorem 3.2.

    Table 4.  Numerical validation of Example 3.2.
    (\delta^{1}, \tau) L_*(\delta^{1}, \tau) L^*(\delta^{1}, \tau) M_*(\delta^{1}, \tau) M^*(\delta^{1}, \tau) R_*(\delta^{1}, \tau) R^*(\delta^{1}, \tau)
    (0.3, 0.5) 0.548152 3.1062 2.67429 15.1543 7.00699 46.3796
    (0.4, 1) 0.4000 1.6000 2.2400 8.9600 2.4000 9.6000
    (0.5, 1.5) 0.451568 1.3547 2.58701 7.76104 2.68392 8.05175
    (0.8, 2) 0.8000 1.2000 4.4800 6.7200 4.8000 7.2000


    Now, we give some applications of our proposed results. First, we recall the binary means of positive real numbers.

    (1) The arithmetic mean:

    \begin{equation*} A({{{s_{3}}}}, {{{s_{4}}}}) = \frac{{{{s_{3}}}}+{{{s_{4}}}}}{2}, \end{equation*}

    (2) The generalized \log -mean:

    \begin{equation*} L_{r}({{{s_{3}}}}, {{{s_{4}}}}) = \Bigg[\frac{{{{s_{4}}}}^{r+1}-{{{s_{3}}}}^{r+1}}{(r+1)({{{s_{4}}}}-{{{s_{3}}}})}\Bigg]^{\frac{1}{r}};\;\;r\in \mathbb{R}\setminus \{-1, 0\}. \end{equation*}

    Proposition 4.1. For {s_{4}} > {s_{3}}\geq0 , we have

    \begin{align*} &\left[3\delta^{1}, (6-3\delta^{1})\right]\left[A^3({s_{3}}, {s_{4}})+\frac{({s_{4}}-{s_{3}})^3}{2^5}\right]\preceq\left[3\delta^{1}, (6-3\delta^{1})\right]L_{3}^{3}({s_{3}}, {s_{4}})\nonumber\\ &\preceq \left[3\delta^{1}, (6-3\delta^{1})\right]A({s_{3}}^3, {s_{4}}^3)\ominus_{g}\left[3\delta^{1}, (6-3\delta^{1})\right]\frac{2({s_{4}}-{s_{3}})^3}{20}. \end{align*}

    Proof. This result is obtained by applying {\Psi}({\mu}) = [3\delta^{1} {{\mu}}^3, (6-3\delta^{1}){{\mu}}^3] in Theorem 2.5.
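    Since the common interval factor [3\delta^{1}, 6-3\delta^{1}] has positive endpoints, Proposition 4.1 reduces to the scalar chain A^3({s_{3}}, {s_{4}})+({s_{4}}-{s_{3}})^3/2^5 \leq L_{3}^{3}({s_{3}}, {s_{4}}) \leq A({s_{3}}^3, {s_{4}}^3)-2({s_{4}}-{s_{3}})^3/20 . The following sketch is ours and purely illustrative; it verifies this scalar chain on a small grid of interval endpoints.

```python
import itertools

# Numerical check (our own sketch) of the scalar core of Proposition 4.1:
# A(a, b)**3 + (b - a)**3 / 32 <= L_3(a, b)**3 <= A(a**3, b**3) - 2 * (b - a)**3 / 20.
def A(a, b):
    return 0.5 * (a + b)

def L3_cubed(a, b):
    # generalized log-mean L_3(a, b) raised to the third power
    return (b**4 - a**4) / (4.0 * (b - a))

for a, b in itertools.combinations([0.0, 0.5, 1.0, 2.0, 3.5, 5.0], 2):
    lower = A(a, b) ** 3 + (b - a) ** 3 / 32.0
    upper = A(a ** 3, b ** 3) - 2.0 * (b - a) ** 3 / 20.0
    assert lower <= L3_cubed(a, b) <= upper, (a, b)
    print(f"[{a}, {b}]: {lower:.4f} <= {L3_cubed(a, b):.4f} <= {upper:.4f}")
```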

    Note that Proposition 4.1 provides bounds for the generalized logarithmic mean. Next, we give refinements of the triangle inequality.

    Proposition 4.2. Let \{{{\mu}}_{{\nu}}\}\in [{s}_{3}, {s}_{4}] be an increasing positive sequence. Then from Theorem 2.2, we have

    \begin{align*} \left|\sum\limits_{{\nu} = 1}^{{\vartheta}}{{\mu}}_{{\nu}}\right|\leq {\vartheta}^2 \sum\limits_{{\nu} = 1}^{{\vartheta}}|{{\mu}}_{{\nu}}|- {\vartheta}^2 \sum\limits_{{\nu} = 1}^{{\vartheta}}\left|{{\mu}}_{{\nu}}-\frac{1}{{\vartheta}}\sum\limits_{j = 1}^{{\vartheta}}{{\mu}}_{j}\right|, \end{align*}

    and

    \begin{align*} \|\sum\limits_{{\nu} = 1}^{{\vartheta}}{{\mu}}_{{\nu}}\|\leq {\vartheta}^2 \sum\limits_{{\nu} = 1}^{{\vartheta}}\|{{\mu}}_{{\nu}}\|- {\vartheta}^2 \sum\limits_{{\nu} = 1}^{{\vartheta}}\|{{\mu}}_{{\nu}}-\frac{1}{{\vartheta}}\sum\limits_{j = 1}^{{\vartheta}}{{\mu}}_{j}\|. \end{align*}

    Proof. Since {\Psi}({{\mu}}) = |{{\mu}}| and {\Psi}({\mu}) = \|{\mu}\| are \hslash_{\circ} -super-quadratic mappings, applying these mappings in Theorem 2.2 with \hslash_{\circ}({{\mu}}) = \frac{1}{{{\mu}}} , w_{{\nu}} = 1 , \Psi_{*}({{\mu}}, \delta^{1}) = \Psi^{*}({{\mu}}, \delta^{1}) , and \delta^{1} = 1 , we acquire the desired inequalities.

    Proposition 4.3. Let \{{{\mu}}_{{\nu}}\}\in [{s}_{3}, {s}_{4}] be an increasing positive sequence. Then, from Theorem 2.2, we have

    \begin{align*} \left|\sum\limits_{{\nu} = 1}^{{\vartheta}}{{\mu}}_{{\nu}}\right|^r\leq \sum\limits_{{\nu} = 1}^{{\vartheta}}|{{\mu}}_{{\nu}}|^r- \sum\limits_{{\nu} = 1}^{{\vartheta}}\left|{{\mu}}_{{\nu}}-\frac{1}{{\vartheta}}\sum\limits_{j = 1}^{{\vartheta}}{{\mu}}_{j}\right|^r. \end{align*}

    Particularly, we have

    \begin{align*} \left|{{\mu}}_{1}+{{\mu}}_{2}\right|^r\leq |{{\mu}}_{1}|^r+|{{\mu}}_{2}|^r- 2^{1-r}|{{\mu}}_{1}-{{\mu}}_{2}|^r. \end{align*}

    Proof. Since {\Psi}({{\mu}}) = |{{\mu}}|^r with r\geq1 is an \hslash_{\circ} -super-quadratic mapping, applying {\Psi}({{\mu}}) in Theorem 2.2 with \hslash_{\circ}({{\mu}}) = {{{\mu}}} , w_{{\nu}} = 1 , \Psi_{*}({{\mu}}, \delta^{1}) = \Psi^{*}({{\mu}}, \delta^{1}) , and \delta^{1} = 1 , we acquire the desired inequalities.

    Proposition 4.4. Let \{{{\mu}}_{{\nu}}\}\in [{s}_{3}, {s}_{4}] be an increasing positive sequence. Then, from Theorem 2.4, we have

    \begin{align*} \sum\limits_{{\nu} = 1}^{{\vartheta}}|{{\mu}}_{{\nu}}|^r &\leq {\vartheta} [|{{\mu}}_{1}|^r+|{{\mu}}_{{\vartheta}}|^r]-{\vartheta}\left|\frac{{{\mu}}_{1}+{{\mu}}_{{\vartheta}}-{\vartheta}^{-1}\sum\limits_{{\nu} = 1}^{{\vartheta}}{{\mu}}_{j}}{{\vartheta}}\right|^r \\ &\, \, -\sum\limits_{{\nu} = 1}^{{\vartheta}}[|{{\mu}}_{{\nu}}-{{\mu}}_{1}|^r+|{{\mu}}_{{\vartheta}}-{{\mu}}_{{\nu}}|^r]- \sum\limits_{{\nu} = 1}^{{\vartheta}}\left|{{\mu}}_{{\nu}}-\frac{1}{{\vartheta}}\sum\limits_{j = 1}^{{\vartheta}}{{\mu}}_{j}\right|^r, \end{align*}

    and

    \begin{align*} \sum\limits_{{\nu} = 1}^{{\vartheta}}\|{{\mu}}_{{\nu}}\|^r &\leq {\vartheta} [\|{{\mu}}_{1}\|^r+\|{{\mu}}_{{\vartheta}}\|^r]-{\vartheta}\left\|\frac{{{\mu}}_{1}+{{\mu}}_{{\vartheta}}-{\vartheta}^{-1}\sum\limits_{{\nu} = 1}^{{\vartheta}}{{\mu}}_{j}}{{\vartheta}}\right\|^r \\ &\, \, -\sum\limits_{{\nu} = 1}^{{\vartheta}}[\|{{\mu}}_{{\nu}}-{{\mu}}_{1}\|^r+\|{{\mu}}_{{\vartheta}}-{{\mu}}_{{\nu}}\|^r]- \sum\limits_{{\nu} = 1}^{{\vartheta}}\left\|{{\mu}}_{{\nu}}-\frac{1}{{\vartheta}}\sum\limits_{j = 1}^{{\vartheta}}{{\mu}}_{j}\right\|^r. \end{align*}

    Proof. Since {\Psi}({{\mu}}) = |{{\mu}}|^r and {\Psi}({\mu}) = \|{\mu}\|^r for r\geq1 are \hslash_{\circ} -super-quadratic mappings, applying these mappings in Theorem 2.4 with \hslash_{\circ}({\varphi}) = {\varphi} , w_{{\nu}} = 1 , \Psi_{*}({{\mu}}, \delta^{1}) = \Psi^{*}({{\mu}}, \delta^{1}) , and \delta^{1} = 1 , we acquire the desired inequalities.

    The theory of inequalities is a principal tool for investigating various classes of mappings. In this work, we introduced the notion of a fuzzy number-valued super-quadratic mapping based on the LR partial order relation, \delta^{1} -level mappings, and a nonnegative mapping \hslash_{\circ} . This class is new in the literature and reduces to several known classes of super-quadraticity, such as fuzzy-valued super-quadratic mappings, fuzzy-valued s -super-quadratic mappings, fuzzy-valued Godunova-type super-quadratic mappings, and many more; moreover, the results obtained from this class refine the corresponding classical inequalities. We developed several Jensen- and Hadamard-like inequalities for this class of mappings. The study is significant in several respects: it is the first to investigate super-quadratic mappings in a fuzzy environment, the proposed definition is unified and strengthens the properties of the class of fuzzy number-valued mappings, and it is the first study to explore fuzzy number-valued Hermite-Hadamard-Fejer type inequalities for super-quadraticity. The obtained results provide better approximations than existing results. In the future, we intend to address these inequalities by leveraging generalized fractional operators, quantum, and symmetric quantum calculus to analyse bounds for error inequalities such as the Ostrowski, Simpson, Bullen, and Boole inequalities. By utilising this class of mappings together with the Hausdorff-Pompeiu distance and generalized differentiability based on Hukuhara differences, and by adopting similar techniques, one can introduce more general classes of mappings and their applications in different domains. We will also consider fuzzy-valued inequalities for totally ordered fuzzy-valued super-quadratic mappings and their use in optimization. We hope the strategy, techniques, and ideas developed in this study will open new avenues for research.

    Muhammad Zakria Javed: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Writing-original draft, Writing-review and editing, Visualization; Muhammad Uzair Awan: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Writing-review and editing, Visualization, Supervision; Loredana Ciurdariu: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Writing-review and editing, Visualization; Omar Mutab Alsalami: Conceptualization, Software, Validation, Formal analysis, Investigation, Writing-review and editing, Visualization. All authors have read and agreed for the publication of this manuscript.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This research was funded by TAIF University, TAIF, Saudi Arabia, Project No. (TU-DSPP-2024-258).

    The authors are thankful to the editor and anonymous reviewers for their valuable comments and suggestions. The authors extend their appreciation to TAIF University, Saudi Arabia, for supporting this work through project number (TU-DSPP-2024-258).

    The authors declare no conflicts of interest.


