Natural language processing (NLP) performs a vital function in text summarization, a task targeted at refining the crucial information from the massive quantity of textual data. NLP methods allow computers to comprehend and process human language, permitting the development of advanced summarization methods. Text summarization includes the automatic generation of a concise and coherent summary of a specified document or collection of documents. Extracting significant insights from text data is crucial as it provides advanced solutions to end-users and business organizations. Automatic text summarization (ATS) computerizes text summarization by decreasing the initial size of the text without the loss of main data features. Deep learning (DL) approaches exhibited significant performance in abstractive and extractive summarization tasks. This research designed an extractive text summarization using NLP with an optimal DL (ETS-NLPODL) model. The major goal of the ETS-NLPODL technique was to exploit feature selection with a hyperparameter-tuned DL model for summarizing the text. In the ETS-NLPODL technique, an initial step of data preprocessing was involved to convert the input text into a compatible format. Next, a feature extraction process was carried out and the optimal set of features was chosen by the hunger games search optimization (HGSO) algorithm. For text summarization, the ETS-NLPODL model used an attention-based convolutional neural network with a gated recurrent unit (ACNN-GRU) model. Finally, the mountain gazelle optimization (MGO) algorithm was employed for the optimal hyperparameter selection of the ACNN-GRU model. The experimental results of the ETS-NLPODL system were examined under the benchmark dataset. The experimentation outcomes pointed out that the ETS-NLPODL technique gained better performance over other methods concerning diverse performance measures.
Citation: Abdulkhaleq Q. A. Hassan, Badriyya B. Al-onazi, Mashael Maashi, Abdulbasit A. Darem, Ibrahim Abunadi, Ahmed Mahmud. Enhancing extractive text summarization using natural language processing with an optimal deep learning model[J]. AIMS Mathematics, 2024, 9(5): 12588-12609. doi: 10.3934/math.2024616
[1] Waqar Afzal, Sayed M. Eldin, Waqas Nazeer, Ahmed M. Galal. Some integral inequalities for harmonical cr-h-Godunova-Levin stochastic processes. AIMS Mathematics, 2023, 8(6): 13473-13491. doi: 10.3934/math.2023683
[2] Waqar Afzal, Thongchai Botmart. Some novel estimates of Jensen and Hermite-Hadamard inequalities for h-Godunova-Levin stochastic processes. AIMS Mathematics, 2023, 8(3): 7277-7291. doi: 10.3934/math.2023366
[3] Iqra Nayab, Shahid Mubeen, Rana Safdar Ali, Faisal Zahoor, Muath Awadalla, Abd Elmotaleb A. M. A. Elamin. Novel fractional inequalities measured by Prabhakar fuzzy fractional operators pertaining to fuzzy convexities and preinvexities. AIMS Mathematics, 2024, 9(7): 17696-17715. doi: 10.3934/math.2024860
[4] Waqar Afzal, Khurram Shabbir, Thongchai Botmart. Generalized version of Jensen and Hermite-Hadamard inequalities for interval-valued (h1,h2)-Godunova-Levin functions. AIMS Mathematics, 2022, 7(10): 19372-19387. doi: 10.3934/math.20221064
[5] Waqar Afzal, Khurram Shabbir, Thongchai Botmart, Savin Treanţă. Some new estimates of well known inequalities for (h1,h2)-Godunova-Levin functions by means of center-radius order relation. AIMS Mathematics, 2023, 8(2): 3101-3119. doi: 10.3934/math.2023160
[6] Waqar Afzal, Khurram Shabbir, Savin Treanţă, Kamsing Nonlaopon. Jensen and Hermite-Hadamard type inclusions for harmonical h-Godunova-Levin functions. AIMS Mathematics, 2023, 8(2): 3303-3321. doi: 10.3934/math.2023170
[7] Waqar Afzal, Waqas Nazeer, Thongchai Botmart, Savin Treanţă. Some properties and inequalities for generalized class of harmonical Godunova-Levin function via center radius order relation. AIMS Mathematics, 2023, 8(1): 1696-1712. doi: 10.3934/math.2023087
[8] Sabila Ali, Rana Safdar Ali, Miguel Vivas-Cortez, Shahid Mubeen, Gauhar Rahman, Kottakkaran Sooppy Nisar. Some fractional integral inequalities via h-Godunova-Levin preinvex function. AIMS Mathematics, 2022, 7(8): 13832-13844. doi: 10.3934/math.2022763
[9] Chahn Yong Jung, Muhammad Shoaib Saleem, Shamas Bilal, Waqas Nazeer, Mamoona Ghafoor. Some properties of η-convex stochastic processes. AIMS Mathematics, 2021, 6(1): 726-736. doi: 10.3934/math.2021044
[10] Mujahid Abbas, Waqar Afzal, Thongchai Botmart, Ahmed M. Galal. Jensen, Ostrowski and Hermite-Hadamard type inequalities for h-convex stochastic processes by means of center-radius order relation. AIMS Mathematics, 2023, 8(7): 16013-16030. doi: 10.3934/math.2023817
In mathematics, stochastic processes are representations of random changes in systems. They can be described as random collections of variables by applying probability theory and other disciplines. Several academic fields, including mathematics, physics, economics, operational research, and finance, have given rise to interest in stochastic processes. Different random models have been used in reliability analysis to mathematically represent complex phenomena and systems that change in a stochastic way [1,2]. Stochastic models are best suited to study such situations because they can be specified robustly and manipulated easily. Relevation transforms are popular in this field; they describe the lifespan of a component that is replaced by another component of the same age, but with a different lifetime distribution, at a random failure time. An overview of stochastic optimization under constraints is presented in [3], including applications to insurance, finance, and portfolios with a diverse set of investors. Whenever there is an expectation over random states involved in a stochastic optimization problem, a constrained stochastic successive convex approximation algorithm can be applied [4]. Some recent applications of stochastic processes in different disciplines can be found in [5,6,7].
In certain cases, interval analysis can be a useful method of assessing uncertainty. Among the various branches of mathematics and topology, interval analysis is concerned with the analysis of intervals. Today, it is also very important for reducing uncertainty in a number of computing languages, such as Python, Mathematica, JavaScript, and MATLAB. This has resulted in an increase in interest in this subject recently [8,9,10,11]. In addition to being applied in many disciplines, it has also been connected to inequalities by using various interval order approaches, including inclusion, the center-radius order relation, the fuzzy order relation, the pseudo order relation, and the left-right order relation. In relation to interval analysis, each has its own characteristics and is calculated differently; some are total order relations, while others are partial order relations.
A significant portion of linear and nonlinear optimization problems are affected by inequalities. Mathematicians use convex inequalities extensively to understand many different issues. Among the various inequalities, the following three are the most important and carry significant meaning in various settings. Hermite-Hadamard and Jensen inequalities are geometrically interpretable statements about convex mappings that are utilized in a variety of results, whereas Ostrowski type inequalities, and their different variants, allow us to obtain a new estimate of a function based on its integral mean, which can be applied to the estimation of quadrature rules in numerical analysis. The relationship between convex inequalities and stochastic processes is well known. In 1980, Nikodem first defined convex stochastic processes and established some intriguing properties [12]. Skowronski later expanded these results and presented them in a more comprehensive manner [13]. Li and Hao [14] constructed some intriguing Hermite-Hadamard inequalities with various properties by using h-convex stochastic processes. Budak and Sarikaya [15] took inspiration from Li and Hao's results and refined them by using various improved variants of the Hermite-Hadamard inequality. Several academics have also developed related inequalities by combining different concepts of convex stochastic processes with different approaches [16,17,18,19,20]. Tunc initially utilized the concept of h-convexity and developed famous Ostrowski-type inequalities [21]. In [22], the authors used stochastic processes for convex mappings and developed an Ostrowski type inequality, among other interesting results.
Due to the accuracy of its results, interval analysis has increasingly been applied in various fields of mathematics over the past few decades; thus, by using the concept of set-valued mappings in the context of intervals, authors have connected inequalities with interval inclusions in a variety of ways [23]. With the help of Hukuhara differentiability, Chalco-Cano et al. [24] developed Ostrowski-type inequalities. Chen et al. [25] developed Ostrowski type inclusions for ηh-convex mappings. Budak et al. [26] developed Ostrowski-type results by using fractional integral operators. Bai et al. [27] developed a famous double inequality and Jensen-type inclusion by using interval non-convex (h1,h2) mappings. Agahi and Babakhani [28] developed inequalities by using fractional integral operators in a convex stochastic process. Hernandez [29] utilized the notion of an (m,h1,h2) dominated G-convex stochastic process and developed a generalized form of Hermite-Hadamard inequalities. Vivas-Cortez and García [30] created some variants of Ostrowski type inequalities by using the idea of (m,h1,h2)-convex mappings. In 2023, Afzal and Botmart [31] developed interval stochastic processes in connection with Godunova-Levin functions and refined some previously published results. There are some other recent developments regarding Godunova-Levin type functions obtained by using a variety of integral operators and order relations [32,33,34,35,36,37].
Recently, Afzal et al. [31,38] formulated the Ostrowski-Hermite-Hadamard and Jensen-type inclusions based on the notions of h-convex and h-Godunova-Levin stochastic processes.
Theorem 1.1. [31]. Let h:(0,1)→R+ with h≠0, and let V=[V_,¯V]:I×v→R_I^+, where [a,b]⊆I⊆R, be an h-Godunova-Levin stochastic process, i.e., V∈SGPX(h,[a,b],R_I^+). Then, for all a,b∈I and y∈(0,1), one has
(h(1/2)/2) V((a+b)/2, ⋅) ⊇_KC (1/(b−a)) ∫_a^b V(y,⋅) dy ⊇_KC [V(a,⋅)+V(b,⋅)] ∫_0^1 dy/h(y). (1.1)
Theorem 1.2. [31]. Let gi∈R+ with Gk = ∑_{i=1}^k gi, and let h:(0,1)→R+. Suppose that the interval-valued stochastic process V=[V_,¯V]:I×v→R_I^+, where I⊆R, is an h-Godunova-Levin stochastic process, i.e., V∈SGPX(h,[a,b],R_I^+). Then, for yi∈(0,1), one has
V((1/Gk) ∑_{i=1}^k gi yi, ⋅) ⊇_KC ∑_{i=1}^k [V(yi,⋅)/h(gi/Gk)]. (1.2)
Theorem 1.3. [38]. Consider a non-negative function h:(0,1)→R with y ≤ 1/h(y) for each y∈(0,1). Let V:I×v⊆R→R_I^+ be a differentiable mean square interval-valued stochastic process on I° with V′ integrable in the mean square sense on [a,b]. If |V′| is an h-convex stochastic process satisfying |V′(x,⋅)| ≤ γ for each x, then one has
{|V_(x,⋅) − (1/(b−a)) ∫_a^b V_(y,⋅) dy|, |¯V(x,⋅) − (1/(b−a)) ∫_a^b ¯V(y,⋅) dy|} ⊇_KC (γ[(x−a)²+(b−x)²]/(b−a)) ∫_0^1 [h(y²) + h(y−y²)] dy
∀ x∈[a,b].
The main contribution of this study is the introduction of a more generalized and larger class of Godunova-Levin stochastic processes that generalizes the recently developed results in [31,38] to a more refined form, along with other findings that we unify through some remarks. Moreover, we employ the Kulisch-Miranker type of order relation, which is rarely discussed in conjunction with stochastic processes. Additionally, we develop Hermite-Hadamard type inequalities for this class of generalized convexity for the first time by using set-valued mappings for fractional integral operators. In addition, we develop a new and improved form of Ostrowski and sequential variants of discrete Jensen type inequalities. The study of fractional integral inequalities is a very important and fascinating research topic, and various recent research articles adopting fractional integral approaches are closely related to the current work; it would be interesting to develop these results by using fractional operators in a stochastic sense [39,40,41,42].
A review of the literature related to the developed inequalities and various articles [29,30,31,38] motivated us to develop an improved and modified version of Ostrowski-Jensen and Hermite-Hadamard type inclusions for a generalized class of Godunova-Levin stochastic processes. The main results are backed up with numerically significant examples that demonstrate their validity. The presentation of this note is as follows. In Section 2, our primary focus is on some essential elements of interval calculus. In Section 3, we discuss stochastic processes and some of their characteristics, as well as stochastic convexities and the various pertinent classes to which they belong. Section 4 presents the definition of a novel class of Godunova-Levin stochastic processes, and Section 5 uses fractional and classical integral operators to derive several variants of Hermite-Hadamard type inclusions. In Section 6, we develop an enhanced and improved version of Ostrowski type inclusions. In Section 7, we develop a more generalized form of discrete sequential Jensen type inclusions. Lastly, we summarize our results with a brief conclusion in Section 8.
Let R be the one-dimensional Euclidean space, and consider RI as the family of all non-empty compact convex subsets of R, that is
RI = {[ρ, η] : ρ, η ∈ R and ρ ≤ η}.
The Hausdorff metric on RI is defined as
D(ρ,η) = max{d(ρ,η), d(η,ρ)} (2.1)
where d(ρ,η) = max_{ρ1∈ρ} d(ρ1,η) and d(ρ1,η) = min_{η1∈η} d(ρ1,η1) = min_{η1∈η} |ρ1−η1|.
Remark 2.1. A parallel representation of the Hausdorff metric, as stated in (2.1) is given by
D([ρ1_,¯ρ1],[η1_,¯η1])=max{|ρ1_−η1_|,|¯ρ1−¯η1|} |
which is referred to as the Moore metric in interval space.
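The endpoint formula in Remark 2.1 makes the metric trivial to compute; a minimal sketch in Python (the function name is ours):

```python
# Moore/Hausdorff metric on compact intervals represented as (lo, hi) pairs:
# D([a, b], [c, d]) = max(|a - c|, |b - d|)
def interval_metric(rho, eta):
    (r_lo, r_hi), (e_lo, e_hi) = rho, eta
    return max(abs(r_lo - e_lo), abs(r_hi - e_hi))

print(interval_metric((0.0, 1.0), (0.5, 3.0)))  # max(0.5, 2.0) = 2.0
```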
As is commonly known for metric space, (RI,D) is complete. Throughout this paper, we will be using the following notations:
● RI+ is considered to be a family of all positive compact intervals of R;
● RI− is considered to be a family of all negative compact intervals of R;
● RI is considered to be a family of all compact intervals of R.
Now, we define the scalar multiplication and Minkowski sum on RI by using
ρ+η={ρ1+η1∣ρ1∈ρ,η1∈η} and γρ={γρ1∣ρ1∈ρ}. |
Also, if ρ=[ρ1_,¯ρ1] and η=[η1_,¯η1] are two closed and bounded intervals, then we define the difference as follows:
ρ−η=[ρ1_−¯η1,¯ρ1−η1_] |
the product
ρ⋅η = [min{ρ1_ η1_, ρ1_ ¯η1, ¯ρ1 η1_, ¯ρ1 ¯η1}, max{ρ1_ η1_, ρ1_ ¯η1, ¯ρ1 η1_, ¯ρ1 ¯η1}]
and the division
ρ/η = [min{ρ1_/η1_, ρ1_/¯η1, ¯ρ1/η1_, ¯ρ1/¯η1}, max{ρ1_/η1_, ρ1_/¯η1, ¯ρ1/η1_, ¯ρ1/¯η1}]
whenever 0∉η. The order relation "⊆KC" was defined as follows by Kulisch and Miranker in 1981 [43].
[ρ1_,¯ρ1]⊆KC[η1_,¯η1]⇔η1_≤ρ1_ and ¯ρ1≤¯η1. |
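The operations above can be sketched as a small Python class (a toy illustration; the class and method names are our own, not from [43]):

```python
# minimal sketch of interval arithmetic and the Kulisch-Miranker inclusion order
class Interval:
    def __init__(self, lo, hi):
        if lo > hi:
            raise ValueError("need lo <= hi")
        self.lo, self.hi = lo, hi

    def __add__(self, other):          # Minkowski sum
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):          # difference as defined above
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):          # product: min/max over endpoint products
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))

    def __truediv__(self, other):      # division, defined whenever 0 not in divisor
        if other.lo <= 0 <= other.hi:
            raise ZeroDivisionError("0 in divisor interval")
        q = (self.lo / other.lo, self.lo / other.hi,
             self.hi / other.lo, self.hi / other.hi)
        return Interval(min(q), max(q))

    def scale(self, g):                # scalar multiplication
        return Interval(min(g * self.lo, g * self.hi),
                        max(g * self.lo, g * self.hi))

    def contains(self, other):         # other ⊆_KC self
        return self.lo <= other.lo and other.hi <= self.hi

# e.g. [1,2] + [3,4] = [4,6], and [-1,3] contains [0,1] in the ⊆_KC order
```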
Next, we will describe how interval-valued functions are defined, followed by how these kinds of functions are integrated.
If M=[ρ1,η1] is a closed interval and Y:M→RI is an interval set-valued mapping, then we will denote
Y(ηo)=[s_(ηo),¯s(ηo)] |
where s_(ηo) ≤ ¯s(ηo), ∀ηo∈M. The lower and upper endpoints of Y are denoted by the functions s_(ηo) and ¯s(ηo), respectively. An interval-valued function Y : M → RI is continuous at ηo∈M if
limη→ηoY(η)=Y(ηo) |
where the limit is taken in the metric space (RI,D). Consequently, Y is continuous at ηo∈M if and only if its endpoint functions s_(ηo) and ¯s(ηo) are continuous at ηo.
Theorem 2.1. [38] Let Y:[a,b]→RI be an interval-valued function defined by Y(η)=[S_(η),¯S(η)]. Y∈IR([a,b]) iff S_(η),¯S(η)∈R([a,b]) and
(IR)∫baY(η)dη=[(R)∫baS_(η)dη,(R)∫ba¯S(η)dη] |
where R([a,b]) denotes the collection of all Riemann integrable real-valued functions on [a,b] and IR([a,b]) the collection of all interval-valued Riemann integrable functions. If Y(η)⊆V(η) for all η∈[a,b], then the following holds:
(IR)∫baY(η)dη⊆(IR)∫baV(η)dη. |
Definition 3.1. Consider an arbitrary probability space (v,A,P). A mapping V:v→R is called a stochastic variable if it is A-measurable. A mapping V:I×v→R, where I⊆R, is a stochastic process if, ∀ a∈I, the mapping V(a,⋅) is a stochastic variable.
A stochastic process V is said to adhere to the following conditions:
● Stochastically continuous on I if, ∀ ao∈I, one has
p−lima→aoV(a,.)=V(ao,.) |
where p−lim denotes the limit in probability.
● In the mean square sense, stochastic continuity exists over I; if ∀ ao∈I, then we have
lima→aoE[(V(a,.)−V(ao,.))2]=0 |
and the random variable's expected value is represented as E[V(a,⋅)].
● In the mean square sense, stochastic differentiability exists over I if, ∀ ao∈I, there is a stochastic variable V′:I×v→R such that
V′(ao,⋅) = p−lim_{a→ao} (V(a,⋅) − V(ao,⋅))/(a − ao).
● In the mean square sense, stochastic integrability exists over I if, ∀ a∈I with E[V(a,⋅)]<∞, there is a stochastic variable W:v→R such that, for every sequence of partitions of the interval [a,b]⊆I, a = b0 < b1 < b2 < ... < bk = b, and every choice of points un ∈ [b_{n−1}, bn], one has
lim_{k→∞} E[(∑_{n=1}^k V(un,⋅)(bn − b_{n−1}) − W(⋅))²] = 0.
In that case, it is written as
W(⋅) = ∫_a^b V(u,⋅) du (a.e.).
In many settings it is useful to consider integrals and derivatives of fractional (non-integer) order. The authors of [44] defined stochastic mean-square fractional integral operators, which are represented as follows:
Definition 3.2. [44]. Consider V:I×v→R+ to be a stochastic process; then, the mean-square fractional integral operators of order ''α'' are defined as follows:
J^α_{a+} V(q) = (1/Γ(α)) ∫_a^q (q−w)^{α−1} V(w,⋅) dw, q > a, α > 0 (a.e.)
and
J^α_{b−} V(q) = (1/Γ(α)) ∫_q^b (w−q)^{α−1} V(w,⋅) dw, q < b, α > 0 (a.e.).
Definition 3.3. [44]. Consider V=[V_,¯V]:I×v→RI to be an interval-valued stochastic process; then, the mean-square fractional integral operators of order ''α'' are defined as follows:
J^α_{a+} V(q) = (1/Γ(α)) (IR) ∫_a^q (q−w)^{α−1} V(w,⋅) dw, q > a, α > 0 (a.e.)
and
J^α_{b−} V(q) = (1/Γ(α)) (IR) ∫_q^b (w−q)^{α−1} V(w,⋅) dw, q < b, α > 0 (a.e.)
where Γ(⋅) is the gamma function and IR([a,b]) is the collection of all interval-valued Riemann integrable functions on [a,b].
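For a concrete feel of these operators, they can be approximated numerically for a fixed sample path; the sketch below (function name and quadrature choice are ours) removes the endpoint singularity of the kernel with the substitution s = (q−w)^α before applying the midpoint rule:

```python
import math

def frac_integral(V, a, q, alpha, n=20000):
    """Numerical J^alpha_{a+} V(q) = (1/Gamma(alpha)) * int_a^q (q-w)^(alpha-1) V(w) dw.

    Substituting t = q - w and then s = t**alpha turns the integral into
    (1/alpha) * int_0^{(q-a)^alpha} V(q - s**(1/alpha)) ds, whose integrand
    is bounded, so a plain midpoint rule works.
    """
    upper = (q - a) ** alpha
    h = upper / n
    total = 0.0
    for i in range(n):
        s = (i + 0.5) * h
        total += V(q - s ** (1.0 / alpha))
    return total * h / (alpha * math.gamma(alpha))

# sanity check against the closed form J^alpha_{a+} 1 = (q-a)^alpha / Gamma(alpha+1)
print(abs(frac_integral(lambda w: 1.0, 1.0, 2.0, 0.5) - 1.0 / math.gamma(1.5)) < 1e-6)
```

For an interval-valued process as in Definition 3.3, the same routine is applied endpoint-wise to V_ and ¯V, mirroring Corollary 3.1.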
Corollary 3.1. [45]. Consider V=[V_,¯V]:I×v→RI to be an interval-valued stochastic process such that V(q)=[V_(q),¯V(q)] with V_(q),¯V(q)∈IR([a,b]); then, we have
Iαa+V(q)=[Jαa+V_(q),Jαa+¯V(q)]. |
Definition 3.4. [15]. Let h:[0,1]→R+, such that h≠0. Then, the stochastic process V:I×v→R+ is considered to be an h-convex stochastic process; if ∀ a,b∈I and y∈(0,1), one has
V(ya+(1−y)b,⋅)≤ h(y)V(a,⋅)+h(1−y)V(b,⋅). | (3.1) |
Definition 3.5. [31]. Let h:(0,1)→R+ such that h≠0. Then, the interval-valued stochastic process V=[V_,¯V]:I×v→R_I^+, where [a,b]⊆I⊆R, is said to be an h-Godunova-Levin stochastic process, or V∈SGPX(h,[a,b],R_I^+), if ∀ a,b∈I and y∈(0,1) one has
V(ya+(1−y)b, ⋅) ⊇_KC V(a,⋅)/h(y) + V(b,⋅)/h(1−y). (3.2)
The set of all interval-valued h-Godunova-Levin convex stochastic processes is denoted by SGPX(h,[a,b],R_I^+).
Definition 3.6. [38]. Let h:[0,1]→R+ such that h≠0. Then, the interval-valued stochastic process V=[V_,¯V]:I×v→R+I where [a,b]⊆I⊆R is considered to be an h-convex stochastic process or V∈SPX(h,[a,b],R+I); if ∀ a,b∈I and y∈(0,1), then one has
V(ya+(1−y)b,⋅)⊇KC h(y)V(a,⋅)+h(1−y)V(b,⋅). | (3.3) |
The set of all interval-valued h-convex stochastic processes is denoted by SPX(h,[a,b],R+I).
We can now define a new, more general class of Godunova-Levin stochastic processes by drawing on ideas from the prior literature and definitions.
Definition 4.1. Consider h1,h2:[0,1]→R+. An interval-valued stochastic process V=[V_,¯V]:I×v→R_I^+, where [a,b]⊆I⊆R, is said to be an (h1,h2)-Godunova-Levin stochastic process, or V∈SGPX((h1,h2),[a,b],R_I^+), if ∀ a,b∈I and y∈(0,1) we have
V(ya+(1−y)b, ⋅) ⊇_KC V(a,⋅)/(h1(y)h2(1−y)) + V(b,⋅)/(h1(1−y)h2(y)). (4.1)
The set of all interval-valued (h1,h2)-Godunova-Levin convex stochastic processes is denoted by SGPX((h1,h2),[a,b],R_I^+).
Remark 4.1. (i) If h1(y)=h(y) and h2=1 in Definition 4.1, then the (h1,h2)-Godunova-Levin stochastic process turns into an h-Godunova-Levin stochastic process [31].
(ii) If h1(y)=1/y and h2=1 with V_=¯V in Definition 4.1, then the (h1,h2)-Godunova-Levin stochastic process turns into a convex stochastic process [46].
(iii) If h1(y)=1/h(y) and h2=1 with V_=¯V in Definition 4.1, then the (h1,h2)-Godunova-Levin stochastic process turns into an h-convex stochastic process [15].
(iv) If h1(y)=1/y^s and h2=1 with V_=¯V in Definition 4.1, then the (h1,h2)-Godunova-Levin stochastic process turns into an s-convex stochastic process [47].
Definition 4.2. Consider h1,h2:[0,1]→R+. An interval-valued stochastic process V=[V_,¯V]:I×v→R_I^+, where [a,b]⊆I⊆R, is said to be a harmonic (h1,h2)-Godunova-Levin stochastic process, or V∈SGHPX((h1,h2),[a,b],R_I^+), if ∀ a,b∈I and y∈(0,1) we have
V(ab/(ya+(1−y)b), ⋅) ⊇_KC V(a,⋅)/(h1(y)h2(1−y)) + V(b,⋅)/(h1(1−y)h2(y)). (4.2)
The set of all interval-valued harmonic (h1,h2)-Godunova-Levin convex stochastic processes is denoted by SGHPX((h1,h2),[a,b],R_I^+).
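Definition 4.1 is easy to test numerically for a concrete process. The sketch below uses a toy example of our own: with h1(y)=1/y and h2=1, the right-hand side of (4.1) becomes y V(a,⋅) + (1−y) V(b,⋅), and the process V(u) = [u², 4−e^u] qualifies because its lower endpoint is convex and its upper endpoint is concave:

```python
import math
import random

random.seed(0)

def V(u):
    # toy interval-valued process: lower endpoint convex, upper endpoint concave
    return (u * u, 4.0 - math.exp(u))

# Definition 4.1 with h1(y) = 1/y, h2(y) = 1:
# V(ya + (1-y)b) ⊇_KC y*V(a) + (1-y)*V(b), i.e. lower <= lower and upper >= upper
ok = True
for _ in range(10000):
    a, b = random.uniform(-1, 1), random.uniform(-1, 1)
    y = random.uniform(0.001, 0.999)
    mid_lo, mid_hi = V(y * a + (1 - y) * b)
    rhs_lo = y * V(a)[0] + (1 - y) * V(b)[0]
    rhs_hi = y * V(a)[1] + (1 - y) * V(b)[1]
    ok = ok and mid_lo <= rhs_lo + 1e-12 and rhs_hi <= mid_hi + 1e-12
print(ok)
```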
Using a fractional operator, we first construct the Hermite-Hadamard inequality. Next, we construct results for the products of two Godunova-Levin stochastic processes by using the standard Riemann integral. Lastly, we show that some results that have been published previously are generalized.
Theorem 5.1. Consider h1,h2:[0,1]→R+ and set H(y,z) = h1(y)h2(z). Let V=[V_,¯V]:I×v→R_I^+, where [a,b]⊆I⊆R, be an (h1,h2)-Godunova-Levin stochastic process, i.e., V∈SGPX((h1,h2),[a,b],R_I^+). Then, for y∈(0,1) and α>0, one has
(H(1/2,1/2)/α) V((a+b)/2, ⋅) ⊇_KC (Γ(α)/(b−a)) [J^α_{a+}V(b,⋅) + J^α_{b−}V(a,⋅)] ⊇_KC [V(a,⋅)+V(b,⋅)] ∫_0^1 y^{α−1} [1/H(y,1−y) + 1/H(1−y,y)] dy.
Proof. Since V∈SGPX((h1,h2),[a,b],R_I^+), one has
H(1/2,1/2) V((a+b)/2, ⋅) ⊇_KC V(ya+(1−y)b, ⋅) + V((1−y)a+yb, ⋅). (5.1)
Multiplying (5.1) by y^{α−1} and integrating over (0,1), we get
(H(1/2,1/2)/α) V((a+b)/2, ⋅) ⊇_KC ∫_0^1 y^{α−1} V(ya+(1−y)b, ⋅) dy + ∫_0^1 y^{α−1} V((1−y)a+yb, ⋅) dy
= [∫_0^1 y^{α−1} V_(ya+(1−y)b, ⋅) dy + ∫_0^1 y^{α−1} V_((1−y)a+yb, ⋅) dy, ∫_0^1 y^{α−1} ¯V(ya+(1−y)b, ⋅) dy + ∫_0^1 y^{α−1} ¯V((1−y)a+yb, ⋅) dy]
= [(1/(b−a)) ∫_a^b ((b−u)/(b−a))^{α−1} V_(u,⋅) du + (1/(b−a)) ∫_a^b ((u−a)/(b−a))^{α−1} V_(u,⋅) du, (1/(b−a)) ∫_a^b ((b−u)/(b−a))^{α−1} ¯V(u,⋅) du + (1/(b−a)) ∫_a^b ((u−a)/(b−a))^{α−1} ¯V(u,⋅) du]
= (Γ(α)/(b−a)) [J^α_{a+}V_(b,⋅) + J^α_{b−}V_(a,⋅), J^α_{a+}¯V(b,⋅) + J^α_{b−}¯V(a,⋅)]
= (Γ(α)/(b−a)) [J^α_{a+}V(b,⋅) + J^α_{b−}V(a,⋅)]. (5.2)
Similarly, since V∈SGPX((h1,h2),[a,b],R_I^+), we have
V(ya+(1−y)b, ⋅) + V((1−y)a+yb, ⋅) ⊇_KC [1/H(y,1−y) + 1/H(1−y,y)][V(a,⋅)+V(b,⋅)]. (5.3)
Multiplying (5.3) by y^{α−1} and integrating over (0,1), we have
(Γ(α)/(b−a)) [J^α_{a+}V(b,⋅) + J^α_{b−}V(a,⋅)] ⊇_KC [V(a,⋅)+V(b,⋅)] ∫_0^1 y^{α−1} [1/H(y,1−y) + 1/H(1−y,y)] dy. (5.4)
Combining (5.2) with (5.4), the result follows.
Example 5.1. Consider [a,b]=[1,2]. Let h1(y)=1/y, h2(y)=1 for all y∈(0,1) and α=1/2. Suppose that a stochastic process V is defined as
V(u,⋅) = [−u^{1/2}+2, u^{1/2}+2].
Then,
H(12,12)αV(a+b2,⋅)=[2(4−√6),2(4+√6)],Γ(α)b−a[Jαa+V(b,⋅)+Jαb−V(a,⋅)]=[14−2√2−π2−0.38277,18+2√2+π2+0.38277],[V(a,⋅)+V(b,⋅)]∫10yα−1[1H(y,1−y)+1H(1−y,y)]dy=[2(3−√2),2(3+√2)]. |
As a result,
[2(4−√6),2(4+√6)]⊇KC[14−2√2−π2−0.38277,18+2√2+π2+0.38277]⊇KC[2(3−√2),2(3+√2)]. |
As a result, Theorem 5.1 is true.
Remark 5.1. If α=1, h1(y)=1/h(y) and h2(y)=1 with V_=¯V, then Theorem 5.1 reduces to a result for h-convex stochastic processes [48].
Theorem 5.2. Under the same hypotheses as in Theorem 5.1, the following inclusion relation holds:
(H(1/2,1/2)/2) V((a+b)/2, ⋅) ⊇_KC (1/(b−a)) ∫_a^b V(u,⋅) du ⊇_KC [V(a,⋅)+V(b,⋅)] ∫_0^1 dy/H(y,1−y). (5.5)
Proof. Since V∈SGPX((h1,h2),[a,b],R_I^+), we have
H(1/2,1/2) V((a+b)/2, ⋅) ⊇_KC V(ya+(1−y)b, ⋅) + V((1−y)a+yb, ⋅).
Integrating over (0,1),
H(1/2,1/2) V((a+b)/2, ⋅) ⊇_KC ∫_0^1 V(ya+(1−y)b, ⋅) dy + ∫_0^1 V((1−y)a+yb, ⋅) dy
= [∫_0^1 V_(ya+(1−y)b, ⋅) dy + ∫_0^1 V_((1−y)a+yb, ⋅) dy, ∫_0^1 ¯V(ya+(1−y)b, ⋅) dy + ∫_0^1 ¯V((1−y)a+yb, ⋅) dy]
= [(2/(b−a)) ∫_a^b V_(u,⋅) du, (2/(b−a)) ∫_a^b ¯V(u,⋅) du]
= (2/(b−a)) ∫_a^b V(u,⋅) du. (5.6)
By Definition 4.1, one has
V(ya+(1−y)b, ⋅) ⊇_KC V(a,⋅)/H(y,1−y) + V(b,⋅)/H(1−y,y).
Following integration, one has
∫_0^1 V(ya+(1−y)b, ⋅) dy ⊇_KC V(a,⋅) ∫_0^1 dy/H(y,1−y) + V(b,⋅) ∫_0^1 dy/H(1−y,y).
Accordingly,
(1/(b−a)) ∫_a^b V(u,⋅) du ⊇_KC [V(a,⋅)+V(b,⋅)] ∫_0^1 dy/H(y,1−y). (5.7)
Combining (5.6) and (5.7), we achieve the desired outcome:
(H(1/2,1/2)/2) V((a+b)/2, ⋅) ⊇_KC (1/(b−a)) ∫_a^b V(u,⋅) du ⊇_KC [V(a,⋅)+V(b,⋅)] ∫_0^1 dy/H(y,1−y).
Remark 5.2. (i) If h1(y)=h(y) and h2(y)=1, then Theorem 5.2 reduces to the result for h-Godunova-Levin stochastic processes [31]:
(h(1/2)/2) V((a+b)/2, ⋅) ⊇_KC (1/(b−a)) ∫_a^b V(u,⋅) du ⊇_KC [V(a,⋅)+V(b,⋅)] ∫_0^1 dy/h(y).
(ii) If h1(y)=1/h(y) and h2(y)=1 with V_=¯V, then Theorem 5.2 reduces to the result for h-convex stochastic processes [15]:
(1/(2h(1/2))) V((a+b)/2, ⋅) ≤ (1/(b−a)) ∫_a^b V(u,⋅) du ≤ [V(a,⋅)+V(b,⋅)] ∫_0^1 h(y) dy.
(iii) If h1(y)=1/y and h2(y)=1 with V_=¯V, then Theorem 5.2 reduces to the result for convex stochastic processes [46]:
V((a+b)/2, ⋅) ≤ (1/(b−a)) ∫_a^b V(u,⋅) du ≤ (V(a,⋅)+V(b,⋅))/2.
(iv) If h1(y)=1/y^s and h2(y)=1 with V_=¯V, then Theorem 5.2 reduces to the result for s-convex stochastic processes [47]:
2^{s−1} V((a+b)/2, ⋅) ≤ (1/(b−a)) ∫_a^b V(u,⋅) du ≤ (V(a,⋅)+V(b,⋅))/(s+1).
Example 5.2. Consider [a,b]=[−1,1] with h1(y)=1/y, h2(y)=1 for all y∈(0,1). Suppose that a stochastic process V is defined as
V(u,⋅) = [u², 4−e^u].
Then,
(H(1/2,1/2)/2) V((a+b)/2, ⋅) = [0, 3], (1/(b−a)) ∫_a^b V(u,⋅) du ≈ [0.3333, 2.82479], [V(a,⋅)+V(b,⋅)] ∫_0^1 dy/H(y,1−y) ≈ [1, 2.45691].
As a result,
[0,3]⊇KC[0.3333,2.82479]⊇KC[1,2.45691]. |
This verifies Theorem 5.2.
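Example 5.2 can also be double-checked numerically; the sketch below recomputes the three terms of (5.5) for V(u) = [u², 4−e^u] (midpoint quadrature, our own scaffolding):

```python
import math

# numerical check of the chain in Theorem 5.2 for Example 5.2:
# V(u) = [u^2, 4 - e^u] on [a, b] = [-1, 1] with h1(y) = 1/y, h2(y) = 1
a, b = -1.0, 1.0
n = 100000
h = (b - a) / n
lo = sum((a + (i + 0.5) * h) ** 2 for i in range(n)) * h / (b - a)
hi = sum(4.0 - math.exp(a + (i + 0.5) * h) for i in range(n)) * h / (b - a)
mean = (lo, hi)                                      # middle term, ≈ [0.3333, 2.82479]

left = (0.0, 3.0)                                    # (H(1/2,1/2)/2) V(0) = [0, 3]
right = (1.0, (8.0 - math.e - math.exp(-1.0)) / 2)   # [V(a)+V(b)] * ∫_0^1 y dy

# left ⊇_KC mean ⊇_KC right
print(left[0] <= mean[0] and mean[1] <= left[1]
      and mean[0] <= right[0] and right[1] <= mean[1])
```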
Theorem 5.3. Under the same hypotheses as in Theorem 5.1, the following inclusion relation holds:
([H(1/2,1/2)]²/4) V((a+b)/2, ⋅) ⊇_KC △1 ⊇_KC (1/(b−a)) ∫_a^b V(u,⋅) du ⊇_KC △2
⊇_KC {[V(a,⋅)+V(b,⋅)][1/2 + 1/H(1/2,1/2)]} ∫_0^1 dy/H(y,1−y)
where
△1 = (H(1/2,1/2)/4) [V((3a+b)/4, ⋅) + V((a+3b)/4, ⋅)],
△2 = [V((a+b)/2, ⋅) + (V(a,⋅)+V(b,⋅))/2] ∫_0^1 dy/H(y,1−y).
Proof. We get the required result by taking into account Definition 4.2 and using the same technique as Afzal and Botmart [31].
Example 5.3. From Example 5.2, one has
([H(1/2,1/2)]²/4) V((a+b)/2, ⋅) = [0, 3], △1 ≈ [0.25, 2.87237], △2 ≈ [0.5, 2.72845]
and
{[V(a,⋅)+V(b,⋅)][1/2 + 1/H(1/2,1/2)]} ∫_0^1 dy/H(y,1−y) ≈ [1, 2.45691].
Thus, we obtain
[0, 3] ⊇_KC [0.25, 2.87237] ⊇_KC [0.3333, 2.82479] ⊇_KC [0.5, 2.72845] ⊇_KC [1, 2.45691].
This verifies Theorem 5.3.
Theorem 5.4. Under the same hypotheses as in Theorem 5.1, the following inclusion relation holds:
(1/(b−a)) ∫_a^b V(u,⋅)S(u,⋅) du ⊇_KC T(a,b) ∫_0^1 dy/H²(y,1−y) + U(a,b) ∫_0^1 dy/(H(y,y)H(1−y,1−y))
where T(a,b) = V(a,⋅)S(a,⋅) + V(b,⋅)S(b,⋅) and U(a,b) = V(a,⋅)S(b,⋅) + V(b,⋅)S(a,⋅).
Proof. Since V, S ∈ SGPX((h1,h2),[a,b],R_I^+), we have
V(ay+(1−y)b, ⋅) ⊇_KC V(a,⋅)/H(y,1−y) + V(b,⋅)/H(1−y,y),
S(ay+(1−y)b, ⋅) ⊇_KC S(a,⋅)/H(y,1−y) + S(b,⋅)/H(1−y,y).
Then,
V(ay+(1−y)b, ⋅)S(ay+(1−y)b, ⋅) ⊇_KC V(a,⋅)S(a,⋅)/H²(y,1−y) + V(b,⋅)S(b,⋅)/H²(1−y,y) + [V(a,⋅)S(b,⋅)+V(b,⋅)S(a,⋅)]/(H(y,y)H(1−y,1−y)).
Following integration, one has
∫_0^1 V(ay+(1−y)b, ⋅)S(ay+(1−y)b, ⋅) dy = [∫_0^1 V_(ay+(1−y)b, ⋅)S_(ay+(1−y)b, ⋅) dy, ∫_0^1 ¯V(ay+(1−y)b, ⋅)¯S(ay+(1−y)b, ⋅) dy] = [(1/(b−a)) ∫_a^b V_(u,⋅)S_(u,⋅) du, (1/(b−a)) ∫_a^b ¯V(u,⋅)¯S(u,⋅) du] = (1/(b−a)) ∫_a^b V(u,⋅)S(u,⋅) du ⊇_KC T(a,b) ∫_0^1 dy/H²(y,1−y) + U(a,b) ∫_0^1 dy/(H(y,y)H(1−y,1−y)).
Thus, it follows that
(1/(b−a)) ∫_a^b V(u,⋅)S(u,⋅) du ⊇_KC T(a,b) ∫_0^1 dy/H²(y,1−y) + U(a,b) ∫_0^1 dy/(H(y,y)H(1−y,1−y)).
Example 5.4. Let [a,b]=[0,1] with h1(y)=1/y, h2(y)=1 for all y∈(0,1). Suppose that V, S are two stochastic processes defined as follows:
V(u,⋅) = [u², 4−e^u] and S(u,⋅) = [u, 3−u²].
Then, we have
(1/(b−a)) ∫_a^b V(u,⋅)S(u,⋅) du ≈ [0.25, 6.23010], T(a,b) ∫_0^1 dy/H²(y,1−y) ≈ [0.3333, 3.85447]
and
U(a,b) ∫_0^1 dy/(H(y,y)H(1−y,1−y)) ≈ [0, 1.64085].
Since
[0.25, 6.23010] ⊇_KC [0.3333, 5.49533].
Consequently, Theorem 5.4 is verified.
Theorem 5.5. Under the same hypotheses as in Theorem 5.1, the following inclusion relation holds:
([H(1/2,1/2)]²/2) V((a+b)/2, ⋅)S((a+b)/2, ⋅) ⊇_KC (1/(b−a)) ∫_a^b V(u,⋅)S(u,⋅) du + T(a,b) ∫_0^1 dy/(H(y,y)H(1−y,1−y)) + U(a,b) ∫_0^1 dy/H²(y,1−y).
Proof. Since V, S ∈ SGPX((h1,h2),[a,b],R_I^+), one has
V((a+b)/2, ⋅) ⊇_KC (1/H(1/2,1/2)) [V(ay+(1−y)b, ⋅) + V(a(1−y)+yb, ⋅)],
S((a+b)/2, ⋅) ⊇_KC (1/H(1/2,1/2)) [S(ay+(1−y)b, ⋅) + S(a(1−y)+yb, ⋅)].
Multiplying the two inclusions,
V((a+b)/2, ⋅)S((a+b)/2, ⋅) ⊇_KC (1/[H(1/2,1/2)]²) [V(ay+(1−y)b, ⋅)S(ay+(1−y)b, ⋅) + V(a(1−y)+yb, ⋅)S(a(1−y)+yb, ⋅)] + (1/[H(1/2,1/2)]²) [V(ay+(1−y)b, ⋅)S(a(1−y)+yb, ⋅) + V(a(1−y)+yb, ⋅)S(ay+(1−y)b, ⋅)]
⊇_KC (1/[H(1/2,1/2)]²) [V(ay+(1−y)b, ⋅)S(ay+(1−y)b, ⋅) + V(a(1−y)+yb, ⋅)S(a(1−y)+yb, ⋅)] + (1/[H(1/2,1/2)]²) [2T(a,b)/(H(y,y)H(1−y,1−y)) + U(a,b)(1/H²(y,1−y) + 1/H²(1−y,y))]. (5.8)
Integration over (0,1) yields
V((a+b)/2, ⋅)S((a+b)/2, ⋅) ⊇_KC (2/[H(1/2,1/2)]²) [(1/(b−a)) ∫_a^b V(u,⋅)S(u,⋅) du] + (2/[H(1/2,1/2)]²) [T(a,b) ∫_0^1 dy/(H(y,y)H(1−y,1−y)) + U(a,b) ∫_0^1 dy/H²(y,1−y)].
Multiplying both sides of the above inclusion by [H(1/2,1/2)]²/2, we have
([H(1/2,1/2)]²/2) V((a+b)/2, ⋅)S((a+b)/2, ⋅) ⊇_KC (1/(b−a)) ∫_a^b V(u,⋅)S(u,⋅) du + T(a,b) ∫_0^1 dy/(H(y,y)H(1−y,1−y)) + U(a,b) ∫_0^1 dy/H²(y,1−y).
This proves the theorem.
Example 5.5. By virtue of Example 5.4, one has
([H(1/2,1/2)]²/2) V((a+b)/2, ⋅)S((a+b)/2, ⋅) ≈ [0.25, 12.93203], T(a,b) ∫_0^1 dy/(H(y,y)H(1−y,1−y)) ≈ [0.16667, 1.92724]
and
U(a,b) ∫_0^1 dy/H²(y,1−y) ≈ [0, 3.28170].
This implies
[0.25, 12.93203] ⊇_KC [0.41666, 11.43906].
This verifies Theorem 5.5.
An Ostrowski type inequality is developed here, along with some examples, for a more generalized class of Godunova-Levin functions. The following lemma helps us accomplish our objective [22].
Lemma 6.1. Consider a differentiable mean square stochastic process V:I×v⊆R→R on I°. If V′ is integrable in the mean square sense on [a,b], then one has
V(x,⋅) − (1/(b−a)) ∫_a^b V(y,⋅) dy = ((x−a)²/(b−a)) ∫_0^1 y V′(yx+(1−y)a, ⋅) dy − ((b−x)²/(b−a)) ∫_0^1 y V′(yx+(1−y)b, ⋅) dy, ∀ x∈[a,b].
Theorem 6.1. Consider three non-negative functions h,h1,h2:(0,1)→R with y ≤ 1/h(y) for each y∈(0,1). Let V:I×v⊆R→R_I^+ be a differentiable mean square interval-valued stochastic process on I° with V′ integrable in the mean square sense on [a,b]. If |V′| is an (h1,h2)-Godunova-Levin stochastic process satisfying |V′(x,⋅)| ≤ γ for each x, then one has
{|V_(x,⋅) − (1/(b−a)) ∫_a^b V_(y,⋅) dy|, |¯V(x,⋅) − (1/(b−a)) ∫_a^b ¯V(y,⋅) dy|} ⊇_KC (γ[(x−a)²+(b−x)²]/(b−a)) ∫_0^1 [1/(h(y)H(y,1−y)) + 1/(h(y)H(1−y,y))] dy
∀ x∈[a,b].
Proof. By virtue of Lemma 6.1 and the fact that |V′| is an (h1,h2)-Godunova-Levin stochastic process, we have
{|V_(b,⋅)−1b−a∫baV_(y,⋅)dy|,|¯V(b,⋅)−1b−a∫ba¯V(y,⋅)dy|}⊇KC(b−a)2b−a∫10y|V′(yb+(1−y)a,⋅)|dy+(b−b)2b−a∫10y|V′(yb+(1−y)b,⋅)|dy. |
Utilizing the interval order inclusion relation, one has
{|V_(b,⋅)−1b−a∫baV_(y,⋅)dy|}≤(b−a)2b−a∫10y|V_′(yb+(1−y)a,⋅)|dy+(b−b)2b−a∫10y|V_′(yb+(1−y)b,⋅)|dy |
{|¯V(b,⋅)−1b−a∫ba¯V(y,⋅)dy|}≥(b−a)2b−a∫10y|¯V′(yb+(1−y)a,⋅)|dy+(b−b)2b−a∫10y|¯V′(yb+(1−y)b,⋅)|dy. |
It follows that
(b−a)2b−a∫10y|V_′(yb+(1−y)a,⋅)|dy+(b−b)2b−a∫10y|V_′(yb+(1−y)b,⋅)|dy≤(b−a)2b−a∫10y[|V_′(b,⋅)|H(y,1−y)+|V_′(a,⋅)|H(1−y,y)]dy+(b−b)2b−a∫10y[|V_′(b,⋅)|H(y,1−y)+|V_′(b,⋅)|H(1−y,y)]dy. |
Also
$\begin{aligned}&\frac{(\mathfrak{b}-a)^2}{b-a}\int_0^1 y\,|\overline{V}'(y\mathfrak{b}+(1-y)a,\cdot)|\,dy+\frac{(b-\mathfrak{b})^2}{b-a}\int_0^1 y\,|\overline{V}'(y\mathfrak{b}+(1-y)b,\cdot)|\,dy\\&\qquad\ge\frac{(\mathfrak{b}-a)^2}{b-a}\int_0^1 y\left[\frac{|\overline{V}'(\mathfrak{b},\cdot)|}{H(y,1-y)}+\frac{|\overline{V}'(a,\cdot)|}{H(1-y,y)}\right]dy+\frac{(b-\mathfrak{b})^2}{b-a}\int_0^1 y\left[\frac{|\overline{V}'(\mathfrak{b},\cdot)|}{H(y,1-y)}+\frac{|\overline{V}'(b,\cdot)|}{H(1-y,y)}\right]dy.\end{aligned}$
Consequently, we have
$\begin{aligned}&\frac{(\mathfrak{b}-a)^2}{b-a}\int_0^1 y\left[\frac{|\underline{V}'(\mathfrak{b},\cdot)|}{H(y,1-y)}+\frac{|\underline{V}'(a,\cdot)|}{H(1-y,y)}\right]dy+\frac{(b-\mathfrak{b})^2}{b-a}\int_0^1 y\left[\frac{|\underline{V}'(\mathfrak{b},\cdot)|}{H(y,1-y)}+\frac{|\underline{V}'(b,\cdot)|}{H(1-y,y)}\right]dy\\&\qquad\le\frac{\gamma(\mathfrak{b}-a)^2}{b-a}\int_0^1\left[\frac{1}{h(y)H(y,1-y)}+\frac{1}{h(y)H(1-y,y)}\right]dy+\frac{\gamma(b-\mathfrak{b})^2}{b-a}\int_0^1\left[\frac{1}{h(y)H(y,1-y)}+\frac{1}{h(y)H(1-y,y)}\right]dy\end{aligned}$
and
$\begin{aligned}&\frac{(\mathfrak{b}-a)^2}{b-a}\int_0^1 y\left[\frac{|\overline{V}'(\mathfrak{b},\cdot)|}{H(y,1-y)}+\frac{|\overline{V}'(a,\cdot)|}{H(1-y,y)}\right]dy+\frac{(b-\mathfrak{b})^2}{b-a}\int_0^1 y\left[\frac{|\overline{V}'(\mathfrak{b},\cdot)|}{H(y,1-y)}+\frac{|\overline{V}'(b,\cdot)|}{H(1-y,y)}\right]dy\\&\qquad\ge\frac{\gamma(\mathfrak{b}-a)^2}{b-a}\int_0^1\left[\frac{1}{h(y)H(y,1-y)}+\frac{1}{h(y)H(1-y,y)}\right]dy+\frac{\gamma(b-\mathfrak{b})^2}{b-a}\int_0^1\left[\frac{1}{h(y)H(y,1-y)}+\frac{1}{h(y)H(1-y,y)}\right]dy.\end{aligned}$
This implies
$\left|\underline{V}(\mathfrak{b},\cdot)-\frac{1}{b-a}\int_a^b \underline{V}(y,\cdot)\,dy\right|\le\frac{\gamma\left[(\mathfrak{b}-a)^2+(b-\mathfrak{b})^2\right]}{b-a}\int_0^1\left[\frac{1}{h(y)H(y,1-y)}+\frac{1}{h(y)H(1-y,y)}\right]dy. \quad (6.1)$
Similarly,
$\left|\overline{V}(\mathfrak{b},\cdot)-\frac{1}{b-a}\int_a^b \overline{V}(y,\cdot)\,dy\right|\ge\frac{\gamma\left[(\mathfrak{b}-a)^2+(b-\mathfrak{b})^2\right]}{b-a}\int_0^1\left[\frac{1}{h(y)H(y,1-y)}+\frac{1}{h(y)H(1-y,y)}\right]dy. \quad (6.2)$
The proof is completed.
Example 6.1. Let $[a,b]=[0,1]$, $h(y)=\frac{1}{y}$, $h_1(y)=\frac{1}{y}$ and $h_2(y)=1$ for all $y\in(0,1)$. Suppose that a stochastic process $V$ is defined as
$V(u,\cdot)=[u^2,\ 3-e^u].$
Choose $\mathfrak{b}=1$; then, we have
$\left|\underline{V}(\mathfrak{b},\cdot)-\frac{1}{b-a}\int_a^b \underline{V}(y,\cdot)\,dy\right|=\frac{2}{3}. \quad (6.3)$
Since $|\underline{V}'(\mathfrak{b},\cdot)|\le\gamma=2$, we have
$\frac{\gamma\left[(\mathfrak{b}-a)^2+(b-\mathfrak{b})^2\right]}{b-a}\int_0^1\left[\frac{1}{h(y)H(y,1-y)}+\frac{1}{h(y)H(1-y,y)}\right]dy=1. \quad (6.4)$
Similarly,
$\left|\overline{V}(\mathfrak{b},\cdot)-\frac{1}{b-a}\int_a^b \overline{V}(y,\cdot)\,dy\right|=2. \quad (6.5)$
Since $|\overline{V}'(\mathfrak{b},\cdot)|\le\gamma=e$, we have
$\frac{\gamma\left[(\mathfrak{b}-a)^2+(b-\mathfrak{b})^2\right]}{b-a}\int_0^1\left[\frac{1}{h(y)H(y,1-y)}+\frac{1}{h(y)H(1-y,y)}\right]dy=\frac{e}{2}. \quad (6.6)$
Consequently,
$\left[\frac{2}{3},\ 2\right]\supseteq_{KC}\left[1,\ \frac{e}{2}\right].$
This verifies Theorem 6.1.
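Several of the quantities in Example 6.1 can be reproduced numerically. The sketch below (helper names are ours) uses composite Simpson quadrature; note that with $h(y)=1/y$, $h_1(y)=1/y$, $h_2(y)=1$, the integrand $\frac{1}{h(y)H(y,1-y)}+\frac{1}{h(y)H(1-y,y)}$ collapses to $y\cdot y+y(1-y)=y$:

```python
import math

def simpson(f, lo, hi, n=2000):
    # Composite Simpson's rule (n must be even).
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(lo + k * h)
    return s * h / 3

a, b, x = 0.0, 1.0, 1.0      # interval [0, 1], evaluation point x = 1
V_lo = lambda u: u ** 2       # lower endpoint of V(u, .) = [u^2, 3 - e^u]

# With the chosen h, h1, h2 the bracketed integrand reduces to y.
weight = simpson(lambda y: y, 0, 1)   # = 1/2

lhs_63 = abs(V_lo(x) - simpson(V_lo, a, b) / (b - a))               # Eq. (6.3)
rhs_64 = 2 * ((x - a) ** 2 + (b - x) ** 2) / (b - a) * weight       # Eq. (6.4), gamma = 2
rhs_66 = math.e * ((x - a) ** 2 + (b - x) ** 2) / (b - a) * weight  # Eq. (6.6), gamma = e

print(abs(lhs_63 - 2 / 3) < 1e-9, abs(rhs_64 - 1) < 1e-9, abs(rhs_66 - math.e / 2) < 1e-9)
```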
Remark 6.1.
If $h(y)=\frac{1}{y}$, $h_1(y)=\frac{1}{h(y)}$ and $h_2(y)=1$ with $\underline{V}=\overline{V}$, then Theorem 6.1 yields a similar result for the h-convex function [22].
In this section, we develop a Jensen type inclusion for the $(h_1,h_2)$-Godunova-Levin stochastic process and, with some remarks, show that this is a more generalized class. Throughout, we make use of supermultiplicative- and submultiplicative-type mappings; for these notions, see [49].
Theorem 7.1. Let $g_i,y_i\in\mathbb{R}^+$ and let $h_1,h_2:[0,1]\to\mathbb{R}^+$. Suppose that the interval-valued stochastic process $V=[\underline{V},\overline{V}]:I\times v\to\mathbb{R}^+_I$, where $[a,b]\subseteq I\subseteq\mathbb{R}$, is a harmonic $(h_1,h_2)$-Godunova-Levin stochastic process, that is, $V\in SGHPX((h_1,h_2),[a,b],\mathbb{R}^+_I)$. Then, for all $a,b\in I$ and $y\in(0,1)$, one has
$V\left(\frac{1}{\frac{1}{y_1}+\frac{1}{y_d}-\frac{1}{G_d}\sum_{i=1}^d\frac{g_i}{y_i}},\cdot\right)\supseteq_{KC}V(y_1,\cdot)+V(y_d,\cdot)-\sum_{i=1}^d\left[\frac{V(y_i,\cdot)}{H\left(\frac{g_i}{G_d},\frac{G_{i-1}}{G_d}\right)}\right]. \quad (7.1)$
Proof. Since $G_d=\sum_{i=1}^d g_i$ and $V$ is a harmonic $(h_1,h_2)$-Godunova-Levin stochastic process, taking into account [31, Theorem 3.5], we have
$V\left(\frac{1}{\frac{1}{y_1}+\frac{1}{y_d}-\frac{1}{G_d}\sum_{i=1}^d\frac{g_i}{y_i}},\cdot\right)=V\left(\frac{1}{\sum_{i=1}^d\frac{g_i}{G_d}\left(\frac{1}{y_1}+\frac{1}{y_d}-\frac{1}{y_i}\right)},\cdot\right)\supseteq_{KC}\sum_{i=1}^d\left[\frac{V\left(\frac{1}{\frac{1}{y_1}+\frac{1}{y_d}-\frac{1}{y_i}},\cdot\right)}{h_1\left(\frac{g_i}{G_d}\right)h_2\left(\frac{G_{i-1}}{G_d}\right)}\right].$
By virtue of the Kulisch-Miranker order relation, if $(h_1,h_2)$ are supermultiplicative-type mappings with $\sum_{i=1}^d\frac{1}{h_1\left(\frac{g_i}{G_d}\right)h_2\left(\frac{G_{i-1}}{G_d}\right)}\le 1$, then we have
$\begin{aligned}\underline{V}\left(\frac{1}{\frac{1}{y_1}+\frac{1}{y_d}-\frac{1}{G_d}\sum_{i=1}^d\frac{g_i}{y_i}},\cdot\right)&\le\sum_{i=1}^d\left[\frac{\underline{V}\left(\frac{1}{\frac{1}{y_1}+\frac{1}{y_d}-\frac{1}{y_i}},\cdot\right)}{h_1\left(\frac{g_i}{G_d}\right)h_2\left(\frac{G_{i-1}}{G_d}\right)}\right]\le\sum_{i=1}^d\left[\frac{\underline{V}(y_1,\cdot)+\underline{V}(y_d,\cdot)-\underline{V}(y_i,\cdot)}{h_1\left(\frac{g_i}{G_d}\right)h_2\left(\frac{G_{i-1}}{G_d}\right)}\right]\\&\le\sum_{i=1}^d\left[\frac{\underline{V}(y_1,\cdot)+\underline{V}(y_d,\cdot)}{h_1\left(\frac{g_i}{G_d}\right)h_2\left(\frac{G_{i-1}}{G_d}\right)}-\frac{\underline{V}(y_i,\cdot)}{h_1\left(\frac{g_i}{G_d}\right)h_2\left(\frac{G_{i-1}}{G_d}\right)}\right]\\&\le\underline{V}(y_1,\cdot)+\underline{V}(y_d,\cdot)-\sum_{i=1}^d\left[\frac{\underline{V}(y_i,\cdot)}{h_1\left(\frac{g_i}{G_d}\right)h_2\left(\frac{G_{i-1}}{G_d}\right)}\right]=\underline{V}(y_1,\cdot)+\underline{V}(y_d,\cdot)-\sum_{i=1}^d\left[\frac{\underline{V}(y_i,\cdot)}{H\left(\frac{g_i}{G_d},\frac{G_{i-1}}{G_d}\right)}\right].\end{aligned}$
Similarly, if $(h_1,h_2)$ are submultiplicative-type mappings with $\sum_{i=1}^d\frac{1}{h_1\left(\frac{g_i}{G_d}\right)h_2\left(\frac{G_{i-1}}{G_d}\right)}\ge 1$, then we have
$\begin{aligned}\overline{V}\left(\frac{1}{\frac{1}{y_1}+\frac{1}{y_d}-\frac{1}{G_d}\sum_{i=1}^d\frac{g_i}{y_i}},\cdot\right)&\ge\sum_{i=1}^d\left[\frac{\overline{V}\left(\frac{1}{\frac{1}{y_1}+\frac{1}{y_d}-\frac{1}{y_i}},\cdot\right)}{h_1\left(\frac{g_i}{G_d}\right)h_2\left(\frac{G_{i-1}}{G_d}\right)}\right]\ge\sum_{i=1}^d\left[\frac{\overline{V}(y_1,\cdot)+\overline{V}(y_d,\cdot)-\overline{V}(y_i,\cdot)}{h_1\left(\frac{g_i}{G_d}\right)h_2\left(\frac{G_{i-1}}{G_d}\right)}\right]\\&\ge\sum_{i=1}^d\left[\frac{\overline{V}(y_1,\cdot)+\overline{V}(y_d,\cdot)}{h_1\left(\frac{g_i}{G_d}\right)h_2\left(\frac{G_{i-1}}{G_d}\right)}-\frac{\overline{V}(y_i,\cdot)}{h_1\left(\frac{g_i}{G_d}\right)h_2\left(\frac{G_{i-1}}{G_d}\right)}\right]\\&\ge\overline{V}(y_1,\cdot)+\overline{V}(y_d,\cdot)-\sum_{i=1}^d\left[\frac{\overline{V}(y_i,\cdot)}{h_1\left(\frac{g_i}{G_d}\right)h_2\left(\frac{G_{i-1}}{G_d}\right)}\right]=\overline{V}(y_1,\cdot)+\overline{V}(y_d,\cdot)-\sum_{i=1}^d\left[\frac{\overline{V}(y_i,\cdot)}{H\left(\frac{g_i}{G_d},\frac{G_{i-1}}{G_d}\right)}\right].\end{aligned}$
Taking into account the results for the submultiplicative- and supermultiplicative-type mappings, we have
$V\left(\frac{1}{\frac{1}{y_1}+\frac{1}{y_d}-\frac{1}{G_d}\sum_{i=1}^d\frac{g_i}{y_i}},\cdot\right)\supseteq_{KC}V(y_1,\cdot)+V(y_d,\cdot)-\sum_{i=1}^d\left[\frac{V(y_i,\cdot)}{H\left(\frac{g_i}{G_d},\frac{G_{i-1}}{G_d}\right)}\right].$
This completes the proof.
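The first equality in the proof rests on the fact that the weights $g_i/G_d$ sum to $1$. This rearrangement of the harmonic-mean argument can be checked numerically; the sketch below uses randomly chosen positive $g_i$, $y_i$ of our own (variable names ours):

```python
import random

# Since sum(g_i) = G_d, the weights g_i/G_d sum to 1, hence
#   1/y_1 + 1/y_d - (1/G_d) * sum(g_i / y_i)
#     == sum( (g_i/G_d) * (1/y_1 + 1/y_d - 1/y_i) ).
random.seed(0)
g = [random.uniform(0.1, 2.0) for _ in range(5)]
y = [random.uniform(0.5, 3.0) for _ in range(5)]
Gd = sum(g)

lhs = 1 / y[0] + 1 / y[-1] - sum(gi / yi for gi, yi in zip(g, y)) / Gd
rhs = sum((gi / Gd) * (1 / y[0] + 1 / y[-1] - 1 / yi) for gi, yi in zip(g, y))

print(abs(lhs - rhs) < 1e-12)  # → True
```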
Remark 7.1. (i) If $h_1(y)=h(y)$ and $h_2(y)=1$, then Theorem 7.1 yields a similar result for harmonic h-Godunova-Levin functions, which is new as well:
$V\left(\frac{1}{\frac{1}{y_1}+\frac{1}{y_d}-\frac{1}{G_d}\sum_{i=1}^d\frac{g_i}{y_i}},\cdot\right)\supseteq_{KC}V(y_1,\cdot)+V(y_d,\cdot)-\sum_{i=1}^d\left[\frac{V(y_i,\cdot)}{h\left(\frac{g_i}{G_d}\right)}\right]. \quad (7.2)$
(ii) If $h_1(y)=\frac{1}{h(y)}$ and $h_2(y)=1$ with $\underline{V}=\overline{V}$, then Theorem 7.1 yields a similar result for harmonic h-convex functions, which is new as well:
$V\left(\frac{1}{\frac{1}{y_1}+\frac{1}{y_d}-\frac{1}{G_d}\sum_{i=1}^d\frac{g_i}{y_i}},\cdot\right)\le V(y_1,\cdot)+V(y_d,\cdot)-\sum_{i=1}^d h\left(\frac{g_i}{G_d}\right)V(y_i,\cdot). \quad (7.3)$
(iii) If $h_1(y)=h_2(y)=1$, then Theorem 7.1 yields a similar result for P-functions, which is new as well:
$V\left(\frac{1}{\frac{1}{y_1}+\frac{1}{y_d}-\frac{1}{G_d}\sum_{i=1}^d\frac{g_i}{y_i}},\cdot\right)\le V(y_1,\cdot)+V(y_d,\cdot)-\sum_{i=1}^d V(y_i,\cdot). \quad (7.4)$
In this note, we have used Kulisch-Miranker type inclusions in conjunction with stochastic processes to refine and improve three well-known inequalities of Hermite-Hadamard, Ostrowski, and Jensen type. Additionally, we have generalized the work of some recent articles related to stochastic convexity. To prove the Hermite-Hadamard type results, we used two types of integral operators: classical and generalized fractional integral operators. Moreover, we presented a new way to treat Jensen type inclusions under interval stochastic processes by using a discrete sequential form. For further development of these results, we recommend that interested researchers use fractional operators based on the stochastic version defined in [44]:
$J^{\alpha}_{a^+}V(q)=\frac{1}{\Gamma(\alpha)}\int_a^q e^{-\frac{1-\alpha}{\alpha}(q-w)}V(w,\cdot)\,dw,\quad q>a,\ \alpha>0\ \text{(a.e.)}$
and
$J^{\alpha}_{b^-}V(q)=\frac{1}{\Gamma(\alpha)}\int_q^b e^{-\frac{1-\alpha}{\alpha}(w-q)}V(w,\cdot)\,dw,\quad q<b,\ \alpha>0\ \text{(a.e.)}.$
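These operators can be approximated by ordinary quadrature. The sketch below (helper names ours, keeping the $\frac{1}{\Gamma(\alpha)}$ normalization of the displayed formula) evaluates the left-sided operator with composite Simpson quadrature and checks it against the closed form for the constant process $V(w,\cdot)=1$, for which $J^{\alpha}_{a^+}1(q)=\frac{1-e^{-\lambda(q-a)}}{\lambda\,\Gamma(\alpha)}$ with $\lambda=\frac{1-\alpha}{\alpha}$:

```python
import math

def simpson(f, lo, hi, n=2000):
    # Composite Simpson's rule (n must be even).
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(lo + k * h)
    return s * h / 3

def J_left(V, a, q, alpha, n=2000):
    # Left-sided operator with the exponential kernel of the displayed formula.
    lam = (1 - alpha) / alpha
    return simpson(lambda w: math.exp(-lam * (q - w)) * V(w), a, q, n) / math.gamma(alpha)

# Sanity check against the closed form for V(w, .) = 1.
a, q, alpha = 0.0, 2.0, 0.5
lam = (1 - alpha) / alpha
closed = (1 - math.exp(-lam * (q - a))) / (lam * math.gamma(alpha))
print(abs(J_left(lambda w: 1.0, a, q, alpha) - closed) < 1e-8)  # → True
```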
According to inequality theory, there are various types of order relations, including total order relations, inclusions, pseudo-order relations, fuzzy order relations, standard partial order relations, and various others [50,51,52,53,54,55,56]. This paper demonstrates that some results, more specifically Theorem 11, do not apply to Milne type inequalities in the inclusion order setting [57]. In the context of the center and radius order relation, Abbas et al. [58] recently developed a number of inequalities that are of full order. Therefore, interested researchers can test whether Theorem 11 holds with this type of order relation when using a fractional operator defined with an exponential kernel for Milne type results.
The authors declare that they have not used artificial intelligence tools in the creation of this article.
This work was supported by a National Research Foundation of Korea(NRF) grant funded by the Korea government(MSIT) (No. NRF-2022R1A2C2004874). This work was supported by the Korea Institute of Energy Technology Evaluation and Planning(KETEP) and the Ministry of Trade, Industry & Energy(MOTIE) of the Republic of Korea (No. 20214000000280).
The authors declare that they have no competing interests.
[1] M. Yadav, R. Katarya, A systematic survey of automatic text summarization using deep learning techniques, In Modern Electronics Devices and Communication Systems: Select Proceedings of MEDCOM 2021, 397–405, Singapore: Springer Nature Singapore, 2023. https://doi.org/10.1007/978-981-19-6383-4_31
[2] Y. M. Wazery, M. E. Saleh, A. Alharbi, A. A. Ali, Abstractive Arabic text summarization based on deep learning, Comput. Intel. Neurosci., 2022. https://doi.org/10.1155/2022/1566890
[3] P. J. Goutom, N. Baruah, P. Sonowal, An abstractive text summarization using deep learning in Assamese, Int. J. Inf. Technol., 2023, 1–8. https://doi.org/10.1007/s41870-023-01279-7
[4] V. L. Sireesha, Text summarization for resource-poor languages: Datasets and models for multiple Indian languages, Doctoral dissertation, International Institute of Information Technology Hyderabad, 2023.
[5] S. Dhankhar, M. K. Gupta, Automatic extractive summarization for English text: A brief survey, In Proceedings of Second Doctoral Symposium on Computational Intelligence: DoSCI 2021, 183–198, Singapore: Springer Singapore, 2021. https://doi.org/10.1007/978-981-16-3346-1_15
[6] B. Shukla, S. Gupta, A. K. Yadav, D. Yadav, Text summarization of legal documents using reinforcement learning: A study, In Intelligent Sustainable Systems: Proceedings of ICISS 2022, 403–414, Singapore: Springer Nature Singapore, 2022. https://doi.org/10.1007/978-981-19-2894-9_30
[7] B. Baykara, T. Güngör, Turkish abstractive text summarization using pretrained sequence-to-sequence models, Nat. Lang. Eng., 29 (2023), 1275–1304. https://doi.org/10.1017/S1351324922000195
[8] M. Bani-Almarjeh, M. B. Kurdy, Arabic abstractive text summarization using RNN-based and transformer-based architectures, Inform. Process. Manag., 60 (2023), 103227. https://doi.org/10.1016/j.ipm.2022.103227
[9] H. Aliakbarpour, M. T. Manzuri, A. M. Rahmani, Improving the readability and saliency of abstractive text summarization using a combination of deep neural networks equipped with auxiliary attention mechanisms, J. Supercomput., 2022, 1–28.
[10] S. N. Turky, A. S. A. Al-Jumaili, R. K. Hasoun, Deep learning based on different methods for text summary: A survey, J. Al-Qadisiyah Comput. Sci. Math., 13 (2021), 26. https://doi.org/10.29304/jqcm.2021.13.1.766
[11] G. A. Babu, S. Badugu, Deep learning based sequence to sequence model for abstractive Telugu text summarization, Multimed. Tools Appl., 82 (2023), 17075–17096. https://doi.org/10.1007/s11042-022-14099-x
[12] S. A. Tripathy, A. Sharmila, Abstractive method-based text summarization using bidirectional long short-term memory and pointer generator mode, J. Appl. Res. Technol., 21 (2023), 73–86. https://doi.org/10.22201/icat.24486736e.2023.21.1.1446
[13] N. Shafiq, I. Hamid, M. Asif, Q. Nawaz, H. Aljuaid, H. Ali, Abstractive text summarization of low-resourced languages using deep learning, PeerJ Comput. Sci., 9 (2023), e1176. https://doi.org/10.7717/peerj-cs.1176
[14] R. Karmakar, K. Nirantar, P. Kurunkar, P. Hiremath, D. Chaudhari, Indian regional language abstractive text summarization using attention-based LSTM neural network, In 2021 International Conference on Intelligent Technologies (CONIT), 1–8, IEEE, 2021. https://doi.org/10.1109/CONIT51480.2021.9498309
[15] R. Rani, D. K. Lobiyal, Document vector embedding based extractive text summarization system for Hindi and English text, Appl. Intell., 2022, 1–20. https://doi.org/10.1007/s10489-021-02871-9
[16] W. Etaiwi, A. Awajan, SemG-TS: Abstractive Arabic text summarization using semantic graph embedding, Mathematics, 10 (2022), 3225. https://doi.org/10.3390/math10183225
[17] R. T. AlTimimi, F. H. AlRubbiay, Multilingual text summarization using deep learning, Int. J. Eng. Adv. Technol., 7 (2021), 29–39. https://doi.org/10.31695/IJERAT.2021.3712
[18] S. V. Moravvej, A. Mirzaei, M. Safayani, Biomedical text summarization using conditional generative adversarial network (CGAN), arXiv preprint arXiv:2110.11870, 2021.
[19] A. Al Abdulwahid, Software solution for text summarisation using machine learning based Bidirectional Encoder Representations from Transformers algorithm, IET Software, 2023. https://doi.org/10.1049/sfw2.12098
[20] B. Muthu, S. Cb, P. M. Kumar, S. N. Kadry, C. H. Hsu, O. Sanjuan, et al., A framework for extractive text summarization based on deep learning modified neural network classifier, ACM T. Asian Low-Reso., 20 (2021), 1–20. https://doi.org/10.1145/3392048
[21] D. Izci, S. Ekinci, E. Eker, M. Kayri, Augmented hunger games search algorithm using logarithmic spiral opposition-based learning for function optimization and controller design, J. King Saud University-Eng. Sci., 2022. https://doi.org/10.1016/j.jksues.2022.03.001
[22] M. Mafarja, T. Thaher, M. A. Al-Betar, J. Too, M. A. Awadallah, I. Abu Doush, et al., Classification framework for faulty-software using enhanced exploratory whale optimizer-based feature selection scheme and random forest ensemble learning, Appl. Intell., 2023, 1–43. https://doi.org/10.1007/s10489-022-04427-x
[23] B. Liu, J. Xu, W. Xia, State-of-health estimation for lithium-ion battery based on an attention-based CNN-GRU model with reconstructed feature series, Int. J. Energy Res., 2023. https://doi.org/10.1155/2023/8569161
[24] P. Sarangi, P. Mohapatra, Evolved opposition-based Mountain Gazelle Optimizer to solve optimization problems, J. King Saud Univ-Com., 2023, 101812. https://doi.org/10.1016/j.jksuci.2023.101812
[25] H. J. Alshahrani, K. Tarmissi, A. Yafoz, A. Mohamed, M. A. Hamza, I. Yaseen, et al., Applied linguistics with mixed leader optimizer based English text summarization model, Intell. Autom. Soft Co., 36 (2023). https://doi.org/10.32604/iasc.2023.034848