Ostrowski's Inequality. Let f:I⊂[0,+∞)→R be a differentiable function on int(I) such that f′∈L[a,b], where a,b∈I with a<b. If |f′(x)|≤M for all x∈[a,b], then the inequality

\begin{eqnarray*} \left|f(x)-\frac{1}{b-a}\int_a^b f(t)\,dt\right| \le M(b-a)\left[\frac{1}{4}+\frac{\left(x-\frac{a+b}{2}\right)^{2}}{(b-a)^{2}}\right] \quad (1.1) \end{eqnarray*}

holds for all x∈[a,b]. This inequality was introduced by Alexander Ostrowski in [26] and, over the years, generalizations involving derivatives of the function under study have been obtained. It plays a very important role in all fields of mathematics, especially in approximation theory. Such inequalities have been studied extensively by many researchers, and numerous generalizations, extensions and variants have been established for various kinds of functions, such as functions of bounded variation, synchronous, Lipschitzian, monotonic, absolutely continuous and n-times differentiable mappings.
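As a quick sanity check, inequality (1.1) can be verified numerically. The sketch below uses f(t) = sin t on [a,b] = [0,π], so that |f′(t)| = |cos t| ≤ M = 1 and the exact integral is 2; the test function and interval are our own choice, for illustration only.

```python
import math

# Numerical spot-check of Ostrowski's inequality (1.1) for f(t) = sin(t)
# on [a, b] = [0, pi], where |f'(t)| = |cos(t)| <= M = 1 and
# int_a^b sin(t) dt = 2 exactly.

a, b, M = 0.0, math.pi, 1.0
mean_value = 2.0 / (b - a)          # (1/(b-a)) * int_a^b f(t) dt

violations = 0
for i in range(1001):
    x = a + (b - a) * i / 1000
    lhs = abs(math.sin(x) - mean_value)
    rhs = M * (b - a) * (0.25 + (x - (a + b) / 2) ** 2 / (b - a) ** 2)
    if lhs > rhs + 1e-12:
        violations += 1

print(violations)  # 0
```

No grid point violates the bound, as the theorem guarantees.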
For recent results and generalizations concerning Ostrowski's inequality, we refer the reader to the recent papers [1,3,4,31,32]. Convex functions play a significant role in many fields, for example in biological systems, economics, optimization, and so on [2,16,19,24,29,34,39], and many important inequalities have been established for this class of functions. The evolution of the concept of convexity has also had a great impact on the community of investigators. In recent years, for example, generalized concepts such as s-convexity (see [10]), h-convexity (see [30,33]), m-convexity (see [7,15]), MT-convexity (see [21]) and others, as well as combinations of these new concepts, have been introduced.
The role of convex sets, convex functions and their generalizations is important in applied mathematics, especially in nonlinear programming and optimization theory. For example, in economics, convexity plays a fundamental role in equilibrium and duality theory. The convexity of sets and functions has been the object of many studies in recent years. But in many new problems encountered in applied mathematics the notion of convexity is not enough to reach the desired results, and hence it is necessary to extend it to new generalized notions. Recently, several extensions of the classical convex functions have been considered; some of these new concepts extend the domain of a convex function (a convex set) to a generalized form, while in others the generalization affects not the domain but the form of the definition. Some generalized concepts from this point of view are pseudo-convex functions [22], quasi-convex functions [5], invex functions [17], preinvex functions [25], B-vex functions [20], B-preinvex functions [8], E-convex functions [38], Ostrowski type inequalities for functions whose derivatives are (m,h1,h2)-convex [35], Fejér type inequalities for (s,m)-convex functions in the second sense [36], and Hermite-Hadamard-Fejér type inequalities for strongly (s,m)-convex functions with modulus c in the second sense [9]. In numerical analysis, many quadrature rules have been established to approximate definite integrals, and Ostrowski's inequality provides bounds for many of these rules [13].
In this paper we establish new Ostrowski-type inequalities, extending those given by Badreddine Meftah in [23], for s-φ-convex functions f∈C^n([a,b]) such that f^{(n)}∈L([a,b]), and we give some applications to special means, the midpoint formula, and some examples for the case n=2.
Recall that a real-valued function f defined on a real interval J is said to be convex if for all x,y∈J and any t∈[0,1] the inequality

\begin{eqnarray*} f(tx+(1-t)y) \le tf(x)+(1-t)f(y) \quad (2.1) \end{eqnarray*}

holds. If inequality (2.1) is strict, we say that f is strictly convex, and if inequality (2.1) is reversed, the function f is said to be concave. In [37] we introduced the notion of s-φ-convex functions as a generalization of s-convex functions in the first sense.
Definition 1. Let 0<s≤1. A function f:I⊂R→R is called s-φ-convex with respect to the bifunction φ:R×R→R (briefly, s-φ-convex) if

\begin{eqnarray*} f(tx+(1-t)y) \le f(y)+t^{s}\,\varphi(f(x),f(y)) \quad (2.2) \end{eqnarray*}

for all x,y∈I and t∈[0,1].
Example 1. Let f(x)=x². Then f is convex and \frac{1}{2}-\varphi-convex with φ(u,v)=2u+v. Indeed,

\begin{eqnarray*} f(tx+(1-t)y) &=& (tx+(1-t)y)^{2}=t^{2}x^{2}+2t(1-t)xy+(1-t)^{2}y^{2} \\ &\le& y^{2}+tx^{2}+2txy = y^{2}+t^{\frac{1}{2}}\left[t^{\frac{1}{2}}x^{2}+2t^{\frac{1}{2}}xy\right]. \end{eqnarray*}

On the other hand,

\begin{eqnarray*} 0<t<1 \Longrightarrow 0<t^{\frac{1}{2}}<1 \Longrightarrow t^{\frac{1}{2}}x^{2}+2t^{\frac{1}{2}}xy \le x^{2}+2xy \le x^{2}+x^{2}+y^{2}. \end{eqnarray*}

Hence,

\begin{eqnarray*} f(tx+(1-t)y) \le y^{2}+t^{\frac{1}{2}}\left[2x^{2}+y^{2}\right]=f(y)+t^{\frac{1}{2}}\varphi(f(x),f(y)). \end{eqnarray*}
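The defining inequality of Example 1 can be spot-checked numerically. The sketch below samples nonnegative x, y (matching the assumption xy ≥ 0 used implicitly in the derivation above) and random t ∈ [0,1].

```python
import random

# Numerical spot-check of Example 1: f(x) = x^2 is (1/2)-phi-convex
# with phi(u, v) = 2u + v, i.e.
#   f(t*x + (1-t)*y) <= f(y) + sqrt(t) * phi(f(x), f(y)),
# sampled over nonnegative x, y as in the derivation above.

f = lambda x: x * x
phi = lambda u, v: 2 * u + v

random.seed(0)
violations = 0
for _ in range(10_000):
    x, y = random.uniform(0, 5), random.uniform(0, 5)
    t = random.random()
    lhs = f(t * x + (1 - t) * y)
    rhs = f(y) + t ** 0.5 * phi(f(x), f(y))
    if lhs > rhs + 1e-12:
        violations += 1

print(violations)  # 0
```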
Example 2. Let f(x)=x^n and 0<s≤1. Then f is convex and s-φ-convex with \varphi(u,v)=\sum_{k=1}^{n}\binom{n}{k}v^{1-\frac{k}{n}}\left(u^{\frac{1}{n}}-v^{\frac{1}{n}}\right)^{k}. Indeed,

\begin{eqnarray*} f(tx+(1-t)y) &=& f(y+t(x-y))=(y+t(x-y))^{n}=y^{n}+\sum_{k=1}^{n}\binom{n}{k}y^{n-k}\left(t(x-y)\right)^{k} \\ &=& y^{n}+t^{s}\left[\sum_{k=1}^{n}\binom{n}{k}t^{k-s}y^{n-k}(x-y)^{k}\right] \\ &\le& y^{n}+t^{s}\left[\sum_{k=1}^{n}\binom{n}{k}\left(y^{n}\right)^{\frac{n-k}{n}}\left(\left(x^{n}\right)^{\frac{1}{n}}-\left(y^{n}\right)^{\frac{1}{n}}\right)^{k}\right]. \end{eqnarray*}
Remark 1. If f is monotonically increasing on [a,b], then f is s-φ-convex for φ(x,y)=K, where K∈[0,+∞) and s∈(0,1].
In this section, we give some integral approximations of f∈C^n([a,b]) such that f^{(n)}∈L([a,b]) for n≥1, using the following lemma as the main tool (see [11]).
Lemma 1. Let f:[a,b]→R be a differentiable mapping such that f^{(n−1)} is absolutely continuous on [a,b]. Then for all x∈[a,b] we have the identity

\begin{eqnarray*} \int_a^b f(t)\,dt=\sum_{k=0}^{n}\left[\frac{(b-x)^{k+1}+(-1)^{k}(x-a)^{k+1}}{(k+1)!}\right]f^{(k)}(x)+(-1)^{n}\int_a^b K_n(x,t)f^{(n)}(t)\,dt, \end{eqnarray*}

where the kernel K_n:[a,b]²→R is given by

\begin{eqnarray*} K_n(x,t)=\begin{cases} \dfrac{(t-a)^{n}}{n!} & \text{if } t\in[a,x], \\[2mm] \dfrac{(t-b)^{n}}{n!} & \text{if } t\in(x,b], \end{cases} \end{eqnarray*}

with x∈[a,b] and n a natural number, n≥1.
Theorem 1. Let f:I→R be n-times differentiable on [a,b] such that f^{(n)}∈L([a,b]) with n≥1 and 0<s≤1. If |f^{(n)}| is s-φ-convex, then the inequality

\begin{eqnarray*} &&\left|\int_a^b f(t)\,dt-\sum_{k=0}^{n}\left[\frac{(b-x)^{k+1}+(-1)^{k}(x-a)^{k+1}}{(k+1)!}\right]f^{(k)}(x)\right| \\ &\le& \frac{(x-a)^{n+1}}{n!}\left(\frac{1}{n+1}\left|f^{(n)}(a)\right|+\frac{1}{n+s+1}\varphi\!\left(\left|f^{(n)}(a)\right|,\left|f^{(n)}(x)\right|\right)\right) \\ &&+\frac{(b-x)^{n+1}}{n!}\left[\frac{\left|f^{(n)}(x)\right|}{n+1}+\sum_{k=0}^{n}\binom{n}{k}\frac{(-1)^{k}}{k+s+1}\varphi\!\left(\left|f^{(n)}(x)\right|,\left|f^{(n)}(b)\right|\right)\right] \end{eqnarray*}

holds for all x∈[a,b].
Proof. From Lemma 1 and the properties of the modulus, making the changes of variables u=(1−t)a+tx in the first integral and u=(1−t)x+tb in the second, we have

\begin{eqnarray*} &&\left|\int_a^b f(t)\,dt-\sum_{k=0}^{n}\left[\frac{(b-x)^{k+1}+(-1)^{k}(x-a)^{k+1}}{(k+1)!}\right]f^{(k)}(x)\right| \\ &\le& \int_a^x \frac{(u-a)^{n}}{n!}\left|f^{(n)}(u)\right|du+\int_x^b \frac{(b-u)^{n}}{n!}\left|f^{(n)}(u)\right|du \\ &=& \frac{(x-a)^{n+1}}{n!}\int_0^1 t^{n}\left|f^{(n)}((1-t)a+tx)\right|dt+\frac{(b-x)^{n+1}}{n!}\int_0^1 (1-t)^{n}\left|f^{(n)}((1-t)x+tb)\right|dt. \end{eqnarray*}

Since |f^{(n)}| is s-φ-convex, (2.2) gives

\begin{eqnarray*} &&\left|\int_a^b f(t)\,dt-\sum_{k=0}^{n}\left[\frac{(b-x)^{k+1}+(-1)^{k}(x-a)^{k+1}}{(k+1)!}\right]f^{(k)}(x)\right| \\ &\le& \frac{(x-a)^{n+1}}{n!}\int_0^1 t^{n}\left(\left|f^{(n)}(a)\right|+t^{s}\varphi\!\left(\left|f^{(n)}(a)\right|,\left|f^{(n)}(x)\right|\right)\right)dt+\frac{(b-x)^{n+1}}{n!}\int_0^1 (1-t)^{n}\left(\left|f^{(n)}(x)\right|+t^{s}\varphi\!\left(\left|f^{(n)}(x)\right|,\left|f^{(n)}(b)\right|\right)\right)dt \\ &=& \frac{(x-a)^{n+1}}{n!}\left(\frac{1}{n+1}\left|f^{(n)}(a)\right|+\frac{1}{n+s+1}\varphi\!\left(\left|f^{(n)}(a)\right|,\left|f^{(n)}(x)\right|\right)\right)+\frac{(b-x)^{n+1}}{n!}\left[\frac{\left|f^{(n)}(x)\right|}{n+1}+\sum_{k=0}^{n}\binom{n}{k}\frac{(-1)^{k}}{k+s+1}\varphi\!\left(\left|f^{(n)}(x)\right|,\left|f^{(n)}(b)\right|\right)\right], \end{eqnarray*}

which is the desired result. The proof is completed.
Remark 2. If we take s=1, we obtain a result of Meftah (see Theorem 2.1 in [23]).
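The core estimate in the proof of Theorem 1, namely the bound on ∫_a^x (u−a)^n/n! |f^{(n)}(u)|du + ∫_x^b (b−u)^n/n! |f^{(n)}(u)|du by the right-hand side of the theorem, can be spot-checked numerically. The sketch below uses g(t)=t⁴/12 on [0,1] (so n=2 and g″(t)=t²), s=1/2, and the symmetric bifunction φ(u,v)=u+v; that |g″| is \frac{1}{2}-\varphi-convex for this φ follows from (tx+(1−t)y)² ≤ tx²+(1−t)y² ≤ y²+√t(x²+y²). These choices are ours, for illustration.

```python
import math

# Spot-check of the estimate in the proof of Theorem 1:
#   int_a^x (u-a)^n/n! |g''(u)| du + int_x^b (b-u)^n/n! |g''(u)| du  <=  RHS,
# with g(t) = t^4/12 (so n = 2, g''(t) = t^2), s = 1/2, phi(u, v) = u + v.

def trapezoid(h, lo, hi, m=4000):
    step = (hi - lo) / m
    return step * (sum(h(lo + i * step) for i in range(1, m)) + 0.5 * (h(lo) + h(hi)))

a, b, n, s = 0.0, 1.0, 2, 0.5
g2 = lambda t: t * t                 # |g''|
phi = lambda u, v: u + v

# sum_{k=0}^{n} C(n,k) (-1)^k / (k+s+1)  equals  int_0^1 t^s (1-t)^n dt
binom_sum = sum(math.comb(n, k) * (-1) ** k / (k + s + 1) for k in range(n + 1))
assert abs(binom_sum - trapezoid(lambda t: t ** s * (1 - t) ** n, 0, 1)) < 1e-4

ok = True
for i in range(1, 50):
    x = a + (b - a) * i / 50
    lhs = (trapezoid(lambda u: (u - a) ** n / math.factorial(n) * g2(u), a, x)
           + trapezoid(lambda u: (b - u) ** n / math.factorial(n) * g2(u), x, b))
    rhs = ((x - a) ** (n + 1) / math.factorial(n)
           * (g2(a) / (n + 1) + phi(g2(a), g2(x)) / (n + s + 1))
           + (b - x) ** (n + 1) / math.factorial(n)
           * (g2(x) / (n + 1) + binom_sum * phi(g2(x), g2(b))))
    ok = ok and lhs <= rhs + 1e-9
print(ok)  # True
```

The assertion also confirms the binomial-sum identity used in the proof to evaluate ∫_0^1 t^s(1−t)^n dt.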
Corollary 1. Let f:I→R be n-times differentiable on [a,b] such that f^{(n)}∈L([a,b]) with n≥1 and 0<s≤1. If |f^{(n)}| is s-convex in the first sense, we have the estimate

\begin{eqnarray*} &&\left|\int_a^b f(t)\,dt-\sum_{k=0}^{n}\left[\frac{(b-x)^{k+1}+(-1)^{k}(x-a)^{k+1}}{(k+1)!}\right]f^{(k)}(x)\right| \\ &\le& \frac{s\,(x-a)^{n+1}}{(n+1)!\,(n+s+1)}\left|f^{(n)}(a)\right|+\frac{(b-x)^{n+1}}{n!}\left|f^{(n)}(b)\right|\sum_{k=0}^{n}\binom{n}{k}\frac{(-1)^{k}}{k+s+1} \\ &&+(n+1)\left[\frac{(x-a)^{n+1}}{(n+s+1)(n+1)!}+\frac{(b-x)^{n+1}}{(n+1)!}\left(\frac{1}{n+1}-\sum_{k=0}^{n}\binom{n}{k}\frac{(-1)^{k}}{k+s+1}\right)\right]\left|f^{(n)}(x)\right|. \end{eqnarray*}

Proof. Taking φ(u,v)=v−u in Theorem 1.
Remark 3. It is important to notice that if s=1, then |f^{(n)}| is convex, and we obtain Corollary 2.2 of Meftah (see [23]).
Theorem 2. Let f:I→R be n-times differentiable on [a,b] such that f^{(n)}∈L([a,b]) with n≥1, 0<s≤1, and let q>1 with \frac{1}{p}+\frac{1}{q}=1. If |f^{(n)}|^{q} is s-φ-convex, then the following inequality holds:

\begin{eqnarray*} &&\left|\int_a^b f(t)\,dt-\sum_{k=0}^{n}\left[\frac{(b-x)^{k+1}+(-1)^{k}(x-a)^{k+1}}{(k+1)!}\right]f^{(k)}(x)\right| \\ &\le& \frac{(x-a)^{n+1}}{(s+1)^{\frac{1}{q}}(np+1)^{\frac{1}{p}}\,n!}\left((s+1)\left|f^{(n)}(a)\right|^{q}+\varphi\!\left(\left|f^{(n)}(a)\right|^{q},\left|f^{(n)}(x)\right|^{q}\right)\right)^{\frac{1}{q}} \\ &&+\frac{(b-x)^{n+1}}{(s+1)^{\frac{1}{q}}(np+1)^{\frac{1}{p}}\,n!}\left((s+1)\left|f^{(n)}(x)\right|^{q}+\varphi\!\left(\left|f^{(n)}(x)\right|^{q},\left|f^{(n)}(b)\right|^{q}\right)\right)^{\frac{1}{q}}. \end{eqnarray*}

Proof. From Lemma 1, the properties of the modulus, and Hölder's inequality, we have

\begin{eqnarray*} &&\left|\int_a^b f(t)\,dt-\sum_{k=0}^{n}\left[\frac{(b-x)^{k+1}+(-1)^{k}(x-a)^{k+1}}{(k+1)!}\right]f^{(k)}(x)\right| \\ &\le& \int_a^x \frac{(u-a)^{n}}{n!}\left|f^{(n)}(u)\right|du+\int_x^b \frac{(b-u)^{n}}{n!}\left|f^{(n)}(u)\right|du \\ &=& \frac{(x-a)^{n+1}}{n!}\int_0^1 t^{n}\left|f^{(n)}((1-t)a+tx)\right|dt+\frac{(b-x)^{n+1}}{n!}\int_0^1 (1-t)^{n}\left|f^{(n)}((1-t)x+tb)\right|dt \\ &\le& \frac{(x-a)^{n+1}}{n!}\left(\int_0^1 t^{np}dt\right)^{\frac{1}{p}}\left(\int_0^1 \left|f^{(n)}((1-t)a+tx)\right|^{q}dt\right)^{\frac{1}{q}}+\frac{(b-x)^{n+1}}{n!}\left(\int_0^1 (1-t)^{np}dt\right)^{\frac{1}{p}}\left(\int_0^1 \left|f^{(n)}((1-t)x+tb)\right|^{q}dt\right)^{\frac{1}{q}} \\ &=& \frac{(x-a)^{n+1}}{(np+1)^{\frac{1}{p}}\,n!}\left(\int_0^1 \left|f^{(n)}((1-t)a+tx)\right|^{q}dt\right)^{\frac{1}{q}}+\frac{(b-x)^{n+1}}{(np+1)^{\frac{1}{p}}\,n!}\left(\int_0^1 \left|f^{(n)}((1-t)x+tb)\right|^{q}dt\right)^{\frac{1}{q}}. \end{eqnarray*}

Since |f^{(n)}|^{q} is s-φ-convex, we deduce

\begin{eqnarray*} &&\left|\int_a^b f(t)\,dt-\sum_{k=0}^{n}\left[\frac{(b-x)^{k+1}+(-1)^{k}(x-a)^{k+1}}{(k+1)!}\right]f^{(k)}(x)\right| \\ &\le& \frac{(x-a)^{n+1}}{(np+1)^{\frac{1}{p}}\,n!}\left(\int_0^1\left(\left|f^{(n)}(a)\right|^{q}+t^{s}\varphi\!\left(\left|f^{(n)}(a)\right|^{q},\left|f^{(n)}(x)\right|^{q}\right)\right)dt\right)^{\frac{1}{q}}+\frac{(b-x)^{n+1}}{(np+1)^{\frac{1}{p}}\,n!}\left(\int_0^1\left(\left|f^{(n)}(x)\right|^{q}+t^{s}\varphi\!\left(\left|f^{(n)}(x)\right|^{q},\left|f^{(n)}(b)\right|^{q}\right)\right)dt\right)^{\frac{1}{q}} \\ &=& \frac{(x-a)^{n+1}}{(s+1)^{\frac{1}{q}}(np+1)^{\frac{1}{p}}\,n!}\left((s+1)\left|f^{(n)}(a)\right|^{q}+\varphi\!\left(\left|f^{(n)}(a)\right|^{q},\left|f^{(n)}(x)\right|^{q}\right)\right)^{\frac{1}{q}}+\frac{(b-x)^{n+1}}{(s+1)^{\frac{1}{q}}(np+1)^{\frac{1}{p}}\,n!}\left((s+1)\left|f^{(n)}(x)\right|^{q}+\varphi\!\left(\left|f^{(n)}(x)\right|^{q},\left|f^{(n)}(b)\right|^{q}\right)\right)^{\frac{1}{q}}, \end{eqnarray*}

which is the desired result.
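The Hölder-based estimate in the proof of Theorem 2 can be spot-checked with the same test data as before: g(t)=t⁴/12 (n=2, g″(t)=t²), p=q=2, s=1/2, and φ(u,v)=u+v. Here |g″|^q = t⁴ is \frac{1}{2}-\varphi-convex for this φ, since any nonnegative convex h satisfies h(tx+(1−t)y) ≤ h(y)+t^s(h(x)+h(y)). These choices are ours, for illustration.

```python
import math

# Spot-check of the estimate in the proof of Theorem 2 with g(t) = t^4/12
# (n = 2, g''(t) = t^2), p = q = 2, s = 1/2, and phi(u, v) = u + v applied
# to |g''|^q = t^4 (nonnegative and convex).

def trapezoid(h, lo, hi, m=4000):
    step = (hi - lo) / m
    return step * (sum(h(lo + i * step) for i in range(1, m)) + 0.5 * (h(lo) + h(hi)))

a, b, n, s, p, q = 0.0, 1.0, 2, 0.5, 2.0, 2.0
g2 = lambda t: t * t
phi = lambda u, v: u + v
coeff = lambda w: w ** (n + 1) / ((s + 1) ** (1 / q) * (n * p + 1) ** (1 / p)
                                  * math.factorial(n))

ok = True
for i in range(1, 50):
    x = a + (b - a) * i / 50
    lhs = (trapezoid(lambda u: (u - a) ** n / math.factorial(n) * g2(u), a, x)
           + trapezoid(lambda u: (b - u) ** n / math.factorial(n) * g2(u), x, b))
    rhs = (coeff(x - a) * ((s + 1) * g2(a) ** q + phi(g2(a) ** q, g2(x) ** q)) ** (1 / q)
           + coeff(b - x) * ((s + 1) * g2(x) ** q + phi(g2(x) ** q, g2(b) ** q)) ** (1 / q))
    ok = ok and lhs <= rhs + 1e-9
print(ok)  # True
```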
Corollary 2. Let f:I→R be n-times differentiable on [a,b] such that f^{(n)}∈L([a,b]) with n≥1, 0<s≤1, and let q>1 with \frac{1}{p}+\frac{1}{q}=1. If |f^{(n)}|^{q} is s-convex in the first sense, then the following inequality holds:

\begin{eqnarray*} &&\left|\int_a^b f(t)\,dt-\sum_{k=0}^{n}\left[\frac{(b-x)^{k+1}+(-1)^{k}(x-a)^{k+1}}{(k+1)!}\right]f^{(k)}(x)\right| \\ &\le& \frac{(x-a)^{n+1}}{(s+1)^{\frac{1}{q}}(np+1)^{\frac{1}{p}}\,n!}\left(s\left|f^{(n)}(a)\right|^{q}+\left|f^{(n)}(x)\right|^{q}\right)^{\frac{1}{q}}+\frac{(b-x)^{n+1}}{(s+1)^{\frac{1}{q}}(np+1)^{\frac{1}{p}}\,n!}\left(s\left|f^{(n)}(x)\right|^{q}+\left|f^{(n)}(b)\right|^{q}\right)^{\frac{1}{q}}. \quad (3.1) \end{eqnarray*}

Proof. Taking φ(u,v)=v−u in Theorem 2.
Corollary 3. Let f:I→R be n-times differentiable on [a,b] such that f^{(n)}∈L([a,b]) with n≥1, 0<s≤1, and let q>1 with \frac{1}{p}+\frac{1}{q}=1. If |f^{(n)}|^{q} is s-convex in the first sense, then the following inequality holds:

\begin{eqnarray*} &&\left|\int_a^b f(t)\,dt-\sum_{k=0}^{n}\left[\frac{(b-x)^{k+1}+(-1)^{k}(x-a)^{k+1}}{(k+1)!}\right]f^{(k)}(x)\right| \\ &\le& \frac{(x-a)^{n+1}}{(s+1)^{\frac{1}{q}}(np+1)^{\frac{1}{p}}\,n!}\left(s^{\frac{1}{q}}\left|f^{(n)}(a)\right|+\left|f^{(n)}(x)\right|\right)+\frac{(b-x)^{n+1}}{(s+1)^{\frac{1}{q}}(np+1)^{\frac{1}{p}}\,n!}\left(s^{\frac{1}{q}}\left|f^{(n)}(x)\right|+\left|f^{(n)}(b)\right|\right). \end{eqnarray*}

Proof. Taking φ(u,v)=v−u in Theorem 2, we obtain (3.1). Then, using the algebraic inequality (a+b)^{α}≤a^{α}+b^{α}, valid for all a,b≥0 and 0≤α≤1, with α=\frac{1}{q}, we get the desired result.
Theorem 3. Let q>1 and let f:I→R be n-times differentiable on [a,b] such that f^{(n)}∈L([a,b]) with n≥1 and 0<s≤1. If |f^{(n)}|^{q} is s-φ-convex, then the inequality

\begin{eqnarray*} &&\left|\int_a^b f(t)\,dt-\sum_{k=0}^{n}\left[\frac{(b-x)^{k+1}+(-1)^{k}(x-a)^{k+1}}{(k+1)!}\right]f^{(k)}(x)\right| \\ &\le& \frac{(n+1)^{\frac{1}{q}}(x-a)^{n+1}}{(n+1)!}\left(\frac{1}{n+1}\left|f^{(n)}(a)\right|^{q}+\frac{1}{n+s+1}\varphi\!\left(\left|f^{(n)}(a)\right|^{q},\left|f^{(n)}(x)\right|^{q}\right)\right)^{\frac{1}{q}} \\ &&+\frac{(n+1)^{\frac{1}{q}}(b-x)^{n+1}}{(n+1)!}\left(\frac{1}{n+1}\left|f^{(n)}(x)\right|^{q}+\varphi\!\left(\left|f^{(n)}(x)\right|^{q},\left|f^{(n)}(b)\right|^{q}\right)\sum_{k=0}^{n}\binom{n}{k}\frac{(-1)^{k}}{k+s+1}\right)^{\frac{1}{q}} \end{eqnarray*}

holds for all x∈[a,b].

Proof. From Lemma 1, the properties of the modulus, and the power mean inequality, we have

\begin{eqnarray*} &&\left|\int_a^b f(t)\,dt-\sum_{k=0}^{n}\left[\frac{(b-x)^{k+1}+(-1)^{k}(x-a)^{k+1}}{(k+1)!}\right]f^{(k)}(x)\right| \\ &\le& \int_a^x \frac{(u-a)^{n}}{n!}\left|f^{(n)}(u)\right|du+\int_x^b \frac{(b-u)^{n}}{n!}\left|f^{(n)}(u)\right|du \\ &=& \frac{(x-a)^{n+1}}{n!}\int_0^1 t^{n}\left|f^{(n)}((1-t)a+tx)\right|dt+\frac{(b-x)^{n+1}}{n!}\int_0^1 (1-t)^{n}\left|f^{(n)}((1-t)x+tb)\right|dt \\ &\le& \frac{(x-a)^{n+1}}{n!}\left(\int_0^1 t^{n}dt\right)^{1-\frac{1}{q}}\left(\int_0^1 t^{n}\left|f^{(n)}((1-t)a+tx)\right|^{q}dt\right)^{\frac{1}{q}}+\frac{(b-x)^{n+1}}{n!}\left(\int_0^1 (1-t)^{n}dt\right)^{1-\frac{1}{q}}\left(\int_0^1 (1-t)^{n}\left|f^{(n)}((1-t)x+tb)\right|^{q}dt\right)^{\frac{1}{q}} \\ &=& \frac{(n+1)^{\frac{1}{q}}(x-a)^{n+1}}{(n+1)!}\left(\int_0^1 t^{n}\left|f^{(n)}((1-t)a+tx)\right|^{q}dt\right)^{\frac{1}{q}}+\frac{(n+1)^{\frac{1}{q}}(b-x)^{n+1}}{(n+1)!}\left(\int_0^1 (1-t)^{n}\left|f^{(n)}((1-t)x+tb)\right|^{q}dt\right)^{\frac{1}{q}}. \end{eqnarray*}

Since |f^{(n)}|^{q} is s-φ-convex, we deduce

\begin{eqnarray*} &&\left|\int_a^b f(t)\,dt-\sum_{k=0}^{n}\left[\frac{(b-x)^{k+1}+(-1)^{k}(x-a)^{k+1}}{(k+1)!}\right]f^{(k)}(x)\right| \\ &\le& \frac{(n+1)^{\frac{1}{q}}(x-a)^{n+1}}{(n+1)!}\left(\left|f^{(n)}(a)\right|^{q}\int_0^1 t^{n}dt+\varphi\!\left(\left|f^{(n)}(a)\right|^{q},\left|f^{(n)}(x)\right|^{q}\right)\int_0^1 t^{n+s}dt\right)^{\frac{1}{q}}+\frac{(n+1)^{\frac{1}{q}}(b-x)^{n+1}}{(n+1)!}\left(\left|f^{(n)}(x)\right|^{q}\int_0^1 (1-t)^{n}dt+\varphi\!\left(\left|f^{(n)}(x)\right|^{q},\left|f^{(n)}(b)\right|^{q}\right)\int_0^1 t^{s}(1-t)^{n}dt\right)^{\frac{1}{q}} \\ &=& \frac{(n+1)^{\frac{1}{q}}(x-a)^{n+1}}{(n+1)!}\left(\frac{1}{n+1}\left|f^{(n)}(a)\right|^{q}+\frac{1}{n+s+1}\varphi\!\left(\left|f^{(n)}(a)\right|^{q},\left|f^{(n)}(x)\right|^{q}\right)\right)^{\frac{1}{q}}+\frac{(n+1)^{\frac{1}{q}}(b-x)^{n+1}}{(n+1)!}\left(\frac{1}{n+1}\left|f^{(n)}(x)\right|^{q}+\varphi\!\left(\left|f^{(n)}(x)\right|^{q},\left|f^{(n)}(b)\right|^{q}\right)\sum_{k=0}^{n}\binom{n}{k}\frac{(-1)^{k}}{k+s+1}\right)^{\frac{1}{q}}. \end{eqnarray*}

The proof is completed.
Remark 4. If we take s=1, we obtain a result of Meftah (see Theorem 2.6 in [23]).
Corollary 4. Let f:I→R be n-times differentiable on [a,b] such that f^{(n)}∈L([a,b]) with n≥1, 0<s≤1, and let q>1. If |f^{(n)}|^{q} is s-convex in the first sense, then the inequality

\begin{eqnarray*} &&\left|\int_a^b f(t)\,dt-\sum_{k=0}^{n}\left[\frac{(b-x)^{k+1}+(-1)^{k}(x-a)^{k+1}}{(k+1)!}\right]f^{(k)}(x)\right| \\ &\le& \frac{(n+1)^{\frac{1}{q}}(x-a)^{n+1}}{(n+s+1)^{\frac{1}{q}}(n+1)!}\left(\frac{s\left|f^{(n)}(a)\right|^{q}}{n+1}+\left|f^{(n)}(x)\right|^{q}\right)^{\frac{1}{q}} \\ &&+\frac{(n+1)^{\frac{1}{q}}(b-x)^{n+1}}{(n+1)!}\left(\frac{1}{n+1}\left|f^{(n)}(x)\right|^{q}+\left[\left|f^{(n)}(b)\right|^{q}-\left|f^{(n)}(x)\right|^{q}\right]\sum_{k=0}^{n}\binom{n}{k}\frac{(-1)^{k}}{k+s+1}\right)^{\frac{1}{q}} \end{eqnarray*}

holds for all x∈[a,b].

Proof. Taking φ(u,v)=v−u in Theorem 3.
Theorem 4. Let f:I→R be n-times differentiable on [a,b] such that f^{(n)}∈L([a,b]) with n≥1, 0<s≤1, and let q>1. If |f^{(n)}|^{q} is s-φ-convex, then the inequality

\begin{eqnarray*} &&\left|\int_a^b f(t)\,dt-\sum_{k=0}^{n}\left[\frac{(b-x)^{k+1}+(-1)^{k}(x-a)^{k+1}}{(k+1)!}\right]f^{(k)}(x)\right| \\ &\le& \frac{(x-a)^{n+1}}{n!}\left(\frac{1}{qn+1}\left|f^{(n)}(a)\right|^{q}+\frac{1}{qn+s+1}\varphi\!\left(\left|f^{(n)}(a)\right|^{q},\left|f^{(n)}(x)\right|^{q}\right)\right)^{\frac{1}{q}} \\ &&+\frac{(b-x)^{n+1}}{n!}\left(\frac{1}{qn+1}\left|f^{(n)}(x)\right|^{q}+\sum_{k=0}^{qn}\binom{qn}{k}\frac{(-1)^{k}}{k+s+1}\varphi\!\left(\left|f^{(n)}(x)\right|^{q},\left|f^{(n)}(b)\right|^{q}\right)\right)^{\frac{1}{q}} \end{eqnarray*}

holds for all x∈[a,b].

Proof. From Lemma 1, the properties of the modulus, and the power mean inequality, we have

\begin{eqnarray*} &&\left|\int_a^b f(t)\,dt-\sum_{k=0}^{n}\left[\frac{(b-x)^{k+1}+(-1)^{k}(x-a)^{k+1}}{(k+1)!}\right]f^{(k)}(x)\right| \\ &\le& \int_a^x \frac{(u-a)^{n}}{n!}\left|f^{(n)}(u)\right|du+\int_x^b \frac{(b-u)^{n}}{n!}\left|f^{(n)}(u)\right|du \\ &=& \frac{(x-a)^{n+1}}{n!}\int_0^1 t^{n}\left|f^{(n)}((1-t)a+tx)\right|dt+\frac{(b-x)^{n+1}}{n!}\int_0^1 (1-t)^{n}\left|f^{(n)}((1-t)x+tb)\right|dt \\ &\le& \frac{(x-a)^{n+1}}{n!}\left(\int_0^1 dt\right)^{1-\frac{1}{q}}\left(\int_0^1 t^{qn}\left|f^{(n)}((1-t)a+tx)\right|^{q}dt\right)^{\frac{1}{q}}+\frac{(b-x)^{n+1}}{n!}\left(\int_0^1 dt\right)^{1-\frac{1}{q}}\left(\int_0^1 (1-t)^{qn}\left|f^{(n)}((1-t)x+tb)\right|^{q}dt\right)^{\frac{1}{q}} \\ &=& \frac{(x-a)^{n+1}}{n!}\left(\int_0^1 t^{qn}\left|f^{(n)}((1-t)a+tx)\right|^{q}dt\right)^{\frac{1}{q}}+\frac{(b-x)^{n+1}}{n!}\left(\int_0^1 (1-t)^{qn}\left|f^{(n)}((1-t)x+tb)\right|^{q}dt\right)^{\frac{1}{q}}. \end{eqnarray*}

Since |f^{(n)}|^{q} is s-φ-convex, we deduce

\begin{eqnarray*} &&\left|\int_a^b f(t)\,dt-\sum_{k=0}^{n}\left[\frac{(b-x)^{k+1}+(-1)^{k}(x-a)^{k+1}}{(k+1)!}\right]f^{(k)}(x)\right| \\ &\le& \frac{(x-a)^{n+1}}{n!}\left(\left|f^{(n)}(a)\right|^{q}\int_0^1 t^{qn}dt+\varphi\!\left(\left|f^{(n)}(a)\right|^{q},\left|f^{(n)}(x)\right|^{q}\right)\int_0^1 t^{qn+s}dt\right)^{\frac{1}{q}}+\frac{(b-x)^{n+1}}{n!}\left(\left|f^{(n)}(x)\right|^{q}\int_0^1 (1-t)^{qn}dt+\varphi\!\left(\left|f^{(n)}(x)\right|^{q},\left|f^{(n)}(b)\right|^{q}\right)\int_0^1 t^{s}(1-t)^{qn}dt\right)^{\frac{1}{q}} \\ &=& \frac{(x-a)^{n+1}}{n!}\left(\frac{1}{qn+1}\left|f^{(n)}(a)\right|^{q}+\frac{1}{qn+s+1}\varphi\!\left(\left|f^{(n)}(a)\right|^{q},\left|f^{(n)}(x)\right|^{q}\right)\right)^{\frac{1}{q}}+\frac{(b-x)^{n+1}}{n!}\left(\frac{1}{qn+1}\left|f^{(n)}(x)\right|^{q}+\sum_{k=0}^{qn}\binom{qn}{k}\frac{(-1)^{k}}{k+s+1}\varphi\!\left(\left|f^{(n)}(x)\right|^{q},\left|f^{(n)}(b)\right|^{q}\right)\right)^{\frac{1}{q}}, \end{eqnarray*}

which is the desired result.
Remark 5. If we take s=1, we obtain a result of Meftah (see Theorem 2.9 in [23]).
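The estimate in the proof of Theorem 4 can also be spot-checked with the same test data: g(t)=t⁴/12 (n=2, g″(t)=t²), q=2 (so that qn=4 is an integer and the binomial sum is well defined), s=1/2, and φ(u,v)=u+v applied to |g″|^q = t⁴. These choices are ours, for illustration.

```python
import math

# Spot-check of the estimate in the proof of Theorem 4 with g(t) = t^4/12
# (n = 2, g''(t) = t^2), q = 2 (so qn = 4), s = 1/2, and the symmetric
# phi(u, v) = u + v applied to |g''|^q = t^4 (nonnegative and convex).

def trapezoid(h, lo, hi, m=4000):
    step = (hi - lo) / m
    return step * (sum(h(lo + i * step) for i in range(1, m)) + 0.5 * (h(lo) + h(hi)))

a, b, n, s, q = 0.0, 1.0, 2, 0.5, 2
g2 = lambda t: t * t
phi = lambda u, v: u + v
qn = q * n

# sum_{k=0}^{qn} C(qn,k) (-1)^k / (k+s+1)  equals  int_0^1 t^s (1-t)^{qn} dt
qn_sum = sum(math.comb(qn, k) * (-1) ** k / (k + s + 1) for k in range(qn + 1))
assert abs(qn_sum - trapezoid(lambda t: t ** s * (1 - t) ** qn, 0, 1)) < 1e-4

ok = True
for i in range(1, 50):
    x = a + (b - a) * i / 50
    lhs = (trapezoid(lambda u: (u - a) ** n / math.factorial(n) * g2(u), a, x)
           + trapezoid(lambda u: (b - u) ** n / math.factorial(n) * g2(u), x, b))
    rhs = ((x - a) ** (n + 1) / math.factorial(n)
           * (g2(a) ** q / (qn + 1) + phi(g2(a) ** q, g2(x) ** q) / (qn + s + 1)) ** (1 / q)
           + (b - x) ** (n + 1) / math.factorial(n)
           * (g2(x) ** q / (qn + 1) + qn_sum * phi(g2(x) ** q, g2(b) ** q)) ** (1 / q))
    ok = ok and lhs <= rhs + 1e-9
print(ok)  # True
```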
Corollary 5. Let f:I→R be n-times differentiable on [a,b] such that f^{(n)}∈L([a,b]) with n≥1, 0<s≤1, and let q>1. If |f^{(n)}|^{q} is s-convex in the first sense, then the inequality

\begin{eqnarray*} &&\left|\int_a^b f(t)\,dt-\sum_{k=0}^{n}\left[\frac{(b-x)^{k+1}+(-1)^{k}(x-a)^{k+1}}{(k+1)!}\right]f^{(k)}(x)\right| \\ &\le& \frac{(x-a)^{n+1}}{n!}\left(\frac{1}{qn+1}\left|f^{(n)}(a)\right|^{q}+\frac{\left|f^{(n)}(x)\right|^{q}-\left|f^{(n)}(a)\right|^{q}}{qn+s+1}\right)^{\frac{1}{q}} \\ &&+\frac{(b-x)^{n+1}}{n!}\left(\frac{1}{qn+1}\left|f^{(n)}(x)\right|^{q}+\sum_{k=0}^{qn}\binom{qn}{k}\frac{(-1)^{k}}{k+s+1}\left(\left|f^{(n)}(b)\right|^{q}-\left|f^{(n)}(x)\right|^{q}\right)\right)^{\frac{1}{q}} \end{eqnarray*}

holds for all x∈[a,b].

Proof. Taking φ(u,v)=v−u in Theorem 4.
In this section, following [12], we define s-φb-convex functions as a generalized form of s-φ-convex functions [37] and give some results.
Definition 2. Let R₊ be the set of nonnegative real numbers and let b:R×R×[0,1]→R₊ be a function with t^s b(x,y,t)∈[0,1] for all x,y∈R, t∈[0,1] and s∈(0,1]. A function f:I→R is called s-φb-convex if

\begin{eqnarray*} f(tx+(1-t)y) \le f(y)+t^{s}\,b(x,y,t)\,\varphi(f(x),f(y)) \end{eqnarray*}

for all x,y∈I and t∈[0,1].
Remark 6. If b(x,y,t)=1, then the definition of s-φb-convex function reduces to the definition of s-φ-convex function.
Theorem 5. Let f:I→R be a function and let b:R×R×[0,1]→R₊ be a function with t^s b(x,y,t)∈[0,1] for all x,y∈R and s,t∈[0,1]. Then the following assertions are equivalent:

(i) f is s-φb-convex for some b and s∈[0,1].

(ii) f is φ-quasiconvex.

Proof. (i)⇒(ii) For any x,y∈I and t∈[0,1],

\begin{eqnarray*} f(tx+(1-t)y) \le f(y)+t^{s}b(x,y,t)\varphi(f(x),f(y)) \le \max\{f(y),\,f(y)+\varphi(f(x),f(y))\}. \end{eqnarray*}

(ii)⇒(i) For x,y∈I and t∈[0,1], define

\begin{eqnarray*} b(x,y,t)=\begin{cases} \dfrac{1}{t^{s}} & \text{if } t\in(0,1] \text{ and } f(y)\le f(y)+\varphi(f(x),f(y)), \\[2mm] 0 & \text{if } t=0 \text{ or } f(y)>f(y)+\varphi(f(x),f(y)). \end{cases} \end{eqnarray*}

Notice that t^s b(x,y,t)∈[0,1]. For such a function b we have

\begin{eqnarray*} f(tx+(1-t)y) &\le& \max\{f(y),\,f(y)+\varphi(f(x),f(y))\} \\ &=& t^{s}b(x,y,t)\left[f(y)+\varphi(f(x),f(y))\right]+\left(1-t^{s}b(x,y,t)\right)f(y) \\ &=& f(y)+t^{s}b(x,y,t)\varphi(f(x),f(y)). \end{eqnarray*}
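The function b constructed in the proof can be exercised numerically. The sketch below uses f(x)=x², φ(u,v)=2u+v and s=1/2 on nonnegative inputs (our choices, for illustration); here φ≥0, so the first branch of b applies whenever t>0, and f is φ-quasiconvex because (tx+(1−t)y)² ≤ (x+y)² ≤ 2x²+2y² = f(y)+φ(f(x),f(y)).

```python
import random

# Demo of the b constructed in the proof of Theorem 5, with f(x) = x^2,
# phi(u, v) = 2u + v and s = 1/2 on nonnegative inputs (phi >= 0, so the
# first branch of b applies for t > 0).

f = lambda x: x * x
phi = lambda u, v: 2 * u + v
s = 0.5

def b(x, y, t):
    if t > 0 and f(y) <= f(y) + phi(f(x), f(y)):
        return 1 / t ** s
    return 0.0

random.seed(2)
ok = True
for _ in range(10_000):
    x, y, t = random.uniform(0, 4), random.uniform(0, 4), random.random()
    assert 0 <= t ** s * b(x, y, t) <= 1
    ok = ok and f(t * x + (1 - t) * y) <= f(y) + t ** s * b(x, y, t) * phi(f(x), f(y)) + 1e-12
print(ok)  # True
```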
Remark 7. Let f:I→R be an s-φ-convex function. For x₁,x₂∈I with α₁+α₂=1, we have f(α₁x₁+α₂x₂) ≤ f(x₂)+α₁^s φ(f(x₁),f(x₂)). Also, when n>2, for x₁,x₂,...,x_n∈I with \sum_{i=1}^{n}α_i=1 and T_i=\sum_{j=1}^{i}α_j, we have

\begin{eqnarray*} f\left(\sum_{i=1}^{n}\alpha_i x_i\right)=f\left(\left(T_{n-1}\sum_{i=1}^{n-1}\frac{\alpha_i}{T_{n-1}}x_i\right)+\alpha_n x_n\right) \le f(x_n)+T_{n-1}^{s}\,\varphi\!\left(f\left(\sum_{i=1}^{n-1}\frac{\alpha_i}{T_{n-1}}x_i\right),f(x_n)\right). \quad (4.1) \end{eqnarray*}
Theorem 6. Let f:I→R be an s-φ-convex function and let φ be nondecreasing and nonnegatively sublinear in its first variable. If T_i=\sum_{j=1}^{i}α_j for i=1,2,...,n with T_n=1, then

\begin{eqnarray*} f\left(\sum_{i=1}^{n}\alpha_i x_i\right) \le f(x_n)+\sum_{i=1}^{n-1}T_i^{s}\,\varphi_f(x_i,x_{i+1},\ldots,x_n), \end{eqnarray*}

where φ_f(x_i,x_{i+1},...,x_n)=φ(φ_f(x_i,x_{i+1},...,x_{n−1}),f(x_n)) and φ_f(x)=f(x) for all x∈I.

Proof. Since φ is nondecreasing and nonnegatively sublinear in its first variable, (4.1) yields

\begin{eqnarray*} f\left(\sum_{i=1}^{n}\alpha_i x_i\right) &=& f\left(\left(T_{n-1}\sum_{i=1}^{n-1}\frac{\alpha_i}{T_{n-1}}x_i\right)+\alpha_n x_n\right) \le f(x_n)+T_{n-1}^{s}\,\varphi\!\left(f\left(\sum_{i=1}^{n-1}\frac{\alpha_i}{T_{n-1}}x_i\right),f(x_n)\right) \\ &=& f(x_n)+T_{n-1}^{s}\,\varphi\!\left(f\left(\frac{T_{n-2}}{T_{n-1}}\sum_{i=1}^{n-2}\frac{\alpha_i}{T_{n-2}}x_i+\frac{\alpha_{n-1}}{T_{n-1}}x_{n-1}\right),f(x_n)\right) \\ &\le& f(x_n)+T_{n-1}^{s}\,\varphi\!\left(f(x_{n-1})+\left(\frac{T_{n-2}}{T_{n-1}}\right)^{s}\varphi\!\left(f\left(\sum_{i=1}^{n-2}\frac{\alpha_i}{T_{n-2}}x_i\right),f(x_{n-1})\right),f(x_n)\right) \\ &\le& f(x_n)+T_{n-1}^{s}\,\varphi\!\left(f(x_{n-1}),f(x_n)\right)+T_{n-2}^{s}\,\varphi\!\left(\varphi\!\left(f\left(\sum_{i=1}^{n-2}\frac{\alpha_i}{T_{n-2}}x_i\right),f(x_{n-1})\right),f(x_n)\right) \\ &\le& \ldots \\ &\le& f(x_n)+T_{n-1}^{s}\,\varphi\!\left(f(x_{n-1}),f(x_n)\right)+T_{n-2}^{s}\,\varphi\!\left(\varphi\!\left(f(x_{n-2}),f(x_{n-1})\right),f(x_n)\right)+\ldots+T_1^{s}\,\varphi\!\left(\varphi\!\left(\ldots\varphi\!\left(\varphi\!\left(f(x_1),f(x_2)\right),f(x_3)\right)\ldots,f(x_{n-1})\right),f(x_n)\right) \\ &=& f(x_n)+T_{n-1}^{s}\,\varphi_f(x_{n-1},x_n)+T_{n-2}^{s}\,\varphi_f(x_{n-2},x_{n-1},x_n)+\ldots+T_1^{s}\,\varphi_f(x_1,x_2,\ldots,x_{n-1},x_n) \\ &=& f(x_n)+\sum_{i=1}^{n-1}T_i^{s}\,\varphi_f(x_i,x_{i+1},\ldots,x_n). \end{eqnarray*}
Example 3. Consider f(x)=x² and φ(x,y)=2x+y for x,y∈R₊=[0,+∞). The function φ is nondecreasing and nonnegatively sublinear in its first variable, and f is \frac{1}{2}-\varphi-convex (see Example 1). Now, for x₁,x₂,...,x_n∈R₊ and α₁,α₂,...,α_n with \sum_{i=1}^{n}α_i=1, according to Theorem 6 we have

\begin{eqnarray*} \left(\sum_{i=1}^{n}\alpha_i x_i\right)^{2} \le x_n^{2}+\sum_{i=1}^{n-1}T_i^{\frac{1}{2}}\varphi_f(x_i,x_{i+1},\ldots,x_n) \le x_n^{2}+\sum_{i=1}^{n-1}T_i^{\frac{1}{2}}\left[2\left[\ldots 2\left[2x_i^{2}+x_{i+1}^{2}\right]+x_{i+2}^{2}\right]+\ldots+x_n^{2}\right]. \end{eqnarray*}
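The Jensen-type bound of Theorem 6, in the concrete setting of Example 3, can be checked on random data; the nested bifunction φ_f unfolds as a left fold of φ over the tail of the point list.

```python
import random

# Numerical spot-check of Theorem 6 with f(x) = x^2, phi(u, v) = 2u + v,
# s = 1/2, on nonnegative inputs, as in Example 3.

f = lambda x: x * x
phi = lambda u, v: 2 * u + v
s = 0.5

def phi_f(pts):
    # phi_f(x_i) = f(x_i);  phi_f(x_i,...,x_n) = phi(phi_f(x_i,...,x_{n-1}), f(x_n))
    acc = f(pts[0])
    for z in pts[1:]:
        acc = phi(acc, f(z))
    return acc

random.seed(1)
ok = True
for _ in range(1000):
    m = random.randint(2, 6)
    xs = [random.uniform(0, 3) for _ in range(m)]
    ws = [random.random() for _ in range(m)]
    total = sum(ws)
    alphas = [w / total for w in ws]          # weights summing to 1
    T = [sum(alphas[: i + 1]) for i in range(m)]   # partial sums T_i
    lhs = f(sum(a_i * x_i for a_i, x_i in zip(alphas, xs)))
    rhs = f(xs[-1]) + sum(T[i] ** s * phi_f(xs[i:]) for i in range(m - 1))
    ok = ok and lhs <= rhs + 1e-12
print(ok)  # True
```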
In this section we give some applications for the special case where n=2 and φ(f(x),f(y))=f(x)−f(y); in this case f is s-convex in the first sense.
Example 4. Let s∈(0,1) and p,q,r∈R. We define the function f:[0,+∞)→R as

\begin{eqnarray*} f(t)=\begin{cases} p & \text{if } t=0, \\ qt^{s}+r & \text{if } t>0. \end{cases} \end{eqnarray*}

If q≥0 and r≤p, then f is s-convex in the first sense (see [18]). If we take φ(f(x),f(y))=f(x)−f(y), then f is s-φ-convex, but f is not φ-convex because f is not convex.
Example 5. In the previous example, if s=\frac{1}{2}, p=1, q=2 and r=1, we have that f:[0,+∞)→R, f(t)=2t^{\frac{1}{2}}+1 is \frac{1}{2}-\varphi-convex. Then, if we define g:[0,+∞)→R, g(t)=\frac{8}{15}t^{\frac{5}{2}}+\frac{t^{2}}{2}, we have that g'' is \frac{1}{2}-\varphi-convex on [0,+\infty) with \varphi(f(x),f(y))=f(x)-f(y). Using Theorem 1, for a,b\in[0,+\infty) with a<b and x\in[a,b], we get
\begin{eqnarray*} &&\left|16\left(b^{\frac{7}{2}}-a^{\frac{7}{2}}\right)+35(a^3-b^3)\sqrt{x}+35(b^2-a^2)x^{\frac{3}{2}}+21(a-b)x^{\frac{5}{2}}\right|\\ &\le& \frac{35}{2}\left(1+2\sqrt{b}\right)(b-x)^3-\frac{10}{3}\left(\frac{7}{2}+\sqrt{a}+6\sqrt{b}\right)(a-x)^3. \end{eqnarray*} |
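The inequality above can be checked numerically for a concrete choice of endpoints. The sketch below takes a=0 and b=1 (the particular case considered in the remark that follows) and scans a grid of x∈[0,1].

```python
# Numerical check of the inequality in Example 5 for a = 0, b = 1,
# on a grid of x in [0, 1].

a, b = 0.0, 1.0
violations = 0
for i in range(1001):
    x = a + (b - a) * i / 1000
    lhs = abs(16 * (b ** 3.5 - a ** 3.5)
              + 35 * (a ** 3 - b ** 3) * x ** 0.5
              + 35 * (b ** 2 - a ** 2) * x ** 1.5
              + 21 * (a - b) * x ** 2.5)
    rhs = (35 / 2 * (1 + 2 * b ** 0.5) * (b - x) ** 3
           - 10 / 3 * (3.5 + a ** 0.5 + 6 * b ** 0.5) * (a - x) ** 3)
    if lhs > rhs + 1e-9:
        violations += 1
print(violations)  # 0
```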
Remark 8. In particular, if we choose a=0 and b=1, then for x\in[0,1] we obtain a graphic representation of Example 5 (figure omitted).
Example 6. If we define g(t)=\frac{t^4}{12}, we have that g''(t) is \frac{1}{2}-\varphi-convex with \varphi(u,v)=2u+v (see Example 1) and, by Theorem 1, for a,b\in\mathbb{R} with a<b and x\in[a,b], we have that
\begin{eqnarray*} &&\left|\frac{b^5-a^5}{60}-\frac{(b-a)}{12}x^4-\left[\frac{b^2-2x(b-a)-a^2}{3}\right]x^3-\left[\frac{(b-x)^3+(x-a)^3}{6}\right]x^2\right|\\ &\le& (x-a)^3\left[x^2- \frac{5}{6}a^2\right] +(b-x)^3\left[\frac{19}{210}x^2+\frac{8}{105}b^2\right]. \end{eqnarray*} |
Moreover, if we choose x=\frac{a+b}{2}, we obtain that
\begin{eqnarray*} &&\left|\frac{b^5-a^5}{60}-\frac{(b-a)(a+b)^4}{192}-\frac{(b-a)^3(a+b)^2}{96}\right|\\ &\le& \frac{(b-a)^3}{16}\left[\frac{a^2}{3}+\frac{9a^2+2ab+b^2}{14}+\frac{(a+b)^2}{12} +\frac{8a^2+16ab+24b^2}{105}\right]. \end{eqnarray*} |
Then
\begin{eqnarray*} \left|(a-b)^5\right| \le \frac{(b-a)^3}{7}(477a^2+194ab+161b^2). \end{eqnarray*} |
Therefore
\begin{eqnarray*} (a-b)^2 \le \frac{477a^2+194ab+161b^2}{7}. \end{eqnarray*} |
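The last display is elementary: it is equivalent to 470a²+208ab+154b² ≥ 0, a quadratic form whose discriminant 208² − 4·470·154 is negative, hence nonnegative for all real a,b. A quick numerical confirmation (the check itself is our addition):

```python
import random

# Check (a - b)^2 <= (477 a^2 + 194 a b + 161 b^2) / 7, equivalently
# 470 a^2 + 208 a b + 154 b^2 >= 0 (discriminant 208^2 - 4*470*154 < 0).

assert 208 ** 2 - 4 * 470 * 154 < 0

random.seed(3)
violations = 0
for _ in range(100_000):
    a = random.uniform(-100, 100)
    b = random.uniform(-100, 100)
    if (a - b) ** 2 > (477 * a * a + 194 * a * b + 161 * b * b) / 7 + 1e-9:
        violations += 1
print(violations)  # 0
```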
Example 7. If we define g(t) = \frac{36}{91}\sqrt[3]{2}\ t^{\frac{13}{6}}, we have that |g''(t)|^3 is \frac{1}{2}-\varphi-convex with \varphi(u,v)=2u+v (see Example 1) and, by Theorem 4, for a,b\in\mathbb{R} with a<b and x\in[a,b], we have
\begin{eqnarray*} &&\left|\frac{216}{1729}[b^{\frac{19}{6}}-a^{\frac{19}{6}}]-\frac{36x^{\frac{13}{6}}}{91}(b-a)+\frac{6x^{\frac{7}{6}}}{14}[(b-x)^2-(x-a)^2]+\frac{x^{\frac{1}{6}}}{6}[(b-x)^3+(x-a)^3]\right|\\ &\le & \frac{(x-a)^3}{2\sqrt[3]{48}}\left(a^{\frac{1}{2}}+2x^{\frac{1}{2}}\right)^{\frac{1}{3}}+\frac{(b-x)^3}{2\sqrt[3]{48}}\left(x^{\frac{1}{2}}+2b^{\frac{1}{2}}\right)^{\frac{1}{3}}. \end{eqnarray*} |
In this paper we have established new Ostrowski-type inequalities, extending those given by Badreddine Meftah in [23], for s-φ-convex functions f∈C^n([a,b]) such that f^{(n)}∈L([a,b]) with n≥1, and we have given some applications to special means, the midpoint formula, and some examples for the case n=2. We expect that the ideas and techniques used in this paper may inspire interested readers to explore new applications of these newly introduced functions in various fields of pure and applied sciences.
The authors want to give thanks to the Dirección de investigación from Pontificia Universidad Católica del Ecuador for technical support to our research project entitled: "Algunas desigualdades integrales para funciones convexas generalizadas y aplicaciones".
The authors declare that they have no conflicts of interest.