Research article

The Ostrowski inequality for s-convex functions in the third sense

  • Received: 21 October 2021 Revised: 28 December 2021 Accepted: 30 December 2021 Published: 10 January 2022
  • MSC : 26A51

  • In this paper, the Ostrowski inequality for s-convex functions in the third sense is studied. By applying the Hölder and power-mean integral inequalities, the Ostrowski inequality is obtained for functions whose derivatives' absolute values, raised to certain powers, are s-convex in the third sense. In addition, by means of these inequalities, an error estimate for a quadrature formula via Riemann sums and some relations involving means are given as applications.

    Citation: Gültekin Tınaztepe, Sevda Sezer, Zeynep Eken, Sinem Sezer Evcan. The Ostrowski inequality for s-convex functions in the third sense[J]. AIMS Mathematics, 2022, 7(4): 5605-5615. doi: 10.3934/math.2022310




    The applications of the generalized inverse of matrices or operators are of interest in numerical mathematics. Indeed, when a matrix is singular or rectangular, many computational and theoretical problems require different forms of generalized inverses. In the finite-dimensional case, an important application of the Moore-Penrose inverse is to minimize the positive definite quadratic form x^t x, where t denotes the transpose, under linear constraints. More precisely, the weighted Moore-Penrose inverse plays a prominent role in indefinite linear least-squares problems [1,2].

    Let C^(m×n) denote the set of all m×n matrices with complex entries. Further, for an arbitrary matrix A ∈ C^(m×n) and two Hermitian positive definite matrices M ∈ C^(m×m) and N ∈ C^(n×n), there exists a unique matrix S ∈ C^(n×m) satisfying the following properties:

    1) ASA = A,  2) SAS = S,  3) (MAS)* = MAS,  4) (NSA)* = NSA. (1.1)

    Then S is said to be the weighted Moore-Penrose inverse (WMPI) of A with respect to the matrices M and N; it is generally denoted by A†_MN. In particular, when M and N are the identity matrices of order m and n, respectively, S is known as the Moore-Penrose inverse, denoted by A†. Moreover, the above relations reduce to the well-known Penrose equations [3,4]:

    1) ASA = A,  2) SAS = S,  3) (AS)* = AS,  4) (SA)* = SA. (1.2)
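    As a quick sanity check, the four Penrose equations (1.2) can be verified numerically for a small rank-one matrix whose Moore-Penrose inverse is known in closed form. The following pure-Python sketch (the example matrix and helper names are ours, not from the paper) does so:

```python
# Verify the four Penrose equations (1.2) for the rank-one matrix
# A = [[1, 2], [2, 4]], whose Moore-Penrose inverse is A / 25.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

def close(X, Y, tol=1e-12):
    return all(abs(a - b) < tol for r, s in zip(X, Y) for a, b in zip(r, s))

A = [[1.0, 2.0], [2.0, 4.0]]
S = [[v / 25.0 for v in row] for row in A]          # candidate pseudoinverse

assert close(matmul(matmul(A, S), A), A)            # 1) ASA = A
assert close(matmul(matmul(S, A), S), S)            # 2) SAS = S
assert close(matmul(A, S), transpose(matmul(A, S))) # 3) (AS)* = AS
assert close(matmul(S, A), transpose(matmul(S, A))) # 4) (SA)* = SA
```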

    The elementary technique for computing the WMPI of a matrix is based entirely on the weighted singular value decomposition [5], which takes the following form. Let A ∈ C_r^(m×n), where C_r^(m×n) denotes the set of complex m×n matrices of rank r. Then there exist matrices P ∈ C^(m×m) and Q ∈ C^(n×n) satisfying P*MP = I_m and Q*N⁻¹Q = I_n such that

    A = P [D 0; 0 0] Q*, (1.3)

    where D = diag(σ_1, σ_2, …, σ_r), the σ_i² are the nonzero eigenvalues of the matrix N⁻¹A*MA, and σ_1 ≥ σ_2 ≥ ⋯ ≥ σ_r > 0. Then the WMPI A†_MN of the matrix A can be expressed as:

    A†_MN = N⁻¹Q [D⁻¹ 0; 0 0] P*M. (1.4)

    Note that in this manuscript the weighted conjugate transpose of a matrix A is denoted by A^# and equals N⁻¹A*M, whereas A* denotes the conjugate transpose of the matrix A ∈ C^(m×n). Consequently, (A†_MN)^# = M⁻¹(A†_MN)*N and (AA†_MN)^# = P [I_r 0; 0 0] P⁻¹ [6, p. 41]. Moreover, the following properties hold:

    A^# A A†_MN = A^#,  A^# (A A†_MN)^# = A^#,  (A†_MN A)^# A^# = A^#,  A†_MN A A^# = A^#.

    A diverse range of other methodologies has been presented in the literature to determine the WMPI of a matrix. For computing the generalized inverse numerically, Greville's partitioning method was introduced in [7], and a new proof of Greville's method for the WMPI was given by Wang [8]. However, such methods involve many operations, so rounding errors accumulate; moreover, numerical techniques for finding the Moore-Penrose inverse often lack numerical stability [9]. Besides this, Wang [10] obtained a comprehensive proof for the WMPI of a partitioned matrix in recursive form, and a WMPI method for multi-variable polynomial matrices was introduced in [9]. Moreover, new determinantal representations for weighted generalized inverses were presented in [11], a representation of the WMPI of a quaternion matrix was discussed by Kyrchei [12,13], and its explicit representation for two-sided restricted quaternionic matrix equations was investigated in [14].

    The hyperpower iterative method was given by Altman [15] for inverting a linear bounded operator in Hilbert space, and its applicability for generating the Moore-Penrose inverse of a matrix was shown by Ben-Israel [16]. Several iterative methods fall into this category of hyperpower matrix iterations, and the general pth-order iterative method is written as follows:

    S_(k+1) = S_k(I + R_k + R_k² + ⋯ + R_k^(p−1)), (1.5)

    where R_k = I − AS_k, I is the identity matrix of order m, and S_0 is the initial approximation to A⁻¹. This scheme is attractive because it is based entirely on matrix-matrix products, which can be implemented fruitfully on parallel machines. In this approach, each pth-order iterative method is expressed in terms of the hyperpower series and requires p matrix-matrix multiplications. If p = 2, iterative method (1.5) yields the well-known Newton-Schulz method (derived in [17,18]):

    S_(k+1) = S_k(2I − AS_k),  k = 0, 1, …. (1.6)

    Although this method is quadratically convergent, has poly-logarithmic complexity, and is numerically stable (as discussed in [19]), scheme (1.6) often shows slow convergence during the initial iterations, which increases the computational workload of calculating the matrix inverse.
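    To illustrate the behavior of (1.6), the following minimal pure-Python sketch inverts a small symmetric positive definite matrix with Newton-Schulz, starting from the common scaled-transpose guess S_0 = A^T/(‖A‖₁‖A‖_∞). The example matrix and starting scale are our own illustrative choices:

```python
# Newton-Schulz iteration (1.6): S_{k+1} = S_k (2I - A S_k).
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[2.0, 1.0], [1.0, 3.0]]
# ||A||_1 = ||A||_inf = 4 here, so S_0 = A^T / 16 (A is symmetric).
S = [[A[j][i] / 16.0 for j in range(2)] for i in range(2)]

for k in range(25):
    AS = matmul(A, S)
    R = [[2.0 * (i == j) - AS[i][j] for j in range(2)] for i in range(2)]  # 2I - AS
    S = matmul(S, R)

exact = [[0.6, -0.2], [-0.2, 0.4]]       # A^{-1} = (1/5) [[3, -1], [-1, 2]]
err = max(abs(S[i][j] - exact[i][j]) for i in range(2) for j in range(2))
assert err < 1e-10
```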

    In 2017, H. Esmaeili and A. Pirnia [20] constructed a quadratically convergent iterative scheme as follows:

    S_(k+1) = S_k(5.5I − AS_k(8I − 3.5AS_k)),  k = 0, 1, …. (1.7)

    If p = 3, the hyperpower iterative method (1.5) becomes a cubically convergent method, which can also be derived from the Chebyshev scheme [21]:

    S_(k+1) = S_k(3I − AS_k(3I − AS_k)), (1.8)

    as investigated by Li et al. [21] in 2001. Along with this, they developed two further third-order iterative methods, based on the mid-point rule method [22] and Homeier's method [22], given as:

    S_(k+1) = (I + (1/4)(I − S_kA)(3I − S_kA)²) S_k, (1.9)

    and

    S_(k+1) = S_k(I + (1/2)(I − AS_k)(I + (2I − AS_k)²)), (1.10)

    respectively. In 2017, a fourth-order scheme was presented by Esmaeili et al. [23], as demonstrated below:

    S_(k+1) = S_k(9I − 26(AS_k) + 34(AS_k)² − 21(AS_k)³ + 5(AS_k)⁴). (1.11)

    Toutounian and Soleymani [24] have proposed the following fourth-order method:

    S_(k+1) = (1/2)S_k(9I − AS_k(16I − AS_k(14I − AS_k(6I − AS_k)))). (1.12)

    Schemes of this kind can be extracted in a general way from Eq. (1.5). Thus, for p = 4, the fourth-order iterative scheme is:

    S_(k+1) = S_k(4I − 6AS_k + 4(AS_k)² − (AS_k)³), (1.13)

    which uses four matrix-matrix multiplications. In 2013, Soleymani [25] demonstrated a fifth-order iterative method that uses six matrix multiplications at each step; the scheme is presented below:

    S_(k+1) = (1/2)S_k(11I − AS_k(25I − AS_k(30I − AS_k(20I − AS_k(7I − AS_k))))). (1.14)

    In this paper, we investigate a fifth-order convergent iterative method for computing the weighted Moore-Penrose inverse. We focus on a major factor of the computational cost, paying close attention to reducing computation time. In addition, a theoretical study is carried out to justify the method's ability to find the weighted Moore-Penrose inverse of any matrix. The aim of the presented work is also supported by its numerical performance.

    Our aim is to derive, with the help of Eq. (1.5), a fifth-order iterative method for finding the weighted Moore-Penrose inverse of a matrix A that uses fewer matrix multiplications than (1.5) requires. The hyperpower series for p = 5 can be written as:

    S_(k+1) = S_k(5I − 10AS_k + 10(AS_k)² − 5(AS_k)³ + (AS_k)⁴) = S_k Φ(AS_k), (2.1)

    where Φ(AS_k) = 5I − 10AS_k + 10(AS_k)² − 5(AS_k)³ + (AS_k)⁴. The count of matrix multiplications in the iterative method (2.1) is five. The computational time used by (2.1) can be reduced by lowering the number of matrix-matrix multiplications at each step. For this, we reformulate scheme (2.1) as follows:

    X_k = AS_k,  Y_k = X_k²,  V_k = 5I − 5X_k,
    S_(k+1) = S_k(V_k − 5X_k + Y_k(5I + V_k + Y_k)) = S_k Φ(AS_k),  k = 0, 1, 2, …. (2.2)

    This is a new iterative method (2.2) for computing the generalized inverse of any matrix. It can easily be seen that it uses four matrix-matrix multiplications at every step (one each for X_k, Y_k, the product with Y_k, and the final product with S_k).
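    The four products per step can be made explicit in code. The sketch below (pure Python; illustrative helpers and a square invertible A for simplicity, which are our own choices) performs the factored step of (2.2) and converges to A⁻¹:

```python
# One step of (2.2): X = A S (product 1), Y = X X (product 2),
# Y (5I + V + Y) (product 3), S (...) (product 4), with V = 5I - 5X.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def lin(c0, *pairs):
    # c0 * I + sum of c * M over the (c, M) pairs; 2x2 matrices here
    out = [[c0 * (i == j) for j in range(2)] for i in range(2)]
    for c, M in pairs:
        for i in range(2):
            for j in range(2):
                out[i][j] += c * M[i][j]
    return out

A = [[2.0, 1.0], [1.0, 3.0]]
S = [[A[j][i] / 16.0 for j in range(2)] for i in range(2)]   # S_0 = A^T / 16

for k in range(6):
    X = matmul(A, S)                                  # product 1
    Y = matmul(X, X)                                  # product 2
    V = lin(5.0, (-5.0, X))                           # V = 5I - 5X (no product)
    T = matmul(Y, lin(5.0, (1.0, V), (1.0, Y)))       # product 3: Y(5I + V + Y)
    S = matmul(S, lin(0.0, (1.0, V), (-5.0, X), (1.0, T)))  # product 4

exact = [[0.6, -0.2], [-0.2, 0.4]]                    # A^{-1}
err = max(abs(S[i][j] - exact[i][j]) for i in range(2) for j in range(2))
assert err < 1e-10
```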

    In the next section, we prove theoretically that the order of convergence of the above scheme is five and that it is applicable for generating the weighted Moore-Penrose inverse.

    Lemma 3.1. For the approximate sequence {S_k}_(k=0)^∞ generated by the iterative method (2.2) with the initial matrix

    S0=δA#, (3.1)

    the following Penrose equations hold:

    (a) S_k A A†_MN = S_k,    (b) A†_MN A S_k = S_k,    (c) (MAS_k)* = MAS_k,    (d) (NS_kA)* = NS_kA.

    Proof. This lemma can be proved via mathematical induction on k. For k = 0, Eq. (a) holds. That is,

    S_0 A A†_MN = δA^# A A†_MN = δN⁻¹A*M A A†_MN
    = δN⁻¹Q [D 0; 0 0] P*M P [D 0; 0 0] Q* N⁻¹Q [D⁻¹ 0; 0 0] P*M
    = δN⁻¹Q [D 0; 0 0] I_m [D 0; 0 0] I_n [D⁻¹ 0; 0 0] P*M
    = δN⁻¹Q [D 0; 0 0] P*M = δA^# = S_0.

    Further, we assume that the result holds for k, i.e.,

    S_k A A†_MN = S_k,  or equivalently  AS_k A A†_MN = AS_k. (3.2)

    Now we prove that Eq. (a) continues to hold for k + 1, i.e., S_(k+1) A A†_MN = S_(k+1). Considering its left-hand side and using the iterative scheme (2.2), we get

    S_(k+1) A A†_MN = S_k(5I − 10AS_k + 10(AS_k)² − 5(AS_k)³ + (AS_k)⁴) A A†_MN
    = 5S_k A A†_MN − 10S_k AS_k A A†_MN + 10S_k(AS_k) AS_k A A†_MN − 5S_k(AS_k)² AS_k A A†_MN + S_k(AS_k)³ AS_k A A†_MN.

    Substituting Eq. (3.2) in the above equation, one can obtain

    S_(k+1) A A†_MN = 5S_k − 10S_k AS_k + 10S_k(AS_k)(AS_k) − 5S_k(AS_k)² AS_k + S_k(AS_k)³ AS_k
    = S_k(5I − 10AS_k + 10(AS_k)² − 5(AS_k)³ + (AS_k)⁴) = S_(k+1).

    Hence, by the principle of mathematical induction, Eq. (a) holds for all k ∈ W, where W = {0, 1, 2, 3, …}. Next, Eq. (c) of this lemma can easily be verified for k = 0. Let the result be true for k, i.e., (MAS_k)* = MAS_k. We now show that it holds for k + 1. Using the iterative scheme (2.2),

    (MAS_(k+1))* = (MAS_k(5I − 10AS_k + 10(AS_k)² − 5(AS_k)³ + (AS_k)⁴))*
    = 5(MAS_k)* − 10(M(AS_k)²)* + 10(M(AS_k)³)* − 5(M(AS_k)⁴)* + (M(AS_k)⁵)*. (3.3)

    Using the fact that (MAS_k)* = MAS_k, which gives (AS_k)* = MAS_kM⁻¹, we obtain for q > 0:

    (M(AS_k)^q)* = ((AS_k)^q)* M = ((AS_k)*)^q M = (MAS_kM⁻¹)^q M = M(AS_k)^q M⁻¹ M = M(AS_k)^q.

    Thus, the Eq. (3.3) becomes:

    (MAS_(k+1))* = 5MAS_k − 10M(AS_k)² + 10M(AS_k)³ − 5M(AS_k)⁴ + M(AS_k)⁵ = MAS_(k+1). (3.4)

    Thus, Eq. (c) holds for k + 1. The remaining equations (i.e., (b) and (d)) can be proved analogously. Hence, the proof is complete.

    Let A be a complex matrix of order m×n with rank r. Assume that P ∈ C^(m×m), Q ∈ C^(n×n), and that M and N are Hermitian positive definite matrices satisfying P*MP = I_m and Q*N⁻¹Q = I_n. Then the weighted singular value decomposition of the matrix A can be expressed by Eq. (1.3).

    Lemma 3.2. Under the conditions of Lemma 3.1, each approximate inverse produced by the iterative scheme (2.2) satisfies:

    Θ_k = (Q⁻¹N) S_k (M⁻¹(P*)⁻¹) = [T_k 0; 0 0], (3.5)

    where Tk is a diagonal matrix and it is given by

    T_k = T_(k−1) Φ(D T_(k−1)) for k ≥ 1,  and  T_0 = δD. (3.6)

    Here, D denotes the diagonal matrix of order r appearing in (1.3).

    Proof. We prove this lemma by using the mathematical induction on k. For k=0, we have

    (Q⁻¹N) S_0 (M⁻¹(P*)⁻¹) = δ(Q⁻¹N) A^# (M⁻¹(P*)⁻¹) = δ(Q⁻¹N N⁻¹) A* M (M⁻¹(P*)⁻¹)
    = δ(Q⁻¹N N⁻¹ Q) [D 0; 0 0] (P* M M⁻¹ (P*)⁻¹) = [δD 0; 0 0]. (3.7)

    Further, we assume that the result holds for k. Now, we will prove that the result (3.5) is valid for k+1. For this, it is sufficient to prove that

    (Q⁻¹N) S_(k+1) (M⁻¹(P*)⁻¹) = [T_k Φ(DT_k) 0; 0 0].

    Thus,

    (Q⁻¹N) S_(k+1) (M⁻¹(P*)⁻¹) = (Q⁻¹N) S_k(5I − 10AS_k + 10(AS_k)² − 5(AS_k)³ + (AS_k)⁴) (M⁻¹(P*)⁻¹)
    = 5(Q⁻¹N)S_k(M⁻¹(P*)⁻¹) − 10(Q⁻¹N)S_k AS_k(M⁻¹(P*)⁻¹) + 10(Q⁻¹N)S_k(AS_k)²(M⁻¹(P*)⁻¹) − 5(Q⁻¹N)S_k(AS_k)³(M⁻¹(P*)⁻¹) + (Q⁻¹N)S_k(AS_k)⁴(M⁻¹(P*)⁻¹)
    = Θ_k(5I − 10DΘ_k + 10(DΘ_k)² − 5(DΘ_k)³ + (DΘ_k)⁴) = [T_k Φ(DT_k) 0; 0 0]. (3.8)

    Theorem 3.1. For a complex matrix A ∈ C^(m×n), the sequence {S_k}_(k=0)^∞ generated by (2.2) with the initial matrix S_0 = δA^# converges to A†_MN with at least fifth order of convergence.

    Proof. In view of the iterative scheme (2.2), to establish this result we must show that

    lim_(k→∞) (Q⁻¹N) S_k (M⁻¹(P*)⁻¹) = [D⁻¹ 0; 0 0]. (3.9)

    It follows from Lemma 3.2 that

    T_k = diag(t_1^(k), t_2^(k), …, t_r^(k)),  where t_i^(0) = δσ_i, (3.10)

    and

    t_i^(k+1) = t_i^(k)(5 − 10σ_i t_i^(k) + 10(σ_i t_i^(k))² − 5(σ_i t_i^(k))³ + (σ_i t_i^(k))⁴). (3.11)

    The sequence generated by Eq. (3.11) is the result of applying the iterative scheme (2.2) to compute the zero σ_i⁻¹ of the function f(t) = σ_i t − 1 with the initial guess t_i^(0). It can be seen that the iteration converges to σ_i⁻¹ provided 0 < t_i^(0) < 2/σ_i, which leads to the condition on δ (so the choice of initial guess is justified). Thus T_k → D⁻¹ and relation (3.9) is satisfied, proving that the iterative method (2.2) converges to the weighted matrix inverse A†_MN. Now we show that the sequence {S_k}_(k=0)^∞ converges with fifth order. For this, consider
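    The scalar iteration in this argument can be checked directly. The following small Python sketch (the values σ = 4 and the starting points are our own) applies t ← tΦ(σt) once from inside (0, 2/σ) and once from outside:

```python
# Scalar model of the proof: scheme (2.2) applied to f(t) = sigma * t - 1.
def step(t, sigma):
    x = sigma * t
    return t * (5 - 10 * x + 10 * x**2 - 5 * x**3 + x**4)

sigma = 4.0
t = 0.4                        # inside (0, 2/sigma) = (0, 0.5): converges
for _ in range(6):
    t = step(t, sigma)
assert abs(t - 1.0 / sigma) < 1e-14

t_bad = 0.6                    # outside (0, 2/sigma): iterates blow up
for _ in range(4):
    t_bad = step(t_bad, sigma)
assert abs(t_bad) > 1e6
```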

    S_k A = N⁻¹Q [T_k 0; 0 0] P*M P [D 0; 0 0] Q*. (3.12)

    Since P*MP = I_m and Q*N⁻¹Q = I_n, we have (Q*)⁻¹ = N⁻¹Q. Consequently,

    S_k A = (Q*)⁻¹ [T_k D 0; 0 0] Q* = (Q*)⁻¹ [E_k 0; 0 0] Q*, (3.13)

    where E_k = T_k D = diag(β_1^(k), β_2^(k), …, β_r^(k)). This yields

    S_(k+1) A = N⁻¹Q [T_k Φ(DT_k) 0; 0 0] P*M P [D 0; 0 0] Q*, (3.14)

    therefore

    S_(k+1) A = (Q*)⁻¹ [T_k D Φ(DT_k) 0; 0 0] Q*. (3.15)

    Eqs. (3.13) and (3.15) imply E_(k+1) = E_k Φ(E_k). Simplifying, we obtain

    I − E_(k+1) = (I − E_k)⁵, (3.16)

    and thus for all j, 1 ≤ j ≤ r, we have (1 − β_j^(k+1)) = (1 − β_j^(k))⁵. This shows at least fifth-order convergence of the method (2.2) for finding the WMPI, which completes the proof.
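    Relation (3.16) reduces, entrywise, to the scalar identity 1 − βΦ(β) = (1 − β)⁵, which can be confirmed exactly with rational arithmetic (an illustrative check, not part of the paper):

```python
# Exact entrywise check of the error relation (3.16).
from fractions import Fraction

for num in range(-3, 4):
    beta = Fraction(num, 7)                       # arbitrary diagonal entries
    beta_next = beta * (5 - 10*beta + 10*beta**2 - 5*beta**3 + beta**4)
    assert 1 - beta_next == (1 - beta) ** 5       # fifth-power error decay
```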

    The purpose of this section is to confirm the theoretical aspects through numerical testing. To this end, the proposed strategy is compared with existing schemes on practical and academic models. The outcomes were computed using Mathematica, which accomplishes numerical calculations with high accuracy; moreover, its programming language supports symbolic calculation and exact arithmetic. Mathematica 11 was run on a machine with an Intel(R) Core(TM) i7-8565U CPU @ 1.89 GHz and the 64-bit Windows 10 Pro operating system. The comparisons consider the total number of matrix multiplications (TMM), the actual running time (T) in seconds, and the computational order of convergence (ρ). For calculating ρ,

    ρ = ln(‖S_(k+1) − A†_MN‖ / ‖S_k − A†_MN‖) / ln(‖S_k − A†_MN‖ / ‖S_(k−1) − A†_MN‖),  k = 1, 2, …, (4.1)

    the last three approximations S_(k−1), S_k, S_(k+1) are used, and ‖·‖ denotes a generic matrix norm. For comparison purposes, the proposed scheme PM5 (2.2) is compared with the methods of Schulz (1.6), Esmaeili et al. (1.7), Li et al. {(1.8), (1.9), and (1.10)}, Esmaeili et al. (1.11), Toutounian and Soleymani (1.12), Li et al. (1.13), and Soleymani (1.14), denoted by SM2, EM2, CM3, MP3, HM3, EM4, TM4, LM4, and SO5, respectively.
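    Formula (4.1) can be sanity-checked on a synthetic error sequence: if the error norm is raised to the fifth power at each step, the estimated order comes out as 5 (a small illustrative computation with made-up error values):

```python
import math

e = [1e-2]                    # synthetic error norms ||S_k - A†_MN||
for _ in range(2):
    e.append(e[-1] ** 5)      # fifth-order decay: e = [1e-2, 1e-10, 1e-50]

rho = math.log(e[2] / e[1]) / math.log(e[1] / e[0])
assert abs(rho - 5.0) < 1e-9
```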

    Example 1. (Academic problem) Consider the rank-deficient matrix

    A = [1 2 3 4 1 1; 3 4 6 2 2 3; 4 5 3 3 4 5; 6 4 4 5 6 7; 6 6 6 7 7 8]. (4.2)

    The comparison for this test problem is computed using the initial guess S_0 = (1/(σ_min² + σ_max²)) A^T [26], where σ_min and σ_max are bounds on the singular values of A. Moreover, the stopping criterion ‖S_(k+1) − S_k‖ < 10⁻¹⁰⁰ is used to find a good approximation. From Table 1, we observe that the presented method attains the desired result with fewer matrix multiplications than the existing schemes, in minimum time. The presented scheme is therefore more efficient for this rank-deficient matrix, as it shows better outcomes in every component, whereas some of the techniques are unable to determine a solution for this test problem.

    Table 1.  Outcomes for comparison by testing schemes on Example 1.
    Method SM2 EM2 CM3 MP3 HM3 EM4 TM4 LM4 SO5 PM5
    ρ 2.0002 * 3.0000 3.0000 3.0000 * 4.0000 4.0000 5.0000 5.0467
    T 1.141 * 0.860 0.938 0.875 * 0.856 0.873 0.719 0.610
    TMM 38 * 39 48 48 * 50 44 54 32
    * denotes the divergence


    Example 2. Consider the following elliptic partial differential equation:

    ∂²ϕ/∂x² + ∂²ϕ/∂y² = 32ϕ(x,y), (4.3)

    where ϕ is a function of x and y. The equation is satisfied at every point inside the square formed by x = ±1, y = ±1, subject to the following boundary conditions:

    (ⅰ) ϕ(x,y) = 0 on y = −1,    −1 ≤ x ≤ 1,

    (ⅱ) ϕ(x,y) = 1 on y = 1,    −1 ≤ x ≤ 1,

    (ⅲ) ∂ϕ/∂x = −(1/2)ϕ(x,y) on x = 1,    −1 ≤ y ≤ 1,

    (ⅳ) ∂ϕ/∂x = (1/2)ϕ(x,y) on x = −1,    −1 ≤ y ≤ 1.

    By using the central difference formulae on Eq. (4.3), one can obtain

    (ϕ_(i+1,j) − 2ϕ_(i,j) + ϕ_(i−1,j))/h² + (ϕ_(i,j+1) − 2ϕ_(i,j) + ϕ_(i,j−1))/h² = 32ϕ_(i,j), (4.4)

    where ϕ_(i,j) = ϕ(x_i, y_j). Consider the square mesh size h = 1/4, which yields seventy finite difference equations for the approximate solution ϕ(x_i, y_j). From the boundary conditions, one can easily see that the function ϕ is symmetric about the y-axis. Finally, implementing the boundary conditions in (4.4), we obtain the linear system Aϕ = u in thirty-five unknowns, where A is the block tri-diagonal matrix with the 5×5 block Y on the diagonal and the identity matrix I of order 5×5 on the sub- and super-diagonals, seven blocks deep, with

    Y = [−6 2 0 0 0; 1 −6 1 0 0; 0 1 −6 1 0; 0 0 1 −6 1; 0 0 0 2 −25/4],

    and ϕ and u are the column vectors whose transposes equal (ϕ_1, ϕ_2, …, ϕ_34, ϕ_35) and (0, 0, …, 0, −1, −1, −1, −1, −1), respectively. To handle the large sparse array, the SparseArray and Band functions are applied to save memory and reduce the computational burden of matrix multiplication, as follows:

    A = SparseArray[{Band[{1, 1}, {i, i}] -> {-6, -6, -6, -6, -25/4}, Band[{2, 1}, {i, i}] -> {1, 1, 1, 2, 0}, Band[{1, 2}, {i, i}] -> {2, 1, 1, 1, 0}, Band[{1, 6}] -> 1, Band[{6, 1}] -> 1}, {i, i}, 0.]

    The initial guess for this test problem is taken as S_0 = (1/(‖A‖₁‖A‖_∞)) A^T [26], where ‖A‖₁ = max_j Σ_(i=1)^m |a_(i,j)| and ‖A‖_∞ = max_i Σ_(j=1)^n |a_(i,j)|. The results of the comparisons are shown in Table 2. The methods LM4 and PM5 perform better on this example than the other methods, as they use the minimum computational cost (i.e., matrix products); moreover, PM5 yields the result faster. Overall, this demonstrates that the presented method converges faster than its competitors.
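    The two norms in this initial guess are the maximum absolute column sum and the maximum absolute row sum; a small pure-Python sketch (the example matrix and helper names are ours):

```python
def norm1(A):                  # maximum absolute column sum, ||A||_1
    return max(sum(abs(row[j]) for row in A) for j in range(len(A[0])))

def norminf(A):                # maximum absolute row sum, ||A||_inf
    return max(sum(abs(x) for x in row) for row in A)

A = [[2.0, -1.0], [4.0, 3.0]]
alpha = 1.0 / (norm1(A) * norminf(A))                # 1 / (6 * 7)
S0 = [[A[j][i] * alpha for j in range(len(A))]       # S_0 = alpha * A^T
      for i in range(len(A[0]))]
assert norm1(A) == 6.0 and norminf(A) == 7.0
```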

    Table 2.  Outcomes for comparison by testing schemes on Example 2.
    Method SM2 EM2 CM3 MP3 HM3 EM4 TM4 LM4 SO5 PM5
    ρ 2.00234 1.98506 3.25404 3.12742 3.08096 4.08699 4.34940 4.58575 4.98237 5.10871
    T 0.250 0.265 0.203 0.250 0.218 0.406 0.266 0.249 0.235 0.188
    TMM 18 21 15 18 18 20 20 16 24 16


    Example 3. In this test, we compute the weighted Moore-Penrose inverse of a random dense matrix A of order m×n, generated as follows:

    A = RandomReal[{-10, 10}, {m, n}], (4.5)

    where M and N are Hermitian positive definite matrices, constructed as:

    MM = RandomReal[{-2, 2}, {m, m}]; MM = Transpose[MM].MM;
    NN = RandomReal[{-3, 3}, {n, n}]; NN = Transpose[NN].NN;

    The results are obtained using the initial guess S_0 = (1/(σ_min² + σ_max²)) A^# [26] and the stopping criterion ‖S_(k+1) − S_k‖ < 10⁻¹². The comparisons for this problem are listed in Table 3, which shows that PM5 is more efficient than the other existing methods in every aspect.

    Table 3.  Outcomes for comparison by testing schemes on Example 3.
    Method SM2 EM2 CM3 MP3 HM3 EM4 TM4 LM4 SO5 PM5
    T 2.172 1.391 1.329 1.610 1.282 1.265 1.188 1.437 1.157 1.047
    TMM 156 105 132 200 168 130 190 164 186 116


    Example 4. Consider ill-conditioned Hilbert matrices of various orders for computing the Moore-Penrose inverse:

    A = Table[1/(i + j − 1), {i, m}, {j, n}]. (4.6)

    The comparison is obtained with the initial approximation S_0 = (1/(σ_min² + σ_max²)) A^T [26] and the stopping criterion ‖S_(k+1) − S_k‖ ≤ 10⁻²⁰. The results are listed in Table 4 for matrices of various orders. It can be concluded that the PM5 method gives the desired result faster than the other methods, while some methods fail on this test problem. Moreover, PM5 outperforms the others, using the minimum number of matrix multiplications for each matrix order. Hence, this justifies the aim of this paper.
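    The test matrix (4.6) translates directly into a list comprehension mirroring the Mathematica Table call; using exact rationals keeps the entries free of rounding (a minimal sketch, with our own function name):

```python
from fractions import Fraction

def hilbert(m, n):
    # Entry (i, j) of the m-by-n Hilbert matrix is 1 / (i + j - 1).
    return [[Fraction(1, i + j - 1) for j in range(1, n + 1)]
            for i in range(1, m + 1)]

H = hilbert(3, 3)
assert H[0] == [Fraction(1, 1), Fraction(1, 2), Fraction(1, 3)]
assert H[2][2] == Fraction(1, 5)
```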

    Table 4.  Outcomes for comparison by testing schemes on Example 4.
    Method SM2 EM2 CM3 MP3 HM3 EM4 TM4 LM4 SO5 PM5
    Order of matrix is 10 × 10
    ρ 2.0000 * 3.0001 3.0029 3.0047 * 4.1194 4.0004 5.0000 5.2569
    T 7.657 * 5.563 5.546 5.688 * 4.984 5.061 4.197 4.769
    TMM 184 * 174 216 204 * 184 210 222 160
    Order of matrix is 10 × 20
    ρ 2.0000 * 3.0000 3.0027 3.0001 * 4.0006 4.0157 5.0059 5.1114
    T 10.688 * 8.078 9.375 7.234 * 7.313 7.172 9.157 6.109
    TMM 160 * 153 188 180 * 160 185 198 136
    Order of matrix is 15 × 15
    ρ 2.0000 * 3.0000 3.0003 3.0028 * 4.0027 4.0033 5.0305 5.0336
    T 25.656 * 25.500 19.953 18.624 * 16.563 17.234 15.640 15.048
    TMM 286 * 270 252 237 * 284 330 348 244
    * denotes the divergence


    In this manuscript, we have established a new formulation of the fifth-order hyperpower method for computing the weighted Moore-Penrose inverse. Compared with the standard hyperpower method, this new formulation has an improved efficiency index from a theoretical perspective. The resulting approximations of A†_MN are robust and effective when implemented as preconditioners for solving linear systems. Further, a wide range of practical and academic tests was performed to assess the proposed iterative scheme's consistency and effectiveness. The outcomes in each test problem show that the presented method gives the desired result with the fewest matrix multiplications in minimum computational time. Hence, this supports our new transformation of the hyperpower iterative scheme of order five.

    The authors wish to thank anonymous reviewers for careful reading and valuable comments which improved the quality of the paper.

    The authors declare no conflict of interest.



    [1] G. Adilov, I. Yesilce, B1-convex functions, J. Convex Anal., 24 (2017), 505–517. http://dx.doi.org/10.81043/aperta.44759 doi: 10.81043/aperta.44759
    [2] G. Anastassiou, General Grüss and Ostrowski type inequalities involving s-convexity, Bull. Allahabad Math. Soc., 28 (2013), 101–129.
    [3] A. Bayoumi, Foundation of complex analysis in non locally convex spaces: function theory without convexity condition, Amsterdam: Elsevier Science, 2003.
    [4] W. Breckner, Stetigkeitsaussagen für eine Klasse verallgemeinerter Funktionen in topologischen linearen Raumen, Publ. Inst. Math., 23 (1978), 13–20.
    [5] W. Briec, C. Horvath, B-convexity, Optimization, 53 (2004), 103–127. http://dx.doi.org/10.1080/02331930410001695283 doi: 10.1080/02331930410001695283
    [6] S. Dragomir, C. Pearce, Selected topics on Hermite-Hadamard inequalities and applications, Science Direct Working Paper, 2003, S1574-0358(04)70845-X.
    [7] T. Du, C. Luo, Z. Cao, On the Bullen-type inequalities via generalized fractional integrals and their applications, Fractals, 29 (2021), 2150188. http://dx.doi.org/10.1142/S0218348X21501887 doi: 10.1142/S0218348X21501887
    [8] Z. Eken, S. Kemali, G. Tinaztepe, G. Adilov, The Hermite-Hadamard inequalities for p-convex functions, Hacet. J. Math. Stat., 50 (2021), 1268–1279. https://dx.doi.org/10.15672/hujms.775508 doi: 10.15672/hujms.775508
    [9] K. Gdawiec, Fractal patterns from the dynamics of combined polynomial root finding methods, Nonlinear Dyn., 90 (2017), 2457–2479. https://dx.doi.org/10.1007/s11071-017-3813-6 doi: 10.1007/s11071-017-3813-6
    [10] S. Kemali, I. Yesilce, G. Adilov, B-convexity, B1-convexity, and their comparison, Numer. Func. Anal. Opt., 36 (2015), 133–146. https://dx.doi.org/10.1080/01630563.2014.970641 doi: 10.1080/01630563.2014.970641
    [11] S. Kemali, S. Sezer, G. Tınaztepe, G. Adilov, s-Convex functions in the third sense, Korean J. Math., 29 (2021), 593–602. https://dx.doi.org/10.11568/kjm.2021.29.3.593 doi: 10.11568/kjm.2021.29.3.593
    [12] Y. Kwun, M. Tanveer, W. Nazeer, K. Gdawiec, S. Kang, Mandelbrot and Julia Sets via Jungck-CR iteration with s-convexity, IEEE Access, 7 (2019), 12167–12176. https://dx.doi.org/10.1109/ACCESS.2019.2892013 doi: 10.1109/ACCESS.2019.2892013
    [13] W. Orlicz, A note on modular spaces Ⅰ, Bull. Acad. Polon. Sci., 9 (1961), 157–162.
    [14] A. Ostrowski, Über die Absolutabweichung einer differentiierbaren Funktion von ihrem Integralmittelwert, Comment. Math. Helv., 10 (1937), 226–227. doi: 10.1007/BF01214290
    [15] M. Özdemir, A. Ekinci, Some new integral inequalities for functions whose derivatives of absolute values are s-convex, Turkish Journal of Analysis and Number Theory, 7 (2019), 70–76. http://dx.doi.org/10.12691/tjant-7-3-3 doi: 10.12691/tjant-7-3-3
    [16] M. Sarikaya, F. Ertuğral, F. Yıldırım, On the Hermite-Hadamard-Fejér type integral inequality for s-convex function, Konuralp Journal of Mathematics, 6 (2018), 35–41.
    [17] S. Sezer, Z. Eken, G. Tınaztepe, G. Adilov, p-convex functions and some of their properties, Numer. Func. Anal. Opt., 42 (2021), 443–459. http://dx.doi.org/10.1080/01630563.2021.1884876 doi: 10.1080/01630563.2021.1884876
    [18] S. Sezer, The Hermite-Hadamard inequalities for s-convex functions in the third sense, AIMS Mathematics, 6 (2021), 7719–7732. https://dx.doi.org/10.3934/math.2021448 doi: 10.3934/math.2021448
    [19] I. Yesilce, G. Adilov, Some operations on B1-convex sets, Journal of Mathematical Sciences: Advances and Applications, 39 (2016), 99–104. http://dx.doi.org/10.18642/jmsaa_7100121669 doi: 10.18642/jmsaa_7100121669
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
