Abstract
In this paper, we propose a new hyperpower iterative method for approximating the weighted Moore-Penrose inverse of a given matrix. The main objective of the present work is to reduce the computational complexity of the hyperpower iterative method by means of suitable transformations. The proposed method attains fifth-order convergence using only four matrix multiplications per iteration step. The theoretical convergence analysis of the method is discussed in detail. A wide range of numerical problems from the scientific literature is considered, demonstrating the applicability and superiority of the proposed method.
1. Introduction
The applications of generalized inverses of matrices or operators are of considerable interest in numerical mathematics. Indeed, when a matrix is singular or rectangular, many computational and theoretical problems require different forms of generalized inverses. In the finite-dimensional case, an important application of the Moore-Penrose inverse is the minimization of a Hermitian positive definite quadratic form $x^{t}x$, where $t$ denotes the transpose, under linear constraints. In particular, the weighted Moore-Penrose inverse plays a prominent role in indefinite linear least-squares problems [1,2].
Let $\mathbb{C}^{m\times n}$ denote the set of all $m\times n$ matrices with complex entries. For an arbitrary matrix $A\in\mathbb{C}^{m\times n}$ and two Hermitian positive definite matrices $M\in\mathbb{C}^{m\times m}$ and $N\in\mathbb{C}^{n\times n}$, there exists a unique matrix $S\in\mathbb{C}^{n\times m}$ satisfying the following properties:
$$1)\ ASA=A,\quad 2)\ SAS=S,\quad 3)\ (MAS)^{*}=MAS,\quad 4)\ (NSA)^{*}=NSA. \tag{1.1}$$
Then, $S$ is called the weighted Moore-Penrose inverse (WMPI) of $A$ with respect to the matrices $M$ and $N$; it is generally denoted by $A^{\dagger}_{MN}$. In particular, when $M$ and $N$ are the identity matrices of orders $m$ and $n$, respectively, $S$ is known as the Moore-Penrose inverse, denoted by $A^{\dagger}$, and the above relations reduce to the well-known Penrose equations [3,4]:
$$1)\ ASA=A,\quad 2)\ SAS=S,\quad 3)\ (AS)^{*}=AS,\quad 4)\ (SA)^{*}=SA. \tag{1.2}$$
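For later numerical verification it is convenient to test these defining conditions directly. The following Wolfram Language sketch checks the four weighted equations (1.1) up to a tolerance; taking M and N as identity matrices checks (1.2) instead. The function name weightedPenroseQ and the default tolerance are illustrative choices, not part of the original paper.

weightedPenroseQ[a_, s_, m_, n_, tol_: 10^-10] := Module[{r1, r2, r3, r4},
  r1 = Norm[a . s . a - a, "Frobenius"];                             (* 1) ASA = A *)
  r2 = Norm[s . a . s - s, "Frobenius"];                             (* 2) SAS = S *)
  r3 = Norm[ConjugateTranspose[m . a . s] - m . a . s, "Frobenius"]; (* 3) (MAS)* = MAS *)
  r4 = Norm[ConjugateTranspose[n . s . a] - n . s . a, "Frobenius"]; (* 4) (NSA)* = NSA *)
  Max[r1, r2, r3, r4] < tol
];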
The elementary technique for computing the WMPI of a matrix is based entirely on the weighted singular value decomposition [5], which takes the following form. Let $A\in\mathbb{C}^{m\times n}_{r}$, where $\mathbb{C}^{m\times n}_{r}$ denotes the set of complex $m\times n$ matrices of rank $r$. Then there exist matrices $P\in\mathbb{C}^{m\times m}$ and $Q\in\mathbb{C}^{n\times n}$ satisfying $P^{*}MP=I_{m}$ and $Q^{*}N^{-1}Q=I_{n}$ such that
$$A=P\begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix}Q^{*}, \tag{1.3}$$
where $D=\mathrm{diag}(\sigma_{1},\sigma_{2},\ldots,\sigma_{r})$, the $\sigma_{i}^{2}$ are the nonzero eigenvalues of the matrix $N^{-1}A^{*}MA$, and $\sigma_{1}\geq\sigma_{2}\geq\cdots\geq\sigma_{r}>0$. Then the WMPI $A^{\dagger}_{MN}$ of the matrix $A$ can be expressed as
$$A^{\dagger}_{MN}=N^{-1}Q\begin{pmatrix} D^{-1} & 0 \\ 0 & 0 \end{pmatrix}P^{*}M. \tag{1.4}$$
Note that, throughout this manuscript, the weighted conjugate transpose of the matrix $A$ is denoted by $A^{\#}$ and equals $N^{-1}A^{*}M$, whereas $A^{*}$ denotes the conjugate transpose of $A\in\mathbb{C}^{m\times n}$. Consequently, $(A^{\dagger}_{MN})^{\#}=M^{-1}(A^{\dagger}_{MN})^{*}N=M^{-1}(A^{*})^{\dagger}_{N^{-1}M^{-1}}N$ and $(AA^{\dagger}_{MN})^{\#}=P\begin{pmatrix} I_{r} & 0 \\ 0 & 0 \end{pmatrix}P^{-1}$ [6, p. 41]. Moreover, several further properties of $A^{\dagger}_{MN}$ hold; see [6].
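A direct reference computation of $A^{\dagger}_{MN}$ is also useful for checking the iterates produced later. One standard route is the identity $A^{\dagger}_{MN}=N^{-1/2}\bigl(M^{1/2}AN^{-1/2}\bigr)^{\dagger}M^{1/2}$, valid because $M$ and $N$ are Hermitian positive definite and therefore possess principal square roots. The Wolfram Language sketch below implements this identity; the name wmpInverse is ours and not taken from the paper.

wmpInverse[a_, m_, n_] := Module[{mh, nh},
  mh = MatrixPower[m, 1/2];     (* principal square root M^(1/2) *)
  nh = MatrixPower[n, -1/2];    (* principal root N^(-1/2) *)
  nh . PseudoInverse[mh . a . nh] . mh
];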
A diverse range of other methodologies has been presented in the literature to determine the WMPI of a matrix. For calculating the generalized inverse numerically, Greville's partitioning method was introduced in [7], and a new proof of Greville's method for the WMPI was given by Wang [8]. However, such methods involve many operations, so more rounding errors are accumulated; numerical techniques for finding the Moore-Penrose inverse often lack numerical stability [9]. Furthermore, Wang [10] obtained a comprehensive proof for the WMPI of a partitioned matrix in recursive form, and a partitioning method for the WMPI of multi-variable polynomial matrices was introduced in [9]. Moreover, new determinantal representations of weighted generalized inverses were presented in [11], a representation of the WMPI of a quaternion matrix was discussed by Kyrchei [12,13], and its explicit representation for two-sided restricted quaternionic matrix equations was investigated in [14].
The hyperpower iterative method was given by Altman [15] for inverting a linear bounded operator in Hilbert space, and its applicability for generating the Moore-Penrose inverse of a matrix was shown by Ben-Israel [16]. Several iterative methods fall into this category of hyperpower matrix iterations, and the general $p$th-order iterative method is written as follows:
$$S_{k+1}=S_{k}\bigl(I+R_{k}+R_{k}^{2}+\cdots+R_{k}^{p-1}\bigr), \tag{1.5}$$
where $R_{k}=I-AS_{k}$, $I$ is the identity matrix of order $m$, and $S_{0}$ is an initial approximation to the inverse of the input matrix $A$. This scheme is attractive because it is based entirely on matrix-matrix products, which can be implemented efficiently on parallel machines. In this approach, each $p$th-order iterative method is expressed in terms of the hyperpower series and requires $p$ matrix-matrix multiplications. For $p=2$, the iterative method (1.5) yields the well-known Newton-Schulz method (derived in [17,18]):
$$S_{k+1}=S_{k}(2I-AS_{k}),\qquad k=0,1,\ldots. \tag{1.6}$$
Although this method is quadratically convergent, has poly-logarithmic complexity, and is numerically stable (see [19]), the scheme (1.6) often shows slow convergence during the initial iterations, which increases the computational workload when calculating a matrix inverse.
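For reference, the general hyperpower step (1.5) can be organized in Horner form so that one iteration costs exactly p matrix-matrix products, with the Newton-Schulz step (1.6) recovered for p=2. The Wolfram Language sketch below illustrates this; the name hyperpowerStep is an illustrative choice.

hyperpowerStep[a_, s_, p_Integer /; p >= 2] := Module[{id, r, acc},
  id  = IdentityMatrix[Length[a]];   (* A.S_k is square of order m *)
  r   = id - a . s;                  (* product 1: R_k = I - A S_k *)
  acc = id + r;
  Do[acc = id + r . acc, {p - 2}];   (* p - 2 further products (Horner accumulation) *)
  s . acc                            (* final product: S_k (I + R_k + ... + R_k^(p-1)) *)
];

For instance, hyperpowerStep[a, s, 2] reproduces (1.6), while hyperpowerStep[a, s, 5] reproduces the fifth-order step discussed in Section 2.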
In 2017, H. Esmaeili and A. Pirnia [20] constructed a quadratically convergent iterative scheme as follows:
$$S_{k+1}=S_{k}\bigl(5.5I-AS_{k}(8I-3.5AS_{k})\bigr),\qquad k=0,1,\ldots. \tag{1.7}$$
For $p=3$, the hyperpower iterative method (1.5) becomes a cubically convergent method, which can also be derived from the Chebyshev scheme [21]:
$$S_{k+1}=S_{k}\bigl(3I-AS_{k}(3I-AS_{k})\bigr), \tag{1.8}$$
which was investigated by Li et al. [21]. Along with this, they developed two other third-order iterative methods, derived from the mid-point rule method [22] and Homeier's method [22], given as
$$S_{k+1}=\Bigl(I+\tfrac{1}{4}(I-S_{k}A)(3I-S_{k}A)^{2}\Bigr)S_{k}, \tag{1.9}$$
and
$$S_{k+1}=S_{k}\Bigl(I+\tfrac{1}{2}(I-AS_{k})\bigl(I+(2I-AS_{k})^{2}\bigr)\Bigr), \tag{1.10}$$
respectively. In 2017, a fourth-order scheme was presented by Esmaeili et al. [23], as shown below:
$$S_{k+1}=S_{k}\bigl(9I-26(AS_{k})+34(AS_{k})^{2}-21(AS_{k})^{3}+5(AS_{k})^{4}\bigr). \tag{1.11}$$
Toutounian and Soleymani [24] proposed the following fourth-order method:
$$S_{k+1}=\tfrac{1}{2}S_{k}\Bigl(9I-AS_{k}\bigl(16I-AS_{k}(14I-AS_{k}(6I-AS_{k}))\bigr)\Bigr). \tag{1.12}$$
In general, such schemes can be extracted from Eq. (1.5). Thus, for $p=4$ one obtains the fourth-order iterative scheme
$$S_{k+1}=S_{k}\bigl(4I-6AS_{k}+4(AS_{k})^{2}-(AS_{k})^{3}\bigr), \tag{1.13}$$
which uses four matrix-matrix multiplications. In 2013, Soleymani [25] demonstrated a fifth-order iterative method that uses six matrix multiplications at each step; we refer to this scheme as (1.14).
In this paper, we investigate a fifth-order convergent iterative method for computing the weighted Moore-Penrose inverse. We focus on one of the major factors of computational cost, paying close attention to reducing the computation time. In addition, a theoretical study is carried out to justify the method's ability to find the weighted Moore-Penrose inverse of any matrix. The aim of the presented work is also supported by its numerical performance.
2. Iterative method
Our aim is to derive, with the help of Eq. (1.5), a fifth-order iterative method for finding the weighted Moore-Penrose inverse of a matrix $A$ that uses fewer matrix multiplications than required by (1.5). The hyperpower iteration for $p=5$ can be written as
$$S_{k+1}=S_{k}\,\Phi(AS_{k}), \tag{2.1}$$
where $\Phi(AS_{k})=5I-10AS_{k}+10(AS_{k})^{2}-5(AS_{k})^{3}+(AS_{k})^{4}$. The iterative method (2.1) requires five matrix multiplications per step. Its computational time can be reduced by decreasing the number of matrix-matrix multiplications at each step; to this end, we re-formulate the scheme (2.1) into the form denoted by (2.2).
This new iterative method (2.2) computes the generalized inverse of any matrix, and it can easily be seen that it uses only four matrix-matrix multiplications at every step.
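As an illustration of how four products suffice, the following Wolfram Language sketch carries out the fifth-order step (2.1) using the factorization $\Phi(X)=(I+R+R^{2})(I+R^{2})-R^{2}$ with $R=I-X$ and $X=AS_{k}$; this particular re-arrangement is one possibility and need not coincide with the exact transformation used in (2.2).

fifthOrderStep[a_, s_] := Module[{id, x, r, r2, t},
  id = IdentityMatrix[Length[a]];
  x  = a . s;                       (* product 1 *)
  r  = id - x;
  r2 = r . r;                       (* product 2 *)
  t  = (id + r + r2) . (id + r2);   (* product 3 *)
  s . (t - r2)                      (* product 4: S_k times Phi(A S_k) *)
];

Since $(I+R+R^{2})(I+R^{2})-R^{2}=I+R+R^{2}+R^{3}+R^{4}=\Phi(AS_{k})$, one full step costs exactly four matrix products, matching the operation count claimed for (2.2).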
In the next section, we prove theoretically that the order of convergence of the above scheme is five and that it is applicable for generating the weighted Moore-Penrose inverse.
3. Weighted Moore-Penrose inverse (WMPI)
Lemma 3.1. For the approximate sequence $\{S_{k}\}_{k=0}^{\infty}$ generated by the iterative method (2.2) with the initial matrix $S_{0}=\delta A^{\#}$, the following relations hold for every $k\geq 0$:
(a) $S_{k}AA^{\dagger}_{MN}=S_{k}$, (b) $A^{\dagger}_{MN}AS_{k}=S_{k}$, (c) $(MAS_{k})^{*}=MAS_{k}$, (d) $(NS_{k}A)^{*}=NS_{k}A$.
Proof. We prove Eq. (a) by induction on $k$. For $k=0$, the relation follows from $S_{0}=\delta A^{\#}=\delta N^{-1}A^{*}M$ together with the defining equations (1.1).
Further, we assume that the result holds for k, i.e.,
$$S_{k}AA^{\dagger}_{MN}=S_{k}\quad\text{or}\quad AS_{k}AA^{\dagger}_{MN}=AS_{k}. \tag{3.2}$$
Now, we will prove that Eq. (a) continues to hold for $k+1$, i.e., $S_{k+1}AA^{\dagger}_{MN}=S_{k+1}$. Considering its left-hand side and using the iterative scheme (2.2), we get
Hence, by the principle of mathematical induction, Eq. (a) holds for all $k\in W$, where $W=\{0,1,2,3,\ldots\}$. Now, the third equation (c) of this lemma can easily be verified for $k=0$. Assume that the result is true for $k$, i.e., $(MAS_{k})^{*}=MAS_{k}$. Next, we will show that the result holds for $k+1$. Using the iterative scheme (2.2),
Thus, the third equality holds for $k+1$. The second and the fourth equations (i.e., (b) and (d)) can be proved analogously. Hence, the proof is completed.
Let $A$ be a complex matrix of order $m\times n$ with rank $r$, and let $M$ and $N$ be the Hermitian positive definite weight matrices. Assume that $P\in\mathbb{C}^{m\times m}$ and $Q\in\mathbb{C}^{n\times n}$ satisfy $P^{*}MP=I_{m}$ and $Q^{*}N^{-1}Q=I_{n}$. Then, the weighted singular value decomposition of the matrix $A$ can be expressed by Eq. (1.3).
Lemma 3.2. Under the conditions of Lemma 3.1, every approximate inverse produced by the iterative scheme (2.2) satisfies
$$\Theta_{k}=(Q^{-1}N)\,S_{k}\,(M^{-1}(P^{*})^{-1})=\begin{pmatrix} T_{k} & 0 \\ 0 & 0 \end{pmatrix}, \tag{3.5}$$
where $T_{k}$ is the diagonal matrix given by
$$T_{k}=\begin{cases} T_{k-1}\,\Phi(DT_{k-1}), & k\geq 1,\\ \delta D, & k=0. \end{cases} \tag{3.6}$$
Here, D denotes the diagonal matrix of order r.
Proof. We prove this lemma by mathematical induction on $k$. For $k=0$, we have
Theorem 3.1. For a complex matrix $A\in\mathbb{C}^{m\times n}$, the sequence $\{S_{k}\}_{k=0}^{\infty}$ generated by (2.2) with the initial matrix $S_{0}=\delta A^{\#}$ converges to $A^{\dagger}_{MN}$ with at least fifth order of convergence.
Proof. In view of the iterative scheme (2.2), to establish this result we must show that
$$\lim_{k\to\infty}(Q^{-1}N)\,S_{k}\,(M^{-1}(P^{*})^{-1})=\begin{pmatrix} D^{-1} & 0 \\ 0 & 0 \end{pmatrix}. \tag{3.9}$$
It follows from Lemma 3.2 that
$$T_{k}=\mathrm{diag}\bigl(t_{1}^{(k)},t_{2}^{(k)},\ldots,t_{r}^{(k)}\bigr),\qquad\text{with } t_{i}^{(0)}=\delta\xi_{i}, \tag{3.10}$$
and
$$t_{i}^{(k+1)}=t_{i}^{(k)}\Bigl(5-10\,\xi_{i}t_{i}^{(k)}+10\,(\xi_{i}t_{i}^{(k)})^{2}-5\,(\xi_{i}t_{i}^{(k)})^{3}+(\xi_{i}t_{i}^{(k)})^{4}\Bigr). \tag{3.11}$$
The sequence generated by Eq. (3.11) is exactly what results from applying the iterative scheme (2.2) to compute the zero $\xi_{i}^{-1}$ of the function $f(t)=\xi_{i}-t^{-1}$ with the initial guess $t_{i}^{(0)}$. It can be seen that this iteration converges to $\xi_{i}^{-1}$ provided $0<t_{i}^{(0)}<2/\xi_{i}$, which leads to the condition on $\delta$ (and thus justifies the choice of the initial guess). Thus, $T_{k}\to D^{-1}$ and relation (3.9) is satisfied, which proves that the iterative method (2.2) converges to the weighted inverse $A^{\dagger}_{MN}$. Now, we show that the sequence $\{S_{k}\}_{k=0}^{\infty}$ converges with fifth order. For this, note that
$$S_{k}A=N^{-1}Q\begin{pmatrix} T_{k} & 0 \\ 0 & 0 \end{pmatrix}P^{*}MP\begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix}Q^{*}. \tag{3.12}$$
Since $P^{*}MP=I_{m}$ and $Q^{*}N^{-1}Q=I_{n}$, we have $(Q^{*})^{-1}=N^{-1}Q$. Consequently,
$$S_{k}A=(Q^{*})^{-1}\begin{pmatrix} T_{k}D & 0 \\ 0 & 0 \end{pmatrix}Q^{*}=(Q^{*})^{-1}\begin{pmatrix} E_{k} & 0 \\ 0 & 0 \end{pmatrix}Q^{*}, \tag{3.13}$$
where $E_{k}=T_{k}D=\mathrm{diag}\bigl(\beta_{1}^{(k)},\beta_{2}^{(k)},\ldots,\beta_{r}^{(k)}\bigr)$. This yields
$$S_{k+1}A=N^{-1}Q\begin{pmatrix} T_{k}\Phi(DT_{k}) & 0 \\ 0 & 0 \end{pmatrix}P^{*}MP\begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix}Q^{*}, \tag{3.14}$$
therefore
$$S_{k+1}A=(Q^{*})^{-1}\begin{pmatrix} T_{k}D\,\Phi(DT_{k}) & 0 \\ 0 & 0 \end{pmatrix}Q^{*}. \tag{3.15}$$
Eqs. (3.13) and (3.15) imply $E_{k+1}=E_{k}\Phi(E_{k})$. Expanding $E_{k}\Phi(E_{k})=5E_{k}-10E_{k}^{2}+10E_{k}^{3}-5E_{k}^{4}+E_{k}^{5}$ and simplifying, we obtain
$$I-E_{k+1}=(I-E_{k})^{5}, \tag{3.16}$$
and thus, for all $j$ with $1\leq j\leq r$, we have $(1-\beta_{j}^{(k+1)})\leq(1-\beta_{j}^{(k)})^{5}$. This shows at least fifth-order convergence of the method (2.2) for finding the WMPI, which completes the proof.
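The scalar recursion (3.11) also gives a quick numerical illustration of the fifth-order behavior just established. The following Wolfram Language snippet (with sample values $\xi=3$ and $t^{(0)}=0.1$, both illustrative) iterates the map $t\mapsto t\bigl(5-10\xi t+10(\xi t)^{2}-5(\xi t)^{3}+(\xi t)^{4}\bigr)$ and records the residuals $|1-\xi t|$, each of which is the fifth power of the previous one, in accordance with (3.16).

Module[{xi = SetPrecision[3, 200], t = SetPrecision[1/10, 200], res = {}},
  Do[
    t = t (5 - 10 xi t + 10 (xi t)^2 - 5 (xi t)^3 + (xi t)^4);
    AppendTo[res, Abs[1 - xi t]],
    {4}];
  res   (* each residual equals the fifth power of the preceding one *)
]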
4. Numerical results and discussions
The purpose of this section is to confirm the theoretical results through numerical testing. To this end, the proposed strategy is compared with existing schemes on practical and academic models. The outcomes are computed using the Mathematica software, since numerical calculations can be accomplished with high accuracy and its programming language supports symbolic calculations and exact arithmetic. Mathematica 11 was used on a machine with an Intel(R) Core(TM) i7-8565U CPU @ 1.89GHz and a 64-bit operating system (Windows 10 Pro). The comparisons are made in terms of the total number of matrix multiplications (TMM), the actual running time (T) in seconds, and the computational order of convergence (ρ). For calculating ρ,
the last three approximations $S_{k-1}$, $S_{k}$, $S_{k+1}$ are used; here, $\|\cdot\|_{*}$ denotes a generic matrix norm. For comparison purposes, the proposed scheme PM5 (2.2) is compared with the methods proposed by Schulz (1.6), Esmaeili and Pirnia (1.7), Li et al. ((1.8), (1.9), and (1.10)), Esmaeili et al. (1.11), Toutounian and Soleymani (1.12), Li et al. (1.13), and Soleymani (1.14), denoted by SM2, EM2, CM3, MP3, HM3, EM4, TM4, LM4, and SO5, respectively.
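A commonly used estimate of ρ based on these three approximations is ρ ≈ ln‖S_{k+1}−S_k‖ / ln‖S_k−S_{k−1}‖; the illustrative Wolfram Language definition below implements this estimate (taking this particular formula is an assumption on our part).

cocEstimate[sPrev_, sCur_, sNext_] :=
  Log[Norm[sNext - sCur, "Frobenius"]]/Log[Norm[sCur - sPrev, "Frobenius"]];
(* any other matrix norm may be substituted for the Frobenius norm *)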
Example 1. (Academic problem) Consider the rank-deficient matrix
A=[123411346223453345644567666778].
(4.2)
The comparison for this test problem is carried out using the initial guess $S_{0}=\frac{1}{\sigma_{\min}^{2}+\sigma_{\max}^{2}}A$ [26], where $\sigma_{\min}$ and $\sigma_{\max}$ are bounds on the singular values of $A$. Moreover, the stopping criterion $\|S_{k+1}-S_{k}\|<10^{-100}$ is used for finding a good approximation. From Table 1, we can observe that the presented method attains the desired result with fewer matrix multiplications than the existing schemes, and in less time. The presented scheme is therefore more efficient for this rank-deficient matrix, since it shows better outcomes in every aspect, whereas some of the techniques are not able to determine the solution for this test problem. A generic driver for such experiments is sketched after Table 1.
Table 1.
Outcomes for comparison by testing schemes on Example 1.
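For reproducibility, a typical test driver follows the pattern sketched below in Wolfram Language: start from a prescribed initial guess, repeat a one-step map (for instance the illustrative fifthOrderStep sketch from Section 2), and stop once successive iterates differ by less than the prescribed tolerance. The name runTest and the default values are our illustrative choices.

runTest[step_, a_, s0_, tol_: 10^-100, maxIter_: 100] :=
  Module[{s = s0, sNew, k = 0, done = False},
    (* exact or high-precision input is needed for tolerances as small as 10^-100 *)
    While[k < maxIter && ! done,
      sNew = step[a, s]; k++;
      If[Norm[sNew - s, "Frobenius"] < tol, done = True];
      s = sNew];
    {s, k}   (* approximate generalized inverse and number of iterations *)
  ];

For Example 1 one would call, e.g., runTest[fifthOrderStep, A, S0] with S0 built from the singular-value bound quoted above.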
Example 2. Consider the following elliptic partial differential equation:
$$\frac{\partial^{2}\phi}{\partial x^{2}}+\frac{\partial^{2}\phi}{\partial y^{2}}=32\,\phi(x,y), \tag{4.3}$$
where $\phi$ is a function of $x$ and $y$. It is satisfied at every point inside the square formed by $x=\pm 1$, $y=\pm 1$, subject to the following boundary conditions:
(i) $\phi(x,y)=0$ on $y=1$, $-1\leq x\leq 1$,
(ii) $\phi(x,y)=1$ on $y=-1$, $-1\leq x\leq 1$,
(iii) $\frac{\partial\phi}{\partial x}=-\frac{1}{2}\,\phi(x,y)$ on $x=1$, $-1\leq y\leq 1$,
(iv) $\frac{\partial\phi}{\partial x}=\frac{1}{2}\,\phi(x,y)$ on $x=-1$, $-1\leq y\leq 1$.
By using the central difference formulae on Eq. (4.3), one can obtain
$$\phi_{i+1,j}+\phi_{i-1,j}+\phi_{i,j+1}+\phi_{i,j-1}-\bigl(4+32h^{2}\bigr)\phi_{i,j}=0, \tag{4.4}$$
here $\phi_{i,j}=\phi(x_{i},y_{j})$. Consider the square mesh size $h=\frac{1}{4}$, which yields seventy finite difference equations for the approximate solution $\phi(x_{i},y_{j})$. From the boundary conditions one can easily see that the function $\phi$ is symmetric about the y-axis. Finally, implementing the boundary conditions in (4.4), we obtain the linear system $A\phi=u$ in thirty-five unknowns, where $A$ is the $35\times 35$ block tridiagonal matrix with seven diagonal blocks $Y$ and identity blocks $I$ of order $5\times 5$ on the sub- and super-diagonals,
$$Y=\begin{pmatrix} -6 & 2 & 0 & 0 & 0\\ 1 & -6 & 1 & 0 & 0\\ 0 & 1 & -6 & 1 & 0\\ 0 & 0 & 1 & -6 & 1\\ 0 & 0 & 0 & 2 & -\tfrac{25}{4}\end{pmatrix},$$
and $\phi$ and $u$ are the column vectors whose transposes are $(\phi_{1},\phi_{2},\ldots,\phi_{34},\phi_{35})$ and $(0,0,\ldots,0,-1,-1,-1,-1,-1)$, respectively. To handle the large sparse array, the SparseArray and Band functions are applied to save memory and reduce the computational burden of the matrix multiplications; one such construction is sketched below.
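This illustrative Wolfram Language assembly uses Band for the off-diagonal block pattern and KroneckerProduct to lay out the blocks; it need not match the authors' own implementation, and all names are ours.

y   = SparseArray[{{-6, 2, 0, 0, 0}, {1, -6, 1, 0, 0}, {0, 1, -6, 1, 0},
                   {0, 0, 1, -6, 1}, {0, 0, 0, 2, -25/4}}];
tri = SparseArray[{Band[{1, 2}] -> 1, Band[{2, 1}] -> 1}, {7, 7}];    (* block pattern *)
a   = KroneckerProduct[SparseArray[IdentityMatrix[7]], y] +
      KroneckerProduct[tri, SparseArray[IdentityMatrix[5]]];          (* 35 x 35 matrix A *)
u   = Join[ConstantArray[0, 30], ConstantArray[-1, 5]];               (* right-hand side *)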
The initial guess for this test problem is taken as $S_{0}=\frac{1}{\|A\|_{1}\|A\|_{\infty}}A^{*}$ [26], where $\|A\|_{1}=\max_{j}\bigl(\sum_{i=1}^{m}|a_{i,j}|\bigr)$ and $\|A\|_{\infty}=\max_{i}\bigl(\sum_{j=1}^{n}|a_{i,j}|\bigr)$. The results of the comparisons are shown in Table 2. The methods LM4 and PM5 perform better on this example than the other methods, as they use the minimum computational cost (i.e., matrix products); moreover, PM5 yields the result faster. Overall, this demonstrates that the presented method converges faster than its competitors.
Table 2.
Outcomes for comparison by testing schemes on Example 2.
The results are drawn using the initial guess $S_{0}=\frac{1}{\sigma_{\min}^{2}+\sigma_{\max}^{2}}A^{\#}$ [26] and the stopping criterion $\|S_{k+1}-S_{k}\|<10^{-12}$. The comparisons for this problem are listed in Table 3, which shows that PM5 is more efficient than the other existing methods in every aspect.
Table 3.
Outcomes for comparison by testing schemes on Example 3.
Example 4. Consider ill-conditioned Hilbert matrices of different orders for computing the Moore-Penrose inverse:
A = Table[1/(i + j - 1), {i, m}, {j, n}].   (4.6)
The comparison is obtained with the initial approximation $S_{0}=\frac{1}{\sigma_{\min}^{2}+\sigma_{\max}^{2}}A^{T}$ [26] and the stopping criterion $\|S_{k+1}-S_{k}\|\leq 10^{-20}$. The results are listed in Table 4 for matrices of various orders. It can be concluded that the PM5 method gives the desired result faster than the other methods, while some methods fail on this test problem. Moreover, PM5 outperforms the others, using the minimum number of matrix multiplications for each matrix order. Hence, this justifies the aim of this paper. An illustrative high-precision check for a small instance is sketched after Table 4.
Table 4.
Outcomes for comparison by testing schemes on Example 4.
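Because Hilbert matrices are severely ill-conditioned, reaching tolerances such as 10^{-20} requires exact or high-precision arithmetic. The Wolfram Language snippet below is an illustrative check for one small instance: it builds the Hilbert block exactly, runs a fixed number of fifth-order steps in the unfactored form (2.1), and compares the result with Mathematica's built-in PseudoInverse; the sizes, working precision, and iteration count are sample choices.

m = 6; n = 4;
a  = Table[1/(i + j - 1), {i, m}, {j, n}];                 (* exact Hilbert block *)
sv = SingularValueList[N[a, 150]];
s  = Transpose[a]/(Min[sv]^2 + Max[sv]^2);                 (* initial approximation, cf. [26] *)
id = IdentityMatrix[m];
Do[With[{x = a . s},
    s = s . (5 id - 10 x + 10 x . x - 5 x . x . x + x . x . x . x)], {25}];
N[Norm[s - PseudoInverse[a], "Frobenius"], 10]             (* agreement up to the working precision *)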
5. Conclusions
In this manuscript, we have established a new formulation of the fifth-order hyperpower method to compute the weighted Moore-Penrose inverse. Compared with the standard hyperpower method, this new formulation has an improved efficiency index from a theoretical perspective. Such approximations of $A^{\dagger}_{MN}$ are found to be robust and effective when used as preconditioners for solving linear systems. Further, a wide range of practical and academic tests is performed to assess the consistency and effectiveness of the proposed iterative scheme. The outcomes for each test problem show that the presented method gives the desired result with the least number of matrix multiplications in the minimum computational time. Hence, this supports our new transformations of the fifth-order hyperpower iterative scheme.
Acknowledgments
The authors wish to thank the anonymous reviewers for their careful reading and valuable comments, which improved the quality of the paper.
Conflict of interest
The authors declare no conflict of interest.
References
[1]
S. Chandrasekaran, M. Gu, A. H. Sayed, A stable and efficient algorithm for the indefinite linear least-squares problem, SIAM J. Matrix Anal. Appl., 20 (1998), 354-362. doi: 10.1137/S0895479896302229
[2]
S. F. Wang, B. Zheng, Z. P. Xiong, et al. The condition numbers for weighted Moore-Penrose inverse and weighted linear least squares problem, Appl. Math. Comput., 215 (2009), 197-205.
[3]
R. Penrose, A generalized inverse for matrices, In: Mathematical Proceedings of the Cambridge Philosophical Society, 51 (1955), 406-413. doi: 10.1017/S0305004100030401
[4]
R. Penrose, On best approximate solutions of linear matrix equations, In: Mathematical Proceedings of the Cambridge Philosophical Society, 52 (1956), 17-19. doi: 10.1017/S0305004100030929
[5]
C. F. Van Loan, Generalizing the singular value decomposition, SIAM J. Numer. Anal., 13 (1976), 76-83. doi: 10.1137/0713009
[6]
G. Wang, Y. Wei, S. Qiao, et al. Generalized Inverses: Theory and Computations, Springer, 2018.
[7]
T. N. E. Greville, Some applications of the pseudoinverse of a matrix, SIAM Rev., 2 (1960), 15-22. doi: 10.1137/1002004
[8]
G. R. Wang, A new proof of Greville's method for computing the weighted MP inverse, J. Shanghai Normal Univ. (Nat. Sci. Ed.), 1985.
[9]
M. D. Petković, P. S. Stanimirović, M. B. Tasić, Effective partitioning method for computing weighted Moore-Penrose inverse, Comput. Math. Appl., 55 (2008), 1720-1734. doi: 10.1016/j.camwa.2007.07.014
[10]
G. Wang, B. Zheng, The weighted generalized inverses of a partitioned matrix, Appl. Math. Comput., 155 (2004), 221-233.
[11]
X. Liu, Y. Yu, H. Wang, Determinantal representation of weighted generalized inverses, Appl. Math. Comput., 208 (2009), 556-563.
[12]
I. Kyrchei, Weighted singular value decomposition and determinantal representations of the quaternion weighted Moore-Penrose inverse, Appl. Math. Comput., 309 (2017), 1-16. doi: 10.1016/j.cam.2016.06.022
[13]
I. Kyrchei, Determinantal representations of the quaternion weighted Moore-Penrose inverse and its applications, In: A. R. Baswell, Editor, Advances in Mathematics Research, Nova Science Publishers, New York, 23 (2017), 35-96.
[14]
I. Kyrchei, Explicit determinantal representation formulas for the solution of the two-sided restricted quaternionic matrix equation, J. Appl. Math. Comput., 58 (2018), 335-365. doi: 10.1007/s12190-017-1148-6
[15]
M. Altman, An optimum cubically convergent iterative method of inverting a linear bounded operator in Hilbert space, Pacific J. Math., 10 (1960), 1107-1113. doi: 10.2140/pjm.1960.10.1107
[16]
A. Ben-Israel, A note on an iterative method for generalized inversion of matrices, Math. Comput., 20 (1966), 439-440. doi: 10.1090/S0025-5718-66-99922-4
[17]
H. Hotelling, Some new methods in matrix calculation, Ann. Math. Statist., 14 (1943), 1-34. doi: 10.1214/aoms/1177731489
[18]
G. Schulz, Iterative berechung der reziproken matrix, Z. Angew. Math. Mech., 13 (1933), 57-59. doi: 10.1002/zamm.19330130111
[19]
T. Söderström, G. W. Stewart, On the numerical properties of an iterative method for computing the Moore-Penrose generalized inverse, SIAM J. Numer. Anal., 11 (1974), 61-74. doi: 10.1137/0711008
[20]
H. Esmaeili, A. Pirnia, An efficient quadratically convergent iterative method to find the Moore-Penrose inverse, Int. J. Comput. Math., 94 (2017), 1079-1088. doi: 10.1080/00207160.2016.1167883
[21]
H. B. Li, T. Z. Huang, Y. Zhang, et al. Chebyshev-type methods and preconditioning techniques, Appl. Math. Comput., 218 (2011), 260-270.
[22]
C. Chun, A geometric construction of iterative functions of order three to solve nonlinear equations, Comput. Math. Appl., 53 (2007), 972-976. doi: 10.1016/j.camwa.2007.01.007
[23]
H. Esmaeili, R. Erfanifar, M. Rashidi, A fourth-order iterative method for computing the Moore-Penrose inverse, J. Hyperstruct., 6 (2017), 52-67.
[24]
F. Toutounian, F. Soleymani, An iterative method for computing the approximate inverse of a square matrix and the Moore-Penrose inverse of a non-square matrix, Appl. Math. Comput., 224 (2013), 671-680.
[25]
F. Soleymani, On finding robust approximate inverses for large sparse matrices, Linear Multilinear A., 62 (2014), 1314-1334. doi: 10.1080/03081087.2013.825910
[26]
V. Pan, R. Schreiber, An improved Newton iteration for the generalized inverse of a matrix, with applications, SIAM J. Sci. Stat. Comput., 12 (1991), 1109-1130. doi: 10.1137/0912058