Citation: ShiLiang Wu, CuiXia Li. A special shift splitting iteration method for absolute value equation[J]. AIMS Mathematics, 2020, 5(5): 5171-5183. doi: 10.3934/math.2020332
Abstract
In this paper, based on the shift splitting (SS) technique, we propose a special SS iteration method for solving the absolute value equation (AVE), obtained by equivalently reformulating the AVE as a two-by-two block nonlinear equation. Theoretical analysis shows that the special SS method is unconditionally convergent under proper assumptions. Numerical experiments demonstrate the feasibility, robustness and effectiveness of the special SS method.
1.
Introduction
Consider the absolute value equation (AVE)

Ax - |x| = b,

(1.1)

where A ∈ R^{n×n}, b ∈ R^n, and |x| denotes the componentwise absolute value of x. The AVE (1.1) is an important tool that arises frequently in scientific and engineering computing, for example in linear programming, bimatrix games, quadratic programming, and the quasi-complementarity problems [1,2,3,4,5].
In recent years, a large number of efficient iteration methods have been proposed for the numerical solution of the AVE (1.1), including the successive linearization method [2], the Picard and Picard-HSS methods [7,9], the nonlinear HSS-like iteration method [15], the sign accord method [6], the hybrid algorithm [12], and the generalized Newton (GN) method [11]. For other iteration methods, see [8,9,10,13] and the references therein.
Recently, Ke and Ma [18] extended the classical SOR-like iteration method of [19] to the AVE (1.1) and proposed the SOR-like iteration method (1.2), with parameter α > 0 and \hat{D} = D(x) = diag(sign(x)), x ∈ R^n, where sign(x) denotes the vector whose components equal 1, 0, or -1 depending on whether the corresponding component of x is positive, zero, or negative. The SOR-like iteration method (1.2) is obtained by applying the idea of the SOR-like method in [19] to the reformulation of the AVE (1.1) as the two-by-two block nonlinear equation
Ax - y = b,
-|x| + y = 0,

or

\bar{A}z = \begin{bmatrix} A & -I \\ -\hat{D} & I \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} b \\ 0 \end{bmatrix} = \bar{b}.

(1.3)
The convergence of the SOR-like iteration method was established in [18] under assumptions on the involved iteration parameter. Further, in [20], some new convergence conditions were obtained from the iteration matrix of the SOR-like iteration method.
When the term |x| vanishes, the AVE (1.1) reduces to the linear system
Ax=b.
(1.4)
In [14], Bai et al. proposed the following shift splitting (SS) of the matrix A, that is,
A = \frac{1}{2}(\alpha I + A) - \frac{1}{2}(\alpha I - A), \quad \alpha > 0,
and designed the shift splitting (SS) scheme
x^{(k+1)} = (\alpha I + A)^{-1}(\alpha I - A)x^{(k)} + 2(\alpha I + A)^{-1}b, \quad k = 0, 1, 2, \ldots
for solving the non-Hermitian positive definite linear system (1.4). In this paper, we generalize this idea and propose the special SS iteration method for the two-by-two block nonlinear Eq. (1.3) to solve the AVE (1.1).
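As an illustration, the SS scheme for the linear system (1.4) can be sketched in Python with NumPy (the test matrix, right-hand side, and the choice α = 1 below are hypothetical, not taken from the paper):

```python
import numpy as np

# Hypothetical small symmetric positive definite test matrix and right-hand side.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 5.0]])
b = np.array([1.0, 2.0, 3.0])
alpha = 1.0

# Shift splitting: A = (1/2)(alpha*I + A) - (1/2)(alpha*I - A).
n = A.shape[0]
I = np.eye(n)
M = alpha * I + A          # factor whose linear systems are solved each step
N = alpha * I - A

x = np.zeros(n)
for k in range(500):
    # x^{(k+1)} = (alpha*I + A)^{-1}(alpha*I - A) x^{(k)} + 2(alpha*I + A)^{-1} b
    x = np.linalg.solve(M, N @ x + 2.0 * b)
    if np.linalg.norm(A @ x - b) <= 1e-10 * np.linalg.norm(b):
        break

print(np.allclose(A @ x, b, atol=1e-8))
```

Here the iteration matrix is (αI + A)^{-1}(αI - A), whose spectral radius is below one for any α > 0 when A is positive definite, so the iterates converge to A^{-1}b.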
The remainder of the paper is organized as follows. In Section 2, the special SS iteration method is introduced to solve the AVE (1.1) and the convergence analysis of the special SS iteration method is studied in detail. In Section 3, numerical experiments are reported to show the effectiveness of the proposed method. In Section 4, some conclusions are given to end the paper.
2.
The special SS iteration method
In this section, we introduce the special SS iteration method to solve the AVE (1.1). To this end, based on the iteration method studied in [14], a special shift splitting (SS) of the matrix \bar{A} in (1.3) can be constructed as follows:

\bar{A} = \begin{bmatrix} A & -I \\ -\hat{D} & I \end{bmatrix} = \frac{1}{2}\begin{bmatrix} \alpha I + A & -I \\ -\hat{D} & 2I \end{bmatrix} - \frac{1}{2}\begin{bmatrix} \alpha I - A & I \\ \hat{D} & 0 \end{bmatrix},

where α is a given positive constant and I is the identity matrix. This splitting naturally leads to the special SS iteration method for solving the nonlinear Eq. (1.3), which is described below.
The special SS iteration method: Let A ∈ R^{n×n} be nonsingular and b ∈ R^n. Given initial vectors x^{(0)} ∈ R^n and y^{(0)} ∈ R^n, for k = 0, 1, 2, ... until the iteration sequence \{(x^{(k)}, y^{(k)})\}_{k=0}^{+\infty} converges, compute

\begin{bmatrix} \alpha I + A & -I \\ -\hat{D} & 2I \end{bmatrix}\begin{bmatrix} x^{(k+1)} \\ y^{(k+1)} \end{bmatrix} = \begin{bmatrix} \alpha I - A & I \\ \hat{D} & 0 \end{bmatrix}\begin{bmatrix} x^{(k)} \\ y^{(k)} \end{bmatrix} + 2\begin{bmatrix} b \\ 0 \end{bmatrix},

(2.1)

where \hat{D} = diag(sign(x^{(k)})).
Clearly, the iteration matrix Mα of the special SS method is
M_\alpha = \begin{bmatrix} \alpha I + A & -I \\ -\hat{D} & 2I \end{bmatrix}^{-1}\begin{bmatrix} \alpha I - A & I \\ \hat{D} & 0 \end{bmatrix}.
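A minimal Python/NumPy sketch of one way to implement the special SS sweep is given below; the test problem and the parameter α are hypothetical choices, and \hat{D} is rebuilt from the current iterate at every sweep:

```python
import numpy as np

def special_ss(A, b, alpha=3.0, tol=1e-6, maxit=500):
    """Sketch of the special SS iteration for Ax - |x| = b, based on the
    shift splitting of the two-by-two block matrix Abar."""
    n = A.shape[0]
    I = np.eye(n)
    x = np.zeros(n)
    y = np.zeros(n)
    for k in range(maxit):
        D = np.diag(np.sign(x))  # D_hat = diag(sign(x^{(k)}))
        # Left and right factors of the special shift splitting of Abar.
        M = np.block([[alpha * I + A, -I], [-D, 2 * I]])
        N = np.block([[alpha * I - A, I], [D, np.zeros((n, n))]])
        rhs = N @ np.concatenate([x, y]) + 2 * np.concatenate([b, np.zeros(n)])
        z = np.linalg.solve(M, rhs)
        x, y = z[:n], z[n:]
        res = np.linalg.norm(A @ x - np.abs(x) - b) / np.linalg.norm(b)
        if res <= tol:
            break
    return x, k + 1

# Hypothetical test problem with sigma_min(A) > 1, so the AVE is uniquely solvable.
A = np.array([[4.0, -1.0], [-1.0, 4.0]])
x_true = np.array([1.0, -2.0])
b = A @ x_true - np.abs(x_true)
x, its = special_ss(A, b)
print(np.allclose(x, x_true, atol=1e-4))
```

In a serious implementation one would factor the left coefficient matrix once per sign pattern (e.g., by sparse LU) instead of forming and solving the full block system at every sweep.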
Let ρ(Mα) denote the spectral radius of matrix Mα. Then the special SS iteration method is convergent if and only if ρ(Mα)<1. Let λ be an eigenvalue of matrix Mα and [x,y]T be the corresponding eigenvector. Then
\begin{bmatrix} \alpha I - A & I \\ \hat{D} & 0 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \lambda \begin{bmatrix} \alpha I + A & -I \\ -\hat{D} & 2I \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix},
which is equivalent to
(λ−1)αx+(λ+1)Ax−(λ+1)y=0,
(2.2)
(1 + \lambda)\hat{D}x - 2\lambda y = 0.
(2.3)
To study the convergence conditions of the special SS iteration method, we first give some lemmas.
Lemma 2.1. [16] The AVE (1.1) has a unique solution for any b ∈ R^n if and only if the matrix A - I + 2D or A + I - 2D is nonsingular for every diagonal matrix D = diag(d_i) with 0 ≤ d_i ≤ 1.
Based on Lemma 2.1, it is easy to see that if the matrix A in the AVE (1.1) satisfies σ_min(A) > 1, where σ_min(A) denotes the smallest singular value of A, then the AVE (1.1) has a unique solution for any b ∈ R^n. One can see [16] for more details.
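This solvability condition is easy to check numerically, for instance with NumPy's SVD (a hypothetical test matrix):

```python
import numpy as np

# Unique-solvability check for the AVE: sigma_min(A) > 1 (cf. Lemma 2.1 and [16]).
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
sigma_min = np.linalg.svd(A, compute_uv=False).min()
print(sigma_min > 1.0)  # True: Ax - |x| = b is uniquely solvable for any b
```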
Lemma 2.2. [17] Consider the real quadratic equation x^2 - bx + d = 0, where b and d are real numbers. Both roots of the equation are less than one in modulus if and only if |d| < 1 and |b| < 1 + d.
Lemma 2.3. Let A be a symmetric positive definite matrix satisfying the conditions of Lemma 2.1. Then

\bar{A} = \begin{bmatrix} A & -I \\ -\hat{D} & I \end{bmatrix}
is nonsingular.
Proof. By simple computations, we have
\bar{A} = \begin{bmatrix} I & 0 \\ -\hat{D}A^{-1} & I \end{bmatrix}\begin{bmatrix} A & 0 \\ 0 & I - \hat{D}A^{-1} \end{bmatrix}\begin{bmatrix} I & -A^{-1} \\ 0 & I \end{bmatrix}.
Clearly, we only need to show that the matrix I - \hat{D}A^{-1} is nonsingular. If not, then (I - \hat{D}A^{-1})y = 0 for some nonzero y ∈ R^n, i.e., y = \hat{D}A^{-1}y. Let z = A^{-1}y. Then Az = \hat{D}z, and hence

z^T z < z^T A^T A z = z^T \hat{D}^T \hat{D} z \le z^T z,

where the first inequality uses σ_min(A) > 1. This is a contradiction, so I - \hat{D}A^{-1} is nonsingular. This completes the proof.
Lemma 2.4. Let A be a symmetric positive definite matrix satisfying the conditions of Lemma 2.1. Then

\hat{A} = \begin{bmatrix} \alpha I + A & -I \\ -\hat{D} & 2I \end{bmatrix}
is nonsingular.
Proof. Similar to Lemma 2.3, by simple computations, we have

\hat{A} = \begin{bmatrix} I & 0 \\ -\hat{D}(\alpha I + A)^{-1} & I \end{bmatrix}\begin{bmatrix} \alpha I + A & 0 \\ 0 & 2I - \hat{D}(\alpha I + A)^{-1} \end{bmatrix}\begin{bmatrix} I & -(\alpha I + A)^{-1} \\ 0 & I \end{bmatrix}.

Since A is symmetric positive definite, αI + A is nonsingular; hence \hat{A} is nonsingular whenever the matrix 2I - \hat{D}(\alpha I + A)^{-1} is nonsingular. In fact, 2I - \hat{D}(\alpha I + A)^{-1} is always nonsingular. If not, then (2I - \hat{D}(\alpha I + A)^{-1})y = 0 for some nonzero y ∈ R^n, i.e., 2y = \hat{D}(\alpha I + A)^{-1}y. Let z = (\alpha I + A)^{-1}y. Then 2(\alpha I + A)z = \hat{D}z. Further, we have
2z^T(\alpha I + A)z = z^T \hat{D} z.
(2.4)
Let λ = \frac{z^T A z}{z^T z} and μ = \frac{z^T \hat{D} z}{z^T z}. Then, from (2.4) we can obtain

2\alpha + 2\lambda = \mu.
(2.5)
Since α > 0, λ > 1 and |μ| ≤ 1, Eq. (2.5) cannot hold. Therefore, Eq. (2.4) cannot hold either. This completes the proof.
Lemma 2.4 implies that the iteration matrix M_α is well defined.
Lemma 2.5. Let A be a symmetric positive definite matrix satisfying the conditions of Lemma 2.1. If λ is an eigenvalue of the matrix M_α, then λ ≠ ±1.
Proof. If λ = 1, then from (2.2) and (2.3) we have

Ax - y = 0,
-|x| + y = 0.
Based on Lemma 2.3, it follows that y=0 and x=0, which contradicts the assumption that [x,y]T is an eigenvector of matrix Mα. Hence λ≠1.
Now, we prove that λ ≠ -1. When λ = -1, from (2.2) and (2.3) we have

-2\alpha x = 0 \quad \text{and} \quad 2y = 0.

Since α > 0, we obtain x = 0 and y = 0, which also contradicts the assumption that [x, y]^T is an eigenvector of matrix M_α. Hence λ ≠ -1.
Lemma 2.6. Assume that A is a symmetric positive definite matrix satisfying the conditions of Lemma 2.1. Let λ be an eigenvalue of M_α and [x, y]^T be the corresponding eigenvector. Then x ≠ 0. Moreover, if y = 0, then |λ| < 1.
Proof. If x=0, then from (2.2) we have (λ+1)y=0. By Lemma 2.5 we know that λ≠−1. Therefore, y=0, which contradicts the assumption that [x,y]T is an eigenvector of matrix Mα.
If y=0, then from (2.2) we have
(\alpha I + A)^{-1}(\alpha I - A)x = \lambda x.
Since A is symmetric positive definite, by [12, Lemma 2.1] we know that

|\lambda| \le \|(\alpha I + A)^{-1}(\alpha I - A)\|_2 < 1.
Thus, we complete the proof.
Theorem 2.1. Let A be a symmetric positive definite matrix satisfying the conditions of Lemma 2.1, and let α be a positive constant. Then
ρ(Mα)<1,∀α>0,
which implies that the special SS iteration method converges to the unique solution of the AVE (1.1).
Proof. If λ = 0, then trivially |λ| < 1 and the result of Theorem 2.1 holds. If λ ≠ 0, then from (2.3) we have
y = \frac{1 + \lambda}{2\lambda}\hat{D}x.
(2.6)
Substituting (2.6) into (2.2) leads to
(\lambda - 1)\alpha x + (\lambda + 1)Ax - \frac{(\lambda + 1)^2}{2\lambda}\hat{D}x = 0.
(2.7)
By Lemma 2.6, we know that x ≠ 0. Multiplying both sides of Eq. (2.7) from the left by x^T/(x^T x), we obtain
(\lambda - 1)\alpha + (\lambda + 1)\frac{x^T A x}{x^T x} - \frac{(\lambda + 1)^2}{2\lambda}\cdot\frac{x^T \hat{D} x}{x^T x} = 0.
(2.8)
Let
p = \frac{x^T A x}{x^T x}, \quad q = \frac{x^T \hat{D} x}{x^T x}.
Then p>1 and |q|≤1. From Eq. (2.8) we have
(\lambda - 1)\alpha + (\lambda + 1)p - \frac{(\lambda + 1)^2}{2\lambda}q = 0.
Further, we know that λ satisfies the following real quadratic equation
\lambda^2 + \frac{2p - 2\alpha - 2q}{2p + 2\alpha - q}\lambda - \frac{q}{2p + 2\alpha - q} = 0.
(2.9)
Based on Lemma 2.2, both roots of the real quadratic Eq. (2.9) satisfy |λ| < 1 if and only if
\left|\frac{q}{2p + 2\alpha - q}\right| < 1
(2.10)
and
\left|\frac{2p - 2\alpha - 2q}{2p + 2\alpha - q}\right| < 1 - \frac{q}{2p + 2\alpha - q}.
(2.11)
It is easy to check that (2.10) and (2.11) hold for all α>0. Therefore, we obtain
ρ(Mα)<1,∀α>0,
which implies that the special SS iteration method converges to the unique solution of the AVE (1.1).
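Theorem 2.1 can also be probed numerically: for a random symmetric positive definite A with λ_min(A) > 1 and arbitrary sign matrices \hat{D}, the spectral radius of M_α stays below one for several values of α. A sketch with hypothetical data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
B = rng.standard_normal((n, n))
A = B @ B.T + 2.0 * np.eye(n)   # symmetric positive definite, lambda_min(A) >= 2 > 1
I = np.eye(n)

ok = True
for alpha in [0.1, 1.0, 3.0, 10.0]:
    for _ in range(20):
        d = rng.choice([-1.0, 0.0, 1.0], size=n)
        D = np.diag(d)                       # an arbitrary D_hat = diag(sign(x))
        M = np.block([[alpha * I + A, -I], [-D, 2 * I]])
        N = np.block([[alpha * I - A, I], [D, np.zeros((n, n))]])
        rho = np.abs(np.linalg.eigvals(np.linalg.solve(M, N))).max()
        ok = ok and (rho < 1.0)
print(ok)
```

Every sampled spectral radius is strictly below one, consistent with the unconditional convergence asserted by Theorem 2.1.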
For the SOR-like iteration method (1.2), Ke and Ma [18] gave the following result.
Theorem 2.2. [18] Let A ∈ R^{n×n} be nonsingular and b ∈ R^n. Denote
Comparing the convergence conditions of the special SS method and of the SOR-like iteration method (Theorem 2.1 above and Theorem 2.2 of [18]), the convergence condition of the SOR-like iteration method is not only relatively strict but also difficult to verify, mainly because condition (2.12) requires computing the inverse of the matrix A. This limits the applicability of the SOR-like iteration method. Based on this, the special SS method may be superior to the SOR-like iteration method under certain conditions.
Next, we consider the case where the matrix A in (1.1) is positive definite (but not necessarily symmetric).
It is noted that Lemma 2.3, Lemma 2.5 and Lemma 2.6 remain valid under the conditions of Lemma 2.1 when A in (1.1) is positive definite.
Now, we need to guarantee the invertibility of the first factor of M_α. To this end, we have Lemma 2.7.
Lemma 2.7. Let the positive definite matrix A satisfy the conditions of Lemma 2.1 and \|(\alpha I + A)^{-1}\|_2 < 2. Then

\hat{A} = \begin{bmatrix} \alpha I + A & -I \\ -\hat{D} & 2I \end{bmatrix}

is nonsingular.
Proof. Based on the proof of Lemma 2.4, when \|(\alpha I + A)^{-1}\|_2 < 2, the matrix 2I - \hat{D}(\alpha I + A)^{-1} is obviously nonsingular, since \|\hat{D}(\alpha I + A)^{-1}\|_2 \le \|(\alpha I + A)^{-1}\|_2 < 2. This implies that the matrix \hat{A} is nonsingular as well.
For later use we define the set S as

S = \{x \in C^n : [x; y] \text{ is an eigenvector of } M_\alpha \text{ with } \|x\|_2 = 1\}.
Obviously, the members of S are nonzero. Based on this, Theorem 2.3 can be obtained.
Theorem 2.3. Let the conditions of Lemma 2.7 be satisfied. For every x ∈ S, let s(x) = ℜ(x^* A x), t(x) = ℑ(x^* A x) and q = x^* \hat{D} x. If, for each x ∈ S,

(\alpha + s(x))^2 (s(x) - q) + t(x)^2 s(x) > 0,
(2.13)
then
ρ(Mα)<1,∀α>0,
which implies that the special SS iteration method converges to the unique solution of the AVE (1.1).
Proof. Similar to the proof of Theorem 2.1, using x∗ instead of xT in (2.8) leads to
(\lambda - 1)\alpha + (\lambda + 1)x^* A x - \frac{(\lambda + 1)^2}{2\lambda}x^* \hat{D} x = 0,
which is equivalent to
(\lambda - 1)\alpha + (\lambda + 1)(s(x) + t(x)i) - \frac{(\lambda + 1)^2}{2\lambda}q = 0.
(2.14)
For simplicity of notation, we write s and t for s(x) and t(x), respectively. From (2.14), we have
Thus, the necessary and sufficient condition for |λ| < 1 in (2.15) is
|\phi - \phi^* \psi| + |\psi|^2 < 1.
(2.17)
Substituting (2.16) into (2.17), we can obtain the condition (2.13), which completes the proof.
3.
Numerical experiments
In this section, two examples are given to demonstrate the performance of the special SS method (2.1) for solving the AVE (1.1). To this end, we compare the special SS method with the SOR-like method [18], the generalized Newton (GN) method [11] and the search direction (SD) method [10] in terms of the iteration counts (denoted 'IT'), the relative residual error (denoted 'RES'), and the elapsed CPU time in seconds (denoted 'CPU'). In the implementations, we use sparse LU factorization to solve the linear systems involving the first factor of the splitting. All runs are implemented in Matlab 7.0.
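For reference, the generalized Newton (GN) iteration of [11] solves (A - D(x^{(k)}))x^{(k+1)} = b at every step, where D(x) = diag(sign(x)); a minimal sketch with a hypothetical test problem might read:

```python
import numpy as np

def generalized_newton(A, b, tol=1e-6, maxit=500):
    """Sketch of Mangasarian's generalized Newton (GN) iteration for
    Ax - |x| = b: solve (A - D(x_k)) x_{k+1} = b, D(x_k) = diag(sign(x_k))."""
    x = np.zeros_like(b)
    for k in range(maxit):
        x_new = np.linalg.solve(A - np.diag(np.sign(x)), b)
        if np.linalg.norm(A @ x_new - np.abs(x_new) - b) <= tol * np.linalg.norm(b):
            return x_new, k + 1
        x = x_new
    return x, maxit

A = 4.0 * np.eye(3)                 # hypothetical matrix with sigma_min(A) > 1
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true - np.abs(x_true)
x, its = generalized_newton(A, b)
print(np.allclose(x, x_true, atol=1e-6))
```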
All initial vectors are chosen as the zero vector, and all iterations are terminated once the relative residual error satisfies
\mathrm{RES} := \frac{\|Ax^{(k)} - |x^{(k)}| - b\|_2}{\|b\|_2} \le 10^{-6},
or once the prescribed maximum of 500 iterations is exceeded. In the following tables, '-' indicates that RES > 10^{-6} after the iteration count exceeds 500.
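In NumPy, this stopping criterion might be coded as follows (the names and the test data are hypothetical):

```python
import numpy as np

def res(A, x, b):
    # Relative residual RES = ||A x - |x| - b||_2 / ||b||_2 of the AVE.
    return np.linalg.norm(A @ x - np.abs(x) - b) / np.linalg.norm(b)

A = np.array([[3.0, 0.0], [0.0, 3.0]])
x_true = np.array([1.0, -1.0])
b = A @ x_true - np.abs(x_true)   # b chosen so that x_true solves the AVE
converged = res(A, x_true, b) <= 1e-6
print(converged)  # True: the exact solution has zero residual
```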
To compare the special SS method with the SOR-like method, we take the same parameter α for both. Tables 1 and 2 list numerical results of the special SS method and the SOR-like method for Example 1 with different dimensions n and different parameters α. From Tables 1 and 2, we can see that when both iteration methods converge, the CPU times of the special SS method exceed those of the SOR-like method. However, whereas the SOR-like method occasionally fails to converge within 500 iterations, the special SS method always converges, which confirms the results of Theorem 2.1. From this point of view, the special SS method is robust compared with the SOR-like method.
Table 1.
Numerical comparison of Example 1 with n=3000.
Table 3 lists the numerical results of the special SS method, the SOR-like method, the generalized Newton method, and the search direction method. Here, the iteration parameter of the special SS method is set to 3, and that of the SOR-like method is set to 1. From Table 3, the iteration counts of the special SS method exceed those of the other three methods, but it requires less CPU time than the generalized Newton method and the search direction method. Among these methods, the SOR-like method requires the fewest iteration steps and the least CPU time, but it carries some risk of non-convergence in the implementations.
Table 3.
Numerical comparison of SS, SOR, GN and SD for Example 1.
Based on the above discussion, we conclude that the special SS method is feasible compared with the SOR-like method, the generalized Newton method, and the search direction method.
Example 2. Let m be a prescribed positive integer and n = m^2. Consider the AVE (1.1) with A = \bar{A} + \mu I.
Similarly, for Example 2, we take the same parameter α for the special SS method and the SOR-like method when both are used. Tables 4, 5, 7 and 8 list the computing results for Example 2 with μ = 1 and μ = 4.
Table 4.
Numerical comparison of Example 2 with n=4900 and μ=1.
The numerical results in Tables 4, 5, 7 and 8 further confirm the observations from Tables 1 and 2. That is, when both methods converge, the SOR-like method is computationally more efficient than the special SS method. In the implementations, however, the SOR-like method does carry some risk of non-convergence, while the special SS method is free from such worries. This further verifies that the special SS method is stable.
Similar to Table 3, Tables 6 and 9 list the numerical results of the special SS method, the SOR-like method, the generalized Newton method, and the search direction method. In Tables 6 and 9, the iteration parameter of the SOR-like method is set to 1; for the special SS method, two cases are considered: α = 2.1 in Table 6 and α = 5.5 in Table 9. From Tables 6 and 9, the special SS method requires less CPU time than the generalized Newton method and the search direction method. Among these methods, the SOR-like method requires the fewest iteration steps and the least CPU time, but it carries some risk of non-convergence in the implementations.
Table 9.
Numerical comparison of SS, SOR, GN and SD for Example 2 with μ=4.
The numerical results for Example 2 again lead to the conclusion that the special SS method is feasible compared with the SOR-like method, the generalized Newton method, and the search direction method.
4.
Conclusions
In this paper, based on a shift splitting (SS) of the coefficient matrix of an equivalent two-by-two block nonlinear equation of the AVE (1.1), a special shift splitting (SS) method is introduced. Some convergence conditions for the special SS method are obtained. Numerical experiments show that the special SS method is feasible. However, the parameter α is chosen here by experimentally searching for the optimal value; how to choose the optimal parameter of the special SS method for the AVE is left for future research.
Acknowledgments
The authors would like to thank the anonymous referee for providing helpful suggestions, which greatly improved the paper. This research was supported by the National Natural Science Foundation of China (No. 11961082).
Conflict of interest
The authors declare no conflict of interest in this paper.
References
[1]
J. Rohn, A theorem of the alternatives for the equation Ax + B|x| = b, Linear Multilinear A., 52 (2004), 421-426. doi: 10.1080/0308108042000220686
[2]
O. L. Mangasarian, Absolute value programming, Comput. Optim. Appl., 36 (2007), 43-53. doi: 10.1007/s10589-006-0395-5
[3]
O. L. Mangasarian, R. R. Meyer, Absolute value equations, Linear Algebra Appl., 419 (2006), 359-367. doi: 10.1016/j.laa.2006.05.004
[4]
S. L. Wu, P. Guo, Modulus-based matrix splitting algorithms for the quasi-complementarity problems, Appl. Numer. Math., 132 (2018), 127-137. doi: 10.1016/j.apnum.2018.05.017
[5]
R. W. Cottle, J. S. Pang, R. E. Stone, The Linear Complementarity Problem, Academic, San Diego, 1992.
[6]
J. Rohn, An algorithm for solving the absolute value equations, Electron. J. Linear Al., 18 (2009), 589-599.
[7]
J. Rohn, V. Hooshyarbakhsh, R. Farhadsefat, An iterative method for solving absolute value equations and sufficient conditions for unique solvability, Optim. Lett., 8 (2014), 35-44. doi: 10.1007/s11590-012-0560-y
[8]
M. A. Noor, J. Iqbal, E. Al-Said, Residual iterative method for solving absolute value equations, Abstr. Appl. Anal., 2012 (2012),1-9.
[9]
D. K. Salkuyeh, The Picard-HSS iteration method for absolute value equations, Optim. Lett., 8 (2014), 2191-2202. doi: 10.1007/s11590-014-0727-9
[10]
M. A. Noor, J. Iqbal, K. I. Noor, et al. On an iterative method for solving absolute value equations, Optim. Lett., 6 (2012), 1027-1033. doi: 10.1007/s11590-011-0332-0
[11]
O. L. Mangasarian, A generalized Newton method for absolute value equations, Optim. Lett., 3 (2009), 101-108. doi: 10.1007/s11590-008-0094-5
[12]
O. L. Mangasarian, A hybrid algorithm for solving the absolute value equation, Optim. Lett., 9 (2015), 1469-1474. doi: 10.1007/s11590-015-0893-4
[13]
A. Wang, Y. Cao, J. X. Chen, Modified Newton-type iteration methods for generalized absolute value equations, J. Optimiz. Theory App., 181 (2019), 216-230. doi: 10.1007/s10957-018-1439-6
[14]
Z. Z. Bai, J. F. Yin, Y. F. Su, A shift-splitting preconditioner for non-Hermitian positive definite matrices, J. Comput. Math., 24 (2006), 539-552.
[15]
M. Z. Zhu, G. F. Zhang, Z. Z. Liang, The nonlinear HSS-like iteration method for absolute value equations, arXiv:1403.7013v2.
[16]
S. L. Wu, C. X. Li, The unique solution of the absolute value equations, Appl. Math. Lett., 76 (2018), 195-200. doi: 10.1016/j.aml.2017.08.012
[17]
S. L. Wu, T. Z. Huang, X. L. Zhao, A modified SSOR iterative method for augmented systems, J. Comput. Appl. Math., 228 (2009), 424-433. doi: 10.1016/j.cam.2008.10.006
[18]
Y. F. Ke, C. F. Ma, SOR-like iteration method for solving absolute value equations, Appl. Math. Comput., 311 (2017), 195-202.
[19]
G. H. Golub, X. Wu, J. Y. Yuan, SOR-like methods for augmented systems, BIT., 41 (2001), 71-85. doi: 10.1023/A:1021965717530
[20]
P. Guo, S. L. Wu, C. X. Li, On the SOR-like iteration method for solving absolute value equations, Appl. Math. Lett., 97 (2019), 107-113. doi: 10.1016/j.aml.2019.03.033