Citation: ShiLiang Wu, CuiXia Li. A special shift splitting iteration method for absolute value equation[J]. AIMS Mathematics, 2020, 5(5): 5171-5183. doi: 10.3934/math.2020332
Abstract
In this paper, based on the shift splitting (SS) technique, we propose a special SS iteration method for solving the absolute value equation (AVE), obtained by equivalently reformulating the AVE as a two-by-two block nonlinear equation. Theoretical analysis shows that the special SS method is absolutely convergent under proper conditions. Numerical experiments demonstrate the feasibility, robustness, and effectiveness of the special SS method.
1. Introduction
Consider the absolute value equation (AVE)
$$Ax - |x| = b, \qquad (1.1)$$
where $A \in \mathbb{R}^{n \times n}$, $b \in \mathbb{R}^{n}$, and $|x|$ denotes the componentwise absolute value of $x \in \mathbb{R}^{n}$. The AVE (1.1) is a very important tool that often arises in scientific and engineering computing, for example in linear programming, bimatrix games, quadratic programming, and the quasi-complementarity problems [1,2,3,4,5].
In recent years, a large number of efficient iteration methods have been proposed to solve the AVE (1.1), including the successive linearization method [2], the Picard and Picard-HSS methods [7,9], the nonlinear HSS-like iteration method [15], the sign accord method [6], the hybrid algorithm [12], and the generalized Newton (GN) method [11]. For other iteration methods, see [8,9,10,13] for more details.
Recently, Ke and Ma [18] extended the classical SOR-like iteration method of [19] to the AVE (1.1) and proposed the following SOR-like iteration method
$$x^{(k+1)} = (1-\omega)x^{(k)} + \omega A^{-1}\left(y^{(k)} + b\right), \qquad y^{(k+1)} = (1-\omega)y^{(k)} + \omega \left|x^{(k+1)}\right|, \qquad (1.2)$$
with $k = 0, 1, 2, \ldots$ and $\omega > 0$, where $D(x) = \mathrm{diag}(\mathrm{sign}(x))$ denotes the diagonal matrix corresponding to $\mathrm{sign}(x)$, and $\mathrm{sign}(x)$ denotes the vector with components equal to $1$, $0$, or $-1$ depending on whether the corresponding component of $x$ is positive, zero, or negative. The SOR-like iteration method (1.2) is designed by applying the idea of the SOR-like method in [19] after reformulating the AVE (1.1) as the two-by-two block nonlinear equation
$$Ax - y = b, \qquad -|x| + y = 0,$$
or
$$\begin{bmatrix} A & -I \\ -D(x) & I \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} b \\ 0 \end{bmatrix}. \qquad (1.3)$$
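To make the scheme concrete, here is a minimal Python sketch of the SOR-like update of [18]. The function name `sor_like`, the dense solve, and the stopping rule are our illustrative choices, not quoted from the paper (whose experiments use Matlab).

```python
import numpy as np

def sor_like(A, b, omega=1.0, tol=1e-6, max_it=500):
    """Sketch of the SOR-like iteration of Ke and Ma [18] for Ax - |x| = b:
    x <- (1 - omega) x + omega A^{-1}(y + b),  y <- (1 - omega) y + omega |x|."""
    x = np.zeros_like(b)
    y = np.zeros_like(b)
    for k in range(1, max_it + 1):
        # Each step solves one linear system with A (dense solve for clarity).
        x = (1.0 - omega) * x + omega * np.linalg.solve(A, y + b)
        y = (1.0 - omega) * y + omega * np.abs(x)
        res = np.linalg.norm(A @ x - np.abs(x) - b) / np.linalg.norm(b)
        if res <= tol:
            return x, k
    return x, max_it
```

For instance, with $A = \mathrm{diag}(4, 4)$ and $b = (3, 6)^{T}$ the iteration converges to $x = (1, 2)^{T}$; here $\sigma_{\min}(A) = 4 > 1$, so the solution is unique.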
The convergence condition of the SOR-like iteration method in [18] was given in the light of assumptions imposed on the involved parameter. Further, in [20], some new convergence conditions were obtained from the involved iteration matrix of the SOR-like iteration method.
When the nonlinear term $|x|$ vanishes in the AVE (1.1), it reduces to the linear system
$$Ax = b. \qquad (1.4)$$
In [14], Bai et al. proposed the following shift splitting (SS) of the matrix $A$, that is,
$$A = \frac{1}{2}(\alpha I + A) - \frac{1}{2}(\alpha I - A), \qquad \alpha > 0,$$
and designed the shift splitting (SS) scheme
$$\frac{1}{2}(\alpha I + A)\,x^{(k+1)} = \frac{1}{2}(\alpha I - A)\,x^{(k)} + b$$
for solving the non-Hermitian positive definite linear system (1.4). In this paper, we generalize this idea and propose a special SS iteration method for the two-by-two block nonlinear Eq. (1.3) to solve the AVE (1.1).
The remainder of the paper is organized as follows. In Section 2, the special SS iteration method is introduced to solve the AVE (1.1) and the convergence analysis of the special SS iteration method is studied in detail. In Section 3, numerical experiments are reported to show the effectiveness of the proposed method. In Section 4, some conclusions are given to end the paper.
2. The special SS iteration method
In this section, we introduce the special SS iteration method to solve the AVE (1.1). To this end, based on the iteration methods studied in [14], a special shift splitting (SS) of the coefficient matrix in (1.3) can be constructed as follows
where $\alpha$ is a given positive constant and $I$ is the identity matrix. This special splitting naturally leads to the special SS iteration method for solving the nonlinear Eq. (1.3), described below.
The special SS iteration method: Let $A$ be nonsingular and $b \in \mathbb{R}^{n}$. Given initial vectors $x^{(0)}$ and $y^{(0)}$, for $k = 0, 1, 2, \ldots$ until the iteration sequence converges, compute
(2.1)
where $\alpha$ is a given positive constant.
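Since the displayed formula (2.1) is not reproduced in this copy, the following Python sketch only assumes the natural form of the scheme: the shift splitting of [14] applied to the block matrix of (1.3), with $D(x) = \mathrm{diag}(\mathrm{sign}(x))$ refreshed from the current iterate. The function name and this exact formulation are assumptions, not quotations from the paper.

```python
import numpy as np

def special_ss(A, b, alpha, tol=1e-6, max_it=500):
    """Hedged sketch of a special SS iteration for Ax - |x| = b, assuming the step
    0.5 (alpha I + W(x_k)) z_{k+1} = 0.5 (alpha I - W(x_k)) z_k + (b; 0)
    with W(x) = [[A, -I], [-D(x), I]] and D(x) = diag(sign(x))."""
    n = A.shape[0]
    I_n, I_2n = np.eye(n), np.eye(2 * n)
    rhs = np.concatenate([b, np.zeros(n)])
    z = np.zeros(2 * n)  # z = (x; y), with y approximating |x|
    for k in range(1, max_it + 1):
        D = np.diag(np.sign(z[:n]))
        W = np.block([[A, -I_n], [-D, I_n]])  # block matrix of (1.3)
        z = np.linalg.solve(0.5 * (alpha * I_2n + W),
                            0.5 * (alpha * I_2n - W) @ z + rhs)
        x = z[:n]
        if np.linalg.norm(A @ x - np.abs(x) - b) <= tol * np.linalg.norm(b):
            return x, k
    return z[:n], max_it
```

At a fixed point, $W(x)z = (b; 0)$ gives $Ax - y = b$ and $y = D(x)x = |x|$, so any limit of this sketch solves the AVE (1.1).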
Clearly, the iteration matrix of the special SS method is
Let $\rho(\cdot)$ denote the spectral radius of a matrix. Then the special SS iteration method is convergent if and only if the spectral radius of its iteration matrix is less than one. Let $\lambda$ be an eigenvalue of the iteration matrix and consider a corresponding eigenvector. Then
which is equivalent to
(2.2)
(2.3)
To study the convergence conditions of the special SS iteration method, we first give some lemmas.
Lemma 2.1. [16] The AVE (1.1) has a unique solution for any $b \in \mathbb{R}^{n}$ if and only if the matrix $A - D$ (or $A + D$) is nonsingular for every diagonal matrix $D$ with diagonal entries in $[-1, 1]$.
Based on Lemma 2.1, it is easy to see that when the matrix $A$ in the AVE (1.1) satisfies $\sigma_{\min}(A) > 1$, where $\sigma_{\min}(A)$ denotes the smallest singular value of $A$, the AVE (1.1) has a unique solution for any $b$. See [16] for more details.
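The sufficient condition $\sigma_{\min}(A) > 1$ is straightforward to verify numerically; a one-function sketch (the helper name is ours):

```python
import numpy as np

def ave_uniquely_solvable(A):
    """Sufficient condition from [16]: if sigma_min(A) > 1, the AVE Ax - |x| = b
    has a unique solution for every right-hand side b."""
    return float(np.linalg.svd(A, compute_uv=False).min()) > 1.0
```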
Lemma 2.2. [17] Consider the real quadratic equation $x^{2} - bx + c = 0$, where $b$ and $c$ are real numbers. Both roots of the equation are less than one in modulus if and only if $|c| < 1$ and $|b| < 1 + c$.
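Lemma 2.2 can be sanity-checked numerically by comparing the criterion against the actual root moduli; the two helper names below are ours:

```python
import numpy as np

def roots_inside_unit_disk(b, c):
    """Directly check that both roots of x^2 - b x + c = 0 have modulus < 1."""
    return bool(np.all(np.abs(np.roots([1.0, -b, c])) < 1.0))

def lemma22_criterion(b, c):
    """The criterion of Lemma 2.2: |c| < 1 and |b| < 1 + c."""
    return abs(c) < 1.0 and abs(b) < 1.0 + c
```

For example, with $(b, c) = (0.9, 0.3)$ the roots are complex conjugates of modulus $\sqrt{0.3} < 1$, and both functions agree.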
Lemma 2.3. Let be a symmetric positive definite matrix satisfying the conditions of Lemma 2.1. Then
is nonsingular.
Proof. By simple computations, we have
Clearly, we just need to prove that the matrix is nonsingular. If not, then for some nonzero vector we have , which leads to a contradiction. This implies that . Let . Then . Further, we have
Obviously, this inequality cannot hold, a contradiction. This completes the proof.
Lemma 2.4. Let be a symmetric positive definite matrix satisfying the conditions of Lemma 2.1. Then
is nonsingular.
Proof. Similar to Lemma 2.3, by simple computations, we have
Note that is a symmetric positive definite matrix. Clearly, when the matrix is nonsingular, the matrix is also nonsingular. In fact, the matrix is indeed nonsingular. If not, then for some nonzero vector we have , which leads to a contradiction. This implies that . Let . Then . Further, we have
(2.4)
Let and . Then, from (2.4) we can obtain
(2.5)
Since , and , Eq. (2.5) cannot hold. Therefore, Eq. (2.4) cannot hold either. This completes the proof.
Lemma 2.4 implies that the iteration matrix of the special SS method is well defined.
Lemma 2.5. Let be a symmetric positive definite matrix satisfying the conditions of Lemma 2.1. If is an eigenvalue of the matrix , then .
Proof. If , then from (2.2) and (2.3) we have
Based on Lemma 2.3, it follows that and , which contradicts the assumption that is an eigenvector of matrix . Hence .
Now, we prove that . When , from (2.2) and (2.3) we have
Since , we obtain and , which also contradicts the assumption that is an eigenvector of matrix . Hence .
Lemma 2.6. Assume that is a symmetric positive definite matrix and satisfies the conditions of Lemma 2.1. Let be an eigenvalue of and be the corresponding eigenvector. Then . Moreover, if , then .
Proof. If , then from (2.2) we have . By Lemma 2.5 we know that . Therefore, , which contradicts the assumption that is an eigenvector of matrix .
If , then from (2.2) we have
Since is symmetric positive definite, then by [12, Lemma 2.1] we know that
Thus, we complete the proof.
Theorem 2.1. Let be a symmetric positive definite matrix satisfying the conditions of Lemma 2.1, and let be a positive constant. Then
which implies that the special SS iteration method converges to the unique solution of the AVE .
Proof. If , then . The results in Theorem 2.1 hold.
If , then from (2.3) we have
(2.6)
Substituting (2.6) into (2.2) leads to
(2.7)
By Lemma 2.6, we know that . Multiplying both sides of Eq. (2.7) by , we obtain
(2.8)
Let
Then and . From Eq. (2.8) we have
Further, we know that satisfies the following real quadratic equation
(2.9)
Based on Lemma 2.2, the roots of the real quadratic Eq. (2.9) satisfy if and only if
(2.10)
and
(2.11)
It is easy to check that (2.10) and (2.11) hold for all . Therefore, we obtain
which implies that the special SS iteration method converges to the unique solution of the AVE .
For the SOR-like iteration method (1.2), Ke and Ma [18] gave the following result.
Comparing the convergence conditions of the special SS method and the SOR-like iteration method (Theorem 2.1 here and Theorem 2.2 of [18]), the convergence condition of the SOR-like iteration method is not only more restrictive but also more difficult to compute, mainly because condition (2.12) requires computing the inverse of the matrix . This limits the applicability of the SOR-like iteration method. Based on this, the special SS method may be superior to the SOR-like iteration method under certain conditions.
Next, we consider the situation where the matrix in (1.1) is positive definite.
It is noted that Lemma 2.3, Lemma 2.5 and Lemma 2.6 remain valid under the condition of Lemma 2.1 when the matrix in (1.1) is positive definite.
Now, we need to guarantee the invertibility of the first factor of . To this end, we give Lemma 2.7.
Lemma 2.7. Let the positive definite matrix satisfy the conditions of Lemma 2.1 and . Then
is nonsingular.
Proof. Based on the proof of Lemma 2.4, obviously, when , matrix is nonsingular. This implies that matrix is nonsingular as well.
For later use we define the set as
Obviously, the members of are nonzero. Based on this, Theorem 2.3 can be obtained.
Theorem 2.3. Let the conditions of Lemma 2.7 be satisfied. For every , let , and . For each , if
(2.13)
then
which implies that the special SS iteration method converges to the unique solution of the AVE .
Proof. Similar to the proof of Theorem 2.1, using instead of in (2.8) leads to
which is equivalent to
(2.14)
For simplicity of notation, we write for , , respectively. From (2.14), we have
(2.15)
where
By some calculations, we get
Further, we have
(2.16)
Thus, the necessary and sufficient condition for in (2.15) is
(2.17)
Substituting (2.16) into (2.17), we can obtain the condition (2.13), which completes the proof.
3. Numerical experiments
In this section, two examples are given to demonstrate the performance of the special SS method (2.1) for solving the AVE (1.1). To this end, we compare the special SS method with the SOR-like method [18], the generalized Newton (GN) method [11], and the search direction (SD) method [10] in terms of iteration counts (denoted 'IT'), relative residual error (denoted 'RES'), and elapsed CPU time in seconds (denoted 'CPU'). In the implementations, we use the sparse LU factorization to apply the inverse of the first factor of the splitting. All runs are implemented in Matlab 7.0.
All initial vectors are chosen as the zero vector, and all iterations are terminated once the relative residual error satisfies
or the prescribed maximum of 500 iterations is exceeded. In the following tables, '' denotes or an iteration count larger than 500.
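The stopping quantity can be sketched as follows; the tolerance value is not legible in this copy, and $10^{-6}$, used in the helpers above, is an assumption, not a quotation from the paper.

```python
import numpy as np

def relative_residual(A, x, b):
    """RES = ||A x - |x| - b||_2 / ||b||_2, the stopping quantity of the experiments."""
    return np.linalg.norm(A @ x - np.abs(x) - b) / np.linalg.norm(b)
```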
To compare the special SS method with the SOR-like method, we take the same parameter for both. Tables 1 and 2 list numerical results of the special SS method and the SOR-like method for Example 1 with different dimensions and different parameters . From Tables 1 and 2, we can see that when both iteration methods converge, the CPU times of the special SS method are greater than those of the SOR-like method. However, while the SOR-like method occasionally fails to converge within 500 iterations, the special SS method always converges, which confirms the results in Theorem 2.1. From this point of view, the special SS method is more robust than the SOR-like method.
Table 3 lists the numerical results of the special SS method, the SOR-like method, the generalized Newton method, and the search direction method. Here, the iteration parameter of the special SS method is set to 3 and the iteration parameter of the SOR-like method is set to 1. From Table 3, the iteration counts of the special SS method are greater than those of the other three methods, but it requires less CPU time than the generalized Newton method and the search direction method. Among these methods, the SOR-like method requires the fewest iteration steps and the least CPU time, but it carries some risk of failure in the implementations.
Table 3. Numerical comparison of SS, SOR, GN and SD for Example 1.
Based on the above discussion, we conclude that the special SS method is feasible compared with the SOR-like method, the generalized Newton method, and the search direction method.
Example 2. Let be a prescribed positive integer and . Consider the AVE (1.1) with
with
and , where .
Similarly, for Example 2, we again take the same parameter for the special SS method and the SOR-like method when both are used. Tables 4, 5, 7 and 8 list the computing results for Example 2 with and .
Table 4. Numerical comparison for Example 2 with and .
The numerical results in Tables 4, 5, 7 and 8 further confirm the observations from Tables 1 and 2. That is, when both methods converge, the SOR-like method is more efficient than the special SS method. In the implementations, however, the SOR-like method indeed carries some risk of failure, whereas the special SS method is free from such worries. This further verifies that the special SS method is stable.
Similar to Table 3, Tables 6 and 9 list the numerical results of the special SS method, the SOR-like method, the generalized Newton method, and the search direction method. In Tables 6 and 9, the iteration parameter of the SOR-like method is set to 1; for the special SS method, two cases are considered: in Table 6 and in Table 9. From Tables 6 and 9, the special SS method requires less CPU time than the generalized Newton method and the search direction method. Among these methods, the SOR-like method requires the fewest iteration steps and the least CPU time, but it carries some risk of failure in the implementations.
Table 9. Numerical comparison of SS, SOR, GN and SD for Example 2 with .
From the numerical results for Example 2, we again conclude that the special SS method is feasible compared with the SOR-like method, the generalized Newton method, and the search direction method.
4. Conclusions
In this paper, based on the shift splitting (SS) of the coefficient matrix of an equivalent two-by-two block nonlinear equation of the AVE (1.1), a special shift splitting (SS) method is introduced. Some convergence conditions for the special SS method are obtained. Numerical experiments show that the special SS method is feasible. However, the parameter is chosen as the experimentally found optimal value, so how to choose the optimal parameter of the special SS method for the AVE is left as future work.
Acknowledgments
The authors would like to thank the anonymous referee for providing helpful suggestions, which greatly improved the paper. This research was supported by the National Natural Science Foundation of China (No. 11961082).
Conflict of interest
The authors declare no conflict of interest in this paper.
References
[1]
J. Rohn, A theorem of the alternatives for the equation Ax + B|x| = b, Linear Multilinear A., 52 (2004), 421-426. doi: 10.1080/0308108042000220686
[2]
O. L. Mangasarian, Absolute value programming, Comput. Optim. Appl., 36 (2007), 43-53. doi: 10.1007/s10589-006-0395-5
[3]
O. L. Mangasarian, R. R. Meyer, Absolute value equations, Linear Algebra Appl., 419 (2006), 359-367. doi: 10.1016/j.laa.2006.05.004
[4]
S. L. Wu, P. Guo, Modulus-based matrix splitting algorithms for the quasi-complementarity problems, Appl. Numer. Math., 132 (2018), 127-137. doi: 10.1016/j.apnum.2018.05.017
[5]
R. W. Cottle, J. S. Pang, R. E. Stone, The Linear Complementarity Problem, Academic, San Diego, 1992.
[6]
J. Rohn, An algorithm for solving the absolute value equations, Electron. J. Linear Al., 18 (2009), 589-599.
[7]
J. Rohn, V. Hooshyarbakhsh, R. Farhadsefat, An iterative method for solving absolute value equations and sufficient conditions for unique solvability, Optim. Lett., 8 (2014), 35-44. doi: 10.1007/s11590-012-0560-y
[8]
M. A. Noor, J. Iqbal, E. Al-Said, Residual iterative method for solving absolute value equations, Abstr. Appl. Anal., 2012 (2012), 1-9.
[9]
D. K. Salkuyeh, The Picard-HSS iteration method for absolute value equations, Optim. Lett., 8 (2014), 2191-2202. doi: 10.1007/s11590-014-0727-9
[10]
M. A. Noor, J. Iqbal, K. I. Noor, et al. On an iterative method for solving absolute value equations, Optim. Lett., 6 (2012), 1027-1033. doi: 10.1007/s11590-011-0332-0
[11]
O. L. Mangasarian, A generalized Newton method for absolute value equations, Optim. Lett., 3 (2009), 101-108. doi: 10.1007/s11590-008-0094-5
[12]
O. L. Mangasarian, A hybrid algorithm for solving the absolute value equation, Optim. Lett., 9 (2015), 1469-1474. doi: 10.1007/s11590-015-0893-4
[13]
A. Wang, Y. Cao, J. X. Chen, Modified Newton-type iteration methods for generalized absolute value equations, J. Optimiz. Theory App., 181 (2019), 216-230. doi: 10.1007/s10957-018-1439-6
[14]
Z. Z. Bai, J. F. Yin, Y. F. Su, A shift-splitting preconditioner for non-Hermitian positive definite matrices, J. Comput. Math., 24 (2006), 539-552.
[15]
M. Z. Zhu, G. F. Zhang, Z. Z. Liang, The nonlinear HSS-like iteration method for absolute value equations, arXiv:1403.7013v2.
[16]
S. L. Wu, C. X. Li, The unique solution of the absolute value equations, Appl. Math. Lett., 76 (2018), 195-200. doi: 10.1016/j.aml.2017.08.012
[17]
S. L. Wu, T. Z. Huang, X. L. Zhao, A modified SSOR iterative method for augmented systems, J. Comput. Appl. Math., 228 (2009), 424-433. doi: 10.1016/j.cam.2008.10.006
[18]
Y. F. Ke, C. F. Ma, SOR-like iteration method for solving absolute value equations, Appl. Math. Comput., 311 (2017), 195-202.
[19]
G. H. Golub, X. Wu, J. Y. Yuan, SOR-like methods for augmented systems, BIT., 41 (2001), 71-85. doi: 10.1023/A:1021965717530
[20]
P. Guo, S. L. Wu, C. X. Li, On the SOR-like iteration method for solving absolute value equations, Appl. Math. Lett., 97 (2019), 107-113. doi: 10.1016/j.aml.2019.03.033