Citation: Cui-Xia Li. A generalization of the AOR iteration method for solving absolute value equations[J]. Electronic Research Archive, 2022, 30(3): 1062-1074. doi: 10.3934/era.2022056
Abstract
In this paper, based on the accelerated over-relaxation (AOR) iteration method, a generalization of the AOR iteration method, called the GAOR method, is presented for solving the absolute value equations (AVE). Convergence conditions of the GAOR method are obtained, and numerical experiments are presented to verify its feasibility.
1.
Introduction
In this paper, based on the AOR iteration method, we consider the numerical solution of the absolute value equations (AVE)
Ax−|x|=b,
(1.1)
where A∈Rn×n, b∈Rn and |x| denotes the componentwise absolute value of the vector x∈Rn. If '|x|' is replaced by 'B|x|' in (1.1), the general AVE is obtained, see [1,2]. Many optimization problems, including linear programming, convex quadratic programming and the linear complementarity problem [3,4,5], can be formulated as the AVE (1.1), so the AVE (1.1) has attracted considerable attention.
To solve the AVE (1.1) efficiently, a great deal of effort has been devoted in recent years to developing iteration methods. For example, in [6], Mangasarian presented a generalized Newton method for solving the AVE (1.1), which is briefly described below:
x(k+1)=(A−D(x(k)))−1b,k=0,1,…,
(1.2)
where D(x(k)) is a diagonal matrix of the form D(x(k))=diag(sign(x(k))). Other versions of the generalized Newton method can be found in [7,8,9,10]. Clearly, to solve the AVE (1.1) with the generalized Newton method, the inverse of the matrix A−D(x(k)) must be computed. Since the matrix A−D(x(k)) changes with the iteration index k, the computational cost of the generalized Newton method may be very high. To keep the iteration matrix unchanged, the following Picard iteration method was considered in [11]:
x(k+1)=A−1(|x(k)|+b),k=0,1,….
(1.3)
When the matrix A in (1.3) is ill-conditioned, an ill-conditioned linear system must be dealt with in each iteration of the Picard method. To avoid inverting the matrix A, Zhu et al. in [13] presented the nonlinear HSS-like method, based on the Hermitian and skew-Hermitian splitting (HSS) of the matrix A in [12], and discussed its convergence properties. For other versions of the nonlinear HSS-like method, see [14,15,16]. In addition, in [17], the classical AOR iteration method was extended to solve the AVE. Other related works include [18,19,20,21,22,23,24].
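To make the fixed-point scheme concrete, the Picard iteration (1.3) can be sketched as follows. This is an illustrative sketch, not the paper's code: the 2×2 test matrix, the right-hand side and the small Gaussian-elimination helper are chosen here purely for demonstration.

```python
# Picard iteration x^{k+1} = A^{-1}(|x^k| + b) for the AVE  A x - |x| = b.

def solve(A, rhs):
    """Solve A y = rhs by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [r] for row, r in zip(A, rhs)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    y = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c] * y[c] for c in range(r + 1, n))
        y[r] = (M[r][n] - s) / M[r][r]
    return y

def picard(A, b, tol=1e-10, kmax=500):
    """Iterate x <- A^{-1}(|x| + b) until successive iterates agree."""
    x = [0.0] * len(b)
    for _ in range(kmax):
        x_new = solve(A, [abs(xi) + bi for xi, bi in zip(x, b)])
        if max(abs(u - v) for u, v in zip(x_new, x)) < tol:
            return x_new
        x = x_new
    return x

# hypothetical data: smallest singular value of A is 3 > 1, so the AVE
# has a unique solution; b is chosen so that the solution is x = (1, -1)
A = [[4.0, 1.0], [1.0, 4.0]]
b = [2.0, -4.0]
x = picard(A, b)
```

Here the iteration map is a contraction because ‖A⁻¹‖₂ = 1/3 < 1, so convergence from the zero start is rapid.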
Recently, based on the methodology of the Gauss-Seidel method, together with the matrix splitting A=D−L−U of the matrix A, where D=diag(A) and L and U are the strictly lower and strictly upper triangular parts of −A, respectively, the generalized Gauss-Seidel (GGS) iteration method was proposed in [25] to solve the AVE (1.1), which takes the form
(D−L)x(k+1)−|x(k+1)|=Ux(k)+b, k=0,1,….
(1.4)
Numerical experiments showed the efficiency of the GGS method.
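Each GGS step solves a lower-triangular absolute-value system, which can be done by forward substitution with a closed-form scalar solve in each row (the scalar formula is the β=1 case of Lemma 3.1 below). A minimal sketch, assuming the diagonal entries of A exceed 1; the 2×2 data are hypothetical.

```python
# GGS iteration: (D - L) y - |y| = U x + b, with A = D - L - U,
# solved row by row:  a_ii * y_i - |y_i| = c_i  whenever a_ii > 1.

def ggs_solve(A, b, tol=1e-10, kmax=500):
    n = len(A)
    x = [0.0] * n
    for _ in range(kmax):
        y = [0.0] * n
        for i in range(n):
            # c_i collects b_i, the strict upper part of A applied to the
            # old iterate, and the strict lower part applied to new entries
            c = b[i]
            c -= sum(A[i][j] * x[j] for j in range(i + 1, n))  # U x (U = -strict upper)
            c -= sum(A[i][j] * y[j] for j in range(i))         # forward substitution
            # scalar solve of a_ii * y_i - |y_i| = c
            y[i] = c / (A[i][i] - 1) if c >= 0 else c / (A[i][i] + 1)
        if max(abs(u - v) for u, v in zip(y, x)) < tol:
            return y
        x = y
    return x

# hypothetical data: diagonal entries exceed 1 and D - L - I is strictly
# diagonally dominant; b is chosen so the exact solution is x = (1, -1)
A = [[4.0, 1.0], [1.0, 4.0]]
b = [2.0, -4.0]
x = ggs_solve(A, b)
```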
In this paper, inspired by the work in [25] and based on the AOR iteration method, a generalization of the AOR iteration method (GAOR) is presented to solve the AVE (1.1), and its convergence conditions are discussed in detail. Numerical experiments demonstrate the effectiveness of the GAOR method.
The remainder of the paper is organized as follows: Section 2 reviews some preparatory knowledge. Section 3 presents the GAOR iteration method and its convergence conditions. Section 4 reports some numerical results to show the efficiency of the GAOR method. Finally, Section 5 draws some conclusions.
2.
Preparatory knowledge
Let C=(cij)∈Rn×n. 'diag(C)' denotes the diagonal part of the matrix C. For A=(aij),B=(bij)∈Rn×n, we write A≥B if aij≥bij for i,j=1,2,…,n. A matrix A is called non-negative if A≥0; further, A−B≥0 if and only if A≥B. These definitions carry over immediately to vectors by identifying them with n×1 matrices. A matrix A=(aij)∈Rn×n is called a Z-matrix if aij≤0 for i≠j; an L-matrix if A is a Z-matrix and aii>0 for i=1,…,n; an M-matrix if A is a Z-matrix and A−1≥0; and an H-matrix if its comparison matrix ⟨A⟩=(⟨a⟩ij)∈Rn×n is an M-matrix, where
⟨a⟩ij = |aij| for i=j and ⟨a⟩ij = −|aij| for i≠j, i,j=1,2,…,n.
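As a quick illustration, the comparison matrix can be computed componentwise; this small helper and its test matrix are only for demonstration.

```python
def comparison_matrix(A):
    """Comparison matrix <A>: |a_ii| on the diagonal, -|a_ij| off it."""
    n = len(A)
    return [[abs(A[i][j]) if i == j else -abs(A[i][j]) for j in range(n)]
            for i in range(n)]

# e.g., for a hypothetical 2x2 matrix:
C = comparison_matrix([[2.0, -1.0], [3.0, 5.0]])
# C == [[2.0, -1.0], [-3.0, 5.0]]
```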
If A is an M-matrix and B is a Z-matrix, then A≤B implies that B is an M-matrix [26]. A matrix A is irreducible if the directed graph associated to A is strongly connected [27].
Let A=M−N. It is called a matrix splitting of A if det(M)≠0; called regular if M−1≥0 and N≥0; an M-splitting of A if M is an M-matrix and N≥0. Evidently, if A=M−N is an M-splitting and A is a nonsingular M-matrix, then ρ(M−1N)<1, where ρ(⋅) denotes the spectral radius of the matrix.
Lemma 2.1 ([28]). Let A be an H-matrix. Then |A−1|≤⟨A⟩−1.
3.
The GAOR iteration method
In this section, the generalized AOR (GAOR) method is introduced to solve the AVE (1.1). For this purpose, we split A into
A = (1/ω)(M − N),
(3.1)
with
M = Ω + D − rL and N = Ω + (1−ω)D + (ω−r)L + ωU,
where Ω is a positive diagonal matrix, ω and r are real parameters with ω≠0, D=diag(A), and L and U are as previously defined. Based on the matrix splitting (3.1), the AVE (1.1) can be rewritten as the fixed-point equation
(1/ω)(Ω + D − rL)x − |x| = (1/ω)[Ω + (1−ω)D + (ω−r)L + ωU]x + b
or
(Ω + D − rL)x − ω|x| = [Ω + (1−ω)D + (ω−r)L + ωU]x + ωb,
(3.2)
then, for k=0,1,…, we can define the GAOR method for solving (1.1) as follows:
(Ω + D − rL)x(k+1) − ω|x(k+1)| = [Ω + (1−ω)D + (ω−r)L + ωU]x(k) + ωb.
(3.3)
Lemma 3.1. Let α>β>0. Then αx−β|x|=b with b∈R has only one solution in R. If b≥0, then the solution is x = b/(α−β); otherwise, x = b/(α+β).
Proof. The result follows directly by considering the cases x≥0 and x<0, for which αx−β|x| equals (α−β)x and (α+β)x, respectively; the details are omitted here.
Next, based on Lemma 3.1, we present Algorithm 1 to carry out each step of the GAOR iteration method (3.3), assuming all the diagonal entries of the matrix Ω+D are greater than ω.
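Since the matrix Ω+D−rL is lower triangular, one GAOR step (3.3) can be carried out by forward substitution, where each row reduces to the scalar equation of Lemma 3.1 with α = ωii+aii and β = ω. The following is an illustrative sketch of such a step, not the paper's Algorithm 1; the parameter choices and the 2×2 test data are hypothetical.

```python
def scalar_solve(alpha, beta, c):
    """Unique root of alpha*x - beta*|x| = c for alpha > beta > 0 (Lemma 3.1)."""
    return c / (alpha - beta) if c >= 0 else c / (alpha + beta)

def gaor_step(A, b, x, omega_diag, w, r):
    """One GAOR sweep for A x - |x| = b, with A = D - L - U and Omega diagonal."""
    n = len(A)
    y = [0.0] * n
    for i in range(n):
        # entry of the right-hand side N x^{(k)} + w b, where
        # N = Omega + (1-w) D + (w-r) L + w U  (L, U taken from -A)
        c = (omega_diag[i] + (1 - w) * A[i][i]) * x[i] + w * b[i]
        c -= sum((w - r) * A[i][j] * x[j] for j in range(i))   # (w-r) L x
        c -= sum(w * A[i][j] * x[j] for j in range(i + 1, n))  # w U x
        # forward substitution through Omega + D - r L, then the scalar solve
        c -= sum(r * A[i][j] * y[j] for j in range(i))
        y[i] = scalar_solve(omega_diag[i] + A[i][i], w, c)
    return y

# hypothetical test problem: A is strictly diagonally dominant with positive
# diagonals (hence an H-matrix), Omega = I >= w I, and 0 < r <= w <= 1,
# so Theorem 3.2's conditions hold; the exact solution of the AVE is (1, -1)
A = [[4.0, 1.0], [1.0, 4.0]]
b = [2.0, -4.0]
x = [0.0, 0.0]
for _ in range(200):
    x = gaor_step(A, b, x, [1.0, 1.0], 0.9, 0.9)
```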
which is a contradiction. Therefore, y∗=x∗. This completes the proof.
When ω=r in (3.3), the GAOR method reduces to the GSOR method. Its convergence theory is given in Corollary 3.1.
Corollary 3.1. Let Ω be a positive diagonal matrix and let all singular values of the matrix Ω+D−ωL exceed ω with ω>0. If
‖I−ω(Ω+D−ωL)−1A‖2<1−ω‖(Ω+D−ωL)−1‖2,
(3.7)
then the GSOR method (3.3) converges to the unique solution of the AVE (1.1) for an arbitrary initial guess x(0)∈Rn.
Further, in Corollary 3.1, if ω=1, then the convergence theory of GGS is obtained and described in Corollary 3.2.
Corollary 3.2. Let Ω be a positive diagonal matrix and let all singular values of the matrix Ω+D−L exceed 1. If
‖I−(Ω+D−L)−1A‖2<1−‖(Ω+D−L)−1‖2,
(3.8)
then the GGS method (3.3) converges to the unique solution of the AVE (1.1) for an arbitrary initial guess x(0)∈Rn.
Remark 3.1. In Corollary 3.2, if Ω is the zero matrix, then the results of Corollary 3.2 are similar to Theorem 3 in [25]. In fact, if the condition 'the diagonal entries of A are greater than one and the matrix D−L−I is strictly row diagonally dominant' in Theorem 3 of [25] is replaced by 'all singular values of the matrix D−L exceed 1', then Theorem 3 of [25] still holds.
It is not difficult to see that the above theoretical results, including Theorem 3.1, Corollary 3.1 and Corollary 3.2, are a little too general and not easy to verify. To overcome this difficulty, an effective approach is to restrict the type of the matrix A. Next, we focus on the case where A is an H-matrix with positive diagonals, for which a simple convergence condition for the GAOR method (3.3) can be obtained, see Theorem 3.2.
Theorem 3.2. Let A be an H-matrix with positive diagonals and let the positive diagonal matrix Ω satisfy Ω≥ωI. If ω and r satisfy
0<r≤ω≤1,
(3.9)
then the GAOR method (3.3) converges to the unique solution of (1.1) for an arbitrary initial guess x(0)∈Rn.
Proof. Since A is an H-matrix with positive diagonals, we have
⟨A⟩=|D|−|L|−|U|.
Further,
⟨A⟩≤⟨D−L⟩≤⟨D−rL⟩.
It follows that matrix Ω+D−rL is an H-matrix with positive diagonals. It holds that
Since the matrix Ω+D−rL is an H-matrix with positive diagonals, the matrix ⟨Ω+D−rL⟩ is an M-matrix and ⟨Ω+D−rL⟩−1≥0. Noting that Ω≥ωI, it is easy to obtain that
ρ(ω⟨Ω+D−rL⟩−1) = max_i ω/(aii+ωii) < 1,
where aii+ωii denotes the i-th diagonal entry of the matrix Ω+D; the equality holds because ⟨Ω+D−rL⟩ is lower triangular with diagonal entries aii+ωii > ω. Hence, from (3.10), we get
Evidently, the GAOR method (3.3) converges to the unique solution of the AVE (1.1) if ρ(M̄−1N̄)<1. Noting that Ā is an M-matrix, the matrix M̄ is an M-matrix and N̄≥0. Then Ā=M̄−N̄ is an M-splitting, and therefore ρ(M̄−1N̄)<1. This completes the proof.
From Theorem 3.2, the convergence conditions of the corresponding GSOR and GGS methods for A being an H-matrix with positive diagonals can be obtained; they are omitted here.
4.
Numerical experiments
In this section, we present some numerical experiments to assess the efficiency of the GAOR method for solving the AVE (1.1). We compare GAOR with the AOR method in [17] in two respects: the number of iterations (denoted by IT) and the CPU time in seconds (denoted by CPU). We also investigate the generalized Newton method, the nonlinear HSS-like method [13] and the Picard-HSS method [14]. The starting iterate is the zero vector, and all iterations are terminated once the relative residual error satisfies
‖Ax(k)−|x(k)|−b‖2 / ‖b‖2 ≤ 10−6
or once the prescribed maximum number of iterations kmax=500 is exceeded. All tests are performed in MATLAB R2016b.
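The stopping test above is a plain relative residual, which can be sketched as follows; the 2×2 data in the example are hypothetical.

```python
def rel_residual(A, x, b):
    """Relative residual ||A x - |x| - b||_2 / ||b||_2 for the AVE."""
    n = len(x)
    r = [sum(A[i][j] * x[j] for j in range(n)) - abs(x[i]) - b[i]
         for i in range(n)]
    norm2 = lambda v: sum(t * t for t in v) ** 0.5
    return norm2(r) / norm2(b)

# at the exact solution x = (1, -1) of a hypothetical 2x2 problem the
# residual vanishes, so the criterion rel_residual(...) <= 1e-6 is met
res = rel_residual([[4.0, 1.0], [1.0, 4.0]], [1.0, -1.0], [2.0, -4.0])
```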
Example 4.1. ([25,30]) Let m be a prescribed positive integer and n=m2. Consider the AVE (1.1), in which A∈Rn×n is given by A=M̂+μI, where
To compare the GAOR method with the AOR method fairly, we choose the same parameters for both. Under this setting, Tables 1–4 list iteration results for Examples 4.1 and 4.2 with μ=2, Ω=0.1I and Ω=0.5I to illustrate the convergence behavior of the GAOR and AOR methods for different problem sizes n. We observe that both the GAOR and AOR methods compute a satisfactory approximation to the solution of the AVE. With ω and n fixed and r increasing, the iteration counts of the GAOR and AOR methods decrease. With ω and r fixed and n increasing, the iteration counts of the GAOR and AOR methods are hardly sensitive to the change. Further, we find that the GAOR method requires fewer iteration steps than the AOR method, but more CPU time. The extra time of the GAOR method is due to its componentwise implementation; optimizing this code to make the GAOR method more competitive is an interesting topic for future work.
Table 1.
IT and CPU for Example 4.1 with ω=0.9 and Ω=0.1I.
Tables 5 and 6 list the numerical results of the generalized Newton method, where 'GN' denotes the generalized Newton method. Compared with the GAOR and AOR methods, the generalized Newton method requires the fewest iteration steps but takes the most time; among the three methods, its computational efficiency is the worst.
Finally, we compare the GAOR method with the nonlinear HSS-like method and the Picard-HSS method under the same parameter, see Table 7. Since the nonlinear HSS-like method and the Picard-HSS method contain only one parameter, we consider the GAOR method with ω=r, i.e., the GSOR method. When the Picard-HSS method is applied, the stopping criterion for its inner iterations is that adopted in [13]. In Table 7, 'GSOR', 'NHSS' and 'PHSS' denote the GSOR method, the nonlinear HSS-like method and the Picard-HSS method, respectively; '−' denotes that the number of iteration steps exceeds 500; and '⋅(⋅)' denotes the outer (inner) iteration steps. The numerical results in Table 7 show that the GSOR method is more effective than the nonlinear HSS-like method and the Picard-HSS method.
Overall, based on the numerical results, the GAOR method displays good performance among the five tested methods for solving the absolute value equations.
5.
Conclusions
In this paper, we have designed a generalized AOR (GAOR) method to solve the absolute value equations. Some convergence properties of the proposed GAOR method have been obtained, and numerical experiments have been reported to verify its efficiency.
Acknowledgements
The author thanks the five anonymous referees for their constructive suggestions and helpful comments, which led to significant improvement of the original manuscript. The author is partially supported by the National Natural Science Foundation of China (11961082).
Conflict of interest
The author declares there is no conflict of interest.
References
[1]
J. Rohn, A theorem of the alternatives for the equation Ax+B|x|=b, Linear Multilinear A., 52 (2004), 421–426. https://doi.org/10.1080/0308108042000220686 doi: 10.1080/0308108042000220686
[2]
O. L. Mangasarian, Absolute value programming, Comput. Optim. Appl., 36 (2007), 43–53. https://doi.org/10.1007/s10589-006-0395-5 doi: 10.1007/s10589-006-0395-5
[3]
O. L. Mangasarian, Absolute value equations via concave minimization, Optim. Lett., 1 (2007), 1–8. https://doi.org/10.1007/s11590-006-0005-6 doi: 10.1007/s11590-006-0005-6
[4]
R. W. Cottle, G. B. Dantzig, Complementary pivot theory of mathematical programming, Linear Algebra Appl., 1 (1968), 103–125. https://doi.org/10.1016/0024-3795(68)90052-9 doi: 10.1016/0024-3795(68)90052-9
[6]
O. L. Mangasarian, A generalized Newton method for absolute value equations, Optim. Lett., 3 (2009), 101–108. https://doi.org/10.1007/s11590-008-0094-5 doi: 10.1007/s11590-008-0094-5
[7]
L. Caccetta, B. Qu, G. L. Zhou, A globally and quadratically convergent method for absolute value equations, Comput. Optim. Appl., 48 (2011), 45–58. https://doi.org/10.1007/s10589-009-9242-9 doi: 10.1007/s10589-009-9242-9
[8]
S. L. Hu, Z. H. Huang, Q. Zhang, A generalized Newton method for absolute value equations associated with second order cones, J. Comput. Appl. Math., 235 (2011), 1490–1501. https://doi.org/10.1016/j.cam.2010.08.036 doi: 10.1016/j.cam.2010.08.036
[9]
S. Ketabchi, H. Moosaei, An efficient method for optimal correcting of absolute value equations by minimal changes in the right hand side, Comput. Math. Appl., 64 (2012), 1882–1885. https://doi.org/10.1016/j.camwa.2012.03.015 doi: 10.1016/j.camwa.2012.03.015
[10]
C. Zhang, Q. J. Wei, Global and finite convergence of a generalized Newton method for absolute value equations, J. Optim. Theory. Appl., 143 (2009), 391–403. https://doi.org/10.1007/s10957-009-9557-9 doi: 10.1007/s10957-009-9557-9
[11]
J. Rohn, V. Hooshyarbakhsh, R. Farhadsefat, An iterative method for solving absolute value equations and sufficient conditions for unique solvability, Optim. Lett., 8 (2014), 35–44. https://doi.org/10.1007/s11590-012-0560-y doi: 10.1007/s11590-012-0560-y
[12]
Z. Z. Bai, G. H. Golub, M. K. Ng, Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, SIAM J. Matrix Anal. Appl., 24 (2003), 603–626. https://doi.org/10.1137/S0895479801395458 doi: 10.1137/S0895479801395458
[13]
M. Z. Zhu, Y. E. Qi, The nonlinear HSS-like iteration method for absolute value equations, IAENG Inter. J. Appl. Math., 48 (2018), 312–316.
[14]
D. K. Salkuyeh, The Picard-HSS iteration method for absolute value equations, Optim. Lett., 8 (2014), 2191–2202. https://doi.org/10.1007/s11590-014-0727-9 doi: 10.1007/s11590-014-0727-9
[15]
J. J. Zhang, The relaxed nonlinear PHSS-like iteration method for absolute value equations, Appl. Math. Comput., 265 (2015), 266–274. https://doi.org/10.1016/j.amc.2015.05.018 doi: 10.1016/j.amc.2015.05.018
[16]
C. X. Li, On the modified Hermitian and skew-Hermitian splitting iteration methods for a class of the weakly absolute value equations, J. Inequal. Appl., 2016 (2016), 260. https://doi.org/10.1186/s13660-016-1202-1 doi: 10.1186/s13660-016-1202-1
[17]
C. X. Li, A preconditioned AOR iterative method for the absolute value equations, Int. J. Comp. Methods, 14 (2017), 1750016. https://doi.org/10.1142/S0219876217500165 doi: 10.1142/S0219876217500165
[18]
X. M. Gu, T. Z. Huang, H. B. Li, S. F. Wang, L. Li, Two CSCS-based iteration methods for solving absolute value equations, J. Appl. Anal. Comput., 7 (2017), 1336–1356. https://doi.org/10.11948/2017082 doi: 10.11948/2017082
[19]
J. He, Y. M. Liu, J. K. Tian, Two numerical iteration methods for solving absolute value equations, ScienceAsia, 44 (2018), 40–45. https://doi.org/10.2306/scienceasia1513-1874.2018.44.040 doi: 10.2306/scienceasia1513-1874.2018.44.040
[20]
Z. S. Yu, L. Li, Y. Yuan, A modified multivariate spectral gradient algorithm for solving absolute value equations, Appl. Math. Lett., 121 (2021), 107461. https://doi.org/10.1016/j.aml.2021.107461 doi: 10.1016/j.aml.2021.107461
[21]
C. R. Chen, Y. N. Yang, D. M. Yu, D. R. Han, An inverse-free dynamical system for solving the absolute value equations, Appl. Numer. Math., 168 (2021), 170–181. https://doi.org/10.1016/j.apnum.2021.06.002 doi: 10.1016/j.apnum.2021.06.002
Y. F. Ke, The new iteration algorithm for absolute value equation, Appl. Math. Lett., 99 (2020), 105990. https://doi.org/10.1016/j.aml.2019.07.021 doi: 10.1016/j.aml.2019.07.021
[24]
Y. F. Ke, C. F. Ma, SOR-like iteration method for solving absolute value equations, Appl. Math. Comput., 311 (2017), 195–202. https://doi.org/10.1016/j.amc.2017.05.035 doi: 10.1016/j.amc.2017.05.035
[25]
V. Edalatpour, D. Hezari, D. K. Salkuyeh, A generalization of the Gauss-Seidel iteration method for solving absolute value equations, Appl. Math. Comput., 293 (2017), 156–167. https://doi.org/10.1016/j.amc.2016.08.020 doi: 10.1016/j.amc.2016.08.020
S. L. Wu, C. X. Li, Two-sweep modulus-based matrix splitting iteration methods for linear complementarity problems, J. Comput. Appl. Math., 302 (2016), 327–339. https://doi.org/10.1016/j.cam.2016.02.011 doi: 10.1016/j.cam.2016.02.011
[29]
J. L. Dong, M. Q. Jiang, A modified modulus method for symmetric positive-definite linear complementarity problems, Numer. Linear Algebra Appl., 16 (2009), 129–143. https://doi.org/10.1002/nla.609 doi: 10.1002/nla.609
[30]
C. M. Elliott, J. R. Ockendon, Weak and Variational Methods for Moving Boundary Problems, Pitman, London, 1982. https://doi.org/10.1137/1026023
[31]
Z. Z. Bai, Modulus-based matrix splitting iteration methods for linear complementarity problems, Numer. Linear Algebra Appl., 17 (2010), 917–933. https://doi.org/10.1002/nla.680 doi: 10.1002/nla.680