Abstract
Picard-type methods are efficient methods for solving the absolute value equation Ax−|x|=b. To further improve the performance of Picard iteration method, a new inexact Picard iteration method, named Picard-SHSS iteration method, is proposed to solve the absolute value equation. The sufficient condition for the convergence of the proposed method for solving the absolute value equation is given. A numerical example is given to demonstrate the effectiveness of the new method.
1. Introduction
The absolute value equation (AVE) of the form
Ax−|x|=b,
(1.1)
where A∈Rn×n, x,b∈Rn, and |x| denotes the component-wise absolute value of the vector x, i.e., |x|=(|x1|,⋯,|xn|)T, arises in a variety of optimization problems, e.g., the linear complementarity problem, linear programming, and convex quadratic programming; see, for example, [7,17,18,20,21]. It is a special case of the generalized absolute value equation of the form
Ax+B|x|=b,
(1.2)
where B∈Rn×n. The generalized absolute value equation (1.2) was introduced in [21] and investigated in a more general context [17,18,20].
Conditions for the unique solvability of AVE (1.1) and the generalized absolute value equation (1.2) have been given in [10,12,13,14,18,21,22,25]. For example, Mangasarian and Meyer [18] showed that AVE (1.1) has a unique solution for any right-hand side b when the smallest singular value of the matrix A is greater than one. When AVE (1.1) has a unique solution, how to find it is a major research topic. In this study, we consider iteration methods for solving AVE (1.1). In recent years, a large variety of methods for solving AVE (1.1) has appeared in the literature [1,4,5,6,8,15,16,19,22,24,25]. Among these methods, Picard-type methods have attracted particular attention. Rohn et al. [22] proposed the Picard iteration method to solve AVE (1.1):
x(k+1)=A−1(|x(k)|+b),k=0,1,2,⋯,
(1.3)
where x(0) is the initial guess. From (1.3), we see that a linear system with the constant coefficient matrix A must be solved at each iteration of the Picard method. To improve the performance of the Picard method, this linear system can be solved by an inner iteration, which leads to an inexact Picard iteration method. For example, Salkuyeh [24] suggested using the Hermitian and skew-Hermitian splitting (HSS) iteration method [2] to approximate the solution of the linear system with A at each Picard iteration, and proposed the Picard-HSS method for solving AVE (1.1). In fact, the Picard-HSS method was originally proposed by Bai and Yang [3] for weakly nonlinear systems. Sufficient conditions that guarantee the convergence of the Picard-HSS method, together with numerical experiments showing its effectiveness for AVE (1.1), are given in [24].
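The exact Picard iteration (1.3) is a few lines of code. The following is a minimal dense-matrix sketch of our own (not the authors' code); the 2×2 test matrix is a hypothetical example whose smallest singular value exceeds one, so the AVE has a unique solution and the iteration converges.

```python
import numpy as np

def picard_ave(A, b, tol=1e-10, max_iter=500):
    """Picard iteration x^(k+1) = A^{-1}(|x^(k)| + b) for Ax - |x| = b."""
    Ainv = np.linalg.inv(A)  # sketch only; in practice, factor A once and reuse
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x = Ainv @ (np.abs(x) + b)
        # stop when the relative nonlinear residual is small enough
        if np.linalg.norm(A @ x - np.abs(x) - b) <= tol * np.linalg.norm(b):
            break
    return x

# Hypothetical example: singular values of A are 3 and 5, both > 1.
A = np.array([[4.0, -1.0], [-1.0, 4.0]])
x_true = np.array([1.0, -2.0])
b = A @ x_true - np.abs(x_true)  # right-hand side with known exact solution
x = picard_ave(A, b)
```

Since ‖A−1‖2 = 1/3 < 1 here, each Picard step contracts the error, and the iterate recovers x_true to high accuracy.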
Note that each step of the inner HSS iteration of the Picard-HSS method [3,24] requires solving two linear subsystems, one with a Hermitian coefficient matrix and the other with a skew-Hermitian coefficient matrix. The subsystem with the Hermitian coefficient matrix can be solved easily by the CG method; the subsystem with the skew-Hermitian coefficient matrix, however, is not easy to solve, and in some cases is as difficult as the original linear system. To avoid solving a linear subsystem with a skew-Hermitian coefficient matrix in the inner iteration of the inexact Picard method, in this paper we use the single-step HSS (SHSS) method [9] to approximate the solution of the linear system with coefficient matrix A, and present a new inexact Picard method, abbreviated as the Picard-SHSS iteration method.
The rest of this paper is organized as follows. In Section 2, after reviewing some notation and the SHSS iteration method, the Picard-SHSS iteration method for solving AVE (1.1) is described, and its convergence properties are studied. Numerical experiments are presented in Section 3 to show the feasibility and effectiveness of the Picard-SHSS method.
2. The Picard-SHSS method
For convenience, some notation, definitions, and results that will be used below are given here. For a matrix A∈Rn×n, AT denotes the transpose of A and ρ(A) denotes the spectral radius of A. A is said to be positive definite if its symmetric part H=(A+AT)/2 is positive definite; see [11] for the definition of a positive definite matrix in the complex setting.
Let A∈Rn×n be a positive definite matrix, and let A=H+S be its Hermitian and skew-Hermitian splitting with H=(A+AT)/2 and S=(A−AT)/2. Based on this splitting, Bai et al. [2] presented the HSS method

(αI+H)x(l+1/2)=(αI−S)x(l)+q,
(αI+S)x(l+1)=(αI−H)x(l+1/2)+q,
(2.1)

to solve the positive definite system of linear equations Ax=q, where α is a positive iteration parameter and I is the identity matrix. Two linear subsystems need to be solved at each step of the HSS iteration method: one with coefficient matrix αI+H and the other with coefficient matrix αI+S; see [2] for more details. The challenge of the HSS iteration method lies in solving the linear subsystem with αI+S, which in some cases is as difficult as the original linear system. To avoid solving a linear subsystem with αI+S, the SHSS method was proposed recently [9]. The iteration scheme of the SHSS method for solving the system of linear equations Ax=q can be written as
(αI+H)x(l+1)=(αI−S)x(l)+q.
(2.2)
It has been proved in [9] that, under a loose restriction on the iteration parameter α, the SHSS method is convergent to the unique solution of the linear system Ax=q for any initial guess x(0)∈Rn.
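The single-step scheme (2.2) is straightforward to implement. Below is a minimal real-arithmetic sketch of our own (not code from [9]); the matrix M = αI + H is symmetric positive definite, so in practice it would be factored once by Cholesky and the factor reused across iterations.

```python
import numpy as np

def shss(A, q, alpha, tol=1e-10, max_iter=1000):
    """SHSS iteration (alpha*I + H) x^(l+1) = (alpha*I - S) x^(l) + q."""
    n = A.shape[0]
    H = (A + A.T) / 2          # symmetric (Hermitian) part of A
    S = (A - A.T) / 2          # skew-symmetric part of A
    M = alpha * np.eye(n) + H  # SPD: factor once with Cholesky in practice
    N = alpha * np.eye(n) - S
    x = np.zeros(n)
    for _ in range(max_iter):
        x = np.linalg.solve(M, N @ x + q)
        if np.linalg.norm(A @ x - q) <= tol * np.linalg.norm(q):
            break
    return x

# Hypothetical positive definite, non-symmetric test matrix.
A = np.array([[4.0, 1.0], [-1.0, 4.0]])
x_true = np.array([1.0, 2.0])
q = A @ x_true
x = shss(A, q, alpha=1.0)
```

For this A we have H = 4I and σmax(S) = 1 < λmin(H), so by [9, Theorem 2.1] any α > 0 is admissible; with α = 1 the iteration matrix has spectral radius √2/5 and the iteration converges quickly.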
When A is a positive definite matrix, using the HSS method (2.1) as an inner iteration in the Picard method (1.3), Salkuyeh [24] proposed the following Picard-HSS method for solving AVE (1.1).
Method 2.1.(The Picard-HSS iteration method) Let A∈Rn×n be a positive definite matrix, and let H=(A+AT)/2 and S=(A−AT)/2 be the Hermitian and skew-Hermitian parts of A, respectively. Given an initial guess x(0)∈Rn and a sequence {lk}∞k=0 of positive integers, compute x(k+1) for k=0,1,2,⋯ using the following iteration scheme until {x(k)} satisfies the stopping criterion:
(a). Set x(k,0)=x(k);
(b). For l=0,1,⋯,lk−1, solve the following linear systems to obtain x(k,l+1):

(αI+H)x(k,l+1/2)=(αI−S)x(k,l)+|x(k)|+b,
(αI+S)x(k,l+1)=(αI−H)x(k,l+1/2)+|x(k)|+b,

where α is a positive constant and I is the identity matrix;
(c). Set x(k+1)=x(k,lk).
To avoid the linear subsystem with αI+S, we instead use the SHSS iteration (2.2) as the inner iteration, that is,

M(α)x(k,l+1)=N(α)x(k,l)+|x(k)|+b,
(2.3)

where M(α)=αI+H and N(α)=αI−S, α is a positive constant, {lk}∞k=0 is a prescribed sequence of positive integers, and x(k,0)=x(k) is the starting point of the inner SHSS iteration at the k-th outer Picard iteration. This leads to the following inexact Picard iteration method, called the Picard-SHSS iteration method, for solving AVE (1.1).
Method 2.2.(The Picard-SHSS iteration method) Let A∈Rn×n be a positive definite matrix, and let H=(A+AT)/2 and S=(A−AT)/2 be the Hermitian and skew-Hermitian parts of A, respectively. Given an initial guess x(0)∈Rn and a sequence {lk}∞k=0 of positive integers, compute x(k+1) for k=0,1,2,⋯ using the following iteration scheme:
(a). Set x(k,0)=x(k);
(b). For l=0,1,⋯,lk−1, solve the following linear system to obtain x(k,l+1):
(αI+H)x(k,l+1)=(αI−S)x(k,l)+|x(k)|+b,
where α is a positive constant and I is the identity matrix;
(c). Set x(k+1)=x(k,lk);
(d). If x(k+1) satisfies ‖Ax(k+1)−|x(k+1)|−b‖2/‖b‖2≤10^−7, then stop; otherwise, set x(k)=x(k+1) and go back to (a).
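Steps (a)–(d) of Method 2.2 translate directly into code. The following is a minimal dense-matrix sketch under the stated assumptions, using a small hypothetical A rather than the test problem of Section 3:

```python
import numpy as np

def picard_shss(A, b, alpha, lk=10, tol=1e-7, max_outer=500):
    """Picard-SHSS (Method 2.2): lk inner SHSS steps per outer Picard step."""
    n = A.shape[0]
    H = (A + A.T) / 2
    S = (A - A.T) / 2
    M = alpha * np.eye(n) + H
    N = alpha * np.eye(n) - S
    x = np.zeros(n)
    for _ in range(max_outer):
        rhs = np.abs(x) + b          # right-hand side |x^(k)| + b frozen per outer step
        y = x.copy()                 # (a): x^(k,0) = x^(k)
        for _ in range(lk):          # (b): lk inner SHSS iterations
            y = np.linalg.solve(M, N @ y + rhs)
        x = y                        # (c): x^(k+1) = x^(k,lk)
        if np.linalg.norm(A @ x - np.abs(x) - b) <= tol * np.linalg.norm(b):
            break                    # (d): relative residual below 1e-7
    return x

# Hypothetical test: singular values of A are both sqrt(17) > 1, so eta < 1.
A = np.array([[4.0, 1.0], [-1.0, 4.0]])
x_true = np.array([1.0, -2.0])
b = A @ x_true - np.abs(x_true)
x = picard_shss(A, b, alpha=1.0)
```

Here η = ‖A−1‖2 = 1/√17 ≈ 0.24 and, with α = 1, ‖T(α)‖2 = √2/5, so the hypotheses of Theorem 2.1 below hold comfortably and the method converges.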
Compared with the Picard-HSS iteration method [24], the Picard-SHSS iteration method avoids the linear subsystem with αI+S in its inner iteration. The linear subsystem with αI+H in the Picard-SHSS iteration method can be solved efficiently, either exactly by a sparse Cholesky factorization or inexactly by the preconditioned conjugate gradient method [23].
The next theorem provides a sufficient condition for the convergence of the Picard-SHSS method for solving AVE (1.1).
Theorem 2.1.Let A∈Rn×n be a positive definite matrix, and let H=(A+AT)/2 and S=(A−AT)/2 be its Hermitian and skew-Hermitian parts, respectively. Let η=‖A−1‖2<1, and let α be a constant such that α>max{0,(σmax²−λmin²)/(2λmin)}, where λmin is the smallest eigenvalue of H and σmax is the largest singular value of S. Then AVE (1.1) has a unique solution x∗, and for any initial guess x(0)∈Rn and any sequence of positive integers lk, k=0,1,⋯, the iteration sequence {x(k)}∞k=0 produced by the Picard-SHSS iteration method converges to x∗, provided that lk≥N for all k=0,1,⋯, where N is a natural number satisfying

‖T(α)^s‖2<(1−η)/(1+η) for all s≥N,

with T(α)=M(α)−1N(α).
Proof. The proof is similar to that of [24, Theorem 1]; we include it to keep the paper self-contained. Based on the iteration scheme (2.3), the (k+1)-th iterate of the Picard-SHSS iteration method can be expressed as

x(k+1)=T(α)^{lk}x(k)+∑_{j=0}^{lk−1}T(α)^j M(α)−1(|x(k)|+b).
(2.6)

It follows from [9, Theorem 2.1] that ρ(T(α))<1 when α>max{0,(σmax²−λmin²)/(2λmin)}. In this case, some calculations yield ∑_{j=0}^{lk−1}T(α)^j M(α)−1=(I−T(α)^{lk})A−1, and since the solution x∗ satisfies x∗=T(α)^{lk}x∗+(I−T(α)^{lk})A−1(|x∗|+b), subtracting this from (2.6) gives

x(k+1)−x∗=T(α)^{lk}(x(k)−x∗)+(I−T(α)^{lk})A−1(|x(k)|−|x∗|).

Noting that ‖|x|−|y|‖2≤‖x−y‖2 for any x,y∈Rn, it then follows that
‖x(k+1)−x∗‖2≤(‖T(α)^{lk}‖2(1+η)+η)‖x(k)−x∗‖2.
Since ρ(T(α))<1, ‖T(α)^s‖2 tends to 0 as s tends to infinity. Therefore, there is a natural number N such that

‖T(α)^s‖2<ε:=(1−η)/(1+η) for all s≥N.

Now, if lk≥N for all k=0,1,⋯, then ‖T(α)^{lk}‖2(1+η)+η<ε(1+η)+η=1, hence ‖x(k+1)−x∗‖2<‖x(k)−x∗‖2, and the iteration sequence {x(k)}∞k=0 produced by the Picard-SHSS iteration method converges to x∗.
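The parameter condition of Theorem 2.1 is easy to check numerically. The sketch below, our own illustration with a hypothetical 2×2 positive definite matrix, computes the lower bound max{0,(σmax²−λmin²)/(2λmin)} and verifies that ρ(T(α))<1 for an admissible α:

```python
import numpy as np

A = np.array([[4.0, 2.0], [-2.0, 4.0]])  # hypothetical positive definite matrix
n = A.shape[0]
H = (A + A.T) / 2
S = (A - A.T) / 2
lam_min = np.linalg.eigvalsh(H).min()               # smallest eigenvalue of H
sig_max = np.linalg.svd(S, compute_uv=False).max()  # largest singular value of S
alpha_lb = max(0.0, (sig_max**2 - lam_min**2) / (2 * lam_min))
alpha = alpha_lb + 0.5                              # any alpha above the bound
T = np.linalg.solve(alpha * np.eye(n) + H, alpha * np.eye(n) - S)
rho = max(abs(np.linalg.eigvals(T)))                # spectral radius of T(alpha)
```

For this matrix λmin = 4 and σmax = 2, so the bound is max{0, −1.5} = 0 and every positive α makes the inner SHSS iteration convergent.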
In actual computation, the residual-updating form of the Picard-SHSS iteration method is more convenient; it can be written as follows.
The Picard-SHSS iteration method (residual-updating variant): Let A∈Rn×n be non-Hermitian positive definite, and let H=(A+AT)/2 and S=(A−AT)/2 be the Hermitian and skew-Hermitian parts of A, respectively. Given an initial guess x(0)∈Rn and a sequence {lk}∞k=0 of positive integers, compute x(k+1) for k=0,1,2,⋯ using the following iteration scheme until {x(k)} satisfies the stopping criterion:
(a). Set s(k,0)=0 and b(k)=|x(k)|+b−Ax(k);
(b). For l=0,1,⋯,lk−1, solve the following linear system to obtain s(k,l+1):
(αI+H)s(k,l+1)=(αI−S)s(k,l)+b(k),
where α is a positive constant and I is the identity matrix;
(c). Set x(k+1)=x(k)+s(k,lk);
(d). If x(k+1) satisfies ‖Ax(k+1)−|x(k+1)|−b‖2/‖b‖2≤10^−7, then stop; otherwise, set x(k)=x(k+1) and go back to (a).
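The residual-updating variant differs from Method 2.2 only in the bookkeeping: the inner SHSS iteration is applied to the correction equation As=b(k) with zero starting guess, and since M(α)−N(α)=A, one can verify that in exact arithmetic it produces the same iterates as Method 2.2. A minimal sketch of our own:

```python
import numpy as np

def picard_shss_ru(A, b, alpha, lk=10, tol=1e-7, max_outer=500):
    """Residual-updating Picard-SHSS: inner SHSS on A s = b^(k), s^(k,0) = 0."""
    n = A.shape[0]
    H = (A + A.T) / 2
    S = (A - A.T) / 2
    M = alpha * np.eye(n) + H
    N = alpha * np.eye(n) - S
    x = np.zeros(n)
    for _ in range(max_outer):
        bk = np.abs(x) + b - A @ x   # (a): current nonlinear residual b^(k)
        s = np.zeros(n)              #      s^(k,0) = 0
        for _ in range(lk):          # (b): lk inner SHSS steps on the correction
            s = np.linalg.solve(M, N @ s + bk)
        x = x + s                    # (c): x^(k+1) = x^(k) + s^(k,lk)
        if np.linalg.norm(A @ x - np.abs(x) - b) <= tol * np.linalg.norm(b):
            break                    # (d): stopping criterion
    return x

# Same hypothetical test problem as before.
A = np.array([[4.0, 1.0], [-1.0, 4.0]])
x_true = np.array([1.0, -2.0])
b = A @ x_true - np.abs(x_true)
x = picard_shss_ru(A, b, alpha=1.0)
```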
3. Numerical experiments
In this section, we give an example with numerical experiments to show the effectiveness of the Picard-SHSS iteration method for solving AVE (1.1). To this end, the numerical properties of the Picard-HSS and Picard-SHSS methods are examined and compared experimentally. We use the residual-updating versions of the Picard-HSS iteration method [24] and the Picard-SHSS iteration method.
All the numerical experiments presented in this section were computed in double precision using MATLAB R2012b on an Intel (R) Core (TM) i5-2400 CPU at 3.10 GHz with 4.00 GB of RAM. All runs are started from the initial zero vector and terminated if the current relative residual satisfies
RES:=‖Ax(k)−|x(k)|−b‖2/‖b‖2≤10^−7,
where x(k) is the solution computed by each method at iteration k; a maximum of 500 iterations is allowed. In addition, the stopping criterion for the inner iterations of the Picard-HSS and Picard-SHSS methods is set to
‖b(k)−As(k,l)‖2/‖b(k)‖2≤0.01,
and a maximum of 10 inner iterations (lk=10, k=0,1,2,⋯) is used.
The coefficient matrix A of AVE (1.1) is given by
A=Tx⊗Im+Im⊗Ty+pIn,
(3.1)
where Im and In are the identity matrices of order m and n with n=m2, ⊗ denotes the Kronecker product, and Tx and Ty are the tridiagonal matrices Tx=tridiag(t2,t1,t3)m×m and Ty=tridiag(t2,0,t3)m×m with t1=4, t2=−1−Re, t3=−1+Re. Here Re=(qh)/2 and h=1/(m+1) are the mesh Reynolds number and the equidistant step size, respectively, and q is a positive constant. In fact, the matrix A arises from the finite difference approximation of the two-dimensional convection-diffusion equation

−(uxx+uyy)+q(ux+uy)+pu=f(x,y), (x,y)∈Ω, with u(x,y)=0 on ∂Ω,

where Ω=(0,1)×(0,1), ∂Ω is its boundary, q is a positive constant measuring the magnitude of the convective terms, and p is a real number. Applying the five-point finite difference scheme to the diffusive terms and the central difference scheme to the convective terms yields the matrix A. For every nonnegative q, the matrix A is in general non-symmetric positive definite [24]. The right-hand side vector of AVE (1.1) is chosen such that the vector x=(x1,x2,⋯,xn)T with xi=(−1)^i i, i=1,2,⋯,n, is the exact solution.
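The test matrix (3.1) is easy to assemble with Kronecker products. The sketch below is our own construction, kept dense for simplicity (a sparse format would be used for large m); it builds A and checks positive definiteness via the symmetric part:

```python
import numpy as np

def tridiag(sub, diag, sup, m):
    """m-by-m tridiagonal matrix with constant sub-, main, and super-diagonals."""
    return (np.diag(np.full(m - 1, sub), -1)
            + np.diag(np.full(m, diag))
            + np.diag(np.full(m - 1, sup), 1))

def build_A(m, q, p):
    """A = Tx (x) I_m + I_m (x) Ty + p*I_n of (3.1), with n = m^2."""
    h = 1.0 / (m + 1)
    Re = q * h / 2.0                      # mesh Reynolds number
    t1, t2, t3 = 4.0, -1.0 - Re, -1.0 + Re
    Tx = tridiag(t2, t1, t3, m)
    Ty = tridiag(t2, 0.0, t3, m)
    I = np.eye(m)
    return np.kron(Tx, I) + np.kron(I, Ty) + p * np.eye(m * m)

A = build_A(m=8, q=1.0, p=0.0)
# Positive definiteness: all eigenvalues of the symmetric part are positive.
lam_min = np.linalg.eigvalsh((A + A.T) / 2).min()
```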
In our numerical experiments, the matrix A in AVE (1.1) is defined by (3.1) with different values of q (q=0, 1, 10, and 100) and different values of p (p=0 and −1). The parameters used in the Picard-HSS and Picard-SHSS iteration methods are chosen to be those that yield the fewest iteration steps; see Tables 1 and 2. In Tables 3 and 4, we present the numerical results for the Picard-HSS and Picard-SHSS iteration methods: the elapsed CPU time in seconds (denoted by CPU), the number of iterations to convergence (denoted by IT), and the relative residuals (denoted by RES).
Table 1.
The values of α for Picard-HSS and Picard-SHSS methods (p=0).
From Tables 3 and 4, we see that both the Picard-HSS and Picard-SHSS iteration methods successfully produce approximate solutions to AVE (1.1) for all of the problem sizes n=m2 and convection measures q. For the convergent cases, the CPU time increases rapidly with the problem size for all tested iteration methods. Moreover, the numerical results in the two tables show that the Picard-SHSS iteration method performs better than the Picard-HSS iteration method in most cases, costing less CPU time to reach the stopping criterion except in the cases q=100, m=10 and q=100, m=20. In addition, for p=−1, the Picard-SHSS iteration method requires the fewest iteration steps and the least CPU time to reach the stopping criterion. In summary, the Picard-SHSS iteration method is useful and effective for solving the NP-hard AVE (1.1).
4. Conclusions
In this paper, the Picard-SHSS method is proposed for solving the absolute value equation. The sufficient condition for the convergence of the proposed method for solving the absolute value equation is given. A numerical example is given to confirm our theoretical results and to verify the effectiveness of the new method.
Acknowledgements
We would like to thank Professor Davod Khojasteh Salkuyeh of University of Guilan for providing us the MATLAB code of the Picard-HSS method. Many thanks to the Editor and the anonymous referees for their helpful comments on the revision of this article.
This work was supported by Natural Science Foundation of Northwest Normal University (No. NWNU-LKQN-17-5).
Conflicts of interest
The authors declare that they have no conflicts of interest.
References
[1]
L. Abdallah, M. Haddou, T. Migot, Solving absolute value equation using complementarity and smoothing functions, J. Comput. Appl. Math.,327 (2018), 196-207. doi: 10.1016/j.cam.2017.06.019
[2]
Z. Z. Bai, G. H. Golub, M. K. Ng, Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, SIAM J. Matrix Anal. Appl.,24 (2003), 603-626. doi: 10.1137/S0895479801395458
[3]
Z. Z. Bai, X. Yang, On HSS-based iteration methods for weakly nonlinear systems, Appl. Numer. Math.,59 (2009), 2923-2936. doi: 10.1016/j.apnum.2009.06.005
[4]
L. Caccetta, B. Qu, G. Zhou, A globally and quadratically convergent method for absolute value equations, Comput. Optim. Appl.,48 (2011), 45-58. doi: 10.1007/s10589-009-9242-9
[5]
J. M. Feng, S. Y. Liu, A three-step iterative method for solving absolute value equations, J. Math.,2020 (2020), 1-7.
[6]
X. M. Gu, T. Z. Huang, H. B. Li, S. F. Wang, L. Li, Two CSCS-based iteration methods for solving absolute value equations, J. Appl. Anal. Comput.,7 (2017), 1336-1356.
[7]
S. L. Hu, Z. H. Huang, A note on absolute value equations, Optim. Lett.,4 (2010), 417-424. doi: 10.1007/s11590-009-0169-y
[8]
Y. F. Ke, C. F. Ma, SOR-like iteration method for solving absolute value equations, Appl. Math. Comput.,311 (2017), 195-202.
[9]
C. X. Li, S. L. Wu, A single-step HSS method for non-Hermitian positive definite linear systems, Appl. Math. Lett.,44 (2015), 26-29. doi: 10.1016/j.aml.2014.12.013
[10]
F. Mezzadri, On the solution of general absolute value equations, Appl. Math. Lett., 107 (2020), art. 106462.
[11]
C. L. Wang, Z. Z. Bai, Sufficient conditions for the convergent splittings of non-Hermitian positive definite matrices, Linear Algebra Appl.,330 (2001), 215-218. doi: 10.1016/S0024-3795(01)00275-0
[12]
S. L. Wu, P. Guo, On the unique solvability of the absolute value equation, J. Optim. Theory Appl.,169 (2016), 705-712. doi: 10.1007/s10957-015-0845-2
[13]
S. L. Wu, C. X. Li, The unique solution of the absolute value equations, Appl. Math. Lett.,76 (2018), 195-200. doi: 10.1016/j.aml.2017.08.012
[14]
S. L. Wu, C. X. Li, A note on unique solvability of the absolute value equation, Optim. Lett.,14 (2020), 1957-1960. doi: 10.1007/s11590-019-01478-x
[15]
O. Mangasarian, A generalized Newton method for absolute value equations, Optim. Lett.,3 (2009), 101-108. doi: 10.1007/s11590-008-0094-5
[16]
O. Mangasarian, Knapsack feasibility as an absolute value equation solvable by successive linear programming, Optim. Lett.,3 (2009), 161-170. doi: 10.1007/s11590-008-0102-9
[17]
O. Mangasarian, Primal-dual bilinear programming solution of the absolute value equation, Optim. Lett.,6 (2012), 1527-1533. doi: 10.1007/s11590-011-0347-6
[18]
O. Mangasarian, R. Meyer, Absolute value equations, Linear Algebra Appl.,419 (2006), 359-367. doi: 10.1016/j.laa.2006.05.004
[19]
M. Noor, J. Iqbal, K. Noor, E. Al-Said, On an iterative method for solving absolute value equations, Optim. Lett.,6 (2012), 1027-1033. doi: 10.1007/s11590-011-0332-0
[20]
O. Prokopyev, On equivalent reformulations for absolute value equations, Comput. Optim. Appl.,44 (2009), 363-372. doi: 10.1007/s10589-007-9158-1
[21]
J. Rohn, A theorem of the alternatives for the equation Ax+B|x|=b, Linear Multilinear Algebra,52 (2004), 421-426. doi: 10.1080/0308108042000220686
[22]
J. Rohn, V. Hooshyarbakhsh, R. Farhadsefat, An iterative method for solving absolute value equations and sufficient conditions for unique solvability, Optim. Lett.,8 (2014), 35-44. doi: 10.1007/s11590-012-0560-y
[23]
Y. Saad, Iterative methods for sparse linear systems, 2 Eds., Society for Industrial and Applied Mathematics, Philadelphia, 2003.
[24]
D. Salkuyeh, The Picard-HSS iteration method for absolute value equations, Optim. Lett.,8 (2014), 2191-2202. doi: 10.1007/s11590-014-0727-9
[25]
C. Zhang, Q. Wei, Global and finite convergence of a generalized newton method for absolute value equations, J. Optim. Theory Appl.,143 (2009), 391-403. doi: 10.1007/s10957-009-9557-9