In this paper, based on the shift splitting (SS) technique, we propose a special SS iteration method for solving the absolute value equation (AVE), which is obtained by equivalently reformulating the AVE as a two-by-two block nonlinear equation. Theoretical analysis shows that the special SS method is convergent under suitable conditions. Numerical experiments demonstrate the feasibility, robustness and effectiveness of the special SS method.
1. Introduction
Consider the absolute value equation (AVE)
Ax − |x| = b, (1.1)
where A∈Rn×n, b∈Rn, and |x| denotes the componentwise absolute value of x. The AVE (1.1) is a very important tool and arises frequently in scientific and engineering computing, such as linear programming, bimatrix games, quadratic programming, the quasi-complementarity problems, and so on [1,2,3,4,5].
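To fix ideas, a vector x solves the AVE exactly when the residual Ax − |x| − b vanishes. The following minimal sketch (Python, for illustration only; the 2×2 matrix and the chosen solution are hypothetical, not taken from the paper) checks this:

```python
import numpy as np

def ave_residual(A, x, b):
    """Residual of the absolute value equation: A x - |x| - b."""
    return A @ x - np.abs(x) - b

# Hypothetical example: a symmetric matrix whose smallest singular
# value exceeds 1, so the AVE has a unique solution (see Section 2).
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
x_star = np.array([-1.0, 2.0])      # chosen solution
b = A @ x_star - np.abs(x_star)     # right-hand side built from it

print(np.linalg.norm(ave_residual(A, x_star, b)))  # prints 0.0
```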
In recent years, a large number of efficient iteration methods have been proposed to solve the AVE (1.1), including the successive linearization method [2], the Picard and Picard-HSS methods [7,9], the nonlinear HSS-like iteration method [15], the sign accord method [6], the hybrid algorithm [12], and the generalized Newton (GN) method [11]. For other iteration methods, see [8,9,10,13] for more details.
Recently, Ke and Ma in [18] extended the classical SOR-like iteration method in [19] for solving the AVE (1.1) and proposed the following SOR-like iteration method
x(k+1) = (1 − α)x(k) + αA−1(y(k) + b),
y(k+1) = (1 − α)y(k) + α|x(k+1)|, k = 0, 1, 2, …, (1.2)
with α>0 and ˆD=D(x)=diag(sign(x)), x∈Rn, where D(x)=diag(sign(x)) denotes the diagonal matrix corresponding to sign(x), and sign(x) denotes the vector with components equal to 1, 0 or −1 depending on whether the corresponding component of x is positive, zero or negative. The SOR-like iteration method (1.2) is obtained by applying the idea of the SOR-like method in [19] to the AVE (1.1) reformulated as the two-by-two block nonlinear equation
[A, −I; −D(x), I] [x; y] = [b; 0], (1.3)
or, equivalently, Ax − y = b with y = |x|.
The convergence of the SOR-like iteration method in [18] was established under assumptions on the iteration parameter. Further, in [20], some new convergence conditions were obtained from the iteration matrix of the SOR-like iteration method.
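For concreteness, the SOR-like iteration of [18] can be sketched in Python as follows (a non-authoritative sketch: the update formulas are assumed from the standard SOR-like scheme of Ke and Ma with relaxation parameter α, and the test problem is hypothetical):

```python
import numpy as np

def sor_like(A, b, alpha, tol=1e-6, maxit=500):
    """Sketch of the SOR-like iteration for the AVE Ax - |x| = b.
    Assumed updates (standard SOR-like scheme):
        x <- (1 - alpha) x + alpha A^{-1} (y + b)
        y <- (1 - alpha) y + alpha |x|
    """
    n = A.shape[0]
    x = np.zeros(n)
    y = np.zeros(n)
    for k in range(maxit):
        x = (1 - alpha) * x + alpha * np.linalg.solve(A, y + b)
        y = (1 - alpha) * y + alpha * np.abs(x)
        if np.linalg.norm(A @ x - np.abs(x) - b) <= tol * np.linalg.norm(b):
            return x, k + 1
    return x, maxit

# Hypothetical test problem with smallest singular value > 1.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
x_true = np.array([-1.0, 2.0])
b = A @ x_true - np.abs(x_true)
x, its = sor_like(A, b, alpha=1.0)
```

With α = 1 the scheme reduces to a Picard-type iteration; since ‖A−1‖2 < 1 for this test matrix, the iteration contracts toward the unique solution.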
When the term |x| vanishes, the AVE (1.1) reduces to the linear system
Ax = b. (1.4)
In [14], Bai et al. proposed the following shift splitting (SS) of the matrix A, that is,
A = (1/2)(αI + A) − (1/2)(αI − A), α > 0,
and designed the shift splitting (SS) scheme
(1/2)(αI + A)x(k+1) = (1/2)(αI − A)x(k) + b, k = 0, 1, 2, …,
for solving the non-Hermitian positive definite linear system (1.4). In this paper, we generalize this idea and propose the special SS iteration method for the two-by-two block nonlinear Eq. (1.3) to solve the AVE (1.1).
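As a point of comparison for what follows, the SS scheme for the linear system (1.4) can be sketched in Python (assuming the shift splitting A = (1/2)(αI + A) − (1/2)(αI − A) of [14]; the test matrix is a hypothetical positive definite example, not one from the paper):

```python
import numpy as np

def ss_iteration(A, b, alpha, tol=1e-10, maxit=1000):
    """Shift-splitting iteration for Ax = b:
    (1/2)(alpha I + A) x_{k+1} = (1/2)(alpha I - A) x_k + b."""
    n = A.shape[0]
    I = np.eye(n)
    M = 0.5 * (alpha * I + A)   # shifted left factor
    N = 0.5 * (alpha * I - A)
    x = np.zeros(n)
    for k in range(maxit):
        x = np.linalg.solve(M, N @ x + b)
        if np.linalg.norm(A @ x - b) <= tol * np.linalg.norm(b):
            return x, k + 1
    return x, maxit

# Hypothetical non-symmetric positive definite test matrix.
A = np.array([[3.0, 1.0], [-1.0, 2.0]])
b = np.array([1.0, 1.0])
x, its = ss_iteration(A, b, alpha=2.0)
```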
The remainder of the paper is organized as follows. In Section 2, the special SS iteration method is introduced to solve the AVE (1.1) and the convergence analysis of the special SS iteration method is studied in detail. In Section 3, numerical experiments are reported to show the effectiveness of the proposed method. In Section 4, some conclusions are given to end the paper.
2. The special SS iteration method
In this section, we introduce the special SS iteration method for solving the AVE (1.1). To this end, based on the iteration methods studied in [14], a special shift splitting (SS) of the matrix ˉA in (1.3) can be constructed as follows:
ˉA = (1/2)[αI + A, −I; −D(x), 2I] − (1/2)[αI − A, I; D(x), 0], (2.1)
where α is a given positive constant and I is the identity matrix. This special splitting naturally leads to the special SS iteration method for solving the nonlinear Eq. (1.3), described below.
The special SS iteration method: Let A∈Rn×n be nonsingular and b∈Rn. Given initial vectors x(0)∈Rn and y(0)∈Rn, for k=0,1,2,... until the iteration sequence {(x(k), y(k))}+∞k=0 converges, compute
(1/2)[αI + A, −I; −D(x(k)), 2I] [x(k+1); y(k+1)] = (1/2)[αI − A, I; D(x(k)), 0] [x(k); y(k)] + [b; 0], (2.2)
where α is a given positive constant.
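A minimal Python sketch of one plausible realization of this scheme is given below. Note the hedges: the block splitting is taken as ˉA = (1/2)[αI + A, −I; −D, 2I] − (1/2)[αI − A, I; D, 0], a form consistent with the factors appearing in Lemmas 2.3 and 2.4, and the test problem is a hypothetical 2×2 example, not one from the paper.

```python
import numpy as np

def special_ss(A, b, alpha, tol=1e-6, maxit=1000):
    """Sketch of the special SS iteration for the AVE Ax - |x| = b,
    viewed as the block system [A, -I; -D(x), I][x; y] = [b; 0], y = |x|.
    Left factor: (1/2)[alpha I + A, -I; -D, 2I] (cf. Lemma 2.4)."""
    n = A.shape[0]
    I = np.eye(n)
    x = np.zeros(n)
    y = np.zeros(n)
    rhs = np.concatenate([b, np.zeros(n)])
    for k in range(maxit):
        D = np.diag(np.sign(x))           # D(x^{(k)}) updated each step
        M = 0.5 * np.block([[alpha * I + A, -I], [-D, 2 * I]])
        N = 0.5 * np.block([[alpha * I - A, I], [D, np.zeros((n, n))]])
        z = np.linalg.solve(M, N @ np.concatenate([x, y]) + rhs)
        x, y = z[:n], z[n:]
        if np.linalg.norm(A @ x - np.abs(x) - b) <= tol * np.linalg.norm(b):
            return x, k + 1
    return x, maxit

# Hypothetical symmetric positive definite example with sigma_min(A) > 1.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
x_true = np.array([-1.0, 2.0])
b = A @ x_true - np.abs(x_true)
x, its = special_ss(A, b, alpha=3.0)
```

In practice one would factor the left block matrix once per sign pattern rather than re-solving from scratch; the dense solve above is only for clarity.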
Clearly, the iteration matrix Mα of the special SS method is
Mα = [αI + A, −I; −ˆD, 2I]−1 [αI − A, I; ˆD, 0].
Let ρ(Mα) denote the spectral radius of matrix Mα. Then the special SS iteration method is convergent if and only if ρ(Mα)<1. Let λ be an eigenvalue of matrix Mα and [x,y]T be the corresponding eigenvector. Then
which is equivalent to
To study the convergence conditions of the special SS iteration method, we first give some lemmas.
Lemma 2.1. [16] The AVE (1.1) has a unique solution for any b∈Rn if and only if matrix A−I+2D or A+I−2D is nonsingular for any diagonal matrix D=diag(di) with 0≤di≤1.
Based on Lemma 2.1, it is easy to see that if the matrix A in the AVE (1.1) satisfies σmin(A)>1, where σmin(A) denotes the smallest singular value of A, then the AVE (1.1) has a unique solution for any b∈Rn. One can see [16] for more details.
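This sufficient condition is cheap to check numerically. A small sketch (hypothetical matrix, for illustration):

```python
import numpy as np

# Check the sufficient uniqueness condition sigma_min(A) > 1 of [16].
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
# numpy returns singular values sorted in descending order
sigma_min = np.linalg.svd(A, compute_uv=False)[-1]
print(sigma_min > 1.0)  # prints True: the AVE is uniquely solvable for every b
```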
Lemma 2.2. [17] Consider the real quadratic equation x2−bx+d=0, where b and d are real numbers. Both roots of the equation are less than one in modulus if and only if |d|<1 and |b|<1+d.
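Lemma 2.2 can be cross-checked numerically against the actual roots; the following sketch (with hypothetical coefficients) does so:

```python
import numpy as np

def roots_inside_unit_disk(b, d):
    """Lemma 2.2: both roots of x^2 - b x + d = 0 have modulus < 1
    iff |d| < 1 and |b| < 1 + d."""
    return abs(d) < 1 and abs(b) < 1 + d

# Hypothetical coefficients: complex-conjugate roots of modulus sqrt(0.3).
b, d = 0.5, 0.3
inside = np.all(np.abs(np.roots([1.0, -b, d])) < 1)
print(roots_inside_unit_disk(b, d), bool(inside))  # prints True True
```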
Lemma 2.3. Let A be a symmetric positive definite matrix satisfying the conditions of Lemma 2.1. Then the matrix
ˉA = [A, −I; −ˆD, I]
is nonsingular.
Proof. By simple computations, we have
Clearly, we just need to prove that the matrix I−ˆDA−1 is nonsingular. If not, then for some nonzero y∈Rn we have (I−ˆDA−1)y=0, which leads to a contradiction, as we now show. This identity implies that y=ˆDA−1y. Let A−1y=z. Then Az=ˆDz. Further, we have
Obviously, this inequality is invalid. This completes the proof.
Lemma 2.4. Let A be a symmetric positive definite matrix satisfying the conditions of Lemma 2.1. Then the matrix
ˆA = (1/2)[αI + A, −I; −ˆD, 2I]
is nonsingular.
Proof. Similar to Lemma 2.3, by simple computations, we have
Note that A is a symmetric positive definite matrix. Clearly, when the matrix 2I−ˆD(αI+A)−1 is nonsingular, the matrix ˆA is also nonsingular. In fact, the matrix 2I−ˆD(αI+A)−1 is indeed nonsingular. If not, then for some nonzero y∈Rn we have (2I−ˆD(αI+A)−1)y=0, which leads to a contradiction. This identity implies that 2y=ˆD(αI+A)−1y. Let (αI+A)−1y=z. Then 2(αI+A)z=ˆDz. Further, we have
Let λ=(zTAz)/(zTz) and μ=(zTˆDz)/(zTz). Then, from (2.4) we can obtain
Since α>0, λ>1 and |μ|≤1, Eq. (2.5) is invalid. Therefore, Eq. (2.4) is also invalid. This completes the proof.
Lemma 2.4 implies that the iteration matrix Mα is well defined.
Lemma 2.5. Let A be a symmetric positive definite matrix and satisfy the conditions of Lemma 2.1. If λ is an eigenvalue of the matrix Mα, then λ≠±1.
Proof. If λ=1, then from (2.2) and (2.3) we have
Based on Lemma 2.3, it follows that y=0 and x=0, which contradicts the assumption that [x,y]T is an eigenvector of matrix Mα. Hence λ≠1.
Now, we prove that λ≠−1. If λ=−1, then from (2.2) and (2.3) we have
Since α>0, we obtain x=0 and y=0, which also contradicts the assumption that [x,y]T is an eigenvector of matrix Mα. Hence λ≠−1.
Lemma 2.6. Assume that A is a symmetric positive definite matrix and satisfies the conditions of Lemma 2.1. Let λ be an eigenvalue of Mα and [x,y]T be the corresponding eigenvector. Then x≠0. Moreover, if y=0, then |λ|<1.
Proof. If x=0, then from (2.2) we have (λ+1)y=0. By Lemma 2.5 we know that λ≠−1. Therefore, y=0, which contradicts the assumption that [x,y]T is an eigenvector of matrix Mα.
If y=0, then from (2.2) we have
Since A is symmetric positive definite, by [12, Lemma 2.1] we know that
Thus, we complete the proof.
Theorem 2.1. Let A be a symmetric positive definite matrix and satisfy the conditions of Lemma 2.1, and α be a positive constant. Then
which implies that the special SS iteration method converges to the unique solution of the AVE (1.1).
Proof. If λ=0, then obviously |λ|<1, and the results in Theorem 2.1 hold.
If λ≠0, then from (2.3) we have
Substituting (2.6) into (2.3) leads to
By Lemma 2.6, we know that x≠0. Multiplying both sides of Eq. (2.7) by xT/(xTx), we obtain
Let p=(xTAx)/(xTx) and q=(xTˆDx)/(xTx).
Then p>1 and |q|≤1. From Eq. (2.8) we have
Further, we know that λ satisfies the following real quadratic equation
Based on Lemma 2.2, the roots of the real quadratic Eq. (2.9) satisfy |λ|<1 if and only if
and
It is easy to check that (2.10) and (2.11) hold for all α>0. Therefore, we obtain
which implies that the special SS iteration method converges to the unique solution of the AVE (1.1).
For the SOR-like iteration method (1.2), Ke and Ma gave the following result in [18].
Theorem 2.2. [18] Let A∈Rn×n be nonsingular and b∈Rn. Denote
If
then
where
Comparing the convergence conditions of the special SS method and the SOR-like iteration method (see Theorem 2.1 and Theorem 2.2 of [18]), the convergence condition of the SOR-like iteration method is not only relatively strict but also difficult to verify, mainly because condition (2.12) requires computing the inverse of the matrix A. This limits the applicability of the SOR-like iteration method. For this reason, the special SS method may be superior to the SOR-like iteration method under certain conditions.
Next, we consider the case where the matrix A in (1.1) is positive definite.
It is noted that Lemma 2.3, Lemma 2.5 and Lemma 2.6 remain valid under the conditions of Lemma 2.1 when A in (1.1) is positive definite.
Now, we need to guarantee the invertibility of the first factor of Mα. To this end, we have Lemma 2.7.
Lemma 2.7. Let the positive definite matrix A satisfy the conditions of Lemma 2.1 and ‖(αI+A)−1‖2<2. Then the matrix
ˆA = (1/2)[αI + A, −I; −ˆD, 2I]
is nonsingular.
Proof. Based on the proof of Lemma 2.4, when ‖(αI+A)−1‖2<2, the matrix 2I−ˆD(αI+A)−1 is clearly nonsingular. This implies that the matrix ˆA is nonsingular as well.
For later use we define the set S as
Obviously, the members of S are nonzero. Based on this, Theorem 2.3 can be obtained.
Theorem 2.3. Let the conditions of Lemma 2.7 be satisfied. For every x∈S, let s(x)=ℜ(x∗Ax), t(x)=ℑ(x∗Ax) and q(x)=x∗ˆDx. If, for each x∈S,
then
which implies that the special SS iteration method converges to the unique solution of the AVE (1.1).
Proof. Similar to the proof of Theorem 2.1, using x∗ instead of xT in (2.8) leads to
which is equal to
For simplicity of notation, we write s and t for s(x) and t(x), respectively. From (2.14), we have
where
By some calculations, we get
Further, we have
Thus, the necessary and sufficient condition for |λ|<1 in (2.15) is
Substituting (2.16) into (2.17), we can obtain the condition (2.13), which completes the proof.
3. Numerical experiments
In this section, two examples are given to demonstrate the performance of the special SS method (2.2) for solving the AVE (1.1). To this end, we compare the special SS method with the SOR-like method [18], the generalized Newton (GN) method [11] and the search direction (SD) method [10] in terms of the iteration counts (denoted 'IT'), the relative residual error (denoted 'RES') and the elapsed CPU time in seconds (denoted 'CPU'). In the implementations, we use the sparse LU factorization to invert the first factor of Mα. All runs are implemented in Matlab 7.0.
All initial vectors are chosen as the zero vector, and all iterations are terminated once the relative residual error satisfies
RES = ‖Ax(k) − |x(k)| − b‖2 / ‖b‖2 ≤ 10−6,
or once the prescribed maximum of 500 iterations is exceeded. In the following tables, '−' denotes that RES>10−6 or that the iteration count exceeds 500.
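The stopping rule can be sketched as follows (assuming the relative residual RES = ‖Ax(k) − |x(k)| − b‖2/‖b‖2 that is standard in the AVE literature; the test vectors are hypothetical):

```python
import numpy as np

def should_stop(A, x, b, k, tol=1e-6, maxit=500):
    """Stopping rule sketch: terminate when RES <= tol or k >= maxit,
    with RES = ||A x - |x| - b||_2 / ||b||_2."""
    res = np.linalg.norm(A @ x - np.abs(x) - b) / np.linalg.norm(b)
    return res <= tol or k >= maxit

# Hypothetical exact solution: the residual vanishes, so we stop at once.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
x_star = np.array([-1.0, 2.0])
b = A @ x_star - np.abs(x_star)
print(should_stop(A, x_star, b, k=0))  # prints True
```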
Example 1. ([18]) Consider the AVE (1.1) with
and b=Ax∗−|x∗|.
To compare the special SS method with the SOR-like method, we take the same parameter α for both. Tables 1 and 2 list numerical results of the special SS method and the SOR-like method for Example 1 with different dimensions n and different parameters α. From Tables 1 and 2, we can see that when both iteration methods converge, the CPU times of the special SS method are greater than those of the SOR-like method. Note, however, that while the SOR-like method occasionally fails to converge within 500 iterations, the special SS method always converges, which confirms the results in Theorem 2.1. From this point of view, the special SS method is more robust than the SOR-like method.
Table 3 lists the numerical results of the special SS method, the SOR-like method, the generalized Newton method, and the search direction method. Here, the iteration parameter of the special SS method is set to 3, and that of the SOR-like method is set to 1. From Table 3, the iteration counts of the special SS method are larger than those of the other three methods, but it requires less CPU time than the generalized Newton method and the search direction method. Among these methods, the SOR-like method requires the fewest iteration steps and the least CPU time, but it carries some risk of non-convergence in the implementations.
Based on the above discussion, we conclude that the special SS method is feasible compared with the SOR-like method, the generalized Newton method, and the search direction method.
Example 2. Let m be a prescribed positive integer and n=m2. Consider the AVE (1.1) with A=ˉA+μI
with
and b=Ax∗−|x∗|, where x∗=(−1,1,−1,1,⋯,−1,1).
Similarly, for Example 2, we again take the same parameter α for the special SS method and the SOR-like method when both are used. Tables 4, 5, 7 and 8 list the computing results for Example 2 with μ=1 and μ=4.
The numerical results in Tables 4, 5, 7 and 8 further confirm the observations from Tables 1 and 2. That is, when both methods converge, the SOR-like method is computationally more efficient than the special SS method. However, the SOR-like method carries some risk of non-convergence in the implementations, whereas the special SS method is free from such worries. This further verifies that the special SS method is robust.
Similar to Table 3, Tables 6 and 9 list the numerical results of the special SS method, the SOR-like method, the generalized Newton method, and the search direction method. In Tables 6 and 9, the iteration parameter of the SOR-like method is set to 1; for the special SS method, two values of the iteration parameter are used: α=2.1 in Table 6 and α=5.5 in Table 9. From Tables 6 and 9, the special SS method requires less CPU time than the generalized Newton method and the search direction method. Among these methods, the SOR-like method requires the fewest iteration steps and the least CPU time, but it carries some risk of non-convergence in the implementations.
From the numerical results of Example 2, we again conclude that the special SS method is feasible compared with the SOR-like method, the generalized Newton method, and the search direction method.
4. Conclusions
In this paper, based on the shift splitting (SS) of the coefficient matrix of an equivalent two-by-two block nonlinear equation of the AVE (1.1), a special shift splitting (SS) method is introduced. Some convergence conditions of the special SS method are obtained. Numerical experiments show that the special SS method is feasible. However, the parameter α is chosen as the experimentally optimal value. How to choose the optimal parameter of the special SS method for the AVE is left as future research work.
Acknowledgments
The author would like to thank the anonymous referee for providing helpful suggestions, which greatly improved the paper. This research was supported by National Natural Science Foundation of China (No.11961082).
Conflict of interest
The authors declare no conflict of interest in this paper.