Citation: Shousheng Zhu. Double iterative algorithm for solving different constrained solutions of multivariate quadratic matrix equations[J]. AIMS Mathematics, 2022, 7(2): 1845-1855. doi: 10.3934/math.2022106
Abstract
Constrained solutions are required in some practical problems. This paper studies the problem of finding different constrained solutions for a class of multivariate quadratic matrix equations, a problem that has rarely been studied before. First, the quadratic matrix equations are transformed into linear matrix equations by Newton's method. Second, the different constrained solutions of the linear matrix equations are computed by the modified conjugate gradient method, from which the different constrained solutions of the multivariate quadratic matrix equations are obtained. Finally, the convergence of the algorithm is proved. The algorithm is easy to implement on a computer, and the effectiveness of the method is verified by numerical examples.
1. Introduction
The quadratic matrix equation has many different forms, such as $AX^2+BX+C=O$ arising in quasi-birth-death processes [1,2] and the Riccati equation $XCX-XE-AX+B=O$ arising in transport theory [3,4,5,6]. There are also coupled quadratic matrix equations with two or three variables [7,8]. This article studies a general form of these equations. As can be seen, the linear part of these equations can be expressed in the form $\sum_{i=1}^{3}C_i^{(l)}X_iD_i^{(l)}$, and the quadratic part can be expressed as $\sum_{i,j=1}^{3}X_iE_{ij}^{(l)}X_j$. Therefore, we study the following coupled quadratic matrix equation:
$$\sum_{i=1}^{3}C_i^{(l)}X_iD_i^{(l)}+\sum_{i,j=1}^{3}X_iE_{ij}^{(l)}X_j=S^{(l)}\quad(l=1,2),\tag{1.1}$$
where all matrices are $n\times n$ real matrices. Each equation in (1.1) consists of three linear terms and nine quadratic terms. Moreover, Eq (1.1) has three variables but only two equations, so the solution is not unique.
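As a worked special case (our illustration), the Riccati equation $XCX-XE-AX+B=O$ mentioned above fits a single equation of the form (1.1) with one variable $X_1=X$: taking

$$C_1^{(1)}=I,\quad D_1^{(1)}=-E,\qquad C_2^{(1)}=-A,\quad D_2^{(1)}=I,\qquad E_{11}^{(1)}=C,\qquad S^{(1)}=-B,$$

and all remaining coefficient matrices zero gives $\sum_iC_i^{(1)}X_iD_i^{(1)}+\sum_{i,j}X_iE_{ij}^{(1)}X_j=-XE-AX+XCX=S^{(1)}$, which is exactly $XCX-XE-AX+B=O$.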
As is well known, special kinds of solutions are often needed in practical applications: symmetric solutions are widely used in control theory [8,9], and reflexive solutions, also called generalized centro-symmetric solutions, are used in information theory, linear estimation theory and numerical analysis [10,11]. Liao, Liu, et al. have studied the problem of different constrained solutions of linear matrix equations [12,13]. In this article, we design a method to obtain different constrained solutions of a class of quadratic matrix equations.
Many researchers have studied quadratic matrix equations. For example, Bini, Iannazzo and Poloni gave a fast Newton's method for a quadratic matrix equation [4]. Long, Hu and Zhang used an improved Newton's method to solve a quadratic matrix equation [14]. The convergence rates of some iterative methods for quadratic matrix equations arising in transport theory were described by Guo and Lin [15]. Zhang, Zhu and Liu studied the constrained solutions of two-variable Riccati matrix equations based on Newton's method and the modified conjugate gradient (MCG) method [16,17]. This article further studies the problem of different constrained solutions of coupled quadratic matrix equations with three matrix variables. The algorithm designed in this paper is well suited to computing different constrained solutions.
Notation: $\mathbb{R}^{n\times n}$ denotes the set of $n\times n$ real matrices. The symbols $A^T$ and $\operatorname{tr}(A)$ represent the transpose and the trace of the matrix $A$, respectively. $A\otimes B$ stands for the Kronecker product of matrices $A$ and $B$. $\overline{\operatorname{vec}}(\cdot)$ is an operator that transforms a matrix $A$ into a column vector by vertically stacking the columns of the matrix $A^T$. For example, for the $2\times 2$ matrix

$$A=\begin{bmatrix}a&b\\c&d\end{bmatrix},$$

the vectorization is $\overline{\operatorname{vec}}(A)=[a,b,c,d]^T$. We define an inner product of two matrices $A,B\in\mathbb{R}^{n\times n}$ as $[A,B]=\operatorname{tr}(A^TB)$; the norm of a matrix $A$ generated by this inner product is then the Frobenius norm, denoted by $\|A\|$, i.e. $\|A\|=\sqrt{[A,A]}$.
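As a quick illustration of this notation, the following MATLAB snippet (with made-up test matrices) computes $\overline{\operatorname{vec}}(A)$, the inner product $[A,B]$ and the induced norm:

% Illustration of the notation above with arbitrary test matrices.
A = [1 2; 3 4];  B = [5 6; 7 8];
vbarA = reshape(A.', [], 1);     % vec-bar: stacks the columns of A', giving [a b c d]'
ipAB  = trace(A.'*B);            % the inner product [A,B]
nrmA  = sqrt(trace(A.'*A));      % the induced norm; equals norm(A,'fro')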
Let $\Omega_1$ be the set of symmetric matrices. $P_1,P_2\in\mathbb{R}^{n\times n}$ are said to be symmetric orthogonal matrices if $P_i=P_i^T$ and $P_i^2=I$ $(i=1,2)$. $X\in\mathbb{R}^{n\times n}$ is said to be a reflexive matrix with respect to $P_1$ if $P_1XP_1=X$. Let $\Omega_5$ be the set of reflexive matrices. $X\in\mathbb{R}^{n\times n}$ is said to be a symmetric reflexive matrix with respect to $P_2$ if $X^T=X=P_2XP_2$. Let $\Omega_9$ be the set of symmetric reflexive matrices. We call $(X_1,X_2,X_3)$ a constrained matrix in $\Omega_{1-5-9}$ when $X_1\in\Omega_1$, $X_2\in\Omega_5$ and $X_3\in\Omega_9$. Moreover, if the symmetric orthogonal matrices $P_1$ and $P_2$ are changed, we obtain different constrained matrices in $\Omega_{1-5-9}$.
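Each of these three sets is a linear subspace of $\mathbb{R}^{n\times n}$, and orthogonal projection onto it has a simple closed form (the standard averaging formulas). The MATLAB sketch below, with sample $P_1$, $P_2$ of our own choosing, projects an arbitrary triple onto $\Omega_{1-5-9}$:

% Orthogonal projection of an arbitrary triple (Y1,Y2,Y3) onto Omega_{1-5-9};
% P1 and P2 here are sample symmetric orthogonal matrices, not data from the paper.
n = 4;
P1 = diag([1 1 -1 -1]);  P2 = fliplr(eye(n));     % P_i = P_i', P_i^2 = I
Y1 = randn(n);  Y2 = randn(n);  Y3 = randn(n);    % arbitrary input matrices
Y1 = (Y1 + Y1.')/2;                               % now Y1 in Omega_1 (symmetric)
Y2 = (Y2 + P1*Y2*P1)/2;                           % now Y2 in Omega_5 (P1*Y2*P1 = Y2)
Y3 = (Y3 + Y3.' + P2*Y3*P2 + P2*Y3.'*P2)/4;       % now Y3 in Omega_9 (symmetric reflexive)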
The paper is organized as follows: First, we use Newton's method to convert the quadratic matrix equations into linear matrix equations. Second, the MCG method [10,13,16,18] is applied to solve the derived linear matrix equations. Finally, numerical examples are presented to support the theoretical results of this paper.
2. Newton's method for solving quadratic matrix equations
For convenience, we first introduce some notation. Let

$$X=\begin{bmatrix}X_1\\X_2\\X_3\end{bmatrix},\quad X^{(k)}=\begin{bmatrix}X_1^{(k)}\\X_2^{(k)}\\X_3^{(k)}\end{bmatrix},\quad X^*=\begin{bmatrix}X_1^*\\X_2^*\\X_3^*\end{bmatrix}.$$
$Y$, $Y^{(k)}$ and $Y^*$ are defined in the same way as $X$, $X^{(k)}$ and $X^*$, respectively. Then let

$$\psi^{(l)}(X)=\sum_{i=1}^{3}C_i^{(l)}X_iD_i^{(l)}+\sum_{i,j=1}^{3}X_iE_{ij}^{(l)}X_j-S^{(l)},$$

so that, expanding directly,

$$\psi^{(l)}(X+Y)=\psi^{(l)}(X)+\phi_X^{(l)}(Y)+\sum_{i,j=1}^{3}Y_iE_{ij}^{(l)}Y_j,\quad\text{where}\quad \phi_X^{(l)}(Y)=\sum_{i=1}^{3}C_i^{(l)}Y_iD_i^{(l)}+\sum_{i,j=1}^{3}\bigl(X_iE_{ij}^{(l)}Y_j+Y_iE_{ij}^{(l)}X_j\bigr),\tag{2.1}$$

and $\phi_X^{(l)}(Y):\mathbb{R}^{n\times n}\to\mathbb{R}^{n\times n}$ is the Fréchet derivative of $\psi^{(l)}(X)$ at $X$ in the direction $Y$ [1].
Lemma 2.1. Finding a solution $(X_1^*,X_2^*,X_3^*)\in\Omega_{1-5-9}$ of (1.1) can be transformed into finding the correction $(Y_1,Y_2,Y_3)\in\Omega_{1-5-9}$ of $\psi^{(l)}(X+Y)=O$ $(l=1,2)$. Linearizing, this amounts to finding $(Y_1,Y_2,Y_3)\in\Omega_{1-5-9}$ from the coupled linear matrix equation

$$\phi_X^{(l)}(Y)=-\psi^{(l)}(X)\quad(l=1,2).\tag{2.2}$$
Proof. Suppose that an approximate solution $(X_1,X_2,X_3)\in\Omega_{1-5-9}$ of Eq (1.1) has been obtained. Let $X_i^*=X_i+Y_i$ $(i=1,2,3)$; then finding $(X_1^*,X_2^*,X_3^*)\in\Omega_{1-5-9}$ of (1.1) is transformed into finding the correction $(Y_1,Y_2,Y_3)\in\Omega_{1-5-9}$ from

$$\psi^{(l)}(X+Y)=O\quad(l=1,2).\tag{2.3}$$
Equation (2.3) is quadratic in the $Y_i$. As is well known, when the norms of the $Y_i$ are small enough, the quadratic parts $\sum_{i,j=1}^{3}Y_iE_{ij}^{(l)}Y_j$ in (2.1) can be discarded, which is the essence of Newton's method. In this way we obtain the linear approximation

$$\psi^{(l)}(X+Y)\approx\psi^{(l)}(X)+\phi_X^{(l)}(Y).$$

Therefore, finding the solution $(X_1^*,X_2^*,X_3^*)\in\Omega_{1-5-9}$ of (1.1) is transformed into finding $(Y_1,Y_2,Y_3)\in\Omega_{1-5-9}$ from $\psi^{(l)}(X)+\phi_X^{(l)}(Y)=O$ $(l=1,2)$, that is, into solving (2.2).
According to [14], Newton's method (Algorithm 1) is introduced to find constrained solutions in $\Omega_{1-5-9}$ of (1.1). Let

$$\psi(X)=\begin{bmatrix}\psi^{(1)}(X)\\\psi^{(2)}(X)\end{bmatrix},\qquad \phi_X(Y)=\begin{bmatrix}\phi_X^{(1)}(Y)\\\phi_X^{(2)}(Y)\end{bmatrix}.$$
Algorithm 1: Newton's method for solving Eq (1.1)

Step 1. Choose an initial matrix $(X_1^{(1)},X_2^{(1)},X_3^{(1)})\in\Omega_{1-5-9}$ and set $k:=1$.

Step 2. If $\psi(X^{(k)})=O$, stop; otherwise, solve for $(Y_1^{(k)},Y_2^{(k)},Y_3^{(k)})\in\Omega_{1-5-9}$ from

$$\phi_{X^{(k)}}(Y^{(k)})=-\psi(X^{(k)}).\tag{2.4}$$

When (2.4) has no constrained solutions in $\Omega_{1-5-9}$, solve for $(Y_1^{(k)},Y_2^{(k)},Y_3^{(k)})\in\Omega_{1-5-9}$ such that

$$\|\phi_{X^{(k)}}(Y^{(k)})+\psi(X^{(k)})\|=\min.\tag{2.5}$$

Step 3. Compute $X^{(k+1)}=X^{(k)}+Y^{(k)}$, set $k:=k+1$ and go to Step 2.
The following convergence property of Newton's method can be obtained according to [14] (the proof is similar to that of Lemma 2.1 in [14]).

Theorem 2.1. Assume that the real matrix $X^*$ is a simple root of (1.1), i.e. $\psi(X^*)=O$ and $\phi_{X^*}(Y)$ is regular. Then, if the starting matrix $X^{(1)}$ is chosen sufficiently close to the solution $X^*$, the sequence $\{X^{(k)}\}$ generated by Newton's method converges quadratically to $X^*$.
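To make the outer Newton iteration concrete, the following MATLAB sketch (our illustration, with made-up coefficients) applies the same linearize-and-correct scheme to the single-variable special case $AX^2+BX+C=O$ from the introduction (cf. [1,14]). The linearized equation is solved here by direct vectorization, rather than by the MCG inner iteration developed in the next section:

% Newton iteration for A*X^2 + B*X + C = O; made-up test coefficients.
n = 3;
A = eye(n);  B = -3*eye(n);  C = 2*eye(n);   % exact solutions include X = I and X = 2*I
X = zeros(n);                                % starting matrix X^(1)
for k = 1:20
    R = A*X^2 + B*X + C;                     % residual psi(X)
    if norm(R, 'fro') <= 1e-10, break; end
    % Frechet derivative phi_X(Y) = A*Y*X + A*X*Y + B*Y, vectorized via
    % vec(M*Y*N) = kron(N.', M)*vec(Y):
    M = kron(X.', A) + kron(eye(n), A*X + B);
    Y = reshape(-(M \ R(:)), n, n);          % Newton correction Y^(k)
    X = X + Y;                               % X^(k+1) = X^(k) + Y^(k)
end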
3. MCG method for solving linear matrix equations
In Algorithm 1, when $X^{(k)}$ is known, $Y^{(k)}$ needs to be solved for. In this section, the MCG method is used to obtain $Y^{(k)}$ from Eq (2.2), that is, to solve Eq (2.4) or Eq (2.5). Consider the general form of Eq (2.2):

$$\sum_{i=1}^{3}\sum_{j=1}^{7}A_{ij}^{(l)}Y_iB_{ij}^{(l)}=F^{(l)}\quad(l=1,2),\tag{3.1}$$
where all matrices in Eq (3.1) are $n\times n$ real matrices. Let

$$h(Y)=\begin{bmatrix}\sum_{i=1}^{3}\sum_{j=1}^{7}A_{ij}^{(1)}Y_iB_{ij}^{(1)}\\[2pt]\sum_{i=1}^{3}\sum_{j=1}^{7}A_{ij}^{(2)}Y_iB_{ij}^{(2)}\end{bmatrix},\qquad F=\begin{bmatrix}F^{(1)}\\F^{(2)}\end{bmatrix},$$

and let $p$ denote the adjoint map

$$p(R)=\bigl(\tilde R_1,\tilde R_2,\tilde R_3\bigr),\qquad \tilde R_i=\sum_{l=1}^{2}\sum_{j=1}^{7}\bigl(A_{ij}^{(l)}\bigr)^TR^{(l)}\bigl(B_{ij}^{(l)}\bigr)^T,$$

and $q(\tilde R)=(q_1(\tilde R),q_2(\tilde R),q_3(\tilde R))$ the orthogonal projection onto $\Omega_{1-5-9}$,

$$q_1(\tilde R)=\frac{\tilde R_1+\tilde R_1^T}{2},\quad q_2(\tilde R)=\frac{\tilde R_2+P_1\tilde R_2P_1}{2},\quad q_3(\tilde R)=\frac{\tilde R_3+\tilde R_3^T+P_2\tilde R_3P_2+P_2\tilde R_3^TP_2}{4}.$$
In order to solve Eq (3.1), the following two problems will be considered.

Problem 3.1. Assume that (3.1) has constrained solutions; find $(Y_1,Y_2,Y_3)\in\Omega_{1-5-9}$ from (3.1).

Problem 3.2. Assume that (3.1) has no constrained solutions; find $(Y_1,Y_2,Y_3)\in\Omega_{1-5-9}$ such that

$$\|h(Y)-F\|=\min.\tag{3.2}$$
3.1. MCG method for solving Problem 3.1
Based on the MCG method, we establish the following algorithm (Algorithm 2) to solve Problem 3.1.
Algorithm 2: MCG method for solving Problem 3.1

Step 1. Choose an arbitrary initial matrix $(Y_1^{(1)},Y_2^{(1)},Y_3^{(1)})\in\Omega_{1-5-9}$, set $k:=1$ and compute

$$R_k=F-h(Y^{(k)})\overset{\mathrm{def}}{=}\begin{bmatrix}R_k^{(1)}\\R_k^{(2)}\end{bmatrix},\quad \tilde R_k=p(R_k)\overset{\mathrm{def}}{=}\begin{bmatrix}\tilde R_k^{(1)}\\\tilde R_k^{(2)}\\\tilde R_k^{(3)}\end{bmatrix},\quad Z_k=q(\tilde R_k).$$

Step 2. If $R_k=O$, or $R_k\neq O$ and $Z_k=O$, stop; otherwise, compute

$$\alpha_k=\frac{\|R_k\|^2}{\|Z_k\|^2},\qquad Y^{(k+1)}=Y^{(k)}+\alpha_kZ_k.$$

Step 3. Compute

$$R_{k+1}=F-h(Y^{(k+1)})\overset{\mathrm{def}}{=}\begin{bmatrix}R_{k+1}^{(1)}\\R_{k+1}^{(2)}\end{bmatrix},\quad \tilde R_{k+1}=p(R_{k+1})\overset{\mathrm{def}}{=}\begin{bmatrix}\tilde R_{k+1}^{(1)}\\\tilde R_{k+1}^{(2)}\\\tilde R_{k+1}^{(3)}\end{bmatrix},$$

$$\beta_{k+1}=\frac{\|R_{k+1}\|^2}{\|R_k\|^2},\qquad Z_{k+1}=q(\tilde R_{k+1})+\beta_{k+1}Z_k.$$

Step 4. Set $k:=k+1$ and go to Step 2.
From Algorithm 2 we can easily see that $(Y_1^{(k)},Y_2^{(k)},Y_3^{(k)})\in\Omega_{1-5-9}$ for $k=1,2,\cdots$, and we have the following convergence property (the proof is similar to that of Theorem 2.1 in [10]):

Theorem 3.1. Assume that Eq (3.1) has constrained solutions in $\Omega_{1-5-9}$. Then, for an arbitrary initial matrix $(Y_1^{(1)},Y_2^{(1)},Y_3^{(1)})\in\Omega_{1-5-9}$, a solution of Problem 3.1 can be obtained by Algorithm 2 within a finite number of iterations, and it is a constrained solution in $\Omega_{1-5-9}$ of (3.1).
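To illustrate the structure of Algorithm 2, here is a minimal MATLAB sketch for the simplest instance of (3.1): a single equation $AYB=F$ with one term and the single constraint $Y\in\Omega_1$, so that $p$ reduces to $R\mapsto A^TRB^T$ and $q$ to the symmetric projection. The test data are made up and constructed so that a symmetric solution exists; in exact arithmetic the iteration terminates finitely, and numerically we stop on a small residual:

% MCG pattern of Algorithm 2 for A*Y*B = F over symmetric Y; made-up test data.
n = 4; rng(1);
A = randn(n);  B = randn(n);
Ytrue = randn(n);  Ytrue = (Ytrue + Ytrue.')/2;   % a symmetric solution
F = A*Ytrue*B;                                    % consistent right-hand side
Y = zeros(n);                                     % initial guess in Omega_1
R = F - A*Y*B;                                    % residual R_1
T = A.'*R*B.';  Z = (T + T.')/2;                  % Z_1 = q(p(R_1))
for k = 1:200
    if norm(R,'fro') <= 1e-10 || norm(Z,'fro') <= 1e-12, break; end
    alpha = norm(R,'fro')^2 / norm(Z,'fro')^2;    % alpha_k
    Y = Y + alpha*Z;                              % Y^(k+1)
    Rnew = F - A*Y*B;
    T = A.'*Rnew*B.';  T = (T + T.')/2;           % q(p(R_{k+1}))
    beta = norm(Rnew,'fro')^2 / norm(R,'fro')^2;  % beta_{k+1}
    Z = T + beta*Z;                               % Z_{k+1}
    R = Rnew;
end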
3.2. MCG method for solving Problem 3.2
Algorithm 2 breaks down if $R_k\neq O$ and $Z_k=O$, which by Theorem 3.1 means that Eq (3.1) has no constrained solutions in $\Omega_{1-5-9}$. Therefore, we need to solve Problem 3.2, that is, to find constrained least-squares solutions of (3.1).

By Theorem 3.2 below, the problem of finding least-squares solutions in $\Omega_{1-5-9}$ of (3.1) is replaced by that of finding solutions in $\Omega_{1-5-9}$ of an equivalent linear matrix equation; an iterative algorithm for finding constrained least-squares solutions in $\Omega_{1-5-9}$ of (3.1) is then constructed along the lines of Algorithm 2.
For convenience, we introduce some notation: $u$ and $v$ are functions of $Y$, and $g$ and $Q$ below are the corresponding normal-equation operator and right-hand side, whose vectorizations are $M^TM$ and $M^Tf$ in the proof of Theorem 3.2. Then, according to [16,17], we have the following theorem.
Theorem 3.2. Solving Problem 3.2 can be replaced by finding constrained solutions in $\Omega_{1-5-9}$ of

$$g(Y)=Q.\tag{3.3}$$

Moreover, (3.3) does have constrained solutions in $\Omega_{1-5-9}$.
Proof. When $(Y_1,Y_2,Y_3)\in\Omega_{1-5-9}$, we have $Y_1=Y_1^T$, $Y_2=P_1Y_2P_1$ and $Y_3=Y_3^T=P_2Y_3P_2$. Therefore, solving Problem 3.2 is equivalent to finding $(Y_1,Y_2,Y_3)\in\Omega_{1-5-9}$ such that

$$\left\|\begin{bmatrix}u(Y)\\v(Y)\end{bmatrix}-\begin{bmatrix}F\\F\end{bmatrix}\right\|=\min.\tag{3.4}$$
It remains to prove that solving problem (3.4) is equivalent to finding constrained solutions in $\Omega_{1-5-9}$ of (3.3). We give matrix multiplication precedence over the Kronecker product between matrices. Let $M$, $y$ and $f$ denote the coefficient matrix, the unknown vector and the right-hand side of the vectorized system obtained below, where $T_{n,n}$ denotes the commutation matrix such that $T_{n,n}\overline{\operatorname{vec}}(A_{n\times n})=\overline{\operatorname{vec}}(A^T)$ [19], and $T_{n,n}$ acts only on $\overline{\operatorname{vec}}$.
Then, applying $\overline{\operatorname{vec}}$ to the equations

$$\begin{cases}u(Y)=F,\\v(Y)=F,\end{cases}\tag{3.5}$$

we obtain the equivalent equation $My=f$. Moreover, $M^TMy=M^Tf$, the normal equation of $My=f$, is the vectorization of (3.3). Therefore, a least-squares solution of $My=f$ is also a solution of $M^TMy=M^Tf$, and hence the vectorization of a solution of (3.3). So a solution of (3.4) is also a solution of (3.3), and vice versa.
In summary, solving Problem 3.2 can be replaced by finding constrained solutions in $\Omega_{1-5-9}$ of (3.3).
As is well known, normal equations always have solutions, and the vectorization of Eq (3.3) is a normal equation, so Eq (3.3) also has solutions. Suppose $\tilde Y=(\tilde Y_1^T,\tilde Y_2^T,\tilde Y_3^T)^T$ (whether $\tilde Y\in\Omega_{1-5-9}$ or not) is a solution of (3.3); then $g(\tilde Y)=Q$. Let $Y_i^*=q_i(\tilde Y)$ $(i=1,2,3)$; then $(Y_1^*,Y_2^*,Y_3^*)\in\Omega_{1-5-9}$ and $g(Y^*)=Q$. Hence, (3.3) has constrained solutions in $\Omega_{1-5-9}$.
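Two small MATLAB checks of the ingredients used in this proof may be helpful; the matrices below are arbitrary test data. The first builds the commutation matrix $T_{n,n}$ and verifies its defining property; the second verifies that a least-squares solution of an inconsistent system $My=f$ also solves the normal equation $M^TMy=M^Tf$:

% (i) Commutation matrix T_{n,n}: T*vec(A) = vec(A.') for any n-by-n A
% (equivalently, T*vecbar(A) = vecbar(A.')).
n = 3; A = magic(n);
Tnn = sparse(1:n^2, reshape(reshape(1:n^2, n, n).', [], 1), 1);
check1 = norm(Tnn*A(:) - reshape(A.', [], 1))   % exactly 0
% (ii) For a generically inconsistent system M*y = f, the least-squares
% solution y also satisfies the normal equation M.'*M*y = M.'*f.
rng(2); M = randn(6, 4); f = randn(6, 1);
y = M \ f;
check2 = norm(M.'*M*y - M.'*f)                  % ~ 0 up to roundoff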
We use the MCG method to find constrained solutions in $\Omega_{1-5-9}$ of (3.3) by Algorithm 3, which thereby also solves Problem 3.2.
Algorithm 3: MCG method for solving Problem 3.2

Step 1. Choose an arbitrary initial matrix $(Y_1^{(1)},Y_2^{(1)},Y_3^{(1)})\in\Omega_{1-5-9}$, set $k:=1$ and compute

$$R_k=Q-g(Y^{(k)}),\qquad \tilde R_k=g(R_k),\qquad Z_k=\tilde R_k.$$

Step 2. If $R_k=O$, stop; otherwise, compute

$$\alpha_k=\frac{\|R_k\|^2}{\|Z_k\|^2},\qquad Y^{(k+1)}=Y^{(k)}+\alpha_kZ_k.$$

Step 3. Compute

$$R_{k+1}=Q-g(Y^{(k+1)}),\qquad \tilde R_{k+1}=g(R_{k+1}),\qquad \beta_{k+1}=\frac{\|R_{k+1}\|^2}{\|R_k\|^2},\qquad Z_{k+1}=\tilde R_{k+1}+\beta_{k+1}Z_k.$$

Step 4. Set $k:=k+1$ and go to Step 2.
From Algorithm 3 we can see that $(Y_1^{(k)},Y_2^{(k)},Y_3^{(k)})\in\Omega_{1-5-9}$ for $k=1,2,\cdots$, and we have the following convergence property (the proof is similar to that of Theorem 2 in [13]).

Theorem 3.3. For an arbitrary initial matrix $(Y_1^{(1)},Y_2^{(1)},Y_3^{(1)})\in\Omega_{1-5-9}$, a solution of Problem 3.2 can be obtained by Algorithm 3 within a finite number of iterations, and it is a constrained least-squares solution in $\Omega_{1-5-9}$ of (3.1).
4. Numerical experiments
In this section, we design two computation programmes to find constrained solutions of (1.1), and two numerical examples are given to illustrate the proposed results. All computations are performed in MATLAB. Because of the influence of roundoff errors, we regard a matrix $A$ as the zero matrix if $\|A\|\le 10^{-7}$.
Let $n$ be the order of the matrices $X_i$; let $k$, $k_1$, $k_2$ be the iteration numbers of Algorithm 1, Algorithm 2 and Algorithm 3, respectively; and let $t$ be the computation time (in seconds).
Programme 1.
(1) Choose an initial matrix $(X_1^{(1)},X_2^{(1)},X_3^{(1)})\in\Omega_{1-5-9}$ and set $k:=1$.
(2) If $\psi(X^{(k)})=O$, stop; otherwise, solve for $(Y_1^{(k)},Y_2^{(k)},Y_3^{(k)})\in\Omega_{1-5-9}$ from (2.4) using Algorithm 2. When Algorithm 2 breaks down, that is, when (2.4) has no constrained solutions in $\Omega_{1-5-9}$, solve for $(Y_1^{(k)},Y_2^{(k)},Y_3^{(k)})\in\Omega_{1-5-9}$ from (2.5) using Algorithm 3.
(3) Compute $X^{(k+1)}=X^{(k)}+Y^{(k)}$, set $k:=k+1$ and go to step (2).
Programme 2.
(1) Choose an initial matrix $(X_1^{(1)},X_2^{(1)},X_3^{(1)})\in\Omega_{1-5-9}$ and set $k:=1$.
(2) If $\psi(X^{(k)})=O$, stop; otherwise, solve for $(Y_1^{(k)},Y_2^{(k)},Y_3^{(k)})\in\Omega_{1-5-9}$ from (2.5) using Algorithm 3. In particular, when (2.4) has constrained solutions in $\Omega_{1-5-9}$, the constrained least-squares solutions in $\Omega_{1-5-9}$ are also its constrained solutions in $\Omega_{1-5-9}$.
(3) Compute $X^{(k+1)}=X^{(k)}+Y^{(k)}$, set $k:=k+1$ and go to step (2).
Example 4.1.Consider (1.1) with the following parameters:
We can easily see that (1.1) has a constrained solution $(X_1^*,X_2^*,X_3^*)\in\Omega_{1-5-9}$. Applying Programmes 1 and 2 with the initial matrices $X_i^{(1)}=\texttt{eye}(3)$, $Y_i^{(1)}=\texttt{zeros}(3)$ $(i=1,2,3)$, we obtain the following constrained solution in $\Omega_{1-5-9}$ of (1.1):
From the results in Table 1, we see that Programme 1 is more effective when the derived linear matrix equations always have constrained solutions in $\Omega_{1-5-9}$.
Example 4.2.Consider (1.1) with the following parameters:
Applying Programmes 1 and 2 with the initial matrices $X_i^{(1)}=\texttt{ones}(3)$, $Y_i^{(1)}=\texttt{zeros}(3)$ $(i=1,2,3)$, where $P_1$ and $P_2$ are identity matrices, we obtain the following special constrained solution in $\Omega_{1-5-9}$ of (1.1) (here $X_1$ and $X_3$ are symmetric matrices and $X_2$ is a general matrix):
Thus it can be seen that, when the constrained solution of Eq (1.1) is not unique, we can obtain different constrained solutions in $\Omega_{1-5-9}$ by choosing different initial matrices.
5. Conclusions
In this paper, an iterative algorithm is studied to find different constrained solutions. Using the proposed algorithm, we compute a set of different constrained solutions in $\Omega_{1-5-9}$ of multivariate quadratic matrix equations. The provided examples illustrate the effectiveness of the new iterative algorithm.
Further results can be obtained by changing the initial matrices $X_i^{(1)}$ and $Y_i^{(1)}$, the direction matrix $Z_k$ in Algorithm 2, and Eq (3.3) in Algorithm 3. In this way, other kinds of constrained solutions can be obtained, which are not only interesting but also valuable. This remains for future work.
Acknowledgments
This research was funded by Doctoral Fund Project of Shandong Jianzhu University grant number X18091Z0101.
Conflict of interest
The author declares no conflict of interest.
References
[1] N. J. Higham, H. M. Kim, Solving a quadratic matrix equation by Newton's method with exact line searches, SIAM J. Matrix Anal. Appl., 23 (2001), 303–316. doi: 10.1137/S0895479899350976
[2] C. R. Chen, R. C. Li, C. F. Ma, Highly accurate doubling algorithm for quadratic matrix equation from quasi-birth-and-death process, Linear Algebra Appl., 583 (2019), 1–45. doi: 10.1016/j.laa.2019.08.018
[3] V. Angelova, M. Hached, K. Jbilou, Sensitivity of the solution to nonsymmetric differential matrix Riccati equation, Mathematics, 9 (2021), 1–18. doi: 10.3390/math9080855
[4] D. A. Bini, B. Iannazzo, F. Poloni, A fast Newton's method for a nonsymmetric algebraic Riccati equation, SIAM J. Matrix Anal. Appl., 30 (2008), 276–290. doi: 10.1137/070681478
[5] P. Benner, Z. Bujanović, P. Kürschner, J. Saak, RADI: A low-rank ADI-type algorithm for large scale algebraic Riccati equations, Numer. Math., 138 (2018), 301–330. doi: 10.1007/s00211-017-0907-5
[6] C. L. Liu, J. G. Xue, R. C. Li, Accurate numerical solution for shifted M-matrix algebraic Riccati equations, J. Sci. Comput., 84 (2020), 1–27. doi: 10.1007/s10915-020-01263-4
[7] A. G. Wu, H. J. Sun, Y. Zhang, A novel iterative algorithm for solving coupled Riccati equations, Appl. Math. Comput., 364 (2020), 1–14. doi: 10.1016/j.amc.2019.124645
[8] S. Koskie, C. Coumarbatch, Z. Gajic, Exact slow-fast decomposition of the singularly perturbed matrix differential Riccati equation, Appl. Math. Comput., 216 (2010), 1401–1411. doi: 10.1016/j.amc.2010.02.040
[9] E. Burlakov, V. Verkhlyutov, I. Malkov, V. Ushakov, Assessment of cortical travelling waves parameters using radially symmetric solutions to neural field equations with microstructure, In: Proceedings of international conference on neuroinformatics, Cham: Springer, 2020, 51–57. doi: 10.1007/978-3-030-60577-3_5
[10] M. Dehghan, M. Hajarian, An iterative algorithm for solving a pair of matrix equations AYB=E, CYD=F over generalized centro-symmetric matrices, Comput. Math. Appl., 56 (2008), 3246–3260. doi: 10.1016/j.camwa.2008.07.031
[11] T. X. Yan, C. F. Ma, The BCR algorithms for solving the reflexive or anti-reflexive solutions of generalized coupled Sylvester matrix equations, J. Franklin I., 357 (2020), 12787–12807. doi: 10.1016/j.jfranklin.2020.09.030
[12] A. P. Liao, Z. Z. Bai, The constrained solutions of two matrix equations, Acta Math. Sinica, 18 (2002), 671–678. doi: 10.1007/s10114-002-0204-8
[13] X. M. Liu, K. Y. Zhang, MCG method for a different constrained least square solution of two-variables linear matrix equations for recurrent event data (in Chinese), Acta Math. Appl. Sin., 34 (2011), 938–948.
[14] J. H. Long, X. Y. Hu, L. Zhang, Improved Newton's method with exact line searches to solve quadratic matrix equation, J. Comput. Appl. Math., 222 (2008), 645–654. doi: 10.1016/j.cam.2007.12.018
[15] C. H. Guo, W. W. Lin, Convergence rates of some iterative methods for nonsymmetric algebraic Riccati equations arising in transport theory, Linear Algebra Appl., 432 (2010), 283–291. doi: 10.1016/j.laa.2009.08.004
[16] K. Y. Zhang, S. S. Zhu, X. M. Liu, An iterative method to find symmetric solutions of two-variable Riccati matrix equation (in Chinese), Acta Math. Appl. Sin., 36 (2013), 831–839.
[17] S. S. Zhu, K. Y. Zhang, An iterative method for different constrained solutions of two-variable Riccati matrix equation (in Chinese), J. Syst. Sci. Math. Sci., 33 (2013), 197–205. doi: 10.12341/jssms12039
[18] C. Q. Lv, C. F. Ma, A modified CG algorithm for solving generalized coupled Sylvester tensor equations, Appl. Math. Comput., 365 (2020), 1–15. doi: 10.1016/j.amc.2019.124699
[19] X. D. Zhang, Matrix analysis and applications, 1 Ed., Cambridge: Cambridge University Press, 2017.