1. Introduction
Quadratic matrix equations arise in many different forms, such as $ A X^{2}+ B X+ C = O $, which appears in quasi-birth-death processes [1,2], and the Riccati equation $ X C X- X E- A X+ B = O $, which appears in transport theory [3,4,5,6]. There are also coupled quadratic matrix equations with two or three variables [7,8]. This article studies a general form covering these equations: the linear part can be expressed as $ \sum\nolimits_{i = 1}^3 C_{i}^{(l)} X_i D_{i}^{(l)} $, and the quadratic part as $ \sum\nolimits_{i, j = 1}^3 X_i E_{ij}^{(l)} X_j $. Therefore, we study the following coupled quadratic matrix equation:
$$ \sum\limits_{i = 1}^{3} C_{i}^{(l)} X_i D_{i}^{(l)}+\sum\limits_{i, j = 1}^{3} X_i E_{ij}^{(l)} X_j+F^{(l)} = O, \quad l = 1, 2, \tag{1.1} $$
where all matrices are $ n \times n $ real matrices and the $ F^{(l)} $ are given constant matrices. Each equation in (1.1) consists of three linear terms and nine quadratic terms. Besides, Eq (1.1) has three variables but only two equations, so the solution is not unique.
As is well known, practical applications often require solutions of a special kind: symmetric solutions are widely used in control theory [8,9], and reflexive solutions, also called generalized centro-symmetric solutions, are used in information theory, linear estimation theory and numerical analysis [10,11]. Liao, Liu et al. have studied the problem of different constrained solutions of linear matrix equations [12,13]. In this article, we design a method to obtain different constrained solutions of a class of quadratic matrix equations.
Many researchers have studied quadratic matrix equations. For example, Bini, Iannazzo and Poloni gave a fast Newton's method for a quadratic matrix equation [4]. Long, Hu and Zhang used an improved Newton's method to solve a quadratic matrix equation [14]. The convergence rate of some iterative methods for quadratic matrix equations arising in transport theory was described by Guo and Lin [15]. Zhang, Zhu and Liu studied the constrained solutions of two-variable Riccati matrix equations based on Newton's method and the modified conjugate gradient (MCG) method [16,17]. This article further studies the problem of different constrained solutions of coupled quadratic matrix equations with three matrix variables; the algorithm designed here is well suited to computing such different constrained solutions.
Notation: $ R^{n\times n} $ denotes the set of $ n \times n $ real matrices. The symbols $ A^\mathrm{T} $ and $ \mathrm{tr}{(A)} $ represent the transpose and the trace of the matrix $ A $, respectively. $ A \otimes B $ stands for the Kronecker product of the matrices $ A $ and $ B $, and $ \overline{\mathrm{vec}}(\cdot) $ is the operator that transforms a matrix $ A $ into a column vector by vertically stacking the columns of $ A^\mathrm{T} $, i.e., by stacking the rows of $ A $. For example, for the $ 2 \times 2 $ matrix
$$ A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, $$
the vectorization is $ { \overline{\mathrm{vec}} (A) = [a, b, c, d]^{\mathrm {T} }} $. We define an inner product of two matrices $ A, B\in R^{n\times n} $ as $ [A, B] = \mathrm{tr}({A^\mathrm{T}}B) $; the norm of a matrix $ A $ generated by this inner product is the Frobenius norm, denoted by $ \|A\| $, i.e. $ \|A\| = \sqrt{[A, A]} $.
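As a quick illustration of these conventions, the following MATLAB sketch (the helper names `vecbar` and `iprod` are our own, not notation from the paper) verifies that $ \overline{\mathrm{vec}} $ stacks the rows of $ A $ and that the induced norm is the Frobenius norm:

```matlab
% Minimal sketch of the notation above; the helper names vecbar and
% iprod are ours, not the paper's.
A = [1 2; 3 4];
vecbar = @(X) reshape(X.', [], 1);         % stack the columns of X', i.e. the rows of X
iprod  = @(X, Y) trace(X.' * Y);           % inner product [X, Y] = tr(X^T Y)
disp(vecbar(A).')                          % prints 1 2 3 4
disp(sqrt(iprod(A, A)) - norm(A, 'fro'))   % ~0: the induced norm is Frobenius
```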
Let $ \Omega_1 $ be the set of symmetric matrices. Matrices $ P_1, P_2\in R^{n\times n} $ are said to be symmetric orthogonal if $ P_i = P_i^\mathrm{T} $ and $ P_i^2 = I\ (i = 1, 2) $. A matrix $ X \in R^{n\times n} $ is said to be reflexive with respect to $ P_1 $ if $ P_1 X P_1 = X $; let $ \Omega_5 $ be the set of reflexive matrices. A matrix $ X \in R^{n\times n} $ is said to be symmetric reflexive with respect to $ P_2 $ if $ X^\mathrm{T} = X = P_2 X P_2 $; let $ \Omega_9 $ be the set of symmetric reflexive matrices. We call $ (X_1, X_2, X_3) $ a constrained matrix in $ \Omega_{1-5-9} $ when $ X_1 \in \Omega_1 $, $ X_2 \in \Omega_5 $ and $ X_3 \in \Omega_9 $. Besides, if the symmetric orthogonal matrices $ P_1 $ and $ P_2 $ are changed, we obtain different constrained matrices in $ \Omega_{1-5-9} $.
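For concreteness, the following sketch builds a symmetric orthogonal matrix and constructs members of the constraint sets; the particular choice of $ P $ (a Householder matrix) and the tolerance are our own illustration, not prescribed by the paper:

```matlab
% Minimal sketch of the constraint sets; the choice of P and the
% tolerance are our own illustration.
n = 3;
v = [1; -1; 1] / sqrt(3);
P = eye(n) - 2 * (v * v.');            % Householder matrix: P = P', P^2 = I

is_sym    = @(X) norm(X - X.', 'fro') < 1e-12;           % X in Omega_1
is_reflex = @(X, P) norm(P * X * P - X, 'fro') < 1e-12;  % X in Omega_5

A  = magic(n);
X2 = A + P * A * P;                    % reflexive by construction: P*X2*P = X2
S  = A + A.';
X3 = S + P * S * P;                    % symmetric reflexive (Omega_9)
disp([is_reflex(X2, P), is_sym(X3) && is_reflex(X3, P)])   % prints 1 1
```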
The paper is organized as follows. First, we use Newton's method to convert the quadratic matrix equations into linear matrix equations. Second, the MCG method [10,13,16,18] is applied to solve the derived linear matrix equations. Finally, numerical examples are presented to support the theoretical results of this paper.
2.
Newton's method for solving quadratic matrix equations
As a matter of convenience, we first introduce some notations.
$ Y $, $ Y^{(k)} $ and $ Y^* $ are defined in the same way as $ X $, $ X^{(k)} $ and $ X^* $, respectively. Then let
$$ \psi^{(l)}(X) = \sum\limits_{i = 1}^{3} C_{i}^{(l)} X_i D_{i}^{(l)}+\sum\limits_{i, j = 1}^{3} X_i E_{ij}^{(l)} X_j+F^{(l)}, \quad l = 1, 2; $$
we can obtain
$$ \psi^{(l)}(X+Y) = \psi^{(l)}(X)+\phi_X^{(l)}(Y)+\sum\limits_{i, j = 1}^{3} Y_i E_{ij}^{(l)} Y_j, \quad l = 1, 2, \tag{2.1} $$
with
$$ \phi_X^{(l)}(Y) = \sum\limits_{i = 1}^{3} C_{i}^{(l)} Y_i D_{i}^{(l)}+\sum\limits_{i, j = 1}^{3} \big( X_i E_{ij}^{(l)} Y_j+Y_i E_{ij}^{(l)} X_j \big), $$
where $ \phi_X^{(l)}(Y) $: $ R^{n\times n} \rightarrow R^{n\times n} $ is the Fréchet derivative of $ \psi^{(l)}(X) $ at $ X $ in the direction $ Y $ [1].
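To make the expansion (2.1) concrete, here is a sketch (using the form of $ \psi^{(l)} $ above for a single fixed $ l $; all coefficient matrices are random stand-ins of our own choosing) that checks the Fréchet derivative numerically: the remainder $ \psi(X+tY)-\psi(X)-t\,\phi_X(Y) $ is exactly the quadratic term, so it scales as $ t^2 $.

```matlab
% Sketch: finite-difference check of the expansion (2.1) for one fixed l.
% All coefficient matrices are random stand-ins of our own choosing.
rng(1); n = 3;
C = cell(1,3); D = cell(1,3); E = cell(3,3); X = cell(1,3); Y = cell(1,3);
for i = 1:3
    C{i} = randn(n); D{i} = randn(n); X{i} = randn(n); Y{i} = randn(n);
    for j = 1:3, E{i,j} = randn(n); end
end
F = randn(n);

for t = [1e-2 1e-3 1e-4]
    Xt = cellfun(@(a, b) a + t*b, X, Y, 'UniformOutput', false);
    R  = psi_fun(Xt,C,D,E,F) - psi_fun(X,C,D,E,F) - t*phi_fun(X,Y,C,D,E);
    fprintf('t = %.0e   remainder = %.3e\n', t, norm(R, 'fro'));   % O(t^2)
end

function S = psi_fun(X, C, D, E, F)
% psi(X) = sum_i C_i X_i D_i + sum_{i,j} X_i E_ij X_j + F
S = F;
for i = 1:3
    S = S + C{i} * X{i} * D{i};
    for j = 1:3, S = S + X{i} * E{i,j} * X{j}; end
end
end

function S = phi_fun(X, Y, C, D, E)
% Frechet derivative of psi at X in the direction Y
S = zeros(size(X{1}));
for i = 1:3
    S = S + C{i} * Y{i} * D{i};
    for j = 1:3, S = S + X{i} * E{i,j} * Y{j} + Y{i} * E{i,j} * X{j}; end
end
end
```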
Lemma 2.1. Finding the solution $ (X_1^*, X_2^*, X_3^*)\in \Omega_{1-5-9} $ of (1.1) can be transformed into finding the correction $ (Y_1, Y_2, Y_3)\in \Omega_{1-5-9} $ of $ \psi^{(l)}(X+Y) = O\ (l = 1, 2) $. We linearize and solve, finding $ (Y_1, Y_2, Y_3)\in \Omega_{1-5-9} $ from the coupled linear matrix equation
$$ \psi^{(l)}(X)+\phi_X^{(l)}(Y) = O, \quad l = 1, 2. \tag{2.2} $$
Proof. Suppose that an approximate solution $ (X_1, X_2, X_3)\in \Omega_{1-5-9} $ of Eq (1.1) has been obtained. Let $ X_i^* = X_i+Y_i \ (i = 1, 2, 3) $; then finding $ (X_1^*, X_2^*, X_3^*)\in \Omega_{1-5-9} $ of (1.1) is transformed into finding the correction $ (Y_1, Y_2, Y_3)\in \Omega_{1-5-9} $ from
$$ \psi^{(l)}(X+Y) = \psi^{(l)}(X)+\phi_X^{(l)}(Y)+\sum\limits_{i, j = 1}^{3} Y_i E_{ij}^{(l)} Y_j = O, \quad l = 1, 2. \tag{2.3} $$
Eq (2.3) is a system of quadratic equations in the $ Y_i $. As is known, when the norms of the $ Y_i $ are small enough, the quadratic parts $ \sum\nolimits_{i, j = 1}^3 Y_i E_{ij}^{(l)} Y_j $ in (2.1) can be discarded according to Newton's method. In this way, we get the linear approximation
$$ \psi^{(l)}(X+Y) \approx \psi^{(l)}(X)+\phi_X^{(l)}(Y), \quad l = 1, 2. $$
Therefore, finding the solution $ (X_1^*, X_2^*, X_3^*)\in \Omega_{1-5-9} $ of (1.1) is transformed into finding $ (Y_1, Y_2, Y_3)\in \Omega_{1-5-9} $ from $ \psi^{(l)}(X)+\phi_X^{(l)}(Y) = O\ (l = 1, 2) $, that is, to solve (2.2).
According to [14], Newton's method (algorithm 1) is introduced to find constrained solutions in $ \Omega_{1-5-9} $ of (1.1). Let
$$ \psi(X) = \big( \psi^{(1)}(X), \psi^{(2)}(X) \big), \qquad \phi_X(Y) = \big( \phi_X^{(1)}(Y), \phi_X^{(2)}(Y) \big). $$
The following convergence properties of Newton's method can be obtained according to [14] (the proof is similar to that of Lemma 2.1 in [14]).
Theorem 2.1. Assume that the real matrix $ X^* $ is a simple root of (1.1), i.e. $ \psi(X^*) = O $ and $ \phi_{X^*}(Y) $ is regular. Then if the starting matrix $ X^{(1)} $ is chosen sufficiently close to the solution $ X^* $, the sequence $ \{X^{(k)}\} $ generated by Newton's method converges quadratically to the solution $ X^* $.
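As an illustration of this quadratic convergence, the following sketch runs Newton's method on the one-variable special case $ A X^{2}+ B X+ C = O $ from the introduction; the test data and the unconstrained Kronecker-product linearization are our own (no $ \Omega $-constraints are imposed here):

```matlab
% Sketch: quadratic convergence of Newton's method on A X^2 + B X + C = O.
% Test data and the unconstrained linearization are our own illustration.
rng(2); n = 4;
A = randn(n); B = randn(n);
Xstar = randn(n);
C = -(A * Xstar^2 + B * Xstar);          % so that Xstar is an exact root
X = Xstar + 0.1 * randn(n);              % start near the root
I = eye(n);
for k = 1:6
    psiX = A * X^2 + B * X + C;
    fprintf('k = %d   ||psi(X)|| = %.3e\n', k, norm(psiX, 'fro'));
    % phi_X(Y) = A*X*Y + A*Y*X + B*Y, vectorized via vec(AYB) = kron(B',A)vec(Y):
    M = kron(I, A * X) + kron(X.', A) + kron(I, B);
    Y = reshape(M \ (-psiX(:)), n, n);   % Newton correction: phi_X(Y) = -psi(X)
    X = X + Y;
end
```

The printed residual norms shrink roughly quadratically, each close to the square of the previous one, until roundoff level is reached.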
3. MCG method for solving linear matrix equations
In algorithm 1, when $ X^{(k)} $ is known, $ Y^{(k)} $ needs to be solved for. In this section, the MCG method is used to solve for $ Y^{(k)} $ in Eq (2.2), that is, to solve Eq (2.4) or Eq (2.5). Consider the general form of Eq (2.2)
where all matrices in Eq (3.1) are $ n \times n $ real matrices. Let
where
and $ P_1, P_2\in R^{n\times n} $ are symmetric orthogonal matrices.
In order to solve Eq (3.1), the following two problems will be considered.
Problem 3.1. Assume that (3.1) has constrained solutions, find $ (Y_1, Y_2, Y_3) \in\Omega_{1-5-9} $ from (3.1).
Problem 3.2. Assume that (3.1) has no constrained solution; find $ (Y_1, Y_2, Y_3) \in\Omega_{1-5-9} $ such that
3.1. MCG method for solving problem 3.1
Based on the MCG method, we establish the following algorithm (algorithm 2) to solve problem 3.1.
From algorithm 2, we can easily see that $ (Y_{1}^{(k)}, Y_{2}^{(k)}, Y_{3}^{(k)})\in \Omega_{1-5-9} $ for $ k = 1, 2, \cdots $, and we have the following convergence properties (the proof is similar to that of Theorem 2.1 in [10]):
Theorem 3.1. Assume that Eq (3.1) has constrained solutions in $ \Omega_{1-5-9} $. Then for an arbitrary initial matrix $ (Y_{1}^{(1)}, Y_{2}^{(1)}, Y_{3}^{(1)})\in\Omega_{1-5-9} $, a solution of problem 3.1 can be obtained by algorithm 2 within a finite number of iterations, and it is a constrained solution in $ \Omega_{1-5-9} $ of (3.1).
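In published MCG methods of this type [10,13,16,18], the search directions of a conjugate gradient iteration are projected onto the constraint set, so every iterate stays constrained. As a minimal single-variable sketch of this idea (our own simplification, not the paper's algorithm 2), the following solves $ AYB = F $ for a symmetric $ Y $ by running CG on the normal equation and symmetrizing each direction:

```matlab
% Minimal single-variable sketch of the MCG idea (our own simplification):
% solve A*Y*B = F for symmetric Y; projecting each direction onto the
% symmetric matrices keeps all iterates in Omega_1.
rng(3); n = 4;
A = randn(n); B = randn(n);
Ys = randn(n); Ys = Ys + Ys.';          % a symmetric exact solution
F = A * Ys * B;

sym  = @(M) (M + M.') / 2;              % orthogonal projection onto Omega_1
Lop  = @(Y) A * Y * B;                  % the linear operator
Lad  = @(R) sym(A.' * R * B.');         % its adjoint, followed by projection

Y = zeros(n);                           % symmetric initial guess
R = F - Lop(Y); Z = Lad(R); D = Z; g = norm(Z, 'fro')^2;
for k = 1:200
    if norm(R, 'fro') < 1e-10, break; end
    W = Lop(D);
    a = g / norm(W, 'fro')^2;
    Y = Y + a * D;                      % stays symmetric: D is symmetric
    R = R - a * W;
    Z = Lad(R); gn = norm(Z, 'fro')^2;
    D = Z + (gn / g) * D; g = gn;
end
fprintf('iters = %d, residual = %.2e, symmetry = %.2e\n', ...
        k, norm(R, 'fro'), norm(Y - Y.', 'fro'));
```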
3.2. MCG method for solving problem 3.2
Algorithm 2 will break down if $ R_{i}\neq O $ and $ Z_{i} = O $, which means that Eq (3.1) has no constrained solution in $ \Omega_{1-5-9} $ according to Theorem 3.1. Therefore, we need to solve problem 3.2, that is, to find constrained least-squares solutions of (3.1).
By Theorem 3.2 below, we replace the problem of finding least-squares solutions in $ \Omega_{1-5-9} $ of (3.1) with that of finding solutions in $ \Omega_{1-5-9} $ of equivalent linear matrix equations, and then an iterative algorithm for finding constrained least-squares solutions in $ \Omega_{1-5-9} $ of (3.1) is constructed along the lines of algorithm 2.
As a matter of convenience, we introduce some notations:
where
$ u $ and $ v $ are functions of $ Y $. Then, according to [16,17], we have the following theorem.
Theorem 3.2. The iterative algorithm for solving problem 3.2 can be replaced by finding constrained solutions in $ \Omega_{1-5-9} $ from
Indeed, (3.3) has constrained solutions in $ \Omega_{1-5-9} $.
Proof. When $ (Y_{1}, Y_{2}, Y_{3})\in\Omega_{1-5-9} $, we have $ Y_1 = Y_1^\mathrm{T} $, $ Y_2 = P_1 Y_2 P_1 $ and $ Y_3 = Y_3^\mathrm{T} = P_2 Y_3 P_2 $. Therefore, solving problem 3.2 is equivalent to finding $ (Y_{1}, Y_{2}, Y_{3})\in \Omega_{1-5-9} $ from
Now we have to prove that solving problem (3.4) is equivalent to finding constrained solutions in $ \Omega_{1-5-9} $ of (3.3). We stipulate that matrix multiplication takes precedence over the Kronecker product. Let
where $ T_{n, n} $ denotes the commutation matrix such that $ T_{n, n}\overline{\mathrm{vec}}(A_{n\times n}) = \overline{\mathrm{vec}}(A^\mathrm{T}) $ [19], and we let $ T_{n, n} $ act only on $ \overline{\mathrm{vec}} $. Then, applying $ \overline{\mathrm{vec}} $ to the following equations:
we get the equivalent equation $ M y = f $. Moreover, $ M^\mathrm{T} M y = M^\mathrm{T} f $, the normal equation of $ M y = f $, is the vectorization of (3.3). Therefore, a least-squares solution of $ M y = f $ is also a solution of $ M^\mathrm{T} M y = M^\mathrm{T} f $, i.e., the vectorization of a solution of (3.3). So a solution of (3.4) is also a solution of (3.3), and vice versa.
In summary, the iterative algorithm for solving problem 3.2 can be replaced by finding constrained solutions in $ \Omega_{1-5-9} $ of (3.3).
As is well known, normal equations always have solutions, and the vectorization of Eq (3.3) is a normal equation, so Eq (3.3) also has solutions. Suppose $ \tilde{Y} = (\tilde{Y}_1^\mathrm{T}, \tilde{Y}_2^\mathrm{T}, \tilde{Y}_3^\mathrm{T})^\mathrm{T} $ (whether $ \tilde{Y} \in \Omega_{1-5-9} $ or not) is a solution of (3.3); then $ g(\tilde{Y}) = Q $. Let $ Y_{i}^* = q_i(\tilde{Y})\ (i = 1, 2, 3) $; then $ (Y_{1}^*, Y_{2}^*, Y_{3}^*)\in\Omega_{1-5-9} $ and $ g(Y^*) = Q $. Hence, (3.3) has constrained solutions in $ \Omega_{1-5-9} $.
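Two of the facts used in this proof are easy to check numerically; in the sketch below, the explicit construction of $ T_{n, n} $ as a permutation matrix is our own (cf. [19]):

```matlab
% Sketch: the commutation matrix T_{n,n} and the normal equation.
% The explicit permutation construction is ours (cf. [19]).
n = 3;
A = magic(n);
vecbar = @(X) reshape(X.', [], 1);
p = reshape(reshape(1:n^2, n, n).', [], 1);      % index permutation
In2 = eye(n^2);
T = In2(p, :);                                   % T * vec(X) = vec(X')
disp(norm(T * vecbar(A) - vecbar(A.')))          % 0: T_{n,n} vecbar(A) = vecbar(A')
disp(norm(T * T - In2))                          % 0: T_{n,n} is an involution

% A least-squares solution of M*y = f solves the normal equation:
rng(5); M = randn(8, 5); f = randn(8, 1);
y = M \ f;                                       % least-squares solution
disp(norm(M.' * M * y - M.' * f))                % ~0
```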
We use the MCG method to find constrained solutions in $ \Omega_{1-5-9} $ of (3.3) by algorithm 3, which is thus also a process for solving problem 3.2.
From algorithm 3, we can see that $ (Y_{1}^{(k)}, Y_{2}^{(k)}, Y_{3}^{(k)})\in\Omega_{1-5-9} $ for $ k = 1, 2, \cdots $, and we have the following convergence properties (the proof is similar to that of Theorem 2 in [13]).
Theorem 3.3. For an arbitrary initial matrix $ (Y_{1}^{(1)}, Y_{2}^{(1)}, Y_{3}^{(1)})\in\Omega_{1-5-9} $, a solution of problem 3.2 can be obtained by algorithm 3 within a finite number of iterations, and it is also a constrained least-squares solution in $ \Omega_{1-5-9} $ of (3.1).
4. Numerical experiments
In this section, we design two computation programmes to find constrained solutions of (1.1), and two numerical examples are given to illustrate the proposed results. All computations are performed in MATLAB. Because of the influence of roundoff errors, we regard a matrix $ A $ as the zero matrix if $ \|A\|\leq 10^{-7} $.
Let $ n $ be the order of the matrix $ X_i $, $ k, k_1, k_2 $ be the iteration numbers of algorithm 1, algorithm 2 and algorithm 3 respectively, and $ t $ be the computation time (seconds).
Programme 1.
(1) Choose an initial matrix $ (X_{1}^{(1)}, X_{2}^{(1)}, X_{3}^{(1)})\in\Omega_{1-5-9} $ and set $ k: = 1 $.
(2) If $ \psi(X^{(k)}) = O $, stop; otherwise, solve for $ (Y_{1}^{(k)}, Y_{2}^{(k)}, Y_{3}^{(k)})\in \Omega_{1-5-9} $ from (2.4) using algorithm 2. If algorithm 2 breaks down, that is, if (2.4) has no constrained solution in $ \Omega_{1-5-9} $, solve for $ (Y_{1}^{(k)}, Y_{2}^{(k)}, Y_{3}^{(k)})\in\Omega_{1-5-9} $ from (2.5) using algorithm 3.
(3) Compute $ X^{(k+1)} = X^{(k)}+ Y^{(k)} $, set $ k: = k+1 $ and go to step 2.
Programme 2.
(1) Choose an initial matrix $ (X_{1}^{(1)}, X_{2}^{(1)}, X_{3}^{(1)})\in\Omega_{1-5-9} $ and set $ k: = 1 $.
(2) If $ \psi(X^{(k)}) = O $, stop; otherwise, solve for $ (Y_{1}^{(k)}, Y_{2}^{(k)}, Y_{3}^{(k)})\in \Omega_{1-5-9} $ from (2.5) using algorithm 3 (a runnable sketch of this pattern is given after this programme). In particular, when (2.4) has constrained solutions in $ \Omega_{1-5-9} $, its constrained least-squares solutions in $ \Omega_{1-5-9} $ are also constrained solutions in $ \Omega_{1-5-9} $.
(3) Compute $ X^{(k+1)} = X^{(k)}+ Y^{(k)} $, set $ k: = k+1 $ and go to step 2.
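The following is a runnable sketch of this pattern for the one-variable special case $ A X^{2}+ B X+ C = O $ with a symmetric solution (the data are our own, and the inner solver is the CG iteration of section 3 restricted to symmetric matrices): an outer Newton loop whose correction $ Y $ is the least-squares symmetric solution of $ \psi(X)+\phi_X(Y) = O $.

```matlab
% Sketch of the programme-2 pattern on A X^2 + B X + C = O with a
% symmetric solution; data and solver details are our own illustration.
rng(4); n = 4;
A = randn(n); B = randn(n);
Xs = randn(n); Xs = Xs + Xs.';                 % symmetric exact solution
C = -(A * Xs^2 + B * Xs);

sym = @(M) (M + M.') / 2;
X = Xs + 0.1 * sym(randn(n));                  % symmetric starting matrix
for k = 1:8
    psiX = A * X^2 + B * X + C;
    fprintf('k = %d   ||psi(X)|| = %.3e\n', k, norm(psiX, 'fro'));
    if norm(psiX, 'fro') < 1e-10, break; end
    Lop = @(Y) A * X * Y + A * Y * X + B * Y;  % Frechet derivative at X
    Lad = @(R) sym((A * X).' * R + A.' * R * X.' + B.' * R);
    % CG on the normal equation over symmetric Y (least-squares correction):
    Y = zeros(n); R = -psiX; Z = Lad(R); D = Z; g = norm(Z, 'fro')^2;
    for m = 1:500
        if sqrt(g) < 1e-14, break; end
        W = Lop(D); a = g / norm(W, 'fro')^2;
        Y = Y + a * D; R = R - a * W;
        Z = Lad(R); gn = norm(Z, 'fro')^2;
        D = Z + (gn / g) * D; g = gn;
    end
    X = X + Y;                                 % X^{(k+1)} = X^{(k)} + Y^{(k)}
end
```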
Example 4.1. Consider (1.1) with the following parameters:
where
We can easily see that (1.1) has a constrained solution $ (X_{1}^*, X_{2}^*, X_{3}^*)\in\Omega_{1-5-9} $. By applying programmes 1 and 2 with the initial matrices $ X_{i}^{(1)} = eye(3) $, $ Y_{i}^{(1)} = zeros(3)\ (i = 1, 2, 3) $, we obtain the constrained solution in $ \Omega_{1-5-9} $ of (1.1) as follows:
The iteration numbers and computation time are listed in Table 1.
From the results in Table 1, we see that programme 1 is more effective when the derived linear matrix equations always have constrained solutions in $ \Omega_{1-5-9} $.
Example 4.2. Consider (1.1) with the following parameters:
where
By applying programmes 1 and 2 with the initial matrices $ X_{i}^{(1)} = ones(3) $, $ Y_{i}^{(1)} = zeros(3)\ (i = 1, 2, 3) $, where $ P_1 $ and $ P_2 $ are identity matrices, we obtain a special constrained solution in $ \Omega_{1-5-9} $ of (1.1) (here $ X_1 $ and $ X_3 $ are symmetric matrices and $ X_2 $ is a general matrix) as follows:
When $ X_{i}^{(1)} = eye(3)\ (i = 1, 2, 3) $ and the other settings remain unchanged, we obtain another special constrained solution in $ \Omega_{1-5-9} $ of (1.1) as follows:
Thus it can be seen that, when the constrained solution of Eq (1.1) is not unique, we can obtain different constrained solutions in $ \Omega_{1-5-9} $ by choosing different initial matrices.
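This behaviour is already visible in the single-variable setting: in the sketch below (our own illustration, not the paper's example), an underdetermined equation $ AYB = F $ with symmetric $ Y $ is solved by the CG iteration of section 3 from two different initial matrices, producing two different symmetric solutions.

```matlab
% Sketch (ours): different initial matrices give different constrained
% solutions when the solution is not unique. Here A*Y*B = F is
% underdetermined and Y is required to be symmetric.
rng(6);
A = randn(2, 4); B = randn(4, 2);
Ys = randn(4); Ys = Ys + Ys.';
F = A * Ys * B;                            % consistent by construction

Y1 = cg_sym(zeros(4), A, B, F);            % start from the zero matrix
Y2 = cg_sym(eye(4),   A, B, F);            % start from the identity
fprintf('residuals: %.1e  %.1e   ||Y1 - Y2|| = %.2f\n', ...
        norm(A*Y1*B - F, 'fro'), norm(A*Y2*B - F, 'fro'), ...
        norm(Y1 - Y2, 'fro'));             % both solve the equation, Y1 ~= Y2

function Y = cg_sym(Y, A, B, F)
% CG on the normal equation over symmetric matrices (as in section 3).
sym = @(M) (M + M.') / 2;
R = F - A * Y * B; Z = sym(A.' * R * B.'); D = Z; g = norm(Z, 'fro')^2;
for m = 1:500
    if norm(R, 'fro') < 1e-12, break; end
    W = A * D * B; a = g / norm(W, 'fro')^2;
    Y = Y + a * D; R = R - a * W;
    Z = sym(A.' * R * B.'); gn = norm(Z, 'fro')^2;
    D = Z + (gn / g) * D; g = gn;
end
end
```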
5. Conclusions
In this paper, an iterative algorithm is studied to find different constrained solutions. By using the proposed algorithm, we compute a set of different constrained solutions in $ \Omega_{1-5-9} $ of multivariate quadratic matrix equations. The provided examples illustrate the effectiveness of the new iterative algorithm.
Further results can be obtained by changing the initial matrices $ X_i^{(1)} $ and $ Y_i^{(1)} $, the direction matrix $ Z_k $ in algorithm 2, and Eq (3.3) in algorithm 3. In this way, we can obtain other kinds of constrained solutions, which are not only interesting but also valuable; this remains to be studied in our future work.
Acknowledgments
This research was funded by Doctoral Fund Project of Shandong Jianzhu University grant number X18091Z0101.
Conflict of interest
The author declares no conflict of interest.