Quadratic matrix equations arise in many different forms, such as $ A X^{2}+ B X+ C = O $ in quasi-birth-death processes [1,2] and the Riccati equation $ X C X- X E- A X+ B = O $ in transport theory [3,4,5,6]. There are also coupled quadratic matrix equations with two or three variables [7,8]. This article studies a general form of these equations. As can be seen, the linear part of such equations can be expressed in the form $ \sum\nolimits_{i = 1}^3 C_{i}^{(l)} X_i D_{i}^{(l)} $, and the quadratic part can be expressed as $ \sum\nolimits_{i, j = 1}^3 X_i E_{ij}^{(l)} X_j $. Therefore, we study the following coupled quadratic matrix equation:

$ \sum\nolimits_{i = 1}^3 C_{i}^{(l)} X_i D_{i}^{(l)}+ \sum\nolimits_{i, j = 1}^3 X_i E_{ij}^{(l)} X_j = S^{(l)} \quad (l = 1, 2), \qquad (1.1) $

where all matrices are $ n \times n $ real matrices. Each equation in (1.1) consists of three linear terms and nine quadratic terms. Moreover, Eq (1.1) has three variables but only two equations, so the solution is not unique.

As is well known, special kinds of solutions are often needed in practical applications: for example, symmetric solutions are widely used in control theory [8,9], and reflexive solutions, also called generalized centro-symmetric solutions, are used in information theory, linear estimation theory and numerical analysis [10,11]. Liao, Liu and others have studied the problem of different constrained solutions of linear matrix equations [12,13]. In this article, we design a method to obtain different constrained solutions of a class of quadratic matrix equations.

Many researchers have studied quadratic matrix equations. For example, Bini, Iannazzo and Poloni gave a fast Newton's method for a quadratic matrix equation [4]. Long, Hu and Zhang used an improved Newton's method to solve a quadratic matrix equation [14]. The convergence rate of some iterative methods for quadratic matrix equations arising in transport theory was described by Guo and Lin [15]. Zhang, Zhu and Liu studied the constrained solutions of two-variable Riccati matrix equations based on Newton's method and the modified conjugate gradient (MCG) method [16,17]. This article further studies the problem of finding different constrained solutions of coupled quadratic matrix equations with three matrix variables; the algorithm designed here is well suited to computing such solutions.

    Notation: $ R^{n\times n} $ denotes the set of $ n \times n $ real matrices. The symbols $ A^\mathrm{T} $ and $ \mathrm{tr}{(A)} $ represent the transpose and the trace of the matrix $ A $ respectively. $ A \otimes B $ stands for the Kronecker product of matrices $ A $ and $ B $, $ \overline{\mathrm{vec}}(\cdot) $ is an operator that transforms a matrix $ A $ into a column vector by vertically stacking the columns of the matrix $ A^\mathrm{T} $. For example, for the $ 2 \times 2 $ matrix

$ A = \left[\begin{array}{cc} a & b \\ c & d \end{array}\right], $

the vectorization is $ \overline{\mathrm{vec}}(A) = [a, b, c, d]^{\mathrm{T}} $. We define an inner product of two matrices $ A, B\in R^{n\times n} $ as $ [A, B] = \mathrm{tr}({A^\mathrm{T}}B) $; the norm of a matrix $ A $ generated by this inner product is then the Frobenius norm, denoted by $ \|A\| $, i.e., $ \|A\| = \sqrt{[A, A]} $.
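These definitions translate directly into MATLAB. A minimal sketch (the names vecbar and inner are ours):

    vecbar = @(A) reshape(A.', [], 1);   % stacks the columns of A', i.e., the rows of A
    inner  = @(A, B) trace(A.' * B);     % the inner product [A, B] = tr(A' B)
    A = [1 2; 3 4];
    vecbar(A)                            % returns [1; 2; 3; 4]
    sqrt(inner(A, A)) - norm(A, 'fro')   % zero: the induced norm is the Frobenius norm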

Let $ \Omega_1 $ be the set of symmetric matrices. $ P_1, P_2\in R^{n\times n} $ are said to be symmetric orthogonal matrices if $ P_i = P_i^\mathrm{T} $ and $ P_i^2 = I\ (i = 1, 2) $. $ X \in R^{n\times n} $ is said to be a reflexive matrix with respect to $ P_1 $ if $ P_1 X P_1 = X $; let $ \Omega_5 $ be the set of reflexive matrices. $ X \in R^{n\times n} $ is said to be a symmetric reflexive matrix with respect to $ P_2 $ if $ X^\mathrm{T} = X = P_2 X P_2 $; let $ \Omega_9 $ be the set of symmetric reflexive matrices. We call $ (X_1, X_2, X_3) $ a constrained matrix triple in $ \Omega_{1-5-9} $ when $ X_1 \in \Omega_1 $, $ X_2 \in \Omega_5 $ and $ X_3 \in \Omega_9 $. Note that different choices of the symmetric orthogonal matrices $ P_1 $ and $ P_2 $ give different constraint sets $ \Omega_{1-5-9} $.
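These properties are easy to check numerically. A small sketch, reusing the permutation matrix $ P_1 $ and the matrix $ X_2^* $ that appear in Example 4.1 below:

    P1 = [0 1 0; 1 0 0; 0 0 1];                  % symmetric orthogonal: P1 = P1' and P1^2 = I
    X2 = [1 0 0.5; 0 1 0.5; 0 0 2];              % reflexive with respect to P1
    isequal(P1, P1.') && isequal(P1^2, eye(3))   % true
    isequal(P1 * X2 * P1, X2)                    % true: X2 lies in Omega_5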

The paper is organized as follows: First, we use Newton's method to convert the quadratic matrix equations into linear matrix equations. Second, the MCG method [10,13,16,18] is applied to solve the derived linear matrix equations. Finally, numerical examples are presented to support the theoretical results of this paper.

    As a matter of convenience, we first introduce some notations.

$ X = \left[\begin{array}{c} X_1 \\ X_2 \\ X_3 \end{array}\right], \quad X^{(k)} = \left[\begin{array}{c} X_1^{(k)} \\ X_2^{(k)} \\ X_3^{(k)} \end{array}\right], \quad X^* = \left[\begin{array}{c} X_1^* \\ X_2^* \\ X_3^* \end{array}\right]. $

$ Y $, $ Y^{(k)} $ and $ Y^* $ are defined in the same way as $ X $, $ X^{(k)} $ and $ X^* $, respectively. Let

    $ \psi^{(l)}(X) = \sum\nolimits_{i = 1}^3 C_{i}^{(l)} X_i D_{i}^{(l)}+ \sum\nolimits_{i, j = 1}^3 X_i E_{ij}^{(l)} X_j - S^{(l)}, $
    $ \phi_X^{(l)}(Y) = \sum\nolimits_{i = 1}^3 C_{i}^{(l)} Y_i D_{i}^{(l)}+ \sum\nolimits_{i, j = 1}^3 X_i E_{ij}^{(l)} Y_j + \sum\nolimits_{i, j = 1}^3 Y_i E_{ij}^{(l)} X_j, $

Then we obtain

$ \psi^{(l)}(X+Y) = \psi^{(l)}(X)+ \phi_X^{(l)}(Y)+ \sum\nolimits_{i, j = 1}^3 Y_i E_{ij}^{(l)} Y_j \quad (l = 1, 2), \qquad (2.1) $

    where $ \phi_X^{(l)}(Y) $: $ R^{n\times n} \rightarrow R^{n\times n} $ is the Fréchet derivative of $ \psi^{(l)}(X) $ at $ X $ in the direction $ Y $ [1].
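In code, $ \psi^{(l)} $ and $ \phi_X^{(l)} $ are direct transcriptions of the sums above. A sketch, assuming the coefficients are held in cell arrays C{i,l}, D{i,l}, E{i,j,l} and S{l} (this storage layout is ours):

    function out = psi_l(X, C, D, E, S, l)
    % Evaluates psi^(l)(X) for X = {X1, X2, X3}: linear part + quadratic part - S^(l).
    out = -S{l};
    for i = 1:3
        out = out + C{i,l} * X{i} * D{i,l};
        for j = 1:3
            out = out + X{i} * E{i,j,l} * X{j};
        end
    end
    end

    function out = phi_l(X, Y, C, D, E, l)
    % Evaluates the Frechet derivative phi_X^(l)(Y): the part of psi^(l)(X+Y) linear in Y.
    out = zeros(size(X{1}));
    for i = 1:3
        out = out + C{i,l} * Y{i} * D{i,l};
        for j = 1:3
            out = out + X{i} * E{i,j,l} * Y{j} + Y{i} * E{i,j,l} * X{j};
        end
    end
    end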

Lemma 2.1. Finding a solution $ (X_1^*, X_2^*, X_3^*)\in \Omega_{1-5-9} $ of (1.1) can be transformed into finding a correction $ (Y_1, Y_2, Y_3)\in \Omega_{1-5-9} $ satisfying $ \psi^{(l)}(X+ Y) = O\ (l = 1, 2) $. After linearization, $ (Y_1, Y_2, Y_3)\in \Omega_{1-5-9} $ is found from the coupled linear matrix equation

$ \phi_X^{(l)}(Y) = -\psi^{(l)}(X) \quad (l = 1, 2). \qquad (2.2) $

Proof. Suppose that an approximate solution $ (X_1, X_2, X_3)\in \Omega_{1-5-9} $ of Eq (1.1) has been obtained. Let $ X_i^* = X_i+Y_i \ (i = 1, 2, 3) $; then finding $ (X_1^*, X_2^*, X_3^*)\in \Omega_{1-5-9} $ of (1.1) is transformed into finding the correction $ (Y_1, Y_2, Y_3)\in \Omega_{1-5-9} $ from

$ \psi^{(l)}(X+Y) = O \quad (l = 1, 2). \qquad (2.3) $

Equation (2.3) is quadratic in $ Y_i $. As is known, when the norm of $ Y_i $ is small enough, the quadratic part $ \sum\nolimits_{i, j = 1}^3 Y_i E_{ij}^{(l)} Y_j $ in (2.1) can be discarded according to Newton's method. In this way, we get the linear approximation

    $ \psi^{(l)}(X+Y) \approx \psi^{(l)}(X)+ \phi_X^{(l)}(Y). $

    Therefore, finding the solution $ (X_1^*, X_2^*, X_3^*)\in \Omega_{1-5-9} $ of (1.1) is transformed into finding $ (Y_1, Y_2, Y_3)\in \Omega_{1-5-9} $ from $ \psi^{(l)}(X)+\phi_X^{(l)}(Y) = O\ (l = 1, 2) $, that is, to solve (2.2).

According to [14], Newton's method (Algorithm 1) is introduced to find constrained solutions in $ \Omega_{1-5-9} $ of (1.1). Let

$ \psi(X) = \left[\begin{array}{c} \psi^{(1)}(X) \\ \psi^{(2)}(X) \end{array}\right], \quad \phi_X(Y) = \left[\begin{array}{c} \phi_X^{(1)}(Y) \\ \phi_X^{(2)}(Y) \end{array}\right]. $
Algorithm 1: Newton's method for solving Eq (1.1)
Step 1. Choose an initial matrix $ (X_{1}^{(1)}, X_{2}^{(1)}, X_{3}^{(1)})\in\Omega_{1-5-9} $ and set $ k:=1 $.
Step 2. If $ \psi(X^{(k)})=O $, stop; else, solve for $ (Y_{1}^{(k)}, Y_{2}^{(k)}, Y_{3}^{(k)})\in \Omega_{1-5-9} $ from
$ \phi_{X^{(k)}}(Y^{(k)}) = -\psi(X^{(k)}). \qquad (2.4) $
When (2.4) has no constrained solutions in $ \Omega_{1-5-9} $, solve for $ (Y_{1}^{(k)}, Y_{2}^{(k)}, Y_{3}^{(k)}) \in \Omega_{1-5-9} $ such that
$ \|\phi_{X^{(k)}}(Y^{(k)})+\psi(X^{(k)})\| = \min. \qquad (2.5) $
Step 3. Compute $ X^{(k+1)}= X^{(k)}+ Y^{(k)} $, set $ k:=k+1 $ and go to Step 2.

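A minimal MATLAB sketch of this outer iteration; psi_all (stacking $ \psi^{(1)} $ and $ \psi^{(2)} $) and inner_solve (Algorithm 2, falling back to Algorithm 3 on breakdown) are placeholder handles for the routines developed in this paper, and the tolerance stands in for the exact test $ \psi(X^{(k)}) = O $:

    function X = newton_qme(X, psi_all, inner_solve, tol, maxit)
    % Algorithm 1: Newton iteration for the coupled quadratic matrix equation (1.1).
    for k = 1:maxit
        R = psi_all(X);                       % {psi^(1)(X), psi^(2)(X)}
        if sqrt(norm(R{1}, 'fro')^2 + norm(R{2}, 'fro')^2) <= tol
            return;                           % psi(X) = O up to roundoff
        end
        Y = inner_solve(X, R);                % phi_X(Y) = -psi(X), or (2.5) on breakdown
        for i = 1:3
            X{i} = X{i} + Y{i};               % X^(k+1) = X^(k) + Y^(k)
        end
    end
    end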

The convergence properties of Newton's method follow from [14] (the proof is similar to that of Lemma 2.1 in [14]).

    Theorem 2.1. Assume that the real matrix $ X^* $ is a simple root of (1.1), i.e. $ \psi(X^*) = O $ and $ \phi_{X^*}(Y) $ is regular. Then if the starting matrix $ X^{(1)} $ is chosen sufficiently close to the solution $ X^* $, the sequence $ \{X^{(k)}\} $ generated by Newton's method converges quadratically to the solution $ X^* $.

In Algorithm 1, once $ X^{(k)} $ is known, $ Y^{(k)} $ needs to be solved for. In this section, the MCG method is used to solve for $ Y^{(k)} $ from Eq (2.2), that is, to solve Eq (2.4) or Eq (2.5). Consider the general form of Eq (2.2):

$ \sum\nolimits_{i = 1}^3 \sum\nolimits_{j = 1}^7 A_{ij}^{(l)} Y_i B_{ij}^{(l)} = F^{(l)} \quad (l = 1, 2), \qquad (3.1) $

    where all matrices in Eq (3.1) are $ n \times n $ real matrices. Let

$ h(Y) = \left[\begin{array}{c} h^{(1)}(Y) \\ h^{(2)}(Y) \end{array}\right], \quad F = \left[\begin{array}{c} F^{(1)} \\ F^{(2)} \end{array}\right], \quad R = F-h(Y)\stackrel{\mathrm{def}}{ = }\left[\begin{array}{c} R^{(1)} \\ R^{(2)} \end{array}\right], $
$ p(R) = \left[\begin{array}{c} p_1(R) \\ p_2(R) \\ p_3(R) \end{array}\right], \quad q(Y) = \left[\begin{array}{c} q_1(Y) \\ q_2(Y) \\ q_3(Y) \end{array}\right], $

    where

    $ h^{(l)}(Y)\stackrel{\mathrm{def}}{ = }h^{(l)}(Y_1, Y_2, Y_3) = \sum\nolimits_{i = 1}^3 \sum\nolimits_{j = 1}^7 A_{ij}^{(l)} Y_i B_{ij}^{(l)}\quad (l = 1, 2), $
    $ p_i(R) = \sum\nolimits_{l = 1}^2 \sum\nolimits_{j = 1}^7 (A_{ij}^{(l)})^\mathrm{T} R^{(l)} (B_{ij}^{(l)})^\mathrm{T} \quad (i = 1, 2, 3), $
    $ q_1(Y) = {\frac 12}(Y_1+ Y_1^\mathrm{T}), \quad q_2(Y) = {\frac 12}(Y_2+ P_1 Y_2 P_1), $
    $ q_3(Y) = {\frac 14}(Y_3+ Y_3^\mathrm{T}+ P_2(Y_3+ Y_3^\mathrm{T})P_2), $

    and $ P_1, P_2\in R^{n\times n} $ are symmetric orthogonal matrices.
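The maps $ q_1 $, $ q_2 $ and $ q_3 $ are the orthogonal projections onto $ \Omega_1 $, $ \Omega_5 $ and $ \Omega_9 $ respectively; they are what keep the iterates below inside the constraint set. In MATLAB (a direct transcription):

    q1 = @(Y1)     (Y1 + Y1.') / 2;                          % onto symmetric matrices
    q2 = @(Y2, P1) (Y2 + P1 * Y2 * P1) / 2;                  % onto reflexive matrices w.r.t. P1
    q3 = @(Y3, P2) (Y3 + Y3.' + P2 * (Y3 + Y3.') * P2) / 4;  % onto symmetric reflexive matrices w.r.t. P2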

In order to solve Eq (3.1), the following two problems will be considered.

Problem 3.1. Assume that (3.1) has constrained solutions; find $ (Y_1, Y_2, Y_3) \in\Omega_{1-5-9} $ from (3.1).

Problem 3.2. Assume that (3.1) has no constrained solutions; find $ (Y_1, Y_2, Y_3) \in\Omega_{1-5-9} $ such that

$ \|h(Y)-F\| = \min. \qquad (3.2) $

Based on the MCG method, we establish the following algorithm (Algorithm 2) to solve Problem 3.1.

Algorithm 2: MCG method to solve Problem 3.1
Step 1. Choose an arbitrary initial matrix $ (Y_{1}^{(1)}, Y_{2}^{(1)}, Y_{3}^{(1)})\in \Omega_{1-5-9} $, set $ k:=1 $ and compute
$ R_{k} = F-h(Y^{(k)})\stackrel{\mathrm{def}}{ = }\left[\begin{array}{c} R_{k}^{(1)} \\ R_{k}^{(2)} \end{array}\right], \quad \tilde{R}_{k} = p(R_{k})\stackrel{\mathrm{def}}{ = }\left[\begin{array}{c} \tilde{R}_{k}^{(1)} \\ \tilde{R}_{k}^{(2)} \\ \tilde{R}_{k}^{(3)} \end{array}\right], \quad Z_{k} = q(\tilde{R}_{k}). $
Step 2. If $ R_{k}=O $, or $ R_{k}\neq O $ and $ Z_{k}=O $, stop; else, compute
$ \alpha_{k} = \frac {\|R_{k}\|^2}{\|Z_{k}\|^2}, \quad Y^{(k+1)} = Y^{(k)}+\alpha_{k} Z_{k}. $
Step 3. Compute
$ R_{k+1} = F-h(Y^{(k+1)})\stackrel{\mathrm{def}}{ = }\left[\begin{array}{c} R_{k+1}^{(1)} \\ R_{k+1}^{(2)} \end{array}\right], \quad \tilde{R}_{k+1} = p(R_{k+1})\stackrel{\mathrm{def}}{ = }\left[\begin{array}{c} \tilde{R}_{k+1}^{(1)} \\ \tilde{R}_{k+1}^{(2)} \\ \tilde{R}_{k+1}^{(3)} \end{array}\right], $
$ \beta_{k+1} = \frac {\| R_{k+1}\|^2}{\|R_{k}\|^2}, \quad Z_{k+1} = q(\tilde{R}_{k+1})+\beta_{k+1} Z_{k}. $
Step 4. Set $ k:=k+1 $ and go to Step 2.

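The sketch below implements Algorithm 2 generically, with $ h $, $ p $ and $ q $ passed as handles acting on cell arrays of matrices; nrm2 sums squared Frobenius norms over the stacked blocks, and the tolerance replaces the exact tests $ R_k = O $ and $ Z_k = O $ (all names here are ours):

    function [Y, solvable] = mcg_solve(hfun, pfun, qfun, F, Y, tol, maxit)
    % Algorithm 2: modified conjugate gradient iteration on the constraint set.
    nrm2 = @(C) sum(cellfun(@(M) norm(M, 'fro')^2, C));
    axpy = @(A, B, s) cellfun(@(a, b) a + s*b, A, B, 'UniformOutput', false);
    R = axpy(F, hfun(Y), -1);                        % R_k = F - h(Y^(k))
    Z = qfun(pfun(R));                               % Z_k = q(p(R_k))
    solvable = false;
    for k = 1:maxit
        if nrm2(R) <= tol^2, solvable = true; return; end   % constrained solution found
        if nrm2(Z) <= tol^2, return; end                    % breakdown: no constrained solution
        alpha = nrm2(R) / nrm2(Z);
        Y  = axpy(Y, Z, alpha);                      % Y^(k+1) = Y^(k) + alpha_k Z_k
        Rn = axpy(F, hfun(Y), -1);
        beta = nrm2(Rn) / nrm2(R);
        Z  = axpy(qfun(pfun(Rn)), Z, beta);          % Z_(k+1) = q(p(R_(k+1))) + beta_(k+1) Z_k
        R  = Rn;
    end
    end

Algorithm 2 then corresponds to the call mcg_solve(hfun, pfun, qfun, F, Y0, tol, maxit) with $ Y^{(1)} $ given by an initial Y0 in $ \Omega_{1-5-9} $.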

From Algorithm 2, we can easily see that $ (Y_{1}^{(k)}, Y_{2}^{(k)}, Y_{3}^{(k)})\in \Omega_{1-5-9} $ for $ k = 1, 2, \cdots $, and we have the following convergence property (the proof is similar to that of Theorem 2.1 in [10]):

Theorem 3.1. Assume that Eq (3.1) has constrained solutions in $ \Omega_{1-5-9} $. Then for an arbitrary initial matrix $ (Y_{1}^{(1)}, Y_{2}^{(1)}, Y_{3}^{(1)})\in\Omega_{1-5-9} $, a solution of Problem 3.1 can be obtained by Algorithm 2 within a finite number of iterations, and it is a constrained solution in $ \Omega_{1-5-9} $ of (3.1).

Algorithm 2 breaks down if $ R_{k}\neq O $ and $ Z_{k} = O $, which by Theorem 3.1 means that Eq (3.1) has no constrained solutions in $ \Omega_{1-5-9} $. Therefore, we need to solve Problem 3.2, that is, to find constrained least-squares solutions of (3.1).

By Theorem 3.2 below, the problem of finding least-squares solutions in $ \Omega_{1-5-9} $ of (3.1) is replaced by that of finding solutions in $ \Omega_{1-5-9} $ of an equivalent linear matrix equation; an iterative algorithm to find constrained least-squares solutions in $ \Omega_{1-5-9} $ of (3.1) is then constructed along the lines of Algorithm 2.

    As a matter of convenience, we introduce some notations:

$ u(Y) = h(Y_1, Y_2, {\frac 12}(Y_3+ P_2 Y_3 P_2))\stackrel{\mathrm{def}}{ = }\left[\begin{array}{c} u^{(1)}(Y) \\ u^{(2)}(Y) \end{array}\right], $
$ v(Y) = h(Y_1^\mathrm{T}, P_1 Y_2 P_1, {\frac 12}(Y_3^\mathrm{T}+ P_2 Y_3^\mathrm{T} P_2))\stackrel{\mathrm{def}}{ = }\left[\begin{array}{c} v^{(1)}(Y) \\ v^{(2)}(Y) \end{array}\right], $
$ g(Y) = \left[\begin{array}{c} g_1(Y) \\ g_2(Y) \\ g_3(Y) \end{array}\right], \quad Q = \left[\begin{array}{c} Q_1(F) \\ Q_2(F) \\ Q_3(F) \end{array}\right], $

    where

    $ g_1(Y) = p_{1}(u)+ p_{1}^\mathrm{T} (v), \quad g_2(Y) = p_{2}(u)+ P_1 p_{2}(v) P_1, $
    $ g_3(Y) = {\frac 12}\left(p_{3}(u)+ p_{3}^\mathrm{T}(v) + P_2 \left(p_{3}(u)+ p_{3}^\mathrm{T}(v) \right) P_2\right), $
    $ Q_1(F) = p_{1}(F)+ p_{1}^\mathrm{T} (F), \quad Q_2(F) = p_{2}(F)+ P_1 p_{2}(F) P_1, $
    $ Q_3(F) = {\frac 12}\left(p_{3}(F)+ p_{3}^\mathrm{T} (F) + P_2 \left(p_{3}(F)+ p_{3}^\mathrm{T}(F) \right) P_2 \right), $

    $ u $ and $ v $ are functions of $ Y $. Then, according to [16,17], we have the following theorem.

Theorem 3.2. Solving Problem 3.2 can be replaced by finding constrained solutions in $ \Omega_{1-5-9} $ of

$ g(Y) = Q. \qquad (3.3) $

    Indeed, (3.3) has constrained solutions in $ \Omega_{1-5-9} $.

Proof. When $ (Y_{1}, Y_{2}, Y_{3})\in\Omega_{1-5-9} $, we have $ Y_1 = Y_1^\mathrm{T} $, $ Y_2 = P_1 Y_2 P_1 $ and $ Y_3 = Y_3^\mathrm{T} = P_2 Y_3 P_2 $. Therefore, solving Problem 3.2 is equivalent to finding $ (Y_{1}, Y_{2}, Y_{3})\in \Omega_{1-5-9} $ from

$ \left\| \left[\begin{array}{c} u(Y) \\ v(Y) \end{array}\right] - \left[\begin{array}{c} F \\ F \end{array}\right] \right\| = \min. \qquad (3.4) $

Now we prove that solving problem (3.4) is equivalent to finding constrained solutions in $ \Omega_{1-5-9} $ of (3.3). We adopt the convention that matrix multiplication takes precedence over the Kronecker product. Let

    $ G_i^{(l)} = \sum\nolimits_{j = 1}^7 {A_{ij}^{(l)}}\otimes{(B_{ij}^{(l)})^\mathrm{T}}, $
    $ H_{im}^{(l)} = \sum\nolimits_{j = 1}^7 {A_{ij}^{(l)} P_m}\otimes{(P_m B_{ij}^{(l)})^\mathrm{T}}, \quad (i = 1, 2, 3; l, m = 1, 2), $
$ M = \left[\begin{array}{ccc} G_1^{(1)} & G_2^{(1)} & \frac 12 (G_3^{(1)}+H_{32}^{(1)}) \\ G_1^{(2)} & G_2^{(2)} & \frac 12 (G_3^{(2)}+H_{32}^{(2)}) \\ G_1^{(1)} T_{n, n} & H_{21}^{(1)} & \frac 12 (G_3^{(1)}+H_{32}^{(1)}) T_{n, n} \\ G_1^{(2)} T_{n, n} & H_{21}^{(2)} & \frac 12 (G_3^{(2)}+H_{32}^{(2)}) T_{n, n} \end{array}\right], $
$ y = \left[\begin{array}{c} \overline{\mathrm{vec}}(Y_1) \\ \overline{\mathrm{vec}}(Y_2) \\ \overline{\mathrm{vec}}(Y_3) \end{array}\right], \quad f = \left[\begin{array}{c} \overline{\mathrm{vec}}(F) \\ \overline{\mathrm{vec}}(F) \end{array}\right], $

where $ T_{n, n} $ denotes the commutation matrix satisfying $ T_{n, n}\overline{\mathrm{vec}}(A_{n\times n}) = \overline{\mathrm{vec}}(A^\mathrm{T}) $ [19], and $ T_{n, n} $ acts only on the $ \overline{\mathrm{vec}} $ factors. Then, applying $ \overline{\mathrm{vec}} $ to the following equations:

$ \left\{ \begin{array}{l} u(Y) = F, \\ v(Y) = F, \end{array} \right. \qquad (3.5) $

we get the equivalent equation $ M y = f $. Moreover, $ M^\mathrm{T} M y = M^\mathrm{T} f $, the normal equation of $ M y = f $, is the vectorization of (3.3). Therefore, a least-squares solution of $ M y = f $ is also a solution of $ M^\mathrm{T} M y = M^\mathrm{T} f $, that is, the vectorization of a solution of (3.3). So a solution of (3.4) is also a solution of (3.3), and vice versa.
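The vectorization step rests on the identity $ \overline{\mathrm{vec}}(AXB) = (A \otimes B^\mathrm{T})\, \overline{\mathrm{vec}}(X) $, which is exactly why $ G_i^{(l)} $ and $ H_{im}^{(l)} $ have the Kronecker form above. A quick numerical check of this identity and of the defining property of $ T_{n,n} $ (the permutation-based construction of $ T_{n,n} $ is ours):

    n = 3;
    vecbar = @(A) reshape(A.', [], 1);
    A = rand(n); B = rand(n); X = rand(n);
    norm(vecbar(A*X*B) - kron(A, B.') * vecbar(X))   % ~0: the Kronecker identity
    idx = reshape(reshape(1:n^2, n, n).', [], 1);    % index permutation sending vec(M) to vec(M')
    Tnn = eye(n^2); Tnn = Tnn(idx, :);               % the commutation matrix T_{n,n}
    norm(Tnn * vecbar(A) - vecbar(A.'))              % ~0: T_{n,n} vecbar(A) = vecbar(A')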

In summary, solving Problem 3.2 can be replaced by finding constrained solutions in $ \Omega_{1-5-9} $ of (3.3).

As is well known, normal equations always have solutions, and the vectorization of Eq (3.3) is a normal equation, so Eq (3.3) also has solutions. Suppose $ \tilde{Y} = (\tilde{Y}_1^\mathrm{T}, \tilde{Y}_2^\mathrm{T}, \tilde{Y}_3^\mathrm{T})^\mathrm{T} $ (whether $ \tilde{Y} \in \Omega_{1-5-9} $ or not) is a solution of (3.3); then $ g(\tilde{Y}) = Q $. Let $ Y_{i}^* = q_i(\tilde{Y})\ (i = 1, 2, 3) $; then $ (Y_{1}^*, Y_{2}^*, Y_{3}^*)\in\Omega_{1-5-9} $ and $ g(Y^*) = Q $. Hence, (3.3) has constrained solutions in $ \Omega_{1-5-9} $.

We use the MCG method to find constrained solutions in $ \Omega_{1-5-9} $ of (3.3) by Algorithm 3, which thereby also provides a process to solve Problem 3.2.
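Algorithm 3 needs $ g $ and $ Q $ in computable form. A sketch, with hfun and pfun as in the Algorithm 2 sketch (the function and variable names are ours):

    function G = gmap(Y, hfun, pfun, P1, P2)
    % Evaluates g(Y) = {g1(Y), g2(Y), g3(Y)} as defined above.
    half3 = @(M) (M + P2 * M * P2) / 2;
    u  = hfun({Y{1}, Y{2}, half3(Y{3})});                 % u(Y)
    v  = hfun({Y{1}.', P1 * Y{2} * P1, half3(Y{3}.')});   % v(Y)
    pu = pfun(u);  pv = pfun(v);
    G{1} = pu{1} + pv{1}.';
    G{2} = pu{2} + P1 * pv{2} * P1;
    t    = pu{3} + pv{3}.';
    G{3} = (t + P2 * t * P2) / 2;
    end

$ Q $ is assembled the same way, with pfun(F) substituted for both pfun(u) and pfun(v).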

Algorithm 3: MCG method to solve Problem 3.2
Step 1. Choose an arbitrary initial matrix $ (Y_{1}^{(1)}, Y_{2}^{(1)}, Y_{3}^{(1)})\in\Omega_{1-5-9} $, set $ k:=1 $ and compute
$ R_{k} = Q-g(Y^{(k)}), \quad \tilde{R}_{k} = g(R_{k}), \quad Z_{k} = \tilde{R}_{k}. $
Step 2. If $ R_{k}=O $, stop; else, compute
$ \alpha_{k} = \frac {\| R_{k}\|^2}{\|Z_{k}\|^2}, \quad Y^{(k+1)} = Y^{(k)}+\alpha_{k} Z_{k}. $
Step 3. Compute
$ R_{k+1} = Q- g(Y^{(k+1)}), \quad \tilde{R}_{k+1} = g(R_{k+1}), $
$ \beta_{k+1} = \frac {\| R_{k+1}\|^2}{\|R_{k}\|^2}, \quad Z_{k+1} = \tilde{R}_{k+1} +\beta_{k+1} Z_{k}. $
Step 4. Set $ k:=k+1 $ and go to Step 2.

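Since Algorithm 3 is Algorithm 2 applied to $ g(Y) = Q $, with $ \tilde{R}_k = g(R_k) $ and no extra projection step, the generic solver sketched after Algorithm 2 can be reused; here gfun wraps gmap, and Q, Y0 and maxit are placeholders. By Theorem 3.2, the breakdown branch cannot occur in exact arithmetic:

    id   = @(C) C;                                  % identity: no extra projection is needed
    gfun = @(Y) gmap(Y, hfun, pfun, P1, P2);        % the map g sketched above
    [Y, ok] = mcg_solve(gfun, gfun, id, Q, Y0, 1e-7, maxit);   % h = p = g, q = identity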

From Algorithm 3, we can see that $ (Y_{1}^{(k)}, Y_{2}^{(k)}, Y_{3}^{(k)})\in\Omega_{1-5-9} $ for $ k = 1, 2, \cdots $, and we have the following convergence property (the proof is similar to that of Theorem 2 in [13]).

Theorem 3.3. For an arbitrary initial matrix $ (Y_{1}^{(1)}, Y_{2}^{(1)}, Y_{3}^{(1)})\in\Omega_{1-5-9} $, a solution of Problem 3.2 can be obtained by Algorithm 3 within a finite number of iterations, and it is a constrained least-squares solution in $ \Omega_{1-5-9} $ of (3.1).

In this section, we design two computation programmes to find constrained solutions of (1.1), and two numerical examples are given to illustrate the proposed results. All computations are performed using MATLAB. Because of roundoff errors, we regard a matrix $ A $ as the zero matrix if $ \|A\|\leq 10^{-7} $.

Let $ n $ be the order of the matrices $ X_i $; let $ k $, $ k_1 $ and $ k_2 $ be the iteration numbers of Algorithm 1, Algorithm 2 and Algorithm 3 respectively; and let $ t $ be the computation time (seconds).

    Programme 1.

    (1) Choose an initial matrix $ (X_{1}^{(1)}, X_{2}^{(1)}, X_{3}^{(1)})\in\Omega_{1-5-9} $ and set $ k: = 1 $.

(2) If $ \psi(X^{(k)}) = O $, stop; else, solve for $ (Y_{1}^{(k)}, Y_{2}^{(k)}, Y_{3}^{(k)})\in \Omega_{1-5-9} $ from (2.4) using Algorithm 2. When Algorithm 2 breaks down, that is, when (2.4) has no constrained solutions in $ \Omega_{1-5-9} $, solve for $ (Y_{1}^{(k)}, Y_{2}^{(k)}, Y_{3}^{(k)})\in\Omega_{1-5-9} $ from (2.5) using Algorithm 3.

(3) Compute $ X^{(k+1)} = X^{(k)}+ Y^{(k)} $, set $ k:=k+1 $ and go to (2).

    Programme 2.

    (1) Choose an initial matrix $ (X_{1}^{(1)}, X_{2}^{(1)}, X_{3}^{(1)})\in\Omega_{1-5-9} $ and set $ k: = 1 $.

(2) If $ \psi(X^{(k)}) = O $, stop; else, solve for $ (Y_{1}^{(k)}, Y_{2}^{(k)}, Y_{3}^{(k)})\in \Omega_{1-5-9} $ from (2.5) using Algorithm 3. In particular, when (2.4) has constrained solutions in $ \Omega_{1-5-9} $, its constrained least-squares solutions in $ \Omega_{1-5-9} $ are also constrained solutions in $ \Omega_{1-5-9} $.

(3) Compute $ X^{(k+1)} = X^{(k)}+ Y^{(k)} $, set $ k:=k+1 $ and go to (2).

    Example 4.1. Consider (1.1) with the following parameters:

$ P_1 = \left[\begin{array}{ccc} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{array}\right], \quad P_2 = \left[\begin{array}{ccc} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{array}\right], $
    $ C_i^{(l)} = C+l\times{ones(3)}, \quad D_i^{(l)} = (C_i^{(l)})^\mathrm{T}, \quad E_{ij}^{(l)} = -u_{i} u_{i}^\mathrm{T}, $
    $ S^{(l)} = \sum\nolimits_{i = 1}^3 C_{i}^{(l)} X_i^* D_{i}^{(l)}+ \sum\nolimits_{i, j = 1}^3 X_i^* E_{ij}^{(l)} X_j^* \quad (i, j = 1, 2, 3; l = 1, 2), $

    where

$ C = \left[\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \end{array}\right], \quad X_{1}^* = \left[\begin{array}{ccc} 1 & 0 & 0.5 \\ 0 & 1 & 0 \\ 0.5 & 0 & 2 \end{array}\right], $
$ X_{2}^* = \left[\begin{array}{ccc} 1 & 0 & 0.5 \\ 0 & 1 & 0.5 \\ 0 & 0 & 2 \end{array}\right], \quad X_{3}^* = \left[\begin{array}{ccc} 1 & 0 & 0.25 \\ 0 & 1 & 0.25 \\ 0.25 & 0.25 & 2 \end{array}\right], $
    $ u_1 = (1, 1, 0)^\mathrm{T}, \quad u_2 = (0, 1, 1)^\mathrm{T}, \quad u_3 = (0, 0, 1)^\mathrm{T}. $
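The construction can be reproduced in a few lines of MATLAB; since $ S^{(l)} $ is assembled from the target triple, $ (X_1^*, X_2^*, X_3^*) $ satisfies (1.1) by design (the variable layout below is ours):

    C  = [1 0 0; 0 1 1; 1 0 1];
    Xs = {[1 0 0.5; 0 1 0; 0.5 0 2], ...
          [1 0 0.5; 0 1 0.5; 0 0 2], ...
          [1 0 0.25; 0 1 0.25; 0.25 0.25 2]};
    u  = {[1; 1; 0], [0; 1; 1], [0; 0; 1]};
    S  = {zeros(3), zeros(3)};
    for l = 1:2
        Cl = C + l * ones(3);                       % C_i^(l), independent of i in this example
        for i = 1:3
            S{l} = S{l} + Cl * Xs{i} * Cl.';        % linear part, with D_i^(l) = (C_i^(l))'
            for j = 1:3
                S{l} = S{l} - Xs{i} * (u{i} * u{i}.') * Xs{j};   % E_ij^(l) = -u_i u_i'
            end
        end
    end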

We can easily see that (1.1) has the constrained solution $ (X_{1}^*, X_{2}^*, X_{3}^*)\in\Omega_{1-5-9} $. By applying Programmes 1 and 2 with the initial matrices $ X_{i}^{(1)} = eye(3) $ and $ Y_{i}^{(1)} = zeros(3)\ (i = 1, 2, 3) $, we obtain the constrained solution in $ \Omega_{1-5-9} $ of (1.1) as follows:

$ X_{1}^{(5)} = \left[\begin{array}{ccc} 1.0000 & 0.0000 & 0.5000 \\ 0.0000 & 1.0000 & 0.0000 \\ 0.5000 & 0.0000 & 2.0000 \end{array}\right], $
$ X_{2}^{(5)} = \left[\begin{array}{ccc} 1.0000 & 0.0000 & 0.5000 \\ 0.0000 & 1.0000 & 0.5000 \\ 0.0000 & 0.0000 & 2.0000 \end{array}\right], $
$ X_{3}^{(5)} = \left[\begin{array}{ccc} 1.0000 & 0.0000 & 0.2500 \\ 0.0000 & 1.0000 & 0.2500 \\ 0.2500 & 0.2500 & 2.0000 \end{array}\right]. $

    The iteration numbers and computation time are listed in Table 1.

Table 1.  The iteration numbers and computation time of Programmes 1 and 2.

    Results        $ n $   $ k $   $ k_1 $   $ k_2 $   $ t $
    Programme 1    3       5       97        0         0.4521
    Programme 2    3       5       0         184       3.4310


From the results in Table 1, we see that Programme 1 is more effective when the derived linear matrix equations always have constrained solutions in $ \Omega_{1-5-9} $.

    Example 4.2. Consider (1.1) with the following parameters:

    $ C_i^{(l)} = i\times C+l\times{ones(3)}, \quad D_i^{(l)} = (C_i^{(l)})^\mathrm{T}, $
    $ E_{ij}^{(l)} = -u_{i} u_{i}^\mathrm{T} \quad (i, j = 1, 2, 3; l = 1, 2), $
$ S^{(1)} = \left[\begin{array}{ccc} 18.5 & 32 & 29 \\ 36 & 50.5 & 4.5 \\ 43 & 5.5 & 37.5 \end{array}\right], \quad S^{(2)} = \left[\begin{array}{ccc} 2.5 & 68 & 118 \\ 18 & 3.5 & 29.5 \\ 22 & 17.5 & 22.5 \end{array}\right], $

    where

$ C = \left[\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \end{array}\right], $
    $ u_1 = (1, 1, 0)^\mathrm{T}, \quad u_2 = (0, 1, 1)^\mathrm{T}, \quad u_3 = (0, 0, 1)^\mathrm{T}. $

By applying Programmes 1 and 2 with the initial matrices $ X_{i}^{(1)} = ones(3) $ and $ Y_{i}^{(1)} = zeros(3)\ (i = 1, 2, 3) $, where $ P_1 $ and $ P_2 $ are identity matrices, we obtain a special constrained solution in $ \Omega_{1-5-9} $ of (1.1) (here $ X_1 $ and $ X_3 $ are symmetric matrices and $ X_2 $ is a general matrix) as follows:

$ X_{1}^{(4)} = \left[\begin{array}{ccc} 1.1352 & 1.5249 & 1.5924 \\ 1.5249 & 1.0188 & 0.9904 \\ 1.5924 & 0.9904 & 1.1223 \end{array}\right], $
$ X_{2}^{(4)} = \left[\begin{array}{ccc} 0.9332 & 0.8411 & 1.0152 \\ 1.8411 & 0.9730 & 0.9549 \\ 2.0152 & 0.9549 & 1.0135 \end{array}\right], $
$ X_{3}^{(4)} = \left[\begin{array}{ccc} 0.9769 & 1.6245 & 1.4150 \\ 1.6245 & 1.0128 & 1.0299 \\ 1.4150 & 1.0299 & 0.9482 \end{array}\right]. $

When $ X_{i}^{(1)} = eye(3)\ (i = 1, 2, 3) $ and the other settings remain unchanged, we obtain another special constrained solution in $ \Omega_{1-5-9} $ of (1.1) as follows:

$ X_{1}^{(6)} = \left[\begin{array}{ccc} 1.8147 & 1.8833 & 2.4692 \\ 1.8833 & 1.1089 & 1.0227 \\ 2.4692 & 1.0227 & 2.6495 \end{array}\right], $
$ X_{2}^{(6)} = \left[\begin{array}{ccc} 1.2581 & 0.0773 & 0.6145 \\ 0.9227 & 0.6873 & 0.5880 \\ 1.6145 & 0.5880 & 0.2754 \end{array}\right], $
$ X_{3}^{(6)} = \left[\begin{array}{ccc} 0.3466 & 2.2276 & 1.1261 \\ 2.2276 & 1.3542 & 1.0098 \\ 1.1261 & 1.0098 & 1.1756 \end{array}\right]. $

Thus, if the constrained solution of Eq (1.1) is not unique, different constrained solutions in $ \Omega_{1-5-9} $ can be obtained by choosing different initial matrices.

In this paper, an iterative algorithm is studied to find different constrained solutions. By using the proposed algorithm, we compute a set of different constrained solutions in $ \Omega_{1-5-9} $ of multivariate quadratic matrix equations. The provided examples illustrate the effectiveness of the new iterative algorithm.

Further results can be obtained by changing the initial matrices $ X_i^{(1)} $ and $ Y_i^{(1)} $, the direction matrix $ Z_k $ in Algorithm 2, and Eq (3.3) in Algorithm 3. In this way, other kinds of constrained solutions can be obtained, which are both interesting and valuable; this remains for our future work.

This research was funded by the Doctoral Fund Project of Shandong Jianzhu University, grant number X18091Z0101.

    The author declares no conflict of interest.
