Research article

Constrained least squares solution of Sylvester equation

  • Received: 18 March 2021 Accepted: 20 June 2021 Published: 23 June 2021
  • In this paper, we study several constrained least squares solutions of the quaternion Sylvester matrix equation. We first propose a real vector representation of a quaternion matrix and study its properties. Using this real vector representation, the semi-tensor product of matrices, the swap matrix and the Moore-Penrose inverse, we derive compatibility conditions and expressions for several constrained least squares solutions of the quaternion Sylvester equation.

    Citation: Wenxv Ding, Ying Li, Dong Wang, AnLi Wei. Constrainted least squares solution of Sylvester equation[J]. Mathematical Modelling and Control, 2021, 1(2): 112-120. doi: 10.3934/mmc.2021009




    First, we introduce some necessary notation. \mathbb{R} and \mathbb{Q} represent the real number field and the quaternion skew-field, respectively. \mathbb{R}^t represents the set of all real column vectors of order t . \mathbb{R}^{m\times n} and \mathbb{Q}^{m\times n} represent the sets of all m\times n real and quaternion matrices, respectively. \eta\mathbb{HQ}^{n\times n} and \eta\mathbb{AQ}^{n\times n} represent the sets of all n\times n quaternion \eta -Hermitian and \eta -anti-Hermitian matrices, respectively. I_n represents the identity matrix of order n . \delta_n^i represents the i th column of the identity matrix I_n . A^T,\ A^H,\ A^\dagger stand for the transpose, the conjugate transpose and the Moore-Penrose (MP) inverse of a matrix A , respectively. \otimes represents the Kronecker product of matrices. \ltimes represents the semi-tensor product of matrices. \|\cdot\| represents the Frobenius norm of a matrix or the Euclidean norm of a vector.

    In the theoretical study and numerical treatment of mathematical and physical problems, one often needs to find approximate solutions of quaternion linear systems, which also have wide applications in computer science, quantum physics, statistics, signal and color image processing, rigid-body mechanics, quantum mechanics, control theory, field theory and so on [1,2,3,4,5,6,7,8,9]. Many researchers are interested in quaternion linear systems and have obtained many results using different methods [10,11]. In this paper, we are interested in the Sylvester equation

    AXB+CYD=E (1.1)

    over the quaternion algebra. \eta -Hermitian matrices and \eta -anti-Hermitian matrices are two kinds of important matrices in linear modeling and convergence analysis in statistical signal processing [12,13]. For special Hermitian-type solutions of the Sylvester equation, the following works are available. Ling et al. came up with iterative algorithms for the \eta -Hermitian and \eta -bi-Hermitian solutions with minimal norm for the quaternion least squares problem [14]. Yuan et al. studied \eta -Hermitian and \eta -anti-Hermitian solutions to quaternion matrix equations [15,16]. Liu considered the \eta -anti-Hermitian solutions of the quaternion matrix equations AX = B , AXB = C , AXA^{\eta H} = B , EXE^{\eta H}+FYF^{\eta H} = H , and established general expressions of the solutions [17]. Rehman et al. gave some necessary and sufficient conditions for the existence of a solution to a system of real quaternion matrix equations involving \eta -Hermicity, and constructed the general solution of the system when it is consistent [18].

    In this paper, we propose a new method to solve the special least squares problems of (1.1) by using a powerful tool, the semi-tensor product of matrices. The semi-tensor product (STP) is a matrix product that generalizes the conventional matrix product to two arbitrary matrices. The conventional matrix product is restricted by dimension matching and is non-commutative; the semi-tensor product removes the dimension restriction and satisfies a quasi-commutative law. It has proved extremely useful in many fields, such as the coloring problem [19], the design of shift registers [20], fault detection [21] and so on. In addition, since the dynamics of a finite game can be modeled as a logical network [22], the semi-tensor product method has also been applied in game theory [23,24]. In this paper, we convert the least squares problems of the quaternion matrix equation into corresponding real problems by using the semi-tensor product. Our specific problem is as follows:

    Problem 1. Let A\in \mathbb{Q}^{m\times n} , B\in \mathbb{Q}^{n\times s} , C\in \mathbb{Q}^{m\times k} , D\in \mathbb{Q}^{k\times s} , E\in \mathbb{Q}^{m\times s} , and

    S_M = \left\{(X,Y)\,\big|\,X\in \eta\mathbb{HQ}^{n\times n},\ Y\in \eta\mathbb{AQ}^{k\times k},\ \|AXB+CYD-E\| = \min\right\}.

    Find (\hat{X}, \hat{Y})\in S_M such that

    \|\hat{X}\|^2+\|\hat{Y}\|^2 = \min.

    (\hat{X}, \hat{Y}) is called the minimal norm least squares mixed solution of (1.1).

    This paper is arranged as follows. In Section 2, we recall some preliminary results on quaternion matrix and STP used in the paper. In Section 3, we propose a new kind of real vector representation of a quaternion matrix and survey its properties. In Section 4, we study the solutions of Problem 1 by applying the real vector representation of quaternion matrix, the special structure of solutions and STP. In Section 5, we give a numerical experiment to illustrate the effectiveness of the method. In Section 6, we make some concluding remarks.

    Definition 2.1. [25] A quaternion q \in \mathbb{Q} is expressed as

    q = a + b{{\bf i}} + c{{\bf j}} + d{{\bf k}},

    where a, b, c, d\in \mathbb{R}, and three imaginary units {{\bf i}}, {{\bf j}}, {{\bf k}} satisfy

    {{\bf i}}^2 = {{\bf j}}^2 = {{\bf k}}^2 = {{\bf i}}{{\bf j}}{{\bf k}} = -1,\ \ {{\bf i}}{{\bf j}} = -{{\bf j}}{{\bf i}} = {{\bf k}},
    {{\bf j}}{{\bf k}} = -{{\bf k}}{{\bf j}} = {{\bf i}},\ {{\bf k}}{{\bf i}} = -{{\bf i}}{{\bf k}} = {{\bf j}}.

    \mathbb{Q} is clearly an associative but non-commutative algebra of rank four over \mathbb{R}, called quaternion skew-field.
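    As a quick illustration of these multiplication rules, the following minimal Python sketch (the helper name qmul is ours, not from the paper) implements the quaternion product on 4-tuples (a, b, c, d) representing a + bi + cj + dk, and checks the identities above, including non-commutativity:

```python
def qmul(p, q):
    """Quaternion product derived from i^2 = j^2 = k^2 = ijk = -1."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)
k = (0, 0, 0, 1)

assert qmul(i, j) == k                      # ij = k
assert qmul(j, i) == (0, 0, 0, -1)          # ji = -k: non-commutative
assert qmul(i, i) == (-1, 0, 0, 0)          # i^2 = -1
assert qmul(qmul(i, j), k) == (-1, 0, 0, 0) # ijk = -1
```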

    Let A = A_1+A_2{{\bf i}}+A_3{{\bf j}}+A_4{{\bf k}} \in\mathbb{Q}^{k\times k} , where A_i\in\mathbb{R}^{k\times k}\ (i = 1, 2, 3, 4). The matrices A^{{{\bf i}} H}, A^{{{\bf j}} H}, A^{{{\bf k}} H} are defined as follows:

    A^{{{\bf i}} H} = -{{\bf i}} A^{H}{{\bf i}} = A_1^T-A_2^T{{\bf i}}+A_3^T{{\bf j}}+A_4^T{{\bf k}},
    A^{{{\bf j}} H} = -{{\bf j}} A^{H}{{\bf j}} = A_1^T+A_2^T{{\bf i}}-A_3^T{{\bf j}}+A_4^T{{\bf k}},
    A^{{{\bf k}} H} = -{{\bf k}} A^{H}{{\bf k}} = A_1^T+A_2^T{{\bf i}}+A_3^T{{\bf j}}-A_4^T{{\bf k}}.

    Definition 2.2. [26] Let A \in\mathbb{Q}^{k\times k}, \ \eta = {{\bf i}}, {{\bf j}}, {{\bf k}} . If A^{\eta H} = A, then A is \eta -Hermitian; if A^{\eta H} = -A, then A is \eta -anti-Hermitian. For A = A_1+A_2{{\bf i}}+A_3{{\bf j}}+A_4{{\bf k}}\in\mathbb{Q}^{k\times k} , by Definition 2.1, we can obtain

    \begin{equation*} \label{2.4} \begin{split} &(1) For\ \eta = {{\bf i}},\ A\in\eta\mathbb{HQ}^{k\times k}\Longleftrightarrow A_2^T = -A_2,A_s^T = A_s,s = 1,3,4.\\ &(2) For\ \eta = {{\bf j}},\ A\in\eta\mathbb{HQ}^{k\times k}\Longleftrightarrow A_3^T = -A_3,A_s^T = A_s,s = 1,2,4.\\ &(3) For\ \eta = {{\bf k}},\ A\in\eta\mathbb{HQ}^{k\times k}\Longleftrightarrow A_4^T = -A_4,A_s^T = A_s,s = 1,2,3. \end{split} \end{equation*}

    Similarly, we have

    \begin{equation*} \label{2.5} \begin{split} &(4) For\ \eta = {{\bf i}},\ A\in\eta\mathbb{AQ}^{k\times k}\Longleftrightarrow A_2^T = A_2,A_s^T = -A_s,s = 1,3,4.\\ &(5) For\ \eta = {{\bf j}},\ A\in\eta\mathbb{AQ}^{k\times k}\Longleftrightarrow A_3^T = A_3,A_s^T = -A_s,s = 1,2,4.\\ &(6) For\ \eta = {{\bf k}},\ A\in\eta\mathbb{AQ}^{k\times k}\Longleftrightarrow A_4^T = A_4,A_s^T = -A_s,s = 1,2,3. \end{split} \end{equation*}
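    These componentwise characterizations are easy to check numerically. The NumPy sketch below (the helper names sym and skew are our own) builds a random {{\bf i}} -Hermitian matrix X = X_1+X_2{{\bf i}}+X_3{{\bf j}}+X_4{{\bf k}} from case (1) above and verifies the components of X^{{{\bf i}} H} , namely (X_1^T, -X_2^T, X_3^T, X_4^T) , coincide with those of X :

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

def sym(M):  return (M + M.T) / 2   # symmetric part
def skew(M): return (M - M.T) / 2   # antisymmetric part

# i-Hermitian (case (1)): X2 antisymmetric, X1, X3, X4 symmetric
X1 = sym(rng.standard_normal((n, n)))
X2 = skew(rng.standard_normal((n, n)))
X3 = sym(rng.standard_normal((n, n)))
X4 = sym(rng.standard_normal((n, n)))

# Components of X^{iH} are (X1^T, -X2^T, X3^T, X4^T); X^{iH} = X holds:
assert np.allclose(X1.T, X1)
assert np.allclose(-X2.T, X2)
assert np.allclose(X3.T, X3)
assert np.allclose(X4.T, X4)
```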

    Definition 2.3. [27] Let A\in \mathbb{R}^{m\times n}, \ B\in \mathbb{R}^{p\times q} . The semi-tensor product of A and B is defined as

    A\ltimes B = (A\otimes I_{t/n})(B\otimes I_{t/p}),

    where t = lcm(n, p) is the least common multiple of n and p .

    If n = p, the semi-tensor product of matrices reduces to the conventional matrix product.
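    Definition 2.3 translates directly into code. Below is a small NumPy sketch (the function name stp is our choice; inputs are assumed 2-D, with column vectors given shape (m, 1)); it reduces to the ordinary product when n = p and to the Kronecker product for column vectors:

```python
import numpy as np
from math import lcm

def stp(A, B):
    """Semi-tensor product A ⋉ B = (A ⊗ I_{t/n})(B ⊗ I_{t/p}), t = lcm(n, p)."""
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

# When n = p it reduces to the conventional matrix product:
A = np.arange(6, dtype=float).reshape(2, 3)
B = np.arange(12, dtype=float).reshape(3, 4)
assert np.allclose(stp(A, B), A @ B)

# For column vectors it reduces to the Kronecker product:
x = np.array([[1.], [2.]])
y = np.array([[3.], [4.], [5.]])
assert np.allclose(stp(x, y), np.kron(x, y))
```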

    Theorem 2.1. [27] Assume that A, \ B, \ C are real matrices of appropriate sizes and a, b\in \mathbb{R} . Then

    (1) (Distributive law)

    A\ltimes (aB\pm bC) = aA\ltimes B\pm bA\ltimes C,
    (aA\pm bB)\ltimes C = aA\ltimes C\pm bB\ltimes C.

    (2) (Associative law)

    (A\ltimes B)\ltimes C = A\ltimes (B\ltimes C).

    (3) Assume that x\in \mathbb{R}^m, \ y \in \mathbb{R}^n , then

    x\ltimes y = x\otimes y.

    The semi-tensor product of a vector and a matrix has the following quasi-commutativity property.

    Theorem 2.2. [27] Let x\in \mathbb{R}^t , A\in \mathbb{R}^{m\times n} , then

    x\ltimes A = (I_t\otimes A)\ltimes x.

    Definition 2.4. [28] Let x\in \mathbb{R}^m , y\in \mathbb{R}^n, then

    W_{[m,n]}(x\ltimes y) = y\ltimes x,

    in which

    W_{[m,n]} = \delta_{mn}[1,\ m+1,\ \cdots,\ (n-1)m+1,\ 2,\ m+2,\ \cdots,\ (n-1)m+2,\ \cdots,\ m,\ 2m,\ \cdots,\ nm ],

    where \delta_k[i_1, \ \cdots, \ i_s] is an abbreviation of [\delta_k^{i_1}, \ \cdots, \ \delta_k^{i_s}] . Especially, when m = n , we denote W_{[n]}: = W_{[n, n]}.
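    Equivalently, W_{[m,n]} is the permutation matrix sending x\otimes y to y\otimes x . A NumPy sketch of this construction (the function name swap_matrix is ours):

```python
import numpy as np

def swap_matrix(m, n):
    """W_{[m,n]}: permutation matrix with W (x ⊗ y) = y ⊗ x, x ∈ R^m, y ∈ R^n."""
    W = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            # entry x_i y_j sits at position i*n + j in x⊗y
            # and at position j*m + i in y⊗x
            W[j * m + i, i * n + j] = 1.0
    return W

x = np.array([1., 2.])        # m = 2
y = np.array([3., 4., 5.])    # n = 3
assert np.allclose(swap_matrix(2, 3) @ np.kron(x, y), np.kron(y, x))
```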

    The following are well-known results on matrix equations.

    Theorem 2.3. [29] The least squares solutions of the matrix equation Ax = b , with A\in \mathbb{R}^{m\times n} and b\in \mathbb{R}^m, can be represented as

    x = A^\dagger b+(I-A^\dagger A)y,

    where y\in \mathbb{R}^n is an arbitrary vector. The minimal norm least squares solution of the matrix equation Ax = b is A^\dagger b.

    Theorem 2.4. [29] The matrix equation Ax = b, with A\in\mathbb{R}^{m\times n} and b\in\mathbb{R}^m , has a solution x\in \mathbb{R}^n if and only if

    AA^\dagger b = b.

    In this case it has the general solution

    x = A^\dagger b+(I-A^\dagger A)y,

    where y\in \mathbb{R}^n is an arbitrary vector.
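    Theorems 2.3 and 2.4 can be illustrated with numpy.linalg.pinv on a small rank-deficient example (the matrices below are our own illustration, not from the paper):

```python
import numpy as np

# Rank-deficient consistent system: solutions of Ax = b form a line.
A = np.array([[1., 1.],
              [2., 2.]])
b = np.array([1., 2.])

A_dag = np.linalg.pinv(A)
x_min = A_dag @ b                                # minimal norm solution (Thm 2.3)
assert np.allclose(x_min, [0.5, 0.5])

# Thm 2.4: Ax = b is solvable if and only if A A† b = b
assert np.allclose(A @ A_dag @ b, b)

# General solution x = A†b + (I - A†A) y for arbitrary y
y = np.array([3., -1.])
x_gen = x_min + (np.eye(2) - A_dag @ A) @ y
assert np.allclose(A @ x_gen, b)                 # still solves the system
assert np.linalg.norm(x_gen) >= np.linalg.norm(x_min)
```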

    In this section, we propose the concept of the real vector representation of a quaternion matrix and study its properties. First we define the real stacking form of x\in\mathbb{Q}.

    Definition 3.1. Let x = x_1+x_2 {{\bf i}}+x_3 {{\bf j}}+x_4 {{\bf k}}\in \mathbb{Q} , denote

    v^R(x) = (x_1,x_2,x_3,x_4)^T,

    v^R(x) is called the real stacking form of x .

    By means of structure matrix and the real stacking form, we can express the product of two quaternions by the semi-tensor product of matrices.

    Theorem 3.1. Let x, y\in\mathbb{Q} , then

    \begin{equation} v^R(xy) = M_Q\ltimes v^R(x)\ltimes v^R(y), \end{equation} (3.1)

    where

    M_Q = \left( \begin{smallmatrix} 1&0&0&0&0&-1&0&0&0&0&-1&0&0&0&0&-1\\ 0&1&0&0&1&0&0&0&0&0&0&1&0&0&-1&0\\ 0&0&1&0&0&0&0&-1&1&0&0&0&0&1&0&0\\ 0&0&0&1&0&0&1&0&0&-1&0&0&1&0&0&0\\ \end{smallmatrix} \right)

    is the structure matrix of multiplication of quaternion.
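    The structure matrix can be verified numerically: for column vectors the semi-tensor product reduces to the Kronecker product, so (3.1) reads v^R(xy) = M_Q\,(v^R(x)\otimes v^R(y)) . A NumPy check against a direct quaternion product (the helper qmul is our own cross-check):

```python
import numpy as np

# Structure matrix of quaternion multiplication (4 x 16), as in Theorem 3.1.
M_Q = np.array([
    [1, 0, 0, 0,  0, -1, 0, 0,  0, 0, -1, 0,  0, 0, 0, -1],
    [0, 1, 0, 0,  1,  0, 0, 0,  0, 0,  0, 1,  0, 0, -1, 0],
    [0, 0, 1, 0,  0,  0, 0, -1, 1, 0,  0, 0,  0, 1,  0, 0],
    [0, 0, 0, 1,  0,  0, 1, 0,  0, -1, 0, 0,  1, 0,  0, 0],
], dtype=float)

def qmul(p, q):
    """Direct quaternion product of real 4-vectors, for cross-checking."""
    a1, b1, c1, d1 = p; a2, b2, c2, d2 = q
    return np.array([a1*a2 - b1*b2 - c1*c2 - d1*d2,
                     a1*b2 + b1*a2 + c1*d2 - d1*c2,
                     a1*c2 - b1*d2 + c1*a2 + d1*b2,
                     a1*d2 + b1*c2 - c1*b2 + d1*a2])

x = np.array([1., 2., 3., 4.])   # v^R(x)
y = np.array([5., 6., 7., 8.])   # v^R(y)
assert np.allclose(M_Q @ np.kron(x, y), qmul(x, y))   # equation (3.1)
```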

    Combining the real stacking form of a quaternion with the vec operator of a real matrix, we propose a new kind of real vector representation of a quaternion matrix. For this purpose, we first define the real stacking form of a quaternion vector.

    Definition 3.2. Let x = (x^{1}, \cdots, x^{n}), \ y = (y^{1}, \ \cdots, y^{n})^T be quaternion vectors. Denote

    v^R(x) = \left( \begin{array}{c} v^R(x^1)\\ \vdots\\ v^R(x^n)\\ \end{array} \right),\ \ \ \ v^R(y) = \left( \begin{array}{c} v^R(y^1)\\ \vdots\\ v^R(y^n)\\ \end{array} \right)

    v^R(x) and v^R(y) are called the real stacking forms of the quaternion vectors x and y , respectively.

    Now we define the concepts of the real column stacking form and the real row stacking form of a quaternion matrix A .

    Definition 3.3. For A\in \mathbb{Q}^{m\times n} , denote

    v_c^R(A) = \left( \begin{array}{c} v^R(Col_1(A))\\ v^R(Col_2(A))\\ \vdots\\ v^R(Col_n(A))\\ \end{array} \right),\ \ \ \ v_r^R(A) = \left( \begin{array}{c} v^R(Row_1(A))\\ v^R(Row_2(A))\\ \vdots\\ v^R(Row_m(A))\\ \end{array} \right),

    v^R_c(A) and v^R_r(A) are called the real column stacking form and the real row stacking form of A , respectively.
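    As a sketch (all helper names are our own), representing A\in\mathbb{Q}^{m\times n} by its four real parts, both stackings can be built with a reshape, and the norm identity \|A\| = \|v_r^R(A)\| = \|v_c^R(A)\| of Theorem 3.3 below checked:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 2, 3
A1, A2, A3, A4 = (rng.standard_normal((m, n)) for _ in range(4))

def vR_r(A1, A2, A3, A4):
    """Real row stacking v_r^R(A): rows first, each entry expanded to (a,b,c,d)."""
    return np.stack([A1, A2, A3, A4], axis=-1).reshape(-1)

def vR_c(A1, A2, A3, A4):
    """Real column stacking v_c^R(A): columns first."""
    return np.stack([A1, A2, A3, A4], axis=-1).transpose(1, 0, 2).reshape(-1)

vr = vR_r(A1, A2, A3, A4)
vc = vR_c(A1, A2, A3, A4)

# Frobenius norm of A equals the Euclidean norm of either stacking:
fro = np.sqrt(sum(np.linalg.norm(M)**2 for M in (A1, A2, A3, A4)))
assert np.isclose(np.linalg.norm(vr), fro)
assert np.isclose(np.linalg.norm(vc), fro)
```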

    We can prove that this real vector representation has the following properties with respect to vector or matrix operations.

    Theorem 3.2. Let x = (x^{1}, x^{2}, \cdots, x^{n}), \ \check{x} = (\check{x}^{1}, \check{x}^{2}, \cdots, \check{x}^{n}) , y = (y^{1}, y^{2}, \cdots, y^{n})^T , x^{i},\ \check{x}^{i},\ y^{i}\in \mathbb{Q} , a \in \mathbb{R} , then

    \begin{equation*} \begin{split} &(1)\ v^R(x+\check{x}) = v^R(x)+v^R(\check{x}),\\ &(2)\ v^R(ax) = av^R(x),\\ &(3)\ v^R(xy) = M_Q\ltimes\left(\sum\limits_{i = 1}^n(\delta_n^i)^T\ltimes(I_{4n}\otimes(\delta_n^i)^T)\right)\ltimes v^R(x)\ltimes v^R(y). \end{split} \end{equation*}

    Proof. By simple computation, we know that (1) and (2) hold. We only give a detailed proof of (3). Using (3.1), we have

    \begin{array}{rl} v^R(xy)& = v^R(x^{1}y^{1}+ \cdots +x^{n}y^{n})\\ & = M_Q\ltimes v^R(x^{1})\ltimes v^R(y^{1})+\cdots+M_Q\ltimes v^R(x^{n})\ltimes v^R(y^{n})\\ & = M_Q\ltimes\left(\sum\limits_{i = 1}^n v^R(x^{i})\ltimes v^R(y^{i})\right)\\ & = M_Q\ltimes\left(\sum\limits_{i = 1}^n(\delta_n^i)^T\ltimes v^R(x)\ltimes(\delta_n^i)^T\ltimes v^R(y) \right)\\ & = M_Q\ltimes\left(\sum\limits_{i = 1}^n(\delta_n^i)^T\ltimes(I_{4n}\otimes(\delta_n^i)^T)\right)\ltimes v^R(x)\ltimes v^R(y).\\ \end{array}

    By using Theorem 3.2, we can derive the following result on the real vector representation of the product of two quaternion matrices.

    Theorem 3.3. Let A, \ \check{A} \in \mathbb{Q}^{m \times n}, \ B\in \mathbb{Q}^{n \times p}, \ {\alpha} \in \mathbb{R} , then

    \begin{equation*} \begin{split} &(1) \ v_{r}^{R}(A+\check{A}) = v_{r}^{R}(A)+v_{r}^{R}(\check{A}),\ v_{c}^{R}(A+\check{A}) = v_{c}^{R}(A)+v_{c}^{R}(\check{A}),\\ &(2) \ \|A\| = \|v_{r}^{R}(A)\| = \|v_{c}^{R}(A)\|,\\ &(3) \ v_{r}^{R}(AB) = G(v_{r}^{R}(A)\ltimes v_{c}^{R}(B)),\\ \end{split} \end{equation*}

    in which

    F = M_Q\ltimes\left(\sum\limits_{i = 1}^n(\delta_n^i)^T\ltimes(I_{4n}\otimes(\delta_n^i)^T)\right),
    G = \left( \begin{smallmatrix} F \ltimes (\delta_{m}^{1})^T \ltimes [I_{4mn} \otimes (\delta_{p}^{1})^{T}] \\ \vdots\\ F \ltimes (\delta_{m}^{1})^T \ltimes [I_{4mn} \otimes (\delta_{p}^{p})^{T}]\\ \vdots\\ F \ltimes (\delta_{m}^{m})^T \ltimes [I_{4mn} \otimes (\delta_{p}^{1})^{T}] \\ \vdots\\ F \ltimes (\delta_{m}^{m})^T \ltimes [I_{4mn} \otimes (\delta_{p}^{p})^{T}] \\ \end{smallmatrix} \right).

    Proof. We only prove the equality in (3). We partition A and B by their rows and columns, respectively, as follows:

    A = \left( \begin{array}{c} Row_1(A)\\ Row_2(A)\\ \vdots\\ Row_m(A) \end{array} \right) ,\ B = \left(Col_1(B),\ Col_2(B),\ \cdots,\ Col_p(B)\right).

    Then we have

    \begin{array}{rl} v_r^R(AB)& = \left( \begin{smallmatrix} v^R(Row_1(A) Col_1(B))\\ \vdots\\ v^R(Row_1(A)Col_p(B))\\ \vdots\\ v^R(Row_m(A)Col_1(B))\\ \vdots\\ v^R(Row_m(A) Col_p(B))\\ \end{smallmatrix}\right) = \left( \begin{smallmatrix} F\ltimes v^R(Row_1(A))\ltimes v^R(Col_1(B)) \\ \vdots\\ F\ltimes v^R(Row_1(A))\ltimes v^R(Col_p(B)) \\ \vdots\\ F\ltimes v^R(Row_m(A))\ltimes v^R(Col_1(B)) \\ \vdots\\ F\ltimes v^R(Row_m(A))\ltimes v^R(Col_p(B)) \\ \end{smallmatrix} \right)\\ & = \left( \begin{smallmatrix} F\ltimes [(\delta _m^1)^T\ltimes v_r^R(A)]\ltimes [(\delta _p^1)^T\ltimes v_c^R(B)] \\ \vdots\\ F\ltimes [(\delta _m^1)^T\ltimes v_r^R(A)]\ltimes [(\delta _p^p)^T\ltimes v_c^R(B)] \\ \vdots\\ F\ltimes [(\delta _m^m)^T\ltimes v_r^R(A)]\ltimes [(\delta _p^1)^T\ltimes v_c^R(B)] \\ \vdots\\ F\ltimes [(\delta _m^m)^T\ltimes v_r^R(A)]\ltimes [(\delta _p^p)^T\ltimes v_c^R(B)] \\ \end{smallmatrix} \right)\\ & = \left( \begin{smallmatrix} F\ltimes (\delta _m^1)^T\ltimes [I_{4mn}\otimes (\delta _p^1)^T] \\ \vdots\\ F\ltimes (\delta _m^1)^T\ltimes [I_{4mn}\otimes (\delta _p^p)^T] \\ \vdots\\ F\ltimes (\delta _m^m)^T\ltimes [I_{4mn}\otimes (\delta _p^1)^T] \\ \vdots\\ F\ltimes (\delta _m^m)^T\ltimes [I_{4mn}\otimes (\delta _p^p)^T] \\ \end{smallmatrix} \right) (v_r^R(A)\ltimes v_c^R(B)). \end{array}

    In this section, we study the solutions of Problem 1. First, from the structural characteristics of \eta -Hermitian and \eta -anti-Hermitian matrices, we observe that such matrices contain a large number of repeated elements. In order to reduce the order of the computation for the quaternion matrix equation (1.1), we extract some elements as independent elements and express the whole matrix through them. The details are as follows.

    Theorem 4.1. Let X\in \eta\mathbb{HQ}^{n\times n}\ \eta = {{\bf i}}, {{\bf j}}, {{\bf k}} , denote

    LX_i = \left( \begin{array}{c} x_{ii}\\ x_{i(i+1)}\\ \vdots\\ x_{in} \end{array} \right),\ (i = 1,2,\cdots,n),\ \ \ v^R_s(X) = \left( \begin{array}{c} v^R(LX_1)\\ v^R(LX_2)\\ \vdots\\ v^R(LX_n)\\ \end{array} \right).

    Then

    v_c^R(X) = J^{\eta} v_s^R(X),

    where

    J^{\eta} = \left(\begin{array}{c} J^{\eta}_1\\ \vdots \\ J^{\eta}_m\\ \vdots\\ J^{\eta}_n \end{array}\right)\ \ and \ \ J^{\eta}_m = \left(\begin{array}{c} J^{\eta}_{1m}\\ \vdots\\ J^{\eta}_{rm}\\ \vdots\\ J^{\eta}_{nm}\\ \end{array}\right)\ \ m = 1,2,\cdots,n,

    when \eta = {{\bf i}} , J^{{{\bf i}}}_{rm} is as follows:

    J_{rm}^{{{\bf i}}} = \begin{cases} \left( \delta_{n(n+1)/2}^{\frac{(r-1)(2n-r+2)}{2}+m-r+1} \right)^T \otimes R_4, & r < m,\\ \left( \delta_{n(n+1)/2}^{\frac{(m-1)(2n-m+2)}{2}+r-m+1} \right)^T \otimes I_4, & r \ge m, \end{cases} \qquad R_4 = \left( \begin{smallmatrix} 1&0&0&0\\ 0&-1&0&0\\ 0&0&1&0\\ 0&0&0&1 \end{smallmatrix} \right).

    Similarly we have J^{{{\bf j}}}_m, \ J^{{{\bf k}}}_m .

    J_{rm}^{{{\bf j}}} = \begin{cases} \left( \delta_{n(n+1)/2}^{\frac{(r-1)(2n-r+2)}{2}+m-r+1} \right)^T \otimes L_4, & r < m,\\ \left( \delta_{n(n+1)/2}^{\frac{(m-1)(2n-m+2)}{2}+r-m+1} \right)^T \otimes I_4, & r \ge m, \end{cases} \qquad L_4 = \left( \begin{smallmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&0&-1&0\\ 0&0&0&1 \end{smallmatrix} \right),
    J_{rm}^{{{\bf k}}} = \begin{cases} \left( \delta_{n(n+1)/2}^{\frac{(r-1)(2n-r+2)}{2}+m-r+1} \right)^T \otimes S_4, & r < m,\\ \left( \delta_{n(n+1)/2}^{\frac{(m-1)(2n-m+2)}{2}+r-m+1} \right)^T \otimes I_4, & r \ge m, \end{cases} \qquad S_4 = \left( \begin{smallmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&-1 \end{smallmatrix} \right).

    We can also find the relationship of v_c^R(X) and v_s^R(X) for \eta-anti-Hermitian matrix.

    Theorem 4.2. Let X\in \eta\mathbb{AQ}^{n\times n},\ \eta = {{\bf i}}, {{\bf j}}, {{\bf k}} , where v_s^R(X) is defined as in Theorem 4.1. Then

    v_c^R(X) = R^{\eta} v_s^R(X),

    where

    R^{\eta} = \left(\begin{array}{c} R^{\eta}_1\\ \vdots \\ R^{\eta}_m\\ \vdots\\ R^{\eta}_n \end{array}\right)\ \ and \ \ R^{\eta}_m = \left(\begin{array}{c} R^{\eta}_{1m}\\ \vdots\\ R^{\eta}_{rm}\\ \vdots\\ R^{\eta}_{nm}\\ \end{array}\right)\ \ m = 1,2,\cdots n,

    when \eta = {{\bf i}} , R^{{{\bf i}}}_{rm} is as follows:

    R_{rm}^{{{\bf i}}} = \begin{cases} \left( \delta_{n(n+1)/2}^{\frac{(r-1)(2n-r+2)}{2}+m-r+1} \right)^T \otimes R_4', & r < m,\\ \left( \delta_{n(n+1)/2}^{\frac{(m-1)(2n-m+2)}{2}+r-m+1} \right)^T \otimes I_4, & r \ge m, \end{cases} \qquad R_4' = \left( \begin{smallmatrix} -1&0&0&0\\ 0&1&0&0\\ 0&0&-1&0\\ 0&0&0&-1 \end{smallmatrix} \right).

    Similarly we have R^{{{\bf j}}}, \ R^{{{\bf k}}} .

    R_{rm}^{{{\bf j}}} = \begin{cases} \left( \delta_{n(n+1)/2}^{\frac{(r-1)(2n-r+2)}{2}+m-r+1} \right)^T \otimes L_4', & r < m,\\ \left( \delta_{n(n+1)/2}^{\frac{(m-1)(2n-m+2)}{2}+r-m+1} \right)^T \otimes I_4, & r \ge m, \end{cases} \qquad L_4' = \left( \begin{smallmatrix} -1&0&0&0\\ 0&-1&0&0\\ 0&0&1&0\\ 0&0&0&-1 \end{smallmatrix} \right),
    R_{rm}^{{{\bf k}}} = \begin{cases} \left( \delta_{n(n+1)/2}^{\frac{(r-1)(2n-r+2)}{2}+m-r+1} \right)^T \otimes S_4', & r < m,\\ \left( \delta_{n(n+1)/2}^{\frac{(m-1)(2n-m+2)}{2}+r-m+1} \right)^T \otimes I_4, & r \ge m, \end{cases} \qquad S_4' = \left( \begin{smallmatrix} -1&0&0&0\\ 0&-1&0&0\\ 0&0&-1&0\\ 0&0&0&1 \end{smallmatrix} \right).

    Based on the above discussion, we give the solution of Problem 1 by means of the real vector representation of a quaternion matrix and the STP.

    Theorem 4.3. Let A\in \mathbb{Q}^{m\times n} , B\in \mathbb{Q}^{n\times s} , C\in \mathbb{Q}^{m\times k} , D\in \mathbb{Q}^{k\times s} , E\in\mathbb{Q}^{m\times s} , and let G_i have the same structure as G in Theorem 3.3 except for the dimensions. Denote

    M_1 = G_2\ltimes G_3\ltimes v^R_r(A)\ltimes W_{[4ns,4n^2]}\ltimes v^R_c(B)\ltimes J^{\eta},
    M_2 = G_4\ltimes G_5\ltimes v^R_r(C)\ltimes W_{[4ks,4k^2]}\ltimes v^R_c(D)\ltimes R^{\eta},
    \hat{M} = \left(M_1,\ M_2\right).

    Then the set S_M of Problem 1 is represented as

    \begin{equation} S_M = \left\{(X,Y) \bigg|\left( \begin{smallmatrix} v^R_s(X)\\ v^R_s(Y)\\ \end{smallmatrix}\right) = \hat{M}^\dagger v_r^R(E)+\left(I_{{2(n^2+k^2)+2(n+k)}}-\hat{M}^\dagger\hat{M}\right)y \right\} \end{equation} (4.1)

    where y\in \mathbb{R}^{2(n^2+k^2)+2(n+k)} . Then the minimal norm least squares mixed solution (\hat{X}, \hat{Y}) of (1.1) satisfies

    \begin{equation} \left( \begin{array}{c} v^R_s(\hat{X})\\ v^R_s(\hat{Y})\\ \end{array} \right) = \hat{M}^\dagger v_r^R(E). \end{equation} (4.2)

    Proof.

    \begin{align*} &\left\|AXB+CYD-E\right\|\\ & = \left\|v_r^R\left(AXB+CYD\right)-v_r^R(E)\right\| \\ & = \left\|M_1\ltimes v_s^R(X)+ M_2\ltimes v_s^R(Y)-v_r^R(E)\right\|\\ & = \left\|\left(M_1,\ M_2\right)\left(\begin{array}{c}v_s^R(X)\\v_s^R(Y)\end{array}\right)-v_r^R(E)\right\|\\ & = \left\|\hat{M}\left( \begin{array}{c} v^R_s(X)\\ v^R_s(Y)\\ \end{array} \right)-v_r^R(E)\right\|. \end{align*}

    Thus

    \|AXB+CYD-E\| = min

    if and only if

    \left\|\hat{M}\left( \begin{array}{c} v^R_s(X)\\ v^R_s(Y)\\ \end{array} \right)-v_r^R(E)\right\| = min.

    Consider the real matrix equation

    \hat{M}\left( \begin{array}{c} v^R_s(X)\\ v^R_s(Y)\\ \end{array} \right) = v_r^R(E).

    According to Theorem 2.3, its least squares solutions can be represented as

    \left( \begin{array}{c} v^R_s(X)\\ v^R_s(Y)\\ \end{array} \right) = \hat{M}^\dagger v_r^R(E)+(I_{2(n^2+k^2)+2(n+k)}-\hat{M}^\dagger\hat{M})y,

    where y\in \mathbb{R}^{2(n^2+k^2)+2(n+k)} . Thus we get the formula in (4.1).

    Notice

    \min\limits_{(X,Y)\in S_M}\left\|X\right\|^2+\left\|Y\right\|^2\Longleftrightarrow \min\limits_{(X,Y)\in S_M}\left\|\left( \begin{array}{c} v^R_s(X)\\ v^R_s(Y)\\ \end{array} \right)\right\|^2,

    so we have that the minimal norm least squares mixed solution (\hat{X}, \hat{Y}) of (1.1) satisfies

    \left( \begin{array}{c} v^R_s(\hat{X})\\ v^R_s(\hat{Y})\\ \end{array} \right) = \hat{M}^\dagger v_r^R(E).

    Therefore, (4.2) holds.

    Corollary 4.4. Let A\in \mathbb{Q}^{m\times n} , B\in \mathbb{Q}^{n\times s} , C\in \mathbb{Q}^{m\times k} , D\in\mathbb{Q}^{k\times s} , and let \hat{M} be as defined in Theorem 4.3. Then AXB+CYD = E has a mixed solution (X, Y) if and only if

    \begin{equation} \left(\hat{M}\hat{M}^\dagger-I_{4ms}\right)v_r^R(E) = 0. \end{equation} (4.3)

    Moreover, if (4.3) holds, the mixed solution set of AXB+CYD = E can be represented as

    \begin{equation} \widetilde{S}_M = \left\{ (X,Y)\bigg|\left( \begin{smallmatrix} v^R_s(X)\\ v^R_s(Y)\\ \end{smallmatrix}\right) = \hat{M}^\dagger v_r^R(E)+(I_{2(n^2+k^2)+2(n+k)}-\hat{M}^\dagger\hat{M})y \right\} \end{equation} (4.4)

    where y\in \mathbb{R}^{2(n^2+k^2)+2(n+k)} . We can obtain the minimal norm mixed solution (\hat{X}, \hat{Y}) satisfying

    \begin{equation} \left( \begin{array}{c} v^R_s(\hat{X})\\ v^R_s(\hat{Y})\\ \end{array} \right) = \hat{M}^\dagger v_r^R(E). \end{equation} (4.5)

    Proof. AXB+CYD = E has a mixed solution (X, Y) if and only if

    \left\|AXB+CYD-E\right\| = 0.

    Using (2) in Theorem 3.3 and the properties of the MP inverse, we get

    \begin{align*} &\left\|AXB+CYD-E\right\|\\ & = \left\|\hat{M}\left( \begin{array}{c} v^R_s(X)\\ v^R_s(Y)\\ \end{array} \right)-v_r^R(E)\right\|\\ & = \left\|\hat{M}\hat{M}^{\dagger}\hat{M}\left( \begin{array}{c} v^R_s(X)\\ v^R_s(Y)\\ \end{array} \right)-v_r^R(E)\right\| \\ & = \left\|\hat{M}\hat{M}^{\dagger}v_r^R(E)-v_r^R(E)\right\|\\ & = \left\|(\hat{M}\hat{M}^{\dagger}-I_{4ms})v_r^R(E)\right\|. \end{align*}

    Therefore, for (X, Y) , we obtain

    \begin{array}{rl} &\left\|AXB+CYD-E\right\| = 0\\ &\Longleftrightarrow \left\|(\hat{M}\hat{M}^{\dagger}-I_{4ms})v_r^R(E)\right\| = 0 \\ &\Longleftrightarrow (\hat{M}\hat{M}^{\dagger}-I_{4ms})v_r^R(E) = 0. \end{array}

    When AXB+CYD = E is compatible, its mixed solution (X, Y)\in \widetilde{S}_M satisfies

    \hat{M}\left( \begin{array}{c} v^R_s(X)\\ v^R_s(Y)\\ \end{array} \right) = v_r^R(E).

    Moreover, according to Theorem 2.4, the mixed solution (X, Y) satisfies

    \left( \begin{array}{c} v^R_s(X)\\ v^R_s(Y)\\ \end{array} \right) = \hat{M}^\dagger v_r^R(E)+(I_{{2(n^2+k^2)+2(n+k)}}-\hat{M}^\dagger \hat{M})y,

    where y\in \mathbb{R}^{2(n^2+k^2)+2(n+k)} , and the minimal norm mixed solution (\hat{X}, \hat{Y}) satisfies

    \left( \begin{array}{c} v^R_s(\hat{X})\\ v^R_s(\hat{Y})\\ \end{array} \right) = \hat{M}^\dagger v_r^R(E).

    Thus we obtain the formulas in (4.4) and (4.5).

    In this section, using the results in Section 4, we propose an algorithm for solving Problem 1 and give a numerical example.

    Algorithm 5.1. (Problem 1)

    (1) Input A\in\mathbb{Q}^{m\times n}, \ B\in\mathbb{Q}^{n\times s}, \ C\in\mathbb{Q}^{m\times k}, \ D\in\mathbb{Q}^{k\times s}, \ E\in\mathbb{Q}^{m\times s} ; output v^R_r(A), \ v_r^R(C), \ v_c^R(B), \ v_c^R(D), \ v^R_r(E) .

    (2) Input G, \ W_{[m,n]}, \ J^\eta, \ R^\eta ; output the matrix \hat{M} .

    (3) According to (4.2), output the minimal norm least squares mixed solution (\hat{X}, \hat{Y}) of (1.1).

    Example 5.1. Consider the quaternion matrix equation AXB+CYD = E . Using 'rand' and 'quaternion' in Matlab, the quaternion matrices A, \ B, \ C, \ D are created. Suppose X\in \eta\mathbb{HQ}^{n\times n}, \ Y\in \eta\mathbb{AQ}^{k\times k} , \eta = {{\bf i}} . Let m = n = k = s = 8 , and randomly generate 20 groups of matrices A, B, C, D, X, Y . For each group of the quaternion matrix equation (1.1), we compute a solution (X_T, Y_T) of Problem 1 by Algorithm 5.1 and by the method in [30], respectively; the error \varepsilon = \log_{10}(\|[X_T, Y_T]-[X, Y]\|) is shown in the figure below.

    Figure. Errors \varepsilon for \eta = {{\bf i}} .

    Here, the two methods are compared on the {{\bf i}} -Hermitian and {{\bf i}} -anti-Hermitian mixed solutions against the true solutions. It can be seen that the real vector representation method based on the semi-tensor product of matrices outperforms the real representation method in most trials. A large number of numerical experiments show that, when computing the same quaternion matrix equation (1.1), the real vector representation method wins with a probability of more than 50\% .

    Remark 5.1. (i) There are many kinds of mixed solutions. In Example 5.1, only the {{\bf i}} -Hermitian and {{\bf i}} -anti-Hermitian case is studied.

    (ii) For the comparison with the real representation method in [30], in order to ensure that the number of effective elements computed is the same, the matrices J^{{{\bf i}}} and R^{{{\bf i}}} used to extract the independent elements were adjusted beforehand.

    In this paper, we proposed a real vector representation of a quaternion matrix and combined it with the semi-tensor product of matrices to solve the least squares problems stated in Problem 1. With the help of this real vector representation and the semi-tensor product of matrices, problems of solving structured matrices over the quaternion skew-field can be transformed into corresponding problems over the real number field, which is very helpful for solving quaternion matrix equations.

    This work was supported in part by the Natural Science Foundation of Shandong Province under grant ZR2020MA053.

    The authors declare that they have no conflict of interest regarding this work.



    [1] S. Adler, Scattering and decay theory for quaternionic quantum mechanics and structure of induced nonconservation, Phys. Rev. D, 37 (1988), 3654–3662.
    [2] F. Caccavale, C. Natale, B. Siciliano, L. Villani, Six-dof impedance control based on angle/axis representations, IEEE Transactions on Robotics and Automation, 2 (1999), 289–300.
    [3] N. Bihan, S. Sangwine, Color image decomposition using quaternion singular value decomposition, in: Proceedings of the IEEE International Conference on Visual Information Engineering (VIE), Guildford, (2003), 113–116.
    [4] L. Ghouti, Robust perceptual color image hashing using quaternion singular value decomposition, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2014.
    [5] D. R. Farenick, B. A. F. Pidkowich, The spectral theorem in quaternions, Linear Algebra Appl., 371 (2003), 75–102. doi: 10.1016/S0024-3795(03)00420-8
    [6] P. Ji, H. Wu, A closed-form forward kinematics solution for the 6-6p Stewart platform, IEEE Transactions on Robotics and Automation, 17 (2001), 522–526. doi: 10.1109/70.954766
    [7] C. Moxey, S. Sangwine, T. Ell, Hypercomplex correlation techniques for vector images, IEEE T. Signal Proces., 51 (2003), 1941–1953. doi: 10.1109/TSP.2003.812734
    [8] A. Davies, Quaternionic Dirac equation, Phys. Rev. D, 41 (1990), 2628–2630.
    [9] M. Wang, M. Wei, Y. Feng, An iterative algorithm for least squares problem in quaternionic quantum theory, Comput. Phys. Commun., 4 (2008), 203–207.
    [10] Q. Wang, Bisymmetric and centrosymmetric solutions to system of real quaternion matrix equation, Comput. Math. Appl., 49 (2005), 641–650. doi: 10.1016/j.camwa.2005.01.014
    [11] Q. Wang, X. Yang, S. Yuan, The Least Square Solution with the Least Norm to a System of Quaternion Matrix Equations, Iranian Journal of Science and Technology, Transactions A: Science, 42 (2018), 1317–1325. doi: 10.1007/s40995-017-0472-x
    [12] C. Took, D. Mandic, The quaternion LMS algorithm for adaptive filtering of hypercomplex real world processes, IEEE T. Signal Proces., 57 (2009), 1316–1327. doi: 10.1109/TSP.2008.2010600
    [13] C. Took, D. Mandic, Augmented second-order statistics of quaternion random signals, Signal Processing, 91 (2011), 214–224. doi: 10.1016/j.sigpro.2010.06.024
    [14] S. Ling, Z. Jia, B. Lu, B. Yang, Matrix LSQR algorithm for structured solutions to quaternionic least squares problem, Comput. Math. Appl., 77 (2019), 830–845. doi: 10.1016/j.camwa.2018.10.023
    [15] S. Yuan, Q. Wang, Two special kinds of least squares solutions for the quaternion matrix equation AXB+CYD = E, The Electronic Journal of Linear Algebra, 23 (2012), 257–274.
    [16] S. Yuan, Q. Wang, X. Zhang, Least-squares problem for the quaternion matrix equation AXB+CYD = E over different constrained matrices, Int. J. Comput. Math., 90 (2013), 565–576. doi: 10.1080/00207160.2012.722626
    [17] X. Liu, The \eta-anti-Hermitian solution to some classic matrix equations, Appl. Math. Comput., 320 (2018), 264–270.
    [18] A. Rehman, Q. Wang, Z. He, Solution to a system of a real quaternion matrix equations encompassing \eta-Hermicity, Appl. Math. Comput., 265 (2015), 945–957.
    [19] Y. Wang, C. Zhang, Z. Liu, A matrix approach to graph maximum stable set and coloring problem with application to multi-agent systems, Automatica, 48 (2012), 1227–1236. doi: 10.1016/j.automatica.2012.03.024
    [20] D. Zhao, H. Peng, L. Li, H. Li, Y. Yang, Novel way to research nonlinear feedback shift register, Sci. China Inform. Sci., 57 (2014), 1–14.
    [21] H. Li, Y. Wang, Boolean derivative calculation with application to fault detection of combinational circuits via the semi-tensor product method, Automatica, 48 (2012), 688–693. doi: 10.1016/j.automatica.2012.01.021
    [22] P. Guo, Y. Wang, H. Li, Algebraic formulation and strategy optimization for a class of evolutionary networked games via semi-tensor product method, Automatica, 49 (2013), 3384–3389. doi: 10.1016/j.automatica.2013.08.008
    [23] D. Cheng, H. Qi, F. He, T. Xu, H. Dong, Semi-tensor product approach to networked evolutionary games, Control Theory and Technology, 12 (2014), 198–214. doi: 10.1007/s11768-014-0038-9
    [24] D. Cheng, T. Xu, Application of STP to cooperative games, Proceedings of 10th IEEE, International Conference on Control and Automation, Zhejiang, (2013), 1680–1685.
    [25] M. Wei, Y. Li, F. Zhang, et al., Quaternion matrix computations, New York: Nova Science Publisher, 2018.
    [26] C. Took, D. Mandic, F. Zhang, On the unitary diagonalization of a special class of quaternion matrices, Appl. Math. Lett., 24 (2011), 1806–1809. doi: 10.1016/j.aml.2011.04.038
    [27] D. Z. Cheng, H. Qi, A. Xue, A survey on semi-tensor product of matrices, Institute of Systems Science Academy of Mathematics, 20 (2007), 304–322.
    [28] D. Cheng, H. Qi, Z. Liu, From STP to game based control, Sci. China Inform. Sci., 61 (2018), 1–19.
    [29] G. Golub, C. Van Loan, Matrix computations, 4th Edition, Baltimore: The Johns Hopkins University Press, 2013.
    [30] F. Zhang, M. Wei, Y. Li, J. Zhao, An efficient real representation method for least squares problem of the quaternion constrained matrix equation AXB+CYD = E, Int. J. Comput. Math., (2020), 1–12.
  • © 2021 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)