Research article

The best approximation problems between the least-squares solution manifolds of two matrix equations

  • Received: 16 April 2024; Revised: 01 June 2024; Accepted: 24 June 2024; Published: 28 June 2024
  • MSC : 15A24, 15A60

  • In this paper, we deal with the following two classes of best approximation problems on linear manifolds. Problem 1. Given matrices $A_1,B_1,C_1,D_1\in\mathbb{R}^{m\times n}$, find $d(L_1,L_2)=\min_{X\in L_1,\,Y\in L_2}\|X-Y\|$, and find $\hat{X}\in L_1$, $\hat{Y}\in L_2$ such that $\|\hat{X}-\hat{Y}\|=d(L_1,L_2)$, where $L_1=\{X\in SR^{n\times n}\mid \|A_1X-B_1\|=\min\}$ and $L_2=\{Y\in SR^{n\times n}\mid \|C_1Y-D_1\|=\min\}$. Problem 2. Given matrices $A_2,B_2,E_2,F_2\in\mathbb{R}^{m\times n}$ and $C_2,D_2,G_2,H_2\in\mathbb{R}^{n\times p}$, find $d(L_3,L_4)=\min_{X\in L_3,\,Y\in L_4}\|X-Y\|$, and find $\tilde{X}\in L_3$, $\tilde{Y}\in L_4$ such that $\|\tilde{X}-\tilde{Y}\|=d(L_3,L_4)$, where $L_3=\{X\in\mathbb{R}^{n\times n}\mid \|A_2X-B_2\|^2+\|XC_2-D_2\|^2=\min\}$ and $L_4=\{Y\in\mathbb{R}^{n\times n}\mid \|E_2Y-F_2\|^2+\|YG_2-H_2\|^2=\min\}$. We obtain explicit formulas for $d(L_1,L_2)$ and $d(L_3,L_4)$, and for all the matrices in question, by using the singular value decompositions and the canonical correlation decompositions of matrices.

    Citation: Yinlan Chen, Yawen Lan. The best approximation problems between the least-squares solution manifolds of two matrix equations[J]. AIMS Mathematics, 2024, 9(8): 20939-20955. doi: 10.3934/math.20241019



1. Introduction

    Matrix equations play important roles in structural vibration systems [1,2], automatic control [3], and other fields [4,5]. Observe that the linear matrix equation

    $$AX=B\qquad(1.1)$$

    and the linear matrix equations

    $$AX=B,\quad XC=D\qquad(1.2)$$

    have been extensively studied, and some profound results have been established. Various special solutions of Eq (1.1) were studied in [6,7,8,9], and various special solutions of Eq (1.2) were considered in [10,11,12,13,14]. Don [15] studied the general symmetric solution to Eq (1.1) by applying a formula for the partitioned minimum-norm reflexive generalized inverse. Dai [16] derived the symmetric solution of Eq (1.1) by utilizing the singular value decomposition (SVD). Sun [17] provided the least-squares symmetric solution for Eq (1.1) by using the SVD. Rao and Mitra [18] studied the common solution of Eq (1.2). Yuan [19] considered the least-squares solution and the minimum-norm least-squares symmetric solution of Eq (1.2) by applying the SVD. Obviously, the least-squares solutions of these matrix equations form a linear manifold.

    It is known that the manifold distance is widely used in many areas, such as pattern recognition [20], image recognition [21,22,23], and Riemannian manifolds [24,25,26,27]. There are some results about the distance between linear manifolds. For example, Kass [28] obtained an expression for the distance from a point $y$ to an affine subspace (linear manifold) $L=\{x\mid Ax=b\}$. Dupré and Kass [29] and Yuan [30] further proposed specific formulas for the distance between two affine subspaces. Grover [31] derived an expression for the distance of a matrix $A$ from any unital $C^*$-subalgebra of $\mathbb{C}^{n\times n}$ by the SVD. In addition, Du and Deng [32] established a new characterization of gaps between two subspaces of a Hilbert space. Baksalary and Trenkler [33] investigated the angles and distances between two given subspaces of $\mathbb{C}^n$. Scheffer and Vahrenhold [34,35] proposed algorithms to approximate geodesic distances and weighted geodesic distances on a 2-manifold in $\mathbb{R}^3$, respectively.
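    The manifold structure of the least-squares solution set is easy to observe numerically. The following sketch (NumPy; the rank-deficient test data are invented for illustration and are not from the paper) shows that shifting a least-squares solution of $AX=B$ along the null space of $A$ leaves the residual unchanged, so the solution set is an affine manifold:

```python
import numpy as np

rng = np.random.default_rng(0)

# Rank-deficient A (5x4, rank 3), so the least-squares solutions of AX = B
# form a genuine linear manifold rather than a single point.
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))
B = rng.standard_normal((5, 4))

X0, *_ = np.linalg.lstsq(A, B, rcond=None)   # minimum-norm least-squares solution

# Shift X0 along the null space of A: the residual does not change.
_, _, Vt = np.linalg.svd(A)
N = Vt[3:].T @ rng.standard_normal((1, 4))   # A @ N is (numerically) zero

r0 = np.linalg.norm(A @ X0 - B)
r1 = np.linalg.norm(A @ (X0 + N) - B)
print(abs(r0 - r1) < 1e-10)
```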
Inspired by the works of Kass [28] and Yuan [30], in this paper we will discuss the best approximation problem between two least-squares symmetric solution manifolds of Eq (1.1) and the best approximation problem between two least-squares solution manifolds of Eq (1.2). Namely, we consider the following two problems:

    Problem 1. Given $A_1,B_1,C_1,D_1\in\mathbb{R}^{m\times n}$, find $d(L_1,L_2)=\min_{X\in L_1,\,Y\in L_2}\|X-Y\|$, and find $\hat{X}\in L_1$, $\hat{Y}\in L_2$ such that $\|\hat{X}-\hat{Y}\|=d(L_1,L_2)$, where

    $$L_1=\{X\in SR^{n\times n}\mid \|A_1X-B_1\|=\min\},\quad L_2=\{Y\in SR^{n\times n}\mid \|C_1Y-D_1\|=\min\},$$

    $SR^{n\times n}$ stands for the set of all $n\times n$ real symmetric matrices, and $\|\cdot\|$ is the Frobenius norm.

    Problem 2. Given matrices $A_2,B_2,E_2,F_2\in\mathbb{R}^{m\times n}$ and $C_2,D_2,G_2,H_2\in\mathbb{R}^{n\times p}$, find $d(L_3,L_4)=\min_{X\in L_3,\,Y\in L_4}\|X-Y\|$, and find $\tilde{X}\in L_3$, $\tilde{Y}\in L_4$ such that $\|\tilde{X}-\tilde{Y}\|=d(L_3,L_4)$, where

    $$L_3=\{X\in\mathbb{R}^{n\times n}\mid \|A_2X-B_2\|^2+\|XC_2-D_2\|^2=\min\},\quad L_4=\{Y\in\mathbb{R}^{n\times n}\mid \|E_2Y-F_2\|^2+\|YG_2-H_2\|^2=\min\}.$$

    The paper is organized as follows. In Section 2, we introduce some important lemmas. In Sections 3 and 4, we derive the expressions of the matrices ˆX,ˆY,˜X,˜Y and present the explicit expressions for d(L1,L2) and d(L3,L4) of Problems 1 and 2 by utilizing the singular value decomposition and the canonical correlation decomposition (CCD). Finally, in Section 5, we provide a simple recipe for numerical computation to solve Problem 2.

2. Preliminaries

    In order to solve Problems 1 and 2, we need the following lemmas.

    Lemma 2.1. [19] Given $A_1,B_1,C_1,D_1\in\mathbb{R}^{m\times n}$, let the SVDs of the matrices $A_1$ and $C_1$ be

    $$A_1=U_1\begin{bmatrix}\Sigma_1&0\\0&0\end{bmatrix}V_1^T,\quad C_1=P_1\begin{bmatrix}\Sigma_2&0\\0&0\end{bmatrix}Q_1^T,$$

    where $\Sigma_1=\mathrm{diag}(\gamma_1,\dots,\gamma_{s_1})>0$ (that is, $\Sigma_1>0$ means that $\Sigma_1$ is a symmetric positive definite matrix), $s_1=\mathrm{rank}(A_1)$, $\Sigma_2=\mathrm{diag}(\theta_1,\dots,\theta_{t_1})>0$, $t_1=\mathrm{rank}(C_1)$, and $U_1=[U_{11},U_{12}]$, $V_1=[V_{11},V_{12}]$, $P_1=[P_{11},P_{12}]$, $Q_1=[Q_{11},Q_{12}]$ are orthogonal matrices with $U_{11}\in\mathbb{R}^{m\times s_1}$, $V_{11}\in\mathbb{R}^{n\times s_1}$, $P_{11}\in\mathbb{R}^{m\times t_1}$, $Q_{11}\in\mathbb{R}^{n\times t_1}$. Let

    $$U_1^TB_1V_1=\begin{bmatrix}B_{11}&B_{12}\\B_{13}&B_{14}\end{bmatrix},\quad P_1^TD_1Q_1=\begin{bmatrix}D_{11}&D_{12}\\D_{13}&D_{14}\end{bmatrix}.$$

    Then, the solution sets L1 and L2 can be expressed as

    $$L_1=\{X\in SR^{n\times n}\mid X=X_1+V_{12}X_{14}V_{12}^T\},\quad L_2=\{Y\in SR^{n\times n}\mid Y=Y_1+Q_{12}Y_{14}Q_{12}^T\},\qquad(2.1)$$

    where

    $$X_1=V_1\begin{bmatrix}W_1\circ(\Sigma_1B_{11}+B_{11}^T\Sigma_1)&\Sigma_1^{-1}B_{12}\\B_{12}^T\Sigma_1^{-1}&0\end{bmatrix}V_1^T,\qquad(2.2)$$
    $$Y_1=Q_1\begin{bmatrix}W_2\circ(\Sigma_2D_{11}+D_{11}^T\Sigma_2)&\Sigma_2^{-1}D_{12}\\D_{12}^T\Sigma_2^{-1}&0\end{bmatrix}Q_1^T,\qquad(2.3)$$

    and $W_1\circ(\Sigma_1B_{11}+B_{11}^T\Sigma_1)$ represents the Hadamard product of $W_1$ and $\Sigma_1B_{11}+B_{11}^T\Sigma_1$, $W_1=[w^{(1)}_{ij}]_{s_1\times s_1}$ with $w^{(1)}_{ij}=\frac{1}{\gamma_i^2+\gamma_j^2}$ ($i,j=1,\dots,s_1$), $W_2=[w^{(2)}_{ij}]_{t_1\times t_1}$ with $w^{(2)}_{ij}=\frac{1}{\theta_i^2+\theta_j^2}$ ($i,j=1,\dots,t_1$), and $X_{14}\in SR^{(n-s_1)\times(n-s_1)}$, $Y_{14}\in SR^{(n-t_1)\times(n-t_1)}$ are arbitrary symmetric matrices.
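    The particular solution $X_1$ of Lemma 2.1 can be checked numerically. The sketch below (NumPy; random test data with a rank fixed by construction, not taken from the paper) builds $X_1$ from the SVD of $A_1$ and verifies the first-order optimality condition of the symmetric least-squares problem, namely that the symmetric part of $A_1^T(A_1X_1-B_1)$ vanishes:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, s = 6, 5, 3
A1 = rng.standard_normal((m, s)) @ rng.standard_normal((s, n))   # rank s
B1 = rng.standard_normal((m, n))

U, sig, Vt = np.linalg.svd(A1)
g = sig[:s]                         # positive singular values gamma_1..gamma_s
V = Vt.T
Bt = U.T @ B1 @ V
B11, B12 = Bt[:s, :s], Bt[:s, s:]

W1 = 1.0 / (g[:, None] ** 2 + g[None, :] ** 2)     # Hadamard weights
Z11 = W1 * (np.diag(g) @ B11 + B11.T @ np.diag(g))
Z12 = np.diag(1.0 / g) @ B12
Z = np.block([[Z11, Z12], [Z12.T, np.zeros((n - s, n - s))]])
X1 = V @ Z @ V.T                    # particular symmetric least-squares solution

# Stationarity over symmetric perturbations: sym(A1^T (A1 X1 - B1)) = 0.
G = A1.T @ (A1 @ X1 - B1)
print(np.linalg.norm(X1 - X1.T) < 1e-10, np.linalg.norm(G + G.T) < 1e-8)
```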

    Lemma 2.2. [19] Suppose that $A_2,B_2,E_2,F_2\in\mathbb{R}^{m\times n}$ and $C_2,D_2,G_2,H_2\in\mathbb{R}^{n\times p}$. Assume that the SVDs of the matrices $A_2,C_2,E_2$, and $G_2$ are

    $$A_2=U_2\begin{bmatrix}\Sigma_3&0\\0&0\end{bmatrix}V_2^T,\quad C_2=P_2\begin{bmatrix}\Sigma_4&0\\0&0\end{bmatrix}Q_2^T,\quad E_2=U_3\begin{bmatrix}\Sigma_5&0\\0&0\end{bmatrix}V_3^T,\quad G_2=P_3\begin{bmatrix}\Sigma_6&0\\0&0\end{bmatrix}Q_3^T,\qquad(2.4)$$

    where $\Sigma_3=\mathrm{diag}(\lambda_1,\dots,\lambda_{s_2})>0$, $s_2=\mathrm{rank}(A_2)$, $\Sigma_4=\mathrm{diag}(\rho_1,\dots,\rho_{t_2})>0$, $t_2=\mathrm{rank}(C_2)$, $\Sigma_5=\mathrm{diag}(\epsilon_1,\dots,\epsilon_{s_3})>0$, $s_3=\mathrm{rank}(E_2)$, $\Sigma_6=\mathrm{diag}(\eta_1,\dots,\eta_{t_3})>0$, $t_3=\mathrm{rank}(G_2)$, and $U_2=[U_{21},U_{22}]$, $V_2=[V_{21},V_{22}]$, $P_2=[P_{21},P_{22}]$, $Q_2=[Q_{21},Q_{22}]$, $U_3=[U_{31},U_{32}]$, $V_3=[V_{31},V_{32}]$, $P_3=[P_{31},P_{32}]$, $Q_3=[Q_{31},Q_{32}]$ are orthogonal matrices with $U_{21}\in\mathbb{R}^{m\times s_2}$, $V_{21}\in\mathbb{R}^{n\times s_2}$, $P_{21}\in\mathbb{R}^{n\times t_2}$, $Q_{21}\in\mathbb{R}^{p\times t_2}$, $U_{31}\in\mathbb{R}^{m\times s_3}$, $V_{31}\in\mathbb{R}^{n\times s_3}$, $P_{31}\in\mathbb{R}^{n\times t_3}$, and $Q_{31}\in\mathbb{R}^{p\times t_3}$. Let

    $$U_2^TB_2P_2=\begin{bmatrix}B_{21}&B_{22}\\B_{23}&B_{24}\end{bmatrix},\quad V_2^TD_2Q_2=\begin{bmatrix}D_{21}&D_{22}\\D_{23}&D_{24}\end{bmatrix},\quad U_3^TF_2P_3=\begin{bmatrix}K_{21}&K_{22}\\K_{23}&K_{24}\end{bmatrix},\quad V_3^TH_2Q_3=\begin{bmatrix}H_{21}&H_{22}\\H_{23}&H_{24}\end{bmatrix}.$$

    Then, the solution sets L3 and L4 can be expressed as

    $$L_3=\{X\in\mathbb{R}^{n\times n}\mid X=X_2+V_{22}X_{24}P_{22}^T\},\quad L_4=\{Y\in\mathbb{R}^{n\times n}\mid Y=Y_2+V_{32}Y_{24}P_{32}^T\},\qquad(2.5)$$

    where

    $$X_2=V_2\begin{bmatrix}W_3\circ(\Sigma_3B_{21}+D_{21}\Sigma_4)&\Sigma_3^{-1}B_{22}\\D_{23}\Sigma_4^{-1}&0\end{bmatrix}P_2^T,\qquad(2.6)$$
    $$Y_2=V_3\begin{bmatrix}W_4\circ(\Sigma_5K_{21}+H_{21}\Sigma_6)&\Sigma_5^{-1}K_{22}\\H_{23}\Sigma_6^{-1}&0\end{bmatrix}P_3^T,\qquad(2.7)$$

    and $W_3=[w^{(3)}_{ij}]_{s_2\times t_2}$ with $w^{(3)}_{ij}=\frac{1}{\lambda_i^2+\rho_j^2}$ ($i=1,\dots,s_2$; $j=1,\dots,t_2$), $W_4=[w^{(4)}_{ij}]_{s_3\times t_3}$ with $w^{(4)}_{ij}=\frac{1}{\epsilon_i^2+\eta_j^2}$ ($i=1,\dots,s_3$; $j=1,\dots,t_3$), and $X_{24}\in\mathbb{R}^{(n-s_2)\times(n-t_2)}$, $Y_{24}\in\mathbb{R}^{(n-s_3)\times(n-t_3)}$ are arbitrary matrices.
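    Formula (2.6) can likewise be checked numerically. The sketch below (NumPy; small random matrices with ranks fixed by construction, invented for the demonstration) assembles $X_2$ and verifies that the gradient of $\|A_2X-B_2\|^2+\|XC_2-D_2\|^2$ vanishes there:

```python
import numpy as np

rng = np.random.default_rng(6)
m, n, p = 5, 4, 3
A2 = rng.standard_normal((m, 2)) @ rng.standard_normal((2, n))   # rank 2
C2 = rng.standard_normal((n, 2)) @ rng.standard_normal((2, p))   # rank 2
B2 = rng.standard_normal((m, n))
D2 = rng.standard_normal((n, p))

U2, sa, V2t = np.linalg.svd(A2); V2 = V2t.T
P2, sc, Q2t = np.linalg.svd(C2)
s2 = t2 = 2
S3, S4 = sa[:s2], sc[:t2]           # positive singular values of A2 and C2

Bt = U2.T @ B2 @ P2
Dt = V2.T @ D2 @ Q2t.T
B21, B22 = Bt[:s2, :t2], Bt[:s2, t2:]
D21, D23 = Dt[:s2, :t2], Dt[s2:, :t2]

W3 = 1.0 / (S3[:, None] ** 2 + S4[None, :] ** 2)
Z = np.zeros((n, n))
Z[:s2, :t2] = W3 * (np.diag(S3) @ B21 + D21 @ np.diag(S4))
Z[:s2, t2:] = np.diag(1 / S3) @ B22
Z[s2:, :t2] = D23 @ np.diag(1 / S4)
X2 = V2 @ Z @ P2.T                  # particular least-squares solution (2.6)

# Stationarity of ||A2 X - B2||^2 + ||X C2 - D2||^2 at X2:
grad = A2.T @ (A2 @ X2 - B2) + (X2 @ C2 - D2) @ C2.T
print(np.linalg.norm(grad) < 1e-8)
```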

    Lemma 2.3. Let $F_{26},F_{56}\in\mathbb{R}^{h_1\times k_1}$, and let $C_A=\mathrm{diag}(\alpha_1,\dots,\alpha_{h_1})>0$, $S_A=\mathrm{diag}(\beta_1,\dots,\beta_{h_1})>0$ satisfy $\alpha_i^2+\beta_i^2=1$ ($i=1,\dots,h_1$). Then,

    $$\Phi(T_{23})=\|C_AT_{23}+F_{26}\|^2+\|S_AT_{23}+F_{56}\|^2=\min$$

    if and only if

    $$T_{23}=-(C_AF_{26}+S_AF_{56}).\qquad(2.8)$$

    Proof. Let $F_{26}=[f_{ij}]$, $F_{56}=[g_{ij}]\in\mathbb{R}^{h_1\times k_1}$, and $T_{23}=[t_{ij}]\in\mathbb{R}^{h_1\times k_1}$. Then,

    $$\Phi(T_{23})=\sum_{i=1}^{h_1}\sum_{j=1}^{k_1}\left((\alpha_it_{ij}+f_{ij})^2+(\beta_it_{ij}+g_{ij})^2\right).$$

    Now, we minimize the quantities

    $$\varphi_1=(\alpha_it_{ij}+f_{ij})^2+(\beta_it_{ij}+g_{ij})^2\quad(i=1,\dots,h_1;\ j=1,\dots,k_1).$$

    It is easy to obtain the minimizers

    $$t_{ij}=-\frac{\alpha_if_{ij}+\beta_ig_{ij}}{\alpha_i^2+\beta_i^2}=-(\alpha_if_{ij}+\beta_ig_{ij})\quad(i=1,\dots,h_1;\ j=1,\dots,k_1).\qquad(2.9)$$

    Relation (2.8) follows from (2.9) straightforwardly.
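    Because $C_A^2+S_A^2=I$, the closed form (2.8) must agree with a generic least-squares solver applied to the stacked system $\begin{bmatrix}C_A\\S_A\end{bmatrix}T_{23}=-\begin{bmatrix}F_{26}\\F_{56}\end{bmatrix}$. A quick NumPy check (random data, invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
h, k = 4, 3
alpha = rng.uniform(0.1, 0.9, h)
beta = np.sqrt(1 - alpha**2)          # enforce alpha_i^2 + beta_i^2 = 1
CA, SA = np.diag(alpha), np.diag(beta)
F26 = rng.standard_normal((h, k))
F56 = rng.standard_normal((h, k))

T_formula = -(CA @ F26 + SA @ F56)    # closed-form minimizer (2.8)

# The same problem as one stacked least-squares system.
stacked = np.vstack([CA, SA])
rhs = -np.vstack([F26, F56])
T_lstsq, *_ = np.linalg.lstsq(stacked, rhs, rcond=None)

print(np.allclose(T_formula, T_lstsq))
```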

    Lemma 2.4. Suppose that $F_{12},F_{15}\in\mathbb{R}^{r_1\times h_1}$. Let $C_A=\mathrm{diag}(\alpha_1,\dots,\alpha_{h_1})>0$, $S_A=\mathrm{diag}(\beta_1,\dots,\beta_{h_1})>0$ satisfy $\alpha_i^2+\beta_i^2=1$ ($i=1,\dots,h_1$). Then,

    $$\Phi(T_{12},J_{12})=\|T_{12}C_A-J_{12}+F_{12}\|^2+\|T_{12}S_A+F_{15}\|^2=\min\qquad(2.10)$$

    if and only if

    $$T_{12}=-F_{15}S_A^{-1},\quad J_{12}=F_{12}-F_{15}S_A^{-1}C_A.\qquad(2.11)$$

    Proof. Let $F_{12}=[f_{ij}]$, $F_{15}=[k_{ij}]$, $T_{12}=[t_{ij}]$, $J_{12}=[q_{ij}]\in\mathbb{R}^{r_1\times h_1}$. Then, the minimization problem (2.10) is equivalent to

    $$\Phi(T_{12},J_{12})=\sum_{i=1}^{r_1}\sum_{j=1}^{h_1}\left((t_{ij}\alpha_j-q_{ij}+f_{ij})^2+(t_{ij}\beta_j+k_{ij})^2\right).\qquad(2.12)$$

    Clearly, the part of (2.12) involving the variables $t_{ij}$ and $q_{ij}$ is

    $$\varphi_2=(t_{ij}\alpha_j-q_{ij}+f_{ij})^2+(t_{ij}\beta_j+k_{ij})^2\quad(i=1,\dots,r_1;\ j=1,\dots,h_1).$$

    It is easy to verify that the function $\varphi_2$ attains its smallest value when

    $$\frac{\partial\varphi_2}{\partial t_{ij}}=0,\quad \frac{\partial\varphi_2}{\partial q_{ij}}=0\quad(i=1,\dots,r_1;\ j=1,\dots,h_1),$$

    which yields

    $$t_{ij}=-k_{ij}\beta_j^{-1},\quad q_{ij}=f_{ij}-k_{ij}\beta_j^{-1}\alpha_j\quad(i=1,\dots,r_1;\ j=1,\dots,h_1).\qquad(2.13)$$

    By rewriting (2.13) in matrix form, we immediately obtain (2.11).
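    Note that at the minimizers (2.11) both residuals in (2.10) vanish, so the minimum value of $\Phi$ is zero. This is easy to confirm numerically (NumPy sketch with random data, invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
r, h = 3, 4
alpha = rng.uniform(0.1, 0.9, h)
beta = np.sqrt(1 - alpha**2)           # alpha_j^2 + beta_j^2 = 1
CA, SA = np.diag(alpha), np.diag(beta)
F12 = rng.standard_normal((r, h))
F15 = rng.standard_normal((r, h))

SA_inv = np.diag(1.0 / beta)
T12 = -F15 @ SA_inv                    # closed-form minimizers (2.11)
J12 = F12 - F15 @ SA_inv @ CA

phi = (np.linalg.norm(T12 @ CA - J12 + F12) ** 2
       + np.linalg.norm(T12 @ SA + F15) ** 2)
print(phi < 1e-12)                     # both residuals vanish
```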

    Lemma 2.5. [36] Assume that $C_A=\mathrm{diag}(\alpha_1,\dots,\alpha_{h_1})>0$, $S_A=\mathrm{diag}(\beta_1,\dots,\beta_{h_1})>0$ with $\alpha_i^2+\beta_i^2=1$ ($i=1,\dots,h_1$), and $F_{22},F_{25},F_{55}\in\mathbb{R}^{h_1\times h_1}$. Then, the problem

    $$\Phi(T_{22},J_{22})=\|C_AT_{22}C_A-J_{22}+F_{22}\|^2+2\|C_AT_{22}S_A+F_{25}\|^2+\|S_AT_{22}S_A+F_{55}\|^2=\min$$

    has unique symmetric solutions $T_{22},J_{22}\in\mathbb{R}^{h_1\times h_1}$ of the forms

    $$T_{22}=W_5\circ(C_AF_{25}S_A+S_AF_{25}^TC_A+S_AF_{55}S_A),\quad J_{22}=F_{22}+W_5\circ(C_A^2F_{25}S_AC_A+C_AS_AF_{25}^TC_A^2+C_AS_AF_{55}S_AC_A),\qquad(2.14)$$

    where $W_5=[w^{(5)}_{ij}]_{h_1\times h_1}$, $w^{(5)}_{ij}=\frac{1}{\alpha_i^2\alpha_j^2-1}$ ($i,j=1,\dots,h_1$).

    Lemma 2.6. [36] Let $M_{22},M_{25},M_{52},M_{55}\in\mathbb{R}^{h_2\times h_3}$, and let $C_3=\mathrm{diag}(\kappa_1,\dots,\kappa_{h_2})>0$, $S_3=\mathrm{diag}(\sigma_1,\dots,\sigma_{h_2})>0$ satisfy $\kappa_i^2+\sigma_i^2=1$ ($i=1,\dots,h_2$), and let $C_4=\mathrm{diag}(\delta_1,\dots,\delta_{h_3})>0$, $S_4=\mathrm{diag}(\zeta_1,\dots,\zeta_{h_3})>0$ satisfy $\delta_i^2+\zeta_i^2=1$ ($i=1,\dots,h_3$). Then,

    $$\Phi(R_{22},Z_{22})=\|C_3R_{22}C_4-Z_{22}+M_{22}\|^2+\|S_3R_{22}C_4+M_{52}\|^2+\|C_3R_{22}S_4+M_{25}\|^2+\|S_3R_{22}S_4+M_{55}\|^2=\min$$

    if and only if

    $$R_{22}=W_6\circ(C_3M_{25}S_4+S_3M_{52}C_4+S_3M_{55}S_4),\quad Z_{22}=M_{22}+W_6\circ(C_3^2M_{25}S_4C_4+C_3S_3M_{52}C_4^2+C_3S_3M_{55}S_4C_4),\qquad(2.15)$$

    where $W_6=[w^{(6)}_{ij}]_{h_2\times h_3}$ with $w^{(6)}_{ij}=\frac{1}{\kappa_i^2\delta_j^2-1}$ ($i=1,\dots,h_2$; $j=1,\dots,h_3$).

3. The solution of Problem 1

    Let $V_{12}\in\mathbb{R}^{n\times(n-s_1)}$ and $Q_{12}\in\mathbb{R}^{n\times(n-t_1)}$ be column orthogonal matrices, and assume that $n-s_1=\mathrm{rank}(V_{12})\geq\mathrm{rank}(Q_{12})=n-t_1$. Let the CCD [37] of the matrix pair $[V_{12},Q_{12}]$ be

    $$V_{12}=F\Sigma_{A_1}N_1^T,\quad Q_{12}=F\Sigma_{C_1}N_2^T,\qquad(3.1)$$

    in which $F\in\mathbb{R}^{n\times n}$, $N_1\in\mathbb{R}^{(n-s_1)\times(n-s_1)}$, and $N_2\in\mathbb{R}^{(n-t_1)\times(n-t_1)}$ are orthogonal matrices, and

    $$\Sigma_{A_1}=\begin{bmatrix}I&0&0\\0&C_A&0\\0&0&0\\0&0&0\\0&S_A&0\\0&0&I\end{bmatrix}\begin{matrix}r_1\\h_1\\f_1\\t_1-h_1-k_1\\h_1\\k_1\end{matrix},\quad \Sigma_{C_1}=\begin{bmatrix}I&0&0\\0&I&0\\0&0&I\\0&0&0\\0&0&0\\0&0&0\end{bmatrix}\begin{matrix}r_1\\h_1\\f_1\\t_1-h_1-k_1\\h_1\\k_1\end{matrix},$$

    where the column block sizes are $r_1,h_1,k_1$ for $\Sigma_{A_1}$ and $r_1,h_1,f_1$ for $\Sigma_{C_1}$, with $\mathrm{rank}(V_{12})=r_1+h_1+k_1$, $h_1=\mathrm{rank}([V_{12},Q_{12}])+\mathrm{rank}(Q_{12}^TV_{12})-\mathrm{rank}(V_{12})-\mathrm{rank}(Q_{12})$, $r_1=\mathrm{rank}(V_{12})+\mathrm{rank}(Q_{12})-\mathrm{rank}([V_{12},Q_{12}])$, $k_1=\mathrm{rank}(V_{12})-\mathrm{rank}(Q_{12}^TV_{12})$, $f_1=n-t_1-r_1-h_1$, and

    $$C_A=\mathrm{diag}(\alpha_1,\dots,\alpha_{h_1}),\quad S_A=\mathrm{diag}(\beta_1,\dots,\beta_{h_1})$$

    with

    $$1>\alpha_1\geq\cdots\geq\alpha_{h_1}>0,\quad 0<\beta_1\leq\cdots\leq\beta_{h_1}<1,\quad \alpha_i^2+\beta_i^2=1\ (i=1,\dots,h_1).$$
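    The block sizes $r_1,h_1,k_1,f_1$ are determined purely by ranks, so they can be computed without forming the CCD itself. A NumPy sketch (the column-orthonormal $V_{12}$, $Q_{12}$ are randomly generated for illustration and are not from the paper):

```python
import numpy as np
from numpy.linalg import matrix_rank

rng = np.random.default_rng(4)
n = 8
# Column-orthonormal V12 (n x 4) and Q12 (n x 5), as in the CCD setting.
V12, _ = np.linalg.qr(rng.standard_normal((n, 4)))
Q12, _ = np.linalg.qr(rng.standard_normal((n, 5)))

rV, rQ = matrix_rank(V12), matrix_rank(Q12)
r_joint = matrix_rank(np.hstack([V12, Q12]))
r_cross = matrix_rank(Q12.T @ V12)

r1 = rV + rQ - r_joint              # dimension of the common column space
h1 = r_joint + r_cross - rV - rQ    # number of angle pairs (alpha_i, beta_i)
k1 = rV - r_cross
f1 = rQ - r1 - h1                   # here rank(Q12) = n - t1, so f1 = n - t1 - r1 - h1
print(r1 + h1 + k1 == rV)           # the block sizes tile rank(V12)
```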

    It follows from (2.1) and (3.1) that for any $X\in L_1$ and $Y\in L_2$, we have

    $$\|X-Y\|=\|F\Sigma_{A_1}N_1^TX_{14}N_1\Sigma_{A_1}^TF^T-F\Sigma_{C_1}N_2^TY_{14}N_2\Sigma_{C_1}^TF^T+(X_1-Y_1)\|=\|\Sigma_{A_1}N_1^TX_{14}N_1\Sigma_{A_1}^T-\Sigma_{C_1}N_2^TY_{14}N_2\Sigma_{C_1}^T+F^T(X_1-Y_1)F\|.$$

    Write

    $$F^T(X_1-Y_1)F=\begin{bmatrix}F_{11}&F_{12}&F_{13}&F_{14}&F_{15}&F_{16}\\F_{12}^T&F_{22}&F_{23}&F_{24}&F_{25}&F_{26}\\F_{13}^T&F_{23}^T&F_{33}&F_{34}&F_{35}&F_{36}\\F_{14}^T&F_{24}^T&F_{34}^T&F_{44}&F_{45}&F_{46}\\F_{15}^T&F_{25}^T&F_{35}^T&F_{45}^T&F_{55}&F_{56}\\F_{16}^T&F_{26}^T&F_{36}^T&F_{46}^T&F_{56}^T&F_{66}\end{bmatrix},\qquad(3.2)$$

    $$N_1^TX_{14}N_1=\begin{bmatrix}T_{11}&T_{12}&T_{13}\\T_{12}^T&T_{22}&T_{23}\\T_{13}^T&T_{23}^T&T_{33}\end{bmatrix},\quad N_2^TY_{14}N_2=\begin{bmatrix}J_{11}&J_{12}&J_{13}\\J_{12}^T&J_{22}&J_{23}\\J_{13}^T&J_{23}^T&J_{33}\end{bmatrix}.\qquad(3.3)$$

    Then,

    $$d(L_1,L_2)=\min_{X\in L_1,\,Y\in L_2}\|X-Y\|=\left\|\begin{bmatrix}T_{11}-J_{11}+F_{11}&T_{12}C_A-J_{12}+F_{12}&F_{13}-J_{13}&F_{14}&T_{12}S_A+F_{15}&T_{13}+F_{16}\\C_AT_{12}^T-J_{12}^T+F_{12}^T&C_AT_{22}C_A-J_{22}+F_{22}&F_{23}-J_{23}&F_{24}&C_AT_{22}S_A+F_{25}&C_AT_{23}+F_{26}\\F_{13}^T-J_{13}^T&F_{23}^T-J_{23}^T&F_{33}-J_{33}&F_{34}&F_{35}&F_{36}\\F_{14}^T&F_{24}^T&F_{34}^T&F_{44}&F_{45}&F_{46}\\S_AT_{12}^T+F_{15}^T&S_AT_{22}C_A+F_{25}^T&F_{35}^T&F_{45}^T&S_AT_{22}S_A+F_{55}&S_AT_{23}+F_{56}\\T_{13}^T+F_{16}^T&T_{23}^TC_A+F_{26}^T&F_{36}^T&F_{46}^T&T_{23}^TS_A+F_{56}^T&T_{33}+F_{66}\end{bmatrix}\right\|=\min.\qquad(3.4)$$

    It follows from (3.2)–(3.4) that $d(L_1,L_2)=\min_{X\in L_1,\,Y\in L_2}\|X-Y\|$ if and only if

    $$\|T_{11}-J_{11}+F_{11}\|=\min,\qquad(3.5)$$
    $$\|T_{13}+F_{16}\|=\min,\quad \|T_{33}+F_{66}\|=\min,\qquad(3.6)$$
    $$\|F_{13}-J_{13}\|=\min,\quad \|F_{23}-J_{23}\|=\min,\quad \|F_{33}-J_{33}\|=\min,\qquad(3.7)$$
    $$\|C_AT_{23}+F_{26}\|^2+\|S_AT_{23}+F_{56}\|^2=\min,\qquad(3.8)$$
    $$\|T_{12}C_A-J_{12}+F_{12}\|^2+\|T_{12}S_A+F_{15}\|^2=\min,\qquad(3.9)$$
    $$\|C_AT_{22}C_A-J_{22}+F_{22}\|^2+2\|C_AT_{22}S_A+F_{25}\|^2+\|S_AT_{22}S_A+F_{55}\|^2=\min.\qquad(3.10)$$

    From (3.5), we obtain

    $$T_{11}=J_{11}-F_{11},\qquad(3.11)$$

    where $J_{11}\in SR^{r_1\times r_1}$ is an arbitrary matrix. By (3.6) and (3.7), we can find that

    $$T_{13}=-F_{16},\quad T_{33}=-F_{66},\quad J_{13}=F_{13},\quad J_{23}=F_{23},\quad J_{33}=F_{33}.\qquad(3.12)$$

    By applying Lemma 2.3, from (3.8) we have

    $$T_{23}=-(C_AF_{26}+S_AF_{56}).\qquad(3.13)$$

    Solving the minimization problem (3.9) by using Lemma 2.4, we obtain

    $$T_{12}=-F_{15}S_A^{-1},\quad J_{12}=F_{12}-F_{15}S_A^{-1}C_A.\qquad(3.14)$$

    Applying Lemma 2.5, we can obtain (2.14) from (3.10). Inserting (2.14) and (3.11)–(3.14) into (3.3) and (3.4), we obtain

    $$X_{14}=N_1\begin{bmatrix}J_{11}-F_{11}&-F_{15}S_A^{-1}&-F_{16}\\-S_A^{-1}F_{15}^T&T_{22}&-(C_AF_{26}+S_AF_{56})\\-F_{16}^T&-(F_{26}^TC_A+F_{56}^TS_A)&-F_{66}\end{bmatrix}N_1^T,$$
    $$Y_{14}=N_2\begin{bmatrix}J_{11}&F_{12}-F_{15}S_A^{-1}C_A&F_{13}\\F_{12}^T-C_AS_A^{-1}F_{15}^T&J_{22}&F_{23}\\F_{13}^T&F_{23}^T&F_{33}\end{bmatrix}N_2^T,$$
    $$d(L_1,L_2)=\left\|\begin{bmatrix}0&0&0&F_{14}&0&0\\0&0&0&F_{24}&C_AT_{22}S_A+F_{25}&S_A^2F_{26}-C_AS_AF_{56}\\0&0&0&F_{34}&F_{35}&F_{36}\\F_{14}^T&F_{24}^T&F_{34}^T&F_{44}&F_{45}&F_{46}\\0&S_AT_{22}C_A+F_{25}^T&F_{35}^T&F_{45}^T&S_AT_{22}S_A+F_{55}&C_A^2F_{56}-S_AC_AF_{26}\\0&F_{26}^TS_A^2-F_{56}^TS_AC_A&F_{36}^T&F_{46}^T&F_{56}^TC_A^2-F_{26}^TC_AS_A&0\end{bmatrix}\right\|.\qquad(3.15)$$

    The relation (3.15) can be equivalently written as

    $$d(L_1,L_2)=\Big(2\sum_{i=1}^{3}\|F_{i4}\|^2+2\sum_{j=5}^{6}\|F_{3j}\|^2+2\sum_{k=5}^{6}\|F_{4k}\|^2+\|F_{44}\|^2+\|S_AT_{22}S_A+F_{55}\|^2+2\|C_AT_{22}S_A+F_{25}\|^2+2\|S_A^2F_{26}-C_AS_AF_{56}\|^2+2\|C_A^2F_{56}-S_AC_AF_{26}\|^2\Big)^{\frac12}.\qquad(3.16)$$

    As a summary of the above discussion, we have proved the following result.

    Theorem 3.1. Given the matrices $A_1,B_1,C_1,D_1\in\mathbb{R}^{m\times n}$, the explicit expression for $d(L_1,L_2)$ of Problem 1 is given by (3.16), and the matrices $\hat{X}$ and $\hat{Y}$ are given by

    $$\hat{X}=X_1+V_{12}\hat{X}_{14}V_{12}^T,\quad \hat{Y}=Y_1+Q_{12}\hat{Y}_{14}Q_{12}^T,\qquad(3.17)$$

    where

    $$\hat{X}_{14}=N_1\begin{bmatrix}J_{11}-F_{11}&-F_{15}S_A^{-1}&-F_{16}\\-S_A^{-1}F_{15}^T&T_{22}&-(C_AF_{26}+S_AF_{56})\\-F_{16}^T&-(F_{26}^TC_A+F_{56}^TS_A)&-F_{66}\end{bmatrix}N_1^T,\quad \hat{Y}_{14}=N_2\begin{bmatrix}J_{11}&F_{12}-F_{15}S_A^{-1}C_A&F_{13}\\F_{12}^T-C_AS_A^{-1}F_{15}^T&J_{22}&F_{23}\\F_{13}^T&F_{23}^T&F_{33}\end{bmatrix}N_2^T,$$

    $J_{11}\in SR^{r_1\times r_1}$ is an arbitrary matrix, and $X_1,Y_1,T_{22},J_{22}$ are given by (2.2), (2.3), and (2.14), respectively.

4. The solution of Problem 2

    Suppose that $V_{22}\in\mathbb{R}^{n\times(n-s_2)}$, $V_{32}\in\mathbb{R}^{n\times(n-s_3)}$, $P_{22}\in\mathbb{R}^{n\times(n-t_2)}$, and $P_{32}\in\mathbb{R}^{n\times(n-t_3)}$ are column orthogonal matrices, and assume that $n-s_2=\mathrm{rank}(V_{22})\geq\mathrm{rank}(V_{32})=n-s_3$ and $n-t_2=\mathrm{rank}(P_{22})\geq\mathrm{rank}(P_{32})=n-t_3$. Let the CCDs of the matrix pairs $[V_{22},V_{32}]$ and $[P_{22},P_{32}]$ be

    $$V_{22}=M\Sigma_{A_2}N_3^T,\quad V_{32}=M\Sigma_{E_2}N_4^T,\quad P_{22}=W\Sigma_{C_2}N_5^T,\quad P_{32}=W\Sigma_{G_2}N_6^T,\qquad(4.1)$$

    in which $M\in\mathbb{R}^{n\times n}$, $W\in\mathbb{R}^{n\times n}$, $N_3\in\mathbb{R}^{(n-s_2)\times(n-s_2)}$, $N_4\in\mathbb{R}^{(n-s_3)\times(n-s_3)}$, $N_5\in\mathbb{R}^{(n-t_2)\times(n-t_2)}$, and $N_6\in\mathbb{R}^{(n-t_3)\times(n-t_3)}$ are all orthogonal matrices, and

    $$\Sigma_{A_2}=\begin{bmatrix}I&0&0\\0&C_3&0\\0&0&0\\0&0&0\\0&S_3&0\\0&0&I\end{bmatrix}\begin{matrix}r_2\\h_2\\f_2\\s_3-h_2-k_2\\h_2\\k_2\end{matrix},\quad \Sigma_{E_2}=\begin{bmatrix}I&0&0\\0&I&0\\0&0&I\\0&0&0\\0&0&0\\0&0&0\end{bmatrix}\begin{matrix}r_2\\h_2\\f_2\\s_3-h_2-k_2\\h_2\\k_2\end{matrix},$$

    where the column block sizes are $r_2,h_2,k_2$ for $\Sigma_{A_2}$ and $r_2,h_2,f_2$ for $\Sigma_{E_2}$, with $\mathrm{rank}(V_{22})=r_2+h_2+k_2$, $h_2=\mathrm{rank}([V_{22},V_{32}])+\mathrm{rank}(V_{32}^TV_{22})-\mathrm{rank}(V_{22})-\mathrm{rank}(V_{32})$, $r_2=\mathrm{rank}(V_{22})+\mathrm{rank}(V_{32})-\mathrm{rank}([V_{22},V_{32}])$, $k_2=\mathrm{rank}(V_{22})-\mathrm{rank}(V_{32}^TV_{22})$, $f_2=n-s_3-r_2-h_2$, and

    $$C_3=\mathrm{diag}(\kappa_1,\dots,\kappa_{h_2}),\quad S_3=\mathrm{diag}(\sigma_1,\dots,\sigma_{h_2})$$

    with

    $$1>\kappa_1\geq\cdots\geq\kappa_{h_2}>0,\quad 0<\sigma_1\leq\cdots\leq\sigma_{h_2}<1,\quad \kappa_i^2+\sigma_i^2=1\ (i=1,\dots,h_2).$$

    Similarly,

    $$\Sigma_{C_2}=\begin{bmatrix}I&0&0\\0&C_4&0\\0&0&0\\0&0&0\\0&S_4&0\\0&0&I\end{bmatrix}\begin{matrix}r_3\\h_3\\f_3\\t_3-h_3-k_3\\h_3\\k_3\end{matrix},\quad \Sigma_{G_2}=\begin{bmatrix}I&0&0\\0&I&0\\0&0&I\\0&0&0\\0&0&0\\0&0&0\end{bmatrix}\begin{matrix}r_3\\h_3\\f_3\\t_3-h_3-k_3\\h_3\\k_3\end{matrix},$$

    where the column block sizes are $r_3,h_3,k_3$ for $\Sigma_{C_2}$ and $r_3,h_3,f_3$ for $\Sigma_{G_2}$, with $\mathrm{rank}(P_{22})=r_3+h_3+k_3$, $h_3=\mathrm{rank}([P_{22},P_{32}])+\mathrm{rank}(P_{32}^TP_{22})-\mathrm{rank}(P_{22})-\mathrm{rank}(P_{32})$, $r_3=\mathrm{rank}(P_{22})+\mathrm{rank}(P_{32})-\mathrm{rank}([P_{22},P_{32}])$, $k_3=\mathrm{rank}(P_{22})-\mathrm{rank}(P_{32}^TP_{22})$, $f_3=n-t_3-r_3-h_3$, and

    $$C_4=\mathrm{diag}(\delta_1,\dots,\delta_{h_3}),\quad S_4=\mathrm{diag}(\zeta_1,\dots,\zeta_{h_3})$$

    with

    $$1>\delta_1\geq\cdots\geq\delta_{h_3}>0,\quad 0<\zeta_1\leq\cdots\leq\zeta_{h_3}<1,\quad \delta_i^2+\zeta_i^2=1\ (i=1,\dots,h_3).$$

    It follows from (2.5) and (4.1) that, for any $X\in L_3$ and $Y\in L_4$, we have

    $$\|X-Y\|=\|M\Sigma_{A_2}N_3^TX_{24}N_5\Sigma_{C_2}^TW^T-M\Sigma_{E_2}N_4^TY_{24}N_6\Sigma_{G_2}^TW^T+(X_2-Y_2)\|=\|\Sigma_{A_2}N_3^TX_{24}N_5\Sigma_{C_2}^T-\Sigma_{E_2}N_4^TY_{24}N_6\Sigma_{G_2}^T+M^T(X_2-Y_2)W\|.$$

    If we set

    $$M^T(X_2-Y_2)W=\begin{bmatrix}M_{11}&M_{12}&M_{13}&M_{14}&M_{15}&M_{16}\\M_{21}&M_{22}&M_{23}&M_{24}&M_{25}&M_{26}\\M_{31}&M_{32}&M_{33}&M_{34}&M_{35}&M_{36}\\M_{41}&M_{42}&M_{43}&M_{44}&M_{45}&M_{46}\\M_{51}&M_{52}&M_{53}&M_{54}&M_{55}&M_{56}\\M_{61}&M_{62}&M_{63}&M_{64}&M_{65}&M_{66}\end{bmatrix},\qquad(4.2)$$

    $$N_3^TX_{24}N_5=\begin{bmatrix}R_{11}&R_{12}&R_{13}\\R_{21}&R_{22}&R_{23}\\R_{31}&R_{32}&R_{33}\end{bmatrix},\quad N_4^TY_{24}N_6=\begin{bmatrix}Z_{11}&Z_{12}&Z_{13}\\Z_{21}&Z_{22}&Z_{23}\\Z_{31}&Z_{32}&Z_{33}\end{bmatrix},\qquad(4.3)$$

    then

    $$d(L_3,L_4)=\min_{X\in L_3,\,Y\in L_4}\|X-Y\|=\left\|\begin{bmatrix}R_{11}-Z_{11}+M_{11}&R_{12}C_4-Z_{12}+M_{12}&M_{13}-Z_{13}&M_{14}&R_{12}S_4+M_{15}&R_{13}+M_{16}\\C_3R_{21}-Z_{21}+M_{21}&C_3R_{22}C_4-Z_{22}+M_{22}&M_{23}-Z_{23}&M_{24}&C_3R_{22}S_4+M_{25}&C_3R_{23}+M_{26}\\M_{31}-Z_{31}&M_{32}-Z_{32}&M_{33}-Z_{33}&M_{34}&M_{35}&M_{36}\\M_{41}&M_{42}&M_{43}&M_{44}&M_{45}&M_{46}\\S_3R_{21}+M_{51}&S_3R_{22}C_4+M_{52}&M_{53}&M_{54}&S_3R_{22}S_4+M_{55}&S_3R_{23}+M_{56}\\R_{31}+M_{61}&R_{32}C_4+M_{62}&M_{63}&M_{64}&R_{32}S_4+M_{65}&R_{33}+M_{66}\end{bmatrix}\right\|=\min.\qquad(4.4)$$

    It follows from (4.2)–(4.4) that $d(L_3,L_4)=\min_{X\in L_3,\,Y\in L_4}\|X-Y\|$ if and only if

    $$\|R_{11}-Z_{11}+M_{11}\|=\min,\qquad(4.5)$$
    $$\|M_{13}-Z_{13}\|=\min,\quad \|M_{23}-Z_{23}\|=\min,\quad \|M_{33}-Z_{33}\|=\min,\quad \|M_{31}-Z_{31}\|=\min,\qquad(4.6)$$
    $$\|M_{32}-Z_{32}\|=\min,\quad \|R_{13}+M_{16}\|=\min,\quad \|R_{31}+M_{61}\|=\min,\quad \|R_{33}+M_{66}\|=\min,\qquad(4.7)$$
    $$\|C_3R_{23}+M_{26}\|^2+\|S_3R_{23}+M_{56}\|^2=\min,\qquad(4.8)$$
    $$\|R_{32}C_4+M_{62}\|^2+\|R_{32}S_4+M_{65}\|^2=\min,\qquad(4.9)$$
    $$\|R_{12}C_4-Z_{12}+M_{12}\|^2+\|R_{12}S_4+M_{15}\|^2=\min,\qquad(4.10)$$
    $$\|C_3R_{21}-Z_{21}+M_{21}\|^2+\|S_3R_{21}+M_{51}\|^2=\min,\qquad(4.11)$$
    $$\|C_3R_{22}C_4-Z_{22}+M_{22}\|^2+\|S_3R_{22}C_4+M_{52}\|^2+\|C_3R_{22}S_4+M_{25}\|^2+\|S_3R_{22}S_4+M_{55}\|^2=\min.\qquad(4.12)$$

    From (4.5), we obtain

    $$R_{11}=Z_{11}-M_{11},\qquad(4.13)$$

    where $Z_{11}\in\mathbb{R}^{r_2\times r_3}$ is an arbitrary matrix. By (4.6) and (4.7), we get

    $$Z_{13}=M_{13},\quad Z_{23}=M_{23},\quad Z_{33}=M_{33},\quad Z_{31}=M_{31},\quad Z_{32}=M_{32},\quad R_{13}=-M_{16},\quad R_{31}=-M_{61},\quad R_{33}=-M_{66}.\qquad(4.14)$$

    It follows from (4.8), (4.9), and Lemma 2.3 that

    $$R_{23}=-(C_3M_{26}+S_3M_{56}),\quad R_{32}=-(M_{62}C_4+M_{65}S_4).\qquad(4.15)$$

    By using Lemma 2.4, solving the minimization problems (4.10) and (4.11) yields

    $$R_{12}=-M_{15}S_4^{-1},\quad Z_{12}=M_{12}-M_{15}S_4^{-1}C_4,\quad R_{21}=-S_3^{-1}M_{51},\quad Z_{21}=M_{21}-C_3S_3^{-1}M_{51}.\qquad(4.16)$$

    The relation (2.15) follows from Lemma 2.6 and (4.12). Substituting (2.15) and (4.13)–(4.16) into (4.3) and (4.4) leads to

    $$X_{24}=N_3\begin{bmatrix}Z_{11}-M_{11}&-M_{15}S_4^{-1}&-M_{16}\\-S_3^{-1}M_{51}&R_{22}&-(C_3M_{26}+S_3M_{56})\\-M_{61}&-(M_{62}C_4+M_{65}S_4)&-M_{66}\end{bmatrix}N_5^T,$$
    $$Y_{24}=N_4\begin{bmatrix}Z_{11}&M_{12}-M_{15}S_4^{-1}C_4&M_{13}\\M_{21}-C_3S_3^{-1}M_{51}&Z_{22}&M_{23}\\M_{31}&M_{32}&M_{33}\end{bmatrix}N_6^T,$$
    $$d(L_3,L_4)=\left\|\begin{bmatrix}0&0&0&M_{14}&0&0\\0&0&0&M_{24}&C_3R_{22}S_4+M_{25}&S_3^2M_{26}-C_3S_3M_{56}\\0&0&0&M_{34}&M_{35}&M_{36}\\M_{41}&M_{42}&M_{43}&M_{44}&M_{45}&M_{46}\\0&S_3R_{22}C_4+M_{52}&M_{53}&M_{54}&S_3R_{22}S_4+M_{55}&C_3^2M_{56}-S_3C_3M_{26}\\0&M_{62}S_4^2-M_{65}S_4C_4&M_{63}&M_{64}&M_{65}C_4^2-M_{62}C_4S_4&0\end{bmatrix}\right\|.\qquad(4.17)$$

    The relation (4.17) can be equivalently written as

    $$d(L_3,L_4)=\Big(\sum_{i=1}^{6}\|M_{i4}\|^2+\sum_{j=1}^{6}\|M_{4j}\|^2-\|M_{44}\|^2+\sum_{k=5}^{6}\|M_{3k}\|^2+\sum_{n=5}^{6}\|M_{n3}\|^2+\|S_3^2M_{26}-C_3S_3M_{56}\|^2+\|M_{62}S_4^2-M_{65}S_4C_4\|^2+\|C_3^2M_{56}-S_3C_3M_{26}\|^2+\|M_{65}C_4^2-M_{62}C_4S_4\|^2+\|C_3R_{22}S_4+M_{25}\|^2+\|S_3R_{22}C_4+M_{52}\|^2+\|S_3R_{22}S_4+M_{55}\|^2\Big)^{\frac12}.\qquad(4.18)$$

    By now, we have proved the following theorem.

    Theorem 4.1. Given $A_2,B_2,E_2,F_2\in\mathbb{R}^{m\times n}$ and $C_2,D_2,G_2,H_2\in\mathbb{R}^{n\times p}$, $d(L_3,L_4)$ can be expressed by (4.18), and the matrices $\tilde{X}$ and $\tilde{Y}$ are given by

    $$\tilde{X}=X_2+V_{22}\tilde{X}_{24}P_{22}^T,\quad \tilde{Y}=Y_2+V_{32}\tilde{Y}_{24}P_{32}^T,\qquad(4.19)$$

    where

    $$\tilde{X}_{24}=N_3\begin{bmatrix}Z_{11}-M_{11}&-M_{15}S_4^{-1}&-M_{16}\\-S_3^{-1}M_{51}&R_{22}&-(C_3M_{26}+S_3M_{56})\\-M_{61}&-(M_{62}C_4+M_{65}S_4)&-M_{66}\end{bmatrix}N_5^T,\qquad(4.20)$$
    $$\tilde{Y}_{24}=N_4\begin{bmatrix}Z_{11}&M_{12}-M_{15}S_4^{-1}C_4&M_{13}\\M_{21}-C_3S_3^{-1}M_{51}&Z_{22}&M_{23}\\M_{31}&M_{32}&M_{33}\end{bmatrix}N_6^T,\qquad(4.21)$$

    $Z_{11}\in\mathbb{R}^{r_2\times r_3}$ is an arbitrary matrix, and $X_2,Y_2,R_{22},Z_{22}$ are given by (2.6), (2.7), and (2.15), respectively.

5. Numerical algorithm and example

    According to Theorem 4.1, we have the following algorithm to solve Problem 2.

    Algorithm 1.

    1) Input matrices A2,B2,C2,D2,E2,F2,G2, and H2.

    2) Compute the SVDs of the matrices A2,C2,E2, and G2 according to (2.4).

    3) Compute the CCDs of the matrix pairs $[V_{22},V_{32}]$ and $[P_{22},P_{32}]$ by (4.1).

    4) Calculate the matrices $M_{ij}$ ($i,j=1,\dots,6$) following (4.2).

    5) Randomly choose the matrix Z11.

    6) Calculate the remaining matrices $R_{ij}$ and $Z_{ij}$ ($i,j=1,2,3$) by (2.15) and (4.13)–(4.16).

    7) Compute d(L3,L4),˜X24, and ˜Y24 by (4.18), (4.20), and (4.21), respectively.

    8) Compute matrices ˜X and ˜Y by (4.19).
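    As an independent sanity check on Algorithm 1 (this is not the paper's CCD-based method), $d(L_3,L_4)$ can also be computed by brute force: vectorize each least-squares functional, extract a particular minimizer together with a basis of the manifold directions, and solve one final least-squares problem for the closest pair. A NumPy sketch with small random test matrices, invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, p = 4, 5, 3

def ls_manifold(A, B, C, D, n):
    """Particular minimizer and direction basis for ||A X - B||^2 + ||X C - D||^2."""
    # Stack both residuals as one linear system in vec(X) (column-major vec).
    K = np.vstack([np.kron(np.eye(n), A), np.kron(C.T, np.eye(n))])
    b = np.concatenate([B.flatten(order="F"), D.flatten(order="F")])
    x, *_ = np.linalg.lstsq(K, b, rcond=None)
    _, s, vt = np.linalg.svd(K)
    r = int((s > 1e-10 * s[0]).sum())
    return x, vt[r:].T                 # minimizer and null-space basis of K

# Rank-deficient coefficients so that L3 and L4 are genuine manifolds.
A2 = rng.standard_normal((m, 2)) @ rng.standard_normal((2, n))
C2 = rng.standard_normal((n, 2)) @ rng.standard_normal((2, p))
E2 = rng.standard_normal((m, 2)) @ rng.standard_normal((2, n))
G2 = rng.standard_normal((n, 2)) @ rng.standard_normal((2, p))
B2, D2 = rng.standard_normal((m, n)), rng.standard_normal((n, p))
F2, H2 = rng.standard_normal((m, n)), rng.standard_normal((n, p))

x0, NX = ls_manifold(A2, B2, C2, D2, n)   # L3 = { x0 + NX @ a }
y0, NY = ls_manifold(E2, F2, G2, H2, n)   # L4 = { y0 + NY @ b }

# d(L3, L4) is one final least-squares problem in the joint coefficients (a, b).
J = np.hstack([NX, -NY])
ab, *_ = np.linalg.lstsq(J, y0 - x0, rcond=None)
d = np.linalg.norm(x0 - y0 + J @ ab)
print(0 <= d <= np.linalg.norm(x0 - y0) + 1e-12)
```

    This brute-force route scales poorly (the stacked matrix has $n^2$ columns), but it is a useful way to validate an implementation of Algorithm 1 on small problems.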

    Example 5.1. Let $m=9$, $n=8$, and $p=7$, and let the data matrices $A_2,B_2,E_2,F_2\in\mathbb{R}^{9\times 8}$ and $C_2,D_2,G_2,H_2\in\mathbb{R}^{8\times 7}$ be given numerically, with entries specified to four decimal places.

    By using Algorithm 1, the matrices $\tilde{X},\tilde{Y}\in\mathbb{R}^{8\times 8}$ are calculated, and the distance between $L_3$ and $L_4$ is $d(L_3,L_4)=13.7969$, which implies that there is no common element of the linear manifolds $L_3$ and $L_4$.

6. Conclusions

    In this paper, by utilizing the singular value decompositions and the canonical correlation decompositions of matrices, we have obtained the explicit representation of the optimal approximation distance $d(L_1,L_2)$ between the linear manifolds $L_1$ and $L_2$ and of the matrices $\hat{X}\in L_1$, $\hat{Y}\in L_2$ satisfying $\|\hat{X}-\hat{Y}\|=d(L_1,L_2)$ in Problem 1 (see Theorem 3.1), as well as the explicit representation of the optimal approximation distance $d(L_3,L_4)$ between the linear manifolds $L_3$ and $L_4$ and of the matrices $\tilde{X}\in L_3$, $\tilde{Y}\in L_4$ satisfying $\|\tilde{X}-\tilde{Y}\|=d(L_3,L_4)$ in Problem 2 (see Theorem 4.1). We have also provided a simple recipe for constructing the optimal approximation solution of Problem 2, which can serve as the basis for numerical computation. The approach is demonstrated by a simple numerical example, and reasonable results are produced.

    Yinlan Chen: Conceptualization, Methodology, Project administration, Supervision, Writing-review & editing; Yawen Lan: Investigation, Software, Validation, Writing-original draft, Writing-review & editing. All authors have read and approved the final version of the manuscript for publication.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors declare no conflicts of interest.



References

    [1] J. E. Mottershead, Y. M. Ram, Inverse eigenvalue problems in vibration absorption: passive modification and active control, Mech. Syst. Signal Process., 20 (2006), 5–44. https://doi.org/10.1016/j.ymssp.2005.05.006
    [2] B. Dong, M. M. Lin, M. T. Chu, Parameter reconstruction of vibration systems from partial eigeninformation, J. Sound Vibration, 327 (2009), 391–401. https://doi.org/10.1016/j.jsv.2009.06.026
    [3] S. A. Avdonin, M. I. Belishev, S. A. Ivano, Boundary control and a matrix inverse problem for the equation $u_{tt}-u_{xx}+V(x)u=0$, Math. USSR Sb., 72 (1992), 287–310. https://doi.org/10.1070/SM1992v072n02ABEH002141
    [4] Y. X. Yuan, A symmetric inverse eigenvalue problem in structural dynamic model updating, Appl. Math. Comput., 213 (2009), 516–521. https://doi.org/10.1016/j.amc.2009.03.045
    [5] Y. X. Yuan, H. Dai, An inverse problem for undamped gyroscopic systems, J. Comput. Appl. Math., 236 (2012), 2574–2581. https://doi.org/10.1016/j.cam.2011.12.015
    [6] L. Wu, The re-positive definite solutions to the matrix inverse problem $AX=B$, Linear Algebra Appl., 174 (1992), 145–151. https://doi.org/10.1016/0024-3795(92)90048-F
    [7] K. W. E. Chu, Symmetric solutions of linear matrix equations by matrix decompositions, Linear Algebra Appl., 119 (1989), 35–50. https://doi.org/10.1016/0024-3795(89)90067-0
    [8] L. J. Zhao, X. Y. Hu, L. Zhang, Least squares solutions to $AX=B$ for bisymmetric matrices under a central principal submatrix constraint and the optimal approximation, Linear Algebra Appl., 428 (2008), 871–880. https://doi.org/10.1016/j.laa.2007.08.019
    [9] Q. W. Wang, Bisymmetric and centrosymmetric solutions to systems of real quaternion matrix equations, Comput. Math. Appl., 49 (2005), 641–650. https://doi.org/10.1016/j.camwa.2005.01.014
    [10] S. K. Mitra, The matrix equations $AX=C$, $XB=D$, Linear Algebra Appl., 59 (1984), 171–181.
    [11] S. K. Mitra, A pair of simultaneous linear matrix equations $A_1XB_1=C_1$, $A_2XB_2=C_2$ and a matrix programming problem, Linear Algebra Appl., 131 (1990), 107–123. https://doi.org/10.1016/0024-3795(90)90377-O
    [12] A. Dajić, J. J. Koliha, Positive solutions to the equations $AX=C$ and $XB=D$ for Hilbert space operators, J. Math. Anal. Appl., 333 (2007), 567–576. https://doi.org/10.1016/j.jmaa.2006.11.016
    [13] Y. Y. Qiu, A. D. Wang, Least squares solutions to the equations $AX=B$, $XC=D$ with some constraints, Appl. Math. Comput., 204 (2008), 872–880. https://doi.org/10.1016/j.amc.2008.07.035
    [14] Y. H. Liu, Some properties of submatrices in a solution to the matrix equations $AX=C$, $XB=D$, J. Appl. Math. Comput., 31 (2009), 71–80. https://doi.org/10.1007/s12190-008-0192-7
    [15] F. J. H. Don, On the symmetric solutions of a linear matrix equation, Linear Algebra Appl., 93 (1987), 1–7. https://doi.org/10.1016/S0024-3795(87)90308-9
    [16] D. Hua, On the symmetric solutions of linear matrix equations, Linear Algebra Appl., 131 (1990), 1–7. https://doi.org/10.1016/0024-3795(90)90370-r
    [17] J. G. Sun, Two kinds of inverse eigenvalue problems for real symmetric matrices (Chinese), Math. Numer. Sinica, 3 (1988), 282–290.
    [18] C. R. Rao, S. K. Mitra, Generalized inverse of matrices and its applications, John Wiley & Sons, 1971.
    [19] Y. X. Yuan, Least-squares solutions to the matrix equations $AX=B$ and $XC=D$, Appl. Math. Comput., 216 (2010), 3120–3125. https://doi.org/10.1016/j.amc.2010.04.002
    [20] R. Hettiarachchi, J. F. Peters, Multi-manifold LLE learning in pattern recognition, Pattern Recogn., 48 (2015), 2947–2960. https://doi.org/10.1016/j.patcog.2015.04.003
    [21] R. Souvenir, R. Pless, Image distance functions for manifold learning, Image Vision Comput., 25 (2007), 365–373. https://doi.org/10.1016/j.imavis.2006.01.016
    [22] J. X. Du, M. W. Shao, C. M. Zhai, J. Wang, Y. Y. Tang, C. L. P. Chen, Recognition of leaf image set based on manifold-manifold distance, Neurocomputing, 188 (2016), 131–188. https://doi.org/10.1016/j.neucom.2014.10.113
    [23] L. K. Huang, J. W. Lu, Y. P. Tan, Multi-manifold metric learning for face recognition based on image sets, J. Vis. Commun. Image Represent., 25 (2014), 1774–1783. https://doi.org/10.1016/j.jvcir.2014.08.006
    [24] C. Y. Chen, J. P. Zhang, R. Fleischer, Distance approximating dimension reduction of Riemannian manifolds, IEEE Trans. Syst. Man Cybernet. Part B, 40 (2010), 208–217. https://doi.org/10.1109/TSMCB.2009.2025028
    [25] H. R. Chen, Y. F. Sun, J. B. Gao, Y. L. Hu, B. C. Yin, Solving partial least squares regression via manifold optimization approaches, IEEE Trans. Neural Netw. Learn. Syst., 30 (2019), 588–600. https://doi.org/10.1109/TNNLS.2018.2844866
    [26] M. Shahbazi, A. Shirali, H. Aghajan, H. Nili, Using distance on the Riemannian manifold to compare representations in brain and in models, NeuroImage, 239 (2021), 118271. https://doi.org/10.1016/j.neuroimage.2021.118271
    [27] K. Sharma, R. Rameshan, Distance based kernels for video tensors on product of Riemannian matrix manifolds, J. Vis. Commun. Image Represent., 75 (2021), 103045. https://doi.org/10.1016/j.jvcir.2021.103045
    [28] S. Kass, Spaces of closest fit, Linear Algebra Appl., 117 (1989), 93–97. https://doi.org/10.1016/0024-3795(89)90550-8
    [29] A. M. Dupré, S. Kass, Distance and parallelism between flats in $\mathbb{R}^n$, Linear Algebra Appl., 171 (1992), 99–107. https://doi.org/10.1016/0024-3795(92)90252-6
    [30] Y. X. Yuan, On the approximation between affine subspaces (Chinese), J. Nanjing Univ. Math. Biq., 17 (2000), 244–249.
    [31] P. Grover, Orthogonality to matrix subspaces, and a distance formula, Linear Algebra Appl., 445 (2014), 280–288. https://doi.org/10.1016/j.laa.2013.11.040
    [32] H. K. Du, C. Y. Deng, A new characterization of gaps between two subspaces, Proc. Amer. Math. Soc., 133 (2005), 3065–3070.
    [33] O. M. Baksalary, G. Trenkler, On angles and distances between subspaces, Linear Algebra Appl., 431 (2009), 2243–2260. https://doi.org/10.1016/j.laa.2009.07.021
    [34] C. Scheffer, J. Vahrenhold, Approximating geodesic distances on 2-manifolds in $\mathbb{R}^3$, Comput. Geom., 47 (2014), 125–140. https://doi.org/10.1016/j.comgeo.2012.05.001
    [35] C. Scheffer, J. Vahrenhold, Approximating geodesic distances on 2-manifolds in $\mathbb{R}^3$: the weighted case, Comput. Geom., 47 (2014), 789–808. https://doi.org/10.1016/j.comgeo.2014.04.003
    [36] G. P. Xu, M. S. Wei, D. S. Zheng, On solutions of matrix equation $AXB+CYD=F$, Linear Algebra Appl., 279 (1998), 93–109. https://doi.org/10.1016/S0024-3795(97)10099-4
    [37] G. H. Golub, H. Y. Zha, Perturbation analysis of the canonical correlations of matrix pairs, Linear Algebra Appl., 210 (1994), 3–28. https://doi.org/10.1016/0024-3795(94)90463-4
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
