
Supporting vectors vs. principal components

  • Let $T:X\to Y$ be a bounded linear operator between Banach spaces $X,Y$. A vector $x_0\in S_X$ in the unit sphere $S_X$ of $X$ is called a supporting vector of $T$ provided that $\|T(x_0)\| = \sup\{\|T(x)\| : \|x\|=1\} = \|T\|$. Since matrices induce linear operators between finite-dimensional Hilbert spaces, we can consider their supporting vectors. In this manuscript, we unveil the relationship between the principal components of a matrix and its supporting vectors. Applications of our results to real-life problems are provided.

    Citation: Almudena P. Márquez, Francisco Javier García-Pacheco, Míriam Mengibar-Rodríguez, Alberto Sánchez-Alzola. Supporting vectors vs. principal components[J]. AIMS Mathematics, 2023, 8(1): 1937-1958. doi: 10.3934/math.2023100




    Supporting Vector Analysis (SVA) is a relatively recent technique that allows one to solve analytically many real-life problems that used to be tackled by means of heuristic methods. The lack of mathematical formalism of heuristic methods often resulted in unpredictable solutions, that is, mathematical solutions whose real-life interpretations make no sense. Supporting vectors came into play to overcome this issue. In this way, supporting vectors have been used successfully to solve multiobjective optimization problems arising in different disciplines, such as Bioengineering, Physics, and Statistics [4,6,7,8,9,15,22], considerably improving the results achieved by other methods, for instance, heuristic techniques [10,11,20,21].

    In [4,6,15], it was proven that Singular Value Decomposition (SVD) can be seen as a particular case of SVA. This fact triggered the new trend of restating Statistical notions from the perspective of Functional Analysis and Operator Theory. The main objective of this manuscript is to study Principal Component Analysis (PCA) by means of SVA.

    We will review several basic notions from Operator Theory that will turn out to be crucial for the development of this manuscript.

    If $x=(x_1,\dots,x_n)\in\mathbb{R}^n$, then the mean of $x$ is defined as $\overline{x} := \frac{1}{n}\sum_{i=1}^n x_i$, and its standard deviation is given by $s_x := \sqrt{\frac{1}{n}\sum_{i=1}^n (x_i-\overline{x})^2}$. Notice that

    $\sqrt{n}\, s_x = \|x-\overline{\mathbf{x}}\|_2$, (2.1.1)

    where $\overline{\mathbf{x}} := (\overline{x},\dots,\overline{x})$ denotes the constant vector of term $\overline{x}$ (in general, if $a\in\mathbb{R}$, then $\mathbf{a} := (a,\dots,a)$ denotes the constant vector of term $a$).

    We say that $x\in\mathbb{R}^n$ is centered provided that $\overline{x}=0$, and it is standardized provided that $\overline{x}=0$ and $s_x=1$. In the latter situation, $\|x\|_2=\sqrt{n}$, in view of (2.1.1). The subset of centered vectors of $\mathbb{R}^n$ is usually denoted by $\mathrm{cen}(\mathbb{R}^n)$, that is, $\mathrm{cen}(\mathbb{R}^n) := \{x\in\mathbb{R}^n : \overline{x}=0\}$. The subset of standardized vectors of $\mathbb{R}^n$ is usually denoted by $\mathrm{stan}(\mathbb{R}^n)$, that is,

    $\mathrm{stan}(\mathbb{R}^n) := \{x\in\mathbb{R}^n : \overline{x}=0 \text{ and } s_x=1\}$.

    According to (2.1.1), $\mathrm{stan}(\mathbb{R}^n)\subseteq \sqrt{n}\,S_{\ell_2^n}$, where $S_{\ell_2^n}$ stands for the unit sphere of $\ell_2^n := (\mathbb{R}^n,\|\cdot\|_2)$. In Topology, $S_{\ell_2^n}$ is denoted by $S^{n-1}$.
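    As a quick illustration of these definitions, the following sketch (assuming NumPy and an arbitrary vector) checks identity (2.1.1) and the fact that standardized vectors lie on the sphere of radius $\sqrt{n}$.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 5
        x = rng.normal(size=n)

        x_bar, s_x = x.mean(), x.std()                                   # mean and standard deviation
        assert np.isclose(np.sqrt(n) * s_x, np.linalg.norm(x - x_bar))   # identity (2.1.1)

        z = (x - x_bar) / s_x                                            # standardized version of x
        assert np.isclose(z.mean(), 0.0) and np.isclose(z.std(), 1.0)
        assert np.isclose(np.linalg.norm(z), np.sqrt(n))                 # standardized vectors lie on sqrt(n) * S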

    The covariance of two vectors $x,y\in\mathbb{R}^n$ is defined as

    $s_{x,y} := \frac{1}{n}\sum_{i=1}^n (x_i-\overline{x})(y_i-\overline{y})$.

    Notice that $s_{x,x}=s_x^2$, that is, the variance of $x$. The covariance matrix of a given matrix $A\in\mathcal{M}_{m\times n}$ is defined by $s_{a_1,\dots,a_n} := (s_{a_i,a_j})_{i,j=1,\dots,n}$, where $a_1,\dots,a_n$ stand for the column vectors of $A$.

    Consider a matrix $A\in\mathcal{M}_{m\times n}$. The principal components of $A$ are defined as $Ax_1,\dots,Ax_n$, where $\{x_1,\dots,x_n\}$ is an ordered orthonormal basis of eigenvectors of $s_{a_1,\dots,a_n}$, sorting the eigenvalues of $s_{a_1,\dots,a_n}$ decreasingly.
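    In computational terms, this definition can be sketched as follows (a minimal illustration assuming NumPy and synthetic data): the eigenpairs of the covariance matrix of the columns of $A$ are sorted decreasingly, and the principal components are the images $Ax_k$.

        import numpy as np

        def principal_components(A):
            """Principal components of A: the vectors A @ x_k, where the x_k form an
            orthonormal eigenbasis of the covariance matrix of the columns of A,
            with eigenvalues sorted decreasingly."""
            C = np.cov(A, rowvar=False, bias=True)     # covariance matrix s_{a_1,...,a_n}
            eigvals, eigvecs = np.linalg.eigh(C)       # C is symmetric, so eigh applies
            order = np.argsort(eigvals)[::-1]          # decreasing eigenvalues
            eigvecs = eigvecs[:, order]
            return eigvals[order], eigvecs, A @ eigvecs

        # hypothetical toy data
        A = np.random.default_rng(1).normal(size=(100, 6))
        eigvals, eigvecs, pcs = principal_components(A)   # pcs[:, 0] is the first principal component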

    We refer the reader to [24] for a wider perspective on PCA. Interesting applications of PCA to certain Engineering fields, such as video processing and Big Data, have been provided in [3,12].

    Let $X,Y$ be Banach spaces. Let $T:X\to Y$ be a bounded linear operator. The operator norm of $T$ is given by

    $\|T\| := \sup\{\|T(x)\| : \|x\|=1\}$. (2.3.1)

    The vector space $\mathcal{CL}(X,Y)$ of continuous linear operators from $X$ to $Y$ becomes a Banach space when endowed with the operator norm (2.3.1). In the case $X=Y$, $\mathcal{CL}(X,Y)$ is simply denoted by $\mathcal{CL}(X)$. If $Y=\mathbb{K}$ ($\mathbb{R}$ or $\mathbb{C}$), then $\mathcal{CL}(X,Y)$ is denoted by $X^*$, that is, the dual space of $X$. It is also common to denote $\mathcal{CL}(X,Y)$ by $\mathcal{B}(X,Y)$ and $\mathcal{CL}(X)$ by $\mathcal{B}(X)$.

    The supporting vector notion was formally posed for the first time in [5]. However, this concept can be found implicitly and scattered throughout the literature of Banach Space Theory [1,2,18,19].

    The set of supporting vectors of a bounded linear operator $T:X\to Y$ between Banach spaces $X,Y$ is defined by

    $\mathrm{suppv}(T) := \{x\in S_X : \|T(x)\| = \|T\|\} = \arg\max_{\|x\|=1}\|T(x)\|$. (2.3.2)

    Here, $S_X$ stands for the unit sphere of $X$, and $B_X$ denotes the (closed) unit ball of $X$. In the infinite-dimensional setting, it may occur that (2.3.2) is empty. Note that $\mathrm{suppv}(T)=\mathrm{suppv}(\lambda T)$ for all $\lambda\in\mathbb{K}\setminus\{0\}$, and $\mathrm{suppv}(T)=S_{\mathbb{K}}\,\mathrm{suppv}(T)$, where $\mathbb{K}=\mathbb{R}$ or $\mathbb{C}$. For a topological and geometrical analysis of the above set, we strongly refer the reader to [13,14,23].

    For linear functionals, a special subset of supporting vectors is worth considering. Consider a continuous linear functional $f\in X^*$ in the dual $X^*$ of a Banach space $X$. We define the set of 1-supporting vectors of $f$ by

    $\mathrm{suppv}_1(f) := \{x\in S_X : f(x) = \|f\|\}$. (2.3.3)

    Notice that 1-supporting vectors are particular cases of supporting vectors; in other words, $\mathrm{suppv}_1(f)\subseteq\mathrm{suppv}(f)$. In the upcoming sections, 1-supporting vectors will be relied on heavily. The following remark highlights a standard geometrical property satisfied by 1-supporting vectors.

    Remark 2.1. Consider a Banach space $X$ and a nonzero linear functional $f\in X^*\setminus\{0\}$. For every $x,y\in\mathrm{suppv}_1(f)$ and every $\lambda\in[0,1]$, we have that $\lambda x+(1-\lambda)y\in\mathrm{suppv}_1(f)$, that is, $\mathrm{suppv}_1(f)$ is a convex subset of the unit sphere $S_X$ of $X$.

    A direct consequence of Remark 2.1 is that $\mathrm{suppv}_1(f)$ is either empty or a singleton in strictly convex Banach spaces, like, for instance, Hilbert spaces.

    Representation Theory is one of the most important theories in Mathematics. A major result in Representation Theory is undoubtedly the Riesz Representation Theorem. This is a key result in Functional Analysis and is crucial for working with self-adjoint operators on Hilbert spaces.

    Riesz Representation Theorem. In a Hilbert space $H$, for every $h^*\in H^*$ there exists a unique $h\in H$ satisfying $h^*=(\,\cdot\,|h)$. This assignment between $H^*$ and $H$ is a surjective linear isometry.

    In view of Remark 2.1 and under the settings of the Riesz Representation Theorem, for every $h\in H\setminus\{0\}$, we have that $\mathrm{suppv}_1(h^*)=\left\{\frac{h}{\|h\|}\right\}$, that is, $\frac{h}{\|h\|}$ is the only 1-supporting vector of $h^*$.

    On the other hand, the orthogonal complement of a closed subspace $V$ of a Hilbert space $H$ is denoted by $V^\perp$. The orthogonal projection of $H$ onto $V$ is usually denoted by $p_V$. Observe that $H=V\oplus_2 V^\perp$, that is, for each $h\in H$,

    $h = p_V(h)+p_{V^\perp}(h)$, and $\|h\|^2 = \|p_V(h)\|^2+\|p_{V^\perp}(h)\|^2$. (2.4.1)

    The adjoint of a bounded linear operator $T:H\to K$ between Hilbert spaces $H,K$ is defined as the unique bounded linear operator $T^*:K\to H$ satisfying $(T(h)|k)=(h|T^*(k))$ for each $h\in H$ and each $k\in K$. Basic properties satisfied by the adjoint operator are $\|T^*\|=\|T\|$, $(T^*)^*=T$, $(T+S)^*=T^*+S^*$, $(TS)^*=S^*T^*$ and $(\lambda T)^*=\overline{\lambda}\,T^*$.

    Lemma 2.2. Every bounded linear operator $T:H\to K$ between Hilbert spaces $H,K$ verifies that $\mathrm{cl}(T(H))=\ker(T^*)^\perp$ and $T(H)^\perp=\mathrm{cl}(T(H))^\perp=\ker(T^*)$.

    Proof. First off, it is a trivial observation that $T(H)^\perp=\mathrm{cl}(T(H))^\perp$. Fix an arbitrary $h\in H$. For every $k\in\ker(T^*)$, $(T(h)|k)=(h|T^*(k))=(h|0)=0$. This shows that $T(h)\in\ker(T^*)^\perp$. The arbitrariness of $h\in H$ means that $T(H)\subseteq\ker(T^*)^\perp$. By taking orthogonal complements, we obtain that $\ker(T^*)\subseteq T(H)^\perp$. Conversely, fix an arbitrary $k\in T(H)^\perp$. For every $h\in H$, $0=(T(h)|k)=(h|T^*(k))$. This shows that $T^*(k)=0$, and hence $k\in\ker(T^*)$. The arbitrariness of $k\in T(H)^\perp$ means that $T(H)^\perp\subseteq\ker(T^*)$. By taking orthogonal complements, we finally obtain that $\ker(T^*)^\perp\subseteq\mathrm{cl}(T(H))$.

    A bounded operator $T\in\mathcal{B}(H)$ is said to be self-adjoint provided that $T^*=T$. If $H$ is complex, then $T\in\mathcal{B}(H)$ is self-adjoint if and only if $(T(h)|h)\in\mathbb{R}$ for all $h\in H$. A self-adjoint operator is called positive provided that $(T(h)|h)\geq 0$ for all $h\in H$.

    For every $T\in\mathcal{B}(H)$, $\sigma(T):=\{\lambda\in\mathbb{C} : \lambda I-T\notin\mathcal{U}(\mathcal{B}(H))\}$ is the spectrum of $T$, where $\mathcal{U}(\mathcal{B}(H))$ is the multiplicative group of invertible operators on $H$. Among other spectral properties, the spectrum is compact and nonempty, and $\|T\|\geq\max|\sigma(T)|$. A special subset of the spectrum, called the point spectrum, $\sigma_p(T):=\{\lambda\in\mathbb{C} : \ker(\lambda I-T)\neq\{0\}\}$, whose elements are called the eigenvalues of $T$, will be employed frequently. Note that $\sigma_p(T)\subseteq\sigma(T)$. Furthermore, for each $\lambda\in\sigma_p(T)$, $V_T(\lambda):=\{h\in H : T(h)=\lambda h\}$ stands for the subspace of eigenvectors associated with $\lambda$. In case there is no confusion with $T$, we will simply denote $V_T(\lambda)$ by $V(\lambda)$.

    Suppose next that $\|T\|$ is an eigenvalue of $T$, that is, $\|T\|\in\sigma_p(T)$. In this situation, since $\|T\|\geq\max|\sigma(T)|$, we conclude that $\|T\|$ is the maximum of $|\sigma(T)|$; in other words, $\|T\|=\max|\sigma(T)|$. In this case, we write $\|T\|=\lambda_{\max}(T)$. Observe also that $V(\|T\|)\cap S_H\subseteq\mathrm{suppv}(T)$. Indeed, if $x\in V(\|T\|)\cap S_H$, then $T(x)=\|T\|x$, so $\|T(x)\|=\|T\|$, and hence $x\in\mathrm{suppv}(T)$.

    Nevertheless, in general, $\|T\|\notin\sigma_p(T)$, unless, for instance, $T$ is compact, self-adjoint and positive. This is why we have to rely on the adjoint $T^*$ and on the positive operator $T^*T$. It is straightforward to check that the eigenvalues of a self-adjoint operator are real, and the eigenvalues of a self-adjoint positive operator are nonnegative. When $T$ is compact, it holds that $T^*T$ is compact, self-adjoint and positive.

    The following result, on which we will strongly rely later on, can be found in [6,Theorem 4], which is itself a refinement of [14,Theorem 9].

    Theorem 2.3. Let $H,K$ be Hilbert spaces. Let $T\in\mathcal{B}(H,K)$. Then, $\|T\|^2=\|T^*T\|$, and $\mathrm{suppv}(T)\subseteq\mathrm{suppv}(T^*T)$. Furthermore, $\mathrm{suppv}(T)\neq\varnothing$ if and only if $\|T^*T\|\in\sigma_p(T^*T)$. In this situation, $\|T\|=\sqrt{\lambda_{\max}(T^*T)}$, and $\mathrm{suppv}(T)=V(\lambda_{\max}(T^*T))\cap S_H$.
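    For matrices, Theorem 2.3 can be checked numerically. The following sketch (a minimal illustration assuming NumPy and a random matrix) computes $\|A\|$ as the square root of the largest eigenvalue of $A^tA$ and verifies that the corresponding unit eigenvector is a supporting vector of $A$.

        import numpy as np

        rng = np.random.default_rng(2)
        A = rng.normal(size=(5, 3))                  # an operator from l2^3 to l2^5

        eigvals, eigvecs = np.linalg.eigh(A.T @ A)   # A^t A is symmetric positive semidefinite
        x0 = eigvecs[:, np.argmax(eigvals)]          # unit eigenvector of the largest eigenvalue

        op_norm = np.sqrt(eigvals.max())             # ||A|| = sqrt(lambda_max(A^t A))
        assert np.isclose(op_norm, np.linalg.norm(A, 2))
        assert np.isclose(np.linalg.norm(A @ x0), op_norm)   # x0 is a supporting vector of A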

    In this section, we will state and prove all the novel theorems of this work. This section is divided into four subsections, in which we will deal with supporting vectors, principal components and the topological structure of the subsets of centered and standardized vectors.

    This subsection begins by unveiling the topological structure of the subsets $\mathrm{cen}(\mathbb{R}^n)$ and $\mathrm{stan}(\mathbb{R}^n)$ of centered and standardized vectors, respectively. Notice that $\mathrm{cen}(\mathbb{R})=\{0\}$ and $\mathrm{stan}(\mathbb{R})=\varnothing$.

    Theorem 3.1. If $n\geq 1$, then $\mathrm{cen}(\mathbb{R}^n)$ is linearly isometric, and hence homeomorphic, to $\ell_2^{n-1}$. If $n\geq 2$, then $\mathrm{stan}(\mathbb{R}^n)$ is, up to the scaling factor $\sqrt{n}$, linearly isometric, and hence homeomorphic, to $S_{\ell_2^{n-1}}=S^{n-2}$.

    Proof. Notice that $\overline{x+y}=\overline{x}+\overline{y}$ and $\overline{tx}=t\overline{x}$ for all $x,y\in\mathbb{R}^n$ and all $t\in\mathbb{R}$. As a consequence,

    $\overline{\,\cdot\,}:\mathbb{R}^n\to\mathbb{R},\quad x\mapsto\overline{x}=\frac{1}{n}\sum_{i=1}^n x_i$ (3.1.1)

    is a linear functional (usually called the mean functional). Next, simply observe that $\mathrm{cen}(\mathbb{R}^n)=\ker(\overline{\,\cdot\,})$. Thus, by bearing in mind (2.1.1), we immediately obtain that $\mathrm{stan}(\mathbb{R}^n)=\ker(\overline{\,\cdot\,})\cap\sqrt{n}\,S_{\ell_2^n}=\sqrt{n}\,S_{\ker(\overline{\,\cdot\,})}$. In other words, $\mathrm{stan}(\mathbb{R}^n)$ is precisely a multiple of the unit sphere of the Hilbert subspace $\ker(\overline{\,\cdot\,})$ of $\ell_2^n$. Since $\dim(\ker(\overline{\,\cdot\,}))=n-1$, we have that $\ker(\overline{\,\cdot\,})$ is linearly isometric to $\ell_2^{n-1}$. As a consequence, the unit sphere of $\ker(\overline{\,\cdot\,})$, which is $\frac{1}{\sqrt{n}}\mathrm{stan}(\mathbb{R}^n)$, is linearly isometric to the unit sphere of $\ell_2^{n-1}$, that is, to $S_{\ell_2^{n-1}}=S^{n-2}$.

    The following lemma aims at computing the norm of the mean functional as an element of the dual space of $\ell_2^n$, as well as its only 1-supporting vector.

    Lemma 3.2. In $(\ell_2^n)^*$, $\|\overline{\,\cdot\,}\|=\frac{1}{\sqrt{n}}$ and $\mathrm{suppv}_1(\overline{\,\cdot\,})=\left\{\frac{\mathbf{1}}{\sqrt{n}}\right\}$.

    Proof. Hölder's Inequality ensures that

    $|\overline{x}|=\left|\frac{1}{n}\sum_{i=1}^n x_i\right|\leq\sum_{i=1}^n\frac{1}{n}|x_i|\leq\left(\sum_{i=1}^n\frac{1}{n^2}\right)^{\frac{1}{2}}\left(\sum_{i=1}^n x_i^2\right)^{\frac{1}{2}}=\frac{1}{\sqrt{n}}\|x\|_2$

    for every $x\in\ell_2^n$. This shows that $\|\overline{\,\cdot\,}\|\leq\frac{1}{\sqrt{n}}$. On the other hand, $\|\mathbf{1}\|_2=\sqrt{n}$, that is, $\frac{\mathbf{1}}{\sqrt{n}}\in S_{\ell_2^n}$. Finally,

    $\overline{\left(\frac{\mathbf{1}}{\sqrt{n}}\right)}=\frac{1}{n}\sum_{i=1}^n\frac{1}{\sqrt{n}}=\frac{1}{\sqrt{n}}$.

    As a consequence, $\|\overline{\,\cdot\,}\|=\frac{1}{\sqrt{n}}$ and $\mathrm{suppv}_1(\overline{\,\cdot\,})=\left\{\frac{\mathbf{1}}{\sqrt{n}}\right\}$.
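    A quick numerical sanity check of Lemma 3.2 (a sketch assuming NumPy): the Riesz representative of the mean functional on $\ell_2^n$ has norm $\frac{1}{\sqrt{n}}$, and the functional attains this value at the unit vector $\frac{\mathbf{1}}{\sqrt{n}}$.

        import numpy as np

        n = 6
        riesz_rep = np.full(n, 1.0 / n)               # Riesz representative of the mean functional
        print(np.linalg.norm(riesz_rep))              # 1/sqrt(n), the norm of the functional
        x0 = np.ones(n) / np.sqrt(n)                  # the unique 1-supporting vector
        print(riesz_rep @ x0, 1 / np.sqrt(n))         # both values coincide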

    As a direct consequence of Lemma 3.2, the standard deviation of a vector $x\in\mathbb{R}^n$ can be rewritten, via (2.1.1), as follows:

    $s_x=\frac{1}{\sqrt{n}}\|x-\overline{\mathbf{x}}\|_2=\|\overline{\,\cdot\,}\|\,\|x-\overline{\mathbf{x}}\|_2$. (3.1.2)

    By bearing in mind the Riesz Representation Theorem, Lemma 3.2 assures that if $h:=\frac{\mathbf{1}}{n}$, then $h^*=\overline{\,\cdot\,}$ in $H:=\ell_2^n$. Notice that $h:=\frac{\mathbf{1}}{n}=\left(\frac{1}{n},\dots,\frac{1}{n}\right)$ can be seen as a (finite) convex series. This fact will help us generalize these concepts to an infinite-dimensional separable Hilbert space setting later on in the Discussion.

    Definition 3.3. For $n\geq 1$, the mean operator is defined by

    $\overline{\,\cdot\,}:\mathbb{R}^n\to\mathbb{R}^n,\quad x\mapsto\overline{\mathbf{x}}=(\overline{x},\dots,\overline{x}),$ (3.1.3)

    and the centering operator is defined as

    $\mathrm{cen}:\mathbb{R}^n\to\mathbb{R}^n,\quad x\mapsto\mathrm{cen}(x):=x-\overline{\mathbf{x}}=(x_1-\overline{x},\dots,x_n-\overline{x}).$ (3.1.4)

    For $n\geq 2$, the standardizing operator is defined as

    $\mathrm{stan}:\mathbb{R}^n\setminus\mathbb{R}\mathbf{1}\to\mathbb{R}^n,\quad x\mapsto\mathrm{stan}(x):=\sqrt{n}\,\frac{x-\overline{\mathbf{x}}}{\|x-\overline{\mathbf{x}}\|_2}.$ (3.1.5)

    It is a trivial observation that

    $\mathrm{stan}(x)=\sqrt{n}\,\frac{\mathrm{cen}(x)}{\|\mathrm{cen}(x)\|_2}=\frac{\mathrm{cen}(x)}{\|\overline{\,\cdot\,}\|\,\|\mathrm{cen}(x)\|_2}$ (3.1.6)

    for all $x\in\mathbb{R}^n\setminus\mathbb{R}\mathbf{1}$.

    Recall that a projection on a Banach space $X$ is a continuous, linear, and idempotent map $P:X\to X$. Its dual operator $P^*:X^*\to X^*$ is also a projection. The complementary projection of $P$ is defined as $I_X-P$, which is also a projection. Every nonzero projection has norm greater than or equal to 1. A 1-projection is a projection of norm 1 (also called a contractive projection), and a (1,1)-projection is a 1-projection whose complementary projection is also a 1-projection (also called bicontractive). Orthogonal projections in Hilbert spaces are the most representative examples of bicontractive projections.

    The final result of this first subsection shows that the mean operator and the centering operator are complementary projections of norm 1 with respect to the Euclidean norm, that is, bicontractive. Even more, the mean operator and the centering operator are complementary orthogonal projections. This allows us to obtain the König-Huygens Theorem (3.1.7) as a direct consequence of the Pythagorean Theorem in Hilbert spaces. The König-Huygens Theorem provides the classical decomposition of the 2-norm of a vector of $\mathbb{R}^n$ in terms of the mean and the standard deviation.

    Theorem 3.4. The centering operator is a linear projection on $\mathbb{R}^n$ whose kernel is $\ker(\mathrm{cen})=\mathbb{R}\mathbf{1}$, whose range is $\mathrm{cen}(\mathbb{R}^n)=\ker(\overline{\,\cdot\,})$, and whose complementary projection is the mean operator. Furthermore, if we consider $\mathbb{R}^n$ endowed with the Euclidean norm, then $\|\overline{\,\cdot\,}\|=\|\mathrm{cen}\|=1$, and $\overline{\,\cdot\,}$ and $\mathrm{cen}$ are complementary orthogonal projections. As a consequence, for every $x\in\mathbb{R}^n$,

    $\|x\|_2^2=\|\overline{\mathbf{x}}\|_2^2+\|x-\overline{\mathbf{x}}\|_2^2=n\left(|\overline{x}|^2+s_x^2\right).$ (3.1.7)

    Proof. In the first place, notice that $\overline{\,\cdot\,}$ is clearly linear, since $\overline{\mathbf{x}}=\overline{x}\,\mathbf{1}$; in other words, $\overline{\mathbf{x}}=(\overline{x},\dots,\overline{x})=\overline{x}(1,\dots,1)=\overline{x}\,\mathbf{1}$ for all $x\in\mathbb{R}^n$. As a consequence, $\mathrm{cen}$ is linear as well because $\mathrm{cen}=I_{\mathbb{R}^n}-\overline{\,\cdot\,}$, that is, the difference of two linear operators on $\mathbb{R}^n$. On the other hand, observe that if $x\in\mathbb{R}^n$ is a constant vector, that is, $x=t\mathbf{1}$ for some $t\in\mathbb{R}$, then $\mathrm{cen}(x)=0$. Conversely, if $\mathrm{cen}(x)=0$, then $x=\overline{\mathbf{x}}=\overline{x}\,\mathbf{1}$, and hence $x\in\mathbb{R}\mathbf{1}$ is a constant vector. This shows that $\ker(\mathrm{cen})=\mathbb{R}\mathbf{1}$. From Theorem 3.1, we already know that $\mathrm{cen}(\mathbb{R}^n)=\ker(\overline{\,\cdot\,})$. Next, let us prove that $\mathrm{cen}$ is a projection. Fix an arbitrary $x\in\mathbb{R}^n$. Notice that

    $\mathrm{cen}(\mathrm{cen}(x))=\mathrm{cen}(x-\overline{\mathbf{x}})=\mathrm{cen}(x)-\mathrm{cen}(\overline{\mathbf{x}})=x-\overline{\mathbf{x}}-0=x-\overline{\mathbf{x}}=\mathrm{cen}(x).$

    Since $\overline{\,\cdot\,}=I_{\mathbb{R}^n}-\mathrm{cen}$, we conclude that the mean operator is the complementary projection to the centering operator. Finally, let us compute $\|\overline{\,\cdot\,}\|$ and $\|\mathrm{cen}\|$. In accordance with Lemma 3.2, for every $x\in\mathbb{R}^n$ we have that

    $\|\overline{\mathbf{x}}\|_2=\sqrt{n}\,|\overline{x}|\leq\sqrt{n}\,\frac{1}{\sqrt{n}}\|x\|_2=\|x\|_2,$

    meaning that $\|\overline{\,\cdot\,}\|\leq 1$. Next,

    $\left\|\frac{\mathbf{1}}{\sqrt{n}}\right\|_2=\left\|\left(\frac{1}{\sqrt{n}},\dots,\frac{1}{\sqrt{n}}\right)\right\|_2=\sqrt{n\,\frac{1}{n}}=1.$

    This shows that $\|\overline{\,\cdot\,}\|=1$, since the unit vector $\frac{\mathbf{1}}{\sqrt{n}}$ is fixed by the mean operator. In order to prove that $\overline{\,\cdot\,}$ and $\mathrm{cen}$ are orthogonal projections, it suffices to realize that $\mathbb{R}\mathbf{1}$ and $\mathrm{cen}(\mathbb{R}^n)$ are orthogonal subspaces. Indeed, for every $t\in\mathbb{R}$ and every $x\in\mathbb{R}^n$ with $\overline{x}=0$, we have that $(t\mathbf{1}|x)=t(\mathbf{1}|x)=t\sum_{i=1}^n x_i=tn\overline{x}=0$, and hence $\mathbb{R}\mathbf{1}\perp\mathrm{cen}(\mathbb{R}^n)$, or equivalently, $\mathrm{cen}(\mathbb{R}^n)\subseteq(\mathbb{R}\mathbf{1})^\perp$. Furthermore, the Pythagorean Theorem in $\ell_2^n$ gives that

    $\|t\mathbf{1}+x\|_2^2=\|t\mathbf{1}\|_2^2+\|x\|_2^2$

    for every $t\in\mathbb{R}$ and every $x\in\mathbb{R}^n$ with $\overline{x}=0$. Next, if $y\in(\mathbb{R}\mathbf{1})^\perp$, then $0=\frac{1}{n}(\mathbf{1}|y)=\frac{1}{n}\sum_{i=1}^n y_i=\overline{y}$, meaning that $y\in\ker(\overline{\,\cdot\,})=\mathrm{cen}(\mathbb{R}^n)$. As a consequence, $\mathrm{cen}(\mathbb{R}^n)=(\mathbb{R}\mathbf{1})^\perp$, resulting in

    $\|x\|_2^2=\|\overline{\mathbf{x}}\|_2^2+\|x-\overline{\mathbf{x}}\|_2^2=n\left(|\overline{x}|^2+s_x^2\right)$

    for every $x\in\mathbb{R}^n$. It only remains to show that $\|\mathrm{cen}\|=1$, but this is a direct consequence of the fact that $\mathrm{cen}$ is an orthogonal projection.
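    The content of Theorem 3.4 is easy to verify numerically. The following sketch (assuming NumPy) builds the matrices of the mean and centering operators, checks that they are complementary orthogonal projections, and verifies the König-Huygens decomposition (3.1.7).

        import numpy as np

        rng = np.random.default_rng(3)
        n = 7
        x = rng.normal(size=n)

        mean_op = np.full((n, n), 1.0 / n)       # matrix of the mean operator: x -> (x_bar, ..., x_bar)
        cen_op = np.eye(n) - mean_op             # centering operator, its complementary projection

        # idempotent and symmetric, hence orthogonal projections, and complementary
        assert np.allclose(mean_op @ mean_op, mean_op) and np.allclose(cen_op @ cen_op, cen_op)
        assert np.allclose(mean_op + cen_op, np.eye(n))

        # Koenig-Huygens decomposition (3.1.7): ||x||^2 = n(|x_bar|^2 + s_x^2)
        assert np.isclose(np.linalg.norm(x) ** 2, n * (x.mean() ** 2 + x.var()))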

    In this subsection, we will provide sufficient conditions for the supporting vectors to coincide with the first principal component. First, we will need some definitions.

    Definition 3.5. A matrix is said to be centered (standardized) provided that all of its column vectors are centered (standardized) vectors.

    The following lemma displays a simple characterization of centered matrices.

    Lemma 3.6. Let $A\in\mathcal{M}_{m\times n}(\mathbb{R})$. The following conditions are equivalent:

    1) $A$ is centered.

    2) $Ax$ is centered for all $x\in\mathbb{R}^n$.

    Proof. Let $\{e_1,\dots,e_n\}$ denote the canonical basis of $\mathbb{R}^n$. Notice that the columns of $A$ are precisely $Ae_j$ for $j=1,\dots,n$. Suppose first that $A$ is centered. Fix an arbitrary $x\in\mathbb{R}^n$. The linearity of the mean functional gives that

    $\overline{Ax}=\overline{\sum_{j=1}^n x_jAe_j}=\sum_{j=1}^n x_j\overline{Ae_j}=0.$

    As a consequence, $Ax$ is centered for all $x\in\mathbb{R}^n$. Conversely, suppose now that $Ax$ is centered for all $x\in\mathbb{R}^n$. In particular, $\overline{Ae_j}=0$ for $j=1,\dots,n$, meaning that the columns of $A$ are centered vectors, that is, $A$ is centered.

    The following characterization of centered matrices is a bit more sophisticated. First, a technical lemma is needed.

    Lemma 3.7. Consider $x,y\in\mathbb{R}^m$. Then,

    1) $m\,s_{x,y}=x\cdot y$ if and only if either $x$ or $y$ is centered.

    2) $x$ is standardized if and only if $x$ is centered and $x\cdot x=m$.

    Proof.

    1) Let us observe that

    $m\,s_{x,y}=\sum_{i=1}^m(x_i-\overline{x})(y_i-\overline{y})=\sum_{i=1}^m x_iy_i-\sum_{i=1}^m\overline{x}y_i-\sum_{i=1}^m x_i\overline{y}+\sum_{i=1}^m\overline{x}\,\overline{y}=\sum_{i=1}^m x_iy_i-\overline{x}\sum_{i=1}^m y_i-\overline{y}\sum_{i=1}^m x_i+\overline{x}\,\overline{y}\sum_{i=1}^m 1=\sum_{i=1}^m x_iy_i-\overline{x}\,m\overline{y}-\overline{y}\,m\overline{x}+m\overline{x}\,\overline{y}=x\cdot y-m\overline{x}\,\overline{y}.$

    As a consequence, $m\,s_{x,y}=x\cdot y$ if and only if either $x$ or $y$ is centered.

    2) By definition, $x$ is standardized if and only if $x$ is centered and $s_x=1$. By (2.1.1), $\|x-\overline{\mathbf{x}}\|_2=\sqrt{m}\,s_x$. For centered vectors, the previous expression becomes $x\cdot x=\|x\|_2^2=m\,s_x^2$. Therefore, $x$ is standardized if and only if $x$ is centered and $x\cdot x=m$.

    Recall that the covariance matrix of a given matrix $A\in\mathcal{M}_{m\times n}$ is defined by $s_{a_1,\dots,a_n}:=(s_{a_i,a_j})_{i,j=1,\dots,n}$, where $a_1,\dots,a_n$ stand for the column vectors of $A$. Also, recall that if $B$ is a square matrix, then $\mathrm{diag}(B)$ stands for the diagonal of $B$.

    Proposition 3.8. Let $A\in\mathcal{M}_{m\times n}(\mathbb{R})$. The following conditions are equivalent:

    1) $A$ is centered.

    2) $A^tA=m\,s_{a_1,\dots,a_n}$.

    3) $\mathrm{diag}(A^tA)=\mathrm{diag}(m\,s_{a_1,\dots,a_n})$.

    Proof. Notice that $A^tA=(a_i\cdot a_j)_{i,j=1,\dots,n}$. Suppose first that $A$ is centered. In view of Lemma 3.7, $m\,s_{a_i,a_j}=a_i\cdot a_j$ for all $i,j\in\{1,\dots,n\}$, meaning that $A^tA=m\,s_{a_1,\dots,a_n}$. Next, if $A^tA=m\,s_{a_1,\dots,a_n}$, then we trivially have that $\mathrm{diag}(A^tA)=\mathrm{diag}(m\,s_{a_1,\dots,a_n})$. Finally, assume that $\mathrm{diag}(A^tA)=\mathrm{diag}(m\,s_{a_1,\dots,a_n})$. Then, $m\,s_{a_i,a_i}=a_i\cdot a_i$ for all $i=1,\dots,n$. In accordance with Lemma 3.7, $a_i$ is centered for all $i=1,\dots,n$, meaning that $A$ is centered.

    As an immediate consequence of Lemma 3.7 and Proposition 3.8, we obtain the following corollary.

    Corollary 3.9. Let $A\in\mathcal{M}_{m\times n}(\mathbb{R})$. The following conditions are equivalent:

    1) $A$ is standardized.

    2) $A^tA=m\,s_{a_1,\dots,a_n}$ and $\mathrm{diag}(A^tA)=(m,\dots,m)$.

    Finally, we have gathered all the necessary tools to prove our main results of this subsection. Simply keep in mind the small observation that if $B\in\mathcal{M}_n(\mathbb{R})$, then $\sigma_p(\alpha B)=\alpha\,\sigma_p(B)$ and $V_{\alpha B}(\alpha\lambda)=V_B(\lambda)$ for all $\alpha\in\mathbb{R}\setminus\{0\}$ and all $\lambda\in\sigma_p(B)$.

    Theorem 3.10. Let $A\in\mathcal{M}_{m\times n}(\mathbb{R})$. If $A$ is centered, then

    $\mathrm{suppv}(A)=\{x\in S_{\ell_2^n} : Ax \text{ is the first principal component of } A\},$

    where $A$ is seen as a linear operator

    $A:\ell_2^n\to\ell_2^m,\quad x\mapsto Ax.$ (3.2.1)

    Proof. According to Theorem 2.3, $\mathrm{suppv}(A)=V(\lambda_{\max}(A^*A))\cap S_{\ell_2^n}$. Notice that the adjoint $A^*$ of $A$ coincides with its transpose $A^t$. On the other hand, since $A$ is centered, Proposition 3.8 gives that $A^tA=m\,s_{a_1,\dots,a_n}$. Finally,

    $\mathrm{suppv}(A)=V_{A^*A}(\lambda_{\max}(A^*A))\cap S_{\ell_2^n}=V_{A^tA}(\lambda_{\max}(A^tA))\cap S_{\ell_2^n}=V_{m\,s_{a_1,\dots,a_n}}(\lambda_{\max}(m\,s_{a_1,\dots,a_n}))\cap S_{\ell_2^n}=V_{m\,s_{a_1,\dots,a_n}}(m\,\lambda_{\max}(s_{a_1,\dots,a_n}))\cap S_{\ell_2^n}=V_{s_{a_1,\dots,a_n}}(\lambda_{\max}(s_{a_1,\dots,a_n}))\cap S_{\ell_2^n}=\{x\in S_{\ell_2^n} : Ax \text{ is the first principal component of } A\}.$
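    The following sketch (assuming NumPy and synthetic centered data) illustrates Theorem 3.10: for a centered matrix, the supporting vector of $A$, computed as the right singular vector of its largest singular value, coincides up to sign with the top eigenvector of the covariance matrix of the columns of $A$, so that $Ax$ is the first principal component.

        import numpy as np

        rng = np.random.default_rng(4)
        A = rng.normal(size=(50, 4))
        A = A - A.mean(axis=0)                       # center the columns

        # supporting vector of A: right singular vector of the largest singular value
        _, _, Vt = np.linalg.svd(A, full_matrices=False)
        supporting_vector = Vt[0]

        # top eigenvector of the covariance matrix of the columns of A
        C = (A.T @ A) / A.shape[0]
        eigvals, eigvecs = np.linalg.eigh(C)
        top_eigvec = eigvecs[:, np.argmax(eigvals)]

        # they agree up to sign, so A @ supporting_vector is the first principal component
        assert np.allclose(np.abs(supporting_vector), np.abs(top_eigvec))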

    We will conclude this subsection with an example of a centered matrix $A$ whose last principal component arises from an eigenspace of dimension at least two.

    Remark 3.11. Let $T:H\to K$ be a bounded linear operator between Hilbert spaces $H,K$. Then, $\ker(T)\subseteq\ker(T^*T)$. As a consequence, if $0\in\sigma_p(T)$, then $0\in\sigma_p(T^*T)$.

    Notice that if $T\in\mathcal{B}(H)$ is a self-adjoint positive operator on a Hilbert space $H$ such that $\ker(T)\neq\{0\}$, then $0$ is the minimum of $\sigma_p(T)$, since all the eigenvalues of $T$ are nonnegative.

    Example 3.12. Let $A\in\mathcal{M}_{m\times n}(\mathbb{R})$ be any centered matrix with three equal columns. Then, $\ker(A)\subseteq\ker(A^tA)=\ker(s_{a_1,\dots,a_n})$, since $A^tA=m\,s_{a_1,\dots,a_n}$ in accordance with Proposition 3.8. Notice that $\ker(A)$ has dimension at least 2. Observe that $0\in\sigma_p(s_{a_1,\dots,a_n})$. As a consequence, the last principal component of $A$ corresponds to $Ax_1,\dots,Ax_p$, where $\{x_1,\dots,x_p\}$ is an orthonormal basis of $V_{s_{a_1,\dots,a_n}}(0)$.

    This subsection is aimed at showing that the second principal component of a centered matrix can be obtained via a supporting vector of a matrix derived from the original one.

    Theorem 3.13. Let $H,K$ be Hilbert spaces and $T:H\to K$ be a continuous linear operator. Let $H_1$ be a closed subspace of $H$. Then,

    1) $\|T|_{H_1}\|=\|Tp_{H_1}\|$.

    2) $\mathrm{suppv}(T|_{H_1})=\mathrm{suppv}(Tp_{H_1})$.

    3) $(T|_{H_1})^*=p_{H_1}\circ T^*$.

    4) If $H_1$ is an invariant subspace of $T^*T$, then $(T|_{H_1})^*\,T|_{H_1}=(T^*T)|_{H_1}$.

    Proof.

    1) In the first place, $p_{H_1}$ is an orthogonal projection. Thus, $\|p_{H_1}\|=1$, and therefore $\|Tp_{H_1}\|=\|T|_{H_1}p_{H_1}\|\leq\|T|_{H_1}\|\,\|p_{H_1}\|=\|T|_{H_1}\|$. On the other hand, for every $h_1\in B_{H_1}$, $\|T|_{H_1}(h_1)\|=\|T(h_1)\|=\|T(p_{H_1}(h_1))\|=\|(Tp_{H_1})(h_1)\|\leq\|Tp_{H_1}\|$. As a consequence, $\|T|_{H_1}\|\leq\|Tp_{H_1}\|$, meaning that $\|T|_{H_1}\|=\|Tp_{H_1}\|$.

    2) Fix an arbitrary $h_1\in\mathrm{suppv}(T|_{H_1})$. Then, $\|(Tp_{H_1})(h_1)\|=\|T|_{H_1}(h_1)\|=\|T|_{H_1}\|=\|Tp_{H_1}\|$, meaning that $h_1\in\mathrm{suppv}(Tp_{H_1})$. Conversely, take $h\in\mathrm{suppv}(Tp_{H_1})$. We will show first that $p_{H_1}(h)\in\mathrm{suppv}(T|_{H_1})$. Indeed, $\|T|_{H_1}(p_{H_1}(h))\|=\|(Tp_{H_1})(h)\|=\|Tp_{H_1}\|=\|T|_{H_1}\|$. Since $\|p_{H_1}(h)\|\leq\|h\|=1$, we conclude that $\|p_{H_1}(h)\|=1$, and hence $p_{H_1}(h)\in\mathrm{suppv}(T|_{H_1})$. Now, notice that

    $1=\|h\|^2=\|p_{H_1}(h)\|^2+\|p_{H_1^\perp}(h)\|^2=1+\|p_{H_1^\perp}(h)\|^2,$

    meaning that $p_{H_1^\perp}(h)=0$ and $h=p_{H_1}(h)\in\mathrm{suppv}(T|_{H_1})$.

    3) Fix arbitrary elements $h_1\in H_1$ and $k\in K$. Notice that

    $(h_1|(p_{H_1}\circ T^*)(k))=(h_1|p_{H_1}(T^*(k)))=(h_1|p_{H_1}(T^*(k)))+(h_1|p_{H_1^\perp}(T^*(k)))=(h_1|p_{H_1}(T^*(k))+p_{H_1^\perp}(T^*(k)))=(h_1|T^*(k))=(T(h_1)|k)=(T|_{H_1}(h_1)|k).$

    By the uniqueness of the adjoint, we conclude that $(T|_{H_1})^*=p_{H_1}\circ T^*$.

    4) Finally, let us show that $(T|_{H_1})^*\,T|_{H_1}=(T^*T)|_{H_1}$ whenever $H_1$ is an invariant subspace of $T^*T$. Indeed, fix an arbitrary $h_1\in H_1$. Then,

    $(T^*T)|_{H_1}(h_1)=T^*(T|_{H_1}(h_1))=p_{H_1}(T^*(T|_{H_1}(h_1)))=((p_{H_1}\circ T^*)\,T|_{H_1})(h_1)=((T|_{H_1})^*\,T|_{H_1})(h_1).$

    If $X$ is a normed space, $T:X\to X$ is a continuous linear operator, and $\lambda\in\sigma_p(T)$, then it is clear that $T(V(\lambda))\subseteq V(\lambda)$. The following lemma is also well known, yet we include the proof for the sake of completeness.

    Lemma 3.14. Let $H$ be a Hilbert space, and let $T:H\to H$ be a self-adjoint continuous linear operator. Let $\lambda\in\sigma_p(T)$. Then,

    1) $T(V(\lambda)^\perp)\subseteq V(\lambda)^\perp$.

    2) $\sigma_p(T|_{V(\lambda)^\perp})=\sigma_p(T)\setminus\{\lambda\}$.

    Proof.

    1) Fix arbitrary elements $w\in V(\lambda)^\perp$ and $v\in V(\lambda)$. Observe that

    $(T(w)|v)=(w|T(v))=(w|\lambda v)=\overline{\lambda}(w|v)=0.$

    This shows that $T(w)\in V(\lambda)^\perp$. The arbitrariness of $w\in V(\lambda)^\perp$ serves to assure that $T(V(\lambda)^\perp)\subseteq V(\lambda)^\perp$.

    2) Take any $\gamma\in\sigma_p(T|_{V(\lambda)^\perp})$. There exists $w\in V(\lambda)^\perp\setminus\{0\}$ such that $T(w)=\gamma w$. It is clear that $\gamma\in\sigma_p(T)$. If $\gamma=\lambda$, then $w\in V(\lambda)$, meaning the contradiction that $w=0$. Conversely, take any $\gamma\in\sigma_p(T)\setminus\{\lambda\}$. There exists $w\in H\setminus\{0\}$ such that $T(w)=\gamma w$. It suffices to show that $w\in V(\lambda)^\perp$. Indeed, since $\lambda\neq\gamma$, either $\lambda$ or $\gamma$ is not $0$. We can assume without any loss of generality that $\gamma\neq 0$. Then, for every $v\in V(\lambda)$, we have that

    $(w|v)=\frac{1}{\gamma}(\gamma w|v)=\frac{1}{\gamma}(T(w)|v)=\frac{1}{\gamma}(w|T(v))=\frac{1}{\gamma}(w|\lambda v)=\frac{\lambda}{\gamma}(w|v).$

    Since $\lambda\neq\gamma$, the only possibility is that $(w|v)=0$. As a consequence, $w\in V(\lambda)^\perp$.

    Theorem 3.15. Let $H,K$ be Hilbert spaces and $T:H\to K$ be a compact linear operator. If $\lambda\in\sigma_p(T^*T)$ is the second largest eigenvalue of $T^*T$, then $\lambda=\left\|T|_{V_{T^*T}(\|T\|^2)^\perp}\right\|^2$ and

    $V_{T^*T}(\lambda)\cap S_H=\mathrm{suppv}\left(T|_{V_{T^*T}(\|T\|^2)^\perp}\right)=\mathrm{suppv}\left(Tp_{V_{T^*T}(\|T\|^2)^\perp}\right).$

    Proof. In the first place, $T^*T:H\to H$ is self-adjoint, positive, and compact. According to Theorem 2.3, $\|T\|^2=\|T^*T\|\in\sigma_p(T^*T)$, and $\|T\|^2$ is the largest eigenvalue of $T^*T$. By applying Lemma 3.14, we have that $(T^*T)\left(V_{T^*T}(\|T\|^2)^\perp\right)\subseteq V_{T^*T}(\|T\|^2)^\perp$ and

    $\sigma_p\left((T^*T)|_{V_{T^*T}(\|T\|^2)^\perp}\right)=\sigma_p(T^*T)\setminus\{\|T\|^2\}.$

    As a consequence, $\lambda$ is the largest eigenvalue of $(T^*T)|_{V_{T^*T}(\|T\|^2)^\perp}$. Next, since $V_{T^*T}(\|T\|^2)^\perp$ is an invariant subspace of $T^*T$, Theorem 3.13(4) gives that

    $(T^*T)|_{V_{T^*T}(\|T\|^2)^\perp}=\left(T|_{V_{T^*T}(\|T\|^2)^\perp}\right)^*T|_{V_{T^*T}(\|T\|^2)^\perp}.$

    This means that $\lambda$ is the largest eigenvalue of $\left(T|_{V_{T^*T}(\|T\|^2)^\perp}\right)^*T|_{V_{T^*T}(\|T\|^2)^\perp}$. By relying again on Theorem 2.3 and on the fact that $T|_{V_{T^*T}(\|T\|^2)^\perp}$ is also compact, we obtain that $\lambda=\left\|T|_{V_{T^*T}(\|T\|^2)^\perp}\right\|^2$ and

    $V_{T^*T}(\lambda)\cap S_H=\mathrm{suppv}\left(T|_{V_{T^*T}(\|T\|^2)^\perp}\right).$

    Finally, Theorem 3.13(2) assures that

    $V_{T^*T}(\lambda)\cap S_H=\mathrm{suppv}\left(T|_{V_{T^*T}(\|T\|^2)^\perp}\right)=\mathrm{suppv}\left(Tp_{V_{T^*T}(\|T\|^2)^\perp}\right).$

    Let $X$ be a Banach space and $M\subseteq X$ be a closed subspace. The quotient space of $X$ by $M$ is defined by $X/M:=\{x+M : x\in X\}$ and becomes a Banach space endowed with the distance-to-$M$ norm $\|x+M\|:=d(x,M):=\inf\{\|x-m\| : m\in M\}$. The canonical projection of $X$ onto $X/M$ is given by

    $\pi_M:X\to X/M,\quad x\mapsto\pi_M(x):=x+M,$ (3.4.1)

    and it is clearly a linear operator of norm $\|\pi_M\|=1$. The closed subspace $M$ is called proximinal provided that for every $x\in X$, the distance from $x$ to $M$ is attained at some $m_0\in M$, that is, $d(x,M)=\|x-m_0\|$.

    Let $X,Y$ be Banach spaces and $T:X\to Y$ be a continuous linear operator. Let $M\subseteq\ker(T)$ be a closed subspace. The quotient operator of $T$,

    $T_M:X/M\to Y,\quad x+M\mapsto T_M(x+M):=T(x),$ (3.4.2)

    is a well-defined continuous linear operator. Notice that when $M$ is chosen to be $\ker(T)$, we obtain the First Isomorphism Theorem. The following theorem relates the supporting vectors of $T$ with those of $T_M$.

    Theorem 3.16. Let $X,Y$ be Banach spaces and $T:X\to Y$ be a continuous linear operator. Let $M\subseteq\ker(T)$ be a closed subspace. Then,

    1) $\|T_M\|=\|T\|$.

    2) $\pi_M(\mathrm{suppv}(T))\subseteq\mathrm{suppv}(T_M)$.

    3) If $M$ is proximinal, then $\pi_M(\mathrm{suppv}(T))=\mathrm{suppv}(T_M)$.

    Proof.

    1) Indeed, if $\|x\|\leq 1$, then $\|x+M\|:=d(x,M)\leq\|x-0\|\leq 1$. Therefore, $\|T(x)\|=\|T_M(x+M)\|\leq\|T_M\|$. This shows that $\|T\|\leq\|T_M\|$. Conversely, if $\|x+M\|=1$, then we can find a sequence $(m_k)_{k\in\mathbb{N}}\subseteq M$ such that $\|x-m_k\|\to d(x,M)=\|x+M\|=1$ as $k\to\infty$. Notice that $\left\|T\left(\frac{x-m_k}{\|x-m_k\|}\right)\right\|\leq\|T\|$ for all $k\in\mathbb{N}$, that is, $\|T_M(x+M)\|=\|T(x)\|=\|T(x-m_k)\|\leq\|T\|\,\|x-m_k\|$ for all $k\in\mathbb{N}$. Since $\|x-m_k\|\to 1$ as $k\to\infty$, we conclude that $\|T_M(x+M)\|\leq\|T\|$. The arbitrariness of $x+M\in S_{X/M}$ assures that $\|T_M\|\leq\|T\|$.

    2) Fix an arbitrary $x\in\mathrm{suppv}(T)$. Notice that $\|x\|=1$, so $\|x+M\|=d(x,M)\leq\|x-0\|=\|x\|=1$. This shows that $x+M\in B_{X/M}$. Next, $\|T_M(x+M)\|=\|T(x)\|=\|T\|=\|T_M\|$, which forces $\|x+M\|=1$. As a consequence, $x+M\in\mathrm{suppv}(T_M)$.

    3) Fix an arbitrary $x+M\in\mathrm{suppv}(T_M)$. Since $M$ is proximinal, there exists $m_0\in M$ such that $1=\|x+M\|=d(x,M)=\|x-m_0\|$. We will show that $x-m_0\in\mathrm{suppv}(T)$. Indeed, $\|x-m_0\|=1$, and $\|T(x-m_0)\|=\|T(x)\|=\|T_M(x+M)\|=\|T_M\|=\|T\|$. Finally, $\pi_M(x-m_0)=x+M$.

    In this section, we apply our SVA/PCA results to a real-life problem: the distribution of political sensitivities together with several socioeconomic variables. For that purpose, we have used the results of the 2018 Andalusian (Spain) regional elections. The data, provided by the Institute of Statistics and Cartography of Andalusia [17], combine information from 153 municipalities with more than ten thousand inhabitants and 8 measured variables: the unemployment rate, the aging rate and 6 generalist policy options ranging from the left to the right wing. Note that the variables (columns) were previously centered.

    A Graphical User Interface (GUI) has been developed in Python to compute the eigenvectors, including the supporting vector (first eigenvector), and the principal components. Figure 1 shows the GUI of the PCA input, where the data matrix is introduced. It is important to note that the input data can be imported from a CSV file, an option that is very useful when working with large matrices. The results obtained are shown in Figure 2 and can also be exported as a CSV file.

    Figure 1.  GUI of the PCA input for a matrix $A\in\mathcal{M}_{153\times 8}(\mathbb{R})$.
    Figure 2.  GUI of the PCA output for a matrix $A\in\mathcal{M}_{153\times 8}(\mathbb{R})$.

    First of all, the data matrix $A\in\mathcal{M}_{153\times 8}(\mathbb{R})$ is centered at the origin by subtracting the mean of each column. Then, we calculate the covariance matrix of the centered matrix. Afterwards, we compute its eigenvalues and eigenvectors. In particular, as shown in this paper, the eigenvector associated with the maximum eigenvalue is the supporting vector. The eigenvalues and eigenvectors are sorted in descending order, so the first element corresponds to the supporting vector and the first principal component. Finally, the products of the (centered) data matrix with the eigenvectors are computed, obtaining the principal components. A minimal sketch of this pipeline is shown below.
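    The following sketch (assuming NumPy; the file name is hypothetical) reproduces this pipeline: centering, covariance matrix, eigendecomposition sorted in descending order, and projection onto the eigenvectors.

        import numpy as np

        # hypothetical CSV file containing the 153 x 8 municipal data described above
        A = np.loadtxt("andalusia_2018.csv", delimiter=",")

        A_centered = A - A.mean(axis=0)                   # center each column
        C = (A_centered.T @ A_centered) / A.shape[0]      # covariance matrix of the columns

        eigvals, eigvecs = np.linalg.eigh(C)
        order = np.argsort(eigvals)[::-1]                 # descending order of eigenvalues
        eigvals, eigvecs = eigvals[order], eigvecs[:, order]

        supporting_vector = eigvecs[:, 0]                 # first eigenvector (Theorem 3.10)
        principal_components = A_centered @ eigvecs       # column k is the (k+1)-th principal component
        np.savetxt("principal_components.csv", principal_components, delimiter=",")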

    In particular, focusing on the example of the Andalusian elections, the results show that the first principal component corresponds to the classic political sensitivities associated with aging municipalities. On the other hand, the second principal component is related to the differentiation between left-wing and right-wing political options.

    In Figure 3, the 8 variables used in the example are represented in the eigenvector reference system given by the supporting vector and the second eigenvector. The representation of the 153 municipalities in the principal component reference system ($Ax$) is shown in Figure 4. This 2D plot of the first and second principal components groups the municipalities according to their political sensitivities and demographic characteristics.

    Figure 3.  Variables represented in the eigenvector reference system with the supporting vector and the second eigenvector.
    Figure 4.  Municipalities represented in the principal component reference system (first and second PC).

    We will discuss how to transport the mean functional, the mean operator and the centering operator to abstract settings, such as Hilbert spaces, Banach algebras and probability spaces.

    As mentioned in Lemma 3.2, in view of the Riesz Representation Theorem, if $h_0:=\frac{\mathbf{1}}{n}=\left(\frac{1}{n},\dots,\frac{1}{n}\right)$, then $h_0^*=\overline{\,\cdot\,}$ is precisely the mean functional in $H:=\ell_2^n$. Actually, $\|h_0\|_2=\frac{1}{\sqrt{n}}=\|h_0^*\|$. At this stage, the key is to realize that $h_0$ can be seen as a (finite) convex series. Recall that a convex series is a convergent series $\sum_{n=1}^\infty t_n$ such that $\sum_{n=1}^\infty t_n=1$ and $t_n\geq 0$ for all $n\in\mathbb{N}$. Notice that if $\sum_{n=1}^\infty t_n$ is a convex series, then $\sum_{n=1}^\infty t_n^2\leq\sum_{n=1}^\infty t_n=1$, and hence $(t_n)_{n\in\mathbb{N}}\in\ell_2$. Now, we are in the right position to define the notions of mean and standard deviation on separable Hilbert spaces.

    Let $H$ be a separable Hilbert space with an orthonormal basis $(e_n)_{n\in\mathbb{N}}$. Fix a convex series $\sum_{n=1}^\infty t_n$. Let $h\in H$ and write $h=\sum_{n=1}^\infty(h|e_n)e_n$. The mean of $h$, with respect to $(t_n)_{n\in\mathbb{N}}$, is defined as

    $\overline{h}:=\sum_{n=1}^\infty t_n(h|e_n).$ (4.1.1)

    The mean functional, with respect to $(t_n)_{n\in\mathbb{N}}$, is given by

    $\overline{\,\cdot\,}:H\to\mathbb{K},\quad h\mapsto\overline{h}:=\sum_{n=1}^\infty t_n(h|e_n).$ (4.1.2)

    By relying on Hölder's Inequality, it is not hard to check that the mean functional is an element of $H^*$ whose norm is precisely $\sqrt{\sum_{n=1}^\infty t_n^2}=\|(t_n)_{n\in\mathbb{N}}\|_2$. In accordance with the Riesz Representation Theorem, there exists $h_0\in H$ such that $(h|h_0)=\overline{h}$ for all $h\in H$. If we write $h=\sum_{n=1}^\infty(h|e_n)e_n$, then we obtain that

    $\sum_{n=1}^\infty(h|e_n)(e_n|h_0)=\left(\sum_{n=1}^\infty(h|e_n)e_n\,\Big|\,h_0\right)=\sum_{n=1}^\infty t_n(h|e_n).$ (4.1.3)

    By taking $h=e_n$ for every $n\in\mathbb{N}$ in (4.1.3), we conclude that $(e_n|h_0)=t_n$ for every $n\in\mathbb{N}$. In particular, $h_0=\sum_{n=1}^\infty t_ne_n$. As expected,

    $\|h_0\|=\|(t_n)_{n\in\mathbb{N}}\|_2=\sqrt{\sum_{n=1}^\infty t_n^2}=\|\overline{\,\cdot\,}\|.$

    In view of the Riesz Representation Theorem, $h_0^*=(\,\cdot\,|h_0)=\overline{\,\cdot\,}$. Notice that, in order to conclude that $h_0^*=(\,\cdot\,|h_0)=\overline{\,\cdot\,}$, the Riesz Representation Theorem is not really needed. Let us get back for a second to $\ell_2^n$. For every $x\in\ell_2^n$,

    $\overline{\mathbf{x}}=(\overline{x},\dots,\overline{x})=\overline{x}\,\mathbf{1}=\frac{\overline{x}}{\left\|\frac{\mathbf{1}}{n}\right\|_2}\,\frac{\frac{\mathbf{1}}{n}}{\left\|\frac{\mathbf{1}}{n}\right\|_2}=\frac{\left(\frac{\mathbf{1}}{n}\right)^*(x)}{\left\|\frac{\mathbf{1}}{n}\right\|_2}\,\frac{\frac{\mathbf{1}}{n}}{\left\|\frac{\mathbf{1}}{n}\right\|_2}.$

    Then, going back to a general separable Hilbert space $H$, the mean operator is defined as

    $H\to H,\quad h\mapsto\overline{h}:=\frac{h_0^*(h)}{\|h_0\|}\,\frac{h_0}{\|h_0\|}.$ (4.1.4)

    It is not hard to check that the mean operator is an orthogonal projection on $H$ whose complementary projection is, precisely, the centering operator:

    $H\to H,\quad h\mapsto\mathrm{cen}(h):=h-\overline{h}.$ (4.1.5)
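    As an illustration of this construction, the sketch below (assuming NumPy and a finite-dimensional stand-in for a separable Hilbert space, with hypothetical weights) implements the weighted mean functional (4.1.1), the mean operator (4.1.4) and the centering operator (4.1.5), and checks that the mean operator is indeed an orthogonal projection.

        import numpy as np

        t = np.array([0.5, 0.25, 0.15, 0.1])      # a convex series: nonnegative weights summing to 1
        h0 = t                                     # Riesz representative of the weighted mean functional

        def mean_functional(h):
            return float(h @ h0)                   # (4.1.1): weighted mean of the coordinates of h

        def mean_operator(h):
            return (mean_functional(h) / np.dot(h0, h0)) * h0   # (4.1.4): projection onto span(h0)

        def centering_operator(h):
            return h - mean_operator(h)            # (4.1.5): complementary projection

        h = np.array([1.0, -2.0, 3.0, 0.5])
        assert np.allclose(mean_operator(mean_operator(h)), mean_operator(h))    # idempotent
        assert np.isclose(np.dot(mean_operator(h), centering_operator(h)), 0.0)  # orthogonal ranges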

    A Banach algebra is a real or complex algebra $A$ endowed with a complete vector norm that is also a ring norm, that is, $\|ab\|\leq\|a\|\,\|b\|$ for all $a,b\in A$. We say that $A$ is unital provided that it has a unit $1\in A$ with $\|1\|=1$. In this situation, according to the Hahn-Banach Theorem, there exists a continuous linear functional, which we will denote by $1^*\in A^*$, such that $\|1^*\|=1$ and $1^*(1)=1$. Then, the mean functional is precisely $1^*$, that is, it is defined by

    $A\to\mathbb{K},\quad a\mapsto\overline{a}:=1^*(a).$ (4.2.1)

    The mean operator is defined as

    $A\to A,\quad a\mapsto\overline{\mathbf{a}}:=1^*(a)\,1.$ (4.2.2)

    Finally, the centering operator is defined as

    $A\to A,\quad a\mapsto\mathrm{cen}(a):=a-\overline{\mathbf{a}}.$ (4.2.3)

    Following (3.1.2), the variance of an element $a\in A$ can be defined by

    $s_a^2:=1^*\left((a-\overline{\mathbf{a}})^2\right)=1^*(a^2)-1^*(a)^2.$ (4.2.4)

    Notice that this generalization to unital Banach algebras presents a weakness: the existence of the functional $1^*\in A^*$ is guaranteed by the Hahn-Banach Theorem, but its uniqueness is not. In fact, in many unital Banach algebras, such as $\ell_\infty(\Lambda)$ for instance, $1$ is not a smooth point of $B_{\ell_\infty(\Lambda)}$ [16, Theorem 2.9], and thus there are infinitely many functionals of norm 1 attaining their norm at 1. It does not seem trivial to overcome this issue. One option might be to renorm the unital Banach algebra equivalently in such a way that $1$ becomes a smooth point of the new unit ball of the algebra or, at least, to find another smooth point in the unit ball. According to [16, Theorem 2.1], the canonical unit vector $e_\lambda$ is a smooth point of $B_{\ell_\infty(\Lambda)}$ for each $\lambda\in\Lambda$. Another possibility may rely on constructing the mean functional in a $C^*$-algebra. Recall that a $C^*$-algebra is a Banach algebra $A$ endowed with an antimultiplicative and antilinear involution $*:A\to A$ satisfying $\|a^*a\|=\|a\|^2$ for all $a\in A$.

    A probability space is a triple $(\Omega,\Sigma,P)$ where $(\Omega,\Sigma)$ is a measurable space and $P:\Sigma\to[0,1]$ is a probability measure, that is, a countably additive positive measure such that $P(\Omega)=1$. If $X$ is a Banach space, by $L_1((\Omega,\Sigma,P),X)$ we denote the Banach space of all absolutely integrable functions, that is,

    $L_1((\Omega,\Sigma,P),X):=\left\{f\in X^\Omega : f \text{ is measurable and } \int_\Omega\|f\|\,dP<\infty\right\},$

    endowed with the norm

    $\|f\|_1:=\int_\Omega\|f\|\,dP.$

    For each $f\in L_1((\Omega,\Sigma,P),X)$, the mean of $f$ is defined as

    $\mu(f):=\int_\Omega f\,dP.$ (4.3.1)

    The mean functional is given by

    $\mu:L_1((\Omega,\Sigma,P),X)\to X,\quad f\mapsto\mu(f):=\int_\Omega f\,dP.$ (4.3.2)

    Notice that the mean functional is actually an operator, but we keep calling it a functional so as not to confuse it with the mean operator, which will be defined next. It can easily be shown that the mean functional has norm equal to 1. Indeed,

    $\|\mu(f)\|=\left\|\int_\Omega f\,dP\right\|\leq\int_\Omega\|f\|\,dP=\|f\|_1$

    for every $f\in L_1((\Omega,\Sigma,P),X)$, meaning that $\|\mu\|\leq 1$. Now, if we choose any $x\in S_X$, then $x\chi_\Omega\in S_{L_1((\Omega,\Sigma,P),X)}$, and

    $\mu(x\chi_\Omega)=\int_\Omega x\chi_\Omega\,dP=xP(\Omega)=x.$

    Hence,

    $\|\mu(x\chi_\Omega)\|=\left\|\int_\Omega x\chi_\Omega\,dP\right\|=\|x\|P(\Omega)=1=\|x\chi_\Omega\|_1.$

    This proves that $\|\mu\|=1$ and $x\chi_\Omega\in\mathrm{suppv}(\mu)$. The mean operator is then defined as

    $L_1((\Omega,\Sigma,P),X)\to L_1((\Omega,\Sigma,P),X),\quad f\mapsto\mu(f)\chi_\Omega,$ (4.3.3)

    and the centering operator is

    $L_1((\Omega,\Sigma,P),X)\to L_1((\Omega,\Sigma,P),X),\quad f\mapsto f-\mu(f)\chi_\Omega.$ (4.3.4)

    Finally, if $X$ is a unital Banach algebra, then the natural way of defining the variance of $f\in L_1((\Omega,\Sigma,P),X)$ is

    $\sigma^2(f):=\mu\left((f-\mu(f)\chi_\Omega)^2\right)=\int_\Omega(f-\mu(f)\chi_\Omega)^2\,dP.$ (4.3.5)

    It is well known in the literature of the Geometry of Banach Spaces that Hilbert spaces are transitive Banach spaces, meaning that any two elements of the unit sphere of a Hilbert space can be mapped one onto the other by means of a surjective linear isometry. This fact confers on Hilbert spaces a certain freedom when it comes to choosing the convex series that defines the mean functional, the mean operator, the centering operator and the standard deviation. Theorem 3.1, Lemma 3.2 and Theorem 3.4 can be transported to the more general scope provided by the separable Hilbert spaces discussed in the previous section. In fact, in the Discussion we also unveiled how to extend the mean functional, the mean operator, the centering operator and the variance to spaces of absolutely integrable functions defined on a probability space and valued in a unital Banach algebra.

    On the other hand, the study of Principal Component Analysis through Supporting Vector Analysis is a revolutionary trend that allows one to look at these Statistical concepts from a Functional Analysis viewpoint, which is more general and works in infinite dimensional environments, making possible applications in very specific settings such as, for instance, Quantum Mechanical Systems.

    This work has been partially supported by the Research Grant PGC-101514-B-I00 awarded by the Ministry of Science, Innovation and Universities of Spain and co-financed by the 2014-2020 ERDF Operational Programme and by the Department of Economy, Knowledge, Business and University of the Regional Government of Andalusia under Project reference FEDER-UCA18-105867. The APCs have been paid by the Department of Mathematics of the University of Cadiz.

    The authors declare that they have no conflict of interest.

    Let $A\in\mathcal{M}_{m\times n}(\mathbb{R})$ be a general matrix. The algorithm to compute the supporting vector with its corresponding first principal component, the second eigenvector with its corresponding second principal component, and the subsequent eigenvectors and principal components of $A$ is described below.

    One of the novelties of this work is to apply the PCA method with a different procedure, specifically, using an algorithm based on the mathematical idea that the second eigenvector, with its associated second principal component, is the supporting vector of the original points projected onto the orthogonal complement of the original supporting vector. Consequently, all the principal components can be computed in an iterative process via supporting vectors.

    Let $x\in\mathbb{R}^n$ be a vector. First, we show the following function to calculate the orthogonal complement of $x$:
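    The original listing is not reproduced here; the following is a minimal sketch of such a function (assuming NumPy), returning an orthonormal basis of the hyperplane orthogonal to $x$.

        import numpy as np

        def orthogonal_complement(x):
            """Orthonormal basis of the orthogonal complement of a nonzero vector x in R^n.

            The left singular vectors of the column matrix x form an orthogonal basis of R^n
            whose first element is x/||x||; the remaining n-1 columns span {x}^perp."""
            x = np.asarray(x, dtype=float).reshape(-1, 1)
            U, _, _ = np.linalg.svd(x, full_matrices=True)
            return U[:, 1:]

        # example: the orthogonal complement of (1, 1, 1) in R^3 is a plane (two basis vectors)
        Q = orthogonal_complement([1.0, 1.0, 1.0])
        assert np.allclose(Q.T @ np.array([1.0, 1.0, 1.0]), 0.0)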

    Next, by using the previous function, we compute the orthogonal complement of the supporting vector, which is a hyperplane of dimension $n-1$. Afterwards, the points (rows) of the original matrix, from which the initial supporting vector was obtained, are projected onto this hyperplane. Hence, a new matrix is derived whose first principal component corresponds to the second principal component of the original matrix. Thus, by applying this procedure iteratively, we can compute all the principal components via the orthogonal complement of a supporting vector.

    It is easier to understand this algorithm by considering a special case. Let $A\in\mathcal{M}_{3\times 3}(\mathbb{R})$ be a matrix. Intuitively, each row of the matrix represents a point of $\mathbb{R}^3$. The first eigenvector (supporting vector) and its corresponding first principal component are computed as before, but the following ones are obtained with a different procedure. As mentioned, the first principal component, obtained via a supporting vector, of a matrix derived from the original one coincides with the second principal component of the original matrix. This derived matrix is constructed by projecting the rows of the original matrix onto the plane through the origin spanned by the orthogonal complement (two vectors) of the initial supporting vector (one vector). Therefore, we obtain the same result by applying the PCA presented before and by applying this algorithm, which determines each new eigenvector as a supporting vector of the points projected onto the orthogonal complement of the previous supporting vectors. A sketch of the iterative procedure is given below.
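    The following sketch (assuming NumPy; function and variable names are illustrative) implements the iterative procedure: at each step the supporting vector of the current matrix is computed, the corresponding principal component of the original matrix is stored, and the rows are projected onto the orthogonal complement of that supporting vector before the next step.

        import numpy as np

        def pca_via_supporting_vectors(A):
            """Iterative PCA: compute the supporting vector of the current (projected) matrix,
            then project the rows onto its orthogonal complement and repeat."""
            A = A - A.mean(axis=0)                    # work with the centered matrix
            B = A.copy()
            eigenvectors, components = [], []
            for _ in range(A.shape[1]):
                _, _, Vt = np.linalg.svd(B, full_matrices=False)
                v = Vt[0]                             # supporting vector of the current matrix
                eigenvectors.append(v)
                components.append(A @ v)              # principal component of the original matrix
                B = B - np.outer(B @ v, v)            # project the rows onto {v}^perp
            return np.array(eigenvectors), np.array(components).T

        # the columns of comps agree, up to sign, with the principal components computed earlier
        A = np.random.default_rng(5).normal(size=(10, 3))
        vecs, comps = pca_via_supporting_vectors(A)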



    [1] E. Bishop, R. R. Phelps, A proof that every Banach space is subreflexive, Bull. Amer. Math. Soc., 67 (1961), 97–98. https://doi.org/10.1090/S0002-9904-1961-10514-4 doi: 10.1090/S0002-9904-1961-10514-4
    [2] E. Bishop, R. R. Phelps, The support functionals of a convex set, In: Proceedings of Symposia in Pure Mathematics, Vol. VII, Providence, R.I.: Amer. Math. Soc., 1963, 27–35.
    [3] T. Bouwmans, S. Javed, H. Zhang, Z. Lin, R. Otazo, On the applications of robust PCA in image and video processing, Proc. IEEE, 106 (2018), 1427–1457. https://doi.org/10.1109/JPROC.2018.2853589 doi: 10.1109/JPROC.2018.2853589
    [4] C. Cobos-Sánchez, F. J. Garcia-Pacheco, J. M. Guerrero Rodriguez, J. R. Hill, An inverse boundary element method computational framework for designing optimal TMS coils, Eng. Anal. Bound. Elem., 88 (2018), 156–169. https://doi.org/10.1016/j.enganabound.2017.11.002 doi: 10.1016/j.enganabound.2017.11.002
    [5] C. Cobos-Sánchez, F. J. García-Pacheco, S. Moreno-Pulido, S. Sáez-Martínez, Supporting vectors of continuous linear operators, Ann. Funct. Anal., 8 (2017), 520–530. https://doi.org/10.1215/20088752-2017-0016 doi: 10.1215/20088752-2017-0016
    [6] C. Cobos-Sánchez, J. A. Vilchez-Membrilla, A. Campos-Jiménez, F. J. García-Pacheco, Pareto optimality for multioptimization of continuous linear operators, Symmetry, 13 (2021), 661. https://doi.org/10.3390/sym13040661 doi: 10.3390/sym13040661
    [7] C. Cobos-Sánchez, M. R. Cabello, Á. Q. Olozábal, M. F. Pantoja, Design of TMS coils with reduced lorentz forces: application to concurrent TMS-fMRI, J. Neural Eng., 17 (2020), 016056. https://doi.org/10.1088/1741-2552/ab4ba2 doi: 10.1088/1741-2552/ab4ba2
    [8] C. Cobos-Sánchez, J. J. J. García, M. R. Cabello, M. F. Pantoja, Design of coils for lateralized TMS on mice, J. Neural Eng., 17 (2020), 036007. https://doi.org/10.1088/1741-2552/ab89fe doi: 10.1088/1741-2552/ab89fe
    [9] C. Cobos-Sánchez, F. J. Garcia-Pacheco, J. M. Guerrero-Rodriguez, L. Garcia-Barrachina, Solving an IBEM with supporting vector analysis to design quiet TMS coils, Eng. Anal. Bound. Elem., 117 (2020), 1–12. https://doi.org/10.1016/j.enganabound.2020.04.013 doi: 10.1016/j.enganabound.2020.04.013
    [10] C. Cobos-Sánchez, J. M. Guerrero-Rodriguez, Á. Q. Olozábal, D. Blanco-Navarro, Novel TMS coils designed using an inverse boundary element method, Phys. Med. Biol., 62 (2016), 73–90. https://doi.org/10.1088/1361-6560/62/1/73 doi: 10.1088/1361-6560/62/1/73
    [11] C. M. Epstein, E. Wassermann, U. Ziemann, Oxford Handbook of Transcranial Stimulation, New York: Oxford University Press, 2008. https://doi.org/10.1093/oxfordhb/9780198568926.001.0001
    [12] J. Fan, Q. Sun, W.-X. Zhou, Z. Zhu, Principal component analysis for big data, Wiley StatsRef: Statistics Reference Online, in press. https://doi.org/10.1002/9781118445112.stat08122
    [13] F. J. García-Pacheco, E. Naranjo-Guerra, Supporting vectors of continuous linear projections, International Journal of Functional Analysis, Operator Theory and Applications, 9 (2017), 85–95.
    [14] F. J. García-Pacheco, Lineability of the set of supporting vectors, RACSAM, 115 (2021), 41, https://doi.org/10.1007/s13398-020-00981-6 doi: 10.1007/s13398-020-00981-6
    [15] F. J. Garcia-Pacheco, C. Cobos-Sánchez, S. Moreno-Pulido, A. Sanchez-Alzola, Exact solutions to $\max_{\|x\|=1}\sum_{i}\|T_i(x)\|^2$ with applications to Physics, Bioengineering and Statistics, Commun. Nonlinear Sci. Numer. Simul., 82 (2020), 105054. https://doi.org/10.1016/j.cnsns.2019.105054 doi: 10.1016/j.cnsns.2019.105054
    [16] F.-K. Garsiya-Pacheko, The cardinality of the set Λ determines the geometry of the spaces B(Λ) and B(Λ), (Russian), Funktsional. Anal. i Prilozhen., 52 (2018), 62–71. https://doi.org/10.4213/faa3534 doi: 10.4213/faa3534
    [17] Instituto de Estadística y Cartografía de Andalucía. Available from: https://www.juntadeandalucia.es/institutodeestadisticaycartografia.
    [18] R. C. James, Characterizations of reflexivity, Stud. Math., 23 (1964), 205–216. https://doi.org/10.4064/sm-23-3-205-216 doi: 10.4064/sm-23-3-205-216
    [19] J. Lindenstrauss, On operators which attain their norm, Israel J. Math., 1 (1963), 139–148. https://doi.org/10.1007/BF02759700 doi: 10.1007/BF02759700
    [20] L. Marin, H. Power, R. W. Bowtell, C. Cobos-Sánchez, A. A. Becker, P. Glover, et al., Numerical solution of an inverse problem in magnetic resonance imaging using a regularized higher-order boundary element method, In: Boundary elements and other mesh reduction methods XXIX, Southampton: WIT Press, 2007,323–332. https://doi.org/10.2495/BE070311
    [21] L. Marin, H. Power, R. W. Bowtell, C. Cobos-Sánchez, A. A. Becker, P. Glover, et al., Boundary element method for an inverse problem in magnetic resonance imaging gradient coils, CMES Comput. Model. Eng. Sci., 23 (2008), 149–173. https://doi.org/10.3970/cmes.2008.023.149 doi: 10.3970/cmes.2008.023.149
    [22] S. Moreno-Pulido, F. J. Garcia-Pacheco, C. Cobos-Sánchez, A. Sanchez-Alzola, Exact solutions to the maxmin problem $\max\|Ax\|$ subject to $\|Bx\|\leq 1$, Mathematics, 8 (2020), 85. http://doi.org/10.3390/math8010085 doi: 10.3390/math8010085
    [23] A. Sánchez-Alzola, F. J. García-Pacheco, E. Naranjo-Guerra, S. Moreno-Pulido, Supporting vectors for the $\ell_1$-norm and the $\ell_\infty$-norm and an application, Math. Sci., 15 (2021), 173–187. https://doi.org/10.1007/s40096-021-00400-w doi: 10.1007/s40096-021-00400-w
    [24] L. Surhone, M. Timpledon, S. Marseken, Principal component analysis: Karhunen-Loève Theorem, Harold Hotelling, Karl Pearson, Exploratory Data Analysis, Eigendecomposition of a Matrix, Covariance Matrix, Singular Value Decomposition, Factor Analysis, Betascript Publishing, 2010.
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)