1. Introduction
It is well known that the Galois extension of a field is of special importance in analyzing various fields, solving polynomial equations, etc. [16]. In addition to field extensions, some other extensions of R to algebras over it are also useful. This paper considers the extension of a field F to a finite dimensional algebra over F. Such extensions are very general and contain Galois extensions, noncommutative Galois extensions [1,21], commutative quaternions [17,18], and dual and hyperbolic numbers [19,22], etc., as special cases.
To make the problem clear, the algebra considered in this paper is defined as follows:
Definition 1.1. [12,14] An algebra A is a finite dimensional vector space V over a pre-assigned field F equipped with a bilinear operator ∗:V×V→V satisfying the distributive rules
(aX+bY)∗Z = a(X∗Z)+b(Y∗Z),  Z∗(aX+bY) = a(Z∗X)+b(Z∗Y),  a,b∈F, X,Y,Z∈V.
Starting from the idea of extension, it is clear that we need to distinguish between two kinds of algebras over a given field F, called numerical and non-numerical algebras, which are defined as follows:
Definition 1.2. An algebra A over F, denoted by A=(V,∗), is called a numerical algebra if F is a one dimensional subspace of V. The set of numerical algebras is denoted by NA. Otherwise, it is called a non-numerical algebra. The set of non-numerical algebras is denoted by VA.
Example 1.3. (i) The quaternion algebra, denoted by Q, is a numerical algebra, because F=R is a one dimensional subspace of Q.
(ii) The cross product over R^3, denoted by Cr=(R^3,×), is a non-numerical algebra.
From another point of view: an algebra extension of F can also be considered as an extension of (V,∗)∈VA by adding F to it. From this perspective, this paper investigates the algebra extensions of a field by studying the relationship between VA and NA.
The following fundamental problems about an algebra over a given field are considered in this paper:
(i) How to convert a numerical algebra to a non-numerical algebra by removing the "number" dimensional subspace, and how to convert a non-numerical algebra to a numerical algebra by adding a "number" dimensional subspace.
(ii) Some properties of finite dimensional algebras, such as commutativity, associativity, invertibility, etc.
(iii) How to check whether a numerical algebra is a field.
(iv) Find the set of non-invertible elements of a commutative quaternion QS, and investigate the solutions of linear systems over QS.
Finally, as an application, the above approach is applied to investigate the commutative quaternion Q_S, which was first proposed by Segre [20] and has received many applications recently [3,18].
Recently, the semi-tensor product (STP) of matrices has been proposed and used to investigate various algebraic structures, such as the cross-dimensional general linear algebra gl(R)=∪_{n=1}^∞ gl(n,R) [8,9], Boolean-like algebras [11], etc. The basic tool used in this paper is also the STP. Using it, the product structure matrix (PSM) of a given algebra is constructed. The PSM, which completely determines the algebra, is the key object in our investigation.
The rest of this paper is organized as follows: Section 2 reviews some necessary preliminaries, including (i) the STP of matrices; (ii) structure matrices of binary operators, particularly the PSM of a finite dimensional algebra. In Section 3 the numeralization and dis-numeralization of algebras are proposed and corresponding algorithms are developed. Section 4 considers basis transformations, which lead to the notion of essentially numerically separable algebras. In Section 5 some properties of an algebra are investigated via its PSM. In Section 6 we consider when a numerical algebra is a field; necessary and sufficient conditions are obtained. In Section 7 the commutative quaternion Q_S is investigated: the set of non-invertible elements is revealed, which is shown to be a zero-measure set, and the solution of linear systems over Q_S is discussed. Section 8 gives some brief concluding remarks.
Before ending this section, a list of notations is presented as follows:
1. A: an algebra; F: a field; R(C,Q): field of real (complex, rational) numbers.
2. F^{m×n}: the set of m×n matrices with all entries in F.
3. Col(M) (Row(M)): the set of columns (rows) of a matrix M; Col_i(M) (Row_i(M)) is the i-th column (row) of M.
4. δ_n^i: the i-th column of the identity matrix I_n.
5. ⨄: direct sum of vector spaces.
2. Preliminaries
2.1. STP of matrices
This subsection provides a brief survey of the semi-tensor product (STP) of matrices. We refer to [6,7] for more details.
Definition 2.1. [5,6] Let M∈F^{m×n} and N∈F^{p×q}, where F∈{Q,R,C}, and let t=lcm(n,p) be the least common multiple of n and p. The STP of M and N, denoted by M⋉N, is defined as
M⋉N := (M⊗I_{t/n})(N⊗I_{t/p}),
where ⊗ is the Kronecker product.
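For readers who wish to experiment, the following is a minimal NumPy sketch of Definition 2.1; the function name stp is our own and not taken from any existing toolbox.

```python
import numpy as np

def stp(M: np.ndarray, N: np.ndarray) -> np.ndarray:
    """Semi-tensor product M ⋉ N = (M ⊗ I_{t/n})(N ⊗ I_{t/p}), t = lcm(n, p)."""
    n, p = M.shape[1], N.shape[0]
    t = np.lcm(n, p)
    return np.kron(M, np.eye(t // n)) @ np.kron(N, np.eye(t // p))
```

When n = p the identity blocks are trivial and stp reduces to the ordinary matrix product, as stated in Remark 2.2 below.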
Remark 2.2. (i) When n=p, M⋉N=MN. That is, the semi-tensor product is a generalization of the conventional matrix product. Moreover, it retains all the major properties of the conventional matrix product [6].
(ii) Throughout this paper the matrix product is assumed to be the STP and the symbol ⋉ is mostly omitted.
(iii) For ease of statement, the field F in this paper is assumed to be of characteristic 0. In particular, a reader who is not familiar with abstract algebra may take F∈{Q,R,C}. In fact, most of the results in this paper are also applicable to Galois fields.
We briefly review some basic properties of STP:
Proposition 2.3. [5,6]
1. (Associative Law)
(F⋉G)⋉H=F⋉(G⋉H).
2. (Distributive Law)
F⋉(aG+bH)=aF⋉G+bF⋉H,  (aG+bH)⋉F=aG⋉F+bH⋉F,  where a,b are scalars.
Define the swap matrix W_{[m,n]}∈M_{mn×mn} as follows:
W_{[m,n]} := [I_n⊗δ_m^1, I_n⊗δ_m^2, ⋯, I_n⊗δ_m^m].
Proposition 2.4. [5,6] Let x∈R^m and y∈R^n be two column vectors. Then W_{[m,n]}⋉x⋉y=y⋉x.
Proposition 2.5. [5,6] Let x∈F^t be a column vector, and let A be an arbitrary matrix over F. Then x⋉A=(I_t⊗A)⋉x.
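A small numerical sketch of the swap matrix in the block form given above and of the identities in Propositions 2.4 and 2.5, building on stp from the previous sketch; swap_matrix and the random test data are our own illustration.

```python
def swap_matrix(m: int, n: int) -> np.ndarray:
    """W_{[m,n]} = [I_n ⊗ δ_m^1, I_n ⊗ δ_m^2, ..., I_n ⊗ δ_m^m]."""
    I_m = np.eye(m)
    return np.hstack([np.kron(np.eye(n), I_m[:, [i]]) for i in range(m)])

x, y = np.random.rand(3, 1), np.random.rand(4, 1)
W = swap_matrix(3, 4)
assert np.allclose(W @ np.kron(x, y), np.kron(y, x))             # Proposition 2.4
A = np.random.rand(2, 5)
assert np.allclose(stp(x, A), stp(np.kron(np.eye(3), A), x))     # Proposition 2.5
```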
2.2. PSM of a finite dimensional algebra
Let A=(V,∗). Assume a basis of V is B=(e_1,e_2,⋯,e_n). Then it is clear that the properties of A are completely determined by the binary operator ∗, which is briefly called the "product". For a fixed basis B, an element X∈V can be expressed as a vector X=(x_1,x_2,⋯,x_n)^T, which means X=∑_{i=1}^n x_i e_i. We introduce a matrix, called the PSM, which is a complete description of the product ∗.
Proposition 2.6. Let A=(V,∗) be a finite dimensional algebra over F with a fixed basis B=(e_1,e_2,⋯,e_n). Then there exists a unique matrix M_A∈F^{n×n^2}, called the PSM of A, such that
X∗Y = M_A X Y,  X,Y∈V.
Proof. Assume e_α∗e_β = ∑_{k=1}^n λ_{αβ}^k e_k, and set
j := (α−1)n+β.  (2.3)
Then we define
Col_j(M_A) := (λ_{αβ}^1, λ_{αβ}^2, ⋯, λ_{αβ}^n)^T.  (2.4)
It is easy to verify that for each 1≤j≤n^2 there exists a unique pair (α,β), with 1≤α≤n and 1≤β≤n, such that (2.3) holds. Hence M_A is completely constructible by (2.4). A straightforward computation shows that X∗Y = M_A X Y.
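The construction in the proof can be sketched in a few lines; psm_from_structure_constants is our own helper name, and lam packs the structure constants λ_{αβ}^k.

```python
def psm_from_structure_constants(lam: np.ndarray) -> np.ndarray:
    """lam[alpha, beta, k] = λ_{αβ}^k (0-indexed, shape (n, n, n));
    column (α-1)n+β of M_A is (λ_{αβ}^1, ..., λ_{αβ}^n)^T, as in (2.3)-(2.4)."""
    n = lam.shape[0]
    M = np.zeros((n, n * n))
    for a in range(n):
        for b in range(n):
            M[:, a * n + b] = lam[a, b, :]
    return M
# For coordinate vectors X, Y, the product is then X ∗ Y = M_A ⋉ X ⋉ Y = M_A (X ⊗ Y).
```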
Using the PSM, the abstract form of a product is converted into an ordinary matrix expression. In addition, the PSM turns some qualitative properties of an algebra into quantitative conditions, which will be discussed in Section 5.
Example 2.7. Consider the following algebras:
(i) Quaternion, Q. Consider the classical (i.e., Hamiltonian) quaternion. Let (1,i,j,k) be its classical basis. Then it is easy to calculate that its PSM is
M_Q = [ 1 0 0 0  0 −1 0 0  0 0 −1 0  0 0 0 −1;
        0 1 0 0  1 0 0 0  0 0 0 1  0 0 −1 0;
        0 0 1 0  0 0 0 −1  1 0 0 0  0 1 0 0;
        0 0 0 1  0 0 1 0  0 −1 0 0  1 0 0 0 ].
(ii) Cross product over R^3 (denoted by Cr): Let (I,J,K) be its classical basis. Then its PSM is
M_Cr = [ 0 0 0  0 0 1  0 −1 0;
         0 0 −1  0 0 0  1 0 0;
         0 1 0  −1 0 0  0 0 0 ].
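The two PSMs above can be rebuilt from the corresponding multiplication tables, reusing stp and psm_from_structure_constants from the earlier sketches; the index sets 0–3 and 0–2 stand for (1,i,j,k) and (I,J,K), respectively.

```python
lam_Q = np.zeros((4, 4, 4))
for (a, b), (s, idx) in {
    (0, 0): (1, 0), (0, 1): (1, 1), (0, 2): (1, 2), (0, 3): (1, 3),
    (1, 0): (1, 1), (1, 1): (-1, 0), (1, 2): (1, 3), (1, 3): (-1, 2),
    (2, 0): (1, 2), (2, 1): (-1, 3), (2, 2): (-1, 0), (2, 3): (1, 1),
    (3, 0): (1, 3), (3, 1): (1, 2), (3, 2): (-1, 1), (3, 3): (-1, 0)}.items():
    lam_Q[a, b, idx] = s                         # e_a * e_b = s * e_idx
M_Q = psm_from_structure_constants(lam_Q)        # 4 x 16, as displayed in (i)

lam_cr = np.zeros((3, 3, 3))
for (a, b, idx, s) in [(0, 1, 2, 1), (1, 0, 2, -1), (1, 2, 0, 1),
                       (2, 1, 0, -1), (2, 0, 1, 1), (0, 2, 1, -1)]:
    lam_cr[a, b, idx] = s                        # I x J = K, J x K = I, K x I = J, ...
M_cr = psm_from_structure_constants(lam_cr)      # 3 x 9, as displayed in (ii)

i, j, k = np.eye(4)[:, [1]], np.eye(4)[:, [2]], np.eye(4)[:, [3]]
assert np.allclose(stp(stp(M_Q, i), j), k)       # spot check: i * j = k
```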
3. Numeralization and dis-numeralization
Definition 3.1. Assume A=(V,∗) is a non-numerical algebra over F with a basis (e_1,e_2,⋯,e_n). Then we can add a numerical dimension to it to make it a numerical algebra. Precisely speaking, choosing coefficients c_{i,j}∈F, we define a new operator ⊕ on V⨄F as
(a+X) ⊕ (b+Y) := (ab + ∑_{i,j=1}^n c_{i,j} x_i y_j) + (aY + bX + X∗Y),  a,b∈F,
where X=∑_i x_i e_i and Y=∑_i y_i e_i.
Then A_a:=(V⨄F,⊕) is called a numeralization of A.
If c_{i,j}=0 for all i,j, then the numeralization is called a numerically separable numeralization.
Remark 3.2. (i) In the enlarged vector space, we assume ∑_{i=1}^n 0·e_i = 0, that is, the zero element of V is identified with 0∈F.
(ii) By definition, in the enlarged vector space we also have 1⊕X=X,X∈V.
Example 3.3. (i) Consider Cr, and denote a basis of V=R^3 by B=(e_1,e_2,e_3). It is a non-numerical algebra. Now we add R to it and define a new operator ⊕ as in Definition 3.1, where c_{i,j}=−1 if i=j, and c_{i,j}=0 otherwise.
Then it is easy to verify that the numeralized algebra (R^3⨄R,⊕) is exactly the quaternion Q (see also the sketch following this example).
(ii) Consider special linear algebra sl(2,R), which is the set of 2×2 real matrices with their trace equal to 0 [2]. Choose a basis as
Its PSM is
Now we add R to it and define
Then we have the numeralized algebra (sl(2,R)⨄R,⊕). We calculate its PSM under the basis (1,e_1,e_2,e_3):
It is a numerically separable numeralization.
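Under our reconstruction of the numeralized product in Definition 3.1, Example 3.3(i) can be checked numerically as follows; numeralize is our own helper name, and M_Q, M_cr are the PSMs built in the sketch after Example 2.7.

```python
def numeralize(M: np.ndarray, C: np.ndarray) -> np.ndarray:
    """PSM of (V ⊎ F, ⊕) on the basis (1, e_1, ..., e_n); M is the PSM of (V, ∗), C = (c_ij)."""
    n = M.shape[0]
    lam = np.zeros((n + 1, n + 1, n + 1))
    lam[0, 0, 0] = 1                                        # 1 ⊕ 1 = 1
    for i in range(n):
        lam[0, i + 1, i + 1] = lam[i + 1, 0, i + 1] = 1     # 1 ⊕ e_i = e_i ⊕ 1 = e_i
        for j in range(n):
            lam[i + 1, j + 1, 0] = C[i, j]                  # number part: c_ij
            lam[i + 1, j + 1, 1:] = M[:, i * n + j]         # vector part: e_i ∗ e_j
    return psm_from_structure_constants(lam)

assert np.allclose(numeralize(M_cr, -np.eye(3)), M_Q)       # Example 3.3(i): Cr numeralized is Q
```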
Remark 3.4. In [9] a cross-dimensional general linear algebra is defined as gl(R):=({A | A is a square matrix},[⋅,⋅]), where [A,B]:=A⋉B−B⋉A. This bracket is also applicable to 1×1 matrices, that is, to A∈R or B∈R, but it is completely different from the operator ⊕ defined in the previous example.
Definition 3.5. Assume A=(V,∗) is a numerical algebra over F.
(i) The subspace V∖F is called the non-numerical subspace, denoted by Va. It is clear that if dim(V)=n, then dim(Va)=n−1.
(ii) If Aa:=(Va,∗) is a sub-algebra, A is said to be numerically separable.
Example 3.6. [17]
Let A=(V,∗) be a numerical algebra over R, where V={a+bi | a,b∈R}.
(i) Complex Numbers (C): The product is defined by 1∗i=i∗1=i;i∗i=−1. Then we have C as a two dimensional algebra over R.
(ii) Dual Numbers (D): Assume the product is defined by 1∗i=i∗1=i;i∗i=0. Then we have D as another two dimensional algebra over R. It is easy to see that D is numerically separable.
(iii) Hyperbolic Numbers (H): Assume the product is defined by 1∗i=i∗1=i;i∗i=1. Then we know that H is also a two dimensional algebra over R.
(iv) The e_1−e_2 algebra [17] is a numerical algebra, defined as A=(V,∗), where V={a+be_1+ce_2 | a,b,c∈R}, with the product defined by e_i∗e_i=e_i, i=1,2, and e_1∗e_2=e_2∗e_1=0.
It is easy to verify that the e_1−e_2 algebra is numerically separable.
(v) Consider a numerical algebra A=(V,∗) over R, with a basis B=(e1=1,e2,e3,e4). Corresponding to this basis its PSM is
It is straightforward to verify that this is a numerically separable algebra.
Definition 3.7. Let A=(V,∗) be a numerical algebra. Define a new algebraic structure ⊖ on V_a=V∖F as
X⊖Y := Π_a(X∗Y),  X,Y∈V_a,
where Πa is the projection from V to Va.
Proposition 3.8. Let A=(V,∗) be a numerical algebra. Then Aa:=(Va,⊖) is an algebra, which is called the dis-numeralized algebra of A.
Proof. It is enough to show that ⊖ is distributive. Since the projection Πa is linear, the conclusion is obvious.
The process from a numerical algebra to its dis-numeralized algebra is called the dis-numeralization.
Definition 3.9. A subspace Vs⊂V is a closed subspace, if it is closed under ∗. That is, X∗Y∈Vs,∀X,Y∈Vs.
The following proposition comes from Definition 3.9.
Proposition 3.10. Let A be a numerical algebra. Then the following are equivalent:
(i) A is numerically separable;
(ii) Va is a closed subspace;
(iii) Aa=(Va,∗) is an algebra.
Example 3.11. Let F=Q be the field of rational numbers. Consider the extension field A:=Q(√2,√3), which is a numerical algebra. Set its basis as (e_1=1, e_2=√2, e_3=√3, e_4=√6). Then its PSM is calculated as in Table 1. Expressing the table in matrix form yields
M_A = [ 1 0 0 0  0 2 0 0  0 0 3 0  0 0 0 6;
        0 1 0 0  1 0 0 0  0 0 0 3  0 0 3 0;
        0 0 1 0  0 0 0 2  1 0 0 0  0 2 0 0;
        0 0 0 1  0 0 1 0  0 1 0 0  1 0 0 0 ].  (3.5)
It is obvious that this algebra is not numerically separable. A simple computation shows that the PSM of A_a=(V_a,⊖) is
M_{A_a} = [ 0 0 0  0 0 3  0 3 0;
            0 0 2  0 0 0  2 0 0;
            0 1 0  1 0 0  0 0 0 ].
It is obvious that M_{A_a} can be obtained from M_A by deleting the row and the columns of M_A that involve e_1.
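This deletion rule can be sketched generically and spot-checked on the pair (Q, Cr) from Section 2; dis_numeralize is our own helper name.

```python
def dis_numeralize(M: np.ndarray) -> np.ndarray:
    """PSM of (V_a, ⊖) on the basis (e_2, ..., e_n), assuming e_1 = 1 is the first basis element."""
    n = M.shape[0]
    cols = [a * n + b for a in range(1, n) for b in range(1, n)]  # drop all pairs involving e_1
    return M[1:, cols]

assert np.allclose(dis_numeralize(M_Q), M_cr)   # dis-numeralizing Q recovers the cross product
```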
Example 3.12. Recall A_a in Example 3.11. If we choose the coefficients of Definition 3.1, on the basis (√2,√3,√6) of V_a, as
c_{1,1}=2,  c_{2,2}=3,  c_{3,3}=6,  c_{i,j}=0 for i≠j,
then (A_a)_a=A, where (A_a)_a is a numeralization of the non-numerical algebra A_a.
Next, we consider the numeralization of vector spaces. A vector space V over F, such as R^n over R, is not an algebra, because there is no product. A simple way to turn it into an algebra is to equip it with a trivial product, called the zero product; that is, define X∗Y=0 for all X,Y∈V. It is straightforward to verify that (V,∗) is then a non-numerical algebra.
Definition 3.13. Let V be a vector space over F. A numeralization of V is a numeralization of (V,∗), where ∗ is zero product on V.
Example 3.14. Let V=R^n (or V=C^n). Consider the algebra (V,∗), where ∗ is the zero product. Define ⊕ as in Definition 3.1 with c_{i,j}=1 for i=j and c_{i,j}=0 otherwise, that is,
(a+X)⊕(b+Y) = (ab + ∑_{i=1}^n x_i y_i) + (aY+bX).
Then (V⨄F,⊕) is a numerical algebra.
Remark 3.15. Note that an interesting fact is: in the above example ⊕|V is exactly the inner product on V. That is, in the numeralized vector space, the inner product becomes a standard vector product.
Example 3.16. Let V=R^3, and consider (R^3,∗) as an algebra with the zero product. Now we define ⊕ as in Definition 3.1, with the coefficients c_{i,j} determined by the speed of light c. Then the numeralized algebra (R^3⨄R,⊕) is the four dimensional space-time of relativity, consisting of the physical space R^3 and the time t∈R. In fact, (3.7) represents the metric coefficients used in general relativity [10]. *
*The idea comes from a private conversation with a physicist friend.
4. Basis transformation
Assume A=(V,∗) is an algebra with B=(e_1,e_2,⋯,e_n) as a basis of V, and let its PSM be M. It is clear that M depends on the choice of B. Let B′=(e′_1,e′_2,⋯,e′_n) be another basis, and let the PSM under this basis be M′. Their relationship is easily obtained as follows.
Proposition 4.1. Assume B′=BT, where T∈F^{n×n} is non-singular. Then
M′ = T^{−1} M (T⊗T).  (4.1)
Proof. Let B=(e_1,e_2,⋯,e_n) and B′=(e′_1,e′_2,⋯,e′_n). Then an element of V with coordinate vector X with respect to B has coordinate vector X′ with respect to B′, where BX=B′X′=BTX′.
Hence Y′=T^{−1}Y and X′=T^{−1}X. The product X∗Y has B-coordinate vector MXY and B′-coordinate vector M′X′Y′, so
T M′ X′ Y′ = M X Y = M(TX′)(TY′) = M(T⊗T)X′Y′.
Since X,Y∈F^n are arbitrary, it follows that
T M′ = M(T⊗T).  (4.2)
It is straightforward to verify that (4.1) and (4.2) are equivalent.
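A quick numerical check of (4.1) as reconstructed above, using the quaternion PSM M_Q from Section 2: the product computed in either coordinate system agrees.

```python
rng = np.random.default_rng(0)
T = rng.standard_normal((4, 4)) + 4 * np.eye(4)        # a generic non-singular T
M_prime = np.linalg.inv(T) @ M_Q @ np.kron(T, T)       # (4.1): M' = T^{-1} M_Q (T ⊗ T)
X, Y = rng.standard_normal((4, 1)), rng.standard_normal((4, 1))
old = stp(stp(M_Q, X), Y)                              # product computed in the basis B
new = T @ stp(stp(M_prime, np.linalg.inv(T) @ X), np.linalg.inv(T) @ Y)   # via B' = BT
assert np.allclose(old, new)
```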
Recall the notion of numerical separability. It is obvious that the definition is basis-dependent. A natural question is: if a numerical algebra is not numerically separable, is it possible to turn it into a numerically separable one by a proper basis transformation? We give an example of this.
Example 4.2. Consider a numerical algebra A=(V,∗) over R with a basis B=(e_1=1,e_2,e_3,e_4). Assume its PSM is
It is easy to see that this is not a numerically separable algebra. For instance, let x=e_3. Then x∗x=−2+e_2+3e_3∉V_a. Hence V_a is not closed.
Now we consider a basis transformation B′=BT, where
Under this new basis, the PSM becomes
which is exactly the matrix of (3.3) in Example 3.6. Hence, it is numerically separable.
From the above example one sees that the definition of numerical separability needs to be refined. We give the following one.
Definition 4.3. A numerical algebra is essentially numerically separable, if there exists a basis such that under this basis it is numerically separable.
The following proposition is obvious.
Proposition 4.4. A numerical algebra is essentially numerically separable, if and only if, there exists a closed subspace V_a such that V = F ⨄ V_a.
Definition 4.5. Given two algebras Ai=(Vi,∗i), i=1,2 over same F.
1. A1 is said to be homomorphic to A2, if there exists a mapping π:V1→V2 such that
(i) π(a_1X_1+a_2X_2)=a_1π(X_1)+a_2π(X_2), a_1,a_2∈F;
(ii) π(X_1∗_1X_2)=π(X_1)∗_2π(X_2).
2. A1 is said to be isomorphic to A2, if A1 is homomorphic to A2, and the homomorphism π:V1→V2 is bijective. If A1=A2, and π:V→V is an isomorphism, then π is called an automorphism.
We have the following result.
Proposition 4.6. (i) Let A_i=(V_i,∗_i), where dim(V_i)=n_i, i=1,2. π:V_1→V_2 is a homomorphism, if and only if, there is a matrix T_π∈F^{n_2×n_1} such that
T_π M_{A_1} = M_{A_2}(T_π⊗T_π).  (4.4)
(ii) Let A_i=(V_i,∗_i), where dim(V_i)=n, i=1,2. π:V_1→V_2 is an isomorphism, if and only if, there is a non-singular matrix T_π∈F^{n×n} such that
M_{A_2} = T_π M_{A_1}(T_π^{−1}⊗T_π^{−1}).  (4.5)
(iii) Let A=(V,∗), where dim(V)=n. π:V→V is an automorphism, if and only if, there is a non-singular matrix T_π∈F^{n×n} such that
T_π M_A = M_A(T_π⊗T_π).  (4.6)
Proof. We prove (4.6) only. Proofs for (4.4) and (4.5) are similar.
(Necessity) To meet the requirement of (i) of Definition 4.5, it is clear that π must be a linear mapping. Hence there exists a matrix Tπ∈Fn×n such that π(X)=TπX,X∈V. Since π is bijective, Tπ must be non-singular.
Now for any X,Y∈V we have
π(X∗Y)=T_π M_A X Y  and  π(X)∗π(Y)=M_A(T_πX)(T_πY)=M_A(T_π⊗T_π)XY.
Since X,Y∈V are arbitrary, (4.6) follows immediately.
(Sufficiency) Reviewing the proof of necessity, it is easy to see that each step is reversible. The conclusion follows.
Example 4.7. Consider a two dimensional numerical algebra over R with V={a+be | a,b∈R}, where 1∗e=e∗1=e and e∗e=e. It seems that we have obtained a new two dimensional algebra over R. But if we let i=1−2e, then it is easy to see that i^2=1. That is, this new two dimensional algebra is isomorphic to the hyperbolic algebra H.
Using the STP, it was proved in [4] that, up to isomorphism, there are only three two dimensional algebras of this kind over R.
5. Properties via PSM
Definition 5.1. Let A=(V,∗) be a finite dimensional algebra.
(i) A is commutative, if ∗ is symmetric. That is, X∗Y=Y∗X,X,Y∈V.
(ii) A is anti-commutative, if ∗ is skew-symmetric. That is, X∗Y=−Y∗X,X,Y∈V.
(iii) A is associative, if
(X∗Y)∗Z = X∗(Y∗Z),  X,Y,Z∈V.  (5.1)
Proposition 5.2. Let A=(V,∗) be a given finite dimensional algebra with M_A its PSM.
(i) A is commutative, if and only if,
M_A = M_A W_{[n,n]}.  (5.2)
(ii) A is anti-commutative, if and only if,
M_A = −M_A W_{[n,n]}.  (5.3)
(iii) A is associative, if and only if,
M_A^2 = M_A(I_n⊗M_A),  (5.4)
where M_A^2 := M_A⋉M_A = M_A(M_A⊗I_n).
Proof. (i) Using the PSM, X∗Y=Y∗X can be expressed as M_A X Y = M_A Y X. Using the swap matrix, the right-hand side can be expressed as M_A Y X = M_A W_{[n,n]} X Y. Since X,Y∈V are arbitrary, (5.2) follows. (ii) The proof is similar to (i). (iii) Using the PSM, (5.1) can be expressed as
M_A(M_A X Y)Z = M_A X(M_A Y Z).
The left-hand side equals M_A^2 X Y Z, and by Proposition 2.5 the right-hand side becomes M_A(I_n⊗M_A) X Y Z. Thus (5.4) follows.
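The tests of Proposition 5.2 can be run directly on the PSMs of Example 2.7, reusing the helpers from Section 2; the expected outcomes are noted in the comments.

```python
W44, W33 = swap_matrix(4, 4), swap_matrix(3, 3)
print(np.allclose(M_Q, M_Q @ W44))                                  # (5.2) for Q:   False
print(np.allclose(M_Q, -M_Q @ W44))                                 # (5.3) for Q:   False
print(np.allclose(stp(M_Q, M_Q), M_Q @ np.kron(np.eye(4), M_Q)))    # (5.4) for Q:   True
print(np.allclose(M_cr, -M_cr @ W33))                               # (5.3) for Cr:  True
```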
6. From numerical algebra to field
Assume A=(V,∗) is a numerical algebra over F. We ask: when is A a field? In other words, when is A⊃F a finite field extension of F (with [A:F]=n) [16]?
Since V is a vector space, (V,+) is of course an abelian group. As for distributivity, it is ensured by the properties of an algebra. Hence we have the following obvious fact.
Lemma 6.1. Let A=(V,∗) be a numerical algebra over F. Then A is a field if and only if (V∖{0},∗) is an abelian group.
According to Lemma 6.1, we only have to check (i) commutativity, (ii) associativity, and (iii) invertibility. Condition (i) can be verified by (5.2) and condition (ii) by (5.4). Hence we only need a way to verify that every nonzero element of a numerical algebra is invertible. To this end, we need some preparation.
Definition 6.2. Let F be a given field and A∈F^{k×k^2}. A is said to be jointly non-singular if, for any 0≠x∈F^k, A⋉x is non-singular.
Split A into k square blocks as A=[A_1,A_2,⋯,A_k], where A_i=A⋉δ_k^i∈F^{k×k}. Denote
p(x_1,x_2,⋯,x_k) := det(A⋉x) = det(∑_{i=1}^k x_i A_i),  x=(x_1,⋯,x_k)^T.  (6.1)
Then we have the following result:
Proposition 6.3. A∈F^{k×k^2} is jointly non-singular, if and only if, the homogeneous polynomial p(x_1,⋯,x_k) in (6.1) has no nonzero root x∈F^k.
Proof. The claim follows immediately by expanding the determinant det(A⋉x).
Example 6.4. (i) Consider C. Using (1,i) as a basis of C over R, it is easy to verify that the PSM of C is
M_C = [ 1 0 0 −1;
        0 1 1 0 ].
(ii) Calculating the right-hand side of (6.1) for M_C, we have
det(x_1 A_1 + x_2 A_2) = det [ x_1 −x_2; x_2 x_1 ].
It follows that p(x_1,x_2)=x_1^2+x_2^2. Hence p(x_1,x_2)=0 if and only if x_1=x_2=0. It follows that M_C is jointly non-singular.
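A symbolic sketch of this computation with SymPy (our choice of tool; any computer algebra system would do):

```python
import sympy as sp

M_C = sp.Matrix([[1, 0, 0, -1],
                 [0, 1, 1,  0]])                  # PSM of C on the basis (1, i)
A1, A2 = M_C[:, 0:2], M_C[:, 2:4]                 # A_i = M_C ⋉ δ_2^i
x1, x2 = sp.symbols('x1 x2', real=True)
p = sp.expand((x1 * A1 + x2 * A2).det())
print(p)                                          # x1**2 + x2**2
```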
From the above arguments, we have the following result.
Theorem 6.5. Assume A=(V,∗) is a numerical algebra over F, where V is an n-dimensional vector space with a basis (e1=1,e2,⋯,en). Moreover, assume the PSM is MA. Then A is a field, if and only if,
(i) M_A^2 = M_A(I_n⊗M_A);
(ii) M_A = M_A W_{[n,n]};
(iii) M_A is jointly non-singular.
Proof. It was shown in Proposition 5.2 that condition (i) is equivalent to associativity, and condition (ii) is equivalent to commutativity.
Now we consider condition (iii). In fact, it is equivalent to the requirement that each x≠0 has a unique inverse.
Let x_0≠0 and consider the linear equation M_A⋉x_0⋉y=δ_n^1, which expresses x_0∗y=1. It has a unique solution y if and only if M_A⋉x_0 is non-singular. The conclusion is obvious.
Example 6.6. Consider E=Q(√2,√3).
It is easy to see that a basis of E over Q is: (1,√2,√3,√6). Using this basis, the PSM is calculated as in Example 3.11, (see (3.5)).
We verify the three conditions in Theorem 6.5. The verification of conditions (i) and (ii) is a straightforward calculation, so we check condition (iii) only. Let x=(a,b,c,d)^T, that is, x=a+b√2+c√3+d√6. A direct calculation of det(M_A⋉x) and factorization of the resulting polynomial yields
det(M_A⋉x) = (a+b√2+c√3+d√6)(a−b√2+c√3−d√6)(a+b√2−c√3−d√6)(a−b√2−c√3+d√6).
Since each factor is a linear combination of the basis elements {1,√2,√3,√6}, which are linearly independent over Q, every factor is nonzero whenever (a,b,c,d)^T≠0. We conclude that det(M_A⋉x)≠0 for all 0≠(a,b,c,d)^T∈Q^4.
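A numerical sketch of this example, reusing the helpers from the earlier sketches: the PSM M_E of Q(√2,√3) is rebuilt from the multiplication table and the three conditions of Theorem 6.5 are checked (condition (iii) only for a sample point, since a floating-point check cannot cover all x).

```python
lam_E = np.zeros((4, 4, 4))
for (a, b), (c, idx) in {
    (0, 0): (1, 0), (0, 1): (1, 1), (0, 2): (1, 2), (0, 3): (1, 3),
    (1, 1): (2, 0), (1, 2): (1, 3), (1, 3): (2, 2),
    (2, 2): (3, 0), (2, 3): (3, 1), (3, 3): (6, 0)}.items():
    lam_E[a, b, idx] = lam_E[b, a, idx] = c       # e.g. √2 * √6 = 2√3, √3 * √6 = 3√2
M_E = psm_from_structure_constants(lam_E)

assert np.allclose(stp(M_E, M_E), M_E @ np.kron(np.eye(4), M_E))  # (i)  associativity
assert np.allclose(M_E, M_E @ swap_matrix(4, 4))                  # (ii) commutativity
x = np.array([[1.0], [2.0], [3.0], [4.0]])                        # a sample nonzero element
assert abs(np.linalg.det(stp(M_E, x))) > 1e-9                     # (iii) for this x
```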
Assume E is a Galois extension of F with [E:F]=k. As a convention, we assume the basis of E is (e_1=1,e_2,⋯,e_k). Then we have the following result:
Proposition 6.7. Let E be a Galois extension of F, and [E:F]=k. Denote the PSM of E by M_E=[M_E^1,M_E^2,⋯,M_E^k]∈F^{k×k^2}. Then M_E satisfies the following conditions:
(i) M_E^1=I_k;
(ii) Col_1(M_E^s)=δ_k^s, s=1,2,⋯,k.
Proof. It is clear that in vector form e_i=δ_k^i. By definition, Col_i(M_E^1) is the coordinate vector of e_1∗e_i=1∗e_i=e_i, that is, Col_i(M_E^1)=δ_k^i, i=1,⋯,k. Hence M_E^1=I_k. The argument for condition (ii) is the same, since Col_1(M_E^s) is the coordinate vector of e_s∗e_1=e_s.
Since we can add i to R to generate another field C, it is a natural question: Is it possible to add some new numbers to C to generate a new field F such that F is a Galois extension of C? The answer is "No" †.
†W. Li [15] mentioned that in 1861 Weierstrass proved that C is the only finite field extension over R.
As an application of the PSM of finite extension, we prove the following result:
Theorem 6.8. There is no nontrivial Galois extension of C.
Proof. We prove it by contradiction. Assume there exists a Galois extension with [E:C]=k≥2, and let M_E be the PSM of E. Choose x^*=[a, 1_{k−1}^T]^T, where 1_{k−1} is the column vector of k−1 ones. Using (6.1) and the first requirement of Proposition 6.7, we have
p(a,1,⋯,1) = det(M_E⋉x^*) = det(aI_k + ∑_{s=2}^k M_E^s) = a^k + LOT(a),  (6.3)
where LOT(a) stands for lower-order terms in a. By the fundamental theorem of algebra, the equation a^k+LOT(a)=0 has a solution a_0∈C. For a=a_0, the nonzero element x^* is not invertible, which is a contradiction.
Remark 6.9. Using a similar argument, it is easy to show that there is no extension of odd degree k=2s+1>1 over R, because the corresponding equation (6.3) always has a real solution. This method is, however, not applicable to the case k=2s.
7. Numerical algorithm for Segre quaternion
7.1. Invertibility of Segre quaternion
The Segre quaternion, denoted by Q_S, is commutative. It is defined as Q_S={x_1+x_2I+x_3J+x_4K | x_1,x_2,x_3,x_4∈R}. The product is bilinear over R and determined by the following rules:
Then the PSM of QS is easily calculated as follows:
A straightforward computation shows that M_{Q_S} satisfies the first two requirements of Theorem 6.5. Therefore, Q_S is commutative and associative. Next, we calculate det(M_{Q_S}⋉x) for x=(x_1,x_2,x_3,x_4)^T.
Hence, x=x_1+x_2I+x_3J+x_4K is invertible, if and only if, [x_1, x_2]^T≠±[x_3, −x_4]^T. Then we know that every element of Q_S is invertible except on a zero-measure set Ω, where Ω={x∈R^4 | (x_1,x_2)=±(x_3,−x_4)}.
From above argument, we obtain the following result:
Proposition 7.1. The commutative quaternion Q_S is (1) commutative and (2) associative, and (3) every element of Q_S∖Ω is invertible.
Next, we consider the dis-numeralization of Q_S. The following is obvious:
Proposition 7.2. The dis-numeralization of Q_S is a symmetric cross product over R^3, denoted by ×̄, whose PSM is
7.2. Linear systems over QS
Denote by Q_S^{m×n} the set of m×n matrices with entries in Q_S.
Definition 7.3. Let M∈Q_S^{n×n}. M is said to be non-singular (or invertible) if det(M)∉Ω, where det(M) is computed in the commutative algebra Q_S.
Proposition 7.4. If M∈Q_S^{n×n} is invertible, then there exists a matrix M^{−1}∈Q_S^{n×n} such that M∗M^{−1}=M^{−1}∗M=I_n.
Proof. Since Q_S is commutative and associative, M^{−1} can be constructed exactly as for matrices over R or C (for instance, via the adjugate matrix). The uniqueness is also trivial.
In numerical computation it is important to verify whether an element of an algebra has an inverse and, if the answer is "yes", to calculate it. Recalling the proof of Theorem 6.5, the following result is obvious:
Proposition 7.5. Assume A=(V,∗) is a k dimensional commutative algebra with M_A as its PSM. Then x∈V is invertible, if and only if, M_A⋉x is invertible. Moreover, x^{−1}=(M_A⋉x)^{−1}δ_k^1.
Example 7.6. Recall Example 6.6 again. Then M_A can be used to calculate the inverse of any x≠0 using the proposition above. For example, let x=1+√2−√3−√6. In vector form it becomes x=(1,1,−1,−1)^T. Then
x^{−1} = (M_A⋉x)^{−1}δ_4^1 = (0.5,−0.5,0.5,−0.5)^T.
Back in scalar form, we have x^{−1}=0.5(1−√2+√3−√6).
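This computation can be reproduced with the helpers above; M_E is the PSM built in the sketch after Example 6.6.

```python
x = np.array([[1.0], [1.0], [-1.0], [-1.0]])                      # 1 + sqrt2 - sqrt3 - sqrt6
x_inv = np.linalg.inv(stp(M_E, x)) @ np.eye(4)[:, [0]]            # Proposition 7.5
print(x_inv.ravel())                                              # [ 0.5 -0.5  0.5 -0.5 ]
assert np.allclose(stp(stp(M_E, x), x_inv), np.eye(4)[:, [0]])    # x ∗ x^{-1} = 1
```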
Now we can calculate the inverse of a matrix over Q_S.
Example 7.7. [13] Given
in vector form we have
It is straightforward to calculate that
Then M^{−1}=(1/det(M))M^∗ := B=(b_{ij}),
which are the same as in [13].
Finally, we consider a linear system Ax=b, where A∈Q_S^{n×n} and b∈Q_S^{n×1}. We have the following result.
Proposition 7.8. Assume A is non-singular. Then the system Ax=b has the unique solution x=A^{−1}∗b.
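One way (our illustration, not necessarily the algorithm of [13]) to implement this numerically for a general k dimensional commutative, associative algebra is to replace each entry a_{ij} by its k×k multiplication matrix M⋉a_{ij} and solve the resulting real linear system; the demo below uses Q(√2,√3) and the helpers defined earlier, with hypothetical coefficient data.

```python
def solve_linear_system(M: np.ndarray, A: np.ndarray, b: np.ndarray) -> np.ndarray:
    """A: n x n x k coefficient array, b: n x k right-hand side; returns x: n x k with A x = b."""
    n, _, k = A.shape
    big = np.zeros((n * k, n * k))
    for i in range(n):
        for j in range(n):
            big[i * k:(i + 1) * k, j * k:(j + 1) * k] = stp(M, A[i, j].reshape(k, 1))
    return np.linalg.solve(big, b.reshape(n * k)).reshape(n, k)

A = np.zeros((2, 2, 4)); b = np.zeros((2, 4))
A[0, 0] = [1, 1, 0, 0]; A[0, 1] = [0, 0, 1, 0]     # entries written on the basis (1, √2, √3, √6)
A[1, 0] = [0, 1, 0, 0]; A[1, 1] = [2, 0, 0, 1]
b[0] = [1, 0, 0, 0];    b[1] = [0, 0, 1, 0]
x = solve_linear_system(M_E, A, b)                 # coordinate rows of x_1, x_2 in Q(√2, √3)
```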
Example 7.9. Consider a linear system Ax=b, where A=(a_{i,j})∈Q_S^{3×3} with
b=(b_1,b_2,b_3)^T∈Q_S^{3×1} with
Then it is easy to calculate that
The adjugate matrix of A is A^∗=(b_{i,j}), where
Finally, the solution is x=(1/det(A))A^∗b=(x_1,x_2,x_3)^T, where
A direct computation shows that the solution is correct.
8. Conclusions
Two classes of algebras, called numerical and non-numerical algebras, were proposed and investigated. The conversions between these two kinds of algebras were established. Particularly, it was pointed out that the cross product on R^3 (Cr) and the quaternion (Q) are a pair of representatives: Cr is a non-numerical algebra and Q is a numerical algebra; properly numeralizing Cr yields Q, and dis-numeralizing Q yields Cr. The numeralization of vector spaces was also investigated, and it was pointed out that Einstein's four dimensional space-time is a numeralization of this kind. Then the condition for a numerical algebra to be a field was investigated. Finally, as a commutative quaternion, the Segre quaternion Q_S was considered, and its set of non-invertible elements was revealed. Non-singular matrices and solutions of linear systems over Q_S were also investigated.
The basic tool in this investigation is the STP. Using the STP, the PSM of an algebra was proposed, which contains all the information about the algebra. In fact, the PSM provides a convenient framework for the whole investigation.
In a word, this paper establishes a bridge connecting a field with the algebras (including vector spaces) over it.
Acknowledgment
This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grants 61773371, 61733018, 61877036, 62073315, and the Natural Science Fund of Shandong Province under Grant ZR2019MF002.
Conflict of interest
The authors declare that they have no conflict of interest related to this work.