Research article

The integer part of nonlinear forms with prime variables

  • Received: 17 August 2021 Accepted: 28 September 2021 Published: 20 October 2021
  • MSC : 11D75, 11P55

  • In this paper, we discuss the problem of whether the integer part of nonlinear forms with prime variables represents primes infinitely often. We prove that, under suitable conditions, there exist infinitely many primes $p_j,p$ such that $[\lambda_1p_1^2+\lambda_2p_2^2+\lambda_3p_3^k]=p$ and $[\lambda_1p_1^3+\cdots+\lambda_4p_4^3+\lambda_5p_5^k]=p$ with $k\ge 2$ and $k\ge 3$, respectively, which improves the author's earlier results.

    Citation: Weiping Li, Guohua Chen. The integer part of nonlinear forms with prime variables[J]. AIMS Mathematics, 2022, 7(1): 1147-1154. doi: 10.3934/math.2022067




    In this paper, for the convenience of expression, we first introduce some notation. $\mathbb{R}$, $\mathbb{R}^m$ and $\mathbb{R}^{m\times n}$ stand for the sets of all real numbers, $m$-dimensional real column vectors and $m\times n$ real matrices, respectively. $\mathbb{C}$ and $\mathbb{C}^{m\times n}$ stand for the sets of all complex numbers and $m\times n$ complex matrices, respectively. $\mathbf{HC}^{m\times m}$ and $\mathbf{AHC}^{m\times m}$ stand for the sets of all $m\times m$ complex Hermitian and anti-Hermitian matrices, respectively. $\mathbf{SR}^{m\times m}$ and $\mathbf{ASR}^{m\times m}$ stand for the sets of all $m\times m$ real symmetric and anti-symmetric matrices, respectively. $I_m$ is the $m\times m$ identity matrix, and $\delta_m^k$ is the $k$-th column of $I_m$, $k=1,2,\ldots,m$. $B^{\dagger}$ and $B^T$ represent the Moore-Penrose inverse and the transpose of the matrix $B$, respectively. For the matrix $B=(\beta_1,\beta_2,\ldots,\beta_n)\in\mathbb{R}^{m\times n}$, $vec(B)$ represents the $mn$-dimensional column vector $(\beta_1^T,\beta_2^T,\ldots,\beta_n^T)^T$. $B\otimes C$ represents the Kronecker product of the matrices $B$ and $C$. $\|\cdot\|$ represents the 2-norm of a vector or the Frobenius norm of a matrix. For a matrix $C$, $row_i(C)$ and $col_j(C)$ represent the $i$-th row and the $j$-th column of $C$, respectively. $rand(m,n)$ is the MATLAB function generating an $m\times n$ random matrix with entries uniformly distributed on $(0,1)$.

    Linear matrix equations are widely used in applied mathematics, computational mathematics, computer science, control theory, signal and color image processing and other fields; they have aroused the interest of many scholars, and some valuable results have been achieved [1,2,3,4,5,6,7,8]. Direct methods [9,10,11,12,13,14] and iterative methods [15,16,17,18,19,20] are the two common approaches to solving linear matrix equations.

    In this paper, we are interested in the following generalized Sylvester matrix equation

    $CXD+EXF=G, \qquad (1.1)$

    in which $C,D,E,F,G$ are known matrices and $X$ is an unknown matrix. The matrix equation (1.1) has been widely studied over different number fields in recent years. Many scholars have proposed iterative methods for different solutions of the matrix equation (1.1) [21,22,23,24,25,26]. For the direct method, some meaningful conclusions have also been obtained for the matrix equation (1.1). For example, Yuan et al. [27] gave the expression of the minimal norm least squares Hermitian solution of the complex matrix equation (1.1) by a product for matrices and vectors. Zhang et al. [28] studied the least squares Hermitian solutions of the complex matrix equation (1.1) by the real representations of complex matrices. Yuan [29] proposed the expressions of the least squares pure imaginary solutions and real solutions of the quaternion matrix equation (1.1) by using the complex representations of quaternion matrices and the Moore-Penrose inverse. Wang et al. [30] gave some necessary and sufficient conditions for the complex constraint generalized Sylvester matrix equations to have a common solution by the rank method, and obtained the expression of the general common solution. Yuan [31] solved the mixed complex Sylvester matrix equations by the generalized singular-value decomposition, and gave the explicit expression of the general solution. Yuan et al. [32] studied the Hermitian solution of the split quaternion matrix equation (1.1). Kyrchei [33] represented the solution of the quaternion matrix equation (1.1) by quaternion row-column determinants.

    In the process of solving the linearization problem of nonlinear systems, Cheng et al. [34] proposed the theory of the semi-tensor product of matrices. The semi-tensor product breaks through the limitation of dimension matching and realizes quasi-commutativity; it is a generalization of the traditional matrix multiplication. It has now been applied to Boolean networks [35], graph theory [36], game theory [37], logical systems [38] and so on.

    To the best of our knowledge, there is no reference on the study of the complex matrix equation (1.1) by using the semi-tensor product method. Therefore, in this paper, we will use the real vector representations of complex matrices and the semi-tensor product of matrices to study the following two problems.

    Problem 1. Suppose $C,E\in\mathbb{C}^{m\times n}$, $D,F\in\mathbb{C}^{n\times p}$, $G\in\mathbb{C}^{m\times p}$ and

    $L_H=\{X \mid X\in\mathbf{HC}^{n\times n},\ \|CXD+EXF-G\|=\min\}.$

    Find $X_{HC}\in L_H$ satisfying $\|X_{HC}\|=\min_{X\in L_H}\|X\|$.

    Problem 2. Suppose $C,E\in\mathbb{C}^{m\times n}$, $D,F\in\mathbb{C}^{n\times p}$, $G\in\mathbb{C}^{m\times p}$ and

    $L_{AH}=\{X \mid X\in\mathbf{AHC}^{n\times n},\ \|CXD+EXF-G\|=\min\}.$

    Find $X_{AHC}\in L_{AH}$ satisfying $\|X_{AHC}\|=\min_{X\in L_{AH}}\|X\|$.

    $X_{HC}$ and $X_{AHC}$ are called the minimal norm least squares Hermitian solution and the minimal norm least squares anti-Hermitian solution of the complex matrix equation (1.1), respectively.

    The rest of this paper is arranged as follows. In Section 2, some preliminary results are presented for solving Problems 1 and 2. In Section 3, the solutions of Problems 1 and 2 are obtained by the real vector representations of complex matrices and the semi-tensor product of matrices. In Section 4, two numerical algorithms for solving Problems 1 and 2 are proposed, and two numerical examples are given to show the effectiveness of the proposed algorithms. In Section 5, the paper is briefly summarized.

    In this section, we review and present some preliminary results of solving Problems 1 and 2.

    Definition 2.1. ([39]) Let $B\in\mathbb{R}^{m\times n}$, $C\in\mathbb{R}^{s\times t}$. The semi-tensor product of $B$ and $C$ is defined as

    $B\ltimes C=(B\otimes I_{q/n})(C\otimes I_{q/s}),$

    where $q$ is the least common multiple of $n$ and $s$.
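Definition 2.1 can be checked directly in code. The following NumPy sketch (Python in place of the paper's MATLAB; the helper name `stp` is ours, not from the paper) implements $B\ltimes C$ via Kronecker products and confirms that it reduces to the ordinary product when the inner dimensions match.

```python
import numpy as np
from math import lcm

def stp(B, C):
    """Semi-tensor product B |x C = (B kron I_{q/n})(C kron I_{q/s}), q = lcm(n, s)."""
    n, s = B.shape[1], C.shape[0]
    q = lcm(n, s)
    return np.kron(B, np.eye(q // n)) @ np.kron(C, np.eye(q // s))

# When n = s, the semi-tensor product is the ordinary matrix product.
B = np.array([[1., 2.], [3., 4.]])
C = np.array([[5., 6., 7.], [8., 9., 10.]])
assert np.allclose(stp(B, C), B @ C)

# When n != s, both factors are padded up to q = lcm(n, s): here (2x2) |x (4x1) -> (4x1).
v = np.arange(4.).reshape(4, 1)
assert stp(B, v).shape == (4, 1)
```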

    In Definition 2.1, when $n=s$, the semi-tensor product of $B$ and $C$ reduces to the traditional matrix product. The semi-tensor product has the following properties.

    Lemma 2.2. ([39]) Suppose $A,B,C,D,E$ are real matrices of appropriate orders. Then the following conclusions hold:

    (1) $(A\ltimes B)\ltimes C=A\ltimes(B\ltimes C)$.

    (2) $A\ltimes(D+E)=A\ltimes D+A\ltimes E$, $(D+E)\ltimes A=D\ltimes A+E\ltimes A$.

    Lemma 2.3. ([39]) Let $\alpha\in\mathbb{R}^m$ and $B\in\mathbb{R}^{p\times q}$; then $\alpha\ltimes B=(I_m\otimes B)\ltimes\alpha$.
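Lemma 2.3 (quasi-commutativity of a column vector with a matrix) can likewise be verified numerically. The sketch below redefines the `stp` helper for self-containment; the concrete shapes are our own choices, not from the paper.

```python
import numpy as np
from math import lcm

def stp(B, C):
    """Semi-tensor product (B kron I_{q/n})(C kron I_{q/s}), q = lcm(n, s)."""
    n, s = B.shape[1], C.shape[0]
    q = lcm(n, s)
    return np.kron(B, np.eye(q // n)) @ np.kron(C, np.eye(q // s))

rng = np.random.default_rng(0)
m, p, t = 3, 4, 2
alpha = rng.standard_normal((m, 1))   # column vector in R^m
B = rng.standard_normal((p, t))

# Lemma 2.3: alpha |x B = (I_m kron B) |x alpha
lhs = stp(alpha, B)
rhs = stp(np.kron(np.eye(m), B), alpha)
assert np.allclose(lhs, rhs)
```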

    Lemma 2.3 reflects the quasi commutativity of vector and matrix. In order to realize the commutativity between vectors, the following swap matrix is important.

    Definition 2.4. ([39]) The square matrix

    $S_{[m,n]}=(I_n\otimes\delta_m^1,\ I_n\otimes\delta_m^2,\ \ldots,\ I_n\otimes\delta_m^m)$

    is called the $(m,n)$-dimensional swap matrix.

    Lemma 2.5. ([39]) Let $\alpha\in\mathbb{R}^m$, $\beta\in\mathbb{R}^n$; then

    $S_{[m,n]}\ltimes\alpha\ltimes\beta=\beta\ltimes\alpha,$

    where $S_{[m,n]}$ is the same as that in Definition 2.4.
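A small NumPy check of Definition 2.4 and Lemma 2.5 (the `stp` and `swap_matrix` helper names are ours):

```python
import numpy as np
from math import lcm

def stp(B, C):
    n, s = B.shape[1], C.shape[0]
    q = lcm(n, s)
    return np.kron(B, np.eye(q // n)) @ np.kron(C, np.eye(q // s))

def swap_matrix(m, n):
    """S_[m,n] = (I_n kron d_m^1, I_n kron d_m^2, ..., I_n kron d_m^m)."""
    Im = np.eye(m)
    return np.hstack([np.kron(np.eye(n), Im[:, [i]]) for i in range(m)])

rng = np.random.default_rng(0)
m, n = 3, 4
alpha = rng.standard_normal((m, 1))
beta = rng.standard_normal((n, 1))

# Lemma 2.5: S_[m,n] |x alpha |x beta = beta |x alpha
assert np.allclose(swap_matrix(m, n) @ stp(alpha, beta), stp(beta, alpha))
```

As a sanity check, $S_{[m,n]}$ is a permutation matrix, so $S_{[m,n]}S_{[m,n]}^T=I_{mn}$.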

    To study Problems 1 and 2, we define the real vector representations of complex number, complex vector and complex matrix, and give their properties.

    Let $a=a_1+a_2i$ be a complex number, where $a_1,a_2\in\mathbb{R}$. The column vector

    $\vec{a}=\begin{pmatrix}a_1\\a_2\end{pmatrix}$

    is defined as the real vector representation of the complex number $a$.

    Lemma 2.6. ([40]) Let $a,b\in\mathbb{C}$; then $\overrightarrow{ab}=W\ltimes\vec{a}\ltimes\vec{b}$, where

    $W=\begin{pmatrix}1&0&0&-1\\0&1&1&0\end{pmatrix}.$
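Lemma 2.6 encodes complex multiplication: $\vec{a}\ltimes\vec{b}=(a_1b_1,a_1b_2,a_2b_1,a_2b_2)^T$, and $W$ maps this to $(\operatorname{Re}(ab),\operatorname{Im}(ab))^T$, which forces the sign pattern above. A NumPy check (helper names are ours):

```python
import numpy as np
from math import lcm

def stp(B, C):
    n, s = B.shape[1], C.shape[0]
    q = lcm(n, s)
    return np.kron(B, np.eye(q // n)) @ np.kron(C, np.eye(q // s))

def rvec(z):
    """Real vector representation of a complex number."""
    return np.array([[z.real], [z.imag]])

W = np.array([[1., 0., 0., -1.],
              [0., 1., 1., 0.]])

a, b = 1.5 - 2.0j, -0.5 + 3.0j
# Lemma 2.6: vec(ab) = W |x vec(a) |x vec(b)
assert np.allclose(W @ stp(rvec(a), rvec(b)), rvec(a * b))
```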

    Suppose $\alpha=(x_1,x_2,\ldots,x_n)$ and $\beta=(y_1,y_2,\ldots,y_m)^T$ are two complex vectors. The column vectors

    $\vec{\alpha}=\begin{pmatrix}\vec{x}_1\\\vec{x}_2\\\vdots\\\vec{x}_n\end{pmatrix},\quad \vec{\beta}=\begin{pmatrix}\vec{y}_1\\\vec{y}_2\\\vdots\\\vec{y}_m\end{pmatrix}$

    are defined as the real vector representations of the complex vectors $\alpha,\beta$, respectively. We can easily get the following properties of the real vector representations of complex vectors:

    $\vec{x}_i=(\delta_n^i)^T\ltimes\vec{\alpha},\quad \vec{y}_j=(\delta_m^j)^T\ltimes\vec{\beta},$

    in which $i=1,2,\ldots,n$, $j=1,2,\ldots,m$.

    Lemma 2.7. ([40]) Suppose $\alpha=(x_1,x_2,\ldots,x_n)$ and $\beta=(y_1,y_2,\ldots,y_n)^T$ are two complex vectors, and $W$ is the same as that in Lemma 2.6. Then

    $\overrightarrow{\alpha\beta}=F_n\ltimes\vec{\alpha}\ltimes\vec{\beta},$

    in which $F_n=W\ltimes\sum_{i=1}^{n}[(\delta_n^i)^T\ltimes(I_{2n}\otimes(\delta_n^i)^T)]$.

    Let $A\in\mathbb{C}^{m\times n}$. The column vectors

    $A^c=\begin{pmatrix}\overrightarrow{col_1(A)}\\\overrightarrow{col_2(A)}\\\vdots\\\overrightarrow{col_n(A)}\end{pmatrix},\quad A^r=\begin{pmatrix}\overrightarrow{row_1(A)}\\\overrightarrow{row_2(A)}\\\vdots\\\overrightarrow{row_m(A)}\end{pmatrix}$

    are defined as the real column and row vector representations of $A$, respectively. They satisfy

    $\overrightarrow{col_i(A)}=(\delta_n^i)^T\ltimes A^c,\quad \overrightarrow{row_j(A)}=(\delta_m^j)^T\ltimes A^r,$

    in which $i=1,2,\ldots,n$, $j=1,2,\ldots,m$.

    Further, the following properties also hold.

    Lemma 2.8. Let $A,B\in\mathbb{C}^{m\times n}$. Then

    (1) $(A\pm B)^c=A^c\pm B^c$, $(A\pm B)^r=A^r\pm B^r$;

    (2) $\|A\|=\|A^c\|=\|A^r\|$.

    Proof. (1) Notice that

    $col_j(A\pm B)=col_j(A)\pm col_j(B),\ j=1,2,\ldots,n,\qquad row_i(A\pm B)=row_i(A)\pm row_i(B),\ i=1,2,\ldots,m.$

    Thus, we obtain

    $(A\pm B)^c=\begin{pmatrix}\overrightarrow{col_1(A\pm B)}\\\vdots\\\overrightarrow{col_n(A\pm B)}\end{pmatrix}=\begin{pmatrix}\overrightarrow{col_1(A)}\pm\overrightarrow{col_1(B)}\\\vdots\\\overrightarrow{col_n(A)}\pm\overrightarrow{col_n(B)}\end{pmatrix}=A^c\pm B^c,\qquad (A\pm B)^r=\begin{pmatrix}\overrightarrow{row_1(A\pm B)}\\\vdots\\\overrightarrow{row_m(A\pm B)}\end{pmatrix}=\begin{pmatrix}\overrightarrow{row_1(A)}\pm\overrightarrow{row_1(B)}\\\vdots\\\overrightarrow{row_m(A)}\pm\overrightarrow{row_m(B)}\end{pmatrix}=A^r\pm B^r,$

    which shows that (1) holds.

    (2) Because $\|\overrightarrow{row_i(A)}\|^2=\|row_i(A)\|^2$ and $\|\overrightarrow{col_j(A)}\|^2=\|col_j(A)\|^2$, we get

    $\|A\|^2=\sum_{i=1}^{m}\|row_i(A)\|^2=\sum_{i=1}^{m}\|\overrightarrow{row_i(A)}\|^2=\|A^r\|^2,\qquad \|A\|^2=\sum_{j=1}^{n}\|col_j(A)\|^2=\sum_{j=1}^{n}\|\overrightarrow{col_j(A)}\|^2=\|A^c\|^2.$

    Thus (2) holds.
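The representations $A^c$, $A^r$ and both parts of Lemma 2.8 are easy to check numerically. In the sketch below, `cvec`/`rvecm` are our own helper names: each complex entry is expanded to its (real, imaginary) pair, column by column for $A^c$ and row by row for $A^r$.

```python
import numpy as np

def cvec(A):
    """Real column vector representation A^c: interleave Re/Im down each column."""
    cols = [np.column_stack([A[:, j].real, A[:, j].imag]).ravel() for j in range(A.shape[1])]
    return np.concatenate(cols).reshape(-1, 1)

def rvecm(A):
    """Real row vector representation A^r: interleave Re/Im along each row."""
    rows = [np.column_stack([A[i, :].real, A[i, :].imag]).ravel() for i in range(A.shape[0])]
    return np.concatenate(rows).reshape(-1, 1)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))
B = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))

# Lemma 2.8(1): linearity of the representations
assert np.allclose(cvec(A + B), cvec(A) + cvec(B))
assert np.allclose(rvecm(A - B), rvecm(A) - rvecm(B))
# Lemma 2.8(2): ||A|| = ||A^c|| = ||A^r|| (Frobenius norm)
assert np.isclose(np.linalg.norm(A), np.linalg.norm(cvec(A)))
assert np.isclose(np.linalg.norm(A), np.linalg.norm(rvecm(A)))
```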

    Lemma 2.9. Let $A\in\mathbb{C}^{m\times n}$, $B\in\mathbb{C}^{n\times p}$, and let $F_n$ be the same as that in Lemma 2.7. Then

    $(AB)^c=K_{mnp}\ltimes A^r\ltimes B^c,$

    in which

    $K_{mnp}=\begin{pmatrix}K_{mnp}^1\\K_{mnp}^2\\\vdots\\K_{mnp}^p\end{pmatrix},\quad K_{mnp}^i=\begin{pmatrix}F_n\ltimes(\delta_m^1)^T\ltimes(I_{2mn}\otimes(\delta_p^i)^T)\\F_n\ltimes(\delta_m^2)^T\ltimes(I_{2mn}\otimes(\delta_p^i)^T)\\\vdots\\F_n\ltimes(\delta_m^m)^T\ltimes(I_{2mn}\otimes(\delta_p^i)^T)\end{pmatrix},\quad i=1,2,\ldots,p.$

    Proof. Partition the matrices $A,B$ into the following forms:

    $A=\begin{pmatrix}row_1(A)\\row_2(A)\\\vdots\\row_m(A)\end{pmatrix},\quad B=(col_1(B),col_2(B),\ldots,col_p(B)).$

    By Lemma 2.3 and Lemma 2.7, we obtain

    $\overrightarrow{row_i(A)col_j(B)}=F_n\ltimes\overrightarrow{row_i(A)}\ltimes\overrightarrow{col_j(B)}=F_n\ltimes(\delta_m^i)^T\ltimes A^r\ltimes(\delta_p^j)^T\ltimes B^c=F_n\ltimes(\delta_m^i)^T\ltimes[I_{2mn}\otimes(\delta_p^j)^T]\ltimes A^r\ltimes B^c.$

    Thus, stacking the entries $row_i(A)col_j(B)$ of $AB$ column by column, we have

    $(AB)^c=\begin{pmatrix}\overrightarrow{row_1(A)col_1(B)}\\\vdots\\\overrightarrow{row_m(A)col_1(B)}\\\vdots\\\overrightarrow{row_1(A)col_p(B)}\\\vdots\\\overrightarrow{row_m(A)col_p(B)}\end{pmatrix}=\begin{pmatrix}F_n\ltimes(\delta_m^1)^T\ltimes(I_{2mn}\otimes(\delta_p^1)^T)\\\vdots\\F_n\ltimes(\delta_m^m)^T\ltimes(I_{2mn}\otimes(\delta_p^1)^T)\\\vdots\\F_n\ltimes(\delta_m^1)^T\ltimes(I_{2mn}\otimes(\delta_p^p)^T)\\\vdots\\F_n\ltimes(\delta_m^m)^T\ltimes(I_{2mn}\otimes(\delta_p^p)^T)\end{pmatrix}\ltimes A^r\ltimes B^c.$

    So Lemma 2.9 holds.

    Lemma 2.10. ([40]) Let $A\in\mathbb{C}^{m\times n}$, $B\in\mathbb{C}^{n\times p}$, and let $F_n$ be the same as that in Lemma 2.7. Then

    $(AB)^r=\tilde{K}_{mnp}\ltimes A^r\ltimes B^c,$

    in which

    $\tilde{K}_{mnp}=\begin{pmatrix}\tilde{K}_{mnp}^1\\\tilde{K}_{mnp}^2\\\vdots\\\tilde{K}_{mnp}^m\end{pmatrix},\quad \tilde{K}_{mnp}^j=\begin{pmatrix}F_n\ltimes(\delta_m^j)^T\ltimes(I_{2mn}\otimes(\delta_p^1)^T)\\F_n\ltimes(\delta_m^j)^T\ltimes(I_{2mn}\otimes(\delta_p^2)^T)\\\vdots\\F_n\ltimes(\delta_m^j)^T\ltimes(I_{2mn}\otimes(\delta_p^p)^T)\end{pmatrix},\quad j=1,2,\ldots,m.$

    In the last part of this section, we give vector characterizations of complex Hermitian and anti-Hermitian matrices.

    Lemma 2.11. Let $X=X_1+X_2i\in\mathbb{C}^{n\times n}$; then

    $X^c=M\begin{pmatrix}vec(X_1)\\vec(X_2)\end{pmatrix},$

    where $M=(\delta_{2n^2}^1,\delta_{2n^2}^3,\ldots,\delta_{2n^2}^{2n^2-1},\delta_{2n^2}^2,\delta_{2n^2}^4,\ldots,\delta_{2n^2}^{2n^2})$.

    Proof. Let $X_1=(x_{ij}^{(1)})_{n\times n}$, $X_2=(x_{ij}^{(2)})_{n\times n}$; then

    $vec(X_1)=(x_{11}^{(1)},x_{21}^{(1)},\ldots,x_{n1}^{(1)},\ldots,x_{1n}^{(1)},x_{2n}^{(1)},\ldots,x_{nn}^{(1)})^T,$
    $vec(X_2)=(x_{11}^{(2)},x_{21}^{(2)},\ldots,x_{n1}^{(2)},\ldots,x_{1n}^{(2)},x_{2n}^{(2)},\ldots,x_{nn}^{(2)})^T.$

    Notice that

    $(\delta_{2n^2}^1,\delta_{2n^2}^3,\ldots,\delta_{2n^2}^{2n^2-1})vec(X_1)=(x_{11}^{(1)},0,x_{21}^{(1)},0,\ldots,x_{n1}^{(1)},0,\ldots,x_{1n}^{(1)},0,x_{2n}^{(1)},0,\ldots,x_{nn}^{(1)},0)^T,$
    $(\delta_{2n^2}^2,\delta_{2n^2}^4,\ldots,\delta_{2n^2}^{2n^2})vec(X_2)=(0,x_{11}^{(2)},0,x_{21}^{(2)},\ldots,0,x_{n1}^{(2)},\ldots,0,x_{1n}^{(2)},0,x_{2n}^{(2)},\ldots,0,x_{nn}^{(2)})^T,$

    so we have

    $M\begin{pmatrix}vec(X_1)\\vec(X_2)\end{pmatrix}=(x_{11}^{(1)},x_{11}^{(2)},x_{21}^{(1)},x_{21}^{(2)},\ldots,x_{n1}^{(1)},x_{n1}^{(2)},\ldots,x_{1n}^{(1)},x_{1n}^{(2)},\ldots,x_{nn}^{(1)},x_{nn}^{(2)})^T=X^c.$

    So Lemma 2.11 holds.
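The permutation $M$ of Lemma 2.11 simply interleaves $vec(X_1)$ and $vec(X_2)$. A NumPy verification (column-major `vec` as in the paper; `cvec` is our helper for $X^c$):

```python
import numpy as np

def M_matrix(n):
    """M = (d^1, d^3, ..., d^{2n^2-1}, d^2, d^4, ..., d^{2n^2}), size 2n^2 x 2n^2."""
    I = np.eye(2 * n * n)
    return np.hstack([I[:, 0::2], I[:, 1::2]])

def cvec(X):
    """X^c: interleave Re/Im of the entries of X, stacked column by column."""
    v = X.flatten(order='F')   # column-major vec of the complex matrix
    return np.column_stack([v.real, v.imag]).ravel().reshape(-1, 1)

rng = np.random.default_rng(0)
n = 3
X1 = rng.standard_normal((n, n))
X2 = rng.standard_normal((n, n))
X = X1 + 1j * X2

stacked = np.concatenate([X1.flatten(order='F'), X2.flatten(order='F')]).reshape(-1, 1)
# Lemma 2.11: X^c = M (vec(X1); vec(X2))
assert np.allclose(M_matrix(n) @ stacked, cvec(X))
```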

    For a real matrix $X=(x_{ij})\in\mathbb{R}^{n\times n}$, we denote

    $\alpha_1=(x_{11},x_{21},\ldots,x_{n1}),\ \alpha_2=(x_{22},x_{32},\ldots,x_{n2}),\ \ldots,\ \alpha_{n-1}=(x_{(n-1)(n-1)},x_{n(n-1)}),\ \alpha_n=x_{nn},$
    $\beta_1=(x_{21},x_{31},\ldots,x_{n1}),\ \beta_2=(x_{32},x_{42},\ldots,x_{n2}),\ \ldots,\ \beta_{n-2}=(x_{(n-1)(n-2)},x_{n(n-2)}),\ \beta_{n-1}=x_{n(n-1)}.$

    $vecS(X)$ and $vecA(X)$ stand for the vectors

    $vecS(X)=(\alpha_1,\alpha_2,\ldots,\alpha_{n-1},\alpha_n)^T\in\mathbb{R}^{\frac{n(n+1)}{2}}, \qquad (2.1)$
    $vecA(X)=(\beta_1,\beta_2,\ldots,\beta_{n-2},\beta_{n-1})^T\in\mathbb{R}^{\frac{n(n-1)}{2}}. \qquad (2.2)$

    Lemma 2.12. ([7]) Let $X\in\mathbb{R}^{n\times n}$. Then the following conclusions hold.

    (1) $X\in\mathbf{SR}^{n\times n}\Leftrightarrow vec(X)=P_n\,vecS(X)$, where $P_n\in\mathbb{R}^{n^2\times\frac{n(n+1)}{2}}$ is the 0-1 matrix, with blocks built from the columns $\delta_n^k$ of $I_n$, that maps $vecS(X)$ of a symmetric $X$ to $vec(X)$ by duplicating each off-diagonal entry according to $x_{ij}=x_{ji}$.

    (2) $X\in\mathbf{ASR}^{n\times n}\Leftrightarrow vec(X)=Q_n\,vecA(X)$, where $Q_n\in\mathbb{R}^{n^2\times\frac{n(n-1)}{2}}$ is the analogous matrix with entries $0,\pm 1$, which expands $vecA(X)$ of an anti-symmetric $X$ to $vec(X)$ according to $x_{ij}=-x_{ji}$ and $x_{ii}=0$.
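Lemma 2.12's matrices $P_n$ and $Q_n$ can be generated programmatically from the index maps between $vec$ and $vecS$/$vecA$. The sketch below (all function names are ours) builds them and verifies both equivalences on random matrices.

```python
import numpy as np

def vecS(X):
    """Stack the lower-triangular part of X column by column (Eq. (2.1))."""
    n = X.shape[0]
    return np.concatenate([X[c:, c] for c in range(n)])

def vecA(X):
    """Stack the strictly lower-triangular part of X column by column (Eq. (2.2))."""
    n = X.shape[0]
    return np.concatenate([X[c + 1:, c] for c in range(n)])

def P_matrix(n):
    """vec(X) = P_n vecS(X) for symmetric X: entry x_ij comes from slot (max, min)."""
    P = np.zeros((n * n, n * (n + 1) // 2))
    sidx = lambda r, c: c * n - c * (c - 1) // 2 + (r - c)   # slot of x_rc, r >= c
    for j in range(n):
        for i in range(n):
            P[i + j * n, sidx(max(i, j), min(i, j))] = 1.0
    return P

def Q_matrix(n):
    """vec(X) = Q_n vecA(X) for anti-symmetric X (x_ij = -x_ji, zero diagonal)."""
    Q = np.zeros((n * n, n * (n - 1) // 2))
    aidx = lambda r, c: c * (n - 1) - c * (c - 1) // 2 + (r - c - 1)  # slot of x_rc, r > c
    for j in range(n):
        for i in range(n):
            if i > j:
                Q[i + j * n, aidx(i, j)] = 1.0
            elif i < j:
                Q[i + j * n, aidx(j, i)] = -1.0
    return Q

rng = np.random.default_rng(0)
n = 4
R = rng.standard_normal((n, n))
S = R + R.T                 # symmetric
A = R - R.T                 # anti-symmetric
assert np.allclose(S.flatten(order='F'), P_matrix(n) @ vecS(S))
assert np.allclose(A.flatten(order='F'), Q_matrix(n) @ vecA(A))
```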

    We can get the following results from Lemmas 2.11 and 2.12.

    Lemma 2.13. Suppose $X=X_1+X_2i\in\mathbb{C}^{n\times n}$.

    (1) If $X\in\mathbf{HC}^{n\times n}$, then $X^c=H\begin{pmatrix}vecS(X_1)\\vecA(X_2)\end{pmatrix}$, where $H=M\begin{pmatrix}P_n&0\\0&Q_n\end{pmatrix}$.

    (2) If $X\in\mathbf{AHC}^{n\times n}$, then $X^c=\tilde{H}\begin{pmatrix}vecA(X_1)\\vecS(X_2)\end{pmatrix}$, where $\tilde{H}=M\begin{pmatrix}Q_n&0\\0&P_n\end{pmatrix}$.

    Proof. For $X\in\mathbf{HC}^{n\times n}$, we have $X_1\in\mathbf{SR}^{n\times n}$ and $X_2\in\mathbf{ASR}^{n\times n}$. By Lemma 2.12, we obtain

    $vec(X_1)=P_n\,vecS(X_1),\quad vec(X_2)=Q_n\,vecA(X_2).$

    According to Lemma 2.11, we get

    $X^c=M\begin{pmatrix}vec(X_1)\\vec(X_2)\end{pmatrix}=M\begin{pmatrix}P_n\,vecS(X_1)\\Q_n\,vecA(X_2)\end{pmatrix}=M\begin{pmatrix}P_n&0\\0&Q_n\end{pmatrix}\begin{pmatrix}vecS(X_1)\\vecA(X_2)\end{pmatrix}=H\begin{pmatrix}vecS(X_1)\\vecA(X_2)\end{pmatrix},$

    which shows that (1) holds. Similarly, we can prove that (2) holds.

    In this section, by using the real vector representations of complex matrices and the semi-tensor product of matrices, we first transform the least squares problems of Problems 1 and 2 into the corresponding real least squares problems with free variables, and then obtain the expressions of the solutions of Problems 1 and 2. Further, we give the necessary and sufficient conditions for the complex matrix equation (1.1) to have Hermitian and anti-Hermitian solutions.

    Theorem 3.1. Let $C,E\in\mathbb{C}^{m\times n}$, $D,F\in\mathbb{C}^{n\times p}$, $G\in\mathbb{C}^{m\times p}$. $S_{[2np,2n^2]}$, $K_{mnp}$, $\tilde{K}_{mnn}$ and $H$ are the same as those in Definition 2.4, Lemma 2.9, Lemma 2.10 and Lemma 2.13, respectively. Denote

    $U=K_{mnp}\ltimes\tilde{K}_{mnn}\ltimes(C^r\ltimes S_{[2np,2n^2]}\ltimes D^c+E^r\ltimes S_{[2np,2n^2]}\ltimes F^c).$

    Then the set $L_H$ in Problem 1 can be expressed as

    $L_H=\{X \mid X^c=H(UH)^{\dagger}G^c+H[I_{n^2}-(UH)^{\dagger}(UH)]y,\ y\in\mathbb{R}^{n^2}\}, \qquad (3.1)$

    and the unique Hermitian solution $X_{HC}\in L_H$ of Problem 1 satisfies

    $(X_{HC})^c=H(UH)^{\dagger}G^c. \qquad (3.2)$

    Proof. By Lemma 2.5 and Lemmas 2.8-2.10, we obtain

    $\|CXD+EXF-G\|=\|(CXD+EXF-G)^c\|=\|(CXD)^c+(EXF)^c-G^c\|$
    $=\|K_{mnp}\ltimes(CX)^r\ltimes D^c+K_{mnp}\ltimes(EX)^r\ltimes F^c-G^c\|$
    $=\|K_{mnp}\ltimes\tilde{K}_{mnn}\ltimes C^r\ltimes X^c\ltimes D^c+K_{mnp}\ltimes\tilde{K}_{mnn}\ltimes E^r\ltimes X^c\ltimes F^c-G^c\|$
    $=\|K_{mnp}\ltimes\tilde{K}_{mnn}\ltimes(C^r\ltimes S_{[2np,2n^2]}\ltimes D^c+E^r\ltimes S_{[2np,2n^2]}\ltimes F^c)\ltimes X^c-G^c\|$
    $=\|U\ltimes X^c-G^c\|.$

    For the Hermitian matrix $X=X_1+X_2i$, by applying Lemma 2.13, we have

    $X^c=H\begin{pmatrix}vecS(X_1)\\vecA(X_2)\end{pmatrix}.$

    So we get

    $\|CXD+EXF-G\|=\left\|UH\begin{pmatrix}vecS(X_1)\\vecA(X_2)\end{pmatrix}-G^c\right\|.$

    Then the complex least squares problem of Problem 1 is converted into the real least squares problem with free variables

    $\min\left\|UH\begin{pmatrix}vecS(X_1)\\vecA(X_2)\end{pmatrix}-G^c\right\|.$

    The general solution of the above least squares problem is

    $\begin{pmatrix}vecS(X_1)\\vecA(X_2)\end{pmatrix}=(UH)^{\dagger}G^c+[I_{n^2}-(UH)^{\dagger}(UH)]y,\ y\in\mathbb{R}^{n^2}.$

    Then applying Lemma 2.13, we obtain that (3.1) holds. Therefore, the unique Hermitian solution $X_{HC}\in L_H$ of Problem 1 satisfies (3.2).
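Theorem 3.1's reduction can be cross-checked without the semi-tensor product machinery: using the classical identity $vec(CXD)=(D^T\otimes C)vec(X)$ and the parametrization $vec(X_1)=P_n\,vecS(X_1)$, $vec(X_2)=Q_n\,vecA(X_2)$, Problem 1 becomes an ordinary real least squares problem. The sketch below is our own construction, not the paper's algorithm; it recovers a known Hermitian solution in the square, generically unique setting of Example 4.1.

```python
import numpy as np

def P_matrix(n):
    # vec(X) = P_n vecS(X) for symmetric X
    P = np.zeros((n * n, n * (n + 1) // 2))
    sidx = lambda r, c: c * n - c * (c - 1) // 2 + (r - c)
    for j in range(n):
        for i in range(n):
            P[i + j * n, sidx(max(i, j), min(i, j))] = 1.0
    return P

def Q_matrix(n):
    # vec(X) = Q_n vecA(X) for anti-symmetric X
    Q = np.zeros((n * n, n * (n - 1) // 2))
    aidx = lambda r, c: c * (n - 1) - c * (c - 1) // 2 + (r - c - 1)
    for j in range(n):
        for i in range(n):
            if i > j:
                Q[i + j * n, aidx(i, j)] = 1.0
            elif i < j:
                Q[i + j * n, aidx(j, i)] = -1.0
    return Q

rng = np.random.default_rng(0)
n = 3
C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
D = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
E = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
F = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
R1, R2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
X_true = (R1 + R1.T) + 1j * (R2 - R2.T)          # Hermitian, as in Example 4.1
G = C @ X_true @ D + E @ X_true @ F

# vec(CXD + EXF) = (D^T kron C + F^T kron E) vec(X), column-major vec
Ac = np.kron(D.T, C) + np.kron(F.T, E)
P, Q = P_matrix(n), Q_matrix(n)
AP, AQ = Ac @ P, Ac @ Q
# Real/imaginary split; unknowns z = (vecS(X1); vecA(X2))
Areal = np.block([[AP.real, -AQ.imag], [AP.imag, AQ.real]])
breal = np.concatenate([G.flatten(order='F').real, G.flatten(order='F').imag])
z, *_ = np.linalg.lstsq(Areal, breal, rcond=None)

z1, z2 = z[:n * (n + 1) // 2], z[n * (n + 1) // 2:]
X_hat = (P @ z1).reshape(n, n, order='F') + 1j * (Q @ z2).reshape(n, n, order='F')
assert np.linalg.norm(X_hat - X_true) < 1e-8
```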

    Now, we give a necessary and sufficient condition for the complex matrix equation (1.1) to have a Hermitian solution, together with the expression of the general Hermitian solution.

    Corollary 3.2. Let $C,E\in\mathbb{C}^{m\times n}$, $D,F\in\mathbb{C}^{n\times p}$, $G\in\mathbb{C}^{m\times p}$. $H$ and $U$ are the same as those in Lemma 2.13 and Theorem 3.1, respectively. Then the necessary and sufficient condition for the complex matrix equation (1.1) to have a Hermitian solution is

    $(UH(UH)^{\dagger}-I_{2mp})G^c=0. \qquad (3.3)$

    If (3.3) holds, the Hermitian solution set of the complex matrix equation (1.1) is

    $S_H=\{X \mid X^c=H(UH)^{\dagger}G^c+H[I_{n^2}-(UH)^{\dagger}(UH)]y,\ y\in\mathbb{R}^{n^2}\}. \qquad (3.4)$

    Proof. The complex matrix equation (1.1) has a Hermitian solution if and only if there exists $X\in\mathbf{HC}^{n\times n}$ such that

    $\|CXD+EXF-G\|=0,$

    that is, $\min_{X\in\mathbf{HC}^{n\times n}}\|CXD+EXF-G\|=0$. According to Theorem 3.1, we have

    $\min_{X\in\mathbf{HC}^{n\times n}}\|CXD+EXF-G\|=\min\left\|UH\begin{pmatrix}vecS(X_1)\\vecA(X_2)\end{pmatrix}-G^c\right\|=\|UH(UH)^{\dagger}G^c-G^c\|=\|(UH(UH)^{\dagger}-I_{2mp})G^c\|.$

    Therefore, the necessary and sufficient condition for the complex matrix equation (1.1) to have a Hermitian solution is

    $(UH(UH)^{\dagger}-I_{2mp})G^c=0,$

    which illustrates that (3.3) holds. For $X\in\mathbf{HC}^{n\times n}$, we get

    $\|CXD+EXF-G\|=\left\|UH\begin{pmatrix}vecS(X_1)\\vecA(X_2)\end{pmatrix}-G^c\right\|.$

    Therefore, when the complex matrix equation (1.1) has a Hermitian solution, the Hermitian solutions of (1.1) satisfy the equation

    $UH\begin{pmatrix}vecS(X_1)\\vecA(X_2)\end{pmatrix}=G^c.$

    By Theorem 3.1, (3.4) is established.
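Condition (3.3) is the standard pseudoinverse consistency test for a linear system $Az=b$: the system is solvable if and only if $(AA^{\dagger}-I)b=0$. A generic NumPy illustration (a small random system of our own choosing, not the paper's specific $U$ and $H$):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 3))          # tall system: not every b is attainable
pinvA = np.linalg.pinv(A)
residual = lambda b: np.linalg.norm((A @ pinvA - np.eye(6)) @ b)

b_consistent = A @ rng.standard_normal(3)    # in the column space by construction
b_generic = rng.standard_normal(6)           # almost surely outside the column space

assert residual(b_consistent) < 1e-10
assert residual(b_generic) > 1e-6
```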

    Theorem 3.3. Let $C,E\in\mathbb{C}^{m\times n}$, $D,F\in\mathbb{C}^{n\times p}$, $G\in\mathbb{C}^{m\times p}$. $\tilde{H}$ and $U$ are the same as those in Lemma 2.13 and Theorem 3.1, respectively. Then the set $L_{AH}$ of Problem 2 can be expressed as

    $L_{AH}=\{X \mid X^c=\tilde{H}(U\tilde{H})^{\dagger}G^c+\tilde{H}[I_{n^2}-(U\tilde{H})^{\dagger}(U\tilde{H})]y,\ y\in\mathbb{R}^{n^2}\}, \qquad (3.5)$

    and the unique anti-Hermitian solution $X_{AHC}\in L_{AH}$ of Problem 2 satisfies

    $(X_{AHC})^c=\tilde{H}(U\tilde{H})^{\dagger}G^c. \qquad (3.6)$

    Proof. By Theorem 3.1, we obtain

    $\|CXD+EXF-G\|=\|U\ltimes X^c-G^c\|.$

    For $X=X_1+X_2i\in\mathbf{AHC}^{n\times n}$, we have $X^c=\tilde{H}\begin{pmatrix}vecA(X_1)\\vecS(X_2)\end{pmatrix}$. Thus we get

    $\|CXD+EXF-G\|=\left\|U\tilde{H}\begin{pmatrix}vecA(X_1)\\vecS(X_2)\end{pmatrix}-G^c\right\|.$

    The least squares problem $\min\left\|U\tilde{H}\begin{pmatrix}vecA(X_1)\\vecS(X_2)\end{pmatrix}-G^c\right\|$ has the general solution

    $\begin{pmatrix}vecA(X_1)\\vecS(X_2)\end{pmatrix}=(U\tilde{H})^{\dagger}G^c+[I_{n^2}-(U\tilde{H})^{\dagger}(U\tilde{H})]y,\ y\in\mathbb{R}^{n^2}.$

    By Lemma 2.13, (3.5) holds. Further, the unique anti-Hermitian solution $X_{AHC}\in L_{AH}$ of Problem 2 satisfies (3.6).

    Now, we give the necessary and sufficient condition for the complex matrix equation (1.1) to have an anti-Hermitian solution and the expression of the general anti-Hermitian solution. Because the method is similar to that of Corollary 3.2, we only state the conclusions and omit the derivation.

    Corollary 3.4. Let $C,E\in\mathbb{C}^{m\times n}$, $D,F\in\mathbb{C}^{n\times p}$, $G\in\mathbb{C}^{m\times p}$. $\tilde{H}$ and $U$ are the same as those in Lemma 2.13 and Theorem 3.1, respectively. Then the necessary and sufficient condition for the complex matrix equation (1.1) to have an anti-Hermitian solution is

    $(U\tilde{H}(U\tilde{H})^{\dagger}-I_{2mp})G^c=0. \qquad (3.7)$

    If (3.7) holds, the anti-Hermitian solution set of the complex matrix equation (1.1) is

    $S_{AH}=\{X \mid X^c=\tilde{H}(U\tilde{H})^{\dagger}G^c+\tilde{H}[I_{n^2}-(U\tilde{H})^{\dagger}(U\tilde{H})]y,\ y\in\mathbb{R}^{n^2}\}. \qquad (3.8)$

    Remark 1. The semi-tensor product of matrices provides a new method for solving linear matrix equations. The feature of this method is to first convert the complex matrices into the corresponding real vectors, and then use the quasi-commutativity of vectors to transform the complex matrix equation into a real linear system of the form $Ax=b$, from which the solution of the complex matrix equation is obtained. The method only involves real matrices and real vectors, so it is convenient in numerical calculation. Its weakness is that straightening a complex matrix into a real vector expands the dimension, so it is less convenient for complex matrix equations of higher order.

    Remark 2. In [27], the authors proposed a method for solving Problem 1 by a product for matrices and vectors. That method involves many complex matrix calculations, Moore-Penrose inverses and matrix inner products, which reduces the accuracy of the result to some degree. The numerical example in Section 4 will illustrate this.

    In this section, two numerical algorithms are first proposed to solve Problems 1 and 2, and then two numerical examples are given to show the effectiveness of the proposed algorithms. In the first example, we give the errors of the solutions of Problems 1 and 2. In the second example, we compare the accuracy of the solution of Problem 1 computed by Algorithm 4.1 and by the method in [27]. All calculations are implemented on an Intel Core i7-2600@3.40GHz/8GB computer using MATLAB R2021b.

    Algorithm 4.1. (The solution of Problem 1)

    (1) Input the matrices $C,E\in\mathbb{C}^{m\times n}$, $D,F\in\mathbb{C}^{n\times p}$, $G\in\mathbb{C}^{m\times p}$, and $W$, $S_{[2np,2n^2]}$, $P_n$, $Q_n$, $M$.

    (2) Generate $C^r,E^r,D^c,F^c,G^c$ and $F_n$, and then calculate $K_{mnp}$, $\tilde{K}_{mnn}$, $U$ and $H$.

    (3) Calculate the unique Hermitian solution $X_{HC}$ of Problem 1 by (3.2).

    Algorithm 4.2. (The solution of Problem 2)

    (1) Input the matrices $C,E\in\mathbb{C}^{m\times n}$, $D,F\in\mathbb{C}^{n\times p}$, $G\in\mathbb{C}^{m\times p}$, and $W$, $S_{[2np,2n^2]}$, $P_n$, $Q_n$, $M$.

    (2) Generate $C^r,E^r,D^c,F^c,G^c$ and $F_n$, and then calculate $K_{mnp}$, $\tilde{K}_{mnn}$, $U$ and $\tilde{H}$.

    (3) Calculate the unique anti-Hermitian solution $X_{AHC}$ of Problem 2 by (3.6).

    Example 4.1. Suppose $C,E\in\mathbb{C}^{m\times n}$, $D,F\in\mathbb{C}^{n\times p}$ are random matrices generated by MATLAB. Let $M_1=rand(n,n)$, $M_2=rand(n,n)$.

    (1) From $M_1,M_2$, we generate the matrix

    $X=(M_1+M_1^T)+(M_2-M_2^T)i,$

    and then $X\in\mathbf{HC}^{n\times n}$. Let $G=CXD+EXF$ and $m=n=p=k$. Here, all matrices are square, and the probability that the random matrices are nonsingular is 1. Therefore, (1.1) has a unique Hermitian solution $X$, which is also the unique solution of Problem 1. We calculate the solution $X_{HC}$ of Problem 1 by Algorithm 4.1. Let $k=2:10$ and the error $\varepsilon=\log_{10}(\|X_{HC}-X\|)$. The relation between the error $\varepsilon$ and $k$ is shown in Figure 1(a).

    Figure 1.  The errors of solving Problems 1 and 2.

    (2) Generate the matrix

    $X=(M_1-M_1^T)+(M_2+M_2^T)i,$

    and then $X\in\mathbf{AHC}^{n\times n}$. Let $G=CXD+EXF$ and $m=n=p=k$. Then (1.1) has a unique anti-Hermitian solution $X$, which is also the unique solution of Problem 2. We calculate the solution $X_{AHC}$ of Problem 2 by Algorithm 4.2. Let $k=2:10$ and the error $\varepsilon=\log_{10}(\|X_{AHC}-X\|)$. The relation between the error $\varepsilon$ and $k$ is shown in Figure 1(b).

    From Figure 1, we see that $\varepsilon<-12$, and thus the errors of solving Problems 1 and 2 are all no more than $10^{-12}$. This shows the effectiveness of Algorithms 4.1 and 4.2. In addition, panels (a) and (b) of Figure 1 differ little, which reflects that the computational effort for solving Problem 1 is almost the same as that for Problem 2. These observations are consistent with the theory, and therefore Figure 1 is reasonable.

    Example 4.2. Suppose

    $C=10\,rand(m,n)+20\,rand(m,n)i,\quad D=20\,rand(n,p)+10\,rand(n,p)i,$
    $E=20\,rand(m,n)+10\,rand(m,n)i,\quad F=10\,rand(n,p)+20\,rand(n,p)i.$

    $X$ is the same as that in Example 4.1. Let $G=CXD+EXF$. When $m=n=p=k$, $X$ is the unique solution of Problem 1. $X_1$ and $X_2$ represent the solutions of Problem 1 computed by Algorithm 4.1 and by the method in [27], respectively. Let $\epsilon_1=\|X_1-X\|$, $\epsilon_2=\|X_2-X\|$. Table 1 shows the errors $\epsilon_1,\epsilon_2$ for matrices of different orders.

    Table 1.  Error comparison of two methods in solving Problem 1.
    k ϵ1 ϵ2
    2 5.1179e-16 4.0656e-15
    3 3.8081e-15 1.5832e-13
    4 6.9372e-15 2.8004e-13
    5 3.1605e-14 2.0132e-12
    6 3.0276e-14 1.3684e-12
    7 5.8574e-14 2.7640e-12
    8 2.5821e-13 1.4964e-11
    9 3.1605e-13 1.1339e-11
    10 7.4086e-13 1.1213e-11


    Table 1 illustrates that the errors obtained by Algorithm 4.1 are smaller than those obtained by the method in [27]. This is because, in the process of solving Problem 1, the method of [27] involves many complex matrix calculations, Moore-Penrose inverses and matrix inner products, which reduces the calculation accuracy. Thus Algorithm 4.1 is more accurate and efficient.

    In this paper, we obtain the minimal norm least squares Hermitian solution and the minimal norm least squares anti-Hermitian solution of the complex matrix equation $CXD+EXF=G$ by the semi-tensor product of matrices. The numerical examples show that the proposed method is effective and accurate. The semi-tensor product of matrices provides a new idea for the study of matrix equations, and the method can also be applied to the study of solutions of many other types of linear matrix equations.

    The study is supported by the Scientific Research Foundation of Liaocheng University (No. 318011921), Discipline with Strong Characteristics of Liaocheng University - Intelligent Science and Technology (No. 319462208), and the Natural Science Foundation of Shandong Province of China (Nos. ZR2020MA053 and ZR2022MA030).

    The authors declare that there is no conflict of interest.



    [1] I. Danicic, On the integral part of a linear form with prime variables, Canadian J. Math., 18 (1966), 621–628. doi: 10.4153/CJM-1966-061-3
    [2] W. Li, T. Wang, Integral part of a nonlinear form with three squares of primes, Chinese Ann. Math., 32 (2011), 753–762. doi: 10.1007/s11464-011-0100-6
    [3] W. Li, B. Su, The integral part of a nonlinear form with five cubes of primes, Lith. Math. J., 53 (2013), 63–71. doi: 10.1007/s10986-013-9193-9
    [4] S. Srinivasan, A Diophantine inequality with prime variables, B. Aust. Math. Soc., 38 (1988), 57–66. doi: 10.1017/S0004972700027234
    [5] R. C. Vaughan, Diophantine approximation by prime numbers, I, P. Lond. Math. Soc., 28 (1974), 373–384. doi: 10.1112/plms/s3-28.2.373
    [6] R. C. Vaughan, Diophantine approximation by prime numbers, II, P. Lond. Math. Soc., 28 (1974), 385–401. doi: 10.1112/plms/s3-28.3.385
    [7] G. Harman, Trigonometric sums over primes I, Mathematika, 28 (1981), 249–254. doi: 10.1112/S0025579300010305
    [8] H. Davenport, K. F. Roth, The solubility of certain diophantine inequalities, Mathematika, 2 (1955), 81–96. doi: 10.1112/S0025579300000723
  • This article has been cited by:

    1. Jiaxin Lan, Jingpin Huang, Yun Wang, An E-extra iteration method for solving reduced biquaternion matrix equation AX+XB=C, 2024, 9, 2473-6988, 17578, 10.3934/math.2024854
    2. Qing-Wen Wang, Zi-Han Gao, Jia-Le Gao, A Comprehensive Review on Solving the System of Equations AX = C and XB = D, 2025, 17, 2073-8994, 625, 10.3390/sym17040625
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
