
Regularity and uniqueness of 3D compressible magneto-micropolar fluids

  • Received: 23 February 2024 Revised: 27 March 2024 Accepted: 16 April 2024 Published: 23 April 2024
  • MSC : 35B65, 35Q35, 76N10

• This article establishes the global existence and uniqueness of solutions for the 3D compressible magneto-micropolar fluid system with vacuum. Notably, in the context of small initial energy, we obtain a new result that requires lower regularity of the initial data than previously available.

    Citation: Mingyu Zhang. Regularity and uniqueness of 3D compressible magneto-micropolar fluids[J]. AIMS Mathematics, 2024, 9(6): 14658-14680. doi: 10.3934/math.2024713




    Let Cm×n and Z+ denote the set of all m×n complex matrices and the set of all positive integers, respectively. The symbols r(A) and Ind(A) stand for the rank and the index of ACn×n, respectively. For a matrix ACn×n, we assume that A0=In. Let Cn×nk be the set of all n×n complex matrices with index k. By CCMn we denote the set of all core matrices (or group invertible matrices), i.e.,

$\mathbb{C}^{\mathrm{CM}}_{n}=\{A \mid A\in\mathbb{C}^{n\times n},\ r(A)=r(A^{2})\}.$

    The Drazin inverse [1] of ACn×nk, denoted by AD, is the unique matrix XCn×n satisfying:

$XA^{k+1}=A^{k},\quad XAX=X\ \text{ and }\ AX=XA.$ (1.1)

In particular, when ACCMn, the matrix X that satisfies (1.1) is called the group inverse of A and is denoted by A#. The Drazin inverse has been widely applied in different fields of mathematics and its applications; here we mention only some of them. Perturbation theory and additive results for the Drazin inverse were investigated in [2,3,4,5]. In [6], algorithms for the computation of the Drazin inverse of a polynomial matrix were presented based on the discrete Fourier transform. Karampetakis and Stanimirović [7] presented two algorithms for the symbolic computation of the Drazin inverse of a given square one-variable polynomial matrix, which are effective with respect to CPU time and the elimination of redundant computations. Some representations of the W-weighted Drazin inverse were investigated, and the computational complexities of these representations were estimated, in [8]. Kyrchei [9] generalized the weighted Drazin inverse, the weighted DMP-inverse, and the weighted dual DMP-inverse [10,11,12] to matrices over the quaternion skew field and provided their determinantal representations by using noncommutative column and row determinants. In [13], the authors considered quaternion two-sided restricted matrix equations and gave their unique solutions in terms of the DMP-inverse and dual DMP-inverse. For interesting properties of different kinds of generalized inverses, see [14].
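Since the Drazin inverse recurs throughout the paper, a minimal numerical sketch may be useful. It relies on the classical representation A^D = A^l (A^{2l+1})^† A^l for any l ≥ Ind(A) (a standard fact not stated in this paper) and on NumPy's pseudoinverse; the helper name and the test matrix are ours.

```python
import numpy as np

def drazin_inverse(A, l):
    """Sketch: Drazin inverse via the classical formula A^D = A^l (A^(2l+1))^+ A^l,
    valid whenever l >= Ind(A); the caller supplies such an l."""
    Al = np.linalg.matrix_power(A, l)
    return Al @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * l + 1)) @ Al

# a small test matrix with Ind(A) = 2
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
X = drazin_inverse(A, 2)
print(np.allclose(X @ np.linalg.matrix_power(A, 3), np.linalg.matrix_power(A, 2)))  # X A^{k+1} = A^k
print(np.allclose(X @ A @ X, X))   # X A X = X
print(np.allclose(A @ X, X @ A))   # A X = X A
```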

    In 2018, Wang [15] introduced the weak group inverse of complex square matrices using the core-EP decomposition [16] and gave its certain characterizations.

    Definition 1.1. Let ACn×nk. Then the unique solution of the system

    is the weak group inverse of A denoted by A.

    Recently, there has been a huge interest in the weak group inverse. For example, Wang et al. [17] compared the weak group inverse with the group inverse of a matrix. In [18], the weak group inverse was introduced in *-rings and characterized by three equations (see also [19,20]). The weak group inverse in the setting of rectangular matrices was considered in [21]. In 2021, Zhou and Chen [19] introduced the m-weak group inverse in the ring and presented its different characterizations.

    Definition 1.2. Let R be a unitary ring with involution, aR and mZ+. If there exist xR and kZ+ such that

$xa^{k+1}=a^{k},\qquad ax^{2}=x,\qquad (a^{k})^{*}a^{m+1}x=(a^{m})^{*}a^{k},$

    then x is called the m-weak group inverse of a and in this case, a is m-weak group invertible.

    In general, the m-weak group inverse of a may not be unique. If the m-weak group inverse of a is unique, then it is denoted by am.

    In [22], we can find a relation between the weak core inverse and the m-weak group inverse as well as certain necessary and sufficient conditions that the Drazin inverse coincides with the m-weak group inverse of a complex matrix. It is interesting to note that X which satisfies (1.1) coincides with the m-weak group inverse on complex matrices, in which case X exists for every ACn×n and is unique.

    Now, we consider the system of equations

    (1.2)

Motivated by the above discussion, we introduce a new characterization of the m-weak group inverse related to (1.2) and prove the existence and uniqueness of a solution of (1.2) for every ACn×n. Some new characterizations of the m-weak group inverse are derived in terms of the range space, null space, rank equalities, and projectors. We present some representations of the m-weak group inverse involving some known generalized inverses and limit expressions, as well as certain relations between the m-weak group inverse and other generalized inverses. Finally, we consider a relation between the m-weak group inverse and a nonsingular bordered matrix, which is applied to Cramer's rule for the solution of a restricted matrix equation.

The paper is organized as follows: In Section 2, we present some well-known definitions and lemmas. In Section 3, we provide a new characterization, as well as certain representations and properties, of the m-weak group inverse of a complex matrix. In Section 4, we provide several expressions of the m-weak group inverse which are useful in computation. In Section 5, we present some properties of the m-weak group inverse as well as the relationships between the m-weak group inverse and other generalized inverses via the core-EP decomposition. In Section 6, we show applications of the m-weak group inverse to bordered matrices and to Cramer's rule for the solution of the restricted matrix equation.

The symbols R(A), N(A) and $A^{*}$ denote the range space, null space and conjugate transpose of ACm×n, respectively. The symbol In denotes the identity matrix of order n. Let PL,M be the projector onto the subspace L along M, where L,MCn and $L\oplus M=\mathbb{C}^{n}$. For ACm×n, PA represents the orthogonal projection onto R(A), i.e., $P_{A}=P_{R(A)}=AA^{\dagger}$. The symbols CPn and CHn represent the subsets of Cn×n consisting of all idempotent and Hermitian matrices, respectively, i.e.,

$\mathbb{C}^{\mathrm{P}}_{n}=\{A \mid A\in\mathbb{C}^{n\times n},\ A^{2}=A\},\qquad \mathbb{C}^{\mathrm{H}}_{n}=\{A \mid A\in\mathbb{C}^{n\times n},\ A^{*}=A\}.$

Let ACm×n. The MP-inverse $A^{\dagger}$ of A is the unique matrix XCn×m satisfying the following four Penrose equations (see [14,23,24]):

$(1)\ AXA=A,\quad (2)\ XAX=X,\quad (3)\ (AX)^{*}=AX,\quad (4)\ (XA)^{*}=XA.$
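As a quick illustration, np.linalg.pinv returns the Moore–Penrose inverse of a matrix, so the four Penrose equations can be checked numerically; the random rank-deficient test matrix below is ours.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3)) @ rng.standard_normal((3, 5))   # a rank-deficient 4 x 5 matrix
X = np.linalg.pinv(A)                                            # Moore-Penrose inverse

print(np.allclose(A @ X @ A, A))                 # (1) AXA = A
print(np.allclose(X @ A @ X, X))                 # (2) XAX = X
print(np.allclose((A @ X).conj().T, A @ X))      # (3) (AX)* = AX
print(np.allclose((X @ A).conj().T, X @ A))      # (4) (XA)* = XA
```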

    A matrix XCn×m that satisfies condition (1) above is called an inner inverse of A and the set of all inner inverses of A is denoted by A{1}, while a matrix XCn×m that satisfies condition (2) above is called an outer inverse of A. A matrix XCn×m that satisfies both conditions (1) and (2) is called a reflexive g-inverse of A. If a matrix XCn×m satisfies

    X=XAX, R(X)=T  and N(X)=S,

    where T and S are the subspaces of Cn and Cm respectively, then X is an outer inverse of A with prescribed range and null space and it is denoted by A(2)T,S. If A(2)T,S exists, then it is unique. The notion of the core inverse on the CCMn was proposed and was denoted by [25,26,27]. The core inverse of ACn×nk is the unique matrix XCn×n satisfying

$AX=P_{A},\qquad R(X)\subseteq R(A).$

    In addition, it was proved that

    The core-EP inverse of ACn×nk, denoted by is given in [28,29,30]. The core-EP inverse of ACn×nk is the unique matrix XCn×n satisfying

$XAX=X,\qquad R(X)=R(X^{*})=R(A^{k}).$

    Moreover, it was proved that

The DMP-inverse of ACn×nk, denoted by $A^{D,\dagger}$, was introduced in [10,11]. The DMP-inverse of ACn×nk is the unique matrix XCn×n satisfying

$XAX=X,\qquad XA=A^{D}A,\qquad A^{k}X=A^{k}A^{\dagger}.$

    Moreover, it was shown that

$A^{D,\dagger}=A^{D}AA^{\dagger}.$

Also, the dual DMP-inverse of A was introduced in [10], as $A^{\dagger,D}=A^{\dagger}AA^{D}$.

    The (B,C)-inverse of ACm×n, denoted by A(B,C) [31,32], is the unique matrix XCn×m satisfying

    XAB=B,CAX=C,R(X)=R(B) and N(X)=N(C),

    where B,CCn×m.

    To discuss further properties of the m-weak group inverse, several auxiliary lemmas will be given. The first lemma gives the core-EP decomposition of a matrix ACn×nk which will be a very useful tool throughout this paper.

    Lemma 2.1. [16] Let ACn×nk. Then there exists a unitary matrix UCn×n such that

$A=A_{1}+A_{2}=U\begin{bmatrix}T & S\\ 0 & N\end{bmatrix}U^{*}$, (2.1)
$A_{1}=U\begin{bmatrix}T & S\\ 0 & 0\end{bmatrix}U^{*},\qquad A_{2}=U\begin{bmatrix}0 & 0\\ 0 & N\end{bmatrix}U^{*}$, (2.2)

    where TCt×t is nonsingular with t=r(T)=r(Ak) and N is nilpotent of index k. The representation (2.1) is called the core-EP decomposition of A, while A1 and A2 are the core part and nilpotent part of A, respectively.
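A numerical sketch of Lemma 2.1: choosing the first t columns of U as an orthonormal basis of R(A^k) (here taken from an SVD of A^k) makes U*AU block upper triangular with a nonsingular T and a nilpotent N. The helper name, tolerance, and test matrix are ours, and k is assumed to be at least the index of A.

```python
import numpy as np

def core_ep_blocks(A, k, tol=1e-12):
    """Sketch of Lemma 2.1: unitary U with U* A U = [[T, S], [0, N]], where the
    leading t columns of U span R(A^k); T is t x t nonsingular and N is nilpotent."""
    Ak = np.linalg.matrix_power(A, k)
    U, sv, _ = np.linalg.svd(Ak)                       # left singular vectors of A^k
    t = int(np.sum(sv > tol * max(sv[0], 1.0)))        # numerical rank of A^k
    B = U.conj().T @ A @ U                             # lower-left block vanishes up to round-off
    return U, B[:t, :t], B[:t, t:], B[t:, t:], t       # U, T, S, N, t

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])                        # Ind(A) = 2
U, T, S, N, t = core_ep_blocks(A, 2)
print(t, np.allclose(np.linalg.matrix_power(N, 2), 0))                 # t = 1 and N is nilpotent
print(np.allclose(U @ np.block([[T, S], [np.zeros((3 - t, t)), N]]) @ U.conj().T, A))
```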

    Following the representation (2.1) of a matrix ACn×nk, we have the following representations of certain generalized inverses (see [15,16,33]):

    (2.3)
$A^{Ⓦ}=U\begin{bmatrix}T^{-1} & T^{-2}S\\ 0 & 0\end{bmatrix}U^{*}$, (2.4)
$A^{D}=U\begin{bmatrix}T^{-1} & (T^{k+1})^{-1}T_{k}\\ 0 & 0\end{bmatrix}U^{*}$, (2.5)

where $T_{k}=\sum_{j=0}^{k-1}T^{j}SN^{k-1-j}$.

    By direct computations, we get that ACCMn is equivalent with N=0, in which case

$A^{\#}=U\begin{bmatrix}T^{-1} & T^{-2}S\\ 0 & 0\end{bmatrix}U^{*}$, (2.6)

    and

    (2.7)

    Let ACn×nk be of the form (2.1) and let mZ+. The notations below will be frequently used in this paper:

    M=S(IntNN),=(TT+MS)1,Tm=m1j=0TjSNm1j.

    Lemma 2.2. [34,Lemma 6] Let ACn×nk be of the form (2.1). Then

    A=U[TTSNMNMSN]U. (2.8)

    From (2.8) and [16,Theorem 2.2], we get that

$AA^{\dagger}=U\begin{bmatrix}I_{t} & 0\\ 0 & NN^{\dagger}\end{bmatrix}U^{*}$, (2.9)
    AA=U[TTTMMTNN+MM]U, (2.10)
$A^{k}=U\begin{bmatrix}T^{k} & T_{k}\\ 0 & 0\end{bmatrix}U^{*}$, (2.11)
$A^{m}=U\begin{bmatrix}T^{m} & T_{m}\\ 0 & N^{m}\end{bmatrix}U^{*}$, (2.12)
$P_{A^{k}}=A^{k}(A^{k})^{\dagger}=U\begin{bmatrix}I_{t} & 0\\ 0 & 0\end{bmatrix}U^{*}$, (2.13)

    where t=r(Ak).
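The block formulas (2.11) and (2.13) are easy to verify numerically with the decomposition sketch given after Lemma 2.1 (the helper core_ep_blocks and the test matrix are ours):

```python
import numpy as np
# reuses core_ep_blocks(A, k) from the sketch after Lemma 2.1

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])           # Ind(A) = 2
k = 2
n = A.shape[0]
Ak = np.linalg.matrix_power(A, k)
U, T, S, N, t = core_ep_blocks(A, k)

# T_k = sum_{j=0}^{k-1} T^j S N^(k-1-j)
Tk = sum(np.linalg.matrix_power(T, j) @ S @ np.linalg.matrix_power(N, k - 1 - j) for j in range(k))

# (2.11): A^k = U [[T^k, T_k], [0, 0]] U*
lhs = U @ np.block([[np.linalg.matrix_power(T, k), Tk],
                    [np.zeros((n - t, t)), np.zeros((n - t, n - t))]]) @ U.conj().T
print(np.allclose(lhs, Ak))

# (2.13): P_{A^k} = A^k (A^k)^+ = U diag(I_t, 0) U*
P = U @ np.block([[np.eye(t), np.zeros((t, n - t))],
                  [np.zeros((n - t, t)), np.zeros((n - t, n - t))]]) @ U.conj().T
print(np.allclose(P, Ak @ np.linalg.pinv(Ak)))
```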

    Lemma 2.3. [29,35,36] Let ACn×nk and let mZ+. Then

    Lemma 2.4. Let ACn×nk and let mZ+. Then

    Proof. Assume that ACn×nk is of the form (2.1). By (2.3), (2.12) and (2.13), it follows that

    In this section, using the core-EP decomposition of a matrix ACn×nk we will give another definition of the m-weak group inverse. Furthermore, some properties of the m-weak group inverse will be derived.

    Theorem 3.1. Let ACn×nk be given by (2.1) and let XCn×n and mZ+. The system of equations

    (3.1)

    is consistent and has a unique solution X given by

    (3.2)

Proof. If m=1, then X coincides with the weak group inverse of A. Clearly, X is then the unique solution of (3.1) according to the definition of the weak group inverse. If m>1, by (3.1), Lemmas 2.3 (d) and 2.4, it follows that

    Thus, by (2.3) and (2.12), we have that

    Definition 3.2. Let ACn×nk and mZ+. The m-weak group inverse of A, denoted by Am, is the unique solution of the system (3.1).
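For computations, the block form of Am that appears repeatedly in the proofs below, X = U[T^{-1}, T^{-(m+1)}T_m; 0, 0]U* with T_m = Σ_{j=0}^{m-1} T^j S N^{m-1-j}, can be turned into a short sketch. It reuses the core_ep_blocks helper sketched after Lemma 2.1; the function name and the test matrix are ours.

```python
import numpy as np
# reuses core_ep_blocks(A, k) from the sketch after Lemma 2.1

def m_weak_group_inverse(A, k, m):
    """Sketch: X = U [[T^{-1}, T^{-(m+1)} T_m], [0, 0]] U*, the block form of the
    m-weak group inverse used in the proofs, with T_m = sum_j T^j S N^{m-1-j}."""
    U, T, S, N, t = core_ep_blocks(A, k)
    n = A.shape[0]
    Tm = sum(np.linalg.matrix_power(T, j) @ S @ np.linalg.matrix_power(N, m - 1 - j)
             for j in range(m))
    Tinv = np.linalg.inv(T)
    top = np.hstack([Tinv, np.linalg.matrix_power(Tinv, m + 1) @ Tm])
    return U @ np.vstack([top, np.zeros((n - t, n))]) @ U.conj().T

# for m >= k = Ind(A) the result behaves like the Drazin inverse (cf. Remark 3.3 below)
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])           # Ind(A) = 2
X = m_weak_group_inverse(A, 2, 3)
print(np.allclose(A @ X, X @ A), np.allclose(X @ A @ X, X))
```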

    Remark 3.3. The m-weak group inverse is in some sense a generalization of the weak group inverse and Drazin inverse. We have the following:

    (a) If m=1, then 1-weak group inverse of ACn×nk coincides with the weak group inverse of A;

    (b) If mk, then m-weak group inverse of ACn×nk coincides with the Drazin inverse of A.

    In the following example, we will show that the m-weak group inverse is different from some known generalized inverses.

    Example 3.4. Let A=[I3I30N], where N=[010001000]. It can be verified that Ind(A)=3. By computations, we can check the following:

    AD,=[I3H300],   A,D=[H1H4I3H1H2H4], A=[I3I300],

    where H1=[1200010001], H2=[111011001], H3=[110010000], H4=[121212011000] and N=[000100010].

    It is clear that

    Theorem 3.5. Let ACn×nk be decomposed by A=A1+A2 as in (2.1) and let mZ+. Then

    (a) Am is an outer inverse of A;

    (b) Am is a reflexive g-inverse of A1.

    Proof. (a) By Lemmas 2.3 (d), 2.4 and the definition of Am, it follows that

    (b) By (2.2) and (3.2), we get that

    A1AmA1=U[TS00][T1T(m+1)Tm00][TS00]U=U[TS00]U=A1.

    From [16,Theorem 3.4], we get . By the fact that and the statement (a) above, it follows that

    Hence Am is a reflexive g-inverse of A1.

    Theorem 3.6. Let ACn×nk and mZ+. Then

    (a) r(Am)=r(Ak).

    (b) R(Am)=R(Ak), N(Am)=N((Ak)Am).

    (c) Am=A(2)R(Ak),N((Ak)Am).

    Proof. (a) Assume that A is given by (2.1). From (2.11) and (3.2), it is clear that r(Am)=t=r(Ak).

    (b) Since implies that and since r(Am)=r(Ak), we get R(Am)=R(Ak). From and we get If we get that Then , and by r(Am)=r((Ak)Am), it follows that

    (c) It is a direct consequence from Theorems 3.5 (a) and 3.6 (b).

    Theorem 3.7. Let ACn×nk and mZ+. Then

    (a) AAm=PR(Ak),N((Ak)Am);

    (b) AmA=PR(Ak),N((Ak)Am+1).

    Proof. (a) From Theorem 3.5 (a), it follows that AAmCPn. By the definition of Am and (3.2), it can be proved that and r(AAm)=r(Am)=r(Ak)=t. Hence R(AAm)=R(Ak). Similarly, we get that N(AAm)=N(Am)=N((Ak)Am). Therefore, AAm=PR(Ak),N((Ak)Am).

    (b) The proof follows similarly as for the part (a).

    In this part, we represent some characterizations of the m-weak group inverse in terms of the range space, null space, rank equalities, and projectors.

    The next theorem gives several characterizations of Am.

    Theorem 4.1. Let ACn×nk, XCn×n and let mZ+. Then the following hold:

    (a) X=Am.

    (c) R(X)=R(Ak), Am+1X=PAkAm.

    (d) R(X)=R(Ak), (Ak)Am+1X=(Ak)Am.

    Proof. (a)(b): This follows directly by Theorem 3.6 (b) and the definition of Am.

    (b)(c): Premultiplying by Am, and by Lemma 2.4, it follows that

    (c)(d): Premultiplying Am+1X=PAkAm by (Ak), it follows that

    (Ak)Am+1X=(Ak)PAkAm=(Ak)Am.

    (d)(a): Let A be of the form (2.1). By (2.11) and R(X)=R(Ak), we obtain that

    X=U[X1X200]U,

    where X1Ct×t and X2Ct×(nt). Thus (Ak)Am+1X=(Ak)Am implies that

    U[(Tk)Tm+1X1(Tk)Tm+1X2(˜T)Tm+1X1(˜T)Tm+1X2]U=U[(Tk)Tm(Tk)Tm(˜T)Tm(˜T)Tm]U,

    i.e., X1=T1 and X2=(Tm+1)1Tm, which imply X=U[T1(Tm+1)1Tm00]U=Am.

    By Theorem 3.5, it is known that Am is an outer inverse of ACn×nk, i.e., AmAAm=Am. Using this result, we obtain some characterizations of Am.

    Theorem 4.2. Let ACn×nk, XCn×n and let mN+. Then the following conditions are equivalent:

    (a) X=Am.

    (b) XAX=X, R(X)=R(Ak), N(X)=N((Ak)Am).

    (d) XAX=X, R(X)=R(Ak), (Am)Am+1XCHn.

    Proof. (a)(b): It is a direct consequence of Theorem 3.6 (c).

    (b)(c): By XAX=X and R(X)=R(Ak), it follows that

    and

    Since AX, , we have and XAX=X, we obtain that XAk+1=Ak.

    (c)(d): We have that

    and by R(Ak)=R(XAk+1)R(X), we get R(XA)=R(Ak). Since , it follows that

    (d)(a): Assume that A is of the form (2.1). From XAX=X and R(X)=R(Ak), we get that XAk+1=Ak. Then it is easy to conclude that

    X=U[T1X200]U,

    where X2Ct×(nt).

    Since

    (Am)Am+1X=U[(Tm)0(Tm)(Nm)][Tm+1TTm+SNm0Nm+1][T1X200]U=U[(Tm)Tm(Tm)Tm+1X2(Tm)Tm(Tm)Tm+1X2]UCHn,

    we obtain that X2=T(m+1)Tm. Hence X=U[T1T(m+1)Tm00]U=Am.

    Motivated by the first two matrix equations XAk+1=Ak and XAX=X, we provide several characterizations of Am.

    Theorem 4.3. Let ACn×nk, XCn×n and let mZ+. Then the following conditions are equivalent:

    (a) X=Am.

    (b) XAk+1=Ak, AX2=X, (Am)Am+1XCHn.

    (c) XAk+1=Ak, AX2=X, Am+1X=PAkAm.

    Proof. (a)(b): This follows by Proposition 4.2 in [19].

    (a)(c): It is a direct consequence of Theorems 4.1 (c) and 4.2 (c).

    (c)(d): Assume that A is given by (2.1). By XAk+1=Ak, we get that

    X=U[T1X20X4]U,

    where X2Ct×(nt) and X4C(nt)×(nt).

    By AX2=X, we have that X4=NX42, which implies that

$X_{4}=NX_{4}^{2}=N^{2}X_{4}^{3}=\cdots=N^{k}X_{4}^{k+1}=0.$

    Using (2.12) and (2.13) and that Am+1X=PAkAm, we get

    X=U[T1T(m+1)Tm00]U.

    Now, the proof follows directly.

    (d)(a): Since XAk+1=Ak, it follows that R(Ak)=R(XAk+1)R(X) and by r(X)=r(Ak), we get R(Ak)=R(X). Hence, according to Theorem 4.1 (b), we get X=Am.

According to Theorem 3.7, if X=Am, then AX=PR(Ak),N((Ak)Am) and XA=PR(Ak),N((Ak)Am+1). The converse implication does not hold, as the following example shows.

Example 4.4. Let $A=\begin{bmatrix}I_{3} & L\\ 0 & N\end{bmatrix}$, $X=\begin{bmatrix}I_{3} & L\\ 0 & L\end{bmatrix}$, where $N=\begin{bmatrix}0&1&0\\0&0&1\\0&0&0\end{bmatrix}$ and $L=\begin{bmatrix}0&0&1\\0&0&0\\0&0&0\end{bmatrix}$. Then it is clear that k=Ind(A)=3 and $A^{Ⓦ_{2}}=\begin{bmatrix}I_{3} & L\\ 0 & 0\end{bmatrix}$. It can be directly verified that $AX=P_{R(A^{3}),N((A^{3})^{*}A^{2})}$ and $XA=P_{R(A^{3}),N((A^{3})^{*}A^{3})}$. However, $X\neq A^{Ⓦ_{2}}$.

    Based on the example above, the next theorem, we consider other characterizations of Am by using AX=PR(Ak),N((Ak)Am) and XA=PR(Ak),N((Ak)Am+1).

    Theorem 4.5. Let ACn×nk be of the form (2.1), XCn×n and mZ+. Then the following statements are equivalent:

    (a) X=Am;

(b) AX=PR(Ak),N((Ak)Am),XA=PR(Ak),N((Ak)Am+1) and r(X)=r(Ak);

(c) AX=PR(Ak),N((Ak)Am),XA=PR(Ak),N((Ak)Am+1) and XAX=X;

(d) AX=PR(Ak),N((Ak)Am),XA=PR(Ak),N((Ak)Am+1) and AX2=X.

    Proof. (a)(b): It is a direct consequence of Theorems 3.6 (a) and 3.7.

    (b)(c): Since XA=PR(Ak),N((Ak)Am+1) and r(X)=r(Ak), we get that R(X)=R(Ak) and by XA=PR(Ak),N((Ak)Am+1), we obtain XAX=X.

    (c)(d): From XA=PR(Ak),N((Ak)Am+1) and r(X)=r(Ak), we have that R(X)=R(Ak) and by AX=PR(Ak),N((Ak)Am), it follows that AX2=X.

    (d)(a): By XA=PR(Ak),N((Ak)Am+1) and AX2=X, it follows that

$R(A^{k})=R(XA)\subseteq R(X)=R(AX^{2})=\cdots=R(A^{k}X^{k+1})\subseteq R(A^{k}),$

    which implies R(X)=R(Ak). By AX=PR(Ak),N((Ak)Am), we get that

    (Ak)Am+1X=(Ak)Am.

    According to Theorem 4.1 (d), we have that X=Am.

    Analogously, we characterize Am using that AX=PR(Ak),N((Ak)Am) or XA=PR(Ak),N((Ak)Am+1) as follows:

    Theorem 4.6. Let ACn×nk, XCn×n and let mZ+. Then

    (a) X=Am is the unique solution of the system of equations:

    AX=PR(Ak),N((Ak)Am), R(X)=R(Ak). (4.1)

    (b) X=Am is the unique solution of the system of equations:

    XA=PR(Ak),N((Ak)Am+1), N(X)=N((Ak)Am). (4.2)

Proof. (a) By Theorems 3.6 (b) and 3.7 (a), it follows that X=Am is a solution of the system of Eq (4.1). Conversely, if the system (4.1) is consistent, it follows that (Ak)AmAX=(Ak)Am. Hence, by Theorem 4.1 (d), X=Am.

    (b) By Theorems 3.6 (b) and 3.7 (b), it is evident that X=Am is a solution of (4.2). Next, we prove the uniqueness of the solution.

    Assume that X1,X2 satisfy the system of Eq (4.2). Then X1A=X2A and N(X1)=N(X2)=N((Ak)Am). Thus, we get that R(X1X2)N(A)N((Ak)) and R(X1X2)R((Am)Ak). For any ηN((Ak))R((Am)Ak), we obtain that (Ak)η=0, η=(Am)Akξ for some ξCn. Since Ind(A)=k, we derive that R(Ak)=R(Ak+m), and it follows that Akξ=Ak+mξ0 for some ξ0Cn×n. Then we have that

    0=(Ak)η=(Ak+m)Ak+mξ0.

    Premultiplying the equation above by ξ0, we derive that (Ak+mξ0)Ak+mξ0=0, which implies Ak+mξ0=0. Hence η=0, i.e., R(X1X2)={0}, which implies X1=X2.

    Remark 4.7. Notice that the condition R(X)=R(Ak) in Theorem 4.6 (a) can be replaced by R(X)R(Ak). Also the condition N(X)=N((Ak)Am) in Theorem 4.6 (b) can be replaced by N(X)N((Ak)Am).

    From Theorem 3.1, we get an expression of Am in terms of . In the next results, we present several expressions of Am in terms of certain generalized inverses.

    Theorem 5.1. Let ACn×nk and let mZ+. Then the following statements hold:

    (a) Am=(AD)m+1PAkAm.

    (c) Am=(Ak)#Akm1PAkAm (km+1).

    (d) Am=(Am+1PAk)Am.

    (e) Am=Am1PAk(Am).

Proof. Assume that A is given by (2.1). By (2.3)–(2.7) and (2.11)–(2.13), we get that

$A^{m+1}P_{A^{k}}=U\begin{bmatrix}T^{m+1} & 0\\ 0 & 0\end{bmatrix}U^{*}$, (5.1)
    (Am+1PAk)=U[Tm1000]U, (5.2)
    (AD)m+1=U[Tm1T2mkTk00]U, (5.3)
$(A^{k})^{\#}=U\begin{bmatrix}T^{-k} & T^{-2k}T_{k}\\ 0 & 0\end{bmatrix}U^{*}$, (5.4)
    (5.5)
    (Am)=U[TmT2mTm00]U. (5.6)

    (a) By (2.12), (2.13) and (5.3), it follows that

    (AD)m+1PAkAm=U[T1Tk1Tk00]m+1[It000][TmTm0Nm]U=U[T1T(m+1)Tm00]U.

    Hence Am=(AD)m+1PAkAm.

The proofs of (b)–(e) are analogous to that of (a).

    Next, we consider the accuracy of the expression in Theorem 5.1 (a) for computing the m-weak group inverse.

    Example 5.2. Let

    Assume that A is given by (2.1). Then

    It is clear that k = Ind(A) = 3. According to (2.12), (2.13), (3.2) and (5.3), a straightforward computation shows that

    Let K=(AD)3PA3A2. Then

    and

$r_{1}=\|A^{Ⓦ_{2}}-K\|=6.6885\times 10^{-14},$

where $\|\cdot\|$ is the Frobenius norm.

    Hence, Theorem 5.1 (a) gives a good result in terms of computational accuracy.
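A small, self-contained sketch of the representation in Theorem 5.1 (a), Am = (A^D)^{m+1} P_{A^k} A^m, with A^D obtained from the classical formula A^l (A^{2l+1})^† A^l (l ≥ Ind(A)) and P_{A^k} = A^k (A^k)^†. The test matrix and the sanity checks (outer-inverse identity and the rank equality of Theorem 3.6 (a)) are ours.

```python
import numpy as np

def drazin(A, l):
    Al = np.linalg.matrix_power(A, l)
    return Al @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * l + 1)) @ Al

def mwgi_thm51a(A, k, m):
    """Theorem 5.1 (a): the m-weak group inverse as (A^D)^(m+1) P_{A^k} A^m."""
    Ak = np.linalg.matrix_power(A, k)
    P = Ak @ np.linalg.pinv(Ak)                       # P_{A^k}
    return np.linalg.matrix_power(drazin(A, k), m + 1) @ P @ np.linalg.matrix_power(A, m)

A = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 2.0, 0.0, 1.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0, 0.0]])                  # Ind(A) = 2
X = mwgi_thm51a(A, 2, 2)
print(np.allclose(X @ A @ X, X))                                      # X is an outer inverse of A
print(np.linalg.matrix_rank(X) == np.linalg.matrix_rank(np.linalg.matrix_power(A, 2)))  # r(X) = r(A^k)
```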

    In the following theorem, we present a connection between the (B,C)-inverse and the m-weak group inverse showing that the m-weak group inverse of ACn×nk is its (Ak,(Ak)Am)-inverse.

    Theorem 5.3. Let ACn×nk and let mZ+. Then Am=A(Ak,(Ak)Am).

    Proof. By Theorem 3.7 we have that AmAAk=Ak and ((Ak)Am)AAm=(Ak)Am. From Theorem 3.6 (b), we derive that R(Am)=R(Ak) and N(Am)=N((Ak)Am). Evidently, Am=A(Ak,(Ak)Am).
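Theorem 5.3 can be checked numerically: with B = A^k and C = (A^k)*A^m, the equations XAB = B and CAX = C from the definition of the (B,C)-inverse should hold for X = Am. A brief sketch, reusing mwgi_thm51a from the sketch after Example 5.2 (the test matrix is ours):

```python
import numpy as np
# reuses mwgi_thm51a(A, k, m) from the sketch after Example 5.2

A = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 2.0, 0.0, 1.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0, 0.0]])            # Ind(A) = 2
k, m = 2, 2
X = mwgi_thm51a(A, k, m)
B = np.linalg.matrix_power(A, k)                # B = A^k
C = B.conj().T @ np.linalg.matrix_power(A, m)   # C = (A^k)* A^m
print(np.allclose(X @ A @ B, B))                # X A B = B
print(np.allclose(C @ A @ X, C))                # C A X = C
```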

    Now we will give some limit expressions of Am, but before we need the next auxiliary lemma:

    Lemma 5.4. [37] Let ACm×n,XCn×p and YCp×m. Then the following hold:

    (a) limλ0X(λIp+YAX)1Y exists;

    (b) r(XYAXY)=r(XY);

    (c) A(2)R(XY),N(XY) exists,

    in which case,

    limλ0X(λIp+YAX)1Y=A(2)R(XY),N(XY).

    Theorem 5.5. Let ACn×nk be given by (2.1) and let mN+. Then the following statements hold:

    (a) Am=limλ0Ak(λIn+(Ak)Ak+m+1)1(Ak)Am;

    (b) Am=limλ0Ak(Ak)(λIn+Ak+m+1(Ak))1Am;

    (c) Am=limλ0Ak(Ak)Am(λIn+Ak+1(Ak)Am)1;

    (d) Am=limλ0(λIn+Ak(Ak)Am+1)1Ak(Ak)Am.

    Proof. (a) It is easy to check that r(Ak(Ak)Am)=r((Ak)Am)=r(Ak)=t. By Theorem 3.6, we get that R(Ak)=R(Ak(Ak)Am), N((Ak)Am)=N(Ak(Ak)Am). From Theorem 3.6, we get

    Am=A(2)R(Ak),N((Ak)Am)=A(2)R(Ak(Ak)Am),N(Ak(Ak)Am).

    Let X=Ak,Y=(Ak)Am. By Lemma 5.4, we get that

    Am=limλ0Ak(λIn+(Ak)Ak+m+1)1(Ak)Am.

    The statements (b)(d) can be similarly proved.

    The following example will test the accuracy of expression in Theorem 5.5 (a) for computing the m-weak group inverse.

    Example 5.6. Let

    with k = Ind(A) = 3. By , we get

    Together with (3.2), it follows that

    Let L=limλ0A3(λIn+(A3)A6)1(A3)A2. Then

    and

$r_{2}=\|A^{Ⓦ_{2}}-L\|=6.136\times 10^{-11},$

where $\|\cdot\|$ is the Frobenius norm. Hence, the representation in Theorem 5.5 (a) is efficient for computing the m-weak group inverse.
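The limit in Theorem 5.5 (a) can also be approximated by taking a small positive λ; a minimal sketch (the test matrix and the choice of λ are ours), compared against the value produced by the Theorem 5.1 (a) sketch:

```python
import numpy as np
# reuses mwgi_thm51a(A, k, m) from the sketch after Example 5.2

A = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 2.0, 0.0, 1.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0, 0.0]])      # Ind(A) = 2
k, m, lam = 2, 2, 1e-10
Ak = np.linalg.matrix_power(A, k)

# Theorem 5.5 (a): Am = lim_{lam -> 0} A^k (lam*I + (A^k)* A^(k+m+1))^(-1) (A^k)* A^m
X_lim = Ak @ np.linalg.solve(
    lam * np.eye(A.shape[0]) + Ak.conj().T @ np.linalg.matrix_power(A, k + m + 1),
    Ak.conj().T @ np.linalg.matrix_power(A, m))
print(np.linalg.norm(X_lim - mwgi_thm51a(A, k, m)))   # small residual, in the spirit of r2 above
```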

    In this section, we consider some relations between the m-weak group inverse and other generalized inverses as well as certain matrix classes. The symbols COPn, CEPn, CiEPn and stand for the subsets of Cn×n consisting of orthogonal projectors (Hermitian idempotent matrices), EP (Range-Hermitian) matrices, i-EP matrices and k-core-EP matrices, respectively, i.e.,

First, we state the following auxiliary lemma:

    Lemma 6.1. Let ACn×nk be given by (2.1). Then Tm=0 if and only if S=0.

    Proof. Notice that Tm=0 can be equivalently expressed by the equation below:

$T^{m-1}S+T^{m-2}SN+\cdots+TSN^{m-2}+SN^{m-1}=0.$ (6.1)

Multiplying the equation above on the right by $N^{k-1}$ and using $N^{k}=0$, we get $SN^{k-1}=0$. Multiplying on the right by $N^{k-2}$ then gives $SN^{k-2}=0$. Continuing in this way, we obtain $SN^{k-3}=0,\ldots,SN=0$. Now, by (6.1), it follows that $T^{m-1}S=0$, i.e., S=0, since T is nonsingular.

    The next theorem provides some necessary and sufficient conditions for Am to be equal to various transformations of ACn×nk.

    Theorem 6.2. Let ACn×nk and let mZ+. Then the following statements hold:

    (a) AmA{1} if and only if ACCMn.

    (b) AmCCMn.

    (c) Am=A if and only if A=A3.

    (d) Am=A if and only if AACOPn and ACEPn.

    (e) Am=PA if and only if ACOPn.

    Proof. Let A be given by (2.1).

    (a) By (3.2), it follows that

    AmA{1}AAmA=AU[TS+TmTmN00]U=U[TS0N]UN=0ACCMn.

    (b) By (3.2), it is clear that r(Am)=r((Am)2)=t, which implies AmCCMn.

    (c) From (3.2), we get that

    Am=AU[T1(Tm+1)1Tm00]U=U[TS0N]UT2=It and N=0A=A3.

    (d) According to (3.2), we obtain that

    Am=AU[T1(Tm+1)1Tm00]U=U[T0SN]UT1=T,S=0 and N=0AACOPnand ACEPn.

    (e) By (2.9) and (3.2), it follows that

    Am=PAU[T1(Tm+1)1Tm00]U=U[It00NN]UT=It,NN=0 and Tm=0T=It,S=0 and N=0.

    Hence Am=PA if and only if ACOPn.

Using the core-EP decomposition, we will prove that ACiEPn if and only if AmCEPn. To this end, we consider certain equivalent conditions for AmCEPn.

    Lemma 6.3. [17] Let ACn×nk be of the form (2.1). Then ACiEPn if and only if S=0.

    Moreover, S=0 if and only if

    Theorem 6.4. Let ACn×nk and let mZ+. The following statements are equivalent:

    (a) AmCEPn;

    (b) ACiEPn;

    (c) ACEPn;

    Proof. Let ACn×nk be of the form (2.1). According to Lemma 6.3, we will prove that each of the statements (a), (c), (d) and (e) is equivalent to S=0.

    (a) According to (3.2) and Lemma 6.1, it follows that

    AmCEPnR(Am)=R((Am))(Tm+1)1Tm=0S=0.

    (c) By (2.4), we get that

    ACEPnR(A)=R((A))T2S=0S=0.

    (d) By (2.3), (3.2) and Lemma 6.1, it follows that

    (e) From (2.3), (3.2) and Lemma 6.1, we get that

    In [22], the authors proved that Am=AD if and only if SNm=0. In the following results, we investigate the relation between the m-weak group inverse and other generalized inverses such as the MP-inverse, group inverse, core inverse, DMP-inverse, dual DMP-inverse, weak group inverse by core-EP decomposition.

    Theorem 6.5. Let ACn×nk be given by (2.1) and let mZ+. Then the following statements hold:

    (a) Am=AACEPn;

    (b) Am=A#ACCMn;

    (d) Am=AD,TkmTm=TkNN;

    (e) Am=A,DSNm=0 and S=SNN;

    (f) Am=ASN=0 (m>1).

    Proof. (a) It follows from (2.8) and (3.2) that

    Am=AU[T1(Tm+1)1Tm00]U=U[TTSNMNMSN]UM=0,N=0,T1=T and (Tm+1)1Tm=TSNS=0 and N=0ACEPn.

(b) Since A# exists if and only if ACCMn, which is equivalent to N=0, we get by (2.6) and (3.2) the following:

    Am=A#U[T1(Tm+1)1Tm00]U=U[T1T2S00]U and N=0(Tm+1)1Tm=T2S and N=0N=0ACCMn.

    (c) The proof follows similarly as in (b).

    (d) Using (2.5) and (2.9) to AD,=ADAA, we derive

    AD,=[T1(Tk+1)1TkNN00],

    and by (3.2), it follows that

    Am=AD,U[T1(Tm+1)1Tm00]U=U[T1(Tk+1)1TkNN00]UTkmTm=TkNN.

(e) Using (2.5) and (2.10) and the fact that $A^{\dagger,D}=A^{\dagger}AA^{D}$, we obtain that

    A,D=[TTTkTkMMTkTk],

    which together with (3.2), gives

    Am=A,DU[T1(Tm+1)1Tm00]U=U[TTTkTkMMTkTk]UM=0,T1=T and (Tm+1)1Tm=TTkTkS=SNN and TkmTm=TkS=SNN and SNm=0.

    (f) If m>1, from (2.4) and (3.2), we get

    Am=AU[T1(Tm+1)1Tm00]U=U[T1T2S00]U(Tm+1)1Tm=T2S.

Clearly, $(T^{m+1})^{-1}T_{m}=T^{-2}S$ is equivalent to $T^{-3}SN+\cdots+(T^{m+1})^{-1}SN^{m-1}=0$, which is further equivalent to SN=0. Hence $A^{Ⓦ_{m}}=A^{Ⓦ}$ if and only if SN=0.
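As a quick numerical illustration of item (f), for a matrix whose core-EP blocks satisfy SN = 0 the m-weak group inverse should coincide with the weak group inverse for every m > 1. A sketch reusing m_weak_group_inverse from the sketch after Definition 3.2 (the test matrix is ours):

```python
import numpy as np
# reuses m_weak_group_inverse(A, k, m) from the sketch after Definition 3.2

A = np.array([[2.0, 0.0, 1.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
# natural core-EP blocks: T = [2], S = [0, 1], N = [[0, 1], [0, 0]], so SN = 0 and Ind(A) = 2
print(np.allclose(m_weak_group_inverse(A, 2, 2), m_weak_group_inverse(A, 2, 1)))   # m = 2 vs m = 1
```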

    In this section, we consider a relation between the m-weak group inverse and the nonsingular bordered matrix, which will be applied to the Cramer's rule for the solution of the restricted matrix equation.

    Theorem 7.1. Let ACn×nk be such that r(Ak)=t and let mZ+. Let BCn×(nt) and CCn×(nt) be of full column rank such that N((Ak)Am)=R(B) and R(Ak)=N(C). Then the bordered matrix

    K=[ABC0]

    is invertible and its inverse is given by

    K1=[Am  (InAmA)CB(InAAm)  B(AAmAA)C].

    Proof. Let X=[Am  (InAmA)CB(InAAm)  B(AAmAA)C]. Since R(Am)=R(Ak)=N(C), we have that CAm=0. Since C is a full row rank matrix, it is right invertible and CC=Int. From

    R(InAAm)=N(AAm)=N((Ak)Am)=R(B)=R(BB),

    we get BB(InAAm)=InAAm. Hence,

    KX=[AAm+BB(InAAm)  A(InAmA)C+BB(AAmAA)CCAm  C(InAmA)C]=[AAm+InAAm  A(InAmA)C(InAAm)AC0  CC]=[In  00  Int].

    Thus, X=K1.
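A numerical sketch of Theorem 7.1, under our reading that the lower-left block of K is the conjugate transpose of C (so the blocks are conformable): B is taken as an orthonormal basis of N((A^k)*A^m) and C as an orthonormal basis of the orthogonal complement of R(A^k), and the top-left n×n block of K^{-1} is compared with Am. The helpers orth_null and mwgi_thm51a (from the sketch after Example 5.2) and the test matrix are ours.

```python
import numpy as np
# reuses mwgi_thm51a(A, k, m) from the sketch after Example 5.2

def orth_null(M, tol=1e-12):
    """Orthonormal basis (as columns) of the null space of M, via SVD."""
    _, sv, Vh = np.linalg.svd(M)
    r = int(np.sum(sv > tol * max(sv[0], 1.0)))
    return Vh.conj().T[:, r:]

A = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 2.0, 0.0, 1.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0, 0.0]])      # Ind(A) = 2
k, m = 2, 2
n = A.shape[0]
Ak = np.linalg.matrix_power(A, k)

B = orth_null(Ak.conj().T @ np.linalg.matrix_power(A, m))   # R(B) = N((A^k)* A^m)
C = orth_null(Ak.conj().T)                                   # R(C) = N((A^k)*), i.e. N(C*) = R(A^k)

K = np.block([[A, B],
              [C.conj().T, np.zeros((C.shape[1], B.shape[1]))]])
Kinv = np.linalg.inv(K)                                      # K is nonsingular by Theorem 7.1
print(np.allclose(Kinv[:n, :n], mwgi_thm51a(A, k, m)))       # top-left block equals Am
```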

    In the next result, we will discuss the solution of the restricted matrix equation

    AX=D,   R(X)R(Ak), (7.1)

    using the m-weak group inverse.

    Theorem 7.2. Let ACn×nk, XCn×p and DCn×p. If R(D)R(Ak), then the restricted matrix equation

    AX=D,   R(X)R(Ak) (7.2)

    has a unique solution X=AmD.

    Proof. Since R(Ak)=R(AAk) and R(D)R(Ak), we get that R(D)AR(Ak), which implies solvability of the matrix Eq (7.1). Obviously, X=AmD is a solution of (7.1). Then we prove the uniqueness of X. If X1 also satisfies (7.1), then

    X=AmD=AmAX1=PR(Ak),N((Ak)Am)X1=X1.
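A short sketch of Theorem 7.2: choose any D with R(D) ⊆ R(A^k), set X = Am D, and check that AX = D and R(X) ⊆ R(A^k). It reuses mwgi_thm51a from the sketch after Example 5.2; the data are ours.

```python
import numpy as np
# reuses mwgi_thm51a(A, k, m) from the sketch after Example 5.2

A = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 2.0, 0.0, 1.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0, 0.0]])      # Ind(A) = 2
k, m = 2, 2
Ak = np.linalg.matrix_power(A, k)
D = Ak @ np.array([[1.0, 0.0],
                   [0.0, 1.0],
                   [1.0, 2.0],
                   [0.0, 1.0]])           # guarantees R(D) is contained in R(A^k)

X = mwgi_thm51a(A, k, m) @ D              # candidate for the unique solution of (7.1)
print(np.allclose(A @ X, D))                                                   # A X = D
print(np.linalg.matrix_rank(np.hstack([Ak, X])) == np.linalg.matrix_rank(Ak))  # R(X) within R(A^k)
```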

    Based on the nonsingularity of the bordered matrix given in Theorem 7.1, we will show in the next theorem how the Cramer's rule can be used for solving the restricted matrix Eq (7.1).

    Theorem 7.3. Let ACn×nk be such that r(Ak)=t and let XCn×p and DCn×p. Let BCn×(nt) and CCn×(nt) be full column rank matrices such that N((Ak)Am)=R(B) and R(Ak)=N(C). Then the unique solution of the restricted matrix Eq (7.1) is given by X=[xij], where

$x_{ij}=\dfrac{\det\begin{bmatrix}A(i\rightarrow d_{j}) & B\\ C(i\rightarrow 0) & 0\end{bmatrix}}{\det\begin{bmatrix}A & B\\ C & 0\end{bmatrix}},\quad i=1,2,\ldots,n,\ j=1,2,\ldots,p,$ (7.3)

    where dj denotes the j-th column of D.

    Proof. Since X is the solution of the restricted matrix Eq (7.1), we get that R(X)R(Ak)=N(C), which implies CX=0. Then the restricted matrix Eq (7.1) can be rewritten as

    [ABC0][X0]=[AXCX]=[D0].

By Theorem 7.1, we have that $\begin{bmatrix}A & B\\ C & 0\end{bmatrix}$ is invertible. Consequently, (7.3) follows from Cramer's rule applied to the above equation.

    Example 7.4. Let

    A=[100100010010001001000010000001000000],  D=[1014242861920224101415000000000000],B=[123012136247124136],  C=[000106000120000003].

    It can be verified that Ind(A)=3. Then we get that

    It is easy to check that

    X=A2D=[1014242861920224101415000000000000]

satisfies the restricted matrix equation AX=D and $R(X)\subseteq R(A^{3})$. By simple calculations, we can also check that the entries of X are given by (7.3).

This paper gives a new definition of the m-weak group inverse for complex matrices, which extends both the Drazin inverse and the weak group inverse. Some characterizations of the m-weak group inverse in terms of the range space, null space, rank equalities, and projectors are presented. Several representations of the m-weak group inverse involving some known generalized inverses, as well as limit expressions, are also derived. The representation in Theorem 5.1 performs well in terms of computational accuracy (see Examples 5.2 and 5.6). The m-weak group inverse is then applied to the solution of the restricted matrix Eq (7.1), which can also be expressed by Cramer's rule (see Theorem 7.3). In [38,39,40], there are some iterative methods and algorithms for computing outer inverses. Motivated by these, the following topics deserve further attention:

    (1) The applications of the m-weak group inverse in linear equations and matrix equations;

    (2) Perturbation formulae as well as perturbation bounds for the m-weak group inverse;

    (3) Iterative algorithm, a splitting method for computing the m-weak group inverse;

    (4) Other representations of the m-weak group inverse.

    This research is supported by the National Natural Science Foundation of China (No. 11961076).

    The authors declare no conflict of interest.



    [1] J. T. Beale, T. Kato, A. Majda, Remarks on the breakdown of smooth solutions for the 3-D Euler equations, Commun. Math. Phys., 94 (1984), 61–66. https://doi.org/10.1007/BF01212349 doi: 10.1007/BF01212349
    [2] Y. Cho, H. J. Choe, H. Kim, Unique solvability of the initial boundary value problems for compressible viscous fluids, J. Math. Pures Appl., 83 (2004), 243–275. https://doi.org/10.1016/j.matpur.2003.11.004 doi: 10.1016/j.matpur.2003.11.004
    [3] H. Chen, Y. M. Sun, X. Zhong, Global classical solutions to the 3D Cauchy problem of compressible magneto-micropolar fluid equations with far field vacuum, Discrete Cont Dyn Sys. B, 29 (2024), 282–318. https://doi.org/10.3934/dcdsb.2023096 doi: 10.3934/dcdsb.2023096
    [4] G. Q. Chen, D. H. Wang, Global solution of nonlinear magnetohydrodynamics with large initial data, J. Differ. Equ., 182 (2002), 344–376. https://doi.org/10.1006/jdeq.2001.4111 doi: 10.1006/jdeq.2001.4111
    [5] G. Q. Chen, D. Wang, Existence and continuous dependence of large solutions for the magnetohydrodynamic equations, Z. Angew. Math. Phys., 54 (2003), 608–632. https://doi.org/10.1007/s00033-003-1017-z doi: 10.1007/s00033-003-1017-z
    [6] M. T. Chen, X. Y. Xu, J. W. Zhang, Global weak solutions of 3D compressible micropolar fluids with discontinuous initial data and vacuum, Commun. Math. Sci., 13 (2015), 225–247. https://doi.org/10.4310/CMS.2015.v13.n1.a11 doi: 10.4310/CMS.2015.v13.n1.a11
    [7] M. T. Chen, Blowup criterion for viscous, compressible micropolar fluids with vacuum, Nonlin. Anal. RWA, 13 (2012), 850–859. https://doi.org/10.1016/j.nonrwa.2011.08.021 doi: 10.1016/j.nonrwa.2011.08.021
    [8] M. T. Chen, Unique solvability of compressible micropolar viscous fluids, Bound. Value Probl., 2012 (2012), 232. https://doi.org/10.1186/1687-2770-2012-32 doi: 10.1186/1687-2770-2012-32
    [9] B. Ducomet, E. Feireisl, The equations of magnetohydrodynamics: on the interaction between matter and radiation in the evolution of gaseous stars, Commun. Math. Phys., 266 (2006), 595–629. https://doi.org/10.1007/s00220-006-0052-y doi: 10.1007/s00220-006-0052-y
    [10] A. C. Eringen, Theory of micropolar fluids, J. Math. Mech., 16 (1966), 1–18.
    [11] J. S. Fan, S. Jiang, G. Nakamura, Vanishing shear viscosity limit in the magnetohydrodynamic equations, Commun. Math. Phys., 270 (2007), 691–708. https://doi.org/10.1007/s00220-006-0167-1 doi: 10.1007/s00220-006-0167-1
    [12] J. S. Fan, W. H. Yu, Global variational solutions to the compressible magnetohydrodynamic equations, Nonlinear Anal., 69 (2008), 3637–3600. https://doi.org/10.1016/j.na.2007.10.005 doi: 10.1016/j.na.2007.10.005
    [13] D. Hoff, Global solutions of the Navier-Stokes equations for multidimensional compressible flow with discontinuous initial data, J. Differ. Equ., 120 (1995), 215–254. https://doi.org/10.1006/jdeq.1995.1111 doi: 10.1006/jdeq.1995.1111
    [14] X. D. Huang, J. Li, Z. P. Xin, Serrin-type criterion for the three-dimensional viscous compressible flows, SIAM J. Math. Anal., 43 (2011), 1872–1886. https://doi.org/10.1137/100814639 doi: 10.1137/100814639
[15] X. D. Huang, J. Li, Z. P. Xin, Global well-posedness of classical solutions with large oscillations and vacuum to the three-dimensional isentropic compressible Navier-Stokes equations, Commun. Pure Appl. Math., 65 (2012), 549–585. https://doi.org/10.1002/cpa.21382 doi: 10.1002/cpa.21382
[16] G. Lukaszewicz, Micropolar Fluids: Theory and Applications, Modeling and Simulation in Science, Engineering and Technology, Boston: Birkhäuser, 1999. https://doi.org/10.1007/978-1-4612-0614-5
    [17] L. L. Tong, Z. Tan, Optimal decay rates of the compressible magneto-micropolar fluids system in R3, Commun. Math. Sci., 17 (2019), 1109–1134. https://doi.org/10.4310/CMS.2019.v17.n4.a13 doi: 10.4310/CMS.2019.v17.n4.a13
    [18] R. Y. Wei, B. L. Guo, Y. Li, Global existence and optimal convergence rates of solutions for 3D compressible magneto-micropolar fluid equations, J. Differ. Equ., 263 (2017), 2457–2480. https://doi.org/10.1016/j.jde.2017.04.002 doi: 10.1016/j.jde.2017.04.002
    [19] H. Xu, J. W. Zhang, Regularity and uniqueness for the compressible full Navier-Stokes equations, J. Differ. Equ., 272 (2021), 46–73. https://doi.org/10.1016/j.jde.2020.09.036 doi: 10.1016/j.jde.2020.09.036
    [20] Q. Xu, X. Zhong, Strong solutions to the three-dimensional barotropic compressible magneto-micropolar fluid equations with vacuum, Z. Angew. Math. Phys., 73 (2022), 14. https://doi.org/10.1007/s00033-021-01642-3 doi: 10.1007/s00033-021-01642-3
    [21] Q. J. Xu, Z. Tan, H. Q. Wang, Global existence and asymptotic behavior for the 3D compressible magneto-micropolar fluids in a bounded domain, J. Math. Phys., 61 (2020), 011506. https://doi.org/10.1063/1.5121247 doi: 10.1063/1.5121247
    [22] Q. J. Xu, Z. Tan, H. Q. Wang, L. L. Tong, Global low-energy weak solutions of the compressible magneto-micropolar fluids in the half-space, Z. Angew. Math. Phys., 73 (2022), 223. https://doi.org/10.1007/s00033-022-01860-3 doi: 10.1007/s00033-022-01860-3
    [23] P. X. Zhang, Decay of the compressible magneto-micropolar fluids, J. Math. Phys., 59 (2018), 023102. https://doi.org/10.1063/1.5024795 doi: 10.1063/1.5024795
    [24] X. L. Zhang, H. Cai, Existence and uniqueness of time periodic solutions to the compressible magneto-micropolar fluids in a periodic domain, Z. Angew. Math. Phys., 71 (2020), 184. https://doi.org/10.1007/s00033-020-01409-2 doi: 10.1007/s00033-020-01409-2
[25] J. W. Zhang, S. Jiang, F. Xie, Global weak solutions of an initial boundary value problem for screw pinches in plasma physics, Math. Model. Meth. Appl. Sci., 19 (2009), 833–875. https://doi.org/10.1142/S0218202509003644 doi: 10.1142/S0218202509003644
    [26] J. W. Zhang, J. N. Zhao, Some decay estimates of solutions for the 3-D compressible isentropic magnetohydrodynamics, Commun. Math. Sci., 8 (2010), 835–850. https://projecteuclid.org/euclid.cms/1288725260
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)