Research article

Intuitionistic fuzzy-based TOPSIS method for multi-criterion optimization problem: a novel compromise methodology

  • Received: 16 February 2023 Revised: 24 April 2023 Accepted: 25 April 2023 Published: 15 May 2023
  • MSC : 03E72, 08A72, 54A40

  • The decision-making process is characterized by some doubt or hesitation due to the existence of uncertainty among some objectives or criteria. In this sense, it is quite difficult for decision maker(s) to reach the precise/exact solutions for these objectives. In this study, a novel approach based on integrating the technique for order preference by similarity to ideal solution (TOPSIS) with the intuitionistic fuzzy set (IFS), named TOPSIS-IFS, for solving a multi-criterion optimization problem (MCOP) is proposed. In this context, the TOPSIS-IFS operates with two phases to reach the best compromise solution (BCS). First, the TOPSIS approach aims to characterize the conflicting natures among objectives by reducing these objectives into only two objectives. Second, IFS is incorporated to obtain the solution model under the concept of indeterminacy degree by defining two membership functions for each objective (i.e., satisfaction degree, dissatisfaction degree). The IFS can provide an effective framework that reflects the reality contained in any decision-making process. The proposed TOPSIS-IFS approach is validated by carrying out an illustrative example. The obtained solution by the approach is superior to those existing in the literature. Also, the TOPSIS-IFS approach has been investigated through solving the multi-objective transportation problem (MOTP) as a practical problem. Furthermore, impacts of IFS parameters are analyzed based on Taguchi method to demonstrate their effects on the BCS. Finally, this integration depicts a new philosophy in the mathematical programming field due to its interesting principles.

    Citation: Ya Qin, Rizk M. Rizk-Allah, Harish Garg, Aboul Ella Hassanien, Václav Snášel. Intuitionistic fuzzy-based TOPSIS method for multi-criterion optimization problem: a novel compromise methodology[J]. AIMS Mathematics, 2023, 8(7): 16825-16845. doi: 10.3934/math.2023860

    Related Papers:

    [1] Xiaofei Cao, Yuyue Huang, Xue Hua, Tingyu Zhao, Sanzhang Xu . Matrix inverses along the core parts of three matrix decompositions. AIMS Mathematics, 2023, 8(12): 30194-30208. doi: 10.3934/math.20231543
    [2] Hui Yan, Hongxing Wang, Kezheng Zuo, Yang Chen . Further characterizations of the weak group inverse of matrices and the weak group matrix. AIMS Mathematics, 2021, 6(9): 9322-9341. doi: 10.3934/math.2021542
    [3] Jinyong Wu, Wenjie Shi, Sanzhang Xu . Revisiting the m-weak core inverse. AIMS Mathematics, 2024, 9(8): 21672-21685. doi: 10.3934/math.20241054
    [4] Zhimei Fu, Kezheng Zuo, Yang Chen . Further characterizations of the weak core inverse of matrices and the weak core matrix. AIMS Mathematics, 2022, 7(3): 3630-3647. doi: 10.3934/math.2022200
    [5] Wanlin Jiang, Kezheng Zuo . Revisiting of the BT-inverse of matrices. AIMS Mathematics, 2021, 6(3): 2607-2622. doi: 10.3934/math.2021158
    [6] Yang Chen, Kezheng Zuo, Zhimei Fu . New characterizations of the generalized Moore-Penrose inverse of matrices. AIMS Mathematics, 2022, 7(3): 4359-4375. doi: 10.3934/math.2022242
    [7] Hongjie Jiang, Xiaoji Liu, Caijing Jiang . On the general strong fuzzy solutions of general fuzzy matrix equation involving the Core-EP inverse. AIMS Mathematics, 2022, 7(2): 3221-3238. doi: 10.3934/math.2022178
    [8] Jiale Gao, Kezheng Zuo, Qingwen Wang, Jiabao Wu . Further characterizations and representations of the Minkowski inverse in Minkowski space. AIMS Mathematics, 2023, 8(10): 23403-23426. doi: 10.3934/math.20231189
    [9] Jin Zhong, Yilin Zhang . Dual group inverses of dual matrices and their applications in solving systems of linear dual equations. AIMS Mathematics, 2022, 7(5): 7606-7624. doi: 10.3934/math.2022427
    [10] Yongge Tian . Miscellaneous reverse order laws and their equivalent facts for generalized inverses of a triple matrix product. AIMS Mathematics, 2021, 6(12): 13845-13886. doi: 10.3934/math.2021803



    Let $\mathbb{C}^{m\times n}$ and $\mathbb{Z}^{+}$ denote the set of all $m\times n$ complex matrices and the set of all positive integers, respectively. The symbols $r(A)$ and ${\rm Ind}(A)$ stand for the rank and the index of $A\in\mathbb{C}^{n\times n}$, respectively. For a matrix $A\in\mathbb{C}^{n\times n}$, we assume that $A^{0}=I_{n}$. Let $\mathbb{C}^{n\times n}_{k}$ be the set of all $n\times n$ complex matrices with index $k$. By $\mathbb{C}^{{\rm CM}}_{n}$ we denote the set of all core matrices (or group invertible matrices), i.e.,

    $\mathbb{C}^{{\rm CM}}_{n}=\{A\ |\ A\in\mathbb{C}^{n\times n},\ r(A)=r(A^{2})\}.$

    The Drazin inverse [1] of $A\in\mathbb{C}^{n\times n}_{k}$, denoted by $A^{D}$, is the unique matrix $X\in\mathbb{C}^{n\times n}$ satisfying:

    $XA^{k+1}=A^{k},\quad XAX=X\ \ \text{and}\ \ AX=XA. \qquad (1.1)$

    Especially, when $A\in\mathbb{C}^{{\rm CM}}_{n}$, the matrix $X$ that satisfies (1.1) is called the group inverse of $A$ and is denoted by $A^{\#}$. The Drazin inverse has been widely applied in different fields of mathematics and its applications; here we mention only some of them. Perturbation theory and additive results for the Drazin inverse were investigated in [2,3,4,5]. In [6], algorithms for the computation of the Drazin inverse of a polynomial matrix are presented based on the discrete Fourier transformation. Karampetakis and Stanimirović [7] presented two algorithms for the symbolic computation of the Drazin inverse of a given square one-variable polynomial matrix, which were effective with respect to CPU time and the elimination of redundant computations. Some representations of the W-weighted Drazin inverse were investigated, and the computational complexities of the representations estimated, in [8]. Kyrchei [9] generalized the weighted Drazin inverse, the weighted DMP-inverse, and the weighted dual DMP-inverse [10,11,12] to matrices over the quaternion skew field and provided their determinantal representations by using noncommutative column and row determinants. In [13], the authors considered quaternion two-sided restricted matrix equations and gave their unique solutions by means of the DMP-inverse and dual DMP-inverse. For interesting properties of different kinds of generalized inverses, see [14].
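    As a side illustration (not part of the original paper), the Drazin inverse can be computed numerically from the classical representation $A^{D}=A^{k}(A^{2k+1})^{\dagger}A^{k}$, valid for any $k\geq{\rm Ind}(A)$; the Python/NumPy sketch below checks the defining equations (1.1) on a small example.

```python
import numpy as np

def drazin_inverse(A, k):
    """Drazin inverse via the classical representation A^D = A^k (A^{2k+1})^+ A^k,
    valid for any k >= Ind(A)."""
    Ak = np.linalg.matrix_power(A, k)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

# Small example with Ind(A) = 2
A = np.array([[1., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
k = 2
AD = drazin_inverse(A, k)

# Check the three equations in (1.1): X A^{k+1} = A^k, XAX = X, AX = XA
print(np.allclose(AD @ np.linalg.matrix_power(A, k + 1), np.linalg.matrix_power(A, k)))
print(np.allclose(AD @ A @ AD, AD))
print(np.allclose(A @ AD, AD @ A))
```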

    In 2018, Wang [15] introduced the weak group inverse of complex square matrices using the core-EP decomposition [16] and gave its certain characterizations.

    Definition 1.1. Let $A\in\mathbb{C}^{n\times n}_{k}$. Then the unique solution of the system

    $AX^{2}=X,\qquad AX=A^{\textcircled{\dagger}}A$

    is the weak group inverse of $A$, denoted by $A^{Ⓦ}$.

    Recently, there has been a huge interest in the weak group inverse. For example, Wang et al. [17] compared the weak group inverse with the group inverse of a matrix. In [18], the weak group inverse was introduced in *-rings and characterized by three equations (see also [19,20]). The weak group inverse in the setting of rectangular matrices was considered in [21]. In 2021, Zhou and Chen [19] introduced the m-weak group inverse in the ring and presented its different characterizations.

    Definition 1.2. Let $\mathcal{R}$ be a unitary ring with involution, $a\in\mathcal{R}$ and $m\in\mathbb{Z}^{+}$. If there exist $x\in\mathcal{R}$ and $k\in\mathbb{Z}^{+}$ such that

    $xa^{k+1}=a^{k},\quad ax^{2}=x,\quad (a^{k})^{*}a^{m+1}x=(a^{k})^{*}a^{m},$

    then $x$ is called the m-weak group inverse of $a$ and, in this case, $a$ is m-weak group invertible.

    In general, the m-weak group inverse of $a$ may not be unique. If the m-weak group inverse of $a$ is unique, then it is denoted by $a^{Ⓦ_{m}}$.

    In [22], one can find a relation between the weak core inverse and the m-weak group inverse, as well as certain necessary and sufficient conditions under which the Drazin inverse coincides with the m-weak group inverse of a complex matrix. It is worth noting that, for complex matrices, the system of Definition 1.2 always has a unique solution, i.e., the m-weak group inverse exists and is unique for every $A\in\mathbb{C}^{n\times n}$.

    Now, we consider the system of equations

    (1.2)

    Motivated by the above discussion, we introduce a new characterization of the m-weak group inverse related to (1.2) and prove the existence and uniqueness of a solution of (1.2) for every $A\in\mathbb{C}^{n\times n}$. Some new characterizations of the m-weak group inverse are derived in terms of the range space, null space, rank equalities, and projectors. We present some representations of the m-weak group inverse involving known generalized inverses and limit expressions, as well as certain relations between the m-weak group inverse and other generalized inverses. Finally, we consider a relation between the m-weak group inverse and a nonsingular bordered matrix, which is applied to Cramer's rule for the solution of a restricted matrix equation.

    The paper is organized as follows: In Section 2, we present some well-known definitions and lemmas. In Section 3, we provide a new characterization, as well as certain representations and properties of the m-weak group inverse of a complex matrix. In Section 4, we provide several expressions of the m-weak group inverse which are useful in computation. In Section 5, we present some properties of the m-weak group inverse as well as the relationships between the m-weak group inverse and other generalized inverses by core-EP decomposition. In Section 6, we show the applications of the m-weak group inverse concerned with the bordered matrices and the Cramer's rule for the solution of the restricted matrix equation.

    The symbols $\mathcal{R}(A)$, $\mathcal{N}(A)$ and $A^{*}$ denote the range space, null space and conjugate transpose of $A\in\mathbb{C}^{m\times n}$, respectively. The symbol $I_{n}$ denotes the identity matrix of order $n$. Let $P_{L,M}$ be the projector onto the space $L$ along $M$, where $L,M\subseteq\mathbb{C}^{n}$ and $L\oplus M=\mathbb{C}^{n}$. For $A\in\mathbb{C}^{m\times n}$, $P_{A}$ represents the orthogonal projection onto $\mathcal{R}(A)$, i.e., $P_{A}=P_{\mathcal{R}(A)}=AA^{\dagger}$. The symbols $\mathbb{C}^{P}_{n}$ and $\mathbb{C}^{H}_{n}$ represent the subsets of $\mathbb{C}^{n\times n}$ consisting of all idempotent and Hermitian matrices, respectively, i.e.,

    $\mathbb{C}^{P}_{n}=\{A\ |\ A\in\mathbb{C}^{n\times n},\ A^{2}=A\},\qquad \mathbb{C}^{H}_{n}=\{A\ |\ A\in\mathbb{C}^{n\times n},\ A^{*}=A\}.$

    Let $A\in\mathbb{C}^{m\times n}$. The MP-inverse $A^{\dagger}$ of $A$ is the unique matrix $X\in\mathbb{C}^{n\times m}$ satisfying the following four Penrose equations (see [14,23,24]):

    $(1)\ AXA=A,\quad (2)\ XAX=X,\quad (3)\ (AX)^{*}=AX,\quad (4)\ (XA)^{*}=XA.$
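    As an aside, `numpy.linalg.pinv` computes the MP-inverse, so the four Penrose equations can be verified directly (an illustrative check, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3)) @ rng.standard_normal((3, 5))   # a rank-deficient 4x5 matrix
X = np.linalg.pinv(A)                                           # MP-inverse A^+

print(np.allclose(A @ X @ A, A))              # (1) AXA = A
print(np.allclose(X @ A @ X, X))              # (2) XAX = X
print(np.allclose((A @ X).conj().T, A @ X))   # (3) (AX)* = AX
print(np.allclose((X @ A).conj().T, X @ A))   # (4) (XA)* = XA
```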

    A matrix $X\in\mathbb{C}^{n\times m}$ that satisfies condition (1) above is called an inner inverse of $A$, and the set of all inner inverses of $A$ is denoted by $A\{1\}$, while a matrix $X\in\mathbb{C}^{n\times m}$ that satisfies condition (2) above is called an outer inverse of $A$. A matrix $X\in\mathbb{C}^{n\times m}$ that satisfies both conditions (1) and (2) is called a reflexive g-inverse of $A$. If a matrix $X\in\mathbb{C}^{n\times m}$ satisfies

    $X=XAX,\quad \mathcal{R}(X)=T\ \ \text{and}\ \ \mathcal{N}(X)=S,$

    where $T$ and $S$ are subspaces of $\mathbb{C}^{n}$ and $\mathbb{C}^{m}$, respectively, then $X$ is an outer inverse of $A$ with prescribed range and null space, and it is denoted by $A^{(2)}_{T,S}$. If $A^{(2)}_{T,S}$ exists, then it is unique. The notion of the core inverse on $\mathbb{C}^{{\rm CM}}_{n}$ was proposed in [25,26,27] and is denoted by $A^{\textcircled{\#}}$. The core inverse of $A\in\mathbb{C}^{{\rm CM}}_{n}$ is the unique matrix $X\in\mathbb{C}^{n\times n}$ satisfying

    $AX=P_{A},\qquad \mathcal{R}(X)\subseteq\mathcal{R}(A).$

    In addition, it was proved that $A^{\textcircled{\#}}=A^{\#}AA^{\dagger}$.

    The core-EP inverse of $A\in\mathbb{C}^{n\times n}_{k}$, denoted by $A^{\textcircled{\dagger}}$, was given in [28,29,30]. The core-EP inverse of $A\in\mathbb{C}^{n\times n}_{k}$ is the unique matrix $X\in\mathbb{C}^{n\times n}$ satisfying

    $XAX=X,\qquad \mathcal{R}(X)=\mathcal{R}(X^{*})=\mathcal{R}(A^{k}).$

    Moreover, it was proved that $A^{\textcircled{\dagger}}=A^{D}A^{k}(A^{k})^{\dagger}$.

    The DMP-inverse of $A\in\mathbb{C}^{n\times n}_{k}$, denoted by $A^{D,\dagger}$, was introduced in [10,11]. The DMP-inverse of $A\in\mathbb{C}^{n\times n}_{k}$ is the unique matrix $X\in\mathbb{C}^{n\times n}$ satisfying

    $XAX=X,\quad XA=A^{D}A\ \ \text{and}\ \ A^{k}X=A^{k}A^{\dagger}.$

    Moreover, it was shown that

    $A^{D,\dagger}=A^{D}AA^{\dagger}.$

    Also, the dual DMP-inverse of $A$ was introduced in [10], as $A^{\dagger,D}=A^{\dagger}AA^{D}$.
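    These product formulas translate directly into code. The sketch below is illustrative only; it obtains the Drazin inverse from the classical representation $A^{D}=A^{k}(A^{2k+1})^{\dagger}A^{k}$ (not stated in this paper) and then forms $A^{D,\dagger}=A^{D}AA^{\dagger}$ and $A^{\dagger,D}=A^{\dagger}AA^{D}$.

```python
import numpy as np

def drazin_inverse(A, k):
    # A^D = A^k (A^{2k+1})^+ A^k for any k >= Ind(A)
    Ak = np.linalg.matrix_power(A, k)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

def dmp_inverse(A, k):
    """DMP-inverse A^{D,+} = A^D A A^+."""
    return drazin_inverse(A, k) @ A @ np.linalg.pinv(A)

def dual_dmp_inverse(A, k):
    """Dual DMP-inverse A^{+,D} = A^+ A A^D."""
    return np.linalg.pinv(A) @ A @ drazin_inverse(A, k)

A = np.array([[1., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])          # Ind(A) = 2
AD = drazin_inverse(A, 2)
X = dmp_inverse(A, 2)
# X = A^{D,+} satisfies XAX = X and XA = A^D A
print(np.allclose(X @ A @ X, X), np.allclose(X @ A, AD @ A))
```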

    The (B,C)-inverse of $A\in\mathbb{C}^{m\times n}$, denoted by $A^{(B,C)}$ [31,32], is the unique matrix $X\in\mathbb{C}^{n\times m}$ satisfying

    $XAB=B,\quad CAX=C,\quad \mathcal{R}(X)=\mathcal{R}(B)\ \ \text{and}\ \ \mathcal{N}(X)=\mathcal{N}(C),$

    where $B,C\in\mathbb{C}^{n\times m}$.

    To discuss further properties of the m-weak group inverse, several auxiliary lemmas will be given. The first lemma gives the core-EP decomposition of a matrix $A\in\mathbb{C}^{n\times n}_{k}$, which will be a very useful tool throughout this paper.

    Lemma 2.1. [16] Let $A\in\mathbb{C}^{n\times n}_{k}$. Then there exists a unitary matrix $U\in\mathbb{C}^{n\times n}$ such that

    $A=A_{1}+A_{2}=U\left[\begin{array}{cc} T & S\\ 0 & N \end{array}\right]U^{*}, \qquad (2.1)$
    $A_{1}=U\left[\begin{array}{cc} T & S\\ 0 & 0 \end{array}\right]U^{*},\qquad A_{2}=U\left[\begin{array}{cc} 0 & 0\\ 0 & N \end{array}\right]U^{*}, \qquad (2.2)$

    where $T\in\mathbb{C}^{t\times t}$ is nonsingular with $t=r(T)=r(A^{k})$ and $N$ is nilpotent of index $k$. The representation (2.1) is called the core-EP decomposition of $A$, while $A_{1}$ and $A_{2}$ are the core part and nilpotent part of $A$, respectively.

    Following the representation (2.1) of a matrix $A\in\mathbb{C}^{n\times n}_{k}$, we have the following representations of certain generalized inverses (see [15,16,33]):

    $A^{\textcircled{\dagger}}=U\left[\begin{array}{cc} T^{-1} & 0\\ 0 & 0 \end{array}\right]U^{*}, \qquad (2.3)$
    $A^{Ⓦ}=U\left[\begin{array}{cc} T^{-1} & T^{-2}S\\ 0 & 0 \end{array}\right]U^{*}, \qquad (2.4)$
    $A^{D}=U\left[\begin{array}{cc} T^{-1} & (T^{k+1})^{-1}T_{k}\\ 0 & 0 \end{array}\right]U^{*}, \qquad (2.5)$

    where $T_{k}=\sum_{j=0}^{k-1}T^{j}SN^{k-1-j}$.

    By direct computations, we get that $A\in\mathbb{C}^{{\rm CM}}_{n}$ is equivalent to $N=0$, in which case

    $A^{\#}=U\left[\begin{array}{cc} T^{-1} & T^{-2}S\\ 0 & 0 \end{array}\right]U^{*}, \qquad (2.6)$

    and

    $A^{\textcircled{\#}}=U\left[\begin{array}{cc} T^{-1} & 0\\ 0 & 0 \end{array}\right]U^{*}. \qquad (2.7)$

    Let $A\in\mathbb{C}^{n\times n}_{k}$ be of the form (2.1) and let $m\in\mathbb{Z}^{+}$. The notations below will be frequently used in this paper:

    $M=S(I_{n-t}-N^{\dagger}N),\qquad \Delta=(TT^{*}+MS^{*})^{-1},\qquad T_{m}=\sum_{j=0}^{m-1}T^{j}SN^{m-1-j}.$

    Lemma 2.2. [34, Lemma 6] Let $A\in\mathbb{C}^{n\times n}_{k}$ be of the form (2.1). Then

    $A^{\dagger}=U\left[\begin{array}{cc} T^{*}\Delta & -T^{*}\Delta SN^{\dagger}\\ M^{*}\Delta & N^{\dagger}-M^{*}\Delta SN^{\dagger} \end{array}\right]U^{*}. \qquad (2.8)$

    From (2.8) and [16, Theorem 2.2], we get that

    $AA^{\dagger}=U\left[\begin{array}{cc} I_{t} & 0\\ 0 & NN^{\dagger} \end{array}\right]U^{*}, \qquad (2.9)$
    $A^{\dagger}A=U\left[\begin{array}{cc} T^{*}\Delta T & T^{*}\Delta M\\ M^{*}\Delta T & N^{\dagger}N+M^{*}\Delta M \end{array}\right]U^{*}, \qquad (2.10)$
    $A^{k}=U\left[\begin{array}{cc} T^{k} & T_{k}\\ 0 & 0 \end{array}\right]U^{*}, \qquad (2.11)$
    $A^{m}=U\left[\begin{array}{cc} T^{m} & T_{m}\\ 0 & N^{m} \end{array}\right]U^{*}, \qquad (2.12)$
    $P_{A^{k}}=A^{k}(A^{k})^{\dagger}=U\left[\begin{array}{cc} I_{t} & 0\\ 0 & 0 \end{array}\right]U^{*}, \qquad (2.13)$

    where $t=r(A^{k})$.
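    The block formulas (2.9)–(2.13) are easy to sanity-check numerically: build $A=U\left[\begin{array}{cc} T & S\\ 0 & N \end{array}\right]U^{*}$ from chosen blocks and compare both sides. The sketch below does this for (2.9) and (2.13); the particular blocks $T$, $S$, $N$ and the orthogonal $U$ are illustrative choices, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

t, s = 2, 3                                      # sizes of the nonsingular and nilpotent blocks
T = rng.standard_normal((t, t)) + 2 * np.eye(t)  # a (generically) nonsingular block
S = rng.standard_normal((t, s))
N = np.diag(np.ones(s - 1), 1)                   # nilpotent Jordan block, N^3 = 0, index k = 3
U, _ = np.linalg.qr(rng.standard_normal((t + s, t + s)))  # real unitary (orthogonal) U

A = U @ np.block([[T, S], [np.zeros((s, t)), N]]) @ U.T   # core-EP decomposition (2.1)
k = s                                            # Ind(A) = index of N

Ak = np.linalg.matrix_power(A, k)
P_Ak = Ak @ np.linalg.pinv(Ak)                   # P_{A^k} = A^k (A^k)^+

# (2.13): P_{A^k} = U [[I_t, 0],[0, 0]] U^*
lhs = U.T @ P_Ak @ U
print(np.allclose(lhs[:t, :t], np.eye(t)), np.allclose(lhs[:, t:], 0), np.allclose(lhs[t:, :], 0))

# (2.9): A A^+ = U [[I_t, 0],[0, N N^+]] U^*
AAp = U.T @ (A @ np.linalg.pinv(A)) @ U
print(np.allclose(AAp[:t, :t], np.eye(t)), np.allclose(AAp[t:, t:], N @ np.linalg.pinv(N)))
```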

    Lemma 2.3. [29,35,36] Let $A\in\mathbb{C}^{n\times n}_{k}$ and let $m\in\mathbb{Z}^{+}$. Then

    Lemma 2.4. Let $A\in\mathbb{C}^{n\times n}_{k}$ and let $m\in\mathbb{Z}^{+}$. Then

    Proof. Assume that $A\in\mathbb{C}^{n\times n}_{k}$ is of the form (2.1). By (2.3), (2.12) and (2.13), it follows that

    In this section, using the core-EP decomposition of a matrix $A\in\mathbb{C}^{n\times n}_{k}$, we will give another definition of the m-weak group inverse. Furthermore, some properties of the m-weak group inverse will be derived.

    Theorem 3.1. Let $A\in\mathbb{C}^{n\times n}_{k}$ be given by (2.1), and let $X\in\mathbb{C}^{n\times n}$ and $m\in\mathbb{Z}^{+}$. The system of equations

    (3.1)

    is consistent and has a unique solution $X$ given by

    $X=U\left[\begin{array}{cc} T^{-1} & (T^{m+1})^{-1}T_{m}\\ 0 & 0 \end{array}\right]U^{*}. \qquad (3.2)$

    Proof. If $m=1$, then $X$ coincides with $A^{Ⓦ}$. Clearly, $X$ is the unique solution of (3.1) according to the definition of the weak group inverse. If $m\neq 1$, by (3.1), Lemma 2.3 (d) and Lemma 2.4, it follows that

    Thus, by (2.3) and (2.12), we have that

    Definition 3.2. Let $A\in\mathbb{C}^{n\times n}_{k}$ and $m\in\mathbb{Z}^{+}$. The m-weak group inverse of $A$, denoted by $A^{Ⓦ_{m}}$, is the unique solution of the system (3.1).

    Remark 3.3. The m-weak group inverse is, in some sense, a generalization of both the weak group inverse and the Drazin inverse. We have the following:

    (a) If $m=1$, then the 1-weak group inverse of $A\in\mathbb{C}^{n\times n}_{k}$ coincides with the weak group inverse of $A$;

    (b) If $m\geq k$, then the m-weak group inverse of $A\in\mathbb{C}^{n\times n}_{k}$ coincides with the Drazin inverse of $A$ (a numerical check of both cases is sketched below).
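    A quick numerical check of Remark 3.3 (a sketch, using the representation $A^{Ⓦ_{m}}=(A^{D})^{m+1}P_{A^{k}}A^{m}$ established below in Theorem 5.1 (a), together with the identities $A^{Ⓦ}=(A^{\textcircled{\dagger}})^{2}A$ and $A^{\textcircled{\dagger}}=A^{D}A^{k}(A^{k})^{\dagger}$ recalled in Section 2; the test matrix is the one from Example 3.4):

```python
import numpy as np

def drazin(A, k):
    # A^D = A^k (A^{2k+1})^+ A^k, k >= Ind(A)
    Ak = np.linalg.matrix_power(A, k)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

def m_weak_group(A, k, m):
    # representation from Theorem 5.1 (a): A^{W_m} = (A^D)^{m+1} P_{A^k} A^m
    Ak = np.linalg.matrix_power(A, k)
    P_Ak = Ak @ np.linalg.pinv(Ak)
    return np.linalg.matrix_power(drazin(A, k), m + 1) @ P_Ak @ np.linalg.matrix_power(A, m)

# matrix of Example 3.4: A = [[I3, I3],[0, N]], Ind(A) = 3
N = np.diag([1., 1.], 1)
A = np.block([[np.eye(3), np.eye(3)], [np.zeros((3, 3)), N]])
k = 3

# (a) m = 1 gives the weak group inverse A^W = (A^{core-EP})^2 A, with A^{core-EP} = A^D A^k (A^k)^+
Ak = np.linalg.matrix_power(A, k)
ce = drazin(A, k) @ (Ak @ np.linalg.pinv(Ak))            # core-EP inverse
print(np.allclose(m_weak_group(A, k, 1), ce @ ce @ A))   # True

# (b) m >= k collapses to the Drazin inverse
print(np.allclose(m_weak_group(A, k, 3), drazin(A, k)))  # True
print(np.allclose(m_weak_group(A, k, 5), drazin(A, k)))  # True
```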

    In the following example, we show that the m-weak group inverse differs from some known generalized inverses.

    Example 3.4. Let $A=\left[\begin{array}{cc} I_{3} & I_{3}\\ 0 & N \end{array}\right]$, where $N=\left[\begin{array}{ccc} 0&1&0\\ 0&0&1\\ 0&0&0 \end{array}\right]$. It can be verified that ${\rm Ind}(A)=3$. By computations, we can check the following:

    $A^{D,\dagger}=\left[\begin{array}{cc} I_{3} & H_{3}\\ 0 & 0 \end{array}\right],\quad A^{\dagger,D}=\left[\begin{array}{cc} H_{1} & H_{4}\\ I_{3}-H_{1} & H_{2}-H_{4} \end{array}\right],\quad A^{Ⓦ}=\left[\begin{array}{cc} I_{3} & I_{3}\\ 0 & 0 \end{array}\right],$

    where $H_{1}=\left[\begin{array}{ccc} \frac12&0&0\\ 0&1&0\\ 0&0&1 \end{array}\right]$, $H_{2}=\left[\begin{array}{ccc} 1&1&1\\ 0&1&1\\ 0&0&1 \end{array}\right]$, $H_{3}=\left[\begin{array}{ccc} 1&1&0\\ 0&1&0\\ 0&0&0 \end{array}\right]$, $H_{4}=H_{1}H_{2}$ and $N^{\dagger}=\left[\begin{array}{ccc} 0&0&0\\ 1&0&0\\ 0&1&0 \end{array}\right]$.

    It is clear that

    Theorem 3.5. Let $A\in\mathbb{C}^{n\times n}_{k}$ be decomposed as $A=A_{1}+A_{2}$ in (2.1) and let $m\in\mathbb{Z}^{+}$. Then

    (a) $A^{Ⓦ_{m}}$ is an outer inverse of $A$;

    (b) $A^{Ⓦ_{m}}$ is a reflexive g-inverse of $A_{1}$.

    Proof. (a) By Lemmas 2.3 (d) and 2.4 and the definition of $A^{Ⓦ_{m}}$, it follows that

    (b) By (2.2) and (3.2), we get that

    $A_{1}A^{Ⓦ_{m}}A_{1}=U\left[\begin{array}{cc} T & S\\ 0 & 0 \end{array}\right]\left[\begin{array}{cc} T^{-1} & (T^{m+1})^{-1}T_{m}\\ 0 & 0 \end{array}\right]\left[\begin{array}{cc} T & S\\ 0 & 0 \end{array}\right]U^{*}=U\left[\begin{array}{cc} T & S\\ 0 & 0 \end{array}\right]U^{*}=A_{1}.$

    From [16, Theorem 3.4], we get . By the fact that and statement (a) above, it follows that

    Hence $A^{Ⓦ_{m}}$ is a reflexive g-inverse of $A_{1}$.

    Theorem 3.6. Let $A\in\mathbb{C}^{n\times n}_{k}$ and $m\in\mathbb{Z}^{+}$. Then

    (a) $r(A^{Ⓦ_{m}})=r(A^{k})$.

    (b) $\mathcal{R}(A^{Ⓦ_{m}})=\mathcal{R}(A^{k})$, $\mathcal{N}(A^{Ⓦ_{m}})=\mathcal{N}((A^{k})^{*}A^{m})$.

    (c) $A^{Ⓦ_{m}}=A^{(2)}_{\mathcal{R}(A^{k}),\mathcal{N}((A^{k})^{*}A^{m})}$.

    Proof. (a) Assume that $A$ is given by (2.1). From (2.11) and (3.2), it is clear that $r(A^{Ⓦ_{m}})=t=r(A^{k})$.

    (b) Since implies that and since r(Am)=r(Ak), we get R(Am)=R(Ak). From and we get If we get that Then , and by r(Am)=r((Ak)Am), it follows that

    (c) It is a direct consequence from Theorems 3.5 (a) and 3.6 (b).

    Theorem 3.7. Let $A\in\mathbb{C}^{n\times n}_{k}$ and $m\in\mathbb{Z}^{+}$. Then

    (a) $AA^{Ⓦ_{m}}=P_{\mathcal{R}(A^{k}),\mathcal{N}((A^{k})^{*}A^{m})}$;

    (b) $A^{Ⓦ_{m}}A=P_{\mathcal{R}(A^{k}),\mathcal{N}((A^{k})^{*}A^{m+1})}$.

    Proof. (a) From Theorem 3.5 (a), it follows that $AA^{Ⓦ_{m}}\in\mathbb{C}^{P}_{n}$. By the definition of $A^{Ⓦ_{m}}$ and (3.2), it can be proved that $\mathcal{R}(AA^{Ⓦ_{m}})\subseteq\mathcal{R}(A^{k})$ and $r(AA^{Ⓦ_{m}})=r(A^{Ⓦ_{m}})=r(A^{k})=t$. Hence $\mathcal{R}(AA^{Ⓦ_{m}})=\mathcal{R}(A^{k})$. Similarly, we get that $\mathcal{N}(AA^{Ⓦ_{m}})=\mathcal{N}(A^{Ⓦ_{m}})=\mathcal{N}((A^{k})^{*}A^{m})$. Therefore, $AA^{Ⓦ_{m}}=P_{\mathcal{R}(A^{k}),\mathcal{N}((A^{k})^{*}A^{m})}$.

    (b) The proof follows similarly as for part (a).

    In this part, we present some characterizations of the m-weak group inverse in terms of the range space, null space, rank equalities, and projectors.

    The next theorem gives several characterizations of $A^{Ⓦ_{m}}$.

    Theorem 4.1. Let $A\in\mathbb{C}^{n\times n}_{k}$, $X\in\mathbb{C}^{n\times n}$ and let $m\in\mathbb{Z}^{+}$. Then the following statements are equivalent:

    (a) $X=A^{Ⓦ_{m}}$.

    (c) $\mathcal{R}(X)=\mathcal{R}(A^{k})$, $A^{m+1}X=P_{A^{k}}A^{m}$.

    (d) $\mathcal{R}(X)=\mathcal{R}(A^{k})$, $(A^{k})^{*}A^{m+1}X=(A^{k})^{*}A^{m}$.

    Proof. (a)⇔(b): This follows directly by Theorem 3.6 (b) and the definition of $A^{Ⓦ_{m}}$.

    (b)⇒(c): Premultiplying by $A^{Ⓦ_{m}}$, and by Lemma 2.4, it follows that

    (c)⇒(d): Premultiplying $A^{m+1}X=P_{A^{k}}A^{m}$ by $(A^{k})^{*}$, it follows that

    $(A^{k})^{*}A^{m+1}X=(A^{k})^{*}P_{A^{k}}A^{m}=(A^{k})^{*}A^{m}.$

    (d)⇒(a): Let $A$ be of the form (2.1). By (2.11) and $\mathcal{R}(X)=\mathcal{R}(A^{k})$, we obtain that

    $X=U\left[\begin{array}{cc} X_{1} & X_{2}\\ 0 & 0 \end{array}\right]U^{*},$

    where $X_{1}\in\mathbb{C}^{t\times t}$ and $X_{2}\in\mathbb{C}^{t\times(n-t)}$. Thus $(A^{k})^{*}A^{m+1}X=(A^{k})^{*}A^{m}$ implies that

    $U\left[\begin{array}{cc} (T^{k})^{*}T^{m+1}X_{1} & (T^{k})^{*}T^{m+1}X_{2}\\ T_{k}^{*}T^{m+1}X_{1} & T_{k}^{*}T^{m+1}X_{2} \end{array}\right]U^{*}=U\left[\begin{array}{cc} (T^{k})^{*}T^{m} & (T^{k})^{*}T_{m}\\ T_{k}^{*}T^{m} & T_{k}^{*}T_{m} \end{array}\right]U^{*},$

    i.e., $X_{1}=T^{-1}$ and $X_{2}=(T^{m+1})^{-1}T_{m}$, which imply $X=U\left[\begin{array}{cc} T^{-1} & (T^{m+1})^{-1}T_{m}\\ 0 & 0 \end{array}\right]U^{*}=A^{Ⓦ_{m}}$.

    By Theorem 3.5, it is known that $A^{Ⓦ_{m}}$ is an outer inverse of $A\in\mathbb{C}^{n\times n}_{k}$, i.e., $A^{Ⓦ_{m}}AA^{Ⓦ_{m}}=A^{Ⓦ_{m}}$. Using this result, we obtain some characterizations of $A^{Ⓦ_{m}}$.

    Theorem 4.2. Let $A\in\mathbb{C}^{n\times n}_{k}$, $X\in\mathbb{C}^{n\times n}$ and let $m\in\mathbb{Z}^{+}$. Then the following conditions are equivalent:

    (a) $X=A^{Ⓦ_{m}}$.

    (b) $XAX=X$, $\mathcal{R}(X)=\mathcal{R}(A^{k})$, $\mathcal{N}(X)=\mathcal{N}((A^{k})^{*}A^{m})$.

    (d) $XAX=X$, $\mathcal{R}(X)=\mathcal{R}(A^{k})$, $(A^{m})^{*}A^{m+1}X\in\mathbb{C}^{H}_{n}$.

    Proof. (a)⇒(b): It is a direct consequence of Theorem 3.6 (c).

    (b)(c): By XAX=X and R(X)=R(Ak), it follows that

    and

    Since AX, , we have and XAX=X, we obtain that XAk+1=Ak.

    (c)(d): We have that

    and by R(Ak)=R(XAk+1)R(X), we get R(XA)=R(Ak). Since , it follows that

    (d)⇒(a): Assume that $A$ is of the form (2.1). From $XAX=X$ and $\mathcal{R}(X)=\mathcal{R}(A^{k})$, we get that $XA^{k+1}=A^{k}$. Then it is easy to conclude that

    $X=U\left[\begin{array}{cc} T^{-1} & X_{2}\\ 0 & 0 \end{array}\right]U^{*},$

    where $X_{2}\in\mathbb{C}^{t\times(n-t)}$.

    Since

    $(A^{m})^{*}A^{m+1}X=U\left[\begin{array}{cc} (T^{m})^{*} & 0\\ T_{m}^{*} & (N^{m})^{*} \end{array}\right]\left[\begin{array}{cc} T^{m+1} & TT_{m}+SN^{m}\\ 0 & N^{m+1} \end{array}\right]\left[\begin{array}{cc} T^{-1} & X_{2}\\ 0 & 0 \end{array}\right]U^{*}=U\left[\begin{array}{cc} (T^{m})^{*}T^{m} & (T^{m})^{*}T^{m+1}X_{2}\\ T_{m}^{*}T^{m} & T_{m}^{*}T^{m+1}X_{2} \end{array}\right]U^{*}\in\mathbb{C}^{H}_{n},$

    we obtain that $X_{2}=T^{-(m+1)}T_{m}$. Hence $X=U\left[\begin{array}{cc} T^{-1} & T^{-(m+1)}T_{m}\\ 0 & 0 \end{array}\right]U^{*}=A^{Ⓦ_{m}}$.

    Motivated by the first two matrix equations $XA^{k+1}=A^{k}$ and $XAX=X$, we provide several characterizations of $A^{Ⓦ_{m}}$.

    Theorem 4.3. Let $A\in\mathbb{C}^{n\times n}_{k}$, $X\in\mathbb{C}^{n\times n}$ and let $m\in\mathbb{Z}^{+}$. Then the following conditions are equivalent:

    (a) $X=A^{Ⓦ_{m}}$.

    (b) $XA^{k+1}=A^{k}$, $AX^{2}=X$, $(A^{m})^{*}A^{m+1}X\in\mathbb{C}^{H}_{n}$.

    (c) $XA^{k+1}=A^{k}$, $AX^{2}=X$, $A^{m+1}X=P_{A^{k}}A^{m}$.

    Proof. (a)⇔(b): This follows by Proposition 4.2 in [19].

    (a)⇔(c): It is a direct consequence of Theorems 4.1 (c) and 4.2 (c).

    (c)⇒(d): Assume that $A$ is given by (2.1). By $XA^{k+1}=A^{k}$, we get that

    $X=U\left[\begin{array}{cc} T^{-1} & X_{2}\\ 0 & X_{4} \end{array}\right]U^{*},$

    where $X_{2}\in\mathbb{C}^{t\times(n-t)}$ and $X_{4}\in\mathbb{C}^{(n-t)\times(n-t)}$.

    By $AX^{2}=X$, we have that $X_{4}=NX_{4}^{2}$, which implies that

    $X_{4}=NX_{4}^{2}=N^{2}X_{4}^{3}=\cdots=N^{k}X_{4}^{k+1}=0.$

    Using (2.12) and (2.13) and that $A^{m+1}X=P_{A^{k}}A^{m}$, we get

    $X=U\left[\begin{array}{cc} T^{-1} & (T^{m+1})^{-1}T_{m}\\ 0 & 0 \end{array}\right]U^{*}.$

    Now, the proof follows directly.

    (d)⇒(a): Since $XA^{k+1}=A^{k}$, it follows that $\mathcal{R}(A^{k})=\mathcal{R}(XA^{k+1})\subseteq\mathcal{R}(X)$ and by $r(X)=r(A^{k})$, we get $\mathcal{R}(A^{k})=\mathcal{R}(X)$. Hence, according to Theorem 4.1 (b), we get $X=A^{Ⓦ_{m}}$.

    According to Theorem 3.7, it follows that $AX=P_{\mathcal{R}(A^{k}),\mathcal{N}((A^{k})^{*}A^{m})}$ and $XA=P_{\mathcal{R}(A^{k}),\mathcal{N}((A^{k})^{*}A^{m+1})}$ when $X=A^{Ⓦ_{m}}$. The converse implication, however, does not hold, as the following example shows.

    Example 4.4. Let $A=\left[\begin{array}{cc} I_{3} & L\\ 0 & N \end{array}\right]$ and $X=\left[\begin{array}{cc} I_{3} & L\\ 0 & L \end{array}\right]$, where $N=\left[\begin{array}{ccc} 0&1&0\\ 0&0&1\\ 0&0&0 \end{array}\right]$ and $L=\left[\begin{array}{ccc} 0&0&1\\ 0&0&0\\ 0&0&0 \end{array}\right]$. Then it is clear that $k={\rm Ind}(A)=3$ and $A^{Ⓦ_{2}}=\left[\begin{array}{cc} I_{3} & L\\ 0 & 0 \end{array}\right]$. It can be directly verified that $AX=P_{\mathcal{R}(A^{3}),\mathcal{N}((A^{3})^{*}A^{2})}$ and $XA=P_{\mathcal{R}(A^{3}),\mathcal{N}((A^{3})^{*}A^{3})}$. However, $X\neq A^{Ⓦ_{2}}$.

    Based on the example above, in the next theorem we consider other characterizations of $A^{Ⓦ_{m}}$ by using $AX=P_{\mathcal{R}(A^{k}),\mathcal{N}((A^{k})^{*}A^{m})}$ and $XA=P_{\mathcal{R}(A^{k}),\mathcal{N}((A^{k})^{*}A^{m+1})}$.

    Theorem 4.5. Let $A\in\mathbb{C}^{n\times n}_{k}$ be of the form (2.1), $X\in\mathbb{C}^{n\times n}$ and $m\in\mathbb{Z}^{+}$. Then the following statements are equivalent:

    (a) $X=A^{Ⓦ_{m}}$;

    (b) $AX=P_{\mathcal{R}(A^{k}),\mathcal{N}((A^{k})^{*}A^{m})}$, $XA=P_{\mathcal{R}(A^{k}),\mathcal{N}((A^{k})^{*}A^{m+1})}$ and $r(X)=r(A^{k})$;

    (c) $AX=P_{\mathcal{R}(A^{k}),\mathcal{N}((A^{k})^{*}A^{m})}$, $XA=P_{\mathcal{R}(A^{k}),\mathcal{N}((A^{k})^{*}A^{m+1})}$ and $XAX=X$;

    (d) $AX=P_{\mathcal{R}(A^{k}),\mathcal{N}((A^{k})^{*}A^{m})}$, $XA=P_{\mathcal{R}(A^{k}),\mathcal{N}((A^{k})^{*}A^{m+1})}$ and $AX^{2}=X$.

    Proof. (a)⇒(b): It is a direct consequence of Theorems 3.6 (a) and 3.7.

    (b)⇒(c): Since $XA=P_{\mathcal{R}(A^{k}),\mathcal{N}((A^{k})^{*}A^{m+1})}$ and $r(X)=r(A^{k})$, we get that $\mathcal{R}(X)=\mathcal{R}(A^{k})$, and by $XA=P_{\mathcal{R}(A^{k}),\mathcal{N}((A^{k})^{*}A^{m+1})}$, we obtain $XAX=X$.

    (c)⇒(d): From $XA=P_{\mathcal{R}(A^{k}),\mathcal{N}((A^{k})^{*}A^{m+1})}$ and $r(X)=r(A^{k})$, we have that $\mathcal{R}(X)=\mathcal{R}(A^{k})$, and by $AX=P_{\mathcal{R}(A^{k}),\mathcal{N}((A^{k})^{*}A^{m})}$, it follows that $AX^{2}=X$.

    (d)⇒(a): By $XA=P_{\mathcal{R}(A^{k}),\mathcal{N}((A^{k})^{*}A^{m+1})}$ and $AX^{2}=X$, it follows that

    $\mathcal{R}(A^{k})=\mathcal{R}(XA)\subseteq\mathcal{R}(X)=\mathcal{R}(AX^{2})=\cdots=\mathcal{R}(A^{k}X^{k+1})\subseteq\mathcal{R}(A^{k}),$

    which implies $\mathcal{R}(X)=\mathcal{R}(A^{k})$. By $AX=P_{\mathcal{R}(A^{k}),\mathcal{N}((A^{k})^{*}A^{m})}$, we get that

    $(A^{k})^{*}A^{m+1}X=(A^{k})^{*}A^{m}.$

    According to Theorem 4.1 (d), we have that $X=A^{Ⓦ_{m}}$.

    Analogously, we characterize $A^{Ⓦ_{m}}$ using only $AX=P_{\mathcal{R}(A^{k}),\mathcal{N}((A^{k})^{*}A^{m})}$ or only $XA=P_{\mathcal{R}(A^{k}),\mathcal{N}((A^{k})^{*}A^{m+1})}$, as follows:

    Theorem 4.6. Let $A\in\mathbb{C}^{n\times n}_{k}$, $X\in\mathbb{C}^{n\times n}$ and let $m\in\mathbb{Z}^{+}$. Then

    (a) $X=A^{Ⓦ_{m}}$ is the unique solution of the system of equations:

    $AX=P_{\mathcal{R}(A^{k}),\mathcal{N}((A^{k})^{*}A^{m})},\qquad \mathcal{R}(X)=\mathcal{R}(A^{k}). \qquad (4.1)$

    (b) $X=A^{Ⓦ_{m}}$ is the unique solution of the system of equations:

    $XA=P_{\mathcal{R}(A^{k}),\mathcal{N}((A^{k})^{*}A^{m+1})},\qquad \mathcal{N}(X)=\mathcal{N}((A^{k})^{*}A^{m}). \qquad (4.2)$

    Proof. (a) By Theorems 3.6 (b) and 3.7 (a), it follows that $X=A^{Ⓦ_{m}}$ is a solution of the system of Eq (4.1). Conversely, if the system (4.1) is consistent, it follows that $(A^{k})^{*}A^{m}AX=(A^{k})^{*}A^{m}$. Hence, by Theorem 4.1 (d), $X=A^{Ⓦ_{m}}$.

    (b) By Theorems 3.6 (b) and 3.7 (b), it is evident that $X=A^{Ⓦ_{m}}$ is a solution of (4.2). Next, we prove the uniqueness of the solution.

    Assume that X1,X2 satisfy the system of Eq (4.2). Then X1A=X2A and N(X1)=N(X2)=N((Ak)Am). Thus, we get that R(X1X2)N(A)N((Ak)) and R(X1X2)R((Am)Ak). For any ηN((Ak))R((Am)Ak), we obtain that (Ak)η=0, η=(Am)Akξ for some ξCn. Since Ind(A)=k, we derive that R(Ak)=R(Ak+m), and it follows that Akξ=Ak+mξ0 for some ξ0Cn×n. Then we have that

    0=(Ak)η=(Ak+m)Ak+mξ0.

    Premultiplying the equation above by ξ0, we derive that (Ak+mξ0)Ak+mξ0=0, which implies Ak+mξ0=0. Hence η=0, i.e., R(X1X2)={0}, which implies X1=X2.

    Remark 4.7. Notice that the condition $\mathcal{R}(X)=\mathcal{R}(A^{k})$ in Theorem 4.6 (a) can be replaced by $\mathcal{R}(X)\subseteq\mathcal{R}(A^{k})$. Also, the condition $\mathcal{N}(X)=\mathcal{N}((A^{k})^{*}A^{m})$ in Theorem 4.6 (b) can be replaced by $\mathcal{N}(X)\supseteq\mathcal{N}((A^{k})^{*}A^{m})$.

    From Theorem 3.1, we get an expression of $A^{Ⓦ_{m}}$ in terms of the core-EP decomposition. In the next results, we present several expressions of $A^{Ⓦ_{m}}$ in terms of certain generalized inverses.

    Theorem 5.1. Let $A\in\mathbb{C}^{n\times n}_{k}$ and let $m\in\mathbb{Z}^{+}$. Then the following statements hold:

    (a) $A^{Ⓦ_{m}}=(A^{D})^{m+1}P_{A^{k}}A^{m}$.

    (c) $A^{Ⓦ_{m}}=(A^{k})^{\#}A^{k-m-1}P_{A^{k}}A^{m}$ $(k\geq m+1)$.

    (d) $A^{Ⓦ_{m}}=(A^{m+1}P_{A^{k}})^{\dagger}A^{m}$.

    (e) $A^{Ⓦ_{m}}=A^{m-1}P_{A^{k}}(A^{m})^{Ⓦ}$.

    Proof. Assume that $A$ is given by (2.1). By (2.3)–(2.7) and (2.11)–(2.13), we get that

    $A^{m+1}P_{A^{k}}=U\left[\begin{array}{cc} T^{m+1} & 0\\ 0 & 0 \end{array}\right]U^{*}, \qquad (5.1)$
    $(A^{m+1}P_{A^{k}})^{\dagger}=U\left[\begin{array}{cc} T^{-m-1} & 0\\ 0 & 0 \end{array}\right]U^{*}, \qquad (5.2)$
    $(A^{D})^{m+1}=U\left[\begin{array}{cc} T^{-m-1} & T^{-(m+k+1)}T_{k}\\ 0 & 0 \end{array}\right]U^{*}, \qquad (5.3)$
    $(A^{k})^{\#}=U\left[\begin{array}{cc} T^{-k} & T^{-2k}T_{k}\\ 0 & 0 \end{array}\right]U^{*}, \qquad (5.4)$
    $A^{m-1}P_{A^{k}}=U\left[\begin{array}{cc} T^{m-1} & 0\\ 0 & 0 \end{array}\right]U^{*}, \qquad (5.5)$
    $(A^{m})^{Ⓦ}=U\left[\begin{array}{cc} T^{-m} & T^{-2m}T_{m}\\ 0 & 0 \end{array}\right]U^{*}. \qquad (5.6)$

    (a) By (2.12), (2.13) and (5.3), it follows that

    $(A^{D})^{m+1}P_{A^{k}}A^{m}=U\left[\begin{array}{cc} T^{-1} & (T^{k+1})^{-1}T_{k}\\ 0 & 0 \end{array}\right]^{m+1}\left[\begin{array}{cc} I_{t} & 0\\ 0 & 0 \end{array}\right]\left[\begin{array}{cc} T^{m} & T_{m}\\ 0 & N^{m} \end{array}\right]U^{*}=U\left[\begin{array}{cc} T^{-1} & (T^{m+1})^{-1}T_{m}\\ 0 & 0 \end{array}\right]U^{*}.$

    Hence $A^{Ⓦ_{m}}=(A^{D})^{m+1}P_{A^{k}}A^{m}$.

    The proofs of (b)–(e) are analogous to that of (a).

    Next, we consider the accuracy of the expression in Theorem 5.1 (a) for computing the m-weak group inverse.

    Example 5.2. Let

    Assume that A is given by (2.1). Then

    It is clear that k = Ind(A) = 3. According to (2.12), (2.13), (3.2) and (5.3), a straightforward computation shows that

    Let $K=(A^{D})^{3}P_{A^{3}}A^{2}$. Then

    and

    $r_{1}=\|A^{Ⓦ_{2}}-K\|=6.6885\times10^{-14},$

    where $\|\cdot\|$ is the Frobenius norm.

    Hence, Theorem 5.1 (a) gives a good result in terms of computational accuracy.
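    The same accuracy experiment can be repeated on any matrix whose core-EP blocks are known; the sketch below uses an arbitrarily generated test matrix (not the matrix of Example 5.2) and compares the block formula (3.2) with the representation of Theorem 5.1 (a) in the Frobenius norm.

```python
import numpy as np

rng = np.random.default_rng(2)
t, s, m = 2, 3, 2

# build A = U [[T, S], [0, N]] U^* with known core-EP blocks (cf. (2.1))
T = rng.standard_normal((t, t)) + 3 * np.eye(t)
S = rng.standard_normal((t, s))
N = np.diag(np.ones(s - 1), 1)                     # nilpotent, index k = 3
U, _ = np.linalg.qr(rng.standard_normal((t + s, t + s)))
A = U @ np.block([[T, S], [np.zeros((s, t)), N]]) @ U.T
k = s

# reference value from the block formula (3.2): A^{W_m} = U [[T^{-1}, T^{-(m+1)} T_m],[0, 0]] U^*
Tm = sum(np.linalg.matrix_power(T, j) @ S @ np.linalg.matrix_power(N, m - 1 - j) for j in range(m))
Ti = np.linalg.inv(T)
ref = U @ np.block([[Ti, np.linalg.matrix_power(Ti, m + 1) @ Tm],
                    [np.zeros((s, t)), np.zeros((s, s))]]) @ U.T

# Theorem 5.1 (a): K = (A^D)^{m+1} P_{A^k} A^m, with A^D = A^k (A^{2k+1})^+ A^k
Ak = np.linalg.matrix_power(A, k)
AD = Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak
K = np.linalg.matrix_power(AD, m + 1) @ (Ak @ np.linalg.pinv(Ak)) @ np.linalg.matrix_power(A, m)

print(np.linalg.norm(ref - K))   # residual ||A^{W_m} - K||_F, expected near machine precision
```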

    In the following theorem, we present a connection between the (B,C)-inverse and the m-weak group inverse, showing that the m-weak group inverse of $A\in\mathbb{C}^{n\times n}_{k}$ is its $(A^{k},(A^{k})^{*}A^{m})$-inverse.

    Theorem 5.3. Let $A\in\mathbb{C}^{n\times n}_{k}$ and let $m\in\mathbb{Z}^{+}$. Then $A^{Ⓦ_{m}}=A^{(A^{k},(A^{k})^{*}A^{m})}$.

    Proof. By Theorem 3.7, we have that $A^{Ⓦ_{m}}AA^{k}=A^{k}$ and $((A^{k})^{*}A^{m})AA^{Ⓦ_{m}}=(A^{k})^{*}A^{m}$. From Theorem 3.6 (b), we derive that $\mathcal{R}(A^{Ⓦ_{m}})=\mathcal{R}(A^{k})$ and $\mathcal{N}(A^{Ⓦ_{m}})=\mathcal{N}((A^{k})^{*}A^{m})$. Evidently, $A^{Ⓦ_{m}}=A^{(A^{k},(A^{k})^{*}A^{m})}$.

    Now we will give some limit expressions of Am, but before we need the next auxiliary lemma:

    Lemma 5.4. [37] Let $A\in\mathbb{C}^{m\times n}$, $X\in\mathbb{C}^{n\times p}$ and $Y\in\mathbb{C}^{p\times m}$. Then the following statements are equivalent:

    (a) $\lim_{\lambda\to 0}X(\lambda I_{p}+YAX)^{-1}Y$ exists;

    (b) $r(XYAXY)=r(XY)$;

    (c) $A^{(2)}_{\mathcal{R}(XY),\mathcal{N}(XY)}$ exists,

    in which case,

    $\lim_{\lambda\to 0}X(\lambda I_{p}+YAX)^{-1}Y=A^{(2)}_{\mathcal{R}(XY),\mathcal{N}(XY)}.$

    Theorem 5.5. Let $A\in\mathbb{C}^{n\times n}_{k}$ be given by (2.1) and let $m\in\mathbb{Z}^{+}$. Then the following statements hold:

    (a) $A^{Ⓦ_{m}}=\lim_{\lambda\to 0}A^{k}(\lambda I_{n}+(A^{k})^{*}A^{k+m+1})^{-1}(A^{k})^{*}A^{m}$;

    (b) $A^{Ⓦ_{m}}=\lim_{\lambda\to 0}A^{k}(A^{k})^{*}(\lambda I_{n}+A^{k+m+1}(A^{k})^{*})^{-1}A^{m}$;

    (c) $A^{Ⓦ_{m}}=\lim_{\lambda\to 0}A^{k}(A^{k})^{*}A^{m}(\lambda I_{n}+A^{k+1}(A^{k})^{*}A^{m})^{-1}$;

    (d) $A^{Ⓦ_{m}}=\lim_{\lambda\to 0}(\lambda I_{n}+A^{k}(A^{k})^{*}A^{m+1})^{-1}A^{k}(A^{k})^{*}A^{m}$.

    Proof. (a) It is easy to check that $r(A^{k}(A^{k})^{*}A^{m})=r((A^{k})^{*}A^{m})=r(A^{k})=t$. By Theorem 3.6, we get that $\mathcal{R}(A^{k})=\mathcal{R}(A^{k}(A^{k})^{*}A^{m})$ and $\mathcal{N}((A^{k})^{*}A^{m})=\mathcal{N}(A^{k}(A^{k})^{*}A^{m})$. From Theorem 3.6, we get

    $A^{Ⓦ_{m}}=A^{(2)}_{\mathcal{R}(A^{k}),\mathcal{N}((A^{k})^{*}A^{m})}=A^{(2)}_{\mathcal{R}(A^{k}(A^{k})^{*}A^{m}),\mathcal{N}(A^{k}(A^{k})^{*}A^{m})}.$

    Let $X=A^{k}$ and $Y=(A^{k})^{*}A^{m}$. By Lemma 5.4, we get that

    $A^{Ⓦ_{m}}=\lim_{\lambda\to 0}A^{k}(\lambda I_{n}+(A^{k})^{*}A^{k+m+1})^{-1}(A^{k})^{*}A^{m}.$

    The statements (b)–(d) can be proved similarly.

    The following example will test the accuracy of expression in Theorem 5.5 (a) for computing the m-weak group inverse.

    Example 5.6. Let

    with k = Ind(A) = 3. By , we get

    Together with (3.2), it follows that

    Let $L=\lim_{\lambda\to 0}A^{3}(\lambda I_{n}+(A^{3})^{*}A^{6})^{-1}(A^{3})^{*}A^{2}$. Then

    and

    $r_{2}=\|A^{Ⓦ_{2}}-L\|=6.136\times10^{-11},$

    where $\|\cdot\|$ is the Frobenius norm. Hence, the representation in Theorem 5.5 (a) is efficient for computing the m-weak group inverse.
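    Theorem 5.5 (a) can likewise be approximated numerically by fixing a small $\lambda>0$; in the sketch below the test matrix (the one from Example 3.4) and $\lambda=10^{-9}$ are illustrative choices.

```python
import numpy as np

def drazin(A, k):
    Ak = np.linalg.matrix_power(A, k)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

def mwg_thm51(A, k, m):
    # Theorem 5.1 (a): A^{W_m} = (A^D)^{m+1} P_{A^k} A^m
    Ak = np.linalg.matrix_power(A, k)
    return np.linalg.matrix_power(drazin(A, k), m + 1) @ Ak @ np.linalg.pinv(Ak) @ np.linalg.matrix_power(A, m)

def mwg_limit(A, k, m, lam=1e-9):
    # Theorem 5.5 (a): A^{W_m} = lim_{l -> 0} A^k (l I + (A^k)^* A^{k+m+1})^{-1} (A^k)^* A^m
    n = A.shape[0]
    Ak = np.linalg.matrix_power(A, k)
    return Ak @ np.linalg.inv(lam * np.eye(n) + Ak.conj().T @ np.linalg.matrix_power(A, k + m + 1)) \
              @ Ak.conj().T @ np.linalg.matrix_power(A, m)

# matrix of Example 3.4, Ind(A) = 3
A = np.block([[np.eye(3), np.eye(3)],
              [np.zeros((3, 3)), np.diag([1., 1.], 1)]])
k, m = 3, 2
print(np.linalg.norm(mwg_thm51(A, k, m) - mwg_limit(A, k, m)))   # small residual
```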

    In this section, we consider some relations between the m-weak group inverse and other generalized inverses, as well as certain matrix classes. The symbols $\mathbb{C}^{{\rm OP}}_{n}$, $\mathbb{C}^{{\rm EP}}_{n}$ and $\mathbb{C}^{i{\rm EP}}_{n}$ stand for the subsets of $\mathbb{C}^{n\times n}$ consisting of orthogonal projectors (Hermitian idempotent matrices), EP (range-Hermitian) matrices, i-EP matrices and $k$-core-EP matrices, respectively.

    First, we state the following auxiliary lemma:

    Lemma 6.1. Let $A\in\mathbb{C}^{n\times n}_{k}$ be given by (2.1). Then $T_{m}=0$ if and only if $S=0$.

    Proof. Notice that $T_{m}=0$ can be equivalently expressed by the equation below:

    $T^{m-1}S+T^{m-2}SN+\cdots+TSN^{m-2}+SN^{m-1}=0. \qquad (6.1)$

    Multiplying the equation above from the right by $N^{k-1}$, we get $SN^{k-1}=0$. Then, multiplying from the right by $N^{k-2}$, we get $SN^{k-2}=0$. Similarly, we get $SN^{k-3}=0,\ldots,SN=0$. Now, by (6.1), it follows that $T^{m-1}S=0$, i.e., $S=0$.

    The next theorem provides some necessary and sufficient conditions for $A^{Ⓦ_{m}}$ to coincide with various transformations of $A\in\mathbb{C}^{n\times n}_{k}$.

    Theorem 6.2. Let $A\in\mathbb{C}^{n\times n}_{k}$ and let $m\in\mathbb{Z}^{+}$. Then the following statements hold:

    (a) $A^{Ⓦ_{m}}\in A\{1\}$ if and only if $A\in\mathbb{C}^{{\rm CM}}_{n}$.

    (b) $A^{Ⓦ_{m}}\in\mathbb{C}^{{\rm CM}}_{n}$.

    (c) $A^{Ⓦ_{m}}=A$ if and only if $A=A^{3}$.

    (d) $A^{Ⓦ_{m}}=A^{*}$ if and only if $AA^{*}\in\mathbb{C}^{{\rm OP}}_{n}$ and $A\in\mathbb{C}^{{\rm EP}}_{n}$.

    (e) $A^{Ⓦ_{m}}=P_{A}$ if and only if $A\in\mathbb{C}^{{\rm OP}}_{n}$.

    Proof. Let A be given by (2.1).

    (a) By (3.2), it follows that

    AmA{1}AAmA=AU[TS+TmTmN00]U=U[TS0N]UN=0ACCMn.

    (b) By (3.2), it is clear that r(Am)=r((Am)2)=t, which implies AmCCMn.

    (c) From (3.2), we get that

    Am=AU[T1(Tm+1)1Tm00]U=U[TS0N]UT2=It and N=0A=A3.

    (d) According to (3.2), we obtain that

    Am=AU[T1(Tm+1)1Tm00]U=U[T0SN]UT1=T,S=0 and N=0AACOPnand ACEPn.

    (e) By (2.9) and (3.2), it follows that

    Am=PAU[T1(Tm+1)1Tm00]U=U[It00NN]UT=It,NN=0 and Tm=0T=It,S=0 and N=0.

    Hence Am=PA if and only if ACOPn.

    Using the core-EP decomposition, we prove that $A\in\mathbb{C}^{i{\rm EP}}_{n}$ if and only if $A^{Ⓦ_{m}}\in\mathbb{C}^{{\rm EP}}_{n}$. Therefore, we will consider certain equivalent conditions for $A^{Ⓦ_{m}}\in\mathbb{C}^{{\rm EP}}_{n}$.

    Lemma 6.3. [17] Let $A\in\mathbb{C}^{n\times n}_{k}$ be of the form (2.1). Then $A\in\mathbb{C}^{i{\rm EP}}_{n}$ if and only if $S=0$.

    Moreover, $S=0$ if and only if

    Theorem 6.4. Let $A\in\mathbb{C}^{n\times n}_{k}$ and let $m\in\mathbb{Z}^{+}$. The following statements are equivalent:

    (a) $A^{Ⓦ_{m}}\in\mathbb{C}^{{\rm EP}}_{n}$;

    (b) $A\in\mathbb{C}^{i{\rm EP}}_{n}$;

    (c) $A^{Ⓦ}\in\mathbb{C}^{{\rm EP}}_{n}$;

    Proof. Let $A\in\mathbb{C}^{n\times n}_{k}$ be of the form (2.1). According to Lemma 6.3, we will prove that each of the statements (a), (c), (d) and (e) is equivalent to $S=0$.

    (a) According to (3.2) and Lemma 6.1, it follows that

    $A^{Ⓦ_{m}}\in\mathbb{C}^{{\rm EP}}_{n}\Leftrightarrow \mathcal{R}(A^{Ⓦ_{m}})=\mathcal{R}((A^{Ⓦ_{m}})^{*})\Leftrightarrow (T^{m+1})^{-1}T_{m}=0\Leftrightarrow S=0.$

    (c) By (2.4), we get that

    $A^{Ⓦ}\in\mathbb{C}^{{\rm EP}}_{n}\Leftrightarrow \mathcal{R}(A^{Ⓦ})=\mathcal{R}((A^{Ⓦ})^{*})\Leftrightarrow T^{-2}S=0\Leftrightarrow S=0.$

    (d) By (2.3), (3.2) and Lemma 6.1, it follows that

    (e) From (2.3), (3.2) and Lemma 6.1, we get that

    In [22], the authors proved that $A^{Ⓦ_{m}}=A^{D}$ if and only if $SN^{m}=0$. In the following results, we investigate the relations between the m-weak group inverse and other generalized inverses, such as the MP-inverse, group inverse, core inverse, DMP-inverse, dual DMP-inverse and weak group inverse, by means of the core-EP decomposition.

    Theorem 6.5. Let $A\in\mathbb{C}^{n\times n}_{k}$ be given by (2.1) and let $m\in\mathbb{Z}^{+}$. Then the following statements hold:

    (a) $A^{Ⓦ_{m}}=A^{\dagger}\Leftrightarrow A\in\mathbb{C}^{{\rm EP}}_{n}$;

    (b) $A^{Ⓦ_{m}}=A^{\#}\Leftrightarrow A\in\mathbb{C}^{{\rm CM}}_{n}$;

    (d) $A^{Ⓦ_{m}}=A^{D,\dagger}\Leftrightarrow T^{k-m}T_{m}=T_{k}NN^{\dagger}$;

    (e) $A^{Ⓦ_{m}}=A^{\dagger,D}\Leftrightarrow SN^{m}=0$ and $S=SNN^{\dagger}$;

    (f) $A^{Ⓦ_{m}}=A^{Ⓦ}\Leftrightarrow SN=0$ $(m>1)$.

    Proof. (a) It follows from (2.8) and (3.2) that

    Am=AU[T1(Tm+1)1Tm00]U=U[TTSNMNMSN]UM=0,N=0,T1=T and (Tm+1)1Tm=TSNS=0 and N=0ACEPn.

    (b) Since $A^{\#}$ exists if and only if $A\in\mathbb{C}^{{\rm CM}}_{n}$, which is equivalent to $N=0$, we get by (2.6) and (3.2) the following:

    Am=A#U[T1(Tm+1)1Tm00]U=U[T1T2S00]U and N=0(Tm+1)1Tm=T2S and N=0N=0ACCMn.

    (c) The proof follows similarly as in (b).

    (d) Using (2.5) and (2.9) and the fact that $A^{D,\dagger}=A^{D}AA^{\dagger}$, we derive

    $A^{D,\dagger}=U\left[\begin{array}{cc} T^{-1} & (T^{k+1})^{-1}T_{k}NN^{\dagger}\\ 0 & 0 \end{array}\right]U^{*},$

    and by (3.2), it follows that

    $A^{Ⓦ_{m}}=A^{D,\dagger}\Leftrightarrow U\left[\begin{array}{cc} T^{-1} & (T^{m+1})^{-1}T_{m}\\ 0 & 0 \end{array}\right]U^{*}=U\left[\begin{array}{cc} T^{-1} & (T^{k+1})^{-1}T_{k}NN^{\dagger}\\ 0 & 0 \end{array}\right]U^{*}\Leftrightarrow T^{k-m}T_{m}=T_{k}NN^{\dagger}.$

    (e) Using (2.5) and (2.10) and the fact that $A^{\dagger,D}=A^{\dagger}AA^{D}$, we obtain that

    $A^{\dagger,D}=U\left[\begin{array}{cc} T^{*}\Delta & T^{*}\Delta T^{-k}T_{k}\\ M^{*}\Delta & M^{*}\Delta T^{-k}T_{k} \end{array}\right]U^{*},$

    which together with (3.2), gives

    Am=A,DU[T1(Tm+1)1Tm00]U=U[TTTkTkMMTkTk]UM=0,T1=T and (Tm+1)1Tm=TTkTkS=SNN and TkmTm=TkS=SNN and SNm=0.

    (f) If $m>1$, from (2.4) and (3.2), we get

    $A^{Ⓦ_{m}}=A^{Ⓦ}\Leftrightarrow U\left[\begin{array}{cc} T^{-1} & (T^{m+1})^{-1}T_{m}\\ 0 & 0 \end{array}\right]U^{*}=U\left[\begin{array}{cc} T^{-1} & T^{-2}S\\ 0 & 0 \end{array}\right]U^{*}\Leftrightarrow (T^{m+1})^{-1}T_{m}=T^{-2}S.$

    Clearly, $(T^{m+1})^{-1}T_{m}=T^{-2}S$ is equivalent to $T^{-3}SN+\cdots+(T^{m+1})^{-1}SN^{m-1}=0$, which is further equivalent to $SN=0$. Hence $A^{Ⓦ_{m}}=A^{Ⓦ}$ if and only if $SN=0$.

    In this section, we consider a relation between the m-weak group inverse and a nonsingular bordered matrix, which is then applied to Cramer's rule for the solution of a restricted matrix equation.

    Theorem 7.1. Let $A\in\mathbb{C}^{n\times n}_{k}$ be such that $r(A^{k})=t$ and let $m\in\mathbb{Z}^{+}$. Let $B\in\mathbb{C}^{n\times(n-t)}$ and $C^{*}\in\mathbb{C}^{n\times(n-t)}$ be of full column rank such that $\mathcal{N}((A^{k})^{*}A^{m})=\mathcal{R}(B)$ and $\mathcal{R}(A^{k})=\mathcal{N}(C)$. Then the bordered matrix

    $K=\left[\begin{array}{cc} A & B\\ C & 0 \end{array}\right]$

    is invertible and its inverse is given by

    $K^{-1}=\left[\begin{array}{cc} A^{Ⓦ_{m}} & (I_{n}-A^{Ⓦ_{m}}A)C^{\dagger}\\ B^{\dagger}(I_{n}-AA^{Ⓦ_{m}}) & B^{\dagger}(AA^{Ⓦ_{m}}A-A)C^{\dagger} \end{array}\right].$

    Proof. Let $X=\left[\begin{array}{cc} A^{Ⓦ_{m}} & (I_{n}-A^{Ⓦ_{m}}A)C^{\dagger}\\ B^{\dagger}(I_{n}-AA^{Ⓦ_{m}}) & B^{\dagger}(AA^{Ⓦ_{m}}A-A)C^{\dagger} \end{array}\right]$. Since $\mathcal{R}(A^{Ⓦ_{m}})=\mathcal{R}(A^{k})=\mathcal{N}(C)$, we have that $CA^{Ⓦ_{m}}=0$. Since $C$ is a full row rank matrix, it is right invertible and $CC^{\dagger}=I_{n-t}$. From

    $\mathcal{R}(I_{n}-AA^{Ⓦ_{m}})=\mathcal{N}(AA^{Ⓦ_{m}})=\mathcal{N}((A^{k})^{*}A^{m})=\mathcal{R}(B)=\mathcal{R}(BB^{\dagger}),$

    we get $BB^{\dagger}(I_{n}-AA^{Ⓦ_{m}})=I_{n}-AA^{Ⓦ_{m}}$. Hence,

    $KX=\left[\begin{array}{cc} AA^{Ⓦ_{m}}+BB^{\dagger}(I_{n}-AA^{Ⓦ_{m}}) & A(I_{n}-A^{Ⓦ_{m}}A)C^{\dagger}+BB^{\dagger}(AA^{Ⓦ_{m}}A-A)C^{\dagger}\\ CA^{Ⓦ_{m}} & C(I_{n}-A^{Ⓦ_{m}}A)C^{\dagger} \end{array}\right]=\left[\begin{array}{cc} AA^{Ⓦ_{m}}+I_{n}-AA^{Ⓦ_{m}} & A(I_{n}-A^{Ⓦ_{m}}A)C^{\dagger}-(I_{n}-AA^{Ⓦ_{m}})AC^{\dagger}\\ 0 & CC^{\dagger} \end{array}\right]=\left[\begin{array}{cc} I_{n} & 0\\ 0 & I_{n-t} \end{array}\right].$

    Thus, $X=K^{-1}$.
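    Theorem 7.1 can be checked numerically: choose $B$ and $C$ with the prescribed range and null space (here obtained from singular value decompositions, an implementation choice not taken from the paper), build $K$, and read off $A^{Ⓦ_{m}}$ as the (1,1) block of $K^{-1}$.

```python
import numpy as np

def drazin(A, k):
    Ak = np.linalg.matrix_power(A, k)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

def mwg(A, k, m):
    # A^{W_m} = (A^D)^{m+1} P_{A^k} A^m  (Theorem 5.1 (a))
    Ak = np.linalg.matrix_power(A, k)
    return np.linalg.matrix_power(drazin(A, k), m + 1) @ Ak @ np.linalg.pinv(Ak) @ np.linalg.matrix_power(A, m)

def orthonormal_null_basis(M):
    # columns spanning N(M), via the SVD
    _, s, Vh = np.linalg.svd(M)
    rank = int(np.sum(s > 1e-10))
    return Vh[rank:].conj().T

# matrix of Example 3.4, Ind(A) = 3, m = 2
A = np.block([[np.eye(3), np.eye(3)],
              [np.zeros((3, 3)), np.diag([1., 1.], 1)]])
k, m = 3, 2
n = A.shape[0]
Ak = np.linalg.matrix_power(A, k)
W = mwg(A, k, m)

B = orthonormal_null_basis(Ak.conj().T @ np.linalg.matrix_power(A, m))  # R(B) = N((A^k)^* A^m)
C = orthonormal_null_basis(Ak.conj().T).conj().T                        # N(C) = R(A^k), full row rank

K = np.block([[A, B], [C, np.zeros((C.shape[0], B.shape[1]))]])
Ki = np.linalg.inv(K)
print(np.allclose(Ki[:n, :n], W))    # the (1,1) block of K^{-1} equals A^{W_m}
```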

    In the next result, we discuss the solution of the restricted matrix equation

    $AX=D,\qquad \mathcal{R}(X)\subseteq\mathcal{R}(A^{k}), \qquad (7.1)$

    using the m-weak group inverse.

    Theorem 7.2. Let $A\in\mathbb{C}^{n\times n}_{k}$, $X\in\mathbb{C}^{n\times p}$ and $D\in\mathbb{C}^{n\times p}$. If $\mathcal{R}(D)\subseteq\mathcal{R}(A^{k})$, then the restricted matrix equation

    $AX=D,\qquad \mathcal{R}(X)\subseteq\mathcal{R}(A^{k}) \qquad (7.2)$

    has a unique solution $X=A^{Ⓦ_{m}}D$.

    Proof. Since $\mathcal{R}(A^{k})=\mathcal{R}(AA^{k})=A\mathcal{R}(A^{k})$ and $\mathcal{R}(D)\subseteq\mathcal{R}(A^{k})$, we get that $\mathcal{R}(D)\subseteq A\mathcal{R}(A^{k})$, which implies the solvability of the matrix Eq (7.1). Obviously, $X=A^{Ⓦ_{m}}D$ is a solution of (7.1). Next, we prove the uniqueness of $X$. If $X_{1}$ also satisfies (7.1), then

    $X=A^{Ⓦ_{m}}D=A^{Ⓦ_{m}}AX_{1}=P_{\mathcal{R}(A^{k}),\mathcal{N}((A^{k})^{*}A^{m+1})}X_{1}=X_{1}.$
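    A small numerical illustration of Theorem 7.2 (the right-hand side D below is generated so that $\mathcal{R}(D)\subseteq\mathcal{R}(A^{k})$; the matrix A is the one from Example 3.4):

```python
import numpy as np

def drazin(A, k):
    Ak = np.linalg.matrix_power(A, k)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

def mwg(A, k, m):
    # A^{W_m} = (A^D)^{m+1} P_{A^k} A^m  (Theorem 5.1 (a))
    Ak = np.linalg.matrix_power(A, k)
    return np.linalg.matrix_power(drazin(A, k), m + 1) @ Ak @ np.linalg.pinv(Ak) @ np.linalg.matrix_power(A, m)

rng = np.random.default_rng(3)
A = np.block([[np.eye(3), np.eye(3)],
              [np.zeros((3, 3)), np.diag([1., 1.], 1)]])   # Ind(A) = 3
k, m = 3, 2

Ak = np.linalg.matrix_power(A, k)
D = Ak @ rng.standard_normal((6, 4))        # guarantees R(D) in R(A^k)
X = mwg(A, k, m) @ D                        # X = A^{W_m} D

P_Ak = Ak @ np.linalg.pinv(Ak)
print(np.allclose(A @ X, D))                # AX = D
print(np.allclose(P_Ak @ X, X))             # R(X) in R(A^k)
```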

    Based on the nonsingularity of the bordered matrix given in Theorem 7.1, we show in the next theorem how Cramer's rule can be used for solving the restricted matrix Eq (7.1).

    Theorem 7.3. Let $A\in\mathbb{C}^{n\times n}_{k}$ be such that $r(A^{k})=t$, and let $X\in\mathbb{C}^{n\times p}$ and $D\in\mathbb{C}^{n\times p}$. Let $B\in\mathbb{C}^{n\times(n-t)}$ and $C^{*}\in\mathbb{C}^{n\times(n-t)}$ be full column rank matrices such that $\mathcal{N}((A^{k})^{*}A^{m})=\mathcal{R}(B)$ and $\mathcal{R}(A^{k})=\mathcal{N}(C)$. Then the unique solution of the restricted matrix Eq (7.1) is given by $X=[x_{ij}]$, where

    $x_{ij}=\dfrac{\det\left[\begin{array}{cc} A(i\rightarrow d_{j}) & B\\ C(i\rightarrow 0) & 0 \end{array}\right]}{\det\left[\begin{array}{cc} A & B\\ C & 0 \end{array}\right]},\quad i=1,2,\ldots,n,\ j=1,2,\ldots,p, \qquad (7.3)$

    where $d_{j}$ denotes the $j$-th column of $D$, and $A(i\rightarrow d_{j})$ (resp. $C(i\rightarrow 0)$) denotes the matrix obtained from $A$ (resp. $C$) by replacing its $i$-th column with $d_{j}$ (resp. with a zero column).

    Proof. Since X is the solution of the restricted matrix Eq (7.1) , we get that \mathcal{R}(X)\subseteq\mathcal{R}(A^{k}) = \mathcal{N}(C) , which implies CX = 0 . Then the restricted matrix Eq (7.1) can be rewritten as

    \left[\begin{array}{cc} A & B\\ C & 0\\ \end{array}\right]\left[\begin{array}{cccc} X \\ 0 \\ \end{array}\right] = \left[\begin{array}{cccc} AX \\ CX\\ \end{array}\right] = \left[\begin{array}{cccc} D\\ 0\\ \end{array}\right].

    By Theorem 7.1, we have that \left[\begin{array}{cc} A & B\\ C & 0\\ \end{array}\right] is invertible. Consequently, (7.3) follows from Cramer's rule applied to the above equation.

    Example 7.4. Let

    \begin{eqnarray*} A = \left[ \begin{array}{ccccccc} 1& 0& 0& 1& 0& 0\\ 0& 1& 0& 0& 1& 0\\ 0& 0& 1& 0& 0& 1\\ 0& 0& 0& 0& 1& 0\\ 0& 0& 0& 0& 0& 1\\ 0& 0& 0& 0& 0& 0 \end{array} \right], \ \ D = \left[ \begin{array}{ccccccc} 10& 14 & 24& 28\\ 6 & 19 & 20& 22\\ 4 & 10& 14& 15\\ 0 & 0 & 0& 0\\ 0 & 0& 0& 0\\ 0 & 0& 0& 0 \end{array} \right], \\ B = \left[ \begin{array}{ccccccc} 1& 2& 3\\ 0& 1& 2\\ 1& 3& 6\\ -2& -4& -7\\ 1& 2& 4\\ -1& -3& -6 \end{array} \right], \ \ C = \left[ \begin{array}{ccccccc} 0& 0& 0& 1& 0& 6\\ 0& 0& 0& 1& 2& 0\\ 0& 0& 0& 0& 0& 3 \end{array} \right]. \end{eqnarray*}

    It can be verified that {{{\rm{ Ind}}}}(A) = 3. Then we get that

    It is easy to check that

    \begin{eqnarray*} X = A^{{Ⓦ}_{2}}D = \left[ \begin{array}{ccccccc} 10& 14& 24& 28\\ 6& 19& 20& 22\\ 4& 10& 14& 15\\ 0& 0& 0& 0 \\ 0& 0& 0& 0 \\ 0& 0& 0& 0 \end{array} \right] \end{eqnarray*}

    satisfies the restricted matrix equation $AX = D$ and $\mathcal{R}(X)\subseteq\mathcal{R}(A^{3})$. By simple calculations, we can also check that the components of $X$ are given by (7.3).
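    For completeness, the determinantal formula (7.3) can be evaluated directly on the data of Example 7.4; the sketch below is an illustrative re-computation (not part of the original example) that builds the bordered matrix and applies Cramer's rule column by column.

```python
import numpy as np

# data of Example 7.4
A = np.array([[1., 0, 0, 1, 0, 0], [0, 1, 0, 0, 1, 0], [0, 0, 1, 0, 0, 1],
              [0, 0, 0, 0, 1, 0], [0, 0, 0, 0, 0, 1], [0, 0, 0, 0, 0, 0]])
D = np.array([[10., 14, 24, 28], [6, 19, 20, 22], [4, 10, 14, 15],
              [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
B = np.array([[1., 2, 3], [0, 1, 2], [1, 3, 6], [-2, -4, -7], [1, 2, 4], [-1, -3, -6]])
C = np.array([[0., 0, 0, 1, 0, 6], [0, 0, 0, 1, 2, 0], [0, 0, 0, 0, 0, 3]])

n, p = D.shape
Z = np.zeros((C.shape[0], B.shape[1]))
detK = np.linalg.det(np.block([[A, B], [C, Z]]))

X = np.zeros((n, p))
for j in range(p):
    for i in range(n):
        Ai = A.copy(); Ai[:, i] = D[:, j]          # replace i-th column of A by d_j
        Ci = C.copy(); Ci[:, i] = 0.0              # replace i-th column of C by 0
        X[i, j] = np.linalg.det(np.block([[Ai, B], [Ci, Z]])) / detK   # formula (7.3)

# reproduces the solution X = A^{W_2} D displayed above (which, for this data, equals D)
print(np.allclose(X, D))
```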

    This paper gives a new definition of the m-weak group inverse for complex matrices, which extends the Drazin inverse and the weak group inverse. Some characterizations of the m-weak group inverse in terms of the range space, null space, rank, and projectors are presented. Several representations of the m-weak group inverse involving some known generalized inverses, as well as limit expressions, are also derived. The representation in Theorem 5.1 gives a good result in terms of computational accuracy (see Examples 5.2 and 5.6). The m-weak group inverse is also applied to the solution of the restricted matrix Eq (7.1), and the solution of (7.1) can be expressed by Cramer's rule (see Theorem 7.3). In [38,39,40], there are some iterative methods and algorithms to compute outer inverses. Motivated by these, the following further investigations deserve attention:

    (1) The applications of the m-weak group inverse in linear equations and matrix equations;

    (2) Perturbation formulae as well as perturbation bounds for the m-weak group inverse;

    (3) Iterative algorithms and splitting methods for computing the m-weak group inverse;

    (4) Other representations of the m-weak group inverse.

    This research is supported by the National Natural Science Foundation of China (No. 11961076).

    The authors declare no conflict of interest.



    [1] K. T. Attanassov, Intuitionistic fuzzy sets, Fuzzy Set. Syst., 20 (1986), 87–96. https://doi.org/10.1016/S0165-0114(86)80034-3 doi: 10.1016/S0165-0114(86)80034-3
    [2] R. E. Bellman, L. A. Zadeh, Decision making in a fuzzy environment, Manage. Sci., 17 (1970), 141–164. https://doi.org/10.1287/mnsc.17.4.B141 doi: 10.1287/mnsc.17.4.B141
    [3] P. P. Angelov, Optimization in an intuitionistic fuzzy environment, Fuzzy Set. Syst., 86 (1997), 299–306. https://doi.org/10.1016/S0165-0114(96)00009-7 doi: 10.1016/S0165-0114(96)00009-7
    [4] S. Dey, T. K. Roy, Multi-objective structural optimization using fuzzy and intuitionistic fuzzy optimization technique, International Journal of Intelligent Systems and Applications, 5 (2015), 57–65. https://doi.org/10.5815/ijisa.2015.05.08 doi: 10.5815/ijisa.2015.05.08
    [5] B. Jana, T. K. Roy, Multi-objective intuitionistic fuzzy linear programming and its application in transportation model, Notes on Intuitionistic Fuzzy Sets, 13 (2007), 34–51.
    [6] O. Bahri, E. Talbi, N. B. Amor, A generic fuzzy approach for multi-objective optimization under uncertainty, Swarm Evol. Comput., 40 (2018), 166–183. https://doi.org/10.1016/j.swevo.2018.02.002 doi: 10.1016/j.swevo.2018.02.002
    [7] C. L. Hwang, K. Yoon, Multiple attribute decision making: methods and applications, Heidelberg: Springer, 1981. https://doi.org/10.1007/978-3-642-48318-9
    [8] C. H. Yeh, The selection of multiattribute decision making methods for scholarship student selection, Int. J. Select. Assess., 11 (2003), 289–296. https://doi.org/10.1111/j.0965-075X.2003.00252.x doi: 10.1111/j.0965-075X.2003.00252.x
    [9] E. K. Zavadskas, A. Mardani, Z. Turskis, A. Jusoh, K. M. Nor, Development of TOPSIS method to solve complicated decision-making problems-an overview on developments from 2000 to 2015, Int. J. Inf. Tech. Decis., 15 (2016), 645–682. https://doi.org/10.1142/S0219622016300019 doi: 10.1142/S0219622016300019
    [10] V. P. Agrawal, V. Kohli, S. Gupta, Computer aided robot selection: the 'multiple attribute decision making' approach, Int. J. Prod. Res., 29 (1991), 1629–1644. https://doi.org/10.1080/00207549108948036 doi: 10.1080/00207549108948036
    [11] C. Parkan, M. L. Wu, Decision-making and performance measurement models with applications to robot selection, Comput. Ind. Eng., 36 (1999), 503–523. https://doi.org/10.1016/S0360-8352(99)00146-1 doi: 10.1016/S0360-8352(99)00146-1
    [12] E. Akgul, M. I. Bahtiyari, E. K. Aydoğan, H. Benli, Use of TOPSIS method for designing different textile products in coloration via natural source madder, J. Nat. Fibers, 19 (2021), 8993–9008. https://doi.org/10.1080/15440478.2021.1982106 doi: 10.1080/15440478.2021.1982106
    [13] M. Tavana, A. Hatami-Marbini, A group AHP-TOPSIS framework for human spaceflight mission planning at NASA, Expert Syst. Appl., 38 (2011), 13588–13603. https://doi.org/10.1016/j.eswa.2011.04.108 doi: 10.1016/j.eswa.2011.04.108
    [14] R. M. Rizk-Allah, E. A. Hagag, A. A. El-Fergany, Chaos-enhanced multi-objective tunicate swarm algorithm for economic-emission load dispatch problem, Soft Comput., 27 (2023), 5721–5739. https://doi.org/10.1007/s00500-022-07794-2 doi: 10.1007/s00500-022-07794-2
    [15] R. M. Rizk-Allah, M. A. Abo-Sinna, A. E. Hassanien, Intuitionistic fuzzy sets and dynamic programming for multi-objective non-linear programming problems, Int. J. Fuzzy Syst., 23 (2021), 334–352. https://doi.org/10.1007/s40815-020-00973-z doi: 10.1007/s40815-020-00973-z
    [16] R. M. Rizk-Allah, M. A. Abo-Sinna, A comparative study of two optimization approaches for solving bi-level multi-objective linear fractional programming problem, OPSEARCH, 58 (2021), 374–402. https://doi.org/10.1007/s12597-020-00486-1 doi: 10.1007/s12597-020-00486-1
    [17] D. Chakraborty, D. K. Jana, T. K. Roy, Arithmetic operations on generalized intuitionistic fuzzy number and its applications to transportation problem, OPSEARCH, 52 (2015), 431–471. https://doi.org/10.1007/s12597-014-0194-1 doi: 10.1007/s12597-014-0194-1
    [18] D. Chakraborty, D. K. Jana, T. K. Roy, A new approach to solve multi-objective multi- choice multi-item Atanassov's intuitionistic fuzzy transportation problem using chance operator, J. Intell. Fuzzy Syst., 28 (2015), 843–865. doi: 10.3233/IFS-141366
    [19] J. Razmi, E. Jafarian, S. H. Amin, An intuitionistic fuzzy goal programming approach for finding pareto-optimal solutions to multi-objective programming problems, Expert Syst. Appl., 65 (2016), 181–193. https://doi.org/10.1016/j.eswa.2016.08.048 doi: 10.1016/j.eswa.2016.08.048
    [20] S. Pramanik, T. K. Roy, An intuitionistic fuzzy goal programming approach to vector optimization problem, Notes on Intuitionistic Fuzzy Sets, 11 (2005), 1–14.
    [21] S. Chakrabortty, M. Pal, P. K. Nayak, Intuitionistic fuzzy optimization technique for Pareto optimal solution of manufacturing inventory models with shortages, Eur. J. Oper. Res., 228 (2013), 381–387. https://doi.org/10.1016/j.ejor.2013.01.046 doi: 10.1016/j.ejor.2013.01.046
    [22] H. Garg, M. Rani, An approach for reliability analysis of industrial systems using PSO and IFS technique, ISA T., 52 (2013), 701–710. https://doi.org/10.1016/j.isatra.2013.06.010 doi: 10.1016/j.isatra.2013.06.010
    [23] A. Yildiz, A. F. Guneri, C. Ozkan, E. Ayyildiz, A. Taskin, An integrated interval-valued intuitionistic fuzzy AHP-TOPSIS methodology to determine the safest route for cash in transit operations: a real case in Istanbul, Neural. Comput. & Applic., 34 (2022), 15673–15688. https://doi.org/10.1007/s00521-022-07236-y doi: 10.1007/s00521-022-07236-y
    [24] S. K. Das, N. Dey, R. G. Crespo, E. Herrera-Viedma, A non-linear multi-objective technique for hybrid peer-to-peer communication, Inform. Sciences, 629 (2023), 413–439. https://doi.org/10.1016/j.ins.2023.01.117 doi: 10.1016/j.ins.2023.01.117
    [25] R. M. Rizk-Allah, M. A. Abo-Sinna, Integrating reference point, Kuhn-Tucker conditions and neural network approach for multi-objective and multi-level programming problems, OPSEARCH, 54 (2017), 663–683. https://doi.org/10.1007/s12597-017-0299-4 doi: 10.1007/s12597-017-0299-4
    [26] N. Karimi, M. R. Feylizadeh, K. Govindan, M. Bagherpour, Fuzzy multi-objective programming: a systematic literature review, Expert Syst. Appl., 196 (2022), 116663. https://doi.org/10.1016/j.eswa.2022.116663 doi: 10.1016/j.eswa.2022.116663
    [27] R. M. Rizk-Allah, R. A. El-Sehiemy, S. Deb, G. G. Wang, A novel fruit fly framework for multi-objective shape design of tubular linear synchronous motor, J. Supercomput., 73 (2017), 1235–1256. https://doi.org/10.1007/s11227-016-1806-8 doi: 10.1007/s11227-016-1806-8
    [28] R. A. El-Sehiemy, R. M. Rizk-Allah, A. F. Attia, Assessment of hurricane versus sine‐cosine optimization algorithms for economic/ecological emissions load dispatch problem, Int. T. Electr. Energy, 29 (2019), e2716. https://doi.org/10.1002/etep.2716 doi: 10.1002/etep.2716
    [29] R. M. Rizk-Allah, A. E. Hassanien, D. Oliva, An enhanced sitting–sizing scheme for shunt capacitors in radial distribution systems using improved atom search optimization, Neural Comput. & Applic., 32 (2020), 13971–13999. https://doi.org/10.1007/s00521-020-04799-6 doi: 10.1007/s00521-020-04799-6
    [30] K. Miettinen, Nonlinear multiobjective optimization, New York: Springer, 1998. https://doi.org/10.1007/978-1-4615-5563-6
    [31] Y. J. Lai, T. J. Liu, C. L. Hwang, TOPSIS for MODM, Eur. J. Oper. Res., 76 (1994), 486–500. https://doi.org/10.1016/0377-2217(94)90282-8 doi: 10.1016/0377-2217(94)90282-8
    [32] L. A. Zadeh, Fuzzy sets, Information and Control, 8 (1965), 338–353. https://doi.org/10.1016/S0019-9958(65)90241-X doi: 10.1016/S0019-9958(65)90241-X
    [33] A. Kaufmann, M. M. Gupta, Introduction to fuzzy arithmetic, New York: Van Nostrand, 1991.
    [34] M. A. Abo-Sinna, A. H. Amer, Extensions of TOPSIS for multi-objective large-scale nonlinear programming problems, Appl. Math. Comput., 162 (2005), 243–256. https://doi.org/10.1016/j.amc.2003.12.087 doi: 10.1016/j.amc.2003.12.087
    [35] P. Singh, S. Kumari, S. Singh, Fuzzy efficient interactive goal programming approach for multi-objective transportation problems, Int. J. Appl. Comput. Math., 3 (2017), 505–525. https://doi.org/10.1007/s40819-016-0155-x doi: 10.1007/s40819-016-0155-x
    [36] D. C. Montgomery, Design and analysis of experiments, Arizona: John Wiley & Sons, 2005.
    [37] R. M. Rizk-Allah, R. A. El-Sehiemy, G. G. Wang, A novel parallel hurricane optimization algorithm for secure emission/economic load dispatch solution, Appl. Soft Comput., 63 (2018), 206–222. https://doi.org/10.1016/j.asoc.2017.12.002 doi: 10.1016/j.asoc.2017.12.002
    [38] S. Boopathi, Experimental investigation and parameter analysis of LPG refrigeration system using Taguchi method, SN Appl. Sci., 1 (2019), 892. https://doi.org/10.1007/s42452-019-0925-2 doi: 10.1007/s42452-019-0925-2
    [39] N. S. Patel, P. L. Parihar, J. S. Makwana, Parametric optimization to improve the machining process by using Taguchi method: a review, Materials Today: Proceedings, 47 (2021), 2709–2714. https://doi.org/10.1016/j.matpr.2021.03.005 doi: 10.1016/j.matpr.2021.03.005
    [40] R. M. Rizk-Allah, A. E. Hassanien, D. Song, Chaos-opposition-enhanced slime mould algorithm for minimizing the cost of energy for the wind turbines on high-altitude sites, ISA T., 121 (2022), 191–205. https://doi.org/10.1016/j.isatra.2021.04.011 doi: 10.1016/j.isatra.2021.04.011
    [41] Y. Kuo, T. Yang, G. W. Huang, The use of a grey-based Taguchi method for optimizing multi-response simulation problems, Eng. Optimiz., 40 (2008), 517–528. https://doi.org/10.1080/03052150701857645 doi: 10.1080/03052150701857645
    [42] X. Li, Y. Sun, Stock intelligent investment strategy based on support vector machine parameter optimization algorithm, Neural Comput. & Applic., 32 (2020), 1765–1775. https://doi.org/10.1007/s00521-019-04566-2 doi: 10.1007/s00521-019-04566-2
    [43] B. Cao, M. Li, X. Liu, J. Zhao, W. Cao, Z. Lv, Many-objective deployment optimization for a Drone-Assisted camera network, IEEE T. Netw. Sci. Eng., 8 (2021), 2756–2764. https://doi.org/10.1109/TNSE.2021.3057915 doi: 10.1109/TNSE.2021.3057915
    [44] B. Cao, S. Fan, J. Zhao, S. Tian, Z. Zheng, Y. Yan, et al., Large-scale many-objective deployment optimization of edge servers, IEEE T. Intell. Transp., 99 (2021), 1–9. https://doi.org/10.1109/TITS.2021.3059455 doi: 10.1109/TITS.2021.3059455
    [45] S. G. Li, Efficient algorithms for scheduling equal-length jobs with processing set restrictions on uniform parallel batch machines, Math. Biosci. Eng., 19 (2022), 10731–10740. https://doi.org/10.3934/mbe.2022502 doi: 10.3934/mbe.2022502
  • This article has been cited by:

    1. Dijana Mosić, Daochang Zhang, New Representations and Properties of the m-Weak Group Inverse, 2023, 78, 1422-6383, 10.1007/s00025-023-01878-7
    2. Jin Zhong, Lin Lin, Core-EP Monotonicity Characterizations for Property-n Matrices, 2023, 11, 2227-7390, 2531, 10.3390/math11112531
    3. Dijana Mosić, Predrag S. Stanimirović, Lev A. Kazakovtsev, Application of m-weak group inverse in solving optimization problems, 2024, 118, 1578-7303, 10.1007/s13398-023-01512-9
    4. Dijana Mosić, A generalization of the MP-m-WGI, 2024, 47, 1607-3606, 2133, 10.2989/16073606.2024.2352566
    5. Dijana Mosić, Predrag S. Stanimirović, Lev A. Kazakovtsev, Minimization problem solvable by weighted m-weak group inverse, 2024, 1598-5865, 10.1007/s12190-024-02215-z
    6. Dijana Mosić, Daochang Zhang, Predrag S. Stanimirović, An extension of the MPD and MP weak group inverses, 2024, 465, 00963003, 128429, 10.1016/j.amc.2023.128429
    7. D. E. Ferreyra, Saroj B. Malik, The m-weak core inverse, 2024, 118, 1578-7303, 10.1007/s13398-023-01539-y
    8. Huanyin Chen, On m-Generalized Group Inverse in Banach *-Algebras, 2025, 22, 1660-5446, 10.1007/s00009-025-02818-1
    9. Kaiyue Zhang, Xiaoji Liu, Hongwei Jin, 1WG inverse of square matrices, 2024, 38, 0354-5180, 4225, 10.2298/FIL2412225Z
    10. S. B. Malik, D. E. Ferreyra, F. E. Levis, V. Orquera, The projection m -weak group inverse , 2025, 0308-1087, 1, 10.1080/03081087.2025.2489419
    11. Dijana Mosić, Weak CMP inverses, 2025, 0001-9054, 10.1007/s00010-025-01167-4
    12. D. Mosić, D. E. Ferreyra, Weak extended core inverse, 2025, 19, 2662-2033, 10.1007/s43037-025-00432-7
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)