
Estimates of coefficients for bi-univalent Ma-Minda-type functions associated with q-Srivastava-Attiya operator

  • In this article, we consider new subclasses of analytic and bi-univalent functions associated with the q-Srivastava-Attiya operator in the open unit disk. We obtain coefficient bounds for the Taylor-Maclaurin coefficients |a2| and |a3| of the functions of these new subclasses. Furthermore, we establish the Fekete-Szegö inequality for functions in the classes Tϵτ,q,α(ψ), KHϵτ,q,α(δ,ψ), and Aϵτ,q,α(δ,ψ).

    Citation: Norah Saud Almutairi, Adarey Saud Almutairi, Awatef Shahen, Hanan Darwish. Estimates of coefficients for bi-univalent Ma-Minda-type functions associated with q-Srivastava-Attiya operator[J]. AIMS Mathematics, 2025, 10(3): 7269-7289. doi: 10.3934/math.2025333




    Let n \geq 2 be an integer, N = \{1, 2, \ldots, n\}, and let C^{n \times n} denote the set of all complex matrices of order n. A matrix M = (m_{ij}) \in C^{n \times n} (n \geq 2) is called a strictly diagonally dominant (SDD) matrix [1] if

    |m_{ii}| > r_i(M) = \sum\limits_{j = 1, j \neq i}^{n} |m_{ij}|, \; \; \; \forall i \in N.

    It is known that every SDD matrix is an H-matrix [1], where a matrix M = (m_{ij}) \in C^{n \times n} (n \geq 2) is called an H-matrix [1, 2, 3] if and only if there exists a positive diagonal matrix X such that MX is an SDD matrix [1, 2, 4]. H-matrices are widely applied in many fields, such as computational mathematics, economics, mathematical physics, and dynamical system theory; see [1, 4, 5, 6]. Meanwhile, upper bounds for the infinity norm of the inverses of H-matrices can be used in the convergence analysis of matrix splitting and matrix multi-splitting iterative methods for solving large sparse systems of linear equations [7], as well as linear complementarity problems. Moreover, upper bounds on the infinity norm of the inverse have been widely studied for many classes of matrices, such as CKV-type matrices [8], S-SDDS matrices [9], DZ and DZ-type matrices [10, 11], Nekrasov matrices [12, 13, 14, 15], S-Nekrasov matrices [16], Q-Nekrasov matrices [17], GSDD1 matrices [18], and so on.

    In 2011, Peña [19] proposed a new subclass of H-matrices called SDD_1 matrices, defined as follows. A matrix M = (m_{ij}) \in C^{n \times n} (n \geq 2) is called an SDD_1 matrix if

    |m_{ii}| > p_i(M), \; \; \; \forall i \in N_1(M),

    where

    p_i(M) = \sum\limits_{j \in N_1(M)\backslash\{i\}} |m_{ij}| + \sum\limits_{j \in N_2(M)\backslash\{i\}} \frac{r_j(M)}{|m_{jj}|} |m_{ij}|, \; \; \; N_1(M) = \{i : |m_{ii}| \leq r_i(M)\}, \; \; \; N_2(M) = \{i : |m_{ii}| > r_i(M)\}.
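As a quick numerical illustration of these definitions, the following sketch computes r_i(M), the index sets N_1(M) and N_2(M), and p_i(M) for a small matrix. The helper names and the example matrix are ours, not from the paper.

```python
# Hypothetical helper names; the example matrix is ours, not from the paper.

def r(M, i):
    """Deleted absolute row sum r_i(M) = sum_{j != i} |m_ij|."""
    return sum(abs(M[i][j]) for j in range(len(M)) if j != i)

def split_N1_N2(M):
    """N1(M): rows without strict diagonal dominance; N2(M): the rest."""
    n = len(M)
    N1 = [i for i in range(n) if abs(M[i][i]) <= r(M, i)]
    N2 = [i for i in range(n) if abs(M[i][i]) > r(M, i)]
    return N1, N2

def p(M, i):
    """p_i(M): N1-columns enter with weight 1, N2-columns damped by r_j/|m_jj|."""
    N1, N2 = split_N1_N2(M)
    return (sum(abs(M[i][j]) for j in N1 if j != i)
            + sum(r(M, j) / abs(M[j][j]) * abs(M[i][j]) for j in N2 if j != i))

def is_SDD(M):
    return all(abs(M[i][i]) > r(M, i) for i in range(len(M)))

def is_SDD1(M):
    N1, _ = split_N1_N2(M)
    return all(abs(M[i][i]) > p(M, i) for i in N1)

# Row 0 is not strictly dominant (|4| <= 2 + 3), yet
# p_0(M) = (3/5)*2 + (4/6)*3 = 3.2 < 4, so M is SDD_1 but not SDD.
M = [[4.0, 2.0, 3.0],
     [1.0, 5.0, 2.0],
     [2.0, 2.0, 6.0]]
```

The damping factors r_j(M)/|m_jj| are strictly less than 1 exactly on N_2(M), which is why p_i(M) can be strictly smaller than r_i(M).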

    In 2023, Dai et al. [18] introduced another subclass of H-matrices, named generalized SDD_1 (GSDD_1) matrices, which extends the class of SDD_1 matrices. A matrix M = (m_{ij}) \in C^{n \times n} (n \geq 2) is said to be a GSDD_1 matrix if

    r_i(M) - p_i^{N_2(M)}(M) > 0, \; \; \; \forall i \in N_2(M),

    and

    \left( r_i(M) - p_i^{N_2(M)}(M) \right) \left( |m_{jj}| - p_j^{N_1(M)}(M) \right) > p_i^{N_1(M)}(M) \, p_j^{N_2(M)}(M), \; \; \; \forall i \in N_2(M), \; \forall j \in N_1(M),

    where

    p_i^{N_2(M)}(M) = \sum\limits_{j \in N_2(M)\backslash\{i\}} \frac{r_j(M)}{|m_{jj}|} |m_{ij}|, \; \; \; p_i^{N_1(M)}(M) = \sum\limits_{j \in N_1(M)\backslash\{i\}} |m_{ij}|, \; \; \; \forall i \in N.

    Subsequently, various upper bounds for the infinity norm of the inverses of SDD matrices, SDD_1 matrices, and GSDD_1 matrices were presented; see [18, 20, 21]. The following results, which will be used later, are listed below.

    Theorem 1. (Varah bound) [21] Let matrix M = (m_{ij}) \in C^{n \times n} (n \geq 2) be an SDD matrix. Then

    \|M^{-1}\|_\infty \leq \frac{1}{\min\limits_{1 \leq i \leq n} (|m_{ii}| - r_i(M))}.
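The Varah bound is easy to check numerically. The sketch below (the 2×2 example is ours) compares the bound with the exact infinity norm of the inverse, computed from the closed-form 2×2 inverse.

```python
# The 2x2 example is ours; its inverse comes from the closed 2x2 formula.

def varah_bound(M):
    """1 / min_i (|m_ii| - r_i(M)) for an SDD matrix M."""
    n = len(M)
    gaps = [abs(M[i][i]) - sum(abs(M[i][j]) for j in range(n) if j != i)
            for i in range(n)]
    assert all(g > 0 for g in gaps), "M must be SDD"
    return 1.0 / min(gaps)

M = [[4.0, 1.0],
     [1.0, 3.0]]

det = M[0][0] * M[1][1] - M[0][1] * M[1][0]              # 4*3 - 1*1 = 11
inv = [[ M[1][1] / det, -M[0][1] / det],
       [-M[1][0] / det,  M[0][0] / det]]
inf_norm = max(sum(abs(x) for x in row) for row in inv)  # 5/11, about 0.4545

# Varah gives 1/min(3, 2) = 0.5, a valid upper bound for 5/11.
```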

    Theorem 2. [20] Let matrix M = (m_{ij}) \in C^{n \times n} (n \geq 2) be an SDD matrix. Then

    \|M^{-1}\|_\infty \leq \frac{\max\limits_{i \in N} \frac{p_i(M)}{|m_{ii}|} + \varepsilon}{\min\limits_{i \in N} Z_i}, \; \; \; 0 < \varepsilon < \min\limits_{i \in N} \frac{|m_{ii}| - p_i(M)}{r_i(M)},

    where

    Z_i = \varepsilon (|m_{ii}| - r_i(M)) + \sum\limits_{j \in N\backslash\{i\}} \frac{r_j(M) - p_j(M)}{|m_{jj}|} |m_{ij}|.

    Theorem 3. [20] Let matrix M = (m_{ij}) \in C^{n \times n} (n \geq 2) be an SDD matrix. If r_i(M) > 0 (\forall i \in N), then

    \|M^{-1}\|_\infty \leq \frac{\max\limits_{i \in N} \frac{p_i(M)}{|m_{ii}|}}{\min\limits_{i \in N} \sum\limits_{j \in N\backslash\{i\}} \frac{r_j(M) - p_j(M)}{|m_{jj}|} |m_{ij}|}.

    Theorem 4. [18] Let M = (m_{ij}) \in C^{n \times n} be a GSDD_1 matrix. Then

    \|M^{-1}\|_\infty \leq \frac{\max\left\{ \varepsilon, \; \max\limits_{i \in N_2(M)} \frac{r_i(M)}{|m_{ii}|} \right\}}{\min\left\{ \min\limits_{i \in N_2(M)} \phi_i, \; \min\limits_{i \in N_1(M)} \psi_i \right\}},

    where

    \phi_i = r_i(M) - \sum\limits_{j \in N_2(M)\backslash\{i\}} \frac{r_j(M)}{|m_{jj}|} |m_{ij}| - \sum\limits_{j \in N_1(M)\backslash\{i\}} |m_{ij}| \, \varepsilon, \; \; \; \psi_i = |m_{ii}| \varepsilon - \sum\limits_{j \in N_1(M)\backslash\{i\}} |m_{ij}| \, \varepsilon - \sum\limits_{j \in N_2(M)\backslash\{i\}} \frac{r_j(M)}{|m_{jj}|} |m_{ij}|,

    and

    \max\limits_{i \in N_1(M)} \frac{p_i^{N_2(M)}(M)}{|m_{ii}| - p_i^{N_1(M)}(M)} < \varepsilon < \min\limits_{j \in N_2(M)} \frac{r_j(M) - p_j^{N_2(M)}(M)}{p_j^{N_1(M)}(M)}.

    On the basis of the above works, we continue to study structured matrices: we introduce a new subclass of H-matrices called SDD_k matrices and provide some new upper bounds for the infinity norm of the inverses of SDD matrices and SDD_k matrices, which improve previous results. The remainder of this paper is organized as follows: In Section 2, we propose the class of SDD_k matrices, which includes SDD matrices and SDD_1 matrices, and derive some of their properties. In Section 3, we present some upper bounds for the infinity norm of the inverses of SDD matrices and SDD_k matrices, and provide some comparisons with the well-known Varah bound. Moreover, some numerical examples are given to illustrate the corresponding results.

    In this section, we propose a new subclass of H-matrices called SDD_k matrices, which includes SDD matrices and SDD_1 matrices, and derive some new properties.

    Definition 1. A matrix M = (m_{ij}) \in C^{n \times n} (n \geq 2) is called an SDD_k matrix if there exists a positive integer k such that

    |m_{ii}| > p_i^{(k-1)}(M), \; \; \; \forall i \in N_1(M),

    where

    p_i^{(k)}(M) = \sum\limits_{j \in N_1(M)\backslash\{i\}} |m_{ij}| + \sum\limits_{j \in N_2(M)\backslash\{i\}} \frac{p_j^{(k-1)}(M)}{|m_{jj}|} |m_{ij}|, \; \; \; p_i^{(0)}(M) = \sum\limits_{j \in N_1(M)\backslash\{i\}} |m_{ij}| + \sum\limits_{j \in N_2(M)\backslash\{i\}} \frac{r_j(M)}{|m_{jj}|} |m_{ij}|.

    Immediately, we know that SDD_k matrices contain SDD matrices and SDD_1 matrices, so

    \{SDD\} \subseteq \{SDD_1\} \subseteq \{SDD_2\} \subseteq \cdots \subseteq \{SDD_k\}.
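The recursion in Definition 1 can be sketched as follows (helper names and the example matrix are ours): starting from the weights r_j(M)/|m_jj|, each pass replaces r_j by the previous iterate p_j^{(k-1)}. The example matrix is SDD_2 but not SDD_1, illustrating that the inclusions can be strict.

```python
# Hypothetical helpers; the example is ours, chosen to be SDD_2 but not SDD_1.

def row_sum(M, i):
    return sum(abs(M[i][j]) for j in range(len(M)) if j != i)

def p_k(M, k):
    """Vector (p_i^{(k)}(M))_i: p^{(0)} uses r_j/|m_jj| on N2-columns,
    and each further pass replaces r_j by the previous iterate."""
    n = len(M)
    N1 = [i for i in range(n) if abs(M[i][i]) <= row_sum(M, i)]
    N2 = [i for i in range(n) if abs(M[i][i]) > row_sum(M, i)]
    prev = [row_sum(M, j) for j in range(n)]           # weights for p^{(0)}
    for _ in range(k + 1):                             # builds p^{(0)}, ..., p^{(k)}
        prev = [sum(abs(M[i][j]) for j in N1 if j != i)
                + sum(prev[j] / abs(M[j][j]) * abs(M[i][j]) for j in N2 if j != i)
                for i in range(n)]
    return prev

def is_SDDk(M, k):
    """SDD_k test: |m_ii| > p_i^{(k-1)}(M) for every i in N1(M)."""
    n = len(M)
    pk = p_k(M, k - 1)
    N1 = [i for i in range(n) if abs(M[i][i]) <= row_sum(M, i)]
    return all(abs(M[i][i]) > pk[i] for i in N1)

# Row 0: p_0^{(0)}(M) = 4.4 >= 4 (not SDD_1), but p_0^{(1)}(M) = 52/15 < 4 (SDD_2).
M = [[4.0, 4.0, 3.0],
     [1.0, 5.0, 2.0],
     [2.0, 2.0, 6.0]]
```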

    Lemma 1. A matrix M = (m_{ij}) \in C^{n \times n} (n \geq 2) is an SDD_k (k \geq 2) matrix if and only if |m_{ii}| > p_i^{(k-1)}(M) for all i \in N.

    Proof. For i \in N_1(M), it holds by Definition 1 that |m_{ii}| > p_i^{(k-1)}(M).

    For i \in N_2(M), we have |m_{ii}| > r_i(M). When k = 2, it follows that

    |m_{ii}| > r_i(M) \geq \sum\limits_{j \in N_1(M)\backslash\{i\}} |m_{ij}| + \sum\limits_{j \in N_2(M)\backslash\{i\}} \frac{r_j(M)}{|m_{jj}|} |m_{ij}| = p_i^{(0)}(M) \geq \sum\limits_{j \in N_1(M)\backslash\{i\}} |m_{ij}| + \sum\limits_{j \in N_2(M)\backslash\{i\}} \frac{p_j^{(0)}(M)}{|m_{jj}|} |m_{ij}| = p_i^{(1)}(M).

    Suppose that |m_{ii}| > p_i^{(k-1)}(M) holds for i \in N_2(M) and all k \leq l (l > 2). When k = l + 1, we have

    |m_{ii}| > r_i(M) \geq \sum\limits_{j \in N_1(M)\backslash\{i\}} |m_{ij}| + \sum\limits_{j \in N_2(M)\backslash\{i\}} \frac{p_j^{(l-2)}(M)}{|m_{jj}|} |m_{ij}| = p_i^{(l-1)}(M) \geq \sum\limits_{j \in N_1(M)\backslash\{i\}} |m_{ij}| + \sum\limits_{j \in N_2(M)\backslash\{i\}} \frac{p_j^{(l-1)}(M)}{|m_{jj}|} |m_{ij}| = p_i^{(l)}(M).

    By induction, we obtain that |m_{ii}| > p_i^{(k-1)}(M) for i \in N_2(M). Consequently, M is an SDD_k matrix if and only if |m_{ii}| > p_i^{(k-1)}(M) for all i \in N. The proof is completed.

    Lemma 2. If M = (m_{ij}) \in C^{n \times n} (n \geq 2) is an SDD_k (k \geq 2) matrix, then M is an H-matrix.

    Proof. Let X be the diagonal matrix diag\{x_1, x_2, \ldots, x_n\}, where

    (0 <) \; x_j = \begin{cases} 1, & j \in N_1(M), \\ \frac{p_j^{(k-1)}(M)}{|m_{jj}|} + \varepsilon, & j \in N_2(M), \end{cases}

    and

    0 < \varepsilon < \min\limits_{i \in N} \frac{|m_{ii}| - p_i^{(k-1)}(M)}{\sum\limits_{j \in N_2(M)\backslash\{i\}} |m_{ij}|}.

    If \sum_{j \in N_2(M)\backslash\{i\}} |m_{ij}| = 0, then the corresponding fraction is defined to be +\infty. Next, we consider the following two cases.

    Case 1: For each i \in N_1(M), it is not difficult to see that |(MX)_{ii}| = |m_{ii}|, and

    r_i(MX) = \sum\limits_{j = 1, j \neq i}^{n} |m_{ij}| x_j = \sum\limits_{j \in N_1(M)\backslash\{i\}} |m_{ij}| + \sum\limits_{j \in N_2(M)\backslash\{i\}} \left( \frac{p_j^{(k-1)}(M)}{|m_{jj}|} + \varepsilon \right) |m_{ij}| \leq \sum\limits_{j \in N_1(M)\backslash\{i\}} |m_{ij}| + \sum\limits_{j \in N_2(M)\backslash\{i\}} \left( \frac{p_j^{(k-2)}(M)}{|m_{jj}|} + \varepsilon \right) |m_{ij}| = \sum\limits_{j \in N_1(M)\backslash\{i\}} |m_{ij}| + \sum\limits_{j \in N_2(M)\backslash\{i\}} \frac{p_j^{(k-2)}(M)}{|m_{jj}|} |m_{ij}| + \varepsilon \sum\limits_{j \in N_2(M)\backslash\{i\}} |m_{ij}| = p_i^{(k-1)}(M) + \varepsilon \sum\limits_{j \in N_2(M)\backslash\{i\}} |m_{ij}| < p_i^{(k-1)}(M) + |m_{ii}| - p_i^{(k-1)}(M) = |m_{ii}| = |(MX)_{ii}|.

    Case 2: For each i \in N_2(M), we can obtain that

    |(MX)_{ii}| = |m_{ii}| \left( \frac{p_i^{(k-1)}(M)}{|m_{ii}|} + \varepsilon \right) = p_i^{(k-1)}(M) + \varepsilon |m_{ii}|,

    and

    r_i(MX) = \sum\limits_{j = 1, j \neq i}^{n} |m_{ij}| x_j = \sum\limits_{j \in N_1(M)\backslash\{i\}} |m_{ij}| + \sum\limits_{j \in N_2(M)\backslash\{i\}} \left( \frac{p_j^{(k-1)}(M)}{|m_{jj}|} + \varepsilon \right) |m_{ij}| \leq \sum\limits_{j \in N_1(M)\backslash\{i\}} |m_{ij}| + \sum\limits_{j \in N_2(M)\backslash\{i\}} \left( \frac{p_j^{(k-2)}(M)}{|m_{jj}|} + \varepsilon \right) |m_{ij}| = p_i^{(k-1)}(M) + \varepsilon \sum\limits_{j \in N_2(M)\backslash\{i\}} |m_{ij}| < p_i^{(k-1)}(M) + \varepsilon |m_{ii}| = |(MX)_{ii}|.

    Based on Cases 1 and 2, we have that MX is an SDD matrix, and consequently, M is an H-matrix. The proof is completed.
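The diagonal scaling used in the proof of Lemma 2 can also be checked numerically. In the sketch below (the example matrix and the choice of ε are ours), we build the entries of X and verify that MX is strictly diagonally dominant.

```python
# Sketch of the Lemma 2 scaling (example matrix and epsilon are ours):
# x_j = 1 on N1(M) and p_j^{(k-1)}/|m_jj| + eps on N2(M).

def row_sum(M, i):
    return sum(abs(M[i][j]) for j in range(len(M)) if j != i)

def p_k(M, k):
    n = len(M)
    N1 = [i for i in range(n) if abs(M[i][i]) <= row_sum(M, i)]
    N2 = [i for i in range(n) if abs(M[i][i]) > row_sum(M, i)]
    prev = [row_sum(M, j) for j in range(n)]
    for _ in range(k + 1):
        prev = [sum(abs(M[i][j]) for j in N1 if j != i)
                + sum(prev[j] / abs(M[j][j]) * abs(M[i][j]) for j in N2 if j != i)
                for i in range(n)]
    return prev

def lemma2_scaling(M, k, eps):
    n = len(M)
    N1 = [i for i in range(n) if abs(M[i][i]) <= row_sum(M, i)]
    pk1 = p_k(M, k - 1)
    return [1.0 if j in N1 else pk1[j] / abs(M[j][j]) + eps for j in range(n)]

def is_SDD_scaled(M, x):
    """Check that MX is SDD, i.e. |m_ii| x_i > sum_{j != i} |m_ij| x_j."""
    n = len(M)
    return all(abs(M[i][i]) * x[i] >
               sum(abs(M[i][j]) * x[j] for j in range(n) if j != i)
               for i in range(n))

# An SDD_2 matrix that is not SDD (row 0: |4| <= 4 + 3); here the admissible
# interval for eps is (0, 8/105), about (0, 0.076), so eps = 0.05 is valid.
M = [[4.0, 4.0, 3.0],
     [1.0, 5.0, 2.0],
     [2.0, 2.0, 6.0]]
x = lemma2_scaling(M, 2, 0.05)
```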

    According to the definition of SDD_k matrices and Lemma 1, we obtain some properties of SDD_k matrices as follows:

    Theorem 5. If M = (m_{ij}) \in C^{n \times n} (n \geq 2) is an SDD_k matrix and N_1(M) \neq \emptyset, then \sum\limits_{j \neq i, j \in N_2(M)} |m_{ij}| > 0 for every i \in N_1(M).

    Proof. Suppose, on the contrary, that \sum_{j \neq i, j \in N_2(M)} |m_{ij}| = 0 for some i \in N_1(M). By Definition 1, we then have p_i^{(k-1)}(M) = r_i(M). Thus |m_{ii}| > p_i^{(k-1)}(M) = r_i(M) \geq |m_{ii}|, which is a contradiction. Therefore \sum_{j \neq i, j \in N_2(M)} |m_{ij}| > 0 for every i \in N_1(M). The proof is completed.

    Theorem 6. Let M = (m_{ij}) \in C^{n \times n} (n \geq 2) be an SDD_k (k \geq 2) matrix such that \sum\limits_{j \neq i, j \in N_2(M)} |m_{ij}| > 0 for every i \in N_2(M). Then

    |m_{ii}| > p_i^{(k-2)}(M) > p_i^{(k-1)}(M) > 0, \; \; \; \forall i \in N_2(M),

    and

    |m_{ii}| > p_i^{(k-1)}(M) > 0, \; \; \; \forall i \in N.

    Proof. By Lemma 1 and the hypothesis that \sum_{j \neq i, j \in N_2(M)} |m_{ij}| > 0 for i \in N_2(M), it holds that

    |m_{ii}| > p_i^{(k-2)}(M) > p_i^{(k-1)}(M) > 0, \; \; \; \forall i \in N_2(M),

    and

    |m_{ii}| > p_i^{(k-1)}(M), \; \; \; \forall i \in N.

    We now prove that |m_{ii}| > p_i^{(k-1)}(M) > 0 for all i \in N, and consider the following two cases.

    Case 1: If N_1(M) = \emptyset, then M is an SDD matrix. It is easy to verify that |m_{ii}| > p_i^{(k-1)}(M) > 0 for i \in N_2(M) = N.

    Case 2: If N_1(M) \neq \emptyset, then by Theorem 5 and the hypothesis that \sum_{j \neq i, j \in N_2(M)} |m_{ij}| > 0 for i \in N_2(M), it is easy to obtain that |m_{ii}| > p_i^{(k-1)}(M) > 0 for all i \in N.

    From Cases 1 and 2, we have |m_{ii}| > p_i^{(k-1)}(M) > 0 for all i \in N. The proof is completed.

    Theorem 7. Let M = (m_{ij}) \in C^{n \times n} (n \geq 2) be an SDD_k (k \geq 2) matrix with \sum\limits_{j \neq i, j \in N_2(M)} |m_{ij}| > 0 for every i \in N_2(M). Then there exists a diagonal matrix X = diag\{x_1, x_2, \ldots, x_n\} such that MX is an SDD matrix, where the entries x_1, x_2, \ldots, x_n are given by

    x_i = \frac{p_i^{(k-1)}(M)}{|m_{ii}|}, \; \; \; \forall i \in N.

    Proof. We need to prove that matrix MX satisfies the following inequalities:

    |(MX)_{ii}| > r_i(MX), \; \; \; \forall i \in N.

    From Theorem 6 and the hypothesis that \sum_{j \neq i, j \in N_2(M)} |m_{ij}| > 0 for i \in N_2(M), it is easy to verify that

    0 < \frac{p_i^{(k-1)}(M)}{|m_{ii}|} < \frac{p_i^{(k-2)}(M)}{|m_{ii}|} < 1, \; \; \; \forall i \in N_2(M).

    For each i \in N, we have |(MX)_{ii}| = p_i^{(k-1)}(M) and

    r_i(MX) = \sum\limits_{j = 1, j \neq i}^{n} |m_{ij}| x_j = \sum\limits_{j \in N_1(M)\backslash\{i\}} \frac{p_j^{(k-1)}(M)}{|m_{jj}|} |m_{ij}| + \sum\limits_{j \in N_2(M)\backslash\{i\}} \frac{p_j^{(k-1)}(M)}{|m_{jj}|} |m_{ij}| < \sum\limits_{j \in N_1(M)\backslash\{i\}} |m_{ij}| + \sum\limits_{j \in N_2(M)\backslash\{i\}} \frac{p_j^{(k-2)}(M)}{|m_{jj}|} |m_{ij}| = p_i^{(k-1)}(M) = |(MX)_{ii}|,

    that is,

    |(MX)_{ii}| > r_i(MX), \; \; \; \forall i \in N.

    Therefore, MX is an SDD matrix. The proof is completed.

    In this section, by Lemma 2 and Theorem 7, we provide some new upper bounds on the infinity norm of the inverses of SDD_k matrices and SDD matrices, respectively. We also present some comparisons with the Varah bound, and some numerical examples are given to illustrate the corresponding results. In particular, when the involved matrices are SDD_1 matrices, as a subclass of SDD_k matrices, these new bounds coincide with those provided by Chen et al. [20].

    Theorem 8. Let M = (m_{ij}) \in C^{n \times n} (n \geq 2) be an SDD_k (k \geq 2) matrix. Then

    \|M^{-1}\|_\infty \leq \frac{\max\left\{ 1, \; \max\limits_{i \in N_2(M)} \left( \frac{p_i^{(k-1)}(M)}{|m_{ii}|} + \varepsilon \right) \right\}}{\min\left\{ \min\limits_{i \in N_1(M)} U_i, \; \min\limits_{i \in N_2(M)} V_i \right\}},

    where

    U_i = |m_{ii}| - \sum\limits_{j \in N_1(M)\backslash\{i\}} |m_{ij}| - \sum\limits_{j \in N_2(M)\backslash\{i\}} \left( \frac{p_j^{(k-1)}(M)}{|m_{jj}|} + \varepsilon \right) |m_{ij}|, \; \; \; V_i = \varepsilon \left( |m_{ii}| - \sum\limits_{j \in N_2(M)\backslash\{i\}} |m_{ij}| \right) + \sum\limits_{j \in N_2(M)\backslash\{i\}} \frac{p_j^{(k-2)}(M) - p_j^{(k-1)}(M)}{|m_{jj}|} |m_{ij}|,

    and

    0 < \varepsilon < \min\limits_{i \in N} \frac{|m_{ii}| - p_i^{(k-1)}(M)}{\sum\limits_{j \in N_2(M)\backslash\{i\}} |m_{ij}|}.

    Proof. By Lemma 2, there exists a positive diagonal matrix X such that MX is an SDD matrix, where X is defined as in Lemma 2. Thus,

    \|M^{-1}\|_\infty = \|X (X^{-1} M^{-1})\|_\infty = \|X (MX)^{-1}\|_\infty \leq \|X\|_\infty \|(MX)^{-1}\|_\infty,

    and

    \|X\|_\infty = \max\limits_{1 \leq i \leq n} x_i = \max\left\{ 1, \; \max\limits_{i \in N_2(M)} \left( \frac{p_i^{(k-1)}(M)}{|m_{ii}|} + \varepsilon \right) \right\}.

    Notice that MX is an SDD matrix. Hence, by Theorem 1, we have

    \|(MX)^{-1}\|_\infty \leq \frac{1}{\min\limits_{1 \leq i \leq n} (|(MX)_{ii}| - r_i(MX))}.

    Thus, for any iN1(M), it holds that

    |(MX)_{ii}| - r_i(MX) = |m_{ii}| - \sum\limits_{j \in N_1(M)\backslash\{i\}} |m_{ij}| - \sum\limits_{j \in N_2(M)\backslash\{i\}} \left( \frac{p_j^{(k-1)}(M)}{|m_{jj}|} + \varepsilon \right) |m_{ij}| = U_i.

    For any iN2(M), it holds that

    |(MX)_{ii}| - r_i(MX) = p_i^{(k-1)}(M) + \varepsilon |m_{ii}| - \sum\limits_{j \in N_1(M)\backslash\{i\}} |m_{ij}| - \sum\limits_{j \in N_2(M)\backslash\{i\}} \left( \frac{p_j^{(k-1)}(M)}{|m_{jj}|} + \varepsilon \right) |m_{ij}| = \sum\limits_{j \in N_1(M)\backslash\{i\}} |m_{ij}| + \sum\limits_{j \in N_2(M)\backslash\{i\}} \frac{p_j^{(k-2)}(M)}{|m_{jj}|} |m_{ij}| + \varepsilon |m_{ii}| - \sum\limits_{j \in N_1(M)\backslash\{i\}} |m_{ij}| - \sum\limits_{j \in N_2(M)\backslash\{i\}} \left( \frac{p_j^{(k-1)}(M)}{|m_{jj}|} + \varepsilon \right) |m_{ij}| = \varepsilon \left( |m_{ii}| - \sum\limits_{j \in N_2(M)\backslash\{i\}} |m_{ij}| \right) + \sum\limits_{j \in N_2(M)\backslash\{i\}} \frac{p_j^{(k-2)}(M) - p_j^{(k-1)}(M)}{|m_{jj}|} |m_{ij}| = V_i.

    Therefore, we get

    \|M^{-1}\|_\infty \leq \frac{\max\left\{ 1, \; \max\limits_{i \in N_2(M)} \left( \frac{p_i^{(k-1)}(M)}{|m_{ii}|} + \varepsilon \right) \right\}}{\min\left\{ \min\limits_{i \in N_1(M)} U_i, \; \min\limits_{i \in N_2(M)} V_i \right\}}.

    The proof is completed.

    From Theorem 8, it is easy to obtain the following result.

    Corollary 1. Let M = (m_{ij}) \in C^{n \times n} (n \geq 2) be an SDD matrix. Then

    \|M^{-1}\|_\infty \leq \frac{\max\limits_{i \in N} \frac{p_i^{(k-1)}(M)}{|m_{ii}|} + \varepsilon}{\min\limits_{i \in N} Z_i},

    where k2,

    Z_i = \varepsilon (|m_{ii}| - r_i(M)) + \sum\limits_{j \in N\backslash\{i\}} \frac{p_j^{(k-2)}(M) - p_j^{(k-1)}(M)}{|m_{jj}|} |m_{ij}|,

    and

    0 < \varepsilon < \min\limits_{i \in N} \frac{|m_{ii}| - p_i^{(k-1)}(M)}{r_i(M)}.
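A small numerical check of the Corollary 1 bound (the 2×2 SDD example and the choice ε = 0.01 are ours, not from the paper): for this matrix the bound with k = 2 is about 0.4579, which lies below the Varah bound 1/2 and above the exact value \|M^{-1}\|_\infty = 5/11 ≈ 0.4545.

```python
# Sketch of the Corollary 1 bound for an SDD matrix (example and eps are ours).
# For SDD matrices N1(M) is empty, so the recursion for p^{(k)} simplifies.

def row_sum(M, i):
    return sum(abs(M[i][j]) for j in range(len(M)) if j != i)

def p_vec(M, k):
    """p^{(k)} for an SDD matrix: p_i^{(k)} = sum_j p_j^{(k-1)}/|m_jj| * |m_ij|."""
    n = len(M)
    prev = [row_sum(M, j) for j in range(n)]
    for _ in range(k + 1):
        prev = [sum(prev[j] / abs(M[j][j]) * abs(M[i][j])
                    for j in range(n) if j != i) for i in range(n)]
    return prev

def corollary1_bound(M, k, eps):
    n = len(M)
    pk1, pk2 = p_vec(M, k - 1), p_vec(M, k - 2)
    eps_max = min((abs(M[i][i]) - pk1[i]) / row_sum(M, i) for i in range(n))
    assert 0 < eps < eps_max, "eps outside the admissible interval"
    Z = [eps * (abs(M[i][i]) - row_sum(M, i))
         + sum((pk2[j] - pk1[j]) / abs(M[j][j]) * abs(M[i][j])
               for j in range(n) if j != i) for i in range(n)]
    return (max(pk1[i] / abs(M[i][i]) for i in range(n)) + eps) / min(Z)

M = [[4.0, 1.0],
     [1.0, 3.0]]
# k = 2, eps = 0.01: bound = (1/36 + 0.01)/(2*0.01 + 1/16), about 0.4579.
```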

    Example 1. Consider the n×n matrix:

    M = \left( \begin{array}{ccccccc} 4 & 2 & 1.5 & & & & \\ 1.5 & 4 & 2 & & & & \\ & 4 & 8 & 2 & & & \\ & & \ddots & \ddots & \ddots & & \\ & & & 4 & 8 & 2 & \\ & & & & 4 & 8 & 2 \\ & & & & & 3.5 & 4 \end{array} \right).

    Take n = 20. It is easy to verify that M is an SDD matrix.

    By calculations, we have that for k=2,

    \max\limits_{i \in N} \frac{p_i^{(1)}(M)}{|m_{ii}|} + \varepsilon_1 = 0.5859 + \varepsilon_1, \; \; \; \min\limits_{i \in N} Z_i = 0.4414 + 0.5\varepsilon_1, \; \; \; 0 < \varepsilon_1 < 0.4732.

    For k=4,

    \max\limits_{i \in N} \frac{p_i^{(3)}(M)}{|m_{ii}|} + \varepsilon_2 = 0.3845 + \varepsilon_2, \; \; \; \min\limits_{i \in N} Z_i = 0.2959 + 0.5\varepsilon_2, \; \; \; 0 < \varepsilon_2 < 0.7034.

    For k=6,

    \max\limits_{i \in N} \frac{p_i^{(5)}(M)}{|m_{ii}|} + \varepsilon_3 = 0.2504 + \varepsilon_3, \; \; \; \min\limits_{i \in N} Z_i = 0.1733 + 0.5\varepsilon_3, \; \; \; 0 < \varepsilon_3 < 0.8567.

    For k=8,

    \max\limits_{i \in N} \frac{p_i^{(7)}(M)}{|m_{ii}|} + \varepsilon_4 = 0.1624 + \varepsilon_4, \; \; \; \min\limits_{i \in N} Z_i = 0.0990 + 0.5\varepsilon_4, \; \; \; 0 < \varepsilon_4 < 0.9572.

    So, when k = 2, 4, 6, 8, by Corollary 1 and Theorem 1, we can get the upper bounds for \|M^{-1}\|_\infty; see Table 1. Thus,

    \|M^{-1}\|_\infty \leq \frac{0.5859 + \varepsilon_1}{0.4414 + 0.5\varepsilon_1} < 2, \; \; \; \|M^{-1}\|_\infty \leq \frac{0.3845 + \varepsilon_2}{0.2959 + 0.5\varepsilon_2} < 2,
    Table 1.  The bounds in Corollary 1 and Theorem 1.
    k        Cor 1                              Th 1
    2        (0.5859+ε_1)/(0.4414+0.5ε_1)       2
    4        (0.3845+ε_2)/(0.2959+0.5ε_2)       2
    6        (0.2504+ε_3)/(0.1733+0.5ε_3)       2
    8        (0.1624+ε_4)/(0.0990+0.5ε_4)       2


    and

    \|M^{-1}\|_\infty \leq \frac{0.2504 + \varepsilon_3}{0.1733 + 0.5\varepsilon_3} < 2, \; \; \; \|M^{-1}\|_\infty \leq \frac{0.1624 + \varepsilon_4}{0.0990 + 0.5\varepsilon_4} < 2.

    Moreover, when k=1, by Theorem 2, we have

    \|M^{-1}\|_\infty \leq \frac{0.7188 + \varepsilon_5}{0.4844 + 0.5\varepsilon_5}, \; \; \; 0 < \varepsilon_5 < 0.3214.

    The following Theorem 9 shows that the bound in Corollary 1 is better than the Varah bound in Theorem 1 in some cases.

    Theorem 9. Let matrix M = (m_{ij}) \in C^{n \times n} (n \geq 2) be an SDD matrix. If there exists k \geq 2 such that

    \max\limits_{i \in N} \frac{p_i^{(k-1)}(M)}{|m_{ii}|} \min\limits_{i \in N} (|m_{ii}| - r_i(M)) \leq \min\limits_{i \in N} \sum\limits_{j \in N\backslash\{i\}} \frac{p_j^{(k-2)}(M) - p_j^{(k-1)}(M)}{|m_{jj}|} |m_{ij}|,

    then

    \|M^{-1}\|_\infty \leq \frac{\max\limits_{i \in N} \frac{p_i^{(k-1)}(M)}{|m_{ii}|} + \varepsilon}{\min\limits_{i \in N} Z_i} \leq \frac{1}{\min\limits_{1 \leq i \leq n} (|m_{ii}| - r_i(M))},

    where Zi and ε are defined as in Corollary 1, respectively.

    Proof. From the given condition, there exists k \geq 2 such that

    \max\limits_{i \in N} \frac{p_i^{(k-1)}(M)}{|m_{ii}|} \min\limits_{i \in N} (|m_{ii}| - r_i(M)) \leq \min\limits_{i \in N} \sum\limits_{j \in N\backslash\{i\}} \frac{p_j^{(k-2)}(M) - p_j^{(k-1)}(M)}{|m_{jj}|} |m_{ij}|,

    then

    \max\limits_{i \in N} \frac{p_i^{(k-1)}(M)}{|m_{ii}|} \min\limits_{i \in N} (|m_{ii}| - r_i(M)) + \varepsilon \min\limits_{i \in N} (|m_{ii}| - r_i(M)) \leq \min\limits_{i \in N} \sum\limits_{j \in N\backslash\{i\}} \frac{p_j^{(k-2)}(M) - p_j^{(k-1)}(M)}{|m_{jj}|} |m_{ij}| + \varepsilon \min\limits_{i \in N} (|m_{ii}| - r_i(M)).

    Thus, we get

    \left( \max\limits_{i \in N} \frac{p_i^{(k-1)}(M)}{|m_{ii}|} + \varepsilon \right) \min\limits_{i \in N} (|m_{ii}| - r_i(M)) \leq \min\limits_{i \in N} \sum\limits_{j \in N\backslash\{i\}} \frac{p_j^{(k-2)}(M) - p_j^{(k-1)}(M)}{|m_{jj}|} |m_{ij}| + \min\limits_{i \in N} \left( \varepsilon (|m_{ii}| - r_i(M)) \right) \leq \min\limits_{i \in N} \left( \varepsilon (|m_{ii}| - r_i(M)) + \sum\limits_{j \in N\backslash\{i\}} \frac{p_j^{(k-2)}(M) - p_j^{(k-1)}(M)}{|m_{jj}|} |m_{ij}| \right) = \min\limits_{i \in N} Z_i.

    Since M is an SDD matrix, then

    |m_{ii}| > r_i(M), \; \; \; Z_i > 0, \; \; \; \forall i \in N.

    It is easy to verify that

    \frac{\max\limits_{i \in N} \frac{p_i^{(k-1)}(M)}{|m_{ii}|} + \varepsilon}{\min\limits_{i \in N} Z_i} \leq \frac{1}{\min\limits_{1 \leq i \leq n} (|m_{ii}| - r_i(M))}.

    Thus, by Corollary 1, it holds that

    \|M^{-1}\|_\infty \leq \frac{\max\limits_{i \in N} \frac{p_i^{(k-1)}(M)}{|m_{ii}|} + \varepsilon}{\min\limits_{i \in N} Z_i} \leq \frac{1}{\min\limits_{1 \leq i \leq n} (|m_{ii}| - r_i(M))}.

    The proof is completed.

    We illustrate Theorem 9 by the following Example 2.

    Example 2. Consider the matrix of Example 1 again. For k = 4, we have

    \max\limits_{i \in N} \frac{p_i^{(3)}(M)}{|m_{ii}|} \min\limits_{i \in N} (|m_{ii}| - r_i(M)) = 0.1923 < 0.2959 = \min\limits_{i \in N} \sum\limits_{j \in N\backslash\{i\}} \frac{p_j^{(2)}(M) - p_j^{(3)}(M)}{|m_{jj}|} |m_{ij}|.

    Thus, by Theorem 9, we obtain that for each 0 < \varepsilon_2 < 0.7034,

    \|M^{-1}\|_\infty \leq \frac{0.3845 + \varepsilon_2}{0.2959 + 0.5\varepsilon_2} < 2 = \frac{1}{\min\limits_{1 \leq i \leq n} (|m_{ii}| - r_i(M))}.

    However, the upper bounds in Theorems 8 and 9 contain the parameter \varepsilon. Next, based on Theorem 7, we provide new upper bounds for the infinity norm of the inverses of SDD_k matrices that depend only on the entries of the given matrix.

    Theorem 10. Let M = (m_{ij}) \in C^{n \times n} (n \geq 2) be an SDD_k (k \geq 2) matrix with \sum\limits_{j \neq i, j \in N_2(M)} |m_{ij}| > 0 for each i \in N_2(M). Then

    \|M^{-1}\|_\infty \leq \frac{\max\limits_{i \in N} \frac{p_i^{(k-1)}(M)}{|m_{ii}|}}{\min\limits_{i \in N} \left( p_i^{(k-1)}(M) - \sum\limits_{j \in N\backslash\{i\}} \frac{p_j^{(k-1)}(M)}{|m_{jj}|} |m_{ij}| \right)}.

    Proof. By Theorem 7, there exists a positive diagonal matrix X such that MX is an SDD matrix, where X is defined as in Theorem 7. Thus, it holds that

    \|M^{-1}\|_\infty = \|X (X^{-1} M^{-1})\|_\infty = \|X (MX)^{-1}\|_\infty \leq \|X\|_\infty \|(MX)^{-1}\|_\infty,

    and

    \|X\|_\infty = \max\limits_{1 \leq i \leq n} x_i = \max\limits_{i \in N} \frac{p_i^{(k-1)}(M)}{|m_{ii}|}.

    Notice that MX is an SDD matrix. Thus, by Theorem 1, we get

    \|(MX)^{-1}\|_\infty \leq \frac{1}{\min\limits_{1 \leq i \leq n} (|(MX)_{ii}| - r_i(MX))} = \frac{1}{\min\limits_{1 \leq i \leq n} (|m_{ii}| x_i - r_i(MX))} = \frac{1}{\min\limits_{i \in N} \left( p_i^{(k-1)}(M) - \sum\limits_{j \in N\backslash\{i\}} \frac{p_j^{(k-1)}(M)}{|m_{jj}|} |m_{ij}| \right)}.

    Therefore, we have that

    \|M^{-1}\|_\infty \leq \frac{\max\limits_{i \in N} \frac{p_i^{(k-1)}(M)}{|m_{ii}|}}{\min\limits_{i \in N} \left( p_i^{(k-1)}(M) - \sum\limits_{j \in N\backslash\{i\}} \frac{p_j^{(k-1)}(M)}{|m_{jj}|} |m_{ij}| \right)}.

    The proof is completed.
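The parameter-free bound of Theorem 10 can be sketched as follows. The helper names and the example matrix (an SDD_2 matrix that is not SDD) are ours, not from the paper.

```python
# Sketch of the parameter-free Theorem 10 bound (example matrix is ours).

def row_sum(M, i):
    return sum(abs(M[i][j]) for j in range(len(M)) if j != i)

def p_k(M, k):
    n = len(M)
    N1 = [i for i in range(n) if abs(M[i][i]) <= row_sum(M, i)]
    N2 = [i for i in range(n) if abs(M[i][i]) > row_sum(M, i)]
    prev = [row_sum(M, j) for j in range(n)]
    for _ in range(k + 1):
        prev = [sum(abs(M[i][j]) for j in N1 if j != i)
                + sum(prev[j] / abs(M[j][j]) * abs(M[i][j]) for j in N2 if j != i)
                for i in range(n)]
    return prev

def theorem10_bound(M, k):
    n = len(M)
    pk1 = p_k(M, k - 1)
    num = max(pk1[i] / abs(M[i][i]) for i in range(n))
    den = min(pk1[i] - sum(pk1[j] / abs(M[j][j]) * abs(M[i][j])
                           for j in range(n) if j != i) for i in range(n))
    return num / den

# SDD_2 but not SDD; the bound for k = 2 works out to exactly 117/30 = 3.9,
# a valid upper bound for the true value ||M^{-1}||_inf = 51/72.
M = [[4.0, 4.0, 3.0],
     [1.0, 5.0, 2.0],
     [2.0, 2.0, 6.0]]
```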

    Since SDD matrices form a subclass of SDD_k matrices, Theorem 10 yields the following result.

    Corollary 2. Let M = (m_{ij}) \in C^{n \times n} (n \geq 2) be an SDD matrix. If r_i(M) > 0 (\forall i \in N), then there exists k \geq 2 such that

    \|M^{-1}\|_\infty \leq \frac{\max\limits_{i \in N} \frac{p_i^{(k-1)}(M)}{|m_{ii}|}}{\min\limits_{i \in N} \sum\limits_{j \in N\backslash\{i\}} \frac{p_j^{(k-2)}(M) - p_j^{(k-1)}(M)}{|m_{jj}|} |m_{ij}|}.

    Two examples are given to show the advantage of the bound in Theorem 10.

    Example 3. Consider the following matrix:

    M=(4012120104.14620233480462023042040).

    It is easy to verify that M is neither an SDD matrix, an SDD_1 matrix, a GSDD_1 matrix, an S-SDD matrix, nor a CKV-type matrix. Therefore, we cannot use the error bounds in [1, 8, 9, 18, 20] to estimate \|M^{-1}\|_\infty. However, M is an SDD_2 matrix, so the bound in Theorem 10 can be applied.

    Example 4. Consider the tri-diagonal matrix M\in R^{n\times n} arising from the finite difference method for free boundary problems [18], where

    M = \left(\begin{array}{ccccc} b+\alpha \sin \left(\frac{1}{n}\right) &c &0 &\cdots &0 \\ a &b+\alpha \sin \left(\frac{2}{n}\right) &c &\cdots &0 \\ &\ddots &\ddots &\ddots &\\ 0 &\cdots &a &b+\alpha \sin \left(\frac{n-1}{n}\right) &c \\ 0 &\cdots &0 &a & b+\alpha \sin \left(1\right) \end{array} \right).

    Take n = 4, a = 1, b = 0, c = 3.7, and \alpha = 10. It is easy to verify that M is neither an SDD matrix nor an SDD_1 matrix. However, M is a GSDD_1 matrix and an SDD_3 matrix. By the bound in Theorem 10, we have

    \|{M^{ - 1}}\|_\infty \leq 8.2630,

    while by the bound in Theorem 4, it holds that

    \begin{eqnarray*} \|M^{- 1}\|_\infty \le \frac{\varepsilon }{{\min \left\{2.1488-\varepsilon, 0.3105, 2.474\varepsilon-3.6272 \right\} }}, \; \; \; \varepsilon\in (1.4661, 2.1488). \end{eqnarray*}

    The following two theorems show that the bound in Corollary 2 is better than that in Theorem 1 in some cases.

    Theorem 11. Let M = (m_{ij}) \in {C^{n \times n}}(n \geq 2) be an {SDD} matrix. If {r_i}(M) > 0(\forall i \in N) and there exists k \geq 2 such that

    \mathop {\min}\limits_{i \in N} \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(k - 2)}(M) - p_j^{(k - 1)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|} \ge \mathop {\min}\limits_{i \in N} (|{m_{ii}}| - {r_i}(M)),

    then

    ||{M^{ - 1}}|{|_\infty } \le \frac{{\mathop {\max }\limits_{i \in N} \frac{{p_i^{(k - 1)}(M)}}{{|{m_{ii}}|}}}}{{\mathop {\min}\limits_{i \in N} \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(k - 2)}(M) - p_j^{(k - 1)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|} }} < \frac{1}{{\mathop {\min}\limits_{1 \le i \le n} (|{m_{ii}}| - {r_i}(M))}}.

    Proof. Since M is an {SDD} matrix, then N_1(M) = \emptyset and M is an {SDD_k} matrix. By the given condition that {r_i}(M) > 0(\forall i \in N) , it holds that

    \begin{eqnarray*} |{m_{ii}}| & > & {r_i}(M) > \sum\limits_{j \in N \backslash \{ i\} } {\frac{{r_j(M)}}{{|{m_{jj}}|}}} |{m_{ij}}| = p_i^{(0)}(M) > 0, \; \; \; \forall i \in N, \\ p_i^{(0)}(M)& = &\sum\limits_{j \in N \backslash \{ i\} } {\frac{{r_j(M)}}{{|{m_{jj}}|}}} |{m_{ij}}| > \sum\limits_{j \in N \backslash \{ i\} } {\frac{{p_j^{(0)}(M)}}{{|{m_{jj}}|}}} |{m_{ij}}| = p_i^{(1)}(M) > 0, \; \; \; \forall i \in N. \end{eqnarray*}

    Similarly, we can obtain that

    \begin{eqnarray*} |{m_{ii}}| > {r_i}(M) > p_i^{(0)}(M) > \cdots > p_i^{(k-1)}(M) > 0, \; \; \; \forall i \in N, \end{eqnarray*}

    that is,

    \mathop {\max }\limits_{i \in N} \frac{{p_i^{(k - 1)}(M)}}{{|{m_{ii}}|}} < 1.

    Since there exists k \geq 2 such that

    \mathop {\min}\limits_{i \in N} \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(k - 2)}(M) - p_j^{(k - 1)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|} \ge \mathop {\min}\limits_{i \in N} (|{m_{ii}}| - {r_i}(M)),

    then we have

    \frac{{\mathop {\max }\limits_{i \in N} \frac{{p_i^{(k - 1)}(M)}}{{|{m_{ii}}|}}}}{{\mathop {\min}\limits_{i \in N} \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(k - 2)}(M) - p_j^{(k - 1)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|} }} < \frac{1}{{\mathop {\min}\limits_{1 \le i \le n} (|{m_{ii}}| - {r_i}(M))}}.

    Thus, from Corollary 2, we get

    ||{M^{ - 1}}|{|_\infty } \le \frac{{\mathop {\max }\limits_{i \in N} \frac{{p_i^{(k - 1)}(M)}}{{|{m_{ii}}|}}}}{{\mathop {\min}\limits_{i \in N} \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(k - 2)}(M) - p_j^{(k - 1)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|} }} < \frac{1}{{\mathop {\min}\limits_{1 \le i \le n} (|{m_{ii}}| - {r_i}(M))}}.

    The proof is completed.

    We illustrate Theorem 11 by the following Example 5.

    Example 5. Consider the matrix M = (m_{ij}) \in {C^{n \times n}}(n \geq 2) , where

    M = \left( {\begin{array}{*{20}{c}} 4&3&{0.9}&{}&{}&{}&{}&{}&{}\\ 1&6&2&{}&{}&{}&{}&{}&{}\\ {}&2&5&2&{}&{}&{}&{}&{}\\ {}&{}&2&5&2&{}&{}&{}&{}\\ {}&{}&{}& \ddots & \ddots & \ddots &{}&{}&{}\\ {}&{}&{}&{}&2&5&2&{}&{}\\ {}&{}&{}&{}&{}&2&5&2&{}\\ {}&{}&{}&{}&{}&{}&1&6&2\\ {}&{}&{}&{}&{}&{}&{0.9}&3&4 \end{array}} \right).

    Take n = 20. It is easy to check that M is an SDD matrix. Let

    {l_k} = \mathop {\min}\limits_{i \in N} \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(k - 2)}(M) - p_j^{(k - 1)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|}, \; \; m = \mathop {\min}\limits_{i \in N} (|{m_{ii}}| - {r_i}(M)).

    By calculations, we have

    \begin{eqnarray*} {l_2} & = & 0.2692 > 0.1 = m, \; \; \; \; {l_3} = 0.2567 > 0.1 = m, \; \; \; \; {l_4} = 0.1788 > 0.1 = m, \\ {l_5} & = & 0.1513 > 0.1 = m, \; \; \; \; {l_6} = 0.1037 > 0.1 = m. \end{eqnarray*}

    Thus, when k = 2, 3, 4, 5, 6, the matrix M satisfies the conditions of Theorem 11. By Theorems 1 and 11, we can derive the upper bounds for \|M^{-1}\|_\infty; see Table 2. Meanwhile, when k = 1, by Theorem 3, we get that \|M^{-1}\|_\infty \leq 1.6976.

    Table 2.  The bounds in Theorem 11 and Theorem 1.
    k        2         3         4         5         6
    Th 11    1.9022    1.5959    1.8332    1.7324    2.0214
    Th 1     10        10        10        10        10


    From Table 2, we can see that the bounds in Theorem 11 are sharper than those in Theorems 1 and 3 in some cases.

    Theorem 12. Let M = (m_{ij}) \in {C^{n \times n}}(n \geq 2) be an {SDD} matrix. If {r_i}(M) > 0(\forall i \in N) and there exists k \geq 2 such that

    \begin{eqnarray*} \mathop {\max }\limits_{i \in N} \frac{{p_i^{(k - 1)}(M)}}{{|{m_{ii}}|}}\mathop {\min}\limits_{i \in N} (|{m_{ii}}| - {r_i}(M)) &\le& \mathop {\min}\limits_{i \in N} \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(k - 2)}(M) - p_j^{(k - 1)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|} \\ & < & \mathop {\min}\limits_{i \in N} (|{m_{ii}}| - {r_i}(M)), \end{eqnarray*}

    then

    ||{M^{ - 1}}|{|_\infty } \le \frac{{\mathop {\max }\limits_{i \in N} \frac{{p_i^{(k - 1)}(M)}}{{|{m_{ii}}|}}}}{{\mathop {\min}\limits_{i \in N} \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(k - 2)}(M) - p_j^{(k - 1)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|} }} \le \frac{1}{{\mathop {\min}\limits_{1 \le i \le n} (|{m_{ii}}| - {r_i}(M))}}.

    Proof. By Theorem 7 and the given condition that {r_i}(M) > 0(\forall i \in N) , it is easy to get that

    \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(k - 2)}(M) - p_j^{(k - 1)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|} > 0, \; \; \; \forall i \in N.

    From the condition that there exists k \geq 2 such that

    \mathop {\max }\limits_{i \in N} \frac{{p_i^{(k - 1)}(M)}}{{|{m_{ii}}|}}\mathop {\min}\limits_{i \in N} (|{m_{ii}}| - {r_i}(M)) \le \mathop {\min}\limits_{i \in N} \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(k - 2)}(M) - p_j^{(k - 1)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|},

    we have

    \frac{{\mathop {\max }\limits_{i \in N} \frac{{p_i^{(k - 1)}(M)}}{{|{m_{ii}}|}}}}{{\mathop {\min}\limits_{i \in N} \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(k - 2)}(M) - p_j^{(k - 1)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|} }} \le \frac{1}{{\mathop {\min}\limits_{1 \le i \le n} (|{m_{ii}}| - {r_i}(M))}}.

    Thus, from Corollary 2, it holds that

    ||{M^{ - 1}}|{|_\infty } \le \frac{{\mathop {\max }\limits_{i \in N} \frac{{p_i^{(k - 1)}(M)}}{{|{m_{ii}}|}}}}{{\mathop {\min}\limits_{i \in N} \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(k - 2)}(M) - p_j^{(k - 1)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|} }} \le \frac{1}{{\mathop {\min}\limits_{1 \le i \le n} (|{m_{ii}}| - {r_i}(M))}}.

    The proof is completed.

    Next, we illustrate Theorem 12 by the following Example 6.

    Example 6. Consider the tri-diagonal matrix M = (m_{ij}) \in {C^{n \times n}}(n \geq 2) , where

    M = \left( {\begin{array}{*{20}{c}} 3&{-2.5}&{}&{}&{}&{}&{}&{}&{}\\ {-1.2}&4&-2&{}&{}&{}&{}&{}&{}\\ {}&{-2.8}&5&-1&{}&{}&{}&{}&{}\\ {}&{}&{-2.8}&5&-1&{}&{}&{}&{}\\ {}&{}&{}& \ddots & \ddots & \ddots &{}&{}&{}\\ {}&{}&{}&{}&{-2.8}&5&-1&{}&{}\\ {}&{}&{}&{}&{}&{-2.8}&5&-1&{}\\ {}&{}&{}&{}&{}&{}&{-1.2}&4&-2\\ {}&{}&{}&{}&{}&{}&{}&{-2.5}&3 \end{array}} \right).

    Take n = 20. It is easy to verify that M is an SDD matrix.

    By calculations, we have that for k = 2 ,

    \begin{eqnarray*} \mathop {\max }\limits_{i \in N} \frac{{p_i^{(1)}(M)}}{{|{m_{ii}}|}}\mathop {\min}\limits_{i \in N} (|{m_{ii}}| - {r_i}(M)) & = & 0.2686 < \mathop {\min}\limits_{i \in N} \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(0)}(M) - p_j^{(1)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|} = 0.3250\\ & < & 0.5 = \mathop {\min}\limits_{i \in N} (|{m_{ii}}| - {r_i}(M)). \end{eqnarray*}

    For k = 5 , we get

    \begin{eqnarray*} \mathop {\max }\limits_{i \in N} \frac{{p_i^{(4)}(M)}}{{|{m_{ii}}|}}\mathop {\min}\limits_{i \in N} (|{m_{ii}}| - {r_i}(M)) & = & 0.1319 < \mathop {\min}\limits_{i \in N} \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(3)}(M) - p_j^{(4)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|} = 0.1685\\ & < & 0.5 = \mathop {\min}\limits_{i \in N} (|{m_{ii}}| - {r_i}(M)). \end{eqnarray*}

    For k = 10 , it holds that

    \begin{eqnarray*} \mathop {\max }\limits_{i \in N} \frac{{p_i^{(9)}(M)}}{{|{m_{ii}}|}}\mathop {\min}\limits_{i \in N} (|{m_{ii}}| - {r_i}(M)) & = & 0.0386 < \mathop {\min}\limits_{i \in N} \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(8)}(M) - p_j^{(9)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|} = 0.0485\\ & < & 0.5 = \mathop {\min}\limits_{i \in N} (|{m_{ii}}| - {r_i}(M)). \end{eqnarray*}

    Thus, for k = 2, 5, 10, the matrix M satisfies the conditions of Theorem 12. From Theorems 12 and 1, we get the upper bounds for \|M^{-1}\|_\infty; see Table 3. Meanwhile, when k = 1, by Theorem 3, we have that \|M^{-1}\|_\infty \leq 1.7170.

    Table 3.  The bounds in Theorem 12 and Theorem 1.
    k        2         5         10
    Th 12    1.6530    1.5656    1.5925
    Th 1     2         2         2


    From Table 3, we can see that the bound in Theorem 12 is sharper than those in Theorems 1 and 3 in some cases.

    In this paper, {SDD_k} matrices, a new subclass of H-matrices containing {SDD} matrices and {SDD_1} matrices, are proposed, and some properties of {SDD_k} matrices are obtained. Meanwhile, some new upper bounds on the infinity norm of the inverses of {SDD} matrices and {SDD_k} matrices are presented. Furthermore, we prove that the new bounds are better than some existing bounds in certain cases. Some numerical examples are also provided to show the validity of the new results. In the future, based on the proposed infinity norm bounds, we will explore computable error bounds for linear complementarity problems of {SDD_k} matrices.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This research is supported by Guizhou Provincial Science and Technology Projects (20191161), and the Natural Science Research Project of Department of Education of Guizhou Province (QJJ2023062, QJJ2023063).

    The authors declare that they have no competing interests.



    [30] G. Murugusundaramoorthy, L. Cotîrla, Bi-univalent functions of complex order defined by Hohlov operator associated with legendrae polynomial, AIMS Mathematics, 7 (2022), 8733–8750. https://doi.org/10.3934/math.2022488 doi: 10.3934/math.2022488
    [31] P. Zaprawa, On the Fekete-Szegö problem for classes of bi-univalent functions, Bull. Belg. Math. Soc. Simon Stevin, 21 (2014), 169–178. https://doi.org/10.36045/bbms/1394544302 doi: 10.36045/bbms/1394544302
    [32] H. M. Srivastava, M. Kamali, A. Urdaletova, A study of the Fekete-Szegö functional and coefficient estimates for subclasses of analytic functions satisfying a certain subordination condition and associated with the Gegenbauer polynomials, AIMS Mathematics, 7 (2022), 2568–2584. https://doi.org/10.3934/math.2022144 doi: 10.3934/math.2022144
  • © 2025 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)