Let $ n $ be a positive integer, $ N = \{ 1, 2, \ldots, n\} $, and let $ C^{n\times n} $ denote the set of all complex matrices of order $ n $. A matrix $ M = (m_{ij}) \in {C^{n \times n}}(n \geq 2) $ is called a strictly diagonally dominant ($ {SDD} $) matrix [1] if

$ |m_{ii}| > r_i(M) = \sum\limits_{j = 1, j \neq i}^{n}|m_{ij}|, \; \; \; \; \forall i \in N. $

It was shown that an $ SDD $ matrix is an $ H $-matrix [1], where a matrix $ M = ({m_{ij}}) \in {C^{n \times n}}(n \ge 2) $ is called an $ H $-matrix [1, 2, 3] if and only if there exists a positive diagonal matrix $ X $ such that $ MX $ is an $ SDD $ matrix [1, 2, 4]. $ H $-matrices are widely applied in many fields, such as computational mathematics, economics, mathematical physics and dynamical system theory, see [1, 4, 5, 6]. Meanwhile, upper bounds for the infinity norm of the inverses of $ H $-matrices can be used in the convergence analysis of matrix splitting and matrix multi-splitting iterative methods for solving large sparse systems of linear equations [7], as well as in linear complementarity problems. Moreover, upper bounds of the infinity norm of the inverse for different classes of matrices have been widely studied, such as $ CKV $-type matrices [8], $ S $-$ SDDS $ matrices [9], $ DZ $ and $ DZ $-type matrices [10, 11], $ Nekrasov $ matrices [12, 13, 14, 15], $ S $-$ Nekrasov $ matrices [16], $ Q $-$ Nekrasov $ matrices [17], $ GSDD_1 $ matrices [18] and so on.

In 2011, Peña [19] proposed a new subclass of $ H $-matrices called $ {SDD_1} $ matrices, whose definition is listed below. A matrix $ M = (m_{ij}) \in {C^{n \times n}}(n \geq 2) $ is called an $ {SDD_1} $ matrix if

$ |m_{ii}| > p_i(M), \; \; \; \; \forall i \in N_1(M), $

    where

$ p_i(M) = \sum\limits_{j \in N_1(M)\backslash\{i\}}|m_{ij}| + \sum\limits_{j \in N_2(M)\backslash\{i\}}\frac{r_j(M)}{|m_{jj}|}|m_{ij}|, \; \; \; \; N_1(M) = \{ i \; | \; |m_{ii}| \leq r_i(M)\}, \; \; \; \; N_2(M) = \{ i \; | \; |m_{ii}| > r_i(M)\}. $
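For readers who wish to check this classification numerically, the following short sketch (an illustration, not part of the original paper; the function names and the NumPy-based interface are our own assumptions) computes $ r_i(M) $, the index sets $ N_1(M) $ and $ N_2(M) $, and $ p_i(M) $, and tests the $ SDD_1 $ condition.

```python
# A minimal sketch (illustrative only): classify indices into N1/N2 and
# test the SDD_1 condition for a square matrix M given as a NumPy array.
import numpy as np

def row_sums(M):
    """r_i(M) = sum of off-diagonal absolute values in row i."""
    A = np.abs(M)
    return A.sum(axis=1) - np.diag(A)

def is_SDD1(M):
    A = np.abs(M)
    d = np.diag(A)
    r = row_sums(M)
    N1 = np.where(d <= r)[0]          # indices with |m_ii| <= r_i(M)
    N2 = np.where(d > r)[0]           # indices with |m_ii| >  r_i(M)
    for i in N1:
        p_i = sum(A[i, j] for j in N1 if j != i) \
            + sum(r[j] / d[j] * A[i, j] for j in N2 if j != i)
        if not d[i] > p_i:            # the SDD_1 condition is only required on N1(M)
            return False
    return True
```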

In 2023, Dai et al. [18] gave a new subclass of $ H $-matrices named generalized $ SDD_1 $ ($ GSDD_1 $) matrices, which extends the class of $ SDD_1 $ matrices. Here, a matrix $ M = (m_{ij}) \in {C^{n \times n}}(n \geq 2) $ is said to be a $ GSDD_1 $ matrix if

$ r_i(M) - p_i^{N_2(M)}(M) > 0, \; \; \; \; \forall i \in N_2(M), $

    and

$ \left(r_i(M) - p_i^{N_2(M)}(M)\right)\left(|m_{jj}| - p_j^{N_1(M)}(M)\right) > p_i^{N_1(M)}(M)\, p_j^{N_2(M)}(M), \; \; \; \; \forall i \in N_2(M), \; \forall j \in N_1(M), $

    where

$ p_i^{N_2(M)}(M) = \sum\limits_{j \in N_2(M)\backslash\{i\}}\frac{r_j(M)}{|m_{jj}|}|m_{ij}|, \; \; \; \; p_i^{N_1(M)}(M) = \sum\limits_{j \in N_1(M)\backslash\{i\}}|m_{ij}|, \; \; \; \; \forall i \in N. $

Subsequently, some upper bounds for the infinity norm of the inverses of $ {SDD} $ matrices, $ {SDD_1} $ matrices and $ GSDD_1 $ matrices were presented, see [18, 20, 21]. The following results, which will be used later, are listed here.

    Theorem 1. (Varah bound) [21] Let matrix $ M = (m_{ij}) \in {C^{n \times n}}(n \geq 2) $ be an $ {SDD} $ matrix. Then

$ ||M^{-1}||_{\infty} \leq \frac{1}{\min\limits_{1 \leq i \leq n}(|m_{ii}| - r_i(M))}. $
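The Varah bound is the baseline against which the later bounds are compared; it can be evaluated with a few lines of code (an illustrative snippet, assuming the input is an $ SDD $ matrix stored as a NumPy array; the function name is ours).

```python
# A minimal sketch of the Varah bound of Theorem 1 for an SDD matrix M.
import numpy as np

def varah_bound(M):
    A = np.abs(M)
    r = A.sum(axis=1) - np.diag(A)      # r_i(M)
    gaps = np.diag(A) - r               # |m_ii| - r_i(M), all positive for SDD
    assert np.all(gaps > 0), "M must be strictly diagonally dominant"
    return 1.0 / gaps.min()
```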

    Theorem 2. [20] Let matrix $ M = (m_{ij}) \in {C^{n \times n}}(n \geq 2) $ be an $ {SDD} $ matrix. Then

$ ||M^{-1}||_{\infty} \leq \frac{\max\limits_{i \in N}\frac{p_i(M)}{|m_{ii}|} + \varepsilon}{\min\limits_{i \in N} Z_i}, \; \; \; \; 0 < \varepsilon < \min\limits_{i \in N}\frac{|m_{ii}| - p_i(M)}{r_i(M)}, $

    where

$ Z_i = \varepsilon(|m_{ii}| - r_i(M)) + \sum\limits_{j \in N\backslash\{i\}}\frac{r_j(M) - p_j(M)}{|m_{jj}|}|m_{ij}|. $

    Theorem 3. [20] Let matrix $ M = (m_{ij}) \in {C^{n \times n}}(n \geq 2) $ be an $ {SDD} $ matrix. If $ {r_i}(M) > 0(\forall i \in N) $, then

$ ||M^{-1}||_{\infty} \leq \frac{\max\limits_{i \in N}\frac{p_i(M)}{|m_{ii}|}}{\min\limits_{i \in N}\sum\limits_{j \in N\backslash\{i\}}\frac{r_j(M) - p_j(M)}{|m_{jj}|}|m_{ij}|}. $

    Theorem 4. [18] Let $ M = (m_{ij})\in {C^{n \times n}} $ be a $ GSDD_1 $ matrix. Then

$ ||M^{-1}||_{\infty} \leq \frac{\max\left\{\varepsilon, \; \max\limits_{i \in N_2(M)}\frac{r_i(M)}{|m_{ii}|}\right\}}{\min\left\{\min\limits_{i \in N_2(M)}\phi_i, \; \min\limits_{i \in N_1(M)}\psi_i\right\}}, $

    where

$ \phi_i = r_i(M) - \sum\limits_{j \in N_2(M)\backslash\{i\}}\frac{r_j(M)}{|m_{jj}|}|m_{ij}| - \sum\limits_{j \in N_1(M)\backslash\{i\}}|m_{ij}|\,\varepsilon, \; \; \; \; \psi_i = |m_{ii}|\varepsilon - \sum\limits_{j \in N_1(M)\backslash\{i\}}|m_{ij}|\,\varepsilon - \sum\limits_{j \in N_2(M)\backslash\{i\}}\frac{r_j(M)}{|m_{jj}|}|m_{ij}|, $

    and

$ \max\limits_{i \in N_1(M)}\frac{p_i^{N_2(M)}(M)}{|m_{ii}| - p_i^{N_1(M)}(M)} < \varepsilon < \min\limits_{j \in N_2(M)}\frac{r_j(M) - p_j^{N_2(M)}(M)}{p_j^{N_1(M)}(M)}. $

On the basis of the above works, we continue to study structured matrices and introduce a new subclass of $ H $-matrices called $ {SDD_k} $ matrices, and we provide some new upper bounds for the infinity norm of the inverses of $ {SDD} $ matrices and $ {SDD_k} $ matrices, which improve previous results. The remainder of this paper is organized as follows: In Section 2, we propose a new subclass of $ H $-matrices called $ {SDD_k} $ matrices, which includes $ {SDD} $ matrices and $ {SDD_1} $ matrices, and derive some properties of $ {SDD_k} $ matrices. In Section 3, we present some upper bounds for the infinity norm of the inverses of $ {SDD} $ matrices and $ {SDD_k} $ matrices, and provide some comparisons with the well-known Varah bound. Moreover, some numerical examples are given to illustrate the corresponding results.

    In this section, we propose a new subclass of $ H $-matrices called $ {SDD_k} $ matrices, which include $ {SDD} $ matrices and $ {SDD_1} $ matrices, and derive some new properties.

    Definition 1. A matrix $ M = (m_{ij}) \in {C^{n \times n}}(n \geq 2) $ is called an $ {SDD_k} $ matrix if there exists $ {\rm{ }}k \in {N} $ such that

$ |m_{ii}| > p_i^{(k-1)}(M), \; \; \; \; \forall i \in N_1(M), $

    where

$ p_i^{(k)}(M) = \sum\limits_{j \in N_1(M)\backslash\{i\}}|m_{ij}| + \sum\limits_{j \in N_2(M)\backslash\{i\}}\frac{p_j^{(k-1)}(M)}{|m_{jj}|}|m_{ij}|, \; \; \; \; p_i^{(0)}(M) = \sum\limits_{j \in N_1(M)\backslash\{i\}}|m_{ij}| + \sum\limits_{j \in N_2(M)\backslash\{i\}}\frac{r_j(M)}{|m_{jj}|}|m_{ij}|. $

It follows immediately from Definition 1 that the class of $ {SDD_k} $ matrices contains both $ {SDD} $ matrices and $ {SDD_1} $ matrices, so

$ \{SDD\} \subseteq \{SDD_1\} \subseteq \{SDD_2\} \subseteq \cdots \subseteq \{SDD_k\}. $
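The recursion in Definition 1 is straightforward to implement. The sketch below (illustrative only; the function names and interface are our own assumptions) computes the vector $ (p_i^{(k-1)}(M))_i $ and tests the $ SDD_k $ condition on $ N_1(M) $.

```python
# A minimal sketch of the recursion p_i^{(k)}(M) in Definition 1 and of the
# resulting SDD_k test; M is a square NumPy array, k >= 1.
import numpy as np

def p_sequence(M, k):
    """Return the vector (p_i^{(k-1)}(M))_i used in Definition 1."""
    A = np.abs(M)
    d = np.diag(A)
    r = A.sum(axis=1) - d
    N2 = d > r                          # N1 is the complement of N2
    # p^{(0)}: N1-neighbours counted fully, N2-neighbours weighted by r_j/|m_jj|
    w = np.where(N2, r / d, 1.0)
    Aoff = A - np.diag(d)
    p = Aoff @ w
    for _ in range(k - 1):              # p^{(1)}, ..., p^{(k-1)}
        w = np.where(N2, p / d, 1.0)
        p = Aoff @ w
    return p

def is_SDDk(M, k):
    A = np.abs(M)
    d = np.diag(A)
    r = A.sum(axis=1) - d
    p = p_sequence(M, k)
    N1 = d <= r
    return np.all(d[N1] > p[N1])        # condition only required on N1(M)
```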

    Lemma 1. A matrix $ M = (m_{ij}) \in {C^{n \times n}}(n \geq 2) $ is an $ {SDD_k}(k \geq 2) $ matrix if and only if for $ \forall i \in N $, $ |{m_{ii}}| > p_i^{(k - 1)}(M) $.

    Proof. For $ \forall i \in N_1(M) $, from Definition 1, it holds that $ |{m_{ii}}| > p_i^{(k - 1)}(M) $.

    For $ \forall i \in N_2(M) $, we have that $ |{m_{ii}}| > {r_i}(M) $. When $ k = 2 $, it follows that

$ |m_{ii}| > r_i(M) \geq \sum\limits_{j \in N_1(M)\backslash\{i\}}|m_{ij}| + \sum\limits_{j \in N_2(M)\backslash\{i\}}\frac{r_j(M)}{|m_{jj}|}|m_{ij}| = p_i^{(0)}(M) \geq \sum\limits_{j \in N_1(M)\backslash\{i\}}|m_{ij}| + \sum\limits_{j \in N_2(M)\backslash\{i\}}\frac{p_j^{(0)}(M)}{|m_{jj}|}|m_{ij}| = p_i^{(1)}(M). $

Suppose that $ |{m_{ii}}| > p_i^{(k-1)}(M) $ holds for all $ k \leq l $ ($ l \geq 2 $) and $ \forall i \in N_2(M) $. When $ k = l+1 $, we have

$ |m_{ii}| > r_i(M) \geq \sum\limits_{j \in N_1(M)\backslash\{i\}}|m_{ij}| + \sum\limits_{j \in N_2(M)\backslash\{i\}}\frac{p_j^{(l-2)}(M)}{|m_{jj}|}|m_{ij}| = p_i^{(l-1)}(M) \geq \sum\limits_{j \in N_1(M)\backslash\{i\}}|m_{ij}| + \sum\limits_{j \in N_2(M)\backslash\{i\}}\frac{p_j^{(l-1)}(M)}{|m_{jj}|}|m_{ij}| = p_i^{(l)}(M). $

    By induction, we obtain that for $ \forall i \in N_2(M) $, $ |{m_{ii}}| > p_i^{(k - 1)}(M) $. Consequently, it holds that matrix $ M $ is an $ SDD_k $ matrix if and only if $ |{m_{ii}}| > p_i^{(k - 1)}(M) $ for $ \forall i \in N $. The proof is completed.

    Lemma 2. If $ M = (m_{ij}) \in {C^{n \times n}}(n \geq 2) $ is an $ {SDD_k}(k \geq 2) $ matrix, then $ M $ is an $ H $-matrix.

    Proof. Let $ X $ be the diagonal matrix $ diag\{ {x_1}, {x_2}, \cdots, {x_n}\} $, where

$ (0 <)\; x_j = \begin{cases} 1, & j \in N_1(M), \\ \frac{p_j^{(k-1)}(M)}{|m_{jj}|} + \varepsilon, & j \in N_2(M), \end{cases} $

    and

$ 0 < \varepsilon < \min\limits_{i \in N}\frac{|m_{ii}| - p_i^{(k-1)}(M)}{\sum\limits_{j \in N_2(M)\backslash\{i\}}|m_{ij}|}. $

    If $ \sum\limits_{j \in {N_2}(M)\backslash \{ i\} } {|{m_{ij}}|} = 0 $, then the corresponding fraction is defined to be $ {\infty} $. Next we consider the following two cases.

    Case 1: For each $ i \in {N_1}(M) $, it is not difficult to see that $ |{{{(MX}})_{ii}}| = |{m_{ii}}| $, and

$ \begin{aligned} r_i(MX) &= \sum\limits_{j = 1, j \ne i}^{n}|m_{ij}|x_j = \sum\limits_{j \in N_1(M)\backslash\{i\}}|m_{ij}| + \sum\limits_{j \in N_2(M)\backslash\{i\}}\left(\frac{p_j^{(k-1)}(M)}{|m_{jj}|}+\varepsilon\right)|m_{ij}| \leq \sum\limits_{j \in N_1(M)\backslash\{i\}}|m_{ij}| + \sum\limits_{j \in N_2(M)\backslash\{i\}}\left(\frac{p_j^{(k-2)}(M)}{|m_{jj}|}+\varepsilon\right)|m_{ij}| \\ &= \sum\limits_{j \in N_1(M)\backslash\{i\}}|m_{ij}| + \sum\limits_{j \in N_2(M)\backslash\{i\}}\frac{p_j^{(k-2)}(M)}{|m_{jj}|}|m_{ij}| + \varepsilon\sum\limits_{j \in N_2(M)\backslash\{i\}}|m_{ij}| = p_i^{(k-1)}(M) + \varepsilon\sum\limits_{j \in N_2(M)\backslash\{i\}}|m_{ij}| < p_i^{(k-1)}(M) + |m_{ii}| - p_i^{(k-1)}(M) = |m_{ii}| = |(MX)_{ii}|. \end{aligned} $

    Case 2: For each $ i \in {N_2}(M) $, we can obtain that

$ |(MX)_{ii}| = |m_{ii}|\left(\frac{p_i^{(k-1)}(M)}{|m_{ii}|} + \varepsilon\right) = p_i^{(k-1)}(M) + \varepsilon|m_{ii}|, $

    and

$ \begin{aligned} r_i(MX) &= \sum\limits_{j = 1, j \ne i}^{n}|m_{ij}|x_j = \sum\limits_{j \in N_1(M)\backslash\{i\}}|m_{ij}| + \sum\limits_{j \in N_2(M)\backslash\{i\}}\left(\frac{p_j^{(k-1)}(M)}{|m_{jj}|}+\varepsilon\right)|m_{ij}| \leq \sum\limits_{j \in N_1(M)\backslash\{i\}}|m_{ij}| + \sum\limits_{j \in N_2(M)\backslash\{i\}}\left(\frac{p_j^{(k-2)}(M)}{|m_{jj}|}+\varepsilon\right)|m_{ij}| \\ &= \sum\limits_{j \in N_1(M)\backslash\{i\}}|m_{ij}| + \sum\limits_{j \in N_2(M)\backslash\{i\}}\frac{p_j^{(k-2)}(M)}{|m_{jj}|}|m_{ij}| + \varepsilon\sum\limits_{j \in N_2(M)\backslash\{i\}}|m_{ij}| = p_i^{(k-1)}(M) + \varepsilon\sum\limits_{j \in N_2(M)\backslash\{i\}}|m_{ij}| < p_i^{(k-1)}(M) + \varepsilon|m_{ii}| = |(MX)_{ii}|. \end{aligned} $

    Based on Cases 1 and 2, we have that $ MX $ is an $ {SDD} $ matrix, and consequently, $ M $ is an $ H $-matrix. The proof is completed.
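The proof of Lemma 2 is constructive, so the scaling matrix $ X $ can be formed explicitly. A minimal sketch is given below (illustrative code, assuming $ M $ is an $ {SDD_k} $ matrix stored as a NumPy array and a parameter $ \theta \in (0, 1) $ that picks an admissible $ \varepsilon $ inside the interval above; the helper name is ours). It returns the diagonal of $ X $ and whether $ MX $ is indeed an $ SDD $ matrix.

```python
# A minimal sketch of the scaling used in the proof of Lemma 2: build
# X = diag(x_1, ..., x_n) with x_j = 1 on N1(M) and p_j^{(k-1)}/|m_jj| + eps
# on N2(M), then check that M X is strictly diagonally dominant.
import numpy as np

def lemma2_scaling(M, k, theta=0.5):
    A = np.abs(M)
    d = np.diag(A)
    r = A.sum(axis=1) - d
    N1 = d <= r
    N2 = ~N1
    Aoff = A - np.diag(d)

    # p^{(0)}, ..., p^{(k-1)} as in Definition 1
    w = np.where(N2, r / d, 1.0)
    p = Aoff @ w
    for _ in range(k - 1):
        w = np.where(N2, p / d, 1.0)
        p = Aoff @ w

    # admissible eps: 0 < eps < min_i (|m_ii| - p_i^{(k-1)}) / sum_{j in N2\{i}} |m_ij|
    # (fractions with zero denominator are treated as +infinity, as in the text)
    denom = Aoff[:, N2].sum(axis=1)
    with np.errstate(divide="ignore"):
        eps_max = np.min(np.where(denom > 0, (d - p) / denom, np.inf))
    eps = theta * eps_max               # theta in (0,1) picks a point inside the interval

    x = np.where(N1, 1.0, p / d + eps)
    B = np.abs(M @ np.diag(x))
    is_sdd = np.all(np.diag(B) > B.sum(axis=1) - np.diag(B))
    return x, is_sdd                    # is_sdd should be True for an SDD_k matrix
```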

    According to the definition of $ {SDD_k} $ matrix and Lemma 1, we obtain some properties of $ {SDD_k} $ matrices as follows:

    Theorem 5. If $ M = (m_{ij}) \in {C^{n \times n}}(n \geq 2) $ is an $ {SDD_k} $ matrix and $ N_1(M) \ne \emptyset $, then for $ \forall i \in {N_1}(M) $, $ \sum\limits_{j \ne i, j \in {N_2}(M)} {|{m_{ij}}|} > 0 $.

Proof. Suppose, on the contrary, that there exists $ i \in N_1(M) $ such that $ \sum\limits_{j \ne i, j \in N_2(M)} {|{m_{ij}}|} = 0 $. By Definition 1, we then have $ p_i^{(k - 1)}(M) = {r_i}(M) $. Thus, it follows that $ |{m_{ii}}| > p_i^{(k - 1)}(M) = {r_i}(M) \ge |{m_{ii}}| $, which is a contradiction. Thus for $ \forall i \in N_1(M) $, $ \sum\limits_{j \ne i, j \in N_2(M)} {|{m_{ij}}|} > 0 $. The proof is completed.

Theorem 6. Let $ M = (m_{ij}) \in {C^{n \times n}}(n \geq 2) $ be an $ {SDD_k}(k \geq 2) $ matrix such that $ \sum\limits_{j \ne i, j \in N_2(M)} {|{m_{ij}}|} > 0 $ for $ \forall i \in N_2(M) $. Then

$ |m_{ii}| > p_i^{(k-2)}(M) > p_i^{(k-1)}(M) > 0, \; \; \; \; \forall i \in N_2(M), $

    and

$ |m_{ii}| > p_i^{(k-1)}(M) > 0, \; \; \; \; \forall i \in N. $

    Proof. By Lemma 1 and the known conditions that for $ \forall i \in {N_2}{\rm{(}}M{\rm{)}} $, $ \sum\limits_{j \ne i, j \in N_2(M)} {|{m_{ij}}|} > 0 $, it holds that

$ |m_{ii}| > p_i^{(k-2)}(M) > p_i^{(k-1)}(M) > 0, \; \; \; \; \forall i \in N_2(M), $

    and

    $ |{m_{ii}}| > p_i^{(k - 1)}(M), \; \; \; \; \forall i \in N. $

    We now prove that $ |{m_{ii}}| > p_i^{(k - 1)}(M) > 0(\forall i \in {N}) $ and consider the following two cases.

    Case 1: If $ {N_1}{\rm{(}}M{\rm{) = }}\emptyset $, then $ M $ is an $ {SDD} $ matrix. It is easy to verify that $ |{m_{ii}}| > p_i^{(k - 1)}(M) > 0 $, $ \forall i \in {N_2}{\rm{(}}M{\rm{)}} = N $.

    Case 2: If $ {N_1}{\rm{(}}M{\rm{)}} \ne \emptyset $, by Theorem 5 and the known condition that for $ \forall i \in {N_2}{\rm{(}}M{\rm{)}} $, $ \sum\limits_{j \ne i, j \in N_2(M)} {|{m_{ij}}|} > 0 $, then it is easy to obtain that $ |{m_{ii}}| > p_i^{(k - 1)}(M) > 0(\forall i \in N) $.

    From Cases 1 and 2, we have that $ |{m_{ii}}| > p_i^{(k - 1)}(M) > 0(\forall i \in N) $. The proof is completed.

    Theorem 7. Let $ M = (m_{ij}) \in {C^{n \times n}}(n \geq 2) $ be an $ {SDD_k}(k \geq 2) $ matrix and for $ \forall i \in {N_2}{\rm{(}}M{\rm{)}} $, $ \sum\limits_{j \ne i, j \in N_2(M)} {|{m_{ij}}|} > 0 $. Then there exists a diagonal matrix $ X = diag\{ {x_1}, {x_2}, \cdots, {x_n}\} $ such that $ MX $ is an $ {SDD} $ matrix. Elements $ {x_1}, {x_2}, \ldots, {x_n} $ are determined by

$ x_i = \frac{p_i^{(k-1)}(M)}{|m_{ii}|}, \; \; \; \; \forall i \in N. $

    Proof. We need to prove that matrix $ MX $ satisfies the following inequalities:

    $ |{{\rm{(}}MX)_{ii}}| > {r_i}(MX), \; \; \; \; \forall i \in N. $

    From Theorem 6 and the known condition that for $ \forall i \in {N_2}{\rm{(}}M{\rm{)}} $, $ \sum\limits_{j \ne i, j \in N_2(M)} {|{m_{ij}}|} > 0 $, it is easy to verify that

    $ 0 < \frac{{p_i^{(k - 1)}(M)}}{{|{m_{ii}}|}} < \frac{{p_i^{(k - 2)}(M)}}{{|{m_{ii}}|}} < 1, \; \; \; \; \forall i \in {N_2(M)}. $

    For each $ i \in N $, we have that $ |{{\rm{(}}MX)_{ii}}| = p_i^{(k - 1)}(M) $ and

$ r_i(MX) = \sum\limits_{j = 1, j \ne i}^{n}|m_{ij}|x_j = \sum\limits_{j \in N_1(M)\backslash\{i\}}\frac{p_j^{(k-1)}(M)}{|m_{jj}|}|m_{ij}| + \sum\limits_{j \in N_2(M)\backslash\{i\}}\frac{p_j^{(k-1)}(M)}{|m_{jj}|}|m_{ij}| < \sum\limits_{j \in N_1(M)\backslash\{i\}}|m_{ij}| + \sum\limits_{j \in N_2(M)\backslash\{i\}}\frac{p_j^{(k-2)}(M)}{|m_{jj}|}|m_{ij}| = p_i^{(k-1)}(M) = |(MX)_{ii}|, $

    that is,

    $ |{{\rm{(}}MX)_{ii}}| > {r_i}(MX). $

    Therefore, $ MX $ is an $ SDD $ matrix. The proof is completed.

In this section, by Lemma 2 and Theorem 7, we provide some new upper bounds of the infinity norm of the inverses of $ {SDD_k} $ matrices and $ {SDD} $ matrices, respectively, and we present some comparisons with the Varah bound. Some numerical examples are presented to illustrate the corresponding results. In particular, when the involved matrices are $ {SDD_1} $ matrices, as a subclass of $ {SDD_k} $ matrices, these new bounds coincide with those provided by Chen et al. [20].

    Theorem 8. Let $ M = (m_{ij})\in {C^{n \times n}}(n \geq 2) $ be an $ {SDD_k}(k \geq 2) $ matrix. Then

$ ||M^{-1}||_{\infty} \leq \frac{\max\left\{1, \; \max\limits_{i \in N_2(M)}\frac{p_i^{(k-1)}(M)}{|m_{ii}|} + \varepsilon\right\}}{\min\left\{\min\limits_{i \in N_1(M)} U_i, \; \min\limits_{i \in N_2(M)} V_i\right\}}, $

    where

$ U_i = |m_{ii}| - \sum\limits_{j \in N_1(M)\backslash\{i\}}|m_{ij}| - \sum\limits_{j \in N_2(M)\backslash\{i\}}\left(\frac{p_j^{(k-1)}(M)}{|m_{jj}|} + \varepsilon\right)|m_{ij}|, \; \; \; \; V_i = \varepsilon\left(|m_{ii}| - \sum\limits_{j \in N_2(M)\backslash\{i\}}|m_{ij}|\right) + \sum\limits_{j \in N_2(M)\backslash\{i\}}\frac{p_j^{(k-2)}(M) - p_j^{(k-1)}(M)}{|m_{jj}|}|m_{ij}|, $

    and

$ 0 < \varepsilon < \min\limits_{i \in N}\frac{|m_{ii}| - p_i^{(k-1)}(M)}{\sum\limits_{j \in N_2(M)\backslash\{i\}}|m_{ij}|}. $

Proof. By Lemma 2, there exists a positive diagonal matrix $ X $ such that $ MX $ is an $ {SDD} $ matrix, where $ X $ is defined as in Lemma 2. Thus,

$ ||M^{-1}||_{\infty} = ||X(X^{-1}M^{-1})||_{\infty} = ||X(MX)^{-1}||_{\infty} \leq ||X||_{\infty}\,||(MX)^{-1}||_{\infty}, $

    and

$ ||X||_{\infty} = \max\limits_{1 \leq i \leq n} x_i = \max\left\{1, \; \max\limits_{i \in N_2(M)}\frac{p_i^{(k-1)}(M)}{|m_{ii}|} + \varepsilon\right\}. $

    Notice that $ MX $ is an $ {SDD} $ matrix. Hence, by Theorem 1, we have

$ ||(MX)^{-1}||_{\infty} \leq \frac{1}{\min\limits_{1 \leq i \leq n}(|(MX)_{ii}| - r_i(MX))}. $

    Thus, for any $ i \in {N_1}{\rm{(}}M{\rm{)}} $, it holds that

$ |(MX)_{ii}| - r_i(MX) = |m_{ii}| - \sum\limits_{j \in N_1(M)\backslash\{i\}}|m_{ij}| - \sum\limits_{j \in N_2(M)\backslash\{i\}}\left(\frac{p_j^{(k-1)}(M)}{|m_{jj}|} + \varepsilon\right)|m_{ij}| = U_i. $

    For any $ i \in {N_2}{\rm{(}}M{\rm{)}} $, it holds that

$ \begin{aligned} |(MX)_{ii}| - r_i(MX) &= p_i^{(k-1)}(M) + \varepsilon|m_{ii}| - \sum\limits_{j \in N_1(M)\backslash\{i\}}|m_{ij}| - \sum\limits_{j \in N_2(M)\backslash\{i\}}\left(\frac{p_j^{(k-1)}(M)}{|m_{jj}|} + \varepsilon\right)|m_{ij}| \\ &= \sum\limits_{j \in N_1(M)\backslash\{i\}}|m_{ij}| + \sum\limits_{j \in N_2(M)\backslash\{i\}}\frac{p_j^{(k-2)}(M)}{|m_{jj}|}|m_{ij}| + \varepsilon|m_{ii}| - \sum\limits_{j \in N_1(M)\backslash\{i\}}|m_{ij}| - \sum\limits_{j \in N_2(M)\backslash\{i\}}\left(\frac{p_j^{(k-1)}(M)}{|m_{jj}|} + \varepsilon\right)|m_{ij}| \\ &= \varepsilon\left(|m_{ii}| - \sum\limits_{j \in N_2(M)\backslash\{i\}}|m_{ij}|\right) + \sum\limits_{j \in N_2(M)\backslash\{i\}}\frac{p_j^{(k-2)}(M) - p_j^{(k-1)}(M)}{|m_{jj}|}|m_{ij}| = V_i. \end{aligned} $

    Therefore, we get

$ ||M^{-1}||_{\infty} \leq \frac{\max\left\{1, \; \max\limits_{i \in N_2(M)}\frac{p_i^{(k-1)}(M)}{|m_{ii}|} + \varepsilon\right\}}{\min\left\{\min\limits_{i \in N_1(M)} U_i, \; \min\limits_{i \in N_2(M)} V_i\right\}}. $

    The proof is completed.
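For completeness, a sketch of how the bound of Theorem 8 can be evaluated numerically is given below (illustrative code only; it assumes $ M $ is an $ {SDD_k} $ matrix given as a NumPy array, that $ k \geq 2 $, and that the supplied eps lies in the admissible interval of the theorem; the function name is ours).

```python
# A minimal sketch evaluating the upper bound of Theorem 8; the helper
# quantities follow the notation U_i, V_i of the theorem.
import numpy as np

def theorem8_bound(M, k, eps):
    A = np.abs(M)
    d = np.diag(A)
    r = A.sum(axis=1) - d
    N1 = d <= r
    N2 = ~N1
    Aoff = A - np.diag(d)

    # p^{(k-2)} and p^{(k-1)} from Definition 1
    w = np.where(N2, r / d, 1.0)
    p_prev = Aoff @ w                      # p^{(0)}
    for _ in range(k - 2):
        w = np.where(N2, p_prev / d, 1.0)
        p_prev = Aoff @ w                  # ends at p^{(k-2)}
    w = np.where(N2, p_prev / d, 1.0)
    p_last = Aoff @ w                      # p^{(k-1)}

    s1 = Aoff[:, N1].sum(axis=1)           # sum_{j in N1\{i}} |m_ij|
    s2 = Aoff[:, N2].sum(axis=1)           # sum_{j in N2\{i}} |m_ij|
    U = d - s1 - Aoff[:, N2] @ (p_last[N2] / d[N2] + eps)
    V = eps * (d - s2) + Aoff[:, N2] @ ((p_prev[N2] - p_last[N2]) / d[N2])

    num = max(1.0, (p_last[N2] / d[N2]).max() + eps)
    den = min(U[N1].min() if N1.any() else np.inf, V[N2].min())
    return num / den
```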

    From Theorem 8, it is easy to obtain the following result.

    Corollary 1. Let $ M = (m_{ij}) \in {C^{n \times n}}(n \geq 2) $ be an $ {SDD} $ matrix. Then

$ ||M^{-1}||_{\infty} \leq \frac{\max\limits_{i \in N}\frac{p_i^{(k-1)}(M)}{|m_{ii}|} + \varepsilon}{\min\limits_{i \in N} Z_i}, $

    where $ k\geq 2 $,

$ Z_i = \varepsilon(|m_{ii}| - r_i(M)) + \sum\limits_{j \in N\backslash\{i\}}\frac{p_j^{(k-2)}(M) - p_j^{(k-1)}(M)}{|m_{jj}|}|m_{ij}|, $

    and

$ 0 < \varepsilon < \min\limits_{i \in N}\frac{|m_{ii}| - p_i^{(k-1)}(M)}{r_i(M)}. $

    Example 1. Consider the $ n\times n $ matrix:

$ M = \begin{pmatrix} 4 & 2 & 1.5 & & & \\ 1.5 & 4 & 2 & & & \\ & 4 & 8 & 2 & & \\ & & \ddots & \ddots & \ddots & \\ & & & 4 & 8 & 2 \\ & & & & 3.5 & 4 \end{pmatrix}. $

Take $ n = 20 $. It is easy to verify that $ M $ is an $ {SDD} $ matrix.

    By calculations, we have that for $ k = 2 $,

$ \max\limits_{i \in N}\frac{p_i^{(1)}(M)}{|m_{ii}|} + \varepsilon_1 = 0.5859 + \varepsilon_1, \; \; \; \; \min\limits_{i \in N} Z_i = 0.4414 + 0.5\varepsilon_1, \; \; \; \; 0 < \varepsilon_1 < 0.4732. $

    For $ k = 4 $,

$ \max\limits_{i \in N}\frac{p_i^{(3)}(M)}{|m_{ii}|} + \varepsilon_2 = 0.3845 + \varepsilon_2, \; \; \; \; \min\limits_{i \in N} Z_i = 0.2959 + 0.5\varepsilon_2, \; \; \; \; 0 < \varepsilon_2 < 0.7034. $

    For $ k = 6 $,

$ \max\limits_{i \in N}\frac{p_i^{(5)}(M)}{|m_{ii}|} + \varepsilon_3 = 0.2504 + \varepsilon_3, \; \; \; \; \min\limits_{i \in N} Z_i = 0.1733 + 0.5\varepsilon_3, \; \; \; \; 0 < \varepsilon_3 < 0.8567. $

    For $ k = 8 $,

$ \max\limits_{i \in N}\frac{p_i^{(7)}(M)}{|m_{ii}|} + \varepsilon_4 = 0.1624 + \varepsilon_4, \; \; \; \; \min\limits_{i \in N} Z_i = 0.0990 + 0.5\varepsilon_4, \; \; \; \; 0 < \varepsilon_4 < 0.9572. $

    So, when $ k = 2, 4, 6, 8 $, by Corollary 1 and Theorem 1, we can get the upper bounds for $ ||{M^{ - 1}}|{|_\infty } $, see Table 1. Thus,

    $ ||{M^{ - 1}}|{|_\infty } \leq \frac{{0.5859 + {\varepsilon _1}}}{{0.4414 + 0.5{\varepsilon _1}}} < 2, \; \; \; \; \; {\rm{ }}||{M^{ - 1}}|{|_\infty } \leq \frac{{0.3845 + {\varepsilon _2}}}{{0.2959 + 0.5{\varepsilon _2}}} < 2, $
    Table 1.  The bounds in Corollary $ 1 $ and Theorem $ 1 $.
    $ k $ 2 4 6 8
    Cor $ 1 $ $ \frac{{0.5859 + {\varepsilon _1}}}{{0.4414 + 0.5{\varepsilon _1}}} $ $ \frac{{0.3845 + {\varepsilon _2}}}{{0.2959 + 0.5{\varepsilon _2}}} $ $ \frac{{0.2504 + {\varepsilon _3}}}{{0.1733 + 0.5{\varepsilon _3}}} $ $ \frac{{0.1624 + {\varepsilon _4}}}{{0.0990 + 0.5{\varepsilon _4}}} $
    Th $ 1 $ 2 2 2 2


    and

    $ ||{M^{ - 1}}|{|_\infty }\leq \frac{{{\rm{0}}{\rm{.2504}} + {\varepsilon _3}}}{{{\rm{0}}{\rm{.1733}} + 0.5{\varepsilon _3}}} < 2, \; \; \; \; \; {\rm{ }}||{M^{ - 1}}|{|_\infty } \leq \frac{{0.1624 + {\varepsilon _4}}}{{0.0990 + 0.5{\varepsilon _4}}} < 2. $

    Moreover, when $ k = 1 $, by Theorem 2, we have

    $ ||{M^{ - 1}}|{|_\infty } \leq \frac{{0.7188 + {\varepsilon _5}}}{{0.4844 + 0.5{\varepsilon _5}}}, \; \; \; \; \; {\rm{ }}0 < {\varepsilon _5} < 0.3214. $
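The quantities reported above for $ k = 2 $ can be reproduced with a few lines of code (a numerical check, not part of the original paper; the $ 20 \times 20 $ matrix is assembled from the banded pattern displayed at the beginning of the example).

```python
# A small numerical check of Example 1 for k = 2: it reproduces the constants
# appearing in the Corollary 1 bound.
import numpy as np

n = 20
M = np.zeros((n, n))
M[0, :3] = [4, 2, 1.5]
M[1, :3] = [1.5, 4, 2]
for i in range(2, n - 1):
    M[i, i - 1:i + 2] = [4, 8, 2]
M[n - 1, n - 2:] = [3.5, 4]

A = np.abs(M)
d = np.diag(A)
r = A.sum(axis=1) - d
Aoff = A - np.diag(d)

p0 = Aoff @ (r / d)          # p^{(0)} (N1 is empty because M is SDD)
p1 = Aoff @ (p0 / d)         # p^{(1)}

print(np.max(p1 / d))                        # ~0.5859
print(np.min(Aoff @ ((p0 - p1) / d)))        # ~0.4414, the constant part of min Z_i
print(np.min((d - p1) / r))                  # ~0.4732, the upper limit for eps_1
```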

The following Theorem 9 shows that the bound in Corollary 1 is better than that in Theorem 1 in some cases.

    Theorem 9. Let matrix $ M = (m_{ij}) \in {C^{n \times n}}(n \geq 2) $ be an $ {SDD} $ matrix. If there exists $ k \geq 2 $ such that

    $ \mathop {\max }\limits_{i \in N} \frac{{p_i^{(k - 1)}(M)}}{{|{m_{ii}}|}}\mathop {\min}\limits_{i \in N} (|{m_{ii}}| - {r_i}(M)) \le \mathop {\min}\limits_{i \in N} \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(k - 2)}(M) - p_j^{(k - 1)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|} , $

    then

    $ ||{M^{ - 1}}|{|_\infty } \le \frac{{\mathop {\max }\limits_{i \in N} \frac{{p_i^{(k - 1)}(M)}}{{|{m_{ii}}|}} + \varepsilon }}{{\mathop {\min}\limits_{i \in N} {Z_i}}} \le \frac{1}{{\mathop {\min}\limits_{1 \le i \le n} (|{m_{ii}}| - {r_i}(M))}}, $

    where $ {Z_i} $ and $ \varepsilon $ are defined as in Corollary 1, respectively.

    Proof. From the given condition, we have that there exists $ k \geq 2 $ such that

$ \max\limits_{i \in N}\frac{p_i^{(k-1)}(M)}{|m_{ii}|}\min\limits_{i \in N}(|m_{ii}| - r_i(M)) \leq \min\limits_{i \in N}\sum\limits_{j \in N\backslash\{i\}}\frac{p_j^{(k-2)}(M) - p_j^{(k-1)}(M)}{|m_{jj}|}|m_{ij}|, $

    then

$ \max\limits_{i \in N}\frac{p_i^{(k-1)}(M)}{|m_{ii}|}\min\limits_{i \in N}(|m_{ii}| - r_i(M)) + \varepsilon\min\limits_{i \in N}(|m_{ii}| - r_i(M)) \leq \min\limits_{i \in N}\sum\limits_{j \in N\backslash\{i\}}\frac{p_j^{(k-2)}(M) - p_j^{(k-1)}(M)}{|m_{jj}|}|m_{ij}| + \varepsilon\min\limits_{i \in N}(|m_{ii}| - r_i(M)). $

    Thus, we get

$ \begin{aligned} \left(\max\limits_{i \in N}\frac{p_i^{(k-1)}(M)}{|m_{ii}|} + \varepsilon\right)\min\limits_{i \in N}(|m_{ii}| - r_i(M)) &\leq \min\limits_{i \in N}\sum\limits_{j \in N\backslash\{i\}}\frac{p_j^{(k-2)}(M) - p_j^{(k-1)}(M)}{|m_{jj}|}|m_{ij}| + \varepsilon\min\limits_{i \in N}(|m_{ii}| - r_i(M)) \\ &= \min\limits_{i \in N}\sum\limits_{j \in N\backslash\{i\}}\frac{p_j^{(k-2)}(M) - p_j^{(k-1)}(M)}{|m_{jj}|}|m_{ij}| + \min\limits_{i \in N}\left(\varepsilon(|m_{ii}| - r_i(M))\right) \\ &\leq \min\limits_{i \in N}\left(\varepsilon(|m_{ii}| - r_i(M)) + \sum\limits_{j \in N\backslash\{i\}}\frac{p_j^{(k-2)}(M) - p_j^{(k-1)}(M)}{|m_{jj}|}|m_{ij}|\right) = \min\limits_{i \in N} Z_i. \end{aligned} $

    Since $ M $ is an $ {SDD} $ matrix, then

$ |m_{ii}| > r_i(M), \; \; \; \; Z_i > 0, \; \; \; \; \forall i \in N. $

    It's easy to verify that

    $ \frac{{\mathop {\max }\limits_{i \in N} \frac{{p_i^{(k - 1)}(M)}}{{|{m_{ii}}|}} + \varepsilon }}{{\mathop {\min}\limits_{i \in N} {Z_i}}} \le \frac{1}{{\mathop {\min}\limits_{1 \le i \le n} (|{m_{ii}}| - {r_i}(M))}}. $

    Thus, by Corollary 1, it holds that

    $ ||{M^{ - 1}}|{|_\infty } \le \frac{{\mathop {\max }\limits_{i \in N} \frac{{p_i^{(k - 1)}(M)}}{{|{m_{ii}}|}} + \varepsilon }}{{\mathop {\min}\limits_{i \in N} {Z_i}}} \le \frac{1}{{\mathop {\min}\limits_{1 \le i \le n} (|{m_{ii}}| - {r_i}(M))}}. $

    The proof is completed.

    We illustrate Theorem 9 by the following Example 2.

Example 2. Consider again the matrix $ M $ of Example 1. For $ k = 4 $, we have

$ \max\limits_{i \in N}\frac{p_i^{(3)}(M)}{|m_{ii}|}\min\limits_{i \in N}(|m_{ii}| - r_i(M)) = 0.1923 < 0.2959 = \min\limits_{i \in N}\sum\limits_{j \in N\backslash\{i\}}\frac{p_j^{(2)}(M) - p_j^{(3)}(M)}{|m_{jj}|}|m_{ij}|. $

Thus, by Theorem 9, we obtain that for each $ 0 < {\varepsilon _2} < 0.7034 $,

    $ ||{M^{ - 1}}|{|_\infty } \leq \frac{{0.3845 + {\varepsilon _2}}}{{0.2959 + 0.5{\varepsilon _2}}} < 2 = \frac{1}{{\mathop {\min}\limits_{1 \le i \le n} (|{m_{ii}}| - {r_i}(M))}}. $

    However, we find that the upper bounds in Theorems 8 and 9 contain the parameter $ {\varepsilon} $. Next, based on Theorem 7, we will provide new upper bounds for the infinity norm of the inverse matrices of $ {SDD_k} $ matrices, which only depend on the elements of the given matrices.

    Theorem 10. Let $ M = (m_{ij}) \in {C^{n \times n}}(n \geq 2) $ be an $ {SDD_k}(k \geq 2) $ matrix and for each $ i \in {N_2}{\rm{(}}M{\rm{)}} $, $ \sum\limits_{j \ne i, j \in N_2(M)} {|{m_{ij}}|} > 0 $. Then

    $ ||{M^{ - 1}}|{|_\infty } \le \frac{{\mathop {\max }\limits_{i \in N} \frac{{p_i^{(k - 1)}(M)}}{{|{m_{ii}}|}}}}{{\mathop {\min}\limits_{i \in N} \left( {p_i^{(k - 1)}(M) - \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(k - 1)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|} } \right)}}. $

    Proof. By Theorems 7 and 8, we have that there exists a positive diagonal matrix $ X $ such that $ MX $ is an $ {SDD} $ matrix, where $ X $ is defined as in Theorem 7. Thus, it holds that

$ ||M^{-1}||_{\infty} = ||X(X^{-1}M^{-1})||_{\infty} = ||X(MX)^{-1}||_{\infty} \leq ||X||_{\infty}\,||(MX)^{-1}||_{\infty}, $

    and

$ ||X||_{\infty} = \max\limits_{1 \leq i \leq n} x_i = \max\limits_{i \in N}\frac{p_i^{(k-1)}(M)}{|m_{ii}|}. $

    Notice that $ MX $ is an $ {SDD} $ matrix. Thus, by Theorem 1, we get

$ ||(MX)^{-1}||_{\infty} \leq \frac{1}{\min\limits_{1 \leq i \leq n}(|(MX)_{ii}| - r_i(MX))} = \frac{1}{\min\limits_{1 \leq i \leq n}(|m_{ii}x_i| - r_i(MX))} = \frac{1}{\min\limits_{i \in N}\left(p_i^{(k-1)}(M) - \sum\limits_{j \in N\backslash\{i\}}\frac{p_j^{(k-1)}(M)}{|m_{jj}|}|m_{ij}|\right)}. $

    Therefore, we have that

    $ ||{M^{ - 1}}|{|_\infty } \le \frac{{\mathop {\max }\limits_{i \in N} \frac{{p_i^{(k - 1)}(M)}}{{|{m_{ii}}|}}}}{{\mathop {\min}\limits_{i \in N} \left( {p_i^{(k - 1)}(M) - \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(k - 1)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|} } \right)}}. $

    The proof is completed.

    Since $ SDD $ matrices are a subclass of $ SDD_{k} $ matrices, by Theorem 10, we can obtain the following result.

    Corollary 2. Let $ M = (m_{ij}) \in {C^{n \times n}}(n \geq 2) $ be an $ {SDD} $ matrix. If $ {r_i}(M) > 0(\forall i \in N) $, then there exists $ k \geq 2 $ such that

$ ||M^{-1}||_{\infty} \leq \frac{\max\limits_{i \in N}\frac{p_i^{(k-1)}(M)}{|m_{ii}|}}{\min\limits_{i \in N}\sum\limits_{j \in N\backslash\{i\}}\frac{p_j^{(k-2)}(M) - p_j^{(k-1)}(M)}{|m_{jj}|}|m_{ij}|}. $
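The parameter-free bound of Theorem 10 (and hence the bound of Corollary 2) is easy to evaluate; a minimal sketch under the same NumPy conventions as the earlier snippets is given below (illustrative only; the function name is ours).

```python
# A minimal sketch of the parameter-free bound of Theorem 10: it only needs
# the vector p^{(k-1)} and the entries of M.
import numpy as np

def theorem10_bound(M, k):
    A = np.abs(M)
    d = np.diag(A)
    r = A.sum(axis=1) - d
    N2 = d > r
    Aoff = A - np.diag(d)

    w = np.where(N2, r / d, 1.0)
    p = Aoff @ w                       # p^{(0)}
    for _ in range(k - 1):
        w = np.where(N2, p / d, 1.0)
        p = Aoff @ w                   # ends at p^{(k-1)}

    num = np.max(p / d)
    den = np.min(p - Aoff @ (p / d))   # p_i^{(k-1)} - sum_j p_j^{(k-1)}/|m_jj| |m_ij|
    return num / den
```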

    Two examples are given to show the advantage of the bound in Theorem 10.

    Example 3. Consider the following matrix:

    $ M = \left(4012120104.14620233480462023042040 \right). $

It is easy to verify that $ M $ is neither an $ SDD $ matrix, an $ SDD_1 $ matrix, a $ GSDD_1 $ matrix, an $ S $-$ SDD $ matrix, nor a $ CKV $-type matrix. Therefore, we cannot use the error bounds in [1, 8, 9, 18, 20] to estimate $ ||{M^{ - 1}}|{|_\infty } $. However, $ M $ is an $ SDD_2 $ matrix, so by the bound in Theorem 10 we have $ \|{M^{ - 1}}\|_\infty \leq 0.5820 $.

    Example 4. Consider the tri-diagonal matrix $ M\in R^{n\times n} $ arising from the finite difference method for free boundary problems [18], where

$ M = \begin{pmatrix} b+\alpha\sin\frac{1}{n} & c & & & \\ a & b+\alpha\sin\frac{2}{n} & c & & \\ & \ddots & \ddots & \ddots & \\ & & a & b+\alpha\sin\frac{n-1}{n} & c \\ & & & a & b+\alpha\sin 1 \end{pmatrix}. $

Take $ n = 4 $, $ a = 1 $, $ b = 0 $, $ c = 3.7 $ and $ \alpha = 10 $. It is easy to verify that $ M $ is neither an $ SDD $ matrix nor an $ SDD_1 $ matrix. However, one can verify that $ M $ is a $ GSDD_1 $ matrix and an $ SDD_3 $ matrix. By the bound in Theorem 10, we have

    $ \|{M^{ - 1}}\|_\infty \leq 8.2630, $

    while by the bound in Theorem 4, it holds that

$ ||M^{-1}||_{\infty} \leq \frac{\varepsilon}{\min\{2.1488 - \varepsilon, \; 0.3105, \; 2.474\varepsilon - 3.6272\}}, \; \; \; \; \varepsilon \in (1.4661, 2.1488). $

    The following two theorems show that the bound in Corollary 2 is better than that in Theorem 1 in some cases.

    Theorem 11. Let $ M = (m_{ij}) \in {C^{n \times n}}(n \geq 2) $ be an $ {SDD} $ matrix. If $ {r_i}(M) > 0(\forall i \in N) $ and there exists $ k \geq 2 $ such that

    $ \mathop {\min}\limits_{i \in N} \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(k - 2)}(M) - p_j^{(k - 1)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|} \ge \mathop {\min}\limits_{i \in N} (|{m_{ii}}| - {r_i}(M)), $

    then

    $ ||{M^{ - 1}}|{|_\infty } \le \frac{{\mathop {\max }\limits_{i \in N} \frac{{p_i^{(k - 1)}(M)}}{{|{m_{ii}}|}}}}{{\mathop {\min}\limits_{i \in N} \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(k - 2)}(M) - p_j^{(k - 1)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|} }} < \frac{1}{{\mathop {\min}\limits_{1 \le i \le n} (|{m_{ii}}| - {r_i}(M))}}. $

    Proof. Since $ M $ is an $ {SDD} $ matrix, then $ N_1(M) = \emptyset $ and $ M $ is an $ {SDD_k} $ matrix. By the given condition that $ {r_i}(M) > 0(\forall i \in N) $, it holds that

$ |m_{ii}| > r_i(M) > \sum\limits_{j \in N\backslash\{i\}}\frac{r_j(M)}{|m_{jj}|}|m_{ij}| = p_i^{(0)}(M) > 0, \; \; \forall i \in N, \qquad p_i^{(0)}(M) = \sum\limits_{j \in N\backslash\{i\}}\frac{r_j(M)}{|m_{jj}|}|m_{ij}| > \sum\limits_{j \in N\backslash\{i\}}\frac{p_j^{(0)}(M)}{|m_{jj}|}|m_{ij}| = p_i^{(1)}(M) > 0, \; \; \forall i \in N. $

    Similarly, we can obtain that

$ |m_{ii}| > r_i(M) > p_i^{(0)}(M) > \cdots > p_i^{(k-1)}(M) > 0, \; \; \; \; \forall i \in N, $

    that is,

    $ \mathop {\max }\limits_{i \in N} \frac{{p_i^{(k - 1)}(M)}}{{|{m_{ii}}|}} < 1. $

    Since there exists $ k \geq 2 $ such that

    $ \mathop {\min}\limits_{i \in N} \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(k - 2)}(M) - p_j^{(k - 1)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|} \ge \mathop {\min}\limits_{i \in N} (|{m_{ii}}| - {r_i}(M)), $

    then we have

    $ \frac{{\mathop {\max }\limits_{i \in N} \frac{{p_i^{(k - 1)}(M)}}{{|{m_{ii}}|}}}}{{\mathop {\min}\limits_{i \in N} \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(k - 2)}(M) - p_j^{(k - 1)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|} }} < \frac{1}{{\mathop {\min}\limits_{1 \le i \le n} (|{m_{ii}}| - {r_i}(M))}}. $

    Thus, from Corollary 2, we get

    $ ||{M^{ - 1}}|{|_\infty } \le \frac{{\mathop {\max }\limits_{i \in N} \frac{{p_i^{(k - 1)}(M)}}{{|{m_{ii}}|}}}}{{\mathop {\min}\limits_{i \in N} \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(k - 2)}(M) - p_j^{(k - 1)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|} }} < \frac{1}{{\mathop {\min}\limits_{1 \le i \le n} (|{m_{ii}}| - {r_i}(M))}}. $

    The proof is completed.

We illustrate Theorem 11 by the following Example 5.

    Example 5. Consider the matrix $ M = (m_{ij}) \in {C^{n \times n}}(n \geq 2) $, where

$ M = \begin{pmatrix} 4 & 3 & 0.9 & & & \\ 1 & 6 & 2 & & & \\ & 2 & 5 & 2 & & \\ & & \ddots & \ddots & \ddots & \\ & & & 1 & 6 & 2 \\ & & & 0.9 & 3 & 4 \end{pmatrix}. $

Take $ n = 20 $. It is easy to check that $ M $ is an $ {SDD} $ matrix. Let

    $ {l_k} = \mathop {\min}\limits_{i \in N} \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(k - 2)}(M) - p_j^{(k - 1)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|}, \; \; m = \mathop {\min}\limits_{i \in N} (|{m_{ii}}| - {r_i}(M)). $

    By calculations, we have

$ l_2 = 0.2692 > 0.1 = m, \; \; l_3 = 0.2567 > 0.1 = m, \; \; l_4 = 0.1788 > 0.1 = m, \; \; l_5 = 0.1513 > 0.1 = m, \; \; l_6 = 0.1037 > 0.1 = m. $

    Thus, when $ k = 2, 3, 4, 5, 6 $, the matrix $ M $ satisfies the conditions of Theorem 11. By Theorems 1 and 11, we can derive the upper bounds for $ ||{M^{ - 1}}|{|_\infty } $, see Table 2. Meanwhile, when $ k = 1 $, by Theorem 3, we get that $ ||{M^{ - 1}}|{|_\infty } \leq 1.6976. $

    Table 2.  The bounds in Theorem $ 11 $ and Theorem $ 1 $.
    $ k $ 2 3 4 5 6
    Th 11 1.9022 1.5959 1.8332 1.7324 2.0214
    Th 1 10 10 10 10 10


    From Table 2, we can see that the bounds in Theorem 11 are better than that in Theorems 1 and 3 in some cases.
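The quantities $ m $ and $ l_2 $ used above can be checked numerically (an illustrative sketch, not part of the original paper; the $ 20 \times 20 $ matrix is assembled from the banded pattern displayed in Example 5).

```python
# A small numerical check of Example 5: it reproduces m and l_2.
import numpy as np

n = 20
M = np.zeros((n, n))
M[0, :3] = [4, 3, 0.9]
M[1, :3] = [1, 6, 2]
for i in range(2, n - 2):
    M[i, i - 1:i + 2] = [2, 5, 2]
M[n - 2, n - 3:] = [1, 6, 2]
M[n - 1, n - 3:] = [0.9, 3, 4]

A = np.abs(M)
d = np.diag(A)
r = A.sum(axis=1) - d
Aoff = A - np.diag(d)

print((d - r).min())                            # m = 0.1

p_prev = Aoff @ (r / d)                         # p^{(0)} (N1 is empty, M is SDD)
p_last = Aoff @ (p_prev / d)                    # p^{(1)}
print((Aoff @ ((p_prev - p_last) / d)).min())   # l_2 ~ 0.2692
```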

    Theorem 12. Let $ M = (m_{ij}) \in {C^{n \times n}}(n \geq 2) $ be an $ {SDD} $ matrix. If $ {r_i}(M) > 0(\forall i \in N) $ and there exists $ k \geq 2 $ such that

$ \max\limits_{i \in N}\frac{p_i^{(k-1)}(M)}{|m_{ii}|}\min\limits_{i \in N}(|m_{ii}| - r_i(M)) \leq \min\limits_{i \in N}\sum\limits_{j \in N\backslash\{i\}}\frac{p_j^{(k-2)}(M) - p_j^{(k-1)}(M)}{|m_{jj}|}|m_{ij}| < \min\limits_{i \in N}(|m_{ii}| - r_i(M)), $

    then

    $ ||{M^{ - 1}}|{|_\infty } \le \frac{{\mathop {\max }\limits_{i \in N} \frac{{p_i^{(k - 1)}(M)}}{{|{m_{ii}}|}}}}{{\mathop {\min}\limits_{i \in N} \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(k - 2)}(M) - p_j^{(k - 1)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|} }} \le \frac{1}{{\mathop {\min}\limits_{1 \le i \le n} (|{m_{ii}}| - {r_i}(M))}}. $

    Proof. By Theorem 7 and the given condition that $ {r_i}(M) > 0(\forall i \in N) $, it is easy to get that

    $ \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(k - 2)}(M) - p_j^{(k - 1)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|} > 0, \; \; \; \forall i \in N. $

    From the condition that there exists $ k \geq 2 $ such that

    $ \mathop {\max }\limits_{i \in N} \frac{{p_i^{(k - 1)}(M)}}{{|{m_{ii}}|}}\mathop {\min}\limits_{i \in N} (|{m_{ii}}| - {r_i}(M)) \le \mathop {\min}\limits_{i \in N} \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(k - 2)}(M) - p_j^{(k - 1)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|}, $

    we have

    $ \frac{{\mathop {\max }\limits_{i \in N} \frac{{p_i^{(k - 1)}(M)}}{{|{m_{ii}}|}}}}{{\mathop {\min}\limits_{i \in N} \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(k - 2)}(M) - p_j^{(k - 1)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|} }} \le \frac{1}{{\mathop {\min}\limits_{1 \le i \le n} (|{m_{ii}}| - {r_i}(M))}}. $

    Thus, from Corollary 2, it holds that

    $ ||{M^{ - 1}}|{|_\infty } \le \frac{{\mathop {\max }\limits_{i \in N} \frac{{p_i^{(k - 1)}(M)}}{{|{m_{ii}}|}}}}{{\mathop {\min}\limits_{i \in N} \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(k - 2)}(M) - p_j^{(k - 1)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|} }} \le \frac{1}{{\mathop {\min}\limits_{1 \le i \le n} (|{m_{ii}}| - {r_i}(M))}}. $

    The proof is completed.

    Next, we illustrate Theorem 12 by the following Example 6.

    Example 6. Consider the tri-diagonal matrix $ M = (m_{ij}) \in {C^{n \times n}}(n \geq 2) $, where

$ M = \begin{pmatrix} 3 & 2.5 & & & & \\ 1.2 & 4 & 2 & & & \\ & 2.8 & 5 & 1 & & \\ & & \ddots & \ddots & \ddots & \\ & & & 1.2 & 4 & 2 \\ & & & & 2.5 & 3 \end{pmatrix}. $

Take $ n = 20 $. It is easy to verify that $ M $ is an $ {SDD} $ matrix.

    By calculations, we have that for $ k = 2 $,

$ \max\limits_{i \in N}\frac{p_i^{(1)}(M)}{|m_{ii}|}\min\limits_{i \in N}(|m_{ii}| - r_i(M)) = 0.2686 < \min\limits_{i \in N}\sum\limits_{j \in N\backslash\{i\}}\frac{p_j^{(0)}(M) - p_j^{(1)}(M)}{|m_{jj}|}|m_{ij}| = 0.3250 < 0.5 = \min\limits_{i \in N}(|m_{ii}| - r_i(M)). $

    For $ k = 5 $, we get

$ \max\limits_{i \in N}\frac{p_i^{(4)}(M)}{|m_{ii}|}\min\limits_{i \in N}(|m_{ii}| - r_i(M)) = 0.1319 < \min\limits_{i \in N}\sum\limits_{j \in N\backslash\{i\}}\frac{p_j^{(3)}(M) - p_j^{(4)}(M)}{|m_{jj}|}|m_{ij}| = 0.1685 < 0.5 = \min\limits_{i \in N}(|m_{ii}| - r_i(M)). $

    For $ k = 10 $, it holds that

$ \max\limits_{i \in N}\frac{p_i^{(9)}(M)}{|m_{ii}|}\min\limits_{i \in N}(|m_{ii}| - r_i(M)) = 0.0386 < \min\limits_{i \in N}\sum\limits_{j \in N\backslash\{i\}}\frac{p_j^{(8)}(M) - p_j^{(9)}(M)}{|m_{jj}|}|m_{ij}| = 0.0485 < 0.5 = \min\limits_{i \in N}(|m_{ii}| - r_i(M)). $

Thus, for $ k = 2, 5, 10 $, the matrix $ M $ satisfies the conditions of Theorem 12, and from Theorems 12 and 1 we get the upper bounds for $ ||{M^{ - 1}}|{|_\infty } $; see Table 3. Meanwhile, when $ k = 1 $, by Theorem 3, we have that $ ||{M^{ - 1}}|{|_\infty } \leq 1.7170. $

    Table 3.  The bounds in Theorem $ 12 $ and Theorem $ 1 $.
    $ k $ 2 5 10
    Th $ 12 $ 1.6530 1.5656 1.5925
    Th $ 1 $ 2 2 2


    From Table 3, we can see that the bound in Theorem 12 is sharper than that in Theorems 1 and 3 in some cases.

In this paper, $ {SDD_k} $ matrices are proposed as a new subclass of $ H $-matrices, which includes $ {SDD} $ matrices and $ {SDD_1} $ matrices, and some properties of $ {SDD_k} $ matrices are obtained. Meanwhile, some new upper bounds of the infinity norm of the inverses of $ {SDD} $ matrices and $ {SDD_k} $ matrices are presented, and we prove that the new bounds are better than some existing bounds in certain cases. Some numerical examples are also provided to show the validity of the new results. In the future, based on the proposed infinity norm bounds, we will explore computable error bounds of linear complementarity problems for $ SDD_k $ matrices.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This research is supported by Guizhou Provincial Science and Technology Projects (20191161), and the Natural Science Research Project of Department of Education of Guizhou Province (QJJ2023062, QJJ2023063).

The authors declare that they have no conflict of interest.
