Research article

Family of right conoid hypersurfaces with light-like axis in Minkowski four-space

  • Received: 08 April 2024 Accepted: 29 May 2024 Published: 04 June 2024
  • MSC : 53A35, 53C42

  • In the realm of the four-dimensional Minkowski space L4, the focus is on hypersurfaces classified as right conoids and defined by light-like axes. Matrices associated with the fundamental form, Gauss map, and shape operator, all specifically tailored for these hypersurfaces, are currently undergoing computation. The intrinsic curvatures of these hypersurfaces are determined using the Cayley-Hamilton theorem. The conditions of minimality are addressed by the analysis. The Laplace-Beltrami operator for such hypersurfaces is computed, accompanied by illustrative examples aimed at fostering a more profound understanding of the involved mathematical principles. Additionally, scrutiny is applied to the umbilical condition, and the introduction of the Willmore functional for these hypersurfaces is presented.

    Citation: Yanlin Li, Erhan Güler, Magdalena Toda. Family of right conoid hypersurfaces with light-like axis in Minkowski four-space[J]. AIMS Mathematics, 2024, 9(7): 18732-18745. doi: 10.3934/math.2024911




    The class of H-matrices (named in honor of Hadamard) has proved to be of great value in many classical problems of numerical linear algebra. This matrix class provides a framework for extensive research on the stability of dynamical systems, with emphasis on matrix spectra and spectra-related parameters. Information on eigenvalue localization within a prescribed area of the complex plane, as a sufficient condition for a certain type of stability, can be obtained from the analysis of special H-matrices. Special H-matrices also have their role in dealing with linear complementarity problems, both in questions of existence and uniqueness of solutions and in error analysis.

    H-matrices are closely related to the well-known Minkowski or M-matrices. A matrix $A=[a_{ij}]\in\mathbb{C}^{n,n}$ is an H-matrix if its comparison matrix $\langle A\rangle=[m_{ij}]$ defined by

    $$\langle A\rangle=[m_{ij}]\in\mathbb{C}^{n,n},\qquad m_{ij}=\begin{cases}\;\;\,|a_{ii}|,& i=j,\\ -|a_{ij}|,& i\neq j,\end{cases}$$

    is an M-matrix, i.e., $\langle A\rangle^{-1}\ge 0$; for more details, see [3,14]. H-matrices are also related to P-matrices, real square matrices with all principal minors positive: a real H-matrix with positive diagonal entries is a P-matrix. In the literature, H-matrices also appear as GSDD matrices, or generalized strictly diagonally dominant matrices, as they generalize strictly diagonally dominant (SDD) matrices.

    Motivation for the research on special H-matrices comes from the fact that diagonal dominance and related matrix properties play an important role in matrix splitting methods and in the convergence analysis of iterative procedures for solving large sparse systems of linear equations, in eigenvalue localization problems, in norm and condition number estimation for the inverse, and in linear complementarity problems. As the SDD condition is very restrictive, results developed for SDD matrices, such as the Varah bound for the norm of the inverse (see [46]), cannot be applied to wider matrix classes. Therefore, we are motivated to consider modifications of the SDD condition and to obtain new estimates applicable to new classes of matrices.

    In this paper, we obtain new bounds for the norm of the inverse that work in some cases when well-known bounds cannot be applied. Also, for some matrices, both new bounds and some already known bounds are applicable, and we show through numerical examples that new bounds can give tighter results. As the condition number of a matrix is defined as

    $$\kappa(A)=\|A\|\,\|A^{-1}\|,$$

    results on norm bounds directly imply estimates for the condition number. The condition number helps us recognize ill-conditioned problems, in which a small relative change in the input data produces a large relative change in the output. Therefore, upper estimates for the condition number have important applications in engineering, in the control of perturbation effects.

    Another important benefit of the new estimates for the inverse is the possibility of applying the new results to block H-matrices. It is important to emphasize that block H-matrices do not have to be H-matrices in the classical sense. In these cases, bounds developed for H-matrices cannot be applied, but the block approach can still work. This is of great importance in ecological models describing interactions between several populations (see [24]), where the self-interactions in the community matrices are equal to zero. Matrices with zero diagonal entries, or 'hollow matrices', do not belong to the class of H-matrices in the classical sense, but they can be block H-matrices with respect to some partition of the index set, and they can then be approached using the tools of H-matrix theory.

    New norm estimations are further applied in linear complementarity problems. Also, we obtain eigenvalue localization for the Schur complements of special H-matrices that can be applied in convergence analysis in Schur-based iterative methods.

    The concept of diagonal dominance and the related subclasses of H-matrices are widely applied in the analysis of stability and other dynamical properties of complex real systems, in wireless sensor networks, structural engineering, and economics.

    The paper by Fiedler and Pták (see [17]) brought a characterization of H-matrices through their scaling relation to SDD matrices. Fiedler and Pták proved that a matrix $A=[a_{ij}]\in\mathbb{C}^{n,n}$ is an H-matrix if, and only if, there exists a diagonal non-singular matrix W such that AW is SDD. This definition of (non-singular) H-matrices is also known as the 'scaling characterization', while the diagonal matrix W (which can be chosen to have all positive diagonal entries) is called the 'scaling matrix'. A complex square matrix $A=[a_{ij}]\in\mathbb{C}^{n,n}$ is an SDD matrix if, for all indices $i\in N$,

    $$|a_{ii}|>r_i(A)=\sum_{j\in N,\,j\neq i}|a_{ij}|.$$
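The SDD test translates into a few lines of code. The following sketch (plain Python; the matrix is our own illustration, not from the paper) computes the deleted row sums $r_i(A)$ and checks the condition above.

```python
def row_sums(A):
    """Deleted absolute row sums r_i(A) = sum over j != i of |a_ij|."""
    return [sum(abs(v) for j, v in enumerate(row) if j != i)
            for i, row in enumerate(A)]

def is_sdd(A):
    """Strict diagonal dominance: |a_ii| > r_i(A) for every index i."""
    return all(abs(A[i][i]) > ri for i, ri in enumerate(row_sums(A)))

# Illustrative matrix: SDD, since 4 > 2, 3 > 1, 5 > 2.
M = [[4, 1, 1],
     [0, 3, 1],
     [1, 1, 5]]
print(row_sums(M), is_sdd(M))
```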

    While the SDD condition is expressed in a very elegant manner and easily checkable with low calculation cost, identification of H-matrices was never that simple. In literature, there are various iterative or direct criteria for recognizing special H-matrices; see [4,10,15,16,25,26,30,42]. Through slight relaxations of the SDD condition, using different approaches, new special subclasses of H-matrices were defined. These new conditions could be seen as different ways of 'breaking the SDD condition'.

    Although the scaling characterization connects H-matrices to SDD matrices through a very simple diagonal scaling transformation, in most cases these scaling relations are unknown to us, as we are not able to easily determine the scaling matrix. In [8], new criteria for identifying non-singular H-matrices were presented, based on a special form of scaling. Motivated by these results, in this paper we introduce new criteria for identifying special H-matrices, based on a special scaling relation of the new class to the class of Nekrasov matrices; see [25,34].

    It is worth mentioning that scaling methods can be interpreted in different ways and prove to be useful in a wide range of practical applications, when the purpose of scaling is to obtain prescribed row and column sums in a nonnegative matrix; see [2,3]. In the analysis of budget allocation problems, transportation problems, Leontief input-output systems, Markov chains, and related mathematical models of real-life systems, the concept of scaling plays an important role.

    In the remainder of this section, we recall well-known criteria for identifying some special H-matrices. In Section 2, we introduce a new subclass of H-matrices and discuss relation of the new class to well-known matrix classes. In Section 3, we present possible applications of the new class: Estimation of the infinity norm of the inverse, bounds for minimal singular value, error estimation for linear complementarity problems, and preliminary eigenvalue localizations for Schur complements of matrices in the new class. The fourth section consists of concluding remarks.

    In [8], a scaling-based condition for identifying non-singular H-matrices was given as follows. Let $A=[a_{ij}]\in\mathbb{C}^{n,n}$, $n\ge 2$, be a matrix with nonzero diagonal entries. Denote

    $$R_i(A)=\sum_{k\in N\setminus\{i\}}\frac{r_k(A)}{|a_{kk}|}\,|a_{ik}|,\qquad i\in N.$$

    If

    $$r_i(A)\,r_j(A)>R_i(A)\,R_j(A)\quad\text{for all } i,j\in N,\ i\neq j, \qquad (1.1)$$

    then A is an H-matrix. In further consideration, we say that a matrix is an O-scal matrix if it satisfies condition (1.1). The proof of the statement that O-scal matrices are non-singular H-matrices, given in [8], relies on the special scaling relation between O-scal matrices and the well-known Ostrowski matrices; see [41]. Recall that a matrix $A=[a_{ij}]\in\mathbb{C}^{n,n}$, $n\ge 2$, is an Ostrowski matrix if

    $$|a_{ii}|\,|a_{jj}|>r_i(A)\,r_j(A)\quad\text{for all } i,j\in N,\ i\neq j. \qquad (1.2)$$

    In [8], it was proved that for an O-scal matrix A and the diagonal matrix

    $$Q=\operatorname{diag}(q_k),\qquad q_k=\frac{r_k(A)}{|a_{kk}|},\quad k=1,\dots,n, \qquad (1.3)$$

    the matrix AQ is an Ostrowski matrix.
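Both condition (1.1) and the scaling claim can be checked numerically. The sketch below (plain Python; helper names and the test matrix are our own, not from the paper) verifies that a small O-scal matrix becomes Ostrowski after column scaling by Q from (1.3).

```python
def row_sums(A):
    return [sum(abs(v) for j, v in enumerate(row) if j != i)
            for i, row in enumerate(A)]

def R_sums(A):
    """R_i(A) = sum over k != i of (r_k(A)/|a_kk|) * |a_ik|."""
    r, n = row_sums(A), len(A)
    return [sum(r[k] / abs(A[k][k]) * abs(A[i][k]) for k in range(n) if k != i)
            for i in range(n)]

def is_oscal(A):
    """Condition (1.1): r_i r_j > R_i R_j for all i != j."""
    r, R, n = row_sums(A), R_sums(A), len(A)
    return all(r[i] * r[j] > R[i] * R[j]
               for i in range(n) for j in range(n) if i != j)

def is_ostrowski(A):
    """Condition (1.2): |a_ii||a_jj| > r_i r_j for all i != j."""
    r, n = row_sums(A), len(A)
    return all(abs(A[i][i]) * abs(A[j][j]) > r[i] * r[j]
               for i in range(n) for j in range(n) if i != j)

def scale_by_Q(A):
    """Column scaling AQ with q_k = r_k(A)/|a_kk|, as in (1.3)."""
    r, n = row_sums(A), len(A)
    return [[A[i][k] * r[k] / abs(A[k][k]) for k in range(n)] for i in range(n)]

# Illustrative O-scal matrix: AQ should come out Ostrowski.
M = [[3, 0, 1],
     [1, 4, 1],
     [0, 2, 5]]
print(is_oscal(M), is_ostrowski(scale_by_Q(M)))
```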

    Now we recall Nekrasov matrices; see [25,34]. Consider the recursively defined row sums:

    $$h_1(A)=\sum_{j\neq 1}|a_{1j}|,\qquad h_i(A)=\sum_{j=1}^{i-1}|a_{ij}|\,\frac{h_j(A)}{|a_{jj}|}+\sum_{j=i+1}^{n}|a_{ij}|,\quad i=2,3,\dots,n.$$

    A matrix $A=[a_{ij}]\in\mathbb{C}^{n,n}$, $n\ge 2$, is a Nekrasov matrix if, for each $i\in N$, it holds that

    $$|a_{ii}|>h_i(A).$$
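The recursion for the Nekrasov row sums is straightforward to implement; the following sketch (plain Python, with an illustrative matrix of our own) shows a matrix whose second row fails the SDD condition but which is nevertheless Nekrasov.

```python
def nekrasov_sums(A):
    """Recursive Nekrasov row sums h_i(A)."""
    n = len(A)
    h = [sum(abs(v) for v in A[0][1:])]  # h_1(A)
    for i in range(1, n):
        h.append(sum(abs(A[i][j]) * h[j] / abs(A[j][j]) for j in range(i))
                 + sum(abs(A[i][j]) for j in range(i + 1, n)))
    return h

def is_nekrasov(A):
    return all(abs(A[i][i]) > hi for i, hi in enumerate(nekrasov_sums(A)))

# Illustrative matrix: row 2 is not SDD (|2| = 1 + 1),
# yet h = [1, 1.5, 2.25], so the matrix is Nekrasov.
M = [[2, 1, 0],
     [1, 2, 1],
     [0, 3, 4]]
print(nekrasov_sums(M), is_nekrasov(M))
```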

    In literature, there are different propositions by different authors on how to construct a scaling matrix for the given Nekrasov matrix. We recall a construction presented in [45].

    Theorem 1.1. [45] Let $A=[a_{ij}]\in\mathbb{C}^{n,n}$ be a Nekrasov matrix with all Nekrasov row sums nonzero. Then, for a diagonal positive matrix $Y=\operatorname{diag}(y_i)$, where $y_i=\varepsilon_i\,\frac{h_i(A)}{|a_{ii}|}$, $i\in N$, and $(\varepsilon_i)_{i=1}^{n}$ is an increasing sequence of numbers with $\varepsilon_1=1$ and $\varepsilon_i\in\bigl(1,\frac{|a_{ii}|}{h_i(A)}\bigr)$, $i=2,\dots,n$, the matrix AY is an SDD matrix.

    In [42], Peña presented a class of matrices named SDD1 matrices, via a condition that generalizes the SDD condition by imposing a weaker requirement on the rows that are not SDD.

    Definition 1.2. A matrix $A=[a_{ij}]\in\mathbb{C}^{n,n}$, $n\ge 2$, is an SDD1 matrix if

    $$|a_{ii}|>p_i(A),\qquad i\in N_A, \qquad (1.4)$$

    where

    $$p_i(A)=\sum_{j\in N_A\setminus\{i\}}|a_{ij}|+\sum_{j\in \overline{N}_A\setminus\{i\}}\frac{r_j(A)}{|a_{jj}|}\,|a_{ij}|,$$

    $$N_A=\{i\in N \,:\, |a_{ii}|\le r_i(A)\}\qquad\text{and}\qquad \overline{N}_A=\{i\in N \,:\, |a_{ii}|>r_i(A)\}.$$
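As a sketch (plain Python; the matrix is our own illustration), the SDD1 condition only needs to be verified on the non-SDD rows collected in $N_A$:

```python
def row_sums(A):
    return [sum(abs(v) for j, v in enumerate(row) if j != i)
            for i, row in enumerate(A)]

def is_sdd1(A):
    """SDD1 condition (1.4): |a_ii| > p_i(A) for every i in N_A."""
    n, r = len(A), row_sums(A)
    NA = {i for i in range(n) if abs(A[i][i]) <= r[i]}

    def p(i):
        return (sum(abs(A[i][j]) for j in NA if j != i)
                + sum(r[j] / abs(A[j][j]) * abs(A[i][j])
                      for j in range(n) if j not in NA and j != i))

    return all(abs(A[i][i]) > p(i) for i in NA)

# Row 1 fails SDD (|2| = 1 + 1), but p_1 = 1/3 + 2/5 < 2, so M is SDD1.
M = [[2, 1, 1],
     [0, 3, 1],
     [1, 1, 5]]
print(is_sdd1(M))
```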

    As SDD1 matrices have been researched recently by different authors, we discuss relations of the SDD1 condition to the new criteria presented in Section 2.

    The well-known classes of Ostrowski (see [41]) and Dashnic–Zusmanovich matrices (see [15,16]) were defined based on a special treatment of one index in the index set; Nekrasov matrices (see [26,34,44]) were introduced via recursively defined row sums and further generalized through conditions involving permutations of the index set (see [10]); while SDD1 matrices (see [42]) involve a new type of row sums. All of these conditions (i.e., matrix classes) were further applied in estimations of the norm of the inverse matrix (see [13,18,28,29,37,38]), estimations of errors in linear complementarity problems (see [1,20,21,22,33,38]), eigenvalue localizations (see [47]), and analysis of Schur complements (see [9,11,12,39]).

    In this section, we introduce new conditions defining a subclass of non-singular H-matrices. These new criteria are obtained as scaling modifications of the Nekrasov condition.

    Given a matrix $A=[a_{ij}]\in\mathbb{C}^{n,n}$, $n\ge 2$, with nonzero diagonal entries, let

    $$H_1(A)=\sum_{j=2}^{n}|a_{1j}|\,\frac{h_j(A)}{|a_{jj}|},\qquad H_i(A)=\sum_{j=1}^{i-1}|a_{ij}|\,\frac{H_j(A)}{|a_{jj}|}+\sum_{j=i+1}^{n}|a_{ij}|\,\frac{h_j(A)}{|a_{jj}|},\quad i=2,\dots,n.$$

    Definition 2.1. Given a matrix $A=[a_{ij}]\in\mathbb{C}^{n,n}$, $n\ge 2$, with nonzero diagonal entries, we say that A is an N-scal matrix if

    $$h_i(A)>H_i(A)\quad\text{for all } i\in N.$$
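The sums $H_i(A)$ follow the same recursive pattern as the Nekrasov sums. A sketch (plain Python; the matrix is our own illustration) of the N-scal test:

```python
def nekrasov_sums(A):
    n = len(A)
    h = [sum(abs(v) for v in A[0][1:])]
    for i in range(1, n):
        h.append(sum(abs(A[i][j]) * h[j] / abs(A[j][j]) for j in range(i))
                 + sum(abs(A[i][j]) for j in range(i + 1, n)))
    return h

def nscal_sums(A):
    """The sums H_i(A), built from h_j(A) as in Definition 2.1."""
    n, h = len(A), nekrasov_sums(A)
    H = [sum(abs(A[0][j]) * h[j] / abs(A[j][j]) for j in range(1, n))]
    for i in range(1, n):
        H.append(sum(abs(A[i][j]) * H[j] / abs(A[j][j]) for j in range(i))
                 + sum(abs(A[i][j]) * h[j] / abs(A[j][j]) for j in range(i + 1, n)))
    return H

def is_nscal(A):
    h, H = nekrasov_sums(A), nscal_sums(A)
    return all(hi > Hi for hi, Hi in zip(h, H))

# For this matrix, h = [1, 1.5, 2.25] and H = [0.75, 0.9375, 1.40625],
# so h_i > H_i for every i and M is N-scal.
M = [[2, 1, 0],
     [1, 2, 1],
     [0, 3, 4]]
print(is_nscal(M))
```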

    Theorem 2.2. If a matrix $A=[a_{ij}]\in\mathbb{C}^{n,n}$, $n\ge 2$, is an N-scal matrix, then A is a non-singular H-matrix.

    Proof. Let us consider the diagonal matrix $X=\operatorname{diag}(x_k)$, with

    $$x_k=\frac{h_k(A)}{|a_{kk}|},\qquad k\in N.$$

    For the diagonal matrix X defined in this way, let us prove that AX is a Nekrasov matrix. It is easy to see that

    $$|(AX)_{kk}|=|a_{kk}|\,x_k=|a_{kk}|\,\frac{h_k(A)}{|a_{kk}|}=h_k(A).$$

    We now prove by induction that, for all $i\in N$,

    $$h_i(AX)=H_i(A).$$

    First, for $i=1$,

    $$h_1(AX)=\sum_{j\neq 1}|a_{1j}|\,x_j=\sum_{j\neq 1}|a_{1j}|\,\frac{h_j(A)}{|a_{jj}|}=H_1(A).$$

    Consider $i\in\{2,3,\dots,n\}$ and assume that, for all indices $k\in\{1,2,\dots,i-1\}$, it holds that

    $$h_k(AX)=H_k(A).$$

    Then,

    $$h_i(AX)=\sum_{j=1}^{i-1}|a_{ij}|\,x_j\,\frac{h_j(AX)}{|a_{jj}|\,x_j}+\sum_{j=i+1}^{n}|a_{ij}|\,x_j=\sum_{j=1}^{i-1}|a_{ij}|\,\frac{H_j(A)}{|a_{jj}|}+\sum_{j=i+1}^{n}|a_{ij}|\,\frac{h_j(A)}{|a_{jj}|}=H_i(A),\quad i=2,3,\dots,n.$$

    Since A is an N-scal matrix, we have $|(AX)_{ii}|=h_i(A)>H_i(A)=h_i(AX)$ for all $i\in N$, so AX is a Nekrasov matrix. Therefore, A is an H-matrix.

    The N-scal class is derived from the Nekrasov class and, just like the Nekrasov class, it is not closed under simultaneous permutations of rows and columns, as the following example shows.

    Example 2.3. Consider

    $$A_0=\begin{bmatrix}3.4&0&0&0.4\\2.9&7&2.2&0\\0.1&3&4.1&1.3\\0&0&5.6&5.6\end{bmatrix}.$$

    The matrix $A_0$ is an N-scal matrix, but for P being the counter-identity permutation matrix, $P^{T}A_0P$ is not an N-scal matrix.

    Let us now discuss relations of the N-scal condition to other well-known criteria for identifying H-matrices.

    Example 2.4. Consider the following matrix

    $$A_1=\begin{bmatrix}1&0.1&0&0.5\\0.1&1&0&0.5\\0.1&0&1&0.5\\0.4&1&0.3&1\end{bmatrix}.$$

    The matrix A1 belongs to some of the matrix classes presented in Table 1.

    Table 1.  Classes that contain A1.

    SDD   SDD1   Ostrowski   Nekrasov   O-scal   N-scal
     −      −        −           ✓         −        ✓

    Consider the matrix

    $$A_2=\begin{bmatrix}7&0&3&2\\1&7&6&0\\0&6&7&2\\2&1&0&9\end{bmatrix}.$$

    The matrix A2 belongs to some of the matrix classes presented in Table 2.

    Table 2.  Classes that contain A2.

    SDD   SDD1   Ostrowski   Nekrasov   O-scal   N-scal
     −      ✓        −           −         ✓        −

    Consider the matrix

    $$A_3=\begin{bmatrix}1&0.5&0&0\\0.5&1&0&0\\1&2&1&0.5\\1&2&0.33&1\end{bmatrix}.$$

    The matrix A3 belongs to some of the matrix classes presented in Table 3.

    Table 3.  Classes that contain A3.

    SDD   SDD1   Ostrowski   Nekrasov   O-scal   N-scal
     −      −        −           −         ✓        ✓

    Consider the matrix

    $$A_4=\begin{bmatrix}3&2&0&0\\1&3&0&0\\0&1&3&0\\0&0&0&8\end{bmatrix}.$$

    The matrix A4 belongs to some of the matrix classes presented in Table 4.

    Table 4.  Classes that contain A4.

    SDD   SDD1   Ostrowski   Nekrasov   O-scal   N-scal
     ✓      ✓        ✓           ✓         −        −

    Consider the matrix

    $$A_5=\begin{bmatrix}2&0&0&1\\0&3&1&0\\0&0&4&1\\1&0&0&5\end{bmatrix}.$$

    The matrix A5 belongs to all of the matrix classes presented in Table 5.

    Table 5.  Classes that contain A5.

    SDD   SDD1   Ostrowski   Nekrasov   O-scal   N-scal
     ✓      ✓        ✓           ✓         ✓        ✓

    Consider the matrix

    $$A_6=\begin{bmatrix}8&4&2&3\\2&9&2&5\\1&2&11&6\\2&2&2&6\end{bmatrix}.$$

    The matrix A6 belongs to some of the matrix classes presented in Table 6.

    Table 6.  Classes that contain A6.

    SDD   SDD1   Ostrowski   Nekrasov   O-scal   N-scal
     −      −        −           −         −        ✓

    Remark 2.5. If we consider the matrix A3, we see that the N-scal class is not contained in any of the following classes: SDD, SDD1, Ostrowski, Nekrasov. Considering the matrix A1, we see that the N-scal class is not contained in O-scal.

    Remark 2.6. Neither of the classes SDD, SDD1, Ostrowski, Nekrasov is contained in N-scal, as seen from the matrix A4. Considering A2, we see that O-scal is not a subclass of N-scal.

    Remark 2.7. The matrix A5 belongs to the new class N-scal and also to all of the classes SDD, SDD1, Ostrowski, Nekrasov, O-scal, so the intersection of these classes is nonempty.

    Remark 2.8. If we consider matrices A2 and A3, we see that O-scal matrices can have more than one non-SDD row. This is not the case with Ostrowski matrices.

    Remark 2.9. If we consider matrix A6, we see that it is an N-scal matrix, but it does not belong to any of the classes SDD, SDD1, Ostrowski, Nekrasov, O-scal.

    The previous remarks show that the new class N-scal is in general position relative to each of the classes SDD, SDD1, Ostrowski, Nekrasov, and O-scal: none of these classes contains the other.

    In [46], a max-norm bound is given for the inverse of SDD matrices.

    Theorem 3.1. [46] Given an SDD matrix $A=[a_{ij}]\in\mathbb{C}^{n,n}$, the following bound applies:

    $$\|A^{-1}\|_{\infty}\le\frac{1}{\min_{i\in N}\bigl(|a_{ii}|-r_i(A)\bigr)}.$$

    This result of Varah served as a starting point for defining new norm estimations applicable to wider classes of matrices.

    Theorem 3.2. [32] Let $A\in\mathbb{C}^{n,n}$, $n\ge 2$, be an Ostrowski matrix. Then,

    $$\|A^{-1}\|_{\infty}\le\max_{j\neq i,\ i,j\in N}\frac{|a_{ii}|+r_j(A)}{|a_{ii}|\,|a_{jj}|-r_i(A)\,r_j(A)}.$$

    In [40], in a slightly different form, one can find the following result.

    Theorem 3.3. Let $A\in\mathbb{C}^{n,n}$, $n\ge 2$, be an O-scal matrix. Then,

    $$\|A^{-1}\|_{\infty}\le\max_{k\in N}\frac{r_k(A)}{|a_{kk}|}\ \max_{j\neq i,\ i,j\in N}\frac{r_i(A)+R_j(A)}{r_i(A)\,r_j(A)-R_i(A)\,R_j(A)}.$$

    Theorem 3.4. [31] Given a Nekrasov matrix $A=[a_{ij}]\in\mathbb{C}^{n,n}$, the following bound applies:

    $$\|A^{-1}\|_{\infty}\le\max_{i\in N}\frac{z_i(A)}{|a_{ii}|-h_i(A)},$$

    where

    $$z_1(A)=1,\qquad z_i(A)=\sum_{j=1}^{i-1}|a_{ij}|\,\frac{z_j(A)}{|a_{jj}|}+1,\quad i=2,\dots,n.$$
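The quantities $z_i(A)$ are computed by the same forward recursion as the $h_i(A)$. A sketch of the bound of Theorem 3.4 (plain Python; the matrix is our own illustration, for which the bound evaluates to 3):

```python
def nekrasov_sums(A):
    n = len(A)
    h = [sum(abs(v) for v in A[0][1:])]
    for i in range(1, n):
        h.append(sum(abs(A[i][j]) * h[j] / abs(A[j][j]) for j in range(i))
                 + sum(abs(A[i][j]) for j in range(i + 1, n)))
    return h

def z_values(A):
    """z_1 = 1, z_i = sum_{j<i} |a_ij| z_j / |a_jj| + 1."""
    z = [1.0]
    for i in range(1, len(A)):
        z.append(sum(abs(A[i][j]) * z[j] / abs(A[j][j]) for j in range(i)) + 1.0)
    return z

def nekrasov_bound(A):
    """Upper bound on ||A^{-1}||_inf from Theorem 3.4 (Nekrasov matrices)."""
    h, z = nekrasov_sums(A), z_values(A)
    return max(z[i] / (abs(A[i][i]) - h[i]) for i in range(len(A)))

M = [[2, 1, 0],
     [1, 2, 1],
     [0, 3, 4]]
print(nekrasov_bound(M))
```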

    Based on the N-scal condition, we now define a new norm estimation as follows.

    Theorem 3.5. If a matrix $A=[a_{ij}]\in\mathbb{C}^{n,n}$, $n\ge 2$, is an N-scal matrix, then the following bound applies:

    $$\|A^{-1}\|_{\infty}\le\max_{k\in N}\frac{h_k(A)}{|a_{kk}|}\ \max_{i\in N}\frac{z_i(A)}{h_i(A)-H_i(A)},$$

    where

    $$z_1(A)=1,\qquad z_i(A)=\sum_{j=1}^{i-1}|a_{ij}|\,\frac{z_j(A)}{|a_{jj}|}+1,\quad i=2,\dots,n.$$

    Proof. As A is an N-scal matrix, for $X=\operatorname{diag}(x_k)$ with $x_k=\frac{h_k(A)}{|a_{kk}|}$, $k\in N$, we know that the matrix $B=AX$ is a Nekrasov matrix.

    Also, $B^{-1}=(AX)^{-1}=X^{-1}A^{-1}$, i.e., $A^{-1}=XB^{-1}$. Therefore,

    $$\|A^{-1}\|_{\infty}=\|XB^{-1}\|_{\infty}\le\|X\|_{\infty}\,\|B^{-1}\|_{\infty}=\max_{k\in N}\frac{h_k(A)}{|a_{kk}|}\,\|B^{-1}\|_{\infty}.$$

    It is easy to prove by induction that $z_i(B)=z_i(A)$ for all $i\in N$.

    Namely, for $i=1$, it holds that $z_1(A)=z_1(B)=1$, by definition. If we assume that $z_j(B)=z_j(A)$ for all $j=1,2,\dots,i-1$, then

    $$z_i(B)=\sum_{j=1}^{i-1}\frac{z_j(B)}{|b_{jj}|}\,|b_{ij}|+1=\sum_{j=1}^{i-1}\frac{z_j(B)}{|a_{jj}|\,x_j}\,|a_{ij}|\,x_j+1=\sum_{j=1}^{i-1}\frac{z_j(A)}{|a_{jj}|}\,|a_{ij}|+1=z_i(A).$$

    Applying the upper bound for the norm of the inverse given in Theorem 3.4 to the matrix B, we obtain

    $$\|B^{-1}\|_{\infty}\le\max_{i\in N}\frac{z_i(B)}{|b_{ii}|-h_i(B)}=\max_{i\in N}\frac{z_i(A)}{h_i(A)-H_i(A)}.$$

    It follows that

    $$\|A^{-1}\|_{\infty}\le\max_{k\in N}\frac{h_k(A)}{|a_{kk}|}\ \max_{i\in N}\frac{z_i(A)}{h_i(A)-H_i(A)}.$$

    Example 3.6. Consider the matrix

    $$B_1=\begin{bmatrix}8&0&3&5\\0&4&2&4\\2&5&20&0\\4&4&0&20\end{bmatrix}.$$

    The matrix $B_1$ belongs to the N-scal class and the norm estimation from Theorem 3.5 is

    $$\|B_1^{-1}\|_{\infty}\le 0.776197,$$

    while the exact value is

    $$\|B_1^{-1}\|_{\infty}=0.6187.$$

    The matrix B1 does not belong to any of the classes SDD, Ostrowski, Nekrasov, O-scal; therefore, the corresponding norm bounds cannot be applied.
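The bound of Theorem 3.5 is easy to reproduce numerically. The sketch below (plain Python; the entries of $B_1$ are taken from Example 3.6 as reconstructed above) computes the two factors of the bound and recovers the value 0.776197.

```python
def nekrasov_sums(A):
    n = len(A)
    h = [sum(abs(v) for v in A[0][1:])]
    for i in range(1, n):
        h.append(sum(abs(A[i][j]) * h[j] / abs(A[j][j]) for j in range(i))
                 + sum(abs(A[i][j]) for j in range(i + 1, n)))
    return h

def nscal_sums(A):
    n, h = len(A), nekrasov_sums(A)
    H = [sum(abs(A[0][j]) * h[j] / abs(A[j][j]) for j in range(1, n))]
    for i in range(1, n):
        H.append(sum(abs(A[i][j]) * H[j] / abs(A[j][j]) for j in range(i))
                 + sum(abs(A[i][j]) * h[j] / abs(A[j][j]) for j in range(i + 1, n)))
    return H

def z_values(A):
    z = [1.0]
    for i in range(1, len(A)):
        z.append(sum(abs(A[i][j]) * z[j] / abs(A[j][j]) for j in range(i)) + 1.0)
    return z

def nscal_bound(A):
    """The bound of Theorem 3.5: max_k h_k/|a_kk| * max_i z_i/(h_i - H_i)."""
    h, H, z = nekrasov_sums(A), nscal_sums(A), z_values(A)
    n = len(A)
    return (max(h[k] / abs(A[k][k]) for k in range(n))
            * max(z[i] / (h[i] - H[i]) for i in range(n)))

B1 = [[8, 0, 3, 5],
      [0, 4, 2, 4],
      [2, 5, 20, 0],
      [4, 4, 0, 20]]
print(round(nscal_bound(B1), 6))  # → 0.776197
```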

    Example 3.7. Consider

    $$B_2=\begin{bmatrix}15&0&2&0\\1&15&6&13\\3&3&17&0\\3&4&0&7\end{bmatrix}.$$

    The matrix $B_2$ belongs to the N-scal class and the norm estimation from Theorem 3.5 is

    $$\|B_2^{-1}\|_{\infty}\le 1.013626,$$

    while the exact value is

    $$\|B_2^{-1}\|_{\infty}=0.5708.$$

    The matrix B2 does not belong to any of the classes SDD, Ostrowski, Nekrasov, SDD1; therefore, the corresponding norm bounds cannot be applied. It does belong to the O-scal class and the norm estimation from Theorem 3.3 is 1.035986.

    Example 3.8. Consider

    $$B_3=\begin{bmatrix}15&0&3&1\\5&9&2&7\\2&3&6&0\\4&4&0&10\end{bmatrix}.$$

    Norm bounds for the inverse for B3 are presented in Table 7.

    Table 7.  Norm bounds for the inverse for B3.

                                SDD   Ostrowski   Nekrasov    O-scal     N-scal
    in the class                 −        −           −          ✓          ✓
    bound for ||B3^{-1}||_∞      −        −           −       9.788136   1.138751

    The matrix $B_3$ belongs to the classes O-scal and N-scal. The exact value of the norm of the inverse matrix is

    $$\|B_3^{-1}\|_{\infty}=0.3929.$$

    Also, the matrix $B_3$ belongs to the SDD1 class. Applying the norm bound given in [4, Theorem 8] for the SDD1 class, we obtain $\|B_3^{-1}\|_{\infty}\le 4.215686$. Therefore, in this case, the bound obtained for N-scal matrices gives a more precise result than the bound for SDD1.

    Example 3.9. In mathematical models in ecology, when modeling interactions between species, stochastic matrices represent the corresponding transition probabilities between stages in life cycles. If we consider predator and prey species in juvenile or adult life stages, the probabilities of transitions between different states can be represented by the following matrix:

    $$B_4=\begin{bmatrix}0.6&0.3&0.1&0\\0.2&0.7&0.1&0\\0.3&0.1&0.5&0.1\\0.1&0.3&0.2&0.4\end{bmatrix}.$$

    Coming from real systems, these matrices often possess properties related to diagonal dominance. Here, the matrix $B_4$ is neither SDD nor Ostrowski, but it belongs to the N-scal class and the norm estimation from Theorem 3.5 is

    $$\|B_4^{-1}\|_{\infty}\le 12.1099,$$

    which is tighter than bounds obtained for Nekrasov (24.1429) or the O-scal class (18.85714).

    Example 3.10. In mathematical models in structural engineering, when applying the finite element method, matrices representing relationships between forces and displacements occur. In some cases, these matrices can be analyzed via the tools of H-matrix theory. Consider a tridiagonal matrix of order 10, $B_5=\operatorname{tridiag}(6,12,6)$, having a form that often appears in modeling systems in engineering or economics. The matrix $B_5$ is neither SDD nor Ostrowski, but it belongs to the N-scal class and the norm estimation from Theorem 3.5 is

    $$\|B_5^{-1}\|_{\infty}\le 21.2084961,$$

    which is tighter than the bound obtained for Nekrasov (85.16666).

    In order to bound the smallest singular value of a special H-matrix A, let us denote

    $$\nu(A)=\max_{k\in N}\frac{h_k(A)}{|a_{kk}|}\ \max_{i\in N}\frac{z_i(A)}{h_i(A)-H_i(A)}.$$

    For a given matrix $A\in\mathbb{C}^{n,n}$, let $\|A\|_1$, $\|A\|_2$, and $\|A\|_{\infty}$ represent the 1-norm, 2-norm, and $\infty$-norm of A, respectively. It is well known that the following relation between the norms holds:

    $$\|A\|_2^2\le\|A\|_1\,\|A\|_{\infty}.$$

    Applying this inequality, the lower bound for the minimal singular value of special H-matrices is obtained.

    Theorem 3.11. Let $A\in\mathbb{C}^{n,n}$, $n\ge 2$, be a matrix with nonzero diagonal entries. If

    $$h_i(A)>H_i(A)\quad\text{and}\quad h_i(A^{T})>H_i(A^{T})$$

    for all $i\in N$, then

    $$\sigma_n(A)\ge\frac{1}{\sqrt{\nu(A)\,\nu(A^{T})}},$$

    where $\sigma_n(A)$ is the smallest singular value of A.

    Proof. The matrices A and $A^{T}$ are both N-scal matrices. From the previous theorem, we have

    $$\|A^{-1}\|_{\infty}\le\nu(A)\quad\text{and}\quad\|(A^{T})^{-1}\|_{\infty}\le\nu(A^{T}).$$

    As

    $$\|A^{-1}\|_1=\|(A^{-1})^{T}\|_{\infty}=\|(A^{T})^{-1}\|_{\infty}\le\nu(A^{T}),$$

    we obtain

    $$\|A^{-1}\|_2^2\le\|A^{-1}\|_1\,\|A^{-1}\|_{\infty}\le\nu(A^{T})\,\nu(A),$$

    i.e.,

    $$\sigma_n(A)=\|A^{-1}\|_2^{-1}\ge\frac{1}{\sqrt{\nu(A)\,\nu(A^{T})}}.$$

    Remark 3.12. We can apply the obtained results to other norms as well. If the matrices A and $A^{T}$ are both N-scal matrices, from the previous consideration it follows that

    $$\|A^{-1}\|_1\le\nu(A^{T}),\qquad \|A^{-1}\|_2\le\sqrt{\nu(A)\,\nu(A^{T})}\le\max\{\nu(A),\nu(A^{T})\}.$$
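As an illustration of Theorem 3.11 (a sketch in plain Python; $B_1$ is the matrix of Example 3.6 as reconstructed above, and it happens that both $B_1$ and its transpose are N-scal):

```python
import math

def nekrasov_sums(A):
    n = len(A)
    h = [sum(abs(v) for v in A[0][1:])]
    for i in range(1, n):
        h.append(sum(abs(A[i][j]) * h[j] / abs(A[j][j]) for j in range(i))
                 + sum(abs(A[i][j]) for j in range(i + 1, n)))
    return h

def nscal_sums(A):
    n, h = len(A), nekrasov_sums(A)
    H = [sum(abs(A[0][j]) * h[j] / abs(A[j][j]) for j in range(1, n))]
    for i in range(1, n):
        H.append(sum(abs(A[i][j]) * H[j] / abs(A[j][j]) for j in range(i))
                 + sum(abs(A[i][j]) * h[j] / abs(A[j][j]) for j in range(i + 1, n)))
    return H

def z_values(A):
    z = [1.0]
    for i in range(1, len(A)):
        z.append(sum(abs(A[i][j]) * z[j] / abs(A[j][j]) for j in range(i)) + 1.0)
    return z

def is_nscal(A):
    h, H = nekrasov_sums(A), nscal_sums(A)
    return all(hi > Hi for hi, Hi in zip(h, H))

def nu(A):
    h, H, z = nekrasov_sums(A), nscal_sums(A), z_values(A)
    n = len(A)
    return (max(h[k] / abs(A[k][k]) for k in range(n))
            * max(z[i] / (h[i] - H[i]) for i in range(n)))

def transpose(A):
    return [list(col) for col in zip(*A)]

def sigma_min_lower_bound(A):
    """1 / sqrt(nu(A) nu(A^T)), the lower bound of Theorem 3.11."""
    return 1.0 / math.sqrt(nu(A) * nu(transpose(A)))

B1 = [[8, 0, 3, 5],
      [0, 4, 2, 4],
      [2, 5, 20, 0],
      [4, 4, 0, 20]]
print(is_nscal(B1), is_nscal(transpose(B1)), sigma_min_lower_bound(B1))
```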

    Remark 3.13. One can consider block generalizations of H-matrices and subclasses and define infinity norm estimation for the inverse in the block-case as well. Block H-matrices were researched in [43,47].

    For $A=[a_{ij}]\in\mathbb{C}^{n,n}$ and a partition $\pi=\{p_j\}_{j=0}^{l}$, $p_0=0<p_1<p_2<\dots<p_l=n$, of the index set N, consider the block matrix $[A_{ij}]_{l\times l}$ with the index set $L=\{1,2,\dots,l\}$.

    The relation between point-wise and the block case (see [7]) implies the following estimation.

    For a given $A=[a_{ij}]\in\mathbb{C}^{n,n}$ and a given partition $\pi=\{p_j\}_{j=0}^{l}$ of the index set N, if $\langle A\rangle^{\pi}$ is an N-scal matrix, then

    $$\|A^{-1}\|_{\infty}\le\|(\langle A\rangle^{\pi})^{-1}\|_{\infty}\le\nu(\langle A\rangle^{\pi}).$$

    Here, the block comparison matrix is defined in the usual manner: the matrix $\langle A\rangle^{\pi}=[p_{ij}]_{l\times l}$ is given by

    $$p_{ij}=\begin{cases}\bigl(\|A_{ii}^{-1}\|_{\infty}\bigr)^{-1},& i=j\ \text{and}\ \det(A_{ii})\neq 0,\\ 0,& i=j\ \text{and}\ \det(A_{ii})=0,\\ -\|A_{ij}\|_{\infty},& i\neq j.\end{cases}$$

    Example 3.14. When considering a continuous-time dynamical ecological system composed of several populations resting at a feasible equilibrium point (see [24]), self-interactions are often removed from the community matrix, meaning that the diagonal entries are equal to zero. Matrices with zero diagonal, known as 'hollow matrices', do not belong to the class of H-matrices in the classical sense; therefore, they cannot be treated nor analyzed via bounds obtained for special classes of H-matrices. However, the community matrix often has a certain block structure, and we can apply results developed for block H-matrices. Consider

    $$G=\begin{bmatrix}0&8&0&0&3&0&5&0\\8&0&0&0&0&0&0&0\\0&0&0&4&2&0&4&0\\0&0&4&0&0&0&0&0\\2&0&5&0&0&20&0&0\\0&0&0&0&20&0&0&0\\4&0&4&0&0&0&0&20\\0&0&0&0&0&0&20&0\end{bmatrix}. \qquad (3.1)$$

    This matrix is not an H-matrix, as it is hollow, but with respect to the partition $\pi=\{0,2,4,6,8\}$ we obtain the block comparison matrix $\langle G\rangle^{\pi}$, which is an N-scal matrix; therefore, we obtain

    $$\|G^{-1}\|_{\infty}\le\|(\langle G\rangle^{\pi})^{-1}\|_{\infty}\le\nu(\langle G\rangle^{\pi})=0.776197.$$

    Assume that $A=[a_{ij}]\in\mathbb{R}^{n,n}$ and $q\in\mathbb{R}^{n}$. The linear complementarity problem LCP(A,q) is defined as follows: find $x\in\mathbb{R}^{n}$ satisfying

    $$x\ge 0,\qquad Ax+q\ge 0,\qquad (Ax+q)^{T}x=0,$$

    or show that such a vector does not exist. In modern game theory, as well as in economics and engineering, there are certain problems that can be formulated as LCP problems. Special matrix classes, including P-matrices, Q-matrices, and M-matrices, have always played an important role in the analysis of linear complementarity problems, whether in establishing conditions for the existence and uniqueness of the solution, or in the construction of procedures for solving the LCP. A necessary and sufficient condition for LCP(A,q) to have a unique solution for any $q\in\mathbb{R}^{n}$ is that the matrix A be a P-matrix, a real square matrix with all its principal minors positive; see [6].

    For a P-matrix A, let $x^{*}$ denote the solution of LCP(A,q), and denote $r(x)=\min(x,Ax+q)$, $D=\operatorname{diag}(d_i)$, $0\le d_i\le 1$. In [5], the following estimate for the error of the solution was presented:

    $$\|x-x^{*}\|_{\infty}\le\max_{d\in[0,1]^{n}}\|(I-D+DA)^{-1}\|_{\infty}\,\|r(x)\|_{\infty}.$$

    It is well known that an $H_{+}$-matrix, i.e., a real H-matrix with positive diagonal entries, is a P-matrix.

    Therefore, it is convenient to apply the upper bound for the error of the solution of the given LCP(A,q) presented in [19], which can be used whenever we know how to construct a scaling matrix for the given $H_{+}$-matrix.

    Theorem 3.15. [19] Let $A=[a_{ij}]\in\mathbb{R}^{n,n}$ be an H-matrix with $a_{ii}>0$ for all $i\in N$. Let $W=\operatorname{diag}(w_1,\dots,w_n)$, with $w_i>0$ for all $i=1,\dots,n$, be a diagonal scaling matrix for A such that AW is SDD. Then,

    $$\max_{d\in[0,1]^{n}}\|(I-D+DA)^{-1}\|_{\infty}\le\max\Bigl\{\frac{\max_{i\in N}w_i}{\min_{i\in N}\beta_i},\,\frac{\max_{i\in N}w_i}{\min_{i\in N}w_i}\Bigr\},$$

    where, for each $i=1,2,\dots,n$, $\beta_i=a_{ii}w_i-\sum_{j\in N\setminus\{i\}}|a_{ij}|\,w_j$.

    In what follows, we denote

    $$\max\Bigl\{\frac{\max_{i\in N}w_i}{\min_{i\in N}\beta_i},\,\frac{\max_{i\in N}w_i}{\min_{i\in N}w_i}\Bigr\}=:\eta(A,W).$$

    Note that the norm estimation depends both on the entries of the given matrix A and the choice of scaling matrix W.

    As we know how to construct a scaling matrix for a given N-scal matrix, we directly obtain the following error estimation.

    Theorem 3.16. Let $A=[a_{ij}]\in\mathbb{R}^{n,n}$, $n\ge 2$, be an N-scal matrix with positive diagonal entries. Then,

    $$\max_{d\in[0,1]^{n}}\|(I-D+DA)^{-1}\|_{\infty}\le\eta(A,W),$$

    where $W=\operatorname{diag}(w_i)$ and, for each $i=1,2,\dots,n$, $w_i=\varepsilon_i\,\frac{H_i(A)}{a_{ii}}$, and $(\varepsilon_i)_{i=1}^{n}$ is an increasing sequence of numbers with $\varepsilon_1=1$, $\varepsilon_i\in\bigl(1,\frac{h_i(A)}{H_i(A)}\bigr)$, $i=2,\dots,n$.

    Proof. Let $A=[a_{ij}]\in\mathbb{R}^{n,n}$, $n\ge 2$, $a_{ii}>0$, $i=1,\dots,n$, be an N-scal matrix. Then, for the diagonal matrix $X=\operatorname{diag}(x_i)$ with

    $$x_i=\frac{h_i(A)}{a_{ii}},\qquad i=1,\dots,n,$$

    the matrix AX is a Nekrasov matrix. Notice that the Nekrasov row sums are nonzero in both A and AX. Therefore, we can construct the scaling matrix Y for the Nekrasov matrix AX as proposed in Theorem 1.1. This implies that the matrix AXY, with $Y=\operatorname{diag}(y_i)$,

    $$y_i=\varepsilon_i\,\frac{h_i(AX)}{(AX)_{ii}}=\varepsilon_i\,\frac{H_i(A)}{h_i(A)},\qquad i=1,\dots,n,$$

    is an SDD matrix, for $(\varepsilon_i)_{i=1}^{n}$ an increasing sequence of numbers with

    $$\varepsilon_1=1,\qquad \varepsilon_i\in\Bigl(1,\frac{(AX)_{ii}}{h_i(AX)}\Bigr)=\Bigl(1,\frac{h_i(A)}{H_i(A)}\Bigr),\quad i=2,\dots,n.$$

    Applying Theorem 3.15 with $W=XY=\operatorname{diag}(w_i)$, $i=1,\dots,n$, where

    $$w_i=x_iy_i=\frac{h_i(A)}{a_{ii}}\,\frac{H_i(A)}{h_i(A)}\,\varepsilon_i=\varepsilon_i\,\frac{H_i(A)}{a_{ii}},$$

    the proof is completed.
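The construction in the proof is easy to check numerically: build W from the $H_i(A)$ and equidistant $\varepsilon_i$, and verify that AW is SDD. A sketch (plain Python; $B_1$ is the matrix of Example 3.6 as reconstructed above, the $\varepsilon$ spacing is our own choice, and we assume $H_i(A)>0$ so the quotients exist):

```python
def row_sums(A):
    return [sum(abs(v) for j, v in enumerate(row) if j != i)
            for i, row in enumerate(A)]

def is_sdd(A):
    return all(abs(A[i][i]) > ri for i, ri in enumerate(row_sums(A)))

def nekrasov_sums(A):
    n = len(A)
    h = [sum(abs(v) for v in A[0][1:])]
    for i in range(1, n):
        h.append(sum(abs(A[i][j]) * h[j] / abs(A[j][j]) for j in range(i))
                 + sum(abs(A[i][j]) for j in range(i + 1, n)))
    return h

def nscal_sums(A):
    n, h = len(A), nekrasov_sums(A)
    H = [sum(abs(A[0][j]) * h[j] / abs(A[j][j]) for j in range(1, n))]
    for i in range(1, n):
        H.append(sum(abs(A[i][j]) * H[j] / abs(A[j][j]) for j in range(i))
                 + sum(abs(A[i][j]) * h[j] / abs(A[j][j]) for j in range(i + 1, n)))
    return H

def scaling_W(A):
    """w_i = eps_i H_i(A)/a_ii, eps_1 = 1, eps_i equidistant and increasing
    inside (1, min_{i>=2} h_i/H_i), as in Theorem 3.16."""
    n = len(A)
    h, H = nekrasov_sums(A), nscal_sums(A)
    top = min(h[i] / H[i] for i in range(1, n))
    eps = [1.0 + k * (top - 1.0) / n for k in range(n)]
    return [eps[i] * H[i] / A[i][i] for i in range(n)]

def scale_cols(A, w):
    n = len(A)
    return [[A[i][j] * w[j] for j in range(n)] for i in range(n)]

B1 = [[8, 0, 3, 5],
      [0, 4, 2, 4],
      [2, 5, 20, 0],
      [4, 4, 0, 20]]
w = scaling_W(B1)
print(is_sdd(scale_cols(B1, w)))
```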

    Assume that $A=[a_{ij}]\in\mathbb{C}^{n,n}$ is an Ostrowski matrix with positive diagonal entries, whose only non-SDD row is indexed by l. The estimation $\eta(A,W)$ given in Theorem 3.15 can be applied to the Ostrowski matrix A with the scaling matrix W defined as $W=\operatorname{diag}(w_k)$,

    $$w_k=\begin{cases}\tilde{\gamma},& k=l,\\ 1,& k\in N\setminus\{l\},\end{cases} \qquad (3.2)$$

    where

    $$\tilde{I}_1=\frac{r_l(A)}{|a_{ll}|}<\tilde{\gamma}<\min_{i\neq l,\ a_{il}\neq 0}\frac{|a_{ii}|-r_i(A)+|a_{il}|}{|a_{il}|}=\tilde{I}_2,$$

    and if $a_{il}=0$ for all $i\neq l$, then $\tilde{I}_2=\infty$.

    The estimation $\eta(A,W)$ of Theorem 3.15 can also be applied to an O-scal matrix $A=[a_{ij}]\in\mathbb{C}^{n,n}$ with positive diagonal entries, with the following choice of the scaling matrix W. Let $W=\operatorname{diag}(w_i)$,

    $$w_i=\begin{cases}\dfrac{r_i(A)}{a_{ii}}\,\gamma,& i=l,\\[4pt] \dfrac{r_i(A)}{a_{ii}},& i\in N\setminus\{l\},\end{cases}\qquad \gamma\in\Bigl(\frac{R_l(A)}{r_l(A)},\ \min_{i\neq l,\ a_{il}\neq 0}\frac{r_i(A)\,|a_{ll}|-R_i(A)\,|a_{ll}|+|a_{il}|\,r_l(A)}{|a_{il}|\,r_l(A)}\Bigr),$$

    where l is the index of the only non-SDD row in AQ, with $Q=\operatorname{diag}(q_k)$, $q_k=\frac{r_k(A)}{|a_{kk}|}$, $k\in N$. If there is no such index l, we choose $\gamma=1$.

    In the following numerical examples, we compare η estimations for LCP obtained from Theorem 3.15 with corresponding choices of scaling matrices.

    Example 3.17. Consider

    $$B=\begin{bmatrix}4&0&1&0&0&2&0&0&0&1\\0&2&0&0&0&1&1&0&1&0\\0&0&3&1&0&1&1&0&0&0\\1&1&0&4&0&0&0&0&1&0\\0&0&2&0&5&0&2&1&0&0\\0&0&0&0&1&6&0&0&1&1\\0&0&1&1&1&0&7&0&1&0\\0&1&0&0&1&1&0&8&0&1\\2&2&2&0&1&0&0&1&9&0\\1&0&2&0&0&0&2&0&1&10\end{bmatrix}. \qquad (3.3)$$

    The matrix B is an N-scal matrix, so we apply Theorem 3.16 with $\varepsilon_{i+1}=1+0.0362\,i$, $i=0,\dots,9$. We obtain the $\eta$ estimation 2.729348. As B does not belong to any of the classes SDD, Ostrowski, O-scal, Nekrasov, the corresponding bounds obtained for these classes cannot be applied. In all the examples, $\varepsilon_1=1$ and $\varepsilon_i$, $i=2,\dots,n$, are chosen equidistantly inside the interval $\bigl(1,\min_{i=2,\dots,n}\frac{h_i(A)}{H_i(A)}\bigr)$.

    Example 3.18. Consider

    $$C=\begin{bmatrix}22&9.1&4.2&2.1\\0.7&9.1&4.2&2.1\\0.7&0.7&4&2\\2&1&3&7\end{bmatrix}.$$

    The matrix C belongs to all of the classes we considered; therefore, we apply bounds developed for all these classes and compare them. Results are presented in Table 8.

    Table 8.  Norm bounds for the inverse and η bounds for C.

                                SDD        Ostrowski   Nekrasov   O-scal     N-scal
    in the class                 ✓             ✓           ✓          ✓          ✓
    bound for ||C^{-1}||_∞    1.666667    1.368421    1.125035   1.080676   1.1319
    η bound for LCP           1.666667    1.666667    1.344799   1.30413    1.226424

    For this matrix, we see that the best bound for the norm of the inverse matrix is the bound from Theorem 3.3 for the O-scal class, while the best η bound for LCP is the bound from Theorem 3.16 for the N-scal class.

    The Schur complement matrix can be viewed as a by-product of block Gaussian elimination. It allows the transformation of a given large-dimensional system into problems of smaller dimensions.

    The Schur complement of $A=[a_{ij}]\in\mathbb{C}^{n,n}$ with respect to a proper subset α of the index set N is denoted by $A/\alpha$ and defined to be

    $$A/\alpha=A(\overline{\alpha})-A(\overline{\alpha},\alpha)\bigl(A(\alpha)\bigr)^{-1}A(\alpha,\overline{\alpha}).$$

    Here, $A(\alpha,\beta)$ denotes the sub-matrix of $A\in\mathbb{C}^{n,n}$ formed by the rows indexed by α and the columns indexed by β, while $A(\alpha,\alpha)$ is abbreviated to $A(\alpha)$. In our considerations, we assume that $A(\alpha)$ is a non-singular matrix.
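The definition translates directly into code. The sketch below (plain Python; the matrix and the small Gauss–Jordan solver are our own illustration) forms $A/\alpha$ for a 3×3 matrix with $\alpha=\{1\}$.

```python
def solve(M, B):
    """Solve M X = B (B a list of rows) by Gauss-Jordan elimination
    with partial pivoting; returns X as a list of rows."""
    k, m = len(M), len(B[0])
    aug = [row[:] + rhs[:] for row, rhs in zip(M, B)]
    for c in range(k):
        piv = max(range(c, k), key=lambda r: abs(aug[r][c]))
        aug[c], aug[piv] = aug[piv], aug[c]
        for r in range(k):
            if r != c:
                f = aug[r][c] / aug[c][c]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[c])]
    return [[aug[r][k + j] / aug[r][r] for j in range(m)] for r in range(k)]

def schur_complement(A, alpha):
    """A/alpha = A(ac) - A(ac, alpha) A(alpha)^{-1} A(alpha, ac)."""
    n = len(A)
    ac = [i for i in range(n) if i not in alpha]
    Aaa = [[A[i][j] for j in alpha] for i in alpha]
    Aab = [[A[i][j] for j in ac] for i in alpha]
    Aba = [[A[i][j] for j in alpha] for i in ac]
    X = solve(Aaa, Aab)  # X = A(alpha)^{-1} A(alpha, ac)
    return [[A[i][j] - sum(Aba[r][t] * X[t][c] for t in range(len(alpha)))
             for c, j in enumerate(ac)] for r, i in enumerate(ac)]

# Illustration with alpha = {first index}:
A = [[4, 1, 1],
     [1, 3, 0],
     [1, 0, 2]]
S = schur_complement(A, [0])
print(S)  # → [[2.75, -0.25], [-0.25, 1.75]]
```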

    Results on the Schur complement of special H-matrices including spectra localizations and closure properties can be found in [9,11,12,27,39,48].

    Denoting by σ(A) the spectrum of the matrix A, i.e., the set of all eigenvalues of A, we now recall the famous Geršgorin theorem; see [23]. It states that, for any matrix $A\in\mathbb{C}^{n,n}$,

    $$\sigma(A)\subseteq\Gamma(A)=\bigcup_{i\in N}\Gamma_i(A),$$

    where the set $\Gamma_i(A)=\{z\in\mathbb{C} \,:\, |z-a_{ii}|\le r_i(A)\}$ is called the i-th Geršgorin disk, while Γ(A) is the Geršgorin set.

    In what follows, we apply the scaling method in order to obtain information on spectra of Schur complements. For this purpose, the scaling will be performed in the form of a similarity transformation, as similar matrices have the same set of eigenvalues. Notice that if $W$ is a diagonal scaling matrix for the given matrix $A$, i.e., if $AW$ is SDD, then the matrix $W^{-1}AW$ is also SDD, as scaling the rows does not affect the SDD property. Having this in mind, for a given non-singular diagonal matrix $W$, we denote

    $$r_i^W(A)=r_i(W^{-1}AW),$$

    and

    $$\Gamma_i^W(A)=\{z\in\mathbb{C} : |z-a_{ii}|\le r_i^W(A)\}.$$

    We also make use of another convenient property of the Schur complement. Namely, the Schur complement interacts well with multiplication by diagonal non-singular matrices, both from the right and from the left. More precisely, for a diagonal non-singular $W$, it holds that

    $$(WA)/\alpha = W(\bar\alpha)\,(A/\alpha),$$

    and also

    $$(AW)/\alpha = (A/\alpha)\,W(\bar\alpha).$$
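    These two commutation identities are easy to confirm numerically; the sketch below uses a random matrix with a shifted diagonal and a random diagonal $W$ (illustrative data, not from the paper):

    ```python
    import numpy as np

    def schur(A, alpha):
        """Schur complement A/alpha for a 0-based index set alpha."""
        n = A.shape[0]
        ac = [i for i in range(n) if i not in alpha]
        return A[np.ix_(ac, ac)] - A[np.ix_(ac, alpha)] @ np.linalg.solve(
            A[np.ix_(alpha, alpha)], A[np.ix_(alpha, ac)])

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5)) + 5.0 * np.eye(5)   # safely non-singular blocks
    W = np.diag(rng.uniform(0.5, 2.0, size=5))
    alpha, ac = [0, 1], [2, 3, 4]

    # (WA)/alpha = W(ac) (A/alpha): left multiplication scales the rows of A/alpha.
    assert np.allclose(schur(W @ A, alpha), W[np.ix_(ac, ac)] @ schur(A, alpha))
    # (AW)/alpha = (A/alpha) W(ac): right multiplication scales the columns.
    assert np.allclose(schur(A @ W, alpha), schur(A, alpha) @ W[np.ix_(ac, ac)])
    ```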

    In [36], the authors provided information on spectra of the Schur complement of an SDD matrix as follows.

    Theorem 3.19. [36] Let $A\in\mathbb{C}^{n,n}$ be an SDD matrix with real diagonal entries, and let $\alpha$ be a proper subset of the index set. Then, $A/\alpha$ and $A(\bar\alpha)$ have the same number of eigenvalues whose real parts are greater (less) than $f(A)$ (resp., $-f(A)$), where

    $$f(A)=\min_{j\in\bar\alpha}\left(|a_{jj}|-r_j(A)+\min_{i\in\alpha}\frac{|a_{ii}|-r_i(A)}{|a_{ii}|}\sum_{k\in\alpha}|a_{jk}|\right). \tag{3.4}$$
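    A direct transcription of (3.4), applied to the SDD matrix of Example 3.22; the helper below is our own sketch, not code from [36]:

    ```python
    import numpy as np

    def f_bound(A, alpha):
        """f(A) from (3.4) for an SDD matrix A and a 0-based index set alpha."""
        n = A.shape[0]
        ac = [j for j in range(n) if j not in alpha]
        d = np.abs(np.diag(A))
        r = np.sum(np.abs(A), axis=1) - d                 # deleted row sums r_i(A)
        m = min((d[i] - r[i]) / d[i] for i in alpha)      # inner minimum over alpha
        return min(d[j] - r[j] + m * sum(abs(A[j, k]) for k in alpha) for j in ac)

    A = np.array([[5.0, 2.0, 0.0, 1.0],
                  [2.0, 4.0, 0.0, 1.0],
                  [1.5, 2.0, 5.0, 1.0],
                  [0.5, 0.0, 2.0, 5.0]])
    print(f_bound(A, [0, 1]))   # alpha = {1,2} in the paper's 1-based notation -> 1.375
    ```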

    In [35], the authors presented another interesting result on spectra of the Schur complement.

    For $A\in\mathbb{C}^{n,n}$, $\alpha=\{i_1,i_2,\dots,i_k\}\subseteq\overline{N_A}$, $\bar\alpha=\{j_1,j_2,\dots,j_l\}$, and every eigenvalue $\lambda$ of $A/\alpha$, there exists $1\le t\le l$ such that

    $$|\lambda-a_{j_t j_t}|\le r_{j_t}(A). \tag{3.5}$$

    In other words, the eigenvalues of $A/\alpha$ are contained in the union of those Geršgorin disks of the original matrix $A$ whose indices are in $\bar\alpha$.

    In further considerations, we apply this result and obtain information on spectra of Schur complements of N-scal matrices.

    Theorem 3.20. Let $A\in\mathbb{C}^{n,n}$ be an N-scal matrix with real diagonal entries, and let $\alpha$ be a proper subset of the index set. Then, $A/\alpha$ and $A(\bar\alpha)$ have the same number of eigenvalues whose real parts are greater (less) than $f(W^{-1}AW)$ (resp., $-f(W^{-1}AW)$), where $f(A)$ is defined as in (3.4) and $W=\mathrm{diag}(w_i)$ is a diagonal scaling matrix for $A$, with $w_i=\varepsilon_i\frac{H_i(A)}{a_{ii}}$, $i=1,2,\dots,n$, where $(\varepsilon_i)_{i=1}^{n}$ is an increasing sequence of numbers with $\varepsilon_1=1$, $\varepsilon_i\in\left(1,\frac{h_i(A)}{H_i(A)}\right)$, $i=2,\dots,n$.

    Proof. Since $A$ is an N-scal matrix with real diagonal entries and $W$ is the corresponding scaling matrix, $W^{-1}AW$ is an SDD matrix with real diagonal entries. Notice that the matrices $(W^{-1}AW)/\alpha$ and $A/\alpha$ are similar, as are the matrices $(W^{-1}AW)(\bar\alpha)$ and $A(\bar\alpha)$, so their spectra coincide. Now, the statement is proved by applying Theorem 3.19.

    The method of the proof holds for all H-matrices with real diagonal entries and their scaling matrices.

    Now, we obtain a Geršgorin-like spectra localization for the Schur complement of an N-scal matrix using scaling matrices.

    Theorem 3.21. Let $A\in\mathbb{C}^{n,n}$ be an N-scal matrix, let $\alpha\subset N$, and let $W=\mathrm{diag}(w_i)$ be a diagonal scaling matrix for $A$, with $w_i=\varepsilon_i\frac{H_i(A)}{a_{ii}}$, $i=1,2,\dots,n$, where $(\varepsilon_i)_{i=1}^{n}$ is an increasing sequence of numbers with $\varepsilon_1=1$, $\varepsilon_i\in\left(1,\frac{h_i(A)}{H_i(A)}\right)$, $i=2,\dots,n$. Then,

    $$\sigma(A/\alpha)\subseteq\bigcup_{j\in\bar\alpha}\Gamma_j^W(A).$$

    Proof. It is easy to see that $(W^{-1}AW)/\alpha$ is similar to $A/\alpha$. Similarity implies that the spectra of these matrices coincide. Applying the results from [35] to the SDD matrix $W^{-1}AW$, we complete the proof.

    In this way, we obtain a Geršgorin-like eigenvalue localization area for the Schur complement of an N-scal matrix using only the entries of the original matrix $A$. Again, the method of the proof holds for any H-matrix $A$ and its scaling matrix $W$; see [12,39].

    Results on spectra of Schur complements can be interpreted in light of the mLCP.

    Consider mLCP$(A,B,C,D,a,b)$ with a non-singular matrix $A$. For $A\in\mathbb{R}^{n,n}$, $B\in\mathbb{R}^{m,m}$, $C\in\mathbb{R}^{n,m}$, $D\in\mathbb{R}^{m,n}$, $a\in\mathbb{R}^{n}$, and $b\in\mathbb{R}^{m}$, the problem can be formulated as follows. Find

    $$u\in\mathbb{R}^{n},\quad v\in\mathbb{R}^{m},\quad v\ge 0,$$

    such that

    $$a+Au+Cv=0,\quad b+Du+Bv\ge 0,\quad v^{T}(b+Du+Bv)=0.$$

    For a non-singular $A$, the given problem can be transformed into the classical LCP$(B-DA^{-1}C,\ b-DA^{-1}a)$. Denote by $M$ the block matrix

    $$M=\begin{bmatrix} A & C \\ D & B \end{bmatrix}.$$

    Then, the matrix that appears in the classical LCP is the Schur complement $M/A=B-DA^{-1}C$. If the matrix $M$ belongs to some special class, we are able to obtain results on eigenvalues as follows.
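    The reduction can be sketched as follows; the block data are illustrative (not from the paper) and chosen so that $v=0$ already solves the reduced LCP:

    ```python
    import numpy as np

    # Illustrative mLCP data.
    A = np.array([[4.0, 1.0], [0.0, 3.0]])
    B = np.array([[5.0, 1.0], [1.0, 6.0]])
    C = np.eye(2)
    D = 2.0 * np.eye(2)
    a = np.array([1.0, -2.0])
    b = np.array([1.0, 2.0])

    # Reduced problem: LCP(M/A, q) with M/A = B - D A^{-1} C and q = b - D A^{-1} a.
    M_over_A = B - D @ np.linalg.solve(A, C)
    q = b - D @ np.linalg.solve(A, a)

    # Here q >= 0, so v = 0 solves the reduced LCP; recover u from the equality block.
    v = np.zeros(2)
    u = -np.linalg.solve(A, a + C @ v)
    assert np.allclose(a + A @ u + C @ v, 0)                  # equality constraints
    w = b + D @ u + B @ v
    assert np.all(w >= -1e-12) and abs(v @ w) < 1e-12         # LCP constraints
    ```

    Any solution $v$ of the reduced LCP yields $u=-A^{-1}(a+Cv)$, and the pair $(u,v)$ solves the original mixed problem.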

    ● If $M\in\mathbb{C}^{n+m,n+m}$, $n+m\ge 2$, is an N-scal matrix with real diagonal entries, then $M/A$ and $B$ have an equal number of eigenvalues with real parts greater (less) than $f(W^{-1}MW)$ (resp., $-f(W^{-1}MW)$). Here, $f(M)$ is defined as in Theorem 3.19 and $W=\mathrm{diag}(w_i)$ is a diagonal scaling matrix for $M$.

    ● For $M\in\mathbb{C}^{n+m,n+m}$ of order $n+m\ge 2$ being an N-scal matrix, let $W=\mathrm{diag}(w_i)$ be a corresponding diagonal scaling matrix. Then,

    $$\sigma(M/A)\subseteq\bigcup_{j\in\{n+1,\dots,n+m\}}\Gamma_j^W(M).$$

    Example 3.22. Consider

    $$A=\begin{bmatrix} 5 & 2 & 0 & 1 \\ 2 & 4 & 0 & 1 \\ 1.5 & 2 & 5 & 1 \\ 0.5 & 0 & 2 & 5 \end{bmatrix}.$$

    It belongs both to the SDD class and to the N-scal class. Let $\alpha=\{1,2\}$. Applying (3.5), we obtain $\Gamma_3(A)=\{z : |z-5|\le 4.5\}$, $\Gamma_4(A)=\{z : |z-5|\le 2.5\}$, and

    $$\sigma(A/\alpha)\subseteq\bigcup_{j\in\{3,4\}}\Gamma_j(A)=:\Gamma^{SDD}(A/\alpha).$$

    From Theorem 3.21 for N-scal matrices, with $\varepsilon_i=1+0.375\,i$, $i=0,1,2,3$, we obtain $\Gamma_3^W(A)=\{z : |z-5|\le 2.6335\}$, $\Gamma_4^W(A)=\{z : |z-5|\le 3.549\}$, and

    $$\sigma(A/\alpha)\subseteq\bigcup_{j\in\{3,4\}}\Gamma_j^W(A)=:\Gamma^{N\text{-}scal}(A/\alpha).$$

    We see that $\Gamma^{N\text{-}scal}(A/\alpha)$ (blue) gives a tighter localization than $\Gamma^{SDD}(A/\alpha)$ (pink); see Figure 1.

    Figure 1.  $\Gamma^{SDD}(A/\alpha)$ and $\Gamma^{N\text{-}scal}(A/\alpha)$ for $\alpha=\{1,2\}$.
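    The containments of Example 3.22 can be verified numerically. In the sketch below, the SDD radii are computed from $A$, while the two scaled radii 2.6335 and 3.549 are taken as given from the text:

    ```python
    import numpy as np

    A = np.array([[5.0, 2.0, 0.0, 1.0],
                  [2.0, 4.0, 0.0, 1.0],
                  [1.5, 2.0, 5.0, 1.0],
                  [0.5, 0.0, 2.0, 5.0]])
    alpha, ac = [0, 1], [2, 3]     # 0-based version of alpha = {1,2}

    # Schur complement A/alpha and its eigenvalues.
    S = A[np.ix_(ac, ac)] - A[np.ix_(ac, alpha)] @ np.linalg.solve(
        A[np.ix_(alpha, alpha)], A[np.ix_(alpha, ac)])
    eigs = np.linalg.eigvals(S)

    radii = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))
    sdd_disks = [(A[j, j], radii[j]) for j in ac]        # radii 4.5 and 2.5
    nscal_disks = [(5.0, 2.6335), (5.0, 3.549)]          # scaled radii from the text

    def in_union(lam, disks):
        return any(abs(lam - c) <= r for c, r in disks)

    assert all(in_union(lam, sdd_disks) for lam in eigs)
    assert all(in_union(lam, nscal_disks) for lam in eigs)
    ```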

    Example 3.23. Consider, again, the matrix $B$ given in (3.3) and set $\alpha=\{1,2,3,4,5,6,7\}$. As $N_B=\{1,2,3,5\}$ and $\alpha\not\subseteq\overline{N_B}$, we cannot use the localization from [35].

    However, we obtain the spectra localization for $B/\alpha$ given in Figure 2 by applying Theorem 3.21, with $\varepsilon_i=1+0.0362\,i$, $i=0,\dots,9$. Here, $\Gamma_8^W(B)=\{z : |z-8|\le 6.68754\}$, $\Gamma_9^W(B)=\{z : |z-9|\le 7.398684\}$, and $\Gamma_{10}^W(B)=\{z : |z-10|\le 8.547165\}$.

    Figure 2.  $\Gamma^{N\text{-}scal}(B/\alpha)$ for $\alpha=\{1,2,3,4,5,6,7\}$.

    Now, let $\alpha=\{4,6,7,8,9,10\}$. As $\alpha\subseteq\overline{N_B}$, we are able to apply the results for both the SDD case and the N-scal case. From (3.5) for the SDD case, we obtain

    $$\sigma(B/\alpha)\subseteq\bigcup_{j\in\{1,2,3,5\}}\Gamma_j(B)=\Gamma^{SDD}(B/\alpha),$$

    where $\Gamma_1(B)=\{z : |z-4|\le 4\}$, $\Gamma_2(B)=\{z : |z-2|\le 3\}$, $\Gamma_3(B)=\{z : |z-3|\le 3\}$, and $\Gamma_5(B)=\{z : |z-5|\le 5\}$.

    Applying Theorem 3.21 for the N-scal class, we obtain

    $$\sigma(B/\alpha)\subseteq\bigcup_{j\in\{1,2,3,5\}}\Gamma_j^W(B)=\Gamma^{N\text{-}scal}(B/\alpha),$$

    where $\Gamma_1^W(B)=\{z : |z-4|\le 3.077659\}$, $\Gamma_2^W(B)=\{z : |z-2|\le 1.607273\}$, $\Gamma_3^W(B)=\{z : |z-3|\le 2.320485\}$, and $\Gamma_5^W(B)=\{z : |z-5|\le 4.128828\}$.

    If we compare the localization areas, we see that $\Gamma^{N\text{-}scal}(B/\alpha)$ (violet) is tighter than $\Gamma^{SDD}(B/\alpha)$ (yellow); see Figure 3.

    Figure 3.  $\Gamma^{SDD}(B/\alpha)$ and $\Gamma^{N\text{-}scal}(B/\alpha)$ for $\alpha=\{4,6,7,8,9,10\}$.

    In this paper, we presented a new sufficient condition for non-singularity of matrices, called the N-scal condition. We proved that matrices satisfying this condition belong to the H-matrix class. We discussed the relation of the new N-scal matrix class to some well-known subclasses of non-singular H-matrices, such as SDD, Ostrowski, SDD1, O-scal, and the Nekrasov class. Numerical examples show that the new class is in general position with respect to each of these classes, i.e., neither class contains the other. We used the new criteria to obtain bounds for the infinity norm of the inverse matrix. Numerical examples illustrate that these bounds can, in some cases, give better results than existing bounds developed for well-known matrix classes. Due to the relations among the classes, it is obvious that the new estimates developed for N-scal matrices can work in some cases when already known estimates cannot be applied. Applying scaling relations between the new class and the well-known matrix classes, we defined error estimation for linear complementarity problems that involve N-scal matrices. Also, Schur complements of N-scal matrices are considered. Spectra localizations for Schur complements are obtained via diagonal scaling in the form of a similarity transformation. If the matrix in consideration is both SDD and N-scal, numerical examples show that the new localization can give a tighter estimate due to the scaling method.

    Both authors have contributed equally to conceptualization, investigation, methodology, formal analysis, writing and editing, validation, and inspection. Both authors have read and approved the final version of the manuscript for publication.

    The authors would like to thank the editor and the anonymous referees for their valuable suggestions and comments. This research has been supported by the Ministry of Science, Technological Development and Innovation (Contract No. 451-03-65/2024-03/200156) and the Faculty of Technical Sciences, University of Novi Sad through project ''Scientific and Artistic Research Work of Researchers in Teaching and Associate Positions at the Faculty of Technical Sciences, University of Novi Sad" (No. 01-3394/1).

    The authors declare that there are no conflicts of interest.
