Research article

New preconditioned Gauss-Seidel type methods for solving multi-linear systems with M-tensors

  • Received: 21 January 2025 Revised: 06 April 2025 Accepted: 19 May 2025 Published: 04 June 2025
  • In this paper, we propose a new preconditioner for the multi-linear systems with M-tensors by utilizing some elements in the first row and column of the majorization matrix associated with the system tensor. The new preconditioner generalizes the one proposed in (Calcolo, 57(2020) 15). Then, we establish three new preconditioned Gauss-Seidel type methods for solving the multi-linear systems with M-tensors. Theoretically, convergence theorems and comparison theorems are given, which show that the proposed methods are convergent, and the third one has the fastest convergence rate and performs better than the original Gauss-Seidel method and the one in (Calcolo, 57(2020) 15). Also, the monotonic theorem is studied. Finally, numerical experiments including some applications in the higher-order Markov chain and directed graph verify the correctness of the theoretical analyses and the effectiveness of the proposed methods, and illustrate that they are superior to the original Gauss-Seidel method and the preconditioned Gauss-Seidel methods in (Calcolo, 57(2020) 15) and (Appl. Numer. Math., 134(2018) 105–121).

    Citation: Xiuwen Zheng, Jingjing Cui, Zhengge Huang. New preconditioned Gauss-Seidel type methods for solving multi-linear systems with M-tensors[J]. Electronic Research Archive, 2025, 33(6): 3450-3481. doi: 10.3934/era.2025153




    In this work, we consider the following multi-linear system:

$$\mathcal{A}y^{m-1}=b, \tag{1.1}$$

where $\mathcal{A}\in\mathbb{R}^{[m,n]}$ and $y,b\in\mathbb{R}^{n}$. Here, $\mathbb{R}^{[m,n]}$ and $\mathbb{R}^{n}$ denote the sets of $m$-order $n$-dimensional real tensors and $n$-dimensional real vectors, respectively. Besides, the tensor-vector product $\mathcal{A}y^{m-1}$ in (1.1) is defined by [1]:

$$(\mathcal{A}y^{m-1})_j=\sum_{p_2,\dots,p_m=1}^{n}a_{jp_2\dots p_m}\,y_{p_2}\cdots y_{p_m},\qquad j=1,\dots,n.$$
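The definition above can be sketched in code. The paper's experiments use MATLAB; the following NumPy sketch (the function name `tensor_vector_product` is ours, not the paper's) contracts the trailing $m-1$ indices of $\mathcal{A}$ with $y$:

```python
import numpy as np

def tensor_vector_product(A, y):
    """Compute (A y^{m-1})_j = sum over p2..pm of a_{j p2...pm} y_{p2}...y_{pm}.

    A is an m-order, n-dimensional tensor stored as an ndarray of shape (n,)*m.
    """
    out = A
    for _ in range(A.ndim - 1):   # contract the trailing index with y, m-1 times
        out = out @ y             # ndarray @ vector sums over the last axis
    return out
```

For $m=2$ this reduces to the ordinary matrix-vector product `A @ y`.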

The multi-linear system (1.1) arises in many scientific and engineering fields, such as tensor complementarity problems, image processing, numerical partial differential equations, and so on [2,3,4,5,6,7,8]. Moreover, many practical problems require computing solutions of (1.1), so solving the multi-linear system (1.1) has become one of the most active research topics in computational mathematics. In this paper, we therefore focus on the numerical solution of the multi-linear system (1.1).

It is noteworthy that, in many practical applications, the coefficient tensors of the multi-linear systems have special structures and are M-tensors. Thus, multi-linear systems with M-tensors have to be solved in many practical problems. In view of this fact, designing efficient methods for solving multi-linear systems with M-tensors has been a hot research topic, and many results and numerous methods have been established recently. For instance, Ding and Wei [8] proved that the multi-linear system (1.1) has a unique positive solution when $\mathcal{A}$ is a strong M-tensor and $b>0$, and they also designed the Newton, Jacobi, and Gauss-Seidel methods for (1.1). Han [9] proposed a homotopy method for multi-linear systems with M-tensors. Xie et al. [10] derived tensor methods for solving symmetric M-tensor systems. Wang et al. [11] proposed a neural-network-based method for (1.1). He et al. [12] developed a Newton-type method to solve multi-linear systems with M-tensors. Besides, tensor splitting methods [13,14] have been presented for solving (1.1) recently. Among these, the tensor splitting method is one of the most popular, because it is efficient and easy to implement in practical calculations. To solve multi-linear systems effectively, Liu et al. [13] proposed the tensor splitting $\mathcal{A}=\mathcal{E}-\mathcal{F}$, and then established a general tensor splitting iterative method of the form:

$$y_l=\left[M(\mathcal{E})^{-1}\mathcal{F}y_{l-1}^{m-1}+M(\mathcal{E})^{-1}b\right]^{\left[\frac{1}{m-1}\right]},\qquad l=1,2,\dots, \tag{1.2}$$

where $M(\mathcal{E})^{-1}\mathcal{F}$ is called the iteration tensor of the tensor splitting method (1.2), and $M(\mathcal{E})$ stands for the majorization matrix of the tensor $\mathcal{E}$ (see Definition 3). As stated in [13], the spectral radius of the iteration tensor can serve as an approximate convergence rate for the tensor splitting iterative method (1.2).
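The iteration (1.2) can be sketched directly. Under the assumption that every iterate stays positive (as guaranteed for strong M-tensors with $b>0$), a minimal NumPy sketch reads as follows; `splitting_method` and `tv_prod` are illustrative names, not from the paper:

```python
import numpy as np

def tv_prod(T, y):
    """(T y^{m-1})_j: contract the trailing m-1 indices of T with y."""
    out = T
    for _ in range(T.ndim - 1):
        out = out @ y
    return out

def splitting_method(A, b, ME, F, y0, tol=1e-10, max_it=10000):
    """General tensor splitting iteration (1.2) for A y^{m-1} = b with A = E - F.

    ME is the (nonsingular) majorization matrix M(E); F is the splitting tensor.
    All iterates are assumed positive, so the (m-1)-th root is well defined.
    """
    m = A.ndim
    y = y0.astype(float).copy()
    for _ in range(max_it):
        y = np.linalg.solve(ME, tv_prod(F, y) + b) ** (1.0 / (m - 1))
        if np.linalg.norm(tv_prod(A, y) - b) < tol:
            break
    return y
```

With the Jacobi-type choice $\mathcal{E}=\mathcal{I}_m$ (so $M(\mathcal{E})=I$) this already converges for small strong M-tensor examples.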

It is well known that preconditioning techniques can decrease the convergence factors of iteration methods so as to accelerate their convergence. Thus, preconditioning has been widely applied to tensor splitting methods, and many efficient preconditioners have been designed for (1.1) in recent years. In [14], Li et al. constructed the preconditioner $P_\alpha=I+S_\alpha$ with

$$S_\alpha=\begin{pmatrix}0&-\alpha_1 a_{12\cdots2}&0&\cdots&0\\0&0&-\alpha_2 a_{23\cdots3}&\cdots&0\\\vdots&\vdots&\ddots&\ddots&\vdots\\0&0&0&\cdots&-\alpha_{n-1}a_{n-1,n\cdots n}\\0&0&0&\cdots&0\end{pmatrix},$$

where $\alpha_q\in\mathbb{R}$ ($q=1,\dots,n-1$) are parameters. They proposed the related preconditioned Gauss-Seidel type and successive over-relaxation (SOR) type methods for (1.1). Then, Cui et al. [15] put forward a preconditioner $P_{\max}=I+S_{\max}$ with

$$S_{\max}=(s_{i,k_i})=\begin{cases}-a_{i,k_i\cdots k_i}, & i=1,\dots,n-1,\ k_i>i,\\0, & \text{otherwise},\end{cases}$$

where $k_i=\min\{j \mid |a_{ij\cdots j}|=\max_{i<j\le n}|a_{ij\cdots j}|\}$. It is shown in [15] that the preconditioner $P_{\max}$ outperforms $P_\alpha$ from the perspective of both theoretical and numerical results. Subsequently, Zhang et al. [16] derived a new preconditioner $\tilde P_\alpha=I+G_\alpha$, where

$$G_\alpha=\begin{pmatrix}0&0&\cdots&0\\-\alpha_2 a_{21\cdots1}&0&\cdots&0\\-\alpha_3 a_{31\cdots1}&0&\cdots&0\\\vdots&\vdots&&\vdots\\-\alpha_n a_{n1\cdots1}&0&\cdots&0\end{pmatrix},$$

and $\alpha_w\in\mathbb{R}$ ($w=2,\dots,n$) are parameters, and proposed three preconditioned Jacobi type methods. Then, by making use of the preconditioner $\tilde P_\alpha$, Liu et al. [17] presented a new preconditioned SOR method for the multi-linear system (1.1); a new preconditioned Gauss-Seidel type method is obtained when the relaxation factor $\tau=1$. Very recently, Chen and Li [18] constructed a new preconditioner $P_{\gamma\beta}=I+S_{\gamma\beta}$ with

$$S_{\gamma\beta}=(\tilde s_{ij})=\begin{cases}-\gamma_j a_{j1\cdots1}\beta_j, & i=1,\ j=2,\dots,n,\\0, & \text{otherwise},\end{cases}$$

where $\gamma_j>0$ and $\beta_j$ ($j=2,\dots,n$) are constants. Besides, Cui et al. [19] introduced a matrix $R_\beta$ into the preconditioner $P_\alpha=I+S_\alpha$ [14] and proposed a more effective preconditioner $\hat P_\beta=I+S_\alpha+R_\beta$ with

$$R_\beta=\begin{pmatrix}0&-\beta_2 a_{12\cdots2}&-\beta_3 a_{13\cdots3}&\cdots&-\beta_n a_{1n\cdots n}\\0&0&0&\cdots&0\\\vdots&\vdots&\vdots&&\vdots\\0&0&0&\cdots&0\end{pmatrix},$$

and $\beta_q\in\mathbb{R}$ ($q=2,\dots,n$). Xie and Miao [20] pointed out that the preconditioning effect of $P_\alpha$ is not observed on the last horizontal slice of the system tensor $\mathcal{A}$ and derived a new preconditioner $\hat P_\gamma=I+S_\alpha+R_\gamma$ and the corresponding preconditioned Gauss-Seidel method, where

$$R_\gamma=(r_{ij})=\begin{cases}-\gamma_t a_{nt\cdots t}, & i=n,\ j=t=1,\dots,n-1,\\0, & \text{otherwise},\end{cases}$$

and $\gamma_t\in\mathbb{R}$ ($t=1,\dots,n-1$) are parameters. Comparison theorems and test results in [20] indicate that the preconditioned Gauss-Seidel method in [20] is more efficient than the one in [14]. In the sequel, Nobakht-Kooshkghazi and Najafi-Kalyani [21] presented a preconditioner $\tilde P=I+S_\alpha+R_\gamma+C_\alpha$, where $(C_\alpha)_{kn}=-\alpha_k a_{kn\cdots n}$ ($k=1,\dots,n-1$) and the remaining elements of $C_\alpha$ are zero. Based on it, the authors of [21] established a new preconditioned Gauss-Seidel method for solving (1.1) and compared it with the one in [20]. Very recently, An et al. [22] replaced the matrix $S_\alpha$ in $\tilde P=I+S_\alpha+R_\gamma+C_\alpha$ [21] by $S_{\max}$ and proposed a new preconditioner $\hat P=I+S_{\max}+R_\gamma+C_\alpha$. It is proved in [22] that the corresponding preconditioned Gauss-Seidel method has a faster convergence rate than the one in [21].

In addition to the preconditioners for multi-linear systems with M-tensors, some researchers have extended the related results to multi-linear systems with other or more general tensors. For example, Wang et al. [23] developed a preconditioned tensor splitting accelerated over-relaxation (AOR) method for solving multi-linear systems with nonsingular H-tensors. Moreover, Cui et al. [24] provided a general preconditioner that contains several existing preconditioners, and constructed two different preconditioned SOR-type splitting methods for multi-linear systems with more general Z-tensors. For more preconditioners for the multi-linear system (1.1), we refer to [25,26] and the references therein.

It can be seen that the preconditioners developed in [14,15,16,17,18,19,24] have a preconditioning effect on some, but not all, horizontal slices of the coefficient tensor $\mathcal{A}$ in (1.1) [27]. That is, the preconditioners in [14,15,19,24] have no preconditioning effect on the last horizontal slice of the system tensor $\mathcal{A}$, and the preconditioners in [16,17,18] have no preconditioning effect on the first horizontal slice of $\mathcal{A}$. Thus, in this work, we construct a new preconditioner for the multi-linear system (1.1) which has a preconditioning effect on all horizontal slices of $\mathcal{A}$ and ameliorates the numerical performance of the preconditioners $P_\alpha$ [14] and $\tilde P_\alpha$ [16,17]. Inspired by the constructions of the preconditioners in [19,20,21,22], and based on the fact that utilizing more entries of the majorization matrix of the coefficient tensor can make a preconditioner more efficient, we design a new preconditioner $P_{\alpha,\beta}=I+G_\alpha+R_\beta$. It is constructed by introducing some elements of the first row of the majorization matrix of the system tensor $\mathcal{A}$ into the preconditioner $\tilde P_\alpha=I+G_\alpha$ [16], where

$$G_\alpha=\begin{pmatrix}0&0&\cdots&0\\-\alpha_2 a_{21\cdots1}&0&\cdots&0\\\vdots&\vdots&&\vdots\\-\alpha_n a_{n1\cdots1}&0&\cdots&0\end{pmatrix},\qquad R_\beta=\begin{pmatrix}0&-\beta_2 a_{12\cdots2}&-\beta_3 a_{13\cdots3}&\cdots&-\beta_n a_{1n\cdots n}\\0&0&0&\cdots&0\\\vdots&\vdots&\vdots&&\vdots\\0&0&0&\cdots&0\end{pmatrix},$$

and $\beta=(\beta_t)$, $\alpha=(\alpha_t)\in\mathbb{R}^{n-1}$ with $\alpha_t,\beta_t\in\mathbb{R}$ ($t=2,\dots,n$). The new preconditioner $P_{\alpha,\beta}=I+G_\alpha+R_\beta$ has the form

$$P_{\alpha,\beta}=\begin{pmatrix}1&-\beta_2 a_{12\cdots2}&-\beta_3 a_{13\cdots3}&\cdots&-\beta_{n-1}a_{1,n-1\cdots n-1}&-\beta_n a_{1n\cdots n}\\-\alpha_2 a_{21\cdots1}&1&0&\cdots&0&0\\-\alpha_3 a_{31\cdots1}&0&1&\cdots&0&0\\\vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\-\alpha_{n-1}a_{n-1,1\cdots1}&0&0&\cdots&1&0\\-\alpha_n a_{n1\cdots1}&0&0&\cdots&0&1\end{pmatrix}. \tag{1.3}$$
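Since $P_{\alpha,\beta}$ only uses entries of the majorization matrix $M(\mathcal{A})$, assembling it is straightforward. A minimal NumPy sketch with 0-based indices (the helper name `build_preconditioner` is ours, not the paper's):

```python
import numpy as np

def build_preconditioner(MA, alpha, beta):
    """Assemble P_{alpha,beta} = I + G_alpha + R_beta from the majorization
    matrix MA = M(A), i.e. MA[k, l] = a_{k l ... l} (0-based indices here).

    alpha, beta are length-n arrays; entries alpha[0], beta[0] are unused,
    mirroring the paper's index ranges alpha_t, beta_t with t = 2, ..., n.
    """
    n = MA.shape[0]
    P = np.eye(n)
    for t in range(1, n):
        P[t, 0] = -alpha[t] * MA[t, 0]   # G_alpha: first column, rows 2..n
        P[0, t] = -beta[t] * MA[0, t]    # R_beta: first row, columns 2..n
    return P
```

Because the off-diagonal entries of $M(\mathcal{A})$ are nonpositive for a Z-tensor, the assembled $P_{\alpha,\beta}$ is entrywise nonnegative for $\alpha_t,\beta_t\ge0$.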

For the preconditioned multi-linear system with the new preconditioner $P_{\alpha,\beta}$, we propose three Gauss-Seidel type splittings of the preconditioned system tensor, and then establish three new preconditioned Gauss-Seidel type methods. We first prove that the preconditioned coefficient tensor is a strong M-tensor, and then investigate the convergence properties of the three proposed preconditioned Gauss-Seidel type methods to show that they are convergent. Meanwhile, we establish several comparison theorems, which indicate that the convergence factors of the proposed preconditioned Gauss-Seidel type methods are monotonically decreasing and that the last one is the most efficient. Also, the third proposed preconditioned Gauss-Seidel type method has a faster convergence rate than the original Gauss-Seidel method and the one in [17]. In addition, we propose a monotonic theorem to illustrate that, for a given parameter $\alpha$, the convergence factor of the third proposed preconditioned Gauss-Seidel type method decreases with the parameter $\beta$ in $[0,1]$, i.e., its convergence rate becomes faster as $\beta$ increases in $[0,1]$. Finally, some numerical examples are given to validate the correctness of the theoretical results and to illustrate that the proposed preconditioned Gauss-Seidel type methods are efficient and outperform the Gauss-Seidel method and the preconditioned Gauss-Seidel methods in [14,17]. We also show the feasibility of our preconditioner and methods by applying them to solve practical problems on higher-order Markov chains and directed graphs.

    The remainder of this paper is organized as follows. In Section 2, we review some basic notations, definitions, and conclusions, which will be used in the subsequent sections. In Section 3, we propose a new preconditioner and present the corresponding preconditioned Gauss-Seidel type methods for solving multi-linear system (1.1). We prove that the proposed methods are convergent in this section as well. In Section 4, we establish some comparison theorems and the monotonic theorem for the proposed preconditioned Gauss-Seidel type methods. In Section 5, numerical experiments are performed to show the efficiencies and superiorities of the proposed preconditioner and iteration methods. As applications, we also apply the preconditioned Gauss-Seidel type methods to solve the practical problems on higher-order Markov chain and directed graph in Section 6. Finally, in Section 7, some conclusions are given to end this paper.

    In this section, we list some notations, definitions, and lemmas, which will be used throughout this paper.

Let $0$, $O$, and $\mathcal{O}$ denote a zero vector, a zero matrix, and a zero tensor, respectively. For $\mathcal{A},\mathcal{B}\in\mathbb{R}^{[m,n]}$, $\mathcal{A}\ge\mathcal{B}$ means that each entry of $\mathcal{A}$ is no less than the corresponding entry of $\mathcal{B}$. If all elements of $\mathcal{B}$ are nonnegative, then $\mathcal{B}$ is called a nonnegative tensor. The set of all $m$-order $n$-dimensional nonnegative tensors is denoted by $\mathbb{R}^{[m,n]}_+$. Let $\rho(\mathcal{B})=\max\{|\lambda| \mid \lambda\in\sigma(\mathcal{B})\}$ be the spectral radius of $\mathcal{B}$, where $\sigma(\mathcal{B})$ is the set of all eigenvalues of $\mathcal{B}$. Let $m\ge2$ be an integer; the unit tensor $\mathcal{I}_m=(\mu_{j_1\dots j_m})\in\mathbb{C}^{[m,n]}$ is given by

$$\mu_{j_1\dots j_m}=\begin{cases}1, & j_1=\dots=j_m,\\0, & \text{else}.\end{cases}$$

    Below some needed definitions are listed.

Definition 1. [28] Let $\mathcal{T}\in\mathbb{R}^{[m,n]}$. A pair $(\zeta,z)\in\mathbb{C}\times(\mathbb{C}^n\setminus\{0\})$ is called an eigenpair of $\mathcal{T}$ if it satisfies the equation

$$\mathcal{T}z^{m-1}=\zeta z^{[m-1]},$$

where $z^{[m-1]}=(z_1^{m-1},\dots,z_n^{m-1})^{T}$. Further, $(\zeta,z)$ is called an H-eigenpair of $\mathcal{T}$ if both $\zeta$ and $z$ are real.

Definition 2. [7] Let $\mathcal{U}\in\mathbb{R}^{[m,n]}$. $\mathcal{U}$ is called a Z-tensor if its off-diagonal entries are nonpositive. $\mathcal{U}$ is called an M-tensor if there exist a tensor $\mathcal{Q}\ge\mathcal{O}$ and a positive constant $\delta\ge\rho(\mathcal{Q})$ such that

$$\mathcal{U}=\delta\mathcal{I}_m-\mathcal{Q}.$$

In addition, if $\delta>\rho(\mathcal{Q})$, then $\mathcal{U}$ is called a strong M-tensor.

Definition 3. [13] Let $\mathcal{U}\in\mathbb{R}^{[m,n]}$. The majorization matrix $M(\mathcal{U})$ of $\mathcal{U}$ is the $n\times n$ matrix with entries

$$M(\mathcal{U})_{kl}=u_{kl\cdots l},\qquad k,l=1,\dots,n.$$
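As a small illustration of Definition 3, the majorization matrix can be extracted as follows (a NumPy sketch with 0-based indices; the helper name is ours):

```python
import numpy as np

def majorization_matrix(U):
    """M(U)[k, l] = u_{k l ... l}: take index l for all m-1 trailing modes."""
    n = U.shape[0]
    M = np.empty((n, n))
    for k in range(n):
        for l in range(n):
            M[k, l] = U[(k,) + (l,) * (U.ndim - 1)]
    return M
```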

Definition 4. [34] Let $F\in\mathbb{R}^{[2,n]}$ and $\mathcal{U}=(u_{i_1\dots i_m})\in\mathbb{R}^{[m,n]}$. The matrix-tensor product $\mathcal{T}=F\mathcal{U}\in\mathbb{R}^{[m,n]}$ is defined by

$$t_{ji_2\dots i_m}=\sum_{j_2=1}^{n}f_{jj_2}\,u_{j_2 i_2\dots i_m},\qquad 1\le j,\,i_\tau\le n,\ \tau=2,\dots,m.$$

The above equation can be written as $\mathcal{T}_{(1)}=(F\mathcal{U})_{(1)}=F\mathcal{U}_{(1)}$, where $\mathcal{T}_{(1)}$ and $\mathcal{U}_{(1)}$ are the matrices obtained from $\mathcal{T}$ and $\mathcal{U}$ by flattening along the first index [27].
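The identity $\mathcal{T}_{(1)}=F\mathcal{U}_{(1)}$ gives a convenient way to implement the matrix-tensor product with a single matrix multiplication; a minimal NumPy sketch (function name ours):

```python
import numpy as np

def mat_tensor_product(F, U):
    """Matrix-tensor product T = F U via the mode-1 unfolding: T_(1) = F @ U_(1)."""
    n = U.shape[0]
    U1 = U.reshape(n, -1)            # flatten U along the first index
    return (F @ U1).reshape(U.shape)
```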

Definition 5. [29] Let $\mathcal{W}\in\mathbb{R}^{[m,n]}$. If $M(\mathcal{W})$ is nonsingular and $\mathcal{W}=M(\mathcal{W})\mathcal{I}_m$, we call $M(\mathcal{W})^{-1}$ the order-2 left-inverse of $\mathcal{W}$.

Definition 6. [13] Let $\mathcal{W}\in\mathbb{R}^{[m,n]}$. If $\mathcal{W}$ has an order-2 left-inverse, then $\mathcal{W}$ is called a left-invertible tensor.

Definition 7. [13] Let $\mathcal{W},\mathcal{G},\mathcal{H}\in\mathbb{R}^{[m,n]}$. $\mathcal{W}=\mathcal{G}-\mathcal{H}$ is said to be a splitting of $\mathcal{W}$ if $\mathcal{G}$ is left-invertible; a convergent splitting if $\rho(M(\mathcal{G})^{-1}\mathcal{H})<1$; a regular splitting of $\mathcal{W}$ if $M(\mathcal{G})^{-1}\ge O$ and $\mathcal{H}\ge\mathcal{O}$; and a weak regular splitting if $M(\mathcal{G})^{-1}\ge O$ and $M(\mathcal{G})^{-1}\mathcal{H}\ge\mathcal{O}$.

Definition 8. [7] A tensor $\mathcal{A}\in\mathbb{R}^{[m,n]}$ is said to be a semi-positive tensor if there exists a vector $x>0$ such that $\mathcal{A}x^{m-1}>0$.

    Additionally, some significant lemmas are reviewed in the following.

Lemma 1. [13] If $\mathcal{W}\in\mathbb{R}^{[m,n]}$ is a Z-tensor, then the following conditions are equivalent:

(1) $\mathcal{W}$ is a strong M-tensor;

(2) There exists $y>0$ such that $\mathcal{W}y^{m-1}>0$;

(3) $\mathcal{W}$ has a convergent (weak) regular splitting;

(4) All (weak) regular splittings of $\mathcal{W}$ are convergent.

Lemma 2. [14] Let $\mathcal{T}\in\mathbb{R}^{[m,n]}_+$. Then it holds that

$$\mu z^{[m-1]}\le(<)\;\mathcal{T}z^{m-1},\quad z\ge0,\ z\ne0\ \Longrightarrow\ \mu\le(<)\;\rho(\mathcal{T}),$$

and

$$\mathcal{T}z^{m-1}\le\varphi z^{[m-1]},\quad z>0\ \Longrightarrow\ \rho(\mathcal{T})\le\varphi.$$

Lemma 3. [8] If $\mathcal{T}\in\mathbb{R}^{[m,n]}$ is a strong M-tensor, then for every $b>0$, the multi-linear system $\mathcal{T}z^{m-1}=b$ has a unique positive solution.

Lemma 4. [19] Let $\mathcal{T}\in\mathbb{R}^{[m,n]}$ be a strong M-tensor. If $\mathcal{T}=\mathcal{G}-\mathcal{H}$ is a splitting with $\mathcal{G}$ being a Z-tensor and $\mathcal{H}$ being nonnegative, then the splitting $\mathcal{T}=\mathcal{G}-\mathcal{H}$ is a convergent regular splitting.

Lemma 5. [14] Let $\mathcal{T}\in\mathbb{R}^{[m,n]}$ be a strong M-tensor and $\mathcal{T}=\mathcal{G}_1-\mathcal{H}_1=\mathcal{G}_2-\mathcal{H}_2$ be two weak regular splittings with $M(\mathcal{G}_2)^{-1}\ge M(\mathcal{G}_1)^{-1}$. If the Perron vector $z$ of $M(\mathcal{G}_2)^{-1}\mathcal{H}_2$ satisfies $\mathcal{T}z^{m-1}\ge0$, then

$$\rho(M(\mathcal{G}_2)^{-1}\mathcal{H}_2)\le\rho(M(\mathcal{G}_1)^{-1}\mathcal{H}_1).$$

Lemma 6. [14] Let $\mathcal{T}\in\mathbb{R}^{[m,n]}$ and $\mathcal{T}=\mathcal{U}_1-\mathcal{V}_1=\mathcal{U}_2-\mathcal{V}_2$ be a weak regular splitting and a regular splitting, respectively. If $\mathcal{V}_2\le\mathcal{V}_1$ and $\mathcal{V}_2\ne\mathcal{O}$, then one of the following statements holds:

(1) $\rho(M(\mathcal{U}_2)^{-1}\mathcal{V}_2)\le\rho(M(\mathcal{U}_1)^{-1}\mathcal{V}_1)<1$.

(2) $\rho(M(\mathcal{U}_2)^{-1}\mathcal{V}_2)\ge\rho(M(\mathcal{U}_1)^{-1}\mathcal{V}_1)\ge1$. Furthermore, if $\mathcal{V}_2<\mathcal{V}_1$, $\mathcal{V}_2\ne\mathcal{O}$ and $\rho(M(\mathcal{U}_1)^{-1}\mathcal{V}_1)>1$, the first inequality is strict.

    Lemma 7. [7] A semi-positive Z-tensor has all positive diagonal entries.

Lemma 8. [13] If $\mathcal{A}$ is a strong M-tensor, then $M(\mathcal{A})$ is a nonsingular M-matrix.

Lemma 9. [37] Let $A$ and $B$ be two nonsingular M-matrices. If $B\ge A$, then $A^{-1}\ge B^{-1}\ge O$.

In this section, we present the new preconditioner $P_{\alpha\beta}$, and then establish three new preconditioned Gauss-Seidel type methods and their convergence properties.

We consider the new preconditioner $P_{\alpha\beta}=I+G_\alpha+R_\beta$, where

$$G_\alpha=\begin{pmatrix}0&0&\cdots&0\\-\alpha_2 a_{21\cdots1}&0&\cdots&0\\\vdots&\vdots&&\vdots\\-\alpha_n a_{n1\cdots1}&0&\cdots&0\end{pmatrix},\qquad R_\beta=\begin{pmatrix}0&-\beta_2 a_{12\cdots2}&-\beta_3 a_{13\cdots3}&\cdots&-\beta_n a_{1n\cdots n}\\0&0&0&\cdots&0\\\vdots&\vdots&\vdots&&\vdots\\0&0&0&\cdots&0\end{pmatrix},$$

and $\alpha=(\alpha_j)$, $\beta=(\beta_j)$ ($j=2,\dots,n$) are parameters. By premultiplying the multi-linear system (1.1) by the new preconditioner $P_{\alpha\beta}=I+G_\alpha+R_\beta$, we transform the original system (1.1) into the preconditioned form

$$\mathcal{A}_{\alpha\beta}x^{m-1}=b_{\alpha\beta},$$

with $\mathcal{A}_{\alpha\beta}=P_{\alpha\beta}\mathcal{A}=(I+G_\alpha+R_\beta)\mathcal{A}$ and $b_{\alpha\beta}=P_{\alpha\beta}b$. Without loss of generality, we assume that $a_{jj\cdots j}=1$ ($j=1,\dots,n$) as in [13,14,15,16,18,19,20,21,22,23,24].

Let $\mathcal{A}=\mathcal{I}_m-\mathcal{L}-\mathcal{F}$, where $\mathcal{L}=L\mathcal{I}_m$, $-L$ is the strictly lower triangular part of $M(\mathcal{A})$ (so that $L\ge O$), and $\mathcal{I}_m$ is the unit tensor. According to $G_\alpha\mathcal{L}=\mathcal{O}$, by some computations we can obtain

$$\mathcal{A}_{\alpha\beta}=\mathcal{I}_m-\mathcal{L}-\mathcal{F}+G_\alpha\mathcal{I}_m-G_\alpha\mathcal{F}+R_\beta\mathcal{I}_m-R_\beta\mathcal{L}-R_\beta\mathcal{F},$$

$$M(R_\beta\mathcal{L})=\begin{pmatrix}\sum_{k=2}^{n}\beta_k a_{1k\cdots k}a_{k1\cdots1}&\sum_{k=3}^{n}\beta_k a_{1k\cdots k}a_{k2\cdots2}&\cdots&\beta_n a_{1n\cdots n}a_{n,n-1\cdots n-1}&0\\0&0&\cdots&0&0\\\vdots&\vdots&&\vdots&\vdots\\0&0&\cdots&0&0\end{pmatrix},$$

$$M(G_\alpha\mathcal{F})=\begin{pmatrix}0&0&0&\cdots&0\\0&\alpha_2 a_{21\cdots1}a_{12\cdots2}&\alpha_2 a_{21\cdots1}a_{13\cdots3}&\cdots&\alpha_2 a_{21\cdots1}a_{1n\cdots n}\\0&\alpha_3 a_{31\cdots1}a_{12\cdots2}&\alpha_3 a_{31\cdots1}a_{13\cdots3}&\cdots&\alpha_3 a_{31\cdots1}a_{1n\cdots n}\\\vdots&\vdots&\vdots&&\vdots\\0&\alpha_n a_{n1\cdots1}a_{12\cdots2}&\alpha_n a_{n1\cdots1}a_{13\cdots3}&\cdots&\alpha_n a_{n1\cdots1}a_{1n\cdots n}\end{pmatrix}.$$

Let $G_\alpha\mathcal{F}=\mathcal{D}_1-\mathcal{L}_1-\mathcal{F}_1$ and $R_\beta\mathcal{L}=\mathcal{D}_2-\mathcal{F}_2$, where $\mathcal{D}_1=D_1\mathcal{I}_m$, $\mathcal{D}_2=D_2\mathcal{I}_m$, and $\mathcal{L}_1=L_1\mathcal{I}_m$. Here, $D_1$ and $D_2$ are the diagonal parts of $M(G_\alpha\mathcal{F})$ and $M(R_\beta\mathcal{L})$, respectively, and $-L_1$ is the strictly lower triangular part of $M(G_\alpha\mathcal{F})$.

    In what follows, we will study the convergence of the new preconditioned Gauss-Seidel type methods. To this end, we need the following lemma, which shows that Aαβ is a strong M-tensor.

Lemma 10. Let $\mathcal{A}\in\mathbb{R}^{[m,n]}$ be a strong M-tensor and $\alpha_j,\beta_j\in[0,1]$ ($j=2,\dots,n$). Then $\mathcal{A}_{\alpha\beta}$ is a strong M-tensor.

Proof. We first prove that $\mathcal{A}_{\alpha\beta}$ is a Z-tensor. Denote $\mathcal{A}_{\alpha\beta}=(\tilde a_{ji_2\dots i_m})$. By $\mathcal{A}_{\alpha\beta}=(I+G_\alpha+R_\beta)\mathcal{A}$,

$$\tilde a_{ji_2\dots i_m}=\begin{cases}a_{1i_2\dots i_m}-\sum_{i=2}^{n}\beta_i a_{1i\cdots i}\,a_{ii_2\dots i_m}, & j=1,\\ a_{ji_2\dots i_m}-\alpha_j a_{j1\cdots1}\,a_{1i_2\dots i_m}, & j=2,\dots,n.\end{cases}$$

Now we prove that the off-diagonal entries of $\mathcal{A}_{\alpha\beta}$ are nonpositive in the following cases:

(1) When $j=1$ and $(i_2,\dots,i_m)=(k,\dots,k)$ with $k\ne j$, $k\in\{2,\dots,n\}$, it holds that

$$\tilde a_{1k\cdots k}=a_{1k\cdots k}-\beta_k a_{1k\cdots k}a_{kk\cdots k}-\sum_{i=2,\,i\ne k}^{n}\beta_i a_{1i\cdots i}a_{ik\cdots k}=(1-\beta_k)a_{1k\cdots k}-\sum_{i=2,\,i\ne k}^{n}\beta_i a_{1i\cdots i}a_{ik\cdots k}\le0.$$

For $(i_2,\dots,i_m)\ne(k,\dots,k)$, $k\in\{1,2,\dots,n\}$, we have

$$\tilde a_{1i_2\dots i_m}=a_{1i_2\dots i_m}-\sum_{i=2}^{n}\beta_i a_{1i\cdots i}a_{ii_2\dots i_m}\le0.$$

(2) When $j=2,\dots,n$ and $(i_2,\dots,i_m)=(1,\dots,1)$, it follows that

$$\tilde a_{j1\cdots1}=a_{j1\cdots1}-\alpha_j a_{j1\cdots1}a_{11\cdots1}=(1-\alpha_j)a_{j1\cdots1}\le0.$$

For $(i_2,\dots,i_m)=(k,\dots,k)$, $k\in\{2,\dots,n\}$, $k\ne j$, it holds that

$$\tilde a_{jk\cdots k}=a_{jk\cdots k}-\alpha_j a_{j1\cdots1}a_{1k\cdots k}\le0.$$

For $(i_2,\dots,i_m)\ne(k,\dots,k)$, $k\in\{1,2,\dots,n\}$, we have

$$\tilde a_{ji_2\dots i_m}=a_{ji_2\dots i_m}-\alpha_j a_{j1\cdots1}a_{1i_2\dots i_m}\le0.$$

Thus, for $(j,i_2,\dots,i_m)\ne(j,j,\dots,j)$, when $0\le\alpha_j\le1$ and $0\le\beta_j\le1$ ($j=2,\dots,n$), we have $\tilde a_{ji_2\dots i_m}\le0$. Therefore, $\mathcal{A}_{\alpha\beta}$ is a Z-tensor. Now we prove that the Z-tensor $\mathcal{A}_{\alpha\beta}$ is a strong M-tensor. Having in mind that $\mathcal{A}$ is a strong M-tensor, according to Lemma 1, there exists $x>0$ such that $\mathcal{A}x^{m-1}>0$. Since $G_\alpha\ge O$ and $R_\beta\ge O$, we can conclude that $\mathcal{A}_{\alpha\beta}x^{m-1}=(I+G_\alpha+R_\beta)\mathcal{A}x^{m-1}=\mathcal{A}x^{m-1}+G_\alpha\mathcal{A}x^{m-1}+R_\beta\mathcal{A}x^{m-1}>0$. Therefore, again using Lemma 1, $\mathcal{A}_{\alpha\beta}$ is a strong M-tensor. $\square$

Theorem 1. For $P_{\alpha\beta}=I+G_\alpha+R_\beta$ with $\alpha_j,\beta_j\in[0,1]$ ($j=2,\dots,n$), the preconditioned multi-linear system $P_{\alpha\beta}\mathcal{A}x^{m-1}=P_{\alpha\beta}b$ has the same unique positive solution as $\mathcal{A}x^{m-1}=b$.

Proof. The conclusion follows from Lemma 3 and Lemma 10. $\square$

    Below, we construct three new preconditioned Gauss-Seidel type methods

$$\begin{aligned}y_k&=\left[M(\check{\mathcal{E}}_1)^{-1}\check{\mathcal{F}}_1 y_{k-1}^{m-1}+M(\check{\mathcal{E}}_1)^{-1}P_{\alpha\beta}b\right]^{[1/(m-1)]},\quad k=1,2,\dots,\\ y_k&=\left[M(\check{\mathcal{E}}_2)^{-1}\check{\mathcal{F}}_2 y_{k-1}^{m-1}+M(\check{\mathcal{E}}_2)^{-1}P_{\alpha\beta}b\right]^{[1/(m-1)]},\quad k=1,2,\dots,\\ y_k&=\left[M(\check{\mathcal{E}}_3)^{-1}\check{\mathcal{F}}_3 y_{k-1}^{m-1}+M(\check{\mathcal{E}}_3)^{-1}P_{\alpha\beta}b\right]^{[1/(m-1)]},\quad k=1,2,\dots,\end{aligned} \tag{3.1}$$

in terms of the following three Gauss-Seidel type splittings of $\mathcal{A}_{\alpha\beta}$:

$$\begin{aligned}\mathcal{A}_{\alpha\beta}&=(\mathcal{I}_m-\mathcal{L}+G_\alpha\mathcal{I}_m)-(\mathcal{F}+G_\alpha\mathcal{F}-R_\beta\mathcal{I}_m+R_\beta\mathcal{L}+R_\beta\mathcal{F})=\check{\mathcal{E}}_1-\check{\mathcal{F}}_1,\\ \mathcal{A}_{\alpha\beta}&=(\mathcal{I}_m-\mathcal{L}+G_\alpha\mathcal{I}_m-\mathcal{D}_1+\mathcal{L}_1)-(\mathcal{F}-\mathcal{F}_1-R_\beta\mathcal{I}_m+R_\beta\mathcal{L}+R_\beta\mathcal{F})=\check{\mathcal{E}}_2-\check{\mathcal{F}}_2,\\ \mathcal{A}_{\alpha\beta}&=(\mathcal{I}_m-\mathcal{L}+G_\alpha\mathcal{I}_m-\mathcal{D}_1+\mathcal{L}_1-\mathcal{D}_2)-(\mathcal{F}-\mathcal{F}_1-R_\beta\mathcal{I}_m-\mathcal{F}_2+R_\beta\mathcal{F})=\check{\mathcal{E}}_3-\check{\mathcal{F}}_3,\end{aligned} \tag{3.2}$$

respectively. According to (3.2), the three preconditioned Gauss-Seidel type iteration tensors are

$$\begin{aligned}\check{\mathcal{T}}_1&=M(\check{\mathcal{E}}_1)^{-1}\check{\mathcal{F}}_1=(I-L+G_\alpha)^{-1}(\mathcal{F}+G_\alpha\mathcal{F}-R_\beta\mathcal{I}_m+R_\beta\mathcal{L}+R_\beta\mathcal{F}),\\ \check{\mathcal{T}}_2&=M(\check{\mathcal{E}}_2)^{-1}\check{\mathcal{F}}_2=(I-L+G_\alpha-D_1+L_1)^{-1}(\mathcal{F}-\mathcal{F}_1-R_\beta\mathcal{I}_m+R_\beta\mathcal{L}+R_\beta\mathcal{F}),\\ \check{\mathcal{T}}_3&=M(\check{\mathcal{E}}_3)^{-1}\check{\mathcal{F}}_3=(I-L+G_\alpha-D_1+L_1-D_2)^{-1}(\mathcal{F}-\mathcal{F}_1-R_\beta\mathcal{I}_m-\mathcal{F}_2+R_\beta\mathcal{F}).\end{aligned} \tag{3.3}$$
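To illustrate the computational pattern shared by the methods in (3.1), the sketch below runs a generic Gauss-Seidel splitting of the preconditioned tensor, taking $M(\check{\mathcal{E}})$ as the lower-triangular part of $M(P\mathcal{A})$. This is a simplified model of the iteration (it does not reproduce the exact splittings (3.2)), and all helper names are ours:

```python
import numpy as np

def tv(T, y):
    """(T y^{m-1})_j: contract the trailing indices of T with y."""
    out = T
    for _ in range(T.ndim - 1):
        out = out @ y
    return out

def maj(T):
    """Majorization matrix: maj(T)[k, l] = T[k, l, ..., l]."""
    n = T.shape[0]
    return np.array([[T[(k,) + (l,) * (T.ndim - 1)] for l in range(n)]
                     for k in range(n)])

def precond_gs(A, b, P, y0, tol=1e-10, max_it=10000):
    """Generic Gauss-Seidel splitting of the preconditioned system P A y^{m-1} = P b.

    Take E = tril(M(PA)) I_m and F = E - PA, giving the iteration
        y_k = [ M(E)^{-1} F y_{k-1}^{m-1} + M(E)^{-1} P b ]^{[1/(m-1)]}.
    """
    m, n = A.ndim, A.shape[0]
    Apre = (P @ A.reshape(n, -1)).reshape(A.shape)   # P A via mode-1 unfolding
    ME = np.tril(maj(Apre))                          # majorization matrix of E
    E = np.zeros_like(A)
    for k in range(n):
        for l in range(n):
            E[(k,) + (l,) * (m - 1)] = ME[k, l]      # E = tril(M(PA)) I_m
    F = E - Apre
    bp = P @ b
    y = y0.astype(float).copy()
    for _ in range(max_it):
        y = np.linalg.solve(ME, tv(F, y) + bp) ** (1.0 / (m - 1))
        if np.linalg.norm(tv(A, y) - b) < tol:
            break
    return y
```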

In [17], when the relaxation factor is $\tau=1$, the Gauss-Seidel type splitting of $\tilde{\mathcal{A}}=(I+G_\alpha)\mathcal{A}$ is defined as

$$\tilde{\mathcal{A}}=(\mathcal{I}_m-\mathcal{L}+G_\alpha\mathcal{I}_m-\mathcal{D}_1+\mathcal{L}_1)-(\mathcal{F}-\mathcal{F}_1):=\check{\mathcal{E}}_4-\check{\mathcal{F}}_4.$$

If $M(\check{\mathcal{E}}_4)^{-1}$ exists, then we use $\check{\mathcal{T}}=M(\check{\mathcal{E}}_4)^{-1}\check{\mathcal{F}}_4$ to denote the corresponding iteration tensor.

    Next, we prove that the three Gauss-Seidel type splittings in (3.2) are convergent.

Theorem 2. Let $\mathcal{A}\in\mathbb{R}^{[m,n]}$ be a strong M-tensor and $0\le\alpha_j\le1$, $0\le\beta_j\le1$ ($j=2,\dots,n$). Then $\mathcal{A}_{\alpha\beta}=\check{\mathcal{E}}_1-\check{\mathcal{F}}_1=\check{\mathcal{E}}_2-\check{\mathcal{F}}_2=\check{\mathcal{E}}_3-\check{\mathcal{F}}_3$ are convergent regular splittings.

    Proof. We divide our proof into three cases.

(1) It is not difficult to verify that $\mathcal{L}-G_\alpha\mathcal{I}_m\ge\mathcal{O}$ and $\mathcal{F}-R_\beta\mathcal{I}_m\ge\mathcal{O}$ under the conditions $\alpha_j,\beta_j\in[0,1]$ ($j=2,\dots,n$), which means that the off-diagonal entries of $\check{\mathcal{E}}_1=\mathcal{I}_m-(\mathcal{L}-G_\alpha\mathcal{I}_m)$ are nonpositive; therefore, $\check{\mathcal{E}}_1$ is a Z-tensor. Besides, $\check{\mathcal{F}}_1=\mathcal{F}+G_\alpha\mathcal{F}-R_\beta\mathcal{I}_m+R_\beta\mathcal{L}+R_\beta\mathcal{F}$ is a nonnegative tensor in view of $\mathcal{F}-R_\beta\mathcal{I}_m\ge\mathcal{O}$. Then, according to Lemma 4, $\mathcal{A}_{\alpha\beta}=\check{\mathcal{E}}_1-\check{\mathcal{F}}_1$ is a convergent regular splitting.

(2) Inasmuch as $-L_1$ is the strictly lower triangular part of $M(G_\alpha\mathcal{F})$ and $G_\alpha\mathcal{F}\ge\mathcal{O}$, we have $\mathcal{L}_1\le\mathcal{O}$. It follows from $\mathcal{L}-G_\alpha\mathcal{I}_m\ge\mathcal{O}$ that the off-diagonal entries of $\check{\mathcal{E}}_2=(\mathcal{I}_m-\mathcal{D}_1)-(\mathcal{L}-G_\alpha\mathcal{I}_m-\mathcal{L}_1)$ are nonpositive, so $\check{\mathcal{E}}_2$ is a Z-tensor. Owing to $\mathcal{F}-R_\beta\mathcal{I}_m\ge\mathcal{O}$, it holds that $\check{\mathcal{F}}_2=\mathcal{F}-\mathcal{F}_1-R_\beta\mathcal{I}_m+R_\beta\mathcal{L}+R_\beta\mathcal{F}\ge\mathcal{O}$. By making use of Lemma 4, we have that $\mathcal{A}_{\alpha\beta}=\check{\mathcal{E}}_2-\check{\mathcal{F}}_2$ is a convergent regular splitting.

(3) In the same way, we can prove that $\check{\mathcal{E}}_3=(\mathcal{I}_m-\mathcal{D}_1-\mathcal{D}_2)-(\mathcal{L}-G_\alpha\mathcal{I}_m-\mathcal{L}_1)$ is a Z-tensor and $\check{\mathcal{F}}_3=\mathcal{F}-\mathcal{F}_1-R_\beta\mathcal{I}_m-\mathcal{F}_2+R_\beta\mathcal{F}\ge\mathcal{O}$; then $\mathcal{A}_{\alpha\beta}=\check{\mathcal{E}}_3-\check{\mathcal{F}}_3$ is a convergent regular splitting by Lemma 4. $\square$

In this section, we first prove that the third proposed Gauss-Seidel splitting of $\mathcal{A}_{\alpha\beta}$ (the corresponding method is denoted by "P$\check{\mathcal{T}}_3$G-S" throughout this paper) is optimal among the three. Then, we present several comparison theorems, which show that the P$\check{\mathcal{T}}_3$G-S method has a faster convergence rate than the original Gauss-Seidel method and the one in [17]. Lastly, we prove that, for a given $\alpha$, the convergence speed of the P$\check{\mathcal{T}}_3$G-S method increases with the parameters $\beta_j$ ($j=2,\dots,n$) in the interval $[0,1]$.

Theorem 3. Let $\mathcal{A}\in\mathbb{R}^{[m,n]}$ be a strong M-tensor and $0\le\alpha_j\le1$ ($j=2,\dots,n$). Then $\tilde{\mathcal{A}}=\check{\mathcal{E}}_4-\check{\mathcal{F}}_4$ is a convergent regular splitting.

Proof. As in the proof of Theorem 2, $\check{\mathcal{E}}_4=(\mathcal{I}_m-\mathcal{D}_1)-(\mathcal{L}-G_\alpha\mathcal{I}_m-\mathcal{L}_1)$ is a Z-tensor. It is easy to verify that $\check{\mathcal{F}}_4=\mathcal{F}-\mathcal{F}_1$ is a nonnegative tensor. Then, by Lemma 4, $\tilde{\mathcal{A}}=\check{\mathcal{E}}_4-\check{\mathcal{F}}_4$ is a convergent regular splitting. $\square$

Next, we prove that the P$\check{\mathcal{T}}_3$G-S method is optimal among the three proposed methods, and compare it with the preconditioned Gauss-Seidel type method based on the splitting $\tilde{\mathcal{A}}=\check{\mathcal{E}}_4-\check{\mathcal{F}}_4$ in [17].

Theorem 4. Let $\mathcal{A}\in\mathbb{R}^{[m,n]}$ be a strong M-tensor and $0\le\alpha_j\le1$, $0\le\beta_j\le1$ ($j=2,\dots,n$). Then

$$\rho(M(\check{\mathcal{E}}_3)^{-1}\check{\mathcal{F}}_3)\le\rho(M(\check{\mathcal{E}}_2)^{-1}\check{\mathcal{F}}_2)\le\rho(M(\check{\mathcal{E}}_1)^{-1}\check{\mathcal{F}}_1).$$

Proof. It can be seen from Theorem 2 that $\mathcal{A}_{\alpha\beta}=\check{\mathcal{E}}_1-\check{\mathcal{F}}_1=\check{\mathcal{E}}_2-\check{\mathcal{F}}_2=\check{\mathcal{E}}_3-\check{\mathcal{F}}_3$ are three convergent regular splittings. By straightforward computations, it holds that

$$\begin{aligned}\check{\mathcal{F}}_2-\check{\mathcal{F}}_1&=(\mathcal{F}-\mathcal{F}_1-R_\beta\mathcal{I}_m+R_\beta\mathcal{L}+R_\beta\mathcal{F})-(\mathcal{F}+G_\alpha\mathcal{F}-R_\beta\mathcal{I}_m+R_\beta\mathcal{L}+R_\beta\mathcal{F})=-\mathcal{D}_1+\mathcal{L}_1\le\mathcal{O},\\ \check{\mathcal{F}}_3-\check{\mathcal{F}}_1&=(\mathcal{F}-\mathcal{F}_1-R_\beta\mathcal{I}_m-\mathcal{F}_2+R_\beta\mathcal{F})-(\mathcal{F}+G_\alpha\mathcal{F}-R_\beta\mathcal{I}_m+R_\beta\mathcal{L}+R_\beta\mathcal{F})=-\mathcal{D}_1+\mathcal{L}_1-\mathcal{D}_2\le\mathcal{O},\\ \check{\mathcal{F}}_3-\check{\mathcal{F}}_2&=(\mathcal{F}-\mathcal{F}_1-R_\beta\mathcal{I}_m-\mathcal{F}_2+R_\beta\mathcal{F})-(\mathcal{F}-\mathcal{F}_1-R_\beta\mathcal{I}_m+R_\beta\mathcal{L}+R_\beta\mathcal{F})=-\mathcal{D}_2\le\mathcal{O},\end{aligned}$$

i.e., $\check{\mathcal{F}}_2\le\check{\mathcal{F}}_1$, $\check{\mathcal{F}}_3\le\check{\mathcal{F}}_1$, and $\check{\mathcal{F}}_3\le\check{\mathcal{F}}_2$. Below, we distinguish two cases.

1) If $\check{\mathcal{F}}_2\ne\mathcal{O}$, then it follows from Lemma 6 that $\rho(M(\check{\mathcal{E}}_2)^{-1}\check{\mathcal{F}}_2)\le\rho(M(\check{\mathcal{E}}_1)^{-1}\check{\mathcal{F}}_1)<1$. If $\check{\mathcal{F}}_3=\mathcal{O}$, then $\rho(M(\check{\mathcal{E}}_3)^{-1}\check{\mathcal{F}}_3)=0$, which together with $\rho(M(\check{\mathcal{E}}_2)^{-1}\check{\mathcal{F}}_2)\le\rho(M(\check{\mathcal{E}}_1)^{-1}\check{\mathcal{F}}_1)$ yields $\rho(M(\check{\mathcal{E}}_3)^{-1}\check{\mathcal{F}}_3)\le\rho(M(\check{\mathcal{E}}_2)^{-1}\check{\mathcal{F}}_2)\le\rho(M(\check{\mathcal{E}}_1)^{-1}\check{\mathcal{F}}_1)$. If $\check{\mathcal{F}}_3\ne\mathcal{O}$, then $\rho(M(\check{\mathcal{E}}_3)^{-1}\check{\mathcal{F}}_3)\le\rho(M(\check{\mathcal{E}}_2)^{-1}\check{\mathcal{F}}_2)<1$ in terms of Lemma 6, which combined with $\rho(M(\check{\mathcal{E}}_2)^{-1}\check{\mathcal{F}}_2)\le\rho(M(\check{\mathcal{E}}_1)^{-1}\check{\mathcal{F}}_1)$ leads to $\rho(M(\check{\mathcal{E}}_3)^{-1}\check{\mathcal{F}}_3)\le\rho(M(\check{\mathcal{E}}_2)^{-1}\check{\mathcal{F}}_2)\le\rho(M(\check{\mathcal{E}}_1)^{-1}\check{\mathcal{F}}_1)$.

2) If $\check{\mathcal{F}}_2=\mathcal{O}$, then it follows from $\mathcal{O}\le\check{\mathcal{F}}_3\le\check{\mathcal{F}}_2$ that $\check{\mathcal{F}}_3=\mathcal{O}$, and hence $\rho(M(\check{\mathcal{E}}_3)^{-1}\check{\mathcal{F}}_3)=\rho(M(\check{\mathcal{E}}_2)^{-1}\check{\mathcal{F}}_2)=0\le\rho(M(\check{\mathcal{E}}_1)^{-1}\check{\mathcal{F}}_1)$.

By summarizing the above discussions, we obtain the conclusion of this theorem. $\square$

Remark 4.1. Theorem 4 shows that $\rho(M(\check{\mathcal{E}}_3)^{-1}\check{\mathcal{F}}_3)\le\rho(M(\check{\mathcal{E}}_2)^{-1}\check{\mathcal{F}}_2)\le\rho(M(\check{\mathcal{E}}_1)^{-1}\check{\mathcal{F}}_1)$, which implies that the convergence factors of the three proposed preconditioned Gauss-Seidel type methods are monotonically decreasing, and that the P$\check{\mathcal{T}}_3$G-S method has the fastest convergence speed among the proposed methods.

Note that the convergence properties of the three proposed preconditioned Gauss-Seidel type methods are similar. From the results of the numerical experiments in Sections 5 and 6, we see that there is not much difference among the three methods. Thus, in the following analyses and theorems, we mainly discuss the properties and results of the third proposed preconditioned Gauss-Seidel type method, denoted by the "P$\check{\mathcal{T}}_3$G-S method".

Now, we give a comparison theorem for the convergence factors of the original Gauss-Seidel method and the P$\check{\mathcal{T}}_3$G-S method.

Theorem 5. Let $\mathcal{A}\in\mathbb{R}^{[m,n]}$ be a strong M-tensor and $0\le\alpha_j\le1$, $0\le\beta_j\le1$ ($j=2,\dots,n$). Assume that $M(\mathcal{I}_m-\mathcal{L})^{-1}\mathcal{F}$ is irreducible. Then we have

$$\rho(M(\check{\mathcal{E}}_3)^{-1}\check{\mathcal{F}}_3)\le\rho(M(\mathcal{I}_m-\mathcal{L})^{-1}\mathcal{F}).$$

Proof. By the assumptions, we can deduce that $\mathcal{A}=(\mathcal{I}_m-\mathcal{L})-\mathcal{F}$ is a convergent regular splitting. Since $M(\mathcal{I}_m-\mathcal{L})^{-1}\mathcal{F}$ is irreducible, by the strong Perron-Frobenius theorem [35], there exists a positive Perron vector $x$ of $M(\mathcal{I}_m-\mathcal{L})^{-1}\mathcal{F}$ such that

$$M(\mathcal{I}_m-\mathcal{L})^{-1}\mathcal{F}x^{m-1}=\lambda x^{[m-1]},$$

where $0<\lambda=\rho(M(\mathcal{I}_m-\mathcal{L})^{-1}\mathcal{F})<1$. Then it holds that

$$(\mathcal{F}-\lambda\mathcal{I}_m+\lambda\mathcal{L})x^{m-1}=0 \tag{4.1}$$

and

$$G_\alpha\mathcal{F}x^{m-1}=\lambda G_\alpha\mathcal{I}_m x^{m-1}. \tag{4.2}$$

Since $\mathcal{A}=(\mathcal{I}_m-\mathcal{L})-\mathcal{F}$ and $M(\mathcal{I}_m-\mathcal{L})^{-1}\mathcal{F}x^{m-1}=\lambda x^{[m-1]}$, it holds that $\frac{1}{\lambda}\mathcal{F}x^{m-1}=(\mathcal{I}_m-\mathcal{L})x^{m-1}$ and, therefore,

$$\mathcal{A}x^{m-1}=(\mathcal{I}_m-\mathcal{L}-\mathcal{F})x^{m-1}=\frac{1}{\lambda}\mathcal{F}x^{m-1}-\mathcal{F}x^{m-1}=\frac{1-\lambda}{\lambda}\mathcal{F}x^{m-1}. \tag{4.3}$$

By premultiplying Eq (4.3) by $R_\beta$, we have

$$R_\beta\mathcal{A}x^{m-1}=\frac{1-\lambda}{\lambda}R_\beta\mathcal{F}x^{m-1}\ge0. \tag{4.4}$$

Hence the combination of (4.1)-(4.4) results in

$$\begin{aligned}\check{\mathcal{T}}_3 x^{m-1}-\lambda x^{[m-1]}&=M(\check{\mathcal{E}}_3)^{-1}(\check{\mathcal{F}}_3-\lambda\check{\mathcal{E}}_3)x^{m-1}\\&=M(\check{\mathcal{E}}_3)^{-1}\left[(\mathcal{F}-\mathcal{F}_1-R_\beta\mathcal{I}_m-\mathcal{F}_2+R_\beta\mathcal{F})-\lambda(\mathcal{I}_m-\mathcal{L}+G_\alpha\mathcal{I}_m-\mathcal{D}_1+\mathcal{L}_1-\mathcal{D}_2)\right]x^{m-1}\\&=M(\check{\mathcal{E}}_3)^{-1}\left[\mathcal{F}+G_\alpha\mathcal{F}-\mathcal{D}_1+\mathcal{L}_1-R_\beta\mathcal{I}_m+R_\beta\mathcal{L}-\mathcal{D}_2+R_\beta\mathcal{F}-\lambda\mathcal{I}_m+\lambda\mathcal{L}-\lambda G_\alpha\mathcal{I}_m+\lambda\mathcal{D}_1-\lambda\mathcal{L}_1+\lambda\mathcal{D}_2\right]x^{m-1}\\&=M(\check{\mathcal{E}}_3)^{-1}\left[(\lambda-1)(\mathcal{D}_1+\mathcal{D}_2)-R_\beta(\mathcal{I}_m-\mathcal{L}-\mathcal{F})+(1-\lambda)\mathcal{L}_1+(\mathcal{F}-\lambda\mathcal{I}_m+\lambda\mathcal{L})+G_\alpha\mathcal{F}-\lambda G_\alpha\mathcal{I}_m\right]x^{m-1}\\&=M(\check{\mathcal{E}}_3)^{-1}\left[(\lambda-1)(\mathcal{D}_1+\mathcal{D}_2)-\frac{1-\lambda}{\lambda}R_\beta\mathcal{F}+(1-\lambda)\mathcal{L}_1\right]x^{m-1}\\&=(\lambda-1)M(\check{\mathcal{E}}_3)^{-1}\left(\mathcal{D}_1+\mathcal{D}_2+\frac{1}{\lambda}R_\beta\mathcal{F}-\mathcal{L}_1\right)x^{m-1}.\end{aligned} \tag{4.5}$$

By Theorem 2, we know that $\mathcal{A}_{\alpha\beta}=\check{\mathcal{E}}_3-\check{\mathcal{F}}_3$ is a convergent regular splitting, so $M(\check{\mathcal{E}}_3)^{-1}\ge O$. Since $\mathcal{D}_1$, $\mathcal{D}_2$, $-\mathcal{L}_1$, $R_\beta$, and $\mathcal{F}$ are nonnegative and $\lambda<1$, it follows from (4.5) that $\check{\mathcal{T}}_3 x^{m-1}\le\lambda x^{[m-1]}$. By Lemma 2, we have $\rho(M(\check{\mathcal{E}}_3)^{-1}\check{\mathcal{F}}_3)\le\lambda=\rho(M(\mathcal{I}_m-\mathcal{L})^{-1}\mathcal{F})$, which completes the proof of this theorem. $\square$

Remark 4.2. Theorem 5 implies that the convergence rate of the P$\check{\mathcal{T}}_3$G-S method is faster than that of the original Gauss-Seidel method, which indicates that the new preconditioner $P_{\alpha,\beta}$ can improve the computational efficiency of the original Gauss-Seidel method.

In the sequel, we prove that the convergence rate of the P$\check{\mathcal{T}}_3$G-S method is faster than that of the preconditioned Gauss-Seidel type method in [17]. To this end, we start with a useful lemma.

Lemma 11. Let $\mathcal{A}\in\mathbb{R}^{[m,n]}$ be a strong M-tensor. If $0\le\alpha_j\le1$, $0\le\beta_j\le1$ ($j=2,\dots,n$), then

$$M(\check{\mathcal{E}}_3)^{-1}\ge M(\check{\mathcal{E}}_4)^{-1}.$$

Proof. According to Lemma 10, we know that $\mathcal{A}_{\alpha\beta}$ is a strong M-tensor. Then, by making use of Lemma 1, there exists $x>0$ such that $\mathcal{A}_{\alpha\beta}x^{m-1}>0$. Since $\check{\mathcal{F}}_3\ge\mathcal{O}$, we get $\check{\mathcal{E}}_3\ge\mathcal{A}_{\alpha\beta}$, and hence

$$\check{\mathcal{E}}_3 x^{m-1}\ge\mathcal{A}_{\alpha\beta}x^{m-1}>0.$$

It is known from the proof of Theorem 2 that $\check{\mathcal{E}}_3$ is a Z-tensor; thus, we can conclude from Lemma 1 that $\check{\mathcal{E}}_3$ is a strong M-tensor. In the same way, we can prove that $\check{\mathcal{E}}_4$ is a strong M-tensor. According to Lemma 8, both $M(\check{\mathcal{E}}_3)$ and $M(\check{\mathcal{E}}_4)$ are nonsingular M-matrices.

Besides, it follows from $\check{\mathcal{E}}_3=\mathcal{I}_m-\mathcal{L}+G_\alpha\mathcal{I}_m-\mathcal{D}_1+\mathcal{L}_1-\mathcal{D}_2$ and $\check{\mathcal{E}}_4=\mathcal{I}_m-\mathcal{L}+G_\alpha\mathcal{I}_m-\mathcal{D}_1+\mathcal{L}_1$ that $\check{\mathcal{E}}_4\ge\check{\mathcal{E}}_3$, which leads to $M(\check{\mathcal{E}}_4)\ge M(\check{\mathcal{E}}_3)$. Having in mind that $M(\check{\mathcal{E}}_3)$ and $M(\check{\mathcal{E}}_4)$ are nonsingular M-matrices, we obtain

$$M(\check{\mathcal{E}}_3)^{-1}\ge M(\check{\mathcal{E}}_4)^{-1}$$

in view of Lemma 9. The proof is completed. $\square$

Theorem 6. Let $\mathcal{A}\in\mathbb{R}^{[m,n]}$ be a strong M-tensor and $0\le\alpha_j\le1$, $0\le\beta_j\le1$ ($j=2,\dots,n$). Suppose that $\bar x$ is the Perron vector of $M(\check{\mathcal{E}}_3)^{-1}\check{\mathcal{F}}_3$. If $\bar x$ satisfies $\mathcal{A}\bar x^{m-1}\ge0$, then

$$\rho(M(\check{\mathcal{E}}_3)^{-1}\check{\mathcal{F}}_3)\le\rho(M(\check{\mathcal{E}}_4)^{-1}\check{\mathcal{F}}_4).$$

Proof. For $\mathcal{A}_{\alpha\beta}=\check{\mathcal{E}}_3-\check{\mathcal{F}}_3$ and $\tilde{\mathcal{A}}=\check{\mathcal{E}}_4-\check{\mathcal{F}}_4$, we consider the following two splittings of the tensor $\mathcal{A}$:

$$\mathcal{A}=\mathcal{E}_3-\mathcal{F}_3=\mathcal{E}_4-\mathcal{F}_4,$$

where $\mathcal{E}_3=P_{\alpha,\beta}^{-1}\check{\mathcal{E}}_3$, $\mathcal{F}_3=P_{\alpha,\beta}^{-1}\check{\mathcal{F}}_3$, $\mathcal{E}_4=P_{\alpha}^{-1}\check{\mathcal{E}}_4$, and $\mathcal{F}_4=P_{\alpha}^{-1}\check{\mathcal{F}}_4$. Then, we can deduce that

$$M(\mathcal{E}_3)^{-1}=M(\check{\mathcal{E}}_3)^{-1}P_{\alpha\beta},\qquad M(\mathcal{E}_3)^{-1}\mathcal{F}_3=M(\check{\mathcal{E}}_3)^{-1}\check{\mathcal{F}}_3,$$

and

$$M(\mathcal{E}_4)^{-1}=M(\check{\mathcal{E}}_4)^{-1}P_{\alpha},\qquad M(\mathcal{E}_4)^{-1}\mathcal{F}_4=M(\check{\mathcal{E}}_4)^{-1}\check{\mathcal{F}}_4.$$

Theorems 2 and 3 prove that both $\mathcal{A}_{\alpha\beta}=\check{\mathcal{E}}_3-\check{\mathcal{F}}_3$ and $\tilde{\mathcal{A}}=\check{\mathcal{E}}_4-\check{\mathcal{F}}_4$ are convergent regular splittings; then it follows that

$$M(\mathcal{E}_3)^{-1}\ge O,\qquad M(\mathcal{E}_3)^{-1}\mathcal{F}_3\ge\mathcal{O},$$
$$M(\mathcal{E}_4)^{-1}\ge O,\qquad M(\mathcal{E}_4)^{-1}\mathcal{F}_4\ge\mathcal{O},$$

which shows that $\mathcal{A}=\mathcal{E}_3-\mathcal{F}_3=\mathcal{E}_4-\mathcal{F}_4$ are two weak regular splittings of the strong M-tensor $\mathcal{A}$.

From Lemma 11, we get $M(\check{\mathcal{E}}_3)^{-1}\ge M(\check{\mathcal{E}}_4)^{-1}$, which together with $P_{\alpha,\beta}\ge P_\alpha$ leads to

$$M(\mathcal{E}_3)^{-1}\ge M(\mathcal{E}_4)^{-1}.$$

Inasmuch as $\mathcal{A}=\mathcal{E}_3-\mathcal{F}_3=\mathcal{E}_4-\mathcal{F}_4$ are two convergent weak regular splittings, $M(\mathcal{E}_3)^{-1}\ge M(\mathcal{E}_4)^{-1}$, and the Perron vector $\bar x$ of $M(\mathcal{E}_3)^{-1}\mathcal{F}_3$ satisfies $\mathcal{A}\bar x^{m-1}\ge0$, we have

$$\rho(M(\mathcal{E}_3)^{-1}\mathcal{F}_3)\le\rho(M(\mathcal{E}_4)^{-1}\mathcal{F}_4)$$

in view of Lemma 5, which is equivalent to

$$\rho(M(\check{\mathcal{E}}_3)^{-1}\check{\mathcal{F}}_3)\le\rho(M(\check{\mathcal{E}}_4)^{-1}\check{\mathcal{F}}_4). \qquad\square$$

Finally, we show that, for a given $\alpha$, the spectral radius of the iteration tensor of the P$\check{\mathcal{T}}_3$G-S method decreases with $\beta$.

Theorem 7. Let $\mathcal{A}\in\mathbb{R}^{[m,n]}$ be a strong M-tensor, and let $\alpha_j,\beta^{(1)}_j,\beta^{(2)}_j\in[0,1]$ ($j=2,\dots,n$) with $\beta^{(1)}=(\beta^{(1)}_2,\dots,\beta^{(1)}_n)\le\beta^{(2)}=(\beta^{(2)}_2,\dots,\beta^{(2)}_n)$. Denote by $\check{\mathcal{E}}_{3,\alpha\beta^{(i)}}-\check{\mathcal{F}}_{3,\alpha\beta^{(i)}}$ the third splitting in (3.2) associated with the parameters $(\alpha,\beta^{(i)})$, $i=1,2$. Suppose that $\bar x$ is the Perron vector of $M(\check{\mathcal{E}}_{3,\alpha\beta^{(2)}})^{-1}\check{\mathcal{F}}_{3,\alpha\beta^{(2)}}$ and satisfies $\mathcal{A}\bar x^{m-1}\ge0$. Then the following inequality holds:

$$\rho(M(\check{\mathcal{E}}_{3,\alpha\beta^{(2)}})^{-1}\check{\mathcal{F}}_{3,\alpha\beta^{(2)}})\le\rho(M(\check{\mathcal{E}}_{3,\alpha\beta^{(1)}})^{-1}\check{\mathcal{F}}_{3,\alpha\beta^{(1)}}).$$

Proof. Since $\check{\mathcal{E}}_3=\mathcal{I}_m-\mathcal{L}+G_\alpha\mathcal{I}_m-\mathcal{D}_1+\mathcal{L}_1-\mathcal{D}_2$ and only $\mathcal{D}_2$ depends on $\beta$, it follows from $\beta^{(1)}\le\beta^{(2)}$ that $\check{\mathcal{E}}_{3,\alpha\beta^{(2)}}\le\check{\mathcal{E}}_{3,\alpha\beta^{(1)}}$, which yields $M(\check{\mathcal{E}}_{3,\alpha\beta^{(2)}})\le M(\check{\mathcal{E}}_{3,\alpha\beta^{(1)}})$. According to the proof of Lemma 11, $\check{\mathcal{E}}_3$ is a strong M-tensor when $0\le\alpha_j\le1$, $0\le\beta_j\le1$ ($j=2,\dots,n$). Then, from Lemma 8, we obtain that $M(\check{\mathcal{E}}_{3,\alpha\beta^{(2)}})$ and $M(\check{\mathcal{E}}_{3,\alpha\beta^{(1)}})$ are nonsingular M-matrices. Therefore, $M(\check{\mathcal{E}}_{3,\alpha\beta^{(2)}})^{-1}\ge M(\check{\mathcal{E}}_{3,\alpha\beta^{(1)}})^{-1}\ge O$ in terms of Lemma 9.

Besides, we derive

$$(\mathcal{A}_{\alpha\beta^{(2)}}-\mathcal{A}_{\alpha\beta^{(1)}})\bar x^{m-1}=(I+G_\alpha+R_{\beta^{(2)}}-I-G_\alpha-R_{\beta^{(1)}})\mathcal{A}\bar x^{m-1}=(R_{\beta^{(2)}}-R_{\beta^{(1)}})\mathcal{A}\bar x^{m-1}\ge0,$$

which means that $\mathcal{A}_{\alpha\beta^{(2)}}\bar x^{m-1}\ge\mathcal{A}_{\alpha\beta^{(1)}}\bar x^{m-1}\ge0$. This, combined with $M(\check{\mathcal{E}}_{3,\alpha\beta^{(2)}})^{-1}\ge M(\check{\mathcal{E}}_{3,\alpha\beta^{(1)}})^{-1}$, results in

$$M(\check{\mathcal{E}}_{3,\alpha\beta^{(2)}})^{-1}\mathcal{A}_{\alpha\beta^{(2)}}\bar x^{m-1}\ge M(\check{\mathcal{E}}_{3,\alpha\beta^{(1)}})^{-1}\mathcal{A}_{\alpha\beta^{(1)}}\bar x^{m-1},$$

i.e.,

$$\bar x^{[m-1]}-M(\check{\mathcal{E}}_{3,\alpha\beta^{(2)}})^{-1}\check{\mathcal{F}}_{3,\alpha\beta^{(2)}}\bar x^{m-1}\ \ge\ \bar x^{[m-1]}-M(\check{\mathcal{E}}_{3,\alpha\beta^{(1)}})^{-1}\check{\mathcal{F}}_{3,\alpha\beta^{(1)}}\bar x^{m-1}.$$

So we get $M(\check{\mathcal{E}}_{3,\alpha\beta^{(1)}})^{-1}\check{\mathcal{F}}_{3,\alpha\beta^{(1)}}\bar x^{m-1}\ge M(\check{\mathcal{E}}_{3,\alpha\beta^{(2)}})^{-1}\check{\mathcal{F}}_{3,\alpha\beta^{(2)}}\bar x^{m-1}$. Inasmuch as $\bar x\ge0$ is the nonnegative Perron vector of the nonnegative tensor $M(\check{\mathcal{E}}_{3,\alpha\beta^{(2)}})^{-1}\check{\mathcal{F}}_{3,\alpha\beta^{(2)}}$, it holds that

$$\rho(M(\check{\mathcal{E}}_{3,\alpha\beta^{(2)}})^{-1}\check{\mathcal{F}}_{3,\alpha\beta^{(2)}})\,\bar x^{[m-1]}=M(\check{\mathcal{E}}_{3,\alpha\beta^{(2)}})^{-1}\check{\mathcal{F}}_{3,\alpha\beta^{(2)}}\bar x^{m-1}\le M(\check{\mathcal{E}}_{3,\alpha\beta^{(1)}})^{-1}\check{\mathcal{F}}_{3,\alpha\beta^{(1)}}\bar x^{m-1}.$$

According to Lemma 2, we have

$$\rho(M(\check{\mathcal{E}}_{3,\alpha\beta^{(2)}})^{-1}\check{\mathcal{F}}_{3,\alpha\beta^{(2)}})\le\rho(M(\check{\mathcal{E}}_{3,\alpha\beta^{(1)}})^{-1}\check{\mathcal{F}}_{3,\alpha\beta^{(1)}}). \qquad\square$$

    Remark 4.3. From Theorem 3.5 of [17], we observe that ρ(M(ˇE4)^{-1}ˇF4) attains its smallest value when αj = 1 (j=2,…,n). Thus we adopt αj = 1 (j=2,…,n) in ρ(M(ˇE3)^{-1}ˇF3). Besides, it can be concluded from Theorem 7 that, for given αi (i=2,…,n), ρ(M(ˇE3)^{-1}ˇF3) is monotonically decreasing with respect to βj (j=2,…,n) on the interval [0, 1]. Thus, we can choose αj = βj = 1 (j=2,…,n) for the PˇT3G-S method in practical computations.

    This section provides three numerical examples to illustrate the correctness of the theoretical results and the efficiency of the proposed methods. To show the advantages of the proposed preconditioned Gauss-Seidel type methods, we also compare their convergence behaviors with those of the original Gauss-Seidel method and the existing preconditioned Gauss-Seidel methods in [14,17], with respect to the number of iteration steps (IT) and the computational time in seconds (CPU).

    All numerical results are computed in MATLAB (version R2016b) on a personal computer with an Intel(R) Pentium(R) CPU G3240T (2.870 GHz), 16.0 GB memory, and the Windows 10 system. We use the power method in [33] to compute the spectral radii of all tested preconditioned Gauss-Seidel type methods. Besides, all iterations are terminated once the residual (RES) satisfies RES := ‖Ay_k^{m−1} − b‖₂ ≤ 10⁻¹⁰, or IT exceeds the maximal number of iteration steps 10,000. According to Remark 3.7 in [17] and Remark 5.7 in [14], ρ(Tα) and ρ(ˇT) attain their minimum values when αj = 1 (j=2,…,n). Moreover, based on Theorem 7 and Remark 4.3, we can choose αl = 1 and βl = 1 (l=2,…,n) in the proposed PˇT3G-S method. Then, we compare the numerical results of the preconditioned Gauss-Seidel type methods with all parameters set to 1. Also, we list the numerical results of the proposed PˇT3G-S method when the parameters α and β are not equal, and give a reasonable suggestion for the values of the parameters in the PˇT3G-S method. The product Ax^{m−1} is computed by transforming it into the matrix-vector product Ax^{m−1} = A(1)(x ⊗ ⋯ ⊗ x) with m−1 factors x, where ⊗ is the Kronecker product. The product of a matrix U and a tensor V is computed by (UV)(1) = UV(1). In our computations, the preconditioned Gauss-Seidel type methods in [14] and [17] are denoted by PTαG-S and PˇTG-S, respectively. The three proposed preconditioned Gauss-Seidel type methods are denoted by PˇT1G-S, PˇT2G-S, and PˇT3G-S, respectively.
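    To make this setup concrete, the following Python sketch re-implements a baseline Gauss-Seidel tensor-splitting iteration for m=3 (an illustrative reconstruction under our own assumptions, not the authors' MATLAB code; the function names and the NumPy realization of A(1) are ours). It computes Ax² through the unfolding A(1)(x⊗x) exactly as described above, and takes M(E) to be the lower triangle of the majorization matrix M(A), so one step reads x⁺ = (M(E)⁻¹(M(E)x^{[2]} − Ax² + b))^{[1/2]}.

```python
import numpy as np

def axm1(A1, x):
    """A x^2 for a 3rd-order tensor via its mode-1 unfolding: A_(1) (x ⊗ x)."""
    return A1 @ np.kron(x, x)

def gauss_seidel_tensor(A, b, x0, tol=1e-10, itmax=10000):
    """Baseline Gauss-Seidel tensor-splitting sketch for A x^2 = b, m = 3.

    Assumes A is a strong M-tensor and b > 0. The splitting A = E - F takes
    M(E) as the lower triangle of the majorization matrix M(A)_{ij} = a_{ijj},
    and iterates x_{k+1} = (M(E)^{-1}(M(E) x_k^{[2]} - A x_k^2 + b))^{[1/2]}.
    """
    n = b.size
    A1 = A.reshape(n, n * n)               # mode-1 unfolding A_(1)
    ME = np.tril(np.einsum('ijj->ij', A))  # M(E): tril of majorization matrix
    x = x0.astype(float).copy()
    for it in range(itmax):
        res = np.linalg.norm(axm1(A1, x) - b)
        if res <= tol:
            return x, it, res
        rhs = ME @ x**2 - axm1(A1, x) + b  # = F x_k^2 + b, nonnegative here
        x = np.sqrt(np.linalg.solve(ME, rhs))
    return x, itmax, res
```

    For a strong M-tensor and positive b, M(E) is a nonsingular lower-triangular M-matrix, so the triangular solve and the entrywise square root are well defined; the preconditioned variants studied in the paper apply the same iteration to Pαβ·A instead of A.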

    Example 1. [32] Let A ∈ R^[3,n] and b ∈ R^n, where

    {a111=annn=1, a122=an(n1)(n1)=0.5, an11=θ24h2+μ222h,a133=a166=1, a188=0.25,aiii=θ2h2+μ1h+η, i=2,3,,n1,ai(i1)i=ai(i+1)i=ai11=θ24h2+μ222h, i=2,3,,n1,ai(i1)(i1)=ai(i+1)(i+1)=(0.1θ)24h2+(0.1μ2)22h, i=2,3,,n1,

    and

    θ = 0.2, μ1 = 0.04, η = 0.04, μ2 = 0.04, h = 2/n.

    This example comes from the higher-order Bellman equations in [32] with a few modifications. It can be verified that A is a strong M-tensor. Both the right-hand side vector b and the initial vector y0 are chosen to be ones(n,1).

    Example 2. [8,22] Consider the multi-linear system (1.1) with coefficient tensor A = sI3 − B and right-hand side vector b = (1,1,…,1)^T, where B is generated randomly by MATLAB, and s = 1.01·max_{1≤j≤n}(Be²)j with e = (1,1,…,1)^T.
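    A MATLAB-free sketch of this construction (assuming, as MATLAB's rand does, that B is drawn entrywise from the uniform distribution on [0,1); the function name is ours):

```python
import numpy as np

def example2_tensor(n, seed=0):
    """Build A = s*I3 - B with s = 1.01 * max_j (B e^2)_j, as in Example 2.

    B >= 0 makes A a Z-tensor, and this choice of s gives A e^2 > 0 entrywise,
    so A is a strong M-tensor.
    """
    rng = np.random.default_rng(seed)
    B = rng.random((n, n, n))                      # random nonnegative tensor
    s = 1.01 * B.reshape(n, -1).sum(axis=1).max()  # (B e^2)_j = sum_{k,l} b_{jkl}
    A = -B
    idx = np.arange(n)
    A[idx, idx, idx] += s                          # add s on the diagonal a_{jjj}
    return A
```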

    Example 3. [23] Let A ∈ R^[3,n] and b ∈ R^n with

    {a111=annn=1, an11=n/16, a1nn=8/n, an(n1)(n1)=1/3,a122=1/2, a133=a144=2/3, a155=a177=1/3.4, a166=1/3,alll=n/3.5, al(l1)(l1)=al(l+1)(l+1)=n/1500, l=2,,n1,al11=al(l+1)l=n/16, al(l1)l=n/15, l=2,,n1,al(l+2)(l+2)=1/(6n), l=2,,n2,al(l+3)(l+3)=1/(6n), l=2,,n3,al(l+4)(l+4)=1/(7n), l=2,,n4,al(l+5)(l+5)=1/(8n), l=2,,n5,al(l+6)(l+6)=1/(8n), l=2,,n6,

    and bi = 1 (i=1,…,n).

    This example comes from [23] with a few modifications. We can verify that A is a strictly diagonally dominant Z-tensor with positive diagonal elements, and hence A is a strong M-tensor. The initial vector is taken as y0 = ones(n,1).

    We compare the convergence factors of the tested Gauss-Seidel and preconditioned Gauss-Seidel type methods for Examples 1, 2, and 3 in Tables 1, 5, and 9, respectively. As observed in Tables 1, 5, and 9, the convergence factors of the preconditioned Gauss-Seidel type methods are smaller than that of the original Gauss-Seidel one, which reveals that applying the preconditioning technique can accelerate the tensor splitting methods and is in accordance with the result in Theorem 5. Also, the convergence factors of the three proposed preconditioned Gauss-Seidel type methods satisfy ρ(ˇT3) ≤ ρ(ˇT2) ≤ ρ(ˇT1) in all cases. This coincides with the conclusion in Theorem 4. Moreover, ρ(ˇT2) and ρ(ˇT3) are smaller than ρ(Tα) [14] and ρ(ˇT) [17], which demonstrates the correctness of Theorem 6. We conclude that the proposed PˇT2G-S and PˇT3G-S methods have faster convergence rates than the existing ones in [14,17].
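    The spectral radii reported below are computed with the power method of [33]; the following is a minimal NQZ-style sketch for the spectral radius of a nonnegative 3rd-order tensor (our own simplified implementation, assuming enough positivity/irreducibility for the min-max bounds to close up):

```python
import numpy as np

def spectral_radius_m3(T, tol=1e-12, itmax=10000):
    """NQZ-style power method for ρ(T), T a nonnegative 3rd-order tensor.

    Iterates x_{k+1} = (T x_k^2)^{[1/2]} (rescaled) and brackets ρ(T) between
    min_i and max_i of (T x^2)_i / x_i^2, which coincide at the Perron value.
    """
    n = T.shape[0]
    T1 = T.reshape(n, -1)          # mode-1 unfolding
    x = np.ones(n)
    lo, hi = 0.0, np.inf
    for _ in range(itmax):
        y = T1 @ np.kron(x, x)     # T x^2
        r = y / x**2
        lo, hi = r.min(), r.max()
        if hi - lo <= tol * max(hi, 1.0):
            break
        x = np.sqrt(y)
        x = x / x.sum()            # rescale to keep the iterates bounded
    return 0.5 * (lo + hi)
```

    For example, the all-ones 3×3×3 tensor T satisfies T e² = 9 e = 9 e^{[2]}, so the sketch returns 9 in one step.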

    Table 1.  For Example 1, spectral radii of the (preconditioned) Gauss-Seidel type iteration tensors.
    n 15 18 21 24 27
    ρ(T) 0.9291802092 0.9529154267 0.9700795372 0.9830577904 0.993209577
    ρ(Tα) [14] 0.9263271112 0.9509942787 0.9688476422 0.9823555188 0.992926632
    ρ(ˇT) [17] 0.8974244172 0.9310807552 0.9558675758 0.9748644934 0.9898795628
    ρ(ˇT1) 0.926716521 0.9512692259 0.9690297533 0.9824618081 0.9929702279
    ρ(ˇT2) 0.8962038636 0.9302714479 0.9553543212 0.9745743171 0.9897634036
    ρ(ˇT3) 0.89048139 0.9263144807 0.9527676002 0.9730779556 0.9891535564


    To reflect the variation of ρ(ˇT3) with respect to the parameters, in Tables 2, 6, and 10, we report the convergence factor ρ(ˇT3) and the corresponding IT of the proposed PˇT3G-S method with α=β ranging from 0.1 to 1 with step size 0.1 for Examples 1–3, respectively. The corresponding IT of the PˇT3G-S method is listed in brackets. Numerical results in Tables 2, 6, and 10 show that the convergence factor and IT of the PˇT3G-S method decrease with increasing α=β in [0, 1] for Example 2. However, for Examples 1 and 3, ρ(ˇT3) first decreases with respect to α=β in [0, 0.9], and then increases with respect to α=β in [0.9, 1]. The reason is that the assumption Ax̄^{m−1} ≥ 0 in Theorem 7 is invalid. Besides, the IT and convergence factor of the PˇT3G-S method are monotonically increasing with the problem size n for Example 1, while the opposite holds for Example 3.

    Table 2.  In Example 1, ρ(ˇT3) and the corresponding IT with different values of α=β.
    n 15 18 21 24 27
    α=β=0.1 0.9236512255(310) 0.9491374509(471) 0.9676317606(747) 0.9816515856(1328) 0.9926396371(3332)
    α=β=0.2 0.9181527764(288) 0.9453696787(437) 0.9651855464(694) 0.9802440685(1233) 0.9920684607(3092)
    α=β=0.3 0.9127402049(269) 0.9416499216(408) 0.9627653366(647) 0.9788492383(1151) 0.9915017037(2887)
    α=β=0.4 0.9074897847(253) 0.9380308713(383) 0.9604054808(608) 0.9774869001(1080) 0.9909474114(2711)
    α=β=0.5 0.902508159(239) 0.9345869158(362) 0.9581548335(574) 0.9761853878(1021) 0.9904171478(2562)
    α=β=0.6 0.8979471844(227) 0.9314249174(345) 0.9560840542(547) 0.9749859013(972) 0.9899278011(2438)
    α=β=0.7 0.894028065(218) 0.9287018688(331) 0.9542976333(525) 0.9739496837(933) 0.9895045808(2341)
    α=β=0.8 0.89108214(213) 0.9266550498(323) 0.9529545879(511) 0.9731704427(908) 0.9891862285(2275)
    α=β=0.9 0.8896230339(211) 0.9256561564(320) 0.9523059977(507) 0.9727970547(901) 0.9890345996(2257)
    α=β=1 0.89048139(215) 0.9263144807(327) 0.9527676002(518) 0.9730779556(921) 0.9891535564(2311)

    Table 3.  In Example 1, ρ(ˇT3) and the corresponding IT with different values of (α,β) and n=15.
    α∖β 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1
    0.1 0.9236(310) 0.9200(295) 0.9159(280) 0.9114(265) 0.9063(249) 0.9005(234) 0.8939(219) 0.8863(204) 0.8773(188) 0.8667(173)
    0.2 0.9215(301) 0.9182(288) 0.9144(274) 0.9102(261) 0.9055(247) 0.9002(233) 0.8940(219) 0.8869(205) 0.8786(190) 0.8686(176)
    0.3 0.9192(292) 0.9162(281) 0.9127(269) 0.9089(257) 0.9046(244) 0.8997(232) 0.8940(219) 0.8875(206) 0.8798(192) 0.8707(179)
    0.4 0.9168(283) 0.9140(273) 0.9109(263) 0.9075(253) 0.9036(242) 0.8992(230) 0.8941(219) 0.8881(207) 0.8812(195) 0.8728(182)
    0.5 0.9141(275) 0.9117(266) 0.9090(257) 0.9059(248) 0.9025(239) 0.8986(229) 0.8941(219) 0.8888(208) 0.8826(197) 0.8752(186)
    0.6 0.9113(266) 0.9092(259) 0.9069(251) 0.9043(244) 0.9013(236) 0.8979(227) 0.8941(218) 0.8895(209) 0.8841(200) 0.8777(190)
    0.7 0.9082(257) 0.9065(251) 0.9046(245) 0.9024(239) 0.9000(233) 0.8972(226) 0.8940(218) 0.8903(211) 0.8858(203) 0.8804(195)
    0.8 0.9048(248) 0.9035(244) 0.9021(239) 0.9004(234) 0.8986(229) 0.8964(224) 0.8940(218) 0.8911(213) 0.8876(207) 0.8835(201)
    0.9 0.9011(239) 0.9003(236) 0.8993(233) 0.8982(229) 0.8970(226) 0.8956(222) 0.8939(218) 0.8920(214) 0.8896(211) 0.8868(207)
    1 0.8971(229) 0.8967(228) 0.8963(226) 0.8958(224) 0.8952(222) 0.8946(220) 0.8938(218) 0.8929(216) 0.8918(216) 0.8905(215)

    Table 4.  Numerical results of six preconditioned Gauss-Seidel type methods for Example 1 with five different problem sizes.
    Method n 15 18 21 24 27
    Gauss-Seidel IT 335 510 810 1440 3606
    CPU 0.0092 0.0096 0.0150 0.0250 0.1022
    RES 9.71×10⁻¹¹ 9.84×10⁻¹¹ 9.85×10⁻¹¹ 1.00×10⁻¹⁰ 9.93×10⁻¹¹
    PTαG-S [14] IT 321 489 777 1383 3470
    CPU 0.0068 0.0091 0.0140 0.0306 0.0546
    RES 9.91×10⁻¹¹ 9.97×10⁻¹¹ 9.94×10⁻¹¹ 9.84×10⁻¹¹ 9.96×10⁻¹¹
    PˇTG-S [17] IT 231 350 557 992 2496
    CPU 0.0083 0.0080 0.0093 0.0169 0.0762
    RES 9.21×10⁻¹¹ 9.94×10⁻¹¹ 9.58×10⁻¹¹ 9.81×10⁻¹¹ 9.97×10⁻¹¹
    PˇT1G-S IT 327 501 798 1425 3590
    CPU 0.0069 0.0091 0.0119 0.0220 0.0789
    RES 9.64×10⁻¹¹ 9.53×10⁻¹¹ 9.92×10⁻¹¹ 9.98×10⁻¹¹ 9.95×10⁻¹¹
    PˇT2G-S IT 226 343 545 972 2466
    CPU 0.0052 0.0055 0.0089 0.0164 0.0504
    RES 9.20×10⁻¹¹ 9.67×10⁻¹¹ 9.81×10⁻¹¹ 9.75×10⁻¹¹ 9.94×10⁻¹¹
    PˇT3G-S IT 215 327 518 921 2311
    CPU 0.0050 0.0052 0.0083 0.0145 0.0482
    RES 9.37×10⁻¹¹ 9.37×10⁻¹¹ 9.84×10⁻¹¹ 9.91×10⁻¹¹ 9.95×10⁻¹¹

    Table 5.  For Example 2, spectral radii of the (preconditioned) Gauss-Seidel type iteration tensors.
    n 2 3 4 5 6
    ρ(T) 0.8054957815 0.8257162016 0.9061906095 0.860610954 0.9281718549
    ρ(Tα) [14] 0.7617278913 0.8149589076 0.9020224374 0.857310149 0.9270187513
    ρ(ˇT) [17] 0.792988154 0.8217099319 0.9056184887 0.8600408767 0.9280077281
    ρ(ˇT1) 0.785428421 0.8105293839 0.9003715632 0.8559249663 0.9265374112
    ρ(ˇT2) 0.7742950119 0.8064817309 0.8997826747 0.8553389412 0.9263707833
    ρ(ˇT3) 0.7617278913 0.8042666224 0.8994395458 0.8551586344 0.9263225743

    Table 6.  In Example 2, ρ(ˇT3) and the corresponding IT with different values of α=β.
    n 2 3 4 5 6
    α=β=0.1 0.8008759455(106) 0.8234461627(122) 0.9054875866(240) 0.8600189377(159) 0.9279710229(321)
    α=β=0.2 0.7963261382(103) 0.8212063526(120) 0.9047909136(238) 0.8594371002(158) 0.927773669(320)
    α=β=0.3 0.7918421592(101) 0.818996027(119) 0.904100539(236) 0.85886552(157) 0.9275798133(319)
    α=β=0.4 0.7874191092(99) 0.8168143681(117) 0.9034164094(234) 0.8583042773(157) 0.9273894764(318)
    α=β=0.5 0.7830511895(96) 0.8146604692(116) 0.9027384702(232) 0.8577534542(156) 0.9272026795(317)
    α=β=0.6 0.7787314303(94) 0.812533316(114) 0.9020666643(231) 0.8572131344(155) 0.9270194443(316)
    α=β=0.7 0.7744513166(92) 0.8104317626(113) 0.9014009325(229) 0.8566834039(155) 0.9268397931(315)
    α=β=0.8 0.7702002655(90) 0.8083545031(112) 0.9007412128(228) 0.8561643503(154) 0.9266637487(315)
    α=β=0.9 0.7659648836(89) 0.8063000365(110) 0.9000874402(226) 0.8556560632(154) 0.9264913344(314)
    α=β=1 0.7617278913(87) 0.8042666224(109) 0.8994395458(224) 0.8551586344(153) 0.9263225743(313)

    Table 7.  In Example 2, ρ(ˇT3) and the corresponding IT with different values of (α,β) and n=4.
    α∖β 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1
    0.1 0.9055(240) 0.9048(238) 0.9042(236) 0.9035(234) 0.9028(233) 0.9021(231) 0.9014(229) 0.9007(227) 0.8999(226) 0.8992(224)
    0.2 0.9054(239) 0.9048(238) 0.9041(236) 0.9035(234) 0.9028(233) 0.9021(231) 0.9014(229) 0.9007(227) 0.8999(226) 0.8992(224)
    0.3 0.9054(239) 0.9048(238) 0.9041(236) 0.9034(234) 0.9028(233) 0.9021(231) 0.9014(229) 0.9007(227) 0.9000(226) 0.8992(224)
    0.4 0.9053(239) 0.9047(238) 0.9041(236) 0.9034(234) 0.9028(232) 0.9021(231) 0.9014(229) 0.9007(227) 0.9000(226) 0.8993(224)
    0.5 0.9053(239) 0.9047(237) 0.9040(236) 0.9034(234) 0.9027(232) 0.9021(231) 0.9014(229) 0.9007(227) 0.9000(226) 0.8993(224)
    0.6 0.9052(239) 0.9046(237) 0.9040(236) 0.9034(234) 0.9027(232) 0.9021(231) 0.9014(229) 0.9007(227) 0.9000(226) 0.8993(224)
    0.7 0.9052(239) 0.9046(237) 0.9040(236) 0.9033(234) 0.9027(232) 0.9021(231) 0.9014(229) 0.9007(227) 0.9000(226) 0.8993(224)
    0.8 0.9051(239) 0.9045(237) 0.9039(235) 0.9033(234) 0.9027(232) 0.9021(231) 0.9014(229) 0.9007(228) 0.9001(226) 0.8994(224)
    0.9 0.9051(239) 0.9045(237) 0.9039(235) 0.9033(234) 0.9027(232) 0.9020(231) 0.9014(229) 0.9008(228) 0.9001(226) 0.8994(224)
    1 0.9050(238) 0.9045(237) 0.9039(235) 0.9033(234) 0.9027(232) 0.9020(231) 0.9014(229) 0.9008(228) 0.9001(226) 0.8994(224)

    Table 8.  Numerical results of six preconditioned Gauss-Seidel type methods for Example 2 with five different problem sizes.
    Method n 2 3 4 5 6
    Gauss-Seidel IT 109 124 241 159 322
    CPU 0.0020 0.0021 0.0033 0.0023 0.0045
    RES 8.56×10⁻¹¹ 8.88×10⁻¹¹ 9.96×10⁻¹¹ 9.99×10⁻¹¹ 9.33×10⁻¹¹
    PTαG-S [14] IT 87 116 231 156 316
    CPU 0.0014 0.0019 0.0031 0.0023 0.0045
    RES 8.38×10⁻¹¹ 9.08×10⁻¹¹ 9.21×10⁻¹¹ 8.61×10⁻¹¹ 9.86×10⁻¹¹
    PˇTG-S [17] IT 102 121 240 159 321
    CPU 0.0017 0.0025 0.0033 0.0023 0.0041
    RES 8.20×10⁻¹¹ 8.81×10⁻¹¹ 9.46×10⁻¹¹ 9.01×10⁻¹¹ 9.51×10⁻¹¹
    PˇT1G-S IT 98 113 227 154 314
    CPU 0.0016 0.0021 0.0031 0.0021 0.0040
    RES 8.19×10⁻¹¹ 9.09×10⁻¹¹ 9.16×10⁻¹¹ 9.09×10⁻¹¹ 9.73×10⁻¹¹
    PˇT2G-S IT 92 110 225 153 313
    CPU 0.0015 0.0015 0.0030 0.0019 0.0037
    RES 9.19×10⁻¹¹ 9.76×10⁻¹¹ 9.76×10⁻¹¹ 9.57×10⁻¹¹ 9.93×10⁻¹¹
    PˇT3G-S IT 87 109 224 153 313
    CPU 0.0013 0.0014 0.0025 0.0019 0.0037
    RES 8.38×10⁻¹¹ 9.06×10⁻¹¹ 9.96×10⁻¹¹ 9.27×10⁻¹¹ 9.77×10⁻¹¹

    Table 9.  For Example 3, spectral radii of the (preconditioned) Gauss-Seidel type iteration tensors.
    n 20 30 40 50 60
    ρ(T) 0.9753471425 0.9711386234 0.9696621011 0.9689780153 0.9686062345
    ρ(Tα) [14] 0.9742949249 0.9699175026 0.9683825071 0.9676714698 0.9672850789
    ρ(ˇT) [17] 0.964831327 0.9589288847 0.9568655063 0.9559108504 0.9553923766
    ρ(ˇT1) 0.9745215319 0.9701572449 0.9686249106 0.9679147559 0.9675287508
    ρ(ˇT2) 0.9642431835 0.9582420669 0.9561442394 0.9551736548 0.9546465329
    ρ(ˇT3) 0.9624480739 0.9561045549 0.9538838235 0.9528557953 0.95229732

    Table 10.  In Example 3, ρ(ˇT3) and the corresponding IT with different values of α=β.
    n 20 30 40 50 60
    α=β=0.1 0.9734001591(910) 0.968866947(780) 0.9672770733(745) 0.9665405712(730) 0.9661403306(724)
    α=β=0.2 0.9714631826(847) 0.9666083348(726) 0.9649062817(694) 0.9641179241(681) 0.963689533(675)
    α=β=0.3 0.9695578019(793) 0.9643879014(681) 0.9625760661(650) 0.961736979(639) 0.9612810537(633)
    α=β=0.4 0.9677143942(747) 0.9622408988(642) 0.9603233653(613) 0.9594354525(603) 0.9589530305(598)
    α=β=0.5 0.9659760545(708) 0.9602172542(609) 0.9582004645(582) 0.957266715(572) 0.956759422(568)
    α=β=0.6 0.9644046694(676) 0.9583885977(581) 0.9562823511(556) 0.9553072984(547) 0.9547775965(543)
    α=β=0.7 0.9630905878(651) 0.9568594799(560) 0.9546784667(536) 0.9536688941(527) 0.9531204636(523)
    α=β=0.8 0.9621685643(637) 0.9557859165(548) 0.9535521552(524) 0.9525182177(515) 0.9519565653(511)
    α=β=0.9 0.9618451437(635) 0.9554073508(547) 0.9531542236(523) 0.9521113091(514) 0.9515447766(509)
    α=β=1 0.9624480739(654) 0.9561045549(563) 0.9538838235(539) 0.9528557953(530) 0.95229732(526)


    Also, we list the convergence factor ρ(ˇT3) and IT of the proposed PˇT3G-S method with different values of (α,β), for α and β ranging from 0.1 to 1 with step size 0.1, for Examples 1–3 in Tables 3, 7, and 11, respectively. The best results are marked in bold. It can be observed in Tables 3, 7, and 11 that for a given α, ρ(ˇT3) and IT of the proposed PˇT3G-S method become smaller as β increases, which coincides with the result in Theorem 7. In contrast, for a given β, ρ(ˇT3) and IT of the proposed PˇT3G-S method are not monotonically decreasing with α in all cases. In addition, the PˇT3G-S method has the highest computational efficiency when α=0.1 and β=1. This reveals that we can adopt small values of α and large values of β for the PˇT3G-S method in numerical experiments and practical computations.

    Table 11.  In Example 3, ρ(ˇT3) and the corresponding IT with different values of (α,β) and n=40.
    α∖β 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1
    0.1 0.9673(745) 0.9657(709) 0.9638(674) 0.9618(638) 0.9595(601) 0.9569(565) 0.9538(528) 0.9503(491) 0.9462(454) 0.9412(416)
    0.2 0.9664(725) 0.9649(694) 0.9632(662) 0.9613(629) 0.9592(597) 0.9568(563) 0.9540(529) 0.9507(495) 0.9468(459) 0.9421(423)
    0.3 0.9655(706) 0.9641(679) 0.9626(650) 0.9609(621) 0.9589(592) 0.9567(561) 0.9541(530) 0.9511(498) 0.9475(465) 0.9432(431)
    0.4 0.9645(688) 0.9633(664) 0.9619(639) 0.9603(613) 0.9586(587) 0.9565(560) 0.9542(531) 0.9515(502) 0.9482(472) 0.9443(440)
    0.5 0.9634(670) 0.9623(650) 0.9611(628) 0.9598(606) 0.9582(582) 0.9564(558) 0.9543(533) 0.9519(507) 0.9490(479) 0.9456(451)
    0.6 0.9623(653) 0.9614(635) 0.9603(617) 0.9591(598) 0.9578(578) 0.9563(556) 0.9545(534) 0.9524(512) 0.9499(488) 0.9469(463)
    0.7 0.9611(635) 0.9603(621) 0.9594(606) 0.9585(590) 0.9574(573) 0.9561(555) 0.9547(536) 0.9530(517) 0.9509(498) 0.9484(477)
    0.8 0.9597(616) 0.9592(606) 0.9585(594) 0.9578(582) 0.9569(568) 0.9560(554) 0.9549(539) 0.9536(524) 0.9520(509) 0.9501(494)
    0.9 0.9583(598) 0.9579(590) 0.9575(582) 0.9570(573) 0.9565(563) 0.9558(552) 0.9551(541) 0.9542(531) 0.9532(523) 0.9519(514)
    1 0.9567(579) 0.9566(574) 0.9564(570) 0.9562(564) 0.9559(558) 0.9556(551) 0.9553(544) 0.9549(540) 0.9545(539) 0.9539(539)


    Additionally, numerical results of the tested Gauss-Seidel and preconditioned Gauss-Seidel methods for Examples 1, 2, and 3 with different problem sizes are reported in Tables 4, 8, and 12, respectively. From Tables 4, 8, and 12, we observe that all tested methods successfully compute approximate solutions satisfying the prescribed stopping criterion, and their IT and CPU time increase with the problem size n. For Examples 1 and 3, the proposed PˇT2G-S and PˇT3G-S methods have advantages over the other ones from the point of view of IT and CPU time, and the PˇT3G-S method performs the best among the tested methods. For Example 2, the three proposed methods outperform the other ones except for the case n=2, and the proposed PˇT3G-S method is the most effective among all tested methods, which is in accordance with the results in Theorems 5 and 6. Another observation is that for Examples 1 and 3, the proposed PˇT3G-S method is more stable than the other ones, since the variation amplitude of its IT is the smallest among the tested methods.

    Table 12.  Numerical results of six preconditioned Gauss-Seidel type methods for Example 3 with n=20,30,40,50,60.
    Method n 20 30 40 50 60
    Gauss-Seidel IT 984 843 804 789 782
    CPU 0.0167 0.0240 0.0364 0.0496 0.0989
    RES 9.87×10⁻¹¹ 9.86×10⁻¹¹ 9.99×10⁻¹¹ 9.85×10⁻¹¹ 9.76×10⁻¹¹
    PTαG-S [14] IT 943 808 772 757 750
    CPU 0.0131 0.0218 0.0351 0.0474 0.0839
    RES 9.86×10⁻¹¹ 9.95×10⁻¹¹ 9.73×10⁻¹¹ 9.82×10⁻¹¹ 9.85×10⁻¹¹
    PˇTG-S [17] IT 702 606 582 574 570
    CPU 0.0111 0.0158 0.0262 0.0401 0.0737
    RES 9.79×10⁻¹¹ 9.89×10⁻¹¹ 9.83×10⁻¹¹ 9.60×10⁻¹¹ 9.98×10⁻¹¹
    PˇT1G-S IT 972 838 804 791 786
    CPU 0.0147 0.0207 0.0400 0.0533 0.1124
    RES 9.88×10⁻¹¹ 9.98×10⁻¹¹ 9.79×10⁻¹¹ 9.90×10⁻¹¹ 9.85×10⁻¹¹
    PˇT2G-S IT 680 587 564 556 552
    CPU 0.0104 0.0154 0.0245 0.0348 0.0638
    RES 9.86×10⁻¹¹ 9.93×10⁻¹¹ 9.71×10⁻¹¹ 9.60×10⁻¹¹ 1.00×10⁻¹¹
    PˇT3G-S IT 654 563 539 530 526
    CPU 0.0098 0.0145 0.0226 0.0344 0.0627
    RES 9.85×10⁻¹¹ 9.77×10⁻¹¹ 9.71×10⁻¹¹ 9.60×10⁻¹¹ 9.61×10⁻¹¹


    In order to further show the efficiency of the proposed preconditioned Gauss-Seidel type algorithms, we also apply the tested algorithms to solve the multi-linear system in Example 3 with larger scales n=100,150,200,250,300, and the obtained numerical results are listed in Table 13. From Table 13, it is observed that as n increases, the IT and CPU time of the tested methods increase. The proposed PˇT2G-S and PˇT3G-S methods deliver better numerical performance than the other ones in terms of IT and CPU time, and the PˇT3G-S method performs the best among the tested methods. The advantages of the PˇT2G-S and PˇT3G-S methods become more pronounced as the problem size n grows. This shows that the proposed methods are feasible and efficient for solving large-scale multi-linear systems.

    Table 13.  Numerical results of six preconditioned Gauss-Seidel type methods for Example 3 with n=100,150,200,250,300.
    Method n 100 150 200 250 300
    Gauss-Seidel IT 775 776 779 782 784
    CPU 0.5058 2.2053 5.1039 9.9024 15.7565
    RES 9.76×10⁻¹¹ 9.94×10⁻¹¹ 9.85×10⁻¹¹ 9.77×10⁻¹¹ 9.94×10⁻¹¹
    PTαG-S [14] IT 744 745 748 751 753
    CPU 0.4810 2.1121 4.8320 9.4171 15.6656
    RES 9.69×10⁻¹¹ 9.88×10⁻¹¹ 9.78×10⁻¹¹ 9.68×10⁻¹¹ 9.80×10⁻¹¹
    PˇTG-S [17] IT 571 577 582 586 590
    CPU 0.3554 1.6839 3.8616 7.2955 12.1772
    RES 9.93×10⁻¹¹ 9.71×10⁻¹¹ 9.75×10⁻¹¹ 9.85×10⁻¹¹ 9.66×10⁻¹¹
    PˇT1G-S IT 785 791 798 803 807
    CPU 0.4699 2.2796 5.0793 10.0381 17.1644
    RES 9.85×10⁻¹¹ 9.96×10⁻¹¹ 9.70×10⁻¹¹ 9.80×10⁻¹¹ 9.93×10⁻¹¹
    PˇT2G-S IT 553 559 564 568 572
    CPU 0.3352 1.6277 3.5638 7.0709 11.8955
    RES 9.98×10⁻¹¹ 9.67×10⁻¹¹ 9.74×10⁻¹¹ 9.77×10⁻¹¹ 9.61×10⁻¹¹
    PˇT3G-S IT 523 527 530 533 536
    CPU 0.3233 1.5498 3.3608 6.6699 11.2433
    RES 9.99×10⁻¹¹ 9.56×10⁻¹¹ 9.69×10⁻¹¹ 9.74×10⁻¹¹ 9.68×10⁻¹¹


    To investigate the dependence of the tested methods on the problem size and the parameters, the comparisons of the convergence factors of the tested methods and the changes of ρ(ˇT3) with α=β for Examples 1–3 are depicted in Figures 2, 4, and 6, respectively. We plot the RES (log₁₀) curves of the tested algorithms for Examples 1–3 in Figures 1, 3, and 5, respectively, to show the advantages of the proposed methods. Here, "PGS(Sα)", "PGS(Gα)", and "PGS-j" (j=1,2,3) denote the PTαG-S, PˇTG-S, and PˇTjG-S (j=1,2,3) methods, respectively. Their convergence factors are represented by ρ(Sα), ρ(Gα), and ρj (j=1,2,3), respectively. It can be seen from Figures 1, 3, and 5 that the PˇT2G-S and PˇT3G-S methods are superior to the other ones in terms of IT, and the PˇT1G-S and PTαG-S [14] methods are comparable for Examples 1 and 3. In addition, for Example 2, the three proposed preconditioned Gauss-Seidel type methods outperform the other ones. Also, for Example 2, the PTαG-S method performs better than the PˇTG-S one, while the opposite holds for Examples 1 and 3. This is consistent with the results in Tables 4, 8, and 12. On the other hand, Figures 2, 4, and 6 clearly show that ρ(ˇT2) and ρ(ˇT3) are less than the other convergence factors, and ρ(ˇT3) is the minimum among all convergence factors except for n=2 in Example 2. Meanwhile, ρ(Tα) and ρ(ˇT1), as well as ρ(ˇT) and ρ(ˇT2), are close for Example 1. Furthermore, all convergence factors become closer with n for Example 2. Lastly, ρ(ˇT3) first decreases and then increases as α=β increases in [0, 1] for Examples 1 and 3, while it decreases with increasing α=β in [0, 1] for Example 2. The reason may be that the assumption Ax̄^{m−1} ≥ 0 in Theorem 7 is invalid.

    Figure 1.  Comparisons for the convergence curves of the preconditioned Gauss-Seidel type methods for Example 1.
    Figure 2.  Comparisons of the tested methods' iteration tensors' spectral radii (left) and the change of ρ(ˇT3) with α=β (right) for Example 1.
    Figure 3.  Comparisons for the convergence curves of the preconditioned Gauss-Seidel type methods for Example 2.
    Figure 4.  Comparisons of the tested methods' iteration tensors' spectral radii (left) and the change of ρ(ˇT3) with α=β (right) for Example 2.
    Figure 5.  Comparisons for the convergence curves of the preconditioned Gauss-Seidel type methods for Example 3.
    Figure 6.  Comparisons of the tested methods' iteration tensors' spectral radii (left) and the change of ρ(ˇT3) with α=β (right) for Example 3.

    As applications of the proposed preconditioned Gauss-Seidel type methods for solving the multi-linear system (1.1), we consider the higher-order Markov chain [4] and the directed graph [5,6]. Recently, Li and Ng [4] presented an approximate tensor model for the higher-order Markov chain, which is given as follows:

    z = Pz^{m−1}, z ≥ 0, ‖z‖₁ = 1, (6.1)

    where P = (p_{j1j2…jm}) is an m-th order n-dimensional stochastic tensor (or transition probability tensor), that is:

    p_{j1j2…jm} ≥ 0,  Σ_{j1=1}^{n} p_{j1j2…jm} = 1,

    and z = (zj) is called a stochastic vector with zj ≥ 0 and Σ_{j=1}^{n} zj = 1. The following example, relating to the higher-order Markov chain, comes from [36].

    Example 4. [36] Let

    P = ( 0.5200 0.2986 0.4462 0.6514 0.4300 0.5776 0.5638 0.3424 0.4900
          0.2700 0.3930 0.3192 0.1970 0.3200 0.2462 0.2408 0.3638 0.2900
          0.2100 0.3084 0.2346 0.1516 0.2500 0.1762 0.1954 0.2938 0.2200 ). (6.2)

    This example comes from the occupational mobility of physicists data given in [36]. Let A = Im − θP with a positive real number θ. In [13], Liu et al. gave an equation equivalent to (6.1):

    Az^{m−1} = z^{[m−1]} − θz, z ≥ 0, ‖z‖₁ = 1.

    We test and compare the preconditioned Gauss-Seidel type methods in this paper and those in [14,17] for this example. According to the numerical experiments in [19,24], the initial vector is taken as z0 = (1/n)·ones(n,1), and all runs are stopped once the RES satisfies RES := ‖Pz^{m−1} − z‖₂ ≤ 10⁻¹⁰ or the IT exceeds the maximal number of iteration steps 10,000.
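    As a quick illustration of the model itself (not of the paper's preconditioned solvers), the stationary distribution of (6.1) for the tensor in (6.2) can be approximated by the naive fixed-point iteration z ← Pz². We read the 3×9 array in (6.2) as the mode-1 unfolding of P, so that Pz² = P(1)(z⊗z); each of its columns sums to 1, so the iterates remain stochastic.

```python
import numpy as np

# The 3x9 array from (6.2), read as the mode-1 unfolding P_(1) of P,
# so that P z^2 = P_(1) (z ⊗ z). Every column sums to 1 (P is stochastic).
P1 = np.array([
    [0.5200, 0.2986, 0.4462, 0.6514, 0.4300, 0.5776, 0.5638, 0.3424, 0.4900],
    [0.2700, 0.3930, 0.3192, 0.1970, 0.3200, 0.2462, 0.2408, 0.3638, 0.2900],
    [0.2100, 0.3084, 0.2346, 0.1516, 0.2500, 0.1762, 0.1954, 0.2938, 0.2200],
])

def stationary_distribution(P1, tol=1e-10, itmax=10000):
    """Fixed-point iteration z <- P z^2 for the model (6.1), m = 3."""
    n = P1.shape[0]
    z = np.ones(n) / n                       # z0 = (1/n) ones(n,1), as in the text
    for _ in range(itmax):
        z_new = P1 @ np.kron(z, z)           # P z^2; keeps z >= 0 and sum(z) = 1
        if np.linalg.norm(z_new - z) <= tol: # RES = ||P z^{m-1} - z||_2
            return z_new
        z = z_new
    return z
```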

    Example 5. [5,6] Let Γ be a directed graph with V={1,2,3,4,5}, where P={1,2} and D={3,4,5}. Then, by definition, we can get the stochastic tensor ˆQ = (ˆq_{i1i2i3}) ∈ R^[3,5], where ˆq221=ˆq421=ˆq521=ˆq312=ˆq412=ˆq512=0, ˆq122=ˆq133=ˆq144=ˆq155=ˆq121=ˆq212=ˆq211=ˆq311=ˆq411=ˆq321=ˆq112=ˆq511=1, and the other elements are 1/6. We apply the tested preconditioned Gauss-Seidel type methods to solve the following multi-linear PageRank problem:

    z = Pz^{m−1}, ‖z‖₁ = 1, P = ξˆQ + (1−ξ)V,

    where V = (v_{i1i2…im}) with v_{i1i2…im} = v_{i1} for all i2,…,im, v = (vi) is a stochastic vector, and ξ ∈ (0,1) is a parameter. This example comes from [5,6,24] with a few modifications. As in Example 4, we take the initial vector z0 = (1/n)·ones(n,1), and all runs are stopped once the RES satisfies RES := ‖Pz^{m−1} − z‖₂ ≤ 10⁻¹⁰ or the IT exceeds the maximal number of iteration steps 10,000.
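    The tensor P = ξˆQ + (1−ξ)V is easy to assemble once V is identified: since v_{i1i2…im} = v_{i1}, every column of the mode-1 unfolding of V equals v. A small sketch (with a randomly generated stochastic tensor standing in for the ˆQ of Example 5; the function name is ours):

```python
import numpy as np

def pagerank_tensor(Q1, v, xi):
    """P = xi*Q + (1-xi)*V via mode-1 unfoldings, where V(i1,...,im) = v(i1).

    Q1 is the n x n^{m-1} unfolding of a stochastic tensor Q, v is a stochastic
    vector, and xi lies in (0,1); the resulting P is again stochastic.
    """
    V1 = np.tile(v[:, None], (1, Q1.shape[1]))  # each column of V_(1) equals v
    return xi * Q1 + (1 - xi) * V1
```

    A convex combination of two stochastic tensors is stochastic, which is what keeps the PageRank model (and the M-tensor reformulation built on it) well posed.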

    In Tables 14–16, we list the numerical results of the tested methods for Example 4 with five different values of θ. For Example 5, in Tables 17–19, we list the same items as those in Tables 14–16. In Figures 7 and 9, we plot the RES (log₁₀) curves of the tested methods for Examples 4 and 5, respectively. The comparisons of the convergence factors of the tested methods and the variation curves of ρ(ˇT3) with α=β for Examples 4 and 5 are displayed in Figures 8 and 10, respectively. As in Section 5, we use "PGS(Sα)", "PGS(Gα)", and "PGS-j" (j=1,2,3) to denote the PTαG-S, PˇTG-S, and PˇTjG-S (j=1,2,3) methods, respectively. Their convergence factors are represented by ρ(Sα), ρ(Gα), and ρj (j=1,2,3), respectively.

    Table 14.  For Example 4, spectral radii of the (preconditioned) Gauss-Seidel type iteration tensors.
    θ 0.25 0.27 0.29 0.31 0.33
    ρ(T) 0.6819738942 0.7532792482 0.827647761 0.9052264399 0.9861706816
    ρ(Tα) [14] 0.6609721182 0.7349721196 0.813350659 0.8964752842 0.9847544534
    ρ(ˇT) [17] 0.6763919018 0.7481015109 0.8233677682 0.9024662381 0.9857019214
    ρ(ˇT1) 0.6673391032 0.7410381542 0.8184785882 0.899845899 0.9853363265
    ρ(ˇT2) 0.6617337529 0.7358331887 0.8141715342 0.8970653374 0.9848636198
    ρ(ˇT3) 0.6582370153 0.7325759099 0.8114668706 0.8953127105 0.9845644548

    Table 15.  In Example 4, ρ(ˇT3) and the corresponding IT with different values of α=β.
    θ 0.25 0.27 0.29 0.31 0.33
    α=β=0.1 0.6792238859(118) 0.7508515945(158) 0.8257298278(236) 0.9040400967(447) 0.9859768675(3185)
    α=β=0.2 0.6765572979(117) 0.7485015855(157) 0.8238757852(234) 0.9028944781(442) 0.9857898436(3142)
    α=β=0.3 0.6739741222(115) 0.7462297051(155) 0.8220864351(231) 0.9017903596(436) 0.9856097857(3103)
    α=β=0.4 0.6714744019(114) 0.7440365316(154) 0.820362696(229) 0.9007286183(431) 0.9854368914(3066)
    α=β=0.5 0.66905823(113) 0.7419227424(152) 0.8187056132(226) 0.8997102429(427) 0.9852713829(3031)
    α=β=0.6 0.6667257474(112) 0.7398891196(151) 0.8171163686(224) 0.8987363436(423) 0.985113509(2999)
    α=β=0.7 0.6644771406(111) 0.7379365554(149) 0.8155962921(222) 0.8978081652(419) 0.9849635485(2969)
    α=β=0.8 0.6623126373(111) 0.7360660582(148) 0.8141468747(220) 0.8969270997(415) 0.9848218131(2941)
    α=β=0.9 0.6602324988(110) 0.7342787575(147) 0.8127697821(218) 0.8960947027(411) 0.9846886517(2915)
    α=β=1 0.6582370153(109) 0.7325759099(146) 0.8114668706(217) 0.8953127105(408) 0.9845644548(2891)

    Table 16.  Numerical results of six preconditioned Gauss-Seidel type methods for Example 4 with five different values of θ.
    Method θ 0.25 0.27 0.29 0.31 0.33
    Gauss-Seidel IT 119 160 239 453 3230
    CPU 0.0041 0.0042 0.0050 0.0095 0.0425
    RES 9.00×10⁻¹¹ 9.55×10⁻¹¹ 9.67×10⁻¹¹ 9.74×10⁻¹¹ 9.93×10⁻¹¹
    PTαG-S [14] IT 110 147 219 413 2928
    CPU 0.0031 0.0034 0.0045 0.0084 0.0413
    RES 9.15×10⁻¹¹ 9.98×10⁻¹¹ 9.59×10⁻¹¹ 9.65×10⁻¹¹ 9.93×10⁻¹¹
    PˇTG-S [17] IT 116 156 233 440 3123
    CPU 0.0035 0.0037 0.0050 0.0090 0.0457
    RES 9.96×10⁻¹¹ 9.86×10⁻¹¹ 9.35×10⁻¹¹ 9.52×10⁻¹¹ 9.96×10⁻¹¹
    PˇT1G-S IT 113 151 226 427 3045
    CPU 0.0033 0.0036 0.0047 0.0087 0.0419
    RES 8.41×10⁻¹¹ 9.99×10⁻¹¹ 9.44×10⁻¹¹ 9.98×10⁻¹¹ 9.93×10⁻¹¹
    PˇT2G-S IT 110 148 220 415 2949
    CPU 0.0031 0.0035 0.0041 0.0082 0.0416
    RES 9.75×10⁻¹¹ 9.33×10⁻¹¹ 9.66×10⁻¹¹ 9.91×10⁻¹¹ 9.95×10⁻¹¹
    PˇT3G-S IT 109 146 217 408 2891
    CPU 0.0026 0.0030 0.0040 0.0070 0.0391
    RES 9.00×10⁻¹¹ 9.19×10⁻¹¹ 9.18×10⁻¹¹ 9.74×10⁻¹¹ 9.98×10⁻¹¹

    Table 17.  For Example 5, spectral radii of the (preconditioned) Gauss-Seidel type iteration tensors.
    θ(ξ) 0.22 0.23 0.24 0.25 0.26
    ρ(T) 0.7492545607 0.8025687111 0.8577808035 0.9149387587 0.9740930476
    ρ(Tα) [14] 0.7315214378 0.7875989023 0.84624098 0.9075651538 0.9716975058
    ρ(ˇT) [17] 0.7446247873 0.7983829614 0.8543373608 0.9125983061 0.9732866292
    ρ(ˇT1) 0.7340462189 0.7899152807 0.848174538 0.9088980989 0.9721630439
    ρ(ˇT2) 0.7290474546 0.7853949643 0.8444551811 0.9063697302 0.9712917751
    ρ(ˇT3) 0.7280888187 0.7844977941 0.8436914404 0.9058327578 0.9711004535

    Table 18.  In Example 5, ρ(ˇT3) and the corresponding IT with different values of α=β.
    θ(ξ) 0.22 0.23 0.24 0.25 0.26
    α=β=0.1 0.7469077319(155) 0.800537225(202) 0.8561759022(289) 0.9138884045(497) 0.9737437254(1679)
    α=β=0.2 0.7446087494(153) 0.7985512834(200) 0.8546099952(286) 0.9128653771(491) 0.9734040354(1657)
    α=β=0.3 0.7423588937(152) 0.7966124586(198) 0.853084694(283) 0.9118709756(486) 0.9730744925(1637)
    α=β=0.4 0.740159437(150) 0.7947223552(196) 0.851601665(280) 0.9109065538(480) 0.9727556356(1618)
    α=β=0.5 0.7380116349(149) 0.7928826061(194) 0.8501626293(277) 0.9099735219(475) 0.9724480292(1600)
    α=β=0.6 0.7359167161(147) 0.7910948693(192) 0.8487693632(274) 0.9090733483(470) 0.9721522647(1582)
    α=β=0.7 0.7338758715(146) 0.7893608229(190) 0.8474236982(271) 0.9082075619(465) 0.9718689611(1566)
    α=β=0.8 0.73189024(145) 0.7876821602(189) 0.8461275202(269) 0.9073777535(461) 0.9715987667(1551)
    α=β=0.9 0.7299608936(144) 0.7860605828(187) 0.8448827697(267) 0.9065855784(457) 0.9713423605(1537)
    α=β=1 0.7280888187(142) 0.7844977941(186) 0.8436914404(264) 0.9058327578(453) 0.9711004535(1525)

    Table 19.  Numerical results of six preconditioned Gauss-Seidel type methods for Example 5 with five different values of θ(ξ).
    Method       θ(ξ)   0.22        0.23        0.24        0.25        0.26
    Gauss-Seidel IT     156         205         293         504         1702
                 CPU    0.0035      0.0040      0.0054      0.0090      0.0310
                 RES    9.66×10^-11 9.03×10^-11 9.31×10^-11 9.64×10^-11 9.91×10^-11
    PTαG-S [14]  IT     145         189         269         462         1557
                 CPU    0.0030      0.0037      0.0052      0.0087      0.0256
                 RES    8.61×10^-11 9.13×10^-11 9.76×10^-11 9.85×10^-11 9.99×10^-11
    PˇTG-S [17]  IT     153         200         285         490         1650
                 CPU    0.0031      0.0041      0.0053      0.0088      0.0424
                 RES    9.36×10^-11 9.36×10^-11 9.78×10^-11 9.66×10^-11 9.97×10^-11
    PˇT1G-S      IT     146         191         273         469         1583
                 CPU    0.0031      0.0038      0.0052      0.0083      0.0261
                 RES    9.38×10^-11 9.43×10^-11 9.45×10^-11 9.79×10^-11 9.94×10^-11
    PˇT2G-S      IT     143         187         266         456         1535
                 CPU    0.0027      0.0037      0.0049      0.0078      0.0251
                 RES    9.25×10^-11 8.92×10^-11 9.45×10^-11 9.74×10^-11 9.90×10^-11
    PˇT3G-S      IT     142         186         264         453         1525
                 CPU    0.0026      0.0034      0.0044      0.0071      0.0231
                 RES    9.87×10^-11 9.05×10^-11 9.94×10^-11 9.86×10^-11 9.86×10^-11

    Figure 7.  Comparison of the convergence curves of the preconditioned Gauss-Seidel type methods for Example 4.
    Figure 8.  Comparison of the spectral radii of the iteration tensors of the tested methods (left) and the change of ρ(ˇT3) with α (right) for Example 4.
    Figure 9.  Comparison of the convergence curves of the preconditioned Gauss-Seidel type methods for Example 5.
    Figure 10.  Comparison of the spectral radii of the iteration tensors of the tested methods (left) and the change of ρ(ˇT3) with α (right) for Example 5.

    From the numerical results presented in Tables 14–19, we can draw the following observations. First, all tested algorithms converge in all cases. Second, the IT and CPU time of all tested algorithms increase with θ(ξ). Third, the PˇT2G-S and PˇT3G-S methods have higher computational efficiency than the others, except that the PTαG-S method is slightly better than the PˇT2G-S one in Example 4, and their advantages become more pronounced as θ(ξ) grows. Fourth, the numerical performances of the PTαG-S and PˇT1G-S methods are comparable, and both are more efficient than the Gauss-Seidel and PˇTG-S methods with respect to IT and CPU time. Fifth, all convergence factors increase with θ(ξ), and ρ(ˇT3) is smaller than all the others; moreover, ρ(ˇT3)<ρ(ˇT2)<ρ(ˇT1) holds in all cases, which coincides with the conclusion of Theorem 4. Finally, ρ(ˇT3) decreases gradually as α=β increases.

    From Figures 7 and 9, we observe that the PˇT2G-S and PˇT3G-S methods converge faster than the other methods. The convergence behaviors of the PTαG-S and PˇT1G-S methods are comparable, and they perform better than the remaining two in terms of IT. Additionally, it follows from Figures 8 and 10 that all convergence factors become larger and closer together as θ(ξ) grows, and ρ(ˇT3) is the smallest among all convergence factors. Finally, ρ(ˇT3) increases with θ(ξ) and decreases with α=β. These results are consistent with Theorems 5–7 and Remark 4.3.
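    The spectral radii reported above can be estimated with a power-type method for the largest eigenvalue of a nonnegative tensor in the spirit of Ng, Qi, and Zhou [33]. The following is a minimal sketch for a third-order nonnegative tensor; the function name and the test tensor B are illustrative and not taken from the paper's experiments.

```python
import numpy as np

def nqz_spectral_radius(B, tol=1e-10, maxit=1000):
    """Estimate rho(B) for a third-order nonnegative tensor B (NQZ-type power method)."""
    n = B.shape[0]
    x = np.ones(n)                                 # positive starting vector
    for _ in range(maxit):
        y = np.einsum('ijk,j,k->i', B, x, x)       # y = B x^{m-1} with m = 3
        ratios = y / x**2                          # componentwise Collatz-Wielandt ratios
        lo, hi = ratios.min(), ratios.max()        # lower/upper bounds for rho(B)
        if hi - lo < tol:
            return 0.5 * (lo + hi)
        x = np.sqrt(y)                             # x_{k+1} = (B x_k^2)^{[1/2]}
        x /= x.max()                               # normalize to keep the iterates bounded
    return 0.5 * (lo + hi)

# Uniform tensor: B x^2 = 0.1 * (sum x)^2 * e, so rho(B) = 0.1 * n^2 = 0.9 for n = 3
B = 0.1 * np.ones((3, 3, 3))
print(nqz_spectral_radius(B))                      # ≈ 0.9
```

    The bracketed bounds lo and hi are the tensor analogue of the Collatz-Wielandt inequalities (see also [35]), so the iteration both estimates ρ(B) and certifies an enclosing interval.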

    In this paper, by introducing some elements in the first row of the majorization matrix associated with the coefficient tensor into the preconditioner ˜Pα [16,17], we propose a new preconditioner for the multi-linear system (1.1). Compared with the preconditioners in [14,17], the new preconditioner has a preconditioning effect on all horizontal slices of the coefficient tensor and improves their computational efficiency. Then we construct three new preconditioned Gauss-Seidel type methods for (1.1) according to three different Gauss-Seidel type tensor splittings of the preconditioned system tensor. Convergence and comparison theorems are given, which show that the proposed methods are convergent. Meanwhile, the last proposed method has a faster convergence rate than the original Gauss-Seidel method and the preconditioned Gauss-Seidel method in [17]. Also, it is proved that, for a given α, the convergence factor of the proposed PˇT3G-S method decreases monotonically with β on [0, 1]. Finally, numerical experiments are performed to illustrate that the theoretical results are correct, and that the proposed methods are efficient and have better numerical performance than the Gauss-Seidel method and the preconditioned Gauss-Seidel methods in [14,17]. As applications, we apply the proposed methods to solve the multi-linear system (1.1) arising from the higher-order Markov chain and the directed graph to further show their feasibility and superiority.
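    As background for the methods summarized above, the unpreconditioned Gauss-Seidel tensor splitting iteration in the spirit of [13,14] can be sketched as follows for a third-order strong M-tensor system A x^2 = b: with D−L the lower-triangular part of the majorization matrix M(A), one iterates x_{k+1} = (x_k^{[2]} + (D−L)^{-1}(b − A x_k^2))^{[1/2]}. This is a hedged illustration, not the paper's preconditioned algorithm; the test tensor A = s·I − B below is an assumed toy example, and the preconditioned variants would first transform A and b.

```python
import numpy as np

def axx(A, x):
    # (A x^2)_i = sum_{j,k} A[i,j,k] x_j x_k  (third order, m = 3)
    return np.einsum('ijk,j,k->i', A, x, x)

def majorization(A):
    # Majorization matrix: M(A)_{ij} = a_{ij...j}; for m = 3 this is A[i, j, j]
    idx = np.arange(A.shape[1])
    return A[:, idx, idx]

def gauss_seidel_tensor(A, b, x0, tol=1e-11, maxit=10000):
    # Gauss-Seidel splitting: E has majorization matrix D - L = tril(M(A)), F = E - A;
    # equivalently x_{k+1} = (x_k^{[2]} + (D - L)^{-1} (b - A x_k^2))^{[1/2]}.
    DL = np.tril(majorization(A))
    x = x0.copy()
    for it in range(maxit):
        r = b - axx(A, x)
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, it
        x = np.sqrt(x**2 + np.linalg.solve(DL, r))
    return x, maxit

# Illustrative strong M-tensor: A = s * I - B with B >= 0 and s > rho(B)
n = 3
I = np.zeros((n, n, n))
I[np.arange(n), np.arange(n), np.arange(n)] = 1.0
A = 2.0 * I - 0.1 * np.ones((n, n, n))
b = np.ones(n)
x, it = gauss_seidel_tensor(A, b, np.ones(n))
```

    For this A the unique positive solution is the constant vector with 1.1 x_i^2 = 1; the iterates stay positive and decrease monotonically toward it, matching the monotone convergence theory for M-tensor splittings.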

    It can be observed that the comparison and monotonic theorems (Theorems 6 and 7) are valid under the condition A x̄^{m−1} ≥ 0, which is not easy to verify in practical computation; thus, we will work on weakening this assumption in the future. Last but not least, it is also worthwhile to study more efficient preconditioners and the related preconditioned tensor splitting methods in future work.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This work was supported by the National Natural Science Foundation of China (No. 12361078), and the Guangxi Natural Science Foundation (No. 2024GXNSFAA010498). We would like to express our sincere thanks to the editor and the anonymous reviewers for their valuable suggestions and constructive comments which greatly improved the presentation of this paper.

    The authors declare there are no conflicts of interest.



    [1] L. Qi, Eigenvalues of a real supersymmetric tensor, J. Symbolic Comput., 40 (2005), 1302–1324. https://doi.org/10.1016/j.jsc.2005.05.007
    [2] H. R. Xu, D. H. Li, S. L. Xie, An equivalent tensor equation to the tensor complementarity problem with positive semi-definite Z-tensor, Optim. Lett., 13 (2019), 685–694. https://doi.org/10.1007/s11590-018-1268-4
    [3] L. B. Cui, C. Chen, W. Li, M. K. Ng, An eigenvalue problem for even order tensors with its applications, Linear Multilinear Algebra, 64 (2016), 602–621. https://doi.org/10.1080/03081087.2015.1071311
    [4] W. Li, M. K. Ng, On the limiting probability distribution of a transition probability tensor, Linear Multilinear Algebra, 62 (2014), 362–385. https://doi.org/10.1080/03081087.2013.777436
    [5] D. Liu, W. Li, S. W. Vong, Relaxation methods for solving the tensor equation arising from the higher-order Markov chains, Numer. Linear Algebra Appl., 330 (2018), 75–94. https://doi.org/10.1002/nla.2260
    [6] D. F. Gleich, L. H. Lim, Y. Yu, Multilinear pagerank, SIAM J. Matrix Anal. Appl., 36 (2015), 1507–1541. https://doi.org/10.1137/140985160
    [7] W. Ding, L. Qi, Y. Wei, M-tensors and nonsingular M-tensors, Linear Algebra Appl., 439 (2013), 3264–3278. https://doi.org/10.1016/j.laa.2013.08.038
    [8] W. Ding, Y. Wei, Solving multi-linear system with M-tensors, J. Sci. Comput., 68 (2016), 689–715. https://doi.org/10.1007/s10915-015-0156-7
    [9] L. Han, A homotopy method for solving multilinear systems with M-tensors, Appl. Math. Lett., 69 (2017), 49–54. https://doi.org/10.1016/j.aml.2017.01.019
    [10] Z. J. Xie, X. Q. Jin, Y. M. Wei, Tensor methods for solving symmetric M-tensor systems, J. Sci. Comput., 74 (2018), 412–425. https://doi.org/10.1007/s10915-017-0444-5
    [11] X. Z. Wang, M. L. Che, Y. M. Wei, Neural networks based approach solving multi-linear systems with M-tensors, Neurocomputing, 351 (2019), 33–42. https://doi.org/10.1016/j.neucom.2019.03.025
    [12] H. He, C. Ling, L. Qi, G. Zhou, A globally and quadratically convergent algorithm for solving multi-linear systems with M-tensors, J. Sci. Comput., 76 (2018), 1718–1741. https://doi.org/10.1007/s10915-018-0689-7
    [13] D. Liu, W. Li, S. W. Vong, The tensor splitting with application to solve multi-linear systems, J. Comput. Appl. Math., 330 (2018), 75–94. https://doi.org/10.1016/j.cam.2017.08.009
    [14] W. Li, D. Liu, S. W. Vong, Comparison results for splitting iterations for solving multi-linear systems, Appl. Numer. Math., 134 (2018), 105–121. https://doi.org/10.1016/j.apnum.2018.07.009
    [15] L. B. Cui, M. H. Li, Y. S. Song, Preconditioned tensor splitting iterations method for solving multi-linear systems, Appl. Math. Lett., 96 (2019), 89–94. https://doi.org/10.1016/j.aml.2019.04.019
    [16] Y. Zhang, Q. Liu, Z. Chen, Preconditioned Jacobi type method for solving multi-linear systems with M-tensors, Appl. Math. Lett., 104 (2020), 106287. https://doi.org/10.1016/j.aml.2020.106287
    [17] D. Liu, W. Li, S. W. Vong, A new preconditioned SOR method for solving multi-linear systems with an M-tensor, Calcolo, 57 (2020), 15. https://doi.org/10.1007/s10092-020-00364-8
    [18] Y. Chen, C. Li, A new preconditioned AOR method for solving multi-linear systems, Linear Multilinear Algebra, 72 (2024), 1385–1402. https://doi.org/10.1080/03081087.2023.2179586
    [19] L. B. Cui, X. Q. Zhang, S. L. Wu, A new preconditioner of the tensor splitting iterative method for solving multi-linear systems with M-tensors, Comput. Appl. Math., 39 (2020), 173. https://doi.org/10.1007/s40314-020-01194-8
    [20] K. Xie, S. X. Miao, A new preconditioner for Gauss-Seidel method for solving multi-linear systems, Jpn. J. Ind. Appl. Math., 40 (2023), 1159–1173. https://doi.org/10.1007/s13160-023-00573-y
    [21] M. Nobakht-Kooshkghazi, M. Najafi-Kalyani, Improving the Gauss-Seidel iterative method for solving multi-linear systems with M-tensor, Jpn. J. Ind. Appl. Math., 41 (2024), 1061–1077. https://doi.org/10.1007/s13160-023-00637-z
    [22] X. L. An, X. M. Lv, S. X. Miao, A new preconditioned Gauss-Seidel method for solving M-tensor multi-linear system, Jpn. J. Ind. Appl. Math., 42 (2025), 245–258. https://doi.org/10.1007/s13160-024-00670-6
    [23] X. Wang, M. Che, Y. Wei, Preconditioned tensor splitting AOR iterative methods for H-tensor equations, Numer. Linear Algebra Appl., 27 (2020), e2329. https://doi.org/10.1002/nla.2329
    [24] L. B. Cui, Y. D. Fan, Y. T. Zheng, A general preconditioner accelerated SOR-type iterative method for multi-linear systems with Z-tensors, Comput. Appl. Math., 41 (2022), 26. https://doi.org/10.1007/s40314-021-01712-2
    [25] F. P. A. Beik, M. Najafi-Kalyani, K. Jbilou, Preconditioned iterative methods for multi-linear systems based on the majorization matrix, Linear Multilinear Algebra, 70 (2022), 5827–5846. https://doi.org/10.1080/03081087.2021.1931654
    [26] Z. Jiang, J. Li, A new preconditioned AOR-type method for M-tensor equation, Appl. Numer. Math., 189 (2023), 39–52. https://doi.org/10.1016/j.apnum.2023.03.013
    [27] T. G. Kolda, B. W. Bader, Tensor decompositions and applications, SIAM Rev., 51 (2009), 455–500. https://doi.org/10.1137/07070111X
    [28] K. C. Chang, K. Pearson, T. Zhang, Perron-Frobenius theorem for nonnegative tensors, Commun. Math. Sci., 6 (2008), 507–520.
    [29] W. H. Liu, W. Li, On the inverse of a tensor, Linear Algebra Appl., 495 (2016), 199–205. https://doi.org/10.1016/j.laa.2016.01.011
    [30] M. Che, L. Qi, Y. Wei, Positive-definite tensors to nonlinear complementarity problems, J. Optim. Theory Appl., 168 (2016), 475–487. https://doi.org/10.1007/s10957-015-0773-1
    [31] W. Ding, Z. Luo, L. Qi, P-tensors, P0-tensors, and their applications, Linear Algebra Appl., 555 (2018), 336–354. https://doi.org/10.1016/j.laa.2018.06.028
    [32] P. Azimzadeh, E. Bayraktar, High order Bellman equations and weakly chained diagonally dominant tensors, SIAM J. Matrix Anal. Appl., 40 (2019), 276–298. https://doi.org/10.1137/18M1196923
    [33] M. K. Ng, L. Qi, G. Zhou, Finding the largest eigenvalue of a nonnegative tensor, SIAM J. Matrix Anal. Appl., 31 (2010), 1090–1099. https://doi.org/10.1137/09074838X
    [34] J. Y. Shao, A general product of tensors with applications, Linear Algebra Appl., 439 (2013), 2350–2366. https://doi.org/10.1016/j.laa.2013.07.010
    [35] Y. N. Yang, Q. Z. Yang, Further results for Perron-Frobenius theorem for nonnegative tensors, SIAM J. Matrix Anal. Appl., 31 (2010), 2517–2530. https://doi.org/10.1137/090778766
    [36] A. E. Raftery, A model for high-order Markov chains, J. R. Stat. Soc. Ser. B, 47 (1985), 528–539. https://doi.org/10.1111/j.2517-6161.1985.tb01383.x
    [37] Z. Xu, Q. Lu, K. Y. Zhang, X. H. An, Theories and Applications of H-matrices, Science Press, 2013.
  • © 2025 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
