1. Introduction
As one of the most important algorithms for web ranking in search engines, Google's PageRank has been studied extensively over the years [1,2,3,4,5,6,7,8]. Gleich et al. [9] extended PageRank to higher-order Markov chains and proposed the following multilinear PageRank problem:

$$x=\alpha Px^{m-1}+(1-\alpha)v, \tag{1.1}$$

where $\alpha\in(0,1)$ is a parameter and $v\in\mathbb{R}^n$ is a stochastic vector, i.e., $v\ge 0$ and $\|v\|_1=1$, with $\mathbb{R}^n$ denoting the set of all $n$-dimensional real vectors. The stochastic solution $x\in\mathbb{R}^n$ is called the multilinear PageRank vector. Here $P=(p_{i_1,i_2,\dots,i_m})$ is an $m$th-order $n$-dimensional stochastic tensor representing an $(m-1)$th-order Markov chain, namely,

$$p_{i_1,i_2,\dots,i_m}\ge 0,\qquad \sum_{i_1=1}^{n}p_{i_1,i_2,\dots,i_m}=1\quad\forall\, i_2,\dots,i_m\in\langle n\rangle,$$

where $\langle n\rangle=\{1,2,\dots,n\}$. The tensor-vector product $Px^{m-1}$ is a vector in $\mathbb{R}^n$; its definition is given in Section 2.
In particular, the multilinear PageRank model (1.1) reduces to a higher-order Markov chain when $\alpha=1$ (see [10] for the general case), and when $m=2$, (1.1) simplifies to the classical PageRank problem described in [11].
The model (1.1) can be transformed into

$$x=\bar{P}x^{m-1},$$

where $\bar{P}=\alpha P+(1-\alpha)V$ and $V$ is an $m$th-order $n$-dimensional tensor with $(V)_{i,i_2,\dots,i_m}=v_i$ for all $i,i_2,\dots,i_m\in\langle n\rangle$. Clearly, $\bar{P}$ is also a transition probability tensor.
In recent years, the multilinear PageRank problem has attracted much attention. The theory of existence and uniqueness of solutions is a fundamental basis for analyzing the convergence of algorithms. Gleich et al. [9] showed that the multilinear PageRank vector of (1.1) is unique for $\alpha<\frac{1}{m-1}$. This condition makes it easy to compute the stochastic solution of (1.1) when $\alpha<\frac{1}{2}$. However, when $\frac{1}{2}<\alpha<1$, and especially when $\alpha\approx 1$, solving (1.1) becomes increasingly challenging. This challenge inspired researchers to search for more general conditions, such as those discussed by Li et al. [12,13], Huang and Wu [14], Fasino and Tudisco [15] and Liu et al. [16]. It should be noted that these conditions cannot be compared theoretically, and no single condition is optimal in numerical experiments.
Gleich et al. [9] first proposed Newton's method for this problem. Subsequently, Meini and Poloni [17] and Guo et al. [18] proposed the Perron-Newton iterative algorithm and the multi-step Newton method, respectively. These gradient-based algorithms have shown excellent performance, but for asymmetric tensor models the gradient can be difficult and expensive to compute. Unlike gradient algorithms, non-gradient algorithms do not require gradient information about the objective function; instead, they estimate its optimal solution by other means. Non-gradient algorithms have proven effective for this problem and have attracted significant attention in recent years. For example, Gleich et al. [9] proposed the fixed-point method, the shifted fixed-point method, the inner-outer method and the inverse method. Liu et al. [19] introduced several relaxation methods. Other novel methods have been proposed by various scholars in pursuit of better convergence performance (see [20,21,22,23] and the references therein). Hence, designing fast, robust and flexible non-gradient algorithms has become a key challenge in solving the multilinear PageRank problem.
As an important class of non-gradient algorithms, the splitting algorithm has significant applications in tensor computations. For example, Liu et al. [24] proposed a tensor splitting algorithm for solving tensor equations and high-order Markov chain models. Cui et al. [25] proposed an iterative refinement method by using higher-order singular value decomposition to solve general multilinear systems. Jiang and Li [26] proposed a new preconditioner for improving the AOR-type method. Cui and Zhang [27] proposed a new constructive way to estimate bounds on the H-eigenvalues of two kinds of interval tensors. This paper will explore a tensor splitting algorithm for solving the multilinear PageRank problem.
The main contributions of this paper can be outlined as follows:
(1) The general tensor regular splitting (GTRS) iterative method is proposed to address the multilinear PageRank problem.
(2) Additionally, we offer five typical splitting methods of the GTRS iteration.
(3) We give the convergence analysis for the proposed method based on the uniqueness condition provided by Li et al. [13].
(4) Several numerical examples are provided; in most cases, the proposed method significantly outperforms the existing ones.
The remainder of this work is organized as follows. In Section 2, we present some essential definitions and key properties. By making use of the (weak) regular splitting of the coefficient matrix $I-\alpha Px^{m-2}$, we propose the GTRS iterative method in Section 3. Section 4 is devoted to the convergence analysis of the GTRS method. In Section 5, some application experiments are presented to demonstrate the effectiveness of our method. Finally, the conclusion is given in Section 6.
2. Preliminaries
In this section, we present a brief overview of the notations and definitions that are necessary for this paper.
An $m$th-order $n$-dimensional tensor $A$ with $n^m$ entries is defined as

$$A=(a_{i_1,\dots,i_m}),\qquad a_{i_1,\dots,i_m}\in\mathbb{R},\quad i_1,\dots,i_m\in\langle n\rangle,$$

where $\mathbb{R}$ is the real field. $A$ is called non-negative (positive) if $a_{i_1,\dots,i_m}\ge 0$ ($a_{i_1,\dots,i_m}>0$).
For any two matrices $A=(a_{ij}),B=(b_{ij})\in\mathbb{R}^{n\times n}$, $A\ge B$ means that $a_{ij}\ge b_{ij}$ for all $i,j\in\langle n\rangle$.
Definition 1. [13] Let $A$ be an $m$th-order $n$-dimensional tensor, and let $x$ be an $n$-dimensional vector with $i$th entry $x_i$. Then $Ax^{m-r}$ is an $r$th-order $n$-dimensional tensor given by

$$(Ax^{m-r})_{i_1,\dots,i_r}=\sum_{i_{r+1},\dots,i_m\in\langle n\rangle}a_{i_1,\dots,i_m}x_{i_{r+1}}\cdots x_{i_m}. \tag{2.1}$$
Definition 2. [13] Let $A$ be an $m$th-order $n$-dimensional tensor and let $x$ and $y$ be $n$-dimensional vectors. Then we have the following definition:
In particular, taking $r=1$ and $r=2$ in (2.1) gives $Ax^{m-1}$ and $Ax^{m-2}$. Notice that $Ax^{m-1}=Ax^{m-2}x$. Then, (2.2) can be expressed as
Definition 3. [13] Let $A$ be an $m$th-order $n$-dimensional tensor and let $x$ and $y$ be $n$-dimensional vectors. $A^{(k)}xy$ is an $n\times n$ matrix whose $(i,j)$th entry is defined as
It should be noted that (1.1) can be rewritten as

$$(I-\alpha Px^{m-2})x=(1-\alpha)v.$$
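To make the notation concrete, the following MATLAB sketch (ours, not from the original; the helper name tensor_apply and the restriction to the third-order case $m=3$ are illustrative assumptions) evaluates $Px^{m-2}$ and $Px^{m-1}$ for a tensor stored as an $n\times n\times n$ array:

```matlab
% Minimal sketch for m = 3: given a 3rd-order tensor A (n x n x n) and a
% vector x, Axm2 is the n x n matrix with (i,j) entry sum_k A(i,j,k)*x(k)
% (Definition 1 with r = 2), and Axm1 = Axm2 * x is the vector A x^{m-1}.
function [Axm1, Axm2] = tensor_apply(A, x)
    n = size(A, 1);
    Axm2 = zeros(n, n);
    for k = 1:n
        Axm2 = Axm2 + A(:, :, k) * x(k);   % contract the last index with x
    end
    Axm1 = Axm2 * x;                       % contract the remaining index
end
```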
Definition 4. A real square matrix $\hat{A}$ is called an M-matrix if it satisfies the following conditions: $\hat{A}$ is a Z-matrix (i.e., all of its off-diagonal elements $\hat{a}_{ij}\le 0$) and $\hat{A}$ can be expressed as $hI-\hat{B}$, where $\hat{B}=(\hat{b}_{ij})$ is a nonnegative matrix with $\hat{b}_{ij}\ge 0$ and spectral radius $\rho(\hat{B})<h$.
Definition 5. [28] Let $O$ be the $n\times n$ null matrix and let $\check{A}$, $\check{M}$ and $\check{N}$ be $n\times n$ real matrices. Then $\check{A}=\check{M}-\check{N}$ is a regular splitting of $\check{A}$ if $\check{M}$ is nonsingular with $\check{M}^{-1}\ge O$ and $\check{N}\ge O$; it is a weak regular splitting if $\check{M}$ is nonsingular with $\check{M}^{-1}\ge O$ and $\check{M}^{-1}\check{N}\ge O$; and it is a convergent splitting if $\check{M}$ is nonsingular with $\rho(\check{M}^{-1}\check{N})<1$.
Definition 6. Let $z\in\mathbb{R}^n$, let $0\in\mathbb{R}^n$ denote the zero vector and let $z_+=\max(z,0)$. We define the projection $\mathrm{proj}(\cdot)$ as follows:

$$\mathrm{proj}(z)=\frac{z_+}{\|z_+\|_1}.$$

It is clear that $\mathrm{proj}(z)$ is a stochastic vector.
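As a quick illustration (our own sketch, assuming the explicit formula $\mathrm{proj}(z)=z_+/\|z_+\|_1$ recovered above from the proof of Lemma 6), the projection can be implemented in MATLAB as:

```matlab
% Projection of Definition 6: take the entrywise positive part and
% rescale so that the entries sum to one, giving a stochastic vector.
function y = proj(z)
    zp = max(z, 0);
    y  = zp / norm(zp, 1);
end
```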
3.
The GTRS iterative method
Building upon the success of the general inner-outer iterative method (see [29]) for solving the classical PageRank problem, we introduce the GTRS iterative method as an efficient way to compute the multilinear PageRank vector.
Let

$$I-\alpha Px^{m-2}=\bar{M}_x-\bar{N}_x \tag{3.1}$$

be a (weak) regular splitting. Based on (3.1), the iterative method, i.e., GTRS, for (1.1) is constructed as follows:

with $0<\phi<1$ and $b=(1-\alpha)v$. The scheme (3.2) is derived from the matrix splitting

$$I-\alpha Px_k^{m-2}=\check{M}_{x_k}-\check{N}_{x_k}, \tag{3.3}$$

where $\check{M}_{x_k}=\bar{M}_{x_k}-\phi\bar{N}_{x_k}$ and $\check{N}_{x_k}=(1-\phi)\bar{N}_{x_k}$.
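Since the display of scheme (3.2) did not survive extraction, the following MATLAB sketch records one plausible reading of a single GTRS sweep for $m=3$: solve $\check{M}_{x_k}y_k=\check{N}_{x_k}x_k+b$ and then project. The helper names (gtrs_step, split_fn) and this reading of (3.2) are our assumptions rather than the paper's verbatim algorithm; tensor_apply and proj are the sketches from Section 2.

```matlab
% Hedged sketch of one GTRS sweep for m = 3, assuming scheme (3.2) reads
% (Mbar - phi*Nbar)*y_k = (1-phi)*Nbar*x_k + b, followed by a projection.
% split_fn returns the (weak) regular splitting pair (Mbar, Nbar) of
% I - alpha*P*x^{m-2}, evaluated at the current iterate x.
function x_new = gtrs_step(P, x, alpha, v, phi, split_fn)
    b = (1 - alpha) * v;
    [~, Axm2] = tensor_apply(P, x);        % Axm2 = P*x^{m-2}
    [Mbar, Nbar] = split_fn(Axm2, alpha);  % e.g., Mbar = I, Nbar = alpha*Axm2
    y = (Mbar - phi * Nbar) \ ((1 - phi) * (Nbar * x) + b);
    x_new = proj(y);                       % projection from Definition 6
end
```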
Let $\check{U}_x\ge 0$ be the strictly upper-triangular part of the matrix $Px^{m-2}$, and let $\check{D}_x\ge 0$ be a diagonal matrix and $\check{L}_x\ge 0$ a lower-triangular matrix such that

$$Px^{m-2}=\check{D}_x+\check{L}_x+\check{U}_x.$$
Next, we give several typical practical choices of the matrices $\bar{M}_x$ and $\bar{N}_x$. Derived from the above matrix splitting (3.3), the well-known iterative methods for solving (1.1) listed below are obtained (a MATLAB sketch collecting all five choices follows the list):
(1) Let $\bar{M}_{x_k}=I$ and $\bar{N}_{x_k}=\alpha Px_k^{m-2}$; then, we obtain the fixed-point iteration (denoted by GTRS-FP):
(2) Let $\bar{M}_{x_k}=I-\beta Px_k^{m-2}$ and $\bar{N}_{x_k}=(\alpha-\beta)Px_k^{m-2}$; then, we get another method, i.e., the inner-outer iteration (denoted by GTRS-IO):
where $0<\beta<\alpha$.
(3) Let $\bar{M}_{x_k}=I-\alpha\check{D}_{x_k}$ and $\bar{N}_{x_k}=\alpha(\check{L}_{x_k}+\check{U}_{x_k})$; then, we derive the Jacobi iteration (denoted by GTRS-JCB):
(4) Let $\bar{M}_{x_k}=I-\alpha\check{D}_{x_k}-\alpha\check{U}_{x_k}$ and $\bar{N}_{x_k}=\alpha\check{L}_{x_k}$; then, we get the Gauss-Seidel iteration (denoted by GTRS-GS):
(5) Let $\bar{M}_{x_k}=\frac{1}{\omega}(I-\alpha\check{D}_{x_k}-\omega\alpha\check{L}_{x_k})$ and $\bar{N}_{x_k}=\frac{1}{\omega}[(1-\omega)(I-\alpha\check{D}_{x_k})+\omega\alpha\check{U}_{x_k}]$; then, we have the successive overrelaxation iteration (denoted by GTRS-SOR):
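For concreteness, the five choices can be packaged as a single MATLAB helper (our own sketch under the $m=3$ assumption; the name make_splitting is hypothetical). Here S stands for $Px_k^{m-2}$, and D, Lo, Up for its diagonal, strictly lower- and strictly upper-triangular parts:

```matlab
% Sketch of the five splitting choices (GTRS-FP/IO/JCB/GS/SOR) for m = 3.
% S = P*x_k^{m-2}; beta (0 < beta < alpha) is used only by 'IO' and
% omega only by 'SOR'.
function [Mbar, Nbar] = make_splitting(S, alpha, kind, beta, omega)
    I  = eye(size(S));
    D  = diag(diag(S));      % diagonal part
    Lo = tril(S, -1);        % strictly lower-triangular part
    Up = triu(S,  1);        % strictly upper-triangular part
    switch kind
        case 'FP'   % fixed point
            Mbar = I;                          Nbar = alpha * S;
        case 'IO'   % inner-outer
            Mbar = I - beta * S;               Nbar = (alpha - beta) * S;
        case 'JCB'  % Jacobi
            Mbar = I - alpha * D;              Nbar = alpha * (Lo + Up);
        case 'GS'   % Gauss-Seidel
            Mbar = I - alpha * D - alpha * Up; Nbar = alpha * Lo;
        case 'SOR'  % successive overrelaxation
            Mbar = (I - alpha * D - omega * alpha * Lo) / omega;
            Nbar = ((1 - omega) * (I - alpha * D) + omega * alpha * Up) / omega;
    end
end
```

For instance, split_fn = @(S, a) make_splitting(S, a, 'GS') plugs the Gauss-Seidel choice into the gtrs_step sketch above.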
Finally, the whole algorithm of the GTRS iterative method can be outlined as follows:
4. Convergence analysis
In this section, we analyze the convergence of the GTRS iterative method. We establish the convergence of the GTRS iteration by using the uniqueness condition for the solution of the multilinear PageRank problem given in [13]. First, we give several lemmas that will be used later.
Lemma 1. [13] Let $x$ and $y$ be $n$-dimensional stochastic vectors, and let $J^{(k)}$ $(k=2,3,\dots,m)$ be $m$th-order $n$-dimensional tensors defined as
where $\sigma^{(k)}_{i_1,i_2,\dots,i_{k-1},i_{k+1},\dots,i_m}\in\mathbb{R}$ for any $i_l\in\langle n\rangle$, $l=1,2,\dots,k-1,k+1,\dots,m$ and $k=2,3,\dots,m$; then,
where $\Delta x=x-y$.
Lemma 2. [13] Let $P$ be an $m$th-order stochastic tensor and let $v$ be a stochastic vector; then, the multilinear PageRank model (1.1) has a unique solution if

$$\alpha<\frac{1}{\min\{\mu,\nu\}},$$

where $\mu=\min_{J^{(k)},\,k=2,3,\dots,m}\mu(J^{(2)},\dots,J^{(m)})$ and $\nu=\min_{J^{(k)},\,k=2,3,\dots,m}\nu(J^{(2)},\dots,J^{(m)})$.
Lemma 3. Let P be an mth-order n-dimensional stochastic tensor, and let y and z be n-dimensional stochastic vectors; then, we have
where $\gamma=\min\{\bar{\gamma},\breve{\gamma}\}$ with $\bar{\gamma}=1-\min_{i_3,\dots,i_m\in\langle n\rangle}\sum_{i=1}^{n}\min_{i_2\in\langle n\rangle}p_{i,i_2,\dots,i_m}$ and $\breve{\gamma}=\max_{i_3,\dots,i_m\in\langle n\rangle}\sum_{i=1}^{n}\max_{i_2\in\langle n\rangle}p_{i,i_2,\dots,i_m}-1$.
Proof. Let $\Delta y=y-z$ with $i$th entry $\Delta y_i$, and let $J^{(2)}$ be defined as in Lemma 1; we have
Substituting $\sigma^{(2)}_{i_1,i_3,\dots,i_m}=\min_{i_2}p_{i_1,i_2,\dots,i_m}$ and $\sigma^{(2)}_{i_1,i_3,\dots,i_m}=\max_{i_2}p_{i_1,i_2,\dots,i_m}$ into (4.1), respectively, it follows that
and
where
Let $\gamma=\min\{\bar{\gamma},\breve{\gamma}\}$; we have
This completes the proof.
□
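For the third-order case, the constant $\gamma$ of Lemma 3 can be evaluated directly; the following MATLAB lines are our own sketch under the $m=3$ assumption (P stored as an n-by-n-by-n array):

```matlab
% gamma of Lemma 3 for m = 3:
% gbar  = 1 - min over i3 of sum_i ( min over i2 of P(i,i2,i3) ),
% gbrev =     max over i3 of sum_i ( max over i2 of P(i,i2,i3) ) - 1.
gbar  = 1 - min(sum(squeeze(min(P, [], 2)), 1));
gbrev = max(sum(squeeze(max(P, [], 2)), 1)) - 1;
gam   = min(gbar, gbrev);   % 'gam' avoids shadowing MATLAB's gamma function
```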
Lemma 4. [13] Let $P$ be an $m$th-order $n$-dimensional stochastic tensor, and let $x$, $y$ and $z$ be $n$-dimensional stochastic vectors. For any set $J^{(k)}$ $(k=3,\dots,m)$, we have
where
Lemma 5. [19] Let $\hat{y}=(\hat{y}_i)\in\mathbb{R}^n$ with $\sum_{i=1}^{n}\hat{y}_i=1$, and let $z=(z_i)$ be a stochastic vector. If $y=\mathrm{proj}(\hat{y})$, then $\|y-z\|_1\le\|\hat{y}-z\|_1$.
Lemma 6. Let $\hat{y}=(\hat{y}_i)\in\mathbb{R}^n$ and let $z=(z_i)$ be a stochastic vector. If $y=\mathrm{proj}(\hat{y})$, then $\|y-z\|_1\le 2\|\hat{y}-z\|_1$.
Proof. Since $y=\mathrm{proj}(\hat{y})=\frac{\hat{y}_+}{\|\hat{y}_+\|_1}$, it follows that
The proof is completed.
□
Theorem 1. Let $P$ be an $m$th-order $n$-dimensional stochastic tensor, let $\alpha<\frac{1}{\min\{\mu,\nu\}}$ and let $x$ be the exact solution of (1.1). Then, for an arbitrary initial stochastic guess $x_0$, the iterative sequence $\{x_k\}_{k=0}^{\infty}$ generated by (3.2) is convergent and

$$\|x_{k+1}-x\|_1\le\delta_{x_k}\|x_k-x\|_1,$$

where $\mu_{x_k}=\frac{(1-\phi)\|\bar{M}_{x_k}^{-1}\bar{N}_{x_k}\|_1+\alpha\bar{\mu}(J^{(3)},\dots,J^{(m)})\|\bar{M}_{x_k}^{-1}\|_1}{1-\phi\|\bar{M}_{x_k}^{-1}\bar{N}_{x_k}\|_1}$ and $\delta_{x_k}=2\mu_{x_k}$.
Proof. By (3.2), we have
Note that x is the solution of (1.1); hence,
Let $\hat{e}_k=y_k-x$ and $e_k=x_k-x$. Subtracting (4.3) from (4.2) yields
Hence
Taking the 1-norm of both sides of (4.4) and applying Lemma 4, we get
Thus
From (4.5) and Lemma 6, we obtain
Then the GTRS iteration converges linearly with convergence rate $\delta_{x_k}$ when $\mu_{x_k}<\frac{1}{2}$. □
Remark 1. In Theorem 1, the value of $\delta_{x_k}$ depends on $x_k$, which makes the condition $\mu_{x_k}<\frac{1}{2}$ difficult to verify in practical scenarios. Therefore, we provide convergence analyses for the specific splittings (3.4)-(3.8) of the GTRS iterative method.
Theorem 2. Let $P$ be an $m$th-order $n$-dimensional stochastic tensor, let $x$ be a solution of (1.1), and let $\alpha<\min\left\{\frac{1}{\phi\bar{\mu}(J^{(3)},\dots,J^{(m)})+\gamma},\frac{1}{\min\{\mu,\nu\}}\right\}$; then, for an arbitrary initial stochastic guess $x_0$, the iterative sequence $\{x_k\}_{k=0}^{\infty}$ generated by (3.4) is convergent and

$$\|x_{k+1}-x\|_1\le\varsigma\|x_k-x\|_1,$$

where $\varsigma=\frac{\phi\alpha\bar{\mu}(J^{(3)},\dots,J^{(m)})+(\alpha-\phi\alpha)\gamma}{1-\phi\alpha\gamma}$.
Proof. It is easy to verify that $\alpha<\frac{1}{\min\{\mu,\nu\}}$ when $\alpha<\min\left\{\frac{1}{\phi\bar{\mu}(J^{(3)},\dots,J^{(m)})+\gamma},\frac{1}{\min\{\mu,\nu\}}\right\}$. According to Lemma 2, the solution $x$ is unique.
Set $\hat{e}_k=y_k-x$ and $e_k=x_k-x$. From (3.4), it follows that
Noting that $x$ is the solution of (1.1) yields
Subtracting (4.8) from (4.7), we have
By (4.7), we have
On the other hand
Therefore, we get
Combining (4.9) and Lemmas 3 and 4 together gives
It is noted that
Then by (4.10) and Lemma 5, we get
From the above proof, we have $\varsigma<1$; the proof is completed.
□
Theorem 3. Let $P$ be an $m$th-order $n$-dimensional stochastic tensor, let $x$ be a solution of (1.1) and let $\alpha<\min\left\{\frac{1-(1-\phi)\beta\bar{\mu}(J^{(3)},\dots,J^{(m)})}{\phi\bar{\mu}(J^{(3)},\dots,J^{(m)})+\gamma},\frac{1}{\min\{\mu,\nu\}}\right\}$; then, for an arbitrary initial stochastic guess $x_0$, the iterative sequence $\{x_k\}_{k=0}^{\infty}$ generated by (3.5) is convergent and

$$\|x_{k+1}-x\|_1\le\zeta\|x_k-x\|_1,$$

where $\zeta=\frac{(\beta+\phi\alpha-\phi\beta)\bar{\mu}(J^{(3)},\dots,J^{(m)})+(1-\phi)(\alpha-\beta)\gamma}{1-(\beta+\phi\alpha-\phi\beta)\gamma}$.

Proof. Clearly, by Lemma 2, we have the unique solution $x$ if $\alpha<\min\left\{\frac{1-(1-\phi)\beta\bar{\mu}(J^{(3)},\dots,J^{(m)})}{\phi\bar{\mu}(J^{(3)},\dots,J^{(m)})+\gamma},\frac{1}{\min\{\mu,\nu\}}\right\}$.
By (3.5), we have
Let $\hat{e}_k=y_k-x$ and $e_k=x_k-x$. Then, we can get the following equation:
By the same proof as in Theorem 2, we get
Similarly, we have
Hence
By combining (4.11) with Lemmas 3 and 4, we get
which implies that
By (4.12) and Lemma 5, we have
It is easy to check that $\zeta<1$, and the proof is completed.
□
Theorem 4. Let $P$ be an $m$th-order $n$-dimensional stochastic tensor, let $x$ be a solution of (1.1) and let $\alpha<\min\left\{\frac{1}{2\gamma(1-\phi)+3\bar{\mu}(J^{(3)},\dots,J^{(m)})},\frac{1}{\min\{\mu,\nu\}}\right\}$; then, for an arbitrary initial stochastic guess $x_0$, the iterative sequence $\{x_k\}_{k=0}^{\infty}$ generated by (3.6) is convergent and

$$\|x_{k+1}-x\|_1\le\xi\|x_k-x\|_1,$$

where $\xi=\frac{2(1-\phi)\alpha\gamma+\alpha\bar{\mu}(J^{(3)},\dots,J^{(m)})}{1-\alpha\bar{\mu}(J^{(3)},\dots,J^{(m)})}$.

Proof. By Lemma 2 and the condition that $\alpha<\min\left\{\frac{1}{2\gamma(1-\phi)+3\bar{\mu}(J^{(3)},\dots,J^{(m)})},\frac{1}{\min\{\mu,\nu\}}\right\}$, we know that $x$ is unique.
Let $\hat{e}_k=y_k-x$ and $e_k=x_k-x$. By (3.6), we obtain
then we obtain
Combining this with Lemmas 3 and 4 leads to
Thus
By (4.13) and Lemma 6, we have
Notice that $\xi<1$. This completes the proof of the theorem.
□
Theorem 5. Let $P$ be an $m$th-order $n$-dimensional stochastic tensor, let $x$ be a solution of (1.1) and let $\alpha<\min\left\{\frac{1}{2\gamma(1-\phi)+3\bar{\mu}(J^{(3)},\dots,J^{(m)})},\frac{1}{\min\{\mu,\nu\}}\right\}$; then, for an arbitrary initial stochastic guess $x_0$, the iterative sequence $\{x_k\}_{k=0}^{\infty}$ generated by (3.7) is convergent and

$$\|x_{k+1}-x\|_1\le\xi\|x_k-x\|_1,$$

with $\xi$ as in Theorem 4.
Proof. It is similar to the proof of Theorem 4. □
Theorem 6. Let $P$ be an $m$th-order $n$-dimensional stochastic tensor, let $x$ be a solution of (1.1), let $\alpha<\min\left\{\frac{2\omega+(1-\omega)\phi}{3(1-\phi)+\omega\gamma+2\omega\bar{\mu}(J^{(3)},\dots,J^{(m)})},\frac{1}{\min\{\mu,\nu\}}\right\}$ and let $\omega\in\left(\frac{1-\phi}{2-\phi},1\right)$; then, for an arbitrary initial stochastic guess $x_0$, the iterative sequence $\{x_k\}_{k=0}^{\infty}$ generated by (3.8) is convergent and

$$\|x_{k+1}-x\|_1\le\vartheta\|x_k-x\|_1,$$

where $\vartheta=\frac{2\omega\alpha\bar{\mu}(J^{(3)},\dots,J^{(m)})+(1-\phi)(1-\omega)(1+\alpha)+\omega\alpha(1-\phi)}{1-\phi(1-\omega)-\alpha(1-\phi)-\omega\alpha\gamma}$.

Proof. According to Lemma 2, it is obvious that the solution $x$ is unique when $\alpha<\min\left\{\frac{2\omega+(1-\omega)\phi}{3(1-\phi)+\omega\gamma+2\omega\bar{\mu}(J^{(3)},\dots,J^{(m)})},\frac{1}{\min\{\mu,\nu\}}\right\}$ and $\omega\in\left(\frac{1-\phi}{2-\phi},1\right)$.
By (3.8), we have
Taking $\hat{e}_k=y_k-x$ and $e_k=x_k-x$ and arguing as in the proofs of the above theorems, we get
By Lemma 4, the above inequality admits the following estimate:
Hence
According to (4.15) and Lemma 6, we have
and $\vartheta<1$. This completes the proof.
□
Remark 2. For the uniqueness conditions stated in other papers, such as those presented in [12,13], we can also offer corresponding convergence analysis for our proposed algorithm. Here, we omit the detailed description.
5. Numerical experiments
In this section, we present numerical experiments to demonstrate the effectiveness of the proposed algorithm.
All experiments were performed using MATLAB R2016a on a Windows 10 64-bit computer equipped with a 1.00 GHz Intel® Core™ i4-29210M CPU and 8.00 GB of RAM. We use three metrics to evaluate the effectiveness of the proposed method: the iteration number (denoted by IT), the computing time in seconds (denoted by CPU) and the error (denoted by err). The err metric is defined by
We employ large damping factors, for which iterative methods typically face serious convergence difficulties when solving multilinear PageRank problems. Therefore, we tested the damping parameter values $\alpha=0.95$, 0.99, 0.995 and 0.999. The maximum number of iterations was set to 1,000, the initial value was $x_0=v=\frac{1}{n}e$, where $e$ is the $n$-dimensional vector of all ones, and the termination tolerance was $\varepsilon=10^{-10}$. The stopping criterion for all tested methods was $\|x_k-\alpha Px_k^{m-1}-b\|_1<\varepsilon$. We searched the parameter $\phi$ (or $\omega$) from 0.1 to 1 with step size 0.01 and $\beta\in(0,\alpha)$ from 0.1 to $\alpha$ with step size 0.01; also, we set
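In code, this stopping test amounts to checking the 1-norm residual of (1.1); a minimal MATLAB sketch (reusing the hypothetical tensor_apply helper from Section 2) is:

```matlab
% Residual-based stopping test used for all methods, with b = (1-alpha)*v:
% stop as soon as || x_k - alpha*P*x_k^{m-1} - b ||_1 < 1e-10.
b   = (1 - alpha) * v;
res = norm(x - alpha * tensor_apply(P, x) - b, 1);
converged = (res < 1e-10);
```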
Next, we provide a numerical analysis of the GTRS iteration discussed in Section 3 using four examples. We compare our approach with the relaxation methods presented by Liu et al. [19] and with the TARS method; the TARS iterative method is described in Algorithm 2. For brevity, we denote Algorithms 1–4 in the work of Liu et al. by Al1, Al2, Al3 and Al4, respectively.
Example 1. In this example, we considered three tensors from the practical problems in [30,31]. Example (i) involves DNA sequence data, while Example (ii) applies interpersonal relationship data, and Example (iii) involves physicists' occupational mobility data. The numerical results are listed in Tables 1–3, where the minimum CPU times in each row are indicated in bold font.
(i)
(ii)
(iii)
From Tables 1–3, we have the following observations and remarks:
(1) Although our algorithms require more iterative steps, their computing times are typically shorter than those of Al1–Al4 and TARS, provided that the parameters are chosen appropriately.
(2) Considering the least CPU times obtained from GTRS-FP, GTRS-IO, GTRS-JCB, GTRS-GS and GTRS-SOR, we observe that they account for approximately 0%, 91.7%, 0%, 0% and 8.3%, respectively. Therefore, in this example, GTRS-IO appears to be the most efficient algorithm among the five tested ones.
(3) Among the five splittings of the GTRS iteration, GTRS-JCB may incur higher CPU time than the other methods, but it requires fewer iterative steps. This indicates that GTRS-JCB converges in fewer iterations, while the other methods may need more iterations to achieve the same convergence performance.
Next, we demonstrate the relationship between the iteration count and different values of the parameters $\alpha$ and $\phi$ in the GTRS iteration based on the inner-outer splitting, using Example 1(ii). Choosing $\beta\in\{0.2,0.4,0.6,0.8\}$ and varying $\phi$ from 0.1 to 1 with step size 0.01, we obtain Figure 1. It shows that the GTRS iteration requires fewer iteration steps for larger $\phi$, which is more evident for larger $\alpha$, such as $\alpha=0.999$. The iteration number also changes significantly with the choice of $\beta$, especially for $\beta=0.2$.
Example 2. [12,19] Let $\Gamma$ be a directed graph with node set $V=\langle n\rangle=D\cup P$, where $D$ and $P$ denote the sets of dangling nodes and pairwise connected nodes, respectively. Denote by $n_p$ $(n_p\ge 2)$ the number of nodes in $P$, that is, $P=\{1,2,\dots,n_p\}$. A nonnegative tensor $A$ is constructed as follows:

where $a_{i_1i_2\cdots i_m}$ is taken randomly from $(0,1)$. Normalizing the entries via $p_{i_1i_2\cdots i_m}=\frac{a_{i_1i_2\cdots i_m}}{\sum_{i_1=1}^{n}a_{i_1i_2\cdots i_m}}$ yields a stochastic tensor $P=(p_{i_1i_2\cdots i_m})$.
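For the third-order case, this normalization is one line of MATLAB (our sketch; bsxfun is used because the experiments ran on R2016a, which predates implicit expansion):

```matlab
% Example 2 normalization for m = 3: divide every mode-1 fiber of the
% nonnegative tensor A by its sum so that sum_{i1} P(i1,i2,i3) = 1.
P = bsxfun(@rdivide, A, sum(A, 1));
```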
Based on the data presented in Table 4, there are observations that can be made as follows:
(1) The GTRS iteration requires significantly less CPU time than Al1–Al4 and TARS. This suggests that the GTRS iterative method achieves higher computational efficiency when appropriate parameters are chosen.
(2) Among all tested algorithms, for larger $n_p$, such as $n_p=120$, GTRS-SOR exhibits shorter CPU times than the other methods.
(3) For n=60, in terms of the CPU times, GTRS-IO achieves the best performance when α = 0.9 or 0.95, while GTRS-FP, GTRS-JCB, GTRS-GS and GTRS-SOR outperform GTRS-IO when α = 0.99 or 0.999. This implies that the behavior and reliability of these methods do not exhibit a monotonic relationship with α.
Furthermore, we display the relationship between CPU time and err at each step in Figure 2 for $n=150$ and $n_p=120$. The err values are plotted on a base-10 logarithmic scale. Figure 2 shows that the GTRS iterative method performs better than Al1–Al4 and TARS in terms of both CPU time and error.
Example 3. [13] We generated third-order stochastic tensors of dimension $n=100$, 150 and 200 using the MATLAB function rand. The results are reported in Table 5.
From Table 5, we observe the following:
(1) If the parameters are chosen suitably, the GTRS iterative method can achieve shorter CPU times than Al1–Al4 and TARS.
(2) For GTRS-JCB, the best performance is achieved when $\alpha=0.99$ or 0.999 and $n=100$ or 150. Similarly, GTRS-GS performs best when $n=150$, while GTRS-SOR achieves the best results when $n=200$. Notably, GTRS-IO shows the best performance when $\alpha=0.95$. GTRS-FP requires the fewest iterative steps.
(3) In terms of CPU times, GTRS-IO outperforms GTRS-JCB when α=0.9 or 0.95. However, GTRS-JCB, GTRS-GS and GTRS-SOR outperform GTRS-IO when α=0.99 or 0.999.
Similarly, we present the test results for the correlation between CPU times and err at each iterative step for n=200 in Figure 3.
As depicted in Figure 3, it is evident that the GTRS iterative method consistently outperforms Al1–Al4 and TARS in terms of both CPU time and error measurement.
Example 4. [13] Let $t$ denote the density of zero entries in a tensor, so that $P$ is a positive tensor when $t=0$ and a zero tensor when $t=1$. Test sparse tensors with different $t$ were generated using the MATLAB function randsample.
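A possible way to generate such test tensors (our own sketch; the paper's exact construction may differ) is:

```matlab
% Sketch of Example 4's test data for m = 3: draw a random n x n x n
% tensor, force a fraction t of randomly chosen entries to zero via
% randsample, then normalize the mode-1 fibers as in Example 2.
n = 200;  t = 0.4;
A = rand(n, n, n);
idx = randsample(n^3, round(t * n^3));   % linear indices forced to zero
A(idx) = 0;
P = bsxfun(@rdivide, A, sum(A, 1));      % assumes every fiber sum is positive
```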
In Example 4, we generated third-order tensors of dimension 200 with densities ranging from 0.1 to 0.9 in increments of 0.1. The numerical results are presented in Table 6. The conclusions drawn from Table 6 are as follows:
(1) When suitable parameters are found, our proposed algorithm consistently outperforms the methods presented in [19] and the TARS method.
(2) Out of the 36 test cases, in terms of minimum CPU times, GTRS-SOR, GTRS-GS, GTRS-IO, GTRS-JCB and GTRS-FP account for about 36.1%, 0%, 44.5%, 19.4% and 0%, respectively. Based on these results, GTRS-SOR appears to be the most competitive algorithm, particularly when $t=0.4$ or 0.6.
(3) When t is less than 0.5, GTRS-JCB has better convergence performance than GTRS-GS and GTRS-FP. In most cases, GTRS-IO demonstrates a shorter CPU time than the other splitting methods of the GTRS iterative type, especially when t = 0.8.
Next, we discuss the relationship between the iterative steps and ϕ with different values of α. Given the superior convergence properties of the GTRS iterative method with successive overrelaxation splitting, and the significant variation in the number of iterations in Example 4, we selected this splitting for the experiment. Specifically, we conducted the experiment by using ω={0.3,0.5,0.7,0.9}, t={0.4,0.6} and varying ϕ from 0.1 to 1 in increments of 0.01. We have plotted the associated comparison results in Figures 4 and 5. The results indicate that increasing ϕ leads to better performance of the GTRS iterative method, as evidenced by the reduction in the number of iterations required.
Figures 6–9 show the convergence history of the GTRS iteration for different tensor densities and values of $\alpha$. From Figures 6–9, we find that the proposed algorithms converge faster than the existing methods in most cases, with GTRS-IO performing best. Note that GTRS-FP, GTRS-IO and GTRS-JCB always require less computing time than the existing methods; this is because they attain smaller errors, which reduces the number of iterations required for convergence.
Different parameter settings can lead to optimal performance for each GTRS iteration. By selecting the appropriate values for α and n, the computational efficiency and accuracy of the GTRS iteration can be improved. It is important to note that these optimal parameter settings may vary depending on the specific problem and dataset being analyzed. In general, the GTRS iteration can achieve better convergence performance than Al1–Al4 and TARS in terms of solving the multilinear PageRank problem.
6. Conclusions
In this paper, based on the (weak) regular splitting of the matrix $I-\alpha Px^{m-2}$, we presented the GTRS iterative method for solving the multilinear PageRank problem (1.1) and proved its overall convergence. In addition, we provided several splittings of the GTRS iteration and established the corresponding convergence theorems. Numerical experiments indicated that, with appropriately chosen parameters, the GTRS iterative method is superior to the existing methods in [19] and to the TARS method. Overall, this research contributes to the advancement of non-gradient algorithms and their practical use in tensor computations. In future work, we plan to investigate the latest uniqueness conditions to establish better convergence properties.
Use of AI tools declaration
The authors declare that they have not used artificial intelligence tools in the creation of this paper.
Acknowledgments
This work was supported in part by the Postgraduate Educational Innovation Program of Guangdong (No. 2021SFKC030) and Guangzhou Basic and Applied Basic Research Project (No. 2023A04J1322).
The authors gratefully acknowledge the anonymous reviewers for their constructive and valuable suggestions, which greatly improved this paper.
Conflict of interest
The authors declare no conflict of interest.