Research article

Solution-tube and existence results for fourth-order differential equations system

  • In the present paper, we examine the existence of solutions to fourth-order differential equation systems when the L1-Carathéodory function is on the right-hand side. A concept of solution-tube for these issues is presented. The concepts of upper and lower solutions for fourth-order differential equations are extended to systems owing to this idea.

    Citation: Bouharket Bendouma, Fatima Zohra Ladrani, Keltoum Bouhali, Ahmed Hammoudi, Loay Alkhalifa. Solution-tube and existence results for fourth-order differential equations system[J]. AIMS Mathematics, 2024, 9(11): 32831-32848. doi: 10.3934/math.20241571




    As the most important algorithm for web ranking in search engines, Google's PageRank has been extensively studied over the years [1,2,3,4,5,6,7,8]. Gleich et al. [9] extended PageRank to higher-order Markov chains and proposed the following multilinear PageRank problem:

    x = αPx^{m−1} + (1−α)v, (1.1)

    where α ∈ (0,1) is a parameter and v ∈ R^n is a stochastic vector, i.e., v ≥ 0 and ‖v‖_1 = 1, where R^n is the set of all n-dimensional real vectors. The stochastic solution x ∈ R^n is called the multilinear PageRank vector. Here P = (p_{i_1,i_2,…,i_m}) (i_1,i_2,…,i_m ∈ ⟨n⟩) is an mth-order n-dimensional stochastic tensor representing an (m−1)th-order Markov chain, namely,

    p_{i_1,i_2,…,i_m} ≥ 0,  ∑_{i_1=1}^{n} p_{i_1,i_2,…,i_m} = 1,

    where ⟨n⟩ = {1,2,…,n}. The tensor-vector product Px^{m−1} is a vector in R^n and its definition is given in Section 2.
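For concreteness, here is a small numerical sketch (not from the paper; the data and helper name are illustrative) of the product Px^{m−1} for m = 3, where (Px^2)_i = ∑_{j,k} p_{i,j,k} x_j x_k:

```python
import numpy as np

def tensor_apply(P, x):
    """Compute P x^{m-1} for an mth-order tensor stored as an ndarray:
    repeatedly contract the trailing index of P with x."""
    m = P.ndim
    for _ in range(m - 1):
        P = P @ x          # ndarray @ vector contracts the last axis
    return P

# A 3rd-order, 2-dimensional stochastic tensor: every column over the
# first index sums to one.
P = np.zeros((2, 2, 2))
P[:, 0, 0] = [0.3, 0.7]; P[:, 0, 1] = [0.5, 0.5]
P[:, 1, 0] = [0.9, 0.1]; P[:, 1, 1] = [0.2, 0.8]
x = np.array([0.4, 0.6])    # a stochastic vector
y = tensor_apply(P, x)      # P x^2, a vector in R^2
print(y, y.sum())           # a stochastic tensor maps a stochastic x to a stochastic y
```

Since ∑_i p_{i,j,k} = 1 for every (j,k), the output vector again sums to one, which is the property the model relies on.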

    In particular, the multilinear PageRank model (1.1) reduces to a higher-order Markov chain when we take α = 1 (see [10] for the general case). When m = 2, (1.1) simplifies to the classical PageRank problem described in [11].

    The model (1.1) can be transformed into

    x = P̄x^{m−1},

    where P̄ = αP + (1−α)V and V is an mth-order n-dimensional tensor with (V)_{i,i_2,…,i_m} = v_i for i,i_2,…,i_m ∈ ⟨n⟩. Clearly, P̄ is also a transition probability tensor.
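A quick numerical check (an illustrative sketch with made-up data, not code from the paper) that P̄ = αP + (1−α)V is again a stochastic tensor:

```python
import numpy as np

alpha, n = 0.85, 3
rng = np.random.default_rng(0)
P = rng.random((n, n, n))
P /= P.sum(axis=0)            # normalize so each column over i_1 sums to 1
v = np.full(n, 1.0 / n)       # a uniform stochastic vector
V = np.zeros((n, n, n))
V[...] = v[:, None, None]     # (V)_{i,i2,i3} = v_i for all i2, i3
Pbar = alpha * P + (1 - alpha) * V
print(np.allclose(Pbar.sum(axis=0), 1.0))   # True: Pbar is stochastic
```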

    In recent years, the multilinear PageRank problem has attracted much attention. The theory of existence and uniqueness of solutions for the multilinear PageRank problem is a fundamental basis for analyzing the convergence of algorithms. Gleich et al. [9] pointed out that the multilinear PageRank vector in (1.1) is unique for α < 1/(m−1). This condition can be easily applied to compute the stochastic solution of (1.1) when α < 1/2. However, when 1/2 < α < 1, and especially when α → 1, it becomes increasingly challenging to solve (1.1). This challenge inspired researchers to search for more general conditions, such as those discussed by Li et al. [12,13], Huang and Wu [14], Fasino and Tudisco [15] and Liu et al. [16]. It should be noted that these conditions cannot be compared theoretically, and no single condition is optimal in numerical experiments.

    Gleich et al. first proposed Newton's method to solve this problem in [9]. Subsequently, Meini and Poloni [17] and Guo et al. [18] proposed the Perron-Newton iterative algorithm and the multi-step Newton method, respectively. These algorithms involve gradient computation and have shown excellent performance. However, when solving asymmetric tensor models, the difficulty of gradient computation can lead to expensive calculations. Unlike gradient algorithms, non-gradient algorithms typically do not require the gradient information of the objective function; instead, they estimate the optimal solution of the objective function through other means. Non-gradient algorithms have proven to be effective in solving this problem and have gained significant attention in recent years. For example, Gleich et al. [9] proposed the fixed-point method, the shifted fixed-point method, the inner-outer method and the inverse method. Liu et al. [19] introduced several relaxation methods. Other novel methods have been proposed by various scholars in an effort to achieve better convergence performance (see [20,21,22,23] and the references therein). Hence, the question of how to design fast, robust and flexible non-gradient algorithms has become the key challenge in solving the multilinear PageRank problem.

    As an important class of non-gradient algorithms, the splitting algorithm has significant applications in tensor computations. For example, Liu et al. [24] proposed a tensor splitting algorithm for solving tensor equations and high-order Markov chain models. Cui et al. [25] proposed an iterative refinement method by using higher-order singular value decomposition to solve general multilinear systems. Jiang and Li [26] proposed a new preconditioner for improving the AOR-type method. Cui and Zhang [27] proposed a new constructive way to estimate bounds on the H-eigenvalues of two kinds of interval tensors. This paper will explore a tensor splitting algorithm for solving the multilinear PageRank problem.

    The main contributions of this paper can be outlined as follows:

    (1) The general tensor regular splitting (GTRS) iterative method is proposed to address the multilinear PageRank problem.

    (2) Additionally, we offer five typical splitting methods of the GTRS iteration.

    (3) We give the convergence analysis for the proposed method based on the uniqueness condition provided by Li et al. [13].

    (4) Several numerical examples are provided and the outcomes significantly outperform the existing ones in most cases.

    The remainder of this work is organized as follows. In Section 2, we present some essential definitions and key properties. By making use of the (weak) regular splitting of the coefficient matrix I − αPx^{m−2}, we propose the GTRS iterative method in Section 3. Section 4 is devoted to the convergence analysis of the GTRS method. In Section 5, some application experiments are presented to demonstrate the effectiveness of our method. Finally, the conclusion is given in Section 6.

    In this section, we present a brief overview of the notations and definitions that are necessary for this paper.

    An mth-order n-dimensional tensor A with n^m entries is defined as

    A = (a_{i_1,…,i_m}),  a_{i_1,…,i_m} ∈ R,  i_j ∈ ⟨n⟩,  j = 1,…,m,

    where R is the real field. A is called non-negative (positive) if a_{i_1,…,i_m} ≥ 0 (a_{i_1,…,i_m} > 0).

    For any two matrices A = (a_{ij}), B = (b_{ij}) ∈ R^{n×n}, A ≥ B means that a_{ij} ≥ b_{ij} for i,j ∈ ⟨n⟩.

    Definition 1. [13] Let A be an mth-order n-dimensional tensor, let x be an n-dimensional vector and let x_i represent the ith entry of x. Then Ax^{m−r} is the rth-order n-dimensional tensor given by

    (Ax^{m−r})_{i,i_2,…,i_r} = ∑_{i_{r+1},…,i_m ∈ ⟨n⟩} a_{i,i_2,…,i_r,i_{r+1},…,i_m} x_{i_{r+1}} ⋯ x_{i_m}. (2.1)

    Definition 2. [13] Let A be an mth-order n-dimensional tensor and let x and y be n-dimensional vectors. Then we define

    A(x^{m−r} − y^{m−r}) := Ax^{m−r} − Ay^{m−r}. (2.2)

    In particular, when r = 1 and r = 2, we obtain Ax^{m−1} and Ax^{m−2} from (2.1). Notice that Ax^{m−1} = (Ax^{m−2})x. Then, (2.2) can be expressed as

    A(x^{m−1} − y^{m−1}) := Ax^{m−1} − Ay^{m−1},  A(x^{m−2} − y^{m−2}) := Ax^{m−2} − Ay^{m−2}.

    Definition 3. [13] Let A be an mth-order n-dimensional tensor and let x and y be n-dimensional vectors. A^{(k)}_{xy} is the n×n matrix whose (i,j)th entry is defined as

    (A^{(k)}_{xy})_{i,j} = ∑_{i_2,…,i_{k−1},i_{k+1},…,i_m ∈ ⟨n⟩} a_{i,i_2,…,i_{k−1},j,i_{k+1},…,i_m} x_{i_2} ⋯ x_{i_{k−1}} y_{i_{k+1}} ⋯ y_{i_m}.

    It should be noted that (1.1) can be rewritten as

    (I − αPx^{m−2})x = (1−α)v. (2.3)

    Definition 4. A real square matrix Â is called an M-matrix if it satisfies the following conditions: Â is a Z-matrix (i.e., all of its off-diagonal elements satisfy â_{ij} ≤ 0) and Â can be expressed as hI − B̂, where B̂ = (b̂_{ij}) is a nonnegative matrix (b̂_{ij} ≥ 0) whose spectral radius satisfies ρ(B̂) < h.

    Definition 5. [28] Let O be the n×n null matrix and let Ǎ, M̌ and Ň be n×n real matrices. Ǎ = M̌ − Ň is a regular splitting of the matrix Ǎ if M̌ is nonsingular with M̌^{−1} ≥ O and Ň ≥ O. Similarly, Ǎ = M̌ − Ň is a weak regular splitting of Ǎ if M̌ is nonsingular with M̌^{−1} ≥ O and M̌^{−1}Ň ≥ O. Ǎ = M̌ − Ň is a convergent splitting of Ǎ if M̌ is nonsingular with ρ(M̌^{−1}Ň) < 1.

    Definition 6. Let z ∈ R^n, let 0 ∈ R^n denote the zero vector and let z_+ = max(z, 0), taken componentwise. We define the projection proj(·) as follows:

    proj(z) = z_+ / ‖z_+‖_1.

    It is clear that proj(z) is a stochastic vector.
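Definition 6 translates directly into code. The sketch below (illustrative, not from the paper) assumes z_+ has at least one positive entry so that ‖z_+‖_1 ≠ 0:

```python
import numpy as np

def proj(z):
    """Truncate negative entries and renormalize in the 1-norm."""
    zp = np.maximum(z, 0.0)    # z_+ = max(z, 0), componentwise
    return zp / zp.sum()       # z_+ / ||z_+||_1 (zp is nonnegative)

y = proj(np.array([0.7, -0.2, 0.9]))
print(y, y.sum())              # entries are nonnegative and sum to 1
```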

    Building upon the success of the general inner-outer iterative method (see [29]) for solving the typical PageRank problem, we introduce the GTRS iterative method as an efficient solution for computing the multilinear PageRank vector.

    Let

    I − αPx^{m−2} = M̄_x − N̄_x (3.1)

    be a (weak) regular splitting. Based on (3.1), the iterative method, i.e., GTRS, for (1.1) is constructed as follows:

    (M̄_{x_k} − ϕN̄_{x_k}) y_{k+1} = (1−ϕ) N̄_{x_k} x_k + b,  x_{k+1} = proj(y_{k+1}),  k = 0,1,2,…, (3.2)

    with 0 < ϕ < 1 and b = (1−α)v. The scheme (3.2) is derived from the matrix splitting

    I − αPx_k^{m−2} = M̌_{x_k} − Ň_{x_k},

    where M̌_{x_k} = M̄_{x_k} − ϕN̄_{x_k} and Ň_{x_k} = (1−ϕ)N̄_{x_k}.

    Let Ǔ_x be the strictly upper-triangular part of the matrix Px^{m−2}, and let Ď_x be a diagonal matrix and Ľ_x a lower-triangular matrix such that

    Px^{m−2} = Ď_x + Ǔ_x + Ľ_x. (3.3)

    Next, we give several typical practical choices of the matrices M̄_x and N̄_x. Derived from the matrix splitting (3.3), the following well-known iterative methods for solving (1.1) are obtained:

    (1) Let M̄_{x_k} = I and N̄_{x_k} = αPx_k^{m−2}; then, we obtain the fixed-point iteration (denoted by GTRS-FP):

    (I − ϕαPx_k^{m−2}) y_{k+1} = (α − ϕα) Px_k^{m−2} x_k + b. (3.4)

    (2) Let M̄_{x_k} = I − βPx_k^{m−2} and N̄_{x_k} = (α−β)Px_k^{m−2}; then, we get another method, i.e., the inner-outer iteration (denoted by GTRS-IO):

    (I − βPx_k^{m−2} − ϕ(α−β)Px_k^{m−2}) y_{k+1} = (1−ϕ)(α−β) Px_k^{m−2} x_k + b, (3.5)

    where 0 < β < α.

    (3) Let M̄_{x_k} = I − αĎ_{x_k} and N̄_{x_k} = α(Ľ_{x_k} + Ǔ_{x_k}); then, we derive the Jacobi iteration (denoted by GTRS-JCB):

    (I − αĎ_{x_k} − ϕα(Ľ_{x_k} + Ǔ_{x_k})) y_{k+1} = α(1−ϕ)(Ľ_{x_k} + Ǔ_{x_k}) x_k + b. (3.6)

    (4) Let M̄_{x_k} = I − αĎ_{x_k} − αǓ_{x_k} and N̄_{x_k} = αĽ_{x_k}; then, we get the Gauss-Seidel iteration (denoted by GTRS-GS):

    (I − αĎ_{x_k} − αǓ_{x_k} − ϕαĽ_{x_k}) y_{k+1} = α(1−ϕ) Ľ_{x_k} x_k + b. (3.7)

    (5) Let M̄_{x_k} = (1/ω)(I − αĎ_{x_k} − ωαĽ_{x_k}) and N̄_{x_k} = (1/ω)[(1−ω)(I − αĎ_{x_k}) + ωαǓ_{x_k}]; then, we have the successive overrelaxation iteration (denoted by GTRS-SOR):

    (I − αĎ_{x_k} − ωαĽ_{x_k} − ϕ(1−ω)(I − αĎ_{x_k}) − ϕωαǓ_{x_k}) y_{k+1} = (1−ϕ)[(1−ω)(I − αĎ_{x_k}) + ωαǓ_{x_k}] x_k + ωb. (3.8)
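One can verify that choice (5) really splits the coefficient matrix, i.e., that M̄ − N̄ = I − αPx^{m−2}. A small numerical check (illustrative only; a random matrix stands in for Px^{m−2}, and the plain diagonal/strictly-triangular instance of (3.3) is used):

```python
import numpy as np

rng = np.random.default_rng(1)
n, alpha, omega = 4, 0.9, 0.7
A = rng.random((n, n))                 # stands in for Px^{m-2}
D = np.diag(np.diag(A))                # diagonal part
L = np.tril(A, -1)                     # strictly lower-triangular part
U = np.triu(A, 1)                      # strictly upper-triangular part
I = np.eye(n)
M = (I - alpha * D - omega * alpha * L) / omega
N = ((1 - omega) * (I - alpha * D) + omega * alpha * U) / omega
print(np.allclose(M - N, I - alpha * A))   # True: a valid splitting
```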

    Finally, the whole algorithm of the GTRS iterative method can be outlined as follows:

    Algorithm 1 The GTRS iterative method
    Require: M̄_x, N̄_x, α, ϕ, β, ω, v, maximum iteration number k_max, termination tolerance ε, and an initial stochastic vector x_0
    Ensure: x
    1: k ← 1
    2: while ‖x_k − αPx_k^{m−1} − b‖ ≥ ε do
    3:  if k < k_max then
    4:   M̌_{x_k} ← M̄_{x_k} − ϕN̄_{x_k}
    5:   Ň_{x_k} ← (1−ϕ)N̄_{x_k}
    6:   y_{k+1} ← M̌_{x_k}^{−1} Ň_{x_k} x_k + M̌_{x_k}^{−1} b
    7:   x_{k+1} ← proj(y_{k+1})
    8:   k ← k + 1
    9:  end if
    10: end while
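The following is a minimal sketch of Algorithm 1 for m = 3 with the fixed-point splitting M̄_{x_k} = I, N̄_{x_k} = αPx_k^{m−2} (i.e., the GTRS-FP variant). It is written in Python/NumPy rather than the authors' MATLAB; the helper names and the random test tensor are illustrative, and α is kept below 1/(m−1) so convergence is guaranteed:

```python
import numpy as np

def proj(z):
    """Definition 6: truncate negatives and renormalize in the 1-norm."""
    zp = np.maximum(z, 0.0)
    return zp / zp.sum()

def gtrs_fp(P, alpha, v, phi=0.5, tol=1e-10, kmax=1000):
    """GTRS iteration with the fixed-point splitting (GTRS-FP), m = 3."""
    b = (1.0 - alpha) * v
    x = v.copy()
    for _ in range(kmax):
        A = P @ x                                  # the matrix Px^{m-2}
        if np.linalg.norm(x - alpha * A @ x - b, 1) < tol:
            break                                  # residual small enough
        M = np.eye(len(v)) - phi * alpha * A       # M_check = Mbar - phi*Nbar
        rhs = (1.0 - phi) * alpha * A @ x + b      # N_check x_k + b
        x = proj(np.linalg.solve(M, rhs))
    return x

rng = np.random.default_rng(2)
n = 5
P = rng.random((n, n, n))
P /= P.sum(axis=0)                 # make the tensor stochastic in i_1
alpha = 0.45                       # inside the guaranteed-uniqueness regime for m = 3
v = np.full(n, 1.0 / n)
x = gtrs_fp(P, alpha, v)
res = np.linalg.norm(x - alpha * (P @ x) @ x - (1.0 - alpha) * v, 1)
print(res < 1e-9, abs(x.sum() - 1.0) < 1e-9)
```

For α < 1/(m−1) the fixed-point map is already a 1-norm contraction, so this sketch converges quickly; the large-α regimes studied in the experiments are where the tuned splittings (3.4)-(3.8) matter.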

    In this section, we analyze the convergence of the GTRS iterative method. We establish the convergence of the GTRS iteration by using the uniqueness condition for the solution of the multilinear PageRank problem given in [13]. First, we give several lemmas that will be used later.

    Lemma 1. [13] Let x and y be n-dimensional stochastic vectors, and let J^{(k)} (k = 2,3,…,m) be mth-order n-dimensional tensors defined as

    (J^{(k)})_{i_1,…,i_k,…,i_m} = σ^{(k)}_{i_1,i_2,…,i_{k−1},i_{k+1},…,i_m},  i_k ∈ ⟨n⟩,

    where σ^{(k)}_{i_1,i_2,…,i_{k−1},i_{k+1},…,i_m} ∈ R for any i_l ∈ ⟨n⟩, l = 1,2,…,k−1,k+1,…,m and k = 2,3,…,m; then,

    J^{(k)}_{xy} Δx = 0,

    where Δx = x − y.

    Lemma 2. [13] Let P be an mth-order stochastic tensor and v be a stochastic vector; then, the multilinear PageRank model (1.1) has a unique solution if

    α < 1 / min{μ, ν},

    where μ = min_{J^{(k)}, k=2,3,…,m} μ(J^{(2)},…,J^{(m)}) and ν = min_{J^{(k)}, k=2,3,…,m} ν(J^{(2)},…,J^{(m)}).

    Lemma 3. Let P be an mth-order n-dimensional stochastic tensor, and let y and z be n-dimensional stochastic vectors; then, we have

    ‖Px^{m−2}(y − z)‖_1 ≤ γ‖y − z‖_1,

    where γ = min{γ̄, γ̆} with γ̄ = 1 − min_{i_3,…,i_m∈⟨n⟩} ∑_{i=1}^n min_{i_2∈⟨n⟩} p_{i,i_2,…,i_m} and γ̆ = max_{i_3,…,i_m∈⟨n⟩} ∑_{i=1}^n max_{i_2∈⟨n⟩} p_{i,i_2,…,i_m} − 1.

    Proof. Let Δy = y − z and let Δy_i be the ith entry of Δy, and let J^{(2)} be defined as in Lemma 1; we have

    ‖Px^{m−2}Δy‖_1 = ∑_{i_1=1}^n |∑_{i_2,…,i_m∈⟨n⟩} (p_{i_1,i_2,…,i_m} − σ^{(2)}_{i_1,i_3,…,i_m}) Δy_{i_2} x_{i_3} ⋯ x_{i_m}| ≤ ∑_{i_1=1}^n ∑_{i_2,…,i_m∈⟨n⟩} |p_{i_1,i_2,…,i_m} − σ^{(2)}_{i_1,i_3,…,i_m}| |Δy_{i_2}| x_{i_3} ⋯ x_{i_m} ≤ max_{i_2,…,i_m∈⟨n⟩} ∑_{i_1=1}^n |p_{i_1,i_2,…,i_m} − σ^{(2)}_{i_1,i_3,…,i_m}| ‖Δy‖_1. (4.1)

    Substituting σ^{(2)}_{i_1,i_3,…,i_m} = min_{i_2} p_{i_1,i_2,…,i_m} and σ^{(2)}_{i_1,i_3,…,i_m} = max_{i_2} p_{i_1,i_2,…,i_m} into (4.1), respectively, it follows that

    ‖Px^{m−2}Δy‖_1 ≤ γ̄‖Δy‖_1

    and

    ‖Px^{m−2}Δy‖_1 ≤ γ̆‖Δy‖_1,

    where

    γ̄ = 1 − min_{i_3,…,i_m∈⟨n⟩} ∑_{i=1}^n min_{i_2∈⟨n⟩} p_{i,i_2,…,i_m} and γ̆ = max_{i_3,…,i_m∈⟨n⟩} ∑_{i=1}^n max_{i_2∈⟨n⟩} p_{i,i_2,…,i_m} − 1.

    Letting γ = min{γ̄, γ̆}, we have

    ‖Px^{m−2}Δy‖_1 ≤ γ‖Δy‖_1.

    This completes the proof.

    Lemma 4. [13] Let P be an mth-order n-dimensional stochastic tensor, and let x, y and z be n-dimensional stochastic vectors. For any set of tensors J^{(k)} (k = 3,…,m), we have

    ‖P(x^{m−2} − y^{m−2})z‖_1 ≤ μ̄(J^{(3)},…,J^{(m)}) ‖Δx‖_1,

    where Δx = x − y and

    μ̄(J^{(3)},…,J^{(m)}) = max_{i_2,i_k∈⟨n⟩} ∑_{k=3}^m max_{i_3,…,i_{k−1},i_{k+1},…,i_m∈⟨n⟩} ∑_{i_1=1}^n |p_{i_1,i_2,…,i_k,…,i_m} − σ^{(k)}_{i_1,i_2,…,i_{k−1},i_{k+1},…,i_m}|.

    Lemma 5. [19] Let ŷ = (ŷ_i) ∈ R^n with ∑_{i=1}^n ŷ_i = 1, and let z = (z_i) be a stochastic vector. If y = proj(ŷ), then ‖y − z‖_1 ≤ ‖ŷ − z‖_1.

    Lemma 6. Let ŷ = (ŷ_i) ∈ R^n and let z = (z_i) be a stochastic vector. If y = proj(ŷ), then ‖y − z‖_1 ≤ 2‖ŷ − z‖_1.

    Proof. Since y = proj(ŷ) = ŷ_+ / ‖ŷ_+‖_1, it follows that

    ‖y − z‖_1 = ‖proj(ŷ) − z‖_1 = ‖ŷ_+/‖ŷ_+‖_1 − z‖_1 = ‖(1/‖ŷ_+‖_1 − 1)ŷ_+ + (ŷ_+ − z)‖_1 ≤ |1 − ‖ŷ_+‖_1| + ‖ŷ_+ − z‖_1 ≤ ‖ŷ − z‖_1 + ‖ŷ − z‖_1 = 2‖ŷ − z‖_1,

    where we used ‖ŷ_+ − z‖_1 ≤ ‖ŷ − z‖_1 (since z ≥ 0) and |1 − ‖ŷ_+‖_1| = |‖z‖_1 − ‖ŷ_+‖_1| ≤ ‖ŷ_+ − z‖_1 ≤ ‖ŷ − z‖_1.

    The proof is completed.
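Lemma 6 can also be checked numerically. The randomized sketch below (illustrative, not part of the paper) samples arbitrary ŷ and stochastic z and verifies the factor-2 bound:

```python
import numpy as np

def proj(z):
    """Truncate negatives and renormalize in the 1-norm (Definition 6)."""
    zp = np.maximum(z, 0.0)
    return zp / zp.sum()

rng = np.random.default_rng(3)
ok = True
for _ in range(1000):
    n = rng.integers(2, 8)
    y_hat = rng.normal(size=n)              # arbitrary vector in R^n
    if np.maximum(y_hat, 0).sum() == 0:
        continue                            # proj undefined without a positive part
    z = rng.random(n); z /= z.sum()         # a stochastic vector
    lhs = np.abs(proj(y_hat) - z).sum()     # ||proj(y_hat) - z||_1
    rhs = 2 * np.abs(y_hat - z).sum()       # 2 ||y_hat - z||_1
    ok &= lhs <= rhs + 1e-12
print(ok)   # True: the bound of Lemma 6 holds in every sampled case
```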

    Theorem 1. Let P be an mth-order n-dimensional stochastic tensor, let α < 1/min{μ, ν} and let x^* be the exact solution of (1.1). Then, for an arbitrary initial stochastic guess x_0, the iterative sequence {x_k}_{k=0}^∞ generated by (3.2) is convergent and

    ‖x_k − x^*‖_1 ≤ δ_{x_{k−1}} ⋯ δ_{x_0} ‖x_0 − x^*‖_1,

    where μ_{x_k} = [(1−ϕ)‖M̄_{x_k}^{−1}N̄_{x_k}‖_1 + αμ̄(J^{(3)},…,J^{(m)})‖M̄_{x_k}^{−1}‖_1] / (1 − ϕ‖M̄_{x_k}^{−1}N̄_{x_k}‖_1) and δ_{x_k} = 2μ_{x_k}.

    Proof. By (3.2), we have

    (M̄_{x_k} − ϕN̄_{x_k}) y_{k+1} = (1−ϕ) N̄_{x_k} x_k + b. (4.2)

    Note that x^* is the solution of (1.1); hence,

    (M̄_{x^*} − ϕN̄_{x^*}) x^* = (1−ϕ) N̄_{x^*} x^* + b. (4.3)

    Let ê_k = y_k − x^* and e_k = x_k − x^*. Subtracting (4.3) from (4.2) yields

    M̄_{x_k} ê_{k+1} = ϕN̄_{x_k} ê_{k+1} + (1−ϕ)N̄_{x_k} e_k + (N̄_{x_k} − M̄_{x_k}) x^* + (M̄_{x^*} − N̄_{x^*}) x^* = ϕN̄_{x_k} ê_{k+1} + (1−ϕ)N̄_{x_k} e_k + (αPx_k^{m−2} − I) x^* + (I − αP(x^*)^{m−2}) x^* = ϕN̄_{x_k} ê_{k+1} + (1−ϕ)N̄_{x_k} e_k + αP(x_k^{m−2} − (x^*)^{m−2}) x^*.

    Hence

    (I − ϕM̄_{x_k}^{−1}N̄_{x_k}) ê_{k+1} = (1−ϕ) M̄_{x_k}^{−1}N̄_{x_k} e_k + αM̄_{x_k}^{−1} P(x_k^{m−2} − (x^*)^{m−2}) x^*. (4.4)

    By taking the 1-norm of both sides of (4.4) and using Lemma 4, we get

    (1 − ϕ‖M̄_{x_k}^{−1}N̄_{x_k}‖_1) ‖ê_{k+1}‖_1 ≤ [(1−ϕ)‖M̄_{x_k}^{−1}N̄_{x_k}‖_1 + αμ̄(J^{(3)},…,J^{(m)})‖M̄_{x_k}^{−1}‖_1] ‖e_k‖_1.

    Thus

    ‖ê_{k+1}‖_1 ≤ [(1−ϕ)‖M̄_{x_k}^{−1}N̄_{x_k}‖_1 + αμ̄(J^{(3)},…,J^{(m)})‖M̄_{x_k}^{−1}‖_1] / (1 − ϕ‖M̄_{x_k}^{−1}N̄_{x_k}‖_1) ‖e_k‖_1 = μ_{x_k} ‖e_k‖_1. (4.5)

    From (4.5) and Lemma 6, we obtain

    ‖e_{k+1}‖_1 ≤ 2‖ê_{k+1}‖_1 ≤ δ_{x_k} ‖e_k‖_1 ≤ δ_{x_k} δ_{x_{k−1}} ⋯ δ_{x_0} ‖e_0‖_1. (4.6)

    Then the GTRS iteration converges linearly with convergence rate δ_{x_k} when μ_{x_k} < 1/2.

    Remark 1. In Theorem 1, the value of δ_{x_k} depends on x_k, which is difficult to estimate in practical scenarios. Therefore, we provide convergence analysis for the specific splittings (3.4)-(3.8) of the GTRS iterative method.

    Theorem 2. Let P be an mth-order n-dimensional stochastic tensor, let x^* be a solution of (1.1) and let α < min{1/(ϕμ̄(J^{(3)},…,J^{(m)}) + γ), 1/min{μ, ν}}; then, for an arbitrary initial stochastic guess x_0, the iterative sequence {x_k}_{k=0}^∞ generated by (3.4) is convergent and

    ‖x_k − x^*‖_1 ≤ ς^k ‖x_0 − x^*‖_1,

    where ς = [ϕαμ̄(J^{(3)},…,J^{(m)}) + (α − ϕα)γ] / (1 − ϕαγ).

    Proof. It is easy to verify that α < 1/min{μ, ν} when α < min{1/(ϕμ̄(J^{(3)},…,J^{(m)}) + γ), 1/min{μ, ν}}. According to Lemma 2, the solution x^* is unique.

    Set ê_k = y_k − x^* and e_k = x_k − x^*. From (3.4), it follows that

    y_{k+1} = ϕαPx_k^{m−2} y_{k+1} + (α − ϕα) Px_k^{m−2} x_k + b. (4.7)

    Noting that x^* is the solution of (1.1) yields

    x^* = ϕαP(x^*)^{m−2} x^* + (α − ϕα) P(x^*)^{m−2} x^* + b. (4.8)

    Subtracting (4.8) from (4.7), we have

    ê_{k+1} = ϕαPx_k^{m−2} ê_{k+1} + αP(x_k^{m−2} − (x^*)^{m−2}) x^* + (α − ϕα) Px_k^{m−2} e_k. (4.9)

    By (4.7), we have

    ∑_{i=1}^n ((I − ϕαPx_k^{m−2}) y_{k+1})_i = (α − ϕα) ∑_{i=1}^n ∑_{i_2,…,i_m=1}^n p_{i,i_2,…,i_m} x_{k,i_2} ⋯ x_{k,i_m} + ∑_{i=1}^n b_i = α − ϕα + 1 − α = 1 − ϕα.

    On the other hand,

    ∑_{i=1}^n ((I − ϕαPx_k^{m−2}) y_{k+1})_i = ∑_{i=1}^n y_{k+1,i} − ϕα ∑_{i=1}^n ∑_{i_2,i_3,…,i_m∈⟨n⟩} p_{i,i_2,…,i_m} y_{k+1,i_2} x_{k,i_3} ⋯ x_{k,i_m} = ∑_{i=1}^n y_{k+1,i} − ϕα ∑_{i=1}^n y_{k+1,i}.

    Therefore, we get

    ∑_{i=1}^n y_{k+1,i} = 1,  ∑_{i=1}^n ê_{k+1,i} = 0. (4.10)

    Combining (4.9) with Lemmas 3 and 4 gives

    ‖ê_{k+1}‖_1 ≤ ϕαγ‖ê_{k+1}‖_1 + ϕαμ̄(J^{(3)},…,J^{(m)})‖e_k‖_1 + (α − ϕα)γ‖e_k‖_1.

    It follows that

    ‖ê_{k+1}‖_1 ≤ [ϕαμ̄(J^{(3)},…,J^{(m)}) + (α − ϕα)γ] / (1 − ϕαγ) ‖e_k‖_1 = ς‖e_k‖_1.

    Then, by (4.10) and Lemma 5, we get

    ‖e_{k+1}‖_1 ≤ ‖ê_{k+1}‖_1 ≤ ς‖e_k‖_1 ≤ ς^{k+1}‖e_0‖_1.

    By the above proof, we have ς < 1; hence, the proof is completed.

    Theorem 3. Let P be an mth-order n-dimensional stochastic tensor, let x^* be a solution of (1.1) and let α < min{[1 − (1−ϕ)βμ̄(J^{(3)},…,J^{(m)})] / [ϕμ̄(J^{(3)},…,J^{(m)}) + γ], 1/min{μ, ν}}; then, for an arbitrary initial stochastic guess x_0, the iterative sequence {x_k}_{k=0}^∞ generated by (3.5) is convergent and

    ‖x_k − x^*‖_1 ≤ ζ^k ‖x_0 − x^*‖_1,

    where ζ = [(β + ϕα − ϕβ)μ̄(J^{(3)},…,J^{(m)}) + (1−ϕ)(α−β)γ] / (1 − (β + ϕα − ϕβ)γ).

    Proof. Clearly, by Lemma 2, we have a unique solution x^* if α < min{[1 − (1−ϕ)βμ̄(J^{(3)},…,J^{(m)})] / [ϕμ̄(J^{(3)},…,J^{(m)}) + γ], 1/min{μ, ν}}.

    By (3.5), we have

    y_{k+1} = (β + ϕα − ϕβ) Px_k^{m−2} y_{k+1} + (1−ϕ)(α−β) Px_k^{m−2} x_k + b.

    Let ê_k = y_k − x^* and e_k = x_k − x^*. Then, we can get the following equation:

    ê_{k+1} = (β + ϕα − ϕβ) Px_k^{m−2} ê_{k+1} + αP(x_k^{m−2} − (x^*)^{m−2}) x^* + (1−ϕ)(α−β) Px_k^{m−2} e_k. (4.11)

    By the same argument as in the proof of Theorem 2, we get

    ∑_{i=1}^n (y_{k+1} − (β + ϕα − ϕβ) Px_k^{m−2} y_{k+1})_i = (1−ϕ)(α−β) ∑_{i=1}^n ∑_{i_2,…,i_m=1}^n p_{i,i_2,…,i_m} x_{k,i_2} ⋯ x_{k,i_m} + ∑_{i=1}^n b_i = (1−ϕ)(α−β) + 1 − α = 1 − β − ϕα + ϕβ.

    Similarly, we have

    ∑_{i=1}^n (y_{k+1} − (β + ϕα − ϕβ) Px_k^{m−2} y_{k+1})_i = ∑_{i=1}^n y_{k+1,i} − (β + ϕα − ϕβ) ∑_{i=1}^n ∑_{i_2,i_3,…,i_m∈⟨n⟩} p_{i,i_2,…,i_m} y_{k+1,i_2} x_{k,i_3} ⋯ x_{k,i_m} = (1 − (β + ϕα − ϕβ)) ∑_{i=1}^n y_{k+1,i}.

    Hence

    ∑_{i=1}^n y_{k+1,i} = 1,  ∑_{i=1}^n ê_{k+1,i} = 0. (4.12)

    By combining (4.11) with Lemmas 3 and 4, we get

    ‖ê_{k+1}‖_1 ≤ (β + ϕα − ϕβ)γ‖ê_{k+1}‖_1 + (β + ϕα − ϕβ)μ̄(J^{(3)},…,J^{(m)})‖e_k‖_1 + (1−ϕ)(α−β)γ‖e_k‖_1,

    which implies that

    ‖ê_{k+1}‖_1 ≤ [(β + ϕα − ϕβ)μ̄(J^{(3)},…,J^{(m)}) + (1−ϕ)(α−β)γ] / (1 − (β + ϕα − ϕβ)γ) ‖e_k‖_1 = ζ‖e_k‖_1.

    By (4.12) and Lemma 5, we have

    ‖e_{k+1}‖_1 ≤ ‖ê_{k+1}‖_1 ≤ ζ‖e_k‖_1 ≤ ζ^{k+1}‖e_0‖_1.

    It is easy to check that ζ < 1, and the proof is completed.

    Theorem 4. Let P be an mth-order n-dimensional stochastic tensor, let x^* be a solution of (1.1) and let α < min{1/(2γ(1−ϕ) + 3μ̄(J^{(3)},…,J^{(m)})), 1/min{μ, ν}}; then, for an arbitrary initial stochastic guess x_0, the iterative sequence {x_k}_{k=0}^∞ generated by (3.6) is convergent and

    ‖x_k − x^*‖_1 ≤ ξ^k ‖x_0 − x^*‖_1,

    where ξ = 2[(1−ϕ)αγ + αμ̄(J^{(3)},…,J^{(m)})] / (1 − αγ).

    Proof. By Lemma 2 and the condition α < min{1/(2γ(1−ϕ) + 3μ̄(J^{(3)},…,J^{(m)})), 1/min{μ, ν}}, we know that x^* is unique.

    Let ê_k = y_k − x^* and e_k = x_k − x^*. By (3.6), we obtain

    y_{k+1} = αĎ_{x_k} y_{k+1} + ϕα(Ľ_{x_k} + Ǔ_{x_k}) y_{k+1} + α(1−ϕ)(Ľ_{x_k} + Ǔ_{x_k}) x_k + b;

    then we obtain

    ê_{k+1} = (αĎ_{x_k} + ϕα(Ľ_{x_k} + Ǔ_{x_k})) ê_{k+1} + α(1−ϕ)(Ľ_{x_k} + Ǔ_{x_k}) e_k + αP(x_k^{m−2} − (x^*)^{m−2}) x^* ≤ αPx_k^{m−2} ê_{k+1} + α(1−ϕ) Px_k^{m−2} e_k + αP(x_k^{m−2} − (x^*)^{m−2}) x^*.

    Combining this with Lemmas 3 and 4 leads to

    ‖ê_{k+1}‖_1 ≤ αγ‖ê_{k+1}‖_1 + (1−ϕ)αγ‖e_k‖_1 + αμ̄(J^{(3)},…,J^{(m)})‖e_k‖_1.

    Thus

    ‖ê_{k+1}‖_1 ≤ [(1−ϕ)αγ + αμ̄(J^{(3)},…,J^{(m)})] / (1 − αγ) ‖e_k‖_1 = (1/2)ξ‖e_k‖_1. (4.13)

    By (4.13) and Lemma 6, we have

    ‖e_{k+1}‖_1 ≤ 2‖ê_{k+1}‖_1 ≤ ξ‖e_k‖_1 ≤ ξ^{k+1}‖e_0‖_1.

    Notice that ξ < 1. This completes the proof of the theorem.

    Theorem 5. Let P be an mth-order n-dimensional stochastic tensor, let x^* be a solution of (1.1) and let α < min{1/(2γ(1−ϕ) + 3μ̄(J^{(3)},…,J^{(m)})), 1/min{μ, ν}}; then, for an arbitrary initial stochastic guess x_0, the iterative sequence {x_k}_{k=0}^∞ generated by (3.7) is convergent and

    ‖x_k − x^*‖_1 ≤ ξ^k ‖x_0 − x^*‖_1.

    Proof. It is similar to the proof of Theorem 4.

    Theorem 6. Let P be an mth-order n-dimensional stochastic tensor, let x^* be a solution of (1.1), α < min{[2ω + (1−ω)ϕ − 1] / [3(1−ϕ) + ωγ + 2ωμ̄(J^{(3)},…,J^{(m)})], 1/min{μ, ν}} and ω ∈ ((1−ϕ)/(2−ϕ), 1); then, for an arbitrary initial stochastic guess x_0, the iterative sequence {x_k}_{k=0}^∞ generated by (3.8) is convergent and

    ‖x_k − x^*‖_1 ≤ ϑ^k ‖x_0 − x^*‖_1,

    where ϑ = 2[ωαμ̄(J^{(3)},…,J^{(m)}) + (1−ϕ)(1−ω)(1+α) + ωα(1−ϕ)] / (1 − ϕ(1−ω) − α(1−ϕ) − ωαγ).

    Proof. According to Lemma 2, it is obvious that the solution x^* is unique when α < min{[2ω + (1−ω)ϕ − 1] / [3(1−ϕ) + ωγ + 2ωμ̄(J^{(3)},…,J^{(m)})], 1/min{μ, ν}} and ω ∈ ((1−ϕ)/(2−ϕ), 1).

    By (3.8), we have

    y_{k+1} = (αĎ_{x_k} + ωαĽ_{x_k} + ϕ(1−ω)(I − αĎ_{x_k}) + ϕωαǓ_{x_k}) y_{k+1} + (1−ϕ)[(1−ω)(I − αĎ_{x_k}) + ωαǓ_{x_k}] x_k + ωb.

    Taking ê_k = y_k − x^* and e_k = x_k − x^*, by an argument analogous to that for the above theorems, we get

    ê_{k+1} = (αĎ_{x_k} + ωαĽ_{x_k} + ϕ(1−ω)(I − αĎ_{x_k}) + ϕωαǓ_{x_k}) ê_{k+1} + (1−ϕ)[(1−ω)(I − αĎ_{x_k}) + ωαǓ_{x_k}] e_k + ωαP(x_k^{m−2} − (x^*)^{m−2}) x^* = [ϕ(1−ω)I + α(1−ϕ)Ď_{x_k} + ωαĽ_{x_k} + ωαϕ(Ď_{x_k} + Ǔ_{x_k})] ê_{k+1} + (1−ϕ)[(1−ω)(I − αĎ_{x_k}) + ωαǓ_{x_k}] e_k + ωαP(x_k^{m−2} − (x^*)^{m−2}) x^* ≤ [ϕ(1−ω)I + α(1−ϕ)Ď_{x_k} + ωαPx_k^{m−2}] ê_{k+1} + ωαP(x_k^{m−2} − (x^*)^{m−2}) x^* + (1−ϕ)[(1−ω)(I − αĎ_{x_k}) + ωαǓ_{x_k}] e_k. (4.14)

    Due to Lemmas 3 and 4, the above inequality yields the estimate

    ‖ê_{k+1}‖_1 ≤ [ϕ(1−ω) + α(1−ϕ) + ωαγ]‖ê_{k+1}‖_1 + ωαμ̄(J^{(3)},…,J^{(m)})‖e_k‖_1 + [(1−ϕ)(1−ω)(1+α) + ωα(1−ϕ)]‖e_k‖_1.

    Hence

    ‖ê_{k+1}‖_1 ≤ [ωαμ̄(J^{(3)},…,J^{(m)}) + (1−ϕ)(1−ω)(1+α) + ωα(1−ϕ)] / (1 − ϕ(1−ω) − α(1−ϕ) − ωαγ) ‖e_k‖_1 = (1/2)ϑ‖e_k‖_1. (4.15)

    According to (4.15) and Lemma 6, we have

    ‖e_{k+1}‖_1 ≤ 2‖ê_{k+1}‖_1 ≤ ϑ‖e_k‖_1 ≤ ϑ^{k+1}‖e_0‖_1

    and ϑ < 1. This completes the proof.

    Remark 2. For the uniqueness conditions stated in other papers, such as those presented in [12,13], we can also offer corresponding convergence analysis for our proposed algorithm. Here, we omit the detailed description.

    In this section, we present numerical experiments to demonstrate the advantages of the proposed algorithm.

    All experiments were performed using MATLAB R2016a on a Windows 10 64-bit computer equipped with a 1.00 GHz Intel® Core™ i4-29210M CPU and 8.00 GB of RAM. We use three quantities to evaluate the effectiveness of the proposed method: the iteration number (denoted by IT), the computing time in seconds (denoted by CPU) and the error (denoted by err). The err quantity is defined by

    err = ‖x_k − αPx_k^{m−1} − b‖_1.

    We opted to employ large damping factors during the computation process, since in this case iterative methods typically face major convergence problems when solving multilinear PageRank problems. Therefore, we tested the damping parameter values α = 0.95, 0.99, 0.995, 0.999. We set the maximum iteration number to 1,000, the initial value x_0 = v = (1/n)e, where e is the n-dimensional vector with all entries equal to 1, and the termination tolerance ε = 10^{−10}. The stopping criterion for all tested methods was ‖x_k − αPx_k^{m−1} − b‖_1 < ε. We searched the parameter ϕ (or ω) from 0.1 to 1 with step size 0.01 and β ∈ (0, α) from 0.1 to α with step size 0.01; also, we set

    Ď_x = (1/2) diag(Px^{m−2}),  Ľ_x = tril(Px^{m−2}, −1) + Ď_x,  Ǔ_x = triu(Px^{m−2}, 1).
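In NumPy terms, the MATLAB expressions above correspond to the following (an illustrative translation; A stands in for the matrix Px^{m−2}):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.random((4, 4))                 # stands in for Px^{m-2}
D = 0.5 * np.diag(np.diag(A))          # D_x = (1/2) diag(Px^{m-2})
L = np.tril(A, -1) + D                 # L_x = tril(Px^{m-2}, -1) + D_x
U = np.triu(A, 1)                      # U_x = triu(Px^{m-2}, 1)
print(np.allclose(D + L + U, A))       # True: this realizes the splitting (3.3)
```

Note that the halved diagonal is still consistent with (3.3), because the other half of the diagonal is absorbed into the lower-triangular factor Ľ_x.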

    Next, we provide the numerical analysis for the GTRS iteration discussed in Section 3, using four examples. We compare our approach with the relaxation methods presented by Liu et al. [19] and the TARS method. The TARS iterative method is described in Algorithm 2. To clarify, we denote Algorithms 1-4 in the work of Liu et al. [19] as Al1, Al2, Al3 and Al4, respectively.

    Algorithm 2 The TARS iterative method
    Require: M̄_x, N̄_x, α, β, ω, v, relaxation parameter γ̂, maximum iteration number k_max, termination tolerance ε, and an initial stochastic vector x_0
    Ensure: x
    1: k ← 1
    2: while ‖x_k − αPx_k^{m−1} − b‖ ≥ ε do
    3:  if k < k_max then
    4:   y_{k+1} ← M̄_{x_k}^{−1} N̄_{x_k} x_k + M̄_{x_k}^{−1} b
    5:   x̂_{k+1} ← γ̂ y_{k+1} + (1 − γ̂) x_k
    6:   x_{k+1} ← proj(x̂_{k+1})
    7:   k ← k + 1
    8:  end if
    9: end while

    Example 1. In this example, we considered three tensors from the practical problems in [30,31]. Case (ⅰ) involves DNA sequence data, case (ⅱ) uses interpersonal relationship data, and case (ⅲ) involves physicists' occupational mobility data. The numerical results are listed in Tables 1-3, where the minimum CPU time in each row is indicated in bold font.

    Table 1.  Comparison of the GTRS iterative method with Al1-Al4 and TARS for Example 1(ⅰ).
    GTRS-FP GTRS-IO GTRS-JCB GTRS-GS GTRS-SOR TARS-IO TARS-JCB TARS-GS TARS-SOR AL1 AL2 AL3 AL4
    α ϕ CPU IT β ϕ CPU IT ϕ CPU IT ϕ CPU IT ω ϕ CPU IT CPU IT CPU IT CPU IT CPU IT CPU IT CPU IT CPU IT CPU IT
    0.9 0.49 0.000067 12 0.3 0.41 0.000059 12 0.94 0.000066 11 0.92 0.000066 11 0.6 1 0.000067 11 0.000073 10 0.000097 9 0.000093 10 0.000156 8 0.000069 14 0.000097 14 0.004579 13 0.000422 12
    0.95 0.34 0.000067 13 0.7 0.51 0.000059 12 0.29 0.000066 10 0.96 0.000066 12 0.7 0.78 0.000065 15 0.000067 10 0.000068 9 0.000065 11 0.000091 8 0.000070 9 0.000095 12 0.000135 13 0.000410 13
    0.99 0.39 0.000067 13 0.9 0.19 0.000062 12 0.97 0.000066 12 1 0.000066 12 0.3 0.7 0.000064 29 0.000066 10 0.000073 9 0.000071 11 0.000115 8 0.000069 11 0.000095 12 0.000077 13 0.000376 13
    0.999 0.86 0.000067 12 0.8 0.75 0.000063 12 0.58 0.000065 9 0.76 0.000066 13 0.6 1 0.000067 12 0.000074 10 0.000070 9 0.000072 11 0.000075 8 0.000069 14 0.000096 16 0.000076 13 0.000386 13

    Table 2.  Comparison of the GTRS iterative method with Al1-Al4 and TARS for Example 1(ⅱ).
    GTRS-FP GTRS-IO GTRS-JCB GTRS-GS GTRS-SOR TARS-IO TARS-JCB TARS-GS TARS-SOR AL1 AL2 AL3 AL4
    α ϕ CPU IT β ϕ CPU IT ϕ CPU IT ϕ CPU IT ω ϕ CPU IT CPU IT CPU IT CPU IT CPU IT CPU IT CPU IT CPU IT CPU IT
    0.9 0.92 0.000068 25 0.8 0.13 0.000060 25 0.24 0.000064 24 0.26 0.000063 26 0.8 0.36 0.000068 32 0.000077 17 0.000063 17 0.000062 20 0.000134 24 0.000065 20 0.000096 23 0.000124 29 0.000309 28
    0.95 0.29 0.000069 31 0.9 0.99 0.000057 27 0.35 0.000066 26 0.19 0.000066 29 0.8 0.18 0.000065 37 0.000066 18 0.000061 19 0.000062 22 0.000061 26 0.000065 26 0.000097 30 0.000165 31 0.000247 30
    0.99 0.93 0.000070 29 0.9 0.29 0.000058 28 0.6 0.000067 28 0.32 0.000068 31 1 0.9 0.000068 30 0.000071 19 0.000072 21 0.000072 24 0.000064 27 0.000066 29 0.000098 36 0.000075 33 0.000253 32
    0.999 0.72 0.000069 30 0.9 0.87 0.000059 29 0.52 0.000069 28 0.17 0.000068 31 0.3 0.32 0.000067 94 0.000257 20 0.000072 21 0.000343 24 6.11E-05 28 0.000068 28 0.000099 28 0.000076 33 0.000275 32

    Table 3.  Comparison of the GTRS iterative method with Al1-Al4 and TARS for Example 1(ⅲ).
    GTRS-FP GTRS-IO GTRS-JCB GTRS-GS GTRS-SOR TARS-IO TARS-JCB TARS-GS TARS-SOR AL1 AL2 AL3 AL4
    α ϕ CPU IT β ϕ CPU IT ϕ CPU IT ϕ CPU IT ω ϕ CPU IT CPU IT CPU IT CPU IT CPU IT CPU IT CPU IT CPU IT CPU IT
    0.9 0.98 0.000071 51 0.6 0.12 0.000061 55 0.98 0.000072 50 0.88 0.000071 52 0.9 0.9 0.000066 52 0.000082 39 0.000093 40 0.000108 51 0.002955 51 0.000070 47 0.000099 60 0.002206 64 0.000249 61
    0.95 0.55 0.000069 71 0.7 1 0.000061 61 0.96 0.000068 61 0.96 0.000067 62 0.7 0.34 0.000065 90 0.000084 45 0.000068 48 0.000069 63 0.000095 61 0.000070 48 0.000102 92 0.000103 78 0.000203 75
    0.99 0.98 0.000067 74 0.6 0.54 0.000079 78 0.33 0.000067 72 0.28 0.000066 88 0.5 0.7 0.000066 82 0.000068 56 0.000101 58 0.000131 76 0.000075 74 0.000068 57 0.000103 88 0.000074 94 0.000180 90
    0.999 0.45 0.000066 91 0.2 0.25 0.000060 93 0.94 0.000067 76 0.35 0.000066 90 0.3 0.74 0.000065 132 0.000070 58 0.000063 61 0.000081 80 0.000082 77 0.000066 57 0.000099 94 0.000072 98 0.000181 94


    (ⅰ)

    P(:,:,1) = [0.6000 0.4083 0.4935; 0.2000 0.2568 0.2426; 0.2000 0.3349 0.2639],
    P(:,:,2) = [0.5217 0.3300 0.4152; 0.2232 0.2800 0.2658; 0.2551 0.3900 0.3190],
    P(:,:,3) = [0.5565 0.3648 0.4500; 0.2174 0.2742 0.2600; 0.2261 0.3610 0.2900];

    (ⅱ)

    P(:,:,1) = [0.5810 0.2432 0.1429; 0 0.4109 0.0701; 0.4190 0.3459 0.7870],
    P(:,:,2) = [0.4708 0.1330 0.0327; 0.1341 0.5450 0.2042; 0.3951 0.3220 0.7631],
    P(:,:,3) = [0.4381 0.1003 0; 0.0229 0.4338 0.0930; 0.5390 0.4659 0.9070];

    (ⅲ)

    P(:,:,1) = [0.9000 0.3340 0.3106; 0.0690 0.6108 0.0754; 0.0310 0.0552 0.6140],
    P(:,:,2) = [0.6700 0.1040 0.0805; 0.2892 0.8310 0.2956; 0.0408 0.0650 0.6239],
    P(:,:,3) = [0.6604 0.0945 0.0710; 0.0716 0.6133 0.0780; 0.2680 0.2922 0.8501].
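As a sanity check on the transcribed data (an illustrative sketch, not part of the paper), one can confirm that the case (ⅰ) tensor is stochastic, i.e., every column over the first index sums to one:

```python
import numpy as np

P = np.empty((3, 3, 3))
P[:, :, 0] = [[0.6000, 0.4083, 0.4935],
              [0.2000, 0.2568, 0.2426],
              [0.2000, 0.3349, 0.2639]]
P[:, :, 1] = [[0.5217, 0.3300, 0.4152],
              [0.2232, 0.2800, 0.2658],
              [0.2551, 0.3900, 0.3190]]
P[:, :, 2] = [[0.5565, 0.3648, 0.4500],
              [0.2174, 0.2742, 0.2600],
              [0.2261, 0.3610, 0.2900]]
# Sum over the first index i_1; a stochastic tensor yields all ones.
print(np.allclose(P.sum(axis=0), 1.0))   # True
```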

    From Tables 1-3, we have the following observations and remarks:

    (1) Although our algorithms require more iterative steps, the computational times are typically shorter than those of Al1-Al4 and TARS, provided that the parameters are chosen appropriately.

    (2) Considering the least CPU times obtained from GTRS-FP, GTRS-IO, GTRS-JCB, GTRS-GS and GTRS-SOR, we observe that they account for approximately 0%, 91.7%, 0%, 0% and 8.3%, respectively. Therefore, in this example, GTRS-IO appears to be the most efficient algorithm among the five tested ones.

    (3) Among the five types of splitting in the GTRS iteration, GTRS-JCB may incur higher CPU time than the other methods, but it requires fewer iterative steps. This indicates that GTRS-JCB converges in fewer iterations, while the other methods may need more iterations to achieve the same convergence performance.

    Next, we demonstrate the relationship between the iteration count and the values of the parameters α and ϕ in the GTRS iteration based on the inner-outer splitting, using Example 1(ⅱ). Choosing β ∈ {0.2, 0.4, 0.6, 0.8} and varying ϕ from 0.1 to 1 with step size 0.01, we obtain Figure 1. It shows that the GTRS iteration requires fewer iteration steps for a larger ϕ, which is more evident for a larger α, such as α = 0.999. It is also clear that the iteration number changes significantly depending on the chosen β, especially for β = 0.2.

    Figure 1.  The iterative steps for Example 1(ii) with different values of α and ϕ.

    Example 2. [12,19] Let Γ be a directed graph with node set V = ⟨n⟩ = D ∪ P, where D and P denote the sets of dangling nodes and pairwise connected nodes, respectively. Denote by n_p (n_p ≥ 2) the number of nodes in P, that is, P = {1, 2, …, n_p}. A nonnegative tensor A is constructed as follows:

    a_{i_1 i_2 ⋯ i_l} = { a_{i_1 i_2 ⋯ i_l}, if i_k ≠ i_{k+1}, i_1, i_k, i_l ∈ P, k = 1,…,l−1;  0, if i_1 = i_2 (or i_1 ∈ D), i_k ≠ i_{k+1}, i_k ∈ P, k = 2,…,l−1;  1/n, else,

    where a_{i_1 i_2 ⋯ i_l} is taken randomly in (0, 1). Normalizing the entries via p_{i_1 i_2 ⋯ i_l} = a_{i_1 i_2 ⋯ i_l} / ∑_{i_1=1}^n a_{i_1 i_2 ⋯ i_l} yields a stochastic tensor P = (p_{i_1 i_2 ⋯ i_l}).

    Based on the data presented in Table 4, the following observations can be made:

    Table 4.  Comparison of the GTRS iterative method with Al1-Al4 and TARS for Example 2.
    GTRS-FP GTRS-IO GTRS-JCB GTRS-GS GTRS-SOR TARS-IO TARS-JCB TARS-GS TARS-SOR AL1 AL2 AL3 AL4
    n α ϕ CPU IT β ϕ CPU IT ϕ CPU IT ϕ CPU IT ω ϕ CPU IT CPU IT CPU IT CPU IT CPU IT CPU IT CPU IT CPU IT CPU IT
    60 0.9 0.78 0.067781 30 0.2 0.25 0.049511 36 0.48 0.057342 35 0.41 0.06151 32 0.7 0.6 0.05134 40 0.07825 20 0.07809 33 0.06519 28 0.06048 36 0.08123 24 0.07108 43 0.11308 39 0.19676 39
    0.95 0.1 0.068289 51 0.3 0.34 0.048066 42 0.47 0.057637 43 0.47 0.05796 39 0.2 0.62 0.05124 134 0.05745 22 0.06632 42 0.07181 35 0.07352 45 0.08710 33 0.07085 50 0.09384 49 0.19562 49
    0.99 0.72 0.067574 48 0.3 0.2 0.073720 56 0.7 0.056396 48 0.55 0.05525 48 0.3 1 0.05087 40 0.06472 30 0.07392 55 0.07376 46 0.07325 57 0.09754 41 0.06903 77 0.09697 63 0.18844 63
    0.999 0.96 0.065590 44 0.2 0.93 0.078802 44 0.86 0.055738 47 0.96 0.05820 43 0.2 0.98 0.05130 50 0.08935 32 0.06949 60 0.05869 49 0.05847 61 0.08533 29 0.07103 74 0.08697 68 0.20649 68
    90 0.9 0.67 0.100722 33 0.4 0.88 0.071830 28 0.65 0.070657 33 0.88 0.07013 28 0.2 0.88 0.07801 55 0.09061 20 0.06785 35 0.06886 29 0.07486 37 0.08308 20 0.08977 46 0.09231 41 0.20913 41
    0.95 0.33 0.093380 49 0.9 0.51 0.073547 34 0.99 0.078598 33 0.93 0.07651 34 0.9 0.94 0.08669 35 0.14558 23 0.07550 45 0.08165 36 0.07369 46 0.08449 25 0.08936 58 0.11726 52 0.20301 52
    0.99 0.33 0.101617 64 0.2 0.22 0.078264 62 0.52 0.074716 57 0.33 0.07038 53 1 0.18 0.07446 56 0.07229 32 0.07778 59 0.07402 47 0.07806 59 0.08448 42 0.08910 61 0.10506 68 0.21183 68
    0.999 0.55 0.103049 61 0.2 0.52 0.079088 59 0.56 0.071014 61 0.11 0.06921 61 0.7 0.84 0.07499 55 0.10306 35 0.07600 65 0.07530 51 0.07626 63 0.07921 41 0.08972 75 0.10494 73 0.20750 73
    120 0.9 0.7 0.157773 16 0.7 0.16 0.105258 15 0.76 0.080931 16 1 0.07766 14 0.6 0.34 0.07067 32 0.0742168 31 0.08554 20 0.07240 39 0.11592 19 0.09979 14 0.08058 24 0.12620 19 0.22842 19
    0.95 0.96 0.176812 15 0.9 0.24 0.085191 15 0.12 0.075656 21 0.44 0.07943 17 0.2 0.26 0.06796 120 0.0933628 14 0.07782 37 0.07571 27 0.12220 22 0.09910 16 0.08187 24 0.13799 21 0.20813 21
    0.99 0.75 0.145725 18 0.4 0.96 0.092986 16 0.22 0.074801 22 0.95 0.07781 16 0.7 0.46 0.07029 27 0.0891279 23 0.07859 107 0.07475 96 0.12151 22 0.08999 17 0.13877 26 0.11978 22 0.23414 22
    0.999 0.48 0.142763 20 0.8 0.69 0.090735 16 0.34 0.080148 21 0.72 0.07928 17 0.6 0.54 0.07111 30 0.0984028 16 0.08016 20 0.07543 69 0.11979 22 0.10156 15 0.08310 21 0.13226 22 0.26749 22


    (1) The GTRS iteration requires significantly less CPU time than Al1–Al4 and TARS. This suggests that the GTRS iterative method achieves higher computational efficiency when appropriate parameters are chosen.

    (2) Among all tested algorithms, for a larger np, such as np = 120, GTRS-SOR exhibits shorter CPU times than the other methods.

    (3) For n = 60, in terms of CPU time, GTRS-IO achieves the best performance when α = 0.9 or 0.95, while GTRS-FP, GTRS-JCB, GTRS-GS and GTRS-SOR outperform GTRS-IO when α = 0.99 or 0.999. This implies that the performance of these methods does not vary monotonically with α.

    Furthermore, we display the test results for the relationship between CPU time and err at each step in Figure 2, with n = 150 and np = 120. The values of err are plotted on a base-10 logarithmic scale. Figure 2 shows that the GTRS iterative method performs better than Al1–Al4 and TARS in terms of both CPU time and error.

    Figure 2.  The relationship between CPU time and err for different values of α in Example 2.

    Example 3. [13] We generated third-order stochastic tensors of dimension n = 100, 150, and 200 using the MATLAB function rand. The results are reported in Table 5.
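    The corresponding data generation can be sketched in Python/NumPy as follows. This is a hedged analogue of the MATLAB rand-based construction; normalization over the first index is assumed, matching the stochastic tensors of the previous example.

```python
import numpy as np

def random_stochastic_tensor(n, seed=0):
    """Dense random order-3 tensor with each (j, k) fiber summing to 1
    over the first index, i.e., a stochastic transition tensor."""
    rng = np.random.default_rng(seed)
    a = rng.random((n, n, n))
    return a / a.sum(axis=0, keepdims=True)
```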

    Table 5.  Comparison of GTRS iterative method with Al1–Al4 and TARS for Example 3.
    GTRS-FP GTRS-IO GTRS-JCB GTRS-GS GTRS-SOR TARS-IO TARS-JCB TARS-GS TARS-SOR AL1 AL2 AL3 AL4
    n α ϕ CPU IT β ϕ CPU IT ϕ CPU IT ϕ CPU IT ω ϕ CPU IT CPU IT CPU IT CPU IT CPU IT CPU IT CPU IT CPU IT CPU IT
    100 0.9 0.23 0.01863 4 0.8 0.91 0.01419 4 0.26 0.01441 4 0.69 0.01437 6 0.7 0.48 0.01442 11 0.01466 11 0.01677 20 0.01463 19 0.01488 8 0.02349 6 0.01810 6 0.03137 4 0.06729 4
    0.95 0.66 0.01887 4 0.4 0.75 0.01420 4 0.83 0.01441 4 0.48 0.01441 7 1 0.26 0.01434 8 0.01432 15 0.01448 36 0.01468 12 0.01935 7 0.02341 9 0.01848 9 0.03054 4 0.07217 4
    0.99 0.91 0.01868 4 0.6 0.89 0.01527 4 0.49 0.01434 4 0.93 0.01451 4 0.9 1 0.01436 4 0.01448 12 0.01446 8 0.02156 15 0.01472 8 0.02324 9 0.01792 9 0.02348 4 0.07210 4
    0.999 0.17 0.01875 4 0.7 0.57 0.01459 4 0.96 0.01417 4 0.7 0.01438 6 0.6 0.24 0.01432 17 0.01456 15 0.01655 13 0.01435 15 0.02128 8 0.02339 9 0.01829 6 0.03042 4 0.06519 4
    150 0.9 1 0.09606 4 0.5 0.25 0.07084 4 0.94 0.07177 4 0.87 0.07008 4 0.9 0.14 0.07094 9 0.09640 15 0.07460 19 0.07584 9 0.08288 8 0.10726 7 0.07295 8 0.15959 4 0.22549 4
    0.95 0.54 0.10796 4 0.4 0.67 0.06927 4 0.2 0.07429 4 0.77 0.07638 5 0.4 0.28 0.07452 27 0.08699 4 0.07486 78 0.08091 79 0.07491 7 0.10241 4 0.07412 8 0.14460 4 0.23545 4
    0.99 0.62 0.10090 4 0.5 0.53 0.09392 4 0.21 0.07004 4 0.76 0.07156 5 0.5 0.22 0.08315 21 0.08172 4 0.07614 8 0.07600 8 0.07606 4 0.10547 9 0.07243 7 0.15219 4 0.22312 4
    0.999 0.23 0.10765 4 0.4 1 0.09054 4 0.67 0.07137 4 1 0.06991 4 0.5 0.96 0.07547 6 0.07734 4 0.07529 11 0.07539 79 0.07722 4 0.10334 4 0.07598 8 0.12394 4 0.21592 4
    200 0.9 0.88 0.26805 3 0.2 0.7 0.16128 3 0.96 0.16500 3 0.86 0.17025 4 0.4 0.78 0.15421 13 0.19223 14 0.17553 8 0.16749 48 0.17695 5 0.21955 9 0.23488 7 0.28720 3 0.56819 3
    0.95 0.64 0.26859 3 0.6 0.16 0.15979 3 0.19 0.16674 4 0.47 0.18597 6 0.2 0.28 0.16859 57 0.17618 15 0.16278 33 0.17163 11 0.17772 7 0.22631 11 0.19430 8 0.25026 3 0.54380 3
    0.99 1 0.26398 3 0.3 0.57 0.28735 3 0.6 0.18075 3 0.15 0.17505 7 0.4 0.74 0.15166 14 0.18111 8 0.19766 19 0.16150 15 0.17193 6 0.23041 10 0.18182 6 0.34902 4 0.51196 4
    0.999 1 0.24523 3 0.9 0.16 0.20416 3 0.44 0.17493 4 0.96 0.17321 3 0.3 0.44 0.16722 30 0.17762 15 0.19128 25 0.19562 26 0.17200 8 0.22839 10 0.20650 10 0.26499 4 0.51895 4


    From Table 5, we observe the following:

    (1) With suitably chosen parameters, the GTRS iterative method achieves shorter CPU times than Al1–Al4 and TARS.

    (2) For GTRS-JCB, the optimal performance is achieved when α = 0.99 or 0.999 and n = 100 or 150. Similarly, GTRS-GS performs best when n = 150, while GTRS-SOR achieves the best results when n = 200. Notably, GTRS-IO shows the best performance when α = 0.95. GTRS-FP requires the fewest iteration steps.

    (3) In terms of CPU times, GTRS-IO outperforms GTRS-JCB when α=0.9 or 0.95. However, GTRS-JCB, GTRS-GS and GTRS-SOR outperform GTRS-IO when α=0.99 or 0.999.

    Similarly, we present the test results for the relationship between CPU time and err at each iterative step for n = 200 in Figure 3.

    Figure 3.  The relationship between CPU time and err for different values of α in Example 3.

    As depicted in Figure 3, the GTRS iterative method consistently outperforms Al1–Al4 and TARS in terms of both CPU time and error.

    Example 4. [13] Let t denote the density of zero entries in a tensor; P is a positive tensor when t = 0 and a zero tensor when t = 1. Sparse test tensors with different values of t are generated using the MATLAB function randsample.
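    A Python/NumPy sketch of this generation might look as follows (a randsample analogue). How all-zero fibers arising from the sparsification are repaired is not specified here, so the uniform 1/n fallback below is an assumption added to keep the normalization well defined.

```python
import numpy as np

def sparse_stochastic_tensor(n, t, seed=0):
    """Random order-3 tensor in which a fraction t of the n^3 entries is
    zeroed out, then normalized to be stochastic over the first index."""
    rng = np.random.default_rng(seed)
    a = rng.random((n, n, n))
    flat = a.ravel()                                       # view into a
    zero_idx = rng.choice(flat.size, size=int(t * flat.size), replace=False)
    flat[zero_idx] = 0.0                                   # impose zero density t
    col_sums = a.sum(axis=0, keepdims=True)
    safe = np.where(col_sums > 0, col_sums, 1.0)
    # Assumed repair: fibers that became all-zero fall back to the uniform column.
    return np.where(col_sums > 0, a / safe, 1.0 / n)
```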

    In Example 4, we generated third-order tensors of dimension 200 with densities ranging from 0.1 to 0.9 in increments of 0.1. The numerical results are presented in Table 6. The conclusions drawn from Table 6 are as follows:

    Table 6.  Comparison of GTRS iterative method with Al1–Al4 and TARS for Example 4.
    GTRS-FP GTRS-IO GTRS-JCB GTRS-GS GTRS-SOR TARS-IO TARS-JCB TARS-GS TARS-SOR AL1 AL2 AL3 AL4
    t α ϕ CPU IT β ϕ CPU IT ϕ CPU IT ϕ CPU IT ω ϕ CPU IT CPU IT CPU IT CPU IT CPU IT CPU IT CPU IT CPU IT CPU IT
    0.1 0.9 0.95 0.19642 3 0.4 0.57 0.14413 3 0.64 0.14412 3 0.62 0.14913 5 0.5 0.36 0.14571 18 0.14552 3 0.14759 73 0.15818 74 0.15566 4 0.21365 10 0.15553 8 0.22266 3 0.47466 3
    0.95 0.17 0.20225 3 0.5 0.49 0.14305 3 0.66 0.14367 3 0.97 0.15258 3 0.2 0.32 0.14551 53 0.15240 8 0.15832 74 0.14742 47 0.15609 7 0.21183 5 0.15498 10 0.23578 3 0.48953 3
    0.99 0.38 0.20092 3 0.4 0.97 0.18022 3 0.8 0.14429 3 0.64 0.14984 5 0.8 0.3 0.14972 9 0.15950 14 0.15275 3 0.15652 7 0.15783 6 0.21065 6 0.15460 9 0.22561 3 0.48589 3
    0.999 0.75 0.20141 3 0.5 0.88 0.19840 3 0.65 0.14604 3 0.37 0.15242 6 0.5 0.24 0.14740 20 0.14490 3 0.15765 11 0.15297 8 0.15753 6 0.21166 10 0.15471 4 0.23385 3 0.49783 3
    0.2 0.9 0.99 0.21186 3 0.4 0.18 0.14552 3 0.42 0.14559 3 0.93 0.14556 3 0.3 0.68 0.14469 20 0.15173 11 0.15827 11 0.15538 47 0.15922 6 0.21481 4 0.15672 4 0.25215 3 0.50077 3
    0.95 0.98 0.21072 3 0.2 0.37 0.14374 3 0.41 0.14435 3 0.93 0.15314 4 0.2 0.68 0.14729 30 0.15974 8 0.14836 19 0.15982 48 0.16212 6 0.21048 8 0.15427 8 0.24552 3 0.52172 3
    0.99 0.27 0.20794 3 0.5 0.99 0.19411 3 0.51 0.14580 3 0.76 0.14726 4 0.6 0.8 0.15218 8 0.15927 14 0.16016 19 0.14984 19 0.15956 7 0.21239 10 0.15655 10 0.23365 3 0.52896 3
    0.999 0.77 0.20039 3 0.2 0.53 0.19909 3 0.81 0.14613 3 0.96 0.14667 3 0.6 0.3 0.14740 15 0.15399 8 0.15940 8 0.15988 8 0.16108 7 0.21393 10 0.15664 10 0.21781 3 0.50479 3
    0.3 0.9 0.79 0.19314 3 0.2 0.14 0.14622 4 0.63 0.14673 3 0.94 0.14635 4 0.5 0.56 0.14573 15 0.14919 11 0.14642 8 0.14712 11 0.14793 6 0.20942 7 0.15784 9 0.21804 4 0.51359 4
    0.95 0.65 0.19430 3 0.9 0.83 0.14552 3 0.67 0.14724 4 0.56 0.14653 6 0.4 0.84 0.14557 11 0.14696 11 0.14669 4 0.14686 77 0.14577 8 0.21063 11 0.15655 5 0.24682 4 0.55810 4
    0.99 0.71 0.19434 4 0.6 0.53 0.16199 3 0.32 0.14628 4 0.85 0.14620 4 1 0.18 0.14589 8 0.14617 4 0.14661 15 0.14721 11 0.14645 8 0.20728 4 0.15907 7 0.34163 4 0.53196 4
    0.999 0.28 0.19469 4 0.4 0.15 0.14798 4 0.85 0.14557 3 0.9 0.14611 4 0.7 0.38 0.14578 11 0.14708 11 0.14806 11 0.14698 20 0.14590 4 0.20649 5 0.15717 7 0.23544 4 0.49171 4
    0.4 0.9 0.79 0.19226 4 0.5 0.83 0.15152 4 0.45 0.15379 4 0.24 0.15332 7 1 0.3 0.15064 8 0.15737 11 0.15677 4 0.15806 49 0.15708 7 0.20765 11 0.15374 6 0.21360 4 0.50308 4
    0.95 0.55 0.19221 4 0.6 0.22 0.15075 4 0.85 0.15183 4 0.61 0.15352 6 0.6 0.18 0.15139 17 0.15763 11 0.15817 4 0.15651 9 0.15663 7 0.20661 5 0.15555 7 0.21131 4 0.50354 4
    0.99 0.37 0.19165 4 0.3 0.37 0.15614 4 0.99 0.15367 4 0.28 0.15352 7 0.7 0.98 0.15089 5 0.15691 11 0.15760 19 0.15867 8 0.15661 6 0.20801 5 0.15476 6 0.21246 4 0.50895 4
    0.999 0.57 0.20102 4 0.7 0.18 0.16410 4 0.98 0.15379 4 0.91 0.15317 4 0.6 0.26 0.15453 16 0.15598 8 0.15457 49 0.15615 35 0.15697 8 0.20817 11 0.15621 8 0.20973 4 0.50399 4
    0.5 0.9 0.26 0.21041 4 0.3 0.11 0.14663 4 0.87 0.14536 4 0.95 0.14570 4 0.6 0.1 0.14514 18 0.17206 4 0.17689 4 0.17628 8 0.17007 4 0.21408 11 0.15898 4 0.21593 4 0.52894 4
    0.95 0.53 0.20991 4 0.4 0.77 0.14426 4 0.21 0.15410 4 0.17 0.15057 8 0.6 0.24 0.14663 16 0.17090 15 0.16328 11 0.16553 8 0.16872 5 0.21353 5 0.15637 6 0.21681 4 0.52522 4
    0.99 0.45 0.20541 4 0.5 0.44 0.20040 4 0.79 0.15076 4 0.96 0.16091 4 0.2 0.2 0.14966 64 0.17478 15 0.16647 78 0.16376 13 0.16593 6 0.21397 6 0.15915 8 0.24374 4 0.50062 4
    0.999 0.2 0.20789 4 0.3 0.35 0.19865 4 0.3 0.14774 4 0.72 0.14974 5 0.6 0.32 0.15732 16 0.17555 11 0.16837 19 0.17116 9 0.16124 7 0.21352 7 0.15881 5 0.21705 4 0.52109 4
    0.6 0.9 0.76 0.23882 4 0.3 0.66 0.15600 4 0.1 0.17507 4 0.89 0.17825 4 0.5 0.44 0.16122 18 0.16261 4 0.16398 8 0.16587 12 0.15299 8 0.24702 6 0.16154 11 0.31112 4 0.64738 4
    0.95 0.89 0.23388 4 0.3 0.55 0.17275 4 0.97 0.16780 4 0.97 0.17455 4 0.6 0.94 0.14888 6 0.17159 4 0.15328 8 0.16860 9 0.16323 7 0.24929 5 0.15324 10 0.26301 4 0.79157 4
    0.99 0.61 0.23289 4 0.4 0.66 0.19417 4 0.17 0.15638 4 0.26 0.16145 7 0.5 0.5 0.14717 17 0.16949 8 0.18164 12 0.18022 12 0.14839 7 0.24588 4 0.15480 10 0.25240 4 0.64654 4
    0.999 0.64 0.23399 4 0.2 0.68 0.25873 4 0.2 0.16938 4 0.23 0.17446 7 0.4 0.72 0.14695 15 0.14754 15 0.16349 26 0.18106 36 0.16140 8 0.24889 7 0.15461 10 0.32796 4 0.72918 4
    0.7 0.9 0.22 0.21431 4 0.6 0.37 0.14613 4 0.63 0.14752 4 0.89 0.14829 4 0.4 0.66 0.14590 17 0.15861 4 0.17224 12 0.16393 12 0.16407 6 0.21340 10 0.15355 12 0.24924 4 0.52420 4
    0.95 0.44 0.21411 4 0.8 0.81 0.14487 4 0.61 0.14708 4 0.69 0.15214 6 0.3 0.16 0.14504 44 0.16918 4 0.14568 15 0.18287 21 0.16241 8 0.21441 6 0.15324 10 0.22028 4 0.53816 4
    0.99 0.3 0.21431 4 0.2 0.47 0.20147 4 0.3 0.14656 4 0.98 0.15153 4 0.5 0.48 0.14550 17 0.14720 4 0.16119 15 0.17006 16 0.16168 5 0.20845 9 0.15480 10 0.22898 4 0.53813 4
    0.999 0.13 0.21612 4 0.2 0.41 0.20116 4 0.67 0.14666 4 0.9 0.14663 4 0.3 0.44 0.14571 32 0.15500 4 0.15566 26 0.16399 10 0.14667 6 0.21200 10 0.15461 10 0.23828 4 0.53417 4
    0.8 0.9 0.24 0.19462 5 0.7 0.7 0.14510 4 0.43 0.14746 4 0.81 0.14714 5 0.2 0.94 0.14707 12 0.14881 4 0.14945 12 0.14781 9 0.14904 7 0.20982 11 0.15282 9 0.21831 5 0.48351 5
    0.95 0.35 0.19580 5 0.7 0.86 0.14473 4 0.94 0.14740 4 0.22 0.14791 7 0.3 0.64 0.14697 24 0.14761 12 0.14849 16 0.14806 12 0.14723 5 0.21069 7 0.15620 8 0.20987 5 0.47670 5
    0.99 0.38 0.19481 5 0.9 0.89 0.14006 4 0.5 0.14658 5 0.2 0.14782 7 0.4 0.84 0.15179 12 0.14879 9 0.14686 5 0.14823 9 0.14921 8 0.20863 7 0.15378 8 0.21490 5 0.47093 5
    0.999 0.84 0.19513 4 0.8 0.1 0.14124 4 0.13 0.14748 5 0.38 0.14756 6 1 0.8 0.14788 6 0.14897 12 0.14952 20 0.14993 16 0.14838 7 0.21093 11 0.15511 9 0.21042 5 0.47182 5
    0.9 0.9 0.49 0.19820 5 0.3 0.48 0.14732 5 0.7 0.15031 5 0.52 0.14959 6 0.8 0.88 0.13937 6 0.15959 5 0.17585 9 0.15832 13 0.15139 6 0.21135 12 0.15462 7 0.22199 5 0.48839 5
    0.95 0.34 0.19624 5 0.9 1 0.14110 5 0.35 0.14901 5 0.52 0.14994 6 0.5 0.4 0.14768 19 0.15854 12 0.15560 12 0.15924 10 0.14824 5 0.21440 10 0.15743 9 0.21881 5 0.48272 5
    0.99 0.97 0.19606 5 0.2 0.46 0.14481 5 0.38 0.14962 5 0.92 0.14993 5 0.4 0.56 0.14877 21 0.14086 16 0.15999 9 0.19237 13 0.16005 5 0.21269 12 0.15400 6 0.22300 5 0.48468 5
    0.999 0.26 0.19794 5 0.3 0.7 0.15251 5 0.37 0.14963 5 0.51 0.14918 7 0.4 0.1 0.14264 34 0.14147 16 0.15843 16 0.15982 12 0.15774 7 0.21210 8 0.15444 9 0.22177 5 0.48756 5


    (1) When suitable parameters are found, our proposed algorithm consistently outperforms the methods presented in [19] and the TARS method.

    (2) Out of the 36 test cases, in terms of the minimum CPU time, GTRS-SOR, GTRS-GS, GTRS-IO, GTRS-JCB and GTRS-FP account for about 36.1%, 0%, 44.5%, 19.4% and 0% of the wins, respectively. Based on these results, GTRS-SOR appears to be the most competitive algorithm, particularly when t = 0.4 or 0.6.

    (3) When t is less than 0.5, GTRS-JCB has better convergence performance than GTRS-GS and GTRS-FP. In most cases, GTRS-IO demonstrates a shorter CPU time than the other splitting methods of the GTRS iterative type, especially when t = 0.8.

    Next, we discuss the relationship between the iterative steps and ϕ with different values of α. Given the superior convergence properties of the GTRS iterative method with successive overrelaxation splitting, and the significant variation in the number of iterations in Example 4, we selected this splitting for the experiment. Specifically, we conducted the experiment by using ω={0.3,0.5,0.7,0.9}, t={0.4,0.6} and varying ϕ from 0.1 to 1 in increments of 0.01. We have plotted the associated comparison results in Figures 4 and 5. The results indicate that increasing ϕ leads to better performance of the GTRS iterative method, as evidenced by the reduction in the number of iterations required.

    Figure 4.  Taking t = 0.4, the iterative steps with different values of α and ϕ for Example 4.
    Figure 5.  Taking t = 0.6, the iterative steps with different values of α and ϕ for Example 4.

    Figures 6–9 show the convergence history of the GTRS iteration under different tensor densities and values of α. From Figures 6–9, we find that the proposed algorithms converge faster than the existing methods in most cases, with GTRS-IO performing the best. Notably, GTRS-FP, GTRS-IO and GTRS-JCB always require less computing time than the existing methods, since they reach smaller errors, which reduces the number of iterations required for convergence.

    Figure 6.  The relationship between CPU time and err for different values of t with α=0.90 for Example 4.
    Figure 7.  The relationship between CPU time and err for different values of t with α=0.95 for Example 4.
    Figure 8.  The relationship between CPU time and err for different values of t with α=0.99 for Example 4.
    Figure 9.  The relationship between CPU time and err for different values of t with α=0.999 for Example 4.

    Different parameter settings can lead to optimal performance for each GTRS iteration. By selecting appropriate parameter values, the computational efficiency and accuracy of the GTRS iteration can be improved. Note that the optimal parameter settings may vary with the specific problem and dataset being analyzed. In general, the GTRS iteration achieves better convergence performance than Al1–Al4 and TARS for solving the multilinear PageRank problem.

    In this paper, based on the (weak) regular splitting of the matrix $I - \alpha \mathcal{P} x^{m-2}$, we presented the GTRS iterative method for solving (1.1) from the perspective of the multilinear PageRank problem, and we proved its overall convergence. In addition, we provided several splittings of the GTRS iteration and established the corresponding convergence theorems. Numerical experiments indicated that, with appropriately chosen parameters, the GTRS iterative method is superior to the existing methods in [19] and the TARS method. Overall, this research contributes to the advancement of non-gradient algorithms and their practical implications in tensor computations. In future work, we plan to investigate the latest uniqueness conditions to establish better convergence properties.

    The authors declare that they have not used artificial intelligence tools in the creation of this paper.

    This work was supported in part by the Postgraduate Educational Innovation Program of Guangdong (No. 2021SFKC030) and Guangzhou Basic and Applied Basic Research Project (No. 2023A04J1322).

    The authors gratefully acknowledge the anonymous reviewers for their constructive and valuable suggestions, which greatly improved this paper.

    The authors declare no conflict of interest.



    [1] Z. Bai, The upper and lower solutions method for some fourth-order boundary value problem, Nonlinear Anal.-Theor., 67 (2007), 1704–1709. http://dx.doi.org/10.1016/j.na.2006.08.009
    [2] G. Bonanno, B. Bella, A boundary value problem for fourth-order elastic beam equations, J. Math. Anal. Appl., 343 (2008), 1166–1176. http://dx.doi.org/10.1016/j.jmaa.2008.01.049
    [3] J. Fialho, F. Minhós, The role of lower and upper solutions in the generalization of Lidstone problems, Proceedings of the 9th AIMS International Conference, Dynamical Systems and Differential Equations, 2013, 217–226. http://dx.doi.org/10.3934/proc.2013.2013.217
    [4] D. Franco, D. O'Regan, J. Peran, Fourth-order problems with nonlinear boundary conditions, J. Comput. Appl. Math., 174 (2005), 315–327. http://dx.doi.org/10.1016/j.cam.2004.04.013
    [5] A. Santos, M. Grossinho, Solvability of an elastic beam equation in presence of a sign-type Nagumo control, NonLinear Studies, 18 (2011), 279–291.
    [6] A. Naimi, T. Brahim, Kh. Zennir, Existence and stability results for the solution of neutral fractional integro-differential equation with nonlocal conditions, Tamkang J. Math., 53 (2022), 239–257. http://dx.doi.org/10.5556/j.tkjm.53.2022.3550
    [7] A. Cabada, The method of lower and upper solutions for second, third, fourth and higher order boundary value problems, J. Math. Anal. Appl., 185 (1994), 302–320. http://dx.doi.org/10.1006/jmaa.1994.1250
    [8] F. Minhós, A. Santos, Higher order two-point boundary value problems with asymmetric growth, Discrete Cont. Dyn.-S, 1 (2008), 127–137. http://dx.doi.org/10.3934/dcdss.2008.1.127
    [9] G. Karakostas, P. Tsamatos, Nonlocal boundary vector value problems for ordinary differential systems of higher order, Nonlinear Anal.-Theor., 51 (2002), 1421–1427. http://dx.doi.org/10.1016/S0362-546X(01)00906-3
    [10] B. Liu, Existence and uniqueness of solutions for nonlocal boundary vector value problems of ordinary differential systems with higher order, Comput. Math. Appl., 48 (2004), 841–851. http://dx.doi.org/10.1016/j.camwa.2003.08.009
    [11] S. Miao, Boundary value problems for higher order nonlinear ordinary differential systems, Tohoku Math. J., 45 (1993), 259–269. http://dx.doi.org/10.2748/tmj/1178225920
    [12] F. Minhós, Periodic solutions for some fully nonlinear fourth-order differential equations, Proceedings of the 8th AIMS International Conference, Dynamical Systems and Differential Equations, 2011, 1068–1077. http://dx.doi.org/10.3934/proc.2011.2011.1068
    [13] F. Minhós, A. Santos, T. Gyulov, A fourth-order BVP of Sturm-Liouville type with asymmetric unbounded nonlinearities, Proceedings of the Conference on Differential and Difference Equations and Applications, 2005, 795–804.
    [14] K. Mebarki, S. Georgiev, S. Djebali, Kh. Zennir, Fixed point theorems with applications, New York: Chapman and Hall/CRC, 2023. http://dx.doi.org/10.1201/9781003381969
    [15] M. Frigon, Boundary and periodic value problems for systems of nonlinear second order differential equations, Topol. Method. Nonl. An., 1 (1993), 259–274.
    [16] F. Minhós, A. Santos, Existence and non-existence results for two-point higher order boundary value problems, Proceedings of International Conference on Differential Equations, 2003, 249–251.
    [17] M. Grossinho, F. Minhós, Upper and lower solutions for higher order boundary value problems, NonLinear Studies, 12 (2005), 1–15.
    [18] A. Moumen, A. Benaissa Cherif, M. Ferhat, M. Bouye, Kh. Zennir, Existence results for systems of nonlinear second-order and impulsive differential equations with periodic boundary, Mathematics, 11 (2023), 4907. http://dx.doi.org/10.3390/math11244907
    [19] M. Grossinho, F. Minhós, A. Santos, A note on a class of problems for a higher-order fully nonlinear equation under one-sided Nagumo-type condition, Nonlinear Anal.-Theor., 70 (2009), 4027–4038. http://dx.doi.org/10.1016/j.na.2008.08.011
    [20] M. Frigon, H. Gilbert, Existence theorems for systems of third order differential equations, Dynam. Syst. Appl., 19 (2010), 1–24.
    [21] I. Natanson, Theory of functions of a real variable, Dover: Dover Publications, 2016.
    [22] M. Frigon, Boundary and periodic value problems for systems of differential equations under Bernstein-Nagumo growth condition, Differ. Integral Equ., 8 (1995), 1789–1804. http://dx.doi.org/10.57262/die/1368397757
    [23] S. Georgiev, Kh. Zennir, Classical solutions for a class of IVP for nonlinear two-dimensional wave equations via new fixed point approach, Partial Differential Equations in Applied Mathematics, 2 (2020), 100014. http://dx.doi.org/10.1016/j.padiff.2020.100014
    [24] D. O'Regan, Y. Cho, Y. Chen, Topological degree theory and applications, New York: CRC Press, 2006. http://dx.doi.org/10.1201/9781420011487
    [25] S. Georgiev, Kh. Zennir, Multiple fixed-point theorems and applications in the theory of ODEs, FDEs and PDEs, New York: Chapman and Hall/CRC, 2020. http://dx.doi.org/10.1201/9781003028727
    [26] S. Georgiev, Kh. Zennir, Multiplicative differential equations: volume Ⅰ, New York: Chapman and Hall/CRC, 2023. http://dx.doi.org/10.1201/9781003393344
    [27] S. Georgiev, Kh. Zennir, Multiplicative differential equations: volume Ⅱ, New York: Chapman and Hall/CRC, 2023. http://dx.doi.org/10.1201/9781003394549
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)