Research article

A physics-informed neural network model for social media user growth

  • In this paper, a physics-informed neural network model is proposed to predict the growth of online social network users. The number of online social network users is modeled by a stochastic process and the associated Kolmogorov forward equation is derived. Then, a physics-informed neural network model is built based on the Kolmogorov forward equation and trained using real-world data. By combining mathematical modeling with machine learning, our approach provides a practical and interpretable methodology that harnesses the strengths of both physical laws and advancements in machine learning, while minimizing the opacity in machine learning models.

    Citation: Lingju Kong, Ryan Z. Shi, Min Wang. A physics-informed neural network model for social media user growth[J]. Applied Computing and Intelligence, 2024, 4(2): 195-208. doi: 10.3934/aci.2024012




    With the booming development of the internet, the way people obtain information is gradually shifting to web search. The Google search engine is one of the most popular and successful search engines, and it uses PageRank to determine the importance of web pages [1].

    Let us briefly introduce the mathematical model of the PageRank problem. If the link structure of the web pages is regarded as a directed graph, then the adjacency matrix $P\in\mathbb{R}^{n\times n}$ of the graph can be described as

    $$P=(p_{ij})=\begin{cases}\dfrac{1}{n_i}, & \text{if page } i \text{ links to page } j,\\ 0, & \text{otherwise},\end{cases}$$

    where $n_i$ denotes the number of outlinks of page $i$. If page $i$ contains no outlinks (page $i$ is called a dangling node [2]), then the $i$th row of $P$ is zero. The existence of dangling nodes is a drawback for computing PageRank. To correct this problem, a rank-1 modification is applied to obtain a row-stochastic matrix, that is,

    $$\tilde{P}=P+dw^T,$$

    where $d=(d_i)\in\mathbb{R}^{n\times 1}$ with

    $$d_i=\begin{cases}1, & \text{if } n_i=0,\\ 0, & \text{otherwise},\end{cases}$$

    and $w=(w_i)\in\mathbb{R}^{n\times 1}$ is a probability distribution vector.

    To further guarantee the aperiodicity and irreducibility of $\tilde{P}$, a convex combination of $\tilde{P}$ is considered by introducing a damping factor $\alpha$ and a personalization vector $v$, which yields the Google matrix

    $$A=\big[\alpha\tilde{P}+(1-\alpha)ev^T\big]^T=\alpha\tilde{P}^T+(1-\alpha)ve^T,$$

    where $\alpha\in(0,1)$ and $v=e/n$ with $e=[1,1,\ldots,1]^T\in\mathbb{R}^{n\times 1}$. Usually, we set $w=v$. After these two corrections, the Google matrix $A$ is stochastic and irreducible. Finally, the PageRank problem requires solving the following linear system:

    $$Ax=x,\qquad \|x\|_1=1,\qquad x>0, \tag{1.1}$$

    where $x$ is the unknown PageRank vector. For more details about the formulation of PageRank problems, please refer to [3,4] and the references therein.
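
    To make the construction above concrete, the following sketch builds $P$, $\tilde{P}$, and $A$ for a small hypothetical four-page link graph and checks that the dominant eigenvector of $A$ satisfies (1.1). The paper's experiments use MATLAB; this Python/NumPy translation, including the toy graph, is only illustrative.

```python
import numpy as np

# A tiny hypothetical 4-page link graph; page 3 has no outlinks (a dangling node).
links = {0: [1, 2], 1: [2], 2: [0], 3: []}
n = 4

# Row-stochastic link matrix P with p_ij = 1/n_i if page i links to page j.
P = np.zeros((n, n))
for i, outs in links.items():
    for j in outs:
        P[i, j] = 1.0 / len(outs)

# Dangling-node correction: P_tilde = P + d w^T, with w a probability vector (here w = e/n).
d = np.array([1.0 if not links[i] else 0.0 for i in range(n)])
e = np.ones(n)
w = e / n
P_tilde = P + np.outer(d, w)

# Google matrix A = alpha * P_tilde^T + (1 - alpha) * v e^T with v = e/n.
alpha = 0.99
v = e / n
A = alpha * P_tilde.T + (1.0 - alpha) * np.outer(v, e)

# The PageRank vector solves A x = x, ||x||_1 = 1, x > 0 (system (1.1)).
vals, vecs = np.linalg.eig(A)
x = np.real(vecs[:, np.argmax(np.real(vals))])
x = np.abs(x) / np.linalg.norm(x, 1)
print("||Ax - x||_2 =", np.linalg.norm(A @ x - x, 2))
```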

    The traditional method for computing PageRank is the power method [5], since it is based on matrix-vector products and requires little memory. However, when the largest and the second largest eigenvalues of the matrix $A$ are not well separated, or in other words, when the damping factor $\alpha$ is sufficiently close to 1, the power method performs poorly. To speed up the PageRank computation, extensive studies have been carried out over the past few years, including the extrapolation methods [6,7,8,9], the inner-outer methods [10,11,12,13,14], the adaptive methods [15,16,17] and the multigrid methods [18,19].
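
    For reference, a minimal sketch of the power method described above is given below; the iterate is renormalized in the 1-norm and the iteration stops when successive iterates agree to within the tolerance. The stopping rule and parameter names are our own choices, not taken from the paper.

```python
import numpy as np

def power_method(A, tol=1e-8, max_iter=100000):
    """Power iteration for A x = x: x_{k+1} = A x_k / ||A x_k||_1."""
    n = A.shape[0]
    x = np.ones(n) / n                           # initial guess x_0 = e/n
    for k in range(max_iter):
        x_new = A @ x
        x_new /= np.linalg.norm(x_new, 1)
        if np.linalg.norm(x_new - x, 1) < tol:   # change between successive iterates
            return x_new, k + 1
        x = x_new
    return x, max_iter

# Example (using the matrix A built above): x, its = power_method(A)
```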

    In addition, Krylov subspace methods play a powerful role in solving PageRank problems. Golub and Greif proposed an Arnoldi-type algorithm [20] without Ritz value computations by taking the relevant shift in the refined Arnoldi method [21] to be 1. Wu and Wei developed a Power-Arnoldi algorithm [22] by periodically knitting the power method with the thick restarted Arnoldi algorithm [23]. Hu et al. proposed a variant of the Power-Arnoldi algorithm [24] by using the power method with the extrapolation process based on trace (PET) [8]. Wu and Wei developed an Arnoldi-Extrapolation algorithm [25] by periodically combining the Arnoldi-type method [20] with the extrapolation method based on Ritz values. Zhang et al. proposed a FOM-Extrapolation algorithm [26] by using the same extrapolation method. Gu et al. developed a GMRES-Power algorithm [27] by periodically knitting the GMRES method [28] with the power method. Yin et al. proposed a generalized Arnoldi (GArnoldi) algorithm [15] by replacing the standard inner products with weighted inner products in the Arnoldi process. Wen et al. developed a Power-GArnoldi algorithm [16] by taking the adaptive GArnoldi method as an acceleration technique for the power method. More numerical methods for PageRank problems can be found in [29,30,31,32,33,34,35,36,37].

    Since the Google matrix $A$ in system (1.1) is stochastic, its largest eigenvalue is 1, and thus the linear system (1.1) can be rewritten as the following singular linear system:

    $$(I-A)x=0,\qquad \|x\|_1=1,\qquad x>0, \tag{1.2}$$

    where $I\in\mathbb{R}^{n\times n}$ is the identity matrix.

    In [26], the FOM algorithm is used to solve system (1.2). Hence, inspired by the GArnoldi algorithm [15,16,17], a generalized full orthogonalization method (GFOM) based on a weighted inner product is discussed for solving the singular linear system (1.2). To speed up convergence, two acceleration techniques are applied to the GFOM algorithm: one is the power method, and the other is the extrapolation method based on Ritz values. This yields two new algorithms, called GFOM-Power and GFOM-Extrapolation, for computing PageRank. Their implementations and convergence analysis are studied in detail. Numerical results show that the proposed algorithms are superior to the other methods when the damping factor $\alpha$ is close to 1.

    The remainder of the paper is organized as follows. In Section 2, we first briefly review the GArnoldi process, then discuss the GFOM algorithm for computing PageRank. In Section 3, we propose the GFOM-Power algorithm and the GFOM-Extrapolation algorithm for computing PageRank, and analyze their convergence properties, respectively. Numerical experiments and comparisons are reported in Section 4. Finally, conclusions are given in Section 5.

    In this section, we first briefly review the generalized Arnoldi (GArnoldi) process [15] based on weighted inner products, and then discuss the generalized FOM (GFOM) algorithm for computing PageRank. Some comparison results for the FOM algorithm and GFOM algorithm are presented at last.

    Given a symmetric positive definite (SPD) matrix $G=(g_{ij})\in\mathbb{R}^{n\times n}$ and two vectors $x=(x_i),\ y=(y_i)\in\mathbb{R}^{n\times 1}$, the $G$-inner product is defined as

    $$(x,y)_G=x^TGy=\sum_{i=1}^n\sum_{j=1}^ng_{ij}x_iy_j.$$

    Since the matrix $G$ is symmetric positive definite, it can be diagonalized. Suppose $G=Q^TDQ$, where $Q\in\mathbb{R}^{n\times n}$ is an orthogonal matrix and $D=\mathrm{diag}\{d_1,d_2,\ldots,d_n\}$ is an $n\times n$ diagonal matrix with $d_i>0$, $i=1,2,\ldots,n$. Then the corresponding $G$-norm is defined as

    $$\|x\|_G=\sqrt{(x,x)_G}=\sqrt{x^TGx}=\sqrt{x^TQ^TDQx}=\sqrt{\sum_{i=1}^nd_i(Qx)_i^2},\qquad x\in\mathbb{R}^{n\times 1}.$$
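
    As a small illustration of these definitions, the helper functions below evaluate the $G$-inner product and $G$-norm for a diagonal SPD matrix $G=\mathrm{diag}\{\delta_1,\ldots,\delta_n\}$, which is the only form of $G$ used later in Eq (2.2); the function names are ours.

```python
import numpy as np

def g_inner(x, y, delta):
    """(x, y)_G = x^T G y for the diagonal SPD matrix G = diag(delta), delta_i > 0."""
    return float(np.sum(delta * x * y))

def g_norm(x, delta):
    """||x||_G = sqrt((x, x)_G) for G = diag(delta)."""
    return float(np.sqrt(g_inner(x, x, delta)))
```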

    Given a general non-Hermitian matrix $\tilde{A}\in\mathbb{R}^{n\times n}$, an initial vector $v_0\in\mathbb{R}^{n\times 1}$, an SPD matrix $G$, and the number of steps $m$, the GArnoldi process is described as follows. More details can be found in [15,16,17].

    Algorithm 1. $[V_m, H_m, v_{m+1}, h_{m+1,m}, \beta]$ = GArnoldi$(\tilde{A}, v_0, G, m)$
    1. Compute $\beta=\|v_0\|_G$, $v_1=v_0/\beta$.
    2. for $j=1,2,\ldots,m$
    3.   Compute $z=\tilde{A}v_j$
    4.   for $i=1,2,\ldots,j$
    5.     Compute $h_{i,j}=(v_i,z)_G$
    6.     Compute $z=z-h_{i,j}v_i$
    7.   end for
    8.   Compute $h_{j+1,j}=\|z\|_G$
    9.   if $h_{j+1,j}=0$
    10.    break;
    11.   end if
    12.   $v_{j+1}=z/h_{j+1,j}$
    13. end for

    Note that when $G=I$, the GArnoldi process reduces to the standard Arnoldi process [21,38]. According to Algorithm 1, the following relations hold:

    $$\tilde{A}V_m=V_mH_m+h_{m+1,m}v_{m+1}e_m^T=V_{m+1}H_{m+1,m},\qquad V_m^TG\tilde{A}V_m=H_m,\qquad H_{m+1,m}=\begin{pmatrix}H_m\\ h_{m+1,m}e_m^T\end{pmatrix}, \tag{2.1}$$

    where $V_k=[v_1,v_2,\ldots,v_k]\in\mathbb{R}^{n\times k}$ ($k=m,\ m+1$) is a $G$-orthogonal matrix, $H_m=(h_{i,j})\in\mathbb{R}^{m\times m}$ is an upper Hessenberg matrix, and $e_m\in\mathbb{R}^{m\times 1}$ is the $m$th coordinate vector.
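
    A sketch of the GArnoldi process for a diagonal weight matrix $G=\mathrm{diag}(\delta)$ is given below. It follows Algorithm 1 step by step (Gram–Schmidt in the $G$-inner product) and returns $V_{m+1}$, $H_{m+1,m}$, and $\beta$; the interface (a matrix-vector callback and a weight vector) is our choice, not the paper's. One can check numerically that the computed quantities satisfy the first relation of (2.1), $\tilde{A}V_m=V_{m+1}H_{m+1,m}$.

```python
import numpy as np

def garnoldi(matvec, v0, delta, m):
    """GArnoldi process (Algorithm 1) with diagonal weight G = diag(delta).

    matvec(v) applies the non-Hermitian matrix A_tilde to a vector.  Returns a
    G-orthonormal V with m+1 columns, the (m+1) x m upper Hessenberg H, and
    beta = ||v0||_G.  Early breakdown (h_{j+1,j} = 0) is not handled here.
    """
    n = v0.shape[0]
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    gnorm = lambda z: np.sqrt(np.sum(delta * z * z))
    beta = gnorm(v0)
    V[:, 0] = v0 / beta
    for j in range(m):
        z = matvec(V[:, j])
        for i in range(j + 1):                       # Gram-Schmidt in the G-inner product
            H[i, j] = np.sum(delta * V[:, i] * z)    # h_{i,j} = (v_i, z)_G
            z = z - H[i, j] * V[:, i]
        H[j + 1, j] = gnorm(z)
        V[:, j + 1] = z / H[j + 1, j]
    return V, H, beta
```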

    In this subsection, by using the G-inner product as given above, we consider a generalized FOM (GFOM) algorithm for computing PageRank. The specific implementation of the GFOM algorithm is given as follows.

    Algorithm 2. The GFOM algorithm for computing PageRank
    1. Given a nonzero initial vector $x_0$, the number of steps $m$ of the GArnoldi process, and a prescribed tolerance $tol$.
    2. Compute the initial residual vector $r_0=-(I-A)x_0$.
    3. Set $G=I$.
    4. Run Algorithm 1 as $[V_m, H_m, v_{m+1}, h_{m+1,m}, \beta]$ = GArnoldi$(I-A, r_0, G, m)$.
    5. Compute the approximation vector $x=x_0+V_my$ with $y=\beta H_m^{-1}e_1$.
    6. Compute $r=-h_{m+1,m}(e_m^Ty)v_{m+1}$.
    7.   if $\|r\|_2/\|x\|_1\le tol$
    8.    Output $x=x/\|x\|_1$ and stop
    9.   else
    10.    Set $r_0=r$, $x_0=x$, $G=\mathrm{diag}\{|r|/\|r\|_1\}$ and goto step 4
    11. end if

    Here, some remarks about Algorithm 2 are in order.

    ● In line 5, the GFOM algorithm seeks a solution $x$ satisfying the Galerkin condition $(I-A)x\perp_G K_m(I-A,r_0)$; then we have the following formulation:

    $$0=V_m^TG\big((I-A)x\big)=V_m^TG\big((I-A)x_0+(I-A)V_my\big)=-V_m^TGr_0+V_m^TG(I-A)V_my=-\beta e_1+H_my,$$

    where $e_1=[1,0,\ldots,0]^T$, and we used the relations $V_m^TGr_0=\beta e_1$ and $V_m^TG(I-A)V_m=H_m$. Thus, if $H_m$ is nonsingular, the vector $y$ can be computed as $y=\beta H_m^{-1}e_1$.

    ● In line 7, there are two differences between the GFOM algorithm and the FOM algorithm [26]. One is the 2-norm of the residual vector $r$, which is computed as $\|r\|_2=|h_{m+1,m}|\,|e_m^Ty|\,\|v_{m+1}\|_2$ rather than $|h_{m+1,m}|\,|e_m^Ty|$, since $v_{m+1}$ has unit $G$-norm rather than unit 2-norm. The other is the convergence condition, which is set as $\|r\|_2/\|x\|_1\le tol$ rather than $\|r\|_2/\|x\|_2\le tol$.

    ● In the first iteration, the GFOM algorithm reduces to the FOM algorithm since $G=I$. For the other iterations, from line 10 of Algorithm 2, we construct the matrix $G$ as

    $$G=\mathrm{diag}\{\delta_1,\delta_2,\ldots,\delta_n\},\qquad \delta_i=|r_i|/\|r\|_1,\quad i=1,2,\ldots,n, \tag{2.2}$$

    where $r_i$ is the $i$th component of the residual vector $r$.
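
    Putting the pieces together, the following sketch of Algorithm 2 uses the garnoldi() helper above. The weight matrix starts as the identity and is rebuilt from the residual according to Eq (2.2) after each cycle; breakdown handling and other safeguards are omitted, so this is only an illustration of the method, not a reference implementation.

```python
import numpy as np

def gfom_pagerank(A, x0, m=8, tol=1e-8, max_cycles=500):
    """GFOM sketch (Algorithm 2) for the singular system (I - A) x = 0."""
    n = A.shape[0]
    x = x0.copy()
    delta = np.ones(n)                               # G = I on the first cycle
    matvec = lambda v: v - A @ v                     # apply I - A
    r = A @ x - x                                    # r_0 = -(I - A) x_0
    for _ in range(max_cycles):
        V, H, beta = garnoldi(matvec, r, delta, m)
        rhs = np.zeros(m)
        rhs[0] = beta                                # beta * e_1
        y = np.linalg.solve(H[:m, :m], rhs)          # y = beta * H_m^{-1} e_1
        x = x + V[:, :m] @ y
        r = -H[m, m - 1] * y[m - 1] * V[:, m]        # r = -h_{m+1,m} (e_m^T y) v_{m+1}
        if np.linalg.norm(r, 2) / np.linalg.norm(x, 1) <= tol:
            break
        delta = np.abs(r) / np.linalg.norm(r, 1)     # G = diag(|r| / ||r||_1), Eq (2.2)
    return x / np.linalg.norm(x, 1)
```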

    In this subsection, we test the effectiveness of the GFOM algorithm by comparing it with the FOM algorithm [26] for computing PageRank. The same initial guess $x_0=e/n$ is used, where $e=[1,1,\ldots,1]^T$. The damping factors are chosen as $\alpha$ = 0.99, 0.993, 0.995 and 0.997, respectively. The stopping criteria are set as $|h_{m+1,m}|\,|e_m^Ty|/\|x\|_1\le tol$ for the FOM algorithm and $|h_{m+1,m}|\,|e_m^Ty|\,\|v_{m+1}\|_2/\|x\|_1\le tol$ for the GFOM algorithm, where $tol=10^{-8}$ is the user-prescribed tolerance.

    Here we show the choice of the restart number $m$ by analyzing the numerical performance of the FOM algorithm and the GFOM algorithm for the wb-cs-stanford matrix, which is available from [39] and contains 9914 pages and 36,854 links. Table 1 reports the number of matrix-vector products of the FOM algorithm and the GFOM algorithm when $\alpha=0.99$, 0.993, 0.995, 0.997 and $m=6$, 7, 8, 9, 10, respectively. Figure 1 depicts the total CPU time of the FOM algorithm and the GFOM algorithm versus the restart number $m$.

    Table 1.  The number of matrix-vector products of the FOM algorithm and the GFOM algorithm with different damping factor α and restart number m for the wb-cs-stanford matrix.
    α m=6 m=7 m=8 m=9 m=10
    FOM GFOM FOM GFOM FOM GFOM FOM GFOM FOM GFOM
    α=0.99 280 287 240 176 198 162 260 160 209 154
    α=0.993 455 350 312 208 261 189 340 200 264 176
    α=0.995 539 448 392 232 315 207 420 220 341 231
    α=0.997 644 490 552 320 621 252 520 240 385 330

    Figure 1.  The total CPU time of the FOM algorithm and the GFOM algorithm versus different restart number m for the wb-cs-stanford matrix.

    From Table 1, we observe that the number of matrix-vector products of the FOM algorithm and the GFOM algorithm first decreases and then increases as $m$ increases in most cases. Specifically, $m=8$ appears to be a turning point on the whole. Meanwhile, from Figure 1, we find that the restart number giving the minimum CPU time differs for different damping factors $\alpha$. However, considering the memory requirements of both algorithms, $m=8$ is a reasonable choice in most cases. Hence, we set $m=8$ in the following numerical experiments.

    Figure 2 plots the convergence behavior of the FOM algorithm and the GFOM algorithm for the wb-cs-stanford matrix when $\alpha$ = 0.99, 0.993, 0.995 and 0.997, respectively. It shows that the GFOM algorithm converges faster than the FOM algorithm. Thus, it is meaningful to consider weighted inner products in the standard Arnoldi process.

    Figure 2.  Convergence behavior of the FOM algorithm and the GFOM algorithm for the wb-cs-stanford matrix.

    Since the GFOM algorithm requires more memory as the restart number $m$ increases, two acceleration techniques are used to improve it: one is the power method and the other is the extrapolation method based on Ritz values. Thus, two new algorithms, named GFOM-Power and GFOM-Extrapolation, are developed for computing PageRank, and their constructions and convergence analyses are discussed in detail.

    In this subsection, we first give the description of the GFOM-Power algorithm, and then discuss its convergence.

    Similar to the construction of the algorithms in [16,22,24], the idea of the GFOM-Power algorithm can be described as follows: given an initial vector $x_0$, we first run Algorithm 2 a few times (e.g., 2–3 times) to get an approximation; if the approximation is unsatisfactory, we use the resulting vector as the initial guess of the power method to obtain another approximation. If this approximation is still unsatisfactory, we repeat the above procedure periodically until the required tolerance is achieved.

    One issue is when and how to control the switch between the power method and the GFOM algorithm. Here we use the parameters $\phi$, $restart$, and $maxit$ as the flip-flop [22]. Let $\tau_{curr}$ and $\tau_{prev}$ be the residual norms of the current and previous power iterations, respectively. Denoting $ratio=\tau_{curr}/\tau_{prev}$, we check whether $ratio$ is greater than $\phi$; if so, we let $restart=restart+1$. We then check whether $restart$ is larger than $maxit$; if so, we terminate the power iterations and cycle back to the GFOM algorithm. Otherwise, we continue to run the power method. Considering the asymptotic convergence rate of the power method for PageRank problems [5], it is reasonable to choose $\phi=\alpha-0.1$. In summary, the GFOM-Power algorithm for computing PageRank is given as follows.

    Algorithm 3. The GFOM-Power algorithm for computing PageRank
    1. Given a unit positive initial guess $x_0$, the number of steps $m$ of the GArnoldi process, a prescribed tolerance $tol$, and the parameters $\phi$, $maxit$; set $restart=0$, $\tau=1$, $\tau_0=\tau$, $\tau_1=\tau$.
    2. Run Algorithm 2 a few times (2–3 times): iterate steps 1–11 for the first run and steps 4–11 otherwise. If the residual norm satisfies the prescribed tolerance $tol$, then stop; else continue.
    3. Run the power method with $\tilde{x}$ as the initial guess, where $\tilde{x}$ is the approximation vector obtained from the GFOM algorithm:
    3.1. $restart=0$
    3.2. while $restart<maxit$ & $\tau>tol$
    3.3.    $\tilde{x}=\tilde{x}/\|\tilde{x}\|_1$
    3.4.    $ratio=0$
    3.5.    while $ratio<\phi$ & $\tau>tol$
    3.6.     $x_P=A\tilde{x}$
    3.7.     $r=x_P-\tilde{x}$
    3.8.     $\tau=\|r\|_2$
    3.9.     $ratio=\tau/\tau_0$ % $\tau_{curr}/\tau_{prev}$
    3.10.      $\tilde{x}=x_P$, $\tau_0=\tau$
    3.11.   end while
    3.12.   if $\tau/\tau_1>\phi$ % $\tau_{curr}/\tau_{prev}$
    3.13.      $restart=restart+1$
    3.14.   end if
    3.15.   $\tau_0=\tau$, $\tau_1=\tau$
    3.16. end while
    3.17. Set $G=\mathrm{diag}\{|r|/\|r\|_1\}$
    3.18. if $\tau\le tol$, stop; else goto step 2

    Note that the residual $r$ in line 3.7 of Algorithm 3 is used not only to construct $G$ (see line 3.17), but also to check convergence.
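
    A sketch of the power phase (step 3 of Algorithm 3) is given below. The flip-flop bookkeeping is slightly simplified (a single previous residual norm is kept instead of $\tau_0$ and $\tau_1$), but the control logic is the same: power iterations continue while the convergence ratio stays below $\phi$, and control returns to the GFOM cycle once slow progress has been detected $maxit$ times or the tolerance is met.

```python
import numpy as np

def power_phase(A, x, phi, maxit, tol):
    """Power phase of Algorithm 3 (step 3), with simplified bookkeeping.

    Returns the updated iterate, the last power residual r (used to rebuild G),
    and its 2-norm tau.
    """
    restart, tau, tau_prev = 0, 1.0, 1.0
    r = np.zeros_like(x)
    while restart < maxit and tau > tol:
        x = x / np.linalg.norm(x, 1)
        ratio = 0.0
        while ratio < phi and tau > tol:
            x_new = A @ x
            r = x_new - x                      # residual of one power step
            tau = np.linalg.norm(r, 2)
            ratio = tau / tau_prev             # tau_curr / tau_prev
            x, tau_prev = x_new, tau
        if ratio >= phi:                       # slow progress: count one restart
            restart += 1
    return x, r, tau

# Typical use inside a GFOM-Power cycle, with phi = alpha - 0.1:
# x, r, tau = power_phase(A, x, alpha - 0.1, maxit=8, tol=1e-8)
# delta = np.abs(r) / np.linalg.norm(r, 1)    # G = diag(|r|/||r||_1) for the next GFOM cycle
```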

    In this subsection, we analyze the convergence of the GFOM-Power algorithm. In particular, our analysis focuses on the transition from the power method back to the GFOM algorithm.

    Assume that the eigenvalues of the Google matrix $A$ are ordered as $1=|\lambda_1|>|\lambda_2|\ge\cdots\ge|\lambda_n|$, and let $\Lambda(A)$ denote the set of eigenvalues of $A$. Let $\mathcal{P}_{m-1}$ be the set of polynomials of degree not greater than $m-1$, and let $(\lambda_i,\mu_i)$, $i=1,2,\ldots,n$, denote the eigenpairs of $A$. For two given vectors $v$ and $w$, $\cos(v,w)=\frac{\langle v,w\rangle}{\|v\|_2\|w\|_2}$ denotes the cosine of the angle between them. We first introduce a useful theorem about the spectral properties of the Google matrix $A$.

    Theorem 1 [40]. Let $\tilde{P}$ be an $n\times n$ column-stochastic matrix. Let $\alpha$ be a real number such that $0<\alpha<1$. Let $E$ be the $n\times n$ rank-one column-stochastic matrix $E=ve^T$, where $e$ is the $n$-vector whose elements are all ones and $v$ is an $n$-vector whose elements are all nonnegative and sum to 1. Let $A=\alpha\tilde{P}+(1-\alpha)E$ be the resulting $n\times n$ column-stochastic matrix. Then its dominant eigenvalue satisfies $\lambda_1=1$ and $|\lambda_2|\le\alpha$.

    Now let's analyze the convergence of the GFOM-Power algorithm. Comparing the FOM algorithm and the GFOM algorithm, it is not difficult to find that the main difference between them lies in the Arnoldi process. The former uses the 2-norm, while the latter uses the G-norm. Suppose the weighted matrix G is given as in Eq (2.2), then the relation between the G-norm and the 2-norm is shown as follows.

    Lemma 1 [16]. Let $G=\mathrm{diag}\{\delta_1,\delta_2,\ldots,\delta_n\}$, $\delta_i>0$, $1\le i\le n$, be a diagonal matrix. For any vector $x\in\mathbb{R}^{n\times 1}$, according to the definitions of the $G$-norm and the 2-norm, we have

    $$\min_{1\le i\le n}\delta_i\,\|x\|_2^2\le\|x\|_G^2\le\max_{1\le i\le n}\delta_i\,\|x\|_2^2. \tag{3.1}$$

    Since our analysis focuses on the transition from the power method back to the GFOM algorithm, it is necessary to derive the iterative formula of the power method in Algorithm 3. Let $\tilde{x}$ be the initial vector for the power method, which is obtained from the GFOM algorithm. Then the power method in Algorithm 3 produces the vector $x_P=\omega A^l\tilde{x}$, where $l\ge maxit$ is the number of power iterations performed and $\omega=1/\|A^l\tilde{x}\|_1$ is the normalizing factor. In the next cycle of the GFOM-Power algorithm, the resulting vector $x_P$ is used as the initial vector for an $m$-step GFOM algorithm, so that the new associated Krylov subspace is

    $$K_m(I-A,r_P)=\mathrm{span}\{r_P,(I-A)r_P,\ldots,(I-A)^{m-1}r_P\},$$

    where $r_P=(I-A)x_P$. For any vector $u\in x_P+K_m(I-A,r_P)$, there exists a polynomial $p\in\mathcal{P}_{m-1}$ such that $u=x_P+p(I-A)r_P$. Then, we present the convergence result of the GFOM-Power algorithm for computing PageRank as follows.

    Theorem 2. Let $\tilde{x}$ be the current approximate vector obtained from the GFOM algorithm, and assume that $\tilde{x}=\sum_{i=1}^n\gamma_i\mu_i$ with respect to the eigenbasis $\{\mu_i\}_{i=1,2,\ldots,n}$, in which $\|\mu_i\|_2=1$, $i=1,2,\ldots,n$, and $\gamma_i\neq0$. Then, the relationship between the next approximate vector $u$ and $\mu_1$ holds asymptotically:

    $$|\sin(u,\mu_1)|\le\sqrt{\frac{\max_{1\le i\le n}\delta_i}{\min_{1\le i\le n}\delta_i}}\;\alpha^l\left(\frac{\sum_{i=2}^n|\gamma_i|}{|\gamma_1|}\right)\varepsilon_1,$$

    where $l\ge maxit$ and $\varepsilon_1=\min_{q\in\mathcal{P}_{m-1}}\max_{\lambda\in\Lambda(A)\setminus\{\lambda_1\}}|q(\lambda)|$ with $q(\lambda)=1-(1-\lambda)p(1-\lambda)$.

    Proof. From step 3 of Algorithm 3, we know that $\tilde{x}$ is used as the initial vector for the power iterations; thus $x_P=\omega A^l\tilde{x}$, which can be represented as

    $$x_P=\omega\sum_{i=1}^n\gamma_iA^l\mu_i.$$

    The corresponding residual is computed as $r_P=(I-A)x_P$, which is used as the initial residual vector for the next GFOM cycle. Thus, the next approximate vector $u\in x_P+K_m(I-A,r_P)$ is obtained, and there exists a polynomial $p\in\mathcal{P}_{m-1}$ such that

    $$u=x_P+p(I-A)r_P=\big[I-(I-A)p(I-A)\big]x_P=\omega\sum_{i=1}^n\gamma_i\lambda_i^l\big[1-(1-\lambda_i)p(1-\lambda_i)\big]\mu_i,$$

    where we used the fact that $A\mu_i=\lambda_i\mu_i$ ($i=1,2,\ldots,n$).

    By a direct computation, we have

    $$|\sin(u,\mu_1)|\le\frac{\left\|\sum_{i=2}^n\gamma_i\lambda_i^l\big[1-(1-\lambda_i)p(1-\lambda_i)\big]\mu_i\right\|_G}{\left\|\gamma_1\mu_1\right\|_G}. \tag{3.2}$$

    Using Eq (3.1), for the numerator of Eq (3.2), we have

    $$\left\|\sum_{i=2}^n\gamma_i\lambda_i^l\big[1-(1-\lambda_i)p(1-\lambda_i)\big]\mu_i\right\|_G\le\sqrt{\max_{1\le i\le n}\delta_i}\,\left\|\sum_{i=2}^n\gamma_i\lambda_i^l\big[1-(1-\lambda_i)p(1-\lambda_i)\big]\mu_i\right\|_2\le\sqrt{\max_{1\le i\le n}\delta_i}\left(\sum_{i=2}^n|\gamma_i||\lambda_i|^l\big|1-(1-\lambda_i)p(1-\lambda_i)\big|\right). \tag{3.3}$$

    On the other hand, for the denominator of Eq (3.2), we have

    $$\|\gamma_1\mu_1\|_G\ge\sqrt{\min_{1\le i\le n}\delta_i}\,\|\gamma_1\mu_1\|_2\ge\sqrt{\min_{1\le i\le n}\delta_i}\,|\gamma_1|. \tag{3.4}$$

    Substituting Eqs (3.3) and (3.4) into (3.2), and letting $q(\lambda)=1-(1-\lambda)p(1-\lambda)$, we have

    $$\begin{aligned}|\sin(u,\mu_1)|&\le\frac{\sqrt{\max_{1\le i\le n}\delta_i}\left(\sum_{i=2}^n|\gamma_i||\lambda_i|^l\big|1-(1-\lambda_i)p(1-\lambda_i)\big|\right)}{\sqrt{\min_{1\le i\le n}\delta_i}\,|\gamma_1|}\le\sqrt{\frac{\max_{1\le i\le n}\delta_i}{\min_{1\le i\le n}\delta_i}}\;\alpha^l\max_{i\neq1}\big|1-(1-\lambda_i)p(1-\lambda_i)\big|\left(\frac{\sum_{i=2}^n|\gamma_i|}{|\gamma_1|}\right)\\&\le\sqrt{\frac{\max_{1\le i\le n}\delta_i}{\min_{1\le i\le n}\delta_i}}\;\alpha^l\left(\frac{\sum_{i=2}^n|\gamma_i|}{|\gamma_1|}\right)\min_{q\in\mathcal{P}_{m-1}}\max_{\lambda\in\Lambda(A)\setminus\{\lambda_1\}}|q(\lambda)|=\sqrt{\frac{\max_{1\le i\le n}\delta_i}{\min_{1\le i\le n}\delta_i}}\;\alpha^l\left(\frac{\sum_{i=2}^n|\gamma_i|}{|\gamma_1|}\right)\varepsilon_1,\end{aligned}$$

    where we used the result in Theorem 1 and the ordering $|\lambda_n|\le\cdots\le|\lambda_2|\le\alpha$.

    In this subsection, we describe the GFOM-Extrapolation algorithm and discuss its convergence.

    The extrapolation procedure is based on power iterations and Ritz values. Assume that the initial approximation $x_0$ can be written as the following linear combination:

    $$x_0=\mu_1+a_2\mu_2+a_3\mu_3,$$

    where $\mu_1$, $\mu_2$, $\mu_3$ are the eigenvectors of the Google matrix $A$ corresponding to its three largest eigenvalues $\lambda_1$, $\lambda_2$ and $\lambda_3$, respectively. After two power iterations, we have

    $$x_1=Ax_0=\mu_1+a_2\lambda_2\mu_2+a_3\lambda_3\mu_3,$$
    $$x_2=Ax_1=\mu_1+a_2\lambda_2^2\mu_2+a_3\lambda_3^2\mu_3.$$

    It follows that

    $$\mu_1=\big(x_2-(\lambda_2+\lambda_3)x_1+\lambda_2\lambda_3x_0\big)\big/\big[(1-\lambda_2)(1-\lambda_3)\big]. \tag{3.5}$$

    By normalizing $\mu_1$, an improved approximation to the PageRank vector can be computed as

    $$x_{Ex}=\mu_1/\|\mu_1\|_1. \tag{3.6}$$

    One advantage of the extrapolation procedure is that the unknown eigenvalues $\lambda_2$ and $\lambda_3$ of the Google matrix $A$ can be estimated from step 4 of Algorithm 2. It is well known that the Ritz values are defined as the eigenvalues of the upper Hessenberg matrix $H_m$, and they can be used to approximate the eigenvalues of the coefficient matrix $I-A$. Let $\tilde{\lambda}_2$ and $\tilde{\lambda}_3$ be the second and third largest eigenvalues of $H_m$; then we make a simple transformation,

    $$\lambda_2\approx1-\tilde{\lambda}_2,\qquad \lambda_3\approx1-\tilde{\lambda}_3.$$

    Since $\tilde{\lambda}_2$ and $\tilde{\lambda}_3$ may be complex-valued, the following classification is made; please refer to [25,26] for details.

    Case 1. If both $\tilde{\lambda}_2$ and $\tilde{\lambda}_3$ are real, or $\tilde{\lambda}_2$ and $\tilde{\lambda}_3$ are complex conjugates, then $x_{Ex}$ is computed as in Eq (3.6).

    Case 2. If $\tilde{\lambda}_2$ is real but $\tilde{\lambda}_3$ is complex, we assume that $x_0=\mu_1+a_2\mu_2$ and $x_1=Ax_0=\mu_1+a_2\lambda_2\mu_2$; then $x_{Ex}$ is computed as

    $$x_{Ex}=(x_1-\lambda_2x_0)/\|x_1-\lambda_2x_0\|_1. \tag{3.7}$$

    Case 3. If $\tilde{\lambda}_2$ is complex but $\tilde{\lambda}_3$ is real, or both $\tilde{\lambda}_2$ and $\tilde{\lambda}_3$ are complex but not conjugate, then $x_{Ex}$ is computed as

    $$x_{Ex}=(x_1-\alpha x_0)/\|x_1-\alpha x_0\|_1. \tag{3.8}$$
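
    The following sketch implements one extrapolation step, Eqs (3.5)–(3.8), with the case classification above. The Ritz-value estimates $\lambda_2$, $\lambda_3$ are passed in as arguments, and the normalization and real-part cleanup at the end are implementation choices of this sketch.

```python
import numpy as np

def extrapolate(A, x0, lam2, lam3, alpha):
    """One extrapolation step based on Ritz-value estimates, Eqs (3.5)-(3.8)."""
    x1 = A @ x0
    x2 = A @ x1
    real2, real3 = bool(np.isreal(lam2)), bool(np.isreal(lam3))
    conjugate = bool(np.isclose(lam2, np.conj(lam3)))
    if (real2 and real3) or conjugate:                  # Case 1, Eq (3.6)
        y = x2 - (lam2 + lam3) * x1 + lam2 * lam3 * x0
    elif real2 and not real3:                           # Case 2, Eq (3.7)
        y = x1 - lam2 * x0
    else:                                               # Case 3, Eq (3.8)
        y = x1 - alpha * x0
    y = np.real(y)                                      # drop rounding-level imaginary parts
    return y / np.linalg.norm(y, 1)
```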

    Now we accelerate the GFOM algorithm with the extrapolation procedure based on Ritz values for computing PageRank. The specific implementation is given as follows.

    Algorithm 4. The GFOM-Extrapolation algorithm for computing PageRank
    1. Given a unit positive initial guess $x_0$, the number of steps $m$ of the GArnoldi process, a prescribed tolerance $tol$, and the number $maxnum$ of extrapolation steps.
    2. Run Algorithm 2 a few times (2–3 times): iterate steps 1–11 for the first run and steps 4–11 otherwise. If the residual norm satisfies the prescribed tolerance $tol$, then stop; else continue.
    3. Compute the eigenvalues of $H_m$ obtained from Algorithm 2, and select the second and third largest Ritz values $\tilde{\lambda}_2$, $\tilde{\lambda}_3$.
    4. Set $\lambda_2=1-\tilde{\lambda}_2$, $\lambda_3=1-\tilde{\lambda}_3$.
    5. Run the extrapolation procedure with $x_0=\tilde{x}/\|\tilde{x}\|_1$ as the initial guess, where $\tilde{x}$ is the approximation vector obtained from Algorithm 2:
    5.1.   for $i=1:maxnum$
    5.2.    $x_1=Ax_0$
    5.3.    $x_2=Ax_1$
    5.4.     if Case 1 is satisfied
    5.5.      $x_{Ex}=\big(x_2-(\lambda_2+\lambda_3)x_1+\lambda_2\lambda_3x_0\big)/\|x_2-(\lambda_2+\lambda_3)x_1+\lambda_2\lambda_3x_0\|_1$
    5.6.     else if Case 2 is satisfied
    5.7.      $x_{Ex}=(x_1-\lambda_2x_0)/\|x_1-\lambda_2x_0\|_1$
    5.8.     else (Case 3)
    5.9.      $x_{Ex}=(x_1-\alpha x_0)/\|x_1-\alpha x_0\|_1$
    5.10.     end if
    5.11.   Compute $r=x_{Ex}-x_0$
    5.12.   Set $x_0=x_{Ex}$
    5.13. end for
    5.14. if $\|r\|_2\le tol$, stop; else set $G=\mathrm{diag}\{|r|/\|r\|_1\}$ and goto step 2

    Note that the choice of the number $maxnum$ in line 5.1 of Algorithm 4 will be discussed in Section 4.
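
    A sketch of the complete GFOM-Extrapolation cycle is given below, built on the garnoldi() and extrapolate() helpers above. The selection of the second and third largest Ritz values (here: sorting the estimates $1-\tilde{\lambda}_i$ by modulus) and the recomputation of the residual at the start of each cycle are simplifying assumptions of this sketch rather than details taken from the paper.

```python
import numpy as np

def gfom_extrapolation(A, x0, m=8, maxnum=4, alpha=0.99, tol=1e-8, max_cycles=200):
    """GFOM-Extrapolation sketch (Algorithm 4), built on garnoldi() and extrapolate()."""
    n = A.shape[0]
    x = x0 / np.linalg.norm(x0, 1)
    delta = np.ones(n)
    matvec = lambda v: v - A @ v
    for _ in range(max_cycles):
        # One GFOM cycle (steps 4-11 of Algorithm 2).
        r = A @ x - x
        V, H, beta = garnoldi(matvec, r, delta, m)
        rhs = np.zeros(m)
        rhs[0] = beta
        y = np.linalg.solve(H[:m, :m], rhs)
        x = x + V[:, :m] @ y
        r = -H[m, m - 1] * y[m - 1] * V[:, m]
        if np.linalg.norm(r, 2) / np.linalg.norm(x, 1) <= tol:
            break
        # Ritz values of H_m approximate the eigenvalues of I - A (steps 3-4).
        ritz = np.linalg.eigvals(H[:m, :m])
        lam = 1.0 - ritz                                 # estimates of eigenvalues of A
        lam = lam[np.argsort(-np.abs(lam))]              # sort by modulus (assumed selection rule)
        lam2, lam3 = lam[1], lam[2]
        # maxnum extrapolation steps (step 5).
        for _ in range(maxnum):
            x_new = extrapolate(A, x, lam2, lam3, alpha)
            r = x_new - x
            x = x_new
        if np.linalg.norm(r, 2) <= tol:
            break
        delta = np.abs(r) / np.linalg.norm(r, 1)         # G = diag(|r| / ||r||_1)
    return x / np.linalg.norm(x, 1)
```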

    In this subsection, we analyze the convergence of the GFOM-Extrapolation algorithm. In particular, our analysis focuses on the transition from the extrapolation procedure back to the GFOM algorithm.

    As shown in [41], the second largest eigenvalue $\lambda_2$ of the Google matrix $A$ is semi-simple. Here we assume that the third largest eigenvalue $\lambda_3$ is also semi-simple and that $A$ is diagonalizable. The following theorem gives the form of the new PageRank vector obtained after the extrapolation procedure in Algorithm 4.

    Theorem 3. Let $\tilde{x}$ be the initial vector for the extrapolation procedure, obtained from the previous GFOM cycle. Assume that $\tilde{x}$ can be expressed as $\tilde{x}=\mu_1+\sum_{i=2}^na_i\mu_i$ with respect to the eigenbasis $\{\mu_i\}_{i=1,2,\ldots,n}$, in which $\|\mu_i\|_2=1$, $i=1,2,\ldots,n$. Then the new initial vector obtained from the extrapolation procedure for the next GFOM cycle is

    $$x_{Ex}=c^{-1}\Big\{(1-\lambda_2)(1-\lambda_3)\mu_1+\sum_{i=4}^na_i\lambda_i^f\big[\lambda_i^2-(\lambda_2+\lambda_3)\lambda_i+\lambda_2\lambda_3\big]\mu_i\Big\}, \tag{3.9}$$

    where $c=\|x_2-(\lambda_2+\lambda_3)x_1+\lambda_2\lambda_3x_0\|_2$ is the scaling factor and $f$ is the number $maxnum$ of applications of the extrapolation procedure.

    Proof. The proof of Theorem 3 is completely analogous to the convergence analyses of the Arnoldi-Extrapolation algorithm and the FOM-Extrapolation algorithm; please refer to [25,26] for details.

    Theorem 4. Under the above assumptions, set $S=\{\lambda_1,\lambda_2,\lambda_3\}$. Then the relationship between the next approximate vector $u$ and $\mu_1$ holds asymptotically:

    $$|\sin(u,\mu_1)|\le\sqrt{\frac{\max_{1\le i\le n}\delta_i}{\min_{1\le i\le n}\delta_i}}\;\xi\,\alpha^f\varepsilon_2,$$

    where $\xi=\dfrac{\sum_{i=4}^n|a_i|\big|\lambda_i^2-(\lambda_2+\lambda_3)\lambda_i+\lambda_2\lambda_3\big|}{\big|(1-\lambda_2)(1-\lambda_3)\big|}$ and $\varepsilon_2=\min_{q\in\mathcal{P}_{m-1}}\max_{\lambda\in\Lambda(A)\setminus S}|q(\lambda)|$ with $q(\lambda)=1-(1-\lambda)p(1-\lambda)$.

    Proof. According to Algorithm 4 and Theorem 3, after the extrapolation procedure the residual vector of the approximate vector $x_{Ex}$ is computed as $r_{Ex}=(I-A)x_{Ex}$, which is used as the initial residual vector for the next GFOM cycle. Then we obtain the next approximation $u\in x_{Ex}+K_m(I-A,r_{Ex})$, and there exists a polynomial $p\in\mathcal{P}_{m-1}$ such that

    $$u=x_{Ex}+p(I-A)r_{Ex}=\big[I-(I-A)p(I-A)\big]x_{Ex}=c^{-1}\Big\{(1-\lambda_2)(1-\lambda_3)\mu_1+\sum_{i=4}^na_i\lambda_i^f\big[\lambda_i^2-(\lambda_2+\lambda_3)\lambda_i+\lambda_2\lambda_3\big]\big[1-(1-\lambda_i)p(1-\lambda_i)\big]\mu_i\Big\},$$

    where we used the facts that $\lambda_1=1$, $A\mu_1=\mu_1$ and $A\mu_i=\lambda_i\mu_i$ ($i=4,\ldots,n$).

    Then we have

    $$|\sin(u,\mu_1)|\le\frac{\left\|\sum_{i=4}^na_i\lambda_i^f\big[\lambda_i^2-(\lambda_2+\lambda_3)\lambda_i+\lambda_2\lambda_3\big]\big[1-(1-\lambda_i)p(1-\lambda_i)\big]\mu_i\right\|_G}{\left\|(1-\lambda_2)(1-\lambda_3)\mu_1\right\|_G}. \tag{3.10}$$

    Now, using Eq (3.1), for the numerator of Eq (3.10), we have

    $$\left\|\sum_{i=4}^na_i\lambda_i^f\big[\lambda_i^2-(\lambda_2+\lambda_3)\lambda_i+\lambda_2\lambda_3\big]\big[1-(1-\lambda_i)p(1-\lambda_i)\big]\mu_i\right\|_G\le\sqrt{\max_{1\le i\le n}\delta_i}\,\left\|\sum_{i=4}^na_i\lambda_i^f\big[\lambda_i^2-(\lambda_2+\lambda_3)\lambda_i+\lambda_2\lambda_3\big]\big[1-(1-\lambda_i)p(1-\lambda_i)\big]\mu_i\right\|_2\le\sqrt{\max_{1\le i\le n}\delta_i}\left(\sum_{i=4}^n|a_i||\lambda_i|^f\big|1-(1-\lambda_i)p(1-\lambda_i)\big|\,\big|\lambda_i^2-(\lambda_2+\lambda_3)\lambda_i+\lambda_2\lambda_3\big|\right). \tag{3.11}$$

    On the other hand, for the denominator of Eq (3.10), we have

    $$\|(1-\lambda_2)(1-\lambda_3)\mu_1\|_G\ge\sqrt{\min_{1\le i\le n}\delta_i}\,\|(1-\lambda_2)(1-\lambda_3)\mu_1\|_2\ge\sqrt{\min_{1\le i\le n}\delta_i}\,\big|(1-\lambda_2)(1-\lambda_3)\big|. \tag{3.12}$$

    Substituting Eqs (3.11) and (3.12) into (3.10), and letting $q(\lambda)=1-(1-\lambda)p(1-\lambda)$, we have

    $$\begin{aligned}|\sin(u,\mu_1)|&\le\frac{\sqrt{\max_{1\le i\le n}\delta_i}\left(\sum_{i=4}^n|a_i||\lambda_i|^f\big|1-(1-\lambda_i)p(1-\lambda_i)\big|\,\big|\lambda_i^2-(\lambda_2+\lambda_3)\lambda_i+\lambda_2\lambda_3\big|\right)}{\sqrt{\min_{1\le i\le n}\delta_i}\,\big|(1-\lambda_2)(1-\lambda_3)\big|}\\&\le\sqrt{\frac{\max_{1\le i\le n}\delta_i}{\min_{1\le i\le n}\delta_i}}\;\alpha^f\max_{4\le i\le n}\big|1-(1-\lambda_i)p(1-\lambda_i)\big|\;\frac{\sum_{i=4}^n|a_i|\big|\lambda_i^2-(\lambda_2+\lambda_3)\lambda_i+\lambda_2\lambda_3\big|}{\big|(1-\lambda_2)(1-\lambda_3)\big|}\\&\le\sqrt{\frac{\max_{1\le i\le n}\delta_i}{\min_{1\le i\le n}\delta_i}}\;\xi\,\alpha^f\min_{q\in\mathcal{P}_{m-1}}\max_{\lambda\in\Lambda(A)\setminus S}|q(\lambda)|=\sqrt{\frac{\max_{1\le i\le n}\delta_i}{\min_{1\le i\le n}\delta_i}}\;\xi\,\alpha^f\varepsilon_2,\end{aligned}$$

    where we also used the result in Theorem 1 and the ordering $|\lambda_n|\le\cdots\le|\lambda_2|\le\alpha$.

    In this section, we present numerical experiments to verify the effectiveness of the proposed algorithms: the GFOM-Power algorithm and the GFOM-Extrapolation algorithm in terms of the computing time (CPU) in seconds and the number of matrix-vector products (Mv). All the numerical results are obtained by using MATLAB 2019a on the Windows 10 64 bit operating system with 2.40 GHz Intel(R) Core(TM) i7-5500U CPU and RAM 8.00 GB.

    The characteristics of the test matrices are listed in Table 2, where $n$ stands for the matrix size, $nnz$ denotes the number of nonzero elements, $numd$ is the number of dangling nodes, and $den$ is the density, defined by $den=\frac{nnz}{n\times n}\times100$. All test matrices are available from [39]. For fairness, all algorithms use the same initial guess $x_0=e/n$ with $e=[1,1,\ldots,1]^T$ and the same tolerance $tol=10^{-8}$. The damping factors are set as $\alpha$ = 0.99, 0.993, 0.995 and 0.997, respectively.
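
    For completeness, the quantities in Table 2 can be computed directly from a stored link matrix, e.g., with the short sketch below; the file name is hypothetical, and the matrices themselves are available from [39].

```python
import numpy as np
from scipy.io import mmread

# Hypothetical local file in Matrix Market format; see [39] for the actual collections.
P = mmread("wb-cs-stanford.mtx").tocsr()

n = P.shape[0]                               # matrix size
nnz = P.nnz                                  # number of nonzero elements
numd = int(np.sum(P.getnnz(axis=1) == 0))    # dangling nodes: rows without outlinks
den = nnz / (n * n) * 100                    # density, den = nnz / (n * n) * 100
print(f"n = {n}, nnz = {nnz}, numd = {numd}, den = {den:.3e}")
```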

    Table 2.  The characteristics of the test matrices.
    Name n nnz numd den
    California 9664 5027 4637 0.538×10^-2
    wb-cs-stanford 9914 36,854 2861 0.375×10^-1
    soc-Epinions1 75,888 60,341 15,547 0.105×10^-2
    flickr 820,878 9,837,214 265,189 0.146×10^-2
    eu-2005 862,664 790,989 71,675 0.106×10^-3
    in-2004 1,382,908 1,100,602 282,306 0.575×10^-4
    wikipedia-20051105 1,634,989 19,753,078 72,556 0.739×10^-3
    wiki-Talk 2,394,385 5,021,410 2,246,783 0.876×10^-4


    In this subsection, we test the effectiveness of the GFOM-Power algorithm (denoted as "GFOM-P") and compare it with the Power-Arnoldi algorithm (denoted as "PA") [22], the Arnoldi-Chebyshev algorithm (denoted as "AC") [32], and the GFOM algorithm (denoted as "GFOM") given in Algorithm 2. The test matrices are soc-Epinions1, eu-2005, in-2004 and wikipedia-20051105. We define

    $$Spe_1=\frac{CPU_{\mathrm{GFOM}}-CPU_{\mathrm{GFOM\text{-}P}}}{CPU_{\mathrm{GFOM}}}\times100\%$$

    to record the speedup of the GFOM-Power algorithm with respect to the GFOM algorithm in terms of CPU time. Note that, in the Power-Arnoldi algorithm, we run the thick restarted Arnoldi procedure twice per cycle with the number of approximate eigenpairs $k=6$. In the GFOM-Power algorithm, we also run the GFOM procedure twice per cycle. In the Arnoldi-Chebyshev algorithm, we set the number of Chebyshev steps to $l=10$.

    In this subsection, we show the choice of the number maxit according to the numerical performance of the Power-Arnoldi algorithm and the GFOM-Power algorithm for the wb-cs-stanford matrix. Table 3 lists the number of matrix-vector products of the Power-Arnoldi algorithm and the GFOM-Power algorithm when the damping factor α=0.99, 0.993, 0.995, 0.997 and the number maxit varies from 6 to 9, respectively. Figure 3 depicts the total CPU time of the Power-Arnoldi algorithm and the GFOM-Power algorithm versus different number maxit.

    Table 3.  The number of matrix-vector products of the Power-Arnoldi algorithm and the GFOM-Power algorithm with different damping factor and number maxit for the wb-cs-stanford matrix.
    α maxit=6 maxit=7 maxit=8 maxit=9
    PA GFOM-P PA GFOM-P PA GFOM-P PA GFOM-P
    α=0.99 148 153 139 130 145 174 149 192
    α=0.993 183 350 159 167 176 161 181 190
    α=0.995 212 205 256 190 196 189 210 177
    α=0.997 235 238 267 234 224 231 243 209

    Figure 3.  The total CPU time of the Power-Arnoldi algorithm and the GFOM-Power algorithm versus different number maxit for the wb-cs-stanford matrix.

    From Table 3, we find that the number of matrix-vector products of the Power-Arnoldi algorithm and the GFOM-Power algorithm first decreases and then increases as $maxit$ increases when the damping factor $\alpha$ = 0.99 and 0.993. For $\alpha$ = 0.995 and 0.997, the number of matrix-vector products of both algorithms is relatively small when $maxit=8$. At the same time, from Figure 3, we observe that the value of $maxit$ giving the minimum CPU time differs for different damping factors $\alpha$. In the left picture of Figure 3, we find that the total CPU time of the Power-Arnoldi algorithm is smallest when $maxit=8$. In the right picture of Figure 3, both $maxit=8$ and $maxit=9$ appear feasible. Therefore, as a compromise, $maxit=8$ is selected in the following numerical experiments.

    In this subsection, Table 4 lists numerical results of the GFOM algorithm, the Power-Arnoldi algorithm, the Arnoldi-Chebyshev algorithm and the GFOM-Power algorithm for these four test matrices when α=0.99, 0.993, 0.995 and 0.997, respectively.

    Table 4.  Numerical comparisons of the GFOM algorithm, the Power-Arnoldi algorithm, the Arnoldi-Chebyshev algorithm and the GFOM-Power algorithm for these four test matrices, where "–" denotes stagnation.
    GFOM PA AC GFOM-P Spe1
    α Mv CPU Mv CPU Mv CPU Mv CPU
    soc-Epinions1 0.99 126 0.5695 118 0.3991 151 0.3545 100 0.2755 51.62%
    0.993 153 0.5443 118 0.3103 151 0.3070 104 0.2516 53.76%
    0.995 162 0.5183 161 0.4407 171 0.3379 138 0.3837 26.20%
    0.997 234 0.7594 198 0.6292 291 0.5857 151 0.3995 47.39%
    eu-2005 0.99 171 12.9438 215 11.9705 271 12.5739 168 9.4228 27.20%
    0.993 225 15.0967 218 12.7750 320 14.8302 198 12.0650 20.08%
    0.995 315 23.1987 288 17.5105 391 17.5914 211 12.6781 45.35%
    0.997 450 32.5724 551 24.9951 295 18.9743 41.75%
    in-2004 0.99 342 31.6287 535 46.8129 851 48.0653 284 23.8216 24.68%
    0.993 378 46.1470 731 78.5646 1191 66.2892 345 34.7248 24.75%
    0.995 423 48.4014 1051 113.7902 1591 88.8401 377 35.9242 25.78%
    0.997 585 70.8687 1531 164.7577 2010 112.7999 486 47.3361 33.21%
    wikipedia-20051105 0.99 126 23.0604 135 19.5614 151 32.1638 76 12.6008 45.36%
    0.993 162 32.3998 136 21.1544 191 40.5334 119 20.5868 36.46%
    0.995 207 42.5474 165 27.1784 231 50.9236 119 21.3030 49.93%
    0.997 243 48.4092 206 33.5083 351 84.0431 144 22.9271 52.64%


    From Table 4, we can observe that,

    ● The GFOM-Power algorithm greatly improves on the GFOM algorithm in terms of the number of matrix-vector products and the CPU time for these four test matrices with different damping factors $\alpha$. In particular, the reduction in CPU time reaches 53.76% for the soc-Epinions1 matrix with $\alpha=0.993$, which shows that using the power method to accelerate the GFOM algorithm is worthwhile.

    ● The GFOM-Power algorithm also outperforms the Power-Arnoldi algorithm and the Arnoldi-Chebyshev algorithm in terms of the number of matrix-vector products and the CPU time for these four test matrices in most cases. Hence, we can say that the GFOM-Power algorithm is more efficient than the Power-Arnoldi algorithm and the Arnoldi-Chebyshev algorithm, especially when the damping factor α is high.

    Figure 4 plots the convergence behavior of the GFOM algorithm, the Power-Arnoldi algorithm, the Arnoldi-Chebyshev algorithm and the GFOM-Power algorithm for the soc-Epinions1 matrix when $\alpha$ = 0.99, 0.993, 0.995 and 0.997, respectively. It shows that the GFOM-Power algorithm converges faster than the other three algorithms. Similar convergence behaviors are observed for the other three test matrices, so we omit the plots for simplicity.

    Figure 4.  Convergence behavior of the GFOM algorithm, the Power-Arnoldi algorithm, the Arnoldi-Chebyshev algorithm and the GFOM-Power algorithm for the soc-Epinions1 matrix.

    In this subsection, we test the effectiveness of the GFOM-Extrapolation algorithm (denoted as "GFOM-EXT"), and compare it with the FOM-Extrapolation algorithm (denoted as "FOM-EXT") [26], the Arnoldi-Chebyshev algorithm [32] and the GFOM algorithm (denoted as "GFOM") as given in Algorithm 2. The test matrices are California, soc-Epinions1, flickr and wiki-Talk. We define

    $$Spe_2=\frac{CPU_{\mathrm{GFOM}}-CPU_{\mathrm{GFOM\text{-}EXT}}}{CPU_{\mathrm{GFOM}}}\times100\%$$

    to record the speedup of the GFOM-Extrapolation algorithm with respect to the GFOM algorithm in terms of CPU time. Note that we run the GFOM procedure twice per cycle in the GFOM-Extrapolation algorithm.

    In this subsection, we show the choice of the number maxnum by analyzing the numerical behavior of the FOM-Extrapolation algorithm and the GFOM-Extrapolation algorithm for the wb-cs-stanford matrix. Table 5 lists the number of matrix-vector products for the FOM-Extrapolation algorithm and the GFOM-Extrapolation algorithm when the damping factor α=0.99, 0.993, 0.995, 0.997 and the number maxnum=2, 3, 4, 5, respectively. Figure 5 depicts the total CPU time of the FOM-Extrapolation algorithm and the GFOM-Extrapolation algorithm versus different number maxnum.

    Table 5.  The number of matrix-vector products of the FOM-Extrapolation algorithm and the GFOM-Extrapolation algorithm with different damping factor and number maxnum for the wb-cs-stanford matrix.
    FOM-EXT GFOM-EXT
    0.99 0.993 0.995 0.997 0.99 0.993 0.995 0.997
    maxnum=2 217 279 341 461 182 213 217 304
    maxnum=3 159 198 231 297 159 192 192 288
    maxnum=4 167 202 210 272 167 167 202 245
    maxnum=5 175 212 249 360 148 185 296 259

    Figure 5.  The total CPU time of the FOM-Extrapolation algorithm and the GFOM-Extrapolation algorithm versus different number maxnum for the wb-cs-stanford matrix.

    From Table 5, we observe that the number of matrix-vector products of the FOM-Extrapolation algorithm and the GFOM-Extrapolation algorithm first decreases and then increases as $maxnum$ increases in most cases. Meanwhile, from Figure 5, we find that the CPU times of the two algorithms follow a similar trend for different damping factors $\alpha$ in most cases, i.e., the CPU time is relatively small when $maxnum=4$. Hence, we choose $maxnum=4$ in the following numerical experiments.

    In this subsection, Table 6 lists numerical results of the GFOM algorithm, the Arnoldi-Chebyshev algorithm, the FOM-Extrapolation algorithm and the GFOM-Extrapolation algorithm for these four test matrices when α=0.99, 0.993, 0.995 and 0.997, respectively.

    Table 6.  Numerical comparisons of the GFOM algorithm, the Arnoldi-Chebyshev algorithm, the FOM-Extrapolation algorithm and the GFOM-Extrapolation algorithm for these four test matrices.
    GFOM AC FOM-EXT GFOM-EXT Spe2
    α Mv CPU Mv CPU Mv CPU Mv CPU
    California 0.99 99 0.0872 131 0.0760 104 0.1081 70 0.0723 17.09%
    0.993 117 0.0547 140 0.0371 122 0.0682 70 0.0479 12.43%
    0.995 117 0.0598 160 0.0415 122 0.0599 70 0.0430 28.09%
    0.997 117 0.0482 171 0.0405 122 0.0480 96 0.0385 20.12%
    soc-Epinions1 0.99 126 0.5695 151 0.3499 148 0.4959 122 0.4215 25.98%
    0.993 153 0.5443 151 0.3119 156 0.4356 130 0.4169 23.41%
    0.995 162 0.5183 171 0.3330 200 0.6044 156 0.4591 11.42%
    0.997 234 0.7594 291 0.5868 252 0.7025 174 0.5378 29.06%
    flickr 0.99 126 9.0932 160 6.8268 174 11.0006 100 5.9112 34.99%
    0.993 144 9.6250 200 20.2166 226 13.0329 118 7.1573 25.64%
    0.995 189 12.1234 231 23.4411 252 14.8641 144 8.4009 30.71%
    0.997 234 16.6600 280 29.0787 304 18.5578 166 10.1311 39.20%
    wiki-Talk 0.99 90 12.5757 151 33.3121 102 10.2323 52 5.9181 52.94%
    0.993 108 16.1044 191 41.3051 119 12.7449 70 9.1337 43.28%
    0.995 117 17.4529 231 55.9126 153 16.4464 78 9.7445 44.17%
    0.997 171 28.5547 351 81.1511 204 22.0147 96 12.3118 56.90%


    From Table 6, we find that,

    ● The GFOM-Extrapolation algorithm is superior to the GFOM algorithm in terms of the number of matrix-vector products and the CPU time for these four test matrices with different damping factor α. Particularly, the speedup is up to 56.90% for the wiki-Talk matrix with α=0.997. Hence, we can say that it is meaningful to use the extrapolation procedure based on Ritz values to improve the GFOM algorithm.

    ● The GFOM-Extrapolation algorithm needs more CPU time than the Arnoldi-Chebyshev algorithm in some cases. However, in terms of the number of matrix-vector products, the GFOM-Extrapolation algorithm is superior to the Arnoldi-Chebyshev algorithm for all the test matrices.

    ● The GFOM-Extrapolation algorithm works better than the FOM-Extrapolation algorithm in terms of the number of matrix-vector products and the CPU time for these test matrices. This suggests that the GFOM-Extrapolation algorithm converges faster than the FOM-Extrapolation algorithm, which again illustrates that using weighted inner products in the standard Arnoldi process is worthwhile.

    Figure 6 plots the convergence behavior of the GFOM algorithm, the Arnoldi-Chebyshev algorithm, the FOM-Extrapolation algorithm and the GFOM-Extrapolation algorithm for the wiki-Talk matrix when $\alpha$ = 0.99, 0.993, 0.995 and 0.997, respectively. It shows that the GFOM-Extrapolation algorithm converges faster than its counterparts. The other three test matrices exhibit similar convergence behaviors.

    Figure 6.  Convergence behavior of the GFOM algorithm, the Arnoldi-Chebyshev algorithm, the FOM-Extrapolation algorithm and the GFOM-Extrapolation algorithm for the wiki-Talk matrix.

    In summary, the above numerical experiments show that the GFOM-Power algorithm and the GFOM-Extrapolation algorithm are superior to their counterparts for computing PageRank when the damping factor $\alpha$ is close to 1.

    In this paper, a generalized FOM (GFOM) algorithm based on weighted inner products is discussed for computing PageRank, as given in Algorithm 2. With the aim of improving convergence, we developed the GFOM-Power algorithm and the GFOM-Extrapolation algorithm by using the power method and the extrapolation method based on Ritz values as acceleration techniques, respectively. Their specific implementations and convergence analyses can be found in Section 3. Numerical experiments in Section 4 illustrate the efficiency of the proposed algorithms. Furthermore, the proposed algorithms are worth trying on other Markov-chain problems, such as ProteinRank and CiteRank.

    The authors are grateful to the anonymous referees for their constructive comments and valuable suggestions, which greatly improved the original manuscript. This research is supported by the National Natural Science Foundation of China under grant 12101433, and the Two-Way Support Programs of Sichuan Agricultural University (Grant No. 1921993077).

    The authors declare there are no conflicts of interest.



    [1] E. Allen, Modeling with Itô stochastic differential equations, Dordrecht: Springer, 2007. http://dx.doi.org/10.1007/978-1-4020-5953-7
    [2] T. Boyle, R. Aygun, Kennesaw State University HPC facilities and resources, Digital Commons Training Materials, 10 (2021), 1–3.
    [3] F. Brauer, Mathematical epidemiology: past, present, and future, Infectious Disease Modelling, 2 (2017), 113–127. http://dx.doi.org/10.1016/j.idm.2017.02.001 doi: 10.1016/j.idm.2017.02.001
    [4] C. Browne, M. Wang, G. F. Webb, A stochastic model of nosocomial epidemics in hospital intensive care units, Electron. J. Qual. Theory Differ. Equ., 2017 (2017), 1–12. http://dx.doi.org/10.14232/ejqtde.2017.1.6 doi: 10.14232/ejqtde.2017.1.6
    [5] J. Cannarella, J. Spechler, Epidemiological modeling of online network dynamics, arXiv: 1401.4208. http://dx.doi.org/10.48550/arXiv.1401.4208
    [6] R. Chen, L. Kong, M. Wang, Stability analysis of an online social network model, Rocky Mountain J. Math., 53 (2023), 1019–1041. http://dx.doi.org/10.1216/rmj.2023.53.1019 doi: 10.1216/rmj.2023.53.1019
    [7] S. Chen, J. Shi, Z. Shuai, Y. Wu, Evolution of dispersal in advective patchy environments, J. Nonlinear Sci., 33 (2023), 40. http://dx.doi.org/10.1007/s00332-023-09899-w doi: 10.1007/s00332-023-09899-w
    [8] S. Cuomo, V. S. Di Cola, F. Giampaolo, G. Rozza, M. Raissi, F. Piccialli, Scientific machine learning through physics-informed neural networks: where we are and what's next, J. Sci. Comput., 92 (2022), 88. http://dx.doi.org/10.1007/s10915-022-01939-z doi: 10.1007/s10915-022-01939-z
    [9] D. Gao, X. Yuan, A hybrid Lagrangian-Eulerian model for vector-borne diseases, J. Math. Biol., 89 (2024), 16. http://dx.doi.org/10.1007/s00285-024-02109-5 doi: 10.1007/s00285-024-02109-5
    [10] J. R. Graef, S. Ho, L. Kong, M. Wang, A fractional differential equation model for bike share systems, J. Nonlinear Funct. Anal., 2019 (2019), 23. http://dx.doi.org/10.23952/jnfa.2019.23 doi: 10.23952/jnfa.2019.23
    [11] J. R. Graef, L. Kong, A. Ledoan, M. Wang, Stability analysis of a fractional online social network model, Math. Comput. Simulat., 178 (2020), 625–645. http://dx.doi.org/10.1016/j.matcom.2020.07.012 doi: 10.1016/j.matcom.2020.07.012
    [12] Z. Hao, S. Liu, Y. Zhang, C. Ying, Y. Feng, H. Su, et al., Physics-informed machine learning: a survey on problems, methods and applications, arXiv: 2211.08064. http://dx.doi.org/10.48550/arXiv.2211.08064
    [13] D. P. Kingma, J. Ba, Adam: a method for stochastic optimization, arXiv: 1412.6980. http://dx.doi.org/10.48550/arXiv.1412.6980
    [14] N. Kimmel, L. Kong, M. Wang, Modeling the dynamics of user adoption and abandonment in online social networks, Math. Method. Appl. Sci., in press. http://dx.doi.org/10.1002/mma.10413
    [15] D. Kincaid, W. Cheney, Numerical analysis: mathematics of scientific computing, 3 Eds., Providence: American Mathematical Society, 2002.
    [16] L. Kong, Modelling the dynamics of product adoption and abandonment, Proc. R. Soc. A., 480 (2024), 20240034. http://dx.doi.org/10.1098/rspa.2024.0034 doi: 10.1098/rspa.2024.0034
    [17] L. Kong, M. Wang, Deterministic and stochastic online social network models with varying population size, Dynamics of Continuous, Discrete and Impulsive Systems Series A: Mathematical Analysis, 30 (2023), 253–275.
    [18] L. Kong, M. Wang, Optimal control for an ordinary differential equation online social network model, Differ. Equat. Appl., 14 (2022), 205–214.
    [19] B. Ma, C. Li, J. Warner, Structured mathematical models to investigate the interactions between Plasmodium falciparum malaria parasites and host immune response, Math. Biosci., 310 (2019), 65–75. http://dx.doi.org/10.1016/j.mbs.2019.02.005 doi: 10.1016/j.mbs.2019.02.005
    [20] M. Mohsin, 10 social media statistics you need to know in 2024, Oberlo, 2024. Available from: https://www.oberlo.com/blog/social-media-marketing-statistics.
    [21] K. Nath, X. Meng, D. J. Smith, G. Karniadakis, Physics-informed neural networks for predicting gas flow dynamics and unknown parameters in diesel engines, Sci. Rep., 13 (2023), 13683. http://dx.doi.org/10.1038/s41598-023-39989-4 doi: 10.1038/s41598-023-39989-4
    [22] E. Ortiz-Ospina, The rise of social media, Our World In Data, 2019. Available from: https://ourworldindata.org/rise-of-social-media.
    [23] L. Wang, M. Wang, Stability and bifurcation analysis for an OSN model with delay, Advances in the Theory of Nonlinear Analysis and its Application, 7 (2023), 413–427. http://dx.doi.org/10.31197/atnaa.1152602 doi: 10.31197/atnaa.1152602
    [24] L. Wang, M. Wang, Bifurcation analysis for an OSN model with two delays, Mathematics, 12 (2024), 1321. http://dx.doi.org/10.3390/math12091321 doi: 10.3390/math12091321
    [25] G. Webb, X. E. Zhao, An epidemic model with infection age and vaccination age structure, Infect. Dis. Rep., 16 (2024), 35–64. http://dx.doi.org/10.3390/idr16010004 doi: 10.3390/idr16010004
    [26] B. Wong, Top social media statistics and trends of 2024, Forbes Media LLC., 2024. Available from: https://www.forbes.com/advisor/business/social-media-statistics/.
    [27] World bank open data, The World Bank Group, 2024. Available from: https://data.worldbank.org/indicator/SP.POP.TOTL.
    [28] N. Xiao, H. Xu, A. Morani, A. Shokri, H. Mukalazi, Exploring local and global stability of COVID-19 through numerical schemes, Sci. Rep., 14 (2024), 7960. http://dx.doi.org/10.1038/s41598-024-56938-x doi: 10.1038/s41598-024-56938-x