
A privacy preserving recommendation and fraud detection method based on graph convolution

  • As a typical deep learning technique, Graph Convolutional Networks (GCN) have been successfully applied to recommendation systems. Aiming at the risk of user privacy leakage and the problem of fraudulent data in recommendation systems, a Privacy Preserving Recommendation and Fraud Detection method based on Graph Convolution (PPRFD-GC) is proposed in this paper. The PPRFD-GC method adopts an encoder/decoder framework to generate a synthesized graph of rating information that satisfies edge differential privacy, and then applies a graph-based matrix completion technique for rating prediction on the synthesized graph. After calculating each user's mean squared error (MSE) of rating prediction and generating a dense representation of the user, a fraud detection classifier based on AdaBoost is presented to identify possible fraudsters. Finally, the loss functions of the rating prediction module and the fraud detection module are linearly combined into the overall loss function. Experimental analysis on two real datasets shows that the proposed method achieves good recommendation accuracy and resistance to fraud attacks while preserving users' link privacy.

    Citation: Yunfei Tan, Shuyu Li, Zehua Li. A privacy preserving recommendation and fraud detection method based on graph convolution[J]. Electronic Research Archive, 2023, 31(12): 7559-7577. doi: 10.3934/era.2023382
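    The overall objective described in the abstract, a linear combination of the rating-prediction loss and the fraud-detection loss, can be sketched as follows. This is only an illustrative sketch: the concrete loss choices (MSE and binary cross-entropy) and the weighting parameter `lam` are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the combined objective described in the abstract:
# rating-prediction loss + lam * fraud-detection loss.  All names here are
# illustrative placeholders, not the paper's code.
import numpy as np

def rating_loss(pred_ratings, true_ratings):
    # mean squared error of the rating-prediction module
    return float(np.mean((pred_ratings - true_ratings) ** 2))

def fraud_loss(pred_scores, labels, eps=1e-12):
    # binary cross-entropy of the fraud-detection classifier's scores
    p = np.clip(pred_scores, eps, 1.0 - eps)
    return float(-np.mean(labels * np.log(p) + (1 - labels) * np.log(1 - p)))

def overall_loss(pred_ratings, true_ratings, pred_scores, labels, lam=0.5):
    # linear combination of the two module losses, as stated in the abstract
    return rating_loss(pred_ratings, true_ratings) + lam * fraud_loss(pred_scores, labels)
```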




    Many physical phenomena can be described by the fractional convection-diffusion (FC) equation, which is related, inter alia, to dissipative and dispersive partial differential equations (PDEs). In this paper, we consider the FC equation

    $$\frac{\partial \phi(t,s)}{\partial t}=-\mu_0\,{}_{0}^{C}D_{t}^{1-\alpha_1}\phi(t,s)+{}_{0}^{C}D_{t}^{1-\alpha_2}\frac{\partial^{2}\phi(t,s)}{\partial s^{2}}+f(t,s),\quad 0\le s\le 1,\ 0\le t\le T, \tag{1.1}$$
    $$\phi(t,0)=0,\quad \phi(t,1)=0,\quad t\in[0,T], \tag{1.2}$$
    $$\phi(0,s)=\varphi(s),\quad s\in[0,1], \tag{1.3}$$

    where $\mu_0\in\mathbb{R}$ and $0<\alpha_1,\alpha_2<1$ are constants. There are several definitions of fractional derivatives, such as the Caputo type, the Riemann-Liouville type and so on. In the following, we adopt the Caputo-type time-fractional partial derivative

    $${}_{0}^{C}D_{t}^{\alpha}\phi(t)=\frac{1}{\Gamma(1-\alpha)}\int_{0}^{t}\frac{\phi'(\tau)}{(t-\tau)^{\alpha}}\,d\tau,\quad 0<\alpha<1, \tag{1.4}$$

    where $\Gamma(\cdot)$ is the Gamma function.
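    As a concrete check of definition (1.4), the following sketch (an illustration, not part of the paper's algorithm) evaluates the Caputo derivative of $\phi(t)=t^{2}$ by Gauss-Jacobi quadrature and compares it with the closed form $2t^{2-\alpha}/\Gamma(3-\alpha)$.

```python
# Numerical check of (1.4) for phi(t) = t^2: the Caputo derivative of order
# alpha in (0,1) is 2 t^(2-alpha) / Gamma(3-alpha).  Gauss-Jacobi quadrature
# handles the weakly singular kernel (t - tau)^(-alpha).
import numpy as np
from scipy.special import gamma, roots_jacobi

def caputo_t2(t, alpha, n=20):
    # (1/Gamma(1-alpha)) * int_0^t 2*tau * (t - tau)^(-alpha) dtau
    x, w = roots_jacobi(n, -alpha, 0.0)          # Jacobi weight (1-x)^(-alpha) on [-1, 1]
    tau = t * (x + 1.0) / 2.0                    # map nodes to [0, t]
    integral = (t / 2.0) ** (1.0 - alpha) * np.sum(w * 2.0 * tau)
    return integral / gamma(1.0 - alpha)

t, alpha = 0.8, 0.3
print(caputo_t2(t, alpha))                            # quadrature value
print(2.0 * t ** (2.0 - alpha) / gamma(3.0 - alpha))  # closed form, should agree
```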

    In [1], a scheme combining the finite difference method in the time direction and a spectral method in the space direction was proposed. In [2], two implicit compact difference schemes for the FC equation were studied; these schemes were proved to be stable, and the convergence order $O(\tau+h^{4})$ was given. In [3], a two-dimensional FC equation was solved by orthogonal spline collocation (OSC) methods for the space discretization and the finite difference method in time, and the scheme was proved to be unconditionally stable. In [4], the FC equation with two time Riemann-Liouville derivatives was solved by an explicit numerical method, and the accuracy, stability and convergence of this method were studied. In [5], the FC equation with two fractional time derivatives was considered, and two new implicit numerical methods were proposed; the stability and convergence of these methods were also investigated. In [6], a nonlinear FC equation was solved by a two-grid algorithm with the finite element (FE) method; a fully discrete two-grid FE scheme, second-order accurate in time, was constructed with the space direction approximated by finite elements. In [7], a discrete Crank-Nicolson (CN) finite element method, obtained by finite differences in time and finite elements in space, was used to approximate the FC equation; the stability and error estimate were analyzed in detail and the optimal convergence rate was obtained. In [8], the FC equation involving two integro-differential operators was solved by a semi-discrete finite difference approximation, and the scheme was proved to be unconditionally stable. In [9], numerical integration with reproducing kernel gradient smoothing integration was constructed. In [10], a recursive moving least squares (MLS) approximation was constructed.

    In the methods above, which solve the FC equation by finite difference or finite element approaches, the time direction and the space direction are treated separately. In the following, we present the barycentric rational interpolation method (BRIM) to discretize the time direction and the space direction of the FC equation at the same time. Lagrange interpolation was introduced to fit data by a polynomial; however, when the number of nodes n increases, the Runge phenomenon may appear, in which the interpolant deviates from the original function. Barycentric interpolation was developed in the 1960s to overcome this difficulty. In recent years, linear rational interpolation (LRI) was proposed by Floater [14,15,16], and the error of linear rational interpolation was also analyzed [11,12,13]. The barycentric interpolation collocation method (BICM) was developed by Wang et al. [25,26], and the algorithm of BICM has been used for linear/nonlinear problems [27,28]. In recent research, the Volterra integro-differential equation (VIDE) [17,21], heat equation (HE) [18], biharmonic equation (BE) [19], telegraph equation (TE) [20], generalized Poisson equation [22], fractional reaction-diffusion equation [23] and KPP equation [24] have been studied by the linear barycentric rational interpolation method (LBRIM), and the corresponding convergence rates have also been proved.

    In this paper, the BRIM is used to solve the FC equation. Since the fractional derivative is a nonlocal operator, a spectral-type (global) method is well suited to the FC equation, and the resulting coefficient matrix is full. The fractional derivative in the FC equation is transformed into a nonsingular integral by raising the order of the derivative of the density function by one. A new Gauss formula is constructed to compute it simply, and the matrix equation of the discrete FC equation is obtained by replacing the unknown function with the barycentric rational interpolation basis functions. Then, the convergence rate of the BRIM is proved.

    As there is a singularity in Eq (1.1), numerical methods applied directly cannot achieve high accuracy; fractional integration by parts is therefore applied to the fractional-derivative terms of (1.1) to overcome the difficulty of the singularity. We get

    $$\begin{aligned}{}_{0}^{C}D_{t}^{\alpha}\phi(t,s)&=\frac{1}{\Gamma(\xi-\alpha)}\int_{0}^{t}\frac{\partial^{\xi}\phi(\tau,s)}{\partial\tau^{\xi}}\frac{d\tau}{(t-\tau)^{\alpha+1-\xi}}\\&=\frac{1}{(\xi-\alpha)\Gamma(\xi-\alpha)}\left[\frac{\partial^{\xi}\phi(0,s)}{\partial t^{\xi}}t^{\xi-\alpha}+\int_{0}^{t}\frac{\partial^{\xi+1}\phi(\tau,s)}{\partial\tau^{\xi+1}}\frac{d\tau}{(t-\tau)^{\alpha-\xi}}\right]\\&=\Gamma_{\xi\alpha}\left[\frac{\partial^{\xi}\phi(0,s)}{\partial t^{\xi}}t^{\xi-\alpha}+\int_{0}^{t}\frac{\partial^{\xi+1}\phi(\tau,s)}{\partial\tau^{\xi+1}}\frac{d\tau}{(t-\tau)^{\alpha-\xi}}\right],\end{aligned} \tag{2.1}$$

    where $\Gamma_{\xi\alpha}=\dfrac{1}{(\xi-\alpha)\Gamma(\xi-\alpha)}$.
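    The identity (2.1) can be verified numerically. The sketch below is only an illustration under assumptions (it takes $\xi=1$ and the test function $\phi(t)=t^{2}+t$): it evaluates both the singular form and the integrated-by-parts form with Gauss-Jacobi quadrature, and the two values should coincide.

```python
# Check of identity (2.1) with xi = 1 and phi(t) = t^2 + t: the singular and
# the integrated-by-parts forms of the Caputo derivative should agree.
import numpy as np
from scipy.special import gamma, roots_jacobi

def gauss_weighted(n, power, t):
    # nodes/weights approximating int_0^t g(tau) * (t - tau)^power dtau
    x, w = roots_jacobi(n, power, 0.0)
    tau = t * (x + 1.0) / 2.0
    return tau, (t / 2.0) ** (power + 1.0) * w

t, alpha, n = 0.7, 0.4, 20
dphi  = lambda tau: 2.0 * tau + 1.0      # phi'
d2phi = lambda tau: 2.0 + 0.0 * tau      # phi''

tau1, w1 = gauss_weighted(n, -alpha, t)          # singular kernel (t - tau)^(-alpha)
lhs = np.sum(w1 * dphi(tau1)) / gamma(1.0 - alpha)

tau2, w2 = gauss_weighted(n, 1.0 - alpha, t)     # regular kernel (t - tau)^(1 - alpha)
gamma_xa = 1.0 / ((1.0 - alpha) * gamma(1.0 - alpha))
rhs = gamma_xa * (dphi(0.0) * t ** (1.0 - alpha) + np.sum(w2 * d2phi(tau2)))

print(lhs, rhs)   # the two values should agree to quadrature accuracy
```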

    Combining (2.1) and (1.1), we have

    $$\frac{\partial\phi}{\partial t}+\mu_0\Gamma_{\xi\alpha_1}\left[\frac{\partial^{\xi}\phi(0,s)}{\partial t^{\xi}}t^{\xi-\alpha_1}+\int_{0}^{t}\frac{\partial^{\xi+1}\phi(\tau,s)}{\partial\tau^{\xi+1}}\frac{d\tau}{(t-\tau)^{\alpha_1-\xi}}\right]=\Gamma_{\xi\alpha_2}\left[\frac{\partial^{\xi+2}\phi(0,s)}{\partial t^{\xi}\partial s^{2}}t^{\xi-\alpha_2}+\int_{0}^{t}\frac{\partial^{\xi+3}\phi(\tau,s)}{\partial\tau^{\xi+1}\partial s^{2}}\frac{d\tau}{(t-\tau)^{\alpha_2-\xi}}\right]+f(t,s). \tag{2.2}$$

    In the following, we give the discrete formula of the FC equation and obtain the matrix equation of the BRIM.

    Let

    $$\phi(t,s)=\sum_{j=1}^{m}R_j(t)\phi_j(s), \tag{2.3}$$

    where

    $$\phi(t_i,s)=\phi_i(s),\quad i=1,2,\dots,m,$$

    and

    $$R_j(t)=\frac{\dfrac{\lambda_j}{t-t_j}}{\displaystyle\sum_{k=1}^{m}\dfrac{\lambda_k}{t-t_k}}, \tag{2.4}$$

    where

    $$\lambda_k=\sum_{j\in J_k}(-1)^{j}\prod_{\substack{i=j\\ i\neq k}}^{j+d_t}\frac{1}{t_k-t_i},\qquad J_k=\left\{j\in\{0,1,\dots,m-d_t\}:\ k-d_t\le j\le k\right\}$$

    is the basis function [18]. Substituting (2.3) into Eq (2.2) yields

    $$\sum_{j=1}^{m}R_j'(t)\phi_j(s)+\mu_0\Gamma_{\xi\alpha_1}\sum_{j=1}^{m}\left[R_j^{(\xi)}(0)\phi_j(s)t^{\xi-\alpha_1}+\int_{0}^{t}\phi_j(s)R_j^{(\xi+1)}(\tau)\frac{d\tau}{(t-\tau)^{\alpha_1-\xi}}\right]=\Gamma_{\xi\alpha_2}\sum_{j=1}^{m}\left[R_j^{(\xi)}(0)\phi_j^{(2)}(s)t^{\xi-\alpha_2}+\int_{0}^{t}\phi_j^{(2)}(s)R_j^{(\xi+1)}(\tau)\frac{d\tau}{(t-\tau)^{\alpha_2-\xi}}\right]+f(t,s). \tag{2.5}$$

    By taking the uniform nodes $0=t_1<t_2<\cdots<t_m=T$ and $a=s_1<s_2<\cdots<s_n=b$ with $h_t=T/m$, $h_s=(b-a)/n$, or the nonuniform (second-kind Chebyshev) nodes $s=\cos((0{:}m)\pi/m)$, $t=\cos((0{:}n)\pi/n)$, we get

    $$\sum_{j=1}^{m}R_j'(t_i)\phi_j(s)+\mu_0\Gamma_{\xi\alpha_1}\sum_{j=1}^{m}\left[R_j^{(\xi)}(0)\phi_j(s)t_i^{\xi-\alpha_1}+\int_{0}^{t_i}\phi_j(s)R_j^{(\xi+1)}(\tau)\frac{d\tau}{(t_i-\tau)^{\alpha_1-\xi}}\right]=\Gamma_{\xi\alpha_2}\sum_{j=1}^{m}\left[R_j^{(\xi)}(0)\phi_j^{(2)}(s)t_i^{\xi-\alpha_2}+\int_{0}^{t_i}\phi_j^{(2)}(s)R_j^{(\xi+1)}(\tau)\frac{d\tau}{(t_i-\tau)^{\alpha_2-\xi}}\right]+f(t_i,s), \tag{2.6}$$

    by noting that $R_j(t_i)=\delta_{ij}$ and $R_j'(t_i)=R^{(1,0)}_{ij}$, where $R^{(1,0)}_{ij}$ denotes the first-order differentiation matrix of the barycentric interpolation. Equation (2.6) can be written as

    $$\sum_{j=1}^{m}R^{(1,0)}_{ij}\phi_j(s)+\mu_0\Gamma_{\xi\alpha_1}\sum_{j=1}^{m}\left[R_j^{(\xi)}(0)\phi_j(s)t_i^{\xi-\alpha_1}+\int_{0}^{t_i}\phi_j(s)R_j^{(\xi+1)}(\tau)\frac{d\tau}{(t_i-\tau)^{\alpha_1-\xi}}\right]=\Gamma_{\xi\alpha_2}\sum_{j=1}^{m}\left[R_j^{(\xi)}(0)\phi_j^{(2)}(s)t_i^{\xi-\alpha_2}+\int_{0}^{t_i}\phi_j^{(2)}(s)R_j^{(\xi+1)}(\tau)\frac{d\tau}{(t_i-\tau)^{\alpha_2-\xi}}\right]+f(t_i,s). \tag{2.7}$$

    Similarly to the discretization in $t$, for $s$ we get

    $$\phi_i(s)=\sum_{k=1}^{n}R_k(s)\phi_{ik}, \tag{2.8}$$

    where $\phi_i(s_j)=\phi(t_i,s_j)=\phi_{ij}$, $i=1,\dots,m$, $j=1,\dots,n$, and

    $$R_i(s)=\frac{\dfrac{w_i}{s-s_i}}{\displaystyle\sum_{k=1}^{n}\dfrac{w_k}{s-s_k}}, \tag{2.9}$$

    where

    $$w_i=\sum_{j\in J_i}(-1)^{j}\prod_{\substack{k=j\\ k\neq i}}^{j+d_s}\frac{1}{s_i-s_k},\qquad J_i=\left\{j\in\{0,1,\dots,n-d_s\}:\ i-d_s\le j\le i\right\},$$

    is the basis function [18].
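    A minimal sketch of the basis construction in (2.4) and (2.9) is given below: the Floater-Hormann weights and the evaluation of the barycentric rational interpolant, written with 0-based indexing over the nodes. The function and variable names are illustrative, not the authors' code.

```python
# Floater-Hormann weights and barycentric rational interpolation, as in
# (2.4)/(2.9), for nodes x_0, ..., x_l and blending parameter d.
import numpy as np

def fh_weights(x, d):
    # Floater-Hormann weights lambda_k
    l = len(x) - 1
    lam = np.zeros(l + 1)
    for k in range(l + 1):
        for j in range(max(0, k - d), min(k, l - d) + 1):
            prod = 1.0
            for i in range(j, j + d + 1):
                if i != k:
                    prod *= 1.0 / (x[k] - x[i])
            lam[k] += (-1.0) ** j * prod
    return lam

def rational_interp(x, y, lam, xx):
    # evaluate the barycentric rational interpolant at the points xx
    xx = np.atleast_1d(np.asarray(xx, dtype=float))
    num = np.zeros_like(xx)
    den = np.zeros_like(xx)
    for k in range(len(x)):
        diff = xx - x[k]
        diff[diff == 0.0] = 1e-30     # crude guard when xx hits a node
        num += lam[k] * y[k] / diff
        den += lam[k] / diff
    return num / den

x = np.linspace(0.0, 1.0, 11)
lam = fh_weights(x, d=3)
y = np.sin(np.pi * x)
xx = np.linspace(0.0, 1.0, 101)
print(np.max(np.abs(rational_interp(x, y, lam, xx) - np.sin(np.pi * xx))))
```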

    Substituting (2.8) into Eq (2.7), we get

    $$\sum_{j=1}^{m}\sum_{k=1}^{n}R^{(1,0)}_{ij}R_k(s)\phi_{ik}+\mu_0\Gamma_{\xi\alpha_1}\sum_{j=1}^{m}\sum_{k=1}^{n}\left[R_j^{(\xi)}(0)R_k(s)t_i^{\xi-\alpha_1}+\int_{0}^{t_i}R_k(s)R_j^{(\xi+1)}(\tau)\frac{d\tau}{(t_i-\tau)^{\alpha_1-\xi}}\right]\phi_{ik}=\Gamma_{\xi\alpha_2}\sum_{j=1}^{m}\sum_{k=1}^{n}\left[R_j^{(\xi)}(0)R_k^{(2)}(s)t_i^{\xi-\alpha_2}+\int_{0}^{t_i}R_k^{(2)}(s)R_j^{(\xi+1)}(\tau)\frac{d\tau}{(t_i-\tau)^{\alpha_2-\xi}}\right]\phi_{ik}+f(t_i,s). \tag{2.10}$$

    By taking the mesh points $s_1,s_2,\dots,s_n$ as collocation points, we get

    $$\sum_{j=1}^{m}\sum_{k=1}^{n}R^{(1,0)}_{ij}R_k(s_l)\phi_{ik}+\mu_0\Gamma_{\xi\alpha_1}\sum_{j=1}^{m}\sum_{k=1}^{n}\left[R_j^{(\xi)}(0)R_k(s_l)t_i^{\xi-\alpha_1}+\int_{0}^{t_i}R_k(s_l)R_j^{(\xi+1)}(\tau)\frac{d\tau}{(t_i-\tau)^{\alpha_1-\xi}}\right]\phi_{ik}=\Gamma_{\xi\alpha_2}\sum_{j=1}^{m}\sum_{k=1}^{n}\left[R_j^{(\xi)}(0)R_k^{(2)}(s_l)t_i^{\xi-\alpha_2}+\int_{0}^{t_i}R_k^{(2)}(s_l)R_j^{(\xi+1)}(\tau)\frac{d\tau}{(t_i-\tau)^{\alpha_2-\xi}}\right]\phi_{ik}+f(t_i,s_l). \tag{2.11}$$

    By noting that $R_k(s_l)=\delta_{kl}$ and $R_k''(s_l)=R^{(0,2)}_{lk}$, where $R^{(0,2)}_{lk}$ denotes the second-order differentiation matrix of the barycentric interpolation, Eq (2.11) becomes

    $$\sum_{j=1}^{m}\sum_{k=1}^{n}R^{(1,0)}_{ij}\delta_{kl}\phi_{ik}+\mu_0\Gamma_{\xi\alpha_1}\sum_{j=1}^{m}\sum_{k=1}^{n}\left[R_j^{(\xi)}(0)\delta_{kl}t_i^{\xi-\alpha_1}+\delta_{kl}\int_{0}^{t_i}R_j^{(\xi+1)}(\tau)\frac{d\tau}{(t_i-\tau)^{\alpha_1-\xi}}\right]\phi_{ik}=\Gamma_{\xi\alpha_2}\sum_{j=1}^{m}\sum_{k=1}^{n}\left[R_j^{(\xi)}(0)R^{(0,2)}_{lk}t_i^{\xi-\alpha_2}+R^{(0,2)}_{lk}\int_{0}^{t_i}R_j^{(\xi+1)}(\tau)\frac{d\tau}{(t_i-\tau)^{\alpha_2-\xi}}\right]\phi_{ik}+f(t_i,s_l), \tag{2.12}$$

    where

    $$R_k(\tau)=\frac{\dfrac{\lambda_k}{\tau-\tau_k}}{\displaystyle\sum_{j=0}^{n}\dfrac{\lambda_j}{\tau-\tau_j}}$$

    and

    $$\begin{cases}R_i'(\tau)=R_i(\tau)\left[-\dfrac{1}{\tau-\tau_i}+\dfrac{\displaystyle\sum_{k=0}^{l}\dfrac{\lambda_k}{(\tau-\tau_k)^{2}}}{\displaystyle\sum_{k=0}^{l}\dfrac{\lambda_k}{\tau-\tau_k}}\right],\\[5mm] R_i^{(\xi+1)}(\tau)=\left[R_i^{(\xi)}(\tau)\right]',\quad \xi\in\mathbb{N}^{+}.\end{cases}$$
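    The entries $R^{(1,0)}_{ij}=R_j'(t_i)$ used above form the first-order differentiation matrix of the barycentric interpolant. A standard way to build it is the Schneider-Werner formula; the sketch below is an illustration, not necessarily the authors' implementation, and higher-order matrices can be obtained from the recursion above.

```python
# First-order barycentric differentiation matrix (Schneider-Werner):
# D[i, j] = (lam[j]/lam[i]) / (x[i] - x[j]) for i != j,
# D[i, i] = -sum_{j != i} D[i, j]  (rows of a differentiation matrix sum to 0).
import numpy as np

def diff_matrix(x, lam):
    n = len(x)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                D[i, j] = (lam[j] / lam[i]) / (x[i] - x[j])
        D[i, i] = -np.sum(D[i, :])
    return D
```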

    The integral term of (2.12) can be written as

    $$\int_{0}^{t_i}R_j^{(\xi+1)}(\tau)\frac{d\tau}{(t_i-\tau)^{\alpha_1-\xi}}=Q^{\alpha_1}_j(t_i)=Q^{\alpha_1}_{ji}, \tag{2.13}$$
    $$\int_{0}^{t_i}R_j^{(\xi+1)}(\tau)\frac{d\tau}{(t_i-\tau)^{\alpha_2-\xi}}=Q^{\alpha_2}_j(t_i)=Q^{\alpha_2}_{ji}, \tag{2.14}$$

    then we get

    $$\sum_{j=1}^{m}\sum_{k=1}^{n}R^{(1,0)}_{ij}\delta_{kl}\phi_{ik}+\mu_0\Gamma_{\xi\alpha_1}\sum_{j=1}^{m}\sum_{k=1}^{n}\left[R_j^{(\xi)}(0)\delta_{kl}t_i^{\xi-\alpha_1}+\delta_{kl}Q^{\alpha_1}_j(t_i)\right]\phi_{ik}=\Gamma_{\xi\alpha_2}\sum_{j=1}^{m}\sum_{k=1}^{n}\left[R_j^{(\xi)}(0)R^{(0,2)}_{lk}t_i^{\xi-\alpha_2}+R^{(0,2)}_{lk}Q^{\alpha_2}_j(t_i)\right]\phi_{ik}+f(t_i,s_l). \tag{2.15}$$

    The integrals in (2.13) and (2.14) are calculated by

    $$Q^{\alpha_1}_j(t_i)=\int_{0}^{t_i}R_j^{(\xi+1)}(\tau)\frac{d\tau}{(t_i-\tau)^{\alpha_1-\xi}}:=\sum_{\theta=1}^{g}R_j^{(\xi+1)}(\tau_{\theta,\alpha_1 i})\,G_{\theta,\alpha_1 i} \tag{2.16}$$

    and

    $$Q^{\alpha_2}_j(t_i)=\int_{0}^{t_i}R_j^{(\xi+1)}(\tau)\frac{d\tau}{(t_i-\tau)^{\alpha_2-\xi}}:=\sum_{\theta=1}^{g}R_j^{(\xi+1)}(\tau_{\theta,\alpha_2 i})\,G_{\theta,\alpha_2 i}, \tag{2.17}$$

    where $G_{\theta,\alpha_1 i}$ and $G_{\theta,\alpha_2 i}$ are the Gauss weights and $\tau_{\theta,\alpha_1 i}$ and $\tau_{\theta,\alpha_2 i}$ are the Gauss points associated with the weight functions $(t_i-\tau)^{\xi-\alpha_1}$ and $(t_i-\tau)^{\xi-\alpha_2}$, respectively; see reference [22].
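    One standard construction of such weighted Gauss points and weights maps a Gauss-Jacobi rule from $[-1,1]$ to $[0,t_i]$, since $(t_i-\tau)^{\xi-\alpha}$ then becomes exactly the Jacobi weight $(1-x)^{\xi-\alpha}$. The sketch below assumes this Gauss-Jacobi route (the paper refers to [22] for the details), with a placeholder callable for $R_j^{(\xi+1)}$.

```python
# Weighted Gauss rule for int_0^{t_i} g(tau) (t_i - tau)^(xi - alpha) dtau,
# used to approximate Q_j^alpha(t_i) in (2.16)-(2.17).
import numpy as np
from scipy.special import roots_jacobi

def weighted_gauss_rule(ti, xi, alpha, g_points=10):
    x, w = roots_jacobi(g_points, xi - alpha, 0.0)
    tau = ti * (x + 1.0) / 2.0                     # Gauss points on [0, t_i]
    G = (ti / 2.0) ** (xi - alpha + 1.0) * w       # Gauss weights including the kernel
    return tau, G

def Q(R_deriv, ti, xi, alpha, g_points=10):
    # R_deriv is a placeholder callable for R_j^(xi+1)(tau)
    tau, G = weighted_gauss_rule(ti, xi, alpha, g_points)
    return float(np.sum(G * R_deriv(tau)))
```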

    The equation system (2.15) can be written as

    $$\left[R^{(1,0)}\otimes I_n+\Gamma_{\xi\alpha_2}\left(M^{(\xi 0)}_1\otimes I_n+I_m\otimes Q^{\alpha_2}\right)\right]\begin{bmatrix}\phi_{11}\\ \vdots\\ \phi_{1n}\\ \vdots\\ \phi_{m1}\\ \vdots\\ \phi_{mn}\end{bmatrix}-\left[\mu_0\Gamma_{\xi\alpha_1}\left(M^{(\xi 0)}_1\otimes I_n+I_m\otimes Q^{\alpha_1}\right)\right]\begin{bmatrix}\phi_{11}\\ \vdots\\ \phi_{1n}\\ \vdots\\ \phi_{m1}\\ \vdots\\ \phi_{mn}\end{bmatrix}=\begin{bmatrix}f_{11}\\ \vdots\\ f_{1n}\\ \vdots\\ f_{m1}\\ \vdots\\ f_{mn}\end{bmatrix}, \tag{2.18}$$

    where $I_m$ and $I_n$ are identity matrices and $\otimes$ denotes the Kronecker product.

    Then Eq (2.18) can be written as

    $$\left[R^{(1,0)}\otimes I_n+\Gamma_{\xi\alpha_2}\left(M^{(\xi 0)}_1\otimes I_n+I_m\otimes Q^{\alpha_2}\right)-\mu_0\Gamma_{\xi\alpha_1}\left(M^{(\xi 0)}_1\otimes I_n+I_m\otimes Q^{\alpha_1}\right)\right]\Phi=F \tag{2.19}$$

    and

    $$R\Phi=F, \tag{2.20}$$

    with $R=R^{(1,0)}\otimes I_n+\Gamma_{\xi\alpha_2}\left(M^{(\xi 0)}_1\otimes I_n+I_m\otimes Q^{\alpha_2}\right)-\mu_0\Gamma_{\xi\alpha_1}\left(M^{(\xi 0)}_1\otimes I_n+I_m\otimes Q^{\alpha_1}\right)$ and $\Phi=[\phi_{11},\dots,\phi_{1n},\dots,\phi_{m1},\dots,\phi_{mn}]^{T}$, $F=[f_{11},\dots,f_{1n},\dots,f_{m1},\dots,f_{mn}]^{T}$.

    The boundary conditions can be imposed by the substitution method, the additional method or the elimination method; see [26]. We adopt the substitution method and the additional method to deal with the boundary conditions.
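    A schematic sketch of the substitution method is given below: after the system (2.20) has been assembled (for example with `numpy.kron`, following the Kronecker structure of (2.18) and (2.19)), the rows of $R$ belonging to boundary collocation points are replaced by identity rows and the corresponding entries of $F$ by the prescribed boundary values. The index set and values below are placeholders, not the authors' code.

```python
# Substitution method for Dirichlet boundary conditions in R * Phi = F.
import numpy as np

def apply_dirichlet(R, F, boundary, g):
    # boundary: indices of boundary collocation points; g: prescribed values
    R = R.copy(); F = F.copy()
    for p, val in zip(boundary, g):
        R[p, :] = 0.0
        R[p, p] = 1.0          # substituted equation: phi_p = val
        F[p] = val
    return R, F

# usage sketch (boundary_idx is a placeholder index set):
# R, F = apply_dirichlet(R, F, boundary_idx, np.zeros(len(boundary_idx)))
# Phi = np.linalg.solve(R, F)
```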

    In this part, the error estimate of the FC equation is given, with $r_n(s)=\sum_{i=1}^{n}r_i(s)\phi_i$ replacing $\phi(s)$, where $r_i(s)$ is defined as in (2.9) and $\phi_i=\phi(s_i)$. We also define

    $$e(s):=\phi(s)-r_n(s)=(s-s_i)\cdots(s-s_{i+d})\,\phi[s_i,s_{i+1},\dots,s_{i+d},s], \tag{3.1}$$

    see reference [18].

    Then we have

    Lemma 1. Let $e(s)$ be defined by (3.1) and $\phi(s)\in C^{d+2}[a,b]$, $d=1,2,\dots$; then

    $$|e^{(k)}(s)|\le Ch^{d-k+1},\quad k=0,1,\dots. \tag{3.2}$$

    For the FC equation, the rational interpolation function of $\phi(t,s)$ is defined as $r_{mn}(t,s)$:

    $$r_{mn}(t,s)=\frac{\displaystyle\sum_{i=1}^{m+d_s}\sum_{j=1}^{n+d_t}\frac{w_{i,j}}{(s-s_i)(t-t_j)}\phi_{i,j}}{\displaystyle\sum_{i=1}^{m+d_s}\sum_{j=1}^{n+d_t}\frac{w_{i,j}}{(s-s_i)(t-t_j)}}, \tag{3.3}$$

    where

    $$w_{i,j}=(-1)^{i-d_s+j-d_t}\sum_{k_1\in J_i}\ \prod_{\substack{h_1=k_1\\ h_1\neq i}}^{k_1+d_s}\frac{1}{|s_i-s_{h_1}|}\ \sum_{k_2\in J_j}\ \prod_{\substack{h_2=k_2\\ h_2\neq j}}^{k_2+d_t}\frac{1}{|t_j-t_{h_2}|}. \tag{3.4}$$

    We define $e(t,s)$ to be the error of $\phi(t,s)$:

    $$\begin{aligned}e(t,s):=\phi(t,s)-r_{mn}(t,s)=&\,(s-s_i)\cdots(s-s_{i+d_s})\,\phi[s_i,s_{i+1},\dots,s_{i+d_s},s;t]\\&+(t-t_j)\cdots(t-t_{j+d_t})\,\phi[s;t_j,t_{j+1},\dots,t_{j+d_t},t]\\&-(s-s_i)\cdots(s-s_{i+d_s})(t-t_j)\cdots(t-t_{j+d_t})\,\phi[s_i,s_{i+1},\dots,s_{i+d_s},s;t_j,t_{j+1},\dots,t_{j+d_t},t].\end{aligned} \tag{3.5}$$

    By an analysis similar to that of Lemma 1, we have

    Theorem 1. Let $e(t,s)$ be defined as in (3.5) and $\phi(t,s)\in C^{d_s+2}[a,b]\times C^{d_t+2}[0,T]$; then we have

    $$|e^{(k_1,k_2)}(s,t)|\le C\left(h_s^{d_s-k_1+1}+h_t^{d_t-k_2+1}\right),\quad k_1,k_2=0,1,\dots. \tag{3.6}$$

    Let $\phi(t_m,s_n)$ be the approximate function of $\phi(t,s)$ and $L$ be a bounded operator; then there holds

    $$L\phi(t_m,s_n)=f(t_m,s_n) \tag{3.7}$$

    and

    $$\lim_{m,n\to\infty}L\phi(t_m,s_n)=L\phi(t,s). \tag{3.8}$$

    Then we get

    Theorem 2. Let $\phi(t_m,s_n)$ satisfy $L\phi(t_m,s_n)=f(t_m,s_n)$ with $L$ defined as in (3.7); then

    $$|\phi(t,s)-\phi(t_m,s_n)|\le C\left(h^{d_s-1}+\tau^{d_t-1}\right).$$

    Proof. By

    $$\begin{aligned}L\phi(t,s)-L\phi(t_m,s_n)=&\frac{\partial\phi(t,s)}{\partial t}-{}_{0}^{C}D_{t}^{1-\alpha_1}\frac{\partial^{2}\phi(t,s)}{\partial s^{2}}+\mu_0\,{}_{0}^{C}D_{t}^{1-\alpha_2}\phi(t,s)-f(t,s)\\&-\left[\frac{\partial\phi(t_m,s_n)}{\partial t}-{}_{0}^{C}D_{t}^{1-\alpha_1}\frac{\partial^{2}\phi(t_m,s_n)}{\partial s^{2}}+\mu_0\,{}_{0}^{C}D_{t}^{1-\alpha_2}\phi(t_m,s_n)-f(t_m,s_n)\right]\\=&\frac{\partial\phi}{\partial t}-\frac{\partial\phi}{\partial t}(t_m,s_n)-\left[{}_{0}^{C}D_{t}^{1-\alpha_1}\frac{\partial^{2}\phi}{\partial s^{2}}-{}_{0}^{C}D_{t}^{1-\alpha_1}\frac{\partial^{2}\phi}{\partial s^{2}}(t_m,s_n)\right]\\&+\mu_0\left[{}_{0}^{C}D_{t}^{1-\alpha_2}\phi(t,s)-{}_{0}^{C}D_{t}^{1-\alpha_2}\phi(t_m,s_n)\right]-\left[f(t,s)-f(t_m,s_n)\right]\\:=&\,E_1(t,s)+E_2(t,s)+E_3(t,s)+E_4(t,s),\end{aligned} \tag{3.9}$$

    here

    $$E_1(t,s)=\frac{\partial\phi}{\partial t}-\frac{\partial\phi}{\partial t}(t_m,s_n),$$
    $$E_2(t,s)={}_{0}^{C}D_{t}^{1-\alpha_1}\frac{\partial^{2}\phi}{\partial s^{2}}-{}_{0}^{C}D_{t}^{1-\alpha_1}\frac{\partial^{2}\phi}{\partial s^{2}}(t_m,s_n),$$
    $$E_3(t,s)=\mu_0\left[{}_{0}^{C}D_{t}^{1-\alpha_2}\phi(t,s)-{}_{0}^{C}D_{t}^{1-\alpha_2}\phi(t_m,s_n)\right],$$
    $$E_4(t,s)=f(t,s)-f(t_m,s_n).$$

    As for E1(t,s), we get

    $$\begin{aligned}E_1(t,s)&=\left|\frac{\partial\phi}{\partial t}(t,s)-\frac{\partial\phi}{\partial t}(t_m,s_n)\right|=\left|\frac{\partial\phi}{\partial t}(t,s)-\frac{\partial\phi}{\partial t}(t_m,s)+\frac{\partial\phi}{\partial t}(t_m,s)-\frac{\partial\phi}{\partial t}(t_m,s_n)\right|\\&\le\left|\frac{\partial\phi}{\partial t}(t,s)-\frac{\partial\phi}{\partial t}(t_m,s)\right|+\left|\frac{\partial\phi}{\partial t}(t_m,s)-\frac{\partial\phi}{\partial t}(t_m,s_n)\right|\\&=\left|\frac{\displaystyle\sum_{i=1}^{m-d_s}(-1)^{i}\frac{\partial\phi}{\partial t}[s_i,s_{i+1},\dots,s_{i+d_s},s_n,t]}{\displaystyle\sum_{i=1}^{m-d_s}\lambda_i(s)}\right|+\left|\frac{\displaystyle\sum_{j=1}^{n-d_t}(-1)^{j}\frac{\partial\phi}{\partial t}[t_j,t_{j+1},\dots,t_{j+d_t},s_n,t_m]}{\displaystyle\sum_{j=1}^{n-d_t}\lambda_j(t)}\right|\\&=\left|\frac{\partial e}{\partial t}(t_m,s)\right|+\left|\frac{\partial e}{\partial t}(t_m,s_n)\right|,\end{aligned}$$

    we get

    $$|E_1(t,s)|\le C\left(h^{d_s}+\tau^{d_t}\right). \tag{3.10}$$

    As for $E_2(t,s)$, we have

    $$\begin{aligned}E_2(t,s)&={}_{0}^{C}D_{t}^{1-\alpha_1}\frac{\partial^{2}\phi}{\partial s^{2}}-{}_{0}^{C}D_{t}^{1-\alpha_1}\frac{\partial^{2}\phi}{\partial s^{2}}(t_m,s_n)\\&=\Gamma_{\xi\alpha_2}\left[\frac{\partial^{\xi+2}\phi(0,s)}{\partial t^{\xi}\partial s^{2}}s^{\xi-\alpha_2}+\int_{0}^{t}\frac{\partial^{\xi+3}\phi(\tau,s)}{\partial\tau^{\xi+1}\partial s^{2}}\frac{d\tau}{(t-\tau)^{\alpha_2-\xi}}\right]-\Gamma_{\xi\alpha_2}\left[\frac{\partial^{\xi+2}\phi(0,s_n)}{\partial t^{\xi}\partial s^{2}}s_n^{\xi-\alpha_2}+\int_{0}^{t_m}\frac{\partial^{\xi+3}\phi(\tau,s_n)}{\partial\tau^{\xi+1}\partial s^{2}}\frac{d\tau}{(t_m-\tau)^{\alpha_2-\xi}}\right]\\&=\Gamma_{\xi\alpha_2}\left[\frac{\partial^{\xi+2}\phi(0,s)}{\partial t^{\xi}\partial s^{2}}s^{\xi-\alpha_2}-\frac{\partial^{\xi+2}\phi(0,s_n)}{\partial t^{\xi}\partial s^{2}}s_n^{\xi-\alpha_2}\right]+\Gamma_{\xi\alpha_2}\left[\int_{0}^{t}\frac{\partial^{\xi+3}\phi(\tau,s)}{\partial\tau^{\xi+1}\partial s^{2}}\frac{d\tau}{(t-\tau)^{\alpha_2-\xi}}-\int_{0}^{t_m}\frac{\partial^{\xi+3}\phi(\tau,s_n)}{\partial\tau^{\xi+1}\partial s^{2}}\frac{d\tau}{(t_m-\tau)^{\alpha_2-\xi}}\right]\end{aligned} \tag{3.11}$$

    and

    $$\begin{aligned}|E_2(t,s)|&\le\left|\Gamma_{\xi\alpha_2}\left[\frac{\partial^{\xi+2}\phi(0,s)}{\partial t^{\xi}\partial s^{2}}s^{\xi-\alpha_2}-\frac{\partial^{\xi+2}\phi(0,s_n)}{\partial t^{\xi}\partial s^{2}}s_n^{\xi-\alpha_2}\right]\right|+\left|\Gamma_{\xi\alpha_2}\left[\int_{0}^{t}\frac{\partial^{\xi+3}\phi(\tau,s)}{\partial\tau^{\xi+1}\partial s^{2}}\frac{d\tau}{(t-\tau)^{\alpha_2-\xi}}-\int_{0}^{t_m}\frac{\partial^{\xi+3}\phi(\tau,s_n)}{\partial\tau^{\xi+1}\partial s^{2}}\frac{d\tau}{(t_m-\tau)^{\alpha_2-\xi}}\right]\right|\\&\le\left|\Gamma_{\xi\alpha_2}\right|\left|\frac{\partial^{\xi+2}\phi}{\partial t^{\xi}\partial s^{2}}(0,s)-\frac{\partial^{\xi+2}\phi}{\partial t^{\xi}\partial s^{2}}(0,s_n)\right|+\left|\Gamma_{\xi\alpha_2}\right|\left|\frac{\partial^{\xi+3}\phi}{\partial t^{\xi+1}\partial s^{2}}(t,s)-\frac{\partial^{\xi+3}\phi}{\partial t^{\xi+1}\partial s^{2}}(t_m,s_n)\right|\\&:=E_{21}(t,s)+E_{22}(t,s),\end{aligned} \tag{3.12}$$

    where

    $$E_{21}(t,s)=\left|\Gamma_{\xi\alpha_2}\right|\left|\frac{\partial^{\xi+2}\phi}{\partial t^{\xi}\partial s^{2}}(0,s)-\frac{\partial^{\xi+2}\phi}{\partial t^{\xi}\partial s^{2}}(0,s_n)\right|,\qquad E_{22}(t,s)=\left|\Gamma_{\xi\alpha_2}\right|\left|\frac{\partial^{\xi+3}\phi}{\partial t^{\xi+1}\partial s^{2}}(t,s)-\frac{\partial^{\xi+3}\phi}{\partial t^{\xi+1}\partial s^{2}}(t_m,s_n)\right|. \tag{3.13}$$

    Now we estimate $E_{21}(t,s)$ and $E_{22}(t,s)$ term by term; for the second term we have

    $$\begin{aligned}E_{22}(t,s)&=\left|\Gamma_{\xi\alpha_2}\right|\left|\frac{\partial^{\xi+3}\phi}{\partial t^{\xi+1}\partial s^{2}}(t,s)-\frac{\partial^{\xi+3}\phi}{\partial t^{\xi+1}\partial s^{2}}(t_m,s_n)\right|\\&=\left|\Gamma_{\xi\alpha_2}\right|\left|\frac{\partial^{\xi+3}\phi}{\partial t^{\xi+1}\partial s^{2}}(t,s)-\frac{\partial^{\xi+3}\phi}{\partial t^{\xi+1}\partial s^{2}}(t_m,s)+\frac{\partial^{\xi+3}\phi}{\partial t^{\xi+1}\partial s^{2}}(t_m,s)-\frac{\partial^{\xi+3}\phi}{\partial t^{\xi+1}\partial s^{2}}(t_m,s_n)\right|\\&\le\left|\Gamma_{\xi\alpha_2}\right|\left|\frac{\partial^{\xi+3}\phi}{\partial t^{\xi+1}\partial s^{2}}(t,s)-\frac{\partial^{\xi+3}\phi}{\partial t^{\xi+1}\partial s^{2}}(t_m,s)\right|+\left|\Gamma_{\xi\alpha_2}\right|\left|\frac{\partial^{\xi+3}\phi}{\partial t^{\xi+1}\partial s^{2}}(t_m,s)-\frac{\partial^{\xi+3}\phi}{\partial t^{\xi+1}\partial s^{2}}(t_m,s_n)\right|\\&=\left|\Gamma_{\xi\alpha_2}\right|\left|\frac{\displaystyle\sum_{i=1}^{m-d_s}(-1)^{i}\frac{\partial^{\xi+3}\phi}{\partial t^{\xi+1}\partial s^{2}}[s_i,s_{i+1},\dots,s_{i+d_s},s_n,t]}{\displaystyle\sum_{i=1}^{m-d_s}\lambda_i(s)}\right|+\left|\Gamma_{\xi\alpha_2}\right|\left|\frac{\displaystyle\sum_{j=1}^{n-d_t}(-1)^{j}\frac{\partial^{\xi+3}\phi}{\partial t^{\xi+1}\partial s^{2}}[t_j,t_{j+1},\dots,t_{j+d_t},s_n,t_m]}{\displaystyle\sum_{j=1}^{n-d_t}\lambda_j(t)}\right|\\&=\left|\Gamma_{\xi\alpha_2}\right|\left|\frac{\partial^{\xi+3}e}{\partial t^{\xi+1}\partial s^{2}}(t_m,s)\right|+\left|\Gamma_{\xi\alpha_2}\right|\left|\frac{\partial^{\xi+3}e}{\partial t^{\xi+1}\partial s^{2}}(t_m,s_n)\right|,\end{aligned}$$

    then we have

    $$|E_{22}(t,s)|\le\left|\frac{\partial^{\xi+3}e}{\partial t^{\xi+1}\partial s^{2}}(t_m,s)\right|+\left|\frac{\partial^{\xi+3}e}{\partial t^{\xi+1}\partial s^{2}}(t_m,s_n)\right|\le C\left(h^{d_s-\xi}+\tau^{d_t-1}\right). \tag{3.14}$$

    For E21(t,s), we get

    $$|E_{21}(t,s)|\le C\left(h^{d_s+1-\xi}+\tau^{d_t-1}\right). \tag{3.15}$$

    Similarly to $E_2(t,s)$, for $E_3(t,s)$ we have

    $$|E_3(t,s)|\le C\left(h^{d_s}+\tau^{d_t}\right). \tag{3.16}$$

    Combining (3.9), (3.14) and (3.16), the proof of Theorem 2 is completed.

    In this part, one example is presented to test the theorems. The nonuniform partition in this experiment is defined by the Chebyshev points of the second kind, $s=\cos((0{:}m)\pi/m)$, $t=\cos((0{:}n)\pi/n)$.
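    For reference, the two partitions used in the experiments can be generated as follows (the rescaling of the Chebyshev points to $[0,1]$ is an assumption about the implementation):

```python
# Uniform nodes and second-kind Chebyshev nodes as quoted above.
import numpy as np

m = 10
s_uniform = np.linspace(0.0, 1.0, m + 1)
s_cheb = np.cos(np.arange(m + 1) * np.pi / m)   # cos((0:m)*pi/m) on [-1, 1]
s_cheb01 = (1.0 - s_cheb) / 2.0                 # rescaled to [0, 1], increasing
```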

    Example 1. Consider the FC equation

    $$\frac{\partial\phi}{\partial t}={}_{0}^{C}D_{t}^{1-\alpha_1}\frac{\partial^{2}\phi(t,s)}{\partial s^{2}}-\mu_0\,{}_{0}^{C}D_{t}^{1-\alpha_2}\phi(t,s)+f(t,s),\quad 0\le s\le 1,\ 0\le t\le T,$$

    with the analytic solution

    $$\phi(t,s)=t^{2}\sin(\pi s),$$

    the initial condition

    $$\phi(0,s)=0,$$

    the boundary conditions

    $$\phi(t,0)=\phi(t,1)=0,$$

    and

    $$f(t,s)=2\left(t+\frac{\pi^{2}t^{1+\alpha_1}}{\Gamma(2+\alpha_1)}+\frac{t^{1+\alpha_2}}{\Gamma(2+\alpha_2)}\right)\sin(\pi s).$$
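    The source term above can be checked directly from the exact solution: since the Caputo derivative of $t^{2}$ of order $1-\alpha$ is $2t^{1+\alpha}/\Gamma(2+\alpha)$, the residual of the equation vanishes (assuming $\mu_0=1$, since $f$ carries no $\mu_0$ factor). A small verification sketch:

```python
# Residual check for Example 1 with phi = t^2 sin(pi s) and mu_0 = 1 (assumed):
# phi_t - D^(1-a1) phi_ss + D^(1-a2) phi - f should be zero.
import numpy as np
from scipy.special import gamma

def residual(t, s, a1, a2):
    phi_t = 2.0 * t * np.sin(np.pi * s)
    D_phi_ss = -np.pi ** 2 * 2.0 * t ** (1.0 + a1) / gamma(2.0 + a1) * np.sin(np.pi * s)
    D_phi = 2.0 * t ** (1.0 + a2) / gamma(2.0 + a2) * np.sin(np.pi * s)
    f = 2.0 * (t + np.pi ** 2 * t ** (1.0 + a1) / gamma(2.0 + a1)
               + t ** (1.0 + a2) / gamma(2.0 + a2)) * np.sin(np.pi * s)
    return phi_t - D_phi_ss + D_phi - f

print(residual(0.7, 0.3, a1=0.4, a2=0.6))   # ~0 up to rounding
```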

    In Figures 1 and 2, the errors for $m=n=10$, $[a,b]=[0,1]$ and for $m=n=10$, $d_t=d_s=7$, $[a,b]=[0,1]$ in Example 1, computed by the rational interpolation collocation method on (a) uniform and (b) nonuniform partitions, are presented, respectively. From the figures, we see that the accuracy reaches $10^{-6}$ for both the uniform and the nonuniform partition.

    Figure 1. Errors for $m=n=10$, $[a,b]=[0,1]$ in Example 1: (a) uniform; (b) nonuniform.
    Figure 2. Errors for $m=n=10$, $d_t=d_s=7$, $[a,b]=[0,1]$ in Example 1: (a) uniform; (b) nonuniform.

    In Table 1, the errors of the FC equation with $m=n=10$, $\alpha_1=\alpha_2=0.2$ for the substitution method and the additional method are presented; there is nearly no difference between the two methods. The additional method is simpler than the substitution method for imposing the boundary condition. In the following, we choose the substitution method to deal with the boundary condition.

    Table 1.  Errors of the FC equation with $m=n=10$, $\alpha_1=\alpha_2=0.2$.

    Method      Substitution (uniform)   Substitution (nonuniform)   Additional (uniform)   Additional (nonuniform)
    Lagrange    1.4662e-06               2.1919e-08                  2.7900e-07             1.4310e-07
    Rational    1.3038e-05               2.4541e-07                  4.9788e-06             1.4310e-07


    The errors of the FC equation for $\alpha_1=0.4$, $\alpha_2=0.6$, $d_t=d_s=5$ with $t=0.5,1,5,10,15$ are presented for the uniform and nonuniform partitions in Table 2. As the time variable increases from 0.5 to 15, our method retains high accuracy. The accuracy can be improved by increasing $m,n$ or by choosing the parameters $d_t,d_s$ appropriately, which shows that our method is effective.

    Table 2.  Errors of the FC equation for $\alpha_1=0.4$, $\alpha_2=0.6$, $d_t=d_s=5$.

    t      uniform (12,12)   nonuniform (12,12)   uniform (12,12), d_t=d_s=5   nonuniform (12,12), d_t=d_s=5
    0.5    2.1021e-11        3.8250e-09           6.8506e-06                   1.6436e-06
    1      9.0394e-13        4.4206e-10           4.6667e-06                   7.8141e-07
    5      6.1833e-12        5.6655e-08           2.3777e-04                   4.2230e-05
    10     1.0094e-12        8.5622e-07           1.9813e-04                   1.5634e-05
    15     3.5397e-12        1.8827e-05           8.5498e-04                   8.2551e-05


    In Table 3, the errors for $\alpha_1=0.01,0.1,0.3,0.5,0.9,0.99$ and $\alpha_2=0.1,0.4,0.6,0.8,0.99$ under the uniform partition with $m=n=10$, $d_t=5$, $d_s=5$ are presented. From the table, we see that for different $\alpha_1,\alpha_2$ our method attains high accuracy with small $m$ and $n$. In the following tables, numerical results are presented to test our theorem. In Tables 4 and 5, the errors on the uniform partition for $\alpha_1=\alpha_2=0.2$ are given. In Table 4 ($d_s=5$, with different $d_t$), the convergence rate is $O(h^{d_t})$. In Table 5 ($d_t=5$, with different $d_s$), the convergence rate is $O(h^{7})$, which we will investigate in a future paper. In Tables 6 and 7, the errors of the Chebyshev partition for $s$ and $t$ are presented. For $d_t=5$, the convergence rate is $O(h^{d_s})$ in Table 6, while in Table 7 the convergence rate is $O(h^{d_t})$, which agrees with our theorem.
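    The observed orders reported alongside the errors in the tables below can be reproduced from consecutive rows: for node counts $m_1<m_2$ with errors $e_1,e_2$, the order is $\log(e_1/e_2)/\log(m_2/m_1)$. A small sketch (the authors' exact post-processing is not shown in the text):

```python
# Observed convergence order from two consecutive (node count, error) pairs.
import math

def observed_order(m1, e1, m2, e2):
    return math.log(e1 / e2) / math.log(m2 / m1)

# example: Table 4, column d_t = 5, rows m = n = 8 and 10
print(observed_order(8, 9.8232e-04, 10, 3.2829e-04))   # ~4.91, as reported
```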

    Table 3.  Errors for different $\alpha_1$ under the uniform partition with $m=n=10$, $d_t=5$, $d_s=5$.

    α1      α2=0.1       α2=0.4       α2=0.6       α2=0.8       α2=0.99
    0.01    1.0153e-04   1.0246e-04   1.0300e-04   1.0346e-04   1.0384e-04
    0.1     1.2753e-05   1.2865e-05   1.2930e-05   1.2987e-05   1.3033e-05
    0.3     2.7464e-05   2.7704e-05   2.7845e-05   2.7971e-05   2.8074e-05
    0.5     4.5746e-06   4.6152e-06   4.6399e-06   4.6609e-06   4.6794e-06
    0.9     9.0295e-06   9.1193e-06   9.1240e-06   9.2142e-06   9.2479e-06
    0.99    1.8981e-06   1.8247e-06   1.5293e-06   1.9193e-06   2.0670e-06

    Table 4.  Errors on the uniform partition for $\alpha_1=\alpha_2=0.2$, $d_s=5$ (observed order in parentheses).

    m,n   d_t=2                 d_t=3                 d_t=4                 d_t=5
    8     1.3626e-02            6.9619e-03            2.0708e-03            9.8232e-04
    10    9.6780e-03 (1.5332)   3.4354e-03 (3.1653)   6.9542e-04 (4.8900)   3.2829e-04 (4.9117)
    12    7.0485e-03 (1.7389)   1.9408e-03 (3.1320)   2.9186e-04 (4.7621)   1.3132e-04 (5.0255)
    14    5.4466e-03 (1.6725)   1.2017e-03 (3.1097)   1.4211e-04 (4.6686)   6.0148e-05 (5.0654)

    Table 5.  Errors on the uniform partition for $\alpha_1=\alpha_2=0.2$, $d_t=5$ (observed order in parentheses).

    m,n   d_s=2                 d_s=3                 d_s=4
    8     4.9495e-04            4.9492e-04            4.9486e-04
    10    1.0051e-04 (7.1443)   1.0053e-04 (7.1431)   1.0053e-04 (7.1426)
    12    2.7700e-05 (7.0690)   2.7711e-05 (7.0679)   2.7714e-05 (7.0673)
    14    9.4272e-06 (6.9921)   9.4315e-06 (6.9917)   9.4314e-06 (6.9925)

    Table 6.  Errors on the nonuniform partition with $\alpha_1=\alpha_2=0.2$, $d_t=5$ (observed order in parentheses).

    m,n   d_s=2                 d_s=3                 d_s=4
    8     2.8113e-05            2.8110e-05            2.8108e-05
    10    2.1197e-05 (1.2654)   2.1196e-05 (1.2652)   2.1195e-05 (1.2651)
    12    6.6990e-06 (6.3180)   6.6989e-06 (6.3178)   6.6988e-06 (6.3176)
    14    1.6712e-06 (9.0069)   1.6712e-06 (9.0068)   1.6712e-06 (9.0067)

    Table 7.  Errors on the nonuniform partition for $\alpha_1=\alpha_2=0.2$, $d_s=5$ (observed order in parentheses).

    m,n   d_t=2                 d_t=3                 d_t=4                 d_t=5
    8     3.1539e-02            8.7995e-03            2.1930e-03            3.3004e-04
    10    2.4329e-02 (1.1632)   4.0288e-03 (3.5010)   2.7133e-04 (9.3648)   2.2278e-04 (1.7613)
    12    1.5223e-02 (2.5716)   1.9127e-03 (4.0859)   9.5194e-05 (5.7449)   5.1702e-05 (8.0116)
    14    1.1407e-02 (1.8721)   1.1143e-03 (3.5049)   3.5772e-05 (6.3493)   1.1369e-05 (9.8255)


    In the following tables, $\alpha_1=0.4$ and $\alpha_2=0.6$ are chosen to present numerical results. In Tables 8 and 9, the errors on the uniform partition are given. In Table 8 ($d_t=5$, with different $d_s$), the convergence rate is $O(h^{7})$. In Table 9 (with $d_s=5$ in the space variable), the convergence rate is $O(h^{d_t})$, which agrees with our theorem.

    Table 8.  Errors on the uniform partition with $\alpha_1=0.4$, $\alpha_2=0.6$, $d_t=5$ (observed order in parentheses).

    m,n   d_s=2                 d_s=3                 d_s=4
    8     4.9427e-04            4.9426e-04            4.9414e-04
    10    1.0035e-04 (7.1455)   1.0041e-04 (7.1427)   1.0041e-04 (7.1413)
    12    2.7639e-05 (7.0720)   2.7674e-05 (7.0684)   2.7684e-05 (7.0669)
    14    9.3984e-06 (6.9977)   9.4153e-06 (6.9942)   9.4254e-06 (6.9895)

    Table 9.  Errors on the uniform partition with $\alpha_1=0.4$, $\alpha_2=0.6$, $d_s=5$ (observed order in parentheses).

    m,n   d_t=1                 d_t=2                 d_t=3                 d_t=4
    8     1.3587e-02            6.9513e-03            2.0677e-03            9.8084e-04
    10    9.6497e-03 (1.5334)   3.4314e-03 (3.1637)   6.9462e-04 (4.8884)   3.2791e-04 (4.9102)
    12    7.0259e-03 (1.7404)   1.9389e-03 (3.1311)   2.9157e-04 (4.7613)   1.3118e-04 (5.0249)
    14    5.4269e-03 (1.6752)   1.2005e-03 (3.1096)   1.4198e-04 (4.6682)   6.0090e-05 (5.0648)


    In Tables 10 and 11, the errors of the Chebyshev (nonuniform) partition with $\alpha_1=0.4$, $\alpha_2=0.6$ are presented. For $d_t=5$, the convergence rate is $O(h^{7})$ in Table 11, while in Table 10 the convergence rate is $O(h^{d_t})$, which agrees with our theorem.

    Table 10.  Errors on the nonuniform partition with $\alpha_1=0.4$, $\alpha_2=0.6$, $d_s=5$ (observed order in parentheses).

    m,n   d_t=1                 d_t=2                 d_t=3                 d_t=4
    8     3.1481e-02            8.7825e-03            2.1876e-03            3.2930e-04
    10    2.4263e-02 (1.1671)   4.0219e-03 (3.5000)   2.7124e-04 (9.3553)   2.2231e-04 (1.7606)
    12    1.5185e-02 (2.5704)   1.9076e-03 (4.0912)   9.5106e-05 (5.7481)   5.1649e-05 (8.0057)
    14    1.1373e-02 (1.8751)   1.1117e-03 (3.5026)   3.5733e-05 (6.3504)   1.1365e-05 (9.8211)

    Table 11.  Errors on the nonuniform partition with $\alpha_1=0.4$, $\alpha_2=0.6$, $d_t=5$ (observed order in parentheses).

    m,n   d_s=2                 d_s=3                 d_s=4
    8     2.8065e-05            2.8059e-05            2.8056e-05
    10    2.1156e-05 (1.2665)   2.1154e-05 (1.2660)   2.1153e-05 (1.2656)
    12    6.6875e-06 (6.3168)   6.6874e-06 (6.3164)   6.6873e-06 (6.3161)
    14    1.6693e-06 (9.0033)   1.6693e-06 (9.0031)   1.6693e-06 (9.0030)


    In this paper, the BRIM was used to solve the (1+1)-dimensional FC equation. For fractional-order PDEs, the convergence order is seriously affected by the orders of the fractional derivatives. By fractional integration by parts, the singular fractional derivative in the FC equation is changed into a nonsingular integral, with the order of the derivative of the density function raised by one, so the orders of the fractional derivatives no longer spoil the convergence order. The singularity of the fractional derivative is overcome by shifting the integral from the singular kernel to the density function. For an arbitrary fractional order, a new Gauss formula is constructed to calculate the integral simply. For the Dirichlet boundary condition, the FC equation is changed into the discrete FC equation and its matrix equation is given. In the future, the FC equation with a Neumann condition can be solved by the BRIM, and the high-dimensional FC equation can also be studied by our method.

    The work of Jin Li was supported by Natural Science Foundation of Shandong Province (Grant No. ZR2022MA003).

    The authors declare that they have no conflicts of interest.



    [1] J. Lu, B. Pan, A. M. Seid, B. Li, G. Hu, S. Wan, Truthful incentive mechanism design via internalizing externalities and lp relaxation for vertical federated learning, IEEE Trans. Comput. Social Syst., 2022. https://doi.org/10.1109/TCSS.2022.3227270 doi: 10.1109/TCSS.2022.3227270
    [2] S. Liu, J. Yu, X. Deng, S. Wan, FedCPF: An efficient-communication federated learning approach for vehicular edge computing in 6G communication networks, IEEE Trans. Comput. Social Syst., 23 (2021), 1616–1629. https://doi.org/10.1109/TITS.2021.3099368 doi: 10.1109/TITS.2021.3099368
    [3] C. Wang, C. Jiang, J. Wang, S. Shen, S. Guo, P. Zhang, Blockchain-aided network resource orchestration in intelligent internet of things, IEEE Internet Things J., 10 (2022), 6151–6163. https://doi.org/10.1109/JIOT.2022.3222911 doi: 10.1109/JIOT.2022.3222911
    [4] J. Lu, H. Liu, R. Jia, J. Wang, L. Sun, S. Wan, Towards personalized federated learning via group collaboration in IIoT, IEEE Trans. Ind. Inf., 19 (2022), 8923–8932. https://doi.org/10.1109/TII.2022.3223234 doi: 10.1109/TII.2022.3223234
    [5] G. Wu, L. Xie, H. Zhang, J. Wang, S. Shen, S. Yu, STSIR: An individual-group game-based model for disclosing virus spread in Social Internet of Things, J. Network Comput. Appl., 214 (2023), 103608. https://doi.org/10.1016/j.jnca.2023.103608 doi: 10.1016/j.jnca.2023.103608
    [6] S. Shen, L. Xie, Y. Zhang, G. Wu, H. Zhang, S. Yu, Joint differential game and double deep q-networks for suppressing malware spread in industrial internet of things, IEEE Trans. Inf. Forensics Secur., 18 (2023), 5302–5315. https://doi.org/10.1109/TIFS.2023.3307956 doi: 10.1109/TIFS.2023.3307956
    [7] G. Wu, Z. Xu, H. Zhang, S. Shen, S. Yu, Multi-agent DRL for joint completion delay and energy consumption with queuing theory in MEC-based IIoT, J. Parallel Distrib. Comput., 176 (2023), 80–94. https://doi.org/10.1016/j.jpdc.2023.02.008 doi: 10.1016/j.jpdc.2023.02.008
    [8] G. Wu, H. Wang, H. Zhang, Y. Zhao, S. Yu, S. Shen, Computation offloading method using stochastic games for software defined network-based multi-agent mobile edge computing, IEEE Internet Things J., 10 (2023), 17620–17634. https://doi.org/10.1109/JIOT.2023.3277541 doi: 10.1109/JIOT.2023.3277541
    [9] G. Wu, X. Chen, Z. Gao, H. Zhang, S. Yu, S. Shen, Privacy-preserving offloading scheme in multi-access mobile edge computing based on MADRL, J. Parallel Distrib. Comput., 183 (2024), 104775. https://doi.org/10.1016/j.jpdc.2023.104775 doi: 10.1016/j.jpdc.2023.104775
    [10] S. Shen, X. Wu, P. Sun, H. Zhou, Z. Wu, S. Yu, Optimal privacy preservation strategies with signaling Q-learning for edge-computing-based IoT resource grant systems, Expert Syst. Appl., 225 (2023), 120192. https://doi.org/10.1016/j.eswa.2023.120192 doi: 10.1016/j.eswa.2023.120192
    [11] H. Zhu, G. Liu, M. Zhou, Y. Xie, A. Abusorrah, Q. Kang, Optimizing weighted extreme learning machines for imbalanced classification and application to credit card fraud detection, Neurocomputing, 407 (2020), 50–62. https://doi.org/10.1016/j.neucom.2020.04.078 doi: 10.1016/j.neucom.2020.04.078
    [12] Y. Xie, G. Liu, C. Yan, C. Jiang, M. Zhou, Time-aware attention-based gated network for credit card fraud detection by extracting transactional behaviors, IEEE Trans. Comput. Social Syst., 10 (2022), 1004–1016. https://doi.org/10.1109/TCSS.2022.3158318 doi: 10.1109/TCSS.2022.3158318
    [13] D. Wang, J. Lin, P. Cui, Q. Jia, Z. Wang, Y. Fang, et al., A semi-supervised graph attentive network for financial fraud detection, in 2019 IEEE International Conference on Data Mining (ICDM), IEEE, (2019), 598–607. https://doi.org/10.1109/ICDM.2019.00070
    [14] C. Yang, H. Wang, K. Zhang, L. Sun, Secure network release with link privacy, preprint, arXiv: 2005.00455.
    [15] X. He, K. Deng, X. Wang, Y. Li, Y. Zhang, M. Wang, Lightgcn: Simplifying and powering graph convolution network for recommendation, in SIGIR '20: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information, (2020), 639–648. https://doi.org/10.1145/3397271.3401063
    [16] K. Mao, J. Zhu, X. Xiao, B. Lu, Z. Wang, X. He, UltraGCN: Ultra simplification of graph convolutional networks for recommendation, in CIKM '21: Proceedings of the 30th ACM International Conference on Information & Knowledge Management, (2021), 1253–1262. https://doi.org/10.1145/3459637.3482291
    [17] J. Yu, H. Yin, J. Li, Q. Wang, N. V. Hung, X. Zhang, Self-supervised multi-channel hypergraph convolutional network for social recommendation, in WWW '21: Proceedings of the Web Conference 2021, (2021), 413–424. https://doi.org/10.1145/3442381.3449844
    [18] Y. Liu, X. Ao, Z. Qin, J. Chi, J. Feng, H. Yang, et al., Pick and choose: A GNN-based imbalanced learning approach for fraud detection, in WWW '21: Proceedings of the Web Conference 2021, (2021), 3168–3177. https://doi.org/10.1145/3442381.3449989
    [19] Y. Shen, S. Shen, Q. Li, H. Zhou, Z. Wu, Y. Qu, Evolutionary privacy-preserving learning strategies for edge-based IoT data sharing schemes, Digital Commun. Networks, 9 (2023), 906–919. https://doi.org/10.1016/j.dcan.2022.05.004 doi: 10.1016/j.dcan.2022.05.004
    [20] S. Zhang, H. Yin, T. Chen, N. V. Hung, Z. Huang, L. Cui, Gcn-based user representation learning for unifying robust recommendation and fraudster detection, in SIGIR '20: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, (2020), 689–698. https://doi.org/10.1145/3397271.3401165
    [21] X. Zheng, Z. Wang, C. Chen, J. Qian, Y. Yang, Decentralized graph neural network for privacy-preserving recommendation, in CIKM '23: Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, (2023), 3494–3504. https://doi.org/10.1145/3583780.3614834
    [22] C. Wu, F. Wu, Y. Cao, Y. Huang, X. Xie, Fedgnn: Federated graph neural network for privacy-preserving recommendation, preprint, arXiv: 2102.04925.
    [23] Y. Xiao, L. Xiao, X. Lu, H. Zhang, S. Yu, H. V. Poor, Deep-reinforcement-learning-based user profile perturbation for privacy-aware recommendation, IEEE Internet Things J., 8 (2020), 4560–4568. https://doi.org/10.1109/JIOT.2020.3027586 doi: 10.1109/JIOT.2020.3027586
    [24] Z. Chen, Y. Wang, S. Zhang, H. Zhong, L. Chen, Differentially private user-based collaborative filtering recommendation based on k-means clustering, Expert Syst. Appl., 168 (2021), 114366. https://doi.org/10.1016/j.eswa.2020.114366 doi: 10.1016/j.eswa.2020.114366
    [25] T. N. Kipf, M. Welling, Semi-supervised classification with graph convolutional networks, preprint, arXiv: 1609.02907.
    [26] C. Dwork, A. Roth, The algorithmic foundations of differential privacy, Found. Trends Theor. Comput. Sci., 9 (2014), 211–407. http://dx.doi.org/10.1561/0400000042 doi: 10.1561/0400000042
    [27] M. Zhang, Y. Chen, Inductive matrix completion based on graph neural networks, preprint, arXiv: 1904.12058.
    [28] Yelp Open Dataset. Available from: https://www.yelp.com/dataset.
    [29] J. Ni, J. Li, J. McAuley, Justifying recommendations using distantly-labeled reviews and fine-grained aspects, in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), (2019), 188–197. https://doi.org/10.18653/v1/D19-1018
    [30] R. Berg, T. N. Kipf, M. Welling, Graph convolutional matrix completion, preprint, arXiv: 1706.02263.
    [31] W. Fan, Y. Ma, Q. Li, Y. He, E. Zhao, J. Tang, et al., Graph neural networks for social recommendation, in WWW '19: The World Wide Web Conference, (2019), 417–426. https://doi.org/10.1145/3308558.3313488
    [32] J. Hartford, D. Graham, K. Leyton-Brown, S. Ravanbakhsh, Deep models of interactions across sets, in Proceedings of the 35th International Conference on Machine Learning, 80 (2018), 1909–1918. Available from: http://proceedings.mlr.press/v80/hartford18a/hartford18a.pdf.
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)