Research article

Recommendation model based on intention decomposition and heterogeneous information fusion


  • To address the timeliness of user–item interaction intentions and the noise introduced by heterogeneous information fusion, a recommendation model based on intention decomposition and heterogeneous information fusion (IDHIF) is proposed. First, the intentions behind a user's recently interacted items, and behind the users who recently interacted with a candidate item, are decomposed, and short-term feature representations of users and items are mined through long short-term memory and an attention mechanism. Then, based on heterogeneous information fusion, interactive features of users and items are mined on the user–item interaction graph, social features of users on the social graph, and content features of items on the knowledge graph. The different feature vectors are projected into a common feature space, and long-term feature representations of users and items are obtained through concatenation and a multi-layer perceptron. The final representations of users and items combine the short-term and long-term representations. Compared with the baseline models, AUC on the Last.FM and Movielens-1M datasets increased by 1.83 and 4.03 percentage points, respectively, F1 by 1.28 and 1.58 percentage points, and Recall@20 by 3.96 and 2.90 percentage points. The proposed model thus better captures user and item features, enriching their vector representations and improving recommendation performance.

    Citation: Suqi Zhang, Xinxin Wang, Wenfeng Wang, Ningjing Zhang, Yunhao Fang, Jianxin Li. Recommendation model based on intention decomposition and heterogeneous information fusion[J]. Mathematical Biosciences and Engineering, 2023, 20(9): 16401-16420. doi: 10.3934/mbe.2023732




    In 1929, in connection with the study of value distribution theory for meromorphic functions, R. Nevanlinna [3] conjectured that the second main theorem for meromorphic functions remains valid if one replaces fixed target points by meromorphic functions of slow growth. This conjecture was solved by Osgood [4], Steinmetz [8], and, with truncation one, by Yamanoi [10]. In 1991, Ru and Stoll [5] established the second main theorem for linearly nondegenerate holomorphic curves and moving hyperplanes in subgeneral position. We recall Ru and Stoll's result (for notation, see the review of background material below).

    Theorem 1.1 (Ru and Stoll [5]). Let $f:\mathbb{C}\to\mathbb{P}^N(\mathbb{C})$ be a holomorphic map, and let $H_j$, $1\le j\le q$, be moving hyperplanes in $\mathbb{P}^N(\mathbb{C})$ given by $H_j=\{X=[X_0:\cdots:X_N] \mid a_{j0}X_0+\cdots+a_{jN}X_N=0\}$, where $a_{j0},\dots,a_{jN}$ are entire functions without common zeros. Let $K_H$ be the smallest field that contains $\mathbb{C}$ and all $a_{j\mu}/a_{j\nu}$ with $a_{j\nu}\not\equiv 0$. Suppose that $\mathcal{H}:=\{H_1,\dots,H_q\}$ is a family of slowly moving hyperplanes with respect to $f$ located in $m$-subgeneral position. Assume that $f$ is linearly nondegenerate over $K_H$. Then, for any $\epsilon>0$,

    $$\sum_{j=1}^{q} m_f(r,H_j) \le_{\mathrm{exc}} (2m-N+1+\epsilon)\,T_f(r),$$

    where "exc" means that the above inequality holds for all r outside a set with finite Lebesgue measure.

    Before stating our main result, we recall some basic definitions for moving targets. Let $f:\mathbb{C}\to\mathbb{P}^N(\mathbb{C})$ be a holomorphic map. Write $\mathbf{f}=(f_0,\dots,f_N)$; $\mathbf{f}$ is called a reduced representation of $f$ if $\mathbb{P}(\mathbf{f})=f$ and $f_0,\dots,f_N$ are entire functions without common zeros. Let $\|\mathbf{f}(z)\|=\max\{|f_0(z)|,\dots,|f_N(z)|\}$. The characteristic function of $f$ is defined by

    $$T_f(r)=\int_0^{2\pi}\log\|\mathbf{f}(re^{i\theta})\|\,\frac{d\theta}{2\pi}.$$
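    As a sanity check (a standard illustration, not taken from this paper): for the map $f=[1:z]$ with reduced representation $\mathbf{f}=(1,z)$, the definition gives

```latex
T_f(r)=\int_0^{2\pi}\log\max\{1,|re^{i\theta}|\}\,\frac{d\theta}{2\pi}=\log^{+}r,
```

    so $T_f(r)=\log r+O(1)$, reflecting the degree-one growth of $f$.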

    We say a meromorphic function $g$ on $\mathbb{C}$ is of slow growth with respect to $f$ if $T_g(r)=o(T_f(r))$. Let $K_f$ be the field of all meromorphic functions on $\mathbb{C}$ of slow growth with respect to $f$; it is a subfield of the field of meromorphic functions on $\mathbb{C}$. For a positive integer $d$, we set

    $$\mathcal{I}_d:=\left\{I=(i_0,\dots,i_N)\in\mathbb{Z}_{\ge 0}^{N+1} \;\middle|\; i_0+\cdots+i_N=d\right\}$$

    and

    $$n_d=\#\mathcal{I}_d=\binom{d+N}{N}.$$
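    For instance (an illustrative computation, not part of the source): with $N=2$ and $d=2$,

```latex
n_2=\binom{2+2}{2}=6,\qquad
\mathcal{I}_2=\{(2,0,0),(0,2,0),(0,0,2),(1,1,0),(1,0,1),(0,1,1)\},
```

    matching the six degree-$2$ monomials $x_0^2,\,x_1^2,\,x_2^2,\,x_0x_1,\,x_0x_2,\,x_1x_2$ in three variables.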

    A moving hypersurface $D$ in $\mathbb{P}^N(\mathbb{C})$ of degree $d$ is defined by a homogeneous polynomial $Q=\sum_{I\in\mathcal{I}_d}a_I x^I$, where the $a_I$, $I\in\mathcal{I}_d$, are holomorphic functions on $\mathbb{C}$ without common zeros, and $x^I=x_0^{i_0}\cdots x_N^{i_N}$. Note that $D$ can be regarded as a holomorphic map $a:\mathbb{C}\to\mathbb{P}^{n_d-1}(\mathbb{C})$ with reduced representation $(\dots,a_I(z),\dots)_{I\in\mathcal{I}_d}$. We call $D$ a slowly moving hypersurface with respect to $f$ if $T_a(r)=o(T_f(r))$. The proximity function of $f$ with respect to the moving hypersurface $D$ with defining homogeneous polynomial $Q$ is defined by

    $$m_f(r,D)=\int_0^{2\pi}\lambda_{D(re^{i\theta})}\bigl(f(re^{i\theta})\bigr)\,\frac{d\theta}{2\pi},$$

    where $\lambda_{D(z)}(f(z))=\log\dfrac{\|\mathbf{f}(z)\|^{d}\,\|Q(z)\|}{|Q(\mathbf{f})(z)|}$ is the Weil function associated to $D$, composed with $f$, and $\|Q(z)\|=\max_{I\in\mathcal{I}_d}\{|a_I(z)|\}$. If $D$ is a slowly moving hypersurface with respect to $f$ of degree $d$, we have

    $$m_f(r,D)\le d\,T_f(r)+o(T_f(r))$$

    by the first main theorem for moving targets.
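    A hedged sketch of why this bound holds (a standard argument, assuming for the Jensen step that $Q(\mathbf{f})(0)\ne 0$):

```latex
m_f(r,D)
=\int_0^{2\pi}\log\frac{\|\mathbf{f}\|^{d}\,\|Q\|}{|Q(\mathbf{f})|}(re^{i\theta})\,\frac{d\theta}{2\pi}
= d\,T_f(r)
+\int_0^{2\pi}\log\|Q(re^{i\theta})\|\,\frac{d\theta}{2\pi}
-\int_0^{2\pi}\log|Q(\mathbf{f})(re^{i\theta})|\,\frac{d\theta}{2\pi},
```

    where the middle term is $T_a(r)+O(1)=o(T_f(r))$ since $D$ is slowly moving, and by Jensen's formula the last integral is bounded below by $\log|Q(\mathbf{f})(0)|$; hence $m_f(r,D)\le d\,T_f(r)+o(T_f(r))$.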

    Definition 1.2. Under the above notations, we say that $f$ is linearly nondegenerate over $K_f$ if there is no nonzero linear form $L\in K_f[x_0,\dots,x_N]$ such that $L(f_0,\dots,f_N)\equiv 0$, and $f$ is algebraically nondegenerate over $K_f$ if there is no nonzero homogeneous polynomial $Q\in K_f[x_0,\dots,x_N]$ such that $Q(f_0,\dots,f_N)\equiv 0$. If $f$ is not algebraically nondegenerate over $K_f$, we say that $f$ is degenerate over $K_f$.

    Remark 1.3. In this paper, we only consider those moving hypersurfaces $D$ with defining function $Q$ such that $Q(f_0,\dots,f_N)\not\equiv 0$.

    We say that the moving hypersurfaces $D_1,\dots,D_q$ are in $m$-subgeneral position if there exists $z\in\mathbb{C}$ such that $D_1(z),\dots,D_q(z)$ are in $m$-subgeneral position (as fixed hypersurfaces), i.e., no $m+1$ of $D_1(z),\dots,D_q(z)$ meet at one point. In fact, if this condition is satisfied at one point $z\in\mathbb{C}$, it is satisfied at every $z\in\mathbb{C}$ outside a discrete set.

    In 2021, Heier and Levin generalized Schmidt's subspace theorem to closed subschemes in general position by using the concept of Seshadri constants, as shown in [1]. Recently, they extended this result to arbitrary closed subschemes, without any positioning assumption, by using the notion of distributive constants and weights assigned to subvarieties [2]. As a corollary, they obtained a second main theorem for hypersurfaces in $m$-subgeneral position in $\mathbb{P}^N(\mathbb{C})$, establishing an inequality with the factor $\frac{3}{2}$.

    Theorem (Heier and Levin [2]). Let $f:\mathbb{C}\to\mathbb{P}^N(\mathbb{C})$ be a holomorphic map, and let $D_1,\dots,D_q$ be hypersurfaces in $\mathbb{P}^N(\mathbb{C})$ of degrees $d_1,\dots,d_q$, respectively. Assume that $D_1,\dots,D_q$ are located in $m$-subgeneral position. Then, for any $\epsilon>0$,

    $$\sum_{j=1}^{q}\frac{1}{d_j}\,m_f(r,D_j)\le_{\mathrm{exc}}\left(\frac{3}{2}(2m-N+1)+\epsilon\right)T_f(r).$$

    Here "exc" means that the above inequality holds for all r outside a set with finite Lebesgue measure.

    A key point of their proof is the use of the last line segment of the Nochka diagram, where they proved that the slope of this line segment has a lower bound depending solely on m and N.

    In this paper, motivated by Heier and Levin's work, we consider the moving hypersurfaces in m-subgeneral position and prove the following theorem.

    Theorem 1.4 (Main Theorem). Let $f:\mathbb{C}\to\mathbb{P}^N(\mathbb{C})$ be a holomorphic map, and let $D_1,\dots,D_q$ be a family of slowly moving hypersurfaces with respect to $f$ of degrees $d_1,\dots,d_q$, respectively. Assume that $f$ is algebraically nondegenerate over $K_f$ and that $D_1,\dots,D_q$ are located in $m$-subgeneral position. Then, for any $\epsilon>0$,

    $$\sum_{j=1}^{q}\frac{1}{d_j}\,m_f(r,D_j)\le_{\mathrm{exc}}\frac{3}{2}\,(2m-N+1+\epsilon)\,T_f(r).$$

    Here "exc" means that the above inequality holds for all r outside a set with finite Lebesgue measure.

    Indeed, we prove a more general result covering the case when $f$ is degenerate over $K_f$. To do so, we introduce the notion of a "universal field". Let $k$ be a field. A universal field $\Omega$ of $k$ is a field extension of $k$ that is algebraically closed and has infinite transcendence degree over $k$. A useful fact about $\Omega$ is that any field extension obtained by adjoining finitely many elements to $k$ can be isomorphically embedded into $\Omega$ by an embedding fixing the base field $k$.

    In this paper, we take $k=K_f$ and fix a universal field $\Omega$ over $k=K_f$. Let $\mathbf{f}=(f_0,f_1,\dots,f_N)$ be a reduced representation of $f$. We can regard each $f_i$, $0\le i\le N$, as an element of $\Omega$. Hence $\mathbf{f}$ can be seen as a set of homogeneous coordinates of some point $P$ in $\mathbb{P}^N(\Omega)$. Equip $\mathbb{P}^N(\Omega)$ with the natural Zariski topology.

    Definition 1.5. Under the above assumptions, we define the closure of $P$ in $\mathbb{P}^N(\Omega)$ over $K_f$, denoted by $V_f$, by

    $$V_f:=\bigcap_{\substack{h\in K_f[x_0,\dots,x_N]\\ h(\mathbf{f})\equiv 0}}\bigl\{[X_0:\cdots:X_N]\in\mathbb{P}^N(\Omega)\;\bigm|\;h(X_0,\dots,X_N)=0\bigr\}\subset\mathbb{P}^N(\Omega). \tag{1.1}$$

    Note that $f$ is algebraically nondegenerate over $K_f$ if and only if $V_f=\mathbb{P}^N(\Omega)$. We also note that every moving hypersurface $D$ with defining function $Q\in K_f[x_0,\dots,x_N]$ can be seen as a hypersurface in $\mathbb{P}^N(\Omega)$ determined by $Q$.

    Let $V\subset\mathbb{P}^N(\Omega)$ be an algebraic subvariety defined by homogeneous polynomials $h_1,\dots,h_s\in K_f[x_0,\dots,x_N]$. Let $z$ be a point in $\mathbb{C}$ such that all coefficients of $h_1,\dots,h_s$ are holomorphic at $z$. We denote by $V(z)\subset\mathbb{P}^N(\mathbb{C})$ the algebraic subvariety of $\mathbb{P}^N(\mathbb{C})$ defined by $h_1(z),\dots,h_s(z)$. Here, if $h_j=\sum_{I\in\mathcal{I}_d}a_I x^I$, we set $h_j(z)(x_0,\dots,x_N)=\sum_{I\in\mathcal{I}_d}a_I(z)\,x^I\in\mathbb{C}[x_0,\dots,x_N]$. We recall Lemma 3.3 in [11].

    Lemma 1.6 (Lemma 3.3, [11]). $\dim V(z)=\dim V$ and $\deg V(z)=\deg V$ for all $z\in\mathbb{C}$ except for a discrete subset.

    Definition 1.7. Let $V$ be an algebraic subvariety of $\mathbb{P}^N(\Omega)$. We say that $V$ is defined over $K_f$ if $V$ is an algebraic subvariety defined by some homogeneous polynomials in $K_f[x_0,\dots,x_N]$.

    Let $V\subset\mathbb{P}^N(\Omega)$ be an algebraic subvariety defined over $K_f$, and let $D_1,\dots,D_q$ be $q$ hypersurfaces in $\mathbb{P}^N(\Omega)$ defined over $K_f$. We say that $D_1,\dots,D_q$ are in $m$-subgeneral position on $V$ if for any $J\subset\{1,\dots,q\}$ with $\#J\le m+1$,

    $$\dim\Bigl(\bigcap_{j\in J}D_j\cap V\Bigr)\le m-\#J.$$

    When $m=n$ (where $n=\dim V$), we say $D_1,\dots,D_q$ are in general position on $V$. Note that $\dim\bigl(\bigcap_{j\in J}D_j(z)\cap V(z)\bigr)=\dim\bigl(\bigcap_{j\in J}D_j\cap V\bigr)$ for all $z\in\mathbb{C}$ outside a discrete subset.
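    As an illustration (not from the source): in $\mathbb{P}^2(\Omega)$ with $V=\mathbb{P}^2(\Omega)$, consider the four lines

```latex
D_1:\,x_0=0,\qquad D_2:\,x_1=0,\qquad D_3:\,x_2=0,\qquad D_4:\,x_0-x_1=0.
```

    The lines $D_1$, $D_2$, $D_4$ all pass through $[0:0:1]$, so for $J=\{1,2,4\}$ we get $\dim\bigl(\bigcap_{j\in J}D_j\bigr)=0>2-3$, and the family fails to be in general position ($m=n=2$). However, with $m=3$ every condition $\dim\le 3-\#J$ holds (no common point of all four lines), so the family is in $3$-subgeneral position.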

    Remark 1.8. By Lemma 1.6, the definition of $m$-subgeneral position above implies the definition of $m$-subgeneral position given after Remark 1.3.

    We prove the following general result.

    Theorem 1.9. Let $f$ be a holomorphic map of $\mathbb{C}$ into $\mathbb{P}^N(\mathbb{C})$. Let $\mathcal{D}=\{D_1,\dots,D_q\}$ be a family of slowly moving hypersurfaces in $\mathbb{P}^N(\mathbb{C})$ with respect to $f$ with $\deg D_j=d_j$ $(1\le j\le q)$. Let $V_f\subset\mathbb{P}^N(\Omega)$ be given as in (1.1). Assume that $D_1,\dots,D_q$ are in $m$-subgeneral position on $V_f$ and $\dim V_f=n$. Assume that the following Bezout property holds on $V_f$ for intersections among the divisors: if $I,J\subset\{1,\dots,q\}$, where $D_I:=\bigcap_{i\in I}D_i$, then

    $$\operatorname{codim}_{V_f}D_{I\cup J}=\operatorname{codim}_{V_f}(D_I\cap D_J)\le\operatorname{codim}_{V_f}D_I+\operatorname{codim}_{V_f}D_J,$$

    where, for every subvariety $Z$ of $\mathbb{P}^N(\Omega)$, $\operatorname{codim}_{V_f}Z$ is given by $\operatorname{codim}_{V_f}Z:=\dim V_f-\dim(V_f\cap Z)$. Then, for any $\epsilon>0$,

    $$\sum_{j=1}^{q}\frac{1}{d_j}\,m_f(r,D_j)\le_{\mathrm{exc}}\frac{3}{2}\,(2m-n+1+\epsilon)\,T_f(r).$$

    Remark 1.10. Recall that we only consider those moving hypersurfaces $D$ with defining function $Q\in K_f[x_0,\dots,x_N]$ such that $Q(\mathbf{f})\not\equiv 0$. So $V_f\not\subset D_j$ for every $1\le j\le q$.

    It is known that the Bezout property holds on projective spaces. Therefore, Theorem 1.4 is the special case of Theorem 1.9 with $V_f=\mathbb{P}^N(\Omega)$ (and $n=N$), and the rest of the paper is devoted to proving Theorem 1.9.

    In 2022, Quang [6] introduced the notion of distributive constant Δ as follows:

    Definition 2.1. Let $f:\mathbb{C}\to\mathbb{P}^N(\mathbb{C})$ be a holomorphic curve. Let $D_1,\dots,D_q$ be $q$ hypersurfaces in $\mathbb{P}^N(\Omega)$. Let $V_f$ be given as in (1.1). We define the distributive constant of $D_1,\dots,D_q$ with respect to $f$ by

    $$\Delta:=\max_{\emptyset\ne\Gamma\subset\{1,\dots,q\}}\frac{\#\Gamma}{\operatorname{codim}_{V_f}\bigl(\bigcap_{j\in\Gamma}D_j\bigr)}.$$

    We remark that Quang's original definition (see Definition 3.3 in [6]) is different from Definition 2.1. But by Lemma 3.3 in [11], we can see that Definition 2.1 is equivalent to Definition 3.3 in [6]. We rephrase the definition, according to Heier–Levin [2], as follows:

    Definition 2.2. With the assumptions and notations of Definition 2.1, for a closed subset $W$ of $V_f$ (with respect to the Zariski topology on $\mathbb{P}^N(\Omega)$), let

    $$\alpha(W)=\#\{\,j \mid W\subset\operatorname{Supp}D_j\,\}.$$

    We define

    $$\Delta:=\max_{\emptyset\ne W\subsetneq V_f}\frac{\alpha(W)}{\operatorname{codim}_{V_f}W}.$$

    We show that the above two definitions are equivalent. Suppose that $\widetilde W$ is a subvariety of $V_f$ such that $\alpha(W)/\operatorname{codim}_{V_f}W$ attains its maximum at $W=\widetilde W$. Reordering if necessary, we may assume that $\widetilde W\subset D_j$ for $j=1,\dots,\alpha(\widetilde W)$. Let $W'=\bigcap_{j=1}^{\alpha(\widetilde W)}D_j$. Then, clearly, $\widetilde W\subset W'$ and hence $\operatorname{codim}_{V_f}\widetilde W\ge\operatorname{codim}_{V_f}W'$. On the other hand, $\widetilde W\not\subset D_j$ for all $j>\alpha(\widetilde W)$ implies that $W'\not\subset D_j$ for all $j>\alpha(\widetilde W)$. So $\alpha(\widetilde W)=\alpha(W')$. Thus we have

    $$\frac{\alpha(W')}{\operatorname{codim}_{V_f}W'}\ge\frac{\alpha(\widetilde W)}{\operatorname{codim}_{V_f}\widetilde W}.$$

    By our maximality assumption on $\widetilde W$, we obtain

    $$\frac{\alpha(W')}{\operatorname{codim}_{V_f}W'}=\frac{\alpha(\widetilde W)}{\operatorname{codim}_{V_f}\widetilde W}.$$

    This means that, in Definition 2.2, we only need to consider those $W$ that are intersections of some of the $D_i$'s, and our claim follows from this observation. In the following, when we deal with $W$, we always assume that $W$ is an intersection of some of the $D_i$'s.

    S. D. Quang obtained the following result.

    Theorem 2.3 (S. D. Quang [6]; Lei Shi, Qiming Yan and Guangsheng Yu [7]). Let $f$ be a holomorphic map of $\mathbb{C}$ into $\mathbb{P}^N(\mathbb{C})$. Let $\{D_j\}_{j=1}^{q}$ be a family of slowly moving hypersurfaces in $\mathbb{P}^N(\mathbb{C})$ with $\deg D_j=d_j$ $(1\le j\le q)$. Let $V_f\subset\mathbb{P}^N(\Omega)$ be given as in (1.1). Assume that $\dim V_f=n$. Then, for any $\epsilon>0$,

    $$\sum_{j=1}^{q}\frac{1}{d_j}\,m_f(r,D_j)\le_{\mathrm{exc}}\Bigl((n+1)\max_{\emptyset\ne W\subsetneq V_f}\frac{\alpha(W)}{\operatorname{codim}_{V_f}W}+\epsilon\Bigr)\,T_f(r).$$

    We derive the following corollary of Theorem 2.3.

    Corollary 2.4. We adopt the assumptions of Theorem 2.3. Let $W_0$ be a closed subset of $V_f\subset\mathbb{P}^N(\Omega)$. Then, for any $\epsilon>0$, we have

    $$\sum_{j=1}^{q}\frac{1}{d_j}\,m_f(r,D_j)\le_{\mathrm{exc}}\Bigl(\alpha(W_0)+(n+1)\max_{\emptyset\ne W\subsetneq V_f}\frac{\alpha(W)-\alpha(W\cup W_0)}{\operatorname{codim}_{V_f}W}+\epsilon\Bigr)\,T_f(r).$$

    Proof. Without loss of generality, we may suppose that $W_0\subset\operatorname{Supp}D_j$ for $j=q-\alpha(W_0)+1,\dots,q$. Let $q'=q-\alpha(W_0)$, and let

    $$\alpha'(W)=\#\{\,i\le q' \mid W\subset\operatorname{Supp}D_i\,\}.$$

    Note that $\alpha'(W)=\alpha(W)-\alpha(W\cup W_0)$. Since each $D_i$ is slowly moving, the first main theorem for moving targets gives $\frac{1}{d_i}m_f(r,D_i)\le T_f(r)+o(T_f(r))$ for each of the $\alpha(W_0)=q-q'$ remaining indices, so

    $$\sum_{i=q'+1}^{q}\frac{1}{d_i}\,m_f(r,D_i)\le(\alpha(W_0)+\epsilon)\,T_f(r).$$

    Thus, Theorem 2.3, applied to the subfamily $\{D_i\}_{i=1}^{q'}$, implies that

    $$\sum_{j=1}^{q}\frac{1}{d_j}m_f(r,D_j)=\sum_{i=1}^{q'}\frac{1}{d_i}m_f(r,D_i)+\sum_{i=q'+1}^{q}\frac{1}{d_i}m_f(r,D_i)\le_{\mathrm{exc}}\Bigl((n+1)\max_{\emptyset\ne W\subsetneq V_f}\frac{\alpha'(W)}{\operatorname{codim}_{V_f}W}+\epsilon\Bigr)T_f(r)+(\alpha(W_0)+\epsilon)\,T_f(r),$$

    which is the desired inequality. $\square$

    Proof of Theorem 1.9. We divide the proof into two cases. The first case is that for every nonempty algebraic subvariety $W\subsetneq V_f\subset\mathbb{P}^N(\Omega)$ we have $\operatorname{codim}_{V_f}W\ge\frac{n+1}{2m-n+1}\,\alpha(W)$. Then, by Definition 2.2, we have $\Delta\le\frac{2m-n+1}{n+1}$, so the coefficient in Theorem 2.3 is at most $(n+1)\cdot\frac{2m-n+1}{n+1}+\epsilon=2m-n+1+\epsilon\le\frac{3}{2}(2m-n+1+\epsilon)$, and the result follows easily from Theorem 2.3.

    Otherwise, we take a subvariety $W_0\subsetneq V_f$ such that the quantity

    $$\frac{n+1-\operatorname{codim}_{V_f}W}{2m-n+1-\alpha(W)}$$

    is maximized at $W=W_0$. We may assume that $W_0$ is an intersection $D_I=\bigcap_{i\in I}D_i$ for some $I\subset\{1,\dots,q\}$. Let

    $$\sigma:=\frac{n+1-\operatorname{codim}_{V_f}W_0}{2m-n+1-\alpha(W_0)}.$$

    Note that $\sigma$ is the slope of the straight line passing through $(2m-n+1,\,n+1)$ and $(\alpha(W_0),\,\operatorname{codim}_{V_f}W_0)$.

    Take an arbitrary $W\subsetneq V_f$. By Corollary 2.4, it suffices to show that

    $$\alpha(W_0)+(n+1)\,\frac{\alpha(W)-\alpha(W\cup W_0)}{\operatorname{codim}_{V_f}W}\le\frac{3}{2}\,(2m-n+1).$$

    Assume that $W=D_J$ for some nonempty $J\subset\{1,\dots,q\}$ (the case $\alpha(W)=0$, i.e., $J=\emptyset$, follows from the $m$-subgeneral position assumption). From the claim on page 19 of Heier–Levin [2] (the same argument as in [2] applies here), we have

    $$\frac{\alpha(W)-\alpha(W\cup W_0)}{\operatorname{codim}_{V_f}W}\le\frac{1}{\sigma}.$$

    Hence

    $$\alpha(W_0)+(n+1)\,\frac{\alpha(W)-\alpha(W\cup W_0)}{\operatorname{codim}_{V_f}W}\le\alpha(W_0)+\frac{n+1}{\sigma}. \tag{3.1}$$

    Finally, consider Vojta's Nochka-weight-diagram [9] (see Figure 1).

    Figure 1.  Nochka-weight-diagram.

    We note that, by our assumption $\operatorname{codim}_{V_f}W_0<\frac{n+1}{2m-n+1}\,\alpha(W_0)$, the point $P=(\alpha(W_0),\,\operatorname{codim}_{V_f}W_0)$ lies below the line $y=\frac{n+1}{2m-n+1}\,x$. By the $m$-subgeneral position, it also lies on or above the line $y=x+n-m$, i.e., to its left. Therefore, $P$ must lie below and to the left of the intersection point $Q=\bigl(\frac{2m-n+1}{2},\,\frac{n+1}{2}\bigr)$ of these two straight lines. Thus, we have

    $$\alpha(W_0)<\frac{2m-n+1}{2},\qquad \operatorname{codim}_{V_f}W_0<\frac{n+1}{2}. \tag{3.2}$$
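    For completeness (a routine check, not spelled out in the source), the coordinates of $Q$ follow by solving the two line equations simultaneously:

```latex
\frac{n+1}{2m-n+1}\,x = x+n-m
\;\Longrightarrow\;
x\cdot\frac{(n+1)-(2m-n+1)}{2m-n+1}=n-m
\;\Longrightarrow\;
x=\frac{2m-n+1}{2},
```

    and then $y=x+n-m=\frac{2m-n+1}{2}+n-m=\frac{n+1}{2}$, confirming $Q=\bigl(\frac{2m-n+1}{2},\frac{n+1}{2}\bigr)$.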

    Since $\sigma>\frac{n+1}{2m-n+1}$ (see Figure 1), using (3.2) we obtain

    $$\alpha(W_0)+\frac{n+1}{\sigma}<\alpha(W_0)+(2m-n+1)<\frac{2m-n+1}{2}+(2m-n+1)=\frac{3}{2}\,(2m-n+1).$$

    Combining the above with (3.1), we obtain

    $$\alpha(W_0)+(n+1)\,\frac{\alpha(W)-\alpha(W\cup W_0)}{\operatorname{codim}_{V_f}W}\le\frac{3}{2}\,(2m-n+1).$$

    The theorem thus follows from Corollary 2.4.

    It is a longstanding problem in Nevanlinna theory whether the second main theorem, in the setting of hypersurfaces in $\mathbb{P}^N(\mathbb{C})$ located in $m$-subgeneral position, holds with the upper bound $(2m-N+1+\epsilon)\,T_f(r)$. This problem can also be considered in the context of moving hypersurfaces in $m$-subgeneral position. Heier and Levin used an estimate on the slope of the last line segment of the Nochka diagram, together with the concept of distributive constants, to obtain the good coefficient $\frac{3}{2}(2m-N+1)+\epsilon$ in the case of fixed hypersurfaces. In this paper, building upon the work of Heier and Levin and utilizing the concept of universal fields, we establish a general inequality with the same coefficient $\frac{3}{2}(2m-N+1+\epsilon)$ in the case of moving hypersurfaces. From these theorems, it appears that a more precise estimate of the slopes of the latter line segments in the Nochka diagram could lead to a sharper upper bound.

    Qili Cai: Conceptualization, formal analysis, investigation, visualization, writing – original draft, writing – review and editing; Chinjui Yang: Conceptualization, formal analysis, methodology, validation, writing – original draft, writing – review and editing. All authors have read and approved the final version of the manuscript for publication.

    The authors declare that they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors declare no conflicts of interest.



  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
