Research article

An adaptive preference retention collaborative filtering algorithm based on graph convolutional method

  • Received: 07 October 2022 Revised: 10 November 2022 Accepted: 18 November 2022 Published: 01 December 2022
  • Collaborative filtering is one of the most widely used methods in recommender systems. In recent years, Graph Neural Networks (GNNs) have been naturally applied to collaborative filtering to model users' preference representations. However, existing work has largely ignored the effects of different items on user representation, which prevents such models from capturing fine-grained user preferences. Moreover, due to the data sparsity problem in collaborative filtering, most GNN-based models conduct a large number of graph convolution operations on the user-item graph, resulting in an over-smoothing effect. To tackle these problems, an Adaptive Preference Retention Graph Convolutional Collaborative Filtering Method (APR-GCCF) was proposed to distinguish the differences among items and capture fine-grained user preferences. Specifically, a graph convolutional method was applied to model the high-order relationships on the user-item graph, and an adaptive preference retention mechanism was used to capture the differences between items adaptively. To obtain a unified representation of users' preferences and alleviate the over-smoothing effect, we employed a residual preference prediction mechanism that concatenates the user preference representations generated by each layer of the graph neural network. Extensive experiments were conducted on three real-world datasets, and the experimental results demonstrate the effectiveness of the model.

    Citation: Bingjie Zhang, Junchao Yu, Zhe Kang, Tianyu Wei, Xiaoyu Liu, Suhua Wang. An adaptive preference retention collaborative filtering algorithm based on graph convolutional method[J]. Electronic Research Archive, 2023, 31(2): 793-811. doi: 10.3934/era.2023040

    Related Papers:

    [1] Kandhasamy Tamilvanan, Jung Rye Lee, Choonkil Park . Ulam stability of a functional equation deriving from quadratic and additive mappings in random normed spaces. AIMS Mathematics, 2021, 6(1): 908-924. doi: 10.3934/math.2021054
    [2] Murali Ramdoss, Divyakumari Pachaiyappan, Inho Hwang, Choonkil Park . Stability of an n-variable mixed type functional equation in probabilistic modular spaces. AIMS Mathematics, 2020, 5(6): 5903-5915. doi: 10.3934/math.2020378
    [3] K. Tamilvanan, Jung Rye Lee, Choonkil Park . Hyers-Ulam stability of a finite variable mixed type quadratic-additive functional equation in quasi-Banach spaces. AIMS Mathematics, 2020, 5(6): 5993-6005. doi: 10.3934/math.2020383
    [4] Maysaa Al-Qurashi, Mohammed Shehu Shagari, Saima Rashid, Y. S. Hamed, Mohamed S. Mohamed . Stability of intuitionistic fuzzy set-valued maps and solutions of integral inclusions. AIMS Mathematics, 2022, 7(1): 315-333. doi: 10.3934/math.2022022
    [5] Lingxiao Lu, Jianrong Wu . Hyers-Ulam-Rassias stability of cubic functional equations in fuzzy normed spaces. AIMS Mathematics, 2022, 7(5): 8574-8587. doi: 10.3934/math.2022478
    [6] Nazek Alessa, K. Tamilvanan, G. Balasubramanian, K. Loganathan . Stability results of the functional equation deriving from quadratic function in random normed spaces. AIMS Mathematics, 2021, 6(3): 2385-2397. doi: 10.3934/math.2021145
    [7] Zhihua Wang . Approximate mixed type quadratic-cubic functional equation. AIMS Mathematics, 2021, 6(4): 3546-3561. doi: 10.3934/math.2021211
    [8] Nour Abed Alhaleem, Abd Ghafur Ahmad . Intuitionistic fuzzy normed prime and maximal ideals. AIMS Mathematics, 2021, 6(10): 10565-10580. doi: 10.3934/math.2021613
    [9] Sizhao Li, Xinyu Han, Dapeng Lang, Songsong Dai . On the stability of two functional equations for (S,N)-implications. AIMS Mathematics, 2021, 6(2): 1822-1832. doi: 10.3934/math.2021110
    [10] Zhihua Wang, Choonkil Park, Dong Yun Shin . Additive ρ-functional inequalities in non-Archimedean 2-normed spaces. AIMS Mathematics, 2021, 6(2): 1905-1919. doi: 10.3934/math.2021116



1. Introduction

    In 1940, Ulam [24] posed the stability problem concerning group homomorphisms. For Banach spaces, the problem was solved by Hyers [7] in the case of approximately additive mappings. Hyers' result was then extended by Aoki [1] and Rassias [18] for additive mappings and linear mappings, respectively. In 1994, a further generalization, the so-called generalized Hyers-Ulam stability, was obtained by Gavruta [6]. Since then, the stability of several functional equations has been extensively discussed by many mathematicians, and there are many interesting results concerning this problem (see [2,8,9,10,19,20] and the references therein); stability results for various functional equations and inequalities have also been studied and generalized [5,11,12,15,16,17,26] in matrix normed spaces of different kinds, such as matrix fuzzy normed spaces, matrix paranormed spaces and matrix non-Archimedean random normed spaces.

    In 2017, Wang and Xu [25] introduced the following functional equation

$$\begin{aligned}
2k[f(x+ky)+f(kx+y)]={}&k(1-s+k+ks+2k^{2})f(x+y)+k(1-s-3k+ks+2k^{2})f(x-y)\\
&+2kf(kx)+2k(s+k-ks-2k^{2})f(x)+2(1-k-s)f(ky)+2ksf(y)
\end{aligned} \tag{1.1}$$

where $s$ is a parameter, $k>1$ and $s\neq 1-2k$. It is easy to verify that $f(x)=ax+bx^{2}$ $(x\in\mathbb{R})$ satisfies the functional Eq (1.1), where $a,b$ are arbitrary constants. They considered the general solution of the functional Eq (1.1), and then determined the generalized Hyers-Ulam stability of the functional Eq (1.1) in quasi-Banach spaces by applying the direct method.
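As a quick, machine-checkable sanity check of Eq (1.1) as displayed above, the following SymPy sketch (our illustration, not part of the original paper; it assumes the coefficient pattern shown in (1.1)) expands both sides for $f(x)=ax+bx^{2}$ and confirms that their difference vanishes identically:

```python
# Sanity check (requires SymPy): f(x) = a*x + b*x**2 satisfies Eq (1.1)
# identically in x and y, for arbitrary parameters k, s and constants a, b.
import sympy as sp

x, y, k, s, a, b = sp.symbols('x y k s a b')
f = lambda u: a*u + b*u**2  # the claimed mixed additive-quadratic solution

lhs = 2*k*(f(x + k*y) + f(k*x + y))
rhs = (k*(1 - s + k + k*s + 2*k**2)*f(x + y)
       + k*(1 - s - 3*k + k*s + 2*k**2)*f(x - y)
       + 2*k*f(k*x)
       + 2*k*(s + k - k*s - 2*k**2)*f(x)
       + 2*(1 - k - s)*f(k*y)
       + 2*k*s*f(y))

assert sp.expand(lhs - rhs) == 0  # both sides agree as polynomials in x, y
print("f(x) = a*x + b*x**2 satisfies Eq (1.1)")
```

Running the same check with the purely odd part $f(x)=ax$ and the purely even part $f(x)=bx^{2}$ separately also succeeds, which matches the additive-quadratic dichotomy exploited in Lemmas 3.1 and 3.2 below.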

    The main purpose of this paper is to employ the direct and fixed point methods to establish the Hyers-Ulam stability of the functional Eq (1.1) in matrix intuitionistic fuzzy normed spaces. The paper is organized as follows: In Sections 1 and 2, we present a brief introduction and introduce related basic definitions and preliminary results, respectively. In Section 3, we prove the Hyers-Ulam stability of the functional Eq (1.1) in matrix intuitionistic fuzzy normed spaces by applying the direct method. In Section 4, we prove the Hyers-Ulam stability of the functional Eq (1.1) in matrix intuitionistic fuzzy normed spaces by applying the fixed point method. Our results may be viewed as a continuation of the previous contribution of the authors in the setting of fuzzy stability (see [14,17]).

2. Preliminaries

    For the sake of completeness, in this section we present some basic definitions and preliminary results which will be useful in investigating the Hyers-Ulam stability results in matrix intuitionistic fuzzy normed spaces. The notions of a continuous $t$-norm $*$ and a continuous $t$-conorm $\diamond$ can be found in [14,22]. Using these, an intuitionistic fuzzy normed space (for short, IFNS) is defined as follows:

Definition 2.1. ([14,21]) The five-tuple $(X,\mu,\nu,*,\diamond)$ is said to be an IFNS if $X$ is a vector space, $*$ is a continuous $t$-norm, $\diamond$ is a continuous $t$-conorm, and $\mu,\nu$ are fuzzy sets on $X\times(0,\infty)$ satisfying the following conditions. For every $x,y\in X$ and $s,t>0$:

    (i) $\mu(x,t)+\nu(x,t)\leq 1$;

    (ii) $\mu(x,t)>0$;

    (iii) $\mu(x,t)=1$ if and only if $x=0$;

    (iv) $\mu(\alpha x,t)=\mu\big(x,\frac{t}{|\alpha|}\big)$ for each $\alpha\neq 0$;

    (v) $\mu(x,t)*\mu(y,s)\leq\mu(x+y,t+s)$;

    (vi) $\mu(x,\cdot):(0,\infty)\to[0,1]$ is continuous;

    (vii) $\lim_{t\to\infty}\mu(x,t)=1$ and $\lim_{t\to 0}\mu(x,t)=0$;

    (viii) $\nu(x,t)<1$;

    (ix) $\nu(x,t)=0$ if and only if $x=0$;

    (x) $\nu(\alpha x,t)=\nu\big(x,\frac{t}{|\alpha|}\big)$ for each $\alpha\neq 0$;

    (xi) $\nu(x,t)\diamond\nu(y,s)\geq\nu(x+y,t+s)$;

    (xii) $\nu(x,\cdot):(0,\infty)\to[0,1]$ is continuous;

    (xiii) $\lim_{t\to\infty}\nu(x,t)=0$ and $\lim_{t\to 0}\nu(x,t)=1$.

    In this case, $(\mu,\nu)$ is called an intuitionistic fuzzy norm.
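For orientation, we record the standard example of an intuitionistic fuzzy norm (a well-known construction added here for the reader; it is not taken from the text above). Given a normed space $(X,\|\cdot\|)$, take $a*b=\min\{a,b\}$, $a\diamond b=\max\{a,b\}$ and

$$\mu(x,t)=\frac{t}{t+\|x\|},\qquad \nu(x,t)=\frac{\|x\|}{t+\|x\|}\qquad (x\in X,\ t>0).$$

Conditions (i)-(xiii) are readily verified, so $(X,\mu,\nu,*,\diamond)$ is an IFNS; note that the control bounds appearing in the hypotheses of Sections 3 and 4 have exactly this $\frac{t}{t+\varphi}$ shape.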

    The following concepts of convergence and Cauchy sequences are considered in [14,21]:

Let $(X,\mu,\nu,*,\diamond)$ be an IFNS. Then, a sequence $\{x_k\}$ is said to be intuitionistic fuzzy convergent to $x\in X$ if for every $\varepsilon>0$ and $t>0$, there exists $k_0\in\mathbb{N}$ such that

$$\mu(x_k-x,t)>1-\varepsilon \quad\text{and}\quad \nu(x_k-x,t)<\varepsilon$$

for all $k\geq k_0$. In this case we write

$$(\mu,\nu)\text{-}\lim_{k\to\infty} x_k=x.$$

The sequence $\{x_k\}$ is said to be an intuitionistic fuzzy Cauchy sequence if for every $\varepsilon>0$ and $t>0$, there exists $k_0\in\mathbb{N}$ such that

$$\mu(x_k-x_\ell,t)>1-\varepsilon \quad\text{and}\quad \nu(x_k-x_\ell,t)<\varepsilon$$

for all $k,\ell\geq k_0$. $(X,\mu,\nu,*,\diamond)$ is said to be complete if every intuitionistic fuzzy Cauchy sequence in $(X,\mu,\nu,*,\diamond)$ is intuitionistic fuzzy convergent in $(X,\mu,\nu,*,\diamond)$.

Following [11,12], we will also use the following notations: The set of all $m\times n$ matrices in $X$ will be denoted by $M_{m,n}(X)$. When $m=n$, the matrix space $M_{m,n}(X)$ will be written as $M_n(X)$. The symbol $e_j\in M_{1,n}(\mathbb{C})$ will denote the row vector whose $j$th component is $1$ and whose other components are $0$. Similarly, $E_{ij}\in M_n(\mathbb{C})$ will denote the $n\times n$ matrix whose $(i,j)$-component is $1$ and whose other components are $0$. The $n\times n$ matrix whose $(i,j)$-component is $x$ and whose other components are $0$ will be denoted by $E_{ij}\otimes x\in M_n(X)$.

Let $(X,\|\cdot\|)$ be a normed space. Note that $(X,\{\|\cdot\|_n\})$ is a matrix normed space if and only if $(M_n(X),\|\cdot\|_n)$ is a normed space for each positive integer $n$ and

$$\|AxB\|_k\leq\|A\|\,\|B\|\,\|x\|_n$$

holds for $A\in M_{k,n}$, $x=[x_{ij}]\in M_n(X)$ and $B\in M_{n,k}$, and that $(X,\{\|\cdot\|_n\})$ is a matrix Banach space if and only if $X$ is a Banach space and $(X,\{\|\cdot\|_n\})$ is a matrix normed space.
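As a simple illustration of this axiom (a routine special case we add for the reader's convenience), take $A=e_k$ and $B=e_l^{tr}$ in the notation of the preceding paragraph, so that $\|A\|=\|B\|=1$ and $AxB=x_{kl}$; the axiom then yields

$$\|x_{kl}\|\leq\|[x_{ij}]\|_n,$$

the scalar-entry estimate whose intuitionistic fuzzy analogue appears in Lemma 2.3(2) below.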

    Following [23], we introduce the concept of a matrix intuitionistic fuzzy normed space as follows:

Definition 2.2. ([23]) Let $(X,\mu,\nu,*,\diamond)$ be an intuitionistic fuzzy normed space, and let the symbol $\theta$ stand for a rectangular matrix of zero elements over $X$. Then:

    (1) $(X,\{\mu_n\},\{\nu_n\},*,\diamond)$ is called a matrix intuitionistic fuzzy normed space (briefly, MIFNS) if for each positive integer $n$, $(M_n(X),\mu_n,\nu_n,*,\diamond)$ is an intuitionistic fuzzy normed space and $\mu_n$, $\nu_n$ satisfy the following conditions:

    (i) $\mu_{n+m}(\theta\oplus x,t)=\mu_n(x,t)$, $\nu_{n+m}(\theta\oplus x,t)=\nu_n(x,t)$ for all $t>0$, $x=[x_{ij}]\in M_n(X)$ and zero matrix $\theta\in M_m(X)$;

    (ii) $\mu_k(AxB,t)\geq\mu_n\big(x,\frac{t}{\|A\|\,\|B\|}\big)$, $\nu_k(AxB,t)\leq\nu_n\big(x,\frac{t}{\|A\|\,\|B\|}\big)$ for all $t>0$, $A\in M_{k,n}(\mathbb{R})$, $x=[x_{ij}]\in M_n(X)$ and $B\in M_{n,k}(\mathbb{R})$ with $\|A\|\,\|B\|\neq 0$.

    (2) $(X,\{\mu_n\},\{\nu_n\},*,\diamond)$ is called a matrix intuitionistic fuzzy Banach space if $(X,\mu,\nu,*,\diamond)$ is an intuitionistic fuzzy Banach space and $(X,\{\mu_n\},\{\nu_n\},*,\diamond)$ is a matrix intuitionistic fuzzy normed space.

The following lemma can be found in [23].

Lemma 2.3. ([23]) Let $(X,\{\mu_n\},\{\nu_n\},*,\diamond)$ be a matrix intuitionistic fuzzy normed space. Then:

    (1) $\mu_n(E_{kl}\otimes x,t)=\mu(x,t)$, $\nu_n(E_{kl}\otimes x,t)=\nu(x,t)$ for all $t>0$ and $x\in X$.

    (2) For all $[x_{ij}]\in M_n(X)$ and $t=\sum_{i,j=1}^{n}t_{ij}>0$,

$$\mu(x_{kl},t)\geq\mu_n([x_{ij}],t)\geq\min\{\mu(x_{ij},t_{ij}): i,j=1,2,\ldots,n\},\qquad \mu(x_{kl},t)\geq\mu_n([x_{ij}],t)\geq\min\Big\{\mu\Big(x_{ij},\frac{t}{n^{2}}\Big): i,j=1,2,\ldots,n\Big\},$$

    and

$$\nu(x_{kl},t)\leq\nu_n([x_{ij}],t)\leq\max\{\nu(x_{ij},t_{ij}): i,j=1,2,\ldots,n\},\qquad \nu(x_{kl},t)\leq\nu_n([x_{ij}],t)\leq\max\Big\{\nu\Big(x_{ij},\frac{t}{n^{2}}\Big): i,j=1,2,\ldots,n\Big\}.$$

    (3) $\lim_{m\to\infty}x_m=x$ if and only if $\lim_{m\to\infty}x_{ij}^{(m)}=x_{ij}$ for $x_m=[x_{ij}^{(m)}],\,x=[x_{ij}]\in M_n(X)$.

For later use, we also recall the following lemma, which is due to Diaz and Margolis [4] and will play an important role in proving our stability results in this paper.

Lemma 2.4. (The fixed point alternative theorem [4]) Let $(E,d)$ be a complete generalized metric space and $J: E\to E$ be a strictly contractive mapping with Lipschitz constant $L<1$. Then for each fixed element $x\in E$, either

$$d(J^{n}x,J^{n+1}x)=\infty,\quad \forall\, n\geq 0,$$

    or

$$d(J^{n}x,J^{n+1}x)<\infty,\quad \forall\, n\geq n_0,$$

    for some natural number $n_0$. Moreover, if the second alternative holds, then:

    (i) The sequence $\{J^{n}x\}$ is convergent to a fixed point $y^{*}$ of $J$.

    (ii) $y^{*}$ is the unique fixed point of $J$ in the set $E^{*}:=\{y\in E\mid d(J^{n_0}x,y)<+\infty\}$, and $d(y,y^{*})\leq\frac{1}{1-L}\,d(y,Jy)$ for all $y\in E^{*}$.

3. Stability of the functional Eq (1.1): the direct method

    From now on, let $(X,\{\mu_n\},\{\nu_n\},*,\diamond)$ be a matrix intuitionistic fuzzy normed space and $(Y,\{\mu_n\},\{\nu_n\},*,\diamond)$ be a matrix intuitionistic fuzzy Banach space. In this section, we will prove the Hyers-Ulam stability of the functional Eq (1.1) in matrix intuitionistic fuzzy normed spaces by using the direct method. For the sake of convenience, given a mapping $f: X\to Y$, we define the difference operators $Df: X^{2}\to Y$ and $Df_n: M_n(X^{2})\to M_n(Y)$ of the functional Eq (1.1) by

$$\begin{aligned}
Df(a,b):={}&2k[f(a+kb)+f(ka+b)]-k(1-s+k+ks+2k^{2})f(a+b)-k(1-s-3k+ks+2k^{2})f(a-b)\\
&-2kf(ka)-2k(s+k-ks-2k^{2})f(a)-2(1-k-s)f(kb)-2ksf(b),\\
Df_n([x_{ij}],[y_{ij}]):={}&2k\big[f_n([x_{ij}]+k[y_{ij}])+f_n(k[x_{ij}]+[y_{ij}])\big]-k(1-s+k+ks+2k^{2})f_n([x_{ij}]+[y_{ij}])\\
&-k(1-s-3k+ks+2k^{2})f_n([x_{ij}]-[y_{ij}])-2kf_n(k[x_{ij}])-2k(s+k-ks-2k^{2})f_n([x_{ij}])\\
&-2(1-k-s)f_n(k[y_{ij}])-2ksf_n([y_{ij}])
\end{aligned}$$

    for all $a,b\in X$ and all $x=[x_{ij}],\,y=[y_{ij}]\in M_n(X)$, where, as usual, $f_n([x_{ij}])=[f(x_{ij})]$ for $[x_{ij}]\in M_n(X)$.

    We start with the following lemmas which will be used in this paper.

Lemma 3.1. ([25]) Let $V$ and $W$ be real vector spaces. If an odd mapping $f: V\to W$ satisfies the functional Eq (1.1), then $f$ is additive.

    Lemma 3.2. ([25]) Let $V$ and $W$ be real vector spaces. If an even mapping $f: V\to W$ satisfies the functional Eq (1.1), then $f$ is quadratic.

Theorem 3.3. Let $\varphi_o: X^{2}\to[0,\infty)$ be a function such that for some real number $\alpha$ with $0<\alpha<k$,

$$\varphi_o(ka,kb)=\alpha\varphi_o(a,b) \tag{3.1}$$

    for all $a,b\in X$. Suppose that an odd mapping $f: X\to Y$ satisfies the inequality

$$\begin{cases}\mu_n(Df_n([x_{ij}],[y_{ij}]),t)\geq\dfrac{t}{t+\sum_{i,j=1}^{n}\varphi_o(x_{ij},y_{ij})},\\[2mm] \nu_n(Df_n([x_{ij}],[y_{ij}]),t)\leq\dfrac{\sum_{i,j=1}^{n}\varphi_o(x_{ij},y_{ij})}{t+\sum_{i,j=1}^{n}\varphi_o(x_{ij},y_{ij})}\end{cases} \tag{3.2}$$

    for all $x=[x_{ij}],\,y=[y_{ij}]\in M_n(X)$ and all $t>0$. Then there exists a unique additive mapping $A: X\to Y$ such that

$$\begin{cases}\mu_n(f_n([x_{ij}])-A_n([x_{ij}]),t)\geq\dfrac{(k-\alpha)(2k+s-1)t}{(k-\alpha)(2k+s-1)t+n^{2}\sum_{i,j=1}^{n}\varphi_o(0,x_{ij})},\\[2mm] \nu_n(f_n([x_{ij}])-A_n([x_{ij}]),t)\leq\dfrac{n^{2}\sum_{i,j=1}^{n}\varphi_o(0,x_{ij})}{(k-\alpha)(2k+s-1)t+n^{2}\sum_{i,j=1}^{n}\varphi_o(0,x_{ij})}\end{cases} \tag{3.3}$$

    for all $x=[x_{ij}]\in M_n(X)$ and all $t>0$.

Proof. When $n=1$, (3.2) is equivalent to

$$\mu(Df(a,b),t)\geq\frac{t}{t+\varphi_o(a,b)}\quad\text{and}\quad\nu(Df(a,b),t)\leq\frac{\varphi_o(a,b)}{t+\varphi_o(a,b)} \tag{3.4}$$

    for all $a,b\in X$ and all $t>0$. Putting $a=0$ in (3.4) and using the oddness of $f$ (so that $f(0)=0$ and $f(-b)=-f(b)$, whence $Df(0,b)=2(2k+s-1)f(kb)-2(2k+s-1)kf(b)$), we have

$$\begin{cases}\mu\big(2(2k+s-1)f(kb)-2(2k+s-1)kf(b),t\big)\geq\dfrac{t}{t+\varphi_o(0,b)},\\[2mm] \nu\big(2(2k+s-1)f(kb)-2(2k+s-1)kf(b),t\big)\leq\dfrac{\varphi_o(0,b)}{t+\varphi_o(0,b)}\end{cases} \tag{3.5}$$

    for all $b\in X$ and all $t>0$. Replacing $b$ by $k^{p}a$ in (3.5) and using (3.1), we get

$$\begin{cases}\mu\Big(\dfrac{f(k^{p+1}a)}{k^{p+1}}-\dfrac{f(k^{p}a)}{k^{p}},\dfrac{t}{2k(2k+s-1)k^{p}}\Big)\geq\dfrac{t}{t+\alpha^{p}\varphi_o(0,a)},\\[2mm] \nu\Big(\dfrac{f(k^{p+1}a)}{k^{p+1}}-\dfrac{f(k^{p}a)}{k^{p}},\dfrac{t}{2k(2k+s-1)k^{p}}\Big)\leq\dfrac{\alpha^{p}\varphi_o(0,a)}{t+\alpha^{p}\varphi_o(0,a)}\end{cases} \tag{3.6}$$

    for all $a\in X$ and all $t>0$. It follows from (3.6) that

$$\begin{cases}\mu\Big(\dfrac{f(k^{p+1}a)}{k^{p+1}}-\dfrac{f(k^{p}a)}{k^{p}},\dfrac{\alpha^{p}t}{2k(2k+s-1)k^{p}}\Big)\geq\dfrac{t}{t+\varphi_o(0,a)},\\[2mm] \nu\Big(\dfrac{f(k^{p+1}a)}{k^{p+1}}-\dfrac{f(k^{p}a)}{k^{p}},\dfrac{\alpha^{p}t}{2k(2k+s-1)k^{p}}\Big)\leq\dfrac{\varphi_o(0,a)}{t+\varphi_o(0,a)}\end{cases} \tag{3.7}$$

    for all $a\in X$ and all $t>0$. It follows from

$$\frac{f(k^{p}a)}{k^{p}}-f(a)=\sum_{\ell=0}^{p-1}\Big(\frac{f(k^{\ell+1}a)}{k^{\ell+1}}-\frac{f(k^{\ell}a)}{k^{\ell}}\Big)$$

    and (3.7) that

$$\begin{cases}\mu\Big(\dfrac{f(k^{p}a)}{k^{p}}-f(a),\displaystyle\sum_{\ell=0}^{p-1}\dfrac{\alpha^{\ell}t}{2k(2k+s-1)k^{\ell}}\Big)\geq\prod_{\ell=0}^{p-1}{}^{*}\,\mu\Big(\dfrac{f(k^{\ell+1}a)}{k^{\ell+1}}-\dfrac{f(k^{\ell}a)}{k^{\ell}},\dfrac{\alpha^{\ell}t}{2k(2k+s-1)k^{\ell}}\Big)\geq\dfrac{t}{t+\varphi_o(0,a)},\\[2mm] \nu\Big(\dfrac{f(k^{p}a)}{k^{p}}-f(a),\displaystyle\sum_{\ell=0}^{p-1}\dfrac{\alpha^{\ell}t}{2k(2k+s-1)k^{\ell}}\Big)\leq\prod_{\ell=0}^{p-1}{}^{\diamond}\,\nu\Big(\dfrac{f(k^{\ell+1}a)}{k^{\ell+1}}-\dfrac{f(k^{\ell}a)}{k^{\ell}},\dfrac{\alpha^{\ell}t}{2k(2k+s-1)k^{\ell}}\Big)\leq\dfrac{\varphi_o(0,a)}{t+\varphi_o(0,a)}\end{cases} \tag{3.8}$$

    for all $a\in X$ and all $t>0$, where

$$\prod_{j=1}^{p}{}^{*}\,a_j=a_1*a_2*\cdots*a_p,\qquad \prod_{j=1}^{p}{}^{\diamond}\,a_j=a_1\diamond a_2\diamond\cdots\diamond a_p.$$

By replacing $a$ with $k^{q}a$ in (3.8), we have

$$\begin{cases}\mu\Big(\dfrac{f(k^{p+q}a)}{k^{p+q}}-\dfrac{f(k^{q}a)}{k^{q}},\displaystyle\sum_{\ell=0}^{p-1}\dfrac{\alpha^{\ell}t}{2k(2k+s-1)k^{\ell+q}}\Big)\geq\dfrac{t}{t+\alpha^{q}\varphi_o(0,a)},\\[2mm] \nu\Big(\dfrac{f(k^{p+q}a)}{k^{p+q}}-\dfrac{f(k^{q}a)}{k^{q}},\displaystyle\sum_{\ell=0}^{p-1}\dfrac{\alpha^{\ell}t}{2k(2k+s-1)k^{\ell+q}}\Big)\leq\dfrac{\alpha^{q}\varphi_o(0,a)}{t+\alpha^{q}\varphi_o(0,a)}\end{cases} \tag{3.9}$$

    for all $a\in X$, $t>0$, $p>0$ and $q>0$. Thus

$$\begin{cases}\mu\Big(\dfrac{f(k^{p+q}a)}{k^{p+q}}-\dfrac{f(k^{q}a)}{k^{q}},\displaystyle\sum_{\ell=q}^{p+q-1}\dfrac{\alpha^{\ell}t}{2k(2k+s-1)k^{\ell}}\Big)\geq\dfrac{t}{t+\varphi_o(0,a)},\\[2mm] \nu\Big(\dfrac{f(k^{p+q}a)}{k^{p+q}}-\dfrac{f(k^{q}a)}{k^{q}},\displaystyle\sum_{\ell=q}^{p+q-1}\dfrac{\alpha^{\ell}t}{2k(2k+s-1)k^{\ell}}\Big)\leq\dfrac{\varphi_o(0,a)}{t+\varphi_o(0,a)}\end{cases} \tag{3.10}$$

    for all $a\in X$, $t>0$, $p>0$ and $q>0$. Hence

$$\begin{cases}\mu\Big(\dfrac{f(k^{p+q}a)}{k^{p+q}}-\dfrac{f(k^{q}a)}{k^{q}},t\Big)\geq\dfrac{t}{t+\displaystyle\sum_{\ell=q}^{p+q-1}\dfrac{\alpha^{\ell}}{2k(2k+s-1)k^{\ell}}\varphi_o(0,a)},\\[2mm] \nu\Big(\dfrac{f(k^{p+q}a)}{k^{p+q}}-\dfrac{f(k^{q}a)}{k^{q}},t\Big)\leq\dfrac{\displaystyle\sum_{\ell=q}^{p+q-1}\dfrac{\alpha^{\ell}}{2k(2k+s-1)k^{\ell}}\varphi_o(0,a)}{t+\displaystyle\sum_{\ell=q}^{p+q-1}\dfrac{\alpha^{\ell}}{2k(2k+s-1)k^{\ell}}\varphi_o(0,a)}\end{cases} \tag{3.11}$$

    for all $a\in X$, $t>0$, $p>0$ and $q>0$. Since $0<\alpha<k$ and

$$\sum_{\ell=0}^{\infty}\frac{\alpha^{\ell}}{2k(2k+s-1)k^{\ell}}<\infty,$$

    the Cauchy criterion for convergence in an IFNS shows that $\big\{\frac{f(k^{p}a)}{k^{p}}\big\}$ is a Cauchy sequence in $(Y,\mu,\nu,*,\diamond)$. Since $(Y,\mu,\nu,*,\diamond)$ is an intuitionistic fuzzy Banach space, this sequence converges to some point $A(a)\in Y$. So one can define the mapping $A: X\to Y$ by

$$A(a):=(\mu,\nu)\text{-}\lim_{p\to\infty}\frac{f(k^{p}a)}{k^{p}}.$$

Moreover, if we put $q=0$ in (3.11), we get

$$\begin{cases}\mu\Big(\dfrac{f(k^{p}a)}{k^{p}}-f(a),t\Big)\geq\dfrac{t}{t+\displaystyle\sum_{\ell=0}^{p-1}\dfrac{\alpha^{\ell}}{2k(2k+s-1)k^{\ell}}\varphi_o(0,a)},\\[2mm] \nu\Big(\dfrac{f(k^{p}a)}{k^{p}}-f(a),t\Big)\leq\dfrac{\displaystyle\sum_{\ell=0}^{p-1}\dfrac{\alpha^{\ell}}{2k(2k+s-1)k^{\ell}}\varphi_o(0,a)}{t+\displaystyle\sum_{\ell=0}^{p-1}\dfrac{\alpha^{\ell}}{2k(2k+s-1)k^{\ell}}\varphi_o(0,a)}\end{cases} \tag{3.12}$$

    for all $a\in X$, $t>0$ and $p>0$. Thus, we obtain

$$\begin{cases}\mu(f(a)-A(a),t)\geq\mu\Big(f(a)-\dfrac{f(k^{p}a)}{k^{p}},\dfrac{t}{2}\Big)*\mu\Big(\dfrac{f(k^{p}a)}{k^{p}}-A(a),\dfrac{t}{2}\Big)\geq\dfrac{t}{t+\displaystyle\sum_{\ell=0}^{p-1}\dfrac{\alpha^{\ell}}{k(2k+s-1)k^{\ell}}\varphi_o(0,a)},\\[2mm] \nu(f(a)-A(a),t)\leq\nu\Big(f(a)-\dfrac{f(k^{p}a)}{k^{p}},\dfrac{t}{2}\Big)\diamond\nu\Big(\dfrac{f(k^{p}a)}{k^{p}}-A(a),\dfrac{t}{2}\Big)\leq\dfrac{\displaystyle\sum_{\ell=0}^{p-1}\dfrac{\alpha^{\ell}}{k(2k+s-1)k^{\ell}}\varphi_o(0,a)}{t+\displaystyle\sum_{\ell=0}^{p-1}\dfrac{\alpha^{\ell}}{k(2k+s-1)k^{\ell}}\varphi_o(0,a)}\end{cases} \tag{3.13}$$

    for every $a\in X$, $t>0$ and sufficiently large $p$. Taking the limit as $p\to\infty$ and using the definition of an IFNS, we get

$$\begin{cases}\mu(f(a)-A(a),t)\geq\dfrac{(k-\alpha)(2k+s-1)t}{(k-\alpha)(2k+s-1)t+\varphi_o(0,a)},\\[2mm] \nu(f(a)-A(a),t)\leq\dfrac{\varphi_o(0,a)}{(k-\alpha)(2k+s-1)t+\varphi_o(0,a)}.\end{cases} \tag{3.14}$$
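The constant in (3.14) is simply the value of the geometric series from (3.13); we record the short computation (added for readability): since $0<\alpha<k$,

$$\lim_{p\to\infty}\sum_{\ell=0}^{p-1}\frac{\alpha^{\ell}}{k(2k+s-1)k^{\ell}}\varphi_o(0,a)=\frac{\varphi_o(0,a)}{k(2k+s-1)}\cdot\frac{1}{1-\frac{\alpha}{k}}=\frac{\varphi_o(0,a)}{(k-\alpha)(2k+s-1)},$$

and substituting this limit into the bounds of (3.13) gives exactly (3.14).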

Replacing $a$ and $b$ by $k^{p}a$ and $k^{p}b$ in (3.4), respectively, and using (3.1), we obtain

$$\mu\Big(\frac{1}{k^{p}}Df(k^{p}a,k^{p}b),t\Big)\geq\frac{t}{t+\big(\frac{\alpha}{k}\big)^{p}\varphi_o(a,b)}\quad\text{and}\quad\nu\Big(\frac{1}{k^{p}}Df(k^{p}a,k^{p}b),t\Big)\leq\frac{\big(\frac{\alpha}{k}\big)^{p}\varphi_o(a,b)}{t+\big(\frac{\alpha}{k}\big)^{p}\varphi_o(a,b)} \tag{3.15}$$

    for all $a,b\in X$ and all $t>0$. Letting $p\to\infty$ in (3.15), we obtain

$$\mu(DA(a,b),t)=1\quad\text{and}\quad\nu(DA(a,b),t)=0 \tag{3.16}$$

    for all $a,b\in X$ and all $t>0$. This means that $A$ satisfies the functional Eq (1.1). Since $f: X\to Y$ is an odd mapping, the definition of $A$ gives $A(-a)=-A(a)$ for all $a\in X$. Thus, by Lemma 3.1, the mapping $A: X\to Y$ is additive. To prove the uniqueness of $A$, let $A': X\to Y$ be another additive mapping satisfying (3.14). Let $n=1$. Then we have

$$\begin{cases}\mu(A(a)-A'(a),t)=\mu\Big(\dfrac{A(k^{p}a)}{k^{p}}-\dfrac{A'(k^{p}a)}{k^{p}},t\Big)\geq\mu\Big(\dfrac{A(k^{p}a)}{k^{p}}-\dfrac{f(k^{p}a)}{k^{p}},\dfrac{t}{2}\Big)*\mu\Big(\dfrac{f(k^{p}a)}{k^{p}}-\dfrac{A'(k^{p}a)}{k^{p}},\dfrac{t}{2}\Big)\geq\dfrac{(k-\alpha)(2k+s-1)t}{(k-\alpha)(2k+s-1)t+2\big(\frac{\alpha}{k}\big)^{p}\varphi_o(0,a)},\\[2mm] \nu(A(a)-A'(a),t)=\nu\Big(\dfrac{A(k^{p}a)}{k^{p}}-\dfrac{A'(k^{p}a)}{k^{p}},t\Big)\leq\nu\Big(\dfrac{A(k^{p}a)}{k^{p}}-\dfrac{f(k^{p}a)}{k^{p}},\dfrac{t}{2}\Big)\diamond\nu\Big(\dfrac{f(k^{p}a)}{k^{p}}-\dfrac{A'(k^{p}a)}{k^{p}},\dfrac{t}{2}\Big)\leq\dfrac{2\big(\frac{\alpha}{k}\big)^{p}\varphi_o(0,a)}{(k-\alpha)(2k+s-1)t+2\big(\frac{\alpha}{k}\big)^{p}\varphi_o(0,a)}\end{cases} \tag{3.17}$$

    for all $a\in X$, $t>0$ and $p>0$. Letting $p\to\infty$ in (3.17), we get

$$\mu(A(a)-A'(a),t)=1\quad\text{and}\quad\nu(A(a)-A'(a),t)=0$$

    for all $a\in X$ and $t>0$. Hence $A(a)=A'(a)$ for all $a\in X$, so $A: X\to Y$ is the unique such additive mapping.

    By Lemma 2.3 and (3.14), we get

$$\begin{cases}\mu_n(f_n([x_{ij}])-A_n([x_{ij}]),t)\geq\min\Big\{\mu\Big(f(x_{ij})-A(x_{ij}),\dfrac{t}{n^{2}}\Big): i,j=1,\ldots,n\Big\}\geq\dfrac{(k-\alpha)(2k+s-1)t}{(k-\alpha)(2k+s-1)t+n^{2}\sum_{i,j=1}^{n}\varphi_o(0,x_{ij})},\\[2mm] \nu_n(f_n([x_{ij}])-A_n([x_{ij}]),t)\leq\max\Big\{\nu\Big(f(x_{ij})-A(x_{ij}),\dfrac{t}{n^{2}}\Big): i,j=1,\ldots,n\Big\}\leq\dfrac{n^{2}\sum_{i,j=1}^{n}\varphi_o(0,x_{ij})}{(k-\alpha)(2k+s-1)t+n^{2}\sum_{i,j=1}^{n}\varphi_o(0,x_{ij})}\end{cases}$$

    for all $x=[x_{ij}]\in M_n(X)$ and all $t>0$. Thus $A: X\to Y$ is the unique additive mapping satisfying (3.3), as desired. This completes the proof of the theorem.

Theorem 3.4. Let $\varphi_e: X^{2}\to[0,\infty)$ be a function such that for some real number $\alpha$ with $0<\alpha<k^{2}$,

$$\varphi_e(ka,kb)=\alpha\varphi_e(a,b) \tag{3.18}$$

    for all $a,b\in X$. Suppose that an even mapping $f: X\to Y$ with $f(0)=0$ satisfies the inequality

$$\begin{cases}\mu_n(Df_n([x_{ij}],[y_{ij}]),t)\geq\dfrac{t}{t+\sum_{i,j=1}^{n}\varphi_e(x_{ij},y_{ij})},\\[2mm] \nu_n(Df_n([x_{ij}],[y_{ij}]),t)\leq\dfrac{\sum_{i,j=1}^{n}\varphi_e(x_{ij},y_{ij})}{t+\sum_{i,j=1}^{n}\varphi_e(x_{ij},y_{ij})}\end{cases} \tag{3.19}$$

    for all $x=[x_{ij}],\,y=[y_{ij}]\in M_n(X)$ and all $t>0$. Then there exists a unique quadratic mapping $Q: X\to Y$ such that

$$\begin{cases}\mu_n(f_n([x_{ij}])-Q_n([x_{ij}]),t)\geq\dfrac{(k^{2}-\alpha)(2k+s-1)t}{(k^{2}-\alpha)(2k+s-1)t+n^{2}\sum_{i,j=1}^{n}\varphi_e(0,x_{ij})},\\[2mm] \nu_n(f_n([x_{ij}])-Q_n([x_{ij}]),t)\leq\dfrac{n^{2}\sum_{i,j=1}^{n}\varphi_e(0,x_{ij})}{(k^{2}-\alpha)(2k+s-1)t+n^{2}\sum_{i,j=1}^{n}\varphi_e(0,x_{ij})}\end{cases} \tag{3.20}$$

    for all $x=[x_{ij}]\in M_n(X)$ and all $t>0$.

Proof. When $n=1$, (3.19) is equivalent to

$$\mu(Df(a,b),t)\geq\frac{t}{t+\varphi_e(a,b)}\quad\text{and}\quad\nu(Df(a,b),t)\leq\frac{\varphi_e(a,b)}{t+\varphi_e(a,b)} \tag{3.21}$$

    for all $a,b\in X$ and all $t>0$. Letting $a=0$ in (3.21) and using the evenness of $f$ together with $f(0)=0$ (so that $Df(0,b)=2(2k+s-1)f(kb)-2(2k+s-1)k^{2}f(b)$), we obtain

$$\begin{cases}\mu\big(2(2k+s-1)f(kb)-2(2k+s-1)k^{2}f(b),t\big)\geq\dfrac{t}{t+\varphi_e(0,b)},\\[2mm] \nu\big(2(2k+s-1)f(kb)-2(2k+s-1)k^{2}f(b),t\big)\leq\dfrac{\varphi_e(0,b)}{t+\varphi_e(0,b)}\end{cases} \tag{3.22}$$

    for all $b\in X$ and all $t>0$. Replacing $b$ by $k^{p}a$ in (3.22) and using (3.18), we get

$$\begin{cases}\mu\Big(\dfrac{f(k^{p+1}a)}{k^{2(p+1)}}-\dfrac{f(k^{p}a)}{k^{2p}},\dfrac{t}{2k^{2}(2k+s-1)k^{2p}}\Big)\geq\dfrac{t}{t+\alpha^{p}\varphi_e(0,a)},\\[2mm] \nu\Big(\dfrac{f(k^{p+1}a)}{k^{2(p+1)}}-\dfrac{f(k^{p}a)}{k^{2p}},\dfrac{t}{2k^{2}(2k+s-1)k^{2p}}\Big)\leq\dfrac{\alpha^{p}\varphi_e(0,a)}{t+\alpha^{p}\varphi_e(0,a)}\end{cases} \tag{3.23}$$

    for all $a\in X$ and all $t>0$. It follows from (3.23) that

$$\begin{cases}\mu\Big(\dfrac{f(k^{p+1}a)}{k^{2(p+1)}}-\dfrac{f(k^{p}a)}{k^{2p}},\dfrac{\alpha^{p}t}{2k^{2}(2k+s-1)k^{2p}}\Big)\geq\dfrac{t}{t+\varphi_e(0,a)},\\[2mm] \nu\Big(\dfrac{f(k^{p+1}a)}{k^{2(p+1)}}-\dfrac{f(k^{p}a)}{k^{2p}},\dfrac{\alpha^{p}t}{2k^{2}(2k+s-1)k^{2p}}\Big)\leq\dfrac{\varphi_e(0,a)}{t+\varphi_e(0,a)}\end{cases} \tag{3.24}$$

    for all $a\in X$ and all $t>0$. It follows from

$$\frac{f(k^{p}a)}{k^{2p}}-f(a)=\sum_{\ell=0}^{p-1}\Big(\frac{f(k^{\ell+1}a)}{k^{2(\ell+1)}}-\frac{f(k^{\ell}a)}{k^{2\ell}}\Big)$$

    and (3.24) that

$$\begin{cases}\mu\Big(\dfrac{f(k^{p}a)}{k^{2p}}-f(a),\displaystyle\sum_{\ell=0}^{p-1}\dfrac{\alpha^{\ell}t}{2k^{2}(2k+s-1)k^{2\ell}}\Big)\geq\prod_{\ell=0}^{p-1}{}^{*}\,\mu\Big(\dfrac{f(k^{\ell+1}a)}{k^{2(\ell+1)}}-\dfrac{f(k^{\ell}a)}{k^{2\ell}},\dfrac{\alpha^{\ell}t}{2k^{2}(2k+s-1)k^{2\ell}}\Big)\geq\dfrac{t}{t+\varphi_e(0,a)},\\[2mm] \nu\Big(\dfrac{f(k^{p}a)}{k^{2p}}-f(a),\displaystyle\sum_{\ell=0}^{p-1}\dfrac{\alpha^{\ell}t}{2k^{2}(2k+s-1)k^{2\ell}}\Big)\leq\prod_{\ell=0}^{p-1}{}^{\diamond}\,\nu\Big(\dfrac{f(k^{\ell+1}a)}{k^{2(\ell+1)}}-\dfrac{f(k^{\ell}a)}{k^{2\ell}},\dfrac{\alpha^{\ell}t}{2k^{2}(2k+s-1)k^{2\ell}}\Big)\leq\dfrac{\varphi_e(0,a)}{t+\varphi_e(0,a)}\end{cases} \tag{3.25}$$

    for all $a\in X$ and all $t>0$, where the $*$- and $\diamond$-products are as in the proof of Theorem 3.3. By replacing $a$ with $k^{q}a$ in (3.25), we have

$$\begin{cases}\mu\Big(\dfrac{f(k^{p+q}a)}{k^{2(p+q)}}-\dfrac{f(k^{q}a)}{k^{2q}},\displaystyle\sum_{\ell=0}^{p-1}\dfrac{\alpha^{\ell}t}{2k^{2}(2k+s-1)k^{2(\ell+q)}}\Big)\geq\dfrac{t}{t+\alpha^{q}\varphi_e(0,a)},\\[2mm] \nu\Big(\dfrac{f(k^{p+q}a)}{k^{2(p+q)}}-\dfrac{f(k^{q}a)}{k^{2q}},\displaystyle\sum_{\ell=0}^{p-1}\dfrac{\alpha^{\ell}t}{2k^{2}(2k+s-1)k^{2(\ell+q)}}\Big)\leq\dfrac{\alpha^{q}\varphi_e(0,a)}{t+\alpha^{q}\varphi_e(0,a)}\end{cases} \tag{3.26}$$

    for all $a\in X$, $t>0$, $p>0$ and $q>0$. Thus

$$\begin{cases}\mu\Big(\dfrac{f(k^{p+q}a)}{k^{2(p+q)}}-\dfrac{f(k^{q}a)}{k^{2q}},\displaystyle\sum_{\ell=q}^{p+q-1}\dfrac{\alpha^{\ell}t}{2k^{2}(2k+s-1)k^{2\ell}}\Big)\geq\dfrac{t}{t+\varphi_e(0,a)},\\[2mm] \nu\Big(\dfrac{f(k^{p+q}a)}{k^{2(p+q)}}-\dfrac{f(k^{q}a)}{k^{2q}},\displaystyle\sum_{\ell=q}^{p+q-1}\dfrac{\alpha^{\ell}t}{2k^{2}(2k+s-1)k^{2\ell}}\Big)\leq\dfrac{\varphi_e(0,a)}{t+\varphi_e(0,a)}\end{cases} \tag{3.27}$$

    for all $a\in X$, $t>0$, $p>0$ and $q>0$. Hence

$$\begin{cases}\mu\Big(\dfrac{f(k^{p+q}a)}{k^{2(p+q)}}-\dfrac{f(k^{q}a)}{k^{2q}},t\Big)\geq\dfrac{t}{t+\displaystyle\sum_{\ell=q}^{p+q-1}\dfrac{\alpha^{\ell}}{2k^{2}(2k+s-1)k^{2\ell}}\varphi_e(0,a)},\\[2mm] \nu\Big(\dfrac{f(k^{p+q}a)}{k^{2(p+q)}}-\dfrac{f(k^{q}a)}{k^{2q}},t\Big)\leq\dfrac{\displaystyle\sum_{\ell=q}^{p+q-1}\dfrac{\alpha^{\ell}}{2k^{2}(2k+s-1)k^{2\ell}}\varphi_e(0,a)}{t+\displaystyle\sum_{\ell=q}^{p+q-1}\dfrac{\alpha^{\ell}}{2k^{2}(2k+s-1)k^{2\ell}}\varphi_e(0,a)}\end{cases} \tag{3.28}$$

    for all $a\in X$, $t>0$, $p>0$ and $q>0$. Since $0<\alpha<k^{2}$ and

$$\sum_{\ell=0}^{\infty}\frac{\alpha^{\ell}}{2k^{2}(2k+s-1)k^{2\ell}}<\infty,$$

    the Cauchy criterion for convergence in an IFNS shows that $\big\{\frac{f(k^{p}a)}{k^{2p}}\big\}$ is a Cauchy sequence in $(Y,\mu,\nu,*,\diamond)$. Since $(Y,\mu,\nu,*,\diamond)$ is an intuitionistic fuzzy Banach space, this sequence converges to some point $Q(a)\in Y$. So one can define the mapping $Q: X\to Y$ by

$$Q(a):=(\mu,\nu)\text{-}\lim_{p\to\infty}\frac{f(k^{p}a)}{k^{2p}}.$$

    Moreover, if we put $q=0$ in (3.28), we get

$$\begin{cases}\mu\Big(\dfrac{f(k^{p}a)}{k^{2p}}-f(a),t\Big)\geq\dfrac{t}{t+\displaystyle\sum_{\ell=0}^{p-1}\dfrac{\alpha^{\ell}}{2k^{2}(2k+s-1)k^{2\ell}}\varphi_e(0,a)},\\[2mm] \nu\Big(\dfrac{f(k^{p}a)}{k^{2p}}-f(a),t\Big)\leq\dfrac{\displaystyle\sum_{\ell=0}^{p-1}\dfrac{\alpha^{\ell}}{2k^{2}(2k+s-1)k^{2\ell}}\varphi_e(0,a)}{t+\displaystyle\sum_{\ell=0}^{p-1}\dfrac{\alpha^{\ell}}{2k^{2}(2k+s-1)k^{2\ell}}\varphi_e(0,a)}\end{cases} \tag{3.29}$$

    for all $a\in X$, $t>0$ and $p>0$. Thus, we obtain

$$\begin{cases}\mu(f(a)-Q(a),t)\geq\mu\Big(f(a)-\dfrac{f(k^{p}a)}{k^{2p}},\dfrac{t}{2}\Big)*\mu\Big(\dfrac{f(k^{p}a)}{k^{2p}}-Q(a),\dfrac{t}{2}\Big)\geq\dfrac{t}{t+\displaystyle\sum_{\ell=0}^{p-1}\dfrac{\alpha^{\ell}}{k^{2}(2k+s-1)k^{2\ell}}\varphi_e(0,a)},\\[2mm] \nu(f(a)-Q(a),t)\leq\nu\Big(f(a)-\dfrac{f(k^{p}a)}{k^{2p}},\dfrac{t}{2}\Big)\diamond\nu\Big(\dfrac{f(k^{p}a)}{k^{2p}}-Q(a),\dfrac{t}{2}\Big)\leq\dfrac{\displaystyle\sum_{\ell=0}^{p-1}\dfrac{\alpha^{\ell}}{k^{2}(2k+s-1)k^{2\ell}}\varphi_e(0,a)}{t+\displaystyle\sum_{\ell=0}^{p-1}\dfrac{\alpha^{\ell}}{k^{2}(2k+s-1)k^{2\ell}}\varphi_e(0,a)}\end{cases} \tag{3.30}$$

    for every $a\in X$, $t>0$ and sufficiently large $p$. Taking the limit as $p\to\infty$ and using the definition of an IFNS, we get

$$\begin{cases}\mu(f(a)-Q(a),t)\geq\dfrac{(k^{2}-\alpha)(2k+s-1)t}{(k^{2}-\alpha)(2k+s-1)t+\varphi_e(0,a)},\\[2mm] \nu(f(a)-Q(a),t)\leq\dfrac{\varphi_e(0,a)}{(k^{2}-\alpha)(2k+s-1)t+\varphi_e(0,a)}.\end{cases} \tag{3.31}$$

    Replacing $a$ and $b$ by $k^{p}a$ and $k^{p}b$ in (3.21), respectively, and using (3.18), we obtain

$$\mu\Big(\frac{1}{k^{2p}}Df(k^{p}a,k^{p}b),t\Big)\geq\frac{t}{t+\big(\frac{\alpha}{k^{2}}\big)^{p}\varphi_e(a,b)}\quad\text{and}\quad\nu\Big(\frac{1}{k^{2p}}Df(k^{p}a,k^{p}b),t\Big)\leq\frac{\big(\frac{\alpha}{k^{2}}\big)^{p}\varphi_e(a,b)}{t+\big(\frac{\alpha}{k^{2}}\big)^{p}\varphi_e(a,b)} \tag{3.32}$$

    for all $a,b\in X$ and all $t>0$. Letting $p\to\infty$ in (3.32), we obtain

$$\mu(DQ(a,b),t)=1\quad\text{and}\quad\nu(DQ(a,b),t)=0 \tag{3.33}$$

    for all $a,b\in X$ and all $t>0$. This means that $Q$ satisfies the functional Eq (1.1). Since $f: X\to Y$ is an even mapping, the definition of $Q$ gives $Q(-a)=Q(a)$ for all $a\in X$. Thus, by Lemma 3.2, the mapping $Q: X\to Y$ is quadratic. To prove the uniqueness of $Q$, let $Q': X\to Y$ be another quadratic mapping satisfying (3.31). Let $n=1$. Then we have

$$\begin{cases}\mu(Q(a)-Q'(a),t)=\mu\Big(\dfrac{Q(k^{p}a)}{k^{2p}}-\dfrac{Q'(k^{p}a)}{k^{2p}},t\Big)\geq\mu\Big(\dfrac{Q(k^{p}a)}{k^{2p}}-\dfrac{f(k^{p}a)}{k^{2p}},\dfrac{t}{2}\Big)*\mu\Big(\dfrac{f(k^{p}a)}{k^{2p}}-\dfrac{Q'(k^{p}a)}{k^{2p}},\dfrac{t}{2}\Big)\geq\dfrac{(k^{2}-\alpha)(2k+s-1)t}{(k^{2}-\alpha)(2k+s-1)t+2\big(\frac{\alpha}{k^{2}}\big)^{p}\varphi_e(0,a)},\\[2mm] \nu(Q(a)-Q'(a),t)=\nu\Big(\dfrac{Q(k^{p}a)}{k^{2p}}-\dfrac{Q'(k^{p}a)}{k^{2p}},t\Big)\leq\nu\Big(\dfrac{Q(k^{p}a)}{k^{2p}}-\dfrac{f(k^{p}a)}{k^{2p}},\dfrac{t}{2}\Big)\diamond\nu\Big(\dfrac{f(k^{p}a)}{k^{2p}}-\dfrac{Q'(k^{p}a)}{k^{2p}},\dfrac{t}{2}\Big)\leq\dfrac{2\big(\frac{\alpha}{k^{2}}\big)^{p}\varphi_e(0,a)}{(k^{2}-\alpha)(2k+s-1)t+2\big(\frac{\alpha}{k^{2}}\big)^{p}\varphi_e(0,a)}\end{cases} \tag{3.34}$$

    for all $a\in X$, $t>0$ and $p>0$. Letting $p\to\infty$ in (3.34), we get

$$\mu(Q(a)-Q'(a),t)=1\quad\text{and}\quad\nu(Q(a)-Q'(a),t)=0$$

    for all $a\in X$ and $t>0$. Hence $Q(a)=Q'(a)$ for all $a\in X$, so $Q: X\to Y$ is the unique such quadratic mapping.

    By Lemma 2.3 and (3.31), we get

$$\begin{cases}\mu_n(f_n([x_{ij}])-Q_n([x_{ij}]),t)\geq\min\Big\{\mu\Big(f(x_{ij})-Q(x_{ij}),\dfrac{t}{n^{2}}\Big): i,j=1,\ldots,n\Big\}\geq\dfrac{(k^{2}-\alpha)(2k+s-1)t}{(k^{2}-\alpha)(2k+s-1)t+n^{2}\sum_{i,j=1}^{n}\varphi_e(0,x_{ij})},\\[2mm] \nu_n(f_n([x_{ij}])-Q_n([x_{ij}]),t)\leq\max\Big\{\nu\Big(f(x_{ij})-Q(x_{ij}),\dfrac{t}{n^{2}}\Big): i,j=1,\ldots,n\Big\}\leq\dfrac{n^{2}\sum_{i,j=1}^{n}\varphi_e(0,x_{ij})}{(k^{2}-\alpha)(2k+s-1)t+n^{2}\sum_{i,j=1}^{n}\varphi_e(0,x_{ij})}\end{cases}$$

    for all $x=[x_{ij}]\in M_n(X)$ and all $t>0$. Thus $Q: X\to Y$ is the unique quadratic mapping satisfying (3.20), as desired. This completes the proof of the theorem.
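As in the odd case, the constant in (3.31) is the limit of a geometric series (recorded here for readability): for $0<\alpha<k^{2}$,

$$\lim_{p\to\infty}\sum_{\ell=0}^{p-1}\frac{\alpha^{\ell}}{k^{2}(2k+s-1)k^{2\ell}}\varphi_e(0,a)=\frac{\varphi_e(0,a)}{k^{2}(2k+s-1)}\cdot\frac{1}{1-\frac{\alpha}{k^{2}}}=\frac{\varphi_e(0,a)}{(k^{2}-\alpha)(2k+s-1)},$$

which, substituted into (3.30), gives (3.31).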

Theorem 3.5. Let $\varphi: X^{2}\to[0,\infty)$ be a function such that for some real number $\alpha$ with $0<\alpha<k$,

$$\varphi(ka,kb)=\alpha\varphi(a,b) \tag{3.35}$$

    for all $a,b\in X$. Suppose that a mapping $f: X\to Y$ with $f(0)=0$ satisfies the inequality

$$\begin{cases}\mu_n(Df_n([x_{ij}],[y_{ij}]),t)\geq\dfrac{t}{t+\sum_{i,j=1}^{n}\varphi(x_{ij},y_{ij})},\\[2mm] \nu_n(Df_n([x_{ij}],[y_{ij}]),t)\leq\dfrac{\sum_{i,j=1}^{n}\varphi(x_{ij},y_{ij})}{t+\sum_{i,j=1}^{n}\varphi(x_{ij},y_{ij})}\end{cases} \tag{3.36}$$

    for all $x=[x_{ij}],\,y=[y_{ij}]\in M_n(X)$ and all $t>0$. Then there exist a unique quadratic mapping $Q: X\to Y$ and a unique additive mapping $A: X\to Y$ such that

$$\begin{cases}\mu_n(f_n([x_{ij}])-Q_n([x_{ij}])-A_n([x_{ij}]),t)\geq\dfrac{(k-\alpha)(2k+s-1)t}{(k-\alpha)(2k+s-1)t+2n^{2}\sum_{i,j=1}^{n}\tilde{\varphi}(0,x_{ij})},\\[2mm] \nu_n(f_n([x_{ij}])-Q_n([x_{ij}])-A_n([x_{ij}]),t)\leq\dfrac{2n^{2}\sum_{i,j=1}^{n}\tilde{\varphi}(0,x_{ij})}{(k-\alpha)(2k+s-1)t+2n^{2}\sum_{i,j=1}^{n}\tilde{\varphi}(0,x_{ij})}\end{cases} \tag{3.37}$$

    for all $x=[x_{ij}]\in M_n(X)$ and all $t>0$, where $\tilde{\varphi}(a,b)=\varphi(a,b)+\varphi(-a,-b)$ for all $a,b\in X$.

Proof. When $n=1$, (3.36) is equivalent to

$$\mu(Df(a,b),t)\geq\frac{t}{t+\varphi(a,b)}\quad\text{and}\quad\nu(Df(a,b),t)\leq\frac{\varphi(a,b)}{t+\varphi(a,b)} \tag{3.38}$$

    for all $a,b\in X$ and all $t>0$. Let

$$f_e(a)=\frac{f(a)+f(-a)}{2}$$

    for all $a\in X$. Then $f_e(0)=0$ and $f_e(-a)=f_e(a)$, and we have

$$\begin{cases}\mu(Df_e(a,b),t)=\mu\Big(\dfrac{1}{2}Df(a,b)+\dfrac{1}{2}Df(-a,-b),t\Big)=\mu\big(Df(a,b)+Df(-a,-b),2t\big)\geq\mu(Df(a,b),t)*\mu(Df(-a,-b),t)\\ \qquad=\min\{\mu(Df(a,b),t),\mu(Df(-a,-b),t)\}\geq\dfrac{t}{t+\tilde{\varphi}(a,b)},\\[2mm] \nu(Df_e(a,b),t)=\nu\Big(\dfrac{1}{2}Df(a,b)+\dfrac{1}{2}Df(-a,-b),t\Big)=\nu\big(Df(a,b)+Df(-a,-b),2t\big)\leq\nu(Df(a,b),t)\diamond\nu(Df(-a,-b),t)\\ \qquad=\max\{\nu(Df(a,b),t),\nu(Df(-a,-b),t)\}\leq\dfrac{\tilde{\varphi}(a,b)}{t+\tilde{\varphi}(a,b)}\end{cases} \tag{3.39}$$

    for all $a,b\in X$ and all $t>0$. Let

$$f_o(a)=\frac{f(a)-f(-a)}{2}$$

    for all $a\in X$. Then $f_o(0)=0$ and $f_o(-a)=-f_o(a)$, and we obtain

$$\begin{cases}\mu(Df_o(a,b),t)=\mu\Big(\dfrac{1}{2}Df(a,b)-\dfrac{1}{2}Df(-a,-b),t\Big)=\mu\big(Df(a,b)-Df(-a,-b),2t\big)\geq\mu(Df(a,b),t)*\mu(Df(-a,-b),t)\\ \qquad=\min\{\mu(Df(a,b),t),\mu(Df(-a,-b),t)\}\geq\dfrac{t}{t+\tilde{\varphi}(a,b)},\\[2mm] \nu(Df_o(a,b),t)=\nu\Big(\dfrac{1}{2}Df(a,b)-\dfrac{1}{2}Df(-a,-b),t\Big)=\nu\big(Df(a,b)-Df(-a,-b),2t\big)\leq\nu(Df(a,b),t)\diamond\nu(Df(-a,-b),t)\\ \qquad=\max\{\nu(Df(a,b),t),\nu(Df(-a,-b),t)\}\leq\dfrac{\tilde{\varphi}(a,b)}{t+\tilde{\varphi}(a,b)}\end{cases} \tag{3.40}$$

    for all $a,b\in X$ and all $t>0$. It follows from the definition of $\tilde{\varphi}$ that $\tilde{\varphi}(ka,kb)=\alpha\tilde{\varphi}(a,b)$ for all $a,b\in X$. It is easy to check that the conditions of Theorems 3.3 and 3.4 are satisfied (for the even part, note that $0<\alpha<k<k^{2}$). Then, applying the proofs of Theorems 3.3 and 3.4, we know that there exist a unique quadratic mapping $Q: X\to Y$ and a unique additive mapping $A: X\to Y$ satisfying

$$\begin{cases}\mu(f_e(a)-Q(a),t)\geq\dfrac{(k^{2}-\alpha)(2k+s-1)t}{(k^{2}-\alpha)(2k+s-1)t+\tilde{\varphi}(0,a)},\\[2mm] \nu(f_e(a)-Q(a),t)\leq\dfrac{\tilde{\varphi}(0,a)}{(k^{2}-\alpha)(2k+s-1)t+\tilde{\varphi}(0,a)}\end{cases} \tag{3.41}$$

    and

$$\begin{cases}\mu(f_o(a)-A(a),t)\geq\dfrac{(k-\alpha)(2k+s-1)t}{(k-\alpha)(2k+s-1)t+\tilde{\varphi}(0,a)},\\[2mm] \nu(f_o(a)-A(a),t)\leq\dfrac{\tilde{\varphi}(0,a)}{(k-\alpha)(2k+s-1)t+\tilde{\varphi}(0,a)}\end{cases} \tag{3.42}$$

    for all $a\in X$ and all $t>0$. Therefore, since $f(a)=f_e(a)+f_o(a)$,

$$\begin{cases}\mu(f(a)-Q(a)-A(a),t)=\mu\big(f_e(a)-Q(a)+f_o(a)-A(a),t\big)\geq\mu\Big(f_e(a)-Q(a),\dfrac{t}{2}\Big)*\mu\Big(f_o(a)-A(a),\dfrac{t}{2}\Big)\\ \qquad\geq\min\Big\{\dfrac{(k^{2}-\alpha)(2k+s-1)t}{(k^{2}-\alpha)(2k+s-1)t+2\tilde{\varphi}(0,a)},\dfrac{(k-\alpha)(2k+s-1)t}{(k-\alpha)(2k+s-1)t+2\tilde{\varphi}(0,a)}\Big\}=\dfrac{(k-\alpha)(2k+s-1)t}{(k-\alpha)(2k+s-1)t+2\tilde{\varphi}(0,a)},\\[2mm] \nu(f(a)-Q(a)-A(a),t)=\nu\big(f_e(a)-Q(a)+f_o(a)-A(a),t\big)\leq\nu\Big(f_e(a)-Q(a),\dfrac{t}{2}\Big)\diamond\nu\Big(f_o(a)-A(a),\dfrac{t}{2}\Big)\\ \qquad\leq\max\Big\{\dfrac{2\tilde{\varphi}(0,a)}{(k^{2}-\alpha)(2k+s-1)t+2\tilde{\varphi}(0,a)},\dfrac{2\tilde{\varphi}(0,a)}{(k-\alpha)(2k+s-1)t+2\tilde{\varphi}(0,a)}\Big\}=\dfrac{2\tilde{\varphi}(0,a)}{(k-\alpha)(2k+s-1)t+2\tilde{\varphi}(0,a)}\end{cases} \tag{3.43}$$

    for all $a\in X$ and all $t>0$. By Lemma 2.3 and (3.43), we have

$$\begin{cases}\mu_n(f_n([x_{ij}])-Q_n([x_{ij}])-A_n([x_{ij}]),t)\geq\min\Big\{\mu\Big(f(x_{ij})-Q(x_{ij})-A(x_{ij}),\dfrac{t}{n^{2}}\Big): i,j=1,\ldots,n\Big\}\geq\dfrac{(k-\alpha)(2k+s-1)t}{(k-\alpha)(2k+s-1)t+2n^{2}\sum_{i,j=1}^{n}\tilde{\varphi}(0,x_{ij})},\\[2mm] \nu_n(f_n([x_{ij}])-Q_n([x_{ij}])-A_n([x_{ij}]),t)\leq\max\Big\{\nu\Big(f(x_{ij})-Q(x_{ij})-A(x_{ij}),\dfrac{t}{n^{2}}\Big): i,j=1,\ldots,n\Big\}\leq\dfrac{2n^{2}\sum_{i,j=1}^{n}\tilde{\varphi}(0,x_{ij})}{(k-\alpha)(2k+s-1)t+2n^{2}\sum_{i,j=1}^{n}\tilde{\varphi}(0,x_{ij})}\end{cases}$$

    for all $x=[x_{ij}]\in M_n(X)$ and all $t>0$. Thus there exist a unique quadratic mapping $Q: X\to Y$ and a unique additive mapping $A: X\to Y$ satisfying (3.37), as desired. This completes the proof of the theorem.

Corollary 3.6. Let $r,\theta$ be positive real numbers with $r<1$. Suppose that a mapping $f: X\to Y$ with $f(0)=0$ satisfies the inequality

$$\begin{cases}\mu_n(Df_n([x_{ij}],[y_{ij}]),t)\geq\dfrac{t}{t+\sum_{i,j=1}^{n}\theta(\|x_{ij}\|^{r}+\|y_{ij}\|^{r})},\\[2mm] \nu_n(Df_n([x_{ij}],[y_{ij}]),t)\leq\dfrac{\sum_{i,j=1}^{n}\theta(\|x_{ij}\|^{r}+\|y_{ij}\|^{r})}{t+\sum_{i,j=1}^{n}\theta(\|x_{ij}\|^{r}+\|y_{ij}\|^{r})}\end{cases} \tag{3.44}$$

    for all $x=[x_{ij}],\,y=[y_{ij}]\in M_n(X)$ and all $t>0$. Then there exist a unique quadratic mapping $Q: X\to Y$ and a unique additive mapping $A: X\to Y$ such that

$$\begin{cases}\mu_n(f_n([x_{ij}])-Q_n([x_{ij}])-A_n([x_{ij}]),t)\geq\dfrac{(k-k^{r})(2k+s-1)t}{(k-k^{r})(2k+s-1)t+4n^{2}\sum_{i,j=1}^{n}\theta\|x_{ij}\|^{r}},\\[2mm] \nu_n(f_n([x_{ij}])-Q_n([x_{ij}])-A_n([x_{ij}]),t)\leq\dfrac{4n^{2}\sum_{i,j=1}^{n}\theta\|x_{ij}\|^{r}}{(k-k^{r})(2k+s-1)t+4n^{2}\sum_{i,j=1}^{n}\theta\|x_{ij}\|^{r}}\end{cases} \tag{3.45}$$

    for all $x=[x_{ij}]\in M_n(X)$ and all $t>0$.

    Proof. Taking $\varphi(a,b)=\theta(\|a\|^{r}+\|b\|^{r})$ for all $a,b\in X$ in Theorem 3.5, we have $\varphi(ka,kb)=k^{r}\varphi(a,b)$, so (3.35) holds with $\alpha=k^{r}<k$ (because $r<1$ and $k>1$); moreover, $\tilde{\varphi}(0,a)=2\theta\|a\|^{r}$, which yields the desired result.

4. Stability of the functional Eq (1.1): the fixed point method

    In this section, we will prove the Hyers-Ulam stability of the functional Eq (1.1) in matrix intuitionistic fuzzy normed spaces by applying the fixed point method.

Theorem 4.1. Let $\varphi_o: X^{2}\to[0,\infty)$ be a function such that, for some real number $\rho$ with $0<\rho<1$,

$$\varphi_o(a,b)=\frac{\rho}{k}\varphi_o(ka,kb) \tag{4.1}$$

    for all $a,b\in X$. Suppose that an odd mapping $f: X\to Y$ satisfies the inequality

$$\begin{cases}\mu_n(Df_n([x_{ij}],[y_{ij}]),t)\geq\dfrac{t}{t+\sum_{i,j=1}^{n}\varphi_o(x_{ij},y_{ij})},\\[2mm] \nu_n(Df_n([x_{ij}],[y_{ij}]),t)\leq\dfrac{\sum_{i,j=1}^{n}\varphi_o(x_{ij},y_{ij})}{t+\sum_{i,j=1}^{n}\varphi_o(x_{ij},y_{ij})}\end{cases} \tag{4.2}$$

    for all $x=[x_{ij}],\,y=[y_{ij}]\in M_n(X)$ and all $t>0$. Then there exists a unique additive mapping $A: X\to Y$ such that

$$\begin{cases}\mu_n(f_n([x_{ij}])-A_n([x_{ij}]),t)\geq\dfrac{2k(2k+s-1)(1-\rho)t}{2k(2k+s-1)(1-\rho)t+\rho n^{2}\sum_{i,j=1}^{n}\varphi_o(0,x_{ij})},\\[2mm] \nu_n(f_n([x_{ij}])-A_n([x_{ij}]),t)\leq\dfrac{\rho n^{2}\sum_{i,j=1}^{n}\varphi_o(0,x_{ij})}{2k(2k+s-1)(1-\rho)t+\rho n^{2}\sum_{i,j=1}^{n}\varphi_o(0,x_{ij})}\end{cases} \tag{4.3}$$

    for all $x=[x_{ij}]\in M_n(X)$ and all $t>0$.

Proof. When $n=1$, similar to the proof of Theorem 3.3, we have

$$\begin{cases}\mu\big(2(2k+s-1)f(ka)-2(2k+s-1)kf(a),t\big)\geq\dfrac{t}{t+\varphi_o(0,a)},\\[2mm] \nu\big(2(2k+s-1)f(ka)-2(2k+s-1)kf(a),t\big)\leq\dfrac{\varphi_o(0,a)}{t+\varphi_o(0,a)}\end{cases} \tag{4.4}$$

    for all $a\in X$ and all $t>0$.

    Let $S_1:=\{g_1: X\to Y\}$, and introduce a generalized metric $d_1$ on $S_1$ as follows:

$$d_1(g_1,h_1):=\inf\Big\{\lambda\in\mathbb{R}_{+}\ \Big|\ \mu(g_1(a)-h_1(a),\lambda t)\geq\frac{t}{t+\varphi_o(0,a)},\ \nu(g_1(a)-h_1(a),\lambda t)\leq\frac{\varphi_o(0,a)}{t+\varphi_o(0,a)},\ \forall\, a\in X,\ t>0\Big\}.$$

    It is easy to prove that $(S_1,d_1)$ is a complete generalized metric space ([3,13]). Now, we define the mapping $J_1: S_1\to S_1$ by

$$J_1g_1(a):=kg_1\Big(\frac{a}{k}\Big)\quad\text{for all } g_1\in S_1 \text{ and } a\in X. \tag{4.5}$$
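Unwinding (4.5) (a brief expansion added for clarity), the iterates of $J_1$ are

$$J_1^{p}f(a)=k^{p}f\Big(\frac{a}{k^{p}}\Big),$$

which explains the form of the limit mapping $A$ in (4.7) below.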

Let $g_1,h_1\in S_1$ and let $\lambda\in\mathbb{R}_{+}$ be an arbitrary constant with $d_1(g_1,h_1)\leq\lambda$. From the definition of $d_1$, we get

$$\begin{cases}\mu(g_1(a)-h_1(a),\lambda t)\geq\dfrac{t}{t+\varphi_o(0,a)},\\[2mm] \nu(g_1(a)-h_1(a),\lambda t)\leq\dfrac{\varphi_o(0,a)}{t+\varphi_o(0,a)}\end{cases}$$

    for all $a\in X$ and $t>0$. Therefore, using (4.1), we get

$$\begin{cases}\mu(J_1g_1(a)-J_1h_1(a),\lambda\rho t)=\mu\Big(kg_1\Big(\dfrac{a}{k}\Big)-kh_1\Big(\dfrac{a}{k}\Big),\lambda\rho t\Big)=\mu\Big(g_1\Big(\dfrac{a}{k}\Big)-h_1\Big(\dfrac{a}{k}\Big),\dfrac{\lambda\rho t}{k}\Big)\geq\dfrac{\frac{\rho}{k}t}{\frac{\rho}{k}t+\frac{\rho}{k}\varphi_o(0,a)}=\dfrac{t}{t+\varphi_o(0,a)},\\[2mm] \nu(J_1g_1(a)-J_1h_1(a),\lambda\rho t)=\nu\Big(kg_1\Big(\dfrac{a}{k}\Big)-kh_1\Big(\dfrac{a}{k}\Big),\lambda\rho t\Big)=\nu\Big(g_1\Big(\dfrac{a}{k}\Big)-h_1\Big(\dfrac{a}{k}\Big),\dfrac{\lambda\rho t}{k}\Big)\leq\dfrac{\frac{\rho}{k}\varphi_o(0,a)}{\frac{\rho}{k}t+\frac{\rho}{k}\varphi_o(0,a)}=\dfrac{\varphi_o(0,a)}{t+\varphi_o(0,a)}\end{cases} \tag{4.6}$$

    for $\rho<1$, all $a\in X$ and all $t>0$. Hence, it holds that $d_1(J_1g_1,J_1h_1)\leq\lambda\rho$; that is, $d_1(J_1g_1,J_1h_1)\leq\rho\, d_1(g_1,h_1)$ for all $g_1,h_1\in S_1$.

    Furthermore, applying (4.4) with $a$ replaced by $\frac{a}{k}$ and using (4.1), we obtain the inequality

$$d_1(f,J_1f)\leq\frac{\rho}{2k(2k+s-1)}.$$

    It follows from Lemma 2.4 that the sequence $\{J_1^{p}f\}$ converges to a fixed point $A$ of $J_1$; that is,

$$A: X\to Y,\qquad A(a):=(\mu,\nu)\text{-}\lim_{p\to\infty}k^{p}f\Big(\frac{a}{k^{p}}\Big) \tag{4.7}$$

    and

$$A(ka)=kA(a) \tag{4.8}$$

    for all $a\in X$. Meanwhile, $A$ is the unique fixed point of $J_1$ in the set

$$S_1^{*}=\{g_1\in S_1: d_1(f,g_1)<\infty\}.$$

    Thus, there exists a $\lambda\in\mathbb{R}_{+}$ such that

$$\begin{cases}\mu(f(a)-A(a),\lambda t)\geq\dfrac{t}{t+\varphi_o(0,a)},\\[2mm] \nu(f(a)-A(a),\lambda t)\leq\dfrac{\varphi_o(0,a)}{t+\varphi_o(0,a)}\end{cases}$$

    for all $a\in X$ and all $t>0$. Also,

$$d_1(f,A)\leq\frac{1}{1-\rho}\,d_1(f,J_1f)\leq\frac{\rho}{2k(1-\rho)(2k+s-1)}.$$

    This means that the inequality

$$\begin{cases}\mu(f(a)-A(a),t)\geq\dfrac{2k(2k+s-1)(1-\rho)t}{2k(2k+s-1)(1-\rho)t+\rho\varphi_o(0,a)},\\[2mm] \nu(f(a)-A(a),t)\leq\dfrac{\rho\varphi_o(0,a)}{2k(2k+s-1)(1-\rho)t+\rho\varphi_o(0,a)}\end{cases} \tag{4.9}$$

    holds for all $a\in X$ and all $t>0$. It follows from (3.4) and (4.1) that

$$\mu\Big(k^{p}Df\Big(\frac{a}{k^{p}},\frac{b}{k^{p}}\Big),t\Big)\geq\frac{t}{t+\rho^{p}\varphi_o(a,b)}\quad\text{and}\quad\nu\Big(k^{p}Df\Big(\frac{a}{k^{p}},\frac{b}{k^{p}}\Big),t\Big)\leq\frac{\rho^{p}\varphi_o(a,b)}{t+\rho^{p}\varphi_o(a,b)} \tag{4.10}$$

    for all $a,b\in X$ and all $t>0$. Letting $p\to\infty$ in (4.10), we obtain

$$\mu(DA(a,b),t)=1\quad\text{and}\quad\nu(DA(a,b),t)=0 \tag{4.11}$$

    for all $a,b\in X$ and all $t>0$. This means that $A$ satisfies the functional Eq (1.1). Since $f: X\to Y$ is an odd mapping, the definition of $A$ gives $A(-a)=-A(a)$ for all $a\in X$. Thus, by Lemma 3.1, the mapping $A: X\to Y$ is additive.

    By Lemma 2.3 and (4.9), we get

$$\begin{cases}\mu_n(f_n([x_{ij}])-A_n([x_{ij}]),t)\geq\min\Big\{\mu\Big(f(x_{ij})-A(x_{ij}),\dfrac{t}{n^{2}}\Big): i,j=1,\ldots,n\Big\}\geq\dfrac{2k(2k+s-1)(1-\rho)t}{2k(2k+s-1)(1-\rho)t+\rho n^{2}\sum_{i,j=1}^{n}\varphi_o(0,x_{ij})},\\[2mm] \nu_n(f_n([x_{ij}])-A_n([x_{ij}]),t)\leq\max\Big\{\nu\Big(f(x_{ij})-A(x_{ij}),\dfrac{t}{n^{2}}\Big): i,j=1,\ldots,n\Big\}\leq\dfrac{\rho n^{2}\sum_{i,j=1}^{n}\varphi_o(0,x_{ij})}{2k(2k+s-1)(1-\rho)t+\rho n^{2}\sum_{i,j=1}^{n}\varphi_o(0,x_{ij})}\end{cases}$$

    for all $x=[x_{ij}]\in M_n(X)$ and all $t>0$. Thus $A: X\to Y$ is the unique additive mapping satisfying (4.3), as desired. This completes the proof of the theorem.

Theorem 4.2. Let $\varphi_e: X^{2}\to[0,\infty)$ be a function such that, for some real number $\rho$ with $0<\rho<1$,

$$\varphi_e(a,b)=\frac{\rho}{k^{2}}\varphi_e(ka,kb) \tag{4.12}$$

    for all $a,b\in X$. Suppose that an even mapping $f: X\to Y$ with $f(0)=0$ satisfies the inequality

$$\begin{cases}\mu_n(Df_n([x_{ij}],[y_{ij}]),t)\geq\dfrac{t}{t+\sum_{i,j=1}^{n}\varphi_e(x_{ij},y_{ij})},\\[2mm] \nu_n(Df_n([x_{ij}],[y_{ij}]),t)\leq\dfrac{\sum_{i,j=1}^{n}\varphi_e(x_{ij},y_{ij})}{t+\sum_{i,j=1}^{n}\varphi_e(x_{ij},y_{ij})}\end{cases} \tag{4.13}$$

    for all $x=[x_{ij}],\,y=[y_{ij}]\in M_n(X)$ and all $t>0$. Then there exists a unique quadratic mapping $Q: X\to Y$ such that

$$\begin{cases}\mu_n(f_n([x_{ij}])-Q_n([x_{ij}]),t)\geq\dfrac{2k^{2}(2k+s-1)(1-\rho)t}{2k^{2}(2k+s-1)(1-\rho)t+\rho n^{2}\sum_{i,j=1}^{n}\varphi_e(0,x_{ij})},\\[2mm] \nu_n(f_n([x_{ij}])-Q_n([x_{ij}]),t)\leq\dfrac{\rho n^{2}\sum_{i,j=1}^{n}\varphi_e(0,x_{ij})}{2k^{2}(2k+s-1)(1-\rho)t+\rho n^{2}\sum_{i,j=1}^{n}\varphi_e(0,x_{ij})}\end{cases} \tag{4.14}$$

    for all $x=[x_{ij}]\in M_n(X)$ and all $t>0$.

Proof. When $n=1$, similar to the proof of Theorem 3.4, we obtain

$$\begin{cases}\mu\big(2(2k+s-1)f(ka)-2(2k+s-1)k^{2}f(a),t\big)\geq\dfrac{t}{t+\varphi_e(0,a)},\\[2mm] \nu\big(2(2k+s-1)f(ka)-2(2k+s-1)k^{2}f(a),t\big)\leq\dfrac{\varphi_e(0,a)}{t+\varphi_e(0,a)}\end{cases} \tag{4.15}$$

    for all $a\in X$ and all $t>0$.

    Let $S_2:=\{g_2: X\to Y\}$, and introduce a generalized metric $d_2$ on $S_2$ as follows:

$$d_2(g_2,h_2):=\inf\Big\{\lambda\in\mathbb{R}_{+}\ \Big|\ \mu(g_2(a)-h_2(a),\lambda t)\geq\frac{t}{t+\varphi_e(0,a)},\ \nu(g_2(a)-h_2(a),\lambda t)\leq\frac{\varphi_e(0,a)}{t+\varphi_e(0,a)},\ \forall\, a\in X,\ t>0\Big\}.$$

    It is easy to prove that $(S_2,d_2)$ is a complete generalized metric space ([3,13]). Now, we define the mapping $J_2: S_2\to S_2$ by

$$J_2g_2(a):=k^{2}g_2\Big(\frac{a}{k}\Big)\quad\text{for all } g_2\in S_2 \text{ and } a\in X. \tag{4.16}$$

    Let $g_2,h_2\in S_2$ and let $\lambda\in\mathbb{R}_{+}$ be an arbitrary constant with $d_2(g_2,h_2)\leq\lambda$. From the definition of $d_2$, we get

$$\begin{cases}\mu(g_2(a)-h_2(a),\lambda t)\geq\dfrac{t}{t+\varphi_e(0,a)},\\[2mm] \nu(g_2(a)-h_2(a),\lambda t)\leq\dfrac{\varphi_e(0,a)}{t+\varphi_e(0,a)}\end{cases}$$

    for all $a\in X$ and $t>0$. Therefore, using (4.12), we get

$$\begin{cases}\mu(J_2g_2(a)-J_2h_2(a),\lambda\rho t)=\mu\Big(k^{2}g_2\Big(\dfrac{a}{k}\Big)-k^{2}h_2\Big(\dfrac{a}{k}\Big),\lambda\rho t\Big)=\mu\Big(g_2\Big(\dfrac{a}{k}\Big)-h_2\Big(\dfrac{a}{k}\Big),\dfrac{\lambda\rho t}{k^{2}}\Big)\geq\dfrac{\frac{\rho}{k^{2}}t}{\frac{\rho}{k^{2}}t+\frac{\rho}{k^{2}}\varphi_e(0,a)}=\dfrac{t}{t+\varphi_e(0,a)},\\[2mm] \nu(J_2g_2(a)-J_2h_2(a),\lambda\rho t)=\nu\Big(k^{2}g_2\Big(\dfrac{a}{k}\Big)-k^{2}h_2\Big(\dfrac{a}{k}\Big),\lambda\rho t\Big)=\nu\Big(g_2\Big(\dfrac{a}{k}\Big)-h_2\Big(\dfrac{a}{k}\Big),\dfrac{\lambda\rho t}{k^{2}}\Big)\leq\dfrac{\frac{\rho}{k^{2}}\varphi_e(0,a)}{\frac{\rho}{k^{2}}t+\frac{\rho}{k^{2}}\varphi_e(0,a)}=\dfrac{\varphi_e(0,a)}{t+\varphi_e(0,a)}\end{cases} \tag{4.17}$$

    for $\rho<1$, all $a\in X$ and all $t>0$. Hence, it holds that $d_2(J_2g_2,J_2h_2)\leq\lambda\rho$; that is, $d_2(J_2g_2,J_2h_2)\leq\rho\, d_2(g_2,h_2)$ for all $g_2,h_2\in S_2$.

    Furthermore, applying (4.15) with $a$ replaced by $\frac{a}{k}$ and using (4.12), we obtain the inequality

$$d_2(f,J_2f)\leq\frac{\rho}{2k^{2}(2k+s-1)}.$$

    It follows from Lemma 2.4 that the sequence $\{J_2^{p}f\}$ converges to a fixed point $Q$ of $J_2$; that is,

$$Q: X\to Y,\qquad Q(a):=(\mu,\nu)\text{-}\lim_{p\to\infty}k^{2p}f\Big(\frac{a}{k^{p}}\Big) \tag{4.18}$$

    and

$$Q(ka)=k^{2}Q(a) \tag{4.19}$$

    for all $a\in X$. Meanwhile, $Q$ is the unique fixed point of $J_2$ in the set

$$S_2^{*}=\{g_2\in S_2: d_2(f,g_2)<\infty\}.$$

    Thus, there exists a $\lambda\in\mathbb{R}_{+}$ such that

$$\begin{cases}\mu(f(a)-Q(a),\lambda t)\geq\dfrac{t}{t+\varphi_e(0,a)},\\[2mm] \nu(f(a)-Q(a),\lambda t)\leq\dfrac{\varphi_e(0,a)}{t+\varphi_e(0,a)}\end{cases}$$

    for all $a\in X$ and all $t>0$. Also,

$$d_2(f,Q)\leq\frac{1}{1-\rho}\,d_2(f,J_2f)\leq\frac{\rho}{2k^{2}(1-\rho)(2k+s-1)}.$$

    This means that the inequality

$$\begin{cases}\mu(f(a)-Q(a),t)\geq\dfrac{2k^{2}(2k+s-1)(1-\rho)t}{2k^{2}(2k+s-1)(1-\rho)t+\rho\varphi_e(0,a)},\\[2mm] \nu(f(a)-Q(a),t)\leq\dfrac{\rho\varphi_e(0,a)}{2k^{2}(2k+s-1)(1-\rho)t+\rho\varphi_e(0,a)}\end{cases} \tag{4.20}$$

    holds for all $a\in X$ and all $t>0$. The rest of the proof is similar to the proof of Theorem 4.1. This completes the proof of the theorem.

Theorem 4.3. Let $\varphi: X^{2}\to[0,\infty)$ be a function such that, for some real number $\rho$ with $0<\rho<1$,

$$\varphi(a,b)=\frac{\rho}{k^{2}}\varphi(ka,kb) \tag{4.21}$$

    for all $a,b\in X$. Suppose that a mapping $f: X\to Y$ with $f(0)=0$ satisfies the inequality

$$\begin{cases}\mu_n(Df_n([x_{ij}],[y_{ij}]),t)\geq\dfrac{t}{t+\sum_{i,j=1}^{n}\varphi(x_{ij},y_{ij})},\\[2mm] \nu_n(Df_n([x_{ij}],[y_{ij}]),t)\leq\dfrac{\sum_{i,j=1}^{n}\varphi(x_{ij},y_{ij})}{t+\sum_{i,j=1}^{n}\varphi(x_{ij},y_{ij})}\end{cases} \tag{4.22}$$

    for all $x=[x_{ij}],\,y=[y_{ij}]\in M_n(X)$ and all $t>0$. Then there exist a unique quadratic mapping $Q: X\to Y$ and a unique additive mapping $A: X\to Y$ such that

$$\begin{cases}\mu_n(f_n([x_{ij}])-Q_n([x_{ij}])-A_n([x_{ij}]),t)\geq\dfrac{k(2k+s-1)(1-\rho)t}{k(2k+s-1)(1-\rho)t+\rho n^{2}\sum_{i,j=1}^{n}\tilde{\varphi}(0,x_{ij})},\\[2mm] \nu_n(f_n([x_{ij}])-Q_n([x_{ij}])-A_n([x_{ij}]),t)\leq\dfrac{\rho n^{2}\sum_{i,j=1}^{n}\tilde{\varphi}(0,x_{ij})}{k(2k+s-1)(1-\rho)t+\rho n^{2}\sum_{i,j=1}^{n}\tilde{\varphi}(0,x_{ij})}\end{cases} \tag{4.23}$$

    for all $x=[x_{ij}]\in M_n(X)$ and all $t>0$, where $\tilde{\varphi}(a,b)=\varphi(a,b)+\varphi(-a,-b)$ for all $a,b\in X$.

Proof. The proof follows from Theorems 4.1 and 4.2 by arguing as in the proof of Theorem 3.5. This completes the proof of the theorem.

Corollary 4.4. Let $r,\theta$ be positive real numbers with $r>2$. Suppose that a mapping $f: X\to Y$ with $f(0)=0$ satisfies the inequality

$$\begin{cases}\mu_n(Df_n([x_{ij}],[y_{ij}]),t)\geq\dfrac{t}{t+\sum_{i,j=1}^{n}\theta(\|x_{ij}\|^{r}+\|y_{ij}\|^{r})},\\[2mm] \nu_n(Df_n([x_{ij}],[y_{ij}]),t)\leq\dfrac{\sum_{i,j=1}^{n}\theta(\|x_{ij}\|^{r}+\|y_{ij}\|^{r})}{t+\sum_{i,j=1}^{n}\theta(\|x_{ij}\|^{r}+\|y_{ij}\|^{r})}\end{cases} \tag{4.24}$$

    for all $x=[x_{ij}],\,y=[y_{ij}]\in M_n(X)$ and all $t>0$. Then there exist a unique quadratic mapping $Q: X\to Y$ and a unique additive mapping $A: X\to Y$ such that

$$\begin{cases}\mu_n(f_n([x_{ij}])-Q_n([x_{ij}])-A_n([x_{ij}]),t)\geq\dfrac{(2k+s-1)(k^{r}-k^{2})t}{(2k+s-1)(k^{r}-k^{2})t+2kn^{2}\sum_{i,j=1}^{n}\theta\|x_{ij}\|^{r}},\\[2mm] \nu_n(f_n([x_{ij}])-Q_n([x_{ij}])-A_n([x_{ij}]),t)\leq\dfrac{2kn^{2}\sum_{i,j=1}^{n}\theta\|x_{ij}\|^{r}}{(2k+s-1)(k^{r}-k^{2})t+2kn^{2}\sum_{i,j=1}^{n}\theta\|x_{ij}\|^{r}}\end{cases} \tag{4.25}$$

    for all $x=[x_{ij}]\in M_n(X)$ and all $t>0$.

Proof. Taking $\varphi(a,b)=\theta(\|a\|^{r}+\|b\|^{r})$ for all $a,b\in X$ and $\rho=k^{2-r}$ in Theorem 4.3 (note that $\varphi(a,b)=k^{-r}\varphi(ka,kb)=\frac{\rho}{k^{2}}\varphi(ka,kb)$, and $0<\rho<1$ since $r>2$ and $k>1$), we get the desired result.

5. Conclusions

    We used the direct and fixed point methods to investigate the Hyers-Ulam stability of the functional Eq (1.1) in the framework of matrix intuitionistic fuzzy normed spaces. We thereby provide a link between two different disciplines: matrix intuitionistic fuzzy normed spaces and functional equations. We generalized the Hyers-Ulam stability results of the functional Eq (1.1) from quasi-Banach spaces to matrix intuitionistic fuzzy normed spaces. The methods presented here can be applied to other significant functional equations.

The author declares that he has not used Artificial Intelligence (AI) tools in the creation of this article.

The author is grateful to the referees for their helpful comments and suggestions that helped to improve the quality of the manuscript.

    The author declares no conflict of interest in this paper.



  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
