Collaborative filtering is one of the most widely used methods in recommender systems. In recent years, Graph Neural Networks (GNN) were naturally applied to collaborative filtering methods to model users' preference representation. However, empirical research has ignored the effects of different items on user representation, which prevented them from capturing fine-grained users' preferences. Besides, due to the problem of data sparsity in collaborative filtering, most GNN-based models conduct a large number of graph convolution operations in the user-item graph, resulting in an over-smoothing effect. To tackle these problems, Adaptive Preference Retention Graph Convolutional Collaborative Filtering Method (APR-GCCF) was proposed to distinguish the difference among the items and capture the fine-grained users' preferences. Specifically, the graph convolutional method was applied to model the high-order relationship on the user-item graph and an adaptive preference retention mechanism was used to capture the difference between items adaptively. To obtain a unified users' preferences representation and alleviate the over-smoothing effect, we employed a residual preference prediction mechanism to concatenate the representation of users' preferences generated by each layer of the graph neural network. Extensive experiments were conducted based on three real datasets and the experimental results demonstrate the effectiveness of the model.
Citation: Bingjie Zhang, Junchao Yu, Zhe Kang, Tianyu Wei, Xiaoyu Liu, Suhua Wang. An adaptive preference retention collaborative filtering algorithm based on graph convolutional method[J]. Electronic Research Archive, 2023, 31(2): 793-811. doi: 10.3934/era.2023040
In 1940, Ulam [24] posed the stability problem concerning group homomorphisms. For Banach spaces, the problem was solved by Hyers [7] in the case of approximate additive mappings. Hyers' result was then extended by Aoki [1] and Rassias [18] for additive mappings and linear mappings, respectively. In 1994, a further generalization, the so-called generalized Hyers-Ulam stability, was obtained by Gavruta [6]. Since then, the stability of several functional equations has been extensively discussed by many mathematicians and there are many interesting results concerning this problem (see [2,8,9,10,19,20] and the references therein); moreover, stability results for various functional equations and inequalities have been studied and generalized [5,11,12,15,16,17,26] in various matrix normed spaces, such as matrix fuzzy normed spaces, matrix paranormed spaces and matrix non-Archimedean random normed spaces.
In 2017, Wang and Xu [25] introduced the following functional equation
2k[f(x+ky)+f(kx+y)]=k(1−s+k+ks+2k²)f(x+y)+k(1−s−3k+ks+2k²)f(x−y)+2kf(kx)+2k(s+k−ks−2k²)f(x)+2(1−k−s)f(ky)+2ksf(y) | (1.1) |
where s is a parameter, k>1 and s≠1−2k. It is easy to verify that f(x)=ax+bx² (x∈R) satisfies the functional Eq (1.1), where a,b are arbitrary constants. They considered the general solution of the functional Eq (1.1), and then determined the generalized Hyers-Ulam stability of the functional Eq (1.1) in quasi-Banach spaces by applying the direct method.
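Since every mapping of the form f(x)=ax+bx² solves Eq (1.1), a quick numeric sanity check is possible. The following sketch (the concrete values of a, b, k, s and the sample points are our illustrative choices, not taken from the paper) evaluates the difference of the two sides of (1.1) and confirms that it vanishes up to rounding:

```python
# Numeric sanity check: f(x) = a*x + b*x**2 solves Eq (1.1) identically.
a, b = 1.5, -0.75          # arbitrary constants in f(x) = a*x + b*x^2
k, s = 2.0, 3.0            # k > 1 and s != 1 - 2k
f = lambda u: a * u + b * u**2

def D(x, y):
    """Difference of the two sides of Eq (1.1); zero iff the equation holds."""
    lhs = 2 * k * (f(x + k * y) + f(k * x + y))
    rhs = (k * (1 - s + k + k * s + 2 * k**2) * f(x + y)
           + k * (1 - s - 3 * k + k * s + 2 * k**2) * f(x - y)
           + 2 * k * f(k * x)
           + 2 * k * (s + k - k * s - 2 * k**2) * f(x)
           + 2 * (1 - k - s) * f(k * y)
           + 2 * k * s * f(y))
    return lhs - rhs

assert abs(D(0.3, -1.2)) < 1e-8
assert abs(D(2.0, 5.0)) < 1e-7
print("f(x) = ax + bx^2 solves Eq (1.1) at the sampled points")
```

The same check with other parameter values gives zero as well, since the identity holds for arbitrary a, b, k and s.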
The main purpose of this paper is to employ the direct and fixed point methods to establish the Hyers-Ulam stability of the functional Eq (1.1) in matrix intuitionistic fuzzy normed spaces. The paper is organized as follows: In Sections 1 and 2, we present a brief introduction and introduce related basic definitions and preliminary results, respectively. In Section 3, we prove the Hyers-Ulam stability of the functional Eq (1.1) in matrix intuitionistic fuzzy normed spaces by applying the direct method. In Section 4, we prove the Hyers-Ulam stability of the functional Eq (1.1) in matrix intuitionistic fuzzy normed spaces by applying the fixed point method. Our results may be viewed as a continuation of the previous contribution of the authors in the setting of fuzzy stability (see [14,17]).
For the sake of completeness, in this section, we present some basic definitions and preliminary results, which will be useful to investigate the Hyers-Ulam stability results in matrix intuitionistic fuzzy normed spaces. The notions of continuous t-norm and continuous t-conorm can be found in [14,22]. Using these, an intuitionistic fuzzy normed space (for short, IFNS) is defined as follows:
Definition 2.1. ([14,21]) The five-tuple (X,μ,ν,∗,⋄) is said to be an IFNS if X is a vector space, ∗ is a continuous t-norm, ⋄ is a continuous t-conorm, and μ,ν are fuzzy sets on X×(0,∞) satisfying the following conditions for every x,y∈X and s,t>0:
(i) μ(x,t)+ν(x,t)≤1;
(ii) μ(x,t)>0;
(iii) μ(x,t)=1 if and only if x=0;
(iv) μ(αx,t)=μ(x,t/|α|) for each α≠0;
(v) μ(x,t)∗μ(y,s)≤μ(x+y,t+s);
(vi) μ(x,⋅):(0,∞)→[0,1] is continuous;
(vii) limt→∞μ(x,t)=1 and limt→0μ(x,t)=0;
(viii) ν(x,t)<1;
(ix) ν(x,t)=0 if and only if x=0;
(x) ν(αx,t)=ν(x,t/|α|) for each α≠0;
(xi) ν(x,t)⋄ν(y,s)≥ν(x+y,t+s);
(xii) ν(x,⋅):(0,∞)→[0,1] is continuous;
(xiii) limt→∞ν(x,t)=0 and limt→0ν(x,t)=1.
In this case, (μ,ν) is called an intuitionistic fuzzy norm.
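For concreteness, the pair μ(x,t)=t/(t+‖x‖), ν(x,t)=‖x‖/(t+‖x‖), with minimum as t-norm and maximum as t-conorm, is a standard example of an intuitionistic fuzzy norm; the same t/(t+⋅) shape reappears in the stability estimates below. The following sketch (our illustration, with arbitrary sample points) spot-checks several of the axioms numerically on X=R:

```python
# A standard example of an intuitionistic fuzzy norm (our illustration, with
# min as t-norm and max as t-conorm): on X = R take
#   mu(x,t) = t/(t+|x|),  nu(x,t) = |x|/(t+|x|).
# We spot-check several axioms of Definition 2.1 at arbitrary sample points.
def mu(x, t): return t / (t + abs(x))
def nu(x, t): return abs(x) / (t + abs(x))

x, y, s, t = 1.5, -2.0, 0.7, 1.3
assert abs(mu(x, t) + nu(x, t) - 1.0) < 1e-12        # mu + nu <= 1 (equality here)
assert abs(mu(3 * x, t) - mu(x, t / 3)) < 1e-12      # mu(alpha*x,t) = mu(x,t/|alpha|)
assert min(mu(x, t), mu(y, s)) <= mu(x + y, t + s)   # mu(x,t)*mu(y,s) <= mu(x+y,t+s)
assert max(nu(x, t), nu(y, s)) >= nu(x + y, t + s)   # nu(x,t)<>nu(y,s) >= nu(x+y,t+s)
print("sampled axioms hold")
```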
The following concepts of convergence and Cauchy sequences are considered in [14,21]:
Let (X,μ,ν,∗,⋄) be an IFNS. Then, a sequence {xk} is said to be intuitionistic fuzzy convergent to x∈X if for every ε>0 and t>0, there exists k0∈N such that
μ(xk−x,t)>1−ε |
and
ν(xk−x,t)<ε |
for all k≥k0. In this case we write
(μ,ν)−limxk=x. |
The sequence {xk} is said to be an intuitionistic fuzzy Cauchy sequence if for every ε>0 and t>0, there exists k0∈N such that
μ(xk−xℓ,t)>1−ε |
and
ν(xk−xℓ,t)<ε |
for all k,ℓ≥k0. (X,μ,ν,∗,⋄) is said to be complete if every intuitionistic fuzzy Cauchy sequence in (X,μ,ν,∗,⋄) is intuitionistic fuzzy convergent in (X,μ,ν,∗,⋄).
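As a small illustration (ours, not from the paper): in the IFNS on R with μ(x,t)=t/(t+|x|) and ν(x,t)=|x|/(t+|x|), the sequence xk=1/k is intuitionistic fuzzy convergent to 0, since for every ε>0 and t>0 a suitable k0 exists:

```python
# Illustration (ours) of intuitionistic fuzzy convergence in the IFNS on R
# with mu(x,t) = t/(t+|x|) and nu(x,t) = |x|/(t+|x|): the sequence x_k = 1/k
# is intuitionistic fuzzy convergent to 0, i.e., for every eps > 0 and t > 0
# there is k0 with mu(x_k - 0, t) > 1 - eps and nu(x_k - 0, t) < eps, k >= k0.
def mu(x, t): return t / (t + abs(x))
def nu(x, t): return abs(x) / (t + abs(x))

t = 0.25
for eps in (0.1, 0.01, 0.001):
    k0 = next(k for k in range(1, 10**6) if mu(1.0 / k, t) > 1 - eps)
    # mu + nu = 1 for this norm, so mu > 1 - eps forces nu < eps at k0,
    # and both conditions persist for k >= k0 since mu(1/k, t) increases in k.
    assert nu(1.0 / k0, t) < eps
print("x_k = 1/k is intuitionistic fuzzy convergent to 0")
```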
Following [11,12], we will also use the following notations: The set of all m×n matrices over X will be denoted by Mm,n(X); when m=n, Mm,n(X) will be written as Mn(X). The symbol ej∈M1,n(C) will denote the row vector whose jth component is 1 and whose other components are 0. Similarly, Eij∈Mn(C) will denote the n×n matrix whose (i,j)-component is 1 and whose other components are 0, and Eij⊗x∈Mn(X) will denote the n×n matrix whose (i,j)-component is x and whose other components are 0.
Let (X,‖⋅‖) be a normed space. Note that (X,{‖⋅‖n}) is a matrix normed space if and only if (Mn(X),‖⋅‖n) is a normed space for each positive integer n and
‖AxB‖k≤‖A‖‖B‖‖x‖n |
holds for A∈Mk,n, x=[xij]∈Mn(X) and B∈Mn,k, and that (X,{‖⋅‖n}) is a matrix Banach space if and only if X is a Banach space and (X,{‖⋅‖n}) is a matrix normed space.
Following [23], we introduce the concept of a matrix intuitionistic fuzzy normed space as follows:
Definition 2.2. ([23]) Let (X,μ,ν,∗,⋄) be an intuitionistic fuzzy normed space, and the symbol θ for a rectangular matrix of zero elements over X. Then:
(1) (X,{μn},{νn},∗,⋄) is called a matrix intuitionistic fuzzy normed space (briefly, MIFNS) if for each positive integer n, (Mn(X),μn,νn,∗,⋄) is an intuitionistic fuzzy normed space, μn and νn satisfy the following conditions:
(i) μn+m(θ+x,t)=μn(x,t),νn+m(θ+x,t)=νn(x,t) for all t>0, x=[xij]∈Mn(X), θ∈Mn(X);
(ii) μk(AxB,t)≥μn(x,t/(‖A‖⋅‖B‖)), νk(AxB,t)≤νn(x,t/(‖A‖⋅‖B‖)) for all t>0, A∈Mk,n(R), x=[xij]∈Mn(X) and B∈Mn,k(R) with ‖A‖⋅‖B‖≠0.
(2) (X,{μn},{νn},∗,⋄) is called a matrix intuitionistic fuzzy Banach space if (X,μ,ν,∗,⋄) is an intuitionistic fuzzy Banach space and (X,{μn},{νn},∗,⋄) is a matrix intuitionistic fuzzy normed space.
The following Lemma 2.3 can be found in [23].
Lemma 2.3. ([23]) Let (X,{μn},{νn},∗,⋄) be a matrix intuitionistic fuzzy normed space. Then,
(1) μn(Ekl⊗x,t)=μ(x,t), νn(Ekl⊗x,t)=ν(x,t) for all t>0 and x∈X.
(2) For all [xij]∈Mn(X) and t=n∑i,j=1tij>0,
μ(xkl,t)≥μn([xij],t)≥min{μ(xij,tij):i,j=1,2,…,n},μ(xkl,t)≥μn([xij],t)≥min{μ(xij,t/n²):i,j=1,2,…,n}, |
and
ν(xkl,t)≤νn([xij],t)≤max{ν(xij,tij):i,j=1,2,…,n},ν(xkl,t)≤νn([xij],t)≤max{ν(xij,t/n²):i,j=1,2,…,n}. |
(3) limm→∞xm=x if and only if limm→∞xijm=xij for xm=[xijm],x=[xij]∈Mn(X).
For later use, we also recall the following Lemma 2.4, which is due to Diaz and Margolis [4] and will play an important role in proving the stability results in this paper.
Lemma 2.4. (The fixed point alternative theorem [4]) Let (E,d) be a complete generalized metric space and J: E→E be a strictly contractive mapping with Lipschitz constant L<1. Then for each fixed element x∈E, either
d(Jnx,Jn+1x)=∞,∀n≥0, |
or
d(Jnx,Jn+1x)<∞,∀n≥n0, |
for some natural number n0. Moreover, if the second alternative holds then:
(i) The sequence {Jnx} is convergent to a fixed point y∗ of J.
(ii) y∗ is the unique fixed point of J in the set E∗:={y∈E∣d(Jn0x,y)<+∞}, and d(y,y∗)≤(1/(1−L))d(y,Jy) for all y∈E∗.
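The content of Lemma 2.4 can be seen on a toy example (ours, purely illustrative): on the complete metric space (R,|⋅|), the map J(x)=x/2 is strictly contractive with L=1/2, the second alternative holds with n0=0, the iterates converge to the fixed point y∗=0, and the quantitative estimate in (ii) holds:

```python
# Toy illustration (ours) of the fixed point alternative: on the complete
# metric space (R, |.|), J(x) = x/2 is strictly contractive with L = 1/2.
# The second alternative holds with n0 = 0, the iterates J^n x converge to
# the fixed point y* = 0, and d(x, y*) <= d(x, Jx)/(1 - L).
J = lambda u: u / 2.0
L = 0.5
x = 3.0
y = x
for _ in range(80):
    y = J(y)               # y = J^80 x
assert abs(y) < 1e-12      # the iterates converge to the fixed point y* = 0
assert abs(x - 0.0) <= abs(x - J(x)) / (1 - L) + 1e-12   # estimate (ii)
print("fixed point alternative verified on the toy example")
```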
From now on, let (X,{μn},{νn},∗,⋄) be a matrix intuitionistic fuzzy normed space and (Y,{μn},{νn},∗,⋄) be a matrix intuitionistic fuzzy Banach space. In this section, we will prove the Hyers-Ulam stability of the functional Eq (1.1) in matrix intuitionistic fuzzy normed spaces by using the direct method. For the sake of convenience, given a mapping f: X→Y, we define the difference operators Df: X2→Y and Dfn: Mn(X2)→Mn(Y) of the functional Eq (1.1) by
Df(a,b):=2k[f(a+kb)+f(ka+b)]−k(1−s+k+ks+2k2)f(a+b)−k(1−s−3k+ks+2k2)f(a−b)−2kf(ka)−2k(s+k−ks−2k2)f(a)−2(1−k−s)f(kb)−2ksf(b),Dfn([xij],[yij]):=2k[fn([xij]+k[yij])+fn(k[xij]+[yij])]−k(1−s+k+ks+2k2)fn([xij]+[yij])−k(1−s−3k+ks+2k2)fn([xij]−[yij])−2kfn(k[xij])−2k(s+k−ks−2k2)fn([xij])−2(1−k−s)fn(k[yij])−2ksfn([yij]) |
for all a,b∈X and all x=[xij],y=[yij]∈Mn(X).
We start with the following lemmas which will be used in this paper.
Lemma 3.1. ([25]) Let V and W be real vector spaces. If an odd mapping f: V→W satisfies the functional Eq (1.1), then f is additive.
Lemma 3.2. ([25]) Let V and W be real vector spaces. If an even mapping f: V→W satisfies the functional Eq (1.1), then f is quadratic.
Theorem 3.3. Let φo: X2→[0,∞) be a function such that for some real number α with 0<α<k,
φo(ka,kb)=αφo(a,b) | (3.1) |
for all a,b∈X. Suppose that an odd mapping f: X→Y satisfies the inequality
{μn(Dfn([xij],[yij]),t)≥tt+∑ni,j=1φo(xij,yij),νn(Dfn([xij],[yij]),t)≤∑ni,j=1φo(xij,yij)t+∑ni,j=1φo(xij,yij) | (3.2) |
for all x=[xij],y=[yij]∈Mn(X) and all t>0. Then there exists a unique additive mapping A: X→Y such that
{μn(fn([xij])−An([xij]),t)≥(k−α)(2k+s−1)t(k−α)(2k+s−1)t+n2∑ni,j=1φo(0,xij),νn(fn([xij])−An([xij]),t)≤n2∑ni,j=1φo(0,xij)(k−α)(2k+s−1)t+n2∑ni,j=1φo(0,xij) | (3.3) |
for all x=[xij]∈Mn(X) and all t>0.
Proof. When n=1, (3.2) is equivalent to
μ(Df(a,b),t)≥tt+φo(a,b)andν(Df(a,b),t)≤φo(a,b)t+φo(a,b) | (3.4) |
for all a,b∈X and all t>0. Putting a=0 in (3.4), we have
{μ(2(2k+s−1)f(kb)−2(2k+s−1)kf(b),t)≥tt+φo(0,b),ν(2(2k+s−1)f(kb)−2(2k+s−1)kf(b),t)≤φo(0,b)t+φo(0,b) | (3.5) |
for all b∈X and all t>0. Replacing b by kpa in (3.5) and using (3.1), we get
{μ(f(kp+1a)kp+1−f(kpa)kp,t2k(2k+s−1)kp)≥tt+αpφo(0,a),ν(f(kp+1a)kp+1−f(kpa)kp,t2k(2k+s−1)kp)≤αpφo(0,a)t+αpφo(0,a) | (3.6) |
for all a∈X and all t>0. It follows from (3.6) that
{μ(f(kp+1a)kp+1−f(kpa)kp,αpt2k(2k+s−1)kp)≥tt+φo(0,a),ν(f(kp+1a)kp+1−f(kpa)kp,αpt2k(2k+s−1)kp)≤φo(0,a)t+φo(0,a) | (3.7) |
for all a∈X and all t>0. It follows from
f(kpa)kp−f(a)=p−1∑ℓ=0(f(kℓ+1a)kℓ+1−f(kℓa)kℓ) |
and (3.7) that
{μ(f(kpa)kp−f(a),∑p−1ℓ=0αℓt2k(2k+s−1)kℓ)≥∏p−1ℓ=0μ(f(kℓ+1a)kℓ+1−f(kℓa)kℓ,αℓt2k(2k+s−1)kℓ)≥tt+φo(0,a),ν(f(kpa)kp−f(a),∑p−1ℓ=0αℓt2k(2k+s−1)kℓ)≤∐p−1ℓ=0ν(f(kℓ+1a)kℓ+1−f(kℓa)kℓ,αℓt2k(2k+s−1)kℓ)≤φo(0,a)t+φo(0,a) | (3.8) |
for all a∈X and all t>0, where
p∏j=0aj=a1∗a2∗⋯∗ap, p∐j=0aj=a1⋄a2⋄⋯⋄ap. |
By replacing a with kqa in (3.8), we have
{μ(f(kp+qa)kp+q−f(kqa)kq,∑p−1ℓ=0αℓt2k(2k+s−1)kℓ+q)≥tt+αqφo(0,a),ν(f(kp+qa)kp+q−f(kqa)kq,∑p−1ℓ=0αℓt2k(2k+s−1)kℓ+q)≤αqφo(0,a)t+αqφo(0,a) | (3.9) |
for all a∈X, t>0, p>0 and q>0. Thus
{μ(f(kp+qa)kp+q−f(kqa)kq,∑p+q−1ℓ=qαℓt2k(2k+s−1)kℓ)≥tt+φo(0,a),ν(f(kp+qa)kp+q−f(kqa)kq,∑p+q−1ℓ=qαℓt2k(2k+s−1)kℓ)≤φo(0,a)t+φo(0,a) | (3.10) |
for all a∈X, t>0, p>0 and q>0. Hence
{μ(f(kp+qa)kp+q−f(kqa)kq,t)≥tt+∑p+q−1ℓ=qαℓ2k(2k+s−1)kℓφo(0,a),ν(f(kp+qa)kp+q−f(kqa)kq,t)≤∑p+q−1ℓ=qαℓ2k(2k+s−1)kℓφo(0,a)t+∑p+q−1ℓ=qαℓ2k(2k+s−1)kℓφo(0,a) | (3.11) |
for all a∈X, t>0, p>0 and q>0. Since 0<α<k and
∞∑ℓ=0αℓ2k(2k+s−1)kℓ<∞, |
the Cauchy criterion for convergence in IFNS shows that {f(kpa)kp} is a Cauchy sequence in (Y,μ,ν,∗,⋄). Since (Y,μ,ν,∗,⋄) is an intuitionistic fuzzy Banach space, this sequence converges to some point A(a)∈Y. So one can define the mapping A: X→Y such that
A(a):=(μ,ν)−limp→∞f(kpa)kp. |
Moreover, if we put q=0 in (3.11), we get
{μ(f(kpa)kp−f(a),t)≥tt+∑p−1ℓ=0αℓ2k(2k+s−1)kℓφo(0,a),ν(f(kpa)kp−f(a),t)≤∑p−1ℓ=0αℓ2k(2k+s−1)kℓφo(0,a)t+∑p−1ℓ=0αℓ2k(2k+s−1)kℓφo(0,a) | (3.12) |
for all a∈X, t>0 and p>0. Thus, we obtain
{μ(f(a)−A(a),t)≥μ(f(a)−f(kpa)kp,t2)∗μ(f(kpa)kp−A(a),t2)≥tt+∑p−1ℓ=0αℓk(2k+s−1)kℓφo(0,a),ν(f(a)−A(a),t)≤ν(f(a)−f(kpa)kp,t2)⋄ν(f(kpa)kp−A(a),t2)≤∑p−1ℓ=0αℓk(2k+s−1)kℓφo(0,a)t+∑p−1ℓ=0αℓk(2k+s−1)kℓφo(0,a) | (3.13) |
for every a∈X, t>0 and large p. Taking the limit as p→∞ and using the definition of IFNS, we get
{μ(f(a)−A(a),t)≥(k−α)(2k+s−1)t(k−α)(2k+s−1)t+φo(0,a),ν(f(a)−A(a),t)≤φo(0,a)(k−α)(2k+s−1)t+φo(0,a). | (3.14) |
Replacing a and b by kpa and kpb in (3.4), respectively, and using (3.1), we obtain
μ(1kpDf(kpa,kpb),t)≥tt+(αk)pφo(a,b)andν(1kpDf(kpa,kpb),t)≤(αk)pφo(a,b)t+(αk)pφo(a,b) | (3.15) |
for all a,b∈X and all t>0. Letting p→∞ in (3.15), we obtain
μ(DA(a,b),t)=1andν(DA(a,b),t)=0 | (3.16) |
for all a,b∈X and all t>0. This means that A satisfies the functional Eq (1.1). Since f: X→Y is an odd mapping, the definition of A gives A(−a)=−A(a) for all a∈X. Thus, by Lemma 3.1, the mapping A: X→Y is additive. To prove the uniqueness of A, let A′: X→Y be another additive mapping satisfying (3.14). Let n=1. Then we have
{μ(A(a)−A′(a),t)=μ(A(kpa)kp−A′(kpa)kp,t)≥μ(A(kpa)kp−f(kpa)kp,t2)∗μ(f(kpa)kp−A′(kpa)kp,t2)≥(k−α)(2k+s−1)t(k−α)(2k+s−1)t+2(αk)pφo(0,a),ν(A(a)−A′(a),t)=ν(A(kpa)kp−A′(kpa)kp,t)≤ν(A(kpa)kp−f(kpa)kp,t2)⋄ν(f(kpa)kp−A′(kpa)kp,t2)≤2(αk)pφo(0,a)(k−α)(2k+s−1)t+2(αk)pφo(0,a) | (3.17) |
for all a∈X, t>0 and p>0. Letting p→∞ in (3.17), we get
μ(A(a)−A′(a),t)=1andν(A(a)−A′(a),t)=0 |
for all a∈X and t>0. Hence we get A(a)=A′(a) for all a∈X. Thus the mapping A: X→Y is a unique additive mapping.
By Lemma 2.3 and (3.14), we get
{μn(fn([xij])−An([xij]),t)≥min{μ(f(xij)−A(xij),tn2):i,j=1,…,n} ≥(k−α)(2k+s−1)t(k−α)(2k+s−1)t+n2∑ni,j=1φo(0,xij),νn(fn([xij])−An([xij]),t)≤max{ν(f(xij)−A(xij),tn2):i,j=1,…,n} ≤n2∑ni,j=1φo(0,xij)(k−α)(2k+s−1)t+n2∑ni,j=1φo(0,xij) |
for all x=[xij]∈Mn(X) and all t>0. Thus A: X→Y is a unique additive mapping satisfying (3.3), as desired. This completes the proof of the theorem.
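The mechanism of the proof, A(a)=(μ,ν)−lim f(kpa)/kp, can be watched numerically. The sketch below uses illustrative choices of ours (k=2 and f(x)=x+sin x, an additive map plus a bounded perturbation, which is compatible with a constant control function, i.e., (3.1) with α=1); it is only a picture of the direct method, not the paper's general setting:

```python
import math

# Direct-method sketch (our illustrative choices, not from the paper):
# f(x) = x + sin(x) is the additive map A(x) = x plus a bounded perturbation,
# so the scaled iterates f(k^p a)/k^p converge to A(a) as p -> infinity.
k = 2
f = lambda u: u + math.sin(u)
a = 1.7
approx = [f(k**p * a) / k**p for p in range(40)]
assert abs(approx[-1] - a) < 1e-9   # the limit recovers A(a) = a
print("f(k^p a)/k^p converges to the additive part")
```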
Theorem 3.4. Let φe: X2→[0,∞) be a function such that for some real number α with 0<α<k2,
φe(ka,kb)=αφe(a,b) | (3.18) |
for all a,b∈X. Suppose that an even mapping f: X→Y with f(0)=0 satisfies the inequality
{μn(Dfn([xij],[yij]),t)≥tt+∑ni,j=1φe(xij,yij),νn(Dfn([xij],[yij]),t)≤∑ni,j=1φe(xij,yij)t+∑ni,j=1φe(xij,yij) | (3.19) |
for all x=[xij], y=[yij]∈Mn(X) and all t>0. Then there exists a unique quadratic mapping Q: X→Y such that
{μn(fn([xij])−Qn([xij]),t)≥(k2−α)(2k+s−1)t(k2−α)(2k+s−1)t+n2∑ni,j=1φe(0,xij),νn(fn([xij])−Qn([xij]),t)≤n2∑ni,j=1φe(0,xij)(k2−α)(2k+s−1)t+n2∑ni,j=1φe(0,xij) | (3.20) |
for all x=[xij]∈Mn(X) and all t>0.
Proof. When n=1, (3.19) is equivalent to
μ(Df(a,b),t)≥tt+φe(a,b)andν(Df(a,b),t)≤φe(a,b)t+φe(a,b) | (3.21) |
for all a,b∈X and all t>0. Letting a=0 in (3.21), we obtain
{μ(2(2k+s−1)f(kb)−2(2k+s−1)k2f(b),t)≥tt+φe(0,b),ν(2(2k+s−1)f(kb)−2(2k+s−1)k2f(b),t)≤φe(0,b)t+φe(0,b) | (3.22) |
for all b∈X and all t>0. Replacing b by kpa in (3.22) and using (3.18), we get
{μ(f(kp+1a)k2(p+1)−f(kpa)k2p,t2k2(2k+s−1)k2p)≥tt+αpφe(0,a),ν(f(kp+1a)k2(p+1)−f(kpa)k2p,t2k2(2k+s−1)k2p)≤αpφe(0,a)t+αpφe(0,a) | (3.23) |
for all a∈X and all t>0. It follows from (3.23) that
{μ(f(kp+1a)k2(p+1)−f(kpa)k2p,αpt2k2(2k+s−1)k2p)≥tt+φe(0,a),ν(f(kp+1a)k2(p+1)−f(kpa)k2p,αpt2k2(2k+s−1)k2p)≤φe(0,a)t+φe(0,a) | (3.24) |
for all a∈X and all t>0. It follows from
f(kpa)k2p−f(a)=p−1∑ℓ=0(f(kℓ+1a)k2(ℓ+1)−f(kℓa)k2ℓ) |
and (3.24) that
{μ(f(kpa)k2p−f(a),∑p−1ℓ=0αℓt2k2(2k+s−1)k2ℓ)≥∏p−1ℓ=0μ(f(kℓ+1a)k2(ℓ+1)−f(kℓa)k2ℓ,αℓt2k2(2k+s−1)k2ℓ)≥tt+φe(0,a),ν(f(kpa)k2p−f(a),∑p−1ℓ=0αℓt2k2(2k+s−1)k2ℓ)≤∐p−1ℓ=0ν(f(kℓ+1a)k2(ℓ+1)−f(kℓa)k2ℓ,αℓt2k2(2k+s−1)k2ℓ)≤φe(0,a)t+φe(0,a) | (3.25) |
for all a∈X and all t>0, where
p∏j=0aj=a1∗a2∗⋯∗ap, p∐j=0aj=a1⋄a2⋄⋯⋄ap. |
By replacing a with kqa in (3.25), we have
{μ(f(kp+qa)k2(p+q)−f(kqa)k2q,∑p−1ℓ=0αℓt2k2(2k+s−1)k2(ℓ+q))≥tt+αqφe(0,a),ν(f(kp+qa)k2(p+q)−f(kqa)k2q,∑p−1ℓ=0αℓt2k2(2k+s−1)k2(ℓ+q))≤αqφe(0,a)t+αqφe(0,a) | (3.26) |
for all a∈X, t>0, p>0 and q>0. Thus
{μ(f(kp+qa)k2(p+q)−f(kqa)k2q,∑p+q−1ℓ=qαℓt2k2(2k+s−1)k2ℓ)≥tt+φe(0,a),ν(f(kp+qa)k2(p+q)−f(kqa)k2q,∑p+q−1ℓ=qαℓt2k2(2k+s−1)k2ℓ)≤φe(0,a)t+φe(0,a) | (3.27) |
for all a∈X, t>0, p>0 and q>0. Hence
{μ(f(kp+qa)k2(p+q)−f(kqa)k2q,t)≥tt+∑p+q−1ℓ=qαℓ2k2(2k+s−1)k2ℓφe(0,a),ν(f(kp+qa)k2(p+q)−f(kqa)k2q,t)≤∑p+q−1ℓ=qαℓ2k2(2k+s−1)k2ℓφe(0,a)t+∑p+q−1ℓ=qαℓ2k2(2k+s−1)k2ℓφe(0,a) | (3.28) |
for all a∈X, t>0, p>0 and q>0. Since 0<α<k2 and
∞∑ℓ=0αℓ2k2(2k+s−1)k2ℓ<∞, |
the Cauchy criterion for convergence in IFNS shows that {f(kpa)k2p} is a Cauchy sequence in (Y,μ,ν,∗,⋄). Since (Y,μ,ν,∗,⋄) is an intuitionistic fuzzy Banach space, this sequence converges to some point Q(a)∈Y. So one can define the mapping Q: X→Y such that
Q(a):=(μ,ν)−limp→∞f(kpa)k2p. |
Moreover, if we put q=0 in (3.28), we get
{μ(f(kpa)k2p−f(a),t)≥tt+∑p−1ℓ=0αℓ2k2(2k+s−1)k2ℓφe(0,a),ν(f(kpa)k2p−f(a),t)≤∑p−1ℓ=0αℓ2k2(2k+s−1)k2ℓφe(0,a)t+∑p−1ℓ=0αℓ2k2(2k+s−1)k2ℓφe(0,a) | (3.29) |
for all a∈X, t>0 and p>0. Thus, we obtain
{μ(f(a)−Q(a),t)≥μ(f(a)−f(kpa)k2p,t2)∗μ(f(kpa)k2p−Q(a),t2)≥tt+∑p−1ℓ=0αℓk2(2k+s−1)k2ℓφe(0,a),ν(f(a)−Q(a),t)≤ν(f(a)−f(kpa)k2p,t2)⋄ν(f(kpa)k2p−Q(a),t2)≤∑p−1ℓ=0αℓk2(2k+s−1)k2ℓφe(0,a)t+∑p−1ℓ=0αℓk2(2k+s−1)k2ℓφe(0,a) | (3.30) |
for every a∈X, t>0 and large p. Taking the limit as p→∞ and using the definition of IFNS, we get
{μ(f(a)−Q(a),t)≥(k2−α)(2k+s−1)t(k2−α)(2k+s−1)t+φe(0,a),ν(f(a)−Q(a),t)≤φe(0,a)(k2−α)(2k+s−1)t+φe(0,a). | (3.31) |
Replacing a and b by kpa and kpb in (3.21), respectively, and using (3.18), we obtain
μ(1k2pDf(kpa,kpb),t)≥tt+(αk2)pφe(a,b),ν(1k2pDf(kpa,kpb),t)≤(αk2)pφe(a,b)t+(αk2)pφe(a,b) | (3.32) |
for all a,b∈X and all t>0. Letting p→∞ in (3.32), we obtain
μ(DQ(a,b),t)=1andν(DQ(a,b),t)=0 | (3.33) |
for all a,b∈X and all t>0. This means that Q satisfies the functional Eq (1.1). Since f: X→Y is an even mapping, the definition of Q gives Q(−a)=Q(a) for all a∈X. Thus, by Lemma 3.2, the mapping Q: X→Y is quadratic. To prove the uniqueness of Q, let Q′: X→Y be another quadratic mapping satisfying (3.31). Let n=1. Then we have
{μ(Q(a)−Q′(a),t)=μ(Q(kpa)k2p−Q′(kpa)k2p,t) ≥μ(Q(kpa)k2p−f(kpa)k2p,t2)∗μ(f(kpa)k2p−Q′(kpa)k2p,t2) ≥(k2−α)(2k+s−1)t(k2−α)(2k+s−1)t+2(αk2)pφe(0,a),ν(Q(a)−Q′(a),t)=ν(Q(kpa)k2p−Q′(kpa)k2p,t) ≤ν(Q(kpa)k2p−f(kpa)k2p,t2)⋄ν(f(kpa)k2p−Q′(kpa)k2p,t2) ≤2(αk2)pφe(0,a)(k2−α)(2k+s−1)t+2(αk2)pφe(0,a) | (3.34) |
for all a∈X, t>0 and p>0. Letting p→∞ in (3.34), we get
μ(Q(a)−Q′(a),t)=1andν(Q(a)−Q′(a),t)=0 |
for all a∈X and t>0. Hence we get Q(a)=Q′(a) for all a∈X. Thus the mapping Q: X→Y is a unique quadratic mapping.
By Lemma 2.3 and (3.31), we get
{μn(fn([xij])−Qn([xij]),t)≥min{μ(f(xij)−Q(xij),tn2):i,j=1,…,n}≥(k2−α)(2k+s−1)t(k2−α)(2k+s−1)t+n2∑ni,j=1φe(0,xij),νn(fn([xij])−Qn([xij]),t)≤max{ν(f(xij)−Q(xij),tn2):i,j=1,…,n}≤n2∑ni,j=1φe(0,xij)(k2−α)(2k+s−1)t+n2∑ni,j=1φe(0,xij) |
for all x=[xij]∈Mn(X) and all t>0. Thus Q: X→Y is a unique quadratic mapping satisfying (3.20), as desired. This completes the proof of the theorem.
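The quadratic analogue of the previous numeric sketch (again with illustrative choices of ours: k=2 and the even map f(x)=x²+cos x−1, satisfying f(0)=0, a bounded perturbation of Q(x)=x²) shows the iterates f(kpa)/k2p of Theorem 3.4 converging to the quadratic part:

```python
import math

# Quadratic direct-method sketch (our illustrative choices): for the even
# map f(x) = x^2 + cos(x) - 1 (so f(0) = 0), a bounded perturbation of
# Q(x) = x^2, the iterates f(k^p a)/k^(2p) converge to the quadratic part.
k = 2
f = lambda u: u**2 + math.cos(u) - 1.0
a = 1.3
vals = [f(k**p * a) / k**(2 * p) for p in range(30)]
assert abs(vals[-1] - a**2) < 1e-9   # the limit recovers Q(a) = a^2
print("f(k^p a)/k^(2p) converges to the quadratic part")
```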
Theorem 3.5. Let φ: X2→[0,∞) be a function such that for some real number α with 0<α<k,
φ(ka,kb)=αφ(a,b) | (3.35) |
for all a,b∈X. Suppose that a mapping f: X→Y with f(0)=0 satisfies the inequality
{μn(Dfn([xij],[yij]),t)≥tt+∑ni,j=1φ(xij,yij),νn(Dfn([xij],[yij]),t)≤∑ni,j=1φ(xij,yij)t+∑ni,j=1φ(xij,yij) | (3.36) |
for all x=[xij],y=[yij]∈Mn(X) and all t>0. Then there exist a unique quadratic mapping Q: X→Y and a unique additive mapping A: X→Y such that
{μn(fn([xij])−Qn([xij])−An([xij]),t)≥(k−α)(2k+s−1)t(k−α)(2k+s−1)t+2n2∑ni,j=1˜φ(0,xij),νn(fn([xij])−Qn([xij])−An([xij]),t)≤2n2∑ni,j=1˜φ(0,xij)(k−α)(2k+s−1)t+2n2∑ni,j=1˜φ(0,xij) | (3.37) |
for all x=[xij]∈Mn(X) and all t>0, ˜φ(a,b)=φ(a,b)+φ(−a,−b) for all a,b∈X.
Proof. When n=1, (3.36) is equivalent to
μ(Df(a,b),t)≥tt+φ(a,b)andν(Df(a,b),t)≤φ(a,b)t+φ(a,b) | (3.38) |
for all a,b∈X and all t>0. Let
fe(a)=f(a)+f(−a)2 |
for all a∈X. Then fe(0)=0 and fe(−a)=fe(a), and we have
{μ(Dfe(a,b),t)=μ(12Df(a,b)+12Df(−a,−b),t)=μ(Df(a,b)+Df(−a,−b),2t)≥μ(Df(a,b),t)∗μ(Df(−a,−b),t)≥min{μ(Df(a,b),t),μ(Df(−a,−b),t)}≥tt+˜φ(a,b),ν(Dfe(a,b),t)=ν(12Df(a,b)+12Df(−a,−b),t)=ν(Df(a,b)+Df(−a,−b),2t)≤ν(Df(a,b),t)⋄ν(Df(−a,−b),t)≤max{ν(Df(a,b),t),ν(Df(−a,−b),t)}≤˜φ(a,b)t+˜φ(a,b) | (3.39) |
for all a∈X and all t>0. Let
fo(a)=f(a)−f(−a)2 |
for all a∈X. Then fo(0)=0 and fo(−a)=−fo(a), and we obtain
{μ(Dfo(a,b),t)=μ(12Df(a,b)−12Df(−a,−b),t)=μ(Df(a,b)−Df(−a,−b),2t)≥μ(Df(a,b),t)∗μ(Df(−a,−b),t)=min{μ(Df(a,b),t),μ(Df(−a,−b),t)}≥tt+˜φ(a,b),ν(Dfo(a,b),t)=ν(12Df(a,b)−12Df(−a,−b),t)=ν(Df(a,b)−Df(−a,−b),2t)≤ν(Df(a,b),t)⋄ν(Df(−a,−b),t)=max{ν(Df(a,b),t),ν(Df(−a,−b),t)}≤˜φ(a,b)t+˜φ(a,b) | (3.40) |
for all a∈X and all t>0. It follows from the definition of ˜φ that ˜φ(ka,kb)=α˜φ(a,b) for all a,b∈X. It is easy to check that the conditions of Theorems 3.3 and 3.4 are satisfied (note that 0<α<k<k² since k>1). Then, applying the proofs of Theorems 3.3 and 3.4, we know that there exist a unique quadratic mapping Q: X→Y and a unique additive mapping A: X→Y satisfying
{μ(fe(a)−Q(a),t)≥(k2−α)(2k+s−1)t(k2−α)(2k+s−1)t+˜φ(0,a),ν(fe(a)−Q(a),t)≤˜φ(0,a)(k2−α)(2k+s−1)t+˜φ(0,a) | (3.41) |
and
{μ(fo(a)−A(a),t)≥(k−α)(2k+s−1)t(k−α)(2k+s−1)t+˜φ(0,a),ν(fo(a)−A(a),t)≤˜φ(0,a)(k−α)(2k+s−1)t+˜φ(0,a) | (3.42) |
for all a∈X and all t>0. Therefore
{μ(f(a)−Q(a)−A(a),t)=μ(fe(a)−Q(a)+fo(a)−A(a),t)≥μ(fe(a)−Q(a),t2)∗μ(fo(a)−A(a),t2)=min{μ(fe(a)−Q(a),t2),μ(fo(a)−A(a),t2)}≥min{(k2−α)(2k+s−1)t(k2−α)(2k+s−1)t+2˜φ(0,a),(k−α)(2k+s−1)t(k−α)(2k+s−1)t+2˜φ(0,a)}=(k−α)(2k+s−1)t(k−α)(2k+s−1)t+2˜φ(0,a),ν(f(a)−Q(a)−A(a),t)=ν(fe(a)−Q(a)+fo(a)−A(a),t)≤ν(fe(a)−Q(a),t2)⋄ν(fo(a)−A(a),t2)=max{ν(fe(a)−Q(a),t2),ν(fo(a)−A(a),t2)}≤max{2˜φ(0,a)(k2−α)(2k+s−1)t+2˜φ(0,a),2˜φ(0,a)(k−α)(2k+s−1)t+2˜φ(0,a)}=2˜φ(0,a)(k−α)(2k+s−1)t+2˜φ(0,a). | (3.43) |
By Lemma 2.3 and (3.43), we have
{μn(fn([xij])−Qn([xij])−An([xij]),t)≥min{μ(f(xij)−Q(xij)−A(xij),tn2):i,j=1,…,n}≥(k−α)(2k+s−1)t(k−α)(2k+s−1)t+2n2∑ni,j=1˜φ(0,xij),νn(fn([xij])−Qn([xij])−An([xij]),t)≤max{ν(f(xij)−Q(xij)−A(xij),tn2):i,j=1,…,n}≤2n2∑ni,j=1˜φ(0,xij)(k−α)(2k+s−1)t+2n2∑ni,j=1˜φ(0,xij) |
for all x=[xij]∈Mn(X) and all t>0. Thus there exist a unique quadratic mapping Q: X→Y and a unique additive mapping A: X→Y satisfying (3.37), as desired. This completes the proof of the theorem.
Corollary 3.6. Let r,θ be positive real numbers with r<1. Suppose that a mapping f: X→Y with f(0)=0 satisfies the inequality
{μn(Dfn([xij],[yij]),t)≥tt+∑ni,j=1θ(‖xij‖r+‖yij‖r),νn(Dfn([xij],[yij]),t)≤∑ni,j=1θ(‖xij‖r+‖yij‖r)t+∑ni,j=1θ(‖xij‖r+‖yij‖r) | (3.44) |
for all x=[xij],y=[yij]∈Mn(X) and all t>0. Then there exist a unique quadratic mapping Q: X→Y and a unique additive mapping A: X→Y such that
{μn(fn([xij])−Qn([xij])−An([xij]),t)≥(k−kr)(2k+s−1)t(k−kr)(2k+s−1)t+4n2∑ni,j=1θ‖xij‖r,νn(fn([xij])−Qn([xij])−An([xij]),t)≤4n2∑ni,j=1θ‖xij‖r(k−kr)(2k+s−1)t+4n2∑ni,j=1θ‖xij‖r | (3.45) |
for all x=[xij]∈Mn(X) and all t>0.
Proof. The proof follows from Theorem 3.5 by taking φ(a,b)=θ(‖a‖r+‖b‖r) for all a,b∈X.
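The control function of Corollary 3.6 does satisfy the hypothesis of Theorem 3.5: φ(ka,kb)=θ(‖ka‖r+‖kb‖r)=krφ(a,b), so (3.35) holds with α=kr, and 0<kr<k precisely because r<1. A quick numeric check with sample values of ours:

```python
# Check (our sample values) that phi(a,b) = theta*(|a|^r + |b|^r) satisfies
# phi(ka,kb) = k^r * phi(a,b), i.e., condition (3.35) with alpha = k^r < k.
theta, r, k = 0.5, 0.8, 3.0          # r < 1 as required in Corollary 3.6
phi = lambda a, b: theta * (abs(a)**r + abs(b)**r)
alpha = k**r
a, b = 2.0, -1.5
assert abs(phi(k * a, k * b) - alpha * phi(a, b)) < 1e-12
assert 0 < alpha < k                 # so Theorem 3.5 applies with this alpha
print("phi is k^r-homogeneous")
```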
In this section, we will prove the Hyers-Ulam stability of the functional Eq (1.1) in matrix intuitionistic fuzzy normed spaces by applying the fixed point method.
Theorem 4.1. Let φo: X2→[0,∞) be a function such that, for some real number ρ with 0<ρ<1,
φo(a,b)=ρkφo(ka,kb) | (4.1) |
for all a,b∈X. Suppose that an odd mapping f: X→Y satisfies the inequality
{μn(Dfn([xij],[yij]),t)≥tt+∑ni,j=1φo(xij,yij),νn(Dfn([xij],[yij]),t)≤∑ni,j=1φo(xij,yij)t+∑ni,j=1φo(xij,yij) | (4.2) |
for all x=[xij],y=[yij]∈Mn(X) and all t>0. Then there exists a unique additive mapping A: X→Y such that
{μn(fn([xij])−An([xij]),t)≥2k(2k+s−1)(1−ρ)t2k(2k+s−1)(1−ρ)t+ρn2∑ni,j=1φo(0,xij),νn(fn([xij])−An([xij]),t)≤ρn2∑ni,j=1φo(0,xij)2k(2k+s−1)(1−ρ)t+ρn2∑ni,j=1φo(0,xij) | (4.3) |
for all x=[xij]∈Mn(X) and all t>0.
Proof. When n=1, similar to the proof of Theorem 3.3, we have
{μ(2(2k+s−1)f(ka)−2(2k+s−1)kf(a),t)≥tt+φo(0,a),ν(2(2k+s−1)f(ka)−2(2k+s−1)kf(a),t)≤φo(0,a)t+φo(0,a) | (4.4) |
for all a∈X and all t>0.
Let S1={g1:X→Y}, and introduce a generalized metric d1 on S1 as follows:
d1(g1,h1):=inf{λ∈R+|{μ(g1(a)−h1(a),λt)≥tt+φo(0,a),ν(g1(a)−h1(a),λt)≤φo(0,a)t+φo(0,a),∀a∈X,∀t>0}. |
It is easy to prove that (S1,d1) is a complete generalized metric space ([3,13]). Now, we define the mapping J1: S1→S1 by
J1g1(a):=kg1(ak),for allg1∈S1anda∈X. | (4.5) |
Let g1,h1∈S1 and let λ∈R+ be an arbitrary constant with d1(g1,h1)≤λ. From the definition of d1, we get
{μ(g1(a)−h1(a),λt)≥tt+φo(0,a),ν(g1(a)−h1(a),λt)≤φo(0,a)t+φo(0,a) |
for all a∈X and t>0. Therefore, using (4.1), we get
{μ(J1g1(a)−J1h1(a),λρt)=μ(kg1(ak)−kh1(ak),λρt)=μ(g1(ak)−h1(ak),λρtk)≥ρktρkt+ρkφo(0,a)=tt+φo(0,a),ν(J1g1(a)−J1h1(a),λρt)=ν(kg1(ak)−kh1(ak),λρt)=ν(g1(ak)−h1(ak),λρtk)≤ρkφo(0,a)ρkt+ρkφo(0,a)=φo(0,a)t+φo(0,a) | (4.6) |
for some ρ<1, all a∈X and all t>0. Hence, it holds that d1(J1g1,J1h1)≤λρ, that is, d1(J1g1,J1h1)≤ρd1(g1,h1) for all g1,h1∈S1.
Furthermore, by (4.1) and (4.4), we obtain the inequality
d1(f,J1f)≤ρ/(2k(2k+s−1)).
It follows from Lemma 2.4 that the sequence Jp1f converges to a fixed point A of J1, that is, for all a∈X and all t>0,
A:X→Y,A(a):=(μ,ν)−limp→∞kpf(akp) | (4.7) |
and
A(ka)=kA(a). | (4.8) |
Meanwhile, A is the unique fixed point of J1 in the set
S∗1={g1∈S1:d1(f,g1)<∞}. |
Thus, there exists a λ∈R+ such that
{μ(f(a)−A(a),λt)≥tt+φo(0,a),ν(f(a)−A(a),λt)≤φo(0,a)t+φo(0,a) |
for all a∈X and all t>0. Also,
d1(f,A)≤(1/(1−ρ))d1(f,J1f)≤ρ/(2k(1−ρ)(2k+s−1)).
This means that the following inequality
{μ(f(a)−A(a),t)≥2k(2k+s−1)(1−ρ)t2k(2k+s−1)(1−ρ)t+ρφo(0,a),ν(f(a)−A(a),t)≤ρφo(0,a)2k(2k+s−1)(1−ρ)t+ρφo(0,a) | (4.9) |
holds for all a∈X and all t>0. It follows from (3.4) and (4.1) that
μ(kpDf(akp,bkp),t)≥tt+ρpφo(a,b),ν(kpDf(akp,bkp),t)≤ρpφo(a,b)t+ρpφo(a,b) | (4.10) |
for all a,b∈X and all t>0. Letting p→∞ in (4.10), we obtain
μ(DA(a,b),t)=1andν(DA(a,b),t)=0 | (4.11) |
for all a,b∈X and all t>0. This means that A satisfies the functional Eq (1.1). Since f: X→Y is an odd mapping, and using the definition A, we have A(−a)=−A(a) for all a∈X. Thus by Lemma 3.1, the mapping A: X→Y is additive.
By Lemma 2.3 and (4.9), we get
{μn(fn([xij])−An([xij]),t)≥min{μ(f(xij)−A(xij),tn2):i,j=1,⋯,n}≥2k(2k+s−1)(1−ρ)t2k(2k+s−1)(1−ρ)t+ρn2∑ni,j=1φo(0,xij),νn(fn([xij])−An([xij]),t)≤max{ν(f(xij)−A(xij),tn2):i,j=1,…,n}≤ρn2∑ni,j=1φo(0,xij)2k(2k+s−1)(1−ρ)t+ρn2∑ni,j=1φo(0,xij) |
for all x=[xij]∈Mn(X) and all t>0. Thus A: X→Y is a unique additive mapping satisfying (4.3), as desired. This completes the proof of the theorem.
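The operator J1 of (4.5) can also be iterated numerically. In the sketch below (our illustrative choices: k=2 and the odd map f(x)=x+x³, whose cubic perturbation is compatible with a control function of the form (4.1), since cubic homogeneity corresponds to ρ=1/k²), the iterates Jp1f(a)=kpf(a/kp) converge to the additive mapping A(a)=a:

```python
# Fixed-point-method sketch (our illustrative choices): the odd map
# f(x) = x + x^3 is an additive map plus a perturbation compatible with a
# control function of the form (4.1) (cubic homogeneity gives rho = 1/k^2).
# The iterates J1^p f(a) = k^p f(a/k^p) from (4.5) converge to A(a) = a.
k = 2
f = lambda u: u + u**3
a = 0.9
vals = [k**p * f(a / k**p) for p in range(40)]
assert abs(vals[-1] - a) < 1e-12   # the fixed point of J1 is the additive map
print("k^p f(a/k^p) converges to the additive mapping")
```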
Theorem 4.2. Let φe: X2→[0,∞) be a function such that, for some real number ρ with 0<ρ<1,
φe(a,b)=ρk2φe(ka,kb) | (4.12) |
for all a,b∈X. Suppose that an even mapping f: X→Y satisfies the inequality
{μn(Dfn([xij],[yij]),t)≥tt+∑ni,j=1φe(xij,yij),νn(Dfn([xij],[yij]),t)≤∑ni,j=1φe(xij,yij)t+∑ni,j=1φe(xij,yij) | (4.13) |
for all x=[xij],y=[yij]∈Mn(X) and all t>0. Then there exists a unique quadratic mapping Q: X→Y such that
{μn(fn([xij])−Qn([xij]),t)≥2k2(2k+s−1)(1−ρ)t2k2(2k+s−1)(1−ρ)t+ρn2∑ni,j=1φe(0,xij),νn(fn([xij])−Qn([xij]),t)≤ρn2∑ni,j=1φe(0,xij)2k2(2k+s−1)(1−ρ)t+ρn2∑ni,j=1φe(0,xij) | (4.14) |
for all x=[xij]∈Mn(X) and all t>0.
Proof. When n=1, similar to the proof of Theorem 3.4, we obtain
{μ(2(2k+s−1)f(ka)−2(2k+s−1)k2f(a),t)≥tt+φe(0,a),ν(2(2k+s−1)f(ka)−2(2k+s−1)k2f(a),t)≤φe(0,a)t+φe(0,a) | (4.15) |
for all a∈X and all t>0.
Let S2:={g2:X→Y}, and introduce a generalized metric d2 on S2 as follows:
d2(g2,h2):=inf{λ∈R+|{μ(g2(a)−h2(a),λt)≥tt+φe(0,a),ν(g2(a)−h2(a),λt)≤φe(0,a)t+φe(0,a),∀a∈X,∀t>0}. |
It is easy to prove that (S2,d2) is a complete generalized metric space ([3,13]). Now, we define the mapping J2: S2→S2 by
J2g2(a):=k2g2(ak),for allg2∈S2anda∈X. | (4.16) |
Let g2,h2∈S2 and let λ∈R+ be an arbitrary constant with d2(g2,h2)≤λ. From the definition of d2, we get
{μ(g2(a)−h2(a),λt)≥tt+φe(0,a),ν(g2(a)−h2(a),λt)≤φe(0,a)t+φe(0,a) |
for all a∈X and t>0. Therefore, using (4.12), we get
$$\begin{cases}
\mu\bigl(J_2g_2(a)-J_2h_2(a),\lambda\rho t\bigr)=\mu\Bigl(k^{2}g_2\bigl(\tfrac{a}{k}\bigr)-k^{2}h_2\bigl(\tfrac{a}{k}\bigr),\lambda\rho t\Bigr)=\mu\Bigl(g_2\bigl(\tfrac{a}{k}\bigr)-h_2\bigl(\tfrac{a}{k}\bigr),\dfrac{\lambda\rho t}{k^{2}}\Bigr)\ge\dfrac{\frac{\rho}{k^{2}}t}{\frac{\rho}{k^{2}}t+\frac{\rho}{k^{2}}\varphi_e(0,a)}=\dfrac{t}{t+\varphi_e(0,a)},\\[2mm]
\nu\bigl(J_2g_2(a)-J_2h_2(a),\lambda\rho t\bigr)=\nu\Bigl(k^{2}g_2\bigl(\tfrac{a}{k}\bigr)-k^{2}h_2\bigl(\tfrac{a}{k}\bigr),\lambda\rho t\Bigr)=\nu\Bigl(g_2\bigl(\tfrac{a}{k}\bigr)-h_2\bigl(\tfrac{a}{k}\bigr),\dfrac{\lambda\rho t}{k^{2}}\Bigr)\le\dfrac{\frac{\rho}{k^{2}}\varphi_e(0,a)}{\frac{\rho}{k^{2}}t+\frac{\rho}{k^{2}}\varphi_e(0,a)}=\dfrac{\varphi_e(0,a)}{t+\varphi_e(0,a)}
\end{cases}\tag{4.17}$$
for all $a\in X$ and all $t>0$. Hence, it holds that $d_2(J_2g_2,J_2h_2)\le\lambda\rho$, that is, $d_2(J_2g_2,J_2h_2)\le\rho\,d_2(g_2,h_2)$ for all $g_2,h_2\in S_2$, so $J_2$ is a strictly contractive mapping with Lipschitz constant $\rho<1$.
Furthermore, by (4.12) and (4.15), we obtain the inequality
$$d_2(f,J_2f)\le\frac{\rho}{2k^{2}(2k+s-1)}.$$
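In detail (a short expansion of this step, using only (4.12) and (4.15)): replacing $a$ by $a/k$ in (4.15) and rescaling, we have
$$\mu\Bigl(f(a)-k^{2}f\bigl(\tfrac{a}{k}\bigr),\frac{t}{2(2k+s-1)}\Bigr)\ge\frac{t}{t+\varphi_e\bigl(0,\tfrac{a}{k}\bigr)}=\frac{t}{t+\frac{\rho}{k^{2}}\varphi_e(0,a)},$$
and the substitution $t=\frac{\rho}{k^{2}}u$ turns this into
$$\mu\Bigl(f(a)-J_2f(a),\frac{\rho\,u}{2k^{2}(2k+s-1)}\Bigr)\ge\frac{u}{u+\varphi_e(0,a)},$$
with the analogous estimate for $\nu$; the bound on $d_2(f,J_2f)$ then follows from the definition of $d_2$.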
It follows from Lemma 2.4 that the sequence $J_2^{p}f$ converges to a fixed point $Q$ of $J_2$, that is, for all $a\in X$ and all $t>0$,
$$Q\colon X\to Y,\qquad Q(a):=(\mu,\nu)\text{-}\lim_{p\to\infty}k^{2p}f\Bigl(\frac{a}{k^{p}}\Bigr)\tag{4.18}$$
and
$$Q(ka)=k^{2}Q(a).\tag{4.19}$$
Meanwhile, $Q$ is the unique fixed point of $J_2$ in the set
$$S_2^{*}=\{g_2\in S_2: d_2(f,g_2)<\infty\}.$$
Thus there exists a $\lambda\in\mathbb{R}_{+}$ such that
$$\begin{cases}
\mu\bigl(f(a)-Q(a),\lambda t\bigr)\ge\dfrac{t}{t+\varphi_e(0,a)},\\[2mm]
\nu\bigl(f(a)-Q(a),\lambda t\bigr)\le\dfrac{\varphi_e(0,a)}{t+\varphi_e(0,a)}
\end{cases}$$
for all $a\in X$ and all $t>0$. Also,
$$d_2(f,Q)\le\frac{1}{1-\rho}\,d_2(f,J_2f)\le\frac{\rho}{2k^{2}(1-\rho)(2k+s-1)}.$$
This means that the following inequality
$$\begin{cases}
\mu\bigl(f(a)-Q(a),t\bigr)\ge\dfrac{2k^{2}(2k+s-1)(1-\rho)t}{2k^{2}(2k+s-1)(1-\rho)t+\rho\varphi_e(0,a)},\\[2mm]
\nu\bigl(f(a)-Q(a),t\bigr)\le\dfrac{\rho\varphi_e(0,a)}{2k^{2}(2k+s-1)(1-\rho)t+\rho\varphi_e(0,a)}
\end{cases}\tag{4.20}$$
holds for all a∈X and all t>0. The rest of the proof is similar to the proof of Theorem 4.1. This completes the proof of the theorem.
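Indeed, (4.20) is just the estimate on $d_2(f,Q)$ unwound: writing $\lambda_0=\frac{\rho}{2k^{2}(1-\rho)(2k+s-1)}$, the definition of $d_2$ gives $\mu\bigl(f(a)-Q(a),\lambda_0 t\bigr)\ge\frac{t}{t+\varphi_e(0,a)}$, and replacing $t$ by $u/\lambda_0$ yields
$$\mu\bigl(f(a)-Q(a),u\bigr)\ge\frac{u}{u+\lambda_0\varphi_e(0,a)}=\frac{2k^{2}(2k+s-1)(1-\rho)u}{2k^{2}(2k+s-1)(1-\rho)u+\rho\varphi_e(0,a)},$$
together with the corresponding bound for $\nu$.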
Theorem 4.3. Let $\varphi\colon X^{2}\to[0,\infty)$ be a function such that, for some real number $\rho$ with $0<\rho<k$,
$$\varphi(a,b)=\frac{\rho}{k^{2}}\,\varphi(ka,kb)\tag{4.21}$$
for all $a,b\in X$. Suppose that a mapping $f\colon X\to Y$ with $f(0)=0$ satisfies the inequality
$$\begin{cases}
\mu_n\bigl(Df_n([x_{ij}],[y_{ij}]),t\bigr)\ge\dfrac{t}{t+\sum_{i,j=1}^{n}\varphi(x_{ij},y_{ij})},\\[2mm]
\nu_n\bigl(Df_n([x_{ij}],[y_{ij}]),t\bigr)\le\dfrac{\sum_{i,j=1}^{n}\varphi(x_{ij},y_{ij})}{t+\sum_{i,j=1}^{n}\varphi(x_{ij},y_{ij})}
\end{cases}\tag{4.22}$$
for all $x=[x_{ij}],y=[y_{ij}]\in M_n(X)$ and all $t>0$. Then there exist a unique quadratic mapping $Q\colon X\to Y$ and a unique additive mapping $A\colon X\to Y$ such that
$$\begin{cases}
\mu_n\bigl(f_n([x_{ij}])-Q_n([x_{ij}])-A_n([x_{ij}]),t\bigr)\ge\dfrac{k(2k+s-1)(1-\rho)t}{k(2k+s-1)(1-\rho)t+\rho n^{2}\sum_{i,j=1}^{n}\tilde{\varphi}(0,x_{ij})},\\[2mm]
\nu_n\bigl(f_n([x_{ij}])-Q_n([x_{ij}])-A_n([x_{ij}]),t\bigr)\le\dfrac{\rho n^{2}\sum_{i,j=1}^{n}\tilde{\varphi}(0,x_{ij})}{k(2k+s-1)(1-\rho)t+\rho n^{2}\sum_{i,j=1}^{n}\tilde{\varphi}(0,x_{ij})}
\end{cases}\tag{4.23}$$
for all $x=[x_{ij}]\in M_n(X)$ and all $t>0$, where $\tilde{\varphi}(a,b)=\varphi(a,b)+\varphi(-a,-b)$ for all $a,b\in X$.
Proof. The proof follows from Theorems 4.1 and 4.2, together with an argument similar to that of Theorem 3.5. This completes the proof of the theorem.
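For the reader's convenience, the standard decomposition behind this argument is the splitting of $f$ into its odd and even parts,
$$f_o(a)=\frac{f(a)-f(-a)}{2},\qquad f_e(a)=\frac{f(a)+f(-a)}{2},$$
so that $f=f_o+f_e$. Since $Df_o$ and $Df_e$ are averages of $Df$ evaluated at $\pm$ arguments, each of $f_o,f_e$ satisfies an inequality of type (4.22) with the control function $\tilde{\varphi}(a,b)=\varphi(a,b)+\varphi(-a,-b)$; Theorem 4.1 applied to $f_o$ yields the additive mapping $A$, Theorem 4.2 applied to $f_e$ yields the quadratic mapping $Q$, and combining the two estimates via $\mu(u+v,t)\ge\min\bigl\{\mu\bigl(u,\tfrac{t}{2}\bigr),\mu\bigl(v,\tfrac{t}{2}\bigr)\bigr\}$ gives (4.23).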
Corollary 4.4. Let $r,\theta$ be positive real numbers with $r>2$. Suppose that a mapping $f\colon X\to Y$ with $f(0)=0$ satisfies the inequality
$$\begin{cases}
\mu_n\bigl(Df_n([x_{ij}],[y_{ij}]),t\bigr)\ge\dfrac{t}{t+\sum_{i,j=1}^{n}\theta\bigl(\|x_{ij}\|^{r}+\|y_{ij}\|^{r}\bigr)},\\[2mm]
\nu_n\bigl(Df_n([x_{ij}],[y_{ij}]),t\bigr)\le\dfrac{\sum_{i,j=1}^{n}\theta\bigl(\|x_{ij}\|^{r}+\|y_{ij}\|^{r}\bigr)}{t+\sum_{i,j=1}^{n}\theta\bigl(\|x_{ij}\|^{r}+\|y_{ij}\|^{r}\bigr)}
\end{cases}\tag{4.24}$$
for all $x=[x_{ij}],y=[y_{ij}]\in M_n(X)$ and all $t>0$. Then there exist a unique quadratic mapping $Q\colon X\to Y$ and a unique additive mapping $A\colon X\to Y$ such that
$$\begin{cases}
\mu_n\bigl(f_n([x_{ij}])-Q_n([x_{ij}])-A_n([x_{ij}]),t\bigr)\ge\dfrac{(2k+s-1)(k^{r}-k^{2})t}{(2k+s-1)(k^{r}-k^{2})t+2kn^{2}\sum_{i,j=1}^{n}\theta\|x_{ij}\|^{r}},\\[2mm]
\nu_n\bigl(f_n([x_{ij}])-Q_n([x_{ij}])-A_n([x_{ij}]),t\bigr)\le\dfrac{2kn^{2}\sum_{i,j=1}^{n}\theta\|x_{ij}\|^{r}}{(2k+s-1)(k^{r}-k^{2})t+2kn^{2}\sum_{i,j=1}^{n}\theta\|x_{ij}\|^{r}}
\end{cases}\tag{4.25}$$
for all $x=[x_{ij}]\in M_n(X)$ and all $t>0$.
Proof. Taking $\varphi(a,b)=\theta\bigl(\|a\|^{r}+\|b\|^{r}\bigr)$ for all $a,b\in X$ and $\rho=k^{2-r}$ in Theorem 4.3, we get the desired result.
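As an informal numerical sanity check (not part of the original argument), the limit construction (4.18) can be observed on a toy even mapping on the real line whose deviation from a true quadratic is bounded by $\theta\|x\|^{r}$ with $r=3>2$, as in Corollary 4.4; the choice $f(x)=x^{2}+\theta|x|^{3}$ and $k=2$ is purely illustrative:

```python
# Numerical illustration of the limit Q(a) = lim_{p->inf} k^{2p} f(a / k^p)
# from (4.18), for a toy perturbed quadratic f(x) = x^2 + theta*|x|^r.
# With r = 3 > 2 (as in Corollary 4.4), the perturbation term scales like
# k^{2p} * theta * |a|^r / k^{rp} = theta * |a|^r * k^{(2-r)p} -> 0,
# so the iterates converge to the exact quadratic part Q(a) = a^2.

def f(x, theta=0.1, r=3):
    """Toy even mapping: an exact quadratic plus a theta*|x|^r perturbation."""
    return x * x + theta * abs(x) ** r

def q_approx(a, p, k=2):
    """p-th iterate k^{2p} f(a / k^p) approximating Q(a)."""
    return k ** (2 * p) * f(a / k ** p)

a = 1.5
errors = [abs(q_approx(a, p) - a * a) for p in range(6)]
# The residual errors shrink geometrically, by a factor k^{2-r} = 1/2
# per step for k = 2, r = 3.
print(errors)
```

Each step multiplies the residual error by $k^{2-r}=1/2$, mirroring the contraction constant $\rho=k^{2-r}$ used in the proof of the corollary.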
We have used the direct and fixed point methods to investigate the Hyers-Ulam stability of the functional Eq (1.1) in the framework of matrix intuitionistic fuzzy normed spaces. We thereby provide a link between two different disciplines: matrix intuitionistic fuzzy normed spaces and functional equations. We have generalized the Hyers-Ulam stability results of the functional Eq (1.1) from quasi-Banach spaces to matrix intuitionistic fuzzy normed spaces. This approach can also be applied to other significant functional equations.
The author declares that he has not used Artificial Intelligence (AI) tools in the creation of this article.
The author is grateful to the referees for their helpful comments and suggestions, which helped improve the quality of the manuscript.
The author declares no conflict of interest in this paper.