
The Fibonacci sequence is defined by $F_n = F_{n-1} + F_{n-2}$ for $n \ge 2$ with the initial conditions $F_0 = 0$, $F_1 = 1$. The Fibonacci sequence and the golden ratio are used in many fields such as cryptology, coding theory and quantum physics [1,2,3,4,5,6,7,8,9,10,11]. Horadam defined the Gaussian Fibonacci numbers in [12,13] and gave some general identities about them. Jordan generalized the Gaussian Fibonacci numbers with a similar definition in [14]. The Gaussian Fibonacci sequence is defined as $GF_n = GF_{n-1} + GF_{n-2}$ for $n > 1$, where $GF_0 = i$ and $GF_1 = 1$ [14]. It can also easily be seen that $GF_n = F_n + iF_{n-1}$, where $F_n$ is the $n$th Fibonacci number.
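To make the definitions concrete, the following minimal Python sketch (the function names are ours, not from the paper) generates both sequences and checks the identity $GF_n = F_n + iF_{n-1}$; Python's built-in complex type plays the role of the Gaussian integers:

```python
# Gaussian Fibonacci numbers: same recurrence as Fibonacci, seeds GF_0 = i, GF_1 = 1.
def fibonacci(n_max):
    f = [0, 1]                       # F_0 = 0, F_1 = 1
    while len(f) <= n_max:
        f.append(f[-1] + f[-2])
    return f

def gaussian_fibonacci(n_max):
    gf = [1j, 1]                     # GF_0 = i, GF_1 = 1
    while len(gf) <= n_max:
        gf.append(gf[-1] + gf[-2])
    return gf

F, GF = fibonacci(10), gaussian_fibonacci(10)
assert all(GF[n] == F[n] + 1j * F[n - 1] for n in range(1, 11))
print(GF[2:6])                       # [(1+1j), (2+1j), (3+2j), (5+3j)]
```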

Asci and Gurel generalized these studies and defined the k-order Gaussian Fibonacci numbers in [15] by the following recurrence relation

$$GF_n^{(k)} = \sum_{j=1}^{k} GF_{n-j}^{(k)}, \quad \text{for } n > 0 \text{ and } k \ge 2,$$

with boundary conditions, for $1-k \le n \le 0$,

$$GF_n^{(k)} = \begin{cases} 1-i, & k = 1-n, \\ i, & k = 2-n, \\ 0, & \text{otherwise}. \end{cases}$$

It can also be seen that $GF_n^{(k)} = F_n^{(k)} + iF_{n-1}^{(k)}$, where $F_n^{(k)}$ is the $n$th k-order Fibonacci number.
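As a sketch (a deliberately simple recursive implementation of our own), the boundary conditions translate directly into code, and for k = 2 the sequence reduces to the usual Gaussian Fibonacci numbers:

```python
def gf_k(n, k):
    """k-order Gaussian Fibonacci number GF^{(k)}_n, for n >= 1 - k and k >= 2."""
    if n <= 0:                       # boundary conditions for 1-k <= n <= 0
        if n == 1 - k:
            return 1 - 1j
        if n == 2 - k:
            return 1j
        return 0
    return sum(gf_k(n - j, k) for j in range(1, k + 1))

print([gf_k(n, 2) for n in range(5)])     # [1j, (1+0j), (1+1j), (2+1j), (3+2j)]
print([gf_k(n, 3) for n in range(1, 5)])  # [(1+0j), (1+1j), (2+1j), (4+2j)]
```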

In [16], Asci and Aydinyuz defined the k-order Gaussian Fibonacci polynomials and gave some important results. The k-order Gaussian Fibonacci polynomials $\{GF_n^{(k)}(x)\}_{n=0}^{\infty}$ are defined by

$$GF_n^{(k)}(x) = \sum_{j=1}^{k} x^{k-j}\, GF_{n-j}^{(k)}(x)$$

for $n > 0$ and $k \ge 2$, with the initial conditions, for $1-k \le n \le 0$,

$$GF_n^{(k)}(x) = \begin{cases} 1-ix, & k = 1-n, \\ i, & k = 2-n, \\ 0, & \text{otherwise}. \end{cases}$$

It can be seen that

$$GF_n^{(k)}(x) = F_n^{(k)}(x) + iF_{n-1}^{(k)}(x),$$

where $F_n^{(k)}(x)$ is the $n$th k-order Fibonacci polynomial.
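For illustration, the polynomial recurrence can be evaluated symbolically, e.g., with sympy (gf_poly is a hypothetical helper name of ours); substituting x = 1 recovers the k-order Gaussian Fibonacci numbers, which is exactly the specialization used in the rest of the paper:

```python
import sympy as sp

x = sp.symbols('x')

def gf_poly(n, k):
    """k-order Gaussian Fibonacci polynomial GF^{(k)}_n(x)."""
    if n <= 0:                       # initial conditions for 1-k <= n <= 0
        if n == 1 - k:
            return 1 - sp.I * x
        if n == 2 - k:
            return sp.I
        return sp.Integer(0)
    return sp.expand(sum(x ** (k - j) * gf_poly(n - j, k)
                         for j in range(1, k + 1)))

print(gf_poly(3, 2))                                     # x**2 + I*x + 1
print([gf_poly(n, 2).subs(x, 1) for n in range(1, 5)])   # [1, 1 + I, 2 + I, 3 + 2*I]
```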

In order to ensure information security in data transfer over communication channels, a lot of work has been done on this subject and continues to be done. Coding/decoding algorithms therefore play an important role in ensuring information security, and Fibonacci coding theory is one of the most preferred approaches in this field. We can see examples in many studies. For example, in [17], Stakhov gave a new approach to coding theory using the generalization of the Cassini formula for Fibonacci p-numbers and $Q_p$ matrices in 2006. In 2009, Basu and Prasad in [18] presented the generalized relations among the code elements for Fibonacci coding theory. Also, Basu and Das introduced a new coding theory for Tribonacci matrices in [19], and in 2014 they defined the coding theory for Fibonacci n-step numbers by generalizing the coding theory for Tribonacci numbers in [20]. Esmaeili in [21] also described a Fibonacci polynomial-based coding method with error detection and correction.

In this paper, we recall the coding theory for k-order Gaussian Fibonacci polynomials given in [16] and take x = 1 to obtain the k-order Gaussian Fibonacci numbers. We describe the coding theory for the k-order Gaussian Fibonacci numbers and give illustrative examples. The interesting relation between the elements of the code-message matrix is analyzed and derived. Most importantly, we deal with error detection and error correction methods, and examples of the error probability are given.

When we take $x = 1$ in the coding theory defined in [16], we obtain the k-order Gaussian Fibonacci coding method. First, we define the $Q_k$, $R_k$ and $E_n^{(k)}$ matrices, which play an important role in this coding theory. They are the following $k \times k$ matrices:

$$Q_k = \begin{bmatrix} 1 & 1 & 1 & \cdots & 1 & 1 \\ 1 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 1 & 0 & \cdots & 0 & 0 \\ \vdots & & \ddots & & & \vdots \\ 0 & 0 & 0 & \cdots & 1 & 0 \end{bmatrix}_{k \times k}, \quad R_k = \begin{bmatrix} GF_{k-1}^{(k)} & GF_{k-2}^{(k)} & GF_{k-3}^{(k)} & \cdots & GF_1^{(k)} & 0 \\ GF_{k-2}^{(k)} & GF_{k-3}^{(k)} & GF_{k-4}^{(k)} & \cdots & 0 & 0 \\ \vdots & & & & & \vdots \\ GF_2^{(k)} & GF_1^{(k)} & 0 & \cdots & 0 & 0 \\ GF_1^{(k)} & 0 & 0 & \cdots & 0 & i \\ 0 & 0 & 0 & \cdots & i & 1-i \end{bmatrix}_{k \times k}$$

and

$$E_n^{(k)} = \begin{bmatrix} GF_{n+k-1}^{(k)} & GF_{n+k-2}^{(k)} & \cdots & GF_{n+1}^{(k)} & GF_n^{(k)} \\ GF_{n+k-2}^{(k)} & GF_{n+k-3}^{(k)} & \cdots & GF_n^{(k)} & GF_{n-1}^{(k)} \\ \vdots & & & & \vdots \\ GF_{n+1}^{(k)} & GF_n^{(k)} & \cdots & GF_{n-k+3}^{(k)} & GF_{n-k+2}^{(k)} \\ GF_n^{(k)} & GF_{n-1}^{(k)} & \cdots & GF_{n-k+2}^{(k)} & GF_{n-k+1}^{(k)} \end{bmatrix}_{k \times k}$$

where $GF_n^{(k)}$ is the $n$th k-order Gaussian Fibonacci number (in particular, $R_k = E_0^{(k)}$).

Theorem 2.1. For $n \ge 1$, we have [15]:

$$Q_k^n R_k = E_n^{(k)}. \tag{2.1}$$
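Theorem 2.1 is easy to check numerically. A minimal sketch with numpy (the helper names Q, E and gf are ours), using the observation that $R_k = E_0^{(k)}$:

```python
import numpy as np

def gf(n, k):                        # k-order Gaussian Fibonacci number
    if n <= 0:
        return (1 - 1j) if n == 1 - k else (1j if n == 2 - k else 0)
    return sum(gf(n - j, k) for j in range(1, k + 1))

def Q(k):                            # ones in the first row and on the subdiagonal
    q = np.zeros((k, k), dtype=complex)
    q[0, :] = 1
    q[np.arange(1, k), np.arange(k - 1)] = 1
    return q

def E(n, k):                         # (i, j) entry is GF^{(k)}_{n+k-1-i-j}
    return np.array([[gf(n + k - 1 - i - j, k) for j in range(k)]
                     for i in range(k)], dtype=complex)

for k in (2, 3, 4):
    for n in (1, 2, 5):
        assert np.allclose(np.linalg.matrix_power(Q(k), n) @ E(0, k), E(n, k))
print("Theorem 2.1 verified for k = 2, 3, 4 and n = 1, 2, 5")
```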

Corollary 2.1. For $k = 2$, we get

$$Q^n R = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}^n \begin{bmatrix} 1 & i \\ i & 1-i \end{bmatrix} = \begin{bmatrix} GF_{n+1} & GF_n \\ GF_n & GF_{n-1} \end{bmatrix},$$

where $GF_n$ is the $n$th usual Gaussian Fibonacci number [15].

Corollary 2.2. For $k = 3$, we get

$$Q_3^n R_3 = \begin{bmatrix} 1 & 1 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}^n \begin{bmatrix} 1+i & 1 & 0 \\ 1 & 0 & i \\ 0 & i & 1-i \end{bmatrix} = \begin{bmatrix} GT_{n+2} & GT_{n+1} & GT_n \\ GT_{n+1} & GT_n & GT_{n-1} \\ GT_n & GT_{n-1} & GT_{n-2} \end{bmatrix},$$

where $GT_n$ is the $n$th Gaussian Tribonacci number [23].

In this section, we redefine the k-order Gaussian Fibonacci coding theory using the k-order Gaussian Fibonacci numbers; the matrices $Q_k$, $R_k$ and $E_n^{(k)}$ play a very important role in its construction.

We now obtain the matrix $E_n^{(k)}$ from the $Q_k$ and $R_k$ matrices for the values $k = 2$ and $k = 3$, and examine its inverse.

• For $k = 2$, introducing the square matrix $Q_2$ of order 2 as

$$Q_2 = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}$$

and the square matrix $R_2$ of order 2 as

$$R_2 = \begin{bmatrix} 1 & i \\ i & 1-i \end{bmatrix},$$

for $n = 1$ we can use (2.1):

$$E_1^{(2)} = Q_2 R_2 = \begin{bmatrix} 1+i & 1 \\ 1 & i \end{bmatrix} = \begin{bmatrix} GF_2^{(2)} & GF_1^{(2)} \\ GF_1^{(2)} & GF_0^{(2)} \end{bmatrix}$$

such that

$$\det E_1^{(2)} = \det(Q_2 R_2) = (-1)(2-i) = -2+i.$$

The inverse of $E_1^{(2)}$ is

$$(E_1^{(2)})^{-1} = \begin{bmatrix} \frac{1}{5}-\frac{2}{5}i & \frac{2}{5}+\frac{1}{5}i \\ \frac{2}{5}+\frac{1}{5}i & -\frac{1}{5}-\frac{3}{5}i \end{bmatrix} = \frac{1}{\det E_1^{(2)}} \begin{bmatrix} GF_0 & -GF_1 \\ -GF_1 & GF_2 \end{bmatrix} = \frac{1}{-2+i} \begin{bmatrix} GF_0 & -GF_1 \\ -GF_1 & GF_2 \end{bmatrix}$$

such that

$$\det (E_1^{(2)})^{-1} = \frac{1}{-2+i} = -\frac{2}{5} - \frac{1}{5}i.$$

Also, by (2.1) for $n = 2$, we can get $E_2^{(2)}$ as follows:

$$E_2^{(2)} = Q_2^2 R_2 = \begin{bmatrix} 2+i & 1+i \\ 1+i & 1 \end{bmatrix} = \begin{bmatrix} GF_3^{(2)} & GF_2^{(2)} \\ GF_2^{(2)} & GF_1^{(2)} \end{bmatrix}$$

such that

$$\det E_2^{(2)} = \det(Q_2^2 R_2) = (-1)^2 (2-i) = 2-i.$$

The inverse of $E_2^{(2)}$ is

$$(E_2^{(2)})^{-1} = \begin{bmatrix} \frac{2}{5}+\frac{1}{5}i & -\frac{1}{5}-\frac{3}{5}i \\ -\frac{1}{5}-\frac{3}{5}i & \frac{3}{5}+\frac{4}{5}i \end{bmatrix} = \frac{1}{\det E_2^{(2)}} \begin{bmatrix} GF_1^{(2)} & -GF_2^{(2)} \\ -GF_2^{(2)} & GF_3^{(2)} \end{bmatrix} = \frac{1}{2-i} \begin{bmatrix} GF_1^{(2)} & -GF_2^{(2)} \\ -GF_2^{(2)} & GF_3^{(2)} \end{bmatrix}$$

such that

$$\det (E_2^{(2)})^{-1} = \frac{1}{2-i} = \frac{2}{5} + \frac{1}{5}i.$$

Theorem 2.1.1. $E_n^{(2)} = \begin{bmatrix} GF_{n+1}^{(2)} & GF_n^{(2)} \\ GF_n^{(2)} & GF_{n-1}^{(2)} \end{bmatrix}$, where $E_1^{(2)} = \begin{bmatrix} 1+i & 1 \\ 1 & i \end{bmatrix}$.

Theorem 2.1.2.

$$(E_n^{(2)})^{-1} = \frac{1}{\det E_n^{(2)}} \begin{bmatrix} GF_{n-1}^{(2)} & -GF_n^{(2)} \\ -GF_n^{(2)} & GF_{n+1}^{(2)} \end{bmatrix} = \frac{1}{(\det Q_2)^n \det R_2} \begin{bmatrix} GF_{n-1}^{(2)} & -GF_n^{(2)} \\ -GF_n^{(2)} & GF_{n+1}^{(2)} \end{bmatrix}.$$
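A quick numerical check of Theorems 2.1.1 and 2.1.2 (a sketch; gf2 is our own iterative helper for the Gaussian Fibonacci numbers):

```python
import numpy as np

def gf2(n):                          # Gaussian Fibonacci with GF_{-1} = 1-i, GF_0 = i
    a, b = 1 - 1j, 1j
    for _ in range(n):
        a, b = b, a + b
    return b

for n in (1, 2, 5, 10):
    E = np.array([[gf2(n + 1), gf2(n)], [gf2(n), gf2(n - 1)]])
    adjugate = np.array([[gf2(n - 1), -gf2(n)], [-gf2(n), gf2(n + 1)]])
    det = (-1) ** n * (2 - 1j)       # det E_n^(2) = (det Q_2)^n * det R_2
    assert np.allclose(np.linalg.inv(E), adjugate / det)
print("Theorem 2.1.2 confirmed for n = 1, 2, 5, 10")
```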

• For $k = 3$, introducing the square matrix $Q_3$ of order 3 as

$$Q_3 = \begin{bmatrix} 1 & 1 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}$$

and the square matrix $R_3$ of order 3 as

$$R_3 = \begin{bmatrix} 1+i & 1 & 0 \\ 1 & 0 & i \\ 0 & i & 1-i \end{bmatrix},$$

for $n = 1$ we can use (2.1):

$$E_1^{(3)} = Q_3 R_3 = \begin{bmatrix} 2+i & 1+i & 1 \\ 1+i & 1 & 0 \\ 1 & 0 & i \end{bmatrix} = \begin{bmatrix} GF_3^{(3)} & GF_2^{(3)} & GF_1^{(3)} \\ GF_2^{(3)} & GF_1^{(3)} & GF_0^{(3)} \\ GF_1^{(3)} & GF_0^{(3)} & GF_{-1}^{(3)} \end{bmatrix}$$

such that

$$\det E_1^{(3)} = \det(Q_3 R_3) = 1 \cdot 2i = 2i.$$

The inverse of $E_1^{(3)}$ is

$$(E_1^{(3)})^{-1} = \begin{bmatrix} \frac{1}{2} & -\frac{1}{2}-\frac{1}{2}i & \frac{1}{2}i \\ -\frac{1}{2}-\frac{1}{2}i & 1+i & \frac{1}{2}-\frac{1}{2}i \\ \frac{1}{2}i & \frac{1}{2}-\frac{1}{2}i & -\frac{1}{2}-i \end{bmatrix} = \frac{1}{\det R_3} \begin{bmatrix} GF_{-1}^{(3)}GF_1^{(3)} - (GF_0^{(3)})^2 & GF_0^{(3)}GF_1^{(3)} - GF_{-1}^{(3)}GF_2^{(3)} & GF_0^{(3)}GF_2^{(3)} - (GF_1^{(3)})^2 \\ GF_0^{(3)}GF_1^{(3)} - GF_{-1}^{(3)}GF_2^{(3)} & GF_{-1}^{(3)}GF_3^{(3)} - (GF_1^{(3)})^2 & GF_1^{(3)}GF_2^{(3)} - GF_0^{(3)}GF_3^{(3)} \\ GF_0^{(3)}GF_2^{(3)} - (GF_1^{(3)})^2 & GF_1^{(3)}GF_2^{(3)} - GF_0^{(3)}GF_3^{(3)} & GF_1^{(3)}GF_3^{(3)} - (GF_2^{(3)})^2 \end{bmatrix}$$

(here $\det E_1^{(3)} = \det Q_3 \cdot \det R_3 = \det R_3 = 2i$, since $\det Q_3 = 1$), such that

$$\det (E_1^{(3)})^{-1} = \frac{1}{2i} = -\frac{i}{2}.$$

Also, by (2.1) for $n = 2$, we can get $E_2^{(3)}$ as follows:

$$E_2^{(3)} = Q_3^2 R_3 = \begin{bmatrix} 4+2i & 2+i & 1+i \\ 2+i & 1+i & 1 \\ 1+i & 1 & 0 \end{bmatrix} = \begin{bmatrix} GF_4^{(3)} & GF_3^{(3)} & GF_2^{(3)} \\ GF_3^{(3)} & GF_2^{(3)} & GF_1^{(3)} \\ GF_2^{(3)} & GF_1^{(3)} & GF_0^{(3)} \end{bmatrix}$$

such that

$$\det E_2^{(3)} = \det(Q_3^2 R_3) = 1^2 \cdot 2i = 2i.$$

The inverse of $E_2^{(3)}$ is

$$(E_2^{(3)})^{-1} = \begin{bmatrix} \frac{1}{2}i & \frac{1}{2}-\frac{1}{2}i & -\frac{1}{2}-i \\ \frac{1}{2}-\frac{1}{2}i & -1 & \frac{1}{2}+\frac{3}{2}i \\ -\frac{1}{2}-i & \frac{1}{2}+\frac{3}{2}i & 1+\frac{1}{2}i \end{bmatrix} = \frac{1}{\det R_3} \begin{bmatrix} GF_0^{(3)}GF_2^{(3)} - (GF_1^{(3)})^2 & GF_1^{(3)}GF_2^{(3)} - GF_0^{(3)}GF_3^{(3)} & GF_1^{(3)}GF_3^{(3)} - (GF_2^{(3)})^2 \\ GF_1^{(3)}GF_2^{(3)} - GF_0^{(3)}GF_3^{(3)} & GF_0^{(3)}GF_4^{(3)} - (GF_2^{(3)})^2 & GF_2^{(3)}GF_3^{(3)} - GF_1^{(3)}GF_4^{(3)} \\ GF_1^{(3)}GF_3^{(3)} - (GF_2^{(3)})^2 & GF_2^{(3)}GF_3^{(3)} - GF_1^{(3)}GF_4^{(3)} & GF_2^{(3)}GF_4^{(3)} - (GF_3^{(3)})^2 \end{bmatrix}.$$

Theorem 2.1.3. $E_n^{(3)} = \begin{bmatrix} GF_{n+2}^{(3)} & GF_{n+1}^{(3)} & GF_n^{(3)} \\ GF_{n+1}^{(3)} & GF_n^{(3)} & GF_{n-1}^{(3)} \\ GF_n^{(3)} & GF_{n-1}^{(3)} & GF_{n-2}^{(3)} \end{bmatrix}$, where $E_1^{(3)} = \begin{bmatrix} 2+i & 1+i & 1 \\ 1+i & 1 & 0 \\ 1 & 0 & i \end{bmatrix}$.

Theorem 2.1.4.

$$(E_n^{(3)})^{-1} = \frac{1}{\det R_3} \begin{bmatrix} GF_{n-2}^{(3)}GF_n^{(3)} - (GF_{n-1}^{(3)})^2 & GF_{n-1}^{(3)}GF_n^{(3)} - GF_{n-2}^{(3)}GF_{n+1}^{(3)} & GF_{n-1}^{(3)}GF_{n+1}^{(3)} - (GF_n^{(3)})^2 \\ GF_{n-1}^{(3)}GF_n^{(3)} - GF_{n-2}^{(3)}GF_{n+1}^{(3)} & GF_{n-2}^{(3)}GF_{n+2}^{(3)} - (GF_n^{(3)})^2 & GF_n^{(3)}GF_{n+1}^{(3)} - GF_{n-1}^{(3)}GF_{n+2}^{(3)} \\ GF_{n-1}^{(3)}GF_{n+1}^{(3)} - (GF_n^{(3)})^2 & GF_n^{(3)}GF_{n+1}^{(3)} - GF_{n-1}^{(3)}GF_{n+2}^{(3)} & GF_n^{(3)}GF_{n+2}^{(3)} - (GF_{n+1}^{(3)})^2 \end{bmatrix}.$$

For an arbitrary positive integer k, the square matrix $E_n^{(k)}$ of order k and its inverse can be found similarly.

In this section, we describe a new k-order Gaussian Fibonacci coding theory. We represent the initial message in the form of a square matrix M of order k. We take the k-order Gaussian Fibonacci matrix $E_n^{(k)}$ as the coding matrix and its inverse matrix $(E_n^{(k)})^{-1}$ as the decoding matrix for an arbitrary positive integer k. The transformation $M \times E_n^{(k)} = C$ is called k-order Gaussian Fibonacci coding, and we name the transformation $C \times (E_n^{(k)})^{-1} = M$ k-order Gaussian Fibonacci decoding. We call C the code matrix.
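Before the worked examples, here is a minimal sketch of the two transformations (numpy, floating-point complex arithmetic; the names encode and decode are ours). It anticipates Example 2.3.1 below, and the same helpers handle the k = 3 setting of Example 2.3.2 unchanged:

```python
import numpy as np

def gf(n, k):                        # k-order Gaussian Fibonacci number
    if n <= 0:
        return (1 - 1j) if n == 1 - k else (1j if n == 2 - k else 0)
    return sum(gf(n - j, k) for j in range(1, k + 1))

def E(n, k):                         # coding matrix E_n^(k)
    return np.array([[gf(n + k - 1 - i - j, k) for j in range(k)]
                     for i in range(k)], dtype=complex)

def encode(M, n):                    # C = M x E_n^(k), with k = len(M)
    return np.asarray(M, dtype=complex) @ E(n, len(M))

def decode(C, n):                    # M = C x (E_n^(k))^(-1), rounded back to integers
    return np.rint((C @ np.linalg.inv(E(n, len(C)))).real).astype(int)

M = [[4, 16], [5, 6]]                # "CODE" with s = 2 (see Example 2.3.1)
C = encode(M, n=2)
print(C)                             # ~ [[24+20j, 20+4j], [16+11j, 11+5j]]
assert (decode(C, n=2) == M).all()
```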

The following examples are solved using the alphabet table below.

Using an arbitrary value $s > 0$, we write the following alphabet modulo 30. The characters in the table can be extended as desired. We assign "s" to the first character in Table 1.

    Table 1.  Alphabet table.
    A B C D E F G H I J
    s s+1 s+2 s+3 s+4 s+5 s+6 s+7 s+8 s+9
    K L M N O P Q R S T
    s+10 s+11 s+12 s+13 s+14 s+15 s+16 s+17 s+18 s+19
    U V W X Y Z 0 ! ? .
    s+20 s+21 s+22 s+23 s+24 s+25 s+26 s+27 s+28 s+29

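In code, Table 1 is simply an offset mapping over a 30-character alphabet; a small sketch (the ALPHABET string and helper names are ours):

```python
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0!?."   # the 30 symbols of Table 1

def to_numbers(text, s):             # A -> s, B -> s+1, ..., "." -> s+29
    return [s + ALPHABET.index(ch) for ch in text]

def to_text(numbers, s):
    return "".join(ALPHABET[v - s] for v in numbers)

print(to_numbers("CODE", s=2))       # [4, 16, 5, 6] -> the matrix of Example 2.3.1
print(to_text([20, 25, 6], s=5))     # "PUB", the first row of Example 2.3.2
```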

    Example 2.3.1. Let us consider the message matrix for the following message text:

"CODE" (2.2)

Step 1: Let’s create the message matrix using the message text:

$$M = \begin{bmatrix} C & O \\ D & E \end{bmatrix}_{2 \times 2}$$

Step 2: Let’s write the message matrix M according to the alphabet table for the arbitrary value "s" we choose. For s = 2:

$$M = \begin{bmatrix} 4 & 16 \\ 5 & 6 \end{bmatrix}$$

Step 3: For k = 2 and n = 2, we use (2.1):

$$E_2^{(2)} = Q_2^2 R_2 = \begin{bmatrix} 2+i & 1+i \\ 1+i & 1 \end{bmatrix}$$

Step 4: The code message is:

$$C = M \times E_2^{(2)} = \begin{bmatrix} 4 & 16 \\ 5 & 6 \end{bmatrix} \begin{bmatrix} 2+i & 1+i \\ 1+i & 1 \end{bmatrix} = \begin{bmatrix} 24+20i & 20+4i \\ 16+11i & 11+5i \end{bmatrix}$$

Step 5: The decoded message is:

$$M = C \times (E_2^{(2)})^{-1} = \begin{bmatrix} 24+20i & 20+4i \\ 16+11i & 11+5i \end{bmatrix} \begin{bmatrix} \frac{2}{5}+\frac{1}{5}i & -\frac{1}{5}-\frac{3}{5}i \\ -\frac{1}{5}-\frac{3}{5}i & \frac{3}{5}+\frac{4}{5}i \end{bmatrix} = \begin{bmatrix} 4 & 16 \\ 5 & 6 \end{bmatrix} = \begin{bmatrix} C & O \\ D & E \end{bmatrix}.$$

    Example 2.3.2. Let us consider the message matrix for the following message text:

    "PUBLICKEY"

Step 1: Let’s create the message matrix using the message text:

$$M = \begin{bmatrix} P & U & B \\ L & I & C \\ K & E & Y \end{bmatrix}_{3 \times 3}$$

Step 2: Let’s write the message matrix M according to the alphabet table for the arbitrary value "s" we choose. For s = 5:

$$M = \begin{bmatrix} 20 & 25 & 6 \\ 16 & 13 & 7 \\ 15 & 9 & 29 \end{bmatrix}$$

Step 3: For k = 3 and n = 6, we use (2.1):

$$E_6^{(3)} = Q_3^6 R_3 = \begin{bmatrix} 44+24i & 24+13i & 13+7i \\ 24+13i & 13+7i & 7+4i \\ 13+7i & 7+4i & 4+2i \end{bmatrix}$$

Step 4: The code message is:

$$C = M \times E_6^{(3)} = \begin{bmatrix} 1558+847i & 847+459i & 459+252i \\ 1107+602i & 602+327i & 327+178i \\ 1253+680i & 680+374i & 374+199i \end{bmatrix}$$

Step 5: The decoded message is:

$$M = C \times (E_6^{(3)})^{-1} = \begin{bmatrix} 20 & 25 & 6 \\ 16 & 13 & 7 \\ 15 & 9 & 29 \end{bmatrix} = \begin{bmatrix} P & U & B \\ L & I & C \\ K & E & Y \end{bmatrix}.$$

In this section, we consider the k-order Gaussian Fibonacci coding/decoding method for k = 2. There is an interesting relation among the elements of a code matrix C that plays a crucial role in the error-correction process, outlined as follows:

$$C = M \times E_n^{(2)} = \begin{bmatrix} m_1 & m_2 \\ m_3 & m_4 \end{bmatrix} \begin{bmatrix} GF_{n+1}^{(2)} & GF_n^{(2)} \\ GF_n^{(2)} & GF_{n-1}^{(2)} \end{bmatrix} \tag{3.1}$$

$$= \begin{bmatrix} c_1 & c_2 \\ c_3 & c_4 \end{bmatrix} \tag{3.2}$$

and

$$M = C \times (E_n^{(2)})^{-1} \tag{3.3}$$

$$= \begin{bmatrix} c_1 & c_2 \\ c_3 & c_4 \end{bmatrix} \left( \frac{1}{(\det Q_2)^n \det R_2} \begin{bmatrix} GF_{n-1}^{(2)} & -GF_n^{(2)} \\ -GF_n^{(2)} & GF_{n+1}^{(2)} \end{bmatrix} \right) \tag{3.4}$$

$$= \begin{bmatrix} m_1 & m_2 \\ m_3 & m_4 \end{bmatrix}.$$

For the case of an even integer $n = 2m$, we obtain the following equation:

$$\begin{bmatrix} m_1 & m_2 \\ m_3 & m_4 \end{bmatrix} = \begin{bmatrix} c_1 & c_2 \\ c_3 & c_4 \end{bmatrix} \left( \frac{1}{2-i} \begin{bmatrix} GF_{n-1}^{(2)} & -GF_n^{(2)} \\ -GF_n^{(2)} & GF_{n+1}^{(2)} \end{bmatrix} \right) \tag{3.5}$$

since $\det Q_2 = -1$ and $\det R_2 = 2-i$.

It follows from (3.5) that the elements of the matrix M can be obtained according to the following formulas:

$$m_1 = \frac{1}{2-i}\left( GF_{n-1}^{(2)} c_1 - GF_n^{(2)} c_2 \right) \tag{3.6}$$
$$m_2 = \frac{1}{2-i}\left( -GF_n^{(2)} c_1 + GF_{n+1}^{(2)} c_2 \right) \tag{3.7}$$
$$m_3 = \frac{1}{2-i}\left( GF_{n-1}^{(2)} c_3 - GF_n^{(2)} c_4 \right) \tag{3.8}$$
$$m_4 = \frac{1}{2-i}\left( -GF_n^{(2)} c_3 + GF_{n+1}^{(2)} c_4 \right) \tag{3.9}$$

Since $s > 0$, the elements of the matrix M satisfy

$$m_1 \ge 0, \quad m_2 \ge 0, \quad m_3 \ge 0 \quad \text{and} \quad m_4 \ge 0. \tag{3.10}$$

Because of condition (3.10), we can write the equalities (3.6)–(3.9) as:

$$m_1 = \frac{1}{2-i}\left( GF_{n-1}^{(2)} c_1 - GF_n^{(2)} c_2 \right) \ge 0 \tag{3.11}$$
$$m_2 = \frac{1}{2-i}\left( -GF_n^{(2)} c_1 + GF_{n+1}^{(2)} c_2 \right) \ge 0 \tag{3.12}$$
$$m_3 = \frac{1}{2-i}\left( GF_{n-1}^{(2)} c_3 - GF_n^{(2)} c_4 \right) \ge 0 \tag{3.13}$$
$$m_4 = \frac{1}{2-i}\left( -GF_n^{(2)} c_3 + GF_{n+1}^{(2)} c_4 \right) \ge 0 \tag{3.14}$$

From the Eqs (3.11) and (3.12), we can get

$$\frac{GF_n^{(2)}}{GF_{n-1}^{(2)}}\, c_2 \le c_1 \le \frac{GF_{n+1}^{(2)}}{GF_n^{(2)}}\, c_2 \tag{3.15}$$

or

$$\frac{GF_n^{(2)}}{GF_{n-1}^{(2)}} \le \frac{c_1}{c_2} \le \frac{GF_{n+1}^{(2)}}{GF_n^{(2)}}. \tag{3.16}$$

Similarly, from (3.13) and (3.14), we can obtain:

$$\frac{GF_n^{(2)}}{GF_{n-1}^{(2)}}\, c_4 \le c_3 \le \frac{GF_{n+1}^{(2)}}{GF_n^{(2)}}\, c_4 \tag{3.17}$$

or

$$\frac{GF_n^{(2)}}{GF_{n-1}^{(2)}} \le \frac{c_3}{c_4} \le \frac{GF_{n+1}^{(2)}}{GF_n^{(2)}}. \tag{3.18}$$

Since the ratio of two consecutive Gaussian Fibonacci numbers approaches the golden ratio, we also obtain from (3.16) and (3.18) the following approximate equations connecting the elements of the code matrix in (3.1):

$$c_1 \approx \tau c_2 \tag{3.19}$$
$$c_3 \approx \tau c_4 \tag{3.20}$$

where $\tau = \frac{1+\sqrt{5}}{2}$ is the "golden ratio".

Similarly, for an odd integer $n = 2m+1$, we can write the same approximate equalities (3.19) and (3.20) that connect the elements $c_1, c_2, c_3$ and $c_4$ of the code matrix in (3.1).

    Thus, we have generated some important identities that connect the elements of the code matrix in (3.1) for the case k=2.

If we consider the coding/decoding method for k = 3, we can obtain interesting identities as in the case k = 2, since the ratio of two consecutive Gaussian Tribonacci numbers approaches $\Phi \approx 1.8393$. For the general case of k, we can likewise find mathematical identities that connect the code matrix elements, similar to (3.19) and (3.20).

Example 3.1. Suppose k = 2. Thus, we have $\tau = \frac{1+\sqrt{5}}{2} \approx 1.618$. Assume that the message matrix of (2.2) is to be transmitted. For s = 2, we can get

$$M = \begin{bmatrix} C & O \\ D & E \end{bmatrix} = \begin{bmatrix} 4 & 16 \\ 5 & 6 \end{bmatrix}$$

• If n = 2, then

$$C = \begin{bmatrix} 24+20i & 20+4i \\ 16+11i & 11+5i \end{bmatrix} = \begin{bmatrix} c_1 & c_2 \\ c_3 & c_4 \end{bmatrix}$$

In this case, we have the following numbers rounded off to their first five digits:

$$\frac{c_1}{c_2} = 1.3462 + 0.7307i, \qquad \frac{c_3}{c_4} = 1.5822 + 0.2808i$$

• For n = 5, we have

$$C = \begin{bmatrix} 112+68i & 68+44i \\ 70+43i & 43+27i \end{bmatrix}$$

In this case

$$\frac{c_1}{c_2} = 1.6171 - 0.0463i, \qquad \frac{c_3}{c_4} = 1.6179 - 0.0159i$$

• For n = 10, we have

$$C = \begin{bmatrix} 1236+764i & 764+472i \\ 775+479i & 479+296i \end{bmatrix}$$

In this case

$$\frac{c_1}{c_2} = 1.618 + 0.0003i, \qquad \frac{c_3}{c_4} = 1.618 + 0.0001i$$

• For n = 15, we have

$$C = \begin{bmatrix} 13708+8472i & 8472+5236i \\ 8595+5312i & 5312+3283i \end{bmatrix}$$

In this case

$$\frac{c_1}{c_2} \approx 1.618, \qquad \frac{c_3}{c_4} \approx 1.618$$

These show that for n = 15 the relations (3.19) and (3.20) hold very well.

Therefore, if we select the value of n large enough, as seen above for the message of Example 2.3.1, we get a good approximation of τ. If we apply a similar operation to Example 2.3.2, then, when n is chosen large enough, we get a good approximation of Φ ≈ 1.8393, where Φ is the limit of the ratio between two consecutive Gaussian Tribonacci numbers.
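The convergence in Example 3.1, and the corresponding Gaussian Tribonacci limit Φ, can be reproduced numerically; a sketch (gf is the k-order helper from the earlier snippets, memoized here so larger indices stay cheap):

```python
import numpy as np
from functools import lru_cache

@lru_cache(maxsize=None)
def gf(n, k):                        # k-order Gaussian Fibonacci number
    if n <= 0:
        return (1 - 1j) if n == 1 - k else (1j if n == 2 - k else 0)
    return sum(gf(n - j, k) for j in range(1, k + 1))

M = np.array([[4, 16], [5, 6]], dtype=complex)    # "CODE" with s = 2
for n in (2, 5, 10, 15):
    E = np.array([[gf(n + 1, 2), gf(n, 2)], [gf(n, 2), gf(n - 1, 2)]])
    C = M @ E
    print(n, C[0, 0] / C[0, 1], C[1, 0] / C[1, 1])  # both ratios -> 1.618...

print(gf(25, 3) / gf(24, 3))         # ~ 1.8393 plus a vanishing imaginary part
```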

The k-order Gaussian Fibonacci coding/decoding method considered above provides an interesting possibility to detect and correct "errors" in the code message C. Error detection and correction for k = 2 depend on the determinant of the matrix C and on the connections between the code matrix elements in (3.19) and (3.20). These mathematical relations, given by (3.19), (3.20) and the determinant of the code matrix C, play the role of checking relations for the k-order Gaussian Fibonacci coding/decoding method. The error-correction algorithm for Fibonacci coding has been explained in [10] and [13]; that algorithm is valid and applicable to the k-order Gaussian Fibonacci numbers given in this study.

First, we calculate the determinant of the message matrix M and send it over the communication channel right after the elements of the code matrix. Here, the determinant of M serves as the control element for the code matrix C received from the communication channel. After receiving the code matrix C and its control element det(M), we calculate the determinant of C and compare it with det(M) according to the control relation. If the check succeeds, we can conclude that the elements of the code matrix C were transmitted over the communication channel without errors. Otherwise, there are errors in the elements of the code matrix C or in the determinant of the control element M.

Suppose the elements of the code matrix C contain errors. This matrix can have a single error, a double error, or up to a $k^2$-fold error. To explain how to correct these errors, we consider the $2 \times 2$ matrix obtained for k = 2. The following cases are considered.

Our first hypothesis is the case of a "single" error in the code matrix C received from the communication channel. It is clearly seen that there are four types of single errors in the code matrix C:

$$(a)\ \begin{bmatrix} x & c_2 \\ c_3 & c_4 \end{bmatrix} \quad (b)\ \begin{bmatrix} c_1 & y \\ c_3 & c_4 \end{bmatrix} \quad (c)\ \begin{bmatrix} c_1 & c_2 \\ z & c_4 \end{bmatrix} \quad (d)\ \begin{bmatrix} c_1 & c_2 \\ c_3 & t \end{bmatrix} \tag{4.1}$$

where x, y, z, t are possible "destroyed" elements.

Since $C = M \times E_n^{(2)}$, taking the determinant of both sides gives the following equations:

$$x c_4 - c_2 c_3 = \det(M)(-1)^n(2-i) \quad (\text{a possible "single error" is in the element } c_1) \tag{4.2}$$
$$c_1 c_4 - y c_3 = \det(M)(-1)^n(2-i) \quad (\text{a possible "single error" is in the element } c_2) \tag{4.3}$$
$$c_1 c_4 - c_2 z = \det(M)(-1)^n(2-i) \quad (\text{a possible "single error" is in the element } c_3) \tag{4.4}$$
$$c_1 t - c_2 c_3 = \det(M)(-1)^n(2-i) \quad (\text{a possible "single error" is in the element } c_4) \tag{4.5}$$

or equivalently

$$x = \frac{\det(M)(-1)^n(2-i) + c_2 c_3}{c_4} \quad (a) \tag{4.6}$$
$$y = \frac{c_1 c_4 - \det(M)(-1)^n(2-i)}{c_3} \quad (b) \tag{4.7}$$
$$z = \frac{c_1 c_4 - \det(M)(-1)^n(2-i)}{c_2} \quad (c) \tag{4.8}$$
$$t = \frac{\det(M)(-1)^n(2-i) + c_2 c_3}{c_1} \quad (d) \tag{4.9}$$

The formulas (4.6)–(4.9) give four possible states of a "single error". However, we need to choose the correct case from among the x, y, z, t integer solutions; we must choose the solutions which satisfy the additional "checking relations" (3.19) and (3.20). If the calculations by formulas (4.6)–(4.9) do not give an integer result, we can conclude that our "single error" hypothesis is incorrect or that the determinant of the control element M is itself in error. In the latter case, we can use the approximate Eqs (3.19) and (3.20) to check the correctness of the code matrix C.
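A sketch of this single-error search (the helper names and numerical tolerances are our illustrative choices; note that the ratio test (3.19)–(3.20) is only discriminative when n is reasonably large):

```python
import numpy as np

TAU = (1 + 5 ** 0.5) / 2
POS = {"c1": (0, 0), "c2": (0, 1), "c3": (1, 0), "c4": (1, 1)}

def is_gaussian_integer(z, eps=1e-6):
    return (abs(z.real - round(z.real)) < eps
            and abs(z.imag - round(z.imag)) < eps)

def correct_single_error(C, det_M, n, tol=0.01):
    """Try the four single-error hypotheses (4.6)-(4.9) on a 2x2 code matrix C."""
    d = det_M * (-1) ** n * (2 - 1j)        # det C = det(M) * (-1)^n * (2 - i)
    c1, c2, c3, c4 = C[0, 0], C[0, 1], C[1, 0], C[1, 1]
    candidates = {"c1": (d + c2 * c3) / c4,     # formula (4.6)
                  "c2": (c1 * c4 - d) / c3,     # formula (4.7)
                  "c3": (c1 * c4 - d) / c2,     # formula (4.8)
                  "c4": (d + c2 * c3) / c1}     # formula (4.9)
    repaired = []
    for name, value in candidates.items():
        fixed = C.astype(complex)
        fixed[POS[name]] = value
        # keep only Gaussian-integer candidates satisfying (3.19)-(3.20)
        ratios_ok = (abs(fixed[0, 0] / fixed[0, 1] - TAU) < tol
                     and abs(fixed[1, 0] / fixed[1, 1] - TAU) < tol)
        if is_gaussian_integer(value) and ratios_ok:
            repaired.append((name, value))
    return repaired

# The n = 15 code matrix of Example 3.1 with c1 destroyed; det(M) = -56:
C = np.array([[9999, 8472 + 5236j], [8595 + 5312j, 5312 + 3283j]])
print(correct_single_error(C, det_M=-56, n=15))  # recovers c1 = 13708 + 8472i
```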

Similarly, we can easily check "double errors" of the code matrix C. Suppose that the code matrix C has erroneous elements x and y, as shown below:

$$\begin{bmatrix} x & y \\ c_3 & c_4 \end{bmatrix} \tag{4.10}$$

Then

$$x c_4 - y c_3 = \det(M)(-1)^n(2-i). \tag{4.11}$$

However, according to (3.19), there is the following relation between x and y:

$$x \approx \tau y \tag{4.12}$$

    Again, only integer solutions are acceptable. If no integer solution is obtained, then two-fold errors have not occurred.
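A sketch of the corresponding acceptance test for the double-error hypothesis: candidate values (x, y) are accepted only if they satisfy (4.11) exactly and (4.12) approximately (the helper name is ours; the true first row of the n = 15 code matrix from Example 3.1 passes):

```python
TAU = (1 + 5 ** 0.5) / 2

def check_double_error(x, y, c3, c4, det_M, n, tol=0.01):
    """Accept candidate first-row values (x, y) iff the determinant condition
    (4.11) holds exactly and x ~ tau * y as in (4.12)."""
    d = det_M * (-1) ** n * (2 - 1j)
    # all quantities here are Gaussian integers, so the arithmetic is exact
    return x * c4 - y * c3 == d and abs(x / y - TAU) < tol

print(check_double_error(13708 + 8472j, 8472 + 5236j,
                         8595 + 5312j, 5312 + 3283j, det_M=-56, n=15))  # True
```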

Hence, we can show that, with this type of approach, there is also the possibility to correct all possible triple errors in the code matrix C, such as

$$\begin{bmatrix} x & y \\ z & c_4 \end{bmatrix}.$$

Therefore, our method of error correction depends on verifying different hypotheses about the errors in the code matrix, using the determinant control relation for the matrix C, the relations (3.19) and (3.20), and the requirement that the elements of the code matrix be integers. If none of our solutions are integer solutions, it means that the determinant of the control element M is wrong or that there is a four-fold error in the code matrix C, and we should reject it: the code matrix C is defective and not correctable.

As a result, there are 15 possible error states of the code matrix C, the $2^4 - 1 = 15$ nonzero error patterns of its four elements. According to the method given in [17], 14 of these cases can be corrected (only the case in which all four elements are destroyed cannot), so the correctable probability of the method is

$$S_{cor} = \frac{14}{15} = 0.9333 = 93.33\%.$$

If we generalize this as in [20], then, since only $k^2$-fold errors cannot be corrected with this method, the error-correction capacity of the method is

$$\frac{2^{k^2}-2}{2^{k^2}-1},$$

where k is the order of the message matrix. So, for sufficiently large values of k, the probability of a decoding error is almost zero.

Consequently, for a sufficiently large value of k, in the case n = k, the correcting ability of this method is

$$\frac{2^{k^2}-2}{2^{k^2}-1}.$$

Therefore, for large values of k, the correcting probability of the method approaches

$$\frac{2^{k^2}-2}{2^{k^2}-1} \to 1 = 100\%.$$

In this paper, we obtained the coding theory for k-order Gaussian Fibonacci numbers by taking x = 1 in the coding theory for k-order Gaussian Fibonacci polynomials given in [16], and we gave illustrative examples. This coding method differs from classical algebraic coding methods in the following respects:

1)      Since the values of n, k and s > 0 are chosen arbitrarily in the k-order Gaussian Fibonacci coding theory, it is difficult for a third party to recover the information transmitted between the two channels, which increases the reliability of the information transmission. In addition, this method depends on matrix multiplication and can be performed quickly and easily by today's computers.

2)      With k-order Gaussian Fibonacci coding theory, we can encrypt and send messages of the desired length by enlarging the value of k sufficiently.

3)      The main practical feature of this method is that the matrix elements themselves are the objects of error detection and correction.

4)      The Gaussian Fibonacci coding method, the simplest coding method, obtained for k = 2, has been redefined by handling 2×2 matrices with "single, double and triple errors".

5)      In the simplest case, for k = 2, the error-correcting capability of the method is essentially equal to 93.33%, exceeding that of well-known correction codes.

6)      The error-correction method given for k = 2 generalizes: the correcting ability of the method increases as k increases, and it approaches 100% for a sufficiently large value of k.

7)      This article is only a brief outline of a new coding theory based on the k-order Gaussian Fibonacci matrices, building on the articles in [17,18,20,21] and [22].

    The authors declare there is no conflict of interest.



