
Skeleton action recognition via graph convolutional network with self-attention module

  • Skeleton-based action recognition is an important but challenging task in the study of video understanding and human-computer interaction. However, existing methods suffer from two deficiencies. On the one hand, most methods rely on manually designed convolution kernels that cannot capture the spatial-temporal joint dependencies of complex regions. On the other hand, some methods simply apply the self-attention mechanism without theoretical justification. In this paper, we propose a unified spatio-temporal graph convolutional network with a self-attention mechanism (SA-GCN) for low-quality motion video data with a fixed viewing angle. SA-GCN extracts features efficiently by learning weights between joint points of different scales. Specifically, the proposed self-attention mechanism is end-to-end, with a mapping strategy for different nodes, which not only characterizes the multi-scale dependencies of joints but also integrates the structural features of the graph with the ability to learn fused features autonomously. Moreover, the attention mechanism proposed in this paper can, to some extent, be explained theoretically by GCN, which is usually not considered in most existing models. Extensive experiments on two widely used datasets, NTU-60 RGB+D and NTU-120 RGB+D, demonstrate that SA-GCN significantly outperforms a series of existing mainstream approaches in terms of accuracy.

    Citation: Min Li, Ke Chen, Yunqing Bai, Jihong Pei. Skeleton action recognition via graph convolutional network with self-attention module[J]. Electronic Research Archive, 2024, 32(4): 2848-2864. doi: 10.3934/era.2024129




    Fractional difference calculus is a tool used to explain many phenomena in physics, control problems, modeling, chaotic dynamical systems, and various fields of engineering and applied mathematics. In this direction, researchers have employed a variety of numerical and analytical methods to analyze fractional discrete and continuous mathematical models and boundary value problems (BVPs) [1,2,3,4]. For some recent developments on the existence, uniqueness, and stability of solutions of fractional differential equations, see, for example, [5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23] and the references therein.

    Discrete fractional calculus and difference equations open a new study context for mathematicians. For this reason, they have received increasing attention in recent years. Some real-world processes and phenomena are analyzed with the aid of discrete fractional operators, since such operators provide an accurate tool to describe memory. A large number of research articles dealing with difference equations and discrete fractional boundary value problems (FBVPs) can be found in [24,25,26,27,28,29,30,31,32].

    In 2020, Selvam et al. [33] proved the existence of a solution to a discrete fractional difference equation formulated as

    $$\begin{cases}{}^{c}\Delta^{\varrho}_{\xi}\chi(\xi)=\Phi(\xi+\varrho-1,\chi(\xi+\varrho-1)), & 1<\varrho\le 2,\\ \Delta\chi(\varrho-2)=M_1,\quad \chi(\varrho+T)=M_2,\end{cases}\tag{1.1}$$

    for $\xi\in[0,T]_{\mathbb{N}_0}=\{0,1,2,\ldots,T\}$, $T\in\mathbb{N}$, $\eta\in[\varrho-1,T+\varrho-1]_{\mathbb{N}_{\varrho-1}}$, $M_1$ and $M_2$ constants, $\Phi:[\varrho-2,\varrho+T]_{\mathbb{N}_{\varrho-2}}\times\mathbb{R}\to\mathbb{R}$ continuous, and where ${}^{c}\Delta^{\varrho}_{\xi}$ denotes the $\varrho$th-order Caputo difference. Here, motivated by the discrete model (1.1), we shall consider two generalized discrete problems.

    Our first goal is to study the existence and uniqueness of solutions to the following discrete fractional equation involving the Caputo discrete derivative:

    $$\begin{cases}{}^{c}\Delta^{\varrho}_{\xi}\chi(\xi)=\Phi(\xi+\varrho-1,\chi(\xi+\varrho-1)), & 2<\varrho\le 3,\\ \Delta\chi(\varrho-3)=A_1,\quad \chi(\varrho+T)=\lambda\,\Delta^{-\beta}\chi(\eta+\beta),\quad \Delta^{2}\chi(\varrho-3)=A_2,\end{cases}\tag{1.2}$$

    for $0<\beta\le 1$, $\xi\in[0,T]_{\mathbb{N}_0}=\{0,1,2,\ldots,T\}$, $T\in\mathbb{N}$, $\eta\in[\varrho-1,T+\varrho-1]_{\mathbb{N}_{\varrho-1}}$, $\lambda$, $A_1$ and $A_2$ constants, and where $\Phi:[\varrho-3,\varrho+T]_{\mathbb{N}_{\varrho-3}}\times\mathbb{R}\to\mathbb{R}$ is continuous.

    The second goal is to study the stability of solutions to the discrete Riemann-Liouville fractional problem

    $$\begin{cases}{}^{RL}\Delta^{\varrho}_{\xi}\chi(\xi)=\Phi(\xi+\varrho-1,\chi(\xi+\varrho-1)), & 2<\varrho\le 3,\\ \Delta\chi(\varrho-3)=A_1,\quad \chi(\varrho+T)=\lambda\,\Delta^{-\beta}\chi(\eta+\beta),\quad \Delta^{2}\chi(\varrho-3)=A_2,\end{cases}\tag{1.3}$$

    for $0<\beta\le 1$, $\xi\in[0,T]_{\mathbb{N}_0}=\{0,1,2,\ldots,T\}$ and $\eta\in[\varrho-1,T+\varrho-1]_{\mathbb{N}_{\varrho-1}}$, where ${}^{RL}\Delta^{\varrho}_{\xi}$ is the Riemann-Liouville difference operator.

    The organization of the paper is as follows. In Section 2, we collect some fundamental definitions from the literature. In Section 3, we prove the existence and uniqueness results for the discrete FBVP (1.2). The Hyers-Ulam and Hyers-Ulam-Rassias stability of solutions to the FBVP (1.3) is established in Section 4. In Section 5, two examples illustrate the obtained results. Section 6 concludes the paper.

    We begin by recalling some necessary definitions and essential lemmas that will be used throughout the paper.

    Definition 2.1. (See [27]) Let ϱ>0. The ϱ-order fractional sum of Φ is defined by

    $$\Delta^{-\varrho}\Phi(\xi)=\frac{1}{\Gamma(\varrho)}\sum_{l=a}^{\xi-\varrho}(\xi-l-1)^{(\varrho-1)}\Phi(l),\tag{2.1}$$

    where $\xi\in\mathbb{N}_{a+\varrho}:=\{a+\varrho,\,a+\varrho+1,\,\ldots\}$ and $\xi^{(\varrho)}:=\dfrac{\Gamma(\xi+1)}{\Gamma(\xi+1-\varrho)}$.
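    For readers who wish to experiment numerically, the falling factorial and the fractional sum (2.1) translate directly into code. The following is a minimal Python sketch under our own naming (ff, frac_sum) and the unit-step grid $\mathbb{N}_a$ assumed above; it is an illustration, not part of the original paper.

```python
from math import gamma

def ff(x, r):
    # Falling factorial x^(r) = Gamma(x + 1) / Gamma(x + 1 - r).
    # By convention it vanishes when the denominator Gamma blows up.
    try:
        return gamma(x + 1) / gamma(x + 1 - r)
    except ValueError:
        return 0.0

def frac_sum(phi, rho, a, xi):
    # rho-th order fractional sum (2.1) of a grid function phi,
    # evaluated at xi in N_{a + rho}; l runs over a, a + 1, ..., xi - rho.
    n = int(round(xi - rho - a))  # number of unit steps in the sum
    return sum(ff(xi - (a + k) - 1, rho - 1) * phi(a + k)
               for k in range(n + 1)) / gamma(rho)
```

    For instance, frac_sum(lambda l: 1.0, 1, 0, 5) returns 5.0, the plain running sum of ones.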

    Definition 2.2. (See [27]) Let $\varrho>0$ and $\Phi$ be defined on $\mathbb{N}_a$. The $\varrho$-order Caputo fractional difference of $\Phi$ is defined by

    $${}^{C}_{a}\Delta^{\varrho}_{\xi}\Phi(\xi)=\Delta^{-(n-\varrho)}\left(\Delta^{n}\Phi(\xi)\right)=\frac{1}{\Gamma(n-\varrho)}\sum_{l=a}^{\xi-(n-\varrho)}(\xi-l-1)^{(n-\varrho-1)}\Delta^{n}\Phi(l),\tag{2.2}$$

    while the Riemann-Liouville fractional difference of Φ is defined by

    $${}^{RL}_{\;\;a}\Delta^{\varrho}_{\xi}\Phi(\xi)=\Delta^{n}\Delta^{-(n-\varrho)}\Phi(\xi),\tag{2.3}$$

    where $\xi\in\mathbb{N}_{a+n-\varrho}$ and $n-1<\varrho\le n$.
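    Under the same assumptions, both differences can be sketched on top of frac_sum from the previous snippet; here n is the ceiling of $\varrho$, and a non-integer $\varrho$ is assumed so that $\Gamma(n-\varrho)$ is finite (again our own helper names, not the paper's):

```python
from math import ceil, comb

def fwd_diff(phi, n=1):
    # n-th forward difference, returned as a new grid function.
    def d(l):
        return sum((-1) ** (n - k) * comb(n, k) * phi(l + k)
                   for k in range(n + 1))
    return d

def caputo_diff(phi, rho, a, xi):
    # Caputo difference (2.2): the (n - rho)-th fractional sum
    # of the n-th forward difference of phi.
    n = ceil(rho)
    return frac_sum(fwd_diff(phi, n), n - rho, a, xi)

def rl_diff(phi, rho, a, xi):
    # Riemann-Liouville difference (2.3): same ingredients,
    # with the order of the two operations swapped.
    n = ceil(rho)
    return fwd_diff(lambda t: frac_sum(phi, n - rho, a, t), n)(xi)
```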

    Lemma 2.1. (See [24,27]) For ϱ>0,

    $$\Delta^{-\varrho}\,{}^{C}_{a}\Delta^{\varrho}_{\xi}\Phi(\xi)=\Phi(\xi)+C_0+C_1\xi^{(1)}+\cdots+C_{N-1}\xi^{(N-1)},\tag{2.4}$$

    where $C_i\in\mathbb{R}$, $i=0,1,\ldots,N-1$, $\Phi$ is defined on $\mathbb{N}_a$, and $0\le N-1<\varrho\le N$.

    Lemma 2.2. (See [32]) Let $0\le N-1<\varrho\le N$ and $\Phi$ be defined on $\mathbb{N}_a$. Then,

    $$\Delta^{-\varrho}\,{}^{RL}_{\;\;0}\Delta^{\varrho}_{\xi}\Phi(\xi)=\Phi(\xi)+B_1\xi^{(\varrho-1)}+B_2\xi^{(\varrho-2)}+\cdots+B_N\xi^{(\varrho-N)},\tag{2.5}$$

    for $B_1,\ldots,B_N\in\mathbb{R}$.

    Lemma 2.3. (See [31]) Let $\varrho$ and $\xi$ be arbitrary real numbers. Then,

    $$(1)\ \sum_{l=0}^{\xi-\varrho}(\xi-l-1)^{(\varrho-1)}=\frac{\Gamma(\xi+1)}{\varrho\,\Gamma(\xi-\varrho+1)};\qquad (2)\ \sum_{l=0}^{L}(\varrho+L-l-1)^{(\varrho-1)}=\frac{\Gamma(\varrho+L+1)}{\varrho\,\Gamma(L+1)}.$$

    Lemma 2.4. (See [31]) For $\zeta\in\mathbb{R}\setminus(\mathbb{Z}^{-}\cup\{0\})$, we have

    $$\Delta^{-\varrho}\xi^{(\zeta)}=\frac{\Gamma(\zeta+1)}{\Gamma(\zeta+\varrho+1)}\,\xi^{(\zeta+\varrho)}.$$
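    Both identities of Lemma 2.3 are easy to spot-check numerically with the ff helper from the sketch above; the sample values below are our own illustration:

```python
from math import gamma
# assumes ff from the earlier sketch

rho, xi, L = 0.5, 3.5, 6

lhs1 = sum(ff(xi - l - 1, rho - 1) for l in range(int(xi - rho) + 1))
rhs1 = gamma(xi + 1) / (rho * gamma(xi - rho + 1))       # Lemma 2.3 (1)

lhs2 = sum(ff(rho + L - l - 1, rho - 1) for l in range(L + 1))
rhs2 = gamma(rho + L + 1) / (rho * gamma(L + 1))         # Lemma 2.3 (2)

print(abs(lhs1 - rhs1) < 1e-9, abs(lhs2 - rhs2) < 1e-9)  # True True
```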

    In this section, we prove the existence and uniqueness of the solution of the Caputo three-point discrete fractional problem (1.2). To accomplish this, we denote by $C(\mathbb{N}_{\varrho-3,\varrho+T},\mathbb{R})$ the collection of all functions $\chi:\mathbb{N}_{\varrho-3,\varrho+T}:=\{\varrho-3,\varrho-2,\ldots,\varrho+T\}\to\mathbb{R}$, equipped with the norm

    $$\|\chi\|=\max\left\{|\chi(\xi)| : \xi\in\mathbb{N}_{\varrho-3,\varrho+T}\right\}.$$

    Lemma 3.1. Let $2<\varrho\le 3$ and $\Phi:[\varrho-3,\varrho+T]_{\mathbb{N}_{\varrho-3}}\to\mathbb{R}$. A function $\chi(\xi)$, $\xi\in[\varrho-3,\varrho+T]_{\mathbb{N}_{\varrho-3}}$, satisfies the discrete FBVP

    $$\begin{cases}{}^{c}\Delta^{\varrho}_{\xi}\chi(\xi)=\Phi(\xi+\varrho-1), & 2<\varrho\le 3,\\ \Delta\chi(\varrho-3)=A_1,\quad \chi(\varrho+T)=\lambda\,\Delta^{-\beta}\chi(\eta+\beta),\quad \Delta^{2}\chi(\varrho-3)=A_2, & 0<\beta\le 1,\end{cases}\tag{3.1}$$

    is given by

    $$\begin{aligned}\chi(\xi)={}&\frac{1}{\Gamma(\varrho)}\sum_{l=0}^{\xi-\varrho}(\xi-\rho(l))^{(\varrho-1)}\Phi(l+\varrho-1)\\&-\frac{\lambda}{K_1\Gamma(\varrho)}\sum_{l=\varrho}^{\eta}\sum_{\tau=0}^{l-\varrho}(\eta+\beta-\rho(l))^{(\beta-1)}(l-\rho(\tau))^{(\varrho-1)}\Phi(\tau+\varrho-1)\\&+\frac{\Gamma(\beta)}{K_1\Gamma(\varrho)}\sum_{l=0}^{T}(\varrho+T-\rho(l))^{(\varrho-1)}\Phi(l+\varrho-1)+K_2+\left[A_1-A_2(\varrho-3)\right]\xi^{(1)}+\frac{A_2}{2}\,\xi^{(2)},\end{aligned}\tag{3.2}$$

    with

    $$K_1=\lambda\sum_{l=\varrho-3}^{\eta}(\eta+\beta-\rho(l))^{(\beta-1)}-\Gamma(\beta),\tag{3.3}$$

    and

    $$\begin{aligned}K_2={}&\frac{\lambda}{K_1}\left[A_2(\varrho-3)-A_1\right]\sum_{l=\varrho-3}^{\eta}(\eta+\beta-\rho(l))^{(\beta-1)}\,l^{(1)}-\frac{A_2\lambda}{2K_1}\sum_{l=\varrho-3}^{\eta}(\eta+\beta-\rho(l))^{(\beta-1)}\,l^{(2)}\\&+\frac{\Gamma(\beta)}{K_1}(\varrho+T)\left[A_1-A_2(\varrho-3)+\frac{A_2}{2}(\varrho+T-1)\right].\end{aligned}\tag{3.4}$$

    Proof. Let χ(ξ) be a solution to (3.1). Applying Lemma 2.1 and Definition 2.1, we find that

    $$\chi(\xi)=\frac{1}{\Gamma(\varrho)}\sum_{l=0}^{\xi-\varrho}(\xi-\rho(l))^{(\varrho-1)}\Phi(l+\varrho-1)+C_0+C_1\xi^{(1)}+C_2\xi^{(2)},\tag{3.5}$$

    for $\xi\in[\varrho-3,\varrho+T]_{\mathbb{N}_{\varrho-3}}$, where $C_0,C_1,C_2\in\mathbb{R}$. Applying the difference of order one to (3.5), we have

    $$\Delta\chi(\xi)=\frac{1}{\Gamma(\varrho-1)}\sum_{l=0}^{\xi-\varrho+1}(\xi-\rho(l))^{(\varrho-2)}\Phi(l+\varrho-1)+C_1+2C_2\xi^{(1)},$$

    and

    $$\Delta^{2}\chi(\xi)=\frac{1}{\Gamma(\varrho-2)}\sum_{l=0}^{\xi-\varrho+2}(\xi-\rho(l))^{(\varrho-3)}\Phi(l+\varrho-1)+2C_2.$$

    Now, from the conditions $\Delta\chi(\varrho-3)=A_1$ and $\Delta^{2}\chi(\varrho-3)=A_2$, we obtain that

    $$C_1=A_1-A_2(\varrho-3),\qquad C_2=\frac{A_2}{2}.$$

    Therefore,

    $$\chi(\xi)=\frac{1}{\Gamma(\varrho)}\sum_{l=0}^{\xi-\varrho}(\xi-\rho(l))^{(\varrho-1)}\Phi(l+\varrho-1)+C_0+\left[A_1-A_2(\varrho-3)\right]\xi^{(1)}+\frac{A_2}{2}\,\xi^{(2)},\tag{3.6}$$

    for $\xi\in[\varrho-3,\varrho+T]_{\mathbb{N}_{\varrho-3}}$. By using formula (3.5), one has

    $$\begin{aligned}\Delta^{-\beta}\chi(\xi)={}&\frac{C_1}{\Gamma(\beta)}\sum_{l=\varrho-3}^{\xi-\beta}(\xi-\rho(l))^{(\beta-1)}l^{(1)}+\frac{C_2}{\Gamma(\beta)}\sum_{l=\varrho-3}^{\xi-\beta}(\xi-\rho(l))^{(\beta-1)}l^{(2)}+\frac{C_0}{\Gamma(\beta)}\sum_{l=\varrho-3}^{\xi-\beta}(\xi-\rho(l))^{(\beta-1)}\\&+\frac{1}{\Gamma(\varrho)\Gamma(\beta)}\sum_{l=\varrho}^{\xi-\beta}\sum_{\tau=0}^{l-\varrho}(\xi-\rho(l))^{(\beta-1)}(l-\rho(\tau))^{(\varrho-1)}\Phi(\tau+\varrho-1).\end{aligned}\tag{3.7}$$

    The other condition of (3.1) gives

    $$\begin{aligned}\lambda\Delta^{-\beta}\chi(\eta+\beta)={}&\frac{\lambda\left[A_1-A_2(\varrho-3)\right]}{\Gamma(\beta)}\sum_{l=\varrho-3}^{\eta}(\eta+\beta-\rho(l))^{(\beta-1)}l^{(1)}+\frac{\lambda A_2}{2\Gamma(\beta)}\sum_{l=\varrho-3}^{\eta}(\eta+\beta-\rho(l))^{(\beta-1)}l^{(2)}\\&+\frac{\lambda C_0}{\Gamma(\beta)}\sum_{l=\varrho-3}^{\eta}(\eta+\beta-\rho(l))^{(\beta-1)}+\frac{\lambda}{\Gamma(\varrho)\Gamma(\beta)}\sum_{l=\varrho}^{\eta}\sum_{\tau=0}^{l-\varrho}(\eta+\beta-\rho(l))^{(\beta-1)}(l-\rho(\tau))^{(\varrho-1)}\Phi(\tau+\varrho-1)\\={}&\frac{1}{\Gamma(\varrho)}\sum_{l=0}^{T}(\varrho+T-\rho(l))^{(\varrho-1)}\Phi(l+\varrho-1)+C_0+\left[A_1-A_2(\varrho-3)\right](\varrho+T)^{(1)}+\frac{A_2}{2}(\varrho+T)^{(2)}.\end{aligned}$$

    Solving the last equation for $C_0$, we have

    $$C_0=\frac{\Gamma(\beta)}{K_1\Gamma(\varrho)}\sum_{l=0}^{T}(\varrho+T-\rho(l))^{(\varrho-1)}\Phi(l+\varrho-1)+K_2-\frac{\lambda}{K_1\Gamma(\varrho)}\sum_{l=\varrho}^{\eta}\sum_{\tau=0}^{l-\varrho}(\eta+\beta-\rho(l))^{(\beta-1)}(l-\rho(\tau))^{(\varrho-1)}\Phi(\tau+\varrho-1),$$

    where K1 and K2 are defined by (3.3) and (3.4), and one obtains (3.2) by substituting the value of C0 into (3.6).
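    As a concrete illustration, the constant $K_1$ of (3.3) can be evaluated by direct summation. In the sketch below (our own code, reusing ff from Section 2), the kernel $(\eta+\beta-\rho(l))^{(\beta-1)}$ is read as $(\eta+\beta-l-1)^{(\beta-1)}$, matching the kernel $(\xi-l-1)^{(\varrho-1)}$ of Definition 2.1:

```python
from math import gamma
# assumes ff from the earlier sketch

def K1(lam, beta, rho_, eta):
    # (3.3): l runs over rho_ - 3, rho_ - 2, ..., eta in unit steps.
    n = int(round(eta - (rho_ - 3)))
    S = sum(ff(eta + beta - (rho_ - 3 + k) - 1, beta - 1)
            for k in range(n + 1))
    return lam * S - gamma(beta)

print(round(K1(0.7, 0.5, 2.5, 2.5), 4))   # 0.9416, as in Example 5.1 below
```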

    Now, let us consider the operator $H:C(\mathbb{N}_{\varrho-3,\varrho+T},\mathbb{R})\to C(\mathbb{N}_{\varrho-3,\varrho+T},\mathbb{R})$ defined by

    $$\begin{aligned}(H\chi)(\xi)={}&\frac{1}{\Gamma(\varrho)}\sum_{l=0}^{\xi-\varrho}(\xi-\rho(l))^{(\varrho-1)}\Phi(l+\varrho-1,\chi(l+\varrho-1))\\&-\frac{\lambda}{K_1\Gamma(\varrho)}\sum_{l=\varrho}^{\eta}\sum_{\tau=0}^{l-\varrho}(\eta+\beta-\rho(l))^{(\beta-1)}(l-\rho(\tau))^{(\varrho-1)}\Phi(\tau+\varrho-1,\chi(\tau+\varrho-1))\\&+\frac{\Gamma(\beta)}{K_1\Gamma(\varrho)}\sum_{l=0}^{T}(\varrho+T-\rho(l))^{(\varrho-1)}\Phi(l+\varrho-1,\chi(l+\varrho-1))+K_2+\left[A_1-A_2(\varrho-3)\right]\xi^{(1)}+\frac{A_2}{2}\,\xi^{(2)}.\end{aligned}$$

    Theorem 3.1. Assume that:

    (H1) The function $\Phi$ satisfies $|\Phi(\xi,\chi_1)-\Phi(\xi,\chi_2)|\le K|\chi_1-\chi_2|$, where $K>0$, $\xi\in\mathbb{N}_{\varrho-3,\varrho+T}$ and $\chi_1,\chi_2\in C(\mathbb{N}_{\varrho-3,\varrho+T},\mathbb{R})$. Then the discrete FBVP (3.1) has a unique solution on $C(\mathbb{N}_{\varrho-3,\varrho+T},\mathbb{R})$ provided

    $$\frac{\Gamma(\varrho+T+1)}{\Gamma(\varrho+1)\Gamma(T+1)}\left(1+\frac{\Gamma(\beta)}{K_1}\right)+\frac{M\lambda}{K_1\Gamma(\varrho)}<\frac{1}{K},\tag{3.8}$$

    with

    $$M=\left|\sum_{l=\varrho}^{\eta}\sum_{\tau=0}^{l-\varrho}(\eta+\beta-\rho(l))^{(\beta-1)}(l-\rho(\tau))^{(\varrho-1)}\right|.\tag{3.9}$$

    Proof. Let $\chi_1,\chi_2\in C(\mathbb{N}_{\varrho-3,\varrho+T},\mathbb{R})$. Then, for each $\xi\in\mathbb{N}_{\varrho-3,\varrho+T}$, we have

    $$\begin{aligned}|(H\chi_1)(\xi)-(H\chi_2)(\xi)|\le{}&\frac{1}{\Gamma(\varrho)}\sum_{l=0}^{\xi-\varrho}(\xi-\rho(l))^{(\varrho-1)}\left|\Phi(l+\varrho-1,\chi_1(l+\varrho-1))-\Phi(l+\varrho-1,\chi_2(l+\varrho-1))\right|\\&+\frac{\lambda}{K_1\Gamma(\varrho)}\sum_{l=\varrho}^{\eta}\sum_{\tau=0}^{l-\varrho}(\eta+\beta-\rho(l))^{(\beta-1)}(l-\rho(\tau))^{(\varrho-1)}\left|\Phi(\tau+\varrho-1,\chi_1(\tau+\varrho-1))-\Phi(\tau+\varrho-1,\chi_2(\tau+\varrho-1))\right|\\&+\frac{\Gamma(\beta)}{K_1\Gamma(\varrho)}\sum_{l=0}^{T}(\varrho+T-\rho(l))^{(\varrho-1)}\left|\Phi(l+\varrho-1,\chi_1(l+\varrho-1))-\Phi(l+\varrho-1,\chi_2(l+\varrho-1))\right|.\end{aligned}$$

    It follows that

    $$\begin{aligned}\|H\chi_1-H\chi_2\|\le{}&\frac{K\|\chi_1-\chi_2\|}{\Gamma(\varrho)}\sum_{l=0}^{\xi-\varrho}(\xi-\rho(l))^{(\varrho-1)}+\frac{\lambda K\|\chi_1-\chi_2\|}{K_1\Gamma(\varrho)}\sum_{l=\varrho}^{\eta}\sum_{\tau=0}^{l-\varrho}(\eta+\beta-\rho(l))^{(\beta-1)}(l-\rho(\tau))^{(\varrho-1)}\\&+\frac{\Gamma(\beta)K\|\chi_1-\chi_2\|}{K_1\Gamma(\varrho)}\sum_{l=0}^{T}(\varrho+T-\rho(l))^{(\varrho-1)}\\ \le{}&\frac{K\|\chi_1-\chi_2\|}{\Gamma(\varrho)}\cdot\frac{\Gamma(\varrho+T+1)}{\varrho\,\Gamma(T+1)}+\frac{\lambda K\|\chi_1-\chi_2\|}{K_1\Gamma(\varrho)}\,M+\frac{\Gamma(\beta)K\|\chi_1-\chi_2\|}{K_1\Gamma(\varrho)}\cdot\frac{\Gamma(\varrho+T+1)}{\varrho\,\Gamma(T+1)}\\ ={}&K\|\chi_1-\chi_2\|\left[\frac{\Gamma(\varrho+T+1)}{\Gamma(\varrho+1)\Gamma(T+1)}+\frac{M\lambda}{K_1\Gamma(\varrho)}+\frac{\Gamma(\beta)\Gamma(\varrho+T+1)}{K_1\Gamma(\varrho+1)\Gamma(T+1)}\right].\end{aligned}$$

    From (3.8), we conclude that $H$ is a contraction. Then, by the Banach contraction principle, the discrete problem (3.1) has a unique solution on $C(\mathbb{N}_{\varrho-3,\varrho+T},\mathbb{R})$.
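    The Banach argument is constructive: once the operator $H$ is assembled from the sums above, the unique solution can be approximated by iterating $H$ on the grid. A minimal sketch, assuming $H$ is supplied as a function mapping a grid vector to a grid vector (the names picard, H, chi0 are ours):

```python
import numpy as np

def picard(H, chi0, tol=1e-12, max_iter=10_000):
    # Fixed-point iteration chi_{k+1} = H(chi_k); when (3.8) holds,
    # H is a contraction and the iterates converge geometrically.
    chi = np.asarray(chi0, dtype=float)
    for _ in range(max_iter):
        nxt = H(chi)
        if np.max(np.abs(nxt - chi)) < tol:
            return nxt
        chi = nxt
    raise RuntimeError("no convergence; condition (3.8) may fail")
```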

    Theorem 3.2. Suppose that $\Phi:[\varrho-3,\varrho+T]_{\mathbb{N}_{\varrho-3}}\times\mathbb{R}\to\mathbb{R}$ is a continuous function and

    $$R=\max\left\{|\Phi(l+\varrho-1,\chi(l+\varrho-1))| : l+\varrho-1\in\mathbb{N}_{\varrho-3,\varrho+T},\ \chi\in C(\mathbb{N}_{\varrho-3,\varrho+T},\mathbb{R}),\ \|\chi\|\le 2|K_2|\right\}.$$

    Then the discrete problem (3.1) has a solution provided that

    $$R\le\frac{|K_2|-\left|A_1-A_2(\varrho-3)\right|(\varrho+T)^{(1)}-\left|\frac{A_2}{2}\right|(\varrho+T)^{(2)}}{\dfrac{\Gamma(\varrho+T+1)}{\Gamma(\varrho+1)\Gamma(T+1)}\left(1+\dfrac{\Gamma(\beta)}{|K_1|}\right)+\dfrac{M|\lambda|}{\Gamma(\varrho)|K_1|}}.\tag{3.10}$$

    Proof. Let $G=\{\chi\in C(\mathbb{N}_{\varrho-3,\varrho+T},\mathbb{R}) : \|\chi\|\le 2|K_2|\}$. For $\chi\in G$, we get

    $$\begin{aligned}|(H\chi)(\xi)|\le{}&\frac{1}{\Gamma(\varrho)}\sum_{l=0}^{\xi-\varrho}(\xi-\rho(l))^{(\varrho-1)}|\Phi(l+\varrho-1,\chi(l+\varrho-1))|\\&+\frac{|\lambda|}{|K_1|\Gamma(\varrho)}\sum_{l=\varrho}^{\eta}\sum_{\tau=0}^{l-\varrho}(\eta+\beta-\rho(l))^{(\beta-1)}(l-\rho(\tau))^{(\varrho-1)}|\Phi(\tau+\varrho-1,\chi(\tau+\varrho-1))|\\&+\frac{\Gamma(\beta)}{|K_1|\Gamma(\varrho)}\sum_{l=0}^{T}(\varrho+T-\rho(l))^{(\varrho-1)}|\Phi(l+\varrho-1,\chi(l+\varrho-1))|+|K_2|+\left|\left[A_1-A_2(\varrho-3)\right]\xi^{(1)}\right|+\left|\frac{A_2}{2}\,\xi^{(2)}\right|\\ \le{}&\frac{R}{\Gamma(\varrho)}\left[\sum_{l=0}^{\xi-\varrho}(\xi-\rho(l))^{(\varrho-1)}+\frac{|\lambda|}{|K_1|}\sum_{l=\varrho}^{\eta}\sum_{\tau=0}^{l-\varrho}(\eta+\beta-\rho(l))^{(\beta-1)}(l-\rho(\tau))^{(\varrho-1)}+\frac{\Gamma(\beta)}{|K_1|}\sum_{l=0}^{T}(\varrho+T-\rho(l))^{(\varrho-1)}\right]\\&+|K_2|+\left|A_1-A_2(\varrho-3)\right|(\varrho+T)^{(1)}+\left|\frac{A_2}{2}\right|(\varrho+T)^{(2)}\\ \le{}&\frac{R}{\Gamma(\varrho)}\left[\frac{\Gamma(\varrho+T+1)}{\varrho\,\Gamma(T+1)}\left(1+\frac{\Gamma(\beta)}{|K_1|}\right)+\frac{M|\lambda|}{|K_1|}\right]+|K_2|+\left|A_1-A_2(\varrho-3)\right|(\varrho+T)^{(1)}+\left|\frac{A_2}{2}\right|(\varrho+T)^{(2)}.\end{aligned}$$

    From (3.10), we have $\|H\chi\|\le 2|K_2|$, which implies that $H:G\to G$. By Brouwer's fixed point theorem, we know that the discrete problem (3.1) has a solution.

    In this section, we study the Hyers-Ulam and Hyers-Ulam-Rassias stability for the solutions of the discrete Riemann-Liouville (RL) FBVP

    $$\begin{cases}{}^{RL}\Delta^{\varrho}_{\xi}\chi(\xi)=\Phi(\xi+\varrho-1,\chi(\xi+\varrho-1)), & 2<\varrho\le 3,\\ \Delta\chi(\varrho-3)=A_1,\quad \chi(\varrho+T)=\lambda\,\Delta^{-\beta}\chi(\eta+\beta),\quad \Delta^{2}\chi(\varrho-3)=A_2, & 0<\beta\le 1,\end{cases}\tag{4.1}$$

    for $\xi\in[0,T]_{\mathbb{N}_0}=\{0,1,2,\ldots,T\}$ and $\eta\in[\varrho-1,T+\varrho-1]_{\mathbb{N}_{\varrho-1}}$, where ${}^{RL}\Delta^{\varrho}_{\xi}$ is the RL fractional difference operator. We begin by proving the following lemma.

    Lemma 4.1. Suppose that $2<\varrho\le 3$ and $\Phi:[\varrho-3,\varrho+T]_{\mathbb{N}_{\varrho-3}}\to\mathbb{R}$. A function $\chi$ satisfies the discrete problem

    $$\begin{cases}{}^{RL}\Delta^{\varrho}_{\xi}\chi(\xi)=\Phi(\xi+\varrho-1), & 2<\varrho\le 3,\\ \Delta\chi(\varrho-3)=A_1,\quad \chi(\varrho+T)=\lambda\,\Delta^{-\beta}\chi(\eta+\beta),\quad \Delta^{2}\chi(\varrho-3)=A_2, & 0<\beta\le 1,\end{cases}\tag{4.2}$$

    if, and only if, $\chi(\xi)$, $\xi\in[\varrho-3,\varrho+T]_{\mathbb{N}_{\varrho-3}}$, has the form

    $$\begin{aligned}\chi(\xi)={}&\frac{1}{\Gamma(\varrho)}\sum_{l=0}^{\xi-\varrho}(\xi-\rho(l))^{(\varrho-1)}\Phi(l+\varrho-1)+\left\{\frac{A_2-A_1(\varrho-3)}{\Gamma(\varrho)}+\frac{f_\Phi+d_\varrho}{h_\varrho}\cdot\frac{\varrho-3}{2(\varrho-1)}\right\}\xi^{(\varrho-1)}\\&+\frac{1}{\Gamma(\varrho-1)}\left[A_1-\frac{f_\Phi+d_\varrho}{h_\varrho}(\varrho-3)\Gamma(\varrho-2)\right]\xi^{(\varrho-2)}+\frac{f_\Phi+d_\varrho}{h_\varrho}\,\xi^{(\varrho-3)},\end{aligned}\tag{4.3}$$

    where

    $$\begin{aligned}h_\varrho={}&\frac{\varrho-3}{\varrho-2}(\varrho+T)^{(\varrho-2)}-\frac{\varrho-3}{2(\varrho-1)}(\varrho+T)^{(\varrho-1)}-(\varrho+T)^{(\varrho-3)}\\&+\frac{\lambda(\varrho-3)}{2\Gamma(\beta)(\varrho-1)}\sum_{l=\varrho-3}^{\eta}(\eta+\beta-\rho(l))^{(\beta-1)}l^{(\varrho-1)}-\frac{\lambda(\varrho-3)}{\Gamma(\beta)(\varrho-2)}\sum_{l=\varrho-3}^{\eta}(\eta+\beta-\rho(l))^{(\beta-1)}l^{(\varrho-2)}\\&+\frac{\lambda}{\Gamma(\beta)}\sum_{l=\varrho-3}^{\eta}(\eta+\beta-\rho(l))^{(\beta-1)}l^{(\varrho-3)},\end{aligned}\tag{4.4}$$
    $$\begin{aligned}d_\varrho={}&\frac{A_1}{\Gamma(\varrho-1)}(\varrho+T)^{(\varrho-2)}-\frac{\lambda\left[A_2-A_1(\varrho-3)\right]}{\Gamma(\varrho)\Gamma(\beta)}\sum_{l=\varrho-3}^{\eta}(\eta+\beta-\rho(l))^{(\beta-1)}l^{(\varrho-1)}\\&-\frac{\lambda A_1}{\Gamma(\varrho-1)\Gamma(\beta)}\sum_{l=\varrho-3}^{\eta}(\eta+\beta-\rho(l))^{(\beta-1)}l^{(\varrho-2)}+\frac{A_2-A_1(\varrho-3)}{\Gamma(\varrho)}(\varrho+T)^{(\varrho-1)},\end{aligned}\tag{4.5}$$
    $$f_\Phi=-\frac{\lambda}{\Gamma(\varrho)\Gamma(\beta)}\sum_{l=\varrho}^{\eta}\sum_{\tau=0}^{l-\varrho}(\eta+\beta-\rho(l))^{(\beta-1)}(l-\rho(\tau))^{(\varrho-1)}\Phi(\tau+\varrho-1)+\frac{1}{\Gamma(\varrho)}\sum_{l=0}^{T}(\varrho+T-\rho(l))^{(\varrho-1)}\Phi(l+\varrho-1).\tag{4.6}$$

    Proof. Let χ(ξ) be a solution to (4.2). Applying Lemma 2.2 and Definition 2.1, we obtain that the general solution of (4.2) is given by

    $$\chi(\xi)=\frac{1}{\Gamma(\varrho)}\sum_{l=0}^{\xi-\varrho}(\xi-\rho(l))^{(\varrho-1)}\Phi(l+\varrho-1)+B_1\xi^{(\varrho-1)}+B_2\xi^{(\varrho-2)}+B_3\xi^{(\varrho-3)},\tag{4.7}$$

    for $\xi\in[\varrho-3,\varrho+T]_{\mathbb{N}_{\varrho-3}}$, where $B_1,B_2,B_3\in\mathbb{R}$. The first-order difference of (4.7) is

    $$\Delta\chi(\xi)=\frac{1}{\Gamma(\varrho-1)}\sum_{l=0}^{\xi-\varrho+1}(\xi-\rho(l))^{(\varrho-2)}\Phi(l+\varrho-1)+B_1(\varrho-1)\xi^{(\varrho-2)}+B_2(\varrho-2)\xi^{(\varrho-3)}+B_3(\varrho-3)\xi^{(\varrho-4)},$$

    while

    $$\Delta^{2}\chi(\xi)=\frac{1}{\Gamma(\varrho-2)}\sum_{l=0}^{\xi-\varrho+2}(\xi-\rho(l))^{(\varrho-3)}\Phi(l+\varrho-1)+B_1(\varrho-1)(\varrho-2)\xi^{(\varrho-3)}+B_2(\varrho-2)(\varrho-3)\xi^{(\varrho-4)}+B_3(\varrho-3)(\varrho-4)\xi^{(\varrho-5)}.$$

    From the conditions $\Delta\chi(\varrho-3)=A_1$ and $\Delta^{2}\chi(\varrho-3)=A_2$, we obtain that

    $$B_2=\frac{1}{\Gamma(\varrho-1)}\left[A_1-B_3(\varrho-3)\Gamma(\varrho-2)\right],$$

    and

    $$B_1=\frac{1}{\Gamma(\varrho)}\left[A_2-A_1(\varrho-3)\right]+\frac{B_3(\varrho-3)}{2(\varrho-1)}.$$

    Now, applying the fractional sum of order $\beta$ to (4.7), it follows that

    $$\begin{aligned}\Delta^{-\beta}\chi(\xi)={}&\frac{1}{\Gamma(\beta)}\left\{\frac{A_2-A_1(\varrho-3)}{\Gamma(\varrho)}+\frac{B_3(\varrho-3)}{2(\varrho-1)}\right\}\sum_{l=\varrho-3}^{\xi-\beta}(\xi-\rho(l))^{(\beta-1)}l^{(\varrho-1)}+\frac{B_3}{\Gamma(\beta)}\sum_{l=\varrho-3}^{\xi-\beta}(\xi-\rho(l))^{(\beta-1)}l^{(\varrho-3)}\\&+\frac{A_1-B_3(\varrho-3)\Gamma(\varrho-2)}{\Gamma(\varrho-1)\Gamma(\beta)}\sum_{l=\varrho-3}^{\xi-\beta}(\xi-\rho(l))^{(\beta-1)}l^{(\varrho-2)}\\&+\frac{1}{\Gamma(\varrho)\Gamma(\beta)}\sum_{l=\varrho}^{\xi-\beta}\sum_{\tau=0}^{l-\varrho}(\xi-\rho(l))^{(\beta-1)}(l-\rho(\tau))^{(\varrho-1)}\Phi(\tau+\varrho-1).\end{aligned}\tag{4.8}$$

    From the boundary condition $\chi(\varrho+T)=\lambda\,\Delta^{-\beta}\chi(\eta+\beta)$ in (4.2), we have

    $$\begin{aligned}\lambda\Delta^{-\beta}\chi(\eta+\beta)={}&\frac{\lambda}{\Gamma(\beta)}\left\{\frac{A_2-A_1(\varrho-3)}{\Gamma(\varrho)}+\frac{B_3(\varrho-3)}{2(\varrho-1)}\right\}\sum_{l=\varrho-3}^{\eta}(\eta+\beta-\rho(l))^{(\beta-1)}l^{(\varrho-1)}\\&+\frac{\lambda}{\Gamma(\varrho-1)\Gamma(\beta)}\left[A_1-B_3(\varrho-3)\Gamma(\varrho-2)\right]\sum_{l=\varrho-3}^{\eta}(\eta+\beta-\rho(l))^{(\beta-1)}l^{(\varrho-2)}\\&+\frac{B_3\lambda}{\Gamma(\beta)}\sum_{l=\varrho-3}^{\eta}(\eta+\beta-\rho(l))^{(\beta-1)}l^{(\varrho-3)}+\frac{\lambda}{\Gamma(\varrho)\Gamma(\beta)}\sum_{l=\varrho}^{\eta}\sum_{\tau=0}^{l-\varrho}(\eta+\beta-\rho(l))^{(\beta-1)}(l-\rho(\tau))^{(\varrho-1)}\Phi(\tau+\varrho-1)\\ ={}&\frac{1}{\Gamma(\varrho)}\sum_{l=0}^{T}(\varrho+T-\rho(l))^{(\varrho-1)}\Phi(l+\varrho-1)+\frac{1}{\Gamma(\varrho-1)}\left[A_1-B_3(\varrho-3)\Gamma(\varrho-2)\right](\varrho+T)^{(\varrho-2)}\\&+\left\{\frac{A_2-A_1(\varrho-3)}{\Gamma(\varrho)}+\frac{B_3(\varrho-3)}{2(\varrho-1)}\right\}(\varrho+T)^{(\varrho-1)}+B_3(\varrho+T)^{(\varrho-3)}.\end{aligned}$$

    Then,

    $$B_3=\frac{f_\Phi+d_\varrho}{h_\varrho},\qquad B_2=\frac{1}{\Gamma(\varrho-1)}\left[A_1-\frac{f_\Phi+d_\varrho}{h_\varrho}(\varrho-3)\Gamma(\varrho-2)\right],\qquad B_1=\frac{A_2-A_1(\varrho-3)}{\Gamma(\varrho)}+\frac{f_\Phi+d_\varrho}{h_\varrho}\cdot\frac{\varrho-3}{2(\varrho-1)},$$

    where $h_\varrho$, $d_\varrho$ and $f_\Phi$ are defined by (4.4)-(4.6), respectively. Substituting the values of the constants $B_1$, $B_2$ and $B_3$ into (4.7), we obtain (4.3), and our proof is complete.

    From Lemma 4.1, the solution of the discrete RL problem (4.1) is given by the formula

    $$\begin{aligned}\chi(\xi)={}&\frac{1}{\Gamma(\varrho)}\sum_{l=0}^{\xi-\varrho}(\xi-\rho(l))^{(\varrho-1)}\Phi(l+\varrho-1,\chi(l+\varrho-1))+\left\{\frac{A_2-A_1(\varrho-3)}{\Gamma(\varrho)}+\frac{f_\chi+d_\varrho}{h_\varrho}\cdot\frac{\varrho-3}{2(\varrho-1)}\right\}\xi^{(\varrho-1)}\\&+\frac{1}{\Gamma(\varrho-1)}\left[A_1-\frac{f_\chi+d_\varrho}{h_\varrho}(\varrho-3)\Gamma(\varrho-2)\right]\xi^{(\varrho-2)}+\frac{f_\chi+d_\varrho}{h_\varrho}\,\xi^{(\varrho-3)},\end{aligned}\tag{4.9}$$

    where $h_\varrho$ and $d_\varrho$ are defined by (4.4) and (4.5), respectively, and

    $$f_\chi=-\frac{\lambda}{\Gamma(\varrho)\Gamma(\beta)}\sum_{l=\varrho}^{\eta}\sum_{\tau=0}^{l-\varrho}(\eta+\beta-\rho(l))^{(\beta-1)}(l-\rho(\tau))^{(\varrho-1)}\Phi(\tau+\varrho-1,\chi(\tau+\varrho-1))+\frac{1}{\Gamma(\varrho)}\sum_{l=0}^{T}(\varrho+T-\rho(l))^{(\varrho-1)}\Phi(l+\varrho-1,\chi(l+\varrho-1)).\tag{4.10}$$

    Lemma 4.2. If $\chi$ is given by (4.3), then

    $$\chi(\xi)=\frac{1}{\Gamma(\varrho)}\sum_{l=0}^{\xi-\varrho}(\xi-\rho(l))^{(\varrho-1)}\Phi(l+\varrho-1)+Q(\xi)+f_\chi K(\xi),\tag{4.11}$$

    where

    $$Q(\xi)=\left\{\frac{A_2-A_1(\varrho-3)}{\Gamma(\varrho)}+\frac{d_\varrho(\varrho-3)}{2h_\varrho(\varrho-1)}\right\}\xi^{(\varrho-1)}+\frac{1}{\Gamma(\varrho-1)}\left[A_1-\frac{d_\varrho}{h_\varrho}(\varrho-3)\Gamma(\varrho-2)\right]\xi^{(\varrho-2)}+\frac{d_\varrho}{h_\varrho}\,\xi^{(\varrho-3)},$$

    and

    $$K(\xi)=\frac{\varrho-3}{2h_\varrho(\varrho-1)}\,\xi^{(\varrho-1)}-\frac{\varrho-3}{h_\varrho(\varrho-2)}\,\xi^{(\varrho-2)}+\frac{\xi^{(\varrho-3)}}{h_\varrho}.$$

    Proof. Let $\chi$ be a solution of (4.2). Then, for $\xi\in[\varrho-3,\varrho+T]_{\mathbb{N}_{\varrho-3}}$,

    $$\begin{aligned}\chi(\xi)={}&\frac{1}{\Gamma(\varrho)}\sum_{l=0}^{\xi-\varrho}(\xi-\rho(l))^{(\varrho-1)}\Phi(l+\varrho-1)+\left\{\frac{A_2-A_1(\varrho-3)}{\Gamma(\varrho)}+\frac{f_\chi+d_\varrho}{h_\varrho}\cdot\frac{\varrho-3}{2(\varrho-1)}\right\}\xi^{(\varrho-1)}\\&+\frac{1}{\Gamma(\varrho-1)}\left[A_1-\frac{f_\chi+d_\varrho}{h_\varrho}(\varrho-3)\Gamma(\varrho-2)\right]\xi^{(\varrho-2)}+\frac{f_\chi+d_\varrho}{h_\varrho}\,\xi^{(\varrho-3)}\\ ={}&\frac{1}{\Gamma(\varrho)}\sum_{l=0}^{\xi-\varrho}(\xi-\rho(l))^{(\varrho-1)}\Phi(l+\varrho-1)+\left\{\frac{A_2-A_1(\varrho-3)}{\Gamma(\varrho)}+\frac{d_\varrho(\varrho-3)}{2h_\varrho(\varrho-1)}\right\}\xi^{(\varrho-1)}\\&+\frac{1}{\Gamma(\varrho-1)}\left[A_1-\frac{d_\varrho}{h_\varrho}(\varrho-3)\Gamma(\varrho-2)\right]\xi^{(\varrho-2)}+\frac{d_\varrho}{h_\varrho}\,\xi^{(\varrho-3)}+f_\chi\left[\frac{\varrho-3}{2h_\varrho(\varrho-1)}\,\xi^{(\varrho-1)}-\frac{\varrho-3}{h_\varrho(\varrho-2)}\,\xi^{(\varrho-2)}+\frac{\xi^{(\varrho-3)}}{h_\varrho}\right]\\ ={}&\frac{1}{\Gamma(\varrho)}\sum_{l=0}^{\xi-\varrho}(\xi-\rho(l))^{(\varrho-1)}\Phi(l+\varrho-1)+Q(\xi)+f_\chi K(\xi).\end{aligned}$$

    The proof is complete.
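    Since the stability constant $M_1$ below involves $\max_{\xi}|K(\xi)|$, it is convenient to have $K(\xi)$ in computable form. A sketch under our earlier assumptions (ff as before; h stands for $h_\varrho$, assumed already computed from (4.4)):

```python
def K_xi(xi, rho_, h):
    # K(xi) from Lemma 4.2: the three falling-factorial terms mirror
    # the xi^(rho-1), xi^(rho-2), xi^(rho-3) contributions of B1, B2, B3.
    return ((rho_ - 3) / (2 * h * (rho_ - 1)) * ff(xi, rho_ - 1)
            - (rho_ - 3) / (h * (rho_ - 2)) * ff(xi, rho_ - 2)
            + ff(xi, rho_ - 3) / h)
```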

    Definition 4.1. We say that the discrete RL problem (4.1) is Hyers-Ulam stable if, for each function $\nu\in C(\mathbb{N}_{\varrho-3,\varrho+T},\mathbb{R})$ satisfying

    $$\left|{}^{RL}\Delta^{\varrho}_{\xi}\nu(\xi)-\Phi(\xi+\varrho-1,\nu(\xi+\varrho-1))\right|\le\epsilon,\qquad \xi\in[0,T]_{\mathbb{N}_0},\tag{4.12}$$

    with $\epsilon>0$, there exist a solution $\chi\in C(\mathbb{N}_{\varrho-3,\varrho+T},\mathbb{R})$ of (4.1) and $\delta>0$ such that

    $$|\nu(\xi)-\chi(\xi)|\le\delta\epsilon,\qquad \xi\in[\varrho-3,\varrho+T]_{\mathbb{N}_{\varrho-3}}.\tag{4.13}$$

    Definition 4.2. We say that the discrete RL problem (4.1) is Hyers-Ulam-Rassias stable if, for each function $\nu\in C(\mathbb{N}_{\varrho-3,\varrho+T},\mathbb{R})$ satisfying

    $$\left|{}^{RL}\Delta^{\varrho}_{\xi}\nu(\xi)-\Phi(\xi+\varrho-1,\nu(\xi+\varrho-1))\right|\le\epsilon\,\theta(\xi+\varrho-1),\qquad \xi\in[0,T]_{\mathbb{N}_0},\tag{4.14}$$

    with $\epsilon>0$, there exist a solution $\chi\in C(\mathbb{N}_{\varrho-3,\varrho+T},\mathbb{R})$ of (4.1) and $\delta_2>0$ such that

    $$|\nu(\xi)-\chi(\xi)|\le\delta_2\,\epsilon\,\theta(\xi+\varrho-1),\qquad \xi\in[\varrho-3,\varrho+T]_{\mathbb{N}_{\varrho-3}}.\tag{4.15}$$

    Remark 4.1. A function $\nu\in C(\mathbb{N}_{\varrho-3,\varrho+T},\mathbb{R})$ satisfies (4.12) if, and only if, there exists $\mu:[\varrho-3,\varrho+T]_{\mathbb{N}_{\varrho-3}}\to\mathbb{R}$ satisfying:

    (H2) $|\mu(\xi+\varrho-1)|\le\epsilon$, $\xi\in[0,T]_{\mathbb{N}_0}$;

    (H3) ${}^{RL}\Delta^{\varrho}_{\xi}\nu(\xi)=\Phi(\xi+\varrho-1,\nu(\xi+\varrho-1))+\mu(\xi+\varrho-1)$, $\xi\in[0,T]_{\mathbb{N}_0}$.

    Remark 4.2. A function $\nu\in C(\mathbb{N}_{\varrho-3,\varrho+T},\mathbb{R})$ satisfies (4.14) if, and only if, there exists $\mu:[\varrho-3,\varrho+T]_{\mathbb{N}_{\varrho-3}}\to\mathbb{R}$ satisfying:

    (H4) $|\mu(\xi+\varrho-1)|\le\epsilon\,\theta(\xi+\varrho-1)$, $\xi\in[0,T]_{\mathbb{N}_0}$;

    (H5) ${}^{RL}\Delta^{\varrho}_{\xi}\nu(\xi)=\Phi(\xi+\varrho-1,\nu(\xi+\varrho-1))+\mu(\xi+\varrho-1)$, $\xi\in[0,T]_{\mathbb{N}_0}$.

    Lemma 4.3. If ν satisfies (4.12), then

    $$\left|\nu(\xi)-\frac{1}{\Gamma(\varrho)}\sum_{l=0}^{\xi-\varrho}(\xi-\rho(l))^{(\varrho-1)}\Phi(l+\varrho-1,\nu(l+\varrho-1))-Q(\xi)-f_\nu K(\xi)\right|\le\frac{\epsilon\,\Gamma(\varrho+T+1)}{\Gamma(\varrho+1)\Gamma(T+1)}.$$

    Proof. Using our hypothesis, and based on Remark 4.1, the solution to (H3) satisfies

    $$\nu(\xi)=\frac{1}{\Gamma(\varrho)}\sum_{l=0}^{\xi-\varrho}(\xi-\rho(l))^{(\varrho-1)}\Phi(l+\varrho-1,\nu(l+\varrho-1))+Q(\xi)+f_\nu K(\xi)+\frac{1}{\Gamma(\varrho)}\sum_{l=0}^{\xi-\varrho}(\xi-\rho(l))^{(\varrho-1)}\mu(l+\varrho-1).$$

    Hence,

    $$\begin{aligned}\left|\nu(\xi)-\frac{1}{\Gamma(\varrho)}\sum_{l=0}^{\xi-\varrho}(\xi-\rho(l))^{(\varrho-1)}\Phi(l+\varrho-1,\nu(l+\varrho-1))-Q(\xi)-f_\nu K(\xi)\right|&=\left|\frac{1}{\Gamma(\varrho)}\sum_{l=0}^{\xi-\varrho}(\xi-\rho(l))^{(\varrho-1)}\mu(l+\varrho-1)\right|\\&\le\frac{1}{\Gamma(\varrho)}\sum_{l=0}^{\xi-\varrho}(\xi-\rho(l))^{(\varrho-1)}|\mu(l+\varrho-1)|\le\frac{\epsilon\,\Gamma(\varrho+T+1)}{\Gamma(\varrho+1)\Gamma(T+1)},\end{aligned}$$

    and the desired inequality is derived.

    Theorem 4.1. If condition (H1) holds and the inequality (4.12) is satisfied, then the discrete RL problem (4.1) is Hyers-Ulam stable under the condition

    $$K<\frac{\Gamma(\beta)\Gamma(T+1)\Gamma(\varrho+1)}{2M_1\Gamma(\varrho+T+1)\Gamma(\beta)+MM_1\lambda\varrho\,\Gamma(T+1)},\tag{4.16}$$

    where $M_1=\max\left(1,\ \max_{\xi\in\mathbb{N}_{\varrho-3,\varrho+T}}|K(\xi)|\right)$.

    Proof. Let $\xi\in[\varrho-3,\varrho+T]_{\mathbb{N}_{\varrho-3}}$. From Lemma 4.3, we have

    $$\begin{aligned}|\nu(\xi)-\chi(\xi)|&=\left|\nu(\xi)-\frac{1}{\Gamma(\varrho)}\sum_{l=0}^{\xi-\varrho}(\xi-\rho(l))^{(\varrho-1)}\Phi(l+\varrho-1,\chi(l+\varrho-1))-Q(\xi)-f_\chi K(\xi)\right|\\&\le\left|\nu(\xi)-\frac{1}{\Gamma(\varrho)}\sum_{l=0}^{\xi-\varrho}(\xi-\rho(l))^{(\varrho-1)}\Phi(l+\varrho-1,\nu(l+\varrho-1))-Q(\xi)-f_\nu K(\xi)\right|\\&\quad+\frac{1}{\Gamma(\varrho)}\sum_{l=0}^{\xi-\varrho}(\xi-\rho(l))^{(\varrho-1)}\left|\Phi(l+\varrho-1,\chi(l+\varrho-1))-\Phi(l+\varrho-1,\nu(l+\varrho-1))\right|+|K(\xi)|\,|f_\nu-f_\chi|.\end{aligned}$$

    It follows that

    $$\begin{aligned}|\nu(\xi)-\chi(\xi)|\le{}&\frac{\epsilon\,\Gamma(\varrho+T+1)}{\Gamma(\varrho+1)\Gamma(T+1)}+\frac{K}{\Gamma(\varrho)}\sum_{l=0}^{\xi-\varrho}(\xi-\rho(l))^{(\varrho-1)}|\chi(l+\varrho-1)-\nu(l+\varrho-1)|\\&+|K(\xi)|\,K\left\{\frac{\lambda}{\Gamma(\varrho)\Gamma(\beta)}\sum_{l=\varrho}^{\eta}\sum_{\tau=0}^{l-\varrho}(\eta+\beta-\rho(l))^{(\beta-1)}(l-\rho(\tau))^{(\varrho-1)}|\nu(\tau+\varrho-1)-\chi(\tau+\varrho-1)|\right.\\&\qquad\qquad\left.+\frac{1}{\Gamma(\varrho)}\sum_{l=0}^{T}(\varrho+T-\rho(l))^{(\varrho-1)}|\chi(l+\varrho-1)-\nu(l+\varrho-1)|\right\}\\ \le{}&\frac{\epsilon\,\Gamma(\varrho+T+1)}{\Gamma(\varrho+1)\Gamma(T+1)}+\frac{K\|\chi-\nu\|}{\Gamma(\varrho)}\cdot\frac{\Gamma(\xi+1)}{\varrho\,\Gamma(\xi+1-\varrho)}+|K(\xi)|\,K\|\chi-\nu\|\left[\frac{M\lambda}{\Gamma(\varrho)\Gamma(\beta)}+\frac{\Gamma(\varrho+T+1)}{\Gamma(\varrho+1)\Gamma(T+1)}\right]\\ \le{}&\frac{\epsilon\,\Gamma(\varrho+T+1)}{\Gamma(\varrho+1)\Gamma(T+1)}+\frac{2KM_1\Gamma(\varrho+T+1)}{\Gamma(\varrho+1)\Gamma(T+1)}\,\|\chi-\nu\|+\frac{KMM_1\lambda}{\Gamma(\varrho)\Gamma(\beta)}\,\|\chi-\nu\|.\end{aligned}$$

    Therefore,

    $$\|\nu-\chi\|\le\frac{\epsilon\,\Gamma(\varrho+T+1)}{\Gamma(\varrho+1)\Gamma(T+1)}+\|\nu-\chi\|\left[\frac{2KM_1\Gamma(\varrho+T+1)}{\Gamma(\varrho+1)\Gamma(T+1)}+\frac{KMM_1\lambda}{\Gamma(\varrho)\Gamma(\beta)}\right].$$

    Moreover, $\|\nu-\chi\|\le\delta\epsilon$, where

    $$\delta=\frac{\Gamma(\beta)\Gamma(\varrho+T+1)}{\Gamma(\beta)\Gamma(T+1)\Gamma(\varrho+1)-2KM_1\Gamma(\varrho+T+1)\Gamma(\beta)-KMM_1\lambda\varrho\,\Gamma(T+1)}>0.$$

    Thus, the discrete RL problem (4.1) is Hyers-Ulam stable.

    Lemma 4.4. If $\nu$ satisfies (4.14) under the condition:

    (H6) the function $\theta:[\varrho-3,\varrho+T]_{\mathbb{N}_{\varrho-3}}\to\mathbb{R}$ is increasing and there exists a constant $\gamma>0$ such that

    $$\frac{1}{\Gamma(\varrho)}\sum_{l=0}^{\xi-\varrho}(\xi-\rho(l))^{(\varrho-1)}\theta(l+\varrho-1)\le\gamma\,\theta(\xi+\varrho-1),\qquad \xi\in[0,T]_{\mathbb{N}_0},$$

    then

    $$\left|\nu(\xi)-\frac{1}{\Gamma(\varrho)}\sum_{l=0}^{\xi-\varrho}(\xi-\rho(l))^{(\varrho-1)}\Phi(l+\varrho-1,\nu(l+\varrho-1))-Q(\xi)-f_\nu K(\xi)\right|\le\gamma\,\epsilon\,\theta(\xi+\varrho-1).$$

    Proof. Let ν satisfy (4.14). From Remark 4.2, the solution to (H5) satisfies

    $$\nu(\xi)=\frac{1}{\Gamma(\varrho)}\sum_{l=0}^{\xi-\varrho}(\xi-\rho(l))^{(\varrho-1)}\Phi(l+\varrho-1,\nu(l+\varrho-1))+Q(\xi)+f_\nu K(\xi)+\frac{1}{\Gamma(\varrho)}\sum_{l=0}^{\xi-\varrho}(\xi-\rho(l))^{(\varrho-1)}\mu(l+\varrho-1).$$

    Hence,

    $$\begin{aligned}\left|\nu(\xi)-\frac{1}{\Gamma(\varrho)}\sum_{l=0}^{\xi-\varrho}(\xi-\rho(l))^{(\varrho-1)}\Phi(l+\varrho-1,\nu(l+\varrho-1))-Q(\xi)-f_\nu K(\xi)\right|&=\left|\frac{1}{\Gamma(\varrho)}\sum_{l=0}^{\xi-\varrho}(\xi-\rho(l))^{(\varrho-1)}\mu(l+\varrho-1)\right|\\&\le\frac{1}{\Gamma(\varrho)}\sum_{l=0}^{\xi-\varrho}(\xi-\rho(l))^{(\varrho-1)}|\mu(l+\varrho-1)|\\&\le\frac{\epsilon}{\Gamma(\varrho)}\sum_{l=0}^{\xi-\varrho}(\xi-\rho(l))^{(\varrho-1)}\theta(l+\varrho-1)\le\gamma\,\epsilon\,\theta(\xi+\varrho-1),\end{aligned}$$

    and the desired inequality is derived.

    Remark 4.3. Concerning the restrictiveness of hypotheses (H1)-(H6), and the bounds they impose on the family of discrete systems that satisfy them, one should note that such hypotheses are the usual conditions for proving existence, uniqueness, or stability of solutions. In fact, conditions (H2)-(H6) are the fundamental conditions in the definition of Hyers-Ulam-Rassias stability, while condition (H1) is a standard Lipschitz condition; the remaining constants are computed from the given fractional system. Therefore, these conditions are natural and, in real systems with specified numerical data, their expressions reduce to numerical bounds.

    Theorem 4.2. If the inequality (4.16) and the hypotheses (H1) and (H6) are satisfied, then the discrete RL problem (4.1) is Hyers-Ulam-Rassias stable.

    Proof. From Lemmas 4.4 and 2.3, we obtain that

    $$|\nu(\xi)-\chi(\xi)|\le\delta_2\,\epsilon\,\theta(\xi+\varrho-1),$$

    where

    $$\delta_2=\frac{\gamma\,\Gamma(\varrho+1)\Gamma(\beta)\Gamma(T+1)}{\Gamma(\varrho+1)\Gamma(\beta)\Gamma(T+1)-2KM_1\Gamma(\beta)\Gamma(\varrho+T+1)-KMM_1\lambda\varrho\,\Gamma(T+1)}>0.$$

    Thus, the discrete RL problem (4.1) is Hyers-Ulam-Rassias stable.

    In this section, we consider two examples to illustrate the obtained results.

    Example 5.1. Let

    $$\begin{cases}\Delta^{\varrho}_{\xi}\chi(\xi)=\Phi(\xi+\varrho-1,\chi(\xi+\varrho-1)), & \xi\in\mathbb{N}_{0,4},\\ \Delta\chi(\varrho-3)=1,\quad \chi(\varrho+4)=0.7\,\Delta^{-0.5}\chi(\eta+0.5),\quad \Delta^{2}\chi(\varrho-3)=0,\end{cases}\tag{5.1}$$

    where $\Delta^{\varrho}_{\xi}$ denotes the operator ${}^{c}\Delta^{\varrho}_{\xi}$ or ${}^{RL}\Delta^{\varrho}_{\xi}$. Set $\beta=0.5$, $T=4$, $\lambda=0.7$, $A_1=1$, $A_2=0$, and

    $$\Phi(\xi+1.5,\chi(\xi+1.5))=\frac{137}{10^{5}}\sin\left(\chi(\xi+1.5)\right).$$

    If $\Delta^{\varrho}_{\xi}\chi(\xi)={}^{c}\Delta^{5/2}_{\xi}\chi(\xi)$ and $\eta=\frac{5}{2}$, then we obtain that

    $$K_1=\lambda\sum_{l=\varrho-3}^{\eta}(\eta+\beta-\rho(l))^{(\beta-1)}-\Gamma(\beta)=\frac{\lambda\,\Gamma(\eta+\beta-\varrho+4)}{\beta\,\Gamma(\eta-\varrho+4)}-\Gamma(\beta)=0.9416,$$
    $$M=\left|\sum_{l=\varrho}^{\eta}\sum_{\tau=0}^{l-\varrho}(\eta+\beta-\rho(l))^{(\beta-1)}(l-\rho(\tau))^{(\varrho-1)}\right|=2.3562.\tag{5.2}$$

    Hence, the inequality (3.8) takes the form

    $$\frac{\Gamma(\varrho+T+1)}{\Gamma(\varrho+1)\Gamma(T+1)}\left(1+\frac{\Gamma(\beta)}{K_1}\right)+\frac{M\lambda}{K_1\Gamma(\varrho)}\approx 68.9403<\frac{1}{K}\approx 729.9270,$$

    where

    $$K=\frac{137}{10^{5}}.$$

    From Theorem 3.1, the discrete problem (5.1) has a unique solution.
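    The constants and the contraction bound of this example can be reproduced by direct summation (a sketch reusing ff from Section 2; the printed values match the text up to rounding):

```python
from math import gamma
# assumes ff from the earlier sketch

rho_, beta, T, lam, eta = 2.5, 0.5, 4, 0.7, 2.5
K = 137 / 10 ** 5

S = sum(ff(eta + beta - (rho_ - 3 + k) - 1, beta - 1)
        for k in range(int(round(eta - (rho_ - 3))) + 1))
K1 = lam * S - gamma(beta)                                     # (3.3)

M = abs(sum(ff(eta + beta - l - 1, beta - 1) * ff(l - t - 1, rho_ - 1)
            for l in [rho_ + k for k in range(int(round(eta - rho_)) + 1)]
            for t in range(int(round(l - rho_)) + 1)))          # (3.9)

lhs = (gamma(rho_ + T + 1) / (gamma(rho_ + 1) * gamma(T + 1))
       * (1 + gamma(beta) / K1) + M * lam / (K1 * gamma(rho_)))
print(round(K1, 4), round(M, 4), round(lhs, 2), lhs < 1 / K)
# 0.9416 2.3562 68.94 True
```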

    In the case $\Delta^{\varrho}_{\xi}\chi(\xi)={}^{RL}\Delta^{3}_{\xi}\chi(\xi)$ and $\eta=3$, we obtain

    $$M=3.5449,\qquad M_1=1.8824,\qquad h_\varrho=\frac{\lambda\,\Gamma(\eta+\beta-\varrho+4)}{\Gamma(\beta+1)\,\Gamma(\eta-\varrho+4)}-1=0.5313,\qquad K_1=\frac{\lambda\,\Gamma(\eta+\beta-\varrho+4)}{\beta\,\Gamma(\eta-\varrho+4)}-\Gamma(\beta)=0.9416.$$

    Also,

    $$\frac{\Gamma(\beta)\Gamma(T+1)\Gamma(\varrho+1)}{2M_1\Gamma(\varrho+T+1)\Gamma(\beta)+MM_1\lambda\varrho\,\Gamma(T+1)}\approx 0.0075.\tag{5.3}$$

    If $K=0.0014<0.0075$ and

    $$\left|{}^{RL}\Delta^{3}_{\xi}\nu(\xi)-\Phi(\xi+2,\nu(\xi+2))\right|\le\epsilon,\qquad \xi\in[0,4]_{\mathbb{N}_0},\tag{5.4}$$

    holds, then, by Theorem 4.1, the discrete RL problem (5.1) is Hyers-Ulam stable.
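    The Riemann-Liouville constants of this example can be checked the same way. With $\varrho=3$, the $(\varrho-3)$ factors in (4.4) and in $K(\xi)$ vanish, so $h_\varrho$ and $M_1$ collapse to a single term each (a sketch, not the paper's code):

```python
from math import gamma
# assumes ff from the earlier sketch

rho_, beta, T, lam, eta = 3.0, 0.5, 4, 0.7, 3.0

# M from (3.9): only l = 3, tau = 0 contribute here
M = abs(ff(eta + beta - 3 - 1, beta - 1) * ff(3 - 0 - 1, rho_ - 1))

# h_rho from (4.4) with rho_ = 3 reduces to
# -(rho+T)^(0) + (lam / Gamma(beta)) * sum_l (eta + beta - l - 1)^(beta-1)
S = sum(ff(eta + beta - l - 1, beta - 1) for l in range(0, int(eta) + 1))
h = -1 + lam * S / gamma(beta)

M1 = max(1.0, 1 / h)   # K(xi) reduces to xi^(0) / h_rho when rho_ = 3

bound = (gamma(beta) * gamma(T + 1) * gamma(rho_ + 1)
         / (2 * M1 * gamma(rho_ + T + 1) * gamma(beta)
            + M * M1 * lam * rho_ * gamma(T + 1)))
print(round(M, 4), round(M1, 4), round(bound, 4))   # 3.5449 1.8824 0.0075
```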

    Example 5.2. Let

    $$\begin{cases}{}^{c}\Delta^{2.4}_{\xi}\chi(\xi)=\frac{1}{10^{7}}\,\chi^{4}(\xi+1.4), & \xi\in\mathbb{N}_{0,2},\\ \Delta\chi(-0.6)=2,\quad \chi(3.4)=0.8\,\Delta^{-1/3}\chi\!\left(\tfrac{1}{3}+2.4\right),\quad \Delta^{2}\chi(-0.6)=0.\end{cases}\tag{5.5}$$

    After some calculations, we find that

    $$M=3.277,\qquad K_1=\lambda\sum_{l=\varrho-3}^{\eta}(\eta+\beta-\rho(l))^{(\beta-1)}-\Gamma(\beta)=1.0253,\qquad K_2=\frac{A_1\Gamma(\beta)}{K_1}(\varrho+T)-\frac{\lambda A_1}{K_1}\sum_{l=\varrho-3}^{\eta}(\eta+\beta-\rho(l))^{(\beta-1)}\,l^{(1)}=16.2963.$$

    We define the following Banach space:

    $$C(\mathbb{N}_{\varrho-3,\varrho+T},\mathbb{R})=\left\{\chi:[\varrho-3,\varrho+T]_{\mathbb{N}_{\varrho-3}}\to\mathbb{R},\ \|\chi\|\le 2|K_2|=32.5927\right\}.$$

    Note that

    $$\frac{|K_2|-\left|A_1-A_2(\varrho-3)\right|(\varrho+T)^{(1)}-\left|\frac{A_2}{2}\right|(\varrho+T)^{(2)}}{\dfrac{\Gamma(\varrho+T+1)}{\Gamma(\varrho+1)\Gamma(T+1)}\left(1+\dfrac{\Gamma(\beta)}{|K_1|}\right)+\dfrac{M|\lambda|}{\Gamma(\varrho)|K_1|}}=0.3262.$$

    It is clear that $|\Phi(t,\chi)|\le 0.0586\le 0.3262$ whenever $\chi\in[-32.5927,\,32.5927]$. Therefore, by Theorem 3.2, the discrete FBVP (5.5) has a solution.

    We proved the existence and uniqueness of solutions to discrete fractional boundary value problems (FBVPs) involving fractional difference operators via the Brouwer fixed point theorem and the Banach contraction principle. Different versions of stability criteria were obtained for a discrete FBVP involving Riemann-Liouville difference operators, and the results were illustrated by suitable examples. The approach of this paper is new and can serve as a starting point for discussing various real-world models in the context of discrete behavior structures. In particular, our results can contribute to the development of discrete fractional boundary value problems describing the discrete dynamics of physical applications. In future works, we plan to extend our approach to other types of discrete differential inclusions and fully-hybrid discrete fractional differential equations.

    The third and fourth authors would like to thank Azarbaijan Shahid Madani University. The fifth author was supported by the Portuguese Foundation for Science and Technology (FCT) and CIDMA through project UIDB/04106/2020. This research received funding support from the NSRF via the Program Management Unit for Human Resources & Institutional Development, Research and Innovation (Grant number B05F650018).

    The authors declare no conflict of interest.



  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
