Research article

Video-based person re-identification with complementary local and global features using a graph transformer

  • Received: 08 March 2024 Revised: 16 June 2024 Accepted: 05 July 2024 Published: 23 July 2024
  • In recent years, significant progress has been made in video-based person re-identification (Re-ID). The key challenge in video person Re-ID lies in constructing discriminative and robust person feature representations. Methods based on local regions use spatial and temporal attention to extract representative local features, but prior approaches often overlook the correlations between local regions. To exploit the relationships among different local regions, we propose a novel video person Re-ID representation learning approach based on a graph transformer, which facilitates contextual interactions between related region features. Specifically, we construct a local relation graph whose nodes represent local regions and model the intrinsic relationships between them. The graph employs a transformer architecture for feature propagation, iteratively refining each region feature with information from adjacent nodes to obtain partial feature representations. To learn compact and discriminative representations, we further propose a global feature learning branch based on a vision transformer, which captures the relationships between different frames in a sequence. In addition, we design a dual-branch interaction network based on multi-head fusion attention to integrate the frame-level features extracted by the local and global branches. Finally, the concatenated global and local features, after interaction, are used for testing. We evaluated the proposed method on three datasets: iLIDS-VID, MARS, and DukeMTMC-VideoReID. Experimental results demonstrate competitive performance, validating the effectiveness of the proposed approach.

    Citation: Hai Lu, Enbo Luo, Yong Feng, Yifan Wang. Video-based person re-identification with complementary local and global features using a graph transformer[J]. Mathematical Biosciences and Engineering, 2024, 21(7): 6694-6709. doi: 10.3934/mbe.2024293




    In the study of perturbations of the three degree of freedom Kepler Hamiltonian, pulling back the regularized Hamiltonian by the Kustaanheimo-Stiefel (KS) map gives a perturbation of the four degree of freedom harmonic oscillator Hamiltonian when restricted to the zero level set of the KS symmetry. We use the formulation of the KS transformation in [6], which allows us to reduce the KS symmetry using invariant theory for the first time. As an illustration, we apply this procedure to the regularized Stark Hamiltonian, which is normalized after applying the KS transformation. We do not expect this Hamiltonian to be completely integrable (see Lagrange [4] and also [5]). Our treatment follows that of [3] and gives the full details of obtaining the second order normal form [1]. We use the notation of [2] and note that our procedure of regularization, pull back by the KS map, normalization, and reduction may be used to study three degree of freedom perturbed Keplerian systems.

    On $T_0\mathbb{R}^3 = (\mathbb{R}^3 \setminus \{0\}) \times \mathbb{R}^3$ with coordinates $(x,y)$ and standard symplectic form $\omega_3 = \sum_{i=1}^3 dx_i \wedge dy_i$, consider the Stark Hamiltonian

    $$K(x,y) = \tfrac{1}{2}\langle y,y \rangle - \frac{1}{|x|} + f x_3. \tag{2.1}$$

    Here $\langle\,,\,\rangle$ is the Euclidean inner product on $\mathbb{R}^3$ with associated norm $|\cdot|$. On the negative energy level $-\tfrac{1}{2}k^2$ with $k > 0$, rescaling time by $dt \mapsto \frac{|x|}{k}\,ds$, we obtain

    $$0 = \frac{1}{2k}\left( |x|\,\langle y,y \rangle + k^2 |x| \right) - \frac{1}{k} + f x_3 \frac{|x|}{k}. \tag{2.2}$$

    In other words, $(x,y)$ lies in the $\frac{1}{k}$ level set of

    $$\widehat{K}(x,y) = \frac{1}{2k}\left( |x|\,\langle y,y \rangle + k^2 |x| \right) + f x_3 \frac{|x|}{k}. \tag{2.3}$$

    We assume that $f$ is small, namely $f = \varepsilon\beta$. After the symplectic coordinate change $(x,y) \mapsto (\tfrac{1}{k}x,\, ky)$, the Hamiltonian $\widehat{K}$ becomes the preregularized Hamiltonian

    $$K(x,y) = \tfrac{1}{2}|x|\left( \langle y,y \rangle + 1 \right) + \varepsilon\beta\, x_3 |x| \tag{2.4}$$

    on the level set $K^{-1}(1)$.
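As a sanity check, the algebra behind (2.2) and (2.3) can be verified symbolically. The following sympy sketch (our own notation: $r$ stands for $|x|$ and $yy$ for $\langle y,y\rangle$, both treated as scalar symbols) confirms that the left-hand side of (2.2) is exactly $\widehat{K} - \tfrac{1}{k}$:

```python
import sympy as sp

r, k = sp.symbols('r k', positive=True)   # r stands for |x|
yy, f, x3 = sp.symbols('yy f x3')         # yy stands for <y, y>

# K(x, y) = -k^2/2 on the chosen energy level, i.e. K + k^2/2 = 0:
level = sp.Rational(1, 2)*yy - 1/r + f*x3 + k**2/2
# Multiplying by r/k (the time rescaling dt -> (|x|/k) ds) must give (2.2),
# which is K-hat of (2.3) minus its value 1/k:
Khat = (r*yy + k**2*r)/(2*k) + f*x3*r/k
assert sp.simplify(sp.expand(level*r/k) - (Khat - 1/k)) == 0
```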

    Let $T_0\mathbb{R}^4 = (\mathbb{R}^4 \setminus \{0\}) \times \mathbb{R}^4$ have coordinates $(q,p)$ and symplectic form $\omega_4 = \sum_{i=1}^4 dq_i \wedge dp_i$. Pull back $K$ by the Kustaanheimo-Stiefel mapping

    $$KS: T_0\mathbb{R}^4 \to T_0\mathbb{R}^3 : (q,p) \mapsto (x,y),$$

    where

    $$\begin{aligned}
    x_1 &= 2(q_1q_3 + q_2q_4) = U_2 - K_1 \\
    x_2 &= 2(q_1q_4 - q_2q_3) = U_3 - K_2 \\
    x_3 &= q_1^2 + q_2^2 - q_3^2 - q_4^2 = U_4 - K_3 \\
    y_1 &= \langle q,q \rangle^{-1}(q_1p_3 + q_2p_4 + q_3p_1 + q_4p_2) = (H_2 + V_1)^{-1} V_2 \\
    y_2 &= \langle q,q \rangle^{-1}(q_1p_4 - q_2p_3 - q_3p_2 + q_4p_1) = (H_2 + V_1)^{-1} V_3 \\
    y_3 &= \langle q,q \rangle^{-1}(q_1p_1 + q_2p_2 - q_3p_3 - q_4p_4) = (H_2 + V_1)^{-1} V_4
    \end{aligned}$$

    and

    $$\begin{aligned}
    H_2 &= \tfrac{1}{2}(p_1^2 + p_2^2 + p_3^2 + p_4^2 + q_1^2 + q_2^2 + q_3^2 + q_4^2) \\
    \Xi &= q_1p_2 - q_2p_1 + q_3p_4 - q_4p_3,
    \end{aligned}$$

    to get the regularized Stark Hamiltonian

    $$H = H_2 + \varepsilon\beta\left( U_4V_1 + H_2U_4 - K_3V_1 - H_2K_3 \right) \tag{2.5}$$

    on $\Xi^{-1}(0)$, since $|x| = \langle q,q \rangle = H_2 + V_1$. Here

    $$\begin{aligned}
    K_1 &= -(q_1q_3 + q_2q_4 + p_1p_3 + p_2p_4) \\
    K_2 &= -(q_1q_4 - q_2q_3 + p_1p_4 - p_2p_3) \\
    K_3 &= \tfrac{1}{2}(q_3^2 + q_4^2 + p_3^2 + p_4^2 - q_1^2 - q_2^2 - p_1^2 - p_2^2) \\
    L_1 &= q_4p_1 - q_3p_2 + q_2p_3 - q_1p_4 \\
    L_2 &= q_1p_3 + q_2p_4 - q_3p_1 - q_4p_2 \\
    L_3 &= q_3p_4 - q_4p_3 + q_2p_1 - q_1p_2 \\
    U_1 &= -(q_1p_1 + q_2p_2 + q_3p_3 + q_4p_4) \\
    U_2 &= q_1q_3 + q_2q_4 - p_1p_3 - p_2p_4 \\
    U_3 &= q_1q_4 - q_2q_3 + p_2p_3 - p_1p_4 \\
    U_4 &= \tfrac{1}{2}(q_1^2 + q_2^2 - q_3^2 - q_4^2 + p_3^2 + p_4^2 - p_1^2 - p_2^2) \\
    V_1 &= \tfrac{1}{2}(q_1^2 + q_2^2 + q_3^2 + q_4^2 - p_1^2 - p_2^2 - p_3^2 - p_4^2) \\
    V_2 &= q_1p_3 + q_2p_4 + q_3p_1 + q_4p_2 \\
    V_3 &= q_1p_4 - q_2p_3 - q_3p_2 + q_4p_1 \\
    V_4 &= q_1p_1 + q_2p_2 - q_3p_3 - q_4p_4
    \end{aligned}$$

    generate the algebra of polynomials invariant under the $S^1$ action $\varphi^{\Xi}_s$ given by the flow of $X_{\Xi}$ on $(T\mathbb{R}^4 = \mathbb{R}^8, \omega_4)$. The Hamiltonian $H$ (2.5) is invariant under this $S^1$ action and thus is a smooth function on the orbit space $\Xi^{-1}(0)/S^1 \subseteq \mathbb{R}^{16}$ with coordinates $(K, L, H_2, \Xi; U, V)$.
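The identities used here are polynomial and can be checked mechanically. Below is a minimal sympy sketch (variable names are ours, not from the paper) verifying that a sample of the generators Poisson-commutes with $\Xi$, and that the KS relations $x_1 = U_2 - K_1$, $x_3 = U_4 - K_3$, and $\langle q,q \rangle = H_2 + V_1$ hold:

```python
import sympy as sp

q = sp.symbols('q1:5'); p = sp.symbols('p1:5')
q1, q2, q3, q4 = q; p1, p2, p3, p4 = p
half = sp.Rational(1, 2)

def pb(f, g):  # canonical Poisson bracket on T*R^4
    return sp.expand(sum(sp.diff(f, qi)*sp.diff(g, pi) - sp.diff(f, pi)*sp.diff(g, qi)
                         for qi, pi in zip(q, p)))

Xi = q1*p2 - q2*p1 + q3*p4 - q4*p3
H2 = half*sum(v**2 for v in q + p)
K1 = -(q1*q3 + q2*q4 + p1*p3 + p2*p4)
K3 = half*(q3**2 + q4**2 + p3**2 + p4**2 - q1**2 - q2**2 - p1**2 - p2**2)
U2 = q1*q3 + q2*q4 - p1*p3 - p2*p4
U4 = half*(q1**2 + q2**2 - q3**2 - q4**2 + p3**2 + p4**2 - p1**2 - p2**2)
V1 = half*(q1**2 + q2**2 + q3**2 + q4**2 - p1**2 - p2**2 - p3**2 - p4**2)

for f in (H2, K1, K3, U2, U4, V1):
    assert pb(f, Xi) == 0                                    # invariant under the S^1 action
assert sp.expand(U2 - K1) == sp.expand(2*(q1*q3 + q2*q4))    # x1
assert sp.expand(U4 - K3) == q1**2 + q2**2 - q3**2 - q4**2   # x3
assert sp.expand(H2 + V1) == sum(v**2 for v in q)            # <q, q>
```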

    The harmonic oscillator vector field $X_{H_2}$ on $(T\mathbb{R}^4, \omega_4)$ induces the vector field $Y_{H_2} = \sum_{i=1}^4 \left( 2V_i \frac{\partial}{\partial U_i} - 2U_i \frac{\partial}{\partial V_i} \right)$ on the orbit space $\mathbb{R}^8/S^1 \subseteq \mathbb{R}^{16}$, which leaves $\Xi^{-1}(0)/S^1$ invariant.

    We now compute the first order normal form of the Hamiltonian $H$ (2.5) on the reduced space $\Xi^{-1}(0)/S^1 \subseteq \mathbb{R}^8/S^1$.

    The average of $H_2U_4 - K_3V_1$ over the flow

    $$\varphi^{Y_{H_2}}_t(K, L, H_2, \Xi; U, V) = (K, L, H_2, \Xi;\; U\cos 2t + V\sin 2t,\; -U\sin 2t + V\cos 2t)$$

    of $Y_{H_2}$ is

    $$\overline{H_2U_4 - K_3V_1} = \frac{1}{\pi}\int_0^{\pi} (\varphi^{Y_{H_2}}_t)^*(H_2U_4 - K_3V_1)\, dt = \frac{1}{\pi}\int_0^{\pi} (U_4\cos 2t + V_4\sin 2t)\, dt\; H_2 - \frac{1}{\pi}\int_0^{\pi} (-U_1\sin 2t + V_1\cos 2t)\, dt\; K_3 = 0.$$

    The second equality above follows because $L_{Y_{H_2}}H_2 = 0 = L_{Y_{H_2}}K_3$, and the third because $\overline{\cos 2t} = \overline{\sin 2t} = 0$. The average of $U_4V_1$ over the flow of $Y_{H_2}$ on $\Xi^{-1}(0)/S^1$ is

    $$\overline{U_4V_1} = \frac{1}{\pi}\int_0^{\pi} (\varphi^{Y_{H_2}}_t)^*(U_4V_1)\, dt = -\tfrac{1}{2}U_1U_4\,\overline{\sin 4t} + U_4V_1\,\overline{\cos^2 2t} - U_1V_4\,\overline{\sin^2 2t} + \tfrac{1}{2}V_1V_4\,\overline{\sin 4t} = \tfrac{1}{2}(U_4V_1 - U_1V_4) = -\tfrac{1}{2}H_2K_3,$$

    since $\overline{\cos^2 2t} = \overline{\sin^2 2t} = \tfrac{1}{2}$ and $\overline{\sin 4t} = 0$. The last equality above follows from the explicit description of the orbit space $\mathbb{R}^8/S^1$ as the semialgebraic variety in $\mathbb{R}^{16}$ with coordinates $(K, L, U, V; H_2, \Xi)$ given by

    $$\begin{aligned}
    \langle U,U \rangle &= U_1^2 + U_2^2 + U_3^2 + U_4^2 = H_2^2 - \Xi^2 \ge 0, \quad H_2 \ge 0 \\
    \langle V,V \rangle &= V_1^2 + V_2^2 + V_3^2 + V_4^2 = H_2^2 - \Xi^2 \ge 0 \\
    \langle U,V \rangle &= U_1V_1 + U_2V_2 + U_3V_3 + U_4V_4 = 0 \\
    U_2V_1 - U_1V_2 &= L_1\Xi - K_1H_2 \\
    U_3V_1 - U_1V_3 &= L_2\Xi - K_2H_2 \\
    U_4V_1 - U_1V_4 &= L_3\Xi - K_3H_2 \\
    U_4V_3 - U_3V_4 &= K_1\Xi - L_1H_2 \\
    U_2V_4 - U_4V_2 &= K_2\Xi - L_2H_2 \\
    U_3V_2 - U_2V_3 &= K_3\Xi - L_3H_2.
    \end{aligned} \tag{3.1}$$

    So the average of $U_4V_1 + H_2U_4 - K_3V_1 - H_2K_3$ over the flow of $Y_{H_2}$ is $-\tfrac{3}{2}H_2K_3$ on $\Xi^{-1}(0)/S^1$. Thus the first order normal form of the regularized Stark Hamiltonian $H$ (2.5) on $\Xi^{-1}(0)/S^1$ is

    $$H^{(1)}_{\mathrm{nf}} = H_2 - \tfrac{3}{2}\beta\varepsilon H_2K_3. \tag{3.2}$$
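The averaging computation can be reproduced symbolically. A short sympy sketch (our notation; the $U_i$, $V_i$ are treated as independent scalars, which suffices because the flow is linear in them, and the substitution in the last step is the relation $U_4V_1 - U_1V_4 = -H_2K_3$ from (3.1) with $\Xi = 0$):

```python
import sympy as sp

t = sp.symbols('t')
U1, U4, V1, V4, H2, K3 = sp.symbols('U1 U4 V1 V4 H2 K3')
c, s = sp.cos(2*t), sp.sin(2*t)
U4t, V1t = U4*c + V4*s, -U1*s + V1*c        # transported U4 and V1 along the flow

avg = lambda g: sp.expand(sp.integrate(g, (t, 0, sp.pi)) / sp.pi)
mean = avg(U4t*V1t + H2*U4t - K3*V1t - H2*K3)
assert sp.expand(mean - (sp.Rational(1, 2)*(U4*V1 - U1*V4) - H2*K3)) == 0
# on Xi^{-1}(0)/S^1 we may substitute U1*V4 = U4*V1 + H2*K3, giving -3/2*H2*K3:
assert sp.expand(mean.subs(U1*V4, U4*V1 + H2*K3) + sp.Rational(3, 2)*H2*K3) == 0
```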

    In order to compute the second order normal form of the Hamiltonian $H$ on $\Xi^{-1}(0)/S^1$, we need to find a function $F$ on $\mathbb{R}^{16}$ such that changing coordinates by the time $\varepsilon$ value of the flow of the Hamiltonian vector field $Y_F$ brings the regularized Hamiltonian $H$ (2.5) into first order normal form. Choose $F$ so that

    $$L_{Y_F}H_2 = \beta\left( -U_4V_1 - \tfrac{1}{2}H_2K_3 - H_2U_4 + K_3V_1 \right). \tag{4.1}$$

    The following calculation shows that this choice does the job:

    $$\begin{aligned}
    (\varphi^{Y_F}_{\varepsilon})^*H &= H + \varepsilon L_{Y_F}H + \tfrac{1}{2}\varepsilon^2 L^2_{Y_F}H + O(\varepsilon^3) \\
    &= H_2 + \varepsilon\beta(U_4V_1 + H_2U_4 - K_3V_1 - H_2K_3) + \varepsilon L_{Y_F}H_2 \\
    &\quad + \varepsilon^2\beta L_{Y_F}(U_4V_1 + H_2U_4 - K_3V_1 - H_2K_3) + \tfrac{1}{2}\varepsilon^2 L^2_{Y_F}H_2 + O(\varepsilon^3) \\
    &= H_2 + \varepsilon\beta(U_4V_1 + H_2U_4 - K_3V_1 - H_2K_3) + \varepsilon\beta(-U_4V_1 - \tfrac{1}{2}H_2K_3 - H_2U_4 + K_3V_1) \\
    &\quad + \varepsilon^2\left[ L_{Y_F}\left( -L_{Y_F}H_2 - \tfrac{3}{2}\beta H_2K_3 \right) + \tfrac{1}{2}L^2_{Y_F}H_2 \right] + O(\varepsilon^3) \\
    &= H_2 - \tfrac{3}{2}\varepsilon\beta H_2K_3 - \tfrac{1}{2}\varepsilon^2\left( L^2_{Y_F}H_2 + 3\beta L_{Y_F}(H_2K_3) \right) + O(\varepsilon^3).
    \end{aligned} \tag{4.2}$$

    To determine the function $F$, we solve equation (4.1). Write $F = F_1 + F_2$, where $L_{Y_{H_2}}F_1 = \beta(U_4V_1 + \tfrac{1}{2}H_2K_3)$ and $L_{Y_{H_2}}F_2 = \beta(H_2U_4 - K_3V_1)$. Then

    $$L_{Y_F}H_2 = -L_{Y_{H_2}}F = -L_{Y_{H_2}}F_1 - L_{Y_{H_2}}F_2 = -\beta(U_4V_1 + \tfrac{1}{2}H_2K_3) - \beta(H_2U_4 - K_3V_1). \tag{4.3}$$

    Since $L_{Y_{H_2}}V_4 = -2U_4$ and $L_{Y_{H_2}}U_1 = 2V_1$, it follows that

    $$F_2 = -\tfrac{\beta}{2}(H_2V_4 + K_3U_1). \tag{4.4a}$$

    Now

    $$F_1 = \frac{\beta}{\pi}\int_0^{\pi} t\,(\varphi^{Y_{H_2}}_t)^*\left( U_4V_1 + \tfrac{1}{2}H_2K_3 \right) dt = \frac{\beta}{\pi}\int_0^{\pi} t\,(\varphi^{Y_{H_2}}_t)^*(U_4V_1)\, dt + \frac{\pi\beta}{4}H_2K_3,$$

    see [1], and

    $$\begin{aligned}
    \frac{\beta}{\pi}\int_0^{\pi} t\,(\varphi^{Y_{H_2}}_t)^*(U_4V_1)\, dt &= -\tfrac{\beta}{2}(U_1U_4)\,\frac{1}{\pi}\int_0^{\pi} t\sin 4t\, dt + \beta(U_4V_1)\,\frac{1}{\pi}\int_0^{\pi} t\cos^2 2t\, dt \\
    &\quad - \beta(U_1V_4)\,\frac{1}{\pi}\int_0^{\pi} t\sin^2 2t\, dt + \tfrac{\beta}{2}(V_1V_4)\,\frac{1}{\pi}\int_0^{\pi} t\sin 4t\, dt \\
    &= \tfrac{\beta}{8}(U_1U_4 - V_1V_4) + \tfrac{\pi\beta}{4}(U_4V_1 - U_1V_4),
    \end{aligned}$$

    since $\frac{1}{\pi}\int_0^{\pi} t\sin 4t\, dt = -\tfrac{1}{4}$ and $\frac{1}{\pi}\int_0^{\pi} t\sin^2 2t\, dt = \frac{1}{\pi}\int_0^{\pi} t\cos^2 2t\, dt = \tfrac{\pi}{4}$. Thus

    $$F_1 = \tfrac{\beta}{8}(U_1U_4 - V_1V_4) + \tfrac{\pi\beta}{4}(U_4V_1 - U_1V_4 + H_2K_3) = \tfrac{\beta}{8}(U_1U_4 - V_1V_4)$$

    on $\Xi^{-1}(0)/S^1$, see (3.1). Hence on $\Xi^{-1}(0)/S^1$

    $$F = F_1 + F_2 = \tfrac{\beta}{8}(U_1U_4 - V_1V_4) - \tfrac{\beta}{2}(H_2V_4 + K_3U_1). \tag{4.5}$$
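The three weighted integrals used in computing $F_1$ are elementary; here is a three-assertion sympy check (assuming nothing beyond the integrals themselves):

```python
import sympy as sp

t = sp.symbols('t')
I = lambda g: sp.integrate(t*g, (t, 0, sp.pi)) / sp.pi
assert I(sp.sin(4*t)) == -sp.Rational(1, 4)
assert I(sp.sin(2*t)**2) == sp.pi/4
assert I(sp.cos(2*t)**2) == sp.pi/4
```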

    We now calculate the average over the flow of $Y_{H_2}$ of

    $$-\tfrac{3}{2}\beta L_{Y_F}(H_2K_3) - \tfrac{1}{2}L^2_{Y_F}H_2, \tag{4.6}$$

    which is the $\varepsilon^2$ term in the transformed Hamiltonian $(\varphi^{Y_F}_{\varepsilon})^*H$, see (4.2). This determines the second order normal form of $H$ on $\Xi^{-1}(0)/S^1$. We begin with the term

    $$-\tfrac{3}{2}\beta L_{Y_F}(H_2K_3) = -\tfrac{3}{2}\beta\left[ K_3(L_{Y_F}H_2) - H_2(L_{Y_{K_3}}F) \right].$$

    The average of

    $$-\tfrac{3}{2}\beta K_3(L_{Y_F}H_2) = \tfrac{3}{2}\beta^2 K_3\left( U_4V_1 + \tfrac{1}{2}H_2K_3 + H_2U_4 - K_3V_1 \right)$$

    vanishes on $\Xi^{-1}(0)/S^1$. The term

    $$\begin{aligned}
    \tfrac{3}{2}\beta H_2(L_{Y_{K_3}}F) &= \tfrac{3}{2}\beta^2 H_2\, L_{Y_{K_3}}\left( \tfrac{1}{8}(U_1U_4 - V_1V_4) - \tfrac{1}{2}(H_2V_4 + K_3U_1) \right) \\
    &= \tfrac{3}{2}\beta^2 H_2\left[ \left( -2L_2\tfrac{\partial}{\partial K_1} + 2L_1\tfrac{\partial}{\partial K_2} - 2K_2\tfrac{\partial}{\partial L_1} + 2K_1\tfrac{\partial}{\partial L_2} - 2U_4\tfrac{\partial}{\partial U_1} + 2U_1\tfrac{\partial}{\partial U_4} - 2V_4\tfrac{\partial}{\partial V_1} + 2V_1\tfrac{\partial}{\partial V_4} \right)\left( \tfrac{1}{8}(U_1U_4 - V_1V_4) - \tfrac{1}{2}(H_2V_4 + K_3U_1) \right) \right], \text{ see [2, table 1]}, \\
    &= \tfrac{3}{2}\beta^2 H_2\left[ \tfrac{1}{4}(-U_4^2 + U_1^2 + V_4^2 - V_1^2) - H_2V_1 + K_3U_4 \right].
    \end{aligned}$$

    Next we calculate $\tfrac{3}{2}\beta\,\overline{H_2(L_{Y_{K_3}}F)}$. Since $\overline{H_2V_1} = 0 = \overline{K_3U_4}$, we need only calculate the averages of $U_1^2$, $U_4^2$, $V_1^2$, and $V_4^2$. We get $\overline{U_1^2} = \tfrac{1}{2}(U_1^2 + V_1^2) = \overline{V_1^2}$ and $\overline{U_4^2} = \tfrac{1}{2}(U_4^2 + V_4^2) = \overline{V_4^2}$. Thus $\tfrac{3}{2}\beta\,\overline{H_2(L_{Y_{K_3}}F)} = 0$. So the average $-\tfrac{3}{2}\beta\,\overline{L_{Y_F}(H_2K_3)}$ of the first term in expression (4.6) vanishes on $\Xi^{-1}(0)/S^1$.

    Next we calculate the average of the term $L^2_{Y_F}H_2$ in expression (4.6) on $\Xi^{-1}(0)/S^1$. We have

    $$\begin{aligned}
    L^2_{Y_F}H_2 &= L_{Y_F}(L_{Y_F}H_2) = -\beta L_{Y_F}\left( U_4V_1 + \tfrac{1}{2}H_2K_3 + H_2U_4 - K_3V_1 \right), \quad \text{using (4.3)}, \\
    &= \beta\Big[ \underbrace{(L_{Y_{U_4}}F)V_1}_{\mathrm{I}} + \underbrace{U_4(L_{Y_{V_1}}F)}_{\mathrm{II}} - \tfrac{1}{2}\underbrace{(L_{Y_F}H_2)K_3}_{\mathrm{III}} + \tfrac{1}{2}\underbrace{H_2(L_{Y_{K_3}}F)}_{\mathrm{IV}} - \underbrace{(L_{Y_F}H_2)U_4}_{\mathrm{V}} + \underbrace{H_2(L_{Y_{U_4}}F)}_{\mathrm{VI}} - \underbrace{(L_{Y_{K_3}}F)V_1}_{\mathrm{VII}} - \underbrace{K_3(L_{Y_{V_1}}F)}_{\mathrm{VIII}} \Big].
    \end{aligned}$$

    We begin by finding

    $$\begin{aligned}
    L_{Y_{H_2}}F &= \beta\left[ \sum_{i=1}^4\left( 2V_i\tfrac{\partial}{\partial U_i} - 2U_i\tfrac{\partial}{\partial V_i} \right)\left( \tfrac{1}{8}(U_1U_4 - V_1V_4) - \tfrac{1}{2}(H_2V_4 + K_3U_1) \right) \right] \\
    &= \beta\left[ \tfrac{1}{2}(V_1U_4 + U_1V_4) + H_2U_4 - K_3V_1 \right]; \\
    L_{Y_{K_3}}F &= \beta\left[ \left( -2L_2\tfrac{\partial}{\partial K_1} + 2L_1\tfrac{\partial}{\partial K_2} - 2K_2\tfrac{\partial}{\partial L_1} + 2K_1\tfrac{\partial}{\partial L_2} - 2U_4\tfrac{\partial}{\partial U_1} + 2U_1\tfrac{\partial}{\partial U_4} - 2V_4\tfrac{\partial}{\partial V_1} + 2V_1\tfrac{\partial}{\partial V_4} \right)\left( \tfrac{1}{8}(U_1U_4 - V_1V_4) - \tfrac{1}{2}(H_2V_4 + K_3U_1) \right) \right] \\
    &= \beta\left[ \tfrac{1}{4}(-U_4^2 + U_1^2 + V_4^2 - V_1^2) - H_2V_1 + K_3U_4 \right]; \\
    L_{Y_{U_4}}F &= \beta\left[ \left( -2U_1\tfrac{\partial}{\partial K_3} - 2U_3\tfrac{\partial}{\partial L_1} + 2U_2\tfrac{\partial}{\partial L_2} - 2V_4\tfrac{\partial}{\partial H_2} - 2K_3\tfrac{\partial}{\partial U_1} + 2L_2\tfrac{\partial}{\partial U_2} - 2L_1\tfrac{\partial}{\partial U_3} - 2H_2\tfrac{\partial}{\partial V_4} \right)\left( \tfrac{1}{8}(U_1U_4 - V_1V_4) - \tfrac{1}{2}(H_2V_4 + K_3U_1) \right) \right] \\
    &= \beta\left[ V_4^2 + K_3^2 - \tfrac{1}{4}(K_3U_4 - H_2V_1) + H_2^2 + U_1^2 \right]; \\
    L_{Y_{V_1}}F &= \beta\left[ \left( 2V_2\tfrac{\partial}{\partial K_1} + 2V_3\tfrac{\partial}{\partial K_2} + 2V_4\tfrac{\partial}{\partial K_3} + 2U_1\tfrac{\partial}{\partial H_2} + 2H_2\tfrac{\partial}{\partial U_1} + 2K_1\tfrac{\partial}{\partial V_2} + 2K_2\tfrac{\partial}{\partial V_3} + 2K_3\tfrac{\partial}{\partial V_4} \right)\left( \tfrac{1}{8}(U_1U_4 - V_1V_4) - \tfrac{1}{2}(H_2V_4 + K_3U_1) \right) \right] \\
    &= \beta\left[ -2(U_1V_4 + H_2K_3) + \tfrac{1}{4}(H_2U_4 - K_3V_1) \right].
    \end{aligned}$$
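These four Lie derivatives reduce to Poisson brackets $L_{Y_g}F = \{F, g\}$ of quadratic polynomials on $T^*\mathbb{R}^4$ and so can be verified mechanically. A self-contained sympy sketch (all names are ours):

```python
import sympy as sp

q = sp.symbols('q1:5'); p = sp.symbols('p1:5')
q1, q2, q3, q4 = q; p1, p2, p3, p4 = p
half = sp.Rational(1, 2)
beta = sp.symbols('beta')

def pb(f, g):  # canonical Poisson bracket on T*R^4; L_{Y_g} f = {f, g}
    return sp.expand(sum(sp.diff(f, qi)*sp.diff(g, pi) - sp.diff(f, pi)*sp.diff(g, qi)
                         for qi, pi in zip(q, p)))

H2 = half*sum(v**2 for v in q + p)
K3 = half*(q3**2 + q4**2 + p3**2 + p4**2 - q1**2 - q2**2 - p1**2 - p2**2)
U1 = -(q1*p1 + q2*p2 + q3*p3 + q4*p4)
U4 = half*(q1**2 + q2**2 - q3**2 - q4**2 + p3**2 + p4**2 - p1**2 - p2**2)
V1 = half*(q1**2 + q2**2 + q3**2 + q4**2 - p1**2 - p2**2 - p3**2 - p4**2)
V4 = q1*p1 + q2*p2 - q3*p3 - q4*p4

F = beta/8*(U1*U4 - V1*V4) - beta/2*(H2*V4 + K3*U1)   # (4.5)

checks = [
    (pb(F, H2), beta*(half*(U4*V1 + U1*V4) + H2*U4 - K3*V1)),
    (pb(F, K3), beta*(sp.Rational(1, 4)*(U1**2 - U4**2 + V4**2 - V1**2) - H2*V1 + K3*U4)),
    (pb(F, U4), beta*(V4**2 + K3**2 - sp.Rational(1, 4)*(K3*U4 - H2*V1) + H2**2 + U1**2)),
    (pb(F, V1), beta*(-2*(U1*V4 + H2*K3) + sp.Rational(1, 4)*(H2*U4 - K3*V1))),
]
for lhs, rhs in checks:
    assert sp.expand(lhs - sp.expand(rhs)) == 0
print("L_{Y_{H2}}F, L_{Y_{K3}}F, L_{Y_{U4}}F, L_{Y_{V1}}F all confirmed")
```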

    So the average of term I on $\Xi^{-1}(0)/S^1$ is

    $$\beta\,\overline{(L_{Y_{U_4}}F)V_1} = \beta^2\left( \overline{V_1V_4^2} + \overline{K_3^2V_1} - \tfrac{1}{4}\overline{K_3U_4V_1} + \tfrac{1}{4}\overline{H_2V_1^2} + \overline{H_2^2V_1} + \overline{U_1^2V_1} \right) = \beta^2\left( \tfrac{1}{8}H_2K_3^2 + \tfrac{1}{8}H_2(U_1^2 + V_1^2) \right), \tag{4.7a}$$

    since the averages of $V_1V_4^2$, $K_3^2V_1$, $H_2^2V_1$, and $U_1^2V_1$ are each $0$, $\overline{U_4V_1} = -\tfrac{1}{2}H_2K_3$, and $\overline{V_1^2} = \tfrac{1}{2}(U_1^2 + V_1^2)$.

    Term II is

    $$\beta U_4(L_{Y_{V_1}}F) = \beta^2\left( -2U_1U_4V_4 - 2H_2K_3U_4 + \tfrac{1}{4}H_2U_4^2 - \tfrac{1}{4}K_3U_4V_1 \right).$$

    So

    $$\beta\,\overline{U_4(L_{Y_{V_1}}F)} = \tfrac{1}{4}\beta^2 H_2\overline{U_4^2} - \tfrac{1}{4}\beta^2 K_3\overline{U_4V_1} = \tfrac{1}{8}\beta^2 H_2(U_4^2 + V_4^2) + \tfrac{1}{8}\beta^2 H_2K_3^2. \tag{4.7b}$$

    For term III, we have already shown that

    $$-\tfrac{\beta}{2}\overline{(L_{Y_F}H_2)K_3} = 0, \tag{4.7c}$$

    and for term IV we have already shown that

    $$\tfrac{\beta}{2}\overline{H_2(L_{Y_{K_3}}F)} = 0. \tag{4.7d}$$

    Term V is

    $$-\beta(L_{Y_F}H_2)U_4 = \beta^2\left( \tfrac{1}{2}U_4^2V_1 + \tfrac{1}{2}U_1U_4V_4 + H_2U_4^2 - K_3U_4V_1 \right).$$

    So

    $$-\beta\,\overline{(L_{Y_F}H_2)U_4} = \beta^2\left( \tfrac{1}{2}\overline{U_4^2V_1} + \tfrac{1}{2}\overline{U_1U_4V_4} + \overline{H_2U_4^2} - \overline{K_3U_4V_1} \right) = \tfrac{\beta^2}{2}H_2(U_4^2 + V_4^2) - \tfrac{\beta^2}{2}K_3(U_4V_1 - U_1V_4), \tag{4.7e}$$

    since the averages of $U_4^2V_1$ and $U_1U_4V_4$ vanish, while $\overline{U_4^2} = \tfrac{1}{2}(U_4^2 + V_4^2)$ and $\overline{U_4V_1} = \tfrac{1}{2}(U_4V_1 - U_1V_4)$.

    Term VI is

    $$\beta H_2(L_{Y_{U_4}}F) = \beta^2 H_2\left( V_4^2 + K_3^2 - \tfrac{1}{4}K_3U_4 + \tfrac{1}{4}H_2V_1 + H_2^2 + U_1^2 \right).$$

    So

    $$\beta\,\overline{H_2(L_{Y_{U_4}}F)} = \beta^2 H_2\overline{V_4^2} + \beta^2 H_2K_3^2 + \beta^2 H_2^3 + \beta^2 H_2\overline{U_1^2} = \tfrac{1}{2}\beta^2 H_2(U_4^2 + V_4^2) + \beta^2 H_2K_3^2 + \beta^2 H_2^3 + \tfrac{1}{2}\beta^2 H_2(U_1^2 + V_1^2), \tag{4.7f}$$

    since $\overline{K_3U_4} = 0 = \overline{H_2V_1}$.

    Term VII is

    $$-\beta(L_{Y_{K_3}}F)V_1 = \beta^2\left( \tfrac{1}{4}\left[ U_4^2V_1 - U_1^2V_1 - V_1V_4^2 + V_1^3 \right] + H_2V_1^2 - K_3U_4V_1 \right).$$

    So

    $$-\beta\,\overline{(L_{Y_{K_3}}F)V_1} = \beta^2 H_2\overline{V_1^2} - \beta^2 K_3\overline{U_4V_1} = \tfrac{\beta^2}{2}H_2(U_1^2 + V_1^2) - \tfrac{\beta^2}{2}K_3(U_4V_1 - U_1V_4). \tag{4.7g}$$

    Term VIII is

    $$-\beta K_3(L_{Y_{V_1}}F) = \beta^2\left( 2K_3U_1V_4 + 2H_2K_3^2 - \tfrac{1}{4}H_2K_3U_4 + \tfrac{1}{4}K_3^2V_1 \right).$$

    So

    $$-\beta\,\overline{K_3(L_{Y_{V_1}}F)} = 2\beta^2 K_3\overline{U_1V_4} + 2\beta^2 H_2K_3^2 = 3\beta^2 H_2K_3^2, \tag{4.7h}$$

    since $\overline{U_1V_4} = \tfrac{1}{2}H_2K_3$. Collecting together the results of all the above term calculations gives

    $$\begin{aligned}
    \overline{L^2_{Y_F}H_2} &= \beta\,\overline{(L_{Y_{U_4}}F)V_1} + \beta\,\overline{U_4(L_{Y_{V_1}}F)} - \beta\,\overline{(L_{Y_F}H_2)U_4} + \beta\,\overline{H_2(L_{Y_{U_4}}F)} - \beta\,\overline{(L_{Y_{K_3}}F)V_1} - \beta\,\overline{K_3(L_{Y_{V_1}}F)} \\
    &= \beta^2\Big( \left[ \tfrac{1}{8}H_2K_3^2 + \tfrac{1}{8}H_2(U_1^2 + V_1^2) \right] + \left[ \tfrac{1}{8}H_2(U_4^2 + V_4^2) + \tfrac{1}{8}H_2K_3^2 \right] + \left[ \tfrac{1}{2}H_2(U_4^2 + V_4^2) - \tfrac{1}{2}K_3(U_4V_1 - U_1V_4) \right] \\
    &\quad + \left[ \tfrac{1}{2}H_2(U_4^2 + V_4^2) + H_2K_3^2 + H_2^3 + \tfrac{1}{2}H_2(U_1^2 + V_1^2) \right] + \left[ \tfrac{1}{2}H_2(U_1^2 + V_1^2) - \tfrac{1}{2}K_3(U_4V_1 - U_1V_4) \right] + 3H_2K_3^2 \Big) \\
    &= \beta^2\left[ \tfrac{21}{4}H_2K_3^2 + H_2^3 + \tfrac{9}{8}H_2(U_1^2 + V_1^2) + \tfrac{9}{8}H_2(U_4^2 + V_4^2) \right],
    \end{aligned}$$

    using $U_4V_1 - U_1V_4 = -K_3H_2$. Thus the second order normal form of the regularized Stark Hamiltonian $H$ on $\Xi^{-1}(0)/S^1$ is

    $$H^{(2)}_{\mathrm{nf}} = H_2 - \tfrac{3}{2}\varepsilon\beta H_2K_3 - \tfrac{1}{2}\varepsilon^2\,\overline{L^2_{Y_F}H_2} = H_2 - \tfrac{3}{2}\varepsilon\beta H_2K_3 - \tfrac{1}{2}\varepsilon^2\beta^2 H_2\left[ \tfrac{21}{4}K_3^2 + H_2^2 + \tfrac{9}{8}(U_1^2 + V_1^2) + \tfrac{9}{8}(U_4^2 + V_4^2) \right]. \tag{4.8}$$
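Collecting the coefficients from (4.7a)–(4.7h) is pure bookkeeping; a tiny arithmetic check in Python (the $\tfrac12$ entries from (4.7e) and (4.7g) are the $K_3(U_4V_1 - U_1V_4)$ terms rewritten with $U_4V_1 - U_1V_4 = -H_2K_3$):

```python
from fractions import Fraction as Fr

K3_sq = Fr(1, 8) + Fr(1, 8) + Fr(1, 2) + 1 + Fr(1, 2) + 3  # H2*K3^2 from (4.7a,b,e,f,g,h)
U1V1 = Fr(1, 8) + Fr(1, 2) + Fr(1, 2)                      # H2*(U1^2+V1^2) from (4.7a,f,g)
U4V4 = Fr(1, 8) + Fr(1, 2) + Fr(1, 2)                      # H2*(U4^2+V4^2) from (4.7b,e,f)
assert (K3_sq, U1V1, U4V4) == (Fr(21, 4), Fr(9, 8), Fr(9, 8))
```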

    Since $L_{X_{H_2}}H^{(2)}_{\mathrm{nf}} = 0$ by construction, the second order normal form $H^{(2)}_{\mathrm{nf}}$ (4.8) is a smooth function on $(H_2^{-1}(h) \cap \Xi^{-1}(0))/S^1 = T_hS^3_1$, the tangent $h$-sphere bundle of the unit $3$-sphere $S^3_1$, given by

    $$\widetilde{H} = h - \tfrac{1}{2}\varepsilon^2\beta^2 h^3 - \tfrac{3}{2}\varepsilon\beta h K_3 - \tfrac{1}{2}\varepsilon^2\beta^2 h\left[ \tfrac{21}{4}K_3^2 + \tfrac{9}{8}(U_1^2 + V_1^2) + \tfrac{9}{8}(U_4^2 + V_4^2) \right]. \tag{5.1}$$

    We now show that the Hamiltonian $\widetilde{H}$ (5.1) on $T_hS^3_1$ can be normalized again. On $(T\mathbb{R}^4, \omega_4)$ the Hamiltonian

    $$K_3(q,p) = \tfrac{1}{2}(q_3^2 + q_4^2 + p_3^2 + p_4^2 - q_1^2 - q_2^2 - p_1^2 - p_2^2)$$

    gives rise to the Hamiltonian vector field $X_{K_3}$, whose flow $\varphi^{X_{K_3}}_t(q,p)$ is

    $$(q_1\cos t - p_1\sin t,\; q_2\cos t - p_2\sin t,\; q_3\cos t + p_3\sin t,\; q_4\cos t + p_4\sin t,\; q_1\sin t + p_1\cos t,\; q_2\sin t + p_2\cos t,\; -q_3\sin t + p_3\cos t,\; -q_4\sin t + p_4\cos t),$$

    which is periodic of period $2\pi$.

    The vector field $X_{K_3}$ on $T\mathbb{R}^4$ induces the vector field

    $$Y_{K_3} = -2L_2\tfrac{\partial}{\partial K_1} + 2L_1\tfrac{\partial}{\partial K_2} - 2K_2\tfrac{\partial}{\partial L_1} + 2K_1\tfrac{\partial}{\partial L_2} - 2U_4\tfrac{\partial}{\partial U_1} + 2U_1\tfrac{\partial}{\partial U_4} - 2V_4\tfrac{\partial}{\partial V_1} + 2V_1\tfrac{\partial}{\partial V_4}$$

    on $\Xi^{-1}(0)/S^1 \subseteq \mathbb{R}^{16}$ with coordinates $(K, L, H_2, \Xi; U, V)$, whose flow

    $$\begin{aligned}
    \varphi^{Y_{K_3}}_s(K, L, H_2, \Xi; U, V) = \big( & K_1\cos 2s - L_2\sin 2s,\; K_2\cos 2s + L_1\sin 2s,\; K_3,\; L_1\cos 2s - K_2\sin 2s,\; L_2\cos 2s + K_1\sin 2s,\; L_3,\; H_2,\; \Xi; \\
    & U_1\cos 2s - U_4\sin 2s,\; U_2,\; U_3,\; U_1\sin 2s + U_4\cos 2s,\; V_1\cos 2s - V_4\sin 2s,\; V_2,\; V_3,\; V_1\sin 2s + V_4\cos 2s \big)
    \end{aligned}$$

    is periodic of period $\pi$. Since $L_{Y_{K_3}}$ maps the ideal of smooth functions which vanish identically on $\Xi^{-1}(0)/S^1$ into itself, $Y_{K_3}$ is a vector field on $\Xi^{-1}(0)/S^1$. Since $L_{X_{K_3}}H_2 = 0$, it follows that $Y_{K_3}$ induces a vector field on $T_hS^3_1$ with periodic flow. So we can normalize again.

    To compute the normal form of the Hamiltonian $\widetilde{H}$ (5.1) on $T_hS^3_1$, we need only calculate the average of the term

    $$T = \tfrac{21}{4}K_3^2 + \tfrac{9}{8}(U_1^2 + V_1^2) + \tfrac{9}{8}(U_4^2 + V_4^2)$$

    over the flow $\varphi^{Y_{K_3}}_s$. Since $L_{Y_{K_3}}K_3 = 0$, we need only calculate $\overline{U_1^2}$, $\overline{U_4^2}$, $\overline{V_1^2}$, and $\overline{V_4^2}$. Now

    $$\overline{U_1^2} = \frac{1}{\pi}\int_0^{\pi} (U_1\cos 2s - U_4\sin 2s)^2\, ds = \frac{1}{\pi}\int_0^{\pi} \left( U_1^2\cos^2 2s - U_1U_4\sin 4s + U_4^2\sin^2 2s \right) ds = \tfrac{1}{2}(U_1^2 + U_4^2).$$

    Similarly, $\overline{V_1^2} = \tfrac{1}{2}(V_1^2 + V_4^2)$, $\overline{U_4^2} = \tfrac{1}{2}(U_1^2 + U_4^2)$, and $\overline{V_4^2} = \tfrac{1}{2}(V_1^2 + V_4^2)$. Thus

    $$\overline{T} = \tfrac{21}{4}K_3^2 + \tfrac{9}{8}(U_1^2 + V_1^2 + U_4^2 + V_4^2), \tag{5.2}$$

    which is no surprise since $L_{Y_{K_3}}T = 0$. So the first order normal form of $\widetilde{H}$ (5.1) on $T_hS^3_1$ is

    $$\widetilde{H}^{(1)}_{\mathrm{nf}} = h - \tfrac{1}{2}\varepsilon^2\beta^2 h^3 - \tfrac{3}{2}\varepsilon\beta h K_3 - \varepsilon^2\beta^2 h\left[ \tfrac{21}{8}K_3^2 + \tfrac{9}{16}(U_1^2 + V_1^2 + U_4^2 + V_4^2) \right]. \tag{5.3}$$
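The averages over the $Y_{K_3}$ flow behave the same way as those over the $Y_{H_2}$ flow; a short sympy check of $\overline{U_1^2}$ and $\overline{V_4^2}$ (our notation, $U_i$ and $V_i$ treated as scalars):

```python
import sympy as sp

s = sp.symbols('s')
U1, U4, V1, V4 = sp.symbols('U1 U4 V1 V4')
avg = lambda g: sp.expand(sp.integrate(g, (s, 0, sp.pi)) / sp.pi)
assert sp.expand(avg((U1*sp.cos(2*s) - U4*sp.sin(2*s))**2) - (U1**2 + U4**2)/2) == 0
assert sp.expand(avg((V1*sp.sin(2*s) + V4*sp.cos(2*s))**2) - (V1**2 + V4**2)/2) == 0
```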

    The polynomial $U_1^2 + V_1^2 + U_4^2 + V_4^2$ is invariant under the flows $\varphi^{X_{H_2}}_t$ and $\varphi^{X_{\Xi}}_u$, and thus is a polynomial on the orbit space $T_hS^3_1/S^1 = S^2_h \times S^2_h$, defined by

    $$K_1^2 + K_2^2 + K_3^2 + L_1^2 + L_2^2 + L_3^2 = h^2, \qquad K_1L_1 + K_2L_2 + K_3L_3 = 0.$$

    We now find this polynomial. From the explicit description of $\mathbb{R}^8/S^1$ in (3.1), it follows that on $(H_2^{-1}(h) \cap \Xi^{-1}(0))/S^1$

    $$U_2V_1 - U_1V_2 = -hK_1, \qquad U_3V_1 - U_1V_3 = -hK_2, \qquad U_4V_1 - U_1V_4 = -hK_3.$$

    So on $S^2_h \times S^2_h$

    $$\begin{aligned}
    h^2(K_1^2 + K_2^2 + K_3^2) &= (U_2V_1 - U_1V_2)^2 + (U_3V_1 - U_1V_3)^2 + (U_4V_1 - U_1V_4)^2 \\
    &= (U_1^2 + U_2^2 + U_3^2 + U_4^2)V_1^2 - U_1^2V_1^2 - 2(U_1V_1)(U_1V_1 + U_2V_2 + U_3V_3 + U_4V_4) + 2U_1^2V_1^2 \\
    &\quad + (V_1^2 + V_2^2 + V_3^2 + V_4^2)U_1^2 - U_1^2V_1^2 \\
    &= h^2(V_1^2 + U_1^2),
    \end{aligned}$$

    since $\langle U,U \rangle = h^2$, $\langle V,V \rangle = h^2$, and $\langle U,V \rangle = 0$. Thus $U_1^2 + V_1^2 = K_1^2 + K_2^2 + K_3^2$. Again from the explicit description of $(H_2^{-1}(h) \cap \Xi^{-1}(0))/S^1$ we have

    $$U_4V_3 - U_3V_4 = -hL_1, \qquad U_4V_2 - U_2V_4 = hL_2, \qquad U_4V_1 - U_1V_4 = -hK_3.$$

    So on $S^2_h \times S^2_h$

    $$\begin{aligned}
    h^2(L_1^2 + L_2^2 + K_3^2) &= (U_4V_3 - U_3V_4)^2 + (U_4V_2 - U_2V_4)^2 + (U_4V_1 - U_1V_4)^2 \\
    &= (V_1^2 + V_2^2 + V_3^2 + V_4^2)U_4^2 - U_4^2V_4^2 - 2(U_4V_4)(U_1V_1 + U_2V_2 + U_3V_3 + U_4V_4) + 2U_4^2V_4^2 \\
    &\quad + (U_1^2 + U_2^2 + U_3^2 + U_4^2)V_4^2 - U_4^2V_4^2 \\
    &= h^2(U_4^2 + V_4^2).
    \end{aligned}$$

    Thus $U_4^2 + V_4^2 = L_1^2 + L_2^2 + K_3^2$. Consequently

    $$U_1^2 + V_1^2 + U_4^2 + V_4^2 = K_1^2 + K_2^2 + 2K_3^2 + L_1^2 + L_2^2 = K_1^2 + K_2^2 + K_3^2 + L_1^2 + L_2^2 + L_3^2 + K_3^2 - L_3^2 = 2h^2 + K_3^2 - L_3^2$$

    on $S^2_h \times S^2_h$. Hence on $S^2_h \times S^2_h$

    $$\widehat{H} = \widetilde{H}^{(1)}_{\mathrm{nf}} = h - \tfrac{13}{8}\varepsilon^2\beta^2 h^3 - \tfrac{3}{2}\varepsilon\beta h K_3 - \tfrac{51}{16}\varepsilon^2\beta^2 h K_3^2 + \tfrac{9}{16}\varepsilon^2\beta^2 h L_3^2. \tag{6.1}$$

    Using the coordinates $(\xi, \eta) = \left( \tfrac{1}{2}(K + L),\; \tfrac{1}{2}(K - L) \right)$ on $\mathbb{R}^3 \times \mathbb{R}^3$, the space of smooth functions on the reduced space $S^2_h \times S^2_h$, defined by

    $$\xi_1^2 + \xi_2^2 + \xi_3^2 = h^2 \quad \text{and} \quad \eta_1^2 + \eta_2^2 + \eta_3^2 = h^2,$$

    has a Poisson structure with bracket relations

    $$\{\xi_i, \xi_j\} = \sum_{k=1}^3 \epsilon_{ijk}\xi_k, \qquad \{\eta_i, \eta_j\} = \sum_{k=1}^3 \epsilon_{ijk}\eta_k, \qquad \{\xi_i, \eta_j\} = 0.$$

    Since $\{K_3, L_3\} = 0$, it follows that $\{K_3, \widehat{H}\} = 0$. Thus the flow $\varphi^{Z_{K_3}}_r$ of the Hamiltonian vector field $Z_{K_3}$ on $(S^2_h \times S^2_h, \{\,,\,\})$ generates an $S^1$ symmetry of the Hamiltonian system $(\widehat{H}, S^2_h \times S^2_h, \{\,,\,\})$. So this system is completely integrable.

    We reduce this $S^1$ symmetry as follows. Consider the vector field $Z_{K_3}$ on $\mathbb{R}^3 \times \mathbb{R}^3$ corresponding to the Hamiltonian $\tfrac{1}{2}K_3 = \tfrac{1}{2}(\xi_3 + \eta_3)$. Its integral curves satisfy

    $$\begin{aligned}
    \dot{\xi}_1 &= \{\xi_1, \tfrac{1}{2}K_3\} = \tfrac{1}{2}\{\xi_1, \xi_3\} = -\tfrac{1}{2}\xi_2 \\
    \dot{\xi}_2 &= \{\xi_2, \tfrac{1}{2}K_3\} = \tfrac{1}{2}\{\xi_2, \xi_3\} = \tfrac{1}{2}\xi_1 \\
    \dot{\xi}_3 &= \{\xi_3, \tfrac{1}{2}K_3\} = 0 \\
    \dot{\eta}_1 &= \{\eta_1, \tfrac{1}{2}K_3\} = \tfrac{1}{2}\{\eta_1, \eta_3\} = -\tfrac{1}{2}\eta_2 \\
    \dot{\eta}_2 &= \{\eta_2, \tfrac{1}{2}K_3\} = \tfrac{1}{2}\{\eta_2, \eta_3\} = \tfrac{1}{2}\eta_1 \\
    \dot{\eta}_3 &= \{\eta_3, \tfrac{1}{2}K_3\} = 0.
    \end{aligned}$$

    Thus the flow of $Z_{K_3}$ on $\mathbb{R}^3 \times \mathbb{R}^3$ is

    $$\varphi^{Z_{K_3}}_t(\xi, \eta) = \left( \xi_1\cos\tfrac{t}{2} - \xi_2\sin\tfrac{t}{2},\; \xi_1\sin\tfrac{t}{2} + \xi_2\cos\tfrac{t}{2},\; \xi_3,\; \eta_1\cos\tfrac{t}{2} - \eta_2\sin\tfrac{t}{2},\; \eta_1\sin\tfrac{t}{2} + \eta_2\cos\tfrac{t}{2},\; \eta_3 \right),$$

    which preserves $S^2_h \times S^2_h$ and is periodic of period $4\pi$.

    We now determine the space $(S^2_h \times S^2_h)/S^1$ of orbits of the vector field $Z_{K_3}$. We use invariant theory. The algebra of polynomials on $\mathbb{R}^3 \times \mathbb{R}^3$ which are invariant under the $S^1$ action given by the flow $\varphi^{Z_{K_3}}_t$ is generated by

    $$\begin{aligned}
    \sigma_1 &= \xi_1^2 + \xi_2^2 & \sigma_3 &= \xi_1\eta_2 - \xi_2\eta_1 & \sigma_5 &= \tfrac{1}{2}(\xi_3 + \eta_3) \\
    \sigma_2 &= \eta_1^2 + \eta_2^2 & \sigma_4 &= \xi_1\eta_1 + \xi_2\eta_2 & \sigma_6 &= \tfrac{1}{2}(\xi_3 - \eta_3),
    \end{aligned}$$

    which are subject to the relation

    $$\sigma_3^2 + \sigma_4^2 = (\xi_1\eta_2 - \xi_2\eta_1)^2 + (\xi_1\eta_1 + \xi_2\eta_2)^2 = (\xi_1^2 + \xi_2^2)(\eta_1^2 + \eta_2^2) = \sigma_1\sigma_2, \qquad \sigma_1 \ge 0, \quad \sigma_2 \ge 0. \tag{7.1a}$$
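The invariance of $\sigma_3$ and $\sigma_4$ under the flow $\varphi^{Z_{K_3}}_t$, and the relation (7.1a), can be confirmed symbolically (sympy sketch, our notation):

```python
import sympy as sp

t = sp.symbols('t')
xi1, xi2, eta1, eta2 = sp.symbols('xi1 xi2 eta1 eta2')
c, s = sp.cos(t/2), sp.sin(t/2)
X1, X2 = xi1*c - xi2*s, xi1*s + xi2*c       # rotated (xi1, xi2)
E1, E2 = eta1*c - eta2*s, eta1*s + eta2*c   # rotated (eta1, eta2)

sigma3 = xi1*eta2 - xi2*eta1
sigma4 = xi1*eta1 + xi2*eta2
assert sp.simplify(X1*E2 - X2*E1 - sigma3) == 0                # sigma3 is invariant
assert sp.simplify(X1*E1 + X2*E2 - sigma4) == 0                # sigma4 is invariant
assert sp.expand(sigma3**2 + sigma4**2
                 - (xi1**2 + xi2**2)*(eta1**2 + eta2**2)) == 0  # relation (7.1a)
```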

    In terms of invariants, the defining equations of $S^2_h \times S^2_h$ become

    $$\sigma_1 + (\sigma_5 + \sigma_6)^2 = \xi_1^2 + \xi_2^2 + \xi_3^2 = h^2 \tag{7.1b}$$
    $$\sigma_2 + (\sigma_5 - \sigma_6)^2 = \eta_1^2 + \eta_2^2 + \eta_3^2 = h^2. \tag{7.1c}$$

    Eliminating $\sigma_1$ and $\sigma_2$ from (7.1a) using (7.1b) and (7.1c) gives

    $$\sigma_3^2 + \sigma_4^2 = \left( h^2 - (\sigma_5 + \sigma_6)^2 \right)\left( h^2 - (\sigma_5 - \sigma_6)^2 \right), \qquad |\sigma_5 + \sigma_6| \le h, \quad |\sigma_5 - \sigma_6| \le h, \tag{7.2a}$$

    which defines $(S^2_h \times S^2_h)/S^1$ as a semialgebraic variety in $\mathbb{R}^4$ with coordinates $(\sigma_3, \sigma_4, \sigma_5, \sigma_6)$. Thus the reduced space $(K_3^{-1}(2k) \cap (S^2_h \times S^2_h))/S^1$ is defined by (7.2a) and

    $$\sigma_5 = \tfrac{1}{2}(\xi_3 + \eta_3) = \tfrac{1}{2}K_3 = k. \tag{7.2b}$$

    Consequently, $(K_3^{-1}(2k) \cap (S^2_h \times S^2_h))/S^1$ is the semialgebraic variety

    $$\sigma_3^2 + \sigma_4^2 = \left( h^2 - (k + \sigma_6)^2 \right)\left( h^2 - (k - \sigma_6)^2 \right) = \left( (h - k)^2 - \sigma_6^2 \right)\left( (h + k)^2 - \sigma_6^2 \right), \qquad |\sigma_6| \le h - |k|, \tag{7.3}$$

    in $\mathbb{R}^3$ with coordinates $(\sigma_3, \sigma_4, \sigma_6)$. When $0 < |k| < h$, the reduced space (7.3) is a smooth $2$-sphere. When $|k| = h$, it is a point. When $k = 0$, it is a topological $2$-sphere with conical singular points at $(0, 0, \pm h)$. These singular points correspond to the fixed points $h(0, 0, \pm 1, 0, 0, \mp 1)$ of the $S^1$ action on $S^2_h \times S^2_h$ generated by the flow of the vector field $Z_{K_3}$.
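The two factorizations displayed in (7.3) agree; a one-assertion sympy check:

```python
import sympy as sp

h, k, s6 = sp.symbols('h k sigma6')
lhs = (h**2 - (k + s6)**2) * (h**2 - (k - s6)**2)
rhs = ((h - k)**2 - s6**2) * ((h + k)**2 - s6**2)
assert sp.expand(lhs - rhs) == 0
```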

    By (6.1), the reduced Hamiltonian on $(K_3^{-1}(2k) \cap (S^2_h \times S^2_h))/S^1$ is

    $$\widehat{H}_{\mathrm{red}} = \tfrac{9}{4}\varepsilon^2\beta^2 h\, \sigma_6^2, \tag{7.4}$$

    using $L_3 = \xi_3 - \eta_3 = 2\sigma_6$ and having dropped the constant $h - \tfrac{13}{8}\varepsilon^2\beta^2 h^3 - \tfrac{3}{2}\varepsilon\beta h k - \tfrac{51}{16}\varepsilon^2\beta^2 h k^2$.

    The author declares that he has not used Artificial Intelligence (AI) tools in the creation of this article.

    The author would like to thank the referees for their careful reading and comments on the manuscript. Especially he thanks the one who pointed out errors in his calculations.

    The author received no funding for the research in this article.

    The author declares there is no conflict of interest.



  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
