In recent years, many studies have addressed the diagnosis of Parkinson's disease (PD) from brain magnetic resonance imaging (MRI) using traditional unsupervised machine learning methods and supervised deep learning models. However, unsupervised learning methods extract accurate features from MRIs poorly, and it is difficult to collect enough PD data to train deep learning models. Moreover, most existing studies rely on single-view MRI data, whose characteristics are not sufficiently rich. To tackle these drawbacks, we propose a novel semi-supervised learning framework called the Semi-supervised Multi-view learning Clustering architecture (SMC). The model first applies a sliding-window method to capture different features, and then uses Linear Discriminant Analysis (LDA) to reduce the dimensionality of each feature set. Finally, traditional single-view clustering and multi-view clustering methods are applied to the resulting feature views. Experiments show that the proposed method outperforms state-of-the-art unsupervised learning models in clustering quality. Our work may therefore help improve the effectiveness of identifying PD from previously labeled and subsequently unlabeled medical MRI data in realistic medical settings.
Citation: Xiaobo Zhang, Donghai Zhai, Yan Yang, Yiling Zhang, Chunlin Wang. A novel semi-supervised multi-view clustering framework for screening Parkinson's disease[J]. Mathematical Biosciences and Engineering, 2020, 17(4): 3395-3411. doi: 10.3934/mbe.2020192
In the study of perturbations of the three-degree-of-freedom Kepler Hamiltonian, pulling the regularized Hamiltonian back by the Kustaanheimo-Stiefel (KS) map gives a perturbation of the four-degree-of-freedom harmonic oscillator Hamiltonian when restricted to the zero level set of the KS symmetry. We use the formulation of the KS transformation in [6], which allows us to reduce the KS symmetry using invariant theory for the first time. As an illustration, we apply this procedure to the regularized Stark Hamiltonian, which is normalized after applying the KS transformation. We do not expect this Hamiltonian to be completely integrable (see Lagrange [4] and also [5]). Our treatment follows that of [3] and gives the full details of obtaining the second order normal form [1]. We use the notation of [2] and note that our procedure of regularization, pull back by the KS map, normalization, and reduction may be used to study three-degree-of-freedom perturbed Keplerian systems.
On $T_0\mathbb{R}^3=(\mathbb{R}^3\setminus\{0\})\times\mathbb{R}^3$ with coordinates $(x,y)$ and standard symplectic form $\omega_3=\sum_{i=1}^{3}dx_i\wedge dy_i$ consider the Stark Hamiltonian
$$K(x,y)=\tfrac12\langle y,y\rangle-\frac{1}{|x|}+fx_3.\tag{2.1}$$
Here $\langle\,,\,\rangle$ is the Euclidean inner product on $\mathbb{R}^3$ with associated norm $|\cdot|$. On the negative energy level $-\tfrac12 k^2$ with $k>0$, rescaling time by $dt\mapsto\frac{|x|}{k}\,ds$ we obtain
$$0=\frac{1}{2k}\big(|x|\langle y,y\rangle+k^2|x|\big)-\frac{1}{k}+fx_3\frac{|x|}{k}.\tag{2.2}$$
In other words, $(x,y)$ lies in the $\tfrac{1}{k}$ level set of
$$\widehat{K}(x,y)=\frac{1}{2k}\big(|x|\langle y,y\rangle+k^2|x|\big)+fx_3\frac{|x|}{k}.\tag{2.3}$$
We assume that $f$ is small, namely, $f=\varepsilon\beta$. After the symplectic coordinate change $(x,y)\mapsto(\tfrac{1}{k}x,ky)$ the Hamiltonian $\widehat{K}$ becomes the preregularized Hamiltonian
$$K(x,y)=\tfrac12\big(|x|\langle y,y\rangle+|x|\big)+\varepsilon\beta\,x_3|x|\tag{2.4}$$
on the level set $K^{-1}(1)$.
Let $T_0\mathbb{R}^4=(\mathbb{R}^4\setminus\{0\})\times\mathbb{R}^4$ have coordinates $(q,p)$ and symplectic form $\omega_4=\sum_{i=1}^{4}dq_i\wedge dp_i$. Pull back $K$ by the Kustaanheimo-Stiefel mapping
$$KS: T_0\mathbb{R}^4\to T_0\mathbb{R}^3:(q,p)\mapsto(x,y),$$
where
$$\begin{aligned}
x_1&=2(q_1q_3+q_2q_4)=U_2-K_1\\
x_2&=2(q_1q_4-q_2q_3)=U_3-K_2\\
x_3&=q_1^2+q_2^2-q_3^2-q_4^2=U_4-K_3\\
y_1&=\langle q,q\rangle^{-1}(q_1p_3+q_2p_4+q_3p_1+q_4p_2)=(H_2+V_1)^{-1}V_2\\
y_2&=\langle q,q\rangle^{-1}(q_1p_4-q_2p_3-q_3p_2+q_4p_1)=(H_2+V_1)^{-1}V_3\\
y_3&=\langle q,q\rangle^{-1}(q_1p_1+q_2p_2-q_3p_3-q_4p_4)=(H_2+V_1)^{-1}V_4
\end{aligned}$$
and
$$H_2=\tfrac12\big(p_1^2+p_2^2+p_3^2+p_4^2+q_1^2+q_2^2+q_3^2+q_4^2\big),\qquad \Xi=q_1p_2-q_2p_1+q_3p_4-q_4p_3,$$
to get the regularized Stark Hamiltonian
$$H=H_2+\varepsilon\beta\big(U_4V_1+H_2U_4-K_3V_1-H_2K_3\big)\tag{2.5}$$
on $\Xi^{-1}(0)$, since $|x|=\langle q,q\rangle=H_2+V_1$. Here the polynomials
$$\begin{aligned}
K_1&=-(q_1q_3+q_2q_4+p_1p_3+p_2p_4) &\quad L_1&=q_4p_1-q_3p_2+q_2p_3-q_1p_4\\
K_2&=-(q_1q_4-q_2q_3+p_1p_4-p_2p_3) &\quad L_2&=q_1p_3+q_2p_4-q_3p_1-q_4p_2\\
K_3&=\tfrac12(q_3^2+q_4^2+p_3^2+p_4^2-q_1^2-q_2^2-p_1^2-p_2^2) &\quad L_3&=q_3p_4-q_4p_3+q_2p_1-q_1p_2\\
U_1&=-(q_1p_1+q_2p_2+q_3p_3+q_4p_4) &\quad V_1&=\tfrac12(q_1^2+q_2^2+q_3^2+q_4^2-p_1^2-p_2^2-p_3^2-p_4^2)\\
U_2&=q_1q_3+q_2q_4-p_1p_3-p_2p_4 &\quad V_2&=q_1p_3+q_2p_4+q_3p_1+q_4p_2\\
U_3&=q_1q_4-q_2q_3+p_2p_3-p_1p_4 &\quad V_3&=q_1p_4-q_2p_3-q_3p_2+q_4p_1\\
U_4&=\tfrac12(q_1^2+q_2^2-q_3^2-q_4^2+p_3^2+p_4^2-p_1^2-p_2^2) &\quad V_4&=q_1p_1+q_2p_2-q_3p_3-q_4p_4
\end{aligned}$$
generate the algebra of polynomials invariant under the $S^1$ action $\varphi^{\Xi}_s$ given by the flow of $X_{\Xi}$ on $(T\mathbb{R}^4=\mathbb{R}^8,\omega_4)$. The Hamiltonian $H$ (2.5) is invariant under this $S^1$ action and thus is a smooth function on the orbit space $\Xi^{-1}(0)/S^1\subseteq\mathbb{R}^{16}$ with coordinates $(K,L,H_2,\Xi;U,V)$.
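As a quick consistency check (a sketch, assuming SymPy is available), the identities $x_3=U_4-K_3$ and $|x|=\langle q,q\rangle=H_2+V_1$, and hence the form of the perturbation in (2.5), can be verified symbolically:

```python
# Spot-check: the KS substitutions above are polynomial identities in (q, p).
import sympy as sp

q1, q2, q3, q4, p1, p2, p3, p4 = sp.symbols('q1:5 p1:5')

H2 = sp.Rational(1, 2)*(p1**2+p2**2+p3**2+p4**2+q1**2+q2**2+q3**2+q4**2)
K3 = sp.Rational(1, 2)*(q3**2+q4**2+p3**2+p4**2-q1**2-q2**2-p1**2-p2**2)
U4 = sp.Rational(1, 2)*(q1**2+q2**2-q3**2-q4**2+p3**2+p4**2-p1**2-p2**2)
V1 = sp.Rational(1, 2)*(q1**2+q2**2+q3**2+q4**2-p1**2-p2**2-p3**2-p4**2)

x3 = q1**2 + q2**2 - q3**2 - q4**2      # third component of the KS map
qq = q1**2 + q2**2 + q3**2 + q4**2      # <q,q>

assert sp.expand(x3 - (U4 - K3)) == 0
assert sp.expand(qq - (H2 + V1)) == 0
# hence eps*beta*x3*|x| pulls back to the perturbation in (2.5):
assert sp.expand(x3*qq - (U4*V1 + H2*U4 - K3*V1 - H2*K3)) == 0
```

The third assertion is exactly the statement that $\varepsilon\beta\,x_3|x|$ equals $\varepsilon\beta(U_4V_1+H_2U_4-K_3V_1-H_2K_3)$.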
The harmonic oscillator vector field $X_{H_2}$ on $(T\mathbb{R}^4,\omega_4)$ induces the vector field $Y_{H_2}=\sum_{i=1}^{4}\big(2V_i\frac{\partial}{\partial U_i}-2U_i\frac{\partial}{\partial V_i}\big)$ on the orbit space $\mathbb{R}^8/S^1\subseteq\mathbb{R}^{16}$, which leaves $\Xi^{-1}(0)/S^1$ invariant.
We now compute the first order normal form of the Hamiltonian $H$ (2.5) on the reduced space $\Xi^{-1}(0)/S^1\subseteq\mathbb{R}^8/S^1$.
The average of $H_2U_4-K_3V_1$ over the flow
$$\varphi^{Y_{H_2}}_t(K,L,H_2,\Xi;U,V)=(K,L,H_2,\Xi;\,U\cos 2t+V\sin 2t,\,-U\sin 2t+V\cos 2t)$$
of $Y_{H_2}$ is
$$\overline{H_2U_4-K_3V_1}=\frac{1}{\pi}\int_0^{\pi}\big(\varphi^{Y_{H_2}}_t\big)^{\ast}(H_2U_4-K_3V_1)\,dt=\frac{1}{\pi}\int_0^{\pi}(U_4\cos 2t+V_4\sin 2t)\,dt\;H_2-\frac{1}{\pi}\int_0^{\pi}(-U_1\sin 2t+V_1\cos 2t)\,dt\;K_3=0.$$
The second equality above follows because $L_{X_{H_2}}K_3=0$ and the third because $\overline{\cos 2t}=\overline{\sin 2t}=0$. The average of $U_4V_1$ over the flow of $Y_{H_2}$ on $\Xi^{-1}(0)/S^1$ is
$$\overline{U_4V_1}=\frac{1}{\pi}\int_0^{\pi}\big(\varphi^{Y_{H_2}}_t\big)^{\ast}(U_4V_1)\,dt=-\tfrac12 U_1U_4\,\overline{\sin 4t}+U_4V_1\,\overline{\cos^2 2t}-U_1V_4\,\overline{\sin^2 2t}+\tfrac12 V_1V_4\,\overline{\sin 4t}=\tfrac12(U_4V_1-U_1V_4)=-\tfrac12 H_2K_3,$$
since $\overline{\cos^2 2t}=\overline{\sin^2 2t}=\tfrac12$ and $\overline{\sin 4t}=0$. The last equality above follows from the explicit description of the orbit space $\mathbb{R}^8/S^1$ as the semialgebraic variety in $\mathbb{R}^{16}$ with coordinates $(K,L,H_2,\Xi;U,V)$ given by
$$\begin{aligned}
\langle U,U\rangle&=U_1^2+U_2^2+U_3^2+U_4^2=H_2^2-\Xi^2\ge 0,\qquad H_2\ge 0\\
\langle V,V\rangle&=V_1^2+V_2^2+V_3^2+V_4^2=H_2^2-\Xi^2\ge 0\\
\langle U,V\rangle&=U_1V_1+U_2V_2+U_3V_3+U_4V_4=0\\
U_2V_1-U_1V_2&=L_1\Xi-K_1H_2 &\qquad U_4V_3-U_3V_4&=K_1\Xi-L_1H_2\\
U_3V_1-U_1V_3&=L_2\Xi-K_2H_2 &\qquad U_2V_4-U_4V_2&=K_2\Xi-L_2H_2\\
U_4V_1-U_1V_4&=L_3\Xi-K_3H_2 &\qquad U_3V_2-U_2V_3&=K_3\Xi-L_3H_2.
\end{aligned}\tag{3.1}$$
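The relations in (3.1) are polynomial identities in $(q,p)$; a SymPy sketch checking a representative sample of them:

```python
# Verify a sample of the orbit-space relations (3.1) as identities in (q, p).
import sympy as sp

q1, q2, q3, q4, p1, p2, p3, p4 = sp.symbols('q1:5 p1:5')

H2 = sp.Rational(1, 2)*(p1**2+p2**2+p3**2+p4**2+q1**2+q2**2+q3**2+q4**2)
Xi = q1*p2 - q2*p1 + q3*p4 - q4*p3
K3 = sp.Rational(1, 2)*(q3**2+q4**2+p3**2+p4**2-q1**2-q2**2-p1**2-p2**2)
L3 = q3*p4 - q4*p3 + q2*p1 - q1*p2
U = [-(q1*p1+q2*p2+q3*p3+q4*p4),
     q1*q3+q2*q4-p1*p3-p2*p4,
     q1*q4-q2*q3+p2*p3-p1*p4,
     sp.Rational(1, 2)*(q1**2+q2**2-q3**2-q4**2+p3**2+p4**2-p1**2-p2**2)]
V = [sp.Rational(1, 2)*(q1**2+q2**2+q3**2+q4**2-p1**2-p2**2-p3**2-p4**2),
     q1*p3+q2*p4+q3*p1+q4*p2,
     q1*p4-q2*p3-q3*p2+q4*p1,
     q1*p1+q2*p2-q3*p3-q4*p4]

UU = sum(u**2 for u in U)
VV = sum(v**2 for v in V)
UV = sum(u*v for u, v in zip(U, V))

assert sp.expand(UU - (H2**2 - Xi**2)) == 0
assert sp.expand(VV - (H2**2 - Xi**2)) == 0
assert sp.expand(UV) == 0
assert sp.expand(U[3]*V[0] - U[0]*V[3] - (L3*Xi - K3*H2)) == 0
```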
So the average of $U_4V_1+H_2U_4-K_3V_1-H_2K_3$ over the flow of $Y_{H_2}$ is $-\tfrac32 H_2K_3$ on $\Xi^{-1}(0)/S^1$. Thus the first order normal form of the regularized Stark Hamiltonian $H$ (2.5) on $\Xi^{-1}(0)/S^1$ is
$$H^{(1)}_{\mathrm{nf}}=H_2-\tfrac32\varepsilon\beta\,H_2K_3.\tag{3.2}$$
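The averages used to obtain (3.2) can be reproduced by direct integration; a SymPy sketch of $\overline{U_4V_1}$ (before the relation $U_4V_1-U_1V_4=-H_2K_3$ is imposed):

```python
# Average the pull-back of U4*V1 along the flow of Y_H2 over one period.
import sympy as sp

t = sp.symbols('t')
U1, U4, V1, V4 = sp.symbols('U1 U4 V1 V4')

U4t = U4*sp.cos(2*t) + V4*sp.sin(2*t)     # pull-back of U4
V1t = -U1*sp.sin(2*t) + V1*sp.cos(2*t)    # pull-back of V1
avg = sp.integrate(U4t*V1t, (t, 0, sp.pi)) / sp.pi

assert sp.simplify(avg - sp.Rational(1, 2)*(U4*V1 - U1*V4)) == 0
```

On $\Xi^{-1}(0)/S^1$ the right-hand side equals $-\tfrac12 H_2K_3$ by (3.1), which gives the coefficient in (3.2).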
In order to compute the second order normal form of the Hamiltonian $H$ on $\Xi^{-1}(0)/S^1$, we need to find a function $F$ on $\mathbb{R}^{16}$ such that the coordinate change given by the time $\varepsilon$ map of the flow of the Hamiltonian vector field $Y_F$ brings the regularized Hamiltonian $H$ (2.5) into first order normal form. Choose $F$ so that
$$L_{Y_F}H_2=\beta\big(-U_4V_1-\tfrac12 H_2K_3-H_2U_4+K_3V_1\big).\tag{4.1}$$
The following calculation shows that this choice does the job.
$$\begin{aligned}
\big(\varphi^{Y_F}_{\varepsilon}\big)^{\ast}H&=H+\varepsilon L_{Y_F}H+\tfrac12\varepsilon^2L^2_{Y_F}H+O(\varepsilon^3)\\
&=H_2+\varepsilon\beta(U_4V_1+H_2U_4-K_3V_1-H_2K_3)+\varepsilon L_{Y_F}H_2\\
&\quad+\varepsilon^2\beta L_{Y_F}(U_4V_1+H_2U_4-K_3V_1-H_2K_3)+\tfrac12\varepsilon^2L^2_{Y_F}H_2+O(\varepsilon^3)\\
&=H_2+\varepsilon\beta(U_4V_1+H_2U_4-K_3V_1-H_2K_3)+\varepsilon\beta\big(-U_4V_1-\tfrac12 H_2K_3-H_2U_4+K_3V_1\big)\\
&\quad+\varepsilon^2\Big[L_{Y_F}\big(-L_{Y_F}H_2-\tfrac32\beta H_2K_3\big)+\tfrac12 L^2_{Y_F}H_2\Big]+O(\varepsilon^3)\\
&=H_2-\tfrac32\varepsilon\beta H_2K_3-\tfrac12\varepsilon^2\big(L^2_{Y_F}H_2+3\beta L_{Y_F}(H_2K_3)\big)+O(\varepsilon^3).
\end{aligned}\tag{4.2}$$
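The cancellation of the order-$\varepsilon$ terms in (4.2) is elementary algebra; a SymPy sketch:

```python
# The order-eps perturbation plus L_{Y_F}H2/beta leaves exactly -(3/2) H2 K3.
import sympy as sp

U4, V1, U1, V4, H2, K3 = sp.symbols('U4 V1 U1 V4 H2 K3')

pert = U4*V1 + H2*U4 - K3*V1 - H2*K3                        # eps*beta term of H
lie = -U4*V1 - sp.Rational(1, 2)*H2*K3 - H2*U4 + K3*V1      # L_{Y_F}H2 / beta

assert sp.expand(pert + lie + sp.Rational(3, 2)*H2*K3) == 0
```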
To determine the function $F$, we solve equation (4.1). Write $F=F_1+F_2$, where $L_{Y_{H_2}}F_1=\beta\big(U_4V_1+\tfrac12 H_2K_3\big)$ and $L_{Y_{H_2}}F_2=\beta(H_2U_4-K_3V_1)$. Then
$$L_{Y_F}H_2=-L_{Y_{H_2}}F=-L_{Y_{H_2}}F_1-L_{Y_{H_2}}F_2=-\beta\big(U_4V_1+\tfrac12 H_2K_3\big)-\beta(H_2U_4-K_3V_1).\tag{4.3}$$
Since $L_{Y_{H_2}}V_4=-2U_4$ and $L_{Y_{H_2}}U_1=2V_1$, it follows that
$$F_2=-\frac{\beta}{2}(H_2V_4+K_3U_1).\tag{4.4a}$$
Now
$$F_1=\frac{\beta}{\pi}\int_0^{\pi}t\,\big(\varphi^{Y_{H_2}}_t\big)^{\ast}\big(U_4V_1+\tfrac12 H_2K_3\big)\,dt=\frac{\beta}{\pi}\int_0^{\pi}t\,\big(\varphi^{Y_{H_2}}_t\big)^{\ast}(U_4V_1)\,dt+\frac{\pi\beta}{4}H_2K_3,$$
see [1], and
$$\begin{aligned}
\frac{\beta}{\pi}\int_0^{\pi}t\,\big(\varphi^{Y_{H_2}}_t\big)^{\ast}(U_4V_1)\,dt&=-\frac{\beta}{2}(U_1U_4)\frac{1}{\pi}\int_0^{\pi}t\sin 4t\,dt+\beta(U_4V_1)\frac{1}{\pi}\int_0^{\pi}t\cos^2 2t\,dt\\
&\quad-\beta(U_1V_4)\frac{1}{\pi}\int_0^{\pi}t\sin^2 2t\,dt+\frac{\beta}{2}(V_1V_4)\frac{1}{\pi}\int_0^{\pi}t\sin 4t\,dt\\
&=\frac{\beta}{8}(U_1U_4-V_1V_4)+\frac{\pi\beta}{4}(U_4V_1-U_1V_4),
\end{aligned}$$
since $\frac{1}{\pi}\int_0^{\pi}t\sin 4t\,dt=-\tfrac14$ and $\frac{1}{\pi}\int_0^{\pi}t\sin^2 2t\,dt=\frac{1}{\pi}\int_0^{\pi}t\cos^2 2t\,dt=\tfrac{\pi}{4}$. Thus
$$F_1=\frac{\beta}{8}(U_1U_4-V_1V_4)+\frac{\pi\beta}{4}(U_4V_1-U_1V_4+H_2K_3)=\frac{\beta}{8}(U_1U_4-V_1V_4)$$
on $\Xi^{-1}(0)/S^1$, see (3.1). Hence on $\Xi^{-1}(0)/S^1$
$$F=F_1+F_2=\frac{\beta}{8}(U_1U_4-V_1V_4)-\frac{\beta}{2}(H_2V_4+K_3U_1).\tag{4.5}$$
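The defining properties of $F_1$ and $F_2$ can be checked by applying the derivation $Y_{H_2}$ directly; a SymPy sketch:

```python
# Apply the induced oscillator derivation Y_H2: U_i -> 2 V_i, V_i -> -2 U_i.
import sympy as sp

b = sp.symbols('beta')
U = sp.symbols('U1:5')
V = sp.symbols('V1:5')
H2, K3 = sp.symbols('H2 K3')
U1, U2, U3, U4 = U
V1, V2, V3, V4 = V

def LYH2(f):
    return sum(2*v*sp.diff(f, u) - 2*u*sp.diff(f, v) for u, v in zip(U, V))

F1 = b/8 * (U1*U4 - V1*V4)
F2 = -b/2 * (H2*V4 + K3*U1)

assert sp.expand(LYH2(F2) - b*(H2*U4 - K3*V1)) == 0
# L_{Y_H2} F1 = (beta/2)(U4 V1 + U1 V4), which equals beta(U4 V1 + H2 K3/2)
# once the relation U4 V1 - U1 V4 = -H2 K3 on Xi^{-1}(0)/S^1 is imposed.
assert sp.expand(LYH2(F1) - b/2*(U4*V1 + U1*V4)) == 0
```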
We now calculate the average over the flow of $Y_{H_2}$ of
$$-\tfrac32\beta L_{Y_F}(H_2K_3)-\tfrac12 L^2_{Y_F}H_2,\tag{4.6}$$
which is the $\varepsilon^2$ term in the transformed Hamiltonian $\big(\varphi^{Y_F}_{\varepsilon}\big)^{\ast}H$, see (4.2). This determines the second order normal form of $H$ on $\Xi^{-1}(0)/S^1$. We begin with the term
$$-\tfrac32\beta L_{Y_F}(H_2K_3)=-\tfrac32\beta\big[K_3(L_{Y_F}H_2)-H_2(L_{Y_{K_3}}F)\big].$$
The average of
$$-\tfrac32\beta K_3(L_{Y_F}H_2)=\tfrac32\beta^2K_3\big(U_4V_1+\tfrac12 H_2K_3+H_2U_4-K_3V_1\big)$$
vanishes on $\Xi^{-1}(0)/S^1$. The term
$$\begin{aligned}
\tfrac32\beta H_2(L_{Y_{K_3}}F)&=\tfrac32\beta^2H_2\,L_{Y_{K_3}}\Big(\tfrac18(U_1U_4-V_1V_4)-\tfrac12(H_2V_4+K_3U_1)\Big)\\
&=\tfrac32\beta^2H_2\Big(-2L_2\frac{\partial}{\partial K_1}+2L_1\frac{\partial}{\partial K_2}-2K_2\frac{\partial}{\partial L_1}+2K_1\frac{\partial}{\partial L_2}-2U_4\frac{\partial}{\partial U_1}+2U_1\frac{\partial}{\partial U_4}-2V_4\frac{\partial}{\partial V_1}+2V_1\frac{\partial}{\partial V_4}\Big)\\
&\qquad\Big(\tfrac18(U_1U_4-V_1V_4)-\tfrac12(H_2V_4+K_3U_1)\Big),\quad\text{see [2, table 1],}\\
&=\tfrac32\beta^2H_2\Big[\tfrac14\big(-U_4^2+U_1^2+V_4^2-V_1^2\big)-H_2V_1+K_3U_4\Big].
\end{aligned}$$
Next we calculate $\tfrac32\beta\,\overline{H_2(L_{Y_{K_3}}F)}$. Since $\overline{H_2V_1}=0=\overline{K_3U_4}$ we need only calculate the averages of $U_1^2$, $U_4^2$, $V_1^2$ and $V_4^2$. We get $\overline{U_1^2}=\tfrac12(U_1^2+V_1^2)=\overline{V_1^2}$ and $\overline{U_4^2}=\tfrac12(U_4^2+V_4^2)=\overline{V_4^2}$. Thus $\tfrac32\beta\,\overline{H_2(L_{Y_{K_3}}F)}=0$. So the average $-\tfrac32\beta\,\overline{L_{Y_F}(H_2K_3)}$ of the first term in expression (4.6) vanishes on $\Xi^{-1}(0)/S^1$.
Next we calculate the average of the term $L^2_{Y_F}H_2$ in expression (4.6) on $\Xi^{-1}(0)/S^1$. We have
$$\begin{aligned}
L^2_{Y_F}H_2&=-L_{Y_F}(L_{Y_{H_2}}F)=-\beta L_{Y_F}\big(U_4V_1+\tfrac12 H_2K_3+H_2U_4-K_3V_1\big),\quad\text{using (4.3)},\\
&=\beta\Big[\overbrace{(L_{Y_{U_4}}F)V_1}^{\mathrm{I}}+\overbrace{U_4(L_{Y_{V_1}}F)}^{\mathrm{II}}-\tfrac12\overbrace{(L_{Y_F}H_2)K_3}^{\mathrm{III}}+\tfrac12\overbrace{H_2(L_{Y_{K_3}}F)}^{\mathrm{IV}}-\overbrace{(L_{Y_F}H_2)U_4}^{\mathrm{V}}+\overbrace{H_2(L_{Y_{U_4}}F)}^{\mathrm{VI}}-\overbrace{(L_{Y_{K_3}}F)V_1}^{\mathrm{VII}}-\overbrace{K_3(L_{Y_{V_1}}F)}^{\mathrm{VIII}}\Big].
\end{aligned}$$
We begin by finding
$$\begin{aligned}
L_{Y_{H_2}}F&=\beta\Big(2V_1\frac{\partial}{\partial U_1}+2V_2\frac{\partial}{\partial U_2}+2V_3\frac{\partial}{\partial U_3}+2V_4\frac{\partial}{\partial U_4}-2U_1\frac{\partial}{\partial V_1}-2U_2\frac{\partial}{\partial V_2}-2U_3\frac{\partial}{\partial V_3}-2U_4\frac{\partial}{\partial V_4}\Big)\Big(\tfrac18(U_1U_4-V_1V_4)-\tfrac12(H_2V_4+K_3U_1)\Big)\\
&=\beta\Big[\tfrac12(V_1U_4+U_1V_4)+H_2U_4-K_3V_1\Big];\\[4pt]
L_{Y_{K_3}}F&=\beta\Big(-2L_2\frac{\partial}{\partial K_1}+2L_1\frac{\partial}{\partial K_2}-2K_2\frac{\partial}{\partial L_1}+2K_1\frac{\partial}{\partial L_2}-2U_4\frac{\partial}{\partial U_1}+2U_1\frac{\partial}{\partial U_4}-2V_4\frac{\partial}{\partial V_1}+2V_1\frac{\partial}{\partial V_4}\Big)\Big(\tfrac18(U_1U_4-V_1V_4)-\tfrac12(H_2V_4+K_3U_1)\Big)\\
&=\beta\Big[\tfrac14\big(-U_4^2+U_1^2+V_4^2-V_1^2\big)-H_2V_1+K_3U_4\Big];\\[4pt]
L_{Y_{U_4}}F&=\beta\Big(-2U_1\frac{\partial}{\partial K_3}-2U_3\frac{\partial}{\partial L_1}+2U_2\frac{\partial}{\partial L_2}-2V_4\frac{\partial}{\partial H_2}-2K_3\frac{\partial}{\partial U_1}+2L_2\frac{\partial}{\partial U_2}+2V_3\frac{\partial}{\partial U_3}-2H_2\frac{\partial}{\partial V_4}\Big)\Big(\tfrac18(U_1U_4-V_1V_4)-\tfrac12(H_2V_4+K_3U_1)\Big)\\
&=\beta\Big[V_4^2+K_3^2-\tfrac14(K_3U_4-H_2V_1)+H_2^2+U_1^2\Big];\\[4pt]
L_{Y_{V_1}}F&=\beta\Big(2V_2\frac{\partial}{\partial K_1}+2V_3\frac{\partial}{\partial K_2}+2V_4\frac{\partial}{\partial K_3}+2U_1\frac{\partial}{\partial H_2}+2H_2\frac{\partial}{\partial U_1}+2K_1\frac{\partial}{\partial V_2}+2K_2\frac{\partial}{\partial V_3}+2K_3\frac{\partial}{\partial V_4}\Big)\Big(\tfrac18(U_1U_4-V_1V_4)-\tfrac12(H_2V_4+K_3U_1)\Big)\\
&=\beta\Big[-2(U_1V_4+H_2K_3)+\tfrac14(H_2U_4-K_3V_1)\Big].
\end{aligned}$$
So the average of term I on $\Xi^{-1}(0)/S^1$ is
$$\beta\,\overline{(L_{Y_{U_4}}F)V_1}=\beta^2\Big(\overline{V_1V_4^2}+\overline{K_3^2V_1}-\tfrac14\overline{K_3U_4V_1}+\tfrac14\overline{H_2V_1^2}+\overline{H_2^2V_1}+\overline{U_1^2V_1}\Big)=\beta^2\Big(\tfrac18 H_2K_3^2+\tfrac18 H_2(U_1^2+V_1^2)\Big),\tag{4.7a}$$
since the averages of $V_1V_4^2$, $K_3^2V_1$, $H_2^2V_1$, and $U_1^2V_1$ each vanish, while $\overline{U_4V_1}=-\tfrac12 H_2K_3$ and $\overline{V_1^2}=\tfrac12(U_1^2+V_1^2)$. Term II is
$$\beta U_4(L_{Y_{V_1}}F)=\beta^2\Big(-2U_1U_4V_4-2H_2K_3U_4+\tfrac14 H_2U_4^2-\tfrac14 K_3U_4V_1\Big).$$
So
$$\beta\,\overline{U_4(L_{Y_{V_1}}F)}=\tfrac14\beta^2H_2\overline{U_4^2}-\tfrac14\beta^2K_3\overline{U_4V_1}=\tfrac18\beta^2H_2(U_4^2+V_4^2)+\tfrac18\beta^2H_2K_3^2.\tag{4.7b}$$
For term III, we have already shown that
$$-\tfrac{\beta}{2}\overline{(L_{Y_F}H_2)K_3}=0,\tag{4.7c}$$
and for term IV we have already shown that
$$\tfrac{\beta}{2}\overline{H_2(L_{Y_{K_3}}F)}=0.\tag{4.7d}$$
Term V is
$$-\beta(L_{Y_F}H_2)U_4=\beta^2\Big(\tfrac12 U_4^2V_1+\tfrac12 U_1U_4V_4+H_2U_4^2-K_3U_4V_1\Big).$$
So
$$-\beta\,\overline{(L_{Y_F}H_2)U_4}=\beta^2\Big(\tfrac12\overline{U_4^2V_1}+\tfrac12\overline{U_1U_4V_4}+\overline{H_2U_4^2}-\overline{K_3U_4V_1}\Big)=\tfrac{\beta^2}{2}H_2(U_4^2+V_4^2)-\tfrac{\beta^2}{2}K_3(U_4V_1-U_1V_4),\tag{4.7e}$$
since the averages of $U_4^2V_1$ and $U_1U_4V_4$ vanish, while $\overline{U_4^2}=\tfrac12(U_4^2+V_4^2)$ and $\overline{U_4V_1}=\tfrac12(U_4V_1-U_1V_4)$.
Term VI is
$$\beta H_2(L_{Y_{U_4}}F)=\beta^2H_2\Big(V_4^2+K_3^2-\tfrac14 K_3U_4+\tfrac14 H_2V_1+H_2^2+U_1^2\Big).$$
So
$$\beta\,\overline{H_2(L_{Y_{U_4}}F)}=\beta^2H_2\overline{V_4^2}+\beta^2H_2K_3^2+\beta^2H_2^3+\beta^2H_2\overline{U_1^2}=\tfrac12\beta^2H_2(U_4^2+V_4^2)+\beta^2H_2K_3^2+\beta^2H_2^3+\tfrac12\beta^2H_2(U_1^2+V_1^2),\tag{4.7f}$$
since $\overline{K_3U_4}=0=\overline{H_2V_1}$.
Term VII is
$$-\beta(L_{Y_{K_3}}F)V_1=\beta^2\Big(\tfrac14\big[U_4^2V_1-U_1^2V_1-V_1V_4^2+V_1^3\big]+H_2V_1^2-K_3U_4V_1\Big).$$
So
$$-\beta\,\overline{(L_{Y_{K_3}}F)V_1}=\beta^2H_2\overline{V_1^2}-\beta^2K_3\overline{U_4V_1}=\tfrac{\beta^2}{2}H_2(U_1^2+V_1^2)-\tfrac{\beta^2}{2}K_3(U_4V_1-U_1V_4).\tag{4.7g}$$
Term VIII is
$$-\beta K_3(L_{Y_{V_1}}F)=\beta^2\Big(2K_3U_1V_4+2H_2K_3^2-\tfrac14 H_2K_3U_4+\tfrac14 K_3^2V_1\Big).$$
So
$$-\beta\,\overline{K_3(L_{Y_{V_1}}F)}=2\beta^2K_3\overline{U_1V_4}+2\beta^2H_2K_3^2=3\beta^2H_2K_3^2,\tag{4.7h}$$
since $\overline{U_1V_4}=\tfrac12 H_2K_3$. Collecting together the results of all the above term calculations gives
$$\begin{aligned}
\overline{L^2_{Y_F}H_2}&=\beta\,\overline{(L_{Y_{U_4}}F)V_1}+\beta\,\overline{U_4(L_{Y_{V_1}}F)}-\beta\,\overline{(L_{Y_F}H_2)U_4}+\beta\,\overline{H_2(L_{Y_{U_4}}F)}-\beta\,\overline{(L_{Y_{K_3}}F)V_1}-\beta\,\overline{K_3(L_{Y_{V_1}}F)}\\
&=\beta^2\Big(\big[\tfrac18 H_2K_3^2+\tfrac18 H_2(U_1^2+V_1^2)\big]+\big[\tfrac18 H_2(U_4^2+V_4^2)+\tfrac18 H_2K_3^2\big]+\big[\tfrac12 H_2(U_4^2+V_4^2)-\tfrac12 K_3(U_4V_1-U_1V_4)\big]\\
&\qquad+\big[\tfrac12 H_2(U_4^2+V_4^2)+H_2K_3^2+H_2^3+\tfrac12 H_2(U_1^2+V_1^2)\big]+\big[\tfrac12 H_2(U_1^2+V_1^2)-\tfrac12 K_3(U_4V_1-U_1V_4)\big]+3H_2K_3^2\Big)\\
&=\beta^2\Big[\tfrac{21}{4}H_2K_3^2+H_2^3+\tfrac98 H_2(U_1^2+V_1^2)+\tfrac98 H_2(U_4^2+V_4^2)\Big],
\end{aligned}$$
using $U_4V_1-U_1V_4=-K_3H_2$. Thus the second order normal form of the regularized Stark Hamiltonian $H$ on $\Xi^{-1}(0)/S^1$ is
$$H^{(2)}_{\mathrm{nf}}=H_2-\tfrac32\varepsilon\beta H_2K_3-\tfrac12\varepsilon^2\overline{L^2_{Y_F}H_2}=H_2-\tfrac32\varepsilon\beta H_2K_3-\tfrac12\varepsilon^2\beta^2H_2\Big[\tfrac{21}{4}K_3^2+H_2^2+\tfrac98(U_1^2+V_1^2)+\tfrac98(U_4^2+V_4^2)\Big].\tag{4.8}$$
Since $L_{X_{H_2}}H^{(2)}_{\mathrm{nf}}=0$ by construction, the second order normal form $H^{(2)}_{\mathrm{nf}}$ (4.8) is a smooth function on $\big(H_2^{-1}(h)\cap\Xi^{-1}(0)\big)/S^1=T_hS^3$, the tangent $h$-sphere bundle of the unit $3$-sphere $S^3$, given by
$$\widetilde{H}=h-\tfrac12\varepsilon^2\beta^2h^3-\tfrac32\varepsilon\beta hK_3-\tfrac12\varepsilon^2\beta^2h\Big[\tfrac{21}{4}K_3^2+\tfrac98(U_1^2+V_1^2)+\tfrac98(U_4^2+V_4^2)\Big].\tag{5.1}$$
We now show that the Hamiltonian $\widetilde{H}$ (5.1) on $T_hS^3$ can be normalized again. On $(T\mathbb{R}^4,\omega_4)$ the Hamiltonian
$$K_3(q,p)=\tfrac12\big(q_3^2+q_4^2+p_3^2+p_4^2-q_1^2-q_2^2-p_1^2-p_2^2\big)$$
gives rise to the Hamiltonian vector field $X_{K_3}$, whose flow $\varphi^{X_{K_3}}_t(q,p)$ is
$$\begin{aligned}
&\big(q_1\cos t-p_1\sin t,\;q_2\cos t-p_2\sin t,\;q_3\cos t+p_3\sin t,\;q_4\cos t+p_4\sin t,\\
&\ q_1\sin t+p_1\cos t,\;q_2\sin t+p_2\cos t,\;-q_3\sin t+p_3\cos t,\;-q_4\sin t+p_4\cos t\big),
\end{aligned}$$
which is periodic of period 2π.
The vector field $X_{K_3}$ on $T\mathbb{R}^4$ induces the vector field
$$Y_{K_3}=-2L_2\frac{\partial}{\partial K_1}+2L_1\frac{\partial}{\partial K_2}-2K_2\frac{\partial}{\partial L_1}+2K_1\frac{\partial}{\partial L_2}-2U_4\frac{\partial}{\partial U_1}+2U_1\frac{\partial}{\partial U_4}-2V_4\frac{\partial}{\partial V_1}+2V_1\frac{\partial}{\partial V_4}$$
on $\Xi^{-1}(0)/S^1\subseteq\mathbb{R}^{16}$ with coordinates $(K,L,H_2,\Xi;U,V)$, whose flow
$$\begin{aligned}
\varphi^{Y_{K_3}}_s(K,L,H_2,\Xi;U,V)=\big(&K_1\cos 2s-L_2\sin 2s,\;K_2\cos 2s+L_1\sin 2s,\;K_3,\\
&L_1\cos 2s-K_2\sin 2s,\;L_2\cos 2s+K_1\sin 2s,\;L_3,\;H_2,\;\Xi;\\
&U_1\cos 2s-U_4\sin 2s,\;U_2,\;U_3,\;U_1\sin 2s+U_4\cos 2s,\\
&V_1\cos 2s-V_4\sin 2s,\;V_2,\;V_3,\;V_1\sin 2s+V_4\cos 2s\big)
\end{aligned}$$
is periodic of period $\pi$. Since $L_{Y_{K_3}}$ maps the ideal of smooth functions which vanish identically on $\Xi^{-1}(0)/S^1$ into itself, $Y_{K_3}$ is a vector field on $\Xi^{-1}(0)/S^1$. Since $L_{X_{K_3}}H_2=0$, it follows that $Y_{K_3}$ induces a vector field on $T_hS^3$ with periodic flow. So we can normalize again.
To compute the normal form of the Hamiltonian $\widetilde{H}$ (5.1) on $T_hS^3$ we need only calculate the average of the term
$$T=\tfrac{21}{4}K_3^2+\tfrac98(U_1^2+V_1^2)+\tfrac98(U_4^2+V_4^2)$$
over the flow $\varphi^{Y_{K_3}}_s$. Since $L_{Y_{K_3}}K_3=0$, we need only calculate $\overline{U_1^2}$, $\overline{U_4^2}$, $\overline{V_1^2}$, and $\overline{V_4^2}$. Now
$$\overline{U_1^2}=\frac{1}{\pi}\int_0^{\pi}(U_1\cos 2s-U_4\sin 2s)^2\,ds=\frac{1}{\pi}\int_0^{\pi}\big(U_1^2\cos^2 2s-U_1U_4\sin 4s+U_4^2\sin^2 2s\big)\,ds=\tfrac12(U_1^2+U_4^2).$$
Similarly, $\overline{V_1^2}=\tfrac12(V_1^2+V_4^2)$, $\overline{U_4^2}=\tfrac12(U_1^2+U_4^2)$, and $\overline{V_4^2}=\tfrac12(V_1^2+V_4^2)$. Thus
$$\overline{T}=\tfrac{21}{4}K_3^2+\tfrac98\big(U_1^2+V_1^2+U_4^2+V_4^2\big),\tag{5.2}$$
which is no surprise since $L_{Y_{K_3}}T=0$. So the first order normal form of $\widetilde{H}$ (5.1) on $T_hS^3$ is
$$\widetilde{H}^{(1)}_{\mathrm{nf}}=h-\tfrac12\varepsilon^2\beta^2h^3-\tfrac32\varepsilon\beta hK_3-\varepsilon^2\beta^2h\Big[\tfrac{21}{8}K_3^2+\tfrac{9}{16}\big(U_1^2+V_1^2+U_4^2+V_4^2\big)\Big].\tag{5.3}$$
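The averaging step behind (5.2) and (5.3) is again elementary; a SymPy sketch of $\overline{U_1^2}$ over the flow of $Y_{K_3}$:

```python
# Average the pull-back of U1^2 along the flow of Y_K3 over one period.
import sympy as sp

s = sp.symbols('s')
U1, U4 = sp.symbols('U1 U4')

U1s = U1*sp.cos(2*s) - U4*sp.sin(2*s)    # pull-back of U1
avg = sp.integrate(U1s**2, (s, 0, sp.pi)) / sp.pi

assert sp.simplify(avg - sp.Rational(1, 2)*(U1**2 + U4**2)) == 0
```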
The polynomial $U_1^2+V_1^2+U_4^2+V_4^2$ is invariant under the flows $\varphi^{X_{H_2}}_t$ and $\varphi^{X_{\Xi}}_u$, and thus is a polynomial on the orbit space $T_hS^3/S^1=S^2_h\times S^2_h$, defined by
$$K_1^2+K_2^2+K_3^2+L_1^2+L_2^2+L_3^2=h^2,\qquad K_1L_1+K_2L_2+K_3L_3=0.$$
We now find this polynomial. From the explicit description of $\mathbb{R}^8/S^1$ in (3.1) it follows that on $\big(H_2^{-1}(h)\cap\Xi^{-1}(0)\big)/S^1$
U2V1−U1V2=−hK1U3V1−U1V3=−hK2U4V1−U2V4=−hK3. |
So on $S^2_h\times S^2_h$
$$\begin{aligned}
h^2(K_1^2+K_2^2+K_3^2)&=(U_2V_1-U_1V_2)^2+(U_3V_1-U_1V_3)^2+(U_4V_1-U_1V_4)^2\\
&=(U_1^2+U_2^2+U_3^2+U_4^2)V_1^2-U_1^2V_1^2-2(U_1V_1)(U_1V_1+U_2V_2+U_3V_3+U_4V_4)+2U_1^2V_1^2\\
&\quad+(V_1^2+V_2^2+V_3^2+V_4^2)U_1^2-U_1^2V_1^2\\
&=h^2(U_1^2+V_1^2),
\end{aligned}$$
since $\langle U,U\rangle=h^2$, $\langle V,V\rangle=h^2$, and $\langle U,V\rangle=0$. Thus $U_1^2+V_1^2=K_1^2+K_2^2+K_3^2$. Again from the explicit description of $\big(H_2^{-1}(h)\cap\Xi^{-1}(0)\big)/S^1$ we have
$$U_4V_3-U_3V_4=-hL_1,\qquad U_4V_2-U_2V_4=hL_2,\qquad U_4V_1-U_1V_4=-hK_3.$$
So on $S^2_h\times S^2_h$
$$\begin{aligned}
h^2(L_1^2+L_2^2+K_3^2)&=(U_4V_3-U_3V_4)^2+(U_4V_2-U_2V_4)^2+(U_4V_1-U_1V_4)^2\\
&=(V_1^2+V_2^2+V_3^2+V_4^2)U_4^2-U_4^2V_4^2-2(U_4V_4)(U_1V_1+U_2V_2+U_3V_3+U_4V_4)+2U_4^2V_4^2\\
&\quad+(U_1^2+U_2^2+U_3^2+U_4^2)V_4^2-U_4^2V_4^2\\
&=h^2(U_4^2+V_4^2).
\end{aligned}$$
Thus $U_4^2+V_4^2=L_1^2+L_2^2+K_3^2$. Consequently
$$U_1^2+V_1^2+U_4^2+V_4^2=K_1^2+K_2^2+2K_3^2+L_1^2+L_2^2=K_1^2+K_2^2+K_3^2+L_1^2+L_2^2+L_3^2+K_3^2-L_3^2=h^2+K_3^2-L_3^2$$
on $S^2_h\times S^2_h$. Hence on $S^2_h\times S^2_h$
$$\widehat{H}=\widetilde{H}^{(1)}_{\mathrm{nf}}=h-\tfrac{17}{16}\varepsilon^2\beta^2h^3-\tfrac32\varepsilon\beta hK_3-\tfrac{51}{16}\varepsilon^2\beta^2hK_3^2+\tfrac{9}{16}\varepsilon^2\beta^2hL_3^2.\tag{6.1}$$
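The identities $U_1^2+V_1^2=K_1^2+K_2^2+K_3^2$ and $U_4^2+V_4^2=L_1^2+L_2^2+K_3^2$, together with the defining equations of $S^2_h\times S^2_h$, can be spot-checked numerically at a point of $\Xi^{-1}(0)$; a sketch (the sample point is an arbitrary choice):

```python
# Numerical spot-check at q = (1,0,0,1), p = (1,1,1,-1), which lies in Xi^{-1}(0).
import sympy as sp

q1, q2, q3, q4 = 1, 0, 0, 1
p1, p2, p3, p4 = 1, 1, 1, -1

Xi = q1*p2 - q2*p1 + q3*p4 - q4*p3
H2 = sp.Rational(1, 2)*(p1**2+p2**2+p3**2+p4**2+q1**2+q2**2+q3**2+q4**2)
K = [-(q1*q3+q2*q4+p1*p3+p2*p4),
     -(q1*q4-q2*q3+p1*p4-p2*p3),
     sp.Rational(1, 2)*(q3**2+q4**2+p3**2+p4**2-q1**2-q2**2-p1**2-p2**2)]
L = [q4*p1-q3*p2+q2*p3-q1*p4,
     q1*p3+q2*p4-q3*p1-q4*p2,
     q3*p4-q4*p3+q2*p1-q1*p2]
U1 = -(q1*p1+q2*p2+q3*p3+q4*p4)
U4 = sp.Rational(1, 2)*(q1**2+q2**2-q3**2-q4**2+p3**2+p4**2-p1**2-p2**2)
V1 = sp.Rational(1, 2)*(q1**2+q2**2+q3**2+q4**2-p1**2-p2**2-p3**2-p4**2)
V4 = q1*p1+q2*p2-q3*p3-q4*p4

assert Xi == 0
h = H2                                    # the point lies on H2^{-1}(h)
assert U1**2 + V1**2 == K[0]**2 + K[1]**2 + K[2]**2
assert U4**2 + V4**2 == L[0]**2 + L[1]**2 + K[2]**2
# defining equations of the reduced space:
assert sum(x**2 for x in K) + sum(x**2 for x in L) == h**2
assert sum(a*b for a, b in zip(K, L)) == 0
```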
Using the coordinates $(\xi,\eta)=\big(\tfrac12(K+L),\tfrac12(K-L)\big)$ on $\mathbb{R}^3\times\mathbb{R}^3$, the space of smooth functions on the reduced space $S^2_h\times S^2_h$, defined by
$$\xi_1^2+\xi_2^2+\xi_3^2=h^2\quad\text{and}\quad\eta_1^2+\eta_2^2+\eta_3^2=h^2,$$
has a Poisson structure with bracket relations
$$\{\xi_i,\xi_j\}=\sum_{k=1}^{3}\epsilon_{ijk}\xi_k,\qquad\{\eta_i,\eta_j\}=-\sum_{k=1}^{3}\epsilon_{ijk}\eta_k,\qquad\{\xi_i,\eta_j\}=0.$$
Since $\{K_3,L_3\}=0$, it follows that $\{K_3,\widehat{H}\}=0$. Thus the flow $\varphi^{Z_{K_3}}_r$ of the Hamiltonian vector field $Z_{K_3}$ on $\big(S^2_h\times S^2_h,\{\,,\,\}\big)$ generates an $S^1$ symmetry of the Hamiltonian system $\big(\widehat{H},S^2_h\times S^2_h,\{\,,\,\}\big)$. So this system is completely integrable.
We reduce this $S^1$ symmetry as follows. Consider the vector field $Z_{K_3}$ on $\mathbb{R}^3\times\mathbb{R}^3$ corresponding to the Hamiltonian $\tfrac12 K_3=\tfrac12(\xi_3+\eta_3)$. Its integral curves satisfy
$$\begin{aligned}
\dot{\xi}_1&=\{\xi_1,\tfrac12 K_3\}=\tfrac12\{\xi_1,\xi_3\}=-\tfrac12\xi_2, &\qquad \dot{\eta}_1&=\{\eta_1,\tfrac12 K_3\}=\tfrac12\{\eta_1,\eta_3\}=\tfrac12\eta_2,\\
\dot{\xi}_2&=\{\xi_2,\tfrac12 K_3\}=\tfrac12\{\xi_2,\xi_3\}=\tfrac12\xi_1, &\qquad \dot{\eta}_2&=\{\eta_2,\tfrac12 K_3\}=\tfrac12\{\eta_2,\eta_3\}=-\tfrac12\eta_1,\\
\dot{\xi}_3&=\{\xi_3,\tfrac12 K_3\}=0, &\qquad \dot{\eta}_3&=\{\eta_3,\tfrac12 K_3\}=0.
\end{aligned}$$
Thus the flow of $Z_{K_3}$ on $\mathbb{R}^3\times\mathbb{R}^3$ is
$$\varphi^{Z_{K_3}}_t(\xi,\eta)=\Big(\xi_1\cos\tfrac{t}{2}-\xi_2\sin\tfrac{t}{2},\;\xi_1\sin\tfrac{t}{2}+\xi_2\cos\tfrac{t}{2},\;\xi_3,\;\eta_1\cos\tfrac{t}{2}+\eta_2\sin\tfrac{t}{2},\;\eta_2\cos\tfrac{t}{2}-\eta_1\sin\tfrac{t}{2},\;\eta_3\Big),$$
which preserves $S^2_h\times S^2_h$ and is periodic of period $4\pi$.
We now determine the space $(S^2_h\times S^2_h)/S^1$ of orbits of the vector field $Z_{K_3}$, using invariant theory. The algebra of polynomials on $\mathbb{R}^3\times\mathbb{R}^3$ which are invariant under the $S^1$ action given by the flow $\varphi^{Z_{K_3}}_t$ is generated by
$$\begin{aligned}
\sigma_1&=\xi_1^2+\xi_2^2, &\quad \sigma_2&=\eta_1^2+\eta_2^2, &\quad \sigma_3&=\xi_1\eta_2+\xi_2\eta_1,\\
\sigma_4&=\xi_1\eta_1-\xi_2\eta_2, &\quad \sigma_5&=\tfrac12(\xi_3+\eta_3), &\quad \sigma_6&=\tfrac12(\xi_3-\eta_3),
\end{aligned}$$
which are subject to the relation
$$\sigma_3^2+\sigma_4^2=(\xi_1\eta_2+\xi_2\eta_1)^2+(\xi_1\eta_1-\xi_2\eta_2)^2=(\xi_1^2+\xi_2^2)(\eta_1^2+\eta_2^2)=\sigma_1\sigma_2,\qquad \sigma_1\ge 0,\ \sigma_2\ge 0.\tag{7.1a}$$
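With $\sigma_3=\xi_1\eta_2+\xi_2\eta_1$ and $\sigma_4=\xi_1\eta_1-\xi_2\eta_2$ as above, both their invariance under the opposite rotations of the $(\xi_1,\xi_2)$ and $(\eta_1,\eta_2)$ planes and the relation (7.1a) can be verified directly; a SymPy sketch:

```python
# Invariance of sigma3, sigma4 under the Z_K3 flow, and the relation (7.1a).
import sympy as sp

th = sp.symbols('theta')
x1, x2, e1, e2 = sp.symbols('xi1 xi2 eta1 eta2')

# the (xi1, xi2) and (eta1, eta2) pairs rotate in opposite senses (theta = t/2)
x1t = x1*sp.cos(th) - x2*sp.sin(th)
x2t = x1*sp.sin(th) + x2*sp.cos(th)
e1t = e1*sp.cos(th) + e2*sp.sin(th)
e2t = e2*sp.cos(th) - e1*sp.sin(th)

s3 = x1*e2 + x2*e1
s4 = x1*e1 - x2*e2
s3t = x1t*e2t + x2t*e1t
s4t = x1t*e1t - x2t*e2t

assert sp.simplify(s3t - s3) == 0 and sp.simplify(s4t - s4) == 0
assert sp.expand(s3**2 + s4**2 - (x1**2 + x2**2)*(e1**2 + e2**2)) == 0
```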
In terms of invariants the defining equations of $S^2_h\times S^2_h$ become
$$\sigma_1+(\sigma_5+\sigma_6)^2=\xi_1^2+\xi_2^2+\xi_3^2=h^2\tag{7.1b}$$
$$\sigma_2+(\sigma_5-\sigma_6)^2=\eta_1^2+\eta_2^2+\eta_3^2=h^2.\tag{7.1c}$$
Eliminating $\sigma_1$ and $\sigma_2$ from (7.1a) using (7.1b) and (7.1c) gives
$$\sigma_3^2+\sigma_4^2=\big(h^2-(\sigma_5+\sigma_6)^2\big)\big(h^2-(\sigma_5-\sigma_6)^2\big),\qquad|\sigma_5+\sigma_6|\le h,\ |\sigma_5-\sigma_6|\le h,\tag{7.2a}$$
which defines $(S^2_h\times S^2_h)/S^1$ as a semialgebraic variety in $\mathbb{R}^4$ with coordinates $(\sigma_3,\sigma_4,\sigma_5,\sigma_6)$. Thus the reduced space $\big(K_3^{-1}(2k)\cap(S^2_h\times S^2_h)\big)/S^1$ is defined by (7.2a) and
$$\sigma_5=\tfrac12(\xi_3+\eta_3)=\tfrac12 K_3=k.\tag{7.2b}$$
Consequently, $\big(K_3^{-1}(2k)\cap(S^2_h\times S^2_h)\big)/S^1$ is the semialgebraic variety
$$\sigma_3^2+\sigma_4^2=\big(h^2-(k+\sigma_6)^2\big)\big(h^2-(k-\sigma_6)^2\big)=\big((h-k)^2-\sigma_6^2\big)\big((h+k)^2-\sigma_6^2\big),\qquad|\sigma_6|\le h-|k|,\tag{7.3}$$
in $\mathbb{R}^3$ with coordinates $(\sigma_3,\sigma_4,\sigma_6)$. When $0<|k|<h$ the reduced space (7.3) is a smooth $2$-sphere. When $|k|=h$ it is a point. When $k=0$ it is a topological $2$-sphere with conical singular points at $(0,0,\pm h)$. These singular points correspond to the fixed points $h(0,0,\pm 1,0,0,\mp 1)$ of the $S^1$ action on $S^2_h\times S^2_h$ generated by the flow of the vector field $Z_{K_3}$.
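The factorization used in (7.3) is a difference-of-squares computation; a SymPy sketch:

```python
# The two factorizations of the right-hand side of (7.3) agree.
import sympy as sp

h, k, s6 = sp.symbols('h k sigma6')

lhs = (h**2 - (k + s6)**2) * (h**2 - (k - s6)**2)
rhs = ((h - k)**2 - s6**2) * ((h + k)**2 - s6**2)
assert sp.expand(lhs - rhs) == 0
```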
By (6.1) the reduced Hamiltonian on $\big(K_3^{-1}(2k)\cap(S^2_h\times S^2_h)\big)/S^1$ is
$$\widehat{H}_{\mathrm{red}}=\tfrac94\varepsilon^2\beta^2h\,\sigma_6^2,\tag{7.4}$$
using $L_3=\xi_3-\eta_3=2\sigma_6$ and having dropped the constant $h-\tfrac{17}{16}\varepsilon^2\beta^2h^3-3\varepsilon\beta hk-\tfrac{51}{4}\varepsilon^2\beta^2hk^2$.
The author declares that he has not used Artificial Intelligence (AI) tools in the creation of this article.
The author would like to thank the referees for their careful reading and comments on the manuscript. Especially he thanks the one who pointed out errors in his calculations.
The author received no funding for the research in this article.
The author declares there is no conflict of interest.