In this study, we introduce a robust safe semi-supervised learning framework for high-dimensional data classification. The framework contains several key components. First, we introduce a risk regularization term to quantify the uncertainty and the potential risk associated with unlabeled samples in semi-supervised learning. Second, we define a new non-second-order statistical measure in the kernel space, termed Cp-Loss; this loss is symmetric, bounded and non-negative, which effectively limits the influence of noise points and outliers on the model's performance. Third, building on this learning framework, we develop a robust safe semi-supervised extreme learning machine (RS3ELM) and derive its generalization bound via Rademacher complexity. The output weight matrix of RS3ELM is computed by a fixed-point iteration, and we analyze the convergence and computational complexity of the algorithm. Empirical results on several benchmark datasets demonstrate the effectiveness of RS3ELM in comparison with multiple state-of-the-art semi-supervised learning models.
Citation: Jun Ma, Xiaolong Zhu. Robust safe semi-supervised learning framework for high-dimensional data classification[J]. AIMS Mathematics, 2024, 9(9): 25705-25731. doi: 10.3934/math.20241256
In the last several decades, the kinetic theory of polyatomic gases has attracted extensive interest owing to its close connection with a wide range of practical applications, including spacecraft and hypersonic flight, aerodynamics [1], plasma physics [20], thermal sciences [13,23], combustion processes, and chemical reactors. In the context of polyatomic gases, Borgnakke and Larsen proposed a microscopic model [6]. Later on, an entropic kinetic model consistent with [6] was derived in [8]. This model originates from the Boltzmann equation, a cornerstone of kinetic theory, and offers an accurate description of the gas flow.
However, it is usually expensive and cumbersome to solve the Boltzmann equation directly. As an alternative, kinetic theory provides macroscopic models valid for not too large Knudsen numbers. These models are derived as approximations of the Boltzmann equation and offer high computational speed together with explicit equations for the macroscopic variables, which are helpful for understanding and analyzing the flow behavior. Macroscopic models are classically obtained by the Chapman-Enskog method [5] and the method of moments [22,18]. Using the Chapman-Enskog method, Nagnibeda and Kustova [19] studied strong vibrational nonequilibrium in diatomic gases and in reacting mixtures of polyatomic gases, and derived the first-order distribution function and the governing equations. Cai and Li [10] extended the NRxx model to polyatomic gases using the ES-BGK model of [2] and [9]. In [24], an existence result for the ES-BGK model was obtained in the case where the solution lies close to equilibrium.
Simplified Boltzmann models for mixtures of polyatomic gases have also been proposed in [3,12]. The authors of [4] developed a generalized macroscopic 14-field theory for polyatomic gases, based on the methods of extended thermodynamics [18]. For the full non-linear Boltzmann equation, Gamba and Pavić-Čolić [15] established an existence and uniqueness theory in the space-homogeneous setting.
The relation between kinetic theory and spectral theory was initiated by Grad [17], who started the systematic investigation of the spectral properties of the linearized Boltzmann operator for monoatomic gases. In his pioneering work, Grad showed that the linearized collision operator can be decomposed as the sum of a compact operator and a multiplication operator by the collision frequency, which allows the essential spectrum to be located.
In fact, diatomic gases are of particular importance since, in the upper atmosphere of the Earth, the diatomic molecules oxygen (O$_2$) and nitrogen (N$_2$) constitute the major part of the gas.
The plan of the document is the following: In section 2, we give a brief recall on the collision model [8], which describes the microscopic state of diatomic gases. In section 3, we define the linearized operator $\mathcal{L}$ and decompose it as $\mathcal{L} = K - \nu\,\mathrm{Id}$. In section 4, we prove the compactness of $K$, which is the main result of the paper. Finally, we establish coercivity and monotonicity properties of the collision frequency $\nu$.
For the sake of clarity, we present the model in [8] on which our work is mainly based. We start with physical conservation equations and proceed as follows.
Without loss of generality, we first assume that the particle mass equals unity, and we denote as usual by $v, v_*$ (resp. $v', v'_*$) the pre-collisional (resp. post-collisional) velocities, and by $I, I_*$ (resp. $I', I'_*$) the corresponding internal energies. The conservation of momentum and of total energy during a collision read
$$ v + v_* = v' + v'_*, \tag{1} $$
$$ \frac{1}{2}v^2 + \frac{1}{2}v_*^2 + I + I_* = \frac{1}{2}v'^2 + \frac{1}{2}v'^2_* + I' + I'_*. \tag{2} $$
From the above equations, we can deduce the following equation representing the conservation of total energy in the center of mass reference frame:
$$ \frac{1}{4}(v - v_*)^2 + I + I_* = \frac{1}{4}(v' - v'_*)^2 + I' + I'_* = E, $$
with $E$ denoting the total energy of the colliding pair in the center of mass reference frame. Following [8], the post-collisional quantities are parametrized by $R, r \in (0,1)$ through
$$ \frac{1}{4}(v' - v'_*)^2 = R E, \qquad I' + I'_* = (1 - R) E, $$
and
$$ I' = r(1 - R) E, \qquad I'_* = (1 - r)(1 - R) E. $$
Using the above equations, we can express the post-collisional velocities in terms of the other quantities by the following
$$
\begin{aligned}
v' &\equiv v'(v, v_*, I, I_*, \omega, R) = \frac{v + v_*}{2} + \sqrt{R E}\; T_\omega\!\left[ \frac{v - v_*}{|v - v_*|} \right], \\
v'_* &\equiv v'_*(v, v_*, I, I_*, \omega, R) = \frac{v + v_*}{2} - \sqrt{R E}\; T_\omega\!\left[ \frac{v - v_*}{|v - v_*|} \right],
\end{aligned}
$$
where, for $\omega \in S^2$, $T_\omega$ denotes the reflection $T_\omega z = z - 2\,(\omega \cdot z)\,\omega$. In the same way, the pre-collisional quantities are parametrized by $R', r' \in (0,1)$ through
$$ \frac{1}{4}(v - v_*)^2 = R' E, \qquad I + I_* = (1 - R') E, $$
and
$$ I = r'(1 - R') E, \qquad I_* = (1 - r')(1 - R') E. $$
Finally, the post-collisional energies can be given in terms of the pre-collisional energies by the following relation
$$ I' = \frac{r(1 - R)}{r'(1 - R')}\, I, \qquad I'_* = \frac{(1 - r)(1 - R)}{(1 - r')(1 - R')}\, I_*. $$
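As a quick sanity check of this parametrization (our own illustrative sketch, not part of the model description in [8]), the following Python snippet draws random pre-collisional states and Borgnakke-Larsen parameters, builds the post-collisional state with the formulas above (using the reflection $T_\omega z = z - 2(\omega\cdot z)\,\omega$ introduced above), and verifies the conservation laws (1) and (2) numerically.

```python
import numpy as np

rng = np.random.default_rng(0)

def T(omega, z):
    # reflection T_omega z = z - 2 (omega . z) omega, for omega on the unit sphere
    return z - 2.0 * np.dot(omega, z) * omega

for _ in range(1000):
    v, vs = rng.normal(size=3), rng.normal(size=3)        # pre-collisional velocities v, v_*
    I, Is = rng.exponential(), rng.exponential()          # pre-collisional internal energies I, I_*
    R, r = rng.uniform(), rng.uniform()                   # Borgnakke-Larsen parameters R, r
    omega = rng.normal(size=3); omega /= np.linalg.norm(omega)

    E = 0.25 * np.dot(v - vs, v - vs) + I + Is            # total energy in the center-of-mass frame
    n = (v - vs) / np.linalg.norm(v - vs)
    vp  = 0.5 * (v + vs) + np.sqrt(R * E) * T(omega, n)   # post-collisional velocities v', v'_*
    vps = 0.5 * (v + vs) - np.sqrt(R * E) * T(omega, n)
    Ip, Ips = r * (1 - R) * E, (1 - r) * (1 - R) * E      # post-collisional internal energies I', I'_*

    assert np.allclose(v + vs, vp + vps)                  # momentum conservation, Eq. (1)
    lhs = 0.5 * np.dot(v, v) + 0.5 * np.dot(vs, vs) + I + Is
    rhs = 0.5 * np.dot(vp, vp) + 0.5 * np.dot(vps, vps) + Ip + Ips
    assert np.isclose(lhs, rhs)                           # total energy conservation, Eq. (2)

print("conservation laws (1)-(2) hold on all random samples")
```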
The Boltzmann equation for an interacting single polyatomic gas reads
$$ \partial_t f + v \cdot \nabla_x f = Q(f, f), \tag{3} $$
where $f \equiv f(t, x, v, I)$ is the distribution function of the gas, depending on time $t$, position $x \in \mathbb{R}^3$, velocity $v \in \mathbb{R}^3$ and internal energy $I \in \mathbb{R}_+$, and where the collision operator $Q$ is given by
$$ Q(f,f)(v,I) = \int_{\mathbb{R}^3 \times \mathbb{R}_+ \times S^2 \times (0,1)^2} \left( \frac{f' f'_*}{(I' I'_*)^{\alpha}} - \frac{f f_*}{(I I_*)^{\alpha}} \right) \times B \times \big( r(1-r) \big)^{\alpha} (1-R)^{2\alpha} \times I^{\alpha} I_*^{\alpha}\, (1-R)\, R^{1/2}\, dR\, dr\, d\omega\, dI_*\, dv_*, \tag{4} $$
where we use the standard notations $f = f(v,I)$, $f_* = f(v_*, I_*)$, $f' = f(v', I')$ and $f'_* = f(v'_*, I'_*)$. In the diatomic case considered here the weight exponent $\alpha$ vanishes, so that the collision operator takes the simpler form
$$ Q(f,f)(v,I) = \int_{\mathbb{R}^3 \times \mathbb{R}_+ \times S^2 \times (0,1)^2} \big( f' f'_* - f f_* \big) \times B \times (1-R)\, R^{1/2}\, dR\, dr\, d\omega\, dI_*\, dv_*, \tag{5} $$
The function $B \equiv B(v, v_*, I, I_*, r, R, \omega)$ is the collision cross section, which is assumed to satisfy the microreversibility conditions
$$
\begin{aligned}
B(v, v_*, I, I_*, r, R, \omega) &= B(v_*, v, I_*, I, 1-r, R, -\omega), \\
B(v, v_*, I, I_*, r, R, \omega) &= B(v', v'_*, I', I'_*, r', R', \omega).
\end{aligned} \tag{6}
$$
Main assumptions on $B$.
Together with the symmetry assumption (6), we assume the following lower and upper bounds on the collision cross section $B$:
$$ C_1\, \varphi(R)\, \psi(r) \left| \omega \cdot \frac{v - v_*}{|v - v_*|} \right| \left( |v - v_*|^{\gamma} + I^{\frac{\gamma}{2}} + I_*^{\frac{\gamma}{2}} \right) \le B(v, v_*, I, I_*, r, R, \omega), \tag{7} $$
and
$$ B(v, v_*, I, I_*, r, R, \omega) \le C_2\, \varphi_{\tilde{\alpha}}(R)\, \psi_{\tilde{\beta}}(r) \left( |v - v_*|^{\gamma} + I^{\frac{\gamma}{2}} + I_*^{\frac{\gamma}{2}} \right), \tag{8} $$
where $C_1$ and $C_2$ are positive constants and, for any $p > 0$,
$$ \psi_p(r) = \big( r(1-r) \big)^p, \quad \text{and} \quad \varphi_p(R) = (1-R)^p. $$
In addition, the functions $\varphi$ and $\psi$ appearing in (7) are assumed to satisfy
$$ \varphi(R) \le \varphi_{\tilde{\alpha}}(R), \quad \text{and} \quad \psi(r) \le \psi_{\tilde{\beta}}(r), \tag{9} $$
where $\gamma$, $\tilde{\alpha}$ and $\tilde{\beta}$ are fixed positive parameters.
We remark that the above assumptions (7) and (8) are compatible with Maxwell molecules, hard spheres and hard potentials in the monoatomic case.
We state first the H-theorem for diatomic gases, which was initially established for polyatomic gases in [8]. Namely, suppose that the collision cross section $B$ is positive almost everywhere; then, for any distribution function $f > 0$ for which the integrals below make sense, the entropy production is non-positive,
$$ D(f) = \int_{\mathbb{R}^3}\int_{\mathbb{R}_+} Q(f,f)\, \log f \; dI\, dv \le 0, $$
and the following are equivalent
1. The collision operator vanishes, i.e. $Q(f, f) = 0$.
2. The entropy production vanishes, i.e. $D(f) = 0$.
3. There exist $n > 0$, $u \in \mathbb{R}^3$ and $T > 0$ such that
$$ f(v,I) = \frac{n}{(2\pi k T)^{\frac{3}{2}}\, k T}\; e^{-\frac{1}{k T}\left( \frac{1}{2}(v - u)^2 + I \right)}, \tag{10} $$
where $k$ denotes the Boltzmann constant. In other words, the equilibrium states are the Maxwellian distributions
$$ M_{n,u,T}(v,I) = \frac{n}{(2\pi k T)^{\frac{3}{2}}\, k T}\; e^{-\frac{1}{k T}\left( \frac{1}{2}(v - u)^2 + I \right)}, \tag{11} $$
where the number density $n$, the mean velocity $u$ and the temperature $T$ of the gas are given by
$$ n = \int_{\mathbb{R}^3}\int_{\mathbb{R}_+} f \, dI\, dv, \qquad n u = \int_{\mathbb{R}^3}\int_{\mathbb{R}_+} v\, f \, dI\, dv, \qquad \frac{5}{2}\, n T = \int_{\mathbb{R}^3}\int_{\mathbb{R}_+} \left( \frac{(v - u)^2}{2} + I \right) f \, dI\, dv. $$
Without loss of generality, we will consider in the sequel the normalized Maxwellian $M := M_{1,0,1}$ (unit density and temperature, zero mean velocity, $k = 1$), that is,
$$ M(v,I) = M_{1,0,1}(v,I) = \frac{1}{(2\pi)^{\frac{3}{2}}}\, e^{-\frac{1}{2}v^2 - I}. $$
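As a quick check (not spelled out in the original), $M$ is indeed a probability density on $\mathbb{R}^3 \times \mathbb{R}_+$, consistent with the choice $n = 1$:
$$ \int_{\mathbb{R}^3}\int_{\mathbb{R}_+} M(v,I)\, dI\, dv = \frac{1}{(2\pi)^{\frac{3}{2}}} \left( \int_{\mathbb{R}^3} e^{-\frac{1}{2}v^2}\, dv \right) \left( \int_0^{\infty} e^{-I}\, dI \right) = \frac{(2\pi)^{\frac{3}{2}} \cdot 1}{(2\pi)^{\frac{3}{2}}} = 1, $$
and the moment definitions above give $u = 0$ and $\frac{5}{2}T = \frac{3}{2} + 1$, i.e. $T = 1$.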
We look for a solution $f$ of (3) in the form of a perturbation of the normalized Maxwellian $M$, writing
$$ f(v,I) = M(v,I) + M^{\frac{1}{2}}(v,I)\, g(v,I). \tag{12} $$
The linearization of the Boltzmann operator (5) around $M$ obtained from (12) is the operator $\mathcal{L}$ defined by
$$ \mathcal{L}g = M^{-\frac{1}{2}} \left[ Q(M, M^{\frac{1}{2}} g) + Q(M^{\frac{1}{2}} g, M) \right]. $$
In particular,
$$ \mathcal{L}g = M^{-\frac{1}{2}} \int_{\Delta} \left[ M' M'^{\frac{1}{2}}_* g'_* - M M^{\frac{1}{2}}_* g_* + M'^{\frac{1}{2}} M'_* g' - M^{\frac{1}{2}} M_* g \right] B\, (1-R)\, R^{1/2}\, dr\, dR\, d\omega\, dI_*\, dv_*. \tag{13} $$
Thanks to the conservation of total energy (2), which gives $M M_* = M' M'_*$, we have
$$
\begin{aligned}
\mathcal{L}(g) = &- \int_{\Delta} B\, M^{\frac{1}{2}} M^{\frac{1}{2}}_* g_*\, (1-R)\, R^{1/2}\, dr\, dR\, d\omega\, dI_*\, dv_* - \int_{\Delta} B\, M_*\, g\, (1-R)\, R^{1/2}\, dr\, dR\, d\omega\, dI_*\, dv_* \\
&+ \int_{\Delta} B\, M^{\frac{1}{2}}_* M'^{\frac{1}{2}} g'_*\, (1-R)\, R^{1/2}\, dr\, dR\, d\omega\, dI_*\, dv_* + \int_{\Delta} B\, M^{\frac{1}{2}}_* M'^{\frac{1}{2}}_* g'\, (1-R)\, R^{1/2}\, dr\, dR\, d\omega\, dI_*\, dv_*.
\end{aligned}
$$
Here, $\Delta := \mathbb{R}^3 \times \mathbb{R}_+ \times S^2 \times (0,1)^2$ denotes the domain of integration. The linearized operator thus splits as
$$ \mathcal{L} = K - \nu\, \mathrm{Id}, $$
where
$$
\begin{aligned}
K g = &\int_{\Delta} B\, M^{\frac{1}{2}}_* M'^{\frac{1}{2}} g'_*\, (1-R)\, R^{1/2}\, dr\, dR\, d\omega\, dI_*\, dv_* + \int_{\Delta} B\, M^{\frac{1}{2}}_* M'^{\frac{1}{2}}_* g'\, (1-R)\, R^{1/2}\, dr\, dR\, d\omega\, dI_*\, dv_* \\
&- \int_{\Delta} B\, M^{\frac{1}{2}} M^{\frac{1}{2}}_* g_*\, (1-R)\, R^{1/2}\, dr\, dR\, d\omega\, dI_*\, dv_*,
\end{aligned} \tag{14}
$$
and
$$ \nu(v,I) = \int_{\Delta} B\, M_*\, (1-R)\, R^{1/2}\, dr\, dR\, d\omega\, dI_*\, dv_*, \tag{15} $$
which represents the collision frequency. We also write $K = K_2 + K_3 - K_1$, with
$$ K_1 g = \int_{\Delta} B\, M^{\frac{1}{2}} M^{\frac{1}{2}}_* g_*\, (1-R)\, R^{1/2}\, dr\, dR\, d\omega\, dI_*\, dv_*, \tag{16} $$
$$ K_2 g = \int_{\Delta} B\, M^{\frac{1}{2}}_* M'^{\frac{1}{2}} g'_*\, (1-R)\, R^{1/2}\, dr\, dR\, d\omega\, dI_*\, dv_*, \tag{17} $$
and
$$ K_3 g = \int_{\Delta} B\, M^{\frac{1}{2}}_* M'^{\frac{1}{2}}_* g'\, (1-R)\, R^{1/2}\, dr\, dR\, d\omega\, dI_*\, dv_*. \tag{18} $$
The linearized operator $\mathcal{L}$ acts on the space $L^2(\mathbb{R}^3 \times \mathbb{R}_+)$, and its kernel is given by the collision invariants:
$$ \ker \mathcal{L} = M^{1/2}\, \mathrm{span}\left\{ 1,\; v_i,\; \frac{1}{2}v^2 + I \right\}_{i=1,\cdots,3}. $$
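As a brief illustration of this description of the kernel (a standard verification, not detailed at this point of the text), take $g = M^{1/2}\psi$ with $\psi$ a collision invariant, i.e. $\psi \in \mathrm{span}\{1, v_i, \frac{1}{2}v^2 + I\}$, so that $\psi + \psi_* = \psi' + \psi'_*$ by (1) and (2). Substituting into (13) and using $M M_* = M' M'_*$, the integrand vanishes identically:
$$ M' M'^{\frac{1}{2}}_* g'_* - M M^{\frac{1}{2}}_* g_* + M'^{\frac{1}{2}} M'_* g' - M^{\frac{1}{2}} M_* g = M' M'_* \big( \psi'_* + \psi' \big) - M M_* \big( \psi_* + \psi \big) = M M_* \big( \psi' + \psi'_* - \psi - \psi_* \big) = 0, $$
so that indeed $M^{1/2}\psi \in \ker\mathcal{L}$.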
Since the domain of the multiplication operator $\nu\,\mathrm{Id}$ is
$$ \mathrm{Dom}(\nu\, \mathrm{Id}) = \left\{ g \in L^2(\mathbb{R}^3 \times \mathbb{R}_+) : \nu g \in L^2(\mathbb{R}^3 \times \mathbb{R}_+) \right\}, $$
then we consider $\mathcal{L}$ as an unbounded operator on $L^2(\mathbb{R}^3 \times \mathbb{R}_+)$ with $\mathrm{Dom}(\mathcal{L}) = \mathrm{Dom}(\nu\, \mathrm{Id})$.
We now give the main result on the linearized Boltzmann operator, based on the assumptions (7) and (8) on the collision cross section. In particular, using (7) we prove that the multiplication operator by $\nu$ is coercive, while the upper bound (8) is used to establish the compactness of $K$.
We state the following theorem, which is the main result of the paper.
Theorem 4.1. The operator $K$ defined in (14) is a compact operator on $L^2(\mathbb{R}^3 \times \mathbb{R}_+)$.
We carry out the proof of the coercivity of $\nu$ in the final section, and devote the remainder of this section to the proof of Theorem 4.1.
Proof. Throughout the proof, we prove the compactness of each of the operators $K_1$, $K_2$ and $K_3$ separately; the compactness of $K = K_2 + K_3 - K_1$ then follows.
Compactness of $K_1$. We first write $K_1$ in kernel form, defining
$$ k_1(v, I, v_*, I_*) = \frac{1}{(2\pi)^{\frac{3}{2}}} \int_{S^2 \times (0,1)^2} B\, e^{-\frac{1}{4}v_*^2 - \frac{1}{4}v^2 - \frac{1}{2}I_* - \frac{1}{2}I}\, (1-R)\, R^{1/2}\, dr\, dR\, d\omega, $$
and therefore
$$ K_1 g(v,I) = \int_{\mathbb{R}^3 \times \mathbb{R}_+} g(v_*, I_*)\, k_1(v, I, v_*, I_*)\, dI_*\, dv_* \qquad \forall (v,I) \in \mathbb{R}^3 \times \mathbb{R}_+. $$
If $k_1$ belongs to $L^2\big( (\mathbb{R}^3 \times \mathbb{R}_+)^2 \big)$, then $K_1$ is a Hilbert-Schmidt operator and is therefore compact. This is the content of the following lemma.
Lemma 4.2. With the assumption (8) on the collision cross section, the kernel $k_1$ belongs to $L^2\big( (\mathbb{R}^3 \times \mathbb{R}_+)^2 \big)$.
Proof. Applying Cauchy-Schwarz we get
$$
\begin{aligned}
\|k_1\|^2_{L^2} &\le c \int_{\mathbb{R}^3}\int_{\mathbb{R}_+}\int_{\mathbb{R}^3}\int_{\mathbb{R}_+} \left( I^{\gamma} + I_*^{\gamma} + |v - v_*|^{2\gamma} \right) e^{-\frac{1}{2}v_*^2 - \frac{1}{2}v^2 - I_* - I}\, dI\, dv\, dI_*\, dv_* \\
&\le c \int_{\mathbb{R}^3} e^{-\frac{1}{2}v_*^2} \left[ \int_{|v - v_*| \le 1} e^{-\frac{1}{2}v^2}\, dv + \int_{|v - v_*| \ge 1} |v - v_*|^{\lceil 2\gamma \rceil}\, e^{-\frac{1}{2}v^2}\, dv \right] dv_* \\
&\le c \int_{\mathbb{R}^3} e^{-\frac{1}{2}v_*^2} \left[ \int_{|v - v_*| \ge 1} \sum_{k=0}^{\lceil 2\gamma \rceil} |v|^k\, |v_*|^{\lceil 2\gamma \rceil - k}\, e^{-\frac{1}{2}v^2}\, dv \right] dv_* \\
&\le c \sum_{k=0}^{\lceil 2\gamma \rceil} \int_{\mathbb{R}^3} |v_*|^{\lceil 2\gamma \rceil - k}\, e^{-\frac{1}{2}v_*^2} \left[ \int_{\mathbb{R}^3} |v|^k\, e^{-\frac{1}{2}v^2}\, dv \right] dv_* < \infty,
\end{aligned}
$$
where $c$ is a positive constant that may change from line to line and $\lceil \cdot \rceil$ denotes the ceiling function.
This implies that $k_1 \in L^2\big( (\mathbb{R}^3 \times \mathbb{R}_+)^2 \big)$, so that $K_1$ is a Hilbert-Schmidt operator and hence compact.
Compactness of $K_2$. We start with the following change-of-variables lemma, which will be used to express $K_2$ in kernel form.
Lemma 4.3. For fixed $v, v_* \in \mathbb{R}^3$ with $v \ne v_*$, let $\sigma \in S^2$ be defined by
$$ \sigma = T_\omega\!\left( \frac{v - v_*}{|v - v_*|} \right) = \frac{v - v_*}{|v - v_*|} - 2 \left( \frac{v - v_*}{|v - v_*|} \cdot \omega \right) \omega, \tag{19} $$
then the Jacobian of the change of variables $\omega \mapsto \sigma$ on the sphere satisfies
$$ d\omega = \frac{d\sigma}{2 \left| \sigma - \frac{v - v_*}{|v - v_*|} \right|}. $$
Proof. It is enough to compute the surface Jacobian of the map $\omega \mapsto \sigma$ for the fixed unit vector $\frac{v - v_*}{|v - v_*|}$. The differential of this map is
$$ d\sigma_\omega : \mathbb{R}^3 \longmapsto \mathbb{R}^3, \qquad \vec{\omega} \longmapsto \vec{\sigma} = -2\left\langle \frac{v - v_*}{|v - v_*|}, \vec{\omega} \right\rangle \omega - 2\left\langle \frac{v - v_*}{|v - v_*|}, \omega \right\rangle \vec{\omega}. \tag{20} $$
Let $(\vec{\omega}_1, \vec{\omega}_2)$ be an orthonormal basis of the tangent plane to $S^2$ at $\omega$, with $\vec{\omega}_1$ chosen in the plane spanned by $\omega$ and $\frac{v - v_*}{|v - v_*|}$, and set $\vec{\sigma}_i := d\sigma_\omega(\vec{\omega}_i)$, $i = 1, 2$. The squared surface Jacobian is then the Gram determinant
$$ \mathrm{Gram} = |\vec{\sigma}_1|^2\, |\vec{\sigma}_2|^2 - \langle \vec{\sigma}_1, \vec{\sigma}_2 \rangle^2, $$
where, using (20) and the choice of the basis,
$$
\begin{aligned}
|\vec{\sigma}_1|^2 &= 4\left( \left\langle \tfrac{v - v_*}{|v - v_*|}, \vec{\omega}_1 \right\rangle^2 + \left\langle \tfrac{v - v_*}{|v - v_*|}, \omega \right\rangle^2 \right) = 4\left| \tfrac{v - v_*}{|v - v_*|} \right|^2 = 4, \\
|\vec{\sigma}_2|^2 &= 4\left( \left\langle \tfrac{v - v_*}{|v - v_*|}, \vec{\omega}_2 \right\rangle^2 + \left\langle \tfrac{v - v_*}{|v - v_*|}, \omega \right\rangle^2 \right) = 4\left\langle \tfrac{v - v_*}{|v - v_*|}, \omega \right\rangle^2,
\end{aligned}
$$
and
$$ \langle \vec{\sigma}_1, \vec{\sigma}_2 \rangle = 0. $$
As a result,
$$ \mathrm{Gram} = 16 \left\langle \frac{v - v_*}{|v - v_*|}, \omega \right\rangle^2 = 4 \left| \sigma - \frac{v - v_*}{|v - v_*|} \right|^2. $$
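This computation can be checked numerically; the following sketch (ours, not part of the original proof) builds the tangent basis described above at a random point and compares $\sqrt{\mathrm{Gram}}$ with $2\big|\sigma - \frac{v - v_*}{|v - v_*|}\big|$.

```python
import numpy as np

rng = np.random.default_rng(3)
n = rng.normal(size=3); n /= np.linalg.norm(n)             # unit vector (v - v_*)/|v - v_*|
omega = rng.normal(size=3); omega /= np.linalg.norm(omega) # omega on the unit sphere

# orthonormal tangent basis at omega: w1 in the plane spanned by {n, omega}, w2 orthogonal to it
w1 = n - np.dot(n, omega) * omega; w1 /= np.linalg.norm(w1)
w2 = np.cross(omega, w1)

def dsigma(w):
    # differential of omega -> T_omega(n), cf. (20)
    return -2.0 * np.dot(n, w) * omega - 2.0 * np.dot(n, omega) * w

s1, s2 = dsigma(w1), dsigma(w2)
gram = np.dot(s1, s1) * np.dot(s2, s2) - np.dot(s1, s2) ** 2
sigma = n - 2.0 * np.dot(n, omega) * omega                 # sigma = T_omega(n), cf. (19)
print(np.sqrt(gram), 2.0 * np.linalg.norm(sigma - n))      # the two numbers agree
```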
Taking the square root gives the claimed formula for $d\omega$, which proves Lemma 4.3. Using this change of variables, we thus write $K_2$ as
$$
\begin{aligned}
K_2 g(v,I) = \int_{\Delta} & e^{-\frac{I_*}{2} - \frac{1}{2} r(1-R)\left( \frac{(v - v_*)^2}{4} + I + I_* \right) - \frac{1}{4}v_*^2 - \frac{1}{4}\left( \frac{v + v_*}{2} + \sqrt{R\left( \frac{1}{4}(v - v_*)^2 + I + I_* \right)}\, \sigma \right)^2} \\
&\times g\!\left( \frac{v + v_*}{2} - \sqrt{R\left( \tfrac{1}{4}(v - v_*)^2 + I + I_* \right)}\, \sigma,\; (1-R)(1-r)\left[ \tfrac{1}{4}(v - v_*)^2 + I + I_* \right] \right) \\
&\times \frac{1}{(2\pi)^{\frac{3}{2}}}\, (1-R)\, R^{\frac{1}{2}}\, B\, \left| \sigma - \frac{v - v_*}{|v - v_*|} \right|^{-1} dr\, dR\, d\sigma\, dI_*\, dv_*. \tag{21}
\end{aligned}
$$
We seek first to write $K_2$ in kernel form. To this end, consider the map
$$
\begin{aligned}
h : \mathbb{R}^3 \times \mathbb{R}_+ &\longmapsto h(\mathbb{R}^3 \times \mathbb{R}_+) \subset \mathbb{R}^3 \times \mathbb{R}_+ \\
(v_*, I_*) &\longmapsto (x, y) = \left( \frac{v + v_*}{2} - \sqrt{R\left( \tfrac{1}{4}(v - v_*)^2 + I + I_* \right)}\, \sigma,\; (1-R)(1-r)\left[ \tfrac{1}{4}(v - v_*)^2 + I + I_* \right] \right),
\end{aligned}
$$
for fixed $v$, $I$, $r$, $R$ and $\sigma$. The inverse transformation is given by
$$ v_* = 2x + 2\sqrt{R a y}\, \sigma - v, \qquad I_* = a y - I - \left( x - v + \sqrt{R a y}\, \sigma \right)^2, $$
and
$$ v' = x + 2\sqrt{R a y}\, \sigma, \qquad I' = \frac{r}{1-r}\, y, $$
where $a := \dfrac{1}{(1-r)(1-R)}$. The Jacobian of the transformation is
$$ J = \left| \frac{\partial(v_*, I_*)}{\partial(x, y)} \right| = \frac{8}{(1-r)(1-R)}, $$
and the positivity of $I_*$ restricts the image of $h$ to the set
$$ H^{v,I}_{R,r,\sigma} = h(\mathbb{R}^3 \times \mathbb{R}_+) = \left\{ (x, y) \in \mathbb{R}^3 \times \mathbb{R}_+ : a y - I - \left( x - v + \sqrt{R a y}\, \sigma \right)^2 > 0 \right\}. \tag{22} $$
In fact,
$$ H^{v,I}_{R,r,\sigma} = \left\{ (x, y) \in \mathbb{R}^3 \times \mathbb{R}_+ : x \in B_{v - \sqrt{R a y}\, \sigma}\!\left( \sqrt{a y - I} \right) \ \text{and} \ y \in \big( (1-r)(1-R)\, I, +\infty \big) \right\}, $$
where $B_c(\rho)$ denotes the open ball of radius $\rho$ centered at $c$.
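The value of the Jacobian $J$ can also be verified numerically; the following sketch (our own check, not part of the original proof) differentiates the map $h$ by central finite differences at a random point and compares the determinant of the forward Jacobian with $(1-r)(1-R)/8$, i.e. $J = 8/\big((1-r)(1-R)\big)$.

```python
import numpy as np

rng = np.random.default_rng(1)
v = rng.normal(size=3)                          # fixed velocity v
I = rng.exponential()                           # fixed internal energy I
R, r = rng.uniform(), rng.uniform()             # Borgnakke-Larsen parameters
sigma = rng.normal(size=3); sigma /= np.linalg.norm(sigma)

def h(z):
    """Map (v_*, I_*) -> (x, y) used to put K_2 in kernel form."""
    vs, Is = z[:3], z[3]
    E = 0.25 * np.dot(v - vs, v - vs) + I + Is
    x = 0.5 * (v + vs) - np.sqrt(R * E) * sigma
    y = (1.0 - R) * (1.0 - r) * E
    return np.concatenate([x, [y]])

z0 = np.concatenate([rng.normal(size=3), [rng.exponential() + 1.0]])
eps = 1e-6
Jac = np.empty((4, 4))
for j in range(4):                              # central finite differences
    e = np.zeros(4); e[j] = eps
    Jac[:, j] = (h(z0 + e) - h(z0 - e)) / (2.0 * eps)

det_forward = abs(np.linalg.det(Jac))
print(det_forward, (1.0 - r) * (1.0 - R) / 8.0)             # forward Jacobian ~ (1-r)(1-R)/8
print(1.0 / det_forward, 8.0 / ((1.0 - r) * (1.0 - R)))     # hence J = 8 / ((1-r)(1-R))
```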
Therefore, equation (21) becomes
$$
\begin{aligned}
K_2 g = \frac{1}{(2\pi)^{\frac{3}{2}}} \int_{(0,1)^2 \times S^2} \int_{H^{v,I}_{R,r,\sigma}} & (1-R)\, R^{\frac{1}{2}}\, J\, B\, \left| \sigma - \frac{v - x - \sqrt{R a y}\, \sigma}{|v - x - \sqrt{R a y}\, \sigma|} \right|^{-1} g(x, y) \\
&\times e^{-\frac{a y - I - (x - v + \sqrt{R a y}\, \sigma)^2}{2} - \frac{r}{2(1-r)} y - \frac{1}{4}\left( 2x + 2\sqrt{R a y}\, \sigma - v \right)^2 - \frac{1}{4}\left( x + 2\sqrt{R a y}\, \sigma \right)^2}\, dy\, dx\, d\sigma\, dr\, dR. \tag{23}
\end{aligned}
$$
We now point out the kernel form of $K_2$. For this purpose, define the set
$$ H^{v,I} := \left\{ (y, x, \sigma, r, R) \in \mathbb{R}_+ \times \mathbb{R}^3 \times S^2 \times (0,1)^2 : x \in B_{v - \sqrt{R a y}\, \sigma}\!\left( \sqrt{a y - I} \right), \ \text{and}\ y \in \big( (1-r)(1-R)\, I, +\infty \big) \right\}. $$
We remark that $H^{v,I}$ can be described by slicing in either group of variables:
$$ H^{v,I} = H^{v,I}_{x,y} \times \mathbb{R}^3 \times \mathbb{R}_+ \quad \text{which is equivalent to} \quad H^{v,I} = (0,1) \times (0,1) \times S^2 \times H^{v,I}_{R,r,\sigma}. $$
In other words,
$$ H^{v,I}_{x,y} = \left\{ (r, R, \sigma) \in (0,1) \times (0,1) \times S^2 : (y, x, \sigma, r, R) \in H^{v,I} \right\}. \tag{24} $$
Then, by Fubini's theorem, it holds that
$$
\begin{aligned}
K_2 g(v,I) &= \frac{1}{(2\pi)^{\frac{3}{2}}} \int_{H^{v,I}} (1-R)\, R^{\frac{1}{2}}\, J\, B\, \left| \sigma - \frac{v - x - \sqrt{R a y}\, \sigma}{|v - x - \sqrt{R a y}\, \sigma|} \right|^{-1} g(x, y)\, e^{-\frac{a y - I - (x - v + \sqrt{R a y}\, \sigma)^2}{2} - \frac{r}{2(1-r)} y - \frac{1}{4}\left( 2x + 2\sqrt{R a y}\, \sigma - v \right)^2 - \frac{1}{4}\left( x + 2\sqrt{R a y}\, \sigma \right)^2}\, dy\, dx\, d\sigma\, dr\, dR \\
&= \frac{1}{(2\pi)^{\frac{3}{2}}} \int_{\mathbb{R}^3 \times \mathbb{R}_+} \int_{H^{v,I}_{x,y}} (1-R)\, R^{\frac{1}{2}}\, J\, B\, \left| \sigma - \frac{v - x - \sqrt{R a y}\, \sigma}{|v - x - \sqrt{R a y}\, \sigma|} \right|^{-1} g(x, y)\, e^{-\frac{a y - I - (x - v + \sqrt{R a y}\, \sigma)^2}{2} - \frac{r}{2(1-r)} y - \frac{1}{4}\left( 2x + 2\sqrt{R a y}\, \sigma - v \right)^2 - \frac{1}{4}\left( x + 2\sqrt{R a y}\, \sigma \right)^2}\, d\sigma\, dr\, dR\, dy\, dx. \tag{25}
\end{aligned}
$$
The kernel of $K_2$ can therefore be read from (25). We have the following lemma.
Lemma 4.4. With the assumption (8) on the collision cross section, the kernel
$$
\begin{aligned}
k_2(v, I, x, y) = \frac{1}{(2\pi)^{\frac{3}{2}}} \int_{H^{v,I}_{x,y}} & (1-R)\, R^{\frac{1}{2}}\, J\, B\, \left| \sigma - \frac{v - x - \sqrt{R a y}\, \sigma}{|v - x - \sqrt{R a y}\, \sigma|} \right|^{-1} \\
&\times e^{-\frac{a y - I - (x - v + \sqrt{R a y}\, \sigma)^2}{2} - \frac{r}{2(1-r)} y - \frac{1}{4}\left( 2x + 2\sqrt{R a y}\, \sigma - v \right)^2 - \frac{1}{4}\left( x + 2\sqrt{R a y}\, \sigma \right)^2}\, d\sigma\, dr\, dR
\end{aligned}
$$
is in $L^2\big( (\mathbb{R}^3 \times \mathbb{R}_+)^2 \big)$.
Proof. Rewriting the integral in terms of the original angular variable $\omega$ and applying the Cauchy-Schwarz inequality, we get
$$
\begin{aligned}
\|k_2\|^2_{L^2} \le c \int_{\mathbb{R}^3}\int_{\mathbb{R}_+}\int_{\mathbb{R}^3}\int_{\mathbb{R}_+}\int_{(0,1)^2 \times S^2} & (1-R)^2 R\, J^2 B^2 \times e^{-\left[ a y - I - \left( x - v + \sqrt{R a y}\, T_\omega\left( \frac{v - v_*}{|v - v_*|} \right) \right)^2 \right] - \frac{r}{1-r} y - \frac{1}{2}\left( 2x + 2\sqrt{R a y}\, T_\omega\left( \frac{v - v_*}{|v - v_*|} \right) - v \right)^2} \\
&\times e^{-\frac{1}{2}\left( x + 2\sqrt{R a y}\, T_\omega\left( \frac{v - v_*}{|v - v_*|} \right) \right)^2}\, d\omega\, dr\, dR\, dy\, dx\, dI\, dv.
\end{aligned}
$$
Writing back in the original variables $(v_*, I_*)$, this gives
$$ \|k_2\|^2_{L^2} \le c \int_{\mathbb{R}^3}\int_{\mathbb{R}_+}\int_{\mathbb{R}^3}\int_{\mathbb{R}_+}\int_{(0,1)^2 \times S^2} e^{-I_* - \frac{1}{2}v_*^2 - r(1-R)\left( \frac{(v - v_*)^2}{4} + I \right)}\, (1-R)^2 R\, J\, B^2(v, v_*, I, I_*, r, R, \omega)\, d\omega\, dr\, dR\, dI_*\, dv_*\, dI\, dv. $$
Assumption (8) on the collision cross section then yields
$$
\begin{aligned}
\|k_2\|^2_{L^2} &\le c \int_{(0,1)^2}\int_{\mathbb{R}^3}\int_{\mathbb{R}_+}\int_{\mathbb{R}^3}\int_{\mathbb{R}_+} (1-R)^2 R\, J \left( |v - v_*|^{2\gamma} + I^{\gamma} + I_*^{\gamma} \right) \big( r(1-r) \big)^{2\tilde{\beta}} (1-R)^{2\tilde{\alpha}} \\
&\qquad\qquad \times e^{-I_* - \frac{1}{2}v_*^2 - r(1-R)\left( \frac{(v - v_*)^2}{4} + I \right)}\, dI\, dv\, dI_*\, dv_*\, dr\, dR \\
&\le c \int_{(0,1)^2} r^{2\tilde{\beta} - \frac{5}{2} - \gamma}\, (1-r)^{2\tilde{\beta} - 1}\, R\, (1-R)^{2\tilde{\alpha} - \frac{3}{2} - \gamma}\, dr\, dR < \infty,
\end{aligned}
$$
with the second inequality following from Remark 1 below, and the last integral being finite under the assumptions on $\tilde{\alpha}$, $\tilde{\beta}$ and $\gamma$.
Remark 1. For any nonnegative exponents $a$, $b$ and $c$, one has
$$
\begin{aligned}
&\int_{\mathbb{R}^3}\int_{\mathbb{R}_+}\int_{\mathbb{R}^3}\int_{\mathbb{R}_+} I^a\, I_*^b\, |v - v_*|^c\, e^{-I_* - \frac{1}{2}v_*^2 - r(1-R)\frac{(v - v_*)^2}{4} - r(1-R) I}\, dI\, dv\, dI_*\, dv_* \\
&\qquad \le C \left( \int_{\mathbb{R}_+} I^a\, e^{-r(1-R) I}\, dI \right) \left( \int_{\mathbb{R}^3} \left[ \int_{\mathbb{R}^3} |v - v_*|^c\, e^{-r(1-R)\frac{(v - v_*)^2}{4}}\, dv \right] e^{-\frac{1}{2}v_*^2}\, dv_* \right) \\
&\qquad \le C\, [r(1-R)]^{-a-1}\, [r(1-R)]^{-\frac{c+3}{2}},
\end{aligned}
$$
for some constant $C > 0$.
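For the reader's convenience (a standard Gamma-function computation, not detailed in the original), the two factors can be evaluated explicitly: with $\lambda = r(1-R)$,
$$ \int_{\mathbb{R}_+} I^a\, e^{-\lambda I}\, dI = \frac{\Gamma(a+1)}{\lambda^{a+1}}, \qquad \int_{\mathbb{R}^3} |w|^c\, e^{-\lambda \frac{|w|^2}{4}}\, dw = 4\pi \int_0^{\infty} \rho^{c+2}\, e^{-\lambda \frac{\rho^2}{4}}\, d\rho = 2^{c+4}\pi\, \Gamma\!\left( \frac{c+3}{2} \right) \lambda^{-\frac{c+3}{2}}, $$
which gives the powers $[r(1-R)]^{-a-1}$ and $[r(1-R)]^{-\frac{c+3}{2}}$ above; the factor $I_*^b$ is absorbed into the constant through $\int_{\mathbb{R}_+} I_*^b\, e^{-I_*}\, dI_* = \Gamma(b+1)$.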
The lemma is thus proved, which implies that $K_2$ is a Hilbert-Schmidt operator and hence compact.
Compactness of $K_3$. The operator $K_3$, which reads
$$
\begin{aligned}
K_3 g(v,I) = \int_{\Delta} & e^{-\frac{I_*}{2} - \frac{1}{2}(1-r)(1-R)\left( \frac{(v - v_*)^2}{4} + I + I_* \right)}\, e^{-\frac{1}{4}v_*^2 - \frac{1}{4}\left( \frac{v + v_*}{2} - \sqrt{R\left( \frac{1}{4}(v - v_*)^2 + I + I_* \right)}\, \sigma \right)^2} \\
&\times g\!\left( \frac{v + v_*}{2} + \sqrt{R\left( \tfrac{1}{4}(v - v_*)^2 + I + I_* \right)}\, \sigma,\; r(1-R)\left[ \tfrac{1}{4}(v - v_*)^2 + I + I_* \right] \right) \\
&\times \frac{1}{(2\pi)^{\frac{3}{2}}}\, R^{\frac{1}{2}} (1-R)\, B\, \left| \sigma - \frac{v - v_*}{|v - v_*|} \right|^{-1} dr\, dR\, d\sigma\, dI_*\, dv_*,
\end{aligned}
$$
inherits the same form as $K_2$. In this case, the Jacobian of the map
$$
\begin{aligned}
\tilde{h} : \mathbb{R}^3 \times \mathbb{R}_+ &\longmapsto \mathbb{R}^3 \times \mathbb{R}_+ \\
(v_*, I_*) &\longmapsto (x, y) = \left( \frac{v + v_*}{2} + \sqrt{R\left( \tfrac{1}{4}(v - v_*)^2 + I + I_* \right)}\, \sigma,\; r(1-R)\left[ \tfrac{1}{4}(v - v_*)^2 + I + I_* \right] \right),
\end{aligned}
$$
is calculated to be
$$ \tilde{J} = \frac{8}{r(1-R)}. $$
The final requirement for the kernel of $K_3$ to be in $L^2\big( (\mathbb{R}^3 \times \mathbb{R}_+)^2 \big)$ is the finiteness of the integral
$$ \int_{(0,1)^2} (1-r)^{2\tilde{\beta} - \frac{5}{2} - \gamma}\, r^{2\tilde{\beta} - 1}\, R\, (1-R)^{2\tilde{\alpha} - \frac{3}{2} - \gamma}\, dr\, dR < \infty, $$
which holds by the change of variable $r \mapsto 1 - r$ in the corresponding integral obtained for $K_2$.
As a consequence, the perturbation operator $K = K_2 + K_3 - K_1$ is compact, and the proof of Theorem 4.1 is complete.
We give in this section some properties of the collision frequency $\nu$ defined in (15), namely its coercivity and its monotonicity.
Proposition 1 (Coercivity of $\nu$). Under the lower bound (7) on the collision cross section, there exists a constant $c > 0$ such that
$$ \nu(v,I) \ge c \left( |v|^{\gamma} + I^{\gamma/2} + 1 \right), $$
for any $(v, I) \in \mathbb{R}^3 \times \mathbb{R}_+$.
Proof. The collision frequency (15) is
$$ \nu(v,I) = \frac{1}{(2\pi)^{\frac{3}{2}}} \int_{\Delta} B\, (1-R)\, R^{1/2}\, e^{-I_* - \frac{1}{2}v_*^2}\, dr\, dR\, d\omega\, dI_*\, dv_*, $$
where, using the lower bound (7) on $B$ and integrating with respect to $r$ and $R$, we obtain
$$ \nu(v,I) \ge c \int_{S^2}\int_{\mathbb{R}^3} \left( |v - v_*|^{\gamma} + I^{\gamma/2} \right) e^{-\frac{1}{2}v_*^2}\, d\omega\, dv_* \ge c \left( I^{\gamma/2} + \int_{\mathbb{R}^3} \big| |v| - |v_*| \big|^{\gamma}\, e^{-\frac{1}{2}v_*^2}\, dv_* \right), $$
where $c$ is a positive constant that may change from line to line. For $|v| \ge 1$, restricting the last integral to the region $|v_*| \le \frac{1}{2}|v|$ gives
$$ \nu(v,I) \ge c \left( I^{\gamma/2} + \int_{|v_*| \le \frac{1}{2}|v|} (|v| - |v_*|)^{\gamma}\, e^{-\frac{1}{2}v_*^2}\, dv_* \right) \ge c \left( I^{\gamma/2} + |v|^{\gamma} \int_{|v_*| \le \frac{1}{2}} e^{-\frac{1}{2}v_*^2}\, dv_* \right) \ge c \left( |v|^{\gamma} + I^{\gamma/2} + 1 \right). $$
For $|v| \le 1$, we restrict the integral instead to the region $|v_*| \ge 2$ and get
$$ \nu(v,I) \ge c \left( I^{\gamma/2} + \int_{|v_*| \ge 2} (|v_*| - |v|)^{\gamma}\, e^{-\frac{1}{2}v_*^2}\, dv_* \right) \ge c \left( I^{\gamma/2} + \int_{|v_*| \ge 2} e^{-\frac{1}{2}v_*^2}\, dv_* \right) \ge c \left( 1 + I^{\gamma/2} + |v|^{\gamma} \right). $$
The result is thus proved. We now give the following proposition, which generalizes the work of Grad [17], where it is proved that the collision frequency of monoatomic single gases is monotone for a suitable choice of the collision cross section $B$.
Proposition 2 (Monotonicity of $\nu$). If the quantity
$$ \int_{(0,1)^2 \times S^2} (1-R)\, R^{\frac{1}{2}}\, B(|V|, I, I_*, r, R, \omega)\, dr\, dR\, d\omega \tag{26} $$
is increasing (respectively decreasing) in $|V|$ and $I$, then the collision frequency $\nu$ is increasing (respectively decreasing) in $|v|$ and $I$.
In particular, for Maxwell molecules, where the collision cross section takes the form
$$ B(v, v_*, I, I_*, r, R, \omega) = C\, \psi(r)\, \varphi(R) \left( |v - v_*|^{\gamma} + I^{\gamma/2} + I_*^{\gamma/2} \right), $$
the integral (26) is increasing in $|V|$ and $I$, and thus the collision frequency $\nu$ is increasing in $|v|$ and $I$.
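As an illustration (a rough Monte Carlo sketch under this model cross section, with the arbitrary choice $\gamma = 1$; for this separable $B$ the $(r, R, \omega)$ integrals only contribute a constant factor and are therefore omitted), one can observe the growth of $\nu$ with the speed $|v|$:

```python
import numpy as np

rng = np.random.default_rng(2)
gamma = 1.0                                    # assumed exponent gamma > 0
vs = rng.normal(size=(200_000, 3))             # v_* sampled from the Maxwellian velocity marginal
Is = rng.exponential(size=200_000)             # I_* sampled from the e^{-I_*} marginal

def nu_model(speed, I):
    """Collision frequency, up to a constant factor, for the model cross section
    B = C * psi(r) * phi(R) * (|v - v_*|^gamma + I^(gamma/2) + I_*^(gamma/2))."""
    v = np.array([speed, 0.0, 0.0])            # nu depends on v only through |v|
    return np.mean(np.linalg.norm(v - vs, axis=1) ** gamma + I ** (gamma / 2) + Is ** (gamma / 2))

for s in [0.0, 1.0, 2.0, 4.0, 8.0]:
    print(s, nu_model(s, 1.0))                 # the printed values increase with |v|
```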
In fact, if
Proof. We remark first that, after the change of variables $V = v - v_*$, the collision frequency can be written as
$$ \nu(|v|, I) = \frac{1}{(2\pi)^{\frac{3}{2}}} \int_{\Delta} (1-R)\, R^{\frac{1}{2}}\, B(|V|, I, I_*, r, R, \omega)\, e^{-\frac{1}{2}(v - V)^2 - I_*}\, dr\, dR\, d\omega\, dI_*\, dV, \tag{27} $$
where we used that the collision cross section depends on the velocities only through the relative speed $|V| = |v - v_*|$, so that $\nu$ depends on $v$ only through $|v|$.
The partial derivative of $\nu$ with respect to $v_i$ is
$$ \frac{\partial \nu}{\partial v_i} = \frac{1}{(2\pi)^{\frac{3}{2}}} \int (1-R)\, R^{\frac{1}{2}}\, \frac{v_i - v_{*i}}{|v - v_*|}\, \frac{\partial B}{\partial |v - v_*|}(|v - v_*|, I, I_*, r, R, \omega)\, e^{-\frac{1}{2}v_*^2 - I_*}\, dr\, dR\, d\omega\, dI_*\, dv_*. \tag{28} $$
Performing the change of variables $V = v - v_*$ again, we get
$$ \frac{\partial \nu}{\partial v_i} = \frac{1}{(2\pi)^{\frac{3}{2}}} \int (1-R)\, R^{\frac{1}{2}}\, \frac{V_i}{|V|}\, \frac{\partial B}{\partial |V|}(|V|, I, I_*, r, R, \omega)\, e^{-\frac{1}{2}(v - V)^2 - I_*}\, dr\, dR\, d\omega\, dI_*\, dV, $$
and thus,
$$ \sum_{i=1}^{3} v_i\, \frac{\partial \nu}{\partial v_i} = \frac{1}{(2\pi)^{\frac{3}{2}}} \int (1-R)\, R^{\frac{1}{2}}\, \frac{v \cdot V}{|V|}\, \frac{\partial B}{\partial |V|}(|V|, I, I_*, r, R, \omega)\, e^{-\frac{1}{2}(v - V)^2 - I_*}\, dr\, dR\, d\omega\, dI_*\, dV. \tag{29} $$
Applying Fubini's theorem, we write (29) as
$$ \sum_{i=1}^{3} v_i\, \frac{\partial \nu}{\partial v_i} = \frac{1}{(2\pi)^{\frac{3}{2}}} \int \left[ \int (1-R)\, R^{\frac{1}{2}}\, \frac{\partial B}{\partial |V|}(|V|, I, I_*, r, R, \omega)\, dr\, dR\, d\omega \right] \frac{v \cdot V}{|V|}\, e^{-\frac{1}{2}(v - V)^2 - I_*}\, dI_*\, dV. \tag{31} $$
The partial derivative of $\nu$ with respect to $I$ satisfies
$$
\begin{aligned}
I\, \frac{\partial \nu}{\partial I} &= \frac{1}{(2\pi)^{\frac{3}{2}}} \int (1-R)\, R^{\frac{1}{2}}\, I\, \frac{\partial B}{\partial I}(|V|, I, I_*, r, R, \omega)\, e^{-\frac{1}{2}(v - V)^2 - I_*}\, dr\, dR\, d\omega\, dI_*\, dV \\
&= \frac{1}{(2\pi)^{\frac{3}{2}}}\, I \int \left[ \int (1-R)\, R^{\frac{1}{2}}\, \frac{\partial B}{\partial I}(|V|, I, I_*, r, R, \omega)\, dr\, dR\, d\omega \right] e^{-\frac{1}{2}(v - V)^2 - I_*}\, dI_*\, dV.
\end{aligned} \tag{33}
$$
When the contributions of $V$ and $-V$ in (31) are grouped, the Gaussian weight $e^{-\frac{1}{2}(v - V)^2}$ is larger on the half-space $v \cdot V \ge 0$; hence $\sum_{i=1}^{3} v_i\, \frac{\partial \nu}{\partial v_i}$ has the same sign as the monotonicity in $|V|$ of the integral
$$ \int (1-R)\, R^{\frac{1}{2}}\, \frac{\partial B}{\partial |V|}(|V|, I, I_*, r, R, \omega)\, dr\, dR\, d\omega. $$
It is clear as well that the partial derivative of $\nu$ with respect to $I$ has the same sign as the integral
$$ \int (1-R)\, R^{\frac{1}{2}}\, \frac{\partial B}{\partial I}(|V|, I, I_*, r, R, \omega)\, dr\, dR\, d\omega. $$
As a result, for a collision cross section $B$ such that
$$ \int_{(0,1)^2 \times S^2} (1-R)\, R^{\frac{1}{2}}\, B(|V|, I, I_*, r, R, \omega)\, dr\, dR\, d\omega $$
is increasing (respectively decreasing) in $|V|$ and $I$, the collision frequency $\nu$ is increasing (respectively decreasing) in $|v|$ and $I$, which proves the proposition.