
The rapid emergence of advanced software systems, low-cost hardware and decentralized cloud computing technologies has broadened the horizon for vision-based surveillance, monitoring and control. However, complex and inferior feature learning over visual artefacts or video streams, especially under extreme conditions, confines the majority of existing vision-based crowd analysis and classification systems. Retrieving event-sensitive or crowd-type-sensitive spatio-temporal features for different crowd types under extreme conditions is a highly complex task; the resulting loss of accuracy and reliability limits existing methods for real-time crowd analysis. Despite numerous efforts in vision-based approaches, the lack of acoustic cues often creates ambiguity in crowd classification, whereas the strategic amalgamation of audio-visual features can enable accurate and reliable crowd analysis and classification. Motivated by this, a novel audio-visual multi-modality-driven hybrid feature learning model is developed in this research for crowd analysis and classification. A hybrid feature extraction model was applied to extract deep spatio-temporal features using Gray-Level Co-occurrence Matrix (GLCM) descriptors and the AlexNet transfer learning model. After extracting the GLCM features and the AlexNet deep features, horizontal concatenation was performed to fuse the two feature sets. Similarly, for acoustic feature extraction, the audio samples (from the input video) were processed with static (fixed-size) sampling, pre-emphasis, block framing and Hann windowing, followed by the extraction of acoustic features such as GTCC, GTCC-Delta, GTCC-Delta-Delta, MFCC, spectral entropy, spectral flux, spectral slope and harmonics-to-noise ratio (HNR).
Finally, the extracted audio-visual features were fused to yield a composite multi-modal feature set, which was processed for classification using a random forest ensemble classifier. The multi-class classification yields a crowd-classification accuracy of 98.26%, precision of 98.89%, sensitivity of 94.82%, specificity of 95.57% and F-measure of 98.84%. The robustness of the proposed multi-modality-based crowd analysis model confirms its suitability for real-world crowd detection and classification tasks.
Citation: H. Y. Swathi, G. Shivakumar. Audio-visual multi-modality driven hybrid feature learning model for crowd analysis and classification[J]. Mathematical Biosciences and Engineering, 2023, 20(7): 12529-12561. doi: 10.3934/mbe.2023558
One of the most significant trends in global agricultural development is the ecological management of pests. From the perspective of ecosystem integrity, reducing and controlling pests through biological and ecological control is of great significance for the construction of an ecological civilization. Biological and ecological control can reduce management costs, maintain ecological stability, and avoid environmental pollution and damage to biodiversity. As a large agricultural country, China places a premium on green prevention and control within its agricultural sector and proposed the National Strategic Plan for Quality Agriculture (2018–2022), which calls for implementing green prevention and control actions instead of chemical control and achieving a coverage rate of more than 50% for green prevention and control of major crop pests. The Regulations on the Prevention and Control of Crop Pests prioritize the endorsement and support of green prevention and control technologies such as ecological management, foster the widespread application of information technology and biotechnology, and propel the advancement of intelligent, specialized, and green prevention and control efforts [1]. Therefore, simulating pest dynamic behavior and studying control strategies help make pest management more scientific and rational.
In a natural ecosystem, the predator-prey relationship is one of the most important relationships; it has become a main topic in ecological research and has been widely studied in recent years. Depending on the problem under consideration and the biological background, related research can be divided into two forms: ordinary differential equations [2,3,4,5] and partial differential equations [6,7,8,9,10,11]. The earliest work on the mathematical modeling of predation relationships dates back to the twentieth century, with the Lotka-Volterra model [12,13]. Subsequently, scholars have extended the Lotka-Volterra model in different directions, such as introducing different types of growth functions [14,15,16] and different forms of functional response [17,18,19,20]. The Gompertz model [14] is one of the sigmoid models most frequently fitted to growth data; it has been applied to plant growth, bird growth, fish growth, and the growth of other animals [21,22,23]. Compared with the logistic model, it is better suited to fitting asymmetric S-shaped pest or disease curves that develop quickly at first and slowly later. In addition, the Holling-Ⅱ functional response is the most commonly employed one, in which the searching rate is considered a constant. Nevertheless, in the real world, the density of the prey and the predator's searching environment can affect the predator's searching speed. Consequently, Hassell et al. [24] proposed a saturated searching rate, and Guo et al. [25] introduced a fishery model with the Smith growth rate and a Holling-Ⅱ functional response with a variable searching rate. In this work, a pest-natural enemy model with the Gompertz growth rate and a variable searching rate is investigated.
To prevent the spread of pests, effective control action should be implemented before the pests cause a certain amount of damage to the environment and crops. One way is to slow down the spreading speed of pests by setting a warning threshold, and when the density of pests exceeds this threshold level, an integrated control measure is imposed on the system. This kind of control system can be modeled by a Filippov system, which has been recognized by scholars and widely used in the study of concrete models with one threshold [26,27,28,29,30,31], a ratio-dependent threshold [32], or two thresholds [33]. In this study, we will also focus on Filippov predation models with dual thresholds. In addition, considering the instantaneous behavior of the control, an integrated pest-management strategy with threshold control is adopted, which is an instantaneous intervention imposed on the system and always taken as a practical approach for pest management. In recent years, there has been a lot of research and application of impulsive differential equations (IDEs) in population dynamics to model the instantaneous intervention activities. There are mainly six types of models involved in the research: periodic [34,35,36,37], prey-dependent [38,39,40,41,42,43], predator-dependent [44], ratio-dependent [45], nonlinear prey-dependent [46], and combined prey-predator dependent [47,48,49,50,51]. In the context of integrated pest management, setting a threshold for pest population density to control its spread is crucial. Therefore, in this study, we introduce a pest economic threshold: when the pest population density exceeds this threshold, we will intervene manually, which includes not only spraying pesticides but also releasing natural enemies.
The article is organized in the following way: In Section 2, an integrated pest-management model with a variable searching rate based on double-threshold control is proposed. In Section 3, a dynamical analysis of the continuous system is performed, including the positivity and boundedness of the solutions, the existence and local stability of equilibrium points, and the dynamic behavior of the Filippov pest-management model with double thresholds. Section 4 focuses on the complex dynamic behavior of the system induced by the economic-threshold feedback control. In Section 5, numerical simulations are carried out to illustrate the main results of the two preceding sections step by step and to highlight their practical implications. Finally, a summary of the research work is presented, and future research directions are discussed.
A pest-natural enemy Gompertz model with a variable searching rate and Holling-Ⅱ functional response is considered:
$$\begin{cases}\dfrac{dx}{dt}=rx(\ln K-\ln x)-\dfrac{b(x)xy}{1+hx},\\[6pt] \dfrac{dy}{dt}=\dfrac{eb(x)xy}{1+hx}-dy,\end{cases} \qquad (2.1)$$
where $x$ and $y$ represent the densities of the pest and the natural enemy, respectively; $r$ represents the pest's intrinsic growth rate; $K$ represents the pest's environmental carrying capacity; $b(x)=bx/(x+g)$ represents the variable searching rate [24,25] with maximum searching rate $b$ and saturation constant $g$; $e$ represents the conversion efficiency; and $d$ represents the predator's natural mortality. All parameters are positive, and $b$, $e$ and $d$ are less than one. In addition, it is required that $eb-dh>0$, i.e., the natural enemy species can survive when pests are abundant.
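The saturating shape of the variable searching rate $b(x)=bx/(x+g)$ can be checked numerically. A minimal sketch (the values of $b$ and $g$ are taken from the simulation section later in the paper):

```python
# Saturating searching rate b(x) = b*x/(x + g): monotone increasing,
# half-saturation at x = g, and b(x) -> b as the pest density grows.
b, g = 0.595, 0.8  # values from the paper's simulation section

def searching_rate(x):
    return b * x / (x + g)

rates = [searching_rate(x) for x in (0.1, 1.0, 10.0, 100.0, 1000.0)]
```

At $x=g$ the searching rate equals exactly half of its maximum, which is why $g$ is called the saturation constant.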
To prevent the rapid spread of pests, two control methods are adopted. The first is a continuous control with two thresholds: when the pest density is below the pest warning threshold $x_{ET}$, no control measures are taken; when the pest and natural enemy densities satisfy $x>x_{ET}$ and $y<y_{ET}$, control is applied by spraying pesticides and releasing a proportion ($q_1$) of natural enemies, which causes mortality of pests ($p_1$) and natural enemies ($q_2$); when the densities satisfy $x>x_{ET}$ and $y>y_{ET}$, only pesticide spraying is applied. Based on the above control strategy, the control system can be formulated as follows:
$$\begin{cases}\dfrac{dx}{dt}=rx(\ln K-\ln x)-\dfrac{bx^2y}{(1+hx)(x+g)}-\delta_1(x,y)x,\\[6pt] \dfrac{dy}{dt}=\dfrac{ebx^2y}{(1+hx)(x+g)}-dy+\delta_2(x,y)y,\end{cases} \qquad (2.2)$$
where
$$(\delta_1(x,y),\delta_2(x,y))=\begin{cases}(0,0), & x<x_{ET},\\ (p_1,\ q_1-q_2), & x>x_{ET},\ y<y_{ET},\\ (p_1,\ -q_2), & x>x_{ET},\ y>y_{ET}.\end{cases} \qquad (2.3)$$
The second is an intermittent control with an economic threshold: when the pest density is below the economic threshold, no control action is implemented. Once the pest density reaches the economic threshold, control is applied by spraying pesticides and releasing a nonlinear volume $\tau/(1+ly)$ of natural enemies, which causes mortality of pests ($p_1$) and natural enemies ($q_2$), where $\tau>0$ and $l>0$ are parameters determining the maximum release volume of predators. Based on this control strategy, we can formulate the impulsive control system as follows:
$$\left.\begin{aligned}\frac{dx}{dt}&=rx(\ln K-\ln x)-\frac{bx^2y}{(1+hx)(x+g)},\\ \frac{dy}{dt}&=\frac{ebx^2y}{(1+hx)(x+g)}-dy\end{aligned}\right\}\ x<x_{ET},\qquad \left.\begin{aligned}x(t^+)&=(1-p_1)x(t),\\ y(t^+)&=(1-q_2)y(t)+\frac{\tau}{1+ly(t)}\end{aligned}\right\}\ x=x_{ET}. \qquad (2.4)$$
This study focuses on analyzing the effects of the different control measures on the dynamics of Models (2.2) and (2.4), respectively.
Consider a piecewise-continuous system
$$\begin{pmatrix}\dfrac{dx}{dt}\\[6pt]\dfrac{dy}{dt}\end{pmatrix}=\begin{cases}F_1(x,y) & \text{if }(x,y)\in S_1,\\ F_2(x,y) & \text{if }(x,y)\in S_2,\end{cases} \qquad (2.5)$$
where
$$S_1=\{(x,y)\in\mathbb{R}^2_+:H(x,y)>0\},\qquad S_2=\{(x,y)\in\mathbb{R}^2_+:H(x,y)<0\},$$
and the discontinuity boundary is
$$\Sigma=\{(x,y)\in\mathbb{R}^2_+:H(x,y)=0\}.$$
Let $F_iH=\langle\nabla H,F_i\rangle$, where $\langle\cdot,\cdot\rangle$ is the standard scalar product, and $F_i^mH=\langle\nabla(F_i^{m-1}H),F_i\rangle$. The boundary $\Sigma$ can then be divided into three regions: 1) sliding region: $\Sigma^s=\{(x,y)\in\Sigma:F_1H<0\ \text{and}\ F_2H>0\}$; 2) crossing region: $\Sigma^c=\{(x,y)\in\Sigma:F_1H\cdot F_2H>0\}$; 3) escaping region: $\Sigma^e=\{(x,y)\in\Sigma:F_1H>0\ \text{and}\ F_2H<0\}$.
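The three-way partition of $\Sigma$ can be sketched as a small routine. The constant vector fields below are hypothetical toy examples across the line $H(x,y)=x-1$, not the paper's model:

```python
# Classify a point on the switching boundary Sigma = {H = 0} by the signs of
# the directional derivatives F1H = <grad H, F1> and F2H = <grad H, F2>.

def classify(F1, F2, gradH, point):
    gx, gy = gradH
    vx, vy = F1(*point)
    wx, wy = F2(*point)
    f1h = vx * gx + vy * gy
    f2h = wx * gx + wy * gy
    if f1h < 0 < f2h:
        return "sliding"    # both fields push trajectories onto Sigma
    if f1h * f2h > 0:
        return "crossing"   # trajectories pass straight through Sigma
    if f2h < 0 < f1h:
        return "escaping"   # both fields pull trajectories off Sigma
    return "tangent"        # at least one field is tangent to Sigma

# Toy constant fields across H(x, y) = x - 1 (S1 = {x > 1}, S2 = {x < 1});
# grad H = (1, 0) everywhere on the boundary.
leftward = lambda x, y: (-1.0, 0.0)
rightward = lambda x, y: (1.0, 0.0)

labels = (
    classify(leftward, rightward, (1.0, 0.0), (1.0, 0.5)),   # sliding
    classify(rightward, rightward, (1.0, 0.0), (1.0, 0.5)),  # crossing
    classify(rightward, leftward, (1.0, 0.0), (1.0, 0.5)),   # escaping
)
```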
The dynamics of system (2.5) along $\Sigma^s$ is determined by
$$\begin{pmatrix}\dfrac{dx}{dt}\\[6pt]\dfrac{dy}{dt}\end{pmatrix}=F_s(x,y),\quad (x,y)\in\Sigma^s,$$
where $F_s=\lambda F_1+(1-\lambda)F_2$ with $\lambda=\dfrac{F_2H}{F_2H-F_1H}\in(0,1)$.
Definition 1 ([24]). For system (2.5), $E^*$ is a real equilibrium if $\exists i\in\{1,2\}$ such that $F_i(E^*)=0$ and $E^*\in S_i$; $E^*$ is a virtual equilibrium if $\exists i,j\in\{1,2\}$, $i\neq j$, such that $F_i(E^*)=0$ and $E^*\in S_j$; and $E^*$ is a pseudo-equilibrium if $F_s(E^*)=\lambda F_1(E^*)+(1-\lambda)F_2(E^*)=0$, $H(E^*)=0$, and $\lambda=\dfrac{F_2H}{F_2H-F_1H}\in(0,1)$.
For the given planar model
$$\begin{cases}\dfrac{dx}{dt}=\chi_1(x,y),\quad \dfrac{dy}{dt}=\chi_2(x,y), & \omega(x,y)\neq0,\\[6pt] \Delta x=I_1(x,y),\quad \Delta y=I_2(x,y), & \omega(x,y)=0,\end{cases} \qquad (2.6)$$
we have:
Definition 2 (Order-k periodic solution [50,51]). The solution $\tilde z(t)=(\tilde x(t),\tilde y(t))$ is called periodic if there exists $n(\ge1)$ satisfying $\tilde z_n=\tilde z_0$. Furthermore, $\tilde z$ is an order-$k$ $T$-periodic solution with $k\triangleq\min\{j\,|\,1\le j\le n,\ \tilde z_j=\tilde z_0\}$.
Lemma 1 (Stability criterion [50,51]). The order-$k$ $T$-periodic solution $z(t)=(\xi(t),\eta(t))^T$ is orbitally asymptotically stable if $|\mu_k|<1$, where
$$\mu_k=\prod_{j=1}^{k}\Delta_j\exp\left(\int_0^T\left[\frac{\partial\chi_1}{\partial x}+\frac{\partial\chi_2}{\partial y}\right]_{(\xi(t),\eta(t))}dt\right),$$
with
$$\Delta_j=\frac{\chi_1^+\left(\dfrac{\partial I_2}{\partial y}\dfrac{\partial\omega}{\partial x}-\dfrac{\partial I_2}{\partial x}\dfrac{\partial\omega}{\partial y}+\dfrac{\partial\omega}{\partial x}\right)+\chi_2^+\left(\dfrac{\partial I_1}{\partial x}\dfrac{\partial\omega}{\partial y}-\dfrac{\partial I_1}{\partial y}\dfrac{\partial\omega}{\partial x}+\dfrac{\partial\omega}{\partial y}\right)}{\chi_1\dfrac{\partial\omega}{\partial x}+\chi_2\dfrac{\partial\omega}{\partial y}},$$
$\chi_1^+=\chi_1(\xi(\theta_j^+),\eta(\theta_j^+))$, $\chi_2^+=\chi_2(\xi(\theta_j^+),\eta(\theta_j^+))$, and $\chi_1$, $\chi_2$, $\frac{\partial I_1}{\partial x}$, $\frac{\partial I_1}{\partial y}$, $\frac{\partial I_2}{\partial x}$, $\frac{\partial I_2}{\partial y}$, $\frac{\partial\omega}{\partial x}$, $\frac{\partial\omega}{\partial y}$ are calculated at $(\xi(\theta_j),\eta(\theta_j))$.
For convenience, denote
$$f_1(x,y)\triangleq r(\ln K-\ln x)-\frac{bxy}{(1+hx)(x+g)},\qquad f_2(x)\triangleq\frac{ebx^2}{(1+hx)(x+g)}-d,$$
$$\chi_1(x,y)=xf_1(x,y),\qquad \chi_2(x,y)=yf_2(x).$$
Since
$$x(t)=x(0)\exp\left(\int_0^t f_1(x,y)\,ds\right)\ge0,\qquad y(t)=y(0)\exp\left(\int_0^t f_2(x)\,ds\right)\ge0,$$
all solutions $(x(t),y(t))$ of Model (2.1) with $x(0)>0$ and $y(0)>0$ are positive in the region $D=\{(x(t),y(t))\,|\,0<x\le K,\ y\ge0\}$.
Theorem 1. For Model (2.1), the solutions are uniformly ultimately bounded in the region $D_1$.
Proof. Define $\iota(x(t),y(t))\triangleq x(t)+y(t)$. Then
$$\frac{d\iota}{dt}=\frac{dx}{dt}+\frac{dy}{dt}=rx(\ln K-\ln x)-\frac{(1-e)bx^2y}{(1+hx)(x+g)}-dy.$$
Take $0<\theta\le\min\{r,d\}$; then
$$\frac{d\iota}{dt}+\theta\iota\le rx(\ln K-\ln x)+\theta x\triangleq\sigma(x).$$
Obviously, $\sigma'(x)=r(\ln K-\ln x-1)+\theta$. If $0<x<Ke^{\theta/r-1}$, then $\sigma'(x)>0$; if $x>Ke^{\theta/r-1}$, then $\sigma'(x)<0$. Hence $\sigma(x)$ has a maximum $\sigma^*$. Thus $\frac{d}{dt}\left(\iota-\frac{\sigma^*}{\theta}\right)\le-\theta\left(\iota-\frac{\sigma^*}{\theta}\right)$, and then
$$0\le\iota(x(t),y(t))\le\left(1-e^{-\theta t}\right)\frac{\sigma^*}{\theta}+\iota(x(0),y(0))e^{-\theta t}.$$
As $t\to\infty$, $0\le\iota(x(t),y(t))\le\frac{\sigma^*}{\theta}$. Therefore, the solutions of Model (2.1) are uniformly ultimately bounded in the region
$$D_1=\left\{(x,y)\in D:x(t)+y(t)\le\frac{\sigma^*}{\theta}\right\}\subset D.$$
For Model (2.1), the boundary equilibrium EK(K,0) always exists. Define
$$\bar b(d;p_1)=\frac{d\left(Ke^{-p_1/r}+g\right)\left(1+hKe^{-p_1/r}\right)}{eK^2e^{-2p_1/r}},\qquad \Delta(d)=d^2(1+gh)^2+4dg(eb-dh),$$
$$U(x)=(x+g)(1+hx)+(g-hx^2)(\ln K-\ln x).$$
Theorem 2. For Model (2.1), if $b<\bar b(d;0)$, then $E_K(K,0)$ is locally asymptotically stable. If $b>\bar b(d;0)$, there exists a coexistence equilibrium, denoted as $E_1^*=(x_1^*,y_1^*)$, which is locally asymptotically stable if $U(x_1^*)>0$, where
$$x_1^*=\frac{d(1+gh)+\sqrt{\Delta(d)}}{2(eb-dh)},\qquad y_1^*=\frac{r(\ln K-\ln x_1^*)(x_1^*+g)(1+hx_1^*)}{bx_1^*}.$$
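The coexistence equilibrium can be evaluated directly from this closed form. A minimal sketch, using the parameter values from the paper's numerical section, verifies that both growth rates of (2.1) vanish there:

```python
import math

# Parameters from the paper's simulation section
r, K, b, h, e, d, g = 1.5, 120.0, 0.595, 0.92, 0.8, 0.5, 0.8

def coexistence_equilibrium():
    """Closed-form E1* = (x1*, y1*) of Model (2.1)."""
    delta = d**2 * (1 + g*h)**2 + 4*d*g*(e*b - d*h)          # Delta(d)
    x1 = (d*(1 + g*h) + math.sqrt(delta)) / (2*(e*b - d*h))
    y1 = r * math.log(K / x1) * (x1 + g) * (1 + h*x1) / (b * x1)
    return x1, y1

x1, y1 = coexistence_equilibrium()

# Both per-capita growth rates f1 and f2 should vanish at the equilibrium
f1 = r * math.log(K / x1) - b*x1*y1 / ((1 + h*x1) * (x1 + g))
f2 = e*b*x1**2 / ((1 + h*x1) * (x1 + g)) - d
```

With these values the equilibrium lands at roughly $(54.7,\,103.1)$, matching the interior equilibrium reported in the simulation section.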
Proof. For Model (2.1), we have
$$J=\begin{pmatrix}r(\ln K-\ln x)-r-\dfrac{bxy(x+hgx+2g)}{[(x+g)(1+hx)]^2} & -\dfrac{bx^2}{(x+g)(1+hx)}\\[8pt] \dfrac{ebxy(x+hgx+2g)}{[(x+g)(1+hx)]^2} & \dfrac{ebx^2}{(x+g)(1+hx)}-d\end{pmatrix}.$$
1) For $E_K(K,0)$, we have
$$J|_{(K,0)}=\begin{pmatrix}-r & -\dfrac{bK^2}{(K+g)(1+hK)}\\[6pt] 0 & \dfrac{ebK^2}{(K+g)(1+hK)}-d\end{pmatrix}.$$
Then $\lambda_1=-r<0$ and $\lambda_2=\dfrac{ebK^2}{(K+g)(1+hK)}-d$. Therefore, $E_K(K,0)$ is locally asymptotically stable if $b<\bar b(d;0)$.
2) Since
$$f_{1x}=-\frac{r}{x}-\frac{by(g-hx^2)}{[(x+g)(1+hx)]^2},\quad f_{1y}=-\frac{bx}{(x+g)(1+hx)},\quad f_{2x}=\frac{ebx(x+hgx+2g)}{[(x+g)(1+hx)]^2},$$
then for $E_1^*$ we have
$$\lambda_1\lambda_2=-x_1^*y_1^*f_{1y}f_{2x}>0,\qquad \lambda_1+\lambda_2=x_1^*f_{1x}.$$
If $U(x_1^*)>0$ holds, then $\lambda_1\lambda_2>0$ and $\lambda_1+\lambda_2<0$, i.e., $E_1^*$ is locally asymptotically stable.
Let
$$F_1(x,y)=\begin{pmatrix}rx(\ln K-\ln x)-\dfrac{bx^2y}{(1+hx)(x+g)}\\[6pt] \dfrac{ebx^2y}{(1+hx)(x+g)}-dy\end{pmatrix},\qquad F_2(x,y)=\begin{pmatrix}rx(\ln K-\ln x)-\dfrac{bx^2y}{(1+hx)(x+g)}-p_1x\\[6pt] \dfrac{ebx^2y}{(1+hx)(x+g)}-dy+(q_1-q_2)y\end{pmatrix},$$
$$F_3(x,y)=\begin{pmatrix}rx(\ln K-\ln x)-\dfrac{bx^2y}{(1+hx)(x+g)}-p_1x\\[6pt] \dfrac{ebx^2y}{(1+hx)(x+g)}-dy-q_2y\end{pmatrix}.$$
Then systems (2.2) and (2.3) can be described as
$$\begin{pmatrix}\dfrac{dx}{dt}\\[6pt]\dfrac{dy}{dt}\end{pmatrix}=F_i(x,y),\quad (x,y)\in G_i,\ i=1,2,3, \qquad (3.1)$$
where
$$G_1=\{(x,y)\in\mathbb{R}^2_+:x<x_{ET}\},\quad G_2=\{(x,y)\in\mathbb{R}^2_+:x>x_{ET},\ y<y_{ET}\},\quad G_3=\{(x,y)\in\mathbb{R}^2_+:x>x_{ET},\ y>y_{ET}\}.$$
The switching boundaries are, respectively,
$$\Sigma_1=\{(x,y)\in\mathbb{R}^2_+:x=x_{ET},\ y<y_{ET}\},\quad \Sigma_2=\{(x,y)\in\mathbb{R}^2_+:x=x_{ET},\ y>y_{ET}\},\quad \Sigma_3=\{(x,y)\in\mathbb{R}^2_+:x>x_{ET},\ y=y_{ET}\}.$$
Let n1=(1,0) and n2=(0,1) be the normal vector for Σ1 and Σ3. If ∃Σij⊂Σi such that the trajectory of Fi(x,y) approaches or moves away from Σi (i∈{1,2,3}) on both sides, then a sliding domain exists, and the dynamics on Σi can be determined by means of the Filippov convex method.
The dynamic behavior of the model in G1 can be referred to Section 3.2. The model in G2 is described as follows:
$$\begin{cases}\dfrac{dx}{dt}=rx(\ln K-\ln x)-\dfrac{bx^2y}{(1+hx)(x+g)}-p_1x,\\[6pt] \dfrac{dy}{dt}=\dfrac{ebx^2y}{(1+hx)(x+g)}-dy+(q_1-q_2)y.\end{cases} \qquad (3.2)$$
Theorem 3. Model (3.2) always has an equilibrium $\bar E_B(Ke^{-p_1/r},0)$. If $q_1<q_2+d$ and $b<\bar b(d-q_1+q_2;p_1)$, then $\bar E_B(Ke^{-p_1/r},0)$ is locally asymptotically stable. If $q_1<q_2+d$ and $b>\bar b(d-q_1+q_2;p_1)$, Model (3.2) has a coexistence equilibrium, denoted as $E_2^*=(x_2^*,y_2^*)$, which is locally asymptotically stable if $U(x_2^*)>0$, where
$$x_2^*=\frac{(d-q_1+q_2)(1+gh)+\sqrt{\Delta(d-q_1+q_2)}}{2[eb+h(q_1-q_2-d)]},\qquad y_2^*=\frac{[r(\ln K-\ln x_2^*)-p_1](x_2^*+g)(1+hx_2^*)}{bx_2^*}.$$
Similarly, the model in $G_3$ is described as follows:
$$\begin{cases}\dfrac{dx}{dt}=rx(\ln K-\ln x)-\dfrac{bx^2y}{(1+hx)(x+g)}-p_1x,\\[6pt] \dfrac{dy}{dt}=\dfrac{ebx^2y}{(1+hx)(x+g)}-dy-q_2y.\end{cases} \qquad (3.3)$$
Theorem 4. Model (3.3) always has an equilibrium $\bar E_B(Ke^{-p_1/r},0)$. If $b<\bar b(d+q_2;p_1)$, then $\bar E_B$ is locally asymptotically stable. If $b>\bar b(d+q_2;p_1)$, Model (3.3) has a coexistence equilibrium, denoted as $E_3^*=(x_3^*,y_3^*)$, which is locally asymptotically stable if $U(x_3^*)>0$, where
$$x_3^*=\frac{(d+q_2)(1+gh)+\sqrt{\Delta(d+q_2)}}{2[eb-h(d+q_2)]},\qquad y_3^*=\frac{[r(\ln K-\ln x_3^*)-p_1](x_3^*+g)(1+hx_3^*)}{bx_3^*}.$$
It is assumed that
(H1) $p_1<r$;
(H2) $\dfrac{d(1+gh)+\sqrt{\Delta(d)}}{2(eb-dh)}<K$;
(H3) $q_1-q_2-d<0$ and $\dfrac{(d-q_1+q_2)(1+gh)+\sqrt{\Delta(d-q_1+q_2)}}{2[eb+h(q_1-q_2-d)]}<Ke^{-p_1/r}$;
(H4) $\dfrac{(d+q_2)(1+gh)+\sqrt{\Delta(d+q_2)}}{2[eb-h(d+q_2)]}<Ke^{-p_1/r}$.
For Model (2.2), we have x∗1<x∗2<x∗3 when q1<q2 and x∗2<x∗1<x∗3 when q2<q1<q2+d.
Define
$$y_{ET}^1=\frac{[r(\ln K-\ln x_{ET})-p_1](x_{ET}+g)(1+hx_{ET})}{bx_{ET}},\qquad y_{ET}^2=\frac{r(\ln K-\ln x_{ET})(x_{ET}+g)(1+hx_{ET})}{bx_{ET}},$$
where $y_{ET}^2>0$ and $y_{ET}^1<y_{ET}^2$.
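These two boundary values can be computed directly. In the sketch below, the threshold $x_{ET}$ and the kill rate $p_1$ are hypothetical illustration values, while the remaining parameters follow the simulation section:

```python
import math

r, K, b, h, g = 1.5, 120.0, 0.595, 0.92, 0.8   # model parameters (simulation section)
xET, p1 = 30.0, 0.2                            # hypothetical threshold and kill rate

Q = (xET + g) * (1 + h * xET)   # (xET + g)(1 + h*xET)
L = math.log(K / xET)           # ln K - ln xET

yET2 = r * L * Q / (b * xET)
yET1 = (r * L - p1) * Q / (b * xET)
```

For $p_1 < r\ln(K/x_{ET})$, both values are positive and $y_{ET}^1 < y_{ET}^2$, so the candidate sliding segment on $x=x_{ET}$ is nonempty.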
First, we will discuss the sliding mode domain on Σ1 and the corresponding dynamics. Since
$$\langle F_1,n_1\rangle|_{(x,y)\in\Sigma_1}=x_{ET}\left[r(\ln K-\ln x_{ET})-\frac{bx_{ET}y}{(1+hx_{ET})(x_{ET}+g)}\right],\quad \langle F_2,n_1\rangle|_{(x,y)\in\Sigma_1}=x_{ET}\left[r(\ln K-\ln x_{ET})-\frac{bx_{ET}y}{(1+hx_{ET})(x_{ET}+g)}-p_1\right], \qquad (3.4)$$
then the sliding mode domain on Σ1 does not exist if yET<yET1. When yET>yET1, we have
$$\Sigma_1^1=\left\{(x,y)\in\Sigma_1\,|\,\max\{0,y_{ET}^1\}<y<\min\{y_{ET}^2,y_{ET}\}\right\}. \qquad (3.5)$$
Next, the Filippov convex method is used, i.e.,
$$\frac{dX}{dt}=\lambda F_1+(1-\lambda)F_2,\quad (x,y)\in\Sigma_1^1, \qquad (3.6)$$
where
$$\lambda=\frac{\langle F_2,n_1\rangle}{\langle F_2,n_1\rangle-\langle F_1,n_1\rangle},$$
and the sliding mode dynamics of Eq (3.1) along $\Sigma_1^1$ is determined by the following system:
$$\begin{cases}\dfrac{dx}{dt}=0,\\[6pt] \dfrac{dy}{dt}=\left[\dfrac{ebx_{ET}^2}{(1+hx_{ET})(x_{ET}+g)}-d\right]y+\dfrac{q_1-q_2}{p_1}\left[r(\ln K-\ln x_{ET})-\dfrac{bx_{ET}y}{(1+hx_{ET})(x_{ET}+g)}\right]y.\end{cases} \qquad (3.7)$$
Let $\varsigma_1=ebx_{ET}^2+(1+hx_{ET})(x_{ET}+g)\left[\dfrac{r(q_1-q_2)}{p_1}(\ln K-\ln x_{ET})-d\right]$. Then a positive equilibrium $E_{a1}(x_{ET},y_{a1})$ exists, where $y_{a1}=\dfrac{p_1\varsigma_1}{bx_{ET}(q_1-q_2)}>0$. Therefore
$$y_{a1}-y_{ET}^2=\frac{p_1}{bx_{ET}(q_1-q_2)}\left[ebx_{ET}^2-d(1+hx_{ET})(x_{ET}+g)\right].$$
If x∗1<xET, then ya1>yET2, i.e., Ea1 is not located in Σ11, and then Ea1 is not a pseudo-equilibrium. If x∗1>xET, then ya1<yET2.
Similarly, we have
$$y_{a1}-y_{ET}^1=\frac{p_1}{bx_{ET}(q_1-q_2)}\left[ebx_{ET}^2+(q_1-q_2-d)(1+hx_{ET})(x_{ET}+g)\right].$$
If x∗2>xET, then ya1<yET1, i.e., Ea1 is not located in Σ11, and then Ea1 is not a pseudo-equilibrium. If x∗2<xET, then ya1>yET1. Therefore, yET1<ya1<yET2. When ya1<yET, Ea1 is the pseudo-equilibrium.
Second, we will discuss the sliding mode domain on Σ2 and the dynamic characteristics on the sliding mode. Since
$$\langle F_1,n_1\rangle|_{(x,y)\in\Sigma_2}=x_{ET}\left[r(\ln K-\ln x_{ET})-\frac{bx_{ET}y}{(1+hx_{ET})(x_{ET}+g)}\right],\quad \langle F_3,n_1\rangle|_{(x,y)\in\Sigma_2}=x_{ET}\left[r(\ln K-\ln x_{ET})-\frac{bx_{ET}y}{(1+hx_{ET})(x_{ET}+g)}-p_1\right], \qquad (3.8)$$
then the sliding mode domain on $\Sigma_2$ does not exist if $y_{ET}>y_{ET}^2$. When $y_{ET}<y_{ET}^2$, we have
$$\Sigma_2^2=\left\{(x,y)\in\Sigma_2\,|\,\max\{y_{ET}^1,y_{ET}\}<y<y_{ET}^2\right\}. \qquad (3.9)$$
Therefore, when yET>yET2, there is no sliding mode domain on Σ2. When yET<yET2, the sliding mode domain of the system (3.1) on Σ2 can be expressed as Eq (3.9).
According to the Filippov convex method, we have
$$\frac{dX}{dt}=\lambda F_1+(1-\lambda)F_3,\quad (x,y)\in\Sigma_2^2, \qquad (3.10)$$
where
$$\lambda=\frac{\langle F_3,n_1\rangle}{\langle F_3,n_1\rangle-\langle F_1,n_1\rangle},$$
and the sliding mode dynamics of Eq (3.1) along $\Sigma_2^2$ is determined by the following system:
$$\begin{cases}\dfrac{dx}{dt}=0,\\[6pt] \dfrac{dy}{dt}=\left[\dfrac{ebx_{ET}^2}{(1+hx_{ET})(x_{ET}+g)}-d\right]y-\dfrac{q_2}{p_1}\left[r(\ln K-\ln x_{ET})-\dfrac{bx_{ET}y}{(1+hx_{ET})(x_{ET}+g)}\right]y.\end{cases} \qquad (3.11)$$
Let $\varsigma_2=(1+hx_{ET})(x_{ET}+g)\left[\dfrac{rq_2}{p_1}(\ln K-\ln x_{ET})+d\right]-ebx_{ET}^2$. Then system (3.11) has a positive equilibrium $E_{a2}(x_{ET},y_{a2})$, where $y_{a2}=\dfrac{p_1\varsigma_2}{q_2bx_{ET}}>0$. Obviously,
$$y_{a2}-y_{ET}^2=-\frac{p_1}{q_2bx_{ET}}\left[ebx_{ET}^2-d(1+hx_{ET})(x_{ET}+g)\right].$$
If $x_1^*>x_{ET}$, then $y_{a2}>y_{ET}^2$, i.e., $E_{a2}$ is not located in $\Sigma_2^2$. If $x_1^*<x_{ET}$, then $y_{a2}<y_{ET}^2$. Similarly, we have
$$y_{a2}-y_{ET}^1=-\frac{p_1}{q_2bx_{ET}}\left[ebx_{ET}^2-(d+q_2)(1+hx_{ET})(x_{ET}+g)\right].$$
If x∗3<xET, then ya2<yET1, i.e, Ea2 is not located in Σ22. If x∗3>xET, then ya2>yET1. Therefore yET1<ya2<yET2. If yET1<yET<ya2 or yET<yET1, then Ea2 is located in Σ22 and is a pseudo-equilibrium.
Finally, we will discuss the sliding mode domain on $\Sigma_3$ and the dynamic characteristics of the sliding mode. We have
$$\langle F_2,n_2\rangle|_{(x,y)\in\Sigma_3}=y_{ET}\left[\frac{ebx^2}{(1+hx)(x+g)}-d+q_1-q_2\right],\quad \langle F_3,n_2\rangle|_{(x,y)\in\Sigma_3}=y_{ET}\left[\frac{ebx^2}{(1+hx)(x+g)}-d-q_2\right]. \qquad (3.12)$$
According to Eq (3.12), $\langle F_2,n_2\rangle|_{(x,y)\in\Sigma_3}>\langle F_3,n_2\rangle|_{(x,y)\in\Sigma_3}$. If $x_3^*<x_{ET}$, then system (3.1) does not have a sliding mode domain on $\Sigma_3$. If $x_2^*<x_{ET}<x_3^*$, it follows from Eq (3.12) that the sliding mode domain of system (3.1) on $\Sigma_3$ can be expressed as
$$\Sigma_3^3=\{(x,y)\in\Sigma_3\,|\,x_{ET}<x<x_3^*\}. \qquad (3.13)$$
According to the Filippov convex method, we have
$$\frac{dX}{dt}=\lambda F_2+(1-\lambda)F_3,\quad (x,y)\in\Sigma_3^3, \qquad (3.14)$$
where
$$\lambda=\frac{\langle F_3,n_2\rangle}{\langle F_3,n_2\rangle-\langle F_2,n_2\rangle}.$$
The sliding mode dynamics of Eq (3.1) along $\Sigma_3^3$ is determined by the following system:
$$\begin{cases}\dfrac{dx}{dt}=rx(\ln K-\ln x)-\dfrac{bx^2y_{ET}}{(1+hx)(x+g)}-p_1x,\\[6pt] \dfrac{dy}{dt}=0.\end{cases} \qquad (3.15)$$
Then
$$[r(\ln K-\ln x)-p_1](1+hx)(x+g)-bxy_{ET}=0. \qquad (3.16)$$
If the root $x=x_b>0$ of Eq (3.16) satisfies Eq (3.13), then $E_b(x_b,y_{ET})$ is a pseudo-equilibrium of system (3.15); otherwise, $E_b$ is not a pseudo-equilibrium.
For Model (2.4), let
$$y=\hat y(x)\triangleq\frac{r(\ln K-\ln x)(1+hx)(x+g)}{bx}.$$
The curve y=ˆy(x) intersects with x=xET and x=(1−p1)xET at P(xET,yP) (yP=ˆy(xET)) and R0((1−p1)xET,yR0). The trajectory passing through P is denoted by γ1, and it goes backward and intersects y=ˆy(x) at H(xH,yH)(yH=ˆy(xH)). If xH<(1−p1)xET, then denote Q1((1−p1)xET,yQ1),Q2((1−p1)xET,yQ2) as the intersection points between γ1 and x=(1−p1)xET with yQ1<yQ2. The trajectory passing through R0 is denoted by γ2. If γ2∩{x=xET}≠∅, then denote R1(xET,yR1) as the intersection point between γ2 and x=xET. The curve γ2 defines a function y=y(x,yR0) on the interval [(1−p1)xET,xET] with
$$\frac{dy}{dx}=\frac{\dfrac{ebx^2y}{(1+hx)(x+g)}-dy}{rx(\ln K-\ln x)-\dfrac{bx^2y}{(1+hx)(x+g)}}\triangleq\varphi(x,y),\qquad y\big((1-p_1)x_{ET},y_{R_0}\big)=y_{R_0},$$
which takes the form
$$y=y(x,y_{R_0})=y_{R_0}+\int_{(1-p_1)x_{ET}}^{x}\varphi\big(u,y(u,y_{R_0})\big)\,du.$$
For Model (2.4), we have M={(x,y)∣x=xET,y>0}. The trajectory of the system (2.4) with x0<xET can reach M1={(x,y)|x=xET,0≤y≤yP}⊂M, which is called the effective impulse set, denoted by Meff. The corresponding effective phase set is denoted by Neff. Moreover, define M2={(x,y)∣x=xET,0≤y≤yR1}⊂M1.
Since $\Delta y=-q_2y+\dfrac{\tau}{1+ly}$, define
$$\rho(y)\triangleq(1-q_2)y+\frac{\tau}{1+ly}.$$
Obviously, the function $\rho(y)$ reaches a minimum at $y=\overset{\frown}{y}$, where
$$\overset{\frown}{y}\triangleq\frac{\sqrt{\tau l(1-q_2)}-(1-q_2)}{l(1-q_2)}.$$
Denote $R(x_{ET},\overset{\frown}{y})\in M$, and its phase point is $R^+((1-p_1)x_{ET},\rho(\overset{\frown}{y}))$.
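The minimizer of the post-impulse map $\rho$ can be checked numerically. The release parameters $q_2$, $l$ and $\tau$ below are hypothetical illustration values, chosen with $\tau>(1-q_2)/l$ so that the minimum is interior:

```python
import math

q2, l, tau = 0.3, 0.05, 20.0   # hypothetical impulse parameters

def rho(y):
    """Predator level immediately after an impulse applied at level y."""
    return (1 - q2) * y + tau / (1 + l * y)

# Stationary point of rho, from rho'(y) = (1-q2) - tau*l/(1+l*y)^2 = 0
y_frown = (math.sqrt(tau * l * (1 - q2)) - (1 - q2)) / (l * (1 - q2))
```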
Define
$$x_{ET}^1\triangleq\max\{x_{ET}\,|\,y(x_{ET},y_{R_0})\},\qquad x_{ET}^2\triangleq\max\{x_{ET}\,|\,y(x_{ET},y_{Q_1})\ge y_{Q_2}/2\}.$$
Denote
$$\tau_1\triangleq\frac{1-q_2}{l},\qquad \tau_2\triangleq\frac{(1-q_2)(1+ly_P)^2}{l},\qquad \tau_3\triangleq\frac{(1-q_2)(1+ly_{R_1})^2}{l}.$$
The exact domains of $M_{\rm eff}$ and $N_{\rm eff}$ can be determined by the signs of $\rho'(y)$ and $\overset{\frown}{y}$, which will be discussed in the following two situations:
Case Ⅰ: $x_{ET}^1<x_{ET}\le x_{ET}^2$.
For this situation, $M_{\rm eff}=M_1$. To determine $N_{\rm eff}$, we need to compare $\overset{\frown}{y}$ with $y_P$. Denote $\Lambda=[0,y_{Q_1}]\cup[y_{Q_2},+\infty)$.
ⅰ) If $\tau\le\tau_1$, then $\overset{\frown}{y}\le0$. For all $y\in[0,y_P]$, $\rho'\ge0$ holds, and then $\tau\le\rho(y)\le\rho(y_P)$ for $y\in[0,y_P]$. Denote $\Lambda_{11}=[\tau,\rho(y_P)]$, $\Lambda_1=\Lambda\cap\Lambda_{11}$, and $N_{\rm eff}=N_1=\{(x^+,y^+)\,|\,x^+=(1-p_1)x_{ET},\ y^+\in\Lambda_1\}$.
ⅱ) If $\tau_1<\tau<\tau_2$, then $0<\overset{\frown}{y}<y_P$. For all $y\in[0,\overset{\frown}{y}]$, $\rho'\le0$ holds, and then $\rho(\overset{\frown}{y})\le\rho(y)\le\tau$ for $y\in[0,\overset{\frown}{y}]$. Denote $\Lambda_{21}=[\rho(\overset{\frown}{y}),\tau]$, $\Lambda_{21}^*=\Lambda\cap\Lambda_{21}$, and $N_{21}=\{(x^+,y^+)\,|\,x^+=(1-p_1)x_{ET},\ y^+\in\Lambda_{21}^*\}$. Similarly, for all $y\in(\overset{\frown}{y},y_P]$, $\rho'>0$ holds, i.e., $\rho(\overset{\frown}{y})<\rho(y)\le\rho(y_P)$. Denote $\Lambda_{22}=(\rho(\overset{\frown}{y}),\rho(y_P)]$, $\Lambda_{22}^*=\Lambda\cap\Lambda_{22}$, and $N_{22}=\{(x^+,y^+)\,|\,x^+=(1-p_1)x_{ET},\ y^+\in\Lambda_{22}^*\}$. Thus, we have $N_{\rm eff}=N_2=N_{21}\cup N_{22}$.
ⅲ) If $\tau\ge\tau_2$, then $\overset{\frown}{y}\ge y_P$. For all $y\in[0,y_P]$, $\rho'\le0$ holds, i.e., $\rho(y_P)\le\rho(y)\le\tau$. Denote $\Lambda_{33}=[\rho(y_P),\tau]$ and $\Lambda_3=\Lambda\cap\Lambda_{33}$. Then $N_{\rm eff}=N_3=\{(x^+,y^+)\,|\,x^+=(1-p_1)x_{ET},\ y^+\in\Lambda_3\}$.
Case Ⅱ: $x_{ET}\ge x_{ET}^1$.
For this situation, $M_{\rm eff}=M_2$. Similar to the discussion in Case Ⅰ, we have:
ⅰ) If $\tau\le\tau_1$, then $N_{\rm eff}=N_4=\{(x^+,y^+)\,|\,x^+=(1-p_1)x_{ET},\ y^+\in\Lambda_4\}$, where $\Lambda_4=[\tau,\rho(y_{R_1})]$.
ⅱ) If $\tau_1<\tau<\tau_3$, then for $y\in[0,\overset{\frown}{y}]$ we have $N_{51}=\{(x^+,y^+)\,|\,x^+=(1-p_1)x_{ET},\ y^+\in\Lambda_{21}\}$. Similarly, for $y\in(\overset{\frown}{y},y_{R_1}]$, denote $\Lambda_{52}=(\rho(\overset{\frown}{y}),\rho(y_{R_1})]$, and then $N_{52}=\{(x^+,y^+)\,|\,x^+=(1-p_1)x_{ET},\ y^+\in\Lambda_{52}\}$. Therefore, $N_{\rm eff}=N_5=N_{51}\cup N_{52}$.
ⅲ) If $\tau\ge\tau_3$, then $N_{\rm eff}=N_6=\{(x^+,y^+)\,|\,x^+=(1-p_1)x_{ET},\ y^+\in\Lambda_6\}$ with $\Lambda_6=[\rho(y_{R_1}),\tau]$.
Denote $G_i(x_{ET},y_i)\in M$ and $G_i^+((1-p_1)x_{ET},y_i^+)\in N$, $i=0,1,2,\dots$, where $G_i^+=I(G_i)$. Since $G_i^+$ and $G_{i+1}$ lie on the same trajectory $\gamma_{G_i^+}$, we have $y_{i+1}=\varpi(y_i^+)$ and $y_{i+1}^+=\psi(y_i^+)$, where
$$\psi(y_i^+)\triangleq(1-q_2)\varpi(y_i^+)+\frac{\tau}{1+l\varpi(y_i^+)}.$$
If ∃ˆy∈N such that ψ(ˆy)=ˆy, then Model (2.4) admits an order-1 periodic trajectory. Next, we will investigate the monotonicity of ψ(y) with τ>0 for situations Ⅰ and Ⅱ.
Case Ⅰ: $x_{ET}^1<x_{ET}\le x_{ET}^2$.
ⅰ) If $\tau\le\tau_1$, $\rho(y)$ monotonically increases on $[0,y_P]$, and then the map $\psi(y)$ monotonically increases on $[0,y_{Q_1}]$ and monotonically decreases on $[y_{Q_2},+\infty)$.
ⅱ) If $\tau_1<\tau<\tau_2$, we have $\rho'(y)<0$ for $y\in[0,\overset{\frown}{y}]$ and $\rho'(y)>0$ for $y\in(\overset{\frown}{y},y_P]$. Denote $y_{S_1}=\min\{y:\psi(y)=y_{R^+}\}$ and $y_{S_2}=\max\{y:\psi(y)=y_{R^+}\}$. Then $\psi(y)$ monotonically increases on $[y_{S_1},y_{Q_1}]$ and $[y_{S_2},+\infty)$, and monotonically decreases on $[0,y_{S_1}]$ and $[y_{Q_2},y_{S_2}]$, respectively.
ⅲ) If $\tau\ge\tau_2$, $\rho(y)$ monotonically decreases on $[0,y_P]$, and then the map $\psi(y)$ monotonically decreases on $[0,y_{Q_1}]$ and monotonically increases on $[y_{Q_2},+\infty)$.
Case Ⅱ: $x_{ET}\ge x_{ET}^1$.
ⅰ) If $\tau\le\tau_1$, then $\psi'(y)>0$ for $y\in[0,y_{R_0}]$ and $\psi'(y)<0$ for $y\in[y_{R_0},+\infty)$.
ⅱ) If $\tau_1<\tau<\tau_3$, denote $y_{V_1}=\min\{y:\psi(y)=y_{R^+}\}$ and $y_{V_2}=\max\{y:\psi(y)=y_{R^+}\}$. Then $\psi(y)$ monotonically decreases on $[0,y_{V_1}]$ and $[y_{R_0},y_{V_2}]$, and monotonically increases on $[y_{V_1},y_{R_0}]$ and $[y_{V_2},+\infty)$.
ⅲ) If $\tau\ge\tau_3$, the map $\psi(y)$ monotonically decreases on $[0,y_{R_0}]$ and monotonically increases on $[y_{R_0},+\infty)$.
For Model (2.4) with $\tau=0$, if $y_0\equiv0$, then $y\equiv0$ holds. Thus Model (2.4) degenerates to
$$\begin{cases}\dfrac{dx}{dt}=rx(\ln K-\ln x), & x<x_{ET},\\ \Delta x=-p_1x(t), & x=x_{ET}.\end{cases} \qquad (4.1)$$
Let $x=\tilde x(t)$ be the solution of the equation
$$\frac{dx}{dt}=rx(\ln K-\ln x)$$
with initial value $\tilde x(0)=x_0\triangleq(1-p_1)x_{ET}$. Define
$$T_0\triangleq\frac{1}{r}\left(\ln\ln\frac{K}{(1-p_1)x_{ET}}-\ln\ln\frac{K}{x_{ET}}\right).$$
We have $\tilde x(T_0)=x_{ET}$ and $\tilde x(T_0^+)=x_0$. Thus, $z(t)=(\tilde x(t),0)$ ($(k-1)T_0<t\le kT_0$, $k\in\mathbb{N}^+$) is a natural enemy extinction periodic trajectory.
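The period $T_0$ can be checked against the closed-form solution of the Gompertz equation, $\ln(K/x(t))=\ln(K/x_0)\,e^{-rt}$. In the sketch below, $x_{ET}$ and $p_1$ are hypothetical illustration values, while $r$ and $K$ follow the simulation section:

```python
import math

r, K = 1.5, 120.0        # growth parameters (simulation section)
xET, p1 = 30.0, 0.2      # hypothetical threshold and pesticide kill rate

def x_tilde(t, x0):
    """Closed-form solution of dx/dt = r*x*(ln K - ln x) with x(0) = x0."""
    return K * math.exp(math.log(x0 / K) * math.exp(-r * t))

x0 = (1 - p1) * xET
T0 = (math.log(math.log(K / x0)) - math.log(math.log(K / xET))) / r
```

Starting from $x_0=(1-p_1)x_{ET}$, the pest level hits the threshold $x_{ET}$ exactly at $t=T_0$, confirming the period of the natural enemy extinction trajectory.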
Theorem 5. The natural enemy extinction periodic trajectory $z(t)=(\tilde x(t),0)$ ($(k-1)T_0<t\le kT_0$, $k\in\mathbb{N}^+$) is orbitally asymptotically stable if $q_2>\hat q$, where
$$\hat q\triangleq1-\tau l-\frac{(\ln K-\ln x_{ET})\exp\left(-\displaystyle\int_0^{T_0}\left(r(\ln K-\ln\tilde x)-r+\frac{eb\tilde x^2}{(\tilde x+g)(1+h\tilde x)}-d\right)dt\right)}{(1-p_1)(\ln K-\ln(1-p_1)x_{ET})}.$$
Proof. For Model (4.1), we have
$$I_1(x,y)=-p_1x,\qquad I_2(x,y)=-q_2y+\frac{\tau}{1+ly},\qquad \omega(x,y)=x-x_{ET}.$$
Then
$$\frac{\partial\chi_1}{\partial x}=r(\ln K-\ln x)-r-\frac{bxy(x+hgx+2g)}{[(x+g)(1+hx)]^2},\qquad \frac{\partial\chi_2}{\partial y}=\frac{ebx^2}{(x+g)(1+hx)}-d,$$
$$\frac{\partial I_1}{\partial x}=-p_1,\quad \frac{\partial I_1}{\partial y}=0,\quad \frac{\partial I_2}{\partial x}=0,\quad \frac{\partial I_2}{\partial y}=-q_2-\frac{l\tau}{(1+ly)^2},\quad \frac{\partial\omega}{\partial x}=1,\quad \frac{\partial\omega}{\partial y}=0.$$
Through calculation, we have
$$\tilde\kappa=\frac{(1-q_2-l\tau)(1-p_1)(\ln K-\ln(1-p_1)x_{ET})}{\ln K-\ln x_{ET}}$$
and
$$\int_0^{T_0}\left(\frac{\partial\chi_1}{\partial x}+\frac{\partial\chi_2}{\partial y}\right)\bigg|_{(\tilde x,0)}dt=\int_0^{T_0}\left(r(\ln K-\ln\tilde x)-r+\frac{eb\tilde x^2}{(\tilde x+g)(1+h\tilde x)}-d\right)dt.$$
Thus,
$$\hat\rho=\tilde\kappa\exp\left(\int_0^{T_0}\left(r(\ln K-\ln\tilde x)-r+\frac{eb\tilde x^2}{(\tilde x+g)(1+h\tilde x)}-d\right)dt\right).$$
Therefore, if q2>ˆq, we have ˆρ<1, and by Lemma 1, z(t)=(˜x(t),0) ((k−1)T0<t≤kT0, k∈N+) is orbitally asymptotically stable.
After a single impulse, the points $P$ and $R_1$ are mapped to $P^+\left((1-p_1)x_{ET},(1-q_2)y_P+\frac{\tau}{1+ly_P}\right)$ and $R_1^+\left((1-p_1)x_{ET},(1-q_2)y_{R_1}+\frac{\tau}{1+ly_{R_1}}\right)$, respectively. Denote $W((1-p_1)x_{ET},\tau)$.
Case Ⅰ: x1ET<xET≤x2ET.
Define
$$q_2^*\triangleq1-\frac{y_{Q_2}}{2y_P},\qquad l^*\triangleq\frac{\sqrt{1+4y_P(1-q_2)/(y_{Q_1}-(1-q_2)y_P)}-1}{2y_P},$$
$$\tilde\tau_1\triangleq(1+ly_P)(y_{Q_1}-(1-q_2)y_P),\qquad \tilde\tau_2\triangleq(1+ly_P)(y_{Q_2}-(1-q_2)y_P),$$
$$\hat\tau_1=\frac{(1-q_2)[1+ly_{Q_2}/(1-q_2)]^2}{4l},\qquad \hat\tau_2=\frac{(1-q_2)[1+ly_{Q_1}/(1-q_2)]^2}{4l},\qquad \tau_4\triangleq\frac{(1-q_2)(1+ly_P)}{l}.$$
Obviously, τ4<τ2 and for q2≤q∗2, we have l>max{l∗,0}.
1) For τ=˜τ2, we have ψ(yQ2)=yP+=yQ2.
2) For τ>˜τ2, we have ψ(yQ2)=yP+>yQ2. Then
● 2-a) for τ≥τ2, W is the highest point after the pulse, while P+ is the lowest. Then ψ(τ)<τ, ψ(yP+)>yP+, and thus ∃y′∈(yP+,τ) such that ψ(y′)=y′.
● 2-b) for ˜τ2<τ<τ2, if τ≥ˆτ2, then ρ(⌢y)≥yQ2. Since the point R+ is the lowest point after the pulse, then ψ(yR+)>yR+. If τ>τ4, we have τ>yP+. Then W is the highest point after the impulse, i.e., ψ(τ)<τ. If τ≤τ4, we have τ≤yP+. Then P+ is the highest point after the impulse, i.e., ψ(yP+)<yP+. Combine the above two aspects and it can be concluded that ∃y″∈(yR+,max{τ,yP+}) such that ψ(y″)=y″. While for τ<ˆτ2, we have ρ(⌢y)<yQ2. In such a case, ψ is not defined on [ρ(⌢y),yQ2) and it is uncertain whether a fixed point of ψ exists or not.
3) When ˜τ1<τ<˜τ2, then yQ1<yP+<yQ2. If τ≥ˆτ1, then ρ(⌢y)≥yQ1, i.e., ψ does not have a fixed point. While for τ<ˆτ1, we have ρ(⌢y)<yQ1. In such a case, it is uncertain whether a fixed point of ψ exists or not.
4) When 0<τ<˜τ1, and then ψ(yQ1)=yP+<yQ1. If τ>τ1, the point R+ is the lowest point after the pulse, then ψ(yR+)>yR+, i.e., ψ(y) has a fixed point on (yR+,yQ1). If τ≤τ1, the point W is the lowest point after the pulse, and then ψ(τ)>τ, i.e., ψ(y) has a fixed point on (τ,yQ1).
Case Ⅱ: xET≥x1ET.
Define $\tilde\tau\triangleq(1+ly_{R_1})(y_{R_0}-(1-q_2)y_{R_1})$.
1) For τ=˜τ, we have ψ(yR0)=yR+1=yR0.
2) For τ>˜τ, we have ψ(yR0)=yR+1>yR0. Then
● 2-a) for τ≥τ3, since R+1 and W are the lowest and highest points after the pulse, then we have ψ(yR+1)>yR+1, ψ(τ)<τ, and thus ∃y′∈(yR+1,τ) such that ψ(y′)=y′;
● 2-b) for ˜τ<τ<τ3 and if τ>yR+1, then ψ(τ)<τ; if τ≤yR+1, then ψ(yR+1)>yR+1. On the other hand, take the point A in a small neighborhood near the point R0, i.e., A∈⋃(R0,δ). A is above R0. By the continuity of the impulse function and the Poincaré map, we have ψ(yA)>yA. Therefore, the map ψ(y) has a fixed point on (yA,max{yR+1,τ}).
3) When τ<˜τ, then ψ(yR0)=yR+1<yR0. If τ>τ1, we have ψ(yR+)>yR+. If τ≤τ1, we have ψ(τ)>τ. Combine the above two aspects and it can be concluded that ∃y″∈(max{yR+,τ},yR0) such that ψ(y″)=y″.
To sum up, we have:
Theorem 6. For $x_{ET}\ge x_{ET}^1$, Model (2.4) admits an order-1 periodic trajectory. For $x_{ET}^1<x_{ET}\le x_{ET}^2$, Model (2.4) admits an order-1 periodic trajectory if $\tau<\tilde{\tau}_1$ or $\tau\ge\max\{\tilde{\tau}_2,\hat{\tau}_2\}$.
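The fixed-point arguments behind Theorem 6 can be checked numerically by iterating the successor map $\psi$: flow from the phase set until the trajectory reaches the pulse set $x=x_{ET}$, then apply the impulse. The sketch below is an illustration under stated assumptions rather than the authors' code: the vector field is reconstructed from the integrand terms used in the proofs (prey growth $rx\ln(K/x)$ with the variable-search-rate response $bx^2/((x+g)(1+hx))$), the impulse follows the mapping of $P$ to $P^+$, the parameter values mirror the simulation section, and the starting level $y_0=40$ is arbitrary.

```python
import math

# Parameters from the simulation section; p1, q2, l, tau, xET as in Figure 5(b).
r, K, b, h, e, d, g = 1.5, 120.0, 0.595, 0.92, 0.8, 0.5, 0.8
p1, q2, l, tau, xET = 0.5, 0.11, 0.02, 5.0, 62.667

def f(x, y):
    pred = b * x * x / ((x + g) * (1.0 + h * x))   # functional response
    return (r * x * math.log(K / x) - pred * y,    # prey
            y * (e * pred - d))                    # predator

def rk4_step(x, y, dt):
    k1 = f(x, y)
    k2 = f(x + 0.5*dt*k1[0], y + 0.5*dt*k1[1])
    k3 = f(x + 0.5*dt*k2[0], y + 0.5*dt*k2[1])
    k4 = f(x + dt*k3[0], y + dt*k3[1])
    return (x + dt*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]) / 6.0,
            y + dt*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]) / 6.0)

def psi(y0, dt=0.01, tmax=500.0):
    """Flow from ((1-p1)*xET, y0) until x reaches xET, then apply the impulse."""
    x, y = (1.0 - p1) * xET, y0
    t = 0.0
    while t < tmax:
        xn, yn = rk4_step(x, y, dt)
        if x < xET <= xn:                          # upward crossing of x = xET
            frac = (xET - x) / (xn - x)            # linear interpolation to the hit
            y_hit = y + frac * (yn - y)
            return (1.0 - q2) * y_hit + tau / (1.0 + l * y_hit)
        x, y, t = xn, yn, t + dt
    raise RuntimeError("no impulse within tmax")

ys = [40.0]
for _ in range(25):
    ys.append(psi(ys[-1]))
print(ys[-1])   # approximate order-1 fixed point of psi
```

When the order-1 trajectory is orbitally stable (as in Figure 5(b)), the iterates of $\psi$ settle on its fixed point, which is the numerical counterpart of the intermediate-value arguments in the case analysis above.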
Let $\tilde{z}(t)=(\xi(t),\eta(t))$ ($(k-1)T<t\le kT$, $k\in\mathbb{N}^+$) be the $T$-periodic trajectory of system (2.4) with initial value $A_0\big((1-p_1)x_{ET},y_{A_0}\big)$. The trajectory intersects $M$ at $A_0^-(\xi(T),\eta(T))$, where $\xi(T)=x_{ET}$, $\eta(T)=y_0$, and is then pulsed to $N$ at $A_0^+(\xi(T^+),\eta(T^+))$. Thus, $\xi(T^+)=(1-p_1)x_{ET}$ and $\eta(T^+)=(1-q_2)y_0+\frac{\tau}{1+ly_0}=y_{A_0}$.
Theorem 7. The $T$-periodic trajectory $\tilde{z}(t)=(\xi(t),\eta(t))$ ($(k-1)T<t\le kT$, $k\in\mathbb{N}^+$) with initial value $\big((1-p_1)x_{ET},\,(1-q_2)y_0+\frac{\tau}{1+ly_0}\big)$ is orbitally asymptotically stable if
$$\int_0^{T}\left(r(\ln K-\ln x)-r-\frac{bxy(x+hgx+2g)}{[(x+g)(1+hx)]^2}+\frac{ebx^2}{(x+g)(1+hx)}-d\right)\bigg|_{(\xi(t),\eta(t))}dt<\ln\hat{\kappa},$$
where
$$\hat{\kappa}=\frac{r(\ln K-\ln x_{ET})-\dfrac{bx_{ET}y_0}{(x_{ET}+g)(1+hx_{ET})}}{(1-p_1)\left(1-q_2-\dfrac{l\tau}{(1+ly_0)^2}\right)\left[r\big(\ln K-\ln(1-p_1)x_{ET}\big)-\dfrac{b(1-p_1)x_{ET}\big((1-q_2)y_0+\frac{\tau}{1+ly_0}\big)}{\big((1-p_1)x_{ET}+g\big)\big(1+h(1-p_1)x_{ET}\big)}\right]}.$$
Proof. The proof is analogous to that of Theorem 5 and is therefore omitted.
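As a worked illustration, the multiplier $\hat{\kappa}$ in Theorem 7 translates directly into code. Note the value $y_0$ (the predator level where the periodic trajectory meets the pulse set) is not derived here, so the number below is a placeholder chosen only to show how each factor of the formula is assembled; the remaining parameters follow the simulation section.

```python
import math

# Parameters from the simulation section; y0 is a hypothetical placeholder.
r, K, b, h, g = 1.5, 120.0, 0.595, 0.92, 0.8
p1, q2, l, tau, xET = 0.5, 0.11, 0.02, 5.0, 62.667
y0 = 30.0  # placeholder: not derived from the periodic trajectory

def percap_pred(x, y):
    # per-capita predation term b*x*y / ((x+g)(1+h*x))
    return b * x * y / ((x + g) * (1.0 + h * x))

# numerator: vector field factor at the pre-impulse point (xET, y0)
num = r * (math.log(K) - math.log(xET)) - percap_pred(xET, y0)

# post-impulse point ((1-p1)*xET, (1-q2)*y0 + tau/(1+l*y0))
xp = (1.0 - p1) * xET
yp = (1.0 - q2) * y0 + tau / (1.0 + l * y0)

# denominator: impulse Jacobian factors times the field factor after the pulse
den = (1.0 - p1) * (1.0 - q2 - l * tau / (1.0 + l * y0) ** 2) * (
    r * (math.log(K) - math.log(xp)) - percap_pred(xp, yp))

kappa_hat = num / den
print(kappa_hat)
```

The stability condition then amounts to comparing the integrated divergence along the periodic trajectory against `math.log(kappa_hat)`.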
For the purpose of simulation, it is assumed that r=1.5, K=120, b=0.595, h=0.92, e=0.8, d=0.5 and g=0.8.
When $b=0.595$, the interior equilibrium $E_1^*=(54.7,103.1)$ is locally asymptotically stable, as presented in Figure 1. When $b$ increases to $0.61$, a limit cycle occurs, as presented in Figure 2. The effect of the maximum search rate on pests and natural enemies at the coexistence steady state is presented in Figure 3: $x_1^*$ decreases with increasing $b$, while $y_1^*$ first increases and then decreases with increasing $b$. Therefore, increasing the search rate for pests helps to reduce the number of pests.
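The reported stability of $E_1^*$ at $b=0.595$, and the boundedness guaranteed by Theorem 1, can be reproduced with a short simulation of the continuous model. This is a minimal sketch under assumptions: the vector field is reconstructed from the terms appearing in the analysis, and the initial condition $(60,100)$ is an arbitrary point near $E_1^*$.

```python
import math

# Parameters from the simulation section (b = 0.595: stable interior equilibrium).
r, K, b, h, e, d, g = 1.5, 120.0, 0.595, 0.92, 0.8, 0.5, 0.8

def f(x, y):
    pred = b * x * x / ((x + g) * (1.0 + h * x))   # variable-search-rate response
    return (r * x * math.log(K / x) - pred * y,
            y * (e * pred - d))

def rk4(x, y, dt, steps):
    """Fixed-step RK4 integration; returns the full trajectory."""
    traj = [(x, y)]
    for _ in range(steps):
        k1 = f(x, y)
        k2 = f(x + 0.5*dt*k1[0], y + 0.5*dt*k1[1])
        k3 = f(x + 0.5*dt*k2[0], y + 0.5*dt*k2[1])
        k4 = f(x + dt*k3[0], y + dt*k3[1])
        x += dt * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0]) / 6.0
        y += dt * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1]) / 6.0
        traj.append((x, y))
    return traj

traj = rk4(60.0, 100.0, 0.01, 20000)   # integrate to t = 200
xs, ys = zip(*traj)
```

The trajectory stays positive and bounded and oscillates around $E_1^*=(54.7,103.1)$; rerunning with $b=0.61$ instead should show sustained oscillations consistent with the limit cycle in Figure 2.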
When $p_1=0.25$, $q_1=0.5$ and $q_2=0.007$, the positive equilibrium of the $G_1$ region is $E_1^*=(54.707,103.1337)$, that of the $G_2$ region is $E_2^*=(0.1229,141.532)$, and that of the $G_3$ region is $E_3^*=(92.5247,20.443)$. When $x_{ET}=100$ and $y_{ET}=110$, we have $x_2^*<x_3^*<x_{ET}$, $x_1^*<x_{ET}$, $y_{ET}^1=3.6997$ and $y_{ET}^2=43.0879$. The sliding mode domain of Model (3.1) on $\Sigma_1$ can be represented as $\Sigma_1^1=\{(x,y)\in\Sigma_1\mid 3.6997<y<43.0879\}$, and there is no pseudo-equilibrium on $\Sigma_1$, as illustrated in Figure 4(a). When $x_{ET}=80$ and $y_{ET}=110$, we have $x_2^*<x_{ET}<x_3^*$, $x_1^*<x_{ET}$, $y_{ET}^1=45.3593$ and $y_{ET}^2=77.0172$. The sliding mode domain of Model (3.1) on $\Sigma_1$ can be represented as $\Sigma_1^1=\{(x,y)\in\Sigma_1\mid 45.3593<y<77.0172\}$, and there is no pseudo-equilibrium on $\Sigma_1$, as presented in Figure 4(b).
When $q_2=0.11$ and $x_{ET}=62.667$, the periodic trajectory of Model (2.4) is presented by varying the killing rate $p_1$ of the prey, the amount of predator released $\tau$, and the value of the parameter $l$. When $\tau=0$, $p_1=0.5$ and $l=0.02$, the natural-enemy extinction periodic trajectory is orbitally asymptotically stable (Figure 5(a)). To prevent the extinction of natural enemies, they must be released in an appropriate amount. When $\tau=5$, the natural-enemy extinction periodic trajectory loses its stability and an order-1 periodic trajectory occurs (Figure 5(b)).
Next, the accurate domains of $M$ and $N$ for the different cases are presented, together with the order-1 periodic trajectory (Figure 6). The accurate domains of $M$ and $N$ are marked by red and blue solid lines, respectively. When $p_1=0.5$, $H$ lies on the left side of the phase set; schematic diagrams of the exact domains of the phase set and pulse set, and the order-1 periodic trajectories for the different cases, are presented in Figure 6(a)–(c). When $p_1=0.6$, $H$ lies on the right side of the phase set; the corresponding schematic diagrams and order-1 periodic trajectories are presented in Figure 6(d)–(f).
When $p_1=0.5$, $l=0.02$, $\tau=10$, $b=0.595$ and $x_{ET}=50$, $E^*$ is locally asymptotically stable, and Model (2.4) admits an order-1 periodic trajectory for $x_{ET}<x_{ET}^1$, as presented in Figure 7.
Finally, order-$k$ periodic solutions are presented for different $\tau$ and $l$. When $b=0.595$, $E^*$ is locally asymptotically stable. For control parameters $l=0.03$, $\tau=120$, $x_{ET}=62.667$ or $l=0.002$, $\tau=40$, $x_{ET}=62.667$, Model (2.2) admits an order-$k$ periodic trajectory, as presented in Figure 8(a),(b). When $b=0.61$, Model (2.1) admits a limit cycle. For $x_{ET}=50$, Model (2.2) admits an order-$k$ periodic trajectory, as presented in Figure 8(c)–(f).
Pests are important factors harming agricultural production. To effectively control the spread of pests, a pest–natural enemy model with a variable search rate and threshold-dependent feedback control was proposed. The dynamic properties such as the existence, positivity, and boundedness of solutions of the continuous system were discussed, and the results show that pests and natural enemies will not increase indefinitely due to system constraints (Theorem 1). In addition, it is shown that the natural enemy's search rate $b$ plays an important role in determining the dynamics of the system: when $b$ is smaller than the level $\bar{b}(d;0)$, the predators in the system go extinct, and when $b$ is greater than $\bar{b}(d;0)$, there exists a steady state $E^*$ at which the natural enemies and the pests keep a balance. Moreover, the steady state is locally asymptotically stable as long as $U(x^*)>0$ (Theorems 2–4, Figure 1). When $U(x^*)<0$, the stability is lost and a limit cycle surrounding $E^*$ emerges (Figure 2). The relationship between the number of pests (natural enemies) and the maximum search rate at the steady state is presented in Figure 3.
To prevent the spread of pests, two different types of control strategies were adopted. The first is a non-smooth control, and the model is described by a Filippov system with two warning thresholds. By analyzing the sliding dynamics, we discussed the existence of the pseudo-equilibrium $E_p$ (Figure 4). The pseudo-equilibrium $E_p$ is a new state of the control system at which the pests and the natural enemies keep a balance and the pest population can be controlled at appropriate levels, which in turn indicates the effectiveness of the control. The second is an intermittent control with an economic threshold: when the pests reach the economic threshold, manual intervention is carried out by spraying pesticides and releasing a certain amount of natural enemies. For the control model, the accurate domain of the phase set was presented and the Poincaré map was constructed, through which the conditions for the existence of order-1 periodic trajectories were obtained (Theorems 5 and 6, Figures 5–7). The order-1 periodic solution provides a possibility for periodic pest control, thus avoiding the need and difficulty of continuously monitoring the pest population. The stability of the order-1 periodic trajectory was also verified (Theorems 5 and 7). This ensures the robustness of the control: even in the presence of condition-monitoring errors, the system still converges to the periodic solution, thus providing a guarantee for periodic control. We also presented order-$k$ periodic solutions in numerical simulations (Figure 8), which further illustrate the complexity of the control system and the necessity of maintaining its stability. The results illustrate the complex dynamics of the proposed models, which can serve as a valuable reference for the advancement of sustainable agricultural practices and the control of pests.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
The work was supported by the National Natural Science Foundation of China (No. 11401068).
The authors declare that there are no known competing financial interests that could have influenced the work reported in this paper.