
Microbiome dynamics are inherently stochastic. A microbiome, moreover, is not just the assemblage of living microorganisms present in a defined environment; it also encompasses the entire spectrum of molecules, structural elements, metabolites, and mobile genetic elements produced by these microorganisms, known as their theater of activity [1]. Consequently, interpreting the causal link between microbiomes and human health, a critical aspect of microbiome research [2,3], remains challenging.
Environmental fluctuations can induce variability in interactions within a microbiome and between microbes and their environment or host. For instance, some microbial species may exhibit varying levels of sensitivity to temperature, impacting their interactions with other microbes or with the environment/host depending on the ambient temperatures. This variability introduces errors in the estimation of microbial interactions, which, in turn, affect our ability to understand the drivers of dynamics. It becomes imperative to translate microbial interactions, which govern the microbiome's population dynamics, into a high-level property of a microbiome, such as its resilience.
Theories from dynamical systems, adapted from macroecology, offer insight into understanding the dynamics of the microbiome in different environments [4]. The resilience of a microbiome, i.e., its ability to maintain and recover function in the face of perturbations, such as those caused by antibiotics or opportunistic pathogens, can be studied by employing the concept of stability in dynamic systems. Microbiomes that more readily return to stationarity (i.e., when the dynamics are captured by white noise about an equilibrium) when faced with perturbations are considered to be more resilient. Figure 1 illustrates the concept of the resilience of the microbiome. The impact of a perturbation, classified as a "pulse" or a "press," on a microbial community is determined by the duration and magnitude of the perturbation [5]. Following a perturbation, a microbial community in a healthy state (eubiosis) may either transition to an unstable and transient state or recover to its initial state. In some instances, the perturbation might drive the community towards the formation of an alternative healthy state or an unhealthy stable state (dysbiosis) associated with disease. Recent empirical evidence supports these theoretical concepts. Studies of gut microbiota dynamics have revealed intrinsic stability patterns and catastrophic shifts following antibiotic perturbations [6,7]. Moreover, mechanistic research has elucidated how commensals regulate host immunity, such as the suppression of retinoic acid synthesis to control IL-2 activity and prevent dysbiosis [8]. Beyond the gut, investigations across different body sites demonstrate community-specific resilience patterns, from the stability of the vaginal microbiome [9] to oral ecosystems [10]. Building on these findings, recent comprehensive reviews have established resilience as the microbiota's universal capacity to resist and recover from perturbations [11,12], with longitudinal studies revealing personalized resilience features, such as the individual-specific stability observed during the Mars500 mission [13]. Therefore, understanding how a microbiome responds to perturbations is crucial to preventing disease states and promoting health.
The concept of resilience has been intensively studied in macroecological systems. In fact, the ecological term "resilience" was originally coined by Holling in 1973 [14]. In this seminal article, the author defined resilience as a "measure of the persistence of ecosystems and their ability to absorb change and disturbance and still maintain the same relationships between populations or state variables." This concept was subjected to a comprehensive examination by Pimm and Lawton 1977 [15], DeAngelis 1980 [16], and Cottingham and Carpenter 1994 [17]. The index most frequently used to calculate resilience in ecosystems is based on the eigenvalues of an ordinary differential equation system near its equilibrium. Resilience is computed as the negative reciprocal of the maximum eigenvalue, providing a measure of the time it takes the ecosystem to recover from a perturbation. Subsequently, Neubert and Caswell 1997 [18] introduced four quantities (asymptotic resilience, reactivity, and two quantities of the amplification envelope) to measure the long- and short-term responses of an ecosystem to pulse perturbations. More recently, Arnoldi et al. [19,20] attempted to unify all previous resilience measures for a community, making them more practical by extending the analysis of ecosystem responses to perturbations on all time scales.
All the previously mentioned definitions of resilience are based on the theory of ordinary differential equations, in which an equilibrium state is assumed to be a fixed point of a deterministic dynamic system. However, due to continuously fluctuating environments, equilibrium states may not be fixed points but fluctuating stationary distributions. To address this issue, Ives 1995 [21] proposed a definition of stochastic resilience based on a discrete-time linear stochastic model. This article used the process error, modeled as a sequence of normal random variables with a mean of 0 and a constant variance, to account for the variability in interactions between species. Subsequently, Ives et al., 2003 [22] extended this definition to three additional stability measures using a first-order discrete-time multivariate autoregressive model (MAR). Buckwar and Kelly 2014 [23] redefined the notions of asymptotic resilience, reactivity, and the amplification envelope, proposed by Neubert and Caswell 1997 [18], in terms of the second moment of a linear Ito stochastic differential equation system, with the goal of investigating the effect of persistent stochastic perturbations on a deterministic system.
Over the past decade, the conceptualization of resilience within microbiome research has evolved significantly through the integration of stochastic modeling approaches. Zaoli and Grilli 2021 [26] analyzed longitudinal human gut microbiome data, revealing that while most operational taxonomic units (OTUs) exhibit stable dynamics with fluctuations around a constant average, a minority display nonstationary behavior with abrupt transitions in abundance, potentially indicating shifts between alternative stable states. This study underscored the importance of considering stochastic fluctuations and the potential for multiple stable states in microbial communities.
Building on this foundation, Wolff et al., 2023 [27] investigated the daily dynamics of intraspecific genetic variation in the human gut microbiome. They discovered that approximately 80% of the strains analyzed exhibited abundance fluctuations that could be predicted using a stochastic logistic model, suggesting that the stability and macroecological properties of the human gut microbiome emerge at the strain level. Hayashi et al., 2024 [24] conducted time-series analyses of experimental microbiomes, demonstrating that both deterministic and stochastic ecological processes drive the divergence of alternative states. This work highlighted the role of stochasticity in generating multiple stable states within microbial communities. Concurrently, Ponciano et al., 2024 [25] emphasized the necessity of integrating stochastic models to predict a population's persistence probabilities within the vaginal microbiome, advocating for a shift from static snapshots to dynamic modeling approaches. Collectively, these studies illustrate a progressive shift towards embracing stochastic modeling to more accurately capture the resilience and dynamic behavior of microbiome communities.
Although well studied, existing resilience measures are inadequate for microbiome studies because stochasticity, known to affect the stability of a dynamical system, does not influence these measures. Therefore, there is a pressing need to incorporate stochasticity, such as environmental noise and uncertainty, into the mechanistic models used to identify causal mechanisms using time-series microbiome data. In nature, the environment often fluctuates randomly and without memory, making white noise an excellent candidate to account for environmentally driven stochastic effects. Given the apparent need for such a model, we propose a stochastic generalized Lotka–Volterra (SgLV) model to describe the temporal dynamics of a community of microbial species. Stochasticity is induced in the model by adding white noise to the model parameters. Then, based on the linearization of our proposed SgLV model, we develop resilience measures to quantify resilience that can be applied to time-series microbiome data.
In our modeling framework, we represent environmental stochasticity using white noise. Specifically, we modify a constant parameter, such as \(\alpha\), to include stochastic fluctuations by redefining it as \(\bar{\alpha} = \alpha + \tau \frac{dW}{dt}\), where \(W(t)\) is standard Brownian motion, and \(\tau\) represents the intensity of noise. Consequently, \(\bar{\alpha}\) transforms from a fixed parameter into a stochastic process. Within a sufficiently small time interval \([0, t]\), we approximate the term \(\frac{dW}{dt}\) by \(\frac{W(t)}{t}\), which follows a normal distribution \(N(0, \frac{1}{t})\). Thus, each time we simulate the model, the value of \(\bar{\alpha}\) is drawn from the normal distribution \(N(\alpha, \tau^2/t)\). This way of incorporating noise is more realistic than that in Ives et al., 2003 [22], since the variance of the noise is not constant but changes over time.
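As a small illustration of this construction (ours, not from the paper; the numbers are hypothetical), the following Python snippet draws samples of \(\bar{\alpha}\) over a step of length \(\Delta t\) and checks that their variance is approximately \(\tau^2/\Delta t\).

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, tau, dt = 0.8, 0.1, 0.01      # hypothetical values; dt plays the role of t

# Over a step of length dt, W(dt) ~ N(0, dt), so tau * W(dt)/dt ~ N(0, tau**2/dt).
dW = rng.normal(0.0, np.sqrt(dt), size=100_000)
alpha_bar = alpha + tau * dW / dt

print(np.mean(alpha_bar))            # close to alpha
print(np.var(alpha_bar))             # close to tau**2 / dt = 1.0
```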
The foundational work by Ives et al., 2003 [22] established critical methodologies for analyzing ecological community stability using discrete-time multivariate autoregressive (MAR) models, providing robust methods for estimating stability and ecological interactions from empirical time-series data. Their pioneering approach revealed important relationships between species interactions and environmental variability, significantly advancing the quantitative study of ecological dynamics. In our manuscript, we extend this foundational framework from discrete time to continuous time by developing stochastic generalized Lotka–Volterra models, which are particularly suited to capturing microbiome dynamics characterized by continuous growth and interaction processes. Moreover, we propose novel resilience metrics specifically tailored to microbiome community dynamics, including measures capable of quantifying recovery following perturbations and assessing how stochasticity influences the pace of community restoration. Thus, while conceptually rooted in the seminal work of Ives et al., 2003 [22], our model provides significant advancements optimized for addressing the unique complexities inherent in microbiome research.
The remainder of the paper is organized into three sections. First, we develop a general theoretical framework on how to calculate four resilience measures for the proposed SgLV model. Second, we demonstrate the practicality of our framework by applying it to a stochastic two-species competitive model to illustrate the effects of environmental noise on the resilience of the stochastic model. Lastly, we have a brief discussion on the possibilities of applying our framework to real microbiome data.
In this section, we develop a general theoretical framework to calculate four resilience measures that quantify the recovery of a microbial community from pulse perturbations (single shocks or perturbations of relatively short duration). In contrast to previous works that used ordinary differential equation (ODE) systems [18,19,20] and/or stochastic discrete-time systems [21,22], we use a stochastic differential equation (SDE) system to describe the temporal dynamics of species within a microbiome community and then study its transient dynamics under different pulse perturbations. We utilize white noise to represent random environmental fluctuations without memory, capturing the fluctuations observed in microbiome time-series data; we consider these fluctuations to be the effects of persistent environmental perturbations. Mathematically, we add white noise, described as the "derivative of Brownian motion," to a deterministic gLV model. We then obtain a nonlinear stochastic gLV model. Next, we linearize this nonlinear SDE system at its positive stationary distribution to get a linear SDE system. On the basis of two works by Arnoldi et al. [19,20], we indirectly establish four resilience measures of our stochastic gLV model, namely the instantaneous return rate ( \mathcal{R}_t^{\text{ins}} ), the average return rate ( \mathcal{R}_t^{\text{ave}} ), the asymptotic resilience ( \mathcal{R}_{\infty} ), and the convergence rate ( \mathcal{R}_c ), through the second moment of solutions of the corresponding linearized SDE system. These four resilience quantities, of which the first two are functions of time and the last two are single numbers, measure how strong the recovery dynamics of our original stochastic system is after a pulse perturbation is applied. Figure 2 shows the scheme of the theoretical method used to calculate the resilience of a stochastic generalized Lotka–Volterra system of a microbial community of N species. The details of this method will be presented in the following four subsections.
The widely used mechanistic model for microbiome studies is the deterministic generalized Lotka–Volterra model, which is given by
\begin{equation} \frac{dX_i}{dt} = X_i\left[\alpha_i+\sum\limits_{j = 1}^N\beta_{ij}X_j\right], \quad i = 1, \cdots, N, \end{equation} | (2.1) |
in which X_i is the abundance of species i in a microbiome community. The parameter \alpha_i represents the intrinsic growth rate of species i . The interaction between species i and species j ( j\neq i ) is captured by the parameter \beta_{ij} . When i = j , the parameter \beta_{ij} takes the form \beta_{ii} = -\alpha_ic_i < 0 , where c_i is the coefficient of negative intraspecific interaction, representing the inverse of the carrying capacity of species i in isolation.
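For concreteness, here is a minimal sketch (ours, not code from the paper) of integrating the deterministic model (2.1) for a hypothetical two-species competitive community with scipy; the coexistence equilibrium solves \alpha+\beta E^* = 0 .

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical two-species parameters: alpha_i > 0, beta_ii < 0,
# off-diagonal beta_ij < 0 (competition).
alpha = np.array([1.0, 0.8])
beta = np.array([[-1.0, -0.3],
                 [-0.2, -1.0]])

def glv(t, x):
    # dX_i/dt = X_i * (alpha_i + sum_j beta_ij X_j)
    return x * (alpha + beta @ x)

sol = solve_ivp(glv, (0.0, 50.0), [0.1, 0.1])
print(sol.y[:, -1])                       # trajectory settles near E*
print(-np.linalg.solve(beta, alpha))      # E* = -beta^{-1} alpha, the coexistence equilibrium
```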
To characterize the microenvironmental fluctuations and noises in pairwise interactions between species within a microbiome community, we propose a general stochastic generalized Lotka–Volterra (SgLV) model that takes the following form
\begin{equation} dX_i = X_i\left[\alpha_i+\sum\limits_{j = 1}^N\beta_{ij}X_j\right]dt + X_i\sum\limits_{j = 1}^N\tau_{ij}dW_j, \quad i = 1, \cdots, N. \end{equation} | (2.2) |
Notice that we perturbed each interaction coefficient \beta_{ij} in the system (2.1) by replacing each term \beta_{ij}X_j with \beta_{ij}X_j+\tau_{ij}\frac{dW_j}{dt} ( i, j = 1, \cdots, N ), where (W_1, \cdots, W_N)^T is an N -dimensional standard Brownian motion.
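A minimal Euler–Maruyama sketch of simulating (2.2) follows; it is our own illustration, the noise intensity matrix and the other parameters are hypothetical, and the small positivity floor is a numerical safeguard rather than part of the model.

```python
import numpy as np

def simulate_sglv(alpha, beta, Gamma, x0, T=100.0, dt=1e-3, seed=0):
    """Euler-Maruyama simulation of
       dX_i = X_i (alpha_i + sum_j beta_ij X_j) dt + X_i sum_j tau_ij dW_j."""
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    X = np.empty((n_steps + 1, len(x0)))
    X[0] = x0
    for k in range(n_steps):
        x = X[k]
        dW = rng.normal(0.0, np.sqrt(dt), size=len(x0))
        step = x * (alpha + beta @ x) * dt + x * (Gamma @ dW)
        X[k + 1] = np.maximum(x + step, 1e-12)   # numerical positivity floor
    return X

# Hypothetical two-species competitive community with weak noise.
alpha = np.array([1.0, 0.8])
beta = np.array([[-1.0, -0.3], [-0.2, -1.0]])
Gamma = 0.05 * np.eye(2)                         # noise intensity matrix (tau_ij)
X = simulate_sglv(alpha, beta, Gamma, x0=np.array([0.5, 0.5]))
print(X.mean(axis=0))                            # rough center of the stationary distribution
```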
For convenience, we introduce our notation that will be used throughout this paper. First, we need to specify an appropriate completed filtered probability space. Let \Omega = \{\omega\in C({\mathbb{R}}, {\mathbb{R}}^N), \omega(0) = 0\} , let \mathcal{F} be the Borel \sigma -algebra on \Omega , and let {\mathbb{P}} be the measure induced by \{W = W(t)\}_{t\in{\mathbb{R}}} , a two-sided N -dimensional Brownian motion. Then the elements of \Omega will be identified with paths of the N -dimensional Brownian process \omega(t) = W(t, \omega) . Now we consider the {\mathbb{P}} -completion of \mathcal{F} , also denoted by \mathcal{F} ; that is, \mathcal{F} contains all {\mathbb{P}} -null sets. The filtration \{\mathcal{F}_t\} is the canonical filtration generated by the Brownian motion \{W(t)\}_{t\ge 0} , completed by all {\mathbb{P}} -null sets of \mathcal{F} . Denote the probability measure given by the extension of {\mathbb{P}} to the completed \mathcal{F} again by {\mathbb{P}} . Therefore, a completed filtered probability space (\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t\ge 0}, {\mathbb{P}}) is obtained. Next, consider the SgLV model (2.2), whose solutions take values in [0, \infty)^N and describe the abundance dynamics of N interacting species {\mathbf{X}}(t): = (X_1(t), \cdots, X_N(t))^T , t\ge 0 , in a microbial community. We write
● {\mathbb{R}}^N: = \{(x_1, \cdots, x_N)^T: x_1\in{\mathbb{R}}, \cdots, x_N\in{\mathbb{R}}\} ,
● {\mathbb{R}}_{+}^N: = \{(x_1, \cdots, x_N)^T\in{\mathbb{R}}^N: x_1\ge 0, \cdots, x_N\ge 0\} ,
● {\mathbb{R}}_{+}^{N, \circ}: = \{(x_1, \cdots, x_N)^T\in{\mathbb{R}}^N: x_1 > 0, \cdots, x_N > 0\} ,
● f_i = f_i({\mathbf{X}}(t)): = \alpha_i+\sum_{j = 1}^N\beta_{ij}X_j (the per-capita growth rate of species i ),
● \Gamma: = (\tau_{ij})_{N\times N} and \Sigma: = \Gamma\Gamma^T = (\sigma_{ij})_{N\times N} , where \sigma_{ij} = \sum_{k = 1}^N\tau_{ik}\tau_{jk} .
From now on, the stochastic process given by the solution of the system (2.2) will be denoted by {\mathbf{X}} or ({\mathbf{X}}(t))_{t\ge 0} . We use {\mathbb{P}}_{{\mathbf{x}}} to denote the probability law on \Omega when the solution path starts at {\mathbf{x}} = (x_1, \cdots, x_N)^T , and {\mathbb{E}}_{{\mathbf{x}}} is the expectation corresponding to {\mathbb{P}}_{{\mathbf{x}}} .
We use the norm \|{\mathbf{x}}\|: = \sum_{i = 1}^N|x_i| in {\mathbb{R}}^N , where {\mathbf{x}} = (x_1, \cdots, x_N)^T . For (u_1, \cdots, u_N)^T\in{\mathbb{R}}^N , we let \bigwedge_{i = 1}^N u_i: = \min\{u_1, \cdots, u_N\} and \bigvee_{i = 1}^N u_i: = \max\{u_1, \cdots, u_N\} .
Based on the excellent work of Hening and Nguyen 2018 [32,33,34], we briefly present, without proofs, sharp sufficient conditions for both persistence and extinction for SgLV (2.2) and its corresponding deterministic counterpart.
First, we consider the corresponding deterministic gLV system of the stochastic system (2.2) given by
\begin{equation} \frac{d{\mathbf{X}}}{dt} = h({\mathbf{X}}), \end{equation} | (2.3) |
where h({\mathbf{X}}) = (h_1({\mathbf{X}}), \cdots, h_N({\mathbf{X}}))^T and h_i({\mathbf{X}}): = X_i\left[\alpha_i+\sum_{j = 1}^N\beta_{ij}X_j\right] , i = 1, \cdots, N . We define persistence and extinction for the deterministic system (2.3) as follows.
Definition 2.1. The system (2.3) is persistent if, for any initial value {\mathbf{X}}(0) in {\mathbb{R}}_{+}^{N, \circ} , each component of the solution {\mathbf{X}}(t) satisfies \liminf_{t\to\infty}X_i(t) > 0 for all i = 1, \cdots, N .
Definition 2.2. For any initial value {\mathbf{X}}(0)\in{\mathbb{R}}_{+}^{N, \circ} , the species X_i is said to go extinct if \limsup_{t\to\infty}X_i(t) = 0 .
In a deterministic setting, in order to investigate the persistence and extinction of the system (2.3) (i.e., we wish to know which species persist or which species go extinct in this system), we look at the set \mathcal{N} of all the equilibria of the deterministic system (2.3) on the boundary \partial{\mathbb{R}}_{+}^N: = {\mathbb{R}}_{+}^N\setminus{\mathbb{R}}_{+}^{N, \circ} , as well as the unique positive equilibrium E^* in {\mathbb{R}}_{+}^{N, \circ} . Clearly, \mathcal{N}\neq\emptyset since \mathbf{0} = (0, \cdots, 0)^T\in\mathcal{N} . Then, for each E\in\mathcal{N}\cup\{E^*\} , we compute the eigenvalues \lambda_1(E), \cdots, \lambda_N(E) of the variational matrix \frac{dh}{d{\mathbf{X}}}\big|_{{\mathbf{X}} = E} . Hence, we have the following theorem about the persistence of the system (2.3).
Theorem 2.1. The microbial community described by the system (2.3) is persistent, i.e., the abundance of each species remains positive, if all the equilibria on the boundary \partial{\mathbb{R}}_{+}^N are unstable, which means that
\begin{equation} \max\limits_{i = 1, \cdots, N}\text{Re}\, \lambda_i(E) > 0\quad\text{for each}\, \, E\in\mathcal{N}. \end{equation} | (2.4) |
This is equivalent to the unique positive equilibrium E^* being locally stable (i.e., all the eigenvalues of the linearized system at the positive equilibrium have non-positive real parts). If the deterministic system (2.3) is not persistent, we would like to track which species go extinct and which species survive. Indeed, for \mathbf{0}\neq E = (x_{1, E}, \cdots, x_{N, E})^T\in\mathcal{N} , there exist 1\le n_1 < \cdots < n_k\le N such that x_{n_i, E} > 0 for all i = 1, \cdots, k and x_{j, E} = 0 for all j\in\{1, \cdots, N\}\setminus\{n_1, \cdots, n_k\} . Let
{\mathbb{R}}_{+}^{E}: = \{(x_1, \cdots, x_N)\in{\mathbb{R}}_{+}^N: x_i = 0\, \, \text{for}\, \, i\in I_E^c\},
where I_E: = \{n_1, \cdots, n_k\} and I_E^c: = \{1, \cdots, N\}\setminus\{n_1, \cdots, n_k\} . If E = \mathbf{0} , then {\mathbb{R}}_{+}^{\mathbf{0}} = \{\mathbf{0}\} . Let
{\mathbb{R}}_{+}^{E, \circ}: = \{(x_1, \cdots, x_N)\in{\mathbb{R}}_{+}^N: x_i = 0\, \, \text{for}\, \, i\in I_E^c\, \, \text{and}\, \, x_i > 0\, \, \text{for}\, \, i\in I_E\}
and \partial{\mathbb{R}}_{+}^{E}: = {\mathbb{R}}_{+}^{E}\setminus{\mathbb{R}}_{+}^{E, \circ} . Now we make the following assumption.
Assumption 1. An E\in\mathcal{N} exists such that
\max\limits_{i\in I_E^c}\text{Re}\, \lambda_i(E) < 0.
If {\mathbb{R}}_{+}^{E}\neq\{\mathbf{0}\} , then suppose further that for any E'\in\mathcal{N}_E , we have
\max\limits_{i\in I_{E}}\text{Re}\, \lambda_i(E') > 0,
where \mathcal{N}_E: = \{E'\in\mathcal{N}: {\mathbb{R}}_{+}^{E'}\subseteq\partial{\mathbb{R}}_{+}^{E}\} .
Define
\mathcal{N}^1: = \{E\in\mathcal{N}: E\, \, \text{satisfies Assumption 1}\}\quad\text{and}\quad\mathcal{N}^2: = \mathcal{N}\setminus\mathcal{N}^1.
To determine exactly which species go extinct, we need an additional assumption ensuring that the equilibria that are not in \mathcal{N}^1 are unstable.
Assumption 2. Either \mathcal{N}^2 = \emptyset or, for any E'\in\mathcal{N}^2 , \max_{i = 1, \cdots, N}\text{Re}\, \lambda_i(E') > 0 .
Theorem 2.2. If Assumptions 1 and 2 are satisfied and \mathcal{N}^1\neq\emptyset , then for any E\in\mathcal{N}^1 , the species X_i(t) converges to 0 with the convergence rate \lambda_i(E) for any i\in I_E^c .
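As an illustration of Theorems 2.1 and 2.2 (a sketch of ours, not code from the paper), one can list the equilibria of a hypothetical two-species deterministic gLV system and inspect the eigenvalues of the variational matrix at each of them.

```python
import numpy as np

# Hypothetical two-species competitive parameters for the deterministic gLV (2.3).
alpha = np.array([1.0, 0.8])
beta = np.array([[-1.0, -0.3], [-0.2, -1.0]])

def variational_matrix(x):
    # dh_i/dX_j with h_i(X) = X_i (alpha_i + sum_k beta_ik X_k)
    return np.diag(alpha + beta @ x) + np.diag(x) @ beta

equilibria = {
    "origin": np.array([0.0, 0.0]),
    "species 1 only": np.array([-alpha[0] / beta[0, 0], 0.0]),
    "species 2 only": np.array([0.0, -alpha[1] / beta[1, 1]]),
    "E* (coexistence)": -np.linalg.solve(beta, alpha),
}
for name, E in equilibria.items():
    eigs = np.linalg.eigvals(variational_matrix(E))
    print(f"{name}: eigenvalues {np.round(eigs, 3)}")
# Persistence in the sense of Theorem 2.1: every boundary equilibrium above has an
# eigenvalue with positive real part, while both eigenvalues at E* are negative.
```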
Next, we consider the stochastic system (2.2). We make a standing assumption that will be used throughout this section.
Assumption 3. The noise intensity matrix \Gamma of the system (2.2) is such that the matrix \Sigma: = \Gamma\Gamma^T = (\sigma_{ij})_{N\times N} is positive definite. Furthermore, {\mathbf{c}} = (c_1, \cdots, c_N)^T\in{\mathbb{R}}_{+}^{N, \circ} and \delta > 0 exist such that, for sufficiently large \|{\mathbf{X}}\| , we have
\begin{equation} \frac{\sum\limits_{i = 1}^N c_iX_if_i({\mathbf{X}})}{1+{\mathbf{c}}^T{\mathbf{X}}}-\frac{\sum\limits_{i, j = 1}^N\sigma_{ij}c_ic_jX_iX_j}{2(1+{\mathbf{c}}^T{\mathbf{X}})^2} +\delta\left(1+N+\sum\limits_{i = 1}^N|f_i({\mathbf{X}})|\right) < 0. \end{equation} | (2.5) |
Since the drift functions X_if_i({\mathbf{X}}(t)) are locally Lipschitz on {\mathbb{R}}^N for any i = 1, \cdots, N and the diffusion functions in the system (2.2) are linear, for each initial value in {\mathbb{R}}^N , the system (2.2) has a unique almost surely (a.s.) continuous solution {\mathbf{X}} = {\mathbf{X}}(t) up to the explosion time
\tau_e = \inf\left\{t > 0: \bigwedge\limits_{i = 1}^N X_i(t) = -\infty\, \, \text{or}\, \, \bigvee\limits_{i = 1}^N X_i(t) = \infty\right\}.
Using Ito's formula for \log X_i(t) , the system (2.2) can be written as
X_i(t) = X_i(0)\exp\left\{\int_0^t f_i({\mathbf{X}}(s))ds-\left(\frac{1}{2}\sum\limits_{j = 1}^N\tau_{ij}^2\right)t+\sum\limits_{j = 1}^N\tau_{ij}W_j(t)\right\}\quad\text{a.s.},
which means that if {\mathbf{X}}(0)\in{\mathbb{R}}_{+}^N , then {\mathbf{X}}(t)\in{\mathbb{R}}_{+}^N a.s. for all t\in[0, \tau_e) . Due to (2.5), we can obtain the tightness of the family of transition probabilities of the solution {\mathbf{X}} to the system (2.2). This means that \tau_e = \infty a.s. Thus, for any initial value {\mathbf{X}}(0) in {\mathbb{R}}_{+}^N , the system (2.2) has a unique a.s. continuous strong solution {\mathbf{X}}(t) in {\mathbb{R}}_{+}^N for all t\in[0, \infty) .
The positive definiteness of the matrix Σ in Assumption 3 ensures that the solution to the system (2.2) is nondegenerate. This is an important assumption because it makes sure that we have enough noise that can locally push the dynamics of the solution in all directions, and hence any positive solution state can move close to any other positive solution state in a finite time. In other words, this condition guarantees that the family of transition probabilities of the solution has a smooth density, which will be used for the proof of the persistence and extinction of the system (2.2).
Now, we define what we mean by persistence and extinction in our stochastic model.
Definition 2.3. The process {\mathbf{X}} is strongly stochastically persistent if it possesses a unique invariant probability measure \pi^* in {\mathbb{R}}_{+}^{N, \circ} and
\lim\limits_{t\to\infty}\|P(t, {\mathbf{x}}, \cdot)-\pi^*(\cdot)\|_{TV} = 0, \quad {\mathbf{x}}\in{\mathbb{R}}_{+}^{N, \circ},
where \|\cdot\|_{TV} is the total variation norm and P(t, {\mathbf{x}}, \cdot) is the transition probability of {\mathbf{X}} .
Definition 2.4. If {\mathbf{X}}(0) = {\mathbf{x}}\in{\mathbb{R}}_{+}^{N, \circ} , we say that the species X_i goes extinct with probability p_{{\mathbf{x}}} > 0 if
{\mathbb{P}}_{{\mathbf{x}}}\left\{\lim\limits_{t\to\infty}X_i(t) = 0\right\} = p_{{\mathbf{x}}}.
We say that the species X_i goes extinct if, for all {\mathbf{x}}\in{\mathbb{R}}_{+}^{N, \circ} ,
{\mathbb{P}}_{{\mathbf{x}}}\left\{\lim\limits_{t\to\infty}X_i(t) = 0\right\} = 1.
In a stochastic setting, to look into the persistence and extinction, we consider the set \mathcal{M} of all ergodic invariant probability measures of the solution {\mathbf{X}}(t) on the boundary \partial{\mathbb{R}}_{+}^N . Clearly, \mathcal{M}\neq\emptyset , since \delta^*\in\mathcal{M} , the Dirac measure concentrated at \mathbf{0} = (0, \cdots, 0)^T\in{\mathbb{R}}_{+}^N . For \widetilde{\mathcal{M}}\subseteq\mathcal{M} , we use \text{Conv}(\widetilde{\mathcal{M}}) to denote the convex hull of \widetilde{\mathcal{M}} , which is the collection of probability measures of the form
\pi(\cdot) = \sum\limits_{\mu\in\widetilde{\mathcal{M}}}p_{\mu}\mu(\cdot)
with p_{\mu} > 0 and \sum_{\mu\in\widetilde{\mathcal{M}}}p_{\mu} = 1 . For \delta^*\neq\mu\in\mathcal{M} , since the stochastic system (2.2) is nondegenerate by Assumption 3, there exist 1\le n_1 < \cdots < n_k\le N such that the support of the ergodic invariant probability measure \mu , denoted by \text{supp}(\mu) , is exactly {\mathbb{R}}_{+}^{\mu} , in which
{\mathbb{R}}_{+}^{\mu}: = \{(x_1, \cdots, x_N)\in{\mathbb{R}}_{+}^N: x_i = 0\, \, \text{for}\, \, i\in I_{\mu}^c\}
for I_{\mu}: = \{n_1, \cdots, n_k\} and I_{\mu}^c: = \{1, \cdots, N\}\setminus I_{\mu} . Let
{\mathbb{R}}_{+}^{\mu, \circ}: = \{(x_1, \cdots, x_N)\in{\mathbb{R}}_{+}^N: x_i = 0\, \, \text{for}\, \, i\in I_{\mu}^c\, \, \text{and}\, \, x_i > 0\, \, \text{for}\, \, i\in I_{\mu}\}
and \partial{\mathbb{R}}_{+}^{\mu}: = {\mathbb{R}}_{+}^{\mu}\setminus{\mathbb{R}}_{+}^{\mu, \circ} . Similarly to the deterministic setting, persistence in a stochastic setting means that every ergodic invariant probability measure on the boundary is a repeller. Therefore, ergodic invariant probability measures play a role similar to that of equilibria in a deterministic setting. To determine whether an ergodic invariant probability measure of the stochastic system (2.2) is a repeller, we compute its Lyapunov exponents by looking at the equations for \ln X_i(t) , i = 1, \cdots, N . Applying Ito's formula to \ln X_i(t) gives
\frac{\ln X_i(t)}{t} = \frac{\ln X_i(0)}{t}+\frac{1}{t}\int_0^t f_i({\mathbf{X}}(s))ds-\frac{1}{2}\sum\limits_{j = 1}^N\tau_{ij}^2+\sum\limits_{j = 1}^N\tau_{ij}\frac{W_j(t)}{t}.
When the solution {\mathbf{X}}(t) to the system (2.2) stays close to the support \text{supp}(\mu) of an ergodic invariant probability measure \mu for a long time, the Lyapunov exponent of the solution component X_i with respect to \mu (which is also called the invasion rate of species X_i with respect to \mu )
\lambda_i(\mu): = \lim\limits_{t\to\infty}\frac{\ln X_i(t)}{t} = \lim\limits_{t\to\infty}\frac{1}{t}\int_0^t f_i({\mathbf{X}}(s))ds-\frac{1}{2}\sum\limits_{j = 1}^N\tau_{ij}^2,
can be approximated by the average with respect to \mu (using the strong law of large numbers)
\int_{\partial{\mathbb{R}}_{+}^N}f_i({\mathbf{x}})\mu(d{\mathbf{x}})-\frac{1}{2}\sum\limits_{j = 1}^N\tau_{ij}^2,
while the term
\frac{\ln X_i(0)}{t}+\sum\limits_{j = 1}^N\tau_{ij}\frac{W_j(t)}{t}
is negligible. This implies that the Lyapunov exponent \lambda_i(\mu) , i = 1, \cdots, N , gives the long-term growth rate of species X_i if {\mathbf{X}}(t) is close to \text{supp}(\mu) . Therefore, an ergodic invariant probability measure \mu is a repeller if \max_{i = 1, \cdots, N}\lambda_i(\mu) > 0 . Now we impose the following assumption, which ensures strong stochastic persistence.
Assumption 4. For any \mu in \text{Conv}(\mathcal{M}) ,
\max\limits_{i = 1, \cdots, N}\lambda_i(\mu) > 0,
where
\lambda_i(\mu) = \int_{\partial{\mathbb{R}}_{+}^N}f_i({\mathbf{x}})\mu(d{\mathbf{x}})-\frac{1}{2}\sum\limits_{j = 1}^N\tau_{ij}^2.
Theorem 2.3. Suppose that Assumptions 3 and 4 are true. Then the solution {\mathbf{X}}(t) to the system (2.2) is strongly stochastically persistent, and its transition probabilities converge to its unique invariant probability measure \pi^* in {\mathbb{R}}_{+}^{N, \circ} exponentially fast in the total variation norm.
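In practice, an invasion rate \lambda_i(\mu) can be approximated by time-averaging f_i along a long simulated trajectory of the boundary subsystem that supports \mu and then subtracting \frac{1}{2}\sum_j\tau_{ij}^2 . The sketch below is our own illustration with hypothetical parameters; it is not an estimator proposed in the paper.

```python
import numpy as np

def invasion_rate(i, support, alpha, beta, Gamma, T=500.0, dt=1e-2, seed=0):
    """Monte Carlo sketch of lambda_i(mu): time-average of f_i along a long
    Euler-Maruyama run of the boundary subsystem whose species lie in `support`
    (species outside `support` start at 0 and stay at 0), minus 0.5*sum_j tau_ij^2."""
    rng = np.random.default_rng(seed)
    N = len(alpha)
    x = np.zeros(N)
    x[list(support)] = 0.5          # start inside the boundary face supp(mu)
    avg_f = 0.0
    n_steps = int(T / dt)
    for _ in range(n_steps):
        f = alpha + beta @ x
        avg_f += f[i] * dt
        dW = rng.normal(0.0, np.sqrt(dt), size=N)
        x = np.maximum(x + x * f * dt + x * (Gamma @ dW), 0.0)
    return avg_f / T - 0.5 * np.sum(Gamma[i] ** 2)

# Hypothetical two-species community: invasion rate of species 2 with respect to
# the ergodic measure supported on the species-1 axis.
alpha = np.array([1.0, 0.8])
beta = np.array([[-1.0, -0.3], [-0.2, -1.0]])
Gamma = 0.05 * np.eye(2)
print(invasion_rate(1, support=[0], alpha=alpha, beta=beta, Gamma=Gamma))
```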
Next, we make an assumption that will imply the extinction of the stochastic system.
Assumption 5. There is a \mu\in\mathcal{M} such that
\begin{equation} \max\limits_{i\in I_{\mu}^c}\lambda_i(\mu) < 0. \end{equation} | (2.6) |
If {\mathbb{R}}_{+}^{\mu}\neq\{\mathbf{0}\} , suppose further that for any \nu\in\text{Conv}(\mathcal{M}_{\mu}) , we get
\begin{equation} \max\limits_{i\in I_{\mu}}\lambda_i(\nu) > 0, \end{equation} | (2.7) |
where \mathcal{M}_{\mu}: = \{\nu'\in\mathcal{M}: \text{supp}(\nu')\subseteq\partial{\mathbb{R}}_{+}^{\mu}\} .
Theorem 2.4. Under Assumptions 3 and 5, for any sufficiently small \delta > 0 and for any initial value {\mathbf{x}}\in{\mathbb{R}}_{+}^{N, \circ} , we obtain
\lim\limits_{t\to\infty}{\mathbb{E}}_{{\mathbf{x}}}\left(\bigwedge\limits_{i = 1}^N X_i(t)\right)^{\delta} = 0.
We have several remarks on Assumption 5. First, if \mu\in\mathcal{M} and i\in I_{\mu} , then \lambda_i(\mu) = 0 . Intuitively, this is because if the solution is inside the support of an ergodic invariant probability measure \mu , then it is at an "equilibrium" and does not tend to grow or decay. If \mu\in\mathcal{M} and \max_{i\in I_{\mu}^c}\lambda_i(\mu) < 0 , then \mu is regarded as an "attractor", which attracts solutions starting nearby. So we require the condition (2.6) to make sure that the solution component X_i(t) , i\in I_{\mu}^c , gets close to 0 if it starts near {\mathbb{R}}_{+}^{\mu, \circ} . Second, the condition (2.7) is needed to guarantee that \mu , rather than the other invariant probability measures on \partial{\mathbb{R}}_{+}^{\mu} , is really a "sink" in {\mathbb{R}}_{+}^{\mu, \circ} ; that is, if {\mathbf{X}} is close to {\mathbb{R}}_{+}^{\mu, \circ} , then it is not pulled away to the boundary \partial{\mathbb{R}}_{+}^{\mu} .
Finally, let
\mathcal{M}^1: = \{\mu\in\mathcal{M}: \mu\, \, \text{satisfies Assumption 5}\}
and \mathcal{M}^2: = \mathcal{M}\setminus\mathcal{M}^1 . If there is no persistence, we would like to know exactly which species go extinct and which species survive. On the basis of the repulsion of the invariant probability measures in \mathcal{M}^2 , we can derive the "attracting" region of the invariant probability measures in \mathcal{M}^1 for the solution {\mathbf{X}} . We make an additional assumption that ensures all the invariant probability measures outside \text{Conv}(\mathcal{M}^1) are repellers.
Assumption 6. Suppose that one of the following holds true:
● \mathcal{M}^2 = \emptyset ;
● For any \nu\in\text{Conv}(\mathcal{M}^2) , \max_{i = 1, \cdots, N}\lambda_i(\nu) > 0 .
For any initial value {\mathbf{X}}(0) = {\mathbf{x}}\in{\mathbb{R}}_{+}^{N, \circ} , let \mathcal{U} = \mathcal{U}(\omega) denote the weak ^* -limit set of the family of random occupation measures \{\widetilde{\Pi}_t(\cdot), t\ge 0\} , where
\widetilde{\Pi}_t(A) = \frac{1}{t}\int_0^t \mathbf{1}_{\{{\mathbf{X}}(s)\in A\}}ds, \quad t > 0, \, A\in\mathcal{B}({\mathbb{R}}_{+}^{N, \circ}),
which is the proportion of time the solution spends in a set A up to time t .
Theorem 2.5. Suppose that Assumptions 3, 5, and 6 hold true and \mathcal{M}^1\neq\emptyset . Then for any {\mathbf{x}}\in{\mathbb{R}}_{+}^{N, \circ} , we obtain
\sum\limits_{\mu\in\mathcal{M}^1}P_{{\mathbf{x}}}^{\mu} = 1,
in which
P_{{\mathbf{x}}}^{\mu}: = {\mathbb{P}}_{{\mathbf{x}}}\left\{\mathcal{U}(\omega) = \{\mu\}\, \, \text{and}\, \, \lim\limits_{t\to\infty}\frac{\ln X_i(t)}{t} = \lambda_i(\mu) < 0, \, i\in I_{\mu}^c\right\}, \quad \mu\in\mathcal{M}^1.
Let us start with the linearization of the deterministic system (2.3). We assume that all the equilibria of this system on the boundary \partial{\mathbb{R}}_{+}^N are unstable. Therefore, by Theorem 2.1, the unique positive equilibrium E^* = (X_1^*, \cdots, X_N^*)^T is locally stable. To understand the behavior of the deterministic system near E^* , we approximate the system (2.3) by the following linear system, as the solution is close to E^* :
\begin{equation} \frac{dX_i}{dt} = \phi_0^i+\sum\limits_{j = 1}^N\theta_{ij}(X_j-X_j^*), \quad i = 1, \cdots, N, \end{equation} | (2.8) |
where the \phi_0^i and the \theta_{ij} are constants to be determined. In fact, we would like to replace each right-hand side h_i({\mathbf{X}}) of the system (2.3) by its linearized form \phi_0^i+\sum_{j = 1}^N\theta_{ij}(X_j-X_j^*) for i = 1, \cdots, N when X_1-X_1^*, \cdots, X_N-X_N^* are close to 0. The constant coefficients \phi_0^i and \theta_{ij} are determined so that all the differences
\left|h_i({\mathbf{X}})-\phi_0^i-\sum\limits_{j = 1}^N\theta_{ij}(X_j-X_j^*)\right|,
for i = 1, \cdots, N , approximate 0 as {\mathbf{X}} is close to E^* . If we use the norm \|{\mathbf{x}}\|_2 = \sqrt{x_1^2+\cdots+x_N^2} in {\mathbb{R}}^N to identify how close two vectors are, then the linearization of the system (2.3) at the unique positive equilibrium E^* is equivalent to finding the vector \boldsymbol{\phi}_0 = (\phi_0^1, \cdots, \phi_0^N)^T and the matrix \Theta = (\theta_{ij})_{N\times N} so that the difference
\|h({\mathbf{X}})-\boldsymbol{\phi}_0-\Theta({\mathbf{X}}-E^*)\|_2, |
approximates 0 as \|{\mathbf{X}}-E^*\|_2\to 0 . For each i = 1, \cdots, N , applying the Taylor expansion for the function h_i({\mathbf{X}}) at E^* gives
\begin{equation*} \begin{aligned} h_i({\mathbf{X}}) = h_i(E^*)&+\frac{\partial h_i(E^*)}{\partial{\mathbf{X}}}\cdot({\mathbf{X}}-E^*)\\ &+\frac{1}{2}({\mathbf{X}}-E^*)^T\frac{\partial^2h_i(E^*)}{\partial{\mathbf{X}}^2}({\mathbf{X}}-E^*)+o(\|{\mathbf{X}}-E^*\|^2_2), \end{aligned} \end{equation*} |
in which
\dfrac{\partial h_i(E^*)}{\partial{\mathbf{X}}} = \left(\dfrac{\partial h_i(E^*)}{\partial X_1}, \cdots, \dfrac{\partial h_i(E^*)}{\partial X_N}\right)^T |
and
\dfrac{\partial^2h_i(E^*)}{\partial{\mathbf{X}}^2} = \left(\dfrac{\partial^2h_i(E^*)}{\partial X_j\partial X_k}\right)_{N\times N}. |
This implies that as \|{\mathbf{X}}-E^*\|_2\to 0 , we get
\left|h_i({\mathbf{X}})-h_i(E^*)-\frac{\partial h_i(E^*)}{\partial{\mathbf{X}}}\cdot({\mathbf{X}}-E^*)\right|\to 0, |
Notice that h_i(E^*) = 0 for each i = 1, \cdots, N . Therefore, when we choose \boldsymbol\phi_0 = \mathbf{0} and \Theta = \dfrac{\partial h(E^*)}{\partial{\mathbf{X}}}: = \left(\dfrac{\partial h_i(E^*)}{\partial X_j}\right)_{N\times N} , then
h_i({\mathbf{X}})\approx \sum\limits_{j = 1}^N\theta_{ij}(X_j-X_j^*), |
as {\mathbf{X}} is close to E^* . Then the linearization of the deterministic system (2.3) at E^* takes the form
\begin{equation} \frac{dX_i}{dt} = \sum\limits_{j = 1}^N\dfrac{\partial h_i(E^*)}{\partial X_j}(X_j-X_j^*), \, \, i = 1, \cdots, N, \end{equation} | (2.9) |
Next, we continue with the linearization of the stochastic system (2.2). We assume that the sufficient conditions (i.e., Assumption 3 and Assumption 4) for the strong persistence of the stochastic system are satisfied. By Theorem 2.3, there is a unique exponentially ergodic invariant probability measure \pi^* in {\mathbb{R}}_{+}^{N, \circ} . So, for any {\mathbf{x}}\in{\mathbb{R}}_{+}^{N, \circ} , all transition probabilities P(t, {\mathbf{x}}, \cdot) = {\mathbb{P}}_{{\mathbf{x}}}\{{\mathbf{X}}(t)\in\cdot\} of the solution {\mathbf{X}}(t) starting at {\mathbf{x}} become the same and equal to \pi^* as the time t becomes large enough. We call \pi^* the unique positive stationary solution of the stochastic system (2.2). We say that the solution {\mathbf{X}}(t) is close to \pi^* if \|P(t, {\mathbf{x}}, \cdot)-\pi^*(\cdot)\|_{TV} is close to 0. Note that {\mathbb{E}}_{{\mathbf{x}}}\pi^* approaches E^* for any {\mathbf{x}}\in{\mathbb{R}}_{+}^{N, \circ} when all entries in the noise intensity matrix \Gamma approach 0. Since, as time passes, all solutions {\mathbf{X}}(t) will eventually end up in the unique positive stationary distribution no matter where it starts, we will drop the subscript {\mathbf{x}} in the notation of {\mathbb{E}}_{{\mathbf{x}}} and {\mathbb{P}}_{{\mathbf{x}}} in this subsection.
The linearization of the stochastic system (2.2) at the stationary solution \pi^* is to approximate this system by the following linear SDE system when the solution is close to \pi^* :
\begin{equation} dX_i = \left[\Phi_i+\sum\limits_{j = 1}^N\gamma_{ij}(X_j-m_j)\right]dt + X_i\sum\limits_{j = 1}^N\tau_{ij}dW_j, \quad i = 1, \cdots, N, \end{equation} | (2.10) |
where (m_1, \cdots, m_N) = {\mathbb{E}}\, \pi^* . In fact, our statistical linearization method is the replacement of each drift term h_i({\mathbf{X}}) = X_i\left[\alpha_i+\sum_{j = 1}^N\beta_{ij}X_j\right] in (2.2) by its linearized form \Phi_i+\sum_{j = 1}^N\gamma_{ij}(X_j-m_j) for i = 1, \cdots, N, in which the coefficients \Phi_i 's and \gamma_{ij} 's are determined by using the Taylor expansion of each h_i({\mathbf{X}}) at {\mathbf{m}} = (m_1, \cdots, m_N) . In other words, we would like to find these coefficients to minimize
\begin{equation} \left|h_i({\mathbf{X}})-\Phi_i-\sum\limits_{j = 1}^N\gamma_{ij}(X_j-m_j)\right|, \end{equation} | (2.11) |
as the solution {\mathbf{X}} of the stochastic system (2.2) is close to \pi^* . As in the deterministic setting, we obtain
\begin{equation} \Phi_i = h_i({\mathbf{m}}) = m_i\left[\alpha_i+\sum\limits_{j = 1}^N\beta_{ij}m_j\right]\quad i = 1, \cdots, N, \end{equation} | (2.12) |
and for i, j = 1, \cdots, N
\begin{equation} \gamma_{ij} = \frac{\partial h_i}{\partial x_j}({\mathbf{m}}) = \begin{cases} \alpha_i+\sum\limits_{k = 1}^N\beta_{ik}m_k+\beta_{ii}m_i &\, \, \text{if}\, \, i = j, \\ \beta_{ij}m_i &\, \, \text{if}\, \, i\neq j, \end{cases} \end{equation} | (2.13) |
in which each m_i can be computed using the strong law of large numbers
\begin{equation} m_i = \int_{\partial{\mathbb{R}}_{+}^{N}} X_i\, \pi^*(d{\mathbf{X}}) = \lim\limits_{t\to\infty}\frac{1}{t}\int_0^t X_i(s)ds, \end{equation} | (2.14) |
Therefore the system (2.2) can be approximated by the linear SDE system at the positive stationary solution \pi^*
\begin{equation} \begin{aligned} d\begin{bmatrix} \widetilde X_1\\ \cdots\\ \widetilde X_N\end{bmatrix} = &\left(\begin{bmatrix} \Phi_1\\ \cdots\\ \Phi_N\end{bmatrix}+ \begin{bmatrix}\gamma_{11}&\cdots&\gamma_{N1}\\ \vdots & \ddots & \vdots\\ \gamma_{1N} &\cdots & \gamma_{NN}, \end{bmatrix} \begin{bmatrix} \widetilde X_1-m_1\\ \cdots\\ \widetilde X_N-m_N\end{bmatrix}\right) dt\\ &+\sum\limits_{i = 1}^N\begin{bmatrix}\tau_{1i}&\cdots& 0\\ \vdots & \ddots & \vdots\\ 0 &\cdots & \tau_{Ni}, \end{bmatrix} \begin{bmatrix} \widetilde X_1\\ \cdots\\ \widetilde X_N\end{bmatrix} dW_i. \end{aligned} \end{equation} | (2.15) |
Finally, we comment on our linearization method. We know that the solution of the nonlinear stochastic system (2.2) is a Markov process {\mathbf{X}} = {\mathbf{X}}(t) . In fact, our linearization is to find another Markov process, say \widetilde{{\mathbf{X}}} = \widetilde{{\mathbf{X}}}(t) , which represents the solution of the linear SDE system (2.15) such that both {\mathbf{X}} and \widetilde{{\mathbf{X}}} have the same mean, and the variance {\mathbb{E}}\|{\mathbf{X}} - \widetilde{{\mathbf{X}}}\|_2^2 is to be minimized as the time becomes large enough. In other words, when time goes by and becomes sufficiently large, the long-term behaviors of the two Markov processes {\mathbf{X}} and \widetilde{{\mathbf{X}}} are approximately the same. Because of that, a \delta > 0 exists such that the linear SDE system (2.15) is considered as a " \delta -perturbation" of the non-linear SDE system (2.2), and thus strong persistence of the system (2.2) implies that of the system (2.15) (see the definition of " \delta -perturbation" and Proposition 3 in [28] for more details).
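To make the statistical linearization concrete, the following sketch (ours, with hypothetical parameters) estimates {\mathbf{m}} = {\mathbb{E}}\, \pi^* as the long-run time average (2.14) of a simulated trajectory and then assembles the coefficients \Phi_i and \gamma_{ij} from (2.12) and (2.13).

```python
import numpy as np

# Hypothetical two-species SgLV community of the form (2.2).
alpha = np.array([1.0, 0.8])
beta = np.array([[-1.0, -0.3], [-0.2, -1.0]])
Gamma = 0.05 * np.eye(2)                       # noise intensity matrix (tau_ij)

# Estimate m_i = E pi* as the long-run time average (2.14) of one trajectory.
rng = np.random.default_rng(1)
dt, n_steps = 1e-2, 200_000
x = np.array([0.5, 0.5])
m = np.zeros(2)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=2)
    x = np.maximum(x + x * (alpha + beta @ x) * dt + x * (Gamma @ dW), 1e-12)
    m += x * dt
m /= n_steps * dt
print("m =", m)

# Coefficients of the linearized drift, Eqs. (2.12) and (2.13):
Phi = m * (alpha + beta @ m)                               # Phi_i = h_i(m)
gamma = np.diag(alpha + beta @ m) + np.diag(m) @ beta      # gamma_ij = dh_i/dx_j at m
print("Phi =", Phi, "\ngamma =\n", gamma)
```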
On the basis of two excellent works by Arnoldi [19,20], we indirectly establish four resilience measures, which are the instantaneous return rate, average return rate, asymptotic resilience, and convergence rate of our stochastic gLV model, through the second moment of the solutions of the corresponding linearized SDE system. These four resilience quantities, in which the first two are functions of time and the last two are just numbers, measure how strong the recovery dynamics of our original stochastic system is after a pulse perturbation is applied.
Now we start building up these four stability concepts by letting
\begin{align*} A& = \begin{bmatrix}\gamma_{11}&\cdots&\gamma_{N1}\\ \vdots & \ddots & \vdots\\ \gamma_{1N} &\cdots & \gamma_{NN}\end{bmatrix}, \quad B_i = \begin{bmatrix}\tau_{1i}& \cdots & 0\\ \vdots & \ddots & \vdots\\ 0 &\cdots & \tau_{Ni} \end{bmatrix}, \quad i = 1, \cdots, N, \\ \Theta & = (\Phi_1, \cdots, \Phi_N)^T, \, \, {\mathbf{m}} = (m_1, \cdots, m_N)^T, \, \, \text{and}\, \, {\mathbf{Y}} = \widetilde{\mathbf{X}}-{\mathbf{m}}. \end{align*} |
Then (2.15) can be written in matrix form:
\begin{equation} d{\mathbf{Y}} = (A{\mathbf{Y}}+\Theta) dt+\sum\limits_{i = 1}^N (B_i{\mathbf{Y}}+B_i{\mathbf{m}}) dW_i. \end{equation} | (2.16) |
We assume that the linearized system (2.16) is already at its equilibrium state. A pulse perturbation at time t = 0 applied to this linear SDE system is characterized by a vector {\mathbf{u}} = {\mathbf{Y}}(0) , which describes the state of the system right after the perturbation. Then the recovery trajectory {\mathbf{Y}}(t) after this pulse perturbation is given by
\begin{equation} \begin{aligned} {\mathbf{Y}}(t) = \Phi(t){\mathbf{u}}&+\Phi(t)\int_0^t\Phi^{-1}(s)\left[\Theta-\sum\limits_{i = 1}^N B_i^2{\mathbf{m}}\right]ds\\ &+\sum\limits_{i = 1}^N\Phi(t)\int_0^t \Phi^{-1}(s) B_i{\mathbf{m}}\, dW_i(s), \end{aligned} \end{equation} | (2.17) |
in which \Phi(t) is the solution matrix of the homogeneous system
\begin{equation} d{\mathbf{Y}} = A{\mathbf{Y}} dt+\sum\limits_{i = 1}^N B_i {\mathbf{Y}} dW_i. \end{equation} | (2.18) |
with the initial condition \Phi(0) = I (the identity matrix of size N ). In general, we cannot explicitly compute the solution {\mathbf{Y}}(t) in terms of A , B_1 , ..., B_N . Notice that the solution {\mathbf{Y}}(t) is nowhere differentiable. Without the smoothness of the solution, it is difficult to build up stability concepts of a dynamic system that are able to be tractable. To avoid this difficulty, we can smooth out the solutions of the linearized system (2.16) by looking at the dynamics of their second moments. To proceed, let P(t) = {\mathbb{E}}({\mathbf{Y}}(t){\mathbf{Y}}(t)^T) be the expectation of the matrix-valued process {\mathbf{Y}}(t){\mathbf{Y}}(t)^T , which is given by the unique solution of the matrix ODE (see [23])
\begin{equation} \begin{aligned} \frac{dP}{dt} = AP+PA^T&+\sum\limits_{i = 1}^N B_i P B_i+\Theta M^T+M\Theta^T\\ &+\sum\limits_{i = 1}^N B_i(M{\mathbf{m}}^T+{\mathbf{m}} M^T+{\mathbf{m}}{\mathbf{m}}^T) B_i, \end{aligned} \end{equation} | (2.19) |
with the initial value P(0) = {\mathbf{Y}}(0){\mathbf{Y}}(0)^T = {\mathbf{u}}{\mathbf{u}}^T , and M: = M(t) = {\mathbb{E}}{\mathbf{Y}}(t) is the solution to the linear ODE system, with the initial value M(0) = {\mathbf{u}} ,
\begin{equation} \dot{M} = A M+\Theta. \end{equation} | (2.20) |
We can treat the matrix P(t) as an N^2 -dimensional vector
\begin{align*} {\mathbf{Z}}(t)&: = \text{vec}(P(t)) = (Z_1(t), \cdots, Z_{N^2}(t))^T\\ & = ({\mathbb{E}} (Y_1^2(t)), \, {\mathbb{E}} (Y_2(t)Y_1(t)), \cdots, \, {\mathbb{E}}(Y_N(t)Y_1(t)), \\ &\quad\:\:\:{\mathbb{E}}(Y_1(t)Y_2(t)), \, {\mathbb{E}}(Y_2(t)^2), \cdots, \, {\mathbb{E}}(Y_N(t)Y_2(t)), \cdots, \, {\mathbb{E}}(Y_N(t)^2))^T. \end{align*} |
Applying the vectorization operation on both sides of the system (2.19) and then using the properties of Kronecker products and matrix vectorization in linear algebra [23], we obtain the equivalent N^2 -dimensional ODE system
\begin{equation} \frac{d{\mathbf{Z}}}{dt} = S{\mathbf{Z}}+T, \end{equation} | (2.21) |
in which
S = I \otimes A + A \otimes I + \sum\limits_{i = 1}^N B_i\otimes B_i, |
and
T = \rm{vec}(\Theta M^T+M\Theta^T)+\sum\limits_{i = 1}^N(B_i\otimes B_i)\text{vec}(M{\mathbf{m}}^T+{\mathbf{m}} M^T+{\mathbf{m}}{\mathbf{m}}^T), |
The matrix S is called the mean-square stability matrix of the linear SDE system (2.16). The stationary solution to (2.16) is globally mean-square asymptotically stable if \alpha(S) = \max_{i = 1, \cdots, N^2}\text{Re}(\lambda_i) < 0 , where \text{Re}(\lambda_i) is the real part of the eigenvalue \lambda_i of the matrix S . Now let
W(t): = \sum\limits_{i = 1}^N{\mathbb{E}} Y_i^2(t) = {\mathbb{E}}\left(\sum\limits_{i = 1}^N Y_i^2(t)\right) = {\mathbb{E}}({\mathbf{Y}}^T(t){\mathbf{Y}}(t)), |
then W(t) measures the average squared distance of the solution {\mathbf{X}}(t) of the system (2.2) to the expectation of its equilibrium state \pi^* . We define four resilience quantities based on W(t) as follows:
● {\mathcal{R}}_t^{\text{ins}} = -\dfrac{d}{dt}\ln W(t)^{1/2} is the instantaneous return rate at time t > 0 ,
● {\mathcal{R}}_t^{\text{ave}} = -\dfrac{\ln W(t)^{1/2}-\ln W(0)^{1/2}}{t} is the average return rate over the time interval [0, t] ,
● {\mathcal{R}}_c = -\dfrac{1}{2}\alpha(S) is the rate of convergence of the ODE system (2.21), and
● {\mathcal{R}}_{\infty} = \lim\limits_{t\to \infty}{\mathcal{R}}_t^{\text{ave}} is the asymptotic resilience of the ODE system (2.21).
In these definitions, {\mathcal{R}}_t^{\text{ins}} and {\mathcal{R}}_t^{\text{ave}} are deterministic functions of time t , which capture the short-term behavior of the recovery trajectory {\mathbf{Y}}(t) , while {\mathcal{R}}_c and {\mathcal{R}}_{\infty} are only numbers, which represent the long-term dynamics of the solution's recovery. These two functions and two numbers give us more tractable computations that measure the recovery dynamics of the stochastic system (2.16) since we are able to explicitly compute the quantity \ln W(t)^{1/2} and its derivative in terms of the matrices S and T . In fact, two quantities {\mathcal{R}}_t^{\text{ins}} and {\mathcal{R}}_t^{\text{ave}} can be computed on the basis of the derivative of \ln W(t)
\begin{align*} \frac{d}{dt}\ln W = \frac{1}{W}\frac{dW}{dt}& = \frac{1}{W}\left\{{\mathbb{E}}({\mathbf{Y}}^T(A^T+A){\mathbf{Y}}) +\Theta^TM+M^T\Theta \right.\\ &\left.+\sum\limits_{i = 1}^N\left[{\mathbb{E}}({\mathbf{Y}}^TB_i^2{\mathbf{Y}})+{\mathbf{m}}^TB_i^2 M+M^TB_i^2{\mathbf{m}}+{\mathbf{m}}^TB_i^2{\mathbf{m}}\right]\right\}\\ & = \frac{1}{W}\left\{\left[\rm{vec}(A^T+A)+\sum\limits_{i = 1}^N\rm{vec}(B_i^2)\right]^T{\mathbf{Z}} +\Theta^TM+M^T\Theta\right.\\ &\left.+\sum\limits_{i = 1}^N({\mathbf{m}}^T B_i^2M+M^T B_i^2{\mathbf{m}}+{\mathbf{m}}^T B_i^2{\mathbf{m}})\right\}, \end{align*} |
where W , M , and {\mathbf{Z}} can be computed from two ODE systems (2.20) and (2.21). Lastly, when all noise intensities are equal to 0 in the system (2.2) (that is, B_i = 0 for all i = 1, \cdots, N ), all our definitions of the four resilience measures above correspond to those defined in Arnoldi [20] in which the rate of convergence and the asymptotic resilience coincide.
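The sketch below (our own illustration; the matrices A , B_i , {\mathbf{m}} , \Theta , and the pulse {\mathbf{u}} are hypothetical inputs) builds the mean-square stability matrix S with Kronecker products, computes \mathcal{R}_c = -\frac{1}{2}\alpha(S) , and integrates the moment equations (2.19) and (2.20) to obtain W(t) and the return rates \mathcal{R}_t^{\text{ins}} and \mathcal{R}_t^{\text{ave}} .

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical linearized inputs for N = 2: drift matrix A, diagonal diffusion
# matrices B_i, offset Theta, stationary mean m, and the pulse u = Y(0).
A = np.array([[-0.80, -0.24],
              [-0.13, -0.64]])
B = [np.diag([0.05, 0.0]), np.diag([0.0, 0.05])]
Theta = np.zeros(2)
m = np.array([0.80, 0.64])
u = np.array([0.30, -0.20])
N = A.shape[0]
I = np.eye(N)

# Mean-square stability matrix S = I (x) A + A (x) I + sum_i B_i (x) B_i
S = np.kron(I, A) + np.kron(A, I) + sum(np.kron(Bi, Bi) for Bi in B)
R_c = -0.5 * max(np.linalg.eigvals(S).real)      # rate of convergence
print("R_c =", R_c)

def rhs(t, z):
    # Joint integration of M' = A M + Theta (Eq. 2.20) and the second-moment
    # matrix equation for P = E[Y Y^T] (Eq. 2.19).
    M, P = z[:N], z[N:].reshape(N, N)
    dP = A @ P + P @ A.T + np.outer(Theta, M) + np.outer(M, Theta)
    for Bi in B:
        dP += Bi @ P @ Bi + Bi @ (np.outer(M, m) + np.outer(m, M) + np.outer(m, m)) @ Bi
    return np.concatenate([A @ M + Theta, dP.ravel()])

t_eval = np.linspace(0.0, 20.0, 400)
z0 = np.concatenate([u, np.outer(u, u).ravel()])
sol = solve_ivp(rhs, (0.0, 20.0), z0, t_eval=t_eval)

W = np.array([np.trace(sol.y[N:, k].reshape(N, N)) for k in range(t_eval.size)])
lnW_half = 0.5 * np.log(W)
R_ins = -np.gradient(lnW_half, t_eval)                       # instantaneous return rate
R_ave = -(lnW_half - lnW_half[0]) / np.where(t_eval > 0, t_eval, np.nan)
print("R_inst(1) =", np.interp(1.0, t_eval, R_ins))
print("R_ave over [0, 5] =", np.interp(5.0, t_eval, R_ave))
```

Note that, with noise present, W(t) settles at its stationary value rather than at zero, so in this sketch the two return rates are most informative over the transient that follows the pulse.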
To demonstrate the method we proposed in the previous section, let us look at the stochastic gLV model of two species
\begin{equation} \begin{aligned} dX_1 & = X_1(\alpha_1-\beta_{11}X_1+\beta_{12}X_2)dt+X_1(\tau_{11}dW_1+\tau_{12}dW_2), \\ dX_2 & = X_2(\alpha_2+\beta_{21}X_1-\beta_{22}X_2)dt+X_2(\tau_{21}dW_1+\tau_{22}dW_2), \end{aligned} \end{equation} | (3.1) |
where \beta_{11} > 0 and \beta_{22} > 0 . There are three cases:
● If \beta_{12} < 0 and \beta_{21} < 0 , then the system (3.1) is a competitive model.
● If \beta_{12} > 0 and \beta_{21} > 0 , then the system (3.1) is a mutualistic model.
● If (\beta_{12} < 0, \, \beta_{21} > 0) or (\beta_{12} > 0, \, \beta_{21} < 0) , then the system (3.1) is a predator–prey model (mixed competition and mutualism).
Next, we analyze the dynamics of the system (3.1) on the boundary of {\mathbb{R}}_+^2 , which is \partial{\mathbb{R}}_{+}^2: = {\mathbb{R}}_{1+}^{\circ}\cup{\mathbb{R}}_{2+}^{\circ}\cup\{(0, 0)\} , where {\mathbb{R}}_{1+}^{\circ}: = \{(x_1, 0):x_1 > 0\}\, \, \text{and}\, \, {\mathbb{R}}_{2+}^{\circ}: = \{(0, x_2):x_2 > 0\} .
Case 1. If X_1(0) = 0 and X_2(0) = 0 , then, by (3.1), we get, a.s.,
\begin{align*} X_1(t) = X_1(0)\exp&\left\{\int_0^t\left[\alpha_1-\beta_{11}X_1(s)+\beta_{12}X_2(s)-\frac{\tau_{11}^2}{2}-\frac{\tau_{12}^2}{2}\right]dt\right.\\ &\left.+\tau_{11}W_1(t)+\tau_{12}W_2(t)\right\}, \\ X_2(t) = X_2(0)\exp&\left\{\int_0^t\left[\alpha_2+\beta_{21}X_1(s)-\beta_{22}X_2(s)-\frac{\tau_{21}^2}{2}-\frac{\tau_{22}^2}{2}\right]dt\right.\\ &\left.+\tau_{21}W_1(t)+\tau_{22}W_2(t)\right\}. \end{align*} |
This implies that X_1(t) = X_2(t) = 0 for all t > 0 a.s. So, the Dirac delta measure at (0, 0) , \mu_0 , is an ergodic invariant probability measure for the system (3.1) on the boundary \partial{\mathbb{R}}_+^2 .
Case 2. If X_1(0) = 0 and X_2(0) > 0 , then, as in Case 1, X_1(t)\equiv 0 a.s. and X_2(t) > 0 for all t > 0 a.s. Plugging X_1 = 0 into the second equation of (3.1) gives
\begin{equation} dX_2 = X_2(\alpha_2-\beta_{22}X_2)dt+X_2(\tau_{21}dW_1+\tau_{22}dW_2). \end{equation} | (3.2) |
For a fixed \alpha_2 > 0 , consider
\begin{align*} s(X_2)& = \int_{\alpha_2}^{X_2}\exp\left\{-\int_{\alpha_2}^{y}\frac{(2\alpha_2-2\beta_{22}u)u}{(\tau_{21}^2+\tau_{22}^2)u^2}du\right\}dy\\ & = C_2\int_{\alpha_2}^{X_2} y^{-\frac{2\alpha_2}{\tau_{21}^2+\tau_{22}^2}}\exp\left\{\frac{2\beta_{22}}{\tau_{21}^2+\tau_{22}^2}y\right\}dy. \end{align*} |
Since the integrand in the last integral can be written as
y^{\frac{-2\alpha_2}{\tau_{21}^2+\tau_{22}^2}}\left[1+\frac{2\beta_{22}}{\tau_{21}^2+\tau_{22}^2}y+\frac{1}{2!}\frac{4\beta_{22}^2}{(\tau_{21}^2+\tau_{22}^2)^2}y^2+\cdots\right], |
and a k\in{\mathbb{Z}}_+ exists such that \frac{-2\alpha_2}{\tau_{21}^2+\tau_{22}^2}+k > -1 , it follows that s(\infty) = \infty . If \frac{-2\alpha_2}{\tau_{21}^2+\tau_{22}^2}+1 > 0 , which is equivalent to \alpha_{2}-\frac{\tau_{21}^2}{2}-\frac{\tau_{22}^2}{2} < 0 , then s(0+) > -\infty . This means that \lim_{t\to\infty}X_2(t) = 0 a.s., and so the equation (3.2) does not have any invariant probability measure on {\mathbb{R}}_{+}^{\circ} . If \frac{-2\alpha_2}{\tau_{21}^2+\tau_{22}^2}+1 < 0 , which is equivalent to \alpha_2-\frac{\tau_{21}^2}{2}-\frac{\tau_{22}^2}{2} > 0 , then s(0+) = -\infty . Therefore, X_2(t) oscillates between 0 and \infty . Thus the equation (3.2) has a unique invariant probability measure \pi_2 on {\mathbb{R}}_+^{\circ} , which is the Gamma distribution
\pi_2\sim\text{Gamma}\left(\frac{2\alpha_2}{\tau_{21}^2+\tau_{22}^2}-1, \frac{\tau_{21}^2+\tau_{22}^2}{2\beta_{22}}\right). |
So \mu_2: = \delta_0^*\times\pi_2 is an ergodic invariant probability measure for the system (3.1) on the boundary {\mathbb{R}}_{2+}^{\circ} , in which \delta_0^* is the Dirac Delta measure with the mass at 0.
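As a quick numerical sanity check (ours; the coefficients are hypothetical and chosen so that \alpha_2 > \frac{\tau_{21}^2}{2}+\frac{\tau_{22}^2}{2} ), the long-run time average of a simulated trajectory of (3.2) can be compared with the mean of the Gamma distribution above.

```python
import numpy as np

alpha2, beta22 = 0.8, 1.0
tau21, tau22 = 0.3, 0.2                       # hypothetical noise intensities
sigma2 = tau21**2 + tau22**2                  # alpha2 - sigma2/2 > 0 holds here

rng = np.random.default_rng(2)
dt, n_steps = 1e-2, 500_000
x, avg = 1.0, 0.0
for _ in range(n_steps):
    dW1, dW2 = rng.normal(0.0, np.sqrt(dt), size=2)
    x = max(x + x * (alpha2 - beta22 * x) * dt + x * (tau21 * dW1 + tau22 * dW2), 1e-12)
    avg += x * dt
avg /= n_steps * dt

shape = 2 * alpha2 / sigma2 - 1
scale = sigma2 / (2 * beta22)
print("time average of X_2:", avg)
print("Gamma mean (shape*scale):", shape * scale)   # = alpha2/beta22 - sigma2/(2*beta22)
```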
Case 3. If X_1(0) > 0 and X_2(0) = 0 then X_1(t) > 0 for all t > 0 a.s. and X_2(t)\equiv 0 a.s. But then the first equation of (3.1) becomes
\begin{equation} dX_1 = X_1(\alpha_1-\beta_{11}X_1)dt+X_1(\tau_{11}dW_1+\tau_{12}dW_2). \end{equation} | (3.3) |
By the same argument as above, the equation (3.3) has a unique invariant probability measure \pi_1 on {\mathbb{R}}_{+}^{\circ} , which is the Gamma distribution
\pi_1\sim\text{Gamma}\left(\frac{2\alpha_1}{\tau_{11}^2+\tau_{12}^2}-1, \frac{\tau_{11}^2+\tau_{12}^2}{2\beta_{11}}\right), |
provided that \alpha_1-\frac{\tau_{11}^2}{2}-\frac{\tau_{12}^2}{2} > 0 . Otherwise, (3.3) does not have any invariant probability measure on {\mathbb{R}}_+^{\circ} . Therefore, \mu_1 = \pi_1\times\delta_0^* is an ergodic invariant probability measure for the system (3.1) on the boundary {\mathbb{R}}_{1+}^{\circ} .
Now we compute the Lyapunov exponents of \mu_0 = \delta_0^*\times\delta_0^* , \mu_1 , and \mu_2 . By Ito's formula, the system (3.1) is equivalent to
\begin{align*} \frac{\ln X_1(t)}{t} = \frac{\ln X_1(0)}{t}&+\frac{1}{t}\int_0^t\left[\alpha_1-\beta_{11}X_1(s)+\beta_{12}X_2(s)-\frac{\tau_{11}^2}{2}-\frac{\tau_{12}^2}{2}\right]ds\\ &+\tau_{11}\frac{W_1(t)}{t}+\tau_{12}\frac{W_2(t)}{t}, \\ \frac{\ln X_2(t)}{t} = \frac{\ln X_2(0)}{t}&+\frac{1}{t}\int_0^t\left[\alpha_2+\beta_{21}X_1(s)-\beta_{22}X_2(s)-\frac{\tau_{21}^2}{2}-\frac{\tau_{22}^2}{2}\right]ds\\ &+\tau_{21}\frac{W_1(t)}{t}+\tau_{22}\frac{W_2(t)}{t}. \end{align*} |
By the strong law of large numbers, when the solution {\mathbf{X}}(t) = (X_1(t), X_2(t))^T of (3.1) is close to \text{supp}(\delta^*) = \{(0, 0)\} for a long time (meaning that the solution {\mathbf{X}}(t) is within some small neighborhood of (0, 0) for a long time), the Lyapunov exponent for the first component (the invasion rate of Species 1) with respect to \mu_0 is
\begin{align*} \lambda_1(\mu_0)& = \lim\limits_{t\to\infty}\frac{\ln X_1(t)}{t}\\ & = \lim\limits_{t\to\infty}\frac{1}{t}\int_0^t\left[\alpha_1-\beta_{11}X_1(s)+\beta_{12}X_2(s)-\frac{\tau_{11}^2}{2}-\frac{\tau_{12}^2}{2}\right]ds\\ & = \int_{\partial{\mathbb{R}}_{+}^2}\left[\alpha_1-\beta_{11}X_1+\beta_{12}X_2-\frac{\tau_{11}^2}{2}-\frac{\tau_{12}^2}{2}\right]\mu_0(d{\mathbf{X}})\\ & = \alpha_1-\frac{\tau_{11}^2}{2}-\frac{\tau_{12}^2}{2}. \end{align*} |
Similarly,
\begin{align*} \lambda_2(\mu_0)& = \int_{\partial{\mathbb{R}}_{+}^2}\left[\alpha_2+\beta_{21}X_1-\beta_{22}X_2-\frac{\tau_{21}^2}{2}-\frac{\tau_{22}^2}{2}\right]\mu_0(d{\mathbf{X}})\\ & = \alpha_2-\frac{\tau_{21}^2}{2}-\frac{\tau_{22}^2}{2}. \end{align*} |
By the similar computations, the Lyapunov exponents of \mu_1 and \mu_2 can be computed as
\begin{align*} \lambda_1(\mu_1)& = 0, \\ \lambda_2(\mu_1)& = \alpha_2-\frac{\tau_{21}^2}{2}-\frac{\tau_{22}^2}{2}+\frac{\beta_{21}}{\beta_{11}}\alpha_1-\frac{\beta_{21}}{2\beta_{11}}(\tau_{11}^2+\tau_{12}^2), \\ \lambda_1(\mu_2)& = \alpha_1-\frac{\tau_{11}^2}{2}-\frac{\tau_{12}^2}{2}+\frac{\beta_{12}}{\beta_{22}}\alpha_2-\frac{\beta_{12}}{2\beta_{22}}(\tau_{21}^2+\tau_{22}^2), \\ \lambda_2(\mu_2)& = 0. \end{align*} |
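For reference, these invasion rates are simple functions of the model parameters and can be evaluated directly; the short sketch below (ours, with hypothetical competitive coefficients \beta_{12}, \beta_{21} < 0 in the notation of (3.1)) does so.

```python
import numpy as np

# Hypothetical competitive parameters for model (3.1): beta11, beta22 > 0, beta12, beta21 < 0.
a1, a2 = 1.0, 0.8
b11, b12, b21, b22 = 1.0, -0.3, -0.2, 1.0
t11, t12, t21, t22 = 0.3, 0.1, 0.2, 0.2

s1 = t11**2 + t12**2      # noise felt by species 1
s2 = t21**2 + t22**2      # noise felt by species 2

lam1_mu0 = a1 - s1 / 2
lam2_mu0 = a2 - s2 / 2
lam2_mu1 = a2 - s2 / 2 + (b21 / b11) * a1 - (b21 / (2 * b11)) * s1
lam1_mu2 = a1 - s1 / 2 + (b12 / b22) * a2 - (b12 / (2 * b22)) * s2

print("lambda_1(mu_0) =", lam1_mu0)   # > 0: species 1 can invade the extinction state
print("lambda_2(mu_0) =", lam2_mu0)
print("lambda_2(mu_1) =", lam2_mu1)   # invasion rate of species 2 against mu_1
print("lambda_1(mu_2) =", lam1_mu2)   # invasion rate of species 1 against mu_2
```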
On the basis of the Lyapunov exponents of the set of all ergodic invariant probability measures of the system (3.1) on the boundary \partial{\mathbb{R}}_{+}^2 , denoted \mathcal{M} = \{\mu_0, \mu_1, \mu_2\} , we can give the complete classification of the dynamics of the system (3.1). First of all, we assume that the matrix
\Sigma = \begin{bmatrix} \sigma_{11} & \sigma_{12} \\ \sigma_{21} & \sigma_{22} \end{bmatrix}, |
where \sigma_{ij} = \tau_{i1}\tau_{j1}+\tau_{i2}\tau_{j2} ( i, j = 1, 2 ), is positive definite. This is equivalent to requiring that
\tau_{11}^2+\tau_{12}^2 > 0, \, \, \tau_{21}^2+\tau_{22}^2 > 0, \, \, \text{and}\, \, \tau_{11}\tau_{22}-\tau_{12}\tau_{21}\neq 0. |
Next, we claim that the family of transition probability functions of the solution to the system (3.1) is tight when the system (3.1) is either competitive or predator–prey. Furthermore, when the system (3.1) is mutualistic, the family of transition probability functions of its solution might not be tight. In this case, we show that the solution blows up in finite time almost surely or that there is no invariant measure on {\mathbb{R}}_{+}^{2, \circ} . Indeed, we first assume \beta_{12} < 0 and \beta_{21} < 0 . In order to prove the tightness of the solution of (3.1), it suffices to show that there is a \delta > 0 such that
\begin{equation} \frac{X_1f_1({\mathbf{X}})+X_2f_2({\mathbf{X}})}{1+X_1+X_2}-\frac{\sum\limits_{i, j = 1}^2\sigma_{ij}X_iX_j}{2(1+X_1+X_2)^2} +\delta(3+|f_1({\mathbf{X}})|+|f_2({\mathbf{X}})|) < 0 \end{equation} | (3.4) |
for any \|{\mathbf{X}}\| = |X_1|+|X_2| that is sufficiently large, where f_1({\mathbf{X}}) = \alpha_1-\beta_{11}X_1+\beta_{12}X_2 and f_2({\mathbf{X}}) = \alpha_2+\beta_{21}X_1-\beta_{22}X_2 are the per-capita growth rates. Since \beta_{12} and \beta_{21} are both negative, as \|{\mathbf{X}}\|\to\infty , we get
\frac{X_1f_1({\mathbf{X}})+X_2f_2({\mathbf{X}})}{(1+X_1+X_2)^2} = \frac{\alpha_1X_1+\alpha_2X_2-(\beta_{11}X_1^2+\beta_{22}X_2^2)+(\beta_{12}+\beta_{21})X_1X_2}{(1+X_1+X_2)^2} |
which approaches -\infty . So there is a \tilde{\beta} > 0 such that for any \|{\mathbf{X}}\| that is sufficiently large
\frac{X_1f_1({\mathbf{X}})+X_2f_2({\mathbf{X}})}{1+X_1+X_2} < -\tilde{\beta}(1+X_1+X_2). |
On the other hand, using the Cauchy–Schwarz inequality, a \tilde{\sigma} > 0 exists such that
-\frac{\sum\limits_{i, j = 1}^2\sigma_{ij}X_iX_j}{2(1+X_1+X_2)^2}\le -\tilde{\sigma}\frac{X_1^2+X_2^2}{(1+X_1+X_2)^2}\le -\tilde{\sigma} |
when \|{\mathbf{X}}\| is sufficiently large. Then we can choose a \delta > 0 that is small enough so that we can obtain the inequality (3.4) as \|X\| is large enough. Second, we suppose either (\beta_{12} < 0, \beta_{21} > 0) or (\beta_{12} > 0, \beta_{21} < 0) . Then c_1 > 0 and c_2 > 0 exist such that c_1\beta_{12}+c_2\beta_{21} = 0 . This implies that
\lim\limits_{\|{\mathbf{X}}\|\to\infty}\frac{c_1X_1f_1({\mathbf{X}})+c_2X_2f_2({\mathbf{X}})}{(1+c_1X_1+c_2X_2)^2} = -\infty. |
By the same reasoning as in the competitive case, we can easily find a \delta' > 0 that is sufficiently small such that, for any \|{\mathbf{X}}\| large enough, we get the inequality (3.4). Lastly, for the case \beta_{12} > 0 and \beta_{21} > 0 , we show that the solution of (3.1) blows up in finite time almost surely under the assumptions that
\alpha_1-\frac{\tau_{11}^2}{2}-\frac{\tau_{12}^2}{2} > 0, \, \, \alpha_2-\frac{\tau_{21}^2}{2}-\frac{\tau_{22}^2}{2} > 0, \, \, \text{and}\, \, \beta_{11}\beta_{22}-\beta_{12}\beta_{21}\le 0. |
By way of contradiction, assume that {\mathbf{X}}(t) does not blow up in finite time and has an invariant probability measure on {\mathbb{R}}_{+}^{2, \circ} . As a result, {\mathbf{X}}(t) is a recurrent process. But then, by Ito's formula,
\begin{align*} \frac{\beta_{22}\ln X_1(t)+\beta_{12}\ln X_2(t)}{t}& = \frac{\beta_{22}\ln X_1(0)+\beta_{12}\ln X_2(0)}{t}\\ &+\beta_{22}\left(\alpha_1-\frac{\tau_{11}^2}{2}-\frac{\tau_{12}^2}{2}\right)+\beta_{12}\left(\alpha_2-\frac{\tau_{21}^2}{2}-\frac{\tau_{22}^2}{2}\right)\\ &+(\beta_{12}\beta_{21}-\beta_{11}\beta_{22})\frac{1}{t}\int_0^t X_1(s)ds\\ &+(\tau_{11}\beta_{22}+\tau_{21}\beta_{12})\frac{W_1(t)}{t}\\ &+(\tau_{12}\beta_{22}+\tau_{22}\beta_{12})\frac{W_2(t)}{t}. \end{align*} |
It follows that, by the above assumptions,
\limsup\limits_{t\to\infty}\frac{\beta_{22}\ln X_1(t)+\beta_{12}\ln X_2(t)}{t} > 0\, \, \text{a.s.} |
which is a contradiction. Therefore, the system (3.1), when it is competitive or of predator–prey type, satisfies Assumption 3 in Subsection 2.2.
Now we focus on the stochastic model (3.1) of the competitive type and predator–prey type. On the basis of Theorems 2.3, 2.4, and 2.5 in Subsection 2.2, we are able to describe the complete dynamic picture of the system (3.1) as follows.
C1. Suppose \lambda_1(\mu_0) < 0 and \lambda_2(\mu_0) < 0 . This means that there are no other ergodic invariant probability measures in \partial{\mathbb{R}}_{+}^2 in addition to \mu_0 . Then \mathcal{M} = \{\mu_0\} , which implies that Assumptions 3 and 5 hold for the system (3.1). By Theorem 2.4, the solution {\mathbf{X}}(t) = (X_1(t), X_2(t))^T converges a.s. to (0, 0)^T for any initial condition {\mathbf{x}} = (x_1, x_2)\in{\mathbb{R}}_{+}^{2, \circ} .
C2. Suppose \lambda_1(\mu_0) > 0 and \lambda_2(\mu_0) < 0 . Then \mathcal{M} = \{\mu_0, \mu_1\} . There are two cases.
(ⅰ) If \lambda_2(\mu_1) > 0 , then Assumption 4 is fulfilled, and hence the transition probability function of the solution to the system (3.1) converges to an invariant probability measure \bar{\mu}_1 in {\mathbb{R}}_{+}^{2, \circ} in the total variation norm.
(ⅱ) If \lambda_2(\mu_1) < 0 then \mathcal{M}^1 = \{\mu_1\} and so \mathcal{M}^2 = \mathcal{M}\backslash\mathcal{M}^1 = \{\mu_0\} . This means that Assumptions 3 and 6 are satisfied. By Theorem 2.5, X_2(t) converges to 0 with the exponential rate \lambda_2(\mu_1) and the family of random occupation measures of the solution converges to \mu_1 .
C3. Suppose \lambda_1(\mu_0) < 0 and \lambda_2(\mu_0) > 0 . Then \mathcal{M} = \{\mu_0, \mu_2\} . There are two cases.
(ⅰ) If \lambda_1(\mu_2) > 0 , then Assumption 4 is fulfilled, and hence the transition probability function of the solution to the system (3.1) converges to an invariant probability measure \bar{\mu}_2 in {\mathbb{R}}_{+}^{2, \circ} in the total variation norm.
(ⅱ) If \lambda_1(\mu_2) < 0 , then \mathcal{M}^1 = \{\mu_2\} and so \mathcal{M}^2 = \mathcal{M}\backslash\mathcal{M}^1 = \{\mu_0\} . This means that Assumptions 3 and 6 are satisfied. By Theorem 2.5, X_1(t) converges to 0 with the exponential rate \lambda_1(\mu_2) , and the family of random occupation measures of the solution converges to \mu_2 .
C4. Suppose \lambda_1(\mu_0) > 0 and \lambda_2(\mu_0) > 0 . Then \mathcal{M} = \{\mu_0, \mu_1, \mu_2\} . There are four cases.
(ⅰ) If \lambda_1(\mu_2) > 0 and \lambda_2(\mu_1) > 0 , then any invariant probability measure on \partial{\mathbb{R}}_{+}^2 has the form \mu = p_0\mu_0+p_1\mu_1+p_2\mu_2 , where p_0 , p_1 , and p_2 are positive numbers such that p_0+p_1+p_2 = 1 . It is straightforward that \max\{\lambda_1(\mu), \lambda_2(\mu)\} > 0 . Then Assumptions 3 and 4 are satisfied. By Theorem 2.3, there is a unique invariant probability measure \mu^* on {\mathbb{R}}_{+}^{2, \circ} , and the transition probability function P(t, {\mathbf{x}}, \cdot) , {\mathbf{x}}\in{\mathbb{R}}_{+}^{2, \circ} , converges to \mu^* in the total variation norm.
(ⅱ) If \lambda_1(\mu_2) < 0 and \lambda_2(\mu_1) < 0 , then \mathcal{M}^1 = \{\mu_1, \mu_2\} and so \mathcal{M}^2 = \mathcal{M}\backslash\mathcal{M}^1 = \{\mu_0\} . Hence, Assumptions 3 and 6 hold true. By Theorem 2.5, p_1^{{\mathbf{x}}} > 0 , p_2^{{\mathbf{x}}} > 0 , and p_1^{{\mathbf{x}}}+p_2^{{\mathbf{x}}} = 1 , where
\begin{align*} p_1^{{\mathbf{x}}}& = {\mathbb{P}}_{{\mathbf{x}}}\left\{\mathcal{U}(\omega) = \{\mu_1\}\, \, \text{and}\, \, \lim\limits_{t\to\infty}\frac{\ln X_2(t)}{t} = \lambda_2(\mu_1) < 0\right\}, \\ p_2^{{\mathbf{x}}}& = {\mathbb{P}}_{{\mathbf{x}}}\left\{\mathcal{U}(\omega) = \{\mu_2\}\, \, \text{and}\, \, \lim\limits_{t\to\infty}\frac{\ln X_1(t)}{t} = \lambda_1(\mu_2) < 0\right\}. \end{align*} |
(ⅲ) If \lambda_1(\mu_2) > 0 and \lambda_2(\mu_1) < 0 , then the conclusion is the same as in C2(ⅱ).
(ⅳ) If \lambda_1(\mu_2) < 0 and \lambda_2(\mu_1) > 0 , then the conclusion is the same as in C3(ⅱ).
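To make the classification C1–C4 concrete, the following sketch evaluates the boundary Lyapunov exponents numerically and reports which case applies. It is an illustration only, written with hypothetical parameter values; it assumes the standard closed forms \lambda_i(\mu_0) = \alpha_i-\sigma_{ii}/2 and, for example, \lambda_2(\mu_1) = \alpha_2-\sigma_{22}/2+\beta_{21}(\alpha_1-\sigma_{11}/2)/\beta_{11} (with \sigma_{ii} = \tau_{i1}^2+\tau_{i2}^2 ), which are not restated in this section and should be checked against the definitions in Subsection 2.2.

```python
import numpy as np

def invasion_rates(alpha, beta, tau):
    """Boundary invasion rates for the two-species SgLV model.

    alpha: (a1, a2); beta: 2x2 array [[b11, b12], [b21, b22]] with the sign
    convention f1 = a1 - b11*X1 + b12*X2, f2 = a2 + b21*X1 - b22*X2;
    tau: 2x2 noise matrix (tau_ij).  The closed forms below are assumptions
    and should be verified against the definitions of lambda_i in Subsection 2.2.
    """
    s11 = tau[0, 0]**2 + tau[0, 1]**2
    s22 = tau[1, 0]**2 + tau[1, 1]**2
    l1_mu0 = alpha[0] - s11 / 2            # growth rate of species 1 at the origin
    l2_mu0 = alpha[1] - s22 / 2            # growth rate of species 2 at the origin
    m1 = l1_mu0 / beta[0, 0]               # mean of mu_1 (species 1 alone), valid if l1_mu0 > 0
    m2 = l2_mu0 / beta[1, 1]               # mean of mu_2 (species 2 alone), valid if l2_mu0 > 0
    l2_mu1 = alpha[1] - s22 / 2 + beta[1, 0] * m1   # invasion rate of species 2 into mu_1
    l1_mu2 = alpha[0] - s11 / 2 + beta[0, 1] * m2   # invasion rate of species 1 into mu_2
    return l1_mu0, l2_mu0, l1_mu2, l2_mu1

def classify(alpha, beta, tau):
    l1_0, l2_0, l1_2, l2_1 = invasion_rates(alpha, beta, tau)
    if l1_0 < 0 and l2_0 < 0:
        return "C1: both species converge to 0"
    if l1_0 > 0 and l2_0 < 0:
        return "C2(i): coexistence" if l2_1 > 0 else "C2(ii): species 2 dies out"
    if l1_0 < 0 and l2_0 > 0:
        return "C3(i): coexistence" if l1_2 > 0 else "C3(ii): species 1 dies out"
    if l1_2 > 0 and l2_1 > 0:
        return "C4(i): coexistence"
    if l1_2 < 0 and l2_1 < 0:
        return "C4(ii): one species dies out, each outcome with positive probability"
    return "C4(iii)/(iv): the species with negative invasion rate dies out"

# Hypothetical competitive parameter set, for illustration only.
alpha = np.array([3.0, 4.0])
beta = np.array([[0.05, -0.03], [-0.02, 0.03]])   # b12, b21 < 0: competition
tau = np.array([[0.1, 0.05], [0.05, 0.1]])
print(classify(alpha, beta, tau))
```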
Finally, to study the resilience of the two-dimensional stochastic gLV model (3.1), we assume that the system (3.1) is of competitive or predator–prey type, that the matrix \Sigma is positive definite, and that the inequality (3.4) holds for some \delta > 0 . Furthermore, we assume that the system (3.1) satisfies the assumptions of C2(ⅰ), C3(ⅰ), or C4(ⅰ).
Then the stochastic gLV model of two species is stochastically strongly persistent, that is, it has a unique positive stationary distribution \pi^* on {\mathbb{R}}_{+}^{2, \circ} such that the transition probability function of its solution converges to \pi^* in the total variation norm. The linearized system of (3.1) at \pi^* is given by
\begin{equation} \begin{aligned} d\widetilde X_1 & = [\Phi_1+\gamma_{11}(\widetilde X_1-m_1)+\gamma_{12}(\widetilde X_2-m_2)]dt+\widetilde X_1(\tau_{11}dW_1+\tau_{12}dW_2), \\ d\widetilde X_2 & = [\Phi_2+\gamma_{21}(\widetilde X_1-m_1)+\gamma_{22}(\widetilde X_2-m_2)]dt+\widetilde X_2(\tau_{21}dW_1+\tau_{22}dW_2), \end{aligned} \end{equation} | (3.5) |
where the values of \Phi_i and \gamma_{ij} ( i, j = 1, 2 ) are determined by
\begin{align*} \Phi_1 & = \alpha_1m_1-\beta_{11}m_1^2+\beta_{12}m_1m_2, \\ \Phi_2 & = \alpha_2m_2+\beta_{21}m_2m_1-\beta_{22}m_2^2, \\ \gamma_{11}& = \alpha_1-2\beta_{11}m_1+\beta_{12}m_2, \\ \gamma_{12}& = \beta_{12}m_1, \\ \gamma_{21}& = \beta_{21}m_2, \\ \gamma_{22}& = \alpha_2+\beta_{21}m_1-2\beta_{22}m_2, \end{align*} |
with
\begin{align*} m_i = \int_{\partial{\mathbb{R}}_{+}^{2, \circ}}X_i\pi^*(dX_1dX_2) = \lim\limits_{t\to\infty}\frac{1}{t}\int_0^t X_i(s)ds\, \, (i = 1, 2). \end{align*} |
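In other words, writing F({\mathbf{X}}) = \big(X_1(\alpha_1-\beta_{11}X_1+\beta_{12}X_2), \; X_2(\alpha_2+\beta_{21}X_1-\beta_{22}X_2)\big)^T for the drift of (3.1), the constants above are \Phi = F({\mathbf{m}}) and (\gamma_{ij}) = DF({\mathbf{m}}) , where
\begin{align*} DF({\mathbf{X}}) = \begin{bmatrix} \alpha_1-2\beta_{11}X_1+\beta_{12}X_2 & \beta_{12}X_1 \\ \beta_{21}X_2 & \alpha_2+\beta_{21}X_1-2\beta_{22}X_2 \end{bmatrix} \end{align*}
is the Jacobian of the drift, evaluated at the stationary mean {\mathbf{m}} = (m_1, m_2)^T , while the diffusion terms of (3.1) are kept unchanged in (3.5).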
Let A = \begin{bmatrix} \gamma_{11} & \gamma_{12} \\ \gamma_{21} & \gamma_{22} \end{bmatrix} , \Theta = (\Phi_1, \Phi_2)^T , B_1 = \begin{bmatrix} \tau_{11} & 0 \\ 0 & \tau_{21} \end{bmatrix} , B_2 = \begin{bmatrix} \tau_{12} & 0 \\ 0 & \tau_{22} \end{bmatrix} , \sigma_{ij} = \tau_{i1}\tau_{j1}+\tau_{i2}\tau_{j2} ( i, j = 1, 2 ), and Y_i = \widetilde X_i-m_i ( i = 1, 2 ). Then the system (3.5) becomes
\begin{equation} d{\mathbf{Y}} = (A{\mathbf{Y}} + \Theta)dt + (B_1{\mathbf{Y}}+B_1{\mathbf{m}}) dW_1 + (B_2{\mathbf{Y}}+B_2{\mathbf{m}}) dW_2 \end{equation} | (3.6) |
where {\mathbf{Y}} = (Y_1, Y_2)^T and {\mathbf{m}} = (m_1, m_2)^T . Next, by computation, we have
\begin{align*} I_2\otimes A & = \begin{bmatrix} \gamma_{11} & \gamma_{12} & 0 & 0\\ \gamma_{21} & \gamma_{22} & 0 & 0\\ 0 & 0 & \gamma_{11} & \gamma_{12} \\ 0 & 0 & \gamma_{21} & \gamma_{22} \end{bmatrix}, A\otimes I_2 = \begin{bmatrix} \gamma_{11} & 0 &\gamma_{12} & 0\\ 0 & \gamma_{11} & 0 & \gamma_{12}\\ \gamma_{21} & 0 & \gamma_{22} & 0\\ 0 & \gamma_{21} & 0 & \gamma_{22} \end{bmatrix}, \\ B_1\otimes B_1 & = \begin{bmatrix} \tau_{11}^2 & 0 & 0 & 0 \\ 0 & \tau_{11}\tau_{21} & 0 & 0 \\ 0 & 0 & \tau_{21}\tau_{11} & 0 \\ 0 & 0 & 0 & \tau_{21}^2 \end{bmatrix}, B_2\otimes B_2 = \begin{bmatrix} \tau_{12}^2 & 0 & 0 & 0 \\ 0 & \tau_{12}\tau_{22} & 0 & 0 \\ 0 & 0 & \tau_{22}\tau_{12} & 0 \\ 0 & 0 & 0 & \tau_{22}^2 \end{bmatrix}. \end{align*} |
Thus S = I_2 \otimes A + A \otimes I_2 + B_1 \otimes B_1 + B_2 \otimes B_2 , which is equal to
\begin{align*} S = \begin{bmatrix} 2\gamma_{11}+\sigma_{11} & \gamma_{12} & \gamma_{12} & 0 \\ \gamma_{21} & \gamma_{11}+\gamma_{22}+\sigma_{12} & 0 & \gamma_{12} \\ \gamma_{21} & 0 & \gamma_{11}+\gamma_{22}+\sigma_{21} & \gamma_{12} \\ 0 & \gamma_{21} & \gamma_{21} & 2\gamma_{22}+\sigma_{22} \end{bmatrix}, \end{align*} |
and T = \text{vec}(\Theta M^T+M\Theta^T)+[B_1 \otimes B_1 + B_2 \otimes B_2]\text{vec}(M{\mathbf{m}}^T+{\mathbf{m}} M^T+{\mathbf{m}}{\mathbf{m}}^T) , which is equal to
\begin{align*} T = \begin{bmatrix} 2\Phi_1M_1+\sigma_{11}(2M_1m_1+m_1^2)\\ \Phi_2M_1+\Phi_1M_2+\sigma_{12}(M_2m_1+m_2M_1+m_2m_1)\\ \Phi_2M_1+\Phi_1M_2+\sigma_{21}(M_1m_2+m_1M_2+m_1m_2)\\ 2\Phi_2M_2+\sigma_{22}(2M_2m_2+m_2^2) \end{bmatrix}, \end{align*} |
in which M = (M_1, M_2)^T = ({\mathbb{E}} Y_1, {\mathbb{E}} Y_2)^T is the solution to the ODE system
\begin{equation} \dot{M} = A M+\Theta, \end{equation} | (3.7) |
with the initial value M(0) = {\mathbf{u}} . If we let {\mathbf{Z}} = ({\mathbb{E}} Y_1^2, {\mathbb{E}} Y_2Y_1, {\mathbb{E}} Y_1Y_2, {\mathbb{E}} Y_2^2)^T , then we can track the second moments of the solution of (3.6) through the solutions of the following ODE system
\begin{equation} \frac{d{\mathbf{Z}}}{dt} = S{\mathbf{Z}} + T. \end{equation} | (3.8) |
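For clarity, (3.7) and (3.8) are the first- and second-moment equations of (3.6): taking expectations in (3.6) gives \dot{M} = AM+\Theta , while applying Itô's formula to {\mathbf{Y}}{\mathbf{Y}}^T , taking expectations, and writing P(t) = {\mathbb{E}}[{\mathbf{Y}}(t){\mathbf{Y}}(t)^T] yields
\begin{align*} \frac{dP}{dt} = AP+PA^T+\Theta M^T+M\Theta^T+\sum\limits_{k = 1}^2 B_k\big(P+M{\mathbf{m}}^T+{\mathbf{m}}M^T+{\mathbf{m}}{\mathbf{m}}^T\big)B_k^T. \end{align*}
Vectorizing this equation with the identity \text{vec}(BXB^T) = (B\otimes B)\text{vec}(X) and setting {\mathbf{Z}} = \text{vec}(P) produces exactly \dot{\mathbf{Z}} = S{\mathbf{Z}}+T ; note that T = T(t) depends on time through M(t) .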
By using the two ODE systems (3.7) and (3.8), we develop, in the next subsection, graphical representations of the four resilience measures, namely the instantaneous return rate, the average return rate, the asymptotic resilience, and the convergence rate, for the stochastic system (3.1) of the competitive type over the relevant parameter regimes.
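As a numerical sketch of this pipeline (an illustration only, not the authors' implementation), the following code builds S and T , integrates (3.7) and (3.8) with a standard ODE solver, and evaluates W(t) . The linearized coefficients are taken from parameter set 1 of Table 1; the noise matrix \tau = (\tau_{ij}) , the pulse {\mathbf{u}} , and all function names are hypothetical placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Linearized coefficients from parameter set 1 of Table 1; the noise matrix
# tau and the pulse u below are hypothetical placeholders (not reported here).
gamma = np.array([[-1.9353, -1.1624],
                  [-1.3535, -3.6062]])
Phi = np.array([0.0813, 0.4263])
m = np.array([38.7470, 135.3518])
tau = np.array([[0.01, 0.005],
                [0.005, 0.01]])

A = gamma
B1 = np.diag(tau[:, 0])                  # B1 = diag(tau_11, tau_21)
B2 = np.diag(tau[:, 1])                  # B2 = diag(tau_12, tau_22)
I2 = np.eye(2)

# Mean square stability matrix S and the diagonal factor appearing in T.
S = np.kron(I2, A) + np.kron(A, I2) + np.kron(B1, B1) + np.kron(B2, B2)
BB = np.kron(B1, B1) + np.kron(B2, B2)

def vec(X):
    # Column-major vectorization, matching Z = (EY1^2, EY2Y1, EY1Y2, EY2^2)^T.
    return X.flatten(order="F")

def first_moment(t, M):
    return A @ M + Phi                   # equation (3.7)

def second_moment(t, Z, M_of_t):
    M = M_of_t(t)
    T = vec(np.outer(Phi, M) + np.outer(M, Phi)) \
        + BB @ vec(np.outer(M, m) + np.outer(m, M) + np.outer(m, m))
    return S @ Z + T                     # equation (3.8)

# Pulse perturbation u at time 0: M(0) = u and Z(0) = vec(u u^T).
u = np.array([5.0, -10.0])
t_eval = np.linspace(0.0, 10.0, 2001)
solM = solve_ivp(first_moment, (0.0, 10.0), u,
                 t_eval=t_eval, dense_output=True, rtol=1e-8)
solZ = solve_ivp(second_moment, (0.0, 10.0), vec(np.outer(u, u)),
                 t_eval=t_eval, args=(solM.sol,), rtol=1e-8)

# W(t): mean square distance of the linearized solution from m, read here as
# E[Y1^2] + E[Y2^2] (one reading of the definition in Section 2.4).
W = solZ.y[0] + solZ.y[3]
```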
As a simple illustration of how environmental noise and microbial interactions combine to drive the rate of return of a microbial community to the equilibrium state after a pulse perturbation, we look at an SgLV model (3.1) of two competitive species. Here, we assume that the two species are in direct competition with each other, so that the interaction strengths \beta_{12} and \beta_{21} are negative.
In each panel of Figure 3, two trajectories of the system (3.1) are produced, in which the trajectories before time zero represent the equilibrium states of the two species and the trajectories after time zero are the recovery dynamics of the two species following a pulse perturbation at time zero. The difference between Figure 3A and Figure 3C is due to the difference in the interaction strength \beta_{12} , which measures the strength of interspecific competition of Species 2 on Species 1. In Figure 3A, \beta_{12} is twice as large as in Figure 3C, while the noise intensities are the same in both plots. In contrast, the difference between Figure 3A and Figure 3B is caused by the difference in the noise intensities of two white noises: In Figure 3B, the noise intensities of \frac{dW_1}{dt} and \frac{dW_2}{dt} are five times those in Figure 3A, while the interaction strengths between species are the same, leading to greater fluctuations in the species' abundances. Figure 3D differs from Figure 3A in that the noise intensities increase by a factor of 5 and the interaction strength \beta_{12} decreases by a factor of 2.
This example illustrates resilience in a pulse-perturbed stochastic system. When comparing Figure 3A and Figure 3C, we observe that decreasing the interaction strength \beta_{12} reduces the return time of the system to the equilibrium state after a pulse perturbation; that is, weaker competition between the two species increases the resilience of the system under the same environmental regime. When comparing Figure 3A and Figure 3B, we see that increasing the noise intensities decreases the time required for the recovery trajectories to return to the equilibrium state. In this example, it appears that increasing environmental noise actually increases stability, which contrasts with the intuition that increasing noise destabilizes the system [21,22,23]. Figure 3D shows the combined effect of environmental noise and interspecies interactions on the resilience of a stochastic system. In the next subsection, we use our theoretical framework to test our observations in Figure 3.
To demonstrate the usefulness of the theoretical framework developed above, we apply the procedure for calculating the resilience of an SgLV model (described in Figure 2) to the simulated data in Figure 3. Looking at Figure 3, the recovery trajectories of the two species in Panel A return to an equilibrium state at time 4.2 after a pulse perturbation at time zero. Reducing the interaction strength \beta_{12} by a factor of 2 relative to Panel A, Panel C shows that the return time to the equilibrium state drops to approximately 2.4 after the same pulse perturbation at time zero. This means that the microbial community in Panel C is more resilient than that in Panel A. Increasing the noise intensities five times compared with Panel A, the return time to the equilibrium state in Panel B decreases to approximately 3.6 under the same perturbation as in Panel A. In other words, the microbial community in Panel B is more resilient than that in Panel A, which shows that environmental noise influences the resilience of the microbial community. Panel D, in which the interaction strength \beta_{12} is halved and the noise intensities are increased five times, shows the fastest return time to the equilibrium state among the four communities, approximately 1.8 .

Below, we calculate the four resilience measures for each panel of Figure 3 to confirm our observation that the resilience of the microbial community is driven by environmental noise and to disentangle the sources of the effects of environmental fluctuations on the resilience of a microbiome system. To calculate the resilience measures, the first step is to linearize the stochastic system (3.1) at its stationary distribution \pi^* , which yields the linearized stochastic system (3.5); all coefficients \Phi_i , \gamma_{ij} , and m_i ( i, j = 1, 2 ) are computed and given in Table 1. Note that (m_1, m_2)^T is the expectation of the stationary distribution \pi^* . Figures 4A, 4B, 5A, and 5B show that the stochastic dynamics of both the original stochastic system (3.1) and the linearized stochastic system (3.5) converge to the equilibrium state characterized by the stationary distribution \pi^* following the same pulse perturbation at time 0 in the four scenarios corresponding to the four parameter sets in Table 1. In Figure 4A, the trajectories of the linearized system approach and merge into those of the original system between time 4 and 4.5 , while in Figure 4B the merging occurs between 3 and 3.5 . In Figure 5A, the merging time of the trajectories of the two systems is between 2.5 and 3, and the fastest merging time, between 1.5 and 2, is shown in Figure 5B. Thus, the patterns of the return time to the equilibrium state are the same in the original system and in the linearized system under the same pulse perturbation, even though the trajectories of the two systems are quite different before the merging time. Therefore, we can use the much simpler linearized system (3.5) to investigate the resilience of a microbial community whose temporal dynamics are captured by the original system (3.1).
Parameter set 1 | Parameter set 2 | Parameter set 3 | Parameter set 4 | |
m_1 | 38.7470 | 38.8413 | 84.4804 | 84.4508 |
m_2 | 135.3518 | 134.9853 | 118.2069 | 117.8869 |
\Phi_{1} | 0.0813 | 0.3254 | 0.2431 | 0.7731 |
\Phi_{2} | 0.4263 | 1.6171 | 0.3562 | 1.3958 |
\gamma_{11} | -1.9353 | -1.9337 | -4.2211 | -4.2134 |
\gamma_{12} | -1.1624 | -1.1652 | -1.2672 | -1.2668 |
\gamma_{21} | -1.3535 | -1.3499 | -1.1821 | -1.1789 |
\gamma_{22} | -3.6062 | -3.5876 | -3.1492 | -3.1318 |
Next, we transform the linearized system (3.5) into the two linear ODE systems (3.7) and (3.8) by finding, respectively, the first and second moments of its solution and then using vectorization from linear algebra. The four resilience measures \mathcal{R}_t^{\text{ins}} , \mathcal{R}_t^{\text{ave}} , \mathcal{R}_c , and \mathcal{R}_{\infty} are then calculated on the basis of W(t) , the mean square difference between the solution of the linearized system (3.5) and the expectation of the stationary distribution \pi^* , together with the mean square stability matrix S . Note that the mean square stability matrix S and W(t) can be calculated from the linearized coefficients shown in Table 1 together with the noise intensities \tau_{ij} . The last four panels of Figure 4 and Figure 5 show the dynamics of the four resilience measures over time in the four scenarios above.
The two resilience measures {\mathcal{R}}_t^{\text{ins}} and {\mathcal{R}}_t^{\text{ave}} , which are deterministic functions of time, help us track the short-term behavior of \ln W(t) . When {\mathcal{R}}_t^{\text{ins}} > 0 , we have \frac{d}{dt}\ln W(t)^{1/2} < 0 , so \ln W(t) is decreasing toward its equilibrium value at time t ; in other words, the solution \widetilde X(t) of the linearized system (3.5) moves closer to the equilibrium state \pi^* at time t . By the same argument, when {\mathcal{R}}_t^{\text{ins}} < 0 , \ln W(t) is increasing, and hence the solution \widetilde X(t) moves away from the equilibrium state \pi^* at time t . In terms of {\mathcal{R}}_t^{\text{ave}} , if {\mathcal{R}}_t^{\text{ave}} > 0 , then \ln W(t) < \ln W(0) , which means that the solution \widetilde X(t) has come closer to the equilibrium state at time t than it was at the end of the perturbation at time 0; if {\mathcal{R}}_t^{\text{ave}} < 0 , then the solution \widetilde X(t) has moved farther from the equilibrium state since the end of the perturbation.

Since the deterministic function {\mathcal{R}}_t^{\text{ins}} measures the rate of change of \ln W(t) , we can use it to track when \ln W(t) returns to its equilibrium value after the perturbation is applied. The time when \ln W(t) reaches its equilibrium value (which is a constant) is the time when {\mathcal{R}}_t^{\text{ins}} crosses 0; therefore, we can estimate the return time of the solution \widetilde X(t) to the equilibrium state \pi^* as the first time {\mathcal{R}}_t^{\text{ins}} crosses 0. In the last four panels of Figure 4, the return time to the equilibrium state decreases from 4.45 to 3.08 when the noise intensities are increased five times. The same decreasing pattern in the return time occurs in the last four panels of Figure 5, where the return time goes down from 2.75 to 1.9 . This pattern can be explained by the physical meaning of W(t) . By our definition in Section 2.4, W(t) is the sum of the variances of the components of the solution of the linearized system (3.5), which measures how far the solution of the linearized system (3.5) is from the equilibrium state \pi^* . Higher noise gives the equilibrium state \pi^* a higher variance, so less recovery is required following a perturbation to return to \pi^* . Consequently, the return to the equilibrium state \pi^* is actually faster in the system with high noise than in the system with low noise.
The two resilience measures {\mathcal{R}}_c and {\mathcal{R}}_{\infty} , which are constants, can be used to predict the resilience of the stochastic system (3.1) by looking at the gap {\mathcal{R}}_c - {\mathcal{R}}_{\infty} and how fast the average return rate {\mathcal{R}}_t^{\text{ave}} enters this gap. In Table 2, when we increase the noise five times, the gap {\mathcal{R}}_c-{\mathcal{R}}_{\infty} decreases from 1.2588 (parameter Set 1: low noise, strong interaction) to 1.2508 (parameter Set 2: high noise, strong interaction). From Panels C and D in Figure 4, we also observe that the time at which the average return rate {\mathcal{R}}_t^{\text{ave}} moves into the gap between {\mathcal{R}}_c and {\mathcal{R}}_{\infty} is shorter when the noise increases. The same pattern appears as we move from parameter Set 3 (low noise, weak interaction) to parameter Set 4 (high noise, weak interaction). This pattern, in which increasing noise shrinks the gap and shortens the time needed for {\mathcal{R}}_t^{\text{ave}} to enter it, can be explained by the mathematical meanings of the two resilience measures {\mathcal{R}}_c and {\mathcal{R}}_{\infty} . First, the quantity {\mathcal{R}}_c measures how fast the covariance matrix P(t) , which is the solution of the system (2.19), converges to its equilibrium state. Specifically, {\mathcal{R}}_c equals minus one half of the real part of the largest eigenvalue of the mean square stability matrix S , so {\mathcal{R}}_c corresponds to the smallest convergence rate among the convergence rates of the elements of the covariance matrix P(t) . In contrast, {\mathcal{R}}_{\infty} can be regarded as the equilibrium value of the average return rate {\mathcal{R}}_t^{\text{ave}} , and so {\mathcal{R}}_{\infty} approximately equals the smallest convergence rate among the convergence rates of the diagonal elements of the covariance matrix P(t) . Therefore, the gap {\mathcal{R}}_c-{\mathcal{R}}_{\infty} is always positive when noise is present in the original system (3.1); without noise, the gap is zero, and the two resilience measures coincide. Our simulations (Figures 4 and 5) and calculations (Table 2) demonstrate that increasing noise decreases the gap {\mathcal{R}}_c-{\mathcal{R}}_{\infty} , which, in turn, decreases the time at which {\mathcal{R}}_t^{\text{ave}} enters the gap. This implies that increasing the level of noise increases the resilience of the system. A numerical sketch of how these four measures can be computed from W(t) and S is given after Table 2.
Low noise, strong interaction | High noise, strong interaction |
{\mathcal{R}}_c^1=1.2635 | {\mathcal{R}}_c^2=1.2539 |
{\mathcal{R}}_{\infty}^1=0.0047 | {\mathcal{R}}_{\infty}^2=0.0031 |
{\mathcal{R}}_c^1 - {\mathcal{R}}_{\infty}^1=1.2588 | {\mathcal{R}}_c^2 - {\mathcal{R}}_{\infty}^2=1.2508 |
Low noise, weak interaction | High noise, weak interaction |
{\mathcal{R}}_c^3=2.3488 | {\mathcal{R}}_c^4=2.3307 |
{\mathcal{R}}_{\infty}^3=0.0048 | {\mathcal{R}}_{\infty}^4=0.0032 |
{\mathcal{R}}_c^3 - {\mathcal{R}}_{\infty}^3=2.3440 | {\mathcal{R}}_c^4 - {\mathcal{R}}_{\infty}^4=2.3275 |
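Continuing the sketch given after (3.8), the four measures and the zero-crossing return time can be extracted from W(t) and S as follows. The normalizations of {\mathcal{R}}_t^{\text{ins}} , {\mathcal{R}}_t^{\text{ave}} , and {\mathcal{R}}_{\infty} below are assumptions consistent with the sign relations described above (their exact definitions appear in Section 2.4 and are not restated here); only the characterization of {\mathcal{R}}_c as minus one half of the real part of the dominant eigenvalue of S is stated explicitly in this subsection.

```python
import numpy as np

def resilience_measures(t, W, S):
    """Four resilience measures from W(t) and the mean square stability matrix S.

    The 1/2 factors below are assumed normalizations; check them against the
    definitions in Section 2.4 before quantitative use.
    """
    lnW = np.log(W)
    r_ins = -0.5 * np.gradient(lnW, t)                   # instantaneous return rate
    with np.errstate(divide="ignore", invalid="ignore"):
        r_ave = -(lnW - lnW[0]) / (2.0 * t)              # average return rate
    r_c = -0.5 * np.max(np.real(np.linalg.eigvals(S)))   # convergence rate
    r_inf = r_ave[-1]                                    # asymptotic resilience (long-time value)
    # Return time: first time at which R_ins crosses zero.
    crossings = np.where(np.diff(np.sign(r_ins)) != 0)[0]
    t_return = t[crossings[0] + 1] if crossings.size else np.nan
    return r_ins, r_ave, r_c, r_inf, t_return

# Example (reusing t_eval, W, and S from the sketch after equation (3.8)):
# r_ins, r_ave, r_c, r_inf, t_return = resilience_measures(t_eval, W, S)
```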
The concept of resilience has a long history in community ecology, but it is much less developed for microbial systems, even though resilience is at the center of many pressing questions in microbiome research. Microbiome communities differ from macroecological communities in at least two important ways that impact the modeling of ecological stability. First, while stability metrics in macroecological systems have been developed for first-order multivariate autoregressive models [21,22], the nonlinear generalized Lotka–Volterra model is more commonly used in microbiome studies. This kind of model has been applied to study time-series inference and stability in intestinal microbiota [7,38], to model primary succession of murine gut communities [35], and to capture the dynamics of complex microbial metapopulations in aquatic environments [36,37]. Second, and perhaps more importantly, microbial time-series data are extremely noisy. Some of this noise is measurement error due to the sampling and sequencing processes involved in data collection. These challenges have been documented in various microbial systems, from cheese communities [40] to human microbiome studies with dense temporal sampling [41,42]. Furthermore, noise arises in experimental studies of gut microbiota reconstitution and synthetic community design [43,44], as well as in investigations of microbiome dynamics at different timescales [45,46]. But there is also intrinsic noise in the system, with sources including fluctuations in temperature, different local concentrations of species, and slight variations among individuals.
Currently, existing definitions of resilience are based on the theory of ordinary differential equations or difference equations, so none of the existing stability metrics used to measure resilience is a function of noise. In this paper, we have built a general framework in which stochasticity is an integral part of the dynamics of microbial communities. Specifically, we have proposed an SgLV model to describe the temporal dynamics of a microbial community with white noise added to each interaction. We then developed a general theoretical framework for calculating four resilience measures based on the SgLV model. Each measure captures a different facet of stability, and together they give a picture of resilience.
All four resilience measures are inherently functions of noise, since they are defined in terms of the ODE system governing the variance matrix of the linearized SDE system. Therefore, all four measures respond to the magnitude of the noise. One surprising result to emerge from our framework is that the return to the original stationary state is actually faster in the system with high noise than in the system with low noise. This is primarily driven by the fact that the stationary distribution has higher variance in the system with high noise, so that less recovery is required following a perturbation to return to the stationary distribution.
The next step of this work is the possibility of applying this general framework to time-series data from real-world microbiomes. This application involves a fundamental question regarding the identifiability and estimability of the parameters within our proposed stochastic generalized Lotka–Volterra (SgLV) model. Parameter identifiability remains a significant challenge, particularly for complex hierarchical models involving microbiome dynamics. While traditional approaches to identifiability, such as those described in Browning 2020 [47] and in Remien 2021 [48], focus primarily on deterministic systems, fewer methodologies explicitly address stochastic models. To address this critical gap, methodologies such as data cloning, as described by Lele et al., 2010 [49], may be particularly valuable. Although originally designed for generalized linear mixed models, data cloning could be adapted to our SgLV framework to provide a practical graphical test for assessing parameter estimability. This approach involves generating multiple replicated datasets (clones) from microbiome time-series data and using Markov Chain Monte Carlo (MCMC) methods to sample from the resulting posterior distributions. By assessing whether the parameters have degenerate posterior distributions as the number of clones increases, we can determine whether each parameter, or specific functions of parameters such as the resilience metrics proposed in our study, is estimable. Such a rigorous assessment of estimability would significantly enhance the reliability of the scientific conclusions drawn from applying our framework to real-world microbiome data.
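As a schematic of the data-cloning diagnostic described above (our own sketch, not an implementation from [49]): the names sglv_log_lik, flat_log_prior, and the parameter vector in the commented example are hypothetical placeholders for a pseudo-likelihood of the SgLV model, its prior, and an initial guess.

```python
import numpy as np

rng = np.random.default_rng(0)

def cloned_log_post(theta, log_lik, log_prior, data, K):
    # Log-posterior with the likelihood raised to the K-th power (K clones).
    return K * log_lik(theta, data) + log_prior(theta)

def metropolis(log_target, theta0, n_iter=20000, step=0.05):
    # Plain random-walk Metropolis sampler (illustrative, untuned).
    theta = np.asarray(theta0, dtype=float).copy()
    lp = log_target(theta)
    samples = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        proposal = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_target(proposal)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = proposal, lp_prop
        samples[i] = theta
    return samples[n_iter // 2:]          # discard burn-in

def data_cloning_check(log_lik, log_prior, data, theta0, clones=(1, 2, 4, 8, 16)):
    # For an estimable parameter the posterior variance should shrink roughly
    # like 1/K; a variance that plateaus flags a non-estimable quantity.
    for K in clones:
        target = lambda th, K=K: cloned_log_post(th, log_lik, log_prior, data, K)
        post = metropolis(target, theta0)
        print(f"K = {K:2d}  posterior variances: {post.var(axis=0)}")

# Hypothetical usage, with sglv_log_lik an Euler-Maruyama pseudo-likelihood of
# the SgLV model fitted to a microbiome time series:
# data_cloning_check(sglv_log_lik, flat_log_prior, timeseries, theta_init)
```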
An important direction of future research is the application of our proposed stochastic generalized Lotka–Volterra (SgLV) framework and resilience measures to practical scenarios in microbiome studies. For example, predicting the recovery dynamics of microbial communities after perturbations such as single-dose antibiotic treatments represents a significant area of potential application. Recent studies, including those by Hayashi et al., 2024 [24] and Ponciano et al., 2024 [25], have demonstrated that microbial community compositions can shift between alternative stable states under environmental perturbations, influenced by both deterministic selection and stochastic drift. Using our developed resilience metrics, researchers could quantify the microbiome's rate and likelihood of returning to a "healthy" or pre-perturbation state, providing a valuable predictive tool for microbiome resilience. This approach could significantly aid experimental design, inform intervention strategies, and guide the interpretation of the microbiome's recovery patterns following ecological disturbances.
Nevertheless, applying our SgLV framework and resilience metrics to real-world microbiome data requires careful consideration of several microbial-data-specific challenges. Microbiome datasets frequently suffer from the undersampling of rare taxa, which can lead to inaccurate abundance estimates and consequently affect the reliability of the calculated resilience measures. Additionally, relative abundance data, as commonly obtained through sequencing technologies, complicate the accurate estimation of interaction parameters because these data lack information on the absolute abundances. As emphasized by Ponciano et al., 2024 [25], addressing these challenges might involve combining sequencing-based relative abundances with absolute abundance quantification methods, such as quantitative polymerase chain reaction (PCR), to improve the estimation accuracy of interaction coefficients. Additionally, sparse temporal sampling can obscure rapid microbial dynamics, potentially reducing the reliability of resilience predictions. Future studies might employ robust parameter estimation methods such as data cloning (Lele et al., 2010 [49]) to assess parameter estimability rigorously and thus enhance confidence in resilience measures calculated from real-world microbiome time-series data.
Recently, Flamant et al., 2020 [50] introduced an innovative approach using neural network solution bundles to approximate the solutions of ordinary differential equations, which could be extended effectively to stochastic generalized Lotka–Volterra (SgLV) models for microbiome studies. In this approach, neural networks are trained to approximate solution bundles of differential equations over a broad parameter space, enabling efficient exploration of model dynamics and rapid fitting to empirical data. Specifically, once trained, these neural network solution bundles could quickly approximate the trajectories predicted by the SgLV model, significantly reducing the computational overhead compared with traditional numerical solvers. When applied to real-world microbiome time-series data, this methodology could effectively address common data challenges. For instance, by efficiently handling sparse temporal sampling, neural network approximations could infer microbiome dynamics even when the data resolution is limited. Furthermore, the flexibility of neural networks might better accommodate biases arising from relative abundance data, potentially integrating with methods used to infer absolute abundances. Lastly, the approach could also help mitigate the challenges arising from undersampling rare taxa by efficiently exploring parameter spaces to identify robust solutions despite incomplete information on abundance. Thus, adopting neural network solution bundles presents a promising strategy for reliably applying the SgLV framework to complex microbiome datasets.
All authors declare that they have not used Artificial Intelligence (AI) tools in the creation of this article.
Research reported in this publication was supported by the National Institute of General Medical Sciences of the National Institutes of Health under Award Number P20GM104420.
All authors declare that there are no conflicts of interest regarding the publication of this manuscript.