The latest advances in engineering, science, and technology have contributed to an enormous generation of datasets. These vast datasets contain irrelevant, redundant, and noisy features that adversely impact classification performance in data mining and machine learning (ML) techniques. Feature selection (FS) is a preprocessing stage that reduces data dimensionality by choosing the most prominent features while improving classification performance. Since the datasets produced are often high-dimensional, the search space grows rapidly: the maximal number of potential feature subsets is 2^n for a dataset with n features. As n becomes large, exhaustively evaluating all subsets becomes computationally infeasible. Therefore, effective FS techniques are needed for large-scale classification problems. Many metaheuristic approaches have been utilized for FS to resolve the challenges of heuristic-based approaches. Recently, swarm-based algorithms have been proposed and demonstrated to perform effectively on FS tasks. Therefore, I developed a Hybrid Mutated Tunicate Swarm Algorithm for FS and Global Optimization (HMTSA-FSGO) technique. The proposed HMTSA-FSGO model mainly aims to eradicate unwanted features and choose the relevant ones that highly impact classifier results. In the HMTSA-FSGO model, the HMTSA is derived by integrating the standard TSA with two concepts: a dynamic s-best mutation operator for an optimal trade-off between exploration and exploitation, and a directional mutation rule for enhanced search space exploration. The HMTSA-FSGO model also includes a bidirectional long short-term memory (BiLSTM) classifier to examine the impact of the FS process. The rat swarm optimizer (RSO) is used to choose the hyperparameters that boost the BiLSTM network performance. The HMTSA-FSGO technique is evaluated through a series of simulation experiments.
The experimental validation of the HMTSA-FSGO technique showed superior outcomes of 93.01%, 97.39%, 61.59%, 99.15%, and 67.81% over diverse datasets.
Citation: Turki Althaqafi. Mathematical modeling of a Hybrid Mutated Tunicate Swarm Algorithm for Feature Selection and Global Optimization[J]. AIMS Mathematics, 2024, 9(9): 24336-24358. doi: 10.3934/math.20241184
In the ecological environment, there are large numbers of pests or nuisance animals, such as rodents and mosquitoes, that can spread diseases or destroy crops and livestock; these are called vermin. In addition, vermin have strong reproductive abilities, which makes it necessary to control them [1]. Usually, chemical drugs are used to poison vermin, but these pollute the environment and damage the ecosystem. Moreover, long-term use of chemical drugs makes vermin drug-resistant, so they cannot be controlled effectively over the long term. Ecological research shows that reducing the reproduction rate is an effective way to manage the over-abundance of a species. Currently, female sterilants are used to reduce the population size of vermin [2,3]. This is because, compared with chemical drugs, sterilants not only have the advantage of not polluting the environment, but also have the dual effects of causing sterility and death in vermin.
Since the reproductive ability of vermin is related to the age of individuals [4], we can use first-order partial differential equations coupled with integral equations to simulate the dynamics of vermin [5,6]. Along this line, many studies of population models have appeared. To name a few, see [7,8,9,10] for age-dependent models and [11,12,13,14,15] for size-structured models. Aniţa and Aniţa [7] considered two optimal harvesting problems for age-structured population dynamics with a logistic term and time-periodic control and vital rates. The control variable is the harvest effort, which depends only on time and appears only in the principal equation. Li et al. [9] studied the optimal control of an age-structured model describing mosquito plasticity. He et al. [10] investigated the optimal birth control problem for a nonlinear age-structured population model; the control variable is the birth rate and appears only in the boundary condition. He and Liu [11] and Liu and Liu [12] discussed optimal birth control problems for population models with size structure. Li et al. [13] investigated the optimal harvesting problem for a size-stage-structured population model, where the control variable is the harvest effort for the adult population.
However, only a small amount of work directly addresses contraception control problems for vermin with individual structure [16,17], and no work has yet considered the reproduction law of vermin in modeling. In this paper, we formulate a nonlinear hierarchical age-structured model to discuss the optimal contraception management problem for vermin. The so-called hierarchical structure of a population ranks individuals according to their age, body size, or any other structural variable that may affect their life rates [18]. Moreover, Gurney and Nisbet [18] pointed out that the hierarchy of ranks in a population is one of the important factors maintaining species persistence and ecological stability. Most studies on hierarchical population models mainly discuss the existence, uniqueness, and numerical approximation of solutions [19,20], and the asymptotic behavior of solutions [18,21]. However, studies on optimal control problems for hierarchical population models are rather rare. He and his collaborators have investigated optimal harvesting problems for hierarchical species [22,23]; the control variables are the harvest efforts and appear only in the principal equations.
Compared with known closely related ones, our model has the following features. Firstly, the control function is the amount of sterilant ingested by an individual, which depends on the individual's age. Secondly, the control variable appears not only in the principal equation (distributed control) but also in the boundary condition (boundary control). Thirdly, the reproduction rate of vermin depends not only on the age of individuals but also on the mechanism of encounters between males and females and an "internal environment". Fourthly, the mortality of vermin depends not only on the intrinsic and weighted total size of vermin but also on the influence of ingested sterilant. The model obtained in this paper is a nonlinear integro-partial differential equation with a global feedback boundary condition. Based on this model, this paper will investigate how to apply the female sterilant to minimize the final size of vermin when the control cost is the lowest.
In this paper, firstly, the existence of a unique non-negative solution is established based on Theorem 4.1 of [14]. More importantly, by decomposing the model into two subsystems, we show that the solution has a separable form. Then, the existence of an optimal policy is discussed via compactness and minimizing sequences. To show the compactness, we use the Fréchet-Kolmogorov theorem and its generalization. Next, the Euler-Lagrange optimality conditions are derived by employing adjoint systems and normal cone techniques. The high nonlinearity of the model makes it difficult to construct the adjoint system. For this reason, we give a new continuity result, namely, the continuity of the solution of an integro-partial differential equation with respect to its boundary distribution and inhomogeneous term.
Let us comment on the differences between our methods and results and those of closely related works. Aniţa and Aniţa [7] only gave the first-order necessary optimality conditions by using an adjoint system and normal cone techniques. Li et al. [13] only discussed the existence of the optimal solution for a harvest problem via a maximizing sequence. Hritonenko et al. [15] only gave the maximum principle for a size-structured model of forest and carbon sequestration management via an adjoint system. Kato [14] only discussed the existence of the optimal solution for a nonlinear size-structured harvest model by means of a maximizing sequence.
In this section, taking the reproduction law of vermin into consideration, we formulate a hierarchical age-structured model to discuss the optimal contraception management problem for vermin. Ecological studies show that the reproduction of vermin obeys the following laws [4]:
(1∘) The reproductive ability of vermin is related to the age of individuals;
(2∘) Most vermin are polygamous;
(3∘) A large proportion of females increases the reproductive intensity of vermin;
(4∘) The average number of offspring from middle-aged and elderly individuals is greater than that from individuals participating in reproduction for the first time.
To build our model, let p(a,t) denote the density of vermin with age a at time t, and a† be the maximum age of survival of vermin.
Firstly, we simulate the reproduction process. Note that most vermin are polygamous. As in [24], we should consider the mechanism of encounters between males and females when describing the birth process. Here we assume that the sex ratio is determined by fixed environmental or social factors, and ω(a) (0<ω(a)<1) is the proportion of females with age a. Then the number of males at time t is
\begin{align*} S(t) = \int_{0}^{a_{\dagger}}[1-\omega(a)]p(a, t)\, {\rm d}a. \end{align*} |
Further, we introduce the function B(a,S(t)) to represent the number of males encountered by a female with age a per unit time.
From (3∘) and (4∘), we see that middle-aged and elderly females play a dominant role in the reproduction of vermin. Thus, there exist dominant ranks of individuals among vermin [22]. As in [19], one can assume that the fertility of vermin is related to an "internal environment" E(p)(a,t), which is given by
\begin{align*} E(p)(a, t) = \alpha\int_{0}^{a}\omega(r)p(r, t)\, {\rm d}r+\int_{a}^{a_{\dagger}}\omega(r)p(r, t)\, {\rm d}r, \quad 0\leq\alpha < 1. \end{align*} |
The parameter α is the hierarchical coefficient, which is the weight of the lower ranks (i.e., ages smaller than a). From [21], α=0 (i.e., "contest competition") implies an absolute hierarchical structure, whereas α tending to 1 means that the effect of higher ranks is similar to that of lower ranks. Moreover, the limiting case α=1 (i.e., "scramble competition") means that there is no hierarchy. Hence, the fertility of vermin can be defined as \tilde{\beta}(a, t, E(p)(a, t)) , which denotes the average number of offspring produced per encounter of a male with a female of age a at time t.
Next, we simulate the sterilization and death processes. In order to inhibit the excessive reproduction of vermin, humans put female sterilant into the living environment of the vermin. For the convenience of modeling, we assume that the sterilant used at any time will be completely eaten by vermin (including males), and individuals of the same age will eat the same amount of sterilant at the same time [16]. Liu et al. [25] pointed out that sterilant can not only cause sterility in vermin but also kill them. Thus, when the amount of sterilant ingested by an individual with age a at time t is u(a,t), we can use δ1u(a,t) and δ2u(a,t), respectively, to describe the mortality and infertility rates caused by ingestion of sterilant. Hence, the density of fertile females with age a at time t can be written as [1−δ2u(a,t)]ω(a)p(a,t), and the total number of newborns produced at time t is given by
\begin{align*} \int_{0}^{a_{\dagger}}\tilde{\beta}\big(a, t, E(p)(a, t)\big)B\big(a, S(t)\big)[1-\delta_{2}u(a, t)]\omega(a)p(a, t)\, {\rm d}a. \end{align*} |
Next, we denote \beta\big(a, t, E(p)(a, t), S(t)\big)\triangleq\tilde{\beta}\big(a, t, E(p)(a, t)\big)B\big(a, S(t)\big)\omega(a) . In addition, the restriction of living space or habitat can lead to increased mortality. Thus, in addition to the natural mortality μ(a,t) and the external mortality δ1u(a,t), we assume that the vermin also suffer a mortality Φ(I(t)), which depends on the total size I(t) weighted by m(a). That is,
\begin{align*} I(t) = \int_{0}^{a_{\dagger}}m(a)p(a, t)\, {\rm d}a. \end{align*} |
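Numerically, S(t), I(t), and E(p)(a,t) are all weighted integrals of the density and can be approximated by simple quadrature on an age grid. The sketch below is an illustration only: the density p, the functions ω and m, and the value of α are hypothetical choices, not taken from the paper.

```python
import numpy as np

# Illustrative quadrature for the aggregate quantities S(t), I(t) and the
# hierarchical environment E(p)(a,t) at one fixed time t.  The density p,
# the weights w (omega) and m, and alpha below are all hypothetical.
a_dag = 10.0                       # maximum survival age a_dagger
a = np.linspace(0.0, a_dag, 201)   # age grid
da = a[1] - a[0]

p = np.exp(-0.3 * a)               # density of vermin by age (illustrative)
w = 0.5 * np.ones_like(a)          # proportion of females, 0 < w(a) < 1
m = np.ones_like(a)                # weight defining the total size I(t)
alpha = 0.4                        # hierarchical coefficient, 0 <= alpha < 1

def trap(y):
    """Composite trapezoidal rule over the full age grid."""
    return float(np.sum(0.5 * (y[1:] + y[:-1])) * da)

S = trap((1.0 - w) * p)            # number of males
I = trap(m * p)                    # weighted total size

# E(p)(a) = alpha * int_0^a w p dr + int_a^{a_dag} w p dr:
# cumulative trapezoid of w*p, then combine the two pieces.
wp = w * p
cum = np.concatenate(([0.0], np.cumsum(0.5 * (wp[1:] + wp[:-1]) * da)))
E = alpha * cum + (cum[-1] - cum)

# With alpha < 1, E decreases in a: higher-ranked (older) females see the
# lower ranks only with weight alpha.
assert np.all(np.diff(E) <= 1e-12)
```

At a = 0 the whole female mass counts fully (E = ∫ωp), while at a = a† only the α-weighted part remains, matching the "contest competition" interpretation of small α.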
Finally, we build our model. Motivated by the above discussions, we propose the following hierarchical age-structured model to describe the contraception control problem for vermin:
\begin{eqnarray} && \left\{ \begin{array}{ll} \frac{\partial p(a, t)}{\partial t}+\frac{\partial p(a, t)}{\partial a} = f(a, t)-[\mu(a, t)+\delta_{1}u(a, t)+\Phi(I(t))]p(a, t), \quad & (a, t)\in D, \\ p(0, t) = \int_{0}^{a_{\dagger}}\beta\big(a, t, E(p)(a, t), S(t)\big)[1-\delta_{2}u(a, t)]p(a, t)\, {\rm d}a, & t\in [0, T], \\ p(a, 0) = p_{0}(a), & a\in [0, a_{\dagger}), \\ I(t) = \int_{0}^{a_{\dagger}}m(a)p(a, t)\, {\rm d}a, \quad S(t) = \int_{0}^{a_{\dagger}}[1-\omega(a)]p(a, t)\, {\rm d}a, & t\in [0, T], \\ E(p)(a, t) = \alpha\int_{0}^{a}\omega(r)p(r, t)\, {\rm d}r+\int_{a}^{a_{\dagger}}\omega(r)p(r, t)\, {\rm d}r, & (a, t)\in D, \end{array} \right. \end{eqnarray} | (2.1)
where D=(0,a†)×(0,T) and T∈(0,+∞) is the control horizon, and f(a,t) is the rate of immigration. The control variable u\in\mathcal{U} = \{u\in L^{\infty}(D): 0\leq u(a, t)\leq L, \ {\rm a.e.}\ (a, t)\in D\} , where L>0 is a constant. Biologically, we have δ2L<1. Let p^u(a,t) be the solution of (2.1) corresponding to u\in\mathcal{U} . The optimization problem discussed in this paper is
\begin{align*} \min\limits_{u\in\mathcal{U}}\; J(u), \end{align*} | (2.2)
where
\begin{align*} J(u) = \int_{0}^{a_{\dagger}}p^{u}(a, T)\, {\rm d}a+\int_{0}^{T}\bigg[r(t)\int_{0}^{a_{\dagger}}u(a, t)p^{u}(a, t)\, {\rm d}a\bigg]{\rm d}t. \end{align*} |
Here the first integral represents the total number of vermin at time T, while r(t)\int_{0}^{a_{\dagger}}u(a, t)p^{u}(a, t)\, {\rm d}a is the cost of infertility control at time t. The purpose of this paper is to investigate how to apply the female sterilant so as to minimize the final size of vermin at the lowest control cost.
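To make the control problem concrete, the sketch below discretizes (2.1) by marching along characteristics (equal age and time steps, so aging is an exact shift) and evaluates J(u) for two dose schedules. Every model ingredient is a hypothetical illustration: the fertility β is taken as a function of age only (absorbing the encounter and sex-ratio factors), immigration f is set to zero, and all rates and the initial density are invented for the example.

```python
import numpy as np

# Characteristic-marching discretization of (2.1) with da = dt and f = 0.
# All rates and data below are hypothetical illustrations.
a_dag, T, n = 10.0, 5.0, 200
da = a_dag / n
dt = da
a = np.linspace(0.0, a_dag, n + 1)
nt = int(round(T / dt))

m = np.ones_like(a)                               # weight for I(t)
mu = 0.05 + 0.01 * a                              # natural mortality
beta = np.where((a > 2.0) & (a < 8.0), 0.8, 0.0)  # net fertility (age only)
d1, d2, L = 0.3, 0.5, 1.0                         # note d2 * L < 1, as required
Phi = lambda I: 0.01 * I                          # crowding mortality
r = lambda t: 1.0                                 # control cost weight

def trap(y):
    return float(np.sum(0.5 * (y[1:] + y[:-1])) * da)

def simulate(u):
    """u: (age, time) -> dose in [0, L].  Returns final density and J(u)."""
    p = np.exp(-0.2 * a)                          # initial density p0(a)
    cost = 0.0
    for k in range(nt):
        t = k * dt
        ua = np.minimum(np.array([u(ai, t) for ai in a]), L)
        cost += dt * r(t) * trap(ua * p)          # running control cost
        dead = mu + d1 * ua + Phi(trap(m * p))    # total mortality rate
        births = trap(beta * (1.0 - d2 * ua) * p)
        p_new = np.empty_like(p)
        p_new[1:] = p[:-1] * (1.0 - dt * dead[:-1])  # survivors age by da
        p_new[0] = births                         # renewal boundary condition
        p = np.maximum(p_new, 0.0)
    return p, trap(p) + cost                      # J(u): final size + cost

p_free, J_free = simulate(lambda a_, t_: 0.0)     # no sterilant
p_ctrl, J_ctrl = simulate(lambda a_, t_: 1.0)     # constant full dose
```

Minimizing J over U would then amount to searching over dose schedules u(a,t): the sterilant both raises mortality (through δ1u) and removes a fraction δ2u of females from reproduction, so the controlled final population is smaller than the uncontrolled one.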
After rewriting our model (see Section 3), we see that it is a special case of (4.1) in [14]. Note that this model contains some existing ones. Assume that β(a,t,E(p)(a,t),S(t))=β(a,t) and Φ(I(t))=0. If δ1=δ2=0, then our model reduces to model (2.1.1) in [6]; if δ1=1 and δ2=0, then model (2.1) becomes the harvest control model (3.1.1) in [6]. Assume that f(a,t)=0, m(a)=1, and β(a,t,E(p)(a,t),S(t))=β(a,t). Then we get model (2.2.15) in [6] by letting δ1=δ2=0, and the harvest control model (3.2.1) in [6] by letting δ1=1 and δ2=0. Moreover, if we take f(a,t)=0, m(a)=1, δ1=δ2=0, and β(a,t,E(p)(a,t),S(t))=β(a,t)ω(a,t), then model (2.1) extends the age-structured birth control model in [10].
Let R_{+}\triangleq[0, +\infty) , L_{+}^{1}\triangleq L^{1}(0, a_{\dagger};R_{+}) , and L_{+}^{\infty}\triangleq L^{\infty}(0, a_{\dagger};R_{+}) . In this paper, we assume that:
{\bf(A_1)} For each t\in[0, T] , \mu(\cdot, t)\in L_{\rm loc}^{1}[0, a_{\dagger}) and \int_{0}^{a_{\dagger}}\mu(a, t)\, {\rm d}a = +\infty .
{\bf(A_2)} \Phi:R_{+}\rightarrow R_{+} is bounded, that is, there is a constant \bar{\Phi} > 0 such that \Phi(s)\leq\bar{\Phi} for all s\in R_{+} . Moreover, there is an increasing function C_{\Phi}:R_{+}\rightarrow R_{+} such that
\begin{align*} |\Phi(s_{1})-\Phi(s_{2})|\leq C_{\Phi}(r)|s_{1}-s_{2}|, \quad 0\leq s_{1}, s_{2}\leq r. \end{align*} |
{\bf(A_3)} \beta:D\times R_{+}\times R_{+}\rightarrow R_{+} is measurable and 0\leq\beta(a, t, s, q)\leq\bar{\beta} for some \bar{\beta} > 0 . Moreover, there exists an increasing function C_{\beta}:R_{+}\rightarrow R_{+} such that for a\in[0, a_{\dagger}) and t\in[0, T]
\begin{align*} |\beta(a, t, s_{1}, q_{1})-\beta(a, t, s_{2}, q_{2})|\leq C_{\beta}(r)\big(|s_{1}-s_{2}|+|q_{1}-q_{2}|\big), \quad 0\leq s_{1}, s_{2}, q_{1}, q_{2}\leq r. \end{align*} |
{\bf(A_4)} f\in L^{\infty}(0, T;L_{+}^{1}) , p_{0}\in L_{+}^{1} , \omega\in L_{+}^{\infty} , m\in L_{+}^{\infty} , and 0\leq\omega(a)\leq\bar{\omega} < 1 , 0\leq m(a)\leq\bar{m} for any a\in[0, a_{\dagger}) . Here \bar{\omega} and \bar{m} are positive constants.
In this section, we show that (2.1) admits solutions in a separable form. First, we show that (2.1) is well posed. For any u\in\mathcal{U} and \phi\in L^{1} , let
\begin{align} G(t, \phi)(a) = f(a, t)-[\mu(a, t)+\delta_{1}u(a, t)+\Phi(I\phi)]\phi(a), \quad a\in[0, a_{\dagger}), \end{align} | (3.1)
\begin{align} F(t, \phi) = \int_{0}^{a_{\dagger}}\beta\big(a, t, E(\phi)(a), S\phi\big)[1-\delta_{2}u(a, t)]\phi(a)\, {\rm d}a, \end{align} | (3.2)
where I\phi = \int_{0}^{a_{\dagger}}m(a)\phi(a)\, {\rm d}a , E(\phi)(a) = \alpha\int_{0}^{a}\omega(r)\phi(r)\, {\rm d}r+\int_{a}^{a_{\dagger}}\omega(r)\phi(r)\, {\rm d}r for a\in[0, a_{\dagger}) , and S\phi = \int_{0}^{a_{\dagger}}[1-\omega(a)]\phi(a)\, {\rm d}a . Then (2.1) can be written in the following general form
\begin{align*} \left\{ \begin{array}{ll} \frac{\partial p(a, t)}{\partial t}+\frac{\partial p(a, t)}{\partial a} = G(t, p(\cdot, t))(a), \quad & (a, t)\in[0, a_{\dagger})\times[0, T], \\ p(0, t) = F(t, p(\cdot, t)), & t\in[0, T], \\ p(a, 0) = p_{0}(a), & a\in[0, a_{\dagger}). \end{array} \right. \end{align*} |
This is a special case of (4.1) in [14] with V(x, t) = 1 . Obviously, under {\bf(A_1)} – {\bf(A_4)} , G satisfies (G0) and (G1) in [14]. Now, we show that F satisfies (F0) and (F1) in [14]. For any \phi_{i}\in L^{1} with \|\phi_{i}\|_{L^{1}}\leq r ( i = 1, 2 ), we have
\begin{align*} |E(\phi_{i})(a)| = & \bigg|\alpha\int_{0}^{a}\omega(r)\phi_{i}(r)\, {\rm d}r+\int_{a}^{a_{\dagger}}\omega(r)\phi_{i}(r)\, {\rm d}r\bigg| \leq \bar{\omega}\int_{0}^{a_{\dagger}}|\phi_{i}(r)|\, {\rm d}r \leq \|\phi_{i}\|_{L^{1}}\leq r, \\ |S\phi_{i}| = & \bigg|\int_{0}^{a_{\dagger}}[1-\omega(a)]\phi_{i}(a)\, \mbox{d}a\bigg| \leq \int_{0}^{a_{\dagger}}|\phi_{i}(r)|\, {\rm d}r = \|\phi_{i}\|_{L^{1}}\leq r. \end{align*} |
Then, by {\bf(A_3)} , we get
\begin{align*} \big|\beta\big(a, t, &E(\phi_{1})(a), S\phi_{1}\big)-\beta\big(a, t, E(\phi_{2})(a), S\phi_{2}\big)\big| \\ \leq& C_{\beta}(r)\bigg|\alpha\int_{0}^{a}\omega(r)[\phi_{1}(r)-\phi_{2}(r)]\, {\rm d}r+\int_{a}^{a_{\dagger}}\omega(r)[\phi_{1}(r)-\phi_{2}(r)]\, {\rm d}r\bigg| \\ +& C_{\beta}(r)\bigg|\int_{0}^{a_{\dagger}}[1-\omega(a)][\phi_{1}(a)-\phi_{2}(a)]\, {\rm d}a\bigg| \\ \leq& C_{\beta}(r)\bigg\{\alpha\int_{0}^{a}|\omega(r)||\phi_{1}(r)-\phi_{2}(r)|\, {\rm d}r + \int_{a}^{a_{\dagger}}|\omega(r)||\phi_{1}(r)-\phi_{2}(r)|\, {\rm d}r\bigg\} \\ +& C_{\beta}(r)\int_{0}^{a_{\dagger}}[1-\omega(a)]|\phi_{1}(a)-\phi_{2}(a)|\, {\rm d}a \\ \leq& (\bar{\omega}+1)C_{\beta}(r)\int_{0}^{a_{\dagger}}|\phi_{1}(r)-\phi_{2}(r)|\, {\rm d}r = (\bar{\omega}+1)C_{\beta}(r)\|\phi_{1}-\phi_{2}\|_{L^{1}}. \end{align*} |
Hence,
\begin{align*} |F(t, \phi_{1})-F(t, \phi_{2})| \leq& \int_{0}^{a_{\dagger}}\big|\beta\big(a, t, E(\phi_{1})(a), S\phi_{1}\big)-\beta\big(a, t, E(\phi_{2})(a), S\phi_{2}\big)\big|\cdot|\phi_{1}(a)|\, \mbox{d}a \\ +& \int_{0}^{a_{\dagger}}\big|\beta\big(a, t, E(\phi_{2})(a), S\phi_{2}\big)\big|\cdot|\phi_{1}(a)-\phi_{2}(a)|\, \mbox{d}a \\ \leq& (\bar{\omega}+1)C_{\beta}(r)\|\phi_{1}-\phi_{2}\|_{L^{1}}\int_{0}^{a_{\dagger}}|\phi_{1}(a)|\, \mbox{d}a + \bar{\beta}\int_{0}^{a_{\dagger}}|\phi_{1}(a)-\phi_{2}(a)|\, \mbox{d}a \\ \leq& [(\bar{\omega}+1)C_{\beta}(r)r+\bar{\beta}]\|\phi_{1}-\phi_{2}\|_{L^{1}}. \end{align*} |
Let C_{F}(r) = (\bar{\omega}+1)C_{\beta}(r)r+\bar{\beta} . By {\bf(A_3)} , we know that C_{F} is an increasing function. Thus, (F0) in [14] holds. Clearly, F satisfies (F1) in [14]. In addition, take \omega_{1}(t) = \|f(\cdot, t)\|_{L^{1}} and \omega_{2}(t) = \bar{\beta} . Then all conditions of Theorem 4.1 in [14] are satisfied. Similar to the proof of Theorem 4.1 in [14], we have the following result.
Theorem 3.1. For each u\in\mathcal{U} , model (2.1) has a unique global solution p\in C([0, T]; L_{+}^{1}) , which satisfies
\begin{align} \|p(\cdot, t)\|_{L^{1}} \leq e^{\bar{\beta}t}\|p_{0}\|_{L^{1}}+\int_{0}^{t}e^{\bar{\beta}(t-s)}\|f(\cdot, s)\|_{L^{1}} \, {\rm d}s. \end{align} | (3.3) |
Next we consider the solution of model (2.1) in the following form
\begin{align} p(a, t) = y(t)\bar{p}(a, t). \end{align} | (3.4) |
From (2.1) and (3.4), we get two subsystems about \bar{p}(a, t) and y(t) as follows
\begin{eqnarray} && \left\{ \begin{array}{ll} \frac{\partial \bar{p}(a, t)}{\partial t}+\frac{\partial\bar{p}(a, t)}{\partial a} = \frac{f(a, t)}{y(t)}-[\mu(a, t)+\delta_{1}u(a, t)]\bar{p}(a, t), & (a, t)\in D, \\ \bar{p}(0, t) = \int_{0}^{a_{\dagger}}\beta\big(a, t, y(t)E(\bar{p})(a, t), y(t)\bar{S}(t)\big)\big[1-\delta_{2}u(a, t)]\bar{p}(a, t)\, \mbox{d}a, \quad & t\in [0, T], \\ \bar{S}(t) = \int_{0}^{a_{\dagger}}[1-\omega(a)]\bar{p}(a, t)\, {\rm d}a, & t\in [0, T], \\ E(\bar{p})(a, t) = \alpha \int_{0}^{a}\omega(r)\bar{p}(r, t)\, {\rm d}r+\int_{a}^{a_{\dagger}}\omega(r)\bar{p}(r, t)\, {\rm d}r, & (a, t)\in D, \\ \bar{p}(a, 0) = p_{0}(a), & a\in [0, a_{\dagger}), \end{array} \right. \quad \end{eqnarray} | (3.5)
\begin{eqnarray} && \left\{ \begin{array}{ll} y'(t)+\Phi\big(y(t)\bar{I}(t)\big)y(t) = 0, \quad\quad & t\in [0, T], \\ \bar{I}(t) = \int_{0}^{a_{\dagger}}m(a)\bar{p}(a, t)\, \mbox{d}a, & t\in [0, T], \\ y(0) = 1. \end{array}\right. \end{eqnarray} | (3.6) |
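The factorization (3.4) can be checked directly; a short computation, using the notation of (3.5) and (3.6), shows why the two subsystems reproduce (2.1):

```latex
% Substitute p(a,t) = y(t)\bar{p}(a,t) into the first equation of (2.1):
\begin{align*}
\partial_{t}p + \partial_{a}p
  &= y'(t)\,\bar{p} + y(t)\big(\partial_{t}\bar{p} + \partial_{a}\bar{p}\big)\\
  &= -\Phi\big(y(t)\bar{I}(t)\big)\,y(t)\,\bar{p}
     + y(t)\left(\frac{f(a,t)}{y(t)} - [\mu + \delta_{1}u]\,\bar{p}\right)
     \qquad \text{by (3.6) and (3.5)}\\
  &= f(a,t) - \big[\mu + \delta_{1}u + \Phi(I(t))\big]\,p,
\end{align*}
% since I(t) = \int_{0}^{a_{\dagger}} m(a)p(a,t)\,{\rm d}a = y(t)\bar{I}(t).
```

Similarly, multiplying the boundary condition in (3.5) by y(t) and using E(p) = y\,E(\bar{p}) and S(t) = y(t)\bar{S}(t) recovers the renewal condition in (2.1), while y(0) = 1 gives p(a, 0) = \bar{p}(a, 0) = p_{0}(a) .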
Definition 1. A pair of functions (\bar{p}(a, t), y(t)) with \bar{p}\in C([0, T]; L_{+}^{1}) and y\in C([0, T]; R_{+}) is said to be a solution of (3.5)–(3.6) if it satisfies
\begin{eqnarray} && \bar{p}(a, t) = \left\{ \begin{array}{ll} F_{y}(\tau, \bar{p}(\cdot, \tau))+\int_{\tau}^{t}G_{y}(s, \bar{p}(\cdot, s))(s-t+a)\, {\rm d}s, \quad\quad& a\leq t, \\ p_{0}(a-t)+ \int_{0}^{t}G_{y}(s, \bar{p}(\cdot, s))(s-t+a)\, {\rm d}s, \quad & a > t, \end{array} \right. \end{eqnarray} | (3.7) |
\begin{eqnarray} && y(t) = \exp\left\{- \int_{0}^{t}\Phi(y(s)\bar{I}(s))\, {\rm d}s\right\}, \end{eqnarray} | (3.8) |
where \tau = t-a , \bar{I}(s) = \int_{0}^{a_{\dagger}}m(a)\bar{p}(a, s)\, {\rm d}a , and
\begin{align*} &F_{y}(t, \phi) = \int_{0}^{a_{\dagger}}\beta\big(a, t, y(t) E(\phi)(a), y(t)S\phi\big)\big[1-\delta_{2}u(a, t)\big]\phi(a)\, \mbox{d}a, \\ &G_{y}(t, \phi)(a) = -\mu(a, t)\phi(a)-\delta_{1}u(a, t)\phi(a) +\frac{f(a, t)}{y(t)}, \; \; \; a\in[0, a_{\dagger}) \end{align*} |
for t\in[0, T] and \phi\in L^{1} . Here E(\phi)(a) = \alpha\int_{0}^{a}\omega(r)\phi(r)\, {\rm d}r+\int_{a}^{a_{\dagger}}\omega(r)\phi(r)\, {\rm d}r and S\phi = \int_{0}^{a_{\dagger}}[1-\omega(a)]\phi(a)\, \mbox{d}a .
Denote \theta\triangleq\exp\{-\bar{\Phi}T\} > 0 and define \mathcal{S} = \big\{h\in C[0, T]:\theta\leq h(t)\leq 1, \; t\in[0, T]\big\} . In addition, define an equivalent norm in C[0, T] by
\begin{align} \| h\|_{\lambda} = \sup\limits_{t\in[0, T]}e^{-\lambda t}|h(t)|\; \; {\rm for}\; h\in C[0, T] \end{align} | (3.9) |
for some \lambda > 0 . Clearly, (\mathcal{S}, \|\cdot\|_{\lambda}) is a Banach space.
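The weighted norm (3.9) is a Bielecki-type norm and is equivalent to the usual supremum norm on C[0, T] :

```latex
e^{-\lambda T}\sup\limits_{t\in[0, T]}|h(t)|
  \;\leq\; \|h\|_{\lambda} \;=\; \sup\limits_{t\in[0, T]}e^{-\lambda t}|h(t)|
  \;\leq\; \sup\limits_{t\in[0, T]}|h(t)|.
```

Hence completeness is preserved, while the factor e^{-\lambda t} allows a large \lambda to absorb the constants arising from integral estimates; this is exactly how the contraction constant C_{\Phi}(r_{1})r_{1}/\lambda below is made smaller than 1.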
For any y\in\mathcal{S} , by Theorem 3.1, system (3.5) has a unique non-negative solution \bar{p}^{y}\in C([0, T]; L_{+}^{1}) satisfying
\begin{align} \|\bar{p}^{y}(\cdot, t)\|_{L^{1}} \leq& e^{\bar{\beta}t}\|p_{0}\|_{L^{1}}+\int_{0}^{t}e^{\bar{\beta}(t-s)}\left\|\frac{f(\cdot, s)}{y(s)}\right\|_{L^{1}}\, {\rm d}s \\ \leq& e^{\bar{\beta}t}\|p_{0}\|_{L^{1}}+\int_{0}^{t}e^{\bar{\beta}(t-s)}\frac{\|f(\cdot, s)\|_{L^{1}}}{y(s)}\, {\rm d}s \\ \leq& e^{\bar{\beta}T}\left(\|p_{0}\|_{L^{1}}+\frac{\|f(\cdot, \cdot)\|_{L^{1}(D)}}{\theta}\right)\triangleq r_{0}. \end{align} | (3.10) |
Lemma 3.2. There is a positive constant M such that
\begin{align} \left\|\bar{p}^{y_{1}}(\cdot, t)-\bar{p}^{y_{2}}(\cdot, t)\right\|_{L^{1}} \leq& M\int_{0}^{t}\left| y_{1}(s)-y_{2}(s)\right|\, {\rm d}s, \end{align} | (3.11) |
\begin{align} e^{-\lambda t}\left\|\bar{p}^{y_{1}}(\cdot, t)-\bar{p}^{y_{2}}(\cdot, t)\right\|_{L^{1}} \leq& \frac{M}{\lambda}\| y_{1}-y_{2}\|_{\lambda} \end{align} | (3.12) |
for all t\in [0, T] and y_{1} , y_{2}\in\mathcal{S} .
Proof. Since (3.12) can be obtained directly from (3.11), we only need to prove (3.11). For any y\in\mathcal{S} , from (3.10) and 0 < \omega(a)\leq\bar{\omega} < 1 , it follows that
\begin{align} |E(\bar{p}^{y})(a, t)| = & \bigg|\alpha\int_{0}^{a}\omega(a)\bar{p}^{y}(a, t)\, {\rm d}a+\int_{a}^{a_{\dagger}}\omega(a)\bar{p}^{y}(a, t)\, {\rm d}a\bigg| \leq \bar{\omega}\int_{0}^{a_{\dagger}}|\bar{p}^{y}(a, t)|\, {\rm d}a \leq r_{0}, \end{align} | (3.13) |
\begin{align} |\bar{S}^{y}(t)| = & \bigg|\int_{0}^{a_{\dagger}}[1-\omega(a)]\bar{p}^{y}(a, t)\, {\rm d}a\bigg| \leq \int_{0}^{a_{\dagger}}|\bar{p}^{y}(a, t)|\, {\rm d}a \leq r_{0}, \end{align} | (3.14) |
and
\begin{align} |E(\bar{p}^{y_{1}})-E(\bar{p}^{y_{2}})|(a, t) = & \bigg|\alpha\int_{0}^{a}\omega(r)[\bar{p}^{y_{1}}-\bar{p}^{y_{2}}](r, t)\, {\rm d}r+\int_{a}^{a_{\dagger}}\omega(r)[\bar{p}^{y_{1}}-\bar{p}^{y_{2}}](r, t)\, {\rm d}r\bigg| \\ \leq& \alpha\int_{0}^{a}|\omega(r)||\bar{p}^{y_{1}}-\bar{p}^{y_{2}}|(r, t)\, {\rm d}r+\int_{a}^{a_{\dagger}}|\omega(r)||\bar{p}^{y_{1}}-\bar{p}^{y_{2}}|(r, t)\, {\rm d}r \\ \leq& \bar{\omega}\int_{0}^{a_{\dagger}}|\bar{p}^{y_{1}}-\bar{p}^{y_{2}}|(a, t)\, {\rm d}a \leq \|\bar{p}^{y_{1}}(\cdot, t)-\bar{p}^{y_{2}}(\cdot, t)\|_{L^{1}}, \end{align} | (3.15)
\begin{align} |\bar{S}^{y_{1}}(t)-\bar{S}^{y_{2}}(t)| = & \bigg|\int_{0}^{a_{\dagger}}[1-\omega(a)]\bar{p}^{y_{1}}(a, t)\, {\rm d}a-\int_{0}^{a_{\dagger}}[1-\omega(a)]\bar{p}^{y_{2}}(a, t)\, {\rm d}a\bigg| \\ \leq& \int_{0}^{a_{\dagger}}|\bar{p}^{y_{1}}(a, t)-\bar{p}^{y_{2}}(a, t)|\, {\rm d}a = \|\bar{p}^{y_{1}}(\cdot, t)-\bar{p}^{y_{2}}(\cdot, t)\|_{L^{1}}. \end{align} | (3.16) |
Moreover, using (3.7), we have
\begin{align} \|\bar{p}^{y_{1}}(\cdot, t)-\bar{p}^{y_{2}}(\cdot, t)\|_{L^{1}} = & \int_{0}^{t}|\bar{p}^{y_{1}}(a, t)-\bar{p}^{y_{2}}(a, t)|\, {\rm d}a+\int_{t}^{a_{\dagger}}|\bar{p}^{y_{1}}(a, t)-\bar{p}^{y_{2}}(a, t)|\, {\rm d}a \\ \leq& \int_{0}^{t}|F_{y_{1}}(\tau, \bar{p}^{y_{1}}(\cdot, \tau))-F_{y_{2}}(\tau, \bar{p}^{y_{2}}(\cdot, \tau))|\, {\rm d}a \\ +& \int_{0}^{t}\int_{\tau}^{t}|G_{y_{1}}(s, \bar{p}^{y_{1}}(\cdot, s))-G_{y_{2}}(s, \bar{p}^{y_{2}}(\cdot, s))|(s-t+a)\, {\rm d}s\, {\rm d}a \\ +& \int_{t}^{a_{\dagger}}\int_{0}^{t}|G_{y_{1}}(s, \bar{p}^{y_{1}}(\cdot, s))-G_{y_{2}}(s, \bar{p}^{y_{2}}(\cdot, s))|(s-t+a)\, {\rm d}s\, {\rm d}a \\ \triangleq& I_{1}+I_{2}+I_{3}. \end{align} | (3.17) |
It follows from Fubini's Theorem that
\begin{align*} I_{2}+I_{3} = & \int_{0}^{t}\int_{t-s}^{t}|G_{y_{1}}(s, \bar{p}^{y_{1}}(\cdot, s))-G_{y_{2}}(s, \bar{p}^{y_{2}}(\cdot, s))|(s-t+a)\, {\rm d}a\, {\rm d}s \\ +& \int_{0}^{t}\int_{t}^{a_{\dagger}}|G_{y_{1}}(s, \bar{p}^{y_{1}}(\cdot, s))-G_{y_{2}}(s, \bar{p}^{y_{2}}(\cdot, s))|(s-t+a)\, {\rm d}a\, {\rm d}s \\ = & \int_{0}^{t}\int_{t-s}^{a_{\dagger}}|G_{y_{1}}(s, \bar{p}^{y_{1}}(\cdot, s))-G_{y_{2}}(s, \bar{p}^{y_{2}}(\cdot, s))|(s-t+a)\, {\rm d}a\, {\rm d}s. \end{align*} |
Using the transformation s = t-a , we have s = t when a = 0 while s = 0 when a = t , and {\rm d}s = -{\rm d}a . Thus, by (3.13)–(3.16), we obtain
\begin{align} I_{1} = & \int_{0}^{t} \bigg| \int_{0}^{a_{\dagger}}\beta\big(a, s, y_{1}(s)E(\bar{p}^{y_{1}})(a, s), y_{1}(s)\bar{S}^{y_{1}}(s)\big)\big[1-\delta_{2}u(a, s)]\bar{p}^{y_{1}}(a, s)\, {\rm d}a \\ -& \int_{0}^{a_{\dagger}}\beta\big(a, s, y_{2}(s)E(\bar{p}^{y_{2}})(a, s), y_{2}(s)\bar{S}^{y_{2}}(s)\big)\big[1-\delta_{2}u(a, s)]\bar{p}^{y_{2}}(a, s)\, {\rm d}a \bigg|\, {\rm d}s \\ \leq& \bar{\beta}\int_{0}^{t}\int_{0}^{a_{\dagger}}|\bar{p}^{y_{1}}(a, s)-\bar{p}^{y_{2}}(a, s)|\, {\rm d}a\, {\rm d}s \\ +& C_{\beta}(r_{0})\int_{0}^{t}\int_{0}^{a_{\dagger}} \Big[|E(\bar{p}^{y_{1}})-E(\bar{p}^{y_{2}})|(a, s)+|\bar{S}^{y_{1}}(s)-\bar{S}^{y_{2}}(s)|\Big]\cdot|\bar{p}^{y_{2}}(a, s)|\, {\rm d}a\, {\rm d}s \\ +& C_{\beta}(r_{0})\int_{0}^{t}\int_{0}^{a_{\dagger}} \Big[E(\bar{p}^{y_{2}})(a, s)+\bar{S}^{y_{2}}(s)\Big]\cdot|y_{1}(s)-y_{2}(s)|\cdot|\bar{p}^{y_{2}}(a, s)|\, {\rm d}a\, {\rm d}s \\ \leq& \bar{\beta}\int_{0}^{t}\|\bar{p}^{y_{1}}(\cdot, s)-\bar{p}^{y_{2}}(\cdot, s)\|_{L^{1}}\, {\rm d}s + 2C_{\beta}(r_{0})r_{0}\int_{0}^{t}|y_{1}(s)-y_{2}(s)|\int_{0}^{a_{\dagger}}|\bar{p}^{y_{2}}(a, s)|\, {\rm d}a\, {\rm d}s \\ +& 2C_{\beta}(r_{0})\int_{0}^{t}\|\bar{p}^{y_{1}}(\cdot, s)-\bar{p}^{y_{2}}(\cdot, s)\|_{L^{1}}\int_{0}^{a_{\dagger}} |\bar{p}^{y_{2}}(a, s)|\, {\rm d}a\, {\rm d}s \\ \leq& (\bar{\beta}+2C_{\beta}(r_{0})r_{0})\int_{0}^{t}\|\bar{p}^{y_{1}}(\cdot, s)-\bar{p}^{y_{2}}(\cdot, s)\|_{L^{1}}\, {\rm d}s + 2C_{\beta}(r_{0})r_{0}^{2}\int_{0}^{t}|y_{1}(s)-y_{2}(s)|\, {\rm d}s. \end{align} | (3.18) |
Using the transformation \eta = s-t+a , we have \eta = 0 when a = t-s while \eta = s-t+a_{\dagger}\leq a_{\dagger} ( t-s\geq0 ) when a = a_{\dagger} , and {\rm d}\eta = {\rm d}a . Thus, we obtain
\begin{align} I_{2}+I_{3} \leq& \int_{0}^{t}\int_{0}^{a_{\dagger}}|G_{y_{1}}(s, \bar{p}^{y_{1}}(\cdot, s))-G_{y_{2}}(s, \bar{p}^{y_{2}}(\cdot, s))|(\eta)\, {\rm d}\eta\, {\rm d}s \\ \leq& \int_{0}^{t}\int_{0}^{a_{\dagger}}\bigg|-(\mu(\eta, s)+\delta_{1}u(\eta, s))\big(\bar{p}^{y_{1}}(\eta, s)-\bar{p}^{y_{2}}(\eta, s)\big) + \bigg(\frac{f(\eta, s)}{y_{1}(s)}-\frac{f(\eta, s)}{y_{2}(s)}\bigg)\bigg|\, {\rm d}\eta\, {\rm d}s \\ \leq& (\bar{\mu}+\delta_{1}L)\int_{0}^{t}\int_{0}^{a_{\dagger}}\big|\bar{p}^{y_{1}}(\eta, s)-\bar{p}^{y_{2}}(\eta, s)\big|\, {\rm d}\eta\, {\rm d}s + \int_{0}^{t}\int_{0}^{a_{\dagger}}\bigg|\frac{f(\eta, s)}{y_{1}(s)}-\frac{f(\eta, s)}{y_{2}(s)}\bigg|\, {\rm d}\eta\, {\rm d}s \\ = & (\bar{\mu}+\delta_{1}L)\int_{0}^{t}\int_{0}^{a_{\dagger}}\big|\bar{p}^{y_{1}}(\eta, s)-\bar{p}^{y_{2}}(\eta, s)\big|\, {\rm d}\eta\, {\rm d}s + \int_{0}^{t}\bigg|\frac{1}{y_{1}(s)}-\frac{1}{y_{2}(s)}\bigg|\|f(\cdot, s)\|_{L^{1}}\, {\rm d}s \\ \leq& (\bar{\mu}+\delta_{1}L)\int_{0}^{t}\|\bar{p}^{y_{1}}(\cdot, s)-\bar{p}^{y_{2}}(\cdot, s)\|_{L^{1}}\, {\rm d}s + \frac{\|f\|_{L^{\infty}(0, T;L^{1})}}{\theta^{2}}\int_{0}^{t}|y_{1}(s)-y_{2}(s)|\, {\rm d}s. \end{align} | (3.19) |
Here \bar{\mu} is the upper bound of \mu(a, t) . From (3.17)–(3.19), we have
\begin{align} \|\bar{p}^{y_{1}}(\cdot, t)-\bar{p}^{y_{2}}(\cdot, t)\|_{L^{1}} \leq& (\bar{\beta}+2C_{\beta}(r_{0})r_{0}+\bar{\mu}+\delta_{1}L)\int_{0}^{t}\|\bar{p}^{y_{1}}(\cdot, s)-\bar{p}^{y_{2}}(\cdot, s)\|_{L^{1}}\, {\rm d}s \\ &+ \bigg(2C_{\beta}(r_{0})r_{0}^{2}+\frac{\|f\|_{L^{\infty}(0, T;L^{1})}}{\theta^{2}}\bigg)\int_{0}^{t}|y_{1}(s)-y_{2}(s)|\, {\rm d}s. \end{align} | (3.20)
Then (3.11) follows from Gronwall's inequality.
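For the reader's convenience, we record the integral form of Gronwall's inequality invoked here and repeatedly below; this is a standard statement, given only as a sketch of the tool being applied:

```latex
% Gronwall's inequality (integral form): if \phi \geq 0 is continuous and,
% for some constant C \geq 0 and a non-negative non-decreasing g,
\phi(t)\leq C\int_{0}^{t}\phi(s)\, {\rm d}s + g(t), \qquad t\in[0, T],
% then
\phi(t)\leq g(t)\, e^{Ct}, \qquad t\in[0, T].
```

In (3.20) one takes \phi(t) = \|\bar{p}^{y_{1}}(\cdot, t)-\bar{p}^{y_{2}}(\cdot, t)\|_{L^{1}} and g(t) equal to the second integral term on the right-hand side, which is non-decreasing in t .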
Theorem 3.3. For any p_{0}\in L_{+}^{1} and u\in\mathcal{U} , (3.5)–(3.6) has a unique solution (\bar{p}^{y}, y)\in C([0, T]; L_{+}^{1}) \times C([0, T]; R_{+}) . In addition, p(a, t) = \bar{p}^{y}(a, t)y(t) is the unique solution of (2.1).
Proof. First, we show that for any y\in\mathcal{S} there is a unique \bar{y}\in\mathcal{S} such that
\begin{align} \bar{y}(t) = \exp\bigg\{-\int_{0}^{t}\Phi(\bar{I}^{y}(s)\bar{y}(s))\, {\rm d}s\bigg\}. \end{align} | (3.21) |
Here \bar{I}^{y}(t) = \int_{0}^{a_{\dagger}}m(a)\bar{p}^{y}(a, t)\, {\rm d}a . From (3.10), it is easy to show
\begin{align} |\bar{I}^{y}(t)| = \bigg|\int_{0}^{a_{\dagger}}m(a)\bar{p}^{y}(a, t)\, {\rm d}a\bigg| \leq \bar{m}\int_{0}^{a_{\dagger}}|\bar{p}^{y}(a, t)|\, {\rm d}a \leq \bar{m}r_{0} \triangleq r_{1}. \end{align} | (3.22) |
For fixed \bar{I}^{y} , define the operator \mathcal{A}: \mathcal{S}\rightarrow C[0, T] by
\begin{align*} [\mathcal{A}h](t) = \exp\bigg\{-\int_{0}^{t}\Phi(\bar{I}^{y}(s)h(s))\, {\rm d}s\bigg\}\; \; {\rm for}\; h\in\mathcal{S}. \end{align*} |
Clearly, [\mathcal{A}h](t)\geq\theta for each h\in\mathcal{S} . Thus, \mathcal{A} maps \mathcal{S} into itself. In addition, for any h_{1} , h_{2}\in\mathcal{S} , we have
\begin{align*} \|\mathcal{A}h_{1}-\mathcal{A}h_{2}\|_{\lambda} & = \sup\limits_{t\in[0, T]}\left\{e^{-\lambda t}\left| (\mathcal{A}h_{1})(t)-(\mathcal{A}h_{2})(t)\right|\right\} \\ &\leq \sup\limits_{t\in[0, T]}\bigg\{e^{-\lambda t}\int_{0}^{t}\left|\Phi(\bar{I}^{y}(s)h_{1}(s))-\Phi(\bar{I}^{y}(s)h_{2}(s))\right|\, {\rm d}s\bigg\} \\ &\leq \sup\limits_{t\in[0, T]}\bigg\{e^{-\lambda t}C_{\Phi}(r_{1})r_{1}\int_{0}^{t}e^{\lambda s}e^{-\lambda s}|h_{1}(s)-h_{2}(s)|\, {\rm d}s\bigg\} \\ &\leq \frac{C_{\Phi}(r_{1})r_{1}}{\lambda}\| h_{1}-h_{2}\|_{\lambda}. \end{align*} |
Taking \lambda > 0 large enough that \lambda > C_{\Phi}(r_{1})r_{1} , we see that \mathcal{A} is a contraction on (\mathcal{S}, \|\cdot\|_{\lambda}) . The Banach fixed point theorem then shows that \mathcal{A} has a unique fixed point \bar{y} in \mathcal{S} , and \bar{y} satisfies (3.21).
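The fixed-point step above is an instance of the Banach contraction principle, which in the form used here reads as follows (note that (\mathcal{S}, \|\cdot\|_{\lambda}) is complete, since \|\cdot\|_{\lambda} is equivalent to the sup norm on C[0, T] ):

```latex
% Banach contraction principle: if (X, d) is a complete metric space and
% \mathcal{A}: X \to X satisfies, for some constant 0 \leq k < 1,
d(\mathcal{A}x_{1}, \mathcal{A}x_{2})\leq k\, d(x_{1}, x_{2})\qquad \text{for all}\; x_{1}, x_{2}\in X,
% then \mathcal{A} has exactly one fixed point x^{*} = \mathcal{A}x^{*} in X.
% Here X = \mathcal{S}, d(h_{1}, h_{2}) = \|h_{1}-h_{2}\|_{\lambda}, and
% k = C_{\Phi}(r_{1})r_{1}/\lambda < 1.
```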
Next, based on (3.21), we define another operator \mathcal{B}:\mathcal{S}\rightarrow \mathcal{S} by
\begin{align} \mathcal{B}y = \bar{y}\; \; {\rm for}\; y\in\mathcal{S}. \end{align} | (3.23) |
For any y_{1} , y_{2}\in\mathcal{S} , it is easy to show that
\begin{align*} |\bar{I}^{y_{1}}(s)-\bar{I}^{y_{2}}(s)| = \bigg|\int_{0}^{a_{\dagger}}m(a)\bar{p}^{y_{1}}(a, s)\, {\rm d}a-\int_{0}^{a_{\dagger}}m(a)\bar{p}^{y_{2}}(a, s)\, {\rm d}a\bigg| \leq \bar{m}\|\bar{p}^{y_{1}}(\cdot, s)-\bar{p}^{y_{2}}(\cdot, s)\|_{L^{1}}. \end{align*} |
Then, together with (3.12), we obtain
\begin{align*} e^{-\lambda t}\int_{0}^{t}|\bar{I}^{y_{1}}(s)-\bar{I}^{y_{2}}(s)|\, {\rm d}s \leq \bar{m}e^{-\lambda t}\int_{0}^{t}\|\bar{p}^{y_{1}}(\cdot, s)-\bar{p}^{y_{2}}(\cdot, s)\|_{L^{1}}\, {\rm d}s \leq \frac{M\bar{m}}{\lambda^{2}}\|y_{1}-y_{2}\|_{\lambda}. \end{align*} |
Further, it follows from (3.21) and (3.22) that
\begin{align} e^{-\lambda t}&\big|\bar{y}_{1}(t)-\bar{y}_{2}(t)\big| \\ = & e^{-\lambda t}\big|(\mathcal{B}y_{1})(t)-(\mathcal{B}y_{2})(t)\big| \\ \leq& e^{-\lambda t}\bigg|\int_{0}^{t}\Phi(\bar{I}^{y_{1}}(s)\bar{y}_{1}(s))\, {\rm d}s-\int_{0}^{t}\Phi(\bar{I}^{y_{2}}(s)\bar{y}_{2}(s))\, {\rm d}s\bigg| \\ \leq& e^{-\lambda t}C_{\Phi}(r_{1})\int_{0}^{t}|\bar{I}^{y_{1}}(s)\bar{y}_{1}(s)-\bar{I}^{y_{2}}(s)\bar{y}_{2}(s)|\, {\rm d}s \\ \leq& \frac{C_{\Phi}(r_{1})M\bar{m}}{\lambda^{2}}\|y_{1}-y_{2}\|_{\lambda} + C_{\Phi}(r_{1})r_{1}\int_{0}^{t}e^{-\lambda s}|\bar{y}_{1}(s)-\bar{y}_{2}(s)|\, {\rm d}s. \end{align} | (3.24) |
Gronwall's inequality implies
\begin{align*} e^{-\lambda t}|\bar{y}_{1}(t)-\bar{y}_{2}(t)| \leq \frac{C_{\Phi}(r_{1})M\bar{m}e^{C_{\Phi}(r_{1})r_{1}T}}{\lambda^{2}}\|y_{1}-y_{2}\|_{\lambda}. \end{align*} |
Thus, \mathcal{B} is a contraction on (\mathcal{S}, \|\cdot\|_{\lambda}) by choosing \lambda > 0 such that C_{\Phi}(r_{1})M\bar{m}e^{C_{\Phi}(r_{1})r_{1}T}/\lambda^{2} < 1 . Let y be the unique fixed point of \mathcal{B} in \mathcal{S} . Then (\bar{p}, y) = (\bar{p}^y, y) is the solution to (3.5)–(3.6), which is non-negative and bounded.
Finally, from Theorem 3.1, model (2.1) has a unique solution. In addition, it is easy to verify that p(a, t) = \bar{p}^{y}(a, t)y(t) satisfies (2.1). Thus, p(a, t) = \bar{p}^{y}(a, t)y(t) is the unique solution to (2.1). In summary, model (2.1) has a unique non-negative solution p(a, t) , which is uniformly bounded.
Theorem 3.4. The solution p^{u} of model (2.1) is continuous in u\in\mathcal{U} . That is, for any u_{1} , u_{2}\in\mathcal{U} , there are positive constants K_{1} and K_{2} such that
\begin{align*} &\|p_{1}-p_{2}\|_{L^{\infty}(0, T;L^{1}(0, a_{\dagger}))} \leq K_{1}T\|u_{1}-u_{2}\|_{L^{\infty}(0, T;L^{1}(0, a_{\dagger}))}, \\ &\|p_{1}-p_{2}\|_{L^{1}(D)} \leq K_{2}T\|u_{1}-u_{2}\|_{L^{1}(D)}, \end{align*} |
where p_{i} is the solution of (2.1) with respect to u_{i}\; (i = 1, 2) .
Proof. By Theorem 3.3, one has p_{i}(a, t) = y_{i}(t)\bar{p}^{y_{i}}(a, t) , i = 1, 2 . Here (\bar{p}^{y_{i}}, y_{i}) is the solution of (3.5)–(3.6) with respect to u_{i}\in\mathcal{U} . From (3.10), it follows that
\begin{align} \|p_{1}(\cdot, t)-p_{2}(\cdot, t)\|_{L^{1}} \leq& \|\bar{p}^{y_{1}}(\cdot, t)-\bar{p}^{y_{2}}(\cdot, t)\|_{L^{1}}+r_{0}|y_{1}(t)-y_{2}(t)|. \end{align} | (3.25) |
Recall that |\bar{I}^{y_{i}}(s)|\leq r_{1} . Then, by y_{i}(t)\leq1 , {\bf(A_2)} , and (3.8), we obtain
\begin{align*} |y_{1}(t)-y_{2}(t)| \leq& \int_{0}^{t}|\Phi(y_{1}(s)\bar{I}^{y_{1}}(s))-\Phi(y_{2}(s)\bar{I}^{y_{2}}(s))|\, {\rm d}s \\ \leq& C_{\Phi}(r_{1})\int_{0}^{t}|y_{1}(s)\bar{I}^{y_{1}}(s)-y_{2}(s)\bar{I}^{y_{2}}(s)|\, {\rm d}s \\ \leq& C_{\Phi}(r_{1})r_{1}\int_{0}^{t}|y_{1}(s)-y_{2}(s)|\, {\rm d}s + C_{\Phi}(r_{1})\bar{m}\int_{0}^{t}\|\bar{p}^{y_{1}}(\cdot, s)-\bar{p}^{y_{2}}(\cdot, s)\|_{L^{1}}\, {\rm d}s. \end{align*} |
Applying Gronwall's inequality produces
\begin{align} |y_{1}(t)-y_{2}(t)| \leq M_{1}\int_{0}^{t}\|\bar{p}^{y_{1}}(\cdot, s)-\bar{p}^{y_{2}}(\cdot, s)\|_{L^{1}}\, {\rm d}s, \end{align} | (3.26) |
where M_{1} = C^{2}_{\Phi}(r_{1})r_{1}\bar{m}Te^{C_{\Phi}(r_{1})r_{1}T}+C_{\Phi}(r_{1})\bar{m} . Further, it can be seen from (3.7) that
\begin{align} \|\bar{p}^{y_{1}}(\cdot, t)-\bar{p}^{y_{2}}(\cdot, t)\|_{L^{1}} \leq& \int_{0}^{t}|F_{y_{1}}(\tau, \bar{p}^{y_{1}}(\cdot, \tau))-F_{y_{2}}(\tau, \bar{p}^{y_{2}}(\cdot, \tau))|\, {\rm d}a \\ +& \int_{0}^{t}\int_{\tau}^{t}|G_{y_{1}}(s, \bar{p}^{y_{1}}(\cdot, s))-G_{y_{2}}(s, \bar{p}^{y_{2}}(\cdot, s))|(s-t+a)\, {\rm d}s\, {\rm d}a \\ +& \int_{t}^{a_{\dagger}}\int_{0}^{t}|G_{y_{1}}(s, \bar{p}^{y_{1}}(\cdot, s))-G_{y_{2}}(s, \bar{p}^{y_{2}}(\cdot, s))|(s-t+a)\, {\rm d}s\, {\rm d}a \\ \triangleq& I_{4}+I_{5}+I_{6}. \end{align} | (3.27) |
Arguing similarly as for I_{1} and I_{2}+I_{3} , respectively, we can show that
\begin{align} I_{4} \leq& \bar{\beta}\int_{0}^{t}\|\bar{p}^{y_{1}}(\cdot, s)-\bar{p}^{y_{2}}(\cdot, s)\|_{L^{1}}\, {\rm d}s + \delta_{2}\bar{\beta}\int_{0}^{t}\|\bar{p}^{y_{2}}(\cdot, s)\|_{L^{1}}\int_{0}^{a_{\dagger}}|u_{1}(a, s)-u_{2}(a, s)|\, {\rm d}a\, {\rm d}s \\ +& 2C_{\beta}(r_{0})\int_{0}^{t}\|\bar{p}^{y_{1}}(\cdot, s)-\bar{p}^{y_{2}}(\cdot, s)\|_{L^{1}}\int_{0}^{a_{\dagger}}|\bar{p}^{y_{2}}(a, s)|\, {\rm d}a\, {\rm d}s \\ +& 2C_{\beta}(r_{0})r_{0}\int_{0}^{t}|y_{1}(s)-y_{2}(s)|\int_{0}^{a_{\dagger}}|\bar{p}^{y_{2}}(a, s)|\, {\rm d}a\, {\rm d}s \\ \leq& (\bar{\beta}+2C_{\beta}(r_{0})r_{0})\int_{0}^{t}\|\bar{p}^{y_{1}}(\cdot, s)-\bar{p}^{y_{2}}(\cdot, s)\|_{L^{1}}\, {\rm d}s + 2C_{\beta}(r_{0})r_{0}^{2}\int_{0}^{t}|y_{1}(s)-y_{2}(s)|\, {\rm d}s \\ +& \delta_{2}\bar{\beta}r_{0}\int_{0}^{t}\|u_{1}(\cdot, s)-u_{2}(\cdot, s)\|_{L^{1}}\, {\rm d}s \end{align} | (3.28) |
and
\begin{align} I_{5}+I_{6} \leq& \int_{0}^{t}\int_{0}^{a_{\dagger}}\bigg| \bigg(\frac{f(\eta, s)}{y_{1}(s)}-\frac{f(\eta, s)}{y_{2}(s)}\bigg) - \mu(\eta, s)\big(\bar{p}^{y_{1}}(\eta, s)-\bar{p}^{y_{2}}(\eta, s)\big) \\ -& \delta_{1}\big(u_{1}(\eta, s)\bar{p}^{y_{1}}(\eta, s)-u_{2}(\eta, s)\bar{p}^{y_{2}}(\eta, s)\big) \bigg|\, {\rm d}\eta\, {\rm d}s \\ = & \int_{0}^{t}\int_{0}^{a_{\dagger}}\bigg| \bigg(\frac{f(\eta, s)}{y_{1}(s)}-\frac{f(\eta, s)}{y_{2}(s)}\bigg) - \mu(\eta, s)\big(\bar{p}^{y_{1}}(\eta, s)-\bar{p}^{y_{2}}(\eta, s)\big) \\ -& \delta_{1}u_{1}(\eta, s)\big(\bar{p}^{y_{1}}(\eta, s)-\bar{p}^{y_{2}}(\eta, s)\big) -\delta_{1}\big(u_{1}(\eta, s)-u_{2}(\eta, s)\big)\bar{p}^{y_{2}}(\eta, s) \bigg|\, {\rm d}\eta\, {\rm d}s \\ \leq& (\bar{\mu}+\delta_{1}L)\int_{0}^{t}\|\bar{p}^{y_{1}}(\cdot, s)-\bar{p}^{y_{2}}(\cdot, s)\|_{L^{1}}\, {\rm d}s + \delta_{1}r_{0}\int_{0}^{t}\|u_{1}(\cdot, s)-u_{2}(\cdot, s)\|_{L^{1}}\, {\rm d}s \\ +& \frac{\|f\|_{L^{\infty}(0, T;L^{1})}}{\theta^{2}}\int_{0}^{t}|y_{1}(s)-y_{2}(s)|\, {\rm d}s. \end{align} | (3.29) |
It follows from (3.27)–(3.29) that
\begin{align} \|\bar{p}^{y_{1}}(\cdot, t)-\bar{p}^{y_{2}}(\cdot, t)\|_{L^{1}} \leq& (\bar{\beta}+2C_{\beta}(r_{0})r_{0}+\bar{\mu}+\delta_{1}L)\int_{0}^{t}\|\bar{p}^{y_{1}}(\cdot, s)-\bar{p}^{y_{2}}(\cdot, s)\|_{L^{1}}\, {\rm d}s \\ +& \bigg(2C_{\beta}(r_{0})r_{0}^{2}+\frac{\|f\|_{L^{\infty}(0, T;L^{1})}}{\theta^{2}}\bigg)\int_{0}^{t}|y_{1}(s)-y_{2}(s)|\, {\rm d}s \\ +& (\delta_{1}r_{0}+\delta_{2}\bar{\beta}r_{0})\int_{0}^{t}\|u_{1}(\cdot, s)-u_{2}(\cdot, s)\|_{L^{1}}\, {\rm d}s. \end{align} | (3.30) |
This, together with (3.26), yields
\begin{align*} \|\bar{p}^{y_{1}}(\cdot, t)-\bar{p}^{y_{2}}(\cdot, t)\|_{L^{1}} \leq& M_{2}\int_{0}^{t}\|\bar{p}^{y_{1}}(\cdot, s)-\bar{p}^{y_{2}}(\cdot, s)\|_{L^{1}}\, {\rm d}s + (\delta_{1}r_{0}+\delta_{2}\bar{\beta}r_{0})\int_{0}^{t}\|u_{1}(\cdot, s)-u_{2}(\cdot, s)\|_{L^{1}}\, {\rm d}s, \end{align*} |
where M_{2} = (\bar{\beta}+2C_{\beta}(r_{0})r_{0}+\bar{\mu}+\delta_{1}L)+M_{1}T\big(2C_{\beta}(r_{0})r_{0}^{2}+\frac{\|f\|_{L^{\infty}(0, T; L^{1})}}{\theta^{2}}\big) . By Gronwall's inequality, we have
\begin{align} \|\bar{p}^{y_{1}}(\cdot, t)-\bar{p}^{y_{2}}(\cdot, t)\|_{L^{1}} \leq& M_{3}\int_{0}^{t}\|u_{1}(\cdot, s)-u_{2}(\cdot, s)\|_{L^{1}}\, {\rm d}s, \end{align} | (3.31) |
where M_{3} = (\delta_{1}r_{0}+\delta_{2}\bar{\beta}r_{0})(1+M_{2}Te^{M_{2}T}) . Substituting (3.26) and (3.31) into (3.25) yields
\begin{align} \|p_{1}(\cdot, t)-p_{2}(\cdot, t)\|_{L^{1}} \leq& \|\bar{p}^{y_{1}}(\cdot, t)-\bar{p}^{y_{2}}(\cdot, t)\|_{L^{1}}+r_{0}M_{1}\int_{0}^{t}\|\bar{p}^{y_{1}}(\cdot, s)-\bar{p}^{y_{2}}(\cdot, s)\|_{L^{1}}\, {\rm d}s \\ \leq& M_{3}(1+r_{0}M_{1}T)\int_{0}^{t}\|u_{1}(\cdot, s)-u_{2}(\cdot, s)\|_{L^{1}}\, {\rm d}s. \end{align} |
The conclusion then follows immediately from the above analysis.
The purpose of this section is to prove the existence of an optimal management policy. To this end, we first establish two lemmas on compactness.
Lemma 4.1. Let I^{u}(t) = \int_{0}^{a_{\dagger}}m(a)p^{u}(a, t)\, {\rm d}a and S^{u}(t) = \int_{0}^{a_{\dagger}}[1-\omega(a)]p^{u}(a, t)\, {\rm d}a . Then \{I^{u}:u\in\mathcal{U}\} and \{S^{u}:u\in\mathcal{U}\} are relatively compact sets in L^{2}(0, T) .
Proof. We only show that \{I^{u}:u\in\mathcal{U}\} is relatively compact in L^2(0, T) , as \{S^{u}:u\in\mathcal{U}\} can be dealt with similarly. For given \varepsilon > 0 sufficiently small, define
\begin{align*} I^{u, \varepsilon}(t) = \int_{0}^{a_{\dagger}-\varepsilon}m(a)p^{u}(a, t)\, {\rm d}a. \end{align*} |
Since p^{u} is uniformly bounded in u , there is a positive constant M_{T} such that
\begin{align*} |I^{u}(t)-I^{u, \varepsilon}(t)| = \int_{a_{\dagger}-\varepsilon}^{a_{\dagger}}m(a)p^{u}(a, t)\, {\rm d}a \leq \bar{m}M_{T}\varepsilon, \; \; \; \; \forall\; t\in[0, T], \; \; \; \forall\; u\in\mathcal{U}. \end{align*} |
Obviously, the relative compactness of \{I^{u, \varepsilon}:u\in\mathcal{U}\} in L^{2}(0, T) implies that the set \{I^{u}:u\in\mathcal{U}\} is also relatively compact in L^{2}(0, T) .
Now, by using the Fréchet-Kolmogorov Theorem [26], we show that \{I^{u, \varepsilon}:u\in\mathcal{U}\} is relatively compact in L^2(0, T) . For convenience, we set I^{u, \varepsilon}(t) = 0 if t < 0 or t > T .
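The three steps 1)–3) below verify precisely the hypotheses of the Fréchet-Kolmogorov theorem, which in the L^{2}(R) setting can be summarized as the following sketch:

```latex
% Frechet--Kolmogorov criterion: a family \mathcal{F}\subset L^{2}(R) is
% relatively compact in L^{2} provided
% (i)   uniform boundedness:
\sup\limits_{g\in\mathcal{F}}\|g\|_{L^{2}(R)} < +\infty;
% (ii)  uniform decay at infinity:
\lim\limits_{x\rightarrow+\infty}\int_{|s| > x}g^{2}(s)\, {\rm d}s = 0
\quad \text{uniformly in}\; g\in\mathcal{F};
% (iii) L^{2}-equicontinuity of translations:
\lim\limits_{t\rightarrow0}\int_{R}[g(s+t)-g(s)]^{2}\, {\rm d}s = 0
\quad \text{uniformly in}\; g\in\mathcal{F}.
```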
1) For each u\in\mathcal{U} , by Theorem 3.1, we have
\begin{align*} \sup\limits_{u\in\mathcal{U}}\|I^{u, \varepsilon}\| = & \sup\limits_{u\in\mathcal{U}}\bigg(\int_{R}[I^{u, \varepsilon}(t)]^{2}\, {\rm d}t\bigg)^{\frac{1}{2}} = \sup\limits_{u\in\mathcal{U}}\bigg(\int_{0}^{T}\left[\int_{0}^{a_{\dagger}-\varepsilon}m(a)p^{u}(a, t)\, {\rm d}a\right]^{2}\, {\rm d}t\bigg)^{\frac{1}{2}} \\ \leq& \sup\limits_{u\in\mathcal{U}}\bigg(\bar{m}^{2}\int_{0}^{T}\left[\|p^{u}(\cdot, t)\|_{L^{1}}\right]^{2}\, {\rm d}t\bigg)^{\frac{1}{2}} < +\infty. \end{align*} |
2) It is easy to verify that
\begin{align*} \lim\limits_{x\rightarrow +\infty}\int_{|s| > x} [I^{u, \varepsilon}(s)]^{2}\, {\rm d}s = 0. \end{align*} |
3) We need to show that \lim\limits_{t\rightarrow0}\int_{0}^{T}[I^{u, \varepsilon}(s+t)-I^{u, \varepsilon}(s)]^{2}\, {\rm d}s = 0 for any u\in\mathcal{U} . Note that
\begin{align*} \int_{0}^{T}\left[I^{u, \varepsilon}(s+t)-I^{u, \varepsilon}(s)\right]^{2}\, {\rm d}s = & \int_{0}^{T}\bigg[\int_{s}^{s+t}\frac{\, {\rm d}I^{u, \varepsilon}(r)}{\, {\rm d}r}\, {\rm d}r\bigg]^{2}\, {\rm d}s \\ \leq& |t|T\int_{0}^{T}\left(\frac{\, {\rm d}I^{u, \varepsilon}(r)}{\, {\rm d}r}\right)^{2}\, {\rm d}r. \end{align*} |
It suffices to show that \frac{{\rm d}I^{u, \varepsilon}(t)}{{\rm d}t} is uniformly bounded in u . Clearly,
\frac{{\rm d}I^{u, \varepsilon}(t)}{{\rm d}t} = \int_{0}^{a_{\dagger}-\varepsilon}m(a)\frac{\partial p^{u}(a, t)}{\partial t}\, {\rm d}a. |
Multiplying (2.1) by m(a) and integrating over (0, a_{\dagger}-\varepsilon) , we obtain
\begin{align*} \int_{0}^{a_{\dagger}-\varepsilon}m(a)\frac{\partial p^{u}(a, t)}{\partial t}{\rm d}a = & \int_{0}^{a_{\dagger}-\varepsilon}m(a)\Big\{f(a, t)-\big[\mu(a, t)+\delta_{1}u(a, t)+\Phi(I^{u}(t))\big]p^{u}(a, t)\Big\}\, {\rm d}a \\ -& \int_{0}^{a_{\dagger}-\varepsilon}m(a)\frac{\partial p^{u}(a, t)}{\partial a}\, {\rm d}a \\ \triangleq& I_{7}+I_{8}. \end{align*} |
By assumptions and Theorem 3.1, we know that I_{7} is uniformly bounded in u . For I_{8} , by the second equation of (2.1), we obtain
\begin{align*} I_{8} = & -m(a_{\dagger}-\varepsilon)p^{u}(a_{\dagger}-\varepsilon, t)+m(0)p^{u}(0, t)+\int_{0}^{a_{\dagger}-\varepsilon}m'(a)p^{u}(a, t)\, {\rm d}a \\ = & m(0)\int_{0}^{a_{\dagger}}\beta\big(a, t, E(p^{u})(a, t), S^{u}(t)\big)[1-\delta_{2}u(a, t)]p^{u}(a, t)\, {\rm d}a \\ +& \int_{0}^{a_{\dagger}-\varepsilon}m'(a)p^{u}(a, t)\, {\rm d}a-m(a_{\dagger}-\varepsilon)p^{u}(a_{\dagger}-\varepsilon, t). \end{align*} |
Similarly, I_{8} is also uniformly bounded in u . In summary, we have proved that \frac{{\rm d}I^{u, \varepsilon}(t)}{{\rm d}t} is uniformly bounded in u . Hence, we can obtain
\begin{align*} \lim\limits_{t\rightarrow0}\int_{0}^{T}[I^{u, \varepsilon}(s+t)-I^{u, \varepsilon}(s)]^{2}\, {\rm d}s = 0. \end{align*} |
Thus, by Fréchet-Kolmogorov Theorem, we know that \{I^{u, \varepsilon}:u\in\mathcal{U}\} is relatively compact in L^{2}(0, T) . The proof is complete.
Lemma 4.2. Let E(p^{u})(a, t) = \alpha\int_{0}^{a}\omega(r)p^{u}(r, t)\, {\rm d}r+\int_{a}^{a_{\dagger}}\omega(r)p^{u}(r, t)\, {\rm d}r . Then the set \{E(p^{u}):u\in\mathcal{U}\} is relatively compact in L^{2}(D) .
Proof. For given \varepsilon > 0 sufficiently small, define
\begin{align*} E^{\varepsilon}(p^{u})(a, t) = \alpha\int_{0}^{a}\omega(r)p^{u}(r, t)\, {\rm d}r+\int_{a}^{a_{\dagger}-\varepsilon}\omega(r)p^{u}(r, t)\, {\rm d}r, \; \; \; (a, t)\in D. \end{align*} |
With a similar discussion as in the proof of Lemma 4.1, we only need to show that \{E^{\varepsilon}(p^{u}):u\in\mathcal{U}\} is relatively compact in L^{2}(D) . We shall use the Fréchet-Kolmogorov Theorem (with S = R^{2} ) to prove this. For convenience, we extend E^{\varepsilon}(p^{u}) to R^{2} by defining E^{\varepsilon}(p^{u})(a, t) = 0 for (a, t)\in R^{2}\setminus D .
1) For any u\in\mathcal{U} , by Theorem 3.1, we have
\begin{align*} \sup\limits_{u\in\mathcal{U}}\|E^{\varepsilon}(p^{u})\| = & \sup\limits_{u\in\mathcal{U}}\bigg(\int_{R^{2}}[E^{\varepsilon}(p^{u})(a, t)]^{2}\, {\rm d}a\, {\rm d}t\bigg)^{\frac{1}{2}} \\ = & \sup\limits_{u\in\mathcal{U}}\bigg(\int_{0}^{T}\int_{0}^{a_{\dagger}}\left[\alpha\int_{0}^{a}\omega(r)p^{u}(r, t)\, {\rm d}r +\int_{a}^{a_{\dagger}-\varepsilon}\omega(r)p^{u}(r, t)\, {\rm d}r \right]^{2}\, {\rm d}a\, {\rm d}t\bigg)^{\frac{1}{2}} \\ \leq& \sup\limits_{u\in\mathcal{U}}\bigg((1+\alpha)^{2}\bar{\omega}^{2}\int_{0}^{T}\int_{0}^{a_{\dagger}}\left[\|p^{u}(\cdot, t)\|_{L^{1}}\right]^{2}\, {\rm d}a\, {\rm d}t\bigg)^{\frac{1}{2}} < +\infty. \end{align*} |
2) It is clear that
\begin{align*} \lim\limits_{x\rightarrow +\infty\atop y\rightarrow +\infty}\int_{|a| > x \atop |t| > y}[E^{\varepsilon}(p^{u})(a, t)]^{2}\, {\rm d}a\, {\rm d}t = 0. \end{align*} |
3) It remains to show that
\begin{align} \lim\limits_{\Delta a\rightarrow0\atop \Delta t\rightarrow0}\int_{0}^{T}\int_{0}^{a_{\dagger}}[E^{\varepsilon}(p^{u})(a+\Delta a, t+\Delta t) -E^{\varepsilon}(p^{u})(a, t)]^{2}\, {\rm d}a\, {\rm d}t = 0\quad \mbox{for any}\; u\in\mathcal{U}. \end{align} | (4.1) |
Obviously, we have
\begin{align*} &\int_{0}^{T}\int_{0}^{a_{\dagger}}[E^{\varepsilon}(p^{u})(a+\Delta a, t+\Delta t)-E^{\varepsilon}(p^{u})(a, t)]^{2}\, {\rm d}a\, {\rm d}t \\ = & \int_{0}^{T}\int_{0}^{a_{\dagger}}\bigg[\frac{\partial E^{\varepsilon}(p^{u})(a+\theta_{1}\Delta a, t+\Delta t)}{\partial a}\Delta a +\frac{\partial E^{\varepsilon}(p^{u})(a, t+\theta_{2}\Delta t)}{\partial t}\Delta t\bigg]^{2}\, {\rm d}a\, {\rm d}t, \end{align*} |
where \theta_{1} , \theta_{2}\in[0, 1] . To show (4.1), it suffices to establish the uniform boundedness of \frac{\partial E^{\varepsilon}(p^{u})(a, t)}{\partial a} and \frac{\partial E^{\varepsilon}(p^{u})(a, t)}{\partial t} with respect to u . Multiplying the first equation in model (2.1) by \omega(a) and integrating over (0, a_{\dagger}-\varepsilon) , we obtain
\begin{align*} \alpha\int_{0}^{a}&\omega(r)\bigg[\frac{\partial p^{u}(r, t)}{\partial t}+\frac{\partial p^{u}(r, t)}{\partial r}\bigg]\, {\rm d}r + \int_{a}^{a_{\dagger}-\varepsilon}\omega(r)\bigg[\frac{\partial p^{u}(r, t)}{\partial t}+\frac{\partial p^{u}(r, t)}{\partial r}\bigg]\, {\rm d}r \\ = & \alpha\int_{0}^{a}\omega(r)\Big[f(r, t)-[\mu(r, t)+\delta_{1}u(r, t)+\Phi(I^{u}(t))]p^{u}(r, t)\Big]\, {\rm d}r \\ +& \int_{a}^{a_{\dagger}-\varepsilon}\omega(r)\Big[f(r, t)-[\mu(r, t)+\delta_{1}u(r, t)+\Phi(I^{u}(t))]p^{u}(r, t)\Big]\, {\rm d}r. \end{align*} |
Thus,
\begin{align*} \frac{\partial E^{\varepsilon}(p^{u})(a, t)}{\partial t} = & \frac{\partial[\alpha\int_{0}^{a}\omega(r)p^{u}(r, t)\, {\rm d}r+\int_{a}^{a_{\dagger}-\varepsilon}\omega(r)p^{u}(r, t)\, {\rm d}r]}{\partial t} \\ = & \alpha\int_{0}^{a}\omega(r)\frac{\partial p^{u}(r, t)}{\partial t}\, {\rm d}r+\int_{a}^{a_{\dagger}-\varepsilon}\omega(r)\frac{\partial p^{u}(r, t)}{\partial t}\, {\rm d}r \\ = & \alpha\int_{0}^{a}\omega(r)\Big[f(r, t)-[\mu(r, t)+\delta_{1}u(r, t)+\Phi(I^{u}(t))]p^{u}(r, t)\Big]\, {\rm d}r \\ +& \int_{a}^{a_{\dagger}-\varepsilon}\omega(r)\Big[f(r, t)-[\mu(r, t)+\delta_{1}u(r, t)+\Phi(I^{u}(t))]p^{u}(r, t)\Big]\, {\rm d}r \\ -& \left[ \alpha\int_{0}^{a}\omega(r)\frac{\partial p^{u}(r, t)}{\partial r}\, {\rm d}r+\int_{a}^{a_{\dagger}-\varepsilon}\omega(r)\frac{\partial p^{u}(r, t)}{\partial r}\, {\rm d}r \right]. \end{align*} |
Denote I_{9}\triangleq- \big[\alpha\int_{0}^{a}\omega(r)\frac{\partial p^{u}(r, t)}{\partial r}\, {\rm d}r+\int_{a}^{a_{\dagger}-\varepsilon}\omega(r)\frac{\partial p^{u}(r, t)}{\partial r}\, {\rm d}r \big] . Then, using the second equation in (2.1), we can obtain
\begin{align*} I_{9} = & -\bigg[\alpha\omega(a)p^{u}(a, t)-\alpha\omega(0)p^{u}(0, t)-\alpha\int_{0}^{a}\omega'(r)p^{u}(r, t)\, {\rm d}r + \omega(a_{\dagger}-\varepsilon)p^{u}(a_{\dagger}-\varepsilon, t) \\ -&\omega(a)p^{u}(a, t)-\int_{a}^{a_{\dagger}-\varepsilon}\omega'(r)p^{u}(r, t)\, {\rm d}r\bigg] \\ = & (1-\alpha)\omega(a)p^{u}(a, t)+\alpha\omega(0)\int_{0}^{a_{\dagger}}\beta\big(a, t, E(p^{u})(a, t), S^{u}(t)\big)[1-\delta_{2}u(a, t)]p^{u}(a, t)\, {\rm d}a \\ +& \alpha\int_{0}^{a}\omega'(r)p^{u}(r, t)\, {\rm d}r+\int_{a}^{a_{\dagger}-\varepsilon}\omega'(r)p^{u}(r, t)\, {\rm d}r - \omega(a_{\dagger}-\varepsilon)p^{u}(a_{\dagger}-\varepsilon, t). \end{align*} |
Hence,
\begin{align*} &\frac{\partial E^{\varepsilon}(p^{u})(a, t)}{\partial t} \\ &\, \, \, = \alpha\int_{0}^{a}\bigg\{\omega(r)\Big[f(r, t)-[\mu(r, t)+\delta_{1}u(r, t)+\Phi(I^{u}(t))]p^{u}(r, t)\Big]+\omega'(r)p^{u}(r, t)\bigg\}\, {\rm d}r \\ &\, \, \, + \int_{a}^{a_{\dagger}-\varepsilon}\bigg\{\omega(r)\Big[f(r, t)-[\mu(r, t)+\delta_{1}u(r, t)+\Phi(I^{u}(t))]p^{u}(r, t)\Big]+\omega'(r)p^{u}(r, t)\bigg\}\, {\rm d}r \\ &\, \, \, + (1-\alpha)\omega(a)p^{u}(a, t)+\alpha\omega(0)\int_{0}^{a_{\dagger}}\beta\big(a, t, E(p^{u})(a, t), S^{u}(t)\big)[1-\delta_{2}u(a, t)]p^{u}(a, t)\, {\rm d}a \\ &\, \, \, - \omega(a_{\dagger}-\varepsilon)p^{u}(a_{\dagger}-\varepsilon, t). \end{align*} |
By assumptions and Theorem 3.1, \frac{\partial E^{\varepsilon}(p^{u})(a, t)}{\partial t} is uniformly bounded in u . On the other hand, we have
\begin{align*} \frac{\partial E^{\varepsilon}(p^{u})(a, t)}{\partial a} = & \alpha\omega(a)p^{u}(a, t)-\omega(a)p^{u}(a, t) = (\alpha-1)\omega(a)p^{u}(a, t). \end{align*} |
Thus, \frac{\partial E^{\varepsilon}(p^{u})(a, t)}{\partial a} is also uniformly bounded in u\in\mathcal{U} . Hence, we have (4.1).
By now, we have verified all conditions of Fréchet-Kolmogorov Theorem (with S = R^{2} ) and hence \{E^{\varepsilon}(p^{u}):u\in\mathcal{U}\} is relatively compact in L^{2}(D) . The proof is complete.
Theorem 4.3. There is at least one solution to the optimal management problem (2.1)–(2.2).
Proof. Let d = \inf\limits_{u\in\mathcal{U}}J(u) . For any u\in\mathcal{U} , by Theorem 3.1, we have
\begin{align*} 0 < J(u)\leq\|p^{u}(\cdot, T)\|_{L^{1}}+\bar{r}L\int_{0}^{T}\|p^{u}(\cdot, t)\|_{L^{1}}\, {\rm d}t < +\infty. \end{align*} |
Thus, d\in[0, +\infty) . For any n\geq 1 , according to the definition of d , there exists u_{n}\in\mathcal{U} such that
\begin{align*} d\leq J(u_{n}) < d+\frac{1}{n}. \end{align*} |
The boundedness of \{p^{u_{n}}:u_{n}\in\mathcal{U}\} implies that there is a subsequence of \{u_{n}\} , still denoted by \{u_{n}\} , such that
\begin{align} p^{u_{n}}\xrightarrow{\rm weakly} p^{*}\; \; \mbox{in}\; \; L^{2}(D)\; \; {\rm as}\; \; n\rightarrow +\infty. \end{align} | (4.2) |
By Lemmas 4.1 and 4.2, there exists a subsequence of \{u_{n}\} , still denoted by \{u_{n}\} , such that
\begin{align} I^{u_{n}}\rightarrow I^{*}, \; \; \; \; S^{u_{n}}\rightarrow S^{*}, \; \; \; \; E(p^{u_{n}})\rightarrow E(p^{*}), \end{align} | (4.3) |
\begin{align} I^{u_{n}}(t)\rightarrow I^{*}(t), \; \; \; \; S^{u_{n}}(t)\rightarrow S^{*}(t), \; \; \; \; E(p^{u_{n}})(a, t)\rightarrow E(p^{*})(a, t), \end{align} | (4.4) |
as n\rightarrow +\infty . Here I^{*}, S^{*}\in L^{2}(0, T) and E(p^{*})\in L^{2}(D) . Further, from (4.2)–(4.4), we can infer that
\begin{align*} &I^{*}(t) = \int_{0}^{a_{\dagger}}m(a)p^{*}(a, t)\, {\rm d}a, \; \; \; \; S^{*}(t) = \int_{0}^{a_{\dagger}}[1-\omega(a)]p^{*}(a, t)\, {\rm d}a, \; \; \; \; t\in[0, T]. \\ &E(p^{*})(a, t) = \alpha\int_{0}^{a}\omega(r)p^{*}(r, t)\, {\rm d}r+\int_{a}^{a_{\dagger}}\omega(r)p^{*}(r, t)\, {\rm d}r, \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; (a, t)\in D. \end{align*} |
Moreover, by Mazur's theorem, we can construct convex combinations of \{p^{u_{n}}\} as follows:
\begin{align} \tilde{p}_{n}(a, t) = \sum\limits_{i = n+1}^{k_{n}}\lambda_{i}^{n}p^{u_{i}}(a, t), \quad \lambda_{i}^{n}\geq 0, \quad \sum\limits_{i = n+1}^{k_{n}}\lambda_{i}^{n} = 1, \quad k_{n}\geq n+1, \end{align} | (4.5) |
such that
\begin{align} \tilde{p}_{n}\xrightarrow{\rm strongly} p^{*}\; \; \mbox{in}\; \; L^{2}(D)\; \; {\rm as}\; \; n\rightarrow +\infty. \end{align} | (4.6) |
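The passage from the weak convergence (4.2) to the strong convergence (4.6) rests on Mazur's theorem, which in the form used here can be sketched as:

```latex
% Mazur's theorem: if x_{n} \rightharpoonup x weakly in a Banach space X,
% then there exist convex combinations
\tilde{x}_{n} = \sum\limits_{i = n+1}^{k_{n}}\lambda_{i}^{n}x_{i}, \qquad
\lambda_{i}^{n}\geq 0, \qquad \sum\limits_{i = n+1}^{k_{n}}\lambda_{i}^{n} = 1,
% such that \|\tilde{x}_{n}-x\|_{X}\rightarrow 0 as n\rightarrow +\infty.
% Here X = L^{2}(D), x_{n} = p^{u_{n}}, and x = p^{*}.
```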
Now, define a new control sequence \{\tilde{u}_{n}\} as follows
\begin{equation} \tilde{u}_{n}(a, t) = \left\{ \begin{array}{ll} \frac{\sum\limits_{i = n+1}^{k_{n}}\lambda_{i}^{n}p^{u_{i}}(a, t)u_{i}(a, t)}{\sum\limits_{i = n+1}^{k_{n}}\lambda_{i}^{n}p^{u_{i}}(a, t)} \quad & \mbox{if}\sum\limits_{i = n+1}^{k_{n}}\lambda_{i}^{n}p^{u_{i}}(a, t)\neq 0, \\ 0 & \mbox{if}\sum\limits_{i = n+1}^{k_{n}}\lambda_{i}^{n}p^{u_{i}}(a, t) = 0. \end{array} \right. \end{equation} | (4.7) |
It is easy to verify that \tilde{u}_n\in\mathcal{U} . Since \{\tilde{u}_{n}\} is bounded, the weak compactness of bounded sets in L^{2}(D) implies that there is a subsequence of \{\tilde{u}_{n}\} , still denoted by \{\tilde{u}_{n}\} , such that
\begin{align*} \tilde{u}_{n}\xrightarrow{\rm weakly} u^{*}\; \; \mbox{in}\; \; L^{2}(D)\; \; {\rm as}\; \; n\rightarrow +\infty. \end{align*} |
From (2.1), (4.5) and (4.7), it follows that
\begin{align} \left\{ \begin{array}{ll} \frac{\partial\tilde{p}_{n}}{\partial t}+\frac{\partial\tilde{p}_{n}}{\partial a} = f(a, t)-[\mu(a, t)+\delta_{1}\tilde{u}_{n}(a, t)]\tilde{p}_{n}(a, t) - \sum\limits_{i = n+1}^{k_{n}}\lambda_{i}^{n}\Phi(I^{u_{i}}(t))p^{u_{i}}(a, t), \\ \tilde{p}_{n}(0, t) = \int_{0}^{a_{\dagger}}\sum\limits_{i = n+1}^{k_{n}}\lambda_{i}^{n} \beta\big(a, t, E(p^{u_{i}})(a, t), S^{u_{i}}(t)\big)[1-\delta_{2}u_{i}(a, t)]p^{u_{i}}(a, t)\, \mbox{d}a, \\ \tilde{p}_{n}(a, 0) = p_{0}(a), \end{array} \right. \end{align} | (4.8) |
where I^{u_{i}}(t) = \int_{0}^{a_{\dagger}}m(a)p^{u_{i}}(a, t)\, \mbox{d}a , E(p^{u_{i}})(a, t) = \alpha\int_{0}^{a}\omega(r)p^{u_{i}}(r, t)\, {\rm d}r+\int_{a}^{a_{\dagger}}\omega(r)p^{u_{i}}(r, t)\, {\rm d}r and S^{u_{i}}(t) = \int_{0}^{a_{\dagger}}[1-\omega(a)]p^{u_{i}}(a, t)\, {\rm d}a . From {\bf(A_2)} – {\bf(A_3)} and the boundedness of p^{u} , there is a positive constant M_{4} such that
\begin{align*} \bigg|\sum\limits_{i = n+1}^{k_{n}}&\lambda_{i}^{n}\beta\big(a, t, E(p^{u_{i}}), S^{u_{i}}\big)[1-\delta_{2}u_{i}]p^{u_{i}} - \beta\big(a, t, E(p^{*}), S^{\ast}\big)[1-\delta_{2}u^{\ast}]p^{\ast}\bigg| \\ \leq& \sum\limits_{i = n+1}^{k_{n}}\lambda_{i}^{n}\Big|\beta\big(a, t, E(p^{u_{i}}), S^{u_{i}}\big)-\beta\big(a, t, E(p^{*}), S^{\ast}\big)\Big|\cdot|p^{u_{i}}| \\ +& \bar{\beta}\delta_{2}\bigg|\sum\limits_{i = n+1}^{k_{n}}\lambda_{i}^{n}u_{i}p^{u_{i}} - u^{\ast}\sum\limits_{i = n+1}^{k_{n}}\lambda_{i}^{n}p^{u_{i}}\bigg| + \bar{\beta}\bigg|\sum\limits_{i = n+1}^{k_{n}}\lambda_{i}^{n}p^{u_{i}} - \sum\limits_{i = n+1}^{k_{n}}\lambda_{i}^{n}p^{\ast}\bigg| \\ \leq& M_{4}\sum\limits_{i = n+1}^{k_{n}}\lambda_{i}^{n}\Big[|E(p^{u_{i}})(a, t)-E(p^{*})(a, t)|+|S^{u_{i}}(t)-S^{\ast}(t)|\Big] \\ +& \bar{\beta}\delta_{2}|\tilde{u}_{n}(a, t)\tilde{p}_{n}(a, t)-u^{\ast}(a, t)\tilde{p}_{n}(a, t)| + \bar{\beta}|\tilde{p}_{n}(a, t)-p^{\ast}(a, t)|. \end{align*} |
By (4.4) and (4.6), we get
\begin{align*} \sum\limits_{i = n+1}^{k_{n}}\lambda_{i}^{n}\beta\big(a, t, E(p^{u_{i}})(a, t), S^{u_{i}}(t)\big)[1-\delta_{2}u_{i}(a, t)]p^{u_{i}} \rightarrow \beta\big(a, t, E(p^{*})(a, t), S^{\ast}(t)\big)[1-\delta_{2}u^{\ast}(a, t)]p^{\ast} \end{align*} |
as n\rightarrow +\infty . Similarly, we also get
\begin{align*} \sum\limits_{i = n+1}^{k_{n}}\lambda_{i}^{n}\Phi(I^{u_{i}}(t))p^{u_{i}}(a, t) \rightarrow \Phi(I^{\ast}(t))p^{\ast}(a, t)\; \; {\rm as}\; \; n\rightarrow +\infty. \end{align*} |
Hence, in the sense of weak solutions, we have p^{*}(a, t) = p^{u^{*}}(a, t) , I^{*}(t) = I^{u^{*}}(t) , S^{*}(t) = S^{u^{*}}(t) and E(p^{*})(a, t) = E(p^{u^{*}})(a, t) .
Finally, arguing similarly as in the proof of [16], we can show that u^{\ast}\in\mathcal{U} is an optimal policy for the management problem (2.2). This completes the proof.
In this section, we will establish the optimality conditions for the management problem (2.2). For any u\in\mathcal{U} , let \mathcal{T}_{\mathcal{U}}(u) and \mathcal{N}_{\mathcal{U}}(u) be, respectively, the tangent cone and normal cone of \mathcal{U} at the element u [27]. To show the optimality conditions, we need the following two lemmas.
Lemma 5.1. Assume that \eta_{0}\in L_{+}^{1} , \mu_{i}\in L^{\infty}(D)\; (i = 1, 2) , \beta_{j}\in L^{\infty}(D)\; (j = 1, 2, 3) , f_{n}\in L^{1}(D) , b_{n}\in L^{1}[0, T] , n\geq 1 . Let \eta_{n} be the solution of
\begin{align} \left\{ \begin{array}{ll} \frac{\partial\eta}{\partial t} + \frac{\partial\eta}{\partial a} = f_{n}(a, t)-\mu_{1}(a, t)\eta(a, t)-\mu_{2}(a, t)I(t), & (a, t)\in D, \\ \eta(0, t) = \int_{0}^{a_{\dagger}}[\beta_{1}(a, t)\eta(a, t)+\beta_{2}(a, t)S(t)+\beta_{3}(a, t)E(\eta)(a, t)]\, {\rm d}a+b_{n}(t), & t\in[0, T], \\ \eta(a, 0) = \eta_{0}(a), & a\in[0, a_{\dagger}), \\ I(t) = \int_{0}^{a_{\dagger}}m(a)\eta(a, t)\, {\rm d}a, \; \; \; \; S(t) = \int_{0}^{a_{\dagger}}[1-\omega(a)]\eta(a, t)\, {\rm d}a, & t\in[0, T], \\ E(\eta)(a, t) = \alpha\int_{0}^{a}\omega(r)\eta(r, t)\, {\rm d}r+\int_{a}^{a_{\dagger}}\omega(r)\eta(r, t)\, {\rm d}r, & (a, t)\in D. \end{array} \right. \end{align} | (5.1) |
If (f_{n}, b_{n})\rightarrow (f, b) in L^{1}(D)\times L^{1}[0, T] as n\rightarrow +\infty , then \eta_{n}\rightarrow \eta in L^{\infty}(0, T; L^{1}(0, a_{\dagger})) as n\rightarrow +\infty . Here \eta is the solution of (5.1) with respect to f_{n} = f and b_{n} = b .
Proof. Similar to the proof of [14], model (5.1) has a unique solution. Moreover, on the characteristic lines, the solution to (5.1) has the form
\begin{eqnarray} \eta_{n}(a, t) = \left\{ \begin{array}{ll} F(\tau, \eta_{n}(\cdot, \tau))+ \int_{\tau}^{t}G(s, \eta_{n}(\cdot, s))(s-t+a)\, {\rm d}s, & a\leq t, \\ \eta_{0}(a-t)+ \int_{0}^{t}G(s, \eta_{n}(\cdot, s))(s-t+a)\, {\rm d}s, \quad\quad & a > t, \end{array} \right. \end{eqnarray} |
where \tau = t-a and, for t\in[0, T] and \phi\in L^{1} ,
\begin{align*} &F(t, \phi) = \int_{0}^{a_{\dagger}}[\beta_{1}(a, t)\phi(a)+\beta_{2}(a, t)S\phi+\beta_{3}(a, t)E(\phi)(a)]\, \mbox{d}a+b_{n}(t), \\ &G(t, \phi)(a) = f_{n}(a, t)-\mu_{1}(a, t)\phi(a)-\mu_{2}(a, t)I\phi, \; \; \; \; \; \; \; \; \; \; \; \; \; \; a\in[0, a_{\dagger}). \end{align*} |
Here S\phi = \int_{0}^{a_{\dagger}}[1-\omega(a)]\phi(a)\, \mbox{d}a , E(\phi)(a) = \alpha\int_{0}^{a}\omega(r)\phi(r)\, {\rm d}r+\int_{a}^{a_{\dagger}}\omega(r)\phi(r)\, {\rm d}r , a\in[0, a_{\dagger}) and I\phi = \int_{0}^{a_{\dagger}}m(a)\phi(a)\, \mbox{d}a .
Let \eta_{n} and \eta be solutions of (5.1) with respect to (f_{n}, b_{n}) and (f, b) , respectively. Then we have
\begin{align} |S\eta_{n}-S\eta|(t) = & \bigg|\int_{0}^{a_{\dagger}}[1-\omega(a)]\eta_{n}(a, t)\, {\rm d}a-\int_{0}^{a_{\dagger}}[1-\omega(a)]\eta(a, t)\, {\rm d}a\bigg| \\ \leq& \int_{0}^{a_{\dagger}}|\eta_{n}(a, t)-\eta(a, t)|\, {\rm d}a = \|\eta_{n}(\cdot, t)-\eta(\cdot, t)\|_{L^{1}}, \end{align} | (5.2) |
\begin{align} |I\eta_{n}-I\eta|(t) = & \bigg|\int_{0}^{a_{\dagger}}m(a)\eta_{n}(a, t)\, {\rm d}a-\int_{0}^{a_{\dagger}}m(a)\eta(a, t)\, {\rm d}a\bigg| \\ \leq& \bar{m}\int_{0}^{a_{\dagger}}|\eta_{n}(a, t)-\eta(a, t)|\, {\rm d}a = \bar{m}\|\eta_{n}(\cdot, t)-\eta(\cdot, t)\|_{L^{1}}, \end{align} | (5.3) |
\begin{align} |E(\eta_{n})-E(\eta)|(a, t) = & \bigg|\alpha\int_{0}^{a}\omega(r)[\eta_{n}-\eta](r, t)\, {\rm d}r+\int_{a}^{a_{\dagger}}\omega(r)[\eta_{n}-\eta](r, t)\, {\rm d}r\bigg| \\ \leq& \alpha\int_{0}^{a}|\omega(r)|\cdot|\eta_{n}-\eta|(r, t)\, {\rm d}r+\int_{a}^{a_{\dagger}}|\omega(r)|\cdot|\eta_{n}-\eta|(r, t)\, {\rm d}r \\ \leq& \bar{\omega}\int_{0}^{a_{\dagger}}|\eta_{n}-\eta|(a, t)\, {\rm d}a \leq \|\eta_{n}(\cdot, t)-\eta(\cdot, t)\|_{L^{1}}. \end{align} | (5.4) |
Moreover,
\begin{align} \|\eta_{n}(\cdot, t)-\eta(\cdot, t)\|_{L^{1}} \leq& \int_{0}^{t}|F(\tau, \eta_{n}(\cdot, \tau))-F(\tau, \eta(\cdot, \tau))|\, {\rm d}a \\ +& \int_{0}^{t}\int_{\tau}^{t}|G(s, \eta_{n}(\cdot, s))-G(s, \eta(\cdot, s))|(s-t+a)\, {\rm d}s\, {\rm d}a \\ +& \int_{t}^{a_{\dagger}}\int_{0}^{t}|G(s, \eta_{n}(\cdot, s))-G(s, \eta(\cdot, s))|(s-t+a)\, {\rm d}s\, {\rm d}a \\ \triangleq& I_{10}+I_{11}+I_{12}. \end{align} | (5.5) |
Arguing similarly as for I_{1} and I_{2}+I_{3} , we can show that
\begin{align} I_{10} \leq& \int_{0}^{t}\int_{0}^{a_{\dagger}}|\beta_{1}(a, s)|\cdot|\eta_{n}(a, s)-\eta(a, s)|\, {\rm d}a\, {\rm d}s + \int_{0}^{t}\int_{0}^{a_{\dagger}}|\beta_{2}(a, s)|\cdot|S\eta_{n}-S\eta|(s)\, {\rm d}a\, {\rm d}s \\ +& \int_{0}^{t}\int_{0}^{a_{\dagger}}|\beta_{3}(a, s)|\cdot|E(\eta_{n})-E(\eta)|(a, s)\, {\rm d}a\, {\rm d}s + \int_{0}^{t}|b_{n}(s)-b(s)|\, {\rm d}s \\ \leq& \sum\limits_{i = 1}^{3}\|\beta_{i}\|_{L^{\infty}(D)}\int_{0}^{t}\|\eta_{n}(\cdot, s)-\eta(\cdot, s)\|_{L^{1}}\, {\rm d}s + \int_{0}^{t}|b_{n}(s)-b(s)|\, {\rm d}s \end{align} | (5.6) |
and
\begin{align} I_{11}+I_{12} \leq& \int_{0}^{t}\int_{0}^{a_{\dagger}}|f_{n}(\varsigma, s)-f(\varsigma, s)|\, {\rm d}\varsigma\, {\rm d}s + \int_{0}^{t}\int_{0}^{a_{\dagger}}|\mu_{1}(\varsigma, s)|\cdot|\eta_{n}(\varsigma, s)-\eta(\varsigma, s)|\, {\rm d}\varsigma\, {\rm d}s \\ +& \int_{0}^{t}\int_{0}^{a_{\dagger}}|\mu_{2}(\varsigma, s)|\cdot|I\eta_{n}-I\eta|(s)\, {\rm d}\varsigma\, {\rm d}s \\ \leq& \int_{0}^{t}\int_{0}^{a_{\dagger}}|f_{n}(\varsigma, s)-f(\varsigma, s)|\, {\rm d}\varsigma\, {\rm d}s \\ +& \Big(\|\mu_{1}\|_{L^{\infty}(D)}+\bar{m}\|\mu_{2}\|_{L^{\infty}(D)}\Big)\int_{0}^{t}\|\eta_{n}(\cdot, s)-\eta(\cdot, s)\|_{L^{1}}\, {\rm d}s. \end{align} | (5.7) |
From (5.5)–(5.7), we can obtain
\begin{align*} \|\eta_{n}(\cdot, t)-\eta(\cdot, t)\|_{L^{1}} \leq& \int_{0}^{t}|b_{n}(s)-b(s)|\, {\rm d}s + \int_{0}^{t}\int_{0}^{a_{\dagger}}|f_{n}(\varsigma, s)-f(\varsigma, s)|\, {\rm d}\varsigma\, {\rm d}s \\ +& M_{5}\int_{0}^{t}\|\eta_{n}(\cdot, s)-\eta(\cdot, s)\|_{L^{1}}\, {\rm d}s, \end{align*} |
where M_{5} = \sum_{i = 1}^{3}\|\beta_{i}\|_{L^{\infty}(D)}+\|\mu_{1}\|_{L^{\infty}(D)}+\bar{m}\|\mu_{2}\|_{L^{\infty}(D)} . Thus, Gronwall's inequality implies that
\begin{align*} \|\eta_{n}(\cdot, t)-\eta(\cdot, t)\|_{L^{1}} \leq& M_{6}\int_{0}^{t}|b_{n}(s)-b(s)|\, {\rm d}s + M_{6}\int_{0}^{t}\int_{0}^{a_{\dagger}}|f_{n}(\varsigma, s)-f(\varsigma, s)|\, {\rm d}\varsigma\, {\rm d}s, \end{align*} |
where M_{6} = 1+M_{5}Te^{M_{5}T} . Hence, we can claim that \eta_{n}\rightarrow \eta in L^{\infty}(0, T; L^{1}(0, a_{\dagger})) as n\rightarrow +\infty .
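For readers who want the Gronwall step spelled out: write \varphi(t) = \|\eta_{n}(\cdot, t)-\eta(\cdot, t)\|_{L^{1}} and let g(t) denote the sum of the first two (nondecreasing) terms on the right-hand side of the preceding estimate. The integral form of Gronwall's inequality then gives, for 0\leq t\leq T ,

```latex
\begin{align*}
\varphi(t)\leq g(t)+M_{5}\int_{0}^{t}\varphi(s)\,{\rm d}s
\;\Longrightarrow\;
\varphi(t)\leq g(t)+M_{5}\int_{0}^{t}g(s)e^{M_{5}(t-s)}\,{\rm d}s
\leq\big(1+M_{5}Te^{M_{5}T}\big)g(T),
\end{align*}
```

which is precisely the stated bound with M_{6} = 1+M_{5}Te^{M_{5}T} .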
Lemma 5.2. For any u\in\mathcal{U} , v\in T_{\mathcal{U}}(u) and sufficiently small \varepsilon > 0 , if u+\varepsilon v\in\mathcal{U} , we have
\begin{align*} \lim\limits_{\varepsilon\rightarrow0^{+}}\frac{p^{u+\varepsilon v}(a, t)-p^{u}(a, t)}{\varepsilon} = z(a, t), \end{align*} |
where p^{u+\varepsilon v} and p^{u} are, respectively, solutions of (2.1) corresponding to u+\varepsilon v and u , and z satisfies
\begin{align} \left\{ \begin{array}{ll} \frac{\partial z(a, t)}{\partial t}+\frac{\partial z(a, t)}{\partial a} = -\big[\mu(a, t)+\delta_{1}u(a, t)+\Phi(I^{u}(t))\big]z(a, t)-\Phi'(I^{u}(t))p^{u}(a, t)P(t) \\ \quad -\delta_{1}v(a, t)p^{u}(a, t), \\ z(0, t) = \int_{0}^{a_{\dagger}}[1-\delta_{2}u(a, t)]\Big[\beta\big(a, t, E(p^{u})(a, t), S^{u}(t)\big)z(a, t)+\beta_{E}\big(a, t, E(p^{u})(a, t), S^{u}(t)\big) \\ \quad \times E(z)(a, t)p^{u}(a, t)+\beta_{S}\big(a, t, E(p^{u})(a, t), S^{u}(t)\big)p^{u}(a, t)Q(t)\Big]\, {\rm d}a \\ \quad - \int_{0}^{a_{\dagger}}\delta_{2}\beta\big(a, t, E(p^{u})(a, t), S^{u}(t)\big)v(a, t)p^{u}(a, t)\, {\rm d}a, \\ z(a, 0) = 0, \\ P(t) = \int_{0}^{a_{\dagger}}m(a)z(a, t)\, {\rm d}a, \; \; Q(t) = \int_{0}^{a_{\dagger}}[1-\omega(a)]z(a, t)\, {\rm d}a, \\ E(z)(a, t) = \alpha \int_{0}^{a}\omega(r)z(r, t)\, {\rm d}r+\int_{a}^{a_{\dagger}}\omega(r)z(r, t)\, {\rm d}r. \end{array} \right. \end{align} | (5.8) |
Here, \beta_{E} and \beta_{S} are, respectively, the derivatives of \beta with respect to E and S .
Proof. Denote p^{\varepsilon}(a, t)\triangleq p^{u+\varepsilon v}(a, t) . A similar discussion as that in Theorem 3.3 shows that system (5.8) has a unique solution. Now, we prove the existence of \lim_{\varepsilon\rightarrow0^{+}}\frac{p^{\varepsilon}(a, t)-p^{u}(a, t)}{\varepsilon} . Let
\begin{align*} \theta_{\varepsilon}(a, t)\triangleq\frac{1}{\varepsilon}\big[p^{\varepsilon}(a, t)-p^{u}(a, t)\big]-z(a, t). \end{align*} |
Firstly, from (2.1) and (5.8), it follows that
\begin{align*} \frac{\partial\theta_{\varepsilon}(a, t)}{\partial t} + \frac{\partial\theta_{\varepsilon}(a, t)}{\partial a} = & -[\mu(a, t)+\delta_{1}u(a, t)+\Phi(I^{u}(t))]\theta_{\varepsilon}(a, t)-\delta_{1}v(a, t)[p^{\varepsilon}(a, t)-p^{u}(a, t)] \\ -& \Phi'(I^{u}(t))\frac{1}{\varepsilon}[I^{\varepsilon}(t)-I^{u}(t)]p^{\varepsilon}(a, t) +\Phi'(I^{u}(t))p^{u}(a, t)P(t) \\ = & -[\mu(a, t)+\delta_{1}u(a, t)+\Phi(I^{u}(t))]\theta_{\varepsilon}(a, t)-\delta_{1}v(a, t)[p^{\varepsilon}(a, t)-p^{u}(a, t)] \\ -& \Phi'(I^{u}(t))I(\theta_{\varepsilon})(t)p^{\varepsilon}(a, t)-\Phi'(I^{u}(t))P(t)[p^{\varepsilon}(a, t)-p^{u}(a, t)], \end{align*} |
where I(\theta_{\varepsilon})(t) = \int_{0}^{a_{\dagger}}m(a)\big\{\frac{1}{\varepsilon}[p^{\varepsilon}(a, t)-p^{u}(a, t)]-z(a, t)\big\}\, {\rm d}a = \int_{0}^{a_{\dagger}}m(a)\theta_{\varepsilon}(a, t)\, {\rm d}a .
Secondly, a simple calculation shows that
\begin{align*} \theta_{\varepsilon}(0, t) & = \int_{0}^{a_{\dagger}}\beta_{E}\big(a, t, E(p^{u})(a, t), S^{u}(t)\big)[1-\delta_{2}u(a, t)] E\Big(\frac{1}{\varepsilon}[p^{\varepsilon}-p^{u}]\Big)(a, t)[p^{\varepsilon}(a, t)-p^{u}(a, t)]\, \mbox{d}a \\ &+ \int_{0}^{a_{\dagger}}\beta_{E}\big(a, t, E(p^{u})(a, t), S^{u}(t)\big)[1-\delta_{2}u(a, t)]p^{u}(a, t)E(\theta_{\varepsilon})(a, t)\, \mbox{d}a \\ & +\int_{0}^{a_{\dagger}}\beta_{S}\big(a, t, E(p^{u})(a, t), S^{u}(t)\big)[1-\delta_{2}u(a, t)]\frac{1}{\varepsilon}[S^{\varepsilon}(t)-S^{u}(t)][p^{\varepsilon}(a, t)-p^{u}(a, t)]\, \mbox{d}a \\ & +\int_{0}^{a_{\dagger}}\beta_{S}\big(a, t, E(p^{u}), S^{u}(t)\big)[1-\delta_{2}u(a, t)]p^{u}(a, t)S(\theta_{\varepsilon})(t)\, \mbox{d}a \\ & +\int_{0}^{a_{\dagger}}\beta\big(a, t, E(p^{u})(a, t), S^{u}(t)\big)[1-\delta_{2}u(a, t)]\theta_{\varepsilon}(a, t)\, \mbox{d}a \\ & -\int_{0}^{a_{\dagger}}\delta_{2}\beta\big(a, t, E(p^{u})(a, t), S^{u}(t)\big)v(a, t)[p^{\varepsilon}(a, t)-p^{u}(a, t)]\, \mbox{d}a+b_{0}(\varepsilon), \end{align*} |
where \lim_{\varepsilon\rightarrow0^{+}}b_{0}(\varepsilon) = 0 and S(\theta_{\varepsilon})(t) = \int_{0}^{a_{\dagger}}[1-\omega(a)]\theta_{\varepsilon}(a, t)\, {\rm d}a .
Then, \theta_{\varepsilon} satisfies the following system
\begin{align} \left\{ \begin{array}{ll} \frac{\partial\theta_{\varepsilon}}{\partial t} + \frac{\partial\theta_{\varepsilon}}{\partial a} = -[\mu(a, t)+\delta_{1}u(a, t)+\Phi(I^{u}(t))]\theta_{\varepsilon}(a, t)-\delta_{1}v(a, t)[p^{\varepsilon}(a, t)-p^{u}(a, t)] \\ \quad -\Phi'(I^{u}(t))I(\theta_{\varepsilon})(t)p^{\varepsilon}(a, t)-\Phi'(I^{u}(t))P(t)[p^{\varepsilon}(a, t)-p^{u}(a, t)], \\ \theta_{\varepsilon}(0, t) = \int_{0}^{a_{\dagger}}\beta\big(a, t, E(p^{u})(a, t), S^{u}(t)\big)[1-\delta_{2}u(a, t)]\theta_{\varepsilon}(a, t)\, \mbox{d}a \\ \quad + \int_{0}^{a_{\dagger}}\beta_{S}\big(a, t, E(p^{u}), S^{u}(t)\big)[1-\delta_{2}u(a, t)]p^{u}(a, t)S(\theta_{\varepsilon})(t)\, \mbox{d}a \\ \quad + \int_{0}^{a_{\dagger}}\beta_{E}\big(a, t, E(p^{u}), S^{u}(t)\big)[1-\delta_{2}u(a, t)]p^{u}(a, t)E(\theta_{\varepsilon})(a, t)\, \mbox{d}a \\ \quad + \int_{0}^{a_{\dagger}}\beta_{S}\big(a, t, E(p^{u})(a, t), S^{u}(t)\big)[1-\delta_{2}u(a, t)]\frac{1}{\varepsilon}[S^{\varepsilon}(t)-S^{u}(t)][p^{\varepsilon}-p^{u}](a, t)\, \mbox{d}a \\ \quad +\int_{0}^{a_{\dagger}}\beta_{E}\big(a, t, E(p^{u}), S^{u}(t)\big)[1-\delta_{2}u(a, t)] E\Big(\frac{1}{\varepsilon}[p^{\varepsilon}-p^{u}]\Big)(a, t)[p^{\varepsilon}-p^{u}](a, t)\, \mbox{d}a \\ \quad - \int_{0}^{a_{\dagger}}\delta_{2}\beta\big(a, t, E(p^{u})(a, t), S^{u}(t)\big)v(a, t)[p^{\varepsilon}(a, t)-p^{u}(a, t)]\, \mbox{d}a+b_{0}(\varepsilon), \\ \theta_{\varepsilon}(a, 0) = 0. \end{array} \right. \end{align} | (5.9) |
By Theorem 3.4, we have p^{\varepsilon}(a, t)-p^{u}(a, t)\rightarrow0 as \varepsilon\rightarrow0^{+} , and
\begin{align*} E\Big(\frac{1}{\varepsilon}[p^{\varepsilon}-p^{u}]\Big)(a, t)[p^{\varepsilon}(a, t)-p^{u}(a, t)]\rightarrow0, \; \; \frac{1}{\varepsilon}[S^{\varepsilon}(t)-S^{u}(t)][p^{\varepsilon}(a, t)-p^{u}(a, t)]\rightarrow0\; \; {\rm as}\; \; \varepsilon\rightarrow0^{+}. \end{align*} |
Letting \varepsilon\rightarrow0^{+} , we obtain the following limit system of (5.9):
\begin{align} \left\{ \begin{array}{ll} \frac{\partial\theta}{\partial t}+\frac{\partial\theta}{\partial a} = -[\mu(a, t)+\delta_{1}u(a, t)+\Phi(I^{u}(t))]\theta(a, t)-\Phi'(I^{u}(t))I(\theta)(t)p^{u}(a, t), \\ \theta(0, t) = \int_{0}^{a_{\dagger}}\beta\big(a, t, E(p^{u}), S^{u}(t)\big)[1-\delta_{2}u(a, t)]\theta(a, t)\, \mbox{d}a \\ \quad + \int_{0}^{a_{\dagger}}\beta_{S}\big(a, t, E(p^{u}), S^{u}(t)\big)[1-\delta_{2}u(a, t)]p^{u}(a, t)S(\theta)(t)\, \mbox{d}a \\ \quad + \int_{0}^{a_{\dagger}}\beta_{E}\big(a, t, E(p^{u}), S^{u}(t)\big)[1-\delta_{2}u(a, t)]p^{u}(a, t)E(\theta)(a, t)\, \mbox{d}a, \\ \theta(a, 0) = 0. \end{array} \right. \end{align} | (5.10) |
Clearly, (5.10) is a homogeneous linear system with the zero initial value. Thus, \theta(a, t)\equiv0 (see [22, Theorem 4.1]). Further, from Lemma 5.1, we can claim that \lim_{\varepsilon\rightarrow0^{+}}\theta_{\varepsilon}(a, t) = 0 . Hence,
\begin{align*} \lim\limits_{\varepsilon\rightarrow0^{+}}\frac{p^{\varepsilon}(a, t)-p^{u}(a, t)}{\varepsilon} = z(a, t), \end{align*} |
and z(a, t) satisfies (5.8). The proof is complete.
Theorem 5.3. Let u^{\ast}(a, t) be an optimal policy for the management problem (2.1)–(2.2). Then
\begin{equation} u^{*}(a, t) = \left\{ \begin{array}{ll} 0\ \ \ &\mathit{\mbox{if}}\ \ \delta_{1}\xi(a, t)+\delta_{2}\beta\big(a, t, E(p^{\ast})(a, t), S^{\ast}(t)\big)\xi(0, t) < r(t), \\ L &\mathit{\mbox{if}}\ \ \delta_{1}\xi(a, t)+\delta_{2}\beta\big(a, t, E(p^{\ast})(a, t), S^{\ast}(t)\big)\xi(0, t) > r(t), \end{array} \right. \end{equation} | (5.11) |
where \xi(a, t) satisfies the following adjoint system
\begin{align} \left\{ \begin{array}{ll} \frac{\partial \xi}{\partial t}+\frac{\partial \xi}{\partial a} = \big[\mu+\delta_{1}u^{\ast}(a, t)+\Phi(I^{\ast}(t))\big]\xi(a, t)+\Phi'(I^{\ast}(t)) \int_{0}^{a_{\dagger}}m(r)p^{\ast}(r, t)\xi(r, t)\, {\rm d}r \\ \quad - \alpha\int_{a}^{a_{\dagger}}\beta_{E}\big(r, t, E(p^{\ast})(r, t), S^{\ast}(t)\big)[1-\delta_{2}u^{\ast}(r, t)]\omega(r)p^{\ast}(r, t)\xi(0, t)\, {\rm d}r \\ \quad - \int_{0}^{a}\beta_{E}\big(r, t, E(p^{\ast})(r, t), S^{\ast}(t)\big)[1-\delta_{2}u^{\ast}(r, t)]\omega(r)p^{\ast}(r, t)\xi(0, t)\, {\rm d}r \\ \quad - \int_{0}^{a_{\dagger}}\beta_{S}\big(r, t, E(p^{\ast})(r, t), S^{\ast}(t)\big)[1-\delta_{2}u^{\ast}(r, t)][1-\omega(r)]p^{\ast}(r, t)\xi(0, t)\, {\rm d}r \\ \quad -\beta\big(a, t, E(p^{\ast})(a, t), S^{\ast}(t)\big)[1-\delta_{2}u^{\ast}(a, t)]\xi(0, t)-r(t)u^{\ast}(a, t), \\ \xi(a, T) = 1, \; \; \; \; \; \; \; \xi(a_{\dagger}, t) = 0. \end{array} \right. \end{align} | (5.12) |
Here p^{\ast}(a, t) is the solution of model (2.1) corresponding to u^{\ast}\in\mathcal{U} , I^{\ast}(t) = \int_{0}^{a_{\dagger}}m(a)p^{\ast}(a, t)\, {\rm d}a , S^{\ast}(t) = \int_{0}^{a_{\dagger}}[1-\omega(a)]p^{\ast}(a, t)\, {\rm d}a and E(p^{\ast})(a, t) = \alpha\int_{0}^{a}\omega(r)p^{\ast}(r, t)\, {\rm d}r +\int_{a}^{a_{\dagger}}\omega(r)p^{\ast}(r, t)\, {\rm d}r .
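On a discrete grid, the characterization (5.11) translates directly into a pointwise thresholding rule. The sketch below is only illustrative (the array names and shapes are our own, not the paper's; on the singular set where the two sides of (5.11) are equal the theorem gives no information, so the sketch leaves u = 0 there):

```python
import numpy as np

def bang_bang_control(xi, xi_boundary, beta_grid, r, L, delta1, delta2):
    """Evaluate the switching rule (5.11) on an (age x time) grid.

    xi          : xi(a_i, t_j), shape (na, nt)
    xi_boundary : xi(0, t_j), shape (nt,), broadcast over ages
    beta_grid   : beta(a_i, t_j, E(p*)(a_i, t_j), S*(t_j)), shape (na, nt)
    r           : cost weight r(t_j), shape (nt,)
    """
    switch = delta1 * xi + delta2 * beta_grid * xi_boundary[None, :]
    u = np.zeros_like(switch)
    u[switch > r[None, :]] = L  # maximal effort where the switch exceeds the cost
    return u
```

For instance, with delta1 = 1 and a switch value well above r at one grid node, the rule returns L there and 0 elsewhere, reproducing the bang-bang structure of the optimal policy.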
Proof. For any v\in\mathcal{T}_{\mathcal{U}}(u^{\ast}) and sufficiently small \varepsilon > 0 , we have u^{\varepsilon}\triangleq u^{\ast}+\varepsilon v\in \mathcal{U} . Let p^{\varepsilon}(a, t) be the solution of (2.1) with respect to u^{\varepsilon} . Then the optimality of u^{\ast} implies J(u^{\ast})\leq J(u^{\varepsilon}) , that is,
\begin{align*} \int_{0}^{a_{\dagger}}\frac{p^{\varepsilon}(a, T)-p^{\ast}(a, T)}{\varepsilon}\, {\rm d}a + \int_{0}^{T}\int_{0}^{a_{\dagger}}r(t)\bigg[\frac{u^{\ast}(a, t)[p^{\varepsilon}(a, t)-p^{\ast}(a, t)]}{\varepsilon}+v(a, t)p^{\varepsilon}(a, t)\bigg]\, {\rm d}a\, {\rm d}t \geq0. \end{align*} |
It follows from Theorem 3.4 and Lemma 5.2 that
\begin{align} \int_{0}^{a_{\dagger}}z(a, T)\, {\rm d}a + \int_{0}^{T}\int_{0}^{a_{\dagger}}r(t)\bigg[u^{\ast}(a, t)z(a, t)+v(a, t)p^{\ast}(a, t)\bigg]\, {\rm d}a\, {\rm d}t \geq0. \end{align} | (5.13) |
Here z(a, t) is the solution of (5.8) with u and p^{u} being replaced by u^{\ast} and p^{\ast} , respectively.
In system (5.8) (with u and p^{u} being replaced by u^{\ast} and p^{\ast} , respectively), multiplying the first equation by \xi(a, t) and integrating on D yield
\begin{align} \int_{D}\Big(\frac{\partial z}{\partial t}+\frac{\partial z}{\partial a}\Big)\xi\, {\rm d}a\, {\rm d}t = & -\int_{D}\bigg\{\big[\mu(a, t)+\delta_{1}u^{\ast}(a, t)+\Phi(I^{\ast}(t))\big]z(a, t) \\ +& \Phi'(I^{\ast}(t))p^{\ast}(a, t)P(t)+\delta_{1}v(a, t)p^{\ast}(a, t)\bigg\}\xi(a, t)\, {\rm d}a\, {\rm d}t. \end{align} | (5.14) |
Using integration by parts and (5.8), one can derive
\begin{align} \int_{D}\Big(\frac{\partial z}{\partial t}+\frac{\partial z}{\partial a}\Big)\xi\, {\rm d}a\, {\rm d}t & = \int_{0}^{a_{\dagger}}z(a, T)\, {\rm d}a - \int_{D}\Big(\frac{\partial \xi}{\partial t}+\frac{\partial \xi}{\partial a}\Big)z(a, t)\, {\rm d}a\, {\rm d}t \\ & -\int_{D}\beta\big(a, t, E(p^{\ast})(a, t), S^{\ast}(t)\big)[1-\delta_{2}u^{\ast}(a, t)]z(a, t)\xi(0, t)\, {\rm d}a\, {\rm d}t \\ & -\int_{D}\beta_{E}\big(a, t, E(p^{\ast})(a, t), S^{\ast}(t)\big)[1-\delta_{2}u^{\ast}(a, t)]p^{\ast}(a, t)E(z)(a, t)\xi(0, t)\, {\rm d}a\, {\rm d}t \\ & -\int_{D}\beta_{S}\big(a, t, E(p^{\ast})(a, t), S^{\ast}(t)\big)[1-\delta_{2}u^{\ast}(a, t)]p^{\ast}(a, t)Q(t)\xi(0, t)\, {\rm d}a\, {\rm d}t \\ & +\int_{D}\delta_{2}\beta\big(a, t, E(p^{\ast})(a, t), S^{\ast}(t)\big)v(a, t)p^{\ast}(a, t)\xi(0, t)\, {\rm d}a\, {\rm d}t. \end{align} | (5.15) |
Thus, from (5.14) and (5.15) and a simple calculation, we obtain
\begin{align} \int_{D}\Big(\frac{\partial \xi}{\partial t}&+\frac{\partial \xi}{\partial a}\Big)z(a, t)\, {\rm d}a\, {\rm d}t \\ & = \int_{0}^{a_{\dagger}}z(a, T)\, {\rm d}a - \int_{D}\beta\big(a, t, E(p^{\ast})(a, t), S^{\ast}(t)\big)[1-\delta_{2}u^{\ast}(a, t)]z(a, t)\xi(0, t)\, {\rm d}a\, {\rm d}t \\ & -\int_{D}z(a, t)\alpha\int_{a}^{a_{\dagger}}\beta_{E}\big(r, t, E(p^{\ast})(r, t), S^{\ast}(t)\big)[1-\delta_{2}u^{\ast}(r, t)]\omega(r)p^{\ast}(r, t)\xi(0, t)\, {\rm d}r\, {\rm d}a\, {\rm d}t \\ & -\int_{D}z(a, t)\int_{0}^{a}\beta_{E}\big(r, t, E(p^{\ast})(r, t), S^{\ast}(t)\big)[1-\delta_{2}u^{\ast}(r, t)]\omega(r)p^{\ast}(r, t)\xi(0, t)\, {\rm d}r\, {\rm d}a\, {\rm d}t \\ & -\int_{D}z(a, t)\int_{0}^{a_{\dagger}}\beta_{S}\big(r, t, E(p^{\ast})(r, t), S^{\ast}(t)\big)[1-\delta_{2}u^{\ast}(r, t)][1-\omega(r)]p^{\ast}(r, t)\xi(0, t)\, {\rm d}r\, {\rm d}a\, {\rm d}t \\ & + \int_{D}\delta_{2}\beta\big(a, t, E(p^{\ast})(a, t), S^{\ast}(t)\big)v(a, t)p^{\ast}(a, t)\xi(0, t)\, {\rm d}a\, {\rm d}t + \int_{D}\delta_{1}v(a, t)p^{\ast}(a, t)\xi(a, t)\, {\rm d}a\, {\rm d}t \\ &+ \int_{D}z(a, t)\Phi'(I^{\ast}(t))\int_{0}^{a_{\dagger}}m(r)p^{\ast}(r, t)\xi(r, t)\, {\rm d}r\, {\rm d}a\, {\rm d}t \\ &+ \int_{D}z(a, t)\big[\mu(a, t)+\delta_{1}u^{\ast}(a, t)+\Phi(I^{\ast}(t))\big]\xi(a, t)\, {\rm d}a\, {\rm d}t. \end{align} | (5.16) |
Multiplying both sides of the first equation of (5.12) by z(a, t) and integrating on D , we get
\begin{align} \int_{D}\Big(\frac{\partial \xi}{\partial t}&+\frac{\partial \xi}{\partial a}\Big)z(a, t)\, {\rm d}a\, {\rm d}t \\ & = -\int_{D}z(a, t)\alpha\int_{a}^{a_{\dagger}}\beta_{E}\big(r, t, E(p^{\ast})(r, t), S^{\ast}(t)\big)[1-\delta_{2}u^{\ast}(r, t)]\omega(r)p^{\ast}(r, t)\xi(0, t)\, {\rm d}r\, {\rm d}a\, {\rm d}t \\ & -\int_{D}z(a, t)\int_{0}^{a}\beta_{E}\big(r, t, E(p^{\ast})(r, t), S^{\ast}(t)\big)[1-\delta_{2}u^{\ast}(r, t)]\omega(r)p^{\ast}(r, t)\xi(0, t)\, {\rm d}r\, {\rm d}a\, {\rm d}t \\ & -\int_{D}z(a, t)\int_{0}^{a_{\dagger}}\beta_{S}\big(r, t, E(p^{\ast})(r, t), S^{\ast}(t)\big)[1-\delta_{2}u^{\ast}(r, t)][1-\omega(r)]p^{\ast}(r, t)\xi(0, t)\, {\rm d}r\, {\rm d}a\, {\rm d}t \\ & + \int_{D}z(a, t)\Phi'(I^{\ast}(t))\int_{0}^{a_{\dagger}}m(r)p^{\ast}(r, t)\xi(r, t)\, {\rm d}r\, {\rm d}a\, {\rm d}t - \int_{D}z(a, t)r(t)u^{\ast}(a, t)\, {\rm d}a\, {\rm d}t \\ &+ \int_{D}z(a, t)\big[\mu(a, t)+\delta_{1}u^{\ast}(a, t)+\Phi(I^{\ast}(t))\big]\xi(a, t)\, {\rm d}a\, {\rm d}t \\ & - \int_{D}z(a, t)\beta\big(a, t, E(p^{\ast})(a, t), S^{\ast}(t)\big)[1-\delta_{2}u^{\ast}(a, t)]\xi(0, t)\, {\rm d}a\, {\rm d}t. \end{align} | (5.17) |
Thus, from (5.16) and (5.17), we have
\begin{align} \int_{0}^{a_{\dagger}}z(a, T)&\, {\rm d}a+\int_{0}^{T}\int_{0}^{a_{\dagger}}r(t)z(a, t)u^{\ast}(a, t)\, {\rm d}a\, {\rm d}t \\ = & - \int_{0}^{T}\int_{0}^{a_{\dagger}}[\delta_{1}\xi(a, t)+\delta_{2}\beta\big(a, t, E(p^{\ast})(a, t), S^{\ast}(t)\big)\xi(0, t)]v(a, t)p^{\ast}(a, t)\, {\rm d}a\, {\rm d}t. \end{align} | (5.18) |
For each v\in\mathcal{T}_{\mathcal{U}}(u^{\ast}) , by (5.13) and (5.18), we claim that
\begin{align*} \int_{0}^{T}\int_{0}^{a_{\dagger}}[\delta_{1}\xi(a, t)+\delta_{2}\beta\big(a, t, E(p^{\ast})(a, t), S^{\ast}(t)\big)\xi(0, t)-r(t)] v(a, t)p^{\ast}(a, t)\, {\rm d}a\, {\rm d}t \leq0. \end{align*} |
That is, [\delta_{1}\xi(a, t)+\delta_{2}\beta\big(a, t, E(p^{\ast})(a, t), S^{\ast}(t)\big)\xi(0, t)-r(t)]p^{\ast}(a, t)\in\mathcal{N}_{\mathcal{U}}(u^{\ast}) . Hence, the conclusion follows from the structure of the normal cone \mathcal{N}_{\mathcal{U}}(u^{\ast}) .
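The last step uses the explicit form of the normal cone to a box constraint set. Assuming, as (5.11) suggests, that \mathcal{U} is the set of measurable controls with values in [0, L] , a function w belongs to \mathcal{N}_{\mathcal{U}}(u^{\ast}) precisely when, for a.e. (a, t) ,

```latex
w(a, t)\ \left\{
\begin{array}{ll}
\leq 0, & \mbox{if}\ u^{\ast}(a, t) = 0, \\
= 0,    & \mbox{if}\ 0 < u^{\ast}(a, t) < L, \\
\geq 0, & \mbox{if}\ u^{\ast}(a, t) = L.
\end{array}
\right.
```

Since p^{\ast}\geq0 , the sign of \delta_{1}\xi(a, t)+\delta_{2}\beta\big(a, t, E(p^{\ast})(a, t), S^{\ast}(t)\big)\xi(0, t)-r(t) forces u^{\ast} = 0 where it is negative and u^{\ast} = L where it is positive, which is exactly (5.11).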
In this section, we give an illustrative example to show that the assumptions underlying the existence results are not vacuous.
Example 1. Let the parameters be a_{\dagger} = 10 , T = 20 , \alpha = 0.3 , L = 2 , \delta_{1} = 0.05 , \delta_{2} = 0.02 . Obviously, \delta_{2}L = 0.04 < 1 . With the weight functions \omega(a) = 0.5 and m(a) = 1 , the immigration rate f(a, t) = (1+\sin\pi t)(10-a) and the initial age distribution p_{0}(a) = 0.4(10-a)(1+\cos 2a) , we can easily verify that assumption {\bf(A_4)} holds. Choose the natural mortality rate to be
\begin{align*} \mu(a, t) = \left\{ \begin{array}{ll} (2+\cos \pi t)\left[0.04(1+\cos a)+\frac{(2-a)^{2}}{20}\right], \qquad &(a, t)\in [0, 2)\times[0, 20], \\ (2+\cos \pi t)0.04(1+\cos a), &(a, t)\in [2, 8)\times[0, 20], \\ (2+\cos \pi t)\left[0.04(1+\cos a)+\frac{a-8}{10-a}\right], &(a, t)\in [8, 10)\times[0, 20]. \end{array} \right. \end{align*} |
For t\in[0, 20] , a direct calculation gives
\begin{align*} \int_{0}^{a_{\dagger}}\mu(a, t)\, {\rm d}a = & (2+\cos \pi t)\Bigg[0.04\int_{0}^{10}(1+\cos a)\, {\rm d}a+\int_{0}^{2}\frac{(2-a)^{2}}{20}\, {\rm d}a+\int_{8}^{10}\frac{a-8}{10-a}\, {\rm d}a\Bigg] \\ = & (2+\cos \pi t)\Bigg[0.04(a+\sin a)\bigg|_{0}^{10}-\frac{(2-a)^{3}}{60}\bigg|_{0}^{2}-a\bigg|_{8}^{10} -2\ln(10-a)\bigg|_{8}^{10}\Bigg] \\ = & +\infty, \end{align*} |
which means that assumption {\bf(A_1)} holds. Assume that \Phi(s) = 0.02(e^{-0.8s}+\cos s+2) . It is easy to show that \Phi(s)\leq 0.08 for any s\in R_{+} . Moreover, for any s_{1} , s_{2}\in R_{+} , we have
\begin{align*} |\Phi(s_{1})-\Phi(s_{2})| = & 0.02\left|e^{-0.8s_{1}}+\cos s_{1}-e^{-0.8s_{2}}-\cos s_{2}\right| \\ \leq& 0.02\left|e^{-0.8s_{1}}-e^{-0.8s_{2}}\right|+0.02|\cos s_{1}-\cos s_{2}| \\ \leq& 0.04|s_{1}-s_{2}|. \end{align*} |
Thus assumption {\bf(A_2)} holds. For any (t, s, q)\in[0, 20]\times R_{+}\times R_{+} , take the birth rate as
\begin{align*} \beta(a, t, s, q) = \left\{ \begin{array}{ll} 0, \qquad &a\in [0, 1)\cup[9, 10), \\[2.5mm] (1+\sin \pi t)\left[0.31(1+\sin a)+\frac{0.03}{1+s}+0.2(1+\sin q)\right](a-1)^{2}, &a\in [1, 2), \\[2.5mm] (1+\sin \pi t)\left[0.51(1+\sin a)+\frac{0.05}{1+s}+0.5(1+\sin q)\right], &a\in [2, 7), \\[2.5mm] (1+\sin \pi t)\left[0.21(1+\sin a)+\frac{0.03}{1+s}+0.2(1+\sin q)\right](a-9)^{2}, &a\in [7, 9). \end{array} \right. \end{align*} |
By a simple computation, we have \beta(a, t, s, q)\leq 6.8 for any (a, t, s, q)\in[0, 10)\times[0, 20]\times R_{+}\times R_{+} . Moreover, for any t\in[0, 20] , s_{1} , s_{2} , q_{1} , q_{2}\in R_{+} , when a\in[1, 2) , we have
\begin{align*} |\beta(a, t, s_{1}, q_{1})-\beta(a, t, s_{2}, q_{2})| \leq& 2\left|\frac{0.03}{1+s_{1}}+0.2(1+\sin q_{1})-\frac{0.03}{1+s_{2}}-0.2(1+\sin q_{2})\right| \\ \leq& 0.06\left|\frac{1}{1+s_{1}}-\frac{1}{1+s_{2}}\right|+0.4|\sin q_{1}-\sin q_{2}| \\ \leq& 0.4(|s_{1}-s_{2}|+|q_{1}-q_{2}|), \end{align*} |
when a\in[2, 7) , we have
\begin{align*} |\beta(a, t, s_{1}, q_{1})-\beta(a, t, s_{2}, q_{2})| \leq& 2\left|\frac{0.05}{1+s_{1}}+0.5(1+\sin q_{1})-\frac{0.05}{1+s_{2}}-0.5(1+\sin q_{2})\right| \\ \leq& 0.1\left|\frac{1}{1+s_{1}}-\frac{1}{1+s_{2}}\right|+|\sin q_{1}-\sin q_{2}| \\ \leq& |s_{1}-s_{2}|+|q_{1}-q_{2}|, \end{align*} |
when a\in[7, 9) , we have
\begin{align*} |\beta(a, t, s_{1}, q_{1})-\beta(a, t, s_{2}, q_{2})| \leq& 8\left|\frac{0.03}{1+s_{1}}+0.2(1+\sin q_{1})-\frac{0.03}{1+s_{2}}-0.2(1+\sin q_{2})\right| \\ \leq& 0.24\left|\frac{1}{1+s_{1}}-\frac{1}{1+s_{2}}\right|+1.6|\sin q_{1}-\sin q_{2}| \\ \leq& 1.6(|s_{1}-s_{2}|+|q_{1}-q_{2}|). \end{align*} |
Thus, for any a\in[0, a_{\dagger}) and t\in[0, T] , we have
\begin{align*} |\beta(a, t, s_{1}, q_{1})-\beta(a, t, s_{2}, q_{2})|\leq 1.6(|s_{1}-s_{2}|+|q_{1}-q_{2}|). \end{align*} |
This implies that assumption {\bf(A_3)} holds. Hence, from Theorems 3.1–3.3, for any p_{0}\in L_{+}^{1} and u\in\mathcal{U} , system (2.1) has a unique non-negative solution p(a, t) . Moreover, the solution has the form p(a, t) = \bar{p}^{y}(a, t)y(t) . Here (\bar{p}^{y}, y)\in C([0, T]; L_{+}^{1})\times C([0, T]; R_{+}) is the solution of (3.5)–(3.6).
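The verifications above are also easy to confirm numerically. The sketch below is illustrative only (all helper names are ours); it transcribes \mu , \Phi , and \beta from the example and spot-checks the divergence of the mortality integral, the bound and Lipschitz constant of \Phi , and the Lipschitz estimate for \beta :

```python
import math
import random

def mu_tail(delta):
    # Closed form of the truncated integral ∫_8^{10-δ} (a-8)/(10-a) da,
    # using the antiderivative -a - 2 ln(10-a); it blows up like -2 ln δ.
    return (-(10 - delta) - 2 * math.log(delta)) - (-8 - 2 * math.log(2))

def Phi(s):
    # Phi(s) = 0.02 (e^{-0.8 s} + cos s + 2) from the example
    return 0.02 * (math.exp(-0.8 * s) + math.cos(s) + 2.0)

def beta(a, t, s, q):
    # Piecewise birth rate from the example (zero outside [1, 9)).
    if a < 1 or a >= 9:
        return 0.0
    f = 1.0 + math.sin(math.pi * t)
    if a < 2:
        return f * (0.31 * (1 + math.sin(a)) + 0.03 / (1 + s)
                    + 0.2 * (1 + math.sin(q))) * (a - 1) ** 2
    if a < 7:
        return f * (0.51 * (1 + math.sin(a)) + 0.05 / (1 + s)
                    + 0.5 * (1 + math.sin(q)))
    return f * (0.21 * (1 + math.sin(a)) + 0.03 / (1 + s)
                + 0.2 * (1 + math.sin(q))) * (a - 9) ** 2

# (A_1): the truncated mortality integral grows without bound as δ → 0⁺
assert mu_tail(1e-6) > mu_tail(1e-2) > 0

random.seed(0)
for _ in range(10_000):
    s1, s2 = random.uniform(0, 100), random.uniform(0, 100)
    # (A_2): Φ ≤ 0.08 and Φ is 0.04-Lipschitz
    assert Phi(s1) <= 0.08 + 1e-12
    assert abs(Phi(s1) - Phi(s2)) <= 0.04 * abs(s1 - s2) + 1e-12

for _ in range(10_000):
    a, t = random.uniform(0, 10), random.uniform(0, 20)
    s1, s2 = random.uniform(0, 50), random.uniform(0, 50)
    q1, q2 = random.uniform(0, 50), random.uniform(0, 50)
    # (A_3): β is 1.6-Lipschitz in (s, q), uniformly in (a, t)
    d = abs(beta(a, t, s1, q1) - beta(a, t, s2, q2))
    assert d <= 1.6 * (abs(s1 - s2) + abs(q1 - q2)) + 1e-9
```

Random sampling of course proves nothing; the block merely double-checks the hand computations on a large set of points.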
In view of the reproductive laws of vermin, we formulated and analyzed a hierarchical age-structured vermin contraception control model. The model is based on the assumption that the reproductive ability of vermin mainly depends on older females, and it accounts for the encounter mechanism between females and males. This allows the fertility of an individual to depend not only on age and time but also on its "internal environment" and the number of males. Note that the sterilant has dual effects, causing both infertility and death in vermin. Thus, we assumed that the mortality of vermin depends not only on its intrinsic dynamics (including natural mortality and mortality caused by competition) but also on the effect of the female sterilant. These dual effects make the control variable appear not only in the principal equation (distributed control) but also in the boundary condition (boundary control). Our model contains some existing ones as special cases.
By transforming our model into two subsystems and using the contraction mapping principle, we have shown that the model has a unique non-negative bounded solution, which has a separable form. In this work, we discussed the existence of optimal management policy and derived the Euler-Lagrange optimality conditions. The former is established by using compactness and minimization sequences, while the latter is derived by employing adjoint systems and normal cones techniques. To show the compactness, we used the Fréchet-Kolmogorov Theorem (see Lemma 4.1) and its generalization (see Lemma 4.2). In order to construct the adjoint system, we used the continuity of the solution on the control parameters (see Theorem 3.4) and the continuity of the solution of an integro-partial differential equation with respect to its boundary distribution and inhomogeneous term (see Lemma 5.1).
This paper only discussed the existence and structure of the optimal management policy and did not carry out any numerical simulations. This is because it is very challenging to choose an appropriate numerical algorithm and analyze its convergence. The relevant numerical algorithm can be found in [20]. However, our model is more complicated than that in [20], because the birth rate depends not only on the "internal environment" of vermin but also on the number of males. We leave the study on the numerical algorithm of our optimal control problem as future work.
This work was supported by the National Natural Science Foundation of China (Nos. 12071418, 12001341).
The authors declare there is no conflict of interest.
[1] M. Sharawi, H. M. Zawbaa, E. Emary, Feature selection approach based on whale optimization algorithm, in 2017 Ninth International Conference on Advanced Computational Intelligence (ICACI), 2017, 163–168. https://doi.org/10.1109/ICACI.2017.7974502
[2] G. I. Sayed, A. Darwish, A. E. Hassanien, A new chaotic whale optimization algorithm for features selection, J. Classification, 35 (2018), 300–344. https://doi.org/10.1007/s00357-018-9261-2
[3] K. Chen, F. Y. Zhou, X. F. Yuan, Hybrid particle swarm optimization with spiral-shaped mechanism for feature selection, Expert Syst. Appl., 128 (2019), 140–156. https://doi.org/10.1016/j.eswa.2019.03.039
[4] M. Ragab, Hybrid firefly particle swarm optimisation algorithm for feature selection problems, Expert Syst., 41 (2024), e13363. https://doi.org/10.1111/exsy.13363
[5] H. Faris, M. A. Hassonah, A. M. Al-Zoubi, S. Mirjalili, I. Aljarah, A multi-verse optimizer approach for feature selection and optimizing SVM parameters based on a robust system architecture, Neural Comput. Appl., 30 (2018), 2355–2369. https://doi.org/10.1007/s00521-016-2818-2
[6] S. Gu, R. Cheng, Y. Jin, Feature selection for high-dimensional classification using a competitive swarm optimizer, Soft Comput., 22 (2018), 811–822. https://doi.org/10.1007/s00521-016-2818-2
[7] Q. Tu, X. Chen, X. Liu, Hierarchy strengthened grey wolf optimizer for numerical optimization and feature selection, IEEE Access, 7 (2019), 78012–78028. https://doi.org/10.1109/ACCESS.2019.2921793
[8] F. Hafiz, A. Swain, N. Patel, C. Naik, A two-dimensional (2-D) learning framework for particle swarm based feature selection, Pattern Recognit., 76 (2018), 416–433. https://doi.org/10.1016/j.patcog.2017.11.027
[9] M. Ragab, Multi-label scene classification on remote sensing imagery using modified Dingo optimizer with deep learning, IEEE Access, 12 (2024), 11879–11886. https://doi.org/10.1109/ACCESS.2023.3344773
[10] R. C. T. De Souza, L. D. S. Coelho, C. A. De Macedo, J. Pierezan, A V-shaped binary crow search algorithm for feature selection, in 2018 IEEE Congress on Evolutionary Computation (CEC), 2018, 1–8. https://doi.org/10.1109/CEC.2018.8477975
[11] E. H. Houssein, A. Hammad, M. M. Emam, A. A. Ali, An enhanced Coati optimization algorithm for global optimization and feature selection in EEG emotion recognition, Comput. Biol. Med., 173 (2024), 108329. https://doi.org/10.1016/j.compbiomed.2024.108329
[12] M. Chaibi, L. Tarik, M. Berrada, A. El Hmaidi, Machine learning models based on random forest feature selection and Bayesian optimization for predicting daily global solar radiation, Inter. J. Renew. Energy D., 11 (2022), 309. https://doi.org/10.14710/ijred.2022.41451
[13] R. R. Mostafa, M. A. Gaheen, M. Abd ElAziz, M. A. Al-Betar, A. A. Ewees, An improved gorilla troops optimizer for global optimization problems and feature selection, Knowl.-Based Syst., 269 (2023), 110462. https://doi.org/10.1016/j.knosys.2023.110462
[14] B. D. Kwakye, Y. Li, H. H. Mohamed, E. Baidoo, T. Q. Asenso, Particle guided metaheuristic algorithm for global optimization and feature selection problems, Expert Syst. Appl., 248 (2024), 123362. https://doi.org/10.1016/j.eswa.2024.123362
[15] E. H. Houssein, M. E. Hosney, D. Oliva, E. M. Younis, A. A. Ali, W. M. Mohamed, An efficient discrete rat swarm optimizer for global optimization and feature selection in chemoinformatics, Knowl.-Based Syst., 275 (2023), 110697. https://doi.org/10.1016/j.knosys.2023.110697
[16] M. Qaraad, S. Amjad, N. K. Hussein, M. A. Elhosseini, Large scale salp-based grey wolf optimization for feature selection and global optimization, Neural Comput. Appl., 34 (2022), 8989–9014. https://doi.org/10.1007/s00521-022-06921-2
[17] L. Abualigah, M. Altalhi, A novel generalized normal distribution arithmetic optimization algorithm for global optimization and data clustering problems, J. Amb. Intel. Hum. Comput., 15 (2024), 389–417. https://doi.org/10.1007/s12652-022-03898-7
[18] B. Xu, A. A. Heidari, Z. Cai, H. Chen, Dimensional decision covariance colony predation algorithm: global optimization and high-dimensional feature selection, Artif. Intell. Rev., 56 (2023), 11415–11471. https://doi.org/10.1007/s10462-023-10412-8
[19] T. Si, P. B. Miranda, U. Nandi, N. D. Jana, S. Mallik, U. Maulik, Opposition-based chaotic tunicate swarm algorithms for global optimization, IEEE Access, 12 (2024), 18168–18188. https://doi.org/10.1109/ACCESS.2024.3359587
[20] G. Liu, Z. Guo, W. Liu, B. Cao, S. Chai, C. Wang, MSHHOTSA: A variant of tunicate swarm algorithm combining multi-strategy mechanism and hybrid Harris optimization, PLoS One, 18 (2023), e0290117. https://doi.org/10.1371/journal.pone.0290117
[21] A. Alizadeh, F. S. Gharehchopogh, M. Masdari, A. Jafarian, An improved hybrid salp swarm optimization and African vulture optimization algorithm for global optimization problems and its applications in stock market prediction, Soft Comput., 28 (2024), 5225–5261. https://doi.org/10.1007/s00500-023-09299-y
[22] Z. Pan, D. Lei, L. Wang, A knowledge-based two-population optimization algorithm for distributed energy-efficient parallel machines scheduling, IEEE T. Cybernetics, 52 (2020), 5051–5063. https://doi.org/10.1109/TCYB.2020.3026571
[23] F. Zhao, S. Di, L. Wang, A hyperheuristic with Q-learning for the multiobjective energy-efficient distributed blocking flow shop scheduling problem, IEEE T. Cybernetics, 53 (2022), 3337–3350. https://doi.org/10.1109/TCYB.2022.3192112
[24] F. Zhao, C. Zhuang, L. Wang, C. Dong, An iterative greedy algorithm with Q-learning mechanism for the multiobjective distributed no-idle permutation flowshop scheduling, IEEE Trans. Syst. Man Cybern. Syst., 2024. https://doi.org/10.1109/TSMC.2024.3358383
[25] V. Chandran, P. Mohapatra, A novel multi-strategy ameliorated quasi-oppositional chaotic tunicate swarm algorithm for global optimization and constrained engineering applications, Heliyon, 2024. https://doi.org/10.1016/j.heliyon.2024.e30757
[26] F. A. Hashim, E. H. Houssein, R. R. Mostafa, A. G. Hussien, F. Helmy, An efficient adaptive-mutated Coati optimization algorithm for feature selection and global optimization, Alexandria Eng. J., 85 (2023), 29–48. https://doi.org/10.1016/j.aej.2023.11.004
[27] A. S. AL-Ghamdi, M. Ragab, Tunicate swarm algorithm with deep convolutional neural network-driven colorectal cancer classification from histopathological imaging data, Electron. Res. Arch., 31 (2023), 2793–2812. https://doi.org/10.3934/era.2023141
[28] A. Adamu, M. Abdullahi, S. B. Junaidu, I. H. Hassan, An hybrid particle swarm optimization with crow search algorithm for feature selection, Machine Learn. Appl., 6 (2021), 100108. https://doi.org/10.1016/j.mlwa.2021.100108
[29] A. Kumar, S. R. Sangwan, A. Arora, A. Nayyar, M. Abdel-Basset, Sarcasm detection using soft attention-based bidirectional long short-term memory model with convolution network, IEEE Access, 7 (2019), 23319–23328. https://doi.org/10.1109/ACCESS.2019.2899260
[30] I. M. Batiha, B. Mohamed, Binary rat swarm optimizer algorithm for computing independent domination metric dimension problem, 2024. https://doi.org/10.1109/ACCESS.2019.2899260
[31] UCI Machine Learning Repository. https://archive.ics.uci.edu/datasets
[32] A. E. Hegazy, M. A. Makhlouf, G. S. El-Tawel, Improved salp swarm algorithm for feature selection, J. King Saud Univ-Com, 32 (2020), 335–344. https://doi.org/10.1109/ACCESS.2019.2899260