Mathematical and numerical analysis for Predator-prey system in a polluted environment

• In this paper, we prove existence results for a predator-prey system in a polluted environment. The existence result is proved by means of the Schauder fixed-point theorem. Moreover, we construct a combined finite volume-finite element scheme for our model, establish the existence of discrete solutions to this scheme, and show that it converges to a weak solution. The convergence proof is based on deriving a series of a priori estimates and on a general $L^p$ compactness criterion. Finally, we give some numerical examples.

    Citation: Verónica Anaya, Mostafa Bendahmane, Mauricio Sepúlveda. Mathematical and numerical analysis for Predator-prey system in a polluted environment[J]. Networks and Heterogeneous Media, 2010, 5(4): 813-847. doi: 10.3934/nhm.2010.5.813



Optogenetics is a recent and innovative technique which makes it possible to induce or prevent electric shocks in living tissues by means of light stimulation. Successfully demonstrated in mammalian neurons in 2005 ([8]), the technique relies on the genetic modification of cells to make them express particular ionic channels, called rhodopsins, whose opening and closing are directly triggered by light stimulation. One of these rhodopsins comes from a unicellular flagellate alga, Chlamydomonas reinhardtii, and has been named Channelrhodopsin-2 (ChR2). It is a cation channel that opens when illuminated with blue light.

Since the field of Optogenetics is young, mathematical modeling of the phenomenon is quite scarce. Some models have been proposed, based on the study of the photocycles initiated by the absorption of a photon. In 2009, Nikolic et al. [33] proposed two models for the ChR2 that are able to reproduce the photocurrents generated by the light stimulation of the channel. These models comprise several states that can be either conductive (the channel is open) or non-conductive (the channel is closed). Transitions between those states are either spontaneous, depend on the membrane potential, or are triggered by the absorption of a photon. For example, the four-state model of Nikolic et al. [33] has two open states ($o_1$ and $o_2$) and two closed states ($c_1$ and $c_2$). Its transitions are represented in Figure 1.

Figure 1.  Simplified four-state ChR2 channel: $\varepsilon_1$, $\varepsilon_2$, $e_{12}$, $e_{21}$, $K_{d1}$, $K_{d2}$ and $K_r$ are positive constants.
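For intuition, the four-state kinetics can be simulated as a continuous-time Markov chain. The sketch below is our illustration only: the rate constants are hypothetical placeholders (the paper does not fix numerical values at this point), and the light intensity $u$ is held constant.

```python
import random

# Hypothetical rate constants (illustrative placeholders, not values from the paper).
EPS1, EPS2 = 0.5, 0.1            # light-activation rates c1 -> o1 and c2 -> o2, scaled by u
E12, E21 = 0.05, 0.02            # transitions between the two open states
KD1, KD2, KR = 0.1, 0.05, 0.01   # closing rates o1 -> c1, o2 -> c2, and recovery c2 -> c1

def chr2_rates(state, u):
    """Outgoing transition rates from `state` under light intensity u."""
    return {
        "c1": {"o1": EPS1 * u},
        "o1": {"c1": KD1, "o2": E12},
        "o2": {"o1": E21, "c2": KD2},
        "c2": {"o2": EPS2 * u, "c1": KR},
    }[state]

def gillespie(state="c1", u=1.0, t_end=100.0, seed=0):
    """Simulate one ChR2 channel: exponential holding times, then a random jump."""
    rng = random.Random(seed)
    t, path = 0.0, [(0.0, state)]
    while True:
        rates = chr2_rates(state, u)
        total = sum(rates.values())
        if total == 0.0:          # absorbing if no transition is available (e.g. u = 0 in c1)
            return path
        t += rng.expovariate(total)
        if t >= t_end:
            return path
        r, acc = rng.random() * total, 0.0
        for nxt, k in rates.items():
            acc += k
            if r <= acc:
                state = nxt
                break
        path.append((t, state))

print(gillespie()[:5])
```

The channel is conductive exactly when the chain sits in $o_1$ or $o_2$, and turning the light off ($u = 0$) removes the two light-triggered transitions.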

The purpose of this paper is to extend the optimal control of Piecewise Deterministic Markov Processes (PDMPs) to infinite dimension, to define an infinite-dimensional controlled Hodgkin-Huxley model, containing ChR2 channels, as an infinite-dimensional controlled PDMP, and to prove existence of optimal ordinary controls. We now give the definition of the model.

We consider an axon, described as a 1-dimensional cable, and we set $I = [0,1]$ (the more physical case $I = [-l, l]$, with $2l > 0$ the length of the axon, is included here by a scaling argument). Let $D_{ChR2} := \{o_1, o_2, c_1, c_2\}$. Individually, a ChR2 features a stochastic evolution which can be properly described by a Markov chain on the finite space constituted of the different states that the ChR2 can occupy. In the four-state model above, two of the transitions are triggered by light stimulation, in the form of a parameter $u$ that can evolve in time. Here $u(t)$ is physically proportional to the intensity of the light with which the protein is illuminated. For now, we will consider that when the control is on (i.e., when the light is on), the entire axon is uniformly illuminated. Hence for all $t \ge 0$, $u(t)$ features no spatial dependency.

The deterministic Hodgkin-Huxley model was introduced in [30]. A stochastic infinite-dimensional model was studied in [4], [10], [27] and [39]. The Sodium (Na$^+$) channels and Potassium (K$^+$) channels are described by two pure jump processes with state spaces $D_1 := \{n_0, n_1, n_2, n_3, n_4\}$ and

$$D_2 := \{m_0h_1,\, m_1h_1,\, m_2h_1,\, m_3h_1,\, m_0h_0,\, m_1h_0,\, m_2h_0,\, m_3h_0\}.$$

For a given scale $N \in \mathbb{N}^*$, we consider that the axon is populated by $N_{hh} = N - 1$ channels of type Na$^+$, K$^+$ or ChR2, at positions $\frac{1}{N}(\mathbb{Z} \cap N\mathring{I})$. In the sequel we will use the notation $I_N := \mathbb{Z} \cap N\mathring{I}$. We consider the Gelfand triple $(V, H, V^*)$ with $V := H^1_0(I)$ and $H := L^2(I)$. The process we study is defined as a controlled infinite-dimensional Piecewise Deterministic Markov Process (PDMP). All constants and auxiliary functions in the next definition will be defined further in the paper.

Definition 1.1. Stochastic controlled infinite-dimensional Hodgkin-Huxley-ChR2 model. Let $N \in \mathbb{N}^*$. We call $N$th stochastic controlled infinite-dimensional Hodgkin-Huxley-ChR2 model the controlled PDMP $(v(t), d(t)) \in V \times D_N$ defined by the following characteristics:

• A state space $V \times D_N$, with $D_N = D^{I_N}$ and $D = D_1 \cup D_2 \cup D_{ChR2}$.

• A control space $U = [0, u_{max}]$, $u_{max} > 0$.

• A set of uncontrolled PDEs: for every $d \in D_N$,

$$\begin{cases} v'(t) = \frac{1}{C_m}\Delta v(t) + f_d(v(t)),\\ v(0) = v_0 \in V,\quad v_0(x) \in [V_-, V_+]\ \forall x \in I,\\ v(t,0) = v(t,1) = 0,\quad \forall t > 0, \end{cases} \qquad (1)$$

with

$$D(\Delta) = V,\qquad f_d(v) := \frac{1}{N}\sum_{i \in I_N}\Big( g_K \mathbf{1}_{\{d_i = n_4\}}\big(V_K - v(\tfrac{i}{N})\big) + g_{Na}\mathbf{1}_{\{d_i = m_3h_1\}}\big(V_{Na} - v(\tfrac{i}{N})\big) + g_L\big(V_L - v(\tfrac{i}{N})\big) + g_{ChR2}\big(\mathbf{1}_{\{d_i = o_1\}} + \rho\,\mathbf{1}_{\{d_i = o_2\}}\big)\big(V_{ChR2} - v(\tfrac{i}{N})\big)\Big)\,\delta_{\frac{i}{N}}, \qquad (2)$$

with $\delta_z \in V^*$ the Dirac mass at $z \in I$, $\rho > 0$ a constant and $C_m > 0$ the membrane capacitance. For $x \in \{K, Na, L, ChR2\}$, $g_x > 0$ is the normalized conductance of the channel of type $x$ and $V_x \in \mathbb{R}$ is the driving potential of the channel. See Section 5 for more details.

• A jump rate function $\lambda: V \times D_N \times U \to \mathbb{R}_+$, defined for all $(v, d, u) \in H \times D_N \times U$ by

$$\lambda_d(v, u) = \sum_{i \in I_N}\sum_{x \in D}\sum_{y \in D,\, y \neq x} \sigma_{x,y}\big(v(\tfrac{i}{N}), u\big)\,\mathbf{1}_{\{d_i = x\}}, \qquad (3)$$

with $\sigma_{x,y}: \mathbb{R} \times U \to \mathbb{R}_+$ smooth functions for all $(x, y) \in D^2$. See Table 1 in Section 5.1 for the expression of those functions.

Table 1.  Expression of the individual jump rate functions and of the Hodgkin-Huxley model.

In $D_1 = \{n_0, n_1, n_2, n_3, n_4\}$:
$$\sigma_{n_0,n_1}(v,u) = 4\alpha_n(v),\quad \sigma_{n_1,n_2}(v,u) = 3\alpha_n(v),\quad \sigma_{n_2,n_3}(v,u) = 2\alpha_n(v),\quad \sigma_{n_3,n_4}(v,u) = \alpha_n(v),$$
$$\sigma_{n_4,n_3}(v,u) = 4\beta_n(v),\quad \sigma_{n_3,n_2}(v,u) = 3\beta_n(v),\quad \sigma_{n_2,n_1}(v,u) = 2\beta_n(v),\quad \sigma_{n_1,n_0}(v,u) = \beta_n(v).$$

In $D_2 = \{m_0h_1, m_1h_1, m_2h_1, m_3h_1, m_0h_0, m_1h_0, m_2h_0, m_3h_0\}$:
$$\sigma_{m_0h_1,m_1h_1}(v,u) = \sigma_{m_0h_0,m_1h_0}(v,u) = 3\alpha_m(v),\quad \sigma_{m_1h_1,m_2h_1}(v,u) = \sigma_{m_1h_0,m_2h_0}(v,u) = 2\alpha_m(v),$$
$$\sigma_{m_2h_1,m_3h_1}(v,u) = \sigma_{m_2h_0,m_3h_0}(v,u) = \alpha_m(v),\quad \sigma_{m_3h_1,m_2h_1}(v,u) = \sigma_{m_3h_0,m_2h_0}(v,u) = 3\beta_m(v),$$
$$\sigma_{m_2h_1,m_1h_1}(v,u) = \sigma_{m_2h_0,m_1h_0}(v,u) = 2\beta_m(v),\quad \sigma_{m_1h_1,m_0h_1}(v,u) = \sigma_{m_1h_0,m_0h_0}(v,u) = \beta_m(v).$$

In $D_{ChR2} = \{o_1, o_2, c_1, c_2\}$:
$$\sigma_{c_1,o_1}(v,u) = \varepsilon_1 u,\quad \sigma_{o_1,c_1}(v,u) = K_{d1},\quad \sigma_{o_1,o_2}(v,u) = e_{12},\quad \sigma_{o_2,o_1}(v,u) = e_{21},$$
$$\sigma_{o_2,c_2}(v,u) = K_{d2},\quad \sigma_{c_2,o_2}(v,u) = \varepsilon_2 u,\quad \sigma_{c_2,c_1}(v,u) = K_r.$$

Rate functions:
$$\alpha_n(v) = \frac{0.1 - 0.01v}{e^{1-0.1v} - 1},\qquad \beta_n(v) = 0.125\, e^{-\frac{v}{80}},$$
$$\alpha_m(v) = \frac{2.5 - 0.1v}{e^{2.5-0.1v} - 1},\qquad \beta_m(v) = 4\, e^{-\frac{v}{18}},$$
$$\alpha_h(v) = 0.07\, e^{-\frac{v}{20}},\qquad \beta_h(v) = \frac{1}{e^{3-0.1v} + 1}.$$

The deterministic Hodgkin-Huxley model:
$$(HH)\quad \begin{cases} C\dot{V}(t) = \bar{g}_K\, n^4(t)\,(E_K - V(t)) + \bar{g}_{Na}\, m^3(t)h(t)\,(E_{Na} - V(t)) + g_L\,(E_L - V(t)) + I_{ext}(t),\\ \dot{n}(t) = \alpha_n(V(t))(1 - n(t)) - \beta_n(V(t))\,n(t),\\ \dot{m}(t) = \alpha_m(V(t))(1 - m(t)) - \beta_m(V(t))\,m(t),\\ \dot{h}(t) = \alpha_h(V(t))(1 - h(t)) - \beta_h(V(t))\,h(t). \end{cases}$$


• A discrete transition measure $\mathcal{Q}: V \times D_N \times U \to \mathcal{P}(D_N)$, defined for all $(v, d, u) \in H \times D_N \times U$ and $y \in D$ by

$$\mathcal{Q}\big(\{d_{i:y}\}\,|\,v, d, u\big) = \frac{\sigma_{d_i,y}\big(v(\tfrac{i}{N}), u\big)\,\mathbf{1}_{\{d_i \neq y\}}}{\lambda_d(v, u)}, \qquad (4)$$

where $d_{i:y}$ is obtained from $d$ by setting its $i$th component equal to $y$.
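To make the reaction term (2) concrete, here is a small sketch (our illustration, with hypothetical conductances and driving potentials; Section 5 fixes the actual constants) that computes the weight carried by each Dirac mass $\delta_{i/N}$, given the potential and the channel states at the $N-1$ sites.

```python
import numpy as np

# Hypothetical parameter values (illustrative only; Section 5 fixes the real ones).
N = 100
G_K, G_NA, G_L, G_CHR2, RHO = 36.0, 120.0, 0.3, 0.65, 0.1
V_K, V_NA, V_L, V_CHR2 = -12.0, 115.0, 10.6, 60.0

def reaction_weights(v, d):
    """Weights of the Dirac masses in f_d(v): entry i is the coefficient of delta_{i/N}.

    v : potentials v(i/N) at the channel sites, i = 1, ..., N-1
    d : channel states at the same sites, e.g. 'n4', 'm3h1', 'o1', 'c2', ...
    """
    w = np.zeros(N - 1)
    for i, (vi, di) in enumerate(zip(v, d)):
        w[i] += G_L * (V_L - vi)                    # leak term, present at every site
        if di == "n4":                              # open potassium channel
            w[i] += G_K * (V_K - vi)
        elif di == "m3h1":                          # open sodium channel
            w[i] += G_NA * (V_NA - vi)
        elif di in ("o1", "o2"):                    # open ChR2, o2 scaled by rho
            w[i] += G_CHR2 * (1.0 if di == "o1" else RHO) * (V_CHR2 - vi)
    return w / N                                    # the 1/N factor of (2)

v = np.zeros(N - 1)
d = ["n4"] * 33 + ["m3h1"] * 33 + ["c1"] * 33
print(reaction_weights(v, d)[:3])
```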

This model can be physically understood as follows. Between jumps of the stochastic component, the membrane potential evolves according to the Hodgkin-Huxley dynamics with fixed conductances given by the state of the piecewise constant stochastic component (equation (2)). The propagation of the potential along the membrane is governed by the Laplacian term $\frac{1}{C_m}\Delta$, and we obtain equation (1). The piecewise constant stochastic component gives the configuration of the ion channels, taking values in $D$, along the axon. A jump of this component represents a change of the state of one ion channel along the axon (basically the opening or the closing of a channel). The intensity of this jump process is given by the rate functions defining the dynamics of the ion channel variables in the deterministic Hodgkin-Huxley model (see Table 1 in Section 5.1 for the equations of the deterministic finite-dimensional Hodgkin-Huxley model).
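For reference, the rate functions of Table 1 and one explicit Euler step of the deterministic system (HH) translate directly into code. This is a minimal sketch, with the classical Hodgkin-Huxley constants used as illustrative stand-ins (the paper does not fix them at this point).

```python
import math

def alpha_n(v): return (0.1 - 0.01 * v) / (math.exp(1 - 0.1 * v) - 1)
def beta_n(v):  return 0.125 * math.exp(-v / 80)
def alpha_m(v): return (2.5 - 0.1 * v) / (math.exp(2.5 - 0.1 * v) - 1)
def beta_m(v):  return 4 * math.exp(-v / 18)
def alpha_h(v): return 0.07 * math.exp(-v / 20)
def beta_h(v):  return 1 / (math.exp(3 - 0.1 * v) + 1)

# Classical HH constants, used here as illustrative stand-ins.
C, GK_BAR, GNA_BAR, GL = 1.0, 36.0, 120.0, 0.3
EK, ENA, EL = -12.0, 115.0, 10.6

def hh_step(V, n, m, h, I_ext, dt):
    """One explicit Euler step of the deterministic system (HH)."""
    dV = (GK_BAR * n**4 * (EK - V) + GNA_BAR * m**3 * h * (ENA - V)
          + GL * (EL - V) + I_ext) / C
    dn = alpha_n(V) * (1 - n) - beta_n(V) * n
    dm = alpha_m(V) * (1 - m) - beta_m(V) * m
    dh = alpha_h(V) * (1 - h) - beta_h(V) * h
    return V + dt * dV, n + dt * dn, m + dt * dm, h + dt * dh

V, n, m, h = 0.0, 0.32, 0.05, 0.60
for _ in range(20000):                  # 20 ms with dt = 1e-3 ms
    V, n, m, h = hh_step(V, n, m, h, I_ext=10.0, dt=1e-3)
print(round(V, 3))
```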

Remark 1. Although the PDEs in Definition 1.1 do not depend on the control variable, the theory developed in this paper also addresses optimal control problems where the function $f_d$ itself is controlled (see Section 5.2).

The optimal control problem consists in mimicking an output signal that encodes a given biological behavior, while minimizing the intensity of the light applied to the neuron. For example, the target can be a time-constant signal, in which case we want to change the resting potential of the neuron to study its role in the general behavior of the cell. We can also think of pathological behaviors that could be corrected in this way. The minimization of the light intensity is crucial because the range of intensities experimentally reachable is quite small and is a constant concern for experimenters. These considerations lead us to formulate the following mathematical optimal control problem.

Suppose we are given a reference signal $V_{ref} \in V$. The control problem is then to find $\alpha \in \mathcal{A}$ that minimizes the following expected cost:

$$J_z(\alpha) = \mathbb{E}^\alpha_z\left[\int_0^T \Big(\kappa\,\|X^\alpha_t(\phi) - V_{ref}\|^2_V + \alpha(X^\alpha_t)\Big)\,\mathrm{d}t\right],\qquad z \in \Upsilon, \qquad (5)$$

where $\mathcal{A}$ is the space of control strategies, $\Upsilon$ an auxiliary state space that comprises $V \times D_N$, $X^\alpha$ the controlled PDMP and $X^\alpha(\phi)$ its continuous component.

    We will prove the following result.

Theorem 1.2. Under the assumptions of Section 2.1, there exists an optimal control strategy $\alpha^* \in \mathcal{A}$ such that for all $z \in \Upsilon$,

$$J_z(\alpha^*) = \inf_{\alpha \in \mathcal{A}} \mathbb{E}^\alpha_z\left[\int_0^T \Big(\kappa\,\|X^\alpha_t(\phi) - V_{ref}\|^2_V + \alpha(X^\alpha_t)\Big)\,\mathrm{d}t\right],$$

and the value function $z \mapsto \inf_{\alpha \in \mathcal{A}} J_z(\alpha)$ is continuous on $\Upsilon$.

Piecewise Deterministic Markov Processes constitute a large class of Markov processes suited to describe a tremendous variety of phenomena, such as the behavior of excitable cells ([4], [10], [36]), the evolution of stocks in financial markets ([11]) or the congestion of communication networks ([22]), among many others. PDMPs can basically describe any non-diffusive Markovian system. The general theory of PDMPs, and the tools to study them, were introduced by Davis ([18]) in 1984, at a time when the theory of diffusion was already amply developed. Since then, they have been widely investigated in terms of asymptotic behavior, control, limit theorems and numerical methods, among others (see for instance [9], [14], [15], [17] and references therein). PDMPs are jump processes coupled with a deterministic evolution between the jumps. They are fully described by three local characteristics: the deterministic flow $\phi$, the jump rate $\lambda$, and the transition measure $Q$. In [18], the temporal evolution of a PDMP between jumps (i.e., the flow $\phi$) is governed by an Ordinary Differential Equation (ODE). For that matter, this kind of PDMP will be referred to as finite-dimensional in the sequel.

Optimal control of such processes was introduced by Vermes ([40]) in finite dimension. In [40], the class of piecewise open-loop controls is introduced as the proper class to consider in order to obtain strongly Markovian processes. A Hamilton-Jacobi-Bellman equation is formulated and necessary and sufficient conditions are given for the existence of optimal controls. The standard broader class of so-called relaxed controls is considered, and it plays a crucial role in obtaining the existence of optimal controls when no convexity assumption is imposed. This class of controls has been studied, in the finite-dimensional case, by Gamkrelidze ([26]), Warga ([42] and [41]) and Young ([45]). Relaxed controls provide a compact class that is adequate for studying optimization problems. Still in finite dimension, many control problems have been formulated and studied, such as optimal control ([25]), optimal stopping ([16]) or controllability ([28]). In infinite dimension, relaxed controls were introduced by Ahmed ([1], [2], [3]). They were also studied by Papageorgiou in [37], where the author shows the strong continuity of relaxed trajectories with respect to the relaxed control. This continuity result will be of great interest in this paper.

    A formal infinite-dimensional PDMP was defined in [10] for the first time, the set of ODEs being replaced by a special set of Partial Differential Equations (PDE). The extended generator and its domain are provided and the model is used to define a stochastic spatial Hodgkin-Huxley model of neuron dynamics. The optimal control problem we have in mind here regards those Hodgkin-Huxley type models. Seminal work on an uncontrolled infinite-dimensional Hodgkin-Huxley model was conducted in [4] where the trajectory of the infinite-dimensional stochastic system is shown to converge to the deterministic one, in probability. This type of model has then been studied in [39] in terms of limit theorems and in [27] in terms of averaging. The extension to infinite dimension heavily relies on the fact that semilinear parabolic equations can be interpreted as ODEs in Hilbert spaces.

    To give a sense to Definition 1.1 and to Theorem 1.2, we will define a controlled infinite-dimensional PDMP for which the control acts on the three local characteristics. We consider controlled semilinear parabolic PDEs, jump rates λ and transition measures Q depending on the control. This kind of PDE takes the form

$$\dot{x}(t) = Lx(t) + f(x(t), u(t)),$$

where $L$ is the infinitesimal generator of a strongly continuous semigroup and $f$ is some function (possibly nonlinear). The optimal control problem we address is the finite-time minimization of an unbounded expected cost functional along the trajectory, of the form

$$\min_u\ \mathbb{E}\int_0^T c(x(t), u(t))\,\mathrm{d}t,$$

where $x(\cdot)$ is the continuous component of the PDMP, $u(\cdot)$ the control and $T > 0$ the finite time horizon, the cost function $c(\cdot,\cdot)$ being potentially unbounded.

To address this optimal control problem, we use the fairly widespread approach that consists in studying the imbedded discrete-time Markov chain composed of the times and the locations of the jumps. Since the evolution between jumps is deterministic, there exists a one-to-one correspondence between the PDMP and a pure jump process, which enables us to define the imbedded Markov chain. The discrete-time Markov chain belongs to the class of Markov Decision Processes (MDPs). This kind of approach has been used in [25] and [12] (see also the book [31] for a self-contained presentation of MDPs). In these articles, the authors apply dynamic programming to the MDP derived from a PDMP to prove the existence of optimal relaxed strategies. Some sufficient conditions are also given to obtain non-relaxed, also called ordinary, optimal strategies. However, in both articles, the PDMP is finite-dimensional. To the best of our knowledge, the optimal control of infinite-dimensional PDMPs has not yet been treated, and this is one of our main objectives here, along with its motivation, derived from Optogenetics, to formulate and study infinite-dimensional controlled neuron models.

The paper is structured as follows. In Section 2 we adapt the definition of a standard infinite-dimensional PDMP given in [10] in order to address control problems for such processes. To obtain a strongly Markovian process, we enlarge the state space and we prove an extension to controlled PDMPs of [10, Theorem 4]. We also define in this section the MDP associated to our controlled PDMP, which we study later on. In Section 3 we use the results of [37] to define relaxed controlled PDMPs and relaxed MDPs in infinite dimension. Section 4 gathers the main results of the paper. We show that the optimal control problems of PDMPs and of MDPs are equivalent. We build up a general framework in which the MDP is contracting. The value function is then shown to be continuous, and existence of optimal relaxed control strategies is proved. We finally give in this section some convexity assumptions under which an ordinary optimal control strategy can be retrieved.

    The final Section 5 is devoted to showing that the previous theoretical results apply to the model of Optogenetics previously introduced. Several variants of the model are discussed, the scope of the theoretical results being much larger than the model of Definition 1.1.

In the present section we define the infinite-dimensional controlled PDMPs that we consider in this paper, in a way that enables us to formulate control problems in which the three characteristics of the PDMP depend on an additional variable that we call the control parameter. In particular, we introduce the enlarged process which enables us to address optimization problems in the subsequent sections.

Let $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\ge 0}, \mathbb{P})$ be a filtered probability space satisfying the usual conditions. We consider a Gelfand triple $(V \subset H \subset V^*)$ such that $H$ is a separable Hilbert space and $V$ a separable, reflexive Banach space continuously and densely embedded in $H$. The pivot space $H$ is identified with its dual $H^*$, and $V^*$ is the topological dual of $V$. $H$ is then continuously and densely embedded in $V^*$. We will denote by $\|\cdot\|_V$, $\|\cdot\|_H$ and $\|\cdot\|_{V^*}$ the norms on $V$, $H$ and $V^*$, by $(\cdot,\cdot)$ the inner product in $H$ and by $\langle\cdot,\cdot\rangle$ the duality pairing of $(V^*, V)$. Note that for $v \in V$ and $h \in H$, $\langle h, v\rangle = (h, v)$.

Let $D$ be a finite set, the state space of the discrete variable, and $Z$ a compact Polish space, the control space. Let $T > 0$ be the finite time horizon. Intuitively, a controlled PDMP $(v_t, d_t)_{t\in[0,T]}$ should be constructed on $H \times D$ from the space of ordinary control rules defined as

$$A := \{a: (0,T) \to U\ \text{measurable}\},$$

where $U$, the action space, is a closed subset of $Z$. Elements of $A$ are defined up to a set in $[0,T]$ of Lebesgue measure 0. The control rules introduced above are called ordinary, in contrast with the relaxed ones that we will introduce and use in order to prove existence of optimal strategies. When endowed with the coarsest $\sigma$-algebra such that

$$a\ \mapsto\ \int_0^T e^{-t}\,w(t, a(t))\,\mathrm{d}t$$

is measurable for all bounded and measurable functions $w: \mathbb{R}_+ \times U \to \mathbb{R}$, the set of control rules $A$ becomes a Borel space (see [46, Lemma 1]). This will be crucial for the discrete-time control problem that we consider later. Conditionally on the continuous component $v_t$ and the control $a(t)$, the discrete component $d_t$ is a continuous-time Markov chain given by a jump rate function $\lambda: H \times D \times U \to \mathbb{R}_+$ and a transition measure $Q: H \times D \times U \to \mathcal{P}(D)$.

Between two consecutive jumps of the discrete component, the continuous component $v_t$ solves a controlled semilinear parabolic PDE

$$\begin{cases}\dot{v}_t = Lv_t + f_d(v_t, a(t)),\\ v_0 = v \in V.\end{cases} \qquad (6)$$

For $(v, d, a) \in H \times D \times A$ we will denote by $\phi^a_\cdot(v, d)$ the flow of (6). Let $T_n$, $n \in \mathbb{N}$, be the jump times of the PDMP. Their distribution is given by

$$\mathbb{P}\big[T_{n+1} - T_n > \Delta t\,\big|\,T_n, v_{T_n}, d_{T_n}\big] = \exp\left(-\int_{T_n}^{T_n+\Delta t} \lambda_{d_{T_n}}\big(\phi^a_s(v_{T_n}, d_{T_n}), a(s)\big)\,\mathrm{d}s\right) \qquad (7)$$

for $t \in [T_n; T_{n+1})$. When a jump occurs, the distribution of the post-jump state is given by

$$\mathbb{P}\big[d_t = d\,\big|\,d_t \neq d_{t^-}\big] = Q\big(\{d\}\,\big|\,v_t, d_{t^-}, a(t)\big). \qquad (8)$$

The triple $(\lambda, Q, \phi)$ fully describes the process and is referred to as the local characteristics of the PDMP.
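For intuition, one inter-jump step of such a process can be sampled by integrating the flow with the discrete component frozen, accumulating the hazard $\int\lambda$ until it exceeds an exponential threshold (which inverts (7)), and then drawing the new discrete state from $Q$ as in (8). The sketch below is our illustration only: the flow, rate and transition used are toy scalar stand-ins, not the model's characteristics.

```python
import random

rng = random.Random(1)

# Toy stand-ins for the local characteristics (lambda, Q, phi).
def flow_step(v, d, u, dt):          # Euler step of the deterministic flow
    return v + dt * (-v + d + u)     # scalar dynamics standing in for the PDE (6)

def rate(v, d, u):                   # jump rate lambda_d(v, u)
    return 1.0 + v * v

def draw_post_jump(v, d, u):         # draw from Q(.|v, d, u); here D = {-1, +1}
    return -d

def next_jump(v, d, control, t=0.0, dt=1e-3):
    """Sample the next jump: returns (jump time, pre-jump v, post-jump d)."""
    threshold = rng.expovariate(1.0)          # Exp(1) variable, inverts (7)
    hazard = 0.0
    while hazard < threshold:
        u = control(t)
        v = flow_step(v, d, u, dt)
        hazard += rate(v, d, u) * dt          # accumulate int lambda ds
        t += dt
    return t, v, draw_post_jump(v, d, control(t))

print(next_jump(v=0.0, d=1, control=lambda t: 0.5))
```

Iterating this step, with the flow restarted from the post-jump location, produces a trajectory of the PDMP.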

    We will make the following assumptions on the local characteristics of the PDMP.

(H($\lambda$)): For every $d \in D$, $\lambda_d: H \times Z \to \mathbb{R}_+$ is a function such that:

1. There exist $M_\lambda, \delta > 0$ such that

$$\delta \le \lambda_d(x, z) \le M_\lambda,\qquad \forall (x, z) \in H \times Z.$$

2. $z \mapsto \lambda_d(x, z)$ is continuous on $Z$, for all $x \in H$.

3. $x \mapsto \lambda_d(x, z)$ is locally Lipschitz continuous, uniformly in $Z$; that is, for every compact set $K \subset H$, there exists $l_\lambda(K) > 0$ such that

$$|\lambda_d(x, z) - \lambda_d(y, z)| \le l_\lambda(K)\,\|x - y\|_H\qquad \forall (x, y, z) \in K^2 \times Z.$$

(H(Q)): The function $Q: H \times D \times Z \times \mathcal{B}(D) \to [0, 1]$ is a transition probability such that $(x, z) \mapsto Q(\{p\}\,|\,x, d, z)$ is continuous for all $(d, p) \in D^2$ (weak continuity) and $Q(\{d\}\,|\,x, d, z) = 0$ for all $(x, z) \in H \times Z$.

(H(L)): $L: V \to V^*$ is such that:

1. $L$ is linear and monotone;

2. $\|Lx\|_{V^*} \le c + c_1\|x\|_V$, with $c > 0$ and $c_1 \ge 0$;

3. $\langle Lx, x\rangle \le -c_2\|x\|^2_V$, $c_2 > 0$;

4. $L$ generates a strongly continuous semigroup $(S(t))_{t\ge 0}$ on $H$ such that $S(t): H \to H$ is compact for every $t > 0$. We will denote by $M_S$ a bound, for the operator norm, of the semigroup on $[0, T]$.

(H(f)): For every $d \in D$, $f_d: H \times Z \to H$ is a function such that:

1. $x \mapsto f_d(x, z)$ is Lipschitz continuous, uniformly in $Z$; that is,

$$\|f_d(x, z) - f_d(y, z)\|_H \le l_f\,\|x - y\|_H\qquad \forall (x, y, z) \in H^2 \times Z,\ l_f > 0.$$

2. $(x, z) \mapsto f_d(x, z)$ is continuous from $H \times Z$ to $H_w$, where $H_w$ denotes the space $H$ endowed with the topology of weak convergence.

Let us make some comments on the assumptions above. Assumption (H($\lambda$))1. will ensure that the process is regular, i.e., that the number of jumps of $d_t$ is almost surely finite in every finite time interval. Assumption (H($\lambda$))2. will enable us to construct relaxed trajectories. Assumptions (H($\lambda$))3. and (H(Q)) will be necessary to obtain the existence of optimal relaxed controls for the associated MDP. Assumptions (H(L))1.-3. and (H(f)) will ensure the existence and uniqueness of the solution of (6). Note that all the results of this paper are unchanged if assumption (H(f))1. is replaced by

(H(f))': For every $d \in D$, $f_d: H \times Z \to H$ is a function such that:

1. $x \mapsto f_d(x, z)$ is continuous and monotone, for all $z \in Z$.

2. $\|f_d(x, z)\|_H \le b_1 + b_2\|x\|_H$, with $b_1 \ge 0$ and $b_2 > 0$, for all $z \in Z$.

In particular, assumption (H(f)) implies (H(f))'2., and we will use the constants $b_1$ and $b_2$ further in this paper. Note that they can be chosen uniformly in $D$ since it is a finite set. To see this, note that $z \mapsto f_d(0, z)$ is weakly continuous on the compact space $Z$ and thus weakly bounded. It is then strongly bounded by the Uniform Boundedness Principle.

Finally, assumptions (H(f))2. and (H(L))4. will respectively ensure the existence of relaxed solutions of (6) and the strong continuity of these solutions with respect to the relaxed control. For the latter, the compactness of $Z$ is also required. The following theorem is a reminder that the assumption on the semigroup does not make the problem trivial, since it implies that $L$ is unbounded when $H$ is infinite-dimensional.

    Theorem 2.1. (see [24,Theorem 4.29])

1. For a strongly continuous semigroup $(T(t))_{t\ge 0}$, the following properties are equivalent:

(a) $(T(t))_{t\ge 0}$ is immediately compact;

(b) $(T(t))_{t\ge 0}$ is immediately norm continuous and its generator has compact resolvent.

2. Let $X$ be a Banach space. A bounded operator $A \in \mathcal{L}(X)$ has compact resolvent if and only if $X$ is finite-dimensional.

We define $U_{ad}((0,T), U) := \{a \in L^1((0,T), Z)\ |\ a(t) \in U\ \text{a.e.}\} = A$ ($Z$ is compact and so is $U$), the space of admissible rules. Because of (H(L)) and (H(f)), for all $a \in U_{ad}((0,T), U)$, (6) has a unique solution belonging to $L^2((0,T), V) \cap H^1((0,T), V^*)$ and, moreover, the solution belongs to $C([0,T], H)$ (see [37] for the construction of such a solution). We will make extensive use of the mild formulation of the solution of (6), given by

$$\phi^a_t(v, d) = S(t)v + \int_0^t S(t-s)\,f_d\big(\phi^a_s(v, d), a(s)\big)\,\mathrm{d}s, \qquad (9)$$

with $\phi^a_0(v, d) = v$. One of the keys in the construction of a controlled PDMP, in finite or infinite dimension, is to ensure that $\phi^a$ enjoys the flow property $\phi^a_{t+s}(v, d) = \phi^a_s(\phi^a_t(v, d), d)$ for all $(v, d, a) \in H \times D \times U_{ad}((0,T), U)$ and $(t, s) \in \mathbb{R}^2_+$. It is the flow property that guarantees the Markov property for the process. Under the formulation (9), it is easy to see that the solution $\phi^a$ cannot feature the flow property for any reasonable set of admissible rules. In particular, the jump process $(d_t,\ t \ge 0)$ given by (7) and (8) is not Markovian. Moreover, in control problems, and especially in Markovian control problems, we generally look for feedback controls, which depend only on the current state variable, so that at any time the controller needs only to observe the current state to be able to take an action. Feedback controls would ensure the flow property. However, they impose a huge restriction on the class of admissible controls. Indeed, feedback controls would be functions $u: H \times D \to U$, and for the solution of (6) to be uniquely determined, the function $x \mapsto f_d(x, u(x, d))$ needs to be Lipschitz continuous. This would automatically exclude discontinuous controls and therefore would not be adapted to control problems. To avoid this issue, Vermes introduced piecewise open-loop controls (see [40]): after a jump of the discrete component, the controller observes the location of the jump, say $(v, d) \in H \times D$, and chooses a control rule $a \in U_{ad}((0,T), U)$ to be applied until the next jump. The time elapsed since the last jump must then be added to the state variable in order to see a control rule as a feedback control. While Vermes [40] and Davis [19] only add the last post-jump location, we also want to keep track of the time of the last jump, in order to define proper controls for the Markov Decision Processes that we introduce in the next section and to eventually obtain optimal feedback policies. According to these remarks, we now enlarge the state space and define control strategies for the enlarged process. We introduce first several sets that will be useful later on.

Definition 2.2. We define the sets $D(T, 2) := \{(t, s) \in [0,T]^2\ |\ t + s \le T\}$, $\Xi := H \times D \times D(T, 2) \times H$ and $\Upsilon := H \times D \times [0, T]$.

    Definition 2.3. Control strategies. Enlarged controlled PDMP. Survival function.

a) The set $\mathcal{A}$ of admissible control strategies is defined by

$$\mathcal{A} := \{\alpha: \Upsilon \to U_{ad}([0,T]; U)\ \text{measurable}\}.$$

b) On $\Xi$ we define the enlarged controlled PDMP $(X^\alpha_t)_{t\ge 0} = (v_t, d_t, \tau_t, h_t, \nu_t)_{t\ge 0}$ with strategy $\alpha \in \mathcal{A}$ as follows:

• $(v_t, d_t)_{t\ge 0}$ is the process defined by (6), (7) and (8);

• $\tau_t$ is the time elapsed since the last jump at time $t$;

• $h_t$ is the time of the last jump before time $t$;

• $\nu_t$ is the post-jump location right after the jump at time $h_t$.

c) Let $z := (v, d, h) \in \Upsilon$. For $a \in U_{ad}([0,T]; U)$ we will denote by $\chi^a_\cdot(z)$ the solution of

$$\frac{\mathrm{d}}{\mathrm{d}t}\chi^a_t(z) = -\chi^a_t(z)\,\lambda_d\big(\phi^a_t(z), a(t)\big),\qquad \chi^a_0(z) = 1,$$

and its immediate extension $\chi^\alpha_\cdot(z)$ to $\mathcal{A}$, such that the process $(X^\alpha_t)_{t\ge 0}$ starting at $(v, d, 0, h, v) \in \Xi$ admits $\chi^\alpha_\cdot$ as survival function:

$$\mathbb{P}[T_1 > t] = \chi^\alpha_t(z).$$

The notation $\phi^a_t(z)$ means here

$$\phi^a_t(z) := S(t)v + \int_0^t S(t-s)\,f_d\big(\phi^a_s(z), a(s)\big)\,\mathrm{d}s,$$

and $\phi^\alpha_t(z)$ is to be understood as $\phi^{\alpha(z)}_t(z)$.
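To fix ideas on the mild formulation used above and in (9), here is a small numerical sketch (our illustration, not the paper's setting): on $I = [0,1]$ with Dirichlet conditions, the heat semigroup acts diagonally on the sine basis, so the mild solution can be approximated mode by mode with an exponential Euler scheme; the reaction term below is a toy Lipschitz stand-in for $f_d$.

```python
import numpy as np

K, T, M = 64, 0.1, 1000                 # sine modes, horizon, time steps
dt = T / M
k = np.arange(1, K + 1)
lam = (np.pi * k) ** 2                  # -Laplacian eigenvalues on H_0^1([0,1])

def f_coeffs(v, u):                     # toy Lipschitz stand-in for f_d, in coefficients
    return -0.5 * v + u * np.exp(-k)

def mild_solution(v0, control):
    """Exponential Euler for (9): v_{n+1} = S(dt) (v_n + dt * f(v_n, u_n))."""
    v = v0.copy()
    for n in range(M):
        u = control(n * dt)
        v = np.exp(-lam * dt) * (v + dt * f_coeffs(v, u))   # S(dt) is diagonal
    return v

v0 = 1.0 / k ** 2                       # sine coefficients of the initial condition
print(np.round(mild_solution(v0, control=lambda t: 1.0)[:4], 6))
```

Restarting this integration from the post-jump state after each jump is exactly what the piecewise open-loop construction above prescribes.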

Remark 2. ⅰ) Thanks to [46, Lemma 3], the set of admissible control strategies is in bijection with

$$\{\alpha: \Upsilon \times [0,T] \to U\ \text{measurable}\},$$

and thus can be seen as a set of measurable feedback controls acting on $\Xi$ (but not depending on the first component) and with values in $U$. The formulation of Definition 2.3 is adequate to address the associated discrete-time control problem in Section 2.3.

ⅱ) In view of Definition 2.3, given $\alpha \in \mathcal{A}$, the deterministic dynamics of the process $(X^\alpha_t)_{t\ge 0} = (v_t, d_t, \tau_t, h_t, \nu_t)_{t\ge 0}$ between two consecutive jumps obeys the initial value problem

$$\begin{cases}\dot{v}_t = Lv_t + f_d\big(v_t, \alpha(v, d, s)(\tau_t)\big), & v_s = v \in H,\\ \dot{d}_t = 0, & d_s = d \in D,\\ \dot{\tau}_t = 1, & \tau_s = 0,\\ \dot{h}_t = 0, & h_s = s \in [0,T],\\ \dot{\nu}_t = 0, & \nu_s = v_s = v, \end{cases} \qquad (10)$$

    with s the last time of jump. The jump rate function and transition measure of the enlarged PDMP are straightforwardly given by the ones of the original process and will be denoted the same (see Appendix A for their expression).

ⅲ) While the relation $t = h_t + \tau_t$ indicates that the variable $h_t$ might be redundant, recall that we keep track of it on purpose. Indeed, the optimal control will appear as a function of the jump times, so that keeping them as a variable makes the control a feedback control.

ⅳ) Because of the special definition of the enlarged process, for every control strategy in $\mathcal{A}$, the initial point of the process $(X^\alpha_t)_{t\ge 0}$ cannot be an arbitrary point of the enlarged state space $\Xi$. More precisely, we introduce in Definition 2.4 below the space of coherent initial points.

    Definition 2.4. Space of coherent initial points.

Take $\alpha \in \mathcal{A}$ and $x := (v_0, d_0, 0, h_0, v_0) \in \Xi$, and extend the notation $\phi^\alpha_t(x)$ of Definition 2.3 to $\Xi$ by

$$\phi^\alpha_t(x) := S(t)v_0 + \int_0^t S(t-s)\,f_{d_0}\big(\phi^\alpha_s(x), \alpha(v_0, d_0, h_0)(\tau_s)\big)\,\mathrm{d}s.$$

The set $\Xi^\alpha \subset \Xi$ of coherent initial points is defined as follows:

$$\Xi^\alpha := \big\{(v, d, \tau, h, \nu) \in \Xi\ \big|\ v = \phi^\alpha_\tau(\nu, d, 0, h, \nu)\big\}. \qquad (11)$$

Then we have, for all $x := (v_0, d_0, \tau_0, h_0, \nu_0) \in \Xi^\alpha$,

$$\phi^\alpha_t(x) := S(t)v_0 + \int_0^t S(t-s)\,f_{d_0}\big(\phi^\alpha_s(x), \alpha(\nu_0, d_0, h_0)(\tau_s)\big)\,\mathrm{d}s. \qquad (12)$$

Note that $(X^\alpha_t)$ can be constructed like any PDMP by a classical iteration, which we recall in Appendix A for the sake of completeness.

Proposition 1. The flow property.

Take $\alpha \in \mathcal{A}$ and $x := (v_0, d_0, \tau_0, h_0, \nu_0) \in \Xi^\alpha$. Then $\phi^\alpha_{t+s}(x) = \phi^\alpha_t\big(\phi^\alpha_s(x), d_s, \tau_s, h_s, \nu_s\big)$ for all $(t, s) \in \mathbb{R}^2_+$.

Based on equation (12) and the definition of $\Xi^\alpha$, the proof of Proposition 1 is straightforward.

Notation. Let $\alpha \in \mathcal{A}$. For $z \in \Upsilon$, we will use the notation $\alpha_s(z) := \alpha(z)(s)$. Furthermore, we will sometimes write $Q^\alpha(\cdot\,|\,v, d)$ instead of $Q\big(\cdot\,|\,v, d, \alpha_\tau(\nu, d, h)\big)$, for all $(v, d, \tau, h, \nu) \in \Xi^\alpha$ and $\alpha \in \mathcal{A}$.

Up to now, thanks to Definition 2.3, we can formally associate the PDMP $(X^\alpha_t)_{t\in\mathbb{R}_+}$ to a given strategy $\alpha \in \mathcal{A}$. However, we need to show that there exists a filtered probability space satisfying the usual conditions on which, for every control strategy $\alpha \in \mathcal{A}$, the controlled PDMP $(X^\alpha_t)_{t\ge 0}$ is a homogeneous strong Markov process. This is what we do in the next theorem, which provides an extension of [10, Theorem 4] to controlled infinite-dimensional PDMPs, together with some estimates on the continuous component of the PDMP.

Theorem 2.5. Assume that assumptions (H($\lambda$)), (H(Q)), (H(L)) and (H(f)) (or (H(f))') are satisfied.

a) There exists a filtered probability space satisfying the usual conditions such that, for every control strategy $\alpha \in \mathcal{A}$, the process $(X^\alpha_t)_{t\ge 0}$ introduced in Definition 2.3 is a homogeneous strong Markov process on $\Xi$, with extended generator $\mathcal{G}^\alpha$ given in Appendix B.

b) For every compact set $K \subset H$, there exists a deterministic constant $c_K > 0$ such that, for every control strategy $\alpha \in \mathcal{A}$ and initial point $x := (v, d, \tau, h, \nu) \in \Xi^\alpha$ with $v \in K$, the first component $v^\alpha_t$ of the controlled PDMP $(X^\alpha_t)_{t\ge 0}$ starting at $x$ satisfies

$$\sup_{t\in[0,T]}\|v^\alpha_t\|_H \le c_K.$$

    The proof of Theorem 2.5 is given in Appendix B. In the next section, we introduce the MDP that will allow us to prove the existence of optimal strategies.

Because of the particular definition of the state space $\Xi$, the state of the PDMP just after a jump is in fact fully determined by a point in $\Upsilon$. In Appendix B we recall the one-to-one correspondence between the PDMP on $\Xi$ and the imbedded pure jump process $(Z_n)_{n\in\mathbb{N}}$ with values in $\Upsilon$. This pure jump process makes it possible to define a Markov Decision Process $(Z'_n)_{n\in\mathbb{N}}$ with values in $\Upsilon \cup \{\Delta\}$, where $\Delta$ is a cemetery state added to $\Upsilon$ to define a proper MDP. In order to lighten the notations, the dependence on a control strategy $\alpha \in \mathcal{A}$ of both jump processes is implicit. The stochastic kernel $Q'$ of the MDP satisfies

$$Q'(B \times C \times E\,|\,z, a) = \int_0^{T-h}\rho_t\,\mathrm{d}t, \qquad (13)$$

for any $z := (v, d, h) \in \Upsilon$, Borel sets $B \subset H$, $C \subset D$, $E \subset [0,T]$, and $a \in U_{ad}([0,T], U)$, where

$$\rho_t := \lambda_d\big(\phi^a_t(z), a(t)\big)\,\chi^a_t(z)\,\mathbf{1}_E(h+t)\,\mathbf{1}_B\big(\phi^a_t(z)\big)\,Q\big(C\,|\,\phi^a_t(z), d, a(t)\big),$$

with $\phi^a_t(z)$ given by (9), and $Q'(\{\Delta\}\,|\,z, a) = \chi^a_{T-h}(z)$, $Q'(\{\Delta\}\,|\,\Delta, a) = 1$. The conditional jumps of the MDP $(Z'_n)_{n\in\mathbb{N}}$ are then given by the kernel $Q'(\cdot\,|\,z, \alpha(z))$ for $(z, \alpha) \in \Upsilon \times \mathcal{A}$. Note that $Z'_n = Z_n$ as long as $T_n \le T$, where $T_n$ is the last component of $Z_n$. Since we work with Borel state and control spaces, we will be able to apply the techniques of [6] for discrete-time stochastic control problems without being concerned by measurability matters. See [6, Section 1.2] for an illuminating discussion of these measurability questions.

    Relaxed controls are constructed by enlarging the set of ordinary ones, in order to convexify the original system, and in such a way that it is possible to approximate relaxed strategies by ordinary ones. The difficulty in doing so is twofold. First, the set of relaxed trajectories should not be much larger than the original one. Second, the topology considered on the set of relaxed controls should make it a compact set and, at the same time, make the flow of the associated PDE continuous. Compactness and continuity are two notions in conflict so being able to achieve such a construction is crucial. Intuitively a relaxed control strategy on the action space U corresponds to randomizing the control action: at time t, instead of taking a predetermined action, the controller will take an action with some probability, making the control a transition probability. This has to be formalized mathematically.

Notation and reminder. $Z$ is a compact Polish space and $C(Z)$ denotes the set of all real-valued continuous (hence bounded) functions on $Z$, endowed with the supremum norm. Because $Z$ is compact, by the Riesz Representation Theorem the dual space $[C(Z)]^*$ of $C(Z)$ is identified with the space $\mathcal{M}(Z)$ of Radon measures on $\mathcal{B}(Z)$, the Borel $\sigma$-field of $Z$. We will denote by $\mathcal{M}^1_+(Z)$ the space of probability measures on $Z$. The action space $U$ is a closed subset of $Z$. We will use the notations $L^1(C(Z)) := L^1((0,T), C(Z))$ and $L^\infty(\mathcal{M}(Z)) := L^\infty((0,T), \mathcal{M}(Z))$.

Let $\mathcal{B}([0,T])$ denote the Borel $\sigma$-field of $[0,T]$ and Leb the Lebesgue measure. A transition probability from $([0,T], \mathcal{B}([0,T]), \text{Leb})$ into $(Z, \mathcal{B}(Z))$ is a function $\gamma: [0,T] \times \mathcal{B}(Z) \to [0,1]$ such that

$$\begin{cases} t \mapsto \gamma(t, C)\ \text{is measurable for all}\ C \in \mathcal{B}(Z),\\ \gamma(t, \cdot) \in \mathcal{M}^1_+(Z)\ \text{for all}\ t \in [0,T]. \end{cases}$$

We will denote by $\mathcal{R}([0,T], Z)$ the set of all transition probabilities from $([0,T], \mathcal{B}([0,T]), \text{Leb})$ into $(Z, \mathcal{B}(Z))$.
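As a toy illustration of this object, the sketch below represents a relaxed control on $U = [-1, 1]$ by a time-dependent probability vector on a finite grid of actions and evaluates the pairing $\gamma \mapsto \int_0^T\int_Z f(t,z)\,\gamma(t)(\mathrm{d}z)\,\mathrm{d}t$ against a Carathéodory integrand; the grid, weights and integrand are all our own choices.

```python
import numpy as np

Z_GRID = np.linspace(-1.0, 1.0, 21)        # discretized action space U = [-1, 1]

def gamma(t):
    """A relaxed control: at each time t, a probability vector on the action grid."""
    w = np.exp(-((Z_GRID - np.cos(2 * np.pi * t)) ** 2) / 0.1)
    return w / w.sum()                     # mass concentrates around cos(2*pi*t)

def pairing(f, T=1.0, n_time=500):
    """Approximate int_0^T int_Z f(t, z) gamma(t)(dz) dt with a midpoint rule."""
    ts = (np.arange(n_time) + 0.5) * (T / n_time)
    return sum(float(gamma(t) @ f(t, Z_GRID)) for t in ts) * (T / n_time)

# A Caratheodory integrand: measurable in t, continuous in z, bounded.
f = lambda t, z: (1 + t) * z ** 2
print(round(pairing(f), 4))
```

An ordinary control rule $a(\cdot)$ corresponds to the special case where $\gamma(t)$ is the Dirac mass at $a(t)$.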

Recall that we consider the PDE (6):

$$\dot{v}_t = Lv_t + f_d(v_t, a(t)),\qquad v_0 = v,\quad v \in V,\quad a \in U_{ad}([0,T], U). \qquad (14)$$

The relaxed PDE is then of the form

$$\dot{v}_t = Lv_t + \int_Z f_d(v_t, u)\,\gamma(t)(\mathrm{d}u),\qquad v_0 = v,\quad v \in V,\quad \gamma \in \mathcal{R}([0,T], U), \qquad (15)$$

where $\mathcal{R}([0,T], U) := \{\gamma \in \mathcal{R}([0,T], Z)\ |\ \gamma(t)(U) = 1\ \text{a.e. in}\ [0,T]\}$ is the set of transition probabilities from $([0,T], \mathcal{B}([0,T]), \text{Leb})$ into $(Z, \mathcal{B}(Z))$ with support in $U$. The integral part of (15) is to be understood in the sense of Bochner-Lebesgue, as we show now. The topology we consider on $\mathcal{R}([0,T], U)$ follows from [5] and, because $Z$ is a compact metric space, it coincides with the usual topology of relaxed control theory of [43]. It is the coarsest topology that makes continuous all mappings

$$\gamma\ \mapsto\ \int_0^T\int_Z f(t, z)\,\gamma(t)(\mathrm{d}z)\,\mathrm{d}t\ \in \mathbb{R},$$

for every Carathéodory integrand $f: [0,T] \times Z \to \mathbb{R}$, a Carathéodory integrand being such that

$$\begin{cases} t \mapsto f(t, z)\ \text{is measurable for all}\ z \in Z,\\ z \mapsto f(t, z)\ \text{is continuous a.e.},\\ |f(t, z)| \le b(t)\ \text{a.e., with}\ b \in L^1((0,T), \mathbb{R}). \end{cases}$$

This topology is called the weak topology on $\mathcal{R}([0,T], Z)$, but we show now that it is in fact metrizable. Indeed, Carathéodory integrands $f$ on $[0,T] \times Z$ can be identified with the Lebesgue-Bochner space $L^1(C(Z))$ via the map $t \mapsto f(t, \cdot) \in L^1(C(Z))$. Now, since $\mathcal{M}(Z)$ is a separable dual space ($Z$ is compact, and $\mathcal{M}(Z)$ is the dual of $C(Z)$), it enjoys the Radon-Nikodym property. Using [21, Theorem 1 p. 98], it follows that $[L^1(C(Z))]^* = L^\infty(\mathcal{M}(Z))$. Hence, the weak topology on $\mathcal{R}([0,T], Z)$ can be identified with the $w^*$-topology of the pair $(L^\infty(\mathcal{M}(Z)), L^1(C(Z)))$, the latter being metrizable since $L^1(C(Z))$ is a separable space (see [23, Theorem 1 p. 426]). This crucial property allows us to work with sequences when dealing with continuity questions involving relaxed controls.

Finally, by Alaoglu's Theorem, $\mathcal{R}([0,T], U)$ is $w^*$-compact in $L^\infty(\mathcal{M}(Z))$, and the set of original admissible controls $U_{ad}([0,T], U)$ is dense in $\mathcal{R}([0,T], U)$ (see [5, Corollary 3 p. 469]).

By setting $\bar{f}_d(v, \gamma) := \int_Z f_d(v, u)\,\gamma(\mathrm{d}u)$, it is straightforward to see that (15) admits a unique solution, for the same reasons for which (14) does. The following theorem gathers the results of [37, Theorems 3.2 and 4.1] and will be of paramount importance in the sequel.

Theorem 3.1. If assumptions (H(L)) and (H(f)) (or (H(f))') hold, then:

a) The space of relaxed trajectories (i.e., solutions of (15)) is a convex, compact set of $C([0,T], H)$. It is the closure in $C([0,T], H)$ of the space of original trajectories (i.e., solutions of (14)).

b) The mapping that sends a relaxed control to the corresponding solution of (15) is continuous from $\mathcal{R}([0,T], U)$ into $C([0,T], H)$.

First of all, note that since the control acts on all three characteristics of the PDMP, convexity assumptions on the fields $f_d(v, U)$ would not necessarily ensure existence of optimal controls, as they do for partial differential equations. Such assumptions should also be imposed on the rate function and the transition measure of the PDMP. For this reason, relaxed controls are even more important for proving existence of optimal controls for PDMPs. Building on what has been done for the PDE above, we are now able to define relaxed PDMPs. The next definition is the relaxed analogue of Definition 2.3.

    Definition 3.2. Relaxed control strategies, relaxed local characteristics.

a) The set $\mathcal{A}^R$ of relaxed admissible control strategies for the PDMP is defined by

$$\mathcal{A}^R := \{\mu: \Upsilon \to \mathcal{R}([0,T]; U)\ \text{measurable}\}.$$

Given a relaxed control strategy $\mu \in \mathcal{A}^R$ and $z \in \Upsilon$, we will denote by $\mu_z := \mu(z) \in \mathcal{R}([0,T]; U)$ and by $\mu^z_t$ the corresponding probability measure on $(Z, \mathcal{B}(Z))$ at time $t$.

b) For $\gamma \in \mathcal{M}^1_+(Z)$, $(v, d) \in H \times D$ and $C \in \mathcal{B}(D)$, we extend the jump rate function and the transition measure as follows:

$$\begin{cases}\lambda_d(v, \gamma) := \displaystyle\int_Z \lambda_d(v, u)\,\gamma(\mathrm{d}u),\\[4pt] Q(C\,|\,v, d, \gamma) := \big(\lambda_d(v, \gamma)\big)^{-1}\displaystyle\int_Z \lambda_d(v, u)\,Q(C\,|\,v, d, u)\,\gamma(\mathrm{d}u), \end{cases} \qquad (16)$$

the expression for the enlarged process being straightforward. This allows us to give the relaxed survival function of the PDMP and the relaxed mild formulation of the solution of (15):

$$\begin{cases}\dfrac{\mathrm{d}}{\mathrm{d}t}\chi^\mu_t(z) = -\chi^\mu_t(z)\,\lambda_d\big(\phi^\mu_t(z), \mu^z_t\big),\qquad \chi^\mu_0(z) = 1,\\[4pt] \phi^\mu_t(z) = S(t)v + \displaystyle\int_0^t\int_Z S(t-s)\,f_d\big(\phi^\mu_s(z), u\big)\,\mu^z_s(\mathrm{d}u)\,\mathrm{d}s, \end{cases} \qquad (17)$$

for $\mu \in \mathcal{A}^R$ and $z := (v, d, h) \in \Upsilon$. For $\gamma \in \mathcal{R}([0,T], U)$, we will also use the following notation:

$$\begin{cases}\chi^\gamma_t(z) = \exp\Big(-\displaystyle\int_0^t \lambda_d\big(\phi^\gamma_s(z), \gamma(s)\big)\,\mathrm{d}s\Big),\\[4pt] \phi^\gamma_t(z) = S(t)v + \displaystyle\int_0^t\int_Z S(t-s)\,f_d\big(\phi^\gamma_s(z), u\big)\,\gamma(s)(\mathrm{d}u)\,\mathrm{d}s. \end{cases}$$

The following proposition is a direct consequence of Theorem 2.5 b).

Proposition 2. For every compact set $K \subset H$, there exists a deterministic constant $c_K > 0$ such that, for every control strategy $\mu \in \mathcal{A}^R$ and initial point $x := (v, d, \tau, h, \nu) \in \Xi^\mu$ with $v \in K$, the first component $v^\mu_t$ of the controlled PDMP $(X^\mu_t)_{t\ge 0}$ starting at $x$ satisfies

$$\sup_{t\in[0,T]}\|v^\mu_t\|_H \le c_K.$$

    The relaxed transition measure is given in the next section through the relaxed stochastic kernel of the MDP associated to our relaxed PDMP.

Let $z := (v, d, h) \in \Upsilon$ and $\gamma \in \mathcal{R}([0,T], U)$. The relaxed stochastic kernel of the relaxed MDP satisfies

$$Q'(B \times C \times E\,|\,z, \gamma) = \int_0^{T-h}\tilde{\rho}_t\,\mathrm{d}t, \qquad (18)$$

for Borel sets $B \subset H$, $C \subset D$, $E \subset [0,T]$, where

$$\begin{aligned}\tilde{\rho}_t &:= \chi^\gamma_t(z)\,\mathbf{1}_E(h+t)\,\mathbf{1}_B\big(\phi^\gamma_t(z)\big)\int_Z \lambda_d\big(\phi^\gamma_t(z), u\big)\,Q\big(C\,|\,\phi^\gamma_t(z), d, u\big)\,\gamma(t)(\mathrm{d}u)\\ &= \chi^\gamma_t(z)\,\mathbf{1}_E(h+t)\,\mathbf{1}_B\big(\phi^\gamma_t(z)\big)\,\lambda_d\big(\phi^\gamma_t(z), \gamma(t)\big)\,Q\big(C\,|\,\phi^\gamma_t(z), d, \gamma(t)\big),\end{aligned}$$

and $Q'(\{\Delta\}\,|\,z, \gamma) = \chi^\gamma_{T-h}(z)$, $Q'(\{\Delta\}\,|\,\Delta, \gamma) = 1$, with, as before, the conditional jumps of the MDP $(Z'_n)_{n\in\mathbb{N}}$ given by the kernel $Q'(\cdot\,|\,z, \mu(z))$ for $(z, \mu) \in \Upsilon \times \mathcal{A}^R$.

    Here, we are interested in finding optimal controls for optimization problems involving infinite-dimensional PDMPs. For instance, we may want to track a targeted "signal" (as a solution of a given PDE, see Section 5). To do so, we are going to study the optimal control problem of the imbedded MDP defined in Section 2.3. This strategy has been for example used in [12] in the particular setting of a decoupled finite-dimensional PDMP, the rate function being constant.

Thanks to the preceding sections, we can consider ordinary or relaxed costs for the PDMP $X^\alpha$ or the MDP, and their corresponding value functions. For $z := (v, d, h) \in \Upsilon$ and $\alpha \in \mathcal{A}$ we denote by $\mathbb{E}^\alpha_z$ the conditional expectation given that $X^\alpha_h = (v, d, 0, h, v)$, and by $X^\alpha_s(\phi)$ the first component of $X^\alpha_s$. Furthermore, if we write $X^\alpha_s := (v_s, d_s, \tau_s, h_s, \nu_s)$, then the shortened notation $\alpha(X^\alpha_s)$ will refer to $\alpha_{\tau_s}(\nu_s, d_s, h_s)$. These notations are straightforwardly extended to $\mathcal{A}^R$. We introduce a running cost $c: H \times Z \to \mathbb{R}_+$ and a terminal cost $g: H \to \mathbb{R}_+$ satisfying:

(H(c)): $(v, z) \mapsto c(v, z)$ and $v \mapsto g(v)$ are nonnegative quadratic functions; that is, there exists $(a, b, c, d, e, f, h, i, j) \in \mathbb{R}^9$ such that, for $(v, u) \in H \times Z$,

$$c(v, u) = a\|v\|^2_H + b\,\bar{d}(0, u)^2 + c\,\|v\|_H\,\bar{d}(0, u) + d\,\|v\|_H + e\,\bar{d}(0, u) + f,\qquad g(v) = h\|v\|^2_H + i\|v\|_H + j,$$

with $\bar{d}(\cdot,\cdot)$ the distance on $Z$.

Remark 3. This assumption might seem a bit restrictive, but it falls within the framework of all the applications we have in mind. More importantly, it can be considerably loosened if we slightly change the assumptions of Theorem 4.3. In particular, all the following results, up to Lemma 4.14, are true and proved for continuous functions $c: H \times Z \to \mathbb{R}_+$ and $g: H \to \mathbb{R}_+$. See Remark 6 below.

Definition 4.1. Ordinary value function for the PDMP $X^\alpha$.

For $\alpha \in \mathcal{A}$, we define the ordinary expected total cost function $V^\alpha: \Upsilon \to \mathbb{R}$ and the corresponding value function $V$ as follows:

$$V^\alpha(z) := \mathbb{E}^\alpha_z\left[\int_h^T c\big(X^\alpha_s(\phi), \alpha(X^\alpha_s)\big)\,\mathrm{d}s + g\big(X^\alpha_T(\phi)\big)\right],\qquad z := (v, d, h) \in \Upsilon, \qquad (19)$$

$$V(z) = \inf_{\alpha\in\mathcal{A}} V^\alpha(z),\qquad z \in \Upsilon. \qquad (20)$$

Assumption (H(c)) ensures that $V^\alpha$ and $V$ are well defined.

Definition 4.2. Relaxed value function for the PDMP $X^\mu$.

For $\mu \in \mathcal{A}^R$, we define the relaxed expected cost function $V^\mu: \Upsilon \to \mathbb{R}$ and the corresponding relaxed value function $\tilde{V}$ as follows:

$$V^\mu(z) := \mathbb{E}^\mu_z\left[\int_h^T\int_Z c\big(X^\mu_s(\phi), u\big)\,\mu(X^\mu_s)(\mathrm{d}u)\,\mathrm{d}s + g\big(X^\mu_T(\phi)\big)\right],\qquad z := (v, d, h) \in \Upsilon, \qquad (21)$$

$$\tilde{V}(z) = \inf_{\mu\in\mathcal{A}^R} V^\mu(z),\qquad z \in \Upsilon. \qquad (22)$$

    We can now state the main result of this section.

Theorem 4.3. Under assumptions (H($\lambda$)), (H(Q)), (H(L)), (H(f)) and (H(c)), the value function $\tilde{V}$ of the relaxed optimal control problem on the PDMP is continuous on $\Upsilon$, and there exists an optimal relaxed control strategy $\mu^* \in \mathcal{A}^R$ such that

$$\tilde{V}(z) = V^{\mu^*}(z),\qquad z \in \Upsilon.$$

    Remark 4. All the subsequent results that lead to Theorem 4.3 would be easily transposable to the case of a lower semicontinuous cost function. We would then obtain a lower semicontinuous value function.

The next section is dedicated to proving Theorem 4.3 via the optimal control of the MDP introduced before. Let us briefly sum up what we are going to do. We first show that the optimal control problem of the PDMP is equivalent to the optimal control problem of the MDP, and that an optimal control for the latter gives an optimal control strategy for the original PDMP. We then build up a framework, based on so-called bounding functions (see [12]), in which the value function of the MDP is the fixed point of a contracting operator. Finally, we show that under the assumptions of Theorem 4.3, the relaxed PDMP $X^\mu$ belongs to this framework.

Let us define the ordinary cost $c'$ on $\Upsilon \cup \{\Delta\} \times U_{ad}([0,T]; U)$ for the MDP defined in Section 2.3. For $z := (v, d, h) \in \Upsilon$ and $a \in U_{ad}([0,T]; U)$,

$$c'(z, a) := \int_0^{T-h}\chi^a_s(z)\,c\big(\phi^a_s(z), a(s)\big)\,\mathrm{d}s + \chi^a_{T-h}(z)\,g\big(\phi^a_{T-h}(z)\big), \qquad (23)$$

and $c'(\Delta, a) := 0$.
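Numerically, (23) is a weighted cost along the deterministic flow, the weight being the survival function; the sketch below computes it with toy scalar stand-ins for the flow, the rate and the costs (our illustration only).

```python
import math

# Toy stand-ins (our illustration): scalar flow, bounded rate, quadratic costs.
def flow_step(v, u, dt): return v + dt * (-v + u)
def rate(v, u): return 1.0 + min(v * v, 10.0)       # delta <= lambda <= M_lambda
def c_run(v, u): return v * v + u * u
def g_term(v): return v * v

def mdp_stage_cost(v, h, control, T=1.0, dt=1e-3):
    """Approximate c'(z, a) of (23): cost until the next jump or the horizon."""
    chi, cost, t = 1.0, 0.0, 0.0
    while t < T - h:
        u = control(t)
        cost += chi * c_run(v, u) * dt              # running cost, weighted by chi
        chi *= math.exp(-rate(v, u) * dt)           # survival function chi^a_t(z)
        v = flow_step(v, u, dt)
        t += dt
    return cost + chi * g_term(v)                   # terminal part chi^a_{T-h} g

print(round(mdp_stage_cost(v=1.0, h=0.0, control=lambda t: 0.0), 4))
```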

Assumption (H(c)) allows $c'$ to be properly extended to $\mathcal{R}([0,T], U)$ by the formula

$$c'(z, \gamma) = \int_0^{T-h}\chi^\gamma_s(z)\int_Z c\big(\phi^\gamma_s(z), u\big)\,\gamma(s)(\mathrm{d}u)\,\mathrm{d}s + \chi^\gamma_{T-h}(z)\,g\big(\phi^\gamma_{T-h}(z)\big), \qquad (24)$$

and $c'(\Delta, \gamma) = 0$, for $(z, \gamma) \in \Upsilon \times \mathcal{R}([0,T], U)$. We can now define the expected cost function and the value function for the MDP.

Definition 4.4. Cost and value functions for the MDP $(Z'_n)$.

For $\alpha \in \mathcal{A}$ (resp. $\mu \in \mathcal{A}^R$), we define the total expected cost $J^\alpha$ (resp. $J^\mu$) and the value function $J$ (resp. $J'$):

$$J^\alpha(z) = \mathbb{E}^\alpha_z\Big[\sum_{n=0}^\infty c'\big(Z'_n, \alpha(Z'_n)\big)\Big],\qquad J^\mu(z) = \mathbb{E}^\mu_z\Big[\sum_{n=0}^\infty c'\big(Z'_n, \mu(Z'_n)\big)\Big],$$

$$J(z) = \inf_{\alpha\in\mathcal{A}} J^\alpha(z),\qquad J'(z) = \inf_{\mu\in\mathcal{A}^R} J^\mu(z),$$

for $z \in \Upsilon$, and with $\alpha(Z'_n)$ (resp. $\mu(Z'_n)$) being elements of $U_{ad}([0,T], U)$ (resp. $\mathcal{R}([0,T], U)$).

The finiteness of these sums will be justified later by Lemma 4.9.

    In the following theorem we prove that the relaxed expected cost function of the PDMP equals the one of the associated MDP. Thus, the value functions also coincide. For the finite-dimensional case we refer the reader to [19] or [12] where the discrete component of the PDMP is a Poisson process and therefore the PDMP is entirely decoupled. The PDMPs that we consider are fully coupled.

Theorem 4.5. The relaxed expected costs for the PDMP and the MDP coincide: $V^\mu(z) = J^\mu(z)$ for all $z \in \Upsilon$ and every relaxed control strategy $\mu \in \mathcal{A}^R$. Thus, the value functions $\tilde{V}$ and $J'$ coincide on $\Upsilon$.

Remark 5. Since $\mathcal{A} \subset \mathcal{A}^R$, the expected costs $V^\alpha(z)$ and $J^\alpha(z)$ also coincide for all $z \in \Upsilon$ and every ordinary control strategy $\alpha \in \mathcal{A}$.

Proof. Let $\mu \in \mathcal{A}^R$ and $z = (v, d, h) \in \Upsilon$, and consider the PDMP $X^\mu$ starting at $(v, d, 0, h, v) \in \Xi^\mu$. We drop the dependence on the control in the notation, denote by $(T_n)_{n\in\mathbb{N}}$ the jump times and by $Z_n := (v_{T_n}, d_{T_n}, T_n) \in \Upsilon$ the point of $\Upsilon$ corresponding to $X^\mu_{T_n}$, and let $H_n := (Z_0, \dots, Z_n)$ for $T_n \le T$. For concision we will write $\mu^n := \mu(Z_n) \in \mathcal{R}([0,T], U)$ for all $n \in \mathbb{N}$. Then

$$\begin{aligned} V^\mu(z) &= \mathbb{E}^\mu_z\Big[\sum_{n=0}^\infty\int_{T\wedge T_n}^{T\wedge T_{n+1}}\int_Z c\big(X^\mu_s(\phi), u\big)\,\mu^n_{s-T_n}(\mathrm{d}u)\,\mathrm{d}s + \mathbf{1}_{\{T_n\le T< T_{n+1}\}}\,g\big(X^\mu_T(\phi)\big)\Big]\\ &= \sum_{n=0}^\infty \mathbb{E}^\mu_z\Big[\mathbb{E}^\mu_z\Big[\int_{T\wedge T_n}^{T\wedge T_{n+1}}\int_Z c\big(X^\mu_s(\phi), u\big)\,\mu^n_{s-T_n}(\mathrm{d}u)\,\mathrm{d}s + \mathbf{1}_{\{T_n\le T<T_{n+1}\}}\,g\big(X^\mu_T(\phi)\big)\,\Big|\,H_n\Big]\Big], \end{aligned}$$

all quantities being non-negative. We now examine separately the two terms, which we call $I_1$ and $I_2$. For $n \in \mathbb{N}$, we start with

$$I_1 := \mathbb{E}^\mu_z\Big[\int_{T\wedge T_n}^{T\wedge T_{n+1}}\int_Z c\big(X^\mu_s(\phi), u\big)\,\mu^n_{s-T_n}(\mathrm{d}u)\,\mathrm{d}s\,\Big|\,H_n\Big],$$

which we split according to $T_n \le T < T_{n+1}$ or $T_{n+1} \le T$ (if $T \le T_n$, the corresponding term vanishes). Then

$$I_1 = \mathbf{1}_{\{T_n\le T\}}\,\mathbb{E}^\mu_z\Big[\int_{T_n}^{T}\int_Z c\big(X^\mu_s(\phi), u\big)\,\mu^n_{s-T_n}(\mathrm{d}u)\,\mathbf{1}_{\{T_{n+1}>T\}}\,\mathrm{d}s\,\Big|\,H_n\Big] + \mathbb{E}^\mu_z\Big[\mathbf{1}_{\{T_{n+1}\le T\}}\int_{T_n}^{T_{n+1}}\int_Z c\big(X^\mu_s(\phi), u\big)\,\mu^n_{s-T_n}(\mathrm{d}u)\,\mathrm{d}s\,\Big|\,H_n\Big].$$

By the strong Markov property and the flow property, the first term on the right-hand side is equal to

$$\mathbf{1}_{\{T_n\le T\}}\,\mathbb{E}^\mu_z\Big[\int_0^{T-T_n}\int_Z c\big(X^\mu_{T_n+s}(\phi), u\big)\,\mu^n_s(\mathrm{d}u)\,\mathbf{1}_{\{T_{n+1}-T_n>T-T_n\}}\,\mathrm{d}s\,\Big|\,H_n\Big] = \mathbf{1}_{\{T_n\le T\}}\,\chi^\mu_{T-T_n}(Z_n)\int_0^{T-T_n}\int_Z c\big(\phi^\mu_s(Z_n), u\big)\,\mu^n_s(\mathrm{d}u)\,\mathrm{d}s.$$

Using the same arguments, the second term on the right-hand side of $I_1$ can be written as

$$\mathbf{1}_{\{T_n\le T\}}\int_0^{T-T_n}\int_Z \lambda_{d_n}\big(\phi^\mu_t(Z_n), u\big)\,\mu^n_t(\mathrm{d}u)\,\chi^\mu_t(Z_n)\int_0^t\int_Z c\big(\phi^\mu_s(Z_n), u\big)\,\mu^n_s(\mathrm{d}u)\,\mathrm{d}s\,\mathrm{d}t.$$

An integration by parts yields

$$I_1 = \mathbf{1}_{\{T_n\le T\}}\int_0^{T-T_n}\chi^\mu_t(Z_n)\int_Z c\big(\phi^\mu_t(Z_n), u\big)\,\mu^n_t(\mathrm{d}u)\,\mathrm{d}t.$$

Moreover,

$$I_2 := \mathbb{E}^\mu_z\big[\mathbf{1}_{\{T_n\le T<T_{n+1}\}}\,g\big(X^\mu_T(\phi)\big)\,\big|\,H_n\big] = \mathbf{1}_{\{T_n\le T\}}\,\chi^\mu_{T-T_n}(Z_n)\,g\big(\phi^\mu_{T-T_n}(Z_n)\big).$$

By definition of the Markov chain $(Z'_n)_{n\in\mathbb{N}}$ and of the function $c'$, we then obtain, for the total expected cost of the PDMP,

$$V^\mu(z) = \sum_{n=0}^\infty\mathbb{E}^\mu_z\Big[\mathbf{1}_{\{T_n\le T\}}\int_0^{T-T_n}\chi^\mu_t(Z_n)\int_Z c\big(\phi^\mu_t(Z_n), u\big)\,\mu^n_t(\mathrm{d}u)\,\mathrm{d}t + \mathbf{1}_{\{T_n\le T\}}\,\chi^\mu_{T-T_n}(Z_n)\,g\big(\phi^\mu_{T-T_n}(Z_n)\big)\Big] = \mathbb{E}^\mu_z\Big[\sum_{n=0}^\infty c'\big(Z'_n, \mu(Z'_n)\big)\Big] = J^\mu(z).$$

We now show existence of optimal relaxed controls under a contraction assumption. We use the notation $\mathcal{R} := \mathcal{R}([0,T]; U)$ in the sequel. Let us also recall the notations for the different control objects that we consider:

• $u$ is an element of the control set $U$;

• $a: [0,T] \to U$ is an element of the space $U_{ad}([0,T], U)$ of admissible control rules;

• $\alpha: \Upsilon \to U_{ad}([0,T], U)$ is an element of the space $\mathcal{A}$ of admissible strategies for the original PDMP;

• $\gamma: [0,T] \to \mathcal{M}^1_+(Z)$ is an element of the space $\mathcal{R}$ of relaxed admissible control rules;

• $\mu: \Upsilon \to \mathcal{R}$ is an element of the space $\mathcal{A}^R$ of relaxed admissible strategies for the relaxed PDMP.

The classical way to address the discrete-time stochastic control problem that we introduced in Definition 4.4 is to consider an additional control space, which we call the space of Markovian policies and denote by $\Pi$. Formally, $\Pi := (\mathcal{A}^R)^{\mathbb{N}}$, and a Markovian control policy for the MDP is a sequence of relaxed admissible strategies applied at each stage. The optimal control problem is to find $\pi := (\mu_n)_{n\in\mathbb{N}} \in \Pi$ that minimizes

$$J^\pi(z) := \mathbb{E}^\pi_z\Big[\sum_{n=0}^\infty c'\big(Z'_n, \mu_n(Z'_n)\big)\Big].$$

Denote by $J^*(z)$ this infimum. We will in fact prove the existence of a stationary optimal control policy, which will validate the equality

$$J^*(z) = J'(z).$$

Let us now define some operators that will be useful for our study and state the first theorem of this section. Let $w: \Upsilon \to \mathbb{R}$ be a continuous function, $(z, \gamma, \mu) \in \Upsilon \times \mathcal{R} \times \mathcal{A}^R$, and define

$$\begin{aligned} Rw(z, \gamma) &:= c'(z, \gamma) + (Q'w)(z, \gamma),\\ T_\mu w(z) &:= c'\big(z, \mu(z)\big) + (Q'w)\big(z, \mu(z)\big) = Rw\big(z, \mu(z)\big),\\ (Tw)(z) &:= \inf_{\gamma\in\mathcal{R}}\big\{c'(z, \gamma) + (Q'w)(z, \gamma)\big\} = \inf_{\gamma\in\mathcal{R}} Rw(z, \gamma), \end{aligned}$$

where $(Q'w)(z, \gamma) := \int_\Upsilon w(x)\,Q'(\mathrm{d}x\,|\,z, \gamma)$, which also admits the expression

$$\int_0^{T-h}\chi^\gamma_t(z)\int_Z \lambda_d\big(\phi^\gamma_t(z), u\big)\int_D w\big(\phi^\gamma_t(z), r, h+t\big)\,Q\big(\mathrm{d}r\,|\,\phi^\gamma_t(z), d, u\big)\,\gamma(t)(\mathrm{d}u)\,\mathrm{d}t.$$

Theorem 4.6. Assume that there exists a subspace $\mathbf{C}$ of the space of continuous bounded functions from $\Upsilon$ to $\mathbb{R}$ such that the operator $T: \mathbf{C} \to \mathbf{C}$ is contracting and the zero function belongs to $\mathbf{C}$. Assume furthermore that $\mathbf{C}$ is a Banach space. Then $J'$ is the unique fixed point of $T$, and there exists an optimal control strategy $\mu^* \in \mathcal{A}^R$ such that

$$J'(z) = J^{\mu^*}(z),\qquad z \in \Upsilon.$$

    All the results needed to prove this Theorem can be found in [6]. We break down the proof into the two following elementary propositions, suited to our specific problem. Before that, recall that from [6,Proposition 9.1 p.216], Π is the adequate control space to consider since history-dependent policies do not improve the value function.

Let us now consider the $n$-stage expected cost function and value function defined by

$$J^\pi_n(z) := \mathbb{E}^\pi_z\Big[\sum_{i=0}^{n-1} c'\big(Z'_i, \mu_i(Z'_i)\big)\Big],\qquad J_n(z) := \inf_{\pi\in\Pi}\mathbb{E}^\pi_z\Big[\sum_{i=0}^{n-1} c'\big(Z'_i, \mu_i(Z'_i)\big)\Big],$$

for $n \in \mathbb{N}$ and $\pi := (\mu_n)_{n\in\mathbb{N}} \in \Pi$. We also set $J_\infty := \lim_{n\to\infty} J_n$.
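The contraction machinery that follows is easy to visualize on a finite toy MDP: below, a discount factor plays the role of the contraction modulus $C < 1$ used later in this section, and value iteration $J_n = T^n 0$ converges geometrically to the fixed point, as in the Banach fixed-point argument below. States, actions, costs and kernel are all randomly generated stand-ins of our own.

```python
import numpy as np

# A toy finite MDP (our illustration): S states, A actions, random costs/kernel.
S, A = 5, 3
rng = np.random.default_rng(0)
cost = rng.random((S, A))                       # stage cost c'(z, a)
P = rng.random((S, A, S))
P /= P.sum(axis=2, keepdims=True)               # transition kernel Q'(.|z, a)
C = 0.9                                         # contraction modulus, C < 1

def T(J):
    """Bellman operator: (TJ)(z) = min_a { c'(z, a) + C * E[J(next) | z, a] }."""
    return np.min(cost + C * (P @ J), axis=1)

J = np.zeros(S)                                 # J_0 = 0
for n in range(1, 500):                         # J_n = T^n 0
    J_new = T(J)
    if np.max(np.abs(J_new - J)) < 1e-12:       # geometric convergence to J'
        break
    J = J_new
print(n, np.round(J, 4))
```

In the paper the contraction does not come from a discount factor but from the bounding-function weight $e^{\zeta(T-h)}$ of Lemma 4.8 below; the fixed-point mechanism is the same.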

Proposition 3. Let the assumptions of Theorem 4.3 hold. Let $v, w: \Upsilon \to \mathbb{R}$ be such that $v \le w$ on $\Upsilon$, and let $\mu \in \mathcal{A}^R$. Then $T_\mu v \le T_\mu w$. Moreover,

$$J_n(z) = \inf_{\pi\in\Pi}\big(T_{\mu_0}T_{\mu_1}\cdots T_{\mu_{n-1}}0\big)(z) = (T^n 0)(z),$$

with $\pi := (\mu_n)_{n\in\mathbb{N}}$, and $J_\infty$ is the unique fixed point of $T$ in $\mathbf{C}$.

Proof. The first relation is straightforward since all quantities defining $Q'$ are nonnegative. The equality $J_n = \inf_{\pi\in\Pi} T_{\mu_0}T_{\mu_1}\cdots T_{\mu_{n-1}}0$ is also immediate, since $T_\mu$ just shifts the process by one stage (see also [6, Lemma 8.1 p. 194]).

Let $I \in \mathbf{C}$, $\varepsilon > 0$ and $n \in \mathbb{N}$. For every $k \in \{1, \dots, n-1\}$, $T^k I \in \mathbf{C}$, and so there exist $(\mu_0, \mu_1, \dots, \mu_{n-1}) \in (\mathcal{A}^R)^n$ such that

$$T_{\mu_{n-1}}I \le TI + \varepsilon,\qquad T_{\mu_{n-2}}TI \le T^2I + \varepsilon,\quad\dots,\quad T_{\mu_0}T^{n-1}I \le T^nI + \varepsilon.$$

We then get

$$T^nI \ge T_{\mu_0}T^{n-1}I - \varepsilon \ge T_{\mu_0}T_{\mu_1}T^{n-2}I - 2\varepsilon \ge \cdots \ge T_{\mu_0}T_{\mu_1}\cdots T_{\mu_{n-1}}I - n\varepsilon \ge \inf_{\pi\in\Pi} T_{\mu_0}T_{\mu_1}\cdots T_{\mu_{n-1}}I - n\varepsilon.$$

Since this last inequality is true for any $\varepsilon > 0$, we get

$$T^nI \ge \inf_{\pi\in\Pi} T_{\mu_0}T_{\mu_1}\cdots T_{\mu_{n-1}}I,$$

and, by definition of $T$, $TI \le T_{\mu_{n-1}}I$. Using the first relation of the proposition, we get

$$T^nI \le T_{\mu_0}T_{\mu_1}\cdots T_{\mu_{n-1}}I.$$

Finally, $T^nI = \inf_{\pi\in\Pi} T_{\mu_0}T_{\mu_1}\cdots T_{\mu_{n-1}}I$ for all $I \in \mathbf{C}$ and $n \in \mathbb{N}$. We deduce from the Banach fixed-point theorem that $J_\infty = \lim_{n\to\infty} T^n 0$ belongs to $\mathbf{C}$ and is the unique fixed point of $T$.

Proposition 4. There exists $\mu^* \in \mathcal{A}^R$ such that $J_\infty = J^{\mu^*} = J'$.

Proof. By definition, for every $\pi \in \Pi$, $J_n \le J^\pi_n$, so that $J_\infty \le J^*$. Now, from the previous proposition, $J_\infty = \inf_{\gamma\in\mathcal{R}} RJ_\infty(\cdot, \gamma)$; moreover $\mathcal{R}$ is a compact space and $RJ_\infty$ is a continuous function. We can thus find a measurable mapping $\mu^*: \Upsilon \to \mathcal{R}$ such that $J_\infty = T_{\mu^*}J_\infty$. Since $J_\infty \ge 0$, the first relation of the previous proposition gives $J_\infty = T^n_{\mu^*}J_\infty \ge T^n_{\mu^*}0$ for all $n \in \mathbb{N}$, and by taking the limit, $J_\infty \ge J^{\mu^*}$. Since $J^{\mu^*} \ge J'$, we get $J_\infty \ge J'$. We conclude the proof by remarking that $J' \ge J^* \ge J_\infty$, so that $J_\infty = J^{\mu^*} = J'$.

    The next section is devoted to proving that the assumptions of Theorem 4.6 are satisfied for the MDP.

    The concept of bounding function that we define below will ensure that the operator T is a contraction. The existence of the space C of Theorem 4.6 will mostly result from Theorem 3.1 and again from the concept of bounding function.

Definition 4.7. Bounding functions for a PDMP.

Let $c$ (resp. $g$) be a running (resp. terminal) cost as in Section 4.1. A measurable function $b: H \to \mathbb{R}_+$ is called a bounding function for the PDMP if there exist constants $c_c, c_g, c_\phi \in \mathbb{R}_+$ such that:

(ⅰ) $c(v, u) \le c_c\,b(v)$ for all $(v, u) \in H \times Z$;

(ⅱ) $g(v) \le c_g\,b(v)$ for all $v \in H$;

(ⅲ) $b\big(\phi^\gamma_t(z)\big) \le c_\phi\,b(v)$ for all $(t, z, \gamma) \in [0,T] \times \Upsilon \times \mathcal{R}$, $z = (v, d, h)$.

    Given a bounding function for the PDMP we can construct one for the MDP with or without relaxed controls, as shown in the next lemma (cf. [13,Definition 7.1.2 p.195]).

Lemma 4.8. Let $b$ be a bounding function for the PDMP; we keep the notations of Definition 4.7. Let $\zeta > 0$. The function $B_\zeta: \Upsilon \to \mathbb{R}_+$ defined by $B_\zeta(z) := b(v)\,e^{\zeta(T-h)}$ for $z = (v, d, h)$ is an upper bounding function for the MDP: the two inequalities below are satisfied for all $(z, \gamma) \in \Upsilon \times \mathcal{R}$,

$$c'(z, \gamma) \le B_\zeta(z)\,c_\phi\Big(\frac{c_c}{\delta} + c_g\Big), \qquad (25)$$

$$\int_\Upsilon B_\zeta(y)\,Q'(\mathrm{d}y\,|\,z, \gamma) \le B_\zeta(z)\,\frac{c_\phi M_\lambda}{\zeta + \delta}. \qquad (26)$$

Proof. Take $(z, \gamma) \in \Upsilon \times \mathcal{R}$, $z = (v, d, h)$. On the one hand, from (24) and Definition 4.7 we obtain

$$c'(z, \gamma) \le \int_0^{T-h} e^{-\delta s}\,c_c\,c_\phi\,b(v)\,\mathrm{d}s + e^{-\delta(T-h)}\,c_g\,c_\phi\,b(v) \le B_\zeta(z)\,e^{-\zeta(T-h)}\,c_\phi\Big(c_c\,\frac{1-e^{-\delta(T-h)}}{\delta} + e^{-\delta(T-h)}\,c_g\Big),$$

which immediately implies (25). On the other hand,

$$\int_\Upsilon B_\zeta(y)\,Q'(\mathrm{d}y\,|\,z, \gamma) = \int_0^{T-h}\chi^\gamma_s(z)\,b\big(\phi^\gamma_s(z)\big)\,e^{\zeta(T-h-s)}\int_Z \lambda_d\big(\phi^\gamma_s(z), u\big)\,Q\big(D\,|\,\phi^\gamma_s(z), u\big)\,\gamma_s(\mathrm{d}u)\,\mathrm{d}s \le e^{\zeta(T-h)}\,b(v)\,c_\phi\,M_\lambda\int_0^{T-h}e^{-\delta s}\,e^{-\zeta s}\,\mathrm{d}s = B_\zeta(z)\,\frac{c_\phi M_\lambda}{\zeta+\delta}\big(1 - e^{-(\zeta+\delta)(T-h)}\big),$$

which implies (26).

Let $b$ be a bounding function for the PDMP and consider $\zeta$ such that $C := \dfrac{c_\phi M_\lambda}{\zeta + \delta} < 1$. Denote by $B$ the associated bounding function for the MDP. We introduce the Banach space

$$\mathbf{L} := \Big\{v: \Upsilon \to \mathbb{R}\ \text{continuous}\ ;\ \|v\|_B := \sup_{z\in\Upsilon}\frac{|v(z)|}{B(z)} < \infty\Big\}. \qquad (27)$$

    The following two lemmas give an estimate on the expected cost of the MDP that justifies manipulations of infinite sums.

Lemma 4.9. The inequality $\mathbb{E}^\gamma_z\big[B(Z'_k)\big] \le C^k B(z)$ holds for any $(z, \gamma, k) \in \Upsilon \times \mathcal{R} \times \mathbb{N}$.

Proof. We proceed by induction on $k$. Let $z \in \Upsilon$. The desired inequality holds for $k = 0$ since $\mathbb{E}^\gamma_z[B(Z'_0)] = B(z)$. Suppose now that it holds for $k \in \mathbb{N}$. Then

$$\mathbb{E}^\gamma_z\big[B(Z'_{k+1})\big] = \mathbb{E}^\gamma_z\Big[\mathbb{E}^\gamma_z\big[B(Z'_{k+1})\,\big|\,Z'_k\big]\Big] = \mathbb{E}^\gamma_z\Big[\int_\Upsilon B(y)\,Q'(\mathrm{d}y\,|\,Z'_k, \gamma)\Big] = \mathbb{E}^\gamma_z\bigg[B(Z'_k)\,\frac{\int_\Upsilon B(y)\,Q'(\mathrm{d}y\,|\,Z'_k, \gamma)}{B(Z'_k)}\bigg].$$

Using (26) and the definition of $C$, we conclude that $\mathbb{E}^\gamma_z[B(Z'_{k+1})] \le C\,\mathbb{E}^\gamma_z[B(Z'_k)]$, and, by the induction hypothesis, $\mathbb{E}^\gamma_z[B(Z'_{k+1})] \le C^{k+1}B(z)$.

Lemma 4.10. There exists $\kappa > 0$ such that for any $(z, \mu) \in \Upsilon \times \mathcal{A}^R$,

$$\mathbb{E}^\mu_z\Big[\sum_{k=n}^\infty c'\big(Z'_k, \mu(Z'_k)\big)\Big] \le \kappa\,\frac{C^n}{1-C}\,B(z).$$

Proof. The result follows from Lemma 4.9 and from the fact that

$$c'\big(Z'_k, \mu(Z'_k)\big) \le B(Z'_k)\,c_\phi\Big(\frac{c_c}{\delta} + c_g\Big)$$

for any $k \in \mathbb{N}$.

    We now state the result on the operator T.

Lemma 4.11. $T$ is a contraction on $\mathbf{L}$: for any $(v, w) \in \mathbf{L} \times \mathbf{L}$,

$$\|Tv - Tw\|_B \le C\,\|v - w\|_B,$$

where $C = \dfrac{c_\phi M_\lambda}{\zeta + \delta}$.

Proof. We prove here the contraction property; the fact that $T$ maps $\mathbf{L}$ into $\mathbf{L}$ is less straightforward and is addressed in the next section. Let $z := (v, d, h) \in \Upsilon$. Let us recall that for functions $f, g: \mathcal{R} \to \mathbb{R}$,

$$\sup_{\gamma\in\mathcal{R}} f(\gamma) - \sup_{\gamma\in\mathcal{R}} g(\gamma) \le \sup_{\gamma\in\mathcal{R}}\big(f(\gamma) - g(\gamma)\big).$$

Moreover, since $\inf_{\gamma\in\mathcal{R}} f(\gamma) - \inf_{\gamma\in\mathcal{R}} g(\gamma) = \sup_{\gamma\in\mathcal{R}}\big(-g(\gamma)\big) - \sup_{\gamma\in\mathcal{R}}\big(-f(\gamma)\big)$, we have

$$Tv(z) - Tw(z) \le \sup_{\gamma\in\mathcal{R}}\int_0^{T-h}\chi^\gamma_s(z)\int_Z \lambda_d\big(\phi^\gamma_s(z), u\big)\,I(u, s)\,\gamma(s)(\mathrm{d}u)\,\mathrm{d}s,$$

where

$$I(u, s) := \int_D\Big(v\big(\phi^\gamma_s(z), r, h+s\big) - w\big(\phi^\gamma_s(z), r, h+s\big)\Big)\,Q\big(\mathrm{d}r\,|\,\phi^\gamma_s(z), d, u\big),$$

so that

$$\|Tv - Tw\|_B \le \sup_{(z,\gamma)\in\Upsilon\times\mathcal{R}}\int_0^{T-h}\chi^\gamma_s(z)\int_Z \lambda_d\big(\phi^\gamma_s(z), u\big)\,J(s, u)\,\gamma(s)(\mathrm{d}u)\,\mathrm{d}s,$$

where

$$J(s, u) := \int_D\frac{B\big(\phi^\gamma_s(z), r, h+s\big)}{B(z)}\,\|v - w\|_B\,Q\big(\mathrm{d}r\,|\,\phi^\gamma_s(z), d, u\big).$$

We then conclude that

$$\|Tv - Tw\|_B \le \sup_{(z,\gamma)\in\Upsilon\times\mathcal{R}}\int_0^{T-h} e^{-\delta s}\,M_\lambda\,c_\phi\,e^{-\zeta s}\,\mathrm{d}s\ \|v - w\|_B \le M_\lambda\,c_\phi\,\|v - w\|_B\int_0^{T-h} e^{-(\delta+\zeta)s}\,\mathrm{d}s \le C\,\|v - w\|_B.$$

Here we prove that the trajectories of the relaxed PDMP are continuous with respect to the control and that the operator $R$ maps continuous functions to continuous functions.

Lemma 4.12. Assume that (H(L)) and (H(f)) are satisfied. Then the mapping

$$\phi: (z, \gamma) \in \Upsilon \times \mathcal{R}\ \mapsto\ \phi^\gamma_\cdot(z) = S(\cdot)v + \int_0^{\cdot}\int_Z S(\cdot - s)\,f_d\big(\phi^\gamma_s(z), u\big)\,\gamma(s)(\mathrm{d}u)\,\mathrm{d}s$$

is continuous from $\Upsilon \times \mathcal{R}$ into $C([0,T]; H)$.

Proof. This proof is based on the result of Theorem 3.1; here we add the joint continuity on $\Upsilon \times \mathcal{R}$, whereas the continuity in [37] is only with respect to $\mathcal{R}$. Let $t \in [0,T]$, let $(z, \gamma) \in \Upsilon \times \mathcal{R}$ and assume that $(z_n, \gamma_n) \to (z, \gamma)$. Since $D$ is a finite set, we endow it with the discrete topology, and if we write $z_n = (v_n, d_n, h_n)$ and $z = (v, d, h)$, we have $d_n = d$ for $n$ large enough. So, for $n$ large enough, we have

$$\begin{aligned}\phi^{\gamma_n}_t(z_n) - \phi^\gamma_t(z) &= S(t)v_n - S(t)v + \int_0^t\int_Z S(t-s)\,f_d\big(\phi^{\gamma_n}_s(z_n), u\big)\,\gamma_n(s)(\mathrm{d}u)\,\mathrm{d}s - \int_0^t\int_Z S(t-s)\,f_d\big(\phi^\gamma_s(z), u\big)\,\gamma(s)(\mathrm{d}u)\,\mathrm{d}s\\ &= S(t)v_n - S(t)v + \int_0^t\int_Z S(t-s)\Big[f_d\big(\phi^{\gamma_n}_s(z_n), u\big) - f_d\big(\phi^\gamma_s(z), u\big)\Big]\,\gamma_n(s)(\mathrm{d}u)\,\mathrm{d}s\\ &\quad + \int_0^t\int_Z S(t-s)\,f_d\big(\phi^\gamma_s(z), u\big)\,\big[\gamma_n(s)(\mathrm{d}u) - \gamma(s)(\mathrm{d}u)\big]\,\mathrm{d}s. \end{aligned}$$

From (H(f))1. we get

$$\big\|\phi^{\gamma_n}_t(z_n) - \phi^\gamma_t(z)\big\|_H \le M_S\,\|v_n - v\|_H + M_S\,l_f\int_0^t\big\|\phi^{\gamma_n}_s(z_n) - \phi^\gamma_s(z)\big\|_H\,\mathrm{d}s + \|\epsilon_n(t)\|_H,$$

where $\epsilon_n(t) := \int_0^t\int_Z S(t-s)\,f_d\big(\phi^\gamma_s(z), u\big)\,\big[\gamma_n(s)(\mathrm{d}u) - \gamma(s)(\mathrm{d}u)\big]\,\mathrm{d}s$. By Gronwall's lemma, we obtain a constant $C > 0$ such that

$$\big\|\phi^{\gamma_n}_t(z_n) - \phi^\gamma_t(z)\big\|_H \le C\Big(\|v_n - v\|_H + \sup_{s\in[0,T]}\|\epsilon_n(s)\|_H\Big).$$

Since $\lim_{n\to+\infty}\|v_n - v\|_H = 0$, the proof is complete if we show that the sequence of functions $(\|\epsilon_n\|_H)$ converges uniformly to 0.

Let us denote $x_n(t) := \int_0^t\int_Z S(t-s)\,f_d\big(\phi^\gamma_s(z), u\big)\,\gamma_n(s)(\mathrm{d}u)\,\mathrm{d}s$. Using the same argument as in the proof of [37, Theorem 3.1], there is no difficulty in proving that $(x_n)_{n\in\mathbb{N}}$ is relatively compact in $C([0,T], H)$, so that, passing to a subsequence if necessary, we may assume that $x_n \to x$ in $C([0,T], H)$. Now let $h \in H$. Then

$$\big(h, \epsilon_n(t)\big)_H = \int_0^t\int_Z \big(h, S(t-s)\,f_d(\phi^\gamma_s(z), u)\big)_H\,\gamma_n(s)(\mathrm{d}u)\,\mathrm{d}s - \int_0^t\int_Z \big(h, S(t-s)\,f_d(\phi^\gamma_s(z), u)\big)_H\,\gamma(s)(\mathrm{d}u)\,\mathrm{d}s\ \xrightarrow[n\to\infty]{}\ 0,$$

since $(s, u) \mapsto \big(h, S(t-s)\,f_d(\phi^\gamma_s(z), u)\big)_H$ belongs to $L^1(C(Z))$ and $\gamma_n \to \gamma$ weakly* in $L^\infty(\mathcal{M}(Z)) = [L^1(C(Z))]^*$. Thus, $x(t) = \int_0^t\int_Z S(t-s)\,f_d\big(\phi^\gamma_s(z), u\big)\,\gamma(s)(\mathrm{d}u)\,\mathrm{d}s$ and $\epsilon_n(t) = x_n(t) - x(t)$ for all $t \in [0,T]$, proving the uniform convergence of $\|\epsilon_n\|_H$ to 0 on $[0,T]$.

    The next lemma establishes the continuity property of the operator R.

Lemma 4.13. Suppose that assumptions (H(L)), (H(f)), (H($\lambda$)), (H(Q)) and (H(c)) are satisfied. Let $b$ be a continuous bounding function for the PDMP and let $w: \Upsilon \times U \to \mathbb{R}$ be continuous with $|w(z, u)| \le c_w B(z)$ for some $c_w \ge 0$. Then

$$(z, \gamma)\ \mapsto\ \int_0^{T-h}\chi^\gamma_s(z)\Big(\int_Z w\big(\phi^\gamma_s(z), d, h+s, u\big)\,\gamma(s)(\mathrm{d}u)\Big)\,\mathrm{d}s$$

is continuous on $\Upsilon \times \mathcal{R}$, with $z := (v, d, h)$. Quite straightforwardly,

$$(z, \gamma)\ \mapsto\ Rw(z, \gamma) = c'(z, \gamma) + Q'w(z, \gamma)$$

is continuous on $\Upsilon \times \mathcal{R}$.

    Proof. See Appendix C.

    It now remains to show that there exists a bounding function for the PDMP. This is the result of the next lemma.

Lemma 4.14. Suppose assumptions (H(L)), (H(f)) and (H(c)) are satisfied. Define $\tilde c$ and $\tilde g$ from $c$ and $g$ by taking the absolute values of the coefficients of these quadratic functions. Let $M_2>0$ and define $M_3:=(M_2+b_1T)\,M_S\,e^{M_Sb_2T}$. Then $b:H\to\mathbb{R}_+$, defined by

$$b(v) := \begin{cases}\displaystyle \max_{\|x\|_H\le M_3}\max_{u\in U}\tilde c(x,u) + \max_{\|x\|_H\le M_3}\tilde g(x), & \text{if } \|v\|_H\le M_3,\\[2mm] \displaystyle \max_{u\in U}\tilde c(v,u)+\tilde g(v), & \text{if } \|v\|_H> M_3,\end{cases} \tag{28}$$

is a continuous bounding function for the PDMP.

Proof. For all $(v,u)\in H\times U$, $c(v,u)\le b(v)$ and $g(v)\le b(v)$. Now let $(t,z,\gamma)\in[0,T]\times\Upsilon\times\mathcal{R}$, $z=(v,d,h)$.

• If $\|\phi^{\gamma}_{t}(z)\|_H\le M_3$, then $b(\phi^{\gamma}_{t}(z))=b(M_3)$, where $b(M_3)$ denotes the constant value of $b$ on the ball of radius $M_3$. If $\|v\|_H\le M_3$, then $b(v)=b(M_3)=b(\phi^{\gamma}_{t}(z))$. Otherwise $\|v\|_H>M_3$ and $b(v)>b(M_3)=b(\phi^{\gamma}_{t}(z))$.

• If $\|\phi^{\gamma}_{t}(z)\|_H> M_3$, then $\|v\|_H>M_2$ and $\|\phi^{\gamma}_{t}(z)\|_H\le \|v\|_H\,M_3/M_2$ (see (42) in Appendix B). So

$$b\big(\phi^{\gamma}_{t}(z)\big) = \max_{u\in U}\tilde c\big(\phi^{\gamma}_{t}(z),u\big)+\tilde g\big(\phi^{\gamma}_{t}(z)\big) \le b\Big(\frac{M_3}{M_2}v\Big) \le \frac{M_3^2}{M_2^2}\,b(v),$$

since $M_3/M_2>1$ and $\tilde c$, $\tilde g$ are quadratic.

Remark 6. Lemma 4.14 ensures the existence of a bounding function for the PDMP. To broaden the class of cost functions considered, we could simply assume the existence of a bounding function for the PDMP in Theorem 4.3; the assumptions on $c$ and $g$ would then reduce to continuity.

    Ordinary strategies are of crucial importance because they are the ones that the controller can implement in practice. Here we give convexity assumptions that ensure the existence of an ordinary optimal control strategy for the PDMP.

(A) (a) For all $d\in D$, the function $f_d:(y,u)\in H\times U\mapsto f_d(y,u)\in H$ is linear in the control variable $u$.

(b) For all $d\in D$, $y\in H$ and $E\in\mathcal{B}(D)$, the function $\lambda_d(y,\cdot):U\to\mathbb{R}_+$ is concave and the function $\lambda_d(y,\cdot)\,Q(E\,|\,y,d,\cdot):U\to\mathbb{R}_+$ is convex.

(c) The cost function $c:(y,u)\in H\times U\mapsto c(y,u)\in\mathbb{R}_+$ is convex in the control variable $u$.

Theorem 4.15. Suppose that assumptions (H(L)), (H(f)), (H(λ)), (H(Q)), (H(c)) and (A) are satisfied. If we consider $\mu\in\mathcal{A}^{\mathcal{R}}$ an optimal relaxed strategy for the PDMP, then the ordinary strategy $\bar\mu_t := \int_Z u\,\mu_t(du)\in\mathcal{A}$ is optimal, i.e. $V^{\bar\mu}(z)=\tilde V^{\mu}(z)=V(z)$ for all $z\in\Upsilon$.

Proof. This result is based on the fact that for all $(z,\gamma)\in\Upsilon\times\mathcal{R}$, $(Lw)(z,\gamma)\ge(Lw)(z,\bar\gamma)$, with $\bar\gamma=\int_Z u\,\gamma(du)$. Indeed, the fact that the function $f_d$ is linear in the control variable implies that for all $(t,z,\gamma)\in[0,T]\times\Upsilon\times\mathcal{R}$, $\phi^{\gamma}_{t}(z)=\phi^{\bar\gamma}_{t}(z)$. The convexity assumptions (A) give the following inequalities:

$$\int_Z \lambda_d\big(\phi^{\gamma}_{s}(z),u\big)\,\gamma(s)(du) \le \lambda_d\big(\phi^{\bar\gamma}_{s}(z),\bar\gamma(s)\big),$$
$$\int_Z \lambda_d\big(\phi^{\gamma}_{s}(z),u\big)\,Q\big(E\,\big|\,\phi^{\gamma}_{s}(z),d,u\big)\,\gamma(s)(du) \ge \lambda_d\big(\phi^{\bar\gamma}_{s}(z),\bar\gamma(s)\big)\,Q\big(E\,\big|\,\phi^{\bar\gamma}_{s}(z),d,\bar\gamma(s)\big),$$
$$\int_Z c\big(\phi^{\gamma}_{s}(z),u\big)\,\gamma(s)(du) \ge c\big(\phi^{\bar\gamma}_{s}(z),\bar\gamma(s)\big),$$

for all $(s,z,\gamma,E)\in[0,T]\times\Upsilon\times\mathcal{R}\times\mathcal{B}(D)$, so that in particular $\chi^{\gamma}_{t}(z)\ge\chi^{\bar\gamma}_{t}(z)$. We then have, for all $(z,\gamma)\in\Upsilon\times\mathcal{R}$ and $w:\Upsilon\to\mathbb{R}_+$,

$$(Lw)(z,\gamma) = \int_0^{T-h}\chi^{\gamma}_{s}(z)\int_Z c\big(\phi^{\gamma}_{s}(z),u\big)\,\gamma(s)(du)\,ds + \chi^{\gamma}_{T-h}(z)\,g\big(\phi^{\gamma}_{T-h}(z)\big) + \int_0^{T-h}\chi^{\gamma}_{s}(z)\int_Z\lambda_d\big(\phi^{\gamma}_{s}(z),u\big)\int_D w\big(\phi^{\gamma}_{s}(z),r,h+s\big)\,Q\big(dr\,\big|\,\phi^{\gamma}_{s}(z),d,u\big)\,\gamma(s)(du)\,ds$$
$$\ge \int_0^{T-h}\chi^{\bar\gamma}_{s}(z)\,c\big(\phi^{\bar\gamma}_{s}(z),\bar\gamma(s)\big)\,ds + \chi^{\bar\gamma}_{T-h}(z)\,g\big(\phi^{\bar\gamma}_{T-h}(z)\big) + \int_0^{T-h}\chi^{\bar\gamma}_{s}(z)\int_Z\lambda_d\big(\phi^{\bar\gamma}_{s}(z),u\big)\int_D w\big(\phi^{\bar\gamma}_{s}(z),r,h+s\big)\,Q\big(dr\,\big|\,\phi^{\bar\gamma}_{s}(z),d,u\big)\,\gamma(s)(du)\,ds.$$

Furthermore,

$$\int_Z\lambda_d\big(\phi^{\bar\gamma}_{s}(z),u\big)\int_D w\big(\phi^{\bar\gamma}_{s}(z),r,h+s\big)\,Q\big(dr\,\big|\,\phi^{\bar\gamma}_{s}(z),d,u\big)\,\gamma(s)(du) \ge \lambda_d\big(\phi^{\bar\gamma}_{s}(z),\bar\gamma(s)\big)\int_D w\big(\phi^{\bar\gamma}_{s}(z),r,h+s\big)\,Q\big(dr\,\big|\,\phi^{\bar\gamma}_{s}(z),d,\bar\gamma(s)\big),$$

so that

$$(Lw)(z,\gamma) \ge \int_0^{T-h}\chi^{\bar\gamma}_{s}(z)\,c\big(\phi^{\bar\gamma}_{s}(z),\bar\gamma(s)\big)\,ds + \chi^{\bar\gamma}_{T-h}(z)\,g\big(\phi^{\bar\gamma}_{T-h}(z)\big) + \int_0^{T-h}\chi^{\bar\gamma}_{s}(z)\,\lambda_d\big(\phi^{\bar\gamma}_{s}(z),\bar\gamma(s)\big)\int_D w\big(\phi^{\bar\gamma}_{s}(z),r,h+s\big)\,Q\big(dr\,\big|\,\phi^{\bar\gamma}_{s}(z),d,\bar\gamma(s)\big)\,ds = (Lw)(z,\bar\gamma).$$

    Here we treat an elementary example that satisfies the assumptions made in the previous two sections.

Let $V=H^1_0([0,1])$, $H=L^2([0,1])$, $D=\{-1,1\}$, $U=[-1,1]$. $V$ is a Hilbert space with inner product

$$(v,w)_V := \int_0^1 v(x)w(x) + v'(x)w'(x)\,dx.$$

We consider the following PDE for the deterministic evolution between jumps:

$$\partial_t v(t,x) = \Delta v(t,x) + (d+u)\,v(t,x),$$

with Dirichlet boundary conditions. We define the jump rate function for $(v,u)\in H\times U$ by

$$\lambda_1(v,u) = \frac{1}{e^{-\|v\|^2}+1} + u^2, \qquad \lambda_{-1}(v,u) = e^{-\frac{1}{\|v\|^2+1}} + u^2,$$

and the transition measure by $Q(\{-1\}\,|\,v,1,u)=1$ and $Q(\{1\}\,|\,v,-1,u)=1$.

Finally, we consider a quadratic cost function $c(v,u)=K\|V_{\mathrm{ref}}-v\|^2+u^2$, where $V_{\mathrm{ref}}\in D(\Delta)$ is a reference signal that we want to approach.

Lemma 4.16. The PDMP defined above admits the continuous bounding function

$$b(v) := \|V_{\mathrm{ref}}\|^2_H + \|v\|^2_H + 1. \tag{29}$$

Furthermore, the value function of the optimal control problem is continuous and there exists an optimal ordinary control strategy.

Proof. The proof consists in verifying that all the assumptions of Theorem 4.15 are satisfied. Assumptions (H(Q)), (H(c)) and (A) are straightforward. For $(v,u)\in H\times U$, $1/2\le\lambda_1(v,u)\le2$ and $e^{-1}\le\lambda_{-1}(v,u)\le2$. The continuity in the variable $u$ is straightforward, and the local Lipschitz continuity comes from the fact that the functions $v\mapsto 1/(e^{-\|v\|^2}+1)$ and $v\mapsto e^{\beta(v)}$, with $\beta(v):=-1/(\|v\|^2+1)$, are Fréchet differentiable, with derivatives $v\mapsto 2(v,\cdot)_H\,e^{-\|v\|^2}/(e^{-\|v\|^2}+1)^2$ and $v\mapsto 2(v,\cdot)_H\,\beta^2(v)\,e^{\beta(v)}$.

The Laplacian acts as $\Delta v:w\in V\mapsto -\int_0^1 v'(x)w'(x)\,dx$, so that $\Delta:V\to V^*$ is linear. Let $(v,w)\in V^2$. Then

$$\big\langle\Delta(v-w),v-w\big\rangle = -\int_0^1\big((v-w)'(x)\big)^2\,dx \le 0,$$
$$\big|\langle\Delta v,w\rangle\big|^2 = \Big|\int_0^1 v'(x)w'(x)\,dx\Big|^2 \le \int_0^1(v'(x))^2\,dx\int_0^1(w'(x))^2\,dx \le \|v\|^2_V\,\|w\|^2_V,$$

and so $\|\Delta v\|_{V^*}\le\|v\|_V$. Moreover, $-\langle\Delta v,v\rangle = \int_0^1(v'(x))^2\,dx \ge C\|v\|^2_V$ for some constant $C>0$, by the Poincaré inequality.

Now define, for $k\in\mathbb{N}^*$, $f_k:=\sqrt{2}\sin(k\pi\,\cdot)$, a Hilbert basis of $H$. On $H$, $S(t)$ is the diagonal operator

$$S(t)v = \sum_{k\ge1}e^{-(k\pi)^2t}\,(v,f_k)_H\,f_k.$$

For $t>0$, $S(t)$ is a contracting Hilbert-Schmidt operator.

For $(v,w,u)\in H^2\times U$, $f_d(v,u)=(d+u)v$ and

$$\|f_d(v,u)-f_d(w,u)\|_H \le 2\|v-w\|_H, \qquad \|f_d(v,u)\|_H\le2\|v\|_H.$$

This means that for every $z=(v,d,h)\in\Upsilon$, $\gamma\in\mathcal{R}([0,T],U)$ and $t\in[0,T]$, $\|\phi^{\gamma}_{t}(z)\|_H\le e^{2T}\|v\|_H$.
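Although no numerics are developed here, the example above is simple enough to simulate directly. The following sketch is our own illustration, not part of the original analysis: it truncates $H=L^2([0,1])$ to its first Fourier sine modes, where the flow $\partial_t v=\Delta v+(d+u)v$ becomes a diagonal ODE, and approximates the jumps of rate $\lambda_d$ by Euler thinning; the truncation level, time step and constant control value are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
K, dt, T, u = 32, 1e-3, 2.0, 0.3       # modes, step, horizon, constant control
k = np.arange(1, K + 1)
a = rng.normal(size=K) / k**2           # sine coefficients of the initial v

def lam(d, a, u):
    """Jump rates of the example: lambda_1 and lambda_{-1}."""
    n2 = float(np.sum(a**2))            # ||v||_H^2 by Parseval
    if d == 1:
        return 1.0 / (np.exp(-n2) + 1.0) + u**2
    return np.exp(-1.0 / (n2 + 1.0)) + u**2

d, t, jumps = 1, 0.0, []
while t < T:
    a = a * np.exp((-(k * np.pi)**2 + d + u) * dt)  # exact diagonal flow step
    if rng.random() < lam(d, a, u) * dt:            # Euler thinning for jumps
        d = -d                                      # Q flips the discrete state
        jumps.append(round(t, 3))
    t += dt

print("final discrete state:", d, "| jump times:", jumps)
```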

    We begin this section by making some comments on Definition 1.1.

In (1), $C_m>0$ is the membrane capacitance and $V_-$ and $V_+$ are constants defined by $V_-:=\min\{V_{Na},V_K,V_L,V_{ChR2}\}$ and $V_+:=\max\{V_{Na},V_K,V_L,V_{ChR2}\}$. They represent the physiological domain of our process. In (2), the constants $g_x>0$ are the normalized conductances of the channels of type $x$ and $V_x\in\mathbb{R}$ are the driving potentials of the channels. The constant $\rho>0$ is the relative conductance between the open states of the ChR2. For coherence with the theoretical framework presented in the paper, we will prove Theorem 1.2 for the mollification of the model that we define now. This model is very close to the one of Definition 1.1: it is obtained by replacing the Dirac masses $\delta_z$ by their mollifications $\xi^N_z$, defined as follows. Let $\varphi$ be the function defined on $\mathbb{R}$ by

$$\varphi(x) := \begin{cases} C\,e^{\frac{1}{x^2-1}}, & \text{if } |x|<1,\\ 0, & \text{if } |x|\ge1,\end{cases} \tag{30}$$

with

$$C := \Big(\int_{-1}^{1}\exp\Big(\frac{1}{x^2-1}\Big)\,dx\Big)^{-1},$$

such that $\int_{\mathbb{R}}\varphi(x)\,dx=1$.

Now let $U_N:=\big(\frac{1}{2N},1-\frac{1}{2N}\big)$ and $\varphi_N(x):=2N\varphi(2Nx)$ for $x\in\mathbb{R}$. For $z\in I_N$, the $N$th mollified Dirac mass $\xi^N_z$ at $z$ is defined for $x\in[0,1]$ by

$$\xi^N_z(x) := \begin{cases}\varphi_N(x-z), & \text{if } x\in U_N,\\ 0, & \text{if } x\in[0,1]\setminus U_N.\end{cases} \tag{31}$$

For all $z\in I_N$, $\xi^N_z\in C^{\infty}([0,1])$ and $\xi^N_z\to\delta_z$ almost everywhere in $[0,1]$ as $N\to+\infty$, so that $(\xi^N_z,\phi)_H\to\phi(z)$ as $N\to\infty$ for every $\phi\in C(I,\mathbb{R})$. The expressions $v(i/N)$ in Definition 1.1 are also replaced by $(\xi^N_{i/N},v)_H$. The decision to use the mollified Dirac mass instead of the Dirac mass is motivated by two main reasons. First, as mentioned in [10], the concentration of ions is homogeneous in a spatially extended domain around an open channel, so the current is modeled as being present not only at the point of a channel but in a neighborhood of it. Second, the smooth mollified Dirac mass leads to smooth solutions of the PDE, and we need at least continuity of the flow. Nevertheless, the results of Theorem 1.2 remain valid with the Dirac masses, and we refer the reader to Section 5.2.
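The mollifier construction is easy to check numerically. The short sketch below, an illustration under our own grid and test-function choices rather than code from the paper, verifies that $\int_{\mathbb{R}}\varphi=1$ and that $(\xi^N_z,\phi)_H\to\phi(z)$ as $N$ grows.

```python
import numpy as np

# Normalizing constant C of (30), computed on a fine grid of (-1, 1).
x = np.linspace(-1.0, 1.0, 200001)[1:-1]   # open interval: avoid 1/(x^2-1) blowup
dx = x[1] - x[0]
bump = np.exp(1.0 / (x**2 - 1.0))
C = 1.0 / (bump.sum() * dx)                # so that int phi = 1

def xi(z, N, y):
    """Mollified Dirac mass xi_z^N of (31) at points y of [0, 1]
    (the U_N cutoff is inactive for the z and N used below)."""
    s = 2.0 * N * (y - z)                  # phi_N(y - z) = 2N * phi(2N (y - z))
    out = np.zeros_like(y)
    inside = np.abs(s) < 1.0
    out[inside] = 2.0 * N * C * np.exp(1.0 / (s[inside]**2 - 1.0))
    return out

y = np.linspace(0.0, 1.0, 100001)
dy = y[1] - y[0]
z = 0.3
f = np.sin(np.pi * y)                      # a test function phi in C(I, R)
for N in (4, 16, 64):
    print(N, (xi(z, N, y) * f).sum() * dy, "-> expected", np.sin(np.pi * z))
```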

The following lemma is a direct consequence of [10, Proposition 7] and will be very important for the model to fall within the theoretical framework of the previous sections.

Lemma 5.1. For every $y_0\in V$ with $y_0(x)\in[V_-,V_+]$ for all $x\in I$, the solution $y$ of (1) is such that, for $t\in[0,T]$,

$$V_-\le y(t,x)\le V_+, \qquad \forall x\in I.$$

Physiologically speaking, we are only interested in the domain $[V_-,V_+]$. Since Lemma 5.1 shows that this domain is invariant for the controlled PDMP, we can modify the characteristics of the PDMP outside $[V_-,V_+]$ without changing its dynamics. We will do so for the rate functions $\sigma_{x,y}$ of Table 1. From now on, consider a compact set $K$ containing the closed ball of $H$ centered at zero and with radius $\max(|V_-|,|V_+|)$. We redefine the rate functions $\sigma_{x,y}$ outside $K$ so that they all become bounded functions. This modification will enable assumption (H(λ))1. to be verified.

    The next lemma shows that the stochastic controlled infinite-dimensional Hodgkin-Huxley-ChR2 model defines a controlled infinite-dimensional PDMP as defined in Definition 2.3 and that Theorem 2.5 applies.

Lemma 5.2. For $N\in\mathbb{N}^*$, the $N$th stochastic controlled infinite-dimensional Hodgkin-Huxley-ChR2 model satisfies assumptions (H(λ)), (H(Q)), (H(L)) and (H(f)). Moreover, for any control strategy $\alpha\in\mathcal{A}$, the membrane potential $v^{\alpha}$ satisfies

$$V_-\le v^{\alpha}_t(x)\le V_+, \qquad \forall (t,x)\in[0,T]\times I.$$

Proof. The local Lipschitz continuity of $\lambda_d$ from $H\times Z$ to $\mathbb{R}_+$ comes from the local Lipschitz continuity of all the functions $\sigma_{x,y}$ of Table 1 and from the inequality $|(\xi^N_z,v)_H-(\xi^N_z,w)_H|\le 2N\|v-w\|_H$. By Lemma 5.1, the modified jump rates are bounded, and since they are positive and continuous on the compact set $K$, they are also bounded away from zero; Assumption (H(λ)) is then satisfied. Assumption (H(Q)) is also easily satisfied. We showed in Section 4.4 that (H(L)) is satisfied. As for $f_d$, the function does not depend on the control variable and is continuous from $H$ to $H$. For $d\in D$ and $(y_1,y_2)\in H^2$,

$$f_d(y_1)-f_d(y_2) = \frac{1}{N}\sum_{i\in I_N}\Big(g_K\,1_{\{d_i=n_4\}} + g_{Na}\,1_{\{d_i=m_3h_1\}} + g_{ChR2}\big(1_{\{d_i=O_1\}}+\rho\,1_{\{d_i=O_2\}}\big) + g_L\Big)\big(\xi^N_{\frac{i}{N}},\,y_2-y_1\big)_H\,\xi^N_{\frac{i}{N}}.$$

We then get

$$\|f_d(y_1)-f_d(y_2)\|_H \le 4N^2\big(g_K+g_{Na}+g_{ChR2}(1+\rho)+g_L\big)\,\|y_2-y_1\|_H.$$

Finally, since the continuous component $v^{\alpha}_t$ of the PDMP does not jump, the bounds are a direct consequence of Lemma 5.1.

Proof of Theorem 1.2. In Lemma 5.2 we already showed that assumptions (H(λ)), (H(Q)), (H(L)) and (H(f)) are satisfied. The cost function $c$ is convex in the control variable and norm-quadratic on $H\times Z$. The flow does not depend on the control, the rate function $\lambda$ is linear in the control, and the function $\lambda Q$ is also linear in the control. We conclude that all the assumptions of Theorem 4.3 are satisfied and that an optimal ordinary strategy can be retrieved.

We end this section with an important remark that significantly extends the scope of this example. Up to now we have only considered stationary reference signals, but nonautonomous ones can be studied as well, as long as they feature some regularity. Indeed, it is only a matter of incorporating the reference signal $V_{\mathrm{ref}}\in C([0,T],H)$ in the process by adding a variable to the PDMP: instead of considering $H$ as the initial state space for the continuous component, we consider $\tilde H := H\times H$.

This way, the part on the control problem is not impacted at all, and we consider the continuous cost function $\tilde c$ defined for $(v,\bar v,u)\in\tilde H\times U$ by

$$\tilde c(v,\bar v,u) = \kappa\|v-\bar v\|^2_H + u + c_{\min}, \tag{32}$$

the result and proof of Theorem 1.2 remaining unchanged, with the continuous bounding function defined for $v\in H$ by

$$b(v) := \begin{cases}\kappa M_3^2 + \kappa\sup_{t\in[0,T]}\|V_{\mathrm{ref}}(t)\|^2_H + u_{\max}, & \text{if } \|v\|_H\le M_3,\\[1mm] \kappa\|v\|^2_H + \kappa\sup_{t\in[0,T]}\|V_{\mathrm{ref}}(t)\|^2_H + u_{\max}, & \text{if } \|v\|_H> M_3.\end{cases}$$

    In the next section, we present some variants of the model and the corresponding results in terms of optimal control.

We begin this section by giving arguments showing that the results of Theorem 4.3 remain valid for the model of Definition 1.1, which does not exactly fit into our theoretical framework. The variations we then present concern the ChR2 model, the addition of other light-sensitive ionic channels, the way the control acts on the three local characteristics, and the control space. The optimal control problem itself remains unchanged. First of all, let us mention that since the model of Definition 1.1 satisfies the convexity conditions (A), the theoretical part on relaxed controls is not necessary for this model. Nevertheless, the ChR2 model presented in Figure 1 is only one among several others, some of which do not enjoy a linear, or even concave, rate function $\lambda$. For those models, which we present next, assumption (A) fails and relaxed controls are essential.

We will not present them here, but the previous results for the Hodgkin-Huxley model carry over unchanged to other neuron models such as the FitzHugh-Nagumo model or the Morris-Lecar model.

    Optimal control for the original model.

In the original model, the function $f_d$ is defined from $V$ into $V^*$. Nevertheless, the semigroup of the Laplacian regularizes Dirac masses (see [4, Lemma 3.1]) and the uniform bound in Theorem 2.5 is in fact valid in $V$, the solution belonging to $C([0,T],V)$. This is all we need, since the control does not act on the PDE. This is why the domain of our process is $V\times D_N$ and not just $H\times D_N$, and all the computations of the proofs of the previous sections can be done in the Hilbert space $V$. From this consideration, and using the continuous embedding of $H^1_0(I)$ into $C^0(I)$, we can justify the local Lipschitz continuity of $\lambda_d$ from $V\times Z$ to $\mathbb{R}_+$. Indeed, it comes from the local Lipschitz continuity of all the functions $\sigma_{x,y}$ of Table 1 and from the inequality

$$\Big|v\Big(\frac{i}{N}\Big)-w\Big(\frac{i}{N}\Big)\Big| \le \sup_{x\in I}|v(x)-w(x)| \le C\,\|v-w\|_V.$$

Finally, [10, Proposition 5] states that the bounds of Lemma 5.2 remain valid with Dirac masses.

    Modifications of the ChR2 model.

We already mentioned the paper of Nikolic et al. [33] in which a three-state model is presented. It is a somewhat simpler model than the four-state model of Figure 1, but it gives good qualitative results on the photocurrents produced by the ChR2. To a first approximation, the model can be considered to depend linearly on the control, as seen in Figure 2.

Figure 2.  Simplified ChR2 three-state model.

This model features one open state $o$ and two closed states: one light-adapted, $d$, and one dark-adapted, $c$. It would lead to the same type of model as in the previous section. In fact, the time constants $1/K_d$ and $1/K_r$ are also light dependent, with a dependence in $\log(u)$. The corresponding model is represented in Figure 3 below.

Figure 3.  ChR2 three-state model.

Some mathematical comments are needed here. In Figure 3, the control $u$ represents the light intensity, and $c_1$, $c_2$, $K_r$ and $\tau_d$ are positive constants. This model of ChR2 is experimentally accurate for intensities roughly between $10^{8}$ and $10^{10}\ \mu\mathrm{m}^{-2}\mathrm{s}^{-1}$. We would then consider $U:=[0,u_{\max}]$ with $u_{\max}\simeq10^{10}\ \mu\mathrm{m}^{-2}\mathrm{s}^{-1}$. Furthermore,

$$\lim_{u\to0}\big(K_r+c_2\log(u)\big) = -\infty, \qquad \lim_{u\to0}\frac{1}{\tau_d-\log(u)} = 0.$$

The first limit is not physical, since jump rates between states are positive numbers. The second limit is not physical either, because it would mean that, in the dark, the proteins are trapped in the open state $o$, which is not the case. In the dark, when $u=0$, the jump rates corresponding to the transitions $o\to d$ and $d\to c$ are positive constants. For this reason, the functions $\sigma_{o,d}$ and $\sigma_{d,c}$ should be smooth functions that equal the rates of Figure 3 for large intensities, with $\tau_d-\log(u)>0$, and converge to constants $K_d^{\mathrm{dark}}>0$ and $K_r^{\mathrm{dark}}>0$ respectively when $u$ goes to $0$. The resulting rate function $\lambda$ is not concave and thus no longer satisfies assumption (A); we can only assert the existence of optimal relaxed strategies.
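To make the smoothing requirement concrete, here is one possible, entirely hypothetical, choice of smooth positive rates. All constants below are illustrative, not fitted values, and the blend is just one way to recover the dark constants at $u=0$ while matching the expressions of Figure 3 at high intensity.

```python
import numpy as np

# Hypothetical constants (illustrative only, not from the paper or from data).
K_dark_d, K_dark_r = 0.10, 0.05   # assumed dark rates for o->d and d->c
tau_d, c2 = 30.0, 0.5             # assumed positive constants of Figure 3
u_switch = 1e6                    # assumed intensity where light terms take over

def blend(u, dark_rate, bright_rate):
    """Smooth interpolation: -> dark_rate as u -> 0, -> bright_rate(u) for large u."""
    w = u / (u + u_switch)        # smooth weight in [0, 1)
    return (1.0 - w) * dark_rate + w * bright_rate(u)

def sigma_od(u):
    # bright-regime rate 1/(tau_d - log u); tau_d - log u > 0 on the valid range
    return blend(u, K_dark_d, lambda x: 1.0 / (tau_d - np.log(np.maximum(x, 1e-12))))

def sigma_dc(u):
    # bright-regime rate K_r + c2 log u (K_r taken equal to K_dark_r here)
    return blend(u, K_dark_r, lambda x: K_dark_r + c2 * np.log(np.maximum(x, 1e-12)))

for u in (0.0, 1e4, 1e8, 1e10):
    print(f"u={u:.0e}  sigma_od={sigma_od(u):.4f}  sigma_dc={sigma_dc(u):.4f}")
```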

The four-state model of Figure 1 is also an approximation of a more accurate model, which we represent in Figure 4 below. The transition rates can depend on either the membrane potential $v$ or the irradiance $u$, which is the control variable. The details of the model and the numerical constants can be found in [44]. Note that the model of Figure 4 is already an approximation of the model in [44], because the full model in [44] would not lead to a Markovian behavior for the ChR2 (the transition rates would depend on the time elapsed since the light was switched on).

Figure 4.  ChR2 channel: $K_{a1}$, $K_{a2}$ and $K_{d2}$ are positive constants, and

$$K_{d1}(v) = K^{(1)}_{d1} - K^{(2)}_{d1}\tanh\big((v+20)/20\big),$$
$$e_{12}(u) = e_{12d} + c_1\ln(1+u/c), \qquad e_{21}(u) = e_{21d} + c_2\ln(1+u/c), \qquad K_r(v) = K^{(1)}_r\exp\big(-K^{(2)}_r\,v\big),$$

with $K^{(1)}_{d1}$, $K^{(2)}_{d1}$, $e_{12d}$, $e_{21d}$, $c$, $c_1$ and $c_2$ positive constants. As for the model of Figure 3, the mathematical definition of the function $\sigma_{o_1,c_1}$ should be that of a positive smooth function that equals $K_{d1}(v)$ on some subset of the physiological domain $[V_-,V_+]$. The resulting rate function $\lambda$ will be concave, but the function $\lambda Q$ will not be convex (it will be concave as well). Hence, Assumption (A) is not satisfied.

    Addition of other light-sensitive ion channels.

Channelrhodopsin-2 has a promoting role in eliciting action potentials. There also exists a chloride pump, called Halorhodopsin (NpHR), that has an inhibitory action. NpHR can be used along with ChR2 to obtain control in both directions. Its modeling as a multistate model was considered in [34]. The transition rates between the different states have the same shape as those of the ChR2, and the same simplifications are possible. This new light-sensitive channel can easily be incorporated in our stochastic model, and we can state existence of optimal relaxed and/or ordinary control strategies depending on the level of complexity of the NpHR model we consider. It is important to remark here that, since the two ionic channels do not react to the same wavelength of light, the resulting control variable would be two-dimensional, with values in $[0,u_{\max}]^2$. This would not change the qualitative results of the previous sections.

    Modification of the way the control acts on the local characteristics.

Up to now, the control acts only on the rate function, and on the transition measure through its definition from the rate function. Nevertheless, we can present here a modification of the model in which the control acts linearly on the PDE. This modification amounts to considering that the control variable is directly the gating variable of the ChR2. Indeed, we show in [38] that the optimal control of the deterministic counterpart of the stochastic Hodgkin-Huxley-ChR2 model, in finite dimension and with the three-state ChR2 model of Figure 2, is closely linked to the optimal control of

$$\begin{cases}\dfrac{dV}{dt} = g_K\,n^4(t)\,(V_K-V(t)) + g_{Na}\,m^3(t)h(t)\,(V_{Na}-V(t)) + g_{ChR2}\,u(t)\,(V_{ChR2}-V(t)) + g_L\,(V_L-V(t)),\\[1mm] \dfrac{dn}{dt} = \alpha_n(V(t))(1-n(t)) - \beta_n(V(t))\,n(t),\\[1mm] \dfrac{dm}{dt} = \alpha_m(V(t))(1-m(t)) - \beta_m(V(t))\,m(t),\\[1mm] \dfrac{dh}{dt} = \alpha_h(V(t))(1-h(t)) - \beta_h(V(t))\,h(t),\end{cases}$$

where the control variable plays the role of the former gating variable $o$. The stochastic counterpart of this last model is such that the function $f_d$ is now linear in the control, while the rate function $\lambda$ and the transition measure function $Q$ no longer depend on the control. Finally, by adding NpHR channels to this model, we would obtain a fully controlled infinite-dimensional PDMP, in the sense that the control would then act on the three local characteristics of the PDMP. Depending on the model of NpHR chosen, we would obtain relaxed or ordinary optimal control strategies. A numerical sketch of the controlled deterministic system is given below.
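In the sketch below, the Hodgkin-Huxley rate functions $\alpha_x,\beta_x$ and the conductances are the standard squid-axon values, while `g_ChR2`, `V_ChR2` and the open-loop control `u_ctrl` are our own illustrative assumptions, not quantities taken from [38].

```python
import numpy as np
from scipy.integrate import solve_ivp

# Standard squid-axon constants; g_ChR2 and V_ChR2 are assumptions.
g_K, g_Na, g_L, g_ChR2 = 36.0, 120.0, 0.3, 0.6      # mS/cm^2
V_K, V_Na, V_L, V_ChR2 = -77.0, 50.0, -54.4, 0.0    # mV

def alpha_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def beta_n(V):  return 0.125 * np.exp(-(V + 65) / 80)
def alpha_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def beta_m(V):  return 4.0 * np.exp(-(V + 65) / 18)
def alpha_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def beta_h(V):  return 1.0 / (1 + np.exp(-(V + 35) / 10))

def u_ctrl(t):
    # hypothetical open-loop control playing the role of the gating variable o
    return 0.5 if 5.0 <= t <= 10.0 else 0.0

def rhs(t, y):
    V, n, m, h = y
    dV = (g_K * n**4 * (V_K - V) + g_Na * m**3 * h * (V_Na - V)
          + g_ChR2 * u_ctrl(t) * (V_ChR2 - V) + g_L * (V_L - V))
    dn = alpha_n(V) * (1 - n) - beta_n(V) * n
    dm = alpha_m(V) * (1 - m) - beta_m(V) * m
    dh = alpha_h(V) * (1 - h) - beta_h(V) * h
    return [dV, dn, dm, dh]

sol = solve_ivp(rhs, (0.0, 30.0), [-65.0, 0.317, 0.052, 0.596], max_step=0.01)
print("V range: [%.1f, %.1f] mV" % (sol.y[0].min(), sol.y[0].max()))
```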

    Modification of the control space.

In all the models discussed previously, the control has no spatial dependence. Any light-stimulation device, such as a laser, has a spatial resolution, and we may not want, or may not be able, to stimulate the entire axon. For this reason, spatial dependence of the control should be considered, and as long as the control space remains a compact Polish space, it can be. We propose here a control space defined as a subspace of the Skorohod space $D$ of càdlàg functions from $[0,1]$ to $\mathbb{R}$. This control space represents the aggregation of multiple laser beams that can be switched on and off. Suppose that each of these beams produces on the axon a disc of light of diameter $r>0$, which we call the spatial resolution of the light. For an axon represented by the segment $[0,1]$, $r$ is exactly the spatial domain illuminated. We now consider two possibilities for the control space. Suppose first that the spatial resolution is fixed, and define $p:=1/r$ and

$$U := \Big\{u:[0,1]\to[0,u_{\max}]\ \Big|\ u \text{ is constant on } \big[i/p,(i+1)/p\big),\ i=0,\dots,p-1,\ u(1)=u\big((p-1)/p\big)\Big\}.$$

Lemma 5.3. $U$ is a compact subset of $D$.

Proof. It suffices to remark that $U$ is in bijection with the finite-dimensional compact space $[0,u_{\max}]^p$. A sketch of this identification is given below.
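For illustration, this identification is immediate to code: an element of $U$ is just a vector of $p$ intensities. This is a sketch under the assumption of $p$ beams of fixed resolution $r=1/p$; the names and values are ours.

```python
import numpy as np

u_max, p = 1.0, 5
levels = np.random.uniform(0.0, u_max, size=p)   # a point of [0, u_max]^p

def u_of_x(x, levels):
    """Evaluate the piecewise-constant control of U at x in [0, 1]."""
    i = min(int(np.floor(x * len(levels))), len(levels) - 1)  # u(1) = u((p-1)/p)
    return levels[i]

print([round(u_of_x(x, levels), 3) for x in (0.0, 0.3, 0.999, 1.0)])
```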

In this case, the introduction of the space $D$ was somewhat artificial, since the control space remains finite-dimensional. Nevertheless, the Skorohod space will be very useful for the other control space. Suppose now that the spatial resolution of the laser can vary in $[r_{\min},r_{\max}]$ with $r_{\min},r_{\max}>0$. Let $p\in\mathbb{N}^*$ be the number of lasers used, and define

$$\tilde U := \Big\{u:[0,1]\to[0,u_{\max}]\ \Big|\ \exists\,\{x_i\}_{0\le i\le p} \text{ subdivision of } [0,1]:\ u \text{ is constant on } [x_i,x_{i+1}),\ i=0,\dots,p-1,\ u(1)=u(x_{p-1})\Big\}.$$

Now $\tilde U$ is infinite-dimensional, and the Skorohod space allows us to use the characterization of compact subsets of $D$.

Lemma 5.4. $\tilde U$ is a compact subset of $D$.

Proof. For this proof we need to introduce some notation and a criterion of compactness in $D$. A complete treatment of the space $D$ can be found in [7].

Let $u\in D$ and let $\{x_i\}_{0\le i\le n}$ be a subdivision of $[0,1]$, $n\in\mathbb{N}^*$. We define, for $i\in\{0,\dots,n-1\}$,

$$w_u\big([x_i,x_{i+1})\big) := \sup_{x,y\in[x_i,x_{i+1})}|u(x)-u(y)|,$$

and for $\delta>0$,

$$w'_u(\delta) := \inf_{\{x_i\}}\ \max_{0\le i<n} w_u\big([x_i,x_{i+1})\big),$$

the infimum being taken over all the subdivisions $\{x_i\}_{0\le i\le n}$ of $[0,1]$ such that $x_{i+1}-x_i>\delta$ for all $i\in\{0,\dots,n-1\}$. Now, since $\tilde U$ is obviously bounded in $D$, by [7, Theorem 14.3] we only need to show that

$$\lim_{\delta\to0}\,\sup_{u\in\tilde U}\,w'_u(\delta) = 0.$$

Let $\delta>0$ with $\delta<r_{\min}$ and $u\in\tilde U$. There exists a subdivision $\{x_i\}_{0\le i\le p}$ of $[0,1]$ such that, for every $i\in\{0,\dots,p-1\}$, $u$ is constant on $[x_i,x_{i+1})$ and $x_{i+1}-x_i>\delta$. Thus $w'_u(\delta)=0$, which ends the proof.

With either $U$ or $\tilde U$ as the control space, the stochastic controlled infinite-dimensional Hodgkin-Huxley-ChR2 model admits an optimal ordinary control strategy.

    Perspectives.

Theorem 4.3 proves the existence of optimal controls for a wide class of infinite-dimensional PDMPs, and Theorem 4.15 gives sufficient conditions under which ordinary optimal controls can be retrieved. Nevertheless, these theorems do not indicate how to compute optimal controls, or approximations thereof. It would therefore be very interesting, in further work, to implement numerical methods to compute at least approximations of optimal controls for the control problems defined in this paper. One efficient way to address numerical optimal control problems for PDMPs is to use quantization methods, which consist in replacing the state and control spaces by discrete spaces and working with approximations of the processes on these discrete spaces ([29], [35], [20]).
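To fix ideas, the sketch below shows the generic backward dynamic-programming step that such discretized schemes perform once state and control have been quantized. The grids, the transition kernel and the costs are placeholders, not objects computed from the PDMP of this paper or from the cited quantization algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_controls, n_steps = 50, 5, 20

cost = rng.random((n_states, n_controls))        # placeholder running cost c(z, u)
terminal = rng.random(n_states)                  # placeholder terminal cost g(z)
P = rng.random((n_controls, n_states, n_states)) # placeholder kernel P(z' | z, u)
P /= P.sum(axis=2, keepdims=True)                # normalize rows to probabilities

V = terminal.copy()
policy = np.zeros((n_steps, n_states), dtype=int)
for k in reversed(range(n_steps)):
    # Q[z, u] = c(z, u) + E[V(Z') | z, u] on the quantization grid
    Q = cost + np.einsum('uij,j->iu', P, V)
    policy[k] = Q.argmin(axis=1)                 # greedy control at step k
    V = Q.min(axis=1)

print("approximate value at grid point 0:", V[0])
```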

Let $\alpha\in\mathcal{A}$ and let $x:=(v,d,\tau,h,\nu)\in\Xi^{\alpha}$ with $z:=(\nu,d,h)\in\Upsilon$. The existence of the probability $P^{\alpha}_x$ below is the object of the next section, where Theorem 2.5 is proved.

• Let $T_1$ be the time of the first jump of $(X^{\alpha}_t)$. With the notation of Proposition 1, the law of $T_1$ is defined by its survival function, given for all $t>0$ by

$$P^{\alpha}_x(T_1>t) = \exp\Big(-\int_0^t \lambda_{d}\big(\phi^{\alpha}_s(x),\alpha(\nu_s,d_s,h_s)(\tau_s)\big)\,ds\Big).$$

• For $t<T_1$, $X^{\alpha}_t$ solves (10) starting from $x$, namely $(v_t,d_t,\tau_t,h_t,\nu_t)=(\phi^{\alpha}_t(x),d,\tau+t,h,\nu)$.

• When a jump occurs at time $T_1$, conditionally on $T_1$, $X^{\alpha}_{T_1}$ is a random variable distributed according to a measure $\hat Q$ on $(\Xi,\mathcal{B}(\Xi))$, itself defined by a measure $Q$ on $(D,\mathcal{B}(D))$. The target state $d_1$ of the discrete variable is a random variable distributed according to the measure $Q\big(\cdot\,\big|\,\phi^{\alpha}_{T_1}(x),d_{T_1^-},\alpha(\nu_{T_1^-},d_{T_1^-},h_{T_1^-})(\tau_{T_1^-})\big)$, such that for all $B\in\mathcal{B}(D)$,

$$\hat Q\Big(\{\phi^{\alpha}_{T_1}(x)\}\times B\times\{0\}\times\{h+\tau_{T_1^-}\}\times\{\phi^{\alpha}_{T_1}(x)\}\ \Big|\ \phi^{\alpha}_{T_1}(x),d_{T_1^-},\tau_{T_1^-},h_{T_1^-},\nu_{T_1^-},\alpha(T_1)\Big) = Q\big(B\,\big|\,\phi^{\alpha}_{T_1}(x),d,\alpha(\nu,d,h)(\tau+T_1)\big), \tag{33}$$

where we use the notation $\alpha(T_1)=\alpha(\nu_{T_1^-},d_{T_1^-},h_{T_1^-})(\tau_{T_1^-})$. This equality means that the variables $v$ and $\nu$ do not jump at time $T_1$, and the variables $\tau$ and $h$ jump in a deterministic way to $0$ and $h+\tau_{T_1^-}$ respectively.

• The construction iterates after time $T_1$ with the new starting point $(v_{T_1},d_{T_1},0,h+T_1,v_{T_1})$.

Formally, the expressions of the jump rate and the transition measure on $\Xi$ are

$$\lambda(x,u) := \lambda_d(v,u), \qquad \hat Q\big(F\times B\times E\times G\times J\,\big|\,x,u\big) := 1_{F\times E\times G\times J}(v,0,h+\tau,\nu)\,Q(B\,|\,v,d,u),$$

with $F\times B\times E\times G\times J\in\mathcal{B}(\Xi)$, $u\in U$ and $x:=(v,d,\tau,h,\nu)\in\Xi$.

There are two filtered spaces on which we can define the enlarged process $(X^{\alpha})$ of Definition 2.3. They are linked by the one-to-one correspondence between the PDMP $(X^{\alpha})$ and the embedded jump process $(Z^{\alpha})$ that we define now. We introduce both spaces, since each of them is relevant for proving useful properties.

Given a sample path $(X^{\alpha}_s,\ s\le T)$ such that $X^{\alpha}_0:=(v,d,\tau,h,\nu)\in\Xi^{\alpha}$, the jump times $T_k$ of $X^{\alpha}$ can be retrieved by the formula

$$\{T_k,\ k=1,\dots,n\} = \{s\in(0,T]\ |\ h_s\ne h_{s^-}\}.$$

Moreover, we can associate with $X^{\alpha}$ a pure jump process $(Z^{\alpha}_t)_{t\ge0}$ taking values in $\Upsilon$, in one-to-one correspondence, as follows:

$$Z^{\alpha}_t := (\nu_{T_k},d_{T_k},T_k), \qquad T_k\le t<T_{k+1}. \tag{34}$$

Conversely, given the sample path of $Z^{\alpha}$ on $[0,T]$ starting from $Z^{\alpha}_0=(\nu^Z_0,d^Z_0,T^Z_0)$, we can recover the path of $X^{\alpha}$ on $[0,T]$. Denote $Z^{\alpha}_t$ by $(\nu^Z_t,d^Z_t,T^Z_t)$ and define $T_0:=T^Z_0$ and $T_k:=\inf\{t>T_{k-1}\ |\ T^Z_t\ne T^Z_{t^-}\}$. Then

$$\begin{cases}X^{\alpha}_t = \big(\phi^{\alpha}_t(Z^{\alpha}_0),d^Z_0,t,T^Z_0,\nu^Z_0\big), & t<T_1,\\ X^{\alpha}_t = \big(\phi^{\alpha}_{t-T_k}(Z^{\alpha}_{T_k}),d_{T_k},t-T_k,T^Z_{T_k},\nu^Z_{T_k}\big), & T_k\le t<T_{k+1}.\end{cases} \tag{35}$$

Let us note that $T^Z_{T_k}=T_k$ for all $k\in\mathbb{N}$, and that by construction of the PDMP all jumps are detected, since $P^{\alpha}[T_{k+1}=T_k]=0$. When no confusion is possible, we write, for $\alpha\in\mathcal{A}$ and $n\in\mathbb{N}$, $Z_n=Z^{\alpha}_{T_n}$.

Part 1. The canonical space of jump processes with values in $\Upsilon$. The following construction is very classical; see for instance Davis [19], Appendix A1. We adapt it here to our particular process and to the framework of control. Recall that a jump process is defined by a sequence of inter-arrival times and jump locations

$$\omega = (\gamma_0,s_1,\gamma_1,s_2,\gamma_2,\dots), \tag{36}$$

where $\gamma_0\in\Upsilon$ is the initial position and, for $i\in\mathbb{N}^*$, $s_i$ is the time elapsed between the $(i-1)$th and the $i$th jump, while $\gamma_i$ is the location right after the $i$th jump. The jump times $(t_i)_{i\in\mathbb{N}}$ are deduced from the sequence $(s_i)_{i\in\mathbb{N}^*}$ by $t_0=0$ and $t_i=t_{i-1}+s_i$ for $i\in\mathbb{N}^*$, and the jump process $(J_t)_{t\ge0}$ is given by $J_t:=\gamma_i$ for $t\in[t_i,t_{i+1})$ and $J_t=\Delta$ for $t\ge t_{\infty}:=\lim_i t_i$, $\Delta$ being an extra state, called the cemetery.

Accordingly, we introduce $Y^{\Upsilon}:=(\mathbb{R}_+\times\Upsilon)\cup\{(+\infty,\Delta)\}$. Let $(Y^{\Upsilon}_i)_{i\in\mathbb{N}^*}$ be a sequence of copies of the space $Y^{\Upsilon}$. We define $\Omega^{\Upsilon}:=\Upsilon\times\Pi_{i=1}^{\infty}Y^{\Upsilon}_i$, the canonical space of jump processes with values in $\Upsilon$, endowed with its Borel σ-algebra $\mathcal{F}^{\Upsilon}$ and the coordinate mappings on $\Omega^{\Upsilon}$ defined as follows:

$$\begin{cases}S_i:\Omega^{\Upsilon}\to\mathbb{R}_+\cup\{\infty\}, & \omega\mapsto S_i(\omega)=s_i, \quad i\in\mathbb{N}^*,\\ \Gamma_i:\Omega^{\Upsilon}\to\Upsilon\cup\{\Delta\}, & \omega\mapsto\Gamma_i(\omega)=\gamma_i, \quad i\in\mathbb{N}.\end{cases} \tag{37}$$

We also introduce, for $i\in\mathbb{N}^*$, the mappings $\omega^i:\Omega^{\Upsilon}\to\Omega^{\Upsilon}_i$ defined by

$$\omega^i(\omega) := \big(\Gamma_0(\omega),S_1(\omega),\Gamma_1(\omega),\dots,S_i(\omega),\Gamma_i(\omega)\big)$$

for $\omega\in\Omega^{\Upsilon}$. Now, for $\omega\in\Omega^{\Upsilon}$ and $i\in\mathbb{N}^*$, let

$$T_0(\omega):=0, \qquad T_i(\omega):=\begin{cases}\sum_{k=1}^i S_k(\omega), & \text{if } S_k(\omega)\ne\infty \text{ and } \Gamma_k(\omega)\ne\Delta,\ k=1,\dots,i,\\ \infty, & \text{if } S_k(\omega)=\infty \text{ or } \Gamma_k(\omega)=\Delta \text{ for some } k=1,\dots,i,\end{cases} \qquad T_{\infty}(\omega):=\lim_{i\to\infty}T_i(\omega),$$

and let the sample path $(x_t(\omega))_{t\ge0}$ be defined by

$$x_t(\omega):=\begin{cases}\Gamma_i(\omega), & T_i(\omega)\le t<T_{i+1}(\omega),\\ \Delta, & t\ge T_{\infty}(\omega).\end{cases} \tag{38}$$

A relevant filtration for our problem is the natural filtration of the coordinate process $(x_t)_{t\ge0}$ on $\Omega^{\Upsilon}$:

$$\mathcal{F}^{\Upsilon}_t := \sigma\{x_s\ |\ s\le t\},$$

for all $t\in\mathbb{R}_+$. For a given starting point $\gamma_0\in\Upsilon$ and control strategy $\alpha\in\mathcal{A}$, a controlled probability measure, denoted $P^{\alpha}_{\gamma_0}$, is defined on $\Omega^{\Upsilon}$ by the specification of a family of controlled conditional distribution functions as follows: $\mu_1$ is a controlled probability measure on $(Y^{\Upsilon},\mathcal{B}(Y^{\Upsilon}))$, or equivalently a measurable mapping from $U_{ad}([0,T];U)$ to the set of probability measures on $(Y^{\Upsilon},\mathcal{B}(Y^{\Upsilon}))$, such that for all $\alpha\in\mathcal{A}$,

$$\mu_1\big(\alpha(\gamma_0);\,(\{0\}\times\Upsilon)\cup(\mathbb{R}_+\times\{\gamma_0\})\big) = 0.$$

For $i\in\mathbb{N}\setminus\{0,1\}$, the $\mu_i:\Omega^{\Upsilon}_{i-1}\times U_{ad}([0,T];U)\times\mathcal{B}(Y^{\Upsilon})\to[0,1]$ are controlled transition measures satisfying:

1. $\mu_i(\cdot\,;\Sigma)$ is measurable for each $\Sigma\in\mathcal{B}(Y^{\Upsilon})$;

2. $\mu_i\big(\omega^{i-1}(\omega),\alpha(\Gamma_{i-1}(\omega));\,\cdot\,\big)$ is a probability measure for every $\omega\in\Omega^{\Upsilon}$ and $\alpha\in\mathcal{A}$;

3. $\mu_i\big(\omega^{i-1}(\omega),\alpha(\Gamma_{i-1}(\omega));\,(\{0\}\times\Upsilon)\cup(\mathbb{R}_+\times\{\Gamma_{i-1}(\omega)\})\big)=0$ for every $\omega\in\Omega^{\Upsilon}$ and $\alpha\in\mathcal{A}$;

4. $\mu_i\big(\omega^{i-1}(\omega),\alpha(\Gamma_{i-1}(\omega));\,\{(\infty,\Delta)\}\big)=1$ if $S_k(\omega)=\infty$ or $\Gamma_k(\omega)=\Delta$ for some $k\in\{1,\dots,i-1\}$, for every $\alpha\in\mathcal{A}$.

We need to extend the definition of $\alpha\in\mathcal{A}$ to the state $(\infty,\Delta)$ by setting $\alpha(\Delta):=u_{\Delta}$, where $u_{\Delta}$ is itself an isolated cemetery state, so that $\alpha$ in fact takes values in $U_{ad}([0,T];U\cup\{u_{\Delta}\})$.

Now, for a given control strategy $\alpha\in\mathcal{A}$, $P^{\alpha}_{\gamma_0}$ is the unique probability measure on $(\Omega^{\Upsilon},\mathcal{F}^{\Upsilon})$ such that, for each $i\in\mathbb{N}^*$ and every bounded measurable function $f$ on $\Omega^{\Upsilon}_i$,

$$\int_{\Omega^{\Upsilon}}f(\omega^i(\omega))\,P^{\alpha}_{\gamma_0}(d\omega) = \int_{Y^{\Upsilon}_1}\cdots\int_{Y^{\Upsilon}_i}f(y_1,\dots,y_i)\,\mu_i\big(y_1,\dots,y_{i-1},\alpha(y_{i-1});dy_i\big)\,\mu_{i-1}\big(y_1,\dots,y_{i-2},\alpha(y_{i-2});dy_{i-1}\big)\cdots\mu_1\big(\alpha(\gamma_0);dy_1\big),$$

with $\alpha$ depending only on the variable in $\Upsilon$ when writing "$\alpha(y_{i-1})$", $y_{i-1}=(s_{i-1},\gamma_{i-1})$. Let us now denote by $\mathcal{F}^{\Upsilon}_{\gamma,\alpha}$ and $(\mathcal{F}^{\Upsilon,\gamma,\alpha}_t)_{t\ge0}$ the completions of $\mathcal{F}^{\Upsilon}$ and $(\mathcal{F}^{\Upsilon}_t)_{t\ge0}$ with all the $P^{\alpha}_{\gamma}$-null sets of $\mathcal{F}^{\Upsilon}$. We then redefine $\mathcal{F}^{\Upsilon}$ and $(\mathcal{F}^{\Upsilon}_t)_{t\ge0}$ as the intersections of these completed σ-fields, so that

$$\mathcal{F}^{\Upsilon} := \bigcap_{\gamma\in\Upsilon}\bigcap_{\alpha\in\mathcal{A}}\mathcal{F}^{\Upsilon}_{\gamma,\alpha}, \qquad \mathcal{F}^{\Upsilon}_t := \bigcap_{\gamma\in\Upsilon}\bigcap_{\alpha\in\mathcal{A}}\mathcal{F}^{\Upsilon,\gamma,\alpha}_t \quad \text{for all } t\ge0.$$

Then $(\Omega^{\Upsilon},\mathcal{F}^{\Upsilon},(\mathcal{F}^{\Upsilon}_t)_{t\ge0})$ is the natural filtered space of controlled jump processes.

Part 2. The canonical space of càdlàg functions with values in $\Xi$. Let $\Omega^{\Xi}$ be the set of right-continuous functions with left limits (càdlàg functions) defined on $\mathbb{R}_+$ with values in $\Xi$. Analogously to what we have done in Part 1, we can construct a filtered space $(\Omega^{\Xi},\mathcal{F}^{\Xi},(\mathcal{F}^{\Xi}_t)_{t\ge0})$ with coordinate process $(x^{\Xi}_t)_{t\ge0}$, and a probability $P^{\alpha}$ on $(\Omega^{\Xi},\mathcal{F}^{\Xi})$ for every control strategy $\alpha\in\mathcal{A}$, such that the infinite-dimensional PDMP is a $P^{\alpha}$-strong Markov process. For $(t,y)\in\mathbb{R}_+\times\Omega^{\Xi}$, $x^{\Xi}_t(y)=y(t)$.

We start with the definition of $\mathcal{F}^{\Xi,0}_t := \sigma\{x^{\Xi}_s\ |\ s\le t\}$ for $t\in\mathbb{R}_+$, and $\mathcal{F}^{\Xi,0} := \bigvee_{t\ge0}\mathcal{F}^{\Xi,0}_t$. In Davis [19], p. 59, the construction of the PDMP is carried out on the Hilbert cube, the space of sequences of independent random variables uniformly distributed on $[0,1]$. In the case of controlled PDMPs, the survival function $F(t,x)$ of [19] is replaced by the extension to $\Xi^{\alpha}$ of $\chi^{\alpha}$ defined in Definition 2.3, and the construction depends on the chosen control. This extension is defined for $x:=(v,d,\tau,h,\nu)\in\Xi^{\alpha}$ by

$$\chi^{\alpha}_t(x) := \exp\Big(-\int_0^t\lambda_d\big(\phi^{\alpha}_s(x),\alpha_{\tau+s}(\nu,d,h)\big)\,ds\Big),$$

so that for $z:=(v,d,h)\in\Upsilon$, $\chi^{\alpha}_t(z)=\chi^{\alpha}_t(v,d,0,h,v)$.

This procedure thus provides, for each control $\alpha\in\mathcal{A}$ and starting point $x\in\Xi^{\alpha}$, a measurable mapping $\psi^{\alpha}_x$ from the Hilbert cube to $\Omega^{\Xi}$. Let $P^{\alpha}_x := P\circ(\psi^{\alpha}_x)^{-1}$ denote the image measure of the Hilbert-cube probability $P$ under $\psi^{\alpha}_x$. Now, for $x\in\Xi^{\alpha}$, let $\mathcal{F}^{x,\alpha}_t$ be the completion of $\mathcal{F}^{\Xi,0}_t$ with all the $P^{\alpha}_x$-null sets of $\mathcal{F}^{\Xi,0}$, and define

$$\mathcal{F}^{\Xi}_t := \bigcap_{\alpha\in\mathcal{A},\,x\in\Xi^{\alpha}}\mathcal{F}^{x,\alpha}_t. \tag{39}$$

The right-continuity of $(\mathcal{F}^{\Xi}_t)_{t\ge0}$ follows from the right-continuity of $(\mathcal{F}^{\Upsilon}_t)_{t\ge0}$ and the one-to-one correspondence; the right-continuity of $(\mathcal{F}^{\Upsilon}_t)_{t\ge0}$ is a classical result on right-constant processes. For these reasons, we drop the superscripts $\Xi$ and $\Upsilon$ and consider the natural filtration $(\mathcal{F}_t)_{t\ge0}$ in the sequel.

Now that we have a filtered probability space satisfying the usual conditions, let us show that the simple Markov property holds for $(X^{\alpha}_t)$. Let $\alpha\in\mathcal{A}$ be a control strategy, $s>0$ and $k\in\mathbb{N}$. By construction of the process $(X^{\alpha}_t)_{t\ge0}$,

$$P^{\alpha}\big[T_{k+1}-T_k>s\ \big|\ \mathcal{F}_{T_k}\big] = \exp\Big(-\int_0^s\lambda_{d_{T_k}}\big(\phi^{\alpha}_u(X^{\alpha}_{T_k}),\alpha_u(\nu_{T_k},d_{T_k},h_{T_k})\big)\,du\Big) = \chi^{\alpha}_s(X^{\alpha}_{T_k}).$$

Now, for $x\in\Xi^{\alpha}$, $(t,s)\in\mathbb{R}^2_+$ and $k\in\mathbb{N}$,

$$P^{\alpha}_x\big[T_{k+1}>t+s\ \big|\ \mathcal{F}_t\big]\,1_{\{T_k\le t<T_{k+1}\}} = P^{\alpha}_x\big[T_{k+1}-T_k>t+s-T_k\ \big|\ \mathcal{F}_t\big]\,1_{\{0\le t-T_k<T_{k+1}-T_k\}}$$
$$\overset{(*)}{=} \exp\Big(-\int_{t-T_k}^{t+s-T_k}\lambda_{d_{T_k}}\big(\phi^{\alpha}_u(X^{\alpha}_{T_k}),\alpha_u(\nu_{T_k},d_{T_k},h_{T_k})\big)\,du\Big)\,1_{\{0\le t-T_k<T_{k+1}-T_k\}}$$
$$= \exp\Big(-\int_0^s\lambda_{d_{T_k}}\big(\phi^{\alpha}_{u+t-T_k}(X^{\alpha}_{T_k}),\alpha_{u+t-T_k}(\nu_{T_k},d_{T_k},h_{T_k})\big)\,du\Big)\,1_{\{0\le t-T_k<T_{k+1}-T_k\}}.$$

    The equality (*) is the classical formula for jump processes (see Jacod [32]). On the other hand,

$$\chi^{\alpha}_s(X^{\alpha}_t)\,1_{\{T_k\le t<T_{k+1}\}} = \exp\Big(-\int_0^s\lambda_{d_t}\big(\phi^{\alpha}_u(X^{\alpha}_t),\alpha_{u+\tau_t}(\nu_t,d_t,h_t)\big)\,du\Big)\,1_{\{T_k\le t<T_{k+1}\}}$$
$$= \exp\Big(-\int_0^s\lambda_{d_{T_k}}\big(\phi^{\alpha}_u(X^{\alpha}_t),\alpha_{u+t-T_k}(\nu_{T_k},d_{T_k},h_{T_k})\big)\,du\Big)\,1_{\{T_k\le t<T_{k+1}\}}$$
$$= \exp\Big(-\int_0^s\lambda_{d_{T_k}}\big(\phi^{\alpha}_{u+t-T_k}(X^{\alpha}_{T_k}),\alpha_{u+t-T_k}(\nu_{T_k},d_{T_k},h_{T_k})\big)\,du\Big)\,1_{\{T_k\le t<T_{k+1}\}},$$

because $X^{\alpha}_t=\big(\phi^{\alpha}_{t-T_k}(X^{\alpha}_{T_k}),d_{T_k},t-T_k,h_{T_k},\nu_{T_k}\big)$ and, by the flow property, $\phi^{\alpha}_u(X^{\alpha}_t)=\phi^{\alpha}_{u+t-T_k}(X^{\alpha}_{T_k})$ on $\{T_k\le t<T_{k+1}\}$.

Thus we have shown that, for all $x\in\Xi^{\alpha}$, $(t,s)\in\mathbb{R}^2_+$ and $k\in\mathbb{N}$,

$$P^{\alpha}_x\big[T_{k+1}>t+s\ \big|\ \mathcal{F}_t\big]\,1_{\{T_k\le t<T_{k+1}\}} = \chi^{\alpha}_s(X^{\alpha}_t)\,1_{\{T_k\le t<T_{k+1}\}}.$$

Now, if we write $T^{\alpha}_t := \inf\{s>t:\ X^{\alpha}_s\ne X^{\alpha}_{s^-}\}$ for the next jump time of the process after $t$, we get

$$P^{\alpha}_x\big[T^{\alpha}_t>t+s\ \big|\ \mathcal{F}_t\big] = \chi^{\alpha}_s(X^{\alpha}_t), \tag{40}$$

which means that, conditionally on $\mathcal{F}_t$, the next jump has the same distribution as the first jump of the process started at $X^{\alpha}_t$. Since the location of the jump depends only on the position at the jump time, and not before, equality (40) is exactly what we need to prove that our process satisfies the simple Markov property.

To extend the proof to the strong Markov property, the application of Theorem (25.5) of Davis [19], on the characterization of stopping times of jump processes on Borel spaces, is straightforward.

From the results of [10], there is no difficulty in finding the expression of the extended generator $\mathcal{G}^{\alpha}$ and its domain:

• Let $\alpha\in\mathcal{A}$. The domain $D(\mathcal{G}^{\alpha})$ of $\mathcal{G}^{\alpha}$ is the set of all measurable $f:\Xi\to\mathbb{R}$ such that $t\mapsto f(\phi^{\alpha}_t(x),d,\tau+t,h,\nu)$ is absolutely continuous on $\mathbb{R}_+$ for all $x=(v,d,\tau,h,\nu)\in\Xi^{\alpha}$, and such that $(v_0,d_0,\tau_0,h_0,\nu_0,t,\omega)\mapsto f(v_0,d_0,\tau_0,h_0,\nu_0)-f(v(t,\omega),d(t,\omega),\tau(t,\omega),h(t,\omega),\nu(t,\omega))$ is a valid integrand for the associated random jump measure.

• Let $f$ be continuously differentiable w.r.t. $v\in V$ and $\tau\in\mathbb{R}_+$. Define $h_v$ as the unique element of $V^*$ such that

$$\frac{df}{dv}[v,d,\tau,h,\nu](y) = \big\langle h_v(v,d,\tau,h,\nu),\,y\big\rangle_{V^*,V} \qquad \forall y\in V,$$

where $\frac{df}{dv}[v,d,\tau,h,\nu]$ denotes the Fréchet derivative of $f$ w.r.t. $v$, evaluated at $(v,d,\tau,h,\nu)$. If $h_v(v,d,\tau,h,\nu)\in V$ whenever $v\in V$, and is bounded in $V$ for bounded arguments, then for almost every $t\in[0,T]$,

$$\mathcal{G}^{\alpha}f(v,d,\tau,h,\nu) = \partial_{\tau}f(v,d,\tau,h,\nu) + \big\langle h_v(v,d,\tau,h,\nu),\,Lv+f_d(v,\alpha_{\tau}(\nu,d,h))\big\rangle_{V,V^*} + \lambda_d\big(v,\alpha_{\tau}(\nu,d,h)\big)\int_D\big[f(v,p,0,h+\tau,v)-f(v,d,\tau,h,\nu)\big]\,Q^{\alpha}(dp\,|\,v,d). \tag{41}$$

The bound on the continuous component of the PDMP comes from the following estimate. Let $\alpha\in\mathcal{A}$ and $x:=(v,d,\tau,h,\nu)\in\Xi^{\alpha}$, and denote by $v^{\alpha}$ the first component of $X^{\alpha}$. Then, for $t\in[0,T]$,

$$\|v^{\alpha}_t\|_H \le \|S(t)v\|_H + \int_0^t\big\|S(t-s)\,f_{d_s}\big(v^{\alpha}_s,\alpha_{\tau_s}(\nu_s,d_s,h_s)\big)\big\|_H\,ds \le M_S\|v\|_H + \int_0^t M_S\big(b_1+b_2\|v^{\alpha}_s\|_H\big)\,ds \le M_S\big(\|v\|_H+b_1T\big)\,e^{M_Sb_2T}, \tag{42}$$

by Gronwall's inequality.

Part 1. Let us first consider the case where $w$ is bounded by a constant $\bar w$, and define for $(z,\gamma)\in\Upsilon\times\mathcal{R}$

$$W(z,\gamma) = \int_0^{T-h}\chi^{\gamma}_s(z)\Big(\int_Z w\big(\phi^{\gamma}_s(z),d,h+s,u\big)\,\gamma(s)(du)\Big)\,ds.$$

Now take $(z,\gamma)\in\Upsilon\times\mathcal{R}$ and suppose $(z_n,\gamma_n)\to(z,\gamma)$. Write $z=(v,d,h)$ and $z_n=(v_n,d_n,h_n)$ for $n\in\mathbb{N}$. For $s\in[0,T]$, let $w_n(s,u):=w(\phi^{\gamma_n}_s(z_n),d_n,h_n+s,u)$ and $w_{\infty}(s,u):=w(\phi^{\gamma}_s(z),d,h+s,u)$. Let also $a_n=\min(T-h,T-h_n)$ and $b_n=\max(T-h,T-h_n)$. Then

$$|W(z_n,\gamma_n)-W(z,\gamma)| \le \Big|\int_{a_n}^{b_n}\chi^{\gamma_n}_s(z_n)\int_Z w_n(s,u)\,\gamma_n(s)(du)\,ds\Big| + \int_0^{T-h}\chi^{\gamma_n}_s(z_n)\int_Z|w_n(s,u)-w_{\infty}(s,u)|\,\gamma_n(s)(du)\,ds$$
$$+ \Big|\int_0^{T-h}\chi^{\gamma_n}_s(z_n)\int_Z w_{\infty}(s,u)\,\gamma_n(s)(du)\,ds - \int_0^{T-h}\chi^{\gamma}_s(z)\int_Z w_{\infty}(s,u)\,\gamma_n(s)(du)\,ds\Big|$$
$$+ \Big|\int_0^{T-h}\chi^{\gamma}_s(z)\int_Z w_{\infty}(s,u)\,\gamma_n(s)(du)\,ds - \int_0^{T-h}\chi^{\gamma}_s(z)\int_Z w_{\infty}(s,u)\,\gamma(s)(du)\,ds\Big|.$$

The first term on the right-hand side converges to zero as $n\to\infty$, since the integrand is bounded and $b_n-a_n=|h_n-h|\to0$. The second term converges to zero by dominated convergence, using the continuity of $w$ and the continuity of $\phi$ proved in Lemma 4.12.

The third term converges to zero again by dominated convergence, provided that, for every $s\in[0,T-h]$, $\chi^{\gamma_n}_s(z_n)\to\chi^{\gamma}_s(z)$. For this convergence to hold, it is enough that, for every $s\in[0,T-h]$,

$$\int_0^s\int_Z\lambda_{d_n}\big(\phi^{\gamma_n}_r(z_n),u\big)\,\gamma_n(r)(du)\,dr \longrightarrow \int_0^s\int_Z\lambda_d\big(\phi^{\gamma}_r(z),u\big)\,\gamma(r)(du)\,dr.$$

It is enough to take $n$ large enough so that $d_n=d$ and to write

$$\Big|\int_0^s\int_Z\lambda_d\big(\phi^{\gamma_n}_r(z_n),u\big)\,\gamma_n(r)(du)\,dr - \int_0^s\int_Z\lambda_d\big(\phi^{\gamma}_r(z),u\big)\,\gamma(r)(du)\,dr\Big|$$
$$\le \int_0^s\int_Z\big|\lambda_d\big(\phi^{\gamma_n}_r(z_n),u\big)-\lambda_d\big(\phi^{\gamma}_r(z),u\big)\big|\,\gamma_n(r)(du)\,dr + \Big|\int_0^s\int_Z\lambda_d\big(\phi^{\gamma}_r(z),u\big)\,\big[\gamma_n(r)(du)-\gamma(r)(du)\big]\,dr\Big|.$$

By the local Lipschitz property of $\lambda_d$, the first term on this right-hand side is controlled by $\sup_{r\in[0,T]}\|\phi^{\gamma_n}_r(z_n)-\phi^{\gamma}_r(z)\|_H$, which converges to zero by Lemma 4.12. The second term converges to zero by the definition of weak* convergence in $L^{\infty}(\mathcal{M}(Z))$. Finally, the fourth term of the initial decomposition converges to zero by the same weak* convergence argument.

Part 2. In the general case where $|w|\le c_wB$, the function $c_wB-w$ is continuous and nonnegative, and $\big(\min(c_wB-w,m)\big)_{m\in\mathbb{N}}$ is a monotone sequence of bounded continuous functions converging pointwise to $c_wB-w$. By the first part of the proof, the corresponding integral functionals are bounded, continuous, monotone, and converge to the functional associated with $c_wB-w$, which is thus semicontinuous. Since $b$ is a continuous bounding function, it is easy to show that the functional associated with $B$ is continuous, so that in fact $W$ is upper semicontinuous. Now, considering the function $c_wB+w$ in the same way, we easily show that $W$ is also lower semicontinuous, so that finally $W$ is continuous.

Now the continuity of the applications $(z,\gamma)\mapsto c(z,\gamma)$ and $(z,\gamma)\mapsto Qw(z,\gamma)$ comes from the previous result applied to the continuous functions defined for $(z,u)\in\Upsilon\times U$, $z=(v,d,h)$, by $c(v,u)$ and $\lambda_d(v,u)\int_D w(v,r,h)\,Q(dr\,|\,v,d,u)$ respectively. Here the different continuity assumptions (H(λ))2.-3., (H(c))1. and (H(Q)) are needed.

[1] A. S. Ackleh and L. Ke, Existence-uniqueness and long time behavior for a class of nonlocal nonlinear parabolic evolution equations, Proc. Amer. Math. Soc., 128 (2000), 3483-3492. doi: 10.1090/S0002-9939-00-05912-8
    [2] V. Anaya, M. Bendahmane and M. Sepúlveda, Mathematical and numerical analysis for reaction-diffusion systems modeling the spread of early tumors, Bol. Soc. Esp. Mat. Apl., (2009), 55-62.
    [3] V. Anaya, M. Bendahmane and M. Sepúlveda, A numerical analysis of a reaction-diffusion system modelling the dynamics of growth tumors, Math. Models Methods Appl. Sci., 20 (2010), 731-756. doi: 10.1142/S0218202510004428
    [4] B. Ainseba, M. Bendahmane and A. Noussair, A reaction-diffusion system modeling predator-prey with prey-taxis, Nonlinear Anal. Real World Appl., 128 (2008), 2086-2105. doi: 10.1016/j.nonrwa.2007.06.017
    [5] L. Bai and K. Wang, A diffusive stage-structured model in a polluted environment, Nonlinear Anal. Real World Appl., 7 (2006), 96-108. doi: 10.1016/j.nonrwa.2004.11.010
    [6] M. Bendahmane, K. H. Karlsen and J. M. Urbano, On a two-sidedly degenerate chemotaxis model with volume-filling effect, Math. Models Methods Appl. Sci., 17 (2007), 783-804. doi: 10.1142/S0218202507002108
    [7] M. Bendahmane and M. Sepúlveda, Convergence of a finite volume scheme for nonlocal reaction-diffusion systems modelling an epidemic disease, Discrete Contin. Dyn. Syst. Ser. B, 11 (2009), 823-853. doi: 10.3934/dcdsb.2009.11.823
    [8] M. Chipot and B. Lovat, Some remarks on nonlocal elliptic and parabolic problem, Nonlinear Anal., 30 (1997), 4619-4627. doi: 10.1016/S0362-546X(97)00169-7
    [9] B. Dubey and J. Hussain, Modelling the interaction of two biological species in a polluted environment, J. Math. Anal. Appl., 246 (2000), 58-79. doi: 10.1006/jmaa.2000.6741
    [10] B. Dubey and J. Hussain, Models for the effect of environmental pollution on forestry resources with time delay, Nonlinear Anal. Real World Appl., 5 (2004), 549-570. doi: 10.1016/j.nonrwa.2004.01.001
    [11] R. Eymard, Th. Gallouët and R. Herbin, "Finite Volume Methods. Handbook of Numerical Analysis," vol. VII, North-Holland, Amsterdam, 2000.
    [12] R. Eymard, D. Hilhorst and M. Vohralík, A combined finite volume-nonconforming/mixed-hybrid finite element scheme for degenerate parabolic problems, Numer. Math., 105 (2006), 73-131. doi: 10.1007/s00211-006-0036-z
    [13] H. I. Freedman and J. B. Shukla, Models for the effects of toxicant in single-species and predator-prey systems, J. Math. Biol., 30 (1991), 15-30. doi: 10.1007/BF00168004
[14] T. G. Hallam, C. E. Clark and R. R. Lassiter, Effects of toxicants on populations: A qualitative approach I. Equilibrium environmental exposure, Ecol. Model, 18 (1983), 291-304. doi: 10.1016/0304-3800(83)90019-4
[15] T. G. Hallam, C. E. Clark and G. S. Jordan, Effects of toxicants on populations: A qualitative approach II. First order kinetics, J. Math. Biol., 18 (1983), 25-37.
    [16] T. G. Hallam and J. T. De Luna, Effects of toxicants on populations: A qualitative approach III. Environment and food chains pathways, J. Theor. Biol., 109 (1984), 11-29. doi: 10.1016/S0022-5193(84)80090-9
    [17] J.-L. Lions, "Quelques Méthodes de Résolution des Problèmes aux Limites Non Linéaires," Dunod, 1969.
    [18] C. A. Raposo, M. Sepúlveda, O. Vera, D. Carvalho Pereira and M. Lima Santos, Solution and asymptotic behavior for a nonlocal coupled system of reaction-diffusion, Acta Appl. Math. 102 (2008), 37-56. doi: 10.1007/s10440-008-9207-5
[19] J. Simon, Compact sets in the space $L^p(0,T;B)$, Ann. Mat. Pura Appl., 146 (1987), 65-96. doi: 10.1007/BF01762360
    [20] J. B. Shukla and B. Dubey, Simultaneous effect of two toxicants on biological species: A mathematical model, J. Biol. Syst., 4 (1996), 109-130. doi: 10.1142/S0218339096000090
    [21] R. Temam, "Navier-Stokes Equations, Theory and Numerical Analysis," 3rd revised edition, North-Holland, Amsterdam, reprinted in the AMS Chelsea series, AMS, Providence, 2001.
[22] M. Vohralík, "Numerical Methods for Nonlinear Elliptic and Parabolic Equations. Application to Flow Problems in Porous and Fractured Media," Ph.D. dissertation, Université de Paris-Sud and Czech Technical University, Prague, 2004.
    [23] X. Yang, Z. Jin and Y. Xue, Weak average persistence and extinction of a predator-prey system in a polluted environment with impulsive toxicant input, Chaos Solitons Fractals, 31 (2007), 726-735. doi: 10.1016/j.chaos.2005.10.042
    [24] K. Yosida, "Functional Analysis and its Applications," New York, Springer-Verlag, 1971.