Research article

C^{1,1}-smoothness of constrained solutions in the calculus of variations with application to mean field games

  • Received: 23 June 2018 Accepted: 09 September 2018 Published: 31 October 2018
  • We derive necessary optimality conditions for minimizers of regular functionals in the calculus of variations under smooth state constraints. This classical problem has been widely investigated in the literature. The novelty of our result lies in the fact that the presence of state constraints enters the Euler-Lagrange equations as a local feedback, which allows us to derive the C^{1,1}-smoothness of solutions. As an application, we discuss a constrained Mean Field Games problem, for which our optimality conditions allow us to construct Lipschitz relaxed solutions, thus improving an existence result due to the first two authors.

    Citation: Piermarco Cannarsa, Rossana Capuani, Pierre Cardaliaguet. C^{1,1}-smoothness of constrained solutions in the calculus of variations with application to mean field games[J]. Mathematics in Engineering, 2019, 1(1): 174-203. doi: 10.3934/Mine.2018.1.174



    The centrality of necessary conditions in optimal control is well known and has given rise to an immense literature in the fields of optimization and nonsmooth analysis; see, e.g., [3,16,17,29,33,35].

    In control theory, the celebrated Pontryagin Maximum Principle plays the role of the classical Euler-Lagrange equations in the calculus of variations. In the case of unrestricted state space, such conditions provide Lagrange multipliers---the so-called co-states---in the form of solutions to a suitable adjoint system satisfying a certain transversality condition. Among various applications of necessary optimality conditions is the deduction of further regularity properties for minimizers which, a priori, would just be absolutely continuous.

    When state constraints are present, a large body of results provide adaptations of the Pontryagin Principle by introducing appropriate corrections in the adjoint system. The price to pay for such extensions usually consists of reduced regularity for optimal trajectories which, due to constraint reactions, turn out to be just Lipschitz continuous while the associated co-states are of bounded variation, see [20].

    The maximum principle under state constraints was first established by Dubovitskii and Milyutin [17] (see also the monograph [35] for different forms of such a result). It may happen that the maximum principle is degenerate and does not yield much information (abnormal maximum principle). As explained in [8,10,18,19] in various contexts, the so-called "inward pointing condition" generally ensures the normality of the maximum principle under state constraints. In our setting (a calculus of variations problem, with constraints on positions but not on velocities), this will never be an issue. The maximum principle under state constraints generally involves an adjoint state which is the sum of a W^{1,1} map and a map of bounded variation. This latter map may be very irregular and have infinitely many jumps [32], which allows for discontinuities in optimal controls. However, under suitable assumptions (requiring regularity of the data and dynamics that are affine with respect to the controls), it has been shown that optimal controls and the corresponding adjoint states are continuous, and even Lipschitz continuous: see the seminal work by Hager [22] (in the convex setting) and the subsequent contributions by Malanowski [31] and Galbraith and Vinter [21] (in much more general frameworks). Generalizations to less smooth frameworks can also be found in [9,18].

    Let Ω ⊂ R^n be a bounded open domain with C^2 boundary. Let Γ be the metric subspace of AC(0,T;R^n) defined by

    Γ = { γ ∈ AC(0,T;R^n) : γ(t) ∈ Ω̄, ∀ t ∈ [0,T] },

    with the uniform metric. For any x ∈ Ω̄, we set

    Γ[x] = { γ ∈ Γ : γ(0) = x }.

    We consider the problem of minimizing the classical functional of the calculus of variations

    J[γ] = ∫_0^T f(t,γ(t),γ̇(t)) dt + g(γ(T)).

    Let U ⊂ R^n be an open set such that Ω̄ ⊂ U. Given x ∈ Ω̄, we consider the constrained minimization problem

    inf_{γ ∈ Γ[x]} J[γ],   where   J[γ] = ∫_0^T f(t,γ(t),γ̇(t)) dt + g(γ(T)),   (1.1)

    where f : [0,T]×U×R^n → R and g : U → R. In this paper, we obtain a formulation of the necessary optimality conditions for the above problem which is particularly useful for studying the regularity of minimizers. More precisely, given a minimizer γ ∈ Γ[x] of (1.1), we prove that there exists a Lipschitz continuous arc p : [0,T] → R^n such that

    { γ̇(t) = D_pH(t,γ(t),p(t))   for all t ∈ [0,T],
      ṗ(t) = −D_xH(t,γ(t),p(t)) + Λ(t,γ,p) 1_{∂Ω}(γ) Db_Ω(γ(t))   for a.e. t ∈ [0,T],   (1.2)

    where Λ is a bounded continuous function independent of γ and p (Theorem 3.1). From the above necessary conditions we derive a sort of maximal regularity, showing that any solution γ is of class C^{1,1}. As is customary for this kind of problem, the proof relies on the analysis of a suitable penalized functional, which has the following form:

    inf_{γ ∈ AC(0,T;R^n), γ(0)=x} { ∫_0^T [ f(t,γ(t),γ̇(t)) + (1/ϵ) d_Ω(γ(t)) ] dt + (1/δ) d_Ω(γ(T)) + g(γ(T)) }.

    Then, we show that all solutions of the penalized problem remain in Ω̄ (Lemma 3.7).

    A direct consequence of our necessary conditions is the Lipschitz regularity of the value function associated with (1.1) (Proposition 4.1).

    Our interest is also motivated by applications to mean field games, as we explain below. Mean field games (MFG) theory has been developed simultaneously by Lasry and Lions ([25,26,27]) and by Huang, Malhamé and Caines ([23,24]) in order to study differential games with an infinite number of rational players in competition. The simplest MFG model leads to systems of partial differential equations involving two unknown functions: the value function u of an optimal control problem of a typical player and the density m of the population of players. In the presence of state constraints, the usual construction of solutions to the MFG system has to be completely revised, because the minimizers of the problem lack many of the good properties of the unconstrained case. Such constructions are discussed in detail in [11], where a relaxed notion of solution to the constrained MFG problem was introduced following the so-called Lagrangian formulation (see [4,5,6,7,13,14]). In this paper, applying our necessary conditions, we deduce the existence of more regular solutions than those constructed in [11], assuming the data to be Lipschitz continuous.

    This paper is organised as follows. In Section 2, we introduce the notation and recall preliminary results. In Section 3, we derive necessary conditions for the constrained problem. Moreover, we prove the C1,1-smoothness of minimizers. In Section 4, we apply our necessary conditions to obtain the Lipschitz regularity of the value function for the constrained problem. Furthermore, we deduce the existence of more regular constrained MFG equilibria. Finally, in the Appendix, we prove a technical result on limiting subdifferentials.

    Throughout this paper we denote by |·| and ⟨·,·⟩, respectively, the Euclidean norm and scalar product in R^n. Let A ∈ R^{n×n} be a matrix. We denote by ||A|| the norm of A defined as follows:

    ||A|| = max_{x ∈ R^n, |x|=1} |Ax|.

    For any subset S ⊂ R^n, S̄ stands for its closure, ∂S for its boundary, and S^c for R^n∖S. We denote by 1_S : R^n → {0,1} the characteristic function of S, i.e.,

    1_S(x) = { 1 if x ∈ S,   0 if x ∈ S^c. }

    We write AC(0,T;R^n) for the space of all absolutely continuous R^n-valued functions on [0,T], equipped with the uniform norm ||γ||_∞ = sup_{[0,T]} |γ(t)|. We observe that AC(0,T;R^n) is not a Banach space with respect to this norm.

    Let U be an open subset of R^n. C(U) is the space of all continuous functions on U and C_b(U) is the space of all bounded continuous functions on U. C^k(U) is the space of all functions ϕ : U → R that are k-times continuously differentiable. Let ϕ ∈ C^1(U). The gradient vector of ϕ is denoted by Dϕ = (D_{x_1}ϕ, …, D_{x_n}ϕ), where D_{x_i}ϕ = ∂ϕ/∂x_i. Let ϕ ∈ C^k(U) and let α = (α_1, …, α_n) ∈ N^n be a multi-index. We define D^α ϕ = D_{x_1}^{α_1} ⋯ D_{x_n}^{α_n} ϕ. C_b^k(U) is the space of all functions ϕ ∈ C^k(U) such that

    ||ϕ||_{k,∞} := sup_{x ∈ U} Σ_{|α| ≤ k} |D^α ϕ(x)| < ∞.

    Let Ω be a bounded open subset of R^n with C^2 boundary. C^{1,1}(Ω̄) is the space of all functions that are C^1 in a neighborhood U of Ω̄ and have locally Lipschitz continuous first-order derivatives in U.

    The distance function from Ω̄ is the function d_Ω : R^n → [0,+∞) defined by

    d_Ω(x) := inf_{y ∈ Ω̄} |x−y|   (x ∈ R^n).

    We define the oriented boundary distance from ∂Ω by

    b_Ω(x) = d_Ω(x) − d_{Ω^c}(x)   (x ∈ R^n).

    We recall that, since the boundary of Ω is of class C^2, there exists ρ_0 > 0 such that

    b_Ω(·) ∈ C_b^2  on  Σ_{ρ_0} = { y ∈ B(x,ρ_0) : x ∈ ∂Ω }.   (2.1)

    Throughout the paper, we suppose that ρ0 is fixed so that (2.1) holds.
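The distance and oriented-distance functions admit simple closed forms for model domains. As an illustration not taken from the paper, the following sketch assumes Ω is the open unit ball in R^n, for which d_Ω(x) = max(|x|−1, 0) and b_Ω(x) = |x|−1:

```python
import numpy as np

def d_omega(x):
    # d_Omega(x) = inf_{y in closure(Omega)} |x - y| for Omega = open unit ball.
    return max(np.linalg.norm(x) - 1.0, 0.0)

def d_complement(x):
    # Distance to the complement of Omega: 1 - |x| inside the ball, 0 outside.
    return max(1.0 - np.linalg.norm(x), 0.0)

def b_omega(x):
    # Oriented boundary distance b_Omega = d_Omega - d_{Omega^c}: negative in
    # Omega, zero on the boundary, positive outside; here it equals |x| - 1.
    return d_omega(x) - d_complement(x)
```

For this Ω one can take any ρ_0 < 1 in (2.1), since b_Ω(x) = |x|−1 is smooth away from the origin.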

    Take a continuous function f : R^n → R and a point x ∈ R^n. A vector p ∈ R^n is said to be a proximal subgradient of f at x if there exist ϵ > 0 and C ≥ 0 such that

    ⟨p, y−x⟩ ≤ f(y) − f(x) + C|y−x|²   for all y that satisfy |y−x| ≤ ϵ.

    The set of all proximal subgradients of f at x is called the proximal subdifferential of f at x and is denoted by ∂_p f(x). A vector p ∈ R^n is said to be a limiting subgradient of f at x if there exist sequences x_i ∈ R^n, p_i ∈ ∂_p f(x_i) such that x_i → x and p_i → p as i → ∞.

    The set of all limiting subgradients of f at x is called the limiting subdifferential and is denoted by ∂f(x). In particular, for the distance function we have the following result.

    Lemma 2.1. Let Ω be a bounded open subset of R^n with C^2 boundary. Then, for every x ∈ R^n it holds that

    ∂_p d_Ω(x) = ∂d_Ω(x) = { {Db_Ω(x)} if 0 < b_Ω(x) < ρ_0,   Db_Ω(x)[0,1] if x ∈ ∂Ω,   {0} if x ∈ Ω, }

    where ρ_0 is as in (2.1) and Db_Ω(x)[0,1] denotes the set { Db_Ω(x)α : α ∈ [0,1] }.

    The proof is given in the Appendix.
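For a convex model domain such as the unit ball, Lemma 2.1 can be spot-checked numerically: at a boundary point x, every p = αDb_Ω(x) with α ∈ [0,1] should satisfy the proximal-subgradient inequality. A small sketch under these assumptions (the choice C = 1, the base point, and the sample size are all illustrative):

```python
import numpy as np

def d_omega(y):
    # Distance to the closed unit ball, standing in for d_Omega.
    return max(np.linalg.norm(y) - 1.0, 0.0)

# At the boundary point x = (1, 0) we have Db_Omega(x) = x, and Lemma 2.1
# predicts that p = alpha * Db_Omega(x) is a proximal subgradient of d_Omega
# for every alpha in [0, 1]. We spot-check the defining inequality
# <p, y - x> <= d_Omega(y) - d_Omega(x) + C|y - x|^2 with C = 1 on random
# points y near x.
rng = np.random.default_rng(0)
x = np.array([1.0, 0.0])
samples = x + 0.1 * rng.standard_normal((500, 2))
ok = all(
    (alpha * x) @ (y - x) <= d_omega(y) - d_omega(x) + np.sum((y - x) ** 2) + 1e-12
    for alpha in (0.0, 0.5, 1.0)
    for y in samples
)
```

Here `ok` remains true because, for a convex Ω, d_Ω is convex and the inequality even holds with C = 0.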

    Let X be a separable metric space. C_b(X) is the space of all bounded continuous functions on X. We denote by B(X) the family of Borel subsets of X and by P(X) the family of all Borel probability measures on X. The support of η ∈ P(X), supp(η), is the closed set defined by

    supp(η) := { x ∈ X : η(V) > 0 for each neighborhood V of x }.

    We say that a sequence (η_i) ⊂ P(X) is narrowly convergent to η ∈ P(X) if

    lim_{i→∞} ∫_X f(x) dη_i(x) = ∫_X f(x) dη(x)   ∀ f ∈ C_b(X).

    We denote by d_1 the Kantorovich-Rubinstein distance on X, which---when X is compact---can be characterized as follows:

    d_1(m, m′) = sup { ∫_X f(x) dm(x) − ∫_X f(x) dm′(x) : f : X → R is 1-Lipschitz },   (2.2)

    for all m, m′ ∈ P(X).
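When X is a compact interval of the real line, the supremum in (2.2) reduces to the L¹ distance between cumulative distribution functions. A minimal sketch for discrete measures (the atom locations and weights below are illustrative; weights are assumed to sum to 1):

```python
import numpy as np

def d1_on_line(xs, w1, w2):
    # Kantorovich-Rubinstein distance between two discrete probability
    # measures on the real line, with atoms at xs and weights w1, w2:
    # d_1(m, m') = integral of |F_m - F_{m'}| between consecutive atoms.
    xs, w1, w2 = map(np.asarray, (xs, w1, w2))
    order = np.argsort(xs)
    xs, w1, w2 = xs[order], w1[order], w2[order]
    F1 = np.cumsum(w1)[:-1]          # CDF values on the gaps between atoms
    F2 = np.cumsum(w2)[:-1]
    return float(np.sum(np.abs(F1 - F2) * np.diff(xs)))
```

For example, the distance between a Dirac mass at 0 and a Dirac mass at 1 is 1, which matches taking f(x) = −x in (2.2).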

    Let Ω be a bounded open subset of Rn with C2 boundary. We write Lip(0,T;P(¯Ω)) for the space of all maps m:[0,T]P(¯Ω) that are Lipschitz continuous with respect to d1, i.e.,

    d_1(m(t), m(s)) ≤ C|t−s|,   ∀ t,s ∈ [0,T],   (2.3)

    for some constant C0. We denote by Lip(m) the smallest constant that verifies (2.3).

    Let Ω ⊂ R^n be a bounded open set with C^2 boundary. Let Γ be the metric subspace of AC(0,T;R^n) defined by

    Γ = { γ ∈ AC(0,T;R^n) : γ(t) ∈ Ω̄, ∀ t ∈ [0,T] }.

    For any x ∈ Ω̄, we set

    Γ[x] = { γ ∈ Γ : γ(0) = x }.

    Let U ⊂ R^n be an open set such that Ω̄ ⊂ U. Given x ∈ Ω̄, we consider the constrained minimization problem

    inf_{γ ∈ Γ[x]} J[γ],   where   J[γ] = ∫_0^T f(t,γ(t),γ̇(t)) dt + g(γ(T)).   (3.1)

    We denote by X[x] the set of solutions of (3.1), that is,

    X[x] = { γ ∈ Γ[x] : J[γ] = inf_{Γ[x]} J }.

    We assume that f : [0,T]×U×R^n → R and g : U → R satisfy the following conditions.

    (g1) g ∈ C_b^1(U).

    (f0) f ∈ C([0,T]×U×R^n) and for all t ∈ [0,T] the function (x,v) ↦ f(t,x,v) is differentiable. Moreover, D_xf and D_vf are continuous on [0,T]×U×R^n and there exists a constant M ≥ 0 such that

    |f(t,x,0)| + |D_xf(t,x,0)| + |D_vf(t,x,0)| ≤ M   ∀ (t,x) ∈ [0,T]×U.   (3.2)

    (f1) For all t ∈ [0,T] the map (x,v) ↦ D_vf(t,x,v) is continuously differentiable and there exists a constant μ ≥ 1 such that

    I/μ ≤ D²_{vv}f(t,x,v) ≤ Iμ,   (3.3)
    ||D²_{vx}f(t,x,v)|| ≤ μ(1+|v|),   (3.4)

    for all (t,x,v) ∈ [0,T]×U×R^n, where I denotes the identity matrix.

    (f2) For all (x,v) ∈ U×R^n the function t ↦ f(t,x,v) and the map t ↦ D_vf(t,x,v) are Lipschitz continuous. Moreover, there exists a constant κ ≥ 0 such that

    |f(t,x,v) − f(s,x,v)| ≤ κ(1+|v|²)|t−s|,   (3.5)
    |D_vf(t,x,v) − D_vf(s,x,v)| ≤ κ(1+|v|)|t−s|,   (3.6)

    for all t, s ∈ [0,T], x ∈ U, v ∈ R^n.

    Remark 3.1. By classical results in the calculus of variations (see, e.g., [15, Theorem 11.1i]), there exists at least one minimizer of (3.1) in Γ for any fixed point x ∈ Ω̄.

    In the next lemma we show that (f0)-(f2) imply useful growth conditions on f and its derivatives.

    Lemma 3.1. Suppose that (f0)-(f2) hold. Then there exists a positive constant C(μ,M), depending only on μ and M, such that

    |D_vf(t,x,v)| ≤ C(μ,M)(1+|v|),   (3.7)
    |D_xf(t,x,v)| ≤ C(μ,M)(1+|v|²),   (3.8)
    (1/(4μ))|v|² − C(μ,M) ≤ f(t,x,v) ≤ 4μ|v|² + C(μ,M),   (3.9)

    for all (t,x,v) ∈ [0,T]×U×R^n.

    Proof. By (3.2) and (3.3) one has

    |D_vf(t,x,v)| ≤ |D_vf(t,x,v) − D_vf(t,x,0)| + |D_vf(t,x,0)| ≤ ∫_0^1 ||D²_{vv}f(t,x,τv)|| |v| dτ + |D_vf(t,x,0)| ≤ μ|v| + M ≤ C(μ,M)(1+|v|),

    and so (3.7) holds. Furthermore, by (3.2) and (3.4) we have

    |D_xf(t,x,v)| ≤ |D_xf(t,x,v) − D_xf(t,x,0)| + |D_xf(t,x,0)| ≤ ∫_0^1 ||D²_{xv}f(t,x,τv)|| |v| dτ + M ≤ μ(1+|v|)|v| + M ≤ C(μ,M)(1+|v|²).

    Therefore, (3.8) holds. Moreover, for fixed v ∈ R^n there exists a point ξ of the segment with endpoints 0, v such that

    f(t,x,v) = f(t,x,0) + ⟨D_vf(t,x,0), v⟩ + (1/2)⟨D²_{vv}f(t,x,ξ)v, v⟩.

    By (3.2), (3.3), and (3.7) we have

    −C(μ,M) + (1/(4μ))|v|² ≤ −M − C(μ,M)|v| + (1/(2μ))|v|² ≤ f(t,x,v) ≤ M + C(μ,M)|v| + (μ/2)|v|² ≤ C(μ,M) + 4μ|v|²,

    and so (3.9) holds. This completes the proof.

    In the next result we show a special property of the minimizers of (3.1).

    Lemma 3.2. For any x ∈ Ω̄ and for any γ ∈ X[x] we have

    ∫_0^T (1/(4μ))|γ̇(t)|² dt ≤ K,

    where

    K := T(C(μ,M) + M) + 2 sup_U |g|.   (3.10)

    Proof. Let x ∈ Ω̄ and let γ ∈ X[x]. By comparing the cost of γ with the cost of the constant trajectory γ(t) ≡ x, one has

    ∫_0^T f(t,γ(t),γ̇(t)) dt + g(γ(T)) ≤ ∫_0^T f(t,x,0) dt + g(x) ≤ T sup_{[0,T]×U} |f(t,x,0)| + sup_U |g|.   (3.11)

    Using (3.2) and (3.9) in (3.11), one has

    ∫_0^T (1/(4μ))|γ̇(t)|² dt ≤ K,

    where K := T(C(μ,M) + M) + 2 sup_U |g|.

    We denote by H : [0,T]×U×R^n → R the Hamiltonian

    H(t,x,p) = sup_{v ∈ R^n} { ⟨p,v⟩ − f(t,x,v) },   (t,x,p) ∈ [0,T]×U×R^n.
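For the model Lagrangian f(t,x,v) = |v|²/2, which satisfies (f1) with μ = 1, the Legendre transform above gives H(t,x,p) = |p|²/2. A small numerical sketch approximating the supremum over a finite velocity grid (the grid bounds are an assumption; the true supremum runs over all of R^n):

```python
import numpy as np

def hamiltonian(p, f, v_grid):
    # H(p) = sup_v { <p, v> - f(v) }, approximated by a maximum over a finite
    # grid of velocities (t and x are frozen for this illustration).
    values = v_grid @ p - np.array([f(v) for v in v_grid])
    return float(values.max())

# Model Lagrangian f(v) = |v|^2 / 2; its Legendre transform is H(p) = |p|^2/2,
# attained at the unique maximizer v = p (= D_pH(p)).
f_model = lambda v: 0.5 * float(v @ v)
v_grid = np.linspace(-5.0, 5.0, 1001).reshape(-1, 1)
H_at_one = hamiltonian(np.array([1.0]), f_model, v_grid)
```

With step 0.01 the grid contains the exact maximizer v = 1, so `H_at_one` agrees with |p|²/2 = 0.5 up to rounding.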

    Our assumptions on f imply that H satisfies the following conditions.

    (H0) H ∈ C([0,T]×U×R^n) and for all t ∈ [0,T] the function (x,p) ↦ H(t,x,p) is differentiable. Moreover, D_xH and D_pH are continuous on [0,T]×U×R^n and there exists a constant M′ ≥ 0 such that

    |H(t,x,0)| + |D_xH(t,x,0)| + |D_pH(t,x,0)| ≤ M′   ∀ (t,x) ∈ [0,T]×U.   (3.12)

    (H1) For all t ∈ [0,T] the map (x,p) ↦ D_pH(t,x,p) is continuously differentiable and

    I/μ ≤ D²_{pp}H(t,x,p) ≤ Iμ,   (3.13)
    ||D²_{px}H(t,x,p)|| ≤ C(μ,M)(1+|p|),   (3.14)

    for all (t,x,p) ∈ [0,T]×U×R^n, where μ is the constant given in (f1) and C(μ,M) depends only on μ and M.

    (H2) For all (x,p) ∈ U×R^n the function t ↦ H(t,x,p) and the map t ↦ D_pH(t,x,p) are Lipschitz continuous. Moreover,

    |H(t,x,p) − H(s,x,p)| ≤ κC(μ,M)(1+|p|²)|t−s|,   (3.15)
    |D_pH(t,x,p) − D_pH(s,x,p)| ≤ κC(μ,M)(1+|p|)|t−s|,   (3.16)

    for all t, s ∈ [0,T], x ∈ U, p ∈ R^n, where κ is the constant given in (f2) and C(μ,M) depends only on μ and M.

    Remark 3.2. Arguing as in Lemma 3.1, we deduce that

    |D_pH(t,x,p)| ≤ C(μ,M)(1+|p|),   (3.17)
    |D_xH(t,x,p)| ≤ C(μ,M)(1+|p|²),   (3.18)
    (1/(4μ))|p|² − C(μ,M) ≤ H(t,x,p) ≤ 4μ|p|² + C(μ,M),   (3.19)

    for all (t,x,p) ∈ [0,T]×U×R^n, where C(μ,M) depends only on μ and M.

    Under the above assumptions on Ω, f and g our necessary conditions can be stated as follows.

    Theorem 3.1. For any x ∈ Ω̄ and any γ ∈ X[x] the following holds true.

    (i) γ is of class C^{1,1}([0,T];Ω̄).

    (ii) There exist:

    (a) a Lipschitz continuous arc p : [0,T] → R^n,

    (b) a constant ν ∈ R such that

    0 ≤ ν ≤ max{ 1, 2μ sup_{x∈U} |D_pH(T,x,−Dg(x))| },

    which satisfy the adjoint system

    { γ̇ = D_pH(t,γ,p)   for all t ∈ [0,T],
      ṗ = −D_xH(t,γ,p) + Λ(t,γ,p) 1_{∂Ω}(γ) Db_Ω(γ)   for a.e. t ∈ [0,T],   (3.20)

    and the transversality condition

    p(T) = −Dg(γ(T)) − ν Db_Ω(γ(T)) 1_{∂Ω}(γ(T)),

    where Λ : [0,T]×Σ_{ρ_0}×R^n → R is a bounded continuous function independent of γ and p.

    Moreover,

    (iii) the following estimate holds:

    ||γ̇||_∞ ≤ L,   ∀ γ ∈ X[x],   (3.21)

    where L = L(μ, M, M′, κ, T, ||Dg||_∞, ||g||_∞).

    The (feedback) function Λ in (3.20) can be computed explicitly, see Remark 3.4 below.

    In this section, we prove Theorem 3.1 in the special case U = R^n. The proof for a general open set U will be given in the next section.

    The proof is based on [12, Theorem 2.1], where the Maximum Principle under state constraints is obtained for a Mayer problem. The argument requires several intermediate steps.

    Fix x¯Ω. The key point is to approximate the constrained problem by penalized problems as follows

    inf_{γ ∈ AC(0,T;R^n), γ(0)=x} { ∫_0^T [ f(t,γ(t),γ̇(t)) + (1/ϵ) d_Ω(γ(t)) ] dt + (1/δ) d_Ω(γ(T)) + g(γ(T)) }.   (3.22)

    Then, we will show that, for ϵ > 0 and δ ∈ (0,1] small enough, the solutions of the penalized problem remain in Ω̄.
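To illustrate the penalization mechanism (this is an illustration, not the paper's proof device), one can discretize (3.22) in one dimension with Ω = (−1,1), f(t,x,v) = |v|²/2 and g(x) = x, and minimize by subgradient descent; the terminal cost pulls the trajectory toward the boundary, while the term (1/ϵ)d_Ω keeps it essentially inside Ω̄:

```python
import numpy as np

def penalized_cost(gamma, h, eps):
    # Discrete version of (3.22) for f(t,x,v) = |v|^2/2, g(x) = x and
    # Omega = (-1, 1), so that d_Omega(x) = max(|x| - 1, 0).
    v = np.diff(gamma) / h
    d = np.maximum(np.abs(gamma) - 1.0, 0.0)
    return h * np.sum(v ** 2) / 2 + (h / eps) * np.sum(d) + gamma[-1]

def cost_gradient(gamma, h, eps):
    # Hand-coded (sub)gradient of penalized_cost; gamma[0] = x stays fixed.
    g = np.zeros_like(gamma)
    v = np.diff(gamma) / h
    g[:-1] -= v                 # kinetic term, left endpoints
    g[1:] += v                  # kinetic term, right endpoints
    g += (h / eps) * np.sign(gamma) * (np.abs(gamma) > 1.0)  # penalty term
    g[-1] += 1.0                # terminal cost g(x) = x
    g[0] = 0.0                  # initial condition gamma(0) = x is fixed
    return g

def minimize(x0=0.0, T=2.0, n=40, eps=0.05, lr=0.01, iters=20000):
    h = T / n
    gamma = np.full(n + 1, x0)
    for _ in range(iters):
        gamma -= lr * cost_gradient(gamma, h, eps)
    return gamma, h
```

Starting from the constant trajectory at x = 0, the computed minimizer drifts down toward the boundary point −1 but does not significantly cross it, matching the behavior established rigorously in Lemma 3.7.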

    Observe that the Hamiltonian associated with the penalized problem is given by

    H_ϵ(t,x,p) = sup_{v ∈ R^n} { ⟨p,v⟩ − f(t,x,v) } − (1/ϵ) d_Ω(x) = H(t,x,p) − (1/ϵ) d_Ω(x),   (3.23)

    for all (t,x,p) ∈ [0,T]×R^n×R^n.

    By classical results in the calculus of variations (see, e.g., [15, Section 11.2]), there exists at least one minimizer of (3.22) in AC(0,T;R^n) for any fixed initial point x ∈ Ω̄. We denote by X_{ϵ,δ}[x] the set of solutions of (3.22).

    Remark 3.3. Arguing as in Lemma 3.2, we have that, for any x ∈ Ω̄, all γ ∈ X_{ϵ,δ}[x] satisfy

    ∫_0^T [ (1/(4μ))|γ̇(t)|² + (1/ϵ) d_Ω(γ(t)) ] dt ≤ K,   (3.24)

    where K is the constant given in (3.10).

    The first step of the proof consists in showing that the solutions of the penalized problem remain in a neighborhood of ¯Ω.

    Lemma 3.3. Let ρ_0 be such that (2.1) holds. For any ρ ∈ (0,ρ_0], there exists ϵ(ρ) > 0 such that for all ϵ ∈ (0,ϵ(ρ)] and all δ ∈ (0,1] we have

    ∀ x ∈ Ω̄, ∀ γ ∈ X_{ϵ,δ}[x] :   sup_{t ∈ [0,T]} d_Ω(γ(t)) ≤ ρ.   (3.25)

    Proof. We argue by contradiction. Assume that, for some ρ > 0, there exist sequences {ϵ_k}, {δ_k}, {t_k}, {x_k} and {γ_k} such that

    ϵ_k ↓ 0, δ_k > 0, t_k ∈ [0,T], x_k ∈ Ω̄, γ_k ∈ X_{ϵ_k,δ_k}[x_k] and d_Ω(γ_k(t_k)) > ρ, for all k ≥ 1.

    By Remark 3.3, one has that for all k ≥ 1

    ∫_0^T [ (1/(4μ))|γ̇_k(t)|² + (1/ϵ_k) d_Ω(γ_k(t)) ] dt ≤ K,

    where K is the constant given in (3.10). The above inequality implies that γ_k is 1/2-Hölder continuous with Hölder constant (4μK)^{1/2}. Then, by the Lipschitz continuity of d_Ω and the regularity of γ_k, we have

    d_Ω(γ_k(t_k)) − d_Ω(γ_k(s)) ≤ (4μK)^{1/2} |t_k − s|^{1/2},   ∀ s ∈ [0,T].

    Since d_Ω(γ_k(t_k)) > ρ, one has

    d_Ω(γ_k(s)) > ρ − (4μK)^{1/2} |t_k − s|^{1/2}.

    Hence, d_Ω(γ_k(s)) ≥ ρ/2 for all s ∈ J := [t_k − ρ²/(16μK), t_k + ρ²/(16μK)] ∩ [0,T] and all k ≥ 1. So,

    K ≥ (1/ϵ_k) ∫_0^T d_Ω(γ_k(t)) dt ≥ (1/ϵ_k) ∫_J d_Ω(γ_k(t)) dt ≥ (1/ϵ_k) ρ³/(32μK).

    But the above inequality contradicts the fact that ϵ_k ↓ 0. So, (3.25) holds true.

    In the next lemma, we show the necessary conditions for the minimizers of the penalized problem.

    Lemma 3.4. Let ρ ∈ (0,ρ_0] and let ϵ ∈ (0,ϵ(ρ)], where ϵ(ρ) is given by Lemma 3.3. Fix δ ∈ (0,1], let x_0 ∈ Ω̄, and let γ ∈ X_{ϵ,δ}[x_0]. Then,

    (i) γ is of class C^{1,1}([0,T];R^n);

    (ii) there exist an arc p ∈ Lip(0,T;R^n), a measurable map λ : [0,T] → [0,1], and a constant β ∈ [0,1] such that

    { γ̇(t) = D_pH(t,γ(t),p(t))   for all t ∈ [0,T],
      ṗ(t) = −D_xH(t,γ(t),p(t)) + (λ(t)/ϵ) Db_Ω(γ(t))   for a.e. t ∈ [0,T],
      p(T) = −Dg(γ(T)) − (β/δ) Db_Ω(γ(T)),   (3.26)

    where

    λ(t) ∈ { {0} if γ(t) ∈ Ω,   {1} if 0 < d_Ω(γ(t)) < ρ,   [0,1] if γ(t) ∈ ∂Ω, }   (3.27)

    and

    β ∈ { {0} if γ(T) ∈ Ω,   {1} if 0 < d_Ω(γ(T)) < ρ,   [0,1] if γ(T) ∈ ∂Ω. }   (3.28)

    Moreover,

    (iii) the function

    r(t) := H(t,γ(t),p(t)) − (1/ϵ) d_Ω(γ(t)),   t ∈ [0,T],

    belongs to AC(0,T;R) and satisfies

    ∫_0^T |ṙ(t)| dt ≤ κ(T + 4μK),

    where K is the constant given in (3.10) and κ, μ are the constants in (3.5) and (3.9), respectively;

    (iv) the following estimate holds:

    |p(t)|² ≤ 4μ [ (1/ϵ) d_Ω(γ(t)) + C_1/δ² ],   ∀ t ∈ [0,T],   (3.29)

    where C_1 = 8μ + 8μ||Dg||²_∞ + 2C(μ,M) + κ(T + 4μK).

    Proof. In order to use the Maximum Principle in the version of [35, Theorem 8.7.1], we rewrite (3.22) as a Mayer problem in a higher-dimensional state space. Define X(t) ∈ R^n×R as

    X(t) = (γ(t), z(t)),

    where z(t) = ∫_0^t [ f(s,γ(s),γ̇(s)) + (1/ϵ) d_Ω(γ(s)) ] ds. Then the state equation becomes

    { Ẋ(t) = (γ̇(t), ż(t)) = F_ϵ(t,X(t),u(t)),   X(0) = (x_0, 0), }

    where

    F_ϵ(t,X,u) = (u, L_ϵ(t,x,u))

    and L_ϵ(t,x,u) = f(t,x,u) + (1/ϵ) d_Ω(x) for X = (x,z) and (t,x,z,u) ∈ [0,T]×R^n×R×R^n. Thus, (3.22) can be written as

    min { Φ(X_u(T)) : u ∈ L¹ },   (3.30)

    where Φ(X) = g(x) + (1/δ) d_Ω(x) + z for any X = (x,z) ∈ R^n×R. The associated unmaximized Hamiltonian is given by

    H_ϵ(t,X,P,u) = ⟨P, F_ϵ(t,X,u)⟩,   (t,X,P,u) ∈ [0,T]×R^{n+1}×R^{n+1}×R^n.

    We observe that, as γ(·) is a minimizer for (3.22), X is a minimizer for (3.30). Hence, the hypotheses of [35, Theorem 8.7.1] are satisfied. It follows that there exist P(·) = (p(·), b(·)) ∈ AC(0,T;R^{n+1}), r(·) ∈ AC(0,T;R), and λ_0 ≥ 0 such that

    (ⅰ) (P, λ_0) ≠ (0, 0),

    (ⅱ) (ṙ(t), −Ṗ(t)) ∈ co ∂_{t,X} H_ϵ(t,X(t),P(t),γ̇(t)),   a.e. t ∈ [0,T],

    (ⅲ) −P(T) ∈ λ_0 ∂Φ(X_u(T)),

    (ⅳ) H_ϵ(t,X(t),P(t),γ̇(t)) = max_{u ∈ R^n} H_ϵ(t,X(t),P(t),u),   a.e. t ∈ [0,T],

    (ⅴ) H_ϵ(t,X(t),P(t),γ̇(t)) = r(t),   a.e. t ∈ [0,T],

    where ∂_{t,X} H_ϵ and ∂Φ denote the limiting subdifferentials of H_ϵ and Φ with respect to (t,X) and X, respectively, while co stands for the closed convex hull. Using the definition of H_ϵ we have that

    (p, b, λ_0) ≠ (0, 0, 0),   (3.31)
    (ṙ(t), −ṗ(t)) ∈ b(t) co ∂_{t,x} L_ϵ(t,γ(t),γ̇(t)),   (3.32)
    ḃ(t) = 0,   (3.33)
    −p(T) ∈ λ_0 ∂(g + (1/δ) d_Ω)(γ(T)),   (3.34)
    b(T) = −λ_0,   (3.35)
    r(t) = H_ϵ(t,γ(t),p(t)),   (3.36)

    where ∂_{t,x} L_ϵ and ∂(g + (1/δ) d_Ω) stand for the limiting subdifferentials of L_ϵ(·,·,u) and g(·) + (1/δ) d_Ω(·). We claim that λ_0 > 0. Indeed, suppose that λ_0 = 0. Then b ≡ 0 by (3.33) and (3.35). Moreover, p(T) = 0 by (3.34). It follows from (3.32) that p ≡ 0, which is in contradiction with (3.31). So λ_0 > 0 and we may rescale p and b so that −b(t) = λ_0 = 1 for all t ∈ [0,T].

    Note that the Weierstrass condition (ⅳ) becomes

    ⟨p(t), γ̇(t)⟩ − f(t,γ(t),γ̇(t)) = sup_{u ∈ R^n} { ⟨p(t), u⟩ − f(t,γ(t),u) }.   (3.37)

    Therefore,

    γ̇(t) = D_pH(t,γ(t),p(t)),   a.e. t ∈ [0,T].   (3.38)

    By Lemma 2.1, the definition of ρ, and (3.5) we have

    ∂_{t,x} L_ϵ(t,x,u) ⊂ { [−κ(1+|u|²), κ(1+|u|²)] × {D_xf(t,x,u)} if x ∈ Ω,
    [−κ(1+|u|²), κ(1+|u|²)] × {D_xf(t,x,u) + (1/ϵ) Db_Ω(x)} if 0 < b_Ω(x) < ρ,
    [−κ(1+|u|²), κ(1+|u|²)] × (D_xf(t,x,u) + (1/ϵ)[0,1] Db_Ω(x)) if x ∈ ∂Ω. }

    Thus (3.32) implies that there exists λ(t) ∈ [0,1] as in (3.27) such that

    |ṙ(t)| ≤ κ(1+|γ̇(t)|²),   a.e. t ∈ [0,T],   (3.39)
    ṗ(t) = D_xf(t,γ(t),γ̇(t)) + (λ(t)/ϵ) Db_Ω(γ(t)),   a.e. t ∈ [0,T].   (3.40)

    Hence, by (3.39) and Remark 3.3, we conclude that

    ∫_0^T |ṙ(t)| dt ≤ κ ∫_0^T (1+|γ̇(t)|²) dt ≤ κ(T + 4μK).   (3.41)

    Moreover, by Lemma 2.1 and the assumptions on g, one has

    ∂(g + (1/δ) d_Ω)(x) ⊂ { {Dg(x)} if x ∈ Ω,   {Dg(x) + (1/δ) Db_Ω(x)} if 0 < b_Ω(x) < ρ,   Dg(x) + (1/δ)[0,1] Db_Ω(x) if x ∈ ∂Ω. }

    So, by (3.34), there exists β ∈ [0,1] as in (3.28) such that

    p(T) = −Dg(γ(T)) − (β/δ) Db_Ω(γ(T)).   (3.42)

    Finally, by well-known properties of the Legendre transform one has

    D_xH(t,x,p) = −D_xf(t,x,D_pH(t,x,p)).

    So, recalling (3.38), (3.40) can be rewritten as

    ṗ(t) = −D_xH(t,γ(t),p(t)) + (λ(t)/ϵ) Db_Ω(γ(t)),   a.e. t ∈ [0,T].

    We have to prove estimate (3.29). Recalling (3.23) and (3.19), we have

    H_ϵ(t,γ(t),p(t)) = H(t,γ(t),p(t)) − (1/ϵ) d_Ω(γ(t)) ≥ (1/(4μ))|p(t)|² − C(μ,M) − (1/ϵ) d_Ω(γ(t)).

    So, using (3.41), one has

    |H_ϵ(T,γ(T),p(T)) − H_ϵ(t,γ(t),p(t))| = |r(T) − r(t)| ≤ ∫_t^T |ṙ(s)| ds ≤ κ(T + 4μK).

    Moreover, (3.42) implies that |p(T)| ≤ 1/δ + ||Dg||_∞. Therefore, using (3.19) again, we obtain

    (1/(4μ))|p(t)|² − C(μ,M) − (1/ϵ) d_Ω(γ(t)) ≤ H_ϵ(t,γ(t),p(t)) ≤ H_ϵ(T,γ(T),p(T)) + κ(T + 4μK) ≤ 4μ|p(T)|² + C(μ,M) + κ(T + 4μK) ≤ 8μ[ 1/δ² + ||Dg||²_∞ ] + C(μ,M) + κ(T + 4μK).

    Hence,

    |p(t)|² ≤ 4μ [ (1/ϵ) d_Ω(γ(t)) + C_1/δ² ],

    where C_1 = 8μ + 8μ||Dg||²_∞ + 2C(μ,M) + κ(T + 4μK). This completes the proof of (3.29).

    Finally, by the regularity of H, we have that p ∈ Lip(0,T;R^n), so γ ∈ C^{1,1}([0,T];R^n). Observing that the right-hand side of the equality γ̇(t) = D_pH(t,γ(t),p(t)) is continuous, we conclude that this equality holds for all t ∈ [0,T].

    Lemma 3.5. Let ρ ∈ (0,ρ_0] and let ϵ ∈ (0,ϵ(ρ)], where ϵ(ρ) is given by Lemma 3.3. Fix δ ∈ (0,1], let x ∈ Ω̄, and let γ ∈ X_{ϵ,δ}[x]. If γ(t̄) ∉ ∂Ω for some t̄ ∈ [0,T], then there exists τ > 0 such that γ ∈ C²((t̄−τ, t̄+τ) ∩ [0,T]; R^n).

    Proof. Let γ ∈ X_{ϵ,δ}[x] and let t̄ ∈ [0,T] be such that γ(t̄) ∈ Ω ∪ (R^n∖Ω̄). If γ(t̄) ∈ R^n∖Ω̄, then there exists τ > 0 such that γ(t) ∈ R^n∖Ω̄ for all t ∈ I := (t̄−τ, t̄+τ) ∩ [0,T]. By Lemma 3.4, there exists p ∈ Lip(0,T;R^n) such that

    γ̇(t) = D_pH(t,γ(t),p(t)),   ṗ(t) = −D_xH(t,γ(t),p(t)) + (1/ϵ) Db_Ω(γ(t)),

    for t ∈ I. Since p is Lipschitz continuous on I and γ̇(t) = D_pH(t,γ(t),p(t)), γ belongs to C¹(I;R^n). Moreover, by the regularity of H, b_Ω, p, and γ, the map ṗ is continuous on I. Then p ∈ C¹(I;R^n). Hence γ̇ ∈ C¹(I;R^n), so γ ∈ C²(I;R^n). Finally, if γ(t̄) ∈ Ω, the conclusion follows by a similar argument.

    In the next two lemmas, we show that, for ϵ > 0 and δ ∈ (0,1] small enough, any solution γ of problem (3.22) remains in Ω̄ for all t ∈ [0,T]. For this we first establish that, if δ ∈ (0,1] is small enough and γ(T) ∉ Ω̄, then the function t ↦ b_Ω(γ(t)) has nonpositive slope at t = T. Then we prove that the entire trajectory γ remains in Ω̄ provided ϵ is small enough. Hereafter, we set

    ϵ_0 = ϵ(ρ_0),   where ρ_0 is such that (2.1) holds and ϵ(·) is given by Lemma 3.3.

    Lemma 3.6. Let

    δ̄ = min{ 1, 1/(2μN) },   (3.43)

    where

    N = sup_{x ∈ R^n} |D_pH(T,x,−Dg(x))|.

    Fix any δ_1 ∈ (0,δ̄] and let x ∈ Ω̄. Let ϵ ∈ (0,ϵ_0]. If γ ∈ X_{ϵ,δ_1}[x] is such that γ(T) ∉ Ω̄, then

    ⟨γ̇(T), Db_Ω(γ(T))⟩ ≤ 0.

    Proof. As γ(T) ∉ Ω̄, by Lemma 3.4 we have p(T) = −Dg(γ(T)) − (1/δ_1) Db_Ω(γ(T)). Hence,

    ⟨D_pH(T,γ(T),p(T)), Db_Ω(γ(T))⟩ = ⟨D_pH(T,γ(T),−Dg(γ(T))), Db_Ω(γ(T))⟩ + ⟨D_pH(T,γ(T),−Dg(γ(T)) − (1/δ_1) Db_Ω(γ(T))) − D_pH(T,γ(T),−Dg(γ(T))), Db_Ω(γ(T))⟩.

    Recalling that D²_{pp}H(t,x,p) ≥ I/μ, one has

    ⟨D_pH(T,γ(T),−Dg(γ(T)) − (1/δ_1) Db_Ω(γ(T))) − D_pH(T,γ(T),−Dg(γ(T))), −(1/δ_1) Db_Ω(γ(T))⟩ ≥ (1/(2μ)) (1/δ_1²) |Db_Ω(γ(T))|² = 1/(2δ_1²μ).

    So,

    ⟨D_pH(T,γ(T),p(T)), Db_Ω(γ(T))⟩ ≤ −1/(2δ_1μ) + |D_pH(T,γ(T),−Dg(γ(T)))|.

    Therefore, we obtain

    ⟨γ̇(T), Db_Ω(γ(T))⟩ = ⟨D_pH(T,γ(T),p(T)), Db_Ω(γ(T))⟩ ≤ −1/(2δ_1μ) + |D_pH(T,γ(T),−Dg(γ(T)))| ≤ −1/(2δ_1μ) + N.

    Since δ_1 ≤ δ̄, choosing δ̄ as in (3.43) makes the right-hand side nonpositive, which gives the result.

    Lemma 3.7. Fix δ̄ as in (3.43). Then there exists ϵ_1 ∈ (0,ϵ_0] such that for any ϵ ∈ (0,ϵ_1]

    ∀ x ∈ Ω̄, ∀ γ ∈ X_{ϵ,δ̄}[x] :   γ(t) ∈ Ω̄   ∀ t ∈ [0,T].

    Proof. We argue by contradiction. Assume that there exist sequences {ϵ_k}, {t_k}, {x_k}, {γ_k} such that

    ϵ_k ↓ 0, t_k ∈ [0,T], x_k ∈ Ω̄, γ_k ∈ X_{ϵ_k,δ̄}[x_k] and γ_k(t_k) ∉ Ω̄, for all k ≥ 1.   (3.44)

    Then, for each k ≥ 1 one can find an interval with end-points 0 ≤ a_k < b_k ≤ T such that

    { d_Ω(γ_k(a_k)) = 0,   d_Ω(γ_k(t)) > 0 ∀ t ∈ (a_k,b_k),   and d_Ω(γ_k(b_k)) = 0 or else b_k = T. }

    Let t̄_k ∈ (a_k,b_k] be such that

    d_Ω(γ_k(t̄_k)) = max_{t ∈ [a_k,b_k]} d_Ω(γ_k(t)).

    We note that, by Lemma 3.5, γ_k is of class C² in a neighborhood of t̄_k.

    Step 1

    We claim that

    (d²/dt²) d_Ω(γ_k(t)) |_{t=t̄_k} ≤ 0.   (3.45)

    Indeed, (3.45) is trivial if t̄_k ∈ (a_k,b_k). Suppose t̄_k = b_k. Since t̄_k is a maximum point of the map t ↦ d_Ω(γ_k(t)) and γ_k(t̄_k) ∉ Ω̄, we have that d_Ω(γ_k(t̄_k)) > 0. So b_k = T = t̄_k and we get

    (d/dt) d_Ω(γ_k(t)) |_{t=t̄_k} ≥ 0.

    Moreover, Lemma 3.6 yields

    (d/dt) d_Ω(γ_k(t)) |_{t=t̄_k} ≤ 0.

    So,

    (d/dt) d_Ω(γ_k(t)) |_{t=t̄_k} = 0,

    and we have that (3.45) holds true at t̄_k = T as well.

    Step 2

    Now, we prove that

    1/(μϵ_k) ≤ C(μ,M,κ) [ 1 + 4μC_1/δ̄² + (4μ/ϵ_k) d_Ω(γ_k(t̄_k)) ],   ∀ k ≥ 1,   (3.46)

    where C_1 = 8μ + 8μ||Dg||²_∞ + 2C(μ,M) + κ(T + 4μK) and the constant C(μ,M,κ) depends only on μ, M and κ. Indeed, since γ_k is of class C² in a neighborhood of t̄_k, one has

    γ̈_k(t̄_k) = D²_{pt}H(t̄_k,γ_k(t̄_k),p(t̄_k)) + D²_{px}H(t̄_k,γ_k(t̄_k),p(t̄_k)) γ̇_k(t̄_k) + D²_{pp}H(t̄_k,γ_k(t̄_k),p(t̄_k)) ṗ(t̄_k).   (3.47)

    Developing the second-order derivative of d_Ω∘γ_k, by (3.47) and the expressions for γ̇_k and ṗ in Lemma 3.4, one has

    0 ≥ ⟨D²d_Ω(γ_k(t̄_k)) γ̇_k(t̄_k), γ̇_k(t̄_k)⟩ + ⟨Dd_Ω(γ_k(t̄_k)), γ̈_k(t̄_k)⟩
    = ⟨D²d_Ω(γ_k(t̄_k)) D_pH(t̄_k,γ_k(t̄_k),p(t̄_k)), D_pH(t̄_k,γ_k(t̄_k),p(t̄_k))⟩ + ⟨Dd_Ω(γ_k(t̄_k)), D²_{pt}H(t̄_k,γ_k(t̄_k),p(t̄_k))⟩ + ⟨Dd_Ω(γ_k(t̄_k)), D²_{px}H(t̄_k,γ_k(t̄_k),p(t̄_k)) D_pH(t̄_k,γ_k(t̄_k),p(t̄_k))⟩ − ⟨Dd_Ω(γ_k(t̄_k)), D²_{pp}H(t̄_k,γ_k(t̄_k),p(t̄_k)) D_xH(t̄_k,γ_k(t̄_k),p(t̄_k))⟩ + (1/ϵ_k) ⟨Dd_Ω(γ_k(t̄_k)), D²_{pp}H(t̄_k,γ_k(t̄_k),p(t̄_k)) Dd_Ω(γ_k(t̄_k))⟩.

    We now use the growth properties of H in (3.14) and (3.16)-(3.18), the lower bound for D²_{pp}H in (3.13), and the regularity of the boundary of Ω to obtain

    1/(μϵ_k) ≤ C(μ,M)(1+|p(t̄_k)|)² + κC(μ,M)(1+|p(t̄_k)|) ≤ C(μ,M,κ)(1+|p(t̄_k)|²),

    where the constant C(μ,M,κ) depends only on μ, M and κ. By our estimate for p in (3.29) we get

    1/(μϵ_k) ≤ C(μ,M,κ) [ 1 + 4μC_1/δ̄² + (4μ/ϵ_k) d_Ω(γ_k(t̄_k)) ],   ∀ k ≥ 1,

    where C_1 = 8μ + 8μ||Dg||²_∞ + 2C(μ,M) + κ(T + 4μK).

    Conclusion

    Let ρ = min{ ρ_0, 1/(32C(μ,M,κ)μ²) }. Owing to Lemma 3.3, for all ϵ ∈ (0,ϵ(ρ)] we have

    sup_{t ∈ [0,T]} d_Ω(γ(t)) ≤ ρ,   ∀ γ ∈ X_{ϵ,δ̄}[x].

    Hence, using (3.46), we deduce that

    1/(2μϵ_k) ≤ 4C(μ,M,κ) [ 1 + 4μC_1/δ̄² ].

    Since the above inequality fails for k large enough, we conclude that (3.44) cannot hold true. So, γ(t) belongs to Ω̄ for all t ∈ [0,T].

    An obvious consequence of Lemma 3.7 is the following:

    Corollary 3.1. Fix δ = δ̄ as in (3.43) and take ϵ = ϵ_1, where ϵ_1 is defined as in Lemma 3.7. Then an arc γ(·) is a solution of problem (3.22) if and only if it is also a solution of (3.1).

    We are now ready to complete the proof of Theorem 3.1.

    Proof of Theorem 3.1. Let x ∈ Ω̄ and γ ∈ X[x]. By Corollary 3.1, γ is a solution of problem (3.22) with δ = δ̄ as in (3.43) and ϵ = ϵ_1 as in Lemma 3.7. Let p(·) be the associated adjoint map such that (γ(·), p(·)) satisfies (3.26). Moreover, let λ(·) and β be defined as in Lemma 3.4. Define ν = β/δ̄. Then 0 ≤ ν ≤ 1/δ̄ = max{1, 2μN} and, by (3.26),

    p(T) = −Dg(γ(T)) − ν Db_Ω(γ(T)).   (3.48)

    By Lemma 3.4, γ ∈ C^{1,1}([0,T];Ω̄) and

    γ̇(t) = D_pH(t,γ(t),p(t)),   ∀ t ∈ [0,T].   (3.49)

    Moreover, p(·) ∈ Lip(0,T;R^n) and, since d_Ω(γ(t)) = 0 for all t, (3.29) yields

    |p(t)| ≤ (2/δ̄)√(μC_1),   ∀ t ∈ [0,T],

    where C_1 = 8μ + 8μ||Dg||²_∞ + 2C(μ,M) + κ(T + 4μK). Hence, p is bounded. By (3.49) and (3.17), one has

    ||γ̇||_∞ = sup_{t ∈ [0,T]} |D_pH(t,γ(t),p(t))| ≤ C(μ,M)( sup_{t ∈ [0,T]} |p(t)| + 1 ) ≤ C(μ,M)( (2/δ̄)√(μC_1) + 1 ) =: L,

    where L = L(μ, M, M′, κ, T, ||Dg||_∞, ||g||_∞). Thus, (3.21) holds.

    Finally, we want to find an explicit expression for λ(t). For this, we set

    D={t[0,T]:γ(t)Ω}andDρ0={t[0,T]:|bΩ(γ(t))|<ρ0},

    where ρ0 is as in assumption (2.1). Note that ψ(t):=bΩγ is of class C1,1 on the open set Dρ0, with

    ˙ψ(t)=DbΩ(γ(t)),˙γ(t)=DbΩ(γ(t)),DpH(t,γ(t),p(t)).

    Since pLip(0,T;Rn), ˙ψ is absolutely continuous on Dρ0 with

    ¨ψ(t)=D2bΩ(γ(t))˙γ(t),DpH(t,γ(t),p(t))DbΩ(γ(t)),D2ptH(t,γ(t),p(t))DbΩ(γ(t)),D2pxH(t,γ(t),p(t))˙γ(t)DbΩ(γ(t)),D2ppH(t,γ(t),p(t))˙p(t)=D2bΩ(γ(t))DpH(t,γ(t),p(t)),DpH(t,γ(t),p(t))DbΩ(γ(t)),D2ptH(t,γ(t),p(t))+DbΩ(γ(t)),D2pxH(t,γ(t),p(t))DpH(t,γ(t),p(t))DbΩ(γ(t)),D2ppH(t,γ(t),p(t))DxH(t,γ(t),p(t))+λ(t)ϵ DbΩ(γ(t)),D2ppH(t,γ(t),p(t))DbΩ(γ(t)).

Let Nγ={t∈D∩(0,T) : ˙ψ(t)≠0}. Let t∈Nγ; then there exists σ>0 such that γ(s)∉∂Ω for any s∈((t−σ,t+σ)∖{t})∩(0,T). Therefore, Nγ is composed of isolated points and so it is a discrete set. Hence, ˙ψ(t)=0 for a.e. t∈D∩(0,T). So, ¨ψ(t)=0 a.e. in D, because ˙ψ is absolutely continuous. Moreover, since D2ppH(t,x,p)>0 and |DbΩ(γ(t))|=1, we have that

⟨DbΩ(γ(t)),D2ppH(t,γ(t),p(t))DbΩ(γ(t))⟩ > 0,   a.e. t∈Dρ0.

So, for a.e. t∈D, λ(t) is given by

λ(t)/ϵ = 1/⟨DbΩ(γ(t)),D2ppH(t,γ(t),p(t))DbΩ(γ(t))⟩ [ −⟨DbΩ(γ(t)),D2ptH(t,γ(t),p(t))⟩ − ⟨D2bΩ(γ(t))DpH(t,γ(t),p(t)),DpH(t,γ(t),p(t))⟩ − ⟨DbΩ(γ(t)),D2pxH(t,γ(t),p(t))DpH(t,γ(t),p(t))⟩ + ⟨DbΩ(γ(t)),D2ppH(t,γ(t),p(t))DxH(t,γ(t),p(t))⟩ ].

Since λ(t)=0 for all t∈[0,T]∖D by (3.27), taking Λ(t,γ(t),p(t))=λ(t)/ϵ, we obtain the conclusion.

Remark 3.4. The above proof gives a representation of Λ, i.e., for all (t,x,p)∈[0,T]×Σρ0×Rn one has that

Λ(t,x,p) = (1/θ(t,x,p)) [ −⟨D2bΩ(x)DpH(t,x,p),DpH(t,x,p)⟩ − ⟨DbΩ(x),D2ptH(t,x,p)⟩ − ⟨DbΩ(x),D2pxH(t,x,p)DpH(t,x,p)⟩ + ⟨DbΩ(x),D2ppH(t,x,p)DxH(t,x,p)⟩ ],

where θ(t,x,p):=⟨DbΩ(x),D2ppH(t,x,p)DbΩ(x)⟩. Observe that (3.13) ensures that θ(t,x,p)>0 for all t∈[0,T], all x∈Σρ0 and all p∈Rn.
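To make the representation concrete, consider the model case H(t,x,p)=|p|²/2 (our illustration, not an assumption of the paper); the feedback then reduces to a pure curvature term:

```latex
% Model case (illustration only): H(t,x,p) = |p|^2/2, so that
% D^2_{pp}H = I, D^2_{pt}H = 0, D^2_{px}H = 0 and D_xH = 0.
% Then \theta(t,x,p) = |Db_\Omega(x)|^2 = 1, and the representation of
% \Lambda collapses to
\[
\Lambda(t,x,p) \;=\; -\,\bigl\langle D^2 b_\Omega(x)\,p,\;p\bigr\rangle,
\]
% i.e., along a boundary arc the multiplier \lambda(t)/\epsilon is (minus)
% the second fundamental form of \partial\Omega evaluated at the velocity
% \dot\gamma(t) = D_pH(t,\gamma(t),p(t)) = p(t); it is nonnegative exactly
% where the boundary curves away from \Omega in the direction of motion.
```

In particular, for this Hamiltonian the constraint exerts no force (Λ≡0) along flat portions of the boundary, which is consistent with minimizers being straight lines there.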

We now want to remove the extra assumption U=Rn. For this purpose, it suffices to show that the data f and g---a priori defined just on U---can be extended to Rn preserving the conditions in (f0)-(f2) and (g1). So, we proceed to construct such an extension by taking a cut-off function ξ∈C∞(R) such that

ξ(x)=0 if x∈(−∞,1/3];   0<ξ(x)<1 if x∈(1/3,2/3);   ξ(x)=1 if x∈[2/3,+∞). (3.50)
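One concrete realization of such a ξ (a standard construction; the specific formula below is our choice, as the proofs only use the properties in (3.50)) is the smooth-transition function built from the bump e(x)=exp(−1/x):

```python
import math

def bump(x):
    # e^{-1/x} for x > 0, extended by 0: a C-infinity function on R
    return math.exp(-1.0 / x) if x > 0 else 0.0

def xi(x):
    # smooth transition: 0 on (-inf, 1/3], strictly between 0 and 1
    # on (1/3, 2/3), and 1 on [2/3, +inf); the denominator never vanishes
    num = bump(x - 1.0 / 3.0)
    den = num + bump(2.0 / 3.0 - x)
    return num / den

print(xi(0.0), xi(0.5), xi(1.0))
```

By symmetry, ξ(1/2)=1/2, and all derivatives of ξ vanish at 1/3 and 2/3, so ξ∈C∞(R) as required.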

Lemma 3.8. Let Ω⊂Rn be a bounded open set with C2 boundary. Let U be an open subset of Rn such that ¯Ω⊂U and set

σ0=dist(¯Ω,Rn∖U)>0.

Suppose that f:[0,T]×U×Rn→R and g:U→R satisfy (f0)-(f2) and (g1), respectively. Set σ=σ0∧ρ0. Then, the function f admits the extension

˜f(t,x,v)=ξ(bΩ(x)/σ)|v|²/2+(1−ξ(bΩ(x)/σ))f(t,x,v),    (t,x,v)∈[0,T]×Rn×Rn,

    that satisfies conditions (f0)-(f2) with U=Rn. Moreover, g admits the extension

˜g(x)=(1−ξ(bΩ(x)/σ))g(x),    x∈Rn,

    that satisfies condition (g1) with U=Rn.

Note that, since Ω is bounded and U is open, the distance between ¯Ω and Rn∖U is positive.

Proof. By construction, ˜f∈C([0,T]×Rn×Rn). Moreover, for all t∈[0,T] the function (x,v)↦˜f(t,x,v) is differentiable and the map (x,v)↦Dv˜f(t,x,v) is continuously differentiable by construction. Furthermore, Dx˜f and Dv˜f are continuous on [0,T]×Rn×Rn and ˜f satisfies (3.2). In order to prove (3.3) for ˜f, we observe that

Dv˜f(t,x,v)=ξ(bΩ(x)/σ)v+(1−ξ(bΩ(x)/σ))Dvf(t,x,v),

and

D2vv˜f(t,x,v)=ξ(bΩ(x)/σ)I+(1−ξ(bΩ(x)/σ))D2vvf(t,x,v).

    Hence, by the definition of ξ and (3.3) we obtain that

(1∧(1/μ))I ≤ D2vv˜f(t,x,v) ≤ (1∨μ)I,      (t,x,v)∈[0,T]×Rn×Rn.

Since μ ≥ 1, we have that ˜f verifies the estimate in (3.3).

Moreover, we have

Dx(Dv˜f(t,x,v)) = ˙ξ(bΩ(x)/σ) v⊗(DbΩ(x)/σ) + (1−ξ(bΩ(x)/σ))D2vxf(t,x,v) − ˙ξ(bΩ(x)/σ) Dvf(t,x,v)⊗(DbΩ(x)/σ),

    and by (3.4) we obtain that

||D2vx˜f(t,x,v)|| ≤ C(μ,M)(1+|v|),   (t,x,v)∈[0,T]×Rn×Rn.

For all (x,v)∈Rn×Rn, the function t↦˜f(t,x,v) and the map t↦Dv˜f(t,x,v) are Lipschitz continuous by construction. Moreover, by (3.5) and the definition of ξ one has that

|˜f(t,x,v)−˜f(s,x,v)| = |(1−ξ(bΩ(x)/σ))[f(t,x,v)−f(s,x,v)]| ≤ κ(1+|v|²)|t−s|

for all t, s∈[0,T], x∈Rn, v∈Rn. Now, we have to prove that (3.6) holds for ˜f. Indeed, using (3.6) we deduce that

|Dv˜f(t,x,v)−Dv˜f(s,x,v)| = |(1−ξ(bΩ(x)/σ))[Dvf(t,x,v)−Dvf(s,x,v)]| ≤ κ(1+|v|)|t−s|,

for all t, s∈[0,T], x∈Rn, v∈Rn. Therefore, ˜f verifies assumptions (f0)-(f2).

    Finally, by the regularity of bΩ, ξ, and g we have that ˜g is of class C1b(Rn). This completes the proof.

Suppose that f:[0,T]×U×Rn→R and g:U→R satisfy the assumptions (f0)-(f2) and (g1), respectively. Let (t,x)∈[0,T]×¯Ω. Define u:[0,T]×¯Ω→R as the value function of the minimization problem (3.1), i.e.,

u(t,x) = inf_{γ∈Γ, γ(t)=x} { ∫_t^T f(s,γ(s),˙γ(s)) ds + g(γ(T)) }. (4.1)
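For a feel of what (4.1) computes, here is a minimal numerical sketch (our illustration; the grid and the data f=|v|²/2, g(x)=x are arbitrary choices) that approximates u by backward dynamic programming on ¯Ω=[0,1], the state constraint being enforced simply by restricting the grid to [0,1]:

```python
import numpy as np

# Toy instance of (4.1): f(t,x,v) = v^2/2 (running cost), g(x) = x (final cost)
T, nt, nx = 1.0, 10, 41
dt = T / nt
xs = np.linspace(0.0, 1.0, nx)          # grid on the constraint set [0,1]

# one-step cost of moving from xs[i] to xs[j]: dt * |(xs[j]-xs[i])/dt|^2 / 2
cost = 0.5 * (xs[None, :] - xs[:, None]) ** 2 / dt

u = xs.copy()                            # u(T, x) = g(x)
for _ in range(nt):                      # backward dynamic programming
    u = np.min(cost + u[None, :], axis=1)

# u now approximates u(0, .): u(0,0) = 0 (stay put) and u(0,1) = 1/2,
# attained by the constant-speed trajectory from 1 down to 0
print(u[0], u[-1])
```

For these data the discrete optimum coincides with the continuous one at the grid endpoints, since the optimal constant speeds are exactly representable on the grid.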

    Proposition 4.1. Let Ω be a bounded open subset of Rn with C2 boundary. Suppose that f and g satisfy (f0)-(f2) and (g1), respectively. Then, u is Lipschitz continuous in [0,T]ׯΩ.

Proof. First, we shall prove that u(t,·) is Lipschitz continuous on Ω, uniformly for t∈[0,T]. Since u(T,·)=g, it suffices to consider the case of t∈[0,T). Let x0∈Ω and choose 0<r<1 such that Br(x0)⊂B2r(x0)⊂B4r(x0)⊂Ω. To prove that u(t,·) is Lipschitz continuous in Br(x0), take x≠y in Br(x0). Let γ be an optimal trajectory for u at (t,x) and let ˉγ be the trajectory defined by

{ ˉγ(t)=y;   ˙ˉγ(s)=˙γ(s)+(x−y)/τ for a.e. s∈[t,t+τ];   ˙ˉγ(s)=˙γ(s) otherwise,

where τ=|x−y|/(2L)<T−t. We claim that

(a) ˉγ(t+τ)=γ(t+τ);

(b) ˉγ(s)=γ(s) for any s∈[t+τ,T];

(c) |ˉγ(s)−γ(s)| ≤ |y−x| for any s∈[t,t+τ];

(d) ˉγ(s)∈¯Ω for any s∈[t,T].

    Indeed, by the definition of ˉγ we have that

ˉγ(t+τ)−ˉγ(t) = ˉγ(t+τ)−y = ∫_t^{t+τ}(˙γ(s)+(x−y)/τ) ds = γ(t+τ)−y,

and this gives (a). Moreover, by (a) and the definition of ˉγ, one has that ˉγ(s)=γ(s) for any s∈[t+τ,T]. Hence, ˉγ verifies (b). By the definition of ˉγ, for any s∈[t,t+τ] we obtain that

|ˉγ(s)−γ(s)| = |y−x+∫_t^s(˙ˉγ(σ)−˙γ(σ)) dσ| = |y−x+∫_t^s (x−y)/τ dσ| ≤ |y−x|,

and so (c) holds. Since γ is an optimal trajectory for u and ˉγ(s)=γ(s) for all s∈[t+τ,T], we only have to prove that ˉγ(s) belongs to ¯Ω for all s∈[t,t+τ]. Let s∈[t,t+τ]; by Theorem 3.1 one has that

|ˉγ(s)−x0| ≤ |ˉγ(s)−y|+|y−x0| ≤ |∫_t^s ˙ˉγ(σ) dσ|+r ≤ ∫_t^s |˙γ(σ)+(x−y)/τ| dσ+r ≤ ∫_t^s [|˙γ(σ)|+|x−y|/τ] dσ+r ≤ L(s−t)+(|x−y|/τ)(s−t)+r ≤ Lτ+|x−y|+r.

Recalling that τ=|x−y|/(2L), one has that

|ˉγ(s)−x0| ≤ |x−y|/2+|x−y|+r ≤ 4r.

Therefore, ˉγ(s)∈B4r(x0)⊂¯Ω for all s∈[t,t+τ].

    Now, owing to the dynamic programming principle, by (a) one has that

u(t,y) ≤ ∫_t^{t+τ} f(s,ˉγ(s),˙ˉγ(s)) ds + u(t+τ,γ(t+τ)). (4.2)

    Since γ is an optimal trajectory for u at (t,x), we obtain that

u(t,y) ≤ u(t,x)+∫_t^{t+τ}[f(s,ˉγ(s),˙ˉγ(s))−f(s,γ(s),˙γ(s))] ds.

By (3.7), (3.8), and the definition of ˉγ, for s∈[t,t+τ] we have that

|f(s,ˉγ(s),˙ˉγ(s))−f(s,γ(s),˙γ(s))| ≤ |f(s,ˉγ(s),˙ˉγ(s))−f(s,ˉγ(s),˙γ(s))| + |f(s,ˉγ(s),˙γ(s))−f(s,γ(s),˙γ(s))| ≤ ∫_0^1 |⟨Dvf(s,ˉγ(s),λ˙ˉγ(s)+(1−λ)˙γ(s)), ˙ˉγ(s)−˙γ(s)⟩| dλ + ∫_0^1 |⟨Dxf(s,λˉγ(s)+(1−λ)γ(s),˙γ(s)), ˉγ(s)−γ(s)⟩| dλ ≤ C(μ,M)|˙ˉγ(s)−˙γ(s)| ∫_0^1 (1+|λ˙ˉγ(s)+(1−λ)˙γ(s)|) dλ + C(μ,M)|ˉγ(s)−γ(s)| ∫_0^1 (1+|˙γ(s)|²) dλ.

    By Theorem 3.1 one has that

∫_0^1 (1+|λ˙ˉγ(s)+(1−λ)˙γ(s)|) dλ ≤ 1+4L, (4.3)
∫_0^1 (1+|˙γ(s)|²) dλ ≤ 1+L². (4.4)

Using (4.3), (4.4), and (c), by the definition of ˉγ one has that

|f(s,ˉγ(s),˙ˉγ(s))−f(s,γ(s),˙γ(s))| ≤ C(μ,M)(1+4L)|x−y|/τ + C(μ,M)(1+L²)|x−y|, (4.5)

for a.e. s∈[t,t+τ]. By (4.5) and the choice of τ, we deduce that

u(t,y) ≤ u(t,x)+C(μ,M)(1+4L)∫_t^{t+τ} (|x−y|/τ) ds + C(μ,M)(1+L²)∫_t^{t+τ} |x−y| ds ≤ u(t,x)+C(μ,M)(1+4L)|x−y|+τ C(μ,M)(1+L²)|x−y| ≤ u(t,x)+CL|x−y|,

where CL=C(μ,M)(1+4L)+(1/(2L))C(μ,M)(1+L²). Thus, u is locally Lipschitz continuous in space and one has that ||Du|| ≤ ϑ, where ϑ is a constant not depending on Ω. Owing to the smoothness of Ω, u is globally Lipschitz continuous in space, uniformly for t∈[0,T].

In order to prove Lipschitz continuity in time, let x∈¯Ω and fix t1, t2∈[0,T] with t2 ≥ t1. Let γ be an optimal trajectory for u at (t1,x). Then,

|u(t2,x)−u(t1,x)| ≤ |u(t2,x)−u(t2,γ(t2))|+|u(t2,γ(t2))−u(t1,x)|. (4.6)

The first term on the right-hand side of (4.6) can be estimated using the Lipschitz continuity in space of u and Theorem 3.1. Thus, we get

|u(t2,x)−u(t2,γ(t2))| ≤ CL|x−γ(t2)| ≤ CL∫_{t1}^{t2} |˙γ(s)| ds ≤ LCL(t2−t1). (4.7)

We only have to estimate the second term on the right-hand side of (4.6). By the dynamic programming principle, (3.9), and the assumptions on f, we deduce that

|u(t2,γ(t2))−u(t1,x)| = |∫_{t1}^{t2} f(s,γ(s),˙γ(s)) ds| ≤ ∫_{t1}^{t2} |f(s,γ(s),˙γ(s))| ds ≤ ∫_{t1}^{t2} [C(μ,M)+4μ|˙γ(s)|²] ds ≤ [C(μ,M)+4μL²](t2−t1). (4.8)

    Using (4.7) and (4.8) to bound the right-hand side of (4.6), we obtain that u is Lipschitz continuous in time. This completes the proof.

    In this section we want to apply Theorem 3.1 to a mean field game (MFG) problem with state constraints. Such a problem was studied in [11], where the existence and uniqueness of constrained equilibria was obtained under fairly general assumptions on the data. Here, we will apply our necessary conditions to deduce the existence of more regular equilibria than those constructed in [11], assuming the data F and G to be Lipschitz continuous.

    Assumptions

Let Ω be a bounded open subset of Rn with C2 boundary. Let P(¯Ω) be the set of all Borel probability measures on ¯Ω, endowed with the Kantorovich-Rubinstein distance d1 defined in (2.2). Let U be an open subset of Rn such that ¯Ω⊂U. Assume that F:U×P(¯Ω)→R and G:U×P(¯Ω)→R satisfy the following hypotheses.

(D1) For all x∈U, the functions m↦F(x,m) and m↦G(x,m) are Lipschitz continuous, i.e., there exists a constant κ ≥ 0 such that

|F(x,m1)−F(x,m2)|+|G(x,m1)−G(x,m2)| ≤ κ d1(m1,m2), (4.9)

for any m1, m2∈P(¯Ω).

(D2) For all m∈P(¯Ω), the functions x↦G(x,m) and x↦F(x,m) belong to C1b(U). Moreover,

|DxF(x,m)|+|DxG(x,m)| ≤ κ,     x∈U,  m∈P(¯Ω).

Let L:U×Rn→R be a function that satisfies the following assumptions.

(L0) L∈C1(U×Rn) and there exists a constant M ≥ 0 such that

|L(x,0)|+|DxL(x,0)|+|DvL(x,0)| ≤ M,     x∈U. (4.10)

(L1) DvL is differentiable on U×Rn and there exists a constant μ ≥ 1 such that

I/μ ≤ D2vvL(x,v) ≤ μI, (4.11)
||D2vxL(x,v)|| ≤ μ(1+|v|), (4.12)

for all (x,v)∈U×Rn.

Remark 4.1. (i) F, G and L are assumed to be defined on U×P(¯Ω) and on U×Rn, respectively, just for simplicity. All the results of this section hold true if we replace U by ¯Ω. This fact can easily be checked by using well-known extension techniques (see, e.g., [1, Theorem 4.26]).

(ii) Arguing as in Lemma 3.1, we deduce that there exists a positive constant C(μ,M), depending only on μ and M, such that

|DxL(x,v)| ≤ C(μ,M)(1+|v|²), (4.13)
|DvL(x,v)| ≤ C(μ,M)(1+|v|), (4.14)
|v|²/(4μ)−C(μ,M) ≤ L(x,v) ≤ 4μ|v|²+C(μ,M), (4.15)

for all (x,v)∈U×Rn.

Let m∈Lip(0,T;P(¯Ω)). If we set f(t,x,v)=L(x,v)+F(x,m(t)), then the associated Hamiltonian H takes the form

H(t,x,p)=HL(x,p)−F(x,m(t)),    (t,x,p)∈[0,T]×U×Rn,

where

HL(x,p)=sup_{v∈Rn}{⟨p,v⟩−L(x,v)},      (x,p)∈U×Rn.

    The assumptions on L imply that HL satisfies the following conditions.

1. HL∈C1(U×Rn) and there exists a constant M ≥ 0 such that

|HL(x,0)|+|DxHL(x,0)|+|DpHL(x,0)| ≤ M,    x∈U. (4.16)

2. DpHL is differentiable on U×Rn and satisfies

I/μ ≤ D2ppHL(x,p) ≤ μI,               (x,p)∈U×Rn, (4.17)
||D2pxHL(x,p)|| ≤ C(μ,M)(1+|p|),    (x,p)∈U×Rn, (4.18)

    where μ is the constant in (L1) and C(μ,M) depends only on μ and M.

For any t∈[0,T], we denote by et:Γ→¯Ω the evaluation map defined by

et(γ)=γ(t),    γ∈Γ.

For any η∈P(Γ), we define

mη(t)=et♯η,    t∈[0,T].

Remark 4.2. We observe that for any η∈P(Γ), the following holds true (see [11] for a proof).

(i) mη∈C([0,T];P(¯Ω)).

(ii) Let ηi, η∈P(Γ), i ≥ 1, be such that ηi is narrowly convergent to η. Then mηi(t) is narrowly convergent to mη(t) for all t∈[0,T].

For any fixed m0∈P(¯Ω), we denote by Pm0(Γ) the set of all Borel probability measures η on Γ such that e0♯η=m0. For all η∈Pm0(Γ), we set

Jη[γ]=∫_0^T [L(γ(t),˙γ(t))+F(γ(t),mη(t))] dt + G(γ(T),mη(T)),     γ∈Γ.

For all x∈¯Ω and η∈Pm0(Γ), we define

Γη[x]={γ∈Γ[x] : Jη[γ]=min_{Γ[x]} Jη}.

It is shown in [11] that, for every η∈Pm0(Γ), the set Γη[x] is nonempty and Γη[·] has closed graph. We recall the definition of constrained MFG equilibria for m0, given in [11].

Definition 4.1. Let m0∈P(¯Ω). We say that η∈Pm0(Γ) is a constrained MFG equilibrium for m0 if

supp(η) ⊆ ⋃_{x∈¯Ω} Γη[x].

Let Γ′ be a nonempty subset of Γ. We denote by Pm0(Γ′) the set of all Borel probability measures η on Γ′ such that e0♯η=m0. We now introduce special subfamilies of Pm0(Γ) that play a key role in what follows.

Definition 4.2. Let Γ′ be a nonempty subset of Γ. We denote by PLipm0(Γ′) the set of η∈Pm0(Γ′) such that mη(t)=et♯η is Lipschitz continuous, i.e.,

PLipm0(Γ′)={η∈Pm0(Γ′) : mη∈Lip(0,T;P(¯Ω))}.

Remark 4.3. We note that PLipm0(Γ) is a nonempty convex set. Indeed, let j:¯Ω→Γ be the continuous map defined by

j(x)(t)=x,    t∈[0,T].

Then,

η:=j♯m0

is a Borel probability measure on Γ and η∈PLipm0(Γ).

In order to show that PLipm0(Γ) is convex, let η1, η2∈PLipm0(Γ) and let λ1, λ2 ≥ 0 be such that λ1+λ2=1. Since η1, η2 are Borel probability measures, η:=λ1η1+λ2η2 is a Borel probability measure as well. Moreover, for any Borel set B∈B(¯Ω) we have that

e0♯η(B)=η(e0⁻¹(B))=λ1η1(e0⁻¹(B))+λ2η2(e0⁻¹(B))=λ1e0♯η1(B)+λ2e0♯η2(B)=(λ1+λ2)m0(B)=m0(B).

So, η∈Pm0(Γ). Since mη1, mη2∈Lip(0,T;P(¯Ω)), we have that mη(t)=λ1mη1(t)+λ2mη2(t) belongs to Lip(0,T;P(¯Ω)).

    In the next result, we apply Theorem 3.1 to prove a useful property of minimizers of Jη.

Proposition 4.2. Let Ω be a bounded open subset of Rn with C2 boundary and let m0∈P(¯Ω). Suppose that (L0), (L1), (D1), and (D2) hold true. Let η∈PLipm0(Γ) and fix x∈¯Ω. Then Γη[x]⊂C1,1([0,T];Rn) and

||˙γ|| ≤ L0,   γ∈Γη[x], (4.19)

where L0=L0(μ,M,M,κ,T,||G||,||DG||).

Proof. Let η∈PLipm0(Γ), x∈¯Ω and γ∈Γη[x]. Since mη∈Lip(0,T;P(¯Ω)), taking f(t,x,v)=L(x,v)+F(x,mη(t)), one can easily check that all the assumptions of Theorem 3.1 are satisfied by f and G. Therefore, we have that Γη[x]⊂C1,1([0,T];Rn) and, in this case, (3.21) becomes

||˙γ|| ≤ L0,   γ∈Γη[x],

where L0=L0(μ,M,M,κ,T,||G||,||DG||).

We denote by ΓL0 the set of γ∈Γ such that (4.19) holds, i.e.,

ΓL0={γ∈Γ : ||˙γ|| ≤ L0}. (4.20)

Lemma 4.1. Let m0∈P(¯Ω). Then, PLipm0(ΓL0) is a nonempty convex compact subset of Pm0(Γ). Moreover, for every η∈Pm0(ΓL0), mη(t):=et♯η is Lipschitz continuous with constant L0, where L0 is as in Proposition 4.2.

Proof. Arguing as in Remark 4.3, we obtain that PLipm0(ΓL0) is a nonempty convex set. Moreover, since ΓL0 is compactly embedded in Γ, one has that PLipm0(ΓL0) is compact.

Let η∈Pm0(ΓL0) and mη(t)=et♯η. For any t1,t2∈[0,T], we recall that

d1(mη(t2),mη(t1))=sup{ ∫_¯Ω ϕ(x)(mη(t2,dx)−mη(t1,dx)) : ϕ:¯Ω→R is 1-Lipschitz }.

Since ϕ is 1-Lipschitz continuous, one has that

∫_¯Ω ϕ(x)(mη(t2,dx)−mη(t1,dx)) = ∫_Γ [ϕ(et2(γ))−ϕ(et1(γ))] dη(γ) = ∫_Γ [ϕ(γ(t2))−ϕ(γ(t1))] dη(γ) ≤ ∫_Γ |γ(t2)−γ(t1)| dη(γ).

Since η∈Pm0(ΓL0), we deduce that

∫_Γ |γ(t2)−γ(t1)| dη(γ) ≤ L0 ∫_Γ |t2−t1| dη(γ) = L0|t2−t1|,

and so mη(t) is Lipschitz continuous with constant L0.
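For intuition (our illustration, not part of the proof), when η is concentrated on a single trajectory γ∈ΓL0, mη(t) is the Dirac mass δγ(t) and the estimate above reduces to the elementary bound d1(δγ(t2),δγ(t1))=|γ(t2)−γ(t1)| ≤ L0|t2−t1|:

```python
import numpy as np

# A single trajectory with |velocity| <= L0 carries the Dirac measure
# m(t) = delta_{gamma(t)}; then d1(m(t2), m(t1)) = |gamma(t2) - gamma(t1)|.
L0 = 2.0
gamma = lambda t: np.array([np.cos(L0 * t), np.sin(L0 * t)])  # speed = L0

def d1_dirac(a, b):
    # Kantorovich-Rubinstein distance between two Dirac masses
    return float(np.linalg.norm(a - b))

t1, t2 = 0.1, 0.4
lhs = d1_dirac(gamma(t2), gamma(t1))
print(lhs, L0 * abs(t2 - t1))   # lhs <= L0 * |t2 - t1|
```

The chord length 2 sin(L0|t2−t1|/2) is indeed below the arc-length bound L0|t2−t1|, as the lemma predicts.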

    In the next result, we deduce the existence of more regular equilibria than those constructed in [11].

Theorem 4.1. Let Ω be a bounded open subset of Rn with C2 boundary and m0∈P(¯Ω). Suppose that (L0), (L1), (D1), and (D2) hold true. Then, there exists at least one constrained MFG equilibrium η∈PLipm0(Γ).

Proof. First of all, we recall that for any η∈PLipm0(Γ), there exists a unique Borel measurable family* of probabilities {ηx}x∈¯Ω on Γ which disintegrates η in the sense that

*We say that {ηx}x∈¯Ω is a Borel family (of probability measures) if the map x↦ηx(B) is Borel for any Borel set B⊂Γ.

{ η(dγ)=∫_¯Ω ηx(dγ) m0(dx),   supp(ηx)⊂Γ[x]  for m0-a.e. x∈¯Ω (4.21)

(see, e.g., [2, Theorem 5.3.1]). Proceeding as in [11], we introduce the set-valued map

E:Pm0(Γ)⇉Pm0(Γ),

by defining, for any η∈Pm0(Γ),

E(η)={ˆη∈Pm0(Γ) : supp(ˆηx)⊆Γη[x]  for m0-a.e. x∈¯Ω}. (4.22)

    We recall that, by [11, Lemma 3.6], the map E has closed graph.

Now, we consider the restriction E0 of E to PLipm0(ΓL0), i.e.,

E0:PLipm0(ΓL0)⇉Pm0(Γ),   E0(η)=E(η),  η∈PLipm0(ΓL0).

We will show that the set-valued map E0 has a fixed point, i.e., that there exists η∈PLipm0(ΓL0) such that η∈E0(η). By [11, Lemma 3.5], we have that for any η∈PLipm0(ΓL0), E0(η) is a nonempty convex set. Moreover, we have that

E0(PLipm0(ΓL0)) ⊆ PLipm0(ΓL0). (4.23)

Indeed, let η∈PLipm0(ΓL0) and ˆη∈E0(η). By Proposition 4.2, one has that

Γη[x] ⊂ ΓL0,   x∈¯Ω,

and by the definition of E0 we deduce that

supp(ˆη) ⊆ ΓL0.

So, ˆη∈Pm0(ΓL0). By Lemma 4.1, ˆη∈PLipm0(ΓL0).

Since E has closed graph, by Lemma 4.1 and (4.23) we have that E0 has closed graph as well. Then, the assumptions of Kakutani's theorem [30] are satisfied and so there exists ¯η∈PLipm0(ΓL0) such that ¯η∈E0(¯η).
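The fixed-point mechanism behind this proof can be mimicked in a finite toy model (entirely our sketch: two candidate trajectories, a congestion-type coupling equal to the mass on one's own trajectory, and a damped best-response iteration in place of Kakutani's theorem):

```python
import numpy as np

# Toy equilibrium computation (illustration only): the "measure" is a pair
# of weights (w, 1-w) on two trajectories; the cost of trajectory i is its
# base cost plus a congestion term F = mass currently on that trajectory.
base = np.array([0.0, 0.0])

def costs(w):
    m = np.array([w, 1.0 - w])
    return base + m              # J_eta[gamma_i] = base_i + mass on gamma_i

w = 1.0                          # start with everyone on trajectory 0
for _ in range(100):
    c = costs(w)
    best = np.argmin(c)          # best response to the current measure
    target = 1.0 if best == 0 else 0.0
    w = 0.9 * w + 0.1 * target   # damped step toward the best response

print(w, costs(w))               # w settles near 0.5: both cost ~0.5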

    We recall the definition of a mild solution of the constrained MFG problem, given in [11].

Definition 4.3. We say that (u,m)∈C([0,T]×¯Ω)×C([0,T];P(¯Ω)) is a mild solution of the constrained MFG problem in ¯Ω if there exists a constrained MFG equilibrium η∈Pm0(Γ) such that

(i) m(t)=et♯η for all t∈[0,T];

(ii) u is given by

u(t,x)=inf_{γ∈Γ, γ(t)=x} { ∫_t^T [L(γ(s),˙γ(s))+F(γ(s),m(s))] ds + G(γ(T),m(T)) }, (4.24)

for (t,x)∈[0,T]×¯Ω.

    Theorem 4.2. Let Ω be a bounded open subset of Rn with C2 boundary. Suppose that (L0), (L1), (D1) and (D2) hold true. There exists at least one mild solution (u,m) of the constrained MFG problem in ¯Ω. Moreover,

(i) u is Lipschitz continuous in [0,T]×¯Ω;

(ii) m∈Lip(0,T;P(¯Ω)) and Lip(m) ≤ L0, where L0 is the constant in (4.19).

    The question of the Lipschitz continuity up to the boundary of the value function under state constraints was addressed in [28] and [34], for stationary problems, and in a very large literature that has been published since. We refer to the survey paper [20] for references.

Proof. Let m0∈P(¯Ω). By Theorem 4.1, there exists a constrained MFG equilibrium η∈PLipm0(Γ) for m0, and hence at least one mild solution (u,m) of the constrained MFG problem in ¯Ω. Moreover, by Lemma 4.1 one has that m∈Lip(0,T;P(¯Ω)) with Lip(m) ≤ L0, where L0 is the constant in (4.19). Finally, by Proposition 4.1 we conclude that u is Lipschitz continuous in [0,T]×¯Ω.

Remark 4.4. Recall that F:U×P(¯Ω)→R is strictly monotone if

∫_¯Ω (F(x,m1)−F(x,m2)) d(m1−m2)(x) ≥ 0, (4.25)

for any m1,m2∈P(¯Ω), and ∫_¯Ω (F(x,m1)−F(x,m2)) d(m1−m2)(x)=0 if and only if F(x,m1)=F(x,m2) for all x∈¯Ω.

Suppose that F and G satisfy (4.25). Let η1, η2∈PLipm0(Γ) be constrained MFG equilibria and let Jη1 and Jη2 be the associated functionals, respectively. Then Jη1 is equal to Jη2. Consequently, if (u1,m1), (u2,m2) are mild solutions of the constrained MFG problem in ¯Ω, then u1=u2 (see [11] for a proof).

In this Appendix we prove Lemma 2.1. The only case which needs to be analyzed is when x∈∂Ω. We recall that p∈∂pdΩ(x) (the proximal subdifferential of dΩ at x) if and only if there exists ϵ>0 such that

dΩ(y)−dΩ(x)−⟨p,y−x⟩ ≥ −C|y−x|²,  for any y such that |y−x| ≤ ϵ, (5.1)

for some constant C ≥ 0. Let us show that ∂pdΩ(x)=DbΩ(x)[0,1]. By the regularity of bΩ, one has that

dΩ(y)−dΩ(x)−⟨DbΩ(x),y−x⟩ ≥ bΩ(y)−bΩ(x)−⟨DbΩ(x),y−x⟩ ≥ −C|y−x|².

This shows that DbΩ(x)∈∂pdΩ(x). Moreover, since

dΩ(y)−dΩ(x)−λ⟨DbΩ(x),y−x⟩ ≥ λ(dΩ(y)−dΩ(x)−⟨DbΩ(x),y−x⟩),   λ∈[0,1],

we further obtain the inclusion

DbΩ(x)[0,1] ⊆ ∂pdΩ(x).

Next, in order to show the reverse inclusion, let p∈∂pdΩ(x)∖{0} and let y∈Ωc. Then, we can rewrite (5.1) as

bΩ(y)−bΩ(x)−⟨p,y−x⟩ ≥ −C|y−x|²,   |y−x| ≤ ϵ. (5.2)

Since y∈Ωc, by the regularity of bΩ one has that

bΩ(y)−bΩ(x) ≤ ⟨DbΩ(x),y−x⟩+C|y−x|² (5.3)

for some constant C∈R. By (5.2) and (5.3) one has that

⟨DbΩ(x)−p, (y−x)/|y−x|⟩ ≥ −C|y−x|.

Hence, passing to the limit as y→x, we have that

⟨DbΩ(x)−p, v⟩ ≥ 0,    v∈TΩc(x),

where TΩc(x) is the contingent cone to Ωc at x (see e.g. [35] for a definition). Therefore, by the regularity of ∂Ω,

DbΩ(x)−p=λv(x),

where λ ≥ 0 and v(x) is the exterior unit normal vector to ∂Ω at x. Since v(x)=DbΩ(x), we have that

p=(1−λ)DbΩ(x).

Now, we prove that λ ≤ 1. Suppose that y∈Ω; then, by (5.1), one has that

0=dΩ(y) ≥ ⟨(1−λ)DbΩ(x),y−x⟩−C|y−x|².

Hence,

⟨(1−λ)DbΩ(x), (y−x)/|y−x|⟩ ≤ C|y−x|.

Passing to the limit as y→x, we obtain

⟨(1−λ)DbΩ(x), w⟩ ≤ 0,     w∈T¯Ω(x),

where T¯Ω(x) is the contingent cone to ¯Ω at x. If λ>1, then ⟨DbΩ(x),w⟩ ≥ 0 for all w∈T¯Ω(x), but this is impossible since DbΩ(x) is the exterior unit normal vector to ∂Ω at x. Using the regularity of bΩ, simple limit-taking procedures permit us to prove that ∂dΩ(x)=DbΩ(x)[0,1] when x∈∂Ω. This completes the proof of Lemma 2.1.
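As a sanity check (our example, not part of the original proof), the statement of Lemma 2.1 can be verified directly when Ω is the unit ball:

```latex
% Example: \Omega = B_1(0), so b_\Omega(y) = |y| - 1 near \partial\Omega and
% d_\Omega(y) = (|y|-1)^+.  Fix x with |x| = 1, so Db_\Omega(x) = x.
% For p = \lambda x with \lambda \in [0,1] and any y,
\[
d_\Omega(y)-d_\Omega(x)-\langle \lambda x,\,y-x\rangle
 \;=\; (|y|-1)^+ - \lambda\bigl(\langle x,y\rangle-1\bigr)
 \;\ge\; (|y|-1)^+ - \lambda\,(|y|-1)\;\ge\;0,
\]
% since \langle x,y\rangle \le |y| and \lambda(|y|-1) \le (|y|-1)^+ for
% \lambda \in [0,1].  Hence \lambda x \in \partial^p d_\Omega(x), so that
% \partial d_\Omega(x) \supseteq x\,[0,1] = Db_\Omega(x)[0,1], in agreement
% with the lemma; the reverse inclusion follows as in the proof above.
```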

This work was partly supported by the University of Rome Tor Vergata (Consolidate the Foundations 2015) and by the Istituto Nazionale di Alta Matematica "F. Severi" (GNAMPA 2016 Research Projects). The authors acknowledge the MIUR Excellence Department Project awarded to the Department of Mathematics, University of Rome Tor Vergata, CUP E83C18000100006. The second author is grateful to the Università Italo Francese (Vinci Project 2015).

    The authors declare no conflict of interest.



    [1] Adams RA (1975) Sobolev Spaces. Academic Press, New York.
    [2] Ambrosio L, Gigli N, Savare G (2008) Gradient flows in metric spaces and in the space of probability measures. Lectures in Mathematics ETH Zürich, Birkhäuser Verlag.
[3] Arutyunov AV, Aseev SM (1997) Investigation of the degeneracy phenomenon of the maximum principle for optimal control problems with state constraints. SIAM J Control Optim 35: 930–952. doi: 10.1137/S036301299426996X
    [4] Benamou JD, Brenier Y (2000) A computational fluid mechanics solution to the Monge- Kantorovich mass transfer problem. Numer Math 84: 375–393. doi: 10.1007/s002110050002
    [5] Benamou JD, Carlier G (2015) Augmented Lagrangian Methods for Transport Optimization, Mean Field Games and Degenerate Elliptic Equations. J Optimiz Theory App 167: 1–26. doi: 10.1007/s10957-015-0725-9
    [6] Benamou JD, Carlier G, Santambrogio F (2017) Variational Mean Field Games, In: Bellomo N, Degond P, Tadmor E (eds) Active Particles, Modeling and Simulation in Science, Engineering and Technology, Birkhäuser, 1: 141–171.
    [7] Brenier Y (1999) Minimal geodesics on groups of volume-preserving maps and generalized solutions of the Euler equations. Comm Pure Appl Math 52: 411–452. doi: 10.1002/(SICI)1097-0312(199904)52:4<411::AID-CPA1>3.0.CO;2-3
[8] Bettiol P, Frankowska H (2007) Normality of the maximum principle for nonconvex constrained Bolza problems. J Differ Equations 243: 256–269. doi: 10.1016/j.jde.2007.05.005
    [9] Bettiol P, Frankowska H (2008) Hölder continuity of adjoint states and optimal controls for state constrained problems. Appl Math Opt 57: 125–147. doi: 10.1007/s00245-007-9015-8
[10] Bettiol P, Khalil N, Vinter RB (2016) Normality of generalized Euler-Lagrange conditions for state constrained optimal control problems. J Convex Anal 23: 291–311.
    [11] Cannarsa P, Capuani R (2017) Existence and uniqueness for Mean Field Games with state constraints. Available from: http://arxiv.org/abs/1711.01063.
[12] Cannarsa P, Castelpietra M, Cardaliaguet P (2008) Regularity properties of attainable sets under state constraints. Series on Advances in Mathematics for Applied Sciences 76: 120–135. doi: 10.1142/9789812776075_0006
[13] Cardaliaguet P (2015) Weak solutions for first order mean field games with local coupling. Analysis and geometry in control theory and its applications 11: 111–158.
    [14] Cardaliaguet P, Mészáros AR, Santambrogio F (2016) First order mean field games with density constraints: pressure equals price. SIAM J Control Optim 54: 2672–2709. doi: 10.1137/15M1029849
    [15] Cesari L (1983) Optimization–Theory and Applications: Problems with Ordinary Differential Equations, Vol 17, Springer-Verlag, New York.
[16] Clarke FH (1983) Optimization and Nonsmooth Analysis, John Wiley & Sons, New York.
[17] Dubovitskii AY, Milyutin AA (1964) Extremum problems with certain constraints. Dokl Akad Nauk SSSR 149: 759–762.
    [18] Frankowska H (2006) Regularity of minimizers and of adjoint states in optimal control under state constraints. J Convex Anal 13: 299.
    [19] Frankowska H (2009) Normality of the maximum principle for absolutely continuous solutions to Bolza problems under state constraints. Control Cybern 38: 1327–1340.
    [20] Frankowska H (2010) Optimal control under state constraints. Proceedings of the International Congress of Mathematicians 2010 (ICM 2010) (In 4 Volumes) Vol. I: Plenary Lectures and Ceremonies Vols. II–IV: Invited Lectures, 2915–2942.
    [21] Galbraith GN and Vinter RB (2003) Lipschitz continuity of optimal controls for state constrained problems. SIAM J Control Optim 42: 1727–1744. doi: 10.1137/S0363012902404711
[22] Hager WW (1979) Lipschitz continuity for constrained processes. SIAM J Control Optim 17: 321–338.
[23] Huang M, Caines PE, Malhamé RP (2007) Large-population cost-coupled LQG problems with nonuniform agents: Individual-mass behavior and decentralized ϵ-Nash equilibria. IEEE T Automat Contr 52: 1560–1571. doi: 10.1109/TAC.2007.904450
[24] Huang M, Malhamé RP, Caines PE (2006) Large population stochastic dynamic games: closed-loop McKean-Vlasov systems and the Nash certainty equivalence principle. Communications in Information and Systems 6: 221–252. doi: 10.4310/CIS.2006.v6.n3.a5
    [25] Lasry JM, Lions PL (2006) Jeux à champ moyen. I – Le cas stationnaire. CR Math 343: 619–625.
    [26] Lasry JM, Lions PL (2006) Jeux à champ moyen. II – Horizon fini et contrôle optimal. CR Math 343: 679–684.
    [27] Lasry JM, Lions PL (2007) Mean field games. Jpn J Math 2: 229–260. doi: 10.1007/s11537-007-0657-8
    [28] Lions PL (1985) Optimal control and viscosity solutions. Recent mathematical methods in dynamic programming, Springer, Berlin, Heidelberg, 94–112.
    [29] Loewen P, Rockafellar RT (1991) The adjoint arc in nonsmooth optimization. T Am Math Soc 325: 39–72. doi: 10.1090/S0002-9947-1991-1036004-7
    [30] Kakutani S (1941) A generalization of Brouwer's fixed point theorem. Duke Math J 8: 457–459. doi: 10.1215/S0012-7094-41-00838-4
    [31] Malanowski K (1978) On regularity of solutions to optimal control problems for systems with control appearing linearly. Archiwum Automatyki i Telemechaniki 23: 227–242.
    [32] Milyutin AA (2000) On a certain family of optimal control problems with phase constraint. Journal of Mathematical Sciences 100: 2564–2571. doi: 10.1007/BF02673842
    [33] Rampazzo F, Vinter RB (2000) Degenerate optimal control problems with state constraints. SIAM J Control Optim 39: 989–1007. doi: 10.1137/S0363012998340223
[34] Soner HM (1986) Optimal control with state-space constraint I. SIAM J Control Optim 24: 552–561. doi: 10.1137/0324032
    [35] Vinter RB (2000) Optimal control. Birkhäuser Boston, Basel, Berlin.
  • © 2019 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)